\section{Introduction}
Throughout this paper, we denote the set of nonnegative integers (resp. positive integers)
by \(\mathbb{N}\) (resp. \(\mathbb{Z}^{+}\)). We write the integral and fractional parts of a real number
\(x\) by \(\lfloor x \rfloor\) and \(\{x\}\), respectively. Moreover, \(\lceil x \rceil\)
is the minimal integer not less than \(x\). We use the Vinogradov symbols \(\gg\) and \(\ll\),
as well as the Landau symbols \(O,o\) with their regular meanings.
Finally, \(f\sim g\) means that the ratio \(f/g\) tends to \(1\). \par
In what follows, we investigate the arithmetical properties of the values of power series \(f(X)\)
at algebraic points. For simplicity, we first consider the case where \(f(X)\) has the form
\[f(X)=\sum_{m=0}^{\infty} X^{w(m)},\]
where \((w(m))_{m=0}^{\infty}\) is a sequence of nonnegative integers satisfying
\(w(m)<w(m+1)\) for any sufficiently large \(m\). We call \(f(X)\) a gap series
if
\[\lim_{m\to\infty}\frac{w(m+1)}{w(m)}=\infty.\]
We say that \(f(X)\) is a lacunary series if
\[\liminf_{m\to\infty}\frac{w(m+1)}{w(m)}>1.\]
Note that if \(f(X)\) is a lacunary series, then there exists a positive real number \(\delta\) such that
\[w(m)>(1+\delta)^m\]
for any sufficiently large \(m\). \par
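For completeness, we sketch the last assertion: choose \(\varepsilon>0\) and \(m_0\in\mathbb{N}\) such that
\(w(m+1)\geq(1+\varepsilon)w(m)\) and \(w(m)\geq 1\) for any \(m\geq m_0\). Then
\[w(m)\geq (1+\varepsilon)^{m-m_0}w(m_0)>(1+\delta)^m\]
for every \(0<\delta<\varepsilon\) and any sufficiently large \(m\). \par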
In the rest of this section, suppose that \(\alpha\) is an algebraic number with \(0<|\alpha|<1\).
In paper \cite{bug2}, Bugeaud posed a problem on the transcendence of the values of
power series \(f(X)\) as follows: If \((w(m))_{m=0}^{\infty}\) increases sufficiently rapidly, then
\(\sum_{m=0}^{\infty}\alpha^{w(m)}\) is transcendental. \par
Corvaja and Zannier \cite{cor} showed that if \(f(X)=\sum_{m=0}^{\infty} X^{w(m)}\) is a lacunary
series, then \(\sum_{m=0}^{\infty}\alpha^{w(m)}\) is transcendental. For instance, let \(x, y\) be
real numbers with \(x>0\) and \(y>1\). Then two numbers
\[\sum_{m=0}^{\infty}\alpha^{\lfloor x (m!)\rfloor}
,
\sum_{m=0}^{\infty}\alpha^{\lfloor y^m \rfloor}
\]
are transcendental. \par
Adamczewski \cite{ada1} improved the result above in the case of \(\alpha=\beta^{-1}\), where
\(\beta\) is a Pisot or Salem number. Recall that Pisot numbers are algebraic integers greater than \(1\)
whose conjugates other than themselves have absolute values less than \(1\).
Note that any rational integer greater than \(1\) is a Pisot number. Salem numbers are
algebraic integers greater than \(1\) whose conjugates other than themselves have moduli
at most \(1\), with at least one conjugate of modulus exactly \(1\).
Adamczewski \cite{ada1} showed that if
\[\liminf_{m\to\infty}\frac{w(m+1)}{w(m)}>1,\]
then \(\sum_{m=0}^{\infty}\beta^{-w(m)}\) is transcendental for any Pisot or Salem number \(\beta\).
\par
We now introduce known results on the algebraic independence of certain lacunary series at
fixed algebraic points. First we consider the case where \(f(X)\) is a gap series. Durand \cite{dur}
showed that if \(\alpha\) is a real algebraic number with \(0<\alpha<1\), then the
continuum set
\begin{align}
\left\{\left. \sum_{m=0}^{\infty} \alpha^{\lfloor x (m!)\rfloor}\right| x\in\mathbb{R},
x>0\right\}
\label{eqn:int1}
\end{align}
is algebraically independent. Moreover, Shiokawa \cite{shi} gave a criterion for the algebraic independence
of the values of certain gap series. Using his criterion, we deduce for a general algebraic number
\(\alpha\) with \(0<|\alpha|<1\) that the set (\ref{eqn:int1}) is algebraically independent. \par
Next, we consider the case where \(f(X)\) is not a gap series.
Using Mahler's method for algebraic independence, Nishioka \cite{nis} proved that the set
\begin{align*}
\left\{\left. \sum_{m=0}^{\infty} \alpha^{k^m}\right| k=2,3,\ldots\right\}
\end{align*}
is algebraically independent. Moreover, Tanaka \cite{tan} showed that if
positive real numbers \(w_1,\ldots,w_r\) are linearly independent over \(\mathbb{Q}\), then
the set
\begin{align*}
\left\{\left. \sum_{m=0}^{\infty} \alpha^{\lfloor w_i k^m\rfloor}\right| i=1,\ldots,r, \ k=2,3,\ldots\right\}
\end{align*}
\end{align*}
is algebraically independent. \par
On the other hand, it is generally difficult to study algebraic independence
in the case where \(f(X)\) is not lacunary.
In Section 2 we review known results on the criteria for transcendence of
the value \(\sum_{m=0}^{\infty}\beta^{-w(m)}\), where \(\beta\) is a Pisot or Salem number and
\((w(m))_{m=0}^{\infty}\) is a certain sequence of nonnegative integers with
\begin{align*}
\lim_{m\to\infty}\frac{w(m+1)}{w(m)}=1.
\end{align*}
In Section 3 we give the main results on the algebraic independence of real numbers applicable to
\[\sum_{m=1}^{\infty}\beta^{-\lfloor m^{\log m}\rfloor},
\sum_{m=3}^{\infty}\beta^{-\lfloor m^{\log \log m}\rfloor}.\]
In the same section we also investigate the linear independence of real numbers applicable to
\(\sum_{m=0}^{\infty} \beta^{-\lfloor m^{\rho}\rfloor}\) for a real number \(\rho>1\).
The main criteria for algebraic independence and linear independence, which are used to prove the main results,
are stated in Section 4.
Because our criteria are flexible, the proofs of algebraic
independence and linear independence require no functional equations.
We prove the main results in Section 5 and the criteria in Section 6.
\section{Transcendental results related to the numbers of nonzero digits}
In this section we review criteria for the transcendence of the value \(\sum_{n=0}^{\infty} t_n \beta^{-n}\),
where \((t_n)_{n=0}^{\infty}\) is a bounded sequence of nonnegative integers and \(\beta\) is a Pisot or
Salem number. First we consider the case where \(\beta=b\) is an integer greater than \(1\).
We denote the base-\(b\) expansion of a real number \(\eta\) by
\[\eta=\sum_{n=0}^{\infty}s_n^{(b)}(\eta) b^{-n},\]
where \(s_0^{(b)}(\eta)=\lfloor \eta\rfloor\)
and \(s_n^{(b)}(\eta)\in \{0,1,\ldots,b-1\}\) for any positive integer \(n\).
We may assume that \(s_n^{(b)}(\eta)\leq b-2\) for infinitely many \(n\)'s.
For any positive integer \(N\), put
\[\lambda_b(\eta;N):=\mbox{Card}\{n\in\mathbb{N}\mid n< N, s_n^{(b)}(\eta)\ne 0\},\]
where Card denotes the cardinality. \par
Borel \cite{bor} conjectured for each integral base \(b\geq 2\) that any algebraic irrational number is
normal in base-\(b\), which is still an open problem. For any real number \(\rho>1\), put
\[\gamma(\rho;X):=\sum_{m=0}^{\infty}X^{\lfloor m^{\rho}\rfloor}.\]
If Borel's conjecture is true, then \(\gamma(\rho;b^{-1})\) is transcendental because
\(\gamma(\rho;b^{-1})\) is a non-normal irrational number in base-\(b\).
However, the transcendence of such values is
not known except in the case \(\rho=2\). If \(\rho=2\), then
Duverney, Nishioka, Nishioka, Shiokawa \cite{duv} and Bertrand \cite{ber} independently proved for
any algebraic number \(\alpha\) with \(0<|\alpha|<1\) that
\(\gamma(2;\alpha)\) is transcendental. \par
Bailey, Borwein, Crandall, and Pomerance \cite{bai} gave a criterion for the transcendence of
real numbers, using lower bounds for the numbers of nonzero digits in the binary expansions of
algebraic irrational numbers. Let \(\eta\) be an algebraic irrational number of degree \(D\).
Bailey, Borwein, Crandall, and Pomerance \cite{bai} showed that there exist positive constants
\(C_1(\eta)\) and \(C_2(\eta)\), depending only on \(\eta\), satisfying
\begin{align*}
\lambda_2(\eta;N)\geq C_1(\eta) N^{1/D}
\end{align*}
for any integer \(N\) with \(N\geq C_2(\eta)\). Note that \(C_1(\eta)\) is effectively computable but
\(C_2(\eta)\) is not.
For any integral base \(b\geq 2\),
Adamczewski, Faverjon \cite{ada2} and Bugeaud \cite{bug1} gave effective versions
of lower bounds for \(\lambda_b(\eta;N)\)
as follows: There exist effectively computable positive constants \(C_3(b,\eta)\)
and \(C_4(b,\eta)\), depending only on \(b\) and \(\eta\), satisfying
\begin{align}
\lambda_b(\eta;N)\geq C_3(b,\eta) N^{1/D}
\label{eqn:tra2}
\end{align}
for any integer \(N\) with \(N\geq C_4(b,\eta)\). Using (\ref{eqn:tra2}), we obtain for any real number
\(\rho>1\) that \(\gamma(\rho;b^{-1})\) is not an algebraic number of degree less than \(\rho\). In fact,
\(\gamma(\rho;b^{-1})\) is an irrational number satisfying
\[\lambda_b\bigl(\gamma(\rho;b^{-1});N\bigr)\sim N^{1/{\rho}}\]
as \(N\) tends to infinity. Thus, (\ref{eqn:tra2}) does not hold if \(D<\rho\). \par
By (\ref{eqn:tra2}), we also deduce a criterion for the transcendence of real numbers as follows:
Let \(\eta\) be a positive irrational number. Suppose for any positive real number \(\varepsilon\)
that
\begin{align}
\liminf_{N\to\infty}\frac{\lambda_b(\eta;N)}{N^{\varepsilon}}=0.
\label{eqn:tra3}
\end{align}
Then \(\eta\) is a transcendental number.
Note that the criterion above was essentially obtained by
Bailey, Borwein, Crandall, and Pomerance \cite{bai}.
Note that if \(\sum_{m=0}^{\infty} X^{w(m)}\) is lacunary, then \(\eta=\sum_{m=0}^{\infty} b^{-w(m)}\)
satisfies (\ref{eqn:tra3}) by
\[\lambda_b(\eta;N)=O(\log N).\]
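Indeed, under the lacunarity condition we have \(w(m)>(1+\delta)^m\) for any sufficiently large \(m\), so the number of exponents \(w(m)\) smaller than \(N\) is at most
\[\frac{\log N}{\log(1+\delta)}+O(1),\]
and the base-\(b\) expansion of \(\eta\) differs from the representation \(\sum_{m} b^{-w(m)}\) in at most finitely many digits.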
We give another example of transcendental numbers. For any real numbers \(y>0\) and
\(R\geq 1\), we put
\[\varphi(y;R):=\exp\left((\log R)^{1+y}\right)=R^{(\log R)^y}.\]
Moreover, we set
\[\xi(y;X):=1+\sum_{m=1}^{\infty} X^{\lfloor \varphi(y;m)\rfloor}.\]
Note that \(\xi(y;X)\) is not lacunary by
\[\lim_{m\to\infty}\frac{\varphi(y;m+1)}{\varphi(y;m)}=1.\]
We get that \(\eta:=\xi(y;b^{-1})\) is transcendental for any
integer \(b\geq 2\) because \(\eta\) satisfies (\ref{eqn:tra3}). \par
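For instance, the number of exponents \(\lfloor \varphi(y;m)\rfloor\) smaller than \(N\) is asymptotically
\[\exp\left((\log N)^{1/(1+y)}\right)=o\left(N^{\varepsilon}\right)\]
for every \(\varepsilon>0\), which yields (\ref{eqn:tra3}). \par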
In what follows, we consider the case where
\(\beta\) is a general Pisot or Salem number.
We introduce results in \cite{kan3} related to the \(\beta\)-expansion of algebraic numbers.
For any formal power series \(f(X)=\sum_{n=0}^{\infty} t_n X^n\), we put
\[S(f):=\{n\in\mathbb{N}\mid t_n\ne 0\}. \]
Moreover, for any nonempty set \(\mathcal{A}\) of nonnegative integers, we set
\[\lambda(\mathcal{A};N):=\mbox{Card}(\mathcal{A}\cap [0,N)).\]
We denote the degree of a field extension \(L/K\) by \([L:K]\).
\begin{thm}[\cite{kan3}]
Let \(A\) be a positive integer and
let \(f(X)=\sum_{n=0}^{\infty}t_n X^n\) be a power series with integral coefficients.
Assume that \(0\leq t_n\leq A\) for any nonnegative integer \(n\) and that there exist
infinitely many \(n\)'s satisfying \(t_n\ne 0\). Let \(\beta\) be a Pisot or Salem number.
Suppose that \(\eta=f(\beta^{-1})\) is an algebraic number with
\([\mathbb{Q}(\beta,\eta):\mathbb{Q}(\beta)]=D\). Then there exist effectively computable positive
constants \(C_5(A,\beta,\eta)\) and \(C_6(A,\beta,\eta)\), depending only on \(A,\beta\) and \(\eta\)
satisfying
\[\lambda\bigl(S(f);N\bigr)\geq C_5(A,\beta,\eta)\left(\frac{N}{\log N}\right)^{1/D}\]
for any integer \(N\) with \(N\geq C_6(A,\beta,\eta)\).
\label{thm:2-1}
\end{thm}
In the rest of this section, let \(\beta\) be a Pisot or Salem number.
Using Theorem \ref{thm:2-1}, we obtain for any real number \(\rho>1\) that
\[\left[\mathbb{Q}\bigl(\gamma(\rho;\beta^{-1}),\beta\bigr):\mathbb{Q}(\beta)\right]\geq \lceil \rho\rceil\]
by
\begin{align}
\lambda(S(\gamma(\rho;X));N)\sim N^{1/\rho}
\label{eqn:tra4}
\end{align} as \(N\) tends to infinity. \par
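For instance, (\ref{eqn:tra4}) holds because the exponents \(\lfloor m^{\rho}\rfloor\) smaller than \(N\)
correspond to \(m\leq N^{1/\rho}+O(1)\) and are pairwise distinct for any sufficiently large \(m\).
Consequently, if \(D:=\left[\mathbb{Q}\bigl(\gamma(\rho;\beta^{-1}),\beta\bigr):\mathbb{Q}(\beta)\right]\)
satisfied \(D<\rho\), then Theorem \ref{thm:2-1} (applied with \(A=1\)) would give
\[\lambda\bigl(S(\gamma(\rho;X));N\bigr)\gg\left(\frac{N}{\log N}\right)^{1/D},\]
which contradicts (\ref{eqn:tra4}) since \(1/D>1/\rho\). \par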
Note that Theorem \ref{thm:2-1} is applicable to the study of the nonzero digits in the \(\beta\)-expansions
of algebraic numbers. We recall the \(\beta\)-expansion introduced by R\'{e}nyi \cite{ren} in 1957.
Let \(T_{\beta}:[0,1)\to[0,1)\) be the \(\beta\)-transformation defined by
\(T_{\beta}(x)=\{\beta x\}\) for \(x\in[0,1)\). Then the \(\beta\)-expansion of a real number
\(\eta\in[0,1)\) is given by
\[\eta=\sum_{n=1}^{\infty} s_n^{(\beta)}(\eta)\beta^{-n},\]
where \(s_n^{(\beta)}(\eta)=\lfloor \beta T_{\beta}^{n-1}(\eta)\rfloor \) for any \(n\geq 1\).
Note that \(0\leq s_n^{(\beta)}(\eta)\leq \lfloor\beta\rfloor\) for any \(n\geq 1\). Put
\[\lambda_{\beta}(\eta;N):=\mbox{Card}\{n\in\mathbb{Z}^{+}\mid n\leq N, s_n^{(\beta)}(\eta)\ne 0\}\]
for any positive integer \(N\). Applying Theorem \ref{thm:2-1} with \(A=\lfloor\beta\rfloor\),
we deduce that if \(\eta\in[0,1)\) is an algebraic number
with \([\mathbb{Q}(\beta,\eta):\mathbb{Q}(\beta)]=D\), then
\[\lambda_{\beta}(\eta;N)\gg\left(\frac{N}{\log N}\right)^{1/D}\]
for any sufficiently large integer \(N\). \par
Using Theorem \ref{thm:2-1}, we also deduce a criterion for the transcendence of real numbers
as follows:
Let \(f(X)\) be a power series whose coefficients are bounded nonnegative integers.
Suppose that \(f(X)\) is not a polynomial and that
\[\liminf_{N\to\infty}\frac{\lambda(S(f);N)}{N^{\varepsilon}}=0\]
for any positive real number \(\varepsilon\). Then \(f(\beta^{-1})\) is transcendental. Note that
the criterion above was already obtained in \cite{kan2}
and that the criterion is applicable even if the representation
\(\sum_{n=0}^{\infty} t_n \beta^{-n}\) does not coincide with the \(\beta\)-expansion
of \(f(\beta^{-1})\). In the same way as the case where
\(\beta=b\geq 2\) is an integer, we obtain for any positive real number \(y\) that
\(\xi(y;\beta^{-1})\) is transcendental. \par
At the end of this section we introduce a corollary of Theorem \ref{thm:2-1}, which we need to
prove our criteria for linear independence.
\begin{cor}
Let \(A\) be a positive integer and \(f(X)\) a nonpolynomial power series whose coefficients are
bounded nonnegative integers.
Assume that there exists a positive real number \(\delta\) satisfying
\[\lambda\bigl(S(f);R\bigr)<R^{-\delta+1/A}\]
for infinitely many integers \(R\geq 0\).
Then, for any Pisot or Salem number \(\beta\), we have
\[\left[\mathbb{Q}\bigl(f(\beta^{-1}),\beta\bigr):\mathbb{Q}(\beta)\right]\geq A+1.\]
\label{cor:tra1}
\end{cor}
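Corollary \ref{cor:tra1} can be deduced from Theorem \ref{thm:2-1} as follows. If we had
\(D:=\left[\mathbb{Q}\bigl(f(\beta^{-1}),\beta\bigr):\mathbb{Q}(\beta)\right]\leq A\), then
Theorem \ref{thm:2-1} (applied with the coefficient bound of \(f(X)\)) would give
\[\lambda\bigl(S(f);R\bigr)\gg\left(\frac{R}{\log R}\right)^{1/D}\geq\left(\frac{R}{\log R}\right)^{1/A}\]
for any sufficiently large \(R\), which is incompatible with \(\lambda\bigl(S(f);R\bigr)<R^{-\delta+1/A}\)
holding for infinitely many \(R\).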
\section{Main results}
\subsection{Results on algebraic independence}
We use the same notation as Section 2.
\begin{thm}
Let \(\beta\) be a Pisot or Salem number. Then the continuum set
\begin{align}
\{\xi(y;\beta^{-1})\mid y\in\mathbb{R}, \ y\geq 1\}
\label{eqn:mai1}
\end{align}
is algebraically independent.
\label{thm:mai1}
\end{thm}
Note that if \(\beta=b\) is an integer greater than 1, then the algebraic independence of (\ref{eqn:mai1})
was proved in \cite{kan1}. However, the algebraic independence of the set
\[\{\xi(y;b^{-1})\mid y\in\mathbb{R}, \ y>0\}\]
is unknown. \par
On the other hand, considering the algebraic independence of two values, we obtain
more detailed results.
Set
\[{\Theta}:=\{(y,z)\in\mathbb{R}^2\mid y>0\mbox{, or }y=0\mbox{ and }z>0\}.\]
Moreover, for any real number \(R\geq 3\) and \((y,z)\in {\Theta}\), we put
\begin{align*}
\varphi(y,z;R)& :=\exp \left((\log R)^{1+y} (\log\log R)^z\right)\\
&=R^{(\log R)^y (\log\log R)^z}
\end{align*}
and
\begin{align*}
\xi(y,z;X):=1+\sum_{m=3}^{\infty} X^{\lfloor \varphi(y,z;m)\rfloor}.
\end{align*}
\begin{thm}
Let \((y_1,z_1)\) and \((y_2,z_2)\) be distinct elements in \({\Theta}\). Then the two values
\(\xi(y_1,z_1;\beta^{-1})\) and \(\xi(y_2,z_2;\beta^{-1})\) are algebraically independent
for any Pisot or Salem number \(\beta\).
\label{thm:mai2}
\end{thm}
Considering the case of \(z_1=z_2=0\) in Theorem \ref{thm:mai2}, we get the following:
\begin{cor}
Let \(y_1\) and \(y_2\) be distinct positive real numbers. Then the two values
\(\xi(y_1;\beta^{-1})\) and \(\xi(y_2;\beta^{-1})\) are algebraically independent
for any Pisot or Salem number \(\beta\).
\label{cor:mai1}
\end{cor}
In the case where \(\beta=b\) is an integer greater than 1, the algebraic independence of the two values
\(\xi(y_1;b^{-1})\) and \(\xi(y_2;b^{-1})\) was obtained in \cite{kan1}. \par
Applying Theorem \ref{thm:mai2} with \((y_1,z_1)=(1,0)\) and \((y_2,z_2)=(0,1)\), we deduce the following:
\begin{cor}
For any Pisot or Salem number \(\beta\) the two values
\[\sum_{m=1}^{\infty} \beta^{-\lfloor m^{\log m}\rfloor},
\sum_{m=3}^{\infty} \beta^{-\lfloor m^{\log\log m}\rfloor}\]
are algebraically independent.
\label{cor:mai2}
\end{cor}
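In fact,
\[\varphi(1,0;m)=\exp\left((\log m)^{2}\right)=m^{\log m}, \qquad
\varphi(0,1;m)=\exp\left((\log m)(\log\log m)\right)=m^{\log\log m},\]
and the two sums in Corollary \ref{cor:mai2} differ from \(\xi(1,0;\beta^{-1})\) and \(\xi(0,1;\beta^{-1})\)
only by elements of \(\mathbb{Q}(\beta)\), which does not affect algebraic independence.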
At the end of this subsection, we give a result on the algebraic independence of the values of \(\xi(y,z;X)\)
and of lacunary series.
\begin{thm}
Let \((y,z)\in {\Theta}\) and let \(x\) be a real number greater than 1. Then,
\(\xi(y,z;\beta^{-1})\) and \(\sum_{m=0}^{\infty} \beta^{-\lfloor x^m\rfloor}\) are
algebraically independent for any Pisot or Salem number \(\beta\).
\label{thm:abc}
\end{thm}
\subsection{Results on linear independence}
Let \(\mathcal{F}\) be the set of nonpolynomial power series
\(g(X)\) satisfying the following three assumptions:
\begin{enumerate}
\item The coefficients of \(g(X)\) are bounded
nonnegative integers.
\item For an arbitrary positive real number \(\varepsilon\), we have
\[\lambda(S(g);R)=o(R^{\varepsilon})\]
as \(R\) tends to infinity.
\item There exists a positive constant \(C\) such that
\[[R, CR]\cap S(g)\ne \emptyset\]
for any sufficiently large \(R\).
\end{enumerate}
In order to state our results, we give a lemma on the zeros of certain polynomials.
For any positive integer \(k\), put
\[G_k(X):=(1-X)^k+(k-1)X-1.\]
\begin{lem}
Suppose that \(k\geq 3\). Then the following holds: \\
\(\mathrm{1)}\) There exists a unique zero \(\sigma_k\) of \(G_k(X)\)
on the interval \((0,1)\). \\
\(\mathrm{2)}\) Let \(x\) be a real number with \(0<x<1\). Then
\(G_k(x)<0\) (resp. \(G_k(x)>0\)) if and only if \(x<\sigma_k\) (resp. \(x>\sigma_k\)). \\
\(\mathrm{3)}\) \((\sigma_k)_{k=3}^{\infty}\) is strictly decreasing.
\label{lem:mai1}
\end{lem}
\begin{proof}
Observe that \(G_k'(X)=-k(1-X)^{k-1}+k-1\) is monotone increasing on the interval \((0,1)\)
and that \(G_k'(X)\) has a unique zero \(\widetilde{\sigma_k}\) on \((0,1)\).
Thus, \(G_k(X)\) is monotonically decreasing on \((0,\widetilde{\sigma_k}]\) and
monotonically increasing on \((\widetilde{\sigma_k},1)\). Hence, the first and second statements of the
lemma follow from \(G_k(0)=0\) and \(G_k(1)=k-2>0\). \par
Next, we assume that \(k\geq 4\). Using
\[G_{k-1}(\sigma_{k-1})=(1-\sigma_{k-1})^{k-1}+(k-2)\sigma_{k-1}-1=0,\]
that is, \((1-\sigma_{k-1})^{k-1}=1-(k-2)\sigma_{k-1}\), we get
\[G_{k}(\sigma_{k-1})=(1-\sigma_{k-1})\bigl(1-(k-2)\sigma_{k-1}\bigr)+(k-1)\sigma_{k-1}-1=(k-2)\sigma_{k-1}^2>0. \]
Hence, we obtain \(\sigma_k<\sigma_{k-1}\) by the second statement of the lemma.
\end{proof}
\begin{thm}
Let \(A\) be a positive integer and \(\rho\) a real number.
Suppose that
\begin{align}
\left\{
\begin{array}{cc}
\rho> A & \mbox{ if }A\leq 3,\\
\rho> \sigma_A^{-1} & \mbox{ if }A\geq 4.
\end{array}
\right.
\label{eqn:mai2}
\end{align}
Then, for any \(g(X)\in \mathcal{F}\) and any
Pisot or Salem number \(\beta\), the set
\[\{\gamma(\rho;\beta^{-1})^{k_1} g(\beta^{-1})^{k_2}\mid k_1,k_2\in\mathbb{N}, k_1\leq A\}\]
is linearly independent over \(\mathbb{Q}(\beta)\).
\label{thm:mai3}
\end{thm}
We give numerical examples of \(\sigma_k^{-1}\) (\(k\geq 4\)) as follows:
\[\sigma_4^{-1}=5.278\ldots , \
\sigma_5^{-1}=8.942\ldots , \ \sigma_6^{-1}=13.60\ldots.\]
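For instance, \(G_4(X)=(1-X)^4+3X-1=X\left(X^3-4X^2+6X-1\right)\), so \(\sigma_4\) is the unique zero of
\(X^3-4X^2+6X-1\) in \((0,1)\); numerically, \(\sigma_4=0.1894\ldots\), which gives the value
\(\sigma_4^{-1}=5.278\ldots\) above.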
\begin{cor}
Let \(A,\rho\) be as in Theorem \ref{thm:mai3}. \\
\(\mathrm{1)}\) For any real number \(y>1\) and any Pisot or Salem number \(\beta\), the set
\[\left\{
\left.\gamma(\rho;\beta^{-1})^{k_1} \left(\sum_{m=0}^{\infty}\beta^{-\lfloor y^m\rfloor}\right)^{k_2} \ \right| \
k_1,k_2\in\mathbb{N}, k_1\leq A\right\}\]
is linearly independent over \(\mathbb{Q}(\beta)\). \\
\(\mathrm{2)}\) For any \((y,z)\in {\Theta}\) and any Pisot or Salem number \(\beta\), the set
\[\{\gamma(\rho;\beta^{-1})^{k_1} \xi(y,z;\beta^{-1})^{k_2}\mid k_1,k_2\in\mathbb{N}, k_1\leq A\}\]
is linearly independent over \(\mathbb{Q}(\beta)\).
\label{cor:mai3}
\end{cor}
Using the asymptotic behavior of the sequence \((\sigma_m)_{m=3}^{\infty}\), we deduce the following:
\begin{cor}
Let \(\varepsilon\) be an arbitrary positive real number. Then there exists an effectively
computable positive constant \(A_0(\varepsilon)\), depending only on \(\varepsilon\)
satisfying the following:
Let \(A\) be an integer with \(A\geq A_0(\varepsilon)\) and \(\rho\) a real number with
\(\rho>(\varepsilon+1/2)A^2\). Then, for any \(g(X)\in \mathcal{F}\) and any
Pisot or Salem number \(\beta\), the set
\[\{\gamma(\rho;\beta^{-1})^{k_1} g(\beta^{-1})^{k_2}\mid k_1,k_2\in\mathbb{N}, k_1\leq A\}\]
is linearly independent over \(\mathbb{Q}(\beta)\).
\label{cor:mai4}
\end{cor}
\section{Criteria for algebraic independence and linear independence}
Let \(k\) be a nonnegative integer and \(f(X)\in\mathbb{Z}[[X]]\backslash\mathbb{Z}[X]\).
We denote the \(k\)-fold Minkowski sum of \(S(f)\) by
\begin{align*}
k S(f):=
\left\{
\begin{array}{cc}
\{0\} & (k=0),\\
\{s_1+\cdots+s_k\mid s_1,\ldots,s_k\in S(f)\} & (k\geq 1).
\end{array}
\right.
\end{align*}
Moreover, for any \((k_1,\ldots,k_r)\in\mathbb{N}^r\) and
\(f_1(X),\ldots,f_r(X)\in\mathbb{Z}[[X]]\backslash\mathbb{Z}[X]\), we set
\begin{align*}
\sum_{h=1}^r k_h S(f_h):=\{s_1+\cdots+s_r\mid s_h\in k_h S(f_h) \mbox{ for }h=1,\ldots,r\}.
\end{align*}
\begin{rem}
\begin{rm}
Suppose that \(0\in S(f_i)\) for \(i=1,\ldots,r\). Then, for any
\((k_1,\ldots,k_r)\in\mathbb{N}^r\) and \((k_1',\ldots,k_r')\in\mathbb{N}^r\) with
\(k_i\geq k_i'\) for any \(i=1,\ldots,r\), we have
\[\sum_{h=1}^r k_h S(f_h)\supset \sum_{h=1}^r k_h' S(f_h).\]
\label{rem:cri1}
\end{rm}
\end{rem}
Let \(\mathcal{A}\) be a nonempty set of nonnegative integers and
\(R\) a real number with \(R>\min \mathcal{A}\). Then we put
\[\theta(R;\mathcal{A}):=\max\{n\in\mathcal{A}\mid n<R\}.\]
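For example, if \(\mathcal{A}=\{0,3,10\}\) and \(R=7\), then \(\theta(R;\mathcal{A})=3\).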
\begin{thm}
Let \(A,r\) be integers with \(A\geq 1\) and \(r\geq 2\).
Let \(f_i(X)=\sum_{n=0}^{\infty} t_i(n) X^n (i=1,\ldots,r)\) be nonpolynomial power series with
integral coefficients.
We assume that \(f_1(X),\ldots,f_r(X)\) satisfy the following four assumptions:
\begin{enumerate}
\item There exists a positive constant \(C_7\) satisfying
\[0\leq t_i(n)\leq C_7\]
for any \(i=1,\ldots, r\) and nonnegative integer \(n\).
\item Let \(k_1,\ldots,k_r\) be nonnegative integers.
Suppose that
\begin{align}
\left\{
\begin{array}{cc}
k_1\leq A-1 & \mbox{ if }r=2,\\
k_1\leq A & \mbox{ if }r\geq 3.
\end{array}
\right.
\label{eqn:cri1}
\end{align}
Then
\begin{align}
&R-\theta\left(R;\sum_{h=1}^{r-2} k_h S(f_h)+(1+k_{r-1})S(f_{r-1})\right)\nonumber\\
&\hspace{35mm}=o\left(\frac{R}{\prod_{h=1}^r \lambda(S(f_h);R)^{k_h}}\right)
\label{ppp}
\end{align}
as \(R\) tends to infinity.
\item There exists a positive real number \(\delta\) satisfying
\begin{align*}
\lambda(S(f_1);R)=o\left(R^{-\delta+1/A}\right)
\end{align*}
as \(R\) tends to infinity. Moreover, for any \(i=2,\ldots,r\) and any positive real number \(\varepsilon\), we have
\[\lambda(S(f_i);R)=o\Bigl(\lambda\bigl(S(f_{i-1});R\bigr)^{\varepsilon}\Bigr)\]
as \(R\) tends to infinity.
\item There exist positive constants \(C_8,C_9\) such that
\[[R, C_8 R]\cap S(f_r)\ne \emptyset\]
for any real number \(R\) with \(R\geq C_9\).
\end{enumerate}
Then, for any Pisot or Salem number \(\beta\), the set
\[\{f_1(\beta^{-1})^{k_1}f_2(\beta^{-1})^{k_2}\cdots f_r(\beta^{-1})^{k_r}\mid
k_1,k_2,\ldots,k_r\in\mathbb{N}, k_1\leq A\}\]
is linearly independent over \(\mathbb{Q}(\beta)\).
\label{thm:cri1}
\end{thm}
Let \(a(R)\) be a real-valued function defined on an interval \([R_0,\infty)\) with \(R_0\in\mathbb{R}\).
We say that
\(a(R)\) is ultimately increasing if
\(a(R)\) is strictly increasing for any sufficiently large real number \(R\).
Similarly, we say that \((a(m))_{m=m_0}^{\infty}\) is ultimately increasing if this sequence is strictly
increasing for any sufficiently large integer \(m\).
\begin{thm}
Let \(a(R),u(R)\) be ultimately increasing functions defined on \([m_0,\infty)\) with \(m_0\in\mathbb{N}\).
Assume that \((\lfloor a(m)\rfloor)_{m=m_0}^{\infty}\) and
\((\lfloor u(m)\rfloor)_{m=m_0}^{\infty}\) are also
ultimately increasing.
Let \(b(R),v(R)\) be the inverse functions of \(a(R),u(R)\), respectively, for any sufficiently large \(R\).
Assume that \(a(R)\) satisfies the following two assumptions:
\begin{enumerate}
\item \((\log a(R))/(\log R)\) is ultimately increasing and
\begin{align}\lim_{R\to\infty}\frac{\log a(R)}{\log R}=\infty.
\label{qqq}
\end{align}
\item The function \(a(R)\) is differentiable. Moreover, for an arbitrary positive real number \(\varepsilon\),
there exists a positive constant \(C_{10}(\varepsilon)\), depending only on \(\varepsilon\), such that
\[(\log a(R))'<R^{-1+\varepsilon}\]
for any real number \(R\) with \(R\geq C_{10}(\varepsilon)\).
\end{enumerate}
Moreover, suppose that \(u(R)\) fulfills the following two assumptions:
\begin{enumerate}
\item There exists a positive constant \(C_{11}\) such that
\[\frac{u(R+1)}{u(R)}<C_{11}\]
for any sufficiently large real number \(R\).
\item We have
\begin{align}
\lim_{R\to\infty}\frac{\log b(R)}{\log v(R)}=\infty.
\label{eqn:cri3}
\end{align}
\end{enumerate}
Then, for any Pisot or Salem number \(\beta\), the two numbers
\[\sum_{m=m_0}^{\infty}\beta^{-\lfloor a(m)\rfloor}, \sum_{m=m_0}^{\infty}\beta^{-\lfloor u(m)\rfloor}\]
are algebraically independent.
\label{thm:cri2}
\end{thm}
\section{Proof of main results}
In this section we prove results in Section 3, using Theorems \ref{thm:cri1} and \ref{thm:cri2}.
\subsection{Proof of results on algebraic independence}
\begin{proof}[Proof of Theorem \ref{thm:mai1}]
Let \(y_1,y_2,\ldots,y_r\) be real numbers with \(1\leq y_1<y_2<\cdots<y_r\).
We show that
\(f_i(X):=\xi(y_i;X) \ (i=1,\ldots,r)\)
fulfill the assumptions in Theorem \ref{thm:cri1} for any positive integer \(A\).
The first assumption is clear.
Recall that we proved Theorem 1.3 in \cite{kan1}, showing for any integer \(b \geq 2\) that
\(f_1(b^{-1}),\ldots,f_r(b^{-1})\) satisfy the assumptions of Theorem 2.1 in \cite{kan1}.
In the same way, we can check that \(f_1(X),\ldots,f_r(X)\) fulfill the third and fourth assumptions in
Theorem \ref{thm:cri1}. \par
In what follows, we verify the second assumption.
Let \(y\) be a fixed positive real number. Then we denote the inverse function of
\(\varphi(y;R)\) by
\[\psi(y;R)=\exp\left((\log R)^{1/(1+y)}\right).\]
For \(i=1,\ldots,r\), we have
\[\lambda\bigl(S(f_i);R\big)\sim \psi(y_i;R)\]
as \(R\) tends to infinity.
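Indeed, \(\varphi\bigl(y;\psi(y;R)\bigr)=\exp\left(\left((\log R)^{1/(1+y)}\right)^{1+y}\right)=R\),
and the exponents \(\lfloor\varphi(y_i;m)\rfloor\) smaller than \(R\) correspond to \(m\leq\psi(y_i;R)+O(1)\).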
\begin{lem}
Let \(\bk=(k_1,\ldots,k_r)\in\mathbb{N}^r\backslash \{(0,\ldots,0)\}\). Then
\[R-\theta\left(R;\sum_{i=1}^r k_i S(f_i)\right)\ll
\frac{R(\log R)^{k_1+\cdots+k_r}}{\prod_{i=1}^r \psi(y_i;R)^{k_i}}\]
for any real \(R\) with \(R\geq 2\).
\label{lem5:1}
\end{lem}
\begin{proof}
We can show Lemma \ref{lem5:1} in the same way as the proof of Lemma 3.1 in \cite{kan1}.
\end{proof}
Let \(A\) be any positive integer and \(k_1,\ldots,k_r\) any nonnegative integers.
Without loss of generality, we may assume that \(k_r\geq 1\).
Applying Lemma \ref{lem5:1} with
\(\bk=(k_1,\ldots,k_{r-2},1+k_{r-1},0)\in\mathbb{N}^r\backslash \{(0,\ldots,0)\}\),
we get
\begin{align}
&R-\theta\left(R;\sum_{h=1}^{r-2} k_h S(f_h)+(1+k_{r-1})S(f_{r-1})\right)\nonumber\\
&\hspace{30mm}\ll\frac{R(\log R)^{1+k_1+\cdots+k_{r-1}}}
{\psi(y_{r-1};R)\prod_{h=1}^{r-1} \psi(y_{h};R)^{k_h}}
\label{eqn5:2}
\end{align}
for any \(R\geq 2\).
Observe that
\begin{align*}
\log \left((\log R)^{1+k_1+\cdots+k_{r-1}}\right)
& \ll \log\log R\\
& =o\left((\log R)^{1/{(1+y_{r-1})}}\right)\\
&=o\left(\frac12\log \psi(y_{r-1};R)\right)
\end{align*}
as \(R\) tends to infinity. Thus, we see
\begin{align}
(\log R)^{1+k_1+\cdots+k_{r-1}}=o\left(\psi(y_{r-1};R)^{1/2}\right).
\label{eqn5:3}
\end{align}
Combining (\ref{eqn5:2}) and (\ref{eqn5:3}), we obtain
\begin{align*}
&R-\theta\left(R;\sum_{h=1}^{r-2} k_h S(f_h)+(1+k_{r-1})S(f_{r-1})\right)\\
&=
o\left(\frac{R}
{\psi(y_{r-1};R)^{1/2}\prod_{h=1}^{r-1} \psi(y_{h};R)^{k_h}}\right)
=
o\left(\frac{R}
{\prod_{h=1}^{r} \psi(y_{h};R)^{k_h}}\right),
\end{align*}
where we use the third assumption
in Theorem \ref{thm:cri1} with \(i=r\) and \(\varepsilon=1/(2k_r)\) for the last equality. Therefore, the second assumption is verified.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:mai2}]
Without loss of generality, we may assume that \(y_1<y_2\), or \(y_1=y_2\) and \(z_1<z_2\).
Put
\[a(R):=\varphi(y_1,z_1;R), u(R):=\varphi(y_2,z_2;R).\]
In what follows, we check that
\(a(R),u(R)\) satisfy the assumptions in Theorem \ref{thm:cri2}. Note that
\(a(R), u(R)\ (R\geq 3) \) and
\((\lfloor a(m)\rfloor)_{m=3}^{\infty}\), \((\lfloor u(m)\rfloor)_{m=3}^{\infty}\) are
ultimately increasing. The assumptions on \(a(R)\) in Theorem \ref{thm:cri2} are easily checked.
In fact, the first assumption holds by
\begin{align*}
\frac{\log a(R)}{\log R}=(\log R)^{y_1} (\log \log R)^{z_1}.
\end{align*}
Moreover, the second assumption follows from
\begin{align}
&(\log a(R))'\nonumber\\
& =
\left\{
\begin{array}{cc}
(1+y_1)(\log R)^{y_1}/R & \mbox{ if } z_1=0, \\
(\log R)^{y_1}(\log \log R)^{-1+z_1}\bigl(z_1+(1+y_1)\log \log R\bigr)/R & \mbox{ if }z_1\ne 0.
\end{array}
\right.
\label{eqn5:5}
\end{align}
Calculating \((\log u(R))'\) in the same way as (\ref{eqn5:5}), we see
\[\lim_{R\to\infty}(\log u(R))'=0.\]
Using the mean value theorem, we get
\begin{align}
\lim_{R\to\infty}\frac{u(R+1)}{u(R)}=1,
\label{eqn5:6}
\end{align}
which implies the first assumption on \(u(R)\) in Theorem \ref{thm:cri2}. \par
We now check the second assumption on \(u(R)\). Using
\[\log a(R)=(\log R)^{1+y_1} (\log \log R)^{z_1},\]
we get
\begin{align}
\log R=(\log b(R))^{1+y_1} (\log \log b(R))^{z_1}.
\label{eqn5:7}
\end{align}
Similarly,
\begin{align}
\log R=(\log v(R))^{1+y_2} (\log \log v(R))^{z_2}.
\label{eqn5:8}
\end{align}
First we assume that \(y_1<y_2\). Put \(d:=y_2-y_1>0\). By (\ref{eqn5:7}) and (\ref{eqn5:8}), we get
\[(\log v(R))^{1+y_1+(2d)/3}<\log R< (\log b(R))^{1+y_1+d/3}\]
for any sufficiently large \(R\); indeed, the right inequality follows from (\ref{eqn5:7}) and
\((\log\log b(R))^{z_1}<(\log b(R))^{d/3}\), while the left one follows from (\ref{eqn5:8}) and
\((\log\log v(R))^{z_2}>(\log v(R))^{-d/3}\), both of which hold for any sufficiently large \(R\).
Consequently, we obtain
\[(\log v(R))^{d/3}<\left(\frac{\log b(R)}{\log v(R)}\right)^{1+y_1+d/3},\]
which implies (\ref{eqn:cri3}). \par
Next we assume that \(y_1=y_2=:y\) and \(z_1<z_2\). Using (\ref{eqn5:7}) and (\ref{eqn5:8}) again,
we see
\begin{align}
\frac{(\log\log v(R))^{z_2}}{(\log\log b(R))^{z_1}}
=\left(\frac{\log b(R)}{\log v(R)}\right)^{1+y}.
\label{eqn5:9}
\end{align}
Taking the logarithm of both sides of (\ref{eqn5:9}), we get
\begin{align}
&z_2\log\log\log v(R)-z_1\log\log\log b(R)\nonumber\\
&\hspace{20mm}=
(1+y)\log\log b(R) - (1+y)\log\log v(R).
\label{eqn5:10}
\end{align}
Note that \(b(R)\geq v(R)\) for any sufficiently large \(R\). Thus, dividing (\ref{eqn5:10}) by
\(\log\log b(R)\), we see
\begin{align}
\lim_{R\to\infty}\frac{\log \log v(R)}{\log \log b(R)}=1.
\label{eqn5:11}
\end{align}
Combining (\ref{eqn5:9}), (\ref{eqn5:11}), and \(z_2>z_1\), we deduce (\ref{eqn:cri3}).
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:abc}]
Applying Theorem \ref{thm:cri2} with
\[a(R):=\varphi(y,z;R), u(R):=x^R,\]
we deduce Theorem \ref{thm:abc}.
In fact, we can check the assumptions on \(a(R)\) in
Theorem \ref{thm:cri2} in the same way as the proof of
Theorem \ref{thm:mai2}.
Moreover, (\ref{eqn:cri3}) follows from (\ref{eqn5:7}) and
\(v(R)=(\log R)/(\log x)\): in fact, (\ref{eqn5:7}) gives \(\log b(R)\geq(\log R)^{1/(2(1+y))}\)
for any sufficiently large \(R\), whereas \(\log v(R)=\log\log R+O(1)\).
\end{proof}
\subsection{Proof of results on linear independence}
\begin{proof}[Proof of Theorem \ref{thm:mai3}]
We show that the assumptions of Theorem \ref{thm:cri1} are satisfied, where \(A\) is defined as in
Theorem \ref{thm:mai3}, \(r=2\), \(f_1(X):=\gamma(\rho;X)\), and \(f_2(X):=g(X)\).
The first assumption is clear.
The fourth assumption
follows from the third assumption on \(\mathcal{F}\). \par
In order to check the third assumption, it suffices to show that
\begin{align}
\frac{1}{\rho}<\frac1{A}
\label{eqn5:12}
\end{align}
by (\ref{eqn:tra4}) and the second assumption on \(\mathcal{F}\).
We may assume that \(A\geq 4\) by (\ref{eqn:mai2}).
Using
\begin{align*}
\log \left(1-\frac1{A}\right)^A
&=-A\sum_{n=1}^{\infty}\frac1{n}A^{-n}\\ &>-A\sum_{n=1}^{\infty}A^{-n}
=-1-\frac{1}{A-1},
\end{align*}
we get by \(A\geq 4\) that
\begin{align*}
\left(1-\frac1{A}\right)^A& >\exp\left(-1-\frac{1}{A-1}\right)\nonumber\\
&\geq \exp\left(-\frac43\right)>\frac14\geq \frac1{A}.
\end{align*}
Hence, we obtain
\[G_A\left(\frac1{A}\right)=\left(1-\frac1{A}\right)^A-\frac1{A}>0,\]
which implies (\ref{eqn5:12}) by (\ref{eqn:mai2})
and the second statement of Lemma \ref{lem:mai1}.
In what follows, we check the second assumption of Theorem \ref{thm:cri1}.
The following lemma was inspired by the results of Daniel \cite{dan}.
\begin{lem}
Let \(k\) be a positive integer. Then
\begin{align}
R-\theta\bigl(R;kS(f_1)\bigr)=O\left(R^{(1-1/{\rho})^k}\right)
\label{eqn5:14}
\end{align}
for any \(R\geq 1\), where the implied constant in the symbol \(O\) does not depend on \(R\) but may depend
on \(k\).
\label{lem5:3}
\end{lem}
\begin{proof}
First we consider the case of \(k=1\). Using the mean value theorem, we see
that
\begin{align}
\lfloor (m+1)^{\rho}\rfloor - \lfloor m^{\rho}\rfloor
&= (m+1)^{\rho}-m^{\rho}+O(1)\nonumber\\
&=O\left(m^{{\rho}-1}\right)=O\left(\left\lfloor m^{\rho}\right\rfloor^{1-1/{\rho}}\right)
\label{eqn5:15}
\end{align}
for any positive integer \(m\). For any sufficiently large \(R\), take a positive integer \(m\) with
\[\lfloor m^{\rho}\rfloor < R \leq \lfloor (m+1)^{\rho}\rfloor.\]
Then we get
\[R-\theta\bigl(R;S(f_1)\bigr)\leq \lfloor (m+1)^{\rho}\rfloor - \lfloor m^{\rho}\rfloor
=O\left(R^{1-1/{\rho}}\right)\]
by (\ref{eqn5:15}). \par
Next, we assume that (\ref{eqn5:14}) holds for a positive integer \(k\). Let
\[R_0:=R-\theta\bigl(R;kS(f_1)\bigr)\in\mathbb{Z}^{+}.\]
The inductive hypothesis implies that
\begin{align}
R_0=O\left(R^{(1-1/{\rho})^k}\right).
\label{eqn5:16}
\end{align}
Set
\[\eta:=\theta\bigl(R;kS(f_1)\bigr)+\theta\bigl(R_0;S(f_1)\bigr).\]
Then we have \(\eta\in (k+1)S(f_1)\) and
\begin{align}
R-\eta=R_0-\theta\bigl(R_0;S(f_1)\bigr)>0.
\label{eqn5:17}
\end{align}
Thus,
\begin{align}
\theta\bigl(R;(k+1)S(f_1)\bigr)\geq \eta.
\label{eqn5:18}
\end{align}
Combining (\ref{eqn5:17}) and (\ref{eqn5:18}), we obtain
\begin{align*}
R- \theta\bigl(R;(k+1)S(f_1)\bigr)&\leq R-\eta\\
&=R_0-\theta\bigl(R_0;S(f_1)\bigr).
\end{align*}
Consequently, using (\ref{eqn5:14}) with
\(k=1\) and \(R=R_0\), we deduce that
\begin{align*}
0&< R- \theta\bigl(R;(k+1)S(f_1)\bigr)\\
&=O\left(R_0^{1-1/{\rho}}\right)=O\left(R^{(1-1/{\rho})^{k+1}}\right)
\end{align*}
by (\ref{eqn5:16}).
\end{proof}
Using Lemma \ref{lem5:3} with \(k=1+k_1\), we get
\begin{align*}
\log_R F_1(R)&:=\log_R\Bigl(R-\theta\bigl(R;(1+k_1)S(f_1)\bigr)\Bigr)\\
&\leq \left(1-\frac1{\rho}\right)^{1+k_1}+o(1)
\end{align*}
as \(R\) tends to infinity. Moreover, using (\ref{eqn:tra4}) and the second assumption on \(\mathcal{F}\),
we see
\begin{align*}
\log_R F_2(R):=\log_R\left(
\frac{R}{\prod_{i=1}^2\lambda(S(f_i);R)^{k_i}}
\right)
=1-\frac{k_1}{\rho}+o(1).
\end{align*}
Thus, we obtain
\[\log_R F_1(R)-\log_R F_2(R)\leq G_{1+k_1}\left(\frac1{\rho}\right)+o(1)\]
as \(R\) tends to infinity.
For the proof of (\ref{ppp}),
it suffices to show that
\begin{align}
G_{1+k_1}\left(\frac1{\rho}\right)<0.
\label{eqn5:19}
\end{align}
In fact, (\ref{eqn5:19}) implies that there exists a positive constant \(c\) satisfying
\[F_1(R)< R^{-c} F_2(R)\]
for any sufficiently large \(R\). \par
If \(k_1=0\) or \(k_1=1\), then (\ref{eqn5:19}) is clear by
\(G_1(X)=-X\) and \(G_2(X)=-X(1-X)\). If \(k_1=2\), then we have \(G_3(X)=-X(1-3X+X^2)\) and
\(\sigma_3=(3-\sqrt{5})/2\).
By (\ref{eqn5:12}) and (\ref{eqn:cri1}), we get
\[\frac1{\rho}<\frac1{A}\leq \frac1{1+k_1}=\frac13<\sigma_3,\]
which implies (\ref{eqn5:19}) by the second statement of Lemma \ref{lem:mai1}. Finally, suppose that
\(k_1\geq 3\). Using (\ref{eqn:mai2}), (\ref{eqn:cri1}), and the third statement of Lemma \ref{lem:mai1},
we obtain
\[\frac1{\rho}<\sigma_A\leq \sigma_{1+k_1}, \]
which means (\ref{eqn5:19}). This completes the proof of Theorem \ref{thm:mai3}.
\end{proof}
\begin{proof}[Proof of Corollary \ref{cor:mai3}]
The first statement of Corollary \ref{cor:mai3} follows from Theorem \ref{thm:mai3} by
\[\sum_{m=0}^{\infty}X^{\lfloor y^m\rfloor}\in\mathcal{F}.\]
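In fact, the coefficients of \(\sum_{m=0}^{\infty}X^{\lfloor y^m\rfloor}\) are bounded nonnegative integers,
\[\lambda\left(S\left(\sum_{m=0}^{\infty}X^{\lfloor y^m\rfloor}\right);R\right)\ll\log R=o(R^{\varepsilon})\]
for every \(\varepsilon>0\), and \(\lfloor y^{m+1}\rfloor\leq(1+y)\lfloor y^m\rfloor\) for any sufficiently
large \(m\), so the third condition on \(\mathcal{F}\) holds with \(C=1+y\).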
The second statement of the corollary is similarly verified by \(\xi(y,z;X)\in \mathcal{F}\).
In fact, the second assumption on \(\mathcal{F}\) follows from the fact that, for any
real number \(M\),
\[\lim_{R\to\infty}\frac{\varphi(y,z;R)}{R^M}=\infty.\]
Moreover, in the same way as the proof of (\ref{eqn5:6}),
we can show that
\[\lim_{R\to\infty}\frac{\varphi(y,z;R+1)}{\varphi(y,z;R)}=1.\]
\end{proof}
\begin{proof}[Proof of Corollary \ref{cor:mai4}]
By Theorem \ref{thm:mai3} and the second statement of Lemma \ref{lem:mai1}, it suffices to
show that \((\varepsilon+1/2)A^2>\sigma_A^{-1}\), namely,
\[0> G_A\left(\left(\frac12+\varepsilon\right)^{-1}A^{-2}\right)\]
for any sufficiently large \(A\), depending only on \(\varepsilon>0\).
We now fix an arbitrary positive real number \(\varepsilon\). In the proof of Corollary \ref{cor:mai4},
the implied constant in the symbol \(O\) does not depend on \(A\) but may depend on \(\varepsilon\).
Observe that
\begin{align*}
&\log \left(1-\left(\frac12+\varepsilon\right)^{-1}A^{-2}\right)^A\\
&=A\left(-\left(\frac12+\varepsilon\right)^{-1}A^{-2}+O\left(A^{-4}\right)\right)\\
&=-\left(\frac12+\varepsilon\right)^{-1}A^{-1}+O\left(A^{-3}\right)
\end{align*}
and that
\begin{align*}
&\left(1-\left(\frac12+\varepsilon\right)^{-1}A^{-2}\right)^A\\
&= \exp\left(-\left(\frac12+\varepsilon\right)^{-1}A^{-1}+O\left(A^{-3}\right)\right)\\
&=
1-\left(\frac12+\varepsilon\right)^{-1}A^{-1}+\frac12
\left(\frac12+\varepsilon\right)^{-2}A^{-2}
+O\left(A^{-3}\right).
\end{align*}
Thus, we get
\begin{align*}
& G_A\left(\left(\frac12+\varepsilon\right)^{-1}A^{-2}\right)\\
&= \left(1-\left(\frac12+\varepsilon\right)^{-1}A^{-2}\right)^A\\
&\hspace{20mm} -1+\left(\frac12+\varepsilon\right)^{-1}A^{-1}
-\left(\frac12+\varepsilon\right)^{-1}A^{-2}\\
&=-\varepsilon \left(\frac12 +\varepsilon\right)^{-2} A^{-2}
+O\left(A^{-3}\right)<0
\end{align*}
for any sufficiently large \(A\), depending only on \(\varepsilon\).
\end{proof}
\section{Proof of our criteria}
\subsection{Proof of Theorem \ref{thm:cri2}}
We prove Theorem \ref{thm:cri2} by Theorem \ref{thm:cri1},
showing that
\[f_1(X):=1+\sum_{m=m_0}^{\infty} X^{\lfloor a(m)\rfloor}, \
f_2(X):=1+\sum_{m=m_0}^{\infty} X^{\lfloor u(m)\rfloor}\]
satisfy the assumptions of Theorem \ref{thm:cri1},
where \(r=2\) and \(A\) is any fixed positive integer.
The first assumption is trivial. The fourth assumption
of Theorem \ref{thm:cri1} follows from the first assumption on \(u(R)\).
Using (\ref{qqq}) and
the second assumption on \(u(R)\), we get the following: For any positive real number \(\varepsilon\),
\begin{align}
\lambda\bigl(S(f_1);R\big)&\sim b(R)=o\left( R^{\varepsilon}\right)
\label{eqn6:1},\\
\lambda\bigl(S(f_2);R\big)&\sim v(R)=o\left( b(R)^{\varepsilon}\right)=
o\left( \lambda\bigl(S(f_1);R\big)^{\varepsilon}\right)
\label{eqn6:2}
\end{align}
as \(R\) tends to infinity, which implies that the third assumption of Theorem \ref{thm:cri1} holds. \par
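In fact, (\ref{qqq}) already yields \(b(R)=o(R^{\varepsilon})\) for every \(\varepsilon>0\):
for any sufficiently large \(x\) we have \((\log a(x))/(\log x)>1/\varepsilon\), that is,
\(a(x)>x^{1/\varepsilon}\); applying this with \(x=b(R)\) gives
\[b(R)<R^{\varepsilon}\]
for any sufficiently large \(R\). \par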
In what follows, we check the second assumption.
In the same way as the proof of Lemma \ref{lem5:3}, we show
the following:
\begin{lem}
Let \(k\) be a positive integer and \(\varepsilon\) a positive real number.
Then we have
\begin{align}
R-\theta\bigl(R;kS(f_1)\bigr)\ll \frac{R}{b(R)^{k-\varepsilon}}
\label{eqn6:3}
\end{align}
for any \(R\geq 1\), where the implied constant in the symbol \(\ll\) does not depend on \(R\)
but may depend on \(k\) and \(\varepsilon\).
\label{lem6:1}
\end{lem}
\begin{proof}
It suffices to show for each \(k\geq 1\) that, for any \(\varepsilon>0\), (\ref{eqn6:3}) holds for any sufficiently large \(R\), depending on \(k\) and \(\varepsilon\).
We prove the lemma by induction on \(k\). \par
We first consider the case of \(k=1\). We may assume that \(\varepsilon<1\).
By the second assumption on
\(a(m)\) and the mean value theorem, we get for any sufficiently large \(m\) that
\[a(m)\leq a(m+1)\leq 2 a(m)\]
and that
there exists a real number \(\rho\) with \(0<\rho<1\) satisfying
\begin{align}
a(m+1)-a(m)&=a'(m+\rho)
<\frac{a(m+\rho)}{(m+\rho)^{1-\varepsilon}}\nonumber\\
&\leq \frac{a(m+1)}{m^{1-\varepsilon}}\ll \frac{a(m)}{(m+1)^{1-\varepsilon}}.
\label{eqn6:4}
\end{align}
For any sufficiently large \(R\), there exists an integer \(m\geq m_0\) such that
\[\lfloor a(m)\rfloor < R\leq \lfloor a(m+1)\rfloor .\]
By (\ref{eqn6:4}), we obtain
\begin{align*}
R-\theta\bigl(R;S(f_1)\bigr)
& = R- \lfloor a(m)\rfloor\leq a(m+1)-a(m)+1\\
& \ll \frac{a(m)}{(m+1)^{1-\varepsilon}} \ll \frac{R}{b(a(m+1))^{1-\varepsilon}}
\leq \frac{R}{b(R)^{1-\varepsilon}},
\end{align*}
which implies (\ref{eqn6:3}) in the case of \(k=1\). \par
Next we assume that (\ref{eqn6:3}) holds for a fixed positive integer \(k\) and an arbitrary positive
real number \(\varepsilon\). In what follows, we verify (\ref{eqn6:3}) for \(k+1\)
with fixed \(\varepsilon<1\).
Put
\[R_0:=R-\theta\bigl(R;kS(f_1)\bigr). \]
It suffices to consider the case of
\begin{align}
R_0\geq \frac{R}{b(R)^{k+1}}.
\label{eqn6:5}
\end{align}
In fact, suppose that (\ref{eqn6:5}) does not hold. Since \(0\in S(f_1)\) by the definition of \(f_1(X)\),
we have
\[\theta\bigl(R;kS(f_1)\bigr)\in k S(f_1)\subset (k+1) S(f_1)\]
by Remark \ref{rem:cri1}. Thus, we get
\[R-\theta\bigl(R;(k+1)S(f_1)\bigr)\leq R_0 <\frac{R}{b(R)^{k+1}},\]
which implies (\ref{eqn6:3}). \par
In what follows, we assume that (\ref{eqn6:5}) is satisfied. In particular,
applying (\ref{eqn6:1}) to (\ref{eqn6:5}), we see
\begin{align}
R_0\geq R^{1-\varepsilon/4}
\label{eqn6:6}
\end{align}
for any sufficiently large \(R\). Moreover, the inductive hypothesis implies that
\begin{align}
R_0\ll \frac{R}{b(R)^{k-\varepsilon/2}}.
\label{eqn6:7}
\end{align}
In the same way as the proof of Lemma \ref{lem5:3}, putting
\[\eta:= \theta\bigl(R;kS(f_1)\bigr)+\theta\bigl(R_0;S(f_1)\bigr)\in (k+1) S(f_1),\]
we see that
\begin{align*}
R-\theta\bigl(R;(k+1)S(f_1)\bigr)& \leq R-\eta\\
&=R_0-\theta\bigl(R_0;S(f_1)\bigr)\ll \frac{R_0}{b(R_0)^{1-\varepsilon/4}},
\end{align*}
where for the last inequality we apply (\ref{eqn6:3}) with \(k=1\).
By (\ref{eqn6:6}) and (\ref{eqn6:7}), we obtain
\begin{align}
R-\theta\bigl(R;(k+1)S(f_1)\bigr)\ll \frac{R}{b(R)^{k-\varepsilon/2}b(R^{1-\varepsilon/4})^{1-\varepsilon/4}}.
\label{eqn6:8}
\end{align}
Using the assumption that \((\log a(x))/(\log x)\) is ultimately increasing with
\[x=b(R)> x'= b(R^{1-\varepsilon/4}),\]
we get
\begin{align*}
\frac{\log R}{\log b(R)}&=\frac{\log a(x)}{\log x}\geq \frac{\log a(x')}{\log x'}\\
&=\left(1-\frac{\varepsilon}{4}\right) \frac{\log R}{\log b(R^{1-\varepsilon/4})}.
\end{align*}
Consequently,
\[b\left( R^{1-\varepsilon/4}\right)\geq b(R)^{1-\varepsilon/4},\]
and so
\begin{align}
\frac{1}{b(R^{1-\varepsilon/4})^{1-\varepsilon/4}}\leq \frac{1}{b(R)^{(1-\varepsilon/4)^2}}
\leq \frac{1}{b(R)^{1-\varepsilon/2}}
\label{eqn6:9}
\end{align}
by \((1-\varepsilon/4)^2\geq 1-\varepsilon/2\). Combining (\ref{eqn6:8}) and (\ref{eqn6:9}), we
deduce that
\begin{align*}
(0<)R-\theta\bigl(R;(k+1)S(f_1)\bigr)\ll \frac{R}{b(R)^{k+1-\varepsilon}},
\end{align*}
which implies (\ref{eqn6:3}).
\end{proof}
Let \(k_1,k_2\) be nonnegative integers. Applying Lemma \ref{lem6:1} with
\(k=1+k_1\) and \(\varepsilon=1/2\), we deduce
by (\ref{eqn6:2}) that
\begin{align*}
R-\theta(R;(1+k_1) S(f_1)) & \ll \frac{R}{b(R)^{k_1+1/2}}\\
&=o\left(\frac{R}{b(R)^{k_1} v(R)^{k_2}}\right)=o\left(\frac{R}{\prod_{h=1}^2 \lambda(S(f_h);R)^{k_h}}\right)
\end{align*}
as \(R\) tends to infinity.
Since \(A\) is an arbitrary positive integer, the set
\(\{f_1(\beta^{-1})^{k_1}f_2(\beta^{-1})^{k_2}\mid k_1,k_2\in\mathbb{N}\}\)
is linearly independent over \(\mathbb{Q}(\beta)\), so \(f_1(\beta^{-1})\) and \(f_2(\beta^{-1})\),
and hence the two numbers in Theorem \ref{thm:cri2}, are algebraically independent.
This completes the proof of Theorem \ref{thm:cri2}.
\subsection{Proof of Theorem \ref{thm:cri1}}
Put
\begin{align*}
\overline{f_i}(X):=\left\{
\begin{array}{cc}
f_i(X) & \mbox{if }f_i(0)\ne 0, \\
1+f_i(X) & \mbox{if }f_i(0)=0.
\end{array}
\right.
\end{align*}
Then \(\overline{f_1}(X),\ldots,\overline{f_r}(X)\) satisfy the assumptions of Theorem \ref{thm:cri1}.
The first and fourth assumptions are easily checked.
Moreover, the second and the third assumptions are also seen by
\begin{align*}
&\theta\left(R;\sum_{h=1}^{r-2}
k_h S \hspace{-0.7mm}\left( \hspace{0.3mm}\overline{f_h}\hspace{0.3mm}\right)+
(1+k_{r-1})S\hspace{-0.7mm}\left(\hspace{0.3mm}\overline{f_{r-1}}\hspace{0.3mm}\right)\right)\\
&\hspace{10mm}\geq \theta\left(R;\sum_{h=1}^{r-2} k_h S(f_h)+(1+k_{r-1})S(f_{r-1})\right)
\end{align*}
and, for \(h=1,\ldots,r\),
\[\lambda\left(S\left(\hspace{0.3mm}\overline{f_h}\hspace{0.3mm}\right);R\right) \sim \lambda(S(f_h);R)\]
as \(R\) tends to infinity. For the proof of Theorem \ref{thm:cri1},
it suffices to show that
\[\left\{\left. \overline{f_1}(\beta^{-1})^{k_1}
\overline{f_2}(\beta^{-1})^{k_2}\cdots \overline{f_r}(\beta^{-1})^{k_r}
\ \right| \ k_1,k_2,\ldots, k_r\in\mathbb{N}, k_1\leq A\right\}\]
is linearly independent over \(\mathbb{Q}(\beta)\). In particular, denoting \(\overline{f_i}(X)\) again by
\(f_i(X)\) for \(i=1,\ldots,r\), we may assume that \(f_i(0)\ne 0\) for any \(i=1,\ldots, r\). \par
For simplicity, put, for \(i=1,\ldots,r\),
\[\xi_i:=f_i(\beta^{-1}), S_i:=S(f_i), \lambda_i(R):=\lambda\bigl(S(f_i);R\bigr).\]
Using Corollary \ref{cor:tra1} and the third assumption of Theorem \ref{thm:cri1}, we see that
\[[\mathbb{Q}(\xi_1,\beta):\mathbb{Q}(\beta)]\geq A+1\]
and that \(\xi_2,\ldots,\xi_r\) are transcendental. \par
We introduce notation for the proof of Theorem \ref{thm:cri1}.
For any nonempty subset \(\mathcal{A}\) of \(\mathbb{N}\) and any positive integer \(k\), let
\(\mathcal{A}^k\) denote the \(k\)-fold Cartesian product. For convenience, set
\[\mathcal{A}^0:=\{0\}.\]
Let \(k\in \mathbb{N}\) and \(\bp=(p_1,\ldots,p_k)\in \mathbb{N}^k\). We put
\begin{align*}
|\bp|:=\left\{
\begin{array}{cc}
0 & (k=0), \\
p_1+\cdots+p_k & (k\geq 1)
\end{array}
\right.
\end{align*}
and, for \(i=1,\ldots, r\),
\begin{align*}
t_i (\bp):=
\left\{
\begin{array}{cc}
1 & (k=0), \\
t_i(p_1)\cdots t_i(p_k) & (k\geq 1).
\end{array}
\right.
\end{align*}
Moreover, for any \(\bk=(k_1,\ldots,k_r)\in \mathbb{N}^r\), let
\[\underline{X}^{\bk}=\prod_{i=1}^{r} X_i^{k_i}, \
\underline{\xi}^{\bk}:=\prod_{i=1}^r \xi_i^{k_i}, \ \underline{\lambda}(N)^{\bk}:=
\prod_{i=1}^r \lambda_i (N)^{k_i}. \]
We calculate \(\underline{\xi}^{\bk}\) in the same way as the proof of Theorem 2.1 in \cite{kan1}.
The method was inspired by the proof of Theorem 7.1 in \cite{bai}.
Let \(\bk\in\mathbb{N}^r\backslash\{(0,\ldots,0)\}\). Then we have
\begin{align}
\underline{\xi}^{\bk}
& =
\prod_{i=1}^r \left(\sum_{m_i\in S_i} t_i(m_i) \beta^{-m_i}\right)^{k_i}
\nonumber\\
&=\prod_{i=1}^r\sum_{\bm_i\in S_i^{k_i}} t_i(\bm_i) \beta^{-|\bm_i|}
=:
\sum_{m=0}^{\infty} \beta^{-m} \rho(\bk;m),
\label{pr1}
\end{align}
where
\begin{align*}
\rho(\bk;m)=\sum_{\bm_1\in S_1^{k_1},\ldots,\bm_r\in S_r^{k_r}\atop
|\bm_1|+\cdots+|\bm_r|=m} t_1(\bm_1)\cdots t_r(\bm_r)\in\mathbb{N}.
\end{align*}
Note that \(\rho(\bk;m)\) is positive if and only if
\[m\in \sum_{h=1}^r k_h S_h.\]
We see that
\begin{align}
\rho(\bk;m)\leq
\sum_{\bm_1\in S_1^{k_1},\ldots,\bm_r\in S_r^{k_r}\atop
|\bm_1|+\cdots+|\bm_r|=m} C_7^{|\bk|} \leq C_7^{|\bk|} (1+m)^{|\bk|}.
\label{pr2}
\end{align}
We give an analogue of Lemma 4.1 in \cite{kan1}.
\begin{lem}
Let \(\bk\in \mathbb{N}^r\backslash \{(0,\ldots,0)\}\) and let \(N\in\mathbb{Z}^{+}\).
Then we have
\begin{align}
\sum_{m=0}^{N-1}\rho(\bk;m)\leq C_7^{|\bk|} \underline{\lambda}(N)^{\bk}
\label{pr3}
\end{align}
and
\begin{align}
\mbox{Card}\hspace{0.6mm} \{m\in\mathbb{N}\mid
m< N, \rho(\bk;m)>0\}\leq C_7^{|\bk|} \underline{\lambda}(N)^{\bk}.
\label{pr4}
\end{align}
\label{prlem1}
\end{lem}
\begin{proof}
We see that (\ref{pr4}) follows from (\ref{pr3}) because \(\rho(\bk;m)\in\mathbb{N}\) for any
\(m\). Put \(S(i;N):=S_i\cap [0,N)\) for \(i=1,\ldots,r\). Then we get
\begin{align*}
\sum_{m=0}^{N-1} \rho(\bk;m)
&=\sum_{\bm_1\in S_1^{k_1},\ldots,\bm_r\in S_r^{k_r}\atop
|\bm_1|+\cdots+|\bm_r|< N} t_1(\bm_1)\cdots t_r(\bm_r)\\
&\leq
C_7^{|\bk|}\sum_{\bm_1\in S(1;N)^{k_1}}\sum_{\bm_2\in S(2;N)^{k_2}}
\cdots \sum_{\bm_r\in S(r;N)^{k_r}} 1\\
&= C_7^{|\bk|} \underline{\lambda}(N)^{\bk},
\end{align*}
which implies (\ref{pr3}).
\end{proof}
Assume, on the contrary, that the set \(\{\underline{\xi}^{\bk}\mid \bk=(k_1,\ldots,k_r)\in \mathbb{N}^r, k_1\leq A\}\)
is linearly dependent over \(\mathbb{Q}(\beta)\). Then there exists
\(P(X_1,\ldots,X_r)\in \mathbb{Z}[\beta][X_1,\ldots,X_r]\backslash\mathbb{Z}[\beta]\) such that
the degree of \(P(X_1,\ldots,X_r)\) in \(X_1\) is at most \(A\) and that
\begin{align}
P(\xi_1,\ldots,\xi_r)=0.
\label{pr5}
\end{align}
Let \(D\) be the total degree of \(P(X_1,\ldots,X_r)\).
Without loss of generality, we may assume that \(X_r(-1+X_r)\) divides \(P(X_1,\ldots,X_r)\) and
that if \(r\geq 3\), then \(X_{r-1}\) divides \(P(X_1,\ldots,X_r)\). Put
\begin{align}
P(X_1,\ldots,X_r)=:\sum_{\bk\in \Lambda} A_{\bk} \underline{X}^{\bk},
\label{pr6}
\end{align}
where \(\Lambda\) is a nonempty finite subset of \(\mathbb{N}^r\) and
\(A_{\bk}\in \mathbb{Z}[\beta]\backslash \{0\}\) for any \(\bk\in \Lambda\).
For any \(\bk=(k_1,\ldots,k_r) \in \Lambda\), we have \(k_r\geq 1\) because
\(X_r\) divides \(P(X_1,\ldots,X_r)\). Moreover, if \(r\geq 3\), then
\begin{align}
k_{r-1}\geq 1
\label{pr7}
\end{align}
because \(X_{r-1}\) divides \(P(X_1,\ldots,X_r)\). \par
The lexicographic order \(\succ\) on \(\mathbb{N}^r\) is defined as follows:
Let \(\bk=(k_1,\ldots,k_r)\) and \(\bk'=(k_1',\ldots,k_r')\) be distinct
elements of \(\mathbb{N}^r\).
Put \(l:=\min \{i\mid 1\leq i\leq r, k_i\ne k_i'\}\). Then \(\bk\succ \bk'\) if and only if
\(k_l>k_l'\). The third assumption of Theorem \ref{thm:cri1} implies that if \(\bk\succ \bk'\), then
\begin{align}
\underline{\lambda}(N)^{\bk'}=o\left(\underline{\lambda}(N)^{\bk}\right)
\label{pr8}
\end{align}
as \(N\) tends to infinity. \par
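For completeness, we sketch (\ref{pr8}): let \(l\) be the minimal index with \(k_l\ne k_l'\), so that
\(k_l\geq 1+k_l'\). Since \(0\in S_i\) and hence \(\lambda_i(N)\geq 1\) for any sufficiently large \(N\), we have
\[\frac{\underline{\lambda}(N)^{\bk'}}{\underline{\lambda}(N)^{\bk}}
\leq\frac{\prod_{i>l}\lambda_i(N)^{k_i'}}{\lambda_l(N)},\]
and the numerator is \(o\bigl(\lambda_l(N)\bigr)\) by the third assumption of Theorem \ref{thm:cri1},
while \(\lambda_l(N)\) tends to infinity. \par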
Let \(\bg=(g_1,\ldots,g_r)\) be the greatest element of \(\Lambda\) with respect to \(\succ\).
Without loss of generality, we may assume that
\begin{align}
A_{\bg}\geq 1.
\label{pr9}
\end{align}
We see that
\begin{align}
g_{r-1}\geq 1.
\label{pr10}
\end{align}
In fact, (\ref{pr10}) follows from (\ref{pr7}) if \(r\geq 3\). Suppose that \(r=2\). Then \(g_1\) is the
degree of \(P(X_1,X_2)\) in \(X_1\). Thus, \(g_1\) is positive because \(\xi_2\) is transcendental. \par
Putting
\[\Lambda_1:=\{\bk=(k_1,\ldots,k_{r-1},k_r)\mid k_1=g_1,\ldots,k_{r-1}=g_{r-1},k_r<g_r\}\]
and
\[\Lambda_2:=\{\bk=(k_1,\ldots,k_{r-1},k_r)\mid k_i< g_i \mbox{ for some }i\leq r-1\},\]
we see \(\Lambda=\{\bg\}\cup \Lambda_1\cup\Lambda_2\). Using the fact that \(\xi_r\) is
transcendental and that \(-1+X_r\) divides \(P(X_1,\ldots,X_r)\), we obtain the following lemma,
applying the same method as the proof of
Lemma 4.3 in \cite{kan1} with \(F(X_{r-1},X_r)=1\):
\begin{lem}
\(\Lambda_1\) and \(\Lambda_2\) are not empty.
\label{prlem2}
\end{lem}
Set
\[\be=(g_1,\ldots,g_{r-2},-1+g_{r-1},1+D).\]
Recall that the degree \(g_1\) of \(P(X_1,\ldots,X_r)\) in \(X_1\) is at most \(A\). Thus, we
can apply the second assumption of Theorem \ref{thm:cri1} with \(\bk=(k_1,\ldots,k_r)=\be\).
In fact, we see
\begin{align*}
k_1=\left\{
\begin{array}{cc}
-1+g_1 & (\mbox{if }r=2),\\
g_1 & (\mbox{if }r\geq 3).
\end{array}
\right.
\end{align*}
Hence, there exists a positive constant \(C_{12}\) satisfying the following:
For any integer \(R\) with \(R\geq C_{12}\), we have
\begin{align}
\lambda_r(R)\geq 5
\label{pr11}
\end{align}
and
\begin{align}
R-\theta\left(R;\sum_{h=1}^{r-1} g_h S_h\right)< \frac{R}{\underline{\lambda}(R)^{\be}}.
\label{pr12}
\end{align}
In what follows, we set
\[\theta(R):=\theta\left(R;\sum_{h=1}^{r-1} g_h S_h\right)\]
for simplicity.
Using (\ref{pr11}) and (\ref{pr12}), we obtain the following lemma in the same way as the proof of
Lemma 4.4 in \cite{kan1}:
\begin{lem}
Let \(M,E\) be real numbers with
\[M\geq C_{12}, E\geq \frac{4M}{\underline{\lambda}{(M)}^\be}.\]
Then
\[M+\frac12E<\theta(M+E).\]
\label{prlem3}
\end{lem}
Using \(k_1\leq A\) and the third assumption of Theorem \ref{thm:cri1}, we get
\[\lim_{R\to\infty}\frac{R}{\underline{\lambda}(R)^{\be}}=\infty.\]
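In fact, every component of \(\be\) other than the first is at most \(1+D\), while the first is at most \(A\);
hence, by the third assumption of Theorem \ref{thm:cri1},
\[\underline{\lambda}(R)^{\be}\leq\lambda_1(R)^{A}\prod_{i=2}^{r}\lambda_i(R)^{1+D}
=o\left(R^{1-A\delta/2}\right).\]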
Thus, the set
\[\Xi:=\left\{
N\in\mathbb{N}\left|
\frac{N}{\underline{\lambda}(N)^{\be}}\geq \frac{n}{\underline{\lambda}(n)^{\be}}
\mbox{ for any }n\leq N\right.\right\}
\]
is infinite. We now verify for any \(\bk=(k_1,\ldots,k_r) \in\Lambda_2\) that
\begin{align}
\underline{\lambda}(N)^{\bk}=o\left(\underline{\lambda}(N)^{\be}\right)
\label{pr13}
\end{align}
as \(N\) tends to infinity. For the proof of (\ref{pr13}), it suffices to check
\begin{align}
\be\succ \bk
\label{pr14}
\end{align}
by (\ref{pr8}).
If \(g_i>k_i\) for some \(i\leq r-2\), then (\ref{pr14}) holds. Suppose that
\(g_i=k_i\) for any \(i\leq r-2\). Then we get \(-1+g_{r-1}\geq k_{r-1}\) and \(1+D>k_r\) by
\(\bk\in \Lambda_2\), which implies (\ref{pr14}). \par
Combining (\ref{pr5}), (\ref{pr6}), and (\ref{pr1}), we get
\[0=\sum_{\bk\in \Lambda} A_{\bk} \underline{\xi}^{\bk}
=\sum_{\bk\in \Lambda} A_{\bk} \sum_{m=0}^{\infty} \rho(\bk;m)\beta^{-m}.\]
For an arbitrary nonnegative integer \(R\), multiplying both sides of the equality above by \(\beta^R\),
we obtain
\[0=\sum_{\bk\in \Lambda} A_{\bk} \sum_{m=-R}^{\infty} \rho(\bk;m+R)\beta^{-m}.\]
Putting
\begin{align}
Y_R&:=
\sum_{\bk\in \Lambda} A_{\bk} \sum_{m=1}^{\infty} \rho(\bk;m+R)\beta^{-m}\nonumber \\
&=
-
\sum_{\bk\in \Lambda} A_{\bk} \sum_{m=-R}^{0} \rho(\bk;m+R)\beta^{-m},
\label{pr15}
\end{align}
we see that \(Y_R\) is an algebraic integer because \(\beta\) is a Pisot or Salem number.
\begin{lem}
There exist positive integers \(C_{13}\) and \(C_{14}\) satisfying the following:
For any integer \(R\) with \(R\geq C_{14}\), we have
\[Y_R=0\mbox{, or }|Y_R|\geq R^{-C_{13}}.\]
\label{prlem4}
\end{lem}
\begin{proof}
Let \(d\) be the degree of \(\beta\) and let \(\sigma_1,\sigma_2,\ldots,\sigma_d\) be the
conjugate embeddings of \(\mathbb{Q}(\beta)\) into \(\mathbb{C}\) such that
\(\sigma_1(\gamma)=\gamma\) for any \(\gamma \in \mathbb{Q}(\beta)\).
Set
\[C_{15}:=\max\{|\sigma_i(A_{\bk})|\mid i=1,\ldots,d, \bk\in\Lambda\}.\]
Let \(2\leq i\leq d\). Using (\ref{pr15}), (\ref{pr2}), and \(|\sigma_i(\beta)|\leq 1\), we get
\begin{align*}
|\sigma_i(Y_R)|&=
\left|
\sum_{\bk\in\Lambda}\sigma_i(A_{\bk})\sum_{n=0}^R\rho(\bk;-n+R)\sigma_i(\beta)^n
\right|\\
&\leq\sum_{\bk\in \Lambda}C_{15}\sum_{n=0}^R C_7^{D}(1+R)^D\ll (R+1)^{D+1}.
\end{align*}
In particular, if \(R\gg 1\), then
\[|\sigma_i(Y_R)|\leq R^{D+2}.\]
Hence, if \(Y_R\ne 0\), then we obtain
\[1\leq |Y_R|\prod_{i=2}^d |\sigma_i(Y_R)|\leq |Y_R| R^{(D+2)(d-1)}.\]
\end{proof}
In the case of \(\beta=2\) and \(r=1\), Bailey, Borwein, Crandall, and Pomerance estimated
the number \(\widetilde{y_N}\) of positive \(Y_R\) with \(R< N\)
in order to give lower bounds for the nonzero digits
in binary expansions (Theorem 7.1 in \cite{bai}).
Moreover, if \(\beta=b>1\) is a rational integer and \(r\geq 2\), then \(\widetilde{y_N}\) is applied to
prove a criterion for algebraic independence (Theorem 2.1 in \cite{kan1}). \par
Now, we put, for \(N\in \mathbb{Z}^{+}\),
\[y_N:=\mbox{Card}\left\{R\in\mathbb{N}\ \left| \ R< N, Y_R\geq \frac1{\beta}\right.\right\}.\]
In the case where \(\beta\) is a Pisot or Salem number and \(r=1\),
\(y_N\) was estimated to give lower bounds for the numbers of nonzero digits in \(\beta\)-expansions
(Theorem 2.2 in \cite{kan3}). In what follows, we calculate upper and lower bounds for \(y_N\), which
gives a contradiction. First, we estimate upper bounds for \(y_N\) in Lemma \ref{prlem5}. Next, we
give lower bounds for \(y_N\) in Lemma \ref{prlem10}, estimating upper bounds for \(R-\theta(R;\Omega)\)
in Lemma \ref{prlem9}, where
\begin{align}
\Omega=\left\{R\in\mathbb{N}\left|Y_R\geq \frac1{\beta}\right.\right\}.
\label{Omega}
\end{align}
In what follows, we assume that
\(N\) is a sufficiently large integer satisfying
\begin{align}
\left(1+\frac1{N}\right)^D<\frac{\beta+1}2. \label{pr16}
\end{align}
\begin{lem}We have
\[y_N=o\left(N^{1-\delta/2}\right)\]
as \(N\) tends to infinity.
\label{prlem5}
\end{lem}
\begin{proof}
Put \[K:=\lceil (1+D)\log_{\beta} N\rceil.\]
Then we see
\begin{align*}
y_N&\leq K+y_{N-K}
=K+\sum_{0\leq R< N-K\atop Y_R\geq 1/\beta} 1\\
&\leq K+\beta\sum_{R=0}^{N-K-1}|Y_R|
\end{align*}
and
\begin{align*}
\sum_{R=0}^{N-K-1}|Y_R|&\leq
\sum_{R=0}^{N-K-1}\sum_{\bk\in \Lambda}\sum_{m=1}^{\infty} |A_{\bk}|\beta^{-m}\rho(\bk;m+R)\\
&= \sum_{\bk\in\Lambda} |A_{\bk}|Y(\bk;N),
\end{align*}
where
\[Y(\bk;N)=\sum_{R=0}^{N-K-1}\sum_{m=1}^{\infty} \beta^{-m}\rho(\bk;m+R)\]
for \(\bk\in \Lambda\). For the proof of Lemma \ref{prlem5}, it suffices to show for any
\(\bk=(k_1,k_2,\ldots,k_r)\in\Lambda\) that
\begin{align}
Y(\bk;N)=o\left(N^{1-\delta/2}\right)
\label{pr17}
\end{align}
as \(N\) tends to infinity. Observe that
\begin{align}
0\leq Y(\bk;N)&=\sum_{m=1}^{K}\sum_{R=0}^{N-K-1} \beta^{-m}\rho(\bk;m+R)\nonumber\\
&\hspace{10mm}
+\sum_{m=K+1}^{\infty}\sum_{R=0}^{N-K-1} \beta^{-m}\rho(\bk;m+R)\nonumber\\
&=:S^{(1)}(\bk;N)+S^{(2)}(\bk;N).
\label{pr18}
\end{align}
Using (\ref{pr3}), we get
\begin{align*}
S^{(1)}(\bk;N)
&\leq
\sum_{m=1}^{K}\beta^{-m}\sum_{R=0}^{N-1} \rho(\bk;R)
\leq
\sum_{m=1}^{\infty}\beta^{-m}\sum_{R=0}^{N-1} \rho(\bk;R)\\
&\leq \sum_{m=1}^{\infty}\beta^{-m}C_7^D \underline{\lambda}(N)^{\bk}
\ll \underline{\lambda}(N)^{\bk}.
\end{align*}
Thus, the third assumption of Theorem \ref{thm:cri1} implies that
\begin{align}
S^{(1)}(\bk;N)\ll \lambda_1(N)^A\prod_{i=2}^r\lambda_i(N)^{k_i}
=o\left(N^{1-\delta/2}\right).
\label{pr19}
\end{align}
Using (\ref{pr2}), we see
\begin{align*}
S^{(2)}(\bk;N)&\leq \sum_{m=K+1}^{\infty}\beta^{-m}\sum_{R=0}^{N-K-1}C_7^D(m+R+1)^D\\
&\ll \sum_{m=K+1}^{\infty}\beta^{-m} N(m+N)^D.
\end{align*}
Note for any \(m\in\mathbb{N}\) that
\[\left(\frac{m+1+N}{m+N}\right)^D\leq \left(1+\frac1N\right)^D< \frac{\beta+1}2\]
by (\ref{pr16}). Hence, we obtain
\begin{align}
S^{(2)}(\bk;N)
&\ll
\beta^{-K-1}N(K+1+N)^D\sum_{m=0}^{\infty}\beta^{-m}\left(\frac{\beta+1}2\right)^m\nonumber\\
&\ll \beta^{-K-1} N^{D+1}\leq 1.
\label{pr20}
\end{align}
Hence, combining (\ref{pr18}), (\ref{pr19}), and (\ref{pr20}), we deduce (\ref{pr17}).
\end{proof}
In what follows, we estimate lower bounds for \(y_N\) in the case where \(N\in \Xi\) is sufficiently large.
Recall that \(\Lambda_2\) is not empty by Lemma \ref{prlem2} and that \(0\in S_i\)
for \(i=1,\ldots,r\).
In particular, for any \(\bk\in \Lambda\), we have
\(\rho(\bk;0)>0\). Put
\begin{align*}
&\{T\in\mathbb{N}\mid T< N, \rho(\bk;T)>0\mbox{ for some }\bk\in \Lambda_2\}\\
&\hspace{30mm}=:\{0=T_1<T_2<\cdots<T_{\tau}\}.
\end{align*}
If \(N\) is sufficiently large, then (\ref{pr4}) and (\ref{pr13}) imply that
\[\tau\leq \sum_{\bk\in\Lambda_2} C_7^{|\bk|}\underline{\lambda}(N)^{\bk}\leq
\frac{1}{32}\underline{\lambda}(N)^{\be}.
\]
For convenience, put \(T_{1+\tau}:=N\). Set
\[\mathcal{J}:=\{J=J(j)\mid 1\leq j\leq \tau\},\]
where \(J(j)\) is an interval of \(\mathbb{R}\) defined by
\(J(j)=[T_j,T_{1+j})\) for \(1\leq j\leq \tau\). \par
In what follows, we denote the length of a bounded interval \(I\) of \(\mathbb{R}\) by \(|I|\). Then we have
\[\sum_{J\in\mathcal{J}}|J|=N. \]
Let
\begin{align*}
\mathcal{J}_1
&:=
\left\{J\in\mathcal{J}\ \left| \ |J|\geq \frac{16 N}{\underline{\lambda}(N)^{\be}}\right.\right\},\\
\mathcal{J}_2
&:=
\{J\in \mathcal{J}_1\mid J\subset[C_{12},N)\}.
\end{align*}
In the same way as the proof of Lemma 4.7 in \cite{kan1}, we obtain the following:
\begin{lem}
If \(N\in \Xi\) is sufficiently large, then we have
\[
\sum_{J \in \mathcal{J}_1}|J|\geq \frac{N}2, \
\sum_{J \in \mathcal{J}_2}|J|\geq \frac{N}3.
\]
\label{prlem6}
\end{lem}
Recall that \(\Lambda_1\) is not empty by Lemma \ref{prlem2}. Let \(\bk_1\) be the maximal element
of \(\Lambda_1\) with respect to \(\succ\). Set
\begin{align*}
&\{R\in\mathbb{N}\mid R< N, \rho(\bk;R)>0\mbox{ for some }\bk\in\Lambda_1\}\\
&\hspace{30mm}=:
\{0=R_1<R_2<\cdots<R_{\mu}\}
\end{align*}
and \(R_{1+\mu}:=N\). Then (\ref{pr4}) implies that
\[\mu\leq \sum_{\bk\in \Lambda_1} C_7^{|\bk|}\underline{\lambda}(N)^{\bk}
\leq C_{16}\underline{\lambda}(N)^{\bk_1},\]
where \(C_{16}\) is a positive constant. \par
Let
\[\mathcal{I}:=\{I=I(i)\mid 1\leq i\leq \mu\},\]
where \(I(i)\) is an interval of \(\mathbb{R}\) defined by
\(I(i)=[R_i,R_{i+1})\) for \(1\leq i\leq \mu\).
Set
\[y_N(i):=\mbox{Card}\left\{R\in I(i) \left| Y_R\geq \frac1{\beta}\right.\right\}\]
for \(i=1,\ldots,\mu\). Observe that
\[\sum_{I\in\mathcal{I}}|I|=N\]
and that
\begin{align}
\sum_{i=1}^{\mu} y_N(i)=y_N.
\label{pr21}
\end{align}
Set
\begin{align*}
\mathcal{I}_1&:=\{I\in\mathcal{I}\mid I\subset J\mbox{ for some }J\in\mathcal{J}\},\\
\mathcal{I}_2&:=\left\{I\in\mathcal{I}_1\left||I|\geq \frac1{12C_{16}}\frac{N}{\underline{\lambda}(N)^{\bk_1}}
\right.\right\}.
\end{align*}
In the same way as the proof of Lemma 4.8 in \cite{kan1}, we obtain the following:
\begin{lem}
For any sufficiently large \(N\in \Xi\), we have
\begin{align}
\sum_{I\in \mathcal{I}_1}|I|\geq \frac{N}6, \ \sum_{I\in \mathcal{I}_2}|I|\geq \frac{N}{12}.
\label{pr22}
\end{align}
\label{prlem7}
\end{lem}
In what follows, we assume that
\(N\in \Xi\) satisfies
\begin{align}
N^{\delta/2}\geq (1+C_8)C_9.
\label{pr23}
\end{align}
Let \(1\leq i\leq \mu\) with \(I(i)\in\mathcal{I}_2\) and let \(R\in (R_i,R_{i+1})\).
We now show that
\begin{align}
\rho(\bk;R)=0
\label{pr24}
\end{align}
for any \(\bk\in \Lambda_1\cup\Lambda_2=\Lambda\backslash\{\bg\}\).
In fact, if \(\bk\in \Lambda_1\), then (\ref{pr24}) follows from
the definition of \(R_1,\ldots,R_{\mu+1}\). Suppose that \(\bk\in \Lambda_2\).
By the definition of \(\mathcal{I}_2\), we have \(I(i)\subset J(j)\) for some \(j\) with \(1\leq j\leq \tau\),
and so \(R\in (T_j,T_{1+j})\). Thus, we get (\ref{pr24}). \par
Applying the third assumption of Theorem \ref{thm:cri1} with \(\varepsilon=\delta/(2D)\),
we see by \(g_1\leq A\) that
\[\underline{\lambda}(N)^{\bk_1}=o\left(N^{-\delta/2+1}\right)\]
as \(N\in \Xi\) tends to infinity. Thus, we obtain for any sufficiently large \(N\in \Xi\) that
\begin{align}
|I(i)|\geq \frac{1}{12C_{16}}\frac{N}{\underline{\lambda}(N)^{\bk_1}}\geq N^{\delta/2}.
\label{pr25}
\end{align}
We can apply
the fourth assumption of Theorem \ref{thm:cri1} with
\[R=\frac{|I(i)|}{1+C_8}\geq \frac{N^{\delta/2}}{1+C_8}\geq C_9\]
by (\ref{pr25}) and (\ref{pr23}). Thus, we get that
there exists \(V(N,i)\in S_r\) with
\[\frac{|I(i)|}{1+C_8}\leq V(N,i)\leq \frac{C_8|I(i)|}{1+C_8}.\]
Put
\(M=M(N,i):=R_i+V(N,i)\). Then we have
\begin{align}
R_i+\frac{|I(i)|}{1+C_8}\leq M\leq R_i+\frac{C_8|I(i)|}{1+C_8}.
\label{pr26}
\end{align}
By the definition of \(R_i\), there exists \(k_r\leq -1+g_r\) such that
\[R_i\in\sum_{h=1}^{r-1} g_h S_h+k_r S_r.\]
Using Remark \ref{rem:cri1}, we see
\[R_i\in\sum_{h=1}^{r-1} g_h S_h+(-1+g_r) S_r.\]
Thus, we get
\begin{align}
M\in\sum_{h=1}^r g_h S_h
\label{pr27}
\end{align}
by \(V(N,i)\in S_r\).
\begin{lem}
Let \(N\in\Xi\) be sufficiently large and let \(1\leq i\leq \mu\) with \(I(i)\in\mathcal{I}_2\). Then
\(Y_R>0\) for any \(R\) with \(R_i\leq R<M\).
\label{prlem8}
\end{lem}
\begin{proof}
We prove Lemma \ref{prlem8} by descending induction on \(R\). First we show that \(Y_{M-1}>0\).
We see
\begin{align}
Y_{M-1}&=
A_{\bg} \sum_{m=1}^{\infty}\beta^{-m}\rho(\bg;m+M-1)\nonumber\\
&\hspace{10mm}+\sum_{\bk\in\Lambda\backslash\{\bg\}}A_{\bk}
\sum_{m=1}^{\infty}\beta^{-m}\rho(\bk;m+M-1)\nonumber\\
&=:S^{(3)}+S^{(4)}.
\label{pr28}
\end{align}
By (\ref{pr27})
\begin{align}
S^{(3)}\geq \frac{A_{\bg}}{\beta}\rho(\bg;M)\geq \frac{1}{\beta}.
\label{pr29}
\end{align}
We now estimate upper bounds for \(|S^{(4)}|\). Let \(m\) be an integer with
\begin{align}
1\leq m\leq -1+\lceil 2D\log_{\beta} N\rceil.
\label{eqn:abc}
\end{align}
Using (\ref{pr26}) and (\ref{pr25}), we get
\begin{align*}
R_{i+1}-M&\geq R_{i+1}-R_i-\frac{C_8|I(i)|}{1+C_8}\\
&=\frac{|I(i)|}{1+C_8}>m
\end{align*}
for sufficiently large \(N\in \Xi\) and
\[R_{i+1}>m+M-1>R_i.\]
Thus, applying (\ref{pr24}) with \(R=m+M-1\) for any \(m\) with (\ref{eqn:abc}), we obtain
by (\ref{pr2}) that
\begin{align*}
|S^{(4)}|&\leq
\sum_{\bk\in\Lambda\backslash\{\bg\}}|A_{\bk}|\sum_{m=\lceil 2D\log_{\beta} N\rceil}^{\infty}\beta^{-m}\rho(\bk;m+M-1)\\
&\leq
\sum_{\bk\in\Lambda\backslash\{\bg\}}|A_{\bk}|\sum_{m=\lceil 2D\log_{\beta} N\rceil}^{\infty}
\beta^{-m}C_7^D(m+N)^D\\
&\ll \sum_{m=\lceil 2D\log_{\beta} N\rceil}^{\infty}\beta^{-m}(m+N)^D.
\end{align*}
Therefore, (\ref{pr16}) implies that
\[|S^{(4)}|\ll N^{-2D}\left(\lceil 2D\log_{\beta} N\rceil+N\right)^D\sum_{m=0}^{\infty}\beta^{-m}
\left(\frac{1+\beta}{2}\right)^m=o(1)\]
as \(N\) tends to infinity. In particular, if \(N\in \Xi\) is sufficiently large, then
\begin{align}
|S^{(4)}|<\frac1{2\beta}.
\label{pr30}
\end{align}
Combining (\ref{pr28}), (\ref{pr29}), and (\ref{pr30}), we deduce that
if \(N\in \Xi\) is sufficiently large, then \(Y_{M-1}>0\). \par
Next, we assume that \(Y_R>0\) for some \(R\) with \(R_i<R<M\). Using (\ref{pr24}), we see
\begin{align}
Y_{R-1}&=
\sum_{\bk\in\Lambda} A_{\bk}\frac1{\beta}\rho(\bk;R)+
\sum_{\bk\in\Lambda} A_{\bk}\sum_{m=2}^{\infty}\beta^{-m}\rho(\bk;m+R-1)\nonumber\\
&=\frac{A_{\bg}}{\beta}\rho(\bg;R)
+\frac1{\beta}\sum_{\bk\in\Lambda} A_{\bk}\sum_{m=1}^{\infty}\beta^{-m}\rho(\bk;m+R)\nonumber\\
&=
\frac{A_{\bg}}{\beta}\rho(\bg;R)+\frac1{\beta}Y_R.
\label{pr31}
\end{align}
By the inductive hypothesis
\[Y_{R-1}>\frac{A_{\bg}}{\beta}\rho(\bg;R)\geq 0.\]
Therefore, we proved Lemma \ref{prlem8}.
\end{proof}
Recall that \(\Omega\) is defined in (\ref{Omega}).
\begin{lem}
Let \(N\in\Xi\) be sufficiently large and let \(1\leq i\leq \mu\) with \(I(i)\in\mathcal{I}_2\). Let \(R\) be an integer with
\[R_i+4C_{13}\log_{\beta} N\leq R< M. \]
Then we have
\begin{align}
R-\theta(R;\Omega)
\leq 2C_{13}\log_{\beta} N.
\label{pr32}
\end{align}
\label{prlem9}
\end{lem}
\begin{proof}
Put
\(R':=\theta(R;\Omega).\)
In the same way as the proof of (\ref{pr31}), we see for any integer \(n\) with \(R_i<n<R_{i+1}\) that
\begin{align}
Y_{n-1}=\frac{A_{\bg}}{\beta}\rho(\bg;n)+\frac1{\beta}Y_n.
\label{pr33}
\end{align}
First, we consider the case of \(Y_R\geq 1\). Then (\ref{pr33}) implies that
\[Y_{R-1}\geq \frac{1}{\beta}\]
and that \(R-R'=1\), which implies (\ref{pr32}). \par
In what follows, we may assume that \(0<Y_R<1\) by Lemma \ref{prlem8}. Let \(S:=\lceil C_{13} \log_{\beta} N\rceil\).
Suppose for any integer \(m\) with \(0\leq m\leq S\) that
\[\rho(\bg;R-m)=0.\]
Noting \(M>R>R-1>\cdots>R-S>R_i\), we get by (\ref{pr33}) that
\[1>Y_R=\beta Y_{R-1}=\cdots=\beta^S Y_{R-S}=\beta^{1+S}Y_{R-S-1}>0,\]
where we use Lemma \ref{prlem8} for the last inequality by \(R_i<R-S-1<M\). So we get
\[\beta^{S+1}<Y_{R-S-1}^{-1}=|Y_{R-S-1}|^{-1}.\]
Since
\[R-S-1\geq 2 C_{13}\log_{\beta} N>C_{14}\]
for any sufficiently large \(N\), we apply Lemma \ref{prlem4} as follows:
\[\beta^{S+1}<|Y_{R-S-1}|^{-1}\leq (R-S-1)^{C_{13}}<N^{C_{13}}. \]
Thus, we obtain
\[\lceil C_{13} \log_{\beta} N\rceil+1=S+1<C_{13} \log_{\beta} N,\]
a contradiction. \par
Hence, there exists an integer \(m'\) with \(0\leq m'\leq S\) satisfying
\(\rho(\bg;R-m')\geq 1\).
Applying (\ref{pr33}) with \(n=R-m'\), we get by \(Y_{R-m'}>0\) that
\[Y_{R-m'-1}\geq \frac{A_{\bg}}{\beta}\rho(\bg;R-m')\geq \frac{1}{\beta},\]
where for the last inequality we use (\ref{pr9}). Hence, we deduce that
\[R-R'\leq m'+1\leq 2C_{13}\log_{\beta} N. \]
\end{proof}
\begin{lem}
\[\limsup_{N\to\infty}\frac{y_N\log N}{N}>0.\]
\label{prlem10}
\end{lem}
\begin{proof}
Let \(N\in \Xi\) be sufficiently large and let \(1\leq i\leq \mu\) with \(I(i)\in\mathcal{I}_2\).
Note that
\begin{align}
\lim_{N\to\infty}\frac{|I(i)|}{\log_{\beta}N}=\infty
\label{eqn:bcd}
\end{align}
by (\ref{pr25}). Combining (\ref{pr26}), (\ref{eqn:bcd}), and Lemma \ref{prlem9}, we see that there exists a constant \(C_{17}\) such that
\[y_N(i)\geq C_{17}\frac{|I(i)|}{\log N}. \]
Therefore, using (\ref{pr21}) and (\ref{pr22}), we obtain
\[y_N\geq \sum_{1\leq i\leq \mu\atop I(i)\in \mathcal{I}_2} y_N(i)\geq
\sum_{I\in \mathcal{I}_2}C_{17}\frac{|I|}{\log N}\gg \frac{N}{\log N}.\]
\end{proof}
Finally, we deduce a contradiction from Lemmas \ref{prlem5} and \ref{prlem10}, which proves
Theorem \ref{thm:cri1}.
\section*{Acknowledgements}
This work was supported by JSPS KAKENHI Grant Number 15K17505.
\section{Introduction}
Computer vision has traditionally been an area of significant interest in many practical tasks. In medicine and other related fields, imaging techniques have shown great promise when used for segmentation, tracking, and detection in various applications. Furthermore, deep learning-based (DL) methods are increasingly being developed for medical imaging and endoscopy, as data becomes more available and procedures become more complex \cite{luo_advanced_2018}.
These advances, however, have also highlighted several shortcomings in many DL methods, which arise from the nature of the medical settings: not only are the tasks themselves quite difficult, but data availability is a significant hurdle for most of the common methods used successfully in other domains \cite{ros_comparative_2021}. Furthermore, when there is available data, it tends to be limited to only a small fraction of the cases one would observe in real-world scenarios: everything from instruments and acquisition devices, to procedures used and the lighting conditions are variable and can change from a hospital-to-hospital basis \cite{jc-tec-2022}.
\begin{figure}
\center
\includegraphics[width=0.41\textwidth]{fig_EndoUDA_BEPOL2.png}
\caption{Sample images from EndoUDA's White-Light Imaging (WLI) and Narrow-Band Imaging (NBI) modalities for both Barrett's Esophagus (BE, left) and polyps (right).}
\label{fig:endouda}
\end{figure}
One task where these data constraints become apparent is computer vision for endoscopy. Since the models commonly trained for this use entail networks meant for very high volumes of data, the most commonly used methodologies in other domains are not always easily deployed in such settings. In particular, image segmentation often faces domain shift issues due to these data availability limitations, combined with the natural difficulty of the task itself. Existing methods used to overcome these difficulties are often network- or dataset-specific, requiring additional information to adapt to new data, which limits their use.
An example of this limitation is the usage of different light modalities during an endoscopic examination. White-Light Imaging (WLI) may be used for a general examination, with more specific areas highlighted with Narrow-Band Imaging (NBI), allowing a clinician to inspect different anatomical aspects of the same lesion. If a model for endoscopic computer-aided diagnostic tools is to be usable in this setting, then it is important that such models can work seamlessly in both of these imaging modalities, and that they avoid the need for any modality-specific training. As shown in Fig. \ref{fig:endouda} (which shows two sample images from the EndoUDA dataset using two different modalities), the changes introduced by switching the lighting modality strongly alter the visual properties of the areas to be segmented \cite{celik_endouda_2021}. These changes affect a model's ability to generalise. This problem is further exacerbated by the relatively limited number of NBI images compared to those available for WLI.
If we are to exploit the larger amount of available WLI data, we must devise methods that reduce the impact of the domain shift when the Source Domain (SD) is WLI and the Target Domain (TD) is NBI. Developing such a modality-agnostic approach for the segmentation of lesions in the esophagus and of pre-cancerous polyps is the focus of this work.
\begin{figure*}[htbp]
\centerline{\includegraphics[width=0.92\textwidth]{isbi-2023-fig.png}}
\caption{A summary of our proposed model. A simple UNet is trained using source domain data, with each frame also being used to generate a segmentation output using Simple Linear Iterative Clustering (SLIC), a k-means clustering algorithm. Then, the generated mask from UNet is combined with the superpixel grid, where two different loss objectives are combined: One is the superpixel guided loss (S) where we evaluate how closely the mask follows the superpixel boundaries (on the right, a blue check-mark shows a boundary being closely followed, whereas the red circle shows a segmentation that does not follow a boundary), and the other is Binary Cross Entropy (B), which evaluates the overall accuracy in the prediction (on the right, a green checkmark shows good accuracy, whereas the red cross shows poor accuracy).
}
\label{fig:summary}
\end{figure*}
Several techniques have been proposed to reduce the effects of domain shift on a model \cite{csurka_unsupervised_2021, tommasi_learning_2016}, which can be divided into either domain adaptation methods (where we have a set of target domains with different data distributions) or domain generalisation methods (where we wish to make our model better with any unseen data). There are many possible directions that one could follow to apply these methods in deep learning, from using conventional algorithms and regularization during training, to modifying the model being used to learn more relevant features or creating additional methods to learn how to model and lessen the impact of domain shift \cite{zhou_domain_2022}.
Domain generalisation methods have been shown to be effective in semantic image segmentation tasks in areas such as autonomous driving. Thus, in this paper we draw inspiration from this previous work \cite{zhang_transferring_2020} to propose a new loss function that utilizes traditional superpixel segmentation based on the Simple Linear Iterative Clustering (SLIC) method, a k-means-based technique \cite{achanta_slic_2012}, to enforce cluster-based consistency during training, generating more geometrically regular predictions that transfer better to other datasets without requiring any changes to the base model. Our contributions herein are two-fold: i) SUPRA (SUPeRpixel Augmented), a framework that generates the superpixels and incorporates our loss into a model, and ii) SLICLoss, a loss term that rewards agreement with the superpixel boundaries generated by SLIC and is combined with the BCE loss function.
In order to validate our proposal (schematically depicted in Fig. \ref{fig:summary}), we make use of a vanilla UNet model \cite{ronneberger_u-net_2015} and incorporate an additional loss term in the form of the proposed SLICLoss. In our experiments, we compared our method with a baseline model using only the BCE loss on different modalities for BE in EndoUDA. The rationale for this case study is to assess if the proposed loss is capable of improving the network's generalisation capabilities for segmenting images containing a significant domain shift.
The rest of this paper is organized as follows: Section II further discusses the importance of dealing with domain shift in endoscopic tasks, analyzing some recent works in literature. Section III presents the proposed approach and introduces the SLICLoss. In Section IV we discuss the experimental setup, providing training and testing details. Section V presents the results of our experiments in the EndoUDA dataset. Finally, Section VI concludes the article and discusses future work.
\section{State of the art}
The issue of domain shift is one that has been addressed by many other works in the literature. For instance, federated learning has been proposed to leverage many small datasets shared between hospitals to generate a single, large model from them \cite{liu_feddg_2021}. Another approach has been to modify network architectures to learn differently from the same data, and then use those various approaches cooperatively for performing the final prediction \cite{chen_cooperative_2021}. Other studies have made use of data augmentation or have incorporated meta-learning strategies to exploit different characteristics from the training data in a self-supervised manner \cite{zhou_domain_2022}.
Another subset of domain generalisation methods forgo using additional data and instead seek to improve the features learned by a network to make them more globally relevant to the task. Clustering and patch-based constraints have been used to improve generalisation in road segmentation \cite{zhang_transferring_2020}, and have been shown to reduce the accuracy loss caused by domain shift. The idea behind using constraints is to enforce consistency even if doing so may hurt the training accuracy. This is akin to seeing the domain shift problem as an overfit: the constraints have a regularizing effect that encourages the network to learn more globally applicable features.
Following the promising results of constraints for domain generalisation, in this work we draw inspiration from superpixel-based methods \cite{zhang_transferring_2020} applied to reduce the domain shift generated by a switch from synthetic to real data. Through the use of SLIC, a superpixel grid is generated without requiring any prior training. In the context of our work, the domain shift we observe arises from the different light modalities used in endoscopic interventions. We demonstrate that superpixel patch consistency, previously shown to help when switching between synthetic and real data, also increases performance when different lighting modalities are used.
\section{Proposed Approach}
In order to alleviate the problem of domain shift in segmentation tasks for medical imaging, we propose a framework that encourages a model to favor results with specific visual properties that are more globally relevant to the lesions in our data. One characteristic present in the images of both polyps and BE is that the lesions tend to be mostly smooth shapes with few patches. With our method, we aim to encourage the model to produce similar results in both image modalities, using the same model weights without the need for retraining.
\begin{figure*}[htbp]
\centerline{\includegraphics[width=0.77\textwidth]{qualitative_C.png}}
\caption{GT: Ground Truth. A qualitative comparison between SUPRA-UNet (SUPRA) and vanilla UNet (UNet). For Barrett's Esophagus (left side), SUPRA performs better in narrow-band imaging (Target Dataset). For Polyps (right side), the performance is very similar between both models.
}
\label{fig:qual}
\end{figure*}
To achieve this goal, we made use of a superpixel approach based on the SLIC method. We propose and test a loss function that penalizes predictions that disagree with the color boundaries present in the image.
The SLIC superpixel generation algorithm relies on two main parameters \cite{achanta_slic_2012}. The first parameter is $k$, the number of superpixels to generate; it enforces the generation of similarly sized regions with spacing $S=\sqrt{N/k}$, where $N$ is the number of pixels in the image, and thus controls the granularity of the regions generated by SLIC. The second parameter of the algorithm is $m$, a constant that enters the distance measure used to determine which region a pixel belongs to,
$$D = \sqrt{d_c^2+\left( \frac{d_s}{S}\right)^2 m^2}$$
where $d_c$ is the Euclidean distance in the color space, and $d_s$ is the Euclidean distance between pixel positions. A higher value of $m$ will encourage compactness, creating regions with a lower area-to-perimeter ratio and more regular shapes. When $m$ is lower, it produces more irregular superpixels that more strictly adhere to color boundaries.
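As an illustration, the superpixel grid described above can be generated with an off-the-shelf SLIC implementation. The sketch below uses \texttt{scikit-image}, whose \texttt{n\_segments} and \texttt{compactness} arguments play the roles of $k$ and $m$, respectively; the file name is a placeholder, and the parameter values correspond to the starting point of our grid search.
\begin{verbatim}
# Minimal sketch of the superpixel generation step using scikit-image's SLIC.
# `n_segments` corresponds to k and `compactness` to m in the text; the image
# path is a placeholder.
from skimage import io, segmentation

image = io.imread("frame.png")                     # one endoscopic frame (H, W, 3)
superpixels = segmentation.slic(image,
                                n_segments=100,    # k: number of superpixels
                                compactness=40)    # m: compactness weight
# `superpixels` is an (H, W) integer map assigning each pixel to a region,
# which is later compared against the predicted segmentation mask.
\end{verbatim}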
The loss function is constructed from two main elements: the first is BCE, and the second is a correlation measure that determines whether the predicted mask produces results that are consistent with the superpixel segmentation generated by SLIC. To combine them, the SLIC correlation term is multiplied by a weighting factor ($\lambda$), and then both terms are added and normalized. The result is then used as the loss for the network.
\begin{equation}
L(x,y,y')=\frac{BCE(y,y')+\lambda C_m(x,y')}{1+\lambda}
\label{eq:corr}
\end{equation}
The loss function in Eq.~(\ref{eq:corr}) has an adjustable parameter ($\lambda$) that encourages more color-consistent results. $C_m$ is the correlation measure between the superpixels generated from the original image and the mask proposed by the model. It is expressed as the percentage of superpixels in which a fraction of pixels (controlled by a threshold) is classified as belonging to a single class. $\lambda$, $k$, and $m$ are parameters tested during our grid search, while the threshold for $C_m$ was set according to the best value observed for BE in EndoUDA.
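A minimal NumPy sketch of how the combined objective in Eq.~(\ref{eq:corr}) can be evaluated for a single frame is shown below. It assumes a soft binary prediction, a ground-truth mask, and a precomputed SLIC label map; the helper names are illustrative, and turning the consistency measure into a penalty ($1-C_m$), so that higher consistency lowers the loss, is our reading of the description above rather than a verbatim transcription of our training code.
\begin{verbatim}
# Sketch of SLICLoss for one frame (NumPy). Assumes `pred` and `gt` are float
# arrays in [0, 1] of shape (H, W) and `superpixels` is the SLIC label map
# from the previous snippet. Names and details are illustrative.
import numpy as np

def bce(gt, pred, eps=1e-7):
    pred = np.clip(pred, eps, 1.0 - eps)
    return -np.mean(gt * np.log(pred) + (1.0 - gt) * np.log(1.0 - pred))

def consistency(superpixels, pred, threshold=0.8):
    """Fraction of superpixels in which at least `threshold` of the pixels
    receive the same predicted class (C_m in the text)."""
    binary = (pred >= 0.5).astype(np.float32)
    labels = np.unique(superpixels)
    consistent = 0
    for lab in labels:
        frac_fg = binary[superpixels == lab].mean()
        if frac_fg >= threshold or frac_fg <= 1.0 - threshold:
            consistent += 1
    return consistent / len(labels)

def slic_loss(gt, pred, superpixels, lam=0.75):
    # Higher consistency should lower the loss, so we penalise 1 - C_m here.
    return (bce(gt, pred) + lam * (1.0 - consistency(superpixels, pred))) / (1.0 + lam)
\end{verbatim}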
\section{Experimental Design}
\subsection{Data Partitioning}
The dataset used to perform these experiments is EndoUDA \cite{celik_endouda_2021, ali_deep_2021, borgli_hyperkvasir_2020}, which contains endoscopic images from two different medical tasks (binary segmentation for BE and polyps). EndoUDA is composed of 799 images for BE, with 284 in NBI modality and 515 in WLI modality. It also contains 1042 images for polyps, with 42 in NBI modality and 1000 in WLI modality (see Fig. \ref{fig:endouda}).
We used a 75-25 split on the WLI frames for training and validation, and used every frame available for NBI during testing (284 for BE and 42 for polyps).
\subsection{Training}
A control test was first performed with the standard BCE loss, using both an early-stopping scheme (terminating at 15 epochs) and a fixed 30-epoch run. The models reported in Table \ref{tab1} are those presenting the best validation score, corresponding to those with a training duration of 15 epochs for UNet and 30 epochs for SUPRA-UNet. All models were trained using Tensorflow 2.8.1 on an RTX 2060 GPU using CUDA 11.2, with a learning rate of 1e-04, using an Adam optimizer. The batch size was set to 1 for all runs. Geometric data augmentations were used in the training set, using the default Tensorflow generators. These include horizontal mirroring, rotation, width and height shift, shearing, and zooming. The maximum strength for these transformations was set to 5\% where applicable, except for rotation which was set to 20\%.
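For reference, a configuration consistent with the augmentation settings described above is sketched below; interpreting the 5\% values as fractional ranges and the rotation setting as 20 degrees is an assumption on our part, and the actual training pipeline may differ in detail.
\begin{verbatim}
# Sketch of a geometric augmentation pipeline consistent with the description
# above. The exact ranges are interpretations (5% fractions, 20-degree
# rotation) and may differ from the real configuration.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    horizontal_flip=True,      # horizontal mirroring
    rotation_range=20,         # rotation
    width_shift_range=0.05,    # width shift
    height_shift_range=0.05,   # height shift
    shear_range=0.05,          # shearing
    zoom_range=0.05,           # zooming
)
# For segmentation, image and mask generators are typically paired with a
# common random seed so that both receive the same geometric transform.
\end{verbatim}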
\subsection{Hyperparameter tuning}
In order to find the best values for the hyperparameters $\lambda$, $k$ and $m$, we performed a grid search. The initial values were chosen based on the qualitative performance of the hyperparameters on frames from the BE dataset: 100 superpixels, a value of 40 for $m$, and a $\lambda$ of 25\%.
These values created a superpixel grid that followed the boundaries of the ground truth acceptably. For $C_m$, the threshold for the expected single-class occupation was set to 80\%, as this was the observed minimum percentage of occupation in the ground truth masks. Then, the hyperparameters were modified as summarized in Table \ref{tab2}, choosing the best value based on its performance in the target domain. The final hyperparameters, indicated in Table \ref{tab2} with an asterisk, were then trained for 15 and 30 epochs to compare against the performance of the control model, for both BE and polyps.
The reported scores in Table \ref{tab1} are those obtained with 30 epochs. The proposed loss yielded the best results with the configuration: $k$
= 500 superpixels, $m$ = 50 and $\lambda$ = 75\%.
\section{Results and discussion}
\begin{table}[t!]
\caption{Segmentation analysis for the different models}
\begin{center}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{l|l|l|l|l}
\textbf{Network} & \textbf{SD IoU} & \textbf{TD IoU} & \textbf{SD Dice} & \textbf{TD Dice } \\ \hline
UNet (BE) & 0.7935 & 0.4673 & 0.8698 & 0.6137 \\
SUPRA-UNet (BE) & 0.7479 & \textbf{0.6157} & 0.8253 & \textbf{0.7458} \\ \hline
UNet (Polyps) & 0.6921 & 0.5935 & 0.7831 & 0.7113 \\
SUPRA-UNet (Polyps) & 0.6955 & \textbf{0.6012} & 0.7835 & \textbf{0.7181}
\end{tabular}}
\label{tab1}
\end{center}
SD: Validation split of the Source Domain. TD: Target Domain (Test). SUPRA-UNet: model trained with the proposed loss. UNet: vanilla UNet trained only with the BCE loss. The best results are shown in bold.
\end{table}
\begin{table}[t!]
\caption{Grid Search}
\begin{center}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{l|l|l|l|l|l}
& \textbf{Hyperparameter} &\textbf{ SD IoU} &\textbf{ TD IoU }& \textbf{SD Dice} &\textbf{ TD Dice} \\ \hline
\multirow{3}{*}{$\boldsymbol{\lambda}$} & Weight 50\% & 0.5438 & \textbf{0.5944} & 0.6548 & \textbf{0.7221} \\
& Weight 75\%* & \textbf{0.7643} & 0.5865 & \textbf{0.8450} & 0.7217 \\
& Weight 100\% & 0.6962 & 0.4522 & 0.7925 & 0.6089 \\ \hline
\multirow{4}{*}{\textbf{k}} & 50 Superpixels & \textbf{0.7466} & 0.4937 & \textbf{0.8222} & 0.6362 \\
& 150 Superpixels & 0.7059 & 0.3738 & 0.7985 & 0.5228 \\
& 500 Superpixels* & 0.6881 & \textbf{0.6429} & 0.7676 & \textbf{0.7679} \\
& 1000 Superpixels & 0.7379 & 0.4135 & 0.7942 & 0.5619 \\ \hline
\multirow{3}{*}{\textbf{m}} & 20 Consistency & 0.7083 & 0.3650 & 0.8012 & 0.5153 \\
& 30 Consistency &\textbf{0.7236} & 0.3872 & \textbf{0.8140} & 0.4763 \\
& 50 Consistency* & 0.4975 & \textbf{0.5919} & 0.6151 & \textbf{0.7212} \\ \hline
\end{tabular}}
\end{center}
SD: Validation split of the Source Domain. TD: Target Domain (Test).
\textbf{$\boldsymbol{\lambda}$:} Weighting factor for the superpixel boundary consistency
\textbf{k:} Number of superpixels generated.
\textbf{m:} Compactness.
\textbf{*} Hyperparameter value used for the final model (SUPRA). The best results are shown with bold formatting.
\label{tab2}
\end{table}
The results of the experiments are summarized in Table \ref{tab1}. We can observe that there is a significant drop in segmentation performance with the BCE loss from the validation set (Source Domain, SD) to the test set (Target Domain, TD), confirming the existence of severe domain shift. All tests with the SLICLoss showed some degree of improvement in the TD and some degradation of performance in the SD set compared to the baseline.
Compared to the baseline, SUPRA-UNet for BE showed changes of -5.7\% in SD IoU and -5.1\% in SD Dice, but +24.1\% in TD IoU and +17.7\% in TD Dice; that is, a drop of only about 5\% in source-domain IoU in exchange for roughly a 24\% improvement in target-domain IoU.
Some masks generated by SUPRA-UNet and UNet can be observed in Fig. \ref{fig:qual}. Generally, we observe that incorporating SUPRA yields masks with larger predicted regions and fewer holes. For BE images, the performance observed on NBI improved significantly. For polyps the same trend can be observed, but the improvement and degradation are much less pronounced.
We can observe that SUPRA is more effective for BE than for polyps. This may be due to the superpixel generation method: SLIC relies mostly on color boundaries and is not effective at separating the protrusions observed in polyps from the rest of the background. Even with these limitations, the method still yields results comparable to vanilla UNet and does not harm performance in polyp segmentation. However, this does highlight the need to incorporate structural information into the objective function in the form of an additional loss term, or to explore a superpixel generation method that takes topological information into account.
\section{Conclusions}
Following a model-agnostic approach, SUPRA-UNet yielded a significant improvement on our BE target dataset, with a 24\% relative increase in IoU compared to using only BCE as the loss, in a setting with white-light imaging (WLI) as the source domain for training and narrow-band imaging (NBI) as the target domain. A slight improvement was also observed on the polyp dataset. In future work, we will explore using topological information within SUPRA to achieve improved performance on protruding lesions such as polyps as well, by incorporating structural details either in the loss function or in the superpixel generation process.
\section*{Acknowledgments}
The authors wish to thank CONACYT for the master's scholarship for Rafael Martinez Garcia-Peña and the PhD scholarship for Mansoor Ali Teevno at Tec de Monterrey.
\section*{Compliance with ethical approval}
The images were captured in medical procedures following the ethical principles outlined in the Helsinki Declaration of 1975, as revised in 2000, with the consent of the patients.
\bibliographystyle{IEEEbib}
\section{Introduction}
Let $G = (V,E)$ denote a connected, finite, simple graph on $n$ vertices. Let $N(u) = \{v \in V : uv \in E\}$ be the neighborhood of a vertex $u$. We denote the degree of a vertex $v$ by $deg(v)$, given by $deg(v) = |N(v)|$. Let $f: V \rightarrow \{1,2, \dots, n\}$ be a bijective function. If the sum $\sum_{u \in N(v)}f(u)$ is a constant $k$ for all $v \in V$, then $f$ is called a \textit{distance magic labeling}, $k$ is called a \textit{magic constant}, and $G$ is called a \textit{distance magic graph}. Arumugam et al. \cite{uniquek} proved that the magic constant of a graph $G$, if it exists, is unique, i.e., independent of the distance magic labeling. Researchers have studied the existence of distance magic labelings for many families of graphs, but a general characterisation of distance magic graphs is not known yet. For a detailed survey on graph labeling, one can refer to \cite{dm_survey_gallian}.\par Dalibor \cite{group_cycles_dalibor} introduced the concept of group distance magic labeling. A \textit{group distance magic labeling} or a $\Gamma$\textit{-distance magic labeling} of a graph $G = (V,E)$ is an injection from $V$ to an abelian group $\Gamma$ of order $n$ such that the weight of every vertex $x \in V$ is equal to the same element $\mu \in \Gamma$. Group distance magic labeling has some interesting properties. A group distance magic labeling need not admit the uniqueness of the magic constant seen in distance magic labeling. Every distance magic graph is $\mathbb{Z}_n$-distance magic, but the converse is not necessarily true. Several articles are available on the existence of group distance magic labelings for various families of graphs.
In this paper, we introduce the concept of $p$-distance magic labeling as a generalization of distance magic labeling of graphs. These labelings also constitute an infinite class of necessary conditions which, when satisfied for all $p$, become sufficient. It is well known that there cannot be a subgraph avoidance criterion for graphs which admit a distance magic labeling (due to the fact that any connected graph can be embedded in a distance magic graph \cite{sigma_jinnah, vilfredt}); as a result, the problem of distance magic labeling does not allow constructing a magic labeling from a labeling on a subgraph. One can say that distance magic labeling is a difficult problem with little room for ``construction'' in the underlying graph, in the sense that it does not allow ``cutting and pasting'' type techniques, but $p$-distance magic labeling does allow such cutting and pasting techniques, as evident from the application of the Chinese remainder theorem (see Theorem \ref{th:crt}). The simplification afforded by the existence of a $p$-distance magic labeling, together with the patching of two such labelings through the Chinese remainder theorem, gives us a constructive approach. Such possibilities give us hope that this ``construction'' idea through $p$-distance magic labeling will prove to be very important in tackling the problem of distance magic labeling. For sufficiently large values of $p$, distance magic labeling becomes a particular case of $p$-distance magic labeling, and with $p=n$, $p$-distance magic labeling becomes $\mathbb{Z}_n$-distance magic labeling. In some cases, we provide ways to construct different magic constants in a $\mathbb{Z}_n$-distance magic labeling. Further, in some cases, we prove the uniqueness of the magic constant in $\mathbb{Z}_n$-distance magic labeling. Recall that a graph $G$ is \textit{Eulerian} if and only if it has at most one nontrivial component and its vertices all have even degrees, and a \textit{matching} in a graph is a set of pairwise non-adjacent edges. Throughout the paper, the ring $\mathbb{Z}_n$ is considered with the usual addition and multiplication modulo $n$. For graph theoretic terminology and notations, we refer to West \cite{west}.
\section{Known Results}
\begin{theorem}\cite{uniquek} \label{uniquek}
If a graph $G$ of order $n$ is distance magic, then its magic constant is $k=\frac{n(n+1)}{2\gamma_{ft}}$, where $\gamma_{ft}$ denotes the fractional total domination number of $G$.
\end{theorem}
\section{Main Results}
A multiset is a set in which repetition of elements is allowed. We formally define the multiset of interest.
\begin{definition}
Given an integer $p \ge 1$, by $\{1, 2, \dots, n \}_p$ we mean the \textit{multiset} obtained by reducing the numbers $1, 2, \dots, n$ modulo $p$ and replacing any $0$'s by $p$. Note that for $p > n$, $\{1,2, \dots, n\}_p = \{1,2, \dots, n\}$.
\end{definition}
For example, with $p = 4$ and $n = 9$, we have the multiset $\{1, 2, 3, 4, 1, 2, 3, 4, 1\}_4$, obtained by reducing the elements $1, 2, 3, 4, 5, 6, 7, 8, 9$ modulo $4$ and replacing $0$ by $4$.
\begin{definition}
Let $G$ be a graph on $n$ vertices and let $p \ge 1$ be an integer. Consider the multiset $L = \{1, 2, \dots, n\}_p$. We call the graph $G$ $p$-\textit{distance magic} if there is a bijective map $f$ from the vertex set $V$ to the multiset $L$, called a $p$\textit{-distance magic labeling}, such that the weight of each vertex $x \in V$, denoted by $w(x)$, is equal to the same element $\mu (\bmod~p)$, where $w(x) = \sum_{y \in N(x)}f(y)$.
\end{definition}
For example, consider the graph $G$ on $11$ vertices in Figure \ref{fig:2-magic}. With $p=2$, we have the multiset \(\{1,2,1,2,1,2,1,2,1,2,1\}_{2}\). Then, with the labeling shown in Figure \ref{fig:2-magic}, $G$ is $2$-distance magic with magic constant $0$.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{2magicon11vertices.eps}
\caption{$2$-magic labeling}
\label{fig:2-magic}
\end{figure}
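The defining condition is easy to verify mechanically. The following short routine is a sketch (in Python, using adjacency lists) that checks whether a given labeling of a graph is a $p$-distance magic labeling; the small $4$-cycle used as a test case is only an illustration.
\begin{verbatim}
# Sketch: check whether `labels` is a p-distance magic labeling of the graph
# given by the adjacency list `adj` (a dict vertex -> list of neighbours).
# Returns the magic constant (mod p) if the labeling is p-distance magic,
# and None otherwise.
from collections import Counter

def reduce_mod(x, p):
    r = x % p
    return p if r == 0 else r

def is_p_distance_magic(adj, labels, p):
    n = len(adj)
    # the labels must realise the multiset {1, 2, ..., n}_p
    target = Counter(reduce_mod(i, p) for i in range(1, n + 1))
    if Counter(labels[v] for v in adj) != target:
        return None
    weights = {sum(labels[u] for u in adj[v]) % p for v in adj}
    return weights.pop() if len(weights) == 1 else None

# Example: the 4-cycle v1 v2 v3 v4 with the 2-labeling 1, 2, 2, 1.
adj = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [1, 3]}
labels = {1: 1, 2: 2, 3: 2, 4: 1}
print(is_p_distance_magic(adj, labels, 2))   # magic constant 1
\end{verbatim}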
\begin{theorem} \label{main}
A graph $G$ is distance magic if and only if it is $p$-distance magic for all $p \ge 1$.
\end{theorem}
\begin{proof}
Suppose that $G$ is a distance magic graph of order $n$ with magic constant $k$ and distance magic labeling $f$. Let $p \ge 1$. Define $f^{*} : V \to \{1,2,\dots, n\}_p$ by $f^{*}(v) = f(v)(\bmod~{p})$, replacing the residue $0$ by $p$ if necessary. Then the weight $w_{f^{*}}(x)$ of every vertex $x$ in $V$ is congruent to the same element $k^{*} = k (\bmod~{p})$. This proves the necessary part. Conversely, suppose that $G$ is $p$-distance magic for all $p \ge 1$. In particular, $G$ admits a $p$-distance magic labeling for $p = \frac{n(n+1)}{2}$; note that for this $p$ the multiset $\{1,2,\dots,n\}_p$ is simply $\{1,2,\dots,n\}$. Then $w(x) \equiv k' (\bmod~{\frac{n(n+1)}{2}})$ for some fixed $k'$ with $0 \le k' < \frac{n(n+1)}{2}$. Since every weight satisfies $w(x) \le 1+2+\cdots+n - f(x) < \frac{n(n+1)}{2}$, the congruence forces $w(x) = k'$ for every $x$. Hence the $\frac{n(n+1)}{2}$-distance magic labeling is the required distance magic labeling.
\end{proof}
It follows from Theorem \ref{main} that $p$-distance magic labeling coincides with distance magic labeling whenever $p$ is sufficiently large. This shows that $p$-distance magic labeling is a generalisation of distance magic labeling. Also, some structural characterisations are possible with $p=2$. Since $p = 2$ is the least nontrivial value, we focus on $2$-distance magic labelings. \par Let $G$ be a graph whose vertices are labeled under a labeling $f$ using the numbers in the multiset \(L = \{1,2, \dots, n\}_p\) for some $p \ge 1$. Let $V_i = \{v \in V : f(v) \equiv i(\bmod~{p})\}$, for all $i = 1,2, \dots, p$. By $G_i$, we mean the subgraph of $G$ induced by $V_i$, i.e., $G_i = \langle V_i\rangle$.
\begin{theorem}
Let $G$ admit a $2$-distance magic labeling. If
\begin{enumerate}
\item a magic constant is $0$ then the subgraph $G_1$ is an Eulerian graph.
\item a magic constant is $1$ then the subgraph $G_1$ contains a matching.
\end{enumerate}
\end{theorem}
\begin{proof}
Suppose a graph $G$ is $2$-distance magic. We prove the theorem in the following two cases.\\
\textit{Case} $i$. Let the magic constant be $0$. If $G$ itself is Eulerian, there is nothing to prove. Suppose $G$ is not Eulerian. Since every vertex of $V_2$ contributes $2\equiv 0 (\bmod~2)$ to a weight, the weight of a vertex is congruent modulo $2$ to the number of its neighbors in $V_1$. Hence, for the magic constant to be $0$, each vertex in $G_1$ must have an even number of neighbors in $V_1$, while the number of its neighbors in $V_2$ is unrestricted. Therefore, each vertex in $G_1$ is of even degree. Hence $G_1$ is an Eulerian graph.\\
\textit{Case} $ii$. Let the magic constant be $1$. Let $v \in V_1$. For $w(v)=1$, $v$ must be adjacent to an odd number of vertices in $V_1$; in particular, $v$ has at least one neighbor in $V_1$. Now we construct a matching, say $M$. Let $v_1 \in V_1$ be such that $vv_1 \in E(G_1)$; we add the edge $vv_1$ to $M$ and discard its endpoints. We repeat the process with the remaining vertices, which must terminate after a finite number of steps as $G_1$ is a finite graph. Hence, we obtain a maximal matching $M$ in $G_1$. This proves the theorem.
\end{proof}
Figure \ref{fig:2magicgraphs} shows $2$-distance magic labelings of two graphs. The graph in Figure \ref{fig:2magic1} has magic constant $0$, and the subgraph $G_1$ induced by $V_1$, depicted with dotted edges and hollow vertices, is an Eulerian subgraph. The graph in Figure \ref{fig:2magic2} has magic constant $1$, and $G_1$ contains a matching depicted with dotted edges. Note that in Figure \ref{fig:2magic2} the matching in $G_1$ is not unique, but it is a perfect matching. However, it is not the case that we always obtain a perfect matching. Also, in Figure \ref{fig:2magic2}, none of the vertices in $G_1$ is of even degree. Hence obtaining an Eulerian circuit is not possible.
\begin{figure}[ht]
\begin{center}
\subfigure[$G_1$ is an Eulerian]{%
\label{fig:2magic1}
\includegraphics[width=0.4\textwidth]{2magic1.eps}
\label{2magic1}
}%
\subfigure[$G_1$ with matching]{%
\label{fig:2magic2}
\includegraphics[width=0.35\textwidth]{2magic2.eps}
}%
\end{center}
\caption{%
$2$-magic graphs with an Eulerian subgraph or a matching.
}%
\label{fig:2magicgraphs}
\end{figure}
\begin{definition}
We call a graph $G$ \textit{$r$-modulo-$p$ regular} if $deg(u) \equiv r (\bmod~{p})$ for every vertex $u \in V$, and we write that $G$ is \textit{$r(\bmod~p)$-regular}.
\end{definition}
\begin{theorem} \label{multiple}
Let $G$ be an $r(\bmod~p)$-regular, $p$-distance magic graph with magic constant $k$ and $p$-distance magic labeling $f$. Then, for each positive integer $i$, the map $f' = f + i$ is also a $p$-distance magic labeling with magic constant $k' = (k+ir) (\bmod~{p})$.
\end{theorem}
\begin{proof}
Let $G$ be an $r (\bmod~p)$-regular, $p$-distance magic graph with magic constant $k$ and labeling $f$, and let $i \in \mathbb{Z}^+$. Define a labeling $g$ by $g(u) = i + f(u)$. Then, for $u \in V$,
\begin{align*}
w(u) &= \sum_{uv \in E} g(v)\\
&= \sum_{uv \in E} (i + f(v))\\
&= i~deg(u) + \sum_{uv \in E} f(v)\\
&\equiv (ir + k)(\bmod~p).
\end{align*}
This completes the proof.
\end{proof}
Theorem \ref{multiple} shows that a $p$-magic constant need not be unique. Nevertheless, in some cases, one can obtain the uniqueness of the same as described in the following theorem.
\begin{theorem} \label{unique}
Let $G$ be a graph on $n$ vertices. If $G$ is $p$-distance magic with magic constant $k$ and if $\frac{n(n+1)}{2}$ is a unit in the ring $\mathbb{Z}_p$, then $k$ is unique.
\end{theorem}
\begin{proof}
Let $G$ be a graph on $n$ vertices $\{x_1, x_2, \dots, x_n\}$ having two $p$-distance magic labelings $f$ and $g$ with respective magic constants $k$ and $l$. Let $\bar{u}$ be the vector with all $n$ entries $1$. Let $A$ be the adjacency matrix of $G$ and put $X = (f(x_1), \dots, f(x_n))^\top$ and $Y = (g(x_1), \dots, g(x_n))^\top$. Since $f$ and $g$ are $p$-distance magic labelings with magic constants $k$ and $l$, it follows that $AX = k\bar{u}$ and $AY = l\bar{u}$ over $\mathbb{Z}_p$. Since $X^\top AY$ is a $1 \times 1$ matrix, we have $X^\top AY = (X^\top AY)^\top = Y^\top AX$. This gives
\begin{align*}
l X^\top \bar{u} &= k Y^\top \bar{u}\\
\implies l(1+2+ \dots + n) &= k(1+2+ \dots + n)
\end{align*}
in $\mathbb{Z}_p$. Since $(1+2+ \dots + n)$ is a unit in $\mathbb{Z}_p$, by cancellation we have $l = k$.
\end{proof}
\noindent The following corollary of the theorem is evident in the case $p=2$.
\begin{corollary} \label{uniquecor1}
If $G$ is a $2$-distance magic graph of order $n \ge 3$, where $n \equiv 1 \text{ or } 2 (\bmod~{4})$ with magic constant $k$, then $k$ is unique.
\end{corollary}
\begin{theorem} \label{th:crt}
If $G$ is a $p$-distance magic as well as $q$-distance magic graph on $n$ vertices for some relatively prime integers $p$ and $q$ such that $pq \le n$, then $G$ is $pq$-distance magic.
\end{theorem}
\begin{proof}
Let $G$ be a graph with vertex set $\{x_1, x_2, \dots, x_n\}$ which is both $p$-distance magic and $q$-distance magic for some relatively prime integers $p$ and $q$ such that $pq \le n$. Let $f_p, f_q$ be the corresponding labelings and $k_p, k_q$ the corresponding magic constants, respectively. Fix $i$ with $1 \le i \le n$. Since $p$ and $q$ are coprime, by the Chinese remainder theorem, the system of congruences
\begin{align*}
y \equiv f_p(x_i) (\bmod~p)\\ y \equiv f_q(x_i) (\bmod~q)
\end{align*}
in the unknown $y$ has a unique solution, say $y_i$, modulo $pq$, for each $i (1 \le i \le n)$. Now we define a new labeling $f_{pq}$ by $f_{pq}(x_i) = y_i$, for each $i(1 \le i \le n)$. The uniqueness of the labels $y_i$ guarantees that $f_{pq}$ is a bijective map. Now we calculate the weight of a vertex $x_i \in V$ under $f_{pq}$.
\begin{align*}
w(x_i) &= \sum_{x_j \in N(x_i)}f_{pq}(x_j)\\
&= \sum_{x_j \in N(x_i)}y_j\\
&\equiv
\begin{cases}
k_p (\bmod~p) \\
k_q (\bmod~q)
\end{cases}.
\end{align*}
For each $i(1 \le i \le n)$, again we solve the system of congruences
\begin{equation} \label{sys:1}
\begin{aligned}
w(x_i) \equiv k_p (\bmod~p)\\
w(x_i) \equiv k_q (\bmod~q)
\end{aligned}
\end{equation}
and by the Chinese remainder theorem, we conclude that the system of congruences (\ref{sys:1}) has a unique solution, say $k_{pq} (\bmod~pq)$. This proves that $f_{pq}$ is required labeling and $G$ is a $pq$-distance magic graph.
\end{proof}
Given a graph $G$ on $n$ vertices which is both $p$-distance magic and $q$-distance magic for some relatively prime integers $p$ and $q$ such that $pq \le n$, Theorem \ref{th:crt} allows us to find a $pq$-distance magic labeling of $G$. Note that when $pq$ is sufficiently large, we obtain a distance magic labeling. However, when $pq > n$, we need not always obtain a $pq$-distance magic graph.
For example, consider the cycle $C_4$ with vertex set $\{v_1, v_2, v_3, v_4\}$, listed in the clockwise sense. Define a $2$-distance magic labeling of $C_4$ by $f_2(v_1) = 1, f_2(v_2) = 2, f_2(v_3) = 2, f_2(v_4) = 1$ and a $3$-distance magic labeling by $f_3(v_1) = 2, f_3(v_2) = 1, f_3(v_3) = 3, f_3(v_4) = 1$, as shown in Figure \ref{fig:crt}. In Figure \ref{fig:crt}, the graph $G_1$ shows the $2$-distance magic labeling with magic constant $1$, and the graph $G_2$ shows the $3$-distance magic labeling with magic constant $2$ of the same graph $C_4$. For each $i (1 \le i \le 4)$, we solve the system
\begin{equation} \label{sys:2}
\begin{aligned}
x \equiv f_2(v_i) (\bmod~2)\\
x \equiv f_3(v_i) (\bmod~3)
\end{aligned}
\end{equation}
by the Chinese remainder theorem. For $i = 1,2,3,4$, the unique solutions of the system (\ref{sys:2}) modulo $6$ are $5,4,6,1$, respectively. We label the vertices of $C_4$ using these solutions: $f_6(v_1) = 5, f_6(v_2) = 4, f_6(v_3) = 6, f_6(v_4) = 1$, as shown in the graph $G_3$ of Figure \ref{fig:crt}. Observe that $f_6$ is not a map from $V(C_4)$ to $\{1,2,3,4\}_6$. Thus, in this case, we cannot obtain a $6$-distance magic labeling from the $2$-distance magic and $3$-distance magic labelings. This does not contradict Theorem \ref{th:crt}, because $(p=2) \times (q=3) = 6 > 4 = n$. Hence, the condition $pq \le n$ cannot be dropped from the statement of Theorem \ref{th:crt}. However, it may happen that in some cases each label $y (\bmod~pq)$ obtained by the procedure described in the above theorem satisfies $1 \le y \le n$. We call such a labeling a \textit{consistent} labeling. In such cases, the labeling is indeed a $pq$-distance magic labeling.
\begin{figure}[ht]
\begin{center}
\subfigure[$G_1$]{
\includegraphics[width=0.2\textwidth]{crtg1.eps}
}
\subfigure[$G_2$]{
\includegraphics[width=0.2\textwidth]{crtg2.eps}
}
\subfigure[$G_3$]{
\includegraphics[width=0.2\textwidth]{crtg3.eps}
}
\end{center}
\caption{The $2$-distance magic labeling ($G_1$), the $3$-distance magic labeling ($G_2$), and the labels obtained via the Chinese remainder theorem ($G_3$) for the cycle $C_4$.}
\label{fig:crt}
\end{figure}
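The patching argument in the proof of Theorem \ref{th:crt} is entirely constructive, and the $C_4$ example above can be reproduced in a few lines of code. The sketch below solves the relevant congruences by a brute-force search (sufficient for illustration) and reports whether the resulting labels stay inside $\{1,2,\dots,n\}$, i.e., whether the labeling is consistent in the sense defined above.
\begin{verbatim}
# Sketch of the Chinese-remainder patching from Theorem th:crt, applied to
# the C_4 example: f_2 = (1, 2, 2, 1), f_3 = (2, 1, 3, 1).
def crt(a, p, b, q):
    """Smallest positive x with x = a (mod p) and x = b (mod q)."""
    for x in range(1, p * q + 1):
        if x % p == a % p and x % q == b % q:
            return x

f2 = [1, 2, 2, 1]          # 2-distance magic labeling of C_4 (constant 1)
f3 = [2, 1, 3, 1]          # 3-distance magic labeling of C_4 (constant 2)
n, p, q = 4, 2, 3

f6 = [crt(a, p, b, q) for a, b in zip(f2, f3)]
print(f6)                                  # [5, 4, 6, 1]
print(all(1 <= y <= n for y in f6))        # False: labels leave {1,...,4},
                                           # so pq > n yields no 6-labeling here
\end{verbatim}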
We state the following proposition without proof, as its proof is quite similar to Theorem \ref{th:crt}.
\begin{proposition}
If $G$ is a $p$-distance magic as well as $q$-distance magic graph on $n$ vertices for some relatively prime integers $p$ and $q$ such that $pq > n$ and if the new labeling obtained as described in the proof of Theorem \ref{th:crt} is consistent then the graph $G$ is $pq$-distance magic.
\end{proposition}
\subsection{Group Distance Magic}
A magic constant is not unique in group distance magic labeling. We give a few constructions to obtain multiple magic constants in group distance magic labeling. We omit the proof of the following theorem, as it is similar to the proof of Theorem \ref{multiple}.
\begin{theorem}
If $G$ is $r(\bmod~n$)-regular, $\mathbb{Z}_n$-distance magic graph with magic constant $k(\bmod~{n})$ then for any $i$, $(ir+k) (\bmod~{n})$ is also a magic constant.
\end{theorem}
Thus a magic constant in group distance magic labeling is not unique in general. However, for a graph $G$ of order $n$ satisfying the hypothesis of Theorem \ref{uniqueforgroup}, the uniqueness of the $\mathbb{Z}_n$-magic constant follows.
\begin{theorem} \label{uniqueforgroup}
If a graph $G$ of order $n$ is $\mathbb{Z}_n$-distance magic with magic constant $k$ and if $\sum_{x \in \mathbb{Z}_n}x \in \mathbb{Z}_n^{*}$, then $k$ is unique.
\end{theorem}
We present observations on $\mathbb{Z}_n$-distance magic labelings. Let $G$ be a graph on $n$ vertices. Define $\mathscr{G}_n := \{f: f \text{ is a } \mathbb{Z}_n\text{-distance magic labeling} \} \bigcup \{\textbf{0}\}$, where $\textbf{0}$ is a map from vertex set of $G$ to $\mathbb{Z}_n$ which sends every vertex of $G$ to the additive identity of $\mathbb{Z}_n$.
\begin{theorem}
The set $\mathscr{G}_n$ forms a $\mathbb{Z}_n$-module under the usual addition and scalar multiplication of functions.
\end{theorem}
\begin{proof}
Since the collection $M$ of all mappings from the vertex set of $G$ to $\mathbb{Z}_n$ is a $\mathbb{Z}_n$-module, it suffices to prove that $\mathscr{G}_n$ is a submodule of $M$. Let $f_1,f_2 \in \mathscr{G}_n$ with magic constants $k_1$ and $k_2$ respectively, and let $\alpha \in \mathbb{Z}_n$ be arbitrary. Set $f = \alpha f_1 + f_2$. Then, for $u \in V$, the weight of $u$,
\begin{align*}
w_{f}(u) &= \sum_{v \in N(u)}f(v)\\
&= \sum_{v \in N(u)} (\alpha f_1 + f_2) (v)\\
&= \alpha\sum_{v \in N(u)} f_1 (v) + \sum_{v \in N(u)} f_2 (v)\\
&= \alpha k_1 + k_2,
\end{align*}
is independent of $u$. Therefore $ \alpha f_1 + f_2 \in \mathscr{G}_n$. This proves the theorem.
\end{proof}
\begin{observation}
The group $Aut(G)$ of automorphisms of a graph $G$ acts on the module $\mathscr{G}_n$ under the action $g \cdot f = f \circ g$, for all $f \in \mathscr{G}_n$ and all $g \in Aut(G)$. Therefore, $\mathscr{G}_n$ is an $Aut(G)$-representation.
\end{observation}
\section{Further directions in research}
We raise the following two problems:
\begin{problem}\label{prob:1}
Given a graph $G$ that is not distance magic, find a prime number $q$ such that $G$ is not $q$-distance magic.
\end{problem}
\begin{problem} \label{prob:2}
Given a distance magic graph $G$, characterise all primes $q \le \frac{n(n+1)}{2}$ such that $G$ is a $q$-distance magic graph.
\end{problem}
Affirmative solutions to Problems \ref{prob:1} and \ref{prob:2} would provide a complete characterisation of distance magic graphs in terms of $q$-distance magic labelings.
\bibliographystyle{abbrv}
\section{Introduction}
It is well established that supermassive black holes~(BHs) reside at the centers of almost all nearby massive galaxies~\citep{1992ApJ...393..559K,1994ApJ...435L..35H,1995Natur.373..127M}. We also know that a small fraction of galaxies have bright nuclei
referred to as active galactic nuclei, or AGN, which are powered by accreting supermassive BHs.
Determining the dominant mechanisms that drive AGN fueling and their connection to BH-galaxy co-evolution is an ongoing challenge.
Fueling BH accretion requires the availability of cold gas with low angular momentum.
The large scale~($\gtrsim1$ $h^{-1}$ Mpc) and small scale~($\lesssim0.1$ $h^{-1}$ Mpc) environments of AGN can therefore provide important clues about their fueling mechanisms.
For example, the weak dependence seen for the observed large scale clustering on AGN luminosity~\citep{2006MNRAS.373..457L,2018MNRAS.474.1773K,2019MNRAS.483.1452W,2020ApJ...891...41P} implies that more massive haloes do not necessarily host more luminous AGN, which is also seen in the large scatter in the AGN luminosity vs host halo mass relations in hydrodynamic simulations~\citep{2019MNRAS.485.2026B}.
Numerous mechanisms may contribute to AGN triggering. On one hand, this can be driven by secular processes occurring within the host galaxy such as supernova winds~\citep{2009ApJ...695L.130C,2010MNRAS.404.2170K} and hydrodynamic instabilities~\citep{2004ARA&A..42..603K,2011ApJ...741L..33B}. On the other hand, external disturbances to the host galaxy, such as tidal torques generated during galaxy interactions and mergers are also very promising candidates, particularly in gas-rich, major mergers \citep[e.g.,][]{1996ApJ...471..115B,2005Natur.433..604D,2008ApJS..175..356H, 2013MNRAS.429.2594B,2015MNRAS.447.2123C,2019SCPMA..62l9511Y}.
Currently, there is no clear consensus about whether galaxy mergers or secular processes are the dominant driver of BH fueling.
A vast majority of AGN host galaxies do not exhibit any evidence of recent mergers~\citep{2014MNRAS.439.3342V,2019ApJ...882..141M}. Several works analysing the morphologies of the host galaxies found no significant differences in the `merger fractions' between active and inactive galaxies~\citep[e.g.,][]{2009ApJ...691..705G,2011ApJ...726...57C,2012ApJ...744..148K,2012MNRAS.425L..61S,2014MNRAS.439.3342V,2017MNRAS.466..812V,2019ApJ...882..141M,2019MNRAS.489..497Z,2019ApJ...877...52Z}. In contrast, many other works have also found that galaxies that do exhibit signatures of mergers or interactions have higher AGN fractions compared to those that do not~\citep[e.g.,][]{2011ApJ...737..101L,2011ApJ...743....2S,2011MNRAS.418.2043E,2013MNRAS.435.3627E,2014AJ....148..137L,2014MNRAS.441.1297S,2017MNRAS.464.3882W,2018PASJ...70S..37G,2019MNRAS.487.2491E}.
Potential signatures of the AGN-merger connection can also be seen in small-scale quasar clustering measurements of binary quasar pairs, wherein enhanced clustering amplitude is reported at small scales~(few tens of kiloparsecs), as compared to extrapolations from large scale clustering~\citep{2000AJ....120.2183S,2006AJ....131....1H,2012MNRAS.424.1363K,2016AJ....151...61M,2017MNRAS.468...77E}. However, observational studies of merger-triggered AGN are associated with several challenges.
For one, the AGN luminosity can make it difficult to identify morphological merger signatures in the host galaxy.
Galaxy mergers may also create
a significant amount of quasar obscuration, as seen in both simulations~\citep[e.g.,][]{2006ApJS..163....1H,2013ApJ...768..168S,2018MNRAS.478.3056B} and observations~\citep[e.g.,][]{1988ApJ...325...74S,1996ARA&A..34..749S,2009ApJS..182..628V,2017MNRAS.468.1273R}; this can make it difficult to identify merger-triggered AGN. The resulting systematic biases could potentially explain the seemingly conflicting results in the existing literature; this is corroborated by the growing evidence of high merger fractions amongst obscured AGN~\citep[e.g.,][]{2008ApJ...674...80U,2015ApJ...806..218G, 2015ApJ...814..104K,2018Natur.563..214K,2020arXiv200400680G}.
Cosmological hydrodynamic simulations have also shown statistically robust evidence of the presence of the merger-AGN connection. A recent study by \cite{2020MNRAS.494.5713M} looked at merger fractions of AGN hosts as well as AGN fractions of merging galaxies within the EAGLE simulation~\citep{2015MNRAS.446..521S} and demonstrated the existence of a merger-AGN connection, though they also found that merger driven activity does not contribute significantly to the overall growth history of the BH populations. Using data from the MassiveBlackII simulation, \citep{2019MNRAS.485.2026B, 2020MNRAS.492.5620B} demonstrated that merger-driven AGN activity also leads to the formation of systems of multiple active AGN.
Additionally, in a companion paper to the present work, Thomas et al. (in prep) are using very high time resolution data from IllustrisTNG to quantify merger-driven AGN fueling in detail.
An inevitable consequence of hierarchical clustering of halos (and the eventual merging of their member
galaxies) is the formation of systems of multiple BHs. These processes involve several stages which together encompass a huge dynamic range~($\sim9$ orders of magnitude) of separation scales between the BHs. The earliest stages are marked by the gravitational clustering of dark matter halos involving scales $\sim1-100~\mathrm{Mpc}$. The next stage can be marked by when the halos merge and their respective galaxies start interacting; this occurs at scales of $\sim 100$ kpc. The galaxies then eventually merge via dynamical friction~\citep{1943ApJ....97..255C}.
Following the galaxy merger, dynamical friction causes the BHs to continue to inspiral until they reach parsec scales.
The timescales for further hardening of the ensuing BH binaries to scales below $\sim1$ pc are uncertain and may be many Gyr in some cases~\citep{1980Natur.287..307B,2003AIPC..686..201M}; this is known as the ``Final Parsec problem''.
Binaries may evolve on these scales via repeated three body scatterings with stars~\citep{1996NewA....1...35Q,2015ApJ...810...49V,2017MNRAS.464.2301G,2020MNRAS.493.3676O} as well as via interactions with gas~\citep{2002ApJ...567L...9A,2014ApJ...794..167S,2016ApJ...827..111R}. If the binaries reach sufficiently small scales~($\sim$ a few mpc), gravitational wave (GW) radiation will take over and cause the BHs to merge; these GWs may be detectable with current and upcoming facilities such as pulsar timing arrays (PTAs)
\citep[e.g.,][]{2013PASA...30...17M,2016MNRAS.458.3341D,2016MNRAS.458.1267V,2019BAAS...51g.195R} as well as the Laser Interferometer
Space Antenna (LISA) \citep{2019arXiv190706482B}. Cosmological hydrodynamic simulations enable us to probe the formation and evolution of such BH systems from $\sim1~\mathrm{Mpc}$ to $\sim0.01~\mathrm{Mpc}$ scales (the
resolution of the simulation prevents us from probing scales smaller than
$\sim 0.01~\mathrm{Mpc}$). These correspond to relatively early stages of galaxy mergers, which are precursors to gravitationally bound BH binaries~(BHBs) that will be powerful GW sources for
LISA and PTAs.
Numerous recent models based on simulations or semi-analytic modeling have made detailed predictions about the
formation and evolution of BHBs~\citep{2010ApJ...719..851S,2013ApJ...773..100K,2014MNRAS.442...56R,2015ApJ...810..139H,2016MNRAS.461.4419B,2017MNRAS.464.3131K,2019MNRAS.486.4044B,2019ApJ...887...35M,2020arXiv200414399N}. However, connecting these models to observations continues to be a challenge. Current statistical samples of close BH pairs are largely between $\sim 1-100~\mathrm{kpc}$ scale separations \citep[e.g.,][]{2011MNRAS.418.2043E,2011ApJ...737..101L,2011ApJ...735L..42K,2013ApJ...777...64C,2019ApJ...882...41H,2019ApJ...883..167P,2020arXiv200110686H}. In contrast, only one confirmed parsec-scale BH \textit{binary} is known~\citep{2006ApJ...646...49R}, and the growing population of unresolved, mpc-scale binary candidates requires extensive follow-up for confirmation~\citep{2019ApJ...884...36L,2020MNRAS.494.4069K}.
Therefore, while these early-stage, $\sim 1-100~\mathrm{kpc}$ scale BH pairs are still at separations much larger than the GW regime, their properties can serve as an important baseline for BHB models to make predictions on the overall abundances of BHBs and their electromagnetic signatures.
In this work, we use the TNG100 simulation from the \texttt{Illustris-TNG} simulation suite to investigate the possible association between AGN activity
and the richness of the AGN environment
at a wide range of scales~($0.01-1$ $h^{-1}$ Mpc). For our purposes, we measure ``environmental richness" in terms of BH multiplicity, or the abundance of nearby BHs. In the process, we explore the possibility of enhanced AGN activity associated with multiple BH systems, which~(if it exists) may be attributed to a range of physics including
1) large-scale ($\gtrsim$ 1 $h^{-1}$ Mpc) clustering of massive haloes hosting luminous AGN
and 2) galaxy mergers and interactions on $\lesssim$ 0.1 $h^{-1}$ Mpc\ scales producing luminous AGN.
In particular, we identify systems of multiple BHs within separations of 0.01, 0.1, \& 1.0 $h^{-1}$ Mpc\ and investigate AGN activity of these multiples as compared to isolated BHs. We also examine AGN activity in merging BHs, based on the recorded time of BH merger rather than the final pre-merger BH pair separation resolved in the simulation. We do not classify multiple BH systems based on host galaxy properties such as stellar mass ratio \citep[in contrast to the recent study of][]{2020MNRAS.494.5713M}; instead, we focus solely on AGN activity as a function of relative BH positions and merger times. In addition to its simplicity, our approach avoids the uncertainty in measuring stellar masses of close or interacting systems, as tidal stripping tends to strongly alter the mass ratio between first infall and merger~\citep{2016MNRAS.458.2371R,2017MNRAS.464.1659Q}.
In addition,
our analysis of multiple BH systems on $0.01-1$ $h^{-1}$ Mpc\ scales is complementary to the approach in
our companion paper (Thomas et al., in prep.), which provides an in-depth analysis of merger-triggered BH growth
using higher time resolution BH data.
In Section \ref{methods}, we describe our basic methodology which includes a brief description of \texttt{Illustris-TNG}, as well as the criteria used for the identification of BH systems. Section \ref{bh_systems} presents some basic properties of the BH systems, particularly the relationship with their host halos, as well as their abundances. Section \ref{AGN_activity} focuses on the AGN activity of these BH systems. Section \ref{conclusions} summarizes the main results and conclusions.
\section{Methods}
\label{methods}
\subsection{\texttt{Illustris-TNG} simulation}
The \texttt{Illustris-TNG} project~\citep[e.g.,][]{2018MNRAS.473.4077P,2018MNRAS.475..624N,2018MNRAS.480.5113M,2019MNRAS.490.3196P,2019MNRAS.490.3234N} is a suite of large
cosmological magnetohydrodynamics~(MHD) simulations with three cosmological volumes: \texttt{TNG50}, \texttt{TNG100} and \texttt{TNG300}, corresponding to box lengths
of $50$, $100$ and $300$ comoving Mpc, respectively. The \texttt{Illustris-TNG} simulations are successors to the original \texttt{Illustris} simulation~\citep[e.g.,][]{2014MNRAS.444.1518V,2015A&C....13...12N}, with improved subgrid physics modeling that produces more realistic galaxy populations in better agreement with observations
\citep[e.g.,][]{2018MNRAS.479.4056W,2018MNRAS.475..648P,2018MNRAS.475..676S,2019arXiv190407238V}.
The simulation was run using the moving mesh code \texttt{AREPO}~\citep{2010MNRAS.401..791S,2011MNRAS.418.1392P,2016MNRAS.462.2603P}, which solves for self-gravity coupled with MHD.
The gravity solver uses the PM-tree method~\citep{1986Natur.324..446B} whereas the fluid dynamics solver uses a finite volume Godunov scheme in which
the spatial discretization is performed using an unstructured, moving Voronoi tessellation of the domain. The base cosmology is adopted from the results of \cite{2016A&A...594A..13P} which is summarized by the following set of parameters: $\Omega_\Lambda=0.6911,~\Omega_m=0.3089,~\Omega_b=0.0486,~H_0=67.74~\mathrm{km~sec^{-1}~Mpc^{-1}},~\sigma_8=0.8159,~n_s=0.9667$. These cosmological parameters are assumed throughout this work. The simulations were initialised at $z=127$ using glass initial conditions~\citep{1994astro.ph.10043W}
along with the Zel'dovich approximation~\citep{1970A&A.....5...84Z} to construct the initial displacement field.
In addition to the gravity and MHD,
the simulation includes a wide array of physics to model the key processes responsible for galaxy formation and evolution. Due to resolution limitations, the implementation is carried out in the form of `sub-grid' recipes that include the following:
\begin{itemize}
\item Star formation in a multiphase interstellar medium~(ISM) based on the prescription in \cite{2003MNRAS.339..289S}, with inclusion of chemical enrichment and feedback from supernovae (SNe) and stellar winds as described in \cite{10.1093/mnras/stx2656}.
\item Cooling of metal-enriched gas in the presence of a redshift dependent, spatially uniform, ionizing UV background, with self-shielding in dense gas as described in \cite{2013MNRAS.436.3031V}.
\item Magnetic fields are included via a small uniform initial seed field~($\sim10^{-14}$ Gauss) at an arbitrary orientation~\citep{2018MNRAS.480.5113M}. The subsequent evolution~(coupled with the gas) is driven by the equations of magnetohydrodynamics.
\item BH growth via gas accretion and mergers, as well as AGN feedback, which we describe in the following section.
\end{itemize}
Halos are identified using a Friends-of-Friends~(FOF) algorithm~\citep{1985ApJ...292..371D} with a linking length equal to 0.2 times the mean particle separation. Within these halos, self-bound substructures~(subhalos) are identified using \texttt{SUBFIND}~\citep{2001NewA....6...79S}.
\subsection{BH growth and AGN feedback in \texttt{Illustris-TNG}}
\begin{table*}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
$z$ & $n^{\mathrm{bh}}$& $d_{\mathrm{max}}$ & $f^{\mathrm{sys}}(\mathscr{M}=1)$ & $f^{\mathrm{sys}}(\mathscr{M}=2)$ & $f^{\mathrm{sys}}(\mathscr{M}=3)$ & $f^{\mathrm{sys}}(\mathscr{M}=4)$& $f^{\mathrm{sys}}(\mathscr{M}>4)$ \\
& [$h^3$ Mpc$^{-3}$] & [$h^{-1}$ Mpc] & & & & &\\
\hline
0 & 6.01e-02 & 1.0 & 25.6 $\%$ & 13.0 $\%$ & 7.77 $\%$ & 5.33 $\%$ & 48.3 $\%$ \\
\texttt{"} & \texttt{"} & 0.1 & 88.8 $\%$ & 8.83 $\%$ & 1.75 $\%$ & 0.379 $\%$ & 0.185 $\%$ \\
\texttt{"} & \texttt{"} & 0.01 & 99.5 $\%$ & 0.458 $\%$ & & & \\
\hline
0.6 & 5.74e-02 & 1.0 & 25.5 $\%$ & 13.9 $\%$ & 8.31 $\%$ & 6.79 $\%$ & 45.5 $\%$ \\
\texttt{"} & \texttt{"} & 0.1 & 89.3 $\%$ & 9.00 $\%$ & 1.38 $\%$ & 0.314 $\%$ & 0.0410 $\%$ \\
\texttt{"} & \texttt{"} & 0.01 & 99.7 $\%$ & 0.306 $\%$ & & & \\
\hline
1.5 & 4.65e-02 & 1.0 & 29.0 $\%$ & 16.0$\%$ & 9.84 $\%$ & 6.60 $\%$ & 38.5 $\%$ \\
\texttt{"} & \texttt{"} & 0.1& 89.6 $\%$ & 8.78 $\%$ & 1.16 $\%$ & 0.387 $\%$ & 0.0810 $\%$ \\
\texttt{"} & \texttt{"} & 0.01 & 99.7 $\%$ & 0.285 $\%$ & & &\\
\hline
3 & 1.95e-02 & 1.0 & 41.5 $\%$ & 18.1$\%$ & 10.7 $\%$ & 7.00 $\%$ & 22.7 $\%$ \\
\texttt{"} & \texttt{"} & 0.1 & 91.1 $\%$ & 7.88 $\%$ & 0.875 $\%$ & 0.146 $\%$ & \\
\texttt{"} & \texttt{"} & 0.01 & 99.7 $\%$ & 0.267 $\%$ & & &\\
\hline
\hline
\end{tabular}
\caption{The overall abundances of BHs and BH systems within the \texttt{TNG100} simulation box. $n^{\mathrm{bh}}$~(2nd column) is the number density of BHs~(in units of $h^3~\mathrm{Mpc}^{-3}$). $f^{\mathrm{sys}}$~(4th-8th columns) is the percentage of BHs living as singles~($\mathscr{M}=1$), in pairs~($\mathscr{M}=2$), triples~($\mathscr{M}=3$), quadruples~($\mathscr{M}=4$) and beyond~($\mathscr{M}>4$), at various redshift snapshots~($z$) and separation scales~($d_{\mathrm{max}}$, in comoving $h^{-1}$ Mpc).}
\label{number_of_systems}
\end{table*}
BHs of mass $8\times10^5~h^{-1}~M_{\odot}$ are seeded in halos of total mass $>5\times10^{10}~h^{-1}~M_{\odot}$ that do not already contain a BH. Once seeded, these BHs grow via Eddington-limited Bondi-Hoyle accretion given by
\begin{eqnarray}
\dot{M}_{\mathrm{BH}}&=&\mathrm{min}(\dot{M}_{\mathrm{Bondi}}, \dot{M}_{\mathrm{Edd}})\\
\dot{M}_{\mathrm{Bondi}}&=&\frac{4 \pi G^2 M_{\mathrm{BH}}^2 \rho}{c_s^3}\\
\dot{M}_{\mathrm{Edd}}&=&\frac{4\pi G M_{\mathrm{BH}} m_p}{\epsilon_r \sigma_T c}
\end{eqnarray}
where $G$ is the gravitational constant, $M_{\mathrm{BH}}$ is the mass of the BH, $\rho$ is the local gas density, $c_s$ is the local sound speed of the gas, $m_p$ is the mass of the proton, $\epsilon_r$ is the radiative efficiency and $\sigma_T$ is the Thomson scattering cross section. Accreting BHs radiate with a bolometric luminosity given by
\begin{equation}
L=\epsilon_r \dot{M}_{\mathrm{BH}} c^2,
\end{equation}
with an assumed radiative efficiency of $\epsilon_r=0.2$.
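To make the scalings above concrete, the following Python sketch~(our own illustration, not code from \texttt{AREPO}; the cgs constants and the example inputs are our assumptions) evaluates the Bondi and Eddington rates, the resulting accretion rate, bolometric luminosity and Eddington ratio:
\begin{verbatim}
import numpy as np

# Physical constants in cgs units (assumed for this sketch)
G       = 6.674e-8      # gravitational constant [cm^3 g^-1 s^-2]
c       = 2.998e10      # speed of light [cm s^-1]
m_p     = 1.673e-24     # proton mass [g]
sigma_T = 6.652e-25     # Thomson cross section [cm^2]
eps_r   = 0.2           # radiative efficiency quoted above

def accretion_and_luminosity(M_bh, rho, c_s):
    """M_bh [g], rho [g cm^-3], c_s [cm s^-1]."""
    mdot_bondi = 4.0 * np.pi * G**2 * M_bh**2 * rho / c_s**3
    mdot_edd   = 4.0 * np.pi * G * M_bh * m_p / (eps_r * sigma_T * c)
    mdot_bh    = min(mdot_bondi, mdot_edd)   # Eddington-limited accretion
    L_bol      = eps_r * mdot_bh * c**2      # bolometric luminosity [erg s^-1]
    eta        = mdot_bh / mdot_edd          # Eddington ratio
    return mdot_bh, L_bol, eta

# Example (hypothetical values): a 10^8 Msun BH in gas with
# rho ~ 1.7e-24 g cm^-3 and c_s ~ 30 km s^-1
M_sun = 1.989e33
print(accretion_and_luminosity(1e8 * M_sun, 1.7e-24, 3.0e6))
\end{verbatim}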
A fraction of the energy released is coupled to the surrounding gas as thermal or kinetic feedback. \texttt{Illustris-TNG} implements a two-mode feedback model as described in \cite{2017MNRAS.465.3291W}, the key features of which are summarized as follows. If the Eddington ratio~(defined as $\eta\equiv \dot{M}_{\mathrm{BH}}/\dot{M}_{\mathrm{Edd}}$) exceeds a critical value of $\eta_{\mathrm{crit}}=\mathrm{min}[0.002(M_{\mathrm{BH}}/10^8 M_{\odot})^2,0.1]$, thermal energy is injected into the neighboring gas at a rate given by $\epsilon_{f,\mathrm{high}} \epsilon_r \dot{M}_{\mathrm{BH}}c^2$, with $\epsilon_{f,\mathrm{high}} \epsilon_r=0.02$; $\epsilon_{f,\mathrm{high}}$ is referred to as the ``high accretion state" coupling efficiency. If the Eddington ratio is below this critical value, kinetic energy is injected into the gas at regular intervals of time, in the form of a `wind' oriented along a randomly chosen direction. The injected energy is given by $\epsilon_{f,\mathrm{low}}\dot{M}_{\mathrm{BH}}c^2$, where $\epsilon_{f,\mathrm{low}}$ is referred to as the `low accretion state' coupling efficiency; it has a maximum value of 0.2, with smaller values at very low gas densities. For further details on both feedback modes, we refer the interested reader to \cite{2017MNRAS.465.3291W}.
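For illustration, the mode selection described above can be sketched as follows~(a simplified sketch with our own naming; the density dependence of $\epsilon_{f,\mathrm{low}}$ and the time-interval pulsing of the kinetic wind are omitted):
\begin{verbatim}
M_SUN = 1.989e33  # g

def feedback_mode(M_bh, eta):
    """Return the feedback channel and the coupling factor multiplying
    mdot_bh * c^2 (simplified; M_bh in grams)."""
    eta_crit = min(0.002 * (M_bh / (1e8 * M_SUN))**2, 0.1)
    if eta > eta_crit:
        return "thermal", 0.02   # eps_f,high * eps_r = 0.02
    # low accretion state: kinetic wind along a random direction
    return "kinetic", 0.2        # maximum eps_f,low (smaller at low density)
\end{verbatim}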
An accurate modelling of BH dynamics at small scales is difficult because of the finite simulation resolution, which can result
in spurious accelerations imparted to the BHs by numerical noise. To avoid this, BHs are (re)-positioned to the local potential minimum within a sphere containing $n$
neighboring gas cells, where $n=1000$ is the value assigned for \texttt{Illustris-TNG}. Such a repositioning naturally leads to a prompt merging of two BHs shortly after their parent subhalos merge. As discussed in detail below, this limits our ability to study unmerged BH systems on 0.01 $h^{-1}$ Mpc\ scales and motivates our inclusion of merging BH systems in parts of our analysis. We therefore avoid drawing statistical conclusions about these smallest-scale BH pairs and multiples.
\subsection{Identifying systems of BHs}
We identify BH systems by linking individual BHs within a maximum distance scale denoted by $d_{\mathrm{max}}$. In particular, every member
of a BH system must be within a comoving distance $d_{\mathrm{max}}$ with respect to at least one other member.
We investigate systems at three values of $d_{\mathrm{max}}$: 1.0, 0.1, \& 0.01 $h^{-1}$ Mpc. $d_{\mathrm{max}}=1$ $h^{-1}$ Mpc\ roughly corresponds to typical distances between BHs in halos that have come together via gravitational clustering and are close to a merger; typically, the occupying galaxies themselves are not yet close enough to be visibly interacting. $d_{\mathrm{max}}=0.1$ $h^{-1}$ Mpc\ roughly corresponds to typical distances in the early stages of galaxy interactions, while
$d_{\mathrm{max}}=0.01$ $h^{-1}$ Mpc\ corresponds to typical distances in late-stage galaxy mergers.
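Operationally, this linking criterion amounts to single-linkage~(friends-of-friends style) grouping of the BH positions with linking length $d_{\mathrm{max}}$. A minimal sketch of such a grouping~(our own illustration, not the actual analysis code; periodic boundaries are ignored for brevity) is:
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def identify_bh_systems(pos, d_max):
    """Group BHs so that every member of a system lies within d_max
    (comoving) of at least one other member.
    pos: (N, 3) comoving positions; returns one system ID per BH."""
    parent = np.arange(len(pos))

    def find(i):                     # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in cKDTree(pos).query_pairs(r=d_max):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri
    return np.array([find(i) for i in range(len(pos))])

# Example: BHs 0 and 1 are linked, BH 2 is a single
pos = np.array([[0.0, 0.0, 0.0], [0.05, 0.0, 0.0], [2.0, 2.0, 2.0]])
print(identify_bh_systems(pos, d_max=0.1))   # -> [0 0 2]
\end{verbatim}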
Note that at the smallest $<0.01$ $h^{-1}$ Mpc\ scales, our samples are highly incomplete, because a significant portion of pairs at these scales are promptly merged by the BH repositioning scheme. As discussed below, we find that we are nonetheless able to draw useful qualitative conclusions from the population of unmerged small scale BH pairs. We additionally perform a more quantitative analysis of BHs in late-stage galaxy mergers using the more complete sample of BH merger progenitors (which are defined based on merger time, not BH separation). We compare the results for $<0.01$ $h^{-1}$ Mpc\ BH systems to those for the merging BHs, which allows us to identify any systematic bias within the $<0.01$ $h^{-1}$ Mpc\ multiple-BH systems due to their incompleteness.
We define \textit{multiplicity}~(denoted by $\mathscr{M}$) as the number of members within a BH system. We characterize the AGN activity of a BH system by the member having the highest Eddington ratio; we shall refer to this as the \textit{primary} member of the system. Note that traditionally, the primary is defined to be the most massive BH. However, a merger-triggered enhancement in the AGN activity does not necessarily occur within the most massive member. Therefore, our choice of the highest Eddington ratio member as the primary ensures that within every BH system, we are probing the BHs that are most likely to be associated with merger-triggered AGN (independent of BH mass). We also note that~(unless otherwise stated) our results are qualitatively independent of the choice of the primary~(the only exception is in Section \ref{impact_on_edd}, where it is discussed).
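Given the system IDs from the linking sketch above, the multiplicity and the primary~(highest Eddington ratio member) of each system follow directly; a minimal sketch with our own array names:
\begin{verbatim}
import numpy as np

def multiplicity_and_primary(sys_id, eta):
    """sys_id: system ID per BH; eta: Eddington ratio per BH.
    Returns dicts mapping system ID -> multiplicity and -> index of the
    primary (the member with the highest Eddington ratio)."""
    multiplicity, primary = {}, {}
    for idx, (s, e) in enumerate(zip(sys_id, eta)):
        multiplicity[s] = multiplicity.get(s, 0) + 1
        if s not in primary or e > eta[primary[s]]:
            primary[s] = idx
    return multiplicity, primary
\end{verbatim}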
In order to study BH systems on both large and small spatial scales, we need a combination of high enough resolution as well as large enough volume to include a population of rare BH multiples.
Therefore, in this work, we use
the highest resolution realization of the \texttt{TNG100} box with $2\times1820^3$
resolution elements (DM particles and gas cells).
\subsection{Constructing randomized samples of BH systems}
\label{randomized_samples}
In order to analyse possible sources of selection bias in our results, we prepare an ensemble of ``randomized samples" of BH systems. Each randomized sample is constructed by randomly shuffling the ``system IDs"~(a unique integer ID we assign to each BH that determines which BH system it belongs to, if any)
amongst all the BHs in the simulation. In other words, each system ID is assigned to a random BH within the simulation box. Therefore, for every BH system with multiplicity $\mathscr{M}$, there exists a subset of $\mathscr{M}$ randomly assigned BHs within each randomized sample.
Using this procedure, for every sample of BH systems, we constructed an ensemble of 10 corresponding randomized samples that,
by construction, have identical abundances for all multiplicities~(pairs, triples, and beyond).
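In code, drawing one such randomized sample simply amounts to randomly permuting the array of system IDs over the full BH population~(a sketch; \texttt{sys\_id} is an array of system IDs as in the sketches above):
\begin{verbatim}
import numpy as np

def randomized_sample(sys_id, seed=None):
    """Randomly reassign the system IDs amongst all BHs; the multiplicity
    function is preserved exactly by construction."""
    return np.random.default_rng(seed).permutation(sys_id)

# Ensemble of 10 randomized samples for a toy set of system IDs
sys_id = np.array([0, 0, 2, 3, 3, 3, 6, 7])
ensemble = [randomized_sample(sys_id, seed=k) for k in range(10)]
\end{verbatim}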
The following terminology is used for the remainder of the paper. We shall often refer to the actual BH systems within 1, 0.1, \& 0.01 $h^{-1}$ Mpc\ scales as ``true systems", and thereby compare their properties to their corresponding ``randomized systems" / ``randomly selected sets of BHs". Any selection bias in the computation of a quantity for a true system will be fully captured in the trends exhibited by the randomized systems.
\section{BH systems in \texttt{TNG100}}
\label{bh_systems}
\begin{figure*}
\centering
\includegraphics[width=18cm]{RICHNESS_vs_halomass_L75n1820TNG_edd.png}
\caption{Multiplicity~($\mathscr{M}$) as a function of the host halo mass of the primary BH~(member with the highest Eddington ratio), for BH systems~(all members have masses $>10^6~h^{-1}~M_{\odot}$) as predicted by the TNG100 simulation.~The blue circles correspond to the scatter at $z=0$. The colored solid lines show the mean trends at $z=0,1.5,3,4$. The different panels correspond to different values of $d_{\mathrm{max}}$, which is the maximum comoving distance between a member
and at least one other member within the system. We see that more massive halos tend to host richer systems of BHs.}
\label{richness_vs_halomass}
\end{figure*}
Table \ref{number_of_systems} summarizes
the abundances of the BH systems within the \texttt{TNG100} box. Here we discuss some of the basic properties~(environment and number densities) of these BH systems. Figure \ref{richness_vs_halomass} shows the multiplicity vs. host halo mass of the BH systems identified within the \texttt{TNG100} universe. Note that for $d_{\mathrm{max}}=1$ $h^{-1}$ Mpc, not all members will necessarily be within the same halo; in this case we choose the host halo mass of the primary member. Across all scales, we find that systems with higher multiplicity tend to live in more massive halos. This is simply a consequence of higher BH occupations in more massive halos. At scales within $1$ $h^{-1}$ Mpc~(leftmost panel), BH pairs, triples, and quadruples primarily reside in halos with total mass
between $10^{11}-10^{13}~ h^{-1} M_{\odot}$, whereas the more massive $M_h\gtrsim10^{13}~h^{-1} M_{\odot}$ halos tend to host BH systems with $\mathscr{M} \gtrsim10$. The median halo masses ($M_h\sim10^{11.5-12}~h^{-1} M_{\odot}$) of
BH systems on these scales vary little with multiplicity.
On smaller scales ($\leq0.1$ $h^{-1}$ Mpc, middle panel), a stronger trend with $M_h$ is seen;
the median halo mass for pairs and triples is $\sim 10^{12}~h^{-1} M_{\odot}$, while the highest order multiples at these scales contain up to $\sim5$ members and have median halo masses of $\sim10^{13}~h^{-1} M_{\odot}.$ Finally, at scales of
$\leq0.01$ $h^{-1}$ Mpc~(rightmost panel), no higher-order ($\mathscr M>2$) systems are present; BH pairs have
median halo masses of $M_h\sim10^{12}~h^{-1} M_{\odot}$.
\begin{figure*}
\centering
\includegraphics[width=18cm]{black_hole_multiplicity_functions_L75n1820TNG_eddington_mass_ratio.png}
\caption{Solid lines show the BH multiplicity functions~(defined as the number density $n^{\mathrm{sys}}$ of BH systems at each
multiplicity $\mathscr{M}$),
as predicted by the TNG100 simulation. Horizontal dotted lines mark the overall number density of BHs. Different colors correspond to snapshots at different redshift
between $z=0-3$. Left to right, panels correspond to $d_{\mathrm{max}}=1$, 0.1, \& 0.01 $h^{-1}$ Mpc. The error bars correspond to Poisson errors. The number density of multiple BH systems decreases~(roughly as a power law) with increasing multiplicity, and higher-$\mathscr{M}$ BH systems are increasingly rare on small scales (lower $d_{\mathrm{max}}$). Additionally, we also construct 10 randomized samples of BH systems~(see Section \ref{methods} for how they are constructed) wherein the BHs are randomly grouped together to form systems such that they have the same multiplicity functions as the actual sample.}
\label{bh_multiplicity_functions}
\end{figure*}
\begin{figure*}
\includegraphics[width=18cm]{fraction_most_masiveBH_L75n1820TNG_proper_range_bh_mass_ratio_edd.png}
\caption{\textbf{Upper Panels:} $f_{\mathscr{M}=2}$ is the fraction of systems that are pairs, plotted as a function of BH mass threshold~(defined by the mass of the most massive member). Left to right, panels correspond to $d_{\mathrm{max}}=1.0$, 0.1, \& 0.01 $h^{-1}$ Mpc, respectively. Different colors correspond to snapshots at different redshifts
between $z=0$ and 3. \textbf{Lower Panels:} Similar to the top panels, but $f_{\mathscr{M}\geq3}$ is the fraction of systems that are triples and higher order multiples. We find that more massive BHs have a higher likelihood of being members of multiple BH systems.}
\label{fraction_most_masive}
\end{figure*}
Now we consider the redshift evolution of the multiplicity versus halo mass relation from $z\sim0-3$. We focus on scales within 0.1 and $1$ $h^{-1}$ Mpc, where there are sufficient statistics for analysis. For BH systems within $0.1$ $h^{-1}$ Mpc\ scales, BH multiples at fixed halo mass are somewhat more common at higher redshift.
In part, this reflects the fact that many such systems merge between $z=3$ and $z=0$. For $1$ $h^{-1}$ Mpc\ scales, no significant redshift evolution is seen, likely because more of these large-scale BH multiples are still unmerged at $z=0$.
Figure \ref{bh_multiplicity_functions} shows the volume density of the BH systems as a function of multiplicity. At $z=0$,
the number densities for simulated BH pairs~($\mathscr{M}=2$) are
$3.9\times10^{-3}$, $2.7\times10^{-3}$ and $1.4\times10^{-4}~h^3\mathrm{Mpc}^{-3}$ at scales of 1.0, 0.1, \& 0.01 $h^{-1}$ Mpc, respectively. Comparing this to the number density of the overall BH population at $z=0$ ($6 \times 10^{-2}~h^3\mathrm{Mpc}^{-3}$), we see that $\sim13\%$ of BHs live in pairs within 1 $h^{-1}$ Mpc\ scales. At smaller scales ($\leq0.1$ \& 0.01 $h^{-1}$ Mpc), the percentages of BHs in pairs decrease to $\sim9\%$ and $\sim0.5\%$, respectively.
Amongst the pairs, $\sim37\%$ and $\sim17\%$ have additional companions to form triples within scales of $1$ $h^{-1}$ Mpc\ and $0.1$ $h^{-1}$ Mpc, respectively. Some of these systems may
eventually form gravitationally bound triple BH systems, which may induce rapid BH mergers and provide a possible solution to the so-called ``final parsec problem"~\citep{2016MNRAS.461.4419B,2018MNRAS.473.3410R}. In addition to offering exciting prospects for gravitational wave detections, strong triple BH interactions will often eject the lightest BH from the system, creating a possible population of wandering BHs in galaxy halos \citep{Perets_2008,2010ApJ...721L.148B}.
For higher order multiples, we see an approximate power-law decrease in the abundance of BH systems with increasing multiplicity for all values of $d_{\mathrm{max}}$. At smaller $d_{\mathrm{max}}$, there are fewer (or no) systems of multiple BHs, which leads to an increasingly sharp decline
as we go from $d_{\mathrm{max}}=1$ $h^{-1}$ Mpc\ to $0.01$ $h^{-1}$ Mpc. The number densities of BHs and corresponding BH systems are nearly constant at $z<1.5$, while at $z=3$ they are lower by a factor of $\sim 3$. For a given value of $d_{\mathrm{max}}$, the relative proportion of BH singles and pairs is nearly constant from $z=0-3$, and the number density of $\mathscr{M}>2$ systems at $z=0$ is only slightly higher~(by $\sim10\%$ \& $60\%$ at $d_{\mathrm{max}}=0.1~\&~0.01$ $h^{-1}$ Mpc, respectively) than at $z>0.6$.
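For reference, the multiplicity function shown in Figure \ref{bh_multiplicity_functions} can be computed from the system catalogue as follows~(a sketch with our own variable names; \texttt{volume} is the comoving box volume in $(h^{-1}\mathrm{Mpc})^3$ and the quoted errors are Poissonian):
\begin{verbatim}
import numpy as np

def multiplicity_function(sys_id, volume):
    """Number density of BH systems (and Poisson errors) per multiplicity."""
    _, counts = np.unique(sys_id, return_counts=True)   # members per system
    multiplicities, n_systems = np.unique(counts, return_counts=True)
    return multiplicities, n_systems / volume, np.sqrt(n_systems) / volume
\end{verbatim}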
We now focus on how BH multiplicity
depends on BH mass. We do this by looking at the relationship between the multiplicity and the mass of the most massive member of the system; this is shown in Figure \ref{fraction_most_masive} for pairs~($\mathscr{M}=2$) and multiples~($\mathscr{M}\geq3$). Let us first focus on systems that are exclusively pairs~($\mathscr{M}=2$). At scales within $0.1$ $h^{-1}$ Mpc, the percentage of pairs increases with BH mass from $\sim5\%$ for $\sim10^6~h^{-1}~M_{\odot}$ BHs to $\sim20-40\%$ for $\sim10^9~h^{-1} M_{\odot}$ BHs. At scales within $0.01$ $h^{-1}$ Mpc, $\sim0.2-2\%$ of BHs live in pairs across the entire range of BH masses; there is some hint of an increase in multiplicity with BH mass, although the statistics are very limited. At scales within $1$ $h^{-1}$ Mpc, we see that the percentage of pairs remains largely constant at $\sim20\%$ up to $\sim10^8~h^{-1}~M_{\odot}$, and then drops down to $\lesssim10\%$ at $\sim10^9~h^{-1}~M_{\odot}$; this is because at these scales, as we increase the mass of BHs, they have a much higher tendency to live in multiples~($\mathscr{M}\geq3$) instead of pairs.
At scales within $0.1$ $h^{-1}$ Mpc, the percentage of BHs living in multiples with $\mathscr{M}\geq3$ is $\sim1-2\%$ for $\sim10^7~h^{-1}~M_{\odot}$ BHs; this increases up to
$\sim20\%$ for
$\sim10^9~h^{-1}~M_{\odot}$ BHs.
At $\leq1$ $h^{-1}$ Mpc\ scales, the percentage increases from $\sim30\%$ for $\sim10^7~h^{-1}~M_{\odot}$ BHs to $\sim60\%$ for $\sim10^8~h^{-1}~M_{\odot}$ BHs; almost all BHs with masses $\sim10^9~h^{-1}~M_{\odot}$ and higher live in $1$ $h^{-1}$ Mpc\ scale multiples.
Overall, we find that higher-mass BHs are more likely to have companions, as they live in more massive halos (cf.~Figure \ref{richness_vs_halomass}). Note that this also means that higher-multiplicity systems will have higher bolometric luminosities, on average.
This motivates our choice~(in the following sections) to characterize the luminosity of the BHs in terms of their Eddington ratios; the Eddington ratios do not correlate as strongly with BH mass, making them a better proxy for probing the AGN activity~(independent of the trends seen with BH mass).
\section{AGN activity within BH systems}
\label{AGN_activity}
\begin{figure*}
\centering
\includegraphics[width=13.5cm]{black_hole_AGN_fractions_L75n1820TNG_eddington_mass_ratio1.png}
\vspace{0.5 cm}
\includegraphics[width=13.5cm]{black_hole_AGN_fractions_L75n1820TNG_eddington_mass_ratio2.png}
\vspace{0.5 cm}
\includegraphics[width=13.5cm]{black_hole_AGN_fractions_L75n1820TNG_eddington_mass_ratio3.png}
\caption{\textbf{Upper/ larger panels:} The AGN fraction~($f_{\mathrm{AGN}}$) is defined as the fraction of BH systems that have at least 1 AGN. Each row assumes a different Eddington ratio threshold to define an AGN, as indicated in the y-axis labels. The solid lines are the AGN fractions for BH systems in IllustrisTNG, with error bars corresponding to Poisson errors in their number counts. The filled circles show the median AGN fractions for 10 samples of randomly-selected BHs.
The dashed lines also show predictions for the randomized systems, but computed analytically using Eq.~(\ref{eqn:AGN_fraction_randomized}). As expected, the circles nearly overlap with the dashed lines. The left, middle, and right columns correspond to $d_{\mathrm{max}}=1$, 0.1, \& 0.01 $h^{-1}$ Mpc, respectively. \textbf{Lower/ smaller panels:} Ratios of AGN fractions with respect to analytical predictions for randomly-selected BH samples;
i.e., the solid lines and filled circles are obtained by comparing
the solid lines vs. dashed lines and circles vs. dashed lines, respectively, from the upper panels. At scales within $0.1$ and $0.01$ $h^{-1}$ Mpc, wherever adequate statistics are available, the highest Eddington ratio AGN ($\eta\geq0.1$)
are more likely to be found in BH pairs, triples, and higher-order multiples than would be expected from random subsampling. The same is not true at scales of $\leq 1$
$h^{-1}$ Mpc, where the higher likelihood for multiple BH systems to contain at least one luminous AGN is mostly a result of simple combinatorics.
For low Eddington ratio thresholds~($\eta>0.01$), AGN fractions are always high, such that little to no enhancement of AGN activity is detectable in spatially-associated BH systems.}
\label{AGN_fractions}
\end{figure*}
\subsection{AGN fractions of BH systems}
\label{AGN_fractions_of_black_hole_systems}
Figure \ref{AGN_fractions} shows the fraction~($f_{\mathrm{AGN}}$) of BH systems~(as a function of multiplicity) that contain at least one AGN, where ``AGN" are defined via various Eddington ratio thresholds. We see that across the entire range of Eddington ratios and redshifts, systems with higher multiplicity
have higher AGN fractions.
However, this is to some extent a trivial consequence of the higher probability that a system containing many BHs will contain at least one AGN. In order to quantify the enhancement in AGN activity that can be attributed to environment, we
compare this result to AGN fractions~(filled circles in Figure \ref{AGN_fractions}) of the randomly-selected samples of BH systems~(see Section \ref{randomized_samples}). Additionally, we can also compute the AGN fractions of the randomized samples analytically (dashed lines in Figure \ref{AGN_fractions}). For a sample containing $N_{\mathrm{BH}}$ BHs and $N_{\mathrm{AGN}}$ AGN,
the fraction~($f^{\mathrm{random}}_{\mathrm{AGN}}$) of randomly chosen sets of $\mathscr{M}$ BHs which have at least one AGN is given by
\begin{equation}
f^{\mathrm{random}}_{\mathrm{AGN}}=\frac{\binom{N_{\mathrm{BH}}}{\mathscr{M}}-\binom{N_{\mathrm{BH}}-N_{\mathrm{AGN}}}{\mathscr{M}}}{\binom{N_{\mathrm{BH}}}{\mathscr{M}}}.
\label{eqn:AGN_fraction_randomized}
\end{equation}
We see that the AGN fractions for the randomized samples computed using the two methods~(filled circles vs dashed lines) are consistent with each other, providing further validation for our use of randomized samples to identify true enhancements of AGN activity in spatially-associated BH systems.
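As a concrete illustration, Eq.~(\ref{eqn:AGN_fraction_randomized}) can be evaluated with exact integer arithmetic; the numbers in the example below are hypothetical and serve only to show the purely combinatorial rise of $f^{\mathrm{random}}_{\mathrm{AGN}}$ with multiplicity:
\begin{verbatim}
from math import comb

def f_agn_random(N_bh, N_agn, M):
    """Probability that a random set of M BHs, drawn from N_bh BHs of which
    N_agn are AGN, contains at least one AGN (equivalent to the equation
    above)."""
    return 1.0 - comb(N_bh - N_agn, M) / comb(N_bh, M)

# Hypothetical example: 10% of BHs are AGN above some Eddington ratio cut
for M in (1, 2, 3, 4):
    print(M, round(f_agn_random(N_bh=10000, N_agn=1000, M=M), 3))
\end{verbatim}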
Also evident in Figure \ref{AGN_fractions} is that for BH systems on 1 $h^{-1}$ Mpc\ scales, there is no significant enhancement of AGN fractions compared to their corresponding randomized samples, even at the highest Eddington ratios ($\eta \geq 0.7$).
Enhanced AGN fractions are seen for high Eddington ratio AGN in closer BH systems, however, suggesting a merger-triggered origin for these luminous AGN. The
following paragraphs describe more details about these enhancements at various Eddington ratio thresholds.
Let us first focus on the AGN fractions of the most luminous AGN ($\eta \geq 0.7$). The topmost panels in Figure \ref{AGN_fractions} contain only data at $z\geq 1.5$
(at $z\leq0.6$, the very few $\eta \geq 0.7$ AGN that exist are insufficient to make statistically robust predictions).
We see that at scales of 0.1 $h^{-1}$ Mpc,
the AGN fractions of BH pairs and multiples~($\mathscr{M}>1$) are enhanced
by up to factors of $\sim3-6$ compared to their corresponding randomized samples. At the smallest ($\leq 0.01$ $h^{-1}$ Mpc) scales, we see hints of a similar trend, but there are too few luminous AGN systems to draw definite conclusions.
In contrast,
BHs that are isolated ($\mathscr{M}=1$) at
1 $h^{-1}$ Mpc\ scales
are actually slightly \textit{less} likely to host luminous AGN than individual BHs sampled randomly from the overall population.
These trends imply a strong association between luminous AGN triggering and BH multiplicity on $\leq 0.1$ $h^{-1}$ Mpc\ scales.
If the AGN Eddington ratio threshold is decreased to $\eta_{\rm min} = 0.1$,
enhanced AGN fractions are seen for $d_{\rm max} \leq 0.1$ $h^{-1}$ Mpc\ systems at all redshifts (excepting $z=3$, where most primary BHs are $\eta\geq0.1$ AGN even in the randomized samples).
These AGN enhancements
are smaller~(up to factors $\sim2$) than those for the most luminous AGN ($\eta>0.7$) at $\leq0.1$ $h^{-1}$ Mpc~ scales. On the smallest ($\leq0.01$ $h^{-1}$ Mpc) scales, however, the AGN fractions of BH pairs at low redshift
are more strongly enhanced~(up to factors of $\sim 8$ at $z=0$)
compared to their corresponding randomized pairs.
If the AGN Eddington ratio threshold is further decreased to $\eta_{\rm min}=0.01$, no significantly enhanced AGN fractions
are seen in any multiple BH systems. A notable exception is
$\leq0.01$ $h^{-1}$ Mpc\ scale pairs at $z=0$,
which are enhanced by up to factors of $\sim2$.
In a nutshell, the above trends indicate that AGN are more likely to be found in multiple BH systems at 1) high Eddington ratios
and 2) smaller BH separations. Because multiple BH systems on $\leq 0.1$ $h^{-1}$ Mpc\ scales are likely to be hosted in ongoing galaxy interactions or mergers, this finding is in agreement with previous studies indicating enhanced AGN activity in interacting galaxies \citep{2011ApJ...737..101L,2011ApJ...743....2S,2011MNRAS.418.2043E,2013MNRAS.435.3627E,2014AJ....148..137L,2014MNRAS.441.1297S,2017MNRAS.464.3882W,2018PASJ...70S..37G,2019MNRAS.487.2491E}. Our findings are also
in agreement with previous studies suggesting that luminous AGN are strongly clustered in rich environments at small scales~$\lesssim0.1$ $h^{-1}$ Mpc, in particular the ``one-halo" term of the AGN/ quasar clustering measurements~\citep{2012MNRAS.424.1363K,2017MNRAS.468...77E}.
Finally, the fact that no enhancements are seen at $1$ $h^{-1}$ Mpc\ scales is also consistent with previous clustering studies, which find no significant luminosity dependence of the large-scale clustering amplitude~\citep{2006MNRAS.373..457L,2018MNRAS.474.1773K,2019MNRAS.483.1452W,2020ApJ...891...41P}.
\begin{figure*}
\centering
\includegraphics[width=14cm]{AGNs_that_are_systems_L75n1820TNG_mass_ratio_edd.png}
\caption{\textbf{Upper/ larger panels}: $f_{\mathrm{\mathscr{M}(\geq)}}$ refers to the fraction of primary AGN that live in BH systems with a minimum multiplicity~$\mathscr{M}(\geq)$; this is plotted as a function of the threshold Eddington ratio $\eta_{\mathrm{min}}$. Solid lines correspond to the BH systems, and
dashed lines correspond to the median values for the 10 samples of randomly-selected BHs. The top, middle, and bottom rows correspond to $d_{\mathrm{max}}=1$, 0.1, \& 0.01 $h^{-1}$ Mpc, respectively. The errorbars correspond to Poisson errors. Within each row, the \textbf{lower/ smaller panels} denote the ratio~(solid/ dashed lines) between the predictions for the true BH systems vs. that of the randomized systems. The left, middle and right panels correspond to systems with at least 2, 3 and 4 members, respectively. The different colors correspond to different redshifts. At higher Eddington ratios, the likelihood of AGN to belong to multiple BH systems within $0.1$ and $0.01$ $h^{-1}$ Mpc\ scales is significantly enhanced compared to that of the randomized systems; on the other hand, there is very little enhancement for BH systems within $1$ $h^{-1}$ Mpc\ scales.}
\label{AGN_that_are_systems}
\end{figure*}
\begin{figure*}
\begin{tabular}{cc}
\includegraphics[width=8.1cm]{ultra_hard_flux_cuts_L75n1820TNG_mass_ratio_bol_lum.png} &
\includegraphics[width=8cm]{hard_flux_cuts_L75n1820TNG_mass_ratio_bol_lum.png}
\end{tabular}
\caption{ \textbf{Upper panels:} The fraction $f(\mathscr{M}\geq2)$ of primary AGN~(defined here to be the most luminous member of the BH system) with 14-195 keV~(left panel) and 2-10 keV~(right panel) threshold flux
that live in pairs/ multiples. The different colors generally correspond to different redshifts~(see legend). In the left panel, the thin black lines correspond to systems where the companion BHs have $L_{2-10~\mathrm{keV}}>10^{42}~\mathrm{erg~s^{-1}}$ at $z=0.035$. Solid, dashed and dotted lines correspond to $d_{\mathrm{max}}=1$, 0.1, \& 0.01 $h^{-1}$ Mpc, respectively. The assumed bolometric corrections for the X-ray bands have been adopted from \protect\cite{2010MNRAS.402.1081V}~(left panel) and \protect\cite{2012MNRAS.425..623L}~(right panel). The vertical line in the left panel marks the detection limit~($7.2\times10^{-12}~\mathrm{erg~s^{-1}~cm^{-2}}$) for the 105-month Swift-BAT survey~\protect\citep{2018ApJS..235....4O}. The vertical lines in the right panel mark detection limits of the following fields obtained from the \textit{Chandra} X-ray observatory: the dotted line~($5.5\times10^{-17}~\mathrm{erg~s^{-1}~cm^{-2}}$) corresponds to the \textit{Chandra} Deep Fields-North and South~(CDF-N and CDF-S); the dashed line~($6.7\times10^{-16}~\mathrm{erg~s^{-1}~cm^{-2}}$) corresponds to the Extended \textit{Chandra} Deep Field - South~(ECDF-S); and the solid line~($5.4\times10^{-15}~\mathrm{erg~s^{-1}~cm^{-2}}$) corresponds to the \textit{Chandra} Stripe 82 ACX survey~\protect\citep{2013MNRAS.432.1351L}. At the detection threshold of \textit{Swift}-BAT, the fraction of ultra-hard X-ray AGN associated with BH pairs and multiples within scales of $0.1$ $h^{-1}$ Mpc\ is $\sim10-20\%$. At the Stripe 82 ACX detection limit, the fraction of hard X-ray AGN in $0.1$ $h^{-1}$ Mpc\
scale pairs and multiples is $\sim10-20\%$~($5\%$) at $z\sim0.6-1.5$~(at $z\sim0.035$). \textbf{Lower panels:} $f^{\mathit{Swift}~\mathrm{BAT}}_{\mathrm{\mathscr{M}\geq2}}$~($f^{\mathrm{Stripe~ 82}}_{\mathrm{\mathscr{M}\geq2}}$) is defined as the fraction of \textit{Swift}-BAT~(Stripe 82) primary AGN that have at least one detectable companion above a given flux threshold ($x$-axes show the flux threshold of the companion AGN). $\sim3\%$ of \textit{Swift} BAT AGN at $z=0.035$ have companions within $0.1$ $h^{-1}$ Mpc~that are detectable at the 105-month survey limit.}
\label{hard_xray}
\end{figure*}
\subsection{What fraction of observable AGN are members of BH systems?}
As mentioned earlier, some observational studies also look for `merger fractions' of AGN ---i.e., what fraction of AGN are hosted by merging/ interacting systems. In terms of our work, a robust proxy for these
merger fractions is the fraction of AGN that are members of BH pairs and multiples. Figure \ref{AGN_that_are_systems}~(solid lines) shows the fraction~($f_{\mathscr{M}\geq}$) of primary AGN that are members of BH systems of a threshold multiplicity $\mathscr{M}\geq$, plotted as a function of threshold Eddington ratio $\eta_{\mathrm{min}}$. These are compared to corresponding predictions for AGN belonging to the randomized samples. We find that in the regime of Eddington ratio thresholds between $\sim0.01-1$, higher Eddington ratio AGN are more likely to have one or more companions compared to lower Eddington ratio AGN. However, this trend is also seen for the randomized samples, which shows that it is~(in part) due to our choice of the most luminous member as the primary; this choice carries an inherent statistical bias, since more luminous AGN are more likely to be picked out from higher order BH systems. For $\eta<0.01$, $f(\mathscr{M}\geq)$ tends to gradually flatten for both randomized samples and true samples, but this is simply because as we continue to decrease the Eddington ratio threshold, we eventually cover the full AGN population. If we look at the redshift evolution at a fixed Eddington ratio threshold, we see that lower-redshift AGN have a higher probability of being a member of a BH multiple. This is primarily because at fixed multiplicity, BH systems tend to have decreasing Eddington ratios at lower redshifts due to a general decrease in AGN luminosity with decreasing redshift~(as seen in Appendix \ref{appendix1}: Figure \ref{appendix1_fig}); as a natural corollary, at fixed Eddington ratio, BH systems have higher multiplicities at lower redshifts.
We now look at the difference in $f(\mathscr{M}\geq)$ between the true samples of BH systems and the corresponding randomized samples in order to filter out the effects that are physical~(see ratio plots of Figure \ref{AGN_that_are_systems}). At scales of $1$ $h^{-1}$ Mpc, we see no significant difference between $f(\mathscr{M}\geq)$ predictions for the true samples and the randomized samples for the entire range of Eddington ratio thresholds; this is similar to our findings for the AGN fractions in the previous section. As we approach scales of $0.1$ $h^{-1}$ Mpc, we find that $f(\mathscr{M}\geq)$ is enhanced for the true samples as compared to the randomized samples at high enough Eddington ratios. These enhancements start to appear at $\eta \sim 0.01$ and increase up to factors of $\sim4$ for the most luminous AGN~($\eta\sim0.7-1$). At scales within $0.01$ $h^{-1}$ Mpc, we see the strongest enhancements; in particular, if we look at $z=0,0.6$ pairs where we have the best statistics, the enhancements are up to factors of $\sim7-9$ for the most luminous AGN~($\eta\sim0.1$). Furthermore, at $z=0$ the enhancements start appearing at Eddington ratios as low as $\eta\gtrsim0.001$.
To summarize the above trends, we find that more luminous AGN have enhanced likelihood of having companion BHs within
0.1 $h^{-1}$ Mpc; at the same time, there is no enhancement in the likelihood of AGN having companion BHs within $1$ $h^{-1}$ Mpc. This further corroborates the inferences drawn in the previous sections; i.e., no signatures of large scale AGN clustering are seen in
our identified BH systems, but enhanced AGN activity is
associated with rich environments on small scales~($\leq0.1$ $h^{-1}$ Mpc), likely triggered by mergers and interactions between galaxies.
\subsubsection{Companions of X-ray Selected AGN}
From an observational perspective, it is also instructive to
estimate the fraction of AGN in BH pairs and multiples that would be detectable in a survey with a given flux limit.
Therefore, in addition to analysing AGN samples characterized by Eddington ratios, we now repeat our analysis by characterizing AGN in multiple BH systems based on their estimated intrinsic fluxes in the 2-10 keV~(hard) and 14-195 keV~(\textit{Swift}/BAT ultra hard) X-ray bands (Figure \ref{hard_xray}, upper panels). For the 2-10 keV band, the bolometric corrections are adopted from \cite{2012MNRAS.425..623L}, where they assume best fit relations between the bolometric luminosities and 2-10 keV X-ray luminosities of the AGN samples from the \textit{XMM}-COSMOS~\citep{2009A&A...497..635C} survey. For the 14-195 keV X-ray band, we assume a constant bolometric correction of 15, as in previous
analyses of \textit{Swift}/BAT AGN~\citep[e.g.,][]{2010MNRAS.402.1081V, 2012ApJ...746L..22K}.
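To make this selection concrete, the conversion from a simulated bolometric luminosity to an observed X-ray flux can be sketched as follows~(our own illustration; the constant bolometric correction of 15 applies to the 14-195 keV band as stated above, a luminosity-dependent correction would be used for 2-10 keV, K-corrections are neglected, and the cosmology is the one quoted in Section \ref{methods}):
\begin{verbatim}
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=67.74, Om0=0.3089, Ob0=0.0486)

def xray_flux(L_bol, z, bol_corr=15.0):
    """Observed X-ray flux [erg s^-1 cm^-2] for bolometric luminosity
    L_bol [erg s^-1] at redshift z, for a constant bolometric correction."""
    d_L = cosmo.luminosity_distance(z).to(u.cm).value
    return (L_bol / bol_corr) / (4.0 * np.pi * d_L**2)

# Is a (hypothetical) L_bol = 10^45 erg/s AGN at z = 0.035 detectable
# in the 105-month Swift-BAT survey (14-195 keV)?
F_LIM_BAT = 7.2e-12  # erg s^-1 cm^-2
print(xray_flux(1e45, 0.035) > F_LIM_BAT)
\end{verbatim}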
Figure \ref{hard_xray}~(upper panels) shows the fraction of AGN found in multiple BH systems ($f_{\mathscr{M}\geq2}$) as a function of the assumed hard or ultra-hard X-ray flux threshold.
We see that in either case, brighter AGN are more likely to live in pairs and multiples, which is not surprising given the trends seen with Eddington ratios.
We first focus on predictions for ultra-hard X-ray AGN (Figure \ref{hard_xray}, left panels). The 105-month all-sky \textit{Swift}/BAT survey has a flux limit of $7.2\times10^{-12}~\mathrm{erg~s^{-1}~cm^{-2}}$ in the 14-195 keV band \citep{2018ApJS..235....4O}. At these X-ray energies, even heavily obscured AGN experience little attenuation. Coupled with its sky coverage, this means that the \textit{Swift}/BAT survey yields a uniquely complete sample of low-redshift AGN ($z\lesssim0.05$). We present predictions for the simulation snapshot at $z=0.035$~(maroon lines in Figure \ref{hard_xray}: upper-left panels).
Let us first look at $1$ $h^{-1}$ Mpc\ scales, where we previously found no enhanced AGN activity in BH pairs or multiples. We see that the majority~($\sim70-80\%$) of the detectable AGN live in BH pairs and multiples within scales of $1$ $h^{-1}$ Mpc. In line with our previous results, however, we conclude that this is primarily driven by the gravitational clustering of halos hosting BHs and has little to do with the AGN activity of the BHs. On scales $\leq 0.1$ $h^{-1}$ Mpc, where we did previously find enhanced AGN activity in multiple BH systems, we see that $\sim10-20\%$ of the detectable AGN population is associated with BH pairs and multiples.
We can compare this population to the \textit{Swift}/BAT-selected AGN sample studied in \citet{2012ApJ...746L..22K}. Using optical imaging, they selected BAT AGN hosted in galaxies that have companions within 100 projected kpc. \citet{2012ApJ...746L..22K} then compared with 2-10 keV X-ray observations to identify those companions that also hosted AGN to determine the dual AGN frequency on these scales. They assumed a minimum AGN luminosity of $L_{2-10~\mathrm{keV}} > 10^{42}$ erg~s$^{-1}$, to avoid confusion with X-ray emission from star-forming regions. We apply similar criteria to identify dual AGN in our data, selecting AGN that would be detectable in the 105-month BAT survey and that have companion AGN within 0.1 $h^{-1}$ Mpc\ with $L_{2-10~\mathrm{keV}} > 10^{42}$ erg s$^{-1}$. Using these criteria, we find that $\sim10\%$ of BAT-detected AGN in our sample are dual AGN on 0.1 $h^{-1}$ Mpc\ scales (thin lines in Figure \ref{hard_xray}, upper left panel). This is consistent with the results of \citet{2012ApJ...746L..22K}.
In the lower left panel of Figure \ref{hard_xray}, we examine the fraction of BAT AGN with at least one companion BH that would also be detected at the limit of the BAT survey. The fraction decreases with increasing flux, owing to the rarity of luminous AGN. We see that at $z\sim0.035$, $\sim3\%$ and $\sim10\%$ of BAT AGN have companions within $0.1$ and $1$ $h^{-1}$ Mpc, respectively, that are detectable at the 105-month survey limit.
We similarly examine the companions of hard X-ray selected AGN, based on their inferred intrinsic 2-10 keV flux. We do not attempt to model the amount of AGN obscuration, although we note that many AGN have significant attenuation in the 2-10 keV band, particularly in late stage mergers~\citep[e.g.,][]{2015ApJ...814..104K,2017MNRAS.468.1273R,2020arXiv200601850S}. These results will therefore be most useful for comparison with X-ray AGN for which intrinsic luminosities can be estimated. We focus on the \textit{Chandra} Stripe 82 ACX survey~(solid vertical line in Figure \ref{hard_xray}), owing to its large area ($\sim17~\mathrm{deg^2}$) that yields statistically large samples of X-ray bright AGN. At $1$ $h^{-1}$ Mpc \ scales (due to gravitational clustering), $\sim70-80\%$ of the detectable AGN live in BH pairs and multiples at $z\sim0.6-3$; this decreases to $\sim40\%$ at $z\sim0.035$. At $\leq 0.1$ $h^{-1}$ Mpc\ scales, $\sim10-20\%$ of the detectable AGN population is associated with BH pairs and multiples at $z\sim0.6-1.5$; this decreases to $\sim5\%$ at $z\sim0.035$. Within $0.01$ $h^{-1}$ Mpc \ scales, $\lesssim1\%$ of the detectable AGN are associated with BH pairs.
These findings are consistent with our previous results; for the Eddington ratio selected AGN in Figure \ref{AGN_that_are_systems}, we see that at the highest Eddington ratios, only up to $\sim40\%$ of AGN have companions within $0.1$ $h^{-1}$ Mpc. Overall, this suggests that the majority of AGN activity is actually \textit{not} associated with mergers/ interactions, but is instead driven by secular processes. This agrees with other recent observational~\citep{2014MNRAS.439.3342V,2017MNRAS.466..812V,2019ApJ...882..141M,2019ApJ...877...52Z} and theoretical studies~\citep{2020MNRAS.494.5713M}.
Finally, we examine the fraction of Stripe 82 AGN that would have companions detectable at various flux limits, giving rise to dual or multiple AGN~(lower right panel of Figure \ref{hard_xray}). Here we primarily focus on summarizing the results for companions within $\leq0.1$ $h^{-1}$ Mpc~(where we report statistically robust evidence of AGN enhancements), but results at $\leq0.01,1$ $h^{-1}$ Mpc ~are also presented in Figure \ref{hard_xray} for completeness. At $z\sim0.035$, where the Stripe 82 flux limit corresponds to an AGN luminosity of $\sim 3 \times 10^{40}$ erg s$^{-1}$, almost all the available companions are detectable, implying that $\sim5\%$ of Stripe 82 AGN have companions already detectable without deeper observations. However, such low luminosities are quite difficult to distinguish from X-ray emission from star-forming regions. At $z\sim0.6$, where even intensely star forming regions are unlikely to mimic detectable AGN at the Stripe 82 limit, $\sim2\%$ of Stripe 82 AGN have companions detectable without deeper observations. At higher redshifts~($z\sim1.5,3$), there are no companions that are detectable within Stripe 82. However, the prospect of detecting companions is better for deeper observations such as the \textit{Chandra} deep field~(CDF) and extended \textit{Chandra} deep field~(ECDF) surveys. In particular, at the flux limit of the ECDF, almost all the available companions are detectable, implying that $\sim20\%$ \& $\sim30\%$ of Stripe 82 AGNs at $z=1.5$ \& $z=3$, respectively, have companions detectable at the ECDF limit.
\subsection{Disentangling AGN enhancements in multiples from trends with host mass}
\label{decoupling_degeneracy}
\begin{figure*}
\centering
\includegraphics[width=14.7cm]{eddington_ratio_bh_mass_L75n1820TNG.png}
\caption{AGN Eddington ratio~($\eta$) vs. BH mass~($M_{\rm bh}$). The overall sample is split into objects with Eddington ratios higher~(maroon color) and lower~(green color) than the median Eddington ratio at fixed BH mass.
From left to right, the panels show snapshots at $z=$ 0, 0.6, 1.5 \& 3.
This demonstrates how we split the BHs
into high- and low-Eddington ratio populations
to investigate the relative likelihood of these populations
to live in BH pairs and multiples.}
\label{AGN_selection}
\includegraphics[width=14.7cm]{mass_function_L75n1820TNG_bh.png}
\caption{\textbf{Top panels}: Maroon and green solid lines are the host halo mass~($M_{h}$) functions of BHs with Eddington ratios higher and lower than the median value at fixed BH mass. \textbf{Bottom panels}: Ratio between the two mass functions shown in the top panels. The green region ($10^{11}<M_{h}<10^{12}~h^{-1}~M_{\odot}$) represents the BHs that
were selected for the computation of $f(\mathscr{M}\geq)$ in Figure \ref{quartiles}.
This region is chosen to ensure that the halo mass functions for the high- and low-Eddington ratio
populations of Figure \ref{AGN_selection} match to within $\sim30\%$. From left to right, the panels show
snapshots at $z=$ 0, 0.6, 1.5 \& 3.}
\label{host_halo_mass_functions}
\includegraphics[width=14.7cm]{stellar_mass_function_L75n1820TNG_bh.png}
\caption{\textbf{Top panels}: Blue and red solid lines are the host galaxy~(subhalo) stellar mass~($M_*$) functions of BHs with Eddington ratios higher and lower than the median value at fixed BH mass. \textbf{Bottom panels}: Ratio between the blue vs. red lines presented on the top panels. The green region represents the BHs that
were selected for the computation of $f(\mathscr{M}\geq)$ in Figure \ref{quartiles}. This region of $10^{8.6}<M_{*}<10^{10.6}~h^{-1}~M_{\odot}$ is chosen
to ensure that the stellar mass functions for the high- and low-Eddington ratio
populations of Figure \ref{AGN_selection} match to within $\sim30\%$. From left to right, the panels show snapshots at $z=$ 0, 0.6, 1.5 \& 3.}
\label{host_stellar_mass_functions}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{upper_lower_quartile_L75n1820TNG_proper_range_bh_mass_ratio_edd.png}
\caption{\textbf{Upper/ larger panels:} $f(\mathscr{M}\geq2)$, the fraction of primary AGN that live in pairs or multiples, plotted as a function of redshift. The blue and red lines represent primary AGN with Eddington ratios higher and lower, respectively, than the median value at fixed BH mass. The solid lines correspond to true BH systems, while the
dashed lines correspond to the median values of 10 randomized systems.
We further subsample the populations such that host halo and stellar masses
are confined to the green highlighted regions in Figures \ref{host_halo_mass_functions} and \ref{host_stellar_mass_functions}, respectively. \textbf{Lower/ smaller panels:} The ratio
between the predictions of $f(\mathscr{M}\geq)$ for the true samples of BH systems vs. that of the randomized samples. We find that the high Eddington ratio primary AGN~(blue color) have a \textit{slightly higher likelihood} of belonging to multiple BH systems within scales of $0.1$ and $0.01$ $h^{-1}$ Mpc, compared to
random subsets of BHs. At the same time, low Eddington ratio primary AGN~(red color) have a \textit{slightly lower likelihood} of belonging to multiple BH systems at scales within $0.1$ and $0.01$ $h^{-1}$ Mpc, compared to randomly chosen subsets of BHs. These effects are not seen at scales of $1$ $h^{-1}$ Mpc.}
\label{quartiles}
\end{figure*}
We have so far established that there is enhanced AGN activity in close systems of multiple BHs ($\leq 0.1$ $h^{-1}$ Mpc). The AGN enhancement at these scales may partly be attributed to AGN triggering by galaxy mergers and interactions. At the same time, there may also be a contribution from: 1) a possible correlation between AGN Eddington ratio and the mass of its host halo or galaxy, and 2) the fact that more massive haloes host a higher number of
galaxies containing BHs, and therefore are richer in both active and inactive BHs.
In this section, we shall statistically control for the host halo and galaxy mass and further look for possible enhancements in the AGN activity within BH systems that can be solely attributed to small scale galactic dynamics.
Figure \ref{AGN_selection} shows the Eddington ratio vs. the BH mass
of the overall BH populations within \texttt{TNG100}. We first divide the population based on whether the Eddington ratios are higher~(blue circles) or lower~(red circles) than the median value at fixed BH mass. We then ensure that for each of the two populations, we select subsamples with similar host halo masses and host galaxy stellar masses, using the following procedure. In Figure \ref{host_halo_mass_functions}, blue and red solid lines show the resulting host halo mass functions of the BH samples represented by circles of the corresponding color in Figure \ref{AGN_selection}. The sharp drop in the halo mass function at $M_h\sim5\times10^{10}~h^{-1}~M_{\odot}$ corresponds to the threshold halo mass for inserting BH seeds. (The small tail of $M_h<5\times10^{10}~h^{-1}~M_{\odot}$ halos corresponds to halos that were seeded with BHs at an earlier time, but have since lost some mass due to tidal stripping.) We find that for halos with $M_h\gtrsim10^{12}~h^{-1}~M_{\odot}$ and $M_h\lesssim10^{11}~h^{-1}~M_{\odot}$, the halo mass functions for hosts of high- and low-Eddington ratio BHs differ slightly in their normalization. Thus, for this analysis we focus on BHs hosted in $10^{11}< M_h < 10^{12}~h^{-1}~M_{\odot}$ halos at all snapshots between $z=0$ and $z=4$, where the difference in the halo mass function is small~($\lesssim30\%$).
We follow the same procedure for host galaxy stellar masses~($M_*$), wherein we select galaxies with $10^{8.6}<M_*<10^{10.6}~h^{-1}~M_{\odot}$ such that
the stellar mass functions differ by $\lesssim30\%$ between high- and low-Eddington ratio BH hosts (Figure \ref{host_stellar_mass_functions}: green regions).
Overall, we have 1) divided the BH population into those with Eddington ratios higher and lower than the median value at fixed BH mass, and 2) further selected subsamples of both the populations with minimal differences in their host halo masses and host galaxy stellar masses. In the process, we have constructed two BH subsamples with similar host halo properties that differ solely in their Eddington ratios. We can now quantify the fraction of AGN in each of these subsamples that live in BH pairs and multiples, relative to the corresponding randomized BH samples.
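Schematically, the construction of these two mass-matched subsamples can be written as follows~(a sketch with our own variable names and an arbitrary number of BH-mass bins; \texttt{m\_bh}, \texttt{eta}, \texttt{m\_halo} and \texttt{m\_star} denote BH mass, Eddington ratio, host halo mass and host stellar mass in the units used above):
\begin{verbatim}
import numpy as np

def above_median_eta(m_bh, eta, n_bins=30):
    """Boolean mask: Eddington ratio above the median of its BH-mass bin."""
    edges = np.logspace(np.log10(m_bh.min()), np.log10(m_bh.max()),
                        n_bins + 1)
    idx = np.clip(np.digitize(m_bh, edges) - 1, 0, n_bins - 1)
    high = np.zeros(m_bh.size, dtype=bool)
    for b in range(n_bins):
        sel = idx == b
        if sel.any():
            high[sel] = eta[sel] > np.median(eta[sel])
    return high

def mass_matched(m_halo, m_star):
    """Host-mass cuts corresponding to the green regions in the halo and
    stellar mass function figures above (h^-1 Msun)."""
    return ((m_halo > 1e11) & (m_halo < 1e12) &
            (m_star > 10**8.6) & (m_star < 10**10.6))

# high_sample = mass_matched(m_halo, m_star) &  above_median_eta(m_bh, eta)
# low_sample  = mass_matched(m_halo, m_star) & ~above_median_eta(m_bh, eta)
\end{verbatim}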
Figure \ref{quartiles}~(upper panels) shows the fraction $f(\mathscr{M}\geq2)$ of AGN that are primary members of BH pairs and multiples,
plotted as a function of redshift for the high- and low-Eddington ratio
populations of primary BHs described above (and in Figures \ref{AGN_selection}-\ref{host_stellar_mass_functions}).
The solid and dashed lines correspond to the predictions for the true BH systems and the randomized samples, respectively. For both the randomized samples as well as true samples, more luminous AGN
have a greater likelihood of being members of BH systems, compared to those that are less luminous.
This, again, is in part due to the statistical bias arising from our choice of the most luminous AGN as the primary. In order to isolate
the physical effects, we look at the ratio of $f(\mathscr{M}\geq2)$ between the true samples and the randomized samples, which are shown in Figure \ref{quartiles}~(lower panels). At scales within $0.1$ $h^{-1}$ Mpc, we find that~(with the exception of $z\sim0$) the likelihood for more luminous AGN
to live in BH pairs and multiples is enhanced for the true samples compared to that of randomized samples. Likewise, the likelihood of less luminous AGN
to live in BH pairs and multiples is suppressed for the true samples compared to that of randomized samples. Therefore, at $z\gtrsim0.6$, we see clear
evidence that at scales of $\leq0.1$ $h^{-1}$ Mpc, BH pairs and multiples
are indeed associated with more enhanced AGN activity, independent of the overall masses of the host halos and galaxies.
At $z\sim0$,
we do not see any enhancement at $\leq0.1$ $h^{-1}$ Mpc\ scales. This is simply because
the typical Eddington ratios at $z=0$ are lower overall~(median Eddington ratio $\sim0.01$, see Figure \ref{AGN_selection}). As we saw in Figure \ref{AGN_that_are_systems}, no significant enhancements are seen in low-luminosity ($\eta\lesssim0.01$) AGN for $d_{\rm max}\geq0.1$ $h^{-1}$ Mpc\ BH systems.
However, if we look at $\leq0.01$ $h^{-1}$ Mpc\ scales, we do see evidence of enhanced AGN activity at $z\lesssim2$, including at $z=0$.
Additionally, note that the enhancements in $f(\mathscr{M}\geq2)$
seen in Figure \ref{quartiles} are significantly smaller than the strongest enhancements reported for the most luminous AGN~($\eta\gtrsim0.7$) shown in Figure \ref{AGN_that_are_systems}. But this is simply because selecting the high Eddington ratio samples in Figure \ref{quartiles} is broadly equivalent to selecting samples with ``effective Eddington ratio thresholds'' ranging from $\eta\sim0.01$ at $z=0$ to $\eta\sim0.1$ at $z=3$~(see Figure \ref{AGN_selection}); these values are significantly smaller than $\eta\gtrsim0.7$ and therefore correspond to weaker enhancements.
Lastly, we also show
the results for the $\leq1$ $h^{-1}$ Mpc\ scales (leftmost panels in Figure \ref{quartiles}), wherein
we find no difference in $f(\mathscr{M}\geq2)$ between true and randomized BH systems;
this is expected, given the results in Figure \ref{AGN_that_are_systems}.
To summarize, the results in this section further solidify
the association of enhanced AGN activity with BH pairs and multiples within
scales of
0.1 $h^{-1}$ Mpc. When controlled for host halo mass, the fraction of high-Eddington-ratio AGN that are in multiple BH systems is only modestly enhanced over random associations. Nonetheless, our results demonstrate that this trend
does exist independent of the fact that
massive halos and galaxies tend to host more luminous and more numerous BHs. This enhancement in AGN activity is likely driven by mergers and interactions between galaxies.
\subsection{Impact of small scale environment on the Eddington ratios of BH systems}
\label{impact_on_edd}
Having established the influence of small scale~($\leq 0.1$ $h^{-1}$ Mpc) environment on AGN activity, we now quantify in greater detail the magnitude of Eddington ratio enhancements in
BH pairs and multiples compared to isolated BHs. In particular, we look at the Eddington ratios associated with the primary members of BH multiples~($\mathscr{M}\geq2,3,4$) as well as those
of isolated BHs~($\mathscr{M}=1$),
as a function of redshift. Figure \ref{average_eddington_ratio} shows that the Eddington ratios for the true sample of BH multiples
increase with multiplicity at fixed redshift, but so do the Eddington ratios of the random samples. This again owes to the fact that
the primary AGN form a biased sample compared to the full population.
We are most interested in the comparison of the median Eddington ratios between the true samples of BH multiples and
the randomized samples~(solid vs dashed lines); these are shown in the lower panels of Figure \ref{average_eddington_ratio}. At scales of $1$ $h^{-1}$ Mpc, the median Eddington ratios for the true samples of BH multiples have minimal differences~($\lesssim0.1$ dex) with respect to the randomized samples, as expected from our analysis so far. At scales of $d_{\mathrm{max}}=0.1$ $h^{-1}$ Mpc~(middle panel), we find that the Eddington ratios for the true samples of BH multiples tend to be increasingly
enhanced
at higher redshifts. At the highest redshifts of $z\sim3-4$, the enhancements are up to $\sim0.3-0.5~\mathrm{dex}$. This agrees with our results in Figure \ref{quartiles} and likely reflects
stronger enhancements in merger-driven AGN activity due to greater availability of cold gas at higher redshifts. At the lowest redshifts of $z\lesssim0.6$, there is no significant enhancement of the Eddington ratios at $\leq0.1$ $h^{-1}$ Mpc\ scales.
At
separation scales of $d_{\mathrm{max}}=0.01$ $h^{-1}$ Mpc, however,
where we only have BH pairs,
we find that at $z\lesssim0.6$, the median Eddington ratios for the true samples are enhanced by $\sim0.3-0.4~\mathrm{dex}$. (At higher redshifts, some evidence of enhanced Eddington ratios is also seen, but the results are at best only marginally significant.)
As we discussed earlier, our sample of BH pairs at these scales is incomplete, because a significant fraction of them merge prematurely due to the BH repositioning scheme implemented by the simulation. Therefore, we simultaneously look at the complete sample of BH mergers, which are recorded at much higher time resolution than the snapshot data.
There are $15953$ merger events recorded during the simulation run. We modulate this sample of merging BHs such that it has
similar distributions of masses and mass ratios as
the sample of $0.01$ $h^{-1}$ Mpc\ BH pairs. We perform the modulation based on randomly selecting subsamples of BHs at various bins of mass ratios and masses, where the relative fraction of objects in each bin is tuned to represent the mass ratio and mass distributions of $0.01$ $h^{-1}$ Mpc\ BH pairs~(we do this to make the samples more comparable, since the Eddington ratios have been found to depend on the mass ratios as well as masses of the merging BHs). After the modulation we end up with $1970$ merging BH systems
(as compared to the much smaller number of $0.01$ $h^{-1}$ Mpc\ scale BH pairs, which is only 285). We find that the median Eddington ratios of these merging BH pairs~(cyan lines in the rightmost panel) are broadly consistent with that of the $0.01$ $h^{-1}$ Mpc\ scale pairs. Thus, we can conclude that the incompleteness of small-separation pairs does not introduce systematic bias into our results, and that the higher Eddington ratios seen in these close pairs reflect a genuine enhancement in AGN activity.
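The ``modulation'' described above is essentially an importance resampling of the merger catalogue in bins of BH mass and mass ratio. A minimal sketch of one possible implementation (the variable names are illustrative and do not refer to the actual analysis code) is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)

def resample_to_match(src_logm, src_q, ref_logm, ref_q, n_draw, n_bins=10):
    # bin edges defined by the reference sample (the 0.01 Mpc/h pairs)
    m_edges = np.quantile(ref_logm, np.linspace(0.0, 1.0, n_bins + 1))
    q_edges = np.quantile(ref_q, np.linspace(0.0, 1.0, n_bins + 1))
    target, _, _ = np.histogram2d(ref_logm, ref_q, bins=[m_edges, q_edges])
    source, _, _ = np.histogram2d(src_logm, src_q, bins=[m_edges, q_edges])
    i = np.clip(np.digitize(src_logm, m_edges) - 1, 0, n_bins - 1)
    j = np.clip(np.digitize(src_q, q_edges) - 1, 0, n_bins - 1)
    # per-object weight: target bin fraction divided by source bin occupancy
    w = (target[i, j] / target.sum()) / np.maximum(source[i, j], 1.0)
    return rng.choice(src_logm.size, size=n_draw, replace=False, p=w / w.sum())
\end{verbatim}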
To summarize, we find
a measurable impact of the small scale~($\leq0.1$ $h^{-1}$ Mpc) environment on
AGN Eddington ratios,
which generally tends to increase at higher redshift. The median Eddington ratios are, at best, enhanced up to factors of $\sim2-3$~($0.3-0.5$ dex). This supports the existence of a merger-AGN connection. However, because the enhancements in AGN activity for BH pairs and multiples are relatively modest, our results do not suggest that merger-driven AGN fueling is a dominant channel of BH growth overall \citep[see also][and Thomas et~al., in prep.]{2020MNRAS.494.5713M}.
\begin{figure*}
\includegraphics[width=17cm]{median_eddington_ratio_mass_ratio_edd.png}
\caption{\textbf{Upper / larger panels}: Median values of the AGN Eddington ratio $\left<\log_{10}\eta\right>$ of the primary BHs of pairs and multiples as a function of redshift. The blue, red, and green circles correspond to all multiple BH systems with $\mathscr{M}\geq2,3,$ \& 4, respectively. The black circles correspond to isolated BHs~($\mathscr{M}=1$). The dashed lines correspond to the median values for 10 samples of randomly selected systems. The cyan lines correspond to pairs of merging BHs recorded at high time resolution during the simulation run; the sample of merging BHs has
been modulated to have similar distributions of masses and mass ratios as the $0.01$ $h^{-1}$ Mpc\ pairs. \textbf{Lower / smaller panels}: The difference~(solid$-$dashed lines) between
$\left<\log_{10}\eta\right>$ for the true BH systems vs.
the randomized systems. The different rows correspond to various values of $d_{\mathrm{max}}$. The error-bars on the y axis are obtained using bootstrap resampling. We see that Eddington ratios associated with primary BHs are enhanced when they belong to BH systems on 0.1 or 0.01 $h^{-1}$ Mpc\ scales, relative to
randomly chosen subsets of BHs. On $0.01$ $h^{-1}$ Mpc\ scales, the AGN enhancements
are most significant at $z\sim0$. In contrast, little enhancement in AGN activity is seen in
BH systems at scales of $1$ $h^{-1}$ Mpc.}
\label{average_eddington_ratio}
\end{figure*}
\section{Conclusions}
\label{conclusions}
In this work, we investigate the role of environment
on
AGN activity within the \texttt{TNG100} realization of the \texttt{Illustris-TNG} simulation suite. In particular, we investigate whether BH pairs and multiples~(within separations of $0.01-1$ $h^{-1}$ Mpc)
have enhanced AGN activity compared to samples of randomly assigned pairs and multiples.
The number density of BHs in TNG100 is
$n \sim 0.06\,h^3$ Mpc$^{-3}$ at $z\lesssim1.5$ ($n\sim 0.02\, h^3$ Mpc$^{-3}$ at $z=3$). About $10\%$ of these BHs live in pairs on scales of 0.1 $h^{-1}$ Mpc, and $\sim10\%$ of these pairs (i.e., $\sim 1\%$ of all BHs) have additional companions, forming triples or higher-order multiples. A similar fraction ($\sim12\%$) of BHs are in pairs on 1 $h^{-1}$ Mpc\ scales, but $\sim30\%$ of these ($\sim3.6\%$ of all BHs) have additional companions on these scales. On the smallest scales ($d_{\rm max}=0.01$ $h^{-1}$ Mpc), in contrast, only $\sim 0.2\%$ of BHs are found in pairs (though as discussed above, this sample of pairs is incomplete). Overall,
pairs and triples live in haloes with a range of masses, but the median host halo mass ($\lesssim10^{12}~h^{-1}~M_{\odot}$) varies little with redshift.
We find that the AGN activity associated with these BH systems is enhanced at scales within $0.01$ $h^{-1}$ Mpc\ and $0.1$ $h^{-1}$ Mpc\ across the entire redshift range~($z\sim0-4$) covered in this study. However, no such enhancements are found for BH systems within $1$ $h^{-1}$ Mpc\ scales. The lack of enhancements in AGN activity at $\sim1$ $h^{-1}$ Mpc\ scales is consistent with recent observational constraints on large scale clustering, which were found to exhibit no significant dependence on AGN luminosity. On the other hand, the enhancements at smaller scales~($0.01$ and $0.1$ $h^{-1}$ Mpc) can be attributed to AGN activity triggered by merging and interacting galaxies.
The influence of the small scale~($\leq0.1$ $h^{-1}$ Mpc) environment on the AGN activity is strongest at high Eddington ratios.
In particular, for the highest Eddington ratio~($\gtrsim0.7$) AGN, the AGN fractions are significantly enhanced~(up to factors of $\sim3-6$) for pairs, triples and quadruples at scales within $\leq0.1 $ $h^{-1}$ Mpc\ compared to random BH samples.
As we decrease the Eddington ratio thresholds, these environmental enhancements gradually become smaller and eventually disappear around Eddington ratios of $\sim 0.01$.
Additionally, the enhancements~(at fixed Eddington ratio) also tend to be highest at the smallest ($\leq0.01$ $h^{-1}$ Mpc) scales. For example, at Eddington ratios greater than $0.1$, the AGN fractions of $\leq0.01$ $h^{-1}$ Mpc\ pairs at $z=0$ are enhanced up to factors of $\sim8$. Similarly, we also find that more luminous AGN have an enhanced likelihood~(up to factors of $\sim4$ and $\sim9$ within 0.1 and $0.01$ $h^{-1}$ Mpc\ scales, respectively) of living in BH pairs and multiples, compared to random subsamples of BHs.
In order to control for possible systematic biases, we investigate whether
our results are influenced by the possibility that more luminous AGN tend to
live in more massive
galaxies and halos, which incidentally tend to also host a higher number of BHs. We found that even after statistically controlling for the host halo mass and host galaxy stellar mass, more luminous AGN continue to have enhanced likelihood of living in BH pairs and multiples within
0.1 $h^{-1}$ Mpc, compared to random subsamples of BHs. This further solidifies the
correlation between AGN activity and the richness of the small scale~($\leq0.1$ $h^{-1}$ Mpc) environment over the entire redshift range between 0 to 4. Additionally, we find that the enhancement in accretion rates within BH systems is stronger at higher redshift, which presumably reflects the
higher cold gas fractions at higher redshifts.
Because the Eddington ratio of AGN is not a directly observable quantity (and BH mass measurements must often rely on indirect methods), we also estimate the X-ray luminosities of the AGN in our sample and determine the likelihood for X-ray selected AGN to live in BH pairs and multiples, as a function of the X-ray flux limits relevant to current surveys. At the limit of the 105 month \textit{Swift}-BAT survey, about $10-20\%$ of detectable AGN at $z=0.035$ have at least one secondary companion within $0.1$ $h^{-1}$ Mpc\ scales.
About $3\%$ of these BAT AGN have companions that are also detectable at the \textit{Swift}-BAT survey flux limit. Additionally, when we define dual AGN as in~\cite{2012ApJ...746L..22K}~(i.e., when AGN companions are selected based on a minimum 2-10 keV luminosity of $10^{42}$ erg s$^{-1}$), we report a dual AGN frequency of $\sim10\%$, consistent with their measurements.
If instead we consider the companions of AGN selected in the 2-10 keV band at the limit of the \textit{Chandra} Stripe 82 survey (with no constraints on the ultra-hard X-ray band), we find that $\sim 5$\% of AGN live in pairs and multiples within 0.1 $h^{-1}$ Mpc \ scales at $z=0.035$. At higher redshifts~($z\sim0.6-1.5$), up to $\sim30\%$ of such AGN have companions within 0.1 $h^{-1}$ Mpc\ scales. However, for only $\lesssim2\%$ of these $z\gtrsim0.6$ AGN, the companions are detectable without observations deeper than Stripe 82. At the flux limits of ECDF, most of the companions~(up to $z\sim3$) are available for detection, but those with low X-ray luminosities will likely be indistinguishable from star formation, and many will also have significant dust attenuation.
With its wide-field imaging capabilities, the upcoming Advanced Telescope for High Energy Astrophysics (Athena) mission~\citep{2013sf2a.conf..447B} will enable new surveys that are expected to detect hundreds of AGN at $z>6$~\citep{2013arXiv1306.2307N}. The proposed Advanced X-ray Imaging Satellite (AXIS)~\citep{2018SPIE10699E..29M} and Lynx X-ray Observatory (Lynx) missions~\citep{2018arXiv180909642T} would enable detection of large new populations of AGN, including high redshift AGN and close ($\lesssim 0.01$ $h^{-1}$ Mpc) dual AGN, owing to their sub-arcsecond imaging requirements and sensitivities that are factors of 10 and 100 better than \textit{Chandra}, respectively. Our finding that merger-driven AGN activity is a significant but subdominant channel for BH fueling in TNG100 provides additional motivation for pursuing these key science goals with Athena.
While the enhanced AGN activity in rich, small-scale environments is
consistent with the presence of the merger-AGN connection, we find
that only a subdominant~(at best $\sim40\%$ for the highest Eddington ratio AGN) fraction of AGN actually live in BH pairs and multiples. Furthermore, enhancements in the Eddington ratios in BH pairs and multiples are, at best, only up to factors of $\sim2-3$. Therefore, most AGN fueling as well as BH growth in \texttt{TNG100} may still be primarily triggered by secular processes, with a significant but minor role played by galaxy mergers/interactions. We plan to explore this question in more detail in future work, including our companion paper, Thomas et~al.~(in prep.).
\section{Introduction}
For integers $n \geq 1$ and $k \geq 0$, denote $S_k = \sum_{i=1}^{n} i^k$. It is well-known that $S_k$ can be expressed in the
so-called Faulhaber form (see, e.g., \cite{beardon,cere1,cere2,cere3,cere4,edwards1,edwards2,knuth,kotiah,krishna,witmer})
\begin{align}
S_{2k} & = S_2 \big[ b_{k,0} + b_{k,1} S_1 + b_{k,2} S_{1}^2 + \cdots + b_{k,k-1} S_{1}^{k-1} \big], \label{f1} \\
S_{2k+1} & = S_{1}^2 \big[ c_{k,0} + c_{k,1} S_1 + c_{k,2} S_{1}^2 + \cdots + c_{k,k-1} S_{1}^{k-1} \big], \label{f2}
\end{align}
where $b_{k,j}$ and $c_{k,j}$ are non-zero rational coefficients for $j =0,1,\ldots,k-1$ and $k \geq 1$. In particular, $S_3 = S_{1}^2$. We
can write \eqref{f1} and \eqref{f2} more compactly as
\begin{align*}
S_{2k} & = S_2 F_{2k}(S_1), \\
S_{2k+1} & = S_{1}^2 F_{2k+1}(S_1),
\end{align*}
where both $F_{2k}(S_1)$ and $F_{2k+1}(S_1)$ are polynomials in $S_1$ of degree $k-1$. For later convenience, we also quote the
relationship between $S_{2}^2$ and $S_1$, namely
\begin{equation}\label{ship}
S_{2}^2 = \frac{1}{9} S_{1}^{2} (1 + 8 S_{1}).
\end{equation}
Recently, Miller and Trevi\~{n}o \cite{miller} derived the following formulas for $S_{2r+1}$ and $S_{2r+2}$:
\begin{align}
S_{2r+1} & = \frac{r+1}{2} \left( S_{r}^2 - \sum_{i=r}^{2r-1} d_i S_i \right), \label{mt1} \\
S_{2r+2} & = \frac{(r+1)(r+2)}{2r+3} \left( S_r S_{r+1} - \sum_{i=r+1}^{2r} e_i S_i \right), \label{mt2}
\end{align}
from which, by using mathematical induction, they concluded (see \cite[Theorem 1]{miller}) that, for $k \geq 1$, there exists a
polynomial $g_k \in \mathbb{Q}[x,y]$ such that $g_k(0,0)=0$ and $S_k = g_k(S_1,S_2)$.
In this note we argue that, actually, a slight reformulation of the said theorem enables one to demonstrate the theorem of Faulhaber
embodied in equations \eqref{f1} and \eqref{f2} above. Indeed, as will become clear in the next section, the formulas in \eqref{mt1}
and \eqref{mt2} can be used to generate recursively the Faulhaber polynomials in \eqref{f2} and \eqref{f1}, respectively. Before going
further, it should be noticed that, throughout this work, we adopt the convention of expressing $S_{2}^2$ as in equation \eqref{ship},
so that we consider that, formally, $S_{2}^2$ does {\it not\/} depend on $S_2$. According to \eqref{f1}, this implies that, if $k$ is even,
$S_{k}^2$ is a polynomial in $S_1$ (see \cite[Corollary 3.2]{beardon}). For example, we will write $S_5$ as $\frac{1}{3} (4 S_{1}^3 -
S_{1}^2 )$, and {\it not\/} as $\tfrac{1}{2}(3S_{2}^2 - S_{1}^2)$. Assuming this convention, it turns out that, for odd $k$, say $k =
2r+1$, we can make the right-hand side of \eqref{mt1} to depend exclusively on $S_1$, so that the resulting polynomial for $S_{2r+1}$
obtained from \eqref{mt1} yields the Faulhaber polynomial $S_{2r+1} = S_{1}^2 F_{2r+1}(S_1)$. On the other hand, starting with
\eqref{mt2}, one can indeed show that $S_{2r+2} = g_{2r+2}(S_1,S_2)$, but this relationship is, again, nothing more than that given
by the Faulhaber polynomial $S_{2r+2} = S_2 F_{2r+2}(S_1)$.
\section{Generators of Faulhaber polynomials}
To support our claim, we make use of the following formula, which gives us the product of the power sums $S_k$ and $S_m$
(with $k,m \geq 1$):
\begin{equation}\label{prod1}
S_k S_m = \frac{1}{k+1} \sum_{j=0}^{k/2} B_{2j} \binom{k+1}{2j} S_{k+m+1-2j} +
\frac{1}{m+1} \sum_{j=0}^{m/2} B_{2j} \binom{m+1}{2j} S_{k+m+1-2j},
\end{equation}
where the $B_j$'s are the Bernoulli numbers and where the upper summation limit $k/2$ denotes the greatest integer less than or equal
to $k/2$. Equation \eqref{prod1} was stated as Theorem 1 in \cite{mac}, where, incidentally, it is further observed that it was known to Lucas
by 1891. For $k =m$, \eqref{prod1} reduces to
\begin{equation}\label{square}
S_k^2 = \frac{2}{k+1} \sum_{j=0}^{k/2} B_{2j} \binom{k+1}{2j} S_{2k+1-2j}, \quad k \geq 1.
\end{equation}
From \eqref{square}, we then obtain
\begin{equation}\label{prod2}
S_{2r+1} = \frac{r+1}{2}S_{r}^2 - \sum_{j=1}^{r/2} B_{2j} \binom{r+1}{2j} S_{2r+1-2j}, \quad r \geq 1,
\end{equation}
where we can see that the summation in the right-hand side of \eqref{prod2} involves only power sums $S_i$ with odd index $i$. Thus, according
to \eqref{f2}, every $S_i$ can be put as $S_{1}^2$ times a polynomial in $S_1$. Likewise, from \eqref{f1}, \eqref{f2}, and \eqref{ship}, it turns
out that $S_{r}^2$ can always be expressed as $S_{1}^2$ times a polynomial in $S_1$ of degree $r-1$. Hence, it follows that $S_{2r+1}$
in equation \eqref{prod2} must factorize as the product of $S_{1}^2$ times a polynomial in $S_1$ of degree $r-1$, namely, the Faulhaber form
$S_{2r+1} = S_{1}^2 F_{2r+1}(S_1)$. As a simple example, we may use \eqref{prod2} to evaluate $S_9$. For $r=4$, equation \eqref{prod2}
reads
\begin{equation*}
S_9 = \frac{5}{2} S_{4}^2 + \frac{1}{6}S_5 - \frac{5}{3}S_7.
\end{equation*}
Now, since $S_4 = \frac{1}{5} (6 S_1 S_2 - S_2)$, and taking into account \eqref{ship}, we obtain
\begin{equation*}
S_{4}^2 = \frac{1}{225} S_{1}^2 - \frac{4}{225} S_{1}^3 - \frac{4}{15} S_{1}^4 + \frac{32}{25} S_{1}^5 .
\end{equation*}
Thus, noting that $S_5 = \frac{1}{3} (4 S_{1}^3 - S_{1}^2 )$ and $S_7 = \tfrac{1}{3} (6S_{1}^4 - 4S_{1}^3 + S_{1}^2)$,
we finally get the Faulhaber polynomial
\begin{equation*}
S_9 = \frac{1}{5}S_{1}^2 \big( -3 + 12 S_1 - 20 S_{1}^2 + 16 S_{1}^3 \big).
\end{equation*}
On the other hand, by taking $k =r$ and $m=r+1$ in \eqref{prod1}, and solving for $S_{2r+2}$, we find that
\begin{equation}\label{prod3}
S_{2r+2} = \frac{(r+1)(r+2)}{2r+3} \left( S_r S_{r+1} - B_{r+1} S_{r+1} - \sum_{j=1}^{r/2} h_{r,j} B_{2j} S_{2r+2-2j} \right),
\,\,\, r \geq 1,
\end{equation}
where
\begin{equation*}
h_{r,j} = \frac{1}{r+1}\binom{r+1}{2j} + \frac{1}{r+2}\binom{r+2}{2j} .
\end{equation*}
Note that the summation in the right-hand side of \eqref{prod3} involves only sums $S_i$ with even index $i$. Furthermore, the single term
$B_{r+1}S_{r+1}$ only survives when $r+1$ is even. Regarding the product $S_r S_{r+1}$, it is obvious that one of the indices $r$ or $r+1$
is even. Therefore, invoking \eqref{f1}, we conclude that $S_{2r+2}$ in equation \eqref{prod3} must factorize as the product of $S_2$ times a
polynomial in $S_1$ of degree $r$, namely, the Faulhaber form $S_{2r+2} = S_2 F_{2r+2}(S_1)$. As another concrete example, let us evaluate
$S_{10}$ using \eqref{prod3}. For $r=4$, equation \eqref{prod3} reads
\begin{equation*}
S_{10} = \frac{30}{11} \left( S_4 S_5 + \frac{7}{60} S_6 - \frac{3}{4} S_8 \right).
\end{equation*}
Now, substituting $S_4 = \frac{1}{5} (6 S_1 S_2 - S_2)$, $S_5 = \frac{1}{3} (4 S_{1}^3 - S_{1}^2 )$, $S_6 = \frac{1}{7}(S_2 - 6S_1 S_2
+ 12 S_{1}^2 S_2)$, and $S_8 = \frac{1}{15}(-3S_2 + 18 S_1 S_2 - 40 S_{1}^2 S_2 + 40 S_{1}^3 S_2)$ into the last equation, we get the
Faulhaber polynomial
\begin{equation*}
S_{10} = \frac{1}{11} S_2 \big( 5 - 30 S_1 +68 S_{1}^2 - 80 S_{1}^3 + 48 S_{1}^4 \big).
\end{equation*}
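Both worked examples are easily confirmed numerically. The following minimal sketch (exact rational arithmetic in Python) verifies the Faulhaber forms of $S_9$ and $S_{10}$ obtained above for $n=1,\ldots,50$:
\begin{verbatim}
from fractions import Fraction

def S(k, n):
    # power sum S_k = 1^k + 2^k + ... + n^k, as an exact rational
    return sum(Fraction(i)**k for i in range(1, n + 1))

for n in range(1, 51):
    S1, S2 = S(1, n), S(2, n)
    assert S(9, n) == Fraction(1, 5)*S1**2*(-3 + 12*S1 - 20*S1**2 + 16*S1**3)
    assert S(10, n) == Fraction(1, 11)*S2*(5 - 30*S1 + 68*S1**2
                                           - 80*S1**3 + 48*S1**4)
print("Faulhaber forms of S_9 and S_10 verified for n = 1, ..., 50")
\end{verbatim}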
It should then be clear that the formulas for $S_{2r+1}$ and $S_{2r+2}$ in \eqref{prod2} and \eqref{prod3} or, equivalently, the formulas in
\eqref{mt1} and \eqref{mt2} act as generators of the Faulhaber polynomials, provided that both $S_{2r+1}$ and the square $S_{2r}^2$ are
expressed in terms of $S_1$. Armed with the formulas in \eqref{prod2} and \eqref{prod3}, it is then a trivial matter to inductively prove the
Faulhaber theorem given in equations \eqref{f1} and \eqref{f2}. A proof of this kind based on equations like \eqref{prod2} and \eqref{prod3}
was given elsewhere \cite{cere1}.
For the sake of completeness, it is worth observing that the Faulhaber polynomials can also be obtained by means of the identities\footnote{
Identity \eqref{i1} appears as Theorem 2 in \cite{mac} (see also \cite[Equation (6)]{acu} and \cite[Equation (17)]{kotiah}). Regarding
identity \eqref{i2}, it can be readily obtained from \cite[Equation (22)]{kotiah} (see also \cite[Equation (4.3)]{cere2}).}
\begin{equation}\label{i1}
S_1^k = \frac{1}{2^{k-1}} \sum_{j=0}^{\frac{k-1}{2}} \binom{k}{2j+1} S_{2k-1-2j}, \quad k \geq 1,
\end{equation}
and
\begin{equation}\label{i2}
S_2 S_1^k = \frac{1}{3 \cdot 2^k} \sum_{j=0}^{\frac{k+1}{2}} \frac{2k+3-2j}{2j+1} \binom{k+1}{2j} S_{2k+2-2j}, \quad k \geq 1.
\end{equation}
From \eqref{i1} and \eqref{i2}, it follows that
\begin{equation}\label{i3}
S_{2r+1} = \frac{2^r}{r+1} S_{1}^{r+1} - \frac{1}{r+1} \sum_{j=1}^{r/2} \binom{r+1}{2j+1} S_{2r+1-2j},
\end{equation}
and
\begin{equation}\label{i4}
S_{2r+2} = \frac{1}{2r+3}\left( 3 S_2 (2S_{1})^{r} - \sum_{j=1}^{\frac{r+1}{2}} \frac{2r+3-2j}{2j+1}
\binom{r+1}{2j} S_{2r+2-2j} \right),
\end{equation}
respectively. Note that the summation in the right-hand side of \eqref{i3} [\eqref{i4}] involves only odd [even] indexed power sums $S_i$.
Therefore, starting with $r =1$, one can recursively use \eqref{i3} [\eqref{i4}] to get the Faulhaber polynomials $S_{2r+1} = S_{1}^2
F_{2r+1}(S_1)$ [respectively, $S_{2r+2} = S_{2}F_{2r+2}(S_1)$].
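The recursion \eqref{i3} is particularly convenient for symbolic computation, since it stays entirely within the odd-indexed power sums. The following sketch (Python with SymPy) generates the odd Faulhaber polynomials; for $r=4$ it recovers the expression for $S_9$ found above:
\begin{verbatim}
from sympy import symbols, Rational, binomial, expand, factor

S1 = symbols('S1')

# odd[r] stores S_{2r+1} as a polynomial in S_1, starting from odd[0] = S_1
odd = {0: S1}
for r in range(1, 6):
    expr = Rational(2**r, r + 1) * S1**(r + 1)
    for j in range(1, r // 2 + 1):
        expr -= Rational(1, r + 1) * binomial(r + 1, 2*j + 1) * odd[r - j]
    odd[r] = expand(expr)

print(factor(odd[4]))   # S1**2*(16*S1**3 - 20*S1**2 + 12*S1 - 3)/5
\end{verbatim}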
\section{Conclusion}
In \cite[Theorem 1]{miller}, Miller and Trevi\~{n}o deduced from \eqref{mt1} and \eqref{mt2} that, for $k \geq 1$, there exists a polynomial
$g_k \in \mathbb{Q}[x,y]$ such that $g_k(0,0)=0$ and $S_k = g_k(S_1,S_2)$. The point raised in this note is that any such polynomial in $S_1$
and $S_2$ can always be reduced to the Faulhaber form in equations \eqref{f1} and \eqref{f2}. Indeed, as we have shown here, the above
formulas \eqref{mt1} and \eqref{mt2} can be used to generate recursively the Faulhaber polynomials $S_{2r+1} = S_{1}^2 F_{2r+1}(S_1)$
and $S_{2r+2} = S_{2}F_{2r+2}(S_1)$, respectively, provided we adhere to the convention in \eqref{ship}.
On the other hand, as discussed in \cite{miller}, there may be other possible ways of expressing $S_{k}$ in terms of power sums of lower degree.
In this sense, it is pertinent to recall the remarkable result achieved by Beardon (see \cite[Theorem 6.2]{beardon}), according to which, for each
pair of integers $i$ and $j$ with $1 \leq i < j$, there is a unique, non-constant irreducible polynomial $T_{ij}$ in two variables $x$ and $y$, with
integer coefficients, such that $T_{ij}(S_i, S_j) =0$. As a simple example, we have the relation (\cite[Equation (1.4)]{beardon})
\begin{equation*}
T(S_1, S_2) = 0, \quad \text{with} \quad T(x,y) = 8x^3 + x^2 - 9y^2,
\end{equation*}
which is just relation \eqref{ship}. Moreover, it was further shown there (see \cite[Theorem 7.1]{beardon}) that the relation $T_{i,j}(S_i, S_j)$ is
separable if, and only if, $i=1$. This is, of course, in agreement with the polynomial form in \eqref{f2}, a result already anticipated by Faulhaber in
1631 \cite{edwards1}.
\vspace{.5cm}
\section{Introduction}
The optimal reinsurance-investment problem is of large interest in the
actuarial literature. A reinsurance is a contract whereby a reinsurance
company agrees to indemnify the cedent (i.e. the primary insurer) against all
or part of the future losses that the latter sustains under the policies that
she has issued. For this service the insurer is asked to pay a premium. It is
well known that such a risk-sharing agreement allows the insurer to reduce the
risk, to increase the business capacity, to stabilise the operating results and so on.
In the existing literature there are many works dealing with the optimal
reinsurance strategy, starting from the seminal papers \cite{definetti:1940},
\cite{buhlmann:1970} and \cite{gerber:1979}. During the last decades two
different approaches were used to study the problem: some authors model the
insurer's surplus as a jump process, others as a diffusion approximation (see
e.g. \cite{schmidli:2018risk} and references therein for details about
risk models). In addition, only two reinsurance agreements were considered:
the proportional and the excess-of-loss contracts (or both, as a mixed
contract). Among the optimization criteria, we recall the expected utility
maximization (see \cite{irgens_paulsen:optcontrol}, \cite{GUERRA2008529} and
\cite{Mania2010}), ruin probability minimization (see \cite{promislow2005},
\cite{schmidli:2001} and
\cite{schmidli2002}), dividend policy optimization
(see \cite{buhlmann:1970} and \cite{schmidli:control}) and others. In
particular, the first of these criteria was developed only for CRRA and CARA utility
functions.
Our aim is to investigate the optimal reinsurance problem in a diffusion risk model when the insurer subscribes a general reinsurance agreement, with a retention level $u\in[0,I]$. The insurer's objective is to maximise the expected utility of the terminal wealth for a general utility function $U$, satisfying the classical assumptions (monotonicity and concavity). That is, we do not assume any explicit expression neither for the reinsurance policy nor for $U$. However, we also investigate how our general results apply to specific utility functions, including CRRA and CARA classes, and to the most popular reinsurance agreements such as proportional and excess-of-loss.
One additional feature of our paper is that the insurer's surplus is affected
by an environmental factor $Y$, which allows our framework to take into
account \textit{size} and \textit{risk fluctuations} (see \cite[Chapter
2]{grandell:risk}). We recall two main attempts of introducing a stochastic
factor in the risk model dynamic: in \cite{liangbayraktar:optreins} the
authors considered a Markov chain with a finite state space, while in
\cite{BC:IME2019}
$Y$ is a diffusion process, as in our case. However, they
considered jump processes and the rest of the model formulation is very
different (for instance, they restricted the maximization to the exponential
utility function and the proportional reinsurance). Moreover, in those papers
$Y$ only affects the insurance market.
Indeed, another important peculiarity of our model is the dependence between the insurance and the financial markets. We allow the insurer to invest her money in a risky asset, modelled as a diffusion process with both the drift and the volatility influenced by the stochastic factor $Y$. From the practical point of view, this characteristic reflects any connection between the two markets. From the theoretical point of view, we remove the standard assumption of the independence, which is constantly present in all the previous works, especially because it simplifies the mathematical framework.
The paper is organized as follows: in the following section we formulate our optimal stochastic control problem; next, in Section \ref{section:valuefun} we analyse the main properties of the value function, while in Section \ref{section:hjb} we characterize the value function as a viscosity solution to the Hamilton-Jacobi-Bellman (HJB) equation associated with our problem; in Section \ref{section:sahara} we apply our general results to the class of SAHARA utility functions, which includes CRRA and CARA utility functions as limiting cases. In addition, we characterize the optimal reinsurance strategy under the proportional and the excess-of-loss contracts, also providing explicit formulae. Finally, in Section \ref{section:numerical} we give some numerical examples.
\section{The Model}
The surplus process of an insurer is modelled as the solution to the
stochastic differential equation
\[ {\rm d} X_t^0 = m(t, Y_t, u_t) \;{\rm d} t + \sigma(t,Y_t, u_t) \;{\rm d} W_t^1\;,\qquad
X_0^0 = x\;,\]
where
$Y$ is an environmental process, satisfying
\[ {\rm d} Y_t = \mu_Y(t,Y_t) \;{\rm d} t + \sigma_Y(t,Y_t) \;{\rm d} W_t^Y\;,\qquad Y_0 =
y\;,\]
and $u_t$ is the reinsurance retention level of the insurer at time
$t$. We assume that $u_t$ is a cadlag process and can take all the values in
an interval $[0,I]$,
where $I \in (0,\infty]$ and that the functions $m(t,y,u)$, $\sigma(t,y,u)$,
$\mu_Y(t,y)$ and $\sigma_Y(t,y)$
are continuously differentiable bounded functions satisfying a Lipschitz
condition uniformly in $u$.
Further, the insurer has the possibility to invest into a risky asset $R$
modelled as the solution to
\[ {\rm d} R_t = \mu(t,Y_t) R_t \;{\rm d} t + \sigma_1(t,Y_t) R_t \;{\rm d} W_t^1 +
\sigma_2(t,Y_t) R_t \;{\rm d} W_t^2\;,\quad R_0 \in(0,+\infty)\;.\]
Also the functions $\mu(t,y)$, $\sigma_1(t,y)$ and $\sigma_2(t,y)$ are assumed
to be bounded continuous positive functions satisfying a Lipschitz condition. We
further assume that $\sigma_1(t,y) + \sigma_2(t,y)$ is bounded away from
zero. Here, $W^1$, $W^2$, $W^Y$ are independent Brownian motions on a
reference probability space $(\Omega, \calF, {\rm I \mkern-2.5mu \nonscript\mkern-.5mu I \mkern-6.5mu P})$.
Thus, the reinsurance strategy does not influence the behaviour of the risky
asset. But, the surplus process and the risky asset are dependent. Choosing an
investment strategy $a$, the surplus of the insurer fulfils
\begin{eqnarray*}
{\rm d} X_t^{u,a} &=& \{m(t,Y_t,u_t) + a_t \mu(t,Y_t)\} \;{\rm d} t + \{(\sigma(t,Y_t, u_t)
+a_t \sigma_1(t,Y_t))\} \;{\rm d} W_t^1\\
&& \hskip5.4cm {}+ a_t \sigma_2(t,Y_t) \;{\rm d} W_t^2\;, \qquad X_0^{u,a} = x\;.
\end{eqnarray*}
In order that a strong solution exists we assume that ${\rm I \mkern-2.5mu \nonscript\mkern-.5mu I \mkern-6.5mu E}[\int_0^T a_t^2 \;{\rm d}
t] < \infty$.
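For illustration only (this plays no role in the solution of the control problem), the controlled dynamics can be simulated by an Euler--Maruyama scheme; the coefficient functions and the feedback controls are user-supplied callables with the signatures used above:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def simulate_surplus(x0, y0, u, a, m, sigma, mu_Y, sigma_Y,
                     mu, sigma1, sigma2, T=1.0, n_steps=1000):
    # Euler-Maruyama discretisation of (X^{u,a}, Y) on [0, T]
    dt = T / n_steps
    x, y = x0, y0
    for k in range(n_steps):
        t = k * dt
        dW1, dW2, dWY = rng.normal(0.0, np.sqrt(dt), size=3)
        u_t, a_t = u(t, x, y), a(t, x, y)
        x += ((m(t, y, u_t) + a_t * mu(t, y)) * dt
              + (sigma(t, y, u_t) + a_t * sigma1(t, y)) * dW1
              + a_t * sigma2(t, y) * dW2)
        y += mu_Y(t, y) * dt + sigma_Y(t, y) * dWY
    return x, y
\end{verbatim}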
Our goal is to maximise the terminal expected utility at time $T > 0$
\[ V^{u,a}(0,x,y) = {\rm I \mkern-2.5mu \nonscript\mkern-.5mu I \mkern-6.5mu E}[U(X_T^{u,a})\mid X_0^{u,a} = x, Y_0 = y]\;,\]
and, if it exists, to find the optimal strategy $(u^*, a^*)$. That is,
\[ V(0,x,y) = \sup_{u,a} V^{u,a}(0,x,y) = V^{u^*,a^*}(0,x,y)\;,\]
where the supremum is taken over all measurable adapted processes $(u,a)$ such
that the conditions above are fulfilled. $U$ is a
utility function. That is, $U$ is strictly increasing and strictly concave. We
make the additional assumption that $U''(x)$ is continuous. The
filtration is the smallest complete right-continuous filtration $\{\calF_t\}$
such that the Brownian motions are adapted. In particular, we suppose that $Y$
is observable.
We will also need the value functions if we do start at time $t$ instead. Thus
we define
\[ V^{u,a}(t,x,y) = {\rm I \mkern-2.5mu \nonscript\mkern-.5mu I \mkern-6.5mu E}[U(X_T^{u,a})\mid X_t^{u,a} = x, Y_t = y]\;,\]
where we only consider strategies on the time interval $[t,T]$
and, analogously, $V(t,x,y) = \sup_{u,a} V^{u,a}(t,x,y)$. The boundary
condition is then $V(T,x,y) = U(x)$. Because our underlying processes are
Markovian, $V(t,X_t^{u,a},Y_t)$ depends on $\calF_t$ via $(X_t^{u,a}, Y_t)$ only.
\section{Properties of the value function}
\label{section:valuefun}
\begin{lemma}
\begin{enumerate}
\item The value function is increasing in $x$.
\item The value function is continuous.
\end{enumerate}
\end{lemma}
\begin{proof}
That the value function is increasing in $x$ is clear. By It\^o's formula
\begin{eqnarray*}
U(X_T^{u,a}) &=& U(x) + \int_t^T [\{ m(s,Y_s,u_s) + a_s \mu(s,Y_s)\}
U'(X_s^{u,a})\\
&&\hskip5mm {}+ \protect{\mathbox{\frac12}}\{(\sigma(s,Y_s,u_s) + a_s \sigma_1(s,Y_s))^2 +
a_s^2 \sigma_2^2(s,Y_s) \}U''(X_s^{u,a})] \;{\rm d} s\\
&&{}+ \int_t^T(\sigma(s,Y_s,u_s) +
a_s \sigma_1(s,Y_s)) U'(X_s^{u,a}) \;{\rm d} W_s^1 + \int_t^T a_s \sigma_2(s,Y_s) U'(X_s^{u,a}) \;{\rm d} W_s^2\;.
\end{eqnarray*}
Because the stochastic integrals are martingales by our assumptions
\begin{eqnarray*}
{\rm I \mkern-2.5mu \nonscript\mkern-.5mu I \mkern-6.5mu E}[U(X_T^{u,a})] &=& U(x) + {\rm I \mkern-2.5mu \nonscript\mkern-.5mu I \mkern-6.5mu E}\Bigl[\int_t^T [\{ m(s,Y_s,u_s) + a_s \mu(s,Y_s)\}
U'(X_s^{u,a})\\
&&\hskip5mm{} + \protect{\mathbox{\frac12}}\{(\sigma(s,Y_s,u_s) + a_s \sigma_1(s,Y_s))^2\\
&&\hskip15mm{} + a_s^2 \sigma_2^2(s,Y_s) \}U''(X_s^{u,a})] \;{\rm d} s\Bigr]\;.
\end{eqnarray*}
Taking the supremum over the strategies we get the continuity by the Lipschitz
assumptions.
\end{proof}
\begin{lemma}
The value function is concave in $x$.
\end{lemma}
\begin{proof}
If the value function were not concave, we would find $x$ and a test
function $\phi$ with $\phi_{x x}(t,x,y) \ge 0$, $\phi_x(t,x,y) > 0$ and
$\phi(t',x',y') \le V(t',x',y')$ for all $t'$, $x'$, $y'$ and $\phi(t,x,y) =
V(t,x,y)$. By the proof of Theorem \ref{thm:vis.sol} below,
\begin{eqnarray*}
0 &\ge& \phi_t + \sup_{u,a} \{m(t,y,u) + a \mu(t,y)\} \phi_x\\
&&{}+ \protect{\mathbox{\frac12}} \{(\sigma(t,y, u) + a \sigma_1(t,y))^2 + a^2 \sigma_2^2(t,y)\}
\phi_{x x} + \mu_Y(t,y) \phi_y\\
&&{}+ \protect{\mathbox{\frac12}} \sigma_Y^2(t,y) \phi_{y y}\;.
\end{eqnarray*}
But it is possible to choose $a$ such that the above inequality does not hold.
\end{proof}
\section{The HJB equation}
\label{section:hjb}
We expect the value function to solve
\begin{eqnarray}\label{eq:hjb}
0 &=& V_t + \sup_{u,a} \{m(t,y,u) + a \mu(t,y)\} V_x\nonumber\\
&&{}+ \protect{\mathbox{\frac12}} \{(\sigma(t,y, u) + a \sigma_1(t,y))^2 + a^2 \sigma_2^2(t,y)\}
V_{x x} + \mu_Y(t,y) V_y\nonumber\\
&&{}+ \protect{\mathbox{\frac12}} \sigma_Y^2(t,y) V_{y y}\;.
\end{eqnarray}
A (classical) solution is only possible if $V_{x x} < 0$. In this case,
\begin{equation}
\label{eqn:a_optimal}
a = - \frac{\mu(t,y) V_x + \sigma(t,y,u) \sigma_1(t,y) V_{x
x}}{(\sigma_1^2(t,y)+ \sigma_2^2(t,y)) V_{x x}}\;.
\end{equation}
Thus, we need to solve
\begin{eqnarray}\label{eqn:hjb2}
0 &=& V_t + \sup_{u} m(t,y,u) V_x - \frac{(\mu(t,y) V_x + \sigma(t,y,u)
\sigma_1(t,y) V_{x x})^2}{2(\sigma_1^2(t,y)+ \sigma_2^2(t,y)) V_{x x}}\nonumber\\
&&{}+ \frac{1}{2}\sigma^2(t,y,u) V_{x x}+ \mu_Y(t,y) V_y
+ \protect{\mathbox{\frac12}} \sigma_Y^2(t,y) V_{y y}\;.
\end{eqnarray}
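The pointwise maximisation over $a$ leading to \eqref{eqn:a_optimal} and \eqref{eqn:hjb2} is a simple quadratic problem and can be reproduced symbolically; a minimal sketch (Python with SymPy) is:
\begin{verbatim}
from sympy import symbols, diff, solve, simplify

a, mu, sig, s1, s2, Vx, Vxx = symbols('a mu sigma sigma1 sigma2 V_x V_xx')

# a-dependent part of the Hamiltonian in the HJB equation
g = a*mu*Vx + ((sig + a*s1)**2 + a**2*s2**2)/2 * Vxx
a_star = solve(diff(g, a), a)[0]
print(simplify(a_star))
# expected: -(mu*V_x + sigma*sigma1*V_xx) / ((sigma1**2 + sigma2**2)*V_xx)
\end{verbatim}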
By our assumption that $m(t,y,u)$ and $\sigma(t,y,u)$ are continuous functions
of $u$ on the closed interval $[0,I]$ of the compact set $[0,\infty]$, there is a value
$u(t,x,y)$ at which the supremum is
attained.
\begin{theo}\label{thm:vis.sol}
The value function is a viscosity solution to \eqref{eq:hjb}.
\end{theo}
\begin{proof}
Without loss of generality we only show the assertion for $t = 0$.
Choose $(\bar u,\bar a)$ and $\epsilon, \delta, h > 0$. Let $\tau^{\bar u,
\bar a} = \inf\{ t > 0: \max\{|X_t^{\bar u, \bar a} - x|, |Y_t-y|\} >
\epsilon\}$ and $\tau = \tau^{\bar u, \bar a} \wedge h$. Consider the
following strategy: $(u_t,a_t) = (\bar u, \bar a)$ for $t < \tau$, and
$(u_t, a_t) = (\tilde u_{t-\tau}, \tilde a_{t-\tau})$ for $t \ge \tau$, for some
strategy $(\tilde u, \tilde a)$ such that
$V^{\tilde u, \tilde a}(\tau, X_{\tau}^{\bar u, \bar a},Y_{\tau}) >
V(\tau, X_{\tau}^{\bar u, \bar a},Y_{\tau})-\delta$.
Note that the strategy can be chosen in a measurable way since
$V(t,x,y)$ is continuous. Let $\phi(t,x,y)$ be a test function, such that
$\phi(t,x',y') \le V(t,x',y')$ with $\phi(0,x,y) = V(0,x,y)$. Then by It\^o's
formula
\begin{eqnarray*}
\phi(\tau, X_{\tau}^{\bar u,
\bar a},Y_{\tau}) &=& \phi(0,x,y) + \int_0^\tau [\phi_t(t,X_t, Y_t)\\
&&{}+ \{m(t, Y_t, \bar u) + \bar a \mu(t,Y_t)\} \phi_x(t,X_t, Y_t)\\
&&{}+ \protect{\mathbox{\frac12}}\{ (\sigma(t,Y_t,\bar u) + \bar a \sigma_1(t,Y_t))^2 + \bar a^2
\sigma_2^2(t, Y_t)\} \phi_{x x}(t,X_t, Y_t)\\
&&{}+ \mu_Y(t,Y_t) \phi_y(t,X_t,Y_t) + \protect{\mathbox{\frac12}} \sigma_Y^2(t,Y_t) \phi_{y
y}(t,X_t,Y_t)] \;{\rm d} t\\
&&{}+ \int_0^\tau [\sigma(t,Y_t,\bar u) + \bar a
\sigma_1(t,Y_t)] \phi_x(t,X_t, Y_t) \;{\rm d} W_t^1\\
&&{}+ \int_0^\tau \bar a \sigma_2(t,Y_t) \phi_x(t,X_t, Y_t)\;{\rm d} W_t^2\\
&&{}+ \int_0^\tau \sigma_Y(t,Y_t) \phi_y(t,X_t,Y_t) \;{\rm d} W_t^Y\;.
\end{eqnarray*}
Note that the integrals with respect to the Brownian motions are true
martingales since the derivatives of $\phi$ are continuous and thus bounded on
the (closed) area, and therefore the integrands are bounded. Taking expected
values gives
\begin{eqnarray*}
V(0,x,y) &\ge& V^{u,a}(0,x,y) = {\rm I \mkern-2.5mu \nonscript\mkern-.5mu I \mkern-6.5mu E}[V^{u,a}(\tau, X_\tau, Y_\tau)] \ge {\rm I \mkern-2.5mu \nonscript\mkern-.5mu I \mkern-6.5mu E}[V(\tau,
X_\tau, Y_\tau)] - \delta\\
&\ge& {\rm I \mkern-2.5mu \nonscript\mkern-.5mu I \mkern-6.5mu E}[\phi(\tau,X_\tau, Y_\tau)] - \delta\\
&=& V(0,x,y) -\delta + {\rm I \mkern-2.5mu \nonscript\mkern-.5mu I \mkern-6.5mu E}\Bigl[ \int_0^\tau [\phi_t(t,X_t, Y_t)\\
&&{}+ \{m(t, Y_t, \bar u) + \bar a \mu(t,Y_t)\} \phi_x(t,X_t, Y_t)\\
&&{}+ \protect{\mathbox{\frac12}}\{ (\sigma(t,Y_t,\bar u) + \bar a \sigma_1(t,Y_t))^2 + \bar a^2
\sigma_2^2(t, Y_t)\} \phi_{x x}(t,X_t, Y_t)\\
&&{}+ \mu_Y(t,Y_t) \phi_y(t,X_t,Y_t) + \protect{\mathbox{\frac12}} \sigma_Y^2(t,Y_t) \phi_{y
y}(t,X_t,Y_t)] \;{\rm d} t\Bigr]\;.
\end{eqnarray*}
The right hand side does not depend on $\delta$. We thus can let $\delta =
0$. This yields
\begin{eqnarray*}
0 &\ge& {\rm I \mkern-2.5mu \nonscript\mkern-.5mu I \mkern-6.5mu E}\Bigl[ \frac1h\int_0^\tau [\phi_t(t,X_t, Y_t)
+ \{m(t, Y_t, \bar u) + \bar a \mu(t,Y_t)\} \phi_x(t,X_t, Y_t)\\
&&{}+ \protect{\mathbox{\frac12}}\{ (\sigma(t,Y_t,\bar u) + \bar a \sigma_1(t,Y_t))^2 + \bar a^2
\sigma_2^2(t, Y_t)\} \phi_{x x}(t,X_t, Y_t)\\
&&{}+ \mu_Y(t,Y_t) \phi_y(t,X_t,Y_t) + \protect{\mathbox{\frac12}} \sigma_Y^2(t,Y_t) \phi_{y
y}(t,X_t,Y_t)] \;{\rm d} t\Bigr]\;.
\end{eqnarray*}
It is well known that ${\rm I \mkern-2.5mu \nonscript\mkern-.5mu I \mkern-6.5mu P}[\tau^{\bar u, \bar a} \le h]/h$ tends to zero as $h
\downarrow 0$. Thus, letting $h \downarrow 0$ gives
\begin{eqnarray*}
0 &\ge& \phi_t + \{m(t,y,\bar u) + \bar a \mu(t,y)\} \phi_x\nonumber\\
&&{}+ \protect{\mathbox{\frac12}} \{(\sigma(t,y, \bar u) + \bar a \sigma_1(t,y))^2 + a^2
\sigma_2^2(t,y)\} \phi_{x x} + \mu_Y(t,y) \phi_y\nonumber\\
&&{}+ \protect{\mathbox{\frac12}} \sigma_Y^2(t,y) \phi_{y y}\;.
\end{eqnarray*}
Since $(\bar u, \bar a)$ is arbitrary,
\begin{eqnarray*}
0 &\ge& \phi_t + \sup_{u,a} \{m(t,y,u) + a \mu(t,y)\} \phi_x\nonumber\\
&&{}+ \protect{\mathbox{\frac12}} \{(\sigma(t,y, u) + a \sigma_1(t,y))^2 + a^2 \sigma_2^2(t,y)\}
\phi_{x x} + \mu_Y(t,y) \phi_y\nonumber\\
&&{}+ \protect{\mathbox{\frac12}} \sigma_Y^2(t,y) \phi_{y y}\;.
\end{eqnarray*}
Let now $\phi(t,x',y')$ be a test function such that $\phi(t,x',y') \ge
V(t,x',y')$ and $\phi(0,x,y) = V(0,x,y)$. Then there is a strategy $(u,a)$,
such that $V(0,x,y) < V^{u,a}(0,x,y) + h^2$. Choose a localisation sequence
$\{t_n\}$, such that
\begin{eqnarray*}
&&\int_0^{\tau \wedge t_n \wedge t} [\sigma(s,Y_s,u_s) + a_s
\sigma_1(s,Y_s)] \phi_x(s,X_s^{u,a}, Y_s) \;{\rm d} W_s^1\;,\\
&&\int_0^{\tau \wedge t_n \wedge t} a_s \sigma_2(s,Y_s) \phi_x(s,X_s^{u,a}, Y_s)\;{\rm d}
W_s^2\;,\\
\noalign{and}
&&\int_0^{\tau\wedge t_n \wedge t} \sigma_Y(s,Y_s) \phi_y(s,X_s^{u,a},Y_s) \;{\rm d} W_s^Y
\end{eqnarray*}
are martingales, where as above, $\tau = \tau^{u, a} \wedge h$.
We have
\begin{eqnarray*}
\lefteqn{\phi(0,x,y) = V(0,x,y) \le V^{u,a}(0,x,y) + h^2}\hskip1cm\\
&\le& {\rm I \mkern-2.5mu \nonscript\mkern-.5mu I \mkern-6.5mu E}[V(\tau\wedge t_n , X_{\tau \wedge t_n}, Y_{\tau \wedge t_n})] +
h^2 \le {\rm I \mkern-2.5mu \nonscript\mkern-.5mu I \mkern-6.5mu E}[\phi(\tau \wedge t_n, X_{\tau \wedge t_n}, Y_{\tau \wedge t_n})] +
h^2\\
&=& \phi(0,x,y) + {\rm I \mkern-2.5mu \nonscript\mkern-.5mu I \mkern-6.5mu E}\Bigl[ \int_0^{\tau \wedge t_n} [\phi_t(t,X_t, Y_t)\\
&&{}+ \{m(t, Y_t, u_t) + a_t \mu(t,Y_t)\} \phi_x(t,X_t, Y_t)\\
&&{}+ \protect{\mathbox{\frac12}}\{ (\sigma(t,Y_t,u_t) + a_t \sigma_1(t,Y_t))^2 + a_t^2
\sigma_2^2(t, Y_t)\} \phi_{x x}(t,X_t, Y_t)\\
&&{}+ \mu_Y(t,Y_t) \phi_y(t,X_t,Y_t) + \protect{\mathbox{\frac12}} \sigma_Y^2(t,Y_t) \phi_{y
y}(t,X_t,Y_t)] \;{\rm d} t\Bigr] + h^2\;.
\end{eqnarray*}
Because we consider a compact interval, we can let $n \to \infty$ and obtain
by bounded convergence
\begin{eqnarray*}
0 &\le& {\rm I \mkern-2.5mu \nonscript\mkern-.5mu I \mkern-6.5mu E}\Bigl[ \int_0^\tau [\phi_t(t,X_t, Y_t)\\
&&{}+ \{m(t, Y_t, u_t) + a_t \mu(t,Y_t)\} \phi_x(t,X_t, Y_t)\\
&&{}+ \protect{\mathbox{\frac12}}\{ (\sigma(t,Y_t,u_t) + a_t \sigma_1(t,Y_t))^2 + a_t^2
\sigma_2^2(t, Y_t)\} \phi_{x x}(t,X_t, Y_t)\\
&&{}+ \mu_Y(t,Y_t) \phi_y(t,X_t,Y_t) + \protect{\mathbox{\frac12}} \sigma_Y^2(t,Y_t) \phi_{y
y}(t,X_t,Y_t)] \;{\rm d} t\Bigr] + h^2\\
&\le& {\rm I \mkern-2.5mu \nonscript\mkern-.5mu I \mkern-6.5mu E}\Bigl[ \int_0^\tau \sup_{\bar u, \bar a}[\phi_t(t,X_t, Y_t)\\
&&{}+ \{m(t, Y_t, \bar u) + \bar a \mu(t,Y_t)\} \phi_x(t,X_t, Y_t)\\
&&{}+ \protect{\mathbox{\frac12}}\{ (\sigma(t,Y_t,\bar u) + \bar a \sigma_1(t,Y_t))^2 + \bar a^2
\sigma_2^2(t, Y_t)\} \phi_{x x}(t,X_t, Y_t)\\
&&{}+ \mu_Y(t,Y_t) \phi_y(t,X_t,Y_t) + \protect{\mathbox{\frac12}} \sigma_Y^2(t,Y_t) \phi_{y
y}(t,X_t,Y_t)] \;{\rm d} t\Bigr] + h^2\;.
\end{eqnarray*}
This gives by dividing by $h$ and letting $h \to 0$
\begin{eqnarray*}
0 &\le& \phi_t + \sup_{u,a} \{m(t,y,u) + a \mu(t,y)\} \phi_x\nonumber\\
&&{}+ \protect{\mathbox{\frac12}} \{(\sigma(t,y, u) + a \sigma_1(t,y))^2 + a^2 \sigma_2^2(t,y)\}
\phi_{x x} + \mu_Y(t,y) \phi_y\nonumber\\
&&{}+ \protect{\mathbox{\frac12}} \sigma_Y^2(t,y) \phi_{y y}\;.
\end{eqnarray*}
This proves the assertion.
\end{proof}
Let now $u^*(t,x,y)$ and $a^*(t,x,y)$ be the maximiser in
\eqref{eq:hjb}. By \cite[Sec.~7]{wagner} we can choose these maximisers in a
measurable way. We further denote by
$u_t^* = u^*(t,X_t^{u^*, a^*}, Y_t)$ and $a_t^* = a^*(t,X_t^{u^*, a^*}, Y_t)$
the feedback strategy.
\begin{theo}\label{thm:opt.strat}
Suppose that $V$ is a classical solution to the HJB equation
\eqref{eq:hjb}. Suppose further that the strategy $(u^*, a^*)$ admits a unique
strong solution for $X^{u^*, a^*}$ and that $\{X_t^{u^*, a^*}\}$ is uniformly
integrable. Then the strategy $(u^*, a^*)$ is optimal.
\end{theo}
\begin{proof}
By It\^o's formula we get for $X_t = X_t^{u^*,a^*}$
\begin{eqnarray*}
\lefteqn{V(t, X_t,Y_t) = V(0,x,y) +
\int_0^t [V_t(s,X_s, Y_s)}\hskip1.5cm\\
&&{}+ \{m(s, Y_s, u_s^*) + a_s^* \mu(s,Y_s)\} V_x(s,X_s, Y_s)\\
&&{}+ \protect{\mathbox{\frac12}}\{ (\sigma(s,Y_s,u_s^*) + a_s^* \sigma_1(s,Y_s))^2 + {a_s^*}^2
\sigma_2^2(s, Y_s)\} V_{x x}(s,X_s, Y_s)\\
&&{}+ \mu_Y(s,Y_s) V_y(s,X_s,Y_s) + \protect{\mathbox{\frac12}} \sigma_Y^2(s,Y_s) V_{y
y}(s,X_s,Y_s)] \;{\rm d} s\\
&&{}+ \int_0^t [\sigma(s,Y_s,u_s^*) + a_s^*
\sigma_1(s,Y_s)] V_x(s,X_s, Y_s) \;{\rm d} W_s^1\\
&&{}+ \int_0^t a_s^* \sigma_2(s,Y_s) V_x(s,X_s, Y_s)\;{\rm d} W_s^2\\
&&{}+ \int_0^t \sigma_Y(s,Y_s) V_y(s,X_s,Y_s) \;{\rm d} W_s^Y\\
&=& V(0,x,y) + \int_0^t a_s^* \sigma_2(s,Y_s) V_x(s,X_s, Y_s)\;{\rm d} W_s^2\\
&&{}+\int_0^t [\sigma(s,Y_s,u_s^*) + a_s^*
\sigma_1(s,Y_s)] V_x(s,X_s, Y_s) \;{\rm d} W_s^1\\
&&{}+ \int_0^t \sigma_Y(s,Y_s) V_y(s,X_s,Y_s) \;{\rm d} W_s^Y\;.
\end{eqnarray*}
Thus, $\{V(t, X_t,Y_t)\}$ is a local martingale. From $U(X_T) \le U(x) +
(X_T-x) U'(x)$ and the uniform integrability we get that $\{V(t, X_t,Y_t)\}$
is a martingale. We therefore have
\[ {\rm I \mkern-2.5mu \nonscript\mkern-.5mu I \mkern-6.5mu E}[U(X_T)] = {\rm I \mkern-2.5mu \nonscript\mkern-.5mu I \mkern-6.5mu E}[V(T, X_T, Y_T)] = V(0,x,y)\;.\]
This shows that the strategy is optimal.
\end{proof}
\begin{corollary}\label{cor:op.str}
Suppose that $V$ is a classical solution to the HJB equation
\eqref{eq:hjb}. Suppose further that the strategy $(u^*, a^*)$ admits a unique
strong solution for $X^{u^*, a^*}$ and that ${\rm I \mkern-2.5mu \nonscript\mkern-.5mu I \mkern-6.5mu E}[\int_0^T (a_t^*)^2 \;{\rm d} t] <
\infty$. Then the strategy $(u^*,a^*)$ is optimal.
\end{corollary}
\begin{proof}
Since the parameters are bounded, the condition ${\rm I \mkern-2.5mu \nonscript\mkern-.5mu I \mkern-6.5mu E}[\int_0^T (a_t^*)^2 \;{\rm d} t] <
\infty$ implies uniform integrability of $\{X_t^{u^*,a^*}\}$. The result
follows from Theorem~\ref{thm:opt.strat}.
\end{proof}
\section{SAHARA utility functions}
\label{section:sahara}
In this section we study the optimal reinsurance-investment problem when the
insurer's preferences are described by SAHARA utility functions. This class of
utility functions was first introduced by~\cite{chen:sahara} and it includes
the well-known exponential and power utility functions as limiting cases. The
main feature is that SAHARA utility functions are well defined on the whole
real line and, in general, the risk aversion is non-monotone.
More formally, we recall that a utility function $U\colon{\rm I \mkern-2.5mu \nonscript\mkern-.5mu R} \to{\rm I \mkern-2.5mu \nonscript\mkern-.5mu R} $ is of the SAHARA class if its absolute risk aversion (ARA) function $A(x)$ admits the following representation:
\begin{equation}
\label{eqn:arafun}
-\frac{U''(x)}{U'(x)} =: A(x)=\frac{a}{\sqrt{b^2+(x-d)^2}}\;,
\end{equation}
where $a>0$ is the risk aversion parameter, $b>0$ the scale parameter and
$d\in{\rm I \mkern-2.5mu \nonscript\mkern-.5mu R} $ the threshold wealth.
Let us try the ansatz
\begin{equation}
\label{eqn:ansatz_sahara}
V(t,x,y)=U(x)\tilde{V}(t,y)\;.
\end{equation}
\begin{remark}
By~\eqref{eqn:a_optimal} and~\eqref{eqn:ansatz_sahara}, the optimal investment strategy admits a simpler expression:
\begin{equation}
\label{eqn:a_sahara}
a^*(t,x,y) = \frac{\mu(t,y)-A(x)\sigma(t,y,u)\sigma_1(t,y)}{A(x)
(\sigma_1(t,y)^2+\sigma_2(t,y)^2)}\;.
\end{equation}
In particular, $a^*(t,x,y)$ is bounded by a linear function in $x$ and therefore our
assumption ${\rm I \mkern-2.5mu \nonscript\mkern-.5mu I \mkern-6.5mu E}[\int_0^T (a_t^*)^2 \;{\rm d} t] < \infty$ is fulfilled. Under our
hypotheses, if the HJB equation admits a classical solution, the assumptions
in Corollary~\ref{cor:op.str} are satisfied.
Let us note that $a^*(t,x,y)$ is influenced by the reinsurance strategy $u$.
\end{remark}
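For later use in the examples, the ARA function \eqref{eqn:arafun} and the investment rule \eqref{eqn:a_sahara} are straightforward to evaluate numerically; a minimal sketch (with purely illustrative parameter values) is:
\begin{verbatim}
import numpy as np

def ara_sahara(x, a=2.0, b=1.0, d=0.0):
    # absolute risk aversion A(x) of a SAHARA utility (illustrative parameters)
    return a / np.sqrt(b**2 + (x - d)**2)

def optimal_investment(t, x, y, u, mu, sigma, sigma1, sigma2, A=ara_sahara):
    # optimal amount invested in the risky asset, cf. eq. (a_sahara)
    Ax = A(x)
    return ((mu(t, y) - Ax * sigma(t, y, u) * sigma1(t, y))
            / (Ax * (sigma1(t, y)**2 + sigma2(t, y)**2)))
\end{verbatim}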
In this case~\eqref{eqn:hjb2} reads as follows:
\begin{eqnarray*}
0 &=& U(x)\tilde{V}_t+ \mu_Y(t,y) U(x)\tilde{V}_y
+ \protect{\mathbox{\frac12}} \sigma_Y^2(t,y) U(x)\tilde{V}_{yy} \\
&&{}+ U'(x)\tilde{V}(t,y)\sup_{u\in[0,I]}\Psi_{t,x,y}(u)\;,
\end{eqnarray*}
where
\begin{multline}
\label{eqn:psi_sahara}
\Psi_{t,x,y}(u)\doteq m(t,y,u)\\
+\frac{\mu(t,y)^2-2\mu(t,y)\sigma(t,y,u)\sigma_1(t,y)A(x)-\sigma(t,y,u)^2\sigma_2(t,y)^2A(x)^2}{2[\sigma_1(t,y)^2+\sigma_2(t,y)^2]A(x)}\;.
\end{multline}
By our assumptions, $\Psi_{t,x,y}(u)$ is continuous in $u$, hence it admits a
maximum in the compact set $[0,I]$. However, we need additional requirements to
guarantee the uniqueness.
\begin{lemma}
\label{lemma:sahara_concavity}
If $m(t,y,u)$ is concave in $u\in[0,I]$ and $\sigma(t,y,u)$ is non negative and convex in $u\in[0,I]$, then there exists a unique maximiser for $\sup_{u\in[0,I]}\Psi_{t,x,y}(u)$.
\end{lemma}
\begin{proof}
We prove that $\Psi_{t,x,y}(u)$ is the sum of two concave functions, hence it is concave itself. As a consequence, there exists only one maximiser in $[0,I]$. Now, since $m(t,y,u)$ is strictly concave by hypothesis, we only need to show that
\[
\sigma(t,y,u)^2\sigma_2(t,y)^2A(x)+2\mu(t,y)\sigma(t,y,u)\sigma_1(t,y)
\]
is convex in $u$. We know that this quadratic form is convex and increasing when the argument is non negative. Recalling that $\sigma(t,y,u)\ge0$ by hypothesis, we can conclude that the function above is convex, because it is the composition of a non decreasing and convex function with a convex function ($\sigma(t,y,u)$ is so, by assumption). The proof is complete.
\end{proof}
\begin{remark}
Uniqueness is not necessary. If $u^*(t,x,y)$ is not unique, we have to choose
a measurable version in order to determine an optimal strategy.
\end{remark}
\subsection{Proportional reinsurance}
Let us consider the diffusion approximation to the classical risk model with
non-cheap proportional reinsurance, see e.g.{} \cite[Chapter 2]{schmidli:control}. More formally,
\begin{equation}
\label{eqn:noncheap_model}
\;{\rm d} X^{0}_t=(p-q +qu)\;{\rm d} t + \sigma_0u\;{\rm d} W^1_t\;, \qquad X^{0}_0=x\;,
\end{equation}
with $p<q$ and $\sigma_0>0$. Here $I=1$.
From the economic point of view, the insurer transfers a proportion $1-u$ of her risks to the reinsurer (that is $u=0$ corresponds to full reinsurance).
In this case, by~\eqref{eqn:psi_sahara} our optimization problem reduces to
\begin{equation}
\label{eqn:noncheap_pb}
\sup_{u\in[0,1]} qu+\frac{\mu(t,y)^2-2\mu(t,y)\sigma_1(t,y)A(x)\sigma_0u-\sigma_2(t,y)^2A(x)^2\sigma_0^2u^2}{2[\sigma_1(t,y)^2+\sigma_2(t,y)^2]A(x)}\;.
\end{equation}
The optimal strategy is characterized by the following proposition.
\begin{prop}
\label{prop:sahara}
Under the model~\eqref{eqn:noncheap_model}, the optimal reinsurance-investment strategy is given by $(u^*(t,x,y),a^*(t,x,y))$, with
\begin{equation}
\label{eqn:u_sahara}
u^*(t,x,y)=
\begin{cases}
0 & \text{$(t,x,y)\in A_0$}\\
\frac{(\sigma_1(t,y)^2+\sigma_2(t,y)^2)q-\mu(t,y)\sigma_0\sigma_1(t,y)}{\sigma_0^2\sigma_2(t,y)^2A(x)}
& \text{$(t,x,y)\in (A_0\cup A_1)^C$} \\
1 & \text{$(t,x,y)\in A_1\;,$}
\end{cases}
\end{equation}
where
\begin{align*}
A_0&\doteq\Set{(t,x,y)\in [0,T]\times{\rm I \mkern-2.5mu \nonscript\mkern-.5mu R} ^2 : q<\frac{\mu(t,y)\sigma_1(t,y)\sigma_0}{\sigma_1(t,y)^2+\sigma_2(t,y)^2}}\;,\\
A_1&\doteq\Set{(t,x,y)\in [0,T]\times{\rm I \mkern-2.5mu \nonscript\mkern-.5mu R} ^2: q>\frac{\sigma_0[\sigma_2^2A(x)\sigma_0+\mu(t,y)\sigma_1(t,y)]}{\sigma_1(t,y)^2+\sigma_2(t,y)^2}}\;,
\end{align*}
and
\begin{equation}
\label{eqn:a_noncheap}
a^*(t,x,y)=\frac{\mu(t,y)-A(x)\sigma_0u^*(t,x,y)\sigma_1(t,y)}{A(x)(\sigma_1(t,y)^2+\sigma_2(t,y)^2)}\;.
\end{equation}
\end{prop}
\begin{proof}
The expression for $a^*(t,x,y)$ can be readily obtained by \eqref{eqn:a_sahara}.
By Lemma~\ref{lemma:sahara_concavity}, there exists a unique maximiser $u^*(t,x,y)$ for $\sup_{u\in[0,I]}\Psi_{t,x,y}(u)$, where $\Psi_{t,x,y}(u)$ is defined in~\eqref{eqn:psi_sahara} replacing $m(t,y,u)=p-(1-u)q$ and $\sigma(t,y,u)=\sigma_0u$. Now we notice that
\[
(t,x,y)\in A_0 \Rightarrow \frac{\partial \Psi_{t,x,y}(0)}{\partial u} <0\;,
\]
therefore full reinsurance is optimal. On the other hand,
\[
(t,x,y)\in A_1 \Rightarrow \frac{\partial \Psi_{t,x,y}(1)}{\partial u} >0\;,
\]
hence in this case null reinsurance is optimal. Now let us observe that
\[
(t,x,y)\in A_0 \Rightarrow q<\frac{\mu(t,y)\sigma_1(t,y)\sigma_0}{\sigma_1(t,y)^2+\sigma_2(t,y)^2}<\frac{\sigma_0[\sigma_2(t,y)^2A(x)\sigma_0+\mu(t,y)\sigma_1(t,y)]}{\sigma_1(t,y)^2+\sigma_2(t,y)^2}\;,
\]
which implies $A_0\cap A_1=\emptyset$. Finally, when $(t,x,y)\in (A_0\cup A_1)^C$, the optimal strategy is given by the unique stationary point of $\Psi_{t,x,y}(u)$. By solving $\frac{\partial \Psi_{t,x,y}(u)}{\partial u}=0$, we obtain the expression in~\eqref{eqn:u_sahara}.
\end{proof}
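Since the optimal retention \eqref{eqn:u_sahara} and the optimal investment \eqref{eqn:a_noncheap} are explicit, they can be evaluated directly; a self-contained sketch (using the same illustrative ARA function as above) is:
\begin{verbatim}
import numpy as np

def ara_sahara(x, a=2.0, b=1.0, d=0.0):
    return a / np.sqrt(b**2 + (x - d)**2)

def optimal_proportional(t, x, y, q, sigma0, mu, sigma1, sigma2, A=ara_sahara):
    # piecewise optimal retention u* of eq. (u_sahara) and investment a*
    Ax, m, s1, s2 = A(x), mu(t, y), sigma1(t, y), sigma2(t, y)
    s_sq = s1**2 + s2**2
    if q < m * s1 * sigma0 / s_sq:                            # region A_0
        u = 0.0                                               # full reinsurance
    elif q > sigma0 * (s2**2 * Ax * sigma0 + m * s1) / s_sq:  # region A_1
        u = 1.0                                               # no reinsurance
    else:                                                     # interior point
        u = (s_sq * q - m * sigma0 * s1) / (sigma0**2 * s2**2 * Ax)
    a_star = (m - Ax * sigma0 * u * s1) / (Ax * s_sq)         # eq. (a_noncheap)
    return u, a_star
\end{verbatim}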
\begin{remark}
The previous result remains true under the slight generalization in which
$p(t,y)$, $q(t,y)$ and $\sigma_0(t,y)$ depend on time and on the environmental
process. In this case, there will be an additional effect of the exogenous factor $Y$.
\end{remark}
Proposition~\ref{prop:sahara} also holds in the case of an exponential utility
function.
\begin{corollary}
For $U(x)=-{\rm e}^{-\beta x}$ with $\beta>0$, the optimal strategy is given by
$(u^*(t,y),a^*(t,y))$, with
\begin{equation}
\label{eqn:u_exp}
u^*(t,y)=
\begin{cases}
0 & \text{$(t,y)\in A_0$}\\
\frac{(\sigma_1(t,y)^2+\sigma_2(t,y)^2)q-\mu(t,y)\sigma_0\sigma_1(t,y)}{\sigma_0^2\sigma_2(t,y)^2\beta}
& \text{$(t,y)\in (A_0\cup A_1)^C$} \\
1 & \text{$(t,y)\in A_1\;,$}
\end{cases}
\end{equation}
where
\begin{align*}
A_0&\doteq\Set{(t,y)\in [0,T]\times{\rm I \mkern-2.5mu \nonscript\mkern-.5mu R} : q<\frac{\mu(t,y)\sigma_1(t,y)\sigma_0}{\sigma_1(t,y)^2+\sigma_2(t,y)^2}}\;,\\
A_1&\doteq\Set{(t,y)\in [0,T]\times{\rm I \mkern-2.5mu \nonscript\mkern-.5mu R} : q>\frac{\sigma_0[\sigma_2^2\sigma_0\beta+\mu(t,y)\sigma_1(t,y)]}{\sigma_1(t,y)^2+\sigma_2(t,y)^2}}\;,
\end{align*}
and
\begin{equation}
\label{eqn:a_noncheap_exp}
a^*(t,y)=\frac{\mu(t,y)-\beta\sigma_0
u^*(t,y)\sigma_1(t,y)}{\beta(\sigma_1(t,y)^2+\sigma_2(t,y)^2)}\;.
\end{equation}
\end{corollary}
\begin{proof}
By definition of the ARA function, the exponential utility function corresponds to the special case $A(x)=\beta$. Hence, we can apply Proposition~\ref{prop:sahara} with this constant ARA function. All the calculations remain the same, but the optimal strategy will be independent of the current wealth level $x$.
\end{proof}
\subsection{Excess-of-loss reinsurance}
Now we consider the optimal excess-of-loss reinsurance problem. The retention
level is chosen in the interval $u\in[0,+\infty]$ and, for any future claim,
the reinsurer is responsible for the part of the claim exceeding the threshold $u$.
For instance, $u = \infty$ corresponds to no reinsurance.
The surplus process without investment is given
by, see also \cite{EisSchm}
\begin{equation}\label{eqn:XL_model}
\;{\rm d} X^{0}_t= \Bigl(\theta\int_0^u \bar{F}(z)\;{\rm d} z- (\theta-\eta){\rm I \mkern-2.5mu \nonscript\mkern-.5mu I \mkern-6.5mu E}[Z] \Bigr)\;{\rm d} t
+ \sqrt{\int_0^u2z\bar{F}(z)\;{\rm d} z}\;{\rm d} W^1_t\;, \quad X^{0}_0=x\;,
\end{equation}
where $\theta,\eta>0$ are the reinsurer's and the insurer's safety loadings,
respectively, and $\bar{F}(z) = 1 - F(z)$ is the tail of the claim size
distribution function.
In the sequel we require ${\rm I \mkern-2.5mu \nonscript\mkern-.5mu I \mkern-6.5mu E}[Z] < \infty$ and, for the sake of simplicity of the presentation, that
$F(z)<1$ $\forall z\in[0,+\infty)$.
Notice also that it is usually assumed $\theta>\eta$.
However, we do not exclude the so called \textit{cheap reinsurance}, that is $\theta=\eta$.
By~\eqref{eqn:psi_sahara}, we obtain the following maximization problem:
\begin{eqnarray}
\label{eqn:XL_pb}
\lefteqn{\sup_{u\in[0,\infty]}\theta\int_0^u\bar{F}(z)\;{\rm d} z}\nonumber\\
&&{}-\frac{2\mu(t,y)\sigma_1(t,y)\sqrt{\int_0^u2z\bar{F}(z)\;{\rm d} z}+\sigma_2(t,y)^2
A(x)\int_0^u2z\bar{F}(z)\;{\rm d} z}{2[\sigma_1(t,y)^2+\sigma_2(t,y)^2]}\;.\hskip1cm
\end{eqnarray}
\begin{prop}
\label{prop:xl_sahara}
Under the model~\eqref{eqn:XL_model}, suppose that the function in
\eqref{eqn:XL_pb} is strictly concave in $u$. There exists a unique maximiser
$u^*(t,x,y)$ given by
\begin{equation}
\label{eqn:XL_solution}
u^*(t,x,y)=
\begin{cases}
0 & \text{$(t,y)\in A_0$}
\\
\hat{u}(t,x,y) & \text{$(t,y)\in [0,T]\times{\rm I \mkern-2.5mu \nonscript\mkern-.5mu R} \setminus A_0\;$}
\end{cases}
\end{equation}
where
\[
A_0\doteq\Set{(t,y)\in[0,T]\times{\rm I \mkern-2.5mu \nonscript\mkern-.5mu R} : \theta \le \frac{2\mu(t,y)\sigma_1(t,y)}{\sigma_1(t,y)^2+\sigma_2(t,y)^2}}
\]
and $\hat{u}(t,x,y)$ is the solution to the following equation:
\begin{equation}
\label{eqn:XL_nullderivative}
\theta(\sigma_1(t,y)^2+\sigma_2(t,y)^2)=2\mu(t,y)\sigma_1(t,y)\Bigl(\int_0^u2z\bar{F}(z)\;{\rm d} z\Bigr)^{-\frac{1}{2}}u+\sigma_2(t,y)^2A(x)u\;.
\end{equation}
\end{prop}
\begin{proof}
We first note that, using L'Hospital's rule,
\[ \lim_{u \to \infty} \frac{(\int_0^u \bar F(z) \;{\rm d} z)^2}{\int_0^u 2 z \bar
F(z) \;{\rm d} z} = 0\;.\]
The derivative with respect to $u$ of the function in \eqref{eqn:XL_pb} is
\[ \Bigl(\theta - u \frac{2\mu(t,y)\sigma_1(t,y) \bigl(\int_0^u 2z\bar{F}(z)\;{\rm d}
z\bigr)^{-\frac{1}{2}} +
\sigma_2(t,y)^2A(x)}{\sigma_1(t,y)^2+\sigma_2(t,y)^2}\Bigr) \bar F(u)\;.\]
Consider the expression between brackets
\begin{equation}\label{eq:XL_pb-der}
\theta - u \frac{2\mu(t,y)\sigma_1(t,y) \bigl(\int_0^u 2z\bar{F}(z)\;{\rm d}
z\bigr)^{-\frac{1}{2}} +
\sigma_2(t,y)^2A(x)}{\sigma_1(t,y)^2+\sigma_2(t,y)^2}\;.
\end{equation}
Since $\int_0^u 2 z \bar F(z) \;{\rm d} z \le u^2$, we see that
for any $(t,y)\in A_0$ the function in~\eqref{eqn:XL_pb} is strictly
decreasing. Thus $u^* = 0$ in this case. For $(t,y)\notin A_0$ we obtain by L'Hospital's rule,
\[ \lim_{u \to 0} \frac{\int_0^u 2 z \bar F(z) \;{\rm d} z}{u^2} = 1\;.\]
This implies that the function to be maximised increases close to zero. In
particular, the maximum is not taken at zero. Further, if ${\rm I \mkern-2.5mu \nonscript\mkern-.5mu I \mkern-6.5mu E}[Z^2] < \infty$,
then \eqref{eq:XL_pb-der} tends to $-\infty$ as $u \to \infty$. If
${\rm I \mkern-2.5mu \nonscript\mkern-.5mu I \mkern-6.5mu E}[Z^2] = \infty$, then
\[ \lim_{u \to \infty} \frac{\int_0^u 2 z \bar F(z) \;{\rm d} z}{u^2} = 0\;.\]
Thus also in this case, \eqref{eq:XL_pb-der} tends to $-\infty$ as $u \to
\infty$. Thus the maximum is taken in $(0,\infty)$, and uniqueness of
$\hat{u}(t,x,y)$ is guaranteed by the concavity. Now the proof is complete.
\end{proof}
\begin{corollary}
Under the assumptions of Proposition~\ref{prop:xl_sahara}, the optimal re\-in\-sur\-ance-investment strategy is given by
\[ \biggl(\frac{\mu(t,y)-A(x)\sigma_1(t,y)\sqrt{\int_0^{u^*(t,x,y)} 2 z
\bar{F}(z) \;{\rm d} z}}{A(x)(\sigma_1(t,y)^2+\sigma_2(t,y)^2)},\;u^*(t,x,y)\biggr)\;,
\]
with $u^*(t,x,y)$ given in~\eqref{eqn:XL_solution}.
\end{corollary}
The main assumption of Proposition~\ref{prop:xl_sahara}, namely the concavity of the function in \eqref{eqn:XL_pb}, may not be easy to verify. In the next result we relax that hypothesis, requiring only the uniqueness of a solution to equation \eqref{eqn:XL_nullderivative}.
\begin{prop}
Under the model~\eqref{eqn:XL_model}, suppose that the equation \eqref{eqn:XL_nullderivative} admits a unique solution $\hat{u}(t,x,y)$ for any $(t,x,y)\in[0,T]\times{\rm I \mkern-2.5mu \nonscript\mkern-.5mu R} ^2$. Then it is the unique maximiser to \eqref{eqn:XL_pb}.
\end{prop}
\begin{proof}
In the proof of Proposition~\ref{prop:xl_sahara} we only used the concavity to
verify uniqueness of the maximiser. Therefore, the same proof
applies.
\end{proof}
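As an illustration of how $\hat{u}(t,x,y)$ can be computed in practice, the following Python sketch solves equation~\eqref{eqn:XL_nullderivative} by root-finding after checking the $A_0$ condition. The exponential claim-size distribution and all parameter values are assumptions made only for this example.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def Fbar(z):
    # assumed claim-size tail: exponential with mean 1 (illustrative only)
    return np.exp(-z)

def I2(u):
    # integral of 2 z Fbar(z) over [0, u]
    return quad(lambda z: 2.0 * z * Fbar(z), 0.0, u)[0]

def foc(u, theta, mu, s1, s2, A):
    # left-hand side minus right-hand side of eqn (XL_nullderivative)
    return (theta * (s1**2 + s2**2)
            - 2.0 * mu * s1 * u / np.sqrt(I2(u))
            - s2**2 * A * u)

theta, mu, s1, s2, A = 0.3, 0.08, 0.5, 0.5, 1.0
if theta <= 2.0 * mu * s1 / (s1**2 + s2**2):      # (t, y) in A_0
    u_star = 0.0
else:
    u_star = brentq(lambda u: foc(u, theta, mu, s1, s2, A), 1e-6, 50.0)
print(u_star)
\end{verbatim}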
\subsection{Independent markets}
Suppose that the insurance and the financial markets are conditionally
independent given $Y$. That is, let $\sigma_1(t,y) = 0$. Then by~\eqref{eqn:a_sahara} we get
\[ a^*(t,x,y) = \frac{\mu(t,y)}{A(x) \sigma_2^2(t,y)}\;.\]
\begin{remark}
Suppose that $\sigma(t,y,u)\ge0$ as usual. The insurer invests a larger amount of its surplus in the risky asset when the financial market is independent of the insurance market, as can be seen by comparing the formula above with \eqref{eqn:a_sahara}.
\end{remark}
Regarding the reinsurance problem, by \eqref{eqn:psi_sahara} we have to maximise this quantity:
\[ \Psi_{t,x,y}(u) := m(t,y,u)
+\frac{\mu(t,y)^2-\sigma(t,y,u)^2\sigma_2(t,y)^2A(x)^2}{2\sigma_2(t,y)^2
A(x)}\;.\]
\begin{prop}
Suppose that $\Psi_{t,x,y}(u)$ is strictly concave in $u\in[0,I]$. Then the optimal reinsurance strategy admits the following expression:
\begin{equation}
\label{eqn:independent_solution}
u^*(t,x,y)=
\begin{cases}
0 & (t,x,y)\in A_0
\\
\hat{u}(t,x,y) & (t,x,y)\in [0,T]\times{\rm I \mkern-2.5mu \nonscript\mkern-.5mu R} ^2 \setminus (A_0\cup A_I)
\\
I & (t,x,y)\in A_I\;,
\end{cases}
\end{equation}
where
\begin{align*}
A_0&\doteq\Set{(t,x,y)\in[0,T]\times{\rm I \mkern-2.5mu \nonscript\mkern-.5mu R} ^2 : \pderiv{m(t,y,0)}u \le A(x)\sigma(t,y,0)\pderiv{\sigma(t,y,0)}u}\;,\\
A_I&\doteq\Set{(t,x,y)\in[0,T]\times{\rm I \mkern-2.5mu \nonscript\mkern-.5mu R} ^2 : \pderiv{m(t,y,I)}u \ge A(x)\sigma(t,y,I) \pderiv{\sigma(t,y,I)}u}\;,
\end{align*}
and $\hat{u}(t,x,y)$ is the unique solution to
\[
\pderiv{m(t,y,u)}u = A(x) \sigma(t,y,u)\pderiv{\sigma(t,y,u)}u \;.
\]
\end{prop}
\begin{proof}
Since $\Psi_{t,x,y}(u)$ is continuous in $u$, it admits a maximiser in the compact set $[0,I]$, which is unique by strict concavity. The derivative is
\[ \pderiv{m(t,y,u)}u - \frac{1}{2} A(x) \pderiv{\sigma^2(t,y,u)}u
=\pderiv{m(t,y,u)}u - A(x) \sigma(t,y,u)\pderiv{\sigma(t,y,u)}u \;.\]
If $(t,x,y)\in A_0$, then $\pderiv{\Psi_{t,x,y}(0)}u\le0$ and, by concavity, $\Psi_{t,x,y}(u)$ is decreasing in $[0,I]$; hence $u^*(t,x,y)=0$ is optimal for all $(t,x,y)\in A_0$. Notice that $A_0\cap A_I=\emptyset$, again because of the concavity of $\Psi_{t,x,y}(u)$. If $(t,x,y)\in A_I$, then $\pderiv{\Psi_{t,x,y}(I)}u\ge0$ and $\Psi_{t,x,y}(u)$ is increasing in $[0,I]$, therefore it attains its maximum at $u^*(t,x,y)=I$. Finally, if $(t,x,y)\in [0,T]\times{\rm I \mkern-2.5mu \nonscript\mkern-.5mu R} ^2 \setminus (A_0\cup A_I)$, the maximiser coincides with the unique stationary point $\hat{u}(t,x,y)\in(0,I)$.
\end{proof}
The main consequence of the preceding result is that the reinsurance and the investment decisions depend on each other only via the surplus process and not via the parameters.
Now we specialize Propositions~\ref{prop:sahara} and~\ref{prop:xl_sahara} to the case $\sigma_1(t,y) = 0$.
\begin{corollary}
\label{corollary:prop_nullsigma1}
Suppose that $\sigma_1(t,y) = 0$ and consider the case of proportional
reinsurance~\eqref{eqn:noncheap_model}. The optimal retention level is given by
\begin{equation*}
u^*(x)=\frac{q}{\sigma_0^2A(x)}\land1\;.
\end{equation*}
\end{corollary}
\begin{proof}
It is a direct consequence of Proposition~\ref{prop:sahara}. In fact, the reader can easily verify that $A_0=\emptyset$ and that formula~\eqref{eqn:u_sahara} simplifies as above.
\end{proof}
As expected, the optimal retention level is proportional to the reinsurance
cost and inversely proportional to the risk aversion. Moreover, reinsurance
is only bought for wealth not too far from $d$ (recall
equation~\eqref{eqn:arafun}). Note that the optimal strategy is independent of
$t$ and $y$, i.e. it is only affected by the current wealth.
Finally, full
reinsurance is never optimal.
\begin{corollary}
Suppose that $\sigma_1(t,y) = 0$ and consider excess-of-loss
reinsurance~\eqref{eqn:XL_model}. The optimal retention level is given by
\begin{equation*}
u^*(x)=\frac{\theta}{A(x)}\;.
\end{equation*}
\end{corollary}
\begin{proof}
Using Proposition~\ref{prop:xl_sahara} we readily check that $A_0=\emptyset$ and by equation~\eqref{eqn:XL_nullderivative} we get the explicit solution $u^*(x)$.
\end{proof}
Again, the retention level turns out to increase with the reinsurance
safety loading and decrease with the risk aversion parameter. In addition,
it increases with the distance between the current wealth $x$ and the
threshold $d$.
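A minimal numerical sketch of the two corollaries above is given below. The closed form used for the absolute risk aversion, $A(x)=a/\sqrt{b^2+(x-d)^2}$, is an assumption of this sketch, intended to match~\eqref{eqn:arafun}; all parameter values are illustrative.
\begin{verbatim}
import numpy as np

def ara(x, a=1.0, b=1.0, d=0.0):
    # assumed SAHARA absolute risk aversion, cf. equation (arafun)
    return a / np.sqrt(b**2 + (x - d)**2)

def u_prop(x, q=0.05, sigma0=0.5):
    # proportional reinsurance with sigma_1 = 0
    return min(q / (sigma0**2 * ara(x)), 1.0)

def u_xl(x, theta=0.3):
    # excess-of-loss reinsurance with sigma_1 = 0
    return theta / ara(x)

for x in (0.0, 1.0, 5.0):
    print(x, round(u_prop(x), 3), round(u_xl(x), 3))
\end{verbatim}
As the wealth moves away from $d$, both retention levels increase, in line with the comments above.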
\section{Numerical results}
\label{section:numerical}
In this section we provide some numerical examples based on Proposition \ref{prop:sahara}. All the simulations are performed according to the parameters in Table \ref{tab:parameters} below, unless indicated otherwise.
\begin{table}[H]
\caption{Simulation parameters}
\label{tab:parameters}
\centering
\begin{tabular}{ll}
\toprule
\textbf{Parameter} & \textbf{Value}\\
\midrule
$\mu$ & $0.08$\\
$\sigma_1$ & $0.5$\\
$\sigma_2$ & $0.5$\\
$\sigma_0$ & $0.5$\\
$q$ & $0.05$\\
$x$ & $1$\\
$a$ & $1$\\
$b$ & $1$\\
$d$ & $0$\\
\bottomrule
\end{tabular}
\end{table}
The choice of constant parameters may be considered as fixing
$(t,y,x)\in[0,T]\times{\rm I \mkern-2.5mu \nonscript\mkern-.5mu R}^2$. Note that the strategy depends on $\{Y_t\}$ via
the parameters only. Now we illustrate how the strategy depends on the
different parameters. In the following figures, the solid line shows the
reinsurance strategy, the dashed line the investment strategy.
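Curves of this kind can be generated with a short script such as the following Python sketch. It assumes, as in the proof of the corollary for exponential utility, that the interior expression in~\eqref{eqn:u_sahara} coincides with~\eqref{eqn:u_exp} with $\beta$ replaced by $A(x)$, and it uses the assumed closed form $A(x)=a/\sqrt{b^2+(x-d)^2}$ for the risk aversion; it is a sketch, not the exact code behind the figures.
\begin{verbatim}
import numpy as np

def ara(x, a=1.0, b=1.0, d=0.0):
    # assumed SAHARA absolute risk aversion, cf. equation (arafun)
    return a / np.sqrt(b**2 + (x - d)**2)

def strategy(mu, s1, s2, s0, q, x):
    A = ara(x)
    S = s1**2 + s2**2
    lower = mu * s1 * s0 / S
    upper = s0 * (s2**2 * s0 * A + mu * s1) / S
    if q < lower:
        u = 0.0
    elif q > upper:
        u = 1.0
    else:
        u = (S * q - mu * s0 * s1) / (s0**2 * s2**2 * A)
    a_inv = (mu - A * s0 * u * s1) / (A * S)   # invested amount
    return u, a_inv

for s1 in np.linspace(0.0, 1.5, 7):
    u, a_inv = strategy(0.08, s1, 0.5, 0.5, 0.05, 1.0)
    print(round(s1, 2), round(u, 3), round(a_inv, 3))
\end{verbatim}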
First, we analyse how the volatility coefficients of the risky asset influence
the optimal strategies. In Figures \ref{img:sigma1} and \ref{img:sigma2}
we notice very different behaviour.
On the one hand, the retention level $u^*$ is convex with respect to $\sigma_1$ up to a certain threshold, above which null reinsurance is optimal. On the other hand, when $\sigma_1>0$ (see Figure \ref{img:sigma2_a}) $u^*$ is null up to a given point and concave with respect to $\sigma_2$ from that point on. Finally, for $\sigma_1=0$ (see Figure \ref{img:sigma2_b}) the retention level is constant (see Corollary \ref{corollary:prop_nullsigma1}). Let us observe that the regularity of the optimal investment in Figure \ref{img:sigma2_b} is due to the absence of influence from $u^*$ (which remains constant).
\begin{figure}[H]
\centering
\ifpdf
\includegraphics[scale=0.28]{sigma1.jpg}
\else \includegraphics[scale=0.28]{sigma1.eps}
\fi
\caption{The effect of $\sigma_1$ on the optimal reinsurance-investment strategy.}
\label{img:sigma1}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{1\textwidth}
\ifpdf
\includegraphics[width=\textwidth]{sigma2.jpg}
\else \includegraphics[width=\textwidth]{sigma2.eps}
\fi
\caption{Case $\sigma_1>0$}
\label{img:sigma2_a}
\end{subfigure}
\vspace{1em}
\begin{subfigure}{1\textwidth}
\ifpdf
\includegraphics[width=\textwidth]{sigma2_nullsigma1.jpg}
\else \includegraphics[width=\textwidth]{sigma2_nullsigma1.eps}
\fi
\caption{Case $\sigma_1=0$}
\label{img:sigma2_b}
\end{subfigure}
\caption{The effect of $\sigma_2$ on the optimal reinsurance-investment strategy.}
\label{img:sigma2}
\end{figure}
Now let us focus on Figure \ref{img:sigma0}. When $\sigma_0$ increases, the insurer rapidly goes from null reinsurance to full reinsurance, while the investment $a^*$ strongly depends on the retention level $u^*$. For $\sigma_1>0$ (see Figure \ref{img:sigma0_a}), as long as $u^*=1$, $a^*$ decreases with $\sigma_0$; when $u^*$ starts decreasing within $(0,1)$, $a^*$ increases; finally, when $u^*$ stabilises at $0$, $a^*$ stabilises at its starting level. On the contrary, when $\sigma_1=0$ the investment remains constant and $u^*$ asymptotically goes to $0$.
\begin{figure}[H]
\centering
\begin{subfigure}{1\textwidth}
\ifpdf
\includegraphics[width=\textwidth]{sigma0.jpg}
\else \includegraphics[width=\textwidth]{sigma0.eps}
\fi
\caption{Case $\sigma_1>0$}
\label{img:sigma0_a}
\end{subfigure}
\vspace{1em}
\begin{subfigure}{1\textwidth}
\ifpdf
\includegraphics[width=\textwidth]{sigma0_nullsigma1.jpg}
\else \includegraphics[width=\textwidth]{sigma0_nullsigma1.eps}
\fi
\caption{Case $\sigma_1=0$}
\label{img:sigma0_b}
\end{subfigure}
\caption{The effect of $\sigma_0$ on the optimal reinsurance-investment strategy.}
\label{img:sigma0}
\end{figure}
As pointed out in the previous section, the current wealth level $x$ plays an
important role in the evaluation of the optimal strategy and this is still
true also in the special case $\sigma_1=0$. In Figure \ref{img:x} below we
illustrate the optimal strategy as a function of $x$. Both the reinsurance and
the investment strategy are symmetric with respect to $x=d=0$. Moreover, they
both increase when $x$ moves away from the threshold wealth $d$. This is not
surprising because the risk aversion decreases with the distance to
$d$.
\begin{figure}[H]
\centering
\ifpdf
\includegraphics[scale=0.28]{x.jpg}
\else\includegraphics[scale=0.28]{x.eps}
\fi
\caption{The effect of the current wealth $x$ on the optimal reinsurance-investment strategy.}
\label{img:x}
\end{figure}
In Figure \ref{img:utility} we investigate how the optimal strategy reacts to modifications of the utility function. As expected, the higher the risk aversion, the larger the optimal protection level and the lower the investment in the risky asset (see Figure \ref{img:a}). When $b$ increases, both the investment and the retention level monotonically increase (see Figure \ref{img:b}). Let us recall that $b\to0$ corresponds to HARA utility functions. Finally, Figure \ref{img:d} shows that any change of $d$ produces the same effect as a variation in the current wealth $x$ (see Figure \ref{img:x}).
\begin{figure}[H]
\centering
\begin{subfigure}{1\textwidth}
\ifpdf
\includegraphics[width=0.75\textwidth]{a.jpg}
\else \includegraphics[width=0.75\textwidth]{a.eps}
\fi
\caption{The effect of the risk aversion on the optimal strategy.}
\label{img:a}
\end{subfigure}
\vspace{1em}
\begin{subfigure}{1\textwidth}
\ifpdf
\includegraphics[width=0.75\textwidth]{b.jpg}
\else \includegraphics[width=0.75\textwidth]{b.eps}
\fi
\caption{The effect of the scale parameter on the optimal strategy.}
\label{img:b}
\end{subfigure}
\vspace{1em}
\begin{subfigure}{1\textwidth}
\ifpdf
\includegraphics[width=0.75\textwidth]{d.jpg}
\else \includegraphics[width=0.75\textwidth]{d.eps}
\fi
\caption{The effect of the wealth threshold on the optimal strategy.}
\label{img:d}
\end{subfigure}
\caption{The effect of the SAHARA utility function parameters on the optimal reinsurance-investment strategy.}
\label{img:utility}
\end{figure}
\newpage
\bibliographystyle{abbrv}
\section{Introduction}
\label{intro}
Studies of clusters of galaxies, including measurements of their
number density and growth from the highest density perturbations
in the early Universe, offer insight into the underlying cosmology
\citep{2003ApJ...590...15V,2004MNRAS.353..457A}. However, in order
to use clusters as a cosmological probe three essential tools are
required \citep{2010A&A...514A..80D}: (a) an efficient method to find
clusters over a wide redshift range, (b) an observational method of
determining the cluster mass, and (c) a method to compute the
selection function or the survey volume in which clusters are found.
These requirements are met by large surveys with well understood
selection criteria. Arguably the most effective method of building
large, well defined cluster samples has been via X-ray selection.
The high X-ray luminosities of clusters make it relatively easy to
detect and study clusters out to high redshifts.
Many cluster samples have been constructed based upon large X-ray
surveys such as the Einstein Medium Sensitivity Survey
\citep[EMSS;][]{1990ApJS...72..567G} and the ROSAT All Sky Survey
\citep[RASS;][]{1992eocm.rept....9V}. However due to the relatively
poor angular resolution of these X-ray observatories,
observations of clusters were susceptible to point source
contamination. Indeed, within the ROSAT Brightest Cluster Sample
\citep[BCS;][]{1998MNRAS.301..881E} and its low-flux extension
\citep[eBCS;][]{2000MNRAS.318..333E}, 9 out of 201 clusters and 8 out
of 99 clusters respectively were flagged as probably having a
significant fraction of the quoted flux from embedded point sources.
Being able to resolve these point sources is of crucial importance for
the reliable estimation of cluster properties, and indeed the nature
of the point source contamination is of independent interest.
The study of galaxy clusters has been transformed with the launch of
powerful X-ray telescopes such as \emph{Chandra} and XMM Newton, which
have allowed the study of the X-ray emitting intracluster medium (ICM)
with unprecedented detail and accuracy. With the launch of this new
generation of X-ray telescopes, we are able to uncover interesting
features in the morphologies of individual clusters. In particular,
\emph{Chandra's} high angular resolution provides the means to examine
individual cluster features with great detail.
Abell 689 \citep[hereafter A689;][]{1958ApJS....3..211A}, at z=0.279
\citep{1995MNRAS.274.1071C}, was detected in the RASS in an
accumulated exposure time of 317s. It is included in the BCS, with a
measured X-ray luminosity of 3.0$\times$10$^{45}$ erg s$^{-1}$
in the 0.1--2.4 keV band. This luminosity is the third
highest in the BCS, and thus A689 meets the selection criteria for
various highly X-ray luminous cluster samples
\citep[e.g.][]{2002ApJS..139..313D}. However this cluster was noted
as having possible point source contamination, and for this reason has
often been rejected from many flux limited samples. In this study we
present results of \emph{Chandra} observations designed to separate
any point sources and determine uncontaminated cluster properties.
The outline of this paper is as follows. In $\S$~\ref{reduction} we
discuss the observation and data analysis. Results of the X-ray
cluster analysis is presented in $\S$~\ref{xray analysis}. In
$\S$~\ref{point source} we present our analysis of the X-ray point
source through X-ray, optical and radio observations. We interpret
our results in $\S$~\ref{discussion} and the conclusions are presented
in $\S$~\ref{conclusions}. Throughout this paper we adopt a cosmology
with $\Omega_M$= 0.3, $\Omega_{\Lambda}$ = 0.7 and
H$_0$ = 70 km s$^{-1}$ Mpc$^{-1}$, so that 1$^{\prime\prime}$
corresponds to 4.22 kpc at the redshift of A689. We define
spectral index, $\alpha$, in the sense S $\propto$ $\nu^{-\alpha}$.
\section{Observations and data reduction}
\label{reduction}
The \emph{Chandra} observation of A689 (ObsID 10415) was carried out January
01, 2009. A summary of the cluster's properties is
presented in Table~\ref{table:a689}. The observation was taken in
VFAINT mode, and the source was observed in an ACIS-I configuration at
the aim point of the I3 chip, with the ACIS S2 chip also turned on.
\begin{table*}
\begin{tabular}{cccccc}
\hline\hline
Name & RA & Dec & z & N$_{\rm H,Gal}$ & RASS L$_{ \rm {X,0.1-2.4 keV}}$ \\
\hline
Abell 689 & 08$^{\rm h}$ 37$^{\rm m}$ 24$^{\rm s}$.70 & +14$^{\rm o}$ 58$^{\prime}$ 20$^{\prime\prime}$.78 & 0.279 & 3.66$\times$10$^{20}$
cm$^{-2}$ & 30.4$\times$10$^{44}$ erg s$^{-1}$ \\
\hline
\end{tabular}
\caption{\small{Basic properties. Columns: (1) = Source name; (2) =
Right Ascension at J2000 from {\em Chandra}; (3) = Declination at
J2000 from {\em Chandra}; (4) = Redshift; (5) = Galactic column
density; (6) = Intrinsic X-ray luminosity in the 0.1--2.4 keV band
based upon ROSAT observations
\citep{1998MNRAS.301..881E}.}}
\label{table:a689}
\end{table*}
For the imaging analysis of the cluster we used the {\scriptsize CIAO}\footnote{See http://cxc.harvard.edu/ciao/} 4.2 software package with {\scriptsize CALDB}\footnote{See http://cxc.harvard.edu/caldb/} version 4.3.0 and
followed standard reduction methods. Since our observation was
telemetered in VFAINT mode additional background screening was applied
by removing events with significantly positive pixels at the border
of the 5$\times$5 event island\footnote{See
http://cxc.harvard.edu/ciao/why/aciscleanvf.html}. We inspected
background light curves of the observation following the
recommendations given in \cite{2003ApJ...583...70M}, to search for
possible background fluctuations. The light curve was cleaned by
3$\sigma$ clipping and periods with count rates $>$20$\%$ above the
mean rate were rejected.
Figure~\ref{fig:lc} shows the resulting background light curve, with rejected periods marked in red. The final level-2 event file had a total cleaned exposure time of 13.862 ks.
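The screening procedure can be outlined as in the following Python sketch; this is an illustrative outline operating on simulated data, not the exact pipeline used here.
\begin{verbatim}
import numpy as np

def clean_lightcurve(rate, n_sigma=3.0, frac=0.2):
    # iterative n-sigma clipping followed by rejection of bins more
    # than `frac` above the (clipped) mean rate
    good = np.ones(rate.size, dtype=bool)
    for _ in range(10):
        m, s = rate[good].mean(), rate[good].std()
        new_good = np.abs(rate - m) < n_sigma * s
        if np.array_equal(new_good, good):
            break
        good = new_good
    good &= rate < (1.0 + frac) * rate[good].mean()
    return good

rng = np.random.default_rng(0)
rate = rng.normal(1.0, 0.05, 500)
rate[100:110] += 1.0                 # simulated flare
print(clean_lightcurve(rate).sum(), "good bins of", rate.size)
\end{verbatim}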
As discussed in Sect. 4, there is a bright point source at the center
of the extended cluster emission which is affected by pileup. For the
analysis of this source we followed the same reduction method,
with the exception that VFAINT cleaning was not applied. Applying
VFAINT cleaning leads to incorrect rejection of piled-up events,
introducing artifacts in the data.
\begin{figure}
\begin{center}
\includegraphics[width=8.4cm]{a689_lightcurve.epsi}
\end{center}
\caption{\small{Background light curve for the observation of A689 in
the 0.3-12.0 keV band. The CCD on which the cluster
falls (ACIS-I3) and all point sources are excluded. The red bands show
periods excluded by the Good Time Interval (GTI) file.}}
\label{fig:lc}
\end{figure}
\section{X-RAY CLUSTER ANALYSIS}
\label{xray analysis}
In this section we determine global cluster properties of
Abell 689. Figure~\ref{fig:clust} shows a Gaussian smoothed image of
the cleaned level 2 events file in the 0.7-2.0 keV band (the readout
streak is removed using the {\scriptsize CIAO} tool {\scriptsize ACISREADCORR}), with an inset image of the point source which lies at the center of the cluster.
The extent of the diffuse cluster emission was determined by plotting
an exposure-corrected radial surface brightness profile
(Fig~\ref{fig:rprof}), in the 0.7-2.0 keV band, of both the
observation and blank-sky background to determine where the cluster
emission is lost against the background. Figure~\ref{fig:rprof}
demonstrates that the diffuse cluster emission is detectable to a
radius r $\approx$ 570$^{\prime\prime}$ ($\approx$ 2.41 Mpc). At
large radii (r$\ge$700$^{\prime\prime}$) the curves rise due to
vignetting corrections (larger at larger radii) applied to all the
counts, whereas in reality each curve contains a component from
particles that have not been focused by the telescope.
\begin{figure*}
\begin{center}
\includegraphics[width=17.5cm]{a689clust.ps}
\end{center}
\caption{\small{0.7-2.0 keV image of A689, smoothed by a Gaussian
($\sigma$=1.5 pixels, where 1 pixel = 3.94$^{\prime\prime}$), cleaned
in VFAINT mode and with the readout streak removed. Inset is a
zoomed-in unbinned image of the central point source within A689,
cleaned in FAINT mode (see $\S$~\ref{reduction}). The inner black
circle (r = 208$^{\prime\prime}$) is the region in which we extract the
spectra for our analysis of the cluster emission (see
$\S$~\ref{X-ray props}), the outer
black circle (r = 570$^{\prime\prime}$) represents the detected
cluster radius (see $\S$~\ref{xray analysis}), and the region
between this and the black box was used for the local
background (see $\S$~\ref{X-ray props}). The white box displays
the size of the inset, and the inner white circle
(r = 26$^{\prime\prime}$) shows the region excluded due to the
central point source. Many point sources are seen in the
observation and are excluded from our analysis. An extended
source on the NE chip can be seen, that is unrelated to A689 and
is also excluded from our analysis.}}
\label{fig:clust}
\end{figure*}
\begin{figure}
\begin{center}
\includegraphics[width=6.0cm, angle=270]{10415inc_corew10lbinprof.ps}
\end{center}
\caption{\small{Exposure-corrected radial surface brightness profiles
(0.7-2.0 keV band) of the cluster (red) and the blank-sky
background (blue), with the green line representing the radius
beyond which no significant cluster emission is detected.}}
\label{fig:rprof}
\end{figure}
\subsection{Background Subtraction}
\label{background}
In order to take the background of the observation into account,
appropriate period E \emph{Chandra} blank-sky backgrounds were
obtained, processed identically to the cluster, and reprojected
onto the sky to match the cluster observation. We then followed a
method similar to that outlined in \cite{2006ApJ...640..691V}, in
order to improve the accuracy of the background by applying small
adjustments to the baseline model. Firstly we corrected for the rate
of charged particle events, which has a secular and short-term
variation by as much as 30\%. We renormalise the background in the
9.5--12 keV band, where the \emph{Chandra} effective area is nearly
zero and the observed flux is due entirely to the particle background
events. The renormalisation factor was derived by taking the ratio of
the observed count rate in the source and background observations
respectively. The normalised spectrum of the blank-sky background is
shown in Figure~\ref{fig:bgs}, over-plotted on the local background
for comparison. The spectra agree well in the 9.5--12.0 keV band, and
across the whole spectrum with only slight differences. In addition
to the particle background, the blank-sky and source observations
contain differing contributions from the soft X-ray background,
containing a mixture of the Galactic and geocoronal backgrounds,
significant at energies $\leq$1 keV. To take into account any
difference in this background component between the blank-sky and
source observations, the spectra were subtracted and residuals were
modeled in the 0.4-1 keV band using an APEC thermal plasma model
\citep{2001ApJ...556L..91S}. This model was included in the spectral
fitting for the cluster analysis. As can be seen in
Figure~\ref{fig:bgs} this component is very weak in the case of A689.
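The renormalisation factor is simply the ratio of the 9.5--12 keV count rates; the following Python sketch illustrates this with invented counts and exposures.
\begin{verbatim}
def blank_sky_norm(src_counts, src_exp, bkg_counts, bkg_exp):
    # ratio of 9.5-12 keV count rates, source over blank-sky background
    return (src_counts / src_exp) / (bkg_counts / bkg_exp)

# hypothetical numbers, for illustration only
print(blank_sky_norm(src_counts=1200, src_exp=13.862e3,
                     bkg_counts=45000, bkg_exp=450.0e3))
\end{verbatim}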
\begin{figure}
\begin{center}
\includegraphics[width=6.0cm, angle=270]{10415testbgspec_paper.ps}
\end{center}
\caption{\small{Comparison of the local (black) and blank-sky
(red) background spectra, normalised to match in the 9.5-12.0 keV band.}}
\label{fig:bgs}
\end{figure}
\subsection{X-ray Cluster Properties}
\label{X-ray props}
The analysis of the diffuse X-ray emission allows us to determine
the X-ray environment surrounding the cluster central point source.
Throughout this process we excluded the central point source
(Fig~\ref{fig:clust}; r$\leq$26$^{\prime\prime}$) and the associated
readout streaks to avoid contaminating the cluster emission.
To determine cluster properties we extract spectra out to a radius
chosen so that the cluster has the maximum possible signal-to-noise
(SNR). The net number of counts, corrected for background, is then
414 (with SNR = 15) and the extraction annulus is
26$^{\prime\prime}$$<$r$<$208$^{\prime\prime}$ (see
Fig~\ref{fig:clust}), centered on the cluster (at $\alpha$, $\delta$
= 08$^{\rm h}$ 37$^{\rm m}$ 24$^{\rm s}$.70, 14$^{\rm o}$ 58$^{\prime}$
20$^{\prime\prime}$.78). We fitted the extracted spectrum in {\scriptsize XSPEC} with an absorbed thermal plasma model (WABS$\times$APEC) and
subtracted the background described in $\S$~\ref{background}. We
obtain a temperature of 13.6$^{+13.2}_{-5.1}$ keV and a bolometric
luminosity of L$_{\rm bol}$=(10.2$\pm$2.9)$\times$$10^{44}$ ergs
s$^{-1}$. The measured temperature is far above what we would expect
from the luminosity. Figure~\ref{fig:LT} shows the
luminosity-temperature relation for a sample of 115 galaxy clusters
\citep{2008ApJS..174..117M}, along with the luminosity and temperature
derived for A689 from our values above (pink square). As the
\cite{2008ApJS..174..117M} sample of clusters covers a wide redshift
range, the luminosities of the clusters were corrected for the expected
self-similar evolution, given by L$_{\rm X}$$\times$E(z)$^{-1}$, where:
\begin{equation}
E(z) = \bigl[\Omega_{\rm M0} (1 + z)^3 + (1 - \Omega_{\rm M0} - \Omega_{\Lambda})(1 + z)^2
 + \Omega_{\Lambda}\bigr]^{1/2}.
\end{equation}
The same correction was also applied to the A689 data for the plot.
Our luminosity was derived within the same annular region as the
cluster temperature and extrapolated both inward and outward in radius
between (0-1)r$_{\rm 500}$ (where r$_{500}$ represents the radius at
which the density of the cluster is 500 times the critical density of
the Universe at that redshift) using parameters from a $\beta$-profile
fit to the surface brightness profile. This takes into account our
exclusion of the region near the central point source within the
cluster, and the extrapolation to r$_{\rm 500}$ is in order to compare
with the \cite{2008ApJS..174..117M} sample. Our r$_{\rm 500}$ value
was determined from the temperature and using the relation between
r$_{500}$ and T given in \cite{2006ApJ...640..691V}.
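For reference, the evolution correction can be evaluated as in the following Python sketch, using the cosmology adopted in this paper.
\begin{verbatim}
import numpy as np

def E(z, om=0.3, ol=0.7):
    # E(z) = H(z)/H0 for the adopted cosmology
    return np.sqrt(om * (1 + z)**3 + (1 - om - ol) * (1 + z)**2 + ol)

z = 0.279
L_bol = 10.2e44                   # erg/s, value quoted above
print(E(z), L_bol / E(z))         # evolution-corrected luminosity
\end{verbatim}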
The high temperature for A689 could be an indication that our observation
suffers from background flaring which would lead to an overestimate of
the cluster temperature. However, no evidence is found that the
spectrum of the blank-sky background differs from that of the local
background (Figure~\ref{fig:bgs}), or that the background of the
observation suffers from periods of flaring (Figure~\ref{fig:lc}).
To investigate the sensitivity of the temperature to our choice of
background, we repeated the analysis using a local background region
far from the cluster emission (see Fig~\ref{fig:clust}). We obtained
a temperature of 10.0$^{+13.8}_{-3.3}$ keV. This temperature is again
anomalous given the luminosity-temperature relation
(Fig~\ref{fig:LT}, green triangle). The luminosity is derived
using the same method as above, only this time using this new value
for the temperature to derive r$_{\rm 500}$. The spectrum with a
local background subtracted is shown in Figure~\ref{fig:clustfit}. We
see an excess of photons in the $\sim$6-9 keV band, which might have
an effect on the spectral fit. We perform the same fit using a local
background, however this time fitting in the 0.6-6.0 keV band to
ignore these excess high energy photons. We obtain a temperature of
9.4$^{+8.7}_{-2.9}$ keV.
Figure~\ref{fig:LT} shows that the temperature is high for the
luminosity, or the cluster is X-ray under-luminous. Before
considering a physical interpretation for the apparently high cluster
temperature, we investigated possible systematic effects from the
background subtraction. This was done by independently modeling the
background.
\begin{figure}
\begin{center}
\includegraphics[width=6.0cm, angle=270]{maughan_LT.ps}
\end{center}
\caption[]{\small{Luminosity-Temperature relation of a sample of 115
clusters of \cite{2008ApJS..174..117M}(blue open circles). The luminosities are measured within [0 $<$ r $<$ 1]r$_{500}$ and the temperatures within [0.15 $<$ r $<$ 1.0]r$_{500}$, in order to minimize the effect of cool cores on the derived cluster temperature. Our derived temperatures for A689 are overplotted for comparison (pink square, green triangle, red diamond) (see $\S$~\ref{X-ray props}).}}
\label{fig:LT}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=5.5cm, angle=270]{10415clust-localbg.ps}
\end{center}
\caption{\small{Spectrum of the cluster with the local background
subtracted fitted with an absorbed thermal plasma model, with a
reduced statistic of $\chi^2_{\rm \nu}$=1.45 ($\nu$=79).}}
\label{fig:clustfit}
\end{figure}
We model the background based upon a physical representation of its
components. Our model consists of a thermal plasma APEC model, two
power-law components and five Gaussian components. The APEC model
and one of the power-law components are convolved with files
describing the telescope and instrument response (the Auxiliary Response
File (ARF) and Redistribution Matrix File (RMF)), and are intended
to model soft X-ray thermal emission from the Galaxy
and unresolved X-ray background. The second power-law component is
not convolved with the ARF as it is used to describe the high energy
particle background, which does not vary with effective area. The five
Gaussian components are similarly not convolved, as these are used to model
line features in the spectrum caused by the fluorescence of material
in the telescope and focal plane caused by high energy particles.
As described below and summarized in Table~\ref{table:model}, the
parameters of this background model were fit to the blank-sky
background, local background or high energy cluster regions or taken
from the literature, in order to build the most reliable model
possible. We start by modeling blank-sky background in the region we
used to extract our cluster spectrum, with the model outlined
above. This allows us to place reasonable constraints on the line
energy and widths of each Gaussian component, and the slope of the
unconvolved power-law. We find line features at energies of 1.48, 1.75,
2.16, 7.48 and 8.29 keV. We also model the convolved power-law
component, fixing the slope at a value of 1.48 taken from
\cite{2006ApJ...645...95H}. The normalisation of this power-law
component will then be used throughout our modeling process, scaled
by area where necessary. A spectrum of the blank-sky background and
a fit using our model are shown in Figure~\ref{fig:blanksky}.
Next we further constrained model parameters by fitting our
high-energy Gaussian and unconvolved power-law components in the
5.0-9.0 keV band within the cluster region. At these energies,
the emission is dominated by particle background. We fix the line
energies and widths to those found in the blank-sky background and
fit for the normalisations. This fit finds a slope of the power-law
consistent with that found in the blank-sky background. We therefore
fix the slope of the power-law at $\Gamma$=0.0061, as found for the
blank-sky background, and fit for the normalisation. We finally fit
the low energy Gaussians and APEC normalisation model parameters in
a local background region far from the cluster emission. The high energy
Gaussians and unconvolved power-law components are frozen at the
values found in the blank-sky and cluster regions, with the
normalisations scaled by area. The convolved power-law
component is frozen at the values found in the blank-sky background
and the normalisation is scaled by area. The Gaussian features at 1.48, 1.75
and 2.16 keV are frozen at the energies and widths found in the
blank-sky background. The temperature of the APEC model is frozen at
0.177 keV \citep[taken from][]{2006ApJ...645...95H}. We note that our
APEC temperature is not well constrained by our data, but this is a weak
component. A spectrum of the local background and the corresponding
fit with the model are shown in Figure~\ref{fig:local}.
We now model the cluster with an absorbed thermal plasma
(WABS$\times$APEC) model, including the background model outlined
above. The normalisations of the background APEC component and the
Gaussians at energies 1.48, 1.75 and 2.16 keV were scaled by the ratio
of the areas from the local background region to the cluster region.
The normalisations of the fluorescent lines also vary with detector
position. To account for this effect in the low energy ($<$3 keV)
lines (where we must fit the normalisations in the off-axis local
background region), we measure the relative change in the
normalisations of each line between the local background and source
regions in the blank-sky background data. The normalisation of each
low energy Gaussian component in the fit to the cluster data
is scaled for the different detector position of the cluster region by the
relative change in normalisation determined above (in addition to the
geometrical scaling for size of the extraction region). The
unconvolved power-law and Gaussians at 7.48 and 8.29 keV are all
frozen at the values found in the 5.0-9.0 keV cluster
region fit. The normalisation of the convolved power-law is frozen at
the value found in the blank-sky background. All parameters of the
model used to describe the background are frozen in the corresponding
cluster fit, we also freeze the redshift at 0.279 and the abundance at
0.3. Our fit yields a temperature of 5.1$^{+2.2}_{-1.3}$ keV
($\chi^2_{\rm \nu}$=1.15 ($\nu$=79)). We measure a bolometric
luminosity of L$_{\rm bol}$ = 1.7$\times$10$^{\rm 44}$ erg s$^{\rm -1}$.
The result is shown in Figure~\ref{fig:LT} (red diamond). The
spectrum with the corresponding fit to the cluster including the
background model is shown in Figure~\ref{fig:cluster}.
\begin{table*}
\begin{tabular}{cc|c|c|c}
\hline\hline
Component & Represents & Parameter & Value & Where measured \\
\hline
\multicolumn{1}{|c|}{\multirow{2}{*}{Convolved power-law}} &
\multicolumn{1}{|c|}{\multirow{2}{*}{Unresolved X-ray bg}} &
\multicolumn{1}{c}{slope} & 1.48 &
\cite{2006ApJ...645...95H} \\ \cline{3-5}
& \multicolumn{1}{c}{} & \multicolumn{1}{c}{normalisation} &
4.11$\times$10$^{-6}$ & blank-sky bg \\ \cline{1-5}
\multicolumn{1}{|c|}{\multirow{2}{*}{Unconvolved power-law}} &
\multicolumn{1}{|c|}{\multirow{2}{*}{Particle bg}} &
\multicolumn{1}{c}{slope} & 0.061 & blank-sky bg \\ \cline{3-5}
& \multicolumn{1}{c}{} & \multicolumn{1}{c}{normalisation} &
0.015 & 5.0-9.0 keV cluster region \\ \cline{1-5}
\multicolumn{1}{|c|}{\multirow{3}{*}{Gaussian 1}} &
\multicolumn{1}{|c|}{\multirow{3}{*}{Al K$\alpha$ fluorescence}} &
\multicolumn{1}{c}{energy} & 1.48 keV & blank-sky bg \\ \cline{3-5}
& \multicolumn{1}{c}{} & \multicolumn{1}{c}{width} &
0.022 keV & blank-sky bg \\ \cline{3-5}
& \multicolumn{1}{c}{} & \multicolumn{1}{c}{normalisation} &
1.82$\times$10$^{-4}$ & local bg \\ \cline{1-5}
\multicolumn{1}{|c|}{\multirow{3}{*}{Gaussian 2}} &
\multicolumn{1}{|c|}{\multirow{3}{*}{Si K$\alpha$ fluorescence}} &
\multicolumn{1}{c}{energy} & 1.75 keV & blank-sky bg \\ \cline{3-5}
& \multicolumn{1}{c}{} & \multicolumn{1}{c}{width} &
0.95 keV & blank-sky bg \\ \cline{3-5}
& \multicolumn{1}{c}{} & \multicolumn{1}{c}{normalisation} &
1.45$\times$10$^{-2}$ & local bg \\ \cline{1-5}
\multicolumn{1}{|c|}{\multirow{3}{*}{Gaussian 3}} &
\multicolumn{1}{|c|}{\multirow{3}{*}{Au M$\alpha\beta$ fluorescence}} &
\multicolumn{1}{c}{energy} & 2.16 keV & blank-sky bg \\ \cline{3-5}
& \multicolumn{1}{c}{} & \multicolumn{1}{c}{width} &
0.045 keV & blank-sky bg \\ \cline{3-5}
& \multicolumn{1}{c}{} & \multicolumn{1}{c}{normalisation} &
2.24$\times$10$^{-3}$ & local bg \\ \cline{1-5}
\multicolumn{1}{|c|}{\multirow{3}{*}{Gaussian 4}} &
\multicolumn{1}{|c|}{\multirow{3}{*}{Ni K$\alpha$ fluorescence}} &
\multicolumn{1}{c}{energy} & 7.48 keV & blank-sky bg \\ \cline{3-5}
& \multicolumn{1}{c}{} & \multicolumn{1}{c}{width} &
0.022 keV & blank-sky bg \\ \cline{3-5}
& \multicolumn{1}{c}{} & \multicolumn{1}{c}{normalisation} &
5.73$\times$10$^{-3}$ & 5.0-9.0 keV cluster region \\ \cline{1-5}
\multicolumn{1}{|c|}{\multirow{3}{*}{Gaussian 5}} &
\multicolumn{1}{|c|}{\multirow{3}{*}{Cu + Ni fluorescence}} &
\multicolumn{1}{c}{energy} & 8.29 keV & blank-sky bg \\ \cline{3-5}
& \multicolumn{1}{c}{} & \multicolumn{1}{c}{width} &
0.168 keV & blank-sky bg \\ \cline{3-5}
& \multicolumn{1}{c}{} & \multicolumn{1}{c}{normalisation} &
4.56$\times$10$^{-3}$ & 5.0-9.0 keV cluster region \\ \cline{1-5}
\multicolumn{1}{|c|}{\multirow{4}{*}{APEC}} &
\multicolumn{1}{|c|}{\multirow{4}{*}{Galactic foreground emission}} &
\multicolumn{1}{c}{kT} & 0.177 keV & \cite{2006ApJ...645...95H} \\ \cline{3-5}
& \multicolumn{1}{c}{} & \multicolumn{1}{c}{abundance} &
1.0 & solar abundance \\ \cline{3-5}
& \multicolumn{1}{c}{} & \multicolumn{1}{c}{redshift} &
0 & Galactic \\ \cline{3-5}
& \multicolumn{1}{c}{} & \multicolumn{1}{c}{normalisation} &
2.9$\times$10$^{-5}$ & local bg \\ \cline{1-5}
\end{tabular}
\caption{\small{Table of the individual model components used to
represent the background, with a brief interpretation of each component,
individual component parameters, parameter values and where each
value is calculated. All normalisations are measured in
photons/keV/cm$^2$/s at 1 keV, and scaled to the cluster region.
Blank-sky background parameters were derived in the same region, while the
local background parameters were measured in an area 2.45 times that of
the cluster region, with the normalisations therefore reduced by this
factor. The low energy Gaussians were also corrected for
the dependence of the normalisation on detector position (see text).}}
\label{table:model}
\end{table*}
\begin{figure}
\begin{center}
\includegraphics[width=5.5cm, angle=270]{10415blankskyfit.ps}
\end{center}
\caption{\small{Spectrum of the blank-sky background in the source
region and the corresponding fit (see Sect. 3.2),
$\chi^2_{\rm \nu}$=1.17 ($\nu$=585).}}
\label{fig:blanksky}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=5.5cm, angle=270]{10415localbg.ps}
\end{center}
\caption{\small{Spectrum of the local background and corresponding
fit (see Sect 3.2). $\chi^2_{\rm \nu}$=0.79 ($\nu$=120).}}
\label{fig:local}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=5.5cm, angle=270]{10415sourcefit.ps}
\end{center}
\caption{\small{Spectrum of the cluster plus background fit with an
absorbed thermal plasma model including a background model (see Sect 3.2).
$\chi^2_{\rm \nu}$=1.15 ($\nu$=79).}}
\label{fig:cluster}
\end{figure}
\smallskip
\section{The Central Point Source}
\label{point source}
The point source is displayed in Figure~\ref{fig:a689zoom}.
The presence of strong readout streaks indicates that the point source is
likely to be affected by pile-up. The readout streaks occur as X-rays
from the source are received during the ACIS parallel frame transfer,
which provides 40$\mu$s exposure per frame in each pixel along the
streak. We detail our analysis of the point source and estimate of the
pileup fraction in the following section.
\begin{figure}
\begin{center}
\includegraphics[width=8.4cm]{a689_readout.epsi}
\end{center}
\caption{\small{\emph{Chandra} image of A689 showing the regions used for
extracting spectra of the readout streak (inner rectangles) and
the corresponding background regions for the readout streak (outer
rectangles).}}
\label{fig:a689zoom}
\end{figure}
\subsection{X-ray Analysis of the Point Source}
\label{xray point}
As a first test of the predicted pile-up, we compared the image of the
point source to the \emph{Chandra} Point Spread Function (PSF). We
made use of the {\scriptsize CIAO} tool {\scriptsize MKPSF} to create an image of the on-axis PSF, following the method outlined in \cite{2003A&A...407..503D}. This consists of merging seven different monochromatic PSFs chosen and weighted
on the basis of the source energy spectrum between 0.3 and 8 keV.
This method can be summarized as follows:
\begin{enumerate}
\item We first extract the energy distributions of the photons from
a circular region centered on the peak brightness of the source with a
radius of 2.5$^{\prime\prime}$.
\item We choose seven discrete energy values at which to create each
PSF, with the number of counts at each energy corresponding to that
PSF's `weight'.
\item Using {\scriptsize MKPSF} we create seven monochromatic PSFs at the position of the point source on the detector and co-add them. Each PSF is
weighted by its relative normalisation (found in the previous step).
\end{enumerate}
\begin{figure}
\begin{center}
\includegraphics[width=6.0cm, angle=270]{10415src_psf_comp.epsi}
\end{center}
\caption{\small{Surface brightness profiles of the point source (red
squares) and the composite PSF (blue circles) in the 0.3-8.0 keV
band, normalised to agree in the 2.46-4.92 arcsec radii region.}}
\label{fig:PSFcomp}
\end{figure}
Once we obtained the composite PSF, we normalised it to the counts
within an annulus (inner and outer radius 2.46 and 4.92 arcsec
respectively) in order to avoid any effects of pile-up in the central
region. We then compare surface brightness profiles of the point
source and PSF to look for evidence of pile-up in the core of the
point source image. Figure~\ref{fig:PSFcomp} shows the radial surface
brightness profiles of the point source (red) and the composite PSF
(blue). We find that the point source and PSF agree well in the wings
of the PSF ($>$ 2.46$^{\prime\prime}$) and that there is an excess in the
PSF surface brightness above that of the point source at the peak of
the source. This is consistent with the flattening of the source
profile relative to that of the PSF due to pileup in the core.
The PSF then gives an estimate of the non-piled-up count rate. Given
this count rate we use
PIMMS\footnote{http://cxc.harvard.edu/toolkit/pimms.jsp}
to estimate a pile-up fraction of 65\%.
We also compute a second estimate of the core count rate using the
ACIS readout streak. By fitting a model to the spectrum extracted
from the readout streak we can compare this to a spectrum extracted in
the core and fitted using a pileup model. We follow the method outlined
in \cite{2005ApJS..156...13M} in order to correct the exposure time of
the readout streak spectrum. For an observation of live time
t$_{live}$, a section of the readout streak that is $\theta_s$ arcsec
long accumulates an exposure time of
t$_{\rm s}$ = 4 $\times$10$^{-5}$t$_{\rm live}$$\theta_{\rm s}$/(t$_f$$\theta_{\rm x}$) s, where
$\theta_{\rm x}$ = 0.492$^{\prime\prime}$ is the angular size of an ACIS
pixel. For our observation, t$_{\rm live}$ = 13.862 ks and the frame time
parameter t$_{\rm f}$ = 3.1 s, giving t$_{\rm s}$ = 165s in a streak segment that is
454$^{\prime\prime}$ long. Figure~\ref{fig:a689zoom} shows the
regions used for extracting spectra of the readout and an adjacent
background region. This choice of background region ensures the
cluster emission is subtracted from the readout streak spectrum.
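Plugging in the numbers for this observation reproduces the quoted streak exposure:
\begin{verbatim}
def streak_exposure(t_live, theta_s, t_frame=3.1, theta_pix=0.492):
    # t_s = 4e-5 * t_live * theta_s / (t_f * theta_x), angles in arcsec
    return 4.0e-5 * t_live * theta_s / (t_frame * theta_pix)

print(streak_exposure(t_live=13.862e3, theta_s=454.0))   # ~165 s
\end{verbatim}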
Using the {\scriptsize SHERPA} package \citep{SciPyProceedings_51} we fitted an absorbed 1-D power-law model (WABS$\times$POWER-LAW) to the
extracted spectrum of the readout streak. We obtain fit parameters
for the photon index = 2.33$^{+0.34}_{-0.30}$ and a normalisation of
0.0033$\pm0.0005$ photons keV$^{-1}$ cm$^{-2}$ s$^{-1}$
($\chi^2_{\rm \nu}$=0.34 ($\nu$=63)).
We then extract a spectrum of the point source in a region of radius
5$^{\prime\prime}$, and subtracted the same background as for the
readout streak. We once again fitted an absorbed power-law model,
including this time a pileup model (jdpileup). We obtain fit
parameters for the photon index = 2.22$^{+0.05}_{-0.04}$, consistent
with that found from the readout streak, and a normalisation of
0.0015$\pm$0.0001 photons keV$^{-1}$ cm$^{-2}$ s$^{-1}$
($\chi^2_{\rm \nu}$=1.6 ($\nu$=119)). The extracted spectrum and
corresponding fit is shown in Figure~\ref{fig:pileup}. The pileup
fraction is estimated to be 60\%, which is consistent with that
found using the PSF count rate. The normalisation found in the fit
can be converted to an X-ray flux density for this source,
f$_{\rm 1~keV}$=0.99$\pm$0.07 $\mu$Jy.
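The conversion of the fitted normalisation into the quoted 1 keV flux density can be sketched as follows.
\begin{verbatim}
KEV_TO_ERG = 1.602e-9     # erg per keV
KEV_TO_HZ  = 2.418e17     # Hz per keV
MICROJY    = 1.0e-29      # erg/cm^2/s/Hz per microjansky

def norm_to_flux_density(norm_at_1keV):
    # photons/keV/cm^2/s at 1 keV  ->  microjansky at 1 keV
    f_nu = norm_at_1keV * 1.0 * KEV_TO_ERG / KEV_TO_HZ
    return f_nu / MICROJY

print(norm_to_flux_density(0.0015))   # ~0.99 microJy
\end{verbatim}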
\begin{figure}
\begin{center}
\includegraphics[width=7.0cm, angle=270]{a689pileupsrc.ps} \\
\end{center}
\caption{\small{Spectrum of the point source, fitted with an absorbed
power-law, including a pileup model. $\chi^2_{\rm \nu}$=1.6 ($\nu$=119).}}
\label{fig:pileup}
\end{figure}
\subsection{Optical Observations}
\label{optical point}
Abell 689 was observed with the Hubble Space Telescope (HST) with the
F606W filter ($\sim$V-band) on January 20, 2008. Marking the
position of the peak X-ray emission of the point source on the HST
image, we find that this corresponds to an object resembling an active
nucleus in a relatively bright galaxy
(Fig~\ref{fig:hst}). We searched the SDSS DR7 archive for information
on the spectral properties of this object. At the coordinates of the
X-ray point source (SDSS coordinates of $\alpha$,$\delta$ =
08$^{\rm h}$ 37$^{\rm m}$ 24$^{\rm s}$.7, 14$^{\rm o}$ 58$^{\prime}$ 19$^{\prime\prime}$.8) we find a
blue object with a corresponding relatively featureless spectrum
(Fig~\ref{fig:sdss}). From SDSS we quote an r-band magnitude of
17.18. The spectrum resembles that of a BL Lac object,
a type of AGN orientated such that the relativistic jet is closely
aligned to the line of sight. From the H and K lines in the spectrum
(dotted green lines in Fig~\ref{fig:sdss}), thought to be from the
host galaxy, the redshift is determined to be z=0.279, consistent with
the redshift assigned to the cluster \citep{1995MNRAS.274.1071C}.
Using the HST observation we measure an optical flux for the BL Lac of
f$_{\rm 5997\AA}$=112mJy.
\begin{figure}
\begin{center}
\includegraphics[width=8.5cm]{a689_hst_zoom.ps}
\end{center}
\caption{\small{HST image of the point source with a cross marking the
position of the peak of the X-ray emission.}}
\label{fig:hst}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=8.5cm]{a689_spectrum.eps}
\end{center}
\caption{\small{Spectrum of the BL Lac from SDSS (SDSS
J083724.71+145819.8). The vertical green lines represent the H
and K lines thought to be from the host galaxy and the green line
at the bottom represents the error spectrum.}}
\label{fig:sdss}
\end{figure}
\subsection{Radio Observations}
\label{radio point}
Archival radio observations of Abell 689 are available, allowing us to
determine the radio spectral index, $\alpha$. We obtained 8.46 GHz
data taken in March 1998 from the VLA archive which we mapped using AIPS.
The source is unresolved at 8.46 GHz, and we measure a flux density
of 18.6$\pm$0.27 mJy. We also obtained a 1.4 GHz radio image from
the FIRST survey, from which we measure a flux of 62.7$\pm$0.25 mJy.
From these data we obtain a spectral index of
$\alpha_{\rm r}$$\sim$0.67$\pm$0.01. We note that a slight angular
extension in the FIRST survey suggests that the 8.46 GHz image may be
missing some flux density.
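The spectral index follows directly from the two measured flux densities:
\begin{verbatim}
import numpy as np

def spectral_index(s1, nu1, s2, nu2):
    # two-point index with S proportional to nu**(-alpha)
    return np.log(s1 / s2) / np.log(nu2 / nu1)

# FIRST (1.4 GHz) and VLA archival (8.46 GHz) flux densities in mJy
print(spectral_index(62.7, 1.4e9, 18.6, 8.46e9))   # ~0.67
\end{verbatim}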
\section{Discussion}
\label{discussion}
\subsection{The ICM properties of Abell 689}
\begin{table*}
\begin{tabular}{c|ccccc}
\hline\hline
& r$_{500}$ & T$_{X}$ & L$_{\rm X,bol}$ & & \\
Background Subtraction & (arcsec) & (keV) & ($\times$10$^{44}$ erg
s$^{-1}$) & reduced $\chi^2$ & degrees of freedom \\
\hline
Blank-sky & 1130 & 13.6$^{+13.2}_{-5.1}$ & 10.1$\pm$2.9 & 1.19 & 74 \\
Local & 696 & 10.0$^{+13.8}_{-3.3}$ & 6.2$\pm$1.4 & 1.45 & 79 \\
Physically motivated model$^{\dagger}$ & 266 & 5.1$^{+2.2}_{-1.3}$ &
3.3$\pm$0.3 & 1.15 & 79 \\
\hline
\end{tabular}
\caption{\small{Table listing the derived spectral properties of A689
for each of the background treatments we employ in our
analysis. $^{\dagger}$ indicates our favored method for determining
the cluster properties of A689.}}
\label{tab:spec_results}
\end{table*}
We have derived the ICM properties of A689 using three methods of
background subtraction. Table~\ref{tab:spec_results} shows the
spectral properties of the ICM for each of the background treatments
we employ with our favored method coming from a physically motivated
model of the background components. Through detailed modeling of the
local and blank-sky backgrounds for A689, and including this
background model in a spectral fit to the cluster, we have determined
a temperature and luminosity for A689 of T = 5.1$^{+2.2}_{-1.3}$ keV
and L$_{\rm bol}$ = 3.3$\times$10$^{44}$ erg s$^{-1}$. Plotting these
values on the luminosity-temperature plot (Fig~\ref{fig:LT}) and
comparing to the large X-ray sample of \cite{2008ApJS..174..117M},
we find that A689 is observed to be at the edge of the observed
scatter in the luminosity-temperature relation. This suggests that
either the temperature of the ICM has been enhanced or the luminosity has
been suppressed.
It has been shown that systems that host a radio source are likely to
have higher temperatures at a given luminosity.
\cite{2005MNRAS.357..279C} showed that this is the case for radio-loud
galaxy groups, and \cite{2007MNRAS.379..260M} showed that for clusters
that host a radio source there is a departure from the typical
luminosity-temperature relation, particularly in the case of low mass
systems. \cite{2005MNRAS.357..279C} also showed, through analysis of
{\em Chandra} and {\em XMM-Newton} observations, evidence for
radio-source interaction with the surrounding gas for many of the
radio-loud groups. A similar process could be occurring within A689,
which has a confirmed radio source at the center of the cluster. We
note that more detailed X-ray and radio observations would be
needed in order to test any interaction between the BL Lac and ICM.
The other possible explanation for the offset of radio-loud systems
from the luminosity-temperature relation is the suppression of the
luminosity, which could be caused by displacement of large amounts of gas
due to the interaction of the radio source with the ICM. The
interaction of the radio source with the ICM will cause an increase in the
entropy of the local ICM. This higher entropy gas will be displaced
so that it is in entropy equilibrium with the surrounding gas.
However, \cite{2007MNRAS.379..260M} showed that there is a correlation
between the radio luminosity and the heat input required to produce
the observed temperature increment in clusters hosting radio sources.
They note that this correlation favors an enhanced temperature
scenario caused by the radio galaxy induced heating.
\subsection{Comparison with the BCS}
A689 was noted in the BCS as having a significant fraction of its
flux coming from embedded point sources. We have confirmed that A689
contains a point source at the center of the cluster and that it is a
BL Lac object. We stated that the measured X-ray luminosity for A689
as quoted in the BCS (L$_{\rm 0.1 - 2.4 keV}$ = 3$\times$10$^{45}$ erg
s$^{-1}$, see $\S$~\ref{intro}), is the third brightest in the BCS.
From our follow-up observation with \emph{Chandra} we calculate the
luminosity and compare with the BCS value. The luminosities in the
BCS are calculated within a standard radius of 1.43~Mpc, which at the
redshift of A689 corresponds to a radius of 338$^{\prime\prime}$. We
therefore employ the same method as in Sect~\ref{X-ray props} and
integrate under a beta model fitted to the derived surface brightness
profile and extrapolate inward and outward from
26-208$^{\prime\prime}$ to 0-338$^{\prime\prime}$. The unabsorbed
flux in the 0.1 - 2.4 keV band (observed frame) was f$_{\rm 0.1-2.4, keV}$
= 5.8$\times$10$^{-13}$ erg s$^{-1}$ cm$^{-2}$. After k-correction the
X-ray luminosity in the 0.1 - 2.4 keV band (rest frame) was
L$_{\rm 0.1 - 2.4, keV}$ = 2.8$\times$10$^{44}$ erg s$^{-1}$. Note
that we assume an H$_{0}$ of 50 for consistency with the BCS catalog.
This value is $\sim$10 times lower than that quoted in the BCS, and
A689 is now ranked 110th out of 201 in luminosity.
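The rest-frame luminosity quoted above can be checked with a short calculation. The flat $\Omega_{\rm M}=0.3$, $\Omega_{\Lambda}=0.7$ cosmology with H$_0$ = 50 km s$^{-1}$ Mpc$^{-1}$ and the neglect of the (small) k-correction are assumptions of this sketch.
\begin{verbatim}
import numpy as np
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=50.0, Om0=0.3)    # H0 = 50 as in the BCS comparison
d_l = cosmo.luminosity_distance(0.279).to(u.cm).value
flux = 5.8e-13                             # erg/s/cm^2, 0.1-2.4 keV
print(4.0 * np.pi * d_l**2 * flux)         # ~2.8e44 erg/s
\end{verbatim}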
\subsection{Classifying the BL Lac}
BL Lac objects may be split into `High-energy peak BL
Lacs' (HBL) and `Low-energy peak BL Lacs' (LBL), for objects which
emit most of their synchrotron power at high (UV--soft-X-ray) or
low (far-IR, near-IR) frequencies respectively
\citep{1995MNRAS.277.1477P}. HBL and LBL objects
have radio-to-X-ray spectral indices of $\alpha_{rx}$$\le$0.75 and
$\alpha_{rx}$$\ge$0.75, respectively. We calculate a radio-to-X-ray
spectral index for the BL Lac in A689 of $\alpha_{rx}$=0.58$\pm$0.04.
From this value we classify our BL Lac as an HBL type. We also
compared our BL Lac with those of \cite{1998MNRAS.299..433F}, who
investigated the properties of large samples of BL Lacs at radio to
$\gamma$-ray wavelengths. Our value of $\alpha_{rx}$=0.58 falls into
the region 0.35$\le$$\alpha_{rx}$$\le$0.7, dominated by X-ray selected
BL Lacs (XBL). This is consistent with the X-ray selection of this
cluster. Using the measured HST flux, we also calculate
$\alpha_{ro}$=0.50 and $\alpha_{ox}$=0.77. These values are not
atypical for BL Lac objects \citep[Figure 7 in][]{1999ApJ...516..163W}.
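A sketch of the broad-band index calculation is given below. The 5 GHz--1 keV convention for $\alpha_{rx}$ and the interpolation of the 5 GHz flux density with $\alpha_{\rm r}=0.67$ are assumptions of this sketch; they reproduce the quoted value to within rounding.
\begin{verbatim}
import numpy as np

def two_point_index(s_low, nu_low, s_high, nu_high):
    # S proportional to nu**(-alpha)
    return -np.log(s_high / s_low) / np.log(nu_high / nu_low)

# assumed 5 GHz flux density, interpolated from 1.4 and 8.46 GHz
s_5ghz = 62.7e-3 * (5.0 / 1.4) ** (-0.67)    # Jy
s_1kev = 0.99e-6                             # Jy, from the pileup fit
print(two_point_index(s_5ghz, 5.0e9, s_1kev, 2.418e17))   # ~0.58
\end{verbatim}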
\cite{2003A&A...407..503D} tried to determine whether HBLs and LBLs
were characterized by different environments. They found that, of
5 sources exhibiting diffuse X-ray emission, 4 were HBLs and 1
was an LBL. The BL Lac in A689 continues this trend, as it appears
to be an HBL embedded within a cluster environment.
\subsection{Evidence for Inverse-Compton Emission}
Our initial analysis of the extended emission in A689 yielded temperature
estimates that were significantly higher than expected based upon its
X-ray luminosity. We also found evidence for a hard excess of X-ray
photons in the 6.0-9.0 keV band of the cluster spectrum. Before
assigning a physical cause we must be careful to eliminate systematic
effects in the background subtraction, as underestimating the particle
background will give a hard excess. This is unlikely to be the case
here as a high temperature was obtained when either a blank-sky or a
local background was used.
When using our background model instead of subtracting a background
spectrum, the cluster temperature was more consistent with its
luminosity (Fig~\ref{fig:LT}). This appears to be due to the
unconvolved power law fitting a hard excess in the cluster region.
In order to assess what impact the high-energy particle component has
on the cluster temperature, we varied the normalisation of the
unconvolved power-law component of our background model (sect. 3.2)
within its 1$\sigma$ errors. The temperature ranged from 3.86 to 8.51
keV. Thus small changes in the particle background have a significant
influence on the cluster temperature. Our power-law component was
derived within the cluster region in the 5.0-9.0 keV band, so our
background model may be removing a physical hard X-ray component
associated with the cluster. We assess the possible properties of
such a component by re-deriving the normalisation of the unconvolved
power-law component in the local background region, scaling this value
by the ratio of the areas, and using this value for the unconvolved
power-law normalisation in the background model for our cluster fit.
We find a power-law normalisation of 0.0129
photons keV$^{-1}$ cm$^{-2}$ s$^{-1}$ (as opposed to 0.0149
photons keV$^{-1}$ cm$^{-2}$ s$^{-1}$ in the cluster region). Using this value and
re-fitting (Fig~\ref{fig:plcomp}), we find a cluster temperature of
14.3$^{+15.6}_{-5.4}$ keV. This suggests that the hard component is
spatially associated with the cluster.
\begin{figure}
\begin{center}
\includegraphics[width=5.5cm, angle=270]{a689_model_plcomp.ps}
\end{center}
\caption{\small{Spectrum of the cluster including our physically
motivated model, with the normalisation of the unconvolved
power-law component derived in the local background region.
$\chi^2_{\rm \nu}$=1.17 ($\nu$=79)}}
\label{fig:plcomp}
\end{figure}
\cite{2007ApJ...668..796B} find a similar excess of X-ray emission in
the cluster A3112, which is known to have a central AGN. It is argued
that this excess may be due to emission of a non-thermal component.
Relativistic electrons in the intergalactic medium will cause CMB
photons to scatter into the X-ray band (inverse Compton scattering).
The same process could occur in A689, with the relativistic particles
responsible for the inverse Compton scattering provided by the jets of
the BL Lac.
In order to test the assumption of inverse-Compton emission, we add a
convolved power-law component with $\alpha$=1.5 (appropriate for
modeling IC emission from aged electrons). We fit for the
normalisation of this added power-law component and freeze all other
parameters of the background and cluster model. We note that we use
the normalisation of the unconvolved power-law found in the local
background region as found above (a value of 0.0129). When fit, the
$\chi$$^2$ increases slightly but the fit is still acceptable at the
95\% confidence level. From the additional power-law component we measure
a 1 keV flux density of $\sim$7 nJy. If the X-ray emission is from
scattering of the CMB by an aged population of electrons of power-law
number index 4.0, we can determine what the implied synchrotron
emission would be at 1.4 GHz, given plausible magnetic fields.
Clusters typically have magnetic fields of a few $\mu$G
\citep{2002ARA&A..40..319C}. The NVSS survey would have detected a
flux density of $\sim$10 mJy over the entire cluster. Adopting a
1.4 GHz flux density limit of 10 mJy, we require a magnetic field of
B $\leq$ 2 $\mu$G over the cluster to avoid over-predicting radio
emission. We also calculate a value for the minimum-energy magnetic
field, B$_{\rm E\_min}$, that would give a 1.4 GHz flux density of
S$_{\rm 1.4~GHz}$ = 10~mJy, which in this case is B$_{\rm E\_min}$ = 7.5
$\mu$G. For a flux density of S$_{\rm 1.4~GHz}$ = 5 mJy, we would require
a magnetic field of 1.6~$\mu$G and B$_{\rm E\_min}$ = 6.5~$\mu$G. Note that
B $\propto$ S$_{\rm 1.4~GHz}^{1/(1 + \alpha)}$ and B$_{\rm E\_min}$
$\propto$ S$_{\rm 1.4~GHz}^{1/(3 + \alpha)}$. We conclude that the
excess X-ray emission can be attributed to inverse-Compton scattering
without over-predicting the radio emission if the magnetic field
strength is in a range typical of clusters and within a factor of a
few of the minimum energy value.
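For concreteness, the flux-density scalings above can be checked with a few
lines of code; this is only a sketch of the arithmetic, starting from the
rounded values quoted for the 10 mJy case, so the output differs at the
0.1~$\mu$G level from the quoted 1.6 and 6.5~$\mu$G.
\begin{verbatim}
# B scales as S^(1/(1+alpha)) and B_Emin as S^(1/(3+alpha)), alpha = 1.5
alpha = 1.5
B_10, Bmin_10 = 2.0, 7.5       # microgauss at S_1.4GHz = 10 mJy
ratio = 5.0 / 10.0             # rescale to S_1.4GHz = 5 mJy

B_5    = B_10    * ratio ** (1.0 / (1.0 + alpha))
Bmin_5 = Bmin_10 * ratio ** (1.0 / (3.0 + alpha))
print(round(B_5, 1), round(Bmin_5, 1))   # ~1.5 and ~6.4 microgauss
\end{verbatim}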
\section{Conclusions}
\label{conclusions}
We have used a 14 ks \emph{Chandra} observation of the galaxy cluster A689 in
order to determine the nature of the cluster's point source
contamination and to analyze the cluster properties excluding the
central point source. Our main conclusions are as follows.
\newcounter{qcounter}
\begin{list}{ \arabic{qcounter}.~}{\usecounter{qcounter}}
\item Using background subtraction of both local and blank-sky
backgrounds, we obtain temperatures which are high relative to the
luminosity-temperature relation.
\item We construct a physically motivated model for the background and
include this model in a fit to the cluster spectrum. If the
particle background in the cluster is allowed to exceed that in the
local and blank-sky backgrounds we obtain a temperature
of 5.1$^{+2.2}_{-1.3}$ keV. However, there is no reason for there to
be a higher particle rate in the specific region of the CCD in which
the cluster lies. A hard excess needed to bring the temperature to
a reasonable value must have a different origin.
\item We confirm the presence of a point source within A689 as
suspected in the BCS. When excluding the point source and using
our derived background model we find a luminosity of
L$_{\rm 0.1 - 2.4, keV}$ = 2.8$\times$10$^{44}$ erg s$^{-1}$, a
value $\sim$10 times lower than quoted in the BCS.
\item From the X-ray analysis of the point source we find a ``flat-topped''
point source with a pileup fraction of $\simeq$ 60\%.
\item Optical observations of the cluster from SDSS and HST lead us
to conclude that the point source is a BL Lac type AGN.
\item We classify the BL Lac as a `High-energy peak BL Lac' with
$\alpha$$_{rx}$=0.58$\pm$0.04.
\item We interpret the hard X-ray excess needed to bring the cluster
temperature to a reasonable value as inverse-Compton emission from
aged electrons that may have been transported into the cluster from
the BL Lac.
\end{list}
We have shown here not only the importance of resolving and excluding
point sources in cluster observations, but also the effect these point
sources can have when determining the ICM properties of galaxy clusters.
The detailed analysis we have performed here may not however be
suitable for all clusters as it is unclear whether this analysis can
be performed at higher redshifts. Separating the point source and
cluster emissions becomes increasingly difficult at high redshifts;
however, Chandra has proved capable of resolving point source emission
in clusters to z$\sim$2 \citep[e.g.][]{2009A&A...507..147A}.
Resolving point source and cluster emission out to high redshift
becomes more important with redshift due to the evolution of
the number density of point sources within clusters
\citep{2009ApJ...694.1309G}. The area used to model the background
components associated with the high-energy cluster regions will
decrease spatially with redshift, and separating the thermal and
inverse Compton emissions at higher redshift will become increasingly
difficult \citep{2003MNRAS.341..729F}, due to the increase in the
energy density of the CMB with redshift.
\section*{Acknowledgments}
We thank J.Price for useful discussions regarding the SDSS and Hubble
data. We thank Ewan O'Sullivan and Dominique Sluse for useful
discussions on the nature of the point source. We thank the anonymous
referee for valuable comments and suggestions. PG also acknowledges
support from the UK Science and Technology Facilities Council.
\section{Introduction}
\label{sec:intro}
Young L dwarfs are often noted to be redder than their field-age counterparts \citep{kirkpatrick2008, cruz2009, faherty2013, faherty2016, liu2016}. This is partly due to their lower surface gravities, which lead to lower pressures in their atmospheres, and hence reduced collision-induced absorption (CIA) by H$_2$ \citep{lin69}. Further, lower surface gravities can lead to higher altitude clouds, leading to less efficient gravitational settling of condensate particles (e.g., \citealt{madhu2011, helling2014}). Red near-infrared colors have been efficiently utilized to characterize and discover new young brown dwarfs and planetary-mass objects (e.g., \citealt{kellogg2015, schneider2017}). There also exists a population of red L dwarfs that do not have obvious signs of youth (e.g., \citealt{looper2008, kirkpatrick2010, marocco2014}). While the exact reasons for the red colors of these relatively high-gravity objects are not entirely clear, their spectra have been well-reproduced by the presence of micron or submicron-sized grains in their upper atmospheres \citep{marocco2014, hiranaka2016, charnay2018}. This high-altitude dust suppresses emission at shorter wavelengths much more efficiently than longer wavelengths, leading to significantly reddened spectra compared to ``normal'' brown dwarfs. There is evidence that the strength of silicate absorption features in the mid-infrared correlates with the near-infrared colors of L dwarfs \citep{burgasser2008,suarez2022}, indicating that variations in silicate cloud thickness also play a role. Further, viewing angle \citep{vos2017} and variability \citep{ashraf2022} have been shown to be related to the colors of substellar objects. There is also evidence that convective instabilities can produce effects similar to those of clouds in young red L dwarfs \citep{tremblin2017}. In any case, young red L dwarfs and old reddened L dwarfs have proven to be compelling laboratories for the study of low temperature substellar atmospheres.
The vast majority of the current population of directly-imaged planetary-mass companions are also young and have effective temperatures, masses, and radii similar to those of young L dwarfs, as well as similar observed properties, including unusually red near-infrared colors. Examples include 2M1207b \citep{chauvin2004, chauvin2005, patience2010}, HD 206893B \citep{milli2017, delorme2017, krammerer2021, meshkat2021, ward2021}, VHS J125601.92$-$125723.9B \citep{gauza2015}, 2MASS J22362452$+$4751425b \citep{bowler2017}, BD$+$60 1417B \citep{faherty2021}, HR8799bcd \citep{marois2008}, and HD 203030B \citep{metchev2006}. Young, red L dwarfs in the field provide an opportunity to study the physical properties of giant exoplanet-like atmospheres without the technical challenge of blocking host star light.
In this article, we present the discovery of CWISE J050626.96$+$073842.4 (CWISE J0506$+$0738), an exceptionally red brown dwarf discovered as part of the Backyard Worlds: Planet 9 (BYW) citizen science project \citep{kuchner2017}. We detail its discovery in Section \ref{sec:discovery}, present Keck/NIRES spectroscopic follow-up observations in Section \ref{sec:obs}, analyze these data in Section \ref{sec:anal}, and discuss CWISE J0506$+$0738 in the context of other red brown dwarfs in Section \ref{sec:discussion}.
\section{Discovery of CWISE 0506+0738}
\label{sec:discovery}
CWISE J0506$+$0738 was submitted as an object of interest to the BYW project by citizen scientists Austin Rothermich, Arttu Sainio, Sam Goodman, Dan Caselden, and Martin Kabatnik because it had notable motion amongst epochs of WISE observations. BYW uses unWISE images \citep{lang2014, meisner2018} covering the 2010--2016 time frame and is typically sensitive to objects with proper motions $\gtrsim$ 0\farcs05--0\farcs1 yr$^{-1}$. As part of the initial investigation to evaluate whether or not CWISE J0506$+$0738 was a newly discovered substellar object, we gathered available photometry from the Two Micron All-Sky Survey (2MASS) reject catalog \citep{skrutskie2006, tmass2006}, the United Kingdom Infrared Telescope (UKIRT) Hemisphere Survey DR1 (UHS; \citealt{dye2018}), and the CatWISE 2020 main catalog \citep{marocco2021}, and determined a photometric spectral type of $\sim$L7.5 using the method described in \cite{schneider2016a}. It was noted during the initial evaluation of this object that its $J-K$ color, using UHS $J$- and 2MASS $K$-band photometry, was exceptionally red ($J-K$ = 3.17$\pm$0.21 mag), more than half a magnitude redder than the reddest known free-floating L dwarf, PSO J318.5338$-$22.8603 ($J-K$ = 2.64$\pm$0.02 mag; \citealt{liu2013}). An inspection of 2MASS, UHS, WISE, and Pan-STARRS DR2 \citep{magnier2020} images showed no sources of contamination, suggesting that the near-infrared colors accurately reflect the true spectral energy distribution of the source (Figure \ref{fig:finder}).
The astrometry and photometry of CWISE J0506$+$0738 were further analyzed using measurements from the UHS DR2 catalog, which will provide $K$-band photometry for much of the northern hemisphere (Bruursema et al.~in prep.). CWISE J0506$+$0738 was found to have a $K$-band magnitude of 15.513$\pm$0.022 mag, consistent with the previous 2MASS measurement but significantly more precise. This measurement results in a UHS $(J-K)_{\rm MKO}$ color of 3.24$\pm$0.10 mag, slightly redder but consistent with UHS and 2MASS photometry. We therefore considered this candidate a high-priority target for follow-up spectroscopic observations.
\begin{figure*}
\plotone{Figure1.pdf}
\caption{Images of CWISE J0506$+$0738 from 2MASS (upper left and center), UHS (bottom left and center), Pan-STARRS (upper right, three-color image with $g/i/y$ bands), and WISE (lower right, three-color image with $W1/W2/W3$ bands). The position of CWISE J0506$+$0738 as determined in the UHS $K$-band images is denoted by a red circle. Note that CWISE J0506$+$0738 is undetected at 2MASS $J$ and in the Pan-STARRS 3-color image, but clearly detected in the 2MASS $K$-band, UHS, and WISE images. The greenish hue of CWISE J0506$+$0738 in the WISE images shows that this object is significantly brighter at WISE channel W2 (4.6 $\mu$m) than WISE channel W1 (3.4 $\mu$m) or W3 (12 $\mu$m), typical of brown dwarfs with late-L or later spectral types.}
\label{fig:finder}
\end{figure*}
\begin{deluxetable}{lcc}
\label{tab:cwise0506}
\tablecaption{Properties of CWISE J050626.96$+$073842.4}
\tablehead{
\colhead{Parameter} & \colhead{Value} & \colhead{Ref.}}
\startdata
R.A. (\degr) (epoch=2022.7)\tablenotemark{a} & 76.6124377 & 1 \\
Dec. (\degr) (epoch=2022.7)\tablenotemark{a} & 7.6449299 & 1 \\
R.A. (\degr) (epoch=2017.8)\tablenotemark{a} & 76.6123885 & 2 \\
Dec. (\degr) (epoch=2017.8)\tablenotemark{a} & 7.6450716 & 2 \\
$\mu$$_{\alpha}$ (mas yr$^{-1}$) & 31.5$\pm$2.6 & 1\\
$\mu$$_{\delta}$ (mas yr$^{-1}$) & -82.7$\pm$2.7 & 1\\
$d$\tablenotemark{b} (pc) & 32$^{+4}_{-3}$ & 1 \\
RV (km s$^{-1}$) & +16.3$^{+8.8}_{-7.7}$ & 1 \\
$J_{\rm MKO}$ (mag) & 18.487$\pm$0.017 & 1 \\
$K_{\rm MKO}$ (mag) & 15.513$\pm$0.022 & 2 \\
W1 (mag) & 14.320$\pm$0.015 & 3 \\
W2 (mag) & 13.552$\pm$0.013 & 3 \\
Sp.~Type & L8$\gamma$--T0$\gamma$ & 1 \\
\enddata
\tablenotetext{a}{R.A. and Dec. values are given in the ICRS coordinate system.}
\tablenotetext{b}{Photometric distance estimate based on the UHS $K_{\rm MKO}$-band magnitude and the absolute magnitude-spectral type relation in \citealt{dupuy2012} (see Section \ref{sec:dist}).}
\tablerefs{ (1) This work; (2) UHS DR2 (\citealt{dye2018}, Bruursema et al.~in prep); (3) CatWISE 2020 \citep{marocco2021} }
\end{deluxetable}
\section{Observations}
\label{sec:obs}
\subsection{UKIRT/WFCAM}
\label{sec:ukirt}
In an effort to refine the astrometry and photometry of CWISE J0506$+$0738, we observed it with the $J_{\rm MKO}$ filter on the infrared Wide-Field Camera (WFCAM; \citealt{casali2007}) on UKIRT on 20 September 2022. Observations were performed using a 3 $\times$ 3 microstepping pattern, with the resulting 9 images interleaved \citep{dye2006} to provide improved sampling over that of a single WFCAM exposure. The microstepping sequence was repeated five times, resulting in 45 single exposures each lasting 20 seconds, for a total exposure time of 900 seconds. We re-registered the world coordinate system (WCS) of each interleaved frame using the Gaia DR3 catalog \citep{gaia2022}. Images were then combined using the \texttt{imstack} routine from the \texttt{CASUTOOLS} package\footnote{http://casu.ast.cam.ac.uk/surveys-projects/software-release} \citep{irwin2004}. The position and photometry of CWISE J0506$+$0738 were extracted using the \texttt{CASUTOOLS} \texttt{imcore} routine.
Combining the position of this $J$-band observation with the UHS $K$-band observation, we calculated proper motion components of $\mu$$_{\alpha}$ = 31.5$\pm$2.6 mas yr$^{-1}$ and $\mu$$_{\delta}$ = -82.7$\pm$2.7 mas yr$^{-1}$. CatWISE 2020 reports proper motions of $\mu$$_{\alpha}$ = 44.2$\pm$7.9 mas yr$^{-1}$ and $\mu$$_{\delta}$ = -97.5$\pm$8.4 mas yr$^{-1}$ (with offset corrections applied according to \citealt{marocco2021}). The proper motion calculated from our UKIRT observations is significantly more precise than the proper motion measurements from CatWISE 2020, and we adopt the former for our analysis.
We measure a $J_{\rm MKO}$-band magnitude of 18.487$\pm$0.017 mag from these observations, which is $>$2$\sigma$ brighter than the value from the UKIRT Hemisphere Survey (18.76$\pm$0.10 mag). To verify our measured photometry, we compared the photometry for other sources found to have similar magnitudes (18.4 $< J <$ 18.6 mag) in our images to UHS values. We found the median $J$-band difference for the 52 objects in this sample to be -0.03 mag, with a median absolute deviation of 0.07 mag, showing that differences as large as that measured for this object (0.27 mag) are relatively rare. The origin of the difference between these $J$-band measurements is unclear, though we note that variability may be a contributing factor, as young (and red) objects are often found to have larger amplitude variability than field-age objects with similar spectral types (e.g., \citealt{vos2022}). While this new $J$-band measurement results in bluer $(J-K)_{\rm MKO}$ = 2.97$\pm$0.03 mag and $J_{\rm MKO}-$W2 = 4.94$\pm$0.02 mag colors, they both remain significantly redder than those of any previously identified free-floating brown dwarf.
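The consistency check above reduces to a median and a median absolute deviation of the magnitude differences for the matched field sources; a minimal sketch is given below, where the array of differences is a placeholder rather than the actual matched WFCAM$-$UHS photometry.
\begin{verbatim}
import numpy as np

# Placeholder (WFCAM - UHS) J-band differences for matched field sources;
# the real comparison uses the 52 stars with 18.4 < J < 18.6 mag.
diff = np.array([-0.10, -0.02, 0.05, -0.08, 0.01, -0.04, 0.12, -0.03])

median_offset = np.median(diff)
mad = np.median(np.abs(diff - median_offset))
print(median_offset, mad)
\end{verbatim}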
All UKIRT photometry and astrometry for CWISE J0506$+$0738 are provided in Table \ref{tab:cwise0506}.
\subsection{Keck/NIRES}
CWISE J0506$+$0738 was observed with the Near-Infrared Echellette Spectrometer (NIRES; \citealt{wilson2004}) mounted on the Keck II telescope on UT 19 January 2022. NIRES provides a resolution $\lambda/\Delta\lambda$ $\approx$ 2700 over five cross-dispersed orders spanning a wavelength range of 0.9--2.45 $\mu$m. CWISE J0506$+$0738 was observed in four 250 second exposures nodded in an ABBA pattern along the slit, which was aligned with the parallactic angle, for a total on-source integration time of 1000 seconds. The spectrum was extracted using a modified version of the SpeXTool package \citep{vacca2003, cushing2004}, with the A0~V star HD 37887 ($V$ = 7.67) used for telluric correction. The large $J-K$ color of CWISE J0506$+$0738 resulted in significant signal to noise (S/N) differences across the final reduced spectrum, with a S/N$\sim$25 at the $J$-band peak ($\sim$1.3~$\mu$m) and a S/N$\sim$200 at the $K$-band peak ($\sim$2.2~$\mu$m).
The inter-band flux calibration for Keck/NIRES orders is occasionally skewed by seeing or differential refraction slit losses. In particular, there is a gap between the third ($K$-band) and fourth ($H$-band) orders spanning 1.86 to 1.89 $\mu$m\footnote{https://www2.keck.hawaii.edu/inst/nires/genspecs.html}, and the overlap between the fourth and fifth ($J$-band) orders lies in a region of strong telluric and stellar H$_2$O absorption. We therefore re-scaled the resulting spectrum to have a $J-K$ synthetic color consistent with UKIRT $J$-band and UHS $K$-band photometry by applying small multiplicative constants to the $H$- and $K$-band portions of the spectrum. The final reduced spectrum is shown in Figure \ref{fig:spectrum}.
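The re-scaling step amounts to a single multiplicative factor per order; as a minimal sketch (showing only the $K$-band case and assuming the synthetic and photometric colours are already in hand), the factor that brings a synthetic $J-K$ colour onto the photometric value is
\begin{verbatim}
def kband_rescale(jk_synthetic, jk_target):
    # Multiplying the K-band flux by this factor changes the synthetic
    # J-K colour from jk_synthetic to jk_target.
    return 10.0 ** (0.4 * (jk_target - jk_synthetic))

# Placeholder synthetic colour; the target is the UKIRT/UHS value.
print(kband_rescale(jk_synthetic=2.80, jk_target=2.97))
\end{verbatim}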
\begin{figure*}
\plotone{Figure2.pdf}
\caption{The Keck/NIRES spectrum of CWISE J0506$+$0738, shown
in the original resolution (grey lines) and smoothed to a resolution of $\lambda/\Delta\lambda$ $\approx$ 100 (black lines).
CWISE J0506$+$0738 is compared to the L7 spectral standard 2MASSI J0825196$+$211552 \citep{kirkpatrick2000, cruz2018} in the top panel, and the young L7 VL-G dwarf PSO J318.5338$-$22.8603 \citep{liu2013} in the bottom panel. Both comparisons highlight the extremely red nature of CWISE J0506$+$0738. All spectra are normalized between 1.27 and 1.29 $\mu$m, and prominent absorption features have been labeled.
}
\label{fig:spectrum}
\end{figure*}
\section{Analysis}
\label{sec:anal}
\subsection{Spectral Type}
\label{sec:spt}
As with many of the known, young, late-type red L dwarfs, none of the L dwarf spectral standards \citep{kirkpatrick2010, cruz2018} provide a suitable match to the near-infrared spectrum of CWISE J0506$+$0738. The best match to the $J$-band portion of the spectrum is the L7 standard 2MASSI J0825196$+$211552 \citep{kirkpatrick1999,cruz2018}, which is shown in the top panel of Figure \ref{fig:spectrum}. CWISE J0506$+$0738 shows much stronger H$_2$O absorption around 1.1 $\mu$m, a feature commonly seen in low-gravity L dwarfs. This comparison also shows how red CWISE J0506$+$0738 is compared to a normal, field-age/field-gravity late-L dwarf. The bottom panel of Figure \ref{fig:spectrum} shows a comparison of CWISE J0506$+$0738 with PSO J318.5338$-$22.8603 \citep{liu2013}, which is typed as L7 VL-G in that work. These two objects match relatively well across the $J$-band portion of the spectrum, though the extreme redness of CWISE J0506$+$0738 can still be seen in this comparison via the mismatch in the $H$- and $K$-band portions of their spectra.
We also note that the spectrum of CWISE J0506$+$0738 has a noticeable absorption feature at the $H$-band peak. There is also a second, less-pronounced absorption feature present in the $K$-band portion of CWISE J0506$+$0738's spectrum between 2.2 and 2.3 $\mu$m. While we cannot {\em a priori} rule out systematic noise or a data reduction artifact for these features, we note that no similar features have been seen in Keck/NIRES spectra of L dwarfs obtained and reduced by our group (e.g., \citealt{meisner2021, schapera2022, softich2022, theissen2022}). We also note that these features occur at the approximate locations of CH$_4$ absorption seen in model spectra of low-surface gravity brown dwarfs with effective temperatures $\lesssim$1400 K. Figure \ref{fig:ch4} compares solar-metallicity model spectra from \cite{marley2021} with fixed low-surface gravities (log(g)=3.5) and varying effective temperatures. Prominent methane absorption features can be seen in the $H$- and $K$-bands for \teff\ $\lesssim$1400 K. While these models are informative for (potentially) identifying the source of some of the absorption features seen in the spectrum of CWISE J0506$+$0738, we were unable to find any models that successfully reproduced the overall shape of CWISE J0506$+$0738's spectrum, similar to previous studies of young brown dwarfs (e.g., \citealt{manjavacas2014}).
\begin{figure*}
\plotone{Figure3.pdf}
\caption{Model spectra from \cite{marley2021} with varying effective temperatures and surface gravity fixed at log(g)=3.5. The gray bands highlight the approximate regions of the absorption features seen in the spectrum of CWISE J0506$+$0738. }
\label{fig:ch4}
\end{figure*}
The presence of CH$_4$ in the $H$- and $K$-band peaks of CWISE J0506$+$0738's spectrum would suggest that this source is an early T dwarf \citep{burgasser2006}, although these features are fairly weak in strength. \cite{charnay2018} showed that the presence of clouds can greatly reduce the abundance of CH$_4$ in the photospheres of low-gravity objects, a possible explanation for the absence of CH$_4$ bands in the spectra of 2M1207b and HR8799bcd \citep{barman2011a, barman2011b, konopacky2013}. If the same effect holds here, it would argue for a particularly low temperature for CWISE J0506$+$0738, below that of the {\teff} $\approx$ 1200~K planetary-mass L dwarf PSO J318.5338$-$22.8603 and VHS 1256$-$1257B, which originally showed no indication of CH$_4$ absorption in the 1--2.5~$\mu$m region \citep{liu2013, gauza2015}\footnote{Recent high S/N {\em JWST}/NIRSPEC observations of VHS~1256$-$1257B have revealed the presence of weak 1.6~$\micron$ absorption in its spectrum \citep{miles2022}.}. These two sources do have detectable absorption in the 3.3 $\mu$m $\nu_3$ CH$_4$ fundamental band \citep{miles2018}, and cloud scattering opacity is likely responsible for muting the 1.6~$\mu$m and 2.2~$\mu$m bands in these red L dwarfs \citep{charnay2018,burningham2021}. Indeed, it has been noted previously that PSO J318.5338$-$22.8603 is just on the warmer side of the transition to CH$_4$ becoming the dominant carbon-bearing molecule in its atmosphere \citep{tremblin2017}. We tentatively assert that both $H$- and $K$-band features in the spectrum of CWISE J0506$+$0738 are due to CH$_4$ absorption, which may be tested with more detailed analysis (e.g., atmospheric retrievals; \citealt{burningham2017, burningham2021}) and higher S/N moderate-resolution data. Given the similarity of the $J$-band portion of CWISE J0506$+$0738's spectrum to PSO J318.5338$-$22.8603 (L7 VL-G), and the likely detection of CH$_4$ in the $H$- and $K$-bands, we assign a near-infrared spectral type of L8$\gamma$--T0$\gamma$ to CWISE J0506$+$0738, where the $\gamma$ signifies very low surface gravity \citep{kirkpatrick2005}.
\subsection{Spectral Evidence of Youth}
\label{sec:youth}
The characterization of brown dwarfs and planetary mass objects as ``low surface gravity'' or ``young'' typically arises from gravity-sensitive (or more specifically, photosphere pressure-sensitive) spectral features quantified by spectral indices (e.g., \citealt{steele1995, martin1996, luhman1997, gorlova2003, mcgovern2004, kirkpatrick2006, allers2007, manjavacas2020}). Many of these spectral indices, however, are designed for optical spectra (e.g., \citealt{cruz2009}) or are only applicable to objects with spectral types earlier than $\sim$L5 (e.g., \citealt{allers2013, lodieu2018}). The $H$-cont index is a gravity-sensitive index defined in \cite{allers2013} that is one of the few gravity-sensitive indices applicable to spectral types later than L5. This index is designed to approximate the slope of the blue side of the $H$-band peak, with low-gravity objects exhibiting a much steeper slope than field-age brown dwarfs. However, this index is defined using a band centered at 1.67 $\mu$m, which is where a feature potentially attributable to CH$_4$ occurs in our spectrum. Thus the $H$-cont index does not provide an accurate assessment of the slope of the blue side of the $H$-band peak for this object.
We have created a modified slope index for the blue side of the $H$-band peak by computing a simple linear least-squares fit to the 1.45--1.64 $\mu$m region after normalizing to the $J$-band peak between 1.27 and 1.29 $\mu$m. We measured this slope (normalized flux/$\mu$m) for several late-L and early-T dwarfs, both field and young association members, as shown in Figure~\ref{fig:slope-index}. We note that the largest slope for the entire sample belongs to WISE J173859.27$+$614242.1, an object that has been difficult to classify \citep{mace2013}, but is most consistent with an extremely red L9 \citep{thompson2013}. It is unclear if this object is young, has an extremely dusty photosphere, or both. For typical L7-T0 dwarfs, $H$-slope values for field objects range from 2--4, while equivalently classified young L dwarfs have values that range over 3--5. For CWISE J0506$+$0738, we find a slope of 4.38, significantly larger than field-age late-L dwarfs. The known population of young, very red L dwarfs similarly has larger $H$-slope values than their field-age counterparts.
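As a concrete illustration of the index defined above, the sketch below computes the slope from wavelength (in $\mu$m) and flux arrays; it is a minimal implementation of the description in the text, not the exact code used for Figure~\ref{fig:slope-index}.
\begin{verbatim}
import numpy as np

def h_band_slope(wave, flux):
    # Normalize to the J-band peak (1.27-1.29 micron), then fit a line
    # to the blue side of the H-band peak (1.45-1.64 micron).  The
    # returned slope is in units of normalized flux per micron.
    norm = np.nanmean(flux[(wave >= 1.27) & (wave <= 1.29)])
    f = flux / norm
    sel = (wave >= 1.45) & (wave <= 1.64)
    slope, intercept = np.polyfit(wave[sel], f[sel], 1)
    return slope
\end{verbatim}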
\begin{figure}
\plotone{Figure4.pdf}
\caption{$H$-band slope index versus spectral type for field late-L and T dwarfs (colored circles) based on data from the SPLAT archive \citep{burgasser2017}, with colors corresponding to spectral type. Young L and T dwarfs are represented by purple squares. CWISE J0506$+$0738 (blue diamond) is an outlier amongst field-age late-Ls, similar to the young, late-type L dwarf population. Small offsets have been added to spectral type values for differentiation purposes.}
\label{fig:slope-index}
\end{figure}
\cite{schneider2014} also showed that the H$_2$($K$) index defined in \cite{canty2013} could distinguish young, low-gravity late-Ls from the field late-L population. The H$_2$($K$) index determines the slope of the $K$-band between 2.17~$\mu$m and 2.24~$\mu$m. CWISE J0506$+$0738 has an H$_2$($K$) value of 1.030, which is again consistent with the known population of low-gravity late-type L dwarfs (1.029 $\leq$ H$_2$($K$) $\leq$ 1.045) compared to field-age L6--L8 brown dwarfs (H$_2$($K$) $\gtrsim$ 1.05).
Another spectral feature that has been used to distinguish low-surface gravity late-L dwarfs are the K I absorption lines between 1.1 and 1.3 $\mu$m \citep{mcgovern2004,allers2013,miles2022}. Our Keck/NIRES spectrum does not have sufficient S/N around the $J$-band peak to investigate these lines. A higher S/N spectrum would help to ensure no ambiguity regarding the surface gravity of CWISE J0506$+$0738.
\subsection{Radial Velocity}
\label{sec:rv}
The resolution of the Keck/NIRES data is sufficient to obtain a coarse measure of the radial velocity (RV) of CWISE J0506$+$0738, particularly in the vicinity of strong molecular features. We followed a procedure similar to that described in \citet{burgasser2015} (see also \citealt{blake2010,hsu2021}), forward-modeling the wavelength-calibrated spectrum prior to telluric correction in the 2.26--2.38~$\mu$m region. This spectral band contains the prominent 2.3~$\mu$m CO 2-0 band present in L dwarf spectra, as well as strong telluric features that allow refinement of the spectral wavelength calibration (cf. \citealt{newton2014}). We used a {\teff} = 1300~K, {\logg} = 4.5~dex (cgs) BTSettl atmosphere model ($M[\lambda]$) from \citet{allard2012}, which provides the best match to the CO band strength, and a telluric absorption model ($T[\lambda]$) from \citet{livingston1991}. We forward modeled the data ($D[\lambda]$) using four parameters: the barycentric radial velocity of the star (RV$_\oplus$), the strength of telluric absorption ($\alpha$), the instrumental gaussian broadening profile width ($\sigma_{broad}$), and the wavelength offset from the nominal SpeXtool solution ($\Delta\lambda$):
\begin{equation}
D[\lambda] = \left(M[\lambda^*+\Delta\lambda]\times{T[\lambda+\Delta\lambda]^\alpha}\right) \otimes \kappa_G(\sigma_{broad})
\end{equation}
with $\lambda^* = \lambda(1+{RV_\oplus}/{c})$ accounting for the radial motion of the star and $\kappa_G$ representing the gaussian broadening kernel. Preliminary fits that additionally included rotational broadening of the stellar spectrum indicated that this parameter was equal to the instrumental broadening and is likely unresolved ($v\sin{i}$ $\lesssim$ 65~km/s), so it was ignored in our final fit.
After an initial ``by-eye'' optimization of parameters, we used a simple Markov Chain Monte Carlo (MCMC) algorithm to explore the parameter space, evaluating goodness of fit between model and data using a $\chi^2$ statistic. Figure~\ref{fig:rv} displays the posterior distributions of our fit parameters after removing the first half of the MCMC chain (``burn-in''), which are normally distributed. There is a small correlation between RV$_\oplus$ and $\Delta\lambda$, which is expected given that stellar and telluric features are intermixed in this region. This correlation increases the uncertainties of these parameters. We find that the best-fit model from this analysis is an excellent match to the NIRES spectrum, with residuals consistent with uncertainties. After correction for barycentric motion ($-$19.2~km/s), we determine a heliocentric radial velocity of +16.3$^{+8.8}_{-7.7}$~km s$^{-1}$ for CWISE J0506$+$0738.
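For reference, a minimal sketch of the forward model of Equation (1) is given below; the stellar-model and telluric arrays are assumed inputs, and the MCMC sampling and $\chi^2$ evaluation described above are omitted.
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter1d

C_KMS = 2.99792458e5  # speed of light in km/s

def forward_model(wave, model_wave, model_flux, tell_wave, tell_flux,
                  rv_kms, alpha, sigma_broad_kms, dlam):
    # Evaluate the stellar model at lambda*(1 + RV/c) + dlam, multiply by
    # the telluric spectrum (at lambda + dlam) raised to the power alpha,
    # and convolve with a Gaussian broadening kernel.
    stellar  = np.interp(wave * (1.0 + rv_kms / C_KMS) + dlam,
                         model_wave, model_flux)
    telluric = np.interp(wave + dlam, tell_wave, tell_flux)
    combined = stellar * telluric ** alpha
    # Convert the broadening width from km/s to pixels, assuming a
    # roughly uniform velocity sampling across this short order.
    dv_per_pix = np.median(np.diff(np.log(wave))) * C_KMS
    return gaussian_filter1d(combined, sigma_broad_kms / dv_per_pix)
\end{verbatim}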
\begin{figure*}
\plotone{Figure5.pdf}
\caption{MCMC forward model fit of the normalized 2.26--2.38~$\mu$m spectrum of CWISE J0506$+$0738 for RV measurement. The panels along the diagonal show the posterior distributions for our four fitting parameters: the barycentric radial velocity of the star (RV$_\oplus$ in km/s), the strength of the telluric absorption ($\alpha$), the instrumental gaussian broadening profile width ($\sigma_{broad}$ in km/s), and the wavelength offset from the nominal SpeXtool solution ($\Delta\lambda$ in {\AA}). The lower left panels illustrate correlations between parameters; only the RV and $\Delta\lambda$ parameters show a modest inverse correlation, effectively expanding the uncertainty on the RV measurement. The upper right corner shows the NIRES spectrum of CWISE J0506$+$0738 prior to telluric correction (black line) and the best-fit model spectrum (magenta line) composed of stellar model and telluric absorption components (offset lines above fit). Residuals (data minus model, blue line) are consistent with measurement uncertainties (grey band).
}
\label{fig:rv}
\end{figure*}
\begin{figure*}
\plotone{Figure6.pdf}
\caption{Color-color diagrams showing known brown dwarfs recovered in the UKIRT Hemisphere Survey (Schneider et al.~in prep), supplemented with known red L dwarfs from Table \ref{tab:redLs}. CWISE J0506$+$0738 is a clear outlier, being significantly redder than other known L dwarfs both in $J-K$ and $J-$W2 color. }
\label{fig:ccds}
\end{figure*}
\section{Discussion}
\label{sec:discussion}
\subsection{Redder than Red}
\label{sec:red}
CWISE J0506$+$0738 has exceptionally red colors compared to the known brown dwarf population. Figure \ref{fig:ccds} highlights this by comparing CWISE J0506$+$0738 to other UHS DR2 L and T dwarfs (Schneider et al.~in prep.) and red L dwarfs not covered by the UHS survey. Table \ref{tab:redLs} summarizes photometric and spectral type information for all known free-floating L dwarfs with $J-K$ colors greater than 2.2 mag. All photometry is on the MKO system and comes from the VISTA Hemisphere Survey (VHS; \citealt{mcmahon2013}), \cite{liu2016}, or \cite{best2021}. WISE J173859.27$+$614242.1 has no near-infrared MKO photometry in the literature or in available catalogs. For this source, we used its low-resolution near-infrared spectrum published in \citet{mace2013} normalized to its most precise $K$-band photometric measurement (2MASS $K_{\rm S}$; \citealt{skrutskie2006}), and then computed synthetic $J_{\rm MKO}$ and $K_{\rm MKO}$ photometry. Even amongst known red L dwarfs, CWISE J0506$+$0738 stands out as exceptionally red, being $\sim$0.3 mag redder in both $(J-K)_{MKO}$ and $J_{MKO}-$W2 color than all other known free-floating L dwarfs.
Directly imaged planetary-mass companions also have exceptionally red near-infrared colors. Some of the L-type companions (Table~\ref{tab:redLs}) do not have {\em WISE} W1 (3.4 $\mu$m) and W2 (4.6 $\mu$m) photometry, but have equivalent Spitzer/IRAC photometry in ch1 (3.6 $\mu$m) and ch2 (4.5 $\mu$m). For HD 203030B, we use $J$- and $K$-band photometry from \citet{metchev2006} and \citet{miles2017}, and convert Spitzer/IRAC ch1 and ch2 photometry from \cite{martinez2022} using the Spitzer-WISE relations from \cite{kirkpatrick2021}. For VHS 1256$-$1257B, we use $J$- and $K$-band photometry from \cite{gauza2015}, and convert Spitzer/IRAC ch2 photometry from \cite{zhou2020} to W2 using the \cite{kirkpatrick2021} relation. We chose not to use the published W1 photometry of VHS 1256$-$1257B from \cite{gauza2015} because of its large uncertainty (0.5 mag). For BD$+$60 1417B, all photometry comes directly from \cite{faherty2021}. Both HD 203030B and BD$+$60 1417B are included in both panels of Figure \ref{fig:ccds}, while VHS 1256$-$1257B is included in the left panel of Figure \ref{fig:ccds}. We note that none of these companions have $(J-K)_{MKO}$ or $J_{MKO}-$W2 colors as red as CWISE J0506$+$0738. Of the remaining planetary-mass companions that lack 3--5~$\mu$m photometry, only 2M1207b ($J-K$=3.07$\pm$0.23 mag; \citealt{chauvin2004, chauvin2005, mohanty2007, patience2010}) and HD 206893B ($J-K$=3.36$\pm$0.08 mag; \citealt{milli2017, delorme2017, krammerer2021, meshkat2021, ward2021}) have redder $J-K$ colors than CWISE J0506$+$0738.
\subsection{WISE Photometric Variability}
Young brown dwarfs have been shown to have enhanced photometric variability compared to field-age brown dwarfs \citep{biller2015, metchev2015, schneider2018, vos2020, vos2022}. Most brown dwarfs with detected variability at 3--5 $\mu$m, measured largely with Spitzer/IRAC, have amplitudes of a few percent or less (see compilation in \citealt{vos2020}). Multi-epoch photometry from WISE generally does not have the precision to detect such variability (\citealt{mace2015}, Brooks et al.~submitted). However, objects with extremely high-amplitude variability could be distinguished in multi-epoch WISE data.
\begin{figure*}
\plotone{Figure7.pdf}
\caption{Standard deviation ($\sigma$) versus average magnitude over all single-exposure WISE/NEOWISE W1 (left) and W2 (right) detections of known brown dwarfs. Color contours indicate 16--84\% and 5--95\% confidence intervals in 0.5 magnitude bins. The insets on each panel show the difference between measured $\sigma$ values and polynomial fits to the magnitude trend. 2MASS J2139$+$0220 (dark green square), PSO J318.5338$-$22.8603 (light green circle), 2MASSW J0310599$+$164816 (light purple hexagon), CWISE J0506$+$0738 (cyan diamond), and WISE J052857.68$+$090104.4 (dark purple pentagon) are all highlighted as clear deviants from these trends. }
\label{fig:var}
\end{figure*}
Given tentative evidence of near-infrared photometric variability (see Section \ref{sec:ukirt}), we investigated WISE \citep{wright2010} and NEOWISE \citep{mainzer2011, mainzer2014} data for evidence of mid-infrared variability for CWISE J0506$+$0738. WISE/NEOWISE has been scanning the mid-infrared sky for over 10 years, and a typical location on the sky has been observed with the W1 and W2 filters every six months since early 2010.\footnote{With the exception of a $\sim$3 year gap between the initial WISE mission and reactivation as NEOWISE from February 2011 to December 2013.} During each $\sim$1 day visit, 10--15 individual exposures are typically acquired. We chose to analyze these single exposures as opposed to epochal coadds (e.g. ``unTimely''; \citealt{meisner2022}) because CWISE J0506$+$0738 is brighter than the nominal threshold where single exposure photometry becomes unreliable, especially at W2 ($\sim$14.5 mag; \citealt{schneider2016a}), and because of the concern that the coadded frames would dilute any traces of photometric variability. Such coadded photometry may prove useful for future investigations of long-term/long-period variability.
We gathered photometry from the WISE/NEOWISE Single Exposure Source Catalogs \citep{wise2020a, wise2020b, wise2020c, neowise2020} for CWISE J0506$+$0738 and the same set of known L, T, and Y dwarfs shown in Figure \ref{fig:ccds}. Collectively, these objects should have comparable levels of low-amplitude variability generally undetectable by WISE. For each source, we measured the average and standard deviation of both W1 and W2 magnitudes. We omit frames with {\it qual\_frame} values equal to zero, as these frames likely have contaminated flux measurements. Because single exposure frames are subject to astronomical transients (e.g., cosmic ray hits, satellite streaks), we excluded 4$\sigma$ outliers from the set of single exposure photometry for each source. We also excluded sources that were either blended or contaminated (e.g., bright star halos, diffraction spikes).
Figure \ref{fig:var} compares mean and standard deviation values, which show clear trends in both W1 and W2 photometry. We immediately identify four objects with magnitudes between 12 and 14.5 that have photometric scatter above the 5--95\% confidence interval ($\gtrsim$2$\sigma$) in either W1 or W2.
\noindent
{\em 2MASS 21392676$+$0220226 (2MASS J2139$+$0220)} is a T1.5 dwarf \citep{burgasser2006} that is well-known for its large-amplitude infrared variability. \cite{radigan2012} monitored 2MASS J2139$+$0220 and found $J$-band variability with a peak-to-peak amplitude of $\sim$26\%, which until recent observations of VHS 1256$-$1257B \citep{zhou2022} was the highest amplitude variability found for any brown dwarf. Since the \cite{radigan2012} study, this object has been the subject of numerous variability investigations \citep{apai2013, khandrika2013, karalidi2015}, with \cite{yang2016} finding variability of 11--12\% in Spitzer/IRAC ch1 and ch2 photometry. The extreme variability of 2MASS J2139$+$0220 is attributed to variations in the thickness of silicate clouds \citep{apai2013, karalidi2015, vos2022b}. This object has also been shown to have a nearly edge-on inclination \citep{vos2017}, and is a kinematic member of the $\sim$200 Myr-old Carina-Near moving group \citep{zhang2021}.
\noindent
{\em WISE J052857.68$+$090104.4 (WISE~J0528$+$0901)} is a clear W1 outlier, originally classified as a late-M giant by \cite{thompson2013} but later reclassified as a very low-gravity L1 brown dwarf member of the $\sim$20 Myr 32 Orionis group \citep{burgasser2016}. This planetary-mass object has an anomalous $J-$W2 color, suggestive of excess flux at 5 $\mu$m, although \cite{burgasser2016} found no evidence of circumstellar material or cool companions. The source may also be a variable in the W2 band, but its fainter magnitude here makes it less distinct than comparably bright L and T dwarfs. Nevertheless, these data suggest that WISE~J0528$+$0901 has an unusually dusty and variable atmosphere, making it a compelling source for future photometric monitoring.
\noindent
{\em PSO J318.5338$-$22.8603} is a clear W2 outlier and an exceptionally red $\beta$ Pic member that has been shown to have large-amplitude variability in the infrared \citep{biller2015, vos2019}, with a peak-to-peak amplitude of 3.4\% in Spitzer/IRAC ch2 photometry \citep{biller2018}. Interestingly, PSO J318.5338$-$22.8603 is an outlier in W2 and not in W1, which may indicate cloud depth effects, given that the W1 and W2 bands probe different depths in the atmosphere.
\noindent
{\em 2MASSW J0310599$+$164816 (2MASS~J0310+1648AB)} is another W2 outlier, and is an optically classified L8 \citep{kirkpatrick2000}. This object is a resolved (0\farcs2) $\sim$equal brightness binary \citep{stumpf2010} that shows evidence of high amplitude variability in the near-infrared \citep{buenzli2014}. While the variability observations were not long enough to determine a true amplitude or period, the brightening rate of $\sim$2\% per hour was the largest measured in the sample. While there is no clear evidence of youth for 2MASS~J0310+1648AB in the literature, this object was typed as L9.5 (sl.~red) in \cite{schneider2014}. Further investigation of the potential youth and cloud properties of this object may be warranted.
CWISE J0506$+$0738 joins this group of variability outliers, as one of very few objects with both W1 and W2 scatter outside the 16--84\% confidence interval of comparable-brightness L and T dwarfs. To estimate the amplitude of variability associated with these deviations, we fit tenth-order polynomials to the scatter versus magnitude trends in W1 and W2, and calculated RMS values by finding the magnitude offset (in quadrature) for our outlying targets. Assuming sinusoidal variability, RMS values can be converted to peak-to-peak amplitudes with a multiplicative factor of 2$\sqrt{2}$. Using the 16--84\% confidence region as uncertainties for the predicted values from the polynomial fits, we find peak-to-peak variability on the order of 13$\pm$1\% for W1 and 12$\pm$2\% for W2 for 2MASS J2139$+$0220, which is generally consistent with results from Spitzer \citep{yang2016}. For CWISE J0506$+$0738, we estimate 15$\pm$5\% variability for W1 and 23$\pm$9\% variability for W2. Variability at these levels would certainly be extraordinary; however, we caution that the relatively low precision of WISE/NEOWISE single exposure measurements may inflate these results. Future photometric and/or spectroscopic monitoring would help to explore the variability properties of CWISE J0506$+$0738.
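The amplitude estimate described above can be summarized in a short function; this is a sketch of the arithmetic only, where trend_sigma stands for the polynomial prediction of the single-exposure scatter at the source's mean magnitude, and the returned peak-to-peak value is in magnitudes (for small amplitudes the fractional flux amplitude is roughly 0.92 times this value).
\begin{verbatim}
import numpy as np

def excess_peak_to_peak(mags, trend_sigma):
    # Clip 4-sigma outliers, measure the scatter, subtract in quadrature
    # the scatter expected at this magnitude from the polynomial trend,
    # and convert the excess RMS to a peak-to-peak amplitude assuming
    # sinusoidal variability (factor 2*sqrt(2)).
    m = np.asarray(mags, dtype=float)
    m = m[np.abs(m - m.mean()) < 4.0 * m.std()]
    excess_rms = np.sqrt(max(m.std() ** 2 - trend_sigma ** 2, 0.0))
    return 2.0 * np.sqrt(2.0) * excess_rms
\end{verbatim}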
\subsection{Distance}
\label{sec:dist}
CWISE J0506$+$0738 is faint at optical wavelengths and was therefore undetected by the Gaia mission \citep{gaia2022}. The currently available astrometry for CWISE J0506$+$0738 is insufficient for a parallax measurement. Because CWISE J0506$+$0738 has such an unusually shaped spectrum, standard spectral-type versus absolute magnitude relations for normal, field-age brown dwarfs are not applicable. There have been efforts to create relations between absolute magnitudes and spectral types for low-gravity brown dwarfs; however, these are typically valid for spectral types earlier than L7 (e.g., \citealt{faherty2016, liu2016}). \cite{faherty2013} found absolute photometry of the young L5 dwarf 2MASS J03552337$+$1133437 was fainter than field L5 dwarfs at wavelengths shorter than $\sim$2.5 $\mu$m, and brighter at longer wavelengths. \cite{schneider2016b} investigated other young, red L dwarfs with measured parallaxes and found that $K$-band photometry produced photometric distances that aligned well with parallactic distances. This trend was also noted in \cite{filippazzo2015}, \cite{faherty2016}, and \cite{liu2016}.
Here, we use nine young, free-floating brown dwarfs (Table \ref{tab:redLs}) with measured parallaxes \citep{liu2016, best2020, kirkpatrick2021, gaia2022} to compare measured distances to photometric distances based on absolute magnitude-spectral type relations for $J_{\rm MKO}$, $K_{\rm MKO}$, W1, and W2 (\citealt{dupuy2012,kirkpatrick2021}; Figure~\ref{fig:dist}). Consistent with prior results, we find that $K_{\rm MKO}$-band photometric distances (average offset $\Delta$d = $-$0.8~pc, scatter $\sigma_d$ = 3.3~pc) are generally more accurate than $J_{\rm MKO}$ ($\Delta$d = $-$10~pc, $\sigma_d$ = 5.1~pc), W1 ($\Delta$d = +2.6~pc, $\sigma_d$ = 3.8~pc), or W2 ($\Delta$d = +4.5~pc, $\sigma_d$ = 4.1~pc) photometric distances. To ensure these values are not biased, we also evaluated the fractional difference for each photometric band, defined as $\Delta$d/d$_{\rm plx}$, and find that $K$-band photometric distances are typically within 5\% for this sample, compared to 52\%, 11\%, and 20\% for $J_{\rm MKO}$, W1, and W2, respectively.
Using the absolute magnitude-spectral type relation from \cite{dupuy2012}, a spectral type of L9$\pm$1, and its measured $K_{MKO}$ photometry, we estimate a photometric distance of 32$^{+4}_{-3}$ pc for CWISE J0506$+$0738. Again, given the exceptional nature of this source, and its unknown multiplicity, we advise that this distance estimate be used with caution until it can be confirmed with a trigonometric parallax.
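The distance estimate itself is a one-line calculation; in the sketch below the absolute magnitude is an assumed stand-in for the \cite{dupuy2012} relation evaluated at L9 (chosen to be consistent with the distance quoted above), not a value taken directly from that work.
\begin{verbatim}
m_K = 15.513   # apparent UHS K_MKO magnitude (Table 1)
M_K = 13.0     # assumed absolute K_MKO magnitude for an L9 dwarf
d_pc = 10.0 ** ((m_K - M_K) / 5.0 + 1.0)
print(round(d_pc))   # ~32 pc
\end{verbatim}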
\begin{figure}
\plotone{Figure8.pdf}
\caption{A comparison of photometric and parallactic distances for free-floating objects from Table \ref{tab:redLs} with measured parallaxes. Objects are labeled on the x-axis. Dashed lines show average differences between photometric and parallactic distances for each band, with colors corresponding to those given in the legend. }
\label{fig:dist}
\end{figure}
\subsection{Moving Group Membership}
\label{sec:mg}
Young brown dwarfs are often associated both spatially and kinematically with young, nearby moving groups, thereby serving as invaluable age benchmarks.
To assess the potential moving group membership of CWISE J0506$+$0738, we use the BANYAN $\Sigma$ algorithm \citep{gagne2018}, which deploys a Bayesian classifier to assign probabilities of moving group membership through 6D coordinate alignment (position and velocity) to 26 known moving groups in the solar neighborhood. We used the position and proper motion of CWISE J0506$+$0738 from UKIRT and UHS measurements (Table \ref{tab:cwise0506}), and our measured radial velocity from the NIRES spectrum (Section \ref{sec:rv}). With these values alone, we find an 82\% membership probability in the $\beta$ Pictoris moving group (BPMG; \citealt{zuckerman2001}), a 3\% membership probability in the AB Doradus moving group (ABDMG; \citealt{zuckerman2004}), and a 15\% probability of being unassociated with any moving group. The predicted/optimal distances for membership in BPMG and ABDMG are 32~pc and 64~pc, respectively; our estimated distance clearly aligns with the former. If we include the distance estimate in the BANYAN $\Sigma$ algorithm, the probability of BPMG membership goes up to 99\%.
We also tested the kinematic membership of CWISE J0506$+$0738 using the LACEwING analysis code \citep{riedel2017}. Again, using just the position, proper motion, and radial velocity of CWISE J0506$+$0738, we find non-zero probabilities for ABDMG (56\%), the Argus Moving Group (71\%), BPMG (28\%), the Columba Association (52\%), and the Tucana-Horologium Association (6\%). Note that LACEwING is stricter in assigning membership probabilities than BANYAN, with bona fide BPMG members having a maximum membership probability of $\sim$70\% when only proper motion and radial velocity are used \citep{riedel2017}. If we use our photometric distance as an additional constraint, BPMG is returned as the group with the highest probability of membership at 86\%.
Membership in the $\beta$ Pictoris moving group is clearly favored for CWISE J0506$+$0738, although a directly measured distance is necessary for confirmation. If confirmed, CWISE J0506$+$0738 would have the latest spectral type and lowest mass amongst free-floating BPMG members, following PSO J318.5338$-$22.8603 \citep{liu2013}. Several candidate members with L7 or later spectral types have also been proposed (\citealt{best2015, schneider2017, kirkpatrick2021, zhang2021}; however, see \citealt{hsu2021}). PSO J318.5338$-$22.8603 has proven to be an exceptionally valuable laboratory for studying planetary-mass object atmospheres \citep{biller2015, biller2018, allers2016, faherty2016}. A second planetary-mass object in this group that bridges the L/T transition will further contribute to these studies.
Assuming $\beta$ Pic membership, we can use the group age of 22$\pm$6 Myr \citep{shkolnik2017} to estimate the mass of CWISE J0506$+$0738. To do this, we must first estimate the luminosity ($L_{\rm bol}$) or effective temperature (\teff) of the source. For the former, we used the empirical $K$-band bolometric correction/spectral type relation for young brown dwarfs quantified in \cite{filippazzo2015}. Combining this with the UHS $K$-band magnitude and our distance estimate, we infer a bolometric luminosity of $\log$($L_{\rm bol}$/$L_{\odot}$) = -4.55$\pm$0.12. We caution that this value is based on our estimated distance from Section \ref{sec:dist}, and will need to be updated when a measured parallax becomes available. We then used the solar metallicity evolutionary models of \cite{marley2021} to infer a mass of 7$\pm$2 $M_{\rm Jup}$. The evolutionary models also provide a radius of 1.32$\pm$0.03 $R_{\rm Jup}$ for these parameters, consistent with the radii of low-gravity late-type L dwarfs \citep{filippazzo2015}. Combining this radius with our bolometric luminosity, we find \teff\ = 1140$\pm$80 K. This is $\sim$130 K cooler than a field-age L9 \citep{kirkpatrick2021}, consistent with previous works showing low-gravity late-Ls tend to be $\sim$100--200 K cooler than field-age objects at the same spectral type \citep{filippazzo2015, faherty2016}. In particular, this temperature is 50-100~K cooler than \teff\ estimates of PSO J318.5338$-$22.8603 \citep{liu2013,miles2018}, consistent with the appearance of CH$_4$ absorption at lower temperatures.
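As a consistency check of the quoted effective temperature, the Stefan--Boltzmann relation can be inverted using the bolometric luminosity and model radius above; the sketch below is a back-of-the-envelope calculation in cgs units, not the full evolutionary-model analysis.
\begin{verbatim}
import numpy as np

L_SUN = 3.828e33    # erg/s
R_JUP = 6.9911e9    # cm
SIGMA = 5.6704e-5   # erg cm^-2 s^-1 K^-4

L = 10.0 ** (-4.55) * L_SUN        # log(Lbol/Lsun) = -4.55
R = 1.32 * R_JUP                   # model radius
teff = (L / (4.0 * np.pi * R ** 2 * SIGMA)) ** 0.25
print(round(teff))                 # ~1150 K, cf. 1140 +/- 80 K
\end{verbatim}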
The predicted mass of 7$\pm$2 $M_{\rm Jup}$ is well below the deuterium-fusion minimum mass of 14 $M_{\rm Jup}$ commonly used to distinguish brown dwarfs from planetary mass objects. As such, this object helps bridge the mass gap between the lowest mass free-floating $\beta$ Pic members and directly imaged exoplanets, such as 51 Eri b ($\sim$T6.5; \citealt{macintosh2015, rajan2017}). CWISE J0506$+$0738 could also help to constrain the effective temperature of the L/T transition at an age of $\sim$20-25 Myr \citep{binks2014, bell2015, messina2016, nielsen2016, shkolnik2017, miret2020}. CWISE J0506$+$0738 would be one of the youngest objects to join a small but growing number of benchmark substellar objects with known ages at the L/T transition, such as HD 203030B (30--150 Myr; \citealt{metchev2006, miles2017}), 2MASS J13243553+6358281 ($\sim$150 Myr; \citealt{looper2007, gagne2018b}), HIP 21152B and other T-type Hyades members ($\sim$650 Myr; \citealt{kuzuhara2022, schneider2022}), $\epsilon$ Indi Ba ($\sim$3.5 Gyr; \citealt{scholz2003, chen2022}), and the white dwarf companion COCONUTS-1 ($\sim$7 Gyr; \citealt{zhang2020}).
\section{Summary}
We have presented the discovery and analysis of an exceptionally red brown dwarf, CWISE J0506$+$0738, identified as part of the Backyard Worlds: Planet 9 citizen science project. The near-infrared spectrum of CWISE J0506$+$0738 is highly reddened and shows signatures of low-surface gravity, as well as weak absorption features that we associate with methane bands. This object has the reddest $J-K$ and $J-$W2 colors of any free-floating L-type brown dwarf, and we tentatively assign a near-infrared spectral type of L8$\gamma$--T0$\gamma$. The exceptionally red color of CWISE J0506$+$0738 may be due to several factors. Objects with low surface gravities have inefficient gravitational settling of silicate dust grains, which can remain high in the atmospheres. Such grains can be directly detected at long wavelengths (e.g., \citealt{cushing2006, burgasser2008, suarez2022}) and could be constrained for CWISE J0506$+$0738 with future long-wavelength observations (e.g., \citealt{miles2022}). The angle at which a brown dwarf is viewed has also been shown to affect its near-infrared colors, with objects viewed equator-on tending to have redder colors than those viewed pole-on \citep{vos2017}. A measurement of CWISE J0506$+$0738's rotational period combined with its rotational velocity (e.g., $v$sin$i$) from a high-resolution spectrum could determine whether or not CWISE J0506$+$0738 is viewed closer to pole-on or equator-on. A high-resolution spectrum would also allow for a higher precision radial velocity measurement and a more detailed probe of gravity-sensitive features.
CWISE J0506$+$0738's astrometry and kinematics point to likely membership in the 22~Myr $\beta$ Pictoris moving group, to be confirmed or rejected with future trigonometric parallax and higher precision radial velocity measurements. If associated, CWISE J0506$+$0738 would be the lowest-mass $\beta$ Pictoris member found to date, with an estimated mass of 7$\pm$2 $M_{\rm Jup}$, well within the planetary-mass regime. The extreme colors of this object, and its relatively low proper motion ($<$100 mas yr$^{-1}$), suggest the existence of other extremely red L dwarfs that may have been missed by previous searches due to assumptions about brown dwarf colors or selection requirements for large proper motions. Recent large-scale near-infrared surveys such as UHS \citep{dye2018} and VHS \citep{mcmahon2013} that push several magnitudes deeper than previous efforts (e.g., 2MASS) may be able to confidently detect the faint $J$-band magnitudes of similar objects.
Because of this object's unique spectroscopic properties, and the fact that young brown dwarfs often display large-amplitude variability (e.g., \citealt{vos2022}), CWISE J0506$+$0738 is an intriguing target for future photometric or spectroscopic variability monitoring. Longer wavelength observations with the James Webb Space Telescope would have the additional advantage of further constraining the existence and abundance of CH$_4$ and analyzing the presence and properties of dust grains through silicate absorption features \citep{miles2022}.
\begin{longrotatetable}
\begin{deluxetable*}{lcccccccccccc}
\label{tab:redLs}
\tablecaption{Infrared Photometry for L Dwarfs with $J-K$ $>$ 2.2 mag}
\tablehead{
\colhead{Name} & \colhead{Disc.} & \colhead{SpT} & \colhead{SpT} & \colhead{$J_{\rm MKO}$} & \colhead{$K_{\rm MKO}$} & \colhead{NIR} & \colhead{W1} & \colhead{W2} & \colhead{$(J-K)_{\rm MKO}$} & \colhead{$J_{\rm MKO}-$W2}\\
\colhead{} & \colhead{Ref.} & \colhead{} & \colhead{Ref.} & \colhead{(mag)} & \colhead{(mag)} & \colhead{Ref.} & \colhead{(mag)} & \colhead{(mag)} & \colhead{(mag)} & \colhead{(mag)} }
\startdata
\cutinhead{Free Floating}
WISEP J004701.06$+$680352.1 & 1 & L6--L8$\gamma$ & 2 & 15.490$\pm$0.070 & 13.010$\pm$0.030 & 3 & 11.768$\pm$0.010 & 11.242$\pm$0.008 & 2.480$\pm$0.076 & 4.248$\pm$0.070 \\
PSO 057.2893$+$15.2433 & 4 & L7 red & 4 & 17.393$\pm$0.027 & 14.869$\pm$0.012 & 20 & 13.818$\pm$0.014 & 13.254$\pm$0.012 & 2.524$\pm$0.030 & 4.139$\pm$0.030 \\
2MASS J03552337$+$1133437 & 5 & L3--L6$\gamma$ & 2 & 13.940$\pm$0.003 & 11.491$\pm$0.001 & 20 & 10.617$\pm$0.012 & 10.032$\pm$0.008 & 2.449$\pm$0.003 & 3.908$\pm$0.009 \\
CWISE J050626.96$+$073842.4 & 6 & L8--T0$\gamma$ & 6 & 18.487$\pm$0.017 & 15.513$\pm$0.022 & 6,20 & 14.320$\pm$0.015 & 13.552$\pm$0.013 & 2.974$\pm$0.028 & 4.935$\pm$0.021 \\
WISEA J090258.99$+$670833.1 & 7 & L7 red & 7 & 16.864$\pm$0.246 & 14.305$\pm$0.108 & 8 & 13.192$\pm$0.013 & 12.722$\pm$0.009 & 2.559$\pm$0.269 & 4.142$\pm$0.246 \\
2MASS J11193254$-$1137466 & 9 & L7 VL-G\tablenotemark{a} & 10 & 17.330$\pm$0.029 & 14.751$\pm$0.012 & 21 & 13.540$\pm$0.014 & 12.879$\pm$0.010 & 2.580$\pm$0.032 & 4.451$\pm$0.031 \\
WISEA J114724.10$-$204021.3 & 11 & L7$\gamma$ & 12 & 17.445$\pm$0.028 & 14.872$\pm$0.011 & 21 & 13.677$\pm$0.013 & 13.088$\pm$0.011 & 2.573$\pm$0.030 & 4.357$\pm$0.030 \\
2MASS J16154255$+$4953211 & 13 & L3--L6$\gamma$ & 2 & 16.506$\pm$0.016 & 14.260$\pm$0.070 & 3,20 & 13.225$\pm$0.012 & 12.648$\pm$0.008 & 2.246$\pm$0.072 & 3.858$\pm$0.018\\
WISE J173859.27$+$614242.1 & 14 & L9 pec(red) & 14 & 17.680$\pm$0.110\tablenotemark{c} & 15.237$\pm$0.100\tablenotemark{c} & 6,22 & 14.059$\pm$0.011 & 13.374$\pm$0.009 & 2.443$\pm$0.149\tablenotemark{c} & 4.306$\pm$0.100\tablenotemark{c}\\
WISE J174102.78$-$464225.5 & 15 & L5--L7$\gamma$ & 2 & 15.951$\pm$0.010 & 13.533$\pm$0.005 & 21 & 12.362$\pm$0.027 & 11.802$\pm$0.024 & 2.418$\pm$0.011 & 4.149$\pm$0.026\\
PSO J318.5338$-$22.8603 & 16 & L7 VL-G & 16 & 17.181$\pm$0.018 & 14.540$\pm$0.009 & 21 & 13.210$\pm$ 0.013 & 12.526$\pm$0.010 & 2.640$\pm$0.020 & 4.655$\pm$0.021\\
2MASS J21481628$+$4003593 & 17 & L6.5 pec & 17 & 14.054$\pm$0.003 & 11.745$\pm$0.001 & 20 & 10.801$\pm$0.011 & 10.292$\pm$0.007 & 2.309$\pm$0.003 & 3.762$\pm$0.008 \\
ULAS J222711$-$004547 & 18 & L7 pec & 18 & 17.954$\pm$0.039 & 15.475$\pm$0.014 & 21\tablenotemark{b} & 14.259$\pm$0.014 & 13.663$\pm$0.013 & 2.479$\pm$0.041 & 4.291$\pm$0.041\\
2MASS J22443167$+$2043433 & 19 & L6--L8$\gamma$ & 2 & 16.401$\pm$0.016 & 13.826$\pm$0.006 & 20 & 12.775$\pm$0.012 & 12.130$\pm$0.008 & 2.575$\pm$0.017 & 4.271$\pm$0.018\\
\cutinhead{Companions}
BD$+$60 1417B & 23 & L6--L8$\gamma$ & 23 & 18.53$\pm$0.20 & 15.83$\pm$0.20 & 23 & 14.461$\pm$0.014 & 13.967$\pm$0.013 & 2.70$\pm$0.28 & 4.46$\pm$0.20 \\
HD 203030B & 24 & L7.5 & 24 & 18.77$\pm$0.08 & 16.21$\pm$0.10 & 24,25 & 15.67$\pm$0.02\tablenotemark{d} & 14.77$\pm$0.02\tablenotemark{d} & 2.56$\pm$0.13 & 4.00$\pm$0.08 \\
VHS 1256$-$1257B & 26 & L7.5 & 26 & 17.136$\pm$0.020 & 14.665$\pm$0.010 & 21 & \dots & 12.579$\pm$0.020\tablenotemark{e} & 2.471$\pm$0.022 & 4.557$\pm$0.028 \\
2MASS J1207334$-$393254b & 27,28 & L3 VL-G & 29 & 20.0$\pm$0.2 & 16.93$\pm$0.11 & 27,30 & \dots & \dots & 3.07$\pm$0.23 & \dots \\
HD 206893B & 31 & L4--L8 & 32 & 18.38$\pm$0.03 & 15.02$\pm$0.07 & 32 & \dots & \dots & 3.36$\pm$0.08 & \dots \\
2MASS J22362452+4751425b & 33 & late-L pec & 33 & 19.97$\pm$0.11 & 17.28$\pm$0.04 & 33 & \dots & \dots & 2.69$\pm$0.12 & \dots \\
HR 8799b & 34 & L5--T2 & 35 & 19.46$\pm$0.17 & 16.99$\pm$0.06 & 36,37,38 & \dots & \dots & 2.47$\pm$0.18 & \dots \\
\enddata
\tablenotetext{a}{2MASS J11193254$-$1137466 is a binary \citep{best2017} and the spectral type listed is the unresolved spectral type.}
\tablenotetext{b}{ULAS J222711$-$004547 also has $J$- and $K$-band photometry in the UKIRT Large Area Survey (LAS; \citealt{lawrence2007}). We use the VHS photometric measurements here because they have smaller uncertainties than those in the UKIRT LAS.}
\tablenotetext{c}{Near-infrared photometry for WISE J173859.27$+$614242.1 was determined synthetically from its near-infrared spectrum.}
\tablenotetext{d}{Converted from Spitzer ch1 and ch2 photometry in \cite{miles2017} using relations in \cite{kirkpatrick2021}.}
\tablenotetext{e}{Converted from Spitzer ch2 photometry in \cite{zhou2020} using relations in \cite{kirkpatrick2021}.}
\tablerefs{(1) \cite{gizis2012}; (2) \cite{gagne2015}; (3) \cite{liu2016}; (4) \cite{best2015}; (5) \cite{reid2006}; (6) This work; (7) \cite{schneider2017}; (8) \cite{best2021}; (9) \cite{kellogg2015}; (10) \cite{best2017}; (11) \cite{schneider2016b}; (12) \cite{faherty2016}; (13) \cite{metchev2008}; (14) \cite{mace2013}; (15) \cite{schneider2014}; (16) \cite{liu2013b}; (17) \cite{looper2008}; (18) \cite{marocco2014}; (19) \cite{dahn2002}; (20) UHS (\citealt{dye2018}, Bruursema et al.~in prep.); (21) VHS \citep{mcmahon2013}; (22) 2MASS \citep{skrutskie2006}; (23) \cite{faherty2021}; (24) \cite{metchev2006}; (25) \cite{miles2017}; (26) \cite{gauza2015}; (27) \cite{chauvin2004}; (28) \cite{chauvin2005}; (29) \cite{allers2013}; (30) \cite{mohanty2007}; (31) \cite{milli2017}; (32) \cite{ward2021}; (33) \cite{bowler2017}; (34) \cite{marois2008}; (35) \cite{bowler2010}; (36) \cite{esposito2013}; (37) \cite{oppenheimer2013}; (38) \cite{liu2016} }
\end{deluxetable*}
\end{longrotatetable}
\acknowledgments
The Backyard Worlds: Planet 9 team would like to thank the many Zooniverse volunteers who have participated in this project. We would also like to thank the Zooniverse web development team for their work creating and maintaining the Zooniverse platform and the Project Builder tools. This research was supported by NASA grant 2017-ADAP17-0067. This material is supported by the National Science Foundation under Grant No. 2007068, 2009136, and 2009177. This publication makes use of data products from the UKIRT Hemisphere Survey, which is a joint project of the United States Naval Observatory, The University of Hawaii Institute for Astronomy, the Cambridge University Cambridge Astronomy Survey Unit, and the University of Edinburgh Wide-Field Astronomy Unit (WFAU). UHS is primarily funded by the United States Navy. The WFAU gratefully acknowledges support for this work from the Science and Technology Facilities Council through ST/T002956/1 and previous grants. The authors acknowledge the support provided by the US Naval Observatory in the areas of celestial and reference frame research, including the USNO's postdoctoral program. (Some of) The data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. This publication makes use of data products from the {\it Wide-field Infrared Survey Explorer}, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, and NEOWISE which is a project of the Jet Propulsion Laboratory/California Institute of Technology. {\it WISE} and NEOWISE are funded by the National Aeronautics and Space Administration. Part of this research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This research has benefitted from the SpeX Prism Spectral Libraries, maintained by Adam Burgasser. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.
\facilities{UKIRT/WFCAM, Keck/NIRES, WISE, NEOWISE}
\software{
BANYAN~$\Sigma$ \citep{gagne2018},
CASUTOOLS \citep{irwin2004},
LACEwING \citep{riedel2017},
SpeXTool \citep{cushing2004},
SPLAT \citep{burgasser2014},
WiseView \citep{caselden2018}
}
\clearpage
\section{Introduction}
A \emph{$2$-$(v,k,\lambda)$ design} $\mathcal{D}$ is a pair $(\mathcal{P},\mathcal{B})$ with a set $\mathcal{P}$ of $v$ \emph{points} and a set $\mathcal{B}$ of \emph{blocks}
such that each block is a $k$-subset of $\mathcal{P}$ and each two distinct points are contained in $\lambda$ blocks.
We say $\mathcal{D}$ is \emph{nontrivial} if $2<k<v$, and \emph{symmetric} if $v=b$, where $b$ denotes the number of blocks.
All $2$-$(v,k,\lambda)$ designs in this paper are assumed to be nontrivial.
An automorphism of $\mathcal{D}$ is a permutation of the point set
which preserves the block set.
The set of all automorphisms of $\mathcal{D}$ with the composition of permutations forms a
group, denoted by ${\rm Aut}(\mathcal{D})$.
For a subgroup $G$ of ${\rm Aut}(\mathcal{D})$,
$G$ is said to be \emph{point-primitive}
if $G$ acts primitively on $\mathcal{P}$,
and said to be \emph{point-imprimitive} otherwise.
A \emph{flag} of $\mathcal{D}$ is a point-block
pair $(\alpha,B)$ where $\alpha$ is a point
and $B$ is a block incident with $\alpha$.
A subgroup $G$ of ${\rm Aut}(\mathcal{D})$ is said to be \emph{flag-transitive} if $G$ acts transitively on the set of flags of $\mathcal{D}$.
A $2$-$(v,k,\lambda)$ design with $\lambda=1$
is also called a finite \emph{linear space}.
In 1990, Buekenhout, Delandtsheer, Doyen, Kleidman, Liebeck and Saxl~\cite{BDDKLS} classified all flag-transitive linear spaces apart from
those with a one-dimensional affine automorphism group.
Since then, there have been efforts to classify $2$-$(v,k,2)$ designs $\mathcal{D}$
admitting a flag-transitive group $G$ of automorphisms.
Through a series of papers~\cite{Re2005,Biplane1,Biplane2,Biplane3}, Regueiro proved that,
if $\mathcal{D}$ is symmetric, then either $(v,k)\in\{(7,4),(11,5),(16,6)\}$, or $G\leq {\rm A\Gamma L}(1,q)$ for some odd prime power $q$. Recently, Zhou and the second author~\cite{Liang1} proved that, if $\mathcal{D}$
is not symmetric and $G$ is point-primitive, then $G$ is affine or almost simple. In each of these cases $G$ has a unique minimal normal subgroup, its \emph{socle} ${\rm Soc}(G)$, which is elementary abelian or a nonabelian simple group, respectively.
Our first objective in this paper is to fill in a missing piece in this story, namely to treat the case where $G$ is flag-transitive and point-imprimitive and $\mathcal{D}$ is a not-necessarily-symmetric $2$-$(v,k,2)$ design. Such flag-transitive, point-imprimitive designs exist: it was shown in 1945 by Hussain \cite{Huss}, and independently in 1946 by Nandi \cite{Nandi},
that there are exactly three $2$-$(16,6,2)$-designs. O'Reilly Regueiro
\cite[Examples 1.2]{Re2005} showed that exactly two of these designs are flag-transitive, and each admits a point-imprimitive, flag-transitive subgroup of automorphisms (one with automorphism group $2^4\mathrm{S}_6$ and point stabiliser $(\mathbb{Z}_2\times\mathbb{Z}_8)(\mathrm{S}_4.2)$ and the other with automorphism group $\mathrm{S}_6$ and point stabiliser $\mathrm{S}_4.2$, see also \cite[Remark 1.4(1)]{Praeger}).
We prove that these are the only point-imprimitive examples, and thus, together with~\cite[Theorem 1.1]{Liang1} and~\cite[Theorem 2]{Re2005}, we obtain the following result.
\begin{theorem}\label{Th1}
Let $\mathcal{D}$ be a $2$-$(v,k,2)$ design with a flag-transitive group $G$ of automorphisms. Then either
\begin{enumerate}
\item[\rm(i)] $\mathcal{D}$ is one of two known symmetric $2$-$(16,6,2)$ designs with $G$ point-imprimitive; or
\item[\rm(ii)] $G$ is point-primitive of affine or almost simple type.
\end{enumerate}
\end{theorem}
Theorem~\ref{Th1} reduces the study of flag-transitive $2$-$(v,k,2)$ designs to those whose automorphism group $G$ is point-primitive of affine or almost simple type. Regueiro \cite{Re2005,Biplane1,Biplane2,Biplane3} has classified all such examples where the design is symmetric (up to those admitting a one-dimensional affine group). In the non-symmetric case, the second author and Zhou have dealt with the cases where the socle ${\rm Soc}(G)$ is a sporadic simple group or an alternating group, identifying three possibilities: namely $(v,k)=(176,8)$ with $G=\mathrm{HS}$, the Higman-Sims group in \cite{Liang1}, and $(v,k)=(6,3)$ or $(10,4)$ with ${\rm Soc}(G)=A_v$ in~\cite{Liang2}. Our contribution is the case where ${\rm Soc}(G)={\rm PSL}(n,q)$ for some $n\geq 3$ and $q$ a prime power.
In contrast to the cases considered previously,
an infinite family of examples occurs, which may be obtained from the following
general construction method for flag-transitive designs from linear spaces.
\begin{construction}\label{cons}
For a $2$-$(v,k,1)$ design $\mathcal{S=(P,L)}$ with $k\geq 3$, let
\[
\mathcal{B}=\{\ell\setminus\{\alpha\}\,\mid\,\ell\in \mathcal{L},\,\alpha\in\ell\}
\]
and $\mathcal{D(S)}=(\mathcal{P},\mathcal{B})$.
\end{construction}
We show in Proposition \ref{exist 1}
that $\mathcal{D(S)}$ is a $2$-$(v,k-1,k-2)$ design, and
moreover, that $\mathcal{D(S)}$ is $G$-flag-transitive whenever $G\leq{\rm Aut}(\mathcal{S})$ is flag-transitive on $\mathcal{S}$ and induces a 2-transitive action on each line of $\mathcal{S}$. In particular, these conditions hold if $\cal S$ is the design of points and lines of ${\rm PG}(n-1,3)$, for some $n\geq 3$, and ${\rm Soc}(G)={\rm PSL}(n,3)$ (Proposition~\ref{exist 1}). Apart from these designs, our analysis shows that there is only one other $G$-flag-transitive $2$-$(v,k,2)$ design with ${\rm Soc}(G)={\rm PSL}(n,q)$, $n\geq3$.
\begin{theorem}\label{Th2}
Let $\mathcal{D}$ be a $2$-$(v,k,2)$ design admitting a flag-transitive group $G$ of automorphisms, such that ${\rm Soc}(G)={\rm PSL}(n,q)$ for some $n\geq3$ and prime power $q$. Then either
\begin{enumerate}[(a)]
\item $\mathcal{D}=\mathcal{D}(\cal S)$ is as in Construction~$\ref{cons}$, where $\cal S$ is the design of points and lines of ${\rm PG}(n-1,3)$; or
\item $\mathcal{D}$ is the complement of the Fano plane (that is, blocks are the complements of the lines of ${\rm PG}(2,2)$).
\end{enumerate}
\end{theorem}
The designs in part (a) are non-symmetric (Proposition~\ref{exist 1}), while the complement of the Fano plane is symmetric, and arises also in Regueiro's classification \cite[Theorem 1]{Biplane2} (noting that the group ${\rm PSL}(3,2)$ is isomorphic to the group ${\rm PSL}(2,7)$ in her result).
The proofs of Theorems \ref{Th1} and \ref{Th2} will be given in Sections \ref{sec3} and \ref{sec4}, respectively.
\section{Preliminaries}\label{sec2}
We first collect some useful results on flag-transitive designs and groups of Lie type.
\begin{lemma} \label{condition 1}
Let $\mathcal{D}$ be a $2$-$(v,k,\lambda)$ design and let $b$ be the number of blocks of $\mathcal{D}$.
Then the number of blocks containing each point of $\mathcal{D}$ is a constant $r$ satisfying the following:
\begin{enumerate}
\item[\rm(i)] $r(k-1)=\lambda(v-1)$;
\item[\rm(ii)] $bk=vr$;
\item[\rm(iii)] $b\geq v$ and $r\geq k$;
\item[\rm(iv)] $r^2>\lambda v$.
\end{enumerate}
In particular, if $\mathcal{D}$ is non-symmetric then $b>v$ and $r>k$.
\end{lemma}
\par\noindent{\sc Proof.~}
Parts~(i) and~(ii) follow immediately by simple counting.
Part~(iii) is Fisher's Inequality \cite[p.99]{Ryser}.
By~(i) and~(iii) we have
\[
r(r-1)\geq r(k-1)=\lambda(v-1)
\]
and so $r^2\geq\lambda v+r-\lambda$.
Since $\mathcal{D}$ is nontrivial, we deduce from (i) that $r>\lambda$.
Hence $r^2>\lambda v$, as stated in part~(iv).
\qed
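For instance, for a $2$-$(16,6,2)$ design (the parameters appearing in Theorem~\ref{Th1}(i)), parts~(i) and~(ii) give
\[
r=\frac{2(16-1)}{6-1}=6=k \quad\text{and}\quad b=\frac{vr}{k}=\frac{16\cdot 6}{6}=16=v,
\]
so any such design is necessarily symmetric, while part~(iv) reads $36>32$.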
For a permutation group $G$ on a set $\mathcal{P}$ and an element $\alpha$ of $\mathcal{P}$, denote by $G_\alpha$ the stabiliser of $\alpha$ in $G$, that is, the subgroup of $G$ fixing $\alpha$. A \emph{subdegree} $s$ of a transitive permutation group $G$ is the length of some orbit of $G_\alpha$. We say that $s$ is \emph{non-trivial} if the orbit is not $\{\alpha\}$, and $s$ is \emph{unique} if $G_\alpha$ has only one orbit of size $s$.
\begin{lemma} \label{condition 2}
Let $\mathcal{D}$ be a $2$-$(v,k,\lambda)$ design, let $G$ be a flag-transitive subgroup of ${\rm Aut}(\mathcal{D})$, and let $\alpha$ be a point of $\mathcal{D}$.
Then the following statements hold:
\begin{enumerate}
\item[\rm(i)] $|G_\alpha|^3>\lambda |G|$;
\item[\rm(ii)] $r$ divides $\gcd(\lambda(v-1),|G_{\alpha}|)$;
\item[\rm(iii)] $r$ divides $\lambda\gcd(v-1,|G_{\alpha}|)$;
\item[\rm(iv)] $r$ divides $s\gcd(r,\lambda)$ for every nontrivial subdegree $s$ of $G$.
\end{enumerate}
\end{lemma}
\par\noindent{\sc Proof.~}
By Lemma~\ref{condition 1} we have $r^2>\lambda v$.
Moreover, the flag-transitivity of $G$ implies that $v=|G|/|G_\alpha|$ and $r$ divides $|G_\alpha|$, and in particular, $|G_\alpha|\geq r$.
It follows that
\[
|G_\alpha|^2\geq r^2>\lambda v=\frac{\lambda|G|}{|G_\alpha|}
\]
and so $|G_\alpha|^3>\lambda|G|$.
This proves statement~(i).
Since $r$ divides $r(k-1)=\lambda(v-1)$ and $r$ divides $|G_\alpha|$, we conclude that
$r$ divides
\begin{equation}\label{1}
\gcd(\lambda(v-1),|G_{\alpha}|),
\end{equation}
as statement~(ii) asserts. Note that the quantity in~\eqref{1} divides
\[
\gcd(\lambda(v-1),\lambda|G_{\alpha}|)=\lambda\gcd(v-1,|G_{\alpha}|).
\]
We then conclude that $r$ divides $\lambda\gcd(v-1,|G_\alpha|)$, proving statement~(iii).
Finally, statement~(iv) is proved in \cite[p.91]{Dav1} and \cite{Dav2}.
\qed
For a positive integer $n$ and prime number $p$, let $n_p$ denote the \emph{$p$-part of $n$} and let $n_{p'}$ denote the \emph{$p'$-part of $n$}, that is, $n_p=p^t$ such that $p^t\mid n$ but $p^{t+1}\nmid n$ and $n_{p'}=n/n_p$.
We will denote by $d$ the greatest common divisor of $n$ and $q-1$.
\begin{lemma}\label{bound}
Suppose that $\mathcal{D}$ is a $2$-$(v,k,2)$ design admitting a flag-transitive point-primitive group $G$ of automorphisms with socle $X={\rm PSL}(n,q)$, where $n\geq 3$ and $q=p^f$ for some prime $p$ and positive integer $f$, and $d=\gcd(n,q-1)$. Then for any point $\alpha$ of $\mathcal{D}$ the following statements hold:
\begin{enumerate}
\item[\rm(i)] $|X|<2(df)^2|X_{\alpha}|^3$;
\item[\rm(ii)] $r$ divides $2df|X_{\alpha}|$;
\item[\rm(iii)] if $p\mid v$, then $r_p$ divides $2$, $r$ divides $2df|X_{\alpha}|_{p'}$, and $|X|<2(df)^2|X_{\alpha}|^2_{p'}|X_{\alpha}|$.
\end{enumerate}
\end{lemma}
\par\noindent{\sc Proof.~}
Since $G$ is point-primitive and $X$ is normal in $G$, the group $X$ is transitive on the point set. Hence $G=XG_\alpha$ and so
\[
\frac{|G_\alpha|}{|X_\alpha|}=\frac{|G_\alpha|}{|X\cap G_\alpha|}=\frac{|XG_\alpha|}{|X|}=\frac{|G|}{|X|}.
\]
Moreover, as ${\rm Soc}(G)=X={\rm PSL}(n,q)$, we have $G\leq{\rm Aut}(X)$. Hence $|G_\alpha|/|X_\alpha|=|G|/|X|$ divides $|\Out(X)|=2df$.
Consequently, $|G_\alpha|/|X_\alpha|\leq2df$.
Since Lemma \ref{condition 2}(i) yields
\[
|G_\alpha|^3>2|G|=\frac{2|X||G_\alpha|}{|X_\alpha|},
\]
it follows that
\[
2|X|<|X_\alpha||G_\alpha|^2=\left(\frac{|G_\alpha|}{|X_\alpha|}\right)^2|X_\alpha|^3\leq (2df)^2|X_\alpha|^3.
\]
This leads to statement~(i).
Since $|G_\alpha|/|X_\alpha|$ divides $|\Out(X)|=2df$ and the flag-transitivity of $G$ implies that $r$ divides $|G_\alpha|$, we derive that $r$ divides $2df|X_{\alpha}|$, as in statement~(ii).
Now suppose that $p$ divides $v$.
Then the equality $2(v-1)=r(k-1)$ implies that $r_p$ divides $2$.
As a consequence of this and part (ii) we see that
$r$ divides $2df|X_{\alpha}|_{p'}$.
Since $r^2>2v$ by Lemma~\ref{condition 1}(iv), and $v=|X|/|X_{\alpha}|$ by the point-transitivity of $X$, it then follows that
\[
(2df|X_{\alpha}|_{p'})^2>2v=\frac{2|X|}{|X_\alpha|}.
\]
This implies that $2(df)^2|X_{\alpha}|^2_{p'}|X_{\alpha}|>|X|$, completing the proof of part (iii).
\qed
\begin{lemma}\label{L:subgroupdiv}
Suppose that $\mathcal{D}$ is a $2$-$(v,k,2)$ design admitting a flag-transitive point-primitive group $G$ of automorphisms with socle $X={\rm PSL}(n,q)$, where $n\geq 3$ and $q=p^f$ for some prime $p$ and positive integer $f$, and $d=\gcd(n,q-1)$. Let $\alpha$ and $\beta$ be distinct points of $\mathcal{D}$, and suppose $H\leq G_{\alpha,\beta}$. Then $r$ divides $4df|X_\alpha|/|H|$.
\end{lemma}
\par\noindent{\sc Proof.~} By Lemma~\ref{condition 2}(iv), $r$ divides $2|\beta^{G_\alpha}|=2|G_\alpha|/|G_{\alpha,\beta}|$. Since
$|{G}_\alpha|$ divides $2df|{X}_\alpha|$ (see the proof of Lemma \ref{bound}) and $|H|$ divides $|G_{\alpha,\beta}|$,
it follows that $r$ divides $4df|X_\alpha|/|H|$.
\qed
We will need the following results on finite groups of Lie type.
\begin{lemma}\label{parabolic} Suppose that $\mathcal{D}$ is a $2$-$(v,k,2)$ design admitting a flag-transitive point-primitive group $G$ of automorphisms with socle $X={\rm PSL}(n,q)$, where $n\geq 3$ and $q=p^f$ for some prime $p$ and positive integer $f$, and $r$ is the number of blocks incident with a given point. Let $\alpha$ be a point of $\mathcal{D}$.
Suppose that $X_\alpha$ has a normal subgroup $Y$, which is a finite simple group of
Lie type in characteristic $p$, and $Y$ is not isomorphic to $\mathrm{A}_{5}$ or $\mathrm{A}_{6}$ if $p=2$.
If $r_p\mid 2_p$, then $r$ is divisible by the index of
a proper parabolic subgroup of $Y$.
\end{lemma}
\par\noindent{\sc Proof.~}
Since $G$ is flag-transitive, we have $r=|G_\alpha|/|G_{\alpha,B}|$, where
$B$ is a block through $\alpha$. Since $X_\alpha\unlhd G_\alpha$, $|X_\alpha|/|X_{\alpha,B}|$ divides $r$.
Now since $Y\unlhd X_\alpha$, we also have that $|Y|/|Y_{B}|$ divides $r$. Let $H:=Y_{B}$. Since $r_{p}\mid 2_{p}$, we have that $|Y{:}H|_{p}\leq 2_{p}$. We claim that $H$ is contained in a proper parabolic subgroup of $Y$.
First assume $|Y{:}H|_{p}=1$. Then by \cite[Lemma 2.3]{Saxl},
$H$ is contained in a proper parabolic subgroup of $Y$. Now suppose $|Y{:}H|_{p}=2$. Then $p=2$ and $4\nmid |Y{:}H|$,
and so by \cite[Lemma 7]{Biplane2}, $H$ is contained in a
proper parabolic subgroup of $Y$. So the claim is proved in both cases.
It follows that $r$ is divisible by the index of a parabolic subgroup of $Y$.
\qed
\begin{lemma}{\rm (\cite[Lemma 4.2, Corollary 4.3]{Alavi})}\label{eq2}
Table~$\ref{tab1}$ gives upper bounds and lower bounds for the orders of certain $n$-dimensional classical groups defined over a field of order $q$, where $n$ satisfies the conditions in the last column.
\begin{table}[h]
\begin{center}
\renewcommand\arraystretch{1.9}
\caption {Bounds for the order of some classical groups}
\label{tab1}
\vspace{5mm}
\begin{tabular}{l|l|l|c}
\hline
Group $G$& Lower bound on $|G|$ & Upper bound on $|G|$& Conditions on $n$ \\
\hline
${\rm GL}(n,q)$ &$>(1-q^{-1}-q^{-2})q^{n^2}$ &$\leq(1-q^{-1})(1-q^{-2})q^{n^2}$ &$n\geq 2$ \\
${\rm PSL}(n,q)$ &$>q^{n^2-2}$ &$\leq(1-q^{-2})q^{n^2-1}$ &$n\geq 2$ \\
${\rm GU}(n,q)$ &$\geq(1+q^{-1})(1-q^{-2})q^{n^2}$ &$\leq(1+q^{-1})(1-q^{-2})(1+q^{-3})q^{n^2}$ & $n\geq 2$ \\
${\rm PSU}(n,q)$ &$>(1-q^{-1})q^{n^2-2}$ &$\leq(1-q^{-2})(1+q^{-3})q^{n^2-1}$ &$n\geq 3$ \\
${\rm Sp}(n,q)$&$>(1-q^{-2}-q^{-4})q^{\frac{1}{2}n(n+1)}$ & $\leq(1-q^{-2})(1-q^{-4})q^{\frac{1}{2}n(n+1)}$ & $n\geq 4$ \\
\hline
\end{tabular}
\end{center}
\end{table}
\end{lemma}
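As a quick check of Table~\ref{tab1}, take $G={\rm PSL}(3,2)$, of order $168$: the second row gives
\[
q^{n^2-2}=2^{7}=128<168\leq(1-q^{-2})q^{n^2-1}=\tfrac{3}{4}\cdot 2^{8}=192.
\]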
We finish this section with an arithmetic result.
\begin{lemma}{\rm (\cite[Lemma 1.13.5]{Low})}\label{gcd}
Let $p$ be a prime, let $n, e$ and $f$ be positive integers
such that $n>1$ and $e\mid f$,
and let $q_{0}=p^e$ and $q=p^f$.
Then
\begin{enumerate}
\item[\rm(i)]
$\displaystyle{
\frac{q-1}{{\rm lcm}(q_0-1,(q-1)/\gcd(n,q-1))}=\gcd\left(n,\frac{q-1}{q_0-1}\right);
}$
\item[\rm(ii)]
$\displaystyle{
\frac{q+1}{{\rm lcm}(q_0+1,(q+1)/\gcd(n,q+1))}=\gcd\left(n,\frac{q+1}{q_0+1}\right);
}$
\item[\rm(iii)] If $f$ is even, then $q^{1/2}=p^{f/2}$ and \quad
$\displaystyle{
\frac{q-1}{{\rm lcm}(q^{1/2}+1,(q-1)/\gcd(n,q-1))}=\gcd\left(n,q^{1/2}-1\right).
}$
\end{enumerate}
\end{lemma}
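For instance, taking $n=3$, $q_0=2$ and $q=4$ in part~(i), we have $\gcd(n,q-1)=3$ and
\[
\frac{q-1}{{\rm lcm}(q_0-1,(q-1)/\gcd(n,q-1))}=\frac{3}{{\rm lcm}(1,1)}=3=\gcd\left(3,\frac{4-1}{2-1}\right).
\]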
\section{Proof of Theorem \ref{Th1}}\label{sec3}
Let $\cal D =(\mathcal{P}, \cal B)$ be a $2$-$(v,k,2)$ design admitting
a flag-transitive group $G$ of automorphisms.
If $G$ is point-primitive, then by~\cite{Liang1} and~\cite{Re2005},
$G$ is of affine or almost simple type. Thus we may assume that $G$
leaves invariant a non-trivial partition
$\mathcal{C}=\{\Delta_1,\Delta_2,\dots,\Delta_y\}$ of $\mathcal{P}$,
where
\begin{equation}\label{Eq5}
v=xy,
\end{equation}
with $1<y<v$ and $|\Delta_i|=x$ for each $i$.
If $(v,k)=(16,6)$ then by Lemma~\ref{condition 1}, it follows that
$\cal D$ is symmetric and hence, in the light of the discussion before
the statement of Theorem \ref{Th1}, in this case Theorem \ref{Th1}(i) holds.
Hence we may assume further that $(v,k)\ne (16,6)$. Our objective now is to derive a contradiction to these assumptions.
Our proof uses the following facts, which can easily be verified by \textsc{Magma} \cite{magma}: each 2-transitive permutation group of degree $2p=10$ or $22$ has a unique conjugacy class of subgroups of index $2p$, and each such group is
almost simple with a unique minimal normal subgroup (its socle), which is nonabelian simple and itself 2-transitive. In fact the socle is one of ${\rm PSL}(2,9)$ or $\mathrm{A}_{10}$ (for degree 10), or $M_{22}$ or $\mathrm{A}_{22}$ (for degree 22).
First we introduce a new parameter $\ell$: let $\alpha\in\mathcal{P}$ and $\Delta\in \mathcal{C}$
such that $\alpha\in\Delta$; choose $B\in \mathcal{B}$ containing $\alpha$,
and let $\ell=|B\cap\Delta|$. It follows from~\cite[Lemma~2.1]{Praeger} that,
for each $B'\in\mathcal{B}$ and $\Delta'\in\mathcal{C}$ such that $B'\cap\Delta'\neq\emptyset$,
the intersection size $|B'\cap\Delta'|=\ell$,
so that $B'$ meets each of exactly $k/\ell$ parts of $\cal C$ in $\ell$ points and is disjoint from the other parts. Moreover,
\begin{equation}\label{Eq6}
\ell\mid k\quad \mbox{and}\ 1<\ell < k.
\end{equation}
(Note that the proof of~\cite[Lemma~2.1]{Praeger}
uses flag-transitivity of $\cal D$, but is valid for all $2$-designs, not only symmetric ones.)
\medskip\noindent
\textit{Claim 1:}\quad $(v,b, r,k, \ell)=(x^2, \frac{2x^2(x-1)}{x+2}, 2x-2, x+2, 2)$, and $x=2p$ with $p\in\{5, 11\}$.
\smallskip\noindent
\textit{Proof of Claim:}\quad Counting the point-block pairs $(\alpha',B')$ with $\alpha'\in\Delta\setminus\{\alpha\}$ and $B'$ containing $\alpha$ and $\alpha'$, we obtain
\begin{equation}\label{Eq7}
2(x-1)=r(\ell-1).
\end{equation}
It follows from~\eqref{Eq5} and Lemma~\ref{condition 1}(i) that
\[
r(k-1)=2(xy-1)=2y(x-1)+2(y-1),
\]which together with~\eqref{Eq7} yields
\begin{equation}\label{Eq8}
r(k-1)=yr(\ell-1)+2(y-1).
\end{equation}
Let $z=k-1-y(\ell-1)$. Then $z$ is an integer and, by \eqref{Eq8}, $rz=2(y-1)>0$
so $z$ is a positive integer and
\begin{equation}\label{Eq9}
y=\frac{rz+2}{2}.
\end{equation}
This in conjunction with~\eqref{Eq8} leads to
\[
r(k-1)+2=y(r(\ell-1)+2)=\frac{(rz+2)(r(\ell-1)+2)}{2}.
\]
Hence
\begin{equation}\label{Eq10}
2(k-\ell-z)=rz(\ell-1).
\end{equation}
Since $k\leq r$ (Lemma~\ref{condition 1}(iii)), we have
\[kz(\ell-1)\leq rz(\ell-1)=2(k-\ell-z)<2k,
\]
and hence $z=1$ and $\ell=2$.
Then~\eqref{Eq7} becomes $r=2x-2$,
and so~\eqref{Eq9} gives $y=x$ (and hence $v=x^2$) and the definition of $z$ gives $k=x+2$.
It then follows from $r\geq k$ that $x\geq 4$, and from~\eqref{Eq6} that $k$, and hence also $x$, is even. Finally by Lemma~\ref{condition 1}(ii),
\[
b=\frac{vr}{k}=\frac{x^2(2x-2)}{x+2} = 2x^2-6x+12 -\frac{24}{x+2},
\]
and hence $(x+2)\mid24$. Therefore, $x=4$, $6$, $10$ or $22$, but since we are assuming that $(v,k)\ne (16,6)$ the parameter $x\ne4$. If $x=6$, then $(v,b,r,k)=(36,45,10,8)$, but one can see from~\cite[II.1.35]{Handbook} that there is no $2$-$(36,8,2)$ design. Thus $x=10$ or $22$, and Claim 1 is proved.\qed
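Explicitly, the two surviving parameter sets are
\[
(v,b,r,k,\ell)=(100,150,18,12,2) \quad\text{for } x=10,
\qquad\text{and}\qquad
(v,b,r,k,\ell)=(484,847,42,24,2) \quad\text{for } x=22.
\]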
\medskip\noindent
\textit{Claim 2:}\quad For $\Delta\in\cal C$, the induced group $G_\Delta^\Delta$ is 2-transitive. Moreover the kernel $K:=G_{(\cal C)}\ne 1$, $\cal C$ is the set of $K$-orbits in $\mathcal{P}$, and $K^\Delta$ and its socle ${\rm Soc}(K)^\Delta$ are $2$-transitive with 2-transitive socle ${\rm PSL}(2,9)$ or $\mathrm{A}_{10}$ for degree 10, and $M_{22}$ or $\mathrm{A}_{22}$ for degree 22.
\smallskip\noindent
\textit{Proof of Claim:}\quad
Since each element of $G$ fixing $\alpha$ stabilises $\Delta$, we have the inclusion ${G_\alpha}\leq G_{\Delta}$. Let $\beta, \gamma$ be arbitrary points in $\Delta\setminus\{\alpha\}$, and consider $B_1\in\mathcal{B}$ containing $\alpha$ and $\beta$, and $B_2\in \mathcal{B}$ containing $\alpha$ and $\gamma$.
Since $G$ is flag-transitive, there exists $h\in G_\alpha$ such that $B_1^h=B_2$, and in particular, $\beta^h\in B_2$. As $\ell=2$ (by Claim 1), each block of $\mathcal{D}$ through $\alpha$ contains exactly one point in $\Delta\setminus\{\alpha\}$. Since $\beta^h\in(\Delta\setminus\{\alpha\})^h=\Delta\setminus\{\alpha\}$, it then follows that $\beta^h=\gamma$. This shows that $G_\alpha$ is transitive on $\Delta\setminus\{\alpha\}$, and hence $G^{\Delta}_{\Delta}$ is 2-transitive and hence primitive.
By Claim 1, each non-trivial block of imprimitivity for $G$ in $\mathcal{P}$ has size $x=\sqrt{v}=2p$ (with $p=5$ or $11$),
and hence the induced permutation group $G^{\mathcal{C}}$ on $\mathcal{C}$ is primitive. Suppose that $K=1$, so $G^{\mathcal{C}}\cong G$.
Since $G$ is point-transitive and $v=4p^2$, it follows that $|G|=|G^{\mathcal{C}}|$ is divisible by $p^2$, and hence $G^{\mathcal{C}}_{\Delta}\cong G_\Delta$ has order divisible by $p$ (since $|G:G_\Delta|=2p$). Thus $G^{\mathcal{C}}_{\Delta}$ contains an element of order $p$ which acts on $\cal C$ as a $p$-cycle fixing $p$ of the parts. Then by a result of Jordan~\cite[Theorem 13.9]{Wie1964} we have $G^{\mathcal{C}}=\mathrm{A}_{2p}$ or $\mathrm{S}_{2p}$ and thus $G_\Delta\cong G^{\mathcal{C}}_{\Delta}=\mathrm{A}_{2p-1}$ or $\mathrm{S}_{2p-1}$. The kernel of the action of $G_\Delta$ on $\Delta$ is normal in $G_\Delta$ and so can only be $1$, $\mathrm{A}_{2p-1}$ or $\mathrm{S}_{2p-1}$. Since $G_\Delta^\Delta$ is transitive of degree $2p>2$, this kernel must be trivial. Hence $G_\Delta\cong G_\Delta^\Delta$
is primitive of degree $2p$ and neither $\mathrm{A}_{2p-1}$ nor $\mathrm{S}_{2p-1}$ has such an action, for $p\in\{5,11\}$.
This contradiction implies that $K\ne 1$.
Since $K\ne 1$ and $K$ is normal in $G$, its orbits are nontrivial blocks of imprimitivity for $G$ in $\mathcal{P}$, and by Claim 1, they must have size $x=2p$. Hence the set of $K$-orbits in $\mathcal{P}$ is the partition $\cal C$.
Since $1\ne {\rm Soc}(K)\unlhd G$ it follows that ${\rm Soc}(K)^\Delta\ne1$ and hence
${\rm Soc}(K)^\Delta$ contains the socle of $G^{\Delta}_{\Delta}$, which is $2$-transitive on $\Delta$ (see above). Therefore ${\rm Soc}(K)^\Delta$ is $2$-transitive, and so also $K^\Delta$ is $2$-transitive. By Burnside's Theorem (see~\cite[Theorem 3.21]{PS}), since $|\Delta|=2p$ is not a prime power,
$G^{\Delta}_{\Delta}$, $K^\Delta$ and ${\rm Soc}(K)^\Delta$ are almost simple with 2-transitive nonabelian simple socle. As mentioned above, these 2-transitive groups must have socle ${\rm PSL}(2,9)$ or $\mathrm{A}_{10}$ for degree 10, and $M_{22}$ or $\mathrm{A}_{22}$ for degree 22, and that socle is also 2-transitive on $\Delta$.
\qed
\medskip\noindent
\textit{Claim 3:}\quad The group $K$ is faithful on $\Delta$, so $K$ is almost simple with nonabelian simple socle.
\smallskip\noindent
\textit{Proof of Claim:}\quad Let $\Delta\in\cal C$ and suppose that $A=K_{(\Delta)}\ne1$. Let $F$ denote the set of fixed points of $A$, so $\Delta\subseteq F$. If $\beta\in F$ and $\beta\in\Delta'\in\cal C$, then since $K$ is transitive on $\Delta'$ (Claim 2) and $A\unlhd K$, it follows that $A$ fixes $\Delta'$ pointwise. Thus $A\leq K_{(\Delta')}$, and since $K_{(\Delta)}, K_{(\Delta')}$ are conjugate in $G$ we have $A= K_{(\Delta')}$. Therefore $F$ is a union of parts of $\cal C$.
If $g\in G$, then $A^g$ has fixed point set $F^g$ and $F^g$ is a union of some parts of $\cal C$.
Thus if $F\cap F^g$ contains a point $\beta$ and $\beta\in\Delta'\in\cal C$, then by the previous paragraph $A= K_{(\Delta')} = A^g$ and so $F=F^g$. It follows that $F$ is a block of imprimitivity for $G$ in $\mathcal{P}$, and $F$ is non-trivial since $A\ne 1$. Thus ${\cal C'}:=\{\ F^g \mid g\in G \}$ is a
non-trivial $G$-invariant partition of $\mathcal{P}$. By Claim 1, $|F|=x$, and since $F$ contains $\Delta$ we conclude that $F=\Delta$. This means that $A^{\Delta'}\ne 1$ for each $\Delta'\in{\cal C}\setminus\{\Delta\}$, and since $K^{\Delta'}$ is $2$-transitive (Claim 2), it follows that $A^{\Delta'}$ is transitive. Now choose $\alpha, \beta\in F=\Delta$ and let $B_1, B_2\in\cal B$ be the two blocks containing $\{\alpha,\beta\}$. Then $A\leq G_{\alpha\beta}$, and $G_{\alpha\beta}$ fixes $B_1\cup B_2$ setwise. By Claim 1, there exists $\Delta'\in{\cal C}\setminus\{\Delta\}$ such that $|B_1\cap \Delta'|=\ell=2$, and $|B_2\cap \Delta'|=0$ or $2$. Thus $(B_1\cup B_2)\cap \Delta'$ has size between 2 and 4 and is fixed setwise by $A$. This is a contradiction since $A$
is transitive on $\Delta'$ and $|\Delta'|=2p\geq 10$. Therefore $A=1$ so $K$ is faithful on $\Delta$. By Claim 2, $K\cong K^\Delta$ is almost simple with nonabelian simple socle. \qed
Since $K$ is 2-transitive of degree $c=2p$, as mentioned above, $K$ has only one conjugacy class of subgroups of index $2p$, and so $K$ has a unique $2$-transitive representation of degree $c$, up to permutational equivalence. It follows that, for $\alpha\in\Delta$, the stabiliser $K_\alpha$ fixes exactly one point in each part of $\cal C$. Let $\beta$ be another point fixed by $K_\alpha$.
Let
$B_1, B_2\in\cal B$ be the two blocks containing $\{\alpha,\beta\}$. By Claim 1,
$|B_i\cap \Delta|=2$ for each $i$ and hence $K_{\alpha\beta}$ fixes setwise $(B_1\cup B_2)\cap
\Delta$, a set of size 2 or 3. On the other hand $K_{\alpha\beta}=K_\alpha$ since $\beta$ is a fixed point of $K_\alpha$, and by Claim 2, $K$ is 2-transitive on $\Delta$, so the $K_\alpha$-orbits in $\Delta$ have sizes $1, c-1$. This final contradiction completes the proof of Theorem~\ref{Th1}.
\qed
\section{Proof of Theorem \ref{Th2}}\label{sec4}
Our first result in this section proves that the designs arising from
Construction \ref{cons} are all $2$-designs, and inherit certain
symmetry properties from those of the input design.
In particular we show that the designs
coming from projective geometries over
a field of three elements give examples for Theorem \ref{Th2}.
\begin{proposition} \label{exist 1}
Let $\mathcal{S=(P,L)}$ be a $2$-$(v,k,1)$ design, $\ell\in\mathcal{L}$ and $G\leq {\rm Aut}(\mathcal{S})$.
\begin{enumerate}
\item[\rm(i)] Then the design $\mathcal{D}(\mathcal{S})$ given in Construction~\ref{cons} is a non-symmetric $2$-$(v,k-1,k-2)$ design and $G$ is a subgroup of ${\rm Aut}(\mathcal{D}(\mathcal{S}))$;
\item[\rm(ii)] Moreover, if $G$ is flag-transitive on $\mathcal{S}$ and $G_{\ell}$ is $2$-transitive on $\ell$, then $G$ is flag-transitive and point-primitive on $\mathcal{D}(\mathcal{S})$;
\item[\rm(iii)] In particular, if $\cal S$ is the design of points and lines of the projective space ${\rm PG}(n-1,3)$ ($n\geq3$), and $G\geq{\rm PSL}(n,3)$, then
$\mathcal{D}(\mathcal{S})$ is a non-symmetric $G$-flag-transitive, $G$-point-primitive $2$-$(v,3,2)$ design.
\end{enumerate}
\end{proposition}
\par\noindent{\sc Proof.~}
Let $\mathcal{D}=\mathcal{D}(\mathcal{S})$ with block set
$\mathcal{B}=\{\ell\setminus\{\alpha\}\,\mid\,\ell\in \mathcal{L},\,\alpha\in\ell\}$,
so $\mathcal{D=(P,B)}$.
Let $\alpha,\beta$ be distinct points of $\mathcal{P}$.
Then there exists a unique line $\ell\in \mathcal{L}$,
such that $\alpha,\beta\in \ell$.
As $|\ell|=k$, exactly
$k-2$ blocks of $\mathcal{B}$ contain $\alpha$ and $\beta$.
Thus, $\mathcal{D}$ is a 2-$(v,k-1,k-2)$ design, which is nontrivial provided that $3<k$.
By Lemma \ref{condition 1} applied to $\mathcal{S}$,
$|\mathcal{L}|\geq v$, and since $|\mathcal{B}|=k|\mathcal{L}|>|\mathcal{L}|$
it follows that $\mathcal{D}$ is not symmetric.
Moreover, for all $B=\ell\setminus\{\alpha\}\in \mathcal{B}$
and for all $g\in G\leq {\rm Aut}(\mathcal{S})$,
we have $\ell^{g}\in \mathcal{L}$ and $\alpha^{g}\in \ell^{g}$,
and so $B^{g}=(\ell \backslash \{\alpha\})^{g}=\ell^{g}\backslash \{\alpha^{g}\}\in \mathcal{B}$. Thus, $G\leq {\rm Aut}(\mathcal{D})$ and part (i) is proved.
Now assume that $G$ is flag-transitive on $\mathcal{S}$ and $G_{\ell}$ is
$2$-transitive on $\ell$. Let $\alpha\in \ell$ and $B=\ell\setminus\{\alpha\}$.
From the flag-transitivity of $G$, we know that $G$ acts
primitively on the point set $\mathcal{P}$ by~\cite[Propositions 1--3]{HM},
and $G$ acts transitively on the block set $\mathcal{B}$ of $\mathcal{D}$. Furthermore,
$G_{\ell,\alpha}\leq G_{B}$.
Since $G_{\ell}$ is 2-transitive on $\ell$, $G_{\ell,\alpha}$ is transitive on $B$.
Hence $G_{B}$ is transitive on $B$, and so
$G$ is flag-transitive on $\mathcal{D}$ and part (ii) is proved.
In the special case where $\cal S$ is the design of points and lines of the projective space ${\rm PG}(n-1,3)$ ($n\geq3$), and $H={\rm PSL}(n,3)$, $H$ is flag-transitive on $\cal S$ and $H_\ell$ induces the $2$-transitive group ${\rm PGL}(2,3)\cong \mathrm{S}_4$ on $\ell$. Thus part (iii) follows from parts (i) and (ii) for any group $G$ such that $H\leq G\leq {\rm Aut}(H)$.
\qed
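To illustrate part~(iii) in the smallest case $n=3$, let $\cal S$ be the design of points and lines of ${\rm PG}(2,3)$, so that $v=13$, $|\mathcal{L}|=13$ and every line has $4$ points. Then $\mathcal{D}(\cal S)$ has $b=4\cdot13=52$ blocks of size $3$, each point lies in $r=3\cdot 4=12$ blocks (three for each of the four lines through it), and each pair of points lies in exactly $4-2=2$ blocks, in agreement with $r(k-1)=2(v-1)=24$.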
\subsection{Broad proof strategy and the natural projective action}
In the remainder of the paper we assume the following hypothesis:
\begin{hypothesis}\label{H}
Let $\mathcal{D}=(\mathcal{P},\cal B)$ be a $2$-$(v,k,2)$ design admitting a flag-transitive point-primitive group $G$ of automorphisms with socle $X={\rm PSL}(n,q)$ for some $n\geq 3$, where $q=p^f$ with prime $p$ and positive integer $f$.
\end{hypothesis}
Observe that $G\cap {\rm P\Gamma L}(n,q)$ has a natural projective action on a vector space $V$ of dimension $n$ over the field $\mathbb{F}_q$. Consider a point $\alpha$ of $\mathcal{D}$ and a basis $v_{1},v_{2},\ldots,v_{n}$ of the vector space $V$. Since $G$ is primitive on $\mathcal{P}$, the stabiliser $G_\alpha$ is maximal in $G$, and so by Aschbacher's Theorem~\cite{Asch} (see also~\cite{PB}), $G_\alpha$ lies in
one of the geometric subgroup families $\mathcal{C}_i$ ($1\leq i \leq 8$),
or in the family $\mathcal{C}_9$ of almost simple subgroups not contained in any of these families.
When investigating the subgroups in the Aschbacher families, we make frequent use of the information
on their structures in~\cite[Chap. 4]{PB}.
We will sometimes use the symbol $\tilde{H}$ to indicate that we are giving the structure of the pre-image of $H$ in the corresponding (semi)linear group.
In the next proposition we treat the case where $\mathcal{P}$ is the point set of the projective space ${\rm PG}(n-1,q)$ associated with $V$.
\begin{proposition} \label{4}
Assume Hypothesis~$\ref{H}$, and that $\mathcal{P}$ is the point set of
the projective space ${\rm PG}(n-1,q)$, with $G$ acting naturally on $\mathcal{P}$.
Then either
\begin{enumerate}[(a)]
\item $q=3$, $k=3$, $v=(3^{n}-1)/2$ and $\mathcal{D}=\mathcal{D}(\cal S)$ from Construction~$\ref{cons}$, where $\cal S$ is the design of points and lines of ${\rm PG}(n-1,3)$; or
\item $q=2$, $k=4$, $v=2$ and $\mathcal{D}$ is the complement of the Fano plane (that is, blocks are the complements of the lines in ${\rm PG}(2,2)$).
\end{enumerate}
\end{proposition}
\par\noindent{\sc Proof.~}
Let $\alpha,\beta$ be distinct points. Since $\lambda=2$,
there are exactly two blocks $B_{1}$ and $B_{2}$ containing $\alpha$ and $\beta$.
Moreover, $G_{\alpha\beta}$ fixes $B_{1}\cup B_{2}$ setwise,
so $B_{1}\cup B_{2}$ is a union of $G_{\alpha,\beta}$-orbits.
Let $\ell$ be the unique projective line containing $\alpha$ and $\beta$.
Then $G_{\alpha,\beta}$ is transitive on the $v-(q+1)$ points $\mathcal{P}\backslash\ell$ and on $\ell\backslash\{\alpha,\beta\}.$
Hence, either
\begin{enumerate}
\item[1.] $(B_{1}\cup B_{2})\backslash\{\alpha,\beta\}\supseteq \mathcal{P}\backslash\ell$, or
\item[2.] $B_{1}\cup B_{2}=\ell$.
\end{enumerate}
Suppose first that $(B_{1}\cup B_{2})\backslash\{\alpha,\beta\}\supseteq \mathcal{P}\backslash\ell$.
Then $2k-2\geq |B_{1}\cup B_{2}|\geq 2+v-(q+1)$, that is $k-1\geq (v-q+1)/2$.
Now $r(k-1)=2(v-1)$ (Lemma~\ref{condition 1}) and $v=(q^{n}-1)/(q-1)$,
so that
\begin{equation}\label{Eq11}
r=\frac{2(v-1)}{k-1}\leq\frac{4(v-1)}{v-q+1}=4\cdot \left(1+\frac{q-2}{q^{n-1}+\cdots+q^{2}+2}\right)<8.
\end{equation}
Since $r\geq k$, we have that $k\leq 7$. Now combining this with $k-1\geq (v-q+1)/2$,
we have that $12\geq 2(k-1)\geq q^{n-1}+\cdots+q^{2}+2$.
If $n\geq 4$, then $12\geq q^{n-1}+\cdots+q^{2}+2\geq q^3+q^{2}+2\geq 2^3+2^{2}+2=14$, a contradiction. So $n=3$ and $12\geq q^{2}+2$, which implies that $q\leq 3$. If $q=3$, then $v=13$, and $6\geq k-1\geq (v-q+1)/2$ implies that $k=7$. Now $r(k-1)=2(v-1)$ implies that $r=4$, contradicting $r\geq k$.
Hence $(n,q)=(3,2)$.
Then $v=7$, $k-1\geq 3$, and $r=2(v-1)/(k-1)\leq4$, and so $r\leq 4\leq k$. Since $r\geq k$, we get that $r=k=4$, and thus
$b=(vr)/k=7$.
Thus, $\mathcal{D}$ is a symmetric $2$-$(7,4,2)$ design with $X={\rm PSL}(3,2)$.
Since $k=4$, and $G_B$ is transitive on the block $B$,
it follows that $B$ does not contain a line of ${\rm PG}(2,2)$.
The only possibility is that $B=\mathcal{P}\backslash \ell'$,
where $\ell'$ is a line of ${\rm PG}(2,2)$,
that is, the blocks are complements of the lines of ${\rm PG}(2,2)$.
Hence $\mathcal{D}$ is the complement of the Fano projective plane and (b) holds.
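One can check directly that this design has the required parameters: of the $7$ lines of ${\rm PG}(2,2)$, exactly $3+3-1=5$ pass through at least one of two given points, so precisely $2$ of the $7$ four-point complements of lines contain both points, and indeed $\lambda=2$.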
Now assume that $B_{1}\cup B_{2}=\ell$. Since $G$ is $2$-transitive on $\mathcal{P}$, the same holds for every pair of distinct points, and hence every block is contained in a line of the projective space.
We get $2k-2\geq|B_{1}\cup B_{2}|=q+1$, while $q+1=|\ell|>|B_{i}|=k$.
Hence $q>k-1\geq(q+1)/2>q/2$.
Assume that there are $s$ blocks of $\mathcal{D}$ through $\alpha$
contained in the projective line $\ell$.
Since $G$ acts flag-transitively on the projective space ${\rm PG}(n-1,q)$,
for any projective line $\ell'$ and any point $\alpha'\in\ell'$,
there are $s$ blocks containing $\alpha'$
that are contained in $\ell'$.
Since for any two distinct points,
there is a unique projective line containing them,
the sets of blocks on $\alpha$ that are contained in
distinct lines $\ell,~\ell'$ through $\alpha$ are disjoint.
Note that there are $(q^{n-1}-1)/(q-1)$
projective lines through $\alpha$,
so the number of blocks through $\alpha$
is $r=s(q^{n-1}-1)/(q-1)$.
As $r(k-1)=2(v-1)$, it follows that
$s(k-1)(q^{n-1}-1)/(q-1)=2((q^{n}-1)/(q-1)-1)$,
so $s(k-1)=2q$.
Then it follows from $q>k-1> q/2$
that $1>2/s>1/2$, and so $s=3$.
Thus there are 3 blocks through $\alpha$ contained in $\ell$,
and $k-1=2q/3$, so $q=3^{f}$ for some $f$,
and $k=2\cdot3^{f-1}+1$.
Assume that there are $c$ blocks of $\mathcal{D}$ contained in the projective line $\ell$.
Since $G$ acts transitively on the projective lines, for any projective line $\ell'$,
there are $c$ blocks contained in $\ell'$. Now, counting the number of flags $(\gamma,B)$ in two ways, where $\gamma\in \ell$ and $B\subseteq \ell$ for a fixed line $\ell$,
we have that $3(q+1)=ck$, so $3(3^{f}+1)=c(2\cdot 3^{f-1}+1)$, which can be rewritten as $3^{f-1}(9-2c)=c-3$. Suppose
$f\geq2$. Then $3$ divides $c$: when $c=3$, the equation cannot hold, and when $c\geq 6$ the left hand side is negative while the right hand side is positive. Hence $f=1$, $q=3$, $k=3$, and $c=4$.
Therefore, the blocks contained in $\ell$ are all the sets $\ell\backslash\{\gamma\}$,
for $\gamma\in \ell$,
and this implies that $\mathcal{B}=\{\ell\backslash\{\gamma\}\,|\,\ell\in \mathcal{L}, \gamma\in \ell\}$.
Therefore, $\mathcal{D}=\mathcal{D}(\cal S)$ is the design in Construction \ref{cons}, where $\cal S$ is the design of points and lines of ${\rm PG}(n-1,3)$.
\qed
\medskip
In what follows, we analyse each of the families $\mathcal{C}_{1}$--$\mathcal{C}_{9}$ for $G_\alpha$.
\subsection{$\mathcal{C}_{1}$-subgroups}\label{sec3.1.1}
In this analysis we repeatedly use the Gaussian binomial coefficient $\genfrac{[}{]}{0pt}{}{m}{i}_q$ for the number of $i$-spaces in an $m$-dimensional space $\mathbb{F}_q^m$, where $0\leq i\leq m$. A straightforward argument counting bases of $\mathbb{F}_q^m$ and its subspaces shows that, for $i\geq1$,
\[
\genfrac{[}{]}{0pt}{}{m}{i}_q= \frac{(q^m-1)(q^m-q)\cdots(q^{m}-q^{i-1})}{(q^i-1)(q^i-q)\cdots(q^i-q^{i-1})}
= \frac{\prod_{j=1}^i(q^{m-i+j}-1)}{\prod_{j=1}^i(q^j-1)} = \prod_{j=1}^i \frac{q^{m-i+j}-1}{q^{j}-1}.
\]
We use this equality without further comment. We also use the facts that $\genfrac{[}{]}{0pt}{}{m}{i}_q=\genfrac{[}{]}{0pt}{}{m}{m-i}_q$, that the number of complements in
$\mathbb{F}_q^m$ of a given $i$-space is $q^{i(m-i)}$, and hence that the number of decompositions $U\oplus W$ of $\mathbb{F}_q^m$ with $\dim(U)=i$ is $\genfrac{[}{]}{0pt}{}{m}{i}_q\cdot q^{i(m-i)}$.
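For example, in $\mathbb{F}_2^{4}$ there are
\[
\genfrac{[}{]}{0pt}{}{4}{2}_2=\frac{(2^{3}-1)(2^{4}-1)}{(2-1)(2^{2}-1)}=35
\]
$2$-spaces, and hence $35\cdot 2^{4}=560$ decompositions $\mathbb{F}_2^{4}=U\oplus W$ with $\dim(U)=2$.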
\begin{lemma}\label{c1'}
Assume Hypothesis~$\ref{H}$. If the point-stabiliser $G_\alpha\in\mathcal{C}_{1}$,
then $G_\alpha$ is the stabiliser in $G$ of an $i$-space and $G\leq{\rm P\Gamma L}(n,q)$.
\end{lemma}
\par\noindent{\sc Proof.~}
If $G\leq {\rm P\Gamma L}(n,q)$ then $G_\alpha$ is the stabiliser in $G$ of an $i$-space, for some $i$, so assume
that $G\nleq{\rm P\Gamma L}(n,q)$. Then $G$ contains a graph automorphism of ${\rm PSL}(n,q)$, so in particular $n\geq3$,
and $G_{\alpha}$ stabilizes a pair $\{U,W\}$
of subspaces $U$ and $W$, where $U$ has dimension $i$ and $W$ has dimension $n-i$ with $1\leq i< n/2$.
It follows that $G^*:=G\cap{\rm P\Gamma L}(n,q)$ has index $2$ in $G$. Moreover, either $U\subseteq W$ or $U\cap W=0$.
\textbf{Case~1:} $U \subset W$.
In this case, $v$ is the number $\genfrac{[}{]}{0pt}{}{n}{n-i}_q$ of $(n-i)$-spaces $W$ in $V$, times the number $\genfrac{[}{]}{0pt}{}{n-i}{i}_q$ of $i$-spaces $U$ in $W$, so
\begin{align*}
v
&=\genfrac{[}{]}{0pt}{}{n}{n-i}_q\cdot\genfrac{[}{]}{0pt}{}{n-i}{i}_q = \prod_{j=1}^{n-i} \frac{q^{n-(n-i)+j}-1}{q^{j}-1}\cdot
\prod_{j=1}^i \frac{q^{(n-i)-i+j}-1}{q^{j}-1}\\
&=\left. \prod_{j=1}^{n-i} (q^{i+j}-1) \right/ {\left(\prod_{j=1}^{n-2i} (q^{j}-1)\cdot \prod_{j=1}^i (q^{j}-1)\right)}\\
&= \prod_{j=1}^{i} \frac{q^{i+j}-1}{q^{j}-1} \cdot \prod_{j=1}^{n-2i} \frac{q^{2i+j}-1}{q^{j}-1}.\\
\end{align*}
Then, using the fact that $q^m-1>q^{m-j}(q^j-1)$, for integers $1\leq j<m$,
\[
v > \prod_{j=1}^i q^{i} \cdot \prod_{j=1}^{n-2i} q^{2i} = q^{i^2 + 2i(n-2i)} = q^{i(2n-3i)}.
\]
Consider the following points of $\mathcal{D}$: $\alpha = \{U,W\}$, where $W=\langle v_{1},v_{2},\ldots, v_{n-i}\rangle$ and $U= \langle v_{1},v_{2},\ldots, v_{i}\rangle$, and $\beta = \{U',W\}$, where $U'= \langle v_{1},v_{2},\ldots, v_{i-1}, v_{i+1}\rangle$. Then the $G^*_\alpha$-orbit $\Delta$ containing $\beta$ consists of all the points $\{U'',W\}$
such that the $i$-space $U''\subset W$ and $\dim(U\cap U'')=i-1$. Thus the cardinality $|\Delta|$
is the number $\genfrac{[}{]}{0pt}{}{i}{i-1}_q$ of $(i-1)$-spaces $U\cap U''$ in $U$, times the number $\genfrac{[}{]}{0pt}{}{n-2i+1}{1}_q-1$ of $1$-spaces in $W/(U\cap U'')$ distinct from $U/(U\cap U'')$.
Therefore, since $\genfrac{[}{]}{0pt}{}{i}{i-1}_q=\genfrac{[}{]}{0pt}{}{i}{1}_q$,
\[
|G^*_\alpha:G^*_{\alpha\beta}| = |\Delta| = \genfrac{[}{]}{0pt}{}{i}{1}_q\cdot \left(\genfrac{[}{]}{0pt}{}{n-2i+1}{1}_q-1\right) =
\frac{q^i-1}{q-1}\cdot \frac{q(q^{n-2i}-1)}{q-1}.
\]
Note that $G_\alpha$ contains a graph automorphism, and each such graph automorphism interchanges $U$ and $W$, and hence does not leave $\Delta$ invariant. Thus the $G_\alpha$-orbit containing $\beta$ has cardinality $2|\Delta|$ (a subdegree of $G$), so by Lemma~\ref{condition 2}(iv), $r$ divides
\[
4|\Delta|=\frac{4q(q^{n-2i}-1)(q^i-1)}{(q-1)^2}.
\]
Note that $(q^j-1)/(q-1)<2q^{j-1}$ for each integer $j>0$.
It follows that
\[
r\leq \frac{4q(q^{n-2i}-1)(q^i-1)}{(q-1)^2}
<4q\cdot2q^{n-2i-1}\cdot2q^{i-1}=16q^{n-i-1}.
\]
Combining this with $r^2>2v$ and $v>q^{i(2n-3i)}$,
we see that
$16^2q^{2(n-i-1)}>2q^{i(2n-3i)}$,
that is,
\begin{equation}\label{Eq12}
2^7>q^{2(i-1)n-3i^2+2i+2}\geq 2^{2(i-1)n-3i^2+2i+2}.
\end{equation}
Since $n>2i$, it follows that $2(i-1)n-3i^2+2i+2 > 4i(i-1)-3i^2+2i+2 = i^2-2i+2$,
and so $i^2-2i-5<0$, which implies $i\leq 3$.
\textbf{Subcase~1.1:} $i=3$.
Then $n>2i=6$. From \eqref{Eq12} we have $2^7>q^{4n-19}\geq 2^{4n-19}$,
which implies $n\leq 6$, a contradiction.
\textbf{Subcase~1.2:} $i=2$.
Then $n>4$. From \eqref{Eq12} we have $2^7>q^{2n-6}\geq 2^{2n-6}$,
which implies $n=5$ or $6$.
Then $r\mid 4q(q+1)^{n-4}$ (for $n=5$ or $6$) and $v>q^{4n-12}$.
Combining this with $r^2>2v$, we deduce $16q^2(q+1)^{2n-8}>2q^{4n-12}$,
that is, $8(q+1)^{2n-8}>q^{4n-14}$. For $n=6,$ this gives $8(q+1)^4>q^{10}$, which is impossible. Thus $n=5$ and $8(q+1)^2>q^6$, so $q=2$ and $v=5\cdot7\cdot31$. On the one hand $r\mid 24$ and on the other hand the condition $r^2>2v$ implies $r\geq 47$, a contradiction.
\textbf{Subcase~1.3:} $i=1$.
Then $n>2$, $r$ divides $4q(q^{n-2}-1)/(q-1)$, and
\[
v=\frac{(q^{n}-1)(q^{n-1}-1)}{(q-1)^2}.
\]
Combining this with the condition $r\mid 2(v-1)$,
we seee that $r$ divides
\begin{align*}
R:&=\gcd\left(2(v-1),\frac{4q(q^{n-2}-1)}{q-1}\right)\\
&=2\gcd\left(\frac{(q^{n}-1)(q^{n-1}-1)}{(q-1)^2}-1,\frac{2q(q^{n-2}-1)}{q-1}\right),\\
&=\frac{2q}{(q-1)^2}\cdot
\gcd\left(q^{2n-2}-q^{n-1}-q^{n-2}-q+2,2(q-1)(q^{n-2}-1)\right).
\end{align*}
Since
\[
(q^{2n-2}-q^{n-1}-q^{n-2}-q+2)-(q-1)^2=(q^n+q^2-q-1)(q^{n-2}-1)
\]
is divisible by $(q-1)(q^{n-2}-1)$, we see that
\[
\gcd\left(q^{2n-2}-q^{n-1}-q^{n-2}-q+2,(q-1)(q^{n-2}-1)\right)
\text{ divides }
(q-1)^2,
\]
and so
\[
\gcd\left(q^{2n-2}-q^{n-1}-q^{n-2}-q+2,2(q-1)(q^{n-2}-1)\right)
\text{ divides }
2(q-1)^2.
\]
Therefore, $R$ divides
\[
\frac{2q}{(q-1)^2}\cdot2(q-1)^2=4q.
\]
Combining this with $r\mid R$, $r^2>2v$ and $v>q^{2n-3}$,
we deduce $16q^2>2q^{2n-3}$.
Therefore, $8>q^{2n-5}\geq 2^{2n-5}$, which leads to $n=3$, and $q<8$. Note that $v=(q^2+q+1)(q+1)$, so $R=\gcd\left(2(v-1),4q\right)=2\gcd\left(q(q^2+2q+2),2q\right)=2q\gcd\left(q^2+2q+2,2\right)$. When $q$ is odd we see that $R=2q$.
Then $r^2>2v$ leads to
$ 2(q^2+q+1)(q+1)<r^2\leq R^2= 4q^2, $ which is not possible.
Hence $q\in\{2,4\}$ and $R=4q$.
First assume that $q=4$. Then $v=105$ and $R=16$.
Combining this with $r\mid R$ and $r^2>2v$, we conclude that $r=16$.
Then it follows from $r(k-1)=2(v-1)$ and $bk=vr$ that $k=14$ and $b=120$.
Since $G$ is block-transitive, it follows that $X:={\rm Soc}(G)={\rm PSL}(3,4)$ has equal length
orbits on blocks, of length dividing $b=120$. This implies that $X$ has a maximal subgroup of index dividing $120$, and hence by \cite[page 23]{Atlas}, we conclude that
$X$ is primitive on blocks, that the stabiliser $X_B$ of a block $B$ is a maximal $\mathcal{C}_5$-subgroup stabilising an $\mathbb{F}_2$-structure $V_0=\mathbb{F}_2^3 <V$, and $X_B$ has two orbits on 1-spaces, and on $2$-spaces in $V$. An easy computation shows that $X_B$ has precisely four orbits on the point set $\mathcal{P}$, of lengths 14, 14, 21, 56: these are subsets of flags $\{U,W\}$ determined by whether $U\cap V_0$ contains a non-zero vector or not, and whether $W\cap V_0$ is a 2-space of $V_0$ or not. Since $X_B$ preserves the $k=14$ points of $B$, it follows that $B$ is equal to one of the $X_B$-orbits of length 14, so that $X$ acts flag-transitively and point-imprimitively on $\mathcal{D}$, contradicting Theorem~\ref{Th1}. (In fact $G_B$ interchanges the two $X_B$-orbits of length $14$ and so $G_B$ does not leave invariant a point-subset of size 14.)
Thus $q=2$. Then $v=21$ and $R=8$ and $G={\rm PSL}(3,2).2\cong {\rm PGL}(2,7)$.
This together with $r\mid R$ and $r^2>2v$ implies $r=8$.
Then we derive from $r(k-1)=2(v-1)$ and $bk=vr$ that $k=6$ and $b=28$.
However, one can see from~\cite[II.1.35]{Handbook} that there is no $2$-$(21,6,2)$ design, a contradiction.
We also checked with \textsc{Magma} that considering every subgroup of index 28 as a block stabiliser, and each of its orbits of size 6 as a possible block, the orbit of that block under $G$ does not yield a $2$-design.
\medskip
\textbf{Case~2:} $V=U\oplus W$.
In this case the number $v$ of points is the number $\genfrac{[}{]}{0pt}{}{n}{i}_q$ of $i$-spaces $U$ of $V$, times the number $q^{i(n-i)}$ of complements $W$ to $U$ in $V$, so
\begin{align*}
v&=q^{i(n-i)} \prod_{j=1}^i \frac{q^{n-i+j}-1}{q^j-1},
\end{align*}
so in particular $p\mid v$, and
by Lemma \ref{bound}(iii), $r_p$ divides 2.
Note that $q^i-1>q^{i-j}(q^j-1)$, for integers $i>j$.
Thus
\begin{align*}
v&> q^{i(n-i)} \prod_{j=1}^i q^{n-i} =q^{i(n-i)}(q^{n-i})^i=q^{2i(n-i)}.
\end{align*}
We consider the point $\alpha=\{U,W\}$ with $U=\langle v_{1},\dots,v_i\rangle, W=\langle v_{i+1},\ldots, v_{n}\rangle$ and the $G^*_\alpha$-orbit $\Delta$ containing $\beta=\{U', W'\}$ with $U'=\langle v_{1},\ldots, v_{i-1},v_{i+1}\rangle,
W'=\langle v_i,v_{i+2},\ldots, v_{n}\rangle$. Then $\Delta$ consists of all $\{U'',W''\}$ with
$\dim(U''\cap U)=i-1, \dim(W''\cap W)=n-i-1, \dim(U''\cap W)= \dim(W''\cap U)=1$, so $|\Delta|$ is the number $\genfrac{[}{]}{0pt}{}{i}{1}_q\cdot q^{i-1}$ of decompositions $U=(U''\cap U)\oplus (W''\cap U)$, times the number
$\genfrac{[}{]}{0pt}{}{n-i}{1}_q\cdot q^{n-i-1}$ of decompositions $W=(U''\cap W)\oplus (W''\cap W)$. Thus
\[
|G_{\alpha}^*:G_{\alpha\beta}^*|=|\Delta|= q^{i-1}\frac{q^i-1}{q-1}\cdot q^{n-i-1}\frac{q^{n-i}-1}{q-1} = q^{n-2}\frac{(q^i-1)(q^{n-i}-1)}{(q-1)^2},
\]
and $G$ has a subdegree $|\Delta|$ or $2|\Delta|$.
By Lemma~\ref{condition 2}(iv), $r$ divides $4|\Delta|$. Since $r_p\mid 2$,
we deduce that $r$ divides $4(q^i-1)(q^{n-i}-1)/(q-1)^2$ (and even $2(q^i-1)(q^{n-i}-1)/(q-1)^2$ if $q$ is even). Let $a=1$ if $q$ is even and $2$ otherwise. Then $r$ divides $2^a(q^i-1)(q^{n-i}-1)/(q-1)^2$.
Considering the inequality $r^2>2v>2q^{2i(n-i)}$ and the fact that $(q^j-1)/(q-1)<2q^{j-1}$ for each integer $j>0$, it follows that
\begin{equation}\label{Alice}
q^{2i(n-i)}<2^{2a-1}\frac{(q^i-1)^2(q^{n-i}-1)^2}{(q-1)^4}<2^{2a-1}(2q^{i-1})^2(2q^{n-i-1})^2=2^{2a+3}q^{2n-4}.
\end{equation}
Thus $2^{2a+3}>q^{2n(i-1)-2i^2+4}\geq 2^{2n(i-1)-2i^2+4},$ so (since $n>2i$)
$$
2a-2\geq 2n(i-1)-2i^2>4i(i-1)-2i^2=2i(i-2).
$$
Hence $i=1$ or $2$, and the case $i=2$ only happens if $a=2$, that is if $q$ is odd.
Assume $i=2$, so $q$ is odd. Then $2\geq 2n(i-1)-2i^2=2n-8$, so $n\leq 5$. On the other hand $n>2i$, so $n=5$.
By \eqref{Alice} $q^{12}<2^7q^{6}$, so $q^{6}<2^7$, a contradiction since $q\geq 3$.
Therefore $i=1$.
In this case $v=q^{n-1}\frac{q^{n}-1}{q-1}$ and we compute that $v-1=\frac{q^{n-1}-1}{q-1}\cdot (q^n+q-1)$. Since $r\mid 2(v-1)$, $r$ divides $\gcd(2\frac{q^{n-1}-1}{q-1}\cdot (q^n+q-1),4\frac{q^{n-1}-1}{q-1})=2\frac{q^{n-1}-1}{q-1}\gcd(q^n+q-1,2)=2\frac{q^{n-1}-1}{q-1}$. In other words $a=1$ in the computation above whether $q$ is odd or even.
Then by \eqref{Alice} $q^{2(n-1)}<2(q^{n-1}-1)^2/(q-1)^2<2(2q^{n-2})^2=2^3q^{2n-4}$, which can be rewritten as
$q^2<2^{3}$, so $q=2$. Thus $v= 2^{n-1}(2^{n}-1)$ and $r$ divides $2(2^{n-1}-1)$, so $r^2>2v$ implies that $ 2^{2n-1}-2^{n-1}<2(2^{n-1}-1)^2=2^{2n-1}-2^{n+1}+2$, which is impossible.
\qed
\begin{lemma}\label{c1}
Assume Hypothesis~\ref{H}, and that the point-stabiliser $G_\alpha\in\mathcal{C}_{1}$.
Then either
\begin{enumerate}[(a)]
\item $\mathcal{D}=\mathcal{D}(\cal S)$ is as in Construction~\ref{cons}, where $\cal S$ is the design of points and lines of ${\rm PG}(n-1,3)$; or
\item $\mathcal{D}$ is the complement of the Fano plane.
\end{enumerate}
\end{lemma}
\par\noindent{\sc Proof.~}
By Lemma~\ref{c1'}, $G\leq{\rm P\Gamma L}(n,q)$, and
$G_{\alpha}\cong {\rm P_{i}}$ is the stabiliser of
a subspace $W$ of $V$ of dimension $i$, for some $i$.
As we will work with the action on the underlying space $V$ we will usually consider a linear group $\tilde{G}$ satisfying $\tilde{X}={\rm SL}(n,q)\leq \tilde{G}\leq{\rm \Gamma L}(n,q)$, acting unfaithfully on $\mathcal{P}$ with kernel a subgroup of scalars.
By Proposition~\ref{4} we may assume that $i\geq2$. Also, on applying a graph automorphism that interchanges $i$-spaces and $(n-i)$-spaces (and replacing $\cal D$ by an isomorphic design) we may assume further that $i\leq n/2$.
Then $v$ is the number of $i$-spaces:
\[
v=\genfrac{[}{]}{0pt}{}{n}{i}_q= \prod_{j=1}^i \frac{q^{n-i+j}-1}{q^{j}-1}
\]
Using the fact that $q^i-1>q^{i-j}(q^j-1)$, for integers $i>j$, it follows that $v>q^{i(n-i)}$.
Consider the following points of $\mathcal{D}$: $\alpha = W$, where $W= \langle v_{1},v_{2},\ldots, v_{i}\rangle$, and $\beta = W'$, where $W'= \langle v_{1},v_{2},\ldots, v_{i-1}, v_{i+1}\rangle$. Then the $\tilde{G}_\alpha$-orbit $\Delta$ containing $\beta$ consists of all the points $W''$
such that $\dim(W\cap W'')=i-1$. Thus the cardinality $|\Delta|$
is the number $\genfrac{[}{]}{0pt}{}{i}{i-1}_q$ of $(i-1)$-spaces $W\cap W''$ in $W$, times the number $\genfrac{[}{]}{0pt}{}{n-i+1}{1}_q-1$ of $1$-spaces in $V/(W\cap W'')$ distinct from $W/(W\cap W'')$.
Therefore, since $\genfrac{[}{]}{0pt}{}{i}{i-1}_q=\genfrac{[}{]}{0pt}{}{i}{1}_q$,
\[
|\Delta|=\genfrac{[}{]}{0pt}{}{i}{1}_q\cdot \left(\genfrac{[}{]}{0pt}{}{n-i+1}{1}_q-1\right)=\frac{q(q^{i}-1)(q^{n-i}-1)}{(q-1)^2}.
\]
Since $\tilde{G}$ is flag-transitive, $r$ divides $2|\Delta|$ (by Lemma \ref{condition 2}(iv)).
Combining this with $r^{2}>2v$ (Lemma~\ref{condition 1}(iv))
we have that
\[
\frac{2q^2(q^{i}-1)^2(q^{n-i}-1)^2}{(q-1)^4}>\frac{(q^{n}-1)\cdots(q^{n-i+1}-1)}{(q^{i}-1)\cdots(q-1)}>q^{i(n-i)}.
\]
Since $2q^{j-1}>(q^j-1)/(q-1)$ for all $j\in\mathbb{N}$, it follows that
\begin{equation}\label{Alice2}
q^{i(n-i)}<\frac{2q^2(q^{i}-1)^2(q^{n-i}-1)^2}{(q-1)^4}<2q^2(2q^{i-1})^2(2q^{n-i-1})^2=32q^{2n-2}\leq q^{2n+3}
\end{equation}
Hence
\begin{equation}\label{eq3}
2n+3>i(n-i)
\end{equation}
and so $i^2+3>n(i-2)\geq2i(i-2)$, which implies that $i\leq4$. Note from Lemma~\ref{condition 1} that $r\mid 2(v-1)$. Let $R=2\gcd(|\Delta|,v-1)$. As $r$ divides $2|\Delta|$, it follows that $r$ divides $R$ and hence $r\leq R$.
\textbf{Case~1:} $i=4$.
In this case, we derive from~\eqref{eq3} that $n\leq9$. This together with the restriction $n\geq2i=8$ leads to $n=8$ or $9$. We also deduce from~\eqref{Alice2} that $32q^{2n-2}>q^{4(n-4)}$, that is $32>q^{2n-14}$. First assume that $n=8$. Then $32>q^{2}$, so $q\leq 5$.
We get
\[
|\Delta|=\frac{q(q^4-1)^2}{(q-1)^2}
\]
and
\[
v=\frac{(q^8-1)(q^7-1)(q^6-1)(q^5-1)}{(q^4-1)(q^3-1)(q^2-1)(q-1)}\]
We easily compute that $R=4,6,40,10$ when $q=2,3,4,5$ respectively, in each case contradicting $r^2>2v$, since $r\leq R$.
Next assume $n=9$. Then $32>q^{4}$, so $q=2$. We get
\[
|\Delta|=\frac{q(q^4-1)(q^5-1)}{(q-1)^2}=930
\]
and
\[
v=\frac{(q^9-1)(q^8-1)(q^7-1)(q^6-1)}{(q^4-1)(q^3-1)(q^2-1)(q-1)}=3309747.
\]
Therefore $R=124$, again contradicting $r^2>2v$.
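For the reader's convenience, the values of $v$ and $R$ used in Case~1 can be reproduced mechanically; the following Python sketch (not part of the argument; \texttt{gauss\_binom} is an ad hoc helper computing Gaussian binomial coefficients) confirms that $R^2<2v$ in each of the cases $(n,q)$ treated above.
\begin{verbatim}
# Illustrative check of Case 1 (i = 4); not part of the proof.
from math import gcd

def gauss_binom(n, k, q):
    # Gaussian binomial [n choose k]_q = number of k-spaces of F_q^n
    num = den = 1
    for j in range(1, k + 1):
        num *= q**(n - k + j) - 1
        den *= q**j - 1
    return num // den

for n, q in [(8, 2), (8, 3), (8, 4), (8, 5), (9, 2)]:
    v = gauss_binom(n, 4, q)
    delta = q * (q**4 - 1) * (q**(n - 4) - 1) // (q - 1)**2
    R = 2 * gcd(delta, v - 1)
    print(n, q, v, R, R**2 < 2 * v)   # last column is True in every case
\end{verbatim}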
\textbf{Case~2:} $i=3$.
In this case, we derive from~\eqref{eq3} that $n\leq11$. Together with the restriction $n\geq2i=6$ leads to $n\in\{6,7,8,9,10,11\}$.
For $n=6$, $|\Delta|=\frac{q(q^3-1)^2}{(q-1)^2}=q(q^2+q+1)^2$, while \[v-1=\frac{(q^6-1)(q^5-1)(q^4-1)}{(q^3-1)(q^2-1)(q-1)}-1=q(q^8+q^7+2q^6+3q^5+3q^4+3q^3+3q^2+2q+1).\] Thus $R= 2\gcd(|\Delta|,v-1)=2q\gcd((q^2+q+1)^2,q^8+q^7+2q^6+3q^5+3q^4+3q^3+3q^2+2q+1)$. Using the Euclidean algorithm, we easily see that \[\gcd(q^2+q+1,q^8+q^7+2q^6+3q^5+3q^4+3q^3+3q^2+2q+1)=1,\] so $R=2q$, contradicting $r^2>2v$.
For $n=7$, $|\Delta|=\frac{q(q^3-1)(q^4-1)}{(q-1)^2}=q(q^2+q+1)(q+1)(q^2+1)$, while \[v-1=\frac{(q^7-1)(q^6-1)(q^5-1)}{(q^3-1)(q^2-1)(q-1)}-1=q(q^2+1)(q^9+q^8+q^7+2q^6+3q^5+2q^4+2q^3+2q^2+2q+1).\] Thus \[R= 2\gcd(|\Delta|,v-1)=2q(q^2+1)\gcd((q^2+q+1)(q+1),q^9+q^8+q^7+2q^6+3q^5+2q^4+2q^3+2q^2+2q+1).\] Using the Euclidean algorithm, we easily see that \[\gcd(q^2+q+1,q^9+q^8+q^7+2q^6+3q^5+2q^4+2q^3+2q^2+2q+1)=1\] and \[\gcd(q+1,q^9+q^8+q^7+2q^6+3q^5+2q^4+2q^3+2q^2+2q+1)=1,\] so $R=2q(q^2+1)$, contradicting $r^2>2v$.
Assume now that $8\leq n\leq 11$. We deduce from~\eqref{Alice2} that $32q^{2n-2}>q^{3(n-3)}$, that is $32>q^{n-7}$. So there are only a finite number of cases to consider and we easily check that for all of them, $R^2<2v$, a contradiction.
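The finitely many cases just described can be checked with a short computation of the same kind; the sketch below is illustrative only and runs over all prime powers $q<32$, printing the comparison $R^2<2v$ for $8\leq n\leq 11$.
\begin{verbatim}
# Illustrative check of the cases i = 3, 8 <= n <= 11; not part of the proof.
from math import gcd

def gauss_binom(n, k, q):
    num = den = 1
    for j in range(1, k + 1):
        num *= q**(n - k + j) - 1
        den *= q**j - 1
    return num // den

prime_powers = [2, 3, 4, 5, 7, 8, 9, 11, 13, 16, 17, 19, 23, 25, 27, 29, 31]
for n in range(8, 12):
    for q in prime_powers:
        if q**(n - 7) >= 32:       # only q with q^(n-7) < 32 need to be considered
            continue
        v = gauss_binom(n, 3, q)
        delta = q * (q**3 - 1) * (q**(n - 3) - 1) // (q - 1)**2
        R = 2 * gcd(delta, v - 1)
        print(n, q, R**2 < 2 * v)  # True in every case, ruling the case out
\end{verbatim}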
\textbf{Case~3:} $i=2$.
In this case, the point set is the set of $2$-spaces
and $n\geq 4$, but the above restrictions on $r$ do not lead easily to contradictions as they do for larger values of
$i$. So we have a different approach.
Recall that $\tilde{X}={\rm SL}(n,q)\leq \tilde{G}\leq{\rm \Gamma L}(n,q)$, acting unfaithfully on $\mathcal{P}$ (with kernel a scalar subgroup of $\tilde{G}$).
First we deal with $n=4$. In this case
\[
v=\frac{(q^4-1)(q^{3}-1)}{(q^2-1)(q-1)} = (q^2+1)(q^2+q+1), \quad |\Delta|=\frac{q(q^{2}-1)^2}{(q-1)^2} = q(q+1)^2
\]
and by Lemmas~\ref{condition 1} and \ref{condition 2}, $r^2>2v$ and $r$ divides
\begin{align*}
2\gcd(v-1, |\Delta|) &= 2\gcd(q^4+q^3+2q^2+q, q(q+1)^2)\\
& = 2q\gcd(q^3+q^2+2q+1,(q+1)^2)\\
&= 2q\gcd((q+1)^2(q-1)+3q+2,(q+1)^2)\\
& = 2q\gcd(3q+2,(q+1)^2) = 2q
\end{align*}
which implies $4q^2\geq r^2 > 2v > q^4$, a contradiction. Thus $n\geq5$.
Let $H:= \tilde{G}\cap{\rm GL}(n,q)$. Then the setwise stabiliser $H_{\{\alpha,\beta\}}$ of the points $\alpha = W=\langle v_1, v_2\rangle$
and $\beta=W'=\langle v_1,v_3\rangle$, fixes setwise the two blocks $B_1, B_2$ of $\mathcal{D}$ containing $\{\alpha,\beta\}$.
Also $H_{\{\alpha,\beta\}}$ leaves invariant the spaces $Y=W+W'=\langle v_1,v_2,v_3\rangle$ and $Y'=W\cap W'=\langle v_1\rangle$,
induces ${\rm GL}(n-3,q)$ on $V/Y$ (since even ${\rm SL}(V)\cap ({\rm GL}(\langle v_1\rangle)\times {\rm GL}(\langle v_4,\dots,v_n\rangle))$ induces
${\rm GL}(n-3,q)$ on $V/Y$).
Moreover $H_{\{\alpha,\beta\}}$ is transitive on $V\setminus Y$, and has orbits of lengths $1, 2q, q^2-q$ on the 1-spaces in $Y$.
Since $H_{\{\alpha,\beta\}}\cap \tilde{G}_{B_1}=H_{\{\alpha,\beta\}}\cap H_{B_1}$ is normal of index 1 or 2 in $H_{\{\alpha,\beta\}}$, it follows that
$H_{\{\alpha,\beta\}}\cap H_{B_1}$ also induces at least ${\rm SL}(n-3,q)$ on $V/Y$ and is transitive on $V\setminus Y$.
Hence the only non-zero proper subspaces of $V$ left invariant by $H_{\{\alpha,\beta\}}\cap H_{B_1}$ are
$Y, W, W', Y'$, and if $q=2, 3$ then possibly also the $q-1$ other 2-spaces of $Y$ containing $Y'$.
We claim that $H_{B_1}$ is irreducible on $V$. Suppose to the contrary that $H_{B_1}$ leaves invariant a nonzero proper subspace
$U$. Then also $H_{\{\alpha,\beta\}}\cap H_{B_1}$ leaves $U$ invariant. We see from the previous paragraph that $U$ must be contained in $Y$. If $U=Y$, then as $\tilde{G}_{B_1}$ is transitive on the set $[B_1]$ of points of $\mathcal{D}$ incident with $B_1$, it follows that all such points must be 2-spaces contained in $Y$. This is impossible since $\dim(Y)=3$, while some block, and hence all blocks, must be incident with a pair of 2-spaces which intersect trivially. Thus $U$ is a proper subspace of $Y$. The only 1-space invariant under $H_{\{\alpha,\beta\}}\cap H_{B_1}$ is $Y'$, and if $U=Y'$ then the same argument would yield that all 2-spaces incident with $B_1$ would contain $Y'$, which is not true since some block, and hence all blocks, must be incident with a pair of 2-spaces which intersect trivially. Thus $\dim(U)=2$, and $U$ is a 2-space of $Y$ containing $Y'$. Since $H_{B_1}$ does not fix $\alpha$ or $\beta$, it follows that $U\ne W$ or $W'$, and hence $q=2$ or $3$, and $U$ is one of the $q-1$ other 2-spaces
containing $Y'$. Again, since $\tilde{G}_{B_1}$ is transitive on $[B_1]$, each 2-space $\alpha'\in [B_1]$ intersects $U$ in a 1-space. Let $\gamma=W''$ be a 2-space which intersects $\alpha=W$ trivially, and let $B$ be a block of $\mathcal{D}$ containing $\{\alpha,\gamma\}$. Then $H_B$ leaves invariant a 2-space, say $U'$, and we have shown that both $W\cap U'$ and $W''\cap U'$ have dimension 1, so $U'$ is contained in the 4-space $W\oplus W''$. Now the subgroup induced by $H_{\{\alpha,\gamma\}}$ on $W\oplus W''$ contains ${\rm GL}(W)\times{\rm GL}(W'')$. The orbit of $U'$ under this group has size $(q+1)^2$. However the group $H_{\{\alpha,\gamma\}}\cap H_{B}$ has index at most $2$ in $H_{\{\alpha,\gamma\}}$ and fixes $U'$, so we have a contradiction. Thus we conclude that $H_{B_1}$ is irreducible.
The irreducible group $H_{B_1}$ has a subgroup $H_{\{\alpha,\beta\}}\cap H_{B_1}$ inducing at least ${\rm SL}(n-3,q)$ on
$V/Y$. We will apply a deep theorem from \cite{NieP} which relies on the presence of various prime divisors of the subgroup order $|H_{B_1}|$.
For $b, e\geq 2$, a primitive
prime divisor (ppd) of $b^e-1$ is a prime $r$ which divides $b^e-1$ but which does not divide $b^i-1$ for any $i<e$. Such ppd's are known to exist
unless either $(b,e)=(2,6)$, or $e=2$ and $b=2^s-1$ for some $s$, (a theorem of Zsigmondy, see \cite[Theorem 2.1]{NieP}). Each ppd $r$ of $b^e-1$ satisfies $r\equiv 1\pmod{e}$, and if $r>e+1$ then $r$ is said to be large; usually $b^e-1$ has a large ppd and the rare exceptions are known explicitly, see
\cite[Theorem 2.2]{NieP}. Also, if $b=p^f$ for a prime $p$ then each ppd of $p^{fe}-1$ is a ppd of $b^e-1$ (but not conversely) and this type of ppd of $b^e-1$ is called basic. We will apply \cite[Theorem 4.8]{NieP} which, in particular, classifies all subgroups $H_{B_1}$ with the following properties:
\begin{enumerate}
\item for some integer $e$ such that $n/2 < e\leq n-4$, $|H_{B_1}|$ is divisible by a ppd of $q^e-1$ and also by a ppd of $q^{e+1}-1$;
\item for some (not necessarily different) integers $e', e''$ such that $n/2 < e'\leq n-3$ and $n/2 < e''\leq n-3$, $|H_{B_1}|$ is divisible by a large ppd of $q^{e'}-1$ and a basic ppd of $q^{e''}-1$.
\end{enumerate}
Since $|H_{B_1}|$ is divisible by $|{\rm SL}(n-3,q)|$, it is straightforward to check, using \cite[Theorems 2.1 and 2.2]{NieP}, that
$H_{B_1}$ has these
properties whenever either $n\geq 11$ with arbitrary $q$, or $n\in\{9,10\}$ with $q>2$. In these cases we can apply \cite[Theorem 4.8]{NieP} to the irreducible subgroup $H_{B_1}$ of ${\rm GL}(n,q)$. Note that $H_{B_1}$ does not contain ${\rm SL}(n,q)$ since it fixes $[B_1]$ setwise; also, since $e, e+1$ differ by 1 and $e+1\leq n-3$, $H_{B_1}$ is not one of the `Extension field examples' from \cite[Theorem 4.8 (b), see Lemma 4.2]{NieP}, and finally since $n\geq9$ and $e+1\leq n-3$, $H_{B_1}$ is not one of the `Nearly simple examples' from \cite[Theorem 4.8 (c)]{NieP}. Thus we conclude that either $n\in\{9,10\}$ with $q=2$, or $n\in\{5, 6, 7, 8\}$.
Finally we deal with the remaining values of $n$. Since $H_{\{\alpha,\beta\}}\cap H_{B_1}$ has index at most 2 in $H_{\{\alpha,\beta\}}$
it follows that $H_{B_1}$ has a subgroup of the form $[q^{3\times (n-3)}].{\rm SL}(n-3,q)$ which is transitive on $V\setminus Y$, and hence $H_{B_1}$
has order divisible by $q^{x}$ with $x=x(n)=3(n-3)+\binom{n-3}{2} = (n-3)(n+2)/2$; also $H_{B_1}$ does not contain ${\rm SL}(n,q)$ since it fixes
$[B_1]$ setwise. It follows that $H_{B_1}\cap{\rm SL}(n,q)$ is contained in a maximal subgroup of ${\rm SL}(n,q)$ which is irreducible (that is, not in class $\mathcal{C}_1$ in \cite{Low}) and has order divisible by $q^{x(n)}$. A careful check of the possible maximal subgroups in the relevant
tables in \cite{Low}, as listed in Table~\ref{tabc1}, shows that no such subgroup exists.
This completes the proof.
\qed
\begin{table}[h]
\begin{center}
\caption {Tables from \cite{Low} to check for the proof of Lemma~\ref{c1}, Case $i=2$}\label{tabc1}
\vspace{5mm}
\begin{tabular}{c|c|l}
\hline
$n$ &$x(n)$& Tables from \cite{Low} for $n$ \\
\hline
$5$ & $7$ & Tables 8.18 and 8.19 \\
$6$ & $12$ & Tables 8.24 and 8.25 \\
$7$ &$18$ & Tables 8.35 and 8.36 \\
$8$ & $25$ & Tables 8.44 and 8.45 \\
$9$ & $33$ & Tables 8.54 and 8.55 \\
$10$ & $42$& Tables 8.60 and 8.61 \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{$\mathcal{C}_{2}$-subgroups}\label{sec3.2}
Here $G_{\alpha}$ is a subgroup of type ${\rm GL}(m,q)\wr {\rm \mathrm{S}_{t}}$, preserving a decomposition $V=V_{1}\oplus\cdots\oplus V_{t}$
with each $V_{i}$ of the same dimension $m$, where $n=mt$, $t\geq2$. We can think of the pointset of $\mathcal{D}$ as the set of these decompositions (for a fixed $m$ and $t$).
Note that graph automorphisms swap $i$-spaces with $(n-i)$-spaces, so $G\leq {\rm P\Gamma L}(n,q)$ unless $t=2$. When $t=2$ we have to consider that $G$ could contain graph automorphisms, and so could $G_\alpha$.
\begin{lemma}\label{c2}
Assume Hypothesis~\ref{H}. Then the point-stabilizer $G_\alpha\notin\mathcal{C}_{2}$.
\end{lemma}
\par\noindent{\sc Proof.~}
Recall that we denote $\gcd(n,q-1)$ by $d$.
By Lemma \ref{eq2}, $|X|=|{\rm PSL}(n,q)|>q^{n^2-2}$,
and by \cite[Proposition 4.2.9]{PB},
\[
v=\frac{|{\rm GL}(mt,q)|}{|{\rm GL}(m,q)|^tt!}\quad\mbox{so}\quad |X_{\alpha}|=\frac{|X|}{v}=\frac{t!|{\rm GL}(m,q)|^t}{d(q-1)}.
\]
\textbf{Case~1:} $m=1$.
Then $n=t\geq 3$, so $\tilde{G}\leq{\rm \Gamma L}(n,q)$. Take $\alpha$ as the decomposition $\oplus_{i=1}^n\langle e_i\rangle$ and $\beta$ as the decomposition
$\langle e_1+e_2\rangle\oplus (\oplus_{i=2}^n\langle e_i\rangle)$. The orbit of $\beta$ under $G_{\alpha}$ consists of the decompositions $\langle e_i+\lambda e_j\rangle\oplus (\oplus_{\ell\neq i}\langle e_\ell\rangle)$ with $i\neq j$ and $\lambda\in\mathbb{F}_q\setminus\{0\}$, and hence has size $s:=n(n-1)(q-1)$.
Thus by Lemmas~\ref{condition 1}(iv) and~\ref{condition 2}(iv), and Table~\ref{tab1},
\[
4n^2(n-1)^2(q-1)^2\geq (2s)^2\geq r^2 > 2v =2\frac{|{\rm GL}(n,q)|}{(q-1)^n n!} > 2\frac{q^{n^2}}{4(q-1)^nn!}
\]
so $8n^2(n-1)^2 n! > q^{n^2}/(q-1)^{n+2}>q^{n^2-n-2}$. This implies that either $(n,q)=(5,2)$ or $(4,2)$, or $n=3$ and $q\leq 5$.
Suppose first that $n=3$. Then $v=q^3(q^2+q+1)(q+1)/6$ and $r\leq 2s=12(q-1)$. Since $r^2>2v$ we conclude that $q=2$ or $3$. In either case $v$ is divisible by $q$, and since $r$ divides $2(v-1)$ (Lemma~\ref{condition 1}), $r$ is not divisible by $4$ if $q=2$, and not divisible by $3$ if $q=3$. Hence $r$ divides $6(q-1)$ if $q=2$, or $4(q-1)$ if $q=3$ (Lemma~\ref{condition 2}), and then $r^2>2v$ leads to a contradiction. Thus $q=2$ and $n$ is 4 or 5.
In either case, $v$ is divisible by $4$, so $4$ does not divide $r$ (Lemma~\ref{bound}). Then, since $r$ divides $2s=2n(n-1)$, we see that $r$ divides $6$ or $10$ for $n=4, 5$ respectively, giving a contradiction to $r^2> 2v$. Thus we may assume that $m\geq2$.
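These eliminations can also be confirmed numerically. The sketch below is illustrative only (\texttt{order\_gl} is an ad hoc helper returning $|{\rm GL}(n,q)|$); it computes $v$, the suborbit size $s$, and the common divisor bound $\gcd(2s,2(v-1))$ on $r$ in each of the four cases.
\begin{verbatim}
# Illustrative check of Case 1 (m = 1); not part of the proof.
from math import gcd, factorial, prod

def order_gl(n, q):
    return prod(q**n - q**i for i in range(n))

for n, q in [(3, 2), (3, 3), (4, 2), (5, 2)]:
    v = order_gl(n, q) // ((q - 1)**n * factorial(n))
    s = n * (n - 1) * (q - 1)        # suborbit size used in the text
    R = gcd(2 * s, 2 * (v - 1))      # r divides both 2s and 2(v-1)
    print(n, q, v, R, R**2 < 2 * v)  # last column is True, so r^2 > 2v fails
\end{verbatim}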
\textbf{Case~2:} $t=2$.
Next we deal with the case where $G$ may contain a graph automorphism, namely the case $t=2$, so $n=mt\geq 4$, and $G$ acts on the set of decompositions into two subspaces of dimension $m=n/2$.
Let ${\alpha}$ be the decomposition $V_{1}\oplus V_{2}$ where
\[
V_1=\langle v_{1},\ldots, v_{m}\rangle, \quad V_2=\langle v_{m+1},\ldots, v_{2m}\rangle.
\]
Let $\beta$ be the decomposition $V_1'\oplus V_2'$, where $V_1'= \langle v_{1},\ldots, v_{m-1},v_{m+1}\rangle$ and $V_2'=
\langle v_{m},v_{m+2},\ldots, v_{2m}\rangle$. Let $G^*:=G\cap{\rm P\Gamma L}(n,q)$, so $|G:G^*|\leq 2$. Since $G$ is point-primitive, $G$ is point-transitive, and so
$|G_\alpha:G^*_\alpha|=|G:G^*|\leq 2$.
Moreover, let $G^*_{V_1,V_2}$ be the subgroup of $G^*_\alpha$ fixing $V_1$ and $V_2$, so $G^*_{V_1,V_2}$ has index at most $2$ in
$G^*_\alpha$. If $m>2$, then we are in the same situation as in Lemma \ref{c1'} (Case 2) with $i=m=n/2$ and \[
|\beta^{G^*_{V_1,V_2}}|= q^{n-2}\frac{(q^m-1)^2}{(q-1)^2} =q^{2(m-1)}\frac{(q^m-1)^2}{(q-1)^2}.
\]
If $m=2$, then we have double counted (as $G^*_{V_1,V_2}$ does not fix each of the spaces $V_i\cap V_j'$; in fact it contains an element $x:v_1\leftrightarrow v_2, v_3\leftrightarrow v_4$), and $|\beta^{G^*_{V_1,V_2}}|=q^{2(m-1)}\frac{(q^m-1)^2}{2(q-1)^2}$.
In both cases, $|\beta^{G_{\alpha}}|\mid 4q^{2(m-1)}\frac{(q^m-1)^2}{(q-1)^2}.$
By Lemma \ref{condition 2}(iv), $r$ divides $2|\beta^{G_{\alpha}}|$, and hence
\[
r\mid 8q^{2(m-1)}\frac{(q^m-1)^2}{(q-1)^2}.
\]
Note that
\[
v=\frac{|X|}{|X_{\alpha}|}=\frac{q^{m^2}(q^{2m}-1)\cdots(q^{m+1}-1)}{2(q^m-1)\cdots(q-1)}>\frac{q^{2m^2}}{2}
\]
and in particular $p\mid v$. By Lemma \ref{bound}(iii), $r_p$ divides 2,
and hence $r$ divides
$
\frac{8(q^m-1)^2}{(q-1)^2}.
$
This together with $r^2>2v$ leads to
\begin{equation}\label{Eq20}
\frac{64(q^m-1)^4}{(q-1)^4}>q^{2m^2}.
\end{equation}
It follows that $64\cdot(2q^{m-1})^4>q^{2m^2}$ and so
\[
2^{10}>q^{2(m^2-2m+2)}\geq 2^{2(m^2-2m+2)}.
\]
Hence $10>2(m^2-2m+2)$ and so $m=2$ and $r\mid 8(q+1)^2$. Then we deduce from~\eqref{Eq20} that $64(q+1)^4>q^{8}$, which implies that $q=2$ or $3$.
Assume $q=2$. Then $r_2\mid 2$, so $r$ divides $2(q+1)^2=18$, contradicting the condition $r^2>2v=560$.
Hence $q=3$, $r\mid 2^7$ and $v=5265$.
Combining this with $r\mid2(v-1)$ we conclude that $r$ divides $2^5$, again contradicting the condition $r^2>2v$. Thus $t\geq3$ and in particular
$n=mt\geq6$ and $G\leq {\rm \Gamma L}(n,q)$.
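The two parameter sets eliminated just above, namely $q=2$ and $q=3$ with $m=2$, can be verified directly; the following sketch is illustrative only.
\begin{verbatim}
# Illustrative check of the cases q = 2, 3 with t = 2, m = 2.
from math import gcd, prod

def order_gl(n, q):
    return prod(q**n - q**i for i in range(n))

for q in (2, 3):
    v = order_gl(4, q) // (order_gl(2, q)**2 * 2)   # number of decompositions
    print("q =", q, " v =", v)                      # v = 280 and v = 5265
# q = 2: r | 2(q+1)^2 = 18, so r^2 <= 324 < 560 = 2v
# q = 3: r | gcd(8(q+1)^2, 2(v-1)) = 32, so r^2 <= 1024 < 10530 = 2v
print(gcd(8 * (3 + 1)**2, 2 * (5265 - 1)))          # prints 32
\end{verbatim}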
\textbf{Case~3:} $t\geq3$.
Since
$|{\rm GL}(m,q)|<q^{m^2}$, we have
\[
|X_{\alpha}|=\frac{t!|{\rm GL}(m,q)|^t}{d(q-1)}<\frac{t!q^{n^2/t}}{d(q-1)}.
\]
Combining this with the assertion $|X|<2(df)^2|X_{\alpha}|^3$ from Lemma \ref{bound}(i), we obtain
\[
|X|<\frac{2f^2(t!)^3q^{3n^2/t}}{d(q-1)^3}<2(t!)^3q^{3n^2/t}.
\]
It then follows from $|X|>q^{n^2-2}$ that $q^{n^2-2}<2(t!)^3q^{3n^2/t}$, that is,
\begin{equation}\label{Eq18}
q^{n^2(1-\frac{3}{t})-2}<2(t!)^3.
\end{equation}
Since $n\geq 2t$, we derive from~\eqref{Eq18} that
\begin{equation}\label{Eq19}
2^{4t(t-3)-2}\leq q^{4t(t-3)-2}\leq q^{n^2(1-\frac{3}{t})-2}<2(t!)^3.
\end{equation}
Hence either $t=3$ or $(t,q)=(4,2)$. Consider the latter case. Here~\eqref{Eq18} becomes $2^{n^2/4-2}<2\cdot(4!)^3$ and hence $n\leq8$. As $n\geq2t=8$, we conclude that $n=8$ and $m=2$. However, then $|X|=|{\rm PSL}(8,2)|$ and $|X_\alpha|=24|{\rm GL}(2,2)|^4$, contradicting the condition $|X|<2(df)^2|X_{\alpha}|^3=2|X_{\alpha}|^3$ from Lemma \ref{bound}(i).
Thus $t=3$, and ${\alpha}$ is a decomposition $V_{1}\oplus V_{2}\oplus V_3$ with $\dim(V_{1})=\dim(V_{2})=\dim(V_{3})=m=n/3$. Say
\[
V_1=\langle v_{1},\ldots, v_{m}\rangle, \quad V_2=\langle v_{m+1},\ldots, v_{2m}\rangle, \quad V_3=\langle v_{2m+1},\ldots, v_{3m}\rangle.\]
Let $\beta$ be the decomposition $\langle v_{1},\ldots, v_{m-1},v_{m+1}\rangle\oplus\langle v_{m},v_{m+2},\ldots, v_{2m}\rangle\oplus V_3$. Arguing as in Case~2 we find that
$|\beta^{G_{V_1,V_2,V_3}}| =q^{2(m-1)}\frac{(q^m-1)^2}{(q-1)^2}$ if $m\geq3$, or $|\beta^{G_{V_1,V_2,V_3}}| =q^{2(m-1)}\frac{(q^m-1)^2}{2(q-1)^2}$ if $m=2$.
Now $G_{V_1,V_2,V_3}$ has index dividing $6$ in $G_\alpha$, so $|\beta^{G_{\alpha}}|$ divides $6q^{2(m-1)}\frac{(q^m-1)^2}{(q-1)^2}$.
By Lemma \ref{condition 2}(iv), $r$ divides $2|\beta^{G_{\alpha}}|$. Since $v=\frac{|{\rm GL}(3m,q)|}{|{\rm GL}(m,q)|^3 3!}$, it follows that $p$ divides $v$ and so by Lemma~\ref{bound}, $r_p$ divides $2$, and hence
\[
r\mid 12\frac{(q^m-1)^2}{(q-1)^2},\quad \mbox{so}\quad r^2 < 144 (2q^{m-1})^4 = 2304 q^{4m-4}.
\]
Note that
\[
v=\frac{|{\rm GL}(3m,q)|}{|{\rm GL}(m,q)|^3 3!}=\frac{q^{3m^2}}{6} \prod_{i=1}^m\frac{q^{2m+i}-1}{q^i-1}\cdot \prod_{i=1}^m \frac{q^{m+i}-1}{q^i-1} >\frac{1}{6}q^{3m^2+2m\cdot m+m\cdot m}=\frac{q^{6m^2}}{6},
\]
and since $r^2>2v$, we get
\[
\frac{q^{6m^2}}{3}<2v<r^2< 2304 q^{4m-4},
\]
and so $6912>q^{6m^2-4m+4}\geq 2^{6m^2-4m+4}\geq 2^{20}$,
a contradiction.
\qed
\subsection{$\mathcal{C}_{3}$-subgroups}\label{sec3.3}
Here $G_{\alpha}$ is an extension field subgroup.
\begin{lemma}\label{c3}
Assume Hypothesis~\ref{H}. Then the point-stabilizer $G_\alpha\notin\mathcal{C}_{3}$.
\end{lemma}
\par\noindent{\sc Proof.~}
By Lemma \ref{eq2} we have $|X|>q^{n^2-2}$,
and by \cite[Proposition 4.3.6]{PB},
\[
X_{\alpha}\cong\mathbb{Z}_a.{\rm PSL}(n/s,q^s).\mathbb{Z}_b.\mathbb{Z}_s,
\]
where $s$ is a prime divisor of $n$, $d=\gcd(n,q-1)$, $a=\gcd(n/s,q-1)(q^s-1)/(d(q-1))$, and $b=\gcd(n/s,q^s-1)/\gcd(n/s,q-1)$.
Thus,
\[
|X_{\alpha}|=\frac{s|{\rm GL}(n/s,q^s)|}{d(q-1)}.
\]
\textbf{Case~1:} $n=s$.
Here $n$ is a prime, $|X_{\alpha}|=n(q^n-1)/(d(q-1))$,
and by Lemma \ref{bound}(i),
\[
|X|<2(df)^2|X_{\alpha}|^3
=\frac{2f^2n^3}{d}\left(\frac{q^n-1}{q-1}\right)^3
<2q^2n^3\cdot(2q^{n-1})^3
=16n^3q^{3n-1}.
\]
Combining this with $|X|>q^{n^2-2}$ we obtain
\begin{equation}\label{Eq20b}
q^{n^2-3n-1}<16n^3,
\end{equation}
and so $2^{n^2-3n-1}<16n^3$, which implies $n\leq 5$.
\textbf{Subcase~1.1:} $n=5$.
In this case~\eqref{Eq20b} implies that $q^{9}<16\cdot5^3$, which leads to $q=2$.
However, this means that $|X|=|{\rm PSL}(5,2)|$ and
$|X_\alpha|=5\cdot31$,
contradicting the condition $|X|<2(df)^2|X_{\alpha}|^3=2|X_{\alpha}|^3$ from Lemma \ref{bound}(i).
\textbf{Subcase~1.2:} $n=3$.
Then $X={\rm PSL}(3,q)$, $|X_\alpha|=3(q^2+q+1)/d$
and so $v=q^3(q^2-1)(q-1)/3$.
It follows from Lemma \ref{bound}(ii) that $r$ divides $2df|X_{\alpha}|=6f(q^2+q+1)$.
Combining this with $r^2>2v$, we obtain that
$54f^2(q^2+q+1)^2>q^3(q^2-1)(q-1)$, that is,
\[
54f^2>\frac{q^6-q^5-q^4+q^3}{(q^2+q+1)^2}.
\]
This inequality holds only when
\[
q\in\{2,3,4,5,7,8,9,16,32\}.
\]
Let $R=\gcd(6f(q^2+q+1), 2(v-1))$.
Then $r$ is a divisor of $R$. For each $q$ and $f$ as above,
the possible values of $v$ and $R$ are listed in Table \ref{tab3}.
\begin{table}[h]
\begin{center}
\caption {Possible values of $q$, $v$ and $R$}\label{tab3}
\vspace{5mm}
\begin{tabular}{cll|cll}
\hline
$q$ & $v$ & $R$ & $q$ & $v$ & $R$ \\
\hline
$2$ & $8$ & $14$ & $8$ & $75264$ & $146$ \\
$3$ & $144$ & $26$ & $9$ & $155520$ & $182$ \\
$4$ & $960$ & $14$ & $16$ & $5222400$ & $182$ \\
$5$ & $4000$ & $186$ & $32$ & $346390528$ & $6342$ \\
$7$ & $32928$ & $38$ & & & \\
\hline
\end{tabular}
\end{center}
\end{table}
Hence the condition $r^2>2v$ implies that $q\in \{2,3,5\}$.
Assume $q=2$. Then $v=8$ and $r$ divides $14$. From $r(k-1)=2(v-1)$ and $r\geq k\geq 3$ we deduce that $r=7$ and $k=3$, which contradicts the condition that $bk=vr$. Similarly, we have $q\neq 5$ (there are two cases to check, namely $(r,k)\in\{(186,44),(93,87)\}$).
Hence $q=3$. By Table~\ref{tab3}, $v=144$ and $r$ divides $26$. Then from $r(k-1)=2(v-1)$, $bk=vr$ and $r\geq k\geq 3$, we deduce that $r=26$,
$k=12$ and $b=312$. Since $|X_\alpha|=39$, Lemma \ref{condition 2}(ii) implies that $G>X$. Since $Out(X)$ has size $2$, we must have $G=X.2$ (with graph automorphism). By flag-transitivity, a block stabiliser must have index $312$ and have an orbit of size $12$. We checked with \textsc{Magma}, considering every subgroup of index $312$, and only one has an orbit of size 12 (which is unique), and the orbit of that block under $G$ does not yield a $2$-design.
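The enumeration of admissible pairs $(r,k)$ for $q\in\{2,3,5\}$ can be reproduced as follows (illustrative only); the output shows that an integral value of $b=vr/k$ occurs only for $q=3$, the case settled above by the \textsc{Magma} computation.
\begin{verbatim}
# Enumeration of admissible (r, k) for q in {2, 3, 5}; illustrative only.
data = [(2, 8, 14), (3, 144, 26), (5, 4000, 186)]   # (q, v, R) from Table 3
for q, v, R in data:
    for r in range(3, R + 1):
        if R % r or (2 * (v - 1)) % r:
            continue                       # need r | R and r | 2(v-1)
        k = 2 * (v - 1) // r + 1           # from r(k-1) = 2(v-1)
        if not (3 <= k <= r):
            continue
        print(q, r, k, "b integral:", (v * r) % k == 0)
\end{verbatim}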
\textbf{Case~2:} $n\geq2s$.
By Lemma \ref{eq2} we have
\[
|X_\alpha|=\frac{s|{\rm GL}(n/s,q^s)|}{d(q-1)}\leq \frac{s(1-q^{-s})(1-q^{-2s})q^{n^2/s}}{d(q-1)}
<\frac{sq^{n^2/s}}{d(q-1)}.
\]
Moreover $|X_\alpha|_p=s_p\cdot q^{n(n-s)/2s}$ and $|X|_p=q^{n(n-1)/2}$.
We deduce that $p$ divides $v=|X:X_\alpha|$, so by Lemma~\ref{bound}(iii), $r_p$ divides $2$, and
\[
|X|<2(df)^2|X_{\alpha}|_{p'}^2|X_{\alpha}|=2(df)^2|X_{\alpha}|^3/|X_{\alpha}|_p^2
<\frac{2f^2s^3q^{(3n^2/s)-n(n-s)/s}}{(s_p)^2d(q-1)^3}
\leq \frac{n^3}4 q^{(2n^2/s) +n}.
\]
For the last inequality, we used that $s\leq n/2$ and $f^2\leq (q-1)^3$.
Combining this with $|X|>q^{n^2-2}$ we obtain
\begin{equation}\label{Eq21}
4q^{(1-2/s)n^2-n-2}\leq n^3.
\end{equation}
\textbf{Subcase~2.1:} $s\geq 3$.
Then $n\geq 2s\geq 6$ and \eqref{Eq21} implies that
\[
n^3\geq 4q^{(1-2/s)n^2-n-2}\geq 4q^{(n^2/3)-n-2}\geq 2^{(n^2/3)-n}.
\]
We easily see that this inequality only holds for $n\leq 6$. Therefore $n=2s=6$, and so \eqref{Eq21} implies that $q=2$.
It follows that $X={\rm PSL}(6,2)$ and $|X_\alpha|=3|{\rm GL}(2,8)|=2^3\cdot3^3\cdot7^2$, so we can compute $v=|X|/|X_\alpha|=2^{12}\cdot3\cdot5\cdot31$ and $v-1=11\cdot 173149$. We know that $r\mid 2(v-1)$.
By Lemma \ref{bound}(iii), we also know that $r\mid 2df|X_\alpha|_{p'}=2\cdot3^3\cdot7^2$, thus $r\mid 2$, contradicting $r^2>2v$.
\textbf{Subcase~2.2:} $s=2$.
Then $n=2m\geq4$ and $n$ is even,
\[
|X_\alpha|=\frac{2|{\rm GL}(n/2,q^2)|}{d(q-1)}= \frac{2q^{n(n-2)/4}(q^n-1)(q^{n-2}-1)\cdots (q^{2}-1)}{d(q-1)},
\]
and
\[
v=\frac{q^{n^2/4}(q^{n-1}-1)(q^{n-3}-1)\cdots(q^{3}-1)(q-1)}{2}.
\]
As we observed, $r_p\mid 2$. Also $v$ is even, and so, from $r(k-1)=2(v-1)$ we deduce that $4\nmid r$.
First assume that $n=4$. Then
\[
|X_\alpha|=\frac{2q^{2}(q^4-1)(q+1)}{d}
\quad\text{and}\quad
v=\frac{q^4(q^3-1)(q-1)}{2}.
\]
By Lemma \ref{bound}(iii), $r$ divides $2df|X_\alpha|_{p'}$, and hence (since $4\nmid r$) $r\mid 2f(q^4-1)(q+1)$, which can be rewritten as $r\mid 2f(q^2+1)(q-1)(q+1)^2$.
Note that
\[
v-1=\frac{(q+1)(q^7-2q^6+2q^5-3q^4+4q^3-4q^2+4q-4)}{2}+1,
\]
so that $\gcd(v-1,q+1)=1$. Hence, since $r\mid 2(v-1)$, it follows that $\gcd(r,q+1)\mid 2$.
Moreover, it follows from $(q-1)\mid v$ that $\gcd(r,q-1)\mid 2$.
Combining this with $4\nmid r$ and $r\mid 2f(q^4-1)(q+1)$,
we obtain $r\mid 2f(q^2+1)$. Therefore, using Lemma~\ref{condition 1}(iv),
\[
4f^2(q^2+1)^2\geq r^2>2v=q^4(q^3-1)(q-1).
\]
However, there is no $q=p^f$ satisfying $4f^2(q^2+1)^2>q^4(q^3-1)(q-1)$, a contradiction.
Thus $n\geq 6$. Recall that $\tilde{X}={\rm SL}(n,q)\leq \tilde{G}\leq{\rm \Gamma L}(n,q)$, acting unfaithfully on $\mathcal{P}$ (with kernel a scalar subgroup of $\tilde{G}$). We regard $V$ as an $m$-dimensional vector space over $\mathbb{F}_{q^2}$
with basis $\{e_1,e_2,\ldots,e_m\}$ and $\tilde{G}_\alpha$ the subgroup of $\tilde{G}$ preserving this vector space structure. Take $w\in\mathbb{F}_{q^2}\backslash\mathbb{F}_q$. Then
\[
V=\langle e_1,e_2,\ldots,e_m\rangle_{\mathbb{F}_{q^2}}
=\langle e_1,we_1,e_2,we_2,\ldots,e_m,we_m\rangle_{\mathbb{F}_{q}}.
\]
Let
\[
W=\langle e_1,e_2\rangle_{\mathbb{F}_{q^2}}
=\langle e_1,we_1,e_2,we_2\rangle_{\mathbb{F}_{q}}.
\]
Consider $g\in {\rm SL}(n,q)$ defined by
\[
\begin{cases}
e^g_1=e_1,~e^g_2=-e_2,~(we_1)^g=we_2,~(we_2)^g=we_1, &\\
(e_i)^g=e_i,~(we_i)^g=we_i &\text{for }3\leq i\leq m.
\end{cases}
\]
Then $g$ does not fix $\alpha$. Let $\beta=\alpha^g$ and let $\tilde{G}_{\alpha,(W)}$ be the subgroup of $\tilde{G}_\alpha$ fixing every vector of $W$.
Note that $W^g=\langle e_1,we_1,-e_2,we_2\rangle_{\mathbb{F}_{q}}=W$ and so $\tilde{G}_{\alpha,(W)}\leq \tilde{G}_{\alpha,\beta}$.
Now ${\rm SL}(n,q)_{\alpha,(W)}$ contains $I_4\times{\rm SL}(n/2-2,q^2)$, and since this subgroup intersects the scalar subgroup trivially it follows that ${X}_{\alpha,(W)}$ contains a subgroup isomorphic to ${\rm SL}(n/2-2,q^2)$ (and so do ${G}_{\alpha,(W)}$, ${G}_{\alpha,\beta}$, and ${X}_{\alpha,\beta}$).
By Lemma~\ref{L:subgroupdiv}, $r$ divides $4df|X_\alpha|/|{\rm SL}(\frac{n}{2}-2,q^2)|=8fq^{2n-6}(q^n-1)(q^{n-2}-1)(q+1)$.
Combining this with $r_p\mid 2$ and $4\nmid r$,
we obtain
\begin{equation}\label{Eq22b}
r\mid 2f(q^n-1)(q^{n-2}-1)(q+1).
\end{equation}
Then from $r^2>2v$ and
\[
2v=q^{n^2/4}(q^{n-1}-1)(q^{n-3}-1)\cdots(q^{3}-1)(q-1)
\]
we deduce that
\begin{equation}\label{Eq22}
4f^2(q^n-1)^2(q^{n-2}-1)^2(q+1)^2
>q^{n^2/4}(q^{n-1}-1)(q^{n-3}-1)\cdots(q^3-1)(q-1),
\end{equation}
and so
\[
4q^2(q^n)^2(q^{n-2})^2(2q)^2
>q^{n^2/4}q^{n-2}q^{n-4}\cdots q^4q^2=q^{(n^2-n)/2}.
\]
Therefore,
\[
2^4q^{4n}>q^{(n^2-n)/2}.
\]
This implies that
\[
2^4>q^{n(n-9)/2}\ge 2^{n(n-9)/2},
\]
and hence $n\le 8$ (since $n$ is even).
Assume that $n=8$. By \eqref{Eq22} we have that
\[
4f^2(q^{8}-1)^2(q^{6}-1)^2(q+1)^2
>q^{16}(q^{7}-1)(q^{5}-1)(q^3-1)(q-1),\]
and this implies that $q\in\{2,3,4\}$. By \eqref{Eq22b}, $r$ divides
$
u:=2f(q^{8}-1)(q^{6}-1)(q+1),
$ and hence $r$ divides
$R:=\gcd(2(v-1),u)$.
However, for each $q\in\{2,3,4\}$,
we find $R^2<2v$, contradicting the fact that $r^2>2v$.
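These gcd computations can be reproduced with the following sketch (illustrative only).
\begin{verbatim}
# Illustrative check for n = 8 and q in {2, 3, 4} (here s = 2).
from math import gcd

for q, f in [(2, 1), (3, 1), (4, 2)]:
    v = q**16 * (q**7 - 1) * (q**5 - 1) * (q**3 - 1) * (q - 1) // 2
    u = 2 * f * (q**8 - 1) * (q**6 - 1) * (q + 1)
    R = gcd(2 * (v - 1), u)
    print(q, R, R**2 < 2 * v)   # True in each case, so r^2 > 2v fails
\end{verbatim}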
Hence $n=6$, and here $r\mid 2f(q^6-1)(q^4-1)(q+1)$ by \eqref{Eq22b}, which can be rewritten as
$r\mid 2f(q^2-q+1)(q^2+1)(q^3-1)(q-1)(q+1)^3$. Recall that $r\mid 2(v-1)$, and in this case
$2(v-1)=q^9(q^5-1)(q^3-1)(q-1)-2$, which is congruent to $6$ modulo $q+1$. Thus
$\gcd(2(v-1),q+1)$ divides $6$, and so $\gcd(r,q+1)$ divides $6$.
On the other hand, $(q^3-1)(q-1)$ divides $v$, so $\gcd(r,(q^3-1)(q-1))$ divides 2. Recall that $4\nmid r$.
We conclude that $r\mid 54f(q^2-q+1)(q^2+1)$.
Thus
\[
2916f^2(q^2-q+1)^2(q^2+1)^2\geq r^2>2v=q^9(q^5-1)(q^3-1)(q-1),
\]
which implies that $q=2$.
It then follows that $v=55552$ and $r\mid 810$.
However, as $r\mid 2(v-1)$, we conclude that $r\mid 6$, contradicting $r^2>2v$.
\qed
\subsection{$\mathcal{C}_{4}$-subgroups}\label{sec3.4}
Here $G_{\alpha}$ stabilises a tensor product $V_1 \otimes V_2$, where $V_1$ has dimension $a$, for some divisor $a$ of $n$, and $V_2$ has dimension $n/a$, with $2\leq a<n/a$, that is $2\leq a<\sqrt{n}$. In particular $n\geq 6$. Recall that $d=\gcd(n,q-1)$.
\begin{lemma}\label{c4}
Assume Hypothesis~\ref{H}. Then the point-stabilizer $G_\alpha\notin\mathcal{C}_{4}$.
\end{lemma}
\par\noindent{\sc Proof.~}
According to \cite[Proposition~4.4.10]{PB}, we have
\[
|X_{\alpha}|=\frac{\gcd(a,n/a,q-1)}{d}
\cdot|{\rm PGL}(a,q)|\cdot|{\rm PGL}(n/a,q)|.
\]
By Lemma \ref{eq2},
\[
|X_{\alpha}|\leq|{\rm PGL}(a,q)|\cdot|{\rm PGL}(n/a,q)|
<\frac{(1-q^{-1})q^{a^2}}{q-1}\cdot
\frac{(1-q^{-1})q^{n^2/a^2}}{q-1}
=q^{a^2+(n^2/a^2)-2}.
\]
Let $g(a)=a^2+\frac{n^2}{a^2}-2=\left(a+\frac{n}{a}\right)^2-2-2n$. This is a decreasing function of $a$ on the interval
$[2,\sqrt{n})$, and hence $g(a)\leq g(2)= (n^2/4)+2$.
Hence $|X_{\alpha}|<q^{a^2+(n^2/a^2)-2}\leq q^{(n^2/4)+2}$.
By Lemma \ref{bound}(i),
\[
|X|<2(df)^2|X_{\alpha}|^3
<2d^2f^2q^{(3n^2/4)+6}<2q^{(3n^2/4)+10}.
\]
Combining this with the fact that $|X|>q^{n^2-2}$ (from Lemma \ref{eq2}),
we obtain
\[
q^{(n^2/4)-12}<2.
\]
Therefore, $n^2/4\leq 12$, which implies that $n=6$,
and hence that $a=2$. Thus
\[
|X_{\alpha}|=\frac{q^4(q^3-1)(q^2-1)^2}{d}
\quad\text{and}\quad
v=q^{11}(q^6-1)(q^5-1)(q^2+1).
\]
Consequently, $p\mid v$ and $v$ is even. By Lemma \ref{bound}(iii), $r_p$ divides 2,
$4\nmid r$,
and $r$ divides $2df|X_\alpha|_{p'}$ and hence $r$ divides $2f(q^3-1)(q^2-1)^2$. Note that $(q^3-1)(q+1)\mid q^6-1$ and $q-1\mid q^5-1$, so $(q^3-1)(q^2-1)$ divides $v$. We conclude that $\gcd(r,(q^3-1)(q^2-1))$ divides $2$.
Hence $r\mid 2f(q^2-1)$, so that $r^2\leq 4f^2(q^2-1)^2<4q^6<2v$, contradicting the condition~$r^2>2v$.
\qed
\subsection{$\mathcal{C}_{5}$-subgroups}\label{sec3.5}
Here $G_{\alpha}$ is a subfield subgroup of $G$ of type ${\rm GL}(n,q_{0})$,
where $q=p^f=q_{0}^s$ for some prime divisor $s$ of $f$.
\begin{lemma}\label{c5}
Assume Hypothesis~\ref{H}. Then the point-stabilizer $G_\alpha\notin\mathcal{C}_{5}$.
\end{lemma}
\par\noindent{\sc Proof.~}
According to \cite[Proposition 4.5.3]{PB},
\[
|X_{\alpha}|=\frac{q-1}{d\cdot{\rm lcm}(q_0-1,(q-1)/\gcd(n,q-1))}\,|{\rm PGL}(n,q_{0})|
\]
and, setting $d_0= \gcd(n, (q-1)/(q_0-1))$ (a divisor of $d$), by Lemma \ref{gcd}(i) we have
\begin{align}\label{Eq23a}
|X_{\alpha}|&=\frac{d_0}{d}\cdot|{\rm PGL}(n,q_{0})|
=\frac{d_0}{d}\cdot q_{0}^{n(n-1)/2}(q_{0}^n-1)(q_{0}^{n-1}-1)\cdots(q_{0}^2-1).
\end{align}
In particular, the $p$-part $|X_\alpha|_p=q_0^{n(n-1)/2}$ is strictly less than $|X|_p=q^{n(n-1)/2}$, so $v=|X|/|X_\alpha|$ is divisible by $p$, and hence, by Lemma \ref{bound}(iii), $r_p$ divides 2, and $2(df)^2|X_\alpha|^2_{p'}|X_\alpha|>|X|$.
Hence
\[
q^{n^2-2} < |X| < 2d^2f^2 q_{0}^{n(n-1)/2}\cdot \frac{d_0^3}{d^3}\cdot((q_{0}^n-1)
(q_{0}^{n-1}-1)\cdots(q_{0}^2-1))^3.
\]
Since $d_0\leq d<q$, $f<q$ and $2\leq q_0$, this implies that
\begin{equation}\label{Eq23}
q^{n^2-2} < 2d^2f^2\cdot q_{0}^{n(n-1)/2}\cdot q_{0}^{3(n+2)(n-1)/2} < q_0\cdot q^4\cdot q_{0}^{2n^2+n-3}.
\end{equation}
As $q=q_{0}^s$,
we have $s(n^2-2) < 4s+2n^2+n-2$, so
\[
2n^2+n-3\geq s(n^2-6).
\]
\textbf{Case~1:} $s\geq 5$.
Then $2n^2+n-3\geq 5(n^2-6)$, and so $n=3$.
However, the first inequality in \eqref{Eq23} then implies
\[
q^{7}<2\cdot 3^2\cdot q^2\cdot q_{0}^{18},
\]
that is, $q_{0}^{5s-18}<18$. This is not possible as $q_{0}^{5s-18}\geq q_0^7\geq2^7$.
\textbf{Case~2:} $s=3$, that is $q=q_0^3$.
Then $2n^2+n-3\geq 3(n^2-6)$, and so $n=3$ or $4$.
Suppose $n=4$.
Then the first inequality in \eqref{Eq23} implies
\[
q^{14}<2\cdot 4^2\cdot q^2\cdot q_{0}^{33},
\]
that is, $32>q_{0}^3$. This leads to $q_{0}=2$ or $3$, and so $q=q_{0}^3=8$ or $27$,
which does not satisfy the first inequality in \eqref{Eq23}, a contradiction.
Therefore, $n=3=s$, and examining $d=\gcd(3,q-1)$ and $d_0=\gcd(3,q_0^2+q_0+1)$, we see that
$d_0=d\in\{1,3\}$.
The inequality $|X|<2(df)^2|X_\alpha|^2_{p'}|X_\alpha|$ from Lemma \ref{bound}(iii) becomes (using \eqref{Eq23a})
\[
q^3(q^3-1)(q^2-1)/d=|X|<2d^2f^2q_{0}^3
\cdot(q_{0}^3-1)^3(q_{0}^2-1)^3,
\]
or equivalently, since $q=q_0^3$,
\[
q_{0}^6(q_{0}^9-1)(q_{0}^6-1)<2d^3f^2\cdot(q_{0}^3-1)^3(q_{0}^2-1)^3.
\]
Since $(q_{0}^3-1)^3(q_{0}^2-1)^3<(q_{0}^9-1)(q_{0}^6-1)$ and $d\leq n=3$,
it follows that
\begin{equation}\label{Eq24}
q_{0}^6< 2d^3f^2\leq 54f^2.
\end{equation}
As $3\mid f$ and $q_0=p^{f/3}$,
we then conclude that $f=3$ and $q_{0}=2$, but this means that $d=1$,
contradicting the first inequality of \eqref{Eq24}.
\textbf{Case~3:} $s=2$, that is $q=q_0^2$.
In this case, $d_0=\gcd(n,q_0+1)$ in the expression for $|X_\alpha|$ in \eqref{Eq23a}.
Let $a\in \mathbb{F}_q\backslash \mathbb{F}_{q_{0}}$ and consider
$$
g=\begin{pmatrix} a & &\\ & a^{-1}&\\&&I_{n-2} \end{pmatrix}\in \tilde{X}={\rm SL}(n,q).
$$
Now $g$ does not preserve $\alpha$. Let $\beta=\alpha^g\ne \alpha$. Then
\[
\left\{
\begin{pmatrix}
1 & & \\
&1 & \\
& & B
\end{pmatrix}
\,\middle|\,
B\in {\rm SL}(n-2,q_{0})
\right\}
\leq \tilde{X}_{\alpha}\cap (\tilde{X}_{\alpha})^g=\tilde{X}_{\alpha\beta}.
\]
Since this subgroup intersects the scalar subgroup trivially, $X_{\alpha\beta}$ contains a subgroup isomorphic to ${\rm SL}(n-2,q_{0})$, and hence so does $G_{\alpha\beta}$.
By Lemma~\ref{L:subgroupdiv},
$r$ divides $4df|X_{\alpha}|/|{\rm SL}(n-2,q_{0})|$. Thus, using \eqref{Eq23a},
\[
r\mid 4fd_0q_{0}^{2n-3}(q^n_{0}-1)(q^{n-1}_{0}-1).
\]
Recall that $r_{p}\mid 2$. Moreover,
\[
v=\frac{|X|}{|X_\alpha|}=\frac{q_{0}^{n(n-1)/2}(q_{0}^n+1)(q_{0}^{n-1}+1)\cdots(q_{0}^2+1)}
{d_0}
\]
is even, and so $4\nmid r$.
Therefore,
\begin{equation}\label{Eq25}
r\mid 2fd_0(q^n_{0}-1)(q^{n-1}_{0}-1).
\end{equation}
From $r^2>2v$, that is to say, $r^2/2 > v$, we see that
\begin{equation}\label{Eq26}
2f^2d_0^2(q^n_{0}-1)^2(q^{n-1}_{0}-1)^2
>\frac{q_{0}^{n(n-1)/2}(q_{0}^n+1)(q_{0}^{n-1}+1)\cdots(q_{0}^2+1)}{d_0},
\end{equation}
and so, using $f<q=q_0^2$,
\[
2d_0^3q_{0}^{4n+2}>q_{0}^{n^2-1},
\]
that is, $2\gcd(n,q_{0}+1)^3=2d_0^3>q_{0}^{n^2-4n-3}$.
If $n\geq6$, then it follows that $2(q_{0}+1)^3>q_{0}^{9}$, a contradiction.
Thus $3\leq n\leq 5$.
Assume that $n=5$, so $2d_0^3> q_0^2$.
It follows that $d_0\neq 1$, and so $d_0=\gcd(5,q_{0}+1)=5$. This together with $250>q_{0}^2$ implies that $q_{0}\in\{4,9\}$. In either case $f=4$, and the inequality \eqref{Eq26} does not hold, a contradiction. Hence $n\leq 4$.
Since ${\rm PSL}(n,q_{0})\lhd X_\alpha$ and $r_{p}\mid 2$,
by Lemma \ref{parabolic},
$r$ is divisible by the index of a parabolic subgroup of ${\rm PSL}(n,q_{0})$, that is, the number of $i$-spaces for some $i\leq n/2$.
\textbf{Subcase~3.1:} $n=4$.
There are $(q_0+1)(q_0^2+1)$ 1-spaces and $(q_0^2+1)(q_0^2+q_0+1)$ 2-spaces, so $q^2_{0}+1$ divides $r$.
Moreover, it follows from $v=q^6_{0}(q^4_{0}+1)(q^3_{0}+1)(q^2_{0}+1)/\gcd(4,q_{0}+1)$
that $q^2_{0}+1$ divides $v$, since $\gcd(4,q_{0}+1)$ is a divisor of $q_0^3+1$.
Therefore, $q^2_{0}+1$ divides $\gcd(r,v)$.
However $r\mid 2(v-1)$ and hence $\gcd(r,v)\mid 2$,
and this implies that $q^2_{0}+1$ divides $2$, a contradiction.
\textbf{Subcase~3.2:} $n=3$.
Here the number $q_0^2+q_0+1$ of 1-spaces must divide $r$.
Since $r\mid 2(v-1)$ and $q^2_{0}+q_{0}+1$ is odd,
it follows that $q^2_{0}+q_{0}+1$ divides $v-1$.
On the other hand $v=q^3_{0}(q^3_{0}+1)(q^2_{0}+1)/d_0$,
and it follows that $\gcd(v-1,q^2_{0}+q_{0}+1)=q^2_{0}+q_{0}+1$ must divide $2q_{0}+d_0$.
This implies that $q_{0}=2$ and $d_0=\gcd(3,q_0+1)=3$.
Therefore, $7\mid r$, $f=2$ and $v=120$.
However, from \eqref{Eq25} and $r\mid 2(v-1)$ we obtain $r=7$ or $14$, contradicting $r^2>2v$.
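A direct numerical confirmation of this final step (illustrative only):
\begin{verbatim}
# Illustrative check of Subcase 3.2: q0 = 2, q = 4, f = 2, d0 = 3.
from math import gcd

q0, f, d0 = 2, 2, 3
v = q0**3 * (q0**3 + 1) * (q0**2 + 1) // d0      # v = 120
bound = 2 * f * d0 * (q0**3 - 1) * (q0**2 - 1)   # divisor bound on r from the text
R = gcd(2 * (v - 1), bound)                      # R = 14, and 7 | r forces r in {7, 14}
print(v, bound, R, R**2 > 2 * v)                 # prints 120 252 14 False
\end{verbatim}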
\qed
\subsection{$\mathcal{C}_{6}$-subgroups}\label{sec3.6}
Here $G_{\alpha}$ is of type $t^{2m}\cdot{\rm Sp}_{2m}(t)$,
where $n=t^m$ for some prime $t\ne p$ and positive integer $m$, and moreover $f$ is odd and is minimal such that $t\gcd(2,t)$ divides $q-1=p^f-1$ (see \cite[Table 3.5.A]{PB}).
\begin{lemma}\label{c6}
Assume Hypothesis~\ref{H}. Then the point-stabilizer $G_\alpha\notin\mathcal{C}_{6}$.
\end{lemma}
\par\noindent{\sc Proof.~}
From \cite[Propositions~4.6.5 and~4.6.6]{PB} we have $|X_\alpha|\leq t^{2m}|{\rm Sp}_{2m}(t)|$,
and from Lemma \ref{eq2} we have $|{\rm Sp}_{2m}(t)|<t^{m(2m+1)}$.
Moreover $t<q$, since $t\gcd(2,t)$ divides $q-1$.
Hence $|X_\alpha|<t^{2m+m(2m+1)}<q^{2m^2+3m}$.
By Lemma \ref{bound}(i), recalling that $d=\gcd(n,q-1)$,
\[
|X|<2(df)^2|X_{\alpha}|^3
<2d^2f^2q^{6m^2+9m}<2q^{6m^2+9m+4}.
\]
Combining this with the fact that $|X|>q^{n^2-2}=q^{t^{2m}-2}$ (by Lemma \ref{eq2}),
we obtain
\[
q^{t^{2m}-(6m^2+9m+6)}<2.
\]
Therefore,
\begin{equation}\label{Eq27}
t^{2m}\leq 6m^2+9m+6.
\end{equation}
As $t\geq 2$, we deduce that $2^{2m}\leq 6m^2+9m+6$, and hence $m\leq 3$.
\textbf{Case~1:} $m=1$.\quad
Here $t=n\geq3$, so $t$ is an odd prime, and from \eqref{Eq27} we have $t^2\leq 21$.
Hence $t=n=3$, so that $t\gcd(2,t)=3$ divides $q-1$, and $d=\gcd(n,q-1)=3$. Also
$|X_\alpha|\leq t^{2m}|{\rm Sp}_{2m}(t)|=3^2|{\rm Sp}_2(3)|=2^3\cdot 3^3$, and
then it follows from $q^{n^2-2}<|X|<2(df)^2|X_\alpha|^3$ that
\[
q^7<2(3f)^2|X_\alpha|^3\leq2\cdot(3f)^2\cdot(2^3\cdot 3^3)^3=f^2\cdot2^{10}\cdot 3^{11}.
\]
This inequality, together with the fact that $f$ is odd and is minimal such that $t\gcd(2,t)=3$ divides $p^f-1$, implies that $q\in\{7,13\}$, and hence also that $f=1$. In particular, $q\equiv4$ or $7\pmod{9}$, so that, by \cite[Proposition 4.6.5]{PB}, we have $X_\alpha\cong3^2.Q_8$.
According to Lemma \ref{bound}(ii), $r$ divides $2df|X_{\alpha}|=432$. Thus $r$ divides
$R:=\gcd(432, 2(v-1))$. If $q=7$ then $v= 2^2\cdot7^3\cdot19$, and so $R=6$; and if $q= 13$, then $v=2^2\cdot7\cdot13^3\cdot 61$, and again $R=6$.
Then $R^2<2v$, contradicting $r^2>2v$.
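The values of $v$ and $R$ for $q\in\{7,13\}$ can be reproduced as follows (illustrative only).
\begin{verbatim}
# Illustrative check of Case 1: q in {7, 13}, X_alpha = 3^2.Q_8 of order 72.
from math import gcd

for q in (7, 13):
    d = gcd(3, q - 1)
    order_X = q**3 * (q**3 - 1) * (q**2 - 1) // d   # |PSL(3, q)|
    v = order_X // 72
    R = gcd(432, 2 * (v - 1))                       # r divides 2*d*f*|X_alpha| = 432
    print(q, v, R, R**2 > 2 * v)                    # R = 6 and False in both cases
\end{verbatim}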
\textbf{Case~2:} $m=2$.\quad
In this case \eqref{Eq27} shows that $t^4\leq 48$ and so $t=2$ and $n=4$.
Thus $|X_\alpha|\leq t^{2m}|{\rm Sp}_{2m}(t)|=2^{4}|{\rm Sp}_{4}(2)|<2^{14}$.
From \cite[Proposition 4.6.6]{PB} we see that $q=p\equiv1\pmod 4$. In particular, $f=1$ and $d=4$.
Then the condition $q^{n^2-2}<|X|<2(df)^2|X_\alpha|^3$ implies that
\[
q^{14}<2\cdot4^2\cdot(2^{14})^3=2^{47},
\]
which yields $q=5$. Then by \cite[Proposition 4.6.6]{PB} we have $X_\alpha\cong2^4.\mathrm{A}_6$.
Therefore, $v=|X|/|X_\alpha|=5^5\cdot13\cdot31$.
By Lemma \ref{bound}(ii), $r$ divides $2df|X_{\alpha}|=2^{10}\cdot 3^2\cdot5$.
This together with $r\mid 2(v-1)$ implies that $r\mid 4$,
contradicting the condition $r^2>2v$.
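For completeness, these values can be confirmed by a direct computation (illustrative only).
\begin{verbatim}
# Illustrative check of Case 2: q = 5, X_alpha = 2^4.A_6 of order 5760.
from math import gcd, prod

q, d, f = 5, 4, 1
order_X = prod(q**4 - q**i for i in range(4)) // ((q - 1) * d)   # |PSL(4, 5)|
x_alpha = 2**4 * 360
v = order_X // x_alpha                                           # v = 1259375
R = gcd(2 * d * f * x_alpha, 2 * (v - 1))                        # R = 4
print(v, R, R**2 > 2 * v)                                        # False
\end{verbatim}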
\textbf{Case~3:} $m=3$.\quad
We conclude similarly (using \cite[Proposition 4.6.6]{PB}) that $t=2$, $n=8$, $q=p\equiv1\pmod 4$ (so $f=1$) and
$|X_\alpha|<2^{27}$.
However, this together with $q^{n^2-2}<|X|<2(df)^2|X_\alpha|^3$ implies that
$q^{62}<2^{82}\gcd(8,q-1)^2<2^{88}$. Thus $q=2$, a contradiction.
\qed
\subsection{$\mathcal{C}_{7}$-subgroups}\label{sec3.7}
Here $G_\alpha$ is a tensor product subgroup of type ${\rm GL}(m,q)\wr {\rm S}_t$,
where $t\geq 2$, $m\geq 3$ and $n=m^t$ (see \cite[Table 3.5.A]{PB}).
\begin{lemma}\label{c7}
Assume Hypothesis~\ref{H}. Then the point-stabilizer $G_\alpha\notin\mathcal{C}_{7}$.
\end{lemma}
\par\noindent{\sc Proof.~}
From \cite[Proposition 4.7.3]{PB} we deduce that
$|X_{\alpha}|\leq |{\rm PGL}(m,q)|^t\cdot t!$.
This together with Lemma \ref{eq2} implies that $|X_{\alpha}|<q^{t(m^2-1)}\cdot t!$.
Then by Lemma \ref{bound}(i),
\[
|X|<2(df)^2|X_{\alpha}|^3
<2d^2f^2q^{3t(m^2-1)}\cdot (t!)^3<q^{3t(m^2-1)+5}\cdot (t!)^3.
\]
Combining this with the fact that $|X|>q^{n^2-2}=q^{m^{2t}-2}$ (by Lemma \ref{eq2}),
we obtain
\begin{equation}\label{Eq28}
(t!)^3 > q^{m^{2t}-3t(m^2-1)-7} \geq 2^{m^{2t}-3t(m^2-1)-7}.
\end{equation}
Let $g(m)=m^{2t}-3t(m^2-1)-7$. It is straightforward to check that $g(m)$ is an increasing function of $m$ for $m\geq 3$, and hence $g(m)\geq g(3)= 3^{2t}-24t-7$. Thus \eqref{Eq28} implies that
\[
2^{3^{2t}-24t-7}< (t!)^3 \leq t^{3t}.
\]
Taking logarithms to base 2 we have $3^{2t}-24t-7 < 3t\log_2(t)$, which has no solutions for $t\geq2$.
\qed
\subsection{$\mathcal{C}_{8}$-subgroups}\label{sec3.8}
Here $G_\alpha$ is a classical group in its natural representation.
\begin{lemma}\label{c8.1} Assume Hypothesis~\ref{H}.
If the point-stabilizer $G_\alpha\in\mathcal{C}_{8}$, then $G_\alpha$ cannot be symplectic.
\end{lemma}
\par\noindent{\sc Proof.~}
Suppose for a contradiction that $G_\alpha$ is a symplectic group in $\mathcal{C}_{8}$. Then by \cite[Proposition 4.8.3]{PB}, $n$ is even, $n\geq 4$, and
\[
X_{\alpha}\cong {\rm PSp}(n,q)\cdot\left[\frac{\gcd(2,q-1)\gcd(n/2,q-1)}{d}\right],
\]
where ~$d=\gcd(n,q-1)$. For convenience we will also use the notation $d'= \gcd(n/2,q-1)$ in this proof. Therefore,
\[
|X_\alpha|=q^{n^2/4}(q^n-1)(q^{n-2}-1)\cdots(q^2-1)d'/d,
\]
and so
\[
v=\frac{|X|}{|X_\alpha|}=\frac{q^{(n^2-2n)/4}(q^{n-1}-1)(q^{n-3}-1)\cdots(q^3-1)}{d'},
\]
so in particular $p\mid v$. By Lemma \ref{bound}(iii), $r_p$ divides 2.
Since ${\rm PSp}(n,q)\trianglelefteq X_{\alpha}$, except for $(n,q)=(4,2)$, we can apply Lemma \ref{parabolic}, and so in these cases $r$ is divisible by the index of a parabolic subgroup of ${\rm PSp}(n,q)$.
We first treat the case $n=4$.
\textbf{Case~1:} $n=4$.
In this case,
\[
X_{\alpha}\cong {\rm PSp}(4,q)\cdot\left[\frac{\gcd(2,q-1)^2}{\gcd(4,q-1)}\right]
\quad\text{and}\quad
v=\frac{q^2(q^3-1)}{\gcd(2,q-1)}.
\]
If $(n,q)= (4,2)$, then a Magma computation shows that the subdegrees of $G$ are $12$ and $15$, so by
Lemma~\ref{condition 2}~(iv), $r\mid\gcd(24,30)=6$, contradicting $r^2>2v$. Since $X\cong {\rm A}_8$, using \cite[Theorem 1]{Biplane1} for symmetric designs and \cite[Theorem 1.1]{Liang2} for non-symmetric designs also rules out this case.
Hence $(n,q)\ne (4,2)$. Then, since the indices of the parabolic subgroups
${\rm P}_1$ and ${\rm P}_2$ in ${\rm PSp}(4,q)$
are both equal to $(q+1)(q^2+1)$, it follows that $(q+1)(q^2+1)\mid r$ and,
since $r\mid 2(v-1)$, that $(q+1)(q^2+1)$ divides $2(v-1)$.
Suppose first that $q$ is even. Then
\[
2(v-1) = 2q^2(q^3-1)-2 = 2(q^2+1)(q^3-q-1) +2q,
\]
which is not divisible by $q^2+1$. Thus $q$ is odd, and we have
\[
2(v-1) = q^2(q^3-1)-2 = (q^2+1)(q^3-q-1) +q-1,
\]
and again this is not divisible by $q^2+1$. Thus $n\ne 4$.
\textbf{Case~2:} $n\geq 6$.
Let $\tilde{X}={\rm SL}(n,q)$, the preimage of $X$ in ${\rm GL}(n,q)$, and let $\{e_1,\dots, e_{n/2}, f_1,\dots, f_{n/2}\}$
be a basis for $V$ such that the nondegenerate alternating form
preserved by $\tilde{X}_\alpha$ satisfies
\[
(e_i,e_j)=(f_i,f_j)=0\quad\mbox{and}\quad (e_i,f_j)=\delta_{ij}\quad\mbox{for all $i, j$}.
\]
Let ${\rm SL}(4,q)$ denote the subgroup of $\tilde{X}$
acting naturally on $U:=\langle e_1,e_2, f_1, f_2\rangle$ and fixing
$W:=\langle e_3,\dots, e_{n/2}, f_3,\dots, f_{n/2}\rangle$ pointwise, and let
${\rm Sp}(4,q)={\rm SL}(4,q)\cap \tilde{X}_\alpha$, namely the pointwise stabiliser of $W$ in $\tilde{X}_\alpha$.
Let $g\in {\rm SL}(4,q)\setminus\mathbf{N}_{{\rm SL}(4,q)}({\rm Sp}(4,q))$
so $g\not\in\tilde{X}_\alpha$, and let $\beta=\alpha^g\neq\alpha$. Since $g$ fixes $W$ pointwise,
it follows that the alternating forms preserved by $\alpha$ and $\beta$ agree on $W$ and hence that
$\tilde{X}_{\alpha\beta}=\tilde{X}_{\alpha}\cap (\tilde{X}_{\alpha})^g$ contains the pointwise stabiliser
${\rm Sp}(n-4,q)$ of $U$ in $\tilde{X}_\alpha$.
Since this subgroup ${\rm Sp}(n-4,q)$ intersects the scalar subgroup trivially, $X_{\alpha\beta}$ contains a subgroup isomorphic to
${\rm Sp}(n-4,q)$, and hence so does $G_{\alpha\beta}$.
By Lemma~\ref{L:subgroupdiv},
$r$ divides $4df|X_{\alpha}|/|{\rm Sp}(n-4,q)|$,
that is,
\[
r\mid 4d'fq^{2n-4}(q^n-1)(q^{n-2}-1).
\]
Recall that $r_p\mid 2$. Also, since $n\geq6$, $v$ is even,
and hence $4 \nmid r$.
Similarly, it follows from $(q-1)\mid v$ that $r_t\mid2$
for each prime divisor $t$ of $q-1$.
Therefore,
\[
r\mid 2f\cdot\frac{q^n-1}{q-1}\cdot\frac{q^{n-2}-1}{q-1}.
\]
As $f<q$ and $(q^j-1)/(q-1)<2q^{j-1}$ for all $j$, it follows that $r<8q^{2n-3}$.
From $r^2>2v$ we derive that
\begin{align*}
64q^{4n-6}&> 2q^{(n^2-2n)/4}(q^{n-1}-1)(q^{n-3}-1)\cdots(q^3-1)/d'\\
&>2q^{(n^2-2n)/4}(q^{n-2}q^{n-4}\cdots q^2)/q =2q^{(n^2/2)-n-1},
\end{align*}
and so $32>q^{(n^2/2)-5n+5}\geq 2^{(n^2/2)-5n+5}$,
that is, $n^2-10n<0$. This implies that $n\leq 8$.
Suppose that $n=8$. Here $d'=\gcd(4,q-1)$. In this case the index of each of the parabolic subgroups
$P_i$, for $1\leq i\leq 4$, is divisible by $q^4+1$, and hence $q^4+1$ divides $r$, which in turn divides $2(v-1)$ by Lemma~\ref{condition 2}. Then
\[
q^4+1\mid 2d'(v-1) = 2 q^{12}(q^{7}-1)(q^{5}-1)(q^3-1) - 2d'.
\]
Since the remainders on dividing $q^{12}, q^{7}-1, q^{5}-1$ by $q^4+1$ are $-1, -q^3-1$ and $-q-1$, respectively, it follows that
\[
q^4+1\mid -2 (q^3+1)(q+1)(q^3-1)-2d' = -2 (q^6-1)(q+1)-2d'.
\]
The remainder on dividing $q^6-1$ by $q^4+1$ is $-q^2-1$,
and hence
\[q^4+1 \mid 2 (q^2+1)(q+1)-2d'=2(\frac{q^4-1}{q-1}-d').\]
This implies that
\[
q^4+1 \mid 2(q^4-1)-2d'(q-1) = 2(q^4+1) -4-2d'(q-1)
\]
and hence $q^4+1\leq 2d'(q-1)+4\leq 8q-4$ (since $d'\leq 4$), a contradiction.
Thus $n=6$. Here $d'=\gcd(3,q-1)$.
The indices of the parabolic subgroups
${\rm P}_1$, ${\rm P}_2$ and ${\rm P}_3$ in ${\rm PSp}(6,q)$ are $(q^3+1)(q^2+q+1)$, $(q^3+1)(q^2+q+1)(q^2+1)$ and $(q^3+1)(q^2+1)(q+1)$, and since one of these numbers divides $r$,
we deduce that $(q^3+1)\mid r$, and so $(q^3+1)$ divides $2d'(v-1)= 2\left(q^{6}(q^{5}-1)(q^3-1) - d'\right)$.
Since the remainders on dividing $q^{6}, q^{5}-1, q^{3}-1$ by $q^3+1$ are $1, -q^2-1$ and $-2$, respectively, it follows that
$q^3+1$ divides $2 \left( 2(q^2+1)-d'\right)$. Hence
$q^3+1\leq 2(2q^2+2-d')\leq 2(2q^2+1)$, which implies that $q\leq 4$.
If $q=3$, then $d'=1$ and
$q^3+1=28$ does not divide $2(2(q^2+1)-d')=38$, a contradiction.
Thus $q$ is even, and since $q^3+1$ is then odd, the divisibility condition implies that $q^3+1$ divides $2(q^2+1)-d'\leq 2q^2+1$, which forces $q=2$ and $d'=1$.
Hence $v=2^6\cdot7\cdot31$, and therefore $v-1$ is coprime to $5$ and $7$. However $r$, and hence also $2(v-1)$
is divisible by the index of one of the parabolic subgroups
${\rm P}_1$, ${\rm P}_2$ or ${\rm P}_3$ of ${\rm PSp}(6,2)$, and these are $3^2\cdot7$, $3^2\cdot5\cdot7$, $3^3\cdot5$. This is a contradiction.
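The last divisibility check can be confirmed directly (illustrative only):
\begin{verbatim}
# Illustrative check: n = 6, q = 2.
v = 2**6 * 7 * 31                          # v = 13888
indices = [63, 315, 135]                   # indices of P1, P2, P3 in PSp(6,2)
print([2 * (v - 1) % i for i in indices])  # [54, 54, 99]: no index divides 2(v-1)
\end{verbatim}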
\qed
\begin{lemma}\label{c8.2}
Assume Hypothesis~\ref{H}.
If the point-stabilizer $G_\alpha\in\mathcal{C}_{8}$,
then $G_\alpha$ cannot be orthogonal.
\end{lemma}
\par\noindent{\sc Proof.~}
Suppose for a contradiction that $G_\alpha$ is an orthogonal group in $\mathcal{C}_{8}$. Then by \cite[Proposition 4.8.4]{PB}, $q$ is odd,
$n\geq3$, and
\[
X_{\alpha}\cong {\rm PSO}^\epsilon(n,q).\gcd(n,2),
\]
where $\epsilon\in\{\circ,+,-\}$. Let $\tilde{X}={\rm SL}(n,q)$ and let $\tilde{X}_\alpha$ denote the full preimage of $X_\alpha$ in $\tilde{X}$.
Let $\varphi$ be the non-degenerate symmetric bilinear form on $V$ preserved by $\tilde{X}_\alpha$, and let $e_1, f_1\in V$
be a hyperbolic pair, that is $e_1, f_1$ are isotropic vectors and $\varphi(e_1, f_1)=1$.
Let $U=\langle e_1,f_1\rangle$, and consider the decomposition $V=U\oplus U^\perp$. Let $g\in \tilde{X}$ fixing $U^\perp$ pointwise and mapping $e_1$ onto itself and $f_1$ onto $e_1+f_1$. Then $g$ maps the isotropic vector $f_1$ onto the non-isotropic vector $e_1+f_1$, and so
$g\notin \tilde{X}_{\alpha}$. Let $\beta=\alpha^g$, so that $\tilde{X}_\beta$ leaves invariant the form $\varphi^g$.
Then, since $\varphi$ and $\varphi^g$ restrict to the same form on $U^\perp$, we have that
\[
\left\{
\begin{pmatrix}
I_2 & \\
& B
\end{pmatrix}
\,\middle|\,
B\in {\rm SO}^\epsilon(n-2,q)
\right\}
\leq \tilde{X}_{\alpha}\cap \tilde{X}^g_{\alpha}=\tilde{X}_{\alpha\beta}.
\]
Since this group intersects the scalar subgroup trivially, ${X}_{\alpha\beta}$ contains a subgroup isomorphic to ${\rm SO}^\epsilon(n-2,q)$, and hence so does $G_{\alpha\beta}$.
By Lemma~\ref{L:subgroupdiv},
\begin{align}\label{EqOrth}
r\mid 4df|X_{\alpha}|/| {\rm SO}^\epsilon(n-2,q)|.
\end{align}
We now split into cases where $n$ is odd or even.
\textbf{Case~1:} $n=2m+1$ is odd, so $\epsilon=\circ$ and is usually omitted.
In this case, $X_{\alpha}\cong {\rm PSO}(2m+1,q)$.
Thus
\[
|X_\alpha|=q^{m^2}(q^{2m}-1)(q^{2m-2}-1)\cdots(q^2-1),
\]
and so
\[
v=|X|/|X_\alpha| = q^{m^2+m}(q^{2m+1}-1)(q^{2m-1}-1)\cdots(q^{3}-1)/d,
\]
where $d=\gcd(2m+1,q-1)$, and this implies that $v$ is even and $p\mid v$. By Lemma \ref{bound}(iii), $r_p$ divides 2, so $r_p=1$ since $q$ is odd. Moreover, since $r\mid 2(v-1)$, it follows that $4\nmid r$.
\textbf{Subcase~1.1:} $m=1$.
Then
\[
|X_{\alpha}|=q(q^2-1)
\quad\text{and}\quad
v=q^{2}(q^3-1)/d.
\]
As $p\mid v$, it follows from Lemma \ref {bound}(iii) that
$r$ divides $2df|X_{\alpha}|_{p'}$ and hence
$r$ divides $2df(q^2-1)$.
Combining this with $r\mid 2(v-1)$, we deduce that $r$ divides
\begin{align*}
2\gcd\left(d(v-1),df(q^2-1)\right)
=& 2\gcd\left(q^{2}(q^3-1)-d,df(q^2-1)\right).
\end{align*}
Noting that $\gcd\left(q^{2}(q^3-1)-d,q^2-1\right)$ divides
$$
q^{2}(q^3-1)-d-(q^2-1)(q^3+q-1)=q-1-d,
$$
we conclude that $r$ divides $2df(q-1-d)$.
If $d=\gcd(3,q-1)=3$, then $q\geq7$ (since $q$ is odd) and $r\mid 6f(q-4)$. From $r^2>2v=2q^{2}(q^3-1)/3$ we derive that $54f^2(q-4)^2>q^{2}(q^3-1)$,
which yields a contradiction. Consequently, $d=1$. Then $r\mid 2f(q-2)$, and from $r^2>2v=2q^{2}(q^3-1)$ we derive that $2f^2(q-2)^2>q^{2}(q^3-1)$, which is not possible.
\textbf{Subcase~1.2:} $m\geq 2$.
By \eqref{EqOrth},
$r\mid 4df|X_{\alpha}|/| {\rm SO}^\epsilon(n-2,q)|$,
that is,
\[
r\mid 4dfq^{2m-1}(q^{2m}-1).
\]
Recall that $r_p=1$ and $4\nmid r$.
We conclude that
\[
r\mid 2df(q^{2m}-1).
\]
Therefore, as $r^2>2v$, we have
\[
4d^2f^2(q^{2m}-1)^2>\frac{2q^{m^2+m}(q^{2m+1}-1)(q^{2m-1}-1)\cdots(q^{3}-1)}{d},
\]
and hence
\begin{align*}
2q^3\cdot q^2\cdot q^{4m}&>2d^3f^2(q^{2m}-1)^2\\
&>q^{m^2+m}(q^{2m+1}-1)(q^{2m-1}-1)\cdots(q^{3}-1)\\
&>q^{m^2+m}(q^{2m}q^{2m-2}\cdots q^{2})\\
&=q^{2m^2+2m}.
\end{align*}
This implies that $q^{2m^2-2m-5}<2$ and so $2m^2-2m-5\leq 0$. Thus $m=2$ and $d\leq 5$. Therefore
$q^{6}(q^5-1)(q^{3}-1)<2d^3f^2(q^{4}-1)^2<250f^2(q^{4}-1)^2$, which implies $q=2$, a contradiction.
\textbf{Case~2:} $n=2m$ is even, where $m\geq 2$ since $2m=n\geq 3$.
In this case, $X_{\alpha}\cong {\rm PSO}^\epsilon(2m,q)\cdot 2$ with $\epsilon=\pm$ (we identify $\pm$ with $\pm1$ for superscripts).
Hence
\[
|X_\alpha|=q^{m(m-1)}(q^{m}-\epsilon)(q^{2m-2}-1)(q^{2m-4}-1)\cdots(q^2-1),
\]
and so
\[
v=\frac{|X|}{|X_\alpha|}
=\frac{q^{m^2}(q^{m}+\epsilon)(q^{2m-1}-1)(q^{2m-3}-1)\cdots(q^{3}-1)}
{d},
\]
where $d=\gcd(2m,q-1)$, and this implies that $v$ is even and $p\mid v$. By Lemma \ref{bound}(iii), $r_p$ divides 2, so $r_p=1$ since $q$ is odd. Moreover, since $r\mid 2(v-1)$, it follows that $4\nmid r$.
By \eqref{EqOrth},
$r\mid 4df|X_{\alpha}|/| {\rm SO}^\epsilon(n-2,q)|$,
that is, $r$ divides
\[4dfq^{2m-2}(q^{m}-\epsilon)\frac{q^{2m-2}-1}{q^{m-1}-\epsilon}=
4dfq^{2m-2}(q^{m}-\epsilon)(q^{m-1}+\epsilon).
\]
As $r_p=1$ and $4\nmid r$, it follows that
\begin{align}\label{Eq30a}
r\mid 2df(q^{m}-\epsilon)(q^{m-1}+\epsilon).
\end{align}
Then we deduce from $r^2>2v$ that
\begin{align}\label{Eq30}
&2d^3f^2(q^{m}-\epsilon)^2(q^{m-1}+\epsilon)^2\nonumber\\
>&q^{m^2}(q^{m}+\epsilon)(q^{2m-1}-1)(q^{2m-3}-1)\cdots(q^{3}-1),
\end{align}
and so
\begin{align*}
2q^3\cdot q^2(2q^{2m-1})^2&>2d^3f^2(q^{m}-\epsilon)^2(q^{m-1}+\epsilon)^2\\
&>q^{m^2}(q^{m}+\epsilon)(q^{2m-1}-1)(q^{2m-3}-1)\cdots(q^{3}-1)\\
&>q^{m^2}(2q^{m-1})(q^{2m-2}\cdots q^{2})\\
&=2q^{2m^2-1}.
\end{align*}
Hence $q^{2m^2-4m-4}<4$ and so $2m^2-4m-4<2$, which implies $m=2$ and $d\leq 4$.
Thus $X_\alpha\cong {\rm PSO}^\epsilon(4,q)\cdot 2$.
Suppose $\epsilon=-$, so that $X_\alpha\cong {\rm PSO}^{-}(4,q)\cdot 2$.
Then \eqref{Eq30} gives
\[2d^3f^2(q^{2}+1)^2(q-1)^2>q^{4}(q^{2}-1)(q^{3}-1),
\]
which can be simplified to
\begin{equation}\label{Eq31bis}
2d^3f^2(q^{2}+1)^2>q^{4}(q+1)(q^{2}+q+1).
\end{equation}
Thus $128f^2(q^{2}+1)^2>q^{4}(q+1)(q^{2}+q+1)$. Since $q$ is odd, this implies that $q=3$ so that $d=2$, but then \eqref{Eq31bis} is not satisfied.
Therefore $\epsilon=+$, so that $X_\alpha\cong {\rm PSO}^{+}(4,q)\cdot 2$.
Then \eqref{Eq30} gives
\begin{equation}\label{Eq31}
2d^3f^2(q^{2}-1)^2(q+1)^2
>q^4(q^2+1)(q^3-1)
\end{equation}
and thus
\[
128f^2(q+1)^2>(q^2+1)(q^3-1).
\]
Since $q$ is odd, we conclude that $q=3$ or $5$.
However, $q=3$ does not satisfy \eqref{Eq31}; thus $q=5$, $f=1$ and $d=4$.
Then $v=|X|/|X_\alpha|=503750$. By \eqref{Eq30a}, $r\mid 2df(q^2-1)(q+1)=2^7\cdot3^2$. This together with $r\mid 2(v-1)$ (Lemma \ref{condition 1}(i)) leads to $r\mid 2$, contradicting $r^2>2v$.
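The numbers appearing in this final step can be confirmed as follows (illustrative only):
\begin{verbatim}
# Illustrative check: epsilon = +, m = 2, q = 5, d = 4, f = 1.
from math import gcd

q, d, f = 5, 4, 1
v = q**4 * (q**2 + 1) * (q**3 - 1) // d    # v = 503750
bound = 2 * d * f * (q**2 - 1) * (q + 1)   # = 1152 = 2^7 * 3^2
print(v, gcd(2 * (v - 1), bound))          # gcd = 2, so r | 2
\end{verbatim}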
\qed
\begin{lemma}\label{c8.3}
Assume Hypothesis~\ref{H}.
If the point-stabilizer $G_\alpha\in\mathcal{C}_{8}$,
then $G_\alpha$ cannot be unitary.
\end{lemma}
\par\noindent{\sc Proof.~}
Suppose that $G_\alpha$ is a unitary group in $\mathcal{C}_{8}$. Then by \cite[Proposition 4.8.5]{PB}, $n\geq 3$, $q=q^2_{0}$, and
\[
X_{\alpha}\cong {\rm PSU}(n,q_0)\cdot\left[\frac{\gcd(n,q_0+1)c}{d}\right],
\]
where $d=\gcd(n,q-1)$ and $c=(q-1)/{\rm lcm}(q_0+1,(q-1)/d)$.
By Lemma \ref{gcd}(iii), $c=\gcd(n,q_0-1)$.
Hence
\begin{align*}
|X_{\alpha}|
&=|{\rm PSU}(n,q_0)|\cdot\frac{\gcd(n,q_0+1)\gcd(n,q_0-1)}{\gcd(n,q^2_0-1)}\\
&=\frac{c}{d}\cdot q^{n(n-1)/2}_0\prod_{i=2}^n(q^i_0-(-1)^i)
\end{align*}
and
\[
v=\frac{|X|}{|X_\alpha|}
=\frac{1}{c}\cdot q^{n(n-1)/2}_0\prod_{i=2}^n(q^i_0+(-1)^i),
\]
which implies that $p\mid v$ and $v$ is even.
Since $r\mid 2(v-1)$, it follows that $r_p\mid 2$ and $4\nmid r$.
\textbf{Case~1:} $n=3$.
In this case,
\[
|X_{\alpha}|=\frac{cq^3_0(q^3_0+1)(q^2_0-1)}{d}
\quad\text{and}\quad
v=\frac{q^3_0(q^3_0-1)(q^2_0+1)}{c},
\]
where $c=\gcd(3, q_0-1)$ and $d=\gcd(3,q^2_0-1)$.
Since ${\rm PSU}(n,q_0)\trianglelefteq X_\alpha$, by Lemma \ref{parabolic}, $r$ is divisible by the index of a parabolic subgroup of ${\rm PSU}(3,q_0)$, that is, $q^3_0+1$.
Hence $(q^3_0+1)\mid r$, which implies that $(q^3_0+1)$ divides $2(v-1)$ and hence also
$2c(v-1)=2 q^3_0(q^3_0-1)(q^2_0+1) -2c$.
Since the remainders on dividing $q^3_0, q^3_0-1$ by $q^3_0+1$ are $-1, -2$, respectively, it follows that
\[
q_0^3+1\mid 4 (q^2_0+1) - 2c,
\]
which implies that $q_0=2$, $d=3$, $c=1$, and $f=2$.
Thus $v=q^3_0(q^3_0-1)(q^2_0+1)=280$ and $|X_\alpha|=q^3_0(q^3_0+1)(q^2_0-1)/3=72$.
Since $r\mid 2(v-1)$ and $r\mid 2df|X_\alpha|_{p'}$ by Lemma \ref{bound}(iii),
we conclude that $r$ divides $18$, contradicting the condition $r^2>2v$.
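The arithmetic for $q_0=2$ can be confirmed directly (illustrative only):
\begin{verbatim}
# Illustrative check of Case 1: n = 3, q0 = 2, q = 4, c = 1, d = 3, f = 2.
from math import gcd

q0, d, f = 2, 3, 2
v = q0**3 * (q0**3 - 1) * (q0**2 + 1)            # v = 280
x_alpha = q0**3 * (q0**3 + 1) * (q0**2 - 1) // 3 # = 72, with odd part 9
R = gcd(2 * (v - 1), 2 * d * f * 9)              # r divides R = 18
print(v, x_alpha, R, R**2 > 2 * v)               # prints 280 72 18 False
\end{verbatim}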
\medskip
\textbf{Case~2:} $n\geq 4$.
Let $\tilde{X}={\rm SL}(n,q)$ and let $\tilde{X}_\alpha$ denote the full preimage of $X_\alpha$ in $\tilde{X}$.
Let $U=\langle e_1, f_1\rangle$ be a nondegenerate $2$-subspace of $V$ relative to the
unitary form $\varphi$ preserved by $\tilde{X}_\alpha$.
Let $A\in {\rm SL}(U)$ such that $A$ does not preserve modulo scalars the restriction of $\varphi$ to $U$.
Then the element
$g=\begin{pmatrix}
A & \\
& I
\end{pmatrix}\in \tilde{X}$ but $g$ does not lie in $\tilde{X}_\alpha$. Hence $\beta :=\alpha^g\neq\alpha$.
On the other hand
\[
\left\{
\begin{pmatrix}
I & \\
& B
\end{pmatrix}
\,\middle|\,
B\in {\rm SU}(n-2,q_0)
\right\}\leq \tilde{X}_{\alpha}\cap \tilde{X}^g_{\alpha}=\tilde{X}_{\alpha\beta}.
\]
Since this group intersects the scalar subgroup trivially, ${X}_{\alpha\beta}$ contains a subgroup isomorphic to ${\rm SU}(n-2,q_0)$, and hence so does $G_{\alpha\beta}$.
By Lemma~\ref{L:subgroupdiv},
$r$ divides $4df|X_{\alpha}|/|{\rm SU}(n-2,q_0)|$,
that is,
\[
r\mid 4cfq^{2n-3}_0(q^{n}_0-(-1)^n)(q^{n-1}_0-(-1)^{n-1}).
\]
Since $r_p\mid 2$ and $4\nmid r$, we derive that
\[
r\mid 2cf(q^{n}_0-(-1)^n)(q^{n-1}_0-(-1)^{n-1}).
\]
This together with $r^2>2v$ and $v=|X|/|X_\alpha|$ leads to $r^2|X_\alpha|>2|X|$.
By Lemma \ref{eq2} we have
\[
|X|>q^{2n^2-4}_0
\quad\text{and}\quad
|X_\alpha|<\frac{q^{n^2-1}_0c\gcd(n,q_0+1)}{d}.
\]
Consequently, noting that $\gcd(n,q_0+1)\leq d=\gcd(n,q^2_0-1)$, $c=\gcd(n,q_0-1)<q_0$, and $f<q=q_0^2$, we get
\begin{align*}
2q^{2n^2-4}_0
&<4c^3f^2(q^{n}_0-(-1)^n)^2(q^{n-1}_0-(-1)^{n-1})^2\cdot \frac{q^{n^2-1}_0\gcd(n,q_0+1)}{d}\\
&<4q^7_0(q^{n}_0-(-1)^n)^2(q^{n-1}_0-(-1)^{n-1})^2\cdot q^{n^2-1}_0\\
&<4q^{n^2+6}_0(2q^{n+n-1}_0)^2=16q^{n^2+4n+4}_0
\end{align*}
and hence
\[
q^{n^2-4n-8}_0<8.
\]
It follows that $n^2-4n-8<3$, which implies $n=4$ or $5$.
\textbf{Subcase~2.1:} $n=4$.
Then
\[
v=q^6_0(q^4_0+1)(q^3_0-1)(q^2_0+1)/c,
\]
where $c=\gcd(4,q_0-1)$.
Since $r$ is divisible by the index of a parabolic subgroup of ${\rm PSU}(4,q_0)$,
which is either $(q^2_0+1)(q^3_0+1)$ or $(q_0+1)(q^3_0+1)$,
we derive that $(q^3_0+1)\mid r$. Hence $(q^3_0+1)$ divides $2(v-1)$, and hence also
$2c(v-1)= 2 q^6_0(q^4_0+1)(q^3_0-1)(q^2_0+1) -2c$.
Since the remainders on dividing $q^{6}_0, q^{4}_0+1, q^{3}_0-1$ by $q^3_0+1$ are $1, -q_0+1$ and $-2$, respectively, it follows that
$q^3_0+1$ divides $2 (-q_0+1)(-2)(q_0^2+1) -2c = 4(q_0-1)(q_0^2+1)-2c$, which equals
$4(q_0^3+1) - 4(q_0^2-q_0+2) -2c$. It follows that
$q^3_0+1$ divides $4(q^2_0-q_0+2)+2c$, which implies $q_0=2$.
Thus $v=2^6\cdot5\cdot7\cdot17$, and the index of a parabolic subgroup of ${\rm PSU}(4,q_0)$ is either $45$ or $27$.
However, neither $45$ nor $27$ divides $2(v-1)$, a contradiction.
\textbf{Subcase~2.2:} $n=5$.
Then
\[
v=q^{10}_0(q^5_0-1)(q^4_0+1)(q^3_0-1)(q^2_0+1)/c,
\]
where $c=\gcd(5,q_0-1)$.
Since $r$ is divisible by the index of a parabolic subgroup of ${\rm PSU}(5,q_0)$,
which is either $(q^2_0+1)(q^5_0+1)$ or $(q^3_0+1)(q^5_0+1)$,
we derive that $(q^5_0+1)\mid r$. Hence $(q^5_0+1)$ divides $2(v-1)$, and hence also
$2c(v-1)= 2q^{10}_0(q^5_0-1)(q^4_0+1)(q^3_0-1)(q^2_0+1) -2c$.
Since the remainders on dividing $q^{10}_0, q^{5}_0-1, (q^{3}_0-1)(q_0^2+1)$ by $q^5_0+1$ are $1, -2$ and $q_0^3-q_0^2-2$,
respectively, it follows that
$q^5_0+1$ divides $-4(q_0^4+1)(q_0^3-q_0^2-2)-2c$, which equals
\[
-4(q_0^5+1)(q_0^2-q_0) -4(-2q_0^4+q_0^3-2q_0^2+q_0-2) -2c.
\]
Thus $q^5_0+1$ divides $8q_0^4 -4q_0^3 +8q^2_0 -4q_0+8-2c$.
However, there is no prime power $q_0$ satisfying this condition, a contradiction.
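This nonexistence claim can be confirmed by testing the finitely many relevant values: for $q_0\geq 8$ the quantity $8q_0^4-4q_0^3+8q_0^2-4q_0+8-2c$ is positive and smaller than $q_0^5+1$, so only small prime powers need to be checked, as in the sketch below (illustrative only).
\begin{verbatim}
# Illustrative check that no prime power q0 satisfies the divisibility condition.
from math import gcd

# For q0 >= 8 the quantity below is positive and smaller than q0^5 + 1,
# so only small prime powers need to be tested.
for q0 in [2, 3, 4, 5, 7, 8, 9, 11, 13, 16, 17, 19, 23, 25, 27, 29, 31]:
    c = gcd(5, q0 - 1)
    val = 8*q0**4 - 4*q0**3 + 8*q0**2 - 4*q0 + 8 - 2*c
    assert val % (q0**5 + 1) != 0, q0
print("no admissible q0")
\end{verbatim}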
\qed
\subsection{$\mathcal{C}_{9}$-subgroups}\label{sec3.9}
Here $G_\alpha$ is an almost simple group not contained in any of the subgroups in $\mathcal{C}_1$--$\mathcal{C}_8$.
\begin{lemma}\label{c9}
Assume Hypothesis~\ref{H}.
Then the point-stabilizer $G_\alpha\notin\mathcal{C}_{9}$.
\end{lemma}
\par\noindent{\sc Proof.~}
By Lemma \ref{condition 2}(i) and Lemma \ref{eq2},
we have $|G_\alpha|^3>|G|\geq|X|=|{\rm PSL}(n,q)|>q^{n^2-2}$.
Moreover, by \cite[Theorem 4.1]{Liemaximal}, we have that $|G_\alpha|<q^{3n}$.
Hence $q^{n^2-2}<|G_\alpha|^3<q^{9n}$, which yields $n^2-2<9n$ and so $3\leq n\leq9$.
Further, it follows from \cite[Corollary 4.3]{Liemaximal} that either $n=y(y-1)/2$ for some integer $y$ or $|G_\alpha|<q^{2n+4}$.
If $n=y(y-1)/2$, then as $3\leq n\leq9$ we have $n=3$ or $6$.
If $|G_\alpha|<q^{2n+4}$, then we deduce from $|G_\alpha|^3>q^{n^2-2}$ that $q^{6n+12}>q^{n^2-2}$,
which implies $6n+12>n^2-2$ and so $3\leq n\leq7$.
Therefore, we always have $3\leq n\leq7$. The possibilities for $X_\alpha$ can be read off from \cite[Tables 8.4, 8.9, 8.19, 8.25, 8.36]{Low}. In Table \ref{tab5} we list all possibilities, sometimes fusing several cases together. Not all conditions from \cite{Low} are reproduced; we list only those needed for our proof.
Note that in some listed cases $X_\alpha$ is not maximal in $X$ but there is a group $G$ with $X<G\leq {\rm Aut}(X)$ such that $G_\alpha$ is maximal in $G$ and $G_\alpha\cap X$ is equal to this non-maximal subgroup $X_\alpha$.
\begin{longtable}{cclll}
\caption{Possible groups $X$ and $X_\alpha$ }\label{tab5}\\ \hline
\endfirsthead
\multicolumn{5}{l}{}\\
\hline
\endhead
\hline
\multicolumn{5}{r}{}\\
\endfoot\hline
\endlastfoot
Case &$X$ &$X_\alpha$ & Conditions on $q$ from \cite{Low} &Bound \eqref{lastineq} \\
\hline
1&${\rm PSL}(3,q)$ &${\rm PSL}(2,7)$ &$q=p\equiv1,2,4\pmod7$, $q\neq2$ &$q<14$ \\
2& &${\rm A}_6$ &$q=p\equiv1,4\pmod{15}$ &$q<19$ \\
3& &${\rm A}_6$&$q=p^2,p\equiv2,3\pmod5$, $p\neq3$ &$q<23$\\
\hline
4&${\rm PSL}(4,q)$ &${\rm PSL}(2,7)$ &$q=p\equiv1,2,4\pmod7$, $q\neq 2$ &$q< 4$ \\
5& &${\rm A}_7$ &$q=p\equiv1,2,4\pmod 7$ &$q< 7$ \\
6& &${\rm PSU}(4,2)$ &$q=p\equiv1\pmod6$ &$q< 12$ \\
\hline
7&${\rm PSL}(5,q)$ &${\rm PSL}(2,11)$ &$q=p$ odd &$q< 3$ \\
8& &${\rm M}_{11}$ &$q=3$ &$q< 4$ \\
9& &${\rm PSU}(4,2)$ &$q=p\equiv1\pmod6$ &$q< 5$ \\
\hline
10&${\rm PSL}(6,q)$ &${\rm A}_6.2_3$ &$q=p$ odd &$q< 3$ \\
11& &${\rm A}_6$ &$q=p$ or $p^2$ odd &$q< 2$ \\
12& &${\rm PSL}(2,11)$ &$q=p$ odd &$q< 3$ \\
13& &${\rm A}_7$ &$q=p$ or $p^2$ odd &$q< 3$ \\
14& &${\rm PSL}(3,4)^{.}2^-_1$ &$q=p$ odd &$q< 3$ \\
15& &${\rm PSL}(3,4)$ &$q=p$ odd &$q< 3$ \\
16& &${\rm M}_{12}$ &$q=3$ &$q< 4$ \\
17& &${\rm PSU}(4,3)^{.}2^-_2$ &$q=p\equiv1\pmod{12}$ &$q< 5$ \\
18& &${\rm PSU}(4,3)$ &$q=p\equiv7\pmod{12}$ &$q< 5$ \\
19& &${\rm PSL}(3,q)$ &$q$ odd & \\
\hline
20&${\rm PSL}(7,q)$ &${\rm PSU}(3,3)$ &$q=p$ odd &$q< 2$ \\
\end{longtable}
By Lemma \ref{bound}(i) and Lemma \ref{eq2}, we have $2d^2f^2|X_\alpha|^3>|X|>q^{n^2-2}$. Using the fact that $d=\gcd(n,q-1)\leq n$, it follows that
\begin{equation}\label{lastineq}
q<\left(2n^2f^2|X_\alpha|^3\right)^{1/(n^2-2)}.
\end{equation}
Note that, except for case (19), we know that $f=1$ or $2$. This inequality gives us, in each case
except (19), an upper bound for $q$, which is listed in the last column in Table \ref{tab5}. Comparing the last two columns of the table we see the condition and bound are satisfied only in the following cases: (1) for $q=11$,
(3) for $q=4$, (5) for $q=2$, (6) for $q=7$, (8) and (16). For case (19), we know that $f<q$ and $|{\rm PSL}(3,q)|<q^8$ by Lemma \ref{eq2}, so $72q^2q^{24}>q^{34}$, that is $q^8<72$, which is not satisfied for any $q$.
In case (1) for $q=11$, $d=1$ and the inequality $2d^2f^2|X_\alpha|^3>q^{n^2-2}$ is not satisfied.
For each of the remaining cases, we compute $v$ and $2df|X_\alpha|$. By Lemma \ref{bound}(ii), $r\mid 2df|X_\alpha|$. On the other hand $r\mid 2(v-1)$, so $r$ divides $R:=\gcd(2(v-1),2df|X_\alpha|)$. Now using $R^2\geq r^2>2v$, this argument rules out cases (3) for $q=4$, (6) for $q=7$, (8) and (16). This leaves the single remaining case (5) with $q=2$. Then this argument yields $r\mid 14$, $v=8$.
As $r^2>2v$, $r=7$ or $14$.
By Lemma \ref{condition 1}(i), $r(k-1)=14$, so the condition $k\geq3$ implies that $r=7$ and $k=3$. Now Lemma \ref{condition 1}(ii) yields a contradiction since $k\nmid vr$.
Hence, we rule out case (5) for $q=2$, completing the proof.
\qed
\section{Introduction}
\label{sec:intro}
\subsection{Reusing a one-time pad}
\label{sec:intro.reusing}
A one-time pad can famously be used only once~\cite{Sha49}, i.e., a
secret key as long as the message is needed to encrypt it with
information\-/theoretic security. But this does not hold anymore if
the honest players can use quantum technologies to communicate. A
quantum key distribution (QKD) protocol~\cite{BB84,SBCDLP09} allows
players to expand an initial short secret key, and thus encrypt
messages that are longer than the length of the original key. Instead
of first expanding a key, and then using it for encryption, one can
also swap the order if the initial key is long enough: one first
encrypts a message, then recycles the key. This is possible due to the
same physical principles as QKD: quantum states cannot be cloned, so
if the receiver holds the exact cipher that was sent, the adversary
cannot have a copy, and thus does not have any information about the
key either, so it may be reused. This requires the receiver to verify
the authenticity of the message received, and if this process fails, a
net key loss occurs \--- the same happens in QKD: if an adversary
tampers with the communication, the players have to abort and also
lose some of the initial secret key.
\subsection{Quantum authentication and key recycling}
\label{sec:intro.auth}
Some ideas for recycling encryption keys using quantum ciphers were
already proposed in 1982~\cite{BBB82}. Many years later, Damg{\aa}rd
et al.~\cite{DPS05} (see also \cite{DPS14,FS17}) showed how to encrypt
a classical message in a quantum state and recycle the key. At roughly
the same time, the first protocol for authenticating quantum messages
was proposed by Barnum et al.~\cite{BCGST02}, who also proved that
quantum authentication necessarily encrypts the message as
well. Gottesman~\cite{Got03} then showed that after the message is
successfully authenticated by the receiver, the key can be leaked to
the adversary without compromising the confidentiality of the message.
And Oppenheim and Horodecki~\cite{OH05} adapted the protocol of
\cite{BCGST02} to recycle key. But the security definitions in these
initial works on quantum authentication have a major flaw: they do not
consider the possibility that an adversary may hold a purification of
the quantum message that is encrypted. This was corrected by Hayden,
Leung and Mayers~\cite{HLM11}, who give a composable security
definition for quantum authentication with key recycling. They then
show that the family of protocols from~\cite{BCGST02} are secure, and
prove that one can recycle part of the key if the message is accepted.
The security proof from~\cite{HLM11} does however not consider all
possible environments. Starting in works by Simmons in the 80's and
then Stinson in the 90's (see, for example,
\cite{Sim85,Sim88,Sti90,Sti94}) the classical literature on
authentication studies two types of attacks: \emph{substitution
attacks} \--- where the adversary obtains a valid pair of message
and cipher\footnote{Here we use the term \emph{cipher} to refer to the
authenticated message, which is often a pair of the original message
and a tag or message authentication code (MAC), but not
necessarily.} and attempts to substitute the cipher with one that
will decode to a different message \--- and \emph{impersonation
attacks} \--- where the adversary directly sends a forged cipher to
the receiver, without knowledge of a valid message\-/cipher pair. To
the best of our knowledge, there is no proof showing that security
against impersonation attacks follows from security against
substitution attacks, hence the literature analyzes both attacks
separately.\footnote{In fact, one can construct examples where the
probability of a successful impersonation attack is higher than the
probability of a successful substitution attack. This can occur,
because any valid cipher generated by the adversary is considered a
successful impersonation attack, whereas only a cipher that decrypts
to a different message is considered a successful substitution
attack.} This is particularly important in the case of composable
security, which aims to prove the security of the protocol when used
in any arbitrary environment, therefore also in an environment that
first sends a forged cipher to the receiver, learns whether it is
accepted or rejected, then provides a message to the sender to be
authenticated, and finally obtains the cipher for this message. This
is all the more crucial when key recycling is involved, since the
receiver will already recycle (part of) the key upon receiving the
forged cipher, which is immediately given to the environment. The work
of Hayden et al.~\cite{HLM11} only considers environments that perform
substitution attacks \--- i.e., first provide the sender with a
message, then change the cipher, and finally learn the outcome of the
authentication as well as receive the recycled key. Hence they do not
provide a complete composable security proof of quantum
authentication, which prevents the protocol from being composed in an
arbitrary environment.\footnote{For example, QKD can be broken if the
underlying authentication scheme is vulnerable to impersonation
attacks, because Eve could trick Alice into believing that the
quantum states have been received by Bob so that she releases the
basis information.}
More recently, alternative security definitions for quantum
authentication have been proposed, both without~\cite{DNS12,BW16} and
with~\cite{GYZ16} key recycling (see also \cite{AM16}). These still
only consider substitution attacks, and furthermore, they are,
strictly speaking, not composable. While it is possible to prove that
these definitions imply security in a composable framework (if one
restricts the environment to substitution attacks), the precise way in
which the error $\eps$ carries over to the framework has not been
worked out in any of these papers. If two protocols with composable
errors $\eps$ and $\delta$ are run jointly (e.g., one is a subroutine
of the other), the error of the composed protocol is bounded by the
sum of the individual errors, $\eps+\delta$. If a security definition
does not provide a bound on the composable error, then one cannot
evaluate the new error after composition.\footnote{In an asymptotic
setting, one generally does not care about the exact error, as long
as it is negligible. But for any (finite) implementation, the exact
value is crucial, since without it, it is impossible to set the
parameters accordingly, e.g., how many qubits should one send to get
an error $\eps \leq 10^{-18}$.} For example, quantum authentication
with key recycling requires a backwards classical authentic channel,
so that the receiver may tell the sender that the message was
accepted, and allow her to recycle the key. The error of the complete
protocol is thus the sum of errors of the quantum authentication and
classical authentication protocols. Definitions such as those of
\cite{DNS12,BW16,GYZ16} are not sufficient to directly obtain a bound
on the error of such a composed protocol.
In the other direction, it is immediate that if a protocol is
$\eps$\=/secure according to the composable definition used in this
work, then it is secure according to \cite{DNS12,BW16,GYZ16} with the
same error $\eps$. More precisely, proving that the quantum
authentication scheme constructs a secure channel is sufficient to
satisfy \cite{DNS12,BW16} \--- i.e., the ideal functionality is a
secure channel which only allows the adversary to decide if the
message is delivered, but does not leak any information about the
message to the adversary except its length (confidentiality), nor does
it allow the adversary to modify the message (authenticity). And
proving that the scheme constructs a secure channel that additionally
generates fresh secret key is sufficient to satisfy the definition of
\emph{total authentication} from \cite{GYZ16}. Garg et
al.~\cite{GYZ16} also propose a definition of \emph{total
authentication with key leakage}, which can be captured in a
composable framework by a secure channel that generates fresh key and
leaks some of it to the adversary. This is however a somewhat
unnatural ideal functionality, since it requires a deterministic
leakage function, which may be unknown or not exist, e.g., in concrete
protocols the bits leaked can depend on the adversary's behavior \---
this is the case for the \emph{trap code}~\cite{BGS13,BW16}, which we
discuss further in \secref{sec:conclusion}. The next natural step for
players in such a situation is to extract a secret key from the
partially leaked key, and thus the more natural ideal functionality is
what one obtains after this privacy amplification
step~\cite{BBCM95,RK05}: a secure channel that generates fresh secret
key, but where the key generated may be shorter than the key
consumed. The ideal functionality used in the current work provides
this flexibility: the amount of fresh key generated is a parameter
which may be chosen so as to produce less key than consumed, the same
amount, or even more.\footnote{One may obtain more key than consumed
by using the constructed secure channel to share secret key between
the players. We use this technique to compensate for key lost in a
classical authentication subroutine, that cannot be recycled.}
Hence, with one security definition, we encompass all these different
cases \--- no key recycling, partial key recycling, total key
recycling, and even a net gain of secret key. Furthermore, having all
these notions captured by ideal functionalities makes for a
particularly simple comparison between the quite technical definitions
appearing in \cite{DNS12,BW16,GYZ16}.
\subsection{Contributions}
\label{sec:intro.contributions}
In this work we use the Abstract Cryptography (AC)
framework~\cite{MR11} to model the composable security of quantum
authentication with key recycling. AC views cryptography as a resource
theory: a protocol constructs a (strong) resource given some (weak)
resources. For example, the quantum authentication protocols that we
analyze construct two resources: a secure quantum channel \--- a
channel that provides both \emph{confidentiality} and
\emph{authenticity} \--- and a secret key resource that shares a fresh
key between both players. In order to construct these resources, we
require shared secret key, an insecure (noiseless) quantum channel and
a backwards authentic classical channel. These are all resources, that
may in turn be constructed from weaker resources, e.g., the classical
authentic channel can be constructed from a shared secret key and an
insecure channel, and noiseless channels are constructed from noisy
channels. Due to this constructive aspect of the framework, it is also
called \emph{constructive cryptography} in the
literature~\cite{Mau12,MR16}.
Although this approach is quite different from the Universal
Composability (UC) framework~\cite{Can01,Can13}, in the setting
considered in this work \--- with one dishonest player and where
recipients are denoted by classical strings\footnote{In a more general
setting, a message may be in a superposition of ``sent'' and ``not
sent'' or a superposition of ``sent to Alice'' and ``sent to Bob'',
which cannot be modeled in UC, but is captured in
AC~\cite{PMMRT17}.} \--- the two frameworks are essentially
equivalent and the same results could have been derived with a quantum
version of UC~\cite{Unr10}. In UC, the constructed resource would be
called \emph{ideal functionality}, and the resources used in the
construction are setup assumptions.
We thus first formally define the ideal resources constructed by the
quantum authentication protocol with key recycling \--- the secure
channel and key resource mentioned in this introduction \--- as well
as the resources required by this construction. We then prove that a
family of quantum authentication protocols proposed by Barnum et
al.~\cite{BCGST02} satisfy this construction, i.e., no distinguisher
(called environment in UC) can distinguish the real system from the
ideal resources and simulator except with an advantage $\eps$ that is
exponentially small in the security parameter. This proof considers
all distinguishers allowed by quantum mechanics, including those that
perform impersonation attacks.
We show that in the case where the message is accepted, every bit of
key may be recycled. And if the message is rejected, one may recycle
all the key except the bits used to one-time pad the
cipher.\footnote{Key recycling in the case of a rejected message is
not related to any quantum advantage. A protocol does not leak more
information about the key than (twice) the length of the cipher, so
the rest may be reused. The same holds for classical
authentication~\cite{Por14}.} We prove that this is optimal for the
family of protocols considered, i.e., an adversary may obtain all
non\-/recycled bits of key. This improves on previous results, which
recycled less key and only considered a subset of possible
environments. More specifically, Hayden et al.~\cite{HLM11}, while
also analyzing protocols from \cite{BCGST02}, only recycle part of the
key in case of an accept, and lose all the key in case of a
reject. Garg et al.~\cite{GYZ16} propose a new protocol, which they
prove can recycle all of the key in the case of an accept, but do not
consider key recycling in the case of a reject either. The protocols
we analyze are also more key efficient than that of~\cite{GYZ16}. We
give two instances which need $\Theta(m +\log 1/\eps)$ bits of initial
secret key, instead of the $\Theta((m +\log 1/\eps)^2)$ required by
\cite{GYZ16}, where $m$ is the length of the message and $\eps$ is the
error. Independently from this work, Alagic and Majenz~\cite{AM16}
proved that one of the instances analyzed here satisfies the weaker
security definition of \cite{GYZ16}.
Note that the family of protocols for which we provide a security
proof is a subset of the (larger) family introduced
in~\cite{BCGST02}. More precisely, Barnum et al.~\cite{BCGST02} define
quantum authentication protocols by composing a quantum one-time pad
and what they call a \emph{purity testing code} \--- which, with high
probability, will detect any noise that may modify the encoded message
\--- whereas we require a stricter notion, a \emph{strong purity
testing code} \--- which, with high probability, will detect any
noise. This restriction on the family of protocols is necessary to
recycle all the key. In fact, there exists a quantum authentication
scheme, the \emph{trap code}~\cite{BGS13,BW16}, which is a member of
the larger class from~\cite{BCGST02} but not the stricter class
analyzed here, and which leaks part of the key to the adversary, even
upon a successful authentication of the message \--- this example is
discussed in \secref{sec:conclusion}.
We then give two explicit instantiations of this family of quantum
authentication protocols. The first is the construction used in
\cite{BCGST02}, which requires an initial key of length $2m+2n$, where
$m$ is the length of the message and $n$ is the security parameter,
and has error $\eps \leq 2^{-n/2+1} \sqrt{2m/n+2}$. The second is an
explicit unitary $2$\-/design~\cite{Dan05,DCEL09} discovered by
Chau~\cite{Cha05}, which requires $5m+4n$ bits of initial
key\footnote{The complete design would require $5m+5n$ bits of key,
but we show that some of the unitaries are redundant when used for
quantum authentication and can be dropped.} and has error
$\eps \leq 2^{-n/2+1}$. Both constructions have a net loss of $2m+n$
bits of key if the message fails authentication. Since several other
explicit quantum authentication protocols proposed in the literature
are instances of this family of schemes, our security proof is a proof
for these protocols as well \--- this is discussed further in
\secref{sec:conclusion}.
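To illustrate how the quoted bounds translate into concrete parameters, the following sketch (our own illustration in Python; the function name and labels are ours, and only the error bounds and key lengths stated above are used) computes the smallest security parameter $n$ that achieves a target error $\eps$ for a message of length $m$, together with the corresponding length of the initial key.
\begin{verbatim}
import math

def smallest_n(m, eps, scheme="purity-testing"):
    """Smallest n meeting the error bounds quoted in the text.

    'purity-testing' : eps <= 2**(-n/2 + 1) * sqrt(2*m/n + 2), key = 2*m + 2*n
    'Chau 2-design'  : eps <= 2**(-n/2 + 1),                   key = 5*m + 4*n
    Returns (n, initial key length)."""
    n = 1
    while True:
        if scheme == "purity-testing":
            bound = 2.0 ** (-n / 2 + 1) * math.sqrt(2.0 * m / n + 2.0)
            key = 2 * m + 2 * n
        else:
            bound = 2.0 ** (-n / 2 + 1)
            key = 5 * m + 4 * n
        if bound <= eps:
            return n, key
        n += 1

# Example: a 1 kbit message with a target error of 10^-18.
# print(smallest_n(1024, 1e-18))                    # first construction
# print(smallest_n(1024, 1e-18, "Chau 2-design"))   # second construction
\end{verbatim}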
Finally, we show how to construct the resources used by the
protocol from nothing but insecure noisy channels and shared secret
key, and calculate the joint error of the composed protocols. We also
show how to compensate for the bits of key lost in the construction of
the backwards authentic channel, so that the composed protocol still
has a zero net key consumption if no adversary jumbles the
communication.
There is currently no work in the literature that analyzes the composable
security of quantum authentication without key recycling. And although
a security proof with key recycling is automatically a security proof
without key recycling, the parameters are not optimal \--- recycling
the key results in a larger error \--- and the proof given in this
paper is only valid for strong purity testing codes. So for
completeness, we provide a proof of security for quantum
authentication without key recycling in
\section{Introduction}
Coherent structures can be defined as organised fluid elements that capture the overall flow dynamics and are responsible for the transfer of mass, momentum and energy \citep{Hussain1986}. The energetic coherent structures in the near-field of a jet directly affect the production of acoustic noise, the entrainment of quiescent fluid, and the laminar-turbulent transition. For the implications that these aspects have on a range of engineering problems, coherent structures in the near-field of jets have been the subject of several investigations over the last decades.
In their pioneering work, \textcite{Glauser1987} applied Proper Orthogonal Decomposition (POD) \citep{lumley1967} to the jet mixing layer at three nozzle diameters downstream of the jet exit. They showed that the first POD mode contains $40\%$ of the total turbulent kinetic energy, with an additional $40\%$ from a combination of the next two modes. Later, \textcite{Citriniti2000} expanded from these observations by applying POD to the instantaneous fields of streamwise velocities obtained from an array of 138 hot-wire anemometer probes, at the same downstream location. They observed the existence of azimuthally coherent ``volcano-like'' bursting events, which were short-lived even if containing most of the energy, separated by a ``braid'' region of streamwise counter-rotating vortices. This scenario is consistent with the flow visualisations of \textcite{Liepmann1992} (see figures 7 to 11 of their work).
The experimental study of \textcite{Jung2004} examined the effects of both downstream position and Reynolds number on the energetic coherent structures of a jet. The total energy content of the modes at azimuthal wavenumber $m=0$ decreases with the distance from the nozzle, while the energy distribution for the first POD mode does not depend on the Reynolds number. The ``volcano-like'' eruptions are dominant from $x/D=2$ to $x/D=4$, but beyond 4 diameters they die off. \textcite{Gamard2004} showed that in the far-field region of the jet the streamwise velocity fluctuations stabilise asymptotically to the azimuthal wavenumber $m=2$. Later, \textcite{Iqbal2007} studied the near-field coherent structures of a jet using the three velocity components from hot-wire signals. From the projection of the first POD mode onto the two-components instantaneous realisations, the local dynamic behaviour of the coherent structures was determined, revealing a helical vortical structure in the range between 4 and 6 diameters. In the studies examined thus far, the jets under analysis can be assumed incompressible. Experimental works with laser doppler anemometry (LDV) and particle image velocimetry (PIV) showed that compressibility effects do not modify the near-field large-scale coherent structures \citep{Taylor2001,Tinney2008}. More recently, the existence of elongated streaky-structures was found using a combination of Spectral POD, Resolvent Analysis, and transient growth analysis (\textcite{Nogueira2019}). These streaks are analogous to those observed by \textcite{Hellstrom2016} in turbulent pipe flows and by \textcite{Hutchins2007} in a boundary layer. \textcite{Nogueira2019} also found that these streaks are characterized by a ratio between the streamwise and azimuthal length scales remaining constant as the azimuthal wavenumber is varied.
Although the coherent structures and their energy content are relatively well-known aspects for a jet with a circular nozzle, different nozzle geometries have scarcely been investigated, and mainly in relation to mixing and entrainment. \textcite{Chrighton1974} was among the first to examine how the nozzle geometry affects the instabilities in the near field of a jet, using linear stability analysis. Later, \textcite{Ho1987} conducted experiments on a jet from an elliptic nozzle at a low aspect ratio. The mass entrainment rate was observed to increase as compared with a round jet, while the mean flow properties in the near field, such as spread and momentum thickness, were found to be different in the planes of the major and minor axis. Later, \textcite{Husain1993} examined an elliptic jet, both numerically and experimentally. The formation of azimuthally-fixed streamwise vortices was observed, which enhance entrainment and mixing when compared with the round jet. The use of tabs in the jet nozzle as a passive means of flow control received also large attention, both in relation to mixing \cite{Foss1999} and entrainment \cite{Zaman1997}. Tabs lead to an increase of the mixing over a wide range of scales, and to a general enhancement of the entrainment rate. \textcite{Gutmark1999} provide a detailed survey on how deviations from circularity affect the near-field coherent structures, which then result in a change of the mean flow properties.
One of the first attempts aimed at deriving a low-dimensional representation of a jet from a non-circular nozzle is the work by \textcite{Moreno2004}, who applied POD to planar velocity fields of a supersonic rectangular jet, from PIV measurements. Most of the fluctuation energy is contained within the first two modes. Low-order modelling confidently predicted the global flow characteristics, although the effects of the rectangular nozzle were not discussed in relation with the circular nozzle. To identify the individual contributions of the different vortical structures to entrainment, \textcite{ElHassan2010} performed a POD analysis of jets from circular orifice and from daisy-shaped orifice, and \textcite{ElHassan2011} investigated the near field of a cross-shaped orifice jet using a similar approach. These investigations were conducted with time-resolved stereoscopic PIV and evidenced that the Kelvin-Helmholtz dynamics play a central role in the entrainment of the circular jet. The braid region produces the largest level of entrainment, while the front part of the Kelvin-Helmholtz ring tends to expand the flow and dramatically reduce the entrainment, even in presence of strong streamwise vortex pairs generated by the lobes of the daisy-shaped orifice.
Lobed nozzles were found to increase turbulent mixing over a wide range of length scales \cite{Hu2001, Hu2002a, Hu2002b}. The involvement of multiple scales in the mixing process is a consequence of the break down of the large-scale streamwise vortices into smaller but not weaker vortices. The enhancement of mixing produced by the lobes is particularly strong in the first diameter, while it becomes almost negligible after the first two diameters. \textcite{Mao2006} and \textcite{Mao2009} studied experimentally the effects of the lobe geometry on the strength of the streamwise vortices in relation to mixing. Lobes having parallel side walls, i.e. rectangular lobes, were found to generate stronger streamwise vortices and hence better mixing performance than triangular and scalloped geometries.
In the studies discussed up until here, nozzle geometries different from circular were meant to enhance mixing or entrainment. However, the reduction of the aeroacoustic emissions represented the main motivation for a number works on jet from non-circular orifices. Among these, the experimental work of \textcite{Tam2000} examined the far-field noise from elliptic, rectangular, lobed, and tabbed jets. While the noise radiated from elliptic and rectangular jets is the same as that from the circular jet, a significant suppression of the large-scale noise is obtained from the lobed jet. Tabs also impact on the noise field primarily by shifting the spectral peak to a higher frequency.
With the aim of investigating the screeching in a rectangular jet, \textcite{Alkislar2003} performed stereoscopic PIV. Coherent vortical structures of increasing strength in the shear layer are associated with screeches of progressively stronger intensity. Later, stereoscopic PIV and microphones were used to investigate the aeroacoustic effects of chevrons and microjets \citep{Alkislar2007}.
The low-frequency noise is attenuated by both flow-control techniques, while the high-frequency noise tends to increase. The attenuation mechanism of both chevrons and microjets was associated with the formation of streamwise vortices disrupting the generation of azimuthally coherent large-scale structures.
The time evolution of the near-field structures in a jet both from a circular nozzle and from a chevron nozzle has been studied using time-resolved tomographic PIV (\textcite{Violato2011}). In that work, the intense vortical structures were identified with criteria based on the analysis of the velocity gradient tensor.
From Powell's acoustic analogy, the pairing of the vortex rings represents the main source of noise in the circular jet. On the other hand, vortex rings are nearly absent in the chevron jet, while counter-rotating streamwise vortices develop from the chevron notches. The decay of these streamwise vortices leading to the formation of C-shaped structures is regarded as the main mechanism for noise production in the chevron jet. The same dataset was also used to examine with POD the spatial organization of the structures at the jet core breakdown \citep{Violato2013}.
Recently, \textcite{Sinha2016} applied linear stability theory to derive the instability modes and their downstream evolution both in circular and in chevron jets. The serrated nozzle reduces the growth rates of the most unstable eigenmodes of the jet, although their phase speeds are approximately similar. Coherent structures in the near field of a circular jet were investigated by \textcite{Lesshafft2019}, who applied spectral POD on velocity fields from time-resolved stereoscopic PIV \cite{Jaunet2017}. In the range of Strouhal number around 0.4, the leading modes of spectral POD obtained from experimental data and from resolvent-based modeling show a very good agreement.
\textcite{Rigas2019} compared the spectral POD modes from a chevron jet and from a circular jet, both obtained from large-eddy simulations. The analysis identified structures that take the form of elongated streamwise streaks, analogous to those observed by \textcite{Nogueira2019}. These streaks have been associated with the non-modal lift-up mechanism in wall-bounded flows. In the circular jet, the energetic streaks appear at the azimuthal wavenumber $m=1$, while in the chevron jet the streaks form as a consequence of the nozzle geometry and inherit its periodicity. The relative importance of the lift-up, the Kelvin-Helmholtz, and the Orr mechanisms was recently investigated by \textcite{Pickering2020} from Large-Eddy Simulations of a round jet. The work points at the lift-up mechanism as an important linear amplifier of disturbances in turbulent jets, thus confirming and expanding the findings by \textcite{Nogueira2019}. In jets at Mach numbers 0.4, 0.9, and 1.5, the lift-up mechanism was found to be responsible for the generation of streamwise streaks at low frequencies and non-zero azimuthal wavenumbers.
From the analysis of the past literature, it emerges that the non-circular orifice geometries that were examined were all characterised by only one spatial length scale. It is however not completely understood how an orifice geometry constructed using multiple length scales affects the near-field coherent structures in a jet. Furthermore, in the majority of these investigations the focus was on mixing and on entrainment. To the authors' best knowledge, an analysis of the lift-up mechanism in jets issuing from a non-circular orifice has never been performed. The novel contribution of this work is that we examine and compare near-field coherent structures in jets with two different orifice geometries, i.e.~a round orifice and a fractal orifice, using Fourier Proper Orthogonal Decomposition as the primary tool for investigation. Although a fractal orifice may not appear to be of direct interest to industry, this geometry embedding a wide range of length scales can be used to efficiently assess the sensitivity of the near-field coherent structures to the flow initial conditions. The velocity fields under analysis are obtained from planar and tomographic PIV at high spatial resolution used in previous works \citep{Breda2018b, Breda2019}. Measurements at two equivalent nozzle diameters from the exit are considered. In the first part of the study, we quantify the role of the orifice geometry on the energy distribution across the hierarchy of modal structures from the three velocity components. The properties of self-similarity of these modal structures are examined for the two orifice jets, both along the radial and the azimuthal directions. Analogies and dissimilarities between POD modes observed in these jets and those recently found in pipe flows are discussed. In the second part of the study, the structures of streamwise vorticity leading to the near-field velocity streaks (\textcite{Nogueira2019}) are investigated. The focus is on the effects of the orifice geometry on the spatial characteristics of these vorticity structures and on the lift-up mechanism of streaks formation.
\section{Experimental datasets}
The jet flow was generated by an open jet facility at Imperial College London, described elsewhere \cite{Breda2018b, Breda2019} (see figure 2 of Ref.~\cite{Breda2018b} for details on the nozzle geometry). In order to prevent biasing the particle images with unseeded, quiescent air being entrained into the jet, a seeded, mild co-flow of air was applied. The exit flow was found to have a sharp 'top-hat' mean velocity profile and a turbulence intensity $< 1 \%$.
\begin{figure}
\centering
\includegraphics[width=0.62\textwidth]{nozzle_geometry.png}
\caption{The orifice geometries considered in this study, (a) circular and (b) fractal. The red circle in panel (b) traces the outline of the circular orifice in panel (a).}
\label{fig:nozzle}
\end{figure}
A round orifice and a fractal orifice were used as illustrated in Fig.~\ref{fig:nozzle}, where the fractal geometry is obtained from a repeating fractal pattern applied to a base square shape and has a fractal dimension of 1.5 with 3 iterations. This pattern substantially increases both the wetted perimeter and the number of corners. A similar geometry has been previously adopted for the perimeter of axisymmetric wake-generating plates, where a break-up of the coherence and a reduction of the shedding energy were observed by \textcite{Nedic2015}. The two orifices have the same open area of $D_e^{2}$, where $D_e = 15.78$ mm is the equivalent diameter as defined in \textcite{Breda2018a}.
The two jets have the same exit velocity $U_j = 9.93~\mathrm{m\,s^{-1}}$, meaning that the Reynolds number $Re_{D_e} = {U_j D_e}/{\nu}$ is the same and is equal to $10^{4}$.
A Cartesian coordinate system is introduced, centred on the geometric centre of the orifices, with the $z$-axis oriented along the streamwise flow direction and where the $x$- and $y$-axes are aligned with the diagonals of the fundamental square pattern for the fractal orifice geometry, as illustrated in Fig.~\ref{fig:nozzle}(b). In polar coordinates, radial and azimuthal directions are indicated by $r$ and $\theta$, respectively, with the latter originating on the $x$-axis. Velocity fluctuations along the three Cartesian directions are denoted as $u_x, u_y$ and $u_z$, while time averaged quantities are denoted by an overbar. In polar coordinates, radial and azimuthal velocity fluctuations are denoted as $u_r$ and $u_\theta$, respectively.
Three datasets from particle image velocimetry (PIV) are analysed in this work, \textit{i.)} a dataset from planar PIV at higher resolution (HPIV), \textit{ii.)} a dataset from planar PIV at lower resolution but at larger field of view (LPIV), and \textit{iii.)} a dataset from tomographic PIV (TPIV). To obtain the HPIV dataset, a final interrogation area of $12 \times 12$ pixels with a $50\%$ overlap was used in the processing. This led to a vector spacing of $0.013D_{e}$, with a spatial resolution of less than $3.7\eta$, where $\eta$ is the Kolmogorov length scale. The LPIV dataset enabled the study of the flow in the range between $0$ and $23D_{e}$, as shown in figure 2.5 of \textcite{Bredathesis2018}. The fields of view were $9D_{e} \times 6.7D_{e}$. An initial interrogation area of $64 \times 64$ pixels was used to process the images hierarchically, with a final window size of $16 \times 16$ pixels. A $50\%$ overlap between adjacent interrogation areas was used leading to a final vector spacing of $0.03D_{e}$. Additional details on the two datasets of planar PIV can be found in \textcite{Breda2018b} and in \textcite{Bredathesis2018}.
The three-dimensional velocity fields under analysis were obtained with tomographic particle image velocimetry (TPIV) at two equivalent diameters downstream of the orifice. The interrogation volume was $48 \times 48 \times 48$ voxels, with a $75 \%$ window overlap. The spatial resolution in the worst case was $11 \eta$. The processing of the PIV images from the three datasets was done with the software DaVis by LaVision. Additional information on the TPIV measurements can be found in \textcite{Breda2019}, section 2.3.
\section{Mean flow properties}
\label{sec:meanflow}
In this section, mean flow properties of the two jets are illustrated and discussed. The spreading rate of the two jets is first examined. From Fig.~\ref{fig:mean_streamwise}(a), the two jets exhibit analogous spreading rates. Specifically, the spreading rate is around $\mathrm{d} r_{1/2}/\mathrm{d}z=0.09$, where $r_{1/2}$ is the jet half-width. This result is similar to previous measurements on a jet issuing from circular nozzle (see \textcite{Panchapakesan1993} and \textcite{Hussein1994}). However, Fig.~\ref{fig:mean_streamwise}(a) shows that the fractal jet exhibits a larger half-width than the round jet.
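For reference, the half-width $r_{1/2}(z)$ used here is the radial location at which the mean streamwise velocity falls to half of its centerline value, and the spreading rate is the slope of $r_{1/2}(z)$ in the region where it grows linearly. A minimal sketch of how these quantities may be extracted from a mean velocity profile is given below (illustrative Python with hypothetical array names, not the actual processing; it assumes the profile decreases monotonically through the half-velocity crossing).
\begin{verbatim}
import numpy as np

def half_width(r, u_mean):
    """Radius where u_mean first drops below half of its centerline value.
    r, u_mean : 1-D arrays, with r[0] = 0 at the jet centerline."""
    u_half = 0.5 * u_mean[0]
    i = np.argmax(u_mean < u_half)           # first index below the half value
    # linear interpolation between the bracketing points r[i-1] and r[i]
    w = (u_mean[i - 1] - u_half) / (u_mean[i - 1] - u_mean[i])
    return r[i - 1] + w * (r[i] - r[i - 1])

# Spreading rate: slope of a straight-line fit of r_half against z
# r_half = np.array([half_width(r, U[k]) for k in range(len(z))])
# spreading_rate = np.polyfit(z, r_half, 1)[0]   # approximately 0.09 in both jets
\end{verbatim}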
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{mean_streamwise.pdf}
\caption{Mean flow properties for both the round jet (red circles) and the fractal jet (cyan crosses). (a) Spreading rate, (b) streamwise velocity decay, (c) turbulence intensity. }
\label{fig:mean_streamwise}
\end{figure}
In Fig.~\ref{fig:mean_streamwise}(b), the streamwise velocity decay is shown, where $U_{CL}$ is the jet centerline velocity and $z^*$ is the jet virtual origin \cite{Pope2000}. Beyond a distance from the orifice of ten equivalent diameters, the streamwise velocity decay becomes approximately constant. The value of the velocity-decay constant is not the same in the two jets, which is an important difference in the mean flow properties. Specifically, a larger decay is observed in the fractal jet. This is a direct consequence of the shorter potential core of the round jet when compared to the fractal jet, as shown by \textcite{Breda2018b}.
Additional evidence that the potential core of the round jet is shorter than that of the fractal jet comes from examining the streamwise turbulence intensity $u_{z_{rms}}/U_j$ at the centerline, shown in Fig.~\ref{fig:mean_streamwise}(c). In fact, the turbulence intensity at the centerline of the round jet presents a peak at $z/D_{e} \approx 6$ before starting to decay, whereas for the fractal jet the velocity fluctuations saturate farther from the orifice, at $z/D_{e} \approx 9$.
Radial profiles of the streamwise velocity of the round jet (continuous red line) and of the fractal jet (dashed cyan line) are presented in Fig.~\ref{fig:planar-profiles}(a), at the same four different downstream locations from the orifice, i.e. at $z/D_{e}=1$, $2$, $3$, and $4$. Over this range, the velocity profiles undergo a transition from a top-hat shape to nearly Gaussian, and exhibit a radial spread.
It can also be observed that at $z/D_{e}=1$ and $2$ both velocity profiles present a mild overshoot, as they reach values of $u_z(x,0,\cdot)/U_{j}>1$. This can be explained as a \textit{vena contracta} effect in proximity of the orifices. At $z/D_{e}=4$, the centerline velocity of the fractal jet is larger than that of the round jet, suggesting that the jet spreads radially more pronouncedly in the round jet, despite the strong mixing action induced by the irregular orifice geometry. This observation is consistent with \textcite{Breda2018b}.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{planar-profiles}
\caption{In the first column, (a) mean streamwise velocity profiles at four different streamwise distances from the orifice, for the round (red continuous line) and fractal (cyan dashed line) orifices; (b,c) profiles of the streamwise and transversal rms velocity, respectively, for the two orifices at the same streamwise locations. In the second column, planar turbulent kinetic energy $\overline{k}^* \equiv \overline{(u_z^2 + u_x^2)}/2$ normalised by the local streamwise velocity, for (d) the round jet and (e) the fractal jet. }
\label{fig:planar-profiles}
\end{figure}
The radial profiles of the streamwise velocity fluctuation rms are given in Fig.~\ref{fig:planar-profiles}(b), at the four different downstream locations from the orifice. In general, more intense fluctuations in the near field region are observed in the round jet. At $z/D_{e}=1$, the profiles exhibit two local maxima in proximity of the shear layer and a local minimum at the centerline. At growing downstream locations, the local maxima tend to become weaker, whereas the local minimum strengthens. The increase of the turbulence levels at the centerline is associated with the thickening of the shear layer at increasing downstream locations. Fig.~\ref{fig:planar-profiles}(c) shows profiles of the rms transversal velocity, i.e. of the rms of the velocity component along the $x$-axis (see Fig.~\ref{fig:nozzle}). These profiles evidence a much stronger discrepancy between the round jet and the fractal jet than the streamwise velocity rms profiles observed in Fig.~\ref{fig:planar-profiles}(b). In fact, the transversal velocity fluctuations in the fractal jet are significantly less intense, especially at $z/D_e = 1$. Such an attenuation is consistent with the delay of the streamwise velocity decay found in the fractal jet, and with a potential core extending over a longer streamwise distance. It is anticipated here, and fully detailed in the Fourier-POD analysis of the velocity fluctuations at $z/D_e=2$ reported in section \ref{sec:podcomponents}, that this reduction is associated to the suppression of the strong Kelvin-Helmholtz instabilities featuring in the near field of the round jet, which strongly couple the streamwise and transversal velocity components.
The planar turbulent kinetic energy in the near field of the fractal and round jets is reported in Figs.~\ref{fig:planar-profiles}(d) and \ref{fig:planar-profiles}(e), respectively. These maps confirm that the potential core of the fractal jet presents a longer downstream development. This finding is consistent with the downstream development of the centerline turbulence intensity in Fig.~\ref{fig:mean_streamwise}(c), and with the estimates of the number of eddies overturned in time for a given downstream distance estimated in \textcite{Breda2018b} (their figure 7).
From the discussion of the results presented up until here, the fractal orifice leads to the following modifications of the near-field jet structure when compared to the circular orifice: 1) an increase of the streamwise extent of the potential core; 2) a reduction of the decay rate of the streamwise velocity; 3) a strong attenuation of the transversal velocity rms and a mild attenuation of the streamwise velocity rms. These experimental findings are consistent with recent results from linear stability analysis presented in \textcite{Lyu2019} and in \textcite{Lajus2019} on jets issuing from non-circular (lobed) nozzles. These two studies concluded that the temporal growth of near-field instabilities decreases for increasing number of lobes. Additionally, \textcite{Lyu2019} observed that also a larger penetration ratio of the lobes of the jet nozzle produces a decrease in the temporal growth rate.
\begin{figure}
\includegraphics[width=0.9\textwidth]{average-flow-vort}
\caption{(a,d) Maps of the mean streamwise velocity for (a) the round jet and (d) the fractal jet; (b,e) maps of the mean planar vorticity for (b) the round jet and (e) the fractal jet; (c) radial location of the maximum of $\bar{\omega}_{\bot}$ as a function of the azimuth $\theta$ for both jets; (f) mean velocity components $\bar{u}_{x}$ and $\bar{u}_{y}$ on the $xy$ plane and map of the mean streamwise vorticity $\bar{\omega}_{z}$ for the fractal jet. }
\label{fig:average-flow-vort}
\end{figure}
\begin{figure}
\includegraphics[width=0.9\textwidth]{average-flow-rms}
\caption{Maps of the root mean square (rms) of the (a, d) streamwise, (b, e) radial, and (c, f) azimuthal velocity components for (a, b, c) the round jet (top row) and (d, e, f) the fractal jet (bottom row). }
\label{fig:average-flow}
\end{figure}
We now focus on the TPIV dataset and present maps of the normalised mean streamwise velocity $\bar{u}_z(x, y, z_\textup{0})$ at a downstream distance from the orifice of $z_\textup{0}/D_e = 2$. These are shown in Figs. \ref{fig:average-flow-vort}(a, d) for the round and fractal orifices, respectively. The fractal geometry produces a significant spatial modulation of the mean velocity distribution in the near field, breaking the azimuthal symmetry of the vortex sheet that is otherwise observed for the round jet. This effect is even more evident when looking at the norm of the mean planar vorticity $\bar{\omega}_{\bot}=\sqrt{\bar{\omega}_{r}^2 + \bar{\omega}_{\theta}^2}$, in Figs. \ref{fig:average-flow-vort}(b,e), with $\bar{\omega}_{r}$ and $\bar{\omega}_{\theta}$, the mean radial and azimuthal vorticity components. The location of the maximum of the planar vorticity along the radial direction, Fig.~\ref{fig:average-flow-vort}(c), reveals that important differences exist between the two orifices. Although in the round jet the mean planar vorticity is maximum in the proximity of the orifice lip line, at a nearly constant radial position, in the fractal jet the radial location of the planar vorticity peak varies with the azimuth. In particular, the radial location varies between $r/D_e=0.4$ and $r/D_e=0.6$. Associated to the corrugation of the mean vortex sheet, a modulation of the mean streamwise vorticity $\bar{\omega}_z$ is also observed, as shown in Fig.~\ref{fig:average-flow-vort}(f), in the form of pairs of adjacent positive-negative patches, elongated along the radial direction. Each of these pairs is associated with a distinct lobed structure from the map of the mean streamwise velocity shown in Fig.~\ref{fig:average-flow-vort}(d), resulting from secondary flows in the plane of the figure. In total, eight pairs of positive-negative patches are formed, four of which characterised by a stronger intensity (see \textcite{Breda2018a} for a deeper discussion). It is remarkable that the mean streamwise vorticity within these patches is as large as one third of the mean planar vorticity in Fig.~\ref{fig:average-flow-vort}(e). In wall bounded turbulent flows, spanwise inhomogeneities of surface attributes were found to produce streamwise-elongated vorticity structures (see \textcite{Pujals2010}, \textcite{Kevin2017}, \textcite{Vanderwel2019} among others), resulting from the anisotropy of turbulent stresses \cite{perkins_1970, anderson_barros_christensen_awasthi_2015}. In laminar flows, streamwise-elongated vortices were found to delay transition to turbulence \cite{Fransson2005, Fransson2006} in boundary layers and to reduce growth rates of Kelvin-Helmholtz-type instabilities in free shear flows (see \textcite{Lajus2015}, \textcite{Boujo2015}, \textcite{Marant2018} among others). In the present case, the near-field flow in the fractal jet is highly turbulent and strongly influenced by the irregular geometry of the nozzle. However, as it will be extensively quantified in the next sections using a Proper Orthogonal Decomposition, a significant reduction of the strength of azimuthally coherent velocity fluctuations in the fractal jet is observed \cite{Breda2018a}, which is akin to mechanisms observed in laminar flows.
The strong shear between the jet flow and the quiescent ambient fluid is responsible for an intense production of turbulent kinetic energy. To examine how the orifice geometry affects the spatial distribution of the different terms constituting the turbulent kinetic energy, we show in Fig.~\ref{fig:average-flow} maps of the rms value of the (a, d) streamwise, (b, e) radial, and (c, f) azimuthal velocity components, respectively, for the (top row) round jet and for the (bottom row) fractal jet. Among the three components, the streamwise velocity is dominant. For this component, the rms value is significantly lower in the fractal jet, consistent with our observations from the planar PIV datasets presented in Fig.~\ref{fig:planar-profiles}(b). As mentioned above, it is argued that this effect can be attributed to the strong deviation from the azimuthal symmetry of the fractal orifice, which reduces the growth of Kelvin-Helmholtz instabilities and the near-field jet development. A lower intensity of Kelvin-Helmholtz-induced fluctuations also explains the significantly lower levels of radial velocity rms in the fractal jet, since azimuthally coherent vortex rings also induce strong radial motions \cite{Liepmann1992}. This can be observed in Figs.~\ref{fig:average-flow}(b) and \ref{fig:average-flow}(e). We note that, although not shown here, the correlation coefficient between the radial and streamwise components in the region of intense shear is comparable in the two jets. The coefficient, approximately 0.6, is positive, since a positive (outward) radial motion induces a positive streamwise fluctuation by exploiting the high mean velocity gradients in the near field. On the other hand, the rms of the azimuthal velocity appears to be milder than that of the other two components, and of comparable strength in both jets, as shown in Figs.~\ref{fig:average-flow}(c) and (f).
\section{Proper orthogonal decomposition: method of analysis}
\label{pod-method}
We use Proper Orthogonal Decomposition to identify near-field coherent structures in an objective manner and elucidate the role of the orifice geometry on the structure of turbulence.
We follow the standard approach for problems defined in cylindrical coordinates \citep{GAMARD:2002eh, Hellstrom2016}, and consider velocity fluctuation vector fields $\mathbf{u}(r, \theta, z, t)$ with radial, azimuthal and streamwise velocity components restricted to a radial-azimuthal plane located at $z_0/D_e = 2$. For completeness, we provide in what follows a brief description of the methodology, restricted to the scalar implementation where the analysis is performed on each velocity component independently \cite{Tinney2008}.
A classical variational technique \citep{lumley1967} can be used to derive a complete, orthonormal set of modal structures $\{ \phi^i_j(r, \theta) \}_{i=1}^\infty$, ordered by the modal kinetic energies $\{\lambda^i_j \}_{i=1}^\infty$, from the solution of the integral eigenvalue problem
\begin{equation}
\int_0^{2\pi} \int_0^\infty R_j(r, r^\prime, \theta, \theta^\prime) \phi^i_j(r^\prime, \theta^\prime)r^\prime \mathrm{d}r^\prime\mathrm{d}\theta^\prime = \lambda^i_j\phi^i_j(r, \theta),
\end{equation}
where $R_j(r, r^\prime, \theta, \theta^\prime)$ is the two-point correlation tensor defined by
\begin{equation}\label{eq:two-point-corr-tensor}
R_{j}(r, r^\prime, \theta, \theta^\prime) = \mathrm{E}\left[ u_j(r, \theta, t) u_j(r^\prime, \theta^\prime, t)\right],
\end{equation}
where $\mathrm{E}[\cdot]$ is the expectation operator, and $j$ identifies the radial ($r$), azimuthal ($\theta$) or streamwise ($z$) velocity component. For space-only POD, as in the present case, the expectation operator is the arithmetic average over the available velocity snapshots.
For the round jet, the equations of motion and boundary conditions are equivariant under the continuous group of rotations $\mathcal{R}^\beta : \mathbf{u}(r, \theta) \mapsto \mathbf{u}(r, \theta + \beta)$. Turbulence statistics are then homogeneous along the azimuthal coordinate $\theta$ and the two-point correlation tensor $R$ only depends on the azimuthal separation, i.e. $R_{j}(r, r^\prime, \theta, \theta^\prime) = R_{j}(r, r^\prime, \theta - \theta^\prime)$. It is well known that, in this particular case, the POD modes have the azimuthal structure of Fourier modes. This property can be exploited to first Fourier transform the velocity fluctuation snapshots along the azimuthal direction
and then apply the POD to the complex-valued transformed fields by solving the eigenvalue problem
\begin{equation}\label{eq:integral-equation-fourier}
\int_0^\infty R_j(r, r^\prime, m) \phi_j^i(r^\prime, m)r^\prime \mathrm{d}r^\prime = \lambda^i_j(m)\phi_j^i(r, m),
\end{equation}
for a set of azimuthal wavenumbers $m$, where $R_{j}(r, r^\prime, m) = \mathrm{E}[u_j(r, m, t) u^\dag_j(r^\prime, m, t)]$, with $(\dag)$ denoting complex conjugation. To lift the asymmetry of the kernel in the integral equation (\ref{eq:integral-equation-fourier}) introduced by the term $r^\prime$ arising from the energy-based inner product of velocity fields defined over a cylindrical coordinate system, we follow the established approach used in other works on round jets or pipe flow \citep{Iqbal2007, Hellstrom2016}. In what follows, we refer to this approach as Fourier-POD analysis. In practice, we proceeded by 1) interpolating the PIV velocity fluctuation fields onto a polar grid originating at the jet centre (identified from the mean field), 2) Fourier transforming the data along the azimuthal direction, 3) constructing the two-point correlation tensor $R_j(r, r^\prime, m)$ and 4) solving the discrete equivalent of the eigenvalue problem of equation (\ref{eq:integral-equation-fourier}).
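To make steps 1)--4) concrete, a minimal sketch of the scalar Fourier-POD for one velocity component is reported below (an illustration in Python/NumPy rather than the actual processing chain; the array names and the simple radial quadrature weights are our own assumptions, and the snapshots are assumed to be already interpolated onto a uniform polar grid that excludes the axis point).
\begin{verbatim}
import numpy as np

def fourier_pod(u, r, dr):
    """Scalar Fourier-POD of one velocity component.

    u  : (n_snapshots, n_r, n_theta) fluctuations on a polar grid
    r  : (n_r,) radial coordinates (assumed r > 0, axis point excluded)
    dr : (n_r,) radial quadrature weights
    Returns modal energies lam[m, i] and radial modes phi[m, i, :]."""
    n_t, n_r, n_theta = u.shape
    u_hat = np.fft.fft(u, axis=2) / n_theta        # step 2: azimuthal Fourier transform
    w = np.sqrt(r * dr)                            # weights that symmetrise the kernel
    lam = np.zeros((n_theta, n_r))
    phi = np.zeros((n_theta, n_r, n_r), dtype=complex)
    for m in range(n_theta):
        a = u_hat[:, :, m] * w                     # weighted Fourier coefficients
        R = a.conj().T @ a / n_t                   # step 3: correlation tensor R(r, r'; m)
        vals, vecs = np.linalg.eigh(R)             # step 4: eigen-decomposition
        order = np.argsort(vals)[::-1]             # sort by decreasing modal energy
        lam[m] = vals[order]
        phi[m] = (vecs[:, order] / w[:, None]).T   # undo the weighting
    return lam, phi
\end{verbatim}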
For the jet issuing from the fractal orifice, the equations and boundary conditions are equivariant under a cyclic group of order four generated by the symmetry $\mathcal{T} = \mathcal{R}^{\pi/2}$, rotating velocity fields around the $z$ axis by $\pi/2$. The important consequence is that velocity statistics are still periodic, but are not homogeneous in the azimuthal direction. The two-point correlation tensor (\ref{eq:two-point-corr-tensor}) does not depend only on the azimuthal separation $\theta - \theta^\prime$ and the POD modes do not necessarily have a simple harmonic azimuthal structure. Hence, while the Fourier-POD analysis is optimal for the velocity dataset of the round jet, it inevitably becomes sub-optimal for the fractal orifice dataset. However, using the same analysis for the two geometries has the advantage of enabling a direct comparison of a) the azimuthal wavenumber distribution of the kinetic energy and of b) the radial profiles of the modal structures. To quantify the degree of sub-optimality of the Fourier-POD approach for the jet issuing from the fractal orifice, we also compute and present optimal structures for the fractal orifice geometry by applying the snapshot POD approach \citep{sirovich} to a larger dataset constructed by applying the rotations $\mathcal{T}, \mathcal{T}^2$ and $\mathcal{T}^3$ on each fluctuation velocity field, thus increasing the number of snapshots used in the analysis by a factor of four.
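For the fractal-orifice dataset, the augmentation of the snapshot set by the rotations $\mathcal{T}$, $\mathcal{T}^2$ and $\mathcal{T}^3$ can be sketched in the same spirit (again an illustration under the assumption that the number of azimuthal grid points is divisible by four; for velocity components expressed in the polar basis, a rotation by $\pi/2$ reduces to a shift of the azimuthal index).
\begin{verbatim}
import numpy as np

def augment_with_quarter_rotations(u):
    """Append to the snapshot set the images of each snapshot under T, T^2 and
    T^3 (rotations by pi/2, pi and 3*pi/2 about the jet axis).
    u : (n_snapshots, n_r, n_theta) with n_theta divisible by 4."""
    n_theta = u.shape[2]
    shifts = [0, n_theta // 4, n_theta // 2, 3 * n_theta // 4]
    return np.concatenate([np.roll(u, s, axis=2) for s in shifts], axis=0)
\end{verbatim}
Standard snapshot POD can then be applied to the augmented set, which is four times larger than the original one.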
\section{Proper orthogonal decomposition of the three velocity components}
\label{sec:podcomponents}
From the analysis of the maps of mean velocity and vorticity examined in previous sections, it was argued that the fractal orifice breaks the azimuthal coherence of the vortex rings that can typically be found in jets issuing from a circular orifice. Therefore, the instantaneous velocity fields of the fractal jet are expected to be populated by structures that are much smaller in size compared with those in the round jet. This should be more evident in regions characterised by intense shear, therefore in proximity to the lip line. In Fig.~\ref{fig:snapshots}, instantaneous snapshots of the streamwise velocity fluctuations are presented, for the (a, b, c) round jet and for the (d, e, f) fractal jet, to illustrate this behaviour. Velocity vectors in the $x-y$ plane are also shown to illustrate the nature of the cross-plane flow. Footprints of the vortex rings associated with the Kelvin-Helmholtz instabilities passing through the observation plane can be observed in the three snapshots obtained from the round jet. The streamwise velocity fluctuations are organised in large coherent structures developing along the azimuthal direction, and preferentially located in the proximity of the lip line. Structures of smaller size and presenting a more random orientation, on the other hand, can be observed in the instantaneous snapshots from the fractal jet. Therefore, the fractal orifice appears to break the large-scale, azimuthally-coherent structures, and redistributes the resulting energy among structures at smaller length scales. It can also be observed that the cross-plane velocity components are weaker in the fractal jet, consistent with the lower values of the rms of the radial velocity component in Fig.~\ref{fig:average-flow-vort}(e). Later in this paper, we will often be referring to Fig. \ref{fig:snapshots} to show how the analytical results that will be presented are supported by the qualitative observations from these instantaneous snapshots.
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{snapshots.png}
\caption{Snapshots of the instantaneous vector fields containing the velocity components $u_{x}$ and $u_{y}$ on the $x-y$ plane and maps of the streamwise velocity fluctuations at $z_0/D_{e} = 2$, for the (a,b,c) round jet and for the (d,e,f) fractal jet. Note that the vector fields of the velocity components $u_{x}$ and $u_{y}$ are overlaid on the colour maps of $u^{\prime}_z$, and they share the same Cartesian axes.}
\label{fig:snapshots}
\end{figure}
\subsection{Analysis of the POD energy distribution}
To quantify how the turbulent kinetic energy at $z_0/D_{e} = 2$ is distributed across coherent structures at different wavenumbers, the Fourier-POD analysis described in section \ref{pod-method} is performed. A scalar implementation of this technique (see \textcite{Tinney2008}) is applied separately to each of the three velocity components. The relative fraction of turbulent kinetic energy captured by the first three POD modes for the first eleven azimuthal wavenumbers $m$ in each direction is presented in Fig.~\ref{fig:energy} by vertical bars. The first, second, and third rows of the figure correspond to the streamwise, radial, and azimuthal velocity components, respectively, while the left and right columns show the results of the analysis for the round jet and for the fractal jet, respectively. The dashed cyan lines represent the cumulative energy distribution of the first POD mode of each wavenumber as a function of $m$, whereas the continuous cyan lines represent the cumulative energy distribution of the first five POD modes. Note that the quantities in this figure are normalised with respect to the overall energy across the three components. Examination of Figs.~\ref{fig:energy}(a) and \ref{fig:energy}(d) shows that the cumulative energy contained in the first ten POD modes and in the first eleven azimuthal wavenumbers of the streamwise velocity component is similar for the two orifices, i.e.~it is equal to approximately $60\%$. However, there are significant differences in the distribution of turbulent kinetic energy among the wavenumbers.
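The bar heights and the cumulative curves of Fig.~\ref{fig:energy} follow directly from the eigenvalue arrays of the three scalar analyses; the sketch below (Python) only illustrates the bookkeeping, under the assumption that each eigenvalue array contains all wavenumbers and modes so that its grand sum equals the energy of the corresponding component.
\begin{verbatim}
import numpy as np

def energy_fractions(lam, n_modes=3, n_wavenumbers=11):
    """Bar heights and cumulative curves of the Fourier-POD energy distribution.

    lam: dictionary of eigenvalue arrays, one per velocity component, each of
    shape (wavenumbers, modes), normalised here by the total turbulent kinetic
    energy across the three components.
    """
    total = sum(l.sum() for l in lam.values())
    bars = {c: l[:n_wavenumbers, :n_modes] / total for c, l in lam.items()}
    cum_first = {c: np.cumsum(l[:n_wavenumbers, 0]) / total for c, l in lam.items()}
    cum_five = {c: np.cumsum(l[:n_wavenumbers, :5].sum(axis=1)) / total
                for c, l in lam.items()}
    return bars, cum_first, cum_five
\end{verbatim}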
\begin{figure}[ht]
\includegraphics[width=0.9\textwidth]{energy-components-2.pdf}
\caption{Relative energy distribution of the first three POD modes $i \in [1, 3]$ at the first eleven azimuthal modes, $m \in [0, 10]$, (a,c,e) for the round jet (left column) and (b,d,f) for the fractal jet (right column), at the downstream location of $z_0$. The POD analysis is performed for the (a,b) streamwise (first row), (c,d) radial (second row), and (e,f) azimuthal (third row) components of the three-dimensional velocity vector field. The cumulative energy content of the first and of the first five POD modes is represented by a cyan dashed line and a cyan continuous line, respectively. For the fractal orifice, the cumulative energy of the first $10 (m + 1)$ snapshot POD modes is reported as a thicker red line, to quantify the sub-optimality of the Fourier-POD analysis.}
\label{fig:energy}
\end{figure}
In the round jet, streamwise velocity fluctuations in the near field are dominated by the development of the Kelvin-Helmholtz instability in the axisymmetric vortex sheet, leading to the roll-up of azimuthally coherent vortical rings in the transition region \citep{yule78}. This is observed in our results. The energy for the streamwise component is mostly concentrated at the wavenumber $m=0$, consistent with the results of \textcite{Jung2004}, see their figure 7. The energy then decreases rapidly and monotonically with increasing $m$, which differs slightly from \textcite{Jung2004}, where the energy distribution of the first POD mode shows a local maximum at $m=6$. This subtle difference between our analysis and the study by \textcite{Jung2004} could be attributed to the lower Reynolds number of the present jet. The increase of the relative dominance of the first POD mode at the fundamental wavenumber for decreasing Reynolds numbers observed in figure 8 of \textcite{Jung2004} is consistent with this explanation. Another explanation for this difference is that the jet investigated by \textcite{Jung2004} is a nozzle jet while the jet examined here is an orifice jet. Therefore, the subtle decrease in the relative dominance of modes at $m=0$ could be a consequence of the \textit{vena contracta} effect associated with the orifice geometry. In their experiments, \textcite{Iqbal2007} found that at $x/D=3$ the dominant wavenumber mode is $m=1$. \textcite{Iqbal2007} attributed the observed difference between their results and the results from \textcite{Jung2004} to the different initial conditions of the jet flows.
The analysis of the jet issuing from the fractal orifice, Fig.~\ref{fig:energy}(d), reveals that the largest amount of streamwise fluctuation energy is contained at the fundamental wavenumber $m=4$. This is consistent with the fundamental geometric square pattern of the orifice, but is unrelated to the mean field of Fig.~\ref{fig:average-flow-vort}, since the Fourier-POD analysis is performed on the zero-mean velocity fluctuation fields. The dominance of the wavenumber mode associated with the orifice base pattern is expected also for other orifice geometries, although less markedly as the number of sides of the base pattern increases. Furthermore, in the fractal jet, the energy of the streamwise fluctuations is more scattered among the different azimuthal modes in the range $0 \le m \le 6$. As evident from the instantaneous snapshots in Fig.~\ref{fig:snapshots}, the physical mechanism at play is that the fractal geometry promotes the break-up of the near-field azimuthal coherence observed in the round jet. Hence, energy is injected at the fundamental wavenumber $m=4$ and is scattered across the wavenumber spectrum by the convective nonlinearity of the equations of motion.
Analogous to the Fourier-POD analysis of the streamwise component, the analysis of the radial component reveals important differences between the round jet and the fractal jet, as can be appreciated from Figs.~\ref{fig:energy}(b) and \ref{fig:energy}(e). When looking at the cumulative energy distribution, it can be observed that the energy captured by the first five POD modes and by the first eleven azimuthal modes is larger in the round jet. Also, the cumulative energy from the first POD mode of each of the first eleven azimuthal wavenumbers (cyan dashed line) is almost twice as large in the round jet as in the fractal jet. If we focus on the first POD mode of the round jet, we can observe that the zeroth and the first azimuthal wavenumbers, together, capture over $9 \%$ of the total energy. The observed dominance of the zeroth azimuthal wavenumber in the round jet is indicative of a low-dimensional behaviour of the radial component, which can also be appreciated in the instantaneous snapshots of Fig.~\ref{fig:snapshots}. Overall, the energy distribution among the modes obtained from the radial velocity component is consistent with previous experimental investigations with hot-wire anemometry from the literature \cite{Tinney2008, Iqbal2007, Jung2004}. In the fractal jet, however, the first POD mode accounts for a much smaller percentage of the total energy, independent of the wavenumber considered. The marginal importance of the radial POD modes in the fractal jet is a consequence of the reduced growth rate of the Kelvin-Helmholtz instabilities. As discussed in the recent stability analyses by \textcite{Lyu2019} and \textcite{Lajus2019}, the stronger the deviation from axisymmetry of the jet orifice, the lower the growth rate of the near-field instabilities. For this reason, at the downstream position of $z_0/D_e = 2$, the energy of the radial velocity fluctuations is much larger in the round jet than in the fractal jet, as the rms profiles in Figs.~\ref{fig:average-flow}(b) and \ref{fig:average-flow}(e) also show.
The Fourier-POD analysis of the azimuthal component is presented in the third row of Fig.~\ref{fig:energy}. Here, the energy distribution appears almost insensitive to the orifice geometry. The wavenumber $m=1$ is dominant in both jets, with a monotonic decrease of the energy content of the modes for progressively higher wavenumbers. An analogous trend was found in previous Fourier-POD analyses of the azimuthal component from the literature \cite{Jung2004, Iqbal2007, Tinney2008}. The overall cumulative contribution of the first five POD modes and of the first eleven wavenumbers is only approximately $15\%$ of the total energy. Therefore, among the three velocity components, the azimuthal component captures the lowest proportion of the turbulent kinetic energy.
The energy distribution presented in Fig.~\ref{fig:energy} is obtained by applying a Fourier decomposition in the azimuthal direction followed by a POD analysis. This procedure has the advantage of enabling a straightforward comparison between the two jets, although it is sub-optimal for the fractal orifice geometry, as explained in section \ref{pod-method}. Alternatively, the optimal modes could be obtained by a snapshot POD analysis. It is thus of interest to quantify possible changes in the cumulative energy distribution when this is computed from the modes obtained with the optimal snapshot POD analysis. The cumulative energy distribution of the first $10(m+1)$ modes obtained from the optimal snapshot POD analysis of the fractal orifice dataset is shown by the continuous red line in Figs.~\ref{fig:energy}(d, e, f). The Fourier-POD analysis, which is sub-optimal, leads to a cumulative distribution that only mildly underestimates the one obtained from the optimal snapshot POD analysis, supporting the relevance of the discussion based on the Fourier-POD analysis.
\begin{figure}
\includegraphics[width=0.9\textwidth]{energy-components-1.pdf}
\caption{Relative energy contribution from the (a) streamwise, (b) radial, and (c) azimuthal velocity components to the energy contained within each of the first thirty wavenumbers. Red circles represent the round jet and cyan crosses the fractal jet. }
\label{fig:energy-components-1}
\end{figure}
A more detailed quantification of the relative energy contributions of the POD modes of the different velocity components as a function of the azimuthal wavenumber is presented in Fig.~\ref{fig:energy-components-1}. The three panels show the ratios
\begin{equation}
\beta_{(\cdot)}(m) = \frac{\sum_{i}\lambda^i_{(\cdot)}(m)}{\sum_i \left [\lambda^i_z(m) + \lambda^i_r(m) + \lambda^i_\theta(m) \right ]},
\end{equation}
for the streamwise, radial and azimuthal components. As expected, and consistent with the results of Fig.~\ref{fig:energy}, the largest energy contribution to the total turbulent kinetic energy is obtained from the Fourier-POD modes of the streamwise velocity component. In Fig.~\ref{fig:energy-components-1}(a), the modes from the streamwise velocity component are responsible for approximately $70 \%$ of the energy contained in the wavenumber $m=0$, both in the round jet and in the fractal jet. However, this result should not be misinterpreted. The contribution of the wavenumber $m=0$ to the total turbulent kinetic energy is more than twice as large in the round jet as in the fractal jet. This can be observed when comparing the cyan continuous lines in Figs.~\ref{fig:energy}(a) and \ref{fig:energy}(b). From Fig.~\ref{fig:energy-components-1}(b), as the wavenumber increases, the energy from the streamwise velocity component becomes gradually less important, whereas the radial component captures increasingly more energy. The most significant difference between the round jet and the fractal jet can be observed when assessing the proportion of energy taken by the radial and azimuthal components at low wavenumbers. In the round jet, the radial component captures around $30\%$ of the energy contained at wavenumber $m=0$, while the azimuthal component accounts for less than $5\%$. In the round jet, the proportion of energy from the azimuthal component is the lowest at every wavenumber. In the fractal jet, the same behaviour holds, with the exception of $m=1,2,3$.
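As a minimal illustration, the ratios defined above can be assembled from the eigenvalue arrays of the three scalar analyses; in the sketch below (Python) the arrays are assumed to have shape (wavenumbers, modes).
\begin{verbatim}
import numpy as np

def beta(lam_z, lam_r, lam_t):
    """Relative contribution of each velocity component at every wavenumber m."""
    e_z, e_r, e_t = (l.sum(axis=1) for l in (lam_z, lam_r, lam_t))
    total = e_z + e_r + e_t
    return e_z / total, e_r / total, e_t / total
\end{verbatim}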
\subsection{Analysis of the spatial structure of POD modes}
The streamwise components $\phi_z^i(r, \theta)$ of the first eighteen modes obtained from the snapshot POD of the fractal orifice dataset are shown first in Fig.~\ref{fig:modes-fractal}. The structure of the two most energetic modes resembles the structure of the modes with $m=4$ obtained from the Fourier-POD analysis. This observation is consistent with Fig.~\ref{fig:energy}(b), where the modes at $m=4$ capture the largest amount of energy. The spectral content of the subsequent modes approximately follows the distribution reported in Fig.~\ref{fig:energy}(b). Thus, wavenumbers $m=2, 3$ and 5 all feature prominently in the first twelve modes. However, unlike the Fourier-POD analysis, the snapshot POD does not separate structures with different wavenumbers if these have similar energy, and most of the POD modes of Fig.~\ref{fig:modes-fractal} have complex spectral characteristics.
\begin{figure}[h!]
\includegraphics[width=0.9\textwidth]{modes-fractal.png}
\caption{Streamwise component of the first eighteen snapshot POD modes in the jet with fractal orifice, on an arbitrary colour scale.}
\label{fig:modes-fractal}
\end{figure}
To compare objectively the modal structures resulting from the two geometries, we examine the radial profiles of the first, second, and third Fourier-POD modes for azimuthal wavenumbers $m=0, 1, 2, 5, 10$, from the scalar analysis of the streamwise, radial, and azimuthal velocity components for the two jets. The profiles are normalised such that $\max_r\, \phi_{(\cdot)}^i(r, m) = 1$, and are presented in Fig.~\ref{fig:radial-modes}. In general, the $i$-th POD mode has $i$ extrema. For wavenumber $m=0$, the radial profiles of the first three POD modes tend to have a stronger support at low radial locations, approaching the centerline. This feature becomes stronger for increasing POD mode indices, and it is more prominent in the circular orifice than in the fractal orifice, as at $z_0/D_e=2$ the round jet presents a much larger energy content in proximity to the centerline (see Fig.~\ref{fig:average-flow}). The profiles of the first POD modes are similar to the profiles of the modes obtained with Spectral POD, Resolvent Analysis and transient growth analysis by \textcite{Nogueira2019}, reflecting the low-dimensional nature of the near-field jet dynamics.
\begin{figure}
\includegraphics[width=0.86\textwidth]{radial-modes-new.pdf}
\caption{Radial profiles of the first three POD modes of the streamwise (first row), radial (second row), and azimuthal (third row) velocity components for azimuthal wavenumbers $m=0, 1, 2, 5, 10$, for the round jet (left columns) and for the fractal jet (right columns). The grey vertical lines indicate the radial location $r/D_e = 0.5$. }
\label{fig:radial-modes}
\end{figure}
The peaks of the radial profiles for the first, second and third POD modes tend to move outwards for growing azimuthal wavenumbers. However, this occurs more mildly for the fractal geometry, where the modal structures are more compactly localised around $r/D_e=0.5$. To quantify this behaviour, we denote by $R^{p}_{(\cdot)}$ the location of the first peak of $\phi_{(\cdot)}^i(r,m)$ from $r=0$. This quantity is shown in Fig.~\ref{fig:collapse} for the first POD mode as a function of the azimuthal wavenumber $m$, for the three velocity components and for the two orifice geometries. The peak location of the radial profiles exhibits analogous trends when computed from the three different velocity components. The first POD mode is radially localised around $R^{p}_{(\cdot)}/D_{e} \approx 0.5$, for the most energetic azimuthal wavenumbers, increasing to $R^{p}_{(\cdot)}/D_{e} \approx 0.65$ for high wavenumbers. The influence of the orifice geometry is mainly confined to the lowest wavenumbers, where the radial position of the peaks tends to be closer to the centerline. Circular and fractal orifices, however, present an analogous behaviour at $m>2$. This is not surprising, since the mean turbulent kinetic energy profiles in Figs.~\ref{fig:planar-profiles}(b) and (c) have comparable peak locations. An analogous radial displacement of the peak location was also observed in the Fourier-POD analysis of pipe flow reported by \textcite{Hellstrom2016} (their figure 2(a)). As discussed by \textcite{Nogueira2019}, an important difference between jet and pipe flow, however, is that in pipe flow the radial expansion of the POD modes is limited by the wall, whereas in a jet the POD modes are not radially constrained and can expand beyond $r/D_{e}=0.5$.
\begin{figure}
\includegraphics[width=0.88\textwidth]{collapse-radial-peak.pdf}
\caption{Normalised radial peak location of the radial profiles for the first POD mode of the (a) streamwise, $R^{p}_{z}/D_e$, (b) radial, $R^{p}_{r}/D_e$, and (c) azimuthal, $R^{p}_{\theta}/D_e$, velocity components as a function of the azimuthal wavenumber $m$, for the round (red circles) and fractal (cyan crosses) orifices. The grey horizontal lines indicate the radial location $r/D_{e} = 0.5$, i.e. the radial location of the edge of the round orifice. }
\label{fig:collapse}
\end{figure}
To further characterise the size of the POD structures, we introduce the radial half-size of the POD modes defined as $L^r_{(\cdot)} =( R^{p}_{(\cdot)} - R^{5}_{(\cdot)})/D_e$, with $R^{5}_{(\cdot)}$ being the radial location where $\phi_{(\cdot)}^i(R^{5}_{(\cdot)}, m) = 0.05 \phi_{(\cdot)}^i(R^{p}_{(\cdot)}, m)$. This parameter is more appropriate for an unbounded shear flow, as opposed to the parameter introduced by \textcite{Hellstrom2016} for pipe flow, where the radial length scale is defined as the distance from the wall to the peak location.
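The two descriptors $R^{p}_{(\cdot)}$ and $L^{r}_{(\cdot)}$ can be extracted from a radial mode profile as sketched below (Python); taking the $5\%$ crossing on the inboard side of the peak follows from the sign convention of the definition above and is an assumption of this sketch.
\begin{verbatim}
import numpy as np

def radial_peak_and_half_size(r, phi, D_e, threshold=0.05):
    """Return R_p (first peak from the axis) and L_r = (R_p - R_5) / D_e."""
    mag = np.abs(phi) / np.max(np.abs(phi))
    peaks = np.where((mag[1:-1] > mag[:-2]) & (mag[1:-1] > mag[2:]))[0] + 1
    i_p = peaks[0] if peaks.size else int(np.argmax(mag))
    # last radial location inboard of the peak where the profile drops below
    # 5% of the peak value
    inboard = np.where(mag[:i_p] <= threshold * mag[i_p])[0]
    i_5 = inboard[-1] if inboard.size else 0
    return r[i_p], (r[i_p] - r[i_5]) / D_e
\end{verbatim}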
\begin{figure}
\includegraphics[width=0.88\textwidth]{collapse-radial-profile.pdf}
\caption{Radial profiles of the streamwise (first row), radial (second row), and azimuthal (third row) velocity components of the first, $i=1$, second, $i=2$, and third, $i=3$, POD modes for the azimuthal wavenumbers $m\in [2, 40]$, for the round jet (red continuous line) and for the fractal jet (cyan dashed line).}
\label{fig:collapse-profiles}
\end{figure}
Modal structures identified by Fourier-POD analysis in pipe flow have been observed to collapse in the radial direction to a universal distribution when appropriately scaled with the radial length scale \cite{Hellstrom2016}. The same behaviour is observed in the present case.
The collapse of the Fourier-POD profiles is shown in Fig.~\ref{fig:collapse-profiles}, for the first, second, and third POD modes for wavenumbers $m\in [2, 40]$. Profiles obtained from the POD analysis of the streamwise, radial and azimuthal velocity components are shown, for the round and fractal orifices. One salient observation is that, for all azimuthal wavenumbers, the radial profiles collapse to a distribution that is surprisingly robust to the orifice geometry. It is argued that this is due to the similarity of the mean velocity profiles in Fig.~\ref{fig:planar-profiles}(a), since similar shear profiles produce fluctuations with similar spatial structure.
Another salient observation is that the profiles closely resemble those found in turbulent pipe flow by \textcite{Hellstrom2016}. Recently, analogies between the coherent structures in the near-field of a jet and in wall-bounded flows have been found by \textcite{Nogueira2019} and by \textcite{Samie2020}. In particular, following the self-similarity of POD structures reported by \textcite{Hellstrom2016}, \textcite{Nogueira2019} observed self-similarity in Spectral POD modes in the streamwise direction, where an azimuthal length scale is utilised to scale the streamwise development of the structures.
\begin{figure}
\includegraphics[width=0.88\textwidth]{collapse-radial-aspect.pdf}
\caption{Radial length scale of the first Fourier-POD modes as a function of their azimuthal length scale for the (a) streamwise, (b) radial, and (c) azimuthal velocity components. Data points for wavenumbers $m\in [1, 40]$ are shown, moving from right to left for growing wavenumbers, as indicated by the arrows. The straight lines are used to estimate the aspect ratio of the eddies. }
\label{fig:aspect}
\end{figure}
To examine the self-similarity of the first POD modes ($i=1$) in the present case, we introduce here the azimuthal length scale $L^{\theta}_{(\cdot)}=2\pi R^{p}_{(\cdot)}/(m D_e)$. The radial length scale of the POD structures as a function of the azimuthal length scale is presented in Fig.~\ref{fig:aspect}, for the (a) streamwise, (b) radial, and (c) azimuthal velocity components, for the first forty azimuthal wavenumbers $m\in[1, 40]$. A first remark on Fig.~\ref{fig:aspect} is that the POD structures associated with the first few, most energetic azimuthal wavenumbers of the fractal jet are more spatially compact, as the radial length scale $L^r_{(\cdot)}$ can be up to $\sim 25$\% smaller than in the canonical round orifice. The largest, most energetic structures have a radial support of about $0.4D-0.5D$, in agreement with the profile of mean turbulent kinetic energy of Figs.~\ref{fig:average-flow}(c, f). On the other hand, the radial support of the high-wavenumber POD modes, $m \gtrsim 8$, does not seem to be heavily influenced by the orifice geometry, and it decays slowly for higher wavenumbers. These results suggest that the fractal orifice does not significantly affect the properties of the small scales, but rather acts towards breaking the large-scale structures that would naturally appear in the round jet. In the figures, the straight lines identify structures with a constant aspect ratio $L^{r}_{(\cdot)}/L^{\theta}_{(\cdot)}$. As can be observed, a single aspect ratio describing the eddies over the whole dynamic range of scales cannot be clearly defined for any of the velocity components. This result makes the near-field jet different from pipe flow, where, according to \textcite{Hellstrom2016}, eddies are characterised by a constant aspect ratio of 0.2 across the wavenumber spectrum.
This result is consistent with the spatial structure shown in the velocity snapshots in Fig.~\ref{fig:snapshots}, where eddies with small azimuthal length scale are not necessarily associated with smaller radial length scale. From a physical point of view, we argue that this behaviour arises from the absence of the wall-blocking effect present in pipe flow, which limits outward motions in the radial direction.
\section{Proper orthogonal decomposition of the streamwise vorticity and velocity}
\label{sec:coupling}
In the previous sections, Fourier-POD modes were identified for each velocity component separately, using a scalar POD implementation.
However, this approach does not capture possible correlations and couplings between the velocity components. Regarding this aspect, \textcite{Nogueira2019} showed that energetic streamwise velocity streaks in the high-shear region of the jet flow are strongly coupled to streamwise-elongated vortical structures by the lift-up mechanism \cite{BRANDT201480} as in wall-bounded shear flows \cite{Hamilton1995, Schoppa2002}, inducing strong radial motions. In this section, we examine the relationship between the streamwise velocity fluctuations and the structure of the streamwise vorticity field involved in the lift-up mechanism, to assess the effects of the orifice geometry.
\begin{figure}
\includegraphics[width=0.96\textwidth]{conditional_average.png}
\caption{(a,b) Colour maps of the normalised conditional average of the streamwise vorticity fluctuations, and labelled contours of the conditionally-averaged streamwise velocity, for both (a) the round jet and (b) the fractal jet (see text for details on the conditioning procedure); (c,d) azimuthal profiles at constant $r/D_{e}=0.5$ as a function of the azimuthal angle $\theta$ of (c) the streamwise vorticity and (d) the streamwise velocity, from the conditional averages presented in the maps in (a,b). The grey lines in (a,b) trace the nominal lip line location.}
\label{fig:conditional_average}
\end{figure}
We first show conditional averages of the streamwise vorticity and streamwise velocity fields. To characterise intense streamwise velocity fluctuation events, the average is conditioned on the streamwise velocity fluctuation being larger than a threshold value based on the rms of the streamwise velocity, $u_{z_{rms}}$, at the reference point $x/D_{e}=0.5$, $y/D_{e}=0.0$, $z/D_{e}=2.0$. This threshold-based conditioning can be expressed by the following equation:
\begin{equation}\label{thres}
u^{\prime}_z > c \cdot u_{z_{rms}},
\end{equation}
where $c$ is a positive constant. The threshold value of the constant $c$ in equation (\ref{thres}) is chosen based on a sensitivity analysis, in which four different values of this constant were adopted, i.e. $c = 0.25, 0.5, 0.75, 1.0$. A similar condition can also be stated for negative fluctuations, but the probability distribution of the velocity fluctuations is not significantly skewed and leads to similar results. Although not presented here, the results of this analysis show that the obtained conditional averages are only moderately sensitive to the constant $c$, and that, as expected, larger values of $c$ lead to more intense events, with a smaller number of samples contributing to the averages. We therefore set $c=0.5$ in equation (\ref{thres}), which represents a compromise between identifying intense events and retaining a sufficiently large statistical sample. In any case, the physics of the interactions remains unchanged for different constants $c$. A further improvement of the statistical significance is obtained by repeating the averaging for one hundred reference points distributed on the circumference at $r/D_{e}=0.5$, rotating the results back to $x/D_{e}=0.5$, $y/D_{e}=0.0$ and averaging along the azimuth. Conditional averages are denoted by angle brackets. Maps obtained from the described averaging procedure are presented in Figs.~\ref{fig:conditional_average}(a) and \ref{fig:conditional_average}(b) for the round and fractal jet, respectively. The colour map shows the conditionally-averaged vorticity field while the labelled contours show the conditionally-averaged streamwise velocity field. These variables are normalised by the maps of the rms of the respective quantities. The grey circle traces the nominal orifice lip line location. The averaging procedure unveils a pair of vorticity structures of opposite sign, flanking the point where the condition is defined, i.e.~in the proximity of the lip line. Therefore, positive streamwise velocity fluctuations are associated with positive fluctuations of the radial velocity component, i.e. a flow towards the jet periphery, which produces the mushroom-like structures observed in figure 9(a) of \textcite{Liepmann1992}. Similarly, although not presented here, negative fluctuations of the streamwise velocity are associated with a pair of high-vorticity regions of opposite sign, with negative fluctuations of the radial velocity, i.e.~a flow towards the jet centerline. The described coupling between the sign of the streamwise velocity fluctuations at the lip line and the sign of the radial velocity fluctuations can also be observed from the instantaneous snapshots reported in Fig.~\ref{fig:snapshots}, in particular from the snapshot in Fig.~\ref{fig:snapshots}(c). Qualitatively similar results are obtained when the radial position of the point of condition is varied within the region of intense shear. The averaging captures a significant fraction, about 20\%, of the streamwise vorticity rms. Physically, high-speed fluid elements in the jet core are pushed outwards by radial motions produced by the streamwise vorticity features into regions of lower mean velocity, generating positive velocity fluctuations and vice versa. The observed structure of the conditionally-averaged streamwise velocity and vorticity fields supports the idea that a lift-up mechanism is active in turbulent jets \cite{Nogueira2019}.
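A schematic implementation of the conditioning and of the azimuthal pooling of reference points is reported below (Python); array names and the reference radial index are illustrative, and the rotation of the selected fields back to the reference azimuth is simplified to a circular shift on the polar grid.
\begin{verbatim}
import numpy as np

def conditional_average(u_z, w_z, u_z_rms_ref, c=0.5, n_ref=100):
    """Average w_z over events where u_z' > c * rms at reference points on the lip line.

    u_z, w_z: fluctuation fields on the polar grid, shape (n_snapshots, n_r, n_theta).
    """
    n_t, n_r, n_th = u_z.shape
    i_ref = n_r // 2                      # illustrative index of r/D_e = 0.5
    acc, count = np.zeros((n_r, n_th)), 0
    for j in np.linspace(0, n_th, n_ref, endpoint=False).astype(int):
        events = u_z[:, i_ref, j] > c * u_z_rms_ref
        if events.any():
            # rotate the selected fields so that the reference point maps to theta = 0
            acc += np.roll(w_z[events], -j, axis=2).sum(axis=0)
            count += int(events.sum())
    return acc / max(count, 1)
\end{verbatim}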
In this scenario, the patches of coherent streamwise velocity derived from the conditional average can be regarded as the footprints of the streaks identified by \textcite{Nogueira2019} passing through the observation plane. If we compare the structures of streamwise vorticity obtained from the two orifice geometries, we can see that the vorticity structures from the round jet tend to have a larger extent in the azimuthal direction. This is more evident when examining Fig.~\ref{fig:conditional_average}(c), which shows azimuthal profiles of the conditionally-averaged streamwise vorticity at constant $r/D_{e}=0.5$, as a function of the azimuthal angle $\theta$. It can be seen that the average vorticity structure within the fractal jet spans the range $-3 \pi/16 < \theta < 3 \pi/16$, while the structure within the round jet spans the range $-\pi/4 < \theta < \pi/4$. Fig.~\ref{fig:conditional_average}(d) shows the corresponding profiles of the conditionally-averaged streamwise velocity at constant $r/D_{e}=0.5$. For the round jet, the profile does not go to zero even for large azimuthal angles, consistent with the dominance of the azimuthal mode $m=0$ discussed in the previous section. Therefore, in the fractal jet the coupling between streamwise velocity and streamwise vorticity involves structures that are smaller in size, although of comparable strength. It should be stressed that the conditional averages presented thus far are obtained at the streamwise location $z/D_e=2.0$. If the same conditional averages were calculated at different streamwise locations, different structures of vorticity would be found, as the observed coupling between velocity and vorticity is strongly dependent on the streamwise position. However, this work focuses on examining the effects of the orifice geometry on the coupling between streamwise velocity and vorticity at a fixed downstream distance. Studying how this coupling varies with the streamwise position is left to future work.
The conditional averages presented above enable us to estimate the characteristic size of the vorticity structures sustaining the streamwise velocity fluctuations in both jets. With the aim of quantifying to what extent vorticity structures at different azimuthal scales contribute to the described mechanism, a vector implementation of the Fourier-POD analysis \cite{Tinney2008} is applied to composite snapshots of streamwise vorticity and streamwise velocity. The analysis identifies composite streamwise vorticity-velocity modes, and ranks them based on their importance. Note that, in what follows, we use the concept of modal energy, although this quantity does not have the dimensions of kinetic energy, unlike in the scalar POD implementation. Thus, the dominant vorticity-velocity modes are those that best capture intense and correlated fluctuations of the streamwise velocity and streamwise vorticity, on a wavenumber-by-wavenumber basis.
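The vector implementation differs from the scalar one essentially in the construction of the correlation kernel, which is assembled from the stacked Fourier coefficients of the two variables; the sketch below (Python) illustrates this for a single wavenumber, with the rms-based normalisation of the two blocks being an assumption of the sketch rather than the exact weighting of \cite{Tinney2008}.
\begin{verbatim}
import numpy as np

def composite_fourier_pod(uz_hat_m, wz_hat_m, weights_r):
    """Joint POD of streamwise velocity and vorticity at one azimuthal wavenumber m.

    uz_hat_m, wz_hat_m: azimuthal Fourier coefficients, shape (n_snapshots, n_r);
    each variable is normalised by its own rms so that the two blocks are comparable.
    """
    q = np.hstack([uz_hat_m / uz_hat_m.std(), wz_hat_m / wz_hat_m.std()])
    Ws = np.diag(np.sqrt(np.tile(weights_r, 2)))  # sqrt of the radial quadrature weights
    R = (q.T @ q.conj()) / q.shape[0]             # composite two-point correlation
    vals, vecs = np.linalg.eigh(Ws @ R @ Ws)
    order = np.argsort(vals)[::-1]
    modes = np.linalg.solve(Ws, vecs[:, order])   # stacked [phi_uz; phi_wz] profiles
    n_r = len(weights_r)
    return vals[order], modes[:n_r], modes[n_r:]
\end{verbatim}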
\begin{figure}
\includegraphics[width=0.96\textwidth]{vector-POD-energy-distribution.png}
\caption{Relative energy distribution of the first three vector POD modes $i \in [1,3]$ at the first twenty-one azimuthal modes, $m \in [0,20]$, (a) for the round jet and (b) for the fractal jet. The Fourier-POD analysis is performed for the streamwise components of the vorticity and velocity fields. The cumulative energy content of the first and of the first five POD modes is represented by a cyan dashed line and a cyan continuous line, respectively.}
\label{fig:vector-POD-energy-distribution}
\end{figure}
The relative energy distribution of the first three POD modes, $i \in [1,3]$, at the first twenty-one azimuthal modes, $m \in [0,20]$, is presented in Figs.~\ref{fig:vector-POD-energy-distribution}(a) and \ref{fig:vector-POD-energy-distribution}(b), respectively for the round jet and for the fractal jet. The dashed cyan lines represent the cumulative energy distribution of the first POD modes as a function of $m$, whereas the continuous cyan lines represent the cumulative energy distribution for the first five POD modes. As can be observed, the azimuthal wavenumber $m=6$ is dominant in the round jet, while the azimuthal wavenumber $m=8$ is the most important in the fractal jet. This result is consistent with the conditional averages of Fig.~\ref{fig:conditional_average}, where it was observed that smaller vorticity structures feature in the near field of the fractal jet. Fig.~\ref{fig:vector-POD-energy-distribution} also shows that the fractal orifice leads to a more scattered energy distribution across wavenumbers than the circular orifice, analogous to what was found from the Fourier-POD analysis of the velocity in the previous section. This property is also evident from the cumulative energy distributions of the two jets. Although in both cases the first five POD modes and the first twenty-one azimuthal modes capture around $90\%$ of the total energy, modes at low wavenumbers (long length scales) are more important in the round jet than in the fractal jet, which leads to a steeper trend of the cumulative energy distribution in the former jet than in the latter.
\begin{figure}
\includegraphics[width=0.9\textwidth]{vector-POD-maxima.png}
\caption{(a) Normalised radial peak location of the radial profiles of the streamwise vorticity component for the first vector POD mode; (b) normalised radial half-size of the vorticity component of the first POD modes as a function of the wavenumber $m$; (c) the quantity $\chi(m)$, defined in equation (\ref{eq:chi}), as a function of $m$. }
\label{fig:vector-POD-maxima}
\end{figure}
The normalised radial peak positions of the profiles of the vorticity component of the joint streamwise vorticity/velocity Fourier-POD modes are presented in Fig.~\ref{fig:vector-POD-maxima}(a) as a function of the azimuthal wavenumber. Analogous to what was found in the analysis of the three velocity components, the peaks are located in the high-shear region and their locations move towards the periphery for increasing azimuthal wavenumber. Also, the peak positions for the fractal jet are nearer to the centerline when compared to the round jet. A reasonable explanation for this result is that the actual lip line of the fractal orifice reaches radial positions as close to the centerline as $r/D_{e} \approx 0.25$, introducing higher streamwise vorticity fluctuations in this area. The normalised radial length scale of the first POD vorticity structures $L^{r}_{\omega_{z}}/D_{e}$, defined similarly to the radial length scale of the velocity Fourier-POD modes, is presented in Fig.~\ref{fig:vector-POD-maxima}(b) as a function of the azimuthal wavenumber. Excluding the first four azimuthal wavenumbers, which define large-scale vorticity modes of little physical interest, the radial length scale of the POD structures decreases only moderately for increasing azimuthal wavenumbers, reaching values lower than $L^{r}_{\omega_{z}}/D_{e} \approx 0.2$. Although this value is lower than the radial length scale of the velocity POD modes discussed previously, the mild decrease of the radial length scale for $m>4$ indicates that a constant aspect ratio for the vorticity eddies cannot be determined. In fact, the azimuthal length scale $L^\theta_{\omega_z} = 2\pi R^p_{\omega_z} / m$ (not shown here) decreases much more rapidly with $m$ than $L^{r}_{\omega_{z}}$ and is thus not a suitable scale to characterise the POD profiles. As discussed, we argue that, unlike in wall-bounded flows \cite{Hellstrom2016}, the lack of a solid wall gives rise to a family of fluid structures that are free to occupy the entire radial dimension of the high-shear region, with little influence of the azimuthal wavenumber. This perspective is supported by the fact that this trend is essentially insensitive to the orifice geometry.
In Fig.~\ref{fig:vector-POD-maxima}(c), the ratio between the peaks of the velocity and vorticity components of the joint velocity-vorticity Fourier-POD modes, defined by the quantity
\begin{equation}\label{eq:chi}
\displaystyle \chi(m) = 100 \times \frac{\max_r \phi^1_{u_z}(r, m)}{ \max_r \phi^1_{\omega_z}(r, m) },
\end{equation}
is presented as a function of the azimuthal wavenumber $m$ for both jets. This quantity is used here to quantify the relative importance of the streamwise velocity component, and as a proxy for the correlation between the two variables. It can be observed that $\chi(m)$ is largest for both orifices at wavenumbers in the range $5 \le m \le 8$. The fact that the velocity and vorticity correlation is larger at the wavenumbers where energy is mostly concentrated (see Fig.~\ref{fig:vector-POD-energy-distribution}) corroborates the physical mechanisms described above, evidencing that a strong coupling between the streamwise vorticity and velocity is active in the near field.
Radial profiles of the vorticity and velocity component of the first two composite Fourier-POD modes are shown in Fig.~\ref{fig:vector-POD-scaled-profiles}, for wavenumbers $m\in[6, 20]$. The radial coordinate is scaled similarly to the profiles of Fig.~\ref{fig:collapse-profiles}, using the radial half-sizes $L^r_{\omega_z}$ and $L^r_{u_z}$. The real and imaginary parts of the Fourier-POD profiles are dominant for the vorticity and velocity components, respectively, and are thus reported in the figure.
\begin{figure}
\includegraphics[width=0.7\textwidth]{vector-POD-scaled-profiles.png}
\caption{Radial profiles of the (a, b) streamwise vorticity and (c, d) streamwise velocity of the (a, c) first POD mode and (b, d) second POD mode, for the azimuthal wavenumbers $m\in [5, 40]$, for the round jet (red continuous line) and for the fractal jet (cyan dashed line). The grey vertical lines indicate the radial location $r/D_{e}=0.5$, i.e. the radial location of the edge of the round orifice. }
\label{fig:vector-POD-scaled-profiles}
\end{figure}
It can be observed that, similar to the velocity POD modes, the profiles collapse to a universal distribution that is independent of the orifice geometry. Profiles for smaller wavenumbers, corresponding to large-scale rotational motions with little dynamical relevance, deviate from such collapse. The relative arrangement of the vorticity and velocity components is more clearly understood by visualising the full structure of the Fourier-POD modes in the $x, y$ plane.
\begin{figure}
\includegraphics[width=0.95\textwidth]{vector-POD-mirini.png}
\caption{Streamwise velocity of the three most energetic Fourier-POD modes, on an arbitrary color scale, and contours of the streamwise vorticity, with continuous lines identifying positive values and dashed lines negative values, in the jet with (a, b, c) round orifice and with (d, e, f) fractal orifice.}
\label{fig:vector-POD-mirini}
\end{figure}
The three most important Fourier-POD modes are presented in Figs.~\ref{fig:vector-POD-mirini}(a, b, c) and \ref{fig:vector-POD-mirini}(d, e, f), respectively for the round jet and for the fractal jet. Colour maps of the streamwise velocity and contours of the streamwise vorticity are shown using arbitrary scales, since the modes are normalised. The same observations made on the relative spatial organisation of the conditionally-averaged vorticity and velocity fields apply to the Fourier-POD modes. Specifically, regions characterised by positive or negative fluctuations of the streamwise velocity are associated respectively with positive and negative fluctuations of the radial component, induced by pairs of vorticity structures. As observed, in the fractal jet, the azimuthal mode $m=8$ is dominant (Fig.~\ref{fig:vector-POD-mirini}(f)). From the maps of mean streamwise velocity and mean streamwise vorticity presented respectively in Fig.~\ref{fig:average-flow-vort}(d) and in Fig.~\ref{fig:average-flow-vort}(f), it can be observed that the mean flow characteristics dictate the relative importance of the vorticity-velocity Fourier-POD modes. In particular, this suggests that the spatial organisation of both streaks and vortical structures is directly associated with the orifice geometry, consistent with the spectral POD analysis of \textcite{Rigas2019} on a jet from a chevron nozzle.
\section{Conclusions and future work}
In this work, a Fourier-POD analysis was performed to investigate the coherent structures in the near field of a jet issuing from a non-circular orifice. More specifically, we considered an orifice with fractal geometry constructed from a base square pattern. The resulting structures were compared with those in a jet issuing from a round orifice. The study considers tomographic-PIV datasets of three-dimensional velocity vector fields obtained at a downstream distance of two equivalent orifice diameters from the orifice exit, which are used to characterise the role of the orifice geometry in the initial jet development.
From the analysis of the streamwise velocity component, the mode at wavenumber $m=0$, which captures the largest amount of turbulent kinetic energy in the jet with circular orifice, is not the dominant mode in the jet with fractal orifice. This is because the fractal geometry injects energy at the fundamental wavenumber $m=4$, thus breaking up the azimuthal coherence associated with the vortex rings typical of the round jet. As a result, while in the jet with circular orifice most of the energy is contained within these vortex rings, in the jet with fractal orifice the energy is distributed more uniformly among the first seven azimuthal modes ($0 \le m \le 6$). Instantaneous snapshots of streamwise velocity confirm these findings, and show that structures within the fractal jet are smaller in size and lack a preferential organisation. Consistent with this scenario, the energy in the radial component at the wavenumber $m=0$, associated with the azimuthally-coherent radial motions from the Kelvin-Helmholtz vortex rings, is significantly lower in the fractal jet. In both cases, at the most energetic wavenumbers, the streamwise component was found to capture approximately $60\%$ of the total kinetic energy, followed by the radial and azimuthal components.
The radial distribution of the Fourier-POD modes from the two orifice geometries was also examined. It was found that the modal shapes at different wavenumbers collapse to a distribution that is independent of the orifice geometry, when scaled with a characteristic radial dimension. This finding is common to the Fourier-POD modes from each of the three velocity components. The most significant difference is that the radial support of the modes of the jet with fractal orifice is smaller at low wavenumbers. This indicates that the most energetic structures tend to be smaller in size in the jet with fractal orifice. Regardless, the orifice geometry does not appear to significantly affect the shape of the first POD modes, but mostly the energy distribution. The collapse of the radial profiles was also observed in recent experiments in turbulent pipe flow \cite{Hellstrom2016}. However, an important difference with these observations is that, in the present case, the ratio between the azimuthal and radial length scales of the Fourier-POD structures varies with the wavenumber $m$, while it is constant in pipe flow and approximately equal to $0.2$. This result was explained by the fact that in jet flow the lack of a wall-blocking effect does not significantly constrain the radial extent of turbulent motions. In this respect, while \textcite{Nogueira2019} found that the SPOD structures are self-similar in the streamwise/azimuthal directions, the aspect ratio in the radial/azimuthal directions was not examined. The present investigation of the radial/azimuthal aspect ratio therefore contributes to completing the characterisation of coherent structures in turbulent jets.
Streaky structures have recently been found in the near field of a jet, resulting from a lift-up mechanism analogous to that in turbulent wall-bounded flows (\textcite{Nogueira2019}). The structures of streamwise vorticity leading to streamwise velocity fluctuations were examined in relation to the orifice geometry. To this aim, \textit{i.}) an averaging of the streamwise vorticity conditioned on intense streamwise velocity fluctuations at points on the nominal lip line location, and \textit{ii.}) a joint Fourier-POD analysis of streamwise velocity and streamwise vorticity were performed. Conditional averaging showed that intense positive fluctuations of the streamwise velocity are associated with pairs of streamwise vorticity structures of opposite sign, flanking the point of conditioning. Consistent with streak/roll dynamics in wall-bounded shear flows, the combined activity of this vortex pair induces positive fluctuations of the radial velocity, i.e. a flow towards the jet periphery. Conversely, negative fluctuations of the streamwise velocity are associated with negative fluctuations of the radial velocity, i.e. a flow towards the jet centerline. The flow pattern obtained by this averaging procedure is analogous for both orifice geometries, although the vorticity structure from the round jet is $30\%$ larger than the vorticity structure from the fractal jet. From the joint Fourier-POD analysis, the fractal orifice promotes the involvement of structures over a wider range of length scales in the mechanism leading to streak formation. Lower-wavenumber modes contribute in larger part to the overall fluctuation budget in the round jet compared with the fractal jet. This is consistent with the vorticity patterns from conditional averaging being larger in the jet from the circular orifice. These aspects have been examined only at two diameters from the orifice, and further investigations on the axial development of the coupling between streamwise velocity and streamwise vorticity are warranted. However, our expectation is that the coupling becomes milder as the jet develops axially, due to the progressively decreasing mean shear at larger streamwise distances.
Recent observations both from experiments (\textcite{Jordan2018}) and from numerical simulations (\textcite{Towne2017}) underline the importance of the Kelvin-Helmholtz instabilities in the generation of tonal noise from jet-flap interactions. According to these studies, the wavepackets associated with the Kelvin-Helmholtz instabilities resonate with upstream-travelling trapped acoustic modes located in the potential core, thus producing far-field tonal noise. In relation to these studies, the attenuation of the Kelvin-Helmholtz instabilities by the non-circular orifice could mitigate the effects of this resonance, and ultimately reduce the emissions of tonal noise originating from the interactions between a round jet and the edge of a flap. Future studies of jet-flap interaction noise with a jet nozzle of non-circular geometry could shed further light on these aspects.
\section{Introduction}
The decentralized nature of wireless ad hoc networks makes them vulnerable to security threats. A prominent example of such threats is jamming: a malicious attack whose objective is to disrupt the communication of the victim network intentionally, causing interference or collision at the receiver side. Jamming attacks are a well-studied and active area of research in wireless networks. Unauthorized intrusions of this kind have initiated a race between engineers and hackers; consequently, we have been witnessing a surge of new smart systems aiming to secure modern instrumentation and software from unwanted exogenous attacks.
The problem under consideration in this paper is inspired by recent discoveries of jamming instances in biological species. In a series of playback experiments, researchers have found that resident pairs of Peruvian warbling antbirds sing coordinated duets when responding to rival pairs. But under other circumstances, cooperation breaks down, leading to more complex songs. Specifically, it has been reported that females respond to unpaired sexual rivals by jamming the signals of their own mates, who in turn adjust their signals to avoid the
interference \cite{tobias09}.
Ad hoc networks consist of mobile energy-constrained nodes. Mobility affects all layers in a network protocol stack including the physical layer as channels become time-varying \cite{GoldsmithWicker}. Moreover, nodes such as sensors deployed in a field or military vehicles patrolling in remote sites are often equipped with non-rechargeable batteries. Power control and adaptive resource allocation (RA) play, therefore, a crucial role in designing robust communications systems. At the physical layer, power control can be used to maximize rate or minimize the transmission error probability, see \cite{AzouziDebbah, BelmegaDebbah} and the references therein. In addition, in multi-user networks, power control can be used to regulate the interference level at the terminals of other users \cite{Scutari,WeiYu,ElBatt}. Due to the lack of a centralized infrastructure in ad hoc networks, distributed solutions are essential. In this work, similar to \cite{BelmegaDebbah,Scutari,WeiYu}, we model the power allocation problem as a noncooperative game, which allows us to devise a non-centralized solution. As a departure from previous research, however, the power control mechanism we propose splits the power budget of each player into two portions: a portion used to communicate with team-mates and a portion used to jam the players of the other team. More importantly, the objective function is chosen to be the difference between the cumulative bit error rates (BER) of the two teams; this allows for increased freedom in choosing physical design parameters, besides the power level, such as the size of modulation schemes.
Adaptive RA mechanisms involve varying physical layer parameters according to channel, interference, and noise conditions in order to optimize a specific metric, such as spectral efficiency. Adapting the modulation scheme, choosing coding schemes, and controlling the transmitter power level are examples of adaptive RA schemes. Goldsmith and Chua demonstrated in \cite{GoldsmithChua} that adaptive RA provides five times the spectral efficiency of nonadaptive schemes. In this work, we propose an adaptive modulation scheme based on a zero-sum matrix game played by both teams.
The conflicting objectives of the two teams entail the use of a game-theoretic framework to study this problem. We identify three main tasks that each team needs to perform:
\begin{enumerate}
\item \emph{Optimal trajectory:} computing the optimal motion path for the agents
\item \emph{Power Allocation:} dividing power between internal communications and jamming
\item \emph{Adaptive Modulation:} choosing an appropriate modulation scheme
\end{enumerate}
We addressed Task 1 in \cite{gamecomm11} by posing the problem as a {\it{pursuit-evasion game}}. The optimal strategies of the players are obtained by using techniques from {\it{differential game theory}}. We also addressed Task 2 in \cite{gamecomm11} using \emph{continuous kernel static games}; we will generalize the formulation in this work to include a minimum rate constraint. This will lead to a more practical scheme as it guarantees a non-zero communications rate. Task 3 will be addressed in this work using \emph{static matrix games}. The saddle-point equilibrium of the power allocation problem is parametrized by the modulation schemes of the two teams. We therefore introduce a third game in order to arrive at the equilibrium modulation schemes. In fact, this gives rise to a \emph{games-within-games} structure: the optimal trajectory is first found, power allocation is then performed, and finally the optimal modulation is computed at each time instant.
The main contributions of this paper are as follows. We introduce a third layer of games to our recent work \cite{gamecomm11} in order to perform adaptive modulation. We also generalize the power allocation problem introduced in \cite{gamecomm11} to ensure non-zero communications rate. We introduce an optimization framework taking into consideration constraints in energy and power among the agents. Moreover, we relate the problem of optimal power allocation for communication and jamming to the communication model between the agents. Finally, we provide a sufficient condition for existence of an optimal decision strategy among the agents based on the physical parameters of the problems.
The rest of the paper is organized as follows. We formulate the problem in Section \ref{ProblemFormulation} and explain the underlying notation. The saddle-point equilibrium properties of the team power control problem are studied in Section \ref{PowerAllocation} with the specific example of systems employing uncoded M-quadrature amplitude modulations (QAM). We introduce our adaptive modulation scheme in Section \ref{AdaptiveModulation}. Simulation results are presented in Section \ref{Simulations}. We conclude the paper and provide future directions in Section \ref{Conclusion}.
\section{Problem Formulation} \label{ProblemFormulation}
Consider two teams of mobile agents. Each agent is communicating with members of the team it belongs to, and, at the same time, jamming the communication between members of the other team. In particular, each team attempts to minimize its own BER while maximizing the BER of the other team. We consider a scenario where each team has two members, though at a conceptual level our development applies to higher number of team members as well. Team A is comprised of the two players $\{1^{a},2^{a}\}$ and Team B is comprised of the two players $\{1^{b},2^{b}\}$. We assume that $f_{a}$ and $f_{b}$ are the frequencies at which Team A and Team B communicate, respectively, and $f_{a}\neq f_{b}$.
Naturally, for an initial position $\textbf{x}_{0}\in {\bf X}$, the outcome of the game, $\pi$, is given by the difference in the BERs of both teams during the entire course of the game. Formally:
\begin{eqnarray*}
\pi(\textbf{x}_{0},\u^{a}_{i},\u^{b}_{j})=N\cdot\int^{T}_{0}\underbrace{[p^{a}_{1}(t)+p^{a}_{2}(t)-p^{b}_{1}(t)-p^{b}_{2}(t)]}_{L}dt,
\end{eqnarray*}
where $p^{a}_{i}(t)$ and $p^{b}_{j}(t)$ are the BERs of agent $i$ in Team A and agent $j$ in Team B, respectively; $\u^a_i$ and $\u^b_j$ are likewise the control inputs of agents $i$ and $j$ in teams A and B, respectively; $N$ is the total number of transmitted bits which remains constant throughout the game; and $T$ is the time of termination of the game. We conclude that the objective of Team A is to minimize $\pi$, whereas that of Team B is to maximize it.
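As a simple numerical illustration, once the BER histories of the four agents are available (for instance, sampled along candidate trajectories), the outcome $\pi$ can be evaluated by quadrature; the sketch below (Python) assumes the sampled arrays are given and uses the trapezoidal rule.
\begin{verbatim}
import numpy as np

def game_outcome(t, p_a1, p_a2, p_b1, p_b2, N):
    """pi = N * integral over [0, T] of (p_a1 + p_a2 - p_b1 - p_b2) dt, trapezoidal rule."""
    L = p_a1 + p_a2 - p_b1 - p_b2
    dt = np.diff(t)
    return N * np.sum(0.5 * dt * (L[1:] + L[:-1]))
\end{verbatim}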
Since the agents are mobile, there are limitations on the amount of energy available to each agent that is dictated by the capacity of the power source carried by each agent. The game is said to terminate when any agent runs out of power. Let $P^{a}_{i}(t)$ and $P^{b}_{j}(t)$ denote the instantaneous power for communication used by player $i$ in Team A and player $j$ in Team B, respectively. We model this restriction as the following integral constraint for each agent:
\begin{eqnarray}
\int^{T}_{0}P_{i}(t)dt\leq E
\label{eqn:encon}
\end{eqnarray}
For each transmitter and receiver pair, we assume the following communications model in the presence of a jammer which is motivated by \cite{poov}. Given that the transmitter and the receiver are separated by a distance $d$, and the transmitter transmits with constant power $P_{T}$, the received signal power $P_{R}$ is given by
\begin{eqnarray}
P_{R}=\rho P_{T}d^{-\alpha},
\label{eqn:power}
\end{eqnarray}
where $\alpha$ is the path-loss exponent and $\rho$ depends on the antennas' gains. Typical values of $\alpha$ are in the range of $2$ to $4$. According to the free space path loss model, $\rho$ is given by:
\begin{equation*}
\rho = \frac{G_TG_R\lambda^2}{(4\pi)^2},
\end{equation*}
where $\lambda$ is the signal's wavelength and $G_T$, $G_R$ are the transmit and receive antennas' gains, respectively, in the line of sight direction. In real scenarios, $\rho$ is very small in magnitude. For example, using nondirectional antennas and transmitting at $900$ MHz, we have $\rho = \frac{(1)(0.33)^2}{(4\pi)^2} = 6.896\times10^{-4}$.
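For concreteness, the numerical value quoted above can be reproduced directly from the definition (a short Python check, using the rounded wavelength):
\begin{verbatim}
from math import pi

# free-space value of rho for unit-gain antennas at 900 MHz
G_T = G_R = 1.0
lam = 0.33                               # wavelength in metres (3e8 / 900e6, rounded)
rho = G_T * G_R * lam**2 / (4.0 * pi)**2
print(rho)                               # approximately 6.9e-4
\end{verbatim}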
The received signal-to-interference-plus-noise ratio (SINR) $s$ is given by
\begin{eqnarray}
s=\frac{P_{R}}{I+\sigma^2},
\label{eqn:sinr}
\end{eqnarray}
where $\sigma^2$ is the power of the noise added at the receiver, and $I$ is the total received interference power due to jamming and is defined as in (\ref{eqn:power}).
In addition to the energy constraints, there are limitations on the maximum power level of the devices that are used onboard each agent for the purpose of communication. For each player, this constraint is modelled by $0\leq P^{a}_{i}(t),P^{b}_{i}(t)\leq P_{max}$.
We also assume that players of each team have access to different M-QAM modulation schemes. We denote the set of available modulation sizes to the players in Team A by $\mathcal{M}^a$ and that available to players of Team B by $\mathcal{M}^b$. The sizes of the employed QAM modulation by the teams are M$^a \in \mathcal{M}^a$ and M$^b \in \mathcal{M}^b$. We assume that Team A can choose among $n$ different modulation schemes, and Team B chooses from a set of $m$ different schemes, i.e., $|\mathcal{M}^a| = n, |\mathcal{M}^b| = m$.
The instantaneous BER depends on the SINR, the modulation scheme, and the error control coding scheme utilized. Communications literature contains closed-form expressions and tight bounds that can be used to calculate the BER when the noise and interference are assumed to be Gaussian \cite{Goldsmith}. For uncoded M-QAM, where Gray encoding is used to map the bits into the symbols of the constellation, the BER can be approximated by \cite{Palomar}
\begin{equation} \label{BERuncodedQAM}
p(t) = g(s) \approx \frac{\zeta}{r} \mathcal{Q}\left(\sqrt{\beta s} \right),
\end{equation}
where $r=\log_2(M)$ is the number of bits per symbol, $\zeta = 4(1 - 1/\sqrt{M})$, $\beta = 3/(M-1)$, and $\mathcal{Q}(\cdot)$ is the tail probability of the standard Gaussian distribution.
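As a quick illustration of Eq. (\ref{BERuncodedQAM}), the uncoded M-QAM BER approximation can be evaluated numerically; the following sketch is not part of the system model, and the SINR value used in the example is arbitrary.
\begin{verbatim}
import math

def qfunc(x):
    """Tail probability of the standard Gaussian: Q(x) = 0.5*erfc(x/sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber_mqam(s, M):
    """Approximate BER of Gray-coded uncoded M-QAM at SINR s."""
    r    = math.log2(M)                       # bits per symbol
    zeta = 4.0 * (1.0 - 1.0 / math.sqrt(M))
    beta = 3.0 / (M - 1.0)
    return (zeta / r) * qfunc(math.sqrt(beta * s))

# Example: BER at 15 dB SINR for 16-QAM and 64-QAM (illustrative values)
s = 10 ** (15.0 / 10.0)
print(ber_mqam(s, 16), ber_mqam(s, 64))
\end{verbatim}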
To ensure a non-zero communication rate between the agents of each team, we impose a minimum rate constraint for each agent: $R^{a}_{i}(t),R^{b}_{i}(t)\geq \tilde{R}$, where $\tilde{R}>0$ is a threshold design rate, which we assume is the same for all agents. The results can be readily extended to networks of players having a different value of the minimum design rate.
At every instant, each agent has to decide on the fraction of the power that needs to be allocated for communication and jamming. Table \ref{tbl:decvar} provides a list of decision variables for the players, which models the power allocation. Each decision variable lies in the interval $[0,1]$, and the decision variables belonging to each row add up to one. The fraction of the total power allocated by the player in row $i$ to the player in column $j$ is given by the first entry in the cell $(i,j)$. This allocated power is used for jamming if the player in column $j$ belongs to the other team; otherwise, it is used to communicate with the agent in the same team. Similarly, the distance between the agent in row $i$ and the agent in column $j$ is given by the second entry in cell $(i,j)$. Since distance is a symmetric quantity, $d^{ij}_{}=d^{ji}_{}$ and $d_{ij}^{}=d_{ji}^{}$. Fig. \ref{fig:fig131} depicts the power allocation between the members of the same team as well as between members of different teams.
\begin{table}[h]
\caption{Decision variables and distances among agents.}
\label {tbl:decvar}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
&&&&\\
& $1^{b}$ & $2^{b}$&$1^{a}$&$2^{a}$\\
&&&&\\
\hline
&&&&\\
$1^{a}$&$\gamma^{1}_{1},d_{1}^{1}$&$\gamma^{1}_{2},d^{1}_{2}$&---&$\gamma^{12},d^{12}_{}$\\
&&&&\\
\hline
&&&&\\
$2^{a}$&$\gamma^{2}_{1},d_{1}^{2}$&$\gamma^{2}_{2},d_{2}^{2}$&$\gamma^{21},d_{}^{21}$&---\\
&&&&\\
\hline
&&&&\\
$1^{b}$&---&$\delta^{}_{12},d^{}_{12}$&$\delta^{1}_{1},d_{1}^{1}$&$\delta^{2}_{1},d_{1}^{2}$\\
&&&&\\
\hline
&&&&\\
$2^{b}$&$\delta^{}_{21},d^{}_{21}$&---&$\delta^{1}_{2},d_{2}^{1}$&$\delta^{2}_{2},d_{2}^{2}$\\
&&&&\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[t!]
\centering
\includegraphics[width=6cm]{coupjamcoup}
\caption{Power allocation among the agents for communication as well as jamming. }
\label{fig:fig131}
\end{figure}
In addition to power allocation, each team has to decide on the size of the QAM modulation to be used. In summary, each agent has to compute the following decision variables at each instance in accordance with the above tasks: (i) the instantaneous control (Task 1); (ii) the instantaneous power level, $P_{i}(t)$ (Task 2); (iii) all the decision variables present in the row corresponding to the agent in Table \ref{tbl:decvar} (Task 2); and (iv) the size of the QAM schemes, M$^a$ or M$^b$ (Task 3).
\section{Power Allocation} \label{PowerAllocation}
From (\ref{eqn:sinr}), the received SINR and the rate achieved by each agent are given by the following expressions
\begin{eqnarray}
s^{a}_{i} & = & \frac{\rho_aP^{a}_{j}(t)\gamma^{ji}_{}(d^{ij})^{-\alpha}}{\sigma^2+\rho_bP^{b}_{1}(t)\delta^{i}_{1}(d^{i}_{1})^{-\alpha}+\rho_bP^{b}_{2}(t)\delta^{i}_{2}(d^{i}_{2})^{-\alpha}} \nonumber \\
s^{b}_{i} & = & \frac{\rho_bP^{b}_{j}(t)\delta_{ji}(d_{ij})^{-\alpha}}{\sigma^2+\rho_aP^{a}_{1}(t)\gamma^{1}_{i}(d^{1}_{i})^{-\alpha}+\rho_aP^{a}_{2}(t)\gamma^{2}_{i}(d^{2}_{i})^{-\alpha}} \nonumber \\
R^a_i & = & \log(1+s^a_j), \quad R^b_i = \log(1+s^b_j)
\label{eqn:sinrexp}
\end{eqnarray}
Agents of the same team engage in a team problem, which eliminates their need to exchange information about their decision variables. Further, since agents in different teams do not communicate, they possess information only about their own decision variables. This makes the power allocation problem a continuous kernel zero-sum game between the teams:
\emph{Team A:} The objective of each agent is to minimize $L$.
\begin{equation}
\min_{P^{a}_{i},\gamma^{i}_{1},\gamma^{i}_{2},\gamma^{ij}} L(M^a,M^b)\Rightarrow \min_{P^{a}_{i},\gamma^{i}_{1},\gamma^{i}_{2},\gamma^{ij}} (\underbrace{p^{a}_{j}-p^{b}_{1}-p^{b}_{2}}_{L^{a}_{i}(M^a,M^b)})
\label{eqn:opt1a}
\end{equation}
subject to: \quad $0 \leq P^{a}_{i}(t)\leq P_{\text{max}}, \quad R^a_i \geq \tilde{R}$
\[\gamma^{i}_{1}+\gamma^{i}_{2}+\gamma^{ij}=1,\quad \gamma^{i}_{1},\gamma^{i}_{2},\gamma^{ij}\geq0\]
\emph{Team B:} The objective of each agent is to maximize $L$.
\begin{equation}
\max_{P^{b}_{i},\delta^{1}_{i},\delta^{2}_{i},\delta_{ij}} L(M^a,M^b)\Rightarrow \max_{P^{b}_{i},\delta^{1}_{i},\delta^{2}_{i},\delta_{ij}} (\underbrace{p^{a}_{1}+p^{a}_{2}-p^{b}_{j}}_{L_{i}^{b}(M^a,M^b)})
\label{eqn:opt1b}
\end{equation}
subject to: \quad $0 \leq P^{b}_{i}(t)\leq P_{\text{max}},\quad R^b_i \geq \tilde{R}$
\[\delta^{1}_{i}+\delta^{2}_{i}+\delta_{ij}=1,\quad \delta^{1}_{i},\delta^{2}_{i},\delta_{ij}\geq0\]
Note that the power allocation vector for $1^a$ denoted by $\gamma = (\gamma^{12},\gamma^1_1,\gamma^1_2)$ belongs to the intersection between the three-dimensional simplex $\Delta^{3}$ and the half-space $\mathbf{r}_0$, where $\mathbf{r}_0 = \{\gamma \,|\, \gamma^{12} \geq \frac{1}{a_1}(2^{\tilde{R}}-1)\}$ enforces the minimum rate constraint. The power allocation vectors of other players belong to similar sets.
In \cite{gamecomm11}, we showed that the optimal value of the power consumption for each player is $P_{\max}$. We also showed that the entire game terminates in a fixed time $T=\frac{E}{P_{\max}}$ irrespective of the initial position of the agents. Moreover, we provided a sufficient condition for the existence of a pure-strategy saddle-point equilibrium (PSSPE) for the power allocation game when uncoded M-QAM schemes are used by all agents. Here, we modify the condition to allow teams to use different modulation schemes, as made formal by the next theorem.
\begin{theorem} \label{thm:QAMPSNE}
When all players employ uncoded $M$-QAM modulation schemes, the power allocation team game formulated above has a unique PSSPE solution if the following condition is satisfied:
\begin{equation}
P_{max} \cdot \max \left\{\frac{\rho_a(d^{12})^{-\alpha}}{M^a - 1}, \frac{\rho_b(d_{12})^{-\alpha}}{M^b - 1} \right\} < \sigma^2.
\end{equation}
For the special case of $M^a = M^b = M$ and $\rho_b \approx \rho_a = \rho $, the condition becomes
\begin{equation}
\beta \rho P_{max} \left(\min\{d^{12}, d_{12} \}\right)^{-\alpha} < 3\sigma^2. \label{Thm4Cond}\\
\end{equation}
\end{theorem}
The proof is similar to that presented in \cite{gamecomm11} and is omitted here. Note that the left hand side of inequality (\ref{Thm4Cond}) depends entirely on physical design parameters; this is of particular importance for design purposes. Moreover, we showed in \cite{gamecomm11} that this condition can be expressed in terms of the received signal-to-noise-ratios (SNRs) for all players, which could be more insightful from a communication systems perspective. Consider, for example, 1$^a$, and let SNR$^x_y = \frac{P_{max} \gamma^x_y \rho (d^x_y)^{-\alpha} }{\sigma^2}$ and SNR$_{xy} = \frac{P_{max} \delta_{xy} \rho (d_{yx})^{-\alpha} }{\sigma^2}$. We then have:
\begin{eqnarray*}
\text{SNR}_{ij} & < & \frac{3}{\beta}(\text{SNR}_j^2 + 1) \quad i,j\in\{1,2\}; i\neq j\\
\end{eqnarray*}
Yet another useful way to interpret condition (\ref{Thm4Cond}) is regarding it as a minimum rate condition:
\begin{equation*}
r>\log\left(1+\frac{\rho P_{max} \left(\min\{d^{12}, d_{12} \}\right)^{-\alpha}}{\sigma^2}\right).
\end{equation*}
Assuming (\ref{Thm4Cond}) holds, the objective function is strictly convex in the decision variables of $1^a$, $2^a$ and strictly concave in the decision variables of $1^b$, $2^b$. A unique globally optimal solution $(\bar{\gamma})$ therefore exists, which we characterize using the KKT conditions \cite{Luenberger}. Consider, for example, the case of $1^a$. The expressions for SINR provided in (\ref{eqn:sinrexp}) relevant to the optimization problem being solved by $1^{a}$ can be written in a concise form as shown below:
\begin{eqnarray*}
s^{a}_{2}= a_{1}\gamma^{12},\quad
s^{b}_{1}=\frac{b_{1}}{c_{1}+\gamma_{1}^{1}},\quad
s^{b}_{2}=\frac{d_{1}}{e_{1}+\gamma_{2}^{1}},
\end{eqnarray*}
where
\begin{eqnarray*}
a_1 & = & \frac{1}{\frac{\sigma^2}{P_{max}\rho_a(d^{12})^{-\alpha}}+\tilde{\rho}\delta^{2}_{1}\left(\frac{d^{2}_{1}}{d^{12}}\right)^{-\alpha}+\tilde{\rho}\delta^{2}_{2}\left(\frac{d^{2}_{2}}{d^{12}}\right)^{-\alpha}},\\
b_1 & = &\tilde{\rho}\delta_{21}\left(\frac{d_{12}}{d^{1}_{1}}\right)^{-\alpha}, \quad d_1 = \tilde{\rho}\delta_{12}\left(\frac{d_{12}}{d^{1}_{2}}\right)^{-\alpha}, \\
c_1 & = & \frac{\sigma^2}{P_{max}\rho_a(d^{1}_{1})^{-\alpha}}+\gamma^{2}_{1}\left(\frac{d^{2}_{1}}{d^{1}_{1}}\right)^{-\alpha}, \\
e_1 & = & \frac{\sigma^2}{P_{max}\rho_a(d^{1}_{2})^{-\alpha}}+\gamma^{2}_{2}\left(\frac{d^{2}_{2}}{d^{1}_{2}}\right)^{-\alpha},\quad \tilde{\rho} = \left(\frac{\rho_b}{\rho_a}\right).
\end{eqnarray*}
The KKT conditions can then be written as:
\begin{eqnarray}
\nabla L_1^a(\bar{\gamma})+\displaystyle\sum^{3}_{i=1}\lambda_{i}\nabla h_{i}(\bar{\gamma})+\eta\nabla h(\bar{\gamma})=0,\\
\lambda_{i}h_{i}(\bar{\gamma})=0,\quad \lambda_{i},\eta\geq0,\quad i\in\{1,2,3\}\nonumber
\end{eqnarray}
where
\begin{eqnarray}
h_{1}(\bar{\gamma})&=&-\gamma^{12} + \min\left\{\frac{2^{\tilde{R}}-1}{a_1},1\right\} \leq 0 \nonumber\\
h_{2}(\bar{\gamma})&=&-\gamma^{1}_{1}\leq0, \quad h_{3}(\bar{\gamma})=-\gamma^{1}_{2}\leq0 \nonumber\\
h(\bar{\gamma})&=&\gamma^{12}+\gamma^{1}_{1}+\gamma^{1}_{2}-1=0\nonumber
\end{eqnarray}
Now, we present the necessary and sufficient conditions for the solution to the optimization problem for the agents. Let us consider the case of $1^{a}$. The assumptions of Theorem \ref{thm:QAMPSNE} regarding the strict convexity of $L^{a}_{1}$ render the KKT conditions both necessary and sufficient for the unique global minimum.
\begin{figure*}
$\mathbf{A}=$
\begin{align} \label{StaticMatrixGame}
\begin{tabular}{r|c|ccc|c|c}
\multicolumn{1}{r}{}
& \multicolumn{1}{c}{$\mathcal{M}^b(1)$}
& \multicolumn{1}{c}{$\mathcal{M}^b(2)$}
& \multicolumn{1}{c}{...}
& \multicolumn{1}{c}{$\mathcal{M}^b(m-1)$}
& \multicolumn{1}{c}{$\mathcal{M}^b(m)$} \\
\cline{2-6}
$\mathcal{M}^a(1)$ & $L(\mathcal{M}^a(1),\mathcal{M}^b(1))$ & $L(\mathcal{M}^a(1),\mathcal{M}^b(2))$ & ... & $L(\mathcal{M}^a(1),\mathcal{M}^b(m-1))$ & $L(\mathcal{M}^a(1),\mathcal{M}^b(m))$ \\
\cline{2-6}
$\mathcal{M}^a(2)$ & $L(\mathcal{M}^a(2),\mathcal{M}^b(1))$ & $L(\mathcal{M}^a(2),\mathcal{M}^b(2))$ & ...& $L(\mathcal{M}^a(2),\mathcal{M}^b(m-1))$ & $L(\mathcal{M}^a(2),\mathcal{M}^b(m))$\\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & Team A \\
$\mathcal{M}^a(n)$ & $L(\mathcal{M}^a(n),\mathcal{M}^b(1))$ & $L(\mathcal{M}^a(n),\mathcal{M}^b(2))$ & ...& $L(\mathcal{M}^a(n),\mathcal{M}^b(m-1))$ & $L(\mathcal{M}^a(n),\mathcal{M}^b(m))$\\
\cline{2-6}
\multicolumn{1}{r}{}
& \multicolumn{1}{c}{}
& \multicolumn{1}{c}{}
& \multicolumn{1}{c}{Team B}
& \multicolumn{1}{c}{}
& \multicolumn{1}{c}{}
\end{tabular}
\end{align}
\hrulefill
\vspace{-0.3cm}
\end{figure*}
To this end, we obtain:
\[\nabla L_1^a=\left[\begin{array}{c}a_1g'(s^{a}_{2})\\ \frac{b_1g'(s^{b}_{1})}{(c_1+\gamma^{1}_{1})^{2}}\\ \frac{d_1g'(s^{b}_{2})}{(e_1+\gamma^{1}_{2})^{2}}\end{array}\right],\nabla h(\bar{\gamma})=\left[\begin{array}{c}1\\1\\1\end{array}\right]\]
\[\nabla h_{1}(\bar{\gamma})=\left[\begin{array}{c}-1\\0\\0\end{array}\right],\nabla h_{2}(\bar{\gamma})=\left[\begin{array}{c}0\\-1\\0\end{array}\right], \nabla h_{3}(\bar{\gamma})=\left[\begin{array}{c}0\\0\\-1\end{array}\right].
\]
Since $\gamma\in \Delta^{3} \cap \mathbf{r}_0$, at most three of the constraints can be active at any given point. Hence, the gradients of the constraints at any feasible point are always linearly independent.
If two of the three constraints among $\{h_{1},h_{2},h_{3}\}$ are active, then $\bar{\gamma}$ has a unique solution that is given by the vertex of the simplex that satisfies the two constraints. If only one of the constraints among $\{h_{1},h_{2},h_{3}\}$ is active, then we have the following cases depending on the active constraint
\begin{enumerate}
\item $h_{1}(\bar{\gamma}^{1})=0$: $\bar{\gamma}^{1}=(\tilde{\gamma}^{12},\gamma^{1*}_{1},1-\gamma^{1*}_{1}-\tilde{\gamma}^{12})$, where $\tilde{\gamma}^{12} = \min\left\{\frac{2^{\tilde{R}}-1}{a_1},1\right\}$, satisfies the equation
\begin{equation}
g'(s_{2}^{b})\frac{d_{1}}{[e_{1}+(1-\gamma^{1*}_{1}-\tilde{\gamma}^{12})]^{2}}=g'(s_{1}^{b})\frac{b_{1}}{[c_{1}+\gamma^{1*}_{1}]^{2}}
\label{eqn:Case1}
\end{equation}
\item $h_{2}(\bar{\gamma}^{2})=0$: $\bar{\gamma}^{2}=(1-\gamma^{1*}_{2},0,\gamma^{1*}_{2})$ satisfies the following equation
\begin{eqnarray}
a_{1}g'(s^{a}_{2})=\frac{d_{1}g'(s^{b}_{2})}{(e_{1}+\gamma^{1*}_{2})^{2}}
\label{eqn:Case2}
\end{eqnarray}
\item $h_{3}(\bar{\gamma}^{3})=0$: $\bar{\gamma}^{3}=(1-\gamma^{1*}_{1},\gamma^{1*}_{1},0)$ satisfies the following equation
\begin{eqnarray}
a_{1}g'(s^{a}_{2})=\frac{b_{1}g'(s^{b}_{1})}{(c_{1}+\gamma^{1*}_{1})^{2}}
\label{eqn:Case3}
\end{eqnarray}
\end{enumerate}
If none of the inequality constraints are active, then $\bar{\gamma}^{4}=(\underbrace{1-\gamma^{1*}_{1}-\gamma^{1*}_{2}}_{\gamma^{12*}_{}},\gamma^{1*}_{1},\gamma^{1*}_{2})$
is the solution to:
\begin{eqnarray}
a_{1}g'(s^{a}_{2})-\frac{b_{1}}{[c_{1}+\gamma^{1*}_{1}]^{2}}g'(s^{b}_{1})=0\nonumber\\
a_{1}g'(s^{a}_{2})-\frac{d_{1}}{[e_{1}+\gamma^{1*}_{2}]^{2}}g'(s^{b}_{2})=0
\label{eqn:Case4}
\end{eqnarray}
Here, $\bar{\gamma}$ lies in the set $\{(1,0,0),(0,1,0),(0,0,1),\bar{\gamma}^{1},\bar{\gamma}^{2},\bar{\gamma}^{3},\bar{\gamma}^{4}\}$. An important point to note is that $a_{1},b_{1},c_{1},d_{1}$ and $e_{1}$ depend on the decisions of the other players. Therefore, the computation of the decision variables depends on the values of the decision variables of the rest of the players. A possible way to deal with this problem is to use iterative schemes for the computation of strategies. Reference \cite{Basar} provides some insights into the efficacy of such schemes from the point of view of convergence and stability. In this work, we assume that each agent has enough computational power so as to complete these iterations in a negligible amount of time compared to the total horizon of the game. The specific conditions for 1$^a$ corresponding to (\ref{eqn:Case1})-(\ref{eqn:Case3}) when M-QAM modulations are utilized are:
\begin{eqnarray}
\left(\frac{s_1^b}{s_2^b} \right)^{\frac{3}{2}}\exp\left(-\frac{\beta}{2}(s_1^b - s_2^b) \right) - \frac{b_1}{d_1} &=& 0 \\
\left(\frac{s_2^b}{s_2^a} \right)^{\frac{1}{2}}\exp\left(-\frac{\beta}{2}(s_2^a - s_2^b) \right) - \frac{a_1d_1}{(e_1+\gamma_2^1)^2} &=& 0 \label{eqn:2in1}\\
\left(\frac{s_1^b}{s_2^a} \right)^{\frac{1}{2}}\exp\left(-\frac{\beta}{2}(s_2^a - s_1^b) \right) - \frac{a_1b_1}{(c_1+\gamma_1^1)^2} &=& 0 \label{eqn:3in1}
\end{eqnarray}
Also, (\ref{eqn:Case4}) in this case corresponds to solving (\ref{eqn:2in1}) and (\ref{eqn:3in1}) jointly.
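As an illustration of how these stationarity conditions can be solved in practice, the sketch below finds the root of the scalar condition (\ref{eqn:Case2}) by bisection, assuming uncoded M-QAM so that $g(s)=(\zeta/r)\mathcal{Q}(\sqrt{\beta s})$. The coefficients $a_1$, $d_1$, and $e_1$ are placeholder values standing in for the expressions given above; this is only a sketch of one step of the iterative best-response scheme, not the actual implementation.
\begin{verbatim}
import math

def gprime(s, M):
    """d/ds of g(s) = (zeta/r) * Q(sqrt(beta*s)) for uncoded M-QAM."""
    r    = math.log2(M)
    zeta = 4.0 * (1.0 - 1.0 / math.sqrt(M))
    beta = 3.0 / (M - 1.0)
    x    = math.sqrt(beta * s)
    pdf  = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    return -(zeta / r) * pdf * beta / (2.0 * x)

def case2_residual(g, a1, d1, e1, Ma, Mb):
    """a1*g'(s^a_2) - d1*g'(s^b_2)/(e1+g)^2, with gamma^{1*}_2 = g, gamma^{12} = 1-g."""
    s_a2 = a1 * (1.0 - g)
    s_b2 = d1 / (e1 + g)
    return a1 * gprime(s_a2, Ma) - d1 * gprime(s_b2, Mb) / (e1 + g) ** 2

def bisect(f, lo, hi, tol=1e-10):
    """Plain bisection; assumes f(lo) and f(hi) have opposite signs."""
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) * flo <= 0.0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

# Placeholder coefficients (a1, d1, e1 would come from the expressions above)
a1, d1, e1 = 50.0, 2.0, 0.5
root = bisect(lambda g: case2_residual(g, a1, d1, e1, Ma=16, Mb=16), 1e-6, 1.0 - 1e-6)
print(root)
\end{verbatim}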
\section{Adaptive Modulation} \label{AdaptiveModulation}
The time-varying nature of the channels due to mobility emphasizes the need for robust communications. Adaptive modulation is a widely used technique as it allows for choosing the design parameters of a communications system to better match the physical characteristics of the channels in order to optimize a given metric such as: minimizing BER or maximizing spectral efficiency. In this work, we model the adaptive modulation as a matrix zero-sum game between the two teams. We therefore look for an equilibrium solution which would dictate what modulations should be adopted by the teams at each time instant. The competitive nature of the jamming teams makes our approach to the problem most practical as any other non-equilibrium solution cannot produce an improved outcome, relative to that yielded by the equilibrium, for any of the teams.
The matrix game is given in (\ref{StaticMatrixGame}). The rows are all the possible actions for players of Team A, and the columns are the different options available for Team B. The $(i,j)$-th element of the matrix is the value of the objective function $L$ when Team A employs $M^a = \mathcal{M}^a(i)$, and Team B employs $M^b = \mathcal{M}^b(j)$.
A PSSPE does not always exist for the adaptive modulation matrix game. The condition for the existence of a PSSPE is $\min_i \max_j \mathbf{A}_{ij} = \max_j \min_i \mathbf{A}_{ij}$ \cite{Basar}. In case a PSSPE does not exist, we need to look for a solution in the larger class of mixed strategies. A pair of strategies $\{M^{a*},M^{b*}\}$ is said to be a mixed-strategy saddle-point equilibrium (MSSPE) for the matrix game if \cite{Basar}
\begin{equation*}
(M^{a*})^T\mathbf{A}M^{b} \leq (M^{a*})^T\mathbf{A}M^{b*} \leq (M^{a})^T\mathbf{A}M^{b*}
\end{equation*}
For an $n\times m$ matrix game, the following theorem from \cite{Basar}, which we state without proof, establishes the existence of an MSSPE for the adaptive modulation game.
\begin{theorem} \label{thm:AdaptiveMod}
The adaptive modulation game admits an MSSPE.
\end{theorem}
In case multiple MSSPEs exist, the following corollary becomes essential \cite{Basar}.
\begin{corollary}
If $\{\mathcal{M}^a(i_1), \mathcal{M}^b(j_1)\}$ and $\{\mathcal{M}^a(i_2), \mathcal{M}^b(j_2)\}$ are two MSSPEs of the adaptive modulation game, then $\{\mathcal{M}^a(i_1), \mathcal{M}^b(j_2)\}$ and $\{\mathcal{M}^a(i_2), \mathcal{M}^b(j_1)\}$ are also MSSPEs.
\end{corollary}
This is termed the \emph{ordered interchangeability property}, and its importance lies in that it removes any ambiguity associated with the existence of multiple equilibrium solutions, as the teams do not need to communicate to each other which equilibrium solution they will be adopting. The literature contains different efficient low-complexity algorithms that compute MSSPEs for matrix games, such as Gambit \cite{Gambit}. We refer the interested reader to \cite{Basar} for a discussion of some of these approaches. Section \ref{Simulations} illustrates these concepts and shows how the choice of modulation size changes with the characteristics of the environment. Finally, it is assumed that players of each team communicate their modulation choices among themselves through a reliable side channel.
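Theorem \ref{thm:AdaptiveMod} guarantees existence only; a standard way to actually compute an MSSPE of a finite zero-sum matrix game of the form (\ref{StaticMatrixGame}) is via linear programming. The sketch below is a generic minimax LP written with scipy (not the authors' implementation, which relies on Gambit); the $2\times 2$ example matrix is a toy payoff with no pure saddle point, used only to show the call.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def zero_sum_row_min(A):
    """Mixed strategy for the row player that MINIMIZES x^T A y (Team A's side).

    Solves:  min v  s.t.  sum_i x_i A[i, j] <= v  for all j,
             sum_i x_i = 1,  x >= 0.
    Returns the optimal mixed strategy x and the game value v.
    """
    n, m = A.shape
    c    = np.zeros(n + 1); c[-1] = 1.0                 # minimize v
    A_ub = np.hstack([A.T, -np.ones((m, 1))])           # A^T x - v <= 0
    b_ub = np.zeros(m)
    A_eq = np.zeros((1, n + 1)); A_eq[0, :n] = 1.0      # probabilities sum to 1
    b_eq = np.array([1.0])
    bounds = [(0.0, 1.0)] * n + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n], res.x[-1]

# Toy matrix with no pure saddle point: the MSSPE mixes both rows equally
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
x, v = zero_sum_row_min(A)
print(x, v)   # -> [0.5 0.5], value 0.0
\end{verbatim}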
\section{Simulations Results} \label{Simulations}
To better understand the adaptive modulation scheme, we present the following example. Let $\mathcal{M}^a = \mathcal{M}^b = \{16,64,256 \}$, $d^1_1=17.7864$, $d^1_2=15.3376$, $d^2_1= 19.8951 $, $d^2_2=14.1128$, $d^{12}=20.6309$, $d_{12}=26.3224$, $\rho_a=0.0570$, $\rho_b= 0.0517$, $P_{max}=100$, $\sigma^2=10^{-3}$, $\tilde{R}=1$, and $\alpha = 2$. The matrix $\mathbf{A}$ corresponding to these values was found to be:
\begin{eqnarray*}
\mathbf{A} =
\begin{tabular}{r|c|c|c|c}
\multicolumn{1}{r}{}
& \multicolumn{1}{c}{$16$}
& \multicolumn{1}{c}{$64$}
& \multicolumn{1}{c}{$256$} \\
\cline{2-4}
$16$ & $0.0158$ & $0.0533$ & $0.1229$ \\
\cline{2-4}
$64$ & $-0.0356$ & $0.0091$ & $0.0728$ & Team A\\
\cline{2-4}
$256$ & $-0.1155$ & $-0.0677$ & $0.0040$\\
\cline{2-4}
\multicolumn{1}{r}{}
& \multicolumn{1}{c}{}
& \multicolumn{1}{c}{Team B}
& \multicolumn{1}{c}{}
\end{tabular}
\end{eqnarray*}
Note that the third row dominates the other rows \emph{strictly}, and there is a unique PSSPE given by $\{256,256\}$ in this case.
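The strict dominance of the third row and the saddle-point condition $\min_i \max_j \mathbf{A}_{ij} = \max_j \min_i \mathbf{A}_{ij}$ can be verified directly from the reported matrix, as in the short check below (Team A picks rows to minimize, Team B picks columns to maximize).
\begin{verbatim}
import numpy as np

# Payoff matrix reported above (rows: Team A's modulation, columns: Team B's)
A = np.array([[ 0.0158,  0.0533, 0.1229],
              [-0.0356,  0.0091, 0.0728],
              [-0.1155, -0.0677, 0.0040]])

minimax = A.max(axis=1).min()   # min_i max_j A_ij = 0.0040 (row  '256')
maximin = A.min(axis=0).max()   # max_j min_i A_ij = 0.0040 (col  '256')
print(minimax, maximin, np.isclose(minimax, maximin))  # equal -> PSSPE {256, 256}
\end{verbatim}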
Fig. \ref{fig:fig222} depicts how the teams adapt their modulation scheme relative to the SNR, which we define as $P_{max}/\sigma^2$. The set of modulations available to each team is $\mathcal{M} = \{16,20,24,28\}$. Players $1^a$, $2^a$, and $1^b$ were placed close to each other, while $2^b$ is far from all of them; in particular: $d^1_1=2.2036$, $d^1_2=33.6830$, $d^2_1= 2.4211 $, $d^2_2=33.6393$, $d^{12}= 4.5607 $, $d_{12}= 33.2022 $. We also let $\rho_a=0.0570$, $\rho_b= 0.0517 $, $P_{max}=1$, $\tilde{R}=1$, $\alpha = 3$, and varied the noise variance at all receivers to simulate the presented SNR range. We observe that both teams switch to a constellation of a smaller size at SNR$=50$ dB. This is because both teams switch from pure communication to simultaneous communication and jamming: since part of the power budget is now allocated to jamming, a smaller constellation size is needed to guarantee robust communications.
\begin{figure}[t!]
\centering
\includegraphics[width=8.5cm]{Mods}
\caption{Adaptive Modulation }
\label{fig:fig222}
\end{figure}
\section{Conclusion and Future Work} \label{Conclusion}
This paper has studied the power allocation problem for jamming teams. An underlying static game was used to obtain the optimal power allocation, where the power budget of each user is split between communication and jamming powers. A separate matrix game was utilized in order to arrive at the optimal modulation schemes for each team. This work focused on the analysis of teams consisting of two players only; a potential future direction is to generalize the results to teams consisting of multiple agents. Moreover, future work will consider scenarios of players possessing incomplete information and study the problem in the context of Bayesian games.
\bibliographystyle{abbrv}
\section*{}
\vspace{-1cm}
\footnotetext{\textit{$^{a}$~LMGC, Universit\'e de Montpellier, CNRS, Montpellier, France. \\
[email protected],\\ [email protected],\\[email protected],\\[email protected]}}
\footnotetext{\textit{$^{b}$~Laboratoire de Micromécanique et Intégrité des Structures (MIST), UM, CNRS, IRSN, France.}}
\footnotetext{\textit{$^{c}$~Department of Civil, Geological and Mining Engineering, Polytechnique, 2500, chemin de Polytechnique, Montréal, Qu\'ebec, Canada.
[email protected]}}
\footnotetext{\textit{$^{d}$~Institut Universitaire de France (IUF), Paris, France.}}
\section{Introduction}
The importance of understanding the physics behind the compaction of granular systems made of soft particles lies in the numerous natural phenomena and human activities that deal with such kind of materials.
They are present from constitutive biological cells, foams, and suspensions \cite{Katgert2010_Jamming, Chelin2013_Simulation, Dijksman2017, Katgert2019_The} to powder compaction, pharmaceutical industries, and food activities \cite{Heckel1961, Montes2010, Parilak2017, Montes2018}.
In some civil engineering applications, mixing coarse grains with rubber residues exhibits surprising properties, such as better stress relaxation \cite{Indrarantna2019_Use, Khatami2019_The, Anastasiadis2012_Small, Platzer2018} or better foundation damping \cite{Mashiri2015_Shear, Senetakis2012_Dynamic, Kianoosh_2021}.
In particular, far beyond the jamming point, the compaction behavior of soft granular materials is a vast and still open subject, with notable experimental, numerical, and theoretical challenges.
Among these challenges, a three-dimensional characterization, by a realistic model that would consider both the change in grain shape and the assemblies' multi-contact aspects, remains poorly studied.
In experiments, an underlying difficulty is to track the change in particle shape while detecting the making of new contacts.
Photoelasticimetry \cite{ Howell1999, KDaniels2017, Zadeh_2019} and inverse problem methods coupled with Digital Image Correlation (DIC) \cite{HURLEY2014154, Marteau2007} are the most common experimental techniques for analyzing the two- and three-dimensional behavior of hard particles.
In particular, the DIC method, which has been very recently extended to analyze two-dimensional soft particle assemblies far beyond the jammed state \cite{Vu2019, Vu2019a}, directly quantifies the deformation field inside the particles and characterizes the deformation mechanisms.
However, for high packing densities, the image resolution may sharply limit the tracking of the grains and the detection of contacts between highly deformed particles, which is crucial for three-dimensional geometries.
Furthermore, in three dimensions, it is not always possible to use an optical approach to measure local properties of the particles, and tomography reconstructions may be necessary, but technically laborious \cite{Dijksman2017,MTiel2017,bares2020_Transparent, Ando2020}.
Concerning numerical modeling, the discrete element method (DEM), coupled with a complementary approach such as the Bonded-Particle Method (BPM) or the Finite-Element Method (FEM), is a suitable framework to simulate and analyze the compaction of soft particles assemblies.
In DEM-BPM, deformable particles are seen as aggregates of rigid particles interacting via elastic bonds \cite{dostaNumericalInvestigationCompaction2017, Nezamabadi_2017, Asadi2018_Discrete}.
This approach is relatively straightforward and allows one to simulate 3D packing composed of a large number of aggregates.
However, a major drawback is that the deformable particles often present a plastic behavior at large strain, and their characterization can be complex depending on the imposed numerical parameters (e.g., size of primary particles or interaction laws) \cite{Azema2018, voMechanicalStrengthWet2018a}.
In contrast, DEM-FEM strategies have the advantage of being closely representative both in terms of geometry and bulk properties of the particles. The price to pay is that these simulations are computationally expensive.
DEM-FEM methods can be classified into two classes: the Multi-Particle Finite Element Method (MPFEM) \cite{Munjiza_2004}, in which regularized contact interactions are used, and the Non-Smooth Contact Dynamics method (NSCD) \cite{Moreau1994_Some, Jean1999}, which uses non-regularized contact laws.
To our best knowledge, the first MPFEM simulations applied to the compaction of deformable disks were performed in the 2000s \cite{Gethin_2002, Procopio2005}, and the first 3D compaction simulations appear a few years later \cite{Shmidt_2010, HARTHONG2012784, ABDELMOULA2017142, ZOU2020297, PENG2021478}.
The first applications of the NSCD to the compaction of soft grains assembly are reported in recent works by Vu et al. \cite{Vu2019, Vu2020_compaction, Vu2021_Effects} for two-dimensional hyper-elastic disks.
In practice, an inherent difficulty to DEM-FEM methods that explains the small number of studies, particularly in 3D, is the high computational cost, limiting the number of particles that can be simulated \cite{HARTHONG2012784, ABDELMOULA2017142, PENG2021478}.
Finally, the lack of a description of the microstructural phenomena during compression limits the development of theoretical models of the compaction of soft grain assemblies (i.e., a relation between the applied pressure $P$ and the evolution of the packing fraction $\phi$).
As reviewed in literature \cite{Heckel1961, Panelli2001, Comoglu2007, Denny2002, Popescu2018_Compaction, Montes2018, Platzer2018, nezamabadiModellingCompactionPlastic2021}, many compaction equations have been proposed during the last decades.
In general, existing models are based on macroscopic assumptions, and, thus, fitting parameters are required to adjust each expression to the data.
Among these equations, the most used is the one proposed by Heckel \cite{Heckel1961} and later improved by Secondi \cite{Secondi2002_Modelling}. It states that $P \propto \ln(\phi_{max}-\phi)$, where $\phi_{max}$ is the maximum packing fraction that the assembly can reach.
Carroll and Kim justified this equation by an analogy between the corresponding loss of void space and the collapse of a cavity within an elastic medium under isotropic compression \cite{Carroll1984, Kim1987}.
Depending on the authors, different interpretations have been provided to the fitting parameters, which are supposed to represent either a characteristic pressure, a hardening parameter, or is linked to the assembly's plasticity \cite{Platzer2018, Montes2018}.
Only recently, Cantor et al. \cite{Cantor2020_Compaction} and Cardenas-Barrantes et al. \cite{Cardenas_pentagons_2021} set up a systematic micro-mechanical
approach to study the compaction of soft granular assemblies.
By applying this framework to two-dimensional systems modeled with NSCD simulations, new compaction laws entirely determined through the evolution of the connectivity of the particles and the contact properties were presented.
This article presents a three-dimensional numerical and theoretical analysis of the compaction of assemblies composed of highly deformable (elastic) spherical particles using the Non-Smooth Contact Dynamics Method.
We are interested in the compaction evolution as a function of the applied stress from the jammed state to a packing fraction close to unity.
As mentioned before, similar studies were recently performed in 2D \cite{Cantor2020_Compaction, Cardenas_pentagons_2021}.
The transition from 2D to 3D requires additional numerical and technical efforts to manage the particles' deformation correctly.
Furthermore, as we will see, the resulting compaction equation differs from those previously established in 2D using the same micromechanical framework.
This will be understood from the approximation of Hertz's contact law in the small deformation regime, where the contact force between two deformable particles shifts from a linear dependence on the contact deflection in 2D to a power-law dependence in 3D.
The paper is organized as follows. In Section \ref{Numerical_method}, we introduce the numerical framework used in the simulations.
The numerical results and the theoretical model of the 3D compaction curves are discussed in Sec.\ref{Sec_results}.
Section \ref{Stress_evolution} deals with the evolution of the packing fraction and particle connectivity beyond the jamming point as a function of the applied stress.
In Sec. \ref{Theory}, we present the micro-structural elements behind the evolution of the packing fraction and the corresponding resulting 3D equation.
In Section \ref{Local_stress}, a more refined description of the particle stresses is presented within the limit of the representativeness of the considered samples.
Finally, some conclusions are discussed in Section \ref{Conclu_section}.
\section{Numerical Approach}
\label{Numerical_method}
\subsection{The Non-Smooth Contact Dynamic Method (NSCD)}
The simulations are performed using the Non-Smooth Contact Dynamics (NSCD), a method developed by Moreau and Jean \cite{Moreau1994_Some, Jean1999, Dubois2018}.
The NSCD extends the Contact Dynamics (CD) method \cite{Moreau1994_Some} to deformable bodies through a finite element approach (FEM).
The CD method is based on an implicit time integration of the equations of motion and non-regularized contact laws.
These contact laws set the non-penetrability and friction behavior between the particles.
No elastic repulsive potentials and no smoothing of the Coulomb friction law are needed to determine the contact forces.
Therefore, the unknown variables, i.e., particle velocities and contact forces, are simultaneously solved via a nonlinear Gauss-Seidel scheme.
Considering deformable bodies (in the sense of continuous mechanics) is natural with CD, although technically very complex to implement.
In this case, the bodies are discretized via finite elements, so the degrees of freedom - the coordinates of the nodes - and contact interactions are resolved simultaneously.
We used an implementation of the three-dimensional Non-Smooth Contact Dynamics Method available on the open-source software LMGC90, capable of modeling a collection of deformable or non-deformable particles of various shapes, behaviors and interactions \cite{Dubois2006}.
\subsection{Packing composed of 3D elastic particles}
When dealing with three-dimensional and highly deformable particles, a problematic issue is to find the best compromise between sample representativeness and numerical efficiency.
In this study, we are interested in the isotropic compression of elastic spherical particles.
Therefore, one necessary condition is to verify that the mesh used is, at least, sufficiently accurate concerning the Hertz approximation in the range of small deformations \cite{ johnsonContactMechanics1985}.
Let us first consider the case of an elastic spherical particle of diameter $d$, with a Poisson's ratio $\nu$ equals $0.495$ and a Young modulus $E$. The sphere is compressed axially as shown in Fig. \ref{Mes_part}(a).
The bottom wall is fixed while the top wall moves downwards at a constant velocity $v_0$ chosen such that inertial effects are negligible, i.e., $I \ll 1$, where $I=v_0 \sqrt{\rho_0/E}$ \cite{gdrmidiDenseGranularFlows2004}, with $\rho_0$ being the density of the particles.
Figure \ref{Mes_part}(b) shows the evolution of the normal force $f$ as a function of the vertical displacement $\delta$ using $444$, $808$, $6685$ and $14688$ tetrahedral elements with four nodes, together with the corresponding prediction of the Hertz law given by \cite{johnsonContactMechanics1985}:
\begin{equation}
\label{Hert_wall_sphere}
\frac{f}{d^2} = \frac{2^{3/2}}{3} \frac{E}{1-\nu^2} {\left(\frac{\delta}{d}\right)}^{3/2}.
\end{equation}
Compared to this equation, we obtain a good prediction with $14688$ and $6685$ elements, while with $444$ elements, it shows decreasing accuracy.
On the contrary, with $808$ elements, the force-displacement relation is slightly overestimated at the beginning of the deformation, but the response quickly reaches the prediction at higher deformation.
Following this simple analysis, we fixed the number of elements to $808$ for all the simulations presented below.
\begin{figure}
\centering
\hspace{26pt} \includegraphics[width=0.7\linewidth]{figures/1Particle.pdf} \hspace{20pt} (a)
\includegraphics[width=0.91\linewidth]{figures/Hertz.pdf}(b)
\caption{(a) 3D cross-section of an elastic spherical particle vertically compressed between two walls. The color intensity is proportional to the mean displacement field.
(b) Normal contact force applied on a single spherical particle as a function of the deformation for different meshes. The continuous black line is the approximation given by the Hertz's contact law Eq. (\ref{Hert_wall_sphere}).}
\label{Mes_part}
\end{figure}
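For reference, Eq. (\ref{Hert_wall_sphere}), used as the benchmark in Fig. \ref{Mes_part}(b), can be evaluated directly; the sketch below is a minimal illustration with unit diameter and modulus, since only the dimensionless ratio $f/(Ed^2)$ versus $\delta/d$ matters for the comparison.
\begin{verbatim}
import numpy as np

def hertz_force(delta, d, E, nu=0.495):
    """Hertz approximation for a sphere pressed on a rigid wall:
       f = (2^{3/2}/3) * E/(1-nu^2) * d^2 * (delta/d)^{3/2}."""
    return (2.0 ** 1.5 / 3.0) * (E / (1.0 - nu ** 2)) * d ** 2 * (delta / d) ** 1.5

# Dimensionless force f/(E d^2) versus relative deflection delta/d
d, E = 1.0, 1.0
for x in (0.02, 0.05, 0.10):
    print(x, hertz_force(x * d, d, E) / (E * d ** 2))
\end{verbatim}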
Concerning the number of particles in an assembly, we adopt a double approach.
First, we rely on previously published works in which it is shown that a number between $32$ and $200$ particles is sufficient to qualitatively represent the loading surfaces, the compaction, or the plastic flow of a compressed assembly of deformable spherical particles \cite{HARTHONG2012784, Shmidt_2010, ABDELMOULA2017142, ZHOU2020153226, PENG2021478}. Second, we use a statistical analysis by considering different samples and focus on their averaged behavior.
Thus, in this study, we consider $8$ systems, $4$ composed of $N=50$ particles and $4$ composed of $N = 100$ particles. For each system, particles are spheres made of an elastic material with Poisson's ratio equals to $0.495$.
The particles are first randomly dropped in a cubic box with a small particle size dispersity around their mean diameter $\left<d\right>$ in order to avoid crystallization ($ d \in \left[0.8 \left<d \right>,1.2 \left<d \right> \right]$). All packings are then isotropically compressed under a stress $\sigma_0$, such that $\sigma_0/E\ll1$ (i.e., the particles can be considered as rigid in comparison to the applied stress).
This initial compression ends when the change of the packing fraction $\phi$ is below $0.01\%$. After this point, all systems can be considered at the jammed state, characterized by the initial packing fraction $\phi_0$.
Then, the packings are isotropically compressed by imposing a constant velocity $v$ on the box's boundaries. The velocity $v$ is carefully chosen to ensure that the systems are always in the quasi-static regime, characterized by an inertial number $I\ll1$.
In our simulations, we use a constant friction coefficient between particles $\mu = 0.3$, and we keep the friction coefficient with the walls and gravity equal to zero.
Figure \ref{Snapshot} presents screenshots of an assembly composed of $100$ particles at the jammed state ($\phi\sim0.49$) and near the densest state ($\phi\sim0.96$).
In the following, the mean behavior for systems composed of $N_p=50$ and $N_p=100$ particles is obtained by averaging over the $4$ corresponding independent sets.
The average jammed packing fraction $\phi_0$ obtained is $0.5$ for $50$ particle assemblies and $0.51$ for $100$ particle assemblies.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{figures/Snap1.pdf}
\includegraphics[width=1\linewidth]{figures/Snap2.pdf}
\caption{View of a granular assembly composed of $100$ soft spherical particles at (a) the initial configuration and (b) close to $\phi=0.96$.
The color intensity, from blue to red, is related to the mean stresses in the particles.}
\label{Snapshot}
\end{figure}
\section{Results}
\label{Sec_results}
\subsection{Packing compaction and particle connectivity}
\label{Stress_evolution}
In this section we analyze the compaction of the assemblies, characterized by the evolution of the packing fraction $\phi$ as a function of the mean confining stress $P$ and the mean particle connectivity $Z$.
The mean confining stress, in the granular system, is extracted from the granular stress tensor $\bm \sigma$, which is computed at each step of the compression as \cite{moreauStressTensorGranular2009}:
\begin{equation}
{\sigma_{\alpha \beta} } = \frac{1}{V} \sum_{c \in V} {f_{\alpha}^c \ell_{\beta}^c},
\label{eq:sigma}
\end{equation}
where $\alpha$ and $\beta$ correspond to $x$, $y$ or $z$, $f_{\alpha}^c$ is the $\alpha^{\rm{th}}$ component of the contact force at the contact $c$, and $\ell_{\beta}^c$ is the $\beta^{\rm{th}}$ component of the vector that joins the two centers of the particles interacting at the contact $c$.
Note that the total contact force between two deformable particles is computed as the vectorial sum of the forces at the contact nodes along the shared interface.
The mean confining stress is then given by $P = (\sigma_{1} + \sigma_{2} + \sigma_{3})/3$, where $\sigma_{1}$, $\sigma_{2}$ and $\sigma_{3}$ are the principal stress values
of $\bm \sigma$. The packing fraction $\phi$ is also related to the macroscopic deformation $\epsilon$, by $\epsilon = - \ln(\phi_0/\phi)$.
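For completeness, the granular stress tensor of Eq. (\ref{eq:sigma}), the mean confining stress $P$, and the volumetric strain $\epsilon$ are straightforward to evaluate from a list of contact forces and branch vectors; the sketch below uses random placeholder data and is only an illustration, not the LMGC90 post-processing.
\begin{verbatim}
import numpy as np

def granular_stress(forces, branches, volume):
    """Granular stress tensor: sigma_ab = (1/V) * sum_c f_a^c * l_b^c."""
    return np.einsum('ca,cb->ab', forces, branches) / volume

def mean_confining_stress(sigma):
    """P = (sigma_1 + sigma_2 + sigma_3)/3, equal to tr(sigma)/3."""
    return np.trace(sigma) / 3.0

# Placeholder contact data: N_c contact forces f^c and branch vectors l^c
rng      = np.random.default_rng(0)
forces   = rng.normal(size=(50, 3))
branches = rng.normal(size=(50, 3))
sigma    = granular_stress(forces, branches, volume=10.0)
print(mean_confining_stress(sigma))

# Macroscopic volumetric strain from the packing fraction: eps = -ln(phi_0/phi)
phi0, phi = 0.50, 0.70
print(-np.log(phi0 / phi))
\end{verbatim}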
Figure \ref{PPhi_et_ZPhi}(a) shows the evolution of $\phi$ as a function of the mean confining stress $P$, normalized by the reduced Young Modulus $E^*=E/2(1-\nu^2)$, for the assemblies composed of $50$ and $100$ particles.
From the jammed state, the packing fraction asymptotically increases towards the value $\phi_{max}$, at high pressure.
Note that the compaction curves for the $50$ and $100$ particle systems collapse on the same curve, which is in agreement with the previous works mentioning the minimum number of grains necessary to capture the average compaction behavior. On these compaction curves, we also show the approximation proposed by Heckel-Secondi, with the following form \cite{Heckel1961,Secondi2002_Modelling}:
\begin{equation}
\frac{P}{E^*} = -A \ln\left( \frac{\phi_{max}-\phi}{\phi_{max}-\phi_0} \right),
\label{Eq_Heckel}
\end{equation}
with $A$ a fitting constant equal to $0.15$ in our case, and $\phi_{max}=0.965$.
Equation (\ref{Eq_Heckel}), although very simple in its form, is able to capture the general tendency of the compaction but slightly mismatches its evolution for intermediate pressures.
Also, the parameter $A$ does not have a well-established physical meaning, and different values may be required to fit the data depending on the friction coefficient \cite{Cardenas2020_Compaction} or the bulk behavior of the particles \cite{Carroll1984, Platzer2018}.
Some improvements to the Heckel-Secondi equation have been proposed by Ge et al. \cite{Ge1995_A}, Zhang et al. \cite{Zhang2014}, and Wunsch et al. \cite{Wunsch2019_A} by considering a double log approach (i.e., $\ln P \propto \log \ln \phi$). However, unlike the Heckel-Secondi equation, which can be justified \cite{Carroll1984}, these new approaches rely only on data fitting.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figures/Phi_P.pdf}(a)
\includegraphics[width=0.8\linewidth]{figures/Z_Phi.pdf}(b)
\caption{(a) Packing fraction $\phi$ as a function of the mean confining stress $P$ normalized by the reduced Young Modulus $E^*$.
The dotted line is the approximation given by Heckel Eq. (\ref{Eq_Heckel}), the dashed line is the small strain approximation given by Eq. (\ref{eq:Pgsd}) (SD), and the continuous black line is the prediction given by our micromechanical approach Eq. (\ref{eq:Pglobal_2}).
The inset shows the macroscopic volumetric strain $\varepsilon$ as a function of the mean contact strain $\langle \epsilon_\ell \rangle$ in the small deformation domain.
(b) Reduced coordination number $Z - Z_0$ as a function of the reduced solid fraction $\phi-\phi_0$ (log-log scale is shown in the inset).
The continuous black line is the power-law relation given by Eq. (\ref{eq:ZvsPpi}) with exponent $0.5$.
Error bars represent the standard deviation on the averaged behavior performed over $4$ independent samples.}
\label{PPhi_et_ZPhi}
\end{figure}
In Fig.\ref{PPhi_et_ZPhi}(b), we plot the evolution of the mean particle connectivity $Z$ as a function of $\phi$.
At the jammed state, the packing structure is characterized by a minimal value $Z_0$, which depends on the coefficient of friction, the packing preparation, and the shape of the particles \cite{Hecke2009_Jamming, Donev2005_Pair, smithAthermalJammingSoft2010}. For spherical assemblies, $Z_0$ is equal to $6$ when the friction vanishes, and it varies between $4$ and $6$ for higher friction coefficients.
In our two frictional systems, we find $Z_0\simeq 3.5$.
Beyond the jammed state, $Z$ continues to increase and, as shown in the previous 2D numerical \cite{Vu2020_compaction, Cardenas2020_Compaction, Cardenas_pentagons_2021} and experimental studies \cite{Majmudar2007_Jamming, Katgert2010_Jamming, Vu2019}, this increase follows a power law with exponent $1/2$:
\begin{equation}\label{eq:ZvsPpi}
(Z - Z_0) = \psi \sqrt{\phi - \phi_0},
\end{equation}
with $\psi \approx 8.5$, a constant fully defined through the characteristics of the jammed state and the final dense state as $\psi = (Z_{max}-Z_0)/\sqrt{\phi_{max}-\phi_0}$, with
$Z_{max}$ the maximum coordination number, reached as $\phi \rightarrow\phi_{max}$.
Thus, this power-law relation already observed in 2D can now be extended to the case of 3D soft particle assemblies.
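Since $\psi$ is fixed by the jammed and densest states, Eq. (\ref{eq:ZvsPpi}) involves no free parameter once these states are known; the short sketch below simply evaluates it with the values found here ($Z_0\simeq 3.5$, $\psi\approx 8.5$, $\phi_0\simeq 0.5$).
\begin{verbatim}
import numpy as np

def coordination(phi, phi0=0.50, Z0=3.5, psi=8.5):
    """Mean coordination number beyond jamming: Z = Z0 + psi*sqrt(phi - phi0)."""
    return Z0 + psi * np.sqrt(np.maximum(phi - phi0, 0.0))

print(coordination(np.array([0.50, 0.60, 0.80, 0.96])))
\end{verbatim}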
\subsection{A compaction equation}
\label{Theory}
As discussed in the introduction, there are many compaction equations trying to relate the confining stress to the evolution of the packing fraction.
Some of them, such as the Heckel-Secondi equation, although based on fitting parameters, can be justified by macroscopic arguments \cite{Carroll1984, Kim1987}.
However, the vast majority are settled on adjustment strategies sometimes involving several fitting parameters.
In this section, we briefly recall the general framework introduced in a previous study \cite{Cardenas_pentagons_2021} allowing us to relate the packing fraction to the applied stress through the micromechanical specificity of a given system. We then apply this general framework to the case of three-dimensional soft spherical particles.
The stress tensor Eq. (\ref{eq:sigma}) can be rewritten, as a sum over all contacts, as:
\begin{equation}
\sigma_{\alpha \beta} = n_c \langle f^c_{\alpha}\ell^c_{\beta} \rangle_c,
\label{eq:sigma_contact}
\end{equation}
where $\langle...\rangle_c$ is the average over all contacts. The density of contacts $n_c$ is given by $n_c=N_c/V$, with $N_c$ the total number of contacts in the volume $V$.
Considering a small particle size distribution around the diameter $\langle d \rangle$, $\sum_{p\in V} V_p\simeq N_pV_p$, with $V_p = (\pi/6)d^3$, the contact density can be rewritten as $n_c \simeq 3Z\phi/(\pi d^3)$, with $Z=2N_c/N_p$.
From the definition of $P$ via the principal stresses of $\sigma$, we get \cite{Rothenburg1989_Analytical, Agnolin2007c, Agnolin2008_On, Khalili2017b}:
\begin{equation}\label{eq:Pglobal_local_contact}
P \simeq \frac{ \phi Z} {\pi} \sigma_{\ell},
\end{equation}
with $\sigma_{\ell} = \langle f^c \cdot \ell^c \rangle_c/\langle d \rangle^3$, a measure of the mean contact stress.
This way of writing $P$ as a function of $Z$, $\phi$, and $\sigma_\ell$ is, in fact, very common and has been successfully applied in different contexts.
For example, it has been used to relate the bulk properties of an assembly to the elastic contact properties \cite{Brodu2015b,Khalili2017a}, or to link the macroscopic cohesive strength to the cohesive behavior between the interface of particles in contact \cite{Richefeu2006, Azema2018}.
The Equation (\ref{eq:Pglobal_local_contact}) reveals the active role of the evolution of the microstructure in the evolution of $P$ with $\phi$.
First, we focus on the small deformation domain. We can rely on Hertz's prediction where, in 3D, the force $f^c$ at a contact $c$ between two touching particles is related to the contact deflection $\delta^c$ by $f^c = (2/3) E^* d^{1/2} {\delta^c}^{3/2}$.
Then, since $\ell_c\sim d$ in the range of small deformations, we get $\sigma_\ell \sim (2/3)E^* \langle \varepsilon_{\ell} \rangle^{3/2}$, where $\varepsilon_{\ell} = \delta^c/d$ is the deformation at a contact $c$, assuming that $\langle \varepsilon_{\ell}^{3/2}\rangle \sim \langle \varepsilon_{\ell} \rangle^{3/2}$ which is well verified in our weakly polydisperse systems. Also, with a good approximation, we get that $Z=Z_0$.
Finally, our simulations show that the mean contact strain $\langle \epsilon_\ell \rangle$ and
the macroscopic volumetric strain are linearly dependent as $\langle \epsilon_\ell \rangle \sim (1/\Gamma) \varepsilon$, with $\Gamma\sim4.4$ (see inset in Fig.\ref{PPhi_et_ZPhi}(a)).
This value is close to the one obtained in 2D with disks and non-circular particles \cite{Cardenas_pentagons_2021}. Note that $\Gamma=3$ in the ideal case of a cubic lattice arrangement of spheres.
Finally, by considering all these ingredients, Eq. (\ref{eq:Pglobal_local_contact}) is rewritten as:
\begin{equation}
\label{eq:Pgsd}
\frac{P_{SD}}{E^*} = \frac{2}{3\pi\Gamma^{3/2}}Z_0\phi \ln^{3/2}\left(\frac{\phi}{\phi_0}\right),
\end{equation}
with $P_{SD}$ the limit of $P(\phi)$ at small deformations.
The prediction given by Eq. (\ref{eq:Pgsd}) is shown in Fig. \ref{PPhi_et_ZPhi}.
As expected, we see a fair approximation of the compaction evolution in the small-strain domain, but it fails to predict the evolution at larger strains.
The critical issue for the large strain domain is to find a proper approximation of $\sigma_\ell(\phi)$.
To find so, we can combine the previous microscopic approach with a macroscopic development by Carroll and Kim \cite{Carroll1984, Kim1987}.
Assuming that the compaction behavior can be equivalent to the collapse of a cavity within the elastic medium, they showed that $P \propto \ln[(\phi_{max}-\phi)/(\phi_{max}-\phi_0)]$.
Using this macroscopic approximation together with the micromechanical expression of $P$ given by Eq. (\ref{eq:Pglobal_local_contact}), and remarking that the quantity $Z\phi$ is finite, it is easy to show that, necessarily, $\sigma_\ell = \alpha(\phi) \ln[(\phi_{max}-\phi)/(\phi_{max}-\phi_0)]$, with $\alpha$ a function that depends, {\it a priori}, on $\phi$.
Then, by ($i$) introducing the above form of $\sigma_\ell$ into Eq. (\ref{eq:Pglobal_local_contact}), ($ii$) ensuring the continuity to small deformation (i.e., $P\rightarrow P_{SD}$ for $\phi\rightarrow \phi_0$), and ($iii$) introducing the $Z-\phi$ relation (Eq. (\ref{eq:ZvsPpi})) into Eq. (\ref{eq:Pglobal_local_contact}), we get:
\begin{equation}\label{eq:Pglobal_2}
\frac{P}{E^*} = -\frac{2}{3\pi\Gamma^{3/2}}\left(\frac{\phi_{max}-\phi_0}{\phi_0^{3/2}}\right) \phi \sqrt{\phi - \phi_0} \left[Z_0 + \psi \sqrt{\phi-\phi_0}\right] \ln\left( \frac{\phi_{max}-\phi}{\phi_{max}-\phi_0} \right).
\end{equation}
The compaction equation given by Eq. (\ref{eq:Pglobal_2}) is plotted with a black continuous line in Fig. \ref{PPhi_et_ZPhi}(a) together with our numerical data.
The prediction is able to capture the asymptotic behavior close to the jammed state and the asymptotic behavior at high pressures.
In Eq. (\ref{eq:Pglobal_2}), and in contrast to previous models, only one parameter, the maximum packing fraction $\phi_{max}$, is unknown. Other constants are entirely determined through the initial jammed state and the mapping between the packing fraction and coordination curve.
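A minimal numerical sketch of the compaction equation (\ref{eq:Pglobal_2}) and of its small-strain limit (\ref{eq:Pgsd}) is given below, using the values reported in this work ($Z_0\simeq3.5$, $\psi\approx8.5$, $\Gamma\approx4.4$, $\phi_0\simeq0.5$, $\phi_{max}=0.965$); it only reproduces the shape of the predicted curve and is not the fitting script used for Fig. \ref{PPhi_et_ZPhi}(a).
\begin{verbatim}
import numpy as np

# Parameters reported in this work (pressures are scaled by E*)
PHI0, PHIMAX, Z0, PSI, GAMMA = 0.50, 0.965, 3.5, 8.5, 4.4

def p_small_strain(phi):
    """Small-deformation limit: P_SD/E* = (2 Z0 phi / (3 pi Gamma^{3/2})) ln^{3/2}(phi/phi0)."""
    return 2.0 * Z0 * phi / (3.0 * np.pi * GAMMA ** 1.5) * np.log(phi / PHI0) ** 1.5

def p_compaction(phi):
    """Micromechanical compaction equation P(phi)/E*, valid from phi0 up to phi_max."""
    pref = 2.0 / (3.0 * np.pi * GAMMA ** 1.5) * (PHIMAX - PHI0) / PHI0 ** 1.5
    Z    = Z0 + PSI * np.sqrt(phi - PHI0)
    return -pref * phi * np.sqrt(phi - PHI0) * Z * np.log((PHIMAX - phi) / (PHIMAX - PHI0))

phi = np.linspace(PHI0 + 1e-4, PHIMAX - 1e-4, 5)
print(p_small_strain(phi))
print(p_compaction(phi))
\end{verbatim}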
Finally, it is worth mentioning the differences between Eq. (\ref{eq:Pglobal_2}) and its two-dimensional equivalent \cite{Cantor2020_Compaction,Cardenas_pentagons_2021}.
In two dimensions, the numerical simulations show that $\sigma_\ell$ depends linearly on the mean contact strain, consistently with the approximation classically done in 2D MD-like simulations \cite{cundall1979}. This linear dependence in 2D then simplifies the development by replacing the term $2\sqrt{\phi - \phi_0}/\left(3(\Gamma\phi_0)^{3/2}\right)$ in Eq. (\ref{eq:Pglobal_2}) by $1/(\Gamma\phi_0)$.
\subsection{Particle shape and particle stress distribution}
\label{Local_stress}
During the compression, the shape of the particles evolves from an initial spherical shape to a polyhedral shape, which also modifies the stress distribution.
At the lowest order, the shape of the particles can be characterized by means of the sphericity parameter $\hat{\rho}$, defined by:
\begin{equation} \label{eq:circu_mix}
\hat{\rho} = \left\langle \pi^{1/3} \frac{(6V_i)^{2/3}}{a_i} \right\rangle_i,
\end{equation}
with $V_i$ the volume and $a_i$ the surface area of particle $i$, and $\langle ...\rangle_i$ the average over the particles in the volume $V$. By definition, the sphericity of a sphere is one, with values below one for any other geometry.
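Computing $\hat{\rho}$ for a meshed particle only requires its volume and surface area; the sketch below does this for a generic closed, consistently oriented triangulated surface and is an illustration rather than the post-processing actually used here.
\begin{verbatim}
import numpy as np

def sphericity(vertices, faces):
    """Sphericity pi^{1/3} (6V)^{2/3} / a of a closed triangulated surface.

    vertices: (N, 3) array of node coordinates; faces: (M, 3) integer array of
    triangle indices, assumed consistently oriented (outward normals).
    """
    v0, v1, v2 = (vertices[faces[:, k]] for k in range(3))
    cross = np.cross(v1 - v0, v2 - v0)
    # Surface area: half the norm of the cross product of the edge vectors
    area = 0.5 * np.linalg.norm(cross, axis=1).sum()
    # Volume: sum of signed tetrahedron volumes with respect to the origin
    volume = np.abs(np.einsum('ij,ij->i', v0, cross).sum()) / 6.0
    return np.pi ** (1.0 / 3.0) * (6.0 * volume) ** (2.0 / 3.0) / area

# Example: unit right tetrahedron (a perfect sphere would give exactly 1)
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
faces = np.array([[0, 2, 1], [0, 1, 3], [0, 3, 2], [1, 2, 3]])
print(sphericity(verts, faces))
\end{verbatim}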
In Fig. \ref{fig:shape_3D}, we plot the evolution of $(\hat{\rho}-\hat{\rho}_0)$, with $\hat{\rho}_0\sim 1$ the initial sphericity of the particles, as a function of the excess packing fraction, $\phi-\phi_0$.
We find that the shape parameter increases as a power law with exponent $\beta$:
\begin{equation}
\label{Eq_shape_phi_3D}
\hat{\rho}-\hat{\rho}_0 = A (\phi-\phi_0)^{\beta},
\end{equation}
with $\beta \approx 2.5$ and $A \approx 0.6$.
It is interesting to note that a similar tendency has been recently observed in 2D for soft-disk assemblies \cite{} with a similar exponent, which evidences a seemingly universal geometrical characteristic of the compaction of rounded soft particles, as for the relation between $Z$ and $\phi$.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figures/shape.pdf}
\caption{Evolution of the excess sphericity, $\hat{\rho}-\hat{\rho}_0$ as a function of the excess packing fraction $\phi-\phi_0$ for the isotropic compaction of soft spheres. The dashed line is the power-law relation given by Eq. (\ref{Eq_shape_phi_3D}).
}
\label{fig:shape_3D}
\end{figure}
The change in grain shape is necessarily coupled with a redistribution of stresses within the grains. Thus, let us consider the Cauchy stress tensor $\boldsymbol \sigma^{{C}}$ calculated inside the grains.
Note that $\boldsymbol \sigma^{{C}}$ should not be confused with the granular stress tensor ${\boldsymbol \sigma}$ defined above and calculated from the contact forces.
Figure \ref{snap_pe} shows cross-section images of an assembly of $100$ particles, where the color scale represents the von Mises stress computed at each node.
After the jammed state, strong heterogeneities in the stress distribution inside the particles can be seen (see Fig.\ref{snap_pe}(a)).
The grains are mainly deformed at the contact points, which generally support the maximum stress.
Far beyond the jammed state (Fig.\ref{snap_pe}(b,c)), the shape of the grains strongly changes, the pore sizes decrease, and the spatial stress distributions tend to homogenize.
\begin{figure*}
\centering
\includegraphics[width=0.28\linewidth]{figures/40_3D_S3.pdf}(a)
\includegraphics[width=0.28\linewidth]{figures/89_3D_S3.pdf}(b)
\includegraphics[width=0.28\linewidth]{figures/112_3D_S3.pdf}(c)
\includegraphics[width=0.25\linewidth]{figures/legend.pdf}
\caption{Three-dimensional cross-sections of the von Mises stress field $\sigma_{vm}$ within the particles for different packing fractions $\phi=0.66$ (a), $\phi=0.86$ (b) and $\phi=0.96$ (c) in an assembly of 100 particles.
The color intensity is proportional to the von Mises stress scaled by the Young modulus, $\sigma_{vm}/E$.}
\label{snap_pe}
\end{figure*}
Fig. \ref{pdf_pe} shows the evolution of the probability density functions (PDF) of the equivalent von Mises stress $\sigma_{vm}$.
Close to the jammed state, we observe exponential decays reminiscent of the distribution of contact forces classically observed in rigid particle assemblies \cite{Nguyen2014_Effect,Mueth1998_Force,Daniels2017_Photoelastic,Abed2019_Enlightening}. This underlines the fact that, although the assembly is isotropically compressed at the macroscopic scale, the particles may undergo large shear stress.
As the packing fraction increases, the PDFs get narrower and gradually transform into Gaussian-like distributions centered around a given mean value.
From these observations, and consistently to the previous observations made in two dimensions, a schematic picture emerges to describe the compaction from a local perspective.
During the compaction, the assembly shifts from a rigid granular material to a continuous-like material.
In the granular-material state, the voids are filled by affine displacement of the particles and small deformations that do not change the spherical shape of the particles significantly.
Then, stresses and contact forces homogenize within the packing due to the increasing average contact surface and mean coordination number. This progressive shift towards Gaussian-like distributions evidences that the system is turning into a more continuous-like material as the packing fraction approaches its maximum value.
This is verified by the decreasing standard deviation of such distributions (inset in Fig. \ref{pdf_pe}(b)).
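For reference, the von Mises stress used above follows from the deviatoric part of the local Cauchy stress tensor; a minimal sketch of its computation at a single point is given below, with an arbitrary uniaxial example.
\begin{verbatim}
import numpy as np

def von_mises(sigma):
    """Von Mises equivalent stress of a 3x3 Cauchy stress tensor:
       sqrt(3/2 * dev(sigma) : dev(sigma))."""
    dev = sigma - np.trace(sigma) / 3.0 * np.eye(3)
    return np.sqrt(1.5 * np.tensordot(dev, dev))

# Example: a uniaxial stress of magnitude s gives a von Mises stress of exactly s
s = 2.0
sigma = np.diag([s, 0.0, 0.0])
print(von_mises(sigma))   # -> 2.0
\end{verbatim}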
\begin{figure}
\centering
\includegraphics[width=0.98\linewidth]{figures/VonMisesYield.pdf}
\caption{Probability density function (PDF) of
the local von Mises stress $\sigma_{vm}$ computed on each node and normalized by the corresponding average value in one of the systems composed of 100 particles.
The inset shows the standard deviation of the distribution of $\sigma_{vm}$ as a function of the packing fraction $\phi$ for $50$ and $100$ particles assemblies
(averaged over the four independent systems). }
\label{pdf_pe}
\end{figure}
\section{Conclusions}
\label{Conclu_section}
This paper investigates the compaction behavior of three-dimensional soft spherical particle assemblies through the Non-Smooth Contact Dynamic Method.
From the jammed state to a packing fraction close to $1$, various packings composed of $50$ and $100$ meshed spherical particles were isotropically compressed by applying a constant inward velocity on the boundaries. The mean compaction behavior was analyzed by averaging over the independent initial states.
One of the main results of this work is the writing of a new equation for the compaction of 3D soft spherical particle assemblies based on micromechanical considerations and entirely determined from the structural properties of the packing.
More precisely, this equation is derived from the micromechanical expression of the granular stress tensor together with the approximation of the Hertz contact law between two spherical particles at small strain and assuming a logarithmic shape of the compaction curve at large strain.
Moreover, our numerical data shows that the power-law relation between the coordination number and the packing fraction, after the jamming, is still valid in three-dimensional compaction of elastic spheres, which allows us, {\it in fine}, to write a compaction equation nicely fitting our numerical data. Further, we show that the stress distribution within the particles becomes more homogenous as the packing fraction increases.
Close to the jammed state, the probability density functions of the von Mises stress decrease exponentially as the maximum stress increases.
The distributions progressively shift into a Gaussian-like shape at high packing fractions, which means that the system turns into a more continuous-like material.
The general methodology used for the build-up of this 3D compaction equation was previously implemented in two-dimensional geometries.
Although the compaction curves in two and three dimensions appear to be similar in their overall shape (i.e., in both cases, the packing fraction increases and tends asymptotically to a maximum value as the confining stress increases), it is interesting to note that the equation underlying the variation of $P$ with $\phi$ established with the same micromechanical framework depends on the dimensionality.
The origin of this dependence on the space dimension lies in the functional form of the contact law in the small deformation regime.
Thus, for pursuing a more general compaction equation, it is possible to apply the same micromechanical framework described in this article to assemblies whose particles have more complex behaviors, such as plastic, elastoplastic, or visco-elastoplastic, and also to polydisperse systems.
It will be enough to identify the force law between two particles and integrate it into the framework presented here for all these cases.
Finally, we would also like to point out that, from our best knowledge, this is the first time that the Non-Smooth Contact Dynamics Method is applied to the case of compaction of deformable grains assembly in three dimensions.
From a purely numerical perspective, many efforts still need to be made on numerical optimization and parallelization of algorithms to increase performance and system size.
In particular, it would be interesting to consider periodic conditions in 3D, at least in two directions.
We warmly thank Frederic Dubois for the valuable technical advice on the simulations in LMGC90 and the fruitful discussions regarding the numerical strategies for modeling highly deformable particles in the frame of the Non-Smooth Contact Dynamic method, specifically in three dimensions. We also acknowledge the support of the High-Performance Computing Platform MESO@LR.
\section*{Conflicts of interest}
There are no conflicts to declare.
\section*{}
\vspace{-1cm}
\footnotetext{\textit{$^{a}$~LMGC, Universit\'e de Montpellier, CNRS, Montpellier, France. \\
[email protected],\\ [email protected],\\[email protected],\\[email protected]}}
\footnotetext{\textit{$^{b}$~Laboratoire de Micromécanique et Intégrité des Structures (MIST), UM, CNRS, IRSN, France.}}
\footnotetext{\textit{$^{c}$~Department of Civil, Geological and Mining Engineering, Polytechnique, 2500, chemin de Polytechnique, Montréal, Qu\'ebec, Canada.
[email protected]}}
\footnotetext{\textit{$^{d}$~Institut Universitaire de France (IUF), Paris, France.}}
\section{Introduction}
The importance of understanding the physics behind the compaction of granular systems made of soft particles lies in the numerous natural phenomena and human activities that deal with such materials.
These materials range from biological cells, foams, and suspensions \cite{Katgert2010_Jamming, Chelin2013_Simulation, Dijksman2017, Katgert2019_The} to powder compaction in the pharmaceutical and food industries \cite{Heckel1961, Montes2010, Parilak2017, Montes2018}.
In some civil engineering constructions, mixing coarse grains with rubber residues yields surprising properties such as better stress relaxation \cite{Indrarantna2019_Use, Khatami2019_The, Anastasiadis2012_Small, Platzer2018} or better foundation damping \cite{Mashiri2015_Shear, Senetakis2012_Dynamic, Kianoosh_2021}.
In particular, far beyond the jamming point, the compaction behavior of soft granular materials is a vast and still open subject, with notable experimental, numerical, and theoretical challenges.
Among these challenges, a three-dimensional characterization, by a realistic model that would consider both the change in grain shape and the assemblies' multi-contact aspects, remains poorly studied.
In experiments, an underlying difficulty is to track the change in particle shape while detecting the making of new contacts.
Photoelasticimetry \cite{Howell1999, KDaniels2017, Zadeh_2019} and inverse-problem methods coupled with Digital Image Correlation (DIC) \cite{HURLEY2014154, Marteau2007} are the most widely used experimental techniques for analyzing the two- and three-dimensional behavior of hard particles.
In particular, the DIC method, which has been very recently extended to analyze two-dimensional soft particle assemblies far beyond the jammed state \cite{Vu2019, Vu2019a}, directly quantifies the deformation field inside the particles and characterizes the deformation mechanisms.
However, for high packing densities, the image resolution may sharply limit the tracking of the grains and the detection of contacts between highly deformed particles, which is crucial for three-dimensional geometries.
Furthermore, in three dimensions, it is not always possible to use an optical approach to measure local properties of the particles, and tomography reconstructions may be necessary, but technically laborious \cite{Dijksman2017,MTiel2017,bares2020_Transparent, Ando2020}.
Concerning numerical modeling, the discrete element method (DEM), coupled with a complementary approach such as the Bonded-Particle Method (BPM) or the Finite-Element Method (FEM), is a suitable framework to simulate and analyze the compaction of soft particle assemblies.
In DEM-BPM, deformable particles are seen as aggregates of rigid particles interacting via elastic bonds \cite{dostaNumericalInvestigationCompaction2017, Nezamabadi_2017, Asadi2018_Discrete}.
This approach is relatively straightforward and allows one to simulate 3D packings composed of a large number of aggregates.
However, a major drawback is that the deformable particles often present a plastic behavior at large strain, and their characterization can be complex depending on the imposed numerical parameters (e.g., size of primary particles or interaction laws) \cite{Azema2018, voMechanicalStrengthWet2018a}.
In contrast, DEM-FEM strategies have the advantage of being closely representative both in terms of geometry and bulk properties of the particles. The price to pay is that these simulations are computationally expensive.
DEM-FEM methods can be classified into two classes: the Multi-Particle Finite Element Method (MPFEM) \cite{Munjiza_2004}, in which regularized contact interactions are used, and the Non-Smooth Contact Dynamics Method (NSCD) \cite{Moreau1994_Some, Jean1999}, which uses non-regularized contact laws.
To our best knowledge, the first MPFEM simulations applied to the compaction of deformable disks were performed in the 2000s \cite{Gethin_2002, Procopio2005}, and the first 3D compaction simulations appeared a few years later \cite{Shmidt_2010, HARTHONG2012784, ABDELMOULA2017142, ZOU2020297, PENG2021478}.
The first applications of the NSCD to the compaction of soft grains assembly are reported in recent works by Vu et al. \cite{Vu2019, Vu2020_compaction, Vu2021_Effects} for two-dimensional hyper-elastic disks.
In practice, an inherent difficulty to DEM-FEM methods that explains the small number of studies, particularly in 3D, is the high computational cost, limiting the number of particles that can be simulated \cite{HARTHONG2012784, ABDELMOULA2017142, PENG2021478}.
Finally, a lack in the description of the microstructural phenomena during the compression limits the development of theoretical models for the compaction of soft grain assemblies (i.e., a relation between the applied pressure $P$ and the evolution of the packing fraction $\phi$).
As reviewed in the literature \cite{Heckel1961, Panelli2001, Comoglu2007, Denny2002, Popescu2018_Compaction, Montes2018, Platzer2018, nezamabadiModellingCompactionPlastic2021}, many compaction equations have been proposed during the last decades.
In general, existing models are based on macroscopic assumptions, and, thus, fitting parameters are required to adjust each expression to the data.
Among these equations, the most used is the one proposed by Heckel \cite{Heckel1961} and later improved by Secondi \cite{Secondi2002_Modelling}. It states that $P \propto \ln(\phi_{max}-\phi)$, where $\phi_{max}$ is the maximum packing fraction that the assembly can reach.
Carroll and Kim justified this equation by an analogy between the corresponding loss of void space and the collapse of a cavity within an elastic medium under isotropic compression \cite{Carroll1984, Kim1987}.
Depending on the authors, different interpretations have been given to the fitting parameters, which are supposed to represent either a characteristic pressure, a hardening parameter, or a quantity linked to the assembly's plasticity \cite{Platzer2018, Montes2018}.
Only recently, Cantor et al. \cite{Cantor2020_Compaction} and Cardenas-Barrantes et al. \cite{Cardenas_pentagons_2021} set up a systematic micro-mechanical
approach to study the compaction of soft granular assemblies.
By applying this framework to two-dimensional systems modeled with NSCD simulations, new compaction laws entirely determined through the evolution of the connectivity of the particles and the contact properties were presented.
This article presents a three-dimensional numerical and theoretical analysis of the compaction of assemblies composed of highly deformable (elastic) spherical particles using the Non-Smooth Contact Dynamics Method.
We are interested in the compaction evolution as a function of the applied stress from the jammed state to a packing fraction close to unity.
As mentioned before, similar studies were recently performed in 2D \cite{Cantor2020_Compaction, Cardenas_pentagons_2021}.
The transition from 2D to 3D requires additional numerical and technical efforts to manage the particles' deformation correctly.
Furthermore, as we will see, the resulting compaction equation differs from those previously established in 2D using the same micromechanical framework.
This will be understood from the approximation of Hertz's contact law in the small deformation regime, where the contact force between two deformable particles shifts from a linear dependence on the contact deflection in 2D to a power-law dependence in 3D.
The paper is organized as follows. In Section \ref{Numerical_method}, we introduce the numerical framework used in the simulations.
The numerical results and the theoretical model of the 3D compaction curves are discussed in Sec.\ref{Sec_results}.
Section \ref{Stress_evolution} deals with the evolution of the packing fraction and particle connectivity beyond the jamming point as a function of the applied stress.
In Sec. \ref{Theory}, we present the micro-structural elements behind the evolution of the packing fraction and the corresponding resulting 3D equation.
In Section \ref{Local_stress}, a more refined description of the particle stresses is presented within the limit of the representativeness of the considered samples.
Finally, some conclusions are discussed in Section \ref{Conclu_section}.
\section{Numerical Approach}
\label{Numerical_method}
\subsection{The Non-Smooth Contact Dynamic Method (NSCD)}
The simulations are performed using the Non-Smooth Contact Dynamics (NSCD) method, developed by Moreau and Jean \cite{Moreau1994_Some, Jean1999, Dubois2018}.
The NSCD extends the Contact Dynamics (CD) method \cite{Moreau1994_Some} to deformable bodies through a finite element approach (FEM).
The CD method is based on an implicit time integration of the equations of motion and non-regularized contact laws.
These contact laws set the non-penetrability and friction behavior between the particles.
No elastic repulsive potentials and no smoothing of the Coulomb friction law are needed to determine the contact forces.
Therefore, the unknown variables, i.e., particle velocities and contact forces, are simultaneously solved via a nonlinear Gauss-Seidel scheme.
Considering deformable bodies (in the sense of continuous mechanics) is natural with CD, although technically very complex to implement.
In this case, the bodies are discretized via finite elements, so the degrees of freedom - the coordinates of the nodes - and contact interactions are resolved simultaneously.
We used an implementation of the three-dimensional Non-Smooth Contact Dynamics Method available on the open-source software LMGC90, capable of modeling a collection of deformable or non-deformable particles of various shapes, behaviors and interactions \cite{Dubois2006}.
\subsection{Packing composed of 3D elastic particles}
When dealing with three-dimensional and highly deformable particles, a critical issue is to find the best compromise between sample representativeness and numerical efficiency.
In this study, we are interested in the isotropic compression of elastic spherical particles.
Therefore, one necessary condition is to verify that the mesh used is sufficiently accurate with respect to the Hertz approximation in the range of small deformations \cite{johnsonContactMechanics1985}.
Let us first consider the case of an elastic spherical particle of diameter $d$, with a Poisson's ratio $\nu$ equal to $0.495$ and a Young modulus $E$. The sphere is compressed axially as shown in Fig. \ref{Mes_part}(a).
The bottom wall is fixed while the top wall moves downwards at a constant velocity $v_0$ chosen such that the inertial effects are negligible (i.e., $I\ll1$), where $I=v_0 \sqrt{\rho_0/E}$ is the inertial number \cite{gdrmidiDenseGranularFlows2004}, with $\rho_0$ the density of the particles.
Figure \ref{Mes_part}(b) shows the evolution of the normal force $f$ as a function of the vertical displacement $\delta$ using $444$, $808$, $6685$ and $14688$ tetrahedral elements with four nodes, together with the corresponding prediction of the Hertz law given by \cite{johnsonContactMechanics1985}:
\begin{equation}
\label{Hert_wall_sphere}
\frac{f}{d^2} = \frac{2^{3/2}}{3} \frac{E}{1-\nu^2} {\left(\frac{\delta}{d}\right)}^{3/2}.
\end{equation}
Compared to this equation, we obtain a good prediction with $14688$ and $6685$ elements, while the accuracy degrades with $444$ elements.
In contrast, with $808$ elements, the force-displacement relation is slightly overestimated at the beginning of the deformation, but the response quickly reaches the prediction at higher deformation.
Following this simple analysis, we fixed the number of elements to $808$ for all the simulations presented below.
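As an illustration, the following minimal Python sketch evaluates the Hertz prediction of Eq. (\ref{Hert_wall_sphere}) used as a reference for this mesh-convergence test, together with the quasi-static criterion $I\ll1$; the numerical values are purely illustrative and are not taken from the simulations.
\begin{verbatim}
import numpy as np

# Hertz prediction for a sphere of diameter d pressed against a rigid wall,
# Eq. (Hert_wall_sphere): f/d^2 = (2^{3/2}/3) * E/(1-nu^2) * (delta/d)^{3/2}
def hertz_force(delta, d, E, nu=0.495):
    return d**2 * (2.0**1.5 / 3.0) * E / (1.0 - nu**2) * (delta / d)**1.5

# Illustrative values (not taken from the paper)
d, E, rho0 = 1.0e-2, 1.0e6, 1.0e3           # m, Pa, kg/m^3
delta = np.linspace(0.0, 0.05 * d, 50)      # small vertical displacements
f = hertz_force(delta, d, E)                # reference curve for the test

# Quasi-static check: the wall velocity v0 must keep I = v0*sqrt(rho0/E) << 1
v0 = 1.0e-3
print("inertial number I =", v0 * np.sqrt(rho0 / E))
\end{verbatim}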
\begin{figure}
\centering
\hspace{26pt} \includegraphics[width=0.7\linewidth]{figures/1Particle.pdf} \hspace{20pt} (a)
\includegraphics[width=0.91\linewidth]{figures/Hertz.pdf}(b)
\caption{(a) 3D cross-section of an elastic spherical particle vertically compressed between two walls. The color intensity is proportional to the mean displacement field.
(b) Normal contact force applied on a single spherical particle as a function of the deformation for different meshes. The continuous black line is the approximation given by the Hertz's contact law Eq. (\ref{Hert_wall_sphere}).}
\label{Mes_part}
\end{figure}
Concerning the number of particles in an assembly, we adopt a double approach.
First, we rely on previously published works in which it is shown that a number between $32$ and $200$ particles is sufficient to qualitatively represent the loading surfaces, the compaction, or the plastic flow of a compressed assembly of deformable spherical particles \cite{HARTHONG2012784, Shmidt_2010, ABDELMOULA2017142, ZHOU2020153226, PENG2021478}. Second, we use a statistical analysis by considering different samples and focus on their averaged behavior.
Thus, in this study, we consider $8$ systems, $4$ composed of $N=50$ particles and $4$ composed of $N = 100$ particles. For each system, the particles are spheres made of an elastic material with a Poisson's ratio equal to $0.495$.
The particles are first randomly dropped in a cubic box with a small particle size dispersity around their mean diameter $\left<d\right>$ in order to avoid crystallization ($ d \in \left[0.8 \left<d \right>,1.2 \left<d \right> \right]$). All packings are then isotropically compressed under a stress $\sigma_0$, such that $\sigma_0/E\ll1$ (i.e., the particles can be considered rigid under the applied stress).
This initial compression ends when the change of the packing fraction $\phi$ is below $0.01\%$. After this point, all systems can be considered at the jammed state, characterized by the initial packing fraction $\phi_0$.
Then, the packings are isotropically compressed by imposing a constant velocity $v$ on the box's boundaries. The velocity $v$ is carefully chosen to ensure that the systems always remain in the quasi-static regime, characterized by an inertial number $I\ll1$.
In our simulations, we use a constant friction coefficient between particles $\mu = 0.3$, and we keep the friction coefficient with the walls and gravity equal to zero.
Figure \ref{Snapshot} presents screenshots of an assembly composed of $100$ particles at the jammed state ($\phi\sim0.49$) and near to the maximal dense state with $\phi\sim0.96$.
In the following, the mean behavior for systems composed of $N_p=50$ and $N_p=100$ particles is obtained by averaging over the $4$ corresponding independent sets.
The average jammed packing fraction $\phi_0$ obtained is $0.5$ for $50$ particle assemblies and $0.51$ for $100$ particle assemblies.
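For reproducibility, the sample-generation protocol described above can be summarized by the short sketch below; the diameters are drawn uniformly in $[0.8\left<d\right>,1.2\left<d\right>]$ as stated in the text, while the material parameters and velocity are placeholder values.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Particle diameters: small dispersity around <d> to avoid crystallization
d_mean = 1.0e-2                                  # illustrative mean diameter (m)
N_p = 100
d = rng.uniform(0.8 * d_mean, 1.2 * d_mean, N_p)

# Quasi-static compression: the imposed wall velocity v must keep I << 1
E, rho0 = 1.0e6, 1.0e3                           # illustrative Young modulus, density
def inertial_number(v, rho0=rho0, E=E):
    return v * np.sqrt(rho0 / E)

v = 1.0e-4
assert inertial_number(v) < 1.0e-3, "compression would not be quasi-static"
\end{verbatim}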
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{figures/Snap1.pdf}
\includegraphics[width=1\linewidth]{figures/Snap2.pdf}
\caption{View of a granular assembly composed of $100$ soft spherical particles at (a) the initial configuration and (b) close to $\phi=0.96$.
The color intensity, from blue to red, is related to the mean stresses in the particles.}
\label{Snapshot}
\end{figure}
\section{Results}
\label{Sec_results}
\subsection{Packing compaction and particle connectivity}
\label{Stress_evolution}
In this section we analyze the compaction of the assemblies, characterized by the evolution of the packing fraction $\phi$ as a function of the mean confining stress $P$ and the mean particle connectivity $Z$.
The mean confining stress, in the granular system, is extracted from the granular stress tensor $\bm \sigma$, which is computed at each step of the compression as \cite{moreauStressTensorGranular2009}:
\begin{equation}
{\sigma_{\alpha \beta} } = \frac{1}{V} \sum_{c \in V} {f_{\alpha}^c \ell_{\beta}^c},
\label{eq:sigma}
\end{equation}
where $\alpha$ and $\beta$ correspond to $x$, $y$ or $z$, $f_{\alpha}^c$ is the $\alpha^{\rm{th}}$ component of the contact force at the contact $c$, and $\ell_{\beta}^c$ is the $\beta^{\rm{th}}$ component of the branch vector joining the centers of the two particles interacting at the contact $c$.
Note that the total contact force between two deformable particles is computed as the vectorial sum of the forces at the contact nodes along the shared interface.
The mean confining stress is then given by $P = (\sigma_{1} + \sigma_{2} + \sigma_{3})/3$, where $\sigma_{1}$, $\sigma_{2}$ and $\sigma_{3}$ are the principal stress values
of $\bm \sigma$. The packing fraction $\phi$ is also related to the macroscopic deformation $\epsilon$, by $\epsilon = - \ln(\phi_0/\phi)$.
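As a minimal illustration of these definitions, the following sketch computes the granular stress tensor of Eq. (\ref{eq:sigma}), the mean confining stress $P$ and the macroscopic strain $\epsilon$ from a set of contact forces and branch vectors; the contact data used here are random placeholders.
\begin{verbatim}
import numpy as np

def granular_stress(forces, branches, V):
    """Eq. (eq:sigma): sigma_ab = (1/V) * sum_c f_a^c * l_b^c."""
    return np.einsum('ca,cb->ab', forces, branches) / V

# Illustrative contact data (N_c contacts): forces f^c and branch vectors l^c
rng = np.random.default_rng(1)
N_c = 500
forces = rng.normal(size=(N_c, 3))
branches = rng.normal(size=(N_c, 3))
V = 1.0

sigma = granular_stress(forces, branches, V)
P = np.trace(sigma) / 3.0   # trace/3 = mean of the principal stresses

# Macroscopic strain from the packing fraction, eps = -ln(phi_0/phi)
phi0, phi = 0.51, 0.80
eps = -np.log(phi0 / phi)
\end{verbatim}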
Figure \ref{PPhi_et_ZPhi}(a) shows the evolution of $\phi$ as a function of the mean confining stress $P$, normalized by the reduced Young modulus $E^*=E/[2(1-\nu^2)]$, for the assemblies composed of $50$ and $100$ particles.
From the jammed state, the packing fraction asymptotically increases towards the value $\phi_{max}$, at high pressure.
Note that the compaction curves for the $50$ and $100$ particle systems collapse on the same curve, in agreement with the previous works mentioning the minimum number of grains necessary to capture the average compaction behavior. On these compaction curves, we also show the approximation proposed by Heckel and Secondi, which has the following form \cite{Heckel1961,Secondi2002_Modelling}:
\begin{equation}
\frac{P}{E^*} = -A \ln\left( \frac{\phi_{max}-\phi}{\phi_{max}-\phi_0} \right),
\label{Eq_Heckel}
\end{equation}
with $A$ a fitting constant equal to $0.15$ in our case, and $\phi_{max}=0.965$.
Equation (\ref{Eq_Heckel}), although very simple in its form, is able to capture the general tendency of the compaction but slightly mismatches its evolution at intermediate pressures.
Also, the parameter $A$ does not have a well-established physical meaning, and different values may be required to fit the data depending on the friction coefficient \cite{Cardenas2020_Compaction} or the bulk behavior of the particles \cite{Carroll1984, Platzer2018}.
Some improvements to the Heckel-Secondi equation have been proposed by Ge et al. \cite{Ge1995_A}, Zhang et al. \cite{Zhang2014}, and Wunsch et al. \cite{Wunsch2019_A} by considering a double log approach (i.e., $\ln P \propto \log \ln \phi$). However, unlike the Heckel-Secondi equation, which can be justified \cite{Carroll1984}, these new approaches rely only on data fitting.
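For reference, the fit of Eq. (\ref{Eq_Heckel}) to a compaction curve can be performed as sketched below, with $A$ and $\phi_{max}$ as the only adjustable quantities; the data arrays are synthetic placeholders standing in for the measured $(\phi, P/E^*)$ curve.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

phi0 = 0.51   # jammed-state packing fraction (from the simulations)

# Eq. (Eq_Heckel): P/E* = -A * ln((phi_max - phi)/(phi_max - phi0))
def heckel(phi, A, phi_max):
    return -A * np.log((phi_max - phi) / (phi_max - phi0))

# Placeholder data standing in for the measured compaction curve (phi, P/E*)
phi_data = np.linspace(0.52, 0.95, 40)
p_data = heckel(phi_data, 0.15, 0.965)

(A_fit, phi_max_fit), _ = curve_fit(heckel, phi_data, p_data, p0=(0.1, 0.97))
print(A_fit, phi_max_fit)   # ~0.15 and ~0.965 for these synthetic data
\end{verbatim}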
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figures/Phi_P.pdf}(a)
\includegraphics[width=0.8\linewidth]{figures/Z_Phi.pdf}(b)
\caption{(a) Packing fraction $\phi$ as a function of the mean confining stress $P$ normalized by the reduced Young Modulus $E^*$.
The dotted line is the approximation given by Heckel Eq. (\ref{Eq_Heckel}), the dashed line is the small strain approximation given by Eq. (\ref{eq:Pgsd}) (SD), and the continuous black line is the prediction given by our micromechanical approach Eq. (\ref{eq:Pglobal_2}).
The inset shows the macroscopic volumetric strain $\varepsilon$ as a function of the mean contact strain $\langle \epsilon_\ell \rangle$ in the small deformation domain.
(b) Reduced coordination number $Z - Z_0$ as a function of the reduced solid fraction $\phi-\phi_0$ (log-log scale is shown in the inset).
The continuous black line is the power-law relation given by Eq. (\ref{eq:ZvsPpi}) with exponent $0.5$.
Error bars represent the standard deviation on the averaged behavior performed over $4$ independent samples.}
\label{PPhi_et_ZPhi}
\end{figure}
In Fig.\ref{PPhi_et_ZPhi}(b), we plot the evolution of the mean particle connectivity $Z$ as a function of $\phi$.
At the jammed state, the packing structure is characterized by a minimal value $Z_0$, which depends on the coefficient of friction, the packing preparation, and the shape of the particles \cite{Hecke2009_Jamming, Donev2005_Pair, smithAthermalJammingSoft2010}. For spherical assemblies, $Z_0$ is equal to $6$ when the friction vanishes, and it varies between $4$ and $6$ for higher friction coefficients.
In our two frictional systems, we find $Z_0\simeq 3.5$.
Beyond the jammed state, $Z$ continues to increase and, as shown in the previous 2D numerical \cite{Vu2020_compaction, Cardenas2020_Compaction, Cardenas_pentagons_2021} and experimental studies \cite{Majmudar2007_Jamming, Katgert2010_Jamming, Vu2019}, this increase follows a power law with exponent $1/2$:
\begin{equation}\label{eq:ZvsPpi}
(Z - Z_0) = \psi \sqrt{\phi - \phi_0},
\end{equation}
with $\psi \approx 8.5$, a constant fully defined through the characteristics of the jammed state and the final dense state as $\psi = (Z_{max}-Z_0)/\sqrt{\phi_{max}-\phi_0}$, with
$Z_{max}$ the maximum coordination number reached as $\phi \rightarrow\phi_{max}$.
Thus, this power-law relation already observed in 2D can now be extended to the case of 3D soft particle assemblies.
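In practice, $\psi$ and the resulting prediction for $Z(\phi)$ follow directly from the jammed and dense states, as in the short sketch below (the numerical values are the approximate ones quoted in the text).
\begin{verbatim}
import numpy as np

# End-member states (approximate values reported in the text)
Z0, phi0 = 3.5, 0.51
phi_max = 0.965
psi = 8.5                 # or (Z_max - Z0)/sqrt(phi_max - phi0)

def coordination(phi):
    """Eq. (eq:ZvsPpi): Z - Z0 = psi * sqrt(phi - phi0)."""
    return Z0 + psi * np.sqrt(np.maximum(phi - phi0, 0.0))

phi = np.linspace(phi0, phi_max, 20)
Z = coordination(phi)
\end{verbatim}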
\subsection{A compaction equation}
\label{Theory}
As discussed in the introduction, there are many compaction equations that attempt to relate the confining stress to the evolution of the packing fraction.
Some of them, such as the Heckel-Secondi equation, although based on fitting parameters, can be justified by macroscopic arguments \cite{Carroll1984, Kim1987}.
However, the vast majority rely on adjustment strategies, sometimes involving several fitting parameters.
In this section, we briefly recall the general framework introduced in a previous study \cite{Cardenas_pentagons_2021} allowing us to relate the packing fraction to the applied stress through the micromechanical specificity of a given system. We then apply this general framework to the case of three-dimensional soft spherical particles.
The stress tensor Eq. (\ref{eq:sigma}) can be rewritten, as a sum over all contacts, as:
\begin{equation}
\sigma_{\alpha \beta} = n_c \langle f^c_{\alpha}\ell^c_{\beta} \rangle_c,
\label{eq:sigma_contact}
\end{equation}
where $\langle...\rangle_c$ is the average over all contacts. The density of contacts $n_c$ is given by $n_c=N_c/V$, with $N_c$ the total number of contacts in the volume $V$.
Considering a narrow particle size distribution around the diameter $\langle d \rangle$, so that $\sum_{p\in V} V_p\simeq N_pV_p$ with $V_p = (\pi/6)d^3$, the contact density can be rewritten as $n_c \simeq 3Z\phi/(\pi d^3)$, with $Z=2N_c/N_p$ the mean coordination number.
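For the reader's convenience, the chain of substitutions behind this estimate is spelled out below; it only uses the definitions $\phi\simeq N_p V_p/V$ and $Z=2N_c/N_p$ introduced above:
\begin{equation*}
n_c = \frac{N_c}{V} = \frac{Z N_p}{2V} \simeq \frac{Z\phi}{2V_p} = \frac{Z\phi}{2\,(\pi/6)\,d^3} = \frac{3Z\phi}{\pi d^3}.
\end{equation*}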
From the definition of $P$ via the principal stresses of $\sigma$, we get \cite{Rothenburg1989_Analytical, Agnolin2007c, Agnolin2008_On, Khalili2017b}:
\begin{equation}\label{eq:Pglobal_local_contact}
P \simeq \frac{ \phi Z} {\pi} \sigma_{\ell},
\end{equation}
with $\sigma_{\ell} = \langle f^c \cdot \ell^c \rangle_c/\langle d \rangle^3$, a measure of the mean contact stress.
This way of writing $P$ as a function of $Z$, $\phi$, and $\sigma_\ell$ is, in fact, very common and has been successfully applied in different contexts.
For example, it has been used to relate the bulk properties of an assembly to the elastic contact properties \cite{Brodu2015b,Khalili2017a}, or to link the macroscopic cohesive strength to the cohesive behavior between the interface of particles in contact \cite{Richefeu2006, Azema2018}.
Equation (\ref{eq:Pglobal_local_contact}) reveals the active role of the evolution of the microstructure in the evolution of $P$ with $\phi$.
First, we focus on the small deformation domain. We can rely on Hertz's prediction where, in 3D, the force $f^c$ at a contact $c$ between two touching particles is related to the contact deflection $\delta^c$ by $f^c = (2/3) E^* d^{1/2} {\delta^c}^{3/2}$.
Then, since $\ell^c\sim d$ in the range of small deformations, we get $\sigma_\ell \sim (2/3)E^* \langle \varepsilon_{\ell} \rangle^{3/2}$, where $\varepsilon_{\ell} = \delta^c/d$ is the deformation at a contact $c$, assuming that $\langle \varepsilon_{\ell}^{3/2}\rangle \sim \langle \varepsilon_{\ell} \rangle^{3/2}$, which is well verified in our weakly polydisperse systems. Also, to a good approximation, $Z=Z_0$ in this regime.
Finally, our simulations show that the mean contact strain $\langle \epsilon_\ell \rangle$ and
the macroscopic volumetric strain are linearly dependent as $\langle \epsilon_\ell \rangle \sim (1/\Gamma) \varepsilon$, with $\Gamma\sim4.4$ (see inset in Fig.\ref{PPhi_et_ZPhi}(a)).
This value is close to the one obtained in 2D with disks and non-circular particles \cite{Cardenas_pentagons_2021}. Note that $\Gamma=3$ in the ideal case of a cubic lattice arrangement of spheres.
Finally, by considering all these ingredients, Eq. (\ref{eq:Pglobal_local_contact}) is rewritten as:
\begin{equation}
\label{eq:Pgsd}
\frac{P_{SD}}{E^*} = -\frac{2}{3\pi\Gamma^{3/2}}Z_0\phi \ln^{(3/2)}\left(\frac{\phi_0}{\phi}\right),
\end{equation}
with $P_{SD}$ the limit of $P(\phi)$ at small deformations.
The prediction given by Eq. (\ref{eq:Pgsd}) is shown in Fig. \ref{PPhi_et_ZPhi}.
As expected, we see a fair approximation of the compaction evolution in the small-strain domain, but it fails to predict the evolution at larger strains.
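For completeness, a direct evaluation of this small-strain prediction is sketched below. Consistently with the derivation above, the term $-\ln^{3/2}(\phi_0/\phi)$ in Eq. (\ref{eq:Pgsd}) is read here as $\left[-\ln(\phi_0/\phi)\right]^{3/2}=\epsilon^{3/2}$, which keeps $P_{SD}$ positive for $\phi>\phi_0$; the parameter values are the ones quoted in the text.
\begin{verbatim}
import numpy as np

Z0, phi0, Gamma = 3.5, 0.51, 4.4   # values reported in the text

def P_sd(phi, Estar=1.0):
    """Small-strain limit P_SD, with eps = -ln(phi0/phi) = ln(phi/phi0)."""
    eps = np.log(phi / phi0)
    return Estar * (2.0 / (3.0 * np.pi * Gamma**1.5)) * Z0 * phi * eps**1.5

phi = np.linspace(phi0, 0.60, 30)
P_small_strain = P_sd(phi)
\end{verbatim}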
The critical issue for the large strain domain is to find a proper approximation of $\sigma_\ell(\phi)$.
To do so, we can combine the previous microscopic approach with the macroscopic development of Carroll and Kim \cite{Carroll1984, Kim1987}.
Assuming that the compaction behavior can be equivalent to the collapse of a cavity within the elastic medium, they showed that $P \propto \ln[(\phi_{max}-\phi)/(\phi_{max}-\phi_0)]$.
Using this macroscopic approximation together with the micromechanical expression of $P$ given by Eq. (\ref{eq:Pglobal_local_contact}), and remarking that the quantity $Z\phi$ is finite, it is easy to show that, necessarily, $\sigma_\ell = \alpha(\phi) \ln[(\phi_{max}-\phi)/(\phi_{max}-\phi_0)]$, with $\alpha$ a function that depends, {\it a priori}, on $\phi$.
Then, by ($i$) introducing the above form of $\sigma_\ell$ into Eq. (\ref{eq:Pglobal_local_contact}), ($ii$) ensuring the continuity to small deformation (i.e., $P\rightarrow P_{SD}$ for $\phi\rightarrow \phi_0$), and ($iii$) introducing the $Z-\phi$ relation (Eq. (\ref{eq:ZvsPpi})) into Eq. (\ref{eq:Pglobal_local_contact}), we get:
\begin{equation}\label{eq:Pglobal_2}
\frac{P}{E^*} = -\frac{2}{3\pi\Gamma^{3/2}}\left(\frac{\phi_{max}-\phi_0}{\phi_0^{3/2}}\right) \phi \sqrt{\phi - \phi_0} \left[Z_0 - \psi \sqrt{\phi-\phi_0}\right] \ln\left( \frac{\phi_{max}-\phi}{\phi_{max}-\phi_0} \right).
\end{equation}
The compaction equation given by Eq. (\ref{eq:Pglobal_2}) is plotted with a black continuous line in Fig. \ref{PPhi_et_ZPhi}(a) together with our numerical data.
The prediction captures both the behavior close to the jammed state and the asymptotic behavior at high pressures.
In Eq. (\ref{eq:Pglobal_2}), and in contrast to previous models, only one parameter, the maximum packing fraction $\phi_{max}$, is unknown. The other constants are entirely determined by the initial jammed state and by the mapping between the packing fraction and the coordination number.
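For convenience, a literal transcription of Eq. (\ref{eq:Pglobal_2}) reads as follows; the parameters are the values reported above, and the expression is implemented exactly as written, without further simplification.
\begin{verbatim}
import numpy as np

Z0, phi0, Gamma, psi, phi_max = 3.5, 0.51, 4.4, 8.5, 0.965

def P_compaction(phi, Estar=1.0):
    """Compaction equation, transcribed as written in the text."""
    pref = 2.0 / (3.0 * np.pi * Gamma**1.5) * (phi_max - phi0) / phi0**1.5
    dphi = np.sqrt(phi - phi0)
    return -Estar * pref * phi * dphi * (Z0 - psi * dphi) \
           * np.log((phi_max - phi) / (phi_max - phi0))

phi = np.linspace(phi0 + 1e-4, phi_max - 1e-4, 100)
P = P_compaction(phi)
\end{verbatim}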
Finally, it is worth mentioning the differences between Eq. (\ref{eq:Pglobal_2}) and its two-dimensional equivalent \cite{Cantor2020_Compaction,Cardenas_pentagons_2021}.
In two dimensions, the numerical simulations show that $\sigma_\ell$ depends linearly on the mean contact strain, consistently with the approximation classically made in 2D MD-like simulations \cite{cundall1979}. This linear dependence in 2D then simplifies the development by replacing the term $2\sqrt{\phi - \phi_0}/\left(3(\Gamma\phi_0)^{3/2}\right)$ in Eq. (\ref{eq:Pglobal_2}) by $1/(\Gamma\phi_0)$.
\subsection{Particle shape and particle stress distribution}
\label{Local_stress}
During the compression, the shape of the particles evolves from an initial spherical shape to a polyhedral shape, which also modifies the stress distribution.
At the lowest order, the shape of the particles can be characterized by means of the sphericity parameter $\hat{\rho}$, defined by:
\begin{equation} \label{eq:circu_mix}
\hat{\rho} = \left\langle \pi^{1/3} \frac{(6V_i)^{2/3}}{a_i} \right\rangle_i,
\end{equation}
with $a_i$ the surface area of particle $i$, $V_i$ its volume, and $\langle ...\rangle_i$ the average over the particles in the volume $V$. By definition, the sphericity of a sphere is one, with values below one for any other geometry.
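As a sanity check of this definition, the sketch below evaluates the sphericity of a sphere (exactly one) and of a cube (about $0.806$), assuming the volume and surface area of each particle are available, e.g. from its mesh.
\begin{verbatim}
import numpy as np

def sphericity(volume, area):
    """Per-particle sphericity: pi^(1/3) * (6V)^(2/3) / a."""
    return np.pi**(1.0/3.0) * (6.0 * volume)**(2.0/3.0) / area

# Sphere of radius r: V = 4/3 pi r^3, a = 4 pi r^2  -> sphericity = 1
r = 1.0
print(sphericity(4.0/3.0 * np.pi * r**3, 4.0 * np.pi * r**2))

# Cube of side L: V = L^3, a = 6 L^2  -> (pi/6)^(1/3) ~ 0.806
L = 1.0
print(sphericity(L**3, 6.0 * L**2))
\end{verbatim}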
In Fig. \ref{fig:shape_3D}, we plot the evolution of $(\hat{\rho}-\hat{\rho}_0)$, with $\hat{\rho}_0\sim 1$ the initial sphericity of the particles, as a function of the excess packing fraction, $\phi-\phi_0$.
We find that the shape parameter increases as a power law with exponent $\beta$:
\begin{equation}
\label{Eq_shape_phi_3D}
\hat{\rho}-\hat{\rho}_0 = A (\phi-\phi_0)^{\beta},
\end{equation}
with $\beta \approx 2.5$ and $A \approx 0.6$.
It is interesting to note that a similar tendency has recently been observed in 2D for soft-disk assemblies \cite{} with a similar exponent, evidencing a seemingly universal geometrical characteristic of the compaction of rounded soft particles, as for the relation between $Z$ and $\phi$.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figures/shape.pdf}
\caption{Evolution of the excess sphericity, $\hat{\rho}-\hat{\rho}_0$ as a function of the excess packing fraction $\phi-\phi_0$ for the isotropic compaction of soft spheres. The dashed line is the power-law relation given by Eq. (\ref{Eq_shape_phi_3D}).
}
\label{fig:shape_3D}
\end{figure}
The change in grain shape is necessarily coupled with a redistribution of stresses within the grains. Thus, let us consider the Cauchy stress tensor $\boldsymbol \sigma^{{C}}$ calculated inside the grains.
Note that $\boldsymbol \sigma^{{C}}$ should not be confused with the granular stress tensor ${\boldsymbol \sigma}$ defined above and calculated from the contact forces.
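For reference, the von Mises equivalent stress used in the following can be computed at each node from the local Cauchy stress tensor as in the sketch below; this is the standard definition, which we assume is the one used in the post-processing.
\begin{verbatim}
import numpy as np

def von_mises(sig):
    """Von Mises equivalent stress from a 3x3 Cauchy stress tensor."""
    dev = sig - np.trace(sig) / 3.0 * np.eye(3)     # deviatoric part
    return np.sqrt(1.5 * np.sum(dev * dev))

# Illustrative symmetric stress state (in units of E, say)
sig = np.array([[1.0, 0.2, 0.0],
                [0.2, 0.8, 0.1],
                [0.0, 0.1, 0.9]])
print(von_mises(sig))
\end{verbatim}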
Figure \ref{snap_pe} shows cross-section images of an assembly of $100$ particles, where the color scale represents the von Mises stress computed at each node.
After the jammed state, strong heterogeneities in the stress distribution inside the particles can be seen (see Fig.\ref{snap_pe}(a)).
The grains are mainly deformed at the contact points, which generally support the maximum stress.
Far beyond the jammed state (Fig.\ref{snap_pe}(b,c)), the shape of the grains strongly changes, the pore space shrinks, and the spatial stress distributions tend to homogenize.
\begin{figure*}
\centering
\includegraphics[width=0.28\linewidth]{figures/40_3D_S3.pdf}(a)
\includegraphics[width=0.28\linewidth]{figures/89_3D_S3.pdf}(b)
\includegraphics[width=0.28\linewidth]{figures/112_3D_S3.pdf}(c)
\includegraphics[width=0.25\linewidth]{figures/legend.pdf}
\caption{Three-dimensional cross-sections of the von Mises stress field $\sigma_{vm}$ in each particle for different packing fractions, $\phi=0.66$ (a), $\phi=0.86$ (b) and $\phi=0.96$ (c), in an assembly of 100 particles.
The color intensity is proportional to the von Mises stress scaled by the Young modulus, $\sigma_{vm}/E$.}
\label{snap_pe}
\end{figure*}
Fig. \ref{pdf_pe} shows the evolution of the probability density functions (PDF) of the equivalent von Mises stress $\sigma_{vm}$.
Close to the jammed state, we observe exponential decays reminiscent of the distribution of contact forces classically observed in rigid particle assemblies \cite{Nguyen2014_Effect,Mueth1998_Force,Daniels2017_Photoelastic,Abed2019_Enlightening}. This underlines the fact that, although the assembly is isotropically compressed at the macroscopic scale, the particles may undergo large shear stresses.
As the packing fraction increases, the PDFs get narrower and gradually transform into Gaussian-like distributions centered around a given mean value.
From these observations, and consistently with the previous observations made in two dimensions, a schematic picture emerges to describe the compaction from a local perspective.
During the compaction, the assembly shifts from a rigid granular material to a continuous-like material.
In the granular-material state, the voids are filled by affine displacement of the particles and small deformations that do not change the spherical shape of the particles significantly.
Then, stresses and contact forces homogenize within the packing due to the increasing average contact surface and mean coordination number. This progressive shift of the distributions towards Gaussian-like shapes evidences that the system is turning into a more continuous-like material as the packing fraction approaches its maximum value.
This is verified by the decreasing standard deviation of such distributions (inset in Fig. \ref{pdf_pe}(b)).
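The corresponding post-processing step is straightforward and can be sketched as follows; the nodal values below are placeholders, and the sketch simply builds the PDF of $\sigma_{vm}/\langle\sigma_{vm}\rangle$ and its standard deviation at a given packing fraction.
\begin{verbatim}
import numpy as np

def normalized_pdf(svm, bins=50):
    """PDF of the nodal von Mises stress normalized by its mean."""
    x = svm / svm.mean()
    hist, edges = np.histogram(x, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist, x.std()

# Placeholder nodal values at one packing fraction
rng = np.random.default_rng(2)
svm = rng.exponential(scale=1.0, size=10_000)  # exponential-like near jamming
centers, pdf, std = normalized_pdf(svm)
\end{verbatim}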
\begin{figure}
\centering
\includegraphics[width=0.98\linewidth]{figures/VonMisesYield.pdf}
\caption{Probability density function (PDF) of
the local von Mises stress $\sigma_{vm}$ computed at each node and normalized by the corresponding average value, in one of the systems composed of 100 particles.
The inset shows the standard deviation of the distribution of $\sigma_{vm}$ as a function of the packing fraction $\phi$ for the $50$ and $100$ particle assemblies
(averaged over the four independent systems).}
\label{pdf_pe}
\end{figure}
\section{Conclusions}
\label{Conclu_section}
This paper investigates the compaction behavior of three-dimensional soft spherical particle assemblies through the Non-Smooth Contact Dynamics Method.
From the jammed state to a packing fraction close to $1$, various packings composed of $50$ and $100$ meshed spherical particles were isotropically compressed by applying a constant inward velocity on the boundaries. The mean compaction behavior was analyzed by averaging over the independent initial states.
One of the main results of this work is the writing of a new equation for the compaction of 3D soft spherical particle assemblies based on micromechanical considerations and entirely determined from the structural properties of the packing.
More precisely, this equation is derived from the micromechanical expression of the granular stress tensor together with the approximation of the Hertz contact law between two spherical particles at small strain and assuming a logarithmic shape of the compaction curve at large strain.
Moreover, our numerical data show that the power-law relation between the coordination number and the packing fraction beyond the jamming point remains valid in the three-dimensional compaction of elastic spheres, which allows us, {\it in fine}, to write a compaction equation nicely fitting our numerical data. Further, we show that the stress distribution within the particles becomes more homogeneous as the packing fraction increases.
Close to the jammed state, the probability density functions of the von Mises stress display an exponential decay at large stresses.
The distributions progressively shift into a Gaussian-like shape at high packing fractions, which means that the system turns into a more continuous-like material.
The general methodology used for the build-up of this 3D compaction equation was previously implemented in two-dimensional geometries.
Although the compaction curves in two and three dimensions appear to be similar in their overall shape (i.e., in both cases, the packing fraction increases and tends asymptotically to a maximum value as the confining stress increases), it is interesting to note that the equation underlying the variation of $P$ with $\phi$ established with the same micromechanical framework depends on the dimensionality.
The origin of this dependence on the space dimension lies in the functional form of the contact law in the small deformation regime.
Thus, for pursuing a more general compaction equation, it is possible to apply the same micromechanical framework described in this article to assemblies whose particles have more complex behaviors, such as plastic, elastoplastic, or visco-elastoplastic, and also to polydisperse systems.
For all these cases, it will be enough to identify the force law between two particles and integrate it into the framework presented here.
Finally, we would also like to point out that, to the best of our knowledge, this is the first time that the Non-Smooth Contact Dynamics Method is applied to the compaction of deformable grain assemblies in three dimensions.
From a purely numerical perspective, many efforts still need to be made on numerical optimization and parallelization of algorithms to increase performance and system size.
In particular, it would be interesting to consider periodic conditions in 3D, at least in two directions.
We warmly thank Frederic Dubois for the valuable technical advice on the simulations in LMGC90 and the fruitful discussions regarding the numerical strategies for modeling highly deformable particles in the framework of the Non-Smooth Contact Dynamics method, specifically in three dimensions. We also acknowledge the support of the High-Performance Computing Platform MESO@LR.
\section*{Conflicts of interest}
There are no conflicts to declare.
\section{Introduction}
A typical goal of an evolutionary model would be to predict the fixation or extinction in the population of a mutation in the genotype of the individuals, which entails reaching a sufficient understanding of 1. the genotype-phenotype map and 2. the interactions between the phenotype and the surrounding environment: both of them are formidable tasks and the object of cutting-edge research at the intersection between physics and biology \cite{neher2011,neher2018,lassig2017,dichio2021}.\\
Any theoretical and modelling effort in the field of population genetics is notoriously hampered by the simple and inevitable lack of experimental data (with some famous exceptions e.g. \cite{good2017,lenski1991}), the major reason being the difficulty in observing evolutionary time-scales under experimental conditions. \\
Suitable data for these kinds of models must necessarily refer to rapidly evolving populations, such as bacteria or viruses, whose analysis can lead to crucial predictions of their evolutionary fate. Nevertheless, even when data are available, e.g. as a database of all the genotypes in a viral population, it may be tough to extract useful information from the deafening noise of sequencing errors, finite-size population effects, etc.\\
The importance of similar considerations has become evident well beyond the boundaries of academic research in the last year and a half, when the prediction of the spread or extinction of a variant of SARS-CoV-2 has become a matter of primary concern for world-wide public health.
Indeed, following the outbreak of the pandemic, an unprecedented effort has been put in collecting genomic sequences of the SARS-CoV-2 in a public shared repository, in order to boost the research on the virus. The GISAID repository~\cite{GISAID} contains an increasing collection of SARS-CoV-2 whole genome sequences, and has been used to identify mutational hot spots ~\cite{Pacchetti2020} and to infer epistatic fitness parameters~\cite{Zeng2020,Cresswell2021}.
In the last year, Nature has performed an experiment in the growth of Variants of Concern (VOC),
in Pangolin classification referred to as
B.1.1.7, B.1.351 and B.1.617.2. These variants of the virus are today also referred to as
``alpha", ``beta" and ``delta": earlier they were called
``UK-variant", ``South Africa-variant" and ``Indian variant", as they were first identified in south-east England~\cite{B.1.1.7}, South Africa~\cite{Tegally2021,VoC-6} and India~\cite{Indian_variant}, respectively.
Although most PCR-assays for these variants are based on variability at a few positions,
full definitions contain more loci, both within and outside the Spike-protein.
This holds both for Pangolin definitions and others, as we will show.
The frequencies of the mutations at the different positions hence give information on whether these variants in fact grow as large clones, or if they have mutated or recombined into several clones, or if they were several clones from the beginning.
In this paper, we find that different scenarios hold for the three variants alpha, beta and delta.
This illustrates that allele frequency analysis is a method that can complement phylogenetic approaches in order to more accurately define VOCs at an earlier stage.
The overall message of this work is that comparatively simple time series
analysis can reveal properties of large and complex data sets that are often
obscured in other methods. In our case this is for instance so for mutations obviously
mis-labelled in the original definitions. We will see that there are quite a few of these.
Our analysis hence emphasizes again the importance of time variability in
complex systems~\cite{Holme2012,Holme-2019,Masuda-2016}, and of time series analysis as a tool to
elucidate relationships in complex data \cite{Marwan2007,Donges2015,Runge2019}.
The paper is organized as follows. In Section~\ref{sec:data}
we describe how we use the uniquely rich data source of GISAID, with the
hope to encourage also further studies in the complex systems community.
The allele frequency analysis at the individual mutated loci, as well as the joint one, for the three Variants of Concern is described in Section~\ref{sec:allele_fre_analysis_annotation}. The annotations of these mutations are also listed in this section. The main results and discussion are given in Sections~\ref{sec:results} and~\ref{sec:discussion}, respectively.
A preliminary version of this work covering only variants alpha and beta
and using data on GISAID up to early 2021 was deposited on the bioRxiv
pre-print server in April 2021 \cite{earlier-version}.
\section{Data preparation and transformation}
\label{sec:data}
We analyzed the consensus sequences of SARS-CoV-2 deposited in the GISAID database (https://www.gisaid.org)~\cite{GISAID}. These high-quality, full-length sequences (number of bps $\approx 30,000$) were obtained through the ``complete" and ``high coverage" options on the GISAID interface. GISAID metadata
includes both the ``collection time" and the ``submission time" of the genome sequences.
As the first is usually two weeks or more earlier than the second,
we here used the ``collection time" option.
\subsection{Data Collection}
The data sets prepared here were downloaded from the GISAID website in installments at different times
until the middle of August 2021, with ``collection times" of the genomic sequences until the end of July 2021. In this manner $1,845,260$ genomic sequences were obtained in total, all of which are included in the allele frequency analysis.
For deposition, sequences collected in 2020 are packed in one file while the sequences downloaded in 2021 are packed monthly, as the number of sequences has grown dramatically. All separate data sets are available on the Github repository~\cite{Zeng-github}. They are also stored physically on a desktop computer with 64 GB of RAM named ``hlz" at Nanjing University of Posts and Telecommunications (NJUPT). The allele frequencies and the visualizations were both produced using MATLAB software on this machine.
\subsection{Multiple-Sequence Alignment (MSA)}
Multiple sequence alignments (MSAs) were constructed from the downloaded raw sequences with the online alignment server MAFFT~\cite{Katoh2017,Kuraku2013}, with the reference sequence ``Wuhan-Hu-1" carrying Genbank accession number ``MN908947.3". The length of the sequences is kept the same as that of the reference ($29903$ bp) during sequence alignment.
An MSA is a big matrix $\mathbf{S}=\{s_i^n|i=1,...,L, n = 1,...,N\}$, composed of $N$ genomic sequences which are aligned over $L$ positions.
Each entry $s_i^n$ of the matrix $\mathbf{S}$ is either one of the 4 nucleotides (A,C,G,T), an ``unknown nucleotide'' (N),
one of a few other rare symbols,
or the alignment gap `-', introduced to treat nucleotide deletions and insertions. In the main analysis we transformed the rare symbols (`K', `F', `Y', ...) into the single overall symbol `N' when constructing the MSA. We hence retain six states, i.e., `-NACGT'. We note that, with the rare symbols, `-' and `N', deletions can also be detected from the temporal analysis of the allele frequencies (data not shown).
The whole MSA can be further divided into pieces along the horizontal direction for storage. Each piece then corresponds to a sub-structure of the genome (NSP1 to NSP16, Spike, ORF3a, etc.) over all pooled time. This operation reduces the memory requirement during the computation.
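A minimal sketch of this preprocessing step is given below; the helper name and the toy sequences are illustrative, and only the six-state reduction to `-NACGT' is taken from the text.
\begin{verbatim}
import numpy as np

STATES = "-NACGT"                      # six retained states
STATE_INDEX = {c: i for i, c in enumerate(STATES)}

def encode_msa(sequences):
    """Map aligned sequences (same length L) onto the 6-state alphabet.

    Any symbol other than -,A,C,G,T (e.g. ambiguity codes) is mapped to 'N'.
    Returns an N x L integer matrix S with s_i^n in {0,...,5}.
    """
    L = len(sequences[0])
    S = np.empty((len(sequences), L), dtype=np.int8)
    for n, seq in enumerate(sequences):
        S[n] = [STATE_INDEX.get(c, STATE_INDEX["N"]) for c in seq.upper()]
    return S

# Toy example with three aligned sequences
msa = encode_msa(["ACGT-N", "ACKTAN", "ACGTAN"])
\end{verbatim}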
\section{Allele frequency of three Variants of Concern (VOC)}\label{sec:allele_fre_analysis_annotation}
This work focuses on the individual and joint allele frequency analysis for the mutations and deletions listed for B.1.1.7 \cite{B.1.1.7}, also known as alpha or the ``UK variant", for B.1.351 \cite{VoC-6}, also known as beta or the ``South Africa variant", and for B.1.617.2 \cite{Delta_variant}, also known as delta or the ``Indian variant".
\subsection{Allele frequency calculation}
In the time period $\Delta t$, the individual frequency of a given allele at the $i$-th locus is computed by eq.~\eqref{eq:fre_cal},
\begin{equation}\label{eq:fre_cal}
f_i(s,\Delta t) = \frac{n_i(s,\Delta t)}{N_i(\Delta t)},
\end{equation}
with $s\in\{-,N,A,C,G,T\}$ and $\Delta t$ the time length of the analyzed snapshot. Here $n_i(s,\Delta t)$ denotes the number of occurrences of allele $s$ at locus $i$ during the period $\Delta t$, while the denominator $N_i(\Delta t)$ is the total number of nucleotides at this locus during the same period.
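In practice, this amounts to a column-wise count over the encoded MSA restricted to one time window (e.g. one month), as in the sketch below; the encoding convention is the hypothetical one introduced in the previous sketch.
\begin{verbatim}
import numpy as np

def allele_frequencies(S_window):
    """f_i(s, dt) for all loci i and the six states of an encoded MSA slice.

    S_window: integer matrix (sequences collected in one month) x (loci),
    with entries in {0,...,5} for the states '-NACGT'.
    Returns a 6 x L matrix of frequencies.
    """
    n_seq, L = S_window.shape
    freqs = np.zeros((6, L))
    for s in range(6):
        freqs[s] = (S_window == s).sum(axis=0) / n_seq
    return freqs

# Example: frequencies in a toy monthly slice of 3 sequences and 6 loci
S_month = np.array([[2, 3, 4, 5, 0, 1],
                    [2, 3, 1, 5, 2, 1],
                    [2, 3, 4, 5, 2, 1]], dtype=np.int8)
f = allele_frequencies(S_month)
\end{verbatim}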
Wild type in biology denotes the original variant or the most common variant.
The allele frequency analysis with the ``wild type" denoting ``the most common variant" in all
the pooled data for B.1.1.7 and B.1.351 variant was performed in~\cite{earlier-version}, which is a proxy for the original variant of SARS-CoV-2 after it transited into its
human host.
In this work we have taken it to be the original variant, which is thus the same
as the reference sequence Wuhan-Hu-1. A mutant is a deviation from this wild type.
To visualize whether sequences grow together we therefore plot (a) the joint
frequency of the wild-type alleles at a group of loci and (b) the joint frequency of
the second most prevalent alleles at the loci provided by the definitions of the three VOCs, as shown in the ``mutation locus" column of Tables~\ref{tab:B.1.1.7},~\ref{tab:B.1.351} and~\ref{tab:B.1.617.2}. The relevant formula is
\begin{equation}\label{eq:joint_fre_cal}
F(v,\Delta t) = \frac{n(v,\Delta t)}{N(\Delta t)},
\end{equation}
where $v$ labels the variant (B.1.1.7 / B.1.351 / B.1.617.2), $n(v,\Delta t)$ is the number of sequences containing all the corresponding defining mutations collected during the period $\Delta t$,
while the denominator $N(\Delta t)$ is the total number of sequences collected during the same period.
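A corresponding sketch for the joint frequency $F(v,\Delta t)$ is given below; the list of defining loci and alleles would be taken from Tables~\ref{tab:B.1.1.7}--\ref{tab:B.1.617.2}, and the data structures are the illustrative ones used above.
\begin{verbatim}
import numpy as np

def joint_frequency(S_window, defining):
    """F(v, dt): fraction of sequences carrying all defining mutations.

    defining: list of (locus_index, state_index) pairs for one VOC,
    e.g. the second most prevalent allele at each defining locus.
    """
    mask = np.ones(S_window.shape[0], dtype=bool)
    for locus, state in defining:
        mask &= (S_window[:, locus] == state)
    return mask.mean()

# Toy usage with the monthly slice from the previous sketch
S_month = np.array([[2, 3, 4, 5, 0, 1],
                    [2, 3, 1, 5, 2, 1],
                    [2, 3, 4, 5, 2, 1]], dtype=np.int8)
F = joint_frequency(S_month, defining=[(2, 4), (4, 2)])
\end{verbatim}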
To take into account the effect of evolution time for SARS-CoV-2 virus, both the individual and joint allele frequencies are computed on the time scale of each month from the initial outbreak of the COVID-19 pandemic.
There are two obvious outliers denoted by the black square line (16176) and the black dot line (26801) in Fig.~\ref{fig:frequeces_of_loci_monthly_UK}, which have different temporal behaviors compared with the others. In~\cite{earlier-version} we concluded that the mutation of $16176$ was most likely mislabeled. The $23063$ mutation of B.1.351 in Fig.~\ref{fig:frequeces_of_loci_monthly_SA} has a distinct pattern from other alleles.
\subsection{Annotated nucleotide mutations}
In contrast to the allele frequencies based on the monthly data snapshots, the annotated nucleotide mutations are obtained from the whole prepared data set. From the sorted allele frequencies computed on the whole data set, the most prevalent nucleotide and the second most prevalent one are selected as the first allele (denoted as ``Allele 1") and the second allele (``Allele 2"); see the fourth and fifth columns of Tables~\ref{tab:B.1.1.7}, \ref{tab:B.1.351} and \ref{tab:B.1.617.2} for B.1.1.7, B.1.351 and B.1.617.2, respectively. Meanwhile, the first column gives the gene sub-structure corresponding to the defined mutations (second column) as reported in the literature; finally, the amino acid mutations are shown in the third column of each table.
\subsection*{Definition of B.1.1.7 (``UK variant")}
In this work we have used the definition of
SARS-CoV-2 Variant of Concern 202012/01
(B.1.1.7) as originally given in
``Technical briefing 1"~\cite{B.1.1.7},
Table 1 and the text above Table 1
(publication date December 21, 2020).
This information, with annotations, is
given as Table~\ref{tab:B.1.1.7} below.
All non-synonymous amino acid mutations are listed on the later Pangolin description~\cite{PANGO_UK}.
In a later report from
the same group (Technical
briefing 6, publication date February
13, 2021~\cite{VoC-6}) another definition of
B.1.1.7 was given in Table 2a.
That definition differs from the one
used here in that mutation $C28977T$
in the $N$ gene and the six synonymous
mutations have not been retained.
\begin{table}[!ht]
\centering
\caption{Defining mutations for variant B.1.1.7}
\begin{threeparttable}
\begin{tabular}{lllll}
gene sub-structure & mutation locus \tnote{a} & amino acid mutation & Allele 1 \tnote{b} & Allele 2 \tnote{b} \\
\hline
NSP2& C913T & --- & C & T \\
NSP3& C3267T & ORF1ab: T1001I & C & T \\
NSP3& C5388A & ORF1ab: A1708D & C & A \\
NSP3& C5986T & --- & C & T \\
NSP3& T6954C & ORF1ab: I2230T & T & C\\
NSP6& 11288-11296 & SGF 3675-3677 del & ? & - \quad \tnote{*} \\
NSP12& C14676T & --- & C & T \\
NSP12& C15279T & --- & C & T\\
NSP12& C16176T & --- & \cellcolor{lightgray}T & \cellcolor{lightgray}C\quad\tnote{c}\\
Spike& 21765-21770 & HV 69-70 del & ? & -\\
Spike& 21991-21993 & Y144 del & ? & -\\
Spike& A23063T & S: N501Y & A & T \\
Spike& C23271A & S: A570D & C & A\\
Spike& C23604A & S: P681H & \cellcolor{lightgray}A & \cellcolor{lightgray}C\quad\tnote{c}\\
Spike& C23709T & S: T716I & C & T\\
Spike& T24506G & S: S982A & T & G \\
Spike& G24914C & S: D1118H & G & C\\
M& T26801C & --- & \cellcolor{lightgray}C & \cellcolor{lightgray}G\quad\tnote{d} \\
ORF8& C27972T & ORF8: Q27stop & C & T \\
ORF8& G28048T & ORF8: R52I & G & T \\
ORF8& A28111G & ORF8: Y73C & A & G\\
N& G28280C & N: D3L & G & C\quad\tnote{e}\\
N& A28281T & N: D3L & A & T\quad\tnote{e}\\
N& T28282A & N: D3L & T & A\quad\tnote{e}\\
N& C28977T & N: S235F & C & T\\
\hline
\end{tabular}
\begin{tablenotes}
\item[] \footnotesize{The `---'s in the amino acid mutation column indicate synonymous mutations.}
\item[a] \footnotesize{Genomic position as in \protect\cite{B.1.1.7} Table 1 and text above Table 1. Positions refer to SARS-CoV-2 sequence Wuhan-Hu-1 with the Genbank accession number ``MN908947.3".}
\item[b] \footnotesize{Frequencies of alleles have been computed
from the entire data set (reference) after
multiple sequence alignment as described. Frequencies of
alleles at one locus have then been sorted as Allele 1 (major allele), Allele 2 (first minor allele), etc.}
\item[*]\footnotesize{The question mark ``?" indicates different nucleotides in the deletions.}
\item[c]\footnotesize{In time-sorted GISAID data
the alleles at this locus show the opposite behavior to that expected if the wild type at this locus were C.
Using the same convention as for the other loci
we have taken the mutation at this locus to be $T16176C$.
The mutation at 23604, based on the whole prepared dataset, also appears swapped compared with that provided in the literature~\cite{B.1.1.7}.
}
\item[d] \footnotesize{
In time-sorted GISAID data the most
common allele at this locus is initially $C$ later overtaken by $G$.
However, the time course is very different from the rest of
the UK variant. Possibly this points
to the use of another reference sequence
for this single mutation in gene $M$
in \protect\cite{B.1.1.7}.
Using the same convention as the other loci
the mutation at this locus would be $C26801G$.
}
\item[e]\footnotesize{This locus is one of three annotated as $28280 GAT->CTA$ in \protect\cite{B.1.1.7} Table 1.}
\label{tab:B.1.1.7}
\end{tablenotes}
\end{threeparttable}
\end{table}
\subsection*{Definition of B.1.351 (``South Africa variant")}
In this work we have used the definition of
SARS-CoV-2 Variant of Concern 202012/02
(B.1.351) as given in
``Technical briefing 6"~\cite{VoC-6}
Table 4a. This information with annotations is
given as Table~\ref{tab:B.1.351} below.
The later Pangolin description of the South Africa variant~\cite{PANGO_SA} is a subset of the above definition. The defining Pangolin amino acid mutations are shown in the green cells of Table~\ref{tab:B.1.351}.
\begin{table}[!ht]
\centering
\caption{Defining mutations for variant B.1.351}
\begin{threeparttable}
\begin{tabular}{lllll}
gene sub-structure & mutation locus \tnote{a} & amino acid mutation & Allele 1 \tnote{b} & Allele 2 \tnote{b} \\
\hline
NSP2& C1059T & ORF 1ab: T265I & C & T\\
NSP3& G5230T & \cellcolor{green}ORF1ab: K1655N & G & T\quad\tnote{d}\\
NSP5& A10323G & ORF 1ab: K3353R & A & G \\
NSP6& 11288$\_$96 del & 3675-3677 del & -- & --\quad\tnote{e}\\
Spike& C21614T & S:L18F & C & T\quad\tnote{c} \\
Spike& A21801C & \cellcolor{green}S: D80A & \cellcolor{lightgray}A & \cellcolor{lightgray}N\quad\tnote{d} \\
Spike& A22206G & \cellcolor{green}S: D215G & A & G\quad\tnote{d} \\
Spike& -- & 242-244del & -- & --\\
Spike& G22299T & R246I & \cellcolor{lightgray}G & \cellcolor{lightgray}N\quad\tnote{c} \\
Spike& G22813T & \cellcolor{green}S: K417N & G & T\quad\tnote{c,d}\\
Spike& G23012A & \cellcolor{green}S: E484K & G & A\quad\tnote{d} \\
Spike& A23063T & \cellcolor{green}S: N501Y & A& T\quad\tnote{d,e} \\
Spike& C23664T & \cellcolor{green}S: A701V & C & T\quad\tnote{d}\\
ORF3a& G25563T & ORF3a: Q57H & G & T\\
ORF3a& C25904T & ORF3a: S171L & C & T\\
E& C26456T & \cellcolor{green}E: P71L & C & T\quad\tnote{d} \\
N& C28887T & \cellcolor{green}N: T205I & C & T\quad\tnote{d}\\
\hline
\end{tabular}
\begin{tablenotes}
\item[a] \footnotesize{Genomic position as in \protect\cite{VoC-6} Table 4a. Positions refer to SARS-CoV-2 sequence Wuhan-Hu-1 with the Genbank accession number ``MN908947.3".}
\item[b] \footnotesize{Frequencies of alleles have been computed
from the entire data set (reference) after
multiple sequence alignment as described. Frequencies of
alleles at one locus have then been sorted as Allele 1 (major allele), Allele 2 (first minor allele), etc.}
\item[c]\footnotesize{
Annotated in \protect\cite{VoC-6} caption to Table 4a as acquisitions in subset of isolates within the lineage.}
\item[d]\footnotesize{
Annotated in \protect\cite{VoC-6} in Table 4b
as ``PROBABLE"; at least 4 lineage defining non-synonymous changes called as alternate
base and all other positions either N or mixed base OR at least 5 of the 9
non-synonymous changes.}
\item[e]\footnotesize{
This mutation is also present in the UK variant,
compare Table~\protect\ref{tab:B.1.1.7}.}
\end{tablenotes}
\label{tab:B.1.351}
\end{threeparttable}
\end{table}
\subsection*{Definition of B.1.617.2 (``delta variant")}
In this work we have used the definition of SARS-CoV-2 Variant of Concern B.1.617.2 as provided on the Nextstrain website~\cite{Delta_variant}. The mutation information with annotations is given as Table~\ref{tab:B.1.617.2} below. The amino acid mutations with green background are those present in the Pangolin definition of the delta variant~\cite{PANGO_delta}; they are all included in the third column of Table~\ref{tab:B.1.617.2}.
\begin{table}[!ht]
\centering
\caption{Defining mutations for Delta variant B.1.617.2}
\begin{threeparttable}
\begin{tabular}{lllll}
gene sub-structure & mutation locus & amino acid mutation \tnote{a} & Allele 1 \tnote{b} & Allele 2 \tnote{b} \\
\hline
ORF1b & C14408T & ORF1b: P314L & \cellcolor{lightgray}T & \cellcolor{lightgray}C\quad\tnote{c}\\
ORF1b & C16466T & ORF1b: P1000L & C & T\\
Spike & C21618G & \cellcolor{green}S: T19R & C & G \\
Spike & 22028-33 del & 156-157 del & -- & --\quad\tnote{d}\\
Spike & A22034G & S: R158G & A & \cellcolor{lightgray}N\quad\tnote{e} \\
Spike & T22917G & \cellcolor{green}S:L452R & T & G\\
Spike & C22995A & \cellcolor{green}S: T478K & C & A \\
Spike & A23403G & D614G & \cellcolor{lightgray}G & \cellcolor{lightgray}A\quad\tnote{c} \\
Spike & C23604G & \cellcolor{green}S: P681R & \cellcolor{lightgray}A & \cellcolor{lightgray}C\quad\tnote{f}\\
Spike & G24410A & \cellcolor{green}S: D950N & G & A \\
ORF3a & C25469T & \cellcolor{green}ORF3a: S26L & C & T \\
M & T26767C & \cellcolor{green}M: I82T & T & C \\
ORF7a & T27638C & \cellcolor{green}ORF7a: V82A & T & C \\
ORF7a & C27752T & \cellcolor{green}ORF7a: T120I & C & T \\
N & A28461G & \cellcolor{green}N: D63G & A & G\\
N & G28881T & \cellcolor{green}N: R203M & \cellcolor{lightgray}A & \cellcolor{lightgray}G\quad\tnote{f}\\
N & G29402T & \cellcolor{green}N: D377Y & G & T \\
\hline
\end{tabular}
\begin{tablenotes}
\item[a] \footnotesize{Amino acid mutations provided on the website~\cite{Delta_variant} from corresponding protein. The Mutation loci in the second column refer to SARS-CoV-2 sequence Wuhan-Hu-1 with the Genbank accession number ``MN908947.3". The green ones are listed in the Pangolin description.}
\item[b] \footnotesize{Frequencies of alleles have been computed
from the entire data set. Frequencies of
alleles at one locus have then been sorted as Allele 1 (major allele), Allele 2 (first minor allele), etc.}
\item[c]\footnotesize{The wild type and the second most common allele are swapped with respect to the definition of B.1.617.2.}
\item[d]\footnotesize{These deletions are still present in the dataset.}
\item[e]\footnotesize{Based on the whole dataset, this mutation has now become a deletion.}
\item[f]\footnotesize{For these mutations, the most common allele in the whole dataset is absent from the definition (second column), while the wild type given in the definition has become the second most common allele in the dataset.}
\end{tablenotes}
\end{threeparttable}
\label{tab:B.1.617.2}
\end{table}
\section{Results}\label{sec:results}
The GISAID repository holds a large collection of whole-genome sequences~\cite{GISAID}. We have downloaded high-quality sequences up to the end of July 2021.
All collected genomes are aligned in an MSA table. The frequency of a given allele at a given locus is then how many times that allele is found in the corresponding column in the table divided by the number of rows of the table. In this way we determine the frequency of a mutation at one locus as a function of time. Similarly, we determine the joint frequency of a set of mutations as the number of times all these mutations are found in the same row in the table, divided by the number of rows in the table.
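For concreteness, the following minimal Python sketch illustrates this frequency computation; the \texttt{msa} object (an iterable of pairs of sampling month and aligned sequence), the function names and the example mutations are purely illustrative and are not part of the actual analysis pipeline used for this work.
\begin{verbatim}
from collections import defaultdict

def allele_frequencies(msa, locus):
    """Frequency of each allele at a 1-based locus, stratified by sampling month.

    `msa` is assumed to be an iterable of (month, aligned_sequence) pairs,
    e.g. ("2021-04", "ACGT...N-..."), with all sequences of equal length."""
    counts = defaultdict(lambda: defaultdict(int))   # month -> allele -> count
    totals = defaultdict(int)                        # month -> number of rows
    for month, seq in msa:
        counts[month][seq[locus - 1]] += 1
        totals[month] += 1
    return {m: {a: c / totals[m] for a, c in alleles.items()}
            for m, alleles in counts.items()}

def joint_frequency(msa, mutations):
    """Fraction of rows carrying *all* the given mutations, per month.

    `mutations` is a dict {locus: allele}, e.g. {23063: "T", 23604: "A"}."""
    hits, totals = defaultdict(int), defaultdict(int)
    for month, seq in msa:
        totals[month] += 1
        if all(seq[pos - 1] == allele for pos, allele in mutations.items()):
            hits[month] += 1
    return {m: hits[m] / totals[m] for m in totals}
\end{verbatim}
Sorting the per-month (or whole-dataset) output of \texttt{allele\_frequencies} by decreasing frequency is what yields the ``Allele 1'' and ``Allele 2'' entries quoted in the tables above.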
\subsection{Temporal behavior of B.1.1.7 variant}
The first report from Public Health England defining B.1.1.7 as a Variant of Concern lists 17 non-synonymous mutations (including deletions) and six synonymous mutations~\cite{B.1.1.7}, see Table~\ref{tab:B.1.1.7}.
The time series of the frequencies of these mutations is shown in Fig.~\ref{fig:frequeces_of_loci_monthly_UK}. Of the 23 mutations, 21 have a similar time course, C16176T has precisely the opposite time course, and T26801C an unrelated one. In the following we have assumed that C16176T is a mis-labelling, and that this mutation is in fact T16176C. We have further assumed that T26801C, a synonymous mutation in the M gene, pertains mostly to another clone or to another reference sequence, and we have therefore not retained data from this locus. In Fig.~\ref{fig:frequeces_of_loci_monthly_UK} we also show the time course of the joint frequency of the 22 retained mutations for this variant.
The frequencies of the 22 retained mutations for the UK-variant increase after late summer / early autumn 2020, see Fig. \ref{fig:frequeces_of_loci_2nd_UK}. The lines in this figure connect frequencies of the second most common allele (first minor allele) within the same month of sampling time in the GISAID data. With one exception (16176, discussed above) this second most common allele agrees with the mutation at this locus as given in~\cite{B.1.1.7}.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.80\textwidth]{figs/Indi_joint_fre_UK_variant.pdf}
\caption{Individual frequency of the major allele (solid marked lines) and joint frequency (dashed diamond lines) at the defining loci for B.1.1.7 over time, as determined from GISAID. The red dashed line shows the joint frequency of the wild type, while the blue one shows that of the second most common allele of the B.1.1.7 variant. 21 out of the 23 mutations listed in the UK variant report have similar temporal patterns; the exceptions are the 26801 locus in the M gene and the mutation $C16176T$ in NSP12. These two outliers are not included in the joint frequency analysis.}
\label{fig:frequeces_of_loci_monthly_UK}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\textwidth]{figs/selected_UK_non_synonymous_mutations.pdf}
\caption{Frequency of the second most common allele at the defining loci for B.1.1.7 over time, as determined from GISAID. The x-axis gives genomic positions. In contrast to Fig.~\ref{fig:frequeces_of_loci_monthly_UK}, the 26801 locus in the M gene is not included. The average distance between the 22 retained mutations is about $1,500$ bp, but some, such as $C23604A$ and $C23709T$ ($P681H$ and $T716I$ in Spike), lie closer. The mutations immediately to the right of the Spike gene also lie close to each other.}
\label{fig:frequeces_of_loci_2nd_UK}
\end{figure}
The growth of the first minor allele of the UK-variant is uneven across the SARS-CoV-2 genome. In a first phase (early 2020-November 2020), the frequency of the HV 69-70 deletion in Spike (21765-21770) is noticeably higher than that of the other mutations defining B.1.1.7. This is consistent with this mutation initially being present also in clones unrelated to B.1.1.7. As time progresses, the relative difference between the frequency at this locus and the frequencies at the other loci decreases. Starting in December 2020 for C23604A (P681H in Spike), January 2021 for A23063T (N501Y in Spike) and February 2021 for the deletion 11288-11296 in NSP6, the frequencies of these three mutations begin to deviate from the others, and these differences increase up to May 2021. The joint frequency of the mutations hence progressively deviates from that of the single mutations in the spring of 2021.
\subsection{Temporal behavior of B.1.351 variant}
The definition of B.1.351 given by Public Health England in February 2021 lists 17 non-synonymous mutations (including deletions), of which 9 are in Spike~\cite{VoC-6},
see Table~\ref{tab:B.1.351}.
In later descriptions, such as in Pangolin, only a fraction of these mutations has been retained,
see the cells in Table~\ref{tab:B.1.351} with green background.
Of the 17 mutations listed in the first PHE definition, two are also listed in B.1.1.7. As this variant reached higher prevalence world-wide, frequencies of mutations at those loci followed that course and we have not retained them for B.1.351. Three other mutations listed in the first PHE definition appeared much before B.1.351 was defined and have an unrelated time course,
see Fig.~\ref{fig:frequeces_of_loci_monthly_SA}.
In the following we have assumed
that these three
mutations, $C1059T$ ($T265I$ in $NSP2$),
$C21614T$ ($L18F$ in Spike)
and $G25563T$ ($Q57H$
in $ORF3a$) also mostly pertain to other
clones and/or to another reference sequence.
We have not retained data
from these loci.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.80\textwidth]{figs/Indi_joint_fre_SA_variant.pdf}
\caption{Individual and joint frequency of the major allele at the defining loci for B.1.351 over time, as determined from GISAID, plotted with solid marked lines and dashed diamond lines, respectively. Three out of the $17$ mutations listed for the South Africa variant (one is 1059; the other two, 21614 and 25563, are covered by the diamond lines) display different dynamics and have been excluded from the following analysis. Of the others, two mutations shared with B.1.1.7 increase to large frequencies: the $3675-3677$ deletion ($11288-11296$) in NSP6 and the $N501Y$ mutation ($A23063T$) in Spike (shown with pentagonal markers). The remaining mutations reach about the 2\% level and are discussed in the text.}
\label{fig:frequeces_of_loci_monthly_SA}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.70\textwidth]{figs/selected_muations_in_spike_gene_SA_variant.pdf}
\caption{ Frequency of second most common allele at the defining loci in Spike for B.1.351 over time as determined from GISAID. X-axis gives genomic positions.
The mutation at $23012$ is SGF $E484K$ in Spike $S2$. The mutation $N501Y$ ($A23063T$) is also defined for B.1.1.7 and is not shown, see instead Fig.~\protect\ref{fig:frequeces_of_loci_2nd_UK}.
}
\label{fig:frequeces_of_loci_monthly_2nd_SA}
\end{figure}
To display the frequencies of mutations in B.1.351, we have chosen to focus on the mutations in Spike, except N501Y ($A23063T$), which is shared with the UK variant, see Fig.~\ref{fig:frequeces_of_loci_monthly_2nd_SA}.
The growth of these mutations in B.1.351 in Spike is as follows. From the beginning of Spike up to and including the 242-244 deletion (three loci) there is a roughly even growth, reaching approximately the 2\% level by March 2021. April 2021 is still even, while May 2021 is more uneven, with A21801C up and $A22206G$ down. Immediately to the right of this set there is a sharp drop in frequency, so that very few sequences carry the $R246I$ mutation. This mutation has now been excluded from the defining mutations of B.1.351, together with several other mutations originally listed~\cite{PANGO_lineages}.
Further to the right, $K417N$ ($G22813T$) follows approximately the pattern of the 242-244 deletion while $A701V$ ($C23664T$) appears to grow about twice as fast and reaches approximately the 5\% level in May 2021. $E484K$ ($G23012A$) on the other hand follows an erratic trajectory peaking at above 1\% in August 2020, falling back to below 1\% in November 2020, and then increasing again up to 9\% in April 2021 and then falling back to 6\% again in May 2021. This reflects the fact that this mutation is shared with the $P1$ and other variants~\cite{PANGO_lineages}.
\subsection{Temporal behavior of B.1.617.2 variant}
The B.1.617.2 variant, also known as ``delta variant", was first identified in India in December 2020 and has contributed to the widespread cases of COVID-19 all over the world today. The definition of this variant used here is the one listed on the Nextstrain website, comprising 17 non-synonymous mutations (including deletions), of which 8 are in Spike~\cite{Delta_variant}. In the later Pangolin version of the delta variant~\cite{PANGO_delta}, the two mutations in ORF1b and three in Spike are not included; the mutations that are retained are shown in the cells of Table~\ref{tab:B.1.617.2} with green background.
There are mainly three kinds of temporal behavior of the mutations of this variant at the early stage of its evolution, see Fig.~\ref{fig:frequeces_of_loci_monthly_Delta}. The allele frequencies of most wild types start decreasing around April 2021, while those at 23604 and 28881 went down around the end of 2020. The temporal allele frequencies at 14408 and 23403 behave similarly to the joint frequency of the wild type. The frequencies of all alleles at these four loci are computed from the whole dataset used in this work; they are sorted, and the first and second most prevalent alleles are labelled as Allele 1 and Allele 2, respectively, in Table~\ref{tab:B.1.617.2}. These four mutations are marked in light gray, which indicates that the mutations computed from the dataset at these loci differ from the definitions. For instance, $C14408T$ became $T14408C$ and $A23403G$ became $G23403A$. For the $C23604G$ mutation, the original wild type became the second most common allele, while the dominant one arose from a minor allele which is not included in the definition. The same situation also holds for the mutation $G28881T$.
The joint frequency of the second most common alleles, shown by the diamond lines, starts going up around April 2021 and reaches around 90\%. This explains why the delta variant is nowadays detected everywhere.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.80\textwidth]{figs/Delta-variant-frequencies.pdf}
\caption{Individual and joint frequency of the major allele at the defining loci for B.1.617.2 over time, as determined from GISAID, plotted with solid marked lines and dashed diamond lines, respectively. The deletions are not shown here. The allele frequency of the wild type at most mutated loci went down around April 2021, while at 23604 it went down around the end of 2020. 28881 is a well-known locus with heavy variation. 14408 and 23403 behave similarly to the joint frequency of the wild type.}
\label{fig:frequeces_of_loci_monthly_Delta}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.80\textwidth]{figs/fre_2nd-nucleotide-in-Spike_Delta.pdf}
\caption{Individual frequency of the second most common allele at the defining loci in Spike for B.1.617.2 over time, as determined from GISAID. The x-axis gives genomic positions. Most of the mutations grow as time goes on, while this never happens at 22034. This is consistent with the annotation in Table~\ref{tab:B.1.617.2} for this locus: ``Allele 2" is ``N" rather than the ``G" given in the definition.
}
\label{fig:frequeces_of_loci_monthly_2nd_Delta}
\end{figure}
Similarly to the analysis of the previous two variants, the frequencies of the second most common alleles for the delta variant have also been computed, as shown in Fig.~\ref{fig:frequeces_of_loci_monthly_2nd_Delta}. Here only mutations in Spike are shown. Most of the allele frequencies grow as time goes on. However, there is one outlier, $22034$, whose frequency never goes up and remains around zero. This is consistent with the corresponding entry in Table~\ref{tab:B.1.617.2}. Based on the computation on the dataset, the second most common allele at this locus is `N', which means that the mutation at this locus may be a deletion rather than the $A22034G$ given in the definition.
\section{Discussion}\label{sec:discussion}
One conclusion of this work is that it cannot be the case that the
three variants, as originally defined, have grown as large clones.
No sophisticated statistical analysis is required to reach this conclusion.
It is enough to plot the frequencies of single locus mutations and eyeball the time series curves.
In this work we have used almost two million whole-genome SARS-CoV-2 sequences from GISAID.
The method of analyzing allele frequencies and plotting the data stratified by time is a simple and efficient way of visualizing the dynamics of variants when there is such abundant data available on the genomic level.
With the growing availability of genomic sequencing and the possibility of sequencing samples in disease outbreaks, one can expect that this will soon become standard, not only for serious
pandemics like COVID19, but also for milder infectious diseases such as seasonal influenza.
In more detail, SARS-CoV-2 variant alpha originally grew as one clone (with the exception of
two likely mis-labelled mutations and the $HV 69-70$ deletion), which then diverged into separate clones as the variant accumulated novel mutations. Indeed, over time the list of defining mutations for B.1.1.7 was adjusted and the synonymous mutations removed. It is evident that SARS-CoV-2 variant beta originally contained several mis-classified mutations, most notably 21614 in the Spike protein, $1059$ in NSP2 and $25563$ in ORF3a, all of which were subsequently removed from the list of defining mutations~\cite{B.1.1.7, VoC-6}. Regardless of these mutations, it is clear that this variant does not grow as one large clone, but rather consists of multiple variants.
For the delta variant, apart from the well-known mutations at 23604 and 28881, which had already been detected by the pairwise analysis of the dataset up to the beginning of August 2020~\cite{Zeng2020}, the temporal frequencies in Fig.~\ref{fig:frequeces_of_loci_monthly_Delta} clearly show two different patterns, which may indicate that the evolution did not proceed as a single clone. Meanwhile, the
silent behavior of the $22034$ locus in Fig.~\ref{fig:frequeces_of_loci_monthly_2nd_Delta} probably points to a problematic definition of the mutation $A22034G$ at this position.
This may be why this mutation is not included in the Pangolin version of the delta variant definition.
This instability of clones is supported by recent observations pointing towards the emergence of multiple lineages of SARS-CoV-2 within the same individual
\cite{Avanzato2020,Baang2021,Choi2020,Hensley2021,Kemp2021}.
In all cases the patients had prolonged viremia and received convalescent plasma treatment and/or monoclonal antibody therapy. Treatment with convalescent plasma or monoclonal antibodies applies selection pressure on a viral population within the host that may drive the emergence of antibody-resistant clones. Also, the large number of viral genomes present simultaneously in a single patient enables opportunities for within-host recombination. The phenotypic effects of all described mutations in the spike protein of SARS-CoV-2 are just beginning to be unraveled. For example, the N501Y substitution increases the affinity for ACE2 binding~\cite{Gu2020}. Also, compensatory mutations have been described, as in the case of the E484K substitution in combination with del69-70, where a reduction in antibody sensitivity is compensated by increased infectivity. Specific mutations that confer advantages, in the form of increased infectivity or antibody escape, will increase in frequency as they are shared with novel variants that either emerge from the original or emerge in unrelated variants. This is illustrated by E484K of B.1.351, which increases more rapidly than the other defining mutations for this variant. The E484K mutation is also present in, for example, the P1 variant and a sub-variant of B.1.1.7~\cite{VoC-6}.
On a more general and speculative note, Coronaviruses, the larger family to which SARS-CoV-2 belongs,
in general exhibit a large amount of recombination~\cite{LaiCavanagh1997,Graham2010}.
Large-scale recombination would be important in the
COVID19 pandemic for several reasons.
First, it increases the resilience of the viral population
against hostile agents. Beneficial (to the virus) changes can spread faster
and more reliably throughout the population.
Second, it leads to a form of evolution that optimizes fitness and is less impacted by traits inherited by chance: while a clone replicating asexually
will likely have points of weakness,
in a recombining population such errors are shared around and eliminated.
Third, a substantial amount of recombination
is a confounder for phylogenetic reconstruction.
Crudely put, phylogenetic trees
reconstructed from population-wide sequence data
may not reflect the actual evolution in such
populations, an issue which has been discussed in
bacterial phylogenetics for some time
\cite{Falush2003,Dixit2017,Sakoparnig2021}.
Overall, our discussion underlines the usefulness of a time series analysis when handling data on systems as complex as an evolving genome may be. This is almost a truism from the point of view of a complexity scientist: in the context of network science, for instance, a whole new branch of the field has developed with a focus on the temporal dimension \cite{Holme2012,Holme-2019}. Indeed, projecting out the temporal dimension into a single static ``snapshot'' of the system on the one hand allows rapid analysis and computations on the data, but on the other hand may be harmful when the dynamical time scales
at stake are not separable or are unknown, which is almost always the case for evolutionary and epidemiological models (mutation rate of the genome, recombination rate, rate of diffusion and so on). We thus suggest that the use of time series analysis can be a useful complement to more automated and sophisticated forms of data analysis, whose assumptions may hide important and perhaps dominant phenomena.
\bibliographystyle{iopart-num}
\section{Introduction}
\label{sec:introduction}
The phase diagram of strongly interacting matter has attracted a lot of attention in the past couple of decades~\cite{Fukushima:2010bq, Hands:2001ve, Shuryak:1996pb}. At low temperature $T$ and low baryon chemical potential $\mu_B$, quantum chromodynamics (QCD) matter consists of colorless hadrons, while at high temperature and baryon density the degrees of freedom are colored quarks and gluons. Lattice QCD simulations indicate that, at $\mu_B\sim 0$, the transition from the hadronic phase to the quark-gluon plasma phase (QGP) is actually an analytic cross over~\cite{Aoki:2006br, Borsanyi:2010bp, Borsanyi:2015waa, Petreczky:2012fsa, Ding:2015ona, Friman:2011zz}. At finite baryon densities $\mu_B/T \gtrsim 1$, however, lattice QCD calculations are affected by the sign problem~\cite{Hands:2001jn,Aoki:2005dt,Alford:2002ng,deForcrand:2010ys}, and, even though very recent works, based on the analytical continuation from imaginary to real $\mu_B$, are now probing values of the chemical potential up to real $\mu_B \sim 300$~MeV~\cite{Borsanyi:2020fev}, results at large values of $\mu_B$ are still scarce. Hence, in this regime of the phase diagram one has to resort to effective models of QCD, such as the Nambu-Jona-Lasinio model~\cite{Klevansky:1992qe, Hatsuda:1994pi}, the quark-meson-coupling model~\cite{Schaefer:2007pw} etc. Finally, at low temperature and sufficiently large baryon density many phenomenological models predict the transition between the hadronic phase and the deconfined phase to be of first order~\cite{Asakawa:1989bq,Barducci:1989eu,Barducci:1993bh,Berges:1998rc,Halasz:1998qr,Scavenius:2000qd,Antoniou:2002xq,Hatta:2002sj,Bhattacharyya:2010wp}: see also the discussion in ref.~\cite{Bzdak:2019pkr}.
Apart from the effective QCD models mentioned above, a simple model to describe the hadronic phase of QCD is the hadron resonance gas model (HRG). This model is based on the $S$-matrix formulation of statistical mechanics~\cite{Dashen:1969ep}. At low density, as it turns out, the thermodynamics can be approximately modeled in terms of a non-interacting gas of hadrons and resonances~\cite{Dashen:1974yy,Welke:1990za,Venugopalan:1992hy}. The predictions of this model have been compared with lattice QCD simulations, finding good agreement for temperatures up to $T\sim 150$~MeV except for some discrepancies in the trace anomaly~\cite{Karsch:2003vd,Vovchenko:2016rkn,Dash:2018can}. Later studies found that the agreement can be improved, if the contribution of a continuous density of states is included in the mass spectrum of the HRG~\cite{NoronhaHostler:2008ju,NoronhaHostler:2012ug,Kadam:2014cua,Vovchenko:2014pka}. Remarkably, analogous results have been obtained also in lattice simulations of $\mathrm{SU}(N)$ Yang-Mills theories without dynamical quarks~\cite{Meyer:2009tq, Caselle:2015tza}, and even in three spacetime dimensions~\cite{Caselle:2011fy}.
A description of the density of hadron states in terms of a continuous distribution is the basis of the statistical bootstrap model (SBM)~\cite{Hagedorn:1965st,Frautschi:1971ij}, which attracted a lot of attention in the particle physics community in the pre-QCD era. The mass spectrum of abundant formation of heavy resonances and higher angular momentum states can be consistently described by a self-similar structure of hadrons through the bootstrap condition. These high-mass resonances have an interesting effect on the strong interaction thermodynamics: in the thermodynamic system dominated by exponentially rising resonance states there is a finite limiting temperature $T_{\mbox{\tiny{H}}}$, called Hagedorn temperature. The existence of this limiting temperature indicates that the hadron resonance gas cannot exist at physical temperatures $T>T_{\mbox{\tiny{H}}}$, and suggests that strongly interacting matter should then enter a different phase. The bootstrap condition of the SBM requires the density of states to be of the form $ \rho(m)\sim m^{a}\: \exp(m/T_{\mbox{\tiny{H}}})$~\cite{Satz:1978us, Huang:1970iq, Blanchard:2004du}, where $a$ is a constant. Interestingly, the string model (or dual resonance model) of strong interactions~\cite{Mandelstam:1974fq} also predicts this type of density of states. The $a$ constant plays an important role in determining the thermodynamics of the SBM near the Hagedorn temperature. In fact, for the choice $a=-4$ both the energy density and the entropy density remain finite near $T_{\mbox{\tiny{H}}}$ and one expects a phase transition to take place~\cite{Antoniou:2002xq, Satz:1978us, Castorina:2009de}, so that $T_{\mbox{\tiny{H}}}$ can be interpreted as a critical temperature, $T_{\mbox{\tiny{c}}}$.
A particularly interesting point in the QCD phase diagram is the conjectured critical end point (CEP). It should be remarked that, so far, the existence of the CEP has neither been proven theoretically, nor has it been observed experimentally. However, its existence is strongly suggested by the aforementioned model calculations investigating the phase diagram region at low temperatures and baryon densities larger than that of nuclear matter, which predict a first-order transition line separating the hadronic phase from a deconfined phase: since that line is known not to extend all the way to the $\mu_B=0$ axis (where the transition is actually a crossover), it should end at a CEP, where the transition should be a continuous one~\cite{Stephanov:1998dy}. A lot of theoretical investigation has been carried out, and is still going on, to locate the CEP and predict possible experimental signatures, see refs.~\cite{Gavai:2016lwu,Mohanty:2009vb,Stephanov:2004xs,Luo:2017faz} for reviews. On the experimental side, an entire experimental program, namely the Beam Energy Scan (BES) program, has been devoted at the Super Proton Synchrotron (SPS) and at the Relativistic Heavy Ion Collider (RHIC) to search for the CEP~\cite{Aggarwal:2010cw, Heinz:2015tua}. In particular, as suggested in ref.~\cite{Stephanov:1998dy}, the existence and the location of the CEP could be revealed by the observation of a suppression in temperature and chemical potential fluctuations on an event-by-event basis, and by large fluctuations in the multiplicity of low-energy pions.
A very important feature of the critical point is the emergence of a universal critical mode. As the system approaches the critical point, this mode rises very rapidly with some power of the correlation length $\xi$, which eventually diverges at the critical point. For instance, the variance, skewness and kurtosis of the non-Gaussian fluctuation of the critical mode grow as $\xi^{2}$, $\xi^{9/2}$ and $\xi^{7}$, respectively~\cite{Stephanov:2008qz,Asakawa:2015ybt,Bluhm:2020mpc}. In the experimental search for the critical endpoint, these critical fluctuations can be accessed by measuring event-by-event fluctuations of particle multiplicities~\cite{Heinz:2015tua}.
While ``static'' critical phenomena have been extensively studied theoretically, an avenue that has been explored less is the one of ``dynamical'' critical phenomena. As it turns out, critical singularities can also occur in quantities encoding the dynamical properties of the medium, like transport coefficients. Away from the critical point, the dynamic properties of a system can be characterized by hydrodynamics, which provides an effective description of the fluid in the low-frequency, long-wavelength limit. Hydrodynamics describes fluctuations of conserved quantities at equilibrium and any additional slow variable that occurs due to the existence of a spontaneously broken symmetry. In the hydrodynamic effective theory the dynamical critical fluctuations are described by coupling the order-parameter field with the conserved momentum density. In this model, which is called the H model in the classification of dynamical critical phenomena~\cite{Hohenberg:1977ym} by Hohenberg and Halperin, the transport coefficients depend on the correlation length as
\begin{equation}
\eta \sim \xi^{\frac{\epsilon}{19}},\hspace{0.5cm} \kappa \sim \xi,\hspace{0.5cm} D_{B}\sim \frac{1}{\xi}, \hspace{0.5cm}\zeta\sim \xi^{3}.
\end{equation}
Here $\eta$, $\kappa$, $D_{B}$ and $\zeta$ denote the shear viscosity, the thermal conductivity, the baryon diffusion coefficient and the bulk viscosity, respectively, while $\epsilon$ is the parameter of the $\epsilon$ expansion. This behavior suggests that the transport coefficients would affect the bulk hydrodynamic evolution of the matter created in heavy-ion collisions near the QCD critical point~\cite{Monnai:2016kud, Monnai:2017ber,Paech:2003fe,Nonaka:2004pg,Paech:2005cx,Herold:2013bi,Nahrgang:2016ayr,Kapusta:2012zb}.
It is worth emphasizing that, while lattice calculations remain \emph{the} tool of choice for theoretical first-principle studies of various quantities relevant for strong-interaction matter, their applicability in studies of transport coefficients in the proximity of the QCD critical endpoint is severely limited, for a two-fold reason. On one hand, as we remarked above, the existence of the sign problem poses a formidable barrier to lattice simulations at finite baryon-number density: a barrier that might even be impossible to overcome with classical computers, if it is related to fundamental computational-complexity issues~\cite{Troyer:2004ge}. On the other hand, even at zero baryon-number density, the lattice determination of transport coefficients of QCD matter involves its own difficulties, due to the fact that lattice QCD calculations are done in a Euclidean spacetime, and typically the extraction of quantities involved in real-time dynamics requires a Wick rotation back to Minkowski signature, with the reconstruction of a continuous spectral function from a finite set of Euclidean data, which is an ill-posed numerical problem~\cite{Meyer:2011gj}. Despite some recent progress (see, e.g., refs.~\cite{Cuniberti:2001hm, Meyer:2007ic, Meyer:2007dy, Burnier:2011jq, Panero:2013pla, Rothkopf:2016luz}), a general solution to this type of problems is still unknown.
For these reasons, phenomenological models remain a useful theoretical tool to get some insight into the physics near the QCD critical endpoint. In this work, we extract critical exponents~\cite{Satz:1978us} and amplitudes of thermodynamic quantities relevant near the critical point within the statistical bootstrap model. We then derive the singularities characterizing shear and bulk viscosity coefficients, starting from an \emph{Ansatz} for viscosity coefficients~\cite{Antoniou:2016ikh} that is suitable for a hydrodynamic system with conserved baryon charge. We then estimate viscosity coefficients near the critical point from the hadronic side using the critical exponents of this model.
We organize the paper as follows. In section~\ref{sec:SBM} we review the derivation of the critical exponents (and amplitudes) close to the critical point in the critical bootstrap model, that was first worked out in ref.~\cite{Satz:1978us}, with a few additional remarks and comments. In section~\ref{sec:transport_coefficients} we derive the singularities of shear and bulk viscosity near the critical point. In section~\ref{sec:results} we present our results. Finally, in section~\ref{sec:discussion_and_conclusions} we summarize our findings and conclude. Throughout the paper, we work in natural units ($\hbar=c=k_{\mbox{\tiny{B}}}=1$).
\section{Statistical bootstrap model: criticality and critical point exponents}
\label{sec:SBM}
\subsection{Critical exponents}
The analysis of critical phenomena is based on the assumption that, in the $T\rightarrow T_{\mbox{\tiny{c}}}$ limit, any relevant thermodynamic quantity can be separated into a regular part and a singular part. The singular part may be divergent or it can have a divergent derivative. It is further assumed that the singular part of all the relevant thermodynamic quantities is proportional to some power of $t$, where $t=(T-T_{\mbox{\tiny{c}}})/T_{\mbox{\tiny{c}}}$. These powers, called critical exponents, characterize the nature of the singularity at the critical point. The critical exponents $\hat{\alpha}$, $\hat{\beta}$, $\hat{\gamma}$ and $\hat{\nu}$ are defined through the following power laws~\cite{Huang:1987sm} (in the limit $t\rightarrow 0^{-}$):
\begin{eqnarray}
\label{alpha_exponent}
C_V &=&\mathcal{C_{-}} \: |t|^{-\hat{\alpha}}, \\
\label{beta_exponent}
1-\frac{n_{B}}{n_{B,c}} &=&\mathcal{N_-} \: |t|^{\hat{\beta}}, \\
\label{gamma_exponent}
k_{T} &=&\mathcal{K}_{-} \: |t|^{-\hat{\gamma}}, \\
\label{nu_exponent}
\xi &=&\Xi_{-}\:|t|^{-\hat{\nu}},
\end{eqnarray}
where $C_V$, $n_{B,c}$, $k_{T}$ and $\xi$ respectively denote the specific heat, the critical baryon density, the isothermal compressibility and the correlation length, while $\mathcal{C_{-}}$, $\mathcal{N_-}$, $\mathcal{K}_{-}$ and $\Xi_{-}$ are the corresponding amplitudes from the hadronic side ($T <T_{\mbox{\tiny{c}}}$). Note that eq.~(\ref{beta_exponent}) is an equation of state, relating baryon density $n_{B}$ and pressure $p$ near the critical point.
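As a simple numerical illustration of how an exponent and the corresponding amplitude can be read off from data of the form of eq.~(\ref{alpha_exponent}), the following Python sketch performs a log-log fit on synthetic specific-heat values; the numbers are arbitrary and serve only to show the procedure, not to represent any actual dataset.
\begin{verbatim}
import numpy as np

# Synthetic data obeying C_V = C_minus * |t|^(-alpha_hat) on the hadronic side (t < 0)
t = -np.logspace(-4, -2, 40)                 # reduced temperature
C_minus_true, alpha_true = 2.7, 0.5
C_V = C_minus_true * np.abs(t) ** (-alpha_true)

# Log-log fit: ln C_V = ln C_minus - alpha_hat * ln|t|
slope, intercept = np.polyfit(np.log(np.abs(t)), np.log(C_V), 1)
alpha_hat, C_minus = -slope, np.exp(intercept)
print(alpha_hat, C_minus)                    # recovers 0.5 and 2.7
\end{verbatim}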
\subsection{Formulation of the model}
\label{subsec:formulation_of_the_model}
We follow ref.~\cite{Satz:1978us} to extract the amplitudes and critical exponents within the statistical bootstrap model. We first discuss the case of vanishing baryonic chemical potential, $\mu_B=0$. Consider an ideal gas of hadrons and all possible resonance states as non-interacting constituents: the partition function of this system can be written as~\cite{Beth:1937zz}
\begin{equation}
\mathcal{Z}(T,V)=\sum_{N=1}^\infty \frac{V^N}{N!}\prod_{i=1}^{N}\int \frac{d^3p_i}{(2\pi)^3}\: dm_{i}\:\rho(m_{i})\, e^{-E_{i}/T}
\label{pf}
\end{equation}
where $\rho(m)$ is the hadron spectrum included in the HRG model. In the simplest formulation of the model, which was discussed in ref.~\cite{Satz:1978us}, only pions were considered as the ``basic'' hadrons. More recently, however, it has become customary to include all the hadrons and resonances that have been detected experimentally up to some energy scale $M$ and take $\rho(m)=\sum_{j}\delta(m-m_j)$. Such a discrete mass spectrum leads to the physical hadron resonance gas model. In the physical HRG, if $g_i$ is the degeneracy of the $i$-th hadronic species, then for spin degrees of freedom the degeneracy factors turn out to be $g_i \sim m_{i}^2$~\cite{Castorina:2009de}. Thus, one sees that the spin multiplicity can already result in an unbounded increase in the number of resonances. The upshot of the $m^2$ dependence of resonance degeneracy is that the partition function of the physical resonance gas and all of its higher-order derivatives remain finite at $T_{\mbox{\tiny{c}}}$. Thus, the required degeneracy structure is absent in the physical resonance gas and hence it does not show critical behavior.
It turns out that the degeneracy structure required to show critical behavior is present in the Hagedorn density of states, which can be used to model the spectral density above $M$ in terms of a continuous distribution. Consider a density of states of the form
\begin{equation}
\label{states}
\rho(m)= \sum_{i} \left[ g_{i} \cdot \delta(m-m_{i})\right] +\theta(m-M)\rho_{\mbox{\tiny{H}}}(m),
\end{equation}
where the sum ranges over all hadrons species with mass $m_i \le M$, $g_i$ denotes the degeneracy of each species, while $\rho_{\mbox{\tiny{H}}}$ is the continuous contribution to the density of states. For our analysis, we included all hadronic states reported in ref.~\cite{Zyla:2020ssz} with masses not larger than $M=2.25$~GeV. It should be noted that choosing a different $M$ value in the same ballpark would not lead to significant differences e.g. in the equilibrium thermodynamic quantities in the low-temperature phase. The reason for this is that at low temperatures the thermodynamics is dominated by the lightest hadrons, and including or not including the contribution from some discrete heavy states does not have significant impact on the equation of state at low $T$.
Note that, in the simplest possible formulation of the model, the discrete part of the spectrum could include only pions, and one could model all the states of the spectrum above the two-pion threshold (setting $M=2m_\pi$) in terms of a Hagedorn density of states:
\begin{equation}
\label{pion_states}
\rho_{\mbox{\tiny{simplest}}}(m)=g_{\pi} \cdot \delta(m-m_{\pi})+\theta(m-2m_\pi)\rho_{\mbox{\tiny{H}}}(m)
\end{equation}
where $g_\pi$ denotes the pion degeneracy, which is equal to $3$. While such a picture is clearly a very crude model of the hadron spectrum, it still captures some interesting finite-temperature features, at least at a qualitative level, and is useful to highlight some general consequences for the thermodynamics and transport properties.
The logarithm of the partition function~(\ref{pf}) with the density of states in eq.~(\ref{states}) is written as
\begin{equation}
\label{ln_Z_as_sum}
\ln\mathcal{Z}(T,V)=\ln\mathcal{Z}_{\mbox{\tiny{discrete}}}(T,V)+\ln\mathcal{Z}_{\mbox{\tiny{H}}}(T,V)
\end{equation}
in which the first summand on the right-hand side, which does not depend on the \emph{Ansatz} for the continuous part of the density of states, encodes the contribution from a gas of non-interacting hadrons in the discrete part of the spectrum (i.e. hadrons whose masses are not larger than $M$). In particular, the contribution to $\ln\mathcal{Z}_{\mbox{\tiny{discrete}}}$ due to pions can be written in the form
\begin{equation}
\label{pion_contribution_to_ln_Z}
\ln\mathcal{Z}_{\pi}(T,V)=-g_{\pi}V\int \frac{d^3p}{(2\pi)^{3}}\ln\left[1-\exp \left( -\frac{\sqrt{p^2+m_\pi^2}}{T} \right) \right]=\frac{g_{\pi}m_\pi^2 TV}{2\pi^2}\sum_{n=1}^\infty \frac{K_{2}(n m_\pi /T)}{n^2},
\end{equation}
where $K_{n}(z)$ is a modified Bessel function of the second kind of order $n$. For large real values of its argument, one has
\begin{equation}
\label{KN_at_large_argument}
K_{n}(z) = \sqrt{\frac{\pi}{2z}}\:e^{-z}\left[1+\mathcal{O}(z^{-1})\right] .
\end{equation}
The contributions to $\ln\mathcal{Z}_{\mbox{\tiny{discrete}}}$ from the other hadron species in the discrete part of the spectrum can be derived in a similar way, and one obtains
\begin{equation}
\label{discrete_contribution_to_ln_Z}
\ln\mathcal{Z}_{\mbox{\tiny{discrete}}}(T,V)=\sum_i \frac{g_{i}m_i^2 TV}{2\pi^2}\sum_{n=1}^\infty (-\eta_i)^{n+1}\frac{K_{2}(n m_i /T)}{n^2},
\end{equation}
where the sum over $i$ ranges over all hadrons with mass $m_i \le M$, as in eq.~(\ref{states}), and $\eta_i=-1$ for bosons, while $\eta_i=1$ for fermions.
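A minimal numerical sketch of eq.~(\ref{discrete_contribution_to_ln_Z}) is shown below; the truncated list of hadrons and degeneracies is purely illustrative, and a handful of terms of the series suffices because of the exponential decay of the Bessel functions.
\begin{verbatim}
import numpy as np
from scipy.special import kn   # modified Bessel function of the second kind, integer order

def ln_Z_discrete_density(T, hadrons, n_terms=10):
    """ln Z_discrete / V in GeV^3 (masses and T in GeV).

    `hadrons` is a list of (mass, degeneracy, eta), eta = -1 (+1) for bosons (fermions)."""
    total = 0.0
    for m, g, eta in hadrons:
        series = sum((-eta) ** (n + 1) * kn(2, n * m / T) / n ** 2
                     for n in range(1, n_terms + 1))
        total += g * m ** 2 * T / (2.0 * np.pi ** 2) * series
    return total

# Illustrative truncated spectrum: pions, kaons and nucleons (masses in GeV).
hadrons = [(0.140, 3, -1), (0.494, 4, -1), (0.938, 8, +1)]
print(ln_Z_discrete_density(0.150, hadrons))   # equals p/T for this truncated list at T = 150 MeV
\end{verbatim}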
The second summand appearing on the right-hand side of eq.~(\ref{ln_Z_as_sum}) represents the contribution due to the continuous part of the spectrum:
\begin{equation}
\ln\mathcal{Z}_{\mbox{\tiny{H}}}(T,V)=V\Phi_{B}(T)
\label{pf1}
\end{equation}
with
\begin{equation}
\Phi_{B}(T)=\int_{M}^{\infty}dm\: \rho_{\mbox{\tiny{H}}}(m)\int \frac{d^{3}p}{(2\pi)^3}e^{-\sqrt{p^2+m^2}/T}
\end{equation}
where, as above, $M$ is the threshold separating the discrete (for $m\le M$) and the continuous (for $m > M$) parts of the spectrum. Performing the momentum integration, one obtains
\begin{equation}
\Phi_{B}(T)=\frac{T}{2\pi^2}\int_{M}^{\infty}dm\:m^2 \rho_{\mbox{\tiny{H}}}(m) K_{2}(m/T).
\end{equation}
Using eq.~(\ref{KN_at_large_argument}), for $m/T\gg1$ one gets
\begin{equation}
\Phi_{B}(T)=\left(\frac{T}{2\pi}\right)^{3/2}\int_{M}^{\infty}dm\:m^{3/2} \rho_{\mbox{\tiny{H}}}(m) e^{-m/T}.
\end{equation}
All the thermodynamic functions can be readily obtained from the partition function in eq.~(\ref{pf1}) once the continuous part of the mass spectrum $\rho(m)$ is specified.
In the statistical bootstrap model (see refs.~\cite{Tawfik:2014eba, Rafelski:2015cxa} for reviews), hadronic matter at high temperature is dominated by formation of resonances whose number grows exponentially. The bootstrap condition leads to a solution for the mass spectrum of the form~\cite{Hagedorn:1965st, Frautschi:1971ij}
\begin{equation}
\rho_{\mbox{\tiny{H}}}(m)=A\:m^{a}\:e^{bm}
\label{hagspec}
\end{equation}
where $A$, $a$, and $b$ are constant parameters. In particular, the parameter $A$ provides the normalization of the resonance contributions relative to that of the pions. The parameter $a$ specifies the nature of the degeneracy of high-mass resonances, and also determines the critical behavior of hadronic matter. One possible solution of the bootstrap condition was derived in ref.~\cite{Nahm:1972zc}, yielding $a \simeq -3$. Finally, the parameter $b$ turns out to be the inverse of the Hagedorn temperature at which thermodynamic functions show singular behavior.
Restoring the dependence on the fugacity $z_B=\exp(\mu_B/T)$, the contribution to the partition function associated with the continuous spectrum~(\ref{hagspec}) can be written as
\begin{equation}
\ln\mathcal{Z}_{\mbox{\tiny{H}}}(T,V,z_B)=AVz_B\left(\frac{T}{2\pi}\right)^{3/2}\int_{M}^{\infty}dm\:m^{a+3/2} e^{(b-1/T) m}.
\label{pf4}
\end{equation}
At this point, we should stress an important observation: in order to obtain eq.~(\ref{pf4}), in which $z_B$ is factorized on the right-hand side, it has been implicitly assumed that $b$ is \emph{independent} from $\mu_B$, i.e. that the critical temperature $T_{\mbox{\tiny{c}}}$ does not depend on the fugacity $z_B$. Strictly speaking, however, this is not fully justified: as has been discussed in detail in the literature~\cite{Kapoyannis:1997qj,Kapoyannis:1998py,Kapoyannis:1999ar}, in the presence of arbitrary fugacity $z_B$, the bootstrap equation takes the form
\begin{equation}
\label{generalized_bootstrap_equation}
\phi(T,z_B)=2G(T,z_B)-\exp[G(T,z_B)]+1,
\end{equation}
where $\phi$ is an {\it{input function}}, receiving contributions from the physical hadrons of the theory, while $G$, which encodes their interactions in terms of the bootstrap picture (whereby strongly interacting systems of particles form larger clusters of particles, which in turn form larger clusters, etc.) is the Laplace transform of the mass spectrum. Eq.~(\ref{generalized_bootstrap_equation}) has a square-root branch point singularity for $\phi=2\ln 2-1$ (or, equivalently, for $G=\ln2$), which defines the boundary of the hadronic phase in this model through a non-trivial relation between $T$ and $z_B$. In other words, strictly speaking, the critical temperature $T_c$ is a non-trivial function of $z_B$. In eq.~(\ref{pf4}) and in the rest of this work, however, we assume that the dependence of $T_c$ on the fugacity is mild, i.e. we work in the approximation in which $b=1/T_c$ is constant. While this simplification may appear to be crude, it is worth noting that during the past few years lattice QCD calculations have conclusively proven that the change of state between the hadronic, broken-chiral-symmetry phase and the deconfined, chirally symmetric phase at zero chemical potential is a crossover~\cite{Bernard:2004je,Cheng:2006qk,Aoki:2006we}, and that at small but finite values of $\mu_B$ the curvature of the line describing the crossover in the QCD phase diagram is very small~\cite{Kaczmarek:2011zz, Endrodi:2011gv, Cea:2014xva, Bonati:2014rfa, HotQCD:2018pds}. As a consequence, it is not unreasonable to expect that, even within the approximation of a critical temperature independent from the chemical potential, the statistical bootstrap model may still capture the physics close to a possible critical endpoint of QCD at finite chemical potential. Assuming $T_{\mbox{\tiny{c}}}$ to be approximately independent from $\mu_B$ simplifies the expression of the partition function, and allows one to get more analytical insight into the physical quantities of interest. In a nutshell, the fact that $z_B$ factors out in the expression of the logarithm of $\mathcal{Z}_{\mbox{\tiny{H}}}$ implies that the dependence on the chemical potential in this model is somewhat ``trivial''. While the validity of this approximation at large values of $\mu_B$ is not obvious, lattice results lead us to think that its use at least for small and intermediate values of $\mu_B$ should be a reasonable approximation.
With these caveats in mind, in the next section we shall calculate the critical exponents by taking appropriate derivatives of the partition function~(\ref{pf4}) and then taking the $T\to 1/b$ limit.
\subsection{Critical exponents in the statistical bootstrap model}
\label{subsec:critical_exponents_in_SBM}
With the change of variable $w=m(1/T-b)$, we get
\begin{eqnarray}
\ln\mathcal{Z}_{\mbox{\tiny{H}}}(T,V,z_B) &=& AVz_B\left(\frac{T}{2\pi}\right)^{3/2} (1/T-b)^{-(a+5/2)}\int_{M(1/T-b)}^{\infty}dw\:w^{a+3/2} \: e^{-w} \nonumber \\
&=& AVz_B\left(\frac{T}{2\pi}\right)^{3/2} (1/T-b)^{-(a+5/2)} \Gamma\left( a+\frac{5}{2} ,M (1/T-b)\right), \label{pf3}
\end{eqnarray}
having expressed the integral in terms of the upper incomplete $\Gamma$ function. The energy density can then be written as
\begin{equation}
\varepsilon=\frac{T^2}{V}\frac{\partial \ln \mathcal{Z}_{\mbox{\tiny{H}}}}{\partial T}
\end{equation}
and for $T\to 1/b$ one finds that
\begin{equation}
\label{en}
\varepsilon \simeq \left\{
\begin{array}{ll}
\frac{Az_B}{(2\pi b)^{3/2}}\:\Gamma\left(a+\frac{7}{2}\right)\:(1/T-b)^{-(a+\frac{7}{2})}, & \hspace{0.5cm} \mbox{for } a>-7/2\\
-\frac{Az_B}{(2\pi b)^{3/2}}\:\ln[M(1/T-b)], & \hspace{0.5cm} \mbox{for } a=-7/2\\
\text{constant}, & \hspace{0.5cm} \mbox{for } a<-7/2
\end{array}
\right. .
\end{equation}
Hence for $a<-7/2$ the energy density remains finite (and approaches some critical value $\varepsilon_{\mbox{\tiny{c}}}$) as $T\rightarrow T_{\mbox{\tiny{H}}}$, implying that the system cannot exist in this state for $\varepsilon>\varepsilon_{\mbox{\tiny{c}}}$ and suggesting that a phase transition must take place.
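To make this statement quantitative, the short Python sketch below evaluates $\ln\mathcal{Z}_{\mbox{\tiny{H}}}/V$ of eq.~(\ref{pf4}) by direct numerical integration and the energy density by a finite difference at fixed fugacity; the parameter values ($a=-4$, $b=1/T_{\mbox{\tiny{c}}}$, the normalization $A$ and the threshold $M$) are representative choices, matching those used in section~\ref{sec:results}, and the finite-difference step is arbitrary.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

a, b, A, M = -4.0, 6.25, 0.06144, 2 * 0.140    # b = 1/T_c in GeV^-1, A in GeV^3, M ~ 2 m_pi

def ln_Z_H_density(T, z_B=1.0):
    """ln Z_H / V from eq. (pf4), by direct numerical integration (units of GeV^3)."""
    integrand = lambda m: m ** (a + 1.5) * np.exp((b - 1.0 / T) * m)
    integral, _ = quad(integrand, M, np.inf)
    return A * z_B * (T / (2.0 * np.pi)) ** 1.5 * integral

def energy_density(T, z_B=1.0, dT=1.0e-5):
    """epsilon = (T^2/V) d ln Z_H / dT at fixed fugacity, via a central finite difference."""
    return T ** 2 * (ln_Z_H_density(T + dT, z_B) - ln_Z_H_density(T - dT, z_B)) / (2.0 * dT)

# For a = -4 the energy density saturates to a finite value as T -> T_c = 1/b from below.
for T in (0.150, 0.158, 0.1599):
    print(T, energy_density(T))
\end{verbatim}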
The specific heat at constant volume can then be written as
\begin{equation}
C_V=\frac{2\varepsilon}{T} +\frac{T^2}{V} \frac{\partial^2}{\partial T^2} \ln \mathcal{Z}_{\mbox{\tiny{H}}}
\end{equation}
and for $T\to 1/b$ one gets
\begin{equation}
\label{cv}
C_V \simeq \left\{
\begin{array}{ll}
\frac{Ab^2z_B}{(2\pi b)^{3/2}}\:\Gamma\left(a+\frac{9}{2}\right)\:(1/T-b)^{-(a+9/2)}, & \hspace{0.5cm} \mbox{for } a>-9/2\\
-\frac{A b^2z_B}{(2\pi b)^{3/2}}\ln[M(1/T-b)], & \hspace{0.5cm} \mbox{for } a=-9/2\\
\text{constant}, & \hspace{0.5cm} \mbox{for } a<-9/2
\end{array}
\right. .
\end{equation}
Comparing eq.~(\ref{cv}) with eq.~(\ref{alpha_exponent}) we deduce the amplitude $\mathcal{C_-}$ as
\begin{equation}
\mathcal{C_-}=\left\{
\begin{array}{ll}
\frac{Ab^2z_B}{(2\pi b)^{3/2}}\:\Gamma\left(a+\frac{9}{2}\right), &\hspace{0.5cm} \text{for} \:a \geqslant-9/2\\
\frac{Ab^2z_B}{(2\pi b)^{3/2}}, & \hspace{0.5cm} \text{for} \:a <-9/2
\end{array}
\right.
\end{equation}
while the critical exponent $\hat{\alpha}$ reads
\begin{equation}
\hat{\alpha}=\left\{
\begin{array}{ll}
a+\frac{9}{2}, &\hspace{0.5cm} \text{for} \:a \geqslant-9/2\\
0, & \hspace{0.5cm} \text{for} \:a <-9/2
\end{array}
\right. .
\end{equation}
The baryon number density $n_B$ can be evaluated as
\begin{equation}
n_{B}=\frac{z_B}{V}\frac{\partial}{\partial z_B} \ln\mathcal{Z}_{\mbox{\tiny{H}}} =Az_B\left(\frac{T}{2\pi}\right)^{3/2}\:(1/T-b)^{-(a+5/2)}\:\Gamma\left(a+\frac{5}{2},M(1/T-b)\right),
\end{equation}
hence for $T$ close to $1/b$ we get the critical density as
\begin{equation}
n_{B,\mbox{\tiny{c}}} \simeq \left\{
\begin{array}{ll}
\frac{Az_B}{(2\pi b)^{3/2}}\:\Gamma\left(a+\frac{5}{2}\right)\:(1/T-b)^{-(a+5/2)}, & \hspace{0.5cm} \text{for } a>-5/2\\
-\frac{Az_B}{(2\pi b)^{3/2}}\:\ln[M(1/T-b)], & \hspace{0.5cm} \text{for } a=-5/2\\
\text{constant}, & \hspace{0.5cm} \text{for } a<-5/2
\end{array}
\right. .
\end{equation}
The inverse of the isothermal compressibility is defined as
\begin{equation}
k_{T}^{-1}=-V\:\left(\frac{\partial p}{\partial V}\right)_{T}
\end{equation}
and for a non-interacting resonance gas it takes the following, very simple form:
\begin{equation}
k_{T}^{-1}=n_{B}T.
\end{equation}
For temperatures close to $1/b$, one obtains,
\begin{equation}
k_{T}^{-1} \simeq \left\{
\begin{array}{ll}
\frac{Az_B}{b(2\pi b)^{3/2}}\:\Gamma\left(a+\frac{5}{2}\right)\:(1/T-b)^{-(a+5/2)}, & \hspace{0.5cm} \text{for } a>-5/2\\
-\frac{Az_B}{b(2\pi b)^{3/2}}\:\ln[M(1/T-b)], & \hspace{0.5cm} \text{for } a=-5/2\\
\text{constant}, & \hspace{0.5cm} \text{for } a<-5/2
\label{kt}
\end{array}
\right. ,
\end{equation}
from which it is straightforward to deduce the amplitude $\mathcal{K_-}$
\begin{equation}
\mathcal{K_-}=\left\{
\begin{array}{ll}
\bigg[\frac{Az_B}{b(2\pi b)^{3/2}}\:\Gamma\left(a+\frac{5}{2}\right)\bigg]^{-1}, &\hspace{0.5cm} \text{for} \:a >-5/2\\
\bigg[\frac{Az_B}{b(2\pi b)^{3/2}}\bigg]^{-1}, & \hspace{0.5cm} \text{for} \:a <-5/2
\end{array}
\right.
\end{equation}
and the critical exponent $\hat{\gamma}$ as
\begin{equation}
\hat{\gamma}=\left\{
\begin{array}{ll}
-\left(a+\frac{5}{2}\right), &\hspace{0.5cm} \text{for} \:a >-5/2\\
0, & \hspace{0.5cm} \text{for} \:a <-5/2
\end{array}
\right. .
\end{equation}
We note that a continuous density of states with $a<-7/2$ makes the energy and entropy densities finite, while all higher-order derivatives diverge near $T_{\mbox{\tiny{H}}}$. In our analysis of transport coefficients we shall consider the $a=-4$ case~\cite{Antoniou:2002xq,Castorina:2009de,Fiore:1984yu} which leads to normal behavior of the hadronic system near the boundaries of the quark-hadron phase transition line, since it does not allow the energy density to become infinite even for pointlike particles.
At this point, an important observation is in order. Hadrons are \emph{not} elementary, pointlike particles: rather, they arise as color-singlet bound states of the strong interaction, and, for this reason, they can be associated with a characteristic finite size, of the order of the fm. As a consequence of the very nature of hadrons as complex bound states of relativistic, strongly interacting constituents (which defies a description in terms of sufficiently simple phenomenological models), the measurement and even the definition of hadron sizes are, in general, non-trivial (see, for example, ref.~\cite{Pohl:2010zza} for an experimental determination of the radius of a well-known hadron: the proton). It is worth noting that, if corrections related to the finiteness of the particles' physical size are taken into account in our model, the restriction on the admissible values of $a$ becomes milder, in the sense that finite-particle-size corrections make some of the divergent quantities obtained in the pointlike approximation finite. The fact that finite-particle-size effects can have even a qualitative impact on the details of the description of the thermodynamics of the confining phase of QCD is hardly surprising, as it is well known that they have a significant role in fits of particle multiplicities produced in heavy-ion collisions~\cite{Rischke:1991ke,Yen:1998pa, Yen:1997rv}, and even in the interpretation of non-perturbative theoretical predictions from lattice simulations~\cite{Alba:2016fku}. For this reason, in a more complete discussion, \emph{a priori} one should not discard the $a$ values that lead to unphysical infinities for a system of pointlike particles. However, a fully systematic discussion of finite-particle-size effects would involve a non-trivial amount of additional technicalities (and a certain degree of arbitrariness in the way to define these effects), and lies beyond the scope of our present work. For this reason, in the following we restrict our attention to the simpler, idealized case of pointlike particles, which is nevertheless expected to provide a reasonable approximation of the physics that is studied in current experiments, especially in view of the fact that the typical sizes of the systems produced in nuclear collisions are significantly larger than hadron sizes~\cite{Kolb:2003dz}, and which does not introduce additional parameters in the description.
\section{Transport coefficients near the critical point}
\label{sec:transport_coefficients}
Approaching the critical point, the thermodynamic quantities relevant for the computation of transport coefficients are: energy density ($\varepsilon$), baryon number density ($n_B$), specific heats ($C_V$ and $C_p$), isothermal compressibility ($k_T$), speed of sound ($C_s$) and correlation length ($\xi$). A set of \emph{Ans\"atze} for the transport coefficients near the critical point can be written in terms of thermodynamic quantities as~\cite{Antoniou:2016ikh}
\begin{eqnarray}
\label{eta_visc_Tc}
\frac{\eta}{s}&=&\frac{T}{C_s\xi^2s}\mathcal{F_{\eta}}\left(\frac{C_p}{C_V}\right),\\
\label{zeta_visc_Tc}
\frac{\zeta}{s}&=&\frac{hC_s\xi}{s}\mathcal{F_{\zeta}}\left(\frac{C_p}{C_V}\right).
\end{eqnarray}
Near the critical point, the correlation length $\xi$ is the only relevant length scale. Further, longitudinal perturbations can be assumed to be those of the non-equilibrium modes near $T_{\mbox{\tiny{c}}}$. A particularly simple form of the functions $\mathcal{F}_{\eta,\zeta}$, namely $\mathcal{F}_{\eta}(C_p/C_V)=f_{\eta}\times (C_p/C_V)$ and $\mathcal{F}_{\zeta}(C_p/C_V)=f_{\zeta}\times (C_p/C_V)$, can be obtained from a perturbative treatment of conventional fluids. Here, $f_\eta$ and $f_\zeta$ are non-universal dimensionless constants and depend on the microscopic length scale of the system. Substituting the singular part of the thermodynamic quantities from eqs.~(\ref{alpha_exponent})--(\ref{nu_exponent}) into eqs.~(\ref{eta_visc_Tc}) and~(\ref{zeta_visc_Tc}) we get, as $t \to 0^-$ (i.e. $T \to T_{\mbox{\tiny{c}}}$ from the hadronic side)
\begin{eqnarray}
\left(\frac{\eta}{s}\right)_{-} &=& \frac{f_{\eta} \mathcal{K}_{-} \lambda_{c}}{\Xi_{-}^2 s_c}\sqrt{\frac{{T_{\mbox{\tiny{c}}}}^3 h_{\mbox{\tiny{c}}}}{\mathcal{C}_{-}} \left(1+\frac{\mathcal{C}_{-}|t|^{\hat{\gamma}-\hat{\alpha}}}{T_{\mbox{\tiny{c}}} \lambda_{c}^{2}\mathcal{K}_{-}}\right)^{-1}}\:|t|^{-\hat{\gamma}+2\hat{\nu}+\hat{\alpha}/2}, \label{etatc} \\
\left(\frac{\zeta}{s}\right)_{-} &=& \frac{f_{\zeta} \mathcal{K}_{-} \Xi_{-}\lambda_{c}^3}{s_c}\sqrt{\frac{{T_{\mbox{\tiny{c}}}}^3 h_{\mbox{\tiny{c}}}}{\mathcal{C}_{-}^3} \left(1+\frac{\mathcal{C}_{-}|t|^{\hat{\gamma}-\hat{\alpha}}}{T_{\mbox{\tiny{c}}} \lambda_{c}^{2}\mathcal{K}_{-}}\right)^{-3}}\:|t|^{-\hat{\gamma}-\hat{\nu}+3\hat{\alpha}/2}.
\label{zetatc}
\end{eqnarray}
Here, $h_{\mbox{\tiny{c}}}$ and $s_c$ respectively denote the enthalpy and entropy densities at $T_{\mbox{\tiny{c}}}$, both of which are finite when one sets $a=-4$ in the Hagedorn density of states, while $\lambda_c=(\partial p/\partial T)_{V}$ at $T=T_{\mbox{\tiny{c}}}$. The amplitudes $f_\eta$ and $f_\zeta$ are free parameters, which can be fixed by imposing some constraint on the viscosity coefficients near $T_{\mbox{\tiny{c}}}$. For instance, as we already mentioned, the gauge-string duality~\cite{Maldacena:1997re,Gubser:1998bc,Witten:1998qj} suggests a universal lower bound $1/(4\pi)$ for the $\eta/s$ ratio~\cite{Kovtun:2004de}. Similar constraints can be imposed on the $\zeta/s$ ratio, too.
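As a simple bookkeeping aid, the sketch below collects the powers of $|t|$ that appear explicitly in eqs.~(\ref{etatc}) and~(\ref{zetatc}) for a given value of $a$; since the correlation-length exponent $\hat{\nu}$ is not fixed within the present model, it is treated as an external input, and the value used in the example is merely illustrative.
\begin{verbatim}
def sbm_exponents(a):
    """alpha_hat and gamma_hat of the statistical bootstrap model (pointlike particles)."""
    alpha_hat = a + 4.5 if a >= -4.5 else 0.0
    gamma_hat = -(a + 2.5) if a > -2.5 else 0.0
    return alpha_hat, gamma_hat

def viscosity_exponents(a, nu_hat):
    """Powers of |t| multiplying the square-root factors in eqs. (etatc) and (zetatc)."""
    alpha_hat, gamma_hat = sbm_exponents(a)
    x_eta = -gamma_hat + 2.0 * nu_hat + 0.5 * alpha_hat
    x_zeta = -gamma_hat - nu_hat + 1.5 * alpha_hat
    return x_eta, x_zeta

# Example: a = -4, with an illustrative correlation-length exponent nu_hat = 0.63
print(viscosity_exponents(-4.0, 0.63))
\end{verbatim}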
\section{Results}
\label{sec:results}
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
density of states & $a$ & $b~[\text{GeV}^{-1}]$ & $A_{1}~[\text{GeV}^{3}]$ \\ \hline
$\rho_{1}$ & $-4$ & $6.25$ & $0.06144$ \\ \hline
\end{tabular}
\caption{Parameters of the continuous part of the density of states, taken from refs.~\cite{Castorina:2009de, NoronhaHostler:2012ug}. According to the discussion in section~\ref{sec:SBM}, the parameter $b$ is set to the inverse of $T_{\mbox{\tiny{c}}}$, whose value is $T_{\mbox{\tiny{c}}}=0.160$~GeV. Note that the value of $A_1$ chosen for the $\rho_1$ model corresponds to $A_1=15 T_{\mbox{\tiny{c}}}^3$, as discussed in the text.}
\label{tbl_hag}
\end{center}
\end{table}
\begin{figure}[h!]
\begin{center}
\centerline{\includegraphics[width=0.48\textwidth]{pre.pdf} \hfill \includegraphics[width=0.48\textwidth]{energy.pdf}}
\centerline{\includegraphics[width=0.48\textwidth]{e3p.pdf} \hfill \includegraphics[width=0.48\textwidth]{entropy.pdf}}
\caption{Equilibrium thermodynamic quantities for different types of continuous resonance spectrum distributions, as functions of the temperature $T$, in MeV. The four panels show the pressure $p$ (top left), the energy density $\varepsilon$ (top right), and the trace of the energy-momentum tensor $\Delta=\varepsilon-3p$ (bottom left) in units of $T^4$, and the entropy density $s$ (bottom right) in units of $T^3$. The solid curves correspond to $\rho_1(m)=A_1\:m^{a}\exp(bm)$ with $a=-4$, whereas the dashed curves are obtained for $a=-17/4$. The baryon chemical potential is assumed to be $\mu_B=0$ for the blue curves, while the red curves are obtained at $\mu_B=220$~MeV. The parameter $b$ is set to the inverse of the critical Hagedorn temperature, as discussed in the paragraph after eq.~(\ref{hagspec}). In addition, we also plot the curves representing the contribution due to an ideal pion gas (dotted green curves), i.e. to the lightest states in the {\it{discrete}} part of the spectrum in eq.~(\ref{states}), which does not depend on the functional form that is assumed to model the {\it{continuous}} part of the spectrum.}
\label{thermo}
\end{center}
\end{figure}
Before discussing the behavior of viscosity coefficients near $T_{\mbox{\tiny{c}}}$, it is instructive to make a few remarks about the thermodynamics of the model. In table~\ref{tbl_hag}, we report the parameters of the continuous part of the density of states, taken from refs.~\cite{Castorina:2009de, NoronhaHostler:2012ug}. Note that, as discussed in those references, the continuous part of the density of states is assumed to start at mass values corresponding to the pion-pair threshold, and that, in addition to the continuous part, the density of states also includes a $\delta$-like contribution at the pion mass. From the density of states constructed using the parameters in table~\ref{tbl_hag}, one obtains the equilibrium thermodynamic quantities shown in fig.~\ref{thermo}, namely the pressure ($p$), the energy density ($\varepsilon$) and the entropy density ($s$), in units of $T^4$ (for $p$ and $\varepsilon$) and $T^3$ (for $s$). We have also plotted the trace of the energy-momentum tensor $\Delta$ in units of the fourth power of the temperature, $\Delta/T^4=(\varepsilon-3p)/T^4$. The solid blue curves correspond to a continuous density of states of the form $\rho_1(m)=A_1\:m^{-4}\exp (b m)$ at vanishing chemical potential, whereas the solid red ones are obtained at $\mu_B=220$~MeV. To give an idea of the dependence of equilibrium thermodynamic quantities on $a$, we also show the results that one would obtain for a different value of $a$, i.e. for a spectral density with continuous part $\rho_1(m)=A_1\:m^{-17/4}\exp (b m)$, which are shown, with the same color code, by the dashed curves. One immediately realizes that, as compared with the solid curves, the dashed ones exhibit only small quantitative differences. The reason for the choice $a=-17/4$ stems from the fact that, as was discussed in subsection~\ref{subsec:critical_exponents_in_SBM}, the specific heat exhibits power-law behavior only if $a$ is larger than $-9/2$. On the other hand, we also remarked that $a$ is constrained to be less than $-7/2$, because in this range the energy density remains finite when $T$ tends to the critical temperature. This leaves us with $(-9/2,-7/2)$ as the most interesting interval of values for $a$. Thus, $a=-17/4$ is a value which is exactly equidistant from our choice $a=-4$ and the lower end of the interval of interesting values, and as such is expected to reveal some information on the dependence of our results on the choice of $a$. As the plots in fig.~\ref{thermo} clearly show, this dependence is very mild, indicating that our predictions for these quantities are robust (at least within the interval of $a$ values, i.e. $-9/2 < a < -7/2$).
Finally, the dotted green curves show the contributions from the ideal pion gas, i.e. the lightest hadrons included in the discrete and model-independent part of the density of states in eq.~(\ref{states}), which can be directly derived from eq.~(\ref{pion_contribution_to_ln_Z}): for example, the pion-gas contribution to the pressure (that one can denote as $p_\pi$) can be written as
\begin{equation}
\label{pion_contribution_to_pressure}
p_\pi=\frac{T}{V}\ln\mathcal{Z}_{\pi}=\frac{g_{\pi}}{2\pi^2}m_\pi^2 T^2 \sum_{n=1}^\infty \frac{K_{2}(n m_\pi /T)}{n^2}.
\end{equation}
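As an aside, eq.~(\ref{pion_contribution_to_pressure}) is straightforward to evaluate numerically. The short sketch below (not taken from the paper) truncates the sum at a finite number of terms and assumes the standard values $g_\pi=3$ and $m_\pi\simeq 0.138$~GeV, which are not specified in this excerpt.
\begin{verbatim}
# Minimal numerical sketch of eq. (pion_contribution_to_pressure); g_pi and
# m_pi are standard assumed values, and the sum over n is truncated.
import numpy as np
from scipy.special import kn   # modified Bessel function of the second kind, K_n

g_pi, m_pi = 3.0, 0.138        # pion degeneracy and mass (GeV), assumed

def pion_pressure(T, n_max=20):
    """Ideal pion-gas pressure p_pi in GeV^4 at temperature T (GeV)."""
    n = np.arange(1, n_max + 1)
    return g_pi / (2.0 * np.pi**2) * m_pi**2 * T**2 * np.sum(kn(2, n * m_pi / T) / n**2)

for T in (0.100, 0.140, 0.160):
    print(T, pion_pressure(T) / T**4)   # dimensionless combination p_pi/T^4
\end{verbatim}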
It is known from comparison with lattice QCD results (as reviewed, for instance, in the recent ref.~\cite{Dexheimer:2020zzs}) that the hadron resonance gas model provides a very accurate description of the equation of state for all temperatures below $T_{\mbox{\tiny{c}}}$. The contribution to thermodynamics from the part of the hadronic spectrum that is modelled in terms of a continuous density of states becomes significant when the temperature is sufficiently large. Nevertheless, in the case of $\rho_1$ with $a=-4$ both the energy and the entropy densities remain finite for $T\to T_{\mbox{\tiny{c}}}^-$. This reflects the fact that, for $a=-4$, the second derivative of the partition function is divergent, but the first is not. In fact, setting $A_1=15T_{\mbox{\tiny{c}}}^3=0.06144$~GeV$^3$ corresponds to $\varepsilon_c/T_{\mbox{\tiny{c}}}^4\simeq 4$~\cite{Castorina:2009de}.
It is worth noting that the bootstrap model predicts the existence of a phase transition at the finite critical temperature $T_{\mbox{\tiny{c}}}$. This can be interpreted by saying that this phenomenological model, which provides a description for the thermodynamics of hadronic matter in rather simple terms (e.g. neglecting hadron-hadron interactions) and without reference to the microscopic QCD Lagrangian, is able to capture the existence of a finite temperature, above which hadrons cannot exist anymore. To draw an analogy with the description of physics at the electro-weak scale within and beyond the Standard Model, the statistical bootstrap model can be interpreted as an ``effective field theory'' describing the thermal properties of nuclear matter in terms of its ``low-energy degrees of freedom'' (i.e. those that manifest themselves at energy scales below the characteristic hadronic scale, $O(10^2)$~MeV), and its breakdown at a finite temperature $T_{\mbox{\tiny{c}}}$ hints at the existence of ``new physics'' above that scale. In this case, the ``new physics'' above that temperature is the quark-gluon plasma, whose existence could be argued (and reconciled with the bootstrap model~\cite{Hagedorn:1965st, Frautschi:1971ij}) after the introduction of QCD~\cite{Cabibbo:1975ig}. In this analogy, QCD plays the role of the ``more fundamental theory'', which holds up to higher energies (being, in fact, a renormalizable, asymptotically free and ultraviolet-complete theory) and at the same time reduces to the ``effective model'' at low energies, by predicting the existence of massive hadrons through the mechanisms of color confinement and dynamical chiral symmetry breaking~\cite{Kronfeld:2012uk}. One should remark that, despite the remarkable qualitative prediction of a finite maximal temperature at which hadrons exist, the bootstrap model does not capture all quantitative details of the change of phase between hadronic matter and the quark-gluon plasma: in particular, non-perturbative lattice calculations based on the QCD Lagrangian show that, for zero or nearly zero values of the baryonic chemical potential, this change of phase is actually an analytical crossover, rather than an actual phase transition (see refs.~\cite{Szabo:2014iqa, Ding:2015ona} and references therein). As a consequence, the statistical bootstrap model prediction (for $a=-4$ and at zero net baryon density) of a phase transition with critical exponents $\hat{\alpha}=1/2$, $\hat{\beta}=1$, and $\hat{\gamma}=0$ is disproven by lattice QCD. Still, the statistical bootstrap model remains a useful phenomenological model, in particular when studying regions of the phase diagram at large baryonic densities, where a critical endpoint might exist, and in which, as we already pointed out in section~\ref{sec:introduction}, lattice QCD calculations are hampered by particularly severe computational challenges.
\begin{figure}[h!]
\begin{center}
\centerline{\includegraphics[width=0.48\textwidth]{eta.pdf} \hfill \includegraphics[width=0.48\textwidth]{zeta.pdf}}
\caption{The left-hand-side panel shows a comparison of the prediction of our model for the shear viscosity to entropy density ratio with those from other models~\cite{Antoniou:2016ikh,Kovtun:2004de,FernandezFraile:2009mi,Denicol:2013nua}. Our predictions for $a=-4$, denoted by the solid blue line, are also compared with those for $a=-17/4$ (shown by the dotted blue curve, which is nearly indistinguishable from the former), and the difference $\delta_{\eta/s}$ is plotted in the inset figure. The right-hand-side panel shows the prediction of the statistical bootstrap model for the bulk viscosity to entropy density ratio (for $a=-4$ and for $a=-17/4$, and the difference between the two, denoted by $\delta_{\zeta/s}$ and displayed in the inset figure) and its comparison with other works~\cite{Monnai:2016kud,Monnai:2017ber,Antoniou:2016ikh}. Our results correspond to $f_\eta=0.5$ and $f_{\zeta}=0.85$, which are fixed by requiring consistency with other models in the vicinity of the critical point, as discussed in the text.}
\label{visc}
\end{center}
\end{figure}
Fig.~\ref{visc} shows the predictions for the shear and bulk viscosities near the QCD critical point based on eqs.~(\ref{etatc}) and~(\ref{zetatc}). We take the correlation length amplitude to be $\Xi_{-}=1$~fm and the estimate for the critical point location to be $(T_{\mbox{\tiny{c}}},\mu_{B,\mbox{\tiny{c}}})=(160\mbox{~MeV}, 220\mbox{~MeV})$~\cite{Antoniou:2008vv}. For $a=-4$ one can easily derive the critical exponents and amplitudes needed for the estimate of the viscosity coefficients near $T_{\mbox{\tiny{c}}}$. The critical exponents are not independent but are constrained by scaling laws. In particular, the exponents $\hat{\alpha}$ and $\hat{\nu}$ are related by the Josephson scaling law $\hat{\nu} d=2-\hat{\alpha}$, where $d$ is the number of space dimensions~\cite{Huang:1987sm}.
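For instance, with the value $\hat{\alpha}=1/2$ obtained above for $a=-4$, the Josephson law in $d=3$ spatial dimensions gives $\hat{\nu}=(2-\hat{\alpha})/3=1/2$.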
In the left-hand-side panel of fig.~\ref{visc}, the solid blue curve shows the shear viscosity to entropy density ratio within the statistical bootstrap model, with density of states specified by $\rho_1$, and with the $f_{\eta}$ parameter appearing on the right-hand side of eq.~(\ref{etatc}) set to $0.5$. Note that the choice of this amplitude value, which we made following the procedure discussed below, introduces some systematic uncertainties. On the other hand, to give an idea of the dependence of this prediction on the parameter $a$, we also present the prediction that one would obtain for $a=-17/4$ (again with $f_{\eta}=0.5$), which is displayed by the dotted blue curve, and which is nearly indistinguishable from the former. Hence, in the inset plot we show the quantity $\delta_{\eta/s}$, defined as the prediction of the bootstrap model for $\eta/s$ for $a=-17/4$, minus the one for $a=-4$: the relative difference between the predictions corresponding to the two $a$ values is at the per~mille level. We conclude that the dependence of our prediction on $a$ (within the range of values of $a$ of our interest) has a negligible impact on the uncertainties affecting the prediction for the $\eta/s$ ratio. In the larger plot in the figure, we also compare the critical solution for $\eta/s$ with those obtained from various other models: in particular, the dashed black curve corresponds to the conjectured universal lower bound $1/(4\pi)$ for this ratio, that was derived in ref.~\cite{Kovtun:2004de}, while the brown curve describes the result that one would obtain for a pion gas~\cite{FernandezFraile:2009mi}, and the magenta curve shows the result that can be derived assuming the medium to be described in terms of a hadronic mixture~\cite{Denicol:2013nua} at low temperature and density. Finally, the red curve corresponds to the same solution for the viscosity coefficients as in this work, but with the critical exponents of the three-dimensional Ising model and the amplitudes constrained by universality arguments~\cite{Antoniou:2016ikh}. In our case, we chose to fix the $f_{\eta}$ amplitude to optimize the consistency with the other predictions shown in the figure at reduced temperatures $-0.2 \lesssim t \lesssim -0.1$: in particular, in that temperature interval our choice yields an almost perfect consistency with the curve predicted in ref.~\cite{Denicol:2013nua}, which is intermediate among the predictions of those works. We should remark, however, that in general the choice of the $f_{\eta}$ amplitude remains a source of systematics that are difficult to quantify (and, hence, the value that we quote should be taken \emph{cum grano salis}). Note, however, that, as shown by the two curves derived in ref.~\cite{Antoniou:2016ikh} with two different choices for $f_{\eta}$, i.e. the solid and dash-dot-dotted red lines, the choice of the numerical value of the amplitude has a strong impact at temperatures far from the critical point, but this discrepancy is already reduced to small values for reduced temperatures between approximately $-0.2$ and $-0.1$. We note that the critical behavior of the statistical bootstrap model leads to a linear decrease in $\eta/s$ as a function of the temperature, and that at low temperatures the estimated magnitude of $\eta/s$ is in agreement with that of a pion gas, or of the hadron gas mixture.
Near $T_{\mbox{\tiny{c}}}$ there is a mild violation of the bound conjectured in ref.~\cite{Kovtun:2004de} (which could make it problematic to fix $f_{\eta}$ through some constraint in a region of temperatures very close to $T_{\mbox{\tiny{c}}}$). Such violation has also been noted for one of the solutions discussed in ref.~\cite{Antoniou:2016ikh}, shown by the dashed red curve in fig.~\ref{visc}.
The right-hand-side panel of fig.~\ref{visc} shows the bulk viscosity to entropy density ratio, in which one notes that the statistical bootstrap model predicts a rapid increase in the bulk viscosity as a function of the temperature. Also in this case, we present our results both for $a=-4$ (solid blue line) and for $a=-17/4$ (dotted blue curve), and the difference between the latter and the former, which is denoted by $\delta_{\zeta/s}$ and shown by the dashed blue line in the inset plot. In this case, the relative difference between the predictions corresponding to the two $a$ values is below $10^{-2}$, meaning that also for $\zeta/s$ the dependence on $a$ induces a very mild systematic uncertainty. Near $T_{\mbox{\tiny{c}}}$, our results, with the amplitude coefficient appearing in eq.~(\ref{zetatc}) fixed to $f_{\zeta}=0.85$ by requiring an approximate match with those of refs.~\cite{Monnai:2016kud,Monnai:2017ber} at $t \simeq -0.1$, are in remarkable agreement with those from that work (shown by the dashed green curve), where the bulk viscosity has been estimated under the assumption of a QCD critical point belonging to the dynamical universality class of the so-called H model~\cite{Hohenberg:1977ym}. Remarkably, this agreement between the two curves is observed for essentially all negative values $t \gtrsim -0.1$, which is non-trivial, as that is the region in which the $\zeta/s$ ratio grows rapidly to very large values. Nevertheless, also in this case the readers should be warned that there is no obvious method to fix the value of $f_{\zeta}$ in a unique, completely rigorous way from first principles, and the systematic uncertainties associated with any choice remain difficult to assess. For comparison, in the plot we also show the prediction for $\zeta/s$ from ref.~\cite{Antoniou:2016ikh}. Coming to the interpretation of fig.~\ref{visc}, we note that a large bulk viscosity should manifest itself in heavy-ion collisions through the decrease of the average transverse momentum of final-state hadrons. Moreover, due to the increase in entropy associated with the dissipation through large bulk viscosity, this effect should be accompanied by an increase in total multiplicity for final-state hadrons. The large bulk viscosity near the critical point would play a particularly important role in the elliptic flow measurement of the matter produced in the BES program.
Note that the features of the transport coefficients predicted by our model are only expected to hold close to $T_{\mbox{\tiny{c}}}$, and there is no reason to expect the curves plotted in fig.~\ref{visc} to be quantitatively accurate predictions at temperatures much smaller than the critical one. The reason for this was already discussed in ref.~\cite{Antoniou:2016ikh}, in which it was remarked that the extrapolation of power-law behavior beyond the critical region can be, at best, a crude approximation. Indeed, by definition, the critical exponents only capture the ``universal'' critical features of the system, not its full dynamics. Nevertheless, it is interesting to plot these quantities in a range of temperatures similar to the one that was used for the equilibrium thermodynamics quantities (for which, as we pointed out above, the predictions of our model are instead expected to extend to all temperatures below $T_{\mbox{\tiny{c}}}$), which allows one to highlight, in particular, the monotonically decreasing dependence of $\eta/s$ as a function of the temperature for $T\le T_{\mbox{\tiny{c}}}$, and the dramatic increase of $\zeta/s$ close to the critical point.
\section{Discussion and conclusions}
\label{sec:discussion_and_conclusions}
\subsection{Discussion}\label{subsec:discussion}
In this work we derived the predictions of the statistical bootstrap model for thermodynamic quantities and transport coefficients near the critical endpoint of QCD. While it is well known that equilibrium thermodynamic quantities at temperatures below the (pseudo-)critical one are described well in terms of a gas of non-interacting hadrons, when all experimentally observed hadronic states with masses up to approximately $2$~GeV~\cite{Vovchenko:2014pka, HotQCD:2014kol} are included, the introduction of a continuous, Hagedorn-like, density of states for heavier states in the spectrum leads to the manifestation of critical behavior, without substantially altering the predictions for the equation of state at low temperatures. Moreover, as we remarked, the phenomenological implications of the model do not depend on the precise value of $M$, which in our computation was set to $2.25$~GeV.
Even though the derivation of the critical exponents for this model is based on the assumption of a spectral density valid at zero chemical potential, and the dependence on $\mu_B$ of the logarithm of the partition function is simply encoded in a fugacity factor, we argued that it may still capture the correct physics close to a possible critical point at finite $\mu_B$. For a continuous density of states of the form $\rho(m)\sim m^{a}\:\exp \left (bm \right)$ with $a<-7/2$, the energy and entropy densities remain finite even for point-like hadrons, while all higher-order derivatives diverge near $T_{\mbox{\tiny{c}}}$. For $a=-4$ the energy density remains finite as $T\to T_{\mbox{\tiny{c}}}$, signalling the existence of a phase transition.
In passing, it is worth mentioning that a continuous spectral density of the form required for self-consistency in the statistical bootstrap model, eq.~(\ref{hagspec}), also arises if one models hadrons in terms of confining strings of glue (as was done, for example, in ref.~\cite{Isgur:1984bm}).
Next, we studied the critical behavior of shear and bulk viscosities within the statistical bootstrap model. Identifying the thermodynamic quantities whose singular parts would contribute to the viscosity coefficients it is possible to write down \emph{Ans\"atze} for the viscosity coefficients valid near the critical point. Using the \emph{Ans\"atze} in eqs.~(\ref{eta_visc_Tc}) and~(\ref{zeta_visc_Tc}) together with the singular part of the relevant thermodynamic quantities in eqs.~(\ref{alpha_exponent})--(\ref{nu_exponent}), one can obtain the dominating contributions for the viscosity coefficients in eqs.~(\ref{eta_visc_Tc}) and~(\ref{zeta_visc_Tc}). We found that the statistical bootstrap model predicts the shear viscosity to entropy density ratio $\eta/s$ to decrease quite rapidly near $T_{\mbox{\tiny{c}}}$. We observe that the magnitude of $\eta/s$ away from the critical point is in good agreement with the predictions of non-critical models, and that there is a mild violation of the $\eta/s\ge 1/(4\pi)$ bound~\cite{Kovtun:2004de} near $T_{\mbox{\tiny{c}}}$. It is worth emphasizing that this (slight) violation of the $\eta/s\ge 1/(4\pi)$ bound might be unphysical, i.e. an artifact of the model. In fact, it is also worth remarking that, while the conjecture of the $\eta/s\ge 1/(4\pi)$ bound was first derived in a holographic context~\cite{Kovtun:2004de} (and is expected to be saturated in strongly coupled gauge theories with a known gravity dual~\cite{Buchel:2003tz}, such as the $\mathcal{N}=4$ supersymmetric Yang-Mills theory~\cite{Policastro:2001yc}), its origin is, in fact, much more general, being related to the uncertainty principle of quantum systems. As for the $\zeta/s$ ratio, we found it to rise very rapidly near $T_{\mbox{\tiny{c}}}$, in remarkable agreement with refs.~\cite{Monnai:2016kud,Monnai:2017ber}.
The anomalous behavior of shear and bulk viscosity coefficients near the critical endpoint might be very important for heavy-ion collision experiments. In particular, an enhanced bulk viscosity should manifest itself in heavy-ion collisions through a decrease of the average transverse momentum of final-state hadrons, and a corresponding increase in entropy. This feature may be particularly important for the experimental search for the critical endpoint of QCD through the BES program.
It is interesting to discuss a comparison of our findings with other related works. In particular, a study similar to ours was recently reported in ref.~\cite{Rais:2019chb}, which also predicts the transport properties of hot QCD matter within a hadron resonance model, finding a very low shear viscosity to entropy density ratio near $T_{\mbox{\tiny{H}}}$, in agreement with our results. In contrast to our work, however, the focus of that article is not on the behavior near criticality (where, as we have discussed in detail above, much information can be derived with purely analytical calculations and general universality arguments), but instead on their numerical determination in a wider range, using the Gie{\ss}en Boltzmann-Uehling-Uhlenbeck transport model~\cite{Buss:2011mx} and Monte~Carlo calculations. The fact that the numerical approach used in ref.~\cite{Rais:2019chb} reproduces our analytical results close to criticality is a non-trivial cross-check of the results.
We should emphasize again that there is no fundamental proof that the statistical bootstrap model described in this work should necessarily provide a complete description of the fundamental properties at the CEP. Based on very general arguments (including the continuous nature of the transition at the critical endpoint, spacetime dimensionality, and the underlying symmetries---or lack thereof---of the theory), one may instead argue that the universality class of the critical endpoint of QCD should instead be the one of the Ising model in three dimensions (for a review, see ref.~\cite{Pelissetto:2000ek}). The critical exponents in this model have recently been computed to very high precisions using conformal bootstrap techniques~\cite{ElShowk:2012ht, El-Showk:2014dwa}, finding~$\hat{\alpha}=0.11008(1)$, $\hat{\gamma}=1.237075(10)$, etc., which are clearly incompatible with those predicted by the model that we considered here. If the critical endpoint of QCD exists, it may well be that its actual critical exponents are those of the three-dimensional Ising model, and that deviations from the description in terms of the statistical bootstrap model start to occur when one approaches the CEP. In this respect, it would be interesting to study theoretically how these deviations start to manifest themselves when the system is off, but close to, the critical point---perhaps using the analytical tools of conformal perturbation theory~\cite{Guida:1995kc} (see also ref.~\cite{Caselle:2016mww}, for an explicit example of application), as recently proposed in ref.~\cite{Caselle:2019tiv}. An interesting issue associated with the description of the critical endpoint of QCD in terms of the three-dimensional Ising model concerns the identification of the lines, in the QCD phase diagram, that describe relevant deformations of the model (see also ref.~\cite{Caselle:2020tjz}): what are the directions that correspond to a ``thermal'' and to a ``magnetic'' perturbation of the critical point? How do they affect the reliability of the description of the thermodynamics of the hadronic phase in terms of a hadron resonance gas with a spectrum of the form in eq.~(\ref{states})? The answers to these questions may have important phenomenological implications for the evolution of the medium in energy scans going through or close to the critical endpoint, since they could directly affect the dynamics of hadrons before freeze-out.
Finally, it should be noted that, by construction, the statistical bootstrap model does not allow one to describe the approach to the critical endpoint of QCD from ``above'', i.e. from the deconfined phase. Perturbative computations show that the $\eta/s$ ratio is generally large for a weakly coupled quark-gluon plasma~\cite{Hosoya:1983xm, Thoma:1991em} (a seemingly counter-intuitive result, which, in fact, reflects the fact that suppression of interactions makes ``transverse'' propagation of momentum difficult), but it is well known that thermal weak-coupling expansions are affected by non-trivial divergences, which are not present at $T=0$ (see, for example, ref.~\cite{Ghiglieri:2020dpq} and references therein), and require a sophisticated treatment~\cite{Jeon:1995zm, Arnold:2000dr, Arnold:2003zc, Arnold:2002zm}.
\subsection{Conclusions}
\label{subsec:conclusions}
To summarize, in this work we derived the theoretical predictions of the statistical bootstrap model in the vicinity of the critical endpoint of QCD. Working in the approximation in which the critical temperature does not depend on the value of the chemical potential (which, as we remarked in section~\ref{subsec:formulation_of_the_model}, has support from lattice QCD calculations showing that the crossover line in the phase diagram of the theory has very small curvature~\cite{Kaczmarek:2011zz, Endrodi:2011gv, Cea:2014xva, Bonati:2014rfa}), we showed that, for a suitable choice of its parameters, this model ``predicts'' the existence of a phase transition, and allows one to derive the associated critical exponents in an analytical way. Moreover, the model also gives predictions for the transport properties near the CEP, which are encoded in the shear and bulk viscosities. Both for the equation of state and for these transport coefficients, the dependence of the predictions of the model on the parameter $a$ (within the rather narrow interval of physical interest) is very mild. In spite of the relative simplicity of the model, these results are qualitatively and quantitatively very similar to those obtained from other approaches. These findings may hopefully guide the future experimental identification of the CEP in heavy-ion collision experiments and the determination of the physical properties of QCD matter in the proximity of the critical endpoint.
\vspace{5mm}
\section*{Acknowledgments}
Guruprasad~Kadam is financially supported by the DST-INSPIRE faculty award under Grant No. DST/INSPIRE/04/2017/002293. G.K. thanks Sangyong~Jeon for useful comments.
\section{Introduction and main results}
\subsection{Basic notation}
Let $(M,g)$ be a connected, possibly incomplete, $n$-dimensional Riemannian manifold, $n \geq 2$, endowed with its Riemannian measure $\,\mathrm{dv}$. Unless otherwise specified, integration will be always performed with respect to this measure. The Riemannian metric $g$ gives rise to the intrinsic distance $\mathrm{dist}(x,y)$ between a couple of points $x,y \in M$. The corresponding open metric ball centered at $o \in M$ and of radius $R>0$ is denoted by $B_{R}(o)$. The $\operatorname{Riem}$ and $\operatorname{Ric}$ symbols are used to denote, respectively, the Riemann and the Ricci curvature tensors of $(M,g)$. Finally, the Laplace-Beltrami operator of $(M,g)$ is denoted by $\Delta = \operatorname{trace} \operatorname{Hess} = \div \nabla$. We stress that we are using the sign convention according to which, on the real line, $\Delta = +\frac{d^{2}}{dx^{2}}$.\smallskip
This paper deals with sub-solutions of elliptic PDEs involving the Schr\"odinger operator
\[
\mathcal{L} = \Delta - \l(x)
\]
where $\l(x)$ is a smooth function.
We say that $u \in L^{1}_{loc}(M)$ is a {\it distributional solution} of $\mathcal{L} u \geq f \in L^{1}_{loc}(M)$ if, for every $0 \leq \varphi \in C^{\infty}_{c}(M)$,
\[
\int_{M}u \mathcal{L}\varphi \geq \int_{M}f \varphi.
\]
Sometimes, we will call such a $u$ a {\it distributional subsolution} of the equation $\mathcal{L} u = f$. The notion of {\it distributional supersolution} is defined by reversing the inequalities and we say that $u \in L^{1}_{loc}(M)$ is a {\it distributional solution} of $\mathcal{L} u = f$ if it is a subsolution and a supersolution at the same time.\smallskip
In the presence of more local regularity of the function involved we can also speak of a weak solution of the same inequality. Namely, $u \in W^{1,1}_{loc}(M)$ is a {\it weak solution} of $\mathcal{L} u \geq f \in L^{1}_{loc}(M)$ if, for every $0 \leq \varphi \in C^{\infty}_{c}(M)$, it holds
\[
- \int_{M} g(\nabla u , \nabla \varphi) \geq \int_{M} (\l u + f) \varphi.
\]
By a density argument, the inequality can be extended to test functions $0 \leq \varphi \in W^{1,\infty}_{c}(M)$. If the regularity of $u$ is increased to $W^{1,2}_{loc}(M)$ then test functions can be taken in $W^{1,2}_{c}(M)$.
Finally, we need to recall that a function $u \in W^{1,1}_{loc}(M)$ is a distributional solution of $\mathcal{L} u \geq f \in L^{1}_{loc}$ if and only if it is a weak solution of the same inequality.
\subsection{The BMS conjecture}
This paper and its companion \cite{GPSV-preprint} originate from the unpublished preprint \cite{PV1} by two of the authors. The main goal of the present paper is to expand the investigation of the BMS conjecture on possibly incomplete Riemannian manifolds. In \cite{GPSV-preprint}, a generalization of the BMS conjecture, which involves a new notion of distributional subsolutions, is investigated in the broader setting of metric measure spaces.
The BMS conjecture was introduced in \cite[Appendix B]{BMS}, and it is concerned with the $L^p$-positivity preserving property for Riemannian manifolds. We recall the definition of this property, due to B. G\"uneysu \cite{Gu-JGEA}:
\begin{definition}
Let $1 \leq p \leq +\infty$. The Riemannian manifold $(M,g)$ is said to be $L^{p}$-Positivity Preserving ($L^{p}$-PP for short) if the following implication holds true:
\begin{gather}\label{P}\tag{$L^p\mathrm{-PP}$}
\begin{cases}
(- \Delta + 1) u \geq 0 \text{ distributionally on M}\\
u \in L^{p}(M)
\end{cases}
\Longrightarrow
u \geq 0\text{ a.e. on }M.
\end{gather}
More generally, one can consider any family of functions $\mathscr{C} \subseteq L^{1}_{loc}(M)$ and say that $(M,g)$ is $\mathscr{C}$-Positivity Preserving if the above implication holds when $L^{p}(M)$ is replaced by $\mathscr{C}$.
\end{definition}
The following conjecture, motivated by the study of self-adjointness of covariant Schr\"odinger operators (see the discussions in \cite{Gu-survey, Gu-book} and Subsection \ref{subsec:core} below) was formulated by M. Braverman, O. Milatovic and M. Shubin in \cite[Appendix B]{BMS}.
\begin{conjecture}[BMS conjecture]
Assume that $(M,g)$ is geodesically complete. Then $(M,g)$ is $L^{2}$-positivity preserving.
\end{conjecture}
The validity of the BMS conjecture has been verified under additional restrictions on the geometry of the complete Riemannian manifold $(M,g)$. More precisely:
\begin{itemize}
\item In the seminal paper \cite[p.140]{Ka}, T. Kato proved that $\mathbb{R}^{n}$ is $L^{2}$-PP.
\item In \cite[Proposition B.2]{BMS} it is assumed that $(M,g)$ has $C^{\infty}$-bounded geometry, i.e., it satisfies $\| \nabla^{(j)}\operatorname{Riem} \|_{L^{\infty}} < +\infty$ for any $j\in \mathbb{N}$ and $\mathrm{inj}(M)>0$.
\item In \cite{Gu-JGEA}, B. G\"uneysu showed that $\operatorname{Ric} \geq 0$ is enough to conclude. Subsequently, in \cite[Theorem XIV.31]{Gu-book}, he proved that if $\operatorname{Ric} \geq -K^{2}$ then $(M,g)$ is $L^{p}$-PP on the whole scale $p \in [1,+\infty]$.
\item In \cite{BS}, D. Bianchi and A.G. Setti observed that the BMS conjecture is true even if the Ricci curvature condition is relaxed to
\[
\operatorname{Ric} \geq - C (1+ r(x))^{2}
\]
where $r(x) = \mathrm{dist}(x,o)$ for some origin $o \in M$. Under the same curvature assumptions, the $L^p$-PP can be extended almost directly to any $p\in[2,\infty)$, and with a little more effort to any $p\in[1,+\infty]$, \cite{Gu-book,MV}.
\item In the very recent \cite{MV}, L. Marini and the third author considered the case of a Cartan-Hadamard manifold (complete, simply connected with $\operatorname{Riem} \leq 0$). In this setting it is proved that $(M,g)$ is $L^{p}$-PP for any $p \in [2,+\infty)$ provided
\[
-(m-1)B^{2}(1+r(x))^{\a +2} \leq \operatorname{Ric} \leq -(m-1)^{2}A^{2}(1+ r(x))^{\a}
\]
for some $\a>0$ and $B > \sqrt{2}(m-1)A>0$.
\end{itemize}
Kato's argument in $\mathbb{R}^n$ relies on the positivity of the operator $(-\Delta + 1)^{-1}$ acting on the space of tempered distributions, which in turn is proved using the explicit expression of its kernel. Instead, in all the above quoted works on Riemannian manifolds, the proofs stem from an argument by B. Davies, \cite[Proposition B.3]{BMS} that relies on the existence of good cut-off functions with controlled gradient and Laplacian. Obviously, the construction of these cut-offs requires some assumption on the curvature.
\subsection{Riemannian manifolds and \texorpdfstring{$L^{p}$}{Lp}-Positivity Preservation}
In this paper we prove some results on the $L^p$ positivity-preserving property for a Riemannian manifold $M$ and its link to completeness of $M$, the $p$-parabolicity of $M$, and the validity of the property on $M\setminus K$ in relation to the size of $K$.
The approach we use is somewhat different from the other approaches available in the literature. In particular, it is based on a new \textsl{a priori} regularity result for positive subharmonic distributions (see Section \ref{sect:reg}), which allows us to prove a Liouville-type theorem, and on a Brezis-Kato inequality on Riemannian manifolds (see Section \ref{sec:Kato}). Both these results, in turn, rely on a smooth monotonic approximation of distributional solutions of $\mathcal{L} u \geq 0$ (see Section \ref{sect:approx}). This approximation can be obtained using general potential-theoretic arguments, but in this paper we also present an explicit construction that uses the Riemannian Green's function as a sort of mollifier to smooth any $L^1$ subsolution $u$.
This approach avoids any curvature restrictions on the manifold $M$, and allows us to prove that
\begin{theorem}\label{th:BMS}
Let $(M,g)$ be a complete Riemannian manifold. Then $M$ is $L^{p}$-Positivity Preserving for every $p \in (1,+\infty)$. In particular, the BMS conjecture is true.
\end{theorem}
As a matter of fact, a by-product of our approach is that a complete manifold is $\mathscr{C}$-Positivity Preserving where
\[
\mathscr{C} = \{ u \in L^{1}_{loc}(M): \| u \|_{L^{p}(B_{2R}\setminus B_{R})} = o(R^{2/p}) \text{ as }R \to +\infty \},
\]
where $p\in (1,+\infty)$ (see Remark \ref{rem-subquadratic}).\smallskip
\begin{remark}
We note explicitly that, in the statement of Theorem \ref{th:BMS}, the endpoint cases $p=1$ and $p=+\infty$ are excluded. This is because, in general, the corresponding property may fail. Indeed, it is well known that there are complete Riemannian manifolds which are not $L^{\infty}$-Positivity Preserving (since they are stochastically incomplete). On the other hand, \cite{BM} contains an example of a complete manifold whose sectional curvature decays more than quadratically to $-\infty$ and for which the $L^{1}$-Positivity Preservation is not satisfied.
\end{remark}
\subsection{\texorpdfstring{$L^{p}$}{Lp}-Positivity Preservation, parabolicity, capacity and removable sets}
A natural problem is to understand to what extent geodesic completeness is a necessary condition for the \eqref{P} property to hold. We present two families of results in this direction:
\begin{enumerate}
\item Theorem \ref{th:caccio} states that a not necessarily complete Riemannian manifold $M$ still enjoys the \eqref{P} property if it has a finite number of ends, all of which are $q$-parabolic for possibly different values of $q>\frac{2p}{p-1}$.
\item Corollary \ref{coro:LpPP cap}, Proposition \ref{prop_Mink} and Section \ref{sec_Kbig} deal with manifolds of the form $M\setminus K$, where $M$ is a complete Riemannian manifold and $K$ is a compact subset with either capacity or Minkowski-type upper bounds.
\end{enumerate}
We refer the reader to \cite{mattilone, dl_book_tangent, grig} for a background on capacity and parabolicity. The relevant definitions will be also recalled in Section \ref{sec:parabolic}.
The main idea for these results is that sets that are sufficiently ``small'' in a suitable sense do not influence the behavior of solutions to $(-\Delta+1)u\geq 0$, and parabolicity of a manifold guarantees the same property for the boundary at infinity of $M$.
\smallskip
To be more precise, we prove the following results:
\begin{proposition}\label{prop:LpPP cap}
Let $1<p<+\infty$ and let $M = N \setminus K$, where $(N,h)$ is an $n$-dimensional complete Riemannian manifold and $K\subset N$ is a compact set. Suppose that the Hausdorff dimension of $K$ satisfies
\begin{gather}
\dim_{H}(K)<n-\frac{2p}{p-1}\, .
\end{gather}
Then $M$ is $L^p$-Positivity Preserving.
\end{proposition}
As a matter of fact the conclusion of Proposition \ref{prop:LpPP cap} holds if $K$ is $q$-polar for some $\frac{2p}{p-1}<q \leq +\infty$, a condition that is implied by the smallness of Hausdorff dimension.
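For instance, when $p=2$ the assumption of Proposition \ref{prop:LpPP cap} reads $\dim_{H}(K)<n-4$; note also that, since $\frac{2p}{p-1}$ is decreasing in $p$, the threshold $n-\frac{2p}{p-1}$ increases from $-\infty$ (as $p\to 1^{+}$) to $n-2$ (as $p\to +\infty$).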
With a stronger assumption on the size of $K$, it is possible to deal with the threshold dimension as well. In particular, we have the following result. Here, and in the subsequent parts of the paper, given a subset $E$ of a complete Riemannian manifold $(N,h)$, we denote by $B_{r}(E) = \cup_{x\in E}B_{r}(x)$ its open tubular neighborhood.
\begin{proposition}\label{prop:LpPPMink}
Let $1<p<+\infty$ and let $M = N \setminus K$, where $(N,h)$ is an $n$-dimensional complete Riemannian manifold and $K\subset N$ is a bounded set. If the tubular neighborhoods of $K$ have uniform volume bounds of the form
\begin{gather}
\mathrm{vol}\ton{B_{r}(K)}\leq C r^{\frac{2p}{p-1}}
\end{gather}
for some $C$ independent of $r$, then $M$ is $L^p$-Positivity Preserving.
\end{proposition}
On the other hand, it is possible to build explicit examples of complete manifolds $(M,h)$ that lose the $L^p$-PP property when a set $K$ of ``big size'' is removed from them. In particular, Proposition \ref{prop_no_good} shows that if the set $K$ has Hausdorff dimension
\begin{gather}
\dim_{H}(K)>n-\frac{2p}{p-1}\, ,
\end{gather}
then the \eqref{P} property does not hold on $M\setminus K$, so that the dimensional threshold $n-\frac{2p}{p-1}$ is essentially sharp.
\subsection{Essential self-adjointness and \texorpdfstring{$L^{p}$}{Lp}-operator cores for Schr\"odinger operators}\label{subsec:core}
The $L^{p}$-PP property first appeared, in a somewhat implicit form, in the seminal paper \cite{Ka} where T. Kato addressed the problem of the self-adjointness of Schr\"odinger operators with singular potentials in Euclidean spaces. Its validity on Riemannian manifolds was later systematically investigated with the aim of extending Kato's results to covariant Schr\"odinger operators with singular potentials on vector bundles; see e.g. \cite{BMS, Gu-JGEA} and references therein. More generally, a basic problem in the $L^{p}$ spectral theory of Schr\"odinger operators is to understand under which conditions on the potential and on the underlying manifold the space of smooth compactly supported functions is an $L^{p}$ core of the operator, namely, it is dense in the domain of its maximal realization. Again, in Euclidean space this is classical and due to Kato \cite{Ka2}; in the Riemannian case we refer e.g. to \cite{Mi}.\\
When considered in this framework, the results in our paper can be exploited to prove $L^{p}$ spectral properties for a class of Schr\"odinger operators on possibly incomplete manifolds. As we shall see in a moment, some of these results look relevant even for the Laplace-Beltrami operator.
To start with, we note that a straightforward application of \cite[Theorem 2.9]{Gu-JGEA} gives
\begin{corollary}\label{coro:batu}
Let $M$ satisfy the assumptions of either Proposition \ref{prop:LpPP cap} or Proposition \ref{prop:LpPPMink}.
\begin{itemize}
\item[(a)] If $V \ge 0$ and $V\in L^2_{loc}(M)$, then the operator $\Delta-V$ (defined on $C^\infty_c(M)$) is essentially self-adjoint in the Hilbert space $L^2(M)$.
\item[(b)] Assume $p \in (1,\infty)$, $V \ge 0$, and $V \in L^p_{loc}(M)$. Then the operator $\Delta - V$ (defined on $C^\infty_c(M)$) is closable in $L^p (M)$, and its closure generates a contraction semigroup in the Banach space $L^p(M)$.
\end{itemize}
\end{corollary}
Suppose in addition that the minimal and the maximal extension of $\Delta:C^\infty_c\to C^\infty_c$ in $L^p$ coincide, i.e. $\Delta_{\mathrm{min},p}=\Delta_{\mathrm{max},p}$. Then, following \cite{Mi} and \cite[Appendix A]{GP}, one has that $C^\infty_c$ is an operator core for $(\Delta-V)_{\mathrm{max},p}$. As a consequence of Propositions \ref{prop:LpPPMink} and \ref{prop:density}, we obtain the following
\begin{corollary}\label{cor_core}
Let $1<p<+\infty$ and let $M = N \setminus K$, where $(N,h)$ is an $n$-dimensional complete Riemannian manifold and $K\subset N$. Suppose that the tubular neighborhoods of $K$ have uniform volume bounds of the form
\begin{gather}
\mathrm{vol}\ton{B_{r}(K)}\leq C r^{q}
\end{gather}
for some $C$ independent of $r$ and some $q\geq \frac{2p}{p-1}$ if $p<3$, or $q>p$ if $p\ge3$.
Then
$C^\infty_c(M)$ is an operator core for $(\Delta-V)_{\mathrm{max},p}$.
\end{corollary}
Clearly, the latter corollary applies in particular in the case where $K$ is empty so that $M=N$ is geodesically complete, and $p\in(1,\infty)$. This special case is contained in \cite[Corollary 1]{Mi2019} (see also \cite{Mi2010}).\\
When $V=0$, the essential self-adjointness of $\Delta:C^\infty_c(M)\to C^\infty_c(M)$ in $L^2(M)$ on incomplete Riemannian manifolds $M$ was studied in \cite{CdV,Ma}. In particular, the case where $M=N\setminus K$ with $K$ a closed submanifold is considered there, and it is proved that $\Delta$ is essentially self-adjoint if (and only if, at least when $N=\mathbb{R}^{n},\mathbb{S}^{n}$) the co-dimension is greater than $3$. Note that smooth closed submanifolds satisfy the assumptions of Proposition \ref{prop:LpPPMink}. In a different but related direction, in \cite{Ma, GM} the authors investigate spectral properties of the Gaffney Laplacian $\Delta_G : W^{1,2}(M) \to L^{2}(M)$ on incomplete manifolds, obtaining conditions for its self-adjointness in terms of probabilistic properties like parabolicity and stochastic completeness, and in terms of the Minkowski content of the removed part, somehow in the spirit of our Proposition \ref{prop:LpPPMink}.\\
When $M$ is geodesically complete, Corollary \ref{coro:batu} (a) was proved in \cite{Mi2019}; see also references therein for previous partial results and \cite{GuPo} for the case where the negative part of the potential is in some Kato class.
Geodesically incomplete manifolds were also considered in \cite{MiTu}, where the authors require no assumptions on the geometry of $M$, but ask for a controlled behaviour of the potential $V$ near the Cauchy boundary of $M$.
\section{Smooth approximation of distributional subsolutions}\label{sect:approx}
In the classical potential theory for the Euclidean Laplacian in $\mathbb{R}^{n}$ (and therefore on any $2$-dimensional manifold in isothermal local coordinates) it is known that subharmonic distributions are the monotone limit of smooth subharmonic functions.
In particular, given a subharmonic function $u:\mathbb{R}^n\to \mathbb{R}$, $u\in L^1_{\text{loc}}\ton{\mathbb{R}^n}$, we can consider a $C^\infty_c(\mathbb{R}^n)$ mollifier $\phi_r(x)=r^{-n}\phi(x/r)$ and the one-parameter family of functions $u(x,r)=\phi_r \ast u$. Using standard estimates, one can show that for all $r>0$, $u(x,r)$ are smooth subharmonic functions that, as $r\to 0$, converge monotonically from above to $u(x)$ in the $L^1_{\text{loc}}$ sense.
We need to extend this property to the locally uniformly elliptic operator
\[
\mathcal{L} = \Delta - \l(x)
\]
on a Riemannian manifold, $\l(x)$ being a smooth function.
\begin{theorem}\label{th:smoothing}
Let $(M,g)$ be an open manifold. For any $0\le \lambda \in C^\infty (M)$, the operator $\mathcal{L}=\Delta-\lambda(x)$ has the property of local smooth monotonic approximation of $L^{1}_{loc}$-subsolutions, i.e.\ the following holds.\\
For every $x_{0}\in M$ and every open set $\Omega \Subset M$ with smooth boundary such that $x_0\in\Omega$, there exists an open set $\Omega'\Subset \Omega$ containing $x_{0}$ such that, if $u \in L^{1}(\Omega)$ solves $\mathcal{L} u \geq 0$ in $\Omega$, then there exists a sequence $\{ u_{k} \} \subseteq C^{\infty}(\overline \Omega')$ satisfying the following properties:
\begin{enumerate}
\item [a)] $u \leq u_{k+1} \leq u_{k}$ for all $k \in \mathbb{N}$;
\item [b)] $u_{k}(x) \to u(x)$ as $k \to +\infty$ for a.e. $x \in \Omega'$;
\item [c)] $\mathcal{L} u_{k} \geq 0$ in $\Omega'$ for all $k \in \mathbb{N}$;
\item [d)] $\| u - u_{k} \|_{L^{1}(\Omega')} \to 0$ as $k\to +\infty$.
\end{enumerate}
\end{theorem}
We preliminary observe that, in order to prove the monotonic approximation, we can replace the Schr\"odinger operator $\mathcal{L}$ by an operator of the form $\Delta_{\alpha} = \a^{-2}\div(\a^{2}\nabla \cdot)$ for a suitable smooth function $\a >0$. Indeed, given a smooth neighborhood $\Omega \Subset M$ of $x_0$, let $\alpha\in C^\infty(M)$ be a solution of the problem
\[
\begin{cases}
\mathcal{L} \alpha = 0,\\
\alpha >0,
\end{cases}\text{on } \Omega.
\]
Thanks to the maximum principle, such an $\alpha$ can be obtained for instance as a solution of the Dirichlet problem $\mathcal{L} \alpha=0$ with positive constant boundary data on $\partial \Omega$. We use the following trick introduced by M.H. Protter and H.F. Weinberger in \cite{PW}.
\begin{remark}
The assumption $\lambda \ge 0$ in Theorem \ref{th:smoothing} can be avoided up to taking a small enough neighborhood of $x_0$. Indeed, suppose that $\lambda(x_0)<0$. Since $M$ is asymptotically Euclidean at $x_0$, the constant in the Poincaré inequality can be made arbitrarily small on small domains. Namely, there exists a small enough neighborhood $U$ of $x_0$ so that
\[
\int_{U} -\lambda \varphi^2 \,\mathrm{dv} \le -\frac{\lambda(x_0)}{2}\int_{U} \varphi^2 \,\mathrm{dv}\le \int_U |\nabla \varphi|^2 \,\mathrm{dv} ,\quad \forall \varphi\in C^\infty_c(U),\] i.e., the bottom of the spectrum $\lambda^U_1(-\mathcal{L})$ of $-\mathcal{L}=-\Delta+\lambda$ on $U$ is non-negative. By the monotonicity of $\lambda^U_1(-\mathcal{L})$ with respect to the domain, we thus obtain that $\lambda^\Omega_1(-\mathcal{L})>0$ on some smaller domain $\Omega\Subset U$. Accordingly, there exists a strictly positive solution of $\mathcal{L} \alpha =0$ on $\Omega$.
\end{remark}
\begin{lemma}\label{lemma_alpha} The function $w\in L^1_{loc}(M)$ is a distributional solution of $\mathcal{L} w\ge 0$ on $\Omega$ if and only if $w/\alpha$ is a distributional solution of $\Delta_\alpha (w/\alpha):=\alpha^{-2}\operatorname{div}(\alpha^2 \nabla(w/\alpha)) \ge 0$ on $\Omega$.
\end{lemma}
\begin{proof}
Let $0\leq \varphi \in C^\infty_c(\Omega)$. Since $\Delta\alpha = \lambda \alpha$, we have
\begin{align*}
\alpha \Delta_\alpha(\varphi/\alpha) &= \alpha^{-1}\operatorname{div} (\alpha^2 \nabla(\varphi/\alpha))\\
&= \alpha^{-1} \operatorname{div}(\alpha\nabla \varphi - \varphi\nabla \alpha)\\
&= \alpha^{-1} (\alpha\Delta \varphi - \varphi\Delta \alpha)\\
&= \mathcal{L} \varphi.
\end{align*}
Noticing that the operator
$\Delta_{\alpha}$ is symmetric with respect to the smooth weighted measure $\alpha^2\,\mathrm{dv}$, we get
\begin{align*}
(\Delta_\alpha \frac{w}{\alpha}, \alpha\varphi) = \int_{M} \frac{w}{\alpha}\Delta_{\alpha}\frac{\varphi}{\alpha} \alpha^2\,\mathrm{dv} =
\int_{M} \frac{w}{\alpha}\alpha^2\frac{\mathcal{L}\varphi}{\alpha} \,\mathrm{dv} =
\int_{M} w \mathcal{L}\varphi \,\mathrm{dv} = (\mathcal{L} w,\varphi).
\end{align*}
Since $0\le \alpha\varphi \in C^\infty_c(\Omega)$, this concludes the proof of the lemma.
\end{proof}
According to this lemma, setting $v=\alpha^{-1}u$, we can infer the conclusion of the theorem from the equivalent statement for the operator $\Delta_\alpha$. Indeed, by Lemma \ref{lemma_alpha}, $v\in L^1_{loc}(\Omega)$ solves $\Delta_\alpha v \geq 0$ distributionally in $\Omega$; hence, if we prove that there exist a smaller neighborhood $\Omega'\Subset \Omega$ of $x_0$ and a sequence $\{ v_{k} \} \subseteq C^{\infty}(\overline \Omega')$ satisfying the following properties:
\begin{enumerate}
\item [a')] $v \leq v_{k+1} \leq v_{k}$ for all $k \in \mathbb{N}$;
\item [b')] $v_{k}(x) \to v(x)$ as $k \to +\infty$ for a.e. $x \in \Omega'$;
\item [c')] $\Delta_\alpha v_{k} \geq 0$ in $\Omega'$ for all $k \in \mathbb{N}$.
\item [d')] $\| v - v_{k} \|_{L^{1}(\Omega')} \to 0$ as $k\to +\infty$.
\end{enumerate}
then the sequence $u_{k} = \a v_{k}$ satisfies a), b), c) and d) as desired. Indeed, by the computation in the proof of Lemma \ref{lemma_alpha}, $\mathcal{L} u_{k} = \a\, \Delta_\alpha v_{k} \geq 0$ pointwise on $\Omega'$, while a), b) and d) follow from a'), b') and d') because $\a$ is positive and bounded on $\overline{\Omega'}$.\\
With this preparation, Theorem \ref{th:smoothing} could be proved by exploiting the powerful machinery developed in the axiomatic potential theory. Indeed,
according to \cite[Theorem 1]{Sj}, $v$ is a $\Delta_\alpha$-subharmonic function in the sense of Herv\'e, \cite{He}. Hence, to conclude, we can apply a slightly modified version of \cite[Theorem 7.1]{BL}. Namely, one can verify that Theorem 7.1 in \cite{BL} works without any change if one uses in its proof the Green function of $\Omega$ with null boundary conditions on $\partial \Omega$. The existence of this Green function, in turn, can be deduced using different methods. For instance, from the PDE viewpoint, we can appeal to the fact that the Dirichlet problem on smooth relatively compact domains is uniquely solvable; see the classical \cite{LSW}.
It is apparent that this kind of argument is quite involved. Therefore, for the reader's convenience, we present a self-contained proof of Theorem \ref{th:smoothing} that does not rely on any abstract potential theory. Instead, it involves a rather different family of approximating functions with a convolution-like flavor that, we feel, could be of independent interest.
\begin{proof}[Proof of Theorem \ref{th:smoothing}] Given the existence of local isothermal coordinates, and given that the Laplacian is conformally invariant in dimension $2$, in the two-dimensional case the theorem follows easily from the Euclidean one.
\vspace{5mm}
If $(M,g)$ has dimension $\geq 3$, by Lemma \ref{lemma_alpha} we can focus on the approximation problem for $\Delta_\alpha$, which is proportional to the Laplace-Beltrami operator relative to the metric $\tilde g_{ij} = \alpha ^{\frac 4 {n-2}} g_{ij}$. Thus, up to changing metric, we can simply study the approximation for the standard Laplace-Beltrami operator on the manifold $M$.
Let $\Omega$ be any relatively compact neighborhood of $x_0$, and let $\Omega'\Subset \Omega'' \Subset \Omega$, with $d(\Omega',\Omega''^C)\geq \epsilon>0$. Assume for simplicity that $\Omega$ has smooth boundary. Let $G_\Omega(x,y)=G(x,y)$ be the Green's function of the operator $\Delta$ on $\Omega$ with Dirichlet boundary conditions, where the signs are chosen so that $\Delta G = -\delta$. We recall some standard facts about the Green's function (see for example \cite[theorem 4.13]{AUGREEN}):
\begin{align}
&G(x,y) \ \text{is smooth on } \ \Omega\times \Omega \setminus D\, , \ \text{where} \ D=\cur{(x,x)\, \ \ x\in \Omega} \ \text{is the diagonal}\\
&G(x,y)>0\\
&\label{G_est} G(x,y)\leq C d(x,y)^{2-n}\\
&\label{dG_est} \abs{\nabla G(x,y)} \leq C d(x,y)^{1-n}\\
&G(x,y)=G(y,x)\\
&\label{G_Delta}\Delta_x G(x,y)= \Delta_y G(x,y)=-\delta_{x=y}
\end{align}
In the following, we will denote $G_x(y)=G(x,y)$ when $x$ is fixed. Thus we can also define the level sets $G_x^{-1}(t)= \cur{y \in M \ \ s.t. \ \ G_x(y)=t}\subset M$. Moreover, $\nabla_y G_x(y)=\nabla_y G(x,y)$ denotes the gradient of $G$ with respect to $y$. We will drop the subscript when there is no risk of confusion\footnote{It is worth mentioning that, while $G(x,y)$ is symmetric in $x$ and $y$, this does not mean that $\nabla_x G(x,y)=\pm \nabla_y G(x,y)$. Consider for example the symmetric function $f:\mathbb{R}^2\to \mathbb{R}$ given by $f(x,y)=xy$. However, since the usual Green's function in $\mathbb{R}^n$ is translation invariant, $G(x,y)=\abs{x-y}^{2-n}$, in this case $\nabla_x G(x,y)=-\nabla_y G(x,y)$. This is not the case on a general Riemannian manifold.}.
It can be convenient sometimes to extend the definition of $G$ to $M\times M$ by setting it to be $0$ outside $\Omega\times \Omega$. This however turns $G$ into a non $C^1$ function over $\partial \Omega$, and \eqref{dG_est}, \eqref{G_Delta} hold only on $\Omega \times \Omega$.\\
Following a similar approach to the one in \cite{BL} (see also \cite{Ni} for a relevant representation formula for smooth functions), we can use the Green's function to construct explicitly the approximating sequence $v_k$. Let $\dot \psi$ be a smooth function $\dot \psi:\mathbb R \to [0;1]$ such that
\begin{gather}
\operatorname{supp}\ton{\dot \psi}\subseteq [-1;1]\ \, , \qquad \qquad \int_{\mathbb R} \dot \psi=1\, ,
\end{gather}
and set $\psi(t)=\int_{-\infty}^{t} \dot \psi(s)\, ds$, so that $\psi$ is smooth, non-decreasing, $\psi\equiv 0$ on $(-\infty,-1]$ and $\psi\equiv 1$ on $[1,+\infty)$.
Define the one parameter family of functions
\begin{gather}
v(x,r)= \int_M \dot \psi\ton{G(x,y)-r} u(y)\abs{\nabla_y G(x,y)}^2\ \,\mathrm{dv}(y)\, .
\end{gather}
For convenience, we will drop the integration symbol $\,\mathrm{dv}(y)$ when there is no risk of confusion.
Since $\operatorname{supp}\ton{\dot \psi}\subseteq[-1;1]$, and since $G(x,y)$ is smooth as long as $x\neq y$ (equivalently, as long as $G(x,y)\neq \infty$), the function $v(x,r)$ is smooth in $x$ for every fixed $r<\infty$. Using the coarea formula, we can re-write $v(x,r)$ as
\begin{gather}
v(x,r)= \int_M \ \dot \psi\ton{G(x,y)-r} u(y)\abs{\nabla G(x,y)}^2=\\ \nonumber= \int_{-\infty}^\infty ds \ \dot \psi(s-r)\int_{G_x^{-1}(s)} u(y)\abs{\nabla G(x,y)}\, .
\end{gather}
Heuristically, $v(x,r)$ plays the role of a mollification of $u$ at $x$ over the ball $B_{r^{2-n}}(x)$ as $r\to \infty$.
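Although the proof below is entirely analytic, the construction is easy to visualize numerically. In the following purely illustrative Python sketch (which is not part of the proof and makes several ad hoc choices) we work in $\mathbb{R}^{3}$, where $G(x,y)=\frac{1}{4\pi|x-y|}$; by the coarea formula, $v(x,r)$ then reduces to a $\dot\psi$-weighted average of spherical means of $u$, which we estimate by Monte Carlo for the smooth subharmonic test function $u(y)=|y|^{2}$, observing, up to Monte Carlo noise, the expected monotone decrease towards $u(x)$ as $r$ grows.
\begin{verbatim}
# Purely illustrative sketch (not part of the proof): in R^3 with
# G(x,y) = 1/(4*pi*|x-y|), the coarea formula gives
#   v(x,r) = int ds psi_dot(s-r) * (spherical mean of u at radius 1/(4*pi*s)).
# The bump function, the test function u(y) = |y|^2 and the point x are
# arbitrary choices made only for illustration.
import numpy as np
rng = np.random.default_rng(0)

def psi_dot(t):
    """Smooth bump supported in [-1,1] with (approximately) unit integral."""
    out = np.zeros_like(t)
    inside = np.abs(t) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - t[inside] ** 2))
    return out / 0.443994   # approximate value of int_{-1}^{1} exp(-1/(1-t^2)) dt

def spherical_mean(u, x, rho, n_dirs=20000):
    """Monte Carlo estimate of the mean of u over the sphere of radius rho about x."""
    w = rng.normal(size=(n_dirs, 3))
    w /= np.linalg.norm(w, axis=1, keepdims=True)
    return u(x + rho * w).mean()

def v(u, x, r, n_s=200):
    """Riemann-sum approximation of v(x,r) via the spherical-mean representation."""
    s = np.linspace(r - 1.0, r + 1.0, n_s)   # support of psi_dot(s - r)
    rho = 1.0 / (4.0 * np.pi * s)            # radius of the level set {G_x = s}
    means = np.array([spherical_mean(u, x, p) for p in rho])
    return np.sum(psi_dot(s - r) * means) * (s[1] - s[0])

u = lambda y: np.sum(y ** 2, axis=-1)        # smooth and subharmonic in R^3
x = np.array([0.3, -0.1, 0.2])
for r in (2.0, 5.0, 20.0, 100.0):
    print(r, v(u, x, r), ">=", u(x))         # approaches u(x) from above
\end{verbatim}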
\vspace{5mm}
\textbf{Staying away from the boundary of $\Omega$}. A technical point needed for the proof of all the properties that we want to show is that we need to ``stay away from the boundary of $\Omega$''. What this means will become clear in the computations below, but for the moment let us mention that we can fix $r_0 \gg 0$ in such a way that for all $r\geq r_0$:
\begin{gather}\label{eq_r_0}
\forall x\in \Omega' \, , \ \forall y \in \Omega''^C \qquad \qquad \psi( G(x,y)-r)=0\, .
\end{gather}
This is possible because $\psi$, the primitive $\psi(t)=\int_{-\infty}^{t}\dot\psi(\tau)\,d\tau$ of $\dot\psi$, vanishes on $(-\infty;-1]$, and because of the upper bound in \eqref{G_est}.
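For instance, a sufficient (though by no means optimal) choice is
\begin{gather}
r_0=C\epsilon^{2-n}+1\, ,
\end{gather}
where $C$ is the constant appearing in \eqref{G_est}: indeed, for $x\in\Omega'$ and $y\in\Omega''^C$ we have $d(x,y)\geq \epsilon$, hence $G(x,y)\leq C\epsilon^{2-n}\leq r-1$ for every $r\geq r_0$, and thus $\psi(G(x,y)-r)=0$.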
Our final approximating sequence will be defined by
\begin{gather}
v_k(x)= v(x,r_0+k)\, .
\end{gather}
\vspace{5mm}
\textbf{Proof of monotonicity}.
Assuming that $u$ is smooth, and denoting by $n$ the outward normal to the level sets $G_x^{-1}(t)$ (i.e., the outward normal to the super-level sets $\cur{G_x\geq t}$), we compute the derivative of $v(x,r)$ with respect to the parameter $r$ as follows:
\begin{gather}
\frac \partial{\partial r} v(x,r)= \int_\mathscr{R} ds -\ddot \psi(s-r) \int_{G_x^{-1}(s)}\ u(y) \abs{\nabla G}= \\=\notag
\int_\mathscr{R} ds -\ddot \psi(s-r) \int_{G_x=s}\ u(y) \ps{\nabla G}{-n}= \\ \notag
= \int ds \ \ddot \psi(s-r) \int_{G_x\geq s} \ps{\nabla u}{\nabla G} + \underbrace{\int ds \ \ddot \psi(s-r)}_{=0} \underbrace{\int_{G_x\geq s} u(y)\Delta_y G_x}_{=u(x) \ \ \text{indep. of } s} =\\[10pt]=\notag
\int ds \ \ddot \psi(s-r) \int_s^\infty dt \int_{G_x=t} \ps{\nabla u}{\frac{\nabla G}{\abs{\nabla G}}}=
\end{gather}
integrating by parts in $s$, since $\dot \psi$ has compact support the boundary term vanishes and we are left with:
\begin{gather}
\notag= +\int ds \ \dot \psi(s-r) \int_{G_x=s} \ps{\nabla u}{\frac{\nabla G}{\abs{\nabla G}}}= -\int ds \ \dot \psi(s-r) \int_{G_x=s} \ps{\nabla u}{n}=\\
=\notag -\int ds \ \dot \psi(s-r) \int_{G_x\geq s} \Delta u=-\int_M \Delta u \int ds \ \dot \psi(s-r) \chi_{G_x\geq s}=\\
\notag =\int_M \Delta u \int_{-\infty}^{G_x(y)} ds \ \dot \psi(s-r) =-\int_{M}\Delta u \cdot \psi(G_x(y)-r)\, .
\end{gather}
We can conclude that
\begin{gather}
\frac{\partial}{\partial r} v(x,r)=-\int_{M}\Delta u \cdot \psi(G_x(y)-r)\, .
\end{gather}
Now, by the definition of $r_0$ in \eqref{eq_r_0}, if $r\geq r_0$ then for all $x\in \Omega'$ the support of $\psi(G_x(y)-r)$ is contained in $\Omega''$, and so $\psi(G_x(y)-r)$ is a non-negative smooth function of $y$\footnote{The problem is that $G(x,y)$ is the Green's function of $\Omega$, so it is $C^0$ but not $C^1$ over $\partial \Omega$.}. Since $\psi(s)=\int_{-\infty}^s \dot \psi (\tau)\,d\tau\geq 0$, we can use the distributional subharmonicity of $u$ to conclude that
\begin{gather}
\frac{\partial}{\partial r} v(x,r)=-\int_{M}\Delta u \cdot \psi(G_x(y)-r)=-\int_{M}u \cdot \Delta\qua{\psi(G_x(y)-r)}\leq 0
\end{gather}
Since both the first and the third term in the last chain of equalities pass to the limit as $u_j\to u$ in $L^1_{loc}$, this last inequality holds for every distributionally subharmonic $u\in L^1_{loc}$; smoothness of $u$ is not necessary here.
\vspace{5mm}
\textbf{Proof of subharmonicity}. We proceed in a similar way for the proof of subharmonicity of $v(x,r)$ (here $r$ is fixed, and we study the subharmonicity in $x\in M$). Assuming as above that $u$ is smooth, we can compute
\begin{gather}
v(x,r)= \int_\mathscr{R} ds \ \dot \psi(s-r) \int_{G_x=s}u\abs{\nabla G}= -\int_\mathscr{R} ds \ \dot \psi(s-r) \int_{G_x=s}u\ \ps{\nabla G}{-n}=\\
\notag =u(x)-\int_\mathscr{R} ds \ \dot \psi (s-r) \int_{G\geq s } \ps{\nabla u}{\nabla G}\, .
\end{gather}
Thus, rearranging the terms, we get
\begin{gather}
v(x,r)-u(x)= -\int_\mathscr{R} ds \ \dot \psi (s-r) \int_{G\geq s } \ps{\nabla u}{\nabla G}=\\
= \notag - \int_\mathscr{R} ds \ \dot \psi (s-r) \qua{\int_{G=s } G\ \ps{\nabla u}{n} - \int_{G_x\geq s} G \Delta u}=\\
= \notag - \int_\mathscr{R} ds \ \dot \psi (s-r) \int_{G_x\geq s} (s-G) \Delta u=\\
= \notag - \int_\Omega \Delta u \int ds \ (s-G(x,y))\dot \psi (s-r) \chi_{G_x\geq s}
\end{gather}
Let $\hat \psi(t)=\int_{-\infty}^t ds \psi(s)$ be the primitive of $\psi$. Notice that
\begin{gather}
\int ds \ (s-G(x,y))\dot \psi (s-r) \chi_{G_x\geq s}=\\
\notag =\int_{-\infty}^{G(x,y)} ds \ s\dot \psi (s-r) -G(x,y)\int_{-\infty}^{G(x,y)} ds \ \dot \psi (s-r) =\\
\notag =G\psi(G-r)-\hat\psi(G-r)-G\psi(G-r)=-\hat \psi(G(x,y)-r) \, .
\end{gather}
Summing up, we have that, if $u$ is smooth,
\begin{gather}\label{eq_fr-u}
v(x,r)-u(x)=\int_\Omega \hat \psi(G(x,y)-r) \Delta u(y)\, .
\end{gather}
We can compute the Laplacian of $v(x,r)$ by
\begin{gather}
\Delta_x v(x,r)- \Delta_x u(x) = \int_M \ \Delta_x(\hat \psi(G(x,y)-r))\Delta_y u(y)\ \,\mathrm{dv}(y)\, .
\end{gather}
By direct computation:
\begin{gather}
\Delta \qua{\hat \psi(G(x,y)-r)}= \nabla_i \qua{\psi (G(x,y)-r)\nabla_i G(x,y) } = \\
\notag=\dot \psi(G(x,y)-r) \abs{\nabla G(x,y)}^2 +\psi(G(x,y)-r)\Delta G(x,y)=\\
\notag=\dot \psi(G(x,y)-r) \abs{\nabla G(x,y)}^2 -\psi(G(x,y)-r)\delta_{x=y}=\dot \psi(G(x,y)-r) \abs{\nabla G(x,y)}^2 -\delta_{x=y}\, .
\end{gather}
where in the last step we used that $\psi(G(x,y)-r)\,\delta_{x=y}=\delta_{x=y}$, since $G(x,y)\to+\infty$ as $y\to x$ and hence $\psi(G(x,y)-r)=1$ near the diagonal. Thus we obtain that
\begin{gather}
\Delta_x v(x,r)- \Delta_x u(x) = -\Delta_x u(x)+\int_\Omega \dot \psi(G(x,y)-r) \abs{\nabla_x G}^2 \Delta u(y)\,\mathrm{dv}(y)\, ,\\
\notag \Delta_x v(x,r)=\int_\Omega \dot \psi(G(x,y)-r) \abs{\nabla_x G}^2 \Delta u\,\mathrm{dv}(y) =\int_\Omega u \ \Delta_y \qua{\dot \psi(G(x,y)-r) \abs{\nabla_x G}^2 }\,\mathrm{dv}(y)\, .
\end{gather}
The last equality makes sense also if $u$ is simply in $L^1_{\text{loc}}$ and not smooth.
Arguing as in the proof of monotonicity, if $r\geq r_0$ and $x\in \Omega'$, $G(x,y)$ is smooth in this integral, and the distributional subharmonicity of $u$ allows us to conclude that
\begin{gather}
\Delta_x v(x,r)\geq 0\, ,
\end{gather}
as desired.
\vspace{5mm}
\textbf{Proof of $L^1$ convergence}. In order to show that $v(x,r)\to v(x)$ in the $L^1(\Omega')$ sense as $r\to \infty$, we will use the equality \eqref{eq_fr-u} and the fact that, in the distributional sense, $\Delta u$ is a finite (non-negative) measure on the set $\Omega''$.
\medskip
Observe that if $u$ is a smooth subharmonic function, then by \eqref{eq_fr-u}:
\begin{gather}
v(x,r)-u(x)=\int_\Omega \hat \psi(G-r) \Delta u\, .
\end{gather}
Thus the $L^1(\Omega')$ norm of $v(x,r)-u(x)$ is
\begin{gather}
\int_{\Omega'} \abs{v(x,r)-u(x)} \,\mathrm{dv}(x)=\int_{\Omega'} v(x,r)-u(x)\ \,\mathrm{dv}(x)=\\
\notag =\iint_{{\Omega'}\times {\Omega}} \hat \psi(G(x,y)-r) \Delta u(y)\ \,\mathrm{dv}(x)\,\mathrm{dv}(y) \\
\notag =\int_{\Omega} \Delta u(y) \underbrace{\int_{\Omega'} \ \hat \psi(G(x,y)-r)\,\mathrm{dv}(x)}_{:=h(y,r)} \,\mathrm{dv}(y)\, .
\end{gather}
By the choice of $r_0$ in \eqref{eq_r_0}, the support of $h$ is contained in $\overline{\Omega''}$.
We claim that $h$ is a Lipschitz function and $\lim_{r\to \infty}\abs{h(y,r)}_{L^\infty(\Omega'')}=0$. Indeed, we observe that
\begin{gather}
\nabla_y h(y,r) = \int_{\Omega'} \psi(G(x,y)-r)\,\nabla_y G(x,y)\ \,\mathrm{dv}(x)\, ,
\end{gather}
and, as long as $r\geq r_0$ and $x\in \Omega'$, the map $y\mapsto \psi(G(x,y)-r)\nabla_y G(x,y)$ is smooth away from $x$, supported in $\Omega''$, and satisfies the estimate \eqref{dG_est}. Thus we obtain that $h(\cdot,r)$ is a Lipschitz function.
Moreover, note that
\begin{gather}
\hat \psi(G(x,y)-r)\leq \max\cur{G(x,y)-r+1;0}\leq \max \cur{C d(x,y)^{2-n}-r+1;0}
\end{gather}
Now, for a subharmonic distribution $u\in L^1_{loc}$, we can use this fact to prove that the family $v(x,r)$ is Cauchy in $L^1(\Omega')$ as $r\to \infty$. Indeed, for $R>r\geq r_0$,
\begin{gather}
\int_{\Omega'} \abs{v(x,r)-v(x,R)} \,\mathrm{dv}(x)=\int_{\Omega'} v(x,r)-v(x,R)\ \,\mathrm{dv}(x)=\\
\notag =\iint_{{\Omega'}\times {\Omega}} \qua{\hat \psi(G(x,y)-r)-\hat \psi(G(x,y)-R)} \Delta u(y)\ \,\mathrm{dv}(x)\,\mathrm{dv}(y)=\\
\notag =\int_{\Omega} \Delta u(y) \underbrace{\int_{\Omega'} \ \qua{\hat \psi(G(x,y)-r)-\hat \psi(G(x,y)-R)} \,\mathrm{dv}(x)}_{:=h(y,r,R)} \,\mathrm{dv}(y)\, .
\end{gather}
$h(y,r,R)$ is a smooth function, so this equality makes sense distributionally. Moreover, the family $h(\cdot,r,R)$ is uniformly bounded in $C^0$ and converges uniformly to $0$ as $r\to \infty$. Thus $v(x,r)$ is Cauchy in $L^1(\Omega')$ as $r\to\infty$.
In order to prove that $v(x,r)\to u(x)$ as $r\to \infty$, we will prove that suitable convex combinations of the functions $v(x,r)$, $r\geq r_0$, converge to $u(x)$ almost everywhere. Together with the $L^1$ Cauchy property just established, this proves the original convergence.
In particular, for all $r_0$ fix some function $\alpha_{r_0}(r)$ such that
\begin{itemize}
\item $\alpha_{r_0}\in C^\infty_c([0;\infty))$
\item $\int \alpha_{r_0}(r) \ dr=1$
\item $\forall r, \ 0\leq \alpha_{r_0}(r)\leq \frac 1 r$
\item if $r\leq r_0$ or $r\geq 10 r_0$, $\alpha_{r_0}(r)=0$
\end{itemize}
Since $\int_{r_0}^{10 r_0} \frac {dr}{r} = \ln(10)>1$, this is possible. Consider the convex combinations
\begin{gather}
w(x,r_0):=\int \alpha_{r_0}(r) v(x,r) dr = \int_0^\infty ds \int_{G(x,y)=s} u(y) \abs{\nabla G} \int dr \alpha_{r_0}(r)\dot \psi(s-r)=\\
\notag = \int_{M} u(y) \abs{\nabla G}^2 \int dr \alpha_{r_0}(r)\dot \psi(G(x,y)-r)\,\mathrm{dv}(y)\, .
\end{gather}
Notice that by Cheng-Yau gradient estimates applied on a ball $\mathscr{B} {d(x,y)/3} x$, the positive harmonic function $G$ satisfies
\begin{gather}
\frac{\abs{\nabla G}}{G}\leq c (d(x,y)^{-1}+1)\, ,
\end{gather}
and since $G(x,y)\leq C d(x,y)^{2-n}$, for $\bar r\gg1$ we have
\begin{gather}
\abs{w(x,\bar r)-u(x)}\leq \int_{M} \abs{u(y)-u(x)} \abs{\nabla G}^2 \int dr \alpha_{\bar r}(r)\dot \psi(G(x,y)-r)\,\mathrm{dv}(y)\leq \\
\notag \leq \int_{M} \abs{u(y)-u(x)} d(x,y)^{-2} G^2(x,y) \frac{C}{G(x,y)-1} 1_{\cur{\bar r-1\leq G(x,y)\leq 10 \bar r+1}}\,\mathrm{dv}(y)\leq\\
\notag \leq C\int_{A_{\bar r}(x)} \frac{1}{d(x,y)^n}\abs{u(y)-u(x)}\,\mathrm{dv}(y) \, ,
\end{gather}
where $A_{\bar r}(x)=\cur{(c^{-1}(10 \bar r+1))^{1/(2-n)}\leq d(x,y)\leq (C^{-1}(\bar r-1))^{1/(2-n)}}$. The last inequality is due to the local estimate $G(x,y)\ge c\,d(x,y)^{2-n}$ on $\Omega'$; see for instance \cite[Theorem 2.4]{MRS}. Since the inner and outer radii of the annulus $A_{\bar r}(x)$ are comparable, the last integral is controlled by a constant times the average of $\abs{u-u(x)}$ over a ball centered at $x$ whose radius tends to $0$ as $\bar r\to\infty$. Hence, if $x$ is a Lebesgue point for $u$, then $\abs{w(x,\bar r)-u(x)}\to 0$; this proves a.e. convergence to $u(x)$ for the convex combinations $w(x,\bar r)$, and thus also for the original family $v(x,r)$.
\end{proof}
\vspace{5mm}
In the following, we will assume that either
$\l(x) \equiv 0$ or $\l(x) = \l$ is a positive constant. For each of these choices of $\l$, the existence of a smooth local monotone approximation has striking consequences for the regularity theory of subharmonic distributions and for the validity of a variant of the classical {\it Kato inequality}.
In the next two sections we analyze these applications separately.
\section{Improved regularity of positive subharmonic distributions}\label{sect:reg}
By a {\it subharmonic distribution} on a domain $\Omega \subseteq M$ we mean a function $u \in L^{1}_{loc}(
\Omega)$ satisfying the inequality $\Delta u \geq 0$ in the sense of distributions. Note that, in this case, $\Delta u$ is a positive Radon measure.
Using the existence of a local monotone approximation, we show that positive subharmonic distributions are necessarily in $W^{1,2}_{loc}$. Indeed, an even stronger property can be proved, namely that $u^{p/2}\in W^{1,2}_{loc}$ for all $p\in (1,\infty)$. This is the content of the next theorem. To the best of our knowledge, in this generality the result is new even in the Euclidean setting.
\begin{theorem}\label{prop:reg positive subharm}
Let $(M,g)$ be a Riemannian manifold. Let $u \geq 0$ be an $L^1_{loc}(M)$-subharmonic distribution. Then, $u\in L^\infty_{loc}$ and $u^{s/2}\in W^{1,2}_{loc}$ for any $s\in (1,\infty)$.\\[-10pt]
Moreover, for all $\varphi \in C^\infty_c(M)$ and $1<s\leq p$, we have the estimate
\begin{align}\label{eq:caccioppoli-vp}
\frac{(s-1)^2}{s^2}\int_{\{\varphi\ge1\}} |\nabla u^{s/2}|^2 &\le
\int_M u^{s}|\nabla \varphi|^{2}
\le \left(\int_{\operatorname{supp}(\nabla \varphi)} u^p\right)^{s/p}\left(\int_M|\nabla \varphi|^{\frac{2p}{p-s}}\right)^{(p-s)/p}\, .
\end{align}
\end{theorem}
\begin{proof}
Fix $1 < s\le p < +\infty$ and a bounded open set $\Omega\Subset M$, and let $\varphi\in C^\infty_c(\Omega)$ be a cut-off function such that $0\le \varphi\le 1$ and $\varphi\equiv 1$ on some non-empty bounded open set $\Omega_{1}$, so that in particular $\operatorname{supp}(\varphi)\Subset\Omega\Subset M$. According to Theorem \ref{th:smoothing} there is a smooth approximation of $u$
\[
u_{k} \geq u_{k+1} \geq u \geq 0
\]
by (classical) solutions of $\Delta u_{k} \geq 0$ on $\Omega$. Note that, up to replacing $u_k$ with $u_k+1/k$, we can suppose that $u_k>0$.
Now, define $\psi = u_k^{s-1} \varphi^{2}$.
Since the $u_k$'s are smooth, strictly positive and subharmonic, we obtain
\begin{align}\label{eq_pre_caccioppoli}
0 &\geq -\int_M \Delta u_k \psi \\
\notag& = \int_M g(\nabla u_k,\nabla \psi)\\
\notag&= (s-1)\int_{M} \varphi^{2}u_k^{s-2}|\nabla u_k |^{2} + 2\int_{M} \varphi u_k^{s-1}g(\nabla u_k,\nabla \varphi)\\
\notag&\ge (s-1-\varepsilon)\int_{M} \varphi^{2}u_k^{s-2}|\nabla u_k |^{2} - \varepsilon^{-1} \int_{M} u_k^{s}|\nabla \varphi |^{2},
\end{align}
for any $\varepsilon \in (0,s-1)$. Elaborating, we get the Caccioppoli inequality
\begin{align}
\varepsilon(s-1-\varepsilon)\int_{\Omega_{1}} u_k^{s-2}|\nabla u_k|^{2}
&\leq \varepsilon(s-1-\varepsilon)\int_{M} \varphi^2 u_k^{s-2}|\nabla u_k|^{2}\\ \notag
&\leq \int_{M} u_k^{s}|\nabla \varphi|^{2}\\
\notag&\leq \|\nabla \varphi\|_\infty^2\int_{\Omega} u_k^{s}.
\end{align}
Choosing $\epsilon=(s-1)/2$ and using the fact that $u_k\ge u_{k+1}> 0$, this latter implies
\begin{gather}\label{e:Caccioppoli intermediate}
\int_{\Omega_{1}} |\nabla u_k^{s/2}|^{2}=\frac{s^2}{4} \int_{\Omega_{1}} u_k^{s-2}|\nabla u_k|^{2}\leq \frac{s^2}{(s-1)^2} \int_{\Omega} u_k^{s}|\nabla \varphi|^2\leq \frac{s^2\|\nabla \varphi\|_\infty^2}{(s-1)^2} \int_{\Omega} u_1^{s},
\end{gather}
Noticing also that $\|u_k^{s/2}\|_{L^2(\Omega_{1})}\le \|u_1^{s/2}\|_{L^2(\Omega_{1})}$, we have thus obtained that the sequence $\{u_k^{s/2}\}$ is bounded in $W^{1,2}(\Omega_{1})$, hence (up to extracting a subsequence) it converges weakly in $W^{1,2}(\Omega_{1})$ to some $v \in W^{1,2}(\Omega_{1})$. Since $u_k^{s/2}$ converges pointwise a.e. to $u^{s/2}$, necessarily $v =u^{s/2}$ a.e. on $\Omega_{1}$. In particular, $u^{s/2}\in W^{1,2}(\Omega_1)$. Moreover, letting $k\to\infty$ (along the subsequence) in \eqref{e:Caccioppoli intermediate}, we obtain
\begin{gather}
\int_{\Omega_{1}} |\nabla u^{s/2}|^{2}\leq\frac{s^2}{(s-1)^2} \int_{\Omega} u^{s}|\nabla \varphi|^2\leq \frac{s^2}{(s-1)^2}
\left(\int_{\operatorname{supp}(\nabla \varphi)} u^p\right)^{s/p}\left(\int_M|\nabla \varphi|^{\frac{2p}{p-s}}\right)^{(p-s)/p},
\end{gather}
where we used the H\"older inequality in the last step. \\[-10pt]
\end{proof}
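As a concrete illustration of \eqref{eq:caccioppoli-vp} (not needed in what follows), take $s=p$, a point $o\in M$ and $R>0$ such that $B_{2R}(o)\Subset M$, and a cut-off $\varphi\in C^\infty_c(B_{2R}(o))$ with $0\le\varphi\le1$, $\varphi\equiv 1$ on $B_R(o)$ and $\abs{\nabla \varphi}\leq 2/R$ (such a cut-off can be obtained, for instance, by smoothing a piecewise linear function of the distance from $o$). Then the first inequality in \eqref{eq:caccioppoli-vp}, together with the bound on $\abs{\nabla\varphi}$, gives the Caccioppoli-type estimate
\begin{gather}
\int_{B_R(o)}\abs{\nabla u^{p/2}}^2\leq \frac{4p^2}{(p-1)^2\,R^2}\int_{B_{2R}(o)\setminus B_R(o)} u^{p}\, .
\end{gather}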
\section{A variant of the Kato inequality}\label{sec:Kato}
The original inequality by T. Kato, \cite[Lemma A]{Ka}, states that if $u \in L^{1}_{loc}(M)$ satisfies $\Delta u \in L^{1}_{loc}(M)$ then
\[
\Delta | u | \geq \operatorname{sgn}(u) \Delta u
\]
or, equivalently, if we set $u_{+} = \max(u,0) = (u + |u|)/2$, it holds
\[
\Delta u_{+} \geq 1_{\{ u>0\}} \Delta u.
\]
Note that, in these assumptions, $|\nabla u| \in L^{1}_{loc}(M)$ (see \cite[Lemma 1]{Ka}) and, therefore, $u \in W^{1,1}_{loc}(M)$.
Under the sole requirement that $u \in W^{1,1}_{loc}(M)$ is such that $\Delta u = \mu$ is a (signed) Radon measure, a precise form of the Kato inequality was proved by A. Ancona in \cite[Theorem 5.1]{An} elaborating on ideas contained in the paper \cite{Fu} by B. Fuglede.
In the special case where $M = \mathbb{R}^{n}$ and $\Delta$ is the Euclidean Laplacian, H. Brezis, \cite[Lemma A.1]{Br} and \cite[Proposition 6.9]{Po}, observed that the local regularity of the function can be replaced by the condition that $u$ satisfies a differential inequality of the form $\Delta u \geq f$ in the sense of distributions, where $f \in L^{1}_{loc}$. The proof uses standard mollifiers to approximate $u$ by smooth solutions of the same inequality and, in particular, it works locally on $2$-dimensional Riemannian manifolds thanks to the existence of isothermal local coordinates. In the next result, we extend its validity to higher dimensional manifolds by using the existence of a sequence of smooth subharmonic approximations.
\begin{proposition}[Brezis-Kato inequality]\label{prop-BK}
Let $(M,g)$ be a Riemannian manifold. If $u \in L^{1}_{loc}(M)$ satisfies $\Delta u \geq f$ in the sense of distributions, for some $f \in L^{1}_{loc}(M)$, then $u_{+} \in L^{1}_{loc}(M)$ is a distributional solution of $\Delta u_{+} \geq 1_{\{u>0\}} f$.
\end{proposition}
We shall use the following approximation Lemma, see \cite[Lemma 2]{Ka}.
\begin{lemma}\label{lemma-approx}
Let $u \in L^{1}_{loc}(M)$ satisfy $\Delta u \in L^{1}_{loc}(M)$. Then, for any fixed compact coordinate domain $\Omega \Subset M$, there exists a sequence of functions $u_{k} \in C^{\infty}(\Omega)$ such that
\[
u_{k} \overset{L^{1}(\Omega)} {\longrightarrow} u\quad \text{and}\quad \Delta u_{k} \overset{L^{1}(\Omega)}{\longrightarrow} \Delta u.
\]
\end{lemma}
\begin{proof}[Proof (of Proposition \ref{prop-BK})]
We fix a smooth coordinate domain $\Omega \Subset M$ where subharmonic distributions possess a monotone approximation by smooth subharmonic functions. We consider the Dirichlet problem
\[
\begin{cases}
\Delta g = f & \text{ in }\Omega \\
g = 0 & \text{ on }\partial \Omega
\end{cases}
\]
and we note that, since $f \in L^{1}(\Omega)$, it has a unique solution $g\in W^{1,1}_{0}(\Omega)$; see \cite[Theorem 5.1]{LSW}. The new function
\[
w = u - g \in L^{1}(\Omega)
\]
is a subharmonic distribution, namely, $\Delta w \geq 0$.
Let $w_{k} \geq w_{k+1} \geq w$ be a monotonic approximation of $w$ by smooth solutions of $\Delta w_{k} \geq 0$ and define
\[
u_{k} = w_{k} + g .
\]
Note that, by construction,
\begin{equation}\label{approx}
i)\, u_{k} \searrow u \text{ a.e. in }\Omega, \quad ii)\, u_{k} \to u \text{ in }L^{1}(\Omega), \quad iii)\, \Delta u_{k} \geq f \text{ in }\Omega.
\end{equation}
Moreover, since the $w_{k}$ are smooth,
\[
u_{k} \in L^{1}(\Omega) \quad \text{and} \quad \Delta u_{k} \in L^{1}(\Omega).
\]
According to Lemma \ref{lemma-approx}, for each fixed $k$, let $\{ u_{k}^{n}\}_{n \in \mathbb{N}}$ be a sequence of smooth functions satisfying
\[
u_{k}^{n} \overset{L^{1}(\Omega)}{\longrightarrow} u_{k} \quad \text{and} \quad \Delta u_{k}^{n} \overset{L^{1}(\Omega)}{\longrightarrow} \Delta u_{k},\quad \text{as }n\to+\infty.
\]
Now, let $H:\mathbb R \to \mathbb R$ be a smooth function satisfying $H'(t),H''(t) \geq 0$. For any $n \ge 1$,
\begin{align*}
\Delta (H (u_{k}^{n})) \ge H'(u_{k}^{n}) \Delta u_{k}^{n}.
\end{align*}
For every $0\le \varphi\in C^\infty_c(\Omega)$,
\begin{equation}\label{eq:Hve}
\int_{\Omega} H(u_k^{n})\Delta \varphi \,\mathrm{d}x = \int_{\Omega} \Delta (H(u_k^{n}))\varphi \,\mathrm{d}x
\ge \int_{\Omega} \Delta u_k^{n} H'(u_k^{n})\varphi \,\mathrm{d}x.
\end{equation}
We apply this latter with $H(t)=H_\varepsilon(t)=(t+\sqrt{t^2+\varepsilon})/2$. First, we let $n \to \infty$.
Using the dominated convergence theorem we get
\[
\left|\int_{\Omega} (H_\varepsilon(u_k^{n})-H_\varepsilon(u_{k})) \Delta \varphi \,\mathrm{d}x\right| \leq \|H_\varepsilon'\|_{L^\infty}\| \Delta\varphi\|_{L^\infty}\int_{\Omega} |u_k^{n}-u_{k}|\,\mathrm{d}x \longrightarrow 0,
\]
and
\begin{align*}
&\left|\int_{\Omega} \Delta u^{n}_k H_\varepsilon'(u_k^{n})\varphi \,\mathrm{d}x-\int_{\Omega} \Delta u_{k}H_\varepsilon'(u_{k})\varphi \,\mathrm{d}x\right|\\
\le &\left|\int_{\Omega} (\Delta u_k^{n}- \Delta u_{k}) H_\varepsilon'(u_k^{n})\varphi \,\mathrm{d}x\right|+\left|\int_{\Omega} \Delta u_{k} (H_\varepsilon'(u_k^{n})-H_\varepsilon'(u_{k}))\varphi \,\mathrm{d}x\right|\\
\le & \|H_\varepsilon'\|_{L^\infty}\|\varphi\|_{L^\infty}\int_{\Omega} |\Delta u_{k}^{n}-\Delta u_{k}| \,\mathrm{d}x+\int_{\Omega} |\Delta u_{k}|\, |H_\varepsilon'(u_k^{n})-H_\varepsilon'(u_{k})|\, |\varphi| \,\mathrm{d}x \longrightarrow 0.
\end{align*}
Therefore, \eqref{approx}, \eqref{eq:Hve} and the fact that $H_{\varepsilon}',\varphi \geq 0$, yield
\begin{equation}\label{approx1}
\int_{\Omega} H_\varepsilon(u_{k}) \Delta \varphi \,\mathrm{d}x
\ge \int_{\Omega} H_\varepsilon'(u_{k}) \Delta u_{k} \varphi \,\mathrm{d}x \geq \int_{\Omega}H_{\varepsilon}'(u_{k})f \varphi \,\mathrm{d}x.
\end{equation}
Next we recall that $f \in L^{1}$ and, by \eqref{approx} ii), $u_{k} \to u$ in $L^{1}(\Omega)$. Thus, repeating the above estimates we can take the limit as $k \to +\infty$ in \eqref{approx1} and get
\[
\int_{\Omega} H_\varepsilon(u) \Delta \varphi \,\mathrm{d}x \geq \int_{\Omega}H_{\varepsilon}'(u)f \varphi \,\mathrm{d}x.
\]
Finally, we note that $H_\varepsilon(t) \to t_+$ uniformly on $\mathbb R$. Moreover the $H'_\varepsilon(t)$ are uniformly bounded both in $\varepsilon$ and in $t$, and converge pointwise a.e. as $\varepsilon\to 0$ to the characteristic function $1_{(0,+\infty)}(t)$. Letting $\varepsilon\to 0$ and applying again the dominated convergence theorem gives
\[
\int_{\Omega} u_+ \Delta\varphi \,\mathrm{d}x
\ge \int_{\Omega} 1_{\{ u>0 \}} f \varphi \,\mathrm{d}x,
\]
i.e.
\[
\Delta u_{+} \geq 1_{\{u>0\}} f\quad \text{distributionally in } \Omega.
\]
In order to conclude the proof, take a covering of $M$ by coordinate domains $\Omega_{j} \Subset M$ where the monotone approximation exists and consider a subordinated partition of unity $\{\eta_j\}$ such that $\eta_j\in C^\infty_c(\Omega_j)$ and $\sum_j\eta_j=1$. Given $0\le\psi\in C^\infty_c(M)$, one has
\[
\int_{M} u_+ \Delta\psi \,\mathrm{dv}
=\sum_j \int_{M} u_+ \Delta(\eta_j\psi) \,\mathrm{dv} \ge \sum_j \int_{\Omega_j} 1_{\{ u>0\}} f \eta_j \psi \,\mathrm{dv}= \int_{M} 1_{\{u>0\}} f\psi \,\mathrm{dv}.
\]
\end{proof}
A direct application of Theorem \ref{prop:reg positive subharm} and Proposition \ref{prop-BK} gives the following
\begin{corollary}\label{coro:BrezisPonce}
Let $(M,g)$ be a Riemannian manifold. If $u \in L^{1}_{loc}(M)$ satisfies $\mathcal{L} u = \Delta u - \l u \geq 0$, with $\l\ge 0$, then $u_{+} \in L^{1}_{loc}(M)$ is a non-negative subharmonic distribution, i.e., $\Delta u_{+} \geq 0$. In particular, $u_{+} \in L^{\infty}_{loc}(M)$ and, for any $p \in (1,+\infty)$, $u_{+}^{p/2} \in W^{1,2}_{loc}(M)$.
\end{corollary}
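Indeed, writing $f=\l u\in L^{1}_{loc}(M)$, Proposition \ref{prop-BK} gives $\Delta u_{+}\geq 1_{\{u>0\}}\,\l u=\l u_{+}\geq 0$ in the sense of distributions, and the regularity statements then follow from Theorem \ref{prop:reg positive subharm} applied to $u_{+}$.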
\begin{remark}
It is also possible to prove Corollary \ref{coro:BrezisPonce} by adapting the proof in \cite[Lemma A.1]{Br} and \cite[Proposition 6.9]{Po} to the Riemannian setting, up to replacing the approximation by convolution used therein with the monotone approximation provided by Theorem \ref{th:smoothing}.
However, this latter strategy seems not to work for general $f\in L^1_{loc}$ (i.e. not of the form $\lambda u$), as in Proposition \ref{prop-BK}. Although the special case $f=\lambda u$ would be enough for what needed in this article, we decided to state and prove the general form of the Brezis-Kato inequality as we feel that it can be useful to study more general PDEs on manifolds.
\end{remark}
Once we have a Brezis-Kato inequality, the $L^{p}$-PP property enters the realm of Liouville-type theorems for $L^{p}$-subharmonic distributions. This is explained in the next
\begin{lemma} \label{lemma-Lp-Sub}
Let $(M,g)$ be any Riemannian manifold. Consider the following $L^p$-Liouville property for subharmonic distributions, with $p\in(1,\infty)$:
\begin{equation}\label{Lp-Sub}\tag{$L-\displaystyle{Sub}_p$}
\begin{cases}
\Delta u \geq 0 \text{ on }M\\
u \geq 0 \text{ a.e. on }M\\
u \in L^{p}(M),
\end{cases}
\quad \Longrightarrow \quad
u \equiv const \text{ a.e. on } M.
\end{equation}
Then
\[
\eqref{Lp-Sub} \quad \Longrightarrow \eqref{P}.
\]
\end{lemma}
\begin{proof}
Suppose $u \in L^{p}(M)$ satisfies $\Delta u \leq u$. By Proposition \ref{prop-BK},
\[
\Delta (-u)_{+} \geq (-u)_{+} \geq 0\text{ on }M,
\]
i.e. $(-u)_{+} \geq 0$ is a subharmonic distribution. Obviously, $(-u)_{+}\in L^{p}(M)$. It follows from \eqref{Lp-Sub} that $(-u)_{+}$ is a.e. equal to a constant $c\geq 0$. If $c>0$, then $u\equiv -c$ a.e. on $M$, so that $\Delta u=0$ and the inequality $\Delta u\leq u$ would force $0\leq -c$, a contradiction; hence $(-u)_{+} = 0$ a.e. on $M$. This means precisely that $u \geq 0$ a.e. on $M$, thus proving the validity of \eqref{P}.
\end{proof}
\section{Parabolicity, capacity and \texorpdfstring{\eqref{P}}{Pp} property}\label{sec:parabolic}
Recall that, given $\Omega\subset M$ a connected domain in
$M$ and $D\subset \Omega$ a compact set, for $1 \le p < \infty$, the $p$-capacity of $D$ in $\Omega$ is defined
by $\operatorname{Cap}_p(D, \Omega) := \inf \int_\Omega
|d\varphi|^p$, where the infimum is among all Lipschitz functions compactly supported in $\Omega$ such that $\varphi \ge 1$ on $D$. Moreover, $D$ is said to be $p$-polar if $\operatorname{Cap}_p(D, \Omega)=0$ for every $\Omega \Supset D$. Finally, $M$ is $p$-parabolic if there exists a compact set
$D \subset M$ with non-empty interior such that $\operatorname{Cap}_p(D, M)=0$. Similarly, if the compact set $U$ disconnects $M$ into a finite number of ends $E_1,\dots,E_N$, we say that the end $E_j$ is $q$-parabolic if $\inf \int_M
|d\varphi|^q=0$, where the infimum is taken among all Lipschitz functions $\varphi$ such that $\varphi \ge 1$ on $M\setminus E_j$ and $\operatorname{supp}(\varphi)\cap E_j$ is compact.
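For instance, in $\mathscr{R}^n$ one can test the definition with the radial cut-offs $\varphi_R(x)=\min\cur{1,(2R-\abs x)_+/R}$, which are Lipschitz, compactly supported, identically $1$ on $B_R(0)$ and satisfy $\abs{d\varphi_R}\leq R^{-1}$; hence, for any compact set $D$ and $R$ large,
\begin{gather}
\operatorname{Cap}_q(D,\mathscr{R}^n)\leq \int_{B_{2R}(0)\setminus B_R(0)} R^{-q}\leq c_n\, R^{n-q}\, ,
\end{gather}
which tends to $0$ as $R\to\infty$ whenever $q>n$. Thus $\mathscr{R}^n$ is $q$-parabolic for every $q\geq n$ (for $q=n$ one can use logarithmic cut-offs instead).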
\begin{theorem}\label{th:caccio}
Let $p\in(1,\infty)$. Let $M$ be an open connected (not necessarily complete) manifold with a finite number of ends $E_1,\dots, E_N$. If each end $E_j$ is $q_j$-parabolic for some $\frac{2p}{p-1}<q_j\le\infty$, then \eqref{Lp-Sub} holds on $M$, so that in particular $M$ is $L^p$-PP.
\end{theorem}
\begin{proof}
Without loss of generality we can suppose that $q_1=\min_{j=1}^Nq_j$. Let $U$ be a compact set disconnecting $M$, so that $M=U\cup (\cup_{j=1}^N E_j)$, and let $u\in L^p(M)$ be a non-negative subharmonic distribution. Since $u\in L^{\infty}_{loc}$ by Theorem \ref{prop:reg positive subharm}, we have that $\|u\|_{L^{\infty}(U)}<\infty$. Fix an end $E_J$ and define the new function
\[
u_J:=\begin{cases}
(u-\|u\|_{L^{\infty}(U)})_+,&\text{in }E_J,\\0,&\text{elsewhere}.
\end{cases}
\]
Note that also $u_J$ is non-negative, subharmonic and in $L^p(M)$. Let $\{M_k\}$ be an exhaustion of $M$, i.e. $M_k\Subset M_{k+1}$ are compact and $M=\cup_kM_k$, and assume wlog that $M_1\supset U$. By definition of $q_J$-parabolicity of the end $E_J$, we can find a sequence of smooth compactly supported cut-offs $\{\varphi_k\}=\{\varphi_{J,k}\}$ such that $\varphi_k\equiv 1$ on $M_k\cup E_J^C$ and $\|\nabla\varphi_k\|_{L^{q_J}(E_J)}\to 0$ as $k\to\infty$. Applying \eqref{eq:caccioppoli-vp} to $u_J$ with $\varphi=\varphi_k$ and $2p/(p-s)=q_J$, i.e. $s=\frac{q_J-2}{q_J}p$, we get that for all $k$:
\begin{align}
\int_{\{\varphi_k\ge1\}} |\nabla u_J^{s/2}|^2 &\le C\left(\int_{\operatorname{supp}(\nabla \varphi_k)} u_J^p\right)^{s/p}\left(\int_M|\nabla \varphi_k|^{q_J}\right)^{\frac 2{q_J}}\, .
\end{align}
Taking the limit as $k\to \infty$, we deduce that $\nabla u_J^{s/2}=0$ a.e. on $M$, so that $u_J$ is constant and hence identically null.\label{key} In particular $u$ is bounded on $E_J$, so that $u\in L^{\frac{pq_J}{q_J-2}}(E_J)$.
Now, by \eqref{eq:caccioppoli-vp} we get
\begin{align*}
\frac{(p-1)^2}{p^2}\int_{\{\varphi\ge1\}} |\nabla u^{p/2}|^2 &\le\int_M u^{p}|\nabla \varphi|^{2} \\
&=\int_U u^{p}|\nabla \varphi|^{2} +\sum_{j=1}^N\int_{E_j} u^{p}|\nabla \varphi|^{2}
\end{align*}
for any $\varphi\in C^{\infty}_c(M)$. Insert into the latter $\varphi=\varphi_k$, where $\{\varphi_k\}$ is a family of cut-offs such that $\varphi_k\equiv 1$ on $M_k$ and, for any $j=1,\dots,N$, $\|\nabla\varphi_k\|_{L^{q_j}(E_j)}\to 0$ as $k\to\infty$. Thus,
\begin{align}\label{eq:caccioppoli-vp_1_ends}
\int_{\{\varphi_k\ge1\}} |\nabla u^{p/2}|^2 \le \frac{p^2}{(p-1)^2}\sum_{j=1}^N\left(\int_{E_j} u^{\frac{pq_j}{q_j-2}}\right)^{(q_j-2)/q_j}\left(\int_{E_j}|\nabla\varphi_k|^{q_j}\right)^{2/q_j}
\end{align}
goes to $0$ as $k\to\infty$. Hence $u$ is constant, which proves \eqref{Lp-Sub}; Lemma \ref{lemma-Lp-Sub} then yields the $L^p$-PP property, concluding the proof.
\end{proof}
Here we point out some immediate consequences of the previous theorem.
First, we can interpret the $\infty$-parabolicity of the whole manifold as the geodesic completeness; see e.g. \cite{PS-Ensaios} and also \cite{AMP}. This implies the following
\begin{corollary}\label{prop:Lp-Sub}
Let $p\in(1,\infty)$. Let $(M,g)$ be a complete Riemannian manifold. Then \eqref{Lp-Sub} holds, so that in particular $M$ is $L^p$-PP.
\end{corollary}
\begin{remark} \label{rem-subquadratic} As it is clear from the proof of Corollary \ref{prop:Lp-Sub}, on any given complete Riemannian manifold and for any $p\in(1,\infty)$, the Liouville property \eqref{Lp-Sub} holds in the stronger form:
\begin{equation*}
\begin{cases}
\Delta u \geq 0 \text{ on }M\\
u \geq 0 \text{ a.e. on }M\\
u \in L_{loc}^{p}(M)\text{ and }\|u\|^p_{L^p(B_{2k}(o)\setminus B_k(o))}=o(k^2),
\end{cases}
\quad \Longrightarrow \quad
u \equiv c \text{ a.e. on } M.
\end{equation*}
Accordingly, also the Positivity Preserving property holds in this class of functions larger than $L^p(M)$.
\end{remark}
\begin{remark}\label{rmk-Liouville}
The endpoint cases $p=1$ and $p=+\infty$ must be excluded.
The failure of \eqref{Lp-Sub} for these values of $p$ is well known. Namely, the hyperbolic space supports infinitely many bounded (hence positive) harmonic functions whereas, on the opposite side, positive, non-constant, $L^{1}$-harmonic functions were constructed by P. Li and R. Schoen in \cite{LS} on complete Riemann surfaces with (finite volume and) curvature decaying to $-\infty$ at a super-quadratic rate. While the existence of these functions tells us nothing about the failure of the $L^{p}$-PP property, counterexamples also to the latter have been provided in \cite{BM}.
\end{remark}
Similarly, restating Theorem \ref{th:caccio} in the simplest case, we have:
\begin{corollary}
Let $p\in(1,\infty)$. Let $M$ be an open (possibly incomplete) $q$-parabolic manifold for some $\frac{2p}{p-1}<q\le\infty$. Then $M$ is $L^p$-PP.
\end{corollary}
The third corollary deals with manifolds of the form $(N\setminus K, h)$, where $(N,h)$ is a complete Riemannian manifold and $K$ is a compact set. Indeed, $p$-parabolicity of $N\setminus K$ is naturally related to the capacity of $K$ in $N$.
\begin{corollary}\label{coro:LpPP cap}
Let $p\in(1,\infty)$. Let $N$ be a complete Riemannian manifold and $K\subset N$ a compact set. Suppose that $K$ is $q$-polar for some $q>\frac{2p}{p-1}$. Then $N\setminus K$ is $L^p$-PP.
\end{corollary}
Suppose now that $\dim_{\mathcal H}(K)<q<n-\frac{2p}{p-1}$. Hence $\mathcal H^q(K)=0$. By standard potential theory this implies that $K$ is $(n-q)$-polar; see for instance \cite{HKM} or \cite[Theorem 3.5]{Tr}.
Accordingly, we have the following straightforward consequence of Corollary \ref{coro:LpPP cap}.
\begin{corollary}\label{coro:LpPP Haus}
Let $p\in(1,\infty)$, let $N$ be an $n$-dimensional complete Riemannian manifold and let $K$ be compact set of $N$ such that $\dim_{\mathcal H}(K)<n-\frac{2p}{p-1}$. Then $N\setminus K$ is $L^p$-PP.
\end{corollary}
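For instance, for $p=2$ one has $\frac{2p}{p-1}=4$; hence, by Corollary \ref{coro:LpPP Haus}, removing from an $n$-dimensional complete Riemannian manifold, $n\geq 5$, any compact set of Hausdorff dimension strictly smaller than $n-4$ preserves the $L^2$-PP property.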
A natural question arising from Corollary \ref{coro:LpPP Haus} is what happens in the threshold case $q=\frac{2p}{p-1}$ and below the threshold. In this case, Hausdorff co-dimension $\frac{2p}{p-1}$ is not enough to preserve the \eqref{P} property, but a uniform Minkowski-type control is. We start by proving the latter statement, and give an example of failure for Hausdorff co-dimension in Example \ref{ex_not_good}.
\begin{proposition}\label{prop_Mink}
Let $(N,h)$ be a complete Riemannian manifold, and let $E$ satisfy a uniform Minkowski-type estimate of the form
\begin{gather}
\mathrm{vol}\ton{\mathscr{B} r E} \leq C r^{\frac{2p}{p-1}}\, \qquad \text{for some } \ p>1 \ \ \text{and all }\ r\in (0;1] \, .
\end{gather}
Then $N\setminus \overline E$ is an open manifold that satisfies the \eqref{Lp-Sub} property, and hence also the \eqref{P} property.
\end{proposition}
\begin{remark} In the assumptions of Proposition \ref{prop_Mink} we have in particular that all $L^2$ harmonic functions on $N\setminus \overline E$ are necessarily constant. In the very special case where $E$ is a point, this result has been recently proved in \cite[Theorem 8]{HMW}.
\end{remark}
\begin{remark}
For several significant examples of sets with non-integer Hausdorff dimension the Minkowski content is controlled at the right dimensional scale (this is the case, for instance, for self-similar fractal sets). Accordingly, in all these cases Proposition \ref{prop_Mink} also gives a simpler and more direct proof of Corollary \ref{coro:LpPP Haus}. Moreover, a recursive application of (the proof of) Proposition \ref{prop_Mink} also permits us to deal with sets $K$ whose Minkowski content is not finite at the right dimension, yet suitably controlled. For instance, this occurs whenever
$K=\cup_{j=0}^N K_j$ can be decomposed as the union of a finite increasing family of compact sets $K_j\subset K_{j+1}$ such that $K_0$ and $K_{j+1}\setminus K_j$, $j=0,\dots,N-1$, have finite $(n-\frac{2p}{p-1})$-dimensional Minkowski content, as in the toy example $K=\{0\}\cup \{1/j\}_{j\ge 1}\subset \mathbb{R}\subset\mathbb{R}^n$. However, there exist compact sets for which the local Minkowski dimension is larger than the Hausdorff dimension around any point of $K$, so that they satisfy the $L^p$-PP property, but the latter cannot be deduced through the technique introduced in Proposition \ref{prop_Mink}. Examples presenting this feature are self-affine sets, \cite{Be,McM,Ba}, i.e., roughly speaking, self-similar-type fractal sets of $\mathbb{R}^n$ whose similarity factor changes according to the direction.
\end{remark}
The proof of Proposition \ref{prop_Mink} is a direct consequence of the following possibly standard removable singularity lemma for $L^p$ functions.
\begin{lemma}\label{lem:laplacian extension}
Let $(N,h)$ be a complete Riemannian manifold, and let $E$ satisfy a uniform Minkowski-type estimate of the form
\begin{gather}
\mathrm{vol}\ton{\mathscr{B} r E} \leq \mathscr{C} r^{\frac{2p}{p-1}}\, \qquad \text{for some } \ p>1 \ \ \text{and all }\ r\in (0;1] \, .
\end{gather}
If $u\in L^p (N\setminus E)=L^p(N)$ and the distributional Laplacian of $u$ on $N\setminus \overline E$ is non-negative, then the distributional Laplacian of $u$ on $N$ is non-negative.
\end{lemma}
\begin{proof}
Notice that the uniform volume estimates imply that $E$ is a bounded set, and the estimates are stable under closure, meaning that $\mathrm{vol}\ton{\mathscr{B} r E} =\mathrm{vol}\ton{\mathscr{B} r {\overline E}}$.
Let $\psi_k$ be a sequence of cutoff functions with $\operatorname{supp}\ton{\psi_k}\subseteq \mathscr{B}{2k^{-1}}{E}$ and
\begin{itemize}
\item $\psi_k=1$ on $\mathscr{B}{k^{-1}}E$
\item $\abs{\nabla \psi_k}_\infty\leq ck$
\item $\abs{\nabla^2 \psi_k}_\infty \leq ck^2$. \end{itemize}
For instance, the $\psi_{k}$'s can be obtained by smoothing out the (Lipschitz) distance function from $E$. Let also $\varphi\in C^\infty_c(N)$ be any non-negative test function. We have the distributional identity
\begin{gather}
\int_N \varphi \Delta u = \int_N \varphi (1-\psi_k)\Delta u + \int_N \varphi \psi_k \Delta u\, .
\end{gather}
Since $\varphi (1-\psi_k)\in C^\infty_c(N\setminus \overline{E})$ and it is non-negative, by hypothesis $\int_N \varphi (1-\psi_k)\Delta u\geq 0$. Moreover, we can estimate (here $A_k= \mathscr{B} {2k^{-1}}{E}\setminus \mathscr{B} {k^{-1}}{E}$):
\begin{gather}
\abs{\int_N \varphi \psi_k \Delta u}\leq \int \abs{u} \qua{\abs{\psi_k} \abs{\Delta \varphi}_\infty +2\abs{\nabla \psi_k} \abs{\nabla \varphi}_{\infty} + \abs{\Delta \psi_k}\abs{\varphi}_\infty }\leq\\
\notag \leq c \int \abs{u} \qua{\abs{\psi_k} +\abs{\nabla \psi_k} + \abs{\Delta \psi_k}}\leq c\int_{\mathscr{B} {2k^{-1}}{E}} \abs{u} +c \int_{A_k} \ton{k \abs{u} + k^2 \abs{u}} \leq\\
\notag \leq c\int_{\mathscr{B} {2k^{-1}}{E}} \abs{u} +c \ton{\int_{A_k} \abs{u}^p}^{\frac{1}{p}}\underbrace{\ton{k^{2q} \mathrm{vol}\ton{A_k}}^{\frac{1}{q}}}_{\leq \mathscr{C}}\leq c\qua{\int_{\mathscr{B} {2k^{-1}}{E}} \abs{u} + \mathscr{C} \ton{\int_{A_k} \abs{u}^p}^{\frac{1}{p}}}
\end{gather}
where $q$ is the H\"older exponent $q=\frac{p}{p-1}$. Since $u\in L^p$, $\abs{\int_N \varphi \psi_k \Delta u}\to 0$ as $k\to \infty$, and this concludes the proof.
\end{proof}
\begin{example}\label{ex_not_good} We note that a control on the Hausdorff dimension in the threshold case is not enough. Namely, there exists a compact $K\subset \mathbb{R}^4$ such that $\dim_{\mathcal H}(K)=0$ but $\mathbb{R}^4\setminus K$ is not $L^2$-PP (note that for $n=4$ and $p=2$ the threshold co-dimension is $\frac{2p}{p-1}=4=n$).
Such a $K$ can be constructed as a generalized Cantor set. Let $\cur{\alpha_j}_{j=1}^\infty$ be a sequence of numbers in $(0,1)$, and consider the generalized Cantor set $C_\alpha$ constructed by starting with $K_0=[0,1]$, removing from $K_0$ the open middle interval of relative length $\alpha_1$ to produce $K_1$, and so on, at each step removing from every interval of $K_j$ its open middle part of relative length $\alpha_{j+1}$.
Define $C_\alpha=\cap_{j=0}^\infty K_j$ and $K=C_\alpha\times\{0\}_{\mathbb{R}^3}\subset \mathbb{R}^4$. We choose $\alpha_j=\frac{10^{j}-2}{10^j}$.\footnote{The resulting set is a perfect set formed by all the numbers $x$ in $[0,1]$ such that in the decimal expansion of $x$ the digits between the $2^{j}$-th and the $(2^{j+1}-1)$-th are either all $0$ or all $9$.} Clearly, $\dim_{\mathcal H}(C_\alpha)=0$, since it is smaller than $\log_{10^j}2=j^{-1}\log_{10}2$ for every $j\in\mathbb N$. Now, let $\mu$ be a Borel measure supported on $K$ such that $\mu(\mathbb{R}^4)=\mu(K)=1$ and $\mu(I\times\{0\}_{\mathbb{R}^3})=2^{-j}$ for every connected component $I$ of $K_j$. The measure $\mu$ can be constructed as follows. Let $\tilde{\mathcal H}^{\log_3 2}$ be the normalized $\log_32$-dimensional Hausdorff measure restricted to the standard middle-third Cantor set $C_{1/3}$, so that $\tilde{\mathcal H}^{\log_3 2}(C_{1/3})=1$. Namely, $C_{1/3}=C_{\alpha'}$ with $\alpha'_j=1/3$ for all $j$, and we name $K'_j$ the iterative steps in the construction of $C_{1/3}$. Consider a continuous increasing bijective function $f:[0,1]\to[0,1]$ constructed recursively as follows: for every $j\in\mathbb N$, $f$ maps $K'_{j}\setminus K'_{j+1}$ onto $K_{j}\setminus K_{j+1}$. Define $\mu|_{C_\alpha}$ as the push-forward measure of $\tilde{\mathcal H}^{\log_3 2}$ through $f$ and extend it by zero to obtain a measure $\mu$ on the whole $\mathbb{R}^4$.
An easy computation shows that $\mathrm{diam}(I)=\prod_{k=1}^{j}10^{-k}=10^{-\frac{j^2+j}{2}}$ for every connected component $I$ of $K_j$, while any two such components have distance at least $8$ times larger. Thus, for every $x\in\mathbb{R}^4$, one has $\mu(B_\epsilon(x))\le 2^{-j}$ as soon as $\epsilon\le 4\cdot 10^{-\frac{j^2+j}{2}}$, i.e.
$2\log_{10}(1/\epsilon)\ge j^2+j-2\log_{10}(4)\ge j^2$, so that
\[
\mu(B_\epsilon(x))\le 2^{-\sqrt{\log_{10}(1/\epsilon^2)}}\le (\log^2(1/\epsilon))^{-2}
\]
for $\epsilon$ small enough. Applying a modified version of Lemma \ref{lem:valto-adam} (see Remark \ref{rmk:log growth}) we deduce that $\mathbb{R}^4\setminus K$ is not $L^2$-PP.
Note that, in this example, $\mathcal H^0(K)=\infty$. We do not know whether there exist compact sets $K\subset\mathbb{R}^n$ of finite $(n-2p/(p-1))$-dimensional Hausdorff measure such that $\mathbb{R}^n\setminus K$ is not $L^p$-PP.
\end{example}
We conclude this section with the following lemma, whose proof is similar to the proof of Lemma \ref{lem:laplacian extension}. This result is used to complete the argument for the validity of Corollary \ref{cor_core} stated in the Introduction.
\begin{proposition}
\label{prop:density}
Let $(N,h)$ be a complete Riemannian manifold. Fix $p>1$ and let $E$ satisfy a uniform Minkowski-type estimate of the form
\begin{gather}
\mathrm{vol}\ton{\mathscr{B} r E} \leq \mathscr{C} r^{q}\, \qquad \text{for some } \ q>p \ \ \text{and all }\ r\in (0;1] \, .
\end{gather}
Then the space $C^\infty_c(N\setminus E)$ is dense in the space
\[
\widetilde W^{2,p}(N\setminus E):=\{u\in L^p(N\setminus E)\ :\ \Delta u\in L^p(N\setminus E)\}
\]
with respect to its canonical norm $\|f\|_{\widetilde W^{2,p}(N\setminus E)}^p=\|f\|^p_{L^{p}(N\setminus E)}+\|\Delta f\|^p_{L^{p}(N\setminus E)}$.
\end{proposition}
\begin{proof}
Let $f\in \widetilde W^{2,p}(N\setminus E)$. Then $f \in L^p(N)$ and, by the proof of Lemma \ref{lem:laplacian extension}, the distributional Laplacian of $f$ on $N$, denoted again by $\Delta f$, is in $L^p(N)$.
By a result due to O. Milatovic, there exists a sequence of functions $\eta_j$ in $C^\infty_c(N)$ which converges to $f$ in $\widetilde{W}^{2,p}(N)$ as $j\to\infty$; see \cite[Theorem 5]{GP}.
Fix $j$ and let $\psi_k$ be the cut-off functions introduced in the proof of Lemma \ref{lem:laplacian extension}. Since $(1-\psi_k)\eta_j\in C^\infty_c(N\setminus \overline E)$, to conclude the proof it suffices to show that $(1-\psi_k)\eta_j$ converges to $\eta_j$ in $\widetilde{W}^{2,p}(N\setminus E)$ as $k\to\infty$. To this end, compute
\begin{align*}
\|\eta_j-(1-\psi_k) \eta_j\|_{\widetilde{W}^{2,p}(N\setminus E)}&\le \|\psi_k\eta_j\|_{L^p(N\setminus E)}+\|\psi_k\Delta \eta_j\|_{L^p(N\setminus E)} \\
&+ \| \eta_j\Delta\psi_k\|_{L^p(N\setminus E)} + 2\| |\nabla \eta_j||\nabla\psi_k|\|_{L^p(N\setminus E)}\nonumber
\end{align*}
The first two terms on the RHS vanish by the dominated convergence theorem. Moreover,
\begin{align*}
&\| \eta_j\Delta\psi_k\|_{L^p(N\setminus E)}\le \|\eta_j\|_{L^\infty}\| \Delta\psi_k\|_{L^p(N\setminus E)}\le c \|\eta_j\|_{L^\infty} k^{2-2q/p}\to 0, \\&\| |\nabla \eta_j||\nabla\psi_k|\|_{L^p(N\setminus E)}\le \||\nabla \eta_j|\|_{L^\infty}\| |\nabla \psi_k|\|_{L^p(N\setminus E)}\le c \||\nabla \eta_j|\|_{L^\infty} k^{1-2q/p}\to 0,
\end{align*}
as $k\to\infty$.
\end{proof}
\section{Removing sets of large size}\label{sec_Kbig}
As one might expect, Corollary \ref{coro:LpPP Haus} does not hold if the set $K$ is too large. Counterexamples can be easily built by looking at solutions of $\Delta f= \mu$, where $\mu$ is the right dimensional Hausdorff measure restricted to $K$.
\begin{example}
As a model example, consider $\mathscr{R}^n$ with $n\geq 3$ and a subset $K$ of the form $K= P \cap \overline{\mathscr{B} 1 0}$, where $P$ is a $k$-dimensional plane passing through the origin. If we focus on the manifold $\mathscr{R}^n\setminus K$, by Proposition \ref{prop_Mink} the \eqref{P} and \eqref{Lp-Sub} properties hold as long as $k\leq n-\frac{2p}{p-1}$.
In the other cases, we can consider the measure $\mu_K= \mathcal{H}^k|_K$ and the fundamental solutions of
\begin{gather}
\Delta u_1 = -\mu_K\, , \qquad \qquad -\Delta u_2 + u_2 = -\mu_K\, .
\end{gather}
These solutions $u_1$ and $u_2$ can be easily written in terms of the Green's function $\abs{x-y}^{2-n}$ and the Bessel potential $J_2(x-y)$:
\begin{gather}
u_1(x)=\int_{\mathscr{R}^n} d\mu_K(y) \ \abs{x-y}^{2-n} \, , \qquad \qquad u_2(x)=-\int_{\mathscr{R}^n} d\mu_K(y) \ J_2(x-y)
\end{gather}
By the estimates in \cite{adams1} (see also \cite[Lemma 10.12]{Po}), the functions $u_1$ and $u_2$ belong to $L^p(\mathscr{R}^n\setminus K) =L^p(\mathscr{R}^n)$, and they are clearly solutions of
\begin{gather}
\Delta u_1 =0 \qquad \text{on} \ \ \mathscr{R}^n\setminus K\\
\Delta u_2 -u_2=0 \qquad \text{on} \ \ \mathscr{R}^n\setminus K
\end{gather}
However, $u_1$ is not a constant, and $u_2$ is not non-negative (indeed $u_2<0$ everywhere and, for $k\geq n-2$, one even has $u_2(x)\to -\infty$ as $x\to 0$). Thus $u_1$ is a counterexample to property \eqref{Lp-Sub} on $\mathscr{R}^n \setminus K$, and similarly $u_2$ is a counterexample to property \eqref{P} on $\mathscr{R}^n\setminus K$.
\end{example}
In general, we have the following.
\begin{proposition}\label{prop_no_good}
Let $(N,h)$ be a complete Riemannian manifold of dimension $n\geq 2$, and $\emptyset\neq K\Subset N$ have Hausdorff dimension $k$. If $k>n-\frac{2p}{p-1}$ for some $p\in(1,\infty)$, then the open manifold $(N\setminus K,h)$ does not enjoy either the \eqref{Lp-Sub} or the \eqref{P} property. \end{proposition}
\begin{proof}
The proof of this proposition is fairly standard, and follows the ideas laid out in the previous example.
First, we remark that the result is trivial if
$n=2$, as in this case, for a fixed $y_0\in K$, the Bessel potential $J_2(x,y_0)$ is in $L^p$ and thus (after a change of sign) gives a counterexample to the \eqref{P} property. Similarly, the Green's function of a $2$-dimensional manifold is in $L^p$ for all $p\in (1,\infty)$, and can be used to produce a counterexample to the \eqref{Lp-Sub} property.
Moreover, if $k\ge n-2$, then we can replace $K$ with a subset $K'\subset K$ of Hausdorff dimension $k'\in (n-\frac{2p}{p-1},n-2)$. If $N\setminus K'$ does not enjoy the $L^p$-PP property, then \textit{a fortiori} also $N\setminus K$ does not enjoy it.
Hence, in the following
we can thus assume that
\begin{gather}\label{eq_kPDCD1K}
n\geq 3 \qquad \text{and}\qquad n-2>k>n-\frac{2p}{p-1}\, .
\end{gather}
Since all the ideas involved in the proof are local, it is convenient to assume that $K\Subset \Omega'\Subset \Omega$, where $\Omega$ is a relatively compact coordinate neighborhood with smooth boundary. Thus we have at our disposal the fundamental solutions of $\Delta$ and $\Delta -1$ on $\Omega$ with Dirichlet boundary conditions, namely the Green's function $G(x,y)$ and the Bessel potential $J_2(x,y)$. For convenience, we choose the signs such that $G(x,y)\sim +c\, d(x,y)^{2-n}$, and similarly $J_2(x,y)\sim + c\, d(x,y)^{2-n}$, i.e.
\begin{gather}
\Delta G(x,y)=-\delta_{x=y}\, , \qquad \Delta J_2(x,y)-J_2(x,y)=-\delta_{x=y}\, .
\end{gather}
Recall that the Bessel potential can be defined for example with the Heat Kernel (see \cite[eq 4.2]{strichartz}) by:
\begin{gather}
J_2(x,y)= \int_0^\infty dt \ e^{-t} H_t(x,y)\, .
\end{gather}
We recall the comparison with the Green's function:
\begin{gather}
G(x,y) = \int_0^\infty dt H_t(x,y)\, .
\end{gather}
The integral defining $J_{2}$ converges absolutely for almost all $x,y$ to a positive function, symmetric in $x$ and $y$, and it is easy to see that, similarly to the Green's function, we have for all $x,y\in \Omega'$
\begin{gather}
cd(x,y)^{2-n}\leq J_2(x,y)\leq C d(x,y)^{2-n}\, .
\end{gather}
Notice that the second inequality is valid also globally on $N$, see e.g. \cite[theorem 7.1]{LSW}.
\vspace{5mm}
\textbf{Measure estimates.} If $\operatorname{dim}(K)>k$, then the $k$-dimensional Hausdorff measure $\mathcal{H}^k$ of $K$ is infinite. We want to show that there is a subset $S\subset K$ and a (potentially large) constant $C>1$ such that
\begin{gather}\label{eq_PDMTDCB}
\forall \, r ,\, x\, : \qquad \qquad \mathcal{H}^k(\mathscr{B} r x \cap S) \leq Cr^k\, ,\\
\notag \exists x \ \ s.t. \ \ \limsup_{r\to 0} r^{-k}\mathcal{H}^k(\mathscr{B} r x \cap S) \geq C^{-1}
\end{gather}
By the standard result \cite[theorem 8.19]{mattilone} applied to $K$, there exists a compact subset $S_1\subset K$ with positive and finite $k$-dimensional Hausdorff measure. If we consider the upper density of the measure $\mathcal{H}^k|_{S_1}$, i.e. the quantity
\begin{gather}
\Theta^{*,k}(x)=\limsup_{r\to 0} \frac{\mathcal{H}^k(S_1\cap \mathscr{B} r x)}{\omega_k r^k}\, ,
\end{gather}
we have by \cite[theorem 6.2]{mattilone} that for $\mathcal{H}^k$-almost all $x\in S_1$
\begin{gather}\label{eq_dens}
2^{-k}\leq \Theta^{*,k}(x)\leq 1
\end{gather}
and $\Theta^{*,k}(x)=0$ for $\mathcal{H}^k$-almost all $x\not\in S_1$.
Let $S_2\subseteq S_1$ be the subset where \eqref{eq_dens} holds, and let $r_u:S_2\to (0,\infty)$ be defined by
\begin{gather}
r_u(x) = \sup \cur{r>0 \ \ s.t. \ \ \forall s\leq r: \ \ \mathcal{H}^k(S_1\cap \mathscr{B} s x) \leq 7\omega_k s^k}\, .
\end{gather}
Since $r_u(x)>0$ for all $x\in S_2$, then the measurable subsets $\tilde S_i\subset S_2$ defined by
\begin{gather}
\tilde S_i = \cur{x\in S_2 \ \ s.t. \ \ r_u(x)>i^{-1}}
\end{gather}
constitute a monotone sequence converging to $S_2$, and thus there exists some $\hat i$ such that
\begin{gather}
\mathcal{H}^k(\tilde S_{\hat i})>\mathcal{H}^k(S_1)/2>0\, .
\end{gather}
Now if we consider the subset $S=\tilde S_{\hat i}$, we have that for all $x\in \mathscr{B} {\hat i^{-1}}S$:
\begin{gather}
\mathcal{H}^k(\mathscr{B} r x \cap S) \leq \begin{cases}
7\omega_k (2r)^k & \text{if } \ r\leq \hat i^{-1}/2\, ,\\
\mathcal{H}^k(S_1)<\infty& \text{if } \ r\geq \hat i^{-1}/2\, .
\end{cases}
\end{gather}
Thus we can conclude that there exists a constant $C$ (depending on $\hat i$) such that $\mathcal{H}^k(\mathscr{B} r x \cap S) \leq Cr^k$ for all $x,r$. Moreover, since $\mathcal{H}^k(S)>0$, by \cite[theorem 6.2]{mattilone}, $\Theta^{*,k}(S,x)>2^{-k}$ for $\mathcal{H}^k$-almost all $x\in S$, and thus there exists a point $\bar x\in S$ such that
\begin{gather}
\limsup_{r\to 0} r^{-k}\mathcal{H}^k(\mathscr{B} r {\bar x} \cap S) \geq 2^{-k}\, .
\end{gather}
Thus we have proved all the desired properties in \eqref{eq_PDMTDCB}. For convenience, from now on we will use the notation
\begin{gather}
\mu_K(E):= \mathcal{H}^k(S\cap E)\,
\end{gather}
to indicate a measure satisfying \eqref{eq_PDMTDCB}.
\medskip
\textbf{Counterexamples to the \ref{Lp-Sub} and \ref{P} properties.} We will prove that the fundamental solutions of
\begin{gather}\label{e:fundamental}
\Delta u_1 = -\mu_K\, , \qquad \qquad -\Delta u_2 + u_2 = -\mu_K\, .
\end{gather}
with Dirichlet boundary conditions satisfy $u_1,u_2\in L^p$, $u_1(x)$ is positive but not constant and $u_2(x)$ is not bounded from below.
\medskip
\textbf{Lower bounds.} First of all, we notice that if $x$ is a point of density for $\mu_K$, i.e. if $\limsup_{r\to 0} r^{-k} \mu(\mathscr{B} r x )\geq c>0$, then
\begin{gather}
\limsup_{y\to x} u_1(y)=\limsup_{y\to x} \int G(z,y)d\mu_K(z)\geq \limsup_{y\to x} \int_{\mathscr{B} r x} G(z,y)d\mu_K(z)\, .
\end{gather}
Given that there are infinitely many radii $r_i\to 0$ such that $\mu_K(\mathscr{B} {r_i}{x})\geq c r_i^{k}$, we have that if $d(x,y)\sim r_i$:
\begin{gather}
u_1(y) \geq c r_i^{2-n} r_i^{k} \to +\infty\, ,
\end{gather}
where we used \eqref{eq_kPDCD1K}. In a similar way, we obtain that around a density point $x$ we have
\begin{gather}
\liminf_{y\to x} u_2(y) =-\infty\, .
\end{gather}
The most delicate part of the proof consists of the $L^p$ estimates which, up to a straightforward adaptation to the Riemannian setting, are contained in \cite{adams1} (see also \cite[Lemma 10.12]{Po}). For the reader's convenience, we report a proof in the following lemma, applied here with $s=k$ (note that the condition $k>n-\frac{2p}{p-1}$ in \eqref{eq_kPDCD1K} is equivalent to $p<\frac{n-k}{n-2-k}$); this will conclude the proof of this proposition.
\end{proof}
\begin{lemma}\label{lem:valto-adam}
Let $0<s< n-2$ and $\mu$ be a positive measure with support in $\mathscr{B} 1 P$ for some $P$, and suppose that
\begin{gather}\label{e:morrey growth}
\mu(\mathscr{B} r x)\leq C r^s\, , \qquad \forall x,r\, .
\end{gather}
Suppose also that $\operatorname{Vol}(\mathscr{B} r x)\leq C r^n$ for all $x\in \Omega'$ and $r\leq 2$. Then
\begin{gather}
\int G(x,y) d\mu(y)\in L^p\, , \qquad p\in \ton{\frac{n}{n-2},\frac{n-s}{n-2-s}}\, ,\\
\int J_2(x,y)d\mu(y)\in L^p\, , \qquad p\in \ton{\frac{n}{n-2},\frac{n-s}{n-2-s}}\, .
\end{gather}
\end{lemma}
\begin{proof}
We consider the Green's function case, since the Bessel potential estimates are completely analogous. We have that
\begin{gather}
\notag \int G(x,y)d\mu(y) \leq C \int d(x,y)^{2-n} d\mu(y)=C \int_0^\infty dr \ \mu \cur{ d^{2-n}>r } = C \int_0^\infty dr \ \mu \cur{ d<r^{\frac 1 {2-n}} }=\\
= C (n-2)\int_0^\infty d\hat r \ \hat r^{1-n}\ \mu \cur{ d<\hat r } = C\int_0^\infty \frac {dr}{r} \frac 1 {r^{n-2}} \mu(\mathscr{B} r x)\, .
\end{gather}
Let $1<q<\infty$ and $q'$ be its conjugate exponent. By (the continuous version of) the Minkowski inequality
\begin{gather}
\norm{\int G(x,y)d\mu(y)}_{q'} \leq C\int_0^\infty \frac {dr}{r^{n-1}} \norm{\mu(\mathscr{B} r x)}_{L^{q'}(x)}\, .
\end{gather}
We estimate very simply
\begin{gather}
\ton{\mu\ton{\mathscr{B} r x} }^{\frac{q}{q-1}}= \ton{\mu\ton{\mathscr{B} r x} }^{\frac{1}{q-1}} \mu\ton{\mathscr{B} r x} \leq \min \ton{C r^s, \ \mu(M)}^{\frac 1 {q-1}}\mu\ton{\mathscr{B} r x}\, .
\end{gather}
Notice also that
\begin{gather}
\int_{M} dx \ \mu(\mathscr{B} r x) = \int_M dx \ \int_{\mathscr{B} r x} d\mu(y) = \int_M d\mu(y) \int_{\mathscr{B} r y} dy \leq C r^n \mu(M)\, .
\end{gather}
Putting these two together we get
\begin{gather}
\norm{\mu}_{q'}^{q'}= \int_M dx \ \mu\ton{\mathscr{B} r x}^{\frac q{q-1}} \leq \min \ton{C r^s, \ \mu(M)}^{\frac 1 {q-1}}\int_M \mu\ton{\mathscr{B} r x}\leq \\
\notag \leq C \min \ton{C\mu(M) r^{\frac s{q-1} +n}, \ \mu(M)^{q'} r^n}\, ,\\
\notag \norm{\mu}_{q'}\leq C \min \ton{C\mu(M)^{\frac {q-1}{q}} r^{\frac s{q} +\frac{n(q-1)}{q}}, \ \mu(M) r^\frac{n(q-1)}{q}}
\end{gather}
Thus, if
\begin{gather}\label{eq_cond_ns}
2q-n<0\, , \qquad s+2q-n>0\, ,
\end{gather}
we get $\forall \kappa>0$:
\begin{gather}\label{e:k}
\norm{\int G(x,y)d\mu(y)}_{q'} \leq C\mu(M)^{\frac{q-1}{q}} \int_0^\kappa r^{\frac {s-n}{q}+1} + C\mu(M) \int_\kappa^\infty r^{1-\frac n q} =\\
\notag = C\mu(M)^{\frac{q-1}{q}} \int_0^\kappa r^{\frac {s+q-n}{q}} + C\mu(M) \int_\kappa^\infty r^{\frac {q-n} q} = C \mu(M)^{\frac{q-1}{q}} \kappa^{\frac {s+2q-n}{q}} + C \mu(M) \kappa^{\frac {2q-n} q}
\end{gather}
By choosing the ``best'' $\kappa$, the one which minimizes this last expression, which is $\kappa=c\, \mu(M) ^{\frac 1 s}$, we get
\begin{gather}
\norm{\int G(x,y)d\mu(y)}_{q'} \leq C \mu(M)^{\frac{q-1}{q}} \kappa^{\frac {s+2q-n}{q}} + C \mu(M) \kappa^{\frac {2q-n} q} =C \mu(M)^{1-\frac{n-2q}{sq}}\, .
\end{gather}
Choosing $q'=p$, we get the result. Notice that since $q$ must satisfy \eqref{eq_cond_ns}, i.e., $q\in \ton{\frac {n-s}{2},\frac n 2}$ we have that $p\in \ton{\frac{n}{n-2}, \frac{n-s}{n-s-2}}$.
Notice also that the exponent $1-\frac{n-2q}{sq}>0$ if $q>\frac{n-s}{2}$.
\end{proof}
\begin{remark}\label{rmk:log growth}
Note that Lemma \ref{lem:valto-adam} remains true also for $p=\frac{n-s}{n-2-s}$ if one assumes, instead of \eqref{e:morrey growth}, the stronger condition
\begin{gather}
\mu(\mathscr{B} r x)\leq C r^s(1+\abs{\log(r)})^{-2}\, , \qquad \forall x,r, \ \ 0\le s< n-2\, .
\end{gather}
Indeed, under this assumption one can choose $\kappa=\infty$ in \eqref{e:k}.
\end{remark}
\section{Acknowledgments} We are indebted to Andrea Bonfiglioli and Ermanno Lanconelli for clarifying some points about
their article \cite{BL}.
We also thank Peter Sj\"ogren for sharing with us some useful comments about the Brelot-Hervé theory of subharmonic functions and the smoothing approximation procedure. Finally, we are very grateful to Batu G\"uneysu for pointing out to us the BMS conjecture, and for several fruitful discussions on the topic over the last years.
Partially supported by INDAM-GNAMPA.
\bibliographystyle{aomalpha}
\section{Introduction}
``What is reasonable is real. That which is real is reasonable.''
This famous proposition from Hegel, saying that
everything has its ``logic'', often resonates in
Alice's mind. Alice is a verification engineer responsible
for safety-critical cyber-physical systems (CPS). She
advocates the use of formal methods with requirements specified
in logic, as part of the development of complex CPS.
Formal specifications enable rigorous reasoning about a CPS product (for example its model checking or systematic testing) during all its design phases, as well as during operation (for example via runtime verification)~\cite{bartocci2018specification}. Alice is frustrated by the resistance of her colleagues
to adopt formal methods in their design methodology.
She is aware that one major bottleneck to a wider acceptance of these techniques is the steep learning curve required to translate informal requirements expressed in natural language into formal specifications. The correspondence between a requirement written in English and its temporal logic formalization
is not always straightforward, as illustrated in the example below:
\begin{itemize}
\item \textbf{English Requirement}:
Whenever V\_Mot is detected to become equal to 0, then at a time point starting after at most 100 time units Spd\_Act shall continuously remain on 0 for at least 20 time units.
\item \textbf{Signal Temporal Logic (STL)}:
$\textbf{G} (\textbf{rise} (\text{V\_Mot} = 0) \rightarrow \textbf{F}_{[0,100]} \textbf{G}_{[0,20]} \ (\text{Spd\_Act} = 0))$
\end{itemize}
Bob is Alice's colleague and an expert in machine learning. He
introduces Alice to the tremendous achievements in
natural language processing (NLP), demonstrated by
applications such as Google Translate and DeepL. Alice is
impressed by the quality of translations between natural languages.
She realizes that NLP is a technology that can
reduce the gap between engineers and formal methods, and significantly
increase the acceptance of rigorous
specifications.
However, Alice also observes
that this potential solution does not come without challenges.
In order to build a translator from one spoken language to another, a huge amount of text is available in both languages for training, and a range of mature translation solutions already exists. In contrast, for translating CPS requirements given in natural language into formal specifications, there are two major challenges:
\begin{itemize}
\item \textbf{Challenge 1}: Lack of available training data. Informal requirement documents are scarce and often not publicly available, and formal specifications are even scarcer.
\item \textbf{Challenge 2}: No mature solutions for translating English requirements into formal specifications, where special features of these two languages need to be considered.
\end{itemize}
In this paper, as a first attempt to adopt NLP to tackle the above two challenges, we propose DeepSTL, a method and associated tool for the translation of CPS requirements
given in relatively free English to Signal Temporal Logic (STL)~\cite{maler2013monitoring}, a formal
specification language used by
both academia and industry in the CPS domain. To develop DeepSTL we
address the following five research questions (\textbf{RQ}), the solutions of which are also our main contributions.
\begin{description}
\item[RQ1:] What kind of empirical statistics of STL requirements, found in scientific literature, can guide data generation?
\item[RQ2:] How to generate synthetic examples of STL requirements consistently with the empirically collected statistics?
\end{description}
The first two research questions are related to \textbf{Challenge 1}. For \textbf{RQ1}, empirical STL statistics in the literature and in practice are analyzed in Section~\ref{sec:empirical_stl}. For \textbf{RQ2}, we design, in Section~\ref{sec:corpus_construction}, a systematic grammar-based generation of synthetic data sets, consisting of pairs of STL formulae and their associated sets of possible English translations.
\begin{description}
\item[RQ3:] How effective is DeepSTL in learning synthetic STL?
\item[RQ4:] How well does DeepSTL extrapolate to STL requirements found in scientific literature?
\item[RQ5:] How do alternative deep learning mechanisms used in machine translation compare to DeepSTL's transformer architecture?
\end{description}
The last three research questions are relevant to \textbf{Challenge 2}. They are addressed in Section~\ref{sec:machine_translation} and discussed in Section~\ref{sec:discussion}. We employ a transformer-based NLP architecture, whose attention mechanism enables the efficient training of accurate translators from English to STL. We also compare DeepSTL with other machine-translation techniques with respect to translation performance on the synthetic STL training and test data sets, and with respect to their ability to extrapolate beyond them.
The collected data and the implementation codes in this paper can be found in the provided links\footnote{\label{repo}\emph{Artifact DOI:} \url{10.6084/m9.figshare.19091282} \\ \emph{Code repository:} \url{https://github.com/JieHE-2020/DeepSTL}}.
\section{Related Work}
\label{sec:realated}
\noindent
\textbf{From Natural Language to Temporal Logics}\quad
Despite many approaches proposed in the literature~\cite{NelkenF96,DwyerAC99,Ranta11,Kress-GazitFP08,YanCC15,AutiliGLPT15,BrunelloMR19,Nikora2009,LignosRFMK15,Rongjie2015,GhoshELLSS16,FantechiGRCVM94,Dzifcak2009,Christopher2015}, the problem of translating natural language requirements into temporal logics remains open~\cite{BrunelloMR19}. The main challenge arises from translating an ambiguous and context-sensitive language into a more precise and context-free formal language. To facilitate and guide the translation process, most of the available methods require the use of predefined specification patterns~\cite{DwyerAC99,Sasha2005,AutiliGLPT15} or of a restricted and more controlled natural language~\cite{SantosCS18,Rongjie2015,Kress-GazitFP08}.
Handling the direct translation of unconstrained natural languages is instead more cumbersome. Other works~\cite{NelkenF96,Dzifcak2009,LignosRFMK15,GhoshELLSS16} address this problem by translating the natural language expression first into an intermediate representation. Then,
the translation process continues by applying a set of manually predefined rules/macros mapping the intermediate representation into temporal logic expressions. These
approaches are centered on Linear Temporal Logic (LTL)~\cite{ltl}, a temporal logic suitable to reason about events occurring
over logical-time Boolean signals.
In this paper we consider instead (for the first time to the best of our knowledge) the problem of automatic translation of unconstrained English sentences into Signal Temporal Logic (STL)~\cite{maler2013monitoring}, a temporal logic that extends LTL with operators to express temporal properties over dense-time real-valued signals. STL is a well-established formal language employed in both academia and advanced industrial research labs to specify requirements for CPS engineering~\cite{boufaied2021signal,HoxhaAF14}.
\noindent
\textbf{Semantic Parsing}\quad Our problem can be considered an example of semantic parsing~\cite{BrunelloMR19}. This task consists in automatically translating sentences written in a context-sensitive natural language into machine-understandable expressions such as executable code or logical representations.
Semantic parsers are automatically learned from a set of utterances written in natural languages that are annotated with the semantic interpretation in the target language. Some relevant toolkits~\cite{BrunelloMR19} available for developing semantic parsers are WASP~\cite{WongM06}, SEMPRE~\cite{sempre}, KRISP~\cite{KateM06}, SippyCup~\cite{sippycup}, and Cornell Semantic Parsing~\cite{Artzi:16spf}.
Applications of semantic parsing include the translation of questions or commands expressed in natural-language into Structured-Query-Language (SQL) queries~\cite{Yaghmazadeh0DD17,Fei2014,tang2000,ZelleM96,abs-1709-00103}, Python code~\cite{OdaFNHSTN15},
bash commands~\cite{LinWZE18}, and other domain specific languages~\cite{Quirk2015}.
The main difficulty for this task is to learn the set of semantic rules that can cover all the potential ambiguity arising when translating from a context-sensitive natural language. Thus, to be effective, this task requires a large training data set.
In order to cope with the lack of publicly available informal-requirement and formal-specification data sets, we first design a grammar-based generation technique of synthetic data, where each output is a random STL formula and its associated set of possible English translations. Then we address a neural-translation problem, where a deep neural network is trained to predict, given the utterance in English, the optimal STL formula expressing it. Our work leverages general-purpose deep learning frameworks such as PyTorch~\cite{Ketkar2017} or TensorFlow~\cite{AbadiBCCDDDGIIK16}, and state-of-the-art solutions based on transformers and their attention mechanisms~\cite{VaswaniSPUJGKP17}.
\section{Signal Temporal Logic (STL)}
\label{sec:signal}
Signal Temporal Logic (STL) with both {\em past} and {\em future} operators is a formal specification
language used by academic researchers and practitioners to formalize temporal requirements
of CPS behaviors. STL allows one to express real-time requirements over continuous-time,
real-valued behaviors. An example is the following simple bounded response property:
\emph{Whenever V\_In is above 5, then there must exist a time point in the next 10 time units, at which the value of signal V\_Out should be less than 2.}
The syntax of an STL formula $\varphi$ over a set $X$ of real-valued variables is defined by the grammar:
$$
\begin{array}{lcl}
\varphi & := & x \sim u~|~\neg \varphi~|~\varphi_1 \vee \varphi_2~|~\varphi_1 \textbf{U}_I \varphi_2~|~\varphi_1 \textbf{S}_I \varphi_2 \\
\end{array}
$$
\noindent where $x \in X$, $\sim \in \{\geq, >, =, <, \leq \}$, $u \in \mathbb{Q}$,
$I \subseteq [0, \infty)$ is a non-empty interval. For intervals of the form $[a,a]$, we will use the notation $\{a\}$ instead.
With respect to a signal $w\,{:}\,X\,{\times}\,\mathbb{T}\,{\to}\,\mathbb{R}$ defined over the time domain $\mathbb{T}=[0,d)$, the semantics of an STL formula is described via the satisfaction relation $(w,i) \models \varphi$, indicating that the signal $w$ satisfies $\varphi$ at time $i$:
$$
\begin{array}{lcl}
(w,i) \models x \sim u & \leftrightarrow & w(x,i) \sim u \\
(w,i) \models \neg \varphi & \leftrightarrow & (w,i) \not \models \varphi \\
(w,i) \models \varphi_1 \vee \varphi_2 & \leftrightarrow & (w,i) \models \varphi_1 \; \textrm{or} \; (w,i) \models \varphi_2 \\
(w,i) \models \varphi_1 \textbf{U}_I \varphi_2 & \leftrightarrow & \exists j \in (i + I) \cap \mathbb{T} \; \textrm{:} \; (w,j) \models \varphi_2 \; \\
& & \textrm{and } \forall i < k < j, (w,k) \models \varphi_1 \\
(w,i) \models \varphi_1 \textbf{S}_I \varphi_2 & \leftrightarrow & \exists j \in (i - I) \cap \mathbb{T} \; \textrm{:} \; (w,j) \models \varphi_2 \; \\
& & \textrm{and } \forall j < k < i, (w,k) \models \varphi_1 \\
\end{array}
$$
We use $\textbf{S}$ and $\textbf{U}$ as syntactic sugar for the {\em untimed} variants of the {\em since} $\,\textbf{S}_{(0,\infty)}$ and
{\em until} $\,\textbf{U}_{(0,\infty)}$ operators. From the basic definition of STL, we can derive the following standard operators.
$$
\begin{array}{llcl}
\text{tautology} & \textbf{true} & = & p \vee \neg p \\
\text{contradiction} & \textbf{false} & = & \neg \textbf{true} \\
\text{conjunction} & \varphi_1 \wedge \varphi_2 & = & \neg(\neg \varphi_1 \vee \neg \varphi_2) \\
\text{implication} & \varphi_1 \to \varphi_2 & = & \neg \varphi_1 \vee \varphi_2 \\
\text{eventually, finally} & \textbf{F}_I \varphi & = & \textbf{true}\,\textbf{U}_I\,\varphi \\
\text{always, globally} & \textbf{G}_I \varphi & = & \neg \textbf{F}_I \neg \varphi \\
\text{once} & \textbf{O}_I \varphi & = & \textbf{true}\,\textbf{S}_I\,\varphi \\
\text{historically} & \textbf{H}_I \varphi & = & \neg \textbf{O}_I \neg \varphi \\
\text{rising edge} & \textbf{rise}(\varphi) & = & \varphi \wedge (\neg \varphi \, \textbf{S} \, \textbf{true}) \\
\text{falling edge} & \textbf{fall}(\varphi) & = & \neg \varphi \wedge (\varphi \, \textbf{S} \, \textbf{true}) \\
\end{array}
$$
We can now formalize the rather verbose English description of the above \textit{Bounded response} requirement, with a succinct STL formula as follows:
$
\begin{array}{c}
\textbf{G} (\text{V\_In} > 5 \rightarrow \textbf{F}_{[0,10]} (\text{V\_Out} < 2 )).
\end{array}
$
This formula can be directly used during the verification of a CPS before it is deployed, or to generate a monitor checking the safety of the CPS after its deployment.
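To make the semantics concrete, the following is a minimal sketch of a discrete-time monitor for this bounded response formula; it assumes a uniformly sampled trace with one sample per time unit, and the signal values are illustrative.
\begin{verbatim}
# Sketch of a discrete-time monitor for
#     G(V_In > 5 -> F_[0,10](V_Out < 2))
# assuming one sample per time unit; windows truncated at the end of the
# trace are treated pessimistically. Signal values are illustrative.
def eventually_within(signal, i, lo, hi, pred):
    """True if pred holds at some sample j with i+lo <= j <= i+hi."""
    end = min(i + hi, len(signal) - 1)
    return any(pred(signal[j]) for j in range(i + lo, end + 1))

def bounded_response(v_in, v_out, bound=10):
    """Check G(v_in > 5 -> F_[0,bound](v_out < 2)) over the sampled trace."""
    assert len(v_in) == len(v_out)
    for i in range(len(v_in)):
        if v_in[i] > 5:                                  # trigger of the response
            if not eventually_within(v_out, i, 0, bound, lambda x: x < 2):
                return False                             # obligation violated
    return True

v_in  = [0, 6, 6, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0]
v_out = [5, 5, 5, 4, 3, 2, 1, 1, 1, 1, 1, 1, 1]
print(bounded_response(v_in, v_out))   # True: V_Out drops below 2 in time
\end{verbatim}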
\section{Empirical STL Statistics}
\label{sec:empirical_stl}
In order to address the relative lack of publicly
available STL specifications, we develop a synthetic-training-data generator, as described in
Section~\ref{sec:corpus_construction}. Instead of
exploring completely random STL sentences, the
generator should focus on the creation of commonly used
STL specifications. In addition, every STL formula
shall be associated to a set of natural
language formulations, with commonly used sentence structure
and vocabulary.
We analyzed over $130$ STL specifications
and their associated English-language formulation, from
scientific papers and industrial documents.
The investigated literature covers multiple application
domains:
specification patterns~\cite{boufaied2021signal},
automatic
driving~\cite{HoxhaAF14,jin2014powertrain,gladisch2019experience},
robotics~\cite{liu2017distributed,kapoor2020model,aksaray2016q,liao2020survey,liu2021model},
time-series analysis~\cite{chen2020temporal} and
electronics~\cite{maler2013monitoring,BTS5016-2EKA}. Although
this literature contains data that is not
statistically exhaustive, it still provides valuable information
to guide the design of the data generator and address
the research question \textbf{RQ1}.
We present our results on the statistical analysis of the STL
specifications in Section~\ref{subsec:analysis_stl} and
of their associated natural-language requirements in
Section~\ref{subsec:analysis_natural}.
\subsection{Analysis of STL Specifications}
\label{subsec:analysis_stl}
We conducted two main types of analysis for the STL specifications
encountered in the literature: (1)~Identification of
common temporal-logic templates, and (2)~Computation of the frequency of
individual operators. During analysis, we made several other relevant observations that we report at the end of this subsection.
\subsubsection{STL-Templates Distribution}
\label{subsubsec:stl_templates}
We identified four common STL templates that we
call: \textit{Invariance/Reachability},
\textit{Immediate response},
\textit{Temporal response} and \textit{Stabilization/Recurrence}.
\vspace*{1mm}\noindent \textbf{Invariance/Reachability template:}
Bounded and unbounded invariance and reachability
are the simplest temporal STL properties. They have the form $\textbf{G} \varphi$, $\textbf{G}_{[a,b]} \varphi$,
$\textbf{F} \varphi$ or $\textbf{F}_{[a,b]} \varphi$,
where $\varphi$ is an atomic predicate. We provide one
example of bounded-invariance (BI)~\cite{jin2014powertrain},
and one example of
unbounded-reachability (UR)~\cite{kapoor2020model}
specification, respectively, as encountered
in our investigation:
$$
BI: \textbf{G}_{[\tau_{s},T]} (\mu < c_{l}), \quad
UR: \textbf{F} (x > 0.4)
$$
\noindent \textbf{Immediate response template:} This template represents
formulas of the form $\textbf{G} (\varphi \rightarrow \psi)$,
where $\varphi$ and $\psi$ are atomic propositions or their Boolean combinations. Except for the starting $\textbf{G}$ operator, there are no other temporal operators in the formula. An example of
an immediate response (IR) specification is the one
from~\cite{boufaied2021signal}:
$$
IR: \textbf{G} (\text{not\_Eclipse} = 0 \rightarrow \text{sun\_currents} = 0).
$$
\noindent \textbf{Temporal response template:} This template represents
formulas of the form $\textbf{G} (\varphi \rightarrow \psi)$,
where $\varphi$ and $\psi$ can have non-nested temporal
operators. We illustrate several TR specifications that
we encountered in the literature. They all belong to this class:
$$
\begin{array}{lll}
TR1: \textbf{G} (\textbf{rise} (\text{Op\_Cmd} = \text{Passive}) \rightarrow \textbf{F}_{[0,500]} \text{Spd\_Act} = 0) \\
TR2: \textbf{G} (\text{currentADCSMode} = \text{SM} \rightarrow P \ \textbf{U}_{[0,10799]} \ \neg P ) & \text{\cite{boufaied2021signal}} \\
TR3: \textbf{G} (\textbf{rise} (\text{gear\_id} = 1) \rightarrow \textbf{G}_{[0,2.5]} \neg \textbf{fall} (\text{gear\_id} = 1)) & \text{\cite{HoxhaAF14}} \\
\end{array}
$$
\noindent In $TR2$ above, $P \equiv \text{real\_Omega} - \text{target\_Omega} = 0$.
\vspace*{1mm}
\noindent \textbf{Stabilization/Recurrence template:} These templates represent
formulas allowing one nesting of the temporal operators.
Typical nesting is $\textbf{G}\textbf{F}\,\varphi$ for recurrence (RE), and $\textbf{F}\textbf{G}\,\varphi$ for stabilization (ST),
with their bounded counterparts. Here $\varphi$ is a non-temporal
formula. The following specifications from the literature are in this category:
$$
\begin{array}{ll}
ST: \textbf{F}_{[0,14400]} \textbf{G}_{[4590,9963]} \ (x_{10} \geq 0.325 ) & \text{\cite{chen2020temporal}} \\
RE: \textbf{G}_{[0,12]} ( \textbf{F}_{[0,2]} \ \text{regionA} \ \wedge \ \textbf{F}_{[0,2]} \ \text{regionB}) & \text{\cite{liao2020survey}} \\
\end{array}
$$
\noindent \textbf{Other formulas:} These
are formulas that do not fall into any of the above categories.
The following specification belongs to this class:
$$
\textbf{G}( \textbf{rise} (\varphi_1) \rightarrow \textbf{F}_{[0,t_{1}]} (\textbf{rise} (\varphi_2) \ \wedge (\varphi_2 \ \textbf{U}_{[t_{2},t_{3}]} \ \varphi_3 )) )
$$
It captures the following requirement:
\emph{Whenever the precondition $\varphi_{1}$ becomes true,
there is a time within $t_1$ units where $\varphi_2$
becomes true and continuously holds until $\varphi_3$
becomes true within another interval $[t_2, t_3]$.}
This pattern is used in the electronics field~\cite{maler2013monitoring} to describe the situation where one digital signal tracks another~\cite{BTS5016-2EKA}.
\vspace*{1mm}
\noindent \textbf{Statistics:}
We encountered $39$ \textit{Invariance/Reachability} ($30.0\%$),
$27$ \textit{Immediate response} ($20.8\%$), $33$ \textit{Temporal
response} ($25.4\%$) and $31$ \textit{Stabilization/Recurrence} ($23.8\%$)
templates. The category of Other templates is orthogonal to the first four, since it collects ad hoc formulas: overall, 13 ($39.4\%$) of the \textit{Temporal
response} and 6 ($19.3\%$) of the \textit{Stabilization/Recurrence} templates also belong to this type.
\subsubsection{STL-Operators Distribution}
\label{subsubsec:stl_operator_distribution}
We investigated the distribution of the STL-operators as
encountered in the specifications found in the above-mentioned literature.
Figure~\ref{fig:operator_occurrence_times} summarizes
our results.
\begin{figure}[H]
\centering
\begin{tikzpicture}
{\small
\tkzKiviatDiagram[scale=0.3,label distance=.4cm,
radial = 1,
gap = 1.6,
lattice = 5]{$\ \ \ x = u$,$x > u$, $x \geq u$,
$x < u$, $x \leq u$, $\textbf{rise}(\varphi)$,$\textbf{fall}(\varphi)$,
$\textbf{F}_I \varphi$, $\textbf{G}_I \varphi$, $\varphi_1 \textbf{U}_I \varphi_2$,
$\textbf{O}_I \varphi$, $\textbf{H}_I \varphi$, $\varphi_1 \textbf{S}_I \varphi_2$,
$\neg \varphi $, $\varphi_1 \wedge \varphi_2$, $\varphi_1 \vee \varphi_2$,
$\varphi_1 \rightarrow \varphi_2$
}
\tkzKiviatLine[thick,mark=ball,mark size=4pt,color=darkgray,
fill=green!25,opacity=.5](4.84/1.2, 1.4/1.2, 2.12/1.2, 1.84/1.2, 2.12/1.2, 1.96/1.2, 0.88/1.2, 2.72/1.2, 6.08/1.2, 1.04/1.2, 0.2/1.2, 0.44/1.2, 0.32/1.2, 1.12/1.2, 4.2/1.2, 0.52/1.2, 3.52/1.2)
\tkzKiviatGrad[prefix=,unity=30](1)}
\end{tikzpicture}
\caption{Frequency Distribution of STL Operators.}
\label{fig:operator_occurrence_times}
\vspace{-1ex}
\end{figure}
We first discuss the (atomic) numeric predicates.
We observe that the equality operator occurs
more often in numerical predicates
than the other ($\leq$, $<$, $\geq$, $>$) relations.
This happens because many specifications refer
to discrete mode signals and equality is used to
check if a discrete variable is in a given mode.
Conjunction and implication are the two most
frequently used Boolean operators. Conjunction
is often used to specify that a signal must lie within
a given range. Implication is used in a wide range of
response specifications.
The $\textbf{rise}$ and $\textbf{fall}$ operators
are typically used in front of an atomic predicate
(for example $\textbf{rise}(x > \mu)$). The frequency of
rising edges is higher than that of falling edges,
which can be explained by the fact that many specifications
refer to time instants where a condition starts holding,
rather than when it stops holding.
The $\textbf{G}$ operator has a much higher frequency than any other temporal operator.
This is not surprising because $86.2\%$ (112/130) of the
specifications are invariance or response or recurrence properties
that start with an always operator. The $\textbf{F}$ operator
ranks second and is often used in specifications
of robotic applications to define reachability
objectives. We also remark that ``eventually'' is used
in bounded/unbounded reachability and in stabilization properties.
We finally observe that future temporal
operators ($\textbf{G}$, $\textbf{F}$, $\textbf{U}$) are used more often than their past counterparts ($\textbf{O}$, $\textbf{H}$, $\textbf{S}$)
and that unary temporal operators
($\textbf{G}$, $\textbf{F}$, $\textbf{H}$, $\textbf{O}$) are used more often
than the binary ones ($\textbf{U}$, $\textbf{S}$).
These two observations are explained by the fact that
most declarative specifications have a natural
future flavor (a trigger now implies an obligation that
must be fulfilled later) and unary temporal operators
are easier to understand and handle.
\subsubsection{Other Observations}
\label{subsubsec:other_observations}
In this section, we discuss additional findings that
we discovered during the analysis of the STL specifications occurring
in the literature:
\begin{itemize}
\item We found a frequent usage of the pattern
$|x -y|$ to denote the pointwise distance between
signals $x$ and $y$, especially in the motion control
applications~\cite{gladisch2019experience,liu2017distributed,kapoor2020model}.
\item Some publications use abstract predicates to denote
complex temporal patterns, without providing their detailed
formalization. One such example is the use of the predicate
$\textsf{spike}(x)$ to denote a spike occurring within the
signal $x$~\cite{boufaied2021signal}.
\item It is relatively common in the literature to decompose a complex STL
specification into multiple simpler ones, by giving
a name to a sub-formula and using that name as an
atomic proposition in the main formula.
\item Time bounds in temporal operators and signal
thresholds are sometimes given as parameters, rather
than constants.
\item $\textbf{rise}$, $\textbf{fall}$ and past temporal operators are normally used as pre-conditions, while future operators are often used as post-conditions. Negation is used conservatively, e.g., $\neg\textbf{fall}$ is used to represent that a particular stabilized condition should hold for a designated time interval~\cite{maler2013monitoring}.
\end{itemize}
\subsection{Analysis of NL Specifications}
\label{subsec:analysis_natural}
In this section we investigate the usage of natural
language (NL) in the literature to express informal
requirements, which are then formalized using STL.
In particular, we identified the English vocabulary used
to formulate STL operators and sentences, and
studied the quality,
accuracy and preciseness of the language.
\subsubsection{English formulation of STL sentences}
\label{subsubsec:english_formulation}
We considered several aspects (e.g.,~nouns, verbs, and adverbs) when studying the use
of natural language in the formulation of:
\begin{itemize}
\item Numeric (atomic) predicates,
\item Temporal operators (phrases),
\item Specific scenarios (e.g.,~a rising/falling edge).
\end{itemize}
The main outcome of this analysis is that the language
features used in the studied requirements are unbalanced
and sparse
and that it is hard to identify a general recurring pattern.
We illustrate this observation with two representative
examples:
\begin{itemize}
\item \textbf{Example 1}: We counted different English
utterances to express the semantics of $x > \mu$.
The most frequently used collocation is
``be above'', which appears four times overall.
Next comes ``increase above'' (somewhat ambiguous, because it may also denote a rising edge), which is used twice.
Then ``be higher than'', ``be larger than'' and ``be greater than'' are each used
only once. However, we do not find any
requirements using other synonymous expressions
like ``be more than'' or ``be over''.
\item \textbf{Example 2}: We observed that two
temporal adverbs are frequently used to express
$\textbf{G}_{[0,t]}$ and $\textbf{H}_{[0,t]}$, which are
``for at least $t$ time units ($s, ms$, etc.)''
(eight times) and ``for more than $t$ time units''
(six times). However, other reasonable
possibilities like ``for the following/past $t$ time units'' are not found.
\end{itemize}
The sparsity and lack of balance
may be a consequence of the relatively small base of publicly available
literature that defines this type of requirement.
Despite the fact that the findings of this analysis may not
be sufficiently representative, we can still use the
outcomes to improve our synthetic generation of examples.
\subsubsection{Language Quality}
For the English requirements found in the literature, of
particular interest is the language quality:
How accurately does a requirement reflect
the semantics of its corresponding STL formula?
Given this criterion, we classify the studied English requirements into {\em Clear}, {\em Indirect} and
{\em Ambiguous} requirements.
\vspace*{1mm}
\noindent \textbf{Clear:} These requirements have
a straightforward STL formalization that results in an unambiguous specification without room for interpretation.
An example of a clear requirement is the sentence: \emph{If the value of signal \textit{control\_error} is less than $10^{\circ}$, then the value of signal \textit{currentADCSMode} shall be equal to \textit{NMF}}~\cite{boufaied2021signal}. The resulting
STL specification is given by the formula:
$$
\textbf{G} (\text{control\_error} < 10 \rightarrow \text{currentADCSMode} = \text{NMF})
$$
\noindent \textbf{Indirect:} These requirements need an expert to translate them into an STL formula that
faithfully captures the intended meaning. They
typically assume some implicit knowledge that must
be added to the formal specification from the context.
An example is the sentence: \emph{The vehicle shall stay within the lane boundaries, if this is possible with the actuators it is equipped with}~\cite{gladisch2019experience}. This is an indirect
requirement formalized using the following
STL formula:
$$
\textbf{G} (\tau < \tau_{max} \rightarrow \textbf{P})
$$
\noindent Here $\textbf{P}$ is the contextual sub-formula: $\textit{vehicle} \subseteq \textit{corridor}$.
\vspace*{1mm}
\noindent \textbf{Ambiguous:} These requirements lack
key information that cannot be easily inferred from
the context and that must be extracted from external
sources, such as tables, figures, timing diagrams, or experts.
They use vague and ambiguous language,
and can have multiple interpretations. An example is the following sentence: \emph{To prevent the destruction of the device by avalanche due to high voltages, there is a voltage clamp mechanism $Z_{DS(AZ)}$ implemented, that limits negative output voltage to a certain level $V_S-V_{DS(AZ)}$. Please refer to Figure 10 and Figure 11 for details}~\cite{BTS5016-2EKA}. This is an ambiguous requirement that can be translated to the following STL formulas:
$$
\begin{array}{c}
\textbf{G} (V_{OUT} < V_{GND} \wedge I_L > 0 \rightarrow V_{OUT} = V_S-V_{DS(AZ)})\\
\textbf{G} (V_{OUT} < V_{GND} \rightarrow V_{OUT} = V_S-V_{DS(AZ)})\\
\end{array}
$$
\noindent The English requirement only vaguely mentions the post-condition. The pre-condition characterizes the drop of voltage $V_{OUT}$ below $V_{GND}$ when the inductive load is being switched off. This is obtained from the previous context and Figure 11 of~\cite{BTS5016-2EKA}, together with the physical knowledge that the inductive current has to change smoothly.
We encountered $46$ \textit{Clear} ($35.4\%$),
$43$ \textit{Indirect} ($33.1\%$), and $41$ \textit{Ambiguous} ($31.5\%$)
English requirements.
\section{Corpus Construction}
\label{sec:corpus_construction}
This section addresses research question \textbf{RQ2}. It first introduces a new method for the
automatic generation of STL sentences and their
associated natural language requirements. The
generator incorporates the outcomes from Section~\ref{sec:empirical_stl} for improved results.
Finally, we use this method to generate the actual
STL-specification/NL-requirement pairs.
\subsection{Corpus Generation}
\label{subsec:stl_eng_generation}
In the following, we propose an automatic procedure for randomly generating synthetic examples. Each example consists of: (1)~An STL formula,
and (2)~A set of associated sentences in English
that describe this formula. We associate multiple
natural-language sentences to each formal STL
requirement to reflect the fact that formal specifications
admit multiple natural language formulations.
We illustrate this observation using the
\textit{Bounded response} specification from Section~\ref{sec:signal}, formalized as the STL formula below:
\vspace*{-1mm}
$$
\textbf{G}(\text{V\_In} > 5 \rightarrow \textbf{F}_{[0,10]} ( \text{V\_Out} < 2 ))
$$
This admits multiple synonymous English
formulations, including:
\begin{itemize}
\item Globally, if the value of V\_In is greater than 5, then finally the value of V\_Out should be smaller than 2 at a time point within 10 time units.
\item It is always the case that when the signal V\_In is larger than 5, then eventually at sometime during the following 10 time units the signal V\_Out shall be smaller than 2.
\end{itemize}
This example shows that two NL formulations of the same STL formula can be very different, making the generation of synthetic
examples a challenging task. The systematic translation
of unrestricted STL is indeed extremely difficult, especially
for specifications that include multiple nesting of
temporal operators. In practice, deep nesting of temporal
formulas is rarely used because the resulting specifications
tend to be difficult to understand.
Hence, we first restrict STL to a rich but well-structured sub-fragment that facilitates a fully automated translation,
while at the same time covering commonly used specifications.
\subsubsection{Restricted-STL Fragment}
\label{subsubsec:support_grammar}
In this subsection, we present the restricted fragment of STL that
we support in our synthetic example generator.
We define this fragment using three layers that
can be mapped to the syntax hierarchy identified in Section~\ref{subsubsec:stl_templates}.
The bottom layer, called
\textbf{simple-phrase} (\textbf{SP})
layer, consists of: (1)~\textbf{Atomic propositions}
($\bm{\alpha}$) including rising and falling edges
and (2)~\textbf{Boolean combinations} of up to two
atomic propositions.
\begin{align*}
\alpha \ \ := \ \
& \ x \circ u~|~\neg (x \circ u)~|~\textbf{rise}(x \circ u)~|~\textbf{fall}(x \circ u)~| \\
& \ ~\neg \textbf{rise}(x \circ u)~|~\neg \textbf{fall}(x \circ u) \\
\text{SP} \ \ := \ \
& \ \alpha~|~\alpha \wedge \alpha~|~\alpha \vee \alpha~
\end{align*}
\noindent where $x$ is a signal name, $u$ is a constant or a mode name, and
$\circ \in \{ <, \le, =, \ge, > \}$.
The middle layer, which we call
\textbf{temporal-phrase} (\textbf{TP}) layer,
admits the specification of temporal formulas
over simple phrases:
\begin{align*}
\text{TP} \ \ := \ \
& \ {\text{TP}}^{\prime}~|~\neg {\text{TP}}^{\prime}~|~\textbf{rise}~{\text{TP}}^{\prime}~|~\textbf{fall}~{\text{TP}}^{\prime}~|~\neg \textbf{rise}~{\text{TP}}^{\prime}~|~\neg \textbf{fall}~{\text{TP}}^{\prime}\\
{\text{TP}}^{\prime} \ \ := \ \
& \ \textbf{UTO}_I~(\alpha)~|~(\alpha)~\textbf{BTO}_I~(\alpha)
\end{align*}
where $\textbf{UTO} \in \{\textbf{F}, \textbf{G}, \textbf{O}, \textbf{H}\}$ and $\textbf{BTO} \in \{\textbf{U}, \textbf{S}\}$ are unary and binary temporal operators, respectively. $I$ is an interval of the form $[t_1,t_2]$ with
$0 \le t_1 < t_2 \le \infty$. This can be omitted if $t_{1}=0$ and $t_{2}= \infty$.
The top layer, which we call single \textbf{nested-temporal-phrase (NTP)} layer, allows the formulation of formulas with a single nesting of
a subset of temporal operators:
\begin{align*}
\text{NTP} \ \ := \ \
& \ \textbf{F}_I\textbf{G}_I(\alpha)~|~\textbf{G}_I\textbf{F}_I(\alpha)
\end{align*}
where $I$ follows the same definition as mentioned above.
Finally, with an auxiliary syntactical component $\text{P} \ := \ \text{SP}~|~\text{TP}$, formula $\psi$ defines the \textbf{supported fragment} of STL that we map to the four template categories discussed in Section~\ref{subsubsec:stl_templates}.
\begin{align*}
\psi \ \ := \
& \ \ \ \textbf{G}_I(\text{SP})~|~\textbf{F}_I(\text{SP}) \ \ \ \ (Invariance/Reachability) \\
& \ |~\textbf{G}~(\text{SP} \to \text{SP}) \quad \quad (Immediate \ response) \\
& \ |~\textbf{G}~(\text{P} \to \text{TP}) \quad \quad \ \, (Temporal \ response) \\
& \ |~\textbf{G}~(\text{P} \to \text{NTP}) \ \ \, \quad (Stabilization/Recurrence)
\end{align*}
This fragment balances between \textit{generality}, needed to express common-practice requirements, and \textit{simplicity}, needed
to facilitate the automated generation of synthetic examples. It results in the following restrictions:
(1)~We allow the conjunction and disjunction of only two atomic propositions, (2)~Only one atomic proposition is allowed
inside a temporal operator in \textbf{TP}, (3)~We do not allow
Boolean combinations of \textbf{SP} and \textbf{TP} formulas,
and (4)~Formulas outside the four mentioned templates are not supported.
By relating the generator-fragment $\psi$ to the empirical statistics in Section~\ref{subsubsec:stl_templates}, Figure~\ref{fig:STL_template_support_summary} summarizes for each syntactical category, the proportion of templates that the fragment can support.
\begin{figure}[ht]
\includegraphics[width=0.4\textwidth]{STL_template_support_summary}
\caption{STL Template Support Summary.}
\label{fig:STL_template_support_summary}
\end{figure}
The generator supports nearly all \textit{Invariance/Reachability} templates appearing in our database. For \textit{Immediate response} templates, one template is missing due to restriction (1). For \textit{Temporal response} templates, we are able to support $42.4\%$ of them; of the unsupported ones, $18.2\%$ use a complex grammar that violates restrictions (2) and (3), while the remaining $39.4\%$ belong to the \emph{other} category and serve ad hoc purposes. Concerning nested formulas, we only consider \textit{Stabilization/Recurrence} templates. Other combinations such as $\textbf{F} \textbf{F} \varphi $ or $\textbf{F} \varphi_1 \textbf{U} \varphi_2 $ are not supported: $48.4\%$ of them are in the \emph{complex grammar} group, while the remaining $19.3\%$ are in the \emph{other} category.
\subsubsection{Random-Sampling STL Formulas}
This short subsection briefly describes how we sample STL specifications from the restricted
fragment. The main idea is to decorate the grammar rules with probabilities according to the template distribution collected in Section~\ref{subsec:analysis_stl} and the operator distribution shown in Figure~\ref{fig:operator_occurrence_times}.
Consequently, we use the probabilities described in Section~\ref{subsec:analysis_stl} to generate the four categories of fragment $\psi$, which naturally makes the $\textbf{G}$ operator rank first by a large margin in terms of usage frequency, followed by the $\textbf{F}$ operator. The frequencies of the other operators within these categories are as discussed above.
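As an illustration of this probability-decorated sampling, the following sketch draws one formula from the four template categories; the template weights follow Section~\ref{subsubsec:stl_templates}, while the operator weights, signal names and constant ranges are illustrative simplifications of the actual generator.
\begin{verbatim}
import random

# Sketch of probability-decorated grammar sampling. Template weights follow
# the empirical distribution; operator weights, signal names and constant
# ranges are illustrative.
TEMPLATE_WEIGHTS = {
    "invariance_reachability": 0.30,
    "immediate_response": 0.21,
    "temporal_response": 0.25,
    "stabilization_recurrence": 0.24,
}
RELATIONS = ["<", "<=", "=", ">=", ">"]

def sample_atom():
    signal = "sig_" + "".join(random.choices("abcdefgh", k=3))
    return f"{signal} {random.choice(RELATIONS)} {random.randint(0, 99)}"

def sample_interval(max_bound=500):
    lo = random.randint(0, max_bound - 1)
    return f"[{lo},{random.randint(lo + 1, max_bound)}]"

def sample_formula():
    template = random.choices(list(TEMPLATE_WEIGHTS),
                              weights=list(TEMPLATE_WEIGHTS.values()))[0]
    if template == "invariance_reachability":
        op = random.choices(["G", "F"], weights=[0.7, 0.3])[0]
        return f"{op}_{sample_interval()} ({sample_atom()})"
    if template == "immediate_response":
        return f"G ({sample_atom()} -> {sample_atom()})"
    if template == "temporal_response":
        op = random.choices(["F", "G", "O", "H"], weights=[4, 4, 1, 1])[0]
        return f"G ({sample_atom()} -> {op}_{sample_interval()} ({sample_atom()}))"
    outer, inner = random.choice([("F", "G"), ("G", "F")])  # stabilization/recurrence
    return (f"G ({sample_atom()} -> {outer}_{sample_interval()} "
            f"{inner}_{sample_interval()} ({sample_atom()}))")

print(sample_formula())
\end{verbatim}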
\subsubsection{Translating STL into English}
The main translation strategy, linked to Section~\ref{subsec:analysis_natural}, is as follows. For the predicates used to express logical relations in the bottom layer, we use (with some reservations, mainly with regard to accuracy) the frequencies of Section~\ref{subsubsec:english_formulation}. This way, the translation candidates are selected with different weights. For the other constructs, such as the adverbs specifying temporal information, we incorporated relevant English utterances encountered in our database, on the condition that generation and recognition can be done with both accuracy and fluency. Furthermore, we reserved enough space
to add synonymous utterances that can be typically used but that are not included in the database.
In order to systematically organize the translation and maximize language flexibility, we start with the translation of atomic propositions (defined as $\alpha$ in~\ref{subsubsec:support_grammar}) in the bottom syntactical layer, and use this as a pivot to tackle temporal phrases and their nesting scenarios in the middle and top layers.
\vspace*{1mm}
\noindent
\textbf{Bottom layer.}\quad
The English counterparts of atomic propositions typically consist of a subject, a predicate, and an object, which are indispensable in every English sentence. Hence, their variations, especially in the predicate (the choice of verbs, formats, tenses, and active/passive voice), are considered first. The workflow for organizing their translations, which is divided into a \textit{Handler} and a \textit{Translator}, is illustrated in Figure~\ref{fig:translation_procedure}.
\begin{figure}[ht]
\includegraphics[width=0.482\textwidth]{figures/translation_procedure.pdf}
\caption{Translation Procedure for Atomic Propositions.}
\label{fig:translation_procedure}
\end{figure}
The Handler, as a preprocessor, takes the \textit{Type} and \textit{Position} information as inputs. \textit{Type} is a branch of $\alpha$ used to compute and output the \textit{Generation Information}. This includes an \textit{index} (which triggers a corresponding translation strategy), \textit{identifiers}, \textit{numbers}, and the \textit{STL expression} of a randomly generated atomic proposition.
\textit{Position} specifies the location of the proposition. This determines whether the translation states a condition (before the implication symbol ``$\to$''), or emphasizes that a property has to hold once the condition is satisfied (after ``$\to$''). In the latter case, modal verbs like ``should'' or ``must'' are used, often together with adverbial modifiers like ``instantly'' or ``without any delay'' in the case of \textit{Immediate response} formulas. This information is embedded into \textit{Predicate Commands}, incorporating the choice of verbs, their format, and the use of modal verbs and adverbial modifiers.
\textit{Generation Information} and \textit{Predicate Commands} are sent to the \textit{Template Refiner} (inside Translator), whose architecture is shown in Figure~\ref{fig:template_refiner}. Here, the subject and object placeholders within the templates can be replaced by randomly generated identifiers and numbers. The verbs associated to predicates are changed to their proper format, and are decorated with adverbs when applicable.
\begin{figure}[ht]
\includegraphics[width=0.485\textwidth]{figures/template_refiner.pdf}
\caption{Template Refiner.}
\label{fig:template_refiner}
\end{figure}
In the next step, the \textit{Assembler} module of the Translator completes the refined templates into a complete sentence that also includes adverbial modifiers. Finally, the \textit{Randomized Sampler} module of the Translator, samples a designated number of sentences from the overall translation list.
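The following sketch illustrates, in a strongly simplified form, how an atomic proposition can be rendered in English depending on its position relative to the implication; the verb templates, weights and modal choices are illustrative and only approximate the Handler/Translator pipeline described above.
\begin{verbatim}
import random

# Simplified sketch of template-based translation of an atomic proposition.
# Verb templates, weights and modal/adverbial choices are illustrative.
PREDICATE_TEMPLATES = {
    ">":  [("be greater than", 1), ("be larger than", 1), ("be above", 2)],
    "<":  [("be less than", 2), ("be smaller than", 1), ("be below", 1)],
    "=":  [("be equal to", 2), ("equal", 1)],
    ">=": [("be at least", 1), ("be greater than or equal to", 1)],
    "<=": [("be at most", 1), ("be less than or equal to", 1)],
}

def translate_atom(signal, relation, value, position="pre"):
    """Render 'signal relation value' in English; 'position' mimics whether
    the atom occurs before ('pre') or after ('post') the implication arrow."""
    phrases, weights = zip(*PREDICATE_TEMPLATES[relation])
    phrase = random.choices(phrases, weights=weights)[0]
    if position == "post":
        # post-condition: add a modal verb chosen at random
        modal = random.choice(["shall", "should", "must"])
        verb = (phrase.replace("be ", f"{modal} be ", 1)
                if phrase.startswith("be ") else f"{modal} {phrase}")
    else:
        # pre-condition: plain present tense
        verb = (phrase.replace("be ", "is ", 1)
                if phrase.startswith("be ") else f"{phrase}s")
    return f"the value of signal {signal} {verb} {value}"

print(translate_atom("V_In", ">", 5, position="pre"))
print(translate_atom("V_Out", "<", 2, position="post"))
\end{verbatim}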
\vspace*{1mm}
\noindent
\textbf{Middle/Top layer.}\quad
The translation approach presented above extends to temporal phrases in a straightforward manner: the sentences generated by the bottom layer can be reused, except that we need to add adverbial modifiers and enrich the verb tenses according to the temporal operators.
\begin{figure}[ht]
\includegraphics[width=0.48\textwidth]{figures/tp.pdf}
\caption{Translation of Temporal Phrases.}
\label{fig:translate_tp}
\end{figure}
The temporal aspects nevertheless do increase the
translation complexity. We need to consider three
orthogonal aspects (dimensions) as shown in Figure~\ref{fig:translate_tp}. The $x$-axis represents
the six STL temporal operators from the \textbf{TP}
layer, the $y$-axis their variants preceded by the negation,
rising or falling edge operators, and the $z$-axis
the choice of a verb tense in English for specific
temporal operators. Hence, a node in Figure~\ref{fig:translate_tp} represents a specific
combination of these three aspects.
We adopt a slicing approach to tackle the complexity. We first process nodes A-F in the present tense, where the six temporal operators are used individually. Then we enrich the usage of verb tenses according to the semantics of a particular operator and its nesting situation. This results in tier ${\text{TP}}^{\prime}$ while preserving language flexibility. The same approach is used for processing unary operators in tier $\neg {\text{TP}}^{\prime}$. The semantics of the direct negation of binary operators, and of rising/falling edges and their negations in layer \textbf{TP}, is more involved. Considering their relatively low usage frequency, we provide several fixed templates to facilitate their translations.
\subsection{Corpus Statistics}
Following the approach described in Section~\ref{subsec:stl_eng_generation},
we have automatically generated a corpus consisting of 120,000 STL-English pairs, where each pair consists of a randomly generated STL formula and one of its generated translations in natural language.
\subsubsection{STL-Formula Statistics}
In Figure~\ref{fig:frequency_operator_dataset} we provide the frequencies of the STL operators in our synthetic data set (the corpus above). As one can see, they are largely consistent with the ones in Figure~\ref{fig:operator_occurrence_times}. As before, the most frequent STL operator is the global temporal operator $\textbf{G}_I \varphi$ with 138,715 occurrences. The least frequent STL operator is the $\varphi_1 \textbf{S}_I \varphi_2$ temporal operator with 5,105 occurrences. While this frequency differs a bit from Figure~\ref{fig:operator_occurrence_times}, it is still consistent with the empirical results.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
{\small
\tkzKiviatDiagram[scale=0.29,label distance=.4cm,
radial = 1,
gap = 1.6,
lattice = 5]{$\ \ \ x = u$,$x > u$, $x \geq u$,
$x < u$, $x \leq u$, $\textbf{rise}(\varphi)$,$\textbf{fall}(\varphi)$,
$\textbf{F}_I \varphi$, $\textbf{G}_I \varphi$, $\varphi_1 \textbf{U}_I \varphi_2$,
$\textbf{O}_I \varphi$, $\textbf{H}_I \varphi$, $\varphi_1 \textbf{S}_I \varphi_2$,
$\neg \varphi $, $\varphi_1 \wedge \varphi_2$, $\varphi_1 \vee \varphi_2$,
$\varphi_1 \rightarrow \varphi_2$
}
\tkzKiviatLine[thick,mark=ball,mark size=4pt,color=darkgray,
fill=blue!25,opacity=.5](4.1284, 2.1106, 2.0786, 2.0816, 2.1176, 2.4978, 1.0082, 2.2166, 5.5486, 0.3618, 0.2192, 0.2154, 0.2042, 1.5446, 3.1224, 0.6784, 3.35)
\tkzKiviatGrad[prefix=,unity=25,suffix=K](1) }
\end{tikzpicture}
\caption{Frequency of STL operators in the corpus.}
\label{fig:frequency_operator_dataset}
\vspace{-1ex}
\end{figure}
Table~\ref{tbl:stl_statistics} shows the statistics of templates and subformulas in the generated corpus. As mentioned in Section~\ref{subsec:analysis_stl}, an STL template is defined as the parse tree of a formula without its leaves. For example, the template for the formula $\varphi~{=}~\textbf{G}(\text{In}> 5 \rightarrow \textbf{F}_{[0,10]}\,\text{Out} < 2 )$ is
$\textbf{G}(\varphi_1 \rightarrow \textbf{F}_{[0,10]} \varphi_2 )$. Each formula has a finite number of subformulas. For example, the formula $\varphi$ above has five subformulas:
$\varphi_5~{=}~\textbf{G}(\text{In}\,{>}\,5~\rightarrow~\textbf{F}_{[0,10]}\,\text{Out} < 2 )$,
$\varphi_4~{=}~\text{In}\,{>}\,5~{\rightarrow}~\textbf{F}_{[0,10]}\,\text{Out}\,{<}\,2$,
$\varphi_3~{=}~\textbf{F}_{[0,10]} \text{Out}\,{<}\,2$,
$\varphi_2~{=}~\text{Out}\,{<}\,2$, and
$\varphi_1~{=}~\text{In}\,{>}\,5$.
\begin{table}[ht]
\caption{STL Formula Statistics: \# unique STL formulas, \# unique STL templates, \# subformulas for each formula.}
\vspace{3ex}
\setlength{\belowcaptionskip}{5pt}
\centering
\begin{tabular}{c|c|c|c|c|c}
\hline
\multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}\# formulas\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}\# templates\end{tabular}} & \multicolumn{4}{c}{\# subformula per formula} \\
\cline{3-6}
& & min & max & avg. & median \\ \hline
120,000 & 5,852 & 2 & 18 & 6.98 & 7 \\ \hline
\end{tabular}
\label{tbl:stl_statistics}
\end{table}
Table~\ref{tbl:mapping_statistics} shows the mutual mapping relation between STL operators and STL formulas in our corpus.
We count, for each STL operator, how many formulas it appears in. This produces the containment
statistics shown in the last three columns.
\begin{table}[ht]
\caption{STL-Formula Mapping Statistics: \# STL operators for each formula, \# STL formulas for
each operator.}
\vspace{3ex}
\setlength{\belowcaptionskip}{5pt}
\centering
\begin{tabular}{c|c|c|c|c|c}
\hline
\multicolumn{3}{l|}{\# STL oper. per formula} & \multicolumn{3}{l}{\# formulas per STL oper.} \\ \hline
avg. & median & max & avg. & median & \multicolumn{1}{l}{max} \\ \hline
6.98 & 7 & 18 & 42,120.3 & 45,125 & 101,870 \\ \hline
\end{tabular}
\label{tbl:mapping_statistics}
\end{table}
Since identifiers and constants frequently appear in our corpus, we also analyzed their frequency, as shown in Table~\ref{tbl:id_constant_statistics}.
\begin{table}[ht]
\caption{Identifier and Constants Statistics: average number of identifiers per formula, \# of chars used per identifier, \# number of digits used per constant.}
\vspace{3ex}
\setlength{\belowcaptionskip}{5pt}
\centering
\begin{tabular}{c|c|c|c|c|c|c}
\hline
\multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}\# identifiers\\ per formula \end{tabular}} & \multicolumn{3}{l|}{\# chars per identifier} & \multicolumn{3}{l}{\# digits per constant} \\
\cline{2-7}
& min & avg. & median & min & avg. & median \\ \hline
2.59 & 1 & 5.50 & 5 & 1 & 2.31 & 2 \\ \hline
\end{tabular}
\label{tbl:id_constant_statistics}
\end{table}
\subsubsection{Natural-Language Statistics}
The statistical results for the natural language in our corpus are shown in Table~\ref{tbl:nl_statistics}. There are only 265 different effective English words (considering word variants, and not counting signal names, which are randomly generated strings), constituting a relatively small vocabulary. This is understandable, because most English words are used to express the logical relations of STL, whose number is limited. In addition, Table~\ref{tbl:nl_statistics} records the number of effective words per English sentence and, for each English word, the number of English sentences using it.
\begin{table}[ht]
\caption{English Statistics: \# unique sentences, \# unique words, \# words per sentences and \# sentences per word.}
\vspace{3ex}
\setlength{\belowcaptionskip}{5pt}
\centering
\begin{tabular}{c|c|c|c|c|c}
\hline
\multirow{2}{*}{\# sent.} & \multirow{2}{*}{\# word} & \multicolumn{2}{l|}{\# words per sent.} & \multicolumn{2}{l}{\# sent. per word} \\ \cline{3-6}
& & avg. & median & avg. & median \\ \hline
120,000 & 265 & 38.49 & 37 & 14,220.28 & 4,555.5 \\ \hline
\end{tabular}
\label{tbl:nl_statistics}
\end{table}
\section{Machine Translation}
\label{sec:machine_translation}
In order to answer questions \textbf{RQ3-5}, we take advantage of the corpus generated as discussed in the previous sections to develop DeepSTL, a tool and technique for the translation of informal requirements, given as free English sentences, into STL. DeepSTL employs a state-of-the-art transformer-based neural-translation technique to train an accurate attentional translator. We compare the performance of DeepSTL with other NL-translator architectures on the same corpus, and we also investigate how well they extrapolate to sentences outside the corpus.
\subsection{Neural Translation Algorithms}
The translation of natural language into STL formulas can be abstracted as the following probabilistic problem. Given an encoding sequence $\textbf{e}= (e_1, e_2, ..., e_m)$ from the source language (English requirements), the model generates a decoding sequence $\textbf{s} = (s_1, s_2, ..., s_n)$ in the target language (STL formulas) token by token, conditioning each token $s_k$ on the decoded history of the target sequence $s_{<k}$ and the whole input source sequence $\textbf{e}$, such that:
$
P(\textbf{s}|\textbf{e};\theta) = \prod_{k=1}^n P(s_k|s_{<k}, \textbf{e};\theta)
$
where $\theta$ are the parameters of the model. A current practice in the NLP community is to learn these probabilities through Neural Translation (NT), where the tokens are encoded into real-valued vectors.
\subsubsection{NT-Architectures considered}
\label{subsubsec:nl-architectures}
We considered three main NT-architectures: sequence to sequence (seq2seq), sequence to sequence plus attention (Att-seq2seq), and the transformer architecture.
\vspace*{1mm}
\noindent
\textbf{Seq2seq architecture.}
Seq2seq uses two recurrent neural networks (RNNs), one in the encoder, and one in the decoder, to sequentially process the sentences, word by word~\cite{sutskever2014sequence}.
\vspace*{1mm}
\noindent
\textbf{Att-seq2seq architecture.} A drawback of the seq2seq architecture is that it gradually encodes the dependencies among words in the input and output sentences by sequentially passing the information to the next cell of the RNN. As a consequence, far-away dependencies may get diluted. In order to correct this problem, an attention mechanism is introduced in att-seq2seq, to explicitly capture and learn these dependencies~\cite{bahdanau2014neural}.
\vspace*{1mm}
\noindent
\textbf{Transformer architecture.} The previous two architectures are relatively slow to train, because the RNNs hinder parallel processing. To alleviate this problem, the transformer architecture
introduces a self-attention mechanism, completely dropping the use of RNNs. This dramatically reduced the computation time of attention-based neural networks and gave considerable momentum to NT~\cite{VaswaniSPUJGKP17}. For this reason, we adopted a transformer-based architecture for our DeepSTL translator.
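The following is a minimal sketch of the scaled dot-product attention at the core of the transformer, written in PyTorch; the tensor shapes are illustrative and the sketch omits the multi-head projections used in practice.
\begin{verbatim}
import torch
import torch.nn.functional as F

# Sketch of scaled dot-product self-attention, the core operation that lets
# the transformer relate every token of the English requirement to every
# other token in one step (shapes illustrative; multi-head parts omitted).
def scaled_dot_product_attention(q, k, v, mask=None):
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5     # (batch, len_q, len_k)
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = F.softmax(scores, dim=-1)               # attention weights
    return weights @ v                                # (batch, len_q, d_v)

x = torch.randn(2, 5, 128)                    # 2 sequences of 5 embeddings
out = scaled_dot_product_attention(x, x, x)   # self-attention: q = k = v = x
print(out.shape)                              # torch.Size([2, 5, 128])
\end{verbatim}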
\subsubsection{Preprocessing and Tokenization}
\label{subsubsec:preprocessing}
There are three main features that distinguish our translation problem from general translation tasks between natural languages (NL2NL): \begin{enumerate}
\item \textit{Out of Vocabulary (OOV)}: Signal names (which we call identifiers for short) and numbers can be arbitrarily specified. Therefore, it is impossible to maintain a fixed-size vocabulary to cover all imaginable identifiers and numbers.
\item \textit{High Copying Frequency:} During translation, identifiers and numbers need to be much more frequently copied from the source language to the target language than in NL2NL.
\item \textit{Unbalanced Language:} English is a high-resource language, while STL formulas belong to a low-resource logical language that has a very limited exclusive vocabulary.
\end{enumerate}
In view of the above characteristics, a successful translation of English to STL requires, even more than in NL2NL, a correct tokenization of identifiers and numbers. Although one could use an explicit copying mechanism~\cite{copynet2016}, this method requires modifying the structure of the neural network, which may increase complexity.
\vspace*{1mm}
\noindent
\textbf{Subword tokenization}\quad We therefore adopt a subword technique to tokenize sequences during data preprocessing. Subword algorithms, such as Byte-Pair-Encoding (BPE)~\cite{SennrichHB16a}, WordPiece~\cite{WuSCLNMKCGMKSJL16} and Unigram~\cite{kudo-2018-subword}, are commonly used in state-of-the-art NT systems to tackle the OOV problem. Without modifying the model structure, these algorithms are able to split words into meaningful morphemes and even independent characters based on statistical rules.
Ideally, when identifiers and numbers are tokenized, they should be encoded as sequences of individual characters and digits, respectively. For example, \texttt{PWM} and \texttt{12.5} are expected to be encoded as \texttt{[`P', `W', `M']} and \texttt{[`1', `2', `.', `5']}, respectively. This way, we can use a limited number of characters and digits to represent arbitrary identifiers and numbers. We chose BPE due to its simplicity.
The tokenization procedure of BPE works as follows: (1)~Split every word (separated by a space) in the source data into a sequence of characters. (2)~Prepare a token list that includes all distinct characters occurring in the source data. (3)~Merge the most frequently occurring pair of adjacent characters inside a word, add the merged token to the token list, and treat it as an independent character afterwards. (4)~Repeat Step 3 until the size of the token list reaches an upper limit given as a hyperparameter. After tokenization, when a sequence is encoded, the generated token list is iterated from the longest token to the shortest, attempting to match and substitute substrings of each word in the sequence.
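As an illustration, the BPE training procedure of steps (1)-(4) can be sketched on a toy corpus as follows; the word frequencies and the merge budget are arbitrary.
\begin{verbatim}
from collections import Counter

def merge_word(word, pair):
    """Merge all adjacent occurrences of 'pair' inside a space-separated word."""
    symbols = word.split()
    out, i = [], 0
    while i < len(symbols):
        if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
            out.append(symbols[i] + symbols[i + 1])
            i += 2
        else:
            out.append(symbols[i])
            i += 1
    return " ".join(out)

def learn_bpe(corpus, num_merges):
    """Toy BPE: corpus maps character-split words to their frequencies."""
    merges = []
    corpus = dict(corpus)
    for _ in range(num_merges):
        pairs = Counter()
        for word, freq in corpus.items():
            symbols = word.split()
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)     # most frequent adjacent pair (step 3)
        merges.append(best)
        corpus = {merge_word(w, best): f for w, f in corpus.items()}
    return merges

corpus = {"a l w a y s": 10, "e v e n t u a l l y": 6, "a l s o": 4}
print(learn_bpe(corpus, 3))   # e.g. [('a', 'l'), ('al', 'w'), ('alw', 'a')]
\end{verbatim}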
Inspired by the BPE algorithm, before tokenization, identifiers and constants are split into characters and digits in advance, so that a whitespace separates every two adjacent characters or digits (e.g., \texttt{PWM} $\to$ \texttt{P W M}, \texttt{12.5} $\to$ \texttt{1 2 . 5}). In this way, characters and digits do not participate in the merging procedure of BPE.
Hence, during the encoding phase, only the characters and digits in the generated token list can match identifiers and constants, after they have been recognized and split.
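The pre-splitting step can be sketched as follows; the keyword list and the regular expressions are illustrative and much smaller than what is actually needed.
\begin{verbatim}
import re

# Sketch of the pre-splitting step applied before BPE: identifiers and
# numbers are broken into single characters and digits so that they never
# participate in BPE merges. Keyword list and regexes are illustrative.
KEYWORDS = {"always", "eventually", "until", "since", "rises", "falls",
            "the", "value", "of", "signal", "shall", "be", "than",
            "greater", "less", "equal", "to", "time", "units"}

def pre_split(sentence):
    tokens = []
    for word in sentence.split():
        if word.lower() in KEYWORDS:
            tokens.append(word)                 # keep common words intact
        elif re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", word) or \
             re.fullmatch(r"\d+(\.\d+)?", word):
            tokens.extend(list(word))           # identifier/number -> chars
        else:
            tokens.append(word)
    return " ".join(tokens)

print(pre_split("the value of signal PWM shall be greater than 12.5"))
# -> "the value of signal P W M shall be greater than 1 2 . 5"
\end{verbatim}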
During testing, although it is easy to use regular expressions to match numbers and split them into digits, it is challenging to accurately match identifiers. This is because identifiers can be non-meaningful permutations of characters or complete English words, and these two scenarios cannot be easily distinguished. An ideal method would be to adopt Named Entity Recognition (NER) to match identifiers and split them. Currently, identifiers are identified automatically by checking that they do not belong to our data set of commonly used English words or to the English formulations of STL operators.
\subsection{Implementation Details}
\subsubsection{Data split}
We generated 120,000 English-STL pairs in total, from which we first sampled 10\% (12,000) to form a fixed testing set. From the rest, before each training experiment, we sampled 90\% (97,200) for training and 10\% (10,800) for validation.
\subsubsection{Hyperparameters}
The implementations of the three models mentioned in Section~\ref{subsubsec:nl-architectures} are mainly based on~\cite{zhang2021dive}, with several modifications, and use PyTorch. The following describes how the hyperparameters are chosen for each model and for the optimizer.
\noindent
\textbf{Seq2seq}\quad
We used Gated Recurrent Units (GRU)~\cite{cho2014properties} as RNN units. The encoder is a 2-layer bidirectional RNN with hidden size $h=128$ for each direction, and the decoder is a 2-layer unidirectional RNN with $h=256$. The embedding dimension for mapping a one-hot token vector into real-valued space is 128. The dropout rate is 0.1.
\noindent
\textbf{Att-seq2seq}\quad For the encoder-decoder, we used the same hyperparameters as in the Seq2seq architecture. For the Bahdanau attention~\cite{bahdanau2014neural}, we used a feed-forward neural network with 1 hidden layer of 128 neurons
to calculate the attention scores.
\noindent
\textbf{Transformer}\quad The encoder and the decoder both have 4 layers with 8 attention heads. The input and output dimensions of each computing block are kept at $d_{\text{model}}=128$; the number of neurons in the feed-forward layers is $d_{ff}=512$; the dropout rate is 0.1; for layer normalization, $\epsilon = 10^{-5}$.
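For illustration, such a configuration could be instantiated with the standard \texttt{nn.Transformer} module of a recent PyTorch version, as sketched below; a full model additionally needs token embeddings, positional encodings and an output projection layer.
\begin{verbatim}
import torch.nn as nn

# Sketch of the transformer configuration listed above (assumes a recent
# PyTorch version; embeddings and output projection are not shown).
model = nn.Transformer(
    d_model=128,            # input/output dimension of every computing block
    nhead=8,                # attention heads
    num_encoder_layers=4,
    num_decoder_layers=4,
    dim_feedforward=512,    # neurons in the feed-forward layers
    dropout=0.1,
    layer_norm_eps=1e-5,    # epsilon used by layer normalization
)
\end{verbatim}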
\noindent
\textbf{Optimizer}\quad We used the Adam optimization algorithm~\cite{kingma2014adam} with $\beta_{1}=0.9$, $\beta_{2}=0.98$, $\epsilon=10^{-9}$, while the learning rate $lr$ is dynamically scheduled as (slightly changed from~\cite{VaswaniSPUJGKP17}):
$$
lr = p \cdot d_{\text{model}}^{-0.5} \cdot \text{min} ( step\_num^{-0.5}, step\_num \cdot warmup_{steps}^{-1.5})
$$
where $warmup_{steps}=4000$ and $d_{\text{model}}=128$. $step\_num$ denotes the number of training steps (training on one batch corresponds to one step). $p$ is an adjustable parameter for each architecture; we chose 1, 0.1 and 2 for Seq2seq, Att-seq2seq and Transformer, respectively. For the Seq2seq and Att-seq2seq models, in order to mitigate gradient explosion due to long sequence dependencies, we also used gradient clipping to limit the maximum norm of the gradients to 1.
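For concreteness, this schedule corresponds to the following sketch; the printed steps are arbitrary.
\begin{verbatim}
# Sketch of the scheduled learning rate described above
# (d_model = 128, warmup_steps = 4000; p is the per-architecture factor).
def learning_rate(step_num, p, d_model=128, warmup_steps=4000):
    return p * d_model ** -0.5 * min(step_num ** -0.5,
                                     step_num * warmup_steps ** -1.5)

for step in (1, 1000, 4000, 20000):        # the transformer uses p = 2
    print(step, round(learning_rate(step, p=2), 6))
\end{verbatim}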
\noindent
\textbf{Other}\quad We dealt with the variable lengths of input and output sequences using padding. We first encoded all English and STL sequences into subword token lists, from which we picked the maximum length as the step limit for both the encoder and the decoder. During training, sequences shorter than the maximum length are padded with a special token \texttt{<pad>} at their end.
\subsubsection{Train/Validate/Test Procedure} For training and validation, we used the ``teacher forcing'' strategy in the decoder. We first prepared two special tokens, \texttt{<bos>} (begin of sentence) and \texttt{<eos>} (end of sentence). Suppose the reference output of the decoder is \texttt{ABC<eos>}. To start, we input \texttt{<bos>} as a starting signal to the decoder, expecting it to output \texttt{A}. Regardless of whether the actual first output of the decoder is \texttt{A}, we then feed \texttt{A} to the decoder, expecting it to output \texttt{B}. This procedure continues until the maximum step length of the decoder is reached.
We summed up the token-level cross-entropy losses between the prediction and the reference sequence (only over the valid length) and divided the sum by the maximum length of the decoder. This is the loss for one sample. We trained batches of 64 samples in parallel. The batch loss is the average over all sample losses inside a single batch, which is used for backpropagation to update the network parameters.
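The following sketch illustrates how the per-sample loss described above can be computed; tensor shapes, the vocabulary size and the padding index are illustrative and do not reflect the exact implementation.
\begin{verbatim}
import torch
import torch.nn.functional as F

# Sketch of the per-sample loss with teacher forcing: cross-entropy is summed
# over valid (non-padding) positions only and divided by the decoder's
# maximum step length. Shapes, vocabulary size and pad index are illustrative.
def sample_loss(logits, reference, pad_id, max_len):
    # logits: (max_len, vocab_size); reference: (max_len,), padded with <pad>
    token_loss = F.cross_entropy(logits, reference, reduction="none")
    valid = (reference != pad_id).float()        # mask out padded positions
    return (token_loss * valid).sum() / max_len

vocab_size, max_len, pad_id = 100, 8, 0
logits = torch.randn(max_len, vocab_size)
reference = torch.tensor([5, 7, 2, 9, 1, 0, 0, 0])   # last three are <pad>
print(sample_loss(logits, reference, pad_id, max_len))
# The batch loss averages 64 such sample losses and is backpropagated.
\end{verbatim}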
For testing, ``teacher forcing'' is abandoned. The only token manually input to the decoder is \texttt{<bos>}, for initialization. At each decoding step, the decoder adopts a greedy search strategy, outputting the token with maximum probability based only on the previously decoded tokens and the output of the encoder. The decoding procedure ends when the decoder outputs an \texttt{<eos>} token or the maximum length limit is reached.
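A minimal sketch of this greedy decoding loop is shown below; the \texttt{model} interface, returning next-token logits from the encoded source and the tokens decoded so far, is an assumption made for illustration.
\begin{verbatim}
import torch

# Sketch of greedy decoding at test time: starting from <bos>, the decoder
# repeatedly emits the most probable next token until <eos> or the step limit.
# 'model' is an assumed interface returning logits of shape
# (1, len(decoded), vocab_size) given the encoded source and decoded tokens.
def greedy_decode(model, src_encoding, bos_id, eos_id, max_len):
    decoded = [bos_id]
    for _ in range(max_len):
        logits = model(src_encoding, torch.tensor(decoded).unsqueeze(0))
        next_token = int(logits[0, -1].argmax())   # highest-probability token
        decoded.append(next_token)
        if next_token == eos_id:                   # stop at <eos>
            break
    return decoded[1:]                             # drop the initial <bos>
\end{verbatim}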
\subsection{Results}
\subsubsection{Loss/Accuracy Curves}
We trained the Seq2seq, Att-seq2seq and Transformer architectures for 80, 10 and 40 epochs, respectively, using the \emph{STL formula accuracy} (defined in Section~\ref{subsubsec:test_metrics}) on the validation set as an indicator to stop training. The validation loss/accuracy curves are obtained from 5 independent experiments and shown below.
\begin{figure}[h]
\includegraphics[width=0.38\textwidth]{figures/loss.pdf}
\vspace{-3ex}
\caption{Validation Loss.}
\vspace{-3ex}
\label{fig:loss_curve}
\end{figure}
\begin{figure}[h]
\includegraphics[width=0.38\textwidth]{figures/acc.pdf}
\vspace{-3ex}
\caption{Validation Formula Accuracy.}
\vspace{-3ex}
\label{fig:acc_curve}
\end{figure}
Figure~\ref{fig:loss_curve} and Figure~\ref{fig:acc_curve} show that, with the guidance of ``teacher forcing'', all three models are able to converge during training, with the STL formula accuracy approaching 1 once the network stabilizes. The only difference is the rate of convergence, which depends on many factors such as the size of the model (e.g., number of parameters), noise, learning rate, etc.
\subsubsection{Testing Metrics}
\label{subsubsec:test_metrics}
We first report two different measures of accuracy: the \emph{STL formula accuracy} ($A_{F}$)
and the \emph{template accuracy} ($A_{T}$). The first measures the token-level alignment accuracy between the reference and the predicted sequence, while the second first transforms the reference and prediction instances into STL templates and then calculates their alignment accuracy. For example,
\begin{center}
Formula: \texttt{always ( x > 0 )} $\ \Rightarrow \ $ Template: \texttt{always ( phi )} \\
Formula: \texttt{always ( y > 0 )} $\ \Rightarrow \ $ Template: \texttt{always ( phi )}
\end{center}
The first line is the reference sequence and the second line is the model prediction. For better illustration, we insert a white-space between tokens, so there are six tokens in the formulas and four tokens in the templates. For the formulas, five tokens appear in the same position - \texttt{`always', `(', `>', `0', `)'} - while the remaining token \texttt{`x'} in the reference is mistranslated to \texttt{`y'}. Therefore, the formula accuracy is $A_{F}=5/6$. As for the template, since all tokens are aligned with each other, the template accuracy is $A_{T}=1$.
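Both accuracies amount to the fraction of positions at which the two token sequences agree; the sketch below illustrates the computation on the example above (the \texttt{to\_template} step mentioned in the final comment is hypothetical).
\begin{verbatim}
# Sketch of the position-wise alignment accuracy used for A_F and A_T.
def alignment_accuracy(reference, prediction):
    length = max(len(reference), len(prediction))
    matches = sum(1 for r, p in zip(reference, prediction) if r == p)
    return matches / length

ref  = "always ( x > 0 )".split()
pred = "always ( y > 0 )".split()
print(alignment_accuracy(ref, pred))   # 5/6, the formula accuracy A_F

# For the template accuracy, both sequences would first be mapped by a
# to_template() step that replaces atomic predicates such as "x > 0" by "phi".
\end{verbatim}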
We also report another metric called BLEU (Bilingual Evaluation Understudy)~\cite{papineni2002bleu}, which is pervasively used in machine translation research. It evaluates how many n-grams (up to $n=4$) of the predicted sequence also appear in the reference sequence. The best BLEU score for a pair of sequences is 1, which means complete overlap.
\vspace{-2ex}
\begin{table}[h]
\caption{Testing Accuracy.}
\vspace{3ex}
\centering
\begin{tabular}{c|c|c|c}
\hline
& \textbf{Formula Acc.} & \textbf{Template Acc.} & \textbf{BLEU } \\ [0.5ex]
\hline
Seq2seq & $0.071\pm0.0388$ & $0.207\pm0.0868$ & $0.092\pm0.0361$ \\ \hline
Att-seq2seq & $0.977\pm0.0060$ & $0.980\pm0.0063$ & $0.996\pm0.0011$ \\ \hline
Transformer & $0.987\pm0.0028$ & $0.995\pm0.0014$ & $0.998\pm0.0005$ \\ \hline
\end{tabular}
\label{table:test_acc}
\end{table}
In Table~\ref{table:test_acc}, it can be seen that once ``teacher forcing'' is removed, the performance of the Seq2seq architecture decreases dramatically, which is partly due to its lack of an attention mechanism to realize self-correction. The other two models both achieve very high accuracy, with Transformer slightly better than Att-seq2seq. Since the testing data and training data are sampled from the same data set, these two models show high translation quality when the distribution of language patterns in the test cases is similar to that of the training data. We also find that the template accuracy is higher than the formula accuracy. This phenomenon is understandable - once a formula is transformed into its template form, potential translation errors in identifiers, constants and logical relation symbols are masked.
\subsubsection{Extrapolation}
\label{subsubsec:extrapolate}
In the following, we use the informal requirements that we identified from the literature in Section~\ref{sec:empirical_stl} to evaluate how well the
machine learning algorithm generalizes the translation
outside of the training and validation data set.
In order to have a fair evaluation, we used the
$14$ \textbf{Clear} requirements ($10\%$ of the
entire set) with the template
structure supported by our tool. We pre-processed
the requirements to remove units that are not supported
by our tool. Table~\ref{table:extra_acc} summarizes
the accuracy results for the three learning approaches.
We see that with non-synthetic examples the formula
accuracy drops considerably for all algorithms, while
the average template accuracy remains relatively high ($89.9\%$)
for the Transformer approach. We believe that a larger body of publicly available informal requirements that could be used for training would considerably help improve the accuracy of the approach.
\vspace{-2ex}
\begin{table}[h]
\caption{Extrapolation Accuracy.}
\vspace{3ex}
\centering
\begin{tabular}{c|c|c|c}
\hline
& \textbf{Formula Acc.} & \textbf{Template Acc.} & \textbf{BLEU } \\ [0.5ex]
\hline
Seq2seq & $0.050\pm0.0283$ & $0.158\pm0.0895$ & $0.027\pm0.0120$ \\ \hline
Att-seq2seq & $0.559\pm0.0865$ & $0.742\pm0.0660$ & $0.888\pm0.0348$ \\ \hline
Transformer & $0.712\pm0.0678$ & $0.899\pm0.0100$ & $0.962\pm0.0030$ \\ \hline
\end{tabular}
\label{table:extra_acc}
\end{table}
In the following, we provide three examples \footnote{These examples are the actual outputs of the translator. They are not displayed in a mathematical way. The part that is incorrectly translated is represented in blue color.} that illustrate the possibilities and the limits of our approach (random seed $=100$). We also report the following metric that considers the average logarithmic value of the output confidence at each decoding step:
$
C_m = \frac{1}{L^\alpha} \sum_{k=1}^L \log P(s_k \mid s_{<k}, \textbf{e};\theta)
$
where $L$ is the length of the output sequence, $\alpha$ is an adjustable factor which is set to $0.75$ by default, and $m \in \{ s, a, t \}$ with $s$, $a$, $t$ denoting Seq2seq, Att-seq2seq and Transformer model respectively.
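This confidence score can be computed as in the following sketch, assuming the per-step probabilities of the emitted tokens are available as a list.
\begin{verbatim}
import math

# Sketch of the length-normalized log-probability confidence C_m.
def confidence(step_probs, alpha=0.75):
    # step_probs[k] is P(s_k | s_<k, e; theta) for the token emitted at step k
    L = len(step_probs)
    return sum(math.log(p) for p in step_probs) / (L ** alpha)

print(confidence([0.9, 0.99, 0.95]))
\end{verbatim}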
\vspace*{1mm}
\noindent \textbf{Example 1:} If the value of signal RWs\_angular\_momentum is greater than 0.35, then the value of signal RWs\_torque shall be equal to 0.~\cite{boufaied2021signal}
\begin{itemize}
\item \textbf{Transformer ($C_t = -0.01393$)}: \\
always ( RWs\_angular\_momentum > 0.35 -> RWs\_torque == 0 )
\item \textbf{Att-seq2seq ($C_a = -0.30038$)}: \\
always ( RWs\_angular\_m{\color{blue}xyomemeEqm < 0.3} -> RWs\_torque==0)
\item \textbf{Seq2seq ($C_s = -2.77145$)}: \\
always ( {\color{blue} WNcAi1iDSDDyD1yD2y171a71aa2345324621 ) 5} {\color{red} ...... too long, display omitted}
\end{itemize}
\vspace*{1mm}
\noindent \textbf{Example 2:} Whenever Op\_Cmd changes to Passive then in response Spd\_Act changes to 0 after at most 500 time units.
\begin{itemize}
\item \textbf{Transformer ($C_t = -0.00091$)}: \\
always ( rise ( Op\_Cmd == Passive ) -> eventually [ 0 : 500 ] ( {\color{blue} rise} ( Spd\_Act == 0 ) ) )
\item \textbf{Att-seq2seq ($C_a = -0.10360$)}: \\
always ( rise ( Op\_Cmd == Passive ) -> {\color{blue} not} ( {eventually} [ 0 : 500 ] ( Spd\_Act == 0 ) ) )
\item \textbf{Seq2seq ($C_s = -3.03260$)}: \\
always ( rise ( {\color{blue} PIweD > 12.3 Q8y5yDy6y1y1R11y1y1g1y1A} {\color{red} ...... too long, display omitted}
\end{itemize}
\noindent \textbf{Example 3:} Whenever V\_Mot enters the range [1, 12] then in response starting after at most 100 time units Spd\_Act must be in the range [100, 1000].
\begin{itemize}
\item \textbf{Transformer ($C_t = -0.00873$)}: \\
always ( rise ( V\_Mot >= 1 and V\_Mot <= 12 ) -> eventually [ 0 : 100 ] ( Spd\_Act >= 100 and Spd\_Act <= 1000 ) )
\item \textbf{Att-seq2seq ($C_a = -0.06080$)}: \\
always ( rise ( V\_Mot >= 1 and V\_Mot <= 12 ) -> {\color{blue} not} ( eventually [ 0 : 100 ] ( Spd\_Act >= 100 and Spd\_Act <= 1000 ) ) )
\item \textbf{Seq2seq ($C_s = -2.68981$)}: \\
always ( rise ( {\color{blue} p\_qHX > 4 Q3DaQaDamymaOlQ ) ya ) 4 fall} {\color{red} ...... too long, display omitted}
\end{itemize}
The extrapolation test shows the poor translation of
Seq2seq that is consistent with its low accuracy
measured in~Table ~\ref{table:test_acc}. The translation
quality of Transformer and Att-seq2seq is much higher.
It is however sensitive to how similar the
patterns used in the informal requirement are to
the ones used in the training data.
In Example 1,
Transformer makes the correct translation, while
Att-seq2seq fails to copy the identifier and the number, and ``greater than'' is mistranslated into ``$<$''.
In Example 2, Transformer tends to add a \textbf{rise} operator before the subformula wrapped inside an $\textbf{F}$ operator, although on some occasions this is equivalent to the actual intention of the requirement, because ``changes to'' often indicates the significance of a rising edge. On the other hand, Att-seq2seq adds a negation operator $\neg$ in front of the subformula starting with an $\textbf{F}$ operator, so the meaning becomes reversed. In Example 3, Transformer translates the requirement correctly while Att-seq2seq makes the same mistake. For the three examples considered, the Seq2seq model fails on all of them, without even guessing the template correctly: it tends to generate long sequences of symbols without explicit meaning.
\section{Discussion}
\label{sec:discussion}
\noindent
\textbf{Corpus Generator}\quad
One of the major limitations is that the natural language is still generated through a rule-based approach with human intervention. Although for commonly-used STL formulas the corpus generator can already produce a large number of fluent synonymous translations, the diversity of expression is still limited. However, it is this ``cold-start'' approach that will make it possible in the future to adopt automatic data augmentation techniques~\cite{feng2021survey} from NLP to produce exponentially more English utterances. These new variants may involve additional linguistic features, such as ambiguity and vagueness.
Furthermore, the English requirements produced from the corpus generator are generic texts strictly following the semantics of STL without incorporating terminology and domain knowledge of a particular field, e.g., electronics, robotics, and biology. How to cover, characterize and process this information with modular design patterns is a future research direction.
\\\\
\noindent
\textbf{Neural Translator}\quad
An important improvement would be to unify the data preprocessing pipeline: we need to combine Named Entity Recognition (NER)~\cite{li2020survey, yadav2019survey} techniques with the look-up table method to recognize arbitrarily designated identifiers, or those that are already given in signal tables from industrial data sheets.
Besides, translation accuracy and decoding confidence should be further exploited during training because they are indicators of translation performance. In fact, the template accuracy mentioned in \ref{subsubsec:test_metrics} is biased by design in that it penalizes positional mismatches (with a cumulative effect) more strongly than individual token mismatches. Hence, an unbiased criterion for quantifying template accuracy needs to be considered. As for translation confidence, a high value does not necessarily indicate that the decoder will insist on the correct translation - sometimes it means that the decoder stubbornly sticks to a wrong output (Example 2 of Transformer in \ref{subsubsec:extrapolate}). Lower confidence values, on the other hand, tend to indicate insufficient training of a particular feature due to unbalanced training samples. Given the rich information they convey, these two metrics are promising feedback signals for guiding the optimization of the loss function, so that different problems occurring in training can be detected and corrected.
Furthermore, thanks to its attention mechanism, Att-seq2seq is still a strong competitor to Transformer, despite being inferior in certain test metrics and cases. Although it is interesting to improve translation quality by means of Transformer-based pre-training techniques~\cite{qiu2020pre} with large models, it is also important to figure out what role the attention mechanism exactly plays in translation: unlike the English used in daily communication or in literature, the contexts used to specify formal languages like STL are relatively limited.
Thus, an English-STL translator that depends only on statistical information learned from a data set (as generally occurs in NL2NL translators) may not be ideal.
However, the approach presented here still depends on learning the statistical features of the training data rather than really understanding the requirement language. This is the reason
for the mistranslation of ``greater than'' into ``$<$'' in
Example 1 of Att-seq2seq.
Given this, it is reasonable to see in \ref{subsubsec:extrapolate} the drop in accuracy when the test cases are very different from what the models have been trained on. To build DeepSTL with enhanced trustworthiness, the integration of syntactic and semantic information into the end-to-end learning process using the attention mechanism will be a topic of further investigation.
Finally, from a user interaction point of view, the next generation of DeepSTL should output multiple possible translations for the user to choose from, and remember the user's language preferences in order to provide a customized service.
\section{Conclusion}
\label{sec:conclusion}
We studied the problem of translating CPS natural language requirements to STL, commonly
used to formally specify CPS properties by the academic community and
practitioners.
To address the lack of publicly available
natural language requirements, we developed a procedure
for automatically generating English sentences from STL formulas.
We employed a transformer-based NLP architecture to efficiently train an accurate translator
from English to STL. Experiments demonstrated
promising results.
While this work focuses on STL specifications
and CPS applications, the underlying principles can be applied to other domains and specification formalisms and have a significant positive impact on the field of requirements engineering.
Unlike natural languages,
formal specifications have a very
constrained structure. We believe that this observation can be further explored in the future to develop an even more robust translation mechanism and thus further strengthen requirements engineering methodologies.
\section{Introduction}
Mishra et al. \cite{RBG} introduced a Sz\'{a}sz-Mirakjan-Durrmeyer-type generalization given by
\begin{equation}\label{RBG}
D_n^{*}(f;x) = b_n\sum_{k=0}^{\infty} s_{b_n,k}(x)\int_0^{\infty}s_{b_n,k}(t)f\left(t\right)\, dt,
\end{equation}
where
\begin{equation}\label{Sza}
s_{b_n,k}(x)=e^{-b_nx}\dfrac{(b_nx)^k}{k!}, \quad k = 0,1,2, \ldots; n \in \mathbb{N},
\end{equation}
$(b_n)_{1}^{\infty}$ is an increasing sequence of positive real numbers with $ b_n \rightarrow \infty$ as $n \rightarrow \infty$ and $b_1 \geq 1$, and studied the simultaneous approximation properties of the operators (\ref{RBG}). References \cite{RBG1} and \cite{RBG2} contain some more work in this direction.
This type of generalization goes back to Durrmeyer \cite{Dur}, who generalized the Bernstein polynomials by combining summation and integration processes, introducing the following summation-integral type approximation process based on the Bernstein polynomials:
\begin{equation}\label{Dur}
D_n(f;x)=(n+1)\sum_{k=0}^n b_{n,k}(x)\left(\int_{0}^1 b_{n,k}(t)f(t)\,dt \right),
\end{equation}
where
\begin{equation*}
b_{n,k}(x)= \left(\begin{array}{c} n \\ k \end{array} \right)x^k (1-x)^{n-k}, \quad k=0,1, \ldots, n,
\end{equation*}
$b_{n,k}$'s are called the Bernstein basis functions.
\\
Derriennic \cite{Der} proved several results concerning these operators, including the approximation of the $r^{\text{th}}$-derivative of a function by operators $D_n$.
\\
Sz\'{a}sz \cite{Sza} and Mirakjan \cite{Mir} introduced and studied operators on unbounded interval $[0, \infty)$, known as Sz\'{a}sz-Mirakjan operators given by
\begin{equation}\label{Mir}
M_n(f;x)= \sum_{k=0}^{\infty}s_{n,k}(x)f\left(\dfrac{k}{n}\right),
\end{equation}
where
\begin{equation}\label{Sza1}
s_{n,k}(x)=e^{-nx}\dfrac{(nx)^k}{k!}, \quad k = 0,1,2, \ldots; n \in \mathbb{N}.
\end{equation}
Here $s_{n,k}$'s are known as Sz\'{a}sz basis functions. The operators $\left(M_n \right)_{1}^{\infty}$ were extensively studied in 1950 by O. Sz\'{a}sz \cite{Sza}.
\\
Mazhar and Totik \cite{Maz} introduced two Durrmeyer type
modifications of Sz\'{a}sz-Mirakjan operators (\ref{Mir}), which are defined on unbounded interval $[0, \infty)$ as
\begin{equation*}
T^{*}_n(f;x)=f(0)s_{n,0}(x)+ n\sum_{k=1}^{\infty}s_{n,k}(x)\int_{0}^{\infty}s_{n,k-1}(t)f(t)\,dt
\end{equation*}
and
\begin{equation}\label{Maz1}
T_n(f;x)= n\sum_{k=0}^{\infty}s_{n,k}(x)\int_{0}^{\infty}s_{n,k}(t)f(t)\,dt,
\end{equation}
where $s_{n,k}$'s are as given by (\ref{Sza1}).
\\
The operators in (\ref{RBG}) generalize the operators (\ref{Maz1}) by replacing $n$ with the sequence $(b_n)_{1}^{\infty}$. This replacement is very natural, and in this paper we illustrate it by applying it to the operators (\ref{Mir}) and considering the operators, denoted by $S_n$, defined as follows:
\begin{equation}\label{RBG1}
S_n(f;x)= \sum_{k=0}^{\infty}s_{b_n,k}(x)f\left(\dfrac{k}{b_n}\right),
\end{equation}
where $s_{b_n,k}$'s are as given in (\ref{Sza}), $(b_n)_1^{\infty}$ is an increasing sequence of positive real numbers, $ b_n \rightarrow \infty$ as $n \rightarrow \infty$, $b_1 \geq 1$. Clearly, for $b_n = n $, we get (\ref{Mir}).
\\
Walczak, in \cite{Wal1} introduced a generalization of (\ref{Mir}) given as follows:
\begin{equation}\label{Wal}
S_n[f; a_n, b_n, q, x] := \sum_{k=0}^{\infty}s_{a_n,k}(x)f\left(\dfrac{k}{b_n + q}\right),
\end{equation}
where $q \geq 0$ is a fixed number, $(a_n)_1^{\infty}$ and $(b_n)_1^{\infty}$ are given increasing and unbounded numerical sequences such that $b_n \geq a_n \geq 1$, and $(a_n/b_n)_1^{\infty}$ is non-decreasing and
\begin{equation*}
\dfrac{a_n}{b_n} = 1 + o\left(\dfrac{1}{b_n} \right).
\end{equation*}
It can be observed that for $a_n = b_n = n$ and $q = 0$, (\ref{Wal}) reduces to (\ref{Mir}) and for $a_n = b_n$ and $q = 0$, it reduces to (\ref{RBG1}). Walczak \cite{Wal1} discussed direct results related to pointwise and uniform convergence of the operators (\ref{Wal}) in exponential weight spaces.
\\
In Section $2$ of the paper, we discuss the action of the operators (\ref{RBG1}) on test functions and carry out the calculation of their moments. In Section $3$, local results related to the operators (\ref{RBG1}) are derived. The global properties of the operators (\ref{RBG1}) are derived in Section $4$ in polynomial weight spaces, where a characterization is established. These direct and inverse results are natural extensions of the results derived in \cite{Bec} for (\ref{Mir}) in polynomial weight spaces. Section $5$ discusses examples of the sequences $(b_n)_1^{\infty}$ and the corresponding approximate graphical representations under the operators (\ref{Mir}) and (\ref{RBG1}).
\section{Elementary results}
\subsection{Estimation of moments}
To begin with, we give some auxiliary results.
\begin{lemma} \label{L3}
For $e_i(t)=t^i, \quad i = 0,1,2,3,4$, the following holds:
\begin{enumerate}
\item[(a)]$S_n(e_0;x)=1,$
\item[(b)]$S_n(e_1;x)=x,$
\item[(c)]$S_n(e_2;x)=\dfrac{1}{b_n}(b_nx^2+x),$
\item[(d)]$S_n(e_3;x)=\dfrac{1}{b_n^2}(b_n^2x^3+3b_nx^2+x),$
\item[(e)]$S_n(e_4;x)=\dfrac{1}{b_n^3}(b_n^3x^4 + 6b_n^2x^3+7b_nx^2+x).$
\end{enumerate}
\end{lemma}
\begin{proof}
By elementary calculations, the results can be obtained.
\end{proof}
Remarks:
\begin{enumerate}
\item[1.] From lemma \ref{L3}, it can be seen that the operators (\ref{RBG1}) preserve linear functions.
\item[2.] We have
\begin{eqnarray*}
s_{b_n, k}^{'}(x) &=& -b_n e^{-b_nx} \dfrac{(b_nx)^k}{k!}+ e^{-b_nx}b_n \dfrac{k(b_nx)^{k-1}}{k!} \\ &=& -b_n s_{b_n, k}(x)+ \dfrac{k}{x}s_{b_n, k}(x),
\end{eqnarray*}
hence
\begin{eqnarray} \label{e21}
\dfrac{x}{b_n} s_{b_n, k}^{'}(x) &=& \left( \dfrac{k}{b_n} - x \right)s_{b_n, k}(x).
\end{eqnarray}
\item[3.] We have for $r \in \mathbb{N}$,
\begin{eqnarray*}
S_n^{'}(t^r;x) &=& \sum_{k=0}^{\infty} s_{b_n, k}^{'}(x) \left(\dfrac{k}{b_n}\right)^{r} \\ &=& \sum_{k=0}^{\infty} \dfrac{b_n}{x}\left( \dfrac{k}{b_n} - x \right)s_{b_n, k}(x) \left(\dfrac{k}{b_n}\right)^{r} \qquad \text{(using (\ref{e21}))}
\end{eqnarray*}
On re-arranging terms, we get
\begin{equation} \label{e22}
S_n(t^{r+1};x) = \dfrac{x}{b_n} S_n^{'}(t^r;x) + xS_n(t^r;x)
\end{equation}
\end{enumerate}
\begin{lemma} \label{L4}
For $r \in \mathbb{N}$, the following relation holds:
\begin{equation} \label{e23}
S_n(t^r;x) = \sum_{j=1}^r a_{r,j}x^jb_n^{j-r} = x^r + \dfrac{r(r-1)}{2b_n}x^{r-1}+ \cdots + b_n^{1-r}x
\end{equation}
with positive coefficients $a_{r, j}$. In particular, $S_n(t^r;x)$ is a polynomial of degree $r$ without a constant term.
\end{lemma}
\begin{proof}
From (b)-(c) of lemma \ref{L3}, the representation (\ref{e23}) holds true for $r = 1, 2$ with $a_{1,1}=a_{2,2}=a_{2,1}=1$. We use the principle of mathematical induction and assume (\ref{e23}) to be true for some positive integer $r$. Now from (\ref{e22}),
\begin{eqnarray*}
S_n(t^{r+1};x) &=& \dfrac{x}{b_n} S_n^{'}(t^r;x) + xS_n(t^r;x) \\ &=& \dfrac{x}{b_n}\sum_{j=1}^{r}ja_{r,j}x^{j-1}b_n^{j-r}+ x\sum_{j=1}^{r}a_{r,j}x^{j}b_n^{j-r} \\ &=& a_{r,1}b_n^{-r}x + \sum_{j=2}^{r}\left(ja_{r,j}+ a_{r, j-1}\right) x^{j}b_n^{j-(r+1)}+a_{r,r}x^{r+1} \\ &=: &\sum_{j=1}^{r+1}a_{r+1,j}x^{j}b_n^{j-(r+1)},
\end{eqnarray*}
say. Thus the representation (\ref{e23}) is valid for all $r \in \mathbb{N}$ since
\begin{eqnarray*}
&& a_{r+1,r+1}=a_{r,r}= \cdots = a_{1,1}=1, \\ &&
a_{r+1,1}=a_{r,1}= \cdots = a_{1,1}=1, \\ &&
a_{r+1,r}=ra_{r,r} + a_{r,r-1} = r + a_{r,r-1}\\&& \qquad \quad = \cdots = r + (r-1)+ \cdots + 2 + a_{2,1} = r(r+1)/2.
\end{eqnarray*}
\end{proof}
Remark: In connection with the coefficients $a_{r, r-1}$, it can be derived that for $N \geq 2$,
\begin{equation}\label{e24}
a_{N+2, N+1}-2a_{N+1, N}+a_{N, N-1} = 1.
\end{equation}
Let us denote by $\mu_{n,m}$ the $m^{\text{th}}$ order central moment of the operators given by (\ref{RBG1}), defined as
\begin{equation}\label{E5}
\mu_{n,m}(x)= S_n((t-x)^m;x),\qquad m = 0,1,2,\ldots.
\end{equation}
\begin{lemma}\label{L5}
For the moments defined in ($\ref{E5}$), the following holds:
\begin{enumerate}
\item[(a)]$\mu_{n,1}(x) = S_n((t-x);x)=0 ,$
\item[(b)]$\mu_{n,2}(x)= S_n((t-x)^2;x)=\dfrac{x}{b_n},$
\item[(c)]$\mu_{n,3}(x)= S_n((t-x)^3;x)=\dfrac{x}{b_n^2},$
\item[(d)]$\mu_{n,4}(x)= S_n((t-x)^4;x)=\dfrac{3x^2}{b_n^2}+\dfrac{x}{b_n^3}.$
\end{enumerate}
\end{lemma}
\begin{proof}
The results follow from linearity of the operators $S_n$ and lemma $\ref{L3}$.
\end{proof}
Also, we mention some results related to the (modified) Steklov means. The (modified) Steklov mean $f_h$ of $f$ for $h > 0$ is defined by
\begin{equation*}
f_h(x):= \left(\dfrac{2}{h}\right)^2 \int_0^{h/2} \int_0^{h/2} [2f(x+s+t) - f(x + 2(s+t))]ds dt.
\end{equation*}
We have
\begin{eqnarray*}
f(x) - f_h(x)&=& \left(\dfrac{2}{h}\right)^2 \int_0^{h/2} \int_0^{h/2} \Delta_{s+t}^2 f(x) ds dt, \\ f_h^{''}(x) &=& h^{-2}\left[8\Delta_{h/2}^2 f(x) - \Delta_{h}^2 f(x) \right],
\end{eqnarray*}
and hence
\begin{equation} \label{e3}
\Vert f - f_h \Vert_N \leq \omega_N^2(f, h), \qquad \Vert f_h^{''} \Vert_N \leq 9 h^{-2}\omega_N^2(f, h).
\end{equation}
\section{Local results}
\subsection{Direct result}
\noindent Consider the Banach lattice
\begin{equation*}
C_\gamma[0,\infty) = \{f \in C[0,\infty): \left|f(t) \right| \leq M (1+t)^{\gamma}\}
\end{equation*}
for some $M>0, \gamma>0$.
\begin{theorem}\label{T1}
$\lim_{n \rightarrow \infty} S_n(f;x)= f(x)$ uniformly for $x \in [0,a]$, provided $f \in C_\gamma [0,\infty)$, $\gamma \geq 2$ and $a > 0$.
\end{theorem}
\begin{proof}
For fixed $a >0$, consider the lattice homomorphism $T_a: C[0, \infty) \rightarrow C[0,a]$ defined by $T_a(f):= \left.f\right|_{[0,a]}$ for every $f \in C[0, \infty)$, where $\left.f\right|_{[0,a]}$ denotes the restriction of the domain of $f$ to the interval $[0,a]$. In this case, we see that, for each $i = 0,1,2$ and by (a)-(c) of lemma \ref{L3},
\begin{equation}\label{E13}
\lim_{n \rightarrow \infty} T_a \left(S_n(e_i;x)\right)= T_a( e_i(x)), \; \text{uniformly on} \;[0,a].
\end{equation}
Thus, by using (\ref{E13}) and with the universal Korovkin-type property with respect to positive linear operators (see Theorem 4.1.4 (vi) of \cite{Alt}, p.199) we have the result.
\end{proof}
Let us consider the space $C_B[0,\infty)$ of all continuous and bounded functions on $[0, \infty)$ and for $f \in C_B[0,\infty)$, consider the supremum norm $\|f\|=\sup\{|f(x)|: x \in [0,\infty)\} $. Also, consider the $K-$functional
\begin{equation}\label{E7}
K_2(f;\delta)=\inf_{g \in W^2}\left\lbrace \|f-g \|+\delta\|g^{''} \| \right\rbrace,
\end{equation}
where $\delta>0$ and $W^2=\left\lbrace g \in C_B[0,\infty): g^{'}, g^{''} \in C_B[0, \infty) \right\rbrace$. There exists a constant $C >0$ such that
\begin{equation}\label{E8}
K_2(f;\delta)\leq C \omega_2(f, \sqrt{\delta}),
\end{equation}
where
\begin{equation}\label{E9}
\omega_2(f, \sqrt{\delta})=\sup_{0<h<\sqrt{\delta}} \; \sup_{x \in [0, \infty)}|f(x+2h)-2f(x+h)+f(x)|
\end{equation}
is the second order modulus of smoothness of $f \in C_B[0,\infty)$; and for $f \in C_B[0,\infty)$, let the modulus of continuity be given by
\begin{equation}\label{E10}
\omega_1(f, \sqrt{\delta})=\sup_{0<h<\sqrt{\delta}} \; \sup_{x \in [0, \infty)}|f(x+h)-f(x)|.
\end{equation}
\begin{theorem}\label{T2}
For $f \in C_B[0, \infty)$, we have
\begin{equation*}
|S_n(f;x) - f(x)| \leq C\omega_2 \left(f,\sqrt{\mu_{n,2}(x)}\right),
\end{equation*}
where $C$ is a positive constant.
\end{theorem}
\begin{proof}
For $g \in W^2$, $x \in [0, \infty)$ and by Taylor's expansion, we have
\begin{equation*}
g(t) = g(x) + (t-x)g^{'}(x)+\int_x^t(t-u)g^{''}(u)du.
\end{equation*}
Operating $S_n$ on both the sides,
\begin{eqnarray*}
\left|S_n(g;x)-g(x)\right| &=&\left| S_n\left( \int_x^t(t-u)g^{''}(u)du;x\right)\right| \\ &\leq & \dfrac{1}{2}\Vert g^{''} \Vert S_n((t-x)^2;x) \\ & = & \dfrac{1}{2} \Vert g^{''} \Vert \mu_{n,2}(x).
\end{eqnarray*}
Also, we have $\left|S_n(f;x)\right| \leq \Vert f \Vert$. Using these, we get
\begin{eqnarray*}
\left|S_n(f;x)- f(x)\right|& \leq & \left|S_n(f-g;x)- (f-g)(x)\right| + \left|S_n(g;x)- g(x)\right| \\ & \leq & 2 \Vert f-g \Vert + \dfrac{1}{2} \Vert g^{''} \Vert \mu_{n,2}(x).
\end{eqnarray*}
Taking infimum on the right hand side for all $g \in W^2$, we get
\begin{eqnarray*}
\left|S_n(f;x)- f(x)\right| & \leq & 2K_2\left(f,\dfrac{1}{4} \mu_{n,2}(x)\right).
\end{eqnarray*}
Using (\ref{E8}) and $\omega_2(f, \lambda \delta) \leq (\lambda + 1)^2 \omega_2(f, \delta)$ for $\lambda >0$, we get
\begin{eqnarray*}
\left|S_n(f;x)- f(x)\right| & \leq & C \omega_2\left(f, \sqrt{\mu_{n,2}(x)}\right),
\end{eqnarray*}
for some constant $C>0$.
\end{proof}
\subsection{A Voronovskaja-type result}
In this section we prove a Voronovskaja-type theorem for the operators $S_n$ given in (\ref{RBG1}).
\begin{lemma}\label{L6}
$\lim_{n \rightarrow \infty} b_n^2 \mu_{n,4}(x) = 3 x^2$ uniformly with respect to $x \in [0, a],\, a > 0$.
\end{lemma}
\begin{proof}
The result is obvious from lemma \ref{L5}(d).
\end{proof}
\begin{theorem}\label{T3}
For every $f \in C_\gamma[0,\infty)$ such that $f^{'}, f^{''} \in C_\gamma[0,\infty), \gamma \geq 4$, we have
\begin{equation*}
\lim_{n \rightarrow \infty} b_n \left[S_n(f;x)-f(x) \right]=\dfrac{x}{2}f^{''}(x)
\end{equation*}
uniformly with respect to $x \in [0, a] \, (a > 0)$.
\end{theorem}
\begin{proof}
Let $f,f^{'}, f^{''} \in C_\gamma[0,\infty)$ and $x \geq 0$. Define
\begin{equation*}
\Psi(t,x)= \dfrac{f(t)-f(x)-(t-x)f^{'}(x)-\dfrac{1}{2}(t-x)^2 f^{''}(x)}{(t-x)^2}, \quad \text{if} \, t \neq x,
\end{equation*}
and $\Psi(x,x)=0$. Then the function $\Psi(\cdot,x) \in C_\gamma[0,\infty)$. Hence, by Taylor's theorem we get
\begin{equation*}
f(t)= f(x)+(t-x)f^{'}(x)+\dfrac{1}{2}(t-x)^2 f^{''}(x)+(t-x)^2\Psi(t,x).
\end{equation*}
Now from lemma \ref{L5}(a)-(b),
\begin{eqnarray}\label{E12}
b_n \left[S_n(f;x)-f(x)\right]& = & \dfrac{1}{2} b_nf^{''}(x)\mu_{n,2}(x) + b_n S_{n}((t-x)^2 \Psi(t,x);x).
\end{eqnarray}
If we apply the Cauchy-Schwarz inequality to the second term on the right hand side of (\ref{E12}), then
\begin{equation*}
b_n S_{n}((t-x)^2 \Psi(t,x);x) \leq \left( b_n^2 \mu_{n,4}(x)\right)^{\frac{1}{2}} ( S_{n}(\Psi^2(t,x);x))^{\frac{1}{2}}
\end{equation*}
Now $\Psi^2(\cdot,x) \in C_\gamma[0,\infty)$, so using theorem \ref{T1} we have $S_{n}(\Psi^2(t,x);x) \rightarrow \Psi^2(x,x)=0$ as $n \rightarrow \infty$; together with lemma \ref{L6}, the second term on the right-hand side of (\ref{E12}) tends to zero for $x \in [0, a]$, and we get
\begin{equation*}
\lim_{n \rightarrow \infty} b_n \left[S_n(f;x)-f(x)\right]= \dfrac{1}{2} xf^{''}(x)
\end{equation*}
uniformly for $x \in [0, a]$ $(a>0)$.
\end{proof}
\section{Polynomial weight spaces}
The results discussed in Section $3$ are of a local character, dealing with compact sub-intervals of $[0, \infty)$. To derive global results for continuous functions on the unbounded interval, one has to consider spaces other than those discussed earlier (see \cite{Gad}). One such space is the polynomial weight space, where one can establish global results for the operators (\ref{RBG1}) as well as a characterization. So, in this section, we introduce the polynomial weight space $C_N$ given below and discuss direct and inverse results for the non-optimal cases $0 < \alpha < 2$, as well as for the saturation case $\alpha = 2$. \\
Consider the space $C_N$ defined using weight $w_N$, $N \in \mathbb{N}$ as follows:
\begin{eqnarray*}
&& w_0(x) = 1, \qquad w_N(x) = (1 + x^N)^{-1} \qquad (x \geq 0, N \in \mathbb{N}), \\ \\ &&
C_N = \left\lbrace f \in C[0, \infty): w_Nf \; \text{uniformly continous and bounded on} \; [0, \infty) \right\rbrace, \\ \\ &&
\Vert f \Vert_N = \sup_{x \geq 0} w_N(x)\left|f(x) \right|.
\end{eqnarray*}
The corresponding Lipschitz classes are given for $0 < \alpha \leq 2$ by $(h > 0)$
\begin{eqnarray*}
&& \Delta^2_h f(x) = f(x+2h) - 2f(x+h) + f(x), \\ \\ &&
\omega_N^2(f, \delta) = \sup_{0 < h \leq \delta} \Vert \Delta_h^2 f \Vert_N, \\ \\ &&
Lip_N^2 \alpha = \left\lbrace f \in C_N: \omega_N^2(f, \delta) = O\left(\delta^{\alpha}\right), \delta \rightarrow 0^{+} \right\rbrace.
\end{eqnarray*}
We have the following characterization using the operators defined by (\ref{RBG1}):
\begin{theorem}\label{T4}
Let $f \in C_N, N \in \mathbb{N}$, $\alpha \in (0, 2]$, then for the operators given by (\ref{RBG1}),
\begin{equation} \label{glo1}
w_N(x)\left| S_n(f;x) - f(x) \right| \leq M_N \left[ \dfrac{x}{b_n} \right]^{\alpha/2} \qquad (N \in \mathbb{N}, x \geq 0)
\end{equation}
is equivalent to $f \in Lip_N^2 \alpha$, where $M_N$ is a constant independent of $b_n$ and $x$.
\end{theorem}
\noindent Remark: The condition (\ref{glo1}) exhibits a local structure. As the constant $M_N$ is independent of $b_n$ and $x$, we may write (\ref{glo1}) as
\begin{equation} \label{glo2}
\Vert x^{-\alpha/2} \left[ S_n(f;x) - f(x) \right] \Vert_N = O\left(b_n^{-\alpha/2} \right),
\end{equation}
which reveals the global character of the equivalence assertion; this is due to the incorporation of the weights $w_N$ into the approximation (\ref{glo1}) as well as into the definition of $Lip_N^2 \alpha$.
\begin{lemma}\label{L7}
For each $N \in \mathbb{N}\cup \left\lbrace 0 \right\rbrace$ there is a constant $M_N$ such that uniformly for $x \geq 0$, $n \in \mathbb{N}$
\begin{equation}\label{glo3}
w_N(x)S_n(1/w_N(t);x)\leq M_N.
\end{equation}
In particular, for any $f \in C_N$,
\begin{equation}\label{glo4}
\Vert S_n(f; \cdot) \Vert_N \leq M_N \Vert f \Vert_N.
\end{equation}
\end{lemma}
\begin{proof}
For $N=0$, $w_0(x)S_n(1/w_0(t);x)= 1\leq M_N$, for any constant $M_N \geq 1$. Now for $N \in \mathbb{N}$,
\begin{eqnarray*}
w_N(x)S_n(1/w_N(t);x)&=& w_N(x)\left[S_n(1;x)+S_n(t^N;x)\right]
\\ &=& \left(1+x^N\right)^{-1} \left[1+x^N+\sum_{j=1}^{N-1}a_{N,j}x^jb_n^{j-N} \right] \quad (\text{by lemma \ref{L4}})\\ &=& 1+ \dfrac{1}{b_n}\sum_{j=1}^{N-1}a_{N,j}x^jb_n^{j-(N-1)}/\left(1+x^N\right)\leq M_N
\end{eqnarray*}
as the coefficients $a_{N,j}$ only depend on $N$ and the sum is bounded with respect to $x$ and $b_n$.\\ Now, since $f \in C_N$,
\begin{eqnarray*}
w_N(x)\left|S_n(f(t);x)\right| &\leq & w_N(x) \sum_{k=0}^{\infty} w_N\left(\dfrac{k}{b_n}\right)\left|f\left(\dfrac{k}{b_n}\right)\right|\left[w_N\left(\dfrac{k}{b_n}\right)\right]^{-1}s_{b_n,k}(x) \\ & \leq & \Vert f \Vert_N w_N(x) S_n(1/w_N(t);x)\leq M_N \Vert f \Vert_N.
\end{eqnarray*}
Taking the supremum over all $x$ on the left-hand side, we get the result.
\end{proof}
\begin{lemma}\label{L8}
For each $N \in \mathbb{N}\cup \left\lbrace 0 \right\rbrace$ there is a constant $M_N$ such that for all $x \geq 0$, $n \in \mathbb{N}$
\begin{equation}\label{glo5}
w_N(x) S_n((t-x)^2/w_N(t);x) \leq M_Nx/b_n.
\end{equation}
Furthermore, one has
\begin{equation}\label{glo6}
\dfrac{1}{b_n} w_N(x) S_n(t/w_N(t);x) \leq M_Nx/b_n.
\end{equation}
\end{lemma}
\begin{proof}
For $N=0$, by (b) of lemma \ref{L5}, $w_0(x) S_n((t-x)^2/w_0(t);x)= S_n((t-x)^2;x)= x/b_n \leq M_N x/b_n$ for a constant $M_N \geq 1$. \\ For $N=1$, by (b)-(c) of lemma \ref{L5},
\begin{eqnarray*}
w_1(x) S_n((t-x)^2/w_1(t);x) &=& w_1(x)\left[S_n((t-x)^2;x)+ S_n((t-x)^3;x)\right. \\ && \qquad \left. +x S_n((t-x)^2;x)\right] \\ &=& \dfrac{x}{b_n}\left[1 + \dfrac{1}{b_n(1+x)} \right] \leq M_N x/b_n
\end{eqnarray*}
for a constant $M_N \geq 2$. \\ For $N \geq 2$, lemma \ref{L4} and (\ref{e24}) imply
\begin{eqnarray*}
S_n((t-x)^2t^N;x) &=& S_n(t^{N+2};x)-2x S_n(t^{N+1};x)+x^2 S_n(t^{N};x)\\ &=& x^{N+2} + a_{N+2,N+1}x^{N+1}/b_n+ \cdots + xb_n^{-N-1} \\ && \quad -2x \left[ x^{N+1} + a_{N+1,N}x^{N}/b_n+ \cdots + xb_n^{-N}\right] \\ && \quad +x^2 \left[ x^{N} + a_{N,N-1}x^{N-1}/b_n+ \cdots + xb_n^{-N+1} \right] \\ &=& \left(a_{N+2,N+1}-2a_{N+1,N}+a_{N,N-1} \right)x^{N+1}/b_n + \cdots + xb_n^{-N-1} \\ &=& \left[x^N + \cdots + b_n^{-N} \right]\dfrac{x}{b_n}.
\end{eqnarray*}
Hence
\begin{eqnarray*}
w_N(x)S_n((t-x)^2/w_N(t);x) &=& w_N(x) \left[1+ x^N + \cdots + b_n^{-N} \right]\dfrac{x}{b_n} \leq M_N \dfrac{x}{b_n}.
\end{eqnarray*}
This proves (\ref{glo5}). Also
\begin{eqnarray*}
S_n(t/w_N(t);x) &=& S_n(t;x)+ S_n(t^{N+1};x) \\ &=& x+ x^{N+1} + \cdots +x b_n^{-N}\\ &=& \left[1+ x^{N} + \cdots + b_n^{-N} \right]x .
\end{eqnarray*}
Therefore,
\begin{eqnarray*}
\dfrac{1}{b_n}w_N(x) S_n(t/w_N(t);x) &=& w_N(x) \left[1+ x^{N} + \cdots + b_n^{-N} \right]\dfrac{x}{b_n} \leq M_N \dfrac{x}{b_n}.
\end{eqnarray*}
\end{proof}
\subsection{Direct result}
\begin{lemma}\label{L9}
Let $N \in \mathbb{N}\cup \left\lbrace 0 \right\rbrace$, $g \in C_N^2 := \left\lbrace f \in C_N : f^{'}, f^{''} \in C_N \right\rbrace$. Then there exists a constant $M_N$ such that for all $n \in \mathbb{N}$, $x \geq 0$
\begin{equation}\label{glo7}
w_N(x)\left| S_n(g;x) - g(x) \right| \leq M_N \Vert g^{''} \Vert_N \dfrac{x}{b_n}.
\end{equation}
\end{lemma}
\begin{proof}
Using Taylor's theorem, we can write
\begin{equation*}
g(t) - g(x) = (t-x)g^{'}(x) + \int_x^t \int_x^s g^{''}(u) du ds \qquad \qquad (x, t \geq 0)
\end{equation*}
and the estimate
\begin{eqnarray*}
\left|\int_x^t \int_x^s g^{''}(u) \, du \, ds \right| &\leq & \Vert g^{''}\Vert_N \left| \int_x^t \int_x^s \dfrac{1}{w_N(u)} \, du \, ds \right| \\ &\leq & (1/2) \Vert g^{''}\Vert_N(t-x)^2\left[ \dfrac{1}{w_N(x)} + \dfrac{1}{w_N(t)}\right],
\end{eqnarray*}
the proof follows since by lemma \ref{L3}(a)-(b), lemma \ref{L5}(b) and (\ref{glo5})
\begin{eqnarray*}
w_N(x)\left|S_n(g ;x) - g(x) \right| &\leq & (1/2) \Vert g^{''}\Vert_N \left[\mu_{n,2}(x) + w_N(x) S_n\left(\dfrac{(t-x)^2}{w_N(t)};x \right) \right] \\ &\leq & M_N \Vert g^{''} \Vert_N \dfrac{x}{b_n}.
\end{eqnarray*}
\end{proof}
Now we can write the proof of direct part of theorem \ref{T4}.
\begin{theorem}\label{T5}
For any $N \in \mathbb{N} \cup \left\lbrace 0 \right\rbrace$, $f \in C_N$ there holds for all $x \geq 0$, $n \in \mathbb{N}$
\begin{equation}\label{glo8}
w_N(x) \left| S_n(f;x) - f(x) \right| \leq M_N \omega_N^2\left( f, \sqrt{\dfrac{x}{b_n}}\right).
\end{equation}
In particular, if $f \in Lip_N^2 \alpha$ for some $\alpha \in (0, 2]$, then
\begin{equation*}
w_N(x) \left| S_n(f;x) - f(x) \right| \leq M_N \left[\dfrac{x}{b_n}\right]^{\alpha/2}.
\end{equation*}
\end{theorem}
\begin{proof}
For $N=0$, the assertion follows from theorem \ref{T2}. For $f \in C_N$ and $h>0$, one has by lemmas \ref{L7} and \ref{L9} and by (\ref{e3}) that
\begin{eqnarray*}
w_N(x)\left|S_n(f ;x) - f(x) \right| &\leq & w_N(x)\left|S_n([f-f_h] ;x) \right| \\ && \quad + w_N(x)\left|S_n(f_h ;x) - f_h(x) \right| + w_N(x) \left|f_h(x) - f(x) \right| \\ &\leq & \Vert f - f_h \Vert_N \left[w_N(x) S_n(1/w_N(t);x)+1\right] + M_N \Vert f_h^{''}\Vert_N \dfrac{x}{b_n} \\ &\leq & M_N \omega_N^2(f,h)\left[1 + \dfrac{x}{b_nh^2} \right],
\end{eqnarray*}
so that (\ref{glo8}) follows upon setting $h = \sqrt{\dfrac{x}{b_n}}$.
\end{proof}
Remark: The above theorem implies that for each $f \in C_N$, $x \geq 0$
\begin{eqnarray*}
\lim_{n \rightarrow \infty }w_N(x)\left|S_n(f ;x) - f(x) \right|=0.
\end{eqnarray*}
\subsection{Inverse result}
The main tool for the proof of the inverse theorems in the non-optimal case $0<\alpha<2$ is an appropriate Bernstein-type inequality.
\begin{lemma}\label{L10}
For $f \in C_N$ and $x, \delta >0$ there holds
\begin{equation}\label{glo9}
w_N(x)\left| S_n^{''}(f;x)\right| \leq M_N \omega^2_N (f, \delta) \left[\dfrac{b_n}{x} + \delta^{-2} \right].
\end{equation}
\end{lemma}
\begin{proof}
We can represent $S_n^{''}(f;x)$ in two ways:
\begin{equation}\label{glo10}
S_n^{''}(f;x)= \left[\dfrac{b_n}{x}\right]^2\sum_{k=0}^{\infty}r_{n,k}(x)f\left(\dfrac{k}{b_n}\right)s_{b_n,k}(x),
\end{equation}
where $r_{n,k}(x)=\left[\left(\dfrac{k}{b_n}-x \right)^2 - \dfrac{k}{b_n^2} \right]$ and
\begin{equation}\label{glo11}
S_n^{''}(f;x)= b_n^2\sum_{k=0}^{\infty}\Delta_{1/b_n}^2 f\left(\dfrac{k}{b_n}\right)s_{b_n,k}(x).
\end{equation}
From (\ref{e3}) there follows
\begin{eqnarray}\label{glo12}
\left|\Delta_{1/b_n}^2 f_{\delta}\left(\dfrac{k}{b_n}\right)\right| &\leq & \int_0^{1/b_n}\int_0^{1/b_n}\left|f_{\delta}^{''}\left(\dfrac{k}{b_n}+u+v\right) \right|du dv \nonumber \\ & \leq & \Vert f_{\delta}^{''}\Vert_N\int_0^{1/b_n}\int_0^{1/b_n}\dfrac{1}{w_N\left(\dfrac{k}{b_n}+u+v\right)} du dv \nonumber \\ & \leq & 9 (b_n\delta)^{-2}\omega_N^2(f, \delta)/w_N\left(\dfrac{k+2}{b_n}\right).
\end{eqnarray}
By (\ref{glo10}), (\ref{glo11}) and (\ref{glo12}) one has for $x, \delta >0$
\begin{eqnarray*}
w_N(x) \left|S_n^{''}(f;x) \right| &\leq & w_N(x) \left|S_n^{''}\left(f - f_{\delta};x \right) \right| + w_N(x) \left|S_n^{''}\left(f_{\delta};x\right) \right| \\ & \leq & w_N(x) (b_n/x)^2 \sum_{k=0}^{\infty}\left| r_{n,k}(x)\right| \left| f\left(\dfrac{k}{b_n} \right) - f_{\delta}\left(\dfrac{k}{b_n} \right) \right| s_{b_n,k}(x) \\ && \quad + w_N(x) b_n^2 \sum_{k=0}^{\infty} \left|\Delta_{1/b_n}^2 f_{\delta}\left( \dfrac{k}{b_n}\right)\right| s_{b_n,k}(x)\\ & \leq & \omega_N^2(f, \delta)\left\lbrace w_N(x) (b_n/x)^2 \sum_{k=0}^{\infty}\left| r_{n,k}(x)\right| \left[w_N\left(\dfrac{k}{b_n} \right) \right]^{-1}s_{b_n,k}(x) \right. \\ && \left. \quad + 18 \delta^{-2} w_N(x) \sum_{k=0}^{\infty} \left[w_N\left(\dfrac{k+2}{b_n} \right) \right]^{-1}s_{b_n,k}(x)\right\rbrace.
\end{eqnarray*}
To obtain (\ref{glo9}) we have to show that
\begin{equation}\label{glo13}
w_N(x) (b_n/x)^2 \sum_{k=0}^{\infty}\left| r_{n,k}(x)\right| \left[w_N\left(\dfrac{k}{b_n} \right) \right]^{-1}s_{b_n,k}(x) \leq M_N \dfrac{b_n}{x}
\end{equation}
and
\begin{equation}\label{glo14}
w_N(x) \sum_{k=0}^{\infty} \left[w_N\left(\dfrac{k+2}{b_n} \right) \right]^{-1}s_{b_n,k}(x)\leq M_N.
\end{equation}
Now (\ref{glo14}) follows from (\ref{glo3}) since there exists a constant $M_N^{'}$ such that for all $k \in \mathbb{N} \cup \{0\}$, $n \in \mathbb{N}$
\begin{equation*}
\left[w_N\left( \dfrac{k+2}{b_n}\right)\right]^{-1}\leq M_N^{'}\left[w_N(k/b_n)\right]^{-1},
\end{equation*}
whereas (\ref{glo13}) follows from lemma \ref{L7}, \ref{L8} since
\begin{equation*}
w_N(x)\left[S_n((t-x)^2/w_N(t);x) + \dfrac{1}{b_n}S_n(t/w_N(t);x)\right]\leq M_N\dfrac{x}{b_n}.
\end{equation*}
\end{proof}
\begin{lemma}[\cite{Bec},lemma 10]\label{L11}
For $x \geq 0$, $0 < h \leq 1$
\begin{equation}\label{glo15}
\int_0^h \int_0^h \dfrac{ds dt}{x + s + t } \leq \dfrac{6h^2}{x + 2h}.
\end{equation}
\end{lemma}
\begin{proof}
One has
\begin{equation*}
\int_0^h \int_0^h \dfrac{ds dt}{x + s + t } = \Delta_h^2[t \log t](x),
\end{equation*}
which in particular hold true for $x=0$. This yields for $x \in [0,h]$
\begin{equation*}
\int_0^h \int_0^h \dfrac{ds dt}{x + s + t } \leq \int_0^h \int_0^h \dfrac{ds dt}{ s + t } = 2h \log 2 < \dfrac{6h^2}{3h}\leq \dfrac{6h^2}{x+2h}.
\end{equation*}
For $h \leq x$ one has
\begin{equation*}
\int_0^h\int_0^h \dfrac{ds dt}{x + s + t } \leq \dfrac{3h^2}{3x}\leq \dfrac{3h^2}{x+2h}.
\end{equation*}
This proves (\ref{glo15}).
\end{proof}
\begin{theorem}\label{T6}
Let $N \in \mathbb{N} \cup \left\lbrace 0 \right\rbrace$. If $f \in C_N$ satisfies for some $\alpha \in (0, 2)$ and all $N \in \mathbb{N}$, $x \geq 0$
\begin{equation}\label{glo16}
w_N(x)\left|S_n(f;x) - f(x) \right| \leq M_N \left[ \dfrac{x}{b_n} \right]^{\alpha/2},
\end{equation}
then $f \in Lip_N^2 \alpha$.
\end{theorem}
\begin{proof}
It is sufficient to show that for $0< h$, $\delta \leq 1$, $\delta < \sqrt{h}$
\begin{equation}\label{glo17}
\omega_N^2(f,h) \leq M_N \left[\delta^{\alpha} + (h/\delta)^2\omega_N^2(f,\delta) \right].
\end{equation}
To this end, fix $0 < h$, $\delta \leq 1$, $\delta < \sqrt{h}$, $x \geq 0$. Using lemmas (\ref{L10}), (\ref{L11}) and observing that $w_N(x)/w_N(x+2h)\leq 3^N$ as $h \leq 1$, there follows from (\ref{glo16}) for all $n \in \mathbb{N}$
\begin{eqnarray*}
w_N(x)\left|\Delta_h^2f(x) \right| & \leq & w_N(x)\left|f(x+2h) - S_n(f(t + 2h);x) \right| \\ && + 2w_N(x)\left|S_n(f(t + h);x) - f(x+h) \right| \\ && + w_N(x)\left|f(x) - S_n(f(t);x) \right| + w_N(x)\left|\Delta_h^2S_n(f(t);x) \right| \\ & \leq & M_N \left[ \dfrac{x + 2h}{b_n} \right]^{\alpha/2} \left\lbrace \dfrac{w_N(x)}{w_N(x + 2h)} + \dfrac{2w_N(x)}{w_N(x + h)} + 1 \right\rbrace \\ && + w_N(x) \int_0^h \int_0^h\left| S_n^{''}(f(t);x+s+u)\right| ds du \\ & \leq & M_N \left[ \dfrac{x + 2h}{b_n} \right]^{\alpha/2} + M_N\omega_N^2(f,\delta)\left\lbrace \dfrac{w_N(x)}{w_N(x + 2h)} \right\rbrace \\ && \left[b_n\int_0^h \int_0^h \dfrac{ds du}{x + s+ u} + \left(\dfrac{h}{\delta} \right)^2 \right] \\ & \leq & M_N \left\lbrace \left[ \dfrac{x + 2h}{b_n} \right]^{\alpha/2} + \left[M b_n \dfrac{h^2}{x+2h} + \left(\dfrac{h}{\delta} \right)^2 \right]\omega_N^2(f,\delta) \right\rbrace.
\end{eqnarray*}
For the case $x=0$, let us only note that the estimate holds true in view of the existence of the integral for $x=0$ and the continuity of the expressions involved. Now choose $n$ such that
\begin{equation*}
\sqrt{\dfrac{x+2h}{b_n}} \leq \delta < \sqrt{\dfrac{x+2h}{(b_n - 1)}} \leq \sqrt{2} \sqrt{\dfrac{x+2h}{b_n}},
\end{equation*}
the last expression being $\geq 2 \sqrt{\dfrac{h}{b_n}} $. Then
\begin{equation*}
\omega_N^2(f,h) \leq M_N \left[\delta^{\alpha} + (h/\delta)^2\omega_N^2(f,\delta) \right].
\end{equation*}
proving (\ref{glo17}). This completes the proof.
\end{proof}
\begin{theorem}\label{T7}
Let $N \in \mathbb{N} \cup \left\lbrace 0 \right\rbrace$. If $f \in C_N$ satisfies for all $N \in \mathbb{N}$, $x \geq 0$
\begin{equation}\label{glo18}
w_N(x)\left|S_n(f;x) - f(x) \right| \leq M_N \left[ \dfrac{x}{b_n} \right],
\end{equation}
then $f \in Lip_N^2 2$.
\end{theorem}
\begin{proof}
For any $f \in C_N$ one has
\begin{equation}\label{glo19}
f \quad \text{is convex on} \quad [0, \infty) \quad \text{iff} \quad S_n(f;x) \geq f(x) \quad \text{for all} \quad n \in \mathbb{N}, x \geq 0.
\end{equation}
Furthermore, the representation (\ref{e23}) implies
\begin{eqnarray*}
S_n(t^{N+2};x) & = & x^{N+2} + \dfrac{(N+2)(N+1)}{2b_n}x^{N+1}+ \cdots + \dfrac{x}{b_n^{N+1}} \\ & \geq & x^{N+2} + \dfrac{x^{N+1}}{b_n} = x^{N}S_n(t^2;x)
\end{eqnarray*}
for all $N \in \mathbb{N} \cup \{0\}$, so that
\begin{equation}\label{glo20}
S_n(t^2;x)/w_N(x) \leq S_n(t^2/w_N(t);x).
\end{equation}
In view of lemma \ref{L3}(a)-(b), lemma \ref{L5}(b) and equation (\ref{glo18}), for $f \in C_N$
\begin{equation*}
\pm f(x) \leq \dfrac{M_N}{w_N(x)} \left( S_n(t^2;x) - x^2 \right) + S_n(\pm f(t); x).
\end{equation*}
Then one has by (\ref{glo20})
\begin{eqnarray*}
\dfrac{M_N x^2}{w_N(x)} \pm f(x) & \leq & M_N \dfrac{S_n(t^2;x)}{w_N(x)} + S_n(\pm f(t);x) \\ & \leq & S_n \left(\dfrac{M_Nt^2}{w_N(t)} \pm f(t);x \right).
\end{eqnarray*}
Therefore $\dfrac{M_N t^2}{w_N(t)} \pm f(t)$ are convex functions on $[0, \infty)$, that is,
\begin{equation*}
\Delta_h^2\left[\dfrac{M_N t^2}{w_N(t)} \pm f(t)\right](x) \geq 0.
\end{equation*}
In other words
\begin{equation*}
w_N(x)\left|\Delta_h^2 f(x) \right| \leq M_Nw_N(x)\Delta_h^2 \left[\dfrac{t^2}{w_N(t)}\right](x) \leq M_N h^2.
\end{equation*}
This proves that $f \in Lip_N^2 2 $.
\end{proof}
\section{Examples, graphs and analysis}
The modification (\ref{RBG1}) of (\ref{Mir}) is due to the introduction of the sequence $\left( b_n \right)_{1}^{\infty}$, which is an increasing sequence of real numbers with $b_n \rightarrow \infty$ as $n \rightarrow \infty$ and $b_1 \geq 1$. There are many such sequences. Consider, for example, $b_n = r^n$ with $r > 1$, or $b_n = n^m$ with $m \geq 1$. For these examples there is an $N$ for which $b_n > n$ for all $n \geq N$. If instead we take $b_n = \sum_{N=1}^{n} \dfrac{1}{N^p}$, where $0 \leq p \leq 1$, then one can easily check that $\left( b_n \right)$ is an increasing sequence with $b_1 \geq 1$, $b_n \rightarrow \infty$ as $n \rightarrow \infty$ and $b_n \leq n$ for all $n$. These sequences are well controlled and ordered, since in these cases we get $ \sum_{N=1}^{n} \frac{1}{N} < \sum_{N=1}^{n} \frac{1}{N^{p_1}} < \sum_{N=1}^{n} \frac{1}{N^{p_2}} < n $ for $ 0 < p_2 < p_1<1 $. Some graphs of functions under the operators $M_n$ and $S_n$ represented by equations (\ref{Mir}) and (\ref{RBG1}), respectively, are given below, showing how the choice of the sequence $\left( b_n \right)$ affects the approximation procedure. As these operators are defined by infinite series, we truncate the series at a fixed value of the summation index $k$. \\
Figures (\ref{F1a}-\ref{F1c}) show graphs of $f(x) = e^x$ (Magenta), of $M_n(f;x)$ (Blue), and of $S_n(f;x)$ with $b_n = \sum_1^n (1/N)$ (Red) and $b_n = \sum_1^n(1/\sqrt{N})$ (Green), for $n = 3 \, \text{to} \, 15$ and $k = 50$, corresponding to $x \in [0,2]$, $x \in [0,4]$ and $x \in [0, 6]$, respectively. It can be observed that as the interval expands, the graphs of the better estimates degenerate first. Figures (\ref{F2a}-\ref{F2c}) show the same graphs for $k = 100$; the onset of degeneration is now delayed. Figures (\ref{F4a}-\ref{F4c}) show the same graphs for $k = 100$ but for $n = 20 \, \text{to} \, 30$, where the curves are more consolidated. Figure (\ref{F5b}) plots $M_n(f;x)$ (Blue) for $n = 20 \, \text{to} \, 30$, together with $S_n(f;x)$ for $b_n = \sum_1^n (1/N)$ (Red) and $b_n = \sum_1^n(1/\sqrt{N})$ (Green) for $n = 120 \, \text{to} \, 130$. The graphs of $S_n(f;x)$ with $b_n = \sum_1^n(1/\sqrt{N})$ (Green) for $n = 120 \, \text{to} \, 130$ appear to achieve the same accuracy as $M_n(f;x)$ for $n = 20 \, \text{to} \, 30$. Figure (\ref{F5c}) plots $M_n(f;x)$ for $n =7, 8, 9$ and $S_n(f;x)$ with $b_n = \sum_1^n(1/\sqrt{N})$ (Red) for $n = 78 \, \text{to} \, 95$; here the curves for $S_n(f;x)$ are more consolidated than those for $M_n(f;x)$. Figures (\ref{F6a}-\ref{F6b}) show a similar comparison for $f(x) = \sin x$ (Magenta) on $x \in [0, 2\pi]$, with $k = 100$ and $k = 120$, respectively, for $M_n(f;x)$ (Blue) with $n = 20 \, \text{to} \, 25$, $S_n(f;x)$ with $b_n = \sum_1^n(1/\sqrt{N})$ (Red) for $n = 80 \, \text{to} \, 100$, and $S_n(f;x)$ with $b_n = \sum_1^n(1/N)$ (Green) for $n = 80 \, \text{to} \, 100$.
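For the reader's convenience, we also indicate how such truncated values can be computed; the following Python sketch (with one of the parameter choices used in the figures) is only meant to illustrate the procedure and is not part of the analysis.
\begin{verbatim}
import math

# Sketch: truncated evaluation of S_n(f;x) = sum_k s_{b_n,k}(x) f(k/b_n),
# with the series cut off at k = k_max.
def b_n(n, p):                      # b_n = sum_{N=1}^{n} 1/N^p, 0 <= p <= 1
    return sum(1.0 / N ** p for N in range(1, n + 1))

def S_n(f, x, bn, k_max):
    total = 0.0
    for k in range(k_max + 1):
        weight = math.exp(-bn * x) * (bn * x) ** k / math.factorial(k)
        total += weight * f(k / bn)
    return total

bn = b_n(15, 0.5)                   # b_n = sum of 1/sqrt(N), n = 15
print(S_n(math.exp, 1.0, bn, k_max=50), math.exp(1.0))
\end{verbatim}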
\begin{figure}
\centering
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{1exp_x0to2_n3to15_k50_MnSnSrootn.jpg}
\caption{}
\label{F1a}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{2exp_x0to4_n3to15_k50_MnSnSrootn.jpg}
\caption{}
\label{F1b}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{3exp_x0to6_n3to15_k50_MnSnSrootn.jpg}
\caption{}
\label{F1c}
\end{subfigure}
\caption{}
\label{F1}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{4exp_x0to2_n3to15_k100_MnSnSrootn.jpg}
\caption{}
\label{F2a}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{5exp_x0to4_n3to15_k100_MnSnSrootn.jpg}
\caption{}
\label{F2b}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{6exp_x0to6_n3to15_k100_MnSnSrootn.jpg}
\caption{}
\label{F2c}
\end{subfigure}
\caption{}
\label{F2}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{10exp_x0to2_n20to30_k100_MnSnSrootn.jpg}
\caption{}
\label{F4a}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{11exp_x0to4_n20to30_k100_MnSnSrootn.jpg}
\caption{}
\label{F4b}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{12exp_x0to6_n20to30_k100_MnSnSrootn.jpg}
\caption{}
\label{F4c}
\end{subfigure}
\caption{}
\label{F4}
\end{figure}
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{13exp_x0to4_Mn20to30_k100_SnSrootn120to130.jpg}
\caption{}
\label{F5b}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{16exp_x0to6_k120_Mn789Srootn78to95.jpg}
\caption{}
\label{F5c}
\end{subfigure}
\caption{}
\label{F5}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{17sin_x0to2pi_k100_Mn20to25_Srootn80to100_Sn80to100.jpg}
\caption{}
\label{F6a}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{18sin_x0to2pi_k120_Mn20to25_Srootn80to100_Sn80to100.jpg}
\caption{}
\label{F6b}
\end{subfigure}
\caption{}
\label{F6}
\end{figure}
\section{Introduction}
The isotropic-nematic phase transition in dispersions of rigid, rodlike colloids occurs at sufficiently high concentration of rods. For uncharged rods, this phase transition is purely the result of a competition between orientational entropy, which is maximized in the isotropic phase, and the translational entropy, which favors the nematic phase, where rods tend to align along a nematic director $\hat{n}$. For long, rigid, needle-like rods, this phase transition is accurately described by Onsager's second-virial theory.~\cite{Onsager}
Many experimental systems do not form ordinary nematic phases, but instead form a cholesteric (chiral nematic) phase, where the nematic director field has a helical arrangement with a pitch much larger than the colloidal dimensions. Though the cholesteric phase is ubiquitous in experimental systems, the relationship between particle properties and macroscopic chirality remains unclear.~\cite{Harris1999,Grelet2003} An illustrative example of this involves suspensions of filamentous fd virus, which are semi-flexible charged need\-les with a chiral structure that form a cholesteric phase in a density regime that depends on the ionic strength. In Ref.~[\citen{Grelet2003}], however, fd-virus particles sterically stabilized by a coating with the neutral polymer polyethylene glycol (PEG) exhibited a phase diagram and a nematic order parameter independent of the ionic strength, but surprisingly, the fd-PEG continued to form a cholesteric phase with a pitch that did vary with the ionic strength.
Furthermore, molecular chirality does not guarantee macroscopic chirality. For example, the virus Pf1, with a chiral structure very similar to that of fd, does not form a cholesteric phase (or its pitch is too large to observe experimentally).~\cite{Fraden2000} Indeed, subtle alterations of the surface properties of fd that do not have a large effect on the phase diagram can have an appreciable effect on the cholesteric pitch.~\cite{Zhang2013} Reversing the surface charge of fd from negative to positive even prevented the observation of a cholesteric pitch, though the chemical modification of fd may have also introduced additional attractive forces.~\cite{Zhang2013} The fact that the cholesteric pitch is very sensitive to particle surface properties was also shown in a study of M13, which is a charged, large-aspect ratio bacteriophage with a right-handed structure that is shown to form a left-handed macroscopic phase.~\cite{Tombolato2006} Though steric effects are shown to favor a right-handed phase, charges added along grooves of the coarse-grained representation of M13 caused the calculated pitch to become left-handed.~\cite{Tombolato2006} The microscopic origin of chirality in colloidal suspensions remains a mystery despite many interesting recent works,\cite{Wensink2015,Ferrarini2015,Dussi} though charge seems to be one of the crucial ingredients.~\cite{Grelet2003,Zhang2013,Tombolato2006,Wensink2009,Wensink2011}
A wide variety of experimental systems that display nematic or cholesteric phases involve electrostatic interactions,~\cite{Vroege1992} for example, filamentous viruses,~\cite{Lapointe1973, Fraden1995, Dogic2006} actin filaments,~\cite{Suzuki1991,Coppin1992,Furukawa1993} cellulose derivatives,~\cite{Werbowyj1976, Werbowyj1980} and single-walled carbon nanotubes in superacids.~\cite{Davis2004, Rai2006} For strong electrostatic interactions or short screening lengths, the isotropic-nematic phase transition is well understood. Onsager~\cite{Onsager} was the first to note that the soft repulsion can be treated by renormalizing the diameter of the rods. Stroobants et al.~\cite{Lekkerkerker1986} showed that there is a second effect for strong electrostatic interactions, namely a ``twisting" due to the angle dependence of the electrostatic potential, which makes the rods resist aligning.
Weakly charged rods have also been studied extensively. In Refs.~[\citen{Semenov1988, Nyrkova1997}], a scaling theory was used to give qualitative predictions for charged rods. Interestingly, in a certain region of low charge density and moderate screening they predict that a competition between steric and electrostatic effects leads to a coexistence between a nematic and a highly oriented nematic phase. Most predictions of Refs.~[\citen{Semenov1988, Nyrkova1997}], including the existence of the nematic-nematic coexistence were confirmed in Ref.~[\citen{Chen1996}], using a Debye-H\"{u}ckel-like theory that includes some many-rod correlations. Another interesting result is that the correlation electrostatic energy due to charge fluctuations in a many-body system of charged rods and counterions makes orientational order more favorable, stabilizes a weakly ordered nematic at small rod concentrations, and leads to the possibility of two nematic phases.~\cite{Potemkin2002,Potemkin2005}
The goal of this paper is to quantitatively examine both weak and strong electrostatic interactions using second-virial theory and additionally to investigate how charge affects the stability of the nematic phase with respect to spontaneous twist deformations. In Sec.~\ref{sect:Onsager}, we review second-virial theory for charged rods and extend previous results to weakly charged rods. We also determine for which parameters the twisting effect becomes important for weakly charged rods. In Sec.~\ref{sect:PhaseDiagrams}, we construct the phase diagrams and identify the parameter regime where nematic-nematic coexistence can occur. We then briefly discuss the possibility of seeing the nematic-nematic coexistence experimentally in Sec.~\ref{sect:experiment}. In Sec.~\ref{sect:ElasticConsts}, we investigate if the twisting effect can stabilize a cholesteric phase. We do this by calculating the Frank elastic constants of the nematic phase and examining the relationship between the twisting effect and the twist elastic constant. We are especially interested whether the twist elastic constant (evaluated in the nematic state) can become negative, which would indicate the possibility of a cholesteric phase spontaneously forming. Finally, we examine the sign of the twist elastic constant for finite aspect-ratio rods in Sec.~\ref{sect:Simone}. We end with a summary and conclusions in Sec.~\ref{sect:conclusion}.
\section{Onsager theory}\label{sect:Onsager}
\subsection{Second-virial term}
\begin{figure}[htb]
\includegraphics[width=\linewidth, bb = 0 0 350 400]{figure1.pdf}
\caption{Illustration of two charged spherocylinders with diameters $D$ from two different viewpoints. The rods are oriented along unit vectors $\hat{\omega}$ and $\hat{\omega}'$, with $\gamma = \cos^{-1} \left( \hat{\omega} \cdot \hat{\omega}' \right)$ the angle between the rods and $x$ the shortest distance between them.}
\label{fig:spherocylinders}
\end{figure}
We consider $N$ charged colloidal rods of length $L$ and diameter $D$ suspended in an electrolytic solvent characterized by a salt concentration $\rho_s$ and a dielectric permittivity $\epsilon$. The system has a total volume $V$ and a temperature $T$. The Bjerrum length is given by $\lambda_B = e^2 \!/(4 \pi\, \epsilon_0 \, \epsilon \,k_\text{B} T)$, with $e$ the elementary charge, $k_\text{B}$ the Boltzmann constant, and $\epsilon_0$ the vacuum permittivity, and the Debye screening length is defined as $\kappa^{-1} = 1/\sqrt{8 \pi \lambda_B \rho_s}$.~\cite{mcquarrie}
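As a point of reference (an illustrative aside, not part of the analysis below), the short Python sketch that follows evaluates $\lambda_B$ and $\kappa^{-1}$ for an aqueous solvent at room temperature; the permittivity, temperature, and salt concentrations are assumed example values.
\begin{verbatim}
import numpy as np

e, kB, eps0 = 1.602176634e-19, 1.380649e-23, 8.8541878128e-12
T, eps_r = 298.0, 78.5                       # water at room temperature (assumed)
lamB = e**2 / (4 * np.pi * eps0 * eps_r * kB * T)
print(f"Bjerrum length: {lamB * 1e9:.2f} nm")
for mM in (0.1, 1.0, 10.0):                  # monovalent salt in mmol/L (examples)
    rho_s = mM * 6.02214076e23               # ions per m^3 (1 mM = 1 mol/m^3)
    debye = 1.0 / np.sqrt(8 * np.pi * lamB * rho_s)
    print(f"{mM:5.1f} mM salt: Debye length {debye * 1e9:5.1f} nm")
\end{verbatim}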
In addition to a hard-core repulsion between a pair of rods, there is also a screened electrostatic interaction, approximated by the interaction between two line charges with effective linear charge density $v_\text{eff} = Z/L$ with $Z$ the number of elementary charges on a rod. The form of this electrostatic interaction for infinitely long cylinders in the Debye-H\"{u}ckel approximation is well known.~\cite{Fixman, Brenner, Stigter} The pair potential $U(x, \gamma)$ is given by
\begin{equation}
\label{eq:potential}
\beta U(x, \gamma)= \left\{
\begin{array}{cl}
\infty, & x \leq D \\
\displaystyle\frac{\mathcal{A} \, e^{-\kappa x }}{\kappa D |\sin \gamma|}, & x>D ,
\end{array}
\right.
\end{equation}
with $\beta =(k_\text{B} T)^{-1}$, $x$ the minimum separation between the two rods, $\gamma$ the angle between the rods with orientations $\hat{\omega}$ and $\hat{\omega}'$ defined by $\cos \gamma = \hat{\omega} \cdot \hat{\omega}'$ (see Fig.~\ref{fig:spherocylinders}), and where we have introduced the dimensionless coupling parameter
\begin{equation}
\mathcal{A} = 2 \pi \, v_\text{eff}^2\, \lambda_B \, D.
\label{eq:charge}
\end{equation}
Eq.~(\ref{eq:potential}) is valid when $\kappa^{-1} \ll L$ and $x \ll L$.
Following Onsager,~\cite{Onsager} we study the phase behavior of this suspension of charged needles in terms of the single-rod orientation distribution function $\psi(\hat{\omega})$, which suffices for translationally invariant phases. The distribution $\psi(\hat{\omega})$ is normalized as
\begin{equation}
\label{eq:norm}
\int \psi(\hat{\omega}) \, d\hat{\omega} = 1.
\end{equation}
Assuming $L \gg D$, we can write the Helmholtz free energy functional $F[\psi]$ of a suspension of rods in the second-virial approximation as
\begin{align}
\label{eq:freeEnergy}
\frac{\beta F[\psi]}{ V} =& \rho (\ln \mathcal{V}\rho -1)+ \rho \int \psi(\hat{\omega})\ln \psi(\hat{\omega}) \, d\hat{\omega} \nonumber\\
&+\frac{1}{2} \rho^2 \iint E(\hat{\omega},\hat{\omega}') \psi(\hat{\omega}) \psi(\hat{\omega}') \, d\hat{\omega} \,d\hat{\omega}' \nonumber\\
& + \mathcal{O}(\rho^3),
\end{align}
where $\rho = N/V$ is the number density and $\mathcal{V}$ is a thermal volume. In Eq.~(\ref{eq:freeEnergy}), the first term gives the translational entropy and the second gives the orientational entropy. The third term is the second-virial term, with the ``excluded volume" term $E (\hat{\omega}, \hat{\omega}') $ defined as
\begin{align}
\label{eq:ExclVolMayer}
E (\hat{\omega}_1, \hat{\omega}_2) &= -\frac{1}{V} \iint \Phi(\mathbf{r}_1-\mathbf{r}_2;\hat{\omega}_1,\hat{\omega}_2) \, d\mathbf{r}_1 \, d\mathbf{r}_2 \nonumber \\
&=-\int \left[ e^{-\beta U(\mathbf{r}_{12};\hat{\omega}_1,\hat{\omega}_2)}-1 \right] \, d\mathbf{r}_{12} ,
\end{align}
where we have used translational invariance, defined $\mathbf{r}_{12} =\mathbf{r}_1-\mathbf{r}_2$, and introduced the Mayer function $\Phi = e^{-\beta U}-1$ which depends on $U(\mathbf{r}_{12};\hat{\omega}_1,\hat{\omega}_2)$, the pair potential between a rod with orientation $\hat{\omega}_1$ and position $\vect{r}_1$ and a second rod with orientation $\hat{\omega}_2$ and position $\vect{r}_2$.
Now, performing the integration over the Mayer function in Eq.~(\ref{eq:ExclVolMayer}) with the potential given in Eq.~(\ref{eq:potential}) and using $d\vect{r}_{12} = L^2 |\sin \gamma| \, dx$, we can write $E(\gamma)= E(\hat{\omega}, \hat{\omega}')$ as~\cite{Fixman}
\begin{align}
\label{eq:ExclVol}
E(\gamma) =& -2L^2 \, |\sin \gamma| \int_0^\infty \Phi (x,\gamma)\, dx \nonumber \\
=& 2L^2 D \, |\sin \gamma| \left\{ 1+\frac{1}{\kappa D}\left[ \text{ln}\left( \frac{A'}{|\sin \gamma|} \right) \right. \right. \nonumber\\
& \qquad \left. \left. + \gamma_E -\text{Ei}\left( -\frac{A'}{|\sin \gamma|} \right) \right] \right\},
\end{align}
where $\gamma_E \approx 0.5772$ is Euler's constant, the exponential integral Ei is defined as $\text{Ei}(y) = -\int_{-y}^\infty \exp(-t)/t \,dt$, and $A'=\mathcal{A} \, e^{-\kappa D}/(\kappa D).$ The function $\text{Ei}(-A')$ becomes negligible for $A' \gtrsim 2$, which is the approximation used in Ref.~[\citen{Lekkerkerker1986}]. In the present work, we also consider $A' \lesssim 2$, and hence we keep the $\text{Ei}$ term in Eq.~(\ref{eq:ExclVol}) throughout.
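For illustration only (this is a sketch, not the code behind Fig.~\ref{fig:exclVol}), Eq.~(\ref{eq:ExclVol}) can be evaluated numerically as follows, with the Ei term retained; the chosen values of $\kappa D$ and $\mathcal{A}$ are arbitrary examples.
\begin{verbatim}
import numpy as np
from scipy.special import expi

def E_scaled(gamma, kD, A):
    """E(gamma)/(2 L^2 D); A = 0 recovers the hard-rod result |sin gamma|."""
    s = np.clip(np.abs(np.sin(gamma)), 1e-12, 1.0)
    if A == 0.0:
        return s
    Ap = A * np.exp(-kD) / kD                # A' = A exp(-kD)/(kD)
    return s * (1.0 + (np.log(Ap / s) + 0.5772156649 - expi(-Ap / s)) / kD)

gamma = np.linspace(0.0, np.pi, 361)
hard = E_scaled(gamma, 1.0, 0.0)
charged = E_scaled(gamma, 0.2, 0.05)         # example state point
print(charged[90] / hard[90])                # enhancement at gamma = 45 degrees
\end{verbatim}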
\begin{figure*}[ht!]
\includegraphics[width=\linewidth, bb=0 0 482 170]{exclVol.pdf}
\caption{Dependence of ``excluded volume" $E(\gamma)$ (Eq.~\eqref{eq:ExclVol}) on the angle $\gamma$ between two rods (see Fig.~\ref{fig:spherocylinders}) for uncharged rods ($\mathcal{A}=0$) and charged rods for different values of screening parameter $\kappa D$ and Coulomb coupling $\mathcal{A}$. In (a), $E(\gamma)$ is scaled by the volume factor $2L^2D$. In (b), $E(\gamma)$ is scaled by effective volume factor $2L^2D_\text{eff}$ (see Eq.~\eqref{eq:Deff}). The value of twisting parameter $H$ (Eq.~\eqref{eq:newTwist}) is $0$, $0.71$, $1.1$, and $0.54$ for purple, green, pink, and light blue curves, respectively.}\label{fig:exclVol}
\end{figure*}
The function $E(\gamma)$ of Eq.~(\ref{eq:ExclVol}) depends on the intrinsic excluded volume $L^2 D$ of the rods, the screening parameter $\kappa D$, and the parameter $A'$. However, in order to be able to vary the charge density of the needles and the salt concentration independently, we prefer to use $\mathcal{A}$ rather than $A'$ as an independent parameter, since $\mathcal{A}$ only depends on the charge of the rods (and the Bjerrum length) and not on $\kappa D$. In Fig.~\ref{fig:exclVol}(a), we plot $E(\gamma)$ as a function of the angle $\gamma$ between the rods for a few values of $\kappa D$ and $\mathcal{A}$, along with the hard-rod excluded volume for comparison. We observe essentially two effects compared to the hard-rod excluded volume, for which $E(\gamma)/(2 L^2 D)$ reduces to $|\sin\gamma| $. First, due to the charge there is an overall increase in the ``excluded volume" for all parameters, and second, there is a change in the shape of $E(\gamma)$ from that of hard rods, in particular the $\gamma$-dependence is much stronger at small $\gamma$'s, i.e. charged rods disfavor small angles much more than hard rods do. The former we describe by an increasing effective diameter $D_\text{eff}$ and the latter we describe by a ``twisting" parameter. These two effects will be discussed in Sec.~\ref{sect:Deff} and Sec.~\ref{sect:twist}, respectively.
The equilibrium orientation distribution function is obtained by minimizing $F[\psi]/V$ with respect to $\psi (\hat{\omega})$, at fixed $\rho$, $T$, $\kappa^{-1}$, and $\mathcal{A}$, which gives the integral equation~\cite{Vroege1992}
\begin{equation}
\label{eq:EL}
\ln \psi(\hat{\omega})+ \rho \int E(\gamma(\hat{\omega},\hat{\omega}')) \psi(\hat{\omega}') \, d\hat{\omega}' = C,
\end{equation}
with $C$ a constant that ensures that the constraint of Eq.~(\ref{eq:norm}) is satisfied. There is an analytic solution to Eq.~(\ref{eq:EL}), namely $\psi_\text{i}(\hat{\omega})=1/(4\pi)$, describing the isotropic phase which is the only (stable) one at sufficiently low $\rho$.~\cite{Onsager, Kayser1978, Mulder1989}
At higher densities, $E(\gamma)$ becomes more important in Eq.~(\ref{eq:EL}) and the rods favor the nematic phase, where the orientation distribution function becomes peaked around a nematic director $\hat{n}$. We choose a coordinate system with the $z$-axis parallel to $\hat{n}$. The unit vector $\hat{\omega}$ can be written as $\hat{\omega}=(\sin \theta \, \cos \varphi, \sin \theta \, \sin \varphi, \cos\theta)$, where $\varphi$ is the azimuthal angle and $\theta$ is the polar angle with respect to $\hat{z}$. The orientation distribution function is independent of the azimuthal angle $\varphi$, has up-down symmetry, and hence we can write $\psi(\hat{\omega})=\psi(\hat{\omega} \cdot \hat{n}) = \psi(\hat{\omega} \cdot -\hat{n})$. To determine the orientation distribution function $\psi(\hat{\omega})$ for the nematic phase, we solve Eq.~(\ref{eq:EL}) using an iterative scheme on a discrete grid of polar angles $\theta \in [0,\pi/2)$.~\cite{Herzfeld1984, vanRoij2005}
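A minimal version of such an iterative scheme is sketched below in Python (for illustration only, not the implementation used for the results in this paper). It discretizes $\theta$ on a midpoint grid, precomputes the azimuthally averaged kernel built from Eq.~(\ref{eq:ExclVol}), and iterates $\psi \propto \exp[-\rho\int E\,\psi\,d\hat{\omega}']$ with mild mixing; the state point, expressed through the dimensionless concentration $c=(\pi/4)(N/V)L^2D$, and the grid size are example choices.
\begin{verbatim}
import numpy as np
from scipy.special import expi

kD, A, c = 0.2, 0.05, 8.0                    # example state point, c = (pi/4) rho L^2 D
Ap = A * np.exp(-kD) / kD

def e_hat(s):                                # E(gamma)/(2 L^2 D) from Eq. (ExclVol)
    s = np.clip(s, 1e-12, 1.0)
    return s * (1.0 + (np.log(Ap / s) + 0.5772156649 - expi(-Ap / s)) / kD)

n = 90
theta = (np.arange(n) + 0.5) * np.pi / n     # midpoint grid for the polar angle
dth = np.pi / n
phi = (np.arange(n) + 0.5) * 2 * np.pi / n   # relative azimuthal angle
dph = 2 * np.pi / n
ct, st = np.cos(theta), np.sin(theta)

# kernel K[i,j] = int_0^{2pi} e_hat(gamma(theta_i, theta_j, phi)) dphi
cosg = (ct[:, None, None] * ct[None, :, None]
        + st[:, None, None] * st[None, :, None] * np.cos(phi)[None, None, :])
K = e_hat(np.sqrt(np.clip(1.0 - cosg**2, 0.0, 1.0))).sum(axis=2) * dph

psi = np.exp(5.0 * ct**2)                    # nematic seed
for _ in range(500):
    psi /= 2 * np.pi * np.sum(psi * st) * dth     # enforce Eq. (norm)
    field = (8.0 * c / np.pi) * K.dot(psi * st) * dth
    new = np.exp(-field)
    new /= 2 * np.pi * np.sum(new * st) * dth
    psi = 0.5 * psi + 0.5 * new              # mild mixing for stability

S = 2 * np.pi * np.sum(0.5 * (3 * ct**2 - 1) * psi * st) * dth
print("nematic order parameter S =", round(S, 3))
\end{verbatim}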
\subsection{Effective diameter}\label{sect:Deff}
We introduce the double orientational average in the isotropic phase $\langle \langle \cdot \rangle \rangle_\text{i}$, as
\begin{equation}
\langle \langle f(\hat{\omega}, \hat{\omega}') \rangle \rangle_\text{i} = \frac{1}{16\pi^2}\iint f(\hat{\omega}, \hat{\omega}') \, d\hat{\omega} \, d\hat{\omega}',
\end{equation}
for an arbitrary function $f(\hat{\omega}, \hat{\omega}')$. We now follow Ref.~[\citen{Lekkerkerker1986}] and define
\begin{equation}
\label{eq:Deff}
D_\text{eff} = D+ \alpha \kappa^{-1} ,
\end{equation}
with the effective double-layer thickness parameter
\begin{equation}
\label{eq:alpha}
\alpha = \ln A' +\gamma_E + \ln 2 - \frac{1}{2} - \frac{4}{\pi} \langle \langle |\sin \gamma| \, \text{Ei}\left( -\frac{A'}{|\sin \gamma|} \right) \rangle \rangle_\text{i} .
\end{equation}
One checks from Eq.~(\ref{eq:ExclVol}) that the second-virial coefficient in the isotropic phase can be written as
\begin{equation}
\label{eq:EIso}
\frac{1}{2}\langle \langle E(\hat{\omega}, \hat{\omega}') \rangle \rangle_\text{i} =\frac{\pi}{4}L^2 D_\text{eff},
\end{equation}
where we have used
\begin{align}
&\langle \langle| \sin \gamma| \rangle \rangle_\text{i} = \frac{\pi}{4},\nonumber \\
&\langle \langle -|\sin \gamma| \, \ln |\sin \gamma| \rangle \rangle_\text{i} = \frac{\pi}{4}\left(\ln 2- \frac{1}{2}\right).
\end{align}
Eq.~(\ref{eq:EIso}) is precisely the second-virial coefficient of uncharged rods with a diameter $D_\text{eff}$ (in the isotropic phase). This justifies the interpretation of $D_\text{eff}$ as the effective diameter of the charged needles. The parameter $\alpha$ (Eq.~(\ref{eq:alpha})), which vanishes for $\mathcal{A}=0$, is a result of the electrostatic repulsions, which effectively increase the diameter of the rods.~\cite{Onsager, Lekkerkerker1986} The term in Eq.~(\ref{eq:alpha}) involving the exponential integral has to be integrated numerically. In Fig.~\ref{fig:exclVol}(b), we plot $E(\gamma)$ scaled by $2L^2D_\text{eff}$ in order to emphasize the twisting effect, which is discussed in detail in the following section.
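As an illustration (not the code used for the figures), the average in Eq.~(\ref{eq:alpha}) reduces for isotropic orientations to $\langle\langle f(\gamma)\rangle\rangle_\text{i}=\tfrac{1}{2}\int_0^\pi f(\gamma)\sin\gamma\,d\gamma$, so that $\alpha$ and $D_\text{eff}$ can be obtained with a few lines of Python; the $(\kappa D,\mathcal{A})$ values below are examples.
\begin{verbatim}
import numpy as np
from scipy.special import expi

def alpha(kD, A, n=20000):
    Ap = A * np.exp(-kD) / kD
    g = (np.arange(n) + 0.5) * np.pi / n     # midpoint grid on (0, pi)
    s = np.abs(np.sin(g))
    # << |sin g| Ei(-A'/|sin g|) >>_iso = (1/2) int |sin g| Ei(...) sin g dg
    avg = 0.5 * np.sum(s * expi(-Ap / s) * np.sin(g)) * np.pi / n
    return np.log(Ap) + 0.5772156649 + np.log(2.0) - 0.5 - 4.0 / np.pi * avg

for kD in (0.1, 1.0):
    for A in (0.05, 1.0, 500.0):
        a = alpha(kD, A)
        print(f"kD={kD:4.1f} A={A:7.2f} alpha={a:7.3f} Deff/D={1 + a / kD:7.2f}")
\end{verbatim}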
\begin{figure*}[!htb]
\includegraphics[width=\linewidth, bb= 0 0 482 170]{contour1.pdf}
\caption{Color plots indicating the value of (a) scaled effective diameter $D_\text{eff}/D$ (see Eq.~\eqref{eq:Deff}) with contours showing $D_\text{eff}/D = 2,4,10$ (dotted, dashed, solid respectively) and (b) effective double-layer thickness parameter $\alpha$ (see Eq.~\eqref{eq:alpha}) with contours showing $\alpha = 0.1,1,4$ (dotted, dashed, solid respectively) as a function of salt concentration $\kappa D$ and charge $\mathcal{A}$, on a log-log scale. Note that the color bar of (a) is in a log scale.}\label{fig:contourDeff}
\end{figure*}
In Fig.~\ref{fig:contourDeff}, we present color plots as a function of $\kappa D$ and $\mathcal{A}$ indicating the value of (a) $D_\text{eff}$ (defined in Eq.~\eqref{eq:Deff}) and (b) of effective double-layer thickness parameter $\alpha$ (defined in Eq.~\eqref{eq:alpha}). We find that $D_\text{eff} \gg D$ (and so $\alpha > 0$) in a well-defined regime of sufficiently high $\mathcal{A}$ and low $\kappa D$, whereas $D_\text{eff} \approx D$ (thus $\alpha \approx 0$) in the complementary region. For example, fd virus is strongly charged ($v_\text{eff} \geq 4$ $e^-/$nm i.e. $\mathcal{A} \geq 500$) with a diameter of $D=6.6$ nm.~\cite{Purdy2004} The effective diameter of fd virus varies from $D_\text{eff}/D \approx 1.0$ at high ionic strength $\kappa D = 10$ (and so $\alpha \approx 0$) to $D_\text{eff}/D \approx 15$ at low ionic strength $\kappa D = 0.1$ (and so $\alpha \approx 1.4$).
\subsection{Twisting effect}\label{sect:twist}
In addition to an increase in the effective diameter, there is a second effect due to electrostatic interactions. This is a ``twisting" effect that is a result of the $|\sin \gamma|^{-1}$ term in the electrostatic potential (Eq.~\eqref{eq:potential}), first noted in Ref.~[\citen{Lekkerkerker1986}]. While the increase in the effective diameter of the rods tends to stabilize the nematic phase, this twisting effect tends to destabilize the nematic phase, pushing the isotropic-nematic phase transition to higher concentrations. This can be qualitatively understood if we consider $E(\gamma)$ in units of $2L^2D_\text{eff}$ as plotted in Fig.~\ref{fig:exclVol}(b), which reveals a strong $\gamma$ dependence for small angles $\gamma$.
In order to describe the twisting effect quantitatively, we follow Ref.~[\citen{Lekkerkerker1986}] and define the parameter
\begin{equation}\label{eq:oldTwist}
h= \frac{1}{\kappa D_\text{eff}},
\end{equation}
such that Eq.~(\ref{eq:ExclVol}) can be rewritten as
\begin{align}
\label{eq:exclVolNem}
E(\gamma) &= 2L^2 D_\text{eff} \, |\sin \gamma| \nonumber \\
%
&\times \left\{ \vphantom{\frac{A'}{|\sin \gamma|}}\right. 1+h \left[\vphantom{\frac{A'}{|\sin \gamma|}}\right. -\text{ln}\, |\sin \gamma|
- \ln2 +\frac{1}{2} -\text{Ei}\left( -\frac{A'}{|\sin \gamma|} \right) \nonumber \\
&\qquad+\frac{4}{\pi} \langle \langle |\sin \gamma| \,\text{Ei}\left( -\frac{A'}{|\sin \gamma|} \right) \rangle \rangle_\text{i} \left.\vphantom{\frac{A'}{|\sin \gamma|}} \right] \left.\vphantom{\frac{A'}{|\sin \gamma|}} \right\}.
\end{align}
In the regime where $A' \gtrsim 2$, both the Ei term and its double orientational average term in Eq.~(\ref{eq:exclVolNem}) essentially vanish. We see that in this regime only the parameter $h$ controls the magnitude of the twisting effect, and hence $h(\kappa D,A')$ and $D_\text{eff}(\kappa D,A')$ completely determine the system's phase behavior. However, for weakly charged rods at a low salt concentration, $A'$ can be small and the exponential integral terms in Eq.~(\ref{eq:exclVolNem}) can become important. In this case, the twisting effect not only depends on the combination $h(\kappa D,A')$, but also on $A'$ separately. Nevertheless, also in this regime it would be convenient to have a single parameter that characterizes the deviation of Eq.~(\ref{eq:exclVolNem}) from an effective hard rod-like excluded volume, $2 L^2 D_\text{eff} |\sin \gamma|$. Therefore, we define a new twisting parameter
\begin{align}\label{eq:newTwist}
H = & \frac{1}{h k}\int_0^\pi d\gamma \, \left[ \frac{E(\gamma)}{2L^2D_\text{eff} |\sin \gamma|} - 1 \right]^2\nonumber \\
=& \frac{h}{k}\int_0^\pi d\gamma \, \left[ \vphantom{\frac{A'}{|\sin \gamma|}}\right. -\text{ln}\, |\sin \gamma| - \ln2 +\frac{1}{2} \\
&\left. -\text{Ei}\left( -\frac{A'}{|\sin \gamma|} \right) +\frac{4}{\pi} \langle \langle |\sin \gamma| \,\text{Ei}\left( -\frac{A'}{|\sin \gamma|} \right) \rangle \rangle_\text{i} \right]^2 \nonumber,
\end{align}
where $k$ is a normalization factor, chosen to be
\begin{align}
k &= \int_0^\pi\left[-\ln |\sin \gamma| - \ln2 +\frac{1}{2}\right]^2 \, d\gamma \nonumber \\
&= \frac{\pi}{12}(3+\pi^2),
\end{align}
such that $H$ reduces to $h$ when $A' \gtrsim 2$.
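The following self-contained sketch (for illustration only) evaluates $h$ and $H$ for two example state points using the analytic normalization $k=\tfrac{\pi}{12}(3+\pi^2)$; it illustrates that $H\approx h$ once $A'\gtrsim 2$, while the two differ appreciably for weakly charged rods.
\begin{verbatim}
import numpy as np
from scipy.special import expi

gE, k_norm = 0.5772156649, np.pi / 12 * (3 + np.pi**2)   # analytic k

def twist_parameters(kD, A, n=20000):
    Ap = A * np.exp(-kD) / kD
    g = (np.arange(n) + 0.5) * np.pi / n
    s = np.abs(np.sin(g))
    avg = 0.5 * np.sum(s * expi(-Ap / s) * np.sin(g)) * np.pi / n
    alpha = np.log(Ap) + gE + np.log(2.0) - 0.5 - 4.0 / np.pi * avg
    h = 1.0 / (kD * (1.0 + alpha / kD))                  # h = 1/(kappa D_eff)
    dev = h * (-np.log(s) - np.log(2.0) + 0.5 - expi(-Ap / s) + 4.0 / np.pi * avg)
    H = np.sum(dev**2) * np.pi / n / (h * k_norm)        # Eq. (newTwist)
    return h, H

print(twist_parameters(0.2, 1000.0))   # A' >> 2: H is close to h
print(twist_parameters(0.2, 0.05))     # A' <~ 2: H and h differ appreciably
\end{verbatim}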
\begin{figure*}[!htb]
\includegraphics[width=\linewidth, bb=0 0 482 170]{contour2.pdf}
\caption{Color plots indicating the value of (a) new twisting parameter $H$ (Eq.~\eqref{eq:newTwist}) with contours showing $H = 0.2,0.5,1$ (dotted, dashed, solid respectively) and (b) old twisting parameter $h$ (Eq.~\eqref{eq:oldTwist}) with contours showing $h = 0.2,0.5,1$ (dotted, dashed, solid respectively) as a function of salt concentration $\kappa D$ and charge $\mathcal{A}$, on a log-log scale. The triangles in (a) indicate the locations of the isotropic-nematic-nematic triple points found in the phase diagrams discussed in the following section. Note that the color bar of (b) is in a log scale.}\label{fig:contourTwist}
\end{figure*}
In Fig.~\ref{fig:contourTwist}, we show the dependence of (a) the new twisting parameter $H$ (defined in Eq.~\eqref{eq:newTwist}) and (b) the old twisting parameter $h$ (defined in Eq.~\eqref{eq:oldTwist}) on $\kappa D$ and $\mathcal{A}$. We see that the shapes of $H$ and $h$ differ but that they agree in the upper left corner where $A'= \mathcal{A} e^{-\kappa D}/(\kappa D) \gtrsim 2$. When $A' \lesssim 2$, $h$ increases but in this parameter regime it is no longer physically relevant; interestingly, Fig.~\ref{fig:contourTwist}(a) shows that at fixed $\kappa D \lesssim 1$, the new twist parameter $H$ goes through a maximum as a function of $\mathcal{A}$ at some $\mathcal{A} \lesssim 10^{-1}$, which implies that a low (but non-zero) charge on the rods gives the strongest twisting effect.
In the following section we study the effect of twisting on the isotropic-nematic phase transition in charged rods. In Sec.~\ref{sect:ElasticConsts}, we compute the Frank elastic constants in order to see how they are influenced by the twisting effect.
\section{Phase diagrams}\label{sect:PhaseDiagrams}
The concentrations of the coexisting isotropic and nematic phase, $c_i$ and $c_n$ respectively, can be found using the condition that the osmotic pressures $\Pi = -\left( \partial F/\partial V \right)_{N,T}$ and chemical potentials $\mu = \left( \partial F/\partial N \right)_{V,T}$ satisfy
\begin{align}
\Pi^\text{iso}(c_i) &= \Pi^\text{nem}(c_n) \\
\mu^\text{iso}(c_i) &= \mu^\text{nem}(c_n).
\end{align}
We introduce the dimensionless effective concentration
\begin{equation}
c_\text{eff} = \frac{\pi}{4} \frac{N}{V} L^2 D_\text{eff},
\label{eq:density}
\end{equation}
which we use rather than the usual dimensionless concentration $c=(\pi/4)(N/V)L^2D$ in order to show how twisting affects the phase behavior of charged rods.
In Fig.~\ref{fig:phaseDiagramsCharge}, we show phase diagrams in the ($c_\text{eff}$, $\mathcal{A}$) plane for (a) $\kappa D = 0.3$, (b) $\kappa D = 0.2$, and (c) $\kappa D = 0.1$, where the horizontal tie-lines connect coexisting states and the color coding represents the nematic order parameter $S= \langle (3\cos^2\theta -1)/2 \rangle$. A first glance reveals a very rich phase diagram with isotropic-nematic and nematic-nematic coexistence, including triple points and critical points. In all three phase diagrams, we see that at $\mathcal{A}=0$ (zero charge) the expected phase transition for uncharged rods occurs, with the isotropic phase (I) existing at low concentrations, the nematic phase (N) at high concentrations, and phase coexistence between I and N in the region between $c_\text{eff}=3.29$ and $c_\text{eff}=4.19$. As we increase the charge, the twisting parameter increases and destabilizes the nematic phase, so that the I-N phase transition moves to higher effective concentrations $c_\text{eff}$. At this point, it is good to note that the definition of $c_\text{eff}$ given in Eq.~(\ref{eq:density}), involves the \emph{effective} diameter, which, as shown in Fig.~\ref{fig:contourDeff}(a), increases with increasing $\mathcal{A}$. If we were to use the concentration $c$ instead of the effective concentration, the I-N phase transition would move to \emph{lower} concentrations. We will return to this point below.
We limit the phase diagrams of Fig.~\ref{fig:phaseDiagramsCharge} to low charge, where the twisting effect is important. However, as $\mathcal{A} \to \infty$ (at fixed $\kappa D$ this corresponds to $h \to 0$), we also find a hard rod-like I-N transition, in agreement with Ref.~[\citen{Lekkerkerker1986}]. Next to each phase diagram, we show the $\mathcal{A}$-dependence of the twisting parameter $H$, the scaled effective diameter $D_\text{eff}/D$ (which is equal to the ratio $c_\text{eff}/c$), and the zeta-potential $\zeta$, i.e. the electrostatic potential on the surface of the rod as obtained from the Poisson-Boltzmann equation in a cylindrical cell (see Appendix \ref{sect:PB}).
In Fig.~\ref{fig:phaseDiagramsCharge}(b), we see that the twisting effect is large enough to cause the nematic phase to split into a low density nematic N$_1$ and a higher density, more aligned nematic phase N$_2$. The phase diagram features a nematic-nematic (N$_1$-N$_2$) critical point and an isotropic-nematic-nematic (I-N$_1$-N$_2$) triple point. Finally, in Fig.~\ref{fig:phaseDiagramsCharge}(c), we have lowered $\kappa D$ further and we see again a triple point, and a larger region of N$_1$-N$_2$ phase coexistence, the critical point of which is outside the plotted range.
\begin{figure*}[htp]
\includegraphics[width=\linewidth, bb=0 0 482 570]{phaseDiagram1.pdf}
\caption{Phase diagrams in the density $c_\text{eff}$ (Eq.~\eqref{eq:density})-charge $\mathcal{A}$ (Eq.~\eqref{eq:charge}) representation for (a) $\kappa D = 0.3$, (b) $\kappa D = 0.2$, and (c) $\kappa D = 0.1$, with colors showing the nematic order parameter $S$, I denoting the stable isotropic phase, N the stable nematic, N$_1$ the weakly aligned nematic, and N$_2$ the strongly aligned nematic phase. The N$_1$-N$_2$ critical point is denoted by an asterisk and the three coexisting phases at the triple point are denoted by black dots. The tie-lines that connect coexisting phases are horizontal. Next to each phase diagram, the dependence of the twisting parameter $H$ (Eq.~\eqref{eq:newTwist}) and scaled effective diameter $D_\text{eff}/D$ (Eq.~\eqref{eq:Deff}) on $\mathcal{A}$ is shown. In the fourth column, the dependence of the zeta-potential $\zeta$ on $\mathcal{A}$ (in units of millivolts) is shown for diameter-Bjerrum length ratio $D/\lambda_B = 10,1,0.2$ (purple, blue, green or left to right).}
\label{fig:phaseDiagramsCharge}
\end{figure*}
In Fig.~\ref{fig:phaseDiagramsKD}, we show three phase diagrams in the ($c_\text{eff}$, $\kappa D$) plane for fixed charges characterized by (a) $\mathcal{A}=0.08$, (b) $\mathcal{A}=0.03$, and (c) $\mathcal{A}=0.01$. As in Fig.~\ref{fig:phaseDiagramsCharge}, we include colors showing the nematic order parameter $S$ and we plot the $\kappa D$-dependence of the twisting parameter $H$, the scaled effective diameter $D_\text{eff}/D$, and the zeta-potential $\zeta$ next to each phase diagram. Note that the effective diameter increases with decreasing $\kappa D$. We find again that the nematic phase can split into a weakly and strongly aligned nematic, when the twisting parameter $H$ is of order unity (see $H(\mathcal{A},\kappa D)$ in second columns of Figs.~\ref{fig:phaseDiagramsCharge}-\ref{fig:phaseDiagramsKD} and also triangles in Fig.~\ref{fig:contourTwist}(a) indicating locations of triple points from Figs.~\ref{fig:phaseDiagramsCharge}-\ref{fig:phaseDiagramsKD}). Given the rather arbitrary definition of $H$ (Eq.~\eqref{eq:newTwist}), one should not expect the location of the triple points to coincide exactly with the ridge of $H$ in Fig.~\ref{fig:contourTwist}(a). We stress that the phase behavior is determined by ($\mathcal{A}$, $\kappa D$) and not by the single parameter $H$.
\begin{figure*}[htp]
\includegraphics[width=\linewidth, bb= 0 0 482 570]{phaseDiagram2.pdf}
\caption{Phase diagrams in the density $c_\text{eff}$-salt concentration $\kappa D$ representation for (a) $\mathcal{A} = 0.08$, (b) $\mathcal{A} = 0.03$ and (c) $\mathcal{A} = 0.01$ (see the caption of Fig.~\ref{fig:phaseDiagramsCharge} for explanation of regions and parameters), with colors showing the nematic order parameter $S$. Next to each phase diagram, the dependence of the twisting parameter $H$ and scaled effective diameter $D_\text{eff}/D$ on $\kappa D$ is shown. In the fourth column, the dependence of the zeta-potential $\zeta$ on $\kappa D$ (in units of millivolts) is shown for diameter-Bjerrum length ratio $D/\lambda_B = 10,1,0.2$ (purple, blue, green or left to right).}
\label{fig:phaseDiagramsKD}
\end{figure*}
The phase diagrams from Figs.~\ref{fig:phaseDiagramsCharge}(b) and \ref{fig:phaseDiagramsKD}(b) are shown using the usual dimensionless rod concentration $c$ in Figs.~\ref{fig:PDConcentration}(a) and \ref{fig:PDConcentration}(b) to clarify the distinction between concentration and effective concentration discussed above. This representation makes explicit the lowering of the I-N transition densities with increasing charge and decreasing salt.
\begin{figure*}[htb]
\includegraphics[width=\linewidth, bb= 0 0 482 170]{phaseDiagram3.pdf}
\caption{Phase diagram in the (a) density $c$-charge $\mathcal{A}$ (Eq.~\eqref{eq:charge}) representation for $\kappa D = 0.2$ (compare to Fig.~\ref{fig:phaseDiagramsCharge}(b)) and (b) density $c$-salt concentration $\kappa D$ representation for $\mathcal{A} = 0.03$ (compare to Fig.~\ref{fig:phaseDiagramsKD}(b)), with colors showing the nematic order parameter $S$. The rod concentration used is $c= \frac{\pi}{4} \frac{N}{V} L^2 D$. See the caption of Fig.~\ref{fig:phaseDiagramsCharge} for explanation of regions and parameters.}\label{fig:PDConcentration}
\end{figure*}
In order to shed light on the microscopic origin of the charge-induced nematic-nematic demixing, we show in Fig.~\ref{fig:triplePt}(a) the orientation distribution functions for the two nematic phases at the triple point of Fig.~\ref{fig:phaseDiagramsCharge}(b) as a function of polar angle $\theta$. We can relate the existence of two nematic phases to the shape of the ``excluded volume" $E(\gamma)$ as a function of $\gamma$, the angle between two rods as shown in Fig.~\ref{fig:triplePt}(b). We can characterize the shape of $E(\gamma)$ by introducing a cross-over angle $\gamma^*$ which we give the ad hoc definition $dE(\gamma^*)/d\gamma= 4L^2D_\text{eff}$ which approximately separates $E(\gamma)$ (for small $\gamma$) into a steep part for $\gamma<\gamma^*$ and a roughly linear part for $\gamma>\gamma^*$. Two rods with polar angles $\theta$ and $\theta'$ in the less aligned nematic phase can sample a larger range of $E(\gamma)$ (note that $\gamma(\theta,\theta',\varphi-\varphi') \in [ 0,\theta+\theta']$) and often can have an angle $\gamma$ larger than $\gamma^*$. A pair of rods in the more aligned nematic phase, however, rarely has an angle $\gamma$ larger than $\gamma^*$. Therefore we can understand the appearance of the denser nematic phase as a ``condensation" in the pocket $0<\gamma<\gamma^*$; the associated loss of orientational entropy is more than compensated for by the large reduction in the excluded volume.
\begin{figure*}[ht]
\includegraphics[width=\linewidth, bb= 0 0 482 170]{tripPt.pdf}
\caption{(a) Orientation distribution functions $\psi(\theta)$ as a function of the polar angle $\theta$ for the coexisting nematic phases at the I-N$_1$-N$_2$ triple point with $\kappa D=0.2$, $\mathcal{A}=0.0688$ (see Fig.~\ref{fig:phaseDiagramsCharge}(b)). The low density nematic phase has effective concentration $c_\text{eff}=8.26$, nematic order parameter $S=0.92$, and typical angle $\sqrt{\langle \theta^2_1 \rangle} = 0.062$. The high density nematic phase has effective concentration $c_\text{eff}=10.91$, nematic order parameter $S=0.99$, and typical angle $\sqrt{\langle \theta^2_2 \rangle} = 0.0041$. The vertical dashed line denotes the cross-over angle $\gamma^*$ that separates the ``excluded volume" $E(\gamma)$ into a steep and an essentially linear regime for $\gamma < \gamma^*$ and $\gamma > \gamma^*$, respectively. (b)~The ``excluded volume" scaled by effective volume factor $2L^2D_\text{eff}$ as a function of the angle $\gamma$ between two rods, for uncharged rods ($\mathcal{A} = 0$) and for the triple point parameters of Fig.~\ref{fig:phaseDiagramsCharge}(b), with the cross-over angle $\gamma^*$ (see text).}\label{fig:triplePt}
\end{figure*}
\section{Relation to experimental systems}\label{sect:experiment}
In this section, we investigate the possibility of seeing the charge-induced nematic-nematic demixing experimentally. In order for our approximations to be reliable, we require a system with $D\ll L$, $\kappa^{-1} \ll L$, and with reasonably rigid particles. We should also keep in mind that at higher densities, experimental systems of rodlike colloids undergo a nematic-smectic phase transition. For hard spherocylinders with diameter $D_\text{eff}$ and aspect-ratio $L/D_\text{eff} > 5$ this occurs at a density approximately 47\% of the close-packed density,\cite{Bolhuis1997} which gives $c_\text{eff}=3.94$, $10.32$, $20.97$, and $42.28$ for $L/D_\text{eff} = 10$, $25$, $50$, and $100$ respectively. In other words, for sufficiently long rods the smectic phase occurs far beyond the isotropic-nematic transition. For shorter rods, or rods with a smaller effective aspect ratio, $L/D_\text{eff} \sim 4-5$, the nematic regime is small and direct isotropic-smectic transitions are to be expected.
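These numbers follow from the close-packed density of hard spherocylinders, $\rho_\text{cp}D_\text{eff}^3=2/(\sqrt{2}+\sqrt{3}\,L/D_\text{eff})$, a standard result that we assume here; the short check below (an illustrative sketch) reproduces the quoted values.
\begin{verbatim}
import numpy as np

for LD in (10, 25, 50, 100):                 # aspect ratio L / D_eff
    c_cp = (np.pi / 2) * LD**2 / (np.sqrt(2) + LD * np.sqrt(3))
    print(f"L/D_eff = {LD:4d}: c_eff at the N-Sm transition ~ {0.47 * c_cp:6.2f}")
\end{verbatim}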
If we look at the phase diagram with fixed $\kappa D =0.2$ (Fig.~\ref{fig:phaseDiagramsCharge}(b)) for instance, we see nematic-nematic coexistence at around $\mathcal{A} = 0.05$. In order for $D, \kappa^{-1} \ll L$ we could look at a system with $D=1$ nm, $\kappa^{-1}=5$ nm and $L \sim 100 - 1000$ nm. In water ($\lambda_B=0.7$ nm) this gives a charge density of about $v_\text{eff} = 0.11$ $e^-/$nm or surface potential $\zeta = 9.3$ mV (see Appendix \ref{sect:PB}), while in oil ($\lambda_B=8$ nm), we would need $v_\text{eff} = 0.032$ $e^-/$nm or $\zeta = 31$ mV. Similarly, for $D=5$ nm and $\kappa^{-1}=25$ nm, we would need a charge density of about $v_\text{eff} = 0.048$ $e^-/$nm or $\zeta = 4.1$ mV in water or $v_\text{eff} = 0.014$ $e^-/$nm or $\zeta = 14$ mV in oil.
Compared to fd-virus or tobacco mosaic virus, these are very low charge densities and zeta-potentials. For example, fd virus with a length of $L=880$ nm, diameter $D=6.6$ nm, and a persistence length of 2200 nm, has about $7-10$ $e^-$/nm at room temperature with solution pH around neutral.~\cite{Tang1995, Purdy2004} For such a high charge density, the twisting effect is small ($h \lesssim 0.15$) and also not very sensitive to ionic concentration.~\cite{Tang1995} Similarly, the more rigid tobacco mosaic virus with length $L=300$ nm and diameter $D=18$ nm is very highly charged around neutral pH, with about $7-14$ $e^-$/nm.~\cite{Scheele1967,Fraden1989}
Colloidal silica rods are another interesting model system as they are both monodisperse and rigid, but they have lower aspect ratios ($L/D \lesssim 22$) and bigger diameters ($D \gtrsim 200$ nm), making it hard to meet the conditions of small $\kappa D$ and still have $\kappa^{-1} \ll L$.~\cite{Kuijk2014, Liu2014} In Ref.~[\citen{Liu2014}], for instance, while $\kappa D \approx 0.1$, the surface potential is quite large ($\zeta \approx 70$ mV) and since the aspect ratio is low ($L/D \lesssim 5.6$) the silica rods form a plastic crystal phase rather than a nematic phase.
However, chemical modifications of fd can change its isoelectric point to be around a pH of 10, making it possible to tune the surface charge to arbitrarily small values.~\cite{Zhang2010} Ideally, a modification of fd would be found with a slightly lower isoelectric point than pH 10, such that $\kappa^{-1}$ would not be too small. Also, some polymers are rigid, have small enough diameters, and are weakly charged enough to fall in the regime of large twisting. One such a candidate is cellulose nanofibrils dispersed in water, since the surface charge density of the fibrils can be decreased to zero by lowering the pH.\cite{Fall2011} So although some degree of tuning is needed, the predicted nematic-nematic transition seems to occur in an accessible parameter regime. An issue to consider, however, is the stability with respect to irreversible aggregation due to dispersion forces; the required low charge on the rods may not be able to balance strong Van der Waals forces so some degree of index matching may be needed.
\section{Frank elastic constants}\label{sect:ElasticConsts}
The strong twisting effect that we identified in the low-salt and low-charge regime raises the question to what extent the uniaxial nematic phase is actually stable with respect to spontaneous twist deformations. In general the stability of bulk nematics with respect to (weak) mechanical deformations is characterized by the Frank elastic constants $K_1$ (for splaying), $K_2$ (for twisting), and $K_3$ (for bending).~\cite{deGennes, Straley1973} Mechanical stability requires all three elastic constants to be positive. In this section, we check whether or not the strong twisting effect can affect the sign of $K_i$ ($i=1,2,3$) with a focus on the twist constant $K_2$. We derive an expression for the Frank elastic constants similar to the one derived by Vroege and Odijk,~\cite{Vroege1987} which is based on the derivation for uncharged rods by Straley.~\cite{Straley1973}
In a distorted liquid crystal, the locally preferred orientation (i.e. the local nematic director) is given by $\hat{n}(\vect{r})$, where we assume that this director varies slowly in space. The relative probability of a particle at position $\vect{r}$ having orientation $\hat{\omega}$ is given by the locally evaluated bulk orientation distribution function $\psi(\hat{\omega} \cdot \hat{n}(\vect{r}))$. The excess free energy due to a director field distortion, up to second order in the gradients of $\hat{n}(\vect{r})$, is given in terms of the Frank elastic constants by~\cite{Straley1973}
\begin{align}\label{eq:defOfKi}
\Delta F_d = \frac{1}{2} \int \, d \vect{r} &\left\{ K_{1} \, \left[\nabla \cdot \hat{n}(\vect{r}) \right]^2 +K_{2}\,\left[\hat{n}(\vect{r}) \cdot \nabla \times \hat{n}(\vect{r}) \right]^2 \right. \nonumber\\
&\quad \left. +K_{3}\,\left[\hat{n}(\vect{r}) \times \nabla \times \hat{n}(\vect{r})\right]^2 \right\}.
\end{align}
In Appendix \ref{sect:elastConstAppendix}, we show that within second-virial theory the Frank elastic constants are given by~\cite{Vroege1987,Odijk}
\begin{align}
\label{eq:elasticConstants}
\beta K_i D_\text{eff} = -\frac{4c_\text{eff}^2}{3\pi^2} \iint & d\hat{\omega} \, d\hat{\omega}' \, \left\{ \psi'(\hat{\omega} \cdot \hat{n}) \psi'(\hat{\omega}' \cdot \hat{n} ) \vphantom{\frac{E(\gamma)}{2 L^2 D_\text{eff}}} \right.\nonumber\\
& \left. \times \frac{E(\gamma)}{2 L^2 D_\text{eff}} \, F_i \right\},
\end{align}
where $\psi'(\hat{\omega} \cdot \hat{n})$ is a derivative of $\psi$ with respect to its argument and $F_i$ can be written in terms of local polar and azimuthal angles $\theta$ and $\phi$ as~\cite{Odijk}
\begin{align}
\label{eq:Fi}
&\text{Twist}: \quad &F_2 &=\frac{1}{4} \sin^3\theta \sin \theta' \cos(\phi-\phi') \nonumber \\
&\text{Bend}: \quad &F_3 &= \cos^2\theta \sin \theta \sin \theta' \cos(\phi - \phi')\nonumber \\
&\text{Splay}: \quad &F_1& = 3 F_2.
\end{align}
\begin{figure*}[htb]
\includegraphics[width=\linewidth, bb=0 0 482 170]{phaseDiagramK2.pdf}
\caption{Phase diagram in the density-charge $\mathcal{A}$ (Eq.~\eqref{eq:charge}) representation for $\kappa D = 0.2$ (see the caption of Fig.~\ref{fig:phaseDiagramsCharge} for explanation of regions) using (a) effective density $c_\text{eff}= \frac{\pi}{4} \frac{N}{V} L^2 D_\text{eff}$ and with colors showing dimensionless twist elastic constant $\beta K_2 D_\text{eff}$ and (b) usual dimensionless density $c= \frac{\pi}{4} \frac{N}{V} L^2 D$ and with colors showing dimensionless twist elastic constant $\beta K_2 D$.}\label{fig:K2}
\end{figure*}
\begin{figure}[htb]
\includegraphics[width=\linewidth, bb=0 0 241 170]{K2FixedC.pdf}
\caption{Dependence of twist elastic constant $K_2$, scaled by $\beta D_\text{eff}$, on the colloidal charge parameter $\mathcal{A}$ for screening constant $\kappa D = 0.2$ and effective concentration $c_\text{eff} = 5, 10, 15, 20, 25$, from bottom to top. The dotted lines represent regions in the phase diagram (Fig.~\ref{fig:phaseDiagramsCharge}(b)) for which $c_\text{eff}$ is in the two-phase coexistence gap.}\label{fig:KVsA2}
\end{figure}
In Fig.~\ref{fig:K2}, we again show the phase diagram for $\kappa D = 0.2$ in (a) the ($c_\text{eff}$, $\mathcal{A}$) representation (see Fig.~\ref{fig:phaseDiagramsCharge}(b)) and (b) the ($c$, $\mathcal{A}$) representation (see Fig.~\ref{fig:PDConcentration}(a)), with colors now showing the twist elastic constant $K_2$ scaled in (a) by $\beta D_\text{eff}$ and (b) by $\beta D$. Note that $K_2$ is positive throughout the nematic part of the phase diagram. In Fig.~\ref{fig:KVsA2}, we show the twist elastic constant's dependence on the charge $\mathcal{A}$, for $\kappa D = 0.2$ and fixed values of effective concentration $c_\text{eff}$, which correspond to vertical lines in the phase diagram of Fig.~\ref{fig:K2}(a). Here we see that the twist elastic constant has a hard-rod value for uncharged rods ($\mathcal{A}=0$), decreases for small $\mathcal{A}$ (as the twisting parameter increases), and finally increases slowly back to the hard-rod value as $\mathcal{A} \to \infty$. We see that the minimum in Fig.~\ref{fig:KVsA2} changes position slightly for different values of $c_\text{eff}$. This is because $K_2$ depends not only on the twisting effect, but also on the nematic order parameter (see Fig.~\ref{fig:phaseDiagramsCharge}(b)), which first decreases and then increases with increasing $\mathcal{A}$. In addition, we calculated the bend elastic constant $K_3$ (not shown), which has a much stronger dependence on the nematic order parameter than $K_2$ does; however, it is never decreased by the twisting effect. So for all parameters $\kappa D$, $\mathcal{A}$ and all nematic concentrations, we find positive Frank elastic constants.
\section{Finite aspect-ratio charged colloidal rods}\label{sect:Simone}
In this section we investigate if spontaneous chiral symmetry breaking can occur when we consider rods of finite aspect ratio. For this purpose, we apply the recently developed second-virial density functional theory for cholesteric phases~\cite{Belli,Dussi} to a simple model of uniaxially-charged colloidal rods. This theory allows us to compute numerically the free energy $F$ as a function of the wavenumber $q$ of the chiral twist, for a given thermodynamic state of the system (e.g. at given temperature and density). We can therefore distinguish between a stable achiral nematic phase, for which the minimum of $F(q)$ is at $q=0$, a stable cholesteric phase, for which the minimum of $F(q)$ is at $q^* \neq 0$, and a spontaneous breaking of the chiral symmetry, for which the minimum of $F(q)$ is at $\pm q^* \neq 0$. As stated before, finding $K_2 \propto \frac{d^2 F(q)}{d q^2 } |_{q=0}<0$ would also be an indication that the system exhibits a spontaneous chiral symmetry breaking.
In analogy with Ref.~[\citen{Eggen}], the colloids are modeled as hard spherocylinders (HSC) of diameter $D$ and length $L$. The total charge on the rods $Z$ is fixed by embedding $N_s$ spheres interacting via a hard-core Yukawa potential (HY). The $N_s$ spheres (with $N_s$ odd) are evenly distributed along the backbone of the rod: they are separated by a distance $\delta=\frac{L}{N_s -1}$ such that two spheres are always at the extremities of the cylindrical part of the spherocylinder. The total pair potential between two charged rods is therefore
$$
U_{12} (\mathbf{r},\mathbf{\hat{\omega}},\mathbf{\hat{\omega}'}) = U_{HSC} (\mathbf{r},\mathbf{\hat{\omega}},\mathbf{\hat{\omega}'}) + \sum_{i=1}^{N_s} \sum_{j=1}^{N_s} U_{HY} (r_{ij}),
$$
where $U_{HSC}$ is the hard-core potential between spherocylinders,
$$
\beta U_{HSC} (\mathbf{r},\mathbf{\hat{\omega}},\mathbf{\hat{\omega}'})= \left\{
\begin{array}{ll}
\infty & \,\, d_\text{min}(\mathbf{r},\mathbf{\hat{\omega}},\mathbf{\hat{\omega}'}) \leq D \\
0 & \,\, d_\text{min}(\mathbf{r},\mathbf{\hat{\omega}},\mathbf{\hat{\omega}'}) > D
\end{array} \right. ,
$$
with $d_\text{min}(\mathbf{r},\mathbf{\hat{\omega}},\mathbf{\hat{\omega}'})$ the minimum distance between two HSCs with center-of-mass separation $\mathbf{r}$ and orientations $\mathbf{\hat{\omega}},\mathbf{\hat{\omega}'}$. The sphere-sphere interaction is described by a (truncated) hard-core Yukawa potential
\begin{equation}
\beta U_{HY} (r_{ij})= \left\{
\begin{array}{ll}
\infty & \,\, r_{ij} < D \\
\beta \epsilon \frac{\exp \left[ -\kappa D (r_{ij}/D - 1) \right]}{r_{ij}/D} & \,\, D \leq r_{ij}<r_\text{cut} \\
0 & \,\, r_{ij} \geq r_\text{cut}
\end{array} \right. ,
\end{equation}
where $i,j$ indicates spheres belonging to rods $1,2$ respectively. The parameters $\beta \epsilon$ and $N_s$ are related by $\beta \epsilon={\left(\frac{Z}{N_s}\right)}^2$, so $N_s$ is simply a parameter that can be varied until convergence to the continuum limit is reached. As previously shown,~\cite{Eggen} this model with $N_s \geq 13$ is in excellent agreement with analytic results for the excluded volume of finite aspect-ratio rods with an effective linear charge distribution. Accordingly, we choose $N_s = 15$, which should guarantee a good agreement between the discrete-sphere and the linear-charge model. In the numerical integration we use a cutoff $r_\text{cut} \sim (1-2) $ $L$. The aspect ratio $L/D$, the total charge on the rod $Z$ and the inverse of Debye screening length $\kappa D$ are the independent physical parameters. Our approach~\cite{Belli,Dussi} relies on the numerical calculation of the excluded volume for a set of values of the chiral wavenumber $q$. Such a $q$-dependent excluded volume is calculated by performing a Monte Carlo (MC) integration using a large number of configurations and it is then used as input to calculate the free energy as a function of the chiral wavenumber $F(q)$.
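For concreteness, the discrete-sphere electrostatic part of $U_{12}$ can be evaluated as in the Python sketch below (an illustration, not the code used for the MC integration of the excluded volume); the geometry, charge, and cut-off are example values, and the hard-core overlap test between the spherocylinders is omitted.
\begin{verbatim}
import numpy as np

L, D, kD, Z, Ns, rcut = 10.0, 1.0, 0.2, 1.0, 15, 20.0    # example parameters
eps = (Z / Ns) ** 2                                      # beta*epsilon as above

def sphere_centres(r_cm, omega):
    s = np.linspace(-L / 2, L / 2, Ns)      # spacing delta = L/(Ns - 1)
    return r_cm[None, :] + s[:, None] * omega[None, :]

def beta_U_yukawa(r1, w1, r2, w2):
    p1, p2 = sphere_centres(r1, w1), sphere_centres(r2, w2)
    rij = np.linalg.norm(p1[:, None, :] - p2[None, :, :], axis=-1)
    x = rij / D
    u = np.where((rij >= D) & (rij < rcut),
                 eps * np.exp(-kD * (x - 1.0)) / x, 0.0)
    return u.sum()

w1 = np.array([0.0, 0.0, 1.0])
w2 = np.array([0.0, np.sin(0.3), np.cos(0.3)])           # nearly parallel rods
print(beta_U_yukawa(np.zeros(3), w1, np.array([2.0, 0.0, 0.0]), w2))
\end{verbatim}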
We investigate a few combinations of aspect ratio ($L/D$) and total charge on the rods ($Z$), with fixed screening parameter $\kappa D =0.2$, as reported in Fig.~\ref{fig:sim}. In Fig.~\ref{fig:sim}(a), we show the free-energy difference $\Delta F (q)=F(q)-F(q=0)$ as a function of chiral wavenumber $q$, for $L/D=10$, $Z = 1.0$ (corresponding to $\mathcal{A} = 0.034 $), and two different packing fractions $\eta = 0.28, \, 0.32$. In some cases, we employ different $q$-grids to check that our results are consistent. However, within our numerical accuracy no evidence of a double minimum at $q= \pm q^*\neq0$ has been observed for the entire set of parameters studied. From the second-derivative of $\Delta F (q)$ it is possible to calculate $K_2$ as a function of packing fraction $\eta$, as shown in Fig.~\ref{fig:sim}(b) for $L/D=10$ and two values of total charge $Z=0.05$ ($\mathcal{A} = 8.5 \times 10^{-5}$) and $Z=1.0$ ($\mathcal{A} = 0.034$). We see that $K_2$ increases with packing fraction and that the numerical uncertainty increases with packing fraction. In Fig.~\ref{fig:sim}(c), we show the twist elastic constant as a function of packing fraction $\eta$ for aspect ratios $L/D = 40$, $20$, $10$, $5$ and different values of total charge $Z$ on the rods. Due to the large numerical uncertainties at large packing fraction, quantitative conclusions about the actual dependence of the twist elastic constant $K_2$ on charge should be drawn carefully. However, as mentioned before, there are no indications that $K_2$ becomes negative. In addition, we show results from the previous section (i.e. Fig.~\ref{fig:K2}(b)) for total charge on the rods $Z=0$ ($\mathcal{A}=0$) and $Z=1.2$ ($\mathcal{A}= 0.051$) for aspect ratios $L/D = 40$, $20$, $10$ (the dashed curves in Fig.~\ref{fig:sim}(c)). We see that the general trend of $K_2$ is similar to that of the MC results for the largest aspect ratio $L/D=40$, but as expected, Onsager theory becomes less accurate as the aspect ratio becomes smaller. In conclusion, just as in the case of infinite rods, we do not find any evidence that a linear charge distribution can induce a spontaneous chiral symmetry breaking in colloidal rods of finite length.
\begin{figure*}
\includegraphics[width=\linewidth, bb=0 0 482 256]{K2Comparison.pdf}
\caption{
(a) Free-energy difference $\Delta F(q) =F(q)-F(q=0)$ as a function of chiral wavenumber $q D$ for two different packing fractions $\eta$, and for rods with aspect ratio $L/D=10$, total charge on rods $Z=1.0$ (corresponding to $\mathcal{A}=0.034$) divided over $N_s=15$ spheres, screening parameter $\kappa D=0.2$, and cut-off $r_\text{cut}/D=20$. The error bars are calculated by averaging over 10 independent runs of $10^{10}$ MC steps. (b) Twist elastic constant $\beta K_2 D$ calculated from second derivative of $F(q)$ as a function of packing fraction $\eta$ for $Z=0.05$ (corresponding to $\mathcal{A}=8.5 \times 10^{-5}$) and $Z=1.0$ (corresponding to $\mathcal{A}=0.034$) (for the same $N_s$, $r_\text{cut}$, and $\kappa D$ as in (a)). (c) Twist elastic constant $\beta K_2 D$ as a function of packing fraction $\eta$ for fixed screening parameter $\kappa D = 0.2$, with different aspect ratios $L/D$, and with different total charges $Z$. The solid lines are results from the MC method and the dashed lines are results from theory for $Z=0$ ($\mathcal{A}=0$) and $Z=1.2$ ($\mathcal{A}=0.051$), which are shown for aspect ratios $L/D=40$, $20$, $10$ ($L/D=5$ is outside of the plotted range).}
\label{fig:sim}
\end{figure*}
\section{Summary and discussion}\label{sect:conclusion}
In this paper, we constructed phase diagrams for charged rods within the second-virial approximation. We found that in a low salt and low, finite charge interval, where the twisting effect dominates, a coexistence between a nematic and a second, more highly aligned nematic phase occurs as well as an isotropic-nematic-nematic triple point and a nematic-nematic critical point. The required salt and shape parameters $\kappa^{-1} \sim 5D$ and $L \gg \kappa^{-1}$ are rather easy to realize experimentally, but the required low but finite zeta-potential requires some degree of tuning near the isoelectric point.
In Refs.~[\citen{Semenov1988, Nyrkova1997}], a scaling analysis was used to treat the integral over the Mayer function in Eq.~\eqref{eq:ExclVolMayer}. Here it was predicted in a certain regime of relatively low charge density and moderate screening, that the ``excluded volume" $E(\gamma)$ is determined by steric interactions at larger angles $\gamma$ whereas its angular dependence at small angles $\gamma$ comes from electrostatic interactions. This competition was predicted to lead to the existence of two nematic phases, one with a weak ordering and one with a very strong ordering. This is qualitatively in agreement with our findings based on full numerical evaluations. In Ref.~[\citen{Chen1996}], the nematic-nematic coexistence was confirmed to be possible in the part of the regime from Refs.~[\citen{Semenov1988, Nyrkova1997}] given by $D/L\ll \mathcal{A}/(2\pi) \ll (\kappa D)^2$ (the other part being ruled out due to many-body effects). This upper bound is indeed confirmed by our calculations. However, since we found that $\mathcal{A}/(2 \pi)$ has values at nematic-nematic coexistence in a range $\sim 0.0025-0.03$, the condition for the aspect ratio $D/L$ to be (much) smaller than this is a stricter requirement on the aspect ratio than we made in this paper, where we set it to zero from the outset. As we used the full numerical form for $E(\gamma)$ (Eq.~\eqref{eq:ExclVol}) as well as numerically solved the integral equation Eq.~\eqref{eq:EL} rather than using approximate Gaussian orientation distributions, we believe our results provide a quantitative underpinning for the nematic-nematic transitions predicted earlier in Refs.~[\citen{Semenov1988, Nyrkova1997,Chen1996}].
We calculated the twist elastic constant of the nematic phase of uniaxial charged rods. We showed that at a fixed effective concentration the twisting effect can reduce the twist elastic constant $K_2$, though it always remains positive. In addition, we calculated $K_2$ for uniaxial finite aspect-ratio rods, where we found no signs of negative $K_2$ either. Therefore, a uniaxial charge distribution alone seems to be not enough to break chiral symmetry, at least not within a second-virial type theory. It is an interesting possibility that by also considering the third-virial term (which includes three-body correlations), the twisting effect could be shown to stabilize a cholesteric phase. In addition, it would be interesting to see if nonlinear uniaxial charge distributions or flexibility could lead to a negative twist elastic constant. These questions are left for future studies.
\section*{Acknowledgments}
This work is part of the D-ITP consortium, a program of the Netherlands Organization for Scientific Research (NWO) that is funded by the Dutch Ministry of Education, Culture and Science (OCW). We also acknowledge financial support from an NWO-VICI grant and from an NWO-ECHO grant.
\section{Introduction}
Improvements in automated theorem provers (ATPs) have so far been achieved
predominantly by inventing new search paradigms such as
superposition~\cite{DBLP:journals/tcs/Fribourg85} and
SMT~\cite{DBLP:series/faia/BarrettSST09}. Over the
years, developers of these provers have optimized their modules and fine-tuned
their parameters. As time progresses, it is becoming evident that more
intricate collaboration between search algorithms and intuitive guidance is
necessary.
ATP developers have frequently manually translated their intuition into
guiding heuristics and tested many different parameter combinations. The first
success in replacing these heuristics by machine learning guidance was
demonstrated in ITP Hammers~\cite{hammers4qed}. There, feature-based predictors
on large interactive theorem prover (ITP) libraries learn to select relevant
theorems for a conjecture. This step drastically reduces the search space.
As a result, ATPs can prove many conjectures proposed by ITP users.
The last landmark that is a major source of inspiration for this paper
is the development and success of self-improving neurally guided algorithms in
perfect information games~\cite{silver2017mastering}. In this work, we adapt
such algorithms to theorem proving tasks.
The hope is that systems using the learning paradigm will eventually improve
on and outperform the best
heuristically-guided ATPs.
We believe that the best design for a solver is one that generalizes
across many tasks. One way to achieve this is to minimize the amount of
algorithmic bias based on human knowledge of the specific domain.
Eventually, given enough examples, the neural architecture might be able to
recognize and exploit patterns by itself. For large domains that require a vast
amount of knowledge and understanding, the number of required examples to
capture all the patterns is too large. That is why our experiments
are performed on a domain that contains a small number of
basic concepts.
\paragraph{Term Synthesis Tasks}
To test our approach, we choose among theorem proving tasks two
term synthesis tasks. In itself, term synthesis is a less commonly explored
technique as it is often a less
efficient way of exploring a search space than deduction-based methods.
This technique is however crucial in inductive theorem proving~\cite{hipspec13}
and in counterexample
generators~\cite{DBLP:conf/cpp/Bulwahn12,DBLP:conf/itp/BlanchetteN10}
as it can be used to provide an induction predicate or a witness.
A term synthesis task can be expressed as proving a theorem of the form
$\exists x.\ P(x)$ with the proof providing a witness for the term $x$.
In both our tasks, the theorem can be re-stated as $\exists x.\ f(x) = y$,
where $f$ is an evaluation function specific to the task and
$y$ is an image specifying the particular instance that has to be solved.
In this light, the aim of the prover is to find an element of the preimage of
$y$.
This might be hard even if we have an efficient algorithm for $f$ as
conjectured by the existence of one-way
functions~\cite{DBLP:journals/mlcs/Gradel94}.
In the first task, the aim is to construct an untyped SK-combinator which
is semantically equal to a $\lambda$-expression in head-normal form.
Applications range from better encoding of $\lambda$-expressions in
higher-order to first-order
translations~\cite{sledgehammer10,DBLP:conf/cpp/Czajka16}
to efficient compilations of functional programming languages to combinator code
~\cite{DBLP:journals/cl/JoyRB85}.
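For illustration, the small Python sketch below (not part of the \holfour framework, and not necessarily the exact notion of equality used in the experiments) implements the standard reduction rules $S\,x\,y\,z \rightarrow x\,z\,(y\,z)$ and $K\,x\,y \rightarrow x$ with a leftmost-outermost strategy, and checks that the term $S\,K\,K$ behaves as the identity $\lambda x.\,x$.
\begin{verbatim}
def app(f, x):
    return ("app", f, x)

def reduce_once(t):
    """One leftmost-outermost step, or None if t is in normal form."""
    if isinstance(t, str):
        return None
    _, f, x = t
    if isinstance(f, tuple) and f[1] == "K":             # K a b -> a
        return f[2]
    if isinstance(f, tuple) and isinstance(f[1], tuple) \
            and f[1][1] == "S":                          # S a b c -> a c (b c)
        a, b, c = f[1][2], f[2], x
        return app(app(a, c), app(b, c))
    r = reduce_once(f)
    if r is not None:
        return ("app", r, x)
    r = reduce_once(x)
    return None if r is None else ("app", f, r)

def normalize(t, fuel=1000):
    while fuel > 0:
        r = reduce_once(t)
        if r is None:
            return t
        t, fuel = r, fuel - 1
    return t

i = app(app("S", "K"), "K")         # candidate witness S K K
print(normalize(app(i, "x")))       # -> 'x', so S K K acts as the identity
\end{verbatim}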
In the second task, the aim is to construct a polynomial $p$ whose Diophantine
set $D(p)$ is equal to a set of integers $S$. We say that the witness $p$
describes $S$. This process of constructing new and equivalent definitions for
$S$ is important during mathematical investigation as it gives an alternative
point of view on the object $S$. For instance, the set of natural numbers
$\lbrace 0,2,4,6,8,10,\ldots\rbrace$ can be described by the Diophantine
equation $k = 2\times x$ defining the concept of even numbers.
The Encyclopedia of integer sequences~\cite{DBLP:conf/mkm/Sloane07} contains
entries that can be
stated as Diophantine equations (e.g. sequence A001652).
Other motivations for investigating Diophantine equations
come from elliptic curve cryptography~\cite{DBLP:phd/dnb/Baier02} and number
theory~\cite{duverney2010number}.
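As a toy illustration (not part of the reported experiments), the Diophantine set of the example polynomial $p(k,x)=k-2x$ can be enumerated over a bounded range of witnesses:
\begin{verbatim}
def diophantine_set(p, kmax=16, xmax=64):
    # D(p) restricted to k <= kmax, with witnesses x <= xmax (bounds are arbitrary)
    return sorted(k for k in range(kmax + 1)
                  if any(p(k, x) == 0 for x in range(xmax + 1)))

print(diophantine_set(lambda k, x: k - 2 * x))   # the even numbers up to 16
\end{verbatim}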
\paragraph{Contributions}
This paper presents a general framework that lays out the foundations for
neurally guided solving of theorem proving tasks.
We evaluate the suitability of tree neural networks on two term synthesis tasks.
We focus on showing how deep
reinforcement learning~\cite{Sutton:1998:IRL:551283} algorithms can acquire the
knowledge necessary to solve
such problems through exploratory searches.
The framework is integrated in the \holfour~\cite{hol4} system (see
Section~\ref{sec:result}).
The contributions of this paper are:
(i) the implementation of TNNs with the associated backpropagation algorithm,
(ii) the implementation of a guided Monte Carlo Tree Search (MCTS)
algorithm~\cite{montecarlo}
for arbitrarily specified search problems,
(iii) the demonstration of continuous self-improvement for a large number of
generations,
(iv) a comparison with state-of-the-art theorem provers on the
task of synthesizing combinators and
(v) a formal verification of the solutions in \holfour.
\section{Tree Neural Networks}\label{sec:tnn}
In the machine learning field, different kinds of predictors are better
suited to different tasks. That is why new problems often call for new
kinds of predictors. This is particularly true for predictors such as
neural networks. For maximum learning efficiency, the structure of the problem
should be reflected in the structure of the neural network. For example,
convolutional neural networks are best for handling pictures as their structure
has space-invariant properties, whereas recurrent networks can handle text
better. For our purpose, we have chosen neural networks that are specifically
designed to take into account the tree structure of terms and
formulas as in~\cite{DBLP:journals/tacl/KiperwasserG16a}.
\subsection{Architecture}
Let $\mathbb{O}$ be a set of operators and $\mathbb{T}_\mathbb{O}$ be the set
of all terms that can be constructed
from $\mathbb{O}$.
A tree neural network (TNN) is a machine learning
model designed to approximate functions from $\mathbb{T}_\mathbb{O} \mapsto
\mathbb{R}^n$. We define first the structure of the tree neural network and
then show how to compute with it. An example is given in Figure~\ref{fig:tnn}.
\paragraph{Definition}(Tree neural network)\\
We define a tree neural network to be a set of feed-forward
neural networks, each with $n$ layers and a $\tanh$
activation function for each layer.
There is one network for each operator $f$, denoted $N(f)$, and one for the head,
denoted $H$.
The network of an operator with arity $a$ is to learn a function
from $\mathbb{R}^{a \times d}$ to $\mathbb{R}^d$ where $d$ is the dimension
of the embedding space.
The head network is to approximate a function from $\mathbb{R}^d$ to
$\mathbb{R}^n$.
As an optimization, for an operator $f$ with arity 0, $N(f)$ is defined to be
a vector of weights in the embedding space $\mathbb{R}^d$ since multiple layers
are not needed for learning a constant function.
\paragraph{Computation of the Embeddings}
Given a TNN, we can now define recursively an embedding function $E:
\mathbb{T}_\mathbb{O} \mapsto \mathbb{R}^d$ by
$E(f(t_1,\ldots,t_a)) =_\mathit{def} \mathit{N}(f)(E(t_1),\ldots,E(t_a))$.
This function produces an
internal representation of the terms to be later processed by the head network.
This internal representation is often called a \textit{thought vector}.
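For illustration, this recursion can be rendered as the following Python
sketch (the term representation and names are ours and do not reflect the
actual implementation; \texttt{dense\_tanh} stands for one fully connected
layer with a $\tanh$ activation):
\begin{lstlisting}[language=Python]
import numpy as np

def dense_tanh(W, b):
    # one fully connected layer with a tanh activation
    return lambda x: np.tanh(W @ x + b)

def embed(term, nets):
    # term = (operator, list_of_subterms); nets maps an operator of arity a > 0
    # to a callable R^(a*d) -> R^d, and an operator of arity 0 to a vector in R^d
    op, args = term
    if not args:
        return nets[op]
    children = [embed(t, nets) for t in args]
    return nets[op](np.concatenate(children))

def tnn_apply(term, nets, head):
    # output of the TNN on a term: the head network applied to its embedding
    return head(embed(term, nets))
\end{lstlisting}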
\paragraph{Computation of the Output}
The head network interprets the internal representation and makes the
last computations towards the expected result. In particular, it reduces the
embedding dimension $d$ to the dimension of the output $n$.
The application of a TNN on a term $t$ gives the result $H(E(t))$.
It is possible to learn a different objective by replacing only the head of the
network. We use this to our advantage in the reinforcement learning experiments
where we have the double objective of predicting a policy and a value (see
Section~\ref{sec:drl}).
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.8, every node/.style={scale=1.0}]
\node [embedding,node distance=3cm] (0l) {$a$};
\node [node distance=1.5cm,right of=0l] (0lm) {};
\node [embedding, node distance=3cm, right of=0l] (0m) {$b$};
\node [embedding, node distance=3cm, right of=0m] (0r) {$a$};
\node [nnop, node distance=1cm, above of=0lm] (p) {$f$};
\node [nnop, node distance=1cm, above of=0r] (s) {$g$};
\node [embedding, node distance=1cm, above of=s] (se) {$g(a)$};
\node [embedding, node distance=1cm, above of=p] (pe) {$f(a,b)$};
\node [node distance=2.25cm, right of=pe] (pr) {};
\node [nnop, node distance=1cm, above of=pr] (t) {$f$};
\node [embedding, node distance=1cm,above of=t] (te) {$f(f(a,b),g(a))$};
\node [nnop, node distance=1cm,above of=te] (h) {$\mathit{head}$};
\node [node distance=1cm,above of=h] (he) {};
\draw[-to,thick] (0l) to (p);
\draw[-to,thick] (0m) to (p);
\draw[-to,thick] (0r) to (s);
\draw[-to,thick] (p) to (pe);
\draw[-to,thick] (s) to (se);
\draw[-to,thick] (pe) to (t);
\draw[-to,thick] (se) to (t);
\draw[-to,thick] (t) to (te);
\draw[-to,thick] (te) to (h);
\draw[-to,thick] (h) to (he);
\end{tikzpicture}
\caption{Computation flow during the application of a TNN to the term
$f(f(a,b),g(a))$.
Rectangles represent embeddings and rounded squares neural networks.}
\label{fig:tnn}
\end{figure}
\paragraph{Higher-Order Terms}
A \holfour term can be encoded into a first-order term (i.e. a labeled tree) in
two steps. First, the function applications $f\ x$ are rewritten to
$\mathit{apply}(f,x)$ introducing an explicit $\mathit{apply}$ operator.
Second, the lambda terms $\lambda x.\ t$ are replaced by
$\mathit{lam}(x,t)$ using the additional operator $\mathit{lam}$.
For tasks performed on \holfour terms that are
essentially first-order, these encodings are not required.
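The encoding itself is a simple recursion over the term structure, sketched
below in Python on a toy representation of higher-order terms (the
representation and names are illustrative, not the \holfour datatypes):
\begin{lstlisting}[language=Python]
# toy higher-order terms: ('var', x), ('app', f, a), ('abs', ('var', x), body)
def encode(term):
    kind = term[0]
    if kind == 'var':                     # leaves become arity-0 operators
        return (term[1], [])
    if kind == 'app':                     # f a   becomes  apply(f, a)
        return ('apply', [encode(term[1]), encode(term[2])])
    if kind == 'abs':                     # \x. t  becomes  lam(x, t)
        return ('lam', [encode(term[1]), encode(term[2])])
    raise ValueError(kind)
\end{lstlisting}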
\section{Deep Reinforcement Learning}\label{sec:drl}
When possible, the deep reinforcement learning
approach~\cite{Sutton:1998:IRL:551283} is preferable to a
supervised learning approach for two main reasons. First, an oracle is not
required. This means that the algorithm is more general as it does not require
a specific oracle for each task and can even learn a task for which nobody
knows a good solution.
Secondly, by decomposing the problem into many steps, the trace of the
computation becomes visible, which is particularly important if one wants a
justification for the final result.
We present here our methodology to achieve deep reinforcement learning through
self-supervised learning. It consists of two phases. During the exploration
phase, a search tree is built by the MCTS algorithm from a prior value and
prior policy. These priors are updated by looking at the consequence of each
action. During the learning phase, a new TNN learns to replicate this
improved policy and value. This new TNN will be used to guide the next
exploration phase. One iteration of the reinforcement learning loop is called a
\textit{generation}. In the remainder of the paper, we refer to the
application of the full reinforcement learning loop (alternation of learning
phases and exploration phases) as the training of the
TNN. Once trained, we judge the performance of the TNN during a final
evaluation (Section~\ref{sec:result}).
Before describing both phases, we define
what a search problem is, what the
policy and value for a search algorithm are and how they can be combined
to guide the search using MCTS.
\subsection{Specification of a Search Problem}\label{sec:abs_spec}
Any task that can be solved in a series of steps with decision points at
each step can be used to construct search problems. For example, theorem
proving is a search task by construction. Other tasks around theorem
proving, such as programming, conjecturing, making definitions, and
refactoring, are harder to view in such a light.
Here, a search task is described as a single-player perfect information game.
\paragraph{Definition}(Search problem)\\
A search problem is an oriented graph.\\
The nodes are a set of states $ \mathbb{S}$ with particular labels for:
a starting state $s_0\in \mathbb{S}$,
a subset of winning states $\mathbb{W} \subset \mathbb{S}$,
and a subset of losing states $\mathbb{L} \subset \mathbb{S}$.\\
The oriented edges are labeled by a finite set of moves $\mathbb{M}$. The
transition function $T: \mathbb{S} \times \mathbb{M}
\to \mathbb{S}$ returns the state reached by making a move from the input
state. To solve the problem, an algorithm needs to find a path $p$ from the
starting
state to a winning state avoiding the losing states.
An end state is a state that is either winning or losing.
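A search problem can thus be summarized by a small interface; the following
Python rendering is only illustrative and independent of our implementation:
\begin{lstlisting}[language=Python]
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class SearchProblem:
    # a search problem in the sense of the definition above
    start: Any                              # starting state s0
    moves: List[Any]                        # the finite set of moves M
    transition: Callable[[Any, Any], Any]   # T : S x M -> S
    is_win: Callable[[Any], bool]           # membership in W
    is_lose: Callable[[Any], bool]          # membership in L
\end{lstlisting}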
\paragraph{Policy and Value}
A policy $P$ is a function from $\mathbb{S}$ to $[0,1]^ {\mathit{cardinal}
(\mathbb{M})}$ that assigns to each state $s$ a real number for each move.
It is intended to be a probability distribution indicating how often
each move should be explored from each state. An impossible move is given a
policy score of $0$ and thus is never selected during MCTS.
A value $V$ is a function from $\mathbb{S}$ to the interval $[0,1]$.
$V(s)$ is used as an estimate of how likely the search
algorithm is to complete the task. Therefore, the function needs to respect
these additional constraints: a value of $1$ for winning states, and a value of
$0$ for losing states.
\subsection{Monte Carlo Tree Search}
An in-depth explanation of the Monte Carlo Tree Search algorithm is given in
\cite{montecarlo}. This search algorithm strikes a good balance between
exploration of uncertain paths and exploitation of paths leading to states with
a good value $V$.
The algorithm was recently improved in~\cite{silver2017mastering}.
The estimation of the value $V$, which used to be approximated by the
proportion of random walks (or roll-outs) to a winning state, is now returned
by a deep neural network. Explicitly, this means that we do not perform
roll-outs during the node extension steps and instead the learned value
$V$ is used to provide the rewards.
A search problem is explored by the MCTS algorithm with the help of a prior
policy $P$ and a prior value $V$.
The algorithm starts from an initial tree
(that we call a \textit{root tree}) containing an initial state and proceeds to
gradually build a search tree.
Each iteration of the MCTS loop can be decomposed into three main components:
node selection, node extension and backup.
One step of this loop usually creates a new node in the search tree unless the
node selection reaches an end state (winning or losing).
The search is stopped after a fixed number of iterations of the loop called the
\textit{number of simulations} or after a fixed time limit.
The node selection process is guided by the PUCT
formula~\cite{DBLP:conf/pkdd/AugerCT13}. We set its \textit{exploration
coefficient} to 2.0 in our experiments.
The rewards are calculated as usual:
winning states are given a reward of 1, losing states a reward of 0, and the
rewards for other states are given by the value $V$.
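For illustration, a common form of the PUCT selection rule is sketched below
in Python (the exact formula of~\cite{DBLP:conf/pkdd/AugerCT13} may differ in
its details; $N$, $W$ and $P$ denote the visit count, accumulated reward and
prior policy of a child node):
\begin{lstlisting}[language=Python]
import math

def select_child(children, c_puct=2.0):
    # children: list of dicts with visit count 'N', total reward 'W', prior 'P'
    total = sum(ch['N'] for ch in children)
    def score(ch):
        q = ch['W'] / ch['N'] if ch['N'] > 0 else 0.0                 # exploitation
        u = c_puct * ch['P'] * math.sqrt(total + 1) / (1 + ch['N'])   # exploration
        return q + u
    return max(children, key=score)
\end{lstlisting}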
\subsection{Exploration Phase}
During the exploration phase of the reinforcement learning loop, a full attempt
at a solution will rely on multiple calls to the MCTS algorithm.
The first call is performed starting from a root tree containing
the starting state. After an application of the MCTS algorithm, the constructed
tree is used to decide which move to choose. The application of this move to
the starting state is a \textit{big step}.
The MCTS algorithm is then restarted on a root tree containing the resulting
state.
This procedure is repeated until a big step results in an end state or
after the number of big steps exceeds a fixed bound.
This bound is fixed to be twice the size of an existing solution found during
problem generation (Section~\ref{sec:tasks}).
An attempt is successful if it ends in a winning state.
The decision of which big step to make is based on the number of visits
of each child of the root. During the exploration phase,
the move with the highest number of visits is chosen.
To encourage exploration during training, noise
is added to the prior policy of the root node. We draw the noise from a
uniform distribution and thus do not have to choose the parameter of the
Dirichlet distribution as in~\cite{silver2017mastering}.
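The alternation of MCTS calls and big steps can be sketched as follows
(illustrative Python, reusing the interface sketched in
Section~\ref{sec:abs_spec}; \texttt{search} stands for one guided MCTS call
returning the most visited move from a fresh root tree):
\begin{lstlisting}[language=Python]
def attempt(problem, search, max_big_steps):
    # one attempt at a problem: alternate MCTS calls and big steps
    state = problem.start
    for _ in range(max_big_steps):
        if problem.is_win(state):
            return True
        if problem.is_lose(state):
            return False
        move = search(state)                  # big step: most visited child of the root
        state = problem.transition(state, move)
    return problem.is_win(state)
\end{lstlisting}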
\paragraph{Remark}
The upper bound is also used to limit the depth of the MCTS calls
during training. Since it gives our prover some problem-specific
knowledge about the limits of the search space and also prevents it
from finding longer solutions, the bound is not enforced
during the final evaluations on the testing sets.
\paragraph{Problem Selection}
It is computationally expensive to make attempts on all 2000 problems from the
training set (see Section~\ref{sec:datasets}).
That is why we select a subset of 200 problems. As we aim to make
the dataset of examples balanced, we select 100 positive problems and 100
negative problems. A problem is positive if it was attempted previously and
solved in the last attempt and negative otherwise.
On top of that, we try to steer the selection away from problems
that are too hard or too easy. To estimate how easy or hard a problem is, we
look at the list of successes and failures of the algorithm on this problem.
We count the number of successes (respectively failures) in a row starting from
the end of the list on a positive (respectively negative) problem.
The inverse of this number is higher for a problem that has just shifted from
negative to positive or conversely.
This score is normalized into a probability distribution across all the
positive (respectively negative)
examples. The selected positive (respectively negative) problems are drawn from
this distribution.
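The selection scheme can be sketched as follows (illustrative Python; positive
and negative problems are drawn from separate pools in the same way, and
sampling with replacement is used here for brevity):
\begin{lstlisting}[language=Python]
import random

def streak(history):
    # length of the trailing run of identical outcomes (booleans, newest last)
    last, n = history[-1], 0
    for outcome in reversed(history):
        if outcome != last:
            break
        n += 1
    return n

def select_problems(histories, k):
    # histories: problem name -> list of past outcomes; weight = 1 / streak
    names = list(histories)
    weights = [1.0 / streak(histories[p]) for p in names]
    return random.choices(names, weights=weights, k=k)
\end{lstlisting}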
\subsection{Learning Phase}
After each big step of the exploration phase, we collect an example consisting
of an input term representing a search state associated with an improved policy
and an improved value for this search state.
More precisely, the example is constructed from statistics of the root of the
search tree (the root changes after each big step). The input of the example
is the term representing the state of the
root. The improved policy for this example is computed by dividing the number
of visits for each child by the total number of visits for all children.
The improved value is the average of the values of all the nodes in the tree
with the value of an end state counted as many times as it has been visited.
The new example is added to a dataset of training examples.
This dataset is then used to train the TNNs in future generations.
When the number of examples in the dataset reaches $x$, older
examples are discarded whenever newer examples are added so that the number of
examples never exceeds $x$. This number is called the \textit{size of the
window} and is set to 200,000 in our experiments.
The TNN learns to imitate the collected policy and value
examples by following the batch gradient descent
algorithm~\cite{DBLP:conf/kdd/LiZCS14}. In our implementation, we
rely on a mean square error loss and update the weights using backpropagation.
Furthermore, $P$ and $V$ are learned simultaneously by a
double-headed TNN. In other words, the learned term embeddings are shared
during the learning of the two objectives.
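The extraction of one training example from the root statistics can be
sketched as follows (illustrative Python, a simplified reading of the
description above):
\begin{lstlisting}[language=Python]
def improved_policy(root_children):
    # visit counts of the children of the root, normalized to a distribution
    total = sum(ch['N'] for ch in root_children)
    return [ch['N'] / total for ch in root_children]

def improved_value(tree_nodes):
    # average node value, end states weighted by their number of visits
    weights = [(n['N'] if n['is_end'] else 1) for n in tree_nodes]
    values = [n['value'] for n in tree_nodes]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)
\end{lstlisting}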
\paragraph{Remark}
The states (or their term representations) in the policy and value dataset
differ from those in the dataset of problems. Indeed,
problems are starting states whereas policies and values are extracted for
all intermediate states covered by the big steps.
\section{Specification of the Tasks}\label{sec:tasks}
In this section, we specify the two synthesis tasks that are used for
experiments with
our framework. We describe how to express the search problems and provide
optimizations to facilitate the learning.
\subsection{Synthesis of Combinators}\label{sec:combin}
The aim of this task is to find an SK-combinator $c$
such that $c\ x_1\ x_2\ \ldots\ x_n \rightarrow
h'[x_1,x_2,\ldots,x_n]$
where $h'$ is a higher-order term only composed of the functions
$x_1,x_2,\ldots,x_n$ and $\rightarrow$ is the rewrite relation given by
the term rewrite system $\lbrace S x y z \rightarrow (x z) (y
z),\ K x y \rightarrow x \rbrace$.
A combinator is by definition said to normalize if it has a normal form. This
unique normal form can always be obtained using the left-outermost strategy.
Having a solution $w$ to the problem implies that the combinator $w\ x_1\
x_2\ \ldots\ x_n$ normalizes; therefore $w$ normalizes too and its normal
form $w'$ is also a solution. Thus, we can limit the search to SK-combinators
in normal form.
To synthesize combinators we introduce a meta-variable $X$ as a placeholder at
the positions of to-be-constructed subterms.
The starting state is a couple of the partially synthesized term $X$ and
$h'[x_1,x_2,\ldots,x_n]$.
A move consists of applying one of the
following five rewrite rules to the first occurrence of $X$ in the first
component of the state:
\[X \rightarrow S,\ X \rightarrow S X,\ X \rightarrow S X X,\
X \rightarrow K,\ X \rightarrow K X\]
This system produces exactly the combinators in normal form. It
prevents the creation of any redex by limiting the number of arguments of $S$
and $K$.
A state is winning if the synthesized witness $w$ applied to $x_1\
x_2\ \ldots\ x_n$ rewrites to the normal form $h'[x_1,x_2,\ldots,x_n]$, which
is determined in practice by applying the left-outermost rewrite strategy. We
remove occurrences of $X$ in $w$ before testing for the winning condition.
A state is losing if the synthesized witness $w$ does not contain $X$ and is
not a solution.
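The winning test can be sketched as follows (illustrative Python, not the
actual implementation): combinators are represented as nested application
pairs, occurrences of $X$ are assumed to have been removed already, and
left-outermost rewriting is applied with a step budget as a safeguard against
non-normalizing terms.
\begin{lstlisting}[language=Python]
def step(t):
    # one left-outermost rewrite step; returns (new_term, reduced?)
    if isinstance(t, tuple):
        f, x = t
        if isinstance(f, tuple) and isinstance(f[0], tuple) and f[0][0] == 'S':
            a, b = f[0][1], f[1]                  # S a b x -> (a x) (b x)
            return ((a, x), (b, x)), True
        if isinstance(f, tuple) and f[0] == 'K':  # K a x -> a
            return f[1], True
        nf, done = step(f)
        if done:
            return (nf, x), True
        nx, done = step(x)
        if done:
            return (f, nx), True
    return t, False

def normalize(t, budget=10000):
    for _ in range(budget):
        t, reduced = step(t)
        if not reduced:
            return t
    return None                                    # give up within the budget

def winning(w, variables, target):
    # apply the witness to x1 ... xn and compare with the target normal form
    # (variables and target use the same nested-pair representation)
    for v in variables:
        w = (w, v)
    return normalize(w) == target
\end{lstlisting}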
\paragraph{TNN Representation of the State}
To make its decision between the five moves and evaluate the current state,
the TNN receives components of the state as first-order
terms by tagging each operator with its local arity and merges their embeddings
using a concatenation network.
For instance, the combinator $S (K S) K$ is encoded as $s_2(k_1(s_0),k_0)$.
Compared with the apply encoding,
the advantage is that we can represent each combinator with arity greater than
zero as a neural network instead of an embedding in our TNN architecture.
The drawback is that the learning is split between operators of
different arities.
\paragraph{Complexity}
For synthesizing a solution of size 20, an upper bound for the search space of
our algorithm is $5^{20} \approx 9.5 \times 10 ^ {13}$ as there are 5 possible
moves.
Since no moves are allowed from end states, there are more precisely at
most $4.9 \times 10^{11}$ states with a partially synthesized combinator of
size less than 20.
\subsection{Synthesis of Diophantine Equations}\label{sec:dioph}
The aim of this task is, given a particular set $S$, to find a polynomial whose
Diophantine set is equal to $S$.
In the following description and for the experiments,
we limit the range of polynomials to the ones with one parameter $k$ and three
existential variables $x,y,z$. The Diophantine set $D(p)$ of the polynomial $p$
is defined by $D(p) =_{\mathit{def}} \lbrace k\ |\ \exists xyz.\ p(k,x,y,z) = 0
\rbrace$.
To make the computation of $D(p)$ tractable for any polynomial, the domain of
interpretation of the variables and operators is changed to
$\mathbb{Z}/16\mathbb{Z}$.
The polynomials considered for synthesis are normalized polynomials.
They are expressed as a sum of monomials.
Internally, each monomial is represented as a list of integers. The first
element of the list is the coefficient, and the remaining elements are the
exponents of the variables $k$,$x$,$y$ and $z$.
A polynomial is a list of monomials and its size is the sum of the lengths of
its monomials (represented as lists). For instance, the polynomial
$k^2x^3 + 2y^4$ is internally represented as $[[1, 2, 3], [2, 0, 0, 4]]$.
Since multiplication is associative
and commutative (AC), variables are ordered alphabetically
in the monomial. Since addition is AC, monomials are sorted by comparing
their list of exponents using the lexicographic order.
Our starting state consists of the empty polynomial and an enumeration $S$.
To synthesize a polynomial we rely on two types of moves. The first one
is to start constructing the next monomial by choosing its coefficient. The
second one is to choose an exponent of the next variable in the current
monomial. In practice, we limit the maximum value of an exponent to 4 and
the number of monomials per polynomial to 5.
A state $(w,S)$ is winning if the polynomial $w$ defines the set $S$.
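The winning test amounts to an exhaustive evaluation over
$\mathbb{Z}/16\mathbb{Z}$, as in the following illustrative Python sketch
using the list-of-lists representation above (trailing zero exponents may be
omitted):
\begin{lstlisting}[language=Python]
from itertools import product

MOD = 16

def eval_poly(poly, k, x, y, z):
    # poly: list of monomials [coefficient, exp_k, exp_x, exp_y, exp_z]
    total = 0
    for mono in poly:
        coeff, exps = mono[0], mono[1:]
        exps = exps + [0] * (4 - len(exps))          # pad omitted exponents
        term = coeff % MOD
        for var, e in zip((k, x, y, z), exps):
            term = term * pow(var, e, MOD) % MOD
        total = (total + term) % MOD
    return total

def dioph_set(poly):
    # D(poly) over Z/16Z, by exhaustive search over the existential variables
    return {k for k in range(MOD)
            if any(eval_poly(poly, k, x, y, z) == 0
                   for x, y, z in product(range(MOD), repeat=3))}

def is_solution(poly, target):
    return dioph_set(poly) == set(target)
\end{lstlisting}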
\paragraph{TNN Representation of the State}
Computing the embedding of a state $(w,S)$ relies on a neural network operator
for merging the embeddings of $w$ and $S$.
The set $S$ is encoded as a list of 16 real numbers.
The $i^{th}$ element of this list is 1 if $i\in S$ and -1 otherwise.
The polynomial $w$ is represented using a tree structure with operators $+$ and
$\times$ and we have learnable embeddings for
each variable and each coefficient. To compress the representation further,
variables with their exponent (e.g. $x^3$) are represented as a single
learnable embedding instead of a tree.
\paragraph{Complexity}
An approximation measure for the size of the search space is the
number of polynomials in normal form. This number is $\frac{(16 \times 5^4) ^
5}{5!} \approx 8.3 \times 10 ^ {17}$.
\section{Datasets}\label{sec:datasets}
In all our tasks, our algorithms require a training set in order to learn the
task at hand. In a reinforcement learning setting, a training problem
does not come with its solution as in supervised learning, thus
problem-solving knowledge cannot be obtained by memorization and has to be
acquired through search. Still, we also create an independent testing set to
further estimate the generalization abilities of the
algorithm on problems not seen during training.
Even in the context of reinforcement learning, the ability of TNNs to learn a
task is heavily influenced by the quality of the training examples.
The following objectives should guide the generation of the training set: a
large and diverse enough set of input terms,
a uniform distribution of output classes and a gradual increase in difficulty.
For both tasks, problems are generated iteratively in the same way.
At the start, the set of problems $\mathbb{P}$ is empty.
At each step, a random witness $w$ (polynomial or combinator) is produced and
we compute its image $f(w)$.
If the image does not have the desired form, then $\mathbb{P}$ remains
unchanged.
If the image does not exist in $\mathbb{P}$, we
add the problem represented by $f(w)$ and its solution $w$ to the problem set.
If the image already exists in the set and the witness is smaller
than the previous one for this image, then we replace the previous solution
by the new one. If it is bigger, then $\mathbb{P}$ remains unchanged.
We repeat this process until we have 2200 distinct problems.
This set of problems is split randomly into a training set of 2000 problems and
testing set of 200 problems. We use the generated solutions to estimate the
difficulty of the problems and bound the number of big steps during training.
The generated solutions are not revealed to any other part of the algorithm. In
particular, no information about these solutions is used during the final
evaluation on the test set.
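This shared generation loop can be sketched as follows (illustrative Python;
\texttt{random\_witness}, \texttt{image} and \texttt{size} are task-specific:
a random normal-form combinator or random polynomial, its image, and the size
measure defined above):
\begin{lstlisting}[language=Python]
def generate_problems(random_witness, image, size, n_problems=2200):
    problems = {}                      # image -> smallest witness found so far
    while len(problems) < n_problems:
        w = random_witness()
        img = image(w)
        if img is None:                # image not of the desired form
            continue
        if img not in problems or size(w) < size(problems[img]):
            problems[img] = w
    return problems
\end{lstlisting}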
To generate a random combinator, we pick randomly a size between 1 and
20 and then draw uniformly at random from the set of normal form SK-combinators
of that size. Generating this set becomes too computationally expensive for
a size greater than 10, thus we rely on a top-down generation that exactly
simulates the process. It works by selecting the top operator and the size of
its arguments according to their frequencies which can be computed much more
efficiently.
To generate a random polynomial, we first select a number of monomials in
$[|1,5|]$. Then for each monomial, we choose a number of variables in $[|0,4|]$, a
leading coefficient in $[|1,15|]$ and an exponent in $[|0,4|]$ for each variable.
Figure~\ref{fig:dis} shows the distributions of the problems according to their
difficulty. There, the size of the smallest generated
solution is used as a measure of difficulty for each problem.
\pgfplotscreateplotcyclelist{rw2}
{solid, mark repeat = 4, mark phase = 2, mark = *, black\\
solid, mark repeat = 4, mark phase = 4, mark = o, black\\}
\begin{figure}[]
\begin{tikzpicture}
\begin{axis}[
legend style={anchor=north east, at={(0.95,0.95)}},
width=\textwidth,
height=0.4*\textwidth,xmin=1, xmax=22,
ymin=0, ymax=300,
cycle list name=rw2
]
\addplot table[x=size, y=pb] {combin_dis};
\addplot table[x=size, y=pb] {poly_dis};
\legend{combinators,polynomials}
\end{axis}
\end{tikzpicture}
\caption{\label{fig:dis} Number (y) of problems with a generated solution of
size (x)}
\end{figure}
\section{Results}\label{sec:result}
The following experiments demonstrate how the reinforcement learning
framework is able to gradually learn each task by recursive
self-improvement. We first analyze the progress made during training.
Then, we compare our method with alternatives during evaluation.
Finally, we analyze the distribution of solutions produced to gain some
insight into what has been learned.
\paragraph{Replicability}
The code for the framework and the experiments is available in the
repository for
\holfour\footnote{\url{https://github.com/HOL-Theorem-Prover/HOL}}.
Although the code is available on top of the master branch, to be able to import
the provided HOL4 datasets of combinator problems, one needs to switch to
the commit bcd916d1251cced25f45c90e316021d0fd8818e9 as the format for exporting
terms was changed.
The specification of each task is implemented in the
\textsf{examples/AI\_tasks}
directory. The underlying framework shared by both tasks is located in the
\textsf{src/AI} directory.
The file \textsf{examples/AI\_tasks/README} explains how to reproduce the
experiments.
The datasets can be downloaded from our repository
\footnote{\url{https://github.com/barakeel/synthesis_datasets}}.
\paragraph{Parameters}
The parameters of the TNN used during our experiments are an embedding
dimension of size 16, one fully connected
layer per operator and two fully connected layers for the policy head and the
value
head. The schedule of the learning phase consists of 10 epochs on a maximum
of 200,000 examples and a learning rate of 0.02.
\subsection{Combinators}
Experiments on combinators rely on an instance of the MCTS
algorithm given by the specification from Section~\ref{sec:combin}
noted MCTS$_{\mathit{combin}}$.
\paragraph{Training}
Figure~\ref{fig:combin} shows the number of problems solved at least once by
MCTS$_{\mathit{combin}}$ (run with multiple big steps) over 318
generations.
To give a better view of the progress of the algorithm, we also show the
number of problems that we expect the algorithm to be able to solve. This
number is obtained by
computing the frequency at which it solves each problem on the last five tries
and summing up the frequencies across all problems.
This graph shows that little improvement occurs after generation 270.
The discrepancy between the two lines also indicates that our TNN is not able
to memorize perfectly the previously discovered proofs. One solution to these
issues would be to increase the dimension of the embeddings. The trade-off
is that it would quadratically slow down inference and may
lead to a weaker generalization ability if applied without proper
regularization.
All but 100 problems are attempted before generation 34. At this point, only 135
solutions are found. This is to be compared to the total of 1599 solutions
found by the end of the training.
The oldest examples retrieved from searches are discarded after generation 64
as their number surpasses 200,000.
After each generation, we save the weights of the trained TNN. In the following
evaluation, we use the weights of the TNN from generation 318.
\begin{figure}[]
\begin{tikzpicture}
\begin{axis}[
legend style={anchor=north east, at={(0.95,0.4)}},
width=\textwidth,
height=0.5*\textwidth,xmin=0, xmax=318,
ymin=0, ymax=1700,
cycle list name=rw
]
\addplot table[x=gen, y=sol] {combin_graph};
\addplot table[x=gen, y=exp] {combin_graph};
\legend{at least once, expectancy}
\end{axis}
\end{tikzpicture}
\caption{\label{fig:combin} Number y of training problems solved after
generation x}
\end{figure}
\paragraph{Evaluation}
\begin{table}[]
\centering
\begin{tabular}{llcc}
\toprule
Prover & Strategy & Train (2000) & Test (200)\\
\midrule
\eprover 2.4 \cite{eprover}& auto & 38.80 & 36.0 \\
& auto-schedule & 50.35 & 48.5\\
\vampire 4.2.2 \cite{DBLP:conf/cav/KovacsV13} & default & \phantom{0}4.15 &
\phantom{0}3.5\\
& CASC mode & 63.45 & 62.0 \\
MCTS$_{\mathit{combin}}$ & uniform & 27.65 & 27.0 \\
& TNN-guided & 72.70 & 65.0 \\
\bottomrule
\end{tabular}
\caption{Percentage of problems solved within 60 seconds}\label{tab:combin}
\end{table}
During the final evaluation, we run each prover on one CPU for 60 seconds on
each problem. Their success rates are presented in
Table~\ref{tab:combin}. The MCTS$_{\mathit{combin}}$ algorithm is run without
noise and for as many simulations as the time limit allows. In particular, we
do not apply any big steps (non-backtrackable steps) as they were mainly
introduced to produce training examples for states appearing deeper in the
search. By comparing the results of MCTS$_{\mathit{combin}}$ on the training
and testing set, we observe that it generalizes well although not perfectly to
unseen data.
Our machine learning guided search can then be compared with the uniform strategy
(approximating breadth-first search) where each branch is explored with the
same probability. Since the uniform strategy does not call the TNN, it can
perform on average more than twice as many simulations (414,306 vs 196,000) in
60 seconds. Even with this advantage, the trained algorithm outperforms it
significantly.
To compare our algorithm with state-of-the-art automated theorem provers, the
synthesis problem is stated using an existential first-order formula for the
conjecture. As an
example, the problem for
synthesizing a combinator equal to the head normal form $\lambda v_1v_2v_3.v_3$
can be expressed in the TPTP~\cite{tptp} format (an input format for ATPs) as:
\begin{verbatim}
fof(axS,axiom, ![X, Y, Z]: (a(a(a(s,X),Y),Z) = a(a(X,Z),a(Y,Z)))).
fof(axK,axiom, ![X, Y]: (a(a(k,X),Y) = X)).
fof(conjecture,conjecture, ?[Vc]: ![V1, V2, V3]: (a(a(a(Vc,V1),V2),V3) = V3)).
\end{verbatim}
Even using their more advanced set of strategies (auto-schedule for \eprover
and CASC mode for \vampire), the success rates of the ATPs are lower than that of the trained
MCTS$_{\mathit{combin}}$ on both the training and testing sets.
It is worth mentioning that the ATPs and our systems work in a very different
way on the synthesis tasks. In our system, there is a synthesis part guided by
our TNN and a checking part performed by a
deterministic normalization algorithm.
In contrast, the ATPs are searching for a proof by applying the rules of their
calculus. They essentially deduce intermediate lemmas (clauses) to
obtain the proof. As a comparison, the number of generated clauses by \eprover
on average on this dataset in 60 seconds is about three million.
One advantage of the ATP approach is that it might be
able to split the synthesis problem into smaller ones by finding an independent
part of the head normal form. The trade-off is that with more actions available,
smarter strategies are needed to reduce the search space. We believe that
combining both approaches, i.e. learning to guide searches on the ATP calculus,
would certainly lead to further gains.
Finally, our algorithm naturally provides a
synthesized witness, whereas it may be harder to extract such a witness from an
ATP proof. We can thus analyze the witnesses provided as solutions during
the final evaluation.
\paragraph{Examples}
The $\lambda$-abstraction $\lambda fxy.fyx$ gives a semantic description of the
$C$ combinator commonly used in functional programming.
The solution $S (S (K S) (S (K K) S)) (K K)$ proposed by our algorithm is
not equivalent to the solution $S (B B S) (K K)$ given by
Schönfinkel~\cite{schonfinkel1924bausteine}
with $B = S (K S) K$. Its normal form
$S (S (K (S (K S) K)) S) (K K)$ is different.
The witness synthesized by MCTS$_{\mathit{combin}}$ for
$\lambda xyz.\ x y (x y) (y (x y) (x y (y (x y)))) z$
is the largest among the solutions. This combinator is
$S (S (S (K (S (S (K S)))) K) S) (S (S (S (S K K))))$.
\paragraph{Analysis of the Solutions}
In Table~\ref{tab:combin_occ}, we observe patterns in the subterms of
the solutions by measuring how frequently they occur. This only gives us a very
rough understanding of what the TNN has learned since the decisions made
by the TNN are context-dependent.
The combinator $S$ occurs about twice as often as $K$. More interestingly,
the combinator $S K K$ occurs 585 times whereas the combinator $S K S$, which
has the same effect (i.e. the identity), occurs only 39 times. Having strong preferences
between equivalent choices is beneficial as it avoids duplicating searches.
However, how this preference was acquired is still to be determined.
The combinator $B$ appears in fourth position among the 40 combinators in
normal form of size four, showing its importance as a building block for
combinator synthesis.
\begin{table}[]
\centering\small
\begin{tabular}{lccccccc}
\toprule
Subterm & $S$ & $K$ & $S S$ & $S K$ & $K S$ & $S (K S)$ & $S K K$
\\
Occurrences & 11187 & 5114 & 1883 & 1082 & 710 & 709 & 585\\
\midrule
Subterm & $S S K$ & $K (S S)$ & $S (K (S S))$ & $S (S K K)$ & $S (S S)$ & $S S
(S K)$ & $S (S S K)$ \\
Occurrences & 370 & 305 & 305 & 273 & 245 & 204 & 183 \\
\midrule
Subterm & $K K$ & $S (K K)$ & $S (K S) K$ & $S (S (S K K))$ & $S (K S) S$ &
$S (S (K S) K)$ & \\
Occurrences & 173 & 165 & 153 & 144 &
138 & 135 & \\
\bottomrule
\end{tabular}
\caption{Number of occurrences of the 20 most frequent subterms
that are part of the 1584 combinator solutions of the 2200 combinator problems}
\label{tab:combin_occ}
\end{table}
\subsection{Diophantine Equations}
Experiments on Diophantine equations rely on an instance of the MCTS
algorithm given by the specification from Section~\ref{sec:dioph}
noted MCTS$_{\mathit{dioph}}$.
\paragraph{Training}
The evolution of the success rate during training is presented in
Figure~\ref{fig:dioph}. At the end of the training, the number of problems
solved at least once is 1986 out of 2000. However, the expectancy is
considerably lower, showing the same issue with the memorization ability of the
TNN as for the combinators. This might be solved by increasing the dimension of
the embeddings or by improving the network architecture. The number of examples
reaches 200,000 at generation 84, later than in the combinator experiments,
indicating that the synthesized polynomials are shorter. In general, the fact
that we obtain a much higher success rate in this experiment indicates that a
higher branching factor but shallower searches suit our algorithm better.
In the final evaluation, we use the weights of the TNN from generation 197.
\begin{figure}[]
\begin{tikzpicture}
\begin{axis}[
legend style={anchor=north east, at={(0.95,0.4)}},
width=\textwidth,
height=0.5*\textwidth,xmin=0, xmax=218,
ymin=0, ymax=2100,
cycle list name=rw
]
\addplot table[x=gen, y=sol] {dioph_graph};
\addplot table[x=gen, y=exp] {dioph_graph};
\legend{at least once, expectancy}
\end{axis}
\end{tikzpicture}
\caption{\label{fig:dioph} Number y of problems solved after generation x}
\end{figure}
\paragraph{Evaluation}
The results of the final evaluation presented in Table~\ref{tab:dioph} show
a drastic difference between the uniform strategy and the TNN-guided one.
Thus, this task produces many patterns that the TNN can
detect. This observation is reinforced by the fact that the uniform algorithm
performs on average 3 to 4 times more simulations in 60 seconds (269,193 vs
79,976) than the TNN-guided one.
To create a stronger competitor, we handcraft an evaluation function
reflecting our intuition about the problem. An obvious heuristic is to guide
the algorithm by how many of the elements of $[|0,15|]$ the current polynomial
correctly classifies as members or non-members of the targeted set. This number
is then
divided by 16 to give an evaluation function
between 0 and 1. The results are displayed in the heuristic row in
Table~\ref{tab:dioph}. Surprisingly, the heuristically guided search performed
worse than the uniform strategy mainly due to an overhead cost for computing
the evaluation, resulting in 70,233 simulations (on average) per MCTS call.
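The heuristic can be sketched as follows (illustrative Python;
\texttt{dioph\_set} computes $D(p)$ over $\mathbb{Z}/16\mathbb{Z}$ as in the
sketch of Section~\ref{sec:dioph}):
\begin{lstlisting}[language=Python]
def heuristic_value(poly, target, dioph_set):
    # fraction of k in [0,15] whose membership in the target set is already correct
    d = dioph_set(poly)
    correct = sum((k in d) == (k in target) for k in range(16))
    return correct / 16.0
\end{lstlisting}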
We do not know of any higher-order theorem prover with support for arithmetic.
Finding an encoding of the arithmetic operations and/or the higher-order
features, which makes the problems tractable for a targeted ATP, is
non-trivial and is not investigated in this paper. That is why we did not
compare our results with ATPs on Diophantine equations.
\begin{table}[]
\centering
\begin{tabular}{lcc}
\toprule
Strategy & Train (2000) & Test (200)\\
\midrule
uniform & \phantom{0}3.70 & \phantom{0}4.0 \\
heuristic & \phantom{0}3.05 & \phantom{0}0.5 \\
TNN-guided & 77.15 & 78.5\\
\bottomrule
\end{tabular}
\caption{Percentage of problems solved in 60 seconds by
MCTS$_{\mathit{Dioph}}$}\label{tab:dioph}
\end{table}
\paragraph{Example}
The solution for the set $\lbrace 0, 1, 3, 4, 5, 9, 11, 12, 13 \rbrace$ is the
largest among all solutions. The algorithm generated the polynomial
$y^2 + 12x^4 + 7k + 7k^2x^2y^2 + 7k^2x^2y^2z^2$.
\paragraph{Analysis of the Solutions}
\begin{table}[]
\centering\small
\begin{tabular}{lcccccccccc}
\toprule
Monomial & $7k$ & $7k^2x^2$ & $7k^2x^2y^2$ & $7k^2$ & $7kx^2$ & $7$ & $14$
&$7k^4$ & $7k^4x^2$ & $12$\\
Occurrences & 780 & 682 & 640 & 387 & 354 & 329 & 218 & 149 & 144 & 143\\
\midrule
Monomial & $8$ & $7k^3$ & $14k$ & $7k^2x^2y^2z^2$ & $4$ & $6$ &
$7k^4x^2y^2$
& $10$ & $10k$ & $2$ \\
Occurrences & 118 & 111 & 106 & 92 & 90 & 74 & 72 & 69 & 68 & 61\\
\bottomrule
\end{tabular}
\caption{Number of occurrences of the 20 most frequent monomials
that are part of the 1700 polynomial solutions of the 2200 problems on
Diophantine equations}
\label{tab:dioph_freq}
\end{table}
The most frequent monomials appearing as part of a polynomial solution are
shown in Table~\ref{tab:dioph_freq}. The coefficients of those monomials are
either even or 7. Only the monomials with coefficient 7 contain
an existential variable $x$,$y$ or $z$. The exponent of an existential variable
is always 2. Following these three patterns exactly limits the search space
but might make some problems unsolvable.
The two previous examples contain monomials outside of these patterns.
This shows that the algorithm is able to deviate from them when necessary.
\section{Verification}
A final check of the correctness of our algorithm can be
made by verifying the solutions produced during evaluation.
The verification consists of producing HOL4 theorems stating that each
witness $w$ has the desired property $\mathit{f}(w) = c$. The function $f$ is
specified by the task and the image $c$ is given in the problem.
In the following, we present how to construct a verification procedure for an
arbitrary problem/solution pair for the two tasks.
\subsection{Combinators}
First, we express the properties of combinators as HOL4 formulas.
There exists an implementation of typed combinators in HOL4 but they cannot be
used to represent untyped combinators. For instance, the combinator $S
(S K K) (S K K)$ constructed with HOL4 constants is not well-typed.
Therefore, we use the apply operator $apply$ of type $\alpha \rightarrow \alpha
\rightarrow \alpha$ and the type $\alpha$ for representing the type of
combinators. We write $x.y$ for the term $\mathit{apply}(x,y)$.
Definitions of the free variables $s$ and $k$ representing the
combinators $S$ and $K$ are added as a set of assumptions to every problem.
From a witness $w$ and a head normal form $h=\lambda xyz.h'(x,y,z)$,
we prove the theorem:
\[\lbrace \forall xyz.\ ((s.x).y).z = (x.z).(y.z),\ \ \forall
xy.\ (k.x).y = x \rbrace\ \vdash\ \forall x y z.\ ((w.x).y).z = h'(x,y,z)\]
A call to the tactic \texttt{ASM\_REWRITE\_TAC []} completes the proof.
The function \texttt{COMBIN\_PROVE}, following the described process,
was used to verify the correctness of the 130 solutions found by
MCTS$_\mathit{combin}$ on the test set.
\subsection{Diophantine Equations}
Since problems on Diophantine equations are about sets of natural numbers
described by using arithmetic operations, we rely on \holfour constants and
functions from the theories \texttt{num}, \texttt{arithmetic} and
\texttt{pred\_set} to express that the solution satisfies the problem.
We verify that $D(w)$ (the Diophantine set described by the polynomial $w$)
is the enumeration $\lbrace a_1,a_2,\ldots,a_i \rbrace$. The equality between
the
two sets can be expressed in \holfour as:
\[\lbrace k\ |\ k < 16 \wedge
\exists xyz.\ w(k,x_{16},y_{16},z_{16})
\ \mathit{mod}\ 16 =
0 \rbrace = \lbrace a_1,a_2,\ldots,a_i
\rbrace\]
For any variable $v$, the shorthand $v_{16}$ stands for
$v\ \mathit{mod}\ 16$.
All natural numbers are expressed using the standard HOL4
natural numbers. All variables have the type of HOL4 natural numbers. That
is why, in order to reason modulo 16, each existential variable $v$ is replaced
by $v\ \mathit{mod}\ 16$ and the parameter $k$ is bounded by $16$.
\paragraph{}
The proof starts by considering two predicates $P$ and $Q$ defined by:
\begin{align*}
P &=_{\mathit{def}}
\lambda k.\ (\exists xyz.\ w(k,x_{16},y_{16},z_{16}) \ \mathit{mod}\ 16 = 0),\ \
Q =_{\mathit{def}}
\lambda k.\ (k = a_1 \vee k = a_2 \vee ... \vee k = a_i)
\end{align*}
To verify that these predicates are equivalent on a particular element $k \in
[|0,15|]$, we distinguish between two cases. Either the Diophantine equation
has a solution and both predicates are true, or it does
not admit a solution and both predicates are false. In both cases, ground
equations are proven using \texttt{EQT\_ELIM o EVAL} where \texttt{EVAL}
is a rule (a function that returns a theorem) for evaluating ground expressions.
\paragraph{Positive case}
Let us assume that $k\in \lbrace a_1,a_2,\ldots,a_i \rbrace$.
To prove $P\ k$, we need to prove that $\exists xyz. w(k,x_{16},y_{16},z_{16})
\ \mathit{mod}\ 16 = 0$. Therefore, we search for the triple
$(a,b,c) \in [|0,15|]^3$ for which the ground equation holds. We call the
resulting theorem $\mathit{thm\_abc}$.
The goal $P\ k$ can then be closed by applying the tactic:
\begin{lstlisting}[language=SML]
EXISTS_TAC a THEN EXISTS_TAC b THEN EXISTS_TAC c THEN ACCEPT_TAC thm_abc
\end{lstlisting}
$Q\ k$ is proven by beta-reduction and a call to a decision procedure for
ground arithmetic:
\begin{lstlisting}[language=SML]
CONV_TAC (TOP_DEPTH_CONV BETA_CONV) THEN DECIDE_TAC
\end{lstlisting}
\vspace{-5mm}
\paragraph{Negative case}
Let us now assume that $k \not \in \lbrace a_1,a_2,\ldots,a_i \rbrace$.
To prove $\neg(P\ k)$, we need to prove that
$\forall xyz. w(k,x_{16},y_{16},z_{16}) \ \mathit{mod}\ 16 \not= 0$.
We deduce the following lemma for a predicate $R$ and a variable $v$:
$R\ 0 \wedge R\ 1 \wedge \ldots \wedge R\ 15\ \vdash\
\forall v.\ R\ v_{16}$.
Using this lemma, we can reconstruct the universally quantified theorems
from the proofs of all possible ground instantiations:
\begin{align*}
&\vdash\ w(k,0..15,0..15,0..15) \ \mathit{mod}\ 16 \not= 0\\
&\vdash\ \forall z.\ w(k,0..15,0..15,z_{16}) \ \mathit{mod}\ 16 \not= 0\\
&\vdash\ \forall yz.\ w(k,0..15,y_{16},z_{16}) \ \mathit{mod}\ 16 \not= 0\\
&\vdash\ \forall xyz.\ w(k,x_{16},y_{16},z_{16}) \ \mathit{mod}\ 16 \not= 0
\end{align*}
The notation $\vdash t[0..15]$ is a shorthand for $\vdash t[0] \wedge t[1]
\wedge \ldots \wedge t[15]$.
The proof of $\neg(Q\ k)$ relies on the same tactic as for proving $Q\ k$ in
the previous case.
\paragraph{}
By combining the positive and negative cases with some simple propositional
reasoning, we obtain equivalences for
all $k \in [|0,15|]$.
From these equivalences, we prove the following lemma about the bounded sets:
\[(P\ 0 \Leftrightarrow Q\ 0) \wedge \ldots
\wedge (P\ 15 \Leftrightarrow Q\ 15)\ \vdash \
\lbrace k\ |\ k < 16 \wedge P\ k \rbrace =
\lbrace k\ |\ k < 16 \wedge Q\ k \rbrace \]
The final step is to convert the set defined by $Q$ into
an enumeration by proving the lemma:
\[ \vdash\ \lbrace k\ |\ k < 16 \wedge (\lambda k.\ (k = a_1 \vee k = a_2 \vee
... \vee k = a_i))\ k \rbrace = \lbrace a_1,a_2,\ldots,a_i \rbrace\]
The function \texttt{DIOPH\_PROVE} encompassing this process was used to
verify the correctness of the 157 solutions found by
MCTS$_\mathit{dioph}$ on the test set.
\section{Related Work}
The related work can be classified into three categories:
automation for solving synthesis tasks,
machine learning guidance inside ATPs, and learning-assisted reasoning in ITPs.
We present the most promising projects in each category
separately. Their description shows how they compare to our approach and
influence our methods.
First, synthesizing SK-combinators using machine learning from their defining
property has been attempted in~\cite{DBLP:conf/cade/Fuchs97}. There, they use
a genetic algorithm
to produce
combinators with a fitness function that encompasses various heuristics.
By comparison, we essentially try to learn this fitness function from
previous search attempts.
The bracket abstraction algorithm developed
by Schönfinkel~\cite{schonfinkel1924bausteine} can be
used to eliminate a variable and, by abstracting all variables, achieves
SK-combinator synthesis.
An improvement of the abstraction algorithm using
families of combinators is proposed in \cite{DBLP:conf/flops/Kiselyov18}.
A more recent work attempts to solve combinatory logic synthesis problems
using an SMT solver~\cite{DBLP:journals/corr/abs-1908-09481}.
Matiyasevich proved~\cite{10009422455} that every recursively enumerable set is
a Diophantine set.
It is theoretically possible although not practical to extract an algorithm
from his proof.
The closest attempt at synthesizing Diophantine equations does not rely on
statistical learning and is concerned with the synthesis of polynomials over
finite Galois fields~\cite{DBLP:conf/iccad/JabirPM06}.
If we consider functions to be programs, there is a whole domain of research
dedicated to program synthesis. Among these approaches, the
one of~\cite{DBLP:journals/corr/abs-1911-10244}
relies on deep reinforcement learning.
Second, the work that comes closest to achieving our end goal of a competitive
learning-guided theorem prover is described
in~\cite{DBLP:conf/nips/KaliszykUMO18}. There, a guided MCTS algorithm is
trained with reinforcement learning. Its objective is to prove first-order
formulas from Mizar~\cite{mizar10} problems using a connection-style
search~\cite{DBLP:journals/jsc/OttenB03}.
The experiments show that gradual improvement stops on the test set after the
fifth generation. This is probably for two reasons: the small number
of problems relative to the diversity of the domains considered and
the inherent limitations of the feature-based predictor~\cite{xgboost} they
rely on.
Another approach is to modify state-of-the-art ATPs by introducing
machine-learned heuristics to influence important choice points in their search
algorithms.
A major project is the development of
\textsf{ENIGMA}~\cite{DBLP:conf/mkm/JakubuvU17,
DBLP:conf/cade/ChvalovskyJ0U19,DBLP:journals/corr/abs-1904-01677}
which guides given clause selection in \eprover~\cite{eprover}. There, a fast
machine learning model is trained to evaluate clauses from their contribution
to previous proofs. A significant slowdown detrimental to
the success rate of \eprover occurs when trying to replace the fast
predictors by deep neural networks~\cite{DBLP:conf/lpar/LoosISK17}.
Third, our work aims to ultimately bring more automation to ITP users.
Hammers~\cite{hammers4qed} rely on machine learning guided
premise selection, translation to first-order and calls to external ATPs
to provide powerful push-button automation. An instance of such a system
is implemented in \holfour~\cite{tgck-cpp15}. Its performance on induction
problems is limited by the encoding of the translation.
To solve this issue, the tactical prover
\tactictoe~\cite{tgckju-lpar17,gkukn-toappear-jar18}, also
implemented in \holfour, learns to apply tactics extracted from existing proof
scripts. It can perform induction on variables when an induction tactic has
been defined for the particular inductive type. Yet, it is currently limited by
its inability to synthesize terms as arguments of tactics.
\section{Conclusion and Future Work}
Our framework exhibits good performance on the two synthesis tasks, exceeding
the performance of state-of-the-art ATPs on combinator synthesis by solving $65\%$ of the
test problems. Its success rate reaches $78.5\%$ on Diophantine equations.
Our proposed approach showcases how self-learning can solve a task by gathering
examples from exploratory searches. Compared to supervised learning, this
self-learning approach does not require the solutions of the problem to be
known in advance.
In the future, we intend to test this reinforcement learning framework on
many more tasks and test the possibility of joint
training~\cite{DBLP:journals/corr/abs-1902-04422}.
One domain to explore consists of other important tasks on
higher-order terms such as beta-reduction or higher-order unification.
Another interesting development is to apply the
ideas of this paper to the \tactictoe framework.
A direct application would
give the tactical prover the ability to synthesize the terms appearing as
arguments of tactics.
In general, our framework is able to construct programs and therefore
could be adapted to perform tactic synthesis.
\bibliographystyle{plain}
\section{Modern earthwork compaction}\label{sec:intro}
The first modern earthwork compaction rollers designed for continuous compaction control (CCC) were used in practice starting in the 1970s in the European community. CCC is a method of documenting compaction and is used to achieve homogeneous compaction in a minimum time \citep{Thur:Sand:00}. Rudimentary intelligent compaction (IC) technology was first available in the late 1990s. IC is an automated system that adjusts roller operation parameters for optimal compaction based on CCC data \citep{Sche:etal:07}. IC is a development aimed at improving quality assurance (QA) of the compaction process. Utilization of spatial uncertainty in modeling and estimation of this data will lead to improved IC and QA.
Each roller manufacturer has developed a proprietary measurement of soil stiffness used for CCC. This measure of stiffness, coupled with GPS coordinates of the roller when the measurement is taken is termed the roller measurement value (RMV). Current use of RMVs is the identification of potential areas of soft, or weak, spots. Acceptance of these areas is based on the weak spots meeting prespecified criteria \citep{Moon:etal:10}.
\subsection{Roller Measurement Values (RMVs)}
A typical smooth drum roller has a diameter of approximately 1m, is approximately 2m long, and is outfitted with a sensor and a GPS system. The sensor and the GPS system record at set, and not necessarily identical, frequencies. The RMV frequency is the minimum of these two frequencies. Each RMV reflects an aggregated volume of soil measuring a bulb extending to a depth of approximately 1m with a diameter of 0.5--0.6m \citep{Faca:09}. Typical construction practice is to compact in segments of road 10--15m wide and 50--100m long. See Figure \ref{fig:roller} for a representative roller manufactured by Ammann.
\begin{figure}[h!]
\centering
\includegraphics[height=6cm]{ammann2.jpg}
\caption{Ammann roller at work.}
\label{fig:roller}
\end{figure}
\subsection{Minnesota Construction Site} \label{sec:data}
As part of the NCHRP 21-09 project of the Transportation Research Board of The National Academies, Dr. Mike Mooney (Colorado School of Mines, Golden, CO) and his team collected RMV data from a test bed using a smooth, vibrating drum roller manufactured by Ammann \citep{Moon:etal:10}. The test bed lies along a stretch of road adjacent to Interstate 94 (\ang{45;15;45},\,\ang{-93;42;37}). The test bed is approximately 300 meters in length by 15 meters in width and divided into two cells, labeled 27 and 28. Measurements from the roller of the subsurface, subgrade, and base layers of the new road under construction were recorded. Figure \ref{fig:data} is a plot of the data from both cells and all three layers.
\begin{figure}[h!]
\centering
\includegraphics[height=6cm]{OriginalData.png}
\caption{Plot of subsurface (top), subgrade (middle), and base (bottom) data from cells 27 (right) and 28 (left) from the test bed in Albertville, MN, USA. $x$-direction is the direction of driving.}
\label{fig:data}
\end{figure}
Data from each layer was collected separately for each of the two cells, except for the subsurface layer where the roller drove continuously over both cells. Analysis was performed on each cell separately as the standard construction procedure focuses on one cell at a time. The subsurface layer was thus split into two cells at the cell boundary.
The subsurface layer is the existing material (clay) at the construction site. The subgrade layers consist of moisture conditioned clay and the base layers consist of a granular composite material \citep{Moon:etal:10}. The roller traversed the construction site in five to seven lanes, with the coordinates calculated by on-board GPS. Thus, the locations of the observation vector for each layer are unique.
\section{Quality Assurance and Intelligent Compaction}
The modeling and backfitting estimation routines applied to the Minnesota RMV dataset of \cite{Heer:Furr:13} can be incorporated into a more comprehensive program that improves quality assurance of the compaction process as well as intelligent compaction. \cite{Holm:etal:11} developed scale space multiresolution analysis methods for image processing using a Bayesian framework. This methodology is demonstrated using several test images and is applied to climate change prediction fields. The method can also be used to analyze images of RMV estimates to identify weak, or soft, spots and large variations in the compaction area, i.e. QA and IC.
\subsection{Backfitting of RMVs}
The backfitting algorithm has been employed on a wide range of additive models, e.g. \cite{Furr:Sain:09}, \cite{Buja:etal:89}, \cite{Brei:Frie:85}. The classical model for the backfitting algorithm is $\y = \X\bbeta+\balpha+\bvarepsilon$, where $\var(\balpha) = \bSigma(\btheta)$ and $\var(\bvarepsilon) = \sigma^2\I$, $\balpha$ independent of $\bvarepsilon$. If the parameters $\btheta$ and $\sigma$ are known, the backfitting algorithm then iteratively estimates the fixed effects using generalized least squares and the spatial effects using spatial smoothing:
\begin{align*}
\widehat{\bbeta} &= \left(\X^T\bSigma_\y^{-1}\X\right)^{-1}\X^T\bSigma_\y^{-1}\y \;\text{ (generalized least-squares estimator),} \\ \widehat{\balpha} &= \bSigma(\btheta)\bSigma_\y^{-1}\left(\y - \X\widehat{\bbeta}\right) \;\text{ (spatial smoothing)},
\end{align*}
where $\bSigma_\y = \var(\y) = \bSigma(\btheta) + \sigma^2\I$, the covariance matrix of the observations. The spatial backfitting algorithm produces new iterative estimates until they converge, i.e. the estimates no longer differ with each iteration, up to a small number.
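A minimal numerical sketch of this iteration, with $\bSigma(\btheta)$ and $\sigma^2$ assumed known, alternates least squares on the partial residuals with spatial smoothing; its fixed point coincides with the two estimators displayed above (the Python code below is illustrative only).
\begin{lstlisting}[language=Python]
import numpy as np

def backfit(y, X, Sigma, sigma2, tol=1e-8, max_iter=500):
    # y: observations, X: fixed-effects design, Sigma: spatial covariance of alpha
    n = len(y)
    S = Sigma @ np.linalg.inv(Sigma + sigma2 * np.eye(n))   # spatial smoother
    beta = np.zeros(X.shape[1])
    alpha = np.zeros(n)
    for _ in range(max_iter):
        beta_new = np.linalg.lstsq(X, y - alpha, rcond=None)[0]  # fixed effects
        alpha = S @ (y - X @ beta_new)                           # spatial effects
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    return beta, alpha
\end{lstlisting}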
\cite{Heer:Furr:13} developed a sequential, spatial mixed-effects model and sequential backfitting methodology for complex spatial structures and apply that methodology to the dataset detailed in Section \ref{sec:data}. To aid in computational time, a spherical covariance structure is assumed as this produces sparse matrices. The model includes a state-space formulation to handle the unique data observation locations. The model assumed is of a spatial ``autoregressive'' type:
\begin{align*}
\y_1 &= \H_1\X_1\bbeta_1 + \H_1\balpha_1 + \bvarepsilon_1 \\
\y_2 &= \H_2\X_2\bbeta_2 + c\H_2\balpha_1 + \H_2\balpha_2 + \bvarepsilon_2 \\
\y_3 &= \H_3\X_3\bbeta_3 + c^2\H_3\balpha_1 + c\H_3\balpha_2 + \H_3\balpha_3 + \bvarepsilon_3,
\end{align*}
where $c$ represents the level of previously compacted layers the roller ``sees'' while compacting the current topmost layer of material, $\balpha_1, \balpha_2, \balpha_3$ are independent Gaussian random fields, and $\bvarepsilon_1, \bvarepsilon_2, \bvarepsilon_3$ are uncorrelated white noise processes, mutually independent of the $\balpha_i$s. $\X_1, \X_2, \X_3$ are full rank matrices of covariates consisting of an intercept, centered and scaled $(x, y)$-coordinates of the roller measurement, centered and scaled $x$-coordinates of the second and third power, and the driving direction of the roller (1 for right-to-left, 0 for left-to-right). Only higher order powers of the $x$-coordinate were used due to the scale difference in the two directions. Centering and scaling of the coordinates addresses the range anisotropy in RMVs, see \citet{Faca:etal:10} for discussion of anisotropy in RMVs. $\H_1, \H_2, \H_3$ are matrices mapping coordinates of the process level to observed locations.
The process level estimates of the compaction process were obtained from the Sequential Backfitting Algorithm \citep{Heer:Furr:13}. These estimates can then be used as ``images'' of the compaction site for each layer of compaction and each cell of the site.
In practice, there is a site-specific threshold value of compaction required for the layer to be deemed sufficiently compacted. This threshold value can depend on the current material of compaction. For this analysis, a threshold value of 20 was used for all cells and layers for demonstration. The threshold value was subtracted from all images such that values less than zero represent areas that are too soft. Theoretically, the images are invariant to an additive constant such as this thresholding procedure, i.e. the thresholding does not change the range of the estimates, so the images are produced on the same color scale irrespective of the threshold value. The thresholding was done for the practical advantage of more easily identifying soft areas.
\subsection{Multiresolution Scale Space Analysis}
To investigate significant features of an image at multiple scales, \cite{Holm:etal:11} developed a Bayesian framework of image processing at multiple resolutions. This multiresolution scale space analysis identifies credible regions of the image that correspond to positive and negative regions. Multiresolution scale space analysis is a method of simultaneously smoothing an input, such as data or an image, at several levels. Each smooth of the input provides a different scale of information.
Multiresolution scale space analysis was applied to the six images obtained from the backfitting algorithm to identify which features in the images are real features (in the Bayesian confidence region sense) and which are artifacts of random variation. The multiresolution scale space analysis uses a Bayesian framework. Each image created from the backfitting algorithm consists of the process level estimates: $\widehat{\y}_t = \X_t\widehat{\bbeta}_t + \sum_{k=1}^t c^{t-k}\widehat{\balpha}_k$, $t = 1, 2, 3$.
500 samples were then drawn from the ``posterior'' of $\bbeta_1$, $\bbeta_2$, $\bbeta_3$, $\balpha_1$, $\balpha_2$, and $\balpha_3$, where
\begin{align*}
\bbeta_t &\sim \cN\left(\widehat{\bbeta}_t, \left(\X_t\T\H_t\T\bSigma_{y,t}^{-1}\H_t\X_t\right)^{-1}\right) \\
\balpha_t &\sim \cN\left(\widehat{\balpha}_t, \bSigma_t\H_t\T\bSigma_{y,t}^{-1} \left(\I - \H_t\X_t\left(\X_t\T\H_t\T\bSigma_{y,t}^{-1}\H_t\X_t\right)^{-1}\X_t\T\H_t\T\bSigma_{y,t}^{-1}\right) \H_t\bSigma_t\right),
\end{align*}
and $\bSigma_{y,t} = \H_t\bSigma_t\H_t\T + \sigma_t^2\I$.
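An illustrative Python sketch of one such draw, mirroring the covariance expressions above (all names are illustrative), is:
\begin{lstlisting}[language=Python]
import numpy as np

def sample_layer(beta_hat, alpha_hat, H, X, Sigma, sigma2, rng=None):
    # one draw of (beta_t, alpha_t) given the backfitting estimates and model matrices
    rng = np.random.default_rng() if rng is None else rng
    n = H.shape[0]
    Sy_inv = np.linalg.inv(H @ Sigma @ H.T + sigma2 * np.eye(n))
    A = X.T @ H.T @ Sy_inv                      # shorthand used in both covariances
    cov_beta = np.linalg.inv(A @ H @ X)
    M = np.eye(n) - H @ X @ cov_beta @ A
    cov_alpha = Sigma @ H.T @ Sy_inv @ M @ H @ Sigma
    beta = rng.multivariate_normal(beta_hat, cov_beta)
    alpha = rng.multivariate_normal(alpha_hat, cov_alpha)
    return beta, alpha
\end{lstlisting}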
Figure \ref{fig:cell27} depicts the results of this multiresolution scale space analysis for the subsurface, subgrade, and base layers of cell 27. Figure \ref{fig:cell28} depicts the analysis for cell 28. The set of smoothing levels used was [$8, 16, 1000, \infty$]. The smallest smoothing level corresponds to a small scale structure on the order of 5m. The largest smoothing levels correspond to an overall mean and large scale mean structure on the order of 75m. The intermediate smoothing level is smoothing of an intermediate order.
\begin{figure}[h!]
\centering
\includegraphics[height=8cm]{cell27.png}
\caption{Plot, on a scaled coordinate system, of estimated subsurface (left), subgrade (middle), and base (right) for cell 27 from the test bed in Albertville, MN, USA (top). Credibility plots from a multiresolution scale space analysis are depicted in the bottom three plots for different values of $\lambda$. Red corresponds to softer RMVs. All credibility plots for $\lambda = \infty$ are solid blue, indicating an overall sufficient compaction has been attained with the given threshold.}
\label{fig:cell27}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[height=8cm]{cell28.png}
\caption{Plot, on a scaled coordinate system, of estimated subsurface (left), subgrade (middle), and base (right) for cell 28 from the test bed in Albertville, MN, USA (top). Credibility plots from a multiresolution scale space analysis are depicted in the bottom three plots for different values of $\lambda$. Red corresponds to softer RMVs. All credibility plots for $\lambda = \infty$ are solid blue, indicating an overall sufficient compaction has been attained with the given threshold.}
\label{fig:cell28}
\end{figure}
\section{Discussion}
The blue color from the images generated by the multiresolution scale space analysis indicates credible regions of sufficient compaction. The red color indicates credible regions where a sufficient level of compaction is suspect. The $\lambda = 8, 16$ images are most informative for identifying areas requiring more compaction as they identify regions on the same scale as the roller. The larger $\lambda$ values provide a more general idea of the compaction of the entire cell by depicting a gradient of hard to soft values.
Cell 28 is consistently softer than cell 27 through all layers of compaction. This
is an example of heterogeneity in the pre-existing material at the construction site, as represented by the subsurface layer. The features of the subsurface layer are inherited by subsequent layers of compaction. The red spots on the generated images, especially at the $\lambda = 8$ smoothing level, can be seen in all three layers. This is an expected result as each layer of compaction is on the order of 20cm thick and the roller measures to a depth on the order of 1m \citep{Faca:09}.
Cell 27 displays a gradient from more compact to less compact material from top to bottom in the subgrade layer. This gradient is still evident in the base layer. For cell 28, a gradient from right to left in the subsurface layer is evident. This gradient changes to a high-to-low-to-high gradient moving left to right in the subgrade layer and remains through the base layer.
The figures also detail credible regions of heterogeneity of the compaction region. These areas could have been compacted more to achieve a more homogeneous compaction and improve QA of the construction. The small RMVs found in the base layer of cell 27 could also be identified while compaction is in progress and roller parameters altered to better compact that region of the cell, i.e. IC.
An image of the estimated process level that is all red and orange, i.e. everywhere below the threshold value, would return a solid red credibility plot at the highest smoothing levels.
The detailed methodology of sequential, spatial backfitting of RMVs coupled with multiresolution scale space analysis of the resultant estimate images can be utilized to improve IC. By implementing the sequential, spatial mixed-effects model, the spatial uncertainty in RMV data is used to generate estimates of the true compaction level. The use of spherical covariance matrices speeds computation time of the estimation and the resultant images can be produced at a resolution that provides speed in computation of the multiresolution scale space analysis step. Implementation of such a scheme is time effective and could improve the rudimentary IC currently utilized.
The sequential, spatial mixed-effects model uses spatial random processes in its additive decomposition. Splines can also replace these random processes. The estimation of a spline is mathematically equivalent to the universal kriging done in this paper, as detailed in \citet{Heer:Furr:13}. The literature on splines is extensive and computational feasibility can be maintained; see, e.g., \cite{Wahb:90}, \cite{Eile:etal:96}, \cite{Marx:Eile:98}, \cite{Eile:Marx:04}.
\bibliographystyle{mywiley}
\section{Introduction}
In \cite{HunekeWatanabeUpperBoundOfMultiplicity}, Huneke and Watanabe proved that, if $R$ is a noetherian, $F$-pure local ring of dimension $d$ and embedding dimension $v$,
then $e(R)\leq \binom{v}{d}$ where $e(R)$ denotes the Hilbert-Samuel multiplicity of $R$. The following was left as an open question in \cite[Remark 3.4]{HunekeWatanabeUpperBoundOfMultiplicity}:
\begin{qu}[Huneke-Watanabe]
\label{Huneke-Watanabe question}
Let $R$ be a noetherian $F$-injective local ring with dimension $d$ and embedding dimension $v$. Is it true that $e(R)\leq \binom{v}{d}$?
\end{qu}
In this note, we answer this question in the affirmative when $R$ is generalized Cohen-Macaulay.
\begin{theorem}
\label{thm: bound in GCM case}
Let $R$ be a $d$-dimensional noetherian $F$-injective generalized Cohen-Macaulay local ring of embedding dimension $v$. Then
\[e(R)\leq \binom{v}{d}.\]
\end{theorem}
Using reduction mod $p$, one can prove an analogous result for generalized Cohen-Macaulay rings of dense $F$-injective type in characteristic 0, {\it cf.} Theorem \ref{Theorem: the bound in characteristic zero}.
We also generalize these results to Cohen-Macaulay, non-$F$-injective rings as follows.
\begin{defi}[cf.~section 4 in \cite{LyubeznikFModulesApplicationsToLocalCohomology}]
Let $A$ be a commutative ring and let $H$ be an $A$-module with Frobenius map $\theta : H \rightarrow H$ (i.e., an additive map
such that $\theta(a h)=a^p \theta (h)$ for all $a\in A$ and $h\in H$). Write $\Nil H=\{ h\in H \,|\, \theta^e h = 0 \text{ for some } e\geq 0 \}$.
The \emph{Hartshorne-Speiser-Lyubeznik number} (henceforth \emph{abbreviated HSL number}) is defined as
$$\inf \{e\geq 0 \,|\, \theta^e \Nil H = 0\}. $$
The HSL number of a local, Cohen-Macaulay ring $(R, \mathfrak{m})$ is defined as the HSL number of the top local cohomology module $\HH^{\dim R}_\mathfrak{m} (R)$
with its natural Frobenius map.
\end{defi}
For artinian modules over a quotient of a regular ring, HSL numbers are finite (\cite[Proposition 4.4]{LyubeznikFModulesApplicationsToLocalCohomology}).
\bigskip
Without the $F$-injectivity assumption, we have the following upper bound in the Cohen-Macaulay case which involves the HSL number of $R$.
\begin{theorem}[Theorem \ref{Theorem: bounds with HSL numbers}]
Assume that $(R,\fm)$ is a reduced, Cohen-Macaulay noetherian local ring of dimension $d$ and embedding dimension $v$.
Let $\eta$ be the HSL number of $R$ and write $Q=p^\eta$.
Then
\[e(R)\leq Q^{v-d}\binom{v}{d}.\]
\end{theorem}
This bound is asymptotically sharp as shown in Remark \ref{remark: asymptotically sharp}.
\section{Bounds on $F$-injective rings}
For each commutative noetherian ring $R$, let $R^o$ denote the set of elements of $R$ that are not contained in any minimal prime ideal of $R$.
\begin{remark}
\label{rem: nonzerodivisor}
If $R$ is a reduced noetherian ring, then each $c\in R^o$ is a non-zero-divisor.
\end{remark}
Given any local ring $(R,\fm)$, we can pass to $S=R[x]_{\fm R[x]}$ which admits an infinite residue field: this does not affect the multiplicity, dimension, embedding dimension and
Cohen-Macaulayness (cf.~\cite[Lemma 8.4.2]{HunekeSwansonIntegralClosure}). In addition, since $S$ is a faithfully flat extension of $R$,
$\HH^i_{\fm S} (S) = \HH^i_{\fm} (R) \otimes_R S$ and, if $\phi_i : \HH^i_{\fm} (R) \rightarrow \HH^i_{\fm} (R)$ is the natural Frobenius map
induced by the Frobenius map $r\mapsto r^p$ on $R$, then the natural Frobenius map on $\HH^i_{\fm S} (S)$
takes an element $a \otimes x^\alpha$ to $\phi_i(a) \otimes x^{\alpha p}$. Therefore, passing to $S$ preserves HSL numbers (and hence also $F$-injectivity). Thus, for the purpose of seeking an upper bound on the multiplicity, we may assume that all local rings $(R,\fm)$ considered below have infinite residue fields; consequently, $\fm$ admits a minimal reduction generated by $\dim R$ elements (cf.~\cite[Proposition 8.3.7]{HunekeSwansonIntegralClosure}).
We begin with a Skoda-type theorem for $F$-injective rings which may be viewed as a generalization of \cite[Theorem 3.2]{HunekeWatanabeUpperBoundOfMultiplicity}.
\begin{thm}
\label{Theorem: power of m in Frobenius closure}
Let $(R,\fm)$ be a commutative noetherian ring of characteristic $p$ and let $\fa$ be an ideal that can be generated by $\ell$ elements. Assume that each $c\in R^o$ is a non-zero-divisor. Then
\[\overline{\fa^{\ell+1}}\subseteq \fa^F,\]
where $\overline{\fa^{\ell+1}}$ is the integral closure of $\fa^{\ell+1}$ and $\fa^F$ the Frobenius closure of $\fa$.
\end{thm}
\begin{proof}
For each $x\in \overline{\fa^{\ell+1}}$ pick $c\in R^o$ such that for $N\gg 1$,
$c x^N \in \fa^{(\ell+1)N}$ (\cite[Corollary 6.8.12]{HunekeSwansonIntegralClosure}). Note that $c$ is a non-zero-divisor by our assumptions. We have
$c x^N \in c(\fa^{(\ell+1)N} : c)\subseteq cR \cap \fa^{(\ell+1)N}$.
An application of the Artin-Rees Lemma gives a $k\geq 1$ such that
$c x^N \in c \fa^{(\ell+1)N-k}$ for all large $N$, and so
$ x^N \in \fa^{(\ell+1)N-k}$ for all large $N$.
For any large enough $N=p^e$ we have
$x^{p^e}\in \fa^{[p^e]}$; that is, $x$, and hence all of $\overline{\fa^{\ell+1}}$, lies in the Frobenius closure of $\fa$.
\end{proof}
\begin{cor}
\label{Corollary: multiplicity bound in F-injective rings}
Let $(R,\fm)$ be a $d$-dimensional noetherian local ring of characteristic $p$. Assume that $\fm$ admits a minimal reduction $J$. Then
\begin{enumerate}
\item[(a)] $\fm^{d+1} \subseteq \overline{\fm^{d+1}} = \overline{J^{d+1}} \subseteq J^{F}$, and
\item[(b)] $e(R)\leq \binom{v}{d} + \ell(J^{F}/J)$.
\end{enumerate}
\end{cor}
\begin{proof}
Since $\fm^{d+1} \subseteq \overline{\fm^{d+1}} = \overline{J^{d+1}}$, (a) follows from Theorem \ref{Theorem: power of m in Frobenius closure}.
For part (b), since $\overline{J^{d+1}}\subseteq J^F$ and $J$ is generated by $d$ elements, we have $\ell(R/J^{F})\leq \binom{v}{d}$ (as in the proof of
\cite[Theorem 3.1]{HunekeWatanabeUpperBoundOfMultiplicity}). Then
\[e(R)\leq \ell (R/J)= \ell(R/J^{F})+\ell(J^{F}/J)\leq \binom{v}{d} + \ell(J^{F}/J).\]
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm: bound in GCM case}]
Let $\hat{R}$ denote the completion of $R$. Then $R$ is $F$-injective and generalized Cohen-Macaulay if and only if $\hat{R}$ is so, and $e(R)=e(\hat{R})$. Hence we may assume that $R$ is complete. Since $R$ is $F$-injective, it is reduced (\cite[Remark 2.6]{SchwedeZhangBertiniTheorems}) and hence each $c\in R^o$ is a non-zero-divisor by Remark \ref{rem: nonzerodivisor}. It is proved in \cite[Theorem 1.1]{MaBuchsbaum} that a generalized Cohen-Macaulay local ring is $F$-injective if and only if every parameter ideal is Frobenius closed. Let $J$ denote a minimal reduction of $\fm$; since $J$ is a parameter ideal, $J^F=J$.
Our theorem follows immediately from Corollary \ref{Corollary: multiplicity bound in F-injective rings}.
\end{proof}
\begin{comment}
\begin{thm}
\label{minimal reduction by filter regular seqeunce}
Let $(R,\fm)$ be any local ring. There exists a minimal reduction $C\subset \fm$ of $\fm$
which is generated by a filter regular sequence.
\end{thm}
\begin{proof}
We adapt the proof of \cite[Theorem 1]{NorthcottRees1954} as follows.
Let $\Sigma$ be the set of all ideals of the form $C^\prime +\fm^2$ where
$C^\prime$ is a reduction of $\fm$. For each such ideal, we may view its image modulo $\fm^2$
as a $R/\fm$-vector subspace of $\fm/\fm^2$, and we can chose a $C + \fm^2\in \Sigma$
with minimal dimension. The proof of \cite[Theorem 1]{NorthcottRees1954} shows that
if we now choose $c_1, \dots, c_d\in C$ whose images in $C + \fm^2/\fm^2$ form a $R/\fm$-basis, then
the ideal generated by $c_1, \dots, c_d$ is a minimal reduction of $\fm$.
We now show that $c_1, \dots, c_d$ can be chosen as above, and in addition to form a
filter regular sequence. Let $0\leq i < d$ and assume we have chosen a filter sequence
$c_1, \dots, c_i\in C$ whose images $(C + \fm^2)/\fm^2$ are $R/\fm$-linearly independent.
If $C\subseteq ((c_1, \dots, c_i)R + \fm^2) \bigcup
\left( \bigcup \Ass (c_1, \dots, c_i)R \setminus \{\fm\} \right)$ then by prime avoidance
$C\subseteq ((c_1, \dots, c_i)R + \fm^2)$ or
$C\subseteq P$ for some non-maximal associated prime $P$ of $(c_1, \dots, c_i)R$.
The former implies that $\dim_{R/\fm} (C + \fm^2)/\fm^2 \leq i < d$, which is false.
The latter contradicts the fact that $C$ is a reduction: if $C\subseteq P\subsetneq \fm$ and
$P\supseteq C \fm^k = \fm^{k+1}$, then $P=m$.
Hence we can find $c_{i+1}\in C$ such that $c_1, \dots, c_{i+1}$ form a filter regular sequence
and whose images in the $R/\fm$-vector space $(C + \fm^2)/\fm^2$ are linearly independent.
We can now construct $c_1, \dots, c_d$ by induction on $i$.
\end{proof}
\end{comment}
\section{Bounds on multiplicity using HSL numbers}
\begin{thm}\label{Theorem: bounds with HSL numbers}
Assume that $(R,\fm)$ is a reduced, Cohen-Macaulay noetherian local ring of dimension $d$ and embedding dimension $v$.
Let $\eta$ be the HSL number of $R$ and write $Q=p^\eta$.
Then
$e(R)\leq Q^{v-d}\binom{v}{d}$.
\end{thm}
\begin{proof}
We may assume that $R$ is complete since $e(R)=e(\hat{R})$. Hence $\fm$ admits a minimal reduction $J$ (generated by $d$ elements). We have
$e(R)= \ell (R/J)$, and Theorem \ref{Theorem: power of m in Frobenius closure}
shows that $\fm^{d+1} \subseteq J^F$.
Now $\left( J^F \right)^{[Q]}= J^{[Q]}$ for $Q=p^\eta$ hence
$\left(\fm^{d+1}\right)^{[Q]} \subseteq J^{[Q]}$.
Extend a set of minimal generators $x_1, \dots, x_d$ of $J$ to a minimal set of generators
$x_1, \dots, x_d, y_1, \dots y_{v-d}$ of $\fm$. Now $R/J^{[Q]}$ is spanned by
monomials
$$x_1^{\gamma_1} \dots x_d^{\gamma_d} y_1^{\alpha_1 Q + \beta_1} \dots y_{v-d}^{\alpha_{v-d} Q + \beta_{v-d}}$$
where $0\leq \gamma_1, \dots, \gamma_d, \beta_1, \dots, \beta_{v-d} < Q$ and
$0\leq \alpha_1+\dots+\alpha_{v-d}<d+1$. There are $Q^v$ choices for the exponents $\gamma_i$ and $\beta_j$, and the number of tuples of nonnegative integers $(\alpha_1,\dots,\alpha_{v-d})$ with $\alpha_1+\dots+\alpha_{v-d}\leq d$ is $\binom{v}{d}$; hence the number of such monomials is
$Q^v \binom{v}{d}$ and so $\ell(R/J^{[Q]})\leq Q^v \binom{v}{d}$.
Note that as $J$ is generated by a regular sequence,
$ \ell(R/J^{[Q]}) = Q^d \ell(R/J)$ and
we conclude that
$$\ell(R/J) = \ell(R/J^{[Q]}) /Q^d \leq Q^{v-d} \binom{v}{d} .$$
\end{proof}
\begin{remark}
\label{remark: asymptotically sharp}
The next family of examples shows that the bound in Theorem \ref{Theorem: bounds with HSL numbers} is asymptotically sharp.
Let $\mathbb{F}$ be a field of prime characteristic $p$, let $n\geq 2$, and
let $S$ be $\mathbb{F}[x_1, \dots, x_n]$. Let $\mathfrak{m}=(x_1, \dots, x_n)S$, and let $E$ denote the injective hull of the residue field of $S_{\mathfrak{m}}$.
Define $f=\sum_{i=1}^n x_1^p \dots x_{i-1}^p x_i x_{i+1}^p \dots x_n^p$ and $h=x_1 \dots x_{n-1}$. We claim that $f$ is square-free: if this is not the case
write $f=r^\alpha s$ where $r$ is irreducible of positive degree, and $\alpha\geq 2$.
Let $\partial$ denote the partial derivative with respect to $x_n$.
Note that $\partial f= h^p$ and so
$$ h^p = \alpha r^{\alpha-1} (\partial r) s + r^\alpha (\partial s) = r^{\alpha-1}\left( \alpha (\partial r) s + r (\partial s) \right).$$
We deduce that $r$ divides $h$, but this would imply that $x_i^2$ divides all terms of $f$ for some $1\leq i\leq n-1$, which is false.
We conclude that $S/fS$ is reduced.
Let $R$ be the localization of $S/fS$ at $\mathfrak{m}$. We compute next the HSL number $\eta$ of $R$ using the
method described in sections 4 and 5 in \cite{KatzmanParameterTestIdealOfCMRings}.
It is not hard to show that $\HH^{n-1}_{\mathfrak{m}}(R) \cong \Ann_E f$ where $E=\HH^n_{\mathfrak{m}}(S)$,
and that, after identifying these, the natural Frobenius action on
$\Ann_E f$ is given by $f^{p-1} T$ where $T$ is the natural Frobenius action on $E$.
To find the HSL number $\eta$ of $\HH^{n-1}_{\mathfrak{m}}(R)$
we readily compute
$I_1(f)$ (cf.~\cite[Proposition 5.4]{KatzmanParameterTestIdealOfCMRings}) to be the ideal generated by
$\{ x_1 \dots x_{i-1} x_{i+1} \dots x_n \,|\, 1\leq i \leq n\}$ and
\begin{eqnarray*}
I_2(f^{p+1})& = &I_1\left( f I_1(f) \right)\\
&=& \sum_{i=1}^n I_1\big(
\sum_{j=1}^{i-1} x_1^{p+1} \dots x_{j-1}^{p+1} x_j^2 x_{j+1}^{p+1} \dots x_{i-1}^{p+1} x_{i}^{p} x_{i+1}^{p+1} \dots x_n^{p+1}\\
&+& x_1^{p+1} \dots x_{i-1}^{p+1} x_{i} x_{i+1}^{p+1} \dots x_n^{p+1}\\
&+& \sum_{j=i+1}^{n} x_1^{p+1} \dots x_{i-1}^{p+1} x_i^p x_{i+1}^{p+1} \dots x_{j-1}^{p+1} x_{j}^{2} x_{j+1}^{p+1} \dots x_n^{p+1} \big)\\
&=& I_1(f) \\
\end{eqnarray*}
and we deduce that $\eta=1$.
We now compute
$$ \Gamma_{n,p}:=\frac{\deg f}{ \binom{n}{n-1} p^\eta}=\frac{(n-1)p+1}{np} .$$
We have $\lim_{n \rightarrow \infty} \Gamma_{n,p}=1$ and $\lim_{p \rightarrow \infty} \Gamma_{n,p}=(n-1)/n$,
so we can find values of $\Gamma_{n,p}$ arbitrarily close to 1.
\end{remark}
\section{Examples}
The injectivity of the natural Frobenius action on the top local cohomology $H^d_{\fm}(R)$ does {\it not} imply $e(R)\leq \binom{v}{d}$ as shown by the following example.
\begin{ex}
Let $S=\mathbb{Z}/2\mathbb{Z}[x,y,u,v]$, let $\mathfrak{m}$ be its ideal generated by the variables,
define
$I=(v, x)\cap (u, x)\cap (v, y)\cap (u, y) \cap (y, x)\cap (v, u) \cap (y - u, x - v) =\left( xv(y-u), yu(x-v), yuv(y-u), xuv(x-v) \right)$,
and let $R=S/I$: this is a reduced
2-dimensional ring.
We compute the following graded $S$-free resolution of $I$
$$
\xymatrix{
0 \ar[r] & S(-6) \ar[r]^{B} & S^4(-5) \ar[r]^{A} & S^2(-3) \oplus S^2(-4) \ar[r] & I \ar[r] & 0\\
}
$$
where
$$
A=\left[\begin{array}{cccc}
u(x-v) & yu & 0 &0\\
0&0&xv&v(y-u)\\
0&-x&0&v-x\\
u-y&0&-y&0
\end{array}\right], \quad
B=\left[\begin{array}{c}
y\\v-x\\u-y\\x\\
\end{array}\right]
$$
and note that $R$ has projective dimension 3, hence depth 1 and so it is not Cohen-Macaulay.
Also, we can read the Hilbert series of $R$ from its graded resolution and we obtain
$$\frac{1-2t^3-2t^4+4t^5-t^6}{(1-t)^4}=\frac{1+2t+3t^2+2t^3-t^4}{(1-t)^2}$$
and so the multiplicity of $R$ is $1+2+3+2-1=7$ exceeding $\binom{4}{2}=6$ (cf.~\cite[\S 6.1.1]{HerzogHibiMonomialIdeals}.)
Note that $R$ is not $F$-injective, but the natural Frobenius action on the top local cohomology module is injective.
\end{ex}
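
The reduction of the Hilbert series and the resulting multiplicity can also be double-checked mechanically; the following SymPy snippet is included only as an illustration and is not part of the argument.
\begin{verbatim}
import sympy as sp

t = sp.symbols('t')
numerator = 1 - 2*t**3 - 2*t**4 + 4*t**5 - t**6   # from the graded resolution
reduced   = 1 + 2*t + 3*t**2 + 2*t**3 - t**4      # numerator over (1 - t)^2

# the two forms of the Hilbert series agree:
assert sp.expand(reduced * (1 - t)**2 - numerator) == 0

# multiplicity of the 2-dimensional ring R:
print(reduced.subs(t, 1))   # 7, which exceeds binomial(4, 2) = 6
\end{verbatim}
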
From the proof of Theorem \ref{thm: bound in GCM case}, we can see that if a minimal reduction of the maximal ideal in an $F$-injective local ring $R$ is Frobenius closed then the bound $e(R)\leq \binom{v}{d}$ will hold. Hence we may ask whether minimal reductions would be Frobenius-closed in such rings (cf.~Theorem 6.5 and Problem 3 in \cite{QuyShimomotoFinjectivity}).
However, the following example shows this not to be the case.
\begin{ex}
Let $S=\mathbb{Z}/2\mathbb{Z}[x,y,u,v,w]$, let $\mathfrak{m}$ be its ideal generated by the variables and let
$I_1=(x,y) \cap (x+y, u+w, v+w)$, $I_2=(u,v,w) \cap (x,u,v) \cap (y,u,v)=(u,v,xyw)$, and $I=I_1\cap I_2$.
Fedder's Criterion \cite[Proposition 1.7]{FedderFPureRat} shows that $S/I_1$, $S/I_2$ and $S/(I_1+I_2)$ are $F$-pure,
and \cite[Theorem 5.6]{QuyShimomotoFinjectivity} implies that $S/I$ is F-injective. Also, $S/I$ is almost Cohen-Macaulay: it is 3-dimensional and its localization at $\mathfrak{m}$ has depth 2.
It is not hard to check that the ideal $J$ generated by the images in $S/I$ of $w, y+v, x+u$ is a minimal reduction.
However $J^F\neq J$: while $v^2\notin J$, we have
$$v^4= xyw^2 +v^2(y+v)^2 +yvw(x+y) +(v+w) (y^2v+xyw), $$
where the first two terms on the right lie in $J^{[2]}=(w^2,(y+v)^2,(x+u)^2)$ and the last two lie in $I$; hence $v^4\in J^{[2]}$ in $S/I$, that is, $v^2\in J^F\setminus J$.
\end{ex}
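
The displayed polynomial identity can likewise be verified mechanically: the following SymPy snippet, included only as a verification aid, checks that the difference between the two sides has all coefficients divisible by $2$, so the identity holds over $\mathbb{Z}/2\mathbb{Z}$.
\begin{verbatim}
import sympy as sp

x, y, u, v, w = sp.symbols('x y u v w')
rhs = (x*y*w**2 + v**2*(y + v)**2 + y*v*w*(x + y)
       + (v + w)*(y**2*v + x*y*w))

diff = sp.expand(rhs - v**4)
assert all(c % 2 == 0 for c in sp.Poly(diff, x, y, u, v, w).coeffs())
print("identity holds over Z/2Z")
\end{verbatim}
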
\section{Bounds in Characteristic zero}
Throughout this section $K$ will denote a field of characteristic zero,
$T=K[x_1, \dots, x_n]$,
$R$ will denote the finitely generated $K$-algebra
$R=T/I$ for some ideal $I\subseteq T$, and $\mathfrak{m}=(x_1, \dots, x_n)R$; $d$ and $v$ will denote the dimension and embedding dimension, respectively, of $R_{\mathfrak{m}}$.
We also choose $\mathbf{y}=y_1, \dots, y_d\in \mathfrak{m}$ whose images in $R_{\mathfrak{m}}$ form a minimal reduction of $\mathfrak{m}R_\mathfrak{m}$.
We may, and do, assume that the only maximal ideal containing $\mathbf{y}$ is $\mathfrak{m}$. Otherwise, if $\fm_1, \dots, \fm_t$ are all the maximal ideals
distinct from $\fm$ which contain $\mathbf{y}$, we can pick $f\in (\fm_1\cap \dots \cap \fm_t) \setminus \fm$, and now the only
maximal ideal containing $\mathbf{y}$ in $R_f$ is $\fm R_f$.
We may now replace $R$ with $R^\prime=K[x_1, \dots, x_n,x_{n+1}]/\left(I+\langle x_{n+1} f-1 \rangle\right)\cong R_f$ and
since $R_{\mathfrak{m}}= (R_f)_{\mathfrak{m}}$ we are not affecting any local issues.
The main tool used in this section is the descent technique described in
\cite{HochsterHunekeTightClosureInEqualCharactersticZero}. We start by introducing a flavour of it useful for our purposes.
\begin{defi}\label{Definition: descent}
By \emph{descent objects} we mean
\begin{itemize}
\item[(1)] a finitely generated $K$-algebra $R$ as above,
\item[(2)] a finite set of finitely generated $T$-modules,
\item[(3)] a finite set of $T$ linear maps between $T$-modules in (2),
\item[(4)] a finite set of finite complexes involving maps in (3).
\end{itemize}
By \emph{descent data} for these descent objects we mean
\begin{itemize}
\item[(a)] A finitely generated $\mathbb{Z}$-subalgebra $A$ of $K$, $T_A=A[x_1, \dots, x_n]$, $I_A\subseteq T_A$ such that with $R_A=T_A/I_A$
\begin{itemize}
\item[$\bullet$] $R_A \subseteq R$ induces an isomorphism $R_A\otimes_A K \cong R\otimes_A K=R$, and
\item[$\bullet$] $R_A$ is $A$-free.
\end{itemize}
\item[(b)] For each $M$ in (2), a finitely generated free $A$-submodule $M_A\subseteq M$ such that this inclusion induces an isomorphism
$M_A\otimes_A K \cong M\otimes_A K=M$.
\item[(c)] For every $\phi : M \rightarrow N$ in (3) an $A$ linear map $\phi_A : M_A \rightarrow N_A$ such that
\begin{itemize}
\item[$\bullet$] $\phi_A \otimes 1: M_A \otimes_A K \rightarrow N_A \otimes_A K$ is the map $\phi$, and
\item[$\bullet$] $\Image \phi$, $\Ker \phi$ and $\Coker \phi$ are $A$-free.
\end{itemize}
\item[(d)] For every homological complex
$$\mathcal{C}_\bullet = \dots \xrightarrow{\partial_{i+2}} C_{i+1} \xrightarrow{\partial_{i+1}} C_i \xrightarrow{\partial_{i}} \dots $$
in (4), a homological complex
$${\mathcal{C}_A}_\bullet= \dots \xrightarrow{(\partial_{i+2})_A} (C_{i+1})_A \xrightarrow{(\partial_{i+1})_A} (C_i)_A \xrightarrow{(\partial_{i})} \dots $$
such that $\HH_i({\mathcal{C}_A} \otimes_A K)=\HH_i({\mathcal{C}_A} )\otimes_A K$.
For every cohomological complex in (4), a similar corresponding construction.
\end{itemize}
\end{defi}
Descent data exist: see \cite[Chapter 2]{HochsterHunekeTightClosureInEqualCharactersticZero}.
Notice that for any maximal ideal $\mathfrak{p}\subset A$, the residue field $\kappa(\mathfrak{p})=A/\mathfrak{p}$
is a finite field. Given any property $\mathcal{P}$ of rings of prime characteristic, we say that $R$ as in the definition above has
\emph{dense $\mathcal{P}$ type} if there exist descent data $(A, R_A)$ such that for all maximal ideals $\mathfrak{p}\subset A$ the fiber
$R_A \otimes_A \kappa(\mathfrak{p})$ has property $\mathcal{P}$.
Notice also that for any complex $\mathcal{C}$ of free $A$ modules where the kernels and cokernels of all maps are $A$-free (as in Definition \ref{Definition: descent}(c) and (d)),
$\HH_i (\mathcal{C} \otimes_A \kappa(\fp)) = \HH_i (\mathcal{C}) \otimes_A \kappa(\fp)$.
The main result in this section is the following theorem.
\begin{thm}\label{Theorem: the bound in characteristic zero}
If $R_\mathfrak{m}$ is Cohen-Macaulay on the punctured spectrum and has dense $F$-injective type, then
$e(R_\mathfrak{m})\leq \binom{v}{d}$.
\end{thm}
\begin{lem}\label{Lemma: descent properties}
There exists descent data $(A, R_A)$ for $R$ with the following properties.
\begin{enumerate}
\item[(a)] $y_1, \dots, y_d\in R_A$,
\item[(b)] for all maximal ideals $\mathfrak{p}\subset A$ the images of $y_1, \dots, y_d$ in $R_{\kappa(\mathfrak{p})}$ are a minimal reduction of $\mathfrak{m} R_{\kappa(\mathfrak{p})}$,
\item[(c)] if $R_\mathfrak{m}$ is Cohen-Macaulay on its punctured spectrum, so is $R_{\kappa(\mathfrak{p})}$ for all maximal ideals $\mathfrak{p}\subset A$.
\item[(d)] if $R_\mathfrak{m}$ is unmixed, so is $R_{\fp}$ for all maximal ideals $\mathfrak{p}\subset A$.
\end{enumerate}
\end{lem}
\begin{proof}
Start with some descent data $(A, R_A)$ where $A$ contains all K-coefficients among a set of generators $g_1, \dots, g_\mu$ of $I$, $I_A$ is the ideal of $A[x_1, \dots, x_n]$
generated by $g_1, \dots, g_\mu$ and
$R_A=A[x_1, \dots, x_n]/I_A$. Let $\mathbf{x}$ denote $(x_1, \dots, x_n)$.
For (a) write $y_i=Q_i(x_1, \dots, x_n)+I$ for all $1\leq i\leq d$ and extend $A$ to include all the K-coefficients in $Q_1, \dots, Q_d$.
Assume that $\mathfrak{m}^{s+1} \subseteq \mathbf{y}\mathfrak{m}^s$ for some $s$. Write each monomial of degree $s+1$ in the form
$r_1(\mathbf{x}) Q_1(\mathbf{x}) + \dots + r_d(\mathbf{x}) Q_d(\mathbf{x}) + a(\mathbf{x})$ where $r_1, \dots, r_d$ are polynomials of degrees at least $s$
and $a(\mathbf{x})\in I$; enlarge $A$ to include all the $K$-coefficients of $r_1, \dots, r_d, a$. With this enlarged $A$ we have
$(\mathbf{x}R_A)^{s+1} \subseteq (\mathbf{y} R_A) (\mathbf{x} R_A)^s$ and tensoring with any $\kappa(\fp)$ gives
$( \mathbf{x} R_{\kappa(\fp)})^{s+1} \subseteq (\mathbf{y} R_{\kappa(\fp)}) (\mathbf{x} R_\kappa(\fp))^s$.
If $R_\mathfrak{m}$ is Cohen-Macaulay on its punctured spectrum, then we can find a localization of $R$ at one element whose only point at which it can fail
to be Cohen-Macaulay is $\fm$. After adding a new variable to $R$ as at the beginning of this section, we may assume that the non-Cohen-Macaulay locus of $R$ is contained in $\{ \fm \}$.
The hypothesis in (c) is now equivalent to the existence of a $k\geq 1$ such that $\mathfrak{m}^k \Ext_T^i(R,T)=0$ for all $\height I < i \leq n$.
Let $\mathcal{F}$ be a free $T$-resolution of $R$.
Include $\fm$, $\mathcal{F}$ and $\mathcal{C}=\Hom(\mathcal{F}, T)$ in the descent objects.
Now, with the corresponding descent data,
$\mathcal{F}_A$ is a $T_A$-free resolution of $R_A$.
Localize $A$ at one element, if necessary, so that $\fm_A^k \Ext_{T_A}^i (R_A, T_A)$ is $A$-free for all $\height I < i \leq n$.
Fix any $\height I < i \leq n$; we have
$$ \Ext_{T_A}^i (R_A, T_A) \otimes_A K=\HH^i( \Hom(\mathcal{F_A}, T_A) ) \otimes_A K= \HH^i( \mathcal{C_A} ) \otimes_A K = \HH^i ( \mathcal{C} ) $$
and hence $\fm_A^k \Ext_{T_A}^i (R_A, T_A) \otimes_A K =0$ so $\fm_A^k \Ext_{T_A}^i (R_A, T_A)=0$. Now for any maximal ideal $\fp \subset A$,
$\fm_{\kappa(\fp)}^k \Ext_{T_{\kappa(\fp)}}^i (R_{\kappa(\fp)}, T_{\kappa(\fp)})=0$, and hence
$R_{\kappa(\fp)}$ is Cohen-Macaulay on its punctured spectrum.
The last statement is \cite[Theorem 2.3.9]{HochsterHunekeTightClosureInEqualCharactersticZero}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{Theorem: the bound in characteristic zero}]
Using \cite[Theorem 4.6.4]{BrunsHerzog} we write
$e(R_{\fm})= \chi(\mathbf{y}; R_{\fm})$, and using the fact that $R$ was constructed so that $\fm$ is the only maximal ideal
containing $\mathbf{y}$, we deduce that
$e(R_{\fm})= \chi(\mathbf{y}; R)= \sum_{i=0}^d (-1)^i \ell_R \HH_i( \mathbf{y}, R)$.
We add to the descent objects in Lemma \ref{Lemma: descent properties} the Koszul complex $\mathcal{K}_\bullet(\mathbf{y}; R)$ and extend the descent data
in Lemma \ref{Lemma: descent properties} to cater for these.
For all $0\leq i\leq d$ we have $\HH_i(\mathbf{y}; R) \cong \HH_i(\mathbf{y}; R_A) \otimes_A K$ and
$\ell \left( \HH_i(\mathbf{y}; R) \right) = \rank \HH_i(\mathbf{y}; R_A)$.
Pick any maximal ideal $\fp \subset A$.
We have $\HH_i(\mathbf{y}; R_A)\otimes_A \kappa(\fp) \cong \HH_i(\mathbf{y}; R_{\kappa(\fp)})$.
Note that $\HH_i(\mathbf{y}; R_{\kappa(\fp)})$ is supported only at $\fm R_{\kappa(\fp)}$. Otherwise, we can find an $x\in \fm R_{\kappa(\fp)}$
such that
$0\neq \HH_i(\mathbf{y}; R_{\kappa(\fp)})_x \cong \HH_i(\mathbf{y}; R_A)_x \otimes_A \kappa(\fp)$, hence
$\HH_i(\mathbf{y}; R_A)_x \neq 0$ and
$(\HH_i(\mathbf{y}; R_A) \otimes_A K)_x \cong \HH_i(\mathbf{y}; R)_x=0$,
contradicting the fact that $\Supp \HH_i(\mathbf{y}; R) \subseteq \{ \fm \}$.
Now
\begin{align*}
e((R_{\kappa(\fp)})_{\fm})= \chi(\mathbf{y}; (R_{\kappa(\fp)})_{\fm})&= \chi(\mathbf{y}; R_{\kappa(\fp)})\\
&= \sum_{i=0}^d (-1)^i \ell_R \HH_i( \mathbf{y}, R_{\kappa(\fp)})\\
&=\sum_{i=0}^d (-1)^i \rank \HH_i( \mathbf{y}, R_A)
\end{align*}
and so
Theorem \ref{thm: bound in GCM case} implies that $e(R_{\fm})=e((R_{\kappa(\fp)})_{\fm})\leq \binom{v}{d}$.
\end{proof}
\begin{remark}
In \cite{SchwedeFInjectiveAreDuBois} it is conjectured that being a $K$-algebra with dense $F$-injective type is equivalent to being a Du Bois singularity. Recently, the multiplicity of
Cohen-Macaulay Du Bois singularities has been bounded by $\binom{v}{d}$ (see \cite{Shibata2017}) and hence the results of this section provide further evidence for the conjecture above.
\end{remark}
\bibliographystyle{skalpha}
\section{Introduction}
\textbf{Classical Ramsey numbers.} A $k$-uniform hypergraph $H = (P,E)$ consists of a vertex set $P$ and an edge set $E\subset {P\choose k}$, which is a collection of subsets
of $P$ of order $k$. The Ramsey number $R_k(s,n)$ is the minimum integer $N$ such that every $k$-uniform hypergraph on $N$ vertices contains either $s$ vertices such that every $k$-tuple induced by them is an edge, or contains $n$ vertices such that every $k$-tuple induced by them is not an edge.
Due to its wide range of applications in logic, number theory, analysis, and geometry, estimating Ramsey numbers has become one of the most central problems in combinatorics. For \emph{diagonal} Ramsey numbers, i.e.~when $s = n$, the best known lower and upper bounds for $R_k(n,n)$ are of the form\footnote{We write $f(n) = O(g(n))$ if $|f(n)| \leq c|g(n)|$ for some fixed constant $c$ and for all $n \geq 1$; $f(n) = \Omega(g(n))$ if $g(n) = O(f(n))$; and $f(n) = \Theta(g(n))$ if both $f(n) = O(g(n))$ and $f(n) = \Omega(g(n))$ hold. We write $f(n) = o(g(n))$ if for every positive $\epsilon > 0$ there exists a constant $n_0$ such that $|f(n)| \leq \epsilon |g(n)|$ for all $n \geq n_0$.} $R_2(n,n) = 2^{\Theta(n)}$, and for $k \geq 3$,
$$\twr_{k-1}(\Omega(n^2)) \leq R_k(n,n) \leq \twr_k(O(n)),$$
\noindent where the tower function $\twr_k(x)$ is defined by $\twr_1(x) = x$ and $\twr_{i + 1} = 2^{\twr_i(x)}$ (see \cite{es,erdos2,erdos3,rado}). Erd\H os, Hajnal, and Rado \cite{erdos3} conjectured that $R_k(n,n)=\twr_k(\Theta(n))$, and Erd\H os offered a \$500 reward for a proof. Despite much attention over the last 50 years, the exponential gap between the lower and upper bounds for $R_k(n,n)$, when $k\geq 3$, remains unchanged.
The \emph{off-diagonal} Ramsey numbers, i.e.~$R_k(s,n)$ with $s$ fixed and $n$ tending to infinity, have also been extensively studied. Unlike $R_k(n,n)$, the lower and upper bounds for $R_k(s,n)$ are much more comparable. It is known \cite{AKS,Kim,B,BK} that $R_2(3,n) =\Theta(n^2/\log n)$ and, for fixed $s > 3$
\begin{equation}\label{offgraph}
\Omega\left( n^{\frac{s+1}{2} - \epsilon} \right) \leq R_2(s,n) \leq O\left(n^{s-1}\right),\end{equation}
\noindent where $\epsilon > 0$ is an arbitrarily small constant. Combining the upper bound in (\ref{offgraph}) with the results of Erd\H os, Hajnal, and Rado \cite{rado,erdos3} demonstrates that
\begin{equation}\label{rado2} \twr_{k-1}(\Omega(n))\leq R_k(s,n) \leq \twr_{k-1}(O(n^{2s-4})),
\end{equation}
\noindent for $k \geq 3$ and $s \geq 2^k$. See Conlon, Fox, and Sudakov \cite{conlon} for a recent improvement.
\medskip
\noindent \textbf{Semi-algebraic setting.} In this paper, we continue a sequence of recent works on Ramsey numbers for $k$-ary semi-algebraic relations $E$ on $\RR^d$ (see \cite{bukh,matousek,suk,sukotd}). Before we give its precise definition, let us recall two classic Ramsey-type theorems of Erd\H os and Szekeres.
\begin{theorem}[\cite{es}]\label{mono}
For $N = (s-1)(n-1) + 1$, let $P = (p_1,\ldots,p_N) \subset \mathbb{R}$ be a sequence of $N$ distinct real numbers. Then $P$ contains either an increasing subsequence of length $s$, or a decreasing subsequence of length $n$.
\end{theorem}
\noindent In fact, there are now at least 6 different proofs of Theorem \ref{mono} (see \cite{steele}). The other well-known result from \cite{es} is the following theorem, which is often referred to as the Erd\H os-Szekeres cups-caps theorem. Let $X$ be a finite point set in the plane in general position.\footnote{No two members share the same $x$-coordinate, and no three members are collinear.} We say that $X = (p_{i_1},\ldots,p_{i_s})$ forms an \emph{$s$-cup} (\emph{$s$-cap}) if $X$ is in convex position\footnote{Forms the vertex set of a convex $s$-gon.} and its convex hull is bounded above (below) by a single edge. See Figure~\ref{cupcap}.
\begin{figure}
\begin{center}
\includegraphics[width=280pt]{cupcap.eps}
\caption{A 4-cup and a 5-cap.}\label{cupcap}
\end{center}
\end{figure}
\begin{theorem}[\cite{es}]\label{mono2}
For $N = {n + s- 4\choose s-2} + 1$, let $P = (p_1,\ldots,p_N)$ be a sequence of $N$ points in the plane in general position. Then $P$ contains either an $s$-cup or an $n$-cap.
\end{theorem}
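
To make the cup/cap condition concrete: for points listed in increasing order of $x$-coordinate and in general position, the chain is a cup precisely when the slopes of consecutive segments increase, and a cap when they decrease. The following short Python check (the function names and sample coordinates are ours, for illustration only) tests this condition.
\begin{verbatim}
def slopes(points):
    # points: list of (x, y) pairs sorted by increasing x, distinct x's
    return [(y2 - y1) / (x2 - x1)
            for (x1, y1), (x2, y2) in zip(points, points[1:])]

def is_cup(points):
    s = slopes(points)
    return all(a < b for a, b in zip(s, s[1:]))

def is_cap(points):
    s = slopes(points)
    return all(a > b for a, b in zip(s, s[1:]))

# a 4-cup and a 5-cap (coordinates chosen for illustration):
print(is_cup([(0, 2), (1, 0.5), (2, 0), (3, 1)]))             # True
print(is_cap([(0, 0), (1, 1.5), (2, 2), (3, 1.8), (4, 1)]))   # True
\end{verbatim}
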
Theorems \ref{mono} and \ref{mono2} can be generalized using the following semi-algebraic framework. Let $P = \{p_1,\ldots,p_N\}$ be a sequence of $N$ points in $\RR^d$. Then we say that $E\subset {P\choose k}$ is a \emph{semi-algebraic relation} on $P$ with \emph{complexity} at most $t$ if there are $t$ polynomials $f_1,\ldots,f_t \in \RR[x_1,\ldots,x_{kd}]$ of degree at most $t$, and a Boolean function $\Phi$ such that, for $1 \leq i_1 < \cdots < i_k \leq N$,
$$(p_{i_1},\ldots,p_{i_k}) \in E \hspace{.5cm}\Leftrightarrow \hspace{.5cm} \Phi(f_1(p_{i_1},\ldots,p_{i_k}) \geq 0,\ldots, f_t(p_{i_1},\ldots,p_{i_k}) \geq 0) = 1.$$
\noindent We say that the relation $E\subset {P\choose k}$ is \emph{symmetric} if $(p_{i_1},\ldots,p_{i_k}) \in E$ if and only if, for every permutation $\pi$ of $(i_1,\ldots,i_k)$,
$$ \Phi(f_1(p_{\pi(i_1)},\ldots,p_{\pi(i_k)}) \geq 0,\ldots, f_t(p_{\pi(i_1)},\ldots,p_{\pi(i_k)}) \geq 0) = 1.$$
Point sets $P\subset \RR^d$ equipped with a $k$-ary semi-algebraic relation $E\subset{P\choose k}$ are often used to model problems in discrete geometry, where the dimension $d$, uniformity $k$, and complexity $t$ are considered fixed but arbitrarily large constants. Since we can always make any relation $E$ symmetric by increasing its complexity to $t' = t'(k,d,t)$, we can therefore simplify our presentation by only considering symmetric relations.
Let $R_{k}^{d,t}(s,n)$ be the minimum integer $N$ such that every $N$-element point set $P$ in $\RR^d$ equipped with a $k$-ary (symmetric) semi-algebraic relation $E\subset{P\choose k}$, which has complexity at most $t$, contains $s$ points such that every $k$-tuple induced by them is in $E$, or contains $n$ points such that no $k$-tuple induced by them is in $E$. Alon, Pach, Pinchasi, Radoi\v{c}i\'c, and Sharir \cite{noga} showed that for $k = 2$, we have
\begin{equation}\label{semi2}R_2^{d,t}(n,n) \leq n^C,\end{equation}
\noindent where $C = C(d,t)$. Roughly speaking, $C \approx t{d + t\choose t}$. Conlon, Fox, Pach, Sudakov, and Suk showed that one can adapt the Erd\H os-Rado argument in \cite{rado} and establish the following recursive formula for $R_k^{d,t}(s,n).$
\begin{theorem}[\cite{suk}]\label{semirec}
Set $M = R^{d,t}_{k-1}(s-1,n-1)$. Then for every $k \geq 3$,
$$R^{d,t}_{k}(s,n) \leq 2^{C_1M\log M},$$
\noindent where $C_1=C_1(k,d,t)$.
\end{theorem}
\noindent Together with (\ref{semi2}) we have $R_k^{d,t}(n,n) \leq \twr_{k-1}(n^C)$, giving an exponential improvement over the Ramsey numbers for general $k$-uniform hypergraphs. Conlon et al.~\cite{suk} also gave a construction of a geometric example that provides a $\twr_{k-1}(\Omega(n))$ lower bound, demonstrating that $R^{d,t}_k(n,n)$ does indeed grow as a $(k-1)$-fold exponential tower in $n$.
However, off-diagonal Ramsey numbers for semi-algebraic relations are much less well understood. The best known upper bound for $R^{d,t}_k(s,n)$ is essentially the trivial bound $$R_k^{d,t}(s,n) \leq \min\left\{R^{d,t}_k(n,n), R_k(s,n)\right\}.$$ The crucial case is when $k = 3$, since any significant improvement on estimating $R_3^{d,t}(s,n)$ could be used with Theorem~\ref{semirec} to obtain a better bound for $R^{d,t}_k(s,n)$, for $k \geq 4$. The trivial bound above implies that
\begin{equation}\label{triv}R^{d,t}_3(s,n) \leq 2^{n^C},\end{equation}
\noindent where $C = \min\{ C_1(d,t), C_2(s)\}$.
The main difficulty in improving (\ref{triv}) is that the Erd\H os-Rado upper bound argument \cite{rado} will not be effective. Roughly speaking, the Erd\H os-Rado argument reduces the problem from 3-uniform hypergraphs to graphs, producing a recursive formula similar to Theorem \ref{semirec}. This approach has been used repeatedly by many researchers to give upper bounds on Ramsey-type problems arising in triple systems \cite{conlon,suk,sukotd,dhruv2}. However, it is very unlikely that any variant of the Erd\H os-Rado upper bound argument will establish a subexponential upper bound for $R^{d,t}_3(s,n)$.
With a more novel approach, our main result establishes the following improved upper bound for $R^{d,t}_3(s,n)$, showing that the function $R^{d,t}_3(s,n)$ is indeed subexponential in $n$.
\begin{theorem}\label{main}
For fixed integers $d,t\geq 1$ and $s\geq 4$, we have $R^{d,t}_3(s,n) \leq 2^{n^{o(1)}}.$ More precisely
$$R^{d,t}_3(s,n) \leq 2^{2^{c\sqrt{\log n \log\log n}}},$$
\noindent where $c = c(d,t,s)$.
\end{theorem}
\noindent Let us remark that in dimension 1, Conlon, Fox, Pach, Sudakov, and Suk \cite{suk} established a quasi-polynomial bound for $R^{1,t}_3(s,n)$. In particular, $R^{1,t}_3(s,n) \leq 2^{(\log n)^C}$ where $C = C(t,s)$. Combining Theorems \ref{main} and \ref{semirec} we have the following.
\begin{corollary}\label{maincor}
For fixed integers $d,t\geq 1$, $k\geq 3$, and $s\geq k + 1$, we have
$$R^{d,t}_k(s,n) \leq \twr_{k-1}(n^{o(1)}).$$
\end{corollary}
\noindent The classic cups-caps construction of Erd\H os and Szekeres \cite{es} is an example of a planar point set with ${n + s - 4\choose s-2}$ elements and no $n$-cup and no $s$-cap. This implies that $R^{d,t}_3(s,n) \geq \Omega(n^{s-2})$ for $d\geq 2$ and $t \geq 1$, and together with the semi-algebraic stepping-up lemma proven in \cite{suk} (see also \cite{mat4}) we have $R^{d,t}_k(s,n) \geq \twr_{k-2}(\Omega(n^{s/2^k}))$ for $s,d \geq 2^{k}$.
In Section \ref{secosh}, we give an application of Theorem \ref{main} to a recently studied problem on hyperplane arrangements in $\RR^d$.
\bigskip
\noindent \textbf{Monochromatic triangles.} Let $R_2(s;m) = R_2(\underbrace{s,\ldots,s}_m)$ denote the smallest integer $N$ such that any $m$-coloring on the edges of the complete $N$-vertex graph contains a monochromatic clique of size $s$, that is, a set of $s$ vertices such that every pair from this set has the same color. For the case $s = 3$, the Ramsey number $R_2(3;m)$ has received a lot of attention over the last 100 years due to its application in additive number theory \cite{schur} (more details are given in Section \ref{schursec}). It is known (see \cite{fred,schur}) that
$$\Omega(3.19^{m}) \leq R_2(3;m) \leq O(m!).$$
Our next result states that we can improve the upper bound on $R_2(3;m)$ in our semi-algebraic setting. More precisely, let $R^{d,t}_2(3;m)$ be the minimum integer $N$ such that every $N$-element point set $P$ in $\RR^d$ equipped with symmetric semi-algebraic relations $E_1,\ldots,E_m\subset {P\choose 2}$, such that each $E_i$ has complexity at most $t$ and ${P\choose 2} = E_1\cup \cdots \cup E_m$, contains three points such that every pair induced by them belongs to $E_i$ for some fixed $i$.
\begin{theorem}
\label{color}
For fixed $d,t\geq 1$ we have
$$ R^{d,t}_2(3;m) < 2^{O(m\log\log m)}.$$
\end{theorem}
\medskip
We also show that for fixed $d\geq 1$ and $t\geq 5000$, the function $R^{d,t}_2(3;m)$ does indeed grow exponentially in $m$.
\begin{theorem}\label{lowermulti}
For $d\geq 1$ and $t\geq 5000$ we have
$$ R^{d,t}_2(3;m) \geq c(1681)^{m/7} \geq c(2.889)^m,$$
\noindent where $c$ is an absolute constant.
\end{theorem}
\medskip
\noindent \textbf{Organization.} In the next two sections, we recall several old theorems on the arrangement of surfaces in $\RR^d$ and establish a result on point sets equipped with multiple binary relations. In Section \ref{proofsection}, we combine the results from Sections \ref{cutsection} and \ref{multisection} to prove our main result, Theorem \ref{main}. We discuss a short proof of our application in Section \ref{secosh}, and our results on monochromatic triangles in Section~\ref{triangles}. We conclude with some remarks.
We systematically omit floor and ceiling signs whenever they are not crucial, for the sake of clarity of presentation. All logarithms are assumed to be base 2.
\section{Arrangement of surfaces in $\RR^d$}\label{cutsection}
In this section, we recall several old results on the arrangement of surfaces in $\RR^d$. Let $f_1,\ldots, f_m$ be $d$-variate real polynomials of degree at most $t$, with zero sets $Z_1,\ldots, Z_m$, that is, $Z_i = \{x\in \RR^d: f_i(x) = 0\}$. Set $\Sigma = \{Z_1,\ldots,Z_m\}$. We will assume that $d$ and $t$ are fixed, and $m$ is some number tending to infinity. A \emph{cell} in the arrangement $\mathcal{A}(\Sigma) = \bigcup_i Z_i$ is a relatively open connected set defined as follows. Let $\approx$ be an equivalence relation on $\mathbb{R}^d$, where $x \approx y$ if $\{i: x \in Z_i\} = \{i:y \in Z_i\}$. Then the cells of the arrangement $\mathcal{A}(\Sigma)$ are the connected components of the equivalence classes. A vector $\sigma \in \{-1, 0, +1\}^m$ is a {\it sign pattern} of $f_1,\ldots, f_m$ if there exists an $x \in \mathbb{R}^d$ such that the sign of $f_j(x)$ is $\sigma_j$ for all $j = 1,\ldots, m$. The Milnor-Thom theorem (see \cite{basu, milnor, thom}) bounds the number of cells in the arrangement of the zero sets $Z_1,\ldots, Z_m$ and, consequently, the number of possible sign patterns.
\begin{theorem}[Milnor-Thom]\label{milnor}
Let $f_1,\ldots,f_m$ be $d$-variate real polynomials of degree at most $t$. The number of cells in the arrangement of their zero sets $Z_1,\ldots,Z_m\subset \mathbb{R}^d$ and, consequently, the number of sign patterns of $f_1,\ldots,f_m$ is at most
$$\left(\frac{50mt}{d}\right)^d,$$
\noindent for $m \geq d \geq 1$.
\end{theorem}
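
As a toy illustration of sign patterns (not of the cell bound itself), the following Python snippet samples random points in the plane and records the sign patterns realized by three fixed conics. Random sampling essentially never hits the zero sets, so the count it reports is only a lower bound on the number of sign patterns, which in turn is bounded by Theorem \ref{milnor}; the polynomials and parameters are chosen purely for illustration.
\begin{verbatim}
import random

def sign(x):
    return (x > 0) - (x < 0)

f1 = lambda x, y: x**2 + y**2 - 1    # circle
f2 = lambda x, y: y - x**2           # parabola
f3 = lambda x, y: x*y - 0.25         # hyperbola

random.seed(1)
patterns = set()
for _ in range(200000):
    x, y = random.uniform(-2, 2), random.uniform(-2, 2)
    patterns.add((sign(f1(x, y)), sign(f2(x, y)), sign(f3(x, y))))

m, t, d = 3, 2, 2
print(len(patterns), "patterns found;",
      "Milnor-Thom bound:", (50 * m * t / d) ** d)
\end{verbatim}
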
While the Milnor-Thom Theorem bounds the number of cells in the arrangement $\mathcal{A}(\Sigma)$, the complexity of these cells may be very large (depending on $m$). A long standing open problem is whether each cell can be further decomposed
into semi-algebraic sets\footnote{A real semi-algebraic set in $\RR^d$ is the locus of all points that satisfy a given finite Boolean combination of polynomial equations and inequalities in the $d$ coordinates.} with bounded description complexity (which depends only on $d$ and $t$), such that the total number of cells for the whole arrangement is still $O(m^d)$. This can be done easily in dimension 2 by a result of Chazelle et al.~\cite{chazelle}. Unfortunately in higher dimensions, the current bounds for this problem are not tight. In dimension 3, Chazelle et al.~\cite{chazelle} established a near tight bound of $O(m^3\beta(m))$, where $\beta(m)$ is an extremally slowly growing function of $m$ related to the inverse Ackermann function. For dimensions $d\geq 4$, Koltun \cite{koltun} established a general bound of $O(m^{2d-4 + \epsilon})$ for arbitrarily small constant $\epsilon$, which is nearly tight in dimension 4. By combining these bounds with the standard theory of random sampling \cite{agarwal,shor,noga}, one can obtain the following result which is often referred to as the Cutting Lemma. We say that the surface $Z_i = \{x \in \mathbb{R}^d: f_i(x) = 0\}$ \emph{crosses} the cell $\Delta\
\subset \mathbb{R}^d$ if $Z_i\cap \Delta \neq \emptyset$ and $Z_i$ does not fully contain $\Delta$.
\begin{lemma}[Cutting Lemma]
\label{cut2}
For $d,t \geq 1$, let $\Sigma$ be a family of $m$ algebraic surfaces (zero sets) in $\mathbb{R}^d$ of degree at most $t$. Then for any $r > 0$, there exists a decomposition of $\mathbb{R}^d$ into at most $c_1r^{2d}$ relatively open connected sets (cells), where $c_1 = c_1(d,t) \geq 1$, such that each cell is crossed by at most $m/r$ surfaces from $\Sigma$.
\end{lemma}
As an application, we prove the following lemma (see \cite{mat2,chan} for a similar result when $\Sigma$ is a collection of hyperplanes).
\begin{lemma}\label{decomp}
For $d,t\geq 1$, let $P$ be an $N$-element point set in $\RR^d$ and let $\Sigma$ be a family of $m$ surfaces of degree at most $t$. Then for any integer $\ell$ where $\log m < \ell < N/10$, we can find $\ell$ disjoint subsets $P_i$ of $P$ and $\ell$ cells $\Delta_i$, with $\Delta_i \supset P_i$, such that each subset $P_i$ contains at least $ N/(4\ell)$ points from $P$, and every surface in $\Sigma$ crosses at most $c_2\ell^{1 - 1/(2d)}$ cells $\Delta_i$, where $c_2 = c_2(d,t)$.
\end{lemma}
\begin{proof}
We first find $\Delta_1$ and $P_1$ as follows. Let $\ell > \log m$ and let $c_1$ be as defined in Lemma \ref{cut2}. Given a family $\Sigma$ of $m$ surfaces in $\RR^d$, we apply Lemma \ref{cut2} with parameter $r = \left(\ell/c_1\right)^{1/2d}$, and decompose $\RR^d$ into at most $\ell$ cells, such that each cell is crossed by at most $\frac{m}{(\ell/c_1)^{1/2d}}$ surfaces from $\Sigma$. By the pigeonhole principle, there is a cell $\Delta_1$ that contains at least $N/\ell$ points from $P$. Let $P_1$ be a subset of exactly $\lfloor N/\ell \rfloor$ points in $\Delta_1\cap P$. Now for each surface from $\Sigma$ that crosses $\Delta_1$, we ``double it" by adding another copy of that surface to our collection. This gives us a new family of surfaces $\Sigma_1$ such that
$$|\Sigma_1| \leq m + \frac{m}{(\ell/c_1)^{1/2d}} = m\left(1 + \frac{1}{(\ell/c_1)^{1/2d}}\right).$$
After obtaining subsets $P_1,\ldots,P_i$ such that $|P_j|= \lfloor \frac{N}{\ell}(1 - \frac{1}{\ell})^{j-1}\rfloor$ for $1\leq j \leq i$, cells $\Delta_1,\ldots,\Delta_i$, and a family of surfaces $\Sigma_i$ such that
$$|\Sigma_i| \leq m\left(1 + \frac{1}{(\ell/c_1)^{1/2d}}\right)^i,$$
\noindent we obtain $P_{i + 1}$, $\Delta_{i + 1}$, $\Sigma_{i + 1}$ as follows. Given $\Sigma_i$, we apply Lemma \ref{cut2} with the same parameter $r = \left(\ell/c_1\right)^{1/2d}$, and decompose $\RR^d$ into at most $\ell$ cells, such that each cell is crossed by at most $\frac{|\Sigma_i|}{(\ell/c_1)^{1/2d}}$ surfaces from $\Sigma_i$. Let $P' = P\setminus(P_1\cup \cdots \cup P_i)$. By the pigeonhole principle, there is a cell $\Delta_{i + 1}$ that contains at least
$$\begin{array}{ccl}
\frac{|P'|}{\ell} & \geq & \left(N - \sum\limits_{j = 1}^i \frac{N}{\ell}(1 - \frac{1}{\ell})^{j - 1}\right)/\ell \\\\
& = & \frac{N}{\ell}\left( 1- \frac{1}{\ell}\sum\limits_{j = 1}^i(1 - \frac{1}{\ell})^{j - 1}\right) \\\\
& = & \frac{N}{\ell}\left(1 - \frac{1}{\ell}\right)^{i}
\end{array}$$
\noindent points from $P'$. Let $P_{i + 1}$ be a subset of exactly $\lfloor \frac{N}{\ell}\left(1 - 1/\ell\right)^{i} \rfloor$ points in $\Delta_{i + 1}\cap P'$. Finally, for each surface from $\Sigma_i$ that crosses $\Delta_{i + 1}$, we ``double it" by adding another copy of that surface to our collection, giving us a new family of surfaces $\Sigma_{i + 1}$ such that
$$\begin{array}{ccl}
|\Sigma_{i + 1}| & \leq & |\Sigma_i| + \frac{|\Sigma_i|}{(\ell/c_1)^{1/2d}} \\\\
& = & |\Sigma_i|\left( 1 +\frac{1}{(\ell/c_1)^{1/2d}}\right) \\\\
& \leq & m\left(1 + \frac{1}{(\ell/c_1)^{1/2d}}\right)^{i + 1}.
\end{array}$$
\noindent Notice that $|P_i| \geq N/(4\ell)$ for $i \leq \ell$. Once we have obtained subsets $P_1,\ldots,P_{\ell}$ and cells $\Delta_1,\ldots,\Delta_{\ell}$, it is easy to see that each surface in $\Sigma$ crosses at most $O(\ell^{1 - 1/(2d)})$ cells $\Delta_i$. Indeed, suppose $Z \in \Sigma$ crosses $\kappa$ cells. Then by the arguments above, there must be $2^{\kappa}$ copies of $Z$ in $\Sigma_{\ell}$. Hence we have
$$2^{\kappa} \leq m\left(1 + \frac{1}{(\ell/c_1)^{1/2d}}\right)^{\ell} \leq me^{c_1\ell^{1 - 1/2d}}.$$
\noindent Since $\ell \geq \log m$, we have
$$\kappa \leq c_2\ell^{1 - 1/2d},$$
\noindent for sufficiently large $c_2 = c_2(d,t)$.\end{proof}
\section{Multiple binary relations}\label{multisection}
Let $P$ be a set of $N$ points in $\RR^d$, and let $E_1,\ldots,E_m\subset {P\choose 2}$ be binary semi-algebraic relations on $P$ such that $E_i$ has complexity at most $t$. The goal of this section is to find a large subset $P'\subset P$ such that ${P'\choose 2}\cap E_i = \emptyset$ for all $i$, given that the clique numbers of the graphs $G_i = (P,E_i)$ are small.
First we recall a classic theorem of Dilworth (see also \cite{monot}). Let $G =(V,E)$ be a graph whose vertices are ordered $V = \{v_1,\ldots,v_N\}$. We say that $E$ is \emph{transitive} on $V$ if for $1 \leq i_1 < i_2 < i_3\leq N$, $(v_{i_1},v_{i_2}), (v_{i_2},v_{i_3}) \in E$ implies that $(v_{i_1},v_{i_3}) \in E$.
\begin{theorem}[Dilworth]\label{dilworth}
Let $G = (V,E)$ be an $N$-vertex graph whose vertices are ordered $V = \{v_1,\ldots,v_N\}$, such that $E$ is transitive on $V$. If $G$ has clique number $\omega$, then $G$ contains an independent set of order $N/\omega$.
\end{theorem}
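
One standard argument behind Theorem \ref{dilworth} is algorithmic: assign to each vertex the length of a longest $E$-chain ending at it; by transitivity chains are cliques, so these values lie in $\{1,\ldots,\omega\}$, no two related vertices share the same value, and some value class has size at least $N/\omega$. The following Python sketch (with function and variable names of our choosing, for illustration only) extracts such an independent set.
\begin{verbatim}
from collections import defaultdict

def large_independent_set(n, edges):
    # n:     vertices v_0 < v_1 < ... < v_{n-1} in the given order
    # edges: set of pairs (i, j), i < j, assumed transitive on this order
    depth = [1] * n                  # length of a longest E-chain ending at j
    for j in range(n):
        for i in range(j):
            if (i, j) in edges:
                depth[j] = max(depth[j], depth[i] + 1)
    classes = defaultdict(list)
    for vertex, d in enumerate(depth):
        classes[d].append(vertex)
    # vertices in the same class are pairwise unrelated
    return max(classes.values(), key=len)

# toy example: transitive relation on 6 vertices with clique number 2
print(large_independent_set(6, {(0, 1), (2, 3), (4, 5)}))   # e.g. [0, 2, 4]
\end{verbatim}
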
\begin{lemma}\label{colors}
For integers $m \geq 2$ and $d,t \geq 1$, let $P$ be a set of $N$ points in $\mathbb{R}^d$ equipped with (symmetric) semi-algebraic relations $E_1,\ldots,E_m\subset{P\choose 2}$, where each $E_i$ has complexity at most $t$. Then there is a subset $P'\subset P$ of size $N^{1/(c_3\log m)}$, where $c_3 = c_3(d,t)$, and a fixed ordering on $P'$ such that each relation $E_i$ is transitive on $P'$.
\end{lemma}
\begin{proof}
We proceed by induction on $N$. Let $c_3$ be a sufficiently large number depending only on $d$ and $t$ that will be determined later. For each relation $E_i \subset {P\choose 2}$, let $f_{i,1},\ldots,f_{i,t}$ be polynomials of degree at most $t$ and let $\Phi_i$ be a boolean function such that
$$(p,q) \in E_i \hspace{.5cm}\Leftrightarrow\hspace{.5cm} \Phi_i(f_{i,1}(p,q) \geq 0,\ldots,f_{i,t}(p,q) \geq 0) = 1.$$
For each $p \in P$, $i \in \{1,\ldots,m\}$, and $j \in \{1,\ldots,t\}$, we define the surface $Z_{p,i,j} = \{x \in \RR^d: f_{i,j}(p,x) = 0 \}$. Then let $\Sigma$ be the family of $Nmt$ surfaces in $\RR^d$ defined by
$$\Sigma = \{Z_{p,i,j} : p \in P, 1\leq i \leq m, 1\leq j \leq t\}.$$
Applying Lemma \ref{cut2} to $\Sigma$ with parameter $r = (mt)^2$, we obtain a decomposition of $\RR^d$ into at most $c_1(mt)^{4d}$ cells such that each cell has the property that at most $N/(mt)$ surfaces from $\Sigma$ cross it. We note that $c_1 = c_1(d,t)$ is defined in Lemma \ref{cut2}. By the pigeonhole principle, there is a cell $\Delta$ in the decomposition such that $|\Delta\cap P| \geq N/(c_1(mt)^{4d})$. Set $P_1 = \Delta\cap P$.
Let $P_2\subset P\setminus P_1$ be such that each point in $P_2$ gives rise to $mt$ surfaces that do not cross $\Delta$. More precisely,
$$P_2 = \{p \in P\setminus P_1: Z_{p,i,j} \textnormal{ does not cross }\Delta, \forall i,j\}.$$
\noindent Since $m\geq 2$ by assumption, and $c_1 \geq 1$ from Lemma \ref{cut2}, we have $$|P_2| \geq N - \frac{N}{mt} - \frac{N}{c_1(mt)^{4d}} \geq \frac{N}{4}.$$ We fix a point $p_0 \in P_1$. Then for each $q\in P_2$, let $\sigma(q) \in \{-1,0,+1\}^{mt}$ be the sign pattern of the $(mt)$-tuple $(f_{1,1}(p_0,q),f_{1,2}(p_0,q),\ldots,f_{m,t}(p_0,q))$. By Theorem \ref{milnor}, there are at most $\left(\frac{50mt^2}{d}\right)^d$ distinct sign vectors $\sigma$. By the pigeonhole principle, there is a subset $P_3 \subset P_2$ such that $$|P_3| \geq \frac{|P_2|}{(50/d)^dm^dt^{2d}},$$ and for any two points $q,q' \in P_3$, we have $\sigma(q) = \sigma(q')$. That is, $q$ and $q'$ give rise to vectors with the same sign pattern. Therefore, for any $p,p' \in P_1$ and $q,q' \in P_3$, we have $(p,q) \in E_i$ if and only if $(p',q') \in E_i$, for all $i \in \{1,\ldots,m\}$.
Let $c_4 = c_4(d,t)$ be sufficiently large such that $|P_1|,|P_3| \geq \frac{N}{c_4m^{4d}}.$ By the induction hypothesis, we can find subsets $P_4 \subset P_1, P_5\subset P_3$, such that
$$|P_4|,|P_5| \geq \left(\frac{N}{c_4m^{4d}}\right)^{\frac{1}{c_3\log m}}\geq \frac{N^{\frac{1}{c_3\log m}}}{2},$$
\noindent where $c_3 = c_3(d,t)$ is sufficiently large, and there is an ordering on $P_4$ (and on $P_5)$ such that each $E_i$ is transitive on $P_4$ (and on $P_5$). Set $P' = P_4\cup P_5$, which implies $|P'| \geq N^{\frac{1}{c_3\log m}}$. We will show that $P'$ has the desired properties. Let $\pi$ and $\pi'$ be the orderings on $P_4$ and $P_5$ respectively, such that $E_i$ is transitive on $P_4$ and on $P_5$, for every $i\in \{1,\ldots,m\}$. We order the elements in $P' = \{p_1,\ldots,p_{|P'|}\}$ by using $\pi$ and $\pi'$, such that all elements in $P_5$ come after all elements in $P_4$.
In order to show that $E_i$ is transitive on $P'$, it suffices to examine triples going across $P_4$ and $P_5$. Let $p_{j_1},p_{j_2} \in P_4$ and $p_{j_3} \in P_5$ such that $j_1 < j_2 < j_3$. By construction of $P_4$ and $P_5$, if $(p_{j_1},p_{j_2}),(p_{j_2},p_{j_3}) \in E_i$, then we have $(p_{j_1},p_{j_3}) \in E_i$. Likewise, suppose $p_{j_1}\in P_4$ and $p_{j_2},p_{j_3} \in P_5$. Then again by construction of $P_4$ and $P_5$, if $(p_{j_1},p_{j_2}),(p_{j_2},p_{j_3}) \in E_i$, then we have $(p_{j_1},p_{j_3}) \in E_i$. Hence $E_i$ is transitive on $P'$, for all $i \in \{1,\ldots,m\}$, and this completes the proof.\end{proof}
By combining the two previous results, we have the following.
\begin{lemma}\label{apply}
For $m \geq 2$ and $d,t \geq 1$, let $P$ be a set of $N$ points in $\mathbb{R}^d$ equipped with (symmetric) semi-algebraic relations $E_1,\ldots,E_m\subset {P\choose 2}$, where each $E_i$ has complexity at most $t$. If graph $G_i = (P,E_i)$ has clique number $\omega_i$, then there is a subset $P'\subset P$ of size $\frac{N^{1/(c_3\log m)}}{\omega_1\cdots \omega_m}$, where $c_3 = c_3(d,t)$ is defined above, such that ${P'\choose 2}\cap E_i = \emptyset$ for all $i$.
\end{lemma}
\begin{proof}
By applying Lemma \ref{colors}, we obtain a subset $P_1 \subset P$ of size $N^{\frac{1}{c_3\log m}}$, and an ordering on $P_1$ such that $E_i$ is transitive on $P_1$ for all $i$. Then by an $m$-fold application of Theorem \ref{dilworth}, the statement follows.\end{proof}
\section{Proof of Theorem \ref{main}}\label{proofsection}
Let $P$ be a point set in $\RR^d$ and let $E\subset {P\choose 3}$ be a semi-algebraic relation on $P$. We say that $(P,E)$ is $K_s^{(3)}$-free if every collection of $s$ points in $P$ contains a triple not in $E$. Suppose we have $\ell$ disjoint subsets $P_1,\ldots,P_{\ell} \subset P$. For $1 \leq i_1 < i_2 < i_3 \leq \ell$, we say that the triple $(P_{i_1},P_{i_2},P_{i_3})$ is \emph{homogeneous} if $(p_1,p_2,p_3) \in E$ for all $p_1\in P_{i_1}, p_2 \in P_{i_2},p_3 \in P_{i_3}$, or $(p_1,p_2,p_3) \not\in E$ for all $p_1\in P_{i_1}, p_2 \in P_{i_2},p_3 \in P_{i_3}$. For $p_1,p_2 \in P_1\cup \cdots \cup P_{\ell}$ and $i \in \{1,\ldots,\ell\}$, we say that the triple $(p_1,p_2,i)$ is \emph{good}, if $(p_1,p_2,p_3) \in E$ for all $p_3 \in P_i$, or $(p_1,p_2,p_3) \not\in E$ for all $p_3 \in P_i$. We say that the triple $(p_1,p_2,i)$ is \emph{bad} if $(p_1,p_2,i)$ is not good and $p_1,p_2 \not\in P_i$.
\begin{lemma}\label{decomp2}
Let $P$ be a set of $N$ points in $\RR^d$ and let $E\subset {P\choose 3}$ be a (symmetric) semi-algebraic relation on $P$ such that $E$ has complexity at most $t$. Then for $r = \frac{N^{1/(30d)}}{tc_2}$, where $c_2$ is defined in Lemma \ref{decomp}, there are disjoint subsets $P_1,\ldots,P_{r}\subset P$ such that
\begin{enumerate}
\item $|P_i| \geq \frac{N^{1/(30d)}}{tc_2}$,
\item all triples $(P_{i_1},P_{i_2},P_{i_3})$, $1\leq i_1 < i_2 <i_3\leq r$, are homogeneous, and
\item all triples $(p,q,i)$, where $i\in \{1,\ldots,r\}$ and $p,q \in (P_1\cup \cdots \cup P_r)\setminus P_i$, are good.
\end{enumerate}
\end{lemma}
\begin{proof}
We can assume that $N > (tc_2)^{30d}$, since otherwise the statement is trivial. Since $E$ is semi-algebraic with complexity $t$, there are polynomials $f_1,\ldots,f_t$ of degree at most $t$, and a Boolean function $\Phi$ such that
$$(p_1,p_2,p_3) \in E\hspace{.5cm}\Leftrightarrow\hspace{.5cm} \Phi(f_1(p_1,p_2,p_3) \geq 0,\ldots,f_t(p_1,p_2,p_3) \geq 0) = 1.$$
\noindent For each $p,q \in P$ and $i \in \{1,\ldots,t\}$, we define the surface $Z_{p,q,i} = \{x \in \RR^d: f_i(p,q,x) = 0\}$. Then we set
$$\Sigma = \{Z_{p,q,i} : p,q \in P, 1\leq i\leq t\}.$$
Thus we have $|\Sigma| = N^2t$. Next we apply Lemma \ref{decomp} to $P$ and $\Sigma$ with parameter $\ell = \sqrt{N}$, and obtain subsets $Q_1,\ldots,Q_{\ell}$ and cells $\Delta_1,\ldots,\Delta_{\ell}$, such that $Q_i\subset \Delta_i$, $|Q_i| = \lfloor\sqrt{N}/4\rfloor$, and each surface in $\Sigma$ crosses at most $c_2N^{1/2 - 1/(4d)}$ cells $\Delta_i$. We note that $c_2 = c_2(d,t)$ is defined in Lemma \ref{decomp} and $\sqrt{N} \geq \log(tN^2)$. Set $Q = Q_1\cup \cdots \cup Q_{\ell}$. Each pair $(p,q) \in {Q\choose 2}$ gives rise to $2t$ surfaces in $\Sigma$. By Lemma \ref{decomp}, these $2t$ surfaces cross in total at most $2tc_2N^{1/2 - 1/(4d)}$ cells $\Delta_i$. Hence there are at most $2tc_2N^{5/2 - 1/(4d)}$ bad triples of the form $(p,q,i)$, where $i \in \{1,\ldots,\sqrt{N}\}$ and $p,q \in Q\setminus Q_i$. Moreover, there are at most $2tc_2N^{2 - 1/(4d)}$ bad triples $(p,q,i)$, where both $p$ and $q$ lie in the same part $Q_j$ and $j \neq i$.
We uniformly at random pick $r = \frac{N^{1/(30d)}}{tc_2}$ subsets (parts) from the collection $\{Q_1,\ldots,Q_{\ell}\}$, and $r$ vertices from each of the subsets that were picked. For a bad triple $(p,q,i)$ with $p$ and $q$ in distinct subsets, the probability that $(p,q,i)$ survives is at most
$$\left(\frac{r}{\sqrt{N}}\right)^3\left(\frac{r}{\sqrt{N}/4}\right)^2 = \frac{16}{(tc_2)^5}N^{1/(6d) - 5/2}.$$
\noindent For a bad triple $(p,q,i)$ with $p,q$ in the same subset $Q_j$, where $j \neq i$, the probability that the triple $(p,q,i)$ survives is at most
$$\left(\frac{r}{\sqrt{N}}\right)^2\left(\frac{r}{\sqrt{N}/4}\right)^2 = \frac{16}{(tc_2)^4}N^{2/(15d) - 2}.$$
\noindent Therefore, the expected number of bad triples in our random subset is at most
$$\left(\frac{16}{(tc_2)^5}N^{1/(6d) - 5/2}\right)\left(tc_2N^{5/2 - 1/(4d)}\right) + \left(\frac{16}{(tc_2)^4}N^{2/(15d) - 2}\right)\left(tc_2N^{2 - 1/(4d)}\right) < 1.$$
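\noindent (For the reader's convenience, here is the exponent arithmetic behind this display, which is only a routine check:
$$\left(\frac{16}{(tc_2)^5}N^{\frac{1}{6d} - \frac{5}{2}}\right)\left(tc_2N^{\frac{5}{2} - \frac{1}{4d}}\right) = \frac{16}{(tc_2)^4}N^{-\frac{1}{12d}}, \qquad \left(\frac{16}{(tc_2)^4}N^{\frac{2}{15d} - 2}\right)\left(tc_2N^{2 - \frac{1}{4d}}\right) = \frac{16}{(tc_2)^3}N^{-\frac{7}{60d}}.$$
Both exponents of $N$ are negative, which, together with the lower bound assumed on $N$, is what drives the expected count below $1$.)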
\noindent Hence we can find disjoint subsets $P_1,\ldots,P_r$, such that $|P_i| \geq r = \frac{N^{1/(30d)}}{tc_2}$, and there are no bad triples $(p,q,i)$, where $i \in \{1,\ldots,r\}$ and $p,q \in (P_1\cup \cdots \cup P_{r})\setminus P_i$.
It remains to show that every triple $(P_{i_1},P_{i_2},P_{i_3})$ is homogeneous for $1\leq i_1 < i_2 < i_3\leq r$. Let $p_1\in P_{i_1}, p_2 \in P_{i_2}, p_3 \in P_{i_3}$ and suppose $(p_1,p_2,p_3) \in E$. Then for any choice $q_1\in P_{i_1}, q_2 \in P_{i_2}, q_3 \in P_{i_3}$, we also have $(q_1,q_2,q_3) \in E$. Indeed, since the triple $(p_1,p_2,i_3)$ is good, this implies that $(p_1,p_2,q_3) \in E$. Since the triple $(p_1,q_3,i_2)$ is also good, we have $(p_1,q_2,q_3) \in E$. Finally since $(q_2,q_3,i_1)$ is good, we have $(q_1,q_2,q_3) \in E$. Likewise, if $(p_1,p_2,p_3) \not\in E$, then $(q_1,q_2,q_3) \not\in E$ for any $q_1\in P_{i_1}, q_2 \in P_{i_2}, q_3 \in P_{i_3}$.\end{proof}
We are finally ready to prove Theorem \ref{main}, which follows immediately from the following theorem.
\begin{theorem}
Let $P$ be a set of $N$ points in $\RR^d$ and let $E\subset {P\choose 3}$ be a (symmetric) semi-algebraic relation on $P$ such that $E$ has complexity at most $t$. If $(P,E)$ is $K^{(3)}_s$-free, then there exists a subset $P'\subset P$ such that ${P'\choose 3}\cap E = \emptyset$ and
$$|P'| \geq 2^{\frac{(\log \log N)^2}{c^s\log\log\log N}},$$
\noindent where $c = c(d,t)$.
\end{theorem}
\begin{proof}
The proof is by induction on $N$ and $s$. The base cases are $s = 3$ or $N \leq (100tc_2)^{30d}$, where $c_2$ is defined in Lemma \ref{decomp}. When $N \leq (100tc_2)^{30d}$, the statement holds trivially for sufficiently large $c = c(d,t)$. If $s = 3$, then again the statement follows immediately by taking $P' = P$.
Now assume that the statement holds if $s ' \leq s, N' \leq N$ and not both inequalities are equalities. We apply Lemma \ref{decomp2} to $(P,E)$ and obtain disjoint subsets $P_1,\ldots,P_r$, where $r = \frac{N^{1/(30d)}}{tc_2}$, such that $|P_i| \geq \frac{N^{1/(30d)}}{tc_2}$, every triple of parts $(P_{i_1},P_{i_2},P_{i_3})$ is homogeneous, and every triple $(p,q,i)$ is good where $i \in \{1,\ldots,r\}$ and $p,q \in (P_1\cup\cdots\cup P_r) \setminus P_i$.
Let $P_0$ be the set of $\frac{N^{1/(30d)}}{tc_2}$ points obtained by selecting one point from each $P_i$. Since $(P_0,E)$ is $K^{(3)}_s$-free, we can apply the induction hypothesis on $P_0$, and find a set of indices $I = \{i_1,\ldots,i_m\}$ such that
$$\log |I| \geq \frac{\left(\log\log \frac{N^{1/(30d)}}{tc_2}\right)^2}{{c^s\log\log \log \frac{N^{1/(30d)}}{tc_2}} } \geq (1/2)\log \log N,$$
\noindent and for every triple $i_1 < i_2 < i_3$ in $I$ all triples with one point in each $P_{i_j}$ do not satisfy $E$. Hence we may assume $m = \sqrt{\log N}$, and let $Q_j = P_{i_j}$ for $1\leq j \leq m$.
For each subset $Q_i$, we define binary semi-algebraic relations $E_{i,j}\subset {Q_i\choose 2}$, where $j \neq i $, as follows. Since $E\subset {P\choose 3}$ is semi-algebraic with complexity $t$, there are $t$ polynomials $f_1,\ldots,f_t$ of degree at most $t$, and a Boolean function $\Phi$ such that $(p_1,p_2,p_3) \in E$ if and only if
$$\Phi(f_1(p_1,p_2,p_3) \geq 0,\ldots,f_t(p_1,p_2,p_3) \geq 0) = 1.$$
\noindent Fix a point $q_0 \in Q_j$, where $j \neq i$. Then for $p_1,p_2 \in Q_i$, we have $(p_1,p_2) \in E_{i,j}$ if and only if
$$\Phi(f_1(p_1,p_2,q_0) \geq 0,\ldots,f_t(p_1,p_2,q_0) \geq 0) = 1.$$
Suppose there are $2^{(\log N)^{1/4}}$ vertices in $Q_i$ that induce a clique in the graph $G_{i,j} = (Q_i,E_{i,j})$. Then these vertices would induce a $K_{s-1}^{(3)}$-free subset in the original (hypergraph) $(P,E)$. By the induction hypothesis, we can find a subset $Q_i' \subset Q_i$ such that
$$|Q'_i| \geq 2^{\frac{((1/4)\log\log N)^2}{c^{s-1}\log\log\log N}} \geq 2^{\frac{(\log \log N)^2}{c^s\log\log\log N}},$$
\noindent for sufficiently large $c$, such that ${Q'_i\choose 3}\cap E = \emptyset$ and we are done. Hence we can assume that each graph $G_{i,j} = (Q_i,E_{i,j})$ has clique number at most $2^{(\log N)^{1/4}}$. By applying Lemma \ref{apply} to each $Q_i$, where $Q_i$ is equipped with $m-1$ semi-algebraic relations $E_{i,j}$, $j \neq i$, we can find subsets $T_i\subset Q_i$ such that
$$|T_i| \geq \frac{|Q_i|^{1/(c_3\log m)}}{2^{(\log N)^{1/4}\sqrt{\log N}}} = \frac{2^{\frac{\log N}{30dc_3\log(\sqrt{\log N})}}}{(tc_2)^{1/c_3\log m}2^{(\log N)^{3/4}}} \geq 2^{\frac{\log N}{c_5\log\log N}},$$
\noindent where $c_5 = c_5(d,t)$, and ${T_i\choose 2} \cap E_{i,j} = \emptyset$ for all $j \neq i$. Therefore, we now have subsets $T_1,\ldots,T_m$, such that
\begin{enumerate}
\item $m = \sqrt{\log N}$,
\item for any triple $(T_{i_1},T_{i_2},T_{i_3})$, $1 \leq i_1 < i_2 < i_3\leq m$, every triple with one vertex in each $T_{i_j}$ is not in $E$,
\item for any pair $(T_{i_1},T_{i_2})$, $1 \leq i_1 < i_2 \leq m$, every triple with two vertices in $T_{i_1}$ and one vertex in $T_{i_2}$ is not in $E$, and every triple with two vertices in $T_{i_2}$ and one vertex in $T_{i_1}$ is also not in $E$.
\end{enumerate}
\noindent By applying the induction hypothesis to each $(T_i,E)$, we obtain a collection of subsets $U_i \subset T_i$ such that
$$\log |U_i| \geq \frac{\left(\log \left(\frac{\log N}{c_5\log\log N}\right)\right)^2}{c^s\log\log\left(\frac{\log N}{c_5\log\log N} \right)} \geq \frac{(\log\log N - \log(c_5\log\log N))^2}{c^s\log\log\log N},$$
\noindent and ${U_i\choose 3}\cap E = \emptyset$. Let $P' = \bigcup\limits_{i = 1}^m U_i$. Then by above we have ${P'\choose 3}\cap E = \emptyset$ and
\begin{eqnarray*}
\log |P'| & \geq & \frac{(\log\log N - \log(c_5\log\log N))^2}{c^s\log\log\log N} + \frac{1}{2}\log\log N \\\\
& \geq & \frac{(\log\log N)^2 - 2 (\log\log N)\log(c_5\log\log N) + (\log(c_5\log\log N))^2}{c^s\log\log\log N} + \frac{1}{2}\log\log N \\\\
& \geq & \frac{(\log\log N)^2}{c^s\log\log \log N},
\end{eqnarray*}
\noindent for sufficiently large $c = c(d,t)$.\end{proof}
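(For the reader's convenience, one way to unpack ``for sufficiently large $c=c(d,t)$'' in the last step: after discarding the nonnegative square term, the required inequality follows from
$$\frac{2(\log\log N)\log(c_5\log\log N)}{c^s\log\log\log N}\;\leq\;\frac{1}{2}\log\log N,\qquad\text{equivalently}\qquad c^s\log\log\log N\;\geq\;4\log c_5 + 4\log\log\log N,$$
which holds once $c^s\geq 4+4\log c_5$ and $\log\log\log N\geq 1$, the smaller values of $N$ being covered by the base case.)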
\section{Application: One-sided hyperplanes}\label{secosh}
Let us consider a finite set $H$ of hyperplanes in $\RR^d$ in general position, that is, every $d$ members in $H$ intersect at a distinct point. Let $OSH_d(s,n)$ denote the smallest integer $N$ such that every set $H$ of $N$ hyperplanes in $\mathbb{R}^d$ in general position contains $s$ members $H_1$ such that the vertex set of the arrangement of $H_1$ lies above the $x_d = 0$ hyperplane, or contains $n$ members $H_2$ such that the vertex set of the arrangement of $H_2$ lies below the $x_d = 0$ hyperplane.
In 1992, Matou\v{s}ek and Welzl \cite{welzl} observed that $OSH_2(s,n) = (s-1)(n-1) + 1$. Dujmovi\'c and Langerman \cite{duj} used the existence of $OSH_d(n,n)$ to prove a ham-sandwich cut theorem for hyperplanes. Again by adapting the Erd\H os-Rado argument, Conlon et al.~\cite{suk} showed that for $d\geq 3$,
\begin{equation}\label{oldosh}
OSH_d(s,n) \leq \twr_{d-1}(c_6sn\log n),
\end{equation}
\noindent where $c_6$ is a constant that depends only on $d$. See Eli\'a\v{s} and Matou\v{s}ek \cite{matousek} for more related results, including lower bound constructions.
Since each hyperplane $h_i\in H$ is specified by the linear equation
$$a_{i,1}x_1 + \cdots + a_{i,d}x_d = b_i,$$
\noindent we can represent $h_i\in H$ by the point $h^{\ast}_i \in\mathbb{R}^{d + 1}$ where $h^{\ast}_i = (a_{i,1},\ldots,a_{i,d},b_i)$ and let $P = \{h^{\ast}_i:h_i \in H\}$. Then we define a relation $E\subset {P\choose d}$ such that $(h^{\ast}_{i_1},\ldots,h^{\ast}_{i_d}) \in E$ if and only if $h_{i_1}\cap\cdots\cap h_{i_d}$ lies above the hyperplane $x_d = 0$ (i.e. the $d$-th coordinate of the intersection point is positive). Clearly, $E$ is a semi-algebraic relation with complexity at most $t = t(d)$. Therefore, as an application of Theorem \ref{main} and Corollary \ref{maincor}, we make the following improvement on (\ref{oldosh}).
\begin{theorem}
For fixed $s \geq 4$, we have $OSH_3(s,n) \leq 2^{n^{o(1)}}$. For fixed $d\geq 4$ and $s\geq d+1$, we have
$$OSH_d(s,n) \leq \twr_{d-1}(n^{o(1)}).$$
\end{theorem}
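The fact that $E$ is semi-algebraic of complexity at most $t(d)$ can be made explicit as follows (a routine check; the matrices below are introduced only for this remark). Writing $M$ for the $d\times d$ matrix whose $j$-th row is $(a_{i_j,1},\ldots,a_{i_j,d})$, and $M_d$ for the matrix obtained from $M$ by replacing its $d$-th column with $(b_{i_1},\ldots,b_{i_d})^{T}$, Cramer's rule gives the $d$-th coordinate of the intersection point $h_{i_1}\cap\cdots\cap h_{i_d}$ as
$$x_d \;=\; \frac{\det(M_d)}{\det(M)},$$
so that $(h^{\ast}_{i_1},\ldots,h^{\ast}_{i_d})\in E$ if and only if $\det(M_d)\cdot\det(M) > 0$: a single polynomial inequality of degree at most $2d$ in the coordinates of the points $h^{\ast}_{i_j}$ (general position guarantees $\det(M)\neq 0$).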
\section{Monochromatic triangles}\label{triangles}
In this section, we will prove Theorem \ref{color}.
\begin{proof}[Proof of Theorem \ref{color}] We proceed by induction on $m$. The base case when $m = 1$ is trivial. Now assume that the statement holds for $m' < m$. Set $N = 2^{cm\log\log m}$, where $c = c(d,t)$ will be determined later, and let $E_1,\ldots,E_m\subset {P\choose 2}$ be semi-algebraic relations on $P$ such that ${P\choose 2} = E_1\cup \cdots \cup E_m$, and each $E_i$ has complexity at most $t$. For the sake of contradiction, suppose $P$ does not contain three points such that every pair of them is in $E_i$ for some fixed $i$.
For each relation $E_i$, there are $t$ polynomials $f_{i,1},\ldots,f_{i,t}$ of degree at most $t$, and a Boolean function $\Phi_i$ such that
$$(p,q) \in E_i \hspace{.5cm}\Leftrightarrow\hspace{.5cm} \Phi_i(f_{i,1}(p,q) \geq 0,\ldots,f_{i,t}(p,q)\geq 0) = 1.$$
For $ 1 \leq i \leq m, 1\leq j \leq t, p \in P$, we define the surface $Z_{i,j,p} = \{ x \in \RR^d: f_{i,j}(p,x) = 0\}$, and let $$\Sigma = \{ Z_{i,j,p} : 1 \leq i \leq m, 1\leq j \leq t, p \in P\}.$$
\noindent Hence $|\Sigma| = mtN$. We apply Lemma \ref{cut2} to $\Sigma$ with parameter $r = 2tm$, and decompose $\RR^d$ into $c_1(2tm)^{2d}$ regions $\Delta_i$, where $c_1 = c_1(t,d)$ is defined in Lemma \ref{cut2}, such that each region $\Delta_i$ is crossed by at most $tmN/r = N/2$ members in $\Sigma$. By the pigeonhole principle, there is a region $\Delta\subset \RR^d$, such that $|\Delta\cap P| \geq \frac{N}{c_1(2tm)^{2d}}$, and at most $N/2$ members in $\Sigma$ crosses $\Delta$. Let $P_1$ be a set of exactly $\left\lfloor \frac{N}{c_1(2tm)^{2d}}\right\rfloor$ points in $P\cap \Delta$, and let $P_2$ be the set of points in $P\setminus P_1$ that do not give rise to a surface that crosses $\Delta$. Hence
$$|P_2| \geq N - \frac{N}{c_1(2tm)^{2d}} - \frac{N}{2} \geq \frac{N}{4}.$$
Therefore, each point $p \in P_2$ has the property that $p\times P_1 \subset E_i$ for some fixed $i$. We define the function $\chi:P_2\rightarrow \{1,\ldots,m\}$, such that $\chi(p) = i$ if and only if $p\times P_1 \subset E_i$. Set $I = \{\chi(p): p \in P_2\}$ and $m_0 = |I|$, that is, $m_0$ is the number of distinct relations (colors) between the sets $P_1$ and $P_2$. Now the proof falls into 2 cases.
\medskip
\noindent \emph{Case 1.} Suppose $m_0 > \log m$. By the assumption, every pair of points in $P_1$ is in $E_i$ where $i \in \{1,\ldots,m\}\setminus I$. By the induction hypothesis, we have
$$\frac{2^{cm\log\log m}}{c_1(2tm)^{2d}} \leq |P_1|\leq 2^{c(m - m_0)\log\log m}.$$
\noindent Hence
$$cm_0\log\log m \leq \log(c_1(2tm)^{2d}) \leq 2d\log (c_12tm),$$
\noindent which implies
$$m_0 \leq \frac{2d\log(c_12tm)}{c\log\log m},$$
\noindent and we have a contradiction for sufficiently large $c = c(d,t)$.
\medskip
\noindent \emph{Case 2.} Suppose $m_0 \leq \log m$. By the pigeonhole principle, there is a subset $P_3\subset P_2$, such that $|P_3| \geq \frac{N}{4m_0}$ and $P_1\times P_3\subset E_i$ for some fixed $i$. Hence every pair of points $p,q \in P_3$ satisfies $(p,q) \not\in E_i$, for some fixed $i$. By the induction hypothesis, we have
$$\frac{2^{cm\log\log m}}{4m_0} \leq |P_3| \leq 2^{c(m-1)\log\log m}.$$
\noindent Therefore
$$c\log\log m \leq \log(4m_0) \leq \log(4\log(m)),$$
\noindent which is a contradiction since $c$ is sufficiently large. This completes the proof of Theorem \ref{color}.\end{proof}
We note that in \cite{fps15}, Fox, Pach, and Suk extended the arguments above to show that $R_2^{d,t}(s;m) \leq 2^{O(sm\log\log m)}$.
\subsection{Lower bound construction and Schur numbers}\label{schursec}
Before we prove Theorem \ref{lowermulti}, let us recall a classic theorem of Schur \cite{schur} which is considered to be one of the earliest applications of Ramsey Theory. A set $P\subset \RR$ is said to be \emph{sum-free} if for any two (not necessarily distinct) elements $x,y \in P$, their sum $x+y$ is not in $P$. The Schur number $S(m)$ is defined to be the maximum integer $N$ for which the integers $\{1,\ldots,N\}$ can be partitioned into $m$ sum-free sets.
Given a partition $\{1,\ldots,N\} =P_1\cup\cdots\cup P_m$ into $m$ parts such that $P_i$ is sum-free, we can define an $m$-coloring on the edges of a complete $(N+1)$-vertex graph which does not contain a monochromatic triangle as follows. Let $V = \{1,\ldots,N+1\}$ be the vertex set, and we define the coloring $\chi:{V\choose 2} \rightarrow m$ by $\chi(x,y) = i$ iff $|x-y| \in P_{i}$. Now suppose for the sake of contradiction that there are vertices $x,y,z$ that induce a monochromatic triangle, say with color $i$, such that $x < y < z$. Then we have $y-x, z-y, z-x \in P_i$ and $(y -x) + (z - y) = (z - x)$, which is a contradiction since $P_i$ is sum free. Therefore $S(m) < R_2(3;m)$.
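For a concrete small case: the partition $\{1,2,3,4\}=\{1,4\}\cup\{2,3\}$ consists of two sum-free sets, since
$$1+1,\;1+4,\;4+4\;\notin\;\{1,4\},\qquad 2+2,\;2+3,\;3+3\;\notin\;\{2,3\},$$
so $S(2)\geq 4$, and the associated $2$-coloring of the complete graph on $\{1,\ldots,5\}$ has no monochromatic triangle, consistent with $S(2) < R_2(3;2)=R(3,3)=6$.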
Since Schur's original 1916 paper, the lower bound on $S(m)$ has been improved by several authors \cite{ab1,ab2,ex}, and the current record of $S(m) \geq \Omega(3.19^m)$ is due to Fredricksen and Sweet \cite{fred}. Their lower bound follows by computing $S(6)\geq 538$, and using the recursive formula
$$S(m)\geq c_{\ell}(2S(\ell) + 1)^{m/\ell},$$
\noindent which was established by Abbott and Hanson \cite{ab2}. Fredricksen and Sweet also computed $S(7) \geq 1680$, which we will use to prove Theorem \ref{lowermulti}.
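For orientation, the arithmetic behind these constants is the routine computation
$$\big(2S(6)+1\big)^{1/6} \;\geq\; 1077^{1/6} \;\approx\; 3.20, \qquad \big(2S(7)+1\big)^{1/7} \;\geq\; 3361^{1/7} \;\approx\; 3.19.$$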
\begin{lemma}\label{construct2}
For each integer $\ell \geq 1$, there is a set $P_{\ell}$ of $(1681)^{\ell}$ points in $\RR$ equipped with semi-algebraic relations $E_{1},\ldots,E_{7\ell} \subset {P_{\ell}\choose 2}$, such that
\begin{enumerate}
\item $E_{1}\cup \cdots \cup E_{7\ell} = {P_{\ell}\choose 2}$,
\item $E_{i}$ has complexity at most 5000,
\item $E_{i}$ is translation invariant, that is, $(x,y) \in E_i$ iff $(x + C,y + C) \in E_i$, and
\item the graph $G_{\ell,i} = (P_{\ell},E_{i})$ is triangle free for all $i$.
\end{enumerate}
\end{lemma}
\begin{proof}
We start by setting $P_{1} =\{1,2,\ldots,1681\}$. By \cite{fred}, there is a partition $\{1,\ldots,1680\} = A_1\cup \cdots \cup A_7$ into seven parts, such that each $A_i$ is sum-free. For $i \in \{1,\ldots,7\}$, we define the binary relation $E_{i}$ on $P_1$ by
$$(x,y) \in E_{i}\hspace{.5cm}\Leftrightarrow\hspace{.5cm}(1 \leq |x- y| \leq 1680) \wedge(|x-y| \in A_i).$$
\noindent Since $|A_i| \leq 1680$, $E_{i}$ has complexity at most 5000. By the arguments above, the graph $G_{1,i} = (P_{1},E_{i})$ is triangle free for all $i \in \{1,\ldots,7\}$. In what follows, we blow-up this construction so that the statement holds.
Having defined $P_{\ell - 1}$ and $E_{1},\ldots,E_{7\ell-7}$, we define $P_{\ell}$ and $E_{7\ell -6},\ldots,E_{7\ell}$ as follows. Let $C = C(\ell)$ be a very large constant, say $C > \left(5000 \cdot \max\{P_{\ell - 1}\}\right)^2$. We construct 1681 translated copies of $P_{\ell -1}$, $Q_i = P_{\ell - 1} + iC$ for $1\leq i \leq 1681$, and set $P_{\ell} = Q_1\cup \cdots \cup Q_{1681}$. For $1\leq j \leq 7$, we define the relation $E_{7\ell - 7 + j}$ by
$$(x,y) \in E_{7\ell -7 + j}\hspace{.5cm}\Leftrightarrow\hspace{.5cm}(C/2 \leq |x-y| \leq 1682C) \wedge(\exists z \in A_j: ||x - y|/C - z| < 1/1000).$$
Clearly $E_1,\ldots,E_{7\ell}$ satisfy properties (1), (2), and (3). The fact that $G_{\ell,i} = (P_{\ell},E_i)$ is triangle-free follows from the same argument as above.\end{proof}
Theorem \ref{lowermulti} immediately follows from Lemma \ref{construct2}.
\section{Concluding remarks}
\textit{1.} We showed that given an $N$-element point set $P$ in $\RR^d$ equipped with a semi-algebraic relation $E\subset{P\choose 3}$, such that $E$ has complexity at most $t$ and $(P,E)$ is $K^{(3)}_s$-free, then there is a subset $P'\subset P$ such that $|P'| \geq 2^{(\log\log N)^2/(c^s \log\log\log N)}$ and ${P'\choose 3} \cap E = \emptyset$. In \cite{suk}, Conlon et al.~conjectured that one can find a much larger ``independent set". More precisely, they conjectured that there is a constant $\epsilon = \epsilon(d,t,s)$ such that $|P'| \geq N^{\epsilon}$. Perhaps an easier task would be to find a large subset $P'$ such that $E$ is \emph{transitive} on $P'$, that is, there is an ordering on $P' = \{p_1,\ldots,p_m\}$ such that for $1\leq i_1 < i_2 < i_3 < i_4 \leq m$, $(p_{i_1},p_{i_2},p_{i_3}), (p_{i_2},p_{i_3},p_{i_4}) \in E$ implies that $(p_{i_1},p_{i_2},p_{i_4}),(p_{i_1},p_{i_3},p_{i_4}) \in E$.
\bigskip
\noindent\textit{2. Off diagonal Ramsey numbers for binary semi-algebraic relations.} As mentioned in the introduction, we have $R_2(s,n) \leq O(n^{s-1})$. It would be interesting to see if one could improve this upper bound in the semi-algebraic setting. That is, for fixed integers $t\geq 1$ and $d\geq s \geq 3$, is there a constant $\epsilon = \epsilon(d,t,s)$ such that $R^{d,t}_2(s,n) \leq O(n^{s-1- \epsilon})$? For $d < s$, it is likely that such an improvement can be made using Lemma \ref{cut2}.
\bigskip
\noindent \textit{3. Low complexity version of Schur's Theorem.} We say that the subset $P \subset \{1,\ldots,N\}$ has complexity $t$ if there are $t$ intervals $I_1,\ldots,I_t$ such that $P = \{1,\ldots,N\}\cap (I_1\cup \cdots \cup I_t)$. Let $S_t(m)$ be the maximum integer $N$ for which the integers $\{1,\ldots,N\}$ can be partitioned into $m$ sum-free parts, such that each part has complexity at most $t$. By following the proof of Theorem \ref{color}, one can show that $S_t(m) \leq 2^{m\log\log 2t}$.
\section{Introduction}
The notion of bounded cohomology was first introduced by Gromov (see \cite{Gr}). Let $X$ be a topological space, we denote $C^n(X)$ the group of singular $n$-cochains with real coefficients. We say an element $c$ in this cochain group is bounded, if the values of $c$ on each $n$-simplex is bounded, that is, the norm $||c||=sup\,\{\,|c(\sigma)|:\sigma:\Delta^n\rightarrow X \;\textrm{is continuous}\,\}$ is finite. We denote $C_b^n(X)$ the set of all bounded $n$-cochains, and the corresponding chain complex induces the bounded singular cohomology $H^n_b(X)$. The inclusion map $i:C_b^n(X)\rightarrow C^n(X)$ induces the comparison map $\phi:H^n_b(X) \rightarrow H^n(X)$. This notion of bounded cohomology can be extended to groups. For a discrete group $\Gamma$, the group cohomology with real coefficients $H^n(\Gamma)$ can be defined by the cochain complex $C^n(\Gamma)=\{f:\Gamma^n\rightarrow \mathbb{R}\}$. Similarly, one defines the bounded group cohomology $H^n_b(\Gamma)$ using a subcomplex, namely the bounded cochains $C^n_b(\Gamma)=\{f:\Gamma^n\rightarrow \mathbb{R}\mid f\,\textrm{is bounded}\}$. For a topological group $G$, so as not to lose its topological information, we consider the continuous cochain complex $C^n_c(G)=\{f:G^n\rightarrow \mathbb{R}\mid f\,\textrm{is continuous}\}$ and correspondingly the continuous bounded cochains $C^n_{c,b}(G)=\{f:G^n\rightarrow \mathbb{R}\mid f\,\textrm{is continuous and bounded}\}$. This gives rise to the continuous cohomology $H^n_c(G)$ and the continuous bounded cohomology $H^n_{c,b}(G)$.
Despite the fact that bounded cohomology is easily defined, little is known about these groups in general. It was shown by Gromov \cite{Gr} that $H^*_b(\pi_1(M))\simeq H^*_b(M)$ through the classifying map, so one might just focus on the study of the bounded cohomology for groups. It is quite clear that $H^0_b(\Gamma)\simeq \mathbb{R}$ and $H^1_b(\Gamma)$ vanishes following the definition, and it was pointed out in \cite{Gr} that $H^n_b(\Gamma)=0\;( n\geq 1)$ for any amenable group $\Gamma$ (following work of Hirsch and Thurston \cite{HT}). However, the bounded cohomology is hard to compute in general.
One way to study the bounded cohomology of groups is to look at the comparison map, from the bounded cohomology to the ordinary cohomology, and it is natural to ask whether this map is surjective. The answer is of course no in general, since we can easily construct abelian groups (for instance $\mathbb{Z}\oplus \mathbb{Z}$) where the ordinary cohomology is nonvanishing (in degree two in this case) while the bounded cohomology vanishes due to amenability. But surjectivity might still hold for certain classes of groups. For example, in the case of non-compact, connected, semisimple Lie groups, Dupont asked whether the comparison map is surjective \cite{Du2} (see also \cite[Problem A']{Mo}, and \cite[Conjecture 18.1]{BIMW}). The conjecture remains open even in the specific case of $\SL(n,\mathbb{R})$. Prior work includes that of Hartnick and Ott \cite{HO}, which confirmed the conjecture for Lie groups of Hermitian type (as well as some other cases). Domic and Toledo gave numerical bounds in degree two \cite{DT}, and this was later generalized by Clerc and {\O}rsted in \cite{CO}. Also, Lafont and Schmidt \cite{LS} showed surjectivity in top degree (the dimension of the corresponding symmetric space) in all cases excluding $\SL(3,\mathbb{R})$, followed by Bucher-Karlsson's complementary result \cite{Bu}, thus settling an equivalent conjecture of Gromov: that the simplicial volume of any closed locally symmetric space of noncompact type is positive. One of the key steps in the approach of \cite{LS} is to show boundedness of a certain Jacobian, which relies heavily on previous work of Connell and Farb \cite{CF1}, \cite{CF2}. Recently, Inkang Kim and Sungwoon Kim \cite{KK} extended the Jacobian estimate to codimension one (but the codimension one surjectivity of the comparison map is automatic), and they also gave a detailed investigation of rank two cases. Meanwhile, Lafont and Wang \cite{LW} showed surjectivity in codimension $\leq \rank(X)-2$, in irreducible cases excluding $\SL(3,\mathbb R)$ and $\SL(4,\mathbb R)$. In this paper, we extend their result to smaller degrees and show the following:
\begin{main}
Let $X=G/K$ be an $n$-dimensional symmetric space of non-compact type of rank $r\geq 2$, and $\Gamma$ a cocompact torsion-free lattice in $G$. Assume $X$ has no direct factors of $\mathbb H^2$, $\SL(3,\mathbb R)/\SO(3)$, $\Sp(2,\mathbb R)/U(2)$, $G_2^2/\SO(4)$ and $\SL(4,\mathbb R)/\SO(4)$, then the comparison maps
$\eta:H^{*}_{c,b}(G,\mathbb{R})\rightarrow H^{*}_c(G,\mathbb{R})$ and $\eta':H^{*}_{b}(\Gamma,\mathbb{R})\rightarrow
H^{*}(\Gamma,\mathbb{R})$ are both surjective in all degrees $* \geq \text{srk}(X)+2$.
\end{main}
\begin{remark}
The approach is similar to \cite{LW} and is discussed in detail in Section 3. Notice that the barycentric straightening method we are using fails at the splitting rank (see \cite[Theorem 5.6]{LW}). This shows that our \textbf{Main theorem} is very close to optimal (via this method).
\end{remark}
\section{Splitting rank}
The notion of splitting rank was defined in \cite{LW}, as an obstruction, in terms of degree, to a certain type of Jacobian being uniformly bounded (see \cite[Theorem 5.6]{LW}). In Section \ref{sec:srk}, we list in Table 1 the splitting rank of all irreducible symmetric spaces of non-compact type. In Section \ref{sec:srk^k}, we define and analyze the $k$-th splitting rank. Finally, in Section \ref{sec:reducible}, we consider the reducible cases and give a corresponding estimate on the $k$-th splitting rank.
\subsection{Totally geodesic submanifolds with $\mathbb{R}$-factor}\label{sec:srk}
Let $X$ be a symmetric space of non-compact type. We write $X=G/K$ where $G=\mathrm{Isom}^0(X)$ is the identity component of the isometry group of $X$ and $K$ is a maximal compact subgroup of $G$. Fixing a base point $p\in X$, we have the Cartan decomposition $\mathfrak{g}=\mathfrak{k}+\mathfrak{p}$ and $\mathfrak{p}$ can be identified with the tangent space of $X$ at $p$. We recall that the splitting rank of a symmetric space $X$ of non-compact type, denoted by $\text{srk}(X)$, is the maximal dimension of a totally geodesic submanifold $Y\subset X$ which splits off an isometric $\mathbb{R}$-factor. For a totally geodesic submanifold, its tangent space can be identified with a Lie triple system.
\begin{remark}
We notice a similar notion of maximal totally geodesic submanifolds is discussed in \cite{BO}, but our definition is slightly different. Indeed, if $X=G_2^2/\SO(4)$, then the submanifold that has dimension equal to the splitting rank is $\mathbb H^2\times \mathbb R$. This is not maximal totally geodesic, since $\mathbb H^2\times \mathbb R\subset \SL(3,\mathbb R)/\SO(3)\subset G_2^2/\SO(4)$ gives a chain of totally geodesic inclusions.
\end{remark}
\begin{prop}\label{prop:max}
Suppose a totally geodesic submanifold $Y \times \mathbb{R}\subset X$ has dimension equal to the splitting rank of $X$. Then the corresponding Lie triple system $[\mathfrak{p}',[\mathfrak{p}',\mathfrak{p}']]\subset \mathfrak{p}'$ has the form $\mathfrak{p}'=\mathfrak{a}\bigoplus_{\alpha\in \Lambda^+, \alpha(V)=0}\mathfrak{p}_\alpha$, where $\mathfrak{a}$ is a choice of maximal abelian subalgebra in $\mathfrak{p}$ that contains the $\mathbb{R}$-factor $V$.
\end{prop}
\begin{proof}
We identify the tangent space of $X$ with $\mathfrak{p}$ via the Cartan decomposition, and the tangent space of $Y \times \mathbb{R}$ with a Lie triple system $\mathfrak{p}''\subset \mathfrak{p}$. The product structure implies that any vector $v\in \mathfrak{p}''$ commutes with the $\mathbb{R}$-factor $V$. Hence $\mathfrak{p}''\subset \mathfrak{p}'$, where $\mathfrak{p}'=\{\;Z\in \mathfrak{p}\mid [Z,V]=0\;\}$. Notice that $\mathfrak{p}'$ is itself a Lie triple system. To see this, we first extend $V$ to a maximal abelian subalgebra $\mathfrak{a}\subset \mathfrak{p}$, and form the restricted root space decomposition $\mathfrak{p}=\mathfrak{a}\bigoplus_{\alpha\in \Lambda^+} \mathfrak{p}_\alpha$. Then $\mathfrak{p}'$ decomposes as $\mathfrak{a}\bigoplus_{\alpha\in \Lambda^+, \alpha(V)=0}\mathfrak{p}_\alpha$. By a standard Lie algebra computation, we see that $[\mathfrak{p}',\mathfrak{p}']\subset \mathfrak{k}'$, where $\mathfrak{k}'=\mathfrak{k}_0\bigoplus_{\alpha\in \Lambda^+, \alpha(V)=0}\mathfrak{k}_\alpha$, and also $[\mathfrak{k}',\mathfrak{p}']\subset \mathfrak{p}'$. Therefore $\mathfrak{p}'$ is a Lie triple system that contains $\mathfrak{p}''$. By the assumption that $\mathfrak{p}''$ has maximal dimension, we conclude $\mathfrak{p}'=\mathfrak{p}''$. This completes the proof.
\end{proof}
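For completeness, the ``standard Lie algebra computation'' invoked above can be sketched as follows, using only the bracket relation $[\mathfrak g_\alpha,\mathfrak g_\beta]\subset \mathfrak g_{\alpha+\beta}$ and the conventions $\mathfrak p_\alpha = \mathfrak p\cap(\mathfrak g_\alpha\oplus\mathfrak g_{-\alpha})$, $\mathfrak k_\alpha = \mathfrak k\cap(\mathfrak g_\alpha\oplus\mathfrak g_{-\alpha})$: for roots $\alpha,\beta$ vanishing on $V$ one has
$$[\mathfrak a,\mathfrak p_\alpha]\subset\mathfrak k_\alpha,\qquad [\mathfrak p_\alpha,\mathfrak p_\beta]\subset\mathfrak k_{\alpha+\beta}+\mathfrak k_{\alpha-\beta},\qquad [\mathfrak k_\alpha,\mathfrak p_\beta]\subset\mathfrak p_{\alpha+\beta}+\mathfrak p_{\alpha-\beta},\qquad [\mathfrak k_0,\mathfrak p_\alpha]\subset\mathfrak p_\alpha,$$
and $\alpha\pm\beta$ again vanish on $V$ whenever they are roots, so indeed $[\mathfrak p',\mathfrak p']\subset\mathfrak k'$ and $[\mathfrak k',\mathfrak p']\subset\mathfrak p'$.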
\begin{remark}
We comment that the totally geodesic submanifold in the above proposition is the same as $F(\gamma)$ -- the union of all flats that go through the geodesic $\gamma$ corresponding to the $\mathbb{R}$-factor. In general, $F(\gamma)=F_s(\gamma)\times \mathbb{R}^t$, where $F_s(\gamma)$ is also a symmetric space of non-compact type and $t$ is an integer that measures the singularity of $\gamma$ (see \cite[Proposition 2.20.10]{Eb} for more details). We see in the next proposition that $F(\gamma)$ attains maximal dimension when $t=1$.
\end{remark}
\begin{prop}
Suppose a totally geodesic submanifold $Y \times \mathbb{R}\subset X$ has dimension equal to the splitting rank of $X$. Then $Y$ is also a symmetric space of non-compact type (i.e. it does not split off an $\mathbb R$-factor).
\end{prop}
\begin{proof}
The proposition is a direct consequence of \cite[Proposition 2.20.10]{Eb} and Proposition \ref{prop:srk gap} below.
\end{proof}
We continue to analyze $Y$ via the above splitting of the Lie algebra. Let $\mathfrak{a}$ be a maximal abelian subalgebra containing the $\mathbb{R}$-factor $V$, and denote by $\mathfrak{a}'\subset \mathfrak{a}$ the orthogonal complement of $V$. Then the Lie triple system of $Y\times \mathbb{R}$ can be written as $V\oplus \mathfrak{a}'\bigoplus_{\alpha\in \Lambda^+, \alpha(V)=0}\mathfrak{p}_\alpha$, where $V$ represents the $\mathbb R$-factor, and $\mathfrak{a}'\bigoplus_{\alpha\in \Lambda^+, \alpha(V)=0}\mathfrak{p}_\alpha$ is the Lie triple system of $Y$. As $Y$ corresponds to a maximal parabolic subalgebra in $\mathfrak{g}$, we can choose a simple system $\Omega=\{\alpha_1,...,\alpha_r\}\subset \Lambda$ corresponding to $X$ such that $\Omega'=\{\alpha_1,...,\alpha_{r-1}\}\subset \ker(V)\cap \Lambda$ is a simple system corresponding to $Y$ (See \cite[Proposition 7.76]{Kn}). In other words, $Y$ has a truncated simple system generated by throwing away one element from the simple system of $X$. We give more detailed information in the next theorem, by simply working through all the cases of irreducible symmetric spaces of non-compact type.
\begin{thm}\label{thm:srk}
Let $X$ be an irreducible symmetric space of non-compact type. Assume $\dim(X)=n$ and $rank(X)=r\geq 2$. We give in the following table\footnotemark the splitting rank of $X$, as well as all totally geodesic submanifolds $Y\times \mathbb{R}$ whose dimension attains the splitting rank.
\end{thm}
\footnotetext{In the table, we write $\SO^0_{i,j}/\SO_i\times \SO_j$ short for $\SO_0(i,j)/\SO(i)\times \SO(j)$, and similarly for $\mathrm{SU}_{i,j}/S(U_i\times U_j)$ and $\Sp_{i,j}/\Sp_i\times \Sp_j$}
\renewcommand{\arraystretch}{1.5}
\begin{equation*}
\begin{array}{|c|c|c|c|c|}
\hline
X & Y & \text{srk}(X) & n & \text{Comments}\\ \hline
\SL(r+1,\mathbb{R})/\SO(r+1) & \SL(r,\mathbb{R})/\SO(r) & n-r & r(r+3)/2 &r\geq 2\\
\SL(r+1,\mathbb{C})/\mathrm{SU}(r+1) & \SL(r,\mathbb{C})/\mathrm{SU}(r) & n-2r & r(r+2) &r\geq 2\\
\mathrm{SU}^*(2r+2)/\Sp(r+1) & \mathrm{SU}^*(2r)/\Sp(r) & n-4r & r(2r+3) &r\geq 2\\
E_6^{-26}/F_4 & \mathbb{H}^9 & 10 & 26 &r=2\\ \hline
\SO^0_{r,r+k}/\SO_r\times \SO_{r+k} &\SO^0_{r-1,r-1+k}/\SO_{r-1}\times \SO_{r-1+k} & n-(2r+k-2) & r(r+k) &r\geq 2,k\geq 1\\
\SO(2r+1,\mathbb{C})/\SO(2r+1) & \SO(2r-1,\mathbb{C})/\SO(2r-1) & n-(4r-2) & r(2r+1) &r\geq 2\\\hline
\Sp(r,\mathbb{R})/U(r) & \Sp(r-1,\mathbb{R})/U(r-1) & n-(2r-1) & r(r+1) &r\geq 3 \\
\mathrm{SU}_{r,r}/S(U_r\times U_r) & \mathrm{SU}_{r-1,r-1}/S(U_{r-1}\times U_{r-1}) & n-(4r-3) & 2r^2 &r\geq 3 \\
\Sp(r,\mathbb{C})/\Sp(r) & \Sp(r-1,\mathbb{C})/\Sp(r-1) & n-(4r-2) & r(2r+1)&r\geq 3 \\
\SO^*(4r)/U(2r) & \SO^*(4r-4)/U(2r-2) & n-(8r-7) & 2r(2r-1) &r\geq 4\\
\SO^*(12)/U(6) & \mathrm{SU}^*(6)/\Sp(3) & 15 & 30 &r= 3\\
\Sp_{r,r}/\Sp_r\times \Sp_r & \Sp_{r-1,r-1}/\Sp_{r-1}\times \Sp_{r-1} & n-(8r-5) & 4r^2& r\geq 2\\
E_7^{-25}/E_6\times U(1) & E_6^{-26}/F_4 & 27 & 54 &r=3\\ \hline
\SO^0_{r,r}/\SO_r\times \SO_r & \SO^0_{r-1,r-1}/\SO_{r-1}\times \SO_{r-1} & n-(2r-2) & r^2 &r\geq 4\\
\SO(2r,\mathbb{C})/\SO(2r) & \SO(2r-2,\mathbb{C})/\SO(2r-2) & n-(4r-4) & r(2r-1) &r\geq 4\\ \hline
\mathrm{SU}_{r,r+k}/S(U_r\times U_{r+k}) & \mathrm{SU}_{r-1,r-1+k}/S(U_{r-1}\times U_{r-1+k}) & n-(4r+2k-3) & 2r(r+k) &r\geq 1,k\geq 1\\
\Sp_{r,r+k}/\Sp_r\times \Sp_{r+k} & \Sp_{r-1,r-1+k}/\Sp_{r-1}\times \Sp_{r-1+k} & n-(8r+4k-5) & 4r(r+k) &r\geq 1,k\geq 1\\
\SO^*(4r+2)/U(2r+1) & \SO^*(4r-2)/U(2r-1) & n-(8r-3) & 2r(2r+1) &r\geq 2\\
E_6^{-14}/\text{Spin}(10)\times U(1) & \mathrm{SU}(1,5)/S(U(1)\times U(5)) & 11 & 32 &r=2\\ \hline
\end{array}
\end{equation*}
\begin{equation*}
\begin{array}{|c|c|c|c|c|}
\hline
X& Y& \text{srk}(X)&n&\text{Comments}\\ \hline
E_6^6/\Sp(4) & \SO_0(5,5)/\SO(5)\times\SO(5) & 26 & 42 &r=6\\
E_6(\mathbb C)/E_6 & \SO(10,\mathbb C)/\SO(10) & 46 & 78 &r=6\\ \hline
E_7^7/\mathrm{SU}(8) & E_6^6/\Sp(4) & 43 & 70 &r=7\\
E_7(\mathbb C)/E_7 & E_6(\mathbb C)/E_6 & 79 & 133 &r=7\\ \hline
E_8^8/\SO(16) & E_7^7/\mathrm{SU}(8) & 71 & 128 &r=8\\
E_8(\mathbb C)/E_8 & E_7(\mathbb C)/E_7 & 134 & 248 &r=8\\ \hline
F_4^4/\Sp(3)\times\Sp(1) & \SO_0(3,4)/\SO(3)\times\SO(4)\; \text{or}\;\Sp(3,\mathbb R)/U(3) & 13 & 28 & r=4\\
E_6^2/\mathrm{SU}(6)\times\Sp(1) & \mathrm{SU}(3,3)/S(U(3)\times U(3)) & 19 & 40&r=4\\
E_7^{-5}/\SO(12)\times\Sp(1) & \SO^*(12)/U(6) & 31 & 64 &r=4\\
E_8^{-24}/E_7\times \Sp(1) & E_7^{-25}/E_6\times U(1) & 55 & 112&r=4\\
F_4(\mathbb C)/F_4 & \SO(7,\mathbb C)/\SO(7)\;\text{or}\;\Sp(3,\mathbb C)/\Sp(3) & 22 & 52&r=4\\ \hline
G_2^2/\SO(4) & \mathbb H ^2 & 3 & 8&r=2 \\
G_2(\mathbb C)/G_2 & \mathbb H ^3 & 4 & 14 &r=2\\ \hline
\end{array}
\end{equation*}
\vskip 5pt
\centerline{\bf Table 1: Splitting rank of irreducible symmetric spaces of non-compact type}
\begin{remark}
In the above table, the symmetric spaces are listed according to their Dynkin diagrams. The groups are listed in the order $A_r$, $B_r$, $C_r$, $D_r$, $(BC)_r$, $E_6$, $E_7$, $E_8$, $F_4$ and $G_2$. Notice that a symmetric space of non-compact type is uniquely determined by its Dynkin diagram together with the multiplicities of simple roots.
\end{remark}
\begin{figure}
\makebox[1pt][c]{
\beginpicture
\setcoordinatesystem units <.9cm,.9cm> point at -.4 1
\setplotarea x from -.4 to 3.4, y from -1.4 to 1.4
\linethickness=.7pt
\putrule from 0.1 0 to 0.9 0
\putrule from 2.1 0 to 2.9 0
\put {$\cdot$} at 1.2 0
\put {$\cdot$} at 1.3 0
\put {$\cdot$} at 1.4 0
\put {$\cdot$} at 1.5 0
\put {$\cdot$} at 1.6 0
\put {$\cdot$} at 1.7 0
\put {$\cdot$} at 1.8 0
\put {$\circ$} at 0 0
\put {$\circ$} at 1 0
\put {$\circ$} at 2 0
\put {$\circ$} at 3 0
\put {$\alpha_1$} at 0 -.3
\put {$\alpha_2$} at 1 -.3
\put {$\alpha_{r-1}$} at 2 -.3
\put {$\alpha_r$} at 3 -.3
\endpicture
}
\caption{Dynkin diagram of type $A_r$}
\end{figure}
\begin{proof}
We prove the case of $\SL(r+1,\mathbb{R})/\SO(r+1)$ (the first line in Table 1). If $X=\SL(r+1,\mathbb{R})/\SO(r+1)$, the Dynkin diagram is shown in Figure 1, with multiplicities all ones. By the previous discussion, $Y$ is generated by the truncated simple system $\{\alpha_1,...,\hat{\alpha_i},...,\alpha_r\}$ for some $i=1,...,r$, preserving the same multiplicities and configurations. Hence we have $Y=\SL(i,\mathbb{R})/\SO(i)\times \SL(r-i+1,\mathbb{R})/\SO(r-i+1)$, for some $i=1,...,r$. The dimension of $Y$ equals $(i-1)(i+2)/2+(r-i)(r-i+3)/2$, so the codimension of $Y\times \mathbb{R}\subset X$ is $-i^2+(r+1)i$, which attains its minimal codimension $r$ when $i=1,r$. In both cases we have $Y=\SL(r,\mathbb{R})/\SO(r)$, and hence $\text{srk}(X)=\dim(Y\times \mathbb R)=n-r$, where $n=\dim(X)=r(r+3)/2$. The remaining cases are analyzed similarly, and can be found in the Appendix.
\end{proof}
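For concreteness, the codimension count in the proof above is the routine computation
$$\dim(X) - \dim(Y\times\mathbb R) \;=\; \frac{r(r+3)}{2} - \left[\frac{(i-1)(i+2)}{2} + \frac{(r-i)(r-i+3)}{2} + 1\right] \;=\; -i^2 + (r+1)i,$$
a concave function of $i$ whose minimum over $i\in\{1,\ldots,r\}$ is attained at the endpoints $i=1,r$, where it equals $r$.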
By looking at the table above, we immediately have the following corollaries.
\begin{cor}\label{cor:connect}
Under the same assumptions of Theorem \ref{thm:srk}, $Y$ is also irreducible.
\end{cor}
\begin{cor}\label{cor:si>rank}
If $X$ is an irreducible symmetric space of non-compact type, then $\text{srk}(X)\leq n-r$, and $\text{srk}(X)= n-r$ if and only if $X=\SL(r+1,\mathbb R)/\SO(r+1)$.
\end{cor}
Before moving to the next section, we will need a little more information about the dimensions of totally geodesic submanifolds $Y\times \mathbb R\subset X$. Here we focus on the cases where $Y$ is of non-compact type, that is, $Y$ is generated by a truncated simple system $\{\alpha_1,...,\hat{\alpha_i},...,\alpha_r\}$ where $\{\alpha_1,...,\alpha_r\}$ is a simple system of $X$. Besides the largest dimension case at $\text{srk}(X)$, we are also curious about the second largest dimension, and we will need to verify that there is enough gap between the two.
\begin{prop}\label{prop:gap}
Let $X$ be an irreducible symmetric space of non-compact type. Assume $\dim(X)=n$ and $rank(X)=r\geq 4$. If $Y\times \mathbb R$ and $Y'\times \mathbb R$ are two totally geodesic submanifolds in $X$ such that $\dim(Y\times \mathbb R)=\text{srk}(X)$ and $\dim(Y'\times \mathbb R)$ is the second largest dimension among all submanifolds arising from truncated simple systems, then the gap between the two dimensions ($\dim(Y)-\dim(Y')$) is given in the following table.
\end{prop}
\renewcommand{\arraystretch}{1.5}
\begin{equation*}
\begin{array}{|c|c|c|c|}
\hline
X & Y' & \text{Gap}& \text{Comments} \\ \hline
\SL(r+1,\mathbb{R})/\SO(r+1) & \mathbb H^2\times\SL(r-1,\mathbb{R})/\SO(r-1) & r-2 &r\geq 4\\
\SL(r+1,\mathbb{C})/\mathrm{SU}(r+1) & \mathbb H^3\times\SL(r-1,\mathbb{C})/\mathrm{SU}(r-1) & 2r-4 &r\geq 4\\
\mathrm{SU}^*(2r+2)/\Sp(r+1) & \mathbb H^5\times\mathrm{SU}^*(2r-2)/\Sp(r-1) & 4r-8 &r\geq 4\\ \hline
\SO_0(r,r+k)/\SO(r)\times \SO(r+k)&\mathbb H^2\times \SO_0(r-2,r-2+k)/\SO(r-2)\times \SO(r-2+k) & 2r+k-5 &r+2k> 7\\
\SO_0(4,5)/\SO(4)\times \SO(5) &\SL(4,\mathbb R)/\SO(4)& 3 &r=4,k=1\\
\SO_0(5,6)/\SO(5)\times \SO(6) &\mathbb H^2\times\SO_0(3,4)/\SO(3)\times \SO(4)\;\text{or} \;\SL(5,\mathbb R)/\SO(5) & 6 &r=5,k=1\\
\SO(2r+1,\mathbb{C})/\SO(2r+1) & \mathbb H^3\times\SO(2r-3,\mathbb{C})/\SO(2r-3) & 4r-8 &r>5\\
\SO(9,\mathbb{C})/\SO(9) & \SL(4,\mathbb C)/\mathrm{SU}(4) & 6 &r=4\\
\SO(11,\mathbb{C})/\SO(11) & \mathbb H^3\times\SO(7,\mathbb{C})/\SO(7) \; \text{or}\; \SL(5,\mathbb C)/\mathrm{SU}(5) & 12 &r=5\\\hline
\end{array}
\end{equation*}
\begin{equation*}
\begin{array}{|c|c|c|c|}
\hline
\Sp(r,\mathbb{R})/U(r) & \mathbb H^2\times\Sp(r-2,\mathbb{R})/U(r-2) & 2r-4 &r>5 \\
\Sp(4,\mathbb{R})/U(4) & \SL(4,\mathbb R)/\SO(4) & 3 &r=4 \\
\Sp(5,\mathbb{R})/U(5) & \mathbb H^2\times\Sp(3,\mathbb{R})/U(3)\; \text{or}\; \SL(5,\mathbb R)/\SO(5) & 6 &r=5 \\
\mathrm{SU}(r,r)/S(U(r)\times U(r)) & \mathbb H^3\times\mathrm{SU}(r-2,r-2)/S(U(r-2)\times U(r-2)) & 4r-9 &r>6 \\
\mathrm{SU}(4,4)/S(U(4)\times U(4)) & \SL(4,\mathbb C)/\mathrm{SU}(4) & 3 &r=4 \\
\mathrm{SU}(5,5)/S(U(5)\times U(5)) & \SL(5,\mathbb C)/\mathrm{SU}(5) & 8 &r=5 \\ \hline
\mathrm{SU}(6,6)/S(U(6)\times U(6)) & \mathbb H^3\times\mathrm{SU}(4,4)/S(U(4)\times U(4))\;\text{or}\;\SL(6,\mathbb C)/\mathrm{SU}(6) &15 &r=6 \\
\Sp(r,\mathbb{C})/\Sp(r) & \mathbb H^3\times\Sp(r-2,\mathbb{C})/\Sp(r-2) & 4r-8 &r>5 \\
\Sp(4,\mathbb{C})/\Sp(4) & \SL(4,\mathbb C)/\mathrm{SU}(4) & 6 &r=4 \\
\Sp(5,\mathbb{C})/\Sp(5) & \mathbb H^3\times\Sp(3,\mathbb{C})/\Sp(3)\;\text{or}\;\SL(5,\mathbb C)/\mathrm{SU}(5) & 12 &r=5 \\
\SO^*(4r)/U(2r) & \mathbb H^4\times\SO^*(4r-8)/U(2r-4) & 8r-19 &r>6\\
\SO^*(16)/U(8) & \mathrm{SU}^*(8)/\Sp(4) & 3 &r=4\\
\SO^*(20)/U(10) & \mathrm{SU}^*(10)/\Sp(5) & 12 &r=5\\
\SO^*(24)/U(12) & \mathrm{SU}^*(12)/\Sp(6) & 25 &r=6\\
\Sp(r,r)/\Sp(r)\times \Sp(r) & \mathbb H^5\times\Sp(r-2,r-2)/\Sp(r-2)\times \Sp(r-2) & 8r-17 &r>5\\
\Sp(4,4)/\Sp(4)\times \Sp(4) & \mathrm{SU}^*(8)/\Sp(4) & 9 &r=4\\
\Sp(5,5)/\Sp(5)\times \Sp(5) & \mathrm{SU}^*(10)/\Sp(5) & 20 &r=5\\ \hline
\SO_0(r,r)/\SO(r)\times \SO(r) & \mathbb H^2\times\SO_0(r-2,r-2)/\SO(r-2)\times \SO(r-2) & 2r-5 & r>7 \\
\SO_0(4,4)/\SO(4)\times \SO(4) & \mathbb H^2\times\mathbb H^2\times\mathbb H^2 & 3 & r=4 \\
\SO_0(5,5)/\SO(5)\times \SO(5) & \SL(5,\mathbb R)/\SO(5) & 2 & r=5 \\
\SO_0(6,6)/\SO(6)\times \SO(6) & \SL(6,\mathbb R)/\SO(6) & 5 & r=6 \\
\SO_0(7,7)/\SO(7)\times \SO(7) & \mathbb H^2\times\SO_0(5,5)/\SO(5)\times \SO(5)\;\text{or}\; \SL(7,\mathbb R)/\SO(7) & 9 & r=7 \\
\SO(2r,\mathbb{C})/\SO(2r) & \mathbb H^3\times\SO(2r-4,\mathbb{C})/\SO(2r-4) & 4r-10 &r>7 \\
\SO(8,\mathbb{C})/\SO(8) & \mathbb H^3\times\mathbb H^3\times\mathbb H^3 & 6 &r=4 \\
\SO(10,\mathbb{C})/\SO(10) & \SL(5,\mathbb C)/\mathrm{SU}(5) & 4 &r=5 \\
\SO(12,\mathbb{C})/\SO(12) & \SL(6,\mathbb C)/\mathrm{SU}(6) & 10 &r=6 \\
\SO(14,\mathbb{C})/\SO(14) & \mathbb H^3\times\SO(10,\mathbb{C})/\SO(10)\;\text{or}\;\SL(7,\mathbb C)/\mathrm{SU}(7) & 18 &r=7 \\ \hline
\end{array}
\end{equation*}
\begin{equation*}
\begin{array}{|c|c|c|c|}
\hline
\mathrm{SU}(r,r+k)/S(U(r)\times U(r+k)) & \mathbb H^3\times\mathrm{SU}(r-2,r-2+k)/S(U(r-2)\times U(r-2+k)) & 4r+2k-9 & r+2k>6 \\
\mathrm{SU}(4,5)/S(U(4)\times U(5)) & \mathbb H^3\times\mathrm{SU}(2,3)/S(U(2)\times U(3))\;\text{or}\;\SL(4,\mathbb C)/\mathrm{SU}(4) & 9 & r=4,k=1 \\
\Sp(r,r+k)/\Sp(r)\times \Sp(r+k) & \mathbb H^5\times\Sp(r-2,r-2+k)/\Sp(r-2)\times \Sp(r-2+k) & 8r+4k-17 & r\geq 4,k\geq 1\\
\SO^*(4r+2)/U(2r+1) & \mathbb H^5\times\SO^*(4r-6)/U(2r-3) & 8r-15 & r>4\\
\SO^*(18)/U(9) & \mathrm{SU}^*(8)/\Sp(4) & 15 & r=4\\ \hline
E_6^6/\Sp(4) & \SL(6,\mathbb R)/\SO(6) & 5 &r=6\\
E_6(\mathbb C)/E_6 & \SL(6,\mathbb C)/\mathrm{SU}(6) & 10 &r=6\\ \hline
E_7^7/\mathrm{SU}(8) & \SO_0(6,6)/\SO(6)\times\SO(6) & 6 &r=7\\
E_7(\mathbb C)/E_7 & \SO(12,\mathbb C)/\SO(12) & 12 &r=7\\ \hline
E_8^8/\SO(16) & \SO_0(7,7)/\SO(7)\times\SO(7) & 21 &r=8\\
E_8(\mathbb C)/E_8 & \SO(14,\mathbb C)/\SO(14) & 42 &r=8\\ \hline
F_4^4/\Sp(3)\times\Sp(1) & \mathbb H^2\times\SL(3,\mathbb R)/\SO(3) & 5 &r=4\\
E_6^2/\mathrm{SU}(6)\times\Sp(1) & \SO_0(3,5)/\SO(3)\times \SO(5) & 3&r=4\\
E_7^{-5}/\SO(12)\times\Sp(1) & \SO_0(3,7)/\SO(3)\times \SO(7) & 3&r=4 \\
E_8^{-24}/E_7\times \Sp(1) & \SO_0(3,11)/\SO(3)\times \SO(11) & 21&r=4\\
F_4(\mathbb C)/F_4 & \mathbb H^3\times \SL(3,\mathbb C)/\mathrm{SU}(3) & 10&r=4\\ \hline
\end{array}
\end{equation*}
\vskip 5pt
\centerline{\bf Table 2: Dimension gap after splitting rank}
\begin{proof}
We prove the case of $\SL(r+1,\mathbb R)/\SO(r+1)$ (the first line of Table 2). As we saw in the proof of Theorem \ref{thm:srk}, $Y=\SL(i,\mathbb R)/\SO(i)\times \SL(r-i+1,\mathbb R)/\SO(r-i+1)$, and the codimension of $Y\times \mathbb R\subset X$ is $-i^2+(r+1)i$, which attains its minimal value $r$ when $i=1,r$. It attains its second minimal value $2r-2$ when $i=2,r-1$, provided $r\geq 3$. In this case, $Y'=\mathbb H^2\times \SL(r-1,\mathbb R)/\SO(r-1)$, and the gap is $r-2$. Again, the remaining cases are analyzed similarly in the Appendix.
\end{proof}
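As a quick check of the first line of Table 2: at $i=2$ (equivalently $i=r-1$) the codimension $-i^2+(r+1)i$ equals $2r-2$, so
$$\dim(Y)-\dim(Y') \;=\; \big(n-r-1\big) - \big(n-(2r-2)-1\big) \;=\; r-2,$$
which matches the stated gap.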
\begin{lem}(\text{Gap})\label{lem:gap}
Let $X$ be an irreducible symmetric space of non-compact type. Assume $\dim(X)=n$ and $rank(X)=r\geq 3$. Suppose $Y\times \mathbb R$ is a totally geodesic submanifold whose dimension attains $\text{srk}(X)$, and $Y'\times \mathbb R$ is another totally geodesic submanifold whose dimension is $<\text{srk}(X)$. Then either $Y'$ is irreducible or the gap in dimensions of the two ($\dim(Y)-\dim(Y')$) is at least $r-2$.
\end{lem}
\begin{proof}
For $r=3$, the inequality is automatic. For $r\geq 4$, we check that the inequality follows from Table 2 above for all cases except for $\SO(5,5)/\SO(5)\times \SO(5)$, where the gap between the largest ($\SO(4,4)/\SO(4)\times \SO(4)$) and second largest dimension ($\SL(5,\mathbb R)/\SO(5)$) is $2$. However, the space $\SL(5,\mathbb R)/\SO(5)$ is irreducible. So the gap between $\SO(4,4)/\SO(4)\times \SO(4)$ and any reducible space $Y'$ will be at least $3$. This completes the proof.
\end{proof}
\subsection{The k-th Splitting Rank}\label{sec:srk^k}
\begin{defn}\label{defn:srk}
Let $X$ be a rank $r$ symmetric space of non-compact type. For each $k$ ($1\leq k\leq r$), we define the \emph{$k$-th splitting rank} of $X$, denoted $\text{srk}^k(X)$, to be the maximal dimension of a totally geodesic submanifold $Y\subset X$ which splits off an isometric $\mathbb{R}^k$-factor.
\end{defn}
\begin{remark}
In the above notion, the first splitting rank is just our previous notion of splitting rank. We can also see that $\text{srk}^{k+1}(X)\leq \text{srk}^{k}(X)$ for $1\leq k \leq r-1$ and $\text{srk}^r(X)=r$. We abuse notation and set $\text{srk}^0(X)=\dim(X)$.
\end{remark}
\begin{prop}\label{prop:k-max mfld}
Suppose $Y \times \mathbb{R}^k\subset X$ has the maximal dimension, that is, $\dim (Y \times \mathbb{R}^k)=\text{srk}^k(X)$. Then the corresponding Lie triple system $[\mathfrak{p}',[\mathfrak{p}',\mathfrak{p}']]\subset \mathfrak{p}'$ has the form $\mathfrak{p}'=\mathfrak{a}\bigoplus_{\alpha\in \Lambda^+, \alpha(V_k)=0}\mathfrak{p}_\alpha$, where $\mathfrak{a}$ is a choice of maximal abelian subalgebra in $\mathfrak{p}$ that contains the $k$-dimensional Euclidean factor $V_k$.
\end{prop}
\begin{proof}
The proof is the same as Proposition \ref{prop:max}, just replacing $V$ with $V_k$.
\end{proof}
\begin{prop}\label{prop:srk gap}
For $1\leq k \leq r-1$, we have strict inequality $\text{srk}^{k+1}(X)< \text{srk}^{k}(X)$. Therefore, the totally geodesic submanifold $Y\times \mathbb R^k$ that has dimension $\text{srk}^{k}(X)$ does not split off any further $\mathbb R$-factors.
\end{prop}
\begin{proof}
Suppose $Y_{k+1}\times \mathbb{R}^{k+1}\subset X$ has the dimension $\text{srk}^{k+1}(X)$. According to Proposition \ref{prop:k-max mfld}, the tangent space of $Y_{k+1}\times \mathbb{R}^{k+1}$ is identified with $\mathfrak{p}'=\mathfrak{a}\bigoplus_{\alpha\in \Lambda^+, \alpha(V_{k+1})=0}\mathfrak{p}_\alpha$, for some $\mathfrak{a}$ that contains the $\mathbb{R}^{k+1}$-factor $V_{k+1}$. We can choose a root $\alpha_0$ whose root vector $H_{\alpha_0}\in\mathfrak a$ does not lie in the orthogonal complement $V_{k+1}^\perp$; such a root exists since the roots span $\mathfrak a^*$, as $X$ has no Euclidean factor. Let $V_k=V_{k+1}\cap H_{\alpha_0}^\perp$ be a $k$-dimensional subspace of $V_{k+1}$; then the Lie triple system $\mathfrak{p}''=\mathfrak{a}\bigoplus_{\alpha\in \Lambda^+, \alpha(V_{k})=0}\mathfrak{p}_\alpha$ strictly contains $\mathfrak{p}'$, since $\alpha_0(V_k)=0$ but $\alpha_0(V_{k+1})\neq0$. Therefore, $\text{srk}^{k+1}(X)=\dim \mathfrak{p}'< \dim \mathfrak{p}''\leq \text{srk}^k(X)$.
\end{proof}
\begin{lem}\label{lem:k-srk estimate}
Let $X$ be an irreducible rank $r$ ($r\geq 2$) symmetric space of non-compact type. Then $\text{srk}^k(X)\leq \text{srk}(X)-2(k-1)$ holds for all $1\leq k< r$.
\end{lem}
\begin{proof}
We show this by induction on the rank of the symmetric space. For $r=2$, the only possible value for $k$ is $k=1$, and the inequality holds immediately. Suppose we have the inequality for all such irreducible symmetric spaces of rank $l$ ($l\geq 2$), assuming $\text{rank}(X)=l+1$, we want to show $\text{srk}^k(X)\leq \text{srk}(X)-2(k-1)$ for all $1\leq k< l+1$. Notice when $k=1$, the inequality is trivially true, so we may assume $k\geq 2$.
Let $\text{srk}^k(X)=\dim(Y_k\times \mathbb R^k)$, where $Y_k$ is described as in Proposition \ref{prop:k-max mfld}. Let $V_k$ denote the $\mathbb R^k$-factor. We inductively define $V_i$ so that it is an $i$ dimensional Euclidean subspace of $V_{i+1}$ and $\ker(V_{i+1})\cap \Lambda\subsetneqq \ker(V_i)\cap \Lambda$. This will give rise to an extending chain of Lie triples $\mathfrak{p}_k\subset ...\subset \mathfrak{p}_1\subset\mathfrak{p}$, corresponding to a totally geodesic chain $Y_k\times \mathbb R^k\subset ...\subset Y_1\times \mathbb R\subset X$, such that for each $1\leq i\leq k$, $\mathfrak p_i=\mathfrak{a}\bigoplus_{\alpha\in \Lambda^+, \alpha(V_i)=0}\mathfrak{p}_\alpha$. The choice of $V_i$ implies that $\dim(\mathfrak p_{i+1})< \dim(\mathfrak p_i)$ and therefore $Y_i$ does not split off an $\mathbb R$-factor, for all $i$. Besides, since $Y_k\times \mathbb R^k\subset Y_1\times \mathbb R$ have a common $\mathbb R$-factor $V_1$, $Y_k\times \mathbb R^{k-1}$ is totally geodesic in $Y_1$.
Now if $Y_1$ is irreducible, by the induction hypothesis, we have $\dim(Y_k\times \mathbb R^{k-1})\leq \text{srk}^{k-1}(Y_1)\leq \text{srk}(Y_1)-2(k-2)$. According to Corollary \ref{cor:si>rank}, we have $\text{srk}(Y_1)\leq \dim(Y_1)-l\leq \dim(Y_1)-2$. Hence combining the two inequalities, we conclude $\dim(Y_k\times \mathbb R^k)\leq\text{srk}(Y_1)-2(k-2)+1\leq \dim(Y_1)-2-2(k-2)+1=\dim(Y_1\times \mathbb R)-2(k-1)\leq\text{srk}(X)-2(k-1)$.
If $Y_1$ is reducible, then $Y_1\times \mathbb R$ can not have the dimension equal to the splitting rank of $X$ by Corollary \ref{cor:connect}. So Lemma \ref{lem:gap} implies that $\dim(Y_1\times \mathbb R)\leq \text{srk}(X)-(l+1-2)$. The increasing chain $\mathfrak{p}_k\subset ...\subset \mathfrak{p}_1\subset\mathfrak{p}$ gives the inequality $\dim(\mathfrak{p}_k)\leq \dim(\mathfrak{p}_1)-(k-1)$. Notice $\mathfrak{p}_i$ is identified with the tangent space of $Y_i\times \mathbb R^i$, so we can then estimate $\dim(Y_k\times \mathbb R^k)=\dim(\mathfrak{p}_k)\leq \text{srk}(X)-(l+1-2)-(k-1)\leq \text{srk}(X)-2(k-1)$.
We have shown in both cases that the inequality holds for all rank $l+1$ irreducible symmetric spaces. This completes the induction argument and hence the proof of this lemma.
\end{proof}
\begin{cor}\label{cor:k-srk estimate}
Let $X$ be an irreducible rank $r$ ($r\geq 1$) symmetric space of non-compact type, excluding $\SL(3,\mathbb R)/\SO(3)$, $\Sp(2,\mathbb R)/U(2)$, $G_2^2/\SO(4)$ and $\SL(4,\mathbb R)/\SO(4)$. Then $\text{srk}^k(X)\leq \text{srk}(X)-2(k-1)$ holds for all $1\leq k\leq r$.
\end{cor}
\begin{proof}
Notice the inequality automatically holds in rank one cases, and in view of Lemma \ref{lem:k-srk estimate} we only need to consider the case when $k=r$. If $k=r$, the inequality is equivalent to $3r-2\leq \text{srk}(X)$. We know that the dimension ($=n$) of an irreducible symmetric space grows roughly quadratically in its rank ($=r$), and actually we can check that $n\geq 3r$ whenever $r\geq 3$. This together with Corollary \ref{cor:connect} implies that $\text{srk}(X)=\dim(Y\times \mathbb R)\geq 3(r-1)+1=3r-2$ whenever $r\geq 4$. This proves the corollary in all cases where $\rank\geq 4$. When $r=2$, it is equivalent to show $\text{srk}(X)\geq 4$, and according to Table 1, this excludes $\SL(3,\mathbb R)/\SO(3)$, $\Sp(2,\mathbb R)/U(2)$ and $G_2^2/\SO(4)$. When $r=3$, it is equivalent to show $\text{srk}(X)\geq 7$, and by Table 1 this only excludes $\SL(4,\mathbb R)/\SO(4)$.
\end{proof}
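For instance, for the excluded rank two spaces one has (using Table 1; the space $\Sp(2,\mathbb R)/U(2)$ corresponds to the $\SO^0_{r,r+k}$ row with $r=2$, $k=1$ via the standard local isomorphism $\mathfrak{sp}(2,\mathbb R)\cong\mathfrak{so}(2,3)$):
$$\text{srk}\big(\SL(3,\mathbb R)/\SO(3)\big)=3,\qquad \text{srk}\big(\Sp(2,\mathbb R)/U(2)\big)=3,\qquad \text{srk}\big(G_2^2/\SO(4)\big)=3,$$
each falling short of the required bound $3r-2=4$, whereas for example $\text{srk}\big(\SL(3,\mathbb C)/\mathrm{SU}(3)\big)=8-4=4$ meets it.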
\subsection{Reducible Symmetric Spaces}\label{sec:reducible}
We intend to generalize Corollary \ref{cor:k-srk estimate} to all higher rank symmetric spaces of non-compact type. Below is the key lemma that characterizes certain $\mathbb R$-split totally geodesic submanifolds in reducible symmetric spaces.
\begin{lem}\label{lem:splitting}
Let $X$ be a symmetric space of non-compact type, and $Z=Y\times \mathbb R^k$ a totally geodesic subspace in $X$ that has dimension equal to the $k$-th splitting rank of $X$. If $X$ splits as a product of $X_1$ and $X_2$, then $Z$ also splits as a product of $Z_1$ and $Z_2$, where $Z_i=Y_i\times \mathbb R^{k_i}$ is totally geodesic in $X_i$, for $i=1,2$ and some $k_i\geq 0$ satisfying $k_1+k_2=k$.
\end{lem}
\begin{proof}
We write $X=G/K$ and fix a basepoint $x\in X$. We form the Cartan decomposition $\mathfrak{g}=\mathfrak{k}+\mathfrak{p}$, where $\mathfrak{p}$ can be identified with the tangent space $T_xX$. We denote $V_k\subset \mathfrak{p}$ the $\mathbb R^k$ factor of $Z$, and extend $V_k$ to a maximal abelian subalgebra $\mathfrak{a}\subset \mathfrak{p}$. Since $X$ splits as a product of $X_1$ and $X_2$, we can write $\mathfrak{a}=\mathfrak{a}_1\oplus \mathfrak{a}_2$, and also the set of roots $\Lambda$ of $X$ decomposes as $\Lambda_1\cup \Lambda_2$, where $\Lambda_i$ is the set of roots belonging to $X_i$ with respect to $\mathfrak{a}_i$. By Proposition \ref{prop:k-max mfld}, $Z$ has the Lie triple system $\mathfrak{p}'=\mathfrak{a}\bigoplus_{\alpha\in \Lambda^+, \alpha(V_k)=0}\mathfrak{p}_\alpha$. Let $\mathfrak{p}_i'=\mathfrak{a}_i\bigoplus_{\alpha\in \Lambda_i^+, \alpha(V_k)=0}\mathfrak{p}_\alpha$, we have $\mathfrak{p}'=\mathfrak{p}_1'\oplus \mathfrak{p}_2'$. Notice $\mathfrak{p}_i'\subset \mathfrak{p}_i$ is a Lie triple system. Indeed, $\mathfrak{p}_i'=\mathfrak{a}_i\bigoplus_{\alpha\in \Lambda_i^+, \alpha(V_{k,i})=0}\mathfrak{p}_\alpha$ where $V_{k,i}$ is the orthogonal projection of $V_k$ to $\mathfrak{a}_i$. This implies that $V_{k,i}$ is the Euclidean factor of $\mathfrak{p}_i$, therefore $V_k$ splits as $V_{k,1}\oplus V_{k,2}$, which completes the proof.
\end{proof}
\begin{cor}\label{cor:reducible k-srk}
Let $X$ be a rank $r$ symmetric space of non-compact type. Assume $X=X_1\times X_2$, where $\text{rank}(X_i)=r_i$ for $i=1,2$. Then $$\text{srk}^k(X)=\text{Max}\{\text{srk}^{j_1}(X_1)+\text{srk}^{j_2}(X_2):{0\leq j_1\leq r_1,0\leq j_2\leq r_2}\;\text{and}\;j_1+j_2=k\}$$
\end{cor}
\begin{remark}
This is a direct consequence of Lemma \ref{lem:splitting}. In the corollary, recall that by definition $\text{srk}^0(X)=\dim(X)$. As a result, $\text{srk}(X_1\times X_2)=\text{Max}\{\text{srk}(X_1)+\dim(X_2),\text{srk}(X_2)+\dim(X_1)\}$. Furthermore, if we denote $\text{si}^k(X):=n-\text{srk}^k(X)$ the k-th splitting index of $X$, then Corollary \ref{cor:reducible k-srk} simply says $$\text{si}^k(X_1\times X_2)=\text{Min}\{\text{si}^{j_1}(X_1)+\text{si}^{j_2}(X_2):0\leq j_1\leq r_1,0\leq j_2\leq r_2\;\text{and}\;j_1+j_2=k\}$$
and that $\text{si}(X_1\times X_2)=\text{Min}\{\text{si}(X_1),\text{si}(X_2)\}$.
\end{remark}
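To illustrate the formula, take (with the values from Table 1) $X_1=\SL(3,\mathbb R)/\SO(3)$, so $\dim X_1=5$ and $\text{srk}(X_1)=3$, and $X_2=\SL(4,\mathbb R)/\SO(4)$, so $\dim X_2=9$ and $\text{srk}(X_2)=6$. Then
$$\text{srk}(X_1\times X_2)=\text{Max}\{3+9,\;6+5\}=12,\qquad \text{si}(X_1\times X_2)=\text{Min}\{5-3,\;9-6\}=2.$$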
\begin{thm}\label{thm:brain}
Let $X$ be a rank $r$ symmetric space of non-compact type. Assume $X$ has no direct factors isometric to $\mathbb H^2$, $\SL(3,\mathbb R)/\SO(3)$, $\Sp(2,\mathbb R)/U(2)$, $G_2^2/\SO(4)$ or $\SL(4,\mathbb R)/\SO(4)$. Then $\text{srk}^k(X)\leq \text{srk}(X)-2(k-1)$ for all $1\leq k\leq r$.
\end{thm}
\begin{proof}
We write $X$ as a product of irreducible symmetric spaces $X_1\times...\times X_s$. Using the notion of splitting index described in the previous remark, the inequality is equivalent to $\text{si}^k(X)\geq \text{si}(X)+2(k-1)$. By repeatedly applying Corollary \ref{cor:reducible k-srk}, we can assume $\text{si}^k(X)=\sum_{l=1}^s\text{si}^{j_l}(X_l)$ for some $j_l$ satisfying $0\leq j_l\leq r_l$ and $\sum_{l=1}^s j_l=k$ where $r_l$ is the rank of $X_l$. For each $j_l>0$, we have $\text{si}^{j_l}(X_l)\geq \text{si}(X_l)+2(j_l-1)$ by Corollary \ref{cor:k-srk estimate}. Notice $\text{si}^0(X)=0$ and $\text{si}^{j_l}(X_l)$ does not contribute to the summation if $j_l=0$. We can further estimate
$$\text{si}^k(X)=\sum_{1\leq l\leq s,j_l>0}\text{si}^{j_l}(X_l)\geq \sum_{1\leq l\leq s,j_l>0} [\text{si}(X_l)+2(j_l-1)]= 2k+ \sum_{1\leq l\leq s,j_l>0}(\text{si}(X_l)-2)$$
As a consequence of Corollary \ref{cor:si>rank}, we have $\text{si}(X_l)\geq 2$ as we assume no $\mathbb H^2$-factors. Now we apply the inequality $\text{si}(X_l)\geq \text{Min}_{1\leq l\leq s}\text{si}(X_l)=\text{si}(X)$ to one of the $l$ in the summation, and apply $\text{si}(X_l)\geq 2$ to the rest of the $l$. We finally obtain $\text{si}^k(X)\geq \text{si}(X)+2k-2$, which completes the proof.
\end{proof}
\section{Bounded Cohomology}
In this section, we generalize the method of \cite{LW}, and show the surjectivity of comparison maps in a slightly larger range ($\geq \text{srk}+2$). The approach is quite similar: Theorem \ref{thm:brain} generalizes \cite[Lemma 4.6]{LW}, allowing us to improve its main theorem to the following
\begin{thm}\label{thm:main}
Let $X=G/K$ be an $n$-dimensional symmetric space of non-compact type of rank $r\geq 2$, and $\Gamma$ a cocompact torsion-free lattice in $G$. Assume $X$ has no direct factors of $\mathbb H^2$, $\SL(3,\mathbb R)/\SO(3)$, $\Sp(2,\mathbb R)/U(2)$, $G_2^2/\SO(4)$ and $\SL(4,\mathbb R)/\SO(4)$, then the comparison maps
$\eta:H^{*}_{c,b}(G,\mathbb{R})\rightarrow H^{*}_c(G,\mathbb{R})$ and $\eta':H^{*}_{b}(\Gamma,\mathbb{R})\rightarrow
H^{*}(\Gamma,\mathbb{R})$ are both surjective in all degrees $* \geq \text{srk}(X)+2$.
\end{thm}
\begin{remark}
In the irreducible case of $\SL(r+1,\mathbb R)/\SO(r+1)$, the splitting rank is exactly $n-r$, and we recover the main theorem of \cite{LW}. In general, $\text{srk}(X)$ is smaller than $n-r$ (see Corollary \ref{cor:si>rank}), so our theorem produces surjectivity in a larger range than \cite{LW}.
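To illustrate the improvement on a concrete family (using the splitting rank computed in the Appendix), take $X=\SO_0(r,r)/\SO(r)\times \SO(r)$ with $r\geq 5$, so that $n=r^2$ and
\begin{align*}
\text{srk}(X)=n-(2r-2)<n-r .
\end{align*}
Theorem \ref{thm:main} then gives surjectivity of the comparison maps in all degrees $*\geq n-2r+4$, whereas the range obtained in \cite{LW} is $*\geq n-r+2$.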
\end{remark}
We summarize the approach of \cite{LW} in the following steps.
\textbf{Step 1}: Notice that surjectivity of $\eta'$ implies surjectivity of $\eta$. In order to show surjectivity of $\eta'$ in degree $k$, it is equivalent to assign each cohomology class $[f]\in H^k(\Gamma,\mathbb R)$ a bounded representative. By the explicit isomorphism $H_{dR}^k(X/\Gamma, \mathbb{R})\simeq H_{sing}^k(X/\Gamma,\mathbb{R})\simeq H^k(\Gamma,\mathbb{R})$, we can view $f:\Gamma^k\rightarrow \mathbb R$ as a function that integrates a $k$-form $f_\omega$ over a $k$-simplex generated by a $k$-tuple in $\Gamma$. We now replace it (within the same cohomology class) by a function that integrates the same form $f_\omega$ over a ``barycentrically straightened'' $k$-simplex, and claim it is bounded when $k$ is in certain degrees. This produces the bounded representative. (See \cite[section 5.1, 5.2]{LW} for more details.)
\textbf{Step 2}: For each $1\leq k\leq n$, the barycentric $k$-straightening is a map $st_k: C^k(X)\rightarrow C^k(X)$, where $X=G/K$ is a symmetric space of non-compact type. It gives a $C^1$-chain map and is chain homotopic to the identity. It is also $G$-equivariant hence preserves the $\Gamma$-action. Moreover, if one can in addition show that the straightened $k$-simplices have uniformly bounded Jacobian, then the function $f_\omega$ is also bounded in degree $k$. As a result the surjectivity of the comparison map is obtained in degree $k$. (See \cite[section 2.3]{LW} for more details.) Notice in \cite{LW}, they showed uniformly bounded Jacobian for irreducible symmetric spaces excluding $\SL(3,\mathbb R)/\SO(3)$ and $\SL(4,\mathbb R)/\SO(4)$ when $k\geq n-r+2$. This is generalized in this paper, where we show uniformly bounded Jacobian in degrees $k\geq \text{srk}(X)+2$, for all symmetric spaces satisfying the condition of Theorem \ref{thm:brain}.
\textbf{Step 3}: By the computation in \cite[section 2.3]{LW}, the Jacobian of a straightened $k$-simplex is bounded above (up to a multiplicative constant) by the quotient $\det(Q_1|_S)^{1/2}/\det(Q_2|_S)$, where $S$ is a $k$-dimensional subspace in a tangent space $T_xX$, and $Q_1$, $Q_2$ are two positive semidefinite quadratic forms ($Q_2$ is actually positive definite) defined by the following:
$$Q_1(v,v)=\int_{\partial_F X}dB^2_{(x,\theta)}(v)d\mu(\theta)$$
$$Q_2(v,v)=\int_{\partial_F X}DdB_{(x,\theta)}(v,v)d\mu(\theta)$$
In the above expression, $B$ is the Busemann function on $X$ based at some fixed point, and $\mu$ is a probability measure fully supported on the Furstenberg boundary $\partial_F X$. Therefore, if we can bound $\det(Q_1|_S)^{1/2}/\det(Q_2|_S)$ by some constant $C$ that only depends on $X$ (independent of the choices $x\in X$ and $S\subset T_xX$), then we are able to control the Jacobian in degree $k=\dim(S)$. (See \cite[section 2.2, 2.3]{LW} for more details.)
\textbf{Step 4}: In order to show that the ratio $\det(Q_1|_S)^{1/2}/\det(Q_2|_S)$ is uniformly bounded, we need an eigenvalue matching property. If $S$ is of top dimension $n$, then the ratio is just $\det(Q_1)^{1/2}/\det(Q_2)$. Following the approach of \cite{CF1}, \cite{CF2}, it suffices to find, for each small eigenvalue of $Q_2$, two comparably small eigenvalues of $Q_1$ to cancel it. Generalizing this argument, if we were able to find $r-2$ additional small eigenvalues of $Q_1$ to cancel with the smallest eigenvalue of $Q_2$, then we can restrict the quadratic forms to a subspace $S$ of dimension $k$ (where $k\geq n-r+2$), and the ratio of the determinants $\det(Q_1|_S)^{1/2}/\det(Q_2|_S)$ remains uniformly bounded. This is implied by a weak eigenvalue matching theorem. (See \cite[Theorem 3.3]{LW} and originally \cite{CF2}.) Actually, there are at most $r$ eigenvectors of $Q_2$ with small eigenvalues, and they almost lie in the tangent space of a flat (they have small angle with a flat). For each such eigenvector $v_i$, we can always find two unit vectors $v_i'$, $v_i''$ such that the $Q_1$ values on these two vectors are bounded above by $Q_2(v_i,v_i)$, up to a uniform multiplicative constant (we say in this case that $v_i'$ and $v_i''$ cancel $v_i$). Moreover, for the eigenvector $v_1$ corresponding to the smallest eigenvalue, we are able to find $r-2$ additional unit vectors $v_1^{(3)},...,v_1^{(r)}$ to cancel it, and the collection of all the pairs of vectors, together with the $r-2$ additional vectors, almost forms an orthonormal frame. Then by the Gram--Schmidt process and standard linear algebra, we can find the eigenvalue matching. (See \cite[section 3.3]{LW} for more details.)
\textbf{Step 5}: Finally, the weak eigenvalue matching theorem described in Step 4 can be further reduced to a combinatorial problem. For a single vector $v$ that lies in a tangent space $\mathfrak{a}$ of a flat, we denote by $v^*$ the most singular vector in a fixed small neighborhood of $v$. Then any vector that lies in $Q_v=\bigoplus_{\alpha\in \Lambda^+, \alpha(v^*)\neq0}\mathfrak{p}_\alpha$ will be able to cancel $v$. Now for each root $\alpha$, we pick an orthonormal frame $\{b_{\alpha_i}\}$ for $\mathfrak{p}_\alpha$, and we collect these into the set $B=\{b_i\}_{i=1}^{n-r}$. The idea is to find among the set $B$, $3r-2$ distinct vectors $v_1',v_1'',...,v_1^{(r)},v_2',v_2'',...,v_r',v_r''$ to cancel a given almost orthonormal frame $\{v_1,...,v_r\}$ in $\mathfrak{a}$ (hence $v_1^*,...,v_r^*$ are distinct). Notice that this is a generalization of the classic Hall's Marriage Problem, and in order to solve it, it is sufficient to establish a cardinality inequality: for any subcollection of vectors $\{v_{i_1},...,v_{i_k}\}$, the number of vectors in $B$ that belong to $\bigcup_{j=1}^k Q_{v_{i_j}}$ is at least $2k+r-2$ (which is shown in \cite[Lemma 4.6]{LW}). This solves the eigenvalue matching in the special case where the small eigenvectors of $Q_2$ all lie in the same $\mathfrak{a}$. For the general case where the small eigenvectors only have small angles to a flat, a similar argument is used, and we refer the readers to \cite[Section 4]{LW} for more details.\\
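In symbols, the counting condition to be verified can be written as follows: for every subcollection $\{v_{i_1},...,v_{i_k}\}$,
\begin{align*}
\#\left\{ b\in B \;:\; b\in \bigcup_{j=1}^k Q_{v_{i_j}} \right\}
\;\geq\;
2k+r-2 ,
\end{align*}
which, since each $Q_{v_{i_j}}$ is a direct sum of root spaces spanned by elements of $B$, amounts to the dimension bound $\dim(Q_{v_{i_1}}+...+Q_{v_{i_k}})\geq 2k+r-2$.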
\emph{Proof of Theorem \ref{thm:main}}: The proof is similar to that of \cite[Main Theorem]{LW}. Notice that Steps 1--3 go through unchanged in our case, and in Steps 4--5 we only need to find $n-\text{srk}(X)-2$ additional (instead of $r-2$) small eigenvalues of $Q_1$ to cancel with the smallest eigenvalue of $Q_2$. Showing this requires a weak eigenvalue matching theorem similar to \cite[Theorem 3.3]{LW}, where the existing $(2k+r-2)$-frame $\{v_1',v_1'',...,v_1^{(r)},v_2',v_2'',...,v_k',v_k''\}$ is now replaced by a $(2k+n-\text{srk}(X)-2)$-frame $\{v_1',v_1'',...,v_1^{(n-\text{srk}(X))},v_2',v_2'',...,v_k',v_k''\}$ and the corresponding angle inequalities are satisfied. This is further reduced to a similar Hall's-Marriage-type combinatorial problem, and the corresponding cardinality estimate is ensured by the following modification of \cite[Lemma 4.6]{LW}:
\begin{lem}\label{lem:revise}
Let $X=G/K$ be a rank $r\geq 2$ symmetric space of non-compact type, without direct factors isometric to $\mathbb H^2$, $\SL(3,\mathbb R)/\SO(3)$, $\Sp(2,\mathbb R)/U(2)$, $G_2^2/\SO(4)$, or $\SL(4,\mathbb R)/\SO(4)$. Fix a maximal abelian subalgebra $\mathfrak{a}\subset \mathfrak{p}$. Assume $\{v_1^*,...,v_r^*\}$ spans $\mathfrak{a}$, and let $Q_i=\bigoplus_{\alpha\in \Lambda^+, \alpha(v_i^*)\neq0}\mathfrak{p}_\alpha$. Then for any subcollection of vectors $\{v_{i_1}^*,...,v_{i_k}^*\}$, we have $\dim(Q_{i_1}+...+Q_{i_k})\geq (2k+n-\text{srk}(X)-2)$.
\end{lem}
\begin{proof}
Notice that $Q_{i_1}+...+Q_{i_k}=\bigoplus_{\alpha\in \Lambda^+, \alpha(V_k)\neq0}\mathfrak{p}_\alpha$, where $V_k$ is the span of $v_{i_1}^*,...,v_{i_k}^*$. Its orthogonal complement in $\mathfrak{p}$ is $\mathfrak{a}\bigoplus_{\alpha\in \Lambda^+, \alpha(V_k)=0}\mathfrak{p}_\alpha$, which has dimension at most $\text{srk}^k(X)$, and hence, according to Theorem \ref{thm:brain}, is bounded above by $\text{srk}(X)-2k+2$. Therefore, $\dim(Q_{i_1}+...+Q_{i_k})\geq 2k+n-\text{srk}(X)-2$. This completes the proof of Lemma \ref{lem:revise}, and hence of Theorem \ref{thm:main}.
\end{proof}
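As a quick consistency check of the estimate, consider the extreme case $k=r$, where the vectors $v_{i_1}^*,...,v_{i_r}^*$ span $\mathfrak{a}$ and $Q_{i_1}+...+Q_{i_r}$ is the sum of all positive root spaces. The lemma then asserts
\begin{align*}
n-r \;\geq\; 2r+n-\text{srk}(X)-2 ,
\qquad\text{i.e.}\qquad
\text{srk}(X)\;\geq\;3r-2 ,
\end{align*}
which is sharp, for instance, for $X=\SL(5,\mathbb R)/\SO(5)$, where $n=14$, $r=4$ and $\text{srk}(X)=n-r=10=3r-2$.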
\begin{remark}
Notice that the Main Theorem in \cite{LW} required the symmetric space to be irreducible. But the proof of \cite[Lemma 4.6]{LW} was the only step that used irreducibility. The rest of the proof remains valid for reducible symmetric spaces. Thus, replacing \cite[Lemma 4.6]{LW} by our Lemma \ref{lem:revise}, the proof goes through even in the reducible case.
\end{remark}
\section{Discussions}
As we have seen, the bound we obtain in the \textbf{Main Theorem} is almost optimal. We showed that the Jacobian is uniformly bounded in degrees $\geq \text{srk}(X)+2$, and we know the Jacobian estimate cannot be improved to degrees $\leq \text{srk}(X)$. Thus the only case left unknown is the degree $\text{srk}(X)+1$. We expect a negative answer in this degree, as evidenced by the analogue in rank one. In the rank one case, Besson, Courtois and Gallot \cite{BCG} reproved the Mostow rigidity theorem using the barycenter method, obtaining a uniformly bounded Jacobian (of the same expression as ours) in degrees $\geq 3$ (they gave explicit bounds in order to show the Mostow rigidity theorem). Notice that the splitting rank of a real rank one symmetric space is identically $1$, so our estimate agrees with the original estimate of Besson, Courtois and Gallot. In degree $2=\text{srk}(X)+1$, however, the Jacobian estimate fails, corresponding to the fact that the Mostow rigidity theorem does not hold in dimension two.
\section{Appendix}
In this section, we finish the proofs of Theorem \ref{thm:srk} and Proposition \ref{prop:gap}. We combine the two proofs as both are case-by-case arguments. We note that the case of $\SL(r+1,\mathbb{R})/\SO(r+1)$ has already been treated in the main text.
\emph{Case of $\SL(r+1,\mathbb{C})/{\mathrm{SU}}(r+1)$}: the Dynkin diagram is of type $A_r$ and is shown in Figure 1, with multiplicities $2$ for all simple roots. Since $Y$ is generated by the truncated simple system $\{\alpha_1,...,\hat{\alpha_i},...,\alpha_r\}$ for some $i=1,...,r$, preserving the same multiplicities and configurations, we have $Y=\SL(i,\mathbb{C})/{\mathrm{SU}}(i)\times \SL(r-i+1,\mathbb{C})/{\mathrm{SU}}(r-i+1)$ for some $i=1,...,r$. The dimension of $Y$ equals $(i-1)(i+1)+(r-i)(r-i+2)$, so the codimension of $Y\times \mathbb{R}\subset X$ is $-2i^2+2(r+1)i$, which attains its minimum $2r$ when $i=1,r$. In both cases we have $Y=\SL(r,\mathbb{C})/{\mathrm{SU}}(r)$, and hence $\text{srk}(X)=\dim(Y\times \mathbb R)=n-2r$, where $n=\dim(X)=r(r+2)$. The codimension of $Y\times \mathbb{R}\subset X$ attains its second minimal value when $i=2, r-1$, provided $r\geq 3$. In this case, $Y'=\mathbb H^3\times \SL(r-1,\mathbb{C})/{\mathrm{SU}}(r-1)$ and the codimension of $Y'\times \mathbb{R}$ is $4r-4$, so the gap is $2r-4$.
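For the reader's convenience, we spell out how such codimension formulas are obtained in this first case; the later cases follow the same pattern. With $n=r(r+2)$ and $\dim(Y)=(i-1)(i+1)+(r-i)(r-i+2)$ as above,
\begin{align*}
\text{codim}(Y\times\mathbb R)
&=
r(r+2)-\big[(i-1)(i+1)+(r-i)(r-i+2)\big]-1
\\
&=
r^2+2r-(i^2-1)-(r-i)(r-i+2)-1
=
-2i^2+2(r+1)i
\; ,
\end{align*}
which, being a concave function of $i$, is minimized on $\{1,...,r\}$ at the endpoints $i=1,r$.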
\emph{Case of ${\mathrm{SU}}^*(2r+2)/\Sp(r+1)$}: the Dynkin diagram is of type $A_r$ and is shown in Figure 1, with multiplicities $4$ for all simple roots. Hence $Y={\mathrm{SU}}^*(2i)/\Sp(i)\times {\mathrm{SU}}^*(2r+2-2i)/\Sp(r+1-i)$ for some $i=1,...,r$. The codimension of $Y\times \mathbb{R}\subset X$ is $-4i^2+4(r+1)i$, which attains its minimum when $i=1,r$. In both cases $Y={\mathrm{SU}}^*(2r)/\Sp(r)$, and $\text{srk}(X)=n-4r$. The codimension attains its second minimal value when $i=2,r-1$, provided $r\geq 3$, in which case $Y'=\mathbb H^5\times{\mathrm{SU}}^*(2r-2)/\Sp(r-1)$. The codimension of $Y'\times \mathbb{R}$ is $8r-8$, so the gap is $4r-8$.
\emph{Case of $E_6^{-26}/F_4$}: the Dynkin diagram is of type $A_2$ and is shown in Figure 1 where $r=2$, with multiplicities $8$ for both simple roots. Hence $Y$ can only be $\mathbb H^9$ so that $\text{srk}(X)=10$. Notice that $E_6^{-26}/F_4$ is of rank two and it does not satisfy the condition of Proposition \ref{prop:gap}.
\begin{figure}
\makebox[1pt][c]{
\beginpicture
\setcoordinatesystem units <.9cm,.9cm> point at -.4 1
\setplotarea x from -.4 to 4.4, y from -1.4 to 1.4
\linethickness=.7pt
\putrule from 0.1 0 to 0.9 0
\putrule from 2.1 0 to 2.9 0
\put {$\cdot$} at 1.2 0
\put {$\cdot$} at 1.3 0
\put {$\cdot$} at 1.4 0
\put {$\cdot$} at 1.5 0
\put {$\cdot$} at 1.6 0
\put {$\cdot$} at 1.7 0
\put {$\cdot$} at 1.8 0
\put {$\circ$} at 0 0
\put {$\circ$} at 1 0
\put {$\circ$} at 2 0
\put {$\circ$} at 3 0
\put {$\circ$} at 3.7 0
\put {$\Longrightarrow$} at 3.35 0
\put {$\alpha_1$} at 0 -.3
\put {$\alpha_2$} at 1 -.3
\put {$\alpha_{r-2}$} at 2 -.3
\put {$\alpha_{r-1}$} at 3 -.3
\put {$\alpha_r$} at 3.8 -.3
\endpicture
}
\caption{Dynkin diagram of type $B_r$}
\end{figure}
\emph{Case of $\SO_0(r,r+k)/\SO(r)\times \SO(r+k)$}: the Dynkin diagram is of type $B_r$ and is shown in Figure 2, with ordered multiplicities $1,1,...,1,k$. If we remove $\alpha_i$, the remaining diagram (with multiplicity information) will represent $Y_i=\SL(i,\mathbb R)/\SO(i)\times [\SO_0(r-i,r-i+k)/\SO(r-i)\times \SO(r-i+k)]$ (notice $\SL(1,\mathbb R)/\SO(1)$ and $\SO_0(0,k)/\SO(0)\times \SO(k)$ are just a point by abuse of notation). Thus we can compute that $Y_i\times \mathbb R$ has codimension $-3i^2/2+(4r+2k-1)i/2$ in $X$. It attains a minimum when $i=1$ provided $r+2k>4$, and so $Y=\SO_0(r-1,r-1+k)/\SO(r-1)\times \SO(r-1+k)$, $\text{srk}(X)=n-(2r+k-2)$. The only space that satisfies $r+2k\leq 4$ is $\SO_0(2,3)/\SO(2)\times \SO(3)$, and it has splitting rank $3$, corresponding to $\mathbb H^2\times \mathbb R$. This agrees with the general formula and hence can be absorbed into it. Now the codimension attains its second minimum when $i=2$ provided $r+2k>7$, and so $Y'=\mathbb H^2\times\SO_0(r-2,r-2+k)/\SO(r-2)\times \SO(r-2+k)$, $\dim(Y'\times \mathbb R)=n-(4r+2k-7)$. Hence the gap is $2r+k-5$. As we focus on $r\geq 4$ in Proposition \ref{prop:gap}, the spaces that are excluded by $r+2k>7$ are $\SO_0(4,5)/\SO(4)\times \SO(5)$ and $\SO_0(5,6)/\SO(5)\times \SO(6)$. If $X=\SO_0(4,5)/\SO(4)\times \SO(5)$ ($\dim(X)=20$), then $Y$ is $\SO_0(3,4)/\SO(3)\times \SO(4)$ (dimension $12$), and $Y'$ is $\SL(4,\mathbb R)/\SO(4)$ (dimension $9$), so the gap is $3$. If $X=\SO_0(5,6)/\SO(5)\times \SO(6)$, then $Y=\SO_0(4,5)/\SO(4)\times \SO(5)$, and $Y'$ is either $\mathbb H^2\times \SO_0(3,4)/\SO(3)\times\SO(4)$ or $\SL(5,\mathbb R)/\SO(5)$ with dimension $14$. So the gap is $6$.
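Explicitly, in this case $n=r(r+k)$, $\dim\big(\SL(i,\mathbb R)/\SO(i)\big)=\tfrac{(i-1)(i+2)}{2}$ and $\dim\big(\SO_0(r-i,r-i+k)/\SO(r-i)\times \SO(r-i+k)\big)=(r-i)(r-i+k)$, so
\begin{align*}
\text{codim}(Y_i\times\mathbb R)
=
r(r+k)-\frac{(i-1)(i+2)}{2}-(r-i)(r-i+k)-1
=
-\frac{3i^2}{2}+\frac{(4r+2k-1)i}{2}
\; ,
\end{align*}
as quoted above.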
\emph{Case of $\SO(2r+1,\mathbb{C})/\SO(2r+1)$}: the Dynkin diagram is of type $B_r$ and is shown in Figure 2, with multiplicities $2$ for all simple roots. If we remove $\alpha_i$, the remaining diagram will represent $Y_i=\SL(i,\mathbb C)/{\mathrm{SU}}(i)\times \SO(2r-2i+1,\mathbb{C})/\SO(2r-2i+1)$ (notice $\SL(1,\mathbb C)/{\mathrm{SU}}(1)$ and $\SO(1,\mathbb C)/\SO(1)$ are just a point by abuse of notation). We compute that the codimension of $Y_i\times \mathbb R$ in $X$ is $-3i^2+(4r+1)i$. It has minimal value $4r-2$ when $i=1$ (when $r=2$ it takes minimal value on both $i=1,2$, but they both represent the same subspace $\mathbb H^3$). Hence the splitting rank is $n-(4r-2)$, corresponding to the subspace $Y=\SO(2r-1,\mathbb C)/\SO(2r-1)$. Now the codimension takes the second minimal value $8r-10$ when $i=2$, provided $r>5$. In this case, $Y'=\mathbb H^3\times \SO(2r-3,\mathbb C)/\SO(2r-3)$ and the gap is $4r-8$. If $r=4$, then $X=\SO(9,\mathbb C)/\SO(9)$ and $Y=\SO(7,\mathbb C)/\SO(7)$. The codimension takes its second minimal value $20$ when $Y'=\SL(4,\mathbb C)/{\mathrm{SU}}(4)$, hence the gap is $6$. If $r=5$, then $X=\SO(11,\mathbb C)/\SO(11)$ and $Y=\SO(9,\mathbb C)/\SO(9)$. The codimension takes its second minimal value $30$ when $Y'$ is $\mathbb H^3\times \SO(7,\mathbb C)/\SO(7)$ or $\SL(5,\mathbb C)/{\mathrm{SU}}(5)$, hence the gap is $12$.
\begin{figure}
\makebox[1pt][c]{
\beginpicture
\setcoordinatesystem units <.9cm,.9cm> point at -.4 1
\setplotarea x from -.4 to 4.4, y from -1.4 to 1.4
\linethickness=.7pt
\putrule from 0.1 0 to 0.9 0
\putrule from 2.1 0 to 2.9 0
\put {$\cdot$} at 1.2 0
\put {$\cdot$} at 1.3 0
\put {$\cdot$} at 1.4 0
\put {$\cdot$} at 1.5 0
\put {$\cdot$} at 1.6 0
\put {$\cdot$} at 1.7 0
\put {$\cdot$} at 1.8 0
\put {$\circ$} at 0 0
\put {$\circ$} at 1 0
\put {$\circ$} at 2 0
\put {$\circ$} at 3 0
\put {$\circ$} at 3.7 0
\put {$\Longleftarrow$} at 3.35 0
\put {$\alpha_1$} at 0 -.3
\put {$\alpha_2$} at 1 -.3
\put {$\alpha_{r-2}$} at 2 -.3
\put {$\alpha_{r-1}$} at 3 -.3
\put {$\alpha_r$} at 3.8 -.3
\endpicture
}
\caption{Dynkin diagram of type $C_r$}
\end{figure}
\emph{Case of $\Sp(r,\mathbb{R})/U(r)$}: the Dynkin diagram is of type $C_r$ and is shown in Figure 3, with multiplicities $1$ for all simple roots. If we remove $\alpha_i$, the remaining diagram will represent $Y_i=\SL(i,\mathbb R)/\SO(i)\times \Sp(r-i,\mathbb{R})/U(r-i)$ (notice $\SL(1,\mathbb R)/\SO(1)$ and $\Sp(0,\mathbb{R})/U(0)$ are just a point by abuse of notation). We compute that the codimension of $Y_i\times \mathbb R$ in $X$ is $-3i^2/2+(4r+1)i/2$. It has minimal value $2r-1$ when $i=1$ (when $r=2$ it takes minimal value on both $i=1,2$, but they both represent the same space $\mathbb H^2$). Hence the splitting rank is $n-(2r-1)$ corresponding to the space $Y=\Sp(r-1,\mathbb{R})/U(r-1)$. Now the codimension takes the second minimal value $4r-5$ when $i=2$, provided $r>5$. In this case, $Y'=\mathbb H^2\times \Sp(r-2,\mathbb{R})/U(r-2)$ and the gap is $2r-4$. If $r=4$, then $X=\Sp(4,\mathbb{R})/U(4)$ and $Y=\Sp(3,\mathbb{R})/U(3)$. The codimension takes its second minimal value $10$ when $Y'=\SL(4,\mathbb R)/\SO(4)$, hence the gap is $3$. If $r=5$, then $X=\Sp(5,\mathbb{R})/U(5)$ and $Y=\Sp(4,\mathbb{R})/U(4)$. The codimension takes its second minimal value $15$ when $Y'$ is $\mathbb H^2\times \Sp(3,\mathbb{R})/U(3)$ or $\SL(5,\mathbb R)/\SO(5)$, hence the gap is $6$.
\emph{Case of ${\mathrm{SU}}(r,r)/S(U(r)\times U(r))$}: the Dynkin diagram is of type $C_r$ and is shown in Figure 3, with ordered multiplicities $2,2,...,2,1$. If we remove $\alpha_i$, the remaining diagram will represent $Y_i=\SL(i,\mathbb C)/{\mathrm{SU}}(i)\times {\mathrm{SU}}(r-i,r-i)/S(U(r-i)\times U(r-i))$ (notice $\SL(1,\mathbb C)/{\mathrm{SU}}(1)$ and ${\mathrm{SU}}(0,0)/S(U(0)\times U(0))$ are just a point by abuse of notation). We compute that the codimension of $Y_i\times \mathbb R$ in $X$ is $-3i^2+4ri$. It has minimal value $4r-3$ when $i=1$, provided $r>3$. If $r=2$, $X$ is isomorphic to $\SO_0(2,4)/\SO(2)\times\SO(4)$, which has been solved previously. If $r=3$, the codimension is minimal for both $i=1,3$, which corresponds to $\SO_0(2,4)/\SO(2)\times\SO(4)$ and $\SL(3,\mathbb C)/{\mathrm{SU}}(3)$. Hence the splitting rank is $n-(4r-3)$, corresponding to the space $Y={\mathrm{SU}}(r-1,r-1)/S(U(r-1)\times U(r-1))$ (except for the case $r=3$ where there are two subspaces). Now the codimension takes the second minimal value $8r-12$ when $i=2$, provided $r>6$. In this case, $Y'=\mathbb H^3\times {\mathrm{SU}}(r-2,r-2)/S(U(r-2)\times U(r-2))$ and the gap is $4r-9$. If $r=4$, then $X={\mathrm{SU}}(4,4)/S(U(4)\times U(4))$ and $Y={\mathrm{SU}}(3,3)/S(U(3)\times U(3))$. The codimension takes its second minimal value $16$ when $Y'=\SL(4,\mathbb C)/{\mathrm{SU}}(4)$, hence the gap is $3$. If $r=5$, then $X={\mathrm{SU}}(5,5)/S(U(5)\times U(5))$ and $Y={\mathrm{SU}}(4,4)/S(U(4)\times U(4))$. The codimension takes its second minimal value $25$ when $Y'$ is $\SL(5,\mathbb C)/{\mathrm{SU}}(5)$, hence the gap is $8$. If $r=6$, then $X={\mathrm{SU}}(6,6)/S(U(6)\times U(6))$ and $Y={\mathrm{SU}}(5,5)/S(U(5)\times U(5))$. The codimension takes its second minimal value $36$ when $Y'$ is $\mathbb H^3\times {\mathrm{SU}}(4,4)/S(U(4)\times U(4))$ or $\SL(6,\mathbb C)/{\mathrm{SU}}(6)$, hence the gap is $15$.
\emph{Case of $\Sp(r,\mathbb{C})/\Sp(r)$}: the Dynkin diagram is of type $C_r$ and is shown in Figure 3, with multiplicities $2$ for all simple roots. If we remove $\alpha_i$, the remaining diagram will represent $Y_i=\SL(i,\mathbb C)/{\mathrm{SU}}(i)\times \Sp(r-i,\mathbb{C})/\Sp(r-i)$ (notice $\SL(1,\mathbb C)/{\mathrm{SU}}(1)$ and $\Sp(0,\mathbb{C})/\Sp(0)$ are just a point by abuse of notation). We compute that the codimension of $Y_i\times \mathbb R$ in $X$ is $-3i^2+(4r+1)i$. It has minimal value $4r-2$ when $i=1$, provided $r>2$, in which case the splitting rank is $n-(4r-2)$, corresponding to the space $Y=\Sp(r-1,\mathbb{C})/\Sp(r-1)$. If $r=2$, then $X$ is isomorphic to $\SO(5,\mathbb C)/\SO(5)$, which has been solved previously. Now the codimension takes the second minimal value $8r-10$ when $i=2$, provided $r>5$. In this case, $Y'=\mathbb H^3\times \Sp(r-2,\mathbb{C})/\Sp(r-2)$ and the gap is $4r-8$. If $r=4$, then $X=\Sp(4,\mathbb{C})/\Sp(4)$ and $Y=\Sp(3,\mathbb{C})/\Sp(3)$. The codimension takes its second minimal value $20$ when $Y'=\SL(4,\mathbb C)/{\mathrm{SU}}(4)$, hence the gap is $6$. If $r=5$, then $X=\Sp(5,\mathbb{C})/\Sp(5)$ and $Y=\Sp(4,\mathbb{C})/\Sp(4)$. The codimension takes its second minimal value $30$ when $Y'$ is $\mathbb H^3\times \Sp(3,\mathbb C)/\Sp(3)$ or $\SL(5,\mathbb C)/{\mathrm{SU}}(5)$, hence the gap is $12$.
\emph{Case of $\SO^*(4r)/U(2r)$}: the Dynkin diagram is of type $C_r$ and is shown in Figure 3, with ordered multiplicities $4,4,...,4,1$. If we remove $\alpha_i$, the remaining diagram will represent $Y_i={\mathrm{SU}}^*(2i)/\Sp(i)\times \SO^*(4r-4i)/U(2r-2i)$ (notice ${\mathrm{SU}}^*(2)/\Sp(1)$ and $\SO^*(0)/U(0)$ are just a point by abuse of notation). We compute that the codimension of $Y_i\times \mathbb R$ in $X$ is $-6i^2+(8r-1)i$. It has minimal value $8r-7$ when $i=1$, provided $r>3$, in which case the splitting rank is $n-(8r-7)$, corresponding to the space $Y=\SO^*(4r-4)/U(2r-2)$. If $r=2$, then $X$ is isomorphic to $\SO_0(2,6)/\SO(2)\times\SO(6)$, which has been solved previously. If $r=3$, then $X=\SO^*(12)/U(6)$, and the codimension has minimal value $15$ when $i=3$. In this case, the splitting rank occurs when $Y={\mathrm{SU}}^*(6)/\Sp(3)$ and is equal to $15$. Now the codimension takes the second minimal value $16r-26$ when $i=2$, provided $r>6$. In this case, $Y'=\mathbb H^5\times \SO^*(4r-8)/U(2r-4)$ and the gap is $8r-19$. If $r=4$, then $X=\SO^*(16)/U(8)$ and $Y=\SO^*(12)/U(6)$. The codimension takes its second minimal value $28$ when $Y'={\mathrm{SU}}^*(8)/\Sp(4)$, hence the gap is $3$. If $r=5$, then $X=\SO^*(20)/U(10)$ and $Y=\SO^*(16)/U(8)$. The codimension takes its second minimal value $45$ when $Y'={\mathrm{SU}}^*(10)/\Sp(5)$, hence the gap is $12$. If $r=6$, then $X=\SO^*(24)/U(12)$ and $Y=\SO^*(20)/U(10)$. The codimension takes its second minimal value $66$ when $Y'={\mathrm{SU}}^*(12)/\Sp(6)$, hence the gap is $25$.
\emph{Case of $\Sp(r,r)/\Sp(r)\times \Sp(r)$}: the Dynkin diagram is of type $C_r$ and is shown in Figure 3, with ordered multiplicities $4,4,...,4,3$. If we remove $\alpha_i$, the remaining diagram will represent $Y_i={\mathrm{SU}}^*(2i)/\Sp(i)\times [\Sp(r-i,r-i)/\Sp(r-i)\times \Sp(r-i)]$ (notice ${\mathrm{SU}}^*(2)/\Sp(1)$ and $\Sp(0,0)/\Sp(0)\times \Sp(0)$ are just a point by abuse of notation). We compute that the codimension of $Y_i\times \mathbb R$ in $X$ is $-6i^2+(8r+1)i$. It has minimal value $8r-5$ when $i=1$, provided $r>2$, in which case the splitting rank is $n-(8r-5)$, corresponding to the space $Y=\Sp(r-1,r-1)/\Sp(r-1)\times \Sp(r-1)$. If $r=2$, then $X=\Sp(2,2)/\Sp(2)\times \Sp(2)$, and the codimension has minimal value $10$ when $i=2$. So the splitting rank occurs when $Y=\mathbb H^5$ and is equal to $6$. Now the codimension takes the second minimal value $16r-22$ when $i=2$, provided $r>5$. In this case, $Y'=\mathbb H^5\times [\Sp(r-2,r-2)/\Sp(r-2)\times \Sp(r-2)]$ and the gap is $8r-17$. If $r=4$, then $X=\Sp(4,4)/\Sp(4)\times \Sp(4)$ and $Y=\Sp(3,3)/\Sp(3)\times \Sp(3)$. The codimension takes its second minimal value $36$ when $Y'={\mathrm{SU}}^*(8)/\Sp(4)$, hence the gap is $9$. If $r=5$, then $X=\Sp(5,5)/\Sp(5)\times \Sp(5)$ and $Y=\Sp(4,4)/\Sp(4)\times \Sp(4)$. The codimension takes its second minimal value $55$ when $Y'={\mathrm{SU}}^*(10)/\Sp(5)$, hence the gap is $20$.
\emph{Case of $E_7^{-25}/E_6\times U(1)$}:
the Dynkin diagram is of type $C_3$ and is shown in Figure 3 where $r=3$, with ordered multiplicities $8,8,1$. Hence $Y$ can only be $\SO_0(2,10)/\SO(2)\times \SO(10)$ (when removing $\alpha_1$), or $\mathbb H^9\times \mathbb H^2$ (when removing $\alpha_2$), or $E_6^{-26}/F_4$ (when removing $\alpha_3$). Among the three spaces, $E_6^{-26}/F_4$ has largest dimension thus $\text{srk}(X)=27$. Notice that $E_7^{-25}/E_6\times U(1)$ is of rank three and it does not satisfy the condition of Proposition \ref{prop:gap}.
\begin{figure}
\makebox[1pt][c]{
\beginpicture
\setcoordinatesystem units <.9cm,.9cm> point at -.4 1
\setplotarea x from -.4 to 5, y from -1.4 to 1.4
\linethickness=0.7pt
\putrule from 0.1 0 to 0.9 0
\putrule from 2.1 0 to 2.9 0
\put {$\cdot$} at 1.2 0
\put {$\cdot$} at 1.3 0
\put {$\cdot$} at 1.4 0
\put {$\cdot$} at 1.5 0
\put {$\cdot$} at 1.6 0
\put {$\cdot$} at 1.7 0
\put {$\cdot$} at 1.8 0
\put {$\circ$} at 0 0
\put {$\circ$} at 1 0
\put {$\circ$} at 2 0
\put {$\circ$} at 3 0
\put {$\circ$} at 3.4 0.4
\put {$\circ$} at 3.4 -0.4
\put {$\alpha_1$} at 0 -.3
\put {$\alpha_2$} at 1 -.3
\put {$\alpha_{r-3}$} at 1.8 -.3
\put {$\alpha_{r-2}$} at 2.8 -.3
\put {$\alpha_{r-1}$} at 4 .4
\put {$\alpha_r$} at 4 -.4
\put {$\diagup$} at 3.2 0.2
\put {$\diagdown$} at 3.2 -0.2
\endpicture
}
\caption{Dynkin diagram of type $D_r$}
\end{figure}
\emph{Case of $\SO_0(r,r)/\SO(r)\times \SO(r)$}: the Dynkin diagram is of type $D_r$ and is shown in Figure 4, with multiplicities $1$ for all simple roots. If we remove $\alpha_i$, the remaining diagram will represent $Y_i=\SL(i,\mathbb R)/\SO(i)\times [\SO_0(r-i,r-i)/\SO(r-i)\times \SO(r-i)]$ when $i< r-2$ (notice $\SO_0(3,3)/\SO(3)\times \SO(3)$ is the same as $\SL(4,\mathbb R)/\SO(4)$), and $Y_i=\mathbb H^2\times \mathbb H^2\times\SL(r-2,\mathbb R)/\SO(r-2)$ when $i=r-2$, and $Y_i=\SL(r,\mathbb R)/\SO(r)$ when $i=r-1, r$. We compute that the codimension of $Y_i\times \mathbb R$ in $X$ is $-3i^2/2+(4r-1)i/2$ for $1\leq i\leq r-2$ or $i=r$. It has unique minimal value $2r-2$ when $i=1$, provided $r>4$. If $r=4$, the codimension has minimal value $6$ when $i=1,3,4$. So the splitting rank occurs when $Y=\SL(4,\mathbb R)/\SO(4)$ and is equal to $10$. This agrees with the general result when $r>4$ hence can be absorbed into it. Now the codimension takes the second minimal value $4r-7$ when $i=2$, provided $r>7$. In this case, $Y'=\mathbb H^2\times [\SO_0(r-2,r-2)/\SO(r-2)\times \SO(r-2)]$ and the gap is $2r-5$. If $r=4$, then $X=\SO_0(4,4)/\SO(4)\times \SO(4)$ and $Y=\SL(4,\mathbb R)/\SO(4)$. The codimension takes its second minimal value $9$ when $Y'=\mathbb H^2\times \mathbb H^2\times \mathbb H^2$, hence the gap is $3$. If $r=5$, then $X=\SO_0(5,5)/\SO(5)\times \SO(5)$ and $Y=\SO_0(4,4)/\SO(4)\times \SO(4)$. The codimension takes its second minimal value $10$ when $Y'=\SL(5,\mathbb R)/\SO(5)$, hence the gap is $2$. If $r=6$, then $X=\SO_0(6,6)/\SO(6)\times \SO(6)$ and $Y=\SO_0(5,5)/\SO(5)\times \SO(5)$. The codimension takes its second minimal value $15$ when $Y'=\SL(6,\mathbb R)/\SO(6)$, hence the gap is $5$. If $r=7$, then $X=\SO_0(7,7)/\SO(7)\times \SO(7)$ and $Y=\SO_0(6,6)/\SO(6)\times \SO(6)$. The codimension takes its second minimal value $21$ when $Y'$ is either $\mathbb H^2\times\SO_0(5,5)/\SO(5)\times\SO(5)$ or $\SL(7,\mathbb R)/\SO(7)$, hence the gap is $9$.
\emph{Case of $\SO(2r,\mathbb{C})/\SO(2r)$}: the Dynkin diagram is of type $D_r$ and is shown in Figure 4, with multiplicities $2$ for all simple roots. If we remove $\alpha_i$, the remaining diagram will represent $Y_i=\SL(i,\mathbb C)/{\mathrm{SU}}(i)\times \SO(2r-2i,\mathbb{C})/\SO(2r-2i)$ when $i< r-2$ (notice $\SO(6,\mathbb{C})/\SO(6)$ is the same as $\SL(4,\mathbb C)/{\mathrm{SU}}(4)$), and $Y_i=\mathbb H^3\times \mathbb H^3\times\SL(r-2,\mathbb C)/{\mathrm{SU}}(r-2)$ when $i=r-2$, and $Y_i=\SL(r,\mathbb C)/{\mathrm{SU}}(r)$ when $i=r-1, r$. We compute that the codimension of $Y_i\times \mathbb R$ in $X$ is $-3i^2+(4r-1)i$ for $1\leq i\leq r-2$ or $i=r$. It has unique minimal value $4r-4$ when $i=1$, provided $r>4$. If $r=4$, the codimension has minimal value $12$ when $i=1,3,4$. So the splitting rank occurs when $Y=\SL(4,\mathbb C)/{\mathrm{SU}}(4)$ and is equal to $16$. This agrees with the general result when $r>4$ hence can be absorbed into it. Now the codimension takes the second minimal value $8r-14$ when $i=2$, provided $r>7$. In this case, $Y'=\mathbb H^3\times \SO(2r-4,\mathbb{C})/\SO(2r-4)$ and the gap is $4r-10$. If $r=4$, then $X=\SO(8,\mathbb{C})/\SO(8)$ and $Y=\SL(4,\mathbb C)/{\mathrm{SU}}(4)$. The codimension takes its second minimal value $18$ when $Y'=\mathbb H^3\times \mathbb H^3\times \mathbb H^3$, hence the gap is $6$. If $r=5$, then $X=\SO(10,\mathbb{C})/\SO(10)$ and $Y=\SO(8,\mathbb{C})/\SO(8)$. The codimension takes its second minimal value $20$ when $Y'=\SL(5,\mathbb C)/{\mathrm{SU}}(5)$, hence the gap is $4$. If $r=6$, then $X=\SO(12,\mathbb{C})/\SO(12)$ and $Y=\SO(10,\mathbb{C})/\SO(10)$. The codimension takes its second minimal value $30$ when $Y'=\SL(6,\mathbb C)/{\mathrm{SU}}(6)$, hence the gap is $10$. If $r=7$, then $X=\SO(14,\mathbb{C})/\SO(14)$ and $Y=\SO(12,\mathbb{C})/\SO(12)$. The codimension takes its second minimal value $42$ when $Y'$ is either $\mathbb H^3\times \SO(10,\mathbb{C})/\SO(10)$ or $\SL(7,\mathbb C)/{\mathrm{SU}}(7)$, hence the gap is $18$.
\begin{figure}
\makebox[1pt][c]{
\beginpicture
\setcoordinatesystem units <.9cm,.9cm> point at -.4 1
\setplotarea x from -.4 to 4.4, y from -1.4 to 1.4
\linethickness=.7pt
\putrule from 0.1 0 to 0.9 0
\putrule from 2.1 0 to 2.9 0
\put {$\cdot$} at 1.2 0
\put {$\cdot$} at 1.3 0
\put {$\cdot$} at 1.4 0
\put {$\cdot$} at 1.5 0
\put {$\cdot$} at 1.6 0
\put {$\cdot$} at 1.7 0
\put {$\cdot$} at 1.8 0
\put {$\circ$} at 0 0
\put {$\circ$} at 1 0
\put {$\circ$} at 2 0
\put {$\circ$} at 3 0
\put {$\circ$} at 3.9 0
\put {$\bigcirc$} at 3.9 0
\put {$\Longleftrightarrow$} at 3.4 0
\put {$\alpha_1$} at 0 -.3
\put {$\alpha_2$} at 1 -.3
\put {$\alpha_{r-2}$} at 2 -.3
\put {$\alpha_{r-1}$} at 3 -.3
\put {$(\alpha_r,2\alpha_r)$} at 4.2 -.3
\endpicture
}
\caption{Dynkin diagram of type $(BC)_r$}
\end{figure}
\emph{Case of ${\mathrm{SU}}(r,r+k)/S(U(r)\times U(r+k))$}: the Dynkin diagram is of type $(BC)_r$ and is shown in Figure 5, with ordered multiplicities $2,2,...,2,(2k,1)$. If we remove $\alpha_i$, the remaining diagram will represent $Y_i=\SL(i,\mathbb C)/{\mathrm{SU}}(i)\times {\mathrm{SU}}(r-i,r-i+k)/S(U(r-i)\times U(r-i+k))$ (notice $\SL(1,\mathbb C)/{\mathrm{SU}}(1)$ and ${\mathrm{SU}}(0,k)/S(U(0)\times U(k))$ are just a point by abuse of notation). We compute that the codimension of $Y_i\times \mathbb R$ in $X$ is $-3i^2+(4r+2k)i$. It has unique minimal value $4r+2k-3$ when $i=1$, provided $r+2k>3$, which holds for higher rank symmetric spaces. So the splitting rank occurs when $Y={\mathrm{SU}}(r-1,r-1+k)/S(U(r-1)\times U(r-1+k))$ and is equal to $n-(4r+2k-3)$. Now the codimension takes the second minimal value $8r+4k-12$ when $i=2$, provided $r+2k>6$. In this case, $Y'=\mathbb H^3\times{\mathrm{SU}}(r-2,r-2+k)/S(U(r-2)\times U(r-2+k))$ and the gap is $4r+2k-9$. As we focus on $r\geq 4$ in Proposition \ref{prop:gap}, the only space excluded by $r+2k>6$ is ${\mathrm{SU}}(4,5)/S(U(4)\times U(5))$ ($r=4,k=1$). In this case, $Y$ is ${\mathrm{SU}}(3,4)/S(U(3)\times U(4))$. The codimension takes its second minimal value $24$ when $Y'$ is either $\mathbb H^3\times {\mathrm{SU}}(2,3)/S(U(2)\times U(3))$ or $\SL(4,\mathbb C)/{\mathrm{SU}}(4)$, hence the gap is $9$.
\emph{Case of $\Sp(r,r+k)/\Sp(r)\times \Sp(r+k)$}: the Dynkin diagram is of type $(BC)_r$ and is shown in Figure 5, with ordered multiplicities $4,4,...,4,(4k,3)$. If we remove $\alpha_i$, the remaining diagram will represent $Y_i={\mathrm{SU}}^*(2i)/\Sp(i)\times [\Sp(r-i,r-i+k)/\Sp(r-i)\times \Sp(r-i+k)]$ (notice ${\mathrm{SU}}^*(2)/\Sp(1)$ and $\Sp(0,k)/\Sp(0)\times \Sp(k)$ are just a point by abuse of notation). We compute that the codimension of $Y_i\times \mathbb R$ in $X$ is $-6i^2+(8r+4k+1)i$. It has unique minimal value $8r+4k-5$ when $i=1$. So the splitting rank occurs when $Y=\Sp(r-1,r-1+k)/\Sp(r-1)\times \Sp(r-1+k)$ and is equal to $n-(8r+4k-5)$. Now the codimension takes the second minimal value $16r+8k-22$ when $i=2$, provided $2r+4k>11$. In this case, $Y'=\mathbb H^5\times\Sp(r-2,r-2+k)/\Sp(r-2)\times \Sp(r-2+k)$ and the gap is $8r+4k-17$. As we focus on $r\geq 4$ in Proposition \ref{prop:gap}, the inequality $2r+4k>11$ always holds.
\emph{Case of $\SO^*(4r+2)/U(2r+1)$}: the Dynkin diagram is of type $(BC)_r$ and is shown in Figure 5, with ordered multiplicities $4,4,...,4,(4,1)$. If we remove $\alpha_i$, the remaining diagram will represent $Y_i={\mathrm{SU}}^*(2i)/\Sp(i)\times \SO^*(4r-4i+2)/U(2r-2i+1)$ (notice ${\mathrm{SU}}^*(2)/\Sp(1)$ and $\SO^*(2)/U(1)$ are just a point by abuse of notation). We compute that the codimension of $Y_i\times \mathbb R$ in $X$ is $-6i^2+(8r+3)i$. It has unique minimal value $8r-3$ when $i=1$. So the splitting rank occurs when $Y=\SO^*(4r-2)/U(2r-1)$ and is equal to $n-(8r-3)$. Now the codimension takes the second minimal value $16r-18$ when $i=2$, provided $r>4$. In this case, $Y'=\mathbb H^5\times\SO^*(4r-6)/U(2r-3)$ and the gap is $8r-15$. As we focus on $r\geq 4$ in Proposition \ref{prop:gap}, the only space excluded by the inequality $r>4$ is $\SO^*(18)/U(9)$. In this special case, $Y=\SO^*(14)/U(7)$ and the codimension takes its second minimal value $44$ when $Y'={\mathrm{SU}}^*(8)/\Sp(4)$, hence the gap is $15$.
\emph{Case of $E_6^{-14}/\text{Spin}(10)\times U(1)$}: the Dynkin diagram is of type $(BC)_2$ and is shown in Figure 5, with ordered multiplicities $6,(8,1)$. Hence $Y$ can only be $\mathbb H^7$ or ${\mathrm{SU}}(1,5)/S(U(1)\times U(5))\simeq \mathbb C\mathbb H^5$. Comparing the dimensions of the two spaces, we conclude that the one realizing the splitting rank is $\mathbb C\mathbb H^5\times \mathbb R$, and the splitting rank is $11$. Notice that $E_6^{-14}/\text{Spin}(10)\times U(1)$ is of rank two, hence it does not satisfy the condition of Proposition \ref{prop:gap}.
\begin{figure}
\makebox[1pt][c]{
\beginpicture
\setcoordinatesystem units <.9cm,.9cm> point at -.4 1
\setplotarea x from -.4 to 4.4, y from -1.4 to 1.4
\linethickness=.7pt
\putrule from 0.1 0 to 0.9 0
\putrule from 2.1 0 to 2.9 0
\putrule from 1.1 0 to 1.9 0
\putrule from 3.1 0 to 3.9 0
\putrule from 2 0.1 to 2 0.7
\put {$\circ$} at 0 0
\put {$\circ$} at 1 0
\put {$\circ$} at 2 0
\put {$\circ$} at 3 0
\put {$\circ$} at 4 0
\put {$\circ$} at 2 0.8
\put {$\alpha_1$} at 0 -.3
\put {$\alpha_2$} at 1 -.3
\put {$\alpha_3$} at 2 -.3
\put {$\alpha_4$} at 2.5 0.8
\put {$\alpha_5$} at 3 -.3
\put {$\alpha_6$} at 4 -.3
\endpicture
}
\caption{Dynkin diagram of type $E_6$}
\end{figure}
\emph{Case of $E_6^6/\Sp(4)$}: the Dynkin diagram is of type $E_6$ and is shown in Figure 6, with multiplicities $1$ for all simple roots. If we remove one simple root, the remaining diagram will represent $4$ kinds of symmetric spaces: $Y_1=Y_6=\SO_0(5,5)/\SO(5)\times \SO(5)$, $Y_2=Y_5=\mathbb H^2\times \SL(5,\mathbb R)/\SO(5)$, $Y_3=\mathbb H^2\times \SL(3,\mathbb R)/\SO(3)\times \SL(3,\mathbb R)/\SO(3)$ and $Y_4=\SL(6,\mathbb R)/\SO(6)$. We compute that the dimensions of $Y_i\times \mathbb R$ are $26,17,13$ and $21$ respectively. So the splitting rank is $26$ and the gap is $5$.
\emph{Case of $E_6(\mathbb C)/E_6$}: the Dynkin diagram is of type $E_6$ and is shown in Figure 6, with multiplicities $2$ for all simple roots. If we remove one simple root, the remaining diagram will represent $4$ kinds of symmetric spaces: $Y_1=Y_6=\SO(10,\mathbb C)/\SO(10)$, $Y_2=Y_5=\mathbb H^3\times \SL(5,\mathbb C)/{\mathrm{SU}}(5)$, $Y_3=\mathbb H^3\times \SL(3,\mathbb C)/{\mathrm{SU}}(3)\times \SL(3,\mathbb C)/{\mathrm{SU}}(3)$ and $Y_4=\SL(6,\mathbb C)/{\mathrm{SU}}(6)$. We compute that the dimensions of $Y_i\times \mathbb R$ are $46,28,20$ and $36$ respectively. So the splitting rank is $46$ and the gap is $10$.
\begin{figure}
\makebox[1pt][c]{
\beginpicture
\setcoordinatesystem units <.9cm,.9cm> point at -.4 1
\setplotarea x from -.4 to 5.4, y from -1.4 to 1.4
\linethickness=.7pt
\putrule from 0.1 0 to 0.9 0
\putrule from 2.1 0 to 2.9 0
\putrule from 1.1 0 to 1.9 0
\putrule from 3.1 0 to 3.9 0
\putrule from 2 0.1 to 2 0.7
\putrule from 4.1 0 to 4.9 0
\put {$\circ$} at 0 0
\put {$\circ$} at 1 0
\put {$\circ$} at 2 0
\put {$\circ$} at 3 0
\put {$\circ$} at 4 0
\put {$\circ$} at 5 0
\put {$\circ$} at 2 0.8
\put {$\alpha_1$} at 0 -.3
\put {$\alpha_2$} at 1 -.3
\put {$\alpha_3$} at 2 -.3
\put {$\alpha_4$} at 2.5 0.8
\put {$\alpha_5$} at 3 -.3
\put {$\alpha_6$} at 4 -.3
\put {$\alpha_7$} at 5 -.3
\endpicture
}
\caption{Dynkin diagram of type $E_7$}
\end{figure}
\emph{Case of $E_7^7/{\mathrm{SU}}(8)$}: the Dynkin diagram is of type $E_7$ and is shown in Figure 7, with multiplicities $1$ for all simple roots. If we remove one simple root, the remaining diagram will represent $7$ kinds of symmetric spaces: $Y_1=\SO_0(6,6)/\SO(6)\times \SO(6)$, $Y_2=\mathbb H^2\times \SL(6,\mathbb R)/\SO(6)$, $Y_3=\mathbb H^2\times \SL(3,\mathbb R)/\SO(3)\times \SL(4,\mathbb R)/\SO(4)$, $Y_4=\SL(7,\mathbb R)/\SO(7)$, $Y_5=\SL(3,\mathbb R)/\SO(3)\times \SL(5,\mathbb R)/\SO(5)$, $Y_6=\mathbb H^2\times \SO_0(5,5)/\SO(5)\times \SO(5)$ and $Y_7=E_6^6/\Sp(4)$. We compute that the dimensions of $Y_i\times \mathbb R$ are $37,23,17,28,20,28$ and $43$ respectively. So the splitting rank is $43$ and the gap is $6$.
\emph{Case of $E_7(\mathbb C)/E_7$}: the Dynkin diagram is of type $E_7$ and is shown in Figure 7, with multiplicities $2$ for all simple roots. If we remove one simple root, the remaining diagram will represent $7$ kinds of symmetric spaces: $Y_1=\SO(12,\mathbb C)/\SO(12)$, $Y_2=\mathbb H^3\times \SL(6,\mathbb C)/{\mathrm{SU}}(6)$, $Y_3=\mathbb H^3\times \SL(3,\mathbb C)/{\mathrm{SU}}(3)\times \SL(4,\mathbb C)/{\mathrm{SU}}(4)$, $Y_4=\SL(7,\mathbb C)/{\mathrm{SU}}(7)$, $Y_5=\SL(3,\mathbb C)/{\mathrm{SU}}(3)\times \SL(5,\mathbb C)/{\mathrm{SU}}(5)$, $Y_6=\mathbb H^3\times \SO(10,\mathbb C)/\SO(10)$ and $Y_7=E_6(\mathbb C)/E_6$. We compute that the dimensions of $Y_i\times \mathbb R$ are $67,39,27,49,33,49$ and $79$ respectively. So the splitting rank is $79$ and the gap is $12$.
\begin{figure}
\makebox[1pt][c]{
\beginpicture
\setcoordinatesystem units <.9cm,.9cm> point at -.4 1
\setplotarea x from -.4 to 6.4, y from -1.4 to 1.4
\linethickness=.7pt
\putrule from 0.1 0 to 0.9 0
\putrule from 2.1 0 to 2.9 0
\putrule from 1.1 0 to 1.9 0
\putrule from 3.1 0 to 3.9 0
\putrule from 2 0.1 to 2 0.7
\putrule from 4.1 0 to 4.9 0
\putrule from 5.1 0 to 5.9 0
\put {$\circ$} at 0 0
\put {$\circ$} at 1 0
\put {$\circ$} at 2 0
\put {$\circ$} at 3 0
\put {$\circ$} at 4 0
\put {$\circ$} at 5 0
\put {$\circ$} at 6 0
\put {$\circ$} at 2 0.8
\put {$\alpha_1$} at 0 -.3
\put {$\alpha_2$} at 1 -.3
\put {$\alpha_3$} at 2 -.3
\put {$\alpha_4$} at 2.5 0.8
\put {$\alpha_5$} at 3 -.3
\put {$\alpha_6$} at 4 -.3
\put {$\alpha_7$} at 5 -.3
\put {$\alpha_8$} at 6 -.3
\endpicture
}
\caption{Dynkin diagram of type $E_8$}
\end{figure}
\emph{Case of $E_8^8/\SO(16)$}: the Dynkin diagram is of type $E_8$ and is shown in Figure 8, with multiplicities $1$ for all simple roots. If we remove one simple root, the remaining diagram will represent $8$ kinds of symmetric spaces: $Y_1=\SO_0(7,7)/\SO(7)\times \SO(7)$, $Y_2=\mathbb H^2\times \SL(7,\mathbb R)/\SO(7)$, $Y_3=\mathbb H^2\times \SL(3,\mathbb R)/\SO(3)\times \SL(5,\mathbb R)/\SO(5)$, $Y_4=\SL(8,\mathbb R)/\SO(8)$, $Y_5=\SL(4,\mathbb R)/\SO(4)\times \SL(5,\mathbb R)/\SO(5)$, $Y_6=\SL(3,\mathbb R)/\SO(3)\times \SO_0(5,5)/\SO(5)\times \SO(5)$, $Y_7=\mathbb H^2\times E_6^6/\Sp(4)$, and $Y_8=E_7^7/{\mathrm{SU}}(8)$. We compute that the dimensions of $Y_i\times \mathbb R$ are $50,30,22,36,24,31,45$ and $71$ respectively. So the splitting rank is $71$ and the gap is $21$.
\emph{Case of $E_8(\mathbb C)/E_8$}: the Dynkin diagram is of type $E_8$ and is shown in Figure 8, with multiplicities $2$ for all simple roots. If we remove one simple root, the remaining diagram will represent $8$ kinds of symmetric spaces: $Y_1=\SO(14,\mathbb C)/\SO(14)$, $Y_2=\mathbb H^3\times \SL(7,\mathbb C)/{\mathrm{SU}}(7)$, $Y_3=\mathbb H^3\times \SL(3,\mathbb C)/{\mathrm{SU}}(3)\times \SL(5,\mathbb C)/{\mathrm{SU}}(5)$, $Y_4=\SL(8,\mathbb C)/{\mathrm{SU}}(8)$, $Y_5=\SL(4,\mathbb C)/{\mathrm{SU}}(4)\times \SL(5,\mathbb C)/{\mathrm{SU}}(5)$, $Y_6=\SL(3,\mathbb C)/{\mathrm{SU}}(3)\times \SO(10,\mathbb C)/\SO(10)$, $Y_7=\mathbb H^3\times E_6(\mathbb C)/E_6$, and $Y_8=E_7(\mathbb C)/E_7$. We compute that the dimensions of $Y_i\times \mathbb R$ are $92,52,36,64,40,54,82$ and $134$ respectively. So the splitting rank is $134$ and the gap is $42$.
\begin{figure}
\makebox[1pt][c]{
\beginpicture
\setcoordinatesystem units <.9cm,.9cm> point at -.4 1
\setplotarea x from -.4 to 3.4, y from -1.4 to 1.4
\linethickness=.7pt
\putrule from 0.1 0 to 0.9 0
\putrule from 1.8 0 to 2.6 0
\put {$\Longrightarrow$} at 1.35 0
\put {$\circ$} at 0 0
\put {$\circ$} at 1 0
\put {$\circ$} at 1.7 0
\put {$\circ$} at 2.7 0
\put {$\alpha_1$} at 0 -.3
\put {$\alpha_2$} at 1 -.3
\put {$\alpha_3$} at 1.7 -.3
\put {$\alpha_4$} at 2.7 -.3
\endpicture
}
\caption{Dynkin diagram of type $F_4$}
\end{figure}
\emph{Case of $F_4^4/\Sp(3)\times\Sp(1)$}: the Dynkin diagram is of type $F_4$ and is shown in Figure 9, with multiplicities $1$ for all simple roots. If we remove one simple root, the remaining diagram will represent $3$ kinds of symmetric spaces: $Y_1=\Sp(3,\mathbb R)/U(3)$, $Y_2=Y_3=\mathbb H^2\times \SL(3,\mathbb R)/\SO(3)$, and $Y_4=\SO_0(3,4)/\SO(3)\times \SO(4)$. We compute that the dimensions of $Y_i\times \mathbb R$ are $13, 8$ and $13$ respectively. So the splitting rank is $13$ and the gap is $5$.
\emph{Case of $E_6^2/{\mathrm{SU}}(6)\times\Sp(1)$}: the Dynkin diagram is of type $F_4$ and is shown in Figure 9, with ordered multiplicities $1,1,2,2$. If we remove one simple root, the remaining diagram will represent $4$ kinds of symmetric spaces: $Y_1={\mathrm{SU}}(3,3)/S(U(3)\times U(3))$, $Y_2=\mathbb H^2\times \SL(3,\mathbb C)/{\mathrm{SU}}(3)$, $Y_3=\mathbb H^3\times \SL(3,\mathbb R)/\SO(3)$, and $Y_4=\SO_0(3,5)/\SO(3)\times \SO(5)$. We compute that the dimensions of $Y_i\times \mathbb R$ are $19, 11, 9$ and $16$ respectively. So the splitting rank is $19$ and the gap is $3$.
\emph{Case of $E_7^{-5}/\SO(12)\times\Sp(1)$}: the Dynkin diagram is of type $F_4$ and is shown in Figure 9, with ordered multiplicities $1,1,4,4$. If we remove one simple root, the remaining diagram will represent $4$ kinds of symmetric spaces: $Y_1=\SO^*(12)/U(6)$, $Y_2=\mathbb H^2\times {\mathrm{SU}}^*(6)/\Sp(3)$, $Y_3=\mathbb H^5\times \SL(3,\mathbb R)/\SO(3)$, and $Y_4=\SO_0(3,7)/\SO(3)\times \SO(7)$. We compute that the dimensions of $Y_i\times \mathbb R$ are $31, 17, 11$ and $22$ respectively. So the splitting rank is $31$ and the gap is $9$.
\emph{Case of $E_8^{-24}/E_7\times \Sp(1)$}: the Dynkin diagram is of type $F_4$ and is shown in Figure 9, with ordered multiplicities $1,1,8,8$. If we remove one simple root, the remaining diagram will represent $4$ kinds of symmetric spaces: $Y_1=E_7^{-25}/E_6\times U(1)$, $Y_2=\mathbb H^2\times E_6^{-26}/F_4$, $Y_3=\mathbb H^9\times \SL(3,\mathbb R)/\SO(3)$, and $Y_4=\SO_0(3,11)/\SO(3)\times \SO(11)$. We compute that the dimensions of $Y_i\times \mathbb R$ are $55, 29, 15$ and $34$ respectively. So the splitting rank is $55$ and the gap is $21$.
\emph{Case of $F_4(\mathbb C)/F_4$}: the Dynkin diagram is of type $F_4$ and is shown in Figure 9, with multiplicities $2$ for all simple roots. If we remove one simple root, the remaining diagram will represent $3$ kinds of symmetric spaces: $Y_1=\Sp(3,\mathbb C)/\Sp(3)$, $Y_2=Y_3=\mathbb H^3\times \SL(3,\mathbb C)/{\mathrm{SU}}(3)$, and $Y_4=\SO(7,\mathbb C)/\SO(7)$. We compute that the dimensions of $Y_i\times \mathbb R$ are $22, 12$ and $22$ respectively. So the splitting rank is $22$ and the gap is $10$.
\begin{figure}
\makebox[1pt][c]{
\beginpicture
\setcoordinatesystem units <.9cm,.9cm> point at -.4 1
\setplotarea x from -.4 to 1.4, y from -1.4 to 1.4
\linethickness=.7pt
\putrule from 0 0.09 to 0.85 0.09
\putrule from 0 -0.07 to 0.85 -0.07
\putrule from 0.1 0.01 to 0.9 0.01
\put {$\circ$} at 0 0
\put {$\circ$} at 1 0
\put {$\alpha_1$} at 0 -.3
\put {$\alpha_2$} at 1 -.3
\put {$\Rrightarrow$} at 0.8 0
\endpicture
}
\caption{Dynkin diagram of type $G_2$}
\end{figure}
\emph{Case of $G_2^2/\SO(4)$}: the Dynkin diagram is of type $G_2$ and is shown in Figure 10, with multiplicities $1$ for both simple roots. If we remove one simple root, the remaining diagram will represent the only symmetric space: $\mathbb H^2$. So the splitting rank is $3$ corresponding to the totally geodesic submanifold $\mathbb H^2\times \mathbb R$. Notice this space is of rank two so it does not satisfy the condition of Proposition \ref{prop:gap}.
\emph{Case of $G_2(\mathbb C)/G_2$}: the Dynkin diagram is of type $G_2$ and is shown in Figure 10, with multiplicities $2$ for both simple roots. If we remove one simple root, the remaining diagram will represent the only symmetric space: $\mathbb H^3$. So the splitting rank is $4$ corresponding to the totally geodesic submanifold $\mathbb H^3\times \mathbb R$. Notice this space is of rank two so it does not satisfy the condition of Proposition \ref{prop:gap}.
This verifies all cases, and completes the proofs of both Theorem \ref{thm:srk} and Proposition \ref{prop:gap}.
\bibliographystyle{plain}
\section{Introduction}
The consideration of black holes as objects with important quantum-mechanical properties has been obvious since Hawking's discovery of their semi-classical evaporation \cite{Hawking:1974rv, Hawking:1974sw}. At approximately the same time, though, the issue of baryon number conservation in black holes was also considered \cite{Wheeler:1974, Carter:1974yx, Carr:1976zz}. However, this road has not been pursued much due to the assumption that baryon-number conservation should be broken or at least transcended \cite{Wheeler:1974} by the black hole. This and other related manifestations of the no-hair theorem have not been understood in any semi-classical approach.
A fully quantum proposal has been made by Dvali and Gomez to describe black holes, and other space-time geometries, as the result of certain peculiar configurations of a background Bose-Einstein condensate of gravitons \cite{DvaliGomez-N-Portrait, DvaliGomez, Dvali:2012rt} (see \cite{Flassig:2012re, Casadio:2015xva, Casadio:2015bna, Kuhnel:2014oja, Kuhnel:2015yka, mueckPT, Hofmann, Binetruy:2012kx, Kuhnel:2014gja, Casadio:2014vja, Brustein, Foit:2015wqa} for recent developments). Therein it is indeed possible to resolve all semi-classical paradoxes, in particular the one mentioned before. In this approach, any other species, like a baryon, which is captured by the black hole is strongly bound by the self-sustained bound state of condensed gravitons which make up the black hole. Now, the very mechanism which is responsible for Hawking radiation, namely quantum depletion, is also responsible for the emission of any captured quantum, which is fully released over the lifetime of the black hole.
It is the aim of this paper to review the issue of baryon number conservation as well as the formation of related bound states in this novel corpuscular formulation of black holes. This shall be done in the context of black holes created in the very early Universe, i.e.~primordial black holes \cite{Zel'dovich-Novikov1967, Hawking:1971ei}, but we will also consider briefly the consequences for black holes formed from the astrophysical collapses of very massive stars. Specifically, we wish to consider the possible formation of bound states of the remaining baryons, as first hypothesised in \cite{Carter:1974yx}, thereby quantifying the consequences of baryon conservation in the Bose-Einstein condensate considered in \cite{Dvali:2012rt}.
{\it Primordial black holes\,---\,}
Primordial black holes are black holes formed in the very early Universe \cite{Zel'dovich-Novikov1967, Hawking:1971ei}. They are generally assumed to form when a critical mass over-density crosses the horizon, subsequently creating a black hole of roughly horizon size. These over-densities can theoretically stem from the extreme ends of the initial inflationary spectrum, or can be sourced by exotic early-Universe phenomena such as cosmic string loops or bubble collisions (c.f.~\cite{Green:2014faa} for a recent review).
As these collapse more or less immediately after crossing the horizon, the initial mass $M_{*}$ of the black holes they form is given by the mass of a black hole with Schwarzschild radius equal to the horizon size at the formation time $t_{*}$. This is roughly of the order \cite{Carr:2005bd}:
\vspace{-2mm}
\begin{align}
M_{*}
&\approx
\frac{c^{3}\.t_{*}}{G_{\ensuremath{\mathrm{N}}}}
\approx
10^{12}
\left(
\frac{ t_{*} }{ 10^{-23}\.\ensuremath{\mathrm{s}} }
\right)
{\rm kg}
\; ,
\label{eq:PBHMass}
\end{align}
where $c$ is the speed of light and $G_{\ensuremath{\mathrm{N}}}$ is Newton's constant. It is assumed that primordial black holes do not accrete substantially, so that their evolution is determined more or less only by the Hawking evaporation process that they undergo. Thus, from the evaporation of the black hole we can define a lifetime $t_{\rm life}$, given by \cite{Hawking:1974rv}:
\begin{align}
t_{\rm life}
\approx
10^{71}
\left(
\frac{M_{*}}{M_{\odot}}
\right)^{\!3}
\ensuremath{\mathrm{s}}
\; ,
\label{eq:ClassLifeTime}
\end{align}
where $M_{\odot}$ is the solar mass.\footnote{For extremely low-mass black holes, the lifetime is shortened by the possibility of evaporation through additional particle species, as small black holes have a higher temperature. For black holes near the Planck mass $M \approx 10^{-5}\.\ensuremath{\mathrm{g}}$, effects of the uncertainty principle also come into play, and the lifetime is only given as an expectation value.} Since primordial black holes formed later than $t \approx 10^{-23}\.\ensuremath{\mathrm{s}}$, i.e.~with masses $M > 10^{13}\.{\rm kg}$, have lifetimes longer than the current age of the Universe, primordial black holes are potential candidates for the dark matter in the Universe. For most masses the possible fraction of the dark matter that can be in the form of primordial black holes is constrained by the non-detection of their expected observable effects (see \cite{Green:2014faa, Carr:2009jm, Capela:2013yf} for recent reviews).
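For orientation, a rough numerical sketch using Eqs.~\eqref{eq:PBHMass} and \eqref{eq:ClassLifeTime}: a primordial black hole with initial mass $M_{*} \approx 10^{12}\,{\rm kg}$ has
\begin{align*}
t_{\rm life}
\approx
10^{71}
\left(
\frac{ 10^{12}\,{\rm kg} }{ 2 \times 10^{30}\,{\rm kg} }
\right)^{\!3}
\ensuremath{\mathrm{s}}
\approx
10^{16}\,\ensuremath{\mathrm{s}}
\; ,
\end{align*}
i.e.~it has evaporated by today, whereas $M_{*} \approx 10^{13}\,{\rm kg}$ gives $t_{\rm life} \approx 10^{19}\,\ensuremath{\mathrm{s}}$, well beyond the current age of the Universe of about $4 \times 10^{17}\,\ensuremath{\mathrm{s}}$.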
{\it Bose-Einstein Condensates\,---\,}
It has been suggested by Dvali and Gomez \cite{DvaliGomez-N-Portrait, DvaliGomez} that classical geometries, like black holes and de Sitter Universes, can be understood as the classical limit of quantum critical states of graviton Bose-Einstein condensates. Here the defining length scale $L$ is given by the typical wavelength of the constituent gravitons, and the number $N$ of gravitons in the coherent state is also set by this length scale according to
\begin{align}
N
&\simeq
\left(
\frac{ L }{ L_{\ensuremath{\mathrm{P}}} }
\right)^{\!2}
\; ,
\label{eq:N}
\end{align}
where $L_{\ensuremath{\mathrm{P}}}$ is the Planck length.
In the case of a black hole, the defining length scale is the Schwarzschild radius $R_{H} = 2\.G_{\ensuremath{\mathrm{N}}}\.M / c^{2}$ of the resulting classical object, and hence the number of gravitons is given as
\begin{align}
N
&\simeq
\left(
\frac{2\.G_{\ensuremath{\mathrm{N}}}}{c^{2} L_{\ensuremath{\mathrm{P}}}}\.M
\right)^{\!2}
\; .
\label{eq:NofM}
\end{align}
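To get a feel for the numbers involved (a rough estimate based on Eq.~\eqref{eq:NofM}): for a solar-mass black hole, $R_{H} \approx 3\,{\rm km}$ and $L_{\ensuremath{\mathrm{P}}} \approx 1.6 \times 10^{-35}\,{\rm m}$, so
\begin{align*}
N
\approx
\left(
\frac{ 3 \times 10^{3}\,{\rm m} }{ 1.6 \times 10^{-35}\,{\rm m} }
\right)^{\!2}
\approx
4 \times 10^{76}
\; ,
\end{align*}
while a primordial black hole of mass $M \approx 10^{12}\,{\rm kg}$ (cf.~Eq.~\eqref{eq:PBHMass}) contains of order $N \approx 10^{40}$ gravitons.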
Generically, the number of gravitons in the condensate is very large, so the main process to perturb gravitons from their condensate state is given by graviton-to-graviton scattering.
Since the wavelength is also given by the characteristic length scale, the strength of this interaction is $\alpha = 1 / N$. This means that the condensate is at a self-similar critical point, i.e.~it remains at criticality even as gravitons are added to or subtracted from the condensate. The graviton-to-graviton scattering, however, leads to a depletion of the soft ground-state gravitons of characteristic energy $\epsilon = 1 / \sqrt{N\,}$ (in Planck units, i.e.~$\epsilon = \hslash\.c / ( L_{\ensuremath{\mathrm{P}}}\.\sqrt{N\,} )$) according to
\begin{subequations}
\begin{align}
\dot{N}
&\simeq
-\,
N ( N - 1 )\.
\alpha^{2}\.
\frac{ \epsilon }{ \hslash }
\simeq
-\,
\frac{c}{L_{\ensuremath{\mathrm{P}}}\.\sqrt{N\,}}
\; ,
\label{eq:dot-N(t)}
\end{align}
where the first factor $N ( N - 1 )$ is a combinatorial one, which is cancelled by $\alpha^{2} = 1 / N^{2}$ (coming from the two relevant vertices). Here and in the following we shall assume $N \gg 1$, and in turn ignore higher-order terms in $1 / N$. If the condensate in addition contains a number $N_{\ensuremath{\mathrm{X}}} \ll N$ of X-particles, possibly of a conserved X-charge, these particles will also deplete from the condensate via scattering off gravitons, and their depletion will be given by (c.f.~Eq.~(10) in Ref.~\cite{Dvali:2012rt})
\begin{align}
\dot{N}_{\ensuremath{\mathrm{X}}}
&\simeq
-\,
N N_{\ensuremath{\mathrm{X}}}\.
\alpha^{2}\.
\frac{ \epsilon }{ \hslash }
\simeq
-\,
\frac{c}{L_{\ensuremath{\mathrm{P}}}\.\sqrt{N\,}}\frac{N_{\ensuremath{\mathrm{X}}}}{N}
\; .
\label{eq:dot-NX(t)}
\end{align}
\end{subequations}
Note that for $N_{\ensuremath{\mathrm{X}}} \ll N$ the depletion rate \eqref{eq:dot-NX(t)} will be much smaller than that for the gravitons, given by \eqref{eq:dot-N(t)}. In the following we will take this conserved particle species to be baryonic; however, the general formulae for the number and densities of the particles are valid also for other species carrying a conserved charge.
{\it Depletion and formation of new bound state\,---\,}
Let us consider a system with $N_{B}$ baryons surrounded by $N$ gravitons in a quantum critical condensate state. We assume that initially $N \gg N_{B}$, and work to leading order in $1 / N$.
The equation set spanned by Eqs.~(\ref{eq:dot-N(t)},b) is analytically soluble and the solutions read (c.f.~Eqs.~(13) in Ref.~\cite{Dvali:2012rt})
\begin{subequations}
\begin{align}
N( t )
&\simeq
N_{*}
\left(
1
-
\frac{ 3 }{ 2 }
\frac{ t - t_{*} }{ N_{*}^{3 / 2} }
\right)^{\!2 / 3}
\; ,
\label{eq:N(t)}
\\[2mm]
N_{B}( t )
&\simeq
N_{B*}
\left(
1
-
\frac{ 3 }{ 2 }
\frac{ t - t_{*} }{ N_{*}^{3 / 2} }
\right)^{\!2 / 3}
\; ,
\label{eq:NB(t)}
\end{align}
\end{subequations}
with $N_{*} = N( t_{*} )$ and $N_{B*} = N_{B}( t_{*} )$ being the number of gravitons and baryons at the formation time $t_{*}$. Approximately, the total lifetime is found to be $t_{\rm life} \approx \frac{2}{3}\,N_{*}^{3 / 2}$ (in Planck units), which corresponds exactly to the classical lifetime given by Eq.~(\ref{eq:ClassLifeTime}). This lifetime is at first glance the same for both the graviton condensate and the baryons inside it.
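As a rough numerical cross-check (not part of the original analysis), the following Python sketch evaluates $N$ from Eq.~\eqref{eq:NofM} and the depletion lifetime $t_{\rm life} \approx \tfrac{2}{3} N^{3/2}\, t_{\ensuremath{\mathrm{P}}}$ for a solar-mass object; the SI constants are standard values inserted purely for illustration, and agreement with Eq.~\eqref{eq:ClassLifeTime} is only expected at the order-of-magnitude level.
\begin{verbatim}
import math

# Physical constants (SI units); standard values inserted for illustration.
G_N, c, hbar = 6.674e-11, 2.998e8, 1.055e-34
M_sun = 1.989e30                         # kg

L_P = math.sqrt(hbar * G_N / c**3)       # Planck length
t_P = L_P / c                            # Planck time

def graviton_number(M):
    """N ~ (2 G_N M / (c^2 L_P))^2, cf. Eq. (NofM)."""
    return (2.0 * G_N * M / (c**2 * L_P))**2

def lifetime(M):
    """t_life ~ (2/3) N^{3/2} t_P from dN/dt ~ -c/(L_P sqrt(N))."""
    return (2.0 / 3.0) * graviton_number(M)**1.5 * t_P

M = M_sun
print(f"N      ~ {graviton_number(M):.2e}")
print(f"t_life ~ {lifetime(M):.2e} s   (compare 10^71 (M/M_sun)^3 s)")
\end{verbatim}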
Though quantum depletion causes the baryon number inside the condensate to decrease over time, the volume which the condensate spans decreases much faster, causing the density of the baryons to increase. Since the baryons interact strongly when they are close together, it is not unnatural to assume that when the baryons in the condensate reach the QCD-scale density, their mutual strong interactions will become comparable to their interactions with the gravitons, causing a phase transition in the baryon-graviton ensemble. More precisely, at a baryon number density of approximately $n_{\ensuremath{\mathrm{c}}} \equiv 1\.{\rm baryon} / {\rm fm}^{3}$ we postulate that the state of the condensate changes into a stable state, no longer depleting. We dub the time when this possible phase transition occurs $t_{\ensuremath{\mathrm{c}}}$.\footnote{Whether this exactly constitutes a stable state for the gravitons, and not only for the baryons, is very unclear, as this is a physical situation very far from known theory or experimental reach. However, the definition of this time, or a nearby time, as a transitioning time for the ensemble is natural.} This happens when
\begin{align}
\frac{ N_{B}( t_{\ensuremath{\mathrm{c}}} ) }{ \frac{ 4 \pi }{ 3 } R_{S}( t_{\ensuremath{\mathrm{c}}} )^{3} }
&=
n_{\ensuremath{\mathrm{c}}}
\; ,
\label{eq:nc}
\end{align}
yielding
\begin{align}
t_{\ensuremath{\mathrm{c}}}
&\simeq
t_{*}
+
\frac{2}{3}\.N_{*}^{3/2}
\left(
1
-
\frac{27\.N_{B*}^{3}}
{64\.\pi^3 N_{*}^{9/2} n_{\ensuremath{\mathrm{c}}}^3}
\right)
.
\label{eq:tc}
\end{align}
This time is given only in terms of the initial numbers of gravitons and baryons, i.e.~it is set at the time of formation of the black hole. We note that for sufficiently large initial baryon-to-graviton ratios this time becomes negative. We interpret this to mean that for states that would otherwise form black holes, but whose initial baryon number is too high, a black-hole state will not form; the state will be dominated by the strong force instead. That is, the second term in the brackets of Eq.~\eqref{eq:tc} must not exceed one in order to ensure $t_{\ensuremath{\mathrm{c}}} > 0$, i.e.
\vspace{-2mm}
\begin{subequations}
\begin{align}
\frac{27\.N_{B*}^{3}}
{64\.\pi^3 N_{*}^{9/2} n_{\ensuremath{\mathrm{c}}}^3}
&\leq
1
\; ,
\label{eq:aa}
\end{align}
and thus allow for possible black hole formation.
Also, as an absolute bound, at $t = t_{\ensuremath{\mathrm{c}}}$ there should be at least one baryon left in the condensate for the new state reached to have any sensible meaning, i.e.
\begin{align}
\frac{9\.N_{B*}^{3}}
{16\.\pi^{2}N_{*}^{3}\.n_{\ensuremath{\mathrm{c}}}^{2}}
&\geq
1
\; .
\label{eq:bb}
\end{align}
\end{subequations}
Of course, in practice the latter bound will be much stricter. Below we investigate the consequences of these bounds and possible formation of a new stable state in the case of primordial black holes, and black holes formed from more astrophysical scenarios.
Both \eqref{eq:aa} and \eqref{eq:bb} set limits on the earliest as well as on the latest possible time of formation of the primordial condensate. In order to investigate this, let us assume radiation domination during the relevant time interval. Then the comoving Hubble radius is given by $R_{H}( t ) = 2\.c\,\sqrt{t_{0}\,} \sqrt{t\,}$, where $t_{0}$ equals the age of the Universe. To get a rough estimate of the total baryon number contained in a Hubble patch at the time $t_{*}$, we can use today's value of the baryon number density, $n_{B, 0} \approx 0.2 / \ensuremath{\mathrm{m}}^{3}$, and estimate $N_{B*}$ via
\begin{subequations}
\begin{align}
N_{B*}
&\simeq
n_{B, 0}\,\frac{ 4\.\pi }{ 3 } R_{H}( t_{*} )^{3}
\; .
\label{eq:NB*-estimate}
\end{align}
A horizon-sized primordial BEC at $t = t_{*}$ has approximately the mass given by Eq.~\eqref{eq:PBHMass} and contains
\begin{align}
N_{*}
&\simeq
4\.\left(\frac{M( t_{*} )}{M_{P}}\right)^{2}
\approx
10^{86}\,\left( t_{*} / \ensuremath{\mathrm{s}} \right)^{2}
\label{eq:N*-estimate}
\end{align}
\end{subequations}
gravitons. Using Eqs.~(\ref{eq:NB*-estimate},b), we find from Eqs.~(\ref{eq:aa},b) that the time of condensate formation has to be within the interval
\begin{align}
10^{-12}\.\ensuremath{\mathrm{s}}
\lesssim
t_{*}
\lesssim
10^{12}\.\ensuremath{\mathrm{s}}
\label{eq:t*-interval}
\end{align}
to lead to a possible non-trivial end state where gravity is balanced by the strong force. Let us stress again that in the Bose-Einstein condensate framework baryon number is conserved in any (would-be) black hole state. Hence, because at times earlier than $ 10^{-12}\.\ensuremath{\mathrm{s}}$ the baryon density is so high that the strong force counteracts the gravitational one, no black-hole-like bound state will form. This would affect the spectrum of any model of primordial black hole formation by removing the majority of the low mass black holes from it.
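To see how the interval quoted above arises, the following sketch (again not part of the original analysis) scans candidate formation times and applies the conditions \eqref{eq:aa} and \eqref{eq:bb} together with the estimates \eqref{eq:NB*-estimate} and \eqref{eq:N*-estimate}; it assumes that $n_{\ensuremath{\mathrm{c}}}$ enters those conditions in Planck units ($n_{\ensuremath{\mathrm{c}}} L_{\ensuremath{\mathrm{P}}}^{3}$), which reproduces the quoted interval, and the values of $t_{0}$ and $n_{B,0}$ are rough illustrative inputs.
\begin{verbatim}
import numpy as np

c, t_0 = 2.998e8, 4.35e17        # m/s ; age of the Universe in s (illustrative)
n_B0   = 0.2                     # baryon number density today, m^-3
L_P    = 1.616e-35               # Planck length, m
n_c    = 1.0e45 * L_P**3         # 1 baryon/fm^3 expressed in Planck units
                                 # (assumption: n_c enters Eqs. (aa),(bb) this way)

t = np.logspace(-20, 16, 4000)                       # candidate formation times, s
logNB = np.log10(n_B0 * 4*np.pi/3) + 3*np.log10(2*c*np.sqrt(t_0*t))  # Eq. (NB*)
logN  = 86 + 2*np.log10(t)                                           # Eq. (N*)

# compare log10 of both sides of the conditions to avoid float overflow
ok_aa = np.log10(27) + 3*logNB <= np.log10(64*np.pi**3) + 4.5*logN + 3*np.log10(n_c)
ok_bb = np.log10(9)  + 3*logNB >= np.log10(16*np.pi**2) + 3.0*logN + 2*np.log10(n_c)

window = t[ok_aa & ok_bb]
print(f"allowed t_*: roughly {window.min():.0e} s to {window.max():.0e} s")
\end{verbatim}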
Since the time at which the stable state forms increases very rapidly with decreasing initial baryon density, essentially the only primordial condensates produced during this interval that form the new hypothesised bound state are those lying on the (lower) boundary, which reach it more or less immediately. These have
\vspace{-2mm}
\begin{align}
N_{B}
&\simeq
2 \cdot 10^{35}
\quad
\text{and}
\quad
N
\simeq
6 \cdot 10^{62}
\label{eq:BEC-PBH-numbers}
\end{align}
baryons and gravitons, respectively, leading to a mass of
\begin{align}
M
&\simeq
3 \cdot 10^{23}
\.{\rm kg}
\; .
\label{eq:BEC-PBH-mass}
\end{align}
This is roughly an order of magnitude below the mass of the Earth. Since we do not expect primordial black holes or the new stable-state condensate to form prior to these particular ones, we expect a boost in the formation of objects of exactly this size. Thus this framework provides a definite prediction for the remnant mass, which can then be compared to the non-detection limits for primordial black holes of this size. The tightest bounds in this mass regime stem mainly from galactic microlensing studies such as the EROS \cite{Tisserand:2006zx} and MACHO \cite{Alcock:2001} surveys, or from recent inspection of Kepler data \cite{Griest:2013aaa}.
{\it Astrophysically formed black holes\,---\,}
For a black hole formed as the end state of a very massive star, the limits given by Eqs.~(\ref{eq:aa},b) will not constrain the formation time, but rather set a limit on the initial density and mass of the collapsing star. Because the main part of the mass contained in the late stages of a very massive star is composed of baryons, we can find the initial number of baryons as:
\begin{align}
N_{B*}
&\approx
\frac{ M }{ M_{p} }
\approx
10^{57}\.
\frac{ M }{ M_{\odot} }
\; ,
\end{align}
where $M_{p}$ is the proton mass, while the number of gravitons is still set by the mass of the star according to Eq.~\eqref{eq:NofM}. As the size of the black hole, and hence its Schwarzschild radius, is given by the same initial mass of the late-stage star, we can simply compute the number of baryons per cubic femtometer as a function of the mass. If it exceeds one, the star will presumably not collapse to a black hole, but instead form something else, possibly something like a quark star \cite{Itoh:1970uw}.
The limit for this becomes:
\begin{align}
M
&\geq
\frac{c^3}{4}
\sqrt{\frac{3 \cdot 10^{57}}{2\pi\.M_{\odot}\.n_{\ensuremath{\mathrm{c}}}}\,}\;
G_{\ensuremath{\mathrm{N}}}^{-3 / 2}
\approx
3\.M_{\odot}
\; ,
\end{align}
which is roughly twice the Chandrasekhar limit \cite{Chandrasekhar:1931ih}, the usual bound relevant for white dwarfs, but lies around the Tolman-Oppenheimer-Volkoff limit \cite{TOVlimit} for black hole formation from neutron stars. Thus the lightest candidates for black hole formation may form other exotic states instead, such as quark stars. However, since the time it takes for the QCD condensate bound state to form, Eq.~\eqref{eq:tc}, is very sensitive to slight changes in the mass, only the stars that are very close to the bound might form this exotic state. Here, unless there is some mechanism to strip the lighter remnants of baryons, there is no mechanism for overproduction of states on the bound; thus these will be very rare.
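The bound can be reproduced numerically from the requirement that the baryon density inside the would-be Schwarzschild volume not exceed $n_{\ensuremath{\mathrm{c}}}$; the short sketch below is only an illustration using standard SI constants, not the derivation used above.
\begin{verbatim}
import math

G_N, c  = 6.674e-11, 2.998e8       # SI units
m_p     = 1.673e-27                # proton mass, kg
M_sun   = 1.989e30                 # kg
n_c     = 1.0e45                   # 1 baryon per fm^3, in m^-3

# Require (M/m_p) / ((4*pi/3) R_S^3) <= n_c with R_S = 2 G_N M / c^2,
# i.e. the baryon density inside the would-be horizon stays below n_c.
M_min = math.sqrt(3 * c**6 / (32 * math.pi * G_N**3 * m_p * n_c))
print(f"M_min ~ {M_min:.2e} kg ~ {M_min / M_sun:.1f} M_sun")
\end{verbatim}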
{\it Summary and Outlook\,---\,}
In this work we have investigated the issue of baryon number conservation as well as the formation of related bound states in the corpuscular picture of black holes as a concretisation of the work done in \cite{Dvali:2012rt}. Here this has been done in the context of the black holes created in the very early Universe, i.e.~primordial black holes, but we also considered briefly the consequences for the ones formed from astrophysical collapses of very massive stars.
Regarding the former, we found that the corpuscular framework leads to distinct predictions both for the formation time, which is shown to be possible only later than $10^{-12}\.\ensuremath{\mathrm{s}}$ after the Big Bang, and for the end state. At times earlier than this the baryon density, which is preserved in the graviton condensate framework, is so high that the strong force dwarfs gravity and no black-hole-like bound state will form. For black holes formed before $10^{12}\.\ensuremath{\mathrm{s}}$ after the Big Bang the end state we hypothesise is not a totally evaporated state, but rather a bound state of baryons and gravitons, where gravity is counteracted by the strong force. However, only the black holes formed very close to $10^{-12}\.\ensuremath{\mathrm{s}}$ will have reached the bound state by today; i.e.~we expect the spectrum of primordial black holes of mass smaller than $\sim 3 \times 10^{23}\.{\rm kg}$ to be essentially zero, and an enhancement of the spectrum for masses just above this bound with respect to the predictions of the standard paradigm. This means that even inflationary theories which predict peaks of observationally disallowed low-mass primordial black holes might be viable within this framework. These statements are rather general, and apply to basically any model of primordial black-hole formation. Also, the possible enhancement of the spectrum occurs at values of the primordial black hole mass for which there are only weak observational constraints \cite{Carr:2009jm, Capela:2013yf}.
Concerning astrophysical black holes, we have derived a bound on the minimal mass of such objects and found that it lies just above the Chandrasekhar limit.
Many more things need to be investigated{\,---\,}in particular, the dynamics and structure of the graviton-baryon bound state, the exact nature of the phase transition\./\.cross-over when exceeding the critical baryon density, and the determination of the latter. Also, the formation mechanisms of primordial black holes should be re-investigated in the corpuscular description of space-time, with emphasis on the effect of possible quantum corrections to the mass spectrum. A more precise investigation will most likely result in shifts in the predicted remnant masses and bounds; however, the general outline given here should still hold. Furthermore, keeping in mind that all of these objects contribute to the dark matter density, one should also study more exotic states which involve a much larger baryon-to-graviton ratio than the ones investigated in this work.\\% and a unabridged investigation of those objects might unveil more insights into it.
\acknowledgments
F.~K.~acknowledges support from the Swedish Research Council (VR) through the Oskar Klein Centre, and thanks the Institute of Theoretical Astrophysics at the University of Oslo, where part of this work has been performed. We thank Gia Dvali for valuable comments.
\section{Introduction}
A common problem in analysis of experiments or of Monte Carlo
simulations is fitting a parameterized model to the average over
a number of samples of correlated data values. In particular,
lattice QCD calculations typically require fitting operator
correlators, which are a function of distance between the
operators, to sums of exponentials with unknown amplitudes and
masses.
If the number of
samples is not infinite, estimates of the variance of the parameters
(``error bars'') and of the goodness of fit are affected.
This can be viewed as a generalization of the well known rule
``replace $N$ by $N-1$ in the denominator'' in calculating
the error on an average to the case where the error is on
a parameter estimated by a fit to correlated data points.
We calculate
approximate corrections to the variance of the parameters (see Fig.~\ref{FIG_sim1}
for a graphical example)
for estimates made in the standard way from derivatives
of the parameters' probability distribution as well as from jackknife
and bootstrap estimates. (The distribution of parameter estimates is not
exactly Gaussian, so the variance of the parameters is not quite the
whole story.)
Without compensating for sample size effects, {\bf none} of these
methods give unbiased estimates of the parameters' variance.
Many numerical simulation programs or experiments involve two or more
stages of fitting, where the parameters resulting from the first stage
are the data input to the second stage. For example, in computations
of meson decay constants in lattice QCD the first stage involves
fitting a correlator of meson operators to exponentials and extracting
the mass and amplitude, and the second stage involves fitting these
masses and amplitudes to functions of the quark masses and lattice
spacings to allow extrapolation to the chiral and continuum limits.
(See for example Refs.~\cite{lightlight} and \cite{heavylight}.)
For example, in Ref.~\cite{lightlight} about 600 hadron
masses and amplitudes are computed in the first stage of fitting,
and these 600 numbers and their (co)variances are in turn the data
for the fitting in the second stage.
While an unbiased estimate of the variance of the parameters
is always welcome,
it is particularly important in this case since many parameters
in the first stage of fitting
are used as inputs (data) in the second stage of fitting, and if their
errors are systematically too large or too small the apparent
goodness of fit
in the second stage of fitting will be very good or very bad
respectively.
\section{The problem}
We consider a problem where we need to fit a function of $P$ parameters
to an average of $N$ samples, where each sample consists of $D$ data
points. We use subscript indices to label the component of
the data vectors and superscript indices to label the samples.
Thus $x_i^a$ is the $i$'th component of the $a$'th sample, with
$0\le i<D$ and $0 \le a<N$. Each sample is assumed to be normally distributed, but the
different components of the $D$ dimensional sample are generally
correlated. Averages over samples will be denoted by
overbars. We will need to imagine
averaging over many trials of
the experiment, and we will use angle brackets to denote such
an average: $\big\langle \OL{x_i} \big\rangle$.
So, for example
\begin{eqnarray}
\la{EQ_oldef}
\OL{ x_i} &=& \OON\sum_a x_i^a \nonumber\\
\OL{x_i x_j} &=& \OON\sum_a x_i^a x_j^a\nonumber\\
\OL{x_i} \OL{x_j} &=& \OON\sum_a x_i^a \OON \sum_b x_j^b\\
\nonumber\end{eqnarray}
The covariance matrix (``of the mean'') for one trial is
\begin{equation} \label{EQ_covarmat_def}
C_{ij} = \OON \LP \OL{x_i x_j} - \OL{x_i}\ \OL{x_j} \RP
= \OOX{N^2} \sum_a x_i^a x_j^a - \OOX{N^3} \LP \sum_a x_i^a \RP \LP
\sum_b x_j^b \RP \end{equation}
This covariance matrix will fluctuate around the true covariance matrix,
obtainable only in the limit $N \rightarrow\infty$.
Note we use $\OON$ instead of $\OOX{N-1}$ in normalizing $C_{ij}$.
For our purposes, the difference
between these normalizations is best included with the other order $\OON$
effects to be discussed.
Fit parameters $p_\alpha$, with $0\le\alpha<P$, are obtained by minimizing
\begin{equation} \chi^2 = \LP \OL{x_i} -x_i^f(p_\alpha) \RP \LP C^{-1} \RP _{ij}
\LP \OL{x_j} -x_j^f(p_\alpha) \RP \la{EQ_chisq_def}\end{equation}
where $x_i^f(p_\alpha)$ is the value of $\OL{x_i}$ predicted by the model.
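For readers who prefer code to formulas, the following sketch illustrates Eqs.~\ref{EQ_covarmat_def} and \ref{EQ_chisq_def} on a toy single-exponential model; the model, sample size, and random data are placeholders chosen here purely for illustration, not the lattice analyses of Refs.~\cite{lightlight,heavylight}.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def covariance_of_mean(x):
    """C_ij = (1/N)(<x_i x_j> - <x_i><x_j>), 1/N (not 1/(N-1)) normalization
    as in Eq. (covarmat_def).  x has shape (N, D)."""
    N = x.shape[0]
    xbar = x.mean(axis=0)
    return (x.T @ x / N - np.outer(xbar, xbar)) / N

def chisq(p, xbar, Cinv, model, tvals):
    r = xbar - model(p, tvals)
    return r @ Cinv @ r

# toy model: single exponential A * exp(-m t)  (placeholder)
model = lambda p, t: p[0] * np.exp(-p[1] * t)

rng = np.random.default_rng(0)
tvals = np.arange(10)
truth = model([1.5, 0.3], tvals)
samples = truth + 0.05 * rng.standard_normal((200, tvals.size))   # N=200, D=10

xbar = samples.mean(axis=0)
Cinv = np.linalg.inv(covariance_of_mean(samples))
fit = minimize(chisq, x0=[1.0, 0.5], args=(xbar, Cinv, model, tvals))
print("parameters:", fit.x, "  chi^2:", fit.fun)
\end{verbatim}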
As pointed out in Ref.~\cite{MICHAEL1}, since we are stuck with
estimates of the covariance matrix and the $\OL{x_i}$
obtained from the same samples, they are correlated.
First change to a convenient coordinate system (alas, available
only in theory, not in practice). For the moment we assume that
our fit model is good, so that the $x_i^f(p_\alpha)$ can be adjusted
to equal the true averages of the $x_i$. Shift the coordinates so
that $\langle \OL{x_i} \rangle$ is zero. Then rotate the coordinates
so that the {\bf true} covariance matrix is diagonal, and rescale
them so that $\langle (x_i^a)^2 \rangle = 1$. (So far, we have
followed Ref.~\cite{MICHAEL1}.)
We now have $\langle x_i^a x_j^b \rangle = \delta_{ij} \delta^{ab} $,
and the true covariance matrix is the unit matrix.
Make a further rotation so that the changes in the $x_i^f(p_\alpha)$
as the $p_\alpha$ vary around their true values are in the first
$P$ components, and so that the changes in the $x_i^f(p_\alpha)$
as the first parameter $p_0$ varies are in the first component.
Now we can rescale $p_0$ so that $\PAR{p_0}{x_0}=1$, which simply
means that $p_0$ is the average $\OL{x_0}$. In doing this we have
assumed that $p_0$ is linear enough in the $\OL{x_i}$ or that the
fluctuations in the $\OL{x_i}$ are small enough.
In this basis, write the covariance matrix (from the data in this experiment)
and its inverse in blocks,
\begin{eqnarray} \la{EQ_covmat_blocks}
C &\equiv& \LP \begin{array}{cc} U & V \\ V^T & W \end{array} \RP \nonumber\\
C^{-1} &\equiv& \LP \begin{array}{cc} A & B \\ B^T & E \end{array} \RP \\
\nonumber\end{eqnarray}
where the matrices $U$ and $A$ are $P$ by $P$, $V$ and $B$ are $P$ by $D-P$ and $W$ and $E$ are $D-P$ by $D-P$.
Now $\chi^2$ is given by
\begin{equation} \chi^2 = \LP \OL{x_i} - x_i^{f} \RP \LP C^{-1} \RP _{ij} \LP \OL{x_j} - x_j^{f} \RP \label{chisq_def2}\ \ \ ,\end{equation}
where only the first $P$ components of $x_i^f$ are nonzero.
For example, with two parameters
\begin{equation} \chi^2 = \LP \OL{x_1} - x_1^{f}, \OL{x_2} - x_2^{f}, \OL{x_3}, \ldots \RP
\LP \begin{array}{cc} A & B \\ B^T & E \end{array} \RP
\LP \begin{array}{c} \OL{x_1} - x_1^{f} \\ \OL{x_2} - x_2^{f} \\ \OL{x_3} \\ \ldots \end{array} \RP \end{equation}
The $x_i^f$ are found from minimizing $\chi^2$:
\begin{equation} 0 = \frac{\partial\chi^2}{\partial x_\is^f}
= 2 A_{\is\js}\LP \OL{x_\js}-x_\js^f \RP + 2 B_{\is\jp} \OL{x_\jp} \end{equation}
where here and in many subsequent equations starred indices run from $0$ to $P-1$ and
primed indices from $P$ to $D-1$, and the factor of two comes from differentiating
with respect to the $x_i^f$ on both sides of Eq.~\ref{chisq_def2}
and using the fact that $C^{-1}$ is symmetric.
This is solved by
\begin{equation} \label{EQ_xf_solution} x_\is^f = \OL{x_\is} + A_{\is \js}^{-1}B_{ \js \kp} \OL{x_\kp} \end{equation}
From $C\,C^{-1} = {\bf 1}$ and Eq.~\ref{EQ_covmat_blocks},
\begin{eqnarray}
\label{EQ_CCinv_eqs}
UA+VB^T &=& {\bf 1} \nonumber\\
UB+VE &=& {\bf 0} \nonumber\\
V^TA+WB^T &=& {\bf 0} \nonumber\\
V^TB+WE &=& {\bf 1} \\
\nonumber\end{eqnarray}
Using the third of Eqs.~\ref{EQ_CCinv_eqs}, remembering that $A$ and $W$ are symmetric, we get
an alternate to Eq.~\ref{EQ_xf_solution}:
\begin{eqnarray} \label{EQ_xf_solution_2} B &=& -AVW^{-1} \nonumber\\
A^{-1}B &=& -VW^{-1} \nonumber\\
x_\is^f &=& \OL{x_\is} - V_{\is\jp}W_{\jp\kp}^{-1}\OL{x_\kp} \\
\nonumber\end{eqnarray}
From this equation we see that, in this basis, parameter number zero, $x_0^f$, does
not depend on the other components among the first $P$, $\OL{x_\is}$ with $1\le\is < P$.
Thus the distribution of parameters depends only
on the combination $D-P \equiv d$.
Similarly for $\chi^2$:
\begin{eqnarray} \chi^2 &=& \LP -\OL{x} B^T A^{-1}, \OL{x} \RP \LP \begin{array}{cc} A & B \\ B^T & E \end{array} \RP
\LP \begin{array}{c} -A^{-1}B \OL{x} \\ \OL{x} \end{array} \RP \\
&=& \OL{x_\ip} \LP -B^TA^{-1}B+E \RP \OL{x_\jp}
\nonumber\end{eqnarray}
Now insert $W^{-1}W = {\bf 1}$ and use the third and fourth equations in \ref{EQ_CCinv_eqs}
\begin{eqnarray} \label{EQ_chisq_1}
\chi^2 &=& \OL{x} W^{-1} \LP -WB^T A^{-1}B+WE \RP \OL{x} \nonumber\\
&=& \OL{x} W^{-1} \LP V^T A A^{-1}B+WE \RP \OL{x} \nonumber\\
&=& \OL{x} W^{-1} \LP V^T B+WE \RP \OL{x} \nonumber\\
&=& \OL{x_\ip} W_{\ip\jp}^{-1} \OL{x_\jp} \\
\nonumber\end{eqnarray}
But $W$ is just the covariance matrix for the last $D-P$ components of $\OL{x}$ in this basis,
so the statistical properties of $\chi^2$ are exactly the same as a $D-P$ dimensional problem
with no fit parameters, and the distribution of $\chi^2$, as expected, depends
only on the number of degrees of freedom, $\dof \equiv D-P$.
We note that the distribution of $\chi^2$ (more properly, $T^2$) is known.
Since it is important here and closely related to the estimates of
parameter errors, we quote the result in Appendix I.
\begin{figure}[tbh]
\epsfxsize=5.5in
\epsfbox[0 0 4096 4096]{parvar_nolines.ps}
\rule{0.0in}{0.0in}\vspace{1.0in}\\
\caption{
\label{FIG_sim1}
Number of samples times the variance (square of the error) of a parameter in fitting correlated data,
and averages over
trials of several methods for estimating this variance.
The horizontal line indicates the asymptotic value, $variance(x_0^f) = 1/N$.
$N$ is the number of samples; the meaning of
the plot symbols is described in the text.
}
\end{figure}
\section{Numerical example}
To illustrate the effects of sample size, we begin with a numerical example,
using the basis described above.
In this example, $N$ Gaussian distributed random data vectors with $D=25$ were generated.
The data was fit with $P=5$ parameters, which are just the first $P$ components
of the average data vector. This was repeated for many trials.
The black octagons in Fig.~\ref{FIG_sim1} show $N$ times the
variance (over trials) of one of the parameters, where the asymptotic value is one.
These black octagons are the correct answer for the variance of the parameter, and this is
the variance that we wish to estimate from our experiment, where
we only have one trial to work with. We see that for finite $N$ the parameters
fluctuate by an amount larger than the asymptotic value.
We also show the average over trials of the variance estimated from derivatives
of the parameter probability, the average over trials of the variance
estimated from a single elimination jackknife analysis, and the average
from a bootstrap analysis. For the jackknife and bootstrap, the plot contains
average variances both for the case where the full sample covariance matrix
was used in each resampling
and where a new covariance matrix was made
using the data in each jackknife or bootstrap sample.
Red squares are average variances from the
usual ``derivative'' method. Blue diamonds are from a single elimination
jackknife analysis where a new covariance matrix was made for each
jackknife sample. The two blue bursts (on top of
the red squares) use the full sample covariance matrix in each jackknife resample.
Similarly, the green fancy plusses are from a bootstrap
analysis, using the covariance matrix from the original sample.
The green crosses are estimates from a bootstrap analysis where a new
covariance matrix was made for each bootstrap sample.
We see that the correct answer deviates from the asymptotic value
for finite $N$, and that the various methods for estimating this variance
produce biased estimates of the variance of the parameter.
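The toy experiment just described is easy to reproduce; a minimal sketch is given below, assuming the idealized basis of the previous section so that the fit reduces to Eq.~\ref{EQ_xf_solution_2}, with its output to be compared with the expansion Eq.~\ref{EQ_par_var_2} derived below (sample sizes are arbitrary illustrative choices).
\begin{verbatim}
import numpy as np

def fit_parameter0(x, P):
    """Parameter 0 in the diagonal basis, Eq. (xf_solution_2):
    x_0^f = xbar_0 - V_{0 k'} (W^{-1})_{k' l'} xbar_{l'}."""
    N = x.shape[0]
    xbar = x.mean(axis=0)
    C = (x.T @ x / N - np.outer(xbar, xbar)) / N      # Eq. (covarmat_def)
    V = C[0, P:]                                      # row 0 of off-diagonal block
    W = C[P:, P:]
    return xbar[0] - V @ np.linalg.solve(W, xbar[P:])

rng = np.random.default_rng(1)
D, P, N, trials = 25, 5, 100, 20000
p0 = np.array([fit_parameter0(rng.standard_normal((N, D)), P)
               for _ in range(trials)])
d = D - P
print("N * var(p0)              =", N * p0.var())
print("1 + d/N + d(d+2)/N^2     =", 1 + d/N + d*(d + 2)/N**2)   # Eq. (par_var_2)
\end{verbatim}
The two printed numbers should agree approximately, up to higher orders in $1/N$ and the statistical error of the finite number of trials.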
\section{Large $N$ expansion}
Most of the effects shown in Fig.~\ref{FIG_sim1} can be understood
analytically.
We can expand the covariance matrix in each trial around its true
value,
\begin{equation} C_{ij} = \OON \LB \delta_{ij} + \LP \OL{x_i x_j} - \delta_{ij} -
\OL{x_i}\OL{x_j} \RP \RB \end{equation}
Here the term in parentheses has fluctuations of order $1/\sqrt{N}$
and an average of order $\OON$.
Thus its square will also have expectation value $\approx \OON$.
Then
\begin{eqnarray} C_{ij}^{-1} &=& N \delta_{ij} \nonumber\\
&-& N \LP \OL{x_i x_j} - \delta_{ij} - \OL{x_i}\OL{x_j} \RP \nonumber\\
&+& N \LP \OL{x_i x_k} - \delta_{ik} - \OL{x_i}\OL{x_k} \RP
\LP \OL{x_k x_j} - \delta_{kj} - \OL{x_k}\OL{x_j} \RP \nonumber\\
&+& \ldots \\
\nonumber\end{eqnarray}
Using the fact that integrals of polynomials weighted by Gaussians are found
by pairing the $x_i^a$ in all possible ways, or making all possible
contractions, we can develop rules for calculating these expectation
values. We will use parentheses to list the pairings. For example,
with $(12)$ indicating that the first and second $x$ are paired,
\begin{eqnarray}
&& \LL \OL{x_i} \OL{x_i x_j} \OL{x_j}\RR \nonumber\\
&=& \OL{x_i} \OL{x_i x_j} \OL{x_j} (12)(34) \nonumber\\
&+& \OL{x_i} \OL{x_i x_j} \OL{x_j} (13)(24) \nonumber\\
&+& \OL{x_i} \OL{x_i x_j} \OL{x_j} (14)(23) \\ \nonumber\end{eqnarray}
Using
\begin{displaymath}\label{EQ_basic_delta}\langle x_i^a x_j^b \rangle = \delta_{ij} \delta^{ab} \end{displaymath}
and
\begin{displaymath} \label{EQ_basic_bar}\OL{x_i}= \OON\sum_a x_i^a \end{displaymath}
we get the Feynman rules for contractions of barred quantities.
\begin{enumerate}
\item{Each contraction gives a $\delta_{ij}$ for the lower indices it connects.}
\item{Each bar gives a $\OON$, whether it covers a single $x$ or two, $\OL{x_i}$ or
$\OL{x_i x_j}$ --- see Eq.\ref{EQ_oldef}. }
\item{Each continuous line made of overbars and contraction symbols gives a factor
of $N$. This is from the $\sum_{ab\ldots} \delta^{ab}\delta^{bc}\ldots$, which has
$N$ nonzero terms. For example, $\OL{x_i}\OL{x_j x_k}\OL{x_l} (12)(34)$ is one continuous
line, while $\OL{x_i}\OL{x_j x_k}\OL{x_l} (14)(23)$ is two lines (one is a loop). This
results in every loop giving an extra factor of $N$ relative to other contractions with
the same number of fields. }
\end{enumerate}
Since an open line (not a loop) with $C$ contractions has $2C$ $x$'s and $C+1$ bars,
but a loop with $C$ contractions has $2C$ $x$'s and $C$ bars, these rules can be rephrased as:
\begin{enumerate}
\item{Each contraction gives a $\delta_{ij}$ for the lower indices it connects.}
\item{Each $x$ gives a factor of $1/\sqrt{N}$.}
\item{Each loop gives a factor of $N$.}
\end{enumerate}
In the expansion of $C^{-1}$ we find the combination
$\OL{x_i x_j} - \delta_{ij} - \OL{x_i}\OL{x_j}$, which we will
denote by $\OLL{x_i x_j}$. This occurs frequently enough that we should
state special rules for it.
In evaluating an expression containing $\OLL{x_i x_j}$ there will be contractions
where the $x_i$ and $x_j$ in $\OL{x_i x_j}$ are contracted with each other.
These contractions just cancel the $\delta_{ij}$. The terms with $\OL{x_i}$
and $\OL{x_j}$ contracted give a $-\delta_{ij}/N$.
Thus a ``tadpole'' where $\OLL{x_i x_j}(12)$ contracts with itself just gives a
$-\delta_{ij}/N$. (This includes the $N^{-1/2}$ from each of the $\OL{x}$'s.)
Now consider terms where $\OLL{x_i x_j}$ is part of an open line, like
\begin{equation} \OL{x_i} \OLL{x_i x_j} \OL{x_j} (12)(34) \end{equation}
In this case the $\OL{x_i x_j}$ and the $-\OL{x_i}\OL{x_j}$ cancel,
so $\OLL{x_i x_j}$ can never be part of an open line.
But if this object is part of a loop, like in
\begin{equation} \OLL{x_i x_j} \OLL{x_k x_l} (13)(24) \end{equation}
the $\OL{x_i x_j}$ part is part of the loop, but the $-\OL{x_i}\OL{x_j}$ part
breaks the loop. Thus the four paths hidden in these double bars give
\begin{equation} \LP N-2+1 \RP \delta_{ik} \delta_{jl} = \LP N-1 \RP \delta_{ik} \delta_{jl} \end{equation}
Similarly, a loop of three double bars gives $2^3=8$ terms, $N-3+3-1 = (N-1)$,
and any loop made up entirely of $\OLL{x_i x_j}$'s gives a factor of $N-1$ times
the appropriate Kronecker $\delta$'s.
As trivial examples,
\begin{eqnarray} \LL \OL{x_i} \OL{x_j} \RR &=& \OL{x_i} \OL{x_j} (12) = \OON\delta_{ij} \\
\LL \OL{x_i x_j} \RR &=& \OL{x_i x_j} (12) = \delta_{ij} \\
\LL N\, C_{ij} \RR &=& \delta_{ij}+\OLL{x_i x_j}(12) = \LP 1 - \OON \RP \delta_{ij} \\
\nonumber\end{eqnarray}
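These expectation values are straightforward to verify by brute force; the following sketch checks the last identity above numerically (the sample sizes are arbitrary illustrative choices).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
N, D, trials = 10, 4, 100000

C_accum = np.zeros((D, D))
for _ in range(trials):
    x = rng.standard_normal((N, D))
    xbar = x.mean(axis=0)
    C = (x.T @ x / N - np.outer(xbar, xbar)) / N      # Eq. (covarmat_def)
    C_accum += N * C

print("<N C_ij> ~\n", np.round(C_accum / trials, 3))
print("expected (1 - 1/N) delta_ij with 1 - 1/N =", 1 - 1/N)
\end{verbatim}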
We are also interested in the variances of averaged quantities.
For the variance of something, $var(X) = \LL X^2 \RR - \LL X \RR^2$,
we need the ``connected part'' of $\LL X^2 \RR$. We use a vertical bar to denote
this, and we only need contractions where some of the lines cross the bar.
As an example, for the variance of an arbitrary element of $C$ to lowest order,
\begin{eqnarray} \label{EQ_connpart_1} \LL var\LP C_{ij} \RP \RR
&=& \LL C_{ij}^2 \RR - \LL C_{ij} \RR_{no\ sum\ ij}^2 \\
&=& \OOX{N^2} \LL 1+\OLL{x_i x_j} \Big|\ 1+\OLL{x_i x_j} \RR_{NS}\la{varc_2_eq}\\
&=& \OOX{N^2} \LP \OLL{x_i x_j} \Big|\ \OLL{x_i x_j} \RP (13)(24)+(14)(23) \nonumber\\
&=& \frac{N-1}{N^4} \LP \delta_{ii}\delta_{jj} + \delta_{ij}\delta_{ij} \RP_{NS} \nonumber\\
&=& \OOX{N^2} \frac{2}{N} ;\ i=j \nonumber\\
&& \OOX{N^2} \frac{1}{N} ;\ i\ne j \\
\nonumber\end{eqnarray}
The last line is written to display that the fractional variance on
the diagonal element is $\frac{2}{N}$.
In equations where the components are separated into starred indices,
$0 \le \is <P$ and primed indices, $P \le \ip < D$,
contractions of primed with starred indices are zero, contractions of starred with
starred indices give delta functions with $\delta_{\is\is} = P$, and primed with primed use
$\delta_{\ip\ip} = D-P$.
\section{Variance (and higher moments) of the parameters}
In this section we examine the variances of the parameters --
that is, the error bars on our answers. First we calculate how
much the parameters actually vary over many trials of the experiment.
Then we calculate the average of common ways of estimating this
variance --- from derivatives of the probability, from
an ``eliminate J'' jackknife analysis
or from a bootstrap resampling
(using either the covariance
matrix from the full sample, or a new covariance matrix made from
each jackknife or bootstrap resample).
The differences allow us to find and correct for
bias in our error estimates resulting from the finite sample size.
For the actual variance of our parameters, use Eq.~\ref{EQ_xf_solution_2}.
Since we are in a coordinate system where the average of this quantity is
zero, we don't need to worry about taking the connected part.
\begin{eqnarray} \label{EQ_par_var_1}
\LL x_0^f x_0^f \RR &=& \LL \OL{x_0}\OL{x_0} \RR \label{varreal_p0_eq} \nonumber\\
&+& 2\LL \OL{x_0} V_{0 \jp} W_{ \jp \kp}^{-1} \OL{x_\kp} \RR \\
&+& \LL \OL{x_\jp} W_{\jp \kp }^{-1} V_{\kp 0}^T V_{0 \mp } W_{\mp \np}^{-1} \OL{x_\np} \RR
\nonumber\end{eqnarray}
Since $V$ and $W$ are made entirely of double bars and can therefore only be part
of a loop, and primed indices can't contract with index zero,
the middle term (cross term) is zero.
We compute this to order $\frac{1}{N^3}$.
\begin{eqnarray}
\LL x_0^f x_0^f \RR &=& \LL \OL{x_0}\OL{x_0} \RR \nonumber\\
&+& \Big\langle \OL{x_\jp} \LP \delta_{\jp \mp} - \OLL{x_\jp x_\mp} + \OLL{x_\jp x_\pp} \OLL{x_\pp
x_\mp} \ldots \RP
\LP \OLL{x_\mp x_0} \RP \nonumber\\
&& \LP \OLL {x_0 x_\np} \RP
\LP \delta_{ \np \kp} - \OLL{x_\np x_\kp} + \OLL{x_\np x_\rp}\OLL{x_\rp x_\kp} \ldots \RP \OL{x_\kp} \Big\rangle \\
\nonumber\end{eqnarray}
The leading term, $\LL \OL{x_0}\OL{x_0} \RR$, is just $\OON$.
The term with six $x$'s has only one contraction:
\begin{eqnarray} && \OL{x_\jp} \OLL{x_\jp x_0} \OLL{x_0 x_\kp} \OL{x_\kp} \ \ (16)(25)(34) \nonumber\\
&=& \frac{N-1}{N^3} d \\ \nonumber\end{eqnarray}
where $d \equiv D-P$.
There are two equal terms with eight $x$'s.
There are three nonzero contractions of this term. Ignoring the $N^{-4}$ parts,
these are
\begin{eqnarray}
- 2\, \OL{x_\jp} \OLL{x_\jp x_\mp} \OLL{x_\mp x_0} \OLL{x_0 x_\kp} \OL{x_\kp}
\ (18)(27)(34)(56) &=& \frac{-2}{N^3} \delta_{\jp\kp}\delta_{\jp\kp} \delta_{\mp\mp} \delta_{00} \nonumber\\
- 2\, \OL{x_\jp} \OLL{x_\jp x_\mp} \OLL{x_\mp x_0} \OLL{x_0 x_\kp} \OL{x_\kp}
\ (18)(23)(47)(56) &=& \frac{+2}{N^3} \delta_{ \jp \kp}\delta_{ \jp \mp } \delta_{ \mp \kp} \delta_{00} \nonumber\\
- 2\, \OL{x_\jp} \OLL{x_\jp x_\mp} \OLL{x_\mp x_0} \OLL{x_0 x_\kp} \OL{x_\kp}
\ (18)(24)(37)(56) &=& \frac{-2}{N^3} \delta_{ \jp \kp}\delta_{ \jp \mp } \delta_{ \mp \kp} \delta_{00} \\
\nonumber\end{eqnarray}
Here the plus sign on the second contraction comes from the tadpole. The
second and third contractions cancel, so we just have
\begin{equation} \frac{-2}{N^3} d^2 \end{equation}
To order $\OOX{N^3}$ we only need two loop contractions from the terms with ten $x$'s.
There are three such terms, but two of them are equal.
\begin{eqnarray} && \LL \OL{x_\jp} \OLL{x_\jp x_\mp} \OLL{x_\mp x_0} \OLL{x_0 x_\np} \OLL{ x_\np x_\kp} \OL{x_\kp} \RR \nonumber\\
+2 && \LL \OL{x_\jp} \OLL{x_\jp x_\pp} \OLL{x_\pp x_\mp} \OLL{x_\mp x_0} \OLL{x_0 x_\kp} \OL{x_\kp} \RR \\ \nonumber\end{eqnarray}
Each term has two contractions:
\begin{eqnarray}
&& \OL{x_\jp} \OLL{x_\jp x_\mp} \OLL{x_\mp x_0} \OLL{x_0 x_\np} \OLL{x_\np x_\kp} \OL{x_\kp}
\ (1,10)(28)(39)(47)(56) \nonumber\\
&+& \OL{x_\jp} \OLL{x_\jp x_\mp} \OLL{x_\mp x_0} \OLL{x_0 x_\np} \OLL{ x_\np x_\kp} \OL{x_\kp}
\ (1,10)(29)(38)(47)(56) \nonumber\\
&=& \OOX{N^3} \LP d + d^2 \RP \\ \nonumber\end{eqnarray}
\begin{eqnarray}
&& 2\ \OL{x_\jp} \OLL{x_\jp x_\pp} \OLL{x_\pp x_\mp} \OLL{x_\mp x_0} \OLL{x_0 x_\kp} \OL{x_\kp}
\ (1,10)(24)(35)(69)(78) \nonumber\\
&+& 2\
\OL{x_\jp} \OLL{x_\jp x_\pp} \OLL{x_\pp x_\mp} \OLL{x_\mp x_0} \OLL{x_0 x_\kp} \OL{x_\kp}
\ (1,10)(25)(34)(69)(78) \nonumber\\
&=& \frac{2}{N^3} \LP d + d^2 \RP \\ \nonumber\end{eqnarray}
Putting it all together,
\begin{eqnarray} \label{EQ_par_var_2} \LL x_0^f x_0^f \RR &=& \OOX{N} + \frac{N-1}{N^3} (d)
+ \frac{-2}{N^3} (d)^2 + \frac{3}{N^3} (d+d^2) \nonumber\\
&=& \OOX{N} + \frac{d}{N^2} + \frac{d(d+2)}{N^3} + \ldots \\ \nonumber\end{eqnarray}
Thus the fluctuations in the parameters are larger than the asymptotic value $\OON$
obtained from the covariance matrix.
As noted above, the probability distribution of the parameters is not exactly
Gaussian. Higher moments of this distribution can be obtained in the same
way. At leading order in $\OON$ there is only one independent diagram for
the connected part of each moment, and we find, for $M$ even,
\begin{equation} \LL \LP x_0^f \RP^M \RR_{connected} = \frac{ \LP D-P \RP \LP M-1 \RP ! }{N^M} \end{equation}
\section{Estimates of the parameters' variance}
In practice, the most common method for estimating the variance of
the parameters is to use the covariance matrix for the parameters.
(See, for example, Ref.~\cite{TASI}.)
In our coordinate system, this matrix is just $A^{-1}$, and our estimate
for the variance of parameter zero is $\LP A^{-1}\RP_{00}$.
Using the third and first of Eqs.~\ref{EQ_CCinv_eqs},
\begin{eqnarray}
B^T &=& - W^{-1} V^T A \nonumber\\
{\bf 1} &=& UA-VW^{-1}V^TA \nonumber\\
A^{-1} &=& U - VW^{-1} V^T \\
\nonumber\end{eqnarray}
Then, our estimate for the variance of parameter zero is
\begin{eqnarray}
var(x_0^f)_{derivative} &=& A_{00}^{-1} \nonumber\\
&=& U_{00} - V_{0\kp} W_{\kp \lp}^{-1} V_{\lp 0}^T \nonumber\\
&=& \OON \LP \delta_{00} + \OLL{x_0 x_0} \RP \nonumber\\
&-& \OON \LP \OLL{x_0 x_\kp} \RP \LP \delta_{\kp \lp } - \OLL{x_\kp x_\lp}
+ \OLL{x_\kp x_\mp} \OLL{x_\mp x_\lp} \ldots \RP \OLL{x_\lp x_{0}} \\
\nonumber\end{eqnarray}
For the order $\OON$ correction we only need the $\delta_{\kp \lp }$ from
$W^{-1}$, and find
\begin{eqnarray}
\label{EQ_varest_p0}
N var(x_0^f)_{derivative}
&=& \delta_{00}
+ \OLL{x_0 x_0}(12)
- \OLL{x_0 x_\kp} \OLL{ x_\kp x_0} (14)(23) \nonumber\\
&=& 1
- \OON
- \frac{N-1}{N^2}(D-P) \nonumber\\
&=& 1 - \OON\LP 1+D-P\RP \\
\nonumber\end{eqnarray}
The order $1/N^2$ contribution to this estimate vanishes, as sketched in
Appendix II.
If $D=P=1$ this is just $ \OON \LL 1+\OLL{x_0 x_0}\RR = 1 - \OON $,
the standard correction for a simple average, reflecting our normalization
of the covariance matrix. Comparing to the desired result in Eq.~\ref{EQ_par_var_2},
we see that this is an underestimate of the variance of the parameters.
The difference between this error estimate and the correct one above
is that this estimate assumes that the covariance matrix remains
fixed while the data points vary, while the correct answer takes into account
the correlations between the data points and the covariance matrix
(constructed from these same data points).
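In matrix form this estimate is the familiar error-propagation expression $(F^{T} C^{-1} F)^{-1}$ with $F_{i\alpha}=\partial x_i^f/\partial p_\alpha$; a minimal sketch for the toy basis used here, averaged over trials so that the bias of Eq.~\ref{EQ_varest_p0} becomes visible, is given below (sample sizes again illustrative).
\begin{verbatim}
import numpy as np

def derivative_variance00(x, P):
    """'Derivative' estimate (A^{-1})_{00}, i.e. (F^T C^{-1} F)^{-1}_{00}
    with F selecting the first P components (the toy basis of the text)."""
    N = x.shape[0]
    xbar = x.mean(axis=0)
    C = (x.T @ x / N - np.outer(xbar, xbar)) / N
    A = np.linalg.inv(C)[:P, :P]          # upper-left block of C^{-1}
    return np.linalg.inv(A)[0, 0]

rng = np.random.default_rng(3)
N, D, P, trials = 100, 25, 5, 4000
est = np.mean([derivative_variance00(rng.standard_normal((N, D)), P)
               for _ in range(trials)])
print("N * <estimated var(p0)>         =", N * est)
print("predicted bias 1 - (1 + D - P)/N =", 1 - (1 + D - P)/N)   # Eq. (varest_p0)
\end{verbatim}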
\section{Variance of jackknife and bootstrap parameters}
The variance of the parameters is also often estimated by a jackknife
or bootstrap analysis. In these methods the fit is repeated many times
using subsets of the data sample, and the variance of the parameters
is estimated from the variance over the jackknife or bootstrap samples.
Both the jackknife and bootstrap can
be done either using the covariance matrix from the full sample in fitting
each jackknife or bootstrap sample, or by remaking a covariance matrix for
each resample. Using the full sample covariance matrix amounts
to seeing how the parameters vary with fixed covariance matrix, that is,
by varying $ \OL{x_\is}$ and $\OL{x_\kp}$ in Eq.~\ref{EQ_xf_solution}
with $A_{\is \js}^{-1}B_{ \js \kp}$ held fixed.
This is
the same question as is answered by $var(x_0^f)_{derivative}$ in Eq.~\ref{EQ_varest_p0}.
Since the change in the parameters is linear in $ \OL{x_\is}$ and $\OL{x_\kp}$,
it doesn't matter if the $\OL{x_i}$ are varied infinitesimally (by taking derivatives)
or slightly (jackknife) or fully (bootstrap).
In this case, the variance of the parameters will have the
same bias as does Eq.~\ref{EQ_varest_p0} --- no new calculation
is necessary, although there is a slight difference due to the
normalization of the covariance matrix used here.
Remaking the covariance matrix for each resample includes correlations
of the covariance matrix and data, but not in quite the desired way.
The calculations above can be extended to calculate the expectation value
of the parameter variance for the jackknife analysis in which the covariance
matrix is recomputed for each jackknife sample.
An ``eliminate J'' jackknife consists of making $N/J$
resamples, each omitting $J$ data vectors (numbers $nJ$ through $(n+1)J-1$),
and hence having $N_J \equiv N-J$ elements.
We will denote averages in the $n$'th jackknife sample with a superscript
$(n)$.
The average of $x^a$ in the $n$'th jackknife sample is
\begin{equation} \OL{x^{(n)}} = \frac{1}{N_J} \LP \sum_{a \in (n)}x^a \RP \end{equation}
where $J$ data vectors (starting with number $nJ$) were deleted from the full
sample. The variance of this quantity (over the jackknife samples) is
\begin{equation}\label{EQ_jack_var_2} \frac{J}{N(N-J)} \ \ \ ,\end{equation}
so we generally multiply the variance over the jackknife samples
by $\frac{N-J}{J}$ to get the expected variance of the mean $\OON$.
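A minimal implementation of the ``eliminate $J$'' jackknife just described, applied to the mean of uncorrelated toy data so that the $\frac{N-J}{J}$ rescaling can be checked directly (all numbers are illustrative choices):
\begin{verbatim}
import numpy as np

def jackknife_variance(data, J, estimator):
    """Eliminate-J jackknife: N/J resamples, each omitting J consecutive samples.
    Returns (N-J)/J times the variance of the estimator over the resamples."""
    N = len(data)
    values = []
    for n in range(N // J):
        mask = np.ones(N, dtype=bool)
        mask[n * J:(n + 1) * J] = False
        values.append(estimator(data[mask]))
    values = np.asarray(values)
    return (N - J) / J * values.var()

rng = np.random.default_rng(4)
data = rng.standard_normal(128)
for J in (1, 2, 4, 8):
    print(f"J={J}: jackknife var of mean = {jackknife_variance(data, J, np.mean):.4f}")
print("direct estimate 1/N =", 1 / len(data))
\end{verbatim}
Each choice of $J$ should reproduce roughly $1/N$ for the variance of the mean, up to the fluctuations discussed in the text.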
We now compute the variance of the parameters in the jackknife fits. In doing
this we will need averages of products of quantities from different
jackknife ensembles. Without losing generality, we can think of these
as ensembles number zero and one, which differ only in their first
$J$ data elements. Thus, expectation values of sums over values in
different ensembles may produce factors of $N_J-J$ instead of
$N_J$, where $N_J$ is the number of samples in the jackknife,
and is really $N-J$. ($N_J-J = N-2J$ is the number of samples in common between
two different jackknife resamples.)
For example, using $(n)$ to denote quantities in the $n$'th jackknife
sample ($x_j^{a(n)}$ is the $j$'th component of the $a$'th data vector in
jackknife sample $(n)$), for $n \ne m$,
\begin{eqnarray} \label{EQ_jack_avg1} && \langle \OL{x_{j}^{(n)}} \OL{x_{k}^{(m)}} \rangle \nonumber\\
&=& \langle \frac{1}{N_J} \sum_a x_j^{a(n)} \frac{1}{N_J} \sum_b x_k^{b(m)} \rangle \nonumber\\
&=& \frac{1}{N_J^2} \sum_{ab} \delta_{jk} \OL{\delta}^{ab} \nonumber\\
&=& \frac{N_J-J}{N_J^2} \delta_{jk} \\
\nonumber\end{eqnarray}
where we define $\OL{\delta}^{ab} = 1$ if $a=b$ and $a,b \in (J,N-1)$, $0$ otherwise. Thus the
sum over $a$ and $b$ gives a factor of $N_J-J$ instead of $N_J$.
From Eq.~\ref{EQ_xf_solution_2}, parameter $0$ in jackknife fit $(n)$ is
\begin{equation} x_0^{(n)f} = \OL{x_0^{(n)}} + V_{0\jp}^{(n)} W_{\jp\kp}^{-1(n)} \OL{x_\kp^{(n)}} \end{equation}
and the variance of this parameter over the jackknife samples is
\begin{eqnarray}\label{EQ_jack_var} var_J(x_0^f) &=&
\Bigg\langle \LP \OL{x_0^{(n)}} - \OL{x_\jp^{(n)}} W_{\jp \ip}^{-1(n)} V_{\ip 0}^{T(n)}
- \frac{J}{N} \sum_m \LP \OL{x_0^{(m)}} - \OL{x_\kp^{(m)}} W_{\kp \ip}^{-1(m)} V_{\ip 0}^{T(m)} \RP \RP
\nonumber\\
&& \LP \OL{x_0^{(n)}} - V_{0\jp}^{(n)} W_{\jp\kp}^{-1(n)} \OL{x_\kp^{(n)}}
- \frac{J}{N} \sum_p \LP \OL{x_0^{(p)}} - V_{0\jp}^{(p)} W_{\jp\lp}^{-1(p)} \OL{x_\lp^{(p)}} \RP \RP
\Bigg\rangle \\
\nonumber\end{eqnarray}
This is more complicated than Eq.~\ref{EQ_par_var_1} because the mean
over jackknife samples is not exactly zero. Also, the sums over sample vectors now
sometimes give $N_J$, sometimes $N_J-1$, and sometimes $N_J-J$, so some of the shortcuts developed
above won't work any more. Note the $J/N$ is correct -- there are $N/J$ jackknife
resamples, each containing $N_J = N-J$ elements.
In Eq.~\ref{EQ_jack_var} the sums contain terms where $n=m$ and terms where $n \ne m$.
Separate the diagonal and off-diagonal terms in the sums, and use the fact that
all non-diagonal terms are equal, $\sum_m$ contains $N/J-1$ terms with $m \ne n$,
and $\sum_{mp}$ has $N/J$ diagonal terms and $(N/J)(N/J-1)$ off diagonal:
\begin{eqnarray} var_J(x_0^f) = \Bigg \langle &&
\LP 1-\frac{J}{N} \RP \OL{x_0^{(n)}} \OL{x_0^{(n)}} \nonumber\\
&-& \LP 1-\frac{J}{N} \RP \OL{x_0^{(n)}} \OL{x_0^{(m)}} \nonumber\\
&-& 2 \LP 1-\frac{J}{N} \RP \OL{x_0^{(n)}}
V_{0\ip}^{(n)} W_{\ip\kp}^{-1(n)} \OL{x_\kp^{(n)}} \nonumber\\
&+& 2 \LP 1-\frac{J}{N} \RP \OL{x_0^{(n)}}
V_{0\ip}^{(m)} W_{\ip\kp}^{-1(m)} \OL{x_\kp^{(m)}} \nonumber\\
&+& \LP 1-\frac{J}{N} \RP \OL{x_\jp^{(n)}} W_{\jp \ip}^{-1(n)} V_{\ip 0}^{T(n)}
V_{0\ip}^{(n)} W_{\ip\kp}^{-1(n)} \OL{x_\kp^{(n)}} \nonumber\\
&-& \LP 1-\frac{J}{N} \RP \OL{x_\jp^{(n)}} W_{\jp \ip}^{-1(n)} V_{\ip 0}^{T(n)}
V_{0\ip}^{(m)} W_{\ip\kp}^{-1(m)} \OL{x_\kp^{(m)}}
\Bigg\rangle_{n \ne m} \\
\nonumber\end{eqnarray}
where $n \ne m$.
To evaluate this expression we need:
\begin{eqnarray} \label{jackvar_parts_eq0}
(a)&&\ \ \langle \OL{x_0^{(n)}} \OL{x_0^{(n)}} \rangle \nonumber\\
(b)&&\ \ \langle \OL{x_0^{(n)}} \OL{x_0^{(m)}} \rangle_{n \ne m} \nonumber\\
(c)&&\ \ \langle \OL{x_0^{(n)}} V_{0\ip}^{(n)} W_{\ip\kp}^{-1(n)} \OL{x_\kp^{(n)}} \rangle \nonumber\\
(d)&&\ \ \langle \OL{x_0^{(n)}} V_{0\ip}^{(m)} W_{\ip\kp}^{-1(m)} \OL{x_\kp^{(m)}} \rangle_{n \ne m} \nonumber\\
(e)&&\ \ \langle \OL{x_\jp^{(n)}} W_{\jp \ip}^{-1(n)} V_{\ip 0}^{T(n)} V_{0\jp}^{(n)} W_{\jp\kp}^{-1(n)}
\OL{x_\kp^{(n)}} \rangle \nonumber\\
(f)&&\ \ \langle \OL{x_\jp^{(n)}} W_{\jp \ip}^{-1(n)} V_{\ip 0}^{T(n)} V_{0\jp}^{(m)} W_{\jp\kp}^{-1(m)}
\OL{x_\kp^{(m)}} \rangle_{n \ne m} \\
\nonumber\end{eqnarray}
Here $(a)$, $(c)$ and $(e)$, which involve only jackknife sample $(n)$,
are the same as in the previous section with the
replacement of $N$ by $N_J$.
Because $W^{-1(n)}$ and $V^{(n)}$ consist only of double barred quantities which can't
be part of an open line, and primed and unprimed indices can't contract,
$(c)$ vanishes.
For $(d)$ we can imagine expanding all the $\OLL{x_i x_j}^{(m)}$'s into
pieces, $\LP \OL{x_i x_j}^{(m)} - \delta_{ij} - \OL{x_i}^{(m)} \OL{x_j}^{(m)} \RP$.
and making all contractions. There is only one factor of $x$ from jackknife
sample $(n)$, which must contract with something from $(m)$. Thus, all of
these terms differ from $(c)$ by replacement of exactly one factor of $N_J$ by $N_J-J$, and
therefore also sum to zero.
Similarly $(e)$ and $(f)$ differ by the replacement of one or more factors
of $N_J$ by $N_J-J$. Thus, their difference will be one order in $\frac{J}{N}$
less than their value.
This means that to get the first correction to the asymptotic form, we
need only keep the lowest order term in part $(e)$, and the analogous
contraction for part $(f)$.
\begin{eqnarray} \label{jackvar_parts_eq}
(a)&&\ \ \langle \OL{x_0^{(n)}} \OL{x_0^{(n)}} \rangle =
\frac{1}{N_J^2} \sum_{ab} x_0^{a(n)} x_0^{b(n)} = \frac{1}{N_J} \nonumber\\
(b)&&\ \ \langle \OL{x_0^{(n)}} \OL{x_0^{(m)}} \rangle =
\frac{1}{N_J^2} \sum_{ab} x_0^{a(n)} x_0^{b(m)} = \frac{N_J-J}{N_J^2} \nonumber\\
(e)&&\ \ \langle \OL{x_\jp^{(n)}} \OLL{x_\jp x_0^{(n)}} \OLL{x_0 x_\kp^{(n)}}
\OL{x_\kp^{(n)}} \rangle \nonumber\\
&&=\OL{x_\jp^{(n)}} \OLL{x_\jp x_0^{(n)}} \OLL{x_0 x_\kp^{(n)}}
\OL{x_\kp^{(n)}} \ \ (16)(25)(34) \nonumber\\
&&= \frac{N_J-1}{N_J^3}\LP D-P \RP \nonumber\\
(f)&&\ \ \langle \OL{x_\jp^{(n)}} \OLL{x_\jp x_0^{(n)}} \OLL{x_0 x_\kp^{(m)}}
\OL{x_\kp^{(m)}} \rangle \nonumber\\
&&= \OL{x_\jp^{(n)}} \OLL{x_\jp x_0^{(n)}} \OLL{x_0 x_\kp^{(m)}}
\OL{x_\kp^{(m)}} \ \ (16)(25)(34) \nonumber\\
&&= \frac{N_J-2J-1 + \ldots}{N_J^3}\LP D-P \RP \\
\nonumber\end{eqnarray}
Here $(a)$, $(c)$ and $(e)$ are the same as in the previous section.
To evaluate $(f)$ we need to separate the two terms in the double overbar (the delta function
isn't there since the indices can never be equal), since they may give different
numbers of factors of $N_J-J$. This evaluation proceeds as:
\begin{eqnarray}
(f) &\approx& \big\langle \OL{x_\jp^{(n)}} \OLL{x_\jp x_0^{(n)}} \OLL{x_0 x_\kp^{(m)}}
\OL{x_\kp^{(m)}} \big\rangle_{n \ne m} \nonumber\\
&=& \Big\langle \OL{x_\jp^{(n)}} \OL{x_\jp x_0^{(n)}} \OL{x_0 x_\kp^{(m)}}
\OL{x_\kp^{(m)}} \nonumber\\
&-& \OL{x_\jp^{(n)}} \OL{x_\jp^{(n)}} \OL{x_0^{(n)}} \OL{x_0 x_\kp^{(m)}}
\OL{x_\kp^{(m)}} \nonumber\\
&-& \OL{x_\jp^{(n)}} \OL{x_\jp x_0^{(n)}} \OL{x_0^{(m)}} \OL{ x_\kp^{(m)}}
\OL{x_\kp^{(m)}} \nonumber\\
&+& \OL{x_\jp^{(n)}} \OL{x_\jp^{(n)}} \OL{ x_0^{(n)}} \OL{x_0^{(m)}} \OL{ x_\kp^{(m)}}
\OL{x_\kp^{(m)}} \Big\rangle_{n \ne m} \nonumber\\
=\frac{1}{N_J^6} \sum_{abcdef} \Big\langle
&& x_\jp^{a(n)} x_\jp^{b(n)} x_0^{b(n)} x_0^{d(m)} x_\kp^{d(m)} x_\kp^{f(m)} \nonumber\\
&-& x_\jp^{a(n)} x_\jp^{b(n)} x_0^{b(n)} x_0^{d(m)} x_\kp^{e(m)} x_\kp^{f(m)} \nonumber\\
&-& x_\jp^{a(n)} x_\jp^{b(n)} x_0^{c(n)} x_0^{d(m)} x_\kp^{d(m)} x_\kp^{f(m)} \nonumber\\
&+& x_\jp^{a(n)} x_\jp^{b(n)} x_0^{c(n)} x_0^{d(m)} x_\kp^{e(m)} x_\kp^{f(m)} \Big\rangle_{n \ne m} \\
\nonumber\end{eqnarray}
Now the $(16)(25)(34)$ contraction gives
\begin{eqnarray}
=\frac{1}{N_J^6} \sum_{abcdef} \Bigg(
&& \delta_{\jp\kp} \OL{\delta}^{af} \delta_{\jp\kp} \OL{\delta}^{bd} \delta_{00} \OL{\delta}^{bd} \nonumber\\
&-& \delta_{\jp\kp} \OL{\delta}^{af} \delta_{\jp\kp} \OL{\delta}^{be} \delta_{00} \OL{\delta}^{bd} \nonumber\\
&-& \delta_{\jp\kp} \OL{\delta}^{af} \delta_{\jp\kp} \OL{\delta}^{bd} \delta_{00} \OL{\delta}^{cd} \nonumber\\
&+& \delta_{\jp\kp} \OL{\delta}^{af} \delta_{\jp\kp} \OL{\delta}^{be} \delta_{00} \OL{\delta}^{cd} \Bigg) \nonumber\\
= \frac{1}{N_J^6} && \Big( N_J^2 \LP N_J-J \RP^2 -2 N_J \LP N_J-J \RP^2 + \LP N_J-J \RP^3 \Big) \LP D-P \RP \nonumber\\
= \frac{1}{N_J^6} && \Big( N_J^4 -2N_J^3 J -N_J^3 + \ldots \Big) \LP D-P \RP \nonumber\\
= \frac{1}{N_J^3} && \Big( N_J-2J-1 + \ldots \Big) \LP D-P \RP \\
\nonumber\end{eqnarray}
Putting the pieces together, the variance of the parameter over the jackknife
samples is
\begin{eqnarray} && \LP 1-\frac{J}{N} \RP \LP \frac{J}{N_J^2} + \frac{2J(D-P)}{N_J^3} \RP \\
&=& \LP \frac{N-J}{N} \RP \LP \frac{J}{(N-J)^2} \RP \LP 1 + \frac{2(D-P)}{N} + \ldots \RP \\
\nonumber\end{eqnarray}
Comparing with Eq.~\ref{EQ_jack_var_2}, which is for $D=P$, we see that there
is an extra factor of $1 + \frac{2(D-P)}{N}$ (independent of $J$).
However, by comparison with Eq.~\ref{EQ_par_var_2}
we see that this effect is too large by a factor of two, so the jackknife
variance for the parameters is also biased.
The leading corrections to the bootstrap estimate of the parameters' variance
can be done in a similar way.
To be specific, our bootstrap procedure is to make $B$ resamplings, each made by
choosing $N$ data vectors with replacement from the original set of $N$ vectors,
and calculate the variance of the parameters over the bootstrap resamples.
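A sketch of this bootstrap procedure for the toy problem of the earlier sections, with a flag selecting whether the covariance matrix is remade for each resample or reused from the full sample (both variants are discussed below); the sample sizes and the number of resamples are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np

def cov_of_mean(x):
    N = x.shape[0]
    xbar = x.mean(axis=0)
    return (x.T @ x / N - np.outer(xbar, xbar)) / N

def fit_p0(xbar, C, P):
    """Eq. (xf_solution_2) for parameter 0 in the diagonal basis."""
    V, W = C[0, P:], C[P:, P:]
    return xbar[0] - V @ np.linalg.solve(W, xbar[P:])

def bootstrap_variance(x, P, B=400, remake_cov=True, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    N = x.shape[0]
    C_full = cov_of_mean(x)
    p0 = []
    for _ in range(B):
        xb = x[rng.integers(0, N, size=N)]     # resample N vectors with replacement
        C = cov_of_mean(xb) if remake_cov else C_full
        p0.append(fit_p0(xb.mean(axis=0), C, P))
    return np.var(p0)

rng = np.random.default_rng(5)
x = rng.standard_normal((100, 25))             # N=100, D=25 toy data
for remake in (False, True):
    print("remake_cov =", remake,
          " bootstrap var =", bootstrap_variance(x, 5, remake_cov=remake, rng=rng))
\end{verbatim}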
Similarly to Eq.~\ref{EQ_jack_var}, the average over trials of the bootstrap estimate of the
variance is
\begin{eqnarray}\label{EQ_bootvar1} var_B(x_0^f) &=&
\Bigg\langle \LP \OL{x_0^{(n)}} - \OL{x_\jp^{(n)}} W_{\jp \ip}^{-1(n)} V_{\ip 0}^{T(n)}
- \frac{1}{B} \sum_m \LP \OL{x_0^{(m)}} - \OL{x_\kp^{(m)}} W_{\kp \ip}^{-1(m)} V_{\ip 0}^{T(m)} \RP \RP \nonumber\\
&& \LP \OL{x_0^{(n)}} - V_{0\jp}^{(n)} W_{\jp\kp}^{-1(n)} \OL{x_\kp^{(n)}}
- \frac{1}{B} \sum_p \LP \OL{x_0^{(p)}} - V_{0\jp}^{(p)} W_{\jp\lp}^{-1(p)} \OL{x_\lp^{(p)}} \RP \RP \Bigg\rangle
\\
\nonumber\end{eqnarray}
which, after separating diagonal and off-diagonal terms in the sums, becomes
\begin{eqnarray}\label{EQ_bootvar3} \LP 1-\frac{1}{B} \RP \Bigg \langle &&
\OL{x_0^{(n)}} \OL{x_0^{(n)}} \nonumber\\
&-& \OL{x_0^{(n)}} \OL{x_0^{(m)}} \nonumber\\
&-& 2 \OL{x_0^{(n)}}
V_{0\ip}^{(n)} W_{\ip\jp}^{-1(n)} \OL{x_\kp^{(n)}} \nonumber\\
&+& 2 \OL{x_0^{(n)}}
V_{0\ip}^{(m)} W_{\ip\jp}^{-1(m)} \OL{x_\kp^{(m)}} \nonumber\\
&+& \OL{x_\jp^{(n)}} W_{\jp \ip}^{-1(n)} V_{\ip 0}^{T(n)}
V_{0\jp}^{(n)} W_{\jp\kp}^{-1(n)} \OL{x_\kp^{(n)}} \nonumber\\
&-& \OL{x_\jp^{(n)}} W_{\jp \ip}^{-1(n)} V_{\ip 0}^{T(n)}
V_{0\jp}^{(m)} W_{\jp\kp}^{-1(m)} \OL{x_\kp^{(m)}}
\Bigg\rangle_{n \ne m} \\
\nonumber\end{eqnarray}
The overall $\LP 1-\frac{1}{B} \RP$ is the expected factor for the
difference between the average over the original sample and the average over bootstraps.
Label the parts as in Eq.~\ref{jackvar_parts_eq0}, where now $(n)$ means the $n$'th
bootstrap resample.
For part $(a)$,
\begin{equation}\label{EQ_bootvar_a}
\langle \OL{x_0^{(n)}} \OL{x_0^{(n)}} \rangle = \frac{1}{N^2} \sum_{ab} \LL x_0^{a(n)} x_0^{b(n)} \RR \end{equation}
where, in this section, the superscript $a(n)$ means the number of the data vector in the original
set that was chosen to be the $a$'th member of bootstrap resample $(n)$.
For example, if for $N=3$ our bootstrap ensemble members were members $0$, $1$ and $0$ of
the original ensemble, then $0(n)=0$, $1(n)=1$ and $2(n)=0$.
We will get contributions with nonvanishing expectation value when $a(n)=b(n)$. If a member
of the original ensemble is chosen $m$ times in the bootstrap sample, then there will be
$m^2$ contributions. Thus the total is the sum over all members of the original ensemble
of the square of the number of times that member was chosen for this bootstrap sample.
The probability distribution for the number of times a member appears in the bootstrap
sample is a binomial distribution with probability $p=1/N$.
The average square of the number of times a member appears in a bootstrap resample
is just the second moment of this distribution, etc.
\begin{eqnarray}
\LL (n_i) \RR &=& 1\nonumber\\
\LL (n_i)^2 \RR &=& 2-\OON \nonumber\\
\LL (n_i)^3 \RR &=& 5-\frac{6}{N}+\frac{2}{N^2} \ \ \ (N>2)\\
\nonumber\end{eqnarray}
Thus the expectation value of $(a)$ is $\frac{1}{N^2} N \LP 2-\OON \RP = \frac{2}{N} -\frac{1}{N^2}$.
Part $(b)$ is the expectation value of the number of times a member was chosen in bootstrap
resample $(n)$ times the number of times it was chosen in resample $(m)$. These two are
independent, so we get just the product of the averages, or $\frac{-1}{N}$.
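The multiplicity moments quoted above are easy to check by drawing bootstrap occupation numbers directly (a multinomial draw over the $N$ original members gives each member a binomial $(N,1/N)$ count); the sketch below is purely illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(6)
N, trials = 10, 200000
# occupation numbers n_i of each original member in a bootstrap resample
counts = rng.multinomial(N, np.full(N, 1.0 / N), size=trials)
n = counts[:, 0].astype(float)                  # track member 0
print("<n>   =", n.mean(),      " (expect 1)")
print("<n^2> =", (n**2).mean(), " (expect 2 - 1/N =", 2 - 1/N, ")")
print("<n^3> =", (n**3).mean(), " (expect 5 - 6/N + 2/N^2 =", 5 - 6/N + 2/N**2, ")")
\end{verbatim}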
For part $(c)$, break the double bar into its two components.
\begin{eqnarray}
(c) &=& -2\LL \OL{x_0}^{(n)} \OL{x_0 x_\ip}^{(n)} \OL{x_\ip}^{(n)} \RR
+2\LL \OL{x_0}^{(n)} \OL{x_0}^{(n)} \OL{x_\ip}^{(n)} \OL{x_\ip}^{(n)} \RR \nonumber\\
&=& \frac{-2}{N^3}\sum_{abc} \LL x_0^{a(n)} x_0^{b(n)} x_\ip^{b(n)} x_\ip^{c(n)} \RR
+ \frac{2}{N^4} \sum_{abcd} \LL x_0^{a(n)} x_0^{b(n)} x_\ip^{c(n)} x_\ip^{d(n)} \RR \\
\nonumber\end{eqnarray}
In the first term we get a contribution when $a(n)=b(n)=c(n)$. For each of the $N$ members
of the original ensemble we therefore get $n_a^3$ terms, where $n_a$ is the number of times
that member appeared in the bootstrap resample, so we get $N\LP 5 - \ldots \RP \LP D-P \RP$,
where the $D-P$ is from the implicit sum over $\ip$.
In the second term we get contributions when $a(n)=b(n)$ and $c(n)=d(n)$. The probabilities
of these two conditions are not quite independent, since if one member of the original ensemble
is chosen multiple times in the bootstrap resample the other members will be chosen fewer
times. This effect will be suppressed by a power of $\OON$, so to leading order
we just have $\sum_{a,b} \LL n_a^2 \RR \LL n_b^2 \RR \approx 4N^2$, times $\LP D-P \RP$ from the sum over $\ip$.
Including the factor of two and the overall powers of $N$ in front, $(c) = \frac{-2(D-P)}{N^2} + \ldots$.
Parts $(d)$, $(e)$ and $(f)$ are done similarly, where to this order in $\OON$ we only need the
loop contraction in parts $(e)$ and $(f)$.
Putting it together
\begin{equation} var_B(x_0^f) = \LP 1-\frac{1}{B} \RP \OON \LP 1 + \frac{D-P-1}{N} \RP \end{equation}
\section{Correcting small biases}
Once the biases in the various estimates of the error on the parameter have been
calculated, it is a simple matter to correct for them. In particular,
we should multiply variance estimates from the derivative method by
$F_{deriv}$ in Eq.~\ref{EQ_parvar_oon_factors2}. Note this assumes the covariance matrix was normalized
as in Eq.~\ref{EQ_covarmat_def}. For the jackknife or bootstrap done with the full sample
covariance matrix, multiply the variance by $F_{reuse}$. This differs from
$F_{deriv}$ only in the $1$ in the denominator, the well known correction for
the difference between the sample average and the true average, which was not
included in our normalization of $C$. For the jackknife or bootstrap analysis where a
new covariance matrix is made for each jackknife or bootstrap sample, multiply the variance
by $F_{jackknife,remake}$ or $F_{bootstrap,remake}$.
Of course, if you are rescaling error
bars instead of the variance, you should use the square root of the factor below.
(In $F_{bootstrap,remake}$ we assumed that the $\frac{B-1}{B}$
in Eq.~\ref{EQ_bootvar3} has already been accounted for.)
\begin{eqnarray} \label{EQ_parvar_oon_factors2}
&&F_{deriv} = \frac{ 1 + \frac{1}{N}\LP D-P \RP + \frac{1}{N^2}\LP D-P \RP\LP D-P+2 \RP \ldots }
{ 1 - \OON\LP 1+D-P\RP + \frac{0}{N^2} } \nonumber\\
&&F_{reuse} = \frac{ 1 + \frac{1}{N}\LP D-P \RP + \frac{1}{N^2}\LP D-P \RP\LP D-P+2 \RP \ldots }
{ 1 - \OON\LP D-P\RP + \frac{0}{N^2} } \nonumber\\
&&F_{jackknife,remake} = 1 - \frac{1}{N} \LP D-P \RP \ldots \nonumber\\
&&F_{bootstrap,remake} = 1 + \frac{1}{N} \ldots \\
\nonumber\end{eqnarray}
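For convenience, a small helper evaluating these factors (apply the square root when rescaling error bars rather than variances); it simply transcribes Eq.~\ref{EQ_parvar_oon_factors2} and is only as accurate as the truncated expansions themselves.
\begin{verbatim}
def correction_factors(N, D, P):
    """Multiplicative corrections of Eq. (parvar_oon_factors2) for the
    variance estimates (use the square root for error bars instead)."""
    d = D - P
    true_var = 1 + d / N + d * (d + 2) / N**2          # numerator series
    return {
        "derivative":        true_var / (1 - (1 + d) / N),
        "reuse_full_cov":    true_var / (1 - d / N),
        "jackknife_remake":  1 - d / N,
        "bootstrap_remake":  1 + 1 / N,
    }

print(correction_factors(N=50, D=25, P=5))
\end{verbatim}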
\section{Comparison to numerical results}
\begin{figure}[tbh]
\epsfxsize=5.5in
\epsfbox[0 0 4096 4096]{parvar_rev.ps}
\rule{0.0in}{0.0in}\vspace{1.0in}\\
\caption{
\label{FIG_sim2}
Numerical results from Fig.\protect\ref{FIG_sim1} together with
the order $\OON$ and $\frac{1}{N^2}$ results from the previous section.
The meaning of the symbols is the same as in Fig.~\protect\ref{FIG_sim1}.
}
\end{figure}
\begin{figure}[tbh]
\epsfxsize=5.5in
\epsfbox[0 0 4096 4096]{parvar_adjust_rev.ps}
\rule{0.0in}{0.0in}\vspace{1.0in}\\
\caption{
\label{FIG_sim3}
Numerical results from Fig.\protect\ref{FIG_sim1} corrected for bias
up to corrections of order $\frac{1}{N^2}$ for the jackknife with remade
covariance matrices and order $\frac{1}{N^3}$ for the methods with fixed
covariance matrix.
}
\end{figure}
In Fig.~\ref{FIG_sim2} we plot the order $\OON$ forms for the variance
of the parameter and the various methods of estimating it together with
the numerical data. The horizontal axis has been inverted to $\OON$.
Figure~\ref{FIG_sim3} shows the same data, with the estimates for the
variance corrected for bias (up to errors of order $\frac{1}{N^3}$ or $\frac{1}{N^2}$).
Here the lines for the actual variance of the parameter (black)
and for the derivative or resampling with the full sample
covariance matrix (red) are second order in $\OON$, while
the line for the jackknife with remade covariance matrices (blue)
is only first order in $\OON$.
As an aside, we note that although the lowest order corrections for the bootstrap
with remade covariance matrices are smaller than for the other methods,
the next order corrections appear to be larger.
\section{Introduction}\footnote{This talk was presented by
M. Y. Ishida.}
Recently we found rather strong evidence for the existence of the light
$\sigma$ particle by analyzing the experimental data obtained through
both the scattering and the production processes of the $I=0\ S$-wave
$\pi\pi$ system.
In the preceding talk of this conference,\cite{ref1}
(which is referred to as I in the following,) it was explained that
the methods applied in our analyses are generally consistent with the
unitarity of the $S$-matrix.
More specifically, the Interfering Amplitude (IA) method applied
in the reanalysis of the $\pi\pi$ scattering phase shift satisfies
elastic unitarity, and
the VMW method applied in the analyses of $\pi\pi$ production processes
is consistent with the final-state
interaction theorem.
In this talk I first collect the values of the observed properties
of the $\sigma$ meson, its mass and width.
Then I show that these observed properties of the $\sigma$ are consistent with those
expected in the linear $\sigma$ model. Furthermore, I shall
show, by investigating the background phase shift
$\delta_{BG}$ theoretically in the framework of the linear $\sigma$ model,
that the experimental behaviors of $\delta_{BG}$ in both the $I=0$ and $I=2$
systems are quantitatively describable.
\section{Observed property of $\sigma$ meson}
{\it Reanalysis of $\pi\pi$ scattering phase shift}\ \ \
In the IA method\footnote{
A detailed explanation of the IA method is given in I.
} the total phase shift
$\delta_0^0$ is represented by the sum of the component phase shifts
$\delta_\sigma ,\delta_{f_0(980)}$ and $\delta_{BG}$. In the actual
analysis the $\delta_{BG}$
was taken phenomenologically to be of hard-core type:
\begin{eqnarray}
\delta_0^0 &=& \delta_\sigma +\delta_{f_0(980)}+\delta_{BG};\ \ \
\delta_{BG} = -p_1r_c.
\label{eq2}
\end{eqnarray}
We analyzed
the data of the ``standard phase shift''
$\delta^0_0$ between the $\pi\pi$ and K$\overline{\rm K}$ thresholds
and also the data on its upper and lower bounds
reported so far (see ref.\cite{ref2} for details).
The results of the analyses are given in Fig. 2 of I, and
we concluded that the $\sigma$ meson exists with the property,
\begin{eqnarray}
m_\sigma &=& 585\pm 20(535\sim 675){\rm MeV},\ \
\Gamma_\sigma =385\pm 70{\rm MeV}.
\label{eqexp}
\end{eqnarray}
As explained in I, the fit with $r_c=0$ corresponds to the
conventional analyses without the repulsive $\delta_{BG}$.
In the present analysis with $\delta_{BG}$
a greatly improved $\chi^2$ value of 23.6 is obtained
for the standard $\delta_0^0$, compared with
that of the conventional analysis, 163.4. A similar
$\chi^2$ improvement
is also obtained for the upper and lower phase shifts.\footnote{
For the upper phase shift the $\chi^2$ in the present (conventional)
analysis is $\chi^2/N_{d.o.f.}=32.3/(26-4)\ (135.1/(26-3))$.
For the lower phase shift
$\chi^2/N_{d.o.f.}=42.1/(17-4)\ (111.7/(17-3))$.
} This fact strongly
suggests the existence of a light $\sigma$ meson phenomenologically.
\footnote{
Concerning this $\chi^2$ improvement Klempt gave a seemingly strange
criticism\cite{ref3} in his summary talk of Hadron '97: ``
However, the $\chi^2$ gain comes from a better description
of a small anomaly in the mass region around the $\rho (770)$ mass.
$\cdots$ A small feedthrough from P-wave to S-wave can very well
mimic this effect.
''
Actually the $\chi^2$ contribution from this ``anomalous'' region,
650 MeV through 810 MeV, in the present (conventional) fit is
5.3 (62.6), and the $\chi^2$ contribution from the outside region is
23.6-5.3=18.3 (163.4-62.6=100.8).
Thus, the $\chi^2$ improvement comes from a better description
of the global phase motion below 1 GeV, showing that the criticism is not correct.
Furthermore, we tried to fit the data without the relevant
data points. The obtained values of the parameters
are almost equal to those obtained with the data of the full region, while
a similar improvement of $\chi^2$ is obtained.
}\\
{\em Analyses of $\pi\pi$ production processes}\ \ \
We also analyzed the data of $\pi\pi$ production processes,
the {\em pp} central collision experiment by
GAMS and the
$J/\psi\rightarrow\omega\pi\pi$ decay
reported by the DM2 collaboration,
and showed possible evidence for the
existence of the $\sigma$ particle.
In the analyses we applied the
VMW method,
where the production amplitude is represented by a
sum of the $\sigma$, $f_0$ and $f_2$ Breit-Wigner amplitudes
with relative phase factors. For detailed analyses, see ref. \cite{ref2}.
The obtained mass and width of $\sigma$
are
\begin{eqnarray}
m_\sigma &=& 580\pm 30\ {\rm MeV},\ \Gamma_\sigma =785\pm 40\ {\rm MeV}
\ \ \ {\rm for}\ pp\ {\rm central\ collision}\nonumber\\
m_\sigma &=& 480\pm 5\ {\rm MeV},\ \Gamma_\sigma =325\pm 10\ {\rm MeV}
\ \ \ {\rm for}\ J/\psi\rightarrow\omega\pi\pi\ {\rm decay}.
\end{eqnarray}
\section{Property of $\sigma$-meson and chiral symmetry}
Now the
property of $\sigma$ meson obtained above is checked from the viewpoint
of chiral symmetry.
In the $SU(2)$ linear $\sigma$ model (L$\sigma$M)
the coupling constant $g_{\sigma\pi\pi}$ of the $\sigma\pi\pi$ interaction
is related to $\lambda$ of the $\phi^4$ interaction
and $m_\sigma$ as
\begin{eqnarray}
g_{\sigma\pi\pi} &=& f_\pi\lambda =(m_\sigma^2-m_\pi^2)/(2f_\pi ).
\label{eqrel}
\end{eqnarray}
Thus, the $\Gamma_\sigma$ is related to $m_\sigma$ through the
following equation:
\begin{eqnarray}
\Gamma_{\sigma\pi\pi}^{\rm theor} &=&
\frac{3g_{\sigma\pi\pi}^2}{4\pi m_\sigma^2}p_1
\sim \frac{3m_\sigma^3}{32\pi f_\pi^2}.
\label{eqmw}
\end{eqnarray}
Substituting the experimental $m_\sigma$=535$\sim$675 MeV
given in Eq. (\ref{eqexp})
and $f_\pi $=93 MeV into Eq.(\ref{eqmw}),
we can predict
$\Gamma_\sigma^{\rm theor}=400\sim 900\ {\rm MeV},$
which is consistent with the $\Gamma_\sigma^{\rm exp}$ given in
Eq.(\ref{eqexp}).
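For orientation (a simple numerical check, taking $m_\pi =140$ MeV), the central value $m_\sigma =585$ MeV gives $g_{\sigma\pi\pi}=(m_\sigma^2-m_\pi^2)/(2f_\pi )\simeq 1.7$ GeV and $p_1\simeq 0.26$ GeV, and hence $\Gamma_{\sigma\pi\pi}^{\rm theor}\simeq 540$ MeV, well inside the quoted range.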
Thus the observed $\sigma$ meson may be identified with the $\sigma$
meson described in the L$\sigma$M.
\section{Repulsive background phase shift}
{\em Experimental phase shift and repulsive core in the I=2 system}\ \ \
In our phase shift analyses of the $I=0$ $\pi\pi$ system
the $\delta_{\rm BG}$
of hard-core type, introduced phenomenologically,
played an essential role.
In the $I=2$ $\pi\pi$ system there is
no known or expected resonance, and accordingly it is expected that
a phase shift of repulsive-core type will appear directly.
As shown in Fig.~1, the experimental
data\cite{ref2} on the $I=2$ $\pi\pi$-scattering $S$-wave phase shift
$\delta_0^2$, from the threshold to $m_{\pi\pi}\approx 1400$ MeV,
are indeed negative, and are also fitted well by the
hard-core formula
$\delta_0^2=-r_c^{(2)}|{\bf p}_1|$ with a core radius of
$r_c^{(2)}=0.87\ {\rm GeV}^{-1}$ (0.17 fm).\\
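(For orientation, at $m_{\pi\pi}=1$ GeV this formula gives $|{\bf p}_1|\simeq 0.48$ GeV and hence $\delta_0^2\simeq -0.42$ rad $\simeq -24^\circ$, of the same size as the data shown in Fig.~1.)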
\begin{figure}[t]
\epsfysize=4.0 cm
\centerline{\epsffile{fig2.eps}}
\caption{\it $I$=2 $\pi\pi$-scattering phase shift. Fitting by
hard core formula is also shown.}
\label{fig:i2}
\end{figure}
{\em Origin of the $\delta_{BG}$ }\ \ \
The
origin of this $\delta_{\rm BG}$ seems to be closely connected to
the $\lambda\phi^4$ interaction in the L$\sigma$M\cite{ref2}:
it represents a contact, zero-range interaction,
is strongly repulsive in both the $I=0$ and $I=2$ systems,
and thus has plausible properties
as the origin of the repulsive core.
The $\pi\pi$-scattering $A(s,t,u)$-amplitude by $SU(2)$ L$\sigma$M
is given by
$
A(s,t,u) = (-2g_{\sigma\pi\pi})^2/(m_\sigma^2-s)-2\lambda .
$
Because of the relation (\ref{eqrel}),
the dominant part of the amplitude due to
virtual $\sigma$ production (1st term)
is
cancelled by that due to the
repulsive $\lambda\phi^4$ interaction (2nd term)
at the $O(p^0)$ level,
and
$A(s,t,u)$ can be rewritten in the following form:
\begin{eqnarray}
A(s,t,u) &=& \frac{1}{f_\pi^2}
\frac{(m_\sigma^2-m_\pi^2)^2}{m_\sigma^2-s}
-\frac{m_\sigma^2-m_\pi^2}{f_\pi^2}
=\frac{s-m_\pi^2}{f_\pi^2}+\frac{1}{f_\pi^2}
\frac{(m_\pi^2-s)^2}{m_\sigma^2-s},
\label{eq:Acancel}
\end{eqnarray}
where
on the right-hand side only
the $O(p^2)$ Tomozawa-Weinberg amplitude
and the $O(p^4)$ (and higher-order) correction
term remain.
As a result the derivative coupling property
of the $\pi$ meson as a Nambu-Goldstone boson is
preserved.
In this sense the $\lambda\phi^4$ interaction can be
called a ``compensating" interaction for the
$\sigma$-effect.
Thus the strong cancellation
between the positive $\delta_\sigma$
and the negative $\delta_{\rm BG}$,
which leads to the $\sigma$ in our analysis
as shown in Fig. 2 of I, is reducible
to the relation
Eq.(\ref{eq:Acancel}) in the L$\sigma$M.
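Explicitly, at $O(p^0)$ the $\sigma$-pole term gives $(2g_{\sigma\pi\pi})^2/m_\sigma^2=(m_\sigma^2-m_\pi^2)^2/(f_\pi^2m_\sigma^2)\simeq m_\sigma^2/f_\pi^2$, which is just cancelled by the contact term $2\lambda =(m_\sigma^2-m_\pi^2)/f_\pi^2\simeq m_\sigma^2/f_\pi^2$, so that only the derivative-coupling pieces in Eq.(\ref{eq:Acancel}) survive.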
In the following we shall make a theoretical estimate of
$\delta_{BG}$ in the framework of L$\sigma$M.
The scattering ${\cal T}$ matrix
consists of a resonance part ${\cal T}_R$ and of a background
part ${\cal T}_{BG}$. The ${\cal T}_{BG}$ corresponds to the
contact $\phi^4$ interaction and the exchange of the relevant
resonances. The main term of ${\cal T}_{BG}$
comes from the $\lambda\phi^4$
interaction. This ${\cal T}_{BG}$ has a weak $s$-dependence
in comparison with that of ${\cal T}_R$.
The explicit form of ${\cal T}_{BG}$
for $I=0$ and $I=2$ $S$-wave channels are given by
\begin{eqnarray}
{\cal T}_{BG;S}^{I} &=& -6\lambda a
+2\left( \frac{(-2g_{\sigma\pi\pi})^2}{4p_1^2}
ln\left( \frac{4p_1^2}{m_\sigma^2}+1\right)-2\lambda\right)\nonumber\\
& + & b\cdot 2g_\rho^2\left(-1+\frac{2s-4m_\pi^2+m_\rho^2}{4p_1^2}
ln\left(\frac{4p_1^2}{m_\rho^2}+1\right)-\frac{s+2p_1^2}{m_\rho^2}
\frac{\Lambda^2+4m_\pi^2}{\Lambda^2+s}\right) ,
\label{eqB}
\end{eqnarray}
where $(a,b)=(1,2)$ for $I=0$ and $(a,b)=(0,-1)$ for $I=2$.
Here we introduced the $\rho$-meson
contribution, which is supposed to be described by the
Schwinger-Weinberg Lagrangian,\footnote{The derivative $\phi^4$
interaction appearing in the Schwinger-Weinberg Lagrangian
makes Eq.(\ref{eq9}) divergent.
Thus we introduce a form factor with a cutoff
$\Lambda\simeq 1$ GeV. }
$
{\cal L}_\rho =g_\rho\rho_\mu\cdot (\partial_\mu\phi\times\phi)
-g_\rho^2/2m_\rho^2\ (\phi\times\partial_\mu\phi )^2.
$
In order to obtain $\delta_{BG}$ theoretically, we
unitarize ${\cal T}$ by using the N/D method,
${\cal T}_{BG}(s)^I=e^{i\delta_{BG}}
{\rm sin}\delta_{BG}/\rho_1=N_{BG}^I/D_{BG}^I$. We take the Born term
Eq.(\ref{eqB}) as the $N$-function. In obtaining the $D$-function one
subtraction is necessary,
\begin{eqnarray}
N_{BG}^I &=& {\cal T}_{BG;S}^I,\ \ D_{BG}^I=1+b_I
+\frac{s}{\pi}\int_{4m_\pi^2}^\infty\frac{ds'}{s'(s'-s-i\epsilon)}
\rho_1(s')N_{BG}^I(s').
\label{eq9}
\end{eqnarray}
We adopt the subtraction condition\footnote{
By using this condition the resulting $\delta_{BG}$ takes the same
value as the one obtained by simple ${\cal K}$ matrix unitarization
at the resonance energy $\sqrt{s}=m_\sigma$.}
$
Re\ D_{BG}^I(m_\sigma^2) = 1.
$
The $m_\sigma$ is fixed at the best-fit value, 0.585 GeV,
and the values of $m_\rho$ and $g_\rho$ are
determined from the experimental
properties of the $\rho$ meson. The obtained $\delta_{BG}^{I=0,2}$
is shown in Fig. 2.
\begin{figure}
\epsfysize=5.0 cm
\centerline{\epsffile{core.eps}}
\caption{\it The repulsive $\delta_{BG}$ in $I=0$ and $I=2$
$\pi\pi$ scattering predicted by the L$\sigma$M and
by the L$\sigma$M including $\rho$-meson
contribution. The results are compared with the phenomenological
$\delta_{BG}$ of hard core type.}
\label{figcore}
\end{figure}
The $s$-dependence of the theoretical $\delta_{BG}$ from the
L$\sigma$M including the $\rho$-meson contribution is almost consistent with
the phenomenological $\delta_{BG}$ of hard-core type.
Concerning our analysis of $\delta^{I=0}$,
Pennington made the criticism\cite{ref4}
that the form of $\delta_{BG}$ is completely arbitrary.
However, as shown in Fig. 2, our phenomenological
$\delta_{BG}$, Eq.(\ref{eq2}),
is almost consistent with the theoretical
prediction of the L$\sigma$M.
Thus, the criticism is not valid.
\section{Concluding remark}
We have briefly summarized the properties of the light $\sigma$ meson
``observed'' in a series of our recent works.
The obtained values of the mass and width
of the $\sigma$ satisfy the relation predicted by the
L$\sigma$M. This fact suggests that the linear representation of chiral
symmetry is realized in nature.\\
In our phase shift analysis there occurs a strong cancellation between
$\delta_\sigma$, due to the $\sigma$ resonance,
and $\delta_{BG}$, which is guaranteed by
chiral symmetry.
One reason the $\sigma$ was overlooked in conventional phase shift analyses
is the neglect of this cancellation mechanism.\\
The behavior of the phenomenological $\delta_{BG}$ is shown to be
quantitatively describable in the framework of the L$\sigma$M including
the $\rho$-meson contribution.
Finally I give a comment: by an analysis of the
$I=1/2$ $S$-wave $K\pi$ scattering phase shift with a similar method,
the existence of a $\kappa (900)$ particle with a
broad ($\sim 500$ MeV) width is suggested.
The scalars below 1 GeV, $\sigma (600)$,
$\kappa (900)$, $a_0(980)$ and $f_0(980)$,
possibly form a single scalar nonet.\cite{ref5}
The octet members of this nonet satisfy the Gell-Mann--Okubo mass formula.
Moreover, this $\sigma$ nonet is shown to satisfy the mass and width
relations of the $SU(3)$ L$\sigma$M, forming, together with the pseudoscalar $\pi$ nonet,
a linear representation of chiral symmetry.
\section{Introduction}
\subsection{Background and Motivation}
\IEEEPARstart{R}{ecommender} systems have been widely advocated as a way of providing suitable recommendation solutions to customers in various fields such as e-commerce, advertising, and social media sites. One of the most important and popular techniques in recommender systems is collaborative filtering (CF), which computes similarities between users and items from historical interactions (e.g., clicks and purchases) to suggest relevant items to users by assuming that users who have behaved similarly with each other exhibit similar preferences for items \cite{hsieh2017collaborative,ebesu2018collaborative,han2019adaptive}. Moreover, following up the great success of network embedding (NE), also known as network representation learning, considerable research attention has been paid to NE-based recommender systems that attempt to model high-order connectivity information from user--item interactions viewed as a bipartite graph \cite{gori2007itemrank, yang2018hop, gao2018bine,chen2019collaborative}.\footnote{In the following, we use the terms ``graph" and ``network" interchangeably.} In recent years, {\em graph neural networks (GNNs)} \cite{kipf2017semi,hamilton2017inductive, xu2018powerful,velivckovic2017graph ,wu2019survey} have emerged as a powerful neural architecture to learn vector representations of nodes and graphs. By virtue of great prowess in solving various downstream machine learning problems, GNN-based recommender systems \cite{ying2018graph,zheng2018spectral,wang2019neural,wang2020multi,wu2020joint,chen2020revisiting,he2020lightgcn} have also been developed for improving the recommendation accuracy.
GNN models are basically trained by aggregating the information of direct neighbor nodes via message passing under the {\em homophily} (or {\em assortativity}) assumption that a target node and its neighbors are similar to each other \cite{wu2019survey}. Due to the neighborhood aggregation mechanism, existing literature posits that high homophily of the underlying graph is a necessity for GNNs to achieve good performance especially on node classification \cite{zhu2020beyond,pei2020geom,zhu2020graph}. On the other hand, in recommender systems, while users' feedback on many online websites (e.g., {\em likes/dislikes} on YouTube and high/low ratings on Amazon) can be positive and negative, existing GNN-based recommender systems overlook the existence of negative feedback \color{black}(e.g., {\em dislikes} on YouTube and low ratings on Amazon) \color{black}due to their ease of modeling. Precisely, most GNN-based approaches utilize only positive feedback by removing the negative interactions in order to exhibit strong homophily in neighbors (see Fig. \ref{fig:intro}).\footnote{A graph is usually referred to as homophilous (or assortative) if connected nodes are much more likely to have the same class label than if edges are independent of labels. In our study, a {\em homophilous bipartite} graph assumes that user--item relations are significantly more likely to be positive.} It is worthwhile to note that, despite the remarkable performance boost by existing GNN-based recommender systems, the low ratings can be still informative. This is because such information expresses signs of what users {\em dislike}. In other words, full exploitation of two types of feedback in GNNs may have the potential to further improve the recommendation performance, which remains a new design challenge.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{./figures/introduction.eps}
\caption{New challenges in GNN-based recommender systems.}
\label{fig:intro}
\end{figure}
\subsection{Main Contributions}
\color{black}
CF in recommender systems has often been studied by regarding low ratings as {\em implicit} negative feedback (see \cite{paudel2019loss, fiftyshades2016, fewer17,Tran2021}). However, such interpretations may not be straightforwardly extended to GNN-based recommender systems due to their inherent architecture including neighborhood aggregation. In this context, \color{black}even with the wide applications of GNNs to recommender systems \cite{ying2018graph,zheng2018spectral,wang2019neural,wang2020multi,wu2020joint,chen2020revisiting,he2020lightgcn}, a question that arises naturally is: ``How can we make use of \color{black}implicit \color{black} negative feedback (i.e., low rating scores) for representing users' preferences via GNNs?". In this paper, to answer this question, we introduce \textsf{SiReN}, a {\em \underline{Si}gn-aware} \underline{Re}commender system based on GN\underline{N} models. To this end, \textcolor{black}{as illustrated in Fig.~\ref{fig:intro}}, we design and optimize our new GNN-aided learning model while distinguishing users' positive and negative feedback.
Our proposed \textsf{SiReN} method includes three key components: 1) signed graph construction and partitioning, 2) model architecture design, and 3) model optimization. First, to overcome the primary problem of existing GNN-based recommender systems \cite{wang2019neural, chen2020revisiting, he2020lightgcn} that fail to learn both positive and negative relations between users and items, we start by constructing a {\em signed} bipartite graph, which is split into two edge-disjoint graphs with {\em positive} and {\em negative} edges each. This signed graph construction and partitioning process enables us to more distinctly identify users' preferences for observed items. Second, we show how to design our model architecture for discovering embeddings of all users and items in the signed bipartite graph. Although several GNN models were introduced in \cite{derr2018signed, huang2019signed} for signed unipartite graphs, simply applying them to recommender systems, corresponding to user--item bipartite graphs, may not be desirable. This is because such GNN models were built upon the assumption of balance theory \cite{heider1946attitudes}, which implies that ``{\em the friend of my friend is my friend}" and ``{\em the enemy of my enemy is my friend}". However, the balance theory no longer holds in recommender systems since users' preferences cannot be dichotomous. In other words, users are likely to have dissimilar preferences even if they dislike the same item(s). This motivates us to design our own model architecture for each partitioned graph, rather than employing existing GNN approaches based on the balance theory. Concretely, \textsf{SiReN} contains three learning models. For the graph with positive edges, we employ a GNN model suited for recommender systems. For the graph with negative edges, we adopt a multi-layer perceptron (MLP) due to the fact that negative edges can weaken the homophily and thus message passing to such dissimilar nodes would not be feasible. To obtain the final embeddings, we then use an attention model that learns the importance of two embeddings generated from GNN and MLP models. Third, as our objective function in the process of optimization, we present a {\em sign-aware} Bayesian personalized ranking (BPR) loss function, which is built upon the original BPR loss \cite{BPRMF09} widely used in recommender systems. More specifically, unlike the original BPR loss, our objective function takes into account two types of observed items, including both positive and negative relations between users and items, as well as unobserved ones.
To validate the superiority of our \textsf{SiReN} method, we comprehensively perform empirical evaluations using various real-world datasets. Experimental results show that our method consistently outperforms state-of-the-art GNN methods for top-$K$ recommendation in terms of several recommendation accuracy metrics. Such a gain is possible owing to the use of low rating information along with judicious model design and optimization. We also empirically validate the effectiveness of MLP in comparison with other model architectures used for the graph with negative edges. Additionally, our experimental results demonstrate the robustness of our \textsf{SiReN} method to more challenging interaction sparsity levels.
It is worth noting that our method is GNN-{\em model-agnostic} and thus any competitive GNN architectures can be appropriately chosen for potentially better performance. The main technical contributions of this paper are four-fold and summarized as follows:
\begin{itemize}
\item We propose \textsf{SiReN}, a novel GNN-aided recommender system that makes full use of the user--item interaction information after signed graph construction and partitioning;
\item We design our model architecture using three learning models in the sense of utilizing sign awareness so as to generate embeddings of users and items;
\item We establish the sign-aware BPR loss as our objective function;
\item We validate \textsf{SiReN} through extensive experiments using three real-world datasets while showing the superiority of our method over state-of-the-art NE-aided methods under diverse conditions.
\end{itemize}
\subsection{Organization and Notation}
The remainder of this paper is organized as follows. In Section II, we present prior studies related to our work. In Section III, we explain the methodology of our study, including basic settings and an overview of our \textsf{SiReN} method. Section IV describes technical details of the proposed method. Comprehensive experimental results are shown in Section V. Finally, we provide a summary and concluding remarks in Section VI.
\section{Related work}
The method that we propose in this study is related to \color{black}three \color{black} broader research lines, namely standard CF approaches\color{black}, \color{black} NE-based recommendation approaches\color{black}, and NE approaches for signed graphs.\color{black}
\subsection{Standard CF Approaches}
As one of the most popular techniques, CF in recommender systems aims to capture the relationships between users and items from historical interactions (e.g., ratings and purchases) by discovering learnable vector representations for users and items based on a rating matrix \cite{koren2008factorization,ebesu2018collaborative}. Matrix factorization (MF) \cite{koren2009matrix} decomposes the user--item interaction matrix into a product of two low-dimensional matrices and models their similarities with the dot product of two matrices. ANLF \cite{luo2015nonnegative} was designed based on a non-negative MF (NMF) model \cite{lin2007projected} using the alternating direction method. In DMF \cite{xue2017deep} and NCF \cite{he2017neural}, MF models with neural network architectures were proposed to project users and items into a latent structured space. Moreover, SLIM \cite{ning2011slim} proposed a sparse linear model by directly reconstructing the low-density user--item interaction matrix. To capture complex relationships between items in sparse datasets, FISM \cite{kabbur2013fism} and NPE \cite{nguyen2018npe} showed how to learn item--item similarities as the product of two latent factor matrices. While MF-based models such as \cite{xue2017deep, he2017neural} rely on the dot product of user and item latent vectors as a similarity measure, CML \cite{hsieh2017collaborative} showed how to learn a joint metric space to encode not only users' preferences but also user--user and item--item similarities in the Euclidean distance. In NAIS \cite{he2018nais}, the attention mechanism was incorporated to obtain the importance of each item from user--item interactions for preference prediction. DLMF \cite{deng2016deep} designed a trust-aware recommender system that leverages deep learning to determine the initialization in MF by synthesizing the users' interests and their trusted friends' interests in addition to the user--item interactions.
\subsection{NE-Based Recommendation Approaches}
Recently, it has been comprehensively studied how to develop recommender systems using NE. While standard CF approaches are capable of only modeling the first-order connectivity between users and items, NE-based approaches aim to exploit {\em high-order proximity} among users and items through the user--item bipartite graph structure \cite{wang2019neural, he2020lightgcn}. BiNE \cite{gao2018bine} and CSE \cite{chen2019collaborative} were developed based on random walks in order to infer similar users (or items) in the underlying bipartite graph (i.e., user--item interactions).
Moreover, encouraged by the success of GNNs in solving many graph mining tasks \cite{wu2019survey}, GNN-based recommender systems have more recently emerged as promising techniques \cite{van2018graph, ying2018graph, zheng2018spectral, chen2020revisiting, zhang2019inductive, wang2019neural, he2020lightgcn, wang2020multi, wu2020joint }. Existing GNN models for top-$K$ recommendation were generally developed using implicit feedback, which treats observed user--item interactions as positive relations. GC-MC \cite{van2018graph} presented a graph autoencoder including graph convolutions for rating matrix completion. PinSage \cite{ying2018graph} showed a scalable GNN framework developed in production at Pinterest by improving upon GraphSAGE \cite{hamilton2017inductive} along with several aggregation functions. Spectral CF \cite{zheng2018spectral} presented a spectral convolution operation to learn the rich information of connectivity between users and items in the spectral domain. As state-of-the-art NE-based recommender systems using only user--item interactions as input, NGCF \cite{wang2019neural}, LR-GCCF \cite{chen2020revisiting}, and LightGCN \cite{he2020lightgcn} were developed based on GCN \cite{kipf2017semi}, the first attempt to apply convolutions to graph domains, while performing layer aggregation to solve the oversmoothing problem. IGMC \cite{zhang2019inductive} proposed an inductive GNN-based matrix completion model that learns local subgraphs in the underlying user--item interaction matrix. To explore users' latent purchasing motivations (e.g., cost effectiveness and appearance), MCCF \cite{wang2020multi} presented a CF approach based on GNNs for generating multiple user/item embeddings and then combining them via attention mechanisms. AGCN \cite{wu2020joint} proposed a GCN approach for joint item recommendation and attribute inference in an attributed user--item bipartite graph with incomplete attribute values.
\color{black}In addition to various GNN architectures in \cite{van2018graph, ying2018graph, zheng2018spectral, chen2020revisiting, zhang2019inductive, wang2019neural, he2020lightgcn, wang2020multi, wu2020joint } for recommender systems, to \color{black} alleviate the sparsity of user--item interactions, each user's local neighbors' preferences in social networks were also utilized in \cite{wu2019neural, fan2019graph, wu2020diffnet++} for better user embedding modeling, thus enabling us to improve the recommendation accuracy. \color{black}An approach for formulating feature-aware recommendation from the review information via a signed hypergraph convolutional network was also presented in \cite{chen2020neural_HGCN}.
\subsection{NE Approaches for Signed Networks}
The design of NE in {\em signed} networks has garnered considerable attention while primarily solving the problem of link sign prediction \cite{wang2017signed, kim2018side,derr2018signed,huang2019signed,li2020learning,liu2021signed}, which is to predict unobserved signs of existing edges \cite{leskovec2010predicting}. Specifically, NE methods in \cite{yuan2017sne}, \cite{kim2018side} for signed networks were developed by formulating their own likelihood functions based on generated random walk sequences. Moreover, GNN-aided NE approaches for signed networks were presented in \cite{derr2018signed,huang2019signed,li2020learning} while being guided by the structural balance theory \cite{heider1946attitudes}. SGCN \cite{derr2018signed} generalized GCN \cite{kipf2017semi} to signed networks by generating both balanced and unbalanced embeddings. SiGAT \cite{huang2019signed} was built upon GAT \cite{velivckovic2017graph} along with motif-based GNNs. SNEA \cite{li2020learning} presented a universal way of leveraging the graph attention mechanism to aggregate more important information through both positive and negative edges. By adopting the k-group theory beyond the balance theory, GS-GNN \cite{liu2021signed} designed a dual GNN architecture to learn global and local representations.
\color{black}
\subsection{Discussion}
Despite the aforementioned contributions, leveraging {\em explicit feedback} data (i.e., user--item interactions with ratings) in NE-based approaches has been largely underexplored in the literature. To learn vector representations for users and items based on the NE, it is common to utilize only positive user--item interactions as observed data when explicit feedback data are given (see \cite{chen2019collaborative,wu2019neural,wu2020diffnet++} and references therein). However, negative interactions with low ratings can be still informative since such information shows signs of what users {\em dislike}. It remains open how to make use of low ratings in designing NE-aided recommender systems.
\color{black}
On the other hand, as aforementioned, several GNN models \cite{derr2018signed,huang2019signed,li2020learning,liu2021signed} have been actively developed to learn vector representations of nodes in {\em signed} (unipartite) graphs with both positive and negative edges based on the balance theory or its variant. However, it is worth noting that these models focus primarily on such networks that exhibit friend/foe (or trust/distrust) relationships. This implies that they do not always hold in recommender systems due to the fact that users' preferences cannot be dichotomous, i.e., a behavior of users disliking similar items does not always imply the same degree of user preferences about items. Therefore, adopting such GNN methods designed for signed graphs would not be desirable to capture different levels of user preferences in recommender systems. \color{black}
\begin{figure*}[t]
\centering
\includegraphics[width=1.8\columnwidth]{./figures/Schematic_overview_v6.eps}
\caption{The schematic overview of our \textsf{SiReN} method.}
\label{fig:overview}
\end{figure*}
\section{Methodology}
In this section, we describe our network model with basic settings. Next, we explain an overview of the proposed \textsf{SiReN} method as a solution to the problem of making full use of low ratings in GNN models.
\subsection{Network Model and Basic Settings}
In recommender systems, the basic input is the historical user--item interactions with ratings, which is represented as a weighted bipartite graph. Let us denote the underlying bipartite graph as $G =( \mathcal{U}, \mathcal{V}, \mathcal{E})$, where $\mathcal{U}$ and $\mathcal{V}$ are the set of $M$ users and the set of $N$ items, respectively, and $\mathcal{E}$ is the set of weighted edges between $\mathcal{U}$ and $\mathcal{V}$. A weighted edge $(u,v,w_{uv})\in \mathcal{E}$ can be interpreted as the rating $w_{uv}$ with which the user $u\in \mathcal{U}$ has given to the item $v\in \mathcal{V}$. We assume $G$ to be a static network without repeated edges, where ratings (i.e., user preferences) do not change over time.
In our study, we aim at designing a new GNN-aided recommender system for improving the accuracy of top-$K$ recommendation by making full use of the user--item rating information in $G$, including {\em low ratings} that have not been explored by conventional GNN-based recommender systems \cite{he2020lightgcn,chen2020revisiting}, without any side information.
\subsection{Overview of \textsf{SiReN}}
In this subsection, we explain our methodology along with the overview of the proposed \textsf{SiReN} method. We recall that our study is motivated by the fact that recent recommender systems built upon GNN models such as \cite{he2020lightgcn,chen2020revisiting} take advantage of only high rating scores as observed data by deleting some edges, corresponding to low ratings (e.g., the rating scores of 1 and 2 in the 1--5 rating scale), in the set $\mathcal{E}$ over the weighted bipartite graph $G$ \cite{chen2019collaborative, wu2019neural,wu2020diffnet++}. This is because such a removal of low ratings from $G$ enables us to aggregate the positively connected neighbors via message passing in GNNs. However, the set of negative interactions indicates what users {\em dislike} and thus is still quite informative. In other words, it remains open how to fully exploit the rating information in building GNN-based recommender systems as recently developed models fail to capture the effect of low rating scores.
To tackle this challenge, we present \textsf{SiReN}, a new {\em sign-aware} recommender system using GNNs, which is basically composed of the following three core components (refer to Fig. \ref{fig:overview}):
\begin{itemize}
\item signed bipartite graph construction and partitioning
\item embedding generation for each partitioned graph
\item optimization via a sign-aware BPR loss.
\end{itemize}
First, we describe how to construct a signed bipartite graph $G^s$ that enables us to more distinctly identify users' preferences based on all the user--item interactions. More specifically, we construct $G^s=(\mathcal{U}, \mathcal{V}, \mathcal{E}^s)$ with a parameter $w_o>0$, representing a criterion for dividing high and low ratings, where
\begin{align}
\label{align_signededges}
\mathcal{E}^s= \big\{ (u,v,w_{uv}^{s})\big|w_{uv}^{s}=w_{uv}-w_{o},~(u,v,w_{uv})\in \mathcal{E} \big\}.
\end{align}
Here, $w_o$ can be determined according to characteristics (e.g., the rating scale and the popularity distribution of items) of a given dataset; from an algorithm design perspective, we assume that a user $u$ likes an item $v$ if $w_{uv}^s>0$ and he/she dislikes $v$ otherwise, where $w_{uv}^s=w_{uv}-w_o$ corresponds to the edge weight in the signed bipartite graph $G^s$. \color{black} Note that this signed graph construction is not necessary if we deal with originally signed bipartite graphs (e.g., {\em like/dislike} rating systems on YouTube).
\color{black}Basically, GNN models are trained by aggregating the information of neighbor nodes under the {\em homophily} assumption \cite{wu2019survey}. However, due to the fact that the signed graph $G^s$ includes negative edges (i.e., interactions with low ratings), aggregating the information of such negatively connected neighbors may not be desirable. As illustrated in Fig. \ref{fig:overview}, to more delicately capture each relation of positively and negatively connected neighbors, we then {\em partition} the signed bipartite graph $G^s$ into two edge-disjoint graphs $G^p=(\mathcal{U}, \mathcal{V}, \mathcal{E}^p)$ and $G^n=(\mathcal{U}, \mathcal{V}, \mathcal{E}^n)$, consisting of the set of positive edges and negative edges, respectively. Here, it follows that $\mathcal{E}^s=\mathcal{E}^p\cup \mathcal{E}^n$, where
\begin{subequations}
\begin{align}
\label{graph_split}
\mathcal{E}^p&= \big\{ (u,v,w_{uv}^{s})\big|~w_{uv}^s>0,~(u, v, w_{uv}^{s})\in \mathcal{E}^s \big\}\\
\mathcal{E}^n&= \big\{ (u,v,w_{uv}^{s})\big|~w_{uv}^s<0,~(u, v, w_{uv}^{s})\in \mathcal{E}^s \big\}.
\end{align}
\end{subequations}
The purpose of this graph partitioning is to make the graphs $G^p$ and $G^n$, respectively, assortative and disassortative so that each partitioned graph is used as input to the most appropriate learning model.
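As a minimal illustrative sketch of this step (the function and variable names below are ours, not those of the released implementation), the construction of $G^s$ and its partition into $G^p$ and $G^n$ amount to a single pass over the rating list:
\begin{verbatim}
def build_signed_partition(ratings, w_o):
    # ratings: iterable of (user u, item v, rating w_uv)
    # Returns the edge sets E^p and E^n of the partitioned signed graph.
    E_pos, E_neg = [], []
    for u, v, w in ratings:
        w_s = w - w_o                  # signed weight w^s_uv = w_uv - w_o
        if w_s > 0:
            E_pos.append((u, v, w_s))  # user u likes item v
        elif w_s < 0:
            E_neg.append((u, v, w_s))  # user u dislikes item v
        # w_s == 0 does not occur if w_o is chosen between rating levels
    return E_pos, E_neg

# Example with a 1--5 rating scale and w_o = 2.5
E_p, E_n = build_signed_partition([(0, 10, 5), (0, 11, 1), (1, 10, 2)], w_o=2.5)
\end{verbatim}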
Second, we describe how to generate embeddings of $M$ users and $N$ items along with three learning models in our \textsf{SiReN} method. Using the graph $G^p$ having positive edges, we adopt a GNN model suited for recommender systems (e.g., \cite{he2020lightgcn,chen2020revisiting}) to calculate embedding vectors ${\bf Z}^p\in \mathbb{R}^{(M+N)\times d}$ for the nodes in $\mathcal{U}\cup \mathcal{V}$:
\begin{align}
\label{align_GNN_pos}
{\bf Z}^p=\text{GNN}_{\theta_1}(G^p),
\end{align}
where $d$ is the dimension of the embedding space and $\theta_1$ is the learned model parameters of GNN. On the other hand, we adopt an MLP for the graph $G^n$ to calculate embedding vectors ${\bf Z}^n\in \mathbb{R}^{(M+N)\times d}$ for the same nodes as those in (\ref{align_GNN_pos}):
\begin{align}
\label{align_MLP_neg}
{\bf Z}^n=\text{MLP}_{\theta_2}(G^n),
\end{align}
where $\theta_2$ is the learned model parameters of MLP. We would like to state the following two remarks to explain the model selection according to types of graphs.
\begin{rmk}
\label{rmk_1}
It is worth noting that existing GNN models, built upon message passing architectures, work on the basic assumption of homophily \cite{zhu2020graph}. We recall that users who dislike similar items may not have similar preferences (i.e., the tendency in rating items) with each other. Negative edges in $G^n$ can undermine the effect of homophily and thus message passing to such dissimilar nodes would not be feasible. For these reasons, adopting GNNs in the graph with negative edges may not be desirable, based on the fact that many GNNs fail to generalize to disassortative graphs, i.e., graphs with low levels of homophily \cite{pei2020geom,zhu2020beyond,jin2021node}.
\end{rmk}
\begin{rmk}
\label{rmk_2}
Additionally, note that the MLP architecture itself does not exploit the topological information. However, it does not imply that the connectivity information in the graph $G^n$ is not used at all. In the optimization step, we update the embedding vectors in (\ref{align_GNN_pos}) and (\ref{align_MLP_neg}) by fully taking advantage of the set $\mathcal{E}^n$ in $G^n$ as well as the set $\mathcal{E}^p$ in $G^p$, which will be specified later.
\end{rmk}
Next, let us mention another training model in \textsf{SiReN}, the so-called {\em attention} model. To get the importance of two embeddings ${\bf Z}^p$ and ${\bf Z}^n$, we use the attention mechanism \cite{vaswani2017attention} that learns the corresponding importance $({\bf \alpha}_p,{\bf \alpha}_n)$ as follows:
\begin{align}
\label{align_attn}
({\bf \alpha}^p,{\bf \alpha}^n)=\text{ATTENTION}_{\theta_3} ({\bf Z}^p,{\bf Z}^n),
\end{align}
which results in the final embeddings:
\begin{align}
\label{attn_final_emb}
{\bf Z}=({\bf \alpha}^p{\bf 1}_{\text{attn}}) \odot {\bf Z}^p + ({\bf \alpha}^n{\bf 1}_{\text{attn}}) \odot {\bf Z}^n,
\end{align}
where ${\bf \alpha}^p,{\bf \alpha}^n\in \mathbb{R}^{(M+N)\times 1}$; ${\bf 1}_{\text{attn}}\in\mathbb{R}^{1\times d}$ is the all-ones vector; $\odot$ denotes the Hadamard (element-wise) product; $\theta_3$ is the learned model parameters of ATTENTION; and each row of ${\bf Z}\in\mathbb{R}^{(M+N)\times d}$ indicates the embedding vector of each node in $\mathcal{U}\cup\mathcal{V}$.
Third, we turn to the optimization of model parameters $\{\theta_1,\theta_2,\theta_3\}$, which updates the embeddings ${\bf Z}$ accordingly. In our study, we adopt the {\em BPR loss} \cite{BPRMF09}, which has been widely used in recommender systems to comprehensively learn what users prefer from the historical user--item interactions. Nevertheless, simply applying the existing BPR loss to our setting does not precisely capture the relations of negatively connected neighbors; thus, we establish a {\em sign-aware} BPR loss, which is a new BPR-based loss that takes into account both positive and negative relations in the signed bipartite graph $G^s$, while accommodating the sign of edges in $G^s$ as an indicator of what users like and dislike.
In the next section, we shall describe implementation details of the proposed \textsf{SiReN} method.
\begin{algorithm}[t]
\caption{: \textsf{SiReN}}
\label{algorithm_main}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\REQUIRE $G$, $w_o$, $\Theta \triangleq \{\theta_1, \theta_2, \theta_3\}$, $N_{\text{neg}}$ (number of negative samples), $\lambda_{\text{reg}}$ (regularization coefficient)
\ENSURE $\mathbf{Z}$
\STATE \textbf{Initialization: }$\Theta \leftarrow $ random initialization
\STATE /* Graph partitioning */
\STATE Construction of $G^s$ from $G$ along with $w_o$
\STATE Splitting $G^s$ into $G^p$ and $G^n$
\WHILE{not converged}
\STATE Create \color{black}a training set \color{black} $\mathcal{D}_S$ from $G^s$ with $N_{\text{neg}}$ negative samples
\FOR {each \color{black}mini-batch \color{black} $\mathcal{D}_S' \subset \mathcal{D}_S$}
\STATE /* Generation of embeddings */
\STATE ${\bf Z}^p\leftarrow \text{GNN}_{\theta_1}(G^p)$
\STATE ${\bf Z}^n\leftarrow \text{MLP}_{\theta_2}(G^n)$
\STATE $({\bf \alpha}^p,{\bf \alpha}^n)\leftarrow \text{ATTENTION}_{\theta_3} ({\bf Z}^p,{\bf Z}^n)$
\STATE ${\bf Z}\leftarrow ({\bf \alpha}^p{\bf 1}_{\text{attn}})\odot {\bf Z}^p + ({\bf \alpha}^n{\bf 1}_{\text{attn}})\odot{\bf Z}^n$
\STATE /* Optimization */
\STATE $\mathcal{L}_0 \leftarrow \text{\textit{sign-aware} BPR loss}$
\STATE $\mathcal{L} \leftarrow {\mathcal{L}}_0+\lambda_{\text{reg}}\norm{\bf \Theta}^2$
\STATE Update $\Theta$ by taking one step of gradient descent
\ENDFOR
\ENDWHILE
\RETURN ${\bf Z}$
\end{algorithmic}
\end{algorithm}
\section{Proposed \textsf{SiReN} Method}
In this section, we elaborate on our \textsf{SiReN} method, designed for top-$K$ recommendation. \textsf{SiReN} has the following three key components: 1) constructing a signed bipartite graph $G^s$ for more precisely representing users' preferences, which is split into two graphs $G^p$ and $G^n$ with {\em positive} and {\em negative} edges each, 2) generating two embeddings ${\bf Z}^p$ and ${\bf Z}^n$ for the partitioned graphs with positive and negative edges via a GNN model and an MLP, respectively, and then using the attention model to learn the importance of ${\bf Z}^p$ and ${\bf Z}^n$, and 3) establishing a sign-aware BPR loss function in the process of optimization. The overall procedure of the proposed \textsf{SiReN} method is summarized in Algorithm \ref{algorithm_main}.
As one of main contributions to the design of our method, we start by constructing the signed bipartite graph $G^s$ and then partitioning $G^s$ into two edge-disjoint graphs $G^p$ and $G^n$ for exploiting the relation of positively and negatively connected neighbors.
In the following subsections, we explain how we discover embeddings of all nodes (i.e., users and items) and optimize our model via the sign-aware BPR loss during the training phase.
\subsection{Network Architecture}
In this subsection, we describe how to generate the embeddings of $M$ users and $N$ items in our \textsf{SiReN} method. As stated in Section III-B, \textsf{SiReN} basically contains three learning models, i.e., $\text{GNN}_{\theta_1}$, $\text{MLP}_{\theta_2}$, and $\text{ATTENTION}_{\theta_3}$. To generally indicate either users or items interchangeably, we denote a node in the graphs $G^p$ and $G^n$, which can be in either $\mathcal{U}$ or $\mathcal{V}$, by $x$.
First, we describe the GNN model, which is designed to be {\em model-agnostic}. To this end, we show a general form of the message passing mechanism \cite{hamilton2017inductive, gilmer2017neural, xu2018powerful} in which we iteratively update the representation of each node by aggregating representations of its neighbors using two functions, namely $\text{AGGREGATE}_x^{\ell}$ and $\text{UPDATE}_x^{\ell}$, along with model parameters of $\text{GNN}_{\theta_1}$ in (\ref{align_GNN_pos}). Formally, at the $\ell$-th layer of a GNN, $\text{AGGREGATE}^{\ell}_{x}$ aggregates (latent) feature information from the local neighborhood of node $x$ in $G^p$ at the $(\ell-1)$-th GNN layer as follows:
\begin{align}
\label{aggregate_}
{\bf m}_{x}^{\ell} \leftarrow \text{AGGREGATE}^{\ell}_{x}\big(\big\{ {\bf h}_{y}^{\ell-1}\big|y \in \mathcal{N}_{x} \cup \{x\}\big\}\big),
\end{align}
where ${\bf h}_{x}^{\ell-1}\in\mathbb{R}^{1\times d_{\text{GNN}}^{\ell-1}}$ denotes the $d_\text{GNN}^{\ell-1}$-dimensional latent representation vector of node $x$ at the $(\ell-1)$-th GNN layer, $\mathcal{N}_{x}$ is the set of neighbor nodes of $x$ in $G^p$, and ${\bf m}_{x}^{\ell}\in\mathbb{R}^{1\times d_\text{GNN}^{\ell-1}}$ is the aggregated information for node $x$ at the $\ell$-th GNN layer. Since $x$ belongs to a node in either $\mathcal{U}$ or $\mathcal{V}$, $\text{AGGREGATE}_{x}^{\ell}$ aggregates feature information of connected items if $x$ is a user node, and vice versa. In the update step, we use $\text{UPDATE}_{x}^{\ell}$ to obtain the $\ell$-th embedding vector ${\bf h}_{x}^{\ell}$ from the aggregated information ${\bf m}_{x}^{\ell}$ as follows:
\begin{align}
\label{update_}
{\bf h}_{x}^{\ell} \leftarrow \text{UPDATE}_{x}^{\ell}\big(x,{\bf m}_{x}^{\ell}\big).
\end{align}
We note that, for each node $x$, we randomly initialize the learnable 0-th embeddings (i.e., ${\bf h}^0_{x}$) due to the fact that we have no side information for users and items in our setting as in \cite{he2020lightgcn,chen2020revisiting}. Additionally, we present another function in our GNN model, namely $\text{LAYER-AGG}_x^{L_{\text{GNN}}}$, which performs {\em layer aggregation} similarly as in \cite{xu2018representation}. This operation is motivated by the argument that {\em oversmoothing} tends to occur in GNN-based recommender systems if the last GNN layer's embedding vectors are used as the final embedding ${\bf Z}^p$ \cite{li2018deeper}. To alleviate the oversmoothing problem, we calculate the embedding vector ${\bf z}_x^p\in \mathbb{R}^{1\times d}$ of node $x$ via layer aggregation as follows:
\begin{align}
\label{layer_agg}
{\bf z}^p_x \leftarrow {\text{LAYER-AGG}}_x^{L_{\text{GNN}}} \left( {\big\{{\bf h}_x^{\ell}\big\}}_{\ell=0}^{\ell=L_{\text{GNN}}}\right),
\end{align}
which results in the embeddings ${\bf Z}^p$ for the graph $G^p$ where $L_\text{GNN}$ is the number of GNN layers. The GNN architecture of our \textsf{SiReN} method including the above three functions $\text{AGGREGATE}_x^{\ell}$, $\text{UPDATE}_x^{\ell}$, and $\text{LAYER-AGG}_x^{L_{\text{GNN}}}$ is illustrated in Fig. \ref{fig:GNN_}.
\begin{figure}[t]
\centering
\includegraphics[width=0.7\columnwidth]{./figures/GNN_v3.eps}
\caption{The GNN architecture in our \textsf{SiReN} method, composed of three functions in (\ref{aggregate_})--(\ref{layer_agg}). }
\label{fig:GNN_}
\end{figure}
\begin{rmk}
\label{GNN_reco_ex}
Now, let us state how the above three functions in (\ref{aggregate_})--(\ref{layer_agg}) can be specified by several types of GNN-based recommender systems. As one state-of-the-art method, \color{black} NGCF \cite{wang2019neural} can be implemented by using
\begin{subequations}
\begin{align}
\text{AGGREGATE}_{1,x}^{\ell} = &\text{ }W_{\text{GNN};1}^{\ell} {\bf h}_{x}^{\ell -1}\\
\text{AGGREGATE}_{2,x}^{\ell}=&\sum_{y \in \mathcal{N}_{x}}\frac{1}{\sqrt{|\mathcal{N}_x||\mathcal{N}_y|}}\Big( W_{\text{GNN};1}^{\ell}{\bf h}_y^{\ell -1}\\
\nonumber
&+W_{\text{GNN};2}^{\ell}({\bf h}_x^{\ell -1}\odot {\bf h}_y^{\ell -1}) \Big) \\
\text{UPDATE}_x^{\ell} = &\text{ }\text{LeakyReLU}({\bf m}_{1,x}^{\ell}+{\bf m}_{2,x}^{\ell}) \\
{\text{LAYER-AGG}}_x^{L_{\text{GNN}}} = &\text{ }{\bf h}_x^0 {\big| \big|} {\bf h}_x^1 {\big| \big|} \cdot \cdot \cdot {\big| \big|} {\bf h}_x^{L_{\text{GNN}}},
\end{align}
\end{subequations}
where $W_{\text{GNN};1}^{\ell},~ W_{\text{GNN};2}^{\ell}\in\mathbb{R}^{d_\text{GNN}^{\ell-1} \times d_\text{GNN}^{\ell}}$ are learnable weight transformation matrices, ${\big| \big|}$ is the concatenation operator, $L_{\text{GNN}}$ is the number of GNN layers, and $\text{LeakyReLU}$ is an activation function with a parameter $\alpha >0$:
\begin{align}
\text{LeakyReLU}(x)=\begin{cases}
x \text{ if $x>0$} \\
\alpha x \text{ otherwise}.
\end{cases}
\end{align}
\color{black}
In addition, as another state-of-the-art method for recommendation, LR-GCCF \cite{chen2020revisiting} can be implemented by using
\begin{subequations}
\begin{align}
\text{AGGREGATE}_x^{\ell}&=\sum_{y\in \mathcal{N}_x\cup \{x\}}\dfrac{1}{\sqrt{|\mathcal{N}_x|+1}\sqrt{|\mathcal{N}_y|+1}}{\bf h}_y^{\ell-1} \\
\text{UPDATE}_x^{\ell}&={\bf m}^\ell_x\cdot W^{\ell}_{\text{GNN}} \\
{\text{LAYER-AGG}}_x^{L_{\text{GNN}}} &= {\bf h}_x^0 {\big| \big|} {\bf h}_x^1 {\big| \big|} \cdot \cdot \cdot {\big| \big|} {\bf h}_x^{L_{\text{GNN}}},
\end{align}
\end{subequations}
where $W_{\text{GNN}}^{\ell}\in\mathbb{R}^{d_\text{GNN}^{\ell-1} \times d_\text{GNN}^{\ell}}$ is a learnable weight transformation matrix. \color{black}As the most recently developed state-of-the-art method, \color{black}LightGCN \cite{he2020lightgcn} can be specified according to the following function setting:
\begin{subequations}
\begin{align}
\text{AGGREGATE}_x^{\ell}&=\sum_{y\in \mathcal{N}_x}\dfrac{1}{\sqrt{|\mathcal{N}_x|}\sqrt{|\mathcal{N}_y|}}{\bf h}^{\ell-1}_y \\
\text{UPDATE}_x^{\ell}&={\bf m}^\ell_x\\
\text{LAYER-AGG}_x^{L_{\text{GNN}}} &=\dfrac{1}{L_{\text{GNN}}+1}\sum_{\ell=0}^{L_{\text{GNN}}}{{\bf h}_x^{\ell}}.
\end{align}
\end{subequations}
\end{rmk}
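To make the generic functions above concrete, the following sketch (illustrative code under our own naming, not the authors' implementation) performs LightGCN-style propagation on $G^p$, i.e., symmetrically normalized neighborhood averaging followed by mean layer aggregation; in the full model ${\bf Z}_0$ is a learnable parameter trained end to end.
\begin{verbatim}
import numpy as np

def lightgcn_embeddings(A, Z0, num_layers):
    # A : (M+N) x (M+N) adjacency matrix of the positive bipartite graph G^p
    # Z0: 0-th layer embeddings (learnable in the full model)
    deg = A.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(deg), 0.0)
    A_hat = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]   # D^-1/2 A D^-1/2
    layers, H = [Z0], Z0
    for _ in range(num_layers):
        H = A_hat @ H        # AGGREGATE + UPDATE: no transform, no nonlinearity
        layers.append(H)
    return np.mean(layers, axis=0)   # LAYER-AGG: average of the L+1 layers
\end{verbatim}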
Second, we pay our attention to the MLP architecture designed for the graph $G^n$ having negative edges. We calculate the embeddings ${\bf Z}^n$ using the MLP as follows:
\begin{subequations}
\begin{align}
{\bf Z}^n_{\ell} &= \text{ReLU}\big( {\bf Z}_{\ell-1}^nW^{\ell}_{\text{MLP}}+{\bf 1}_{\text{MLP}}{\bf b}^{\ell}_{\text{MLP}} \big) \\
{\bf Z}^n &= {\bf Z}^n_{L_{\text{MLP}}},
\end{align}
\end{subequations}
where $\ell=1,2,\cdots,L_{\text{MLP}}$; $L_{\text{MLP}}$ is the number of MLP layers; $\text{ReLU}(x)=\max{(0,x)}$; $W_{\text{MLP}}^{\ell}\in\mathbb{R}^{d_\text{MLP}^{\ell-1} \times d_\text{MLP}^{\ell}}$ is a learnable weight transformation matrix; ${\bf b}_{\text{MLP}}^{\ell}\in \mathbb{R}^{1 \times d_\text{MLP}^{\ell}}$ is a bias vector; $d_\text{MLP}^{\ell}$ is the dimension of the latent representation vector ${\bf Z}_{\ell}^n$ at the $\ell$-th MLP layer; ${\bf 1}_{\text{MLP}}\in \mathbb{R}^{(M+N)\times 1}$ is the all-ones vector; and ${\bf Z}_0^n\in \mathbb{R}^{(M+N)\times d_\text{MLP}^{0}}$ is the learnable 0-th layer's embedding matrix for all nodes in the set $\mathcal{U}\cup \mathcal{V}$, which is randomly initialized. That is, ${\big\{W_{\text{MLP}}^{\ell}\big\}}_{\ell=1}^{\ell=L_{\text{MLP}}}$, ${\big\{{\bf b}_{\text{MLP}}^{\ell}\big\}}_{\ell=1}^{\ell=L_{\text{MLP}}}$, and ${\bf Z}_0^n$ correspond to the model parameters of $\text{MLP}_{\theta_2}$ in (\ref{align_MLP_neg}).
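A PyTorch-style sketch of this component (illustrative only; layer sizes and the initialization scale are placeholders) could read:
\begin{verbatim}
import torch
import torch.nn as nn

class NegativeMLP(nn.Module):
    # Produces Z^n from a learnable 0-th layer embedding table Z^n_0;
    # the topology of G^n enters only through the loss function later on.
    def __init__(self, num_nodes, dims):   # dims = [d_0, d_1, ..., d_L]
        super().__init__()
        self.z0 = nn.Parameter(0.01 * torch.randn(num_nodes, dims[0]))
        self.layers = nn.ModuleList(
            [nn.Linear(dims[l - 1], dims[l]) for l in range(1, len(dims))])

    def forward(self):
        h = self.z0
        for layer in self.layers:
            h = torch.relu(layer(h))
        return h                            # Z^n of shape (M+N, d_L)
\end{verbatim}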
Third, we turn to describing the attention model. The importance $({\bf \alpha}^p, {\bf \alpha}^n)$ in (\ref{align_attn}) represents the attention values of two embeddings ${\bf Z}^p$ and ${\bf Z}^n$ for all nodes in $\mathcal{U}\cup \mathcal{V}$. Let us focus on node $x\in \mathcal{U}\cup \mathcal{V}$ whose embedding vectors calculated for the graphs $G^p$ and $G^n$ are given by ${\bf z}^p_{x},{\bf z}^n_{x}\in \mathbb{R}^{1\times d}$, respectively. Let $w_x^p$ and $w_x^n$ denote attention values of the two embeddings ${\bf z}^p_x$ and ${\bf z}^n_x$, respectively, for node $x$. Then, our attention model learns a weight transformation matrix $W_{attn}\in \mathbb{R}^{d' \times d}$, an attention vector ${\bf q}\in \mathbb{R}^{d' \times 1}$, and a bias vector ${\bf b}\in \mathbb{R}^{d' \times 1}$ with a dimension $d'$, corresponding to the model parameters of $\text{ATTENTION}_{\theta_3}$ in (\ref{align_attn}), as follows:
\begin{subequations}
\begin{align}
\label{weight_attn1}
w_x^p&={\bf q}^T tanh(W_{attn} {{\bf z}^p_x}^T + {\bf b}) \\
\label{weight_attn2}
w_x^n&={\bf q}^T tanh(W_{attn} {{\bf z}^n_x}^T + {\bf b}),
\end{align}
\end{subequations}
where $tanh(x)=\frac{\exp(x)-\exp(-x)}{\exp(x)+\exp(-x)}$ is the hyperbolic tangent activation function. By normalizing the attention values in (\ref{weight_attn1}) and (\ref{weight_attn2}) according to the softmax function, we have
\begin{subequations}
\begin{align}
\alpha_{x}^p&=\frac{\text{exp}({w_x^p})}{\text{exp}(w_x^p)+\text{exp}(w_x^n)}\\
\alpha_{x}^n&=\frac{\text{exp}({w_x^n})}{\text{exp}(w_x^p)+\text{exp}(w_x^n)},
\end{align}
\end{subequations}
where $\alpha_{x}^p$ and $\alpha^n_{x}$ are the resulting importance of two embeddings ${\bf z}_x^p$ and ${\bf z}_x^n$, respectively, which thus yields the final embedding ${\bf z}_x = \alpha_{x}^p{\bf z}_x^p + \alpha_{x}^n{\bf z}_x^n$ for each node $x$ in (\ref{attn_final_emb}).
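The attention fusion itself is only a few lines; a minimal PyTorch-style sketch (our own illustrative code, not the released implementation) is given below.
\begin{verbatim}
import torch
import torch.nn as nn

class SignAttention(nn.Module):
    # w = q^T tanh(W z^T + b) for each view, softmax over the two views,
    # then the convex combination producing the final embeddings Z.
    def __init__(self, d, d_prime):
        super().__init__()
        self.proj = nn.Linear(d, d_prime)          # plays the role of W_attn and b
        self.q = nn.Parameter(torch.randn(d_prime))

    def forward(self, z_p, z_n):                   # each of shape (M+N, d)
        w_p = torch.tanh(self.proj(z_p)) @ self.q  # attention value per node
        w_n = torch.tanh(self.proj(z_n)) @ self.q
        alpha = torch.softmax(torch.stack([w_p, w_n], dim=1), dim=1)
        return alpha[:, :1] * z_p + alpha[:, 1:] * z_n
\end{verbatim}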
\subsection{Optimization}
In this subsection, we explain the optimization of \textsf{SiReN} method in the training phase via our proposed loss function. We start by randomly initializing the learnable model parameters $\Theta=\{\theta_1, \theta_2, \theta_3\}$ in (\ref{align_GNN_pos})--(\ref{align_attn}) (refer to line 1 in Algorithm \ref{algorithm_main}). To train our learning models (i.e., the GNN, MLP, and attention models), we use \color{black}a training set \color{black} $\mathcal{D}_S$ consisting of multiple samples of a triplet $(u,i,j)$, where $(u,i,w_{ui}^s)\in\mathcal{E}^s$ and $j\in\mathcal{V}$ is a negative sample (i.e., an unobserved item), which is not in the set of direct neighbors of user $u$ in the signed bipartite graph $G^s$ (refer to line 6). \color{black}For sampling $j\in\mathcal{V}$, we use the degree-based noise distribution $P(j)\propto d_j^{3/4}$ \cite{mikolovskipgram,tang2015line}, where $d_j$ is the degree of item $j$. \color{black}More specifically, when $N_{\text{neg}}$ denotes the number of negative samples, we first acquire the set of edges, $\mathcal{E}^s$, in $G^s$ and then sample $N_{\text{neg}}$ negative samples $\{j_n\}_{n=1}^{N_{\text{neg}}}$ for each $(u,i,w_{ui}^s)\in\mathcal{E}^s$ \color{black} using the distribution $P(j_n)$ \color{black}in order to create new samples of a node triplet $(u,i,j_n)$ for all $n\in\{1,\cdots,N_{\text{neg}}\}$, which yields \color{black}the training set \color{black} $\mathcal{D}_S$ as follows:
\begin{align}
\label{data_batch}
\mathcal{D}_S=\big\{(u,i,j_n)\big|(u,i,w_{ui}^s)\in\mathcal{E}^s,j_n\notin \mathcal{N}_u,n\in\{1,\cdots,N_{\text{neg}}\} \big\},
\end{align}
where $\mathcal{N}_u$ is the set of neighbor nodes of user $u$ in $G^s$. We further subsample \color{black}mini-batches \color{black}$\mathcal{D}_S'\subset\mathcal{D}_S$ to efficiently train our learning models (refer to line 7). The sampled triplets are fed into the loss function in the training loop along with the calculated embeddings $\bf Z$ for all nodes in $\mathcal{U}\cup \mathcal{V}$ (refer to lines 9--12).
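A simple sketch of this sampling step (illustrative; rejection sampling is just one way to enforce $j\notin\mathcal{N}_u$, and the signed weight is carried along because the loss introduced below needs it) is:
\begin{verbatim}
import numpy as np

def sample_triplets(signed_edges, item_degrees, neighbors, n_neg, rng):
    # signed_edges: list of (u, i, w^s_ui); item_degrees: numpy array with d_j;
    # neighbors[u]: set of items observed for user u in G^s.
    p = item_degrees ** 0.75          # P(j) proportional to d_j^{3/4}
    p = p / p.sum()
    triplets = []
    for u, i, w_s in signed_edges:
        drawn = 0
        while drawn < n_neg:
            j = rng.choice(len(item_degrees), p=p)
            if j not in neighbors[u]: # keep only unobserved items
                triplets.append((u, i, j, w_s))
                drawn += 1
    return triplets

# rng = np.random.default_rng(0)
\end{verbatim}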
Now, we present our {\em sign-aware} BPR loss function, which is built upon the original BPR loss \cite{BPRMF09} widely used in recommender systems (see \cite{zheng2018spectral, wu2019neural, wu2020joint, wu2020diffnet++, wang2019neural, chen2020revisiting, he2020lightgcn} and references therein). To this end, we define a user $u$'s {\em predicted preference} for an item $i$ as the inner product of user and item final embeddings:
\begin{align}
\hat{r}_{ui} \triangleq {\bf z}_u{\bf z}_i^T,
\end{align}
which is used for establishing our loss function.
However, simply employing the original BPR loss is not desirable in our setting: it is a pairwise loss based on the relative order between observed and unobserved items, which basically assumes that an observed (highly rated) item should receive a higher prediction value $\hat{r}_{ui}$ than an unobserved one. In contrast, our objective function should account for two types of observed items, reflecting both positive and negative relations between users and items, as well as unobserved ones. \color{black}The proposed sign-aware BPR loss function is designed in such a way that the predicted preference for an observed item is higher than that for its unobserved counterparts, while also reflecting the induced difference between high and low ratings (i.e., strong and weak user--item bonds).
To this end, more formally, we define a ternary relation $>_u(i,j,w)\triangleq \{(i,j,w)|\hat{r}_{ui}>\hat{r}_{uj}~\text{if}~w>0~\text{and}~\hat{r}_{ui}>{1 \over c}\hat{r}_{uj}~\text{otherwise}\} \subset\mathcal{V}\times \mathcal{V} \times (\mathbb{R}\setminus \{0\})$, where $c>1$ is a hyperparameter used for adjusting two levels of user preferences for observed items during training. In the relation, a higher value of $c$ indicates a larger gap between the degrees of user preferences for observed items. \color{black}Based on the relation $>_u$, we aim at minimizing the following loss function $\mathcal{L}$ for a given \color{black}mini\color{black}-batch $\mathcal{D}_S'$ with the $L_2$ regularization:
\color{black}
\begin{align}
\label{lossfunction}
\mathcal{L}=\mathcal{L}_0+\lambda_{\text{reg}}\norm{\Theta}^2,
\end{align}
where $\lambda_{\text{reg}}$ is a hyperparameter that controls the $L_2$ regularization strength; $\Theta$ represents the model parameters; and $\mathcal{L}_0$ is the sign-aware BPR loss term realized by
\begin{align}
\label{sign_BPRloss}
\mathcal{L}_0=-\sum_{(u,i,j)\in\mathcal{D}_S'}{\log{p(>_u(i,j,w^s_{ui})\big| \Theta)}}.
\end{align}
\color{black}Here, to capture the above relation $>_u(i,j,w)$ for each $(u,i,j)\in\mathcal{D}_S'$, we model the likelihood in (\ref{sign_BPRloss}) as
\begin{align}
\label{likelihood}
p(>_u(i,j,w^s_{ui})\big| \Theta)=\begin{cases}
\sigma \big(\hat{r}_{ui}-\hat{r}_{uj}\big) \text{ if $w^s_{ui}>0$} \\
\sigma \big(c\cdot\hat{r}_{ui}-\hat{r}_{uj}\big) \text{ otherwise,}
\end{cases}
\end{align}
where $\sigma(x)=\frac{1}{1+\exp(-x)}$ is the sigmoid function and $c>1$. Our model is trained through the loss function in (\ref{lossfunction}) towards the objective of (i) $\hat{r}_{ui}>\hat{r}_{uj}$ for pairs $(u,i)$ with high ratings and (ii) $\hat{r}_{ui}>\frac{1}{c}\hat{r}_{uj}$ for pairs $(u,i)$ with low ratings. In consequence, it is possible to more elaborately learn representations of nodes depending on both positive and negative relations.
\color{black}
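For clarity, a hedged PyTorch sketch of the sign-aware BPR loss in (\ref{lossfunction})--(\ref{likelihood}) is given below. The tensor layout of the mini-batch is an assumption made for this sketch, and, unlike (\ref{lossfunction}) where the $L_2$ penalty acts on the model parameters $\Theta$, the sketch penalizes the final embeddings for brevity.
\begin{verbatim}
# Sketch of the sign-aware BPR loss (illustration only).
import torch
import torch.nn.functional as F

def sign_aware_bpr_loss(z, batch, c=2.0, lam_reg=0.1):
    """z: (num_nodes, d) final embeddings; batch: LongTensor of shape (B, 4)
    holding node indices (user, pos_item, neg_item, sign),
    with sign = +1 if w^s_ui > 0 and -1 otherwise."""
    u, i, j, sign = batch[:, 0], batch[:, 1], batch[:, 2], batch[:, 3]
    r_ui = (z[u] * z[i]).sum(dim=1)   # predicted preference \hat{r}_{ui}
    r_uj = (z[u] * z[j]).sum(dim=1)   # predicted preference \hat{r}_{uj}
    scale = torch.where(sign > 0, torch.ones_like(r_ui), torch.full_like(r_ui, c))
    # log sigma(r_ui - r_uj) if w^s_ui > 0, log sigma(c * r_ui - r_uj) otherwise
    log_likelihood = F.logsigmoid(scale * r_ui - r_uj)
    # note: the paper regularizes Theta; we regularize z here for brevity
    return -log_likelihood.sum() + lam_reg * (z ** 2).sum()
\end{verbatim}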
\section{Experimental Evaluation}
In this section, we first describe real-world datasets used in the evaluation. We also present six competing methods, including two baseline MF methods and four state-of-the-art GNN-based methods, for comparison. After describing performance metrics and our experimental settings, we comprehensively evaluate the performance of our \textsf{SiReN} method and the six benchmark methods. The source code for \textsf{SiReN} can be accessed via https://github.com/woni-seo/SiReN-reco.
\subsection{Datasets}
We conduct experiments on three real-world datasets, which are widely adopted for evaluating recommender systems. For all experiments, we use user--item interactions with ratings in each dataset as the input. The main statistics of each dataset, including the number of users, the number of items, the number of ratings, the density, and the rating scale, are summarized in Table \ref{data_stat}. In the following, we explain important characteristics of the datasets briefly.
\textbf{MovieLens-1M (ML-1M)\footnote{{https://grouplens.org/datasets/movielens/1m/.}}}. This is the most popular dataset in movie recommender systems, which consists of 5-star ratings (i.e., integer values from 1 to 5) of movies given by users \cite{harper2015movielens}.
\textbf{Amazon-Book\footnote{{https://jmcauley.ucsd.edu/data/amazon/index.html.}}}. Among the Amazon-Review dataset containing product reviews and metadata, we select the Amazon-Book dataset, which consists of 5-star ratings \cite{he2016ups}. We remove users/items that have less than 20 interactions similarly as in \cite{he2016vbpr}.
\textbf{Yelp\footnote{{https://www.yelp.com/dataset.}}}. This dataset is a local business review data consisting of 5-star ratings. As in the Amazon-Book dataset, we remove users/items that have less than 20 interactions.
\begin{table}[t]
\centering
\begin{tabular}{lccc}
\toprule
\textbf{Dataset} & \textbf{ML-1M}& \textbf{Amazon-Book} & \textbf{Yelp}\\
\midrule
\# of users ($M$) & 6,040 & 35,736 & 41,772 \\
\# of items ($N$) & 3,952 & 38,121 & 30,037 \\
\# of ratings & 1,000,209 & 1,960,674&2,116,215\\
Density (\%) &4.19&0.14&0.16 \\
Rating scale &1--5&1--5&1--5\\
\bottomrule
\end{tabular}
\caption{Statistics of three real-world datasets.}
\label{data_stat}
\end{table}
\subsection{Benchmark Methods}
In this subsection, we present two baseline MF methods and \textcolor{black}{four} state-of-the-art GNN methods for comparison.
\textbf{BPRMF} \cite{BPRMF09}. This baseline method is an MF model optimized by the BPR loss, which assumes that each user prefers the items with which he/she has interacted to items with no interaction.
\textbf{NeuMF} \cite{he2017neural}. As another popular baseline, NeuMF is a neural CF model, which generalizes standard MF and uses multiple hidden layers to generate user and item embeddings.
\textbf{NGCF} \cite{wang2019neural}. This state-of-the-art GNN-based approach follows basic operations inherited from the standard GCN \cite{kipf2017semi} to explore the high-order connectivity information. More specifically, NGCF stacks embedding layers and concatenates embeddings obtained in all layers to constitute the final embeddings.
\textbf{LR-GCCF} \cite{chen2020revisiting}. LR-GCCF is a state-of-the-art GCN-based CF model. As two main characteristics, this model uses only linear transformation without nonlinear activation and concatenates all layers' embeddings to alleviate oversmoothing at deeper layers \cite{li2018deeper}.
\textbf{LightGCN} \cite{he2020lightgcn}. LightGCN simplifies the design of GCN \cite{kipf2017semi} to make the model more appropriate for recommendation by including only the most essential component, namely neighborhood aggregation, without nonlinear activation and weight transformation operations. Similarly as in LR-GCCF, this approach uses the weighted sum of embeddings learned at all layers as the final embedding.
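As a reference for the reader, a simplified dense-matrix sketch of LightGCN-style propagation is shown below; it illustrates the idea (no nonlinear activation or weight transformation, followed by a layer-wise combination of embeddings) rather than the official implementation.
\begin{verbatim}
# Simplified LightGCN-style propagation (illustration only).
import torch

def lightgcn_propagate(E0, A_norm, num_layers=3):
    """E0: (num_nodes, d) initial embeddings;
    A_norm: symmetrically normalized adjacency D^{-1/2} A D^{-1/2}."""
    E = E0
    layer_embs = [E0]
    for _ in range(num_layers):
        E = A_norm @ E                # pure neighborhood aggregation
        layer_embs.append(E)
    # uniform layer combination (weights 1/(num_layers+1))
    return torch.stack(layer_embs, dim=0).mean(dim=0)
\end{verbatim}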
\color{black}
\textbf{SGCN} \cite{derr2018signed}. SGCN is a generalized version of GCN \cite{kipf2017semi} and harnesses the balance theory while capturing both positive and negative edges coherently in the aggregation process. Although SGCN was primarily aimed at conducting the link sign prediction task in signed unipartite graphs, it can also be applied to signed bipartite graphs.
\color{black}
\begin{figure*}
\centering
\begin{tikzpicture}
\begin{groupplot}[
ybar=0pt,
enlarge x limits=0.25,
legend columns=-1,
legend pos=south east,
footnotesize,
symbolic x coords={$P@10$,$R@10$,$nDCG@10$},
xtick=data,
ymin=0.1, ymax=0.4, width=0.35\textwidth,
nodes near coords align={vertical},
group style={
group size=3 by 1,
xlabels at=edge bottom,
ylabels at=edge left,
xticklabels at=edge bottom}]
\nextgroupplot[bar width=0.3cm, xlabel={(a) ML-1M},legend style = { column sep = 5pt, legend columns = -1, legend to name = grouplegend},
legend to name=grouplegend,
legend entries={
\textsf{SiReN}-LightGCN, \textsf{SiReN}-LRGCCF, \textsf{SiReN}-NGCF
},
]
\addplot[style={black,fill=bblue,mark=none}, error bars/.cd,
y dir=both,y explicit]
coordinates {($P@10$, 0.2774)+-(0.0037,0.0037) ($R@10$,0.2016)+-(0.0033,0.0033) ($nDCG@10$,0.3423)+-(0.0047,0.0047)};
\addlegendentry{\textsf{SiReN}-LightGCN}
\addplot[style={black,fill=ggreen,mark=none}, error bars/.cd,
y dir=both,y explicit]
coordinates {($P@10$, 0.2642)+-(0.0027,0.0027) ($R@10$,0.1944)+-(0.0017,0.0017) ($nDCG@10$,0.3244)+-(0.0037,0.0037)};
\addlegendentry{\textsf{SiReN}-LRGCCF}
\addplot[style={black,fill=ppurple,mark=none}, error bars/.cd,
y dir=both,y explicit]
coordinates {($P@10$, 0.261)+-(0.0081,0.0081) ($R@10$,0.191)+-(0.0038,0.0038) ($nDCG@10$,0.3212)+-(0.0093,0.0093)};
\addlegendentry{\textsf{SiReN}-NGCF}
\nextgroupplot[bar width=0.3cm, xlabel={(b) Amazon-Book},ymax=0.09,ymin=0.01]
\addplot[style={black,fill=bblue,mark=none}, error bars/.cd,
y dir=both,y explicit]
coordinates {($P@10$, 0.0565)+-(0.002,0.002) ($R@10$,0.0775)+-(0.0008,0.0008) ($nDCG@10$,0.0823)+-(0.0013,0.0013)};
\addplot[style={black,fill=ggreen,mark=none}, error bars/.cd,
y dir=both,y explicit]
coordinates {($P@10$, 0.0538)+-(0.001,0.001) ($R@10$,0.0742)+-(0.0022,0.0022) ($nDCG@10$,0.0771)+-(0.0022,0.0022)};
\addplot[style={black,fill=ppurple,mark=none}, error bars/.cd,
y dir=both,y explicit]
coordinates {($P@10$, 0.0519)+-(0.0018,0.0018) ($R@10$,0.072)+-(0.001,0.001) ($nDCG@10$,0.0753)+-(0.0017,0.0017)};
\nextgroupplot[bar width=0.3cm, xlabel={(c) Yelp},ymax=0.06,ymin=0.01]
\addplot[style={black,fill=bblue,mark=none}, error bars/.cd,
y dir=both,y explicit]
coordinates {($P@10$, 0.0352)+-(0.0007,0.0007) ($R@10$,0.0554)+-(0.0011,0.0011) ($nDCG@10$,0.0539)+-(0.0007,0.0007)};
\addplot[style={black,fill=ggreen,mark=none}, error bars/.cd,
y dir=both,y explicit]
coordinates {($P@10$, 0.034)+-(0.0011,0.0011) ($R@10$,0.0534)+-(0.0003,0.0003) ($nDCG@10$,0.0513)+-(0.0007,0.0007)};
\addplot[style={black,fill=ppurple,mark=none}, error bars/.cd,
y dir=both,y explicit]
coordinates {($P@10$, 0.0333)+-(0.0009,0.0009) ($R@10$,0.0519)+-(0.0004,0.0004) ($nDCG@10$,0.05)+-(0.0009,0.0009)};
\end{groupplot}
\node at ($(group c2r1) + (0, 3cm) $) {\ref{grouplegend}};
\end{tikzpicture}
\caption{\color{black}Performance comparison according to different GNN models \color{black} in our \textsf{SiReN} method for each dataset.}
\label{Fig_RQ1}
\end{figure*}
\subsection{Performance Metrics}
To validate the performance of the proposed \textsf{SiReN} method and the six benchmark methods, we adopt three metrics, which are widely used to evaluate the accuracy of top-$K$ recommendation. Let $Te_u^+$ and $R_u (K)$ denote the ground truth set (i.e., the set of items rated by user $u$ in the test set) and the top-$K$ recommendation list for user $u$, respectively. In the following, we describe each of the metrics for recommendation accuracy.
The precision $P@K$ is defined as the ratio of relevant items to the set of recommended items and is expressed as
\begin{align}
P@K=\dfrac{1}{M}\sum_{u\in \mathcal{U}}{\dfrac{\big|Te_u^+\cap R_u(K) \big|}{K}}.
\end{align}
The recall $R@K$ is defined as the ratio of relevant items to the ground truth set and is expressed as
\begin{align}
R@K=\dfrac{1}{M}\sum_{u\in \mathcal{U}}{\dfrac{\big| Te_u^+ \cap R_u(K) \big|}{\big| Te_u^+ \big|}}.
\end{align}
The normalized discounted cumulative gain $nDCG@K$ \cite{NDCG} measures a ranking quality of the recommendation list by assigning higher scores to relevant items at top-$K$ ranking positions in the list:
\begin{align}
nDCG@K = \dfrac{1}{M}\sum_{u\in \mathcal{U}}{nDCG_u@K}.
\end{align}
Let $y_k$ be the binary relevance of the $k$-th item $i_k$ in $R_u(K)$ for each user $u$: $y_k=1$ if $i_k \in Te_u^+$ and $0$ otherwise. Then, $nDCG_u@K$ can be computed as
\begin{align}
nDCG_u@K&=\dfrac{DCG_u@K}{IDCG_u@K},
\end{align}
where $DCG_u@K$ is
\begin{align}
DCG_u@K&=\sum_{k=1}^{K}{\dfrac{2^{y_k}-1}{\log_2(k+1)}}
\end{align}
and $IDCG_u@K$ indicates the ideal case of $DCG_u@K$ (i.e., all relevant items are at the top rank in $R_u(K)$). Note that all metrics are in a range of $[0,1]$, and higher values represent better performance.
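To make the metric computation explicit, a short per-user sketch of $P@K$, $R@K$, and $nDCG_u@K$ is provided below; averaging over the $M$ users, as in the equations above, is left to the caller.
\begin{verbatim}
# Per-user top-K metrics (illustration only).
import math

def metrics_at_k(recommended, ground_truth, k):
    """recommended: ranked list of item ids (its length-K prefix is R_u(K));
    ground_truth: set Te_u^+ of relevant item ids."""
    top_k = recommended[:k]
    hits = [1 if item in ground_truth else 0 for item in top_k]
    precision = sum(hits) / k
    recall = sum(hits) / max(len(ground_truth), 1)
    dcg = sum((2 ** y - 1) / math.log2(idx + 2) for idx, y in enumerate(hits))
    ideal_hits = min(len(ground_truth), k)   # all relevant items ranked at the top
    idcg = sum(1.0 / math.log2(idx + 2) for idx in range(ideal_hits))
    ndcg = dcg / idcg if idcg > 0 else 0.0
    return precision, recall, ndcg
\end{verbatim}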
\subsection{Experimental Setup}
In this subsection, we describe the experimental settings of neural networks in our \textsf{SiReN} method. We implement \textsf{SiReN} via PyTorch Geometric \cite{fey2019pyg}, which is a geometric deep learning extension library in PyTorch. In our experiments, we adopt GNN models for the graph $G^p$ with positive edges and the 2-layer MLP architecture for the graph $G^n$ with negative edges. We use the Xavier initializer \cite{xavier_} to initialize the model parameters $\Theta = \{\theta_1, \theta_2, \theta_3\}$. We use dropout regularization \cite{srivastava2014dropout} with a probability of 0.5 for the MLP and attention models in (\ref{align_MLP_neg}) and (\ref{align_attn}). We set the dimension of the embedding space and all hidden latent spaces to 64; the number of negative samples, $N_{\text{neg}}$, to 40; \color{black} the hyperparameter $c$ in (\ref{likelihood}) to 2;\footnote{\color{black}It was empirically found that the case of $c>2$ does not lead to higher performance for any of the datasets; the corresponding results are omitted for the sake of brevity.\color{black}} \color{black}and the strength of $L_2$ regularization, $\lambda_{\text{reg}}$, to 0.1 for the ML-1M dataset and 0.05 for the Amazon-Book and Yelp datasets. We train our model using the Adam optimizer \cite{kingma2015adam} with a learning rate of 0.005.
For each dataset, we conduct 5-fold cross-validation by splitting it into two subsets: 80\% of the ratings (i.e., user--item interactions) as the training set and 20\% of the ratings as the test set. In the training set, when we implement five benchmark methods, we regard only the items with the rating scores of 4 and 5 as observed interactions by removing the ratings whose scores are lower than 4 as in \cite{chen2019collaborative, wu2019neural,wu2020diffnet++}; however, when we implement our \textsf{SiReN} method, we utilize all user--item interactions including low ratings in the training set while the parameter $w_o$, indicating the design criterion for signed bipartite graph construction, is set to 3.5.\footnote{Our empirical findings reveal that such a setting in \textsf{SiReN} consistently leads to superior performance to that of other values of $w_o$ for all datasets having the 1--5 rating scale. \textcolor{black}{This is obvious due to the fact that, in our experiments, the rating scores of 4 and 5 correspond to positive user--item interactions as well as the ground truth set.}} It is worthwhile to note that, for fair comparison, the test set consists of only the ratings of 4 and 5 as the {\em ground truth} set for all the methods including \textsf{SiReN}.
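To make the above split explicit, we include a simplified sketch of how the training ratings can be partitioned around $w_o=3.5$; the column names and the centered weight $r_{ui}-w_o$ (whose sign plays the role of $w^s_{ui}$) are assumptions made for illustration only.
\begin{verbatim}
# Sketch of splitting training ratings into G^p and G^n around w_o (illustration only).
import pandas as pd

def split_by_threshold(train_ratings, w_o=3.5):
    """train_ratings: DataFrame with columns ['user', 'item', 'rating']
    taken from the 80% training split."""
    signed = train_ratings.copy()
    signed["weight"] = signed["rating"] - w_o   # assumed weight; its sign marks pos/neg
    pos_edges = signed[signed["weight"] > 0]    # edges of G^p (ratings 4 and 5)
    neg_edges = signed[signed["weight"] < 0]    # edges of G^n (ratings 1--3)
    return pos_edges, neg_edges
\end{verbatim}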
\subsection{Experimental Results}
In this subsection, our empirical study is designed to answer the following four key research questions.
\begin{itemize}
\item [\textbf{RQ1}.] How do underlying GNN models affect the performance of the \textsf{SiReN} method?
\item [\textbf{RQ2}.] Which model architecture is appropriate to the graph $G^n$ with negative edges?
\item [\textbf{RQ3}.] How much does the \textsf{SiReN} method improve the top-$K$ recommendation over baseline and state-of-the-art methods?
\item [\textbf{RQ4}.] How robust is our \textsf{SiReN} method with respect to interaction sparsity levels?
\end{itemize}
To answer these research questions, we comprehensively carry out experiments in the following.
\paragraph{Comparative Study Among GNN Models Used for $G^p$ (RQ1)}
In Fig. \ref{Fig_RQ1}, for all datasets, we evaluate the accuracy of top-$K$ recommendation in terms of $P@K$, $R@K$, and $nDCG@K$ when $K$ is set to 10 while using various GNN models for the graph $G^p$ in our \textsf{SiReN} method. Since our method is GNN-model-agnostic, any existing GNN models can be adopted; however, in our experiments, we adopt three state-of-the-art GNN models that exhibit superior performance in recommender systems from the literature, namely NGCF \cite{wang2019neural} (\textsf{SiReN}-NGCF), LR-GCCF \cite{chen2020revisiting} (\textsf{SiReN}-LRGCCF), and LightGCN \cite{he2020lightgcn} (\textsf{SiReN}-LightGCN). From Fig. \ref{Fig_RQ1}, we observe that \textsf{SiReN}-LightGCN consistently outperforms other models for all performance metrics. As discussed in \cite{he2020lightgcn}, this is because nonlinear activation and weight transformation operations in GNNs tend to degrade the recommendation accuracy; LightGCN thus attempted to simplify the design of GCN by removing such operations. It turns out that such a gain achieved by LightGCN is also possible in our \textsf{SiReN} model that contains three learning models including the GNN, MLP, and attention models.
From these findings, we use \textsf{SiReN}-LightGCN in our subsequent experiments unless otherwise stated.
\begin{figure*}
\centering
\begin{tikzpicture}
\begin{groupplot}[
ybar=0pt,
enlarge x limits=0.25,
legend columns=-1,
legend pos=south east,
footnotesize,
symbolic x coords={$P@10$,$R@10$,$nDCG@10$},
xtick=data,
ymin=0.1, ymax=0.4, width=0.35\textwidth,
nodes near coords align={vertical},
group style={
group size=3 by 1,
xlabels at=edge bottom,
ylabels at=edge left,
xticklabels at=edge bottom}]
\nextgroupplot[bar width=0.3cm, xlabel={(a) ML-1M},legend style = { column sep = 5pt, legend columns = -1, legend to name = grouplegend2},
legend to name=grouplegend2,
legend entries={$\textsf{SiReN}_{\text{MLP-}G^n}$,$\textsf{SiReN}_{\text{GNN-}G^n}$,$\textsf{SiReN}_{\text{No-}G^n}$,$\textsf{SiReN}_{\text{\textcolor{black}{No-split}}}$
},
]
\addplot[style={black,fill=bblue,mark=none}, error bars/.cd,
y dir=both,y explicit]
coordinates {($P@10$, 0.2774)+-(0.0037,0.0037) ($R@10$,0.2016)+-(0.0033,0.0033) ($nDCG@10$,0.3423)+-(0.0047,0.0047)};
\addlegendentry{$\textsf{SiReN}_{\text{MLP-}G^n}$}
\addplot[style={black,fill=ggreen,mark=none}, error bars/.cd,
y dir=both,y explicit]
coordinates {($P@10$, 0.2712)+-(0.0025,0.0025) ($R@10$,0.1985)+-(0.0038,0.0038) ($nDCG@10$,0.335)+-(0.0034,0.0034)};
\addlegendentry{$\textsf{SiReN}_{\text{GNN-}G^n}$}
\addplot[style={black,fill=ppurple,mark=none}, error bars/.cd,
y dir=both,y explicit]
coordinates {($P@10$, 0.2735)+-(0.0016,0.0016) ($R@10$,0.1986)+-(0.0039,0.0039) ($nDCG@10$,0.3375)+-(0.0028,0.0028)};
\addlegendentry{$\textsf{SiReN}_{\text{No-}G^n}$}
\addplot[style={black,fill=ppink,mark=none}, error bars/.cd,
y dir=both,y explicit]
coordinates {($P@10$, 0.2746)+-(0.0019,0.0019) ($R@10$,0.2005)+-(0.0039,0.0039) ($nDCG@10$,0.3389)+-(0.0031,0.0031)};
\addlegendentry{$\textsf{SiReN}_{\text{\textcolor{black}{No-split}}}$}
\nextgroupplot[bar width=0.3cm, xlabel={(b) Amazon-Book},ymax=0.09,ymin=0.01]
\addplot[style={black,fill=bblue,mark=none}, error bars/.cd,
y dir=both,y explicit]
coordinates {($P@10$, 0.0565)+-(0.002,0.002) ($R@10$,0.0775)+-(0.0008,0.0008) ($nDCG@10$,0.0823)+-(0.0013,0.0013)};
\addplot[style={black,fill=ggreen,mark=none}, error bars/.cd,
y dir=both,y explicit]
coordinates {($P@10$, 0.0508)+-(0.0018,0.0018) ($R@10$,0.0696)+-(0.0003,0.0003) ($nDCG@10$,0.0733)+-(0.0011,0.0011)};
\addplot[style={black,fill=ppurple,mark=none}, error bars/.cd,
y dir=both,y explicit]
coordinates {($P@10$, 0.0523)+-(0.0018,0.0018) ($R@10$,0.0719)+-(0.0009,0.0009) ($nDCG@10$,0.0759)+-(0.0016,0.0016)};
\addplot[style={black,fill=ppink,mark=none}, error bars/.cd,
y dir=both,y explicit]
coordinates {($P@10$, 0.0518)+-(0.0017,0.0017) ($R@10$,0.0713)+-(0.0007,0.0007) ($nDCG@10$,0.0754)+-(0.0012,0.0012)};
\nextgroupplot[bar width=0.3cm, xlabel={(c) Yelp},ymax=0.06,ymin=0.01]
\addplot[style={black,fill=bblue,mark=none}, error bars/.cd,
y dir=both,y explicit]
coordinates {($P@10$, 0.0352)+-(0.0007,0.0007) ($R@10$,0.0554)+-(0.0011,0.0011) ($nDCG@10$,0.0539)+-(0.0007,0.0007)};
\addplot[style={black,fill=ggreen,mark=none}, error bars/.cd,
y dir=both,y explicit]
coordinates {($P@10$, 0.0296)+-(0.0009,0.0009) ($R@10$,0.0467)+-(0.0009,0.0009) ($nDCG@10$,0.0447)+-(0.001,0.001)};
\addplot[style={black,fill=ppurple,mark=none}, error bars/.cd,
y dir=both,y explicit]
coordinates {($P@10$, 0.0297)+-(0.0012,0.0012) ($R@10$,0.0471)+-(0.0004,0.0004) ($nDCG@10$,0.0451)+-(0.0011,0.0011)};
\addplot[style={black,fill=ppink,mark=none}, error bars/.cd,
y dir=both,y explicit]
coordinates {($P@10$, 0.03)+-(0.0009,0.0009) ($R@10$,0.0473)+-(0.0008,0.0008) ($nDCG@10$,0.0456)+-(0.0009,0.0009)};
\end{groupplot}
\node at ($(group c2r1) + (0, 3cm) $) {\ref{grouplegend2}};
\end{tikzpicture}
\caption{\color{black}Performance comparison according to several model architectures used for the graph $G^n$ in our \textsf{SiReN} method \textcolor{black}{for each dataset}.}
\label{Fig_RQ3}
\end{figure*}
\paragraph{Comparative Study Among Model Architectures Used for $G^n$ (RQ2)}
We perform another comparative study among \textcolor{black}{four} model architectures used for the graph $G^n$ with negative edges. We adopted the MLP for $G^n$ since negative edges in $G^n$ can undermine the assortativity, and thus message passing to such dissimilar nodes would not be effective. In this experiment, we empirically validate this claim by taking into account \textcolor{black}{three} other design scenarios. We evaluate the accuracy of top-$K$ recommendation when $K$ is set to 10 for all datasets. First, we recall the original \textsf{SiReN} method employing the MLP for $G^n$, dubbed $\textsf{SiReN}_{\text{MLP-}G^n}$.
Second, instead of employing MLP, we introduce $\textsf{SiReN}_{\text{GNN-}G^n}$ that uses an additional GNN model for the graph $G^n$ to calculate the embedding vectors ${\bf Z}^n$:
\begin{align}
{\bf Z}^n = \text{GNN}_{\theta_1'}(G^n),
\end{align}
where $\theta_1'$ is the learned model parameters of $\text{GNN}$ for the graph $G^n$. In this experiment, we adopt LightGCN \cite{he2020lightgcn} among GNN models.
Third, as an ablation study, we introduce $\textsf{SiReN}_{\text{No-}G^n}$ that does not calculate the embedding vectors ${\bf Z}^n$. In other words, we only use the embedding vectors ${\bf Z}^p$ in (\ref{align_GNN_pos}) as the final embeddings ${\bf Z}$ in (\ref{attn_final_emb}) (i.e., ${\bf Z} = {\bf Z}^p$). Note that $\textsf{SiReN}_{\text{No-}G^n}$ is identical to the model architecture of LightGCN \cite{he2020lightgcn}, which generates embedding vectors by aggregating only the information of positively connected neighbors. However, unlike LightGCN, $\textsf{SiReN}_{\text{No-}G^n}$ utilizes the sign-aware BPR loss in (\ref{lossfunction})--(\ref{likelihood}) as our objective function in the process of optimization.
\color{black} \textcolor{black}{Fourth, we present $\textsf{SiReN}_\text{No-split}$, a variant of \textsf{SiReN}} that does not partition the signed bipartite graph $G^s$ into the two edge-disjoint graphs. That is, \textcolor{black}{$\textsf{SiReN}_\text{No-split}$ generates the embeddings ${\bf Z}$ in (\ref{attn_final_emb}) based on} the model architecture of LightGCN \cite{he2020lightgcn} without distinguishing \textcolor{black}{positive and negative edges}. However, \textcolor{black}{unlike LightGCN, $\textsf{SiReN}_\text{No-split}$ adopts the sign-aware BPR loss in the optimization.} \color{black}
From Fig. \ref{Fig_RQ3}, our findings are as follows:
\begin{itemize}
\item $\textsf{SiReN}_{\text{MLP-}G^n}$ is always superior to $\textsf{SiReN}_{\text{GNN-}G^n}$ regardless of \textcolor{black}{datasets and }performance metrics, which indeed validates our claim addressed in Remark \ref{rmk_1}.
\color{black}
\item \textcolor{black}{$\textsf{SiReN}_{\text{MLP-}G^n}$ is always superior to $\textsf{SiReN}_\text{No-split}$ regardless of datasets and performance metrics. This implies that partitioning the signed bipartite graph into two edge-disjoint graphs can capture the user preferences more precisely.}
\color{black}
\item \textcolor{black}{$\textsf{SiReN}_{\text{GNN-}G^n}$ exhibits the worst performance among the four model architectures used for $G^n$ for all cases. This implies that message passing to dissimilar nodes in $G^n$ is not effective.} \color{black}
\item Although all the models are \color{black} trained via our sign-aware BPR loss, both positive and negative relations in the signed bipartite graph $G^s$ are not precisely captured during training unless an appropriate model architecture is designed for the graph $G^n$.
\end{itemize}
From these findings, we use $\textsf{SiReN}_{\text{MLP-}G^n}$ in our subsequent experiments unless otherwise stated.
\begin{table*}[h!]
\centering
\setlength\tabcolsep{2.5pt}
\scalebox{1}{
\begin{tabular}{ccccc|ccc|cccccc}
\toprule
\multicolumn{1}{c}{} & {} & \multicolumn{3}{c|}{\color{black}$K=5$} & \multicolumn{3}{c|}{$K=10$} & \multicolumn{3}{c}{$K=15$} \\
\cmidrule{3-11}
Dataset& Method & $P@K$ & $R@K$ & $nDCG@K$ & $P@K$ & $R@K$ & $nDCG@K$& $P@K$ & $R@K$ & $nDCG@K$ \\
\midrule
\multirow{7}{*}{\rotatebox{90}{ML-1M}}
& BPRMF & 0.2360\tiny{$\pm$0.0058} & 0.0746\tiny{$\pm$0.0017} & 0.2536\tiny{$\pm$0.0081} & 0.1999\tiny{$\pm$0.0044} & 0.1227\tiny{$\pm$0.0019} & 0.2363\tiny{$\pm$0.0063} & 0.1772\tiny{$\pm$0.0033} & 0.1608\tiny{$\pm$0.0016} & 0.2314\tiny{$\pm$0.0053} \\
& NeuMF & 0.2785\tiny{$\pm$0.0054} & 0.0966\tiny{$\pm$0.0011} & 0.299\tiny{$\pm$0.0035} & 0.2397\tiny{$\pm$0.0030} & 0.1599\tiny{$\pm$0.0011} & 0.2847\tiny{$\pm$0.0035} & 0.2138\tiny{$\pm$0.0027} & 0.2098\tiny{$\pm$0.0023} & 0.2827\tiny{$\pm$0.0032} \\
& NGCF & 0.2973\tiny{$\pm$0.0043} & 0.1099\tiny{$\pm$0.0026} & 0.3238\tiny{$\pm$0.0045} & 0.2477\tiny{$\pm$0.0023} & 0.1748\tiny{$\pm$0.0025} & 0.3031\tiny{$\pm$0.0033} & 0.2174\tiny{$\pm$0.0022} & 0.2229\tiny{$\pm$0.0027} & 0.2985\tiny{$\pm$0.0029} \\
& LR-GCCF & 0.3052\tiny{$\pm$0.0033} & 0.114\tiny{$\pm$0.0023} & 0.333\tiny{$\pm$0.0044} & 0.2539\tiny{$\pm$0.0027} & 0.1802\tiny{$\pm$0.0031} & 0.3117\tiny{$\pm$0.0039} & 0.2220\tiny{$\pm$0.0025} & 0.2292\tiny{$\pm$0.0046} & 0.3066\tiny{$\pm$0.0042} \\
& LightGCN & 0.3218\tiny{$\pm$0.0022} & 0.1206\tiny{$\pm$0.0011} & 0.3519\tiny{$\pm$0.0023} & 0.2679\tiny{$\pm$0.0013} & 0.1909\tiny{$\pm$0.0016} & 0.3297\tiny{$\pm$0.0018} & 0.2349\tiny{$\pm$0.0016} & 0.2432\tiny{$\pm$0.0029} & 0.3249\tiny{$\pm$0.0022} \\
& \color{black}\textsf{SGCN} &
\color{black}0.2484\tiny{$\pm$0.0033} & \color{black}0.091\tiny{$\pm$0.0017} & \color{black}0.2683\tiny{$\pm$0.0035} &
\color{black}0.1873\tiny{$\pm$0.002} & \color{black}0.1492\tiny{$\pm$0.0026} & \color{black}0.2547\tiny{$\pm$0.0024} & \color{black}0.1873\tiny{$\pm$0.002} & \color{black}0.1932\tiny{$\pm$0.0037} & \color{black}0.253\tiny{$\pm$0.0026} & \\
& \textsf{SiReN} &
\color{black}\underline{0.3328}\tiny{$\pm$0.0054} & \color{black}\underline{0.1279}\tiny{$\pm$0.0027} & \color{black}\underline{0.3637}\tiny{$\pm$0.0055} &
\color{black}\underline{0.2774}\tiny{$\pm$0.0037} & \color{black}\underline{0.2016}\tiny{$\pm$0.0033} & \color{black}\underline{0.3423}\tiny{$\pm$0.0047} & \color{black}\underline{0.2444}\tiny{$\pm$0.0025} & \color{black}\underline{0.2569}\tiny{$\pm$0.0029} & \color{black}\underline{0.3388}\tiny{$\pm$0.004} & \\
\midrule
\multirow{7}{*}{\rotatebox{90}{Amazon-Book}}
& BPRMF & 0.0298\tiny{$\pm$0.0039} & 0.0209\tiny{$\pm$0.0035} & 0.0331\tiny{$\pm$0.0046} & 0.0263\tiny{$\pm$0.0033} & 0.0365\tiny{$\pm$0.0057} & 0.0371\tiny{$\pm$0.0053} & 0.0243\tiny{$\pm$0.0029} & 0.0500\tiny{$\pm$0.0074} & 0.0419\tiny{$\pm$0.0058} \\
& NeuMF & 0.0402\tiny{$\pm$0.0021} & 0.0271\tiny{$\pm$0.0007} & 0.0448\tiny{$\pm$0.002} & 0.0339\tiny{$\pm$0.0018} & 0.0452\tiny{$\pm$0.0010} & 0.0483\tiny{$\pm$0.0018} & 0.0303\tiny{$\pm$0.0014} & 0.0597\tiny{$\pm$0.0012} & 0.0530\tiny{$\pm$0.0017} \\
& NGCF & 0.0463\tiny{$\pm$0.0014} & 0.032\tiny{$\pm$0.0007} & 0.0518\tiny{$\pm$0.0011} & 0.0391\tiny{$\pm$0.0014} & 0.0532\tiny{$\pm$0.0008} & 0.0562\tiny{$\pm$0.0010} & 0.0403\tiny{$\pm$0.0127} & 0.0706\tiny{$\pm$0.0007} & 0.0618\tiny{$\pm$0.0009} \\
& LR-GCCF & 0.0469\tiny{$\pm$0.0016} & 0.0324\tiny{$\pm$0.0002} & 0.0527\tiny{$\pm$0.0012} & 0.0399\tiny{$\pm$0.0014} & 0.0544\tiny{$\pm$0.0005} & 0.0574\tiny{$\pm$0.0009} & 0.0357\tiny{$\pm$0.0013} & 0.0721\tiny{$\pm$0.0004} & 0.0631\tiny{$\pm$0.0008} \\
& LightGCN & 0.0529\tiny{$\pm$0.0015} & 0.0362\tiny{$\pm$0.0007} & 0.0596\tiny{$\pm$0.0011} & 0.0443\tiny{$\pm$0.0013} & 0.0595\tiny{$\pm$0.0008} & 0.0638\tiny{$\pm$0.0007} & 0.0393\tiny{$\pm$0.0011} & 0.0781\tiny{$\pm$0.0008} & 0.0698\tiny{$\pm$0.0007} \\
& \color{black}SGCN &
\color{black}0.039\tiny{$\pm$0.0023} & \color{black}0.0267\tiny{$\pm$0.0012} & \color{black}0.0433\tiny{$\pm$0.0024} &
\color{black}0.0336\tiny{$\pm$0.0018} & \color{black}0.0454\tiny{$\pm$0.0018} & \color{black}0.0475\tiny{$\pm$0.0023} & \color{black}0.0304\tiny{$\pm$0.0016} & \color{black}0.0609\tiny{$\pm$0.0021} & \color{black}0.0527\tiny{$\pm$0.0024} & \\
& \textsf{SiReN} & \color{black}\underline{0.0678}\tiny{$\pm$0.0025} & \color{black}\underline{0.0474}\tiny{$\pm$0.0005} & \color{black}\underline{0.0766}\tiny{$\pm$0.0002} & \color{black}\underline{0.0565}\tiny{$\pm$0.0025} & \color{black}\underline{0.0775}\tiny{$\pm$0.0008} & \color{black}\underline{0.0823}\tiny{$\pm$0.0013} & \color{black}\underline{0.0497}\tiny{$\pm$0.0018} & \color{black}\underline{0.1009}\tiny{$\pm$0.0011} & \color{black}\underline{0.0897}\tiny{$\pm$0.0011}\\
\midrule
\multirow{7}{*}{\rotatebox{90}{Yelp}}
& BPRMF & 0.0124\tiny{$\pm$0.0013} & 0.0096\tiny{$\pm$0.0009} & 0.0137\tiny{$\pm$0.0013}& 0.0116\tiny{$\pm$0.0014} & 0.0180\tiny{$\pm$0.0020} & 0.0165\tiny{$\pm$0.0017} & 0.0111\tiny{$\pm$0.0013} & 0.0257\tiny{$\pm$0.0026} & 0.0194\tiny{$\pm$0.0021} \\
& NeuMF & 0.0199\tiny{$\pm$0.0011} & 0.015\tiny{$\pm$0.0012} & 0.0226\tiny{$\pm$0.0015} & 0.0174\tiny{$\pm$0.0009} & 0.0262\tiny{$\pm$0.0017} & 0.0254\tiny{$\pm$0.0015} & 0.0159\tiny{$\pm$0.0007} & 0.0358\tiny{$\pm$0.0019} & 0.0288\tiny{$\pm$0.0016} \\
& NGCF & 0.0285\tiny{$\pm$0.0012} & 0.0226\tiny{$\pm$0.0007} & 0.0329\tiny{$\pm$0.001} & 0.0243\tiny{$\pm$0.0009} & 0.0383\tiny{$\pm$0.0010} & 0.0368\tiny{$\pm$0.0010} & 0.0219\tiny{$\pm$0.0007} & 0.0515\tiny{$\pm$0.0008} & 0.0413\tiny{$\pm$0.0009} \\
& LR-GCCF & 0.0303\tiny{$\pm$0.0014} & 0.024\tiny{$\pm$0.0006} & 0.0351\tiny{$\pm$0.0013} & 0.0258\tiny{$\pm$0.0010} & 0.0405\tiny{$\pm$0.0009} & 0.0392\tiny{$\pm$0.0011} & 0.0232\tiny{$\pm$0.0008} & 0.0543\tiny{$\pm$0.0010} & 0.0439\tiny{$\pm$0.0012} \\
& LightGCN & 0.0333\tiny{$\pm$0.0011} & 0.0259\tiny{$\pm$0.0004} & 0.0386\tiny{$\pm$0.0009} & 0.0281\tiny{$\pm$0.0009} & 0.0435\tiny{$\pm$0.0007} & 0.0427\tiny{$\pm$0.0008} & 0.0251\tiny{$\pm$0.0008} & 0.0582\tiny{$\pm$0.0010} & 0.0476\tiny{$\pm$0.0008} \\
& \color{black}SGCN &
\color{black}0.0293\tiny{$\pm$0.001} & \color{black}0.0226\tiny{$\pm$0.0006} & \color{black}0.0332\tiny{$\pm$0.001} &
\color{black} 0.0256\tiny{$\pm$0.0008} & \color{black}0.0395\tiny{$\pm$0.0011} & \color{black}0.0377\tiny{$\pm$0.001} & \color{black}0.0232\tiny{$\pm$0.0007} & \color{black}0.0538\tiny{$\pm$0.0019} & \color{black}0.0426\tiny{$\pm$0.0012} & \\
& \textsf{SiReN} & \color{black}\underline{0.042}\tiny{$\pm$0.009} & \color{black}\underline{0.0332}\tiny{$\pm$0.0005} & \color{black}\underline{0.0488}\tiny{$\pm$0.0007} & \color{black}\underline{0.0352}\tiny{$\pm$0.0007} & \color{black}\underline{0.0554}\tiny{$\pm$0.0011} & \color{black}\underline{0.0539}\tiny{$\pm$0.0007} & \color{black}\underline{0.0314}\tiny{$\pm$0.0006} & \color{black}\underline{0.0737}\tiny{$\pm$0.0012} & \color{black}\underline{0.06}\tiny{$\pm$0.0006}\\
\bottomrule
\end{tabular}}
\caption{\color{black}Performance comparison among \textsf{SiReN} and six benchmark methods in terms of three performance metrics (average $\pm$ standard deviation) when $K\in\{5,10,15\}$. Here, the best method for each case is highlighted using underlines. }
\label{Q4table}
\end{table*}
\begin{table*}[h!]
\centering
\setlength\tabcolsep{2.5pt}
\scalebox{1}{
\begin{tabular}{cccccc|ccc|cccccc}
\toprule
\multicolumn{1}{c}{} & {}&{} & \multicolumn{3}{c|}{\color{black}$K=5$} & \multicolumn{3}{c|}{$K=10$} & \multicolumn{3}{c}{$K=15$} \\
\cmidrule{4-12}
Dataset&Group& Method & $P@K$ & $R@K$ & $nDCG@K$ & $P@K$ & $R@K$ & $nDCG@K$& $P@K$ & $R@K$ & $nDCG@K$ \\
\midrule
\multirow{21}{*}{\rotatebox{90}{ML-1M}}&
\multirow{7}{*}{\rotatebox{90}{\thead{$[0,20)$}}}
& BPRMF & 0.0625\tiny{$\pm$0.0065} & 0.0882\tiny{$\pm$0.0149} & 0.0858\tiny{$\pm$0.0131} & 0.0485\tiny{$\pm$0.0056} & 0.1348\tiny{$\pm$0.0200} & 0.1046\tiny{$\pm$0.0148} & 0.0433\tiny{$\pm$0.0031} & 0.1810\tiny{$\pm$0.0145} & 0.1223\tiny{$\pm$0.0128} \\
&& NeuMF & 0.0842\tiny{$\pm$0.0063} & 0.1185\tiny{$\pm$0.0122} & 0.1158\tiny{$\pm$0.0122} & 0.0675\tiny{$\pm$0.0050} & 0.1873\tiny{$\pm$0.0182} & 0.1449\tiny{$\pm$0.0147} & 0.0576\tiny{$\pm$0.0036} & 0.2397\tiny{$\pm$0.0204} & 0.1653\tiny{$\pm$0.0149} \\
&& NGCF & 0.0987\tiny{$\pm$0.0077} & 0.1384\tiny{$\pm$0.0157} & 0.1376\tiny{$\pm$0.0142} & 0.0760\tiny{$\pm$0.0040} & 0.2141\tiny{$\pm$0.0126} & 0.1679\tiny{$\pm$0.0129} & 0.0638\tiny{$\pm$0.0015} & 0.2660\tiny{$\pm$0.0100} & 0.1884\tiny{$\pm$0.0101} \\
&& LR-GCCF & 0.1035\tiny{$\pm$0.0062} & 0.1457\tiny{$\pm$0.0131} & 0.1457\tiny{$\pm$0.0153} & 0.0776\tiny{$\pm$0.0037} & 0.2148\tiny{$\pm$0.0145} & 0.1736\tiny{$\pm$0.0162} & 0.0643\tiny{$\pm$0.0025} & 0.2670\tiny{$\pm$0.0139} & 0.1938\tiny{$\pm$0.0147} \\
&& LightGCN & 0.1070\tiny{$\pm$0.0047} & 0.1508\tiny{$\pm$0.0107} & 0.1515\tiny{$\pm$0.0104} & 0.0832\tiny{$\pm$0.0029} & 0.2326\tiny{$\pm$0.0116} & 0.1847\tiny{$\pm$0.0108} & 0.069\tiny{$\pm$0.0028} & 0.2878\tiny{$\pm$0.0146} & 0.2063\tiny{$\pm$0.0108} \\
&& \color{black}SGCN & \color{black}0.0823\tiny{$\pm$0.0079} & \color{black}0.1151\tiny{$\pm$0.0084} & \color{black}0.1126\tiny{$\pm$0.0096} \color{black}& 0.0649\tiny{$\pm$0.0065} & \color{black}0.1789\tiny{$\pm$0.0158} & \color{black}0.1393\tiny{$\pm$0.0122} & \color{black}0.0541\tiny{$\pm$0.0051} & \color{black}0.224\tiny{$\pm$0.0177} & \color{black}0.1566\tiny{$\pm$0.0123} \\
&& \textsf{SiReN} &
\color{black}\underline{0.123}\tiny{$\pm$0.008} & \color{black}\underline{0.1755}\tiny{$\pm$0.0152} & \color{black}\underline{0.1756}\tiny{$\pm$0.0168} &
\color{black}\underline{0.0933}\tiny{$\pm$0.0053} & \color{black}\underline{0.2606}\tiny{$\pm$0.0129} & \color{black}\underline{0.2106}\tiny{$\pm$0.0149} & \color{black}\underline{0.077}\tiny{$\pm$0.002} & \color{black}\underline{0.3223}\tiny{$\pm$0.0051} & \color{black}\underline{0.2345}\tiny{$\pm$0.0121} & \\
\cmidrule{2-12}
&\multirow{7}{*}{\rotatebox{90}{\thead{$[20,50)$}}}
& BPRMF & 0.0877\tiny{$\pm$0.0041} & 0.0825\tiny{$\pm$0.0029} & 0.1052\tiny{$\pm$0.0057} & 0.0731\tiny{$\pm$0.0018} & 0.1358\tiny{$\pm$0.0026} & 0.1194\tiny{$\pm$0.0041} & 0.0635\tiny{$\pm$0.0013} & 0.1773\tiny{$\pm$0.0046} & 0.1364\tiny{$\pm$0.0037}\\
&& NeuMF & 0.1245\tiny{$\pm$0.0032} & 0.1196\tiny{$\pm$0.0047} & 0.1488\tiny{$\pm$0.004}& 0.1010\tiny{$\pm$0.0020} & 0.1916\tiny{$\pm$0.0052} & 0.1679\tiny{$\pm$0.0038} & 0.0877\tiny{$\pm$0.0014} & 0.2485\tiny{$\pm$0.0061} & 0.1913\tiny{$\pm$0.0041} \\
&& NGCF & 0.1460\tiny{$\pm$0.0027} & 0.1428\tiny{$\pm$0.0052} & 0.1768\tiny{$\pm$0.0059}& 0.1149\tiny{$\pm$0.0021} & 0.2198\tiny{$\pm$0.0073} & 0.1961\tiny{$\pm$0.0060} & 0.0959\tiny{$\pm$0.0015} & 0.2727\tiny{$\pm$0.0086} & 0.2177\tiny{$\pm$0.0061} \\
&& LR-GCCF & 0.1548\tiny{$\pm$0.0036} & 0.1504\tiny{$\pm$0.0041} & 0.1885\tiny{$\pm$0.0047} & 0.1193\tiny{$\pm$0.0026} & 0.2298\tiny{$\pm$0.0050} & 0.2062\tiny{$\pm$0.0048} & 0.0993\tiny{$\pm$0.0022} & 0.2844\tiny{$\pm$0.0077} & 0.2284\tiny{$\pm$0.0055} \\
&& LightGCN & 0.1633\tiny{$\pm$0.0027} & 0.1596\tiny{$\pm$0.0037} & 0.2003\tiny{$\pm$0.0037} & 0.1262\tiny{$\pm$0.0020} & 0.2424\tiny{$\pm$0.0044} & 0.2194\tiny{$\pm$0.0037} & 0.1055\tiny{$\pm$0.0015} & 0.3013\tiny{$\pm$0.0055} & 0.2434\tiny{$\pm$0.0036} \\
&& \color{black}SGCN &\color{black} 0.1204\tiny{$\pm$0.0029} & \color{black} 0.1179\tiny{$\pm$0.0028} & \color{black}0.1451\tiny{$\pm$0.0031} & \color{black} 0.0985\tiny{$\pm$0.0022} & \color{black}0.1915\tiny{$\pm$0.0056} & \color{black}0.1657\tiny{$\pm$0.0032} & \color{black}0.084\tiny{$\pm$0.0026} & \color{black}0.2421\tiny{$\pm$0.0084} & \color{black}0.1865\tiny{$\pm$0.0043} \\
&& \textsf{SiReN} &
\color{black} \underline{0.1728}\tiny{$\pm$0.0036} & \color{black} \underline{0.1722}\tiny{$\pm$0.0044} & \color{black}\underline{0.2147}\tiny{$\pm$0.0046} &
\color{black} \underline{0.1342}\tiny{$\pm$0.0026} & \color{black} \underline{0.2613}\tiny{$\pm$0.0053} & \color{black}\underline{0.2361}\tiny{$\pm$0.0044} & \color{black}\underline{0.1116}\tiny{$\pm$0.0018} & \color{black}\underline{0.322}\tiny{$\pm$0.0052} & \color{black} \underline{0.261}\tiny{$\pm$0.0044} & \\
\cmidrule{2-12}
&\multirow{7}{*}{\rotatebox{90}{\thead{$[50,\infty)$}}}
& BPRMF & 0.3201\tiny{$\pm$0.0079} & 0.0698\tiny{$\pm$0.0019} & 0.3372\tiny{$\pm$0.0099}& 0.2722\tiny{$\pm$0.0068} & 0.1156\tiny{$\pm$0.0035} & 0.3021\tiny{$\pm$0.0083} & 0.2418\tiny{$\pm$0.0052} & 0.1514\tiny{$\pm$0.0037} & 0.2852\tiny{$\pm$0.0071} \\
&& NeuMF & 0.3673\tiny{$\pm$0.0091} & 0.0841\tiny{$\pm$0.0028} & 0.385\tiny{$\pm$0.0089} & 0.3194\tiny{$\pm$0.0055} & 0.1430\tiny{$\pm$0.0033} & 0.3514\tiny{$\pm$0.0066} & 0.2862\tiny{$\pm$0.0043} & 0.1893\tiny{$\pm$0.0037} & 0.3355\tiny{$\pm$0.0059} \\
&& NGCF & 0.3853\tiny{$\pm$0.0061} & 0.0922\tiny{$\pm$0.0025} & 0.4085\tiny{$\pm$0.006} & 0.3247\tiny{$\pm$0.0040} & 0.1505\tiny{$\pm$0.0031} & 0.3649\tiny{$\pm$0.0052} & 0.2875\tiny{$\pm$0.0039} & 0.1962\tiny{$\pm$0.0037} & 0.3459\tiny{$\pm$0.0052} \\
&& LR-GCCF & 0.3931\tiny{$\pm$0.0038} & 0.0944\tiny{$\pm$0.0019} & 0.4167\tiny{$\pm$0.0052} & 0.3321\tiny{$\pm$0.0033} & 0.1544\tiny{$\pm$0.0030} & 0.3730\tiny{$\pm$0.0046} & 0.2930\tiny{$\pm$0.0031} & 0.2006\tiny{$\pm$0.0032} & 0.3531\tiny{$\pm$0.0043} \\
&& LightGCN & 0.4146\tiny{$\pm$0.0044} & 0.1\tiny{$\pm$0.0024} & 0.4402\tiny{$\pm$0.0052}& 0.3502\tiny{$\pm$0.0026} & 0.1635\tiny{$\pm$0.0029} & 0.3898\tiny{$\pm$0.0109} & 0.3098\tiny{$\pm$0.0024} & 0.2125\tiny{$\pm$0.0034} & 0.3736\tiny{$\pm$0.0043} \\
&& \color{black}SGCN & \color{black}0.3227\tiny{$\pm$0.0055} & \color{black} 0.0766\tiny{$\pm$0.0017} & \color{black}0.3393\tiny{$\pm$0.0064} & \color{black} 0.2763\tiny{$\pm$0.0029} & \color{black} 0.1273\tiny{$\pm$0.0023} & \color{black}0.3393\tiny{$\pm$0.0064} & \color{black} 0.2472\tiny{$\pm$0.0033} & \color{black} 0.1682\tiny{$\pm$0.003} & \color{black}0.2926\tiny{$\pm$0.0048} \\
&& \textsf{SiReN} &
\color{black} \underline{0.4258}\tiny{$\pm$0.0075} & \color{black}\underline{0.1033}\tiny{$\pm$0.0029} & \color{black}\underline{0.4497}\tiny{$\pm$0.0079} & \color{black} \underline{0.3604}\tiny{$\pm$0.005} & \color{black}\underline{0.1688}\tiny{$\pm$0.0035} & \color{black} \underline{0.3896}\tiny{$\pm$0.0278} & \color{black}\underline{0.321}\tiny{$\pm$0.0033} & \color{black} \underline{0.2211}\tiny{$\pm$0.0035} & \color{black}\underline{0.3844}\tiny{$\pm$0.0055}\\
\midrule
\multirow{21}{*}{\rotatebox{90}{Amazon-Book}}&\multirow{7}{*}{\rotatebox{90}{\thead{$[0,20)$}}}
& BPRMF & 0.0181\tiny{$\pm$0.0027} & 0.0263\tiny{$\pm$0.0052} & 0.0239\tiny{$\pm$0.0042} & 0.0157\tiny{$\pm$0.0021} & 0.0455\tiny{$\pm$0.008} & 0.033\tiny{$\pm$0.0054} & 0.0142\tiny{$\pm$0.0019} & 0.0619\tiny{$\pm$0.0103} & 0.0393\tiny{$\pm$0.0062}\\
&& NeuMF & 0.0247\tiny{$\pm$0.0022} & 0.0333\tiny{$\pm$0.0014} & 0.032\tiny{$\pm$0.0018} & 0.0199\tiny{$\pm$0.0016} & 0.0542\tiny{$\pm$0.0013} & 0.042\tiny{$\pm$0.002} & 0.0171\tiny{$\pm$0.0012} & 0.0699\tiny{$\pm$0.0017} & 0.0482\tiny{$\pm$0.0021}\\
&& NGCF & 0.0288\tiny{$\pm$0.0016} & 0.0398\tiny{$\pm$0.0014} & 0.0377\tiny{$\pm$0.0012} & 0.0235\tiny{$\pm$0.0012} & 0.0649\tiny{$\pm$0.0019} & 0.0497\tiny{$\pm$0.0014} & 0.0204\tiny{$\pm$0.0011} & 0.0848\tiny{$\pm$0.0015} & 0.0576\tiny{$\pm$0.0013}\\
&& LR-GCCF & 0.0292\tiny{$\pm$0.0017} & 0.0402\tiny{$\pm$0.0007} & 0.0382\tiny{$\pm$0.001} & 0.0241\tiny{$\pm$0.0013} & 0.0665\tiny{$\pm$0.0013} & 0.0508\tiny{$\pm$0.0009} & 0.0208\tiny{$\pm$0.0012} & 0.0863\tiny{$\pm$0.001} & 0.0586\tiny{$\pm$0.0009}\\
&& LightGCN & 0.0318\tiny{$\pm$0.0015} & 0.0438\tiny{$\pm$0.0012} & 0.042\tiny{$\pm$0.0008} & 0.0258\tiny{$\pm$0.0011} & 0.0712\tiny{$\pm$0.0021} & 0.0551\tiny{$\pm$0.0008} & 0.0224\tiny{$\pm$0.0011} & 0.0923\tiny{$\pm$0.0022} & 0.0635\tiny{$\pm$0.0009}\\
&& \color{black}SGCN &\color{black} 0.0267\tiny{$\pm$0.0069} & \color{black}0.0317\tiny{$\pm$0.0038} & \color{black}0.0335\tiny{$\pm$0.0054} & \color{black} 0.0225\tiny{$\pm$0.0063} & \color{black} 0.053\tiny{$\pm$0.0054} & \color{black}0.0427\tiny{$\pm$0.003} & \color{black} 0.0199\tiny{$\pm$0.0058} & \color{black}0.0701\tiny{$\pm$0.0063} & \color{black}0.0492\tiny{$\pm$0.0025} \\
&& \textsf{SiReN} &
\color{black} \underline{0.0435}\tiny{$\pm$0.0028} & \color{black} \underline{0.06}\tiny{$\pm$0.0013} & \color{black}\underline{0.0575}\tiny{$\pm$0.001} & \color{black}\underline{0.0344}\tiny{$\pm$0.0024} & \color{black}\underline{0.0946}\tiny{$\pm$0.0013} & \color{black}\underline{0.0742}\tiny{$\pm$0.0014} & \color{black}\underline{0.0292}\tiny{$\pm$0.0019} & \color{black} \underline{0.1208}\tiny{$\pm$0.0018} & \color{black}\underline{0.0845}\tiny{$\pm$0.0013}\\
\cmidrule{2-12}
&\multirow{7}{*}{\rotatebox{90}{\thead{$[20,50)$}}}
& BPRMF & 0.0242\tiny{$\pm$0.0031} & 0.0214\tiny{$\pm$0.0035} & 0.0268\tiny{$\pm$0.0038} & 0.0211\tiny{$\pm$0.0026} & 0.0371\tiny{$\pm$0.0058} & 0.032\tiny{$\pm$0.0046} & 0.0194\tiny{$\pm$0.0023} & 0.0508\tiny{$\pm$0.0077} & 0.0383\tiny{$\pm$0.0055}\\
&& NeuMF & 0.0337\tiny{$\pm$0.0021} & 0.0281\tiny{$\pm$0.0004} & 0.0374\tiny{$\pm$0.002} & 0.0281\tiny{$\pm$0.0019} & 0.0468\tiny{$\pm$0.0008} & 0.0426\tiny{$\pm$0.0015} & 0.0249\tiny{$\pm$0.0016} & 0.0622\tiny{$\pm$0.0010} & 0.0497\tiny{$\pm$0.0015}\\
&& NGCF & 0.0386\tiny{$\pm$0.0019} & 0.033\tiny{$\pm$0.0006} & 0.043\tiny{$\pm$0.0014} & 0.0322\tiny{$\pm$0.0016} & 0.0547\tiny{$\pm$0.0007} & 0.0494\tiny{$\pm$0.0008} & 0.0286\tiny{$\pm$0.0014} & 0.0727\tiny{$\pm$0.0009} & 0.0576\tiny{$\pm$0.0009}\\
&& LR-GCCF & 0.0394\tiny{$\pm$0.0016} & 0.0336\tiny{$\pm$0.0005} & 0.0442\tiny{$\pm$0.0011} & 0.033\tiny{$\pm$0.0016} & 0.056\tiny{$\pm$0.0007} & 0.0507\tiny{$\pm$0.0007} & 0.0265\tiny{$\pm$0.0013} & 0.0744\tiny{$\pm$0.0006} & 0.0591\tiny{$\pm$0.0007}\\
&& LightGCN & 0.0441\tiny{$\pm$0.0017} & 0.0375\tiny{$\pm$0.001} & 0.0496\tiny{$\pm$0.0012} & 0.0364\tiny{$\pm$0.0016} & 0.0615\tiny{$\pm$0.0009} & 0.0562\tiny{$\pm$0.0005} & 0.0319\tiny{$\pm$0.0014} & 0.0807\tiny{$\pm$0.0009} & 0.065\tiny{$\pm$0.0005}\\
&& \color{black}SGCN & \color{black} 0.0324\tiny{$\pm$0.0024} & \color{black} 0.0275\tiny{$\pm$0.0015} & \color{black}0.0358\tiny{$\pm$0.0025} & \color{black} 0.0277\tiny{$\pm$0.0019} & \color{black} 0.0469\tiny{$\pm$0.002} & \color{black}0.0417\tiny{$\pm$0.0022} & \color{black}0.0249\tiny{$\pm$0.0016} & \color{black}0.0632\tiny{$\pm$0.0025} & \color{black}0.0491\tiny{$\pm$0.0024} \\
&& \textsf{SiReN} &
\color{black} \underline{0.0576}\tiny{$\pm$0.0024} & \color{black}\underline{0.0489}\tiny{$\pm$0.0011} & \color{black}\underline{0.0647}\tiny{$\pm$0.0018} & \color{black} \underline{0.0476}\tiny{$\pm$0.0022} & \color{black}\underline{0.0805}\tiny{$\pm$0.0016} & \color{black} \underline{0.0735}\tiny{$\pm$0.0006} & \color{black}\underline{0.0414}\tiny{$\pm$0.002} & \color{black}\underline{0.1049}\tiny{$\pm$0.0019} & \color{black}\underline{0.0846}\tiny{$\pm$0.0006}\\
\cmidrule{2-12}
&\multirow{7}{*}{\rotatebox{90}{\thead{$[50,\infty)$}}}
& BPRMF & 0.0551\tiny{$\pm$0.0067} & 0.0143\tiny{$\pm$0.0019} & 0.0574\tiny{$\pm$0.007} & 0.0497\tiny{$\pm$0.0058} & 0.0255\tiny{$\pm$0.0034} & 0.0533\tiny{$\pm$0.0063} & 0.0466\tiny{$\pm$0.0052} & 0.0357\tiny{$\pm$0.0046} & 0.0527\tiny{$\pm$0.0062}\\
&& NeuMF & 0.0717\tiny{$\pm$0.0034} & 0.0183\tiny{$\pm$0.0007} & 0.0756\tiny{$\pm$0.0036} & 0.0625\tiny{$\pm$0.0026} & 0.0318\tiny{$\pm$0.0007} & 0.0683\tiny{$\pm$0.003} & 0.057\tiny{$\pm$0.0023} & 0.0433\tiny{$\pm$0.0008} & 0.0658\tiny{$\pm$0.0025}\\
&& NGCF & 0.0827\tiny{$\pm$0.0016} & 0.0215\tiny{$\pm$0.0003} & 0.0875\tiny{$\pm$0.0019} & 0.0718\tiny{$\pm$0.002} & 0.0371\tiny{$\pm$0.0003} & 0.0789\tiny{$\pm$0.0019} & 0.0652\tiny{$\pm$0.0014} & 0.0505\tiny{$\pm$0.0004} & 0.0761\tiny{$\pm$0.0012}\\
&& LR-GCCF & 0.0834\tiny{$\pm$0.0031} & 0.0216\tiny{$\pm$0.0004} & 0.0882\tiny{$\pm$0.0033} & 0.0729\tiny{$\pm$0.0022} & 0.0378\tiny{$\pm$0.0002} & 0.0799\tiny{$\pm$0.0026} & 0.0665\tiny{$\pm$0.0019} & 0.0515\tiny{$\pm$0.0004} & 0.0774\tiny{$\pm$0.002}\\
&& LightGCN & 0.0958\tiny{$\pm$0.0026} & 0.0249\tiny{$\pm$0.0003} & 0.1019\tiny{$\pm$0.0032} & 0.0823\tiny{$\pm$0.0021} & 0.0423\tiny{$\pm$0.0003} & 0.0911\tiny{$\pm$0.0025} & 0.0747\tiny{$\pm$0.0018} & 0.0574\tiny{$\pm$0.0004} & 0.0877\tiny{$\pm$0.0018}\\
&& \color{black}SGCN & \color{black} 0.0708\tiny{$\pm$0.0036} & \color{black}0.0185\tiny{$\pm$0.0007} & \color{black}0.0742\tiny{$\pm$0.0038} &\color{black} 0.0624\tiny{$\pm$0.0035} & \color{black}0.0324\tiny{$\pm$0.0015} & \color{black}0.0677\tiny{$\pm$0.0037} & \color{black} 0.0571\tiny{$\pm$0.003} & \color{black}0.0442\tiny{$\pm$0.002} & \color{black}0.0657\tiny{$\pm$0.0033} \\
&& \textsf{SiReN} &
\color{black} \underline{0.1176}\tiny{$\pm$0.0046} & \color{black}\underline{0.0309}\tiny{$\pm$0.0006} & \color{black}\underline{0.1249}\tiny{$\pm$0.0055} & \color{black} \underline{0.1008}\tiny{$\pm$0.0032} & \color{black}\underline{0.0525}\tiny{$\pm$0.0008} &\color{black} \underline{0.1117}\tiny{$\pm$0.004} & \color{black}\underline{0.0909}\tiny{$\pm$0.0028} & \color{black}\underline{0.0706}\tiny{$\pm$0.0011} & \color{black}\underline{0.1071}\tiny{$\pm$0.0033}\\
\midrule
\multirow{21}{*}{\rotatebox{90}{Yelp}}&\multirow{7}{*}{\rotatebox{90}{\thead{$[0,20)$}}}
& BPRMF & 0.0066\tiny{$\pm$0.0009} & 0.0107\tiny{$\pm$0.0012} & 0.0091\tiny{$\pm$0.0013} & 0.0061\tiny{$\pm$0.0009} & 0.0197\tiny{$\pm$0.0024} & 0.0132\tiny{$\pm$0.0018} & 0.0057\tiny{$\pm$0.0008} & 0.0276\tiny{$\pm$0.0031} & 0.0161\tiny{$\pm$0.0022} \\
&& NeuMF & 0.0097\tiny{$\pm$0.0011} & 0.0155\tiny{$\pm$0.0018} & 0.0138\tiny{$\pm$0.0016} & 0.0083\tiny{$\pm$0.0009} & 0.0268\tiny{$\pm$0.0024} & 0.0189\tiny{$\pm$0.0020} & 0.0075\tiny{$\pm$0.0007} & 0.0364\tiny{$\pm$0.0026} & 0.0224\tiny{$\pm$0.0021} \\
&& NGCF & 0.0152\tiny{$\pm$0.0013} & 0.0244\tiny{$\pm$0.0012} & 0.0219\tiny{$\pm$0.0014} & 0.0128\tiny{$\pm$0.0010} & 0.0413\tiny{$\pm$0.0017} & 0.0295\tiny{$\pm$0.0016} & 0.0114\tiny{$\pm$0.0009} & 0.0552\tiny{$\pm$0.0014} & 0.0347\tiny{$\pm$0.0016} \\
&& LR-GCCF & 0.0162\tiny{$\pm$0.0018} & 0.0259\tiny{$\pm$0.0014} & 0.0232\tiny{$\pm$0.0017}& 0.0134\tiny{$\pm$0.0013} & 0.0434\tiny{$\pm$0.0023} & 0.0310\tiny{$\pm$0.0022} & 0.0120\tiny{$\pm$0.0001} & 0.0584\tiny{$\pm$0.0022} & 0.0366\tiny{$\pm$0.0021} \\
&& LightGCN & 0.0171\tiny{$\pm$0.0012} & 0.0274\tiny{$\pm$0.0009} & 0.0248\tiny{$\pm$0.0012}& 0.0144\tiny{$\pm$0.0010} & 0.0462\tiny{$\pm$0.0012} & 0.0332\tiny{$\pm$0.0015} & 0.0127\tiny{$\pm$0.0009} & 0.0615\tiny{$\pm$0.0014} & 0.0389\tiny{$\pm$0.0016} \\
&& \color{black}SGCN & \color{black} 0.0152\tiny{$\pm$0.0012} & \color{black}0.0243\tiny{$\pm$0.0014} & \color{black}0.0213\tiny{$\pm$0.0013} & \color{black}0.0131\tiny{$\pm$0.001} & \color{black} 0.0422\tiny{$\pm$0.0027} & \color{black}0.0293\tiny{$\pm$0.0018} & \color{black} 0.0119\tiny{$\pm$0.0009} & \color{black} 0.0574\tiny{$\pm$0.0031} & \color{black}0.035\tiny{$\pm$0.002} \\
&& \textsf{SiReN} &
\color{black} \underline{0.0224}\tiny{$\pm$0.0016} & \color{black} \underline{0.0359}\tiny{$\pm$0.001} & \color{black}\underline{0.0321}\tiny{$\pm$0.0012} & \color{black} \underline{0.0184}\tiny{$\pm$0.0012} & \color{black}\underline{0.0596}\tiny{$\pm$0.0016} & \color{black} \underline{0.0427}\tiny{$\pm$0.0014} & \color{black}\underline{0.0163}\tiny{$\pm$0.0011} & \color{black}\underline{0.079}\tiny{$\pm$0.002} & \color{black}\underline{0.05}\tiny{$\pm$0.0016}\\
\cmidrule{2-12}
&\multirow{7}{*}{\rotatebox{90}{\thead{$[20,50)$}}}
& BPRMF & 0.01\tiny{$\pm$0.0013} & 0.0097\tiny{$\pm$0.0008} & 0.0111\tiny{$\pm$0.0013} & 0.0094\tiny{$\pm$0.0014} & 0.0183\tiny{$\pm$0.0019} & 0.0146\tiny{$\pm$0.0016} & 0.0090\tiny{$\pm$0.0012} & 0.0263\tiny{$\pm$0.0025} & 0.0181\tiny{$\pm$0.0019} \\
&& NeuMF & 0.0157\tiny{$\pm$0.001} & 0.0152\tiny{$\pm$0.0011} & 0.018\tiny{$\pm$0.0011}& 0.0137\tiny{$\pm$0.0008} & 0.0267\tiny{$\pm$0.0017} & 0.0222\tiny{$\pm$0.0013} & 0.0124\tiny{$\pm$0.0008} & 0.0365\tiny{$\pm$0.0020} & 0.0265\tiny{$\pm$0.0015} \\
&& NGCF & 0.0235\tiny{$\pm$0.0012} & 0.0232\tiny{$\pm$0.0008} & 0.0271\tiny{$\pm$0.0009} & 0.0198\tiny{$\pm$0.0010} & 0.0392\tiny{$\pm$0.0011} & 0.0329\tiny{$\pm$0.0009} & 0.0177\tiny{$\pm$0.0009} & 0.0526\tiny{$\pm$0.0010} & 0.0388\tiny{$\pm$0.0009} \\
&& LR-GCCF & 0.0245\tiny{$\pm$0.0015} & 0.0244\tiny{$\pm$0.0005} & 0.0286\tiny{$\pm$0.0012} & 0.0208\tiny{$\pm$0.0011} & 0.0413\tiny{$\pm$0.0007} & 0.0348\tiny{$\pm$0.0008} & 0.0186\tiny{$\pm$0.0009} & 0.0552\tiny{$\pm$0.0010} & 0.0409\tiny{$\pm$0.0009} \\
&& LightGCN & 0.0269\tiny{$\pm$0.0014} & 0.0266\tiny{$\pm$0.0005} & 0.0314\tiny{$\pm$0.0011} & 0.0224\tiny{$\pm$0.0012} & 0.0443\tiny{$\pm$0.0008} & 0.0377\tiny{$\pm$0.0009} & 0.0200\tiny{$\pm$0.0010} & 0.0592\tiny{$\pm$0.0013} & 0.0442\tiny{$\pm$0.0009} \\
&& \color{black}SGCN &\color{black} 0.0237\tiny{$\pm$0.0015} & \color{black} 0.0231\tiny{$\pm$0.0006} & \color{black}0.027\tiny{$\pm$0.0012} & \color{black} 0.0207\tiny{$\pm$0.0012} & \color{black} 0.0405\tiny{$\pm$0.0009} & \color{black}0.0335\tiny{$\pm$0.001} & \color{black} 0.0187\tiny{$\pm$0.0009} &\color{black} 0.0549\tiny{$\pm$0.0019} & \color{black}0.0398\tiny{$\pm$0.0011} \\
&& \textsf{SiReN} &
\color{black} \underline{0.0344}\tiny{$\pm$0.0012} & \color{black} \underline{0.0339}\tiny{$\pm$0.0007} & \color{black}\underline{0.0402}\tiny{$\pm$0.0007} & \color{black} \underline{0.0287}\tiny{$\pm$0.0009} & \color{black}\underline{0.0567}\tiny{$\pm$0.0015} & \color{black} \underline{0.0482}\tiny{$\pm$0.0006} & \color{black}\underline{0.0254}\tiny{$\pm$0.0008} & \color{black}\underline{0.0753}\tiny{$\pm$0.0016} & \color{black}\underline{0.0564}\tiny{$\pm$0.0004}\\
\cmidrule{2-12}
&\multirow{7}{*}{\rotatebox{90}{\thead{$[50,\infty)$}}}
& BPRMF & 0.0256\tiny{$\pm$0.0021} & 0.008\tiny{$\pm$0.0005} & 0.0263\tiny{$\pm$0.0017}& 0.0241\tiny{$\pm$0.0024} & 0.0150\tiny{$\pm$0.0015} & 0.0256\tiny{$\pm$0.0020} & 0.0236\tiny{$\pm$0.0025} & 0.0221\tiny{$\pm$0.0024} & 0.0269\tiny{$\pm$0.0024} \\
&& NeuMF & 0.0435\tiny{$\pm$0.0024} & 0.0137\tiny{$\pm$0.0008} & 0.0456\tiny{$\pm$0.0029} & 0.0384\tiny{$\pm$0.0018} & 0.0240\tiny{$\pm$0.0014} & 0.0420\tiny{$\pm$0.0023} & 0.0352\tiny{$\pm$0.0013} & 0.0331\tiny{$\pm$0.0014} & 0.0423\tiny{$\pm$0.0020} \\
&& NGCF & 0.0584\tiny{$\pm$0.0023} & 0.0187\tiny{$\pm$0.0004} & 0.0617\tiny{$\pm$0.0023} & 0.0506\tiny{$\pm$0.0016} & 0.0323\tiny{$\pm$0.0004} & 0.0562\tiny{$\pm$0.0017} & 0.0458\tiny{$\pm$0.0012} & 0.0439\tiny{$\pm$0.0003} & 0.0562\tiny{$\pm$0.0012} \\
&& LR-GCCF & 0.0629\tiny{$\pm$0.0018} & 0.0201\tiny{$\pm$0.0004} & 0.0671\tiny{$\pm$0.002} & 0.0542\tiny{$\pm$0.0012} & 0.0346\tiny{$\pm$0.0003} & 0.0608\tiny{$\pm$0.0014} & 0.0492\tiny{$\pm$0.0013} & 0.047\tiny{$\pm$0.0006} & 0.0606\tiny{$\pm$0.0013} \\
&& LightGCN & 0.0699\tiny{$\pm$0.0015} & 0.0222\tiny{$\pm$0.0004} & 0.0748\tiny{$\pm$0.0015}& 0.0599\tiny{$\pm$0.0013} & 0.0380\tiny{$\pm$0.0004} & 0.0675\tiny{$\pm$0.0012} & 0.0540\tiny{$\pm$0.0013} & 0.0514\tiny{$\pm$0.0004} & 0.0670\tiny{$\pm$0.0008} \\
&& \color{black}SGCN & \color{black} 0.0616\tiny{$\pm$0.0008} &\color{black} 0.0194\tiny{$\pm$0.0004} & \color{black}0.0645\tiny{$\pm$0.0011}& \color{black} 0.054\tiny{$\pm$0.0008} & \color{black} 0.0341\tiny{$\pm$0.0008} & \color{black}0.0593\tiny{$\pm$0.0009} & \color{black} 0.0491\tiny{$\pm$0.0009} & \color{black}0.0465\tiny{$\pm$0.0011} & \color{black}0.0593\tiny{$\pm$0.0009} \\
&& \textsf{SiReN} &
\color{black} \underline{0.0864}\tiny{$\pm$0.001} & \color{black} \underline{0.0277}\tiny{$\pm$0.0004} & \color{black}\underline{0.0924}\tiny{$\pm$0.0009} & \color{black} \underline{0.0732}\tiny{$\pm$0.0011} & \color{black}\underline{0.047}\tiny{$\pm$0.001} & \color{black} \underline{0.0829}\tiny{$\pm$0.0009} & \color{black}\underline{0.0656}\tiny{$\pm$0.0008} &\color{black} \underline{0.0629}\tiny{$\pm$0.0013} & \color{black}\underline{0.082}\tiny{$\pm$0.0008}\\
\bottomrule
\end{tabular}}
\caption{\color{black}Performance comparison among \textsf{SiReN} and six benchmark methods in terms of three performance metrics (average $\pm$ standard deviation) according to three different user groups \color{black} when $K\in \{5, 10, 15\}$\color{black}. Here, the best method for each case is highlighted using underlines.}
\label{cold_table}
\end{table*}
\paragraph{Comparison With Benchmark Methods (RQ3)}
The performance comparison between our \textsf{SiReN} method and \textcolor{black}{four} state-of-the-art GNN methods, including NGCF \cite{wang2019neural}, LR-GCCF \cite{chen2020revisiting}\color{black}, LightGCN \cite{he2020lightgcn}, and SGCN \cite{derr2018signed} \color{black} as well as two baseline MF methods, including BPRMF \cite{BPRMF09} and NeuMF \cite{he2017neural}, for top-$K$ recommendation is comprehensively presented in Table \ref{Q4table} with respect to three performance metrics using three real-world datasets, where \color{black} $K\in\{5,10,15\}$. \color{black} We note that the hyperparameters in all the aforementioned benchmark methods are tuned differently according to each dataset so as to provide the best performance. In Table \ref{Q4table}, the value with an underline indicates the best performer for each case. We would like to make the following insightful observations:
\begin{itemize}
\item Our \textsf{SiReN} method consistently outperforms all six benchmark methods for all datasets regardless of the performance metrics and the values of $K$. The superiority of our method comes from the fact that we are capable of more precisely representing users' preferences without any information loss. This implies that low ratings are indeed informative as long as the low rating information is well exploited through judicious model design and optimization.
\color{black}
\item The second best performer is LightGCN for all the cases. The performance gap between our \textsf{SiReN} method ($X$) and the second best performer ($Y$) is the largest when the Amazon-Book dataset is used; the maximum improvement rates of 28.16\%, 30.93\%, and 28.52\% are achieved in terms of $P@5$, $R@5$, and $nDCG@5$, respectively, where the improvement rate ($\%$) is given by $\frac{X-Y}{Y}\times 100$. We recall that the Amazon-Book dataset has the lowest density (i.e., the highest sparsity) out of three datasets (refer to Table \ref{data_stat}). Thus, from the above empirical finding, it is seen that exploiting negative user--item interactions in sparser datasets would be more beneficial and effective in improving the recommendation accuracy.
\item SGCN is \textcolor{black}{far inferior to some of the state-of-the-art GNN methods (e.g., LR-GCCF and LightGCN). This implies that the GNN model built upon the balance theory is not competitive compared with other GNN-based recommender systems.}
\item The two baseline MF methods show worse performance than the three state-of-the-art GNN methods NGCF, LR-GCCF, and LightGCN. This indicates that exploring the high-order connectivity information via GNNs indeed significantly improves the recommendation accuracy.
\color{black}
\item The gain of LightGCN over LR-GCCF and NGCF is consistently observed. We note that, in contrast to LR-GCCF and NGCF, LightGCN aggregates the information of neighbors without nonlinear activation and weight transformation operations. Our experimental results coincide with the argument in \cite{he2020lightgcn} that a simple aggregator of the information of neighbors using a weighted sum is the most effective as far as GNN-based recommender systems are concerned.
\end{itemize}
\paragraph{Robustness to Interaction Sparsity Levels (RQ4)}
Needless to say, the sparsity issue is one of the crucial challenges in designing recommender systems since a small number of user--item interactions is insufficient to generate high-quality embeddings \cite{nguyen2018npe,wang2019neural}. In this experiment, we demonstrate that making use of low rating scores for better representing users' preferences enables us to alleviate this sparsity issue. To this end, we partition the set of users in the test set into three groups according to the number of interactions in the training set as in \cite{nguyen2018npe,wang2019neural}. More precisely, for each dataset, we split the users into three groups, each of which is composed of the users whose number of interactions in the training set falls within $[0,20)$, $[20,50)$, and $[50,\infty)$, respectively. In Table \color{black} \ref{cold_table}, we comprehensively carry out the performance comparison between our \textsf{SiReN} method and six benchmark methods with respect to three performance metrics of top-$K$ recommendation using the ML-1M, Amazon-Book, and Yelp datasets, where experimental results are shown according to three interaction sparsity levels for each dataset. Our findings can be summarized as follows:
\begin{itemize}
\item Our \textsf{SiReN} method consistently outperforms all the benchmark methods. This demonstrates that exploiting the set of negative interactions in designing GNN-based recommender systems is useful in improving recommendation accuracy regardless of interaction sparsity levels.
\item The performance gap between our \textsf{SiReN} method and the second best performer is the largest for the user group having $[0,20)$ interactions in the Amazon-Book dataset; the maximum improvement rates of 36.79\%, 36.98\%, and 36.9\% are achieved in terms of $P@5$, $R@5$, and $nDCG@5$, respectively. As stated above, the performance improvement of \textsf{SiReN} over competing methods is significant when sparse datasets are used.
\item As the number of interactions per user increases, the performance is likely to be enhanced for all the methods regardless of the dataset. It is obvious that more user--item interactions yield higher recommendation accuracy.
\end{itemize}
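For concreteness, the grouping step described above can be reproduced with a few lines of Python. The sketch below is only an illustration; the data frame layout (one row per user--item interaction with a \texttt{user} column) and all variable names are assumptions and do not refer to our actual preprocessing code.
\begin{verbatim}
import numpy as np
import pandas as pd

def split_by_sparsity(train_df, test_users,
                      bins=(0, 20, 50, np.inf)):
    # number of training interactions per test user
    counts = train_df.groupby('user').size()
    counts = counts.reindex(test_users, fill_value=0)
    labels = ['[0,20)', '[20,50)', '[50,inf)']
    groups = pd.cut(counts, bins=list(bins),
                    right=False, labels=labels)
    return {lab: counts.index[groups == lab].tolist()
            for lab in labels}
\end{verbatim}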
\section{Concluding Remarks}
In this paper, we explored a fundamentally important problem of how to take advantage of both high and low rating scores in developing GNN-based recommender systems. To tackle this challenge, we introduced a novel method, termed \textsf{SiReN}, that is designed based on sign-aware learning and optimization models along with a GNN architecture. Specifically, we presented an approach to 1) constructing a signed bipartite graph $G^s$ to distinguish users' positive and negative feedback and then partitioning $G^s$ into two edge-disjoint graphs $G^p$ and $G^n$ with positive and negative edges, respectively, 2) generating two embeddings for $G^p$ and $G^n$ via a GNN model and an MLP, respectively, and then using an attention model to discover the final embeddings, and 3) training our learning models by establishing a sign-aware BPR loss function that captures each relation of positively and negatively connected neighbors. Using three real-world datasets, we demonstrated that our \textsf{SiReN} method remarkably outperforms four state-of-the-art GNN methods as well as two baseline MF methods while showing gains over the second best performer (i.e., LightGCN) by up to 30.93\% in terms of the recommendation accuracy. We also demonstrated that our proposed method is robust to more challenging situations according to interaction sparsity levels by showing that the performance improvement of \textsf{SiReN} over state-of-the-art methods is significant when sparse datasets are used. Additionally, we empirically showed the effectiveness of the MLP used for the graph $G^n$ with negative edges.
Potential avenues of future research include the design of a more sophisticated GNN model that fits well into $G^n$ in signed bipartite graphs. Here, the challenges lie in developing a new information aggregation and propagation mechanism. Furthermore, the design of multi-criteria recommender systems utilizing sign awareness remains for future work.
\section*{Acknowledgments}
The work of W.-Y. Shin was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1A2C3004345) and by the Yonsei University Research Fund of 2021 (2021-22-0083). The work of S. Lim was supported by the Institute of Information \& Communications Technology Planning \& Evaluation (IITP) grant funded by the Korea Government (MSIT) (No. 2020-0-01441, Artificial Intelligence Convergence Research Center (Chungnam National University)). The authors would like to thank Dr. Hajoon Ko from Harvard University for his helpful comments.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
With the advent of novel light sources, the emerging field of x-ray quantum optics has gained considerable momentum, both on the experimental and theoretical side~\cite{Adams2013}. While ideas based on quantum coherence and interference could in principle be realized with inner-shell electrons, solid state targets obeying the M\"ossbauer effect~\cite{Moessbauer1958} have sparked interest in many recent works~\cite{Shvydko1996,Palffy2009,Shakhmuratov2009,Shakhmuratov2011,Liao2012,Liao2012b,Vagizov2014}.
A particularly interesting setting in which quantum optics with M\"ossbauer nuclei can be realized is specifically engineered planar x-ray cavities. Embedding a thin layer of resonant nuclei in such cavities has facilitated the observation of a number of phenomena, such as the cooperative Lamb shift and single-photon superradiance~\cite{Roehlsberger2010}, Fano line shape control and interferometric phase measurements~\cite{Heeg2014b}, magnetically controlled reflection spectra modified by spontaneously generated coherences~\cite{Heeg2013} and group velocity control of x-ray pulses~\cite{Heeg2014}.
However, there is an additional cavity configuration which has sparked interest recently. In a setting with two suitably placed ensembles of M\"ossbauer nuclei (the iron isotope $^{57}$Fe with its transition at $14.4$~keV) embedded in the cavity, it was possible to observe a reflection spectrum with a deep interference minimum in the center due to the phenomenon of electromagnetically induced transparency (EIT)~\cite{Harris1997,Fleischhauer2005,Roehlsberger2012}. This is a remarkable result, since typically two coherent driving fields are required for this effect to emerge. In Ref.~\cite{Roehlsberger2012}, however, the EIT experiment was established in a thin-film cavity with only a single excitation from a synchrotron beam, whereas the second field was intrinsically provided by intracavity couplings between the two $^{57}$Fe layers. For the modeling of the experiment, different semiclassical approaches can be employed~\cite{Parratt1954,Roehlsberger2005}, while a consistent description based on quantum optics, as usually desired for EIT, is still lacking.
A first quantum optical model for the light-matter interaction in the cavity was developed in Ref.~\cite{Heeg2013b}. However, it does not yet cover cavity settings with multiple resonant layers, and hence it is also not yet capable of describing the EIT experiment. But motivated by the expected significance of multilayer configurations, it would be highly desirable to also have a microscopic theory at hand, which allows for a deeper understanding. Topics of interest include the nature of the above-mentioned intrinsic cavity-mediated coupling between the different resonant layers, and perspectives on how multilayer cavities can be specifically engineered. We note that other approaches based on scattering theory were employed to model the two-layer layout~\cite{Xu2013}; however to our knowledge they remained unsuccessful in providing a quantitative description of the EIT experiment.
In this work, we generalize the quantum optical theory from Ref.~\cite{Heeg2013b} with the aim to describe the single-photon EIT experiment~\cite{Roehlsberger2012} and related settings. To this end, we extend the description to include multiple cavity modes as well as multiple layers. The extension to multiple modes allows us to accurately describe the cavity reflection in the absence of resonant nuclei. However, the general shape of the nuclear contribution to the measured signal turns out to be unchanged when considering the two extensions separately. In this case, the line shape predicted in the absence of a magnetic hyperfine splitting is a Lorentz profile, in which only the coefficients are modified due to the additional elements in the theory. However, if multiple cavity modes and multiple layers are considered simultaneously, a new class of nuclear reflection spectra is obtained. In particular, restricting the analysis to two resonant layers, the EIT-like spectrum observed in Ref.~\cite{Roehlsberger2012} is reobtained and the predicted scalings are in accordance with previous semiclassical calculations. Due to the microscopic ansatz of our theoretical model, we can provide a full quantum interpretation of the system. A good agreement to numerical results obtained from semiclassical descriptions is found over a broad parameter range. Furthermore, our extended model opens up avenues to engineer a broader set of effective level schemes at x-ray energies by combining the advanced possibilities of multiple modes and layers together with magnetic hyperfine splitting.
This paper is structured as follows: In Sec.~\ref{sec:basic} we recapitulate the cavity system and the basic model which was developed for its description in Ref.~\cite{Heeg2013b}. After this, we generalize the model to include multiple layers and multiple modes. In Secs.~\ref{sec:effect_modes} and~\ref{sec:effect_layers}, we analyze the effects of the two extensions separately and in Sec.~\ref{sec:effect_both} the consequence of both extensions applied simultaneously are discussed. Finally, the general model is applied to describe the EIT setting from Ref.~\cite{Roehlsberger2012}.
\section{Recap of the basic model\label{sec:basic}}
The setup which is considered in this work is a thin-film cavity with embedded M\"ossbauer nuclei, probed by hard x rays in grazing incidence as visualized in Fig.~\ref{fig:setup}. Such a cavity is formed by a stack of different materials. At its boundaries materials with a high electron density, such as platinum or palladium, act as mirrors, while the material in the center, e.g.~carbon, has a low electron density and provides a guiding layer for the x rays. For certain incident angles in the mrad range, the x rays can resonantly excite a guided cavity mode and propagate inside the cavity, rendering the structure a waveguide-like system. By embedding M\"ossbauer nuclei in the center of the waveguide, a near-resonant interaction of the x-ray light with the transitions of the resonant nuclei is achieved. The reflected signal forms the main observable, and its spectral shape is crucially influenced by the light-matter interaction in the cavity. As proven in a number of recent experiments~\cite{Roehlsberger2010,Roehlsberger2012,Heeg2013,Heeg2014,Heeg2014b}, this setup constitutes an auspicious platform for the exploration of quantum optical phenomena in the x-ray regime.
\begin{figure}[t]
\centering
\includegraphics{setup}
\caption{\label{fig:setup}(Color online) Schematic of the considered setup. A thin-film cavity is probed by hard x rays in grazing incidence. The light-matter interaction with the resonant nuclei in the center of the cavity, possibly under the influence of a magnetic field, modifies the observed reflected signal.}
\end{figure}
A quantum optical model for the description of the x-ray light-matter interaction in these thin-film cavities was introduced in Ref.~\cite{Heeg2013b}, however, it is limited to a small subset of possible cavity layouts. In the following, we will briefly present the existing theoretical approach, before we then continue to generalize the model in order to cover more elaborate scenarios.
\subsection{Cavity\label{sec:basic_cavity}}
As already mentioned above, guided cavity modes can be excited for certain resonance angles $\theta_0$. These angles depend on the cavity layout, such as the materials and the layer thicknesses, and can be determined by computing the angular-dependent reflectance in the absence of resonant nuclei. At certain positions of this curve, the reflection is strongly suppressed, indicating the presence of a guided cavity mode. The reason for this suppression is the destructive interference between the reflection directly at the cavity surface and the reflection of the light which entered the cavity mode.
\begin{figure}[t]
\centering
\includegraphics{field}
\caption{\label{fig:field}(Color online) Field distribution in the two cavities defined in Tab.~\ref{tab:layers_eit} is shown. The incidence angle ($\theta \approx 3.5$~mrad) is chosen such that the third guided mode is driven. The cavity field is normalized to an input field with unity intensity. Solid red lines take into account only electronic scattering in the cavity, which is realized at x-ray frequencies off-resonant to the nuclear transition in $^{57}$Fe, dashed blue curves assume resonant driving of the transition. The shaded areas (red, gray, blue) indicate the positions of the different cavity materials (Pt, C, Fe).}
\end{figure}
Exemplary field distributions for two particular cavities, which will be analyzed later, are shown in Fig.~\ref{fig:field}. The incidence angle is chosen such that the third guided mode is driven, which is reflected in the three antinodes of the field intensity inside the cavity. The external cavity field stems from the interference of the incident (``$\exp{(i k_z z)}$'') and reflected beam (``$R \exp{(-i k_z z)}$''). Hence, the suppression in the reflectance $R \ll 1$, which is characteristic for the guided modes, is visible as a small modulation of the external field. In contrast, if the x-ray frequency matches the transition of the nuclei in the cavity (dashed lines in Fig.~\ref{fig:field}), the relative strength of the reflected light becomes larger as it now consists not only of electronic, but also of nuclear scattering contributions which add up to the observed signal. Note that the external field in Fig.~\ref{fig:field} is shown close to the cavity surface and therefore an interference pattern of the incident and the reflected light appears. In the far field, however, these two contributions are easily distinguished due to the different propagation direction, as visualized in the schematic in Fig.~\ref{fig:setup}.
Since the different cavity modes are well separated in their resonant incident angles by several mrad, only at most one mode is usually driven near its resonance. Nevertheless, to take into account the two polarization states of the x rays perpendicular to the propagation direction, two cavity modes $a_1$ and $a_2$ are included in the theoretical description. In the rotating frame of the external driving field with frequency $\omega$, the Hamiltonian characterizing the free evolution of the cavity modes with photon annihilation and creation operators $a$ and $a^\dagger$ as well as their coupling to the external field $a_{\textrm{in}}$ is given by ($\hbar = 1$ used here and in the following)~\cite{Heeg2013b}
\begin{align}
H_M &= \Delta_{C} {a_1}^\dagger {a_1} + \Delta_{C} {a_2}^\dagger {a_2} \nonumber \\
&+ i \sqrt{2 \kappa_{R}} \left[ (\vec{\hat a}_1^* \!\cdot\! \vec{\hat a}_\text{in}) \, a_\text{in} a_1^\dagger - (\vec{\hat a}_\text{in}^* \!\cdot\! \vec{\hat a}_1) \, a_\text{in}^* a_1 \right] \nonumber \\
&+ i \sqrt{2 \kappa_{R}} \left[ (\vec{\hat a}_2^* \!\cdot\! \vec{\hat a}_\text{in}) \, a_\text{in} a_2^\dagger - (\vec{\hat a}_\text{in}^* \!\cdot\! \vec{\hat a}_2) \, a_\text{in}^* a_2 \right] \; .\label{eq:basic_H_mode}
\end{align}
Here, the expressions in the round brackets denote scalar products of the polarization directions between the incident radiation ($\vec{\hat a}_\textrm{in}$) and the cavity modes ($\vec{\hat a}_{1}$,$\vec{\hat a}_{2}$). The mismatch between the frequencies of the cavity mode and the external field is denoted by the cavity detuning
\begin{align}
\Delta_C(\theta) = \omega_C - \omega = \omega \left[\sqrt{\cos{(\theta)}^2 + \sin{(\theta_0)}^2} -1 \right] \;. \label{eq:cavity_detuning}
\end{align}
We emphasize that the cavity detuning can be controlled with the incidence angle $\theta$ and does not depend on the frequency of the driving field for all practical purposes. The reason for this is the special grazing incidence geometry, in which the cavity is probed, and the fact that the nuclear resonance typically is orders of magnitude narrower than the cavity line width.
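For the interested reader, Eq.~(\ref{eq:cavity_detuning}) is straightforward to evaluate numerically. The short Python sketch below computes the cavity detuning on a grid of incidence angles; the resonance angle and photon energy are placeholder values chosen only for illustration.
\begin{verbatim}
import numpy as np

def cavity_detuning(theta, theta0, omega):
    # Delta_C = omega*(sqrt(cos^2(theta)+sin^2(theta0)) - 1)
    return omega * (np.sqrt(np.cos(theta)**2
                            + np.sin(theta0)**2) - 1.0)

omega = 14.4e3                         # photon energy (eV)
theta = np.linspace(1e-3, 6e-3, 500)   # incidence angles (rad)
dC = cavity_detuning(theta, 3.5e-3, omega)   # detunings (eV)
\end{verbatim}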
\begin{table}[t]
\centering
\caption{\label{tab:layers_eit}Parameters for the two-layer cavities analyzed in this work. The geometry defined by the first thickness column corresponds to a node--antinode configuration in which an EIT-like spectrum is expected; the second parameter set defines an antinode--node configuration in which no EIT-like spectrum was observed~\cite{Roehlsberger2012}.}
\begin{tabularx}{\columnwidth}{X c X r @{}p{1cm} X r @{}p{1cm} X}
\hline \hline \\ [-1.5ex]
& Material & & \multicolumn{2}{c}{Thickness [nm]} && \multicolumn{2}{c}{Thickness [nm]} & \\[0.5ex]
\hline \\ [-1.5ex]
&Pt& & $~~~~~~~$3 & && $~~~~~~~$3 &&\\
&C& & 10 &.5 && 20 & &\\
&$^{57}$Fe& & 3 & && 3 & & \\
&C& & 6 &.5 && 6 &.5 &\\
&$^{57}$Fe& & 3 & && 3 & & \\
&C& & 21 & && 11 &.5 &\\
&Pt& & 10 & && 10 & &\\[0.5ex]
\hline \hline
\end{tabularx}
\end{table}
Furthermore, incoherent effects such as the photon loss of the cavity modes have to be included in the model. This can be done via a description in terms of the density matrix $\rho$ and Lindblad operators $\mathcal{L}[\rho]$. We define
\begin{equation}
\mathcal{L}[\rho, \mathcal{O}^+, \mathcal{O}^-] = \big( \mathcal{O}^+ \mathcal{O}^- \rho + \rho \mathcal{O}^+ \mathcal{O}^- -2\mathcal{O}^- \rho \mathcal{O}^+ \big) \label{eqn:lindblad_helper} \;
\end{equation}
for arbitrary operators $\mathcal{O}^+$ and $\mathcal{O}^-$. Then, the photon loss can be described via
\begin{align}
\mathcal{L}_M[\rho] = &-\kappa\, \mathcal{L}[\rho, a_1^\dagger, a_1] -\kappa\, \mathcal{L}[\rho, a_2^\dagger, a_2] \label{eqn:cavity_decay} \;.
\end{align}
\subsection{Nuclei\label{sec:basic_nuclei}}
Next, we include the resonant nuclei in the description. In this work, to be specific, we refer to the commonly used M\"ossbauer isotope $^{57}$Fe with its transition at $\omega_0 = 14.4$~keV and a line width of $\gamma = 4.7$~neV. To encompass the general case, we allow for a magnetic hyperfine splitting of the nuclear resonance. Then, the ground state splits up into two states separated by the energy $\delta_g$ and four excited states with energy spacing $\delta_e$ between adjacent states. This leads to six M1 allowed transitions in the nucleus~\cite{Hannon1999}, which are summarized in Tab.~\ref{tab:transitions}. We note that the energy splitting of the ground states is in the neV range, such that both states are essentially evenly populated at room temperature, since the Boltzmann factor $\exp{(-\delta_g/k_B T)}$ is practically unity.
In a suitable interaction picture, the nuclear dynamics is characterized by the Hamiltonian
\begin{align}
H_N = H_0 + H_{C} \;.
\end{align}
The diagonal part
\begin{align}
H_0 = \sum_{n=1}^N \bigg[ &\sum_{j=1}^2 \delta_g (j- \tfrac{3}{2}) \; \ketbra{g_j^{(n)}} \nonumber \\
+ &\sum_{j=1}^4 \left( \delta_e (j- \tfrac{5}{2}) - \Delta \right) \; \ketbra{e_j^{(n)}} \bigg] \, \label{eq:basic_H_0}
\end{align}
contains the energy detuning $\Delta = \omega - \omega_0$ and the energy splitting of the states due to the magnetic hyperfine interaction for all $N$ atoms. It is important to note that in contrast to the cavity detuning $\Delta_C$, the detuning $\Delta$ does not depend on the incidence angle $\theta$, but only on the frequency of the externally applied x-ray field. In Eq.~(\ref{eq:basic_H_0}), the index $n$ sums over all $N$ nuclei, and $j$ sums over the two ground and four excited hyperfine states, respectively. The coupling of the six transitions $\mu$ (see Tab.~\ref{tab:transitions}) to the two polarization modes are given as
\begin{align}
H_{C} = \sum_{n=1}^N \sum_{\mu=1}^6 \Big[ ( &\vec{\hat d}_\mu^* \!\cdot\! \vec{\hat a}_1 )\, g_\mu^{(n)} S_{\mu+}^{(n)} a_1 \nonumber \\
+ \, ( &\vec{\hat d}_\mu^* \!\cdot\! \vec{\hat a}_2 )\, g_\mu^{(n)} S_{\mu+}^{(n)} a_2 \,+ \textrm{H.c.} \Big] \; . \label{eq:basic_H_C}
\end{align}
Here, $\vec{\hat d}_\mu$ denotes the normalized dipole moment, $S_{\mu-}^{(n)}$ [$S_{\mu+}^{(n)}$] is the nuclear lowering [raising] operator of the transition $\mu$ for atom $n$, and the coupling coefficient $g_\mu^{(n)} = g \: c_{\mu} \: e^{i\,\phi^{(n)}}$ is composed of a constant $g$, the Clebsch-Gordan coefficient $c_\mu$ of the respective transition and a phase taking into account the position of the atom $n$.
Additionally, the spontaneous decay of the excited nuclear states is taken into account via
\begin{align}
\mathcal{L}_\text{SE}[\rho] &= \sum_{n=1}^N \mathcal{L}_\text{SE}^{(n)}[\rho] \label{eqn:spont_emission} \;, \\
\mathcal{L}_\text{SE}^{(n)}[\rho] &= -\frac{\gamma}{2} \sum_{\mu=1}^{6} c_\mu^2 \, \mathcal{L}[\rho, S_{\mu+}^{(n)}, S_{\mu-}^{(n)}] \;.
\end{align}
\begin{table}[t]
\caption{Overview of the M1 allowed transitions in the $^{57}$Fe nucleus with transition index $\mu$. Shown are the involved states, the transition energy $\Delta E$ relative to the energy at vanishing magnetization $\omega_0$, the Clebsch-Gordan coefficient (CG) $c_\mu$ and the polarization type. Linear polarization is denoted by $\pi^0$, right (left) circular polarization as $\sigma^+$ ($\sigma^-$).\label{tab:transitions}}
\begin{tabular}{ccccc}
\hline \hline \\[-1.5ex]
$\quad\mu\quad$ & $\quad$Transition$\quad$ & $\quad\Delta E\quad$& $\quad$C-G$\quad$ & Polarization \\[0.5ex]
\hline\\[-1.5ex]
1 & $\ket{g_1}\leftrightarrow\ket{e_1}$ & $-\delta_g/2-3/2\delta_e$ & $1$ & $\sigma^-$ \\
2 & $\ket{g_1}\leftrightarrow\ket{e_2}$ & $-\delta_g/2-1/2\delta_e$ & $\sqrt{2/3}$ & $\pi^0$ \\
3 & $\ket{g_1}\leftrightarrow\ket{e_3}$ & $-\delta_g/2+1/2\delta_e$ & $\sqrt{1/3}$ & $\sigma^+$ \\
4 & $\ket{g_2}\leftrightarrow\ket{e_2}$ & $\delta_g/2-1/2\delta_e$ & $\sqrt{1/3}$ & $\sigma^-$ \\
5 & $\ket{g_2}\leftrightarrow\ket{e_3}$ & $\delta_g/2+1/2\delta_e$ & $\sqrt{2/3}$ & $\pi^0$ \\
6 & $\ket{g_2}\leftrightarrow\ket{e_4}$ & $\delta_g/2+3/2\delta_e$ & $1$ & $\sigma^+$\\[0.5ex]
\hline \hline
\end{tabular}
\end{table}
\subsection{Input-output relation and observables\label{sec:basic_io}}
In order to relate the internal operators in the cavity to externally accessible quantities, the input-output relations are employed~\cite{Gardiner2004}. The output field $a_{\textrm{out}}$, also visualized in Fig.~\ref{fig:setup}, is given by
\begin{align}
a_\text{out} &= -a_\text{in} \, \left(\vec{\hat a}_\text{out}^* \!\cdot\! \vec{\hat a}_\text{in} \right) \nonumber \\
&+ \sqrt{2 \kappa_{R}} \left[ (\vec{\hat a}_\text{out}^* \!\cdot\! \vec{\hat a}_1) \, a_1 + (\vec{\hat a}_\text{out}^* \!\cdot\! \vec{\hat a}_2) \, a_2 \right] \,.
\label{eq:basic_io}
\end{align}
With this operator at hand, the reflection coefficient reads
\begin{align}
R = \frac{\ew{a_{\textrm{out}}}}{a_{\textrm{in}}} \;. \label{eq:basic_R}
\end{align}
Note that in a typical experiment, the reflectance $|R|^2$ is measured.
\subsection{Full model\label{sec:basic_full}}
The expressions given above form the building blocks of the general model developed in Ref.~\cite{Heeg2013b}. The full master equation reads
\begin{align}
\frac{d}{dt}\rho = -i [ H_M + H_N, \rho] + \mathcal{L}_M[\rho] + \mathcal{L}_\text{SE}[\rho]\;.
\label{eq:basic_me}
\end{align}
In principle, the dynamics of the system could be solved this way. However, due to the huge Hilbert space connected with the $N$ atoms and two cavity modes, this task is challenging. Hence, in Ref.~\cite{Heeg2013b} two approximations well justified at present experimental conditions were performed. First, the cavity modes $a_1$ and $a_2$ were adiabatically eliminated. This is possible since the cavity modes have a low quality factor $Q$~\cite{Roehlsberger2010}, which is known as the bad-cavity regime~\cite{Meystre2007}. Hence, the time scale $1/\kappa$, on which the mode dynamics equilibrates, is very short compared to the nuclear time scale and therefore the cavity modes can be considered as stationary. Second, it is possible to restrict the analysis to the subspace of up to one excitation in the system, since experiments performed at current synchrotron radiation sources provide on average less than one resonant photon per pulse~\cite{Shenoy2008,Roehlsberger2010,Roehlsberger2013}. This simplifies the master equation considerably and compact analytic expressions for the observables can be obtained. Below we will exploit the same approaches to also simplify the extended model which will then include different resonant layers and multiple modes.
\section{Generalization to multiple layers and multiple modes\label{sec:generalization}}
We will now extend the basic model introduced in the last section by explicitly including multiple layers of resonant nuclei as well as more than a single cavity mode in the theory. As already discussed above, in typical experiments at most one cavity mode can be driven resonantly at a time. This is due to their large angular separation, which significantly exceeds the beam divergence at modern x-ray sources. The angular separation in turn leads to large cavity detunings $\Delta_C$ for nonresonant modes. Nevertheless, as we will show below, the additional modes can sometimes be of importance, since the nuclei can in principle scatter into them and since they become relevant when the reflectance is considered over a broad range of incidence angles. Also the inclusion of multiple layers is a highly desirable goal, as motivated by the observation of EIT in Ref.~\cite{Roehlsberger2012}.
Clearly, the coefficients in the master equation, such as decay rates $\kappa$ or coupling constants $g$, differ for each mode and each layer and ought to be marked with an index for the respective element. In an attempt to reduce confusion, we stick to the following notation: An atomic index is denoted by an upper index $n$ in brackets. Lower indices $\mu$ indicate a transition, as listed in Tab.~\ref{tab:transitions}. The different cavity modes are distinguished by an upper index ${[j]}$ in squared brackets. A curly bracket $\{l\}$ indicates that the respective quantity is related to layer~$l$. This notation is summarized in Tab.~\ref{tab:notation}.
We start by revisiting the internal and external electromagnetic field. We consider a single incident field $a_{\textrm{in}}$, which impinges onto the cavity surface under the grazing angle $\theta$, and an outgoing field $a_{\textrm{out}}$, emitted at the respective reflection angle $\pi - \theta$. Compared to the initial analysis in Sec.~\ref{sec:basic}, the input field does not only drive one cavity mode $a$, but multiple modes $a^{[j]}$. At the same time, the output field is driven by these modes and, naturally, the resonant nuclei will also interact with the different cavity field modes. This means that we also have to distinguish the coupling coefficients and decay rates for each cavity mode.
\begin{table}[t]
\centering
\caption{\label{tab:notation}Index notation used throughout this work.}
\begin{tabularx}{\columnwidth}{X c X c X X }
\hline \hline \\[-1.5ex]
& Index & & Quantity & \\[0.5ex]
\hline \\[-1.5ex]
& $n,m$ & & nucleus $n, m \in\{1,\dots,N\}$ & \\
& $\mu, \nu$ & & nuclear transition $\mu, \nu \in\{1,\dots,6\}$ & \\
& $\{l\}, \{k\}$ & & nuclear layer & \\
& $[j]$ & & cavity mode& \\[0.5ex]
\hline \hline
\end{tabularx}
\end{table}
Generalizing Eq.~(\ref{eq:basic_io}) from the original theory, we write for the input-output relation
\begin{align}
a_{\textrm{out}} &= -a_{\textrm{in}} \scpr{\vec{\hat a}_\textrm{out}^*}{\vec{\hat a}_\textrm{in}} \nonumber \\ &+ \sum_j \sqrt{2\kappa_R^{[j]}} \Big[ a_1^{[j]} \scpr{\vec{\hat a}_\textrm{out}^*}{\vec{\hat a}_1^{[j]}} + a_2^{[j]} \scpr{\vec{\hat a}_\textrm{out}^*}{\vec{\hat a}_2^{[j]}} \Big]
\end{align}
and the Hamiltonian describing the dynamics of the modes given in Eq.~(\ref{eq:basic_H_mode}) becomes
\begin{align}
H_M = \sum_j & \Delta_C^{[j]} \left( {a_1^{[j]}}^\dagger a_1^{[j]} + {a_2^{[j]}}^\dagger a_2^{[j]} \right) + i \sum_j \sqrt{2\kappa_R^{[j]}} \nonumber \\
\times\;\Big[\;\;&
a_{\textrm{in}} {a_1^{[j]}}^{\dagger} \scpr{{{\vec{\hat a}_1^{[j]}}^*}}{\vec{\hat a}_\textrm{in}}
-a_{\textrm{in}}^*{a_1^{[j]}} \scpr{\vec{\hat a}_\textrm{in}^*}{{{\vec{\hat a}_1^{[j]}}}}
\nonumber \\
+\;&a_{\textrm{in}} {a_2^{[j]}}^{\dagger} \scpr{{{\vec{\hat a}_2^{[j]}}^*}}{\vec{\hat a}_\textrm{in}}
-a_{\textrm{in}}^*{a_2^{[j]}} \scpr{\vec{\hat a}_\textrm{in}^*}{{{\vec{\hat a}_2^{[j]}}}}
\Big] \;.
\end{align}
In a similar fashion, the couplings with nuclei are modified to include the sum over all modes $j$ and the interaction Hamiltonian given in Eq.~(\ref{eq:basic_H_C}) and describing a transition $\mu$ of an atom $n$ is extended accordingly. While the coupling to the single cavity mode was denoted by $g_\mu^{(n)}$ before, the coefficients related to a general cavity mode $j$ are now named $g_\mu^{(n)[j]}$. They can be decomposed into $g_\mu^{(n)[j]} = g^{[j]} c_\mu e^{i \phi^{(n)}}$, where $c_\mu$ is the Clebsch-Gordan coefficient of the transition $\mu$, $\phi^{(n)}$ accounts for a potential phase imprinted on the nucleus by the field due to the atomic position, and $g^{[j]}$ denotes a universal coupling constant between mode $j$ and all nuclei and transitions. Note that this factorization is possible in this way only because we assumed a single thin layer of resonant nuclei in this analysis.
However, as soon as we consider multiple layers, the assumption of uniform coupling strengths $g^{[j]}$ for all atoms is clearly no longer justified. For example, for a cavity without resonant nuclei, roughly an intensity profile with $\sin^2$ shape can be expected along the cavity for the guided modes. Different layers at different positions will thus experience different field strengths and the coupling coefficient $g$ to the cavity mode cannot be considered as a constant anymore. Also, we want to emphasize that the same argument holds if a very thick layer of resonant nuclei is present in the cavity. Here, the nuclei close to the two layer boundaries might be exposed to strongly differing field strengths and the respective coupling coefficients become spatially dependent.
Both cases can be modeled by introducing several ensembles of nuclei. The atoms in each ensemble are situated at the same depth of the cavity and hence couple to the modes with a common coefficient. We denote this coupling parameter between the nuclei in the layer $l$ and the cavity modes $j$ by $g^{[j]\{l\}}$. The coupling coefficient of the transition $\mu$ in single atom $n$ located in the layer $l_n$ then reads
\begin{align}
g_{\mu}^{(n)[j]} = g^{[j]\{l_n\}} c_\mu e^{i \,\phi^{(n)}}\;.
\end{align}
The number of nuclei in each layer is $N^{\{l\}}$ and the total number of resonant nuclei is $N = \sum_l N^{\{l\}}$. In this formulation, the coupling Hamiltonian from Eq.~(\ref{eq:basic_H_C}) is generalized to
\begin{align}
H_C = \sum_{n, \mu, j} \Big[
&\scpr{\vec{\hat d}_\mu^*}{\vec{\hat a}_1^{[j]}} g_\mu^{(n)[j]} S_{\mu+}^{(n)} a_1^{[j]} \nonumber \\ +
&\scpr{\vec{\hat d}_\mu^*}{\vec{\hat a}_2^{[j]}} g_\mu^{(n)[j]} S_{\mu+}^{(n)} a_2^{[j]}
+ \textrm{H.c.} \Big] \;.
\end{align}
The diagonal part $H_0$ containing the energy shifts of the states and the detuning $\Delta$ is unaffected by our extension of the model.
In addition to the Hamiltonian dynamics, the incoherent part capturing the mode decays needs to be extended accordingly. The Lindblad operator describing the photon loss in the cavity modes, see Eq.~(\ref{eqn:cavity_decay}), becomes
\begin{align}
\mathcal{L}_M[\rho] = &-\sum_j \kappa^{[j]}\, \mathcal{L}[\rho, {a_1^{[j]}}^\dagger, a_1^{[j]}] \nonumber \\
&-\sum_j \kappa^{[j]} \, \mathcal{L}[\rho, {a_2^{[j]}}^\dagger, a_2^{[j]}] \;,
\end{align}
whereas the spontaneous emission contribution of the nuclei remains the same.
\subsection{Effective Master equation}
In a next step we simplify the master equation by applying the same approximations as in the case of the original model, which were described already in Sec.~\ref{sec:basic_full}.
First, we perform the adiabatic elimination of the cavity modes. In contrast to the basic model, we do not eliminate only the two modes $a_1$ and $a_2$ for the two polarization directions, but a total of twice the number of guided modes, i.e., two polarization modes for each guided mode $j$. However, since the different modes are not directly mutually coupled, they can be eliminated independently and their contributions to the effective master equation sum up.
From the Heisenberg equation of motion for the cavity mode operators
\begin{align}
\frac{d}{dt} a_\iota^{[j]} = i [ H_M + H_0 + H_C, a_\iota^{[j]} ] - \kappa^{[j]} a_\iota^{[j]} \;
\end{align}
we find the stationary solutions
\begin{align}
a_\iota^{[j]} &= \frac{1}{\kappa^{[j]} + i\Delta_C^{[j]}}
\bigg[ \sqrt{2 \kappa_R^{[j]}} a_{\textrm{in}} ({\vec{\hat a}_\iota^{[j]}}^*\!\cdot\!\vec{\hat a}_\text{in}) \nonumber \\
&\qquad\qquad- i \sum_{n,\mu} ({\vec{\hat a}_\iota^{[j]}}^*\!\cdot\!\vec{\hat d}_\mu) {g_\mu^{(n)[j]}}^* S_{\mu-}^{(n)} \bigg] \;,
\end{align}
where $\iota = 1,2$ indicates the two perpendicular cavity mode polarizations.
Inserting these operators in the full model, we obtain the effective master equation for the nuclei
\begin{align}
\frac{d}{dt} \rho = -i [ H_\textrm{eff}, \rho] + \mathcal{L}_\textrm{eff}[\rho] \;,
\end{align}
with the effective Hamiltonian and the Lindblad terms
\begin{align}
H_\textrm{eff} &= H_0 + H_\Omega + H_\textrm{LS} \;, \\
\mathcal{L}_\textrm{eff}[\rho] &= \mathcal{L}_\text{SE}[\rho] + \mathcal{L}_\text{cav}[\rho] \;.
\end{align}
In the same notation as in Ref.~\cite{Heeg2013b}, the individual components of these equations are found as
\begin{align}
H_\Omega &= \sum_{n, \mu} \scprt{\vec{\hat d}_\mu^*}{\vec{\hat a}_\textrm{in}} \sum_j \left( \Omega^{[j]} g_\mu^{(n)[j]}\right) S_{\mu+}^{(n)} + \textrm{H.c.} \; , \label{eq:H_Omega_full} \\[2ex]
H_\textrm{LS} &= \sum_{n,m} \sum_{\mu,\nu} \scprt{\vec{\hat d}_\mu^*}{\vec{\hat d}_\nu} \sum_j \left( \delta_\textrm{LS}^{[j]} g_\mu^{(n)[j]} {g_\nu^{(m)[j]}}^* \right) \nonumber \\ &\quad \times S_{\mu+}^{(n)} S_{\nu-}^{(m)} \; , \\[2ex]
\mathcal{L}_\textrm{cav}[\rho] &= \sum_{n,m} \sum_{\mu, \nu} \scprt{\vec{\hat d}_\mu^*}{\vec{\hat d}_\nu} \sum_j\left(- \zeta_{S}^{[j]} g_{\mu}^{(n)[j]} {g_{\nu}^{(m)[j]}}^{*} \right) \nonumber \\ &\quad \times \mathcal{L}[\rho, S_{\mu+}^{(n)}, S_{\nu-}^{(m)}] \;, \label{eq:L_cav_full}
\end{align}
with the coefficients
\begin{align}
\Omega^{[j]} &= \frac{\sqrt{2 \kappa_{R}^{[j]}} a_{\textrm{in}}}{\kappa^{[j]} + i\Delta_C^{[j]}} \,,\\
\delta_\textrm{LS}^{[j]} &= {\operatorname{Im}\left(\frac{1}{\kappa^{[j]} + i \Delta_C^{[j]}} \right)} \;, \\
\zeta_S^{[j]} &= {\operatorname{Re}\left(\frac{1}{\kappa^{[j]} + i \Delta_C^{[j]}} \right) }\;,
\end{align}
and the outer product $\mathbb{1}_\perp = \vec{\hat a}_1 \vec{\hat a}_1^* + \vec{\hat a}_2 \vec{\hat a}_2^*$. Note that this completeness relation only refers to the two possible mode polarizations, such that no sum over the different modes $[j]$ is required. Moreover, the adiabatic elimination also affects the input-output relation from Eq.~(\ref{eq:basic_io}) and thus the observable in Eq.~(\ref{eq:basic_R}). We obtain for the reflection coefficient
\begin{align}
R &= R_C + R_N \;,
\end{align}
with the cavity contribution $R_C$ and the nuclear part of the reflectance $R_N$. The two reflection coefficients are given by
\begin{align}
R_C &= \left(-1 + \sum_{j}\frac{2\kappa_R^{[j]}}{\kappa^{[j]} + i \Delta_C^{[j]}} \right)\scpr{\vec{\hat{a}}_\textrm{out}^* }{\vec{\hat{a}}_\textrm{in}} \;, \label{eq:R_C_multimode}\\
R_N& = - \frac{i}{a_{\textrm{in}}} \sum_{n,\mu} \left( \sum_j\frac{\sqrt{2\kappa_R^{[j]}} {g_\mu^{(n)[j]}}^*}{\kappa^{[j]} + i \Delta_C^{[j]}} \right) \scprt{\vec{\hat{a}}_\textrm{out}^* }{\vec{\hat{d}}_\mu} \nonumber \\
&\qquad \times \langle S_{\mu-}^{(n)} \rangle \;.
\end{align}
In a second approximation, we restrict the dynamics of the system to the subspace of up to one excitation in the system. As mentioned above, the reduction to the linear regime is well justified for experiments performed at current synchrotron radiation sources. In the initial stage, all nuclei reside in one of the two hyperfine ground states $\ket{g_1}$ and $\ket{g_2}$, with equal probability at room temperature. Further, we can assume that the nuclei of the different macroscopic ensembles $l$ introduced above are evenly distributed among these two states as well. This collective ground state is denoted by $\ket G$.
The definition of the collective excited states demands a more elaborate approach. In Ref.~\cite{Heeg2013b} collective excited states $\ket{E_\mu^{(+)}}$ were introduced, which denote a symmetrized excitation on the transition $\mu$. Note that such an excitation is shared only by $N/2$ nuclei, since only half of the nuclei were originally in the ground state of the respective transition. Here, we now generalize these states and denote a collectively excited state in the ensemble $l$ on the transition $\mu$ by $\ket{E_\mu^{\{l\}}}$. More formally, we define it as
\begin{align}
\ket{E_\mu^{\{l\}}} = \frac{1}{\sqrt{N^{\{l\}}/2}} \: \sum_{n}^{N^{\{l\}}/2} e^{i\,\phi^{(n)}} \: S_{\mu + }^{(n)} \: \ket{G} \;,
\end{align}
and again only half of the nuclei in the respective layer $l$ contribute due to the ground state distribution. Each of the contributing nuclei couples to the modes with the same rate $g^{[j]\{l\}}$. The collective states defined here are therefore suitable for rewriting the system dynamics given in Eqs.~(\ref{eq:H_Omega_full})--(\ref{eq:L_cav_full}) in the linear regime. In the process, the sum over the macroscopic number of atoms $\sum_n$ is simplified to the much more manageable sum over the different resonant layers $l$ as
\begin{align}
\sum_n g_{\mu}^{(n)[j]} S_{\mu+}^{(n)} = \sum_l \sqrt{\tfrac{1}{2} N^{\{l\}}} c_{\mu} g^{[j]\{l\}} \ket{E_\mu^{\{l\}}}\bra{G} \;.
\end{align}
This way, we obtain the effective equations for the linear regime
\begin{align}
H_\Omega &= \sum_{\mu, j, l} \scprt{\vec{\hat d}_\mu^*}{\vec{\hat a}_\textrm{in}} \left( \Omega^{[j]} c_{\mu} g^{[j]\{l\}}\right) \nonumber \\
&\quad \times \sqrt{\tfrac{1}{2}N^{\{l\}}}\: \ket{E_\mu^{\{l\}}}\bra{G}+ \textrm{H.c.} \;, \label{eq:H_Omega_lin} \\
H_\textrm{LS} &= \sum_{\mu,\nu} \sum_{j,l,k} \scprt{\vec{\hat d}_\mu^*}{\vec{\hat d}_\nu} \left( \delta_\textrm{LS}^{[j]} c_{\mu}c_{\nu} g^{[j]\{l\}} {g^{[j]\{k\}}}^* \right) \nonumber \\
&\quad \times \tfrac{1}{2}\sqrt{ N^{\{l\}} N^{\{k\}} }\: \ket{E_\mu^{\{l\}}}\bra{E_\nu^{\{k\}}} \; , \\[2ex]
\mathcal{L}_\textrm{cav}[\rho] &= \sum_{\mu, \nu} \sum_{j,l,k} \scprt{\vec{\hat d}_\mu^*}{\vec{\hat d}_\nu} \left(-\zeta_{S}^{[j]} c_{\mu}c_{\nu} g^{[j]\{l\}} {g^{[j]\{k\}}}^{*} \right) \nonumber \\
&\quad \times \tfrac{1}{2}\sqrt{ N^{\{l\}} N^{\{k\}} }\: \mathcal{L}[\rho, \ket{E_\mu^{\{l\}}}\bra{G}, \ket{G}\bra{E_\nu^{\{k\}}}] \;. \label{eq:Lcav_lin}
\end{align}
Finally, the reflection coefficient reads
\begin{align}
R = R_C &-\frac{i}{a_{\textrm{in}}} \sum_{\mu,j,l} \left( \frac{\sqrt{2\kappa_R^{[j]}} c_{\mu} {g^{[j]\{l\}}}^*}{\kappa^{[j]} + i \Delta_C^{[j]}} \right)
\nonumber \\ &\quad \times
\scprt{\vec{\hat{a}}_\textrm{out}^* }{\vec{\hat{d}}_\mu} \sqrt{\tfrac{1}{2}N^{\{l\}}}\: \bra{E_\mu^{\{l\}}}\rho\ket{G} \;. \label{eq:R_lin}
\end{align}
\subsection{Unmagnetized layers}
A commonly encountered scenario is the setting without a magnetic hyperfine splitting in the resonant layers. In this case, the level scheme of the $^{57}$Fe nucleus reduces to a two-level system with one ground and one excited state. From the general theory above, this behavior can be emulated by setting the energy splittings $\delta_g$, $\delta_e$ of the ground and excited states to zero and choosing the quantization axis such that the incident beam with polarization $\vec{\hat a}_\textrm{in}$ only drives the linearly polarized transitions $\mu = 2,5$ (c.f.~Tab.~\ref{tab:transitions}). Further, we define the state
\begin{align}
\ket{E^{\{l\}}} = \frac{1}{\sqrt{2}} \left( \ket{E_2^{\{l\}}} + \ket{E_5^{\{l\}}} \right) \;,
\end{align}
which describes an excitation in the $l$th resonant layer without the distinction of the two hyperfine substates. We obtain for the master equation and the reflection coefficient
\begin{align}
H_\Omega &= \sum_j \Omega^{[j]}\sqrt{\tfrac{2}{3}} \sum_l g^{[j]\{l\}} \sqrt{N^{\{l\}}} \ket{E^{\{l\}}}\bra{G} + \textrm{H.c.} \;, \label{eq:H_Omega_lin_B0} \\[2ex]
H_\textrm{LS} &= \sum_j \delta_\textrm{LS}^{[j]} \tfrac{2}{3} \sum_{l,k} g^{[j]\{l\}} {g^{[j]\{k\}}}^* \sqrt{N^{\{l\}} N^{\{k\}}} \nonumber \\ &\quad\quad\times \ket{E^{\{l\}}}\bra{E^{\{k\}}} \;, \label{eq:H_LS_lin_B0} \\[2ex]
\mathcal{L}_\textrm{cav}[\rho] &= -\sum_j \zeta_{S}^{[j]} \tfrac{2}{3} \sum_{l,k} g^{[j]\{l\}} {g^{[j]\{k\}}}^* \sqrt{N^{\{l\}} N^{\{k\}}} \nonumber \\ &\quad\quad\times \mathcal{L}[\rho, \ket{E^{\{l\}}}\bra{G},\ket{G}\bra{E^{\{k\}}}] \label{eq:Lcav_lin_B0} \;, \\[2ex]
R &= \bigg[ -1 + \sum_j\frac{2\kappa_R^{[j]}}{\kappa^{[j]} + i\Delta_C^{[j]}} - \frac{i}{a_{\textrm{in}}} \sum_j \frac{\sqrt{2\kappa_R^{[j]}}}{\kappa^{[j]}+i\Delta_C^{[j]}} \nonumber \\ &\times \sqrt{\tfrac{2}{3}} \sum_l {g^{[j]\{l\}}}^* \sqrt{N^{\{l\}}} \bra{E^{\{l\}}}\rho\ket{G} \bigg] \scpr{\vec{\hat{a}}_\textrm{out}^* }{\vec{\hat{a}}_\textrm{in}} \;. \label{eq:R_lin_B0}
\end{align}
This set of equations will form the basis for the description of the EIT experiment~\cite{Roehlsberger2012}, which we will analyze in Sec.~\ref{sec:eit}.
\subsection{Heuristic extensions\label{sec:heuristic}}
Before we study the phenomena and consequences which emerge from our generalized theory, we take a step back and consider additional effects in our system, which in general will turn out to be of significance. We emphasize that so far our theory is developed to capture the cavity character of the layer system, i.e.~the guided modes and the embedded resonant nuclei. However, in the grazing incidence geometry, also effects which are not related to the structure of the cavity but rather stem from bulk material properties become important.
Since the refractive index of the cavity materials at x-ray energies is less than one, total reflection is observed for small incidence angles $\theta$ in the few-mrad range, while for larger angles the light is essentially completely absorbed. The transition between those two regimes is not sudden, but can be characterized by a smooth function $R_\textrm{Envelope}(\theta)$. As soon as we consider the reflectance over a broader range of incidence angles, this envelope has to be taken into account. Note that previous studies have been performed at a fixed incidence angle~\cite{Roehlsberger2010,Roehlsberger2012,Heeg2013,Heeg2014} or covered only tiny angular ranges, for which the envelope could be considered constant~\cite{Heeg2014b}. To include this total reflection behavior at grazing incidence,
we will heuristically combine the analytical formula from Eq.~(\ref{eq:R_C_multimode}), describing the guided modes, with the reflection curve of the cavity's mirror material only, which approximately takes into account the total reflection envelope. The envelope function $R_\textrm{Envelope}(\theta)$ is described in more detail in Appendix~\ref{sec:appendix_envelope}.
Furthermore, it has been observed that the cavity material dispersion leads to an additional relative phase between light reflected off the outside of the cavity and light entering the cavity. This dispersion phase was found to be necessary, e.g., to describe the asymmetry of the reflection curve $R(\theta)$ around the minima of the guided modes~\cite{dispersionphase}. It can be included by generalizing the contribution which stems from the direct reflection on the cavity surface (``$-1$'' in Eq.~(\ref{eq:R_C_multimode})) with an additional phase factor $\exp{(i\phi_C)}$. As a second heuristic extension, we include such a phase factor as well, and allow for a complex variable $r$ instead of the cavity surface amplitude $-1$ with $|r|\approx 1$. This has the additional advantage that such a modification can also take into account possible effects of far off-resonant modes, which would give rise to a small constant offset to the reflection coefficient.
With the heuristic modifications described above, the cavity contribution to the reflection coefficients reads
\begin{align}
R_C(\theta) = R_\textrm{Envelope}(\theta) \, \left( r + \sum_j\frac{2\kappa_R^{[j]}}{\kappa^{[j]} + i\Delta_C^{[j]}(\theta)} \right) \;, \label{eq:R_C_multimode_heuristic}
\end{align}
with the cavity detuning (c.f.~Eq.~(\ref{eq:cavity_detuning}))
\begin{align}
\Delta_C^{[j]}(\theta) = \omega \left[\sqrt{\cos{(\theta)}^2 + \sin{(\theta_0^{[j]})}^2} -1 \right] \;.
\end{align}
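Before analyzing the consequences of the extended model, we note that Eq.~(\ref{eq:R_C_multimode_heuristic}) is easily evaluated numerically. The Python sketch below uses a constant envelope and invented mode parameters purely for illustration; in practice, the parameters are obtained from the fit described in the next section and the envelope from the mirror material (Appendix~\ref{sec:appendix_envelope}).
\begin{verbatim}
import numpy as np

def delta_C(theta, theta0, omega):
    return omega * (np.sqrt(np.cos(theta)**2
                            + np.sin(theta0)**2) - 1.0)

def R_cavity(theta, r, theta0s, kappas, kappaRs,
             omega, envelope=1.0):
    # R_C = envelope*(r + sum_j 2 kR_j/(k_j + i Delta_C,j))
    R = np.full(theta.shape, r, dtype=complex)
    for t0, k, kR in zip(theta0s, kappas, kappaRs):
        R += 2.0*kR / (k + 1j*delta_C(theta, t0, omega))
    return envelope * R

omega = 14.4e3                          # photon energy (eV)
theta = np.linspace(1e-3, 6e-3, 2000)   # angles (rad)
refl = np.abs(R_cavity(theta, -1.0,
                       theta0s=[2.2e-3, 3.0e-3, 3.6e-3],
                       kappas=[8.0, 10.0, 12.0],
                       kappaRs=[3.0, 4.5, 6.0],
                       omega=omega))**2
\end{verbatim}
A least-squares fit of this model to a measured or simulated reflection curve, e.g.\ with \texttt{scipy.optimize.curve\_fit}, then yields the mode parameters used in the following.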
\section{Effect of multiple modes\label{sec:effect_modes}}
In this part, we will discuss the influence of multiple modes on the reflectivity, while we still restrict ourselves to a single thin layer of resonant nuclei in the cavity. Hence, the index $l$ corresponding to the different nuclear ensembles in the cavity will be omitted in the following.
We start by considering the nuclear contribution to the reflectance only. Restricting ourselves to only one layer $l$, it can be easily seen from Eqs.~(\ref{eq:H_Omega_lin})--(\ref{eq:R_lin}) that the general form of contributions to the effective master equation does not depend on the number of cavity modes $j$. In particular, no new operators or additional couplings between the different collective states are present in the effective master equation, which is a direct consequence of the adiabatic elimination. The sole differences are the coefficients entering the expressions. For instance, generalizing the driving Hamiltonian $H_\Omega$ from a single to multiple modes requires only the modification
\begin{align}
\Omega g_\mu^{(n)} \;\rightarrow\; \sum_j \Omega^{[j]} g_\mu^{[j]}
\end{align}
on the level of a coefficient. Similar replacements of the variables are required for the other parts contributing to the master equation.
With the knowledge that the basic equations in the cases with one and with multiple cavity modes $j$ are equivalent, the results obtained in the original theory (c.f.~Ref.~\cite{Heeg2013b}) can be straightforwardly extended by replacing the coefficients with their respective generalized counterparts. Doing so for the linear reflectance without magnetic hyperfine splitting and neglecting the trivial polarization dependency $\scpr{\vec{\hat{a}}_\textrm{out}^* }{\vec{\hat{a}}_\textrm{in}}$, this yields
\begin{align}
R_N& = -i \frac{2N}{3} \frac{
\left( \sum_j \tfrac{\sqrt{2\kappa_R^{[j]}}g^{[j]}}{\kappa^{[j]}+i\Delta_C^{[j]}} \right)\left(
\sum_j \tfrac{\sqrt{2\kappa_R^{[j]}}{g^{[j]}}^{*}}{\kappa^{[j]}+i\Delta_C^{[j]}}\right)
}{\Delta + i\tfrac{\gamma}{2} + \tfrac{2 N}{3} \left[ \sum_j \left|g^{[j]}\right|^2 \left( i \zeta_{S}^{[j]} - \delta_\textrm{LS}^{[j]} \right) \right]} \;.
\label{eq:multimode_RN}
\end{align}
Since in the single-mode theory a Lorentzian profile was derived for the nuclear spectrum, we also recover this line shape here in the case of multiple cavity modes. Similar to the original model, it is shifted due to a collective Lamb shift and broadened due to superradiance~\cite{Roehlsberger2010}. It can be seen from the denominator in Eq.~(\ref{eq:multimode_RN}) that each mode induces its own frequency shift and line broadening. But typically for any angle of incidence, all but (at most) one mode are driven far off-resonantly as mentioned above. Then, the corresponding values for the cavity detuning $\Delta_C$ become large and their respective contributions to the cooperative Lamb shift and to the superradiance diminish. From the numerator of the nuclear part, we find that the strength of the nuclear signal is typically determined by one dominant mode with the smallest $\Delta_C$. Generally, the behavior is as follows. The nuclei can be excited by the external driving field via each mode $j$. This is represented by the first sum in the numerator of Eq.~(\ref{eq:multimode_RN}). The emission forms an independent second step and can again occur via each mode, as indicated by the second sum. Hence, in the general case, interferences between the different cavity modes can arise. Nevertheless, the general Lorentzian structure of the line profile is unaffected by this and hence no qualitatively different features appear in the spectrum.
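The structure of Eq.~(\ref{eq:multimode_RN}) can also be checked numerically. In the Python sketch below, the mode-summed complex self-energy in the denominator shifts (collective Lamb shift) and broadens (superradiance) the Lorentzian line; all couplings and cavity parameters are invented for illustration only.
\begin{verbatim}
import numpy as np

gamma = 4.7e-9                     # 57Fe line width (eV)

def R_nuclear(Delta, g, kappa, kappaR, Delta_C, N):
    g = np.asarray(g, dtype=complex)
    X = 1.0 / (np.asarray(kappa) + 1j*np.asarray(Delta_C))
    kR = np.asarray(kappaR)
    drive  = np.sum(np.sqrt(2*kR) * g * X)
    detect = np.sum(np.sqrt(2*kR) * np.conj(g) * X)
    # i*zeta_S - delta_LS, summed over modes with |g|^2 weights
    self_en = np.sum(np.abs(g)**2 * 1j * X)
    return (-1j * (2*N/3) * drive * detect
            / (Delta + 1j*gamma/2 + (2*N/3)*self_en))

Delta = np.linspace(-200e-9, 200e-9, 2001)      # eV
R_N = R_nuclear(Delta, g=[4e-6, 4e-6],
                kappa=[10.0, 10.0], kappaR=[4.0, 4.0],
                Delta_C=[0.0, 300.0], N=4e4)
# |R_N|^2 is a single Lorentzian, shifted and broadened
\end{verbatim}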
The main difference to the single-mode result is found in the cavity contribution to the reflectance $R_C$ given in Eq.~(\ref{eq:R_C_multimode}), and accordingly its heuristic extension in Eq.~(\ref{eq:R_C_multimode_heuristic}). Since we included multiple guided modes in the analysis above, it is clear that the resonances of these modes should become apparent in the reflection curve, i.e.~when considering the cavity reflectance in dependence on the incidence angle $\theta$. Indeed, the expressions derived in Eqs.~(\ref{eq:R_C_multimode}) and~(\ref{eq:R_C_multimode_heuristic}) exhibit these resonances through their sums over the modes. The resonance of a guided mode $j$ is encountered at $\theta = \theta_0^{[j]}$, where $\Delta_C^{[j]} = 0$, and the reflection curve will exhibit a local minimum in the vicinity of the resonant angles.
At this point it is of interest how well the actual angular-dependent reflection curve $R(\theta)$ can be described by the cavity part of the reflectance. To this end, we numerically calculate the reflection curve using established semiclassical methods, such as Parratt's formalism~\cite{Parratt1954} or simulations by \textsc{conuss}~\cite{Sturhahn2000}, which both give equivalent results.
As we will describe the EIT scenario from Ref.~\cite{Roehlsberger2012} below, we specialize to this particular cavity structure. The parameters of this cavity geometry are given in the middle column of Tab.~\ref{tab:layers_eit}. Note that the two resonant iron layers do not pose a challenge in this analysis, since in this first step the nuclear resonances are omitted in the description of the angular dependent reflection curve.
In the specified cavity, platinum acts as the mirror material; therefore, the envelope function $R_\textrm{Envelope}(\theta)$ accounting for the total reflection is the reflectivity of a single, infinitely thick Pt layer. We fitted the absolute value of Eq.~(\ref{eq:R_C_multimode_heuristic}) with a maximum number of cavity modes $j=5$ to the expected reflection curve for the cavity, which was obtained using Parratt's formalism. The fit was performed in the range $0 \le \theta \le 5$~mrad and the obtained parameters are summarized in Appendix~\ref{sec:appendix_parameters}. The result is shown in Fig.~\ref{fig:eit_rocking_qo_parratt}. Clearly, the quantum optical model together with the heuristic extensions is well suited to describe the reflection curve. We observe that the last guided mode in Fig.~\ref{fig:eit_rocking_qo_parratt} is not reproduced. However, this is expected, since it is the sixth mode, which is not included in the fit model, and since its angular position is not within the fit range.
Interestingly, we find that also the phase behavior of the cavity reflection coefficient is reproduced very well. This is remarkable, because only absolute values were taken into account in the fit procedure. The analytic formula in Eq.~(\ref{eq:R_C_multimode_heuristic}) has only been corrected for a global phase to match the phase behavior predicted by the Parratt formalism. Considering the phases in more detail, a deviation can be seen at the second guided mode at $\theta \approx 3$~mrad. In contrast to Parratt's formalism, the curve obtained with the quantum optical model features an apparent phase jump of $2\pi$ at the resonance.
\begin{figure}[t]
\centering
\includegraphics[scale=1]{rocking}
\caption{\label{fig:eit_rocking_qo_parratt}(Color online) Cavity reflectance as a function of the incidence angle $\theta$. The minima denote resonance angles at which guided modes are driven resonantly. Including a dispersion phase and the heuristic extension of an envelope given by the topmost layer (gray dotted line), the quantum optical theory (blue, solid) can reproduce the exact result calculated with Parratt's formalism (red, dashed) very well. In the quantum optical model only the first five guided modes were taken into account and the fit was restricted to the range $0 \le \theta \le 5$~mrad, as indicated by the shaded area. Although only the absolute value of the reflectance was fitted, the behavior of the phases calculated with the two descriptions are very similar.}
\end{figure}
To understand this artifact, we note that in the vicinity of a guided mode, the reflection coefficient can be approximated as $R_C \approx -1 + 2\kappa_R / [\kappa + i \Delta_C]$, which results in a minimum at the resonance angle where $\Delta_C = 0$. The reflectance vanishes completely for $2\kappa_R = \kappa$, which is known as the critical coupling condition. However, a residual reflectance occurs for both $2 \kappa_R > \kappa$ and $2 \kappa_R < \kappa$, which correspond to the over- and undercritically coupled cases, respectively~\cite{Dayan2008}. Looking solely at the modulus $|R_C|$, though, these two cases cannot be distinguished. The difference only becomes apparent when the phase of $R_C$ around the resonance angle is considered: For an undercritically coupled cavity mode the phase remains in the same branch, whereas it evolves to the next branch in the overcritically coupled case, which manifests as an apparent phase jump of $2\pi$. Generally, it might be beneficial to not fit absolute values, but to use the complex values of the reflection curve instead. In this case also the over- and undercritically coupled modes should be captured correctly within the quantum optical description. However, since we are interested mainly in the third guided mode later on, which is the mode at which the EIT spectra have been measured in Ref.~\cite{Roehlsberger2012}, we will use the parameters obtained in the fit discussed above for our further analysis.
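The distinction between under- and overcritical coupling can also be made explicit numerically: the modulus of $R_C \approx -1 + 2\kappa_R/(\kappa + i\Delta_C)$ looks identical in both cases, whereas the phase accumulated across the resonance differs by approximately $2\pi$. The following sketch, with arbitrary illustrative numbers, prints the phase winding for the two cases.
\begin{verbatim}
import numpy as np

def R_mode(Delta_C, kappa, kappaR):
    return -1.0 + 2.0*kappaR / (kappa + 1j*Delta_C)

Delta_C = np.linspace(-50.0, 50.0, 4001)
kappa = 5.0
for kappaR, case in [(2.0, 'undercritical'),
                     (3.0, 'overcritical')]:
    phase = np.unwrap(np.angle(R_mode(Delta_C, kappa, kappaR)))
    winding = (phase[-1] - phase[0]) / (2*np.pi)
    # ~0 if 2*kappa_R < kappa, ~(-)1 if 2*kappa_R > kappa
    print(case, 'phase winding ~', round(winding, 2))
\end{verbatim}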
Finally, it should be mentioned that it is not meaningful to extend the quantum optical descriptions to very large incident angles $\theta$. On the one hand, the theoretical description of the perpendicular polarization directions might break down. On the other hand, distinct non-grazing incidence effects are expected, since the cavity is no longer probed in (000) Bragg geometry. The angular cutoff should therefore be around the total reflection edge, which for typical cavity settings limits the number of guided modes to approximately five.
\section{Effect of multiple resonant layers\label{sec:effect_layers}}
In this part we will examine the influence of multiple layers of resonant nuclei in the cavity, while restricting the analysis to only one cavity mode. Hence, we drop the index $j$ in the coefficients throughout this section.
Since the cavity reflection part $R_C$ is unaffected by including multiple layers in the theory, we consider the nuclear contribution to the reflectance only. Starting from the general expressions given in Eqs.~(\ref{eq:H_Omega_lin})--(\ref{eq:R_lin}) and taking only one cavity mode $j$ into account, we observe that the set of equations can be considerably simplified by means of a basis transformation.
We introduce states describing a collective excitation on the transition $\mu$ that is distributed among the different nuclear ensembles $l$ as
\begin{align}
\ket{E_\mu^{\{+\}}} = \frac{1}{{\mathcal{G}}} \sum_l g^{\{l\}} \sqrt{N^{\{l\}}} \ket{E_\mu^{\{l\}}} \; \label{eq:multilayer_coll_layer_state}
\end{align}
with the normalization factor
\begin{align}
\mathcal{G} = \Big({\sum_l \big|g^{\{l\}}\big|^2 N^{\{l\}}}\Big)^{1/2} \;.
\end{align}
Using the new states, Eqs.~(\ref{eq:H_Omega_lin})--(\ref{eq:R_lin}) become
\begin{align}
H_\Omega &= \sqrt{\tfrac{1}{2}} \Omega\mathcal{G} \sum_\mu \scprt{\vec{\hat d}_\mu^*}{\vec{\hat a}_\textrm{in}} c_\mu \: \ket{E_\mu^{\{+\}}}\bra{G}+ \textrm{H.c.} \;,\\[2ex]
H_\textrm{LS} &= \tfrac{1}{2} \delta_\textrm{LS} \mathcal{G}^2 \sum_{\mu,\nu} \scprt{\vec{\hat d}_\mu^*}{\vec{\hat d}_\nu} c_\mu c_\nu \: \ket{E_\mu^{\{+\}}}\bra{E_\nu^{\{+\}}} \; , \\[2ex]
\mathcal{L}_\textrm{cav}[\rho] &= -\tfrac{1}{2} \zeta_{S} \mathcal{G}^2 \sum_{\mu,\nu} \scprt{\vec{\hat d}_\mu^*}{\vec{\hat d}_\nu} c_\mu c_\nu \nonumber \\
&\qquad \times \mathcal{L}[\rho, \ket{E_\mu^{\{+\}}}\bra{G}, \ket{G}\bra{E_\nu^{\{+\}}}] \;, \\[2ex]
R &= R_C - \frac{i}{a_{\textrm{in}}} \frac{\sqrt{\kappa_R}}{\kappa+i\Delta_C} \mathcal{G} \sum_\mu c_\mu \nonumber \\
&\qquad \times
\scprt{\vec{\hat{a}}_\textrm{out}^* }{\vec{\hat{d}}_\mu} \: \bra{E_\mu^{\{+\}}}\rho\ket{G} \;.
\end{align}
Comparing these expressions with the effective master equation of the original single-layer theory (c.f.~Ref.~\cite{Heeg2013b}), we observe an exact correspondence of the structure. Similar as in the case of the extension to multiple cavity modes, the differences manifest only in terms of the coefficients. In particular, the collective coupling between the cavity mode and the nuclei is modified as
\begin{align}
|g|\sqrt{N} \rightarrow \mathcal{G} = \Big(\sum_l \big|g^{\{l\}}\big|^2 N^{\{l\}}\Big)^{1/2} \;,
\end{align}
while all other relations remain the same. Hence, the shape of the reflection coefficient is unaffected by taking into account multiple resonant layers. In the absence of magnetization, it is given by
\begin{align}
R& = \bigg[ -1 + \frac{2\kappa_R}{\kappa + i\Delta_C} \nonumber \\ &-i\frac{2\kappa_R}{(\kappa + i\Delta_C)^2} \frac{\tfrac{2}{3} \mathcal{G}^2 }{\Delta + i\tfrac{\gamma}{2} + \tfrac{2}{3} \mathcal{G}^2 ( i\zeta_{S} - \delta_\textrm{LS}) } \bigg] \scpr{\vec{\hat{a}}_\textrm{out}^* }{\vec{\hat{a}}_\textrm{in}} \;. \label{eq:multilayer_R}
\end{align}
Restricting ourselves to only one layer, the coefficient $\mathcal{G}^2$ reduces to $|g|^2 N$ and we recover the result which we already derived in Ref.~\cite{Heeg2013b}.
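For completeness, the single-mode line shape of Eq.~(\ref{eq:multilayer_R}) can be evaluated numerically. The following sketch uses purely illustrative parameters (all rates in units of the natural line width $\gamma$) and treats $\delta_\textrm{LS}$ and $\zeta_{S}$ simply as given inputs; it merely exhibits that the nuclear response remains a single Lorentzian, collectively shifted and broadened.
\begin{verbatim}
import numpy as np

# Eq. (multilayer_R) for one cavity mode and several layers; G2 stands for
# the summed collective coupling sum_l |g^{l}|^2 N^{l}. Illustrative numbers;
# the trivial polarization overlap factor is dropped.
gamma, kappa, kappa_R = 1.0, 40.0, 18.0
Delta_C  = 0.0            # mode driven resonantly
G2       = 30.0
zeta_S   = 0.01           # per-coupling superradiance (illustrative)
delta_LS = 0.02           # per-coupling collective Lamb shift (illustrative)

Delta = np.linspace(-10, 10, 2001)
R = (-1.0 + 2.0 * kappa_R / (kappa + 1j * Delta_C)
     - 1j * 2.0 * kappa_R / (kappa + 1j * Delta_C)**2
       * (2.0 / 3.0) * G2
       / (Delta + 0.5j * gamma + (2.0 / 3.0) * G2 * (1j * zeta_S - delta_LS)))

print("expected shift :", (2.0 / 3.0) * G2 * delta_LS)
print("expected width :", 0.5 * gamma + (2.0 / 3.0) * G2 * zeta_S)
print("|R|^2 extrema  :", np.abs(R).min()**2, np.abs(R).max()**2)
\end{verbatim}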
Even though we included multiple layers in our analysis, we see from Eq.~(\ref{eq:multilayer_R}) that it is not possible to explain an EIT-like spectrum as reported in Ref.~\cite{Roehlsberger2012}. Rather, we will find that it is the combined extension of multiple layers and multiple guided modes to the theory, which will be able to explain the EIT phenomenon. This will be shown in the following Sections.
\section{Effect of multiple resonant layers and multiple modes\label{sec:effect_both}}
In the last Sections we generalized the theoretical description to include multiple modes and multiple resonant layers, respectively. When restricting to one extension at a time, we observed that both give rise to additions in the nuclear reflection amplitude, while, however, leaving the general structure of a Lorentzian line shape unaffected, c.f.~Eqs.~(\ref{eq:multimode_RN}) and (\ref{eq:multilayer_R}).
The general expressions for covering both multiple modes and multiple layers in the cavity were given in Eqs.~(\ref{eq:H_Omega_lin})--(\ref{eq:R_lin}), or, in the absence of magnetization, in Eqs.~(\ref{eq:H_Omega_lin_B0})--(\ref{eq:R_lin_B0}). To simplify this set of equations, it would be desirable to perform a basis transformation which converts the different states $\ket{E_\mu^{\{l\}}}$, which describe an excitation in a single layer, into a collective layer state, similar to the transformation we performed in Eq.~(\ref{eq:multilayer_coll_layer_state}). For that purpose, one would have to sum over the layers $l$, with the sum containing the coupling factor $g^{[j]\{l\}}$. But since this coupling coefficient now also depends on the guided mode index~$j$, the basis transformation must also involve the sum over the modes $\sum_j$. However, it can easily be seen from Eqs.~(\ref{eq:H_Omega_lin_B0})--(\ref{eq:R_lin_B0}) that this sum would be different for every contribution to the equations of motion, since the prefactors depending on $j$ are mutually different. Hence, also in the absence of magnetization it is not possible to transform the system into a form in which only one collective state is excited. Rather, in a cavity configuration with $l$ resonant layers the equations of motion need to be solved for the $l$ coupled states $\ket{E^{\{l\}}}$. This implies that the response of the nuclear ensemble will generally go beyond a Lorentzian line profile.
The different coupling coefficients ${g^{[j]\{l\}}}$ required for the extended theory need to be determined in different ways. Apart from a direct fit to numerical data, a more sophisticated approach is to derive the relative weights and phases from the field amplitudes calculated with Parratt's formalism~\cite{deBoer1991}. This self-consistent method will be applied and explained in more detail in Sec.~\ref{sec:numerical_analysis}.
\section{Application to the EIT setting\label{sec:eit}}
Next, we will analyze a particular setting, in which both multiple layers and multiple modes are considered. In the previous Section, we already discovered that the general theory developed here allows for reflection spectra, which comprise nuclear responses beyond a simple Lorentz profile. As it was shown in Ref.~\cite{Roehlsberger2012} employing a semiclassical model, the reflectance of a cavity with two unmagnetized resonant layers can even exhibit EIT-like spectra. It is this particular scenario which will be discussed in more detail in the following.
\subsection{The EIT experiment}
\begin{figure*}[t]
\centering
\includegraphics[scale=0.5]{level_schemes}
\caption{\label{fig:level_schemes}(Color online) (a) Effective level scheme of the EIT scenario from Ref.~\cite{Roehlsberger2012}. The nuclei in the field antinode are driven by the external field $\Omega_P$ and decay superradiantly with rate $\Gamma$, while the nuclei in the field node decay only by spontaneous emission with rate $\gamma$. Both ensembles are coupled via the cavity field $\Omega_C$, which gives rise to a scheme equivalent to EIT. (b) Effective level scheme in the quantum optical model. Two collective excited states are coupled to the ground state. Coherent driving and collective Lamb shifts are marked in blue, superradiant spontaneous emission is denoted by curly red single-headed arrows, cross-damping between the excited states by curly red double-headed arrows. [Thick solid~/ solid~/ dashed] lines denote the cavity mode detuning scalings $\sim$ [$1$~/ $\Delta_C^{-1}$~/ $\Delta_C^{-2}$] and mark the relative magnitude of the different couplings.}
\end{figure*}
The key feature of the setting studied in Ref.~\cite{Roehlsberger2012} is the placement of two ensembles of $^{57}$Fe nuclei in the cavity. The incidence angle was chosen such that the third guided mode of the cavity is driven resonantly; the first layer of resonant nuclei was placed in a field node and the second layer in an antinode, as sketched in Fig.~\ref{fig:field}(a). Following the interpretation in the same reference, only the latter ensemble is probed by the x-ray beam and decays superradiantly with rate $\Gamma$, while the nuclei in the first layer are only subject to natural decay $\gamma$ on a much longer timescale. However, the first ensemble can crucially influence the system's dynamics, as a control field $\Omega_C$ between the two layers is naturally established. It arises due to radiative coupling between the two ensembles. The resulting level scheme, as also visualized in Fig.~\ref{fig:level_schemes}(a), is equivalent to a system featuring EIT. By employing a cavity like the one sketched in Fig.~\ref{fig:field}(a), the key signature of EIT, transparency of the medium on resonance, could be verified in Ref.~\cite{Roehlsberger2012}. Interestingly, it was found that the coupling field $\Omega_C$ vanishes upon interchanging the roles of the two layers, i.e.~placing the first layer in a field antinode and the second ensemble in a field node, see Fig.~\ref{fig:field}(b).
\subsection{Theoretical analysis}
In order to describe the setting of two resonant layers with our quantum optical model, we restrict ourselves to two ensembles of nuclei $l=1,2$ in the cavity, but we still allow for an arbitrary number of cavity modes $j$. As before, we consider the linear response case without magnetization and omit the trivial polarization dependence in the following. We rewrite the effective Hamiltonian from Eqs.~(\ref{eq:H_Omega_lin_B0}) and~(\ref{eq:H_LS_lin_B0}) as well as the detuning part from Eq.~(\ref{eq:basic_H_0}) as
\begin{align}
H &= \left(\tilde{\Omega}^{\{1\}} \ket{E^{\{1\}}}\bra{G} + h.c. \right)
+ \left(\tilde{\Omega}^{\{2\}} \ket{E^{\{2\}}}\bra{G} + h.c. \right) \nonumber \\
&+ (\tilde{\delta}^{\{1\}} - \Delta) \ketbra{E^{\{1\}}}
+ (\tilde{\delta}^{\{2\}} - \Delta) \ketbra{E^{\{2\}}} \nonumber \\
&+ \left(\tilde{\delta}^{\{1,2\}} \ket{E^{\{1\}}}\bra{E^{\{2\}}} + h.c. \right) \;. \label{eq:multi_eit_H}
\end{align}
Here, the first line covers the driving of the two layers, the second line accounts for the cooperative Lamb shifts and the detuning, and the last line describes a coherent coupling between the two layers. Later, we will see that the last contribution can in parts be identified with the control field $\Omega_C$ from the EIT interpretation in Ref.~\cite{Roehlsberger2012}. The incoherent Lindblad terms in our description are given by
\begin{alignat}{3}
\mathcal{L} = -\Big(\frac{\gamma}{2} &+ \tilde{\gamma}^{\{1\}} \Big) &&\mathcal{L}[\rho, \ket{E^{\{1\}}}\bra{G},\ket{G}\bra{E^{\{1\}}}] \nonumber \\
-\Big(\frac{\gamma}{2} &+ \tilde{\gamma}^{\{2\}} \Big) &&\mathcal{L}[\rho, \ket{E^{\{2\}}}\bra{G},\ket{G}\bra{E^{\{2\}}}] \nonumber \\
&- \tilde{\gamma}^{\{1,2\}} &&\mathcal{L}[\rho, \ket{E^{\{1\}}}\bra{G},\ket{G}\bra{E^{\{2\}}}] \nonumber \\
&-{{}\tilde{\gamma}^{{\{1,2\}}}}^* &&\mathcal{L}[\rho, \ket{E^{\{2\}}}\bra{G},\ket{G}\bra{E^{\{1\}}}] \;. \label{eq:multi_eit_L}
\end{alignat}
Here, the first line accounts for spontaneous emission and superradiance. The other two terms describe an incoherent cross-damping term~\cite{Kiffner2010,Heeg2013}, which will contribute to the control field coupling in the EIT interpretation as well. The coefficients in Eqs.~(\ref{eq:multi_eit_H}) and~(\ref{eq:multi_eit_L}) are given by
\begin{align}
\tilde{\Omega}^{\{l\}} &= \sum_j \Omega^{[j]}\sqrt{\tfrac{2}{3}} g^{[j]\{l\}} \sqrt{N^{\{l\}}} \;,\label{eq:multi_eit_omega_def}\\
\tilde{\delta}^{\{l\}} &= \sum_j \delta_\textrm{LS}^{[j]} \tfrac{2}{3} \big|g^{[j]\{l\}}\big|^2 N^{\{l\}} \;,\\
\tilde{\delta}^{\{1,2\}} &= \sum_j \delta_\textrm{LS}^{[j]} \tfrac{2}{3} g^{[j]\{1\}} {g^{[j]\{2\}}}^* \sqrt{N^{\{1\}} N^{\{2\}}} \;, \label{eq:multi_eit_d12_def} \\
\tilde{\gamma}^{\{l\}} &= \sum_j \zeta_{S}^{[j]} \tfrac{2}{3} \big| g^{[j]\{l\}}\big|^2 N^{\{l\}} \;,\\
\tilde{\gamma}^{\{1,2\}} &= \sum_j \zeta_{S}^{[j]} \tfrac{2}{3} g^{[j]\{1\}} {g^{[j]\{2\}}}^* \sqrt{N^{\{1\}} N^{\{2\}}} \;.\label{eq:multi_eit_g12_def}
\end{align}
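To make the structure of these coefficients concrete, the following sketch evaluates Eqs.~(\ref{eq:multi_eit_omega_def})--(\ref{eq:multi_eit_g12_def}) for a hypothetical set of three cavity modes. The per-mode quantities $\Omega^{[j]}$, $\delta_\textrm{LS}^{[j]}$, $\zeta_{S}^{[j]}$ and the couplings $g^{[j]\{l\}}$ are placeholder numbers chosen only to mimic the situation discussed in the following, with the middle entry playing the role of the resonantly driven mode to which the first layer does not couple.
\begin{verbatim}
import numpy as np

# Placeholder per-mode quantities (middle entry: resonantly driven mode).
Omega    = np.array([0.03 + 0.01j, 1.0 + 0.0j, 0.02 - 0.01j])  # Omega^{[j]}
delta_LS = np.array([0.45, 0.00, -0.15])                       # delta_LS^{[j]}
zeta_S   = np.array([0.02, 0.50,  0.01])                       # zeta_S^{[j]}
g1 = np.array([0.30, 0.00, 0.30])   # g^{[j]{1}}: layer 1 sits in the node
g2 = np.array([0.20, 0.60, 0.20])   # g^{[j]{2}}: layer 2 sits in the antinode
N1 = N2 = 1.0e4

def tilde(weight, ga, gb, Na, Nb):
    """Generic mode sum entering Eqs. (multi_eit_omega_def)-(multi_eit_g12_def)."""
    return np.sum(weight * (2.0 / 3.0) * ga * np.conj(gb) * np.sqrt(Na * Nb))

Omega_t1 = np.sum(Omega * np.sqrt(2.0 / 3.0) * g1 * np.sqrt(N1))
Omega_t2 = np.sum(Omega * np.sqrt(2.0 / 3.0) * g2 * np.sqrt(N2))
gamma_t1, gamma_t2 = tilde(zeta_S, g1, g1, N1, N1), tilde(zeta_S, g2, g2, N2, N2)
delta_t12, gamma_t12 = tilde(delta_LS, g1, g2, N1, N2), tilde(zeta_S, g1, g2, N1, N2)

print("driving          :", Omega_t1, Omega_t2)
print("superradiance    :", gamma_t1, gamma_t2)
print("layer-layer terms:", delta_t12, gamma_t12)
\end{verbatim}
With such inputs one finds a strongly driven, strongly superradiant second layer, a weakly driven first layer, and a coherent inter-layer coupling that dominates over the incoherent one, in line with the scaling arguments below.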
The effective level scheme of the system defined above is visualized in Fig.~\ref{fig:level_schemes}(b). The similarity to the scheme used in the interpretation of Ref.~\cite{Roehlsberger2012} can already be anticipated. However, in our approach a larger number of coherent and incoherent couplings are present. Nevertheless, the relative strength and hence the importance of the coupling rates can be straightforwardly estimated, as we will show in the following.
As mentioned above, in the cavity geometries of interest, the $^{57}$Fe layers are arranged such that one layer $l=1$ is located at a field node of the third guided mode, while a second layer $l=2$ is located at an antinode.
As a consequence, the nuclei in the node hardly couple to the driven mode. In our quantum optical language, we can represent this idealized case by setting the respective coupling constant to zero, i.e.~$g^{[3]\{1\}} = 0$. At the same time, all other modes $j\neq 3$ are driven far off resonance, such that their cavity detuning $\Delta_C^{[j]}$ becomes large. Indicating this suppression due to the large detuning with the symbolic notation $1/\Delta_C$, we find the scalings
\begin{align}
\tilde{\Omega}^{\{1\}} \;,\; \tilde{\delta}^{\{1\}}\;,\; \tilde{\delta}^{\{1,2\}} &\sim \frac{1}{\Delta_C} \;,\\
\tilde{\gamma}^{\{1\}} \;,\; \tilde{\gamma}^{\{1,2\}} &\sim \frac{1}{\Delta_C^2} \;.
\end{align}
In contrast, the coefficients
\begin{align}
\tilde{\Omega}^{\{2\}} \;,\; \tilde{\delta}^{\{2\}} \;,\; \tilde{\gamma}^{\{2\}} \sim 1
\end{align}
for the second layer are not suppressed due to cavity mode detuning, as they still contain the non-zero coupling coefficient $g^{[3]\{2\}}$ to the resonantly driven mode. From these scalings we can already anticipate the EIT behavior in accordance with the interpretation discussed in Ref.~\cite{Roehlsberger2012}:
Only the nuclei in the second layer decay superradiantly. The collective decay of atoms in the first layer $\tilde{\gamma}^{\{1\}}$ and the cross-damping terms $\tilde{\gamma}^{\{1,2\}}$ are quadratically suppressed in the detunings of the additional cavity modes and can be neglected in a first approximation. However, other contributions due to the presence of further cavity modes can have a substantial influence on the system, such as the coherent driving $\tilde{\delta}^{\{1,2\}}$ between the two layers, which can give rise to the coupling field required for EIT.
These scalings with the cavity detuning $\Delta_C$ are visualized in the level scheme shown in Fig.~\ref{fig:level_schemes}(b) as well. Coupling rates are denoted by thick solid, solid, or dashed lines, which indicate the different powers in the scaling behavior with respect to $\Delta_C$. With the relative magnitude of the rates in mind, a very close similarity with the EIT level scheme from Fig.~\ref{fig:level_schemes}(a) can be observed. Hence, our analysis so far also suggests EIT-like features in the system. However, it is yet unclear how the additional driving terms and inter-layer coupling terms affect the spectrum in detail. In order to answer this question, we will now turn to the analytic solution of the model.
Starting from Eqs.~(\ref{eq:multi_eit_H}) and~(\ref{eq:multi_eit_L}), we find that the equations of motion for the density matrix elements
\begin{align}
\rho_{1G} &= \bra{E^{\{1\}}}\rho\ket{G} \;, \\
\rho_{2G} &= \bra{E^{\{2\}}}\rho\ket{G} \;,
\end{align}
form a closed set of equations in the limit of linear response, i.e., ~where the populations $\bra{G}\rho\ket{G}\approx 1$ and $\bra{E^{\{1\}}}\rho\ket{E^{\{1\}}} = \bra{E^{\{2\}}}\rho\ket{E^{\{2\}}} \approx 0$ and the coherence between the excited states $\bra{E^{\{1\}}}\rho\ket{E^{\{2\}}}$ vanishes. The equations of motion read
\begin{align}
\frac{d}{dt}\rho_{1G} &= \left[ i(\Delta-\tilde{\delta}^{\{1\}}) - \tilde{\gamma}^{\{1\}} - \tfrac{\gamma}{2} \right] \rho_{1G} \nonumber \\ & - i\tilde{\Omega}^{\{1\}} - (i\tilde{\delta}^{\{1,2\}}+\tilde{\gamma}^{\{1,2\}}) \rho_{2G} \;,\\
\frac{d}{dt}\rho_{2G} &= \left[ i(\Delta-\tilde{\delta}^{\{2\}}) - \tilde{\gamma}^{\{2\}} - \tfrac{\gamma}{2} \right] \rho_{2G} \nonumber \\ &- i\tilde{\Omega}^{\{2\}} - (i
{{}\tilde{\delta}^{\{1,2\}}}^*
+{{}\tilde{\gamma}^{\{1,2\}}}^*
) \rho_{1G} \;.
\end{align}
From this we obtain the steady state solutions of the coherences
\begin{align}
\rho_{1G} &= \frac{\tilde{\Delta}^{\{2\}}\tilde{\Omega}^{\{1\}} - \left( -\tilde{\delta}^{\{1,2\}} + i \tilde{\gamma}^{\{1,2\}} \right) \tilde{\Omega}^{\{2\}} }
{
\tilde{\Delta}^{\{1\}}
\tilde{\Delta}^{\{2\}}
-
\Omega_C^2
} \label{eq:multi_eit_rho1G} \;,\\
\rho_{2G} &= \frac{\tilde{\Delta}^{\{1\}}\tilde{\Omega}^{\{2\}} - \left( -{{}\tilde{\delta}^{\{1,2\}}}^* + i {{}\tilde{\gamma}^{\{1,2\}}}^* \right) \tilde{\Omega}^{\{1\}} }
{
\tilde{\Delta}^{\{1\}}
\tilde{\Delta}^{\{2\}}
-
\Omega_C^2
} \label{eq:multi_eit_rho2G} \;,
\end{align}
with the abbreviations
\begin{align}
\tilde{\Delta}^{\{l\}} &= \Delta - \tilde{\delta}^{\{l\}} + i \,(\tfrac{\gamma}{2}+\tilde{\gamma}^{\{l\}})\;, \\
\Omega_C^2 &= \left( \tilde{\delta}^{\{1,2\}} -i\tilde{\gamma}^{\{1,2\}} \right)
\left( {{}\tilde{\delta}^{\{1,2\}}}^* -i {{}\tilde{\gamma}^{\{1,2\}}}^* \right) \;. \label{eq:multi_eit_omega_c}
\end{align}
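A direct numerical transcription of the steady-state expressions (\ref{eq:multi_eit_rho1G}) and (\ref{eq:multi_eit_rho2G}) is given below. The effective parameters are hypothetical values (in units of $\gamma$), chosen such that the second layer is strongly superradiant while the first one is driven and broadened only weakly, as in the idealized scenario discussed above.
\begin{verbatim}
import numpy as np

# Steady-state coherences, Eqs. (multi_eit_rho1G)-(multi_eit_omega_c);
# hypothetical effective parameters in units of gamma.
gamma = 1.0
Delta = np.linspace(-40.0, 40.0, 1601)
Omega_t1, Omega_t2   = 0.01 + 0.005j, 1.0 + 0.0j
delta_t1, delta_t2   = 0.1, 4.0
gamma_t1, gamma_t2   = 0.01, 25.0
delta_t12, gamma_t12 = 15.0 + 0.5j, 0.02 + 0.01j

D1 = Delta - delta_t1 + 1j * (0.5 * gamma + gamma_t1)
D2 = Delta - delta_t2 + 1j * (0.5 * gamma + gamma_t2)
Omega_C2 = ((delta_t12 - 1j * gamma_t12)
            * (np.conj(delta_t12) - 1j * np.conj(gamma_t12)))
den = D1 * D2 - Omega_C2

rho_1G = (D2 * Omega_t1 - (-delta_t12 + 1j * gamma_t12) * Omega_t2) / den
rho_2G = (D1 * Omega_t2
          - (-np.conj(delta_t12) + 1j * np.conj(gamma_t12)) * Omega_t1) / den

# Two broad dressed resonances appear near Delta ~ +/- |delta_t12|, with a
# narrow suppression (EIT) window of |rho_2G| around Delta ~ 0.
i0 = np.argmin(np.abs(Delta))
print("|rho_2G| at Delta = 0 / maximum:",
      np.abs(rho_2G[i0]), np.abs(rho_2G).max())
\end{verbatim}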
With the solutions for the coherences at hand, we can now turn to the observable, the complex reflection coefficient $R$. According to Eq.~(\ref{eq:R_lin_B0}), it is given by
\begin{align}
R = -1 + \sum_j \frac{{2 \kappa_R^{[j]}}}{\kappa^{[j]}+i \Delta_C^{[j]}} + R^{\{1\}} \rho_{1G} + R^{\{2\}} \rho_{2G}\;, \label{eq:eit_R}
\end{align}
with
\begin{align}
R^{\{l\}} = -\frac{i}{a_{\textrm{in}}} \sum_j \frac{ \sqrt{2 \kappa_R^{[j]} } }{ \kappa^{[j]}+i \Delta_C^{[j]} } \sqrt{\tfrac{2}{3}} {g^{[j]\{l\}}}^* \sqrt{N^{\{l\}}} \;. \label{eq:multi_eit_Rl_def}
\end{align}
At this point it is instructive to discuss the scaling related to the cavity detuning $\Delta_C$ once again. As before, we assume that $g^{[3]\{1\}} = 0$, i.e.~the first layer does not couple to the driven cavity mode $j=3$ since it is located at a field node. In this case we find that $R^{\{1\}} \sim 1/\Delta_C$, while $R^{\{2\}}$ is not suppressed due to a cavity detuning, since the second layer can couple to the resonantly driven mode as $g^{[3]\{2\}} \neq 0$. Furthermore, from Eqs.~(\ref{eq:multi_eit_rho1G}) and~(\ref{eq:multi_eit_rho2G}) we find that $\rho_{1G} \sim 1/\Delta_C$, whereas the $\rho_{2G}$ is not suppressed in this fashion. Therefore, for a qualitative understanding of the reflectance, it is well justified to drop the quadratically suppressed contribution $R^{\{1\}} \rho_{1G}$ and only consider the reflection signal which stems from the second layer, i.e.~$R^{\{2\}} \rho_{2G}$.
In a further step, we restrict the numerator of the fraction in $R^{\{2\}} \rho_{2G}$ to terms up to linear order in $1/\Delta_C$. Moreover, we neglect the tiny collective Lamb shift and superradiance of the nuclei in the first layer. This yields the reflection coefficient
\begin{align}
R = &-1 + \sum_j \frac{{2 \kappa_R^{[j]}}}{\kappa^{[j]}+i \Delta_C^{[j]}} \nonumber \\
&+ R^{\{2\}} \, \tilde{\Omega}^{\{2\}} \; \frac{ \Delta + i \,\tfrac{\gamma}{2}}
{
\left( \Delta + i \, \tfrac{\gamma}{2} \right)
\left( \Delta - \tilde{\delta}^{\{2\}} + i \,(\tfrac{\gamma}{2}+\tilde{\gamma}^{\{2\}}) \right)
- \Omega_C^2 } \;. \label{eq:multi_eit_R}
\end{align}
The nuclear contribution to the reflectance is revealed in the second line. Its spectral shape is essentially that of a system featuring EIT. Hence, we recover the same result as in Ref.~\cite{Roehlsberger2012}: In a cavity with two resonant layers it is possible to realize the phenomenon of electromagnetically induced transparency.
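The role of the coupling $\Omega_C$ in Eq.~(\ref{eq:multi_eit_R}) is easily visualized numerically. The sketch below keeps only the nuclear contribution (the empty-cavity term is assumed to be tuned away), uses purely illustrative parameters in units of $\gamma$, and compares the line center with and without the inter-layer coupling; switching $\Omega_C$ on opens the transparency window at resonance.
\begin{verbatim}
import numpy as np

# Nuclear part of Eq. (multi_eit_R); illustrative parameters in units of gamma.
gamma   = 1.0
delta_2 = 2.0        # collective Lamb shift of the antinode layer
gamma_2 = 30.0       # superradiant broadening of the antinode layer
prefac  = 25.0j      # overall amplitude R^{2} * Omega-tilde^{2}
Delta   = np.linspace(-60.0, 60.0, 2401)

def reflectance(Omega_C2):
    nuc = (prefac * (Delta + 0.5j * gamma)
           / ((Delta + 0.5j * gamma)
              * (Delta - delta_2 + 1j * (0.5 * gamma + gamma_2))
              - Omega_C2))
    return np.abs(nuc) ** 2

i0 = np.argmin(np.abs(Delta))
print("|R|^2 at resonance, control field on :", reflectance(100.0)[i0])
print("|R|^2 at resonance, control field off:", reflectance(0.0)[i0])
\end{verbatim}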
\subsection{Comparison to the semiclassical analysis\label{sec:comparison}}
In Ref.~\cite{Roehlsberger2012}, the case was studied in which the empty-cavity contribution to the reflectance vanishes, and
a semiclassical theory based on transfer matrix techniques was used to derive an expression for the nuclear reflectance. In the sign convention for the nuclear resonances used in the present work, the result was found to be
\begin{align}
R &= - i d_2 f_0 \tfrac{\gamma}{2} E_{2-+} \frac{ \Delta + i\tfrac{\gamma}{2}} {(\Delta + i\tfrac{\gamma}{2}) (\Delta + i\tfrac{\Gamma}{2} ) - \Omega_C^2} \;,\\
\Gamma &= \gamma (1+d_2 f_0 E_{2--}) \;, \\
\Omega_C^2 &= d_1 d_2 f_0^2 \tfrac{\gamma^2}{4} E_{2-+}E_{1+-} \;, \label{eq:Omega_C_semi}
\end{align}
where $d_1$ and $d_2$ are the thicknesses of the respective two layers, $f_0$ is the nuclear scattering amplitude at resonance, and $E_{2-+}$, $E_{2--}$, and $E_{1+-}$ are transfer matrix elements. Comparing it with the part of the nuclear reflection in the quantum optical expression given in Eq.~(\ref{eq:multi_eit_R}), we notice a perfect agreement of the structures of the two formulas.
However, as an important consistency check, it remains to be verified if the scaling with the number of nuclei in the two layers agrees as well. In the semiclassical theory it was shown that the amplitude of the reflection coefficient and the superradiance of the nuclei in the second layer scale linearly with the thickness of the second layer $d_2$, and furthermore the control field $\Omega_C$ was shown to be proportional to $\sqrt{d_1 d_2}$.
The present model does not directly contain the layer thicknesses as parameters. But since $d_1 \propto N^{\{1\}}$ and $d_2 \propto N^{\{2\}}$, it is sufficient to show that the scaling relations also hold for the numbers of nuclei. From Eqs.~(\ref{eq:multi_eit_omega_def})--(\ref{eq:multi_eit_g12_def}) and (\ref{eq:multi_eit_Rl_def}) it can indeed be seen that the relations are correctly reproduced by our theory.
This is an important result, since it is a strong hint that the two independently derived results do not coincide by chance, but also agree on a more fundamental level. Hence, the model developed here can now be employed to shine light on the EIT scenario from a different perspective.
In the nuclear reflectance calculated in Eq.~(\ref{eq:multi_eit_R}), the coupling Rabi frequency occurs as $\Omega_C^2$ in the denominator, whereas in standard EIT settings it appears as a positive real-valued variable $|\Omega_C|^2$. Taking a closer look at our definition of the coupling Rabi frequency in Eq.~(\ref{eq:multi_eit_omega_c}), we note that in our case $\Omega_C^2$ can generally be complex. Also in the semiclassical theory the complex field amplitudes and transfer matrix elements $E_{2-+}E_{1+-}$ allow for complex values, c.f.~Eq.~(\ref{eq:Omega_C_semi}). The results of Ref.~\cite{Roehlsberger2012}, though, seem to imply that the imaginary component is very small and an EIT situation is well realized. However, from the theoretical analysis of the semiclassical models, this fact could not be understood and the influence of the imaginary component was unclear~\cite{Roehlsberger2013}. With the present theory, though, it is now possible to examine the complex nature of the coupling in more detail. From Eq.~(\ref{eq:multi_eit_omega_c}) we know that it is not only given by the coherent coupling $\tilde{\delta}^{\{1,2\}}$ between the two layers as written in the Hamiltonian in Eq.~(\ref{eq:multi_eit_H}), but is also affected by the incoherent cross-damping term $\tilde{\gamma}^{\{1,2\}}$ between the two layers. In the discussion on the scalings we have already seen that, in contrast to the coherent contribution, the incoherent term is suppressed quadratically with the detuning of the off-resonant cavity modes. Thus, we find that the incoherent part
\begin{align}
\operatorname{Im}(\Omega_C^2) = -2 \operatorname{Re}\left( \tilde{\delta}^{\{1,2\}} {{}\tilde{\gamma}^{\{1,2\}}}^* \right) \sim \frac{1}{\Delta_C^3} \;
\end{align}
can be neglected, and $\Omega_C^2 \approx |\tilde{\delta}^{\{1,2\}}|^2$ such that the real component of the coupling frequency $\Omega_C$ dominates.
Furthermore, the microscopic ansatz of our quantum optical theory enables one to interpret the origin of the coupling between the layers. While in Ref.~\cite{Roehlsberger2012} it was shown that the EIT control field arises from radiative coupling between the two resonant layers, it can now be pinned down from Eqs.~(\ref{eq:multi_eit_d12_def}),~(\ref{eq:multi_eit_g12_def}) and~(\ref{eq:multi_eit_omega_c}) to
\begin{align}
\Omega_C^2 &= \left(\tfrac{2}{3}\right)^2 N^{\{1\}} N^{\{2\}}
\nonumber \\ &\quad\times
\left( \sum_j \frac{g^{[j]\{1\}} {g^{[j]\{2\}}}^*}{\Delta_C^{[j]}-i \kappa^{[j]}} \right)
\left( \sum_j \frac{{g^{[j]\{1\}}}^* {g^{[j]\{2\}}}}{\Delta_C^{[j]}-i \kappa^{[j]}} \right) \;.
\end{align}
Since we assumed that the first layer does not couple to the third guided mode in the idealized case, i.e.~$g^{[3]\{1\}} = 0$, we observe that the coupling field is only mediated via the remaining guided modes $j\neq 3$ in the cavity. This way, it becomes now also clear why the EIT phenomenon was not obtained in Sec.~\ref{sec:effect_layers}, where multiple layers, but only one guided mode was included in the theoretical analysis.
\subsection{Numerical analysis\label{sec:numerical_analysis}}
\begin{figure*}[!t]
\centering
\includegraphics[scale=1]{grid}
\caption{(Color online) The reflectance of (a) the EIT and (b) the non-EIT scenario is shown as a function of the detuning $\Delta$ and the incidence angle $\theta$. The results derived with the extended quantum optical model agree very well with the predictions from Parratt's formalism. The dashed line at $\theta \approx 3.5$ mrad marks the angle at which the 3rd cavity minimum is expected. A cut along this line corresponds to the spectrum measured in Ref.~\cite{Roehlsberger2012} and is shown in Fig.~\ref{fig:eit_qo_parratt_spectrum}. Parameters are given in Appendix~\ref{sec:appendix_parameters}.
\label{fig:eit_noneit_qo_parratt_grid}
}
\end{figure*}
Let us now see how well our analytical expression for the reflectance derived above performs in practice. In particular, we aim to describe the spectrum of the EIT cavity defined in the first column of Tab.~\ref{tab:layers_eit} with our quantum optical model. Moreover, we include a second cavity into the analysis: While the EIT cavity has its resonant layers in a node and antinode of the field of the resonantly driven mode, respectively, we also consider a cavity in which the situation is reversed. Namely, the first resonant layer is located at a field antinode and the second ensemble of nuclei at the field node. The corresponding geometry is defined in the last column of Tab.~\ref{tab:layers_eit}. The two cavity layouts reflect the cases discussed in Ref.~\cite{Roehlsberger2012}, where it was shown that the first cavity exhibits the EIT phenomenon, while for the second system the control coupling $\Omega_C$ vanishes and only a Lorentz-like spectrum is measured.
In order to determine the free parameters related to the cavities defined in Tab.~\ref{tab:layers_eit} for the quantum optical model in a consistent way, we employed the following method. First, we restricted ourselves to the first five guided modes in the theory and did not take into account the resonant nuclei yet. For each of these modes the angles $\theta_0^{[j]}$, at which the modes are driven resonantly, and the decay and coupling rates $\kappa^{[j]}$ and $\kappa_R^{[j]}$ have to be determined. The parameters can be found by fitting Eq.~(\ref{eq:R_C_multimode_heuristic}) to the reflection curve as function of the x-ray incidence angle as it was already done in Sec.~\ref{sec:effect_modes}.
With the cavity parameters at hand, the next step is to include the nuclear resonances in the model. In particular, the complex collective coupling coefficients $g^{[j]\{l\}}\sqrt{N^{\{l\}}}$ between the $j$th guided mode and the layer $l$ of resonant nuclei have to be determined. For each cavity the number of coupling coefficients is $10$, since we specialize to five guided modes in the analysis and each mode can couple to the two respective layers. In order to avoid arbitrariness in a fit to numerical data, it is advisable to decrease this large number of free parameters. In fact, it is possible to determine all coupling coefficients in a consistent way, while keeping only one global scaling as a free parameter. To illustrate this, we note that the coupling coefficients can be decomposed as
\begin{align}
g^{[j]\{l\}}\sqrt{N^{\{l\}}} = \tilde{\mathcal{E}}^{[j]\{l\}} \; \cdot \; \big( \tilde{g}^{\{l\}}\sqrt{N^{\{l\}}} \big) \;, \label{eq:g_decomposed}
\end{align}
where the first factor denotes the cavity field amplitude of mode $j$ at layer $l$, and the second factor includes the collective nuclear dipole moment. Next, we exploit that the complex field amplitudes in the cavity can be easily derived by means of Parratt's formalism~\cite{deBoer1991}.
In a simple picture, we can interpret the resonant nuclei in the cavity as a perturbation, which modifies the cavity field and, accordingly, the reflectance. The cavity field in the presence of nuclear resonances can be understood as a superposition of the bare cavity field and the contribution due to scattering at the nuclei. This presupposition clearly holds for x-ray frequencies apart from the nuclear resonance, but also gives consistent results directly at the resonance, where the perturbation due to the nuclei is not generally small. Hence, to determine the field coefficients $\tilde{\mathcal{E}}^{[j]\{l\}}$, it is not necessary to include any nuclear resonances in Parratt's formalism; only the bare cavity field in the absence of $^{57}$Fe resonances is required. With the input field normalized to intensity one, as in Fig.~\ref{fig:field}, we can directly compute all complex valued field coefficients $\tilde{\mathcal{E}}^{[j]\{l\}}$ at the center of the respective layers $l$ by tuning the incidence angle $\theta$ to the angles $\theta^{[j]}$ at which the $j$th cavity mode is driven resonantly. The remaining task is to determine the second coefficient in Eq.~(\ref{eq:g_decomposed}), which takes into account the nuclear properties and other constant contributions. Since both iron layers in the cavities have the same thickness and hence the number of nuclei is the same, we can expect that $\tilde{g}^{\{l\}}\sqrt{N^{\{l\}}}$ is a constant and acts only as a scaling parameter for the previously determined field amplitudes. This way, all coupling coefficients $g^{[j]\{l\}}\sqrt{N^{\{l\}}}$ can be deduced by fitting the model to numerical data, calculated with Parratt's formalism, with only one free scaling parameter. The complex field amplitudes for both cavities and the scaling parameter found in our analysis are summarized in Appendix \ref{sec:appendix_parameters}. We note that the couplings to the layers which are located in the field nodes of the third cavity mode do not completely vanish, due to the finite thickness of the layers and a potential misplacement in the cavity. However, they are found to be much smaller than the coupling coefficients of the respective layers in the cavity field antinode. This can already be deduced from the field intensity distributions shown in Fig.~\ref{fig:field}.
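As an illustration of this step, the following sketch computes the standing-wave field inside a schematic mirror/guide/mirror structure with a standard transfer-matrix calculation, which yields the same amplitudes as Parratt's recursive scheme for this geometry. All layer thicknesses and optical constants are rough placeholders rather than the parameters of the actual cavities, so the mode angles and field values printed here are only indicative.
\begin{verbatim}
import numpy as np

# Standing-wave field in a layered structure at grazing incidence; placeholder
# thicknesses and optical constants n = 1 - delta + i*beta.
lam = 0.86e-10                       # wavelength of the 14.4 keV radiation [m]
k0  = 2.0 * np.pi / lam
layers = [(0.0,   0.0,  0.0 ),       # vacuum
          (3e-9,  7e-6, 5e-7),       # top mirror (placeholder "Pt")
          (20e-9, 1e-6, 1e-9),       # guiding layer (placeholder "C")
          (3e-9,  7e-6, 5e-7),       # bottom mirror
          (0.0,   5e-6, 3e-7)]       # substrate (semi-infinite, thickness unused)

def amplitudes(theta):
    """Downward/upward amplitudes (A_j, B_j) in every layer for grazing angle theta."""
    kz = [k0 * np.sqrt((1.0 - d + 1j * b)**2 - np.cos(theta)**2)
          for (_, d, b) in layers]
    M = np.eye(2, dtype=complex)
    for j in range(len(layers) - 1):
        phi = kz[j] * layers[j][0]
        P = np.array([[np.exp(1j * phi), 0.0], [0.0, np.exp(-1j * phi)]])
        q = kz[j] / kz[j + 1]
        I = 0.5 * np.array([[1.0 + q, 1.0 - q], [1.0 - q, 1.0 + q]])
        M = I @ P @ M
    r = -M[1, 0] / M[1, 1]           # no upward wave in the substrate
    amps = [(1.0, r)]                # unit incident wave plus reflected wave r
    for j in range(len(layers) - 1):
        phi = kz[j] * layers[j][0]
        S = amps[j][0] * np.exp(1j * phi) + amps[j][1] * np.exp(-1j * phi)
        D = amps[j][0] * np.exp(1j * phi) - amps[j][1] * np.exp(-1j * phi)
        q = kz[j] / kz[j + 1]
        amps.append(((S + q * D) / 2.0, (S - q * D) / 2.0))
    return kz, amps, r

theta = 4.0e-3                        # an illustrative grazing angle [rad]
kz, amps, r = amplitudes(theta)
A, B = amps[2]
z = layers[2][0] / 2.0                # centre of the guiding layer
E = A * np.exp(1j * kz[2] * z) + B * np.exp(-1j * kz[2] * z)
print("reflection coefficient:", r)
print("complex field at the guiding-layer centre:", E)
\end{verbatim}
Scanning $\theta$ with such a routine and evaluating the complex field at the positions of the resonant layers is, up to the placeholder material data, the procedure used to fix the field coefficients described above.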
An alternative approach to determine the coupling constants $g^{[j]\{l\}}\sqrt{N^{\{l\}}}$ is to fit the model with all coefficients directly to numerical data. While this procedure is not as persuasive as the consistent method described above, it might offer some advantages in quantitative studies. Errors in other parameters, such as the coefficients which characterize the cavity modes, can partly be compensated. Moreover, for iron layers with a larger thickness the field amplitude might not be constant across the layer and, in contrast to the method from above, an effective coupling strength would be obtained naturally. Finally, the fitted parameters could provide a handle to capture in more detail the fact that, on resonance, the nuclei have an effect on the cavity field which goes beyond a perturbation. In this work, however, we will not optimize the parameters in this way, but utilize the coefficients derived previously to illustrate the general consistency of our model.
Now we are able to benchmark our analytical result for the case of two resonant layers, which was calculated in Eqs.~(\ref{eq:multi_eit_rho1G})--(\ref{eq:multi_eit_Rl_def}). A comparison with the frequency- and angular-dependent reflectance for the EIT and the non-EIT cavity is shown in Fig.~\ref{fig:eit_noneit_qo_parratt_grid}. Clearly, the agreement between the two different models is very good. We stress that this is not an obvious result, since the parameters for the quantum optical model were determined independently and not obtained from a fit to the numerical data.
A range, in which strong deviations can be observed, is the domain around $\Delta\approx 0$. Here, the exact numerical solution obtained from Parratt's formalism shows an additional structure. This can be understood from the following considerations. If the x rays are not resonant to the transition in the $^{57}$Fe nuclei, they will primarily be damped due to the electronic absorption in the cavity, before they can reach the lower resonant layer. If, however, their frequency is too close to resonance, the x rays will additionally be absorbed by the nuclei in the upper layer. Consequently, the field seen by the nuclei in the second layer is strongly modified compared to the off-resonant case. However, in the derivation above we assumed that the presence of the nuclei can be treated as a small perturbation to the cavity field, which is not the case in the extreme situation encountered here. An approach for future studies could thus be to comprise this effect self-consistently into the quantum optical theory for an even better agreement with the numerical data.
We now turn to the spectrum measured at the incidence angle corresponding to the third guided mode, i.e., the situation from Ref.~\cite{Roehlsberger2012}. The spectra for both the EIT and the non-EIT cavity defined in Tab.~\ref{tab:layers_eit} are shown in Fig.~\ref{fig:eit_qo_parratt_spectrum}.
Again, we observe a good qualitative agreement of our theory with the numerical data obtained with Parratt's formalism, which could already be anticipated from the accordance in Fig.~\ref{fig:eit_noneit_qo_parratt_grid}. But in any case, the fact that the EIT as well as the non-EIT spectrum is reproduced without post-optimization of the consistently derived parameters, supports the validity of our theoretical description.
Finally, we want to review the role of the coupling field $\Omega_C$. In the theoretical analysis in Sec.~\ref{sec:comparison} it was found that the presence of this control field gives rise to EIT. Moreover, we found that the control field is established by an interaction between the nuclear ensembles via different cavity modes, as the layer in the field node does not directly couple to the driven guided mode. From our numerical analysis we observe that this idealized case is not strictly realized. Since the coupling coefficient is small yet finite, the resonantly driven mode also gives rise to a coupling between the two layers in the cavity. Furthermore, the control field $\Omega_C$ does not vanish in the non-EIT case and hence the Lorentz-like spectrum cannot be explained by its absence within our model. Rather, in the non-idealized case it is the interplay with other contributions to the reflection coefficient and their interference which results in the Lorentzian spectrum.
\begin{figure}[t]
\centering
\includegraphics[scale=1]{spectrum}
\caption{\label{fig:eit_qo_parratt_spectrum}(Color online) Spectra of (a) the EIT and (b) the non-EIT cavity at incidence angle $\theta \approx 3.5$~mrad, at which the third guided mode is excited. The quantum optical description (blue solid lines) is in qualitative agreement with the exact result derived with the Parratt formalism (red dashed lines). Parameters are given in Appendix~\ref{sec:appendix_parameters}.}
\end{figure}
\section*{Summary and discussion}
In summary, we investigated the effect of multiple modes and multiple ensembles of resonant M\"ossbauer nuclei in an x-ray cavity QED setup, which has recently served as a platform for multiple experiments related to x-ray quantum optics.
So far, mostly the scenario with a single ensemble, realized by a layer of collectively acting nuclei, has been studied, both theoretically~\cite{Heeg2013b} and in several experiments~\cite{Roehlsberger2010,Heeg2013,Heeg2014,Heeg2014b}.
The theoretical framework of this work is applicable to also model experimental settings with more than one resonant ensemble~\cite{Roehlsberger2012} and interpret them from a quantum optical point of view.
Our theory from Sec.~\ref{sec:generalization} is based on the approach taken in Ref.~\cite{Heeg2013b} and constitutes a generalization with multiple cavity modes and several layers of resonant M\"ossbauer nuclei. Similar to the original theory, we were able to simplify the basic equations using two well justified approximations. By adiabatically eliminating the cavity modes and restricting the analysis to the linear regime, effective equations of motion for the nuclear ensembles could be derived. The resulting set of equations characterizing the dynamics of a few-level system can easily be solved analytically.
In Sections \ref{sec:effect_modes} and \ref{sec:effect_layers} we discussed the consequences of the two extensions to the theory in detail. By introducing multiple cavity modes to the model we found that the spectral properties around the resonance of $^{57}$Fe are unaffected. In the absence of magnetic hyperfine splitting, the nuclear response is given by Lorentz profiles, which are shifted and broadened due to collective effects. The differences from the predictions of Ref.~\cite{Heeg2013b} manifest only in the coefficients entering the final expressions. However, a clear difference could be observed when the reflectance was studied as a function of the x-ray incidence angle. While a single-mode theory can only capture one guided mode of the system at a time, our extension allows us to accurately model the reflectance over a range of several mrad, reproducing all guided modes. Moreover, we found that our model, which takes into account the effect of the cavity and its modes, can be heuristically extended to incorporate bulk material properties such as the total reflection envelope. This way, a close agreement with established semiclassical models could be achieved.
Further, the effect of multiple ensembles of $^{57}$Fe nuclei in the cavity, located in different layers, was studied. This extension alone did not give rise to qualitatively new effects.
Next, we analyzed the case in which both extensions enter the theory at the same time, i.e.~multiple cavity modes and multiple resonant layers. We could show that in this case the equations cannot be mapped to an effective two-level system, as the coupling coefficients between the different nuclear ensembles and cavity modes are mutually different and do not allow for a diagonalization in which only one excited state is probed. Rather, more advanced level schemes generally occur in this setting.
In the final part of this work we applied the general theory to the setting, which was experimentally explored in Ref.~\cite{Roehlsberger2012}. In this reference, EIT-like spectra could be observed for a cavity with two layers of resonant iron nuclei. We applied our quantum theoretical approach to the setting and could successfully reproduce the findings. An effective level scheme with one collective ground and two collective excited states which captures the complete system could be found and an analytic solution for the reflection coefficient was given. For the idealized case of perfectly placed layers in the cavity we found that the nuclear response has indeed the spectral shape of a system featuring EIT. In this process, we compared our result to the previously used semiclassical models and observed agreement on the analytic level. In particular, the scalings with respect to the number of atoms in the respective ensembles are reproduced by our quantum optical description. Most importantly, the question on the nature of the control field, which forms a pivotal requirement of EIT, could be elucidated. From our analysis of the idealized scenario we found that the radiative coupling is mediated by the off-resonant cavity modes.
We further developed a way to consistently derive the different coupling rates required for the model. This approach is based on an analysis of the cavity in the absence of nuclear resonances. Hence, the arising spectral features can be traced back to the capability of our model and are not due to a potentially biased parameter fit. In our numerical data we observe a good agreement to the results of semiclassical models and the essential features, such as the signatures of EIT, are reproduced.
While we mainly analyzed the cavity properties in the absence of magnetization, we emphasize that the extended theory description developed in this work is not restricted to a vanishing magnetic hyperfine splitting in the resonant layers. Rather, in our model it is possible to include all Zeeman sublevels properly. In future works, this could be exploited to combine the effect of multiple layers and modes, giving rise to the EIT-like effects, and magnetization, leading to the phenomenon of spontaneously generated coherences~\cite{Heeg2013}. This way, a broad class of quantum optical level schemes could be engineered, indicating promising perspectives of x-ray cavity QED with M\"ossbauer nuclei.
\section*{Acknowledgements}
Fruitful discussions with R. R\"ohlsberger are gratefully acknowledged. K.P.H. acknowledges funding by the German National Academic Foundation.
\section{ Introduction}
The raise and peel model \cite{GNPR, ALR} which is a stochastic model of a
fluctuating interface is, to our knowledge, the first example of a
stochastic model which has the space-time symmetry of conformal
invariance. This implies that the dynamic critical exponent $z = 1$
and certain scaling properties of various correlation functions are known.
This model
was extended in order to take into account sources at the boundaries
\cite{DGP, PP, PAR} keeping conformal invariance.
In all these cases, the
stationary states have magic combinatorial properties.
In the present paper we describe another extension of the raise and peel
model keeping conformal invariance (see also Appendix A and B)
by introducing defects on the interface.
These defects hop at long distances in a medium which is changed by the
hops. Between the hops the medium also changes. Finally when two defects
touch, they can annihilate. The stationary state is the one as the
original raise and peel model with no defects.
The whole process can be seen as a reaction $D + D \rightarrow \emptyset $,
where $D$ is a
defect, taking place in a disordered unquenched medium.
In Sec. II we describe the model. Like the raise and peel model
\cite{GNPR}, the present model comes from considering the action of a
Hamiltonian expressed in terms of Temperley-Lieb generators on a vector
space which is a left ideal of the Temperley-Lieb algebra. The ideal can be
mapped on graphs which constitute the configuration space of the model. We
shortly review in Appendix A the mathematical background of the
model and refer for details to Refs.~\cite{PP, PAR}.
In Sec. III, using Monte Carlo simulations, we describe the long range
hopping of defects and give the L\'evy flights probability distribution.
In Sec. IV, again using Monte Carlo simulations, starting with a
configuration which consists only of defects, we study the variation in
time $t$ of their density for a lattice of size $L$. We obtain the scaling
function which gives the number of defects in terms of $t/L$ and show how
conformal invariance gives some of its properties. In the thermodynamic
limit the density decreases in time like $1/t$ as is expected since in a
conformal invariant theory time and space are on equal footing.
In Sec. V we present our conclusions.
\section{ The raise and peel model with defects.}\label{sect2}
We consider an interface of a one-dimensional lattice with $L+1$ sites.
An interface is formed by attaching at each site a non-negative
integer height $h_i$ ($i = 0,1,\ldots,L$). We take $h_0 = h_L = 0$. If for two
consecutive sites $j$ and $j+1$ we have $h_j = h_{j+1} = 0$, on the link
connecting the two sites we put an arrow called defect (see Fig. 1). For
the remaining sites, the heights obey the restricted solid-on-solid
(RSOS) rules:
\begin{equation} \label{2.1}
h_{i+1} -h_i = \pm 1, \;\;\;\;\;\; h_i \geq 0.
\end{equation}
\begin{figure}[ht!]
\centering
{\includegraphics[angle=0,scale=0.40]{fig1av.eps}}
\caption{
One of the
configurations for $L = 21$ (22 sites). There are 3
defects
(arrows on the links) and 3 clusters. Also shown are 5 tiles (tilted
squares) (a)-(e)
belonging to the gas. When a tile hits the surface the effect is
different in the five cases. }
\label{fig1}
\end{figure}
A domain in which the RSOS rules are obeyed
$\{h_j=h_l=0, h_k >0, j<k<l\}$ is called a cluster. There are
3 clusters and 3 defects in Fig.~1 ($L = 21$). There are ${L \choose [L/2]}$
possible
configurations of the interface (we denote by $[x]$ the integer part of $x$).
There is a simple bijection between the configurations
of interfaces and defects, where Fig.~1 is an example, and ballot
paths \cite{PP}.
A ballot path is obtained if one follows the RSOS rules \rf{2.1}, takes $h_L=0$, but
leaves $h_0$ free ($0\leq h_0 \leq L$).
This fact was used in \cite{PAR} to define
a stochastic model different from the one described below.
In the case $L = 4$, the six possible
configurations are shown in Fig.~2. The configuration shown in Fig.~2b has 2
defects and one cluster, while there are no defects in Fig.~2f.
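As a quick consistency check (not part of the original analysis), one can enumerate these configurations by brute force for small $L$ and compare with the counting ${L \choose [L/2]}$ quoted above; the sketch below does this in a few lines.
\begin{verbatim}
from math import comb

def configurations(L):
    """All height profiles with h_0 = h_L = 0, h_i >= 0, and on each link
    either an RSOS step |h_{i+1}-h_i| = 1 or a defect h_i = h_{i+1} = 0."""
    paths = [[0]]
    for _ in range(L):
        new = []
        for p in paths:
            h = p[-1]
            if h == 0:
                new.append(p + [0])        # defect link
                new.append(p + [1])        # up step
            else:
                new.append(p + [h + 1])    # up step
                new.append(p + [h - 1])    # down step
        paths = new
    return [p for p in paths if p[-1] == 0]

for L in range(2, 11):
    print(L, len(configurations(L)), comb(L, L // 2))
\end{verbatim}
For $L=4$ the enumeration indeed returns the six configurations of Fig.~2.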
\begin{figure}[ht!]
\centering
{\includegraphics[angle=0,scale=0.38]{fig1dv.eps}}
\caption{
The six configurations for $L = 4$ (5 sites). In the stationary
state only the RSOS configurations e) and f) occur.}
\label{fig2}
\end{figure}
We consider the interface separating a film of tiles (clusters with
defects) from a gas of tiles (tilted squares).
The evolution of the system (Monte Carlo steps) is given by the
following rules. With a probability $P_i=\frac{1}{L-1}$ a tile from the gas
hits site $i$
($i = 1,\ldots,L-1$). As a result of this hit, the following effects can
take place:
\begin{itemize}
\item[a)] The tile hits a local maximum of a cluster ("a" in Fig.~1).
The
tile is reflected.
\item[b)] The tile hits a local minimum of a cluster ("b" in Fig.~1). The tile
is adsorbed.
\item[c)] The tile hits a cluster and the slope is positive
($h_{i+1}>h_i>h_{i-1}$) ("c" in Fig.~1). The tile is reflected after
triggering the
desorption of a layer of tiles from the segment ($h_j>h_i=h_{i+b}$, $j =
i+1,\ldots,i+b-1$), i.e., $h_j\rightarrow h_j -2,$ $j=i+1,...,i+b-1$. The
layer contains $b
- 1$ tiles (this is an odd
number). Similarly, if the slope is negative ($h_{i+1}<h_i<h_{i-1}$), the
tile is reflected after triggering the desorption of a layer of tiles
belonging to the segment ($h_j> h_i = h_{i-b}$, $j=i-b+1,\ldots,i-1$).
\item[d)] The tile hits the right end of a cluster $h_j> h_{i-c} = h_i = 0$
($j =i-c+1,\ldots,i-1$) and $h_{i+1}=0$. The link ($i,i+1$) contains a defect
("d" in Fig.~1). The defect hops on the link ($c,c+1$) after triggering
the desorption of a layer of tiles ($h_j\rightarrow h_j-2$,
$j=i-c+1,...,i-1$)
and
the tile is adsorbed producing a new small cluster ($h_{i-1}=h_{i+1}=0,
h_i=1$) (see Fig.~3). If the defect is at the left end of a cluster, the
rules are similar, the defect hops to the right after peeling the
cluster, and a new small cluster appears at the end of the old one.
\begin{figure}[ht!]
\centering
{\includegraphics[angle=0,scale=0.40]{fig1bv.eps}}
\caption{
The new profile after the tile "d" in Fig.~1 has hit the
surface at the right end of a cluster. The defect hops to the left end
of the cluster, peeling one layer and a new small cluster appears at the
right of the old cluster.}
\label{fig3}
\end{figure}
\item[e)] The tile hits a site between two defects ($h_{i-1}=h_i=h_{i+1}=0$).
This is the case "e" in Fig.~1. The two defects annihilate and in their
place appears a small cluster ($h_{i-1}=h_{i+1}=0, h_i=1$). See Fig.~4.
\end{itemize}
\begin{figure}[ht!]
\centering
{\includegraphics[angle=0,scale=0.40]{fig1cv.eps}}
\caption{
The new profile after the tile "e" has hit the surface between
two defects. The defects have disappeared and in their place one gets
a new small cluster.}
\label{fig4}
\end{figure}
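The cluster moves (a)--(c) are easily cast into a short Monte Carlo routine. The sketch below implements only these RSOS rules for configurations without defects (the non-local defect moves (d) and (e) are not included), which is sufficient to simulate the defect-free stationary dynamics.
\begin{verbatim}
import random

def mc_step(h):
    """One tile drop for the cluster rules (a)-(c) only; h[0..L] is an RSOS
    profile with h[0] = h[L] = 0 and no defects (no two adjacent zeros)."""
    L = len(h) - 1
    i = random.randint(1, L - 1)
    if h[i - 1] == h[i + 1] == h[i] - 1:        # (a) local maximum: reflect
        return
    if h[i - 1] == h[i + 1] == h[i] + 1:        # (b) local minimum: adsorb
        h[i] += 2
        return
    if h[i - 1] < h[i] < h[i + 1]:              # (c) positive slope: peel right
        b = 1
        while h[i + b] > h[i]:
            b += 1
        for j in range(i + 1, i + b):
            h[j] -= 2
    elif h[i - 1] > h[i] > h[i + 1]:            # (c) negative slope: peel left
        b = 1
        while h[i - b] > h[i]:
            b += 1
        for j in range(i - b + 1, i):
            h[j] -= 2

random.seed(0)
L = 64
h = [i % 2 for i in range(L + 1)]               # defect-free zigzag start
for _ in range(200 * L):
    mc_step(h)
print("average height after 200 L tile drops:", sum(h) / (L + 1))
\end{verbatim}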
To sum up: the defects ($D$) hop non-locally in a disordered (not
quenched) medium which changes between successive hops (local
adsorption and nonlocal desorption take place in the clusters). During
the hop the defect peels the cluster and therefore also changes the
medium. The
annihilation reaction $D + D \rightarrow \emptyset $ is local. If one starts
the
stochastic process with a certain configuration (for example, only
defects as in Fig.~\ref{fig2}a), due to the annihilation process, for $L$
even all the
defects disappear and in the stationary state one has only clusters
(RSOS configurations). The properties of the stationary states have
been studied elsewhere \cite{GNPR, ALR}. In the case $L$ odd, in the
stationary states one has one defect. In the next section we are going
to see how this defect hops and will observe that the defect behaves
like a random walker performing L\'evy flights. This will help us
understand the annihilation process $D + D \rightarrow \emptyset $ described
in Sec. IV.
The rules described above were obtained by using a
representation of the Temperley-Lieb algebra in a certain ideal \cite{DGP,
PP, PAR}(see Appendix A).
The finite-size scaling of the Hamiltonian eigenspectrum is
known from conformal field theory (see Appendix B), therefore the
physical properties of the model can be traced back to conformal
invariance.
\section{ The random walk of a defect}
Before discussing the annihilation reaction of defects, it is useful to
understand how defects hop. The simplest way to study the behavior of
defects is to take the stationary states in the case $L$ odd when we
have only a single defect. Although there is a lot of information about these
stationary states coming from combinatorics \cite{BGN,PAR} and Monte Carlo
simulations \cite{PAR}, the results we present here are new.
One asks what is the probability $P(s)$ for a defect to hop, in one
Monte Carlo step, a distance $s$ (we assume $L$ very large).
We first ask whether one can guess the result on physical grounds.
Let us assume that the defect behaves like a random walker and that
$P(s)$ describes L\'evy flights \cite{BG,HCF,WSUS,JMF}. This implies that for
large values of $s$ we have:
\begin{equation} \label{3.1}
P(s) \sim \frac{1}{|s|^{1+\sigma}}.
\end{equation}
If the random walker starts at a point $x = 0$ (for example in the
middle of the lattice), at large values of $t$, the dispersion is \cite{JMF}:
\begin{equation} \label{3.2}
<x^2> \sim t^{\frac{2}{\sigma}}.
\end{equation}
In a conformally invariant model there are no scales other than the size
of the system, and space and time are on an equal footing; therefore one
has to have $\sigma = 1$.
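This expectation can be illustrated with a simple simulation (not part of the original analysis): a walker whose jump lengths are drawn from a distribution with the tail \rf{3.1} for $\sigma=1$ has a typical displacement growing linearly in time, so that space and time indeed scale in the same way. The median of $|x(t)|$ is used below because for L\'evy flights the second moment of a single jump is not finite.
\begin{verbatim}
import random
import statistics

def levy_jump(sigma):
    """Integer jump with tail P(s) ~ 1/s**(1+sigma) and a random sign
    (sampled from a continuous Pareto law and rounded down)."""
    u = 1.0 - random.random()                  # uniform in (0, 1]
    s = int(u ** (-1.0 / sigma))
    return s if random.random() < 0.5 else -s

def median_displacement(sigma, t, walkers=2000):
    finals = []
    for _ in range(walkers):
        x = 0
        for _ in range(t):
            x += levy_jump(sigma)
        finals.append(abs(x))
    return statistics.median(finals)

random.seed(1)
for t in (50, 100, 200, 400):
    print("t =", t, " median |x| =", median_displacement(1.0, t))
# The median displacement roughly doubles when t doubles, i.e. it grows
# like t**(1/sigma) with sigma = 1.
\end{verbatim}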
In Fig.~5 we show $P(s)$ as obtained from Monte Carlo simulations for
systems of different sizes. One notices a data collapse for a large
domain of $s$. A fit to the data for the largest lattice ($L = 4095$)
gives, for large $s$:
\begin{equation} \label{3.3}
P(s) \approx \frac{2.25}{|s|^{2.06}},
\end{equation}
in agreement with what we expected.
\begin{figure}[ht!]
\centering
{\includegraphics[angle=0,scale=0.46]{fig2vnn.eps}}
\caption{
(Color online) The probability $P(s)$ for a defect to hop at a distance $s$ in
units of lattice spacing. Monte Carlo simulations were done on systems
of different sizes.}
\label{fig5}
\end{figure}
\section{ The density of defects at large times}
We are now going to study the number of defects $N_d(t,L)$ as a function of
time and lattice size taking at $t = 0$ the configuration where the
lattice is covered by defects only
(like in Fig.~2a). An
interesting and novel aspect of this study is the role of conformal
invariance.
Since there are no other scales in the system except $L$, we expect for
large values of $t$ and $L$:
\begin{equation} \label{4.1}
N_d(t,L) = f(\frac{t}{L}).
\end{equation}
In Fig.~6, we show $N_d(t,L)$ for various lattice sizes ($L$
odd). One sees a nice data collapse except for very small values of
$t/L$ where the convergence is slower. A similar (but not identical!)
function is obtained for $L$ even.
\begin{figure}[ht!]
\centering
{\includegraphics[angle=0,scale=0.46]{fig6-errornn.eps}}
\caption{
(Color online) The number of defects $N_d(t,L)$ as a function of $t/L$ for several lattice
sizes ($L$ odd). At $t= 0$, $N_d(t=0,L) = L$. The error bars are also shown.
The fitted linear curve shows that the
density decreases as the inverse of time.}
\label{fig6}
\end{figure}
We firstly discuss the behavior of $N_d$ for large values of $t/L$ (see
Fig.~7). A fit to the data gives ($L$ odd):
\begin{equation} \label{4.2}
N_d(\frac{t}{L}) = A_1^{(o)} e^{-\lambda_1^{(o)}\frac{t}{L}} +
A_2^{(o)} e^{-\lambda_2^{(o)}\frac{t}{L}} + \cdots ,
\end{equation}
where
\begin{eqnarray} \label{4.3}
A_1^{(o)} &=& 6.75, \;\;\;\; A_2^{(o)} = 17.27, \nonumber \\
\lambda_1^{(o)} &=& 8.21, \;\;\;\; \lambda_2^{(o)} = 26.48.
\end{eqnarray}
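For readers who wish to reproduce such a fit, the following sketch shows the procedure on synthetic data generated from the values quoted in \rf{4.3} (the actual fit in the text is, of course, performed on the Monte Carlo data of Fig.~7).
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def model(x, a1, l1, a2, l2):
    # Two-exponential ansatz of Eq. (4.2), with x = t/L.
    return a1 * np.exp(-l1 * x) + a2 * np.exp(-l2 * x)

rng = np.random.default_rng(0)
x = np.linspace(0.05, 0.6, 60)
y = model(x, 6.75, 8.21, 17.27, 26.48)          # synthetic "data"
y *= 1.0 + 0.02 * rng.standard_normal(x.size)   # 2% noise

popt, _ = curve_fit(model, x, y, p0=[5.0, 8.0, 15.0, 25.0])
print("A1, lambda1, A2, lambda2 =", popt)
print("CFT value 3*pi*sqrt(3)/2 =", 1.5 * np.pi * np.sqrt(3.0))
\end{verbatim}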
We can now compare the data obtained from the fit with the finite-size
scaling spectrum of the Hamiltonian (see Appendix B, Eqs. \rf{A10},\rf{A11}):
\begin{eqnarray} \label{4.5}
\lambda_1^{(o)} &=& \frac{3\pi\sqrt{3}}{2} = 8.1620971\cdots, \nonumber \\
\lambda_2^{(o)} &=& \frac{3\pi\sqrt{3}}{2} \frac{10}{3} = 27.20699\cdots.
\end{eqnarray}
No prediction can be made about $A_1^{(o)}$ or $A_2^{(o)}$ since they are not
universal, they depend on the initial conditions. Notice that $A_2^{(o)} >
A_1^{(o)}$
as it should be since the expansion should diverge for short times where
we expect
\begin{equation} \label{4.6}
N_d \sim \frac{L}{t}
\end{equation}
\begin{figure}[ht!]
\centering
{\includegraphics[angle=0,scale=0.46]{fig5avnn.eps}}
\caption{(Color online)
The number of defects $N_d$ as a function of $t/L$ as in Fig.~6
zoomed on the large time domain. The error bars, given in Fig.~\ref{fig6}, are not shown. }
\label{fig7}
\end{figure}
A similar fit, done for $L$ even (the data are shown in Fig.~8), gives:
\begin{equation} \label{4.7}
N_d(\frac{t}{L}) = A_1^{(e)} e^{-\lambda_1^{(e)}\frac{t}{L}} +
A_2^{(e)} e^{-\lambda_2^{(e)}\frac{t}{L}} + \cdots ,
\end{equation}
with
\begin{eqnarray} \label{4.8}
A_1^{(e)} &=& 2.83, \;\;\;\; A_2^{(e)} = 6.93,\nonumber \\
\lambda_1^{(e)} &=& 2.71, \;\;\;\; \lambda_2^{(e)} = 16.64.
\end{eqnarray}
We can again use the predictions of conformal invariance (see \rf{A10} and
\rf{A11}) and get
\begin{eqnarray} \label{4.10}
\lambda_1^{(e)} &=& \frac{3\pi\sqrt{3}}{2}\frac{1}{3} = 2.72069\cdots,\nonumber \\
\lambda_2^{(e)} &=& \frac{3\pi\sqrt{3}}{2}2 = 16.324194\cdots.
\end{eqnarray}
to be compared with \rf{4.8}.
\begin{figure}[ht!]
\centering
{\includegraphics[angle=0,scale=0.46]{fig5bvnn.eps}}
\caption{(Color online)
$N_d$ as a function of $t/L$ for large times for different lattice
sizes ($L$ even). The error bars are of the same order as in Fig.~\ref{fig6}.}
\label{fig8}
\end{figure}
In the small $t/L$ domain we get for $L$ even and odd:
\begin{equation} \label{4.11}
\rho = \frac{N_d}{L} \approx \frac{0.322}{t}.
\end{equation}
In order to find the correction term in \rf{4.11}, we have computed
$N_dt^2/L$
as shown in Fig.~9. We have obtained a straight line from which we get:
\begin{equation} \label{4.12}
\rho = \frac{0.322}{t} + \frac{0.334}{t^2} + \cdots .
\end{equation}
This last result is the same for $L$ even and odd. Notice that the
correction term in \rf{4.12} is not given by the scaling function \rf{4.1}.
\begin{figure}[ht!]
\centering
{\includegraphics[angle=0,scale=0.46]{fig4vn.eps}}
\caption{(Color online)
The density of defects times $t^2$ for short times. A linear fit to
the data obtained for the largest lattice ($L = 4096$) gives \rf{4.12}.}
\label{fig9}
\end{figure}
We have also computed the fluctuation of the density as a function of
time and got:
\begin{equation} \label{4.13}
\frac{<\rho^2> - <\rho>^2}{<\rho>^2} \approx \frac{0.237}{t^{1.00}} .
\end{equation}
We would like to compare our results with known results obtained for
diffusion and annihilation reactions ( $ A + A \rightarrow \emptyset $ )
with L\'evy flights \cite{ALB,HH,VH,DV,THVL}. In one dimension, for L\'evy
flights
given by Eq.~\rf{3.1}, one gets \cite{HH}:
\begin{equation} \label{4.14}
\rho \sim \left\{\begin{array}{rc}
t^{-\frac{1}{\sigma}}&\mbox{for}\quad \sigma>1 \\
\frac{\ln t}{t} &\mbox{for}\quad \sigma=1 \\
t^{-1}&\mbox{for}\quad \sigma<1
\end{array} \right.
\end{equation}
the critical dimension being $d_c= \sigma$.
If one compares \rf{4.14} for $\sigma = 1$,
as obtained in Sec. III, with \rf{4.12}, one notices the absence of the
$\ln t$
correction. Such a term, if present, could have been seen in our simulations
(one
observes that for large lattices, $\rho t$ converges to the value
0.322 from
above). Logarithmic corrections can also appear in a conformal field
theory if one has Jordan cells \cite{jordan} but there are no Jordan cells in
the
Hamiltonian \rf{A2} given in Appendix A \cite{PPe}.
We believe that the discrepancy between the results of our model and
those obtained for the reaction $A + A \rightarrow \emptyset $
comes from the fact that the two
models have little in common.
\section{ Conclusions}
We have presented an extension of the raise and peel model taking into
account defects. The main property of this model is that conformal
invariance is preserved. The model mimics a system in which particles move
in a disordered unquenched medium doing L\'evy flights and changing the
medium during the flights. Upon contact the defects annihilate. The
properties of the system are simple and could be anticipated on general
grounds based on conformal invariance.
Conformal field theory enters in the description of the scaling
function $N_d = f(t/L)$ ($N_d$ is the number of defects, $L$ the size of the
system and $t$ is the time).
The original raise and peel model \cite{ALR} (this is the present model with
the
defects absent) depends on a parameter $w$ which is the ratio of the
desorption and adsorption rates. If $w = 1$, one has conformal invariance
and the dynamic critical exponent $z = 1$. If one takes $0 < w < 1$, in the
disordered medium one has fewer clusters and $z$
varies continuously in the interval $0 < z < 1$.
One can add defects to the model and
repeat the exercise done in this paper for all values of $w$. In this
case one expects to find defects making L\'evy flights with a probability
distribution function:
\begin{equation} \label{5.1}
P(s)\sim \frac{1}{ s^{1+z}}.
\end{equation}
\section*{Acknowledgments}
We thank H. Hinrichsen for a careful reading of the manuscript.
This work has been partially supported by the Brazilian agencies FAPESP and
CNPq, and
by The Australian Research Council.
\section{Introduction} \label{secIntro}
The fusion process is a fundamental ingredient in the standard description of all rational conformal field theories. Roughly speaking, the fusion coefficient $\fusioncoeff{a}{b}{c}$ counts the multiplicity with which the family of fields $\phi_{c}$ appears in the operator product expansion of a field from family $\phi_{a}$ with a field from family $\phi_{b}$. This is succinctly written as a fusion rule:
\begin{equation}\label{eqAa-PB}
\phi_{a} \times \phi_{b} = \sum_c \fusioncoeff{a}{b}{c} \phi_{c}.
\end{equation}
This definition makes clear the fact that fusion coefficients are non-negative integers. Of course, one can define fusion in a more mathematically precise manner in terms of the Grothendieck ring of a certain abelian braided monoidal category that appears in the vertex operator algebra formulation of conformal field theory. However, we will not have need for such sophistication in what follows. For our purposes a fusion ring is defined by \eqnref{eqAa-PB}, where the coefficients $\fusioncoeff{a}{b}{c}$ are explicitly given.
The standard assumptions and properties of the operator product expansion then translate into properties of the fusion coefficients. It is convenient to express these in terms of matrices $\fusionmat{a}$ defined by $\sqbrac{\fusionmat{a}}_{bc} = \fusioncoeff{a}{b}{c}$. We assume that the identity field is in the theory; the corresponding family is denoted by $\phi_{0}$, and $\fusionmat{0}$ is therefore the identity matrix. Commutativity and associativity of the operator product expansion translate into
\begin{equation*}
\fusionmat{a} \fusionmat{b} = \fusionmat{b} \fusionmat{a} \qquad \text{and} \qquad \fusionmat{a} \fusionmat{b} = \sum_c \fusioncoeff{a}{b}{c} \fusionmat{c},
\end{equation*}
respectively. Last, given a family $\phi_{a}$, there is a unique family $\phi_{a^+}$ such that their operator product expansions contain fields from the family $\phi_{0}$ with multiplicity one (this is effectively just the normalisation of the two-point function). It follows\footnote{Here we are implicitly excluding logarithmic conformal field theories from our considerations.} that $\fusionmat{a^+} = \transpose{\fusionmat{a}}$, where $\transpose{}$ denotes transposition.
These matrices thus form a commuting set of normal matrices, and so may be simultaneously diagonalised by a unitary matrix $U$. The diagonalisation $\fusionmat{a} U = U D_a$ ($D_a$ diagonal) is equivalent to $\sum_c \fusioncoeff{a}{b}{c} U_{cd} = U_{bd} \lambda^{\brac{a}}_d$ where $\lambda^{\brac{a}}_d$ are the eigenvalues of $\fusionmat{a}$. Putting $b = 0$ then gives $U_{ad} = U_{0 d} \lambda^{\brac{a}}_d$, which determines the eigenvalues completely (if $U_{0 d}$ were to vanish, $U_{ad}$ would vanish for all $a$ contradicting unitarity).
The celebrated Verlinde conjecture \cite{VerFus} identifies the diagonalising matrix $U$ with the S-matrix $S$ describing the transformations of the characters of the chiral symmetry algebra induced by the modular transformation $\tau \mapsto -1 / \tau$. This gives a closed expression for the fusion coefficients:
\begin{equation} \label{eqnVerlinde}
\fusioncoeff{a}{b}{c} = \sum_d \frac{S_{ad} S_{bd} \conj{S_{cd}}}{S_{0d}}.
\end{equation}
It is worthwhile noting that the Verlinde conjecture has recently been proved for a fairly wide class of conformal field theories (in the vertex operator algebra approach) \cite{HuaVer}.
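As a quick illustration of \eqnref{eqnVerlinde} (a numerical sketch only, and not needed in what follows), consider $\alg{g} = \func{\alg{su}}{2}$ at level $k$, for which the modular S-matrix takes the familiar form $S_{ab} = \sqrt{2 / \brac{k+2}} \, \sin \sqbrac{\pi \brac{a+1} \brac{b+1} / \brac{k+2}}$, with $a , b = 0 , 1 , \ldots , k$ labelling the primaries by their Dynkin labels (a convention we adopt here purely for illustration). The following Python fragment recovers the fusion coefficients by rounding the right hand side of \eqnref{eqnVerlinde} to the nearest integer:
\begin{verbatim}
import numpy as np

def su2_fusion(k):
    # S-matrix of affine su(2) at level k; labels are the Dynkin labels 0..k
    n = k + 2
    a = np.arange(k + 1)
    S = np.sqrt(2.0 / n) * np.sin(np.pi * np.outer(a + 1, a + 1) / n)
    # Verlinde formula: N_{ab}^c = sum_d S_{ad} S_{bd} conj(S_{cd}) / S_{0d}
    N = np.einsum('ad,bd,cd,d->abc', S, S, S.conj(), 1.0 / S[0])
    return np.rint(N.real).astype(int)

N = su2_fusion(2)                    # level 2: primaries 0, 1 and 2
assert (N[1, 1] == [1, 0, 1]).all()  # phi_1 x phi_1 = phi_0 + phi_2
\end{verbatim}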
Mathematically, these families with their fusion product define a finitely-generated, associative, commutative, unital ring. Moreover, this \emph{fusion ring} is freely generated as a $\mathbb{Z}$-module (abelian group), and possesses a distinguished ``basis'' in which the structure constants are all non-negative integers. The matrices $\fusionmat{a}$ introduced above correspond to this basis in the regular representation of the fusion ring. It is often convenient to generalise this structure to a \emph{fusion algebra} (also known as a \emph{Verlinde algebra}) by allowing coefficients in an algebraically closed field, $\mathbb{C}$ say. We will denote the fusion ring by $\fusionring{}$, and the corresponding fusion algebra (over $\mathbb{C}$) by $\fusionalg{} = \fusionring{} \otimes_{\mathbb{Z}} \mathbb{C}$. It is important to note that the structure which arises naturally in applications is the fusion ring, and that the fusion algebra is just a useful mathematical construct.
One of the first advantages in considering $\fusionalg{}$ is that it contains the elements \cite{FucFus}
\begin{equation}\label{eqnFusIdempotents}
\pi_a = S_{0 a} \sum_b \conj{S_{ab}} \phi_{b},
\end{equation}
where the sum is over the distinguished basis of $\fusionalg{}$. A quick calculation shows that the $\pi_a$ then form a basis of orthogonal idempotents: $\pi_a \times \pi_b = \delta_{ab} \pi_b$. It follows that there are no non-zero nilpotent elements in $\fusionalg{}$, and hence the same is true for $\fusionring{}$.
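For completeness, the quick calculation referred to above runs as follows: assuming (as is standard) that $S$ is symmetric as well as unitary, \eqnref{eqnVerlinde} gives
\begin{equation*}
\pi_a \times \pi_b = S_{0 a} S_{0 b} \sum_{c,d,e} \conj{S_{ac}} \, \conj{S_{bd}} \fusioncoeff{c}{d}{e} \phi_{e} = S_{0 a} S_{0 b} \sum_{e,f} \frac{\delta_{af} \delta_{bf}}{S_{0 f}} \conj{S_{ef}} \phi_{e} = \delta_{ab} \, S_{0 a} \sum_e \conj{S_{ae}} \phi_{e} = \delta_{ab} \pi_a ,
\end{equation*}
the middle equality following from unitarity in the form $\sum_c \conj{S_{ac}} S_{cf} = \delta_{af}$.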
Since the fusion algebra is finitely-generated, associative, and commutative, it
may be presented as a free polynomial ring (over $\mathbb{C}$) in its generators,
modulo an ideal $\fusionideal{}{\mathbb{C}}$. The lack of non-trivial nilpotent
elements implies that this ideal has the property that whenever some positive power of a polynomial belongs to the ideal, so does the polynomial itself. That is, the ideal is \emph{radical}, hence completely determined by the variety of points (in $\mathbb{C}^n$) at which every polynomial in the ideal vanishes \cite{CoxIde}. This variety will be referred to as the \emph{fusion variety}.
As $\fusionalg{}$ is a finite-dimensional vector space over $\mathbb{C}$, it follows that the fusion variety consists of a finite number of points, one for each basis element \cite{CoxIde}. Since the $\pi_a$ of \eqnref{eqnFusIdempotents} form a basis of idempotents, they correspond to polynomials which take the values $0$ and $1$ on the fusion variety. Their \emph{supports} (points of the fusion variety where the representing polynomials take value $1$) cannot be empty, and their orthogonality ensures that their supports must be disjoint. This forces the supports to consist of a single point, different for each $\pi_a$. We denote this point of the fusion variety by $v^a$. It now follows from inverting \eqnref{eqnFusIdempotents} that the polynomial $p_a$ representing $\phi_{a}$ takes the value $\lambda_b^{\brac{a}} = S_{ab} / S_{0 b}$ at $v^b$.
Suppose now that there is a subset $\set{\phi_{a_i} \colon i = 1 , \ldots , r}$ of the $\phi_{a}$ which generates the entire fusion algebra. If we take the free polynomial ring to be $\polyring{\mathbb{C}}{\phi_{a_1} , \ldots , \phi_{a_r}}$, then the coordinates of the fusion variety are just
\begin{equation*}
v^b_i = \func{p_{a_i}}{v^b} = \frac{S_{a_i b}}{S_{0 b}}.
\end{equation*}
This proves the following result of Gepner \cite{GepFus}:
\begin{proposition} \label{propGepner1}
$\fusionalg{} \cong \polyring{\mathbb{C}}{\phi_{a_1} , \ldots , \phi_{a_r}} / \fusionideal{}{\mathbb{C}}$, where $\fusionideal{}{\mathbb{C}}$ is the (radical) ideal of polynomials vanishing on the points
\begin{equation*}
\set{\brac{\frac{S_{a_1 b}}{S_{0 b}} , \ldots , \frac{S_{a_r b}}{S_{0 b}}} \in \mathbb{C}^r}.
\end{equation*}
\end{proposition}
Notice that this result only characterises the fusion algebra. The fusion ring may likewise be represented as a quotient of $\polyring{\mathbb{Z}}{\phi_{a_1} , \ldots , \phi_{a_r}}$, where the fusion ideal is given by $\fusionideal{}{\mathbb{Z}} = \fusionideal{}{\mathbb{C}} \cap \polyring{\mathbb{Z}}{\phi_{a_1} , \ldots , \phi_{a_r}}$ \cite{FucFus}. The fusion ideal over $\mathbb{Z}$ thus inherits the property from $\fusionideal{}{\mathbb{C}}$ that if any integral multiple of a polynomial is in the ideal, then so is the polynomial itself. This ensures that the quotient is a free $\mathbb{Z}$-module, as required. By analogy with radical ideals (and for want of a better name), we will refer to ideals with this property as being \emph{dividing}.
In this paper, we are interested in the fusion rings of Wess-Zumino-Witten models. These are conformal field theories defined on a group manifold $\group{G}$ (which we will take to be simply-connected, connected, and compact), and parametrised by a positive integer $k$ called the level. Our motivation derives from the determination of the dynamical charge group of a certain class of D-brane in these theories. The brane charges \cite{PolDir,MinKTh} can be computed explicitly, and the order of the charge group can be shown to be constrained by the fusion rules \cite{BouNot,FreBra}. A suitably detailed understanding of the structure of the fusion rules therefore makes the computation of the charge group possible. This was achieved for the models based on the groups $\group{G} = \func{\group{SU}}{r+1}$ in \cite{FreBra}, and the general case in \cite{BouDBr02}.
However, the general charge group computations have only been rigorously proved for $\group{G} = \func{\group{SU}}{r+1}$ and\footnote{In this paper we denote by $\func{\group{Sp}}{2r}$ the (unique up to isomorphism) connected, simply-connected, compact Lie group whose Lie algebra is $\func{\alg{sp}}{2r}$.} $\func{\group{Sp}}{2r}$, essentially because the detailed structure of the fusion rules associated with the other groups is not well understood. The aim of this paper is to re-examine the cases which have been described, and try to elucidate a corresponding detailed structure in other cases.
The field families of a level-$k$ Wess-Zumino-Witten model on the group manifold $\group{G}$ are conveniently labelled by an integrable highest weight representation of the associated untwisted affine Lie algebra $\affine{\alg{g}}$, hence by the projection of the highest weight onto the weight space of the horizontal subalgebra $\alg{g}$ (which will be identified with the Lie algebra of $\group{G}$). In other words, the abstract elements naturally appearing in the fusion rules may be identified with the integral weights (of $\alg{g}$) in the closed fundamental affine alcove. We denote this set of weights by $\affine{\group{P}}_k$. In what follows, it will usually prove more useful to regard these weights as the integral weights in the \emph{open, shifted} fundamental alcove. Concretely,
\begin{equation*}
\affine{\group{P}}_k = \set{\lambda \in \group{P} \colon \bilin{\lambda + \rho}{\alpha_i} > 0 \text{ for all } i, \text{ and } \bilin{\lambda + \rho}{\theta} < k + \Cox^{\vee}},
\end{equation*}
where $\group{P}$ is the weight lattice, $\alpha_i$ are the simple roots, $\theta$ denotes the highest root, $\rho$ the Weyl vector, and $\Cox^{\vee}$ is the dual Coxeter number of $\alg{g}$. The inner product on the weight space is normalised so that $\bilin{\theta}{\theta} = 2$.
For these Wess-Zumino-Witten models, the Verlinde conjecture was proven in \cite{TsuCon,FalPro,BeaCon}. By combining this with the Kac-Peterson formula \cite{KacPetInf} for the Wess-Zumino-Witten S-matrix elements,
\begin{equation} \label{eqnKacPeterson}
S_{\lambda \mu} = \func{C}{\affine{\alg{g}} , k} \sum_{w \in \group{W}} \det w \ e^{-2 \pi \mathfrak{i} \bilin{\func{w}{\lambda + \rho}}{\mu + \rho} / \brac{k + \Cox^{\vee}}}
\end{equation}
(here $\func{C}{\affine{\alg{g}} , k}$ is a constant and $\group{W}$ is the Weyl group of $\group{G}$), one can derive a very useful expression for the fusion coefficients, known as the Kac-Walton formula \cite{KacInf,WalFus,WalAlg,FucWZW,FurQua}:
\begin{equation} \label{eqnKacWalton}
\fusioncoeff{\lambda}{\mu}{\nu} = \sum_{\affine{w} \in \affine{\group{W}}_k} \det \affine{w} \ \tensorcoeff{\lambda}{\mu}{\affine{w} \cdot \nu}.
\end{equation}
This formula relates the fusion coefficients to the tensor product multiplicities $\tensorcoeff{\lambda}{\mu}{\nu}$ of the irreducible representations of the group $\group{G}$ (or its Lie algebra $\alg{g}$), via the shifted action of the affine Weyl group $\affine{\group{W}}_k$ at level $k$, $\affine{w} \cdot \nu = \func{\affine{w}}{\nu + \rho} - \rho$.
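For orientation, consider $\group{G} = \func{\group{SU}}{2}$ at level $k$, with the weights of $\affine{\group{P}}_k$ labelled by their Dynkin labels $0 , 1 , \ldots , k$. The tensor product multiplicities are then the usual Clebsch--Gordan ones, and \eqnref{eqnKacWalton} reproduces the familiar truncated fusion rules:
\begin{equation*}
\fusioncoeff{\lambda}{\mu}{\nu} =
\begin{cases}
1 & \text{if } \abs{\lambda - \mu} \leqslant \nu \leqslant \min \set{\lambda + \mu , 2k - \lambda - \mu} \text{ and } \lambda + \mu + \nu \in 2 \mathbb{Z}, \\
0 & \text{otherwise.}
\end{cases}
\end{equation*}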
The Kac-Walton formula suggests that for Wess-Zumino-Witten models, it may be advantageous to choose the free polynomial ring appearing in \propref{propGepner1} to be the complexified representation ring (character ring) of $\group{G}$. The character of the irreducible representation of highest weight $\lambda$ is given by
\begin{equation*}
\chi_{\lambda} = \sum_{\mu \in P_{\lambda}} e^{\mu} = \frac{\sum_{w \in \group{W}} \det w \ e^{\func{w}{\lambda + \rho}}}{\sum_{w \in \group{W}} \det w \ e^{\func{w}{\rho}}},
\end{equation*}
where $P_{\lambda}$ is the set of weights of the representation with multiplicity (and the second equality is the Weyl character formula). The character ring is freely generated by the characters $\chi_{\Lambda_i} \equiv \chi_i$ ($i = 1 , \ldots , r = \rank \group{G}$) of the representations whose highest weights are the fundamental weights $\Lambda_i$ of $\group{G}$. Gepner's result for Wess-Zumino-Witten models may therefore be recast in the form:
\begin{proposition} \label{propGepner}
The fusion algebra of a level-$k$ Wess-Zumino-Witten model is given by $\fusionalg{k} \cong \polyring{\mathbb{C}}{\chi_1 , \ldots , \chi_r} / \fusionideal{k}{\mathbb{C}}$, where $\fusionideal{k}{\mathbb{C}}$ is the (radical) ideal of polynomials vanishing on the points
\begin{equation*}
\set{\brac{\frac{S_{\Lambda_1 \lambda}}{S_{0 \lambda}} , \ldots , \frac{S_{\Lambda_r \lambda}}{S_{0 \lambda}}} \in \mathbb{C}^r \colon \lambda \in \affine{\group{P}}_k}.
\end{equation*}
\end{proposition}
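For example, for $\group{G} = \func{\group{SU}}{2}$ at level $k$, the fusion variety consists of the $k+1$ distinct real points
\begin{equation*}
\frac{S_{\Lambda_1 \lambda}}{S_{0 \lambda}} = 2 \cos \sqbrac{\pi \brac{\lambda_1 + 1} / \brac{k+2}} , \qquad \lambda_1 = 0 , 1 , \ldots , k,
\end{equation*}
so $\fusionideal{k}{\mathbb{C}}$ is the (principal) ideal generated by the monic polynomial of degree $k+1$ in $\chi_1$ whose roots are these points (the results of \secref{secFusRing} will identify this polynomial as $\chi_{\brac{k+1} \Lambda_1}$).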
We will likewise denote a level-$k$ Wess-Zumino-Witten fusion ring by $\fusionring{k}$ and the corresponding fusion ideal of $\polyring{\mathbb{Z}}{\chi_1 , \ldots , \chi_r}$ by $\fusionideal{k}{\mathbb{Z}}$.
We are interested in explicit sets of generators for these fusion ideals (over $\mathbb{C}$ and $\mathbb{Z}$). Given a candidate set of elements in $\fusionideal{k}{\mathbb{C}}$, the verification that this set is generating may be broken down into three parts: First, one checks that each element vanishes on the fusion variety. Second, one must show that these elements do not collectively vanish anywhere else. Third, the ideal generated by this candidate set must be verified to be radical. This last step is always necessary because there is generically an infinite number of ideals corresponding to a given variety (consider the ideals $\ideal{x^n} \subset \polyring{\mathbb{C}}{x}$ which all vanish precisely at the origin). It should be clear that verifying radicality does not consist of the trivial task of checking that the candidate generating set contains no powers of polynomials (consider $\ideal{x^2 + y^2 , 2 x y} \subset \polyring{\mathbb{C}}{x,y}$).
For the $\func{\group{SU}}{r+1}$ and $\func{\group{Sp}}{2r}$ fusion algebras, generating sets for $\fusionideal{k}{\mathbb{C}}$ have been postulated in \cite{GepFus,BouTop,GepSym} as the partial derivatives of a \emph{fusion potential}. The first step of the verification process is well-documented there, the second step appears somewhat sketchy, and the third does not seem to have appeared in the literature at all. We rectify this in \secref{secFusAlg}. The methods we employ are then used to show why analogous potentials have not been found for the other groups, despite several attempts \cite{CreFus,MlaInt}.
However, we would like to repeat our claim that it is the fusion ring which is of physical interest in applications, and the above verification process does not allow us to conclude that a set of elements is generating over $\mathbb{Z}$. In other words, a set of generators for $\fusionideal{k}{\mathbb{C}}$ need not form a generating set for $\fusionideal{k}{\mathbb{Z}}$, even if the set consists of polynomials with integral coefficients (a simple example would be if $\fusionideal{k}{\mathbb{C}} = \ideal{x + y , x - y} \subset \polyring{\mathbb{C}}{x,y}$ then $\fusionideal{k}{\mathbb{Z}} \neq \ideal{x + y , x - y} \subset \polyring{\mathbb{Z}}{x,y}$ as this latter ideal is not dividing). This consideration also seems to have been overlooked in the literature, and is, in our opinion, quite a serious omission. We will rectify this situation in \secref{secFusRing} by removing the need to postulate a candidate set of generators; instead, we shall derive generating sets \emph{ab initio}.
In the cases $\group{G} = \func{\group{SU}}{r+1}$ and $\func{\group{Sp}}{2r}$, some simple manipulations will allow us to reduce the number of generators in these sets drastically. We will see that these manipulations reproduce the aforementioned fusion potentials. Our results therefore constitute the first complete derivation of this description from first principles, and we emphasise that this derivation holds over $\mathbb{Z}$. The results to this point have already been detailed in \cite{RidPhD}. We then detail the analogous manipulations for $\func{\group{Spin}}{2r+1}$ in \secref{secFusSpin}, producing a relatively small set of explicit generators for the corresponding fusion ideal. It is not clear to us whether these generators are related to a description by fusion potentials. The manipulations essentially rely upon the application of a class of identities generalising the classical Jacobi-Trudy identity (which we will collectively refer to as Jacobi-Trudy identities). Many of these are well-known \cite{WeyCla}, but we were unable to find identities for spinor representations in the literature, so we include derivations in \appref{secJT}. We also include the corresponding identities for $\func{\group{Spin}}{2r}$, as they may be of independent interest.
\section{Presentations of Fusion Algebras} \label{secFusAlg}
In this section, we consider the description of the fusion ideals $\fusionideal{k}{\mathbb{C}}$ by fusion potentials. We introduce the potentials for the Wess-Zumino-Witten models over the groups $\func{\group{SU}}{r+1}$ and $\func{\group{Sp}}{2r}$, and verify that the induced ideals vanish precisely on the fusion variety, \emph{and} are radical. We then investigate the obvious class of analogous potentials for Wess-Zumino-Witten models over other groups, and show that in these cases, no potential in this class correctly describes the fusion algebra. Readers that are only interested in fusion \emph{rings} and presentations of the ideals $\fusionideal{k}{\mathbb{Z}}$ should skip to \secref{secFusRing}.
\subsection{Fusion Potentials} \label{secFusPot}
For Wess-Zumino-Witten models over $\func{\group{SU}}{r+1}$ and $\func{\group{Sp}}{2r}$, the fusion ideal is supposed to be generated by the partial derivatives (with respect to the characters $\chi_i$ of the fundamental representations) of a single polynomial, called the \emph{fusion potential}. At level $k$, \cite{GepFus} gives the $\func{\group{SU}}{r+1}$-potential as
\begin{equation} \label{eqnFusPotSU}
\func{V_{k+r+1}}{\chi_1 , \ldots , \chi_r} = \frac{1}{k+r+1} \sum_{i=1}^{r+1} q_i^{k+r+1},
\end{equation}
where the $q_i$ are the (formal) exponentials of the weights $\varepsilon_i$ of the defining representation (whose character is $\chi_1$). Note that $q_1 \cdots q_{r+1} = 1$. The $\varepsilon_i$ are permuted by the Weyl group $\group{W} = \group{S}_{r+1}$ of $\func{\group{SU}}{r+1}$, and $\group{W}$ acts analogously on the $q_i$. Therefore, $V_{k+r+1}$ is clearly $\group{W}$-invariant, hence is indeed a polynomial in the $\chi_i$ \cite{BouLie2}.
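For orientation (and as a consistency check), the rank-one case reduces to Chebyshev polynomials. Writing $\chi_1 = q_1 + q_1^{-1} = 2 \cos \theta$, \eqnref{eqnFusPotSU} becomes $\func{V_{k+2}}{\chi_1} = 2 \cos \sqbrac{\brac{k+2} \theta} / \brac{k+2}$, so
\begin{equation*}
\parD{V_{k+2}}{\chi_1} = \frac{\sin \sqbrac{\brac{k+2} \theta}}{\sin \theta} = \chi_{\brac{k+1} \Lambda_1},
\end{equation*}
whose zeros $\chi_1 = 2 \cos \sqbrac{\pi \brac{\lambda_1 + 1} / \brac{k+2}}$, $\lambda_1 = 0 , \ldots , k$, are precisely the points of the $\func{\group{SU}}{2}$ fusion variety.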
The level-$k$ $\func{\group{Sp}}{2r}$-potential is given in \cite{BouTop,GepSym} as
\begin{equation} \label{eqnFusPotSp}
\func{V_{k+r+1}}{\chi_1 , \ldots , \chi_r} = \frac{1}{k+r+1} \sum_{i=1}^r \sqbrac{q_i^{k+r+1} + q_i^{-\brac{k+r+1}}},
\end{equation}
where the $q_i$ and $q_i^{-1}$ refer to the (formal) exponentials of the weights $\pm \varepsilon_i$ of the defining representation of $\func{\group{Sp}}{2r}$ (whose character is again $\chi_1$). The Weyl group $\group{W} = \group{S}_r \ltimes \mathbb{Z}_2^r$ acts on the $\varepsilon_i$ by permutation ($\group{S}_r$) and negation (each $\mathbb{Z}_2$ sends one $\varepsilon_i$ to $-\varepsilon_i$ whilst leaving the others invariant). We see again that the given potential is a $\group{W}$-invariant, hence a polynomial in the $\chi_i$.
These potentials are obviously best handled with generating functions. We also note that these potentials may be unified as
\begin{equation} \label{eqnFusPotSUSp}
\func{V_{k+\Cox^{\vee}}}{\chi_1 , \ldots , \chi_r} = \frac{1}{k+\Cox^{\vee}} \sum_{\mu \in P_{\Lambda_1}} e^{\brac{k+\Cox^{\vee}} \mu},
\end{equation}
where $P_{\lambda}$ denotes the set of weights of the irreducible representation of highest weight $\lambda$. Putting this form into a generating function (and dropping the explicit $\chi_i$ dependence) gives
\begin{equation*}
\func{V}{t} = \sum_{m=1}^{\infty} \brac{-1}^{m-1} V_m t^m = \log \sqbrac{\prod_{\mu \in P_{\Lambda_1}} \brac{1 + e^{\mu} t}}.
\end{equation*}
This generating function may therefore be expressed in terms of the characters of the exterior powers of the defining representation. These exterior powers are well-known \cite{FulRep}, and give
\begin{align}
\func{\group{SU}}{r+1} & \text{:} & \func{V}{t} &= \log \sqbrac{\sum_{n=0}^{r+1} \chi_n t^n}, \label{eqnFusPotGFSU}
\intertext{where $\chi_0 = \chi_{r+1} = 1$, and}
\func{\group{Sp}}{2r} & \text{:} & \func{V}{t} &= \log \sqbrac{\sum_{n=0}^{r-1} E_n \brac{t^n + t^{2r-n}} + E_r t^r}, \label{eqnFusPotGFSp}
\end{align}
where $\chi_0 = 1$, $\chi_n = 0$ for all $n < 0$, and $E_n = \chi_n + \chi_{n-2} + \chi_{n-4} + \ldots$.
At this point it should be mentioned that there is an explicit construction for arbitrary rational conformal field theories \cite{AhaGen}, which determines a function whose derivatives vanish on the fusion variety. This construction, however, requires an explicit knowledge of the S-matrix elements, and is quite unwieldy (as compared with the above potentials). Indeed, it also seems to possess significant ambiguities, and it is not clear how to fix this so as to find a potential with a representation-theoretic interpretation. In any case, it also appears to be difficult to determine whether the ideals thus obtained are radical or dividing, so we will not consider this construction any further. There is also a paper \cite{CreFus} postulating simple potentials for every Wess-Zumino-Witten model, similar in form to those of \eqnDref{eqnFusPotSU}{eqnFusPotSp}. But, as pointed out in \cite{MlaInt}, the partial derivatives of the potentials given do not always vanish on the fusion variety, and so cannot generate the fusion ideal. In \cite{MlaInt}, fusion potentials are presented for rings related to the fusion rings of the Wess-Zumino-Witten models over the special orthogonal groups. Unfortunately, their method fails to give the fusion rings for the special orthogonal groups. We will see in \secref{secGenPot} why this is the case.
\subsection{Verification} \label{secFusPotVerify}
Let us first establish that the ideals defined by the potentials given in \eqnDref{eqnFusPotSU}{eqnFusPotSp} vanish on their respective fusion varieties. From \propref{propGepner}, the points of the fusion variety have coordinates
\begin{equation*}
v^{\lambda}_i = \frac{S_{\Lambda_i \lambda}}{S_{0 \lambda}} = \func{\chi_i}{-2 \pi \mathfrak{i} \frac{\lambda + \rho}{k + \Cox^{\vee}}},
\end{equation*}
where the second equality follows readily from Weyl's character formula and \eqnref{eqnKacPeterson}. It follows that the fusion potentials should have critical points precisely when the characters are evaluated at $\xi_{\lambda} = -2 \pi \mathfrak{i} \brac{\lambda + \rho} / \brac{k + \Cox^{\vee}}$, for $\lambda \in \affine{\group{P}}_k$. In fact, the functions $\varkappa_i$ defined by
\begin{equation*}
\func{\varkappa_i}{\lambda} = \func{\chi_{i}}{-2 \pi \mathfrak{i} \frac{\lambda + \rho}{k + \Cox^{\vee}}} = \sum_{\mu \in P_{\Lambda_i}} e^{-2 \pi \mathfrak{i} \bilin{\mu}{\lambda + \rho} / \brac{k + \Cox^{\vee}}}
\end{equation*}
are invariant under the shifted action of the affine Weyl group $\affine{\group{W}}_k$. Thus, the potentials should have critical points when evaluated at $\chi_i = \func{\varkappa_i}{\lambda}$, for any $\lambda \in \group{P}$ which is \emph{not} on a shifted alcove boundary.
We denote the gradient operations with respect to the fundamental characters $\chi_i$ and the Dynkin labels $\lambda_j$ by $\nabla_{\chi}$ and $\nabla_{\lambda}$ respectively, and the jacobian matrix of the functions $\varkappa_i$ with respect to the $\lambda_j$ by $J$. From the chain rule, it follows that if the potential has a critical point with respect to $\lambda$ at which $J$ is non-singular, then this is also a critical point with respect to the fundamental characters. It is therefore necessary to determine when $J$ becomes singular.
Explicit calculation shows that the jacobian, as a function on the weight space, satisfies
\begin{equation} \label{eqnDetJw}
\func{J}{\func{w}{\nu}} = \func{J}{\nu} w,
\end{equation}
hence $\det J$ is anti-invariant under the Weyl group $\group{W}$ (here, $w$ on the right hand side refers to the matrix representation of $w$ with respect to the basis of fundamental weights). It is therefore a multiple of the primitive anti-invariant element \cite{BouLie2}, and by comparing leading terms, we arrive at
\begin{equation*}
\det J = \brac{\frac{-2 \pi \mathfrak{i}}{k + \Cox^{\vee}}}^r \frac{1}{\abs{\group{P} / \group{Q}^{\vee}}} \prod_{\alpha \in \Delta_+} \brac{e^{\alpha / 2} - e^{-\alpha / 2}},
\end{equation*}
where $\group{Q}^{\vee}$ is the coroot lattice and $\Delta_+$ are the positive roots of $\alg{g}$ (explicit details may be found in \cite{RidPhD}). Evaluating at $-2 \pi \mathfrak{i} \brac{\lambda + \rho} / \brac{k + \Cox^{\vee}}$, it follows that the jacobian is singular precisely when
\begin{equation*}
\prod_{\alpha \in \Delta_+} \sin \sqbrac{\pi \frac{\bilin{\alpha}{\lambda + \rho}}{k + \Cox^{\vee}}} = 0.
\end{equation*}
That is, when $\lambda$ is on the boundary of a shifted affine alcove. Therefore, these boundaries are the only places where a potential may have critical points with respect to $\lambda$ which need not be critical points with respect to the $\chi_i$.
Evaluating the potentials, \eqnref{eqnFusPotSUSp}, as above gives
\begin{equation*}
\func{V_{k + \Cox^{\vee}}}{\func{\varkappa_1}{\lambda} , \ldots , \func{\varkappa_r}{\lambda}} = \frac{1}{k + \Cox^{\vee}} \sum_{\mu \in P_{\Lambda_1}} e^{-2 \pi \mathfrak{i} \bilin{\mu}{\lambda + \rho}} = \frac{1}{k + \Cox^{\vee}} \func{\chi_1}{-2 \pi \mathfrak{i} \brac{\lambda + \rho}}.
\end{equation*}
Note that the level dependence becomes quite trivial. We now determine the critical points of these potentials with respect to the Dynkin labels $\lambda_j$.
\begin{description}
\item[$\func{\group{Sp}}{2r}$]
The $2r$ weights of the defining representation are the $\varepsilon_j$ and their negatives. The potentials therefore take the form
\begin{equation*}
\func{V_{k+\Cox^{\vee}}}{-2 \pi \mathfrak{i} \frac{\lambda + \rho}{k + \Cox^{\vee}}} = \frac{2}{k + \Cox^{\vee}} \sum_{j=1}^r \cos \sqbrac{2 \pi \bilin{\varepsilon_j}{\lambda + \rho}}.
\end{equation*}
Critical points therefore occur when
\begin{equation*}
\sum_{j=1}^r \bilin{\Lambda_i}{\varepsilon_j} \sin \sqbrac{2 \pi \bilin{\varepsilon_j}{\lambda + \rho}} = 0,
\end{equation*}
for each $i = 1 , \ldots , r$. The $\bilin{\Lambda_i}{\varepsilon_j}$ form the entries of a square matrix which is easily seen to be invertible, as $\varepsilon_j = \frac{1}{2} \brac{\alpha_j^{\vee} + \ldots + \alpha_r^{\vee}}$ \cite{BouLie2}. We therefore have critical points precisely when
\begin{equation*}
\sin \sqbrac{2 \pi \bilin{\varepsilon_j}{\lambda + \rho}} = \sin \sqbrac{\pi \brac{\lambda_j + \rho_j + \ldots + \lambda_r + \rho_r}} = 0,
\end{equation*}
for all $j = 1 , \ldots , r$. It follows that $\lambda_j + \ldots + \lambda_r \in \mathbb{Z}$ for each $j = 1 , \ldots , r$, hence $\lambda \in \group{P}$.
\item[$\func{\group{SU}}{r+1}$]
In this case, the $r+1$ weights of the defining representation are the $\varepsilon_j$, but we have the constraint $\varepsilon_1 + \ldots + \varepsilon_{r+1} = 0$. Finding the critical points on the weight space is a constrained optimisation problem in $\mathbb{R}^{r+1}$, so we add a Lagrange multiplier $\Omega$ to the potential:
\begin{equation*}
\func{\widetilde{V}_{k+\Cox^{\vee}}}{-2 \pi \mathfrak{i} \frac{\lambda + \rho}{k + \Cox^{\vee}}} = \frac{1}{k + \Cox^{\vee}} \sum_{j=1}^{r+1} e^{-2 \pi \mathfrak{i} \bilin{\varepsilon_j}{\lambda + \rho}} + \Omega \bilin{\lambda}{\varepsilon_1 + \ldots + \varepsilon_{r+1}}.
\end{equation*}
It is now straightforward to show that the critical points are again $\lambda \in \group{P}$, so we leave this as an exercise for the reader.
\end{description}
So, for both $\func{\group{SU}}{r+1}$ and $\func{\group{Sp}}{2r}$, the critical points with respect to $\lambda$ of the potentials of \eqnref{eqnFusPotSUSp} coincide with the weight lattice $\group{P}$. Every integral weight which is not on a shifted affine alcove boundary therefore corresponds to a critical point with respect to the fundamental characters (since $J$ is non-singular there). To conclude that the critical points of the potentials coincide with the points of the corresponding fusion varieties, we therefore need to exclude the possibility that an integral weight on a shifted affine alcove boundary can correspond to a critical point with respect to the fundamental characters. This follows readily from a study of the determinant of the hessian matrix $H_{\lambda} = \brac{\parDD{V_{k+\Cox^{\vee}}}{\lambda_i}{\lambda_j}}$ of the potentials at these points, whose computation we now turn to.
\begin{description}
\item[$\func{\group{SU}}{r+1}$]
Here (indeed, for any simply-laced group), $\group{P}$ coincides with the dual of the root lattice. Thus, $\lambda \in \group{P}$ implies that $\bilin{\mu}{\lambda + \rho} = \bilin{\Lambda_1}{\lambda + \rho} \pmod{1}$ for all $\mu \in P_{\Lambda_1}$. It follows that
\begin{align*}
\brac{H_{\lambda}}_{ij} &= \frac{-4 \pi^2}{k+\Cox^{\vee}} \sum_{\mu \in P_{\Lambda_1}} \bilin{\mu}{\Lambda_i} \bilin{\mu}{\Lambda_j} e^{-2 \pi \mathfrak{i} \bilin{\mu}{\lambda + \rho}} \\
&= \frac{-4 \pi^2}{k+\Cox^{\vee}} e^{-2 \pi \mathfrak{i} \bilin{\Lambda_1}{\lambda + \rho}} I_{\Lambda_1} \bilin{\Lambda_i}{\Lambda_j},
\end{align*}
where $I_{\Lambda_1}$ is the Dynkin index of the irreducible representation of highest weight $\Lambda_1$. Thus,
\begin{equation*}
\det H_{\lambda} = \brac{\frac{-4 \pi^2 I_{\Lambda_1}}{k+\Cox^{\vee}}}^r \frac{e^{-2 \pi \mathfrak{i} r \bilin{\Lambda_1}{\lambda + \rho}}}{\abs{\group{P} / \group{Q}^{\vee}}} \neq 0,
\end{equation*}
when $\lambda \in \group{P}$.
\item[$\func{\group{Sp}}{2r}$]
The weights of $P_{\Lambda_1}$ take the form $\pm \varepsilon_{\ell} = \pm \frac{1}{2} \brac{\alpha_{\ell}^{\vee} + \ldots + \alpha_r^{\vee}}$, for $\ell = 1 , 2 , \ldots , r$, so $\bilin{\varepsilon_{\ell}}{\Lambda_i} \bilin{\varepsilon_{\ell}}{\Lambda_j} = \frac{1}{4}$ if $i \geqslant \ell$ and $j \geqslant \ell$, and $0$ otherwise. Computing the hessian as before gives
\begin{equation*}
\brac{H_{\lambda}}_{ij} = \frac{-2 \pi^2}{k+\Cox^{\vee}} \sum_{\ell = 1}^{\min \set{i , j}} \cos \sqbrac{\pi \brac{\lambda_{\ell} + \ldots + \lambda_r + r - \ell + 1}}.
\end{equation*}
Elementary row operations now suffice to compute
\begin{equation*}
\det H_{\lambda} = \brac{\frac{-2 \pi^2}{k+\Cox^{\vee}}}^r \prod_{\ell = 1}^r \cos \sqbrac{\pi \brac{\lambda_{\ell} + \ldots + \lambda_r + r - \ell + 1}},
\end{equation*}
so again $\det H_{\lambda} \neq 0$ on the weight lattice.
\end{description}
Denote the hessian matrix with respect to the $\chi_i$ of the potentials by $H_{\chi}$. Then, from
\begin{equation*}
\parDD{V_{k+\Cox^{\vee}}}{\lambda_i}{\lambda_j} = \sum_{s,t} \parD{\chi_s}{\lambda_i} \parDD{V_{k+\Cox^{\vee}}}{\chi_s}{\chi_t} \parD{\chi_t}{\lambda_j} + \sum_{\ell} \parD{V_{k+\Cox^{\vee}}}{\chi_{\ell}} \parDD{\chi_{\ell}}{\lambda_i}{\lambda_j},
\end{equation*}
we see that
\begin{equation*}
H_{\lambda} = \transpose{J} H_{\chi} J \qquad \text{when $\nabla_{\chi} V_{k+\Cox^{\vee}} = 0$.}
\end{equation*}
It follows that at the critical points of the potential with respect to the $\chi_i$,
\begin{equation} \label{eqnHessians}
\det H_{\lambda} = \brac{\det J}^2 \det H_{\chi}.
\end{equation}
Now, we have just demonstrated that $\det H_{\lambda} \neq 0$ on the weight lattice, but we know that $\det J = 0$ on the shifted affine alcove boundaries. As $\det H_{\chi}$ is a polynomial (hence finite-valued), this forces the conclusion that any integral weight lying on a shifted affine alcove boundary is \emph{not} a critical point of the potential with respect to the $\chi_i$. Of course, this is exactly what we wanted to show.
To summarise, we have shown that the ideal generated by the derivatives of the potentials given in \eqnDref{eqnFusPotSU}{eqnFusPotSp} vanishes precisely on the fusion variety. To complete the proof (over $\mathbb{C}$) that these potentials describe the fusion ideal $\fusionideal{k}{\mathbb{C}}$, we need to show that this ideal is radical. Happily, this follows immediately from \eqnref{eqnHessians} and some standard multiplicity theory, specifically the theory of \emph{Milnor numbers} \cite{CoxUsi,MilSin}: The ideal generated by the derivatives of a potential is radical if and only if the hessian of the potential is non-singular at each point of the corresponding (zero-dimensional) variety. Since $H_{\lambda}$ and $J$ are non-singular at the points of the fusion variety, $H_{\chi}$ is non-singular there by \eqnref{eqnHessians}, and we are done. The ideals are radical, so the potentials given by \eqnDref{eqnFusPotSU}{eqnFusPotSp} correctly describe the fusion algebras of $\func{\group{SU}}{r+1}$ and $\func{\group{Sp}}{2r}$ (respectively).
\subsection{A Class of Candidate Potentials} \label{secGenPot}
In searching for fusion potentials appropriate for the Wess-Zumino-Witten models over the other (simply-connected) simple groups $\group{G}$, an obvious class of potentials to consider is those of the form (compare \eqnref{eqnFusPotSUSp})
\begin{equation} \label{eqnGenPot}
V_{k+\Cox^{\vee}}^{\Gamma} = \frac{1}{k+\Cox^{\vee}} \sum_{\mu \in \Gamma} e^{\brac{k+\Cox^{\vee}} \mu}.
\end{equation}
Here, $\Gamma$ is a finite $\group{W}$-invariant set of integral weights. This ensures that these potentials are polynomials in the fundamental characters with rational coefficients. Indeed, the derivatives of such polynomials have integral coefficients, as may be seen by differentiating the generating function
\begin{equation*}
\func{V^{\Gamma}}{t} = \sum_{m=1}^{\infty} \brac{-1}^{m-1} V_m^{\Gamma} t^m = \log \sqbrac{\prod_{\mu \in \Gamma} \brac{1 + e^{\mu} t}}.
\end{equation*}
In this section, we will show (with the aid of an example) that the fusion algebras of these other Wess-Zumino-Witten models are not described by potentials from this class\footnote{To be precise, we will prove that the potential cannot take the form of \eqnref{eqnGenPot} for all levels, unless $\group{G}$ is $\func{\group{SU}}{r+1}$ or $\func{\group{Sp}}{2r}$.}. For our example, we choose the exceptional group $\group{G}_2$ because its weight space is easily visualised. Specifically, we consider the two potentials obtained from \eqnref{eqnGenPot} by taking $\Gamma$ to be the Weyl orbit $\func{\group{W}}{\Lambda_i}$ of a fundamental weight. One might prefer to take the potentials based on the weights of the fundamental representations, but this leads to more difficult computations.
As in \secref{secFusPotVerify}, we evaluate these potentials on the weight space (at $\xi_{\lambda}$). It is extremely important to realise that as functions on the weight space, the potentials are invariant under the shifted action of the affine Weyl groups $\affine{\group{W}}_k$ \emph{for all $k$} (because the level dependence is essentially trivial). We can therefore restrict to computing the critical points in a fundamental alcove at (effective) level $\kappa \equiv k + \Cox^{\vee} = 1$ (a truly fundamental domain for the periodicity of the potentials). The results are shown in \figref{figG2CritPts}. It is immediately evident that in contrast with the $\func{\group{SU}}{r+1}$ and $\func{\group{Sp}}{2r}$ fusion potentials, these $\group{G}_2$ potentials have critical points (with respect to the Dynkin labels $\lambda_i$) which include, but are not limited to, the weight lattice.
\psfrag{0}{$0$}
\psfrag{L1/2}{$\Lambda_1 / 2$}
\psfrag{L1/3}{$\Lambda_1 / 3$}
\psfrag{L2}{$\Lambda_2$}
\psfrag{L2/2}{$\Lambda_2 / 2$}
\psfrag{V1}[][]{$V_m^{\func{\group{W}}{\Lambda_1}}$}
\psfrag{V2}[][]{$V_m^{\func{\group{W}}{\Lambda_2}}$}
\begin{center}
\begin{figure}
\includegraphics[width=10cm]{critptsG2.eps}
\caption{The (shifted) critical points $\lambda + \rho$ of the potentials $V_{k+\Cox^{\vee}}^{\func{\group{W}}{\Lambda_1}}$ and $V_{k+\Cox^{\vee}}^{\func{\group{W}}{\Lambda_2}}$ for $\group{G}_2$ as a function of the weight space. (Our convention is that $\Lambda_1$ is the highest weight of the adjoint representation.)} \label{figG2CritPts}
\end{figure}
\end{center}
\vspace{-\baselineskip}
These non-integral critical points are the crux of the matter. When these critical points lie on a shifted (level-$k$) alcove boundary, we saw in \secref{secFusPotVerify} that they need not correspond to genuine critical points (with respect to the fundamental characters). However, any critical point in the interior of a shifted alcove is necessarily a critical point with respect to the fundamental characters, and Gepner's characterisation of the fusion variety requires these to be integral. Unfortunately, at any given level $k > 0$, the invariance of the critical points under $\affine{\group{W}}_{k'}$ for all $k'$ means that there will always be non-integral critical points in the interior of the alcoves (for $k$ sufficiently large). This is illustrated in \figref{figG2CritPtsk=1} for the potential $V_5^{\func{\group{W}}{\Lambda_1}}$ (corresponding to level $k = 1$). It follows that the potentials based on the Weyl orbits of the $\group{G}_2$ fundamental weights do not describe the fusion variety.
\psfrag{0}{$0$}
\psfrag{L1}{$\Lambda_1$}
\psfrag{L2}{$\Lambda_2$}
\begin{center}
\begin{figure}
\includegraphics[height=8cm]{critptsG2alcove.eps}
\caption{The critical points $\lambda$ of the potential $V_{k+\Cox^{\vee}}^{\func{\group{W}}{\Lambda_2}}$ for $\group{G}_2$ in the shifted fundamental alcove at level $k = 1$. The white points denote those in the interior which do not belong to the weight lattice.} \label{figG2CritPtsk=1}
\end{figure}
\end{center}
\vspace{-\baselineskip}
We can, of course, consider potentials $V_{k+\Cox^{\vee}}^{\Gamma}$ based on more complicated $\group{W}$-invariant sets $\Gamma$. However, when evaluating on the weight space, any such potential is just a $\group{W}$-invariant linear combination of formal exponentials of integral weights, and so is a polynomial in the potentials $V_{k+\Cox^{\vee}}^{\func{\group{W}}{\Lambda_1}}$ and $V_{k+\Cox^{\vee}}^{\func{\group{W}}{\Lambda_2}}$ considered before. It follows now from the chain rule for differentiation that if $\lambda + \rho$ is a common critical point of all the $V_{k+\Cox^{\vee}}^{\func{\group{W}}{\Lambda_i}}$, then it is also a critical point of $V_{k+\Cox^{\vee}}^{\Gamma}$. From \figref{figG2CritPts}, we see that any potential $V_{k+\Cox^{\vee}}^{\Gamma}$ for $\group{G}_2$ will have critical points at non-integral weights, and so will not correctly describe the fusion variety.
The situation is similarly bleak for the other simple groups because any potential of the form $V_{k+\Cox^{\vee}}^{\Gamma}$ will have (shifted) critical points at the vertices of the affine alcoves (at all levels). We will demonstrate this claim shortly. What it implies is that the only time a potential of this form stands a chance of describing the fusion variety is when the alcove vertices are integral (at all levels). This only happens when the comarks of the Lie group are all unity, which is only the case for $\group{G} = \func{\group{SU}}{r+1}$ and $\func{\group{Sp}}{2r}$.
Let us finish with the promised demonstration. Our earlier remarks show that it is sufficient to consider the potentials $V_m^{P_{\Lambda_i}}$, $i = 1 , \ldots , r$. We will show that these always have critical points (with respect to $\lambda$) when $\lambda + \rho$ is the vertex of an affine alcove. Identifying $m$ with $k+\Cox^{\vee}$, the condition for $V_m^{P_{\Lambda_i}}$ to have a critical point is just that $\func{J_{ij}}{-2 \pi \mathfrak{i} \brac{\lambda + \rho}} = 0$ for each $j$. We therefore need to show that $\func{J}{-2 \pi \mathfrak{i} \nu} = 0$ whenever $\nu$ is an alcove vertex.
We rewrite \eqnref{eqnDetJw} in terms of the $i^{\text{th}}$ row of $J$, $\nabla_{\lambda} \chi_i$:
\begin{equation*}
\func{\nabla_{\lambda} \chi_i}{-2 \pi \mathfrak{i} \func{w}{\nu}} = \func{\nabla_{\lambda} \chi_i}{-2 \pi \mathfrak{i} \nu} w.
\end{equation*}
Here $w$ (on the right hand side) denotes the matrix representing $w$ with respect to the basis of fundamental weights. We will treat the row vector $\func{\nabla_{\lambda} \chi_i}{-2 \pi \mathfrak{i} \nu}$ as an element of the dual of the weight space (the Cartan subalgebra).
We can also restrict our attention to the fundamental alcove vertices, by $\affine{\group{W}}$-invariance of the characters. If $\nu = 0$, then $\nu$ is fixed by every $w \in \group{W}$, so $\func{\nabla_{\lambda} \chi_i}{-2 \pi \mathfrak{i} \nu}$ is a row vector fixed by every $w \in \group{W}$. Thus, $\func{\nabla_{\lambda} \chi_i}{0}$ is the zero vector (for each $i$), verifying our claim for this vertex (and its $\affine{\group{W}}$-images).
The other fundamental alcove vertices have the form $\nu = \Lambda_j / a_j^{\vee}$, where $a_j^{\vee}$ is the $j^{\text{th}}$ comark of $\alg{g}$. As $\nu$ is invariant under all the simple Weyl reflections except $w_j$, $\func{\nabla_{\lambda} \chi_i}{-2 \pi \mathfrak{i} \nu}$ is also invariant under all these simple reflections, hence $\func{\nabla_{\lambda} \chi_i}{-2 \pi \mathfrak{i} \nu}$ is orthogonal to every simple root except $\alpha_j$. But, $\nu$ is fixed by the affine reflection about the hyperplane $\bilin{\mu}{\theta} = 1$. This reflection has the form $\func{\affine{w}}{\mu} = \func{w_{\theta}}{\mu} + \theta$, where $w_{\theta} \in \group{W}$ is the Weyl reflection associated with the highest root $\theta$. Hence, using the invariance of the characters under translations in $\group{Q}^{\vee}$,
\begin{equation*}
\func{\nabla_{\lambda} \chi_i}{-2 \pi \mathfrak{i} \nu} = \func{\nabla_{\lambda} \chi_i}{-2 \pi \mathfrak{i} \brac{\func{w_{\theta}}{\nu} + \theta}} = \func{\nabla_{\lambda} \chi_i}{-2 \pi \mathfrak{i} \func{w_{\theta}}{\nu}} = \func{\nabla_{\lambda} \chi_i}{-2 \pi \mathfrak{i} \nu} w_{\theta}.
\end{equation*}
It follows now that $\func{\nabla_{\lambda} \chi_i}{-2 \pi \mathfrak{i} \nu}$ is also orthogonal to $\theta$. But, $\theta$ and the simple roots, excepting $\alpha_j$, together constitute a basis of the weight space (as the mark $a_j$ never vanishes). Thus, $\func{\nabla_{\lambda} \chi_i}{-2 \pi \mathfrak{i} \nu}$ is again the zero vector, verifying our claim for all the vertices of the fundamental alcove.
\section{Presentations of Fusion Rings} \label{secFusRing}
We now turn to the study of fusion rings over $\mathbb{Z}$. Given the results of \secref{secGenPot}, we introduce a characterisation of the fusion ideal $\fusionideal{k}{\mathbb{Z}}$ for general Wess-Zumino-Witten models which makes no mention of potentials. We then analyse this characterisation in the cases of $\func{\group{SU}}{r+1}$ and $\func{\group{Sp}}{2r}$, and show that it can be reduced to recover the potentials of \eqnDref{eqnFusPotSU}{eqnFusPotSp}. We would like to emphasise that this constitutes a derivation of these fusion potentials over $\mathbb{Z}$, and not an \emph{a posteriori} verification over $\mathbb{C}$. In \secref{secFusSpin}, we will apply this reduction to $\func{\group{Spin}}{2r+1}$.
\subsection{A General Characterisation} \label{secGrobner}
We begin with the simple observation that given any weight $\lambda$ and $\affine{w} \in \affine{\group{W}}_k$, we have
\begin{equation} \label{eqnObvious}
\chi_{\lambda} - \det \affine{w} \ \chi_{\affine{w} \cdot \lambda} \in \fusionideal{k}{\mathbb{Z}}.
\end{equation}
(The definition of character has been extended to non-dominant weights by Weyl's character formula.) This follows easily from Gepner's characterisation of the fusion algebra, \propref{propGepner} (and the remarks which follow it). Since the fusion ideal is dividing (\secref{secIntro}), it follows that $\chi_{\lambda} \in \fusionideal{k}{\mathbb{Z}}$ whenever $\lambda$ is on a shifted affine alcove boundary.
Let $L_{\lambda}$ denote the irreducible representation of $\group{G}$ of highest weight $\lambda$. Letting $\lambda_i$ denote the Dynkin labels of the weight $\lambda$, it follows from the familiar properties of the representation ring that $\lambda$ is the highest weight of the representation $L_{\Lambda_1}^{\otimes \lambda_1} \otimes \cdots \otimes L_{\Lambda_r}^{\otimes \lambda_r}$. As a polynomial in the character ring, $\polyring{\mathbb{Z}}{\chi_1 , \ldots , \chi_r}$, we see that the character $\chi_{\lambda}$ has the form
\begin{equation*}
\chi_{\lambda} = \chi_1^{\lambda_1} \cdots \chi_r^{\lambda_r} - \ldots
\end{equation*}
where the omitted terms correspond, in a sense, to lower weights which we regard as being of lesser importance. Our strategy now is to make this lack of importance precise by introducing a monomial ordering on the character ring such that the leading term (\textsc{lt}) of $\chi_{\lambda}$ is precisely $\LT{\chi_{\lambda}} = \chi_1^{\lambda_1} \cdots \chi_r^{\lambda_r}$. Of course, we are studying fusion, so we also want to assign (relative) importance to characters according to whether the associated weight is on a shifted affine alcove boundary or not. In particular, we should distinguish weights on the boundary $\bilin{\lambda}{\theta} = k+1$ from those inside the fundamental alcove $\bilin{\lambda}{\theta} \leqslant k$.
Happily, these requirements can both be satisfied by defining a \emph{monomial ordering} $\prec$ on the character ring, $\polyring{\mathbb{Z}}{\chi_1 , \ldots , \chi_r}$, by
\begin{equation*}
\chi_1^{\lambda_1} \cdots \chi_r^{\lambda_r} \prec \chi_1^{\mu_1} \cdots \chi_r^{\mu_r} \qquad \text{if and only if}
\end{equation*}
\begin{align*}
\bilin{\lambda}{\theta} &< \bilin{\mu}{\theta}, & &\text{or} & & & & & & \\
\bilin{\lambda}{\theta} &= \bilin{\mu}{\theta} & &\text{and} & \bilin{\lambda}{\rho} &< \bilin{\mu}{\rho}, & &\text{or} & & \\
\bilin{\lambda}{\theta} &= \bilin{\mu}{\theta} & &\text{and} & \bilin{\lambda}{\rho} &= \bilin{\mu}{\rho} & &\text{and} & \chi_1^{\lambda_1} \cdots \chi_r^{\lambda_r} &\prec' \chi_1^{\mu_1} \cdots \chi_r^{\mu_r},
\end{align*}
where $\prec'$ is any other monomial ordering, lexicographic for definiteness. This is an example of a weight order \cite{CoxIde} (and is therefore a genuine monomial ordering).
We demonstrate that $\LT{\chi_{\lambda}}$ is indeed $\chi_1^{\lambda_1} \cdots \chi_r^{\lambda_r}$. This proceeds inductively on the height, as it is obvious when $\lambda$ is zero or a fundamental weight. We decompose $L_{\Lambda_1}^{\otimes \lambda_1} \otimes \cdots \otimes L_{\Lambda_r}^{\otimes \lambda_r}$ into irreducible representations, so that
\begin{equation*}
\chi_1^{\lambda_1} \cdots \chi_r^{\lambda_r} = \chi_{\lambda} + \sum_{\mu} c_{\mu} \chi_{\mu},
\end{equation*}
where the $\mu$ are all of lower height than $\lambda$: $\bilin{\mu}{\rho} < \bilin{\lambda}{\rho}$. By induction, $\LT{\chi_{\lambda}}$ is the greatest (under $\prec$) of $\chi_1^{\lambda_1} \cdots \chi_r^{\lambda_r}$ and the monomials $- c_{\mu} \chi_1^{\mu_1} \cdots \chi_r^{\mu_r}$. Now, since each $\mu$ is a weight of $L_{\Lambda_1}^{\otimes \lambda_1} \otimes \cdots \otimes L_{\Lambda_r}^{\otimes \lambda_r}$, $\mu = \lambda - \sum_i m_i \alpha_i$, where the $m_i$ are non-negative integers. It follows that $\bilin{\mu}{\theta} \leqslant \bilin{\lambda}{\theta}$ since the Dynkin labels of $\theta$ are never negative. But, in the definition of $\prec$, ties in $\bilin{\cdot}{\theta}$ are broken by height, hence $\chi_1^{\lambda_1} \cdots \chi_r^{\lambda_r}$ is the greatest of the monomials (under $\prec$) as required.
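As a simple example, for $\func{\group{SU}}{3}$ one has $\chi_1 \chi_2 = \chi_{\Lambda_1 + \Lambda_2} + 1$, so $\chi_{\Lambda_1 + \Lambda_2} = \chi_1 \chi_2 - 1$ and indeed $\LT{\chi_{\Lambda_1 + \Lambda_2}} = \chi_1 \chi_2$; the subtracted monomial $1$ corresponds to the weight $0$, for which $\bilin{0}{\theta} = 0 < 2 = \bilin{\Lambda_1 + \Lambda_2}{\theta}$.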
Consider now the ideal $\ideal{\LT{\fusionideal{k}{\mathbb{Z}}}}$ generated by the leading terms (with respect to $\prec$) of the polynomials in the fusion ideal. Since the fusion ring is freely generated (as a $\mathbb{Z}$-module) by (the cosets of) the characters of the weights in $\affine{\group{P}}_k$, the leading terms $\chi_1^{\lambda_1} \cdots \chi_r^{\lambda_r}$, with $\bilin{\lambda}{\theta} \leqslant k$ must be the only monomials not in $\ideal{\LT{\fusionideal{k}{\mathbb{Z}}}}$. That is, $\ideal{\LT{\fusionideal{k}{\mathbb{Z}}}}$ is freely generated as an abelian group by the set of monomials $\mathcal{M} = \set{\chi_1^{\lambda_1} \cdots \chi_r^{\lambda_r} \colon \bilin{\lambda}{\theta} > k}$.
As an ideal, it is now easy to see that $\ideal{\LT{\fusionideal{k}{\mathbb{Z}}}}$ is generated by the \emph{atomic} monomials of $\mathcal{M}$, where the atomic monomials are defined to be those which \emph{cannot} be expressed as the product of a fundamental character and a monomial from $\mathcal{M}$. Equivalently, atomic monomials are those corresponding to weights from which one cannot subtract any fundamental weight and still remain in the set of weights corresponding to $\mathcal{M}$.
It should be clear that every weight $\lambda$ with $\bilin{\lambda}{\theta} = k+1$ corresponds to an atomic monomial. In fact, for $\func{\group{SU}}{r+1}$ and $\func{\group{Sp}}{2r}$, these are all the atomic monomials, as the comarks are $a_i^{\vee} = 1$ (so if $\bilin{\mu}{\theta} > k+1$, one can always subtract a fundamental weight from $\mu$ yet remain in $\mathcal{M}$). For other groups, it will generally be necessary to include other monomials. For example, $a_1^{\vee} = 2$ for $\group{G}_2$, so it follows that when the level $k$ is even, the monomial $\chi_1^{\brac{k+2} / 2}$ is also atomic (this is illustrated in \figref{figG2AtomMon}).
\psfrag{level}[][]{$\bilin{\lambda}{\theta} = k+1$}
\psfrag{k even}[][]{$k$ even}
\psfrag{k odd}[][]{$k$ odd}
\begin{center}
\begin{figure}
\includegraphics[width=13cm]{G2monomials.eps}
\caption{The weights corresponding to the atomic monomials for the ideal $\ideal{\LT{\fusionideal{k}{\mathbb{Z}}}}$ associated with $\group{G}_2$ at even and odd level. Weights corresponding to monomials in the ideal are grey or black, the latter corresponding to atomic monomials. The arrows indicate the effect of multiplying by $\chi_1$ and $\chi_2$.} \label{figG2AtomMon}
\end{figure}
\end{center}
\vspace{-\baselineskip}
Let $\chi_1^{\lambda_1} \cdots \chi_r^{\lambda_r}$ be an atomic monomial of $\mathcal{M}$. If the associated weight $\lambda$ is on a shifted affine alcove boundary, we associate to this atomic monomial the polynomial $p_{\lambda} = \chi_{\lambda} \in \fusionideal{k}{\mathbb{Z}}$. If not, we use \eqnref{eqnObvious} to reflect $\lambda$ into the fundamental affine alcove, and take $p_{\lambda} = \chi_{\lambda} - \det \affine{w} \ \chi_{\affine{w} \cdot \lambda} \in \fusionideal{k}{\mathbb{Z}}$. In either case, we have constructed a $p_{\lambda}$ in the fusion ideal whose leading term with respect to $\prec$ is $\chi_1^{\lambda_1} \cdots \chi_r^{\lambda_r}$. Therefore,
\begin{align*}
\ideal{\LT{\fusionideal{k}{\mathbb{Z}}}} &= \ideal{\text{atomic $\chi_1^{\lambda_1} \cdots \chi_r^{\lambda_r}$ in $\mathcal{M}$}} \\
&= \ideal{\LT{p_{\lambda}} \colon \text{$\lambda$ is associated to an atomic monomial in $\mathcal{M}$}}.
\end{align*}
But, this is exactly the definition of a \emph{Gr\"{o}bner basis} for $\fusionideal{k}{\mathbb{Z}}$ \cite{CoxIde,CoxUsi}.
\begin{proposition} \label{propGrobner}
The polynomials $p_{\lambda}$ constructed above for each weight $\lambda$ associated to an atomic monomial of $\mathcal{M} = \set{\chi_1^{\lambda_1} \cdots \chi_r^{\lambda_r} \colon \bilin{\lambda}{\theta} > k}$ form a Gr\"{o}bner basis for the fusion ideal $\fusionideal{k}{\mathbb{Z}}$, with respect to the monomial ordering $\prec$. That is,
\begin{equation*}
\fusionideal{k}{\mathbb{Z}} = \ideal{p_{\lambda} \colon \text{$\lambda$ is associated to an atomic monomial in $\mathcal{M}$}}.
\end{equation*}
\end{proposition}
Note the crucial, but subtle, r\^{o}le played by the monomial ordering $\prec$. Note also that because the Gr\"{o}bner basis given has elements whose leading coefficient is unity, this presentation shows explicitly that the fusion ideal is dividing. Whilst this presentation has a nice Lie-theoretic interpretation, it is rather more cumbersome than we would wish for. Indeed, a presentation in terms of a potential would give a set of $r = \rank \group{G}$ generators for the fusion ideal (at every level $k$), whereas \propref{propGrobner} gives a set whose cardinality is of the order of $k^{r-1}$. We will therefore indicate in what follows how one can reduce the number of generators to something a bit more manageable (at least for the classical groups).
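By way of illustration, for $\func{\group{SU}}{2}$ at level $k$, the only atomic monomial of $\mathcal{M}$ is $\chi_1^{k+1}$, and the associated weight $\brac{k+1} \Lambda_1$ lies on the shifted affine alcove boundary $\bilin{\lambda}{\theta} = k+1$. \propref{propGrobner} therefore gives $\fusionideal{k}{\mathbb{Z}} = \ideal{\chi_{\brac{k+1} \Lambda_1}}$, a single generator, in accordance with the familiar $\func{\group{SU}}{2}$ fusion rules (this is the $r = 1$ case of \eqnref{eqnFusGenSU} below).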
\subsection{Deriving Fusion Potentials} \label{secDerFusPot}
We will begin with the case of $\func{\group{SU}}{r+1}$. As noted in \secref{secGrobner}, the atomic monomials of $\mathcal{M} = \set{\chi_1^{\lambda_1} \cdots \chi_r^{\lambda_r} \colon \bilin{\lambda}{\theta} > k}$ are precisely those corresponding to weights $\lambda$ with $\bilin{\lambda}{\theta} = k+1$. It follows from \propref{propGrobner} that
\begin{equation*}
\fusionideal{k}{\mathbb{Z}} = \ideal{\chi_{\lambda} \colon \bilin{\lambda}{\theta} = k+1}.
\end{equation*}
The highest root has the form $\theta = \varepsilon_1 - \varepsilon_{r+1}$, so for these weights, $k+1 = \bilin{\lambda}{\theta} = \lambda^1 - \lambda^{r+1} = \lambda^1$. Here, we write $\lambda = \sum_{j=1}^{r+1} \lambda^j \varepsilon_j$, and fix the ambiguity corresponding to $\sum_{j=1}^{r+1} \varepsilon_j = 0$ by setting $\lambda^{r+1} = 0$. We emphasise that the $\lambda^j$ are not to be confused with the Dynkin labels $\lambda_j$.
We now use the \emph{Jacobi-Trudy identity}, \eqnref{eqnJTA}, to decompose these generators of the fusion ideal into complete symmetric polynomials (denoted by $H_m$) in the $q_i$. We have
\begin{equation*}
\chi_{\lambda} =
\begin{vmatrix}
H_{\lambda^1} & H_{\lambda^2 - 1} & \cdots & H_{\lambda^r - r + 1} \\
H_{\lambda^1 + 1} & H_{\lambda^2} & \cdots & H_{\lambda^r - r + 2} \\
\vdots & \vdots & \ddots & \vdots \\
H_{\lambda^1 + r - 1} & H_{\lambda^2 + r - 2} & \cdots & H_{\lambda^r}
\end{vmatrix}
=
\begin{vmatrix}
H_{k+1} & H_{\lambda^2 - 1} & \cdots & H_{\lambda^r - r + 1} \\
H_{k+2} & H_{\lambda^2} & \cdots & H_{\lambda^r - r + 2} \\
\vdots & \vdots & \ddots & \vdots \\
H_{k+r} & H_{\lambda^2 + r - 2} & \cdots & H_{\lambda^r}
\end{vmatrix}
.
\end{equation*}
Since $H_m = \chi_{m \Lambda_1} \in \polyring{\mathbb{Z}}{\chi_1 , \ldots , \chi_r}$, expanding this determinant down the first column gives $\chi_{\lambda}$ as a $\polyring{\mathbb{Z}}{\chi_1 , \ldots , \chi_r}$-linear combination of the $H_{k+i} = \chi_{\brac{k+i} \Lambda_1}$, where $i = 1 , \ldots , r$. Therefore,
\begin{equation*}
\fusionideal{k}{\mathbb{Z}} \subseteq \ideal{\chi_{\brac{k+i} \Lambda_1} \colon i = 1 , \ldots , r}.
\end{equation*}
Conversely, we show that each $\brac{k+i} \Lambda_1$, $i = 1 , \ldots , r$, is on a shifted affine alcove boundary, hence is fixed by an affine reflection $\affine{w}$, and thus that $\chi_{\brac{k+i} \Lambda_1}$ is in the fusion ideal. This amounts to verifying that $\bilin{\brac{k+i} \Lambda_1 + \rho}{\alpha} \in \brac{k + \Cox^{\vee}} \mathbb{Z}$ for some root $\alpha$, and the reader can easily check that $\alpha = \varepsilon_1 - \varepsilon_{r+2-i}$ works.
\begin{equation} \label{eqnFusGenSU}
\fusionideal{k}{\mathbb{Z}} = \ideal{\chi_{\brac{k+i} \Lambda_1} \colon i = 1 , \ldots , r}.
\end{equation}
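For example, for $\func{\group{SU}}{3}$ the weights in question have $\lambda^1 = \lambda_1 + \lambda_2 = k+1$ and $\lambda^2 = \lambda_2$, and the Jacobi-Trudy determinant collapses to
\begin{equation*}
\chi_{\lambda} = H_{k+1} H_{\lambda_2} - H_{k+2} H_{\lambda_2 - 1}
\end{equation*}
(where $H_{-1} = 0$), exhibiting each generator of \propref{propGrobner} explicitly as a $\polyring{\mathbb{Z}}{\chi_1 , \chi_2}$-linear combination of $\chi_{\brac{k+1} \Lambda_1} = H_{k+1}$ and $\chi_{\brac{k+2} \Lambda_1} = H_{k+2}$.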
It is rather pleasing that such a simple device can reduce the number of generators from (the order of) $k^{r-1}$ to $r$. Before turning to the integration of these generators to a potential, we would like to mention one further observation that may be of interest. We consider the characters $\chi_{k \Lambda_1 + \Lambda_i}$, where $i = 1 , \ldots , r$. Expanding with the Jacobi-Trudy identity, we find that
\begin{align*}
\chi_{k \Lambda_1 + \Lambda_1} &= H_{k+1} \\
\chi_{k \Lambda_1 + \Lambda_2} &= H_1 H_{k+1} - H_{k+2} \\
\chi_{k \Lambda_1 + \Lambda_3} &= \brac{H_1^2 - H_2} H_{k+1} - H_1 H_{k+2} + H_{k+3} \\
\chi_{k \Lambda_1 + \Lambda_4} &= \brac{H_1^3 - 2 H_1 H_2 + H_3} H_{k+1} - \brac{H_1^2 - H_2} H_{k+2} + H_1 H_{k+3} - H_{k+4} \\
&\mspace{9mu} \vdots
\end{align*}
We call this the \emph{method of $1$'s} due to the line of $1$'s which appear off-diagonal in the Jacobi-Trudy expansion of these characters. These equations show (inductively) that there is another simple generating set for the fusion ideal:
\begin{equation*}
\fusionideal{k}{\mathbb{Z}} = \ideal{\chi_{k \Lambda_1 + \Lambda_i} \colon i = 1 , \ldots , r}.
\end{equation*}
This generating set is suggested by the computations of \cite{FreBra} (though not explicitly stated there) on the corresponding brane charge groups\footnote{To elaborate somewhat, the authors of \cite{FreBra} computed the brane charge group of the level $k$ $\func{\group{SU}}{r+1}$ Wess-Zumino-Witten model from the greatest common divisor of the dimensions of the irreducible representations of highest weight $k \Lambda_1 + \Lambda_i$, $i = 1 , \ldots , r$. In \cite{BouDBr02}, the brane charge group was shown to be determined by the greatest common divisor of the dimensions of any set of generators of the ideal $\fusionideal{k}{\mathbb{Z}}$ of the fusion ring. This suggests that the $\chi_{k \Lambda_1 + \Lambda_i}$ are such a set of generators, and here we have given a simple proof of this fact.}. Note that this set has the nice property of consisting entirely of characters $\chi_{\lambda}$ with $\bilin{\lambda}{\theta} = k+1$.
We now turn to the derivation of the fusion potential, \eqnref{eqnFusPotSU}. Let $E_n$ denote the $n^{\text{th}}$ elementary symmetric polynomial in the $q_i$. From the identity $\sum_m H_m t^m = \big[ \sum_n \brac{-1}^n E_n t^n \big] ^{-1}$, we can derive
\begin{equation} \label{eqnDHDE}
\parD{H_m}{E_j} = \brac{-1}^{j+1} \sum_n H_n H_{m-j-n}.
\end{equation}
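The identity \eqnref{eqnDHDE} is easily verified symbolically by treating the $E_j$ as independent variables and generating the $H_m$ from the recursion $H_m = \sum_{i \geq 1} (-1)^{i+1} E_i H_{m-i}$ implied by the generating function. The following is a minimal sketch in Python (with \texttt{sympy}); the truncation $E_j = 0$ for $j > r$ and the test values of $r$, $m$ and $j$ are arbitrary choices made only for this illustration.
\begin{verbatim}
from functools import lru_cache
import sympy as sp

r = 4                                   # keep E_1, ..., E_r and set E_j = 0 for j > r
E = [sp.Integer(1)] + list(sp.symbols(f"E1:{r + 1}"))

@lru_cache(maxsize=None)
def H(m):
    # H_m from  sum_m H_m t^m = [ sum_n (-1)^n E_n t^n ]^(-1),
    # i.e. the recursion H_m = sum_{i >= 1} (-1)^(i+1) E_i H_{m-i} with H_0 = 1.
    if m < 0:
        return sp.Integer(0)
    if m == 0:
        return sp.Integer(1)
    return sp.expand(sum((-1) ** (i + 1) * E[i] * H(m - i)
                         for i in range(1, min(m, r) + 1)))

m, j = 6, 2                             # arbitrary test values
lhs = sp.diff(H(m), E[j])
rhs = (-1) ** (j + 1) * sum(H(n) * H(m - j - n) for n in range(0, m - j + 1))
assert sp.expand(lhs - rhs) == 0        # the identity holds for this case
\end{verbatim}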
For $\func{\group{SU}}{r+1}$, $E_j = \chi_j \equiv \chi_{\Lambda_j}$ for $j = 1 , \ldots , r$, so we see that
\begin{equation*}
\brac{-1}^{i-1} \parD{H_{k+\Cox^{\vee}-i}}{\chi_j} = \brac{-1}^{i+j} \sum_n H_n H_{k+\Cox^{\vee}-i-j-n}
\end{equation*}
is symmetric in $i$ and $j$. Therefore, $\sum_i \brac{-1}^{i-1} H_{k+\Cox^{\vee}-i} \mathrm{d} \chi_i$ is a closed $1$-form, hence integrates to a potential $V_{k+\Cox^{\vee}}$ (there is no topology).
We can compute this potential using generating functions. If $\func{V}{t} = \sum_m \brac{-1}^{m-1} V_m t^m$, then
\begin{align*}
\parD{\func{V}{t}}{\chi_i} &= \sum_m \brac{-1}^{m+i} H_{m-i} t^m = \frac{t^i}{\prod_{\ell} \brac{1 + q_{\ell} t}} = \frac{t^i}{\sum_n E_n t^n} \\
&= \frac{t^i}{1 + \chi_1 t + \ldots + \chi_r t^r + t^{r+1}} \\
\Rightarrow \qquad \func{V}{t} &= \log \sqbrac{1 + \chi_1 t + \ldots + \chi_r t^r + t^{r+1}},
\end{align*}
up to a constant. This is of course \eqnref{eqnFusPotGFSU}, from which one can easily recover the fusion potential, \eqnref{eqnFusPotSU}.
We would like to emphasise once again that not only have we given a complete derivation of the fusion potential for the $\func{\group{SU}}{r+1}$ Wess-Zumino-Witten models, but we have shown that this potential describes the fusion process over $\mathbb{Z}$, rather than just over $\mathbb{C}$.
Consider now the fusion ring for $\func{\group{Sp}}{2r}$. As before, \propref{propGrobner} gives the characters $\chi_{\lambda}$ with $\bilin{\lambda}{\theta} = k+1$ as a set of generators for the fusion ideal, $\fusionideal{k}{\mathbb{Z}}$. The highest root is $\theta = 2 \varepsilon_1$, so for these characters, $k+1 = \bilin{\lambda}{\theta} = \lambda^1$ (note that $\norm{\varepsilon_i}^2 = \frac{1}{2}$). We expand the $\func{\group{Sp}}{2r}$ Jacobi-Trudy identity, \eqnref{eqnJTC}, down the first column. Noting that $H_m = \chi_{m \Lambda_1}$, this shows that the generating characters can be expressed as $\polyring{\mathbb{Z}}{\chi_1 , \ldots , \chi_r}$-linear combinations of the $r$ elements $H_{k+1}$ and $H_{k+1+i} + H_{k+1-i}$ ($i = 1 , \ldots , r-1$). Here, the $H_m$ are complete symmetric polynomials in the $q_i$ and their inverses. It is obvious that these elements belong to $\fusionideal{k}{\mathbb{Z}}$, hence
\begin{equation} \label{eqnFusGenSp}
\fusionideal{k}{\mathbb{Z}} = \ideal{\chi_{\brac{k+1} \Lambda_1} , \chi_{\brac{k+1+i} \Lambda_1} + \chi_{\brac{k+1-i} \Lambda_1} \colon i = 1 , \ldots , r-1}.
\end{equation}
Applying the method of $1$'s to these elements gives an alternative set of generators:
\begin{equation*}
\fusionideal{k}{\mathbb{Z}} = \ideal{\chi_{k \Lambda_1 + \Lambda_i} \colon i = 1 , \ldots , r}.
\end{equation*}
Deriving a potential from these generators is somewhat more cumbersome than before. For this purpose, we use the set of generators
\begin{equation*}
\set{\sum_{\ell = 0}^{r-i} H_{k+\Cox^{\vee}-i-2 \ell} \colon i = 1 , \ldots , r},
\end{equation*}
which is easily derived from those given above. From \eqnref{eqnDHDE} and the expressions for $E_n$ in terms of the $\chi_j$ \cite{FulRep}, we compute that
\begin{equation*}
\brac{-1}^{i-1} \parD{}{\chi_j} \sum_{\ell = 0}^{r-i} H_{k+\Cox^{\vee}-i-2 \ell} = \brac{-1}^{i+j} \sum_n H_n \sum_{m=0}^{r-i} \sum_{m'=0}^{r-j} H_{k+\Cox^{\vee}-n-i-j-2 \brac{m+m'}},
\end{equation*}
which is symmetric in $i$ and $j$ (indeed, this symmetry is what suggests the above generating set, as it leads to a closed $1$-form). These generators may therefore be integrated to a potential, and the derivation may be completed using generating functions as in the $\func{\group{SU}}{r+1}$ case. In this way, we recover \eqnref{eqnFusPotGFSp} and therefore the fusion potential, \eqnref{eqnFusPotSp}.
\section{Presentations for $\func{\group{Spin}}{2r+1}$} \label{secFusSpin}
We now apply the techniques of \secref{secGrobner} to the fusion rings of the Wess-Zumino-Witten models over $\func{\group{Spin}}{2r+1}$. We are not aware of any concise, representation-theoretic presentations of these rings (nor of the corresponding algebras) in the literature\footnote{In the course of preparing this section, we were made aware of a conjecture regarding the presentations of the fusion ideals of the $\func{\group{Spin}}{2r+1}$ (and $\func{\group{Spin}}{2r}$) Wess-Zumino-Witten models \cite{Boysal}. This elegant conjecture amounts to the statement that the fusion ideal at level $k$ is the radical of the ideal generated by the $\chi_{\brac{k+i} \Lambda_1}$, for $i = 1 , 2 , \ldots , \Cox^{\vee} - 1$. This is a generalisation of the $\func{\group{SU}}{r+1}$ result, \eqnref{eqnFusGenSU}. It is further conjectured that the radical of this ideal is generated by the above characters and $\chi_{k \Lambda_1 + \Lambda_r}$ ($\chi_{k \Lambda_1 + \Lambda_{r-1}}$ is also needed for $\func{\group{Spin}}{2r}$).}. We will see that the appropriate Jacobi-Trudy identities may be employed to substantially simplify the presentations given by \propref{propGrobner}, though the simplification turns out to be not quite so drastic as that found for $\func{\group{SU}}{r+1}$ and $\func{\group{Sp}}{2r}$. In particular, it seems rather doubtful that the presentations obtained are related to potentials.
Recall from \secref{secGrobner} that we can derive a generating set for the fusion ideal $\fusionideal{k}{\mathbb{Z}}$ by computing the atomic monomials of the set $\set{\chi_1^{\lambda_1} \cdots \chi_r^{\lambda_r} \colon \bilin{\lambda}{\theta} > k}$. As shown there for $\group{G}_2$, this computation depends upon the comarks $a_i^{\vee}$, which for $\func{\group{Spin}}{2r+1}$ are $1$ for $i = 1 , r$, and $2$ otherwise (we will only consider $r > 2$). The atomic monomials therefore correspond to the weights
\begin{align*}
\text{$k$ odd} &: & &\set{\lambda \colon \bilin{\lambda}{\theta} = k+1} \\
\text{$k$ even} &: & &\set{\lambda \colon \bilin{\lambda}{\theta} = k+1} \cup \set{\lambda \colon \bilin{\lambda}{\theta} = k+2 \text{ and } \lambda_1 = \lambda_r = 0}.
\end{align*}
Finding elements of $\fusionideal{k}{\mathbb{Z}}$ whose leading terms are these monomials is easy, and we deduce from \propref{propGrobner} that the fusion ring is generated by:
\begin{align} \label{eqnGrobGenB}
\begin{split}
\text{$k$ odd}: \qquad &\set{\chi_{\lambda} \colon \bilin{\lambda}{\theta} = k+1} \\
\text{$k$ even}: \qquad &\set{\chi_{\lambda} \colon \bilin{\lambda}{\theta} = k+1} \cup \set{\chi_{\lambda} + \chi_{\lambda - \theta} \colon \bilin{\lambda}{\theta} = k+2 \text{ and } \lambda_1 = \lambda_r = 0}.
\end{split}
\end{align}
We note that if $\lambda_2 = 0$, $\chi_{\lambda - \theta} = 0$.
In order to reduce the size of this generating set, we again turn to the appropriate Jacobi-Trudy identities. As noted in \appref{secJTB}, these identities distinguish between \emph{tensor} and \emph{spinor} representations (whose highest weight $\lambda$ has $\lambda_r$ even and odd, respectively). We consider first the tensor representations. The appropriate Jacobi-Trudy identity, \eqnref{eqnJTBT}, gives the irreducible characters as a determinant of an $r \times r$ matrix:
\begin{equation} \label{eqnJTBT+}
\chi_{\lambda} =
\begin{vmatrix}
H_{\lambda^1} - H_{\lambda^1 - 2} & H_{\lambda^2 - 1} - H_{\lambda^2 - 3} & \cdots & H_{\lambda^r + 1 - r} - H_{\lambda^r - 1 - r} \\
H_{\lambda^1 + 1} - H_{\lambda^1 - 3} & H_{\lambda^2} - H_{\lambda^2 - 4} & \cdots & H_{\lambda^r + 2 - r} - H_{\lambda^r - 2 - r} \\
\vdots & \vdots & \ddots & \vdots \\
H_{\lambda^1 + r - 1} - H_{\lambda^1 - r - 1} & H_{\lambda^2 + r - 2} - H_{\lambda^2 - r - 2} & \cdots & H_{\lambda^r} - H_{\lambda^r - 2r}
\end{vmatrix}
.
\end{equation}
Here, $\lambda^j$ denotes the components of $\lambda$ with respect to the usual orthonormal basis $\varepsilon_j$ of the weight space, and $H_m$ denotes the $m^{\text{th}}$ complete symmetric polynomial in the $q_i = \func{\exp}{\varepsilon_i}$, their inverses, and $1$.
How this treatment differs from the analysis of \secref{secDerFusPot}, and is thereby significantly complicated, is that $\theta = \varepsilon_1 + \varepsilon_2$, so $\bilin{\lambda}{\theta} = \lambda_1 + \lambda_2$. It follows that the elements in any single column of the Jacobi-Trudy determinant of a character $\chi_{\lambda}$ with $\bilin{\lambda}{\theta} = k+1$ will not generally belong to the fusion ideal, so expanding the determinant down a single column is pointless. Instead, we notice that the top-left $2 \times 2$ subdeterminant is the character $\chi_{\lambda^1 \varepsilon_1 + \lambda^2 \varepsilon_2}$, and that $\bilin{\lambda}{\theta} = k+1$ implies that \emph{this} subdeterminant is in $\fusionideal{k}{\mathbb{Z}}$.
This observation suggests that we must expand \eqnref{eqnJTBT+} down the first two columns. In this way, $\chi_{\lambda}$ is expressed as a $\polyring{\mathbb{Z}}{\chi_1 , \ldots , \chi_r}$-linear combination of the $2 \times 2$ determinants
\begin{equation*}
\func{\psi_{m_1 m_2}}{\lambda^1 , \lambda^2} =
\begin{vmatrix}
H_{\lambda^1 + m_1 - 1} - H_{\lambda^1 - m_1 - 1} & H_{\lambda^2 + m_1 - 2} - H_{\lambda^2 - m_1 - 2} \\
H_{\lambda^1 + m_2 - 1} - H_{\lambda^1 - m_2 - 1} & H_{\lambda^2 + m_2 - 2} - H_{\lambda^2 - m_2 - 2}
\end{vmatrix}
.
\end{equation*}
Here, $1 \leqslant m_1 < m_2 \leqslant r$ counts the $\binom{r}{2}$ choices of rows used in these subdeterminants. We have already noted that $\func{\psi_{12}}{\lambda^1 , \lambda^2} \in \fusionideal{k}{\mathbb{Z}}$ when $\lambda^1 + \lambda^2 = k+1$, so it is natural to enquire if the same is true for general $m_1$ and $m_2$.
To investigate this, we need to digress a little in order to derive a more amenable form for the $\func{\psi_{12}}{\lambda^1 , \lambda^2}$ (\eqnref{eqnGenBT} below). This derivation is an exercise in manipulating generating functions. Introducing parameters $t_1$ and $t_2$, we compute
\begin{equation} \label{eqnGFBT}
\sum_{\lambda^1 , \lambda^2 \in \mathbb{Z}} \func{\psi_{m_1 m_2}}{\lambda^1 , \lambda^2} t_1^{\lambda^1} t_2^{\lambda^2} = \sum_{\lambda^1 \in \mathbb{Z}} H_{\lambda^1} t_1^{\lambda^1} \sum_{\lambda^2 \in \mathbb{Z}} H_{\lambda^2} t_2^{\lambda^2} \Det{t_j^{j-m_i} - t_j^{j+m_i}}_{i,j = 1}^2.
\end{equation}
Denoting the determinant on the right by $A_{m_1 m_2}$, we form the generating function
\begin{equation*}
\sum_{m_1 , m_2 = 0}^{\infty} A_{m_1 m_2} z_1^{m_1} z_2 ^{m_2} = \Det{\frac{\brac{t_j^{j-1} - t_j^{j+1}} z_i}{\brac{1 - t_j z_i} \brac{1 - t_j^{-1} z_i}}}_{i,j = 1}^2.
\end{equation*}
Applying \eqnref{eqnCauchy2} to this determinant gives
\begin{align*}
\sum_{m_1 , m_2 = 0}^{\infty} A_{m_1 m_2} z_1^{m_1} z_2 ^{m_2} &= \brac{1 - t_1^2} \brac{t_2 - t_2^3} \frac{\Det{\brac{t_j + t_j^{-1}}^{2-i}} \Det{\brac{z_i + z_i^{-1}}^{j-1} z_i^2}}{\displaystyle \prod_{i,j = 1}^2 \brac{1 - t_j z_i} \brac{1 - t_j^{-1} z_i}} \\
&= -A_{12}
\begin{vmatrix}
z_1^2 & z_1^3 + z_1 \\
z_2^2 & z_2^3 + z_2
\end{vmatrix}
\prod_i \sqbrac{\sum_{m_i \in \mathbb{Z}} \func{h_{m_i}}{t_1 , t_1^{-1} , t_2 , t_2^{-1}} z_i^{m_i}},
\end{align*}
where we recognise $A_{12} = \brac{1 - t_1^2} \brac{1 - t_2^2} \brac{1 - t_1 t_2} \brac{1 - t_1^{-1} t_2}$. Here, $h_m$ denotes the $m^{\text{th}}$ complete symmetric polynomial in the $t_i$ and their inverses (to be distinguished from the $H_m$).
It follows that $A_{12}$ is a factor of $A_{m_1 m_2}$:
\begin{equation*}
A_{m_1 m_2} = A_{12}
\begin{vmatrix}
h_{m_2 - 2} & h_{m_2 - 1} + h_{m_2 - 3} \\
h_{m_1 - 2} & h_{m_1 - 1} + h_{m_1 - 3}
\end{vmatrix}
.
\end{equation*}
Fascinatingly, if we set $t_j = \func{\exp}{\eta_j}$, where $\eta_j$ denotes the usual orthogonal basis vectors for the weight space of $\func{\group{Sp}}{4}$, then comparing with \eqnref{eqnJTC} gives
\begin{equation*}
\frac{A_{m_1 m_2}}{A_{12}} = \chi_{\brac{m_2 - 2} \eta_1 + \brac{m_1 - 1} \eta_2}^{\func{\group{Sp}}{4}}.
\end{equation*}
This rather unexpected relation turns out to be extremely useful. For example, we can substitute it back into \eqnref{eqnGFBT} to recover an expression for the original determinants:
\begin{equation} \label{eqnGenBT}
\func{\psi_{m_1 m_2}}{\lambda^1 , \lambda^2} = \sum_{\mu} \chi_{\brac{\lambda^1 - \mu^1} \varepsilon_1 + \brac{\lambda^2 - \mu^2} \varepsilon_2}^{\func{\group{Spin}}{2r+1}}.
\end{equation}
Here, the sum is over the weights $\mu = \mu^1 \eta_1 + \mu^2 \eta_2$ of the irreducible $\func{\group{Sp}}{4}$-module of highest weight $\brac{m_2 - 2} \eta_1 + \brac{m_1 - 1} \eta_2$.
Recall that the fusion ideal is generated by the characters $\chi_{\lambda}$ with $\bilin{\lambda}{\theta} = k+1$ and, if $k$ is even, by the same set augmented by the $\chi_{\lambda} + \chi_{\lambda - \theta}$ with $\bilin{\lambda}{\theta} = k+2$ and $\lambda_1 = \lambda_r = 0$. We have seen that when the characters correspond to tensor representations, the generators of the first type may be expressed as a $\polyring{\mathbb{Z}}{\chi_1 , \ldots , \chi_r}$-linear combination of the $\func{\psi_{m_1 m_2}}{\lambda^1 , \lambda^2}$, with $\lambda^1 + \lambda^2 = k+1$. Since $\theta = \varepsilon_1 + \varepsilon_2$, it follows that the Jacobi-Trudy determinant for $\chi_{\lambda}$ and $\chi_{\lambda - \theta}$ will be identical in columns $3 , \ldots , r$. Therefore, the generators of the second type (which always correspond to tensor representations) may be expressed as a $\polyring{\mathbb{Z}}{\chi_1 , \ldots , \chi_r}$-linear combination of the $\func{\psi_{m_1 m_2}}{\lambda^1 , \lambda^2} + \func{\psi_{m_1 m_2}}{\lambda^1 - 1, \lambda^2 - 1}$, with $\lambda^1 + \lambda^2 = k+2$. Indeed, $\lambda_1 = 0$ implies that $\lambda^1 = \lambda^2$, so the generators of the second type can \emph{all} be expressed in terms of the elements $\func{\psi_{m_1 m_2}}{\frac{k}{2} + 1 , \frac{k}{2} + 1 } + \func{\psi_{m_1 m_2}}{\frac{k}{2} , \frac{k}{2}}$.
Consider now a single $\func{\group{Spin}}{2r+1}$-character in the sum of \eqnref{eqnGenBT}, labelled by the weight $\brac{\lambda^1 - \mu^1} \varepsilon_1 + \brac{\lambda^2 - \mu^2} \varepsilon_2$, with $\lambda^1 + \lambda^2 = k+1$. We can pair it with the character labelled by the weight $\brac{\lambda^1 + \mu^2} \varepsilon_1 + \brac{\lambda^2 + \mu^1} \varepsilon_2$, its image under the fundamental affine Weyl reflection $\affine{w}_0$. If this character is also (always) in the sum, then we can conclude that the right-hand-side of \eqnref{eqnGenBT} belongs to $\fusionideal{k}{\mathbb{Z}}$, that is $\func{\psi_{m_1 m_2}}{\lambda^1 , \lambda^2} \in \fusionideal{k}{\mathbb{Z}}$.
But this follows immediately from the fact that the transformation
\begin{equation*}
- \mu^1 \eta_1 - \mu^2 \eta_2 \longmapsto \mu^2 \eta_1 + \mu^1 \eta_2
\end{equation*}
is precisely the action of the $\func{\group{Sp}}{4}$-Weyl reflection about the (short) root $\eta_1 + \eta_2$. Since the sum in \eqnref{eqnGenBT} is over the weights of an $\func{\group{Sp}}{4}$-representation, which is invariant under this (indeed any) $\func{\group{Sp}}{4}$-Weyl reflection, it is clear that $\func{\psi_{m_1 m_2}}{\lambda^1 , \lambda^2} \in \fusionideal{k}{\mathbb{Z}}$ (when $\lambda^1 + \lambda^2 = k+1$). More generally, an almost identical argument shows that $\func{\psi_{m_1 m_2}}{\frac{k}{2} + 1 , \frac{k}{2} + 1 } + \func{\psi_{m_1 m_2}}{\frac{k}{2} , \frac{k}{2}} \in \fusionideal{k}{\mathbb{Z}}$. It follows that the generators of $\fusionideal{k}{\mathbb{Z}}$ that correspond to tensor representations can be replaced by
\begin{align*}
&\func{\psi_{m_1 m_2}}{\lambda^1 , \lambda^2}, & &\lambda^1 + \lambda^2 = k+1, \\
\text{and } &\func{\psi_{m_1 m_2}}{\frac{k}{2} + 1 , \frac{k}{2} + 1 } + \func{\psi_{m_1 m_2}}{\frac{k}{2} , \frac{k}{2}} & &\text{if $k$ is even,}
\end{align*}
where $1 \leqslant m_1 < m_2 \leqslant r$.
The story for the spinor representations ($\lambda^j$ half-integral) is much the same. Using the appropriate Jacobi-Trudy identity, \eqnref{eqnJTBS}, we find that the $\chi_{\lambda}$ are $\polyring{\mathbb{Z}}{\chi_1 , \ldots , \chi_r}$-linear combinations of the subdeterminants
\begin{equation*}
\func{\varphi_{m_1 m_2}}{\lambda^1 , \lambda^2} = \chi_r
\begin{vmatrix}
H_{\lambda^1 + m_1 - \frac{3}{2}} - H_{\lambda^1 - m_1 - \frac{1}{2}} & H_{\lambda^2 + m_1 - \frac{5}{2}} - H_{\lambda^2 - m_1 - \frac{3}{2}} \\
H_{\lambda^1 + m_2 - \frac{3}{2}} - H_{\lambda^1 - m_2 - \frac{1}{2}} & H_{\lambda^2 + m_2 - \frac{5}{2}} - H_{\lambda^2 - m_2 - \frac{3}{2}}
\end{vmatrix}
.
\end{equation*}
Constructing generating functions as before, one can prove that
\begin{equation} \label{eqnGenBS}
\func{\varphi_{m_1 m_2}}{\lambda^1 , \lambda^2} = \sum_{\nu} \chi_{\brac{\lambda^1 - \nu^1 - \frac{1}{2}} \varepsilon_1 + \brac{\lambda^2 - \nu^2 - \frac{1}{2}} \varepsilon_2 + \Lambda_r}^{\func{\group{Spin}}{2r+1}},
\end{equation}
where this sum is over the weights $\nu = \nu^1 \zeta_1 + \nu^2 \zeta_2$ of the irreducible $\func{\group{Spin}}{5}$-module of highest weight $\brac{m_2 - 2} \zeta_1 + \brac{m_1 - 1} \zeta_2$ (and the $\zeta_i$ are the usual orthonormal basis vectors for this weight space). As before, it now follows quickly from the fact that $\zeta_1 + \zeta_2$ is a root of $\func{\group{Spin}}{5}$ that $\func{\varphi_{m_1 m_2}}{\lambda^1 , \lambda^2} \in \fusionideal{k}{\mathbb{Z}}$.
These manipulations for the tensor and spinor representations finally prove that the fusion ideal has the following generators:
\begin{align}
\text{$k$ odd} &: & \fusionideal{k}{\mathbb{Z}} &= \Bigl\langle \func{\psi_{m_1 m_2}}{\lambda^1 , \lambda^2} , \func{\varphi_{m_1 m_2}}{\lambda^1 , \lambda^2} \colon \lambda^1 + \lambda^2 = k+1, \ 1 \leqslant m_1 < m_2 \leqslant r \Bigr\rangle, \notag \\
\text{$k$ even} &: & \fusionideal{k}{\mathbb{Z}} &= \Bigl\langle \func{\psi_{m_1 m_2}}{\lambda^1 , \lambda^2} , \func{\psi_{m_1 m_2}}{\textstyle \frac{k}{2} + 1 , \frac{k}{2} + 1 } + \func{\psi_{m_1 m_2}}{\textstyle \frac{k}{2} , \frac{k}{2}} , \func{\varphi_{m_1 m_2}}{\lambda^1 , \lambda^2} \Bigr. \label{eqnFusRingGenB} \\
& & & \mspace{300mu} \Bigl. \colon \lambda^1 + \lambda^2 = k+1, \ 1 \leqslant m_1 < m_2 \leqslant r \Bigr\rangle .\notag
\end{align}
Since $\lambda^1 \geqslant \lambda^2$ are integers and half-integers in the $\psi_{m_1 m_2}$ and $\varphi_{m_1 m_2}$ respectively, it follows that the number of generators in this set is of the order of $k \binom{r}{2}$. This compares favourably with the set of generators given in \eqnref{eqnGrobGenB}, whose number is of the order $k^{r-1}$, though perhaps not with the expectation that we could reduce the number of generators to $r$. Finally, we note that other sets of generators can be deduced from this one, in particular by using the method of $1$'s. We leave this as an exercise for the enthusiastic reader.
\section{Discussion and Conclusions} \label{secConc}
In this paper we have attempted to give a complete account of our understanding regarding explicit, representation-theoretic presentations of the fusion rings and algebras associated to the Wess-Zumino-Witten models over the compact, connected, simply-connected (simple) Lie groups. We have discussed presentations in terms of fusion potentials, and have provided complete proofs of the fact that there are explicitly known potentials which correctly describe the fusion \emph{algebras} of the models over $\func{\group{SU}}{r+1}$ and $\func{\group{Sp}}{2r}$. These potentials appear to have been guessed in an educated manner. We hope that our proofs will complement what has already appeared in the literature, and will be useful for subsequent studies. We have also proven that the fusion algebras of the other groups \emph{cannot} be described by potentials analogous to those known, which explains why attempts to guess these potentials have not been successful.
We recalled that it is the fusion \emph{ring}, rather than the fusion algebra, which is of physical interest in applications. Despite the fact that the fusion ring is torsion-free, we noted that a presentation for the fusion algebra need not give a presentation of the fusion ring. To overcome this, we have stated and proved a fairly elementary result (\propref{propGrobner}) giving an explicit presentation (that is easily constructed) of the fusion ring in all cases. We believe that this is the first time such a presentation has been formulated. It is in terms of (linear combinations of) irreducible characters, and so should be regarded as representation-theoretic in the strongest possible sense.
These general presentations have one rather obvious disadvantage in that the number of characters appearing is quite large. Whilst easy to write down, these presentations nevertheless contain quite a bit of complexity. However, we have seen that it is sometimes possible to express the relevant characters in terms of simpler characters, and so reduce the number of characters that appear. In particular, we have used the well-known determinantal identities for the characters of $\func{\group{SU}}{r+1}$ and $\func{\group{Sp}}{2r}$ to \emph{derive} the fusion potentials from first principles. An important corollary to our results is then that these fusion potentials correctly describe the fusion \emph{rings} of the $\func{\group{SU}}{r+1}$ and $\func{\group{Sp}}{2r}$ models.
We then extended this result to the $\func{\group{Spin}}{2r+1}$ models. The corresponding determinantal identities for the characters did not lead to as nice a simplification as before, in particular we did not end up with a potential description, but the result, \eqnref{eqnFusRingGenB}, is still relatively concise. To the best of our knowledge, this is the first rigorous representation-theoretic presentation of the fusion ideal (over $\mathbb{C}$ or $\mathbb{Z}$) for these Wess-Zumino-Witten models. Nonetheless, this presentation is not as concise as we would like for the concrete applications we have in mind. Certainly, for our motivating application to D-brane charge groups, our result allows us to write down an explicit form for this group\footnote{The charge group has the form $\mathbb{Z}_x^{2^{r-2}}$ \cite{BraTwi}, and we can determine $x$ to be the greatest common divisor of the integers obtained by evaluating the fusion ideal generators at the origin of the weight space. With respect to \eqnref{eqnFusRingGenB}, this amounts to replacing the complete symmetric polynomials $\func{H_m}{q , 1 , q^{-1}}$ by $\binom{m + 2r}{2r}$ (and then finding the greatest common divisor).}. However, we have been unable to substantially simplify this formula, so as to rigorously prove the result conjectured in \cite{BouDBr02}. We have checked that this result is numerically consistent (to high level) with the generators presented here.
We expect that this result can also be extended to the $\func{\group{Spin}}{2r}$ models. However, we have not done so for two reasons. First, as mentioned in \appref{secJTD}, the derivation of the appropriate determinantal identities requires a slightly more general approach than what we have been using. It follows that the methods we applied in analysing the $\func{\group{Spin}}{2r+1}$ case will require an analogous generalisation. However, we believe that this generalisation should follow easily from the methods used in \cite{FulRep}. Our second reason is that, as with the $\func{\group{Spin}}{2r+1}$ case, we do not expect to get as simple a presentation as we would like. We feel that the root of this is the observation that determinants are not particularly well-suited to computations when the Weyl group is not a symmetric group. A far more elegant approach would be to generalise the algebra of determinants to the other Weyl groups, and then derive ``generalised determinantal identities'' for the Lie group characters in terms of Weyl-symmetric polynomials. It would be very interesting to see if such an approach can be constructed (if it has not already been), and we envisage that it may lead to more satisfactory fusion ring presentations. We hope to return to this in the future.
\section*{Acknowledgements}
PB is financially supported by the Australian Research Council, and DR would like to thank the Australian National University for a visiting fellowship during this project. We would also like to thank Arzu Boysal, Volker Braun and Howard Schnitzer for helpful and stimulating correspondence.
\section{Introduction}
Morphisms and substitutions on words (over a finite alphabet) appear in different contexts of mathematics and theoretical computer science.
In particular in dynamical systems, the substitution dynamical systems are an important class
of symbolic dynamical systems. They have been studied extensively, for instance see~\cite{queffelec,CANT} and references therein.
We first recall some notions from combinatorics on words.
An {\em alphabet} $\A$ is a finite set and its elements are {\em letters}.
The elements of $\A^n$, for $n\geq 1$ are called {\em words on} $\A$, and $\A^0$ is the set formed by the empty word, $\varepsilon$.
Let $\A^*=\cup_{n\geq 0}\A^{n}$ be the set of finite words on $\A$.
Let $\B$ be an alphabet.
A {\em morphism} $\varphi$ on $\A$ is a map $\varphi: \A \to \B^*$.
By setting $\varphi(uv)=\varphi(u)\varphi(v)$ for $u,v\in\A^*$, the morphism $\varphi$ is extended to $\A^*$.
Hence, the map $\varphi$ is extended in a straightforward manner to $\A^{\N}$, i.e. the set of one-sided infinite sequences over $\A$.
If $\varphi$ is an endomorphism, i.e. $\B=\A$, then it is also called a {\em substitution}.
An important question is to describe the common dynamics for two morphisms $\varphi_0$ and $\varphi_1$ that have the same incidence matrix. The {\em incidence matrix} of $\varphi$ is the matrix $M_{\varphi}=(m_{ij})$,
where $m_{ij}$ is the number of occurrences of the letter $i$ in the word $\varphi(j)$.
This problem has been addressed in~\cite{sirvent,sing-sirvent,sellami}.
The technique used in~\cite{sel0,sellami} is based on the balanced pair algorithm for two substitutions having the same incidence matrix.
This algorithm is a variation of the classical balanced pair algorithm introduced by Livshits~\cite{livshits} in the context of the Pisot conjecture, for more details see for instance~\cite{sirvent-solomyak}.
We say that a substitution is {\em Pisot} if the dominant eigenvalue of the incidence matrix is a Pisot number.
In the case of Pisot substitutions, we can associate a geometrical object to the substitution, called Rauzy fractal~(\emph{cf.}~\cite{rauzy}).
Rauzy fractals play a fundamental role in the study of the Pisot conjecture.
The topological, geometrical and dynamical properties of the Rauzy fractals have been analyzed extensively, for instance see the recent survey~\cite{CANT} and references therein.
The study of the common dynamics of two Pisot morphisms is related to the intersection of their respective Rauzy fractals.
The balanced pair algorithm gives answers to this matter.
We introduce some notation that we use throughout the article:
For a word $u = u_0u_1 \cdots u_n \in \A^{n+1}$,
we set $u[i] = u_i$ for $0\leq i \leq n$.
For $\ell \leq n$, $u[{:}\ell] = u_0u_1\cdots u_{\ell-1} \in \A^\ell$.
The length of $u$ is denoted $|u|$, i.e., $|u| = n+1$.
Let $a \in \A$.
The number of occurrences of the letter $a$ in a finite word $u$ is denoted $|u|_a$.
We have $|u| = \sum_{a \in \A} |u|_a$.
Assuming that $\A$ is ordered, i.e., $\A = \{a_1,a_2,\ldots, a_k\}$, we set
\[
\parikh(u) = \left( |u|_{a_1} , |u|_{a_2}, \ldots, |u|_{a_k} \right) \in \N^k.
\]
Let $u$ and $v$ be two finite words; we say that $\left( u, v \right) $ is a \emph{balanced pair} if the number of occurrences of each letter of the alphabet in $u$ and $v$ is the same, i.e., $\parikh(u) = \parikh(v)$.
We say that a \emph{balanced pair $\left( u, v \right) $ is minimal} if
$\parikh(u[{:}\ell])\neq\parikh(v[{:}\ell])$, for $1\leq\ell < |u|$.
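These notions translate directly into code. As an illustration, a minimal Python sketch (the function names are ours) of $\parikh$ and of the test for (minimal) balanced pairs reads:
\begin{verbatim}
from collections import Counter

def parikh(word, alphabet):
    # Parikh vector of `word` with respect to the ordered alphabet.
    counts = Counter(word)
    return tuple(counts[a] for a in alphabet)

def is_balanced(u, v, alphabet):
    return parikh(u, alphabet) == parikh(v, alphabet)

def is_minimal_balanced(u, v, alphabet):
    # Minimal: no pair of proper prefixes of equal positive length is balanced.
    return (is_balanced(u, v, alphabet) and
            all(parikh(u[:l], alphabet) != parikh(v[:l], alphabet)
                for l in range(1, len(u))))

assert is_minimal_balanced("ba", "ab", "ab")
assert not is_minimal_balanced("abba", "abab", "ab")    # balanced, but (a, a) splits off
\end{verbatim}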
Let $\varphi_0$ and $\varphi_1$ be two morphisms having the same incidence matrix.
We assume that the morphisms are \emph{primitive}, i.e. there exists an integer $k$ such that all the entries of the matrix $M_{\varphi_0}^k$ are positive.
Let $\bu_0$ and $\bu_1$ be one-sided fixed points of $\varphi_0^m$ and $\varphi_1^m$, for some $m\geq 1$, i.e.
$$
\bu_0=\varphi_0^m(\bu_0), \quad \bu_1=\varphi_1^m(\bu_1).
$$
An \emph{initial balanced pair} for $(\bu_0,\bu_1)$ is a balanced pair $\left( u, v \right)$ such that $u$ and $v$ are prefixes of $\bu_0$ and $\bu_1$, respectively.
The \emph{balanced pair algorithm} for $(\bu_0,\bu_1)$ is defined as follows:
We assume that there exists an initial balanced pair $\left( u, v \right)$ for $\bu_0$ and $\bu_1$.
We apply the morphisms $\varphi_0^m$, $\varphi_1^m$ as
$\left( u, v \right) \mapsto \left( \varphi_0^m(u) , \varphi_1^m(v) \right)$.
Since the morphisms have the same incidence matrix, the obtained pair $\left( \varphi_0^m(u) , \varphi_1^m(v) \right)$ is balanced.
We decompose this balanced pair into minimal balanced pairs and we repeat this procedure for each new minimal balanced pair.
Under the right hypothesis the set of minimal balanced pairs is finite and the algorithm terminates (\emph{cf.}~\cite{sellami}).
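For concreteness, the following minimal Python sketch (ours, taking $m = 1$ for simplicity and using a crude bound on the number of rounds) implements the procedure:
\begin{verbatim}
from collections import Counter

def apply(morphism, word):               # extend a morphism to finite words
    return "".join(morphism[a] for a in word)

def minimal_factors(u, v):
    # Decompose a balanced pair (u, v) into its minimal balanced pairs.
    pairs, start, diff = [], 0, Counter()
    for i, (x, y) in enumerate(zip(u, v)):
        diff[x] += 1
        diff[y] -= 1
        if all(c == 0 for c in diff.values()):   # the prefixes of length i+1 are balanced
            pairs.append((u[start:i + 1], v[start:i + 1]))
            start, diff = i + 1, Counter()
    return pairs

def balanced_pair_algorithm(initial_pair, phi0, phi1, max_rounds=30):
    # Collect the minimal balanced pairs generated from an initial balanced pair;
    # return None if no finite set is reached within `max_rounds`.
    seen, frontier = set(), set(minimal_factors(*initial_pair))
    for _ in range(max_rounds):
        if not frontier:
            return seen                  # no new minimal pairs: the algorithm terminates
        seen |= frontier
        images = {(apply(phi0, u), apply(phi1, v)) for (u, v) in frontier}
        frontier = {p for uv in images for p in minimal_factors(*uv)} - seen
    return None
\end{verbatim}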
The balanced pair algorithm requires that there is an initial balanced pair for the two sequences.
Sufficient conditions for Pisot morphisms to have an initial balanced pair are stated in~\cite{sellami}.
These conditions are in terms of geometrical properties of their respective Rauzy fractals.
A pair of substitutions is given in~\cite[Example 5]{sellami-sirvent} that does not satisfy these conditions.
It is conjectured that there is no initial balanced pair between the (one-sided) fixed points of the morphisms.
In the following theorem we give a positive answer to this conjecture.
\begin{theorem}\label{thm:main}
Let $\varphi_0$ and $\varphi_1$ be the morphisms over the alphabet $\left\{ a,b,c \right\}$ defined by
\begin{equation}\label{eqn:morphisms}
\varphi_0: \begin{array}{l}
a \mapsto abc\\
b \mapsto a\\
c \mapsto ac
\end{array}
\quad \text{ and } \quad
\varphi_1:
\begin{array}{l}
a \mapsto cba\\
b \mapsto a\\
c \mapsto ca
\end{array},
\end{equation}
and let $\bu_0$ and $\bu_1$ be their respective one-sided fixed points:
\begin{align*}
\bu_{0} = \varphi_0(\bu_0) = abcaacabcabcacabcaacabcaacabcacabc \ldots \\
\bu_{1} = \varphi_1(\bu_1)= cacbacaacbacacbacbacaacbacacbacaac \ldots
\end{align*}
There is no initial balanced pair for $(\bu_0,\bu_1)$, i.e. for all $n>0$ we have
\begin{equation}
\parikh(\bu_0[{:}n]) \neq \parikh(\bu_1[{:}n]).
\end{equation}
\end{theorem}
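The statement is easily tested on finite prefixes. A minimal Python sketch (the prefix length $10^{5}$ is an arbitrary cut-off chosen for this check) confirms that the Parikh vectors of all prefixes up to that length differ:
\begin{verbatim}
phi0 = {"a": "abc", "b": "a", "c": "ac"}
phi1 = {"a": "cba", "b": "a", "c": "ca"}

def apply(m, w):
    return "".join(m[a] for a in w)

def prefix_of_fixed_point(morphism, seed, length):
    # The one-sided fixed point starting with `seed`; iteration yields nested prefixes.
    w = seed
    while len(w) < length:
        w = apply(morphism, w)
    return w[:length]

N = 100000
u0 = prefix_of_fixed_point(phi0, "a", N)
u1 = prefix_of_fixed_point(phi1, "c", N)

count0 = {"a": 0, "b": 0, "c": 0}        # running Parikh vectors of the prefixes
count1 = {"a": 0, "b": 0, "c": 0}
for n in range(N):
    count0[u0[n]] += 1
    count1[u1[n]] += 1
    assert count0 != count1              # prefixes of length n+1 are not balanced
\end{verbatim}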
The substitutions of Theorem~\ref{thm:main} are Pisot.
However we will not use any argument based on properties of Rauzy fractals or discuss any consequences related to the intersection of their respective Rauzy fractals.
The Rauzy fractals corresponding to these morphisms are depicted in~\cite[Figure 6]{sellami-sirvent}.
In the present note we shall use only a symbolic approach.
We give the proof of Theorem~\ref{thm:main} in \Cref{s:proof}.
The main idea for the proof is the following.
We investigate the prefixes $\varphi_0(\bu_0[{:}n])$ and $\varphi_1(\bu_1[{:}n])$ at the same time.
For instance, for $n=1,2,3$ we have:
\begin{align*}
\varphi_0(\bu_0[{:}1]) &= abc, & \varphi_0(\bu_0[{:}2]) &= abca, & \varphi_0(\bu_0[{:}3]) &= abcaac, \\
\varphi_1(\bu_1[{:}1]) &= ca, & \varphi_1(\bu_1[{:}2]) &= cacba, & \varphi_1(\bu_1[{:}3]) &= cacbaca.
\end{align*}
To keep track of the letters at the same positions in the two fixed points and the images of letters at the same time, we encode the corresponding images of letters in the following manner:
\begin{align*}
\begin{array}{rl}
\varphi_0(\bu_0[{:}1]) & = \\
\varphi_1(\bu_1[{:}1]) & =
\end{array}
&
\letterOrig{0}
&
\begin{array}{rl}
\varphi_0(\bu_0[{:}2]) & = \\
\varphi_1(\bu_1[{:}2]) & =
\end{array}
&
\letterOrig{0} \
\letterOrig{1}
\\
\begin{array}{rl}
\varphi_0(\bu_0[{:}3]) & = \\
\varphi_1(\bu_1[{:}3]) & =
\end{array}
&
\letterOrig{0} \
\letterOrig{1} \
\letterOrig{2}
& &
\end{align*}
In that way, we construct a sequence which encodes $\bu_0$ and $\bu_1$ at the same time.
In the next section, we give the formal definitions and description of this construction.
\Cref{s:proof} contains a proof of \Cref{thm:main}.
In \Cref{s:comments} we discuss possible generalization of this technique of simultaneously generating and encoding two fixed points for a wider class of substitutions.
\section{Simultaneous coding of two fixed points} \label{sec:bricks}
In this section, we assume that $\varphi_0$ and $\varphi_1$ are two fixed substitutions of $\A$ and $\bu_0$ and $\bu_1$ are their two fixed points, respectively.
We start by giving a definition of the alphabet that shall be used for the simultaneous encoding.
\begin{definition} \label{de:brick}
Let $\A$ be an alphabet.
We set $\BA(\A) = \A \times \A \times \Z$ and call it the \emph{brick alphabet} (over $\A$).
An element of $\BA(\A)$ is called a \emph{brick}.
The last element of a brick is called its \emph{offset}.
\end{definition}
\begin{definition}
Let $(v_0,v_1,s)$ and $(w_0,w_1,t)$ be two bricks.
We say that $(v_0,v_1,s)$ \emph{joins with} $(w_0,w_1,t)$ \emph{with respect to $(\varphi_0, \varphi_1)$} if
\[
| \varphi_0(v_0) | - s - | \varphi_1(v_1) | + t = 0.
\]
\end{definition}
In relation with the last definition and the morphisms $\varphi_0$ and $\varphi_1$, we shall depict bricks as in Introduction:
for a brick $(v_0,v_1,s)$ we place on the first row the word $\varphi_0(v_0)$ and on the second row we place $\varphi_1(v_1)$ with the offset $s$ relative to the first row.
One brick joins with another if the end of the first brick matches with the beginning of the second brick as demonstrated in the following example.
Consider $\varphi_0$ and $\varphi_1$ given in~(\ref{eqn:morphisms}).
Let $n = 1$.
Consider the brick $(a,c,0)$, depicted as
\[
\letterOrig{0},
\]
and the brick $(b,a,-1)$, depicted as
\[
\letterOrig{1}.
\]
The brick $(a,c,0)$ joins with $(b,a,-1)$ with respect to $(\varphi_0, \varphi_1)$ since
\[
|\varphi_0(a)| - 0 - |\varphi_1(c)| + (-1) = 3 - 0 - 2 - 1 = 0.
\]
This fact can be visually depicted as
\[
\letterOrig{01}.
\]
The brick $(b,a,-1)$ joins with $(c,c,1)$ and we have
\[
\letterOrig{012}.
\]
The brick $(b,a,-1)$ does not join with itself:
\[
\letterOrig{1} \ \letterOrig{1}.
\]
Let us introduce a notation that allows us to retrieve the encoded sequences from sequences over a brick alphabet $\BA(\A)$.
Let $z = \left( \left( w_{0,i},w_{1,i}, k_i \right) \right)_{i} $ be an element of $\BA(\A)^{\N}$.
For $j\in\{0,1\}$, we set $\tau_j: \BA(\A)^\N \to \A^\N$ by $\tau_j(z) = w_{j,0} w_{j,1} w_{j,2}\ldots\in\A^{\N}$.
For any $\varphi_0$ and $\varphi_1$, we can find a sequence $\bz$ over $\BA(\A)$ in the spirit of the above example.
We construct the bricks to respect the original sequences: $\tau_i(\bz) = \bu_i$ for $i \in \{0,1\}$.
The offsets of the bricks in $\bz$ are set position by position starting from the beginning.
The first offset is $0$ and the offsets of the other bricks in $\bz$ are set so that any brick in $\bz$ joins (with respect to $(\varphi_0,\varphi_1)$) with the brick immediately following it.
We say that $\bz$ \emph{simultaneously encodes} the fixed points $\bu_0$ and $\bu_1$.
Continuing the above example, we find that the sequence simultaneously encoding the fixed points of~(\ref{eqn:morphisms}) starts with
\[
\left( a,c, 0 \right) \left( b,a,-1 \right) \left( c,c,1 \right) \ldots
\]
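In other words, the offsets obey the recursion $k_0 = 0$ and $k_{i+1} = k_i + |\varphi_1(\bu_1[i])| - |\varphi_0(\bu_0[i])|$. A minimal Python sketch (ours, reusing \texttt{apply}, \texttt{phi0}, \texttt{phi1} and the prefixes of $\bu_0$, $\bu_1$ generated in the earlier sketch) reproduces the bricks displayed above:
\begin{verbatim}
def simultaneous_encoding(u0, u1, phi0, phi1, n):
    # First n bricks (u0[i], u1[i], k_i), where k_0 = 0 and each brick joins
    # with the next one:  k_{i+1} = k_i + |phi1(u1[i])| - |phi0(u0[i])|.
    bricks, offset = [], 0
    for i in range(n):
        bricks.append((u0[i], u1[i], offset))
        offset += len(apply(phi1, u1[i])) - len(apply(phi0, u0[i]))
    return bricks

print(simultaneous_encoding(u0, u1, phi0, phi1, 3))
# [('a', 'c', 0), ('b', 'a', -1), ('c', 'c', 1)]
\end{verbatim}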
\section{Proof of Theorem~\ref{thm:main}}\label{s:proof}
In this section, $\varphi_0$ and $\varphi_1$ are fixed to be the two morphisms in~(\ref{eqn:morphisms}), and
$\bu_0$ and $\bu_1$ are their respective fixed points.
The alphabet considered is $\A = \{a,b,c\}$.
Let $\mu$ be a substitution over $\C = \{0,1, \ldots, 8 \}$ given by
\[
\mu:
\begin{array}{lllll}
0 \mapsto 01, &
1 \mapsto 23, &
2 \mapsto 45, &
3 \mapsto 41, &
4 \mapsto 231, \\
5 \mapsto 26, &
6 \mapsto 478, &
7 \mapsto 2, &
8 \mapsto 66.
\end{array}
\]
Let $\bw$ denote its fixed point.
We have
\[
\bw = 012345412312623123454123454784541234541231 \ldots
\]
Inspecting $\mu$, we find the set of all its factors of length 2 easily:
\[
\L_2(\bw) = \left\{ 01, 66, 12, 31, 23, 34, 26, 45, 62, 47, 84, 78, 54, 41 \right\}.
\]
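Both $\bw$ and its factors of length $2$ are conveniently generated by machine; a small Python sketch (the prefix length is an arbitrary but comfortable choice):
\begin{verbatim}
mu = {0: (0, 1), 1: (2, 3), 2: (4, 5), 3: (4, 1), 4: (2, 3, 1),
      5: (2, 6), 6: (4, 7, 8), 7: (2,), 8: (6, 6)}

def w_prefix(length):
    # Prefix of the fixed point of mu starting with the letter 0.
    w = (0,)
    while len(w) < length:
        w = tuple(c for x in w for c in mu[x])
    return w[:length]

w = w_prefix(5000)
factors_of_length_2 = {(w[i], w[i + 1]) for i in range(len(w) - 1)}
print(sorted(factors_of_length_2))       # the 14 pairs of L_2(w) listed above
\end{verbatim}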
Let $\B = \{ (a,c,0),
(b,a,-1),
(c,c,1),
(a,b,1),
(a,a,-1),
(c,c,-1),
(a,a,1),
(b,c,-1),
(c,b,0)
\} \subset \BA(\A)$
and $\pi: \C^* \to \B^*$ be the morphism determined by
\[
\pi:
\begin{array}{rlrlrlrlrl}
0 & \mapsto (a,c,0), &
1 & \mapsto (b,a,-1), &
2 & \mapsto (c,c,1), &
3 & \mapsto (a,b,1), &
4 & \mapsto (a,a,-1), \\
5 & \mapsto (c,c,-1), &
6 & \mapsto (a,a,1), &
7 & \mapsto (b,c,-1), &
8 & \mapsto (c,b,0).
\end{array}
\]
The action of $\pi$ may be depicted with respect to $(\varphi_0,\varphi_1)$ as follows:
\begin{equation} \label{eq:orig_pi_bricks}
\pi: \begin{array}{rlrlrl}
0 & \mapsto \letterOrig{0}, &
1 & \mapsto \letterOrig{1}, &
2 & \mapsto \letterOrig{2},\\[2em]
3 & \mapsto \letterOrig{3}, &
4 & \mapsto \letterOrig{4}, &
5 & \mapsto \letterOrig{5}, \\[2em]
6 & \mapsto \letterOrig{6}, &
7 & \mapsto \letterOrig{7}, &
8 & \mapsto \letterOrig{8}.\\[2em]
\end{array}
\end{equation}
The sequence $\bw$ is constructed so that the sequence $\pi(\bw) = \pi(\bw[0])\pi(\bw[1])\pi(\bw[2])\ldots$ simultaneously encodes both $\bu_0$ and $\bu_1$.
This fact is shown by the two following propositions.
The alphabet $\C$ and the morphism $\pi$ serve only to work with the indices of the elements of $\B$ to ease the presentation.
\begin{proposition} \label{st:orig_u_are_produced}
Let $i \in \{0,1\}$.
We have $\bu_i = \tau_i \pi (\bw)$.
\end{proposition}
\begin{proof}
We show that for all $n$, the word $\tau_i \pi \mu ( \bw[{:}n] ) $ is a prefix of $\varphi_i \tau_i \pi ( \bw[{:}n] )$ (which is a prefix of $\bu_i$).
The proof will be done by induction on $n$.
In fact, we will prove a stronger statement:
there exists a mapping $s_i: \C \to \A^*$ such that for all $n$ we have
\begin{equation}
\tau_i \pi \mu ( \bw[{:}(n{+}1)] ) s_i(x) = \varphi_i \tau_i \pi ( \bw[{:}(n{+}1)] ) \label{eq:indu_x}
\end{equation}
where $x$ is the last letter of $\bw[{:}(n{+}1)]$, i.e., $\bw[{:}(n{+}1)] = \bw[{:}n]x$.
We set $s_i$ as follows:
\begin{align*}
s_0: & & 0 \mapsto c, 1 \mapsto \varepsilon, 2 \mapsto \varepsilon, 3 \mapsto c, 4 \mapsto c, 5 \mapsto c, 6 \mapsto \varepsilon, 7 \mapsto a, 8 \mapsto c, \\
s_1: & & 0 \mapsto \varepsilon, 1 \mapsto a, 2 \mapsto a, 3 \mapsto \varepsilon, 4 \mapsto \varepsilon, 5 \mapsto \varepsilon, 6 \mapsto a, 7 \mapsto a, 8 \mapsto \varepsilon.
\end{align*}
Taking all $yx \in \L_2(\bw) = \left\{ 01, 66, 12, 31, 23, 34, 26, 45, 62, 47, 84, 78, 54, 41 \right\}$, we may verify that
\begin{equation} \label{eq:indux_ro}
\tau_i \pi \mu (x) s_i(x) = s_i(y) \varphi_i \tau_i \pi (x).
\end{equation}
To show \eqref{eq:indu_x} we proceed by induction on $n$.
The claim is verified for $n = 0$ ($\bw[{:}1] = \bw[0] = 0$):
\begin{align*}
\tau_0 \pi \mu ( 0 ) s_0(0) & = abc = \varphi_0 \tau_0 \pi (0), \\
\tau_1 \pi \mu ( 0 ) s_1(0) & = ca = \varphi_1 \tau_1 \pi (0).
\end{align*}
Assume \eqref{eq:indu_x} holds for $n \geq 0$.
Let $\bw[{:}(n{+}2)] = \bw[{:}n]yx$, i.e., $y$ and $x$ be the last two letters of $\bw[{:}(n{+}2)]$.
We have
\[
\tau_i \pi \mu ( \bw[{:}(n{+}2)] ) s_i(x) = \tau_i \pi \mu ( \bw[{:}(n{+}1)] ) \tau_i \pi \mu (x) s_i(x) = \tau_i \pi \mu ( \bw[{:}(n{+}1)] ) s_i(y) \varphi_i \tau_i \pi (x)
\]
where the last equality follows from \eqref{eq:indux_ro}.
Since by using the induction assumption we have
\[
\tau_i \pi \mu ( \bw[{:}(n{+}1)] ) s_i(y) \varphi_i \tau_i \pi (x) = \varphi_i \tau_i \pi ( \bw[{:}(n{+}1)] ) \varphi_i \tau_i \pi (x) =
\varphi_i \tau_i \pi ( \bw[{:}(n{+}2)] ),
\]
we conclude that
\[
\tau_i \pi \mu ( \bw[{:}(n{+}2)] ) s_i(x) = \varphi_i \tau_i \pi ( \bw[{:}(n{+}2)] ),
\]
which finishes the induction step and \eqref{eq:indu_x} is proven.
\end{proof}
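The finite check of \eqref{eq:indux_ro} over the $14$ elements of $\L_2(\bw)$ is conveniently delegated to a machine. A minimal Python sketch (ours; the dictionary \texttt{letters} records the first two components of each brick $\pi(x)$) verifies all cases:
\begin{verbatim}
phi = ({"a": "abc", "b": "a", "c": "ac"},            # phi_0
       {"a": "cba", "b": "a", "c": "ca"})            # phi_1
mu = {0: "01", 1: "23", 2: "45", 3: "41", 4: "231",
      5: "26", 6: "478", 7: "2", 8: "66"}
letters = {0: "ac", 1: "ba", 2: "cc", 3: "ab", 4: "aa",   # (w_0, w_1) of each brick pi(x)
           5: "cc", 6: "aa", 7: "bc", 8: "cb"}
s = ({0: "c", 1: "", 2: "", 3: "c", 4: "c", 5: "c", 6: "", 7: "a", 8: "c"},   # s_0
     {0: "", 1: "a", 2: "a", 3: "", 4: "", 5: "", 6: "a", 7: "a", 8: ""})     # s_1

def apply(m, w):
    return "".join(m[a] for a in w)

def tau_pi(i, digits):                   # tau_i(pi(.)) applied to a word over C
    return "".join(letters[int(d)][i] for d in digits)

L2 = ["01", "66", "12", "31", "23", "34", "26",
      "45", "62", "47", "84", "78", "54", "41"]
for yx in L2:
    y, x = int(yx[0]), int(yx[1])
    for i in (0, 1):
        assert tau_pi(i, mu[x]) + s[i][x] == s[i][y] + apply(phi[i], tau_pi(i, str(x)))
\end{verbatim}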
The following claim states that any two consecutive bricks of $\pi(\bw)$ join one with another.
\begin{proposition} \label{st:orig_w_compatible}
If $b_0b_1 \in \L_2(\pi(\bw))$, then $b_0$ joins with $b_1$ with respect to $(\varphi_0,\varphi_1)$.
\end{proposition}
\begin{proof}
Checking each $b_0b_1 \in \L_2(\pi(\bw))$, we find that $b_0$ joins with $b_1$ with respect to $(\varphi_0,\varphi_1)$.
Indeed, using the graphical view \eqref{eq:orig_pi_bricks} we obtain
\begin{equation} \label{eq:orig_piL2}
\begin{aligned}
\pi(01) & = \letterOrig{01} &
\pi(66) & = \letterOrig{66} &
\pi(12) & = \letterOrig{12} \\
\pi(31) & = \letterOrig{31} &
\pi(23) & = \letterOrig{23} &
\pi(34) & = \letterOrig{34} \\
\pi(26) & = \letterOrig{26} &
\pi(45) & = \letterOrig{45} &
\pi(62) & = \letterOrig{62} \\
\pi(47) & = \letterOrig{47} &
\pi(84) & = \letterOrig{84} &
\pi(78) & = \letterOrig{78} \\
\pi(54) & = \letterOrig{54} &
\pi(41) & = \letterOrig{41} &
\end{aligned}
\end{equation}
and the proof is concluded by checking that there are no gaps or overlaps between the two bricks depicted with respect to $(\varphi_0,\varphi_1)$.
\end{proof}
We are set for proving Theorem~\ref{thm:main}.
\begin{proof}[Proof of Theorem~\ref{thm:main}]
We will prove that for all $n > 0$ we have $\left |\bu_0[{:}n] \right |_c < \left |\bu_1[{:}n] \right |_c$.
We shall depict $\pi(\bw)$ with respect to $\varphi_0$ and $\varphi_1$, i.e., using \eqref{eq:orig_pi_bricks}.
We obtain
\[
\pi(\bw) =
\letterOrig{0123454}
\ldots
\]
By \Cref{st:orig_u_are_produced,st:orig_w_compatible} we conclude that on the first row of the right side of the previous equation we have exactly $\bu_0$ (with no spaces between letters or overlaps), and that the second row is exactly $\bu_1$.
Moreover, since the first brick is $(a,c,0)$ with offset $0$, the words $\bu_0$ and $\bu_1$ are aligned and their positions match.
Comparing all occurrences of the letter $c$ in $\pi(\L_2(\bw))$ depicted in \eqref{eq:orig_piL2} we conclude that almost all occurrences of $c$ in $\bu_0$ may be paired with occurrences of $c$ in $\bu_1$.
Precisely, for $n > 0$ we have
\begin{align*}
\text{if }\bu_0[n] = c \quad \text{then} \quad \bu_1[n{-}1]\bu_1[n] \text{ contains exactly 1 occurrence of } c, \\
\text{if }\bu_1[n] = c \quad \text{then} \quad \bu_0[n]\bu_0[n{+}1] \text{ contains exactly 1 occurrence of } c.
\end{align*}
The first letter of $\bu_1$ is $c$ and it is not paired with any $c$ in $\bu_0$.
The relative positions of $c$'s in $\bu_0$ and $\bu_1$ imply
\[
\left | \bu_1[{:}n] \right |_c > \left | \bu_0[{:}n] \right |_c
\]
for all $n > 0$.
\end{proof}
\section{Comments} \label{s:comments}
In this section we discuss the technique used to simultaneously generate two fixed points from a more general point of view.
This technique was introduced in this article in order to check that there is no initial balanced pair for a pair of fixed points; however the technique is interesting by itself and it could have other potential applications.
Clearly, a crucial step of the technique is to have a simultaneous coding which is a fixed point of a morphism.
We explore here whether the same method may be used to answer the same question for some other pairs of morphisms.
Since we only intend to make comments and state open questions, we do not give any proofs, as they would be in the very same spirit as above, and we support our claims by computer evidence.
We give two simple examples while keeping the same meaning of the notation:
The sequence $\pi(\bw)$ is the sequence simultaneously encoding the two fixed points and the goal is to find a fixing morphism $\mu$ of $\bw$.
Again, $\C$ and $\pi$ are used only to ease the presentation.
\begin{example} \label{ex:1}
We consider the following two morphisms over $\A = \{a,b\}$:
\[
\varphi_0: \begin{array}{l}
a \mapsto aab\\
b \mapsto ab
\end{array}
\quad \text{ and } \quad
\varphi_1:
\begin{array}{l}
a \mapsto aba\\
b \mapsto ba
\end{array}.
\]
We set $\C=\{0,1,2\}$, $\B = \left\{ (a,b,0),(a,a,-1),(b,a,-1) \right\}$ and $\pi: \C^* \to \B^*$ depicted with respect to $\left( \varphi_0, \varphi_1 \right) $ as follows
\[
\pi: 0 \mapsto \begin{array}{|c|c|c|} \cline{1-3} \multicolumn{1}{|c}{a} & \multicolumn{1}{c}{a} & \multicolumn{1}{c|}{b} \\ \cline{1-3} \multicolumn{1}{|c}{b} & \multicolumn{1}{c|}{a} \\ \cline{1-2} \end{array} , \quad
1 \mapsto \begin{array}{|c|c|c|c|} \cline{2-4} \multicolumn{1}{c|}{} & \multicolumn{1}{|c}{a} & \multicolumn{1}{c}{a} & \multicolumn{1}{c|}{b} \\ \cline{1-4} \multicolumn{1}{|c}{a} & \multicolumn{1}{c}{b} & \multicolumn{1}{c|}{a} \\ \cline{1-3} \end{array} , \quad
2 \mapsto \begin{array}{|c|c|c|} \cline{2-3} \multicolumn{1}{c|}{} & \multicolumn{1}{|c}{a} & \multicolumn{1}{c|}{b} \\ \cline{1-3} \multicolumn{1}{|c}{a} & \multicolumn{1}{c}{b} & \multicolumn{1}{c|}{a} \\ \cline{1-3} \end{array}.
\]
Let $\bw \in \C^\N$ be fixed by the morphism determined by $0 \mapsto 01, 1 \mapsto 201, 2 \mapsto 202$.
The sequence $\pi(\bw)$ simultaneously encodes $\bu_0$ and $\bu_1$.
\end{example}
\begin{example} \label{ex:2}
We consider $\A = \{a,b,c\}$ and the substitutions
\[
\varphi_0: \begin{array}{l}
a \mapsto abac\\
b \mapsto aba\\
c \mapsto ab
\end{array}
\quad \text{ and } \quad
\varphi_1:
\begin{array}{l}
a \mapsto acab\\
b \mapsto aab\\
c \mapsto ab
\end{array}.
\]
We set $\C = \{0,1,\ldots,11\}$ and
\[
\B = \left\{\begin{array}{l}
(a,a,0),
(b,c,0),
(a,a,-1),
(c,b,-1),
(b,b,0),
(a,c,0),\\
(b,a,-2),
(a,b,-1),
(c,a,-2),
(b,a,0),
(a,b,1),
(c,c,0)
\end{array}
\right\}.
\]
The morphism $\pi: \C^* \to \B^*$ maps the elements of $\C$ to the elements of $\B$ in the order as depicted above, its graphical view with respect to $(\varphi_0,\varphi_1)$ is as follows:
\[
\begin{aligned}
\begin{aligned}
\pi(0) & = \letterzuj{0}, &
\pi(1) & = \letterzuj{1}, &
\pi(2) & = \letterzuj{2}, \\
\pi(3) & = \letterzuj{4}, &
\pi(4) & = \letterzuj{5}, &
\pi(5) & = \letterzuj{6}, \\
\pi(6) & = \letterzuj{6}, &
\pi(7) & = \letterzuj{8}, &
\pi(8) & = \letterzuj{8}, \\
\pi(9) & = \letterzuj{9}, &
\pi(10) & = \letterzuj{10}, &
\pi(11) & = \letterzuj{11}.\\
\end{aligned}
\end{aligned}
\]
The morphism fixing $\bw$ is determined by
\[
\begin{aligned}
0 & \mapsto 0123, &
1 & \mapsto 0, &
2 & \mapsto 405678,\\
3 & \mapsto 04, &
4 & \mapsto 0.9.10, &
5 & \mapsto 0.4.0.11.0.4,\\
6 & \mapsto 0, &
7 & \mapsto 0.4.0.11, &
8 & \mapsto 04\\
9 & \mapsto 0, &
10 & \mapsto 127623, &
11 & \mapsto 04.\\
\end{aligned}
\]
Since the alphabet has more than $10$ letters and we use decimal numbers to represent them, if needed, we use ``.'' to separate the elements of a word in order to avoid confusions.
\end{example}
We would like to point out that \Cref{ex:1,ex:2} trivially have an initial balanced pair, i.e. $(a,a)$.
In general, this direct approach to simultaneously generating two fixed points does not work for an arbitrary pair of morphisms.
We do not consider the computational problems, i.e., when the size of $\B$ is large.
The combinatorial problem is that $\bw$ may not be a fixed point.
We continue with examples that display this property and discuss possible refinements of the method.
\begin{example}
The pair of morphisms~(\ref{eqn:morphisms}) belongs to the following family
\begin{equation} \label{eqn:morphisms_k}
\varphi_{k,0}: \begin{array}{l}
a \mapsto a^kbc\\
b \mapsto a\\
c \mapsto ac
\end{array}
\quad \text{ and } \quad
\varphi_{k,1}:
\begin{array}{l}
a \mapsto cba^k\\
b \mapsto a\\
c \mapsto ca
\end{array},
\end{equation}
where $a^k=\underbrace{a\cdots a}_{k-\text{times}}$, with $k\geq 1$.
Computer experiments indicate that their respective fixed points do not have an initial balanced pair.
The methods expounded in this note do not work well for $k>1$.
\end{example}
The first refinement is to alter \Cref{de:brick}.
\begin{definition}[Extended~\Cref{de:brick}]
Let $\A$ be an alphabet and let $n$ be a positive integer.
We set $\BA_n(\A) = \A^n \times \A^n \times \Z$ and call it the \emph{brick alphabet of order $n$} (over $\A$).
An element of $\BA_n(\A)$ is called a \emph{brick of order $n$}.
\end{definition}
Using this notion, the problem for the morphisms~\eqref{eqn:morphisms_k} with $k=2$ is remedied by taking bricks of order $2$.
(The simultaneous encoding using bricks of order $2$ is done so that in the graphical view the bricks overlap by a brick of order $1$, i.e., $\tau_i(\bw) = \bu_i[0]\bu_i[1], \bu_i[1]\bu_i[2], \bu_i[2]\bu_i[3] \ldots \in \left( \L_2(\bu_i) \right)^\N$.)
We find $\#\B = \#\C = 21$ and $\bw$ is fixed by a morphism.
However, for $k=3$, computer evidence suggests that the sequence $\bw$ is not fixed by any morphism.
In order to generate $\bw$ one might try to check whether it can be generated as a morphic image of a fixed point (so-called \emph{morphic sequence}).
This is a second refinement of the exposed technique.
An intuitive approach is to select a factor of $\bw$, construct a coding of return words to this factor in $\bw$ (usually called a \emph{derivated sequence}, see \cite{Durand98}) or a coding of return words to a selected set of factors, and test whether this coding is a fixed point.
Using these two refinements, we have successfully solved the problem for some random examples.
However, a sufficient condition for any of the techniques to work is an open question.
We state this question along with other open problems:
\begin{enumerate}
\item Primitivity of $\varphi_0$ (and thus of $\varphi_1$) is a sufficient condition for $\big | \left | \varphi_0(\bu_0[{:}n]) \right | - \left | \varphi_1(\bu_1[{:}n]) \right | \big |$ to be bounded.
It follows that the simultaneous coding of $\bu_0$ and $\bu_1$ is over a finite alphabet since the offsets of the bricks occurring in it are bounded.
It implies that using bricks of higher orders will also yield a finite alphabet.
To see that in the non-primitive case we may need an infinite number of bricks consider the following example:
\[
\varphi_0: \begin{array}{l}
a \mapsto abacaa\\
b \mapsto cbb\\
c \mapsto bcc
\end{array}
\quad \text{ and } \quad
\varphi_1:
\begin{array}{l}
a \mapsto baacaa\\
b \mapsto bbc\\
c \mapsto bcc
\end{array} .
\]
The incidence matrix of these two morphisms is $ \begin{pmatrix}
4 & 0 & 0 \\
1 & 2 & 1 \\
1 & 1 & 2
\end{pmatrix}$, its largest eigenvalue is $4$ and the associated eigenvector is $(1,1,1)^T$.
The other eigenvalues are $3$ and $1$ with respective eigenvectors $(0,1,1)^T$ and $(0,1,-1)$.
It follows that the order of growth of $\left | \varphi_0^n ( a ) \right |$ is $4^n$,
while the order of growth of $\left | \varphi_1^n ( b ) \right |$ is $3^n$.
Therefore, the simultaneous coding of the fixed points of these two morphisms requires an infinite number of bricks.
Clearly, this property is due to the choice of distinct first letters of the compared fixed points, $a$ for $\varphi_0$ and $b$ for $\varphi_1$, which have distinct magnitudes of growth.
A further investigation of conditions for the finiteness of the number of bricks needed is an open question, along with an estimate on this number.
\item Assuming the number of required bricks for a simultaneous coding is finite, one can be interested in an integer $N$ such that all required bricks can be found using the words $\varphi_0^N(\bu_0[0])$ and $\varphi_1^N(\bu_1[0])$.
\item What are some sufficient conditions so that a simultaneous coding $\bw$ (using bricks of order $h$) is a fixed point or a morphic image of a fixed point?
\item There is an overlap of our presented algorithm with the approach used in~\cite{sellami} based on the balanced pair algorithm, recalled in Introduction.
This approach identifies minimal balanced pairs which can be seen as sequences of various lengths of bricks of order $1$, always ending with a brick with offset $0$, or as bricks of various orders with offset $0$.
A description of the relation of the two algorithms is an open question.
For instance, if one terminates, does the other also terminate?
In~\cite{sellami}, the author investigates specific pairs of substitutions (unimodular irreducible Pisot substitutions satisfying the Pisot conjecture and having $0$ in the interior of their Rauzy fractal, see~\cite{sellami,Aper}) for which the approach based on minimal balanced pairs terminates.
In Section 5 of~\cite{sellami}, there are the two following examples.
The first one consists of morphisms determined by $a \mapsto aba, b \mapsto ab$ and $a \mapsto aab, b \mapsto ba$.
Our algorithm terminates by finding a morphism fixing the simultaneous coding in question, which is over $6$ bricks of order $1$.
The second example is formed by morphisms determined by $a \mapsto ab,b \mapsto ac,c \mapsto a$ and $a \mapsto ab,b \mapsto ca,c \mapsto a$.
In this case, using bricks of order $1$, the simultaneous coding seems not to be fixed by a morphism.
This observation supports the need to explore this question further.
\end{enumerate}
\section*{Acknowledgements}
ŠS acknowledges financial support by the Czech Science Foundation grant GA\v CR 13-03538S.
Computer experiments were performed using SageMath~\cite{sage}.
\section{Introduction}
\label{intro}
Quarks undergoing acceleration in the deconfined state
of QCD matter can generate electromagnetic radiation,
with those photons that are off-shell subsequently decaying into lepton-antilepton pairs.
Therefore, both the real photon spectrum and dilepton invariant mass distribution
can provide access to properties of the hot quark-gluon plasma (QGP)
that exists in heavy ion collision experiments
\cite{McLerran1984,Weldon1990,Gale1990}.
In this report, I will
show that the spectral function can be constrained by lattice data and
discuss how the presence of a baryon chemical potential, $\mu_{\rm B}$,
impacts the production of photons and dileptons.
While the latter has been examined for real photons \cite{Gervais2012},
we present new results away from the light cone.
This involves properly understanding how $\mu_{\rm B}$ enters the strict NLO
computation, the so-called LPM effect (at low invariant masses),
and how to smoothly interpolate between the two regimes as originally
advocated in ref.~\cite{Ghisoiu2014}.
To start, we fix the notation and denote the temperature by $T\,$,
the quark chemical potential by $\mu\,$ and
the energy and momentum with respect to the plasma rest frame
of the lepton pair by $\omega$ and $\bm k$ respectively.
In chemical equilibrium,
$\mu = \frac13 \mu_{\rm B}$
and thermal averages are calculated from
$\langle ... \rangle = {\rm Tr}\big[\, \hat \varrho\, (...)\,\big]$
with the density matrix $\hat \varrho = {\cal Z}^{-1} e^{-(\hat H - \mu \hat Q)/T}$
\cite{basics}.
Emission rates can then be derived from an associated spectral function.
In this case, the relevant spectral function is given by
the imaginary part of the current-current
correlation function, evaluated at the energy $k_0 = \omega + i 0^+$,
namely
\begin{equation}
\rho_{\mu\nu}(\omega, \bm k) =
\mathop{\mbox{Im}} \big[ \Pi_{\mu\nu}(K)
\big]
\,, \label{cut}
\end{equation}
where $K = (k_0,\vec{k})$
and the correlation function is given by\footnote{%
An overall minus sign appears in \eq{Pi_munu}
for sake of convenience.
}
\begin{equation}
\Pi^{ }_{\mu\nu}(K)
\; \equiv \;
-\,
\int_0^{1/T} \!\! {\rm d} \tau
\int_{\vec{x}} \;
e^{k_0\tau + i\, \vec{k}\cdot\vec{x} } \,
\bigg\langle\,
J_\mu
\big(t,\vec{x}\!\,\big) \,
J_\nu
\big(0,\vec{0}\!\,\big)
\,\bigg\rangle
\, , \quad J_\mu \equiv \bar\psi \gamma_\mu \psi
\;. \label{Pi_munu}
\end{equation}
The thermal average is taken on a volume
with periodic temporal extent $\tau \in (0,T^{-1})$ and
$k_0$ is a bosonic Matsubara frequency
$k_0 = i\, 2\pi z T$ with
$z \in \mathbb{Z}$.
With these definitions,
the differential photon rate involves the spectral function
$\rho_{\rm V} \equiv \rho_\mu^{\ \mu}$
for both $i$) real and $ii$) virtual photons.
In case $i$), $\omega = |\vec{k}| \equiv k$
and case $ii$) provides the dilepton rate
with the invariant mass $M \equiv \sqrt{ K^2}$
that should be above the threshold to
form the pair: $K^2 = \omega^2 - k^2 > 4m_\ell^2\,$.
One might also consider $iii$) deep inelastic scattering on a QGP target,
which would involve $\rho_{\rm V}$ for spacelike virtualities \cite{harvey_DIS}.
Although case $iii$) may not be experimentally accessible,
there is another good reason to pursue $K^2<0\,$:
Knowing
the spectral function at fixed $k$ for {\em all} $\omega$
enables one to calculate the imaginary time correlation function
and thus connect with non-perturbative lattice measurements (at $\mu=0$).
The gross features of $\rho_{\rm V}$ can be understood from the
leading-order (LO) process ${q\bar q \to \gamma^\star}$, i.e. $\alpha_s = 0$.
For non-zero $\mu$, the
result was determined in ref.~\cite{Dumitru1993} for $K^2 > 0\,$.
In general, for any $\omega\,$, the free spectral function is
given by the strict 1-loop result\footnote{%
Setting $\mu=0$ gives eq.~(2.4) from ref.~\cite{Jackson2019}.}
\begin{eqnarray}
\left.\rho_{\rm V}\right|^{\rm strict}_{\rm 1-loop} & = &
\frac{ N_{\rm c} K^2 }{4 \pi }
\bigg\{ \,
\sum_{\nu = \pm \mu}
\frac{T}{k} \ln \bigg[ \frac{1 + e^{[\nu-\frac12(\omega+k)]/T}}{1+ e^{[\nu-\frac12|\omega-k|]/T}} \bigg] + \Theta \big( K^2 \big) \,
\bigg\}
\;, \label{rhoV_LO}
\end{eqnarray}
where $\Theta$ denotes the Heaviside step function
and $N_{\rm c}$ is the number of colours.
This is depicted in fig.~\ref{fig-1}, assuming non-zero $T$.
The vanishing of $\rho_{\rm V}$ for $\omega = k$ is readily understood
from kinematics, and the $\mu$-dependence stems from the relative
enhancement and depletion of quarks and antiquarks, respectively (for $\mu>0$).
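As an aside (not part of the analysis above), eq.~\eq{rhoV_LO} is simple enough to evaluate directly; the following Python sketch, with illustrative parameter choices of our own, reproduces the qualitative behaviour shown in fig.~\ref{fig-1}.
\begin{verbatim}
import numpy as np

def rho_V_free(w, k, T, mu, Nc=3):
    # Free (alpha_s = 0) spectral function, cf. eq. (rhoV_LO).
    K2 = w**2 - k**2
    s = 0.0
    for nu in (+mu, -mu):
        num = 1.0 + np.exp((nu - 0.5*(w + k)) / T)
        den = 1.0 + np.exp((nu - 0.5*abs(w - k)) / T)
        s += (T / k) * np.log(num / den)
    return Nc * K2 / (4.0*np.pi) * (s + (1.0 if K2 > 0 else 0.0))

T, k = 1.0, 2.0*np.pi                 # illustrative: k = 2 pi T, units of T
for mu in (0.0, 2.0*T):               # mu = 0 and mu = 2T, cf. fig. 1
    print([round(rho_V_free(w, k, T, mu), 3) for w in (0.5, k, 2*k)])
\end{verbatim}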
\begin{figure}[h]
\centering
\includegraphics[width=9cm,clip]{sketch_proceedings.pdf}
\vspace{-.5cm}
\caption{
Sketch of the
free spectral function ($\alpha_s=0$),
with the $\mu=0$ result (solid) and
the impact of $\mu>0$ also shown (dashed).
The limit $T,\,\mu \to 0$
of eq.~\eq{rhoV_LO} is the vacuum result (dotted).
}
\label{fig-1}
\end{figure}
\section{Weak coupling QCD corrections}
Corrections to \eq{rhoV_LO} may be computed in perturbation theory;
however, the structure of the expansion in $\alpha_s$ depends
on the (parametric) value of $K^2$.
Away from the light cone, $|K^2| \gsim (\pi T)^2\,$, the 2-loop corrections may be calculated
directly and the NLO terms are ${\cal O}(\alpha_s)$ \cite{Laine2013vpa}.
However, for small $K^2$ the free
result \eq{rhoV_LO} is kinematically suppressed (and, in particular, vanishes for $K^2=0$),
implying that the QCD `corrections' actually represent the first non-trivial approximation to the
real photon rate.
For $K^2 \mathrel{\rlap{\lower0.25em\hbox{$\sim$}}\raise0.2em\hbox{$<$}} (gT)^2$ certain diagrams
need to be resummed to obtain a meaningful result, which is motivated on physics grounds
to describe thermal screening \cite{Braaten1990,Kapusta1991,Baier1991}
in addition to the Landau-Pomeranchuk-Migdal (LPM) effect
\cite{agz,agmz,Arnold2001ba,Arnold2001ms}.
These contributions alter the asymptotic dependence on the strong coupling $\alpha_s$
to a leading-logarithm $\rho_{\rm V} \sim \alpha_s \ln (1/\alpha_s) T^2$ as $\alpha_s \to 0\,$.
In ref.~\cite{Ghisoiu2014}, a simple procedure to
interpolate between these two regimes was proposed.
Care is required to avoid double counting when resummation is combined with the strict NLO
expansion.
A fully resummed spectral function can be defined as
\begin{eqnarray}
\rho_{\rm V} |_{\rm NLO}^{\rm resummed}
\; \equiv \;
\rho_{\rm V} |_{\rm 1-loop}^{\rm strict}
+
\rho_{\rm V} |_{\rm 2-loop}^{\rm strict}
+
\big(
\rho_{\rm V} |_{\rm LPM}^{\rm full}
-
\rho_{\rm V} |_{\rm LPM}^{\rm expanded}
\, \big)
\;, \label{resummation}
\end{eqnarray}
where the subtracted term in parentheses represents the 1- and 2-loop parts that are included
in the full LPM result (with certain approximations).
For \eq{resummation} to make sense,
a delicate cancellation must take place around $\omega \simeq k$ so that the result is finite and continuous there~\cite{Jackson2019}.
The details of the LPM `full' and `expanded' contributions will be provided
in sec.~\ref{sec LPM}, where we focus on non-zero baryon density.
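In practice, eq.~\eq{resummation} is pure bookkeeping once the individual pieces are available; a minimal sketch (our own illustration, with the four ingredients supplied as user-defined functions of $\omega$ and $k$) reads
\begin{verbatim}
def rho_V_resummed(w, k, pieces):
    # pieces: dict of callables (w, k) -> float for the strict 1- and
    # 2-loop results and the full/expanded LPM parts; the subtraction
    # removes what the LPM resummation would otherwise double count.
    return (pieces["strict_1loop"](w, k) + pieces["strict_2loop"](w, k)
            + pieces["lpm_full"](w, k) - pieces["lpm_expanded"](w, k))
\end{verbatim}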
In ref.~\cite{Jackson2021}, we studied (with full generality) the types
of interactions that would contribute to strict NLO rates and developed a numerical
routine for {\em any} combination of particles, masses, chemical potentials and
a wide class of matrix elements.
For the dilepton rate, it is preferable to use a more tailored approach
which requires a 2-dimensional phase space integration \cite{Jackson2019a}.
The underlying spectral function can be reduced to a set of elementary
`master integrals' at NLO
(some of which were studied for $\omega >k$ in \cite{Laine2013vpa}),
which are uniformly defined by
\begin{eqnarray}
\rho_{abcde}^{(m,n)}
(K)
&\equiv&
\mathop{\mbox{Im}}\,
\footnotesize\sumint{\, P,Q} \,
\frac{p_0^m \, q_0^n}{
[P^2]^a \, [Q^2]^b \, [R^2]^c \, [L^2]^d \, [V^2]^e
}
\, \Bigg|_{R = K-P-Q\, ,\ L = K-P\, ,\ V = K-Q} \ .
\label{I def}
\end{eqnarray}
Functions of this kind provide a basis onto which the
general 2-loop topology (after carrying out the Dirac algebra, etc.)
can be mapped for self energies with external momentum $K$.
In the sum-integrals \eq{I def},\footnote{%
To be crystal clear, the sum-integrals are (in $4-2\epsilon$ spacetime dimensions)
$$
\sumint{\, P} = \int_{\vec{p}} \, T \sum_{p_0} \;, \qquad
\int_{\vec{p}} = \bigg( \frac{e^\gamma \bar \mu^2}{4\pi} \bigg)^\epsilon
\int\!\! \frac{d^d p}{(2\pi)^d} \, .
$$
}
$P$ and $Q$ are fermionic momenta with
$p_0 = i (2x+1)\pi T + \mu$ and
$q_0 = i (2y+1)\pi T - \mu$ (where $x,y \in \mathbb{Z}$).
(Recall that $K$ is bosonic,
thus $R=K-P-Q$ is also bosonic while $L=K-P$ and $V=K-Q$ are fermionic.)
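As a quick consistency check of these assignments, $r_0 = k_0 - p_0 - q_0 = i\,2\pi(z-x-y-1)T$, so the chemical potentials cancel and $R$ indeed carries a bosonic frequency, whereas $l_0 = k_0 - p_0 = i\,[2(z-x)-1]\pi T - \mu$ and $v_0 = k_0 - q_0 = i\,[2(z-y)-1]\pi T + \mu$ remain fermionic and carry the chemical potentials of $Q$ and $P$, respectively.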
\section{Non-perturbative constraints}
Although real-time rates are difficult to compute from
numerical Monte Carlo simulations,
the $\tau$ dependence of the integrand in eq.~\eq{Pi_munu}
can be obtained from Euclidean lattices.
The imaginary-time
correlation function $G_{\mu\nu}(\tau,k)$
is related to the spectral function from \eq{cut} via the integral transform\footnote{%
In practice, all the spectral functions studied here are antisymmetric in $\omega \to - \omega$
and therefore only the first term on the right hand side of eq.~\eq{G} contributes.
}
\begin{eqnarray}
G_{\mu\nu}(\tau,k) \ =\
\int_0^\infty \frac{{\rm d}\omega}{2\pi}
\!\!\! &\Bigg\{& \!\!\!
\Big(\, \rho_{\mu\nu} (\omega,k) - \rho_{\mu\nu} (-\omega,k) \,\Big)
\frac{ \cosh\big[\omega(\frac1{2T}-\tau)\big] }{\sinh[\frac1{2T} \omega]}
\nonumber \\
\!\!\! &+& \!\!\!
\Big(\, \rho_{\mu\nu} (\omega,k) + \rho_{\mu\nu} (-\omega,k) \,\Big)
\frac{ \sinh\big[\omega(\frac1{2T}-\tau)\big] }{\sinh[\frac1{2T} \omega]} \ \,
\Bigg\} \ .
\label{G}
\end{eqnarray}
It is a formidable task to invert \eq{G} and thus obtain the spectral function
directly from a finite set of sampling points~\cite{inversion,mem}.
Rather than using \eq{G} to obtain $\rho_{\rm V}$ (from $G_{\rm V}$),
another spectral function turns out to be convenient:
\begin{eqnarray}
\rho_{\rm H}
&\equiv& \rho_{\rm V} + 3 \frac{K^2}{k^2} \rho_{00} \, ,
\label{rhoH}
\end{eqnarray}
which is highly suppressed in the ultraviolet
and exactly vanishes in vacuum.
This makes the corresponding Euclidean correlator more sensitive to the
infrared physics of interest \cite{brandt_rhoH}.
We point out that $\rho_{\rm V}$ and $\rho_{\rm H}$ agree on the light cone,
but differ considerably for $\omega \neq k\,$.
The spectral function \eq{rhoH} satisfies a
sum rule, $\int_0^\infty {\rm d} \omega \, \omega \, \rho_{\rm H}(\omega,k) = 0\,$,
which supplies additional restrictions on any inversion candidates.
Computing both $\rho_{\rm V}$ and $\rho_{\rm H}$
amounts to determining separately
the transverse and longitudinal components, thus
specifying the entire tensor $\rho_{\mu\nu}$.
One may also use \eq{G} to compute $G_{\mu\nu}$ from
models of the spectral function.
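As an illustration of this last point, the transform \eq{G} can be evaluated with a simple quadrature; the Python sketch below (a toy example of our own, not the analysis code behind fig.~\ref{fig-2}) keeps only the first term of eq.~\eq{G}, valid for an antisymmetric spectral function, and feeds it a toy input given by the smooth $k\to 0$, $\mu=0$ limit of eq.~\eq{rhoV_LO}, $\rho(\omega)=\frac{N_{\rm c}\,\omega^2}{4\pi}\tanh\frac{\omega}{4T}$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def G_from_rho(tau, T, rho, wmax=200.0):
    # First term of eq. (G); for rho(-w) = -rho(w) it is the whole answer.
    kernel = lambda w: np.cosh(w*(0.5/T - tau)) / np.sinh(0.5*w/T)
    val, _ = quad(lambda w: rho(w)/np.pi * kernel(w), 1e-6, wmax, limit=400)
    return val

T, Nc = 1.0, 3
rho_toy = lambda w: Nc * w**2 * np.tanh(0.25*w/T) / (4.0*np.pi)
print([G_from_rho(tau, T, rho_toy) for tau in (0.2/T, 0.3/T, 0.5/T)])
\end{verbatim}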
In fig.~\ref{fig-2} the perturbative results for $\rho_{\rm H}$ and $G_{\rm H}$
are shown, compared with
continuum extrapolated lattice data for quenched QCD from ref.~\cite{constraints}.
The various curves show different choices of the scale in the running coupling, $Q$,
as well as the effect of including the NLO part of the LPM computation \cite{lpm_nlo}.
(Further details may be found in refs.~\cite{Jackson2019,phd}.)
\begin{figure}[t]
\centering
\includegraphics[width=6.2cm,clip]{rhoH_nf0_T11.pdf}
\!\!\!
\includegraphics[width=6.2cm,clip]{GH_nf0_T11.pdf}
\vspace{-.3cm}
\caption{
Results for $\rho_{\rm H}$ (left) and the corresponding Euclidean correlation function,
calculated from eq.~\eq{G},
(right) at $T=1.1T_c$ for $n_{\!f}=0\,$.
Similar plots for $\rho_{\rm V}$ (and $G_{\rm V}$) may be found in ref.~\cite{Jackson2019}.
}
\label{fig-2}
\end{figure}
\section{Beyond leading-order: strict NLO}
\label{sec NLO}
The strict NLO result for $\rho_{\rm V}$ can be
expressed as a linear combination of the master integrals,
defined by eq.~\eq{I def}.
Evaluating the diagrams, we obtain the result
\begin{eqnarray}
\left.\rho_{\rm V}\right|_{\rm NLO}^{(g^2)} & = &
\label{masters, mn}\\
8(1-\epsilon) g^2 C_{_{\rm F}} N_{\rm c}
\!\!\! & \bigg\{ &
(1-\epsilon)K^2
\Big(
\rho_{11020}^{(0,0)} + \rho_{11002}^{(0,0)}
- \rho_{10120}^{(0,0)} - \rho_{01102}^{(0,0)} \Big)
+ \rho_{11010}^{(0,0)}
+ \rho_{11001}^{(0,0)}
\nonumber \\
&+& 2\epsilon \, \rho_{11100}^{(0,0)}
\ +\ 2 \frac{K^2}{k^2} \rho_{11011}^{(1,1)}
\ -\ \tfrac12 K^2
\Big(\, \frac{\omega^2}{k^2} + 3+2\epsilon \,\Big)
\rho_{11011}^{(0,0)}
\nonumber \\
&-&
(1-\epsilon) \Big( \rho_{1111(-1)}^{(0,0)} + \rho_{111(-1)1}^{(0,0)} \Big)
+ 2K^2 \Big( \rho_{11110}^{(0,0)} + \rho_{11101}^{(0,0)} \Big)
- K^4 \rho_{11111}^{(0,0)}
\ \bigg\} \ ,\nonumber
\end{eqnarray}
where $C_{_{\rm F}} = (N_{\rm c}^2-1)/(2N_{\rm c})\,$.
Above, the limit $\epsilon \to 0$ is understood to be taken only at the end, because
some of the master integrals have $1/\epsilon$ contributions stemming from
their vacuum parts, so the explicit $\epsilon$-dependent prefactors must be retained.
Note that the spectral function is symmetric in the simultaneous
exchanges:
$a \leftrightarrow b\,$, $d \leftrightarrow e$ and $m \leftrightarrow n\,$
for the master integrals.
Consequently, the result will be unchanged by $\mu \to - \mu\,$.
In the case where $\mu = 0\,$, the additional symmetry
$\rho_{abcde}^{(m,n)}
\to \rho_{baced}^{(n,m)}$ leads to the same
decomposition as in refs.~\cite{Jackson2019,Jackson2019a}.
An important cross-check of the result \eq{masters, mn} (besides the obvious ones, such as gauge invariance)
can be found
within the {\em hard thermal loop} (HTL) approximation, for which
the master integrals can be computed in closed form.
The HTL limit is given by the small-$K$ behaviour of
$\Pi_{\mu\nu}$ and the 1-loop result is well known \cite{basics}.
Recently, the 2-loop HTL photon self energy was
computed for a hot and dense QED plasma in ref.~\cite{Gorda2022}.
We restate the outcome here,
in a way that is compatible with eq.~\eq{Pi_munu},
\begin{eqnarray}
\Pi_{\rm V}^{\rm HTL}
\!\!\! &=& \!\!
- \, \bigg( \tfrac13 T^2 + \frac{\mu^2}{\pi^2} \bigg)
+ \frac{e^2}{8\pi^2} \bigg( T^2 + \frac{\mu^2}{\pi^2} \bigg)
\bigg( 1 + \frac{\omega}{k}
L
\bigg)
+ \frac{e^2}{4\pi^2} \, \frac{\mu^2}{\pi^2} \bigg( 1 - \frac{\omega^2}{k^2} \bigg)
\bigg( 1 - \frac{\omega}{2k}
L
\bigg)^2 \ ,
\nonumber \\
\label{htl}
\end{eqnarray}
where $L = \ln \frac{\omega + k + i0^+}{\omega - k + i0^+}\,$.
This result can be transcribed to the present
case by replacing ${e^2 \to g^2 C_{_{\rm F}} N_{\rm c}}$
in eq.~\eq{htl},
so that the resulting spectral function should coincide with
the strict NLO version of $\rho_{\rm V}$ \eq{masters, mn} assuming $\omega$ and $k$ are small.
The agreement between the two approaches
has been verified both analytically and numerically.
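For numerical checks of this kind it is convenient to evaluate eq.~\eq{htl} directly, replacing the $i0^+$ prescription in $L$ by a small numerical regulator; a sketch (our own transcription, not the code used for the actual comparison) reads
\begin{verbatim}
import numpy as np

def Pi_V_HTL(w, k, T, mu, e2):
    # Direct transcription of eq. (htl); the QCD case follows from
    # e2 -> g^2 CF Nc, and the spectral function is the imaginary part.
    L = np.log((w + k + 1e-8j) / (w - k + 1e-8j))
    t0 = -(T**2/3.0 + mu**2/np.pi**2)
    t1 = e2/(8*np.pi**2) * (T**2 + mu**2/np.pi**2) * (1 + w/k*L)
    t2 = e2/(4*np.pi**2) * (mu**2/np.pi**2) * (1 - w**2/k**2) \
         * (1 - w/(2*k)*L)**2
    return t0 + t1 + t2

rho_V_HTL = lambda w, k, T, mu, e2: Pi_V_HTL(w, k, T, mu, e2).imag
\end{verbatim}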
Worth mentioning explicitly is the HTL limit for the
$\rho_{11011}^{(1,1)}$ master integral.\footnote{%
If $\mu=0\,$, one can prove that
$\rho_{11011}^{(1,1)} = \frac14 \omega^2 \rho_{11011}^{(0,0)}\,$
which vanishes in the HTL approximation when $\omega$ is soft.
}
One may readily check that
\begin{eqnarray}
\sumint{P,Q}
\frac{p_0 q_0}{P^2Q^2(K-P)^2(K-Q)^2}
& \approx &
-\,
\frac{\mu^2}{4(2\pi)^4}
\bigg(\, 1 - \frac{\omega}{2k} L
\bigg)^2
\, .
\end{eqnarray}
This term appears when the strict 2-loop self energy,
$\Pi_{\rm V}|_{\rm NLO}^{(g^2)}$, is evaluated
and is entirely responsible for the last term in \eq{htl},
which contains a new structure involving a squared logarithm (only
present at finite density).
\section{Beyond leading-order: LPM regime}
\label{sec LPM}
The master integrals
$\rho_{1111(-1)}^{(0,0)}$ and $\rho_{111(-1)1}^{(0,0)}$
from eq.~\eq{masters, mn} each contain a log-divergence as $K^2\to 0^\pm$ \cite{phd}.
This is a signal that resummation is required, and the LPM framework
serves that purpose.
Two important scales enter the problem: the
Debye mass $m_D$ and the asymptotic quark mass $m_\infty$,
both of which are modified by the chemical potential, {\em viz.}
\begin{eqnarray}
m_D^2 \ \equiv \
g^2 \bigg[
\Big(\tfrac12 n_{\!f} + N_{\rm c} \Big)
\frac{T^2}3
+
n_{\!f} \, \frac{\mu^2}{2\pi^2} \bigg] \;,
& &
m_\infty^2 \ \equiv \
g^2 \, \frac{C_{_{\rm F}}}4 \bigg( T^2 + \frac{\mu^2}{\pi^2} \bigg) \; ,
\end{eqnarray}
where $n_{\!f}$ is the number of light quark flavours.
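For orientation, both scales are trivial to tabulate (an illustrative helper of our own, with $T$ and $\mu$ in the same energy units):
\begin{verbatim}
import numpy as np

def thermal_masses(T, mu, alpha_s, nf=3, Nc=3):
    # Debye and asymptotic quark masses as defined above.
    g2 = 4.0*np.pi*alpha_s
    CF = (Nc**2 - 1) / (2.0*Nc)
    mD2 = g2*((0.5*nf + Nc)*T**2/3.0 + nf*mu**2/(2.0*np.pi**2))
    minf2 = g2*CF/4.0*(T**2 + mu**2/np.pi**2)
    return np.sqrt(mD2), np.sqrt(minf2)

# e.g. thermal_masses(T=0.3, mu=0.6, alpha_s=0.3)   # GeV units, mu = 2T
\end{verbatim}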
Following ref.~\cite{agmz} in impact parameter space,
the result can be expressed as\footnote{%
A formulation with better asymptotics (for large $M$) was
proposed in ref.~\cite{lpm_born_interp}, although we do not use that here.
}
\begin{eqnarray}
\left. \rho_{\rm V} \right|^{\rm full}_{\rm LPM}
& = &
- \frac{N_{\rm c}}{\pi}
\int_{-\infty}^{\infty} \! {\rm d}p \,
\bigl[ 1-n_{\rm F}^{ }(p-\mu)-n_{\rm F}^{ }(\omega-p+\mu) \bigr]
\\ \nonumber
& \times &
\lim_{{\bm b}\to {\bm 0}} \, \mathbb{P} \,
\biggl\{
\frac{K^2
}{\omega^2}
\mathop{\mbox{Im}} [g({\bm b})]
+
\frac12 \bigg[ \frac1{p^2} + \frac1{(\omega-p)^2} \bigg]
\mathop{\mbox{Im}} [\nabla^{ }_{\perp}\cdot {\bm f}({\bm b})]
\biggr\}
\;, \label{final_from_LPM} \hspace*{5mm}
\end{eqnarray}
where
$n_{\rm F}$ is the Fermi-Dirac distribution,
$\mathbb{P}$ stands for the Cauchy principal value and
$g$ and ${\bm f}$ are Green's functions satisfying
\begin{equation}
\bigl( \hat{H} + i 0^+_{ }\bigr) g({\bm b}) \; = \; \delta^{(2)}({\bm b})
\;, \quad
\bigl( \hat{H} + i 0^+_{ }\bigr) {\bm f}({\bm b})
\; = \; -\nabla^{ }_{\perp} \delta^{(2)}({\bm b})
\;.
\end{equation}
The operator $\hat{H}$ acts in the transverse plane,
\begin{equation}
\hat{H} =
\frac{\omega^{ }
( M_{\rm eff}^2 - \nabla_\perp^2 )
}{2p(\omega^{ }- p)}
+ i g^2 C_{_{\rm F}}^{ } T
\int\! \frac{{\rm d}^2 \vec{q}}{(2\pi)^2}
\bigl( 1 - e^{i {\bm q}\cdot{\bm b}}\bigr)
\biggl(
\frac{1}{q^2} - \frac{1}{q^2 + m_D^2}
\biggr)
\;, \label{hatH}
\end{equation}
where $M_{\rm eff}^2 \equiv m_\infty^2 - \frac{p(\omega-p)}{\omega^2}M^2\,$.
In order to combine the LPM and NLO results,
we also need to naively expand the LPM results up to ${\cal O}(g^2)$
and remove double counting {\em \`{a} la} eq.~\eq{resummation}.
At zeroth order in $g$,
the expression becomes
\begin{eqnarray}
\rho_{\rm V} \big|_{\rm LPM}^{(g^0)}
\; = \;
\frac{ N_{\rm c} M^2 }{4 \pi }
\bigg\{ \,
\sum_{\nu = \pm \mu}
\frac{T}{\omega} \ln \bigg[ \frac{1 + e^{(\nu-\omega)/T}}{1+ e^{\nu/T}} \bigg] + \Theta \big( K^2 \big) \,
\bigg\}
\;, \label{LPM_LO}
\end{eqnarray}
which matches \eq{rhoV_LO} for $\omega \simeq k\,$.
The corrections of ${\cal O}(g^2)$
are proportional to $m_\infty^2\,$.
As in the $\mu=0$ case \cite{Jackson2019},
the spectral function $\rho_{\rm V}$ contains a
log-divergence plus a finite part:
\begin{eqnarray}
\rho_{\rm V} \big|^{(g^2)}_{\rm LPM}
& = &
\frac{N_{\rm c}^{ } m_\infty^2 }{4\pi}
\Biggl\{
\biggl[ 1 - n_{\rm F}(\omega-\mu) - n_{\rm F}(\omega+\mu) \biggr]
\biggl( \ln\biggl| \frac{m_\infty^2}{M^2} \biggr| - 1 \biggr)
\label{LPM T}
+ {\cal F}(\omega) \, \Biggr\}
\label{log div}
\end{eqnarray}
where
\begin{eqnarray}
{\cal F}(\omega) &\equiv&
\biggl[
\Theta(K^2) \!
\int_0^{\omega}\! {\rm d}p
-
\Theta(-K^2)
\biggl( \, \int_{-\infty}^{0} \! + \int_{\omega}^{\infty} \,\biggr)
\, {\rm d}p
\biggl]
\bigg\{
2\,\frac{1 - n_{\rm F}(p-\mu) - n_{\rm F}(\omega-p+\mu)}{\omega}
\, \nonumber \\
&-&
\frac{n_{\rm F}(-\mu) +
n_{\rm F}(\omega+\mu) -
n_{\rm F}(p-\mu) -
n_{\rm F}(\omega-p+\mu)}{p}
\nonumber \\
& - &
\frac{
n_{\rm F}(\omega-\mu) +
n_{\rm F}(\mu) -
n_{\rm F}(p-\mu) -
n_{\rm F}(\omega-p+\mu)
}{\omega-p}
\ \bigg\} \ .
\end{eqnarray}
The log-divergence in \eq{log div} exactly matches that from $\rho_{\rm V}|_{\rm NLO}^{(g^2)}\,$,
and the full resummed expression is finite and continuous across the light cone.
This is illustrated in fig.~\ref{fig-3} at $\mu=2T$
for fixed coupling $\alpha_s=0.3\,$.
(We have also verified this cancellation analytically.)
Although not visible from fig.~\ref{fig-3}, the presence of $\mu$
enhances the LPM rate due to a larger $m_\infty$ which sets
the overall scale.
This enhancement counteracts the suppressing effect of $\mu$ in the
1-loop spectral function \eq{rhoV_LO}.
\begin{figure}
\centering
\sidecaption
\includegraphics[width=7.5cm,clip]{decomp.pdf}
\quad
\caption{
The vector channel spectral function, plotted as a function of $\omega$ for $k=2\pi T$
and with $\alpha_s = 0.3\,$.
This illustrates both the strict loop expansion from sec.~\ref{sec NLO} (dashed)
and the relevant LPM part from sec.~\ref{sec LPM} (dotted)
where the log-divergence for $\omega \approx k$ from \eq{log div} is evident.
Crucially, the prescription of eq.~\eq{resummation} gives a result (solid) that
is both finite and continuous at the light cone.
(For comparison, the free result is also shown.)
}
\label{fig-3}
\end{figure}
\section{Outlook}
The emission rate of thermal photons and dileptons can be derived from the
same underlying spectral function $\rho_{\rm V}$, which encodes all orders in $\alpha_s$.
After a long history of computing the perturbative corrections
in various limits,
there is now sufficient information to interpolate between these
regimes as suggested by ref.~\cite{Ghisoiu2014}.
The utility of having a model of the spectral function for all $\omega$
is that it allows for comparison with lattice data at non-zero momentum.
One may also use the perturbative result to create `mock data' for testing
methods of reconstructing $\rho_{\rm V}$ from \eq{G}, e.g. the Backus-Gilbert method.
A natural next step is to implement the thermal rates calculated from \eq{resummation}
in hydrodynamic simulations of relativistic heavy ion collisions~\cite{jc}.
(Early studies in this direction can be found in ref.~\cite{Burnier2015}.)
For example, the fully differential dilepton rate, with
$\alpha_{\rm em} = e^2/(4\pi)$ and for $n_{\!f}=3\,$ reads
\begin{eqnarray}
\frac{{\rm d} \Gamma_{\ell\bar\ell}\,(\omega,k)}
{{\rm d}\omega\, {\rm d}^3 \vec{k}}
&=&
2\frac{\alpha_{\rm em}^2 n_{\rm B}^{ } (\omega) }
{9 \pi^3 M^2} \;
B \bigg( \frac{m_\ell^2}{M^2} \bigg) \;
\rho_{\rm V}^{ }\big(\omega,{k}\big)
\; ,\label{dilepton}
\end{eqnarray}
where $n_{\rm B}$ is the Bose distribution function and
the phase space factor is $B(x) \equiv (1 + 2x) \Theta( 1 -4x ) \sqrt{ 1 -4x }\,$.
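For completeness, eq.~\eq{dilepton} is straightforward to transcribe into code; the sketch below (our own illustration, with the electron mass as a default and $\rho_{\rm V}$ supplied by the user) returns the differential rate per unit volume discussed above.
\begin{verbatim}
import numpy as np

def dGamma_dilepton(w, k, T, rho_V, alpha_em=1/137.036, m_l=0.000511):
    # Differential dilepton rate of eq. (dilepton), nf = 3 flavours;
    # rho_V is a user-supplied spectral function (w, k) -> float.
    M2 = w**2 - k**2
    if M2 <= 4.0*m_l**2:
        return 0.0                  # below the pair threshold
    x = m_l**2 / M2
    B = (1.0 + 2.0*x)*np.sqrt(1.0 - 4.0*x)
    nB = 1.0/(np.exp(w/T) - 1.0)
    return 2.0*alpha_em**2*nB/(9.0*np.pi**3*M2) * B * rho_V(w, k)
\end{verbatim}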
The $M$-distribution that follows is shown in fig.~\ref{fig-4} for several temperatures
(at zero net baryon density) which are expected to be probed in central collisions
at the LHC and RHIC facilities.
Since ${\rm d}\Gamma_{\ell \bar \ell}$
represents the rate per unit volume, the result shown still needs to
be convoluted with the spacetime evolution of the fireball.
This task is left for future work.
\begin{figure}[h]
\centering
\sidecaption
\includegraphics[width=7.5cm,clip]{comparison.pdf}
\quad
\caption{
Invariant mass distribution of the differential rate to produce an
$e^+e^-$ pair, ${\rm d}\Gamma_{ee} \equiv {\rm d}N/{\rm d}^4X\,$,
computed from eq.~\eq{dilepton} after converting to hyperbolic coordinates and integrating over the
azimuthal angle and $k_\perp$ at midrapidity.
We show the resummed NLO result from \eq{resummation} (solid) and the LO result from \eq{rhoV_LO} (dotted),
at the temperatures $T = \{ 180,300,500 \}$~MeV.
The running of $\alpha_s$ has been implemented as described in ref.~\cite{Jackson2019}.
}
\label{fig-4}
\end{figure}
\section*{Acknowledgements}
Let me express my gratitude to
D.~Bala,
J.~Churchill,
C.~Gale,
J.~Ghiglieri,
S.~Jeon,
O.~Kaczmarek and
M.~Laine
for many helpful discussions and their ongoing collaboration
on several aspects of this topic.
Furthermore, I thank J.~Ghiglieri for providing the
LPM$^{\rm NLO}$ data from ref.~\cite{lpm_nlo},
and D.~Bala and O.~Kaczmarek for providing
the quenched lattice data shown in fig.~\ref{fig-2}.
I am also grateful to T.~Gorda, K.~Sepp\"{a}nen and R.~Paatelainen
for their assistance in cross-checking these results in
the HTL limit \cite{Gorda2022}.
This work was supported by the U.S. Department of Energy (DOE) under grant No.~DE-FG02-00ER41132.
\section{Introduction}
Ring galaxies are among the most fascinating objects in the Universe. The Cartwheel galaxy is surely the most famous of them, and also the biggest (with its optical diameter of $\sim{}$50-60 kpc) and the most studied. Cartwheel has been thoroughly observed in almost every band: H$\alpha{}$ (Higdon 1995) and optical (Theys \& Spiegel 1976, Fosbury \& Hawarden 1977) images, red continuum (Higdon 1995), radio line (Higdon 1996) and continuum (Higdon 1996; Mayya et al. 2005), near- (Marcum, Appleton \& Higdon 1992) and far-infrared (Appleton \& Struck-Marcell 1987a), line spectroscopy (Fosbury \& Hawarden 1977) and X-ray (Wolter, Trinchieri \&{} Iovino 1999; Gao et al. 2003; Wolter \& Trinchieri 2004; Wolter, Trinchieri \& Colpi 2006).
Cartwheel exhibits a double-ringed shape, with some transverse 'spokes' (Theys \& Spiegel 1976, Fosbury \& Hawarden 1977; Higdon 1995), which have been detected in only a few of the $\lower.5ex\hbox{\ltsima}{}300$ known ring galaxies (Arp \& Madore 1987; Higdon 1996). The outer ring is rich in HII regions, especially in its southern quadrant, and dominates the H$\alpha{}$ emission (Higdon 1995). This implies that Cartwheel is currently undergoing an intense epoch of star formation (SF, with SF rate $\sim{}20-30\,{}M_\odot{}$ yr$^{-1}$; Marston \& Appleton 1995; Mayya et al. 2005), almost entirely confined to the outer ring. The inner ring, nucleus and spokes are nearly devoid of gas and dominated by red continuum emission, indicating a relatively old, low- and intermediate-mass stellar population (Higdon 1995, 1996; Mayya et al. 2005).
Cartwheel is located in a small group of 4 galaxies. All of its 3 companions (known as G1, G2 and G3) are less massive than Cartwheel, and host less than 20 per cent of the total gas mass in the group.
The analysis of X-ray data shows another intriguing peculiarity: most of the point sources detected by $Chandra$ are in the outer ring, and particularly concentrated in the southern quadrant (Gao et al. 2003; Wolter \& Trinchieri 2004). According to Wolter \& Trinchieri (2004), 13 out of 17 sources associated with Cartwheel are in the outer ring, the remaining 4 being close to the inner rim of the ring or to the optical spokes\footnote{The total number of point X-ray sources in the Cartwheel group is 24 (Wolter \& Trinchieri 2004), but 6 of them are associated with the companion galaxies G1 and G2, whereas 1 of them is probably background/foreground contamination.}. Gao et al. (2003) noted that all the five strongest H$\alpha{}$ knots in the ring are associated with an X-ray source, indicating a possible correlation between X-ray sources and young star-forming regions. Furthermore, most of the observed sources have isotropic X-ray luminosity $L_X\lower.5ex\hbox{\gtsima}{}10^{39}$ erg s$^{-1}$, belonging to the category of ultraluminous X-ray sources (ULXs). Given the distance of Cartwheel ($\sim{}124$ Mpc for a Hubble constant $H_0=73$ km s$^{-1}$ Mpc$^{-1}$), the data are incomplete for $L_X\lower.5ex\hbox{\ltsima}{}5\times{}10^{38}$ erg s$^{-1}$, and the faintest sources in the sample have $L_X\sim{}10^{38}$ erg s$^{-1}$. Thus, almost all the X-ray sources detected in Cartwheel are close to the ULX range.
Many theoretical studies, both analytical (Lynds \& Toomre 1976; Theys \& Spiegel 1976; Appleton \& Struck-Marcell 1996) and numerical (Theys \& Spiegel 1976; Appleton \& Struck-Marcell 1987a, 1987b; Hernquist \& Weil 1993; Mihos \& Hernquist 1994; Struck 1997; Horellou \& Combes 2001; Griv 2005) were aimed to explain the origin of Cartwheel and its observational features. In the light of these papers, the origin of propagating rings in Cartwheel and similar galaxies can be explained by galaxy collisions with small impact parameter (Theys \& Spiegel 1976; Appleton \& Struck-Marcell 1987a, 1987b; Hernquist \& Weil 1993; Mihos \& Hernquist 1994; Struck 1997; Horellou \& Combes 2001). Among the companions of Cartwheel, both G1 and G3 are good candidates for this interaction, having a small projected impact parameter, being not too far from Cartwheel, and showing a disturbed distribution of neutral hydrogen (Higdon 1996).
An alternative, less investigated model explains the rings with disc instabilities (Griv 2005).
Models of galaxy collisions explain quite well most of Cartwheel's properties.
However, none of the previous theoretical studies has investigated the nature of the X-ray sources, and especially of the ULXs, observed in Cartwheel.
The nature of the ULXs is still not understood. It has been suggested that they are high-mass X-ray binaries (HMXBs) powered by stellar mass black holes (BHs) with anisotropic X-ray emission (King et al. 2001; Grimm, Gilfanov \& Sunyaev 2003; King 2006) or with super-Eddington accretion rate (e.g. King \&{} Pounds 2003; Socrates \& Davis 2006; Poutanen et al. 2007). However, some ULXs, especially the brightest ones ($L_X\lower.5ex\hbox{\gtsima}{}10^{40}$ erg s$^{-1}$), show characteristics which are difficult to reconcile with the hypothesis of beamed emission, such as the presence of an isotropically ionized nebula (e.g. Pakull \& Mirioni 2003; Kaaret, Ward \& Zezas 2004) or of quasi periodic oscillations (e.g. Strohmayer \& Mushotzky 2003). Then, it has been proposed that some ULXs (or at least the brightest among them; Miller, Fabian \& Miller 2004) might be associated with intermediate-mass black holes (IMBHs), i.e. BHs with mass $20\lower.5ex\hbox{\ltsima}{}m_{\rm BH}/M_\odot{}\lower.5ex\hbox{\ltsima}{}10^{5}$, accreting either gas or companion stars (Miller et al. 2004; see Mushotzky 2004 and Colbert \& Miller 2005 for a review). On the other hand, most of ULXs can be explained with the properties of stellar mass BHs (see Roberts 2007 for a review and references therein).
In this paper we present a new, refined $N-$body/SPH model of the Cartwheel galaxy, which reproduces the main features of Cartwheel.
The aim of this paper is to use the $N-$body/SPH model in order to check whether the IMBH hypothesis is viable to explain all or a part of the X-ray sources observed in Cartwheel. In fact, in the last few years the hypothesis that most of the ULXs are powered by IMBHs has become increasingly difficult to support, as the observational features of the majority of ULXs are consistent with accreting stellar mass BHs (Roberts 2007). It would be interesting to see whether the dynamical simulations also agree with this conclusion\footnote{We will not consider other possible scenarios (e.g. the production of ULXs by beamed emission in HMXBs), due to intrinsic limits of $N-$body methods.}.
In Section 2 we describe our simulations. In Section 3 we discuss the evolution of our models and the dynamics of either halo or disc IMBHs. In Section 4 we investigate the possibility that the X-ray sources in Cartwheel are powered by IMBHs accreting gas or stars. In the latter case we consider both the accretion from old stars (which produce only transient sources) and from young stars, formed after the galaxy collision (which can also lead to persistent sources). We also investigate the hypothesis that new disc IMBHs are formed in very young clusters after the galaxy collision. Our findings are summarized in Section 5.
\section{The $N$-body model}
In order to form a 'Cartwheel-like' galaxy, we simulate the encounter between a disc galaxy (in the following, we will call it 'progenitor' of Cartwheel) and a smaller mass companion (in the following 'intruder'), integrating the evolution of the system over $\sim{}1$ Gyr.
The simulations have been carried out running the parallel $N$-body/SPH code GASOLINE (Wadsley, Stadel \& Quinn 2004) on the cluster $zBox2$\footnote{\tt http://www-theorie.physik.unizh.ch/$\sim{}$dpotter/zbox/} at the University of Z\"urich.
\subsection{The progenitor of Cartwheel}
The $N$-body model adopted to simulate the 'progenitor' of the Cartwheel galaxy is analogous to that described in Mapelli, Ferrara \& Rea (2006) and in Mapelli (2007), derived from the recipes by Hernquist (1993). The progenitor galaxy is represented by four different components: a Navarro, Frenk \& White (1996, hereafter NFW) halo, an Hernquist bulge (absent in some runs), an exponential stellar disc and an exponential gaseous disc. Halo, bulge, disc and gas velocities are generated using the Gaussian approximation (Hernquist 1993), as described in Mapelli et al. (2006).
\subsubsection{The halo}
We adopt a NFW halo, according to the formula (NFW; Moore et al. 1999)
\begin{equation}\label{eq:eq1}
\rho_h(r)=\frac{\rho_s}{(r/r_s)^\gamma{}\,{}[1+(r/r_s)^\alpha{}]^{(\beta{}-\gamma{})/\alpha{}}},
\end{equation}
where we choose $(\alpha{},\beta{}, \gamma{})=(1,3,1)$, and
$\rho{}_s=\rho_{crit}\,{}\delta_c$, $\rho_{crit}$ being the critical
density of the Universe and
\begin{equation}\label{eq:eq2}
\delta_c=\frac{200}{3}\frac{c^3}{\ln{(1+c)}-[c/(1+c)]},
\end{equation}
where $c$ is the concentration parameter and $r_s$ is the halo scale radius, defined by
$r_s=R_{200}/c$. $R_{200}$ is the radius encompassing a mean overdensity of 200 with respect to the
background density of the Universe.
$R_{200}$ can be calculated as $R_{200}=V_{200}/[10\,{}H(z)]$, where $V_{200}$ is the circular velocity at the
virial radius and $H(z)$ is the Hubble parameter at the redshift $z$.
As the measured mass of Cartwheel within the virial radius is $\sim{}3-6\times{}10^{11}M_\odot{}$ (Higdon 1996; Horellou \& Combes 2001), we adopt $V_{200}=100$ km s$^{-1}$, obtaining $R_{200}=140$ kpc. Adopting a fiducial concentration $c=12$, we obtain $r_s=12$ kpc.
We performed check simulations varying some of these parameters (e.g. $c=5, 10$) without observing any significant difference in the post-encounter evolution.
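As a quick numerical sanity check of these numbers (a sketch of our own, taking $H(z)\simeq{}H_0=73$ km s$^{-1}$ Mpc$^{-1}$ at low redshift):
\begin{verbatim}
# R200 = V200 / [10 H(z)] and r_s = R200 / c, in convenient units.
V200 = 100.0                        # km/s
H0 = 73.0                           # km/s/Mpc
R200_kpc = V200 / (10.0*H0) * 1e3   # ~137 kpc, i.e. ~140 kpc as quoted
c_halo = 12.0
r_s_kpc = R200_kpc / c_halo         # ~11.4 kpc, i.e. ~12 kpc as quoted
print(R200_kpc, r_s_kpc)
\end{verbatim}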
\subsubsection{The bulge}
We adopt a spherical Hernquist bulge
\begin{equation}\label{eq:eq4}
\rho_b(r)=\frac{M_b\,{}a}{2\pi}\,{}\frac{1}{r\,{}(a+r)^3},
\end{equation}
where $M_b$ and $a$ are the bulge mass and scale length (see Section 2.4 and Table 1).
\subsubsection{The disc}
The stellar disc profile is (Hernquist 1993):
\begin{equation}\label{eq:eq3}
\rho_d(R,z)=\frac{M_d}{4\pi R_d^2\,{}z_0}\,{}e^{-R/R_d}\,{}\textrm{sech}^2(z/z_0),
\end{equation}
where $M_d$ is the disc mass, $R_d$ the disc scale length and $z_0$ the disc scale height. We adopt different values of these parameters for different runs (see Section 2.4 and Table 1).
\subsubsection{The gaseous disc}
For the gas component we adopt the same exponential profile as in equation~(\ref{eq:eq3}), with different values for the total mass $M_g$, the scale length $R_g$ and scale height $z_g$ (see Section 2.4 and Table 1).
The gas is allowed to cool down to a temperature of $2\times{}10^4$ K. In some runs we also switch on SF, according to the Schmidt law (Katz 1992; Wadsley et al. 2004).
\subsection{The intruder}
We model the intruder as a NFW halo with total mass $3.2\times{}10^{11}\,{}M_\odot{}$, i.e. a fraction of $\sim{}0.5-0.6$ (depending on the run) of the total mass of the Cartwheel progenitor. We assume $R_{200}=30$ kpc and $c=12$. Less concentrated intruders produce much less developed rings.
We also performed runs where the intruder has an exponential gas disc of $2\times{}10^9\,{}M_\odot{}$ (consistent with the gas mass of G3, one of the most probable intruders).
We put the intruder on an orbit with a null (Hernquist \& Weil 1993) or small (Horellou \& Combes 2001) impact parameter (see Table 1), with a centre-of-mass velocity, relative to Cartwheel, close to the escape velocity.
\subsection{Intermediate-mass black holes}
Not only the properties but also the very existence of IMBHs is still uncertain (see van der Marel 2004 for a review). Thus, we know nothing from the observations about the positions, the velocities and the mass function of IMBHs in galaxies.
Different theoretical models predict different properties for IMBHs. We will focus on two of these models: (i) the formation of IMBHs from massive metal-free\footnote{As far as we know, IMBHs cannot be formed as remnants of recent (i.e. relatively high metallicity) SF. In fact, population I
massive stars lose a significant fraction of their mass due to stellar winds before the
formation of the BH (Heger et al. 2003). On the contrary, BHs with mass $> 20\,{}M_\odot{}$ might have formed as remnants of population III stars, which, being metal-free, are not significantly affected
by mass loss (Heger \&{} Woosley 2002; Heger et al. 2003).} stars (Heger \& Woosley 2002); (ii) the runaway collapse of stars in young clusters (Portegies Zwart \& McMillan 2002).
In the former scenario, IMBHs are the remnants of very massive ($30-140\,{}M_\odot{}$ or $\lower.5ex\hbox{\gtsima}{}260\,{}M_\odot{}$, Heger \& Woosley 2002) population III stars. Then, they are expected to form at high redshift ($\approx{}10-25$) in mini-haloes. Diemand, Madau \& Moore (2005, hereafter DMM) showed that objects formed in high density peaks at high redshift appear more concentrated than more recent structures. According to DMM, the density profile of objects formed in a $\nu{}\,{}\sigma{}$ peak can be parametrized as
\begin{equation}\label{eq:DMM}
\rho{}_{\rm BH}(r)=\frac{\rho_s}{(r/r_\nu)^\gamma{}\,{}[1+(r/r_\nu{})^\alpha{}]^{(\beta{}_\nu{}-\gamma{})/\alpha{}}},
\end{equation}
where $\alpha{}$ and $\gamma{}$ are the same as defined in equation~(\ref{eq:eq1}); $r_\nu\equiv{}r_s/f_\nu{}$ is the scale radius for objects formed in a $\nu{}\,{}\sigma{}$ fluctuation [with $f_\nu{}=\exp{(\nu{}/2)}$], and $\beta{}_\nu{}=3+0.26\,{}\nu{}^{1.6}$. Hereafter, we will refer to equation~(\ref{eq:DMM}) as the DMM profile. This profile is likely to track the distribution of IMBHs, if they are a halo population (for alternative models see e.g. Bertone, Zentner \& Silk 2005).
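For illustration, the profile of equation~(\ref{eq:DMM}) is easy to evaluate (a sketch of our own; $\rho_s$ and $r_s$ are the host halo normalisation and scale radius defined above):
\begin{verbatim}
import numpy as np

def rho_DMM(r, rho_s, r_s, nu, alpha=1.0, gamma=1.0):
    # DMM-like profile for objects formed in a nu-sigma fluctuation.
    f_nu = np.exp(0.5*nu)
    r_nu = r_s / f_nu
    beta_nu = 3.0 + 0.26*nu**1.6
    x = r / r_nu
    return rho_s / (x**gamma * (1.0 + x**alpha)**((beta_nu - gamma)/alpha))

# For nu = 3.5: f_nu ~ 5.8 and beta_nu ~ 4.9, i.e. a more concentrated
# and more steeply falling profile than the host NFW halo.
\end{verbatim}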
In the second scenario, repeated collisions in star clusters with sufficiently small initial half-mass relaxation time ($t_h\lower.5ex\hbox{\ltsima}{}25$ Myr) lead to the runaway growth of a central object with mass up to 0.1 per cent of the total mass of the cluster (Portegies Zwart \& McMillan 2002). All clusters with current $t_h\lower.5ex\hbox{\ltsima}{}100$ Myr are expected to form IMBHs in this way. As young clusters are a disc population, also the IMBHs formed via runaway collapse are expected to lie in the disc. As shown by Mapelli (2007) disc IMBHs can accrete more efficiently than halo IMBHs, leading to stronger constraints on their number and mass.
In this paper, we run simulations where IMBHs are modelled either as a disc or as a halo population. As described in Mapelli (2007), halo IMBHs are distributed according to a DMM profile for objects formed in $3.5\,{}\sigma{}$ fluctuations; while disc IMBHs are distributed in the same way as stars (see equation~\ref{eq:eq3}).
In the following, we will consider IMBHs of $10^2$ and $10^3\,{}M_\odot{}$, which are the typical masses suggested both by the first star scenario and by the runaway collapse. Furthermore, such a mass range has also been studied in depth from the point of view of the mass-transfer history (Portegies Zwart, Dewi \& Maccarone 2004; Blecha et al. 2006; Madhusudhan et al. 2006). In particular, we will consider IMBHs of $10^3\,{}M_\odot{}$ as an upper limit only in the case of gas accretion from surrounding clouds. In all the other cases we will assume that the IMBHs have a mass of $10^2\,{}M_\odot{}$ as our fiducial case. Larger IMBH masses can hardly be reconciled with the observed properties of most ULXs, as highlighted by Roberts (2007). In fact, both the shape of the X-ray luminosity function (Grimm et al. 2003) and the association of many ULXs with star-forming regions (King 2004) suggest that ULXs are dominated by BHs of mass up to $\sim{}100\,{}M_\odot{}$ (Roberts 2007). The analysis of the spectra of some bright ULXs also seems to confirm this mass range, showing the intrinsic weaknesses of cool black-body disc models (Goncalves \& Soria 2006; Stobbart, Roberts \&{} Wilms 2006). Finally, the analysis of the optical counterparts of 7 ULXs also shows that the host BHs are consistent with stellar mass BHs or with $\sim{}100\,{}M_\odot{}$ IMBHs (Copperwheat et al. 2007).
\subsection{Description of runs}
\begin{table}
\begin{center}
\caption{Initial parameters of runs
}
\begin{tabular}{lllll}
\hline
\vspace{0.1cm}
Run & $M_d/(10^{10}M_\odot{})$ & $M_b/(10^{10}M_\odot{})$ & IMBH profile & SF \\
\hline
A1 & 9.6 & 2.4 & DMM & no \\
A2 & 9.6 & 2.4 & disc & no \\
\vspace{0.1cm}
A3 & 9.6 & 2.4 & disc & yes \\
B1 & 4.8 & 0 & DMM & no \\
B2 & 4.8 & 0 & disc & no \\
B3 & 4.8 & 0 & disc & yes \\
\hline
\end{tabular}
\end{center}
\label{tab_1}
\end{table}
In all the performed runs, the progenitor galaxy has 122000 dark matter halo particles with a mass of $4\times{}10^6M_\odot{}$, corresponding to a total halo mass of 4.9$\times{}10^{11}M_\odot{}$, consistent with the observations (Higdon 1995). The intruder is always composed of 80000 dark matter particles, for a total mass of 3.2$\times{}10^{11}M_\odot{}$.
Disc, bulge and gas particles, as well as the particles hosting an IMBH\footnote{Particles hosting an IMBH are normal stellar particles. We assume that a fraction of their mass represents an IMBH of 10$^2$ or $10^3$~$M_\odot{}$, while the remaining mass is composed by stars.}, have mass equal to $4\times{}10^5M_\odot{}$. The number of IMBH particles in each simulation is fixed to the reference value of 100 (but the results can be easily rescaled). The initial number of gas particles is equal to 80000 (corresponding to $M_g=3.2\times{}10^{10}M_\odot{}$) in all the simulations. The initial $M_g$ in our simulations is about a factor of 1.5 higher than the observed value (Higdon 1995), to allow a fraction of initial gas to form stars (Cartwheel is a starburst galaxy) or be stripped during the interaction.
The number of disc and bulge particles depends on the simulation. As it is shown in Table~1, runs labelled as 'A' have $M_d=9.6\times{}10^{10}M_\odot{}$ and $M_b=2.4\times{}10^{10}M_\odot{}$ (corresponding to 240000 and 60000 disc and bulge particles, respectively). Runs A have $R_d=4.4$ kpc, $z_0=0.1\,{}R_d$ and $a=0.2\,{}R_d$. The analogous properties for the gaseous disc in runs A are $R_g=R_d$ and $z_g=0.057\,{}R_g$ (where $R_g$ and $z_g$ are the gas disc scale length and scale height, see Section 2.1).
Runs labelled as 'B' in Table~1 have $M_d=4.8\times{}10^{10}M_\odot{}$ (corresponding to 120000 star particles), $M_b=0$, $R_d=6.6$ kpc, $z_0=0.2\,{}R_d$, $R_g=R_d$ and $z_g=0.057\,{}R_g$. The characteristics of runs B (and, in particular, the absence of a bulge) have been chosen in analogy with Hernquist \& Weil (1993); while the properties of runs A are similar to those used by Horellou \& Combes (2001). As we will see in the next section, the presence of the bulge does not affect significantly the results. Instead, the number of pre-existing stars in the disc can be important for the formation of spokes.
Softening lengths\footnote{For the choice of the halo softening we adopt the recipe by Dehnen (2001). For disc particles a (up to a factor of $\sim{}$20) larger softening than the adopted one gives essentially the same results.
For gas particles we do not have fragmentation problems, as the Jeans mass is a factor of $>20$ larger than the mass resolution during the entire simulation (see Bate \& Burkert 1997).} are 0.2 kpc for dark matter particles and 0.01 kpc for disc, bulge, IMBH and gas particles. The initial smoothing length is also 0.01 kpc.
There is another fundamental difference between runs A and B, i.e. the inclination of the initial velocity direction of the intruder with respect to the symmetry axis of the Cartwheel progenitor. The initial position and velocity of the centre-of-mass of the intruder are {\bf x}=(0, 8, 32) kpc and {\bf v}=(28, -218, -872) km s$^{-1}$ for runs A, and {\bf x}=(0, 0, 40) kpc and {\bf v}=(0, 0, 900) km s$^{-1}$ for runs B. This implies that the intruder travels along the symmetry axis of Cartwheel in the runs B (as in Hernquist \& Weil 1993), while it is a bit off-centre in runs A (as in Horellou \& Combes 2001). As we will see in the next Section, this difference is crucial for the shape of the post-encounter Cartwheel.
Finally, in runs A1 and B1 the IMBHs are distributed according to a DMM profile for 3.5$\,{}\sigma{}$ fluctuations; whereas IMBHs populate the exponential stellar disc in runs A2, A3, B2 and B3. In runs A3 and B3 SF is allowed, with an efficiency $c_\ast{}=0.1$.
\section{Dynamical evolution of Cartwheel models}
\begin{figure}
\center{{
\epsfig{figure=fig1_000s.eps,height=4cm}
\epsfig{figure=fig1_040s.eps,height=4cm}
\epsfig{figure=fig1_080s.eps,height=4cm}
\epsfig{figure=fig1_120s.eps,height=4cm}
\epsfig{figure=fig1_160s.eps,height=4cm}
\epsfig{figure=fig1_200s.eps,height=4cm}
}}
\caption{\label{fig:fig1}
Run A3, time evolution of the dark matter (blue on the web), stellar (yellow on the web) and gas (red on the web) component during the encounter. From top to bottom and from left to right: after 0, 40, 80, 120, 160 and 200~Myr from the beginning of the simulation. Each frame measures 100 kpc per edge. Cartwheel is seen edge-on.}
\end{figure}
\begin{figure}
\center{{
\epsfig{figure=fig2_000s.eps,height=4cm}
\epsfig{figure=fig2_040s.eps,height=4cm}
\epsfig{figure=fig2_080s.eps,height=4cm}
\epsfig{figure=fig2_120s.eps,height=4cm}
\epsfig{figure=fig2_160s.eps,height=4cm}
\epsfig{figure=fig2_200s.eps,height=4cm}
}}
\caption{\label{fig:fig2}
The same as Fig.~\ref{fig:fig1}, but Cartwheel is seen face-on.}
\end{figure}
\begin{figure}
\center{{
\epsfig{figure=fig3_000s.eps,height=4cm}
\epsfig{figure=fig3_040s.eps,height=4cm}
\epsfig{figure=fig3_080s.eps,height=4cm}
\epsfig{figure=fig3_120s.eps,height=4cm}
\epsfig{figure=fig3_160s.eps,height=4cm}
\epsfig{figure=fig3_200s.eps,height=4cm}
}}
\caption{\label{fig:fig3}
The same as Fig.~\ref{fig:fig1}, but for run B3.}
\end{figure}
\begin{figure}
\center{{
\epsfig{figure=fig4_000s.eps,height=4cm}
\epsfig{figure=fig4_040s.eps,height=4cm}
\epsfig{figure=fig4_080s.eps,height=4cm}
\epsfig{figure=fig4_120s.eps,height=4cm}
\epsfig{figure=fig4_160s.eps,height=4cm}
\epsfig{figure=fig4_200s.eps,height=4cm}
}}
\caption{\label{fig:fig4}
The same as Fig.~\ref{fig:fig3}, but Cartwheel is seen face-on.}
\end{figure}
Figs. \ref{fig:fig1} and \ref{fig:fig2} show the evolution of run A3 (A1 and A2 are almost identical), along two different projection axes, in the first 200 Myr of the simulation. Figs. \ref{fig:fig3} and \ref{fig:fig4} show the same for run B3.
First of all, let us consider run A3. The intruder approaches Cartwheel with a non-zero inclination angle ($\sim{}14^\circ{}$). Even before the passage through the disc ($\lower.5ex\hbox{\ltsima}{}40$ Myr after the beginning of the simulation), the bulge and the central part of the disc of Cartwheel are deformed by the gravitational influence of the intruder. After the encounter, a fast density wave propagates from the centre of the galaxy towards the periphery. The wave does not remain on the plane of the galaxy, but tends to be deflected towards the intruder, generating a bell-like edge-on galaxy.
After $\sim{}120$ Myr from the beginning of the simulation, the density wave forms a well-defined propagating ring. Interestingly, the ring is not perfectly circular, but slightly elongated, because of the initial inclination angle between Cartwheel and the intruder (see also Horellou \& Combes 2001). This indicates that the difference between major and minor axis in Cartwheel, normally ascribed to the inclination angle of the disc with respect to the observer ($\sim{}40^\circ{}$, Higdon 1996), can be an intrinsic deformation due to the interaction itself.
An intrinsic asymmetry of the ring (rather than an effect of inclination) is somehow supported also by observations, which indicate that most of the H$\alpha{}$ (Higdon 1995) and X-ray emission (Gao et al. 2003; Wolter \& Trinchieri 2004) is concentrated in the southern quadrant of the ring.
The inner, less-developed ring is also well visible in the simulations.
Another interesting feature of runs A1$-$A3 is the presence of well-developed spokes after $\sim{}120$ Myr from the beginning of the simulation. The upper panel of Fig.~\ref{fig:fig5} (where only the gas component is shown, at a time $t=$140 Myr from the beginning of run A3) and Fig.~\ref{fig:fig6} (where only stars are plotted, at $t=$140 Myr) show that these spokes are mainly composed by stars and are quite gas-poor. This result perfectly agrees with observations: spokes appear bright in red continuum (Higdon 1995), but hardly detectable in H$\alpha{}$ (Higdon 1995), 21 cm (Higdon 1996) and radio continuum (Mayya et al. 2005). The spokes in our simulations probably originate from gravitational instabilities in the stellar ring (see Appendix A for more details on the formation of spokes and for a comparison with previous studies).
The time evolution of run B3 (Figs. \ref{fig:fig3} and \ref{fig:fig4}) is quite similar to run A3. However, the velocity of the intruder is slightly higher with respect to the escape velocity from Cartwheel, so that the bell-like shape of the edge-on post-encounter Cartwheel is less pronounced: the galaxy remains almost in the initial plane, even if the disc becomes thicker towards the direction of the intruder.
In this case, the intruder approaches Cartwheel along its symmetry axis. Then, the rings which propagate after the encounter are perfectly circular. To obtain the observed ratio between the two axes of Cartwheel, we have to rotate the simulated face-on galaxy of $\sim{}37^{\circ{}}-40^{\circ{}}$, as it has been done in the bottom panel of Fig.~\ref{fig:fig5} and in Fig.~\ref{fig:fig7}.
The stellar spokes also appear in run B3, but are much less pronounced than in run A3, indicating that they depend on the strength of the interaction and especially on the total mass in stars. We performed further check runs to understand for which parameters the spokes form and how frequently they are produced. Most of the runs show only underdeveloped spokes, like run B3. In particular, the spokes tend to disappear if we increase the central halo mass, if we reduce the mass of the disc (stabilizing it), if we increase the relative velocity between intruder and target and if we reduce the mass of the intruder. Summarizing, runs A are the only case (among $\sim{}$10 check runs with different parameters) in which the spokes are so well developed. The fact that spokes seem to form only for a small range of initial parameters explains why they are observed in only $\sim{}3$ of the known ring galaxies (Higdon 1996).
Apart from these differences, both runs A and B reproduce the main features of Cartwheel. The size of the ring agrees with Cartwheel observations especially from $t\sim{}120$ to $t\sim{}200$ Myr from the beginning of the simulation (i.e. $t\sim{}80$ to $t\sim{}160$ Myr from the gravitational encounter). This result agrees with models based on the
radial optical and near-infrared color gradients (Vorobyov \&{} Bizyaev 2003), which predict for Cartwheel an age $<250$ Myr.
The age estimated by previous simulations ($\sim{}160$ Myr from the gravitational encounter, Hernquist \& Weil 1993) is also in agreement with our findings.
In the following, we will focus on runs A at $t=140$ Myr and runs B at $t=160$ Myr, as fiducial ages.
The only feature of our simulations which is not in perfect agreement with the observations is the presence of gas at the centre of Cartwheel, both in runs A (upper panel of Fig.~\ref{fig:fig5}) and B (bottom panel). Observations do not show significant H$\alpha{}$ (Higdon 1995) or HI emission lines (Higdon 1996) from the centre and the inner ring. In particular, the HI mass within 22 and 43 arcsecond (i.e. $\sim{}13$ and $\sim{}25$ kpc) is $\sim{}10^8$ and $\sim{}1.5\times{}10^9\,{}M_\odot{}$, respectively (Higdon 1996). These observational estimates are a factor of $\sim{}5$ smaller than the values in our simulations.
However, previous simulations (Hernquist \& Weil 1993; Mihos \& Hernquist 1994; Horellou \& Combes 2001) agree with our results, showing that a large fraction of gas must end up at the centre. Thus, what kind of process can either lead to the removal of the central gas or stop the SF activity? Previous papers consider the intruder as either a rigid body (Horellou \& Combes 2001) or a merely dark matter object (Hernquist \& Weil 1993; Mihos \& Hernquist 1994). One can wonder whether a gas-rich companion can strip the gas from the centre of Cartwheel. However, we made a test run where the companion has an exponential disc of gas (total mass $2\times{}10^9M_\odot{}$, consistent with the observations of G3, Higdon 1996), and we did not find any substantial difference.
Another hypothesis is that efficient SF (consuming the gas in the centre earlier than in the ring) has exhausted the central gas.
As it will be discussed in Section 4.2.2, the efficiency of SF in the centre is very high, especially in the first stages of the Cartwheel formation. However, we still have gas in the centre after $\sim{}$300 Myr. Maybe, there are deviations from the Schmidt law in such a peculiar environment (Higdon 1996; Vorobyov 2003). Thus, a better recipe for SF, gas cooling and feedback might attenuate this problem.
Furthermore, $Chandra$ X-ray observations (Wolter \& Trinchieri 2004) indicate the presence of a diffuse component in the centre of Cartwheel, which can be fit by a power law plus a plasma model. Even if the power law is associated with unresolved binaries, the plasma component is likely due to hot diffuse gas and its total luminosity ($\sim{}3\times{}10^{40}$ erg s$^{-1}$) is comparable with the soft gaseous component of most X-ray luminous starburst galaxies\footnote{Recent {\it XMM-Newton} observations (A. Wolter, private communication) suggest that the total luminosity due to hot diffuse gas is slightly lower ($\sim{}2\times{}10^{40}$ erg s$^{-1}$, of which one-third is due to the centre and inner ring) than the value previously derived by $Chandra$ (Wolter \& Trinchieri 2004).}.
This suggests that the high temperature of the gas at the centre of Cartwheel (probably due to stellar or BH feedback) has stopped SF, while our simulations are not able to account for this physical process.
In addition, Horellou et al. (1998) detected CO emission from Cartwheel, indicating the presence of $\sim{}1.5\times{}10^9\,{}M_\odot{}$ of molecular gas in the inner $\sim{}25$ kpc of Cartwheel (but most of the detected molecular gas is probably concentrated in the inner $\sim{}13$ kpc).
Thus, the centre of Cartwheel, although lacking cold atomic gas (responsible for the HI emission line) and SF activity (responsible for the H$\alpha{}$ emission), seems to host a conspicuous amount of hot gas and also a significant mass of molecular gas.
Moreover, H$\alpha{}$ observations (Higdon \& Wallin 1997)
show that the nucleus of AM 0644$-$741, a ring galaxy similar to Cartwheel, is quite gas-rich. Thus, the dearth of cold gas in the nucleus of Cartwheel could be a peculiarity of this galaxy, due to some particular stage of its evolution (e.g. feedback from the central BH or SF).
\begin{figure}
\center{{
\epsfig{figure=fig5a.eps,height=8cm}
\epsfig{figure=fig5b.eps,height=8cm}
}}
\caption{\label{fig:fig5}
Density of the gas in run A3 (at $t=140$ Myr, top panel) and B3 (at $t=160$ Myr, bottom panel).
The color coding indicates the projection along the z axis of the gas density in linear scale (from 0 to 20 $M_\odot{}$ pc$^{-2}$). Each frame measures 70 kpc per edge.
}
\end{figure}
\subsection{Dynamics of IMBHs}
\begin{figure}
\center{{
\epsfig{figure=fig6a.eps,height=8cm}
\epsfig{figure=fig6b.eps,height=8cm}
}}
\caption{\label{fig:fig6}
Density of stars at $t=140$ Myr in run A1 (top panel) and A3 (bottom panel).
The color coding indicates the projection along the z axis of the star density in linear scale (from 0 to 40 $M_\odot{}$ pc$^{-2}$). The filled circles (green in the online version) mark the IMBH particles. Each frame measures 70 kpc per edge.
}
\end{figure}
\begin{figure}
\center{{
\epsfig{figure=fig7a.eps,height=8cm}
\epsfig{figure=fig7b.eps,height=8cm}
}}
\caption{\label{fig:fig7}
Density of stars at $t=160$ Myr in run B1 (top panel) and B3 (bottom panel).
The color coding indicates the projection along the z axis of the star density in linear scale (from 0 to 20 $M_\odot{}$ pc$^{-2}$). The filled circles (green in the online version) mark the IMBH particles. Each frame measures 70 kpc per edge.
}
\end{figure}
Let us consider now the dynamics of the IMBHs during the formation of the ring. In Fig.~\ref{fig:fig6} we marked the positions of IMBH particles at $t=140$ Myr in the case of runs A1 (top panel) and A3 (bottom panel). Fig.~\ref{fig:fig7} shows the same at $t=160$ Myr for runs B1 (top panel) and B3 (bottom panel).
In both cases, halo IMBHs appear much more concentrated than disc IMBHs. As we would have expected, disc IMBHs behave like stars (having the same initial distribution), and an important fraction of them ends up inside the outer ring. Instead, halo IMBHs are only slightly perturbed by the intruder.
\begin{figure}
\center{{
\epsfig{figure=fig8.eps,height=8cm}
}}
\caption{\label{fig:fig8}
Distribution of IMBHs as a function of radius. Top row: run A1 (left panel) and B1 (right). Bottom row: run A3 (left) and B3 (right). Empty histograms show the initial conditions, hatched histograms show the distribution at $t=140$ Myr (runs A1 and A3) or $t=160$ Myr (runs B1 and B3).
}
\end{figure}
This qualitative consideration can be quantified by looking at Fig.~\ref{fig:fig8}, where the radial distribution of IMBHs is shown for runs A1, A3, B1 and B3, and at the third column of Table 2, where the number of IMBHs within the ring is listed.
The distribution of IMBHs after $t=140-160$ Myr is never the same as the initial one. Even some tens of halo IMBHs (runs A1 and B1) are strongly perturbed by the galaxy interaction: their final orbits extend up to 30 kpc from the galactic centre.
However, the orbits of disc IMBHs (runs A3 and B3) are far more perturbed: more than half of the BHs are dragged into the outer ring.
\section{Are the X-ray sources associated with IMBHs?}
As we have shown in the previous Section, after the gravitational interaction a fraction of IMBHs is pulled into the outer ring. The main question we would like to address is whether these IMBHs can power all or a part of the ULXs observed in Cartwheel.
IMBHs can switch on as X-ray sources both by gas accretion and by tidal capture of stars. In this Section we will consider both mechanisms.
\subsection{IMBHs accreting gas}
The details of gas accretion by IMBHs are basically unknown. As the simplest approximation (Mii \& Totani 2005; Mapelli et al. 2006; Mapelli 2007), we can assume that these IMBHs accrete at the Bondi-Hoyle luminosity\footnote{$L_X$ in equation~(\ref{eq:bondihoyle}) indicates the total X-ray luminosity. Properly speaking, the Bondi-Hoyle formula refers to the bolometric luminosity. However, in the case of ULXs the X-ray luminosity is much higher than the optical luminosity (Winter, Mushotzky \& Reynolds 2006), justifying our approximation.}:
\begin{equation}\label{eq:bondihoyle}
L_X(\rho{}_g,\,{}v)=4\,{}\pi{}\eta{}\,{}c^2\,{}G^2\,{}m_{\rm BH}^2\,{}\rho{}_g\,{}\tilde{v}^{-3},
\end{equation}
where $\eta{}$ is the radiative efficiency, $c$ the speed of light, $G$ the gravitational constant, $m_{\rm BH}$ the IMBH mass, $\rho{}_g$ the density of the gas surrounding the IMBH. $\tilde{v}=(v^2+\sigma{}_{MC}^2+c_s^2)^{1/2}$, where $v$ is the relative velocity between the IMBH and the gas particles, $\sigma{}_{MC}$ and $c_s$ are the molecular cloud turbulent velocity and gas sound speed, respectively.
Due to resolution limits\footnote{It is worth recalling that the Bondi-Hoyle formula refers only to gas well within the influence radius of the BH. Furthermore, the efficiency of the accretion is strongly dependent on the angular momentum distribution of the gas inside this influence radius (Agol \& Kamionkowski 2002). We do not have sufficient resolution to account for this physics. Thus, we are only interested in deriving an upper limit on $L_X$.}, we cannot account for the local properties of the gas. In particular, the local density of the gas never reaches the values expected for molecular clouds. But we know from the observations (Higdon 1995) that the Cartwheel ring should be rich in molecular clouds. Thus, if we calculate $\rho{}_g$ directly from our simulation, we probably underestimate the accretion rate.
Then, as an upper limit, we assume that an IMBH passing through the ring 'intercepts' a molecular cloud whenever its distance from the closest gas particle is less than $r_g=0.5$ kpc (i.e. 50 times the softening length). If this occurs, we calculate $L_X$ by assuming $\rho{}_g=n_{mol}\,{}\mu{}\,{}m_{\rm H}$, where $\mu{}\sim{}2$ is the molecular weight, $m_{\rm H}=1.67\times{}10^{-24}$ g is the proton mass and $n_{mol}=10^2$ cm$^{-3}$ is the mean density of a molecular cloud (Sanders, Scoville \& Solomon 1985).
$v$ and $c_s$ are extracted directly from our simulations, adopting the same technique used in Mapelli (2007).
In particular, $v$ is derived as the average relative velocity between the IMBH and the gas particles which are within a distance $r_g$ from the IMBH.
Similarly, the sound speed of the gas, $c_s$, around each IMBH is calculated as the average sound speed of gas particles within the same radius $r_g$, by using the relation $c_s^2=2\,{}u_g$ (where $u_g$ is the average internal energy per unit mass of the gas particles within $r_g$). Finally, we adopt the average observed Galactic value $\sigma{}_{MC}=3.7$ km s$^{-1}$ (Mapelli et al. 2006 and references therein).
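For illustration, equation~(\ref{eq:bondihoyle}) can be evaluated in cgs units with a minimal Python sketch such as the one below; the function name is arbitrary, and the inputs in the last line ($m_{\rm BH}=10^3\,{}M_\odot{}$, $n_{mol}=10^2$ cm$^{-3}$ and an illustrative $\tilde{v}=40$ km s$^{-1}$) are representative upper-limit values rather than outputs of the simulation.
\begin{verbatim}
import numpy as np

G_CGS = 6.674e-8    # cm^3 g^-1 s^-2
C_CGS = 2.998e10    # cm s^-1
MSUN  = 1.989e33    # g
M_H   = 1.67e-24    # g

def bondi_hoyle_lx(m_bh_msun, n_mol_cm3, vtilde_kms, eta=0.1, mu=2.0):
    """L_X = 4 pi eta c^2 G^2 m_BH^2 rho_g / vtilde^3 in erg/s, with
    rho_g = n_mol * mu * m_H and vtilde^2 = v^2 + sigma_MC^2 + c_s^2
    assembled beforehand from the simulation quantities."""
    rho = n_mol_cm3 * mu * M_H
    vt = vtilde_kms * 1.0e5          # km/s -> cm/s
    m = m_bh_msun * MSUN
    return 4.0 * np.pi * eta * C_CGS**2 * G_CGS**2 * m**2 * rho / vt**3

# m_BH = 1e3 Msun, n_mol = 1e2 cm^-3, vtilde ~ 40 km/s gives ~1e38 erg/s
print("%.2e erg/s" % bondi_hoyle_lx(1.0e3, 1.0e2, 40.0))
\end{verbatim}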
\begin{figure}
\center{{
\epsfig{figure=fig9.eps,height=8cm}
}}
\caption{\label{fig:fig9}
Cumulative X-ray luminosity observed in Cartwheel (shaded histograms) compared with simulations (open histograms). The simulated sources are obtained assuming IMBHs of mass $m_{\rm BH}=10^3\,{}M_\odot{}$, accreting gas with efficiency $\eta{}=0.1$. Upper panel: run A3. Lower panel: run B3.
}
\end{figure}
Fig.~\ref{fig:fig9} shows the X-ray luminosity distribution of disc IMBHs for runs A3 (top panel) and B3 (bottom), considering only the IMBHs which are in the ring. These luminosities are obtained by applying the procedure described above and by assuming $\eta{}=0.1$ and $m_{\rm BH}=10^3\,{}M_\odot{}$.
The simulated sources exceed the number of the observed ones at low luminosities; but this is not significant, as observations are complete only for $L_X\lower.5ex\hbox{\gtsima}{}5\times{}10^{38}$ erg s$^{-1}$.
Instead, both in runs A3 and B3 the luminosity of IMBHs remains well below (a factor of $\lower.5ex\hbox{\gtsima}{}10$) the high-luminosity tail of the observed distribution, even with our optimistic assumptions.
Of course, if we increase $m_{\rm BH}$ and/or $\eta{}$, $L_X$ increases according to equation (\ref{eq:bondihoyle}).
However, $\eta{}=0.1$ and $m_{\rm BH}=10^3\,{}M_\odot{}$ are upper limits for our model, as it is very difficult to produce IMBHs with mass larger than $\sim{}10^3\,{}M_\odot{}$ in the runaway collapse scenario (Portegies Zwart \& McMillan 2002) and it is unlikely to have higher radiative efficiency. Indeed, adopting $\eta{}=0.1$ means that we are assuming a thin disc accretion (Shakura \& Sunyaev 1973). The gas surrounding a $10^3\,{}M_\odot{}$ IMBH is unlikely to form a thin disc (Agol \& Kamionkowski 2002), as its expected accretion rate is (Beskin \& Karpov 2005; Mapelli 2007\footnote{In Mapelli (2007) this formula is affected by a typographical error: the exponent of the velocity should be $-3$, as in our equation~(7).}):
\begin{equation}
\frac{\dot{M}}{\dot{M}_{\rm Edd}}\sim{}10^{-2}\,{}\left(\frac{m_{\rm BH}}{10^3\,{}M_\odot{}}\right)\,{}\left(\frac{n_{mol}}{10^2\,{}{\rm cm}^{-3}}\right)\,{}\left(\frac{\tilde{v}}{40\,{}{\rm km}\,{}{\rm s}^{-1}}\right)^{-3},
\end{equation}
where $\dot{M}_{\rm Edd}$ is the Eddington accretion rate.
For such low accretion rates an advection-dominated accretion flow is likely to establish, with radiative efficiency a factor of $\sim{}100$ lower than in the thin disc model (Narayan, Mahadevan \& Quataert 1998). Thus, assuming $\eta{}=0.1$ is a generous upper limit.
We do not even show the plot of halo IMBHs, i.e. runs A1 and B1, because they present only 1 and 2 X-ray sources, respectively, with luminosity lower than $10^{37}$ erg s$^{-1}$. This is mainly due to the fact that only a small fraction of halo IMBHs passes through the disc (and close enough to gas particles) for a sufficiently long lapse of time. Furthermore, halo IMBHs, even when they pass through the disc, have high relative velocities ($v\lower.5ex\hbox{\gtsima}{}100$ km s$^{-1}$) with respect to the closest gas particles.
We can therefore conclude that the ULXs in Cartwheel can hardly originate from gas-accreting IMBHs. Admittedly, our model oversimplifies the accretion process, and a better treatment of gas cooling and feedback might give different estimates. However, most of our assumptions are generous upper limits, which strengthens our result.
\subsection{IMBHs accreting from stars}
IMBHs can also accrete by mass transfer in binaries. It is unlikely that halo IMBHs can capture a star and form a binary system. Recently, Kuranov et al. (2007) found that the expected number of ULXs powered by halo IMBHs which have tidally captured a stellar companion is $\sim{}10^{-7}$ per galaxy (or $0.01$ sources per galaxy with $L_X\lower.5ex\hbox{\gtsima}{}10^{36}$ erg s$^{-1}$). Therefore, in this Section we will not consider halo IMBHs.
Instead, IMBHs born by runaway collapse (i.e. disc IMBHs) are hosted in star clusters, and have a significant probability of being in binary systems. It is also likely that IMBHs remain inside the parent cluster during its lifetime (Colpi, Mapelli \& Possenti 2003).
However, some observed ULXs have been found displaced from the star clusters (e.g. in the Antennae, see Zezas et al. 2002), suggesting that the BHs powering these ULXs and their companion stars have been ejected from the parent cluster.
In this paper, we assume that all the IMBHs remain in their parent cluster, which gives us an upper limit for the mass-transfer time [see equation (\ref{eq:oldstars})].
X-ray sources originated by accreting $\sim{}100-1000\,{}M_\odot{}$ IMBHs are expected to be transient, if the mass of the companion is $\lower.5ex\hbox{\ltsima}{}10\,{}M_\odot{}$ (Portegies Zwart et al. 2004). The duty cycle shows a short (few days) bright phase followed by a rather long (several weeks) quiescent phase. The peak luminosity of these transient sources is always in the ULX range ($L_X>10^{40}$ erg s$^{-1}$, Portegies Zwart et al. 2004).
Instead, Patruno et al. (2005) showed that $\sim{}1000\,{}M_\odot{}$ IMBHs accreting from high-mass ($\lower.5ex\hbox{\gtsima}{}10-15\,{}M_\odot{}$) stellar companions should produce persistent X-ray sources, with luminosities ranging from $10^{36}$ erg s$^{-1}$ to more than $10^{40}$ erg s$^{-1}$.
Of the 17 point X-ray sources (with $L_X>10^{38}$ erg s$^{-1}$) associated with Cartwheel (see Fig.~\ref{fig:fig9} and Wolter \& Trinchieri 2004), at least 3 are variable over a time-scale of 6 months (Wolter et al. 2006). Another 4 sources could be constant, at least over a time-scale of 6 months, as they have not shown variability in the $XMM-Newton$ observations reported by Wolter et al. (2006). Even the variable sources are not properly transient (e.g. the source N.10 dims by a factor of $\sim{}2$ over a time-scale of 6 months, after having been constant for at least 4 years). Thus, both variable and constant sources can exist in Cartwheel; but there is no evidence for transient sources (where by transient we mean sources with a bright phase of a few days and a long quiescent phase).
We also point out that it is physically unlikely that all of these 17 point sources are powered by IMBHs. Among them only the source N.10 has a luminosity higher than $10^{41}$ erg s$^{-1}$ (Gao et al. 2003; Wolter \& Trinchieri 2004), which can hardly be explained with a BH mass smaller than $100\,{}M_\odot{}$.
Another 4 sources have $L_X\lower.5ex\hbox{\gtsima}{}5\times{}10^{39}$ erg s$^{-1}$, which is very high but still consistent with models of X-ray sources powered by BHs with mass $\lower.5ex\hbox{\ltsima}{}100\,{}M_\odot{}$. The remaining sources have $L_X\lower.5ex\hbox{\ltsima}{}5\times{}10^{39}$ erg s$^{-1}$. Thus, apart from the source N.10, all the X-ray sources in Cartwheel are perfectly consistent with various models of ULXs powered by stellar mass or moderately massive ($\lower.5ex\hbox{\ltsima}{}100\,{}M_\odot{}$) BHs (e.g. via super-Eddington accretion, King \& Pounds 2003).
In this Section we want to address whether it is theoretically possible that all these ULXs or at least the brightest among them are powered by IMBHs accreting by mass transfer from stellar companions.
In order to check this, we calculate the number of X-ray sources which are powered by our simulated IMBHs. Firstly, we consider the case in which IMBHs accrete only from old stars, then we focus on IMBHs accreting from young stars.
\begin{table}
\begin{center}
\caption{IMBHs in the ring.
}
\begin{tabular}{llll}
\hline
\vspace{0.1cm}
Run & time (Myr) & $N_{\rm BH,\,{}ring}$$^a$ & $N_{\rm BH,\,{}SF}$$^b$\\
\hline
A1 & 140 & 17 & -\\
A2 & 140 & 50 & -\\
\vspace{0.1cm}
A3 & 140 & 50 & 27\\
B1 & 160 & 21 & -\\
B2 & 160 & 79 & - \\
B3 & 160 & 79 & 30 \\
\hline
\end{tabular}
\end{center}
{\footnotesize $^{a}$Number of IMBHs within (or close to) the ring, i.e. more than 15 kpc away from the centre of the galaxy.\\
$^{b}$Number of IMBHs within (or close to) the ring which are close to a star-forming region.}\\
\label{tab_2}
\end{table}
\subsubsection{IMBHs accreting from old stars}
Blecha et al. (2006) found that a $100-500\,{}M_\odot{}$ IMBH located in a young cluster spends $\sim{}3$ per cent of its life in the mass-transfer phase with a stellar companion. So, the number ($N_{\rm BH,\,{}MT}$) of $\sim{}100\,{}M_\odot{}$ IMBHs, born before the dynamical interaction, which are accreting by mass transfer from old stars, in the ring, at the current time, will be simply given by
\begin{equation}\label{eq:oldstars}
N_{\rm BH,\,{}MT}= 2.4\,{}\left(\frac{f_{\rm MT}}{0.03}\right)\,{}\left(\frac{N_{\rm BH,\,{}ring}}{79}\right),
\end{equation}
where $f_{\rm MT}$ is the fraction of the star cluster life in which the IMBH undergoes mass transfer (from Blecha et al. 2006), and $N_{\rm BH,\,{}ring}$ is the number of IMBHs in the stellar ring (see column 3 of Table~2). This calculation implies that in runs A2$-$A3 (B2$-$B3) there are only 1.5 (2.4) IMBHs undergoing mass transfer.
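For illustration, equation~(\ref{eq:oldstars}) can be evaluated for the values of $N_{\rm BH,\,{}ring}$ listed in Table~2 with a few Python lines (a minimal sketch; the function name is arbitrary):
\begin{verbatim}
def n_mass_transfer(n_bh_ring, f_mt=0.03):
    """Expected number of IMBHs currently undergoing mass transfer from
    old stars: N_MT = 2.4 (f_MT / 0.03) (N_BH,ring / 79)."""
    return 2.4 * (f_mt / 0.03) * (n_bh_ring / 79.0)

for run, n_ring in [("A1", 17), ("A2-A3", 50), ("B1", 21), ("B2-B3", 79)]:
    print(run, round(n_mass_transfer(n_ring), 1))  # 0.5, 1.5, 0.6, 2.4
\end{verbatim}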
Furthermore, if both the IMBH and the stellar companion formed before the dynamical interaction between Cartwheel and the intruder, we expect that the companion is older than $\sim{}100$ Myr, which is approximately the lifetime of 6-7 $M_\odot{}$ stars. Then, X-ray sources due to IMBHs and stars born before the dynamical interaction of Cartwheel are expected to be transient.
In this case, the time spent in the outburst phase is probably only a few per cent of the total mass-transfer time (McClintock \&{} Remillard 2006), further reducing the probability of observing such sources (Blecha et al. 2006).
Therefore, we have to hypothesize that the non-transient X-ray sources in Cartwheel, if they are powered by IMBHs, are due to IMBHs born either before or after the interaction, accreting from young ($<100$ Myr) and massive ($>10\,{}M_\odot{}$) stars.
\subsubsection{IMBHs accreting from young stars}
We checked the hypothesis of IMBHs accreting from young ($<100$ Myr) stars by setting up SF in two of our runs (A3 and B3). The recipe for SF in our code is simply based on the Schmidt law, and we assume a SF efficiency $c_\ast{}=0.1$. This assumption results in a SF rate (in the ring) of $\sim{}36\,{}M_\odot{}$ yr$^{-1}$, in good agreement with observations (Marston \& Appleton 1995; Mayya et al. 2005).
\begin{figure}
\center{{
\epsfig{figure=fig10a.eps,height=8cm}
\epsfig{figure=fig10b.eps,height=8cm}
}}
\caption{\label{fig:fig10}
Newly formed stars (filled circles, green in the online version) are superimposed to the density map of old stars (the same as in Figs.~\ref{fig:fig6} and \ref{fig:fig7}). Top panel: run A3 at $t=140$ Myr. Bottom panel: run B3 at $t=160$ Myr. Each frame measures 70 kpc per edge.
}
\end{figure}
Fig.~\ref{fig:fig10} shows stars which formed after the beginning of simulations in runs A3 ($t=140$ Myr, top panel) and B3 ($t=160$ Myr, bottom).
In both cases SF is strong in the outer ring and at the centre. While SF in the ring is consistent with observations, SF in the centre is puzzling, as no H$\alpha{}$ emission is detectable now from the centre. This problem of inconsistency between data and simulations has already been pointed out by Higdon (1996), who proposed a possible deviation in Cartwheel from the Schmidt law. On the other hand, Fig.~\ref{fig:fig11}, where successive epochs of SF in Cartwheel are shown, indicates that SF is particularly intense in the central region immediately after the interaction with the intruder (top panels), when the density wave passes through the inner 10-20 kpc. After that phase ($t>120$ Myr, bottom panels), SF proceeds especially in the propagating ring and at the very centre, while it is quenched in the intermediate parts (the region of the spokes and the inner ring).
\begin{figure}
\center{{
\epsfig{figure=fig11_040s.eps,height=4cm}
\epsfig{figure=fig11_080s.eps,height=4cm}
\epsfig{figure=fig11_120s.eps,height=4cm}
\epsfig{figure=fig11_160s.eps,height=4cm}
}}
\caption{\label{fig:fig11}
Newly formed stars (filled circles, green in the online version) are superimposed to the density map of old stars (the scale is the same as Fig.~\ref{fig:fig7}) for run B3. Top left panel: situation at $t=40$ Myr; filled circles are stars born at $t\leq{}40$ Myr. Top right panel: situation at $t=80$ Myr; filled circles are stars born at $40\leq{}t\leq{}80$ Myr. Bottom left panel: situation at $t=120$ Myr; filled circles are stars born at $80\leq{}t\leq{}120$ Myr. Bottom right panel: situation at $t=160$ Myr; filled circles are stars born at $120\leq{}t\leq{}160$ Myr.
Each frame measures 70 kpc per edge.
}
\end{figure}
Another important feature of Fig.~\ref{fig:fig10} (especially the bottom panel, run B3) is that relatively young stars appear concentrated along the spokes. As this concentration of young stars within the spokes is not evident in the frames of Fig.~\ref{fig:fig11}, we infer that most of the relatively young stars concentrated in the spokes were born in the central part of Cartwheel before or in the first stages of the interaction ($t\lower.5ex\hbox{\ltsima}{}80$ Myr from the beginning of the simulation), and were subsequently ejected into the spokes (see the Appendix). They were not born in the spokes, confirming that spokes are not star-forming regions.
Let us now estimate the importance of these newly formed stars for the accretion of IMBHs. Two scenarios (maybe coexistent) are possible: (i) the IMBHs were created before the interaction and then entered into young star-forming regions, where they captured a stellar companion; (ii) the IMBHs have been formed in very young stellar clusters born after the dynamical interaction of Cartwheel and they accrete from stars belonging to the same parent cluster.
If the IMBHs were born before the interaction of Cartwheel with the intruder, we have to calculate how many of them happen to be in the outer ring and close enough to a young star-forming region. The fourth column of Table~2 shows how many IMBHs ($N_{\rm BH,\,{}SF}$) are in the outer ring (at time $t=140$ and $t=160$ Myr for run A3 and B3, respectively) and host at least 1 newly formed stellar particle in their neighborhood (the radius of the neighborhood is $r_g=0.5$ kpc, the same as in Section 4.1).
We assume that all these IMBHs plunged into a young star cluster and captured a young stellar companion (which is obviously an upper limit). In analogy with equation (\ref{eq:oldstars}), the number of IMBHs which are accreting by mass transfer at the current time is:
\begin{equation}\label{eq:newstars}
N_{\rm BH,\,{}MT}= 0.9\,{}\left(\frac{f_{\rm MT}}{0.03}\right)\,{}\left(\frac{N_{\rm BH,\,{}SF}}{30}\right),
\end{equation}
which means that both in run A3 and B3 only $\lower.5ex\hbox{\ltsima}{}1$ ULX can form via this mechanism.
However, IMBHs can also form by runaway collapse in young clusters born after the dynamical interaction.
Then, we can assume that a fraction of the newly born star particles hosts an IMBH. In this scenario, it is required that $\sim{}$1000 disc IMBHs are born in the young star clusters, in order to have $\sim{}30$ IMBHs undergoing mass transfer at present. If each newly formed young cluster hosts an IMBH (a quite optimistic assumption), this means that $\sim{}5-8$ per cent of the total mass of stars formed in the ring (which is about half of the total mass of new stars, in our runs A3 and B3) is in young clusters.
If the age of the young cluster is $\lower.5ex\hbox{\ltsima}{}40$ Myr, then it hosts also stars more massive than 10 $M_\odot{}$. IMBHs undergoing mass transfer in these very young star clusters can generate both transient (if the mass of the companion is $\lower.5ex\hbox{\ltsima}{}10\,{}M_\odot{}$) and persistent X-ray sources (otherwise). A good fraction of these sources are expected to be in the ULX range.
Thus, if $\sim{}30$ IMBHs are undergoing mass transfer at present in the Cartwheel ring, it is natural to expect that they produce a number of bright X-ray sources comparable with observations ($\sim{}17$ X-ray sources brighter than $L_X>10^{38}$ erg s$^{-1}$; Wolter \& Trinchieri 2004).
But is it realistic to think that $\sim{}1000$ IMBHs have been formed (or pulled) in young clusters along the Cartwheel ring?
The Milky Way probably hosts $\sim{}$100 young ($<10^7$ yr) massive ($>10^4\,{}M_\odot{}$) star clusters\footnote{The currently detected young massive clusters are of the order of 10. However, assuming a homogeneous distribution and accounting for the fact that most of the clusters are hidden by the obscuring material in the Galactic plane, the total number of young clusters in the Milky Way might be as high as 100 (Gvaramadze, Gualandris \& Portegies Zwart 2007).} (Gvaramadze, Gualandris \& Portegies Zwart 2007). Galaxies which are experiencing an epoch of intense SF might have a larger population of young massive clusters. As the SF rate in Cartwheel at present is a factor $\sim{}80$ higher than in the Milky Way, it might be possible (although optimistic) that Cartwheel hosts a factor of 10 more young massive clusters than the Milky Way.
If we accept the theory of runaway collapse (Portegies Zwart \& McMillan 2002), it is also possible that most young massive clusters host an IMBH at their centre. In fact, the only requirement to start runaway collapse is that the half-mass radius is from 0.3 to 0.8 pc (Gvaramadze et al. 2007), a condition which is satisfied by most Galactic young massive clusters. Thus, a number of $\sim{}1000$ young clusters (formed preferentially after the dynamical interaction with the intruder galaxy), hosting a $\sim{}100\,{}M_\odot{}$ IMBH at their centre, might produce a number of ULXs comparable with the observations. However, this result dramatically depends on the IMBH formation process that we assume, as only the runaway collapse model guarantees the formation of such a large number of disc IMBHs.
Thus, we conclude that the hypothesis that all the X-ray sources observed in Cartwheel are powered by $\sim{}100\,{}M_\odot{}$ disc IMBHs accreting from stars can hardly be justified by our simulations, as it requires (i) that a huge number ($\sim{}500-1000$) of IMBHs and of young clusters form in Cartwheel, (ii) that each young cluster hosts an IMBH (possible only, under extreme assumptions, in the runaway collapse scenario), and (iii) that all the IMBHs remain in the parent cluster.
On the other hand, the observational properties of ULXs suggest that only the brightest among them require the presence of an IMBH. In particular, the source N.10 in Cartwheel (see Section 4.2) has $L_X\lower.5ex\hbox{\gtsima}{}10^{41}$ erg s$^{-1}$, difficult to explain with a BH mass smaller than $100\,{}M_\odot{}$. We can therefore hypothesize that only the long-term variable source N.10 is powered by an IMBH. In order to observe only one very bright ULX, $\sim{}30$ IMBHs are required to form in the Cartwheel ring.
Other sources among the brightest ULXs in Cartwheel might be powered by IMBHs with mass $\sim{}100\,{}M_\odot{}$. For example, another 4 point sources in Cartwheel have $L_X\lower.5ex\hbox{\gtsima}{}5\times{}10^{39}$ erg s$^{-1}$.
As all the 5 brightest sources are either persistent or variable but not transient\footnote{Here by transient sources we only mean sources with a bright phase of a few days and a very long quiescent phase, as described in Portegies Zwart et al. (2004).} (Wolter et al. 2006), they cannot be powered by IMBHs with low-mass stellar companions. Thus, $\sim{}100-200$ disc IMBHs, mostly born in the last $\sim{}40$ Myr, are required to explain the $\lower.5ex\hbox{\ltsima}{}5$ brightest sources.
\section{Summary}
In this paper we investigated the possible connection between IMBHs and the $\sim{}17$ (Gao et al. 2003; Wolter \& Trinchieri 2004) bright X-ray sources detected in the outer ring of Cartwheel. Recent observations show that models based on beamed emission or super-Eddington accretion in HMXBs hosting stellar mass BHs can explain most ULXs, apart from the brightest ones (Roberts 2007 and references therein). However, the observations cannot definitely exclude that all the ULXs are powered by IMBHs. Thus, in our paper we checked whether IMBHs can account for all or only for a part of the ULXs observed in Cartwheel.
We simulated the formation of a Cartwheel-like ring galaxy via dynamical interaction with an intruder galaxy. In this simulation we also integrated the evolution of 100 IMBH particles.
We considered two different models of IMBH formation, i.e. IMBHs born as relics of population III stars (and distributed as a concentrated halo population) and IMBHs formed via runaway collapse of stars (and distributed as an exponential disc). For these models, we investigated both gas accretion from surrounding molecular clouds and mass transfer from a stellar companion. The main results of this study are the following.
\subsection{Halo IMBHs}
IMBHs born as the relics of population III stars, if they are distributed as a halo population, cannot contribute to the X-ray sources either via gas accretion or via mass transfer in binaries. In particular, the luminosity produced by halo IMBHs accreting gas is always many orders of magnitude smaller than that of the observed sources, even if we assume that halo IMBHs have a large mass ($m_{\rm BH}=10^3\,{}M_\odot{}$) and a high radiative efficiency ($\eta{}=0.1$). This is due to the fact that only a small fraction of halo IMBHs passes through the disc (only for a short lapse of time), and even these IMBHs have a high ($v\lower.5ex\hbox{\gtsima}{}100$ km s$^{-1}$) relative velocity with respect to gas particles.
Similarly, halo IMBHs cannot accrete mass from stars, as the probability that they acquire a companion is very low.
\subsection{Disc IMBHs}
IMBHs born from the runaway collapse of stars should be a disc population. Under overoptimistic assumptions ($n_{mol}=10^2\,{}{\rm cm}^{-3}$, $m_{\rm BH}=10^3\,{}M_\odot{}$ and $\eta{}=0.1$) these IMBHs can produce, via gas accretion, X-ray sources with $L_X\lower.5ex\hbox{\ltsima}{}10^{39}$ erg s$^{-1}$, a factor of $\sim{}10$ fainter than the brightest ULXs observed in Cartwheel. Thus, even disc IMBHs accreting gas cannot explain the observed X-ray sources in Cartwheel. Our model of gas-accreting IMBHs contains many rough assumptions; but most of them are upper limits, strengthening our result. However, a more realistic treatment of the local properties of the gas would be helpful to understand the physical mechanisms of gas accretion onto IMBHs.
On the other hand, runaway collapse born IMBHs are hosted in dense young clusters. In such an environment, it is easy for the IMBH to capture a stellar companion. Blecha et al. (2006) estimated that a $\sim{}100\,{}M_\odot{}$ IMBH undergoes mass transfer from a companion star for about $3$ per cent of the cluster lifetime. Previous papers (Portegies Zwart et al. 2004; Patruno et al. 2005) have shown that IMBHs accreting from low-mass ($\lower.5ex\hbox{\ltsima}{}10\,{}M_\odot{}$) and high-mass ($\lower.5ex\hbox{\gtsima}{}10\,{}M_\odot{}$) companions generate transient (with a bright phase of only a few days) and persistent bright X-ray sources, respectively.
As $10\,{}M_\odot{}$ stars have a lifetime of $\sim{}30-40\,{}$ Myr, only IMBHs hosted in sufficiently young star clusters can generate persistent X-ray sources. Then, IMBHs hosted in clusters born before the dynamical encounter with the intruder (i.e. more than 100 Myr ago) can produce only transient sources.
We estimated that, out of 100 IMBHs which were present before the dynamical encounter with the intruder galaxy, only $\lower.5ex\hbox{\ltsima}{}2-3$ are expected to undergo mass transfer from low-mass companions at present, producing a comparable number of transient X-ray sources. As observations show that at least 4 X-ray sources in the Cartwheel ring are persistent over a time-scale of 6 months (Wolter et al. 2006), we conclude that pre-encounter formed IMBH binaries are not sufficient to explain the data.
We considered the possibility that pre-encounter IMBHs capture massive stars produced after the encounter with the intruder. In this case, under overoptimistic assumptions, 100 pre-encounter IMBHs can produce $\lower.5ex\hbox{\ltsima}{}1$ X-ray source, either persistent or not.
Finally, we hypothesized that very young ($<40$ Myr) star clusters, formed after the encounter, generate IMBHs at their centre. Under this hypothesis, $500-1000$ IMBHs are required to produce $\sim{}15-30$ bright ($10^{36}-10^{41}$ erg s$^{-1}$) X-ray sources, some of them persistent and some transient. This scenario might account for the $\sim{}17$ observed X-ray sources in the Cartwheel ring. It is also in agreement with the fact that many ULXs observed in Cartwheel are associated with bright $H_\alpha{}$ spots, i.e. active star-forming regions (Gao et al. 2003).
The birth of $\sim{}1000$ IMBHs (each one of 100 $M_\odot{}$) in $\sim{}$40 Myr implies an IMBH formation rate of $2.5\times{}10^{-3}\,{}M_\odot{}\,{}{\rm yr}^{-1}$, that is a factor of $\sim{}10^4$ lower than the SF rate. This rate is acceptable for runaway collapse scenarios, as Portegies Zwart \& McMillan (2002) show that $\sim{}$0.1 per cent of the mass of the parent young cluster merges to form the IMBH.
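This estimate amounts to the following short calculation (a minimal Python sketch with arbitrary variable names):
\begin{verbatim}
n_imbh, m_imbh = 1.0e3, 100.0    # number of new IMBHs and their mass (Msun)
t_form, sfr = 40.0e6, 36.0       # formation window (yr), ring SF rate (Msun/yr)

imbh_rate = n_imbh * m_imbh / t_form
print("IMBH formation rate: %.1e Msun/yr" % imbh_rate)   # 2.5e-03
print("SF rate / IMBH rate: %.1e" % (sfr / imbh_rate))   # ~1.4e+04
\end{verbatim}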
However, we stress that only the runaway collapse scenario, under extreme assumptions, can explain the formation of such a huge number of IMBHs in the disc. Thus, our simulations suggest that IMBHs can hardly account for all the ULXs observed in Cartwheel.
On the other hand, it is possible that only the few brightest sources in Cartwheel are powered by IMBHs, while the other ones are either beamed HMXBs, super-Eddington accreting stellar mass BHs or a blending of multiple fainter sources. For example, $\sim{}30$ IMBHs are required to form in the ring in order to produce just one very bright ULX, such as the source N.10 in Cartwheel.
These results agree with the semi-analytical model by King (2004), who showed that IMBHs cannot explain all the ULXs in Cartwheel. However, King (2004) concludes that $>3\times{}10^4$ IMBHs are required to produce the observed number of ULXs, $\sim{}30-60$ times more than in our analysis. This apparent discrepancy is due to the fact that King (2004) assumes that the IMBHs power only transient sources (Kalogera et al. 2004), and thus he has to introduce a $\sim{}10^{-2}$ duty-cycle. However, Patruno et al. (2005) showed that IMBHs accreting from young massive stars ($\gtrsim10\,{}M_\odot{}$) produce non-transient sources, increasing the expected duty-cycle.
In conclusion, new $Chandra$ and $XMM-Newton$ observations of Cartwheel could partially solve the mystery of the Cartwheel X-ray sources by establishing which sources are transient, which are variable, and which are persistent. Deeper observations are also needed to resolve possible blended sources.
In the future, it would be interesting to investigate whether other ring galaxies host as many bright X-ray sources as Cartwheel and whether these sources are similarly concentrated in the outer ring.
\section*{Acknowledgments}
The authors thank J.~Stadel, P.~Englmaier and D.~Potter for technical support, and acknowledge the two referees for the critical reading of the manuscript and for their helpful comments. We also thank A.~Wolter and G.~Trinchieri for their useful comments. MM acknowledges support from the Swiss
National Science Foundation, project number 200020-109581/1
(Computational Cosmology \&{} Astrophysics).
\section{\label{intro}Introduction}
\input{intro.tex}
\section{\label{model} Potential Energy Model}
Over the past decade, numerous different MLIAP model forms have been proposed, but they can be broadly classified by the use (or lack thereof) of symmetry-preserving descriptors as model input.
As one might expect, practitioners of MLIAP aim to preserve the intrinsic physical laws that govern observables such as energy, force, and stress.
Most models satisfy rotation, translation, and permutation (of atomic indices) invariance. Different classes of MLIAP can be characterized by whether these invariances are learned from the data or enforced explicitly in the mathematical structure of the descriptors.
Examples of the former include graph representations\cite{chen2019graph, zhang2018deep}, equivariant\cite{batzner20223, rackers2022cracking}, and message passing neural networks\cite{smith2019approaching, st2019message, zubatiuk2021development}.
When these invariant properties are enforced in the descriptors, the models are often referred to as descriptor-based MLIAPs. A summary and comparison of many of these has recently been published\cite{Zuo2020}.
The bispectrum components used in SNAP are one such descriptor set, and have been demonstrated for use in single- and multi-element forms\cite{cusentino2020explicit, Thompson2015,Wood2018}.
\subsection{\label{math-section}Spectral Neighborhood Analysis Potential}
\input{SNAP.tex}
\subsection{\label{training-set-construction} Constructing the Training Set}
\begin{slow}
\input{training_set}
\end{slow}
\subsection{\label{optimize}Optimization Methodology}
\begin{slow}
\input{optimization.tex}
\end{slow}
\subsection{\label{interpolation}Properties and Performance}
\input{properties.tex}
\section{\label{discussion}Discussion}
\input{discussion}
\section{\label{conclusions}Conclusions}
\input{conclusion.tex}
\begin{acknowledgments}
We would like to thank Saikat Mukhopadhyay and Brian Wirth for sharing their atomic structures and helpful discussion relevant to this work. We also appreciate helpful feedback from James Goff for revising this manuscript. This work was supported by the U.S. Department of Energy, Office of Fusion Energy Sciences (OFES) under Field Work Proposal Number 20-023149.
This research used resources of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725. This article has been authored by an employee of National Technology \& Engineering Solutions of Sandia, LLC under Contract No. DE-NA0003525 with the U.S. Department of Energy (DOE). The employee owns all right, title and interest in and to the article and is solely responsible for its contents. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this article or allow others to do so, for United States Government purposes. The DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan https://www.energy.gov/downloads/doe-public-access-plan.
\end{acknowledgments}
\section*{Author Declaration}
\subsection*{Conflict of Interest}
The authors have no conflicts to disclose.
\subsection*{Author Contributions}
E.L.S. and J.T. performed the DFT calculations. J.T. conceptualized the methodology and E.L.S. developed the production potential. M.A.C. and M.A.W. provided direction for the potential development and LAMMPS simulations. M.J.M automated the bicrystal generation and E.L.S. performed the MD simulations. E.L.S. wrote the original draft and all authors contributed to, reviewed, and edited the manuscript. A.P.T. and M.J.M. implemented the inner cutoff. A.P.T. supervised the project.
\subsection*{Data Availability}
The data to produce the W-ZrC potential will be available at https://github.com/FitSNAP/FitSNAP upon publication.
\section*{\label{references}References}
\subsubsection{\label{SNAP_variables}SNAP variables}
The SNAP hyper-parameters include the radial cutoffs $R$ (one for each element), the element weights, and the inner cutoff parameters $S$ and $D$ from Eq. \ref{eqn:inner}. The SNAP group weights correspond to the groups into which the training set is broken down, and tell FitSNAP how strongly to prioritize the energy and force errors of each group.
The group weight feature allows the user to fine-tune the importance of different training groups. For example, the user can place increased importance on high-accuracy ground state configurations while simultaneously lowering the weights on high energy structures without interfering with the potential energy surface construction. If all training data were to be weighted equally, this could cause the fit to skew towards reproducing the energies of outliers with the most fidelity. Similarly, without group separation, it would be difficult to include both traditional DFT and AIMD training data. Due to the differences in both the governing physics and the ionic steps desired between traditional DFT and AIMD simulations, one set of cutoff energies and k-points cannot always be applied to both DFT and AIMD and yield optimal results. Specifically, DFT calculations can generally converge with lower cutoff energies but require higher k-points. Meanwhile, AIMD simulations need higher cutoff energies to include higher energy occupied states, but are often sampled at the gamma point due to lower reciprocal space dimensions and to balance computational cost. Some MLIAPs have addressed this discrepancy by recalculating AIMD snapshots at the accuracy of the rest of the training set \cite{Zuo2020} to maintain a consistent potential energy surface. However, this adds great computational expense to informing the potential with AIMD. By splitting DFT and AIMD simulations into separate groups and including objective functions demanding high accuracy from properties of interest, Dakota\cite{dakota} and FitSNAP can collectively arrive at SNAP solutions that both maintain high fidelity to traditional DFT calculations and are informed by AIMD.
\par
\begin{figure}[htb]
\centering
\includegraphics[width=0.45\textwidth]{Figures/fitSNAP_diagram_v3.png}
\caption{Schematic of the workflow for SNAP potential development. First, FitSNAP reads in DFT training data. Using hyper-parameters and group weights assigned to the respective candidate by Dakota, FitSNAP determines the bispectrum coefficients. FitSNAP then calls LAMMPS to evaluate the forces and energies for the training configurations as predicted by the SNAP candidate. The differences between these SNAP predictions and the DFT values are fed to Dakota, along with errors from material property objective functions. Dakota uses the error information to determine which candidates should participate in reproduction and uses GA strategies to produce the next generation of candidates. Ultimately, the potential can be used to run large-scale MD simulations on a machine such as Summit.}
\label{fig:fitsnap}
\end{figure}
\par
\subsubsection{\label{objective functions}Objective Functions}
Using objective functions helps ensure that our SNAP potential yields optimal material properties and stable, physical dynamics at high temperatures. The production SNAP potential was fit using eight material property objective functions and four stability functions.
Five objective functions were initially included to minimize the error for material properties: (1) W (100) and (110) surface energies, (2) ZrC (100) and (110) surface energies, (3) selected W-ZrC interface energies, (4) the W bulk modulus, and (5) the ZrC bulk modulus. The surface energies were calculated using
\begin{equation}
E_{surf} = \frac{E_{slab}-E_{bulk}}{2A} \quad ,
\end{equation}
where $E_{slab}$ is the total energy of the slab, $E_{bulk}$ is the total energy of the bulk system with the same number of atoms as the slab, and $A$ is the surface area. The surface energy objective functions help ensure that the formation energy for each surface is in the correct order and of reasonable magnitude, especially after geometry relaxation. This differs from the standard regression implemented in FitSNAP, which only checks whether single-point (no geometry optimization) SNAP calculations match DFT for the training set structures. Similarly, interface energies were calculated using\cite{Zhang2020}
\begin{equation}
E_{int} = \frac{E^{int}_{tot}-N_{W}E^{bulk}_{W}-N_{ZrC}E^{bulk}_{ZrC}}{S}-E^{surf}_{W}-E^{surf}_{ZrC} ,
\end{equation}
where $E^{int}_{tot}$ is the total energy of the interface, $N_{W}$ is the number of W atoms in the interface, $E^{bulk}_{W}$ is the bulk W energy per atom, $N_{ZrC}$ is the number of Zr-C pairs in the interface, $E^{bulk}_{ZrC}$ is the bulk ZrC energy per Zr-C pair, and $S$ is the area of the interface.
We note that while previous DFT work has used rigorous relaxation procedures to reach ground state interfaces \cite{Mukhopadhyay2022}, we performed simpler interface geometry optimizations. The purpose of the interface objective function is thus to check whether the SNAP candidate, subject to the same constraints during geometry optimization as the DFT reference, reaches an interface energy of the same magnitude.
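For illustration, the two expressions above can be evaluated with short Python helpers such as the ones below (a minimal sketch, not code from FitSNAP); energies, areas, and per-atom or per-pair bulk references are assumed to be supplied in consistent units (e.g. eV and \AA$^{2}$).
\begin{verbatim}
def surface_energy(e_slab, e_bulk, area):
    """Surface energy (E_slab - E_bulk) / (2 A) for a slab with two
    equivalent surfaces; e_bulk is the bulk reference energy for the
    same number of atoms as the slab."""
    return (e_slab - e_bulk) / (2.0 * area)

def interface_energy(e_int_tot, n_w, e_bulk_w, n_zrc, e_bulk_zrc,
                     area, e_surf_w, e_surf_zrc):
    """Interface energy: subtract the bulk references per W atom and per
    Zr-C pair, divide by the interface area S, then remove the two free
    surface energies."""
    return ((e_int_tot - n_w * e_bulk_w - n_zrc * e_bulk_zrc) / area
            - e_surf_w - e_surf_zrc)
\end{verbatim}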
\par
Fitting to energies, forces, and the aforementioned objective functions was insufficient to ensure thermal expansion of the correct magnitude or even the correct sign. Objective functions were added for both W and ZrC to check whether the thermal expansion from room temperature to 3000~K and 2793~K, respectively, matched experiment. The thermal expansion was calculated as\cite{Kostanovskiy2018}
\begin{equation}
\bar{\alpha} = \frac{l_{T}-l_{0}}{l_{0}\big(T-293\big)} \quad ,
\end{equation}
where $l_{T}$ is the lattice parameter at the temperature of interest, $l_{0}$ is the lattice parameter at 293~K, and $T$ is the temperature of interest.
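For illustration, a minimal Python implementation of this expression is given below; the numbers in the example line are illustrative only and are not taken from the fits.
\begin{verbatim}
def mean_linear_expansion(l_t, l_0, temperature, t_ref=293.0):
    """Mean linear thermal-expansion coefficient (per K):
    (l_T - l_0) / (l_0 * (T - 293))."""
    return (l_t - l_0) / (l_0 * (temperature - t_ref))

# Illustrative values: a BCC W lattice parameter of 3.165 A at 293 K
# growing to ~3.20 A at 3000 K corresponds to alpha ~ 4e-6 / K.
print(mean_linear_expansion(3.20, 3.165, 3000.0))
\end{verbatim}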
\par
Some SNAP candidates would incorrectly predict the most stable structure of W and ZrC, e.g. predicting face centered cubic (FCC) structure instead of body centered cubic (BCC) structure for W and predicting space group 198 instead of 225 (rocksalt) for ZrC. While including FCC W structures in the training set was sufficient to correct the SNAP candidate to form BCC W, many SNAP candidates formed space group 198 ZrC even after it was added to the training set. To better prevent this behavior, an objective function was added to return an error if the SNAP candidate predicted space group 198 as more stable than 225 for ZrC.
\par
In addition to the material property objective functions, four stability objective functions were added to improve the performance of the candidates in MD simulations. These included (1) mitigation of atomic cluster formation $<$1.75 $\mbox{\AA}$ apart, (2) the graphite bulk modulus, and radial distribution functions (RDFs) for (3) W and (4) ZrC surfaces at high temperature. As mentioned in Section \ref{training-set-construction}, it is both obscure and practically difficult to include a significant amount of training with short interatomic distances. For example, the majority of the C-C distances in the compressed C training data is at $\approx$ 1.4 $\mbox{\AA}$, with very limited training down to $\approx$ 0.9 $\mbox{\AA}$. Without much training below 1.4 $\mbox{\AA}$, SNAP (and any MLIAP) is not fundamentally constrained to physical behavior at those distances. Also, turning off the SNAP contribution at short interatomic distances (Eq. \ref{eqn:inner}) and including the ZBL reference potential promotes, but does not ensure, the desired physical behavior.
For example, during MD testing of early SNAP candidates, C atoms would attempt to move to positions $<$1.0 $\mbox{\AA}$ away from other C atoms. Subsequently, nearby W and Zr atoms would enter this region of multiple atoms sitting $<$1.0 $\mbox{\AA}$ apart. To mitigate this behavior, we added a stability objective function to return errors for cluster formation. This objective function begins with a small ($\approx$2000 atom) test of ZrC embedded in W. The test uses the SNAP candidate to minimize the small dispersoid geometry and run NVT at 2000~K. LAMMPS is used to sort any atoms less than 1.75 $\mbox{\AA}$ apart into clusters. The test then applies a penalty value based on the size of the largest cluster, so that as the Dakota run progresses it will move away from variables that result in this unphysical clustering.
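The logic of this check can be sketched in a few lines of Python (a simplified illustration that ignores periodic boundary conditions and uses an arbitrary penalty scale, rather than the production objective function):
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def largest_cluster_penalty(positions, cutoff=1.75, weight=1.0):
    """Link atoms closer than `cutoff` (Angstrom) and return a penalty
    that grows with the size of the largest linked cluster; an isolated
    atom (cluster of size 1) incurs no penalty."""
    tree = cKDTree(np.asarray(positions))
    parent = list(range(len(positions)))

    def find(i):                      # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in tree.query_pairs(r=cutoff):
        parent[find(i)] = find(j)

    sizes = {}
    for i in range(len(positions)):
        root = find(i)
        sizes[root] = sizes.get(root, 0) + 1
    return weight * max(0, max(sizes.values()) - 1)
\end{verbatim}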
\par
Similarly, a graphite bulk modulus objective function was added to improve the stability of C. In this test, any candidate that produces a positive graphite bulk modulus returns a negligible error, while candidates that produce a negative bulk modulus are assigned an error proportional to the magnitude of that bulk modulus.
\par
During the optimization process, successive good candidates often had excellent bulk behavior but poorer performance for surfaces and interfaces, especially at high temperature. Furthermore, interfaces appeared to trigger clustering. For these reasons, objective functions were added to test whether the SNAP candidates could reproduce the AIMD RDFs for W and ZrC surfaces near or above their melting temperature. The ZrC RDF objective function was set to return an arbitrarily large error if it resulted in a cluster with more than 4 atoms. Otherwise, both the W and ZrC RDF objective functions returned the sum of the differences between the SNAP and AIMD RDF for each bin.
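The bin-wise comparison can be sketched as follows (a simplified Python illustration; a production RDF would include periodic images and ideal-gas normalization, but histogramming both snapshots identically is sufficient to illustrate the error measure):
\begin{verbatim}
import numpy as np
from scipy.spatial.distance import pdist

def pair_distance_histogram(positions, r_max=6.0, nbins=60):
    """Crude stand-in for an RDF: histogram of interatomic distances,
    normalized by the number of atoms (no periodic images)."""
    d = pdist(np.asarray(positions))
    hist, _ = np.histogram(d[d < r_max], bins=nbins, range=(0.0, r_max))
    return hist / max(len(positions), 1)

def rdf_objective(snap_positions, aimd_positions, r_max=6.0, nbins=60):
    """Sum over bins of |RDF_SNAP - RDF_AIMD|, as used for the W and ZrC
    surface objective functions."""
    g_snap = pair_distance_histogram(snap_positions, r_max, nbins)
    g_aimd = pair_distance_histogram(aimd_positions, r_max, nbins)
    return float(np.abs(g_snap - g_aimd).sum())
\end{verbatim}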
\par
With an increasing number of objective functions, the computational time for each Dakota run quickly adds up. Namely, for each candidate, FitSNAP needs to determine the bispectrum coefficients for the given variables, and all of the objective functions need to be run. A tradeoff has to be made between providing each candidate one node so that more candidates can be evaluated at once and providing each candidate multiple nodes to speed up objective function evaluation but possibly reduce the total number of evaluated candidates. To expedite the candidate throughput, only eight objective functions (the original five material property objective functions, the W and ZrC surface RDFs, and the ZrC space group) were evaluated for every single candidate. The remaining thermal expansion and stability objective functions were set to run only if the sum of the previous force, energy, and objective function errors was below some threshold value. This allowed each node to continue on to the next candidate if the current candidate was already exhibiting poor energy, force, and/or material property errors.
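The staged evaluation amounts to the following control flow (a minimal Python sketch; the dictionary interface and the placeholder penalty for skipped objectives are illustrative choices, not details of the production workflow):
\begin{verbatim}
def evaluate_candidate(always_on, gated, threshold, skip_penalty=1.0e6):
    """`always_on` and `gated` map objective names to zero-argument
    callables returning error values.  The expensive thermal-expansion
    and stability tests in `gated` run only when the summed error of the
    always-evaluated objectives is already below `threshold`."""
    errors = {name: fn() for name, fn in always_on.items()}
    if sum(errors.values()) < threshold:
        errors.update({name: fn() for name, fn in gated.items()})
    else:
        errors.update({name: skip_penalty for name in gated})
    return errors
\end{verbatim}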
\subsubsection{\label{GA}Genetic Algorithm Optimization}
While separating the training into different groups allows us to include more diverse training and to emphasize or de-emphasize particular training data, it makes the optimization problem more challenging. In addition to the seven SNAP hyper-parameters in this three-element system, the training set was sorted into groups of ground state, USPEX, surface and interface, room temperature AIMD, high temperature AIMD, m-active, and liquid structures. Furthermore, separate weights for energies and forces were provided for each training group. Given the large parameter optimization space, choosing the correct set of GA parameters in Dakota is key to expediting the optimization loop. During the GA, we want to efficiently find SNAP candidates at the global minimum of error values rather than get trapped at a local minimum. We have found that two GA parameters, population size and replacement type, have the greatest effect on the quality of the output SNAP candidates.
\par
Large population sizes require more computational time to reach subsequent generations, but allow more diversity in the population. Meanwhile, small population sizes evolve through generations more quickly but can lead to a poor solution \cite{Katoch2021}. To better determine the ideal population size, we compared the quality of candidates from four, otherwise identical, Dakota runs with varying population size. Figure \ref{fig:pop_size} shows the total number of good candidates produced with respect to the total number of candidates evaluated over the course of each Dakota run. Good candidates were identified as those yielding errors below our desired accuracy for energies, forces, and objective functions. Following one optimization chronologically in Figure \ref{fig:pop_size}, for a population size of 100 (data in black circles) the number of good candidates saturates at just 3 after four generations, with no further improvement in the subsequent 60 generations. Increasing the population size to 500, 22 good candidates are found, though this run similarly, and unfortunately, saturated in the latter half of the optimization. Tipping the balance more toward exploration than exploitation, a population size of 1000 significantly improves the good candidate output to 81, and only begins to plateau toward the end of the run. A population size of 1500 yields the greatest number of good candidates at 102, and we continue to see good candidates produced as the Dakota run progresses (i.e. no plateauing/converging on a local minimum). However, there are diminishing returns of good candidates as the population size is increased to 2000. Here we can see how increasing the population size begins to slow down the progression through generations, resulting in fewer good candidates in the same amount of walltime. All subsequent optimizations were run with a population size of 1500.
\begin{figure}[htb]
\centering
\includegraphics[width=0.45\textwidth]{Figures/Good_candidates_over_time.png}
\caption{Number of good candidates produced from Dakota runs with respect to total number of evaluated candidates. The population size is varied from 100 to 2000. These runs work to optimize 21 variables, and all GA parameters aside from population size are consistent across runs. }
\label{fig:pop_size}
\end{figure}
\par
Over the course of the GA, loss of diversity in the candidate hyper-parameter values can also coincide with premature convergence \cite{Katoch2021}, or a result at a local rather than global minimum. This can be combatted by reducing the selection pressure which favors candidates with better fitness (lower errors) to act as parents for subsequent generations. To reduce the selection pressure, we used the ``favor feasible" replacement type in Dakota to allow any candidates with feasible errors to participate in reproduction. In contrast, the default replacement type ``elitist" only allows the best candidates to reproduce. ``Favor feasible" better maintains diversity in the candidate population, as shown in the t-SNE plot in Figure \ref{fig:favor_feasible_hyper_params}. This allows us to visualize how good candidates produced using different replacement types span hyper-parameter and group weight space. Figure \ref{fig:favor_feasible_hyper_params} shows that the candidates produced using the elitist method are limited to a smaller volume of parameter space. Meanwhile, the candidates produced using the favor feasible method span a larger volume and thus indicate greater diversity in the candidate parameters. Furthermore, from these otherwise identical runs, the elitist method produced 9 good candidates while the favor feasible method produced 27 good candidates.
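An embedding of this kind can be produced along the following lines (a minimal Python sketch using scikit-learn; \texttt{elitist\_params} and \texttt{favor\_feasible\_params} stand for hypothetical arrays holding the 21 parameter values of the good candidates from the two runs, and the t-SNE settings shown are illustrative):
\begin{verbatim}
import numpy as np
from sklearn.manifold import TSNE

def embed_candidates(x, random_state=0):
    """2-D t-SNE embedding of candidate parameter vectors (one row per
    candidate, 21 columns); each column is standardized first so that
    parameters on different scales contribute comparably."""
    z = (x - x.mean(axis=0)) / (x.std(axis=0) + 1.0e-12)
    return TSNE(n_components=2, perplexity=10.0, init="pca",
                random_state=random_state).fit_transform(z)

# emb = embed_candidates(np.vstack([elitist_params, favor_feasible_params]))
\end{verbatim}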
\begin{figure}[htb]
\centering
\includegraphics[width=0.45\textwidth]{Figures/elitist_vs_favor_feasible.png}
\caption{t-SNE of the 21 total hyper-parameters and group weights of good candidates from two Dakota runs. The red triangles and blue circles show candidates produced using the ``elitist" and ``favor feasible" replacement types, respectively. All other GA parameters, including a population size of 1500, were identical between the two runs.}
\label{fig:favor_feasible_hyper_params}
\end{figure}
\par
\subsubsection{\label{optimized-snap-vars}Optimized SNAP variables}
Using these objective functions and optimized GA parameters resulted in a production W-ZrC potential that can run at divertor temperatures. The final SNAP variables are shown in Table \ref{table:hyper-parameters}. Here we can see how Dakota prioritized different training groups with weights ranging from approximately 3 to 8. We note the high weight on ground state energies and corresponding low error. The surfaces and interfaces force weight also needs to be relatively high to account for the frozen layers in those simulations. Without a high force weight, SNAP may see frozen layers as stable and attempt to produce them during molecular dynamics simulations.
\par
\renewcommand{\arraystretch}{1.5}
\setlength{\tabcolsep}{31pt}
\begin{table*}
\caption{Optimized SNAP hyper-parameters: element weights $w$, radial cutoffs $R$, and inner cutoff parameters $S$ and $D$. For each training group, the optimized energy weight (EW), optimized force weight (FW), energy mean absolute error (EMAE) (eV/atom), and force mean absolute error (FMAE) (eV/\AA) are given.}
\begin{tabular}{l c c c c}
\hline
\hline
& W & Zr & C &\\
\hline
$w$ (unitless) & 1.0 & 0.63 & 1.05 & \\
$R$ (\AA) & 4.42 & 5.26 & 3.29 &\\
$S$ (\AA) & 0.88 & 0.88 & 0.88 &\\
$D$ (\AA) & 0.12 & 0.12 & 0.12 &\\
\hline
training group & EW & FW & EMAE & FMAE \\
\hline
ground state & 3.94 & 7.32 & 4.3 $\times$ 10$^{-2}$ & 3.8 $\times$ 10$^{-2}$\\
USPEX & 6.41 & 3.94 & 1.1 $\times$ 10$^{0}$ & 2.6 $\times$ 10$^{0}$\\
Surfaces and Interfaces & 3.36 & 6.52 & 8.9 $\times$ 10$^{-2}$ & 3.0 $\times$ 10$^{-1}$ \\
AIMD 300~K & 3.5 & 3.74 & 1.2 $\times$ 10$^{-1}$ & 5.3 $\times$ 10$^{-1}$ \\
AIMD $\geq$1000~K & 4.1 & 7.94 & 4.2 $\times$ 10$^{-1}$ & 1.0 $\times$ 10$^{-1}$\\
M-active & 3.3 & 3.62 & 8.8 $\times$ 10$^{-1}$ & 3.5 $\times$ 10$^{0}$ \\
Liquid & 3.68 & 5.36 & 2.7 $\times$ 10$^{-1}$ & 2.4 $\times$ 10$^{0}$\\
\hline
\hline
\label{table:hyper-parameters}
\end{tabular}
\end{table*}
\section*{Supporting Information}
\begin{figure}[h]
\includegraphics[width=0.45\textwidth]{Figures/tsne_all_dft_supplemental.png}
\caption{Visualization of training set coverage on the bispectrum descriptor space in 2D using distance preserving t-SNE analysis including all labels.}
\label{fig:tsne-all-labels}
\end{figure}
\begin{figure}[h]
\includegraphics[width=0.45\textwidth]{Figures/stress_strain_curves.png}
\caption{Stress strain curves for each of the studied interfaces.}
\label{fig:stress-strain-curves}
\end{figure}
\section{Introduction}
For a class $\mathcal{C}$ of binary linear codes and for some rate $R \in (0,1)$, we consider the \emph{maximum-likelihood decoding threshold} $\theta_{\mathcal{C}}(R)$ for $\mathcal{C}$ at $R$. This is the unique $\theta \in [0,\tfrac{1}{2}]$ such that
\begin{itemize}
\item for each $p \in (0,\theta)$ and all $\varepsilon > 0$, given a binary symmetric channel of bit-error rate $p$, there exists a code $C \in \mathcal{C}$ of rate at least $R$ such that the probability of an error in maximum-likelihood decoding on $C$ is at most $\varepsilon$, and
\item for each $p \in (\theta,\tfrac{1}{2})$ there exists $\varepsilon > 0$ such that, given a binary symmetric channel of bit-error rate $p$, for each code $C \in \mathcal{C}$ of rate at least $R$ the probability of an error in maximum-likelihood decoding on $C$ is at least $\varepsilon$.
\end{itemize}
The function $\theta_{\mathcal{C}}(R)$ is the \emph{threshold function} for $\mathcal{C}$; it essentially measures the maximum bit-error rate that can be `tolerated' by rate-$R$ codes in $\mathcal{C}$ with vanishing probability of a decoding error. Our main result proves an upper bound on this function for the class $\mathcal{G}$ of cycle codes of graphs:
\begin{theorem}\label{mainintro}
If $\mathcal{G}$ is the class of cycle codes of graphs and $R \in (0,1)$, then $\theta_{\mathcal{G}}(R) \le \tfrac{(1-\sqrt{R})^2}{2(1+R)}$. If equality holds, then $R = 1 - \tfrac{2}{d}$ for some $d \in \mathbb Z$.
\end{theorem}
This generalises a result of Decreusefond and Z\'{e}mor [\ref{dz}], who proved the same upper bound for the class of cycle codes of regular graphs. Our proof follows theirs conceptually, although our exposition and notation are somewhat different. The proof in [\ref{dz}] implicitly involves a problem of enumerating `non-backtracking' walks that is trivial for regular graphs but not in general; much of the original material in our proof is related to this difficulty.
When $R = 1 - \tfrac{2}{d}$ for some $d \in \mathbb Z$ (that is, when the cycle codes of large $d$-regular graphs have rate close to $R$) our theorem does not improve the bound $\theta_{\mathcal{G}}(R) \le \tfrac{(1-\sqrt{R})^2}{2(1+R)}$. In this case, however, the bound is known to be best-possible; Z\'{e}mor and Tillich [\ref{zt97}] showed, when $d-1$ is one of various prime powers, that certain families of $d$-regular Ramanujan graphs have cycle codes attaining this threshold (that is, can tolerate a bit-error rate of $p$ for any $p < \tfrac{(1-\sqrt{R})^2}{2(1+R)}$), and later random constructions due to Alon and Bachmat [\ref{ab06}] can be demonstrated to give the same result for all $d \ge 3$. Combining these constructions with Theorem~\ref{mainintro}, we have the following:
\begin{theorem}\label{intval}
If $\mathcal{G}$ is the class of cycle codes of graphs, and $R = 1 - \tfrac{2}{d}$ for some integer $d \ge 3$, then $\theta_{\mathcal{G}}(R) = \tfrac{(1-\sqrt{R})^2}{2(1+R)}$.
\end{theorem}
Theorem~\ref{mainintro} implies that this equality holds for no other $R \in (0,1)$; this can be interpreted as a statement that the cycle codes of regular graphs are `best' among all cycle codes.
Theorem~\ref{mainintro} will be derived as a consequence of a stronger upper bound for $\theta_{\mathcal{G}}$, given in Theorem~\ref{maintechnical}. While the bound in Theorem~\ref{maintechnical} is highly technical in its statement, we believe (Conjecture~\ref{mainconjecture}) that it is in fact the correct upper bound.
\subsection*{Minor-Closed Classes}
The main result of [\ref{nvz14}] shows that the failure of the cycle codes to be `asymptotically good' extends to every \emph{proper minor-closed} subclass of binary codes; that is, every proper subclass that is closed under puncturing and shortening. The proof uses a deep result in matroid structure theory due to Geelen, Gerards and Whittle [\ref{ggw}] that states, roughly, that the `highly connected' members of any such class of codes are close to being either cycle codes or their duals.
We believe that this paradigm that the members of any minor-closed subclass of binary codes are `nearly' cycle or cocycle codes will also apply to the threshold function. We predict that the threshold function $\theta_{\mathcal{G}}(R)$ for any minor-closed class agrees with that of either the class $\mathcal{G}$ of cycle codes or the class $\mathcal{G}^*$ of cocycle codes. It is easily shown (see [\ref{ggw}]) that $\theta_{\mathcal{G}^*}(R) = 0$ for all $R \in (0,1)$. Geelen, Gerards and Whittle [\ref{ggw}] made the following striking conjecture:
\begin{conjecture}
Let $\mathcal{C}$ be a proper subclass of the binary linear codes that is closed under puncturing and shortening. Either
\begin{itemize}
\item $\mathcal{G} \subseteq \mathcal{C}$ and $\theta_{\mathcal{C}} = \theta_{\mathcal{G}}$, or
\item $\theta_{\mathcal{C}} = 0$.
\end{itemize}
\end{conjecture}
In other words, the presence or absence of the class of cycle codes should be all that determines the threshold function for any minor-closed class. Proving this conjecture would likely require a combination of the matroidal techniques in [\ref{nvz14}] and the algebraic and probabilistic ideas in this paper.
\section{Preliminaries}
We give some basic definitions in coding theory that, together with the definition of threshold function in the introduction, are all that are required for this paper; a more comprehensive reference is found in [\ref{ms77}]. We also use some standard graph theory terminology from [\ref{diestel}] and [\ref{gr}].
For integers $n \ge k \ge 0$, a \emph{binary linear $[n,k]$-code} is a $k$-dimensional subspace $C$ of some $n$-dimensional vector space $V$ over $\GF(2)$. We call the elements of $C$ \emph{codewords}. The \emph{rate} of $C$ is the ratio $R = \tfrac{k}{n}$.
\subsection{Cycle codes}This paper is concerned solely with the cycle codes of graphs. For a finite graph $G = (V,E)$, the \emph{cycle code} of $G$ is the subspace of $\GF(2)^{E}$ whose elements are exactly the characteristic vectors of cycles of $G$ (that is, edge-disjoint unions of circuits of $G$, or equivalently edge-sets of even subgraphs of $G$). We write $\mathcal{G}$ for the class of all such codes; it is well-known that every cycle code is the cycle code of a connected graph.
If $G$ is connected, then its cycle code $C$ is a binary linear $[n,k]$-code, where $n = |E|$ and $k = |E| - |V| + 1$, giving $R = 1 - \tfrac{|V|}{|E|} + \tfrac{1}{|E|}$. The ratio $\tfrac{|V|}{|E|}$ is exactly $\tfrac{2}{\mu(G)}$, where $\mu(G)$ denotes the average degree of $G$; we adopt this notation $\mu(G)$ throughout the paper. The above formula implies that a large connected graph $G$ has a cycle code of rate $R \approx 1 - \tfrac{2}{\mu(G)}$. A simple `error-tolerance' parameter of $C$ is the minimum Hamming distance $d$ between two distinct codewords of $C$; this is equal to the \emph{girth} of $G$ (the length of a shortest circuit of $G$) -- we will write $d(G)$ for the girth of a graph $G$.
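As a concrete illustration of these formulas (a sketch of ours, not part of the development above), the following Python snippet computes the length, dimension, rate and girth of the cycle code of a small connected graph given as an edge list; the function names are ours, and the girth routine is the standard BFS computation.
\begin{verbatim}
from itertools import combinations
from collections import deque

def shortest_circuit_length(vertices, edges):
    """Girth of a simple graph, by BFS from every vertex (fine for small graphs)."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    best = float('inf')
    for s in vertices:
        dist, parent = {s: 0}, {s: None}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w], parent[w] = dist[u] + 1, u
                    queue.append(w)
                elif parent[u] != w:   # non-tree edge: closes a circuit through s
                    best = min(best, dist[u] + dist[w] + 1)
    return best

def cycle_code_parameters(vertices, edges):
    """(n, k, rate, girth) of the cycle code of a connected graph."""
    n = len(edges)                       # code length  n = |E|
    k = len(edges) - len(vertices) + 1   # dimension    k = |E| - |V| + 1
    return n, k, k / n, shortest_circuit_length(vertices, edges)

V = [0, 1, 2, 3]
E = list(combinations(V, 2))             # K_4: 3-regular, rate 1/2, girth 3
print(cycle_code_parameters(V, E))       # -> (6, 3, 0.5, 3)
\end{verbatim}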
\subsection{Maximum-likelihood decoding} Suppose that some codeword $c$ of a linear $[n,k]$-code $C \subseteq V$ is transmitted across a binary symmetric channel with bit-error rate $p \in (0,\tfrac{1}{2})$, giving some $x \in V$ obtained by switching the value of each entry of $c$ independently with probability $p$. \emph{Maximum-likelihood decoding} (abbreviated ML-decoding) is the process where, given $x$, we attempt to recover $c$ by choosing the codeword $c' \in C$ that is most likely to have been sent, given that $x$ has been received. If this choice is ambiguous (that is, if this maximum is not unique) or gives an incorrect answer (that is, if $c' \ne c$), then we say a \emph{decoding error} has been made; this occurs with some probability depending on $p$ and $C$ but, by linearity, not on the particular codeword $c$. In this particular setting of a constant bit-error probability $p < \tfrac{1}{2}$ that acts independently on each bit, ML-decoding is equivalent to \emph{nearest-neighbour} decoding, where $c'$ is simply chosen to be the closest codeword to $x$ in Hamming distance. We remark that our definition of ML-decoding deviates slightly from the standard one, in which a decoding error is also avoided with nonzero probability in the case of an ambiguous choice. This difference will not affect the asymptotic analysis with which we are concerned.
ML-decoding is hard for general binary codes [\ref{bmt78}], but an attractive property of cycle codes of graphs (and an important motivating factor for this paper) is that ML-decoding can be implemented efficiently for cycle codes using standard techniques in combinatorial optimization (see [\ref{nh81}]). This is the case because the probability of a decoding error can be understood purely graphically: if $C$ is the cycle code of a graph $G = (V,E)$ and codewords of $C$ are transmitted across a channel of bit-error rate $p \in (0, \tfrac{1}{2})$, then the probability of an ML-decoding error is exactly the probability, given a set $X \subseteq E$ formed by choosing each edge uniformly at random with probability $p$, that $X$ contains at least half of the edges of some circuit of $G$. Thus, to prove our main theorem, we study random subsets of edges of a graph. From this point on, given a set $E$ and some $p \in [0,1]$, we refer to a random set $X \subseteq E$ formed by including each element of $E$ independently at random with probability $p$ as a \emph{$p$-random subset of $E$}.
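To make the graphical description of the decoding error concrete, the following sketch (again illustrative only, with function names of our choosing) estimates $f_p^{1/2}$ for a small graph by sampling $p$-random edge subsets and testing whether some circuit has at least half of its edges in the sample; circuits are enumerated by brute force, so this is only feasible for very small graphs.
\begin{verbatim}
import random
from itertools import combinations

def circuits(vertices, edges):
    """All circuits of a small simple graph, by brute force over edge subsets:
    a circuit is a nonempty connected edge set in which every vertex it
    touches has degree exactly 2."""
    found = []
    for r in range(3, len(edges) + 1):
        for subset in combinations(range(len(edges)), r):
            deg, adj = {}, {}
            for i in subset:
                u, v = edges[i]
                deg[u] = deg.get(u, 0) + 1
                deg[v] = deg.get(v, 0) + 1
                adj.setdefault(u, []).append(v)
                adj.setdefault(v, []).append(u)
            if any(d != 2 for d in deg.values()):
                continue
            start = next(iter(deg))
            seen, stack = {start}, [start]
            while stack:                       # connectivity check
                for w in adj[stack.pop()]:
                    if w not in seen:
                        seen.add(w)
                        stack.append(w)
            if len(seen) == len(deg):
                found.append(frozenset(subset))
    return found

def decoding_error_estimate(vertices, edges, p, trials=20000):
    """Monte Carlo estimate of f_p^{1/2}(G), i.e. of the ML-decoding error
    probability of the cycle code of G at bit-error rate p."""
    circ = circuits(vertices, edges)
    hits = 0
    for _ in range(trials):
        X = {i for i in range(len(edges)) if random.random() < p}
        if any(2 * len(c & X) >= len(c) for c in circ):
            hits += 1
    return hits / trials

V = [0, 1, 2, 3]
E = list(combinations(V, 2))   # K_4 has 7 circuits: 4 triangles and 3 squares
print(decoding_error_estimate(V, E, p=0.1))
\end{verbatim}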
\section{Non-backtracking walks}\label{walksection}
A \emph{non-backtracking walk} of length $\ell$ in a graph $G$ is a walk $(v_0,v_1,\dotsc,v_\ell)$ of $G$ so that $v_{i+1} \ne v_{i-1}$ for all $i \in \{1,\dotsc, \ell-1\}$. In all nontrivial cases, the number of such walks grows roughly exponentially in $\ell$; in this section we estimate the base of this exponent, mostly following ([\ref{alon}], Theorem 1).
Let $G = (V,E)$ be a simple connected graph of minimum degree at least $2$.
Let $\bar{E} = \{(u,v) \in V^2: u \sim_G v\}$ be the $2|E|$-element set of arcs of $G$. Let $B = B(G) \in \{0,1\}^{\bar{E} \times \bar{E}}$ be the matrix so that $B_{(u,v),(u',v')} = 1$ if and only if $u' = v$ and $u \ne v'$. It is easy to see that
\begin{enumerate}
\item\label{scd} $B$ is the adjacency matrix of a strongly connected digraph (essentially the `line digraph' of $G$), and
\item\label{nbw} For each integer $\ell \ge 1$, the entry $(B^{\ell})_{e,f}$ is the number of non-backtracking walks of length $\ell+1$ in $G$ with first arc $(v_0,v_1) = e$ and last arc $(v_\ell,v_{\ell+1})= f$.
\end{enumerate}
By (\ref{scd}) and the Perron-Frobenius theorem (see [\ref{gr}], section 8.8), there is a positive real eigenvalue $\lambda_*$ of $B$ and an associated positive real eigenvector $w_*$, so that $|\lambda_*| \ge |\lambda|$ for every eigenvalue $\lambda$ of $B$. Furthermore, by Gelfand's formula [\ref{gelfand}] we have $\lambda_* = \lim_{n \to \infty} \norm{B^n}^{1/n}$, where $\norm{B^n}$ denotes the sum of the absolute values of the entries of $B^n$. By (\ref{nbw}), the parameter $\lambda_* = \lambda_*(B(G))$ thus governs the growth of non-backtracking walks in $G$.
Note that
$B^\ell$ has only nonnegative entries, so $\norm{B^\ell} = \overline{\mathbf{1}}^T B^{\ell} \overline{\mathbf{1}}$. Let $\mu = \mu(G) = \tfrac{1}{n}|\bar{E}|$ denote the average degree of $G$. The proof of Theorem 1 of [\ref{alon}] contains the following:
\begin{lemma} Let $G$ be a connected graph of minimum degree at least $2$ and let $B = B(G)$. Then $\overline{\mathbf{1}}^TB^{\ell} \overline{\mathbf{1}} \ge (n\mu) \Lambda^{\ell}$, where
\[\Lambda = \Lambda(G) = \prod_{v \in V}(d_G(v)-1)^{d_G(v)/(n \mu)}.\]
\end{lemma}
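For illustration (a sketch of ours, with function names of our choosing), the objects above are easy to evaluate on small examples; the snippet below builds $B(G)$, checks the walk-counting property (\ref{nbw}) on one entry, and compares $\lambda_*(B(G))$, $\Lambda(G)$ and $\mu(G)-1$ on a graph whose average degree is not an integer.
\begin{verbatim}
import numpy as np

def nonbacktracking_matrix(edges):
    """Arcs of G and the matrix B with B[(u,v),(u',v')] = 1 iff u' = v and v' != u."""
    arcs = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
    idx = {a: i for i, a in enumerate(arcs)}
    B = np.zeros((len(arcs), len(arcs)), dtype=int)
    for (u, v) in arcs:
        for (x, y) in arcs:
            if x == v and y != u:
                B[idx[(u, v)], idx[(x, y)]] = 1
    return arcs, B

def count_nbw(adj, e, f, length):
    """Brute-force count of non-backtracking walks of the given length
    whose first arc is e and whose last arc is f."""
    walks = [[e[0], e[1]]]
    for _ in range(length - 1):
        walks = [w + [y] for w in walks for y in adj[w[-1]] if y != w[-2]]
    return sum(1 for w in walks if (w[-2], w[-1]) == f)

def Lambda(degrees):
    """Lambda(G) = prod_v (d(v)-1)^{d(v)/(n mu)}, where n*mu = sum of the degrees."""
    total = sum(degrees)
    return float(np.prod([(d - 1) ** (d / total) for d in degrees]))

# K_4 with the edge {2,3} subdivided by a new vertex 4: degrees 3,3,3,3,2.
V = [0, 1, 2, 3, 4]
E = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 4), (4, 3)]
adj = {v: [] for v in V}
for u, w in E:
    adj[u].append(w)
    adj[w].append(u)
arcs, B = nonbacktracking_matrix(E)
degrees = [len(adj[v]) for v in V]

ell = 4
P = np.linalg.matrix_power(B, ell)
e, f = arcs[0], arcs[1]
print(P[0, 1], count_nbw(adj, e, f, ell + 1))   # the two counts agree

lam_star = max(abs(np.linalg.eigvals(B)))
mu = 2 * len(E) / len(V)                        # average degree 14/5
print(lam_star, Lambda(degrees), mu - 1)        # lam_star >= Lambda > mu - 1 here
\end{verbatim}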
It follows in turn from this lemma that $\norm{B^\ell}^{1/\ell} \ge \Lambda(G)$, so $\lambda_*(B(G)) \ge \Lambda(G)$. As observed in [\ref{alon}], the log-convexity of the function $(x-1)^x$ (for $x > 1$) implies that $\Lambda(G) \ge \mu(G)-1$. For each $x \in \mathbb R$, let $\eta(x) = \min(x - \lfloor x \rfloor, \lceil x \rceil - x)$ denote the distance from $x$ to the nearest integer. The following lemma, which is proved by slightly improving the bound $\Lambda(G) \ge \mu(G)-1$ when $\mu(G)$ is not an integer, is an unilluminating exercise in calculus.
\begin{lemma}\label{boundlambda}
Let $\mu_0 \in \mathbb R$ satisfy $\mu_0 \ge 2$ and let $G$ be a connected graph with minimum degree at least $2$ and average degree at least $\mu_0$. Then $\lambda_*(B(G)) \ge \mu_0-1 + \tfrac{\eta(\mu_0)^3}{8\mu_0^3}$.
\end{lemma}
\begin{proof}
Let $n = |V(G)|$, let $d_1, \dotsc, d_n$ be the degrees of the vertices of $G$, and let $\mu = \tfrac{1}{n}\sum_{i=1}^nd_i \ge \mu_0$ be the average degree of $G$. Let $\eta = \eta(\mu)$; note that $\mu \ge 2 + \eta$. Define $g\colon (1,\infty) \to \mathbb R$ by $g(x) = x \ln (x-1)$; observe that $g'(x) = \tfrac{x}{x-1} + \ln(x-1)$ and $g''(x) = \tfrac{x-2}{(x-1)^2}$. We have
$\ln (\Lambda(G)) = \tfrac{1}{n \mu} \sum_{i = 1}^n g(d_i)$; for each $i$, Taylor's theorem gives
\[g(d_i) = g(\mu) + g'(\mu)(d_i-\mu) + \tfrac{1}{2}g''(\xi_i)(d_i-\mu)^2\]
for some $\xi_i$ between $d_i$ and $\mu$. We now estimate the `error' terms.
\begin{claim}
$\tfrac{1}{2}g''(\xi_i)(d_i-\mu)^2 \ge \tfrac{\eta^3}{8\mu^2}$ for each $i$.
\end{claim}
\begin{proof}[Proof of claim:]
First suppose that $d_i = 2$. Then $g(d_i) = 0$, so
\begin{align*} \tfrac{1}{2}g''(\xi_i)(2-\mu)^2 &= -g(\mu) - g'(\mu)(2-\mu) \\
&= (\mu-2)\left(\tfrac{\mu}{\mu-1} + \ln(\mu-1)\right) - \mu \ln(\mu-1)\\
&= \tfrac{\mu(\mu-2)}{\mu-1} - 2 \ln(\mu-1).
\end{align*}
Note that the above expression is equal to $1.174\dotsc > 1$ for $\mu = \tfrac{7}{3}$, and is increasing in $\mu$ for $\mu \in (2,\infty)$. If $\mu \ge \tfrac{7}{3}$ then we therefore have $\tfrac{1}{2}g''(\xi_i)(2-\mu)^2 > 1$. If $\mu < \tfrac{7}{3}$ then $\mu = 2 + \eta$ and $\eta < \tfrac{1}{3}$, so
\begin{align*}
\tfrac{\mu(\mu-2)}{\mu-1} - 2 \ln(\mu-1) &= \tfrac{\eta(2 + \eta)}{1 + \eta} - 2 \ln(1 + \eta) \\
&\ge \tfrac{\eta(2 + \eta)}{1 + \eta} - 2(\eta - \tfrac{1}{2} \eta^2 + \tfrac{1}{3}\eta^3)\\
&= \tfrac{\eta^3}{3(1+\eta)}(1-2\eta)\\
&> \tfrac{1}{12}\eta^3,
\end{align*}
where the last inequality uses $\eta < \tfrac{1}{3}$. Therefore if $d_i = 2$ we have $\tfrac{1}{2}g''(\xi_i)(d_i-\mu)^2 \ge \min(1,\tfrac{1}{12}\eta^3) = \tfrac{1}{12}\eta^3 > \tfrac{\eta^3}{8\mu^2}$.
Suppose that $d_i \ge 3$. Since $\xi_i$ is between $\mu$ and $d_i$, we have $\xi_i \ge \min(d_i,\mu) \ge \min(3,\mu)$, so $\xi_i - 2 \ge \eta$. Therefore $g''(\xi_i) \ge \tfrac{\eta}{(\xi_i-1)^2} > \tfrac{\eta}{\xi_i^2}$. Thus, using $\xi_i \le \max(\mu,d_i)$, we have
\[\tfrac{1}{2}g''(\xi_i)(d_i-\mu)^2 \ge \frac{\eta(d_i-\mu)^2}{2\xi_i^2} \ge \frac{\eta(d_i-\mu)^2}{2\max(\mu,d_i)^2}.\]
It is easy to show, since $d_i \in \mathbb Z$, that $\left|\tfrac{d_i-\mu}{\max(\mu,d_i)}\right| \ge \tfrac{\eta}{\mu + \eta} \ge \tfrac{\eta}{2\mu}$, so $\tfrac{1}{2}g''(\xi_i)(d_i-\mu)^2 \ge \tfrac{\eta^3}{8 \mu^2}$ and the claim follows.
\end{proof}
Using the claim, we have
\begin{align*}
\ln(\Lambda(G)) &= \frac{1}{n\mu} \sum_{i = 1}^n g(d_i)\\
&= \frac{1}{n\mu}\sum_{i=1}^n \left(g(\mu) + g'(\mu)(d_i-\mu) + \tfrac{1}{2}g''(\xi_i)(d_i-\mu)^2\right)\\
&= \frac{1}{n\mu}\left(n g(\mu) + \sum_{i=1}^n\tfrac{1}{2}g''(\xi_i)(d_i-\mu)^2\right)\\
&\ge \ln(\mu-1) + \frac{1}{n\mu}\left(\frac{n\eta^3}{8\mu^2}\right)\\
&= \ln(\mu-1) + \frac{\eta^3}{8\mu^3}.
\end{align*}
So $\Lambda(G) \ge (\mu-1)\exp\left(\tfrac{\eta^3}{8\mu^3}\right) \ge \mu-1 + (\mu-1)\left(\frac{\eta^3}{8\mu^3}\right) \ge \mu-1 + \frac{\eta^3}{8\mu^3}$. One easily checks that the function $h(y) = y-1 + \tfrac{\eta(y)^3}{8y^3}$ is strictly increasing on $(2,\infty)$; since $\mu \ge \mu_0$ and $\lambda_*(B(G)) \ge \Lambda(G)$, it follows that $\lambda_*(B(G)) \ge \mu_0-1 + \tfrac{\eta(\mu_0)^3}{8\mu_0^3}$, as required.
\end{proof}
For each $\mu \ge 2$, let $\mathcal{G}_{\mu}$ denote the class of connected graphs with average degree at least $\mu$ and minimum degree at least $2$. For every integer $n \ge \mu+1$, let \[\lambda_*(\mu;n) = \inf\{\lambda_*(B(G))\colon G \in \mathcal{G}_\mu, |V(G)| = n\},\] noting that this infimum is finite since $K_n \in \mathcal{G}_{\mu}$ for all $n \ge \mu+1$. Define $\lambda_*\colon [2,\infty) \to \mathbb R$ by $\lambda_*(\mu) = \liminf_{n \to \infty} \lambda_*(\mu;n)$. The following is immediate from Lemma~\ref{boundlambda}.
\begin{lemma}\label{lambdastar}
$\lambda_*(\mu) \ge \mu-1$. If equality holds, then $\mu \in \mathbb Z$.
\end{lemma}
Having defined the function $\lambda_*$, we can now state the more technical main theorem from which Theorem~\ref{mainintro} will easily follow.
\begin{theorem}\label{maintechnical}
If $\mathcal{G}$ is the class of cycle codes of graphs and $R \in (0,1)$, then $\theta_{\mathcal{G}}(R) \le \tfrac{1}{2}\left(1-\sqrt{1-\tfrac{1}{\lambda^2}}\right)$, where $\lambda = \lambda_*\left(\tfrac{2}{1-R}\right)$.
\end{theorem}
As mentioned, we believe the above bound is the true value for $\theta_{\mathcal{G}}$.
\begin{conjecture}\label{mainconjecture}
The bound in Theorem~\ref{maintechnical} holds with equality for all $R \in (0,1)$.
\end{conjecture}
By Theorem~\ref{intval}, this conjecture holds when $R = 1 - \tfrac{2}{d}$ for $d \in \mathbb Z$.
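To give a feel for the numbers involved (an illustrative aside), one can evaluate the bound of Theorem~\ref{maintechnical} after substituting for $\lambda_*\left(\tfrac{2}{1-R}\right)$ the explicit lower bound of Lemma~\ref{boundlambda}; since the bound is decreasing in $\lambda$, this substitution still yields a valid upper bound on $\theta_{\mathcal{G}}(R)$, one which agrees with $\tfrac{(1-\sqrt{R})^2}{2(1+R)}$ exactly when $\tfrac{2}{1-R}$ is an integer and improves on it (very slightly) otherwise.
\begin{verbatim}
import math

def eta(x):
    """Distance from x to the nearest integer."""
    return min(x - math.floor(x), math.ceil(x) - x)

def theta_bound(lam):
    """(1/2)(1 - sqrt(1 - 1/lam^2)), the shape of the bound in the technical theorem."""
    return 0.5 * (1.0 - math.sqrt(1.0 - 1.0 / lam ** 2))

for R in [0.25, 1.0 / 3.0, 0.4, 0.5, 0.6]:
    mu = 2.0 / (1.0 - R)
    classical = (1 - math.sqrt(R)) ** 2 / (2 * (1 + R))   # equals theta_bound(mu - 1)
    lam_lower = mu - 1 + eta(mu) ** 3 / (8 * mu ** 3)     # lower bound on lambda_*(mu)
    print(R, classical, theta_bound(lam_lower))
\end{verbatim}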
\section{Covering trees}
A \emph{locally finite, infinite rooted tree} (hereafter just a \emph{tree}) is a connected acyclic infinite graph $\Gamma$ of finite maximum degree together with a particular vertex $r$ called the \emph{root}. Adopting some notation of [\ref{dz}] and [\ref{lyons}], for $x \in V(\Gamma)$ we write $|x|$ for the distance of $x$ from $r$, and we write $x \preceq y$ if $x$ is on the path from $r$ to $y$. We write $x \wedge y$ for the \emph{join} of $x$ and $y$, the vertex of largest distance from $r$ that is on both the path from $r$ to $x$ and the path from $r$ to $y$.
The trees we are interested in are `covering trees' for finite graphs. Let $G = (V,E)$ be a finite graph of minimum degree at least $2$ and let $e = (u,v)$ be an arc of $G$. The \emph{covering tree of $G$ rooted at $e$} is the tree $\Gamma = \Gamma_e(G)$ where the root is the length-zero walk $(u)$ of $G$, the other vertices are the non-backtracking walks of $G$ with first arc $e$ and the children of each walk $(u,v,v_2,\dotsc,v_\ell)$ of length $\ell$ are its extensions $(u,v,v_2,\dotsc,v_\ell,v_{\ell+1})$ to non-backtracking walks of length $\ell+1$ (i.e., where $v_{\ell+1}$ is adjacent to $v_{\ell}$ in $G$ and is not equal to $v_{\ell-1}$). Note that the number of vertices of $\Gamma_e(G)$ at distance $\ell$ from the root is the total number of length-$\ell$ non-backtracking walks of $G$ with first arc $e$, which is exactly the sum of the entries of the $e$-row of $B(G)^{\ell-1}$.
There is a natural homomorphism that associates each walk with its final vertex; if $G$ has large girth, this map preserves much of the local structure of $G$. To analyse the ubiquity of cycles in a random sample of edges of $G$, we follow [\ref{dz}] and study a problem of `fractional percolation' on covering trees, bounding the probability that, given a $p$-random subset of $E(\Gamma_{e}(G))$, there is a long path starting at $r$ that is, in a certain sense, dense with edges in the subset.
Let $\Gamma$ be such a tree, and let $\alpha \in (0,1)$. Given $X \subseteq E(\Gamma)$, we say that a finite path $(v_0,v_1, \dotsc, v_n)$ of $\Gamma$ is \emph{$\alpha$-adapted} with respect to $X$ if, for each $i \in \{1, \dotsc, n\}$, the subpath $(v_0,\dotsc,v_i)$ contains at least $\alpha i$ edges of $X$. If $t_1,t_2, \dotsc, $ is a sequence of positive integers and $T_n = \sum_{i=1}^n t_i$ is its sequence of partial sums (with $T_0 = 0$), then we say that a path $(x_0,x_1, \dotsc, x_n)$ of $\Gamma$ is \emph{$(\alpha,t)$-adapted} with respect to $X$ if for each $i \in \mathbb Z_{>0}$ for which $T_{i+1} < n$, the path $(x_{T_{i}},x_{T_{i}+1}, \dotsc, x_{T_{i+1}-1})$ is $\alpha$-adapted, and also the path $(x_{T_j},x_{T_j+1}, \dotsc, x_n)$ is $\alpha$-adapted, where $j$ is minimal so that $T_{j+1} > n$. Note that any initial subpath of an $(\alpha,t)$-adapted path is $(\alpha,t)$-adapted.
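As an aside (illustrative only), the $\alpha$-adapted condition is straightforward to test: given the $0/1$ indicators of which edges of a path lie in $X$, one checks every prefix; the $(\alpha,t)$-adapted condition then applies the same test to consecutive blocks of lengths $t_1, t_2, \dotsc$ as in the definition above.
\begin{verbatim}
def is_alpha_adapted(indicators, alpha):
    """indicators[j] = 1 if the (j+1)-st edge of the path lies in X; the path is
    alpha-adapted iff every prefix of length i contains at least alpha*i such edges."""
    marked = 0
    for i, bit in enumerate(indicators, start=1):
        marked += bit
        if marked < alpha * i:
            return False
    return True

print(is_alpha_adapted([1, 0, 1, 1], 0.5))   # prefix counts 1,1,2,3 -> True
print(is_alpha_adapted([0, 1, 1, 1], 0.5))   # first prefix fails   -> False
\end{verbatim}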
We will be considering $p$-random subsets $X$ of $E(\Gamma)$. We first estimate, with an argument used in ([\ref{dz}], Proposition 2), the probability that a given path is $\alpha$-adapted with respect to $X$. Henceforth, we denote the `relative entropy' between $\alpha$ and $p$ by
\[D(\alpha \Vert p) = \alpha \ln\left(\frac{\alpha}{p}\right) + (1-\alpha)\ln\left(\frac{1-\alpha}{1-p}\right).\]
We remark that [\ref{dz}] defines $D(\alpha \Vert p)$ as the negative of this formula. \begin{lemma}\label{boundadapted}
Let $0 < p < \alpha < 1$. There exists $c > 0$ so that, if $[x_0,x_1,\dotsc, x_n]$ is a finite path, and $X$ is a $p$-random subset of the edges of the path, then
\[\mathbf{P}\left(\ \!\![x_0, \dotsc, x_n] \textrm{ is $\alpha$-adapted w.r.t. $X$} \right)\ge cn^{-5/2}\exp(-n D(\alpha \Vert p)).\]
\end{lemma}
\begin{proof}
We first make a claim that will simplify the estimate.
\begin{claim} If $|X| \ge \alpha n$, then there exists $\ell \in \{0, \dotsc, n-1\}$ such that the path corresponding to the cyclic ordering $[x_\ell,x_{\ell+1},\dotsc,x_n = x_0,x_1,\dotsc,x_{\ell}]$ is $\alpha$-adapted with respect to $X$.
\end{claim}
\begin{proof}[Proof of claim:]
For each $i \in \mathbb Z_n$, let $t_i = 1-\alpha$ if the edge $x_{i}x_{i+1}$ is in $X$, and $t_i = -\alpha$ otherwise. For $0 \le j \le j' \le n$ let $S(j,j') = \sum_{i=j}^{j'-1} t_i$; observe that if $S(j,j') \ge 0$ then the path from $x_j$ to $x_{j'}$ has an $\alpha$-fraction of its edges in $X$. In particular, we have $S(0,n) = |X| - \alpha n \ge 0$. Choose $\ell \in \{0, \dotsc, n-1\}$ so that $S(0,\ell)$ is minimized. For $\ell \le h \le n$ we have $S(\ell,h) = S(0,h) - S(0,\ell) \ge 0$ and for $1 \le h \le \ell$ we have $S(\ell,n) + S(0,h) = S(0,n) + (S(0,h) - S(0,\ell)) \ge 0$. It follows from the observation that $\ell$ satisfies the claim.
\end{proof}
By the above claim and symmetry, the probability that the path $[x_0, \dotsc, x_n]$ is $\alpha$-adapted is at least $\tfrac{1}{n}\mathbf{P} (|X| \ge \alpha n)$.
It is straightforward to show using $0 < \alpha < 1$ and Stirling's approximation that all sufficiently large $n$ satisfy
\begin{align*} \lceil \alpha n \rceil! \le (\alpha n + 1)\lfloor \alpha n \rfloor! \le \sqrt{2\pi n^{3}} \left(\tfrac{\alpha n}{e}\right)^{\alpha n}\\
(n-\lceil \alpha n \rceil)! \le \sqrt{2\pi n}\left(\tfrac{(1-\alpha)n}{e}\right)^{(1-\alpha)n},
\end{align*}
so Stirling's approximation gives $\binom{n}{\lceil \alpha n \rceil} \ge \tfrac{1}{\sqrt{2\pi}n^{3/2}}\left({\alpha^{\alpha}(1-\alpha)^{1-\alpha}}\right)^{-n}$ for all large $n$. All large enough $n$ thus satisfy
\begin{align*}
\tfrac{1}{n} \mathbf{P}(|X| \ge \alpha n) &\ge \tfrac{1}{n}\mathbf{P}(|X| = \lceil \alpha n \rceil) \\
&= \frac{1}{n}\binom{n}{\lceil \alpha n \rceil}p^{\lceil \alpha n \rceil}(1-p)^{n-\lceil \alpha n \rceil}\\
& \ge \frac{1}{\sqrt{2 \pi} n^{5/2}}\left(\frac{p^{\alpha}(1-p)^{1-\alpha}}{\alpha^\alpha(1-\alpha)^{1-\alpha}}\right)^np^{\lceil \alpha n \rceil - \alpha n}(1-p)^{\alpha n - \lceil \alpha n \rceil}\\
& \ge \frac{p}{\sqrt{2\pi}n^{5/2}}\exp(-n D(\alpha \Vert p));
\end{align*}
since the probability of a path being $\alpha$-adapted is clearly positive for all $n$, some $c \in (0,\tfrac{p}{\sqrt{2\pi}}]$, obtained by taking a minimum over all small $n$, satisfies the lemma.
\end{proof}
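The exponential rate in Lemma~\ref{boundadapted} can also be observed numerically; the following sketch (purely illustrative, with names of our choosing) estimates the probability that a path is $\alpha$-adapted with respect to a $p$-random subset of its edges and prints it next to $\exp(-nD(\alpha \Vert p))$.
\begin{verbatim}
import math, random

def relative_entropy(alpha, p):
    """D(alpha || p) as defined above."""
    return alpha * math.log(alpha / p) + (1 - alpha) * math.log((1 - alpha) / (1 - p))

def adapted_probability(n, alpha, p, trials=100000):
    """Monte Carlo estimate of the probability that a length-n path is
    alpha-adapted with respect to a p-random subset of its edges."""
    hits = 0
    for _ in range(trials):
        marked, ok = 0, True
        for i in range(1, n + 1):
            marked += random.random() < p
            if marked < alpha * i:
                ok = False
                break
        hits += ok
    return hits / trials

alpha, p, n = 0.5, 0.4, 30
print(adapted_probability(n, alpha, p), math.exp(-n * relative_entropy(alpha, p)))
# The lemma guarantees a lower bound of the form c * n^{-5/2} * exp(-n D(alpha||p)).
\end{verbatim}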
We say a positive integer sequence $t = (t_i: i \ge 1)$ is \emph{slow} if it is nondecreasing and satisfies $\lim_{n \to \infty} t_n = \infty$ and $\lim_{n \to \infty} \tfrac{t_{n+1}}{\sum_{i = 1}^n t_i} = 0$.
The next lemma is the main technical result of this section. It shows that, if $t$ is a slow sequence, $G$ is a graph, and $\alpha$ and $p$ are chosen so that $\exp(D(\alpha \Vert p))$ is less than the graph invariant $\lambda_*(B(G))$ of the previous section, then there is some arc $e_0$ of $G$ for which a $p$-random subset of $E(\Gamma_{e_0}(G))$ will give an arbitrarily long $(\alpha,t)$-adapted path with probability bounded away from zero. The independence of $\delta$ on $n$ and $G$ in this lemma is crucial.
\begin{lemma}\label{uniformity}
For all $0 < p < \alpha < 1$, every slow sequence $t$, and all $\lambda > \exp(D(\alpha \Vert p))$, there is some $\delta = \delta(t,\lambda,\alpha,p) > 0$ such that, if $n \ge 1$ is an integer and $G$ is a connected graph of minimum degree at least $2$ with $\lambda_*(B(G)) \ge \lambda$, then there is an arc $e_0$ of $G$ so that, given a $p$-random subset $X \subseteq E(\Gamma_{e_0}(G))$, we have
\[\mathbf{P}\left(\text{$\Gamma_{e_0}(G)$ contains an $(\alpha,t)$-adapted path of length $n$ w.r.t. $X$}\right) > \delta.\]
\end{lemma}
\begin{proof}
Let $\lambda_* = \lambda_*(B(G))$. Let $t = (t_i: i \ge 1)$ and $T_\ell = \sum_{i=1}^\ell t_i$ for each $\ell \ge 0$. Let $\lambda_0 = \exp(D(\alpha\Vert p))$ and $\lambda_1,\lambda_2$ be real numbers so that $\lambda_0 < \lambda_1 < \lambda_2 < \lambda$. Note that $\lambda_0 > 1$ and $\lambda_* \ge \lambda$.
Let $\Pi(m)$ denote the probability that a path of length $m$ is $\alpha$-adapted with respect to a $p$-random subset of its edges, and for each $\ell \ge 0$ let $f(\ell) = \prod_{i=1}^{\ell}\Pi(t_i)^{-1}$ be the reciprocal of the probability that a path of length $T_{\ell}$ is $(\alpha,t)$-adapted. To determine $\delta$, we first estimate $f$:
\begin{claim}
There exists $M > 0$ such that $f(\ell+1) \le M \lambda_2^{T_\ell}$ for all $\ell$.
\end{claim}
\begin{proof}[Proof of claim:]
Let $c > 0 $ be given by Lemma~\ref{boundadapted} for $p$ and $\alpha$. We have
\begin{align*}
f(\ell+1) = \prod_{i=1}^{\ell+1} \Pi(t_i)^{-1} &\le \prod_{i=1}^{\ell+1} \frac{t_i^{5/2}}{c} \exp\left(D(\alpha \Vert p) \sum_{j=1}^{\ell+1} t_j \right)\\
&= \lambda_0^{T_{\ell+1}}\prod_{i=1}^{\ell+1}\frac{t_i^{5/2}}{c}\\
&= \lambda_1^{T_{\ell+1}}\prod_{i=1}^{\ell+1} \frac{t_i^{5/2}}{c}\left(\frac{\lambda_0}{\lambda_1}\right)^{t_i}\\
&= \lambda_1^{(1+t_{\ell+1}/T_{\ell})T_{\ell}} \prod_{i=1}^{\ell+1}\frac{t_i^{5/2}}{c}\left(\frac{\lambda_0}{\lambda_1}\right)^{t_i}.
\end{align*}
Since $\lambda_0 < \lambda_1 < \lambda_2$ and $t_{\ell+1}/T_{\ell} \to 0$ and $t_{\ell} \to \infty$, this expression is at most $\lambda_2^{T_\ell}$ for large enough $\ell$. The claim follows by taking a maximum over all small $\ell$.
\end{proof}
Set $\delta = M^{-1}(\tfrac{1}{\lambda_2}-\tfrac{1}{\lambda})$. Let $\bar{E}$ be the set of arcs of $G$, let $B = B(G)$ and let $w_*$ be the (strictly positive) eigenvector of $B$ for $\lambda_*$, normalised to have largest entry $1$. Choose $e_0 \in \bar{E}$ such that $w_*(e_0) = 1$. We show that $\delta$ and $e_0$ satisfy the lemma.
For each $e \in \bar{E}$, let $b_{e}$ be the standard basis vector in $\mathbb R^{\bar{E}}$ corresponding to $e$, and let $N_{h}(e_0,e) = b_{e_0}^T B^{h-1} b_{e}$ be the number of non-backtracking walks of length $h$ in $G$ with first arc $e_0$ and last arc $e$.
Let $\Gamma = \Gamma_{e_0}(G)$ and $r$ be the root of $\Gamma$. Let $\rho\colon V(\Gamma)\setminus \{r\} \to \bar{E}$ be the map assigning each walk to its last arc. Set $\phi(r) = 1$ and, for each vertex $x \ne r$ of $\Gamma$, set $\phi(x) = \lambda_*^{1-|x|}w_*(\rho(x))$. Note that $\phi((e_0)) = w_*(e_0) = 1$ and that, for each $x \ne r$ with $\rho(x) = e$, the sum of $\phi(y)$ over the children $y$ of $x$ is
\begin{align*}
&\lambda_*^{1-(|x|+1)}\sum\left(w_*(e')\colon e' \in \bar{E}, B_{e,e'}=1\right)\\
&= \lambda_*^{-|x|}b_{e}^TBw_* \\
& = \lambda_*^{1-|x|}w_*(e) = \phi(x).\end{align*}
(In other words, $\phi$ is a \emph{unit flow} on $\Gamma$.) It follows that for every $h \ge 0$ and all $x$ with $|x| \le h$, we have $\sum\left(\phi(y)\colon y \succeq x, |y| = h\right) = \phi(x)$.
For $X \subseteq E(\Gamma)$, we say that a vertex $x$ of $\Gamma$ is \emph{$(\alpha,t)$-reachable} with respect to $X$ if the path of $\Gamma$ from $r$ to $x$ is $(\alpha,t)$-adapted with respect to $X$; let $R(X)$ denote the set of $(\alpha,t)$-reachable vertices. Fix $\ell$ so that $T_\ell \ge n$, and define a random variable $Q = Q(X)$ by
\[Q = f(\ell)\sum_{|x| = T_\ell} \phi(x)1_{R(X)}(x).\]
The $\phi(x)$ sum to $1$ over all $x$ with $|x| =T_\ell$, so $\mathbf{E}(Q) = 1$. We now bound the second moment of $Q$.
\begin{claim}
$\mathbf{E}(Q^2) < \delta^{-1}$.
\end{claim}
\begin{proof}[Proof of claim:]
We have \[\mathbf{E}(Q^2) = f(\ell)^2 \sum_{|x|=|y| = T_\ell}\phi(x)\phi(y)\mathbf{P}(x,y \in R(X)).\]
For each $z \in V(\Gamma)$, let $k(z)$ be the maximum integer $k \ge 0$ so that $T_{k} \le |z|$. There are edge-disjoint paths of lengths $t_1,t_2,\dotsc,t_\ell$ and $t_{k(x \wedge y)+2},t_{k(x \wedge y)+3},\dotsc, t_\ell$ that all must be $\alpha$-adapted for both $x$ and $y$ to be in $R(X)$ (the first set of paths make up the path from $r$ to $x$ and the second set are contained in the path from $x \wedge y$ to $y$), so
\begin{align*}
\mathbf{P}(x,y \in R(X)) &\le \prod_{i=1}^{\ell} \Pi(t_i) \prod_{i=k(x \wedge y)+2}^\ell \Pi(t_i) \\
&= f(k(x \wedge y)+1)f(\ell)^{-2}\\%\left(\prod_{i=1}^{\ell(x \wedge y)+1}\Pi(t_i)\right)^{-1}f(x)^{-1}f(y)^{-1}\\
&\le M\lambda_2^{T_{k(x \wedge y)}} f(\ell)^{-2}\\
&\le M\lambda_2^{|x \wedge y|}f(\ell)^{-2},
\end{align*}
where we use the first claim. Using the fact that $|x \wedge y| \ge 1$ whenever $|x| = |y| = T_{\ell}$, we have
\begin{align*}
\mathbf{E}(Q^2) &\le M\sum_{|x|,|y| = T_\ell}\phi(x)\phi(y)\lambda_2^{|x \wedge y|}\\
&= M\sum_{1 \le |z| \le T_\ell}\lambda_2^{|z|}\sum_{\substack{|x| = |y| = T_\ell \\ x \wedge y = z}} \phi(x)\phi(y)\\
&\le M \sum_{{1 \le |z| \le T_\ell}}\lambda_2^{|z|}\left(\sum_{\substack{|x| = T_\ell \\ x \succ z}}\phi(x)\right)^2\\
&= M\sum_{\substack{1 \le |z| \le T_\ell}}\lambda_2^{|z|}\phi(z)^2\\
&= M\sum_{i=1}^{T_\ell} \lambda_2^i \sum_{|z| = i}\phi(z)^2.
\end{align*}
If $|z| = i \ge 1$, then $w_*(e) \le 1$ gives \[\phi(z)^2 = \lambda_*^{2-2i}w_*(\rho(z))^2 \le \lambda_*^{2-2i} w_*(\rho(z)).\] For each $e \in \bar{E}$, the number of $z \in V(\Gamma)$ with $|z| = i$ and $\rho(z) = e$ is $N_{i}(e_0,e) = b_{e_0}^T B^{i-1} b_{e}$, so since $B w_* = \lambda_*w_*$ and $w_*(e_0) = 1$, we have
\[\sum_{|z|=i} \phi(z)^2 \le \lambda_*^{2-2i} b_{e_0}^T B^{i-1} \sum_{e \in \bar{E}} b_{e} w_*(e) = \lambda_*^{2-2i} b_{e_0}^TB^{i-1} w_* = \lambda_*^{1-i} \le \lambda^{1-i}.\]
Thus $\mathbf{E}(Q^2) < M\sum_{i=1}^{\infty}\lambda_2^i\lambda^{1-i} = M(\tfrac{1}{\lambda_2} - \tfrac{1}{\lambda})^{-1} = \delta^{-1}$ .
\end{proof}
Now by the Cauchy-Schwarz inequality we have
\[1 = \mathbf{E}(Q)^2 = \mathbf{E}(Q \cdot 1_{Q > 0})^2 \le \mathbf{E}(Q^2) \mathbf{E}(1_{Q>0}^2) < \delta^{-1} \mathbf{P}(Q > 0),\]
so $\mathbf{P}(Q > 0) > \delta$. Therefore $\Gamma$ has an $(\alpha,t)$-adapted path of length $T_{\ell}$ with respect to $X$ with probability greater than $\delta$. Such a path contains an $(\alpha,t)$-adapted path of length $n$, giving the result. \end{proof}
\section{Graphs}
For a graph $G = (V,E)$ and for $p,\beta \in [0,1]$, let $f_p^{\beta}(G)$ denote the probability, given a $p$-random subset $X \subseteq E$, that $X$ contains at least a $\beta$-fraction of the edges of some circuit of $G$. Recall that $\lambda_*(\mu_0)$ is some value not less than $\mu_0-1$.
\begin{theorem}\label{maintech}
For all $\mu_0 \ge 2$ and $0 < p < \beta < 1$ satisfying $\exp(D(\beta \Vert p)) < \lambda_*(\mu_0)$, there exists $\delta = \delta(\mu_0,p,\beta) > 0$ such that, if $G$ is a connected graph with $\mu(G) \ge \mu_0$, then $f_p^{\beta}(G) \ge \delta$.
\end{theorem}
\begin{proof}
It suffices to show this just for graphs of minimum degree at least $2$, since deleting a degree-$1$ vertex from a graph $G$ with $\mu(G) \ge 2$ does not change $f_p^{\beta}$ or connectedness, and does not decrease $\mu(G)$. Suppose that the result fails. Then there exists a sequence $G_1,G_2,\dotsc, $ of connected graphs of average degree at least $\mu_0$ and minimum degree at least $2$, such that $\lim_{n \to \infty}(f_p^{\beta}(G_n)) = 0$. We clearly have $f_p^{\beta}(G) \ge p^{d(G)}$ for every graph (this is the probability of a $p$-random subset containing \emph{every} edge in a given shortest circuit), so we may assume by taking a subsequence that $d(G_i) \ge i$ for each $i$.
\begin{claim}
There is a slow integer sequence $t = (t_k\colon k \ge 1)$ so that $t_{|V(G_k)|} \le \sqrt{k}$ for each $k$.
\end{claim}
\begin{proof}[Proof of claim:]
Let $(t_k\colon k \ge 1)$ be a nondecreasing, divergent integer sequence in which the integer $\lfloor \sqrt{r} \rfloor$ occurs at least $|V(G_r)|$ times for each $r \ge 1$. (Such a sequence can be chosen to diverge because each integer is only required to occur finitely often.) By construction we have $t_{|V(G_k)|} \le \lfloor \sqrt{k} \rfloor$ for each $k$. Furthermore, if $\ell \ge 1$ and $t_{\ell+1} = d+1 \ge 2$ then the integer $d$ has occurred at least $|V(G_{d^2})| \ge d^2$ times before $t_{\ell+1}$, so $t_{\ell+1}/\sum_{i=1}^\ell t_i \le (d+1)/d^3$. It follows that $\lim_{n \to \infty} t_{n+1}/\sum_{i=1}^{n} t_i = 0$, so $(t_k\colon k \ge 1)$ is slow.
\end{proof}
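For illustration only, the sequence in the claim can be produced mechanically; in the sketch below the callback \texttt{num\_vertices} is a hypothetical stand-in for $r \mapsto |V(G_r)|$.
\begin{verbatim}
import math

def slow_sequence(num_vertices, length):
    """A nondecreasing integer sequence in which floor(sqrt(r)) occurs at least
    num_vertices(r) times for each r >= 1, truncated to the first `length` terms."""
    t, r = [], 1
    while len(t) < length:
        t.extend([math.isqrt(r)] * num_vertices(r))
        r += 1
    return t[:length]

# toy example, pretending |V(G_r)| = r
print(slow_sequence(lambda r: r, 20))
\end{verbatim}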
Note that $D(x \Vert p)$ is increasing in $x$ for $x > p$. Since $\exp(D(\beta \Vert p)) < \lambda_*(\mu_0)$ we can choose $\alpha \in (\beta,1)$ and $\lambda'$ so that
\[\exp(D(\beta \Vert p)) < \exp(D(\alpha \Vert p)) < \lambda' < \lambda_*(\mu_0).\]
Let $k_0$ be large enough so that $\lambda_*(\mu_0;n) \ge \lambda'$ for all $n \ge k_0$. Let $\delta = \delta(t,\lambda',\alpha,p) > 0$ be given by Lemma~\ref{uniformity}. We argue that if $k$ is sufficiently large so that $k \ge k_0$ and $\tfrac{2\sqrt{k}+1}{k} \le \alpha - \beta$, then the graph $G = G_k$ satisfies $f_p^\beta(G) \ge \delta$. This contradicts $\lim_{n \to \infty} f_p^{\beta}(G_n) = 0$.
Let $G = G_k$ for such a $k$, and let $\Gamma = \Gamma_e(G)$ be the covering tree of $G$ with respect to the arc $e = (r,s)$ given by Lemma~\ref{uniformity}. Let $\pi\colon V(\Gamma) \to V(G)$ assign each path to its final vertex. Since $|V(G)| \ge k \ge k_0$, we have $\lambda_*(B(G)) \ge \lambda_*(\mu_0;k) \ge \lambda'$ by the choice of $k_0$.
We now relate $f_p^\beta(G)$ to the probability that a $p$-random subset of $E(\Gamma)$ gives a long $(\alpha,t)$-adapted path. For each set $Z \subseteq V(G)$, let $G(Z)$ denote the subgraph of $G$ induced by $Z$.
Recalling notation from the proof of Lemma~\ref{uniformity}, for $X \subseteq E(G)$ we say a vertex $v$ of $G$ is \emph{reachable} with respect to $X$ if $v = r$, or there is an $(\alpha,t)$-adapted path of $G$ (with respect to $X$) having first arc $e$ and last vertex $v$. We write $R(X)$ for the set of all such vertices. Similarly, for $Y \subseteq E(\Gamma)$, we say a vertex $v$ of $\Gamma$ is \emph{reachable} with respect to $Y$ if there is an $(\alpha,t)$-adapted path of $\Gamma$ (with respect to $Y$) from the root to $v$. Let $R(Y)$ denote the set of all such vertices. Note, for any $X$ and $Y$, that each of the sets $R(X)$ and $\pi(R(Y))$ either is equal to $\{r\}$, or induces a connected subgraph of $G$ containing $r$ and $s$.
Suppose that $X$ is a $p$-random subset of $E(G)$ and $Y$ is a $p$-random subset of $E(\Gamma)$. Let $C_G$ denote the event that $G(R(X))$ contains a circuit, and $C_\Gamma$ denote the event that $G(\pi(R(Y)))$ contains a circuit.
\begin{claim}
$\mathbf{P}(C_G) = \mathbf{P}(C_{\Gamma})$.
\end{claim}
\begin{proof}[Proof of claim:]
Let $\mathcal{Z}'$ denote the family of subsets of $V(G)$ that induce an \emph{acyclic} connected subgraph of $G$ containing $r$ and $s$, and let $\mathcal{Z} = \mathcal{Z}' \cup \{\{r\}\}$. The event $C_G$ fails to hold exactly when $R(X) \in \mathcal{Z}$, so \[1-\mathbf{P}(C_G) = \sum_{Z \in \mathcal{Z}} \mathbf{P}(R(X) = Z).\] Similarly, we have \[1-\mathbf{P}(C_{\Gamma}) = \sum_{Z \in \mathcal{Z}} \mathbf{P}(\pi(R(Y)) = Z).\]
If $Z = \{r\}$, then clearly $\mathbf{P}(R(X) = Z) = \mathbf{P}(\pi(R(Y)) = Z) = 1-p$. Suppose that $Z \in \mathcal{Z}'$. By acyclicity of $G(Z)$, there is a unique subtree $\Gamma_Z$ of $\Gamma$ that contains the root of $\Gamma$ and satisfies $\pi(V(\Gamma_Z)) = Z$, and moreover $G(Z)$ and $\Gamma_Z$ are isomorphic finite trees. Now $G(Z)$ and $\Gamma_Z$ have the same number of edges, and the number of edges of $G$ with exactly one end in $Z \setminus \{r\}$ is equal to the number of edges of $\Gamma$ with exactly one end in $V(\Gamma_Z)$, so \[\mathbf{P}(R(X) = Z) = \mathbf{P}(R(Y) = V(\Gamma_Z)) = \mathbf{P}(\pi(R(Y)) = Z).\] The claim now follows from the two summations above.
\end{proof}
\begin{claim}
$\mathbf{P}(C_{\Gamma}) \ge \delta$.
\end{claim}
\begin{proof}[Proof of claim:]
By Lemma~\ref{uniformity}, the tree $\Gamma$ contains, with probability at least $\delta$, a length-$|V(G)|$ path $[v_1,v_2,\dotsc]$ that is $(\alpha,t)$-adapted with respect to $Y$. For any such path, there must be some $i < j$ so that $\pi(v_i) = \pi(v_j)$; now $\{\pi(v_i),\pi(v_{i+1}), \dotsc, \pi(v_j)\}$ is the vertex set of a closed non-backtracking walk of $G(\pi(R(Y)))$, which must contain a circuit. This implies the claim.
\end{proof}
\begin{claim}
$f_p^{\beta}(G) \ge \mathbf{P}(C_G)$.
\end{claim}
\begin{proof}[Proof of claim:]
Suppose that $X \subseteq E$ satisfies $C_G$; i.e. $G(R(X))$ contains a circuit $C$. It suffices to show that $X$ contains a $\beta$-fraction of the edges of some circuit of $G$. Let $V(C) = [x_0,x_1,\dotsc,x_m]$, where $x_0$ is the end of a shortest $(\alpha,t)$-adapted path $P_0$ from $r$ to $V(C)$. If there is some $i \in\{1,\dotsc, m\}$ such that there exists in $G$ an $(\alpha,t)$-adapted path $P_i$ from $r$ to $x_i$ not containing $x_{i-1}$ and an $(\alpha,t)$-adapted path $P_{i-1}$ from $r$ to $x_{i-1}$ not containing $x_i$, then $E(P_i) \cup E(P_{i-1}) \cup \{x_{i-1}x_i\}$ contains a circuit $C'$ of $G$. Moreover, this circuit is the disjoint union of the edge $x_{i-1}x_i$, a set of subpaths that are $\alpha$-adapted with respect to $X$, and at most two extra subpaths each of length at most $t_{|V(G)|}$ (these two subpaths are `partial' subpaths arising because the last intersection point of $P_{i-1}$ and $P_i$ need not cleanly divide these paths into a union of $\alpha$-dense subpaths), so $|X \cap E(C')| \ge \alpha|E(C')| - 2 t_{|V(G)|}-1$. Now $G = G_k$, so $|E(C')| \ge d(G) \ge k$ and $t_{|V(G)|} \le \sqrt{k}$, giving
\[\tfrac{|X \cap E(C')|}{|E(C')|} \ge \alpha - \tfrac{2t_{|V(G)|}+1}{|E(C')|} \ge \alpha - \tfrac{2\sqrt{k} + 1}{k} \ge \beta,\]
so $X$ contains a $\beta$-fraction of the edges of $C'$.
If no such $i$ exists, then an easy inductive argument implies for each $j \ge 1$ that every $(\alpha,t)$-adapted path from $r$ to $x_j$ passes through $x_{j-1}$, so $E(P_0) \cup E(C) - \{x_0x_m\}$ is the edge set of an $(\alpha,t)$-adapted path from $r$ to $x_m$. By a similar argument to the above, we have $|E(C) \cap X| \ge \alpha |E(C)| - 2 t_{|V(G)|} - 1$, and thus $X$ contains a $\beta$-fraction of the edges of $C$, giving the claim.
\end{proof}
The last three claims give $f_p^{\beta}(G) \ge \delta$, implying the theorem.
\end{proof}
\section{The Threshold}
We now prove Theorems~\ref{maintechnical} and~\ref{mainintro}. Recall that, if $C$ is the cycle code of a graph $G$, then the probability of a maximum-likelihood decoding error in $C$ over a channel of bit-error rate $p \in (0,\tfrac{1}{2})$ is exactly the parameter $f_p^{1/2}(G)$ of the previous section. We use this fact to derive Theorem~\ref{maintechnical} (restated here) from Theorem~\ref{maintech}.
\begin{theorem}\label{mainrev}
If $R \in (0,1)$ and $\mathcal{G}$ is the class of cycle codes of graphs, then $\theta_{\mathcal{G}}(R) \le \tfrac{1}{2}\left(1 - \sqrt{1 - \tfrac{1}{\lambda^2}}\right)$, where $\lambda = \lambda_*(\tfrac{2}{1-R})$.
\end{theorem}
\begin{proof}
Fix $R \in (0,1)$, let $\mu = \tfrac{2}{1-R}$ and let $\theta = \tfrac{1}{2}\left(1 - \sqrt{1 - \tfrac{1}{\lambda^2}}\right)$, where $\lambda = \lambda_*(\mu)$. Note that $\exp(D(\tfrac{1}{2}\Vert \theta)) = \lambda \ge \mu-1 > 1$ by Lemma~\ref{lambdastar}. It is enough to show that for all $p \in (\theta,\tfrac{1}{2})$ there is some $\varepsilon > 0$ such that the probability of an error in maximum-likelihood decoding of a cycle code of rate at least $R$, over a binary symmetric channel with bit-error rate $p$, is at least $\varepsilon$.
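(For the identity $\exp(D(\tfrac{1}{2}\Vert \theta)) = \lambda$ noted above: $D(\tfrac{1}{2}\Vert \theta) = \tfrac{1}{2}\ln\tfrac{1}{2\theta} + \tfrac{1}{2}\ln\tfrac{1}{2(1-\theta)} = \tfrac{1}{2}\ln\tfrac{1}{4\theta(1-\theta)}$, while $4\theta(1-\theta) = 1 - \left(1 - \tfrac{1}{\lambda^{2}}\right) = \tfrac{1}{\lambda^{2}}$.)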
Let $p \in (\theta,\tfrac{1}{2})$. Since $p > \theta$ we have $\exp(D(\tfrac{1}{2} \Vert p)) < \lambda$; let $\lambda_0 \in (\exp(D(\tfrac{1}{2} \Vert p)),\lambda)$ and let $\mu_0 = \lambda_0 +1$. Let $\delta = \delta(\mu_0,p,\tfrac{1}{2})$ be given by Theorem~\ref{maintech} and set $\varepsilon = \min(\delta,p^b)$, where $b = \tfrac{2\mu\mu_0}{\mu-\mu_0}$.
Let $C$ be a cycle code of rate $R(C) \ge R$ and let $G$ be a connected graph whose cycle code is $C$. Note, since $R > 0$, that $G$ contains a circuit, so $f_p^{1/2}(G) \ge p^{|E(G)|}$. If $\mu(G) \ge \mu_0$ then $f_p^{1/2}(G) \ge \delta \ge \varepsilon$ by Theorem~\ref{maintech}. Otherwise
\[1-\tfrac{2}{\mu} = R \le R(C) = 1 - \tfrac{2}{\mu(G)} + \tfrac{1}{|E(G)|} < 1 - \tfrac{2}{\mu_0} + \tfrac{1}{|E(G)|}, \]
so $|E(G)| < \tfrac{2\mu\mu_0}{\mu-\mu_0} = b$ and thus $f_p^{1/2}(G) \ge p^b \ge \varepsilon$, as required.
\end{proof}
Finally, we restate and prove Theorem~\ref{mainintro}.
\begin{theorem}
If $\mathcal{G}$ is the class of cycle codes of graphs and $R \in (0,1)$, then $\theta_{\mathcal{G}}(R) \le \tfrac{(1-\sqrt{R})^2}{2(1+R)}$. If equality holds, then $R = 1 - \tfrac{2}{d}$ for some $d \in \mathbb Z$.
\end{theorem}
\begin{proof}
Let $\mu = \tfrac{2}{1-R}$ and $\lambda = \lambda_*(\mu)$. By Lemma~\ref{lambdastar} we have $\lambda \ge \mu-1$ with equality if and only if $\mu \in \mathbb Z$. Theorem~\ref{mainrev} thus gives $\theta_{\mathcal{G}}(R) \le \tfrac{1}{2}\left(1-\sqrt{1 - \tfrac{1}{(\mu-1)^2}}\right)$, with equality only if $\mu \in \mathbb Z$: that is, only if $R = 1 - \tfrac{2}{d}$ for some $d \in \mathbb Z$. The result now follows from the definition of $\mu$ and a computation.
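Explicitly, with $\mu = \tfrac{2}{1-R}$ we have $\mu - 1 = \tfrac{1+R}{1-R}$, so that
\[ 1 - \frac{1}{(\mu-1)^{2}} = 1 - \frac{(1-R)^{2}}{(1+R)^{2}} = \frac{4R}{(1+R)^{2}} \qquad \text{and} \qquad \frac{1}{2}\left(1 - \frac{2\sqrt{R}}{1+R}\right) = \frac{(1-\sqrt{R})^{2}}{2(1+R)}. \]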
\end{proof}
\subsection*{Acknowledgements} We thank the two anonymous referees for their helpful suggestions that improved the quality of the paper.
\section*{References}
\newcounter{refs}
\begin{list}{[\arabic{refs}]}
{\usecounter{refs}\setlength{\leftmargin}{10mm}\setlength{\itemsep}{0mm}}
\item\label{ab06}
N. Alon and E. Bachmat,
Regular graphs whose subgraphs tend to be acyclic,
Random Struct. Algo. 29 (2006), 324--337.
\item\label{alon}
N. Alon, S. Hoory and N. Linial,
The Moore Bound for Irregular Graphs,
Graph Combinator. 18 (2002), 53--57.
\item\label{bmt78}
E.R. Berlekamp, R.J. McEliece and H.C.A. van Tilborg,
On the inherent intractability of certain coding problems,
IEEE Trans. Inform. Theory 24 (1978), 384--386.
\item\label{dz}
L. Decreusefond and G. Z\'{e}mor,
On the error-correcting capabilities of cycle codes of graphs,
Combin. Probab. Comput. 6 (1997), 27--38.
\item\label{diestel}
R. Diestel,
Graph Theory,
Springer, 2000.
\item\label{ggw}
J. Geelen, B. Gerards and G. Whittle, The highly connected matroids in minor-closed classes,
Ann. Comb. 19 (2015), 107--123.
\item\label{gr}
C. Godsil and G. Royle,
Algebraic Graph Theory,
Springer, 2001.
\item\label{gelfand}
I. Gelfand,
Normierte Ringe,
Rec. Math. [Mat. Sbornik] N.S. 9 (1941), 3--24.
\item\label{lyons}
R. Lyons,
Random walks and percolation on trees,
Ann. Probab. 18 (1990), 931--958.
\item\label{ms77}
F. J. MacWilliams and N. J. A. Sloane,
The Theory of Error-Correcting Codes,
Amsterdam, The Netherlands: North-Holland, 1977.
\item\label{nvz14}
P. Nelson and S.H.M. van Zwam,
On the existence of asymptotically good linear codes in minor-closed classes,
IEEE Trans. Inform. Theory 61 (2015), 1153--1158.
\item\label{nh81}
S.C. Ntafos and S.L. Hakimi,
On the complexity of some coding problems,
IEEE Trans. Inform. Theory 27 (1981), 794--796.
\item\label{zt97}
J.-P. Tillich and G. Z\'emor,
Optimal cycle codes constructed from Ramanujan graphs,
SIAM J. Discrete Math. 10 (1997), 447--459.
\end{list}
\end{document}
\section*{Introduction}
\noindent We let $k$ be a field and ${\mathcal M}_{k, R}$ be the category of Chow motives over $k$ with $R$ coefficients. For $X$ a smooth projective variety of dimension $d$ over $k$ and ${\mathfrak h}(X) \in {\mathcal M}_{k, R}$ the corresponding Chow motive, recall that one has by definition
\[\mathop{\rm End}\nolimits_{{\mathcal M}_{k, R}} ({\mathfrak h}(X)) = CH^{d} (X \times X)_{R} \]
Unless otherwise specified, we will work with $\ell$-adic cohomology (for $\ell$ prime to the characteristic of $k$). For $X$ smooth and projective over $k$ there is a cycle class map:
\begin{equation}CH^{r} (X)_{R} \to H^{2r}_{\text{\'et}} (X\times_{k} \overline{k}, R_{\ell}(r))\label{cycle-class-map} \end{equation}
(where $R = {\mathbb Z}, {\mathbb Q}$), which gives homological equivalence for algebraic cycles, $\sim_{hom}$. Since the cycle class map is generally not injective, there are conjectures about the kernel, such as Voevodsky's smash-nilpotence conjecture for general codimension and the following nilpotence conjecture in codimension $d =dim(X)$ (see, for instance, \cite{J}):
\begin{Conj}\label{nil} For a Chow motive $M \in {\mathcal M}_{k, R}$ and $\Gamma \in \mathop{\rm End}\nolimits_{{\mathcal M}_{k, R}} (M)$ such that $\Gamma \sim_{hom} 0$, there exists some $n>>0$ for which $\Gamma^{n} =0 \in \mathop{\rm End}\nolimits_{{\mathcal M}_{k, R}} (M)$.
\end{Conj}
\noindent For $R = {\mathbb Q}$, the above conjecture is well-known and is of particular interest in determining whether or not the category of Chow motives possesses any {\em phantom motives} (i.e., Chow motives with vanishing cohomology). While several important conjectures in the theory of Chow motives imply this conjecture, it seems to be known only in the case of smooth projective varieties whose motives are summands of motives of Abelian varieties (see \cite{J}). Because of the intractability of this problem, our aim will be to prove a nilpotence result which is substantially weaker. For this, we have the cycle class map:
\begin{equation}CH^{r} (X)_{R}\to H^{2r}_{\text{\'et}} (X, R_{\ell}(r))\label{cycle-class-map-k} \end{equation}
where again $R = {\mathbb Z}, {\mathbb Q}$. This is defined analogously to (\ref{cycle-class-map}) by means of the long exact sequence of a pair. Then, we have the following equivalence relation:
\begin{Def} Given $\gamma \in CH^{r} (X)_{R}$, we say that $\gamma$ is {\em $k$-homologically trivial with respect to $\ell$} and write $\gamma \sim_{k-hom} 0$ if $\gamma$ lies in the kernel of (\ref{cycle-class-map-k}), where $R= {\mathbb Z}, {\mathbb Q}$.
\end{Def}
\noindent We should note that a priori both $\sim_{hom}$ and $\sim_{k-hom}$ depend on the choice of $\ell$. On the other hand, for $R = {\mathbb Q}$ (which will be our primary interest here) the notion of nilpotence modulo $\sim_{hom}$ does not depend on $\ell$. When $\text{char } k =0$, this is clear. When $\text{char } k >0$, this follows from the fact that the characteristic polynomial of a correspondence is independent of $\ell$. Indeed, one quickly reduces to the case where $k$ is finitely generated over a finite field and, by a specialization argument, eventually one can reduce to the case of a finite field, where it follows by \cite{KM} Theorem 2.
\begin{Thm}\label{main} Let $X$ be a smooth projective variety of dimension $d$ over a field $k$.
\begin{enumerate}[label=(\alph*)]
\item\label{main-a} If $\Gamma \in \mathop{\rm End}\nolimits_{{\mathcal M}_{k, {\mathbb Q}}}({\mathfrak h}(X))$ is homologically nilpotent, then $\Gamma$ is $k$-homologically nilpotent.
\item\label{main-b} Suppose $H^{*} (X_{\overline k}, {\mathbb Z}_{\ell})$ is torsion-free; suppose also that $k$ has characteristic $0$ or that $X$ satisfies the Lefschetz standard conjecture. If $\Gamma \in \mathop{\rm End}\nolimits_{{\mathcal M}_{k}}({\mathfrak h}(X))$ is homologically nilpotent, then $\Gamma$ is $k$-homologically nilpotent.
\end{enumerate}
In fact, in both cases one can find some $N>>0$ for which $\Gamma^{N} \sim_{k-hom} 0$ for all $\ell$.
\end{Thm}
\begin{Cor} For any $\Gamma \in \mathop{\rm End}\nolimits_{{\mathcal M}_{k, {\mathbb Q}}}({\mathfrak h}(X))$, there exists a polynomial $p[t] \in {\mathbb Q}[t]$ such that $p(\Gamma) \sim_{k-hom} 0$ with respect to every $\ell$.
\begin{proof} Let $q[t] \in {\mathbb Q}[t]$ be the characteristic polynomial of $[\Gamma]$ on $H^{*}_{\text{\'et}} (X_{\overline{k}}, {\mathbb Q}_{\ell})$. Then, set $\Gamma' := q(\Gamma)$ and we see that $\Gamma' \sim_{hom} 0$ with respect to every $\ell$. It follows from Theorem \ref{main} that there is some $N>>0$ for which $(\Gamma')^{N} \sim_{k-hom} 0$ with respect to every $\ell$.
\end{proof}
\end{Cor}
\noindent The key idea in the proof of Theorem \ref{main} is based on a fundamental decomposition result of Deligne for derived categories in his thesis \cite{D}, as well as his Hard Lefschetz theorem for $\ell$-adic cohomology. We can also use this idea to consider the nilpotence problem for torsion cycles. For $(n, \text{char } k) = 1$, consider the cycle class map for $n$-torsion cycles:
\begin{equation} CH^{d} (X \times X)[n] \to H^{2d}_{\text{\'et}} (X_{\overline{k}}\times X_{\overline{k}}, {\mathbb Z}/n(d))\label{n-tors-cycle} \end{equation}
Then, we have the following torsion version of Conjecture \ref{nil}:
\begin{Conj}\label{tor-nil} If $\Gamma \in CH^{d} (X \times X)[n]$ is in the kernel of (\ref{n-tors-cycle}), then $\Gamma$ is a nilpotent correspondence.
\end{Conj}
\noindent We note, in particular, that if the $\ell$-adic cohomology of $X$ is torsion-free for $\ell \neq \text{char } k$, then this conjecture asks whether every $n$-torsion cycle is nilpotent for $(n, \text{char } k) = 1$. Since $CH^{d} (X \times X)$ has no torsion in the case that $X$ has dimension $1$, the question only applies in dimension $\geq 2$. In this direction, we have the following partial result:
\begin{Thm}\label{main-2} Suppose $X$ is a surface over a perfect field $k$. Then, Conjecture \ref{tor-nil} holds for $n = \ell^{r}$ for all but finitely many primes $\ell$.
\end{Thm}
\noindent The proof of this theorem will use Theorem \ref{main}, as well as a consequence of the Merkurjev-Suslin Theorem. When the codimension is $> 2$, the conjecture seems elusive even in instances for which Conjecture \ref{nil} is known. For instance, the $\ell^{r}$-torsion can be infinite, even when $X$ is an Abelian variety. In fact, one can take $X$ to be a triple product of very general elliptic curves over ${\mathbb C}$; the main result of \cite{Di1} (or \cite{S2}) together with the main result of \cite{S1} show that the $\ell^{r}$-torsion is infinite in this case. It should be noted that Conjecture \ref{tor-nil} is a more general version of the Rost nilpotence conjecture, which predicts that cycles in the kernel of the extension of scalars functor:
\[ \mathop{\rm End}\nolimits_{{\mathcal M}_{k}} ({\mathfrak h}(X)) \to \mathop{\rm End}\nolimits_{{\mathcal M}_{E}} ({\mathfrak h}(X_{E})) \]
(for $E/k$ a field extension) is nilpotent. This last conjecture is known for certain types of homogeneous varieties (see \cite{CGM}) and was recently established for surfaces (first by Gille in \cite{Gi} for fields of characteristic $0$ and then by Rosenschon and Sawant in \cite{RS} for perfect fields of characteristic $p$) but remains wide open in general.\\
\subsection*{Acknowledgements}
The author would like to thank Bruno Kahn for his interest and for taking the time to read several drafts of this paper; his comments were indispensable. He would also like to thank Robert Laterveer for some conversations which led to the formulation of the problems considered in the paper.
\section{Preliminaries}
\subsection*{Some decomposition theorems} Let $X$ be a smooth projective variety over a field $k$ and let $\ell \neq \text{char } k$ and let $G_{k}$ denote the absolute Galois group. Then, there is a Hochschild-Serre spectral sequence for Jannsen's continuous \'etale cohomology (see \cite{J2}):
\[ H^{p}_{cont} (k, H^{q} (X_{\overline{k}}, {\mathbb Q}_{\ell})) \Rightarrow H^{p+q}_{cont} (X, {\mathbb Q}_{\ell}) \]
This spectral sequence degenerates by some non-trivial results of Deligne. More precisely, let $\pi: X \to \text{Spec } k$ be the structure morphism. Then, by Prop 1.2 of \cite{D} the degeneration of the above sequence is equivalent to a (non-canonical) isomorphism:
\begin{equation} R_{cont}\pi_{*}{\mathbb Q}_{\ell} \cong \bigoplus_{i \in {\mathbb Z}} R_{cont}^{i}\pi_{*}{\mathbb Q}_{\ell}[-i] \label{deg-Q}\end{equation}
in $D^{b} (\text{Spec } k, {\mathbb Q}_{\ell})$, the bounded derived category of continuous ${\mathbb Q}_{\ell}[G_{k}]$-modules. (It ought to be noted that this is not a derived category in the usual sense, but the argument in \cite{D} still applies). Then, (\ref{deg-Q}) is obtained from the following result (together with Deligne's proof of the Hard Lefschetz theorem for $\ell$-adic cohomology):
\begin{Thm}[\cite{D} Thm. 1.5]\label{Deligne} Suppose that ${\mathcal A}$ is an Abelian category with bounded derived category $D^{b}({\mathcal A})$. For $A_{*} \in D^{b}({\mathcal A})$, there is an isomorphism:
\[ A_{*} \cong \bigoplus_{i \in {\mathbb Z}} H^{i} (A_{*})[-i] \in D^{b}({\mathcal A}) \]
if there is an integer $n$ and an endomorphism of degree $2$, $\phi: A_{*} \to A_{*}[2]$, which induces an isomorphism on cohomology:
\[ H^{n-i}(\phi^{i}): H^{n-i}(A_{*}) \to H^{n+i}(A_{*}) \]
for every $i \geq 0$.
\end{Thm}
\noindent We can use this to prove an analogue of (\ref{deg-Q}) with integral coefficients which we will need for the proof of Theorem \ref{main-2}.
\begin{Prop}\label{lef-mod-tors} Suppose that $k$ has characteristic $0$ or that $X$ satisfies the Lefschetz standard conjecture. Then for all but finitely many primes $\ell$ there is a decomposition:
\[ R\pi_{*}{\mathbb Z}_{\ell} \cong \bigoplus_{i=0}^{2d} R^{i}\pi_{*}{\mathbb Z}_{\ell}[-i] \]
in $D^{b} (\text{Spec } k, {\mathbb Z}_{\ell})$, the bounded derived category of continuous ${\mathbb Z}_{\ell}[G_{k}]$-modules (in the sense of \cite{E}).
\begin{proof} By Theorem \ref{Deligne}, it suffices to prove that if $h \in Pic(X)$ is the class of an ample hyperplane divisor and $\mathcal{L}$ is the corresponding Lefschetz operator, then there is an isomorphism:
\[ H^{d-m}_{\text{\'et}} (X_{\overline{k}}, {\mathbb Z}_{\ell}) \xrightarrow[\cong]{\mathcal{L}^{m}} H^{d+m}_{\text{\'et}} (X_{\overline{k}}, {\mathbb Z}_{\ell}(m))\]
for all but finitely many primes $\ell$. Now, by \cite{Ga} $H^{*}_{\text{\'et}} (X_{\overline{k}}, {\mathbb Z}_{\ell})$ is torsion-free for all but finitely many $\ell$. So, it suffices to prove that the map:
\begin{equation} H^{d-m}_{\text{\'et}} (X_{\overline{k}}, {\mathbb Z}_{\ell})/tors \xrightarrow{\mathcal{L}^{m}} H^{d+m}_{\text{\'et}} (X_{\overline{k}}, {\mathbb Z}_{\ell}(m))/tors \label{mod-tors}\end{equation}
is an isomorphism for all but finitely many primes $\ell$. In this direction, we note that since $X$ is defined over a finitely generated field, we can assume (by invariance of \'etale cohomology under algebraically closed extensions, \cite{M} VI Cor. 2.6) that $k$ is some finitely generated field. Using his proof of the Weil conjectures, Deligne proved in \cite{D2} the Hard Lefschetz theorem with ${\mathbb Q}_{\ell}$ coefficients for finite fields. Using the proper base change theorem (VI Cor. 2.3 of loc. cit.), one obtains the statement for all finitely generated fields of characteristic $p$. Thus, the Lefschetz operator $\mathcal{L}^{m}$ induces an isomorphism:
\[ H^{d-m}_{\text{\'et}} (X_{\overline{k}}, {\mathbb Q}_{\ell}) \xrightarrow[\cong]{\mathcal{L}^{m}} H^{d+m}_{\text{\'et}} (X_{\overline{k}}, {\mathbb Q}_{\ell}(m))\]
So, the map in (\ref{mod-tors}) is injective for all $\ell$ and so we only need to prove that it is surjective for all but finitely many $\ell$. In this direction, we note that if the Lefschetz standard conjecture holds, there exists:
\[ \Lambda \in CH^{d-1} (X \times X)_{{\mathbb Q}} \]
for which $\mathcal{L}^{m}\circ \Lambda^{m} = \text{ Id}$ on $H^{d+m}_{\text{\'et}} (X_{\overline{k}}, {\mathbb Q}_{\ell}(m))$. Let $M \in {\mathbb Z}$ be such that $M\cdot \Lambda \in CH^{d-1} (X \times X)$, so that setting $\Lambda' := M\cdot \Lambda$, we have
\[ \mathcal{L}^{m}\circ (\Lambda')^{m} = M^{m}\cdot\text{ Id} \]
on $H^{d+m}_{\text{\'et}} (X_{\overline{k}}, {\mathbb Z}_{\ell}(m))/tors$. Since $M$ is invertible in ${\mathbb Z}_{\ell}$ for all but finitely many primes $\ell$, it follows that the cokernel of (\ref{mod-tors}) vanishes for all such $\ell$, as desired. On the other hand, if $k$ has characteristic $0$, one quickly reduces to the case of $k = {\mathbb C}$, where one obtains
\[ \Lambda \in H^{2(d-1)} (X \times X, {\mathbb Q}) \]
in singular cohomology, and the remainder of the proof is then identical.
\end{proof}
\end{Prop}
\subsection*{Chow motives} For the applications below, we let ${\mathcal M}_{k, {\mathbb Q}}$ be the category of Chow motives over $k$ with rational coefficients. Recall that this is the pseudo-Abelian envelope of the additive category $Cor(k)$ whose objects are pairs $(X, m)$ where $X$ is smooth and projective of dimension $d$ and $m \in {\mathbb Z}$ and whose morphisms are given by correspondences:
\[ Cor ((X, m), (Y, m')) := CH^{d +m'-m} (X \times Y)_{{\mathbb Q}} \]
Thus, a Chow motive is given by a triple $(X, \pi, m)$, where $\pi \in Cor ((X, 0), (X, 0))$ is an idempotent. We also observe that ${\mathcal M}_{k, {\mathbb Q}}$ possesses the structure of a tensor category:
\[(X, \pi, m) \otimes (Y, \pi', m') := (X \times Y, \pi \times \pi', m+ m')\]
One can also define an internal symmetric and alternating product functor (using the usual Schur functor formalism). Moreover, there is the $\ell$-adic realization functor $R: {\mathcal M}_{k, {\mathbb Q}} \to D^{b}(\text{Spec } k, {\mathbb Q}_{\ell})$ which preserves the tensor product structure since for $\pi: X \to \text{Spec } k$, $\pi': Y \to \text{Spec } k$ smooth projective morphisms, we have
\[ R(\pi, \pi')_{*}{\mathbb Q}_{\ell, X \times Y} \cong R\pi_{*}{\mathbb Q}_{\ell, X} \boxtimes R\pi'_{*}{\mathbb Q}_{\ell, Y} \]
by the K\"unneth theorem (\cite{M} VI.1). Finally, we will use the Tate twist notation
\[ M(i) = (X, \pi, m+i) \]
for $M= (X, \pi, m)$.
\section{Proof of Theorem \ref{main}}
\begin{proof}[Proof of Theorem \ref{main}]
The following proposition is key:
\begin{Prop} For a smooth projective morphism $\pi: X \to \text{Spec } k$, the kernel of the natural map of algebras:
\begin{equation} \mathop{\rm End}\nolimits_{D} (R\pi_{*}A) \xrightarrow{\bigoplus_{i \in {\mathbb Z}} H^{i}} \bigoplus_{i \in {\mathbb Z}}\mathop{\rm End}\nolimits_{A} (R^{i}\pi_{*}A)\label{main-map} \end{equation}
is a nil ideal, where $D$ is either the bounded derived category of continuous ${\mathbb Q}_{\ell}[G_{k}]$-modules (in the sense of \cite{J2}) with $A = {\mathbb Q}_{\ell}$ or the bounded derived category of continuous $ {\mathbb Z}_{\ell}[G_{k}]$-modules with $A = {\mathbb Z}_{\ell}$, provided that either of the assumptions of Prop. \ref{lef-mod-tors} is satisfied.
\begin{proof} We begin by noting that there is an obvious isomorphism:
\[\mathop{\rm End}\nolimits_{A} (R^{i}\pi_{*}A) \cong \mathop{\rm End}\nolimits_{D} (R^{i}\pi_{*}A[i]) \]
whose inverse is obtained by the functor $H^{i}$. Then, we set $I$ to be the kernel of (\ref{main-map}), and the proposition then follows from the lemma below (together with Theorem \ref{Deligne}), which was communicated to the author by Bruno Kahn and whose proof is essentially that of Prop. 2.3.4(c) from \cite{AKOS}.
\end{proof}
\end{Prop}
\begin{Lem}\label{key} Let ${\mathcal C}$ be an additive category and $C_{1}, C_{2}, \cdots C_{n} \in {\mathcal C}$ and set
\[ C := \bigoplus_{i=1}^{n} C_{i} \]
Denote by $\iota_{i}: C_{i} \hookrightarrow C$ the inclusions and $\nu_{i}: C \to C_{i}$ the projections. Let $I$ be an ideal of $\mathop{\rm End}\nolimits_{{\mathcal C}} (C)$ for which $I_{i} =\iota_{i}\circ I \circ \nu_{i}$ is a nil ideal with nilpotency index $r_{i}$. Then, $I$ is a nil ideal; in fact, $I^{N} =0$ for $N \geq r_{1} +r_{2} + \ldots +r_{n} + n-1$.
\begin{proof}[Proof of Lemma] By induction, we quickly reduce to the case of $n=2$. To this end, let $f \in I$ and write
\[ f = f_{11} + f_{12} + f_{21} + f_{22} \]
where $f_{ij} = \iota_{j}\circ f \circ \nu_{i}$. Then, we compute
\begin{equation} f^{N} = (f_{11} + f_{12} + f_{21} + f_{22})^{N} = \sum_{I} a_{I}f_{i_{1}j_{1}}^{p_{1}}f_{i_{2}j_{2}}^{p_{2}}\ldots f_{i_{k}j_{k}}^{p_{k}}\label{summands}\end{equation}
where $I$ is the indexing set of all such products $f_{i_{1}j_{1}}^{p_{1}}f_{i_{2}j_{2}}^{p_{2}}\ldots f_{i_{k}j_{k}}^{p_{k}}$. Now, we observe that $\nu_{i}\iota_{j} = 0$ if $i \neq j$, so it follows that all terms in the above sum are $0$ except for those where
\[ p_{l} = 1 \text{ unless } i_{l}= j_{l}, \text{ and } i_{1} = j_{2}, \ldots i_{k-1} = j_{k} \]
Then, any summand in the sum (\ref{summands}) is expressible as:
\begin{equation} f_{22}^{n_{1}}f_{12}f_{11}^{m_{1}}f_{21}f_{22}^{n_{2}}\ldots f_{22}^{n_{k}}f_{12}f_{11}^{m_{k}}f_{21}f_{22}^{n_{k+1}} \label{type}
\end{equation}
or the corresponding product with the subscripts relabeled (i.e., $1$ and $2$ swapped). As in \cite{AKOS}, we see that this is contained in $I_{2}^{b} \cap I_{1}^{a}$ for $b \geq n_{1} + n_{2} + \ldots + n_{k+1} + k$ and $a \geq m_{1} + m_{2} + \ldots + m_{k} + k-1$. Since \[N = m_{1} + m_{2} + \ldots + m_{k} +n_{k} + m_{k+1} + 2k = a + b+1 \]
We see then that taking $N = r_{1}+r_{2}+1$, then either $a \geq r_{1}$ or $b \geq r_{2}$, making $I_{2}^{b} \cap I_{1}^{a} = 0$. The lemma now follows.
\end{proof}
\end{Lem}
\noindent First we prove item \ref{main-a}. A basic observation is that the cycle class map coincides with the realization functor:
\[\mathop{\rm End}\nolimits_{{\mathcal M}_{k, {\mathbb Q}}} ({\mathfrak h}(X)) \to \mathop{\rm End}\nolimits_{D^{b} (\text{Spec } k, {\mathbb Q}_{\ell})} (R\pi_{*}{\mathbb Q}_{\ell, X}) \]
after suitable identification. In fact, we have the following chain of natural isomorphisms (see for instance Remark 4.2 of \cite{J3}):
\begin{equation} \begin{CD}
CH^{d} (X \times X)_{{\mathbb Q}} @>{}>> H^{2d}_{\text{\'et}} (X \times X, {\mathbb Q}_{\ell}(d))\\
@| @V{\cong}VV\\
\mathop{\rm Hom}\nolimits_{{\mathcal M}_{k, {\mathbb Q}}} (\mathds{1}, {\mathfrak h}(X\times X)(d)) @>>> \mathop{\rm Hom}\nolimits_{D^{b} (\text{Spec } k, {\mathbb Q}_{\ell})} (\mathds{1}, R(\pi,\pi)_{*}{\mathbb Q}_{\ell, X\times X}(d)[2d])\\
@V{\cong}VV @V{\cong}VV\\
\mathop{\rm Hom}\nolimits_{{\mathcal M}_{k, {\mathbb Q}}} (\mathds{1}, {\mathfrak h}(X) \otimes {\mathfrak h}(X)(d)) @>>> \mathop{\rm Hom}\nolimits_{D^{b} (\text{Spec } k, {\mathbb Q}_{\ell})} (\mathds{1}, R\pi_{*}{\mathbb Q}_{\ell, X} \boxtimes R\pi_{*}{\mathbb Q}_{\ell, X} (d)[2d])\\
@V{\cong}VV @V{\cong}VV\\
\mathop{\rm Hom}\nolimits_{{\mathcal M}_{k, {\mathbb Q}}} (\mathds{1}, {\mathfrak h}(X) \otimes {\mathfrak h}(X)^{\vee}) @>>> \mathop{\rm Hom}\nolimits_{D^{b} (\text{Spec } k, {\mathbb Q}_{\ell})} (\mathds{1}, R\pi_{*}{\mathbb Q}_{\ell, X} \boxtimes R\pi_{*}{\mathbb Q}_{\ell, X}^{\vee})\\
@V{\cong}VV @V{\cong}VV\\
\mathop{\rm End}\nolimits_{{\mathcal M}_{k, {\mathbb Q}}} ({\mathfrak h}(X)) @>{}>> \mathop{\rm End}\nolimits_{D^{b} (\text{Spec } k, {\mathbb Q}_{\ell})} (R\pi_{*}{\mathbb Q}_{\ell, X})\\
\end{CD}\label{big-commutative}
\end{equation}
where the first horizontal arrow is the cycle class map and the remaining horizontal arrows are the $\ell$-adic realization functors. On the left, the vertical isomorphisms follow from formal properties of Chow motives, while on the right, the vertical isomorphisms follow from the K\"unneth isomorphism, Poincar\'e duality and a property of the dual. Moreover, the bottom-most horizontal arrow respects composition (see, for instance, \cite{H} Prop. 19.2.4). There is also the ``extension-of-scalars" commutative diagram for cohomology:
\begin{equation}\begin{tikzcd}
H^{2d}_{\text{\'et}} (X \times X, {\mathbb Q}_{\ell}(d)) \arrow{r}{\cong} \arrow{d}
& \mathop{\rm End}\nolimits_{D^{b} (\text{Spec } k, {\mathbb Q}_{\ell})} (R\pi_{*}{\mathbb Q}_{\ell, X}) \arrow{d}\\
H^{2d}_{\text{\'et}} (X_{\overline{k}} \times X_{\overline{k}}, {\mathbb Q}_{\ell}(d)) \arrow{r}{\cong}
& \bigoplus_{i=0}^{2d} \mathop{\rm End}\nolimits_{{\mathbb Q}_{\ell}} (H^{i}_{\text{\'et}} (X_{\overline{k}}, {\mathbb Q}_{\ell})) \end{tikzcd}\label{commutative} \end{equation}
where the top horizontal arrow is the composition of right vertical arrows in (\ref{big-commutative}), the right vertical arrow is a special case of (\ref{main-map}) and the bottom arrow in (\ref{commutative}) is given by the composition of the following isomorphisms
\[ \begin{split}
H^{2d}_{\text{\'et}} (X_{\overline{k}} \times X_{\overline{k}}, {\mathbb Q}_{\ell}(d)) & \cong \bigoplus_{i=0}^{2d} H^{i}_{\text{\'et}}(X_{\overline{k}}, {\mathbb Q}_{\ell}(i))\otimes H^{2d-i}_{\text{\'et}}(X_{\overline{k}}, {\mathbb Q}_{\ell}(d-i))\\
& \cong \bigoplus_{i=0}^{2d} H^{i}_{\text{\'et}} (X_{\overline{k}}, {\mathbb Q}_{\ell}(i))\otimes H^{i}_{\text{\'et}} (X_{\overline{k}}, {\mathbb Q}_{\ell}(i))^{\vee}\\
& \cong \bigoplus_{i=0}^{2d} \mathop{\rm End}\nolimits_{{\mathbb Q}_{\ell}} (H^{i}_{\text{\'et}} (X_{\overline{k}}, {\mathbb Q}_{\ell}))
\end{split} \]
Suppose then that the cycle class of
\[ \Gamma \in \mathop{\rm End}\nolimits_{{\mathcal M}_{k, {\mathbb Q}}} ({\mathfrak h}(X)) \]
is nilpotent in $\bigoplus_{i=0}^{2d} \mathop{\rm End}\nolimits_{{\mathbb Q}_{\ell}} (H^{i}_{\text{\'et}} (X_{\overline{k}}, {\mathbb Q}_{\ell}))$. From the last proposition, it then follows that \[[\Gamma^{N}] = [\Gamma]^{N} = 0 \in H^{2d}_{\text{\'et}} (X \times X, {\mathbb Q}_{\ell}(d)) \cong \mathop{\rm End}\nolimits_{D^{b} (\text{Spec } k, {\mathbb Q}_{\ell})} (R\pi_{*}{\mathbb Q}_{\ell, X})\] for some $N \gg 0$ not depending on $\ell$. An inspection of the above proof shows that it works mutatis mutandis to prove item \ref{main-b}, so long as $H^{*}_{\text{\'et}} (X_{\overline{k}}, {\mathbb Z}_{\ell}(r))$ is torsion-free.
\end{proof}
\section{Proof of Theorem \ref{main-2}}
\noindent Suppose first that
\[ \Gamma \in \ker \{CH^{2} (X \times X) \to CH^{2} (X_{\overline{k}} \times X_{\overline{k}}) \} \]
Then $\Gamma$ is nilpotent, since the Rost nilpotence conjecture holds for surfaces (see \cite{Gi} or \cite{RS}). It therefore suffices to prove that any
\[ \Gamma \in CH^{2} (X_{\overline{k}} \times X_{\overline{k}})[\ell^{\infty}] \]
is nilpotent, which we can reduce to the case of $k$ finitely generated using Suslin rigidity in the form of \cite{Le}. For ease of notation, write $Y = X \times X$. Then, we have the cycle class map:
\begin{equation} CH^{2} (Y_{\overline{k}}) := \mathop{\lim_{\longrightarrow}}_{[k': k] <\infty} CH^{2} (Y_{k'}) \rightarrow \mathop{\lim_{\longrightarrow}}_{[k': k] <\infty} H^{4}_{\text{\'et}} (Y_{k'}, {\mathbb Z}_{\ell}(2))\label{cycle}\end{equation}
Now, we let
\[ H^{4}_{\text{\'et}} (Y_{k'}, {\mathbb Z}_{\ell}(2))_{0} := \ker \{ H^{4}_{\text{\'et}} (Y_{k'}, {\mathbb Z}_{\ell}(2)) \to H^{4}_{\text{\'et}} (Y_{\overline{k}}, {\mathbb Z}_{\ell}(2))^{G_{k'}} \}\]
For all but finitely many $\ell$, $H^{4}_{\text{\'et}} (Y_{\overline{k}}, {\mathbb Z}_{\ell}(2))$ is torsion-free by \cite{Ga}; for such $\ell$, the image of an $\ell$-primary torsion cycle under (\ref{cycle}) lies in
\[\mathop{\lim_{\longrightarrow}}_{[k': k] <\infty} H^{4}_{\text{\'et}} (Y_{k'}, {\mathbb Z}_{\ell}(2))_{0} \]
and thus we can consider the Abel-Jacobi map:
\[ CH^{2} (Y_{\overline{k}})[\ell^{\infty}] \to \mathop{\lim_{\longrightarrow}}_{[k': k] <\infty} H^{4}_{\text{\'et}} (Y_{k'}, {\mathbb Z}_{\ell}(2))_{0} \xrightarrow{\text{AJ}} \mathop{\lim_{\longrightarrow}}_{[k': k] <\infty} H^{1}(G_{k'}, H^{3}_{\text{\'et}} (Y_{\overline{k}}, {\mathbb Z}_{\ell}(2))) \]
There is also the (direct limit of the) boundary map:
\[ \partial: H^{3}_{\text{\'et}} (Y_{\overline{k}}, {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}(2)) = \mathop{\lim_{\longrightarrow}}_{[k': k] <\infty} H^{3}_{\text{\'et}} (Y_{\overline{k}}, {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}(2))^{G_{k'}} \to \mathop{\lim_{\longrightarrow}}_{[k': k] <\infty} H^{1}(G_{k'}, H^{3}_{\text{\'et}} (Y_{\overline{k}}, {\mathbb Z}_{\ell}(2))) \]
obtained by applying $G_{k'}$-invariants to the short exact sequence of $G_{k}$-modules (obtained from Bockstein using the fact that there is no torsion in $H^{*}_{\text{\'et}} (Y_{\overline{k}}, {\mathbb Z}_{\ell}(2))$) and then taking a direct limit over $r$:
\[ 0 \to H^{3}_{\text{\'et}} (Y_{\overline{k}}, {\mathbb Z}_{\ell}(2)) \xrightarrow{\times \ell^{r}} H^{3}_{\text{\'et}} (Y_{\overline{k}}, {\mathbb Z}_{\ell}(2)) \to H^{3}_{\text{\'et}} (Y_{\overline{k}} , {\mathbb Z}_{\ell}/\ell^{r}(2)) \to 0\]
Now, our key observation is that these maps fit into the following commutative diagram:
\begin{equation} \begin{CD}
CH^{2} (Y_{\overline{k}}) [\ell^{\infty}] @>{cl_{B}}>> H^{3}_{\text{\'et}} (Y_{\overline{k}}, {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}(2))\\
@VVV @V{\partial}VV\\
\displaystyle
\mathop{\lim_{\longrightarrow}}_{[k': k] <\infty} H^{4}_{\text{\'et}} (Y_{k'}, {\mathbb Z}_{\ell}(2))_{0} @>{\text{AJ}}>> \displaystyle\mathop{\lim_{\longrightarrow}}_{[k': k] <\infty} H^{1} (G_{k'}, H^{3}_{\text{\'et}} (Y_{\overline{k}}, {\mathbb Z}_{\ell}(2)))
\end{CD}\label{composition}
\end{equation}
where $cl_{B}$ is Bloch's cycle class map (see \cite{B}).
\begin{proof}[Proof that (\ref{composition}) commutes] Let $Y$ be any smooth projective variety over $k$ and assume for simplicity that $H^{*}_{\text{\'et}} (Y_{\overline{k}}, {\mathbb Z}_{\ell})$ is torsion-free. Now, suppose that $\gamma \in CH^{2} (Y_{\overline{k}}) [\ell^{\infty}]$. Then, there exists some finite Galois extension $k'$ of $k$ over which $\gamma$ is defined. In particular,
\[ cl_{B} (\gamma) \in H^{3}_{\text{\'et}} (Y_{\overline{k}}, {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}(2))^{G_{k'}} \]
Then, there is a surjective map:
\begin{equation} H^{1}_{\text{Zar}} (Y_{k'}, \mathcal{K}_{2, {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}}) \to H^{2}_{\text{Zar}} (Y_{k'}, \mathcal{K}_{2}) [\ell^{\infty}] = CH^{2} (Y_{k'}) [\ell^{\infty}]\label{surj}\end{equation}
where $\mathcal{K}_{2}$ is the Zariski sheaf associated to the presheaf of the Quillen $K$-theory of $Y$ (similarly, $\mathcal{K}_{2, {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}}$ is the direct limit of the torsion sheaves $\mathcal{K}_{2, {\mathbb Z}/\ell^{r}}$). Then, the Galois symbol induces maps:
\begin{equation} \begin{split} H^{i}_{Zar} (Y_{k'}, \mathcal{K}_{2, {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}}) & \to H^{i}_{Zar} (Y_{k'}, \mathcal{H}^{2}_{{\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}}(2))\\
H^{i}_{Zar} (Y_{k'}, \mathcal{K}_{2}) & \to H^{i}_{Zar} (Y_{k'}, \mathcal{H}^{2}_{{\mathbb Z}_{\ell}}(2))\\
\end{split}\label{double} \end{equation}
where $\mathcal{H}^{2}_{R}(2)$ denotes the Zariski sheafification of $U \mapsto H^{2}_{\text{\'et}} (U, R(2))$ for $R = {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}, {\mathbb Z}_{\ell}$. We observe that $i=2$ in (\ref{double}) gives the cycle class map. Moreover, (\ref{surj}) fits into a commutative diagram:
\begin{equation} \begin{CD}
H^{1}_{\text{Zar}} (Y_{k'}, \mathcal{K}_{2, {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}}) @>>> H^{2}_{\text{Zar}} (Y_{k'}, \mathcal{K}_{2}) [\ell^{\infty}]\\
@VVV @VVV\\
H^{1}_{\text{Zar}} (Y_{k'}, \mathcal{H}^{2}_{{\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}}(2)) @>>> H^{2}_{\text{Zar}} (Y_{k'}, \mathcal{H}^{2}_{{\mathbb Z}_{\ell}}(2)) [\ell^{\infty}]\\
@VVV @VVV\\
H^{3}_{\text{\'et}} (Y_{k'}, {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}(2)) @>>> H^{4}_{\text{\'et}} (Y_{k'}, {\mathbb Z}_{\ell}(2))[\ell^{\infty}]
\end{CD}\label{big-CD}
\end{equation}
Here, the top vertical arrows are those of (\ref{double}), the bottom vertical arrows arise from the spectral sequence:
\[H^{p}_{Zar} (Y_{k'}, \mathcal{H}^{q}_{R}(2)) \Rightarrow H^{p+q}_{\text{\'et}} (Y_{k'}, R(2)) \]
and the bottom horizontal arrow is the boundary map of the Bockstein exact sequence:
\[ 0 \to H^{3}_{\text{\'et}} (Y, {\mathbb Z}_{\ell}(2))\otimes {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell} \to H^{3}_{\text{\'et}} (Y, {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}(2)) \to H^{4}_{\text{\'et}} (Y, {\mathbb Z}_{\ell}(2))[\ell^{\infty}] \to 0 \]
Of course, there is a diagram analogous to (\ref{big-CD}) with $k'$ replaced by $\overline{k}$. In fact, using the Weil conjectures, Bloch derives $cl_{B}$ as the dotted arrow in:
\begin{equation}\begin{tikzcd}
H^{1}_{\text{Zar}} (Y_{\overline{k}}, \mathcal{K}_{2, {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}}) \arrow{r} \arrow{d} & CH^{2} (Y_{\overline{k}}) [\ell^{\infty}] \arrow{d} \arrow[dotted]{ld}\\
H^{3}_{\text{\'et}} (Y_{\overline{k}}, {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}(2)) \arrow{r} & H^{4}_{\text{\'et}} (Y_{\overline{k}}, {\mathbb Z}_{\ell}(2)) [\ell^{\infty}]
\end{tikzcd}\label{small-CD}
\end{equation}
Further, the diagrams (\ref{big-CD}) and (\ref{small-CD}) fit into a commutative cube in which the connecting arrows are the obvious extension-of-scalars maps from $k'$ to $\overline{k}$.
Thus, there is some $\tilde{\gamma} \in H^{1}_{\text{Zar}} (Y_{k'}, \mathcal{K}_{2, {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}})$ whose image under
\[H^{1}_{\text{Zar}} (Y_{k'}, \mathcal{K}_{2, {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}}) \to H^{3}_{\text{\'et}} (Y_{k'}, {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}(2)) \to H^{3}_{\text{\'et}} (Y_{\overline{k}}, {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}(2))^{G_{k'}} \]
coincides with $cl_{B}(\gamma)$.
Thus, what remains is to observe that there is a commutative diagram:
\begin{equation}\begin{CD}
H^{3}_{\text{\'et}} (Y_{k'}, {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}(2)) @>>> H^{4}_{\text{\'et}} (Y_{k'}, {\mathbb Z}_{\ell}(2))[\ell^{\infty}]\\
@VVV @V{AJ}VV\\
H^{3}_{\text{\'et}} (Y_{\overline{k}}, {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}(2))^{G_{k'}} @>{\partial}>> H^{1}(G_{k'}, H^{3}_{\text{\'et}} (Y_{\overline{k}}, {\mathbb Z}_{\ell}(2)))[\ell^{\infty}]
\end{CD}\label{key-CD}\end{equation}
To see this, note that the vertical maps arise from the Hochschild-Serre spectral sequence (the right map being well-defined since $H^{4}_{\text{\'et}} (Y_{\overline{k}}, {\mathbb Z}_{\ell}(2))$ is torsion-free). Then, there is a short exact sequence in the Ekedahl category of \'etale sheaves over $Y_{k'}$:
\[ 0 \to {\mathbb Z}_{\ell}(2) \to {\mathbb Q}_{\ell}(2) \to {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}(2) \to 0 \]
Let $\pi: Y_{k'} \to \text{Spec } k'$ be the structure morphism; applying $R\pi_{*}$ to this gives the boundary map:
\[ R\pi_{*}{\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}(2) \to R\pi_{*}{\mathbb Z}_{\ell}(2) [1] \]
which induces the horizontal maps in (\ref{key-CD}). There is a corresponding map of spectral sequences; for the $E_{2}$-page, this means there are commutative diagrams:
\[\begin{CD}
H^{p} (G_{k'}, H^{q}_{\text{\'et}} (Y_{\overline{k}}, {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}(2))) @>>> H^{p} (G_{k'}, H^{q+1}_{\text{\'et}} (Y_{\overline{k}}, {\mathbb Z}_{\ell}(2)))\\
@V{d_{2}^{p,q}}VV @V{d_{2}^{p,q}}VV\\
H^{p+2} (G_{k'}, H^{q-1}_{\text{\'et}} (Y_{\overline{k}}, {\mathbb Q}_{\ell}/{\mathbb Z}_{\ell}(2))) @>>> H^{p+2} (G_{k'}, H^{q}_{\text{\'et}} (Y_{\overline{k}}, {\mathbb Z}_{\ell}(2)))
\end{CD}
\]
as well as for the other pages. We deduce then that (\ref{key-CD}) commutes, as desired.
\end{proof}
\begin{Rem} We should note that the commutativity of (\ref{composition}) is essentially an algebraic version of Prop. 3.7 in \cite{B}.
\end{Rem}
\noindent To finish the proof, we observe that Bloch's cycle class map is injective by a consequence of the Merkurjev-Suslin Theorem (see \cite{MS}). Moreover, the map $\partial$ in (\ref{composition}) is an isomorphism because of a weights argument given in \cite{S3} Lemma 1.1.3. By (\ref{composition}), it follows that the direct limit of cycle class maps:
\[CH^{2} (X_{\overline{k}} \times X_{\overline{k}})[\ell^{\infty}] \to \mathop{\lim_{\longrightarrow}}_{[k': k] <\infty} H^{4}_{\text{\'et}} (X_{k'} \times X_{k'}, {\mathbb Z}_{\ell}(2))_{0} \]
is injective. Since surfaces satisfy the Lefschetz standard conjecture, it follows from Theorem \ref{main} \ref{main-b} that $\Gamma$ is nilpotent in the right group and, hence, also in the left by injectivity.
\section{Introduction}\label{sec:intro}
Cavitation is the process whereby vapor cavities, or bubbles, are produced in a liquid or solid due to pressure differences. For inertially generated bubbles, the resulting dynamics are highly transient and often nonlinear. While models have been developed to successfully describe cavitation dynamics in water, the extension of the theoretical foundation of these models to soft materials is challenging due to their unique deformation and failure mechanisms, especially under extreme loading conditions (large strain magnitudes and high strain rates). In particular, the nature of the physical and chemical composition of many soft polymers gives rise to highly nonlinear elastic and viscoelastic macroscale material behavior.
Current prediction of inertial cavitation dynamics usually relies on the approximation that the cavitation process remains nominally spherically symmetric through its life cycle, allowing for the use of the classical Rayleigh-Plesset \cite{plesset1954jap,prosperetti1977qam,luo2020scirep,barney2020pnas,saintmichel2020cocis} or Keller-Miksis \cite{keller1980bubble,estrada2018jmps,yang2020eml} modeling approaches. However, recent experiments and numerical simulations show that cavitation bubble growth and collapse rarely occur in a perfectly spherically symmetric fashion even in nominally homogeneous and isotropic soft matter \cite{brenner1995prl,hamaguchi2015pof,shaw2017pof,guedra2018jfm,saintmichel2020softmatter}.
Predicting the onset of non-spherical deformation arising due to instabilities during inertial cavitation in soft viscoelastic materials is an important and fundamental problem with significant implications across many applications. For example, non-spherical instabilities may lead to strain-localization and subsequent material damage near the bubble wall as well as other important physical, chemical, and biological outcomes, including sonochemistry \cite{yoshikawa2014chemsocrev,mettin2015sonochem}, sonoluminescence \cite{brenner1995prl}, local plasticity and fracture,
and/or tissue/cell dysfunctions in biological materials \cite{brennen2015cavitation}.
Documentation of cavitation-related instabilities in liquids has a well-established history in the fluid mechanics community, including detailed descriptions of classical instability phenomena, such as the \textit{Rayleigh-Taylor} (RT) instability \cite{taylor1950RTinstab}, or \textit{parametric} instabilities, which arise due to the accumulation of non-spherical perturbations over many oscillation periods \cite{hamaguchi2015pof,saintmichel2020softmatter,murakami2020us,gaudron2020jmps}. However, predicting the onset of non-spherical instabilities during inertial cavitation in soft materials is still in its infancy, largely due to the additional complexities arising from the intrinsic coupling of the bubble dynamics to a nonlinear, viscoelastic solid. Cavitation in a solid material can present different and perhaps more complex (non-spherical) instability patterns compared to a fluid, including wrinkles, creases, and folds. In addition, parametric instabilities tend to occur earlier during the cavitation expansion-collapse cycles in a solid material when compared to a fluid. Mathematically, the constitutive laws of hyperelastic solids are typically expressed using a Lagrangian description based on the reference configuration, while the cavitation dynamics are typically described in the current, deformed configuration, which must be accounted for when theoretically describing and predicting the onset and evolution of inertial cavitation instability patterns within a soft solid.
To address these challenges, we present a new theoretical framework that, based on a first-order incremental perturbation analysis, is able to accurately capture and predict the evolution of complex deformation modes observed in dynamic cavitation events. By comparing our theoretical predictions against recent experimental observations of various non-spherical bubble shapes, we find that our model is in excellent quantitative and qualitative agreement with the experimental measurements.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=\columnwidth]{figures/fig_PA8pt3_exp5.pdf}
\end{center}
\caption{Spatiotemporally recorded bubble dynamics via high-speed videography \cite{yang2020eml}. (a)\,A representative bubble radius vs.\ time curve for a laser-induced cavitation bubble inside a PA gel. (b)\,Quantitative measurements of the magnitude of bubble surface instabilities. (c)\,Various types of non-spherical bubble shapes corresponding to different time-points (red circles) in (a). Red dashed lines are overlaid onto original images to visualize the shape of the bubble wall. Inset in (c-ii) is the zoomed-in non-spherical shape instability. Data from \cite{yang2020eml}.} \label{fig:exp4}
\end{figure}
In order to compare and assess the accuracy of our theoretical predictions against appropriate experiments, we take advantage of several existing high-resolution data sets of laser-induced cavitation in polyacrylamide (PA) hydrogels \cite{yang2020eml}.
A representative bubble radius vs.\ time curve is shown in Fig.\,\ref{fig:exp4}(a), plotted starting at the time-point of the maximum bubble radius $R_{\text{max}}$.
Figure\,\ref{fig:exp4}(c) depicts various non-spherical bubble shapes at eight time-points during the expansion and collapse cycle of the cavitation dynamics, as indicated by red circles in Fig.\,\ref{fig:exp4}(a). The bubble shapes in frames (i,ii,iv,vi) show nearly spherical bubbles, while frames (v,vii,viii) present wrinkling, creasing, and stellate instabilities near the bubble wall, respectively. From these experimental observations, we quantify the upper and lower bounds, mean, and root-mean-square (RMS) values of the magnitude of these bubble surface instabilities \footnote{Upper and lower bounds, mean, and RMS values of the magnitude of bubble surface instabilities are summarized in Supplementary Material S2.}, as well as the bubble roundness (4$\pi\times$Area/Perimeter$^2$) (see\ Fig.\,\ref{fig:exp4}(b)).
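For concreteness, the roundness metric can be evaluated directly from the extracted bubble-wall contour; the following is a minimal sketch (the contour-extraction step itself is omitted, and the discrete shoelace and arc-length formulas are our own illustrative choice):
\begin{verbatim}
import numpy as np

def roundness(x, y):
    """Roundness 4*pi*Area/Perimeter^2 of a closed bubble-wall contour (x, y)."""
    x_next, y_next = np.roll(x, -1), np.roll(y, -1)  # next vertex (closed contour)
    area = 0.5*abs(np.sum(x*y_next - x_next*y))      # shoelace formula
    perimeter = np.sum(np.hypot(x_next - x, y_next - y))
    return 4.0*np.pi*area/perimeter**2
\end{verbatim}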
\begin{figure}[!t]
\begin{center}
\includegraphics[width=.48 \textwidth]{figures/fig_PA8pt3_expAll_non_sph_v2.pdf}
\end{center}
\caption{Summary of various instability patterns in nine individual cavitation events. Experimental data sets 2-9 are purposefully offset vertically for easier viewing of each curve. Right inset: (i) Top and (ii) side views of different instability modes.} \label{fig:expAll}
\end{figure}
Next, we non-dimensionalize all bubble radius vs.\ time curves ($t^{*}$\,$=$\,$t {\small \sqrt{p_{\infty}/\rho}}$\,$ / $\,$R_{\text{max}}$, $R^{*}$\,$=$\,$R$\,$/$\,$R_{\text{max}}$, where $p_\infty$ and $\rho$ are defined after Eq.~\eqref{Eq:RayleighPlesset}), and present our findings for the various instability patterns for nine individual experiments in Fig.\,\ref{fig:expAll}. Experimental data sets 2-9 are purposefully offset vertically for easier viewing of each curve. Top and side views of different instability modes (mode shape number $l$ later defined in Eq.\,(\ref{eq:inc_def_disp})) are summarized in the right inset. The instability pattern within each frame is directly extracted from that frame. Creases are distinguishable from wrinkles by their innate surface folds and significant local curvature in the formed creases (Fig.\,\ref{fig:exp4}(c-vii)) \cite{diab2013prsa}, whereas the emergence of stellate patterns are typically observed right after the occurrence of local creases (Fig.\,\ref{fig:exp4}(c-viii)). By conducting a careful qualitative shape analysis, we find the first appearance of non-spherical bubble shapes near the first, often violent (local Mach number exceeding 0.08), collapse, predominantly featuring shapes similar to the spherical harmonic function of mode shape 8 (Fig.\,\ref{fig:exp4}(c-v)), while subsequent collapse points show the emergence of other mode shapes (e.g., spherical harmonic function mode shapes 6 and 7) along with mode shape 8.
To quantify the critical condition for the onset of non-spherical instabilities associated with each mode shape, we develop a new theoretical framework based on a perturbation approach of the spherically symmetric Rayleigh-Plesset governing equation \cite{plesset1949dynamics}, in which the time-dependent bubble radius of the base state $R(t)$ is governed by \cite{plesset1949dynamics,estrada2018jmps,yang2020eml,murakami2020us}:
\begin{equation}\label{Eq:RayleighPlesset}
R R_{,tt} + \frac{3}{2} R_{,t}^2 = \frac{1}{\rho} \left( p_b - p_{\infty} + S - \frac{2 \gamma}{R} \right),
\end{equation}
where $(\cdot)_{,t}$ denotes the derivative with respect to time; $\rho$ is the mass density of the surrounding viscoelastic material, which is assumed to be nearly constant; $\gamma$ is the surface tension between the gaseous bubble phase and the surrounding medium; $p_b$ is the internal bubble pressure; $p_{\infty}$ is the far-field pressure, which is assumed to be atmospheric as there are no external driving forces; and $S$ is given through the integral of deviatoric Cauchy stress components over the surroundings. Consistent with our previous work, we describe the surrounding soft material as a nonlinear strain-stiffening Kelvin-Voigt (qKV) viscoelastic material \cite{yang2020eml} with a quadratic-law strain energy density function $W$ given by
\begin{equation}
W = \frac{G}{2} \left[(I_1 - 3) + \frac{\alpha}{2}(I_1-3)^2 \right], \label{eq:W_quad}
\end{equation}
where $I_1$ is the first invariant of the right Cauchy-Green deformation tensor, $G$ is the ground-state shear modulus, and $\alpha$ is a dimensionless material parameter characterizing large deformation strain stiffening effects \cite{yang2020eml}.
The Newtonian viscosity $\mu$ in the qKV model is assumed to be constant and typically has a value of $O(10^{-3})$\,$\sim$\,$O(10^{-1})$ Pa$\cdot$s for water-based hydrogels and biomaterials \cite{yang2020eml,mancia2020amr}. The stress integral $S$ then takes the form
\begin{equation}
\small
\begin{aligned}
S =& \frac{(3 \alpha - 1)G}{2} \left[ 5 - \left( \frac{R_{0}}{R} \right)^4 - \frac{4 R_{0}}{R} \right] - \frac{4 \mu R_{,t}}{R} + \\
& 2 \alpha G \left[\frac{27}{40} + \frac{1}{8} \left( \frac{{R}_{0}}{R} \right)^8 + \frac{1}{5}\left( \frac{{R}_{0}}{R} \right)^5 + \left( \frac{{R}_{0}}{R} \right)^2 - \frac{2R}{R_{0}} \right]
\end{aligned} \label{eq:S_quad}
\end{equation}
where $R_{0}$ is the undeformed bubble radius.
We model the inside of the bubble as a two-phase mixture consisting of water vapor and non-condensible gas, which is homobaric and follows the ideal gas law.
We neglect temperature changes at the bubble wall and assume there is no mass diffusion of non-condensible gas across the bubble wall since both the heat and mass diffusion processes across the bubble wall are much slower than the bubble dynamics.
Simulations are initiated when the bubble attains its maximum radius $R_{\text{max}}$ thus avoiding the need to account for non-equilibrium nucleation and growth dynamics and beginning the simulations at thermodynamic equilibrium \cite{barajas2017effects,estrada2018jmps}.
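To make the base-state model concrete, a minimal numerical sketch of Eqs.\,(\ref{Eq:RayleighPlesset})-(\ref{eq:S_quad}) is given below. The qKV constants follow the fit quoted in the Supplementary Material, whereas the density, surface tension, maximum radius, and the simple polytropic law standing in for the full two-phase bubble-content model described above are illustrative assumptions:
\begin{verbatim}
from scipy.integrate import solve_ivp

G, alpha, mu = 2.77e3, 0.48, 0.186           # qKV fit: [Pa], [-], [Pa s]
rho, gamma, p_inf = 1.0e3, 7.0e-2, 101.3e3   # assumed density, surface tension, p_infty
R_max, lam_max = 200e-6, 7.1                 # assumed R_max [m]; hoop stretch R_max/R_0
R0 = R_max/lam_max                           # undeformed (equilibrium) bubble radius
kappa, p_b0 = 1.4, 101.3e3                   # assumed polytropic exponent and p_b(R_max)

def stress_integral(R, Rdot):
    """Stress integral S of the quadratic-law Kelvin-Voigt solid, Eq. (3)."""
    x = R0/R
    S_e = 0.5*(3*alpha - 1)*G*(5 - x**4 - 4*x) \
          + 2*alpha*G*(27/40 + x**8/8 + x**5/5 + x**2 - 2/x)
    return S_e - 4*mu*Rdot/R

def rhs(t, y):
    """Rayleigh-Plesset equation (1) written as a first-order system."""
    R, Rdot = y
    p_b = p_b0*(R_max/R)**(3*kappa)          # simplified gas law inside the bubble
    S = stress_integral(R, Rdot)
    Rddot = ((p_b - p_inf + S - 2*gamma/R)/rho - 1.5*Rdot**2)/R
    return [Rdot, Rddot]

# start from R = R_max at rest; a small max_step resolves the violent collapses
sol = solve_ivp(rhs, [0.0, 200e-6], [R_max, 0.0], max_step=2e-9, rtol=1e-8)
\end{verbatim}
The resulting histories $R(t)$ and $R_{,t}(t)$ can then be fed into the perturbation analysis that follows.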
Next, based on the spherically symmetric Rayleigh-Plesset governing equation \eqref{Eq:RayleighPlesset}, we consider the following perturbation of the displacement field using spherical harmonic basis functions \footnote{Derivation of the ansatz for the perturbed deformation is summarized in Supplementary Material S3.}:
\begin{equation}
\left\lbrace
\begin{aligned}
\tilde{u}_r &= \frac{a(t) R^2}{r^2} Y_{l}^{m} (\theta,\phi) \quad (l \geqslant |m| >0) \\
\tilde{u}_{\theta} &= 0 \\
\tilde{u}_{\phi} &= 0
\end{aligned}
\right. \label{eq:inc_def_disp}
\end{equation}
where $\lbrace r, \theta, \phi \rbrace$ are radial, polar, and azimuthal angular coordinates;
$Y_{l}^{m} (\theta,\phi)$
is the normalized $(l,m)^{th}$ spherical harmonic function;
and $a(t)$ is the time-dependent perturbation magnitude at the bubble wall.
One can show that the non-spherical perturbation \eqref{eq:inc_def_disp} is isochoric \cite{gaudron2020jmps}.
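A quick check confirms this: since $\tilde{u}_{\theta} = \tilde{u}_{\phi} = 0$ and $\tilde{u}_{r} \propto r^{-2}$, the trace of the spatial gradient of the perturbation vanishes,
\[ \frac{\partial \tilde{u}_r}{\partial r} + \frac{2 \tilde{u}_r}{r} = a R^{2} Y_{l}^{m} \left( -\frac{2}{r^{3}} + \frac{2}{r^{3}} \right) = 0 . \]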
The perturbation is governed by the momentum balance equation, expressed in the current, deformed configuration as
\begin{equation}
\nabla\cdot \boldsymbol{\sigma} = \rho \mathbf{a} \label{eq:mom}
\end{equation}
where $\nabla\cdot(\bullet)$ is the spatial divergence operator, ${\bf a}$ is the acceleration vector, and $\boldsymbol{\sigma}$ is the Cauchy stress tensor. The traction boundary condition at the bubble wall is
\begin{equation}
\boldsymbol{\sigma} \mathbf{n}|_{r=R+aY_l^m} = -p_{\rm b}\mathbf{n} - \gamma (\nabla_{\mathcal{S}} \cdot \mathbf{n})\mathbf{n}, \label{eq:bc}
\end{equation}
where $[\mathbf{n}] = [-1, a {Y_{l,\theta}^{m}}/R, a Y_{l,\phi}^{m}/(R \text{sin}\theta)]^{\top}$ is the linearized outward unit normal vector on the perturbed bubble wall, and $\nabla_{\mathcal{S}}\cdot(\bullet)$ is the surface divergence operator in the deformed configuration. In the far-field, the stress approaches a state of hydrostatic pressure: $\boldsymbol{\sigma}|_{r\rightarrow\infty} = -p_\infty {\bf I}$, where ${\bf I}$ is the identity tensor.
After inserting \eqref{eq:inc_def_disp} into \eqref{eq:mom}, integrating over $r$ from the current, perturbed bubble wall to the far-field, applying the radial boundary conditions, and collecting the $O(a)$ terms, we obtain the governing, second-order differential equation for the bubble instability perturbation magnitude, $a$:
\begin{equation}
a_{,tt} + \eta a_{,t} - \xi a = 0\,; \quad \eta = \frac{3 R_{,t}}{R} + \frac{4 \mu}{ \rho R^2} + \frac{ l(l+1) \mu}{ 3 \rho R^2}, \label{eq:a_sec_ode}
\end{equation}
where $\xi$ explicitly accounts for inertial effects during cavitation, nonlinear deformations of the viscoelastic solid, and surface tension effects:
\begin{equation}
\small
\begin{aligned}
\xi = & - \frac{R_{,tt}}{R} + \frac{4 \mu R_{,t}}{\rho R^3} -
\frac{ 2 l(l+1) \mu R_{,t}}{ 3 \rho R^3} \\
& - \frac{2 G R_{0} }{\rho R^3} \Big( 1 + \frac{R_{0}^3}{R^3} \Big) - \frac{G l (l+1)}{\rho (R^2 + R R_{0} + R_{0}^2)} \\
& - \frac{2 \alpha G}{\rho R^2} \frac{(R-R_{0})^2}{R R_{0}} \Big( 1+ \frac{R_{0}}{R} \Big)^3 \Big( 2- \frac{2R_{0}}{R} + \frac{3R_{0}^2}{R^2} - \frac{R_{0}^3}{R^3} + \frac{R_{0}^4}{R^4} \Big) \\
& - \frac{\alpha G l(l+1) (R - R_{0})^2}{5 \rho R R_{0} (R^2 + R R_{0} + R_{0}^2)} \Big( 10 + \frac{6R_{0}}{R} + \frac{3R_{0}^2}{R^2} + \frac{R_{0}^3}{R^3} \Big) \\
& - \frac{(l+2)(l-1)\gamma}{\rho R^3}.
\end{aligned} \label{eq:xi}
\end{equation}
We note that when $\alpha$\,$ \rightarrow$\,$ 0$, Eqs.\,(\ref{eq:a_sec_ode}-\ref{eq:xi}) describe the evolution of the perturbation magnitude $a$ in a neo-Hookean viscoelastic medium.
Examining the differential relation \eqref{eq:a_sec_ode}, we find that perturbations $a$ grow if $\eta$\,$<$\,$0$ or $\xi$\,$>$\,$0$ and decay if $\eta$\,$>$\,$0$ and $\xi$\,$<$\,$0$ \footnote{See Supplementary Material S5 for more information regarding the stability phase diagram for \eqref{eq:a_sec_ode}}.
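Continuing the numerical sketch above (same parameters, \texttt{rhs}, and \texttt{sol}), the indicators $\eta$ and $\xi$ of Eqs.\,(\ref{eq:a_sec_ode})-(\ref{eq:xi}) can be evaluated along the simulated trajectory to flag the time intervals over which a given mode shape $l$ can grow:
\begin{verbatim}
def eta_xi(R, Rdot, Rddot, l):
    """Coefficients of Eq. (7), a_tt + eta*a_t - xi*a = 0, for mode shape l."""
    x = R0/R
    eta = 3*Rdot/R + 4*mu/(rho*R**2) + l*(l + 1)*mu/(3*rho*R**2)
    xi = (-Rddot/R + 4*mu*Rdot/(rho*R**3) - 2*l*(l + 1)*mu*Rdot/(3*rho*R**3)
          - 2*G*R0/(rho*R**3)*(1 + x**3)
          - G*l*(l + 1)/(rho*(R**2 + R*R0 + R0**2))
          - 2*alpha*G/(rho*R**2)*(R - R0)**2/(R*R0)*(1 + x)**3
            *(2 - 2*x + 3*x**2 - x**3 + x**4)
          - alpha*G*l*(l + 1)*(R - R0)**2/(5*rho*R*R0*(R**2 + R*R0 + R0**2))
            *(10 + 6*x + 3*x**2 + x**3)
          - (l + 2)*(l - 1)*gamma/(rho*R**3))
    return eta, xi

l = 8                                        # mode shape observed at the first collapse
unstable = []
for t, (R, Rdot) in zip(sol.t, sol.y.T):
    Rddot = rhs(t, [R, Rdot])[1]
    eta, xi = eta_xi(R, Rdot, Rddot, l)
    unstable.append(eta < 0.0 or xi > 0.0)   # growth criterion stated above
\end{verbatim}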
\begin{figure} [!t]
\begin{center}
\includegraphics[width= 0.42 \textwidth]{figures/fig3_update.pdf}
\end{center}
\caption{(a)\,Hoop stretch at the bubble wall $\lambda_{w}$ for the experiment labeled ``exp 4'' in Fig.\,\ref{fig:expAll}. Histories of (b)\,the bubble radius $R$ and the quantities (c)\,$\eta$ and (d)\,$\xi$ are numerically computed using simulation parameters based on Fig.\,\ref{fig:expAll} ``exp 4''. Red shaded regions denote situations in which non-spherical instabilities corresponding to $l=8$ are predicted. The critical values of $\lambda_w$ at the onset of instability occurring during the first three collapse cycles are marked as black crosses in (b).} \label{fig:phase_diagram}
\end{figure}
Utilizing the radius vs.\ time data from the experimental observations in Fig.\,\ref{fig:expAll} in our instability model Eqs.\,(\ref{eq:a_sec_ode}-\ref{eq:xi}), we can theoretically predict the onset of various non-spherical instability patterns during the cavitation process. For illustration, based on this data set (specifically, ``exp 4'' in Fig.~\ref{fig:expAll}), we consider the emergence of instability mode shape $l=8$. The evolution of the bubble radius $R$ for this case is numerically simulated, and the histories of the bubble radius and the quantities $\eta$ and $\xi$ are plotted in Fig.\,\ref{fig:phase_diagram}(b-d). Conditions under which non-spherical instabilities corresponding to $l=8$ are predicted (i.e., $\eta<0$ or $\xi>0$) are marked as red shaded regions in Fig.\,\ref{fig:phase_diagram}(b-d). The onset of non-spherical deformation predicted by the theoretical instability model is in good agreement with the experimental observations in Fig.\,\ref{fig:phase_diagram}(a) during each of the first three collapse cycles.
From Eqs.\,(\ref{eq:a_sec_ode}-\ref{eq:xi}), we find that both material viscosity and surface tension always act to stabilize the bubble against non-spherical deformation. However, when the bubble size is much greater than the characteristic length $\gamma/G$, the effect of surface tension is negligible. When the bubble approaches the final equilibrium radius ($R$\,$\rightarrow$\,$R_{0}$, $R_{,t}$\,$\rightarrow$\,$0$, and $R_{,tt}$\,$\rightarrow$\,$0$), we find that $\eta_{\infty}$\,$>$\,$0$ and $\xi_{\infty}$\,$<$\,$0$, so that a spherical bubble will be stable under all perturbation modes. Under these conditions, we can also obtain the natural frequency of vibration $\omega_l$ for a bubble in a viscoelastic material corresponding to each non-spherical mode shape $l$:
\begin{equation}
\omega_l^2 = \frac{4 G}{\rho R_{0}^2} + \frac{G l (l+1)}{3 \rho R_{0}^2} + \frac{(l+2)(l-1) \gamma}{\rho R_{0}^3}.
\end{equation}
As discussed in \cite{murakami2020us}, a non-spherical mode becomes unstable under continuous external driving when the driving frequency $\omega_{d}$ equals $ 2 \omega_{l}$.
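For reference, $\omega_{l}$ is straightforward to evaluate numerically; the short sketch below uses the qKV shear modulus quoted in the Supplementary Material together with illustrative (assumed) values of $\rho$, $\gamma$, and $R_{0}$:
\begin{verbatim}
def omega_l(l, G=2.77e3, rho=1.0e3, gamma=7.0e-2, R0=30e-6):
    """Natural angular frequency [rad/s] of non-spherical mode l about R0."""
    w2 = 4*G/(rho*R0**2) + G*l*(l + 1)/(3*rho*R0**2) \
         + (l + 2)*(l - 1)*gamma/(rho*R0**3)
    return w2**0.5

f_8 = omega_l(8)/(2*3.141592653589793)       # corresponding frequency in Hz for l = 8
\end{verbatim}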
Taking a closer look at the temporal evolution of the experimentally measured bubble wall hoop stretch, $\lambda_w$ (Fig.\,\ref{fig:phase_diagram}(a)), we find that the onset of instabilities occurs for $\lambda_w$\,$>$\,$1$. This marks a significant departure from previous surface instability investigations under quasi-static loading conditions, where rugae patterns require a stretch ratio within the plane of the surface that is less than one \cite{cai2010softmatter,jin2011epl,li2012softmatter,diab2013prsa,2017zhaojap}.
Intrigued by these observations, we ask how well our instability model predicts the emergence of instabilities for $\lambda_w$\,$ >$\,$ 1$. In Fig.\,\ref{fig:crit_lambda}, we plot the theoretically predicted values of $\lambda_w$ at the onset of instability during the first three collapse cycles as a function of the maximum bubble radius $R_{\text{max}}$ and the non-spherical mode shape number $l$ \footnote{The initial hoop stretch ratio $\lambda_{\text{max}}$ is fixed at the same value as in Fig.\,\ref{fig:expAll}. Details of the theoretical predictions are summarized in Supplementary Material S7.}.
Next, using the experimentally measured values of $R_{\text{max}}$, we plot the critical values of $\lambda_{w}$ at the onset of each experimentally observed instability occurring in Fig.\,\ref{fig:expAll}, and we find that the critical values during the first three collapse cycles are $\lambda_{wc}^{(1)}$\,$=$\,6.5\,$\pm$\,0.3 (circles in Fig.\,\ref{fig:crit_lambda}), $\lambda_{wc}^{(2)}$\,$=$\,3.2\,$\pm$\,0.2 (pluses in Fig.\,\ref{fig:crit_lambda}), and $\lambda_{wc}^{(3)}$\,$=$\,1.8\,$\pm$\,0.2 (crosses in Fig.\,\ref{fig:crit_lambda}), respectively. Comparing the experimentally obtained values of $\lambda_{wc}$ with the ones predicted by our instability model, we see good agreement across all mode shapes, as shown in Fig.\,\ref{fig:crit_lambda}.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=.42 \textwidth]{figures/fig_crit_lambda.pdf}
\end{center}
\caption{Theoretically predicted and experimentally measured values of the critical hoop stretch at the bubble wall $\lambda_{wc}$ at the onset of instability during the first three collapse cycles as a function of the maximum bubble radius $R_\text{max}$ and the non-spherical mode shape number $l$, where $\lambda_{\text{max}}$ is fixed at the same value as in Fig.\,\ref{fig:expAll}.} \label{fig:crit_lambda}
\end{figure}
It is important to note that other types of non-spherical instabilities are also possible during an inertial cavitation event. For example, in Fig.\,\ref{fig:exp4}(c-vii), one can see crease-like instabilities near the bubble wall for $\lambda_{w}$\,$<$\,1 \cite{milner2017softmatter,bruning2019prl,yang2020semannual}.
Axisymmetric instabilities may also appear during inertial cavitation in heterogeneous media or in the vicinity of impedance mismatched boundaries \cite{brujan_nahen_schmidt_vogel_2001,bremond2006pof,yang2021semannual_interface}.
The critical conditions for predicting the onset of these and other non-spherical instabilities remain an active area of research and will be the subject of future work.
Finally, through fully 3D finite element simulations, we find that non-spherical instabilities near the bubble wall can induce strain and stress amplification in soft materials, which might lead to a thin damage layer developing near the bubble wall \footnote{Fully 3D finite element simulations are presented in Supplementary Material S8.}.
It is also conceivable that within such a layer the material could experience inelastic deformation, fracture, or significant strain softening \cite{hutchens2016softmatter,movahed2016jasa,cohen_frac_cav}.
While addressing appropriate material damage models is an exciting research area beyond the scope presented here, we nevertheless hope that the results of our theory motivate future studies aimed at resolving the intricate mechanics and physics near the bubble wall during these high strain-rate, inertially dominated deformations.
In sum, this paper presents a new theoretical framework for predicting the dynamic onset and evolution of complex non-spherical instability shapes in nonlinear viscoelastic soft materials during inertial cavitation, and provides a new foundation for characterizing and classifying dynamic instabilities under extreme loading conditions.
We gratefully acknowledge funding support from the Office of Naval Research (Dr. Timothy Bentley) under grant N000141712058. We thank Dr. Lauren Hazlett for helpful discussions and editing of the manuscript.
\section{Polyacrylamide Hydrogel Preparation}
\label{suppsec:PA_protocol}
To prepare polyacrylamide (PA) hydrogels, 40.0 \% acrylamide solution and 2.0 \% bis solution (Bio-Rad, Hercules, CA) were mixed to a final concentration of 8.0 \%/0.08 \% Acrylamide/Bis (v/v) in deionized water and crosslinked with 0.5 \% Ammonium Persulfate (APS) and 1.25 \% N,N,N',N'-tetramethylethane-1,2-diamine (TEMED) (ThermoFisher Scientific, USA). Once mixed, PA samples were allowed to
polymerize in custom 15 mm diameter by 2 mm height cylindrical delrin molds for 1 hour. PA hydrogels were then submerged in deionized water 24 hours before testing to allow for any swelling to equilibrate.
The viscoelastic material properties of the final prepared PA hydrogels are characterized by a quadratic law strain stiffening Kelvin-Voigt (qKV) viscoelastic model with ground-state shear modulus $G=2.77$ kPa, viscosity $\mu=0.186 \pm 0.194 $ Pa$\cdot$s, and strain stiffening parameter $\alpha=0.48 \pm 0.14$ \cite{yang2020eml}.
\section{Magnitude of bubble surface perturbations}
This section describes how we quantitatively measure the magnitude of bubble surface perturbations in our experiments. First, we perform image post-processing to extract the outline of the bubble wall from each video frame, see Fig.\,\ref{fig:mag_wrk}(a-b), which is then rewritten in a polar coordinate system $\lbrace r, \phi \rbrace$, see Fig.\,\ref{fig:mag_wrk}(c), where $r$ and $\phi$ are the radial and angular coordinates defined in Fig.\,\ref{fig:mag_wrk}(b). We further define the mean radius of the bubble as:
\begin{equation}
\bar{r} = \frac{1}{2 \pi} \int_{\phi} r d \phi
\end{equation}
The upper bound, lower bound, mean, and root-mean-square (RMS) values of the magnitude of the bubble surface perturbation are further defined as:
\begin{align}
a_{\text{upper bound}} &= \max_{\phi} |r-\bar{r}|, \\
a_{\text{lower bound}} &= \min_{\phi} |r-\bar{r}|, \\
a_{\text{mean}} &= \frac{1}{2 \pi} \int_{\phi} |r-\bar{r}| d \phi, \\
a_{\text{RMS}} &= \left( \frac{1}{2 \pi} \int_{\phi} |r-\bar{r}|^2 d \phi \right)^{1/2}.
\end{align}
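A minimal sketch of this post-processing step is given below; it assumes the bubble-wall contour has already been extracted as pixel coordinates $(x,y)$ and that the contour points are sampled approximately uniformly in $\phi$, so that the integrals above may be replaced by simple averages (both assumptions are ours, made for illustration):
\begin{verbatim}
import numpy as np

def perturbation_stats(x, y):
    """Upper/lower bound, mean, and RMS perturbation magnitude of a contour (x, y)."""
    xc, yc = x.mean(), y.mean()     # bubble center taken as the contour centroid
    r = np.hypot(x - xc, y - yc)    # radial coordinate of each contour point
    rbar = r.mean()                 # approximates (1/2 pi) * integral of r dphi
    dev = np.abs(r - rbar)
    return dev.max(), dev.min(), dev.mean(), np.sqrt((dev**2).mean())
\end{verbatim}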
\begin{figure}[!t]
\centering
\includegraphics[width = 0.9 \textwidth]{figures/fig_S1.pdf}
\caption{Measurement of magnitude of bubble shape instabilities. (a)\,A typical frame from the recorded experimental video. (b)\,Bubble shape is extracted from the image post-processing of each frame. (c)\,Bubble surface is further transformed into a polar coordinate system.} \label{fig:mag_wrk}
\end{figure}
\section{Derivation of the ansatz for the perturbed displacement field in main text Eq (4)}
Consider a spherical void (see Fig.\,\ref{fig:sph_coord}) with reference configuration $\mathcal{B}_0(r_0,\theta_0,\phi_0)$,
$\lbrace R_{0} \leqslant r_0 \leqslant \infty, \ 0 \leqslant \theta_0 \leqslant \pi, \ 0 \leqslant \phi_0 \leqslant 2 \pi \rbrace$, and current deformed configuration $\mathcal{B}(r,\theta,\phi)$, $\lbrace R \leqslant r \leqslant \infty, \ 0 \leqslant \theta \leqslant \pi, \ 0 \leqslant \phi \leqslant 2 \pi \rbrace$, where $\lbrace r_0, r \rbrace$ represent referential and current radial coordinates, $\lbrace \phi_0, \phi \rbrace$ are referential and current azimuthal angular coordinates, and $\lbrace \theta_0, \theta \rbrace$ are referential and current polar angular coordinates.
The time-dependent bubble radius is $R(t)$, and $R_0$ denotes the undeformed bubble radius.
\begin{figure}[!b]
\centering
\includegraphics[width = 0.27 \textwidth]{./figures/sph_coord.pdf}
\caption{Spherical coordinates $\lbrace r, \theta, \phi \rbrace$. } \label{fig:sph_coord}
\end{figure}
The spherically symmetric base state of deformation is described by the radial mapping $r=(r_0^3 + R^3(t) - R_0^3)^{1/3}$, so that in the base state, the only non-zero component of displacement is $u_{r} =r- r_{0}$. We then consider a displacement perturbation of the form $\{\tilde{u}_r,\tilde{u}_\theta,\tilde{u}_\phi\}$, which is added to the base-state displacement field. The resulting deformation gradient is ${\bf F} = {\bf F}_0+{\bf d}$, where ${\bf F}_0$ is the base-state deformation gradient, which can be expressed referentially as
\begin{equation}
[\mathbf{F}_0] = \begin{bmatrix}
\frac{r_0^2}{(r_0^3 + R^3 - R_0^3)^{2/3}} & 0 & 0 \\[4pt]
0 & \frac{(r_0^3+R^3-R_0^3)^{1/3}}{r_0} & 0 \\[4pt]
0 & 0 & \frac{(r_0^3+R^3-R_0^3)^{1/3}}{r_0}
\end{bmatrix}\quad\text{so that}\quad \det{\bf F}_0=1, \label{eq:F}
\end{equation}
and ${\bf d}$ is the perturbation of the deformation gradient, which can be expressed spatially as
\begin{equation}
[\mathbf{d}] = \begin{bmatrix}
\frac{\partial \tilde{u}_r}{\partial r} & \frac{1}{r} \frac{\partial \tilde{u}_r}{\partial \theta} - \frac{\tilde{u}_{\theta}}{r} & \frac{1}{r \text{sin}\theta} \frac{\partial \tilde{u}_r}{\partial \phi} - \frac{\tilde{u}_{\phi}}{r} \\[4pt]
\frac{\partial \tilde{u}_{\theta}}{\partial r} & \frac{1}{r} \frac{\partial \tilde{u}_{\theta}}{\partial \theta} + \frac{\tilde{u}_{r}}{r} & \frac{1}{r \text{sin}\theta} \frac{\partial \tilde{u}_{\theta}}{\partial \phi} - \frac{\tilde{u}_{\phi} \text{cot}\theta}{r} \\[4pt]
\frac{\partial \tilde{u}_{\phi}}{\partial r} & \frac{1}{r} \frac{\partial \tilde{u}_{\phi}}{\partial \theta} & \frac{1}{r \text{sin}\theta} \frac{\partial \tilde{u}_{\phi}}{\partial \phi} + \frac{\tilde{u}_r}{r} + \frac{\tilde{u}_{\theta} \text{cot}\theta}{r}
\end{bmatrix}. \label{eq:inc_d}
\end{equation}
Since the surrounding material is assumed to be incompressible, the incremental deformation gradient $\mathbf{d}$ must satisfy
\begin{equation}
\text{tr}(\mathbf{d}) = \frac{\partial \tilde{u}_r}{\partial r} + \frac{1}{r} \frac{\partial \tilde{u}_{\theta}}{\partial \theta} + \frac{\tilde{u}_r}{r} + \frac{1}{r \text{sin} \theta} \frac{\partial \tilde{u}_{\phi}}{\partial \phi} + \frac{\tilde{u}_r}{r} + \frac{\tilde{u}_{\theta} \text{cot} \theta}{r} = 0 \label{eq:incomp_sph}
\end{equation}
We consider the following displacement perturbation:
\begin{align}
\left\lbrace
\begin{aligned}
\tilde{u}_{r} &= f(r) \text{cos}( m \phi ) P_{l}^{m}(\text{cos} \theta), \quad \\
\tilde{u}_{\theta} &= g(r) \text{cos}( m \phi ) \frac{d}{d \theta} P_{l}^{m}(\text{cos} \theta), \\
\tilde{u}_{\phi} &= h(r) m \text{sin}( m \phi ) P_{l}^{m}(\text{cos} \theta) ,
\end{aligned}
\right. \label{eq:inc_def_all}
\end{align}
where $f(r)$, $g(r)$, and $h(r)$ are functions that only depend on the spatial radial coordinate $r$.
Perturbations of this form are related to the normalized spherical harmonic functions $Y_{l}^{m}(\theta,\phi) := \text{cos}(m \phi) P_{l}^{m} (\text{cos}\theta) $, where $P_{l}^{m} (\text{cos}\theta)$ are the normalized associated Legendre polynomials, $m$ is the fixed wavenumber in the azimuthal angular coordinate, and the integers $m$ and $l$ satisfy $|m| \leqslant l$.
After inserting the ansatz \eqref{eq:inc_def_all} into the incompressibility condition \eqref{eq:incomp_sph},
we obtain
\begin{equation}
\frac{\partial f}{\partial r} + \frac{2f}{r} - \frac{g}{r}\left[l(l+1)-\frac{m^2}{\text{sin}^2 \theta} \right] + \frac{m^2 h}{r \text{sin}\theta} = 0. \label{eq:inc_def_incomp_cond}
\end{equation}
Here, motivated by our experimental observations, we focus on non-spherical, non-axisymmetric instabilities, in which $m>0$. Thus, the solution of Eq.\,\eqref{eq:inc_def_incomp_cond} obeys the following conditions:
\begin{equation}
g=h=0 \quad \Rightarrow \quad \tilde{u}_{\theta}=\tilde{u}_{\phi} = 0 ,
\end{equation}
\begin{equation}
\frac{\partial f}{\partial r} + \frac{2f}{r} = 0 \
\Rightarrow \ f(r) = C r^{-2}. \label{eq:case1_incomp}
\end{equation}
We further rewrite $C$ as $a R^2$, where $a(t)$ is the time-dependent perturbation magnitude at the bubble wall, so that \eqref{eq:inc_def_all} takes the form of Eq.\,(4) in the main text.
\section{Derivation of Eqs (7-8) in the main text}
\subsection{Kinematics}
First, we calculate several kinematical fields in the surrounding soft material.
For the ansatz for the perturbed displacement field in Eq.\,(4) of the main text, the deformation mapping is
\begin{equation}
\left\lbrace
\begin{aligned}
r(r_0,\theta_0,\phi_0,t) &= (r_0^3 + R^3(t) - R_0^3)^{1/3} + \frac{a(t)R^2(t)}{(r_0^3 + R^3(t) - R_0^3)^{2/3}} Y_{l}^{m} (\theta_0,\phi_0), \\
\theta(r_0,\theta_0,\phi_0,t) &= \theta_0, \\
\phi(r_0,\theta_0,\phi_0,t) &= \phi_0 ,
\end{aligned}
\right. \label{eq:def_mapping}
\end{equation}
where $Y_{l}^{m} (\theta,\phi)$
is the normalized $(l,m)^{th}$ spherical harmonic function,
and $a(t)$ is a time-dependent coefficient, which is assumed to be much smaller than $R(t)$.
The referential Lagrangian description of the displacement field is then
\begin{equation}
\left\lbrace
\begin{aligned}
u_r(r_0,\theta_0,\phi_0,t) &= (r_0^3 + R^3(t) - R_0^3)^{1/3} - r_0 + \frac{a(t)R^2(t)}{(r_0^3 + R^3(t) - R_0^3)^{2/3}} Y_{l}^{m} (\theta_0,\phi_0), \\
u_{\theta}(r_0,\theta_0,\phi_0,t) &= 0, \\
u_{\phi}(r_0,\theta_0,\phi_0,t) &= 0 .
\end{aligned}
\right. \label{eq:inc_def_disp}
\end{equation}
The deformation gradient tensor in the referential Lagrangian description can be simplified as
\begin{equation}
\begin{aligned}
[\mathbf{F}] &= \begin{bmatrix}
\frac{\partial r}{\partial r_0} & \frac{1}{r_0}\frac{\partial u_r}{\partial \theta_0} & \frac{1}{r_0 \text{sin} \theta_0} \frac{\partial u_r}{\partial \phi_0} \\[4pt]
0 & \frac{r}{r_0} & 0 \\[4pt]
0 & 0 & \frac{r}{r_0}
\end{bmatrix} \\
&= \begin{bmatrix}
\frac{r_0^2}{(r_0^3+R^3-R_0^3)^{2/3}} - \frac{2 r_0^2 a R^2 Y_l^m}{(r_0^3+R^3-R_0^3)^{5/3}} & \frac{a R^2}{r_0 (r_0^3+R^3-R_0^3)^{2/3}} \frac{\partial Y_l^m}{\partial \theta_0} & \frac{a R^2}{r_0 \text{sin}\theta_0 (r_0^3+R^3-R_0^3)^{2/3}} \frac{\partial Y_l^m }{\partial \phi_0 } \\[4pt]
0 & \frac{(r_0^3+R^3-R_0^3)^{1/3}}{r_0} + \frac{a R^2 Y_{l}^{m}}{r_0 (r_0^3+R^3-R_0^3)^{2/3}} & 0 \\[4pt]
0 & 0 & \frac{(r_0^3+R^3-R_0^3)^{1/3}}{r_0} + \frac{a R^2 Y_{l}^{m}}{r_0 (r_0^3+R^3-R_0^3)^{2/3}}
\end{bmatrix}.
\end{aligned}
\end{equation}
Since $a(t) \ll R(t)$, the left Cauchy-Green deformation tensor can be linearized as
\begin{equation}
\begin{aligned}
[\mathbf{B}] &= [\mathbf{F}][\mathbf{F}]^{\top} \\
&\approx \begin{bmatrix}
\frac{r_0^4}{(r_0^3+R^3-R_0^3)^{4/3}} - \frac{4 r_0^4 a R^2 Y_l^m}{(r_0^3+R^3-R_0^3)^{7/3}} & \frac{a R^2}{r_0^2 (r_0^3+R^3-R_0^3)^{1/3}} \frac{\partial Y_l^m}{\partial \theta_0} & \frac{a R^2}{r_0^2 \text{sin}\theta_0 (r_0^3+R^3-R_0^3)^{1/3}} \frac{\partial Y_l^m }{\partial \phi_0 } \\[4pt]
\frac{a R^2 }{r_0^2 (r_0^3+R^3-R_0^3)^{1/3} }\frac{\partial Y_l^m}{\partial \theta_0} & \frac{(r_0^3+R^3-R_0^3)^{2/3}}{r_0^2} + \frac{2a R^2 Y_{l}^{m}}{r_0^2 (r_0^3+R^3-R_0^3)^{1/3}} & 0 \\[4pt]
\frac{a R^2}{r_0^2 \text{sin}\theta_0 (r_0^3+R^3-R_0^3)^{1/3}} \frac{\partial Y_l^m }{\partial \phi_0 } & 0 & \frac{(r_0^3+R^3-R_0^3)^{2/3}}{r_0^2} + \frac{2a R^2 Y_{l}^{m}}{r_0^2 (r_0^3+R^3-R_0^3)^{1/3}}
\end{bmatrix}.
\end{aligned}
\end{equation}
To express kinematical fields using a spatial description, we linearize the inverse deformation mapping:
\begin{equation}
r_0(r,\theta,\phi,t) \approx (r^3+R_0^3-R^3(t))^{1/3} - \frac{a(t) R^2(t)}{(r^3+R_0^3-R^3(t))^{2/3}} Y_l^m(\theta, \phi).
\end{equation}
Then, using the inverse mapping and linearizing, the left Cauchy-Green deformation tensor can be expressed spatially as
\begin{equation}\label{eq:spatialB}
\begin{split}
[\mathbf{B}] \approx \begin{bmatrix}
\frac{(r^3+R_0^3-R^3)^{4/3}}{r^4}-\frac{4 a R^2 (r^3+R_0^3-R^3)^{1/3} Y_l^m}{r^4} & \frac{a R^2}{r (r^3+R_0^3-R^3)^{2/3}} \frac{\partial Y_l^m}{\partial \theta} & \frac{a R^2}{r \text{sin}\theta (r^3+R_0^3-R^3)^{2/3}} \frac{\partial Y_l^m}{\partial \phi} \\[4pt]
\frac{a R^2}{r (r^3+R_0^3-R^3)^{2/3}} \frac{\partial Y_l^m}{\partial \theta} & \frac{r^2}{(r^3+R_0^3-R^3)^{2/3}} + \frac{2 a R^2 r^2 Y_l^m}{(r^3+R_0^3-R^3)^{5/3}} & 0 \\[4pt]
\frac{a R^2}{r \text{sin}\theta (r^3+R_0^3-R^3)^{2/3}} \frac{\partial Y_l^m}{\partial \phi} & 0 & \frac{r^2}{(r^3+R_0^3-R^3)^{2/3}} + \frac{2 a R^2 r^2 Y_l^m}{(r^3+R_0^3-R^3)^{5/3}}
\end{bmatrix}.
\end{split}
\end{equation}
The referential description of the radial velocity is
\begin{equation}
V_r(r_0,\theta_0,\phi_0,t) = \frac{\partial u_r}{\partial t}\bigg\rvert_{r_0} = \frac{R^2 R_{,t}}{(r_0^3+R^3-R_0^3)^{2/3}} - \frac{2a R^4 R_{,t} Y_l^m}{(r_0^3+R^3-R_0^3)^{5/3}} + \frac{(a_{,t}R^2 + 2 a R R_{,t}) Y_l^m}{(r_0^3+R^3-R_0^3)^{2/3}},
\end{equation}
and upon linearization, the spatial description of the radial velocity is
\begin{equation}
\begin{aligned}
v_r(r,\theta,\phi,t) \approx \frac{R^2 R_{,t}}{r^2} + \frac{a_{,t}R^2 + 2a R R_{,t}}{r^2} Y_l^m.
\end{aligned}
\end{equation}
The spatial velocity gradient tensor is
\begin{equation}
[\mathbf{L}] = [\nabla \mathbf{v}] = \begin{bmatrix}
-\frac{2 R^2 R_{,t}}{r^3} - \frac{2 (a_{,t}R^2 + 2 a R R_{,t})}{r^3} Y_l^m & \frac{(a_{,t}R^2 + 2 a R R_{,t})}{r^3} \frac{\partial Y_l^m}{\partial \theta} & \frac{(a_{,t}R^2 + 2 a R R_{,t})}{r^3 \text{sin}\theta} \frac{\partial Y_l^m}{\partial \phi} \\[4pt]
0 & \frac{R^2 R_{,t}}{r^3} + \frac{(a_{,t}R^2 + 2 a R R_{,t})}{r^3} Y_l^m & 0 \\[4pt]
0 & 0 & \frac{R^2 R_{,t}}{r^3} + \frac{(a_{,t}R^2 + 2 a R R_{,t})}{r^3} Y_l^m
\end{bmatrix},
\end{equation}
and the rate of deformation tensor $\mathbf{D}$ is then
\begin{equation}\label{eq:spatialD}
\begin{aligned}
[\mathbf{D}] &= \frac{1}{2}\left( [\mathbf{L}] + [\mathbf{L}]^{\top} \right) \\
&= \begin{bmatrix}
-\frac{2 R^2 R_{,t}}{r^3}-\frac{2 (a_{,t}R^2+2a R R_{,t})}{r^3} Y_l^m & \frac{(a_{,t}R^2+2a R R_{,t})}{2 r^3} \frac{\partial Y_l^m}{\partial \theta} & \frac{(a_{,t}R^2+2a R R_{,t})}{2 r^3 \text{sin}\theta} \frac{\partial Y_l^m}{\partial \phi} \\[4pt]
\frac{(a_{,t}R^2+2a R R_{,t})}{2 r^3} \frac{\partial Y_l^m}{\partial \theta} & \frac{ R^2 R_{,t}}{r^3}+\frac{ (a_{,t}R^2+2a R R_{,t})}{r^3} Y_l^m & 0 \\[4pt]
\frac{(a_{,t}R^2+2a R R_{,t})}{2 r^3 \text{sin}\theta} \frac{\partial Y_l^m}{\partial \phi} & 0 & \frac{ R^2 R_{,t}}{r^3}+\frac{ (a_{,t}R^2+2a R R_{,t})}{r^3} Y_l^m
\end{bmatrix}.
\end{aligned}
\end{equation}
The referential description of the radial acceleration is
\begin{equation}
\begin{aligned}
A_r(r_0,\theta_0,\phi_0,t) = \frac{\partial v_r}{\partial t}\bigg\rvert_{r_0} &= \frac{2R R_{,t}^2 + R^2 R_{,tt}}{(r_0^3+R^3-R_0^3)^{2/3}} - \frac{2 R^4 R_{,t}^2 }{ (r_0^3+R^3-R_0^3)^{5/3}} \\
& \quad + \frac{(a_{,tt}R^2 + 4 a_{,t} R R_{,t} + 2 a R_{,t}^2 + 2 a R R_{,tt})}{(r_0^3+R^3-R_0^3)^{2/3}}Y_l^m\\
& \quad - \frac{2(2 a_{,t} R^4 R_{,t} + 6a R^3 R_{,t}^2 + aR^4 R_{,tt})}{(r_0^3+R^3-R_0^3)^{5/3}} Y_l^m + \frac{10 a R^6 R_{,t}^2 }{(r_0^3+R^3-R_0^3)^{8/3}}Y_l^m,
\end{aligned}
\end{equation}
and upon linearization, the spatial description of the radial acceleration is
\begin{equation}
\begin{aligned}
a_{r}(r,\theta,\phi,t) = \frac{2 R R_{,t}^2+ R^2 R_{,tt}}{r^2} - \frac{2 R^4 R_{,t}^2}{r^5} + \frac{(a_{,tt}R^2 + 4 a_{,t}R R_{,t} + 2 a R_{,t}^2 + 2 a R R_{,tt})}{r^2} Y_l^m - \frac{(4 a_{,t} R^4 R_{,t}+ 8 a R^3 R_{,t}^2)}{r^5} Y_l^m.
\end{aligned} \label{eq:a_spatial}
\end{equation}
\subsection{Momentum balance equation}
The momentum balance equation is
\begin{equation}
\nabla \cdot \boldsymbol{\sigma} = \rho \mathbf{a}, \label{eq:bubble_dyn_mom_eq}
\end{equation}
where $\nabla\cdot(\bullet)$ is the spatial divergence operator; $\rho$ is the spatial mass density; ${\bf a}$ is the acceleration vector; and $ \boldsymbol{\sigma}$ is the Cauchy stress tensor, which has the following explicit form for the quadratic law Kelvin-Voigt (qKV) material model:
\begin{equation}\label{eq:stress}
\boldsymbol{\sigma} = G [1+ \alpha (I_1-3)] \mathbf{B} + 2 \mu \mathbf{D} - p \mathbf{I},
\end{equation}
where $p$ is a pressure-like quantity arising due to the incompressibility constraint in the surrounding material, and $I_1$ is the first invariant of the left Cauchy-Green deformation tensor:
\begin{equation}
I_1 = \text{tr}(\mathbf{B}) = \frac{(r^3+R_0^3-R^3)^{4/3}}{r^4} + \frac{2 r^2}{(r^3+R_0^3-R^3)^{2/3}} - \frac{4 a R^2 (r^3+R_0^3-R^3)^{1/3}}{r^4} Y_l^m + \frac{4 a R^2 r^2}{(r^3+R_0^3-R^3)^{5/3}} Y_l^m.
\end{equation}
We note that the pressure-like quantity $p$ consists of the sum of the base-state field, which only depends on $r$, and a perturbation field, which depends on $r$, $\theta$, and $\phi$. In what follows, we focus on the radial component of the momentum balance equation \eqref{eq:bubble_dyn_mom_eq}:
\begin{equation}\label{eq:mombal_radial}
\frac{\partial \sigma_{rr}}{\partial r} + \dfrac{1}{r}\dfrac{\partial \sigma_{r\theta}}{\partial \theta} + \dfrac{1}{r\sin\theta}\dfrac{\partial\sigma_{r\phi}}{\partial\phi} + \dfrac{2}{r}(\sigma_{rr}-\sigma_{\theta\theta}) + \dfrac{\cot\theta}{r}\sigma_{r\theta} = \rho a_r,
\end{equation}
where we have recognized that $\sigma_{\theta\theta}=\sigma_{\phi\phi}$.
\subsection{Boundary conditions}
The traction boundary condition at the bubble wall $(r = R+aY_l^m)$ is
\begin{equation}
\boldsymbol{\sigma} \mathbf{n}|_{r=R+aY_l^m} = -p_{\rm b}\mathbf{n} - \gamma (\nabla_{\mathcal{S}} \cdot \mathbf{n})\mathbf{n} , \label{eq:bc_R}
\end{equation}
where $[\mathbf{n}] = [-1, a {Y_{l,\theta}^{m}}/R, a Y_{l,\phi}^{m}/(R \text{sin}\theta)]^{\top}$ is the linearized outward unit normal vector on the bubble wall, $p_{\rm b}$ is the pressure inside the bubble, and $\nabla_{\mathcal{S}}\cdot(\bullet)$ is the surface divergence operator in the deformed configuration. After linearization \cite{plesset1954jap}, the radial component of Eq.~\eqref{eq:bc_R} is
\begin{equation}\label{eq:wallBC_radial}
\sigma_{rr}|_{r=R+aY^m_l} = - p_b + \frac{2 \gamma}{R} + \frac{(l+2)(l-1)\gamma}{R^2} a Y_l^m.
\end{equation}
In the far-field, the perturbation decays to zero, and the stress approaches a state of hydrostatic pressure: $\boldsymbol{\sigma}|_{r\rightarrow\infty} = -p_\infty {\bf I}$, where $p_{\infty}$ is the far-field atmospheric pressure. Therefore, the far-field boundary condition for the radial stress component is
\begin{equation}
\sigma_{rr}|_{r \rightarrow \infty} = -p_{\infty}. \label{eq:bc_infty}
\end{equation}
\subsection{Results}
Using \eqref{eq:spatialB} and \eqref{eq:spatialD}, we insert \eqref{eq:stress} and \eqref{eq:a_spatial} into \eqref{eq:mombal_radial}, integrate over $r$ from the current, perturbed bubble wall $(R+a Y_l^m)$ to the far-field $(r\rightarrow\infty)$ and apply the radial boundary conditions \eqref{eq:wallBC_radial} and \eqref{eq:bc_infty}. After collecting the $O(1)$ terms, we recover the Rayleigh-Plesset governing equation for the dynamics of the unperturbed bubble radius in a nonlinear strain-stiffening viscoelastic material, which is consistent with our previous study as shown in Yang, et al.\,\cite{yang2020eml}:
\begin{equation}\label{eq:RP}
R R_{,tt} + \frac{3}{2} R_{,t}^2 = \frac{1}{\rho} \left( p_b - p_{\infty} + S - \frac{2 \gamma}{R} \right),
\end{equation}
where stress integral $S$ takes the form
\begin{equation}
S = \frac{(3 \alpha - 1)G}{2} \left[ 5 - \left( \frac{{R}_{0}}{R} \right)^4 - \frac{4 {R}_{0}}{R} \right] - \frac{4 \mu R_{,t}}{R} + 2 \alpha G \left[\frac{27}{40} + \frac{1}{8} \left( \frac{{R}_{0}}{R} \right)^8 + \frac{1}{5}\left( \frac{{R}_{0}}{R} \right)^5 + \left( \frac{{R}_{0}}{R} \right)^2 - \frac{2R}{{R}_{0}} \right] .
\end{equation}
Next, collecting the $O(a)$ terms, we obtain the governing, second-order differential equation for the bubble-wall perturbation magnitude, $a$:
\begin{equation}
a_{,tt} + \eta a_{,t} - \xi a = 0\,; \quad \eta = \frac{3 R_{,t}}{R} + \frac{4 \mu}{ \rho R^2} + \frac{ l(l+1) \mu}{ 3 \rho R^2}, \label{eq:a_sec_ode}
\end{equation}
where $\xi$ explicitly accounts for inertial effects during cavitation, nonlinear deformations of the viscoelastic solid, and surface tension effects:
\begin{equation}
\small
\begin{aligned}
\xi = & - \frac{R_{,tt}}{R} + \frac{4 \mu R_{,t}}{\rho R^3} - \frac{ 2 l(l+1) \mu R_{,t}}{ 3 \rho R^3} - \frac{(l+2)(l-1)\gamma}{\rho R^3}
- \frac{2 G R_{0} }{\rho R^3} \Big( 1 + \frac{R_{0}^3}{R^3} \Big) - \frac{G l (l+1)}{\rho (R^2 + R R_{0} + R_{0}^2)} \\
& - \frac{2 \alpha G}{\rho R^2} \frac{(R-R_{0})^2}{R R_{0}} \Big( 1+ \frac{R_{0}}{R} \Big)^3 \Big( 2- \frac{2R_{0}}{R} + \frac{3R_{0}^2}{R^2} - \frac{R_{0}^3}{R^3} + \frac{R_{0}^4}{R^4} \Big)
- \frac{\alpha G l(l+1) (R - R_{0})^2}{5 \rho R R_{0} (R^2 + R R_{0} + R_{0}^2)} \Big( 10 + \frac{6R_{0}}{R} + \frac{3R_{0}^2}{R^2} + \frac{R_{0}^3}{R^3} \Big) .
\end{aligned} \label{eq:a_sec_ode_xi}
\end{equation}
We note that the perturbation of the pressure-like quantity $p$ does not enter the derivation of Eqs.\,\eqref{eq:RP}-\eqref{eq:a_sec_ode_xi}.
\section{Eigenvalues of Eq (7) in the main text}
The second order ordinary differential equation for $a(t)$, Eq.\,(7) in the main text, can be rewritten as the following dynamical system:
\begin{equation}
\frac{d}{dt}
\begin{bmatrix}
a \\
a_{,t}
\end{bmatrix} = \begin{bmatrix}
0 & 1 \\
\xi & - \eta
\end{bmatrix}
\begin{bmatrix}
a \\
a_{,t}
\end{bmatrix}. \label{eq:a_dynsys}
\end{equation}
It is straightforward to see that $\mathbf{a}^{*}$\,$:=$\,$\lbrace a=0, a_{,t}=0 \rbrace$ is always an equilibrium point of \eqref{eq:a_dynsys}, and its stability depends on the signs of the real parts of the eigenvalues of the dynamical system \eqref{eq:a_dynsys}. The eigenvalues are of the form
\begin{equation}
\zeta_{1,2}= -\frac{\eta}{2} \pm \frac{\sqrt{\Delta}}{2}\quad\text{with}\quad\Delta = \eta^2 + 4 \xi, \label{eq:case_i}
\end{equation}
which may be classified in the following cases:
\begin{itemize}
\item \textbf{Case (i)}: If $\xi$\,$>$\,$0$, the eigenvalues are real and of opposite sign, and therefore, $\mathbf{a}^{*}$ is a \emph{saddle} point (Fig.\,\ref{fig:phase_diag} (a-i));
\item \textbf{Case (ii)}: If $\xi$\,$<$\,$0$, $\eta<0$, and $\Delta > 0$, the eigenvalues are real and positive, and $\mathbf{a}^{*}$ is a \textit{source}-type instability point (Fig.\,\ref{fig:phase_diag} (a-ii));
\item \textbf{Case (iii)}: If $\xi$\,$<$\,$0$, $\eta<0$, and $\Delta < 0$, the eigenvalues are complex with positive real part, and $\mathbf{a}^{*}$ is an \textit{unstable spiral}-type instability (Fig.\,\ref{fig:phase_diag} (a-iii));
\item \textbf{Case (iv)}: If $\xi$\,$<$\,$0$, $\eta>0$, and $\Delta < 0$, the eigenvalues are complex with negative real part, and $\mathbf{a}^{*}$ is a \textit{stable spiral} (Fig.\,\ref{fig:phase_diag} (a-iv));
\item \textbf{Case (v)}: If $\xi$\,$<$\,$0$, $\eta>0$, and $\Delta > 0$, the eigenvalues are real and negative, and $\mathbf{a}^{*}$ is a \textit{sink} (Fig.\,\ref{fig:phase_diag} (a-v)).
\end{itemize}
A phase diagram summarizing the stability of the solution $\mathbf{a}^{*}$\,$=$\,$\lbrace a=0, a_{,t}=0 \rbrace$ for an inertial cavitation bubble in a nonlinear viscoelastic medium (Eq\,(7) in the main text) is shown in Fig.\,\ref{fig:phase_diag}(a).
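For completeness, a minimal numerical sketch of this classification is reported below (in Python; the numerical values of $\eta$ and $\xi$ are purely illustrative and are not taken from the simulations discussed in the main text). It evaluates the eigenvalues of \eqref{eq:a_dynsys} as the roots of $\zeta^2+\eta\,\zeta-\xi=0$ and returns the corresponding case (i)-(v):
\begin{verbatim}
# Sketch: classify the equilibrium a* = {a=0, a_t=0} from eta and xi.
import numpy as np

def classify_equilibrium(eta, xi):
    """Stability type of a*, following Cases (i)-(v) above."""
    delta = eta**2 + 4.0 * xi          # discriminant of zeta^2 + eta*zeta - xi = 0
    if xi > 0.0:
        return "saddle (case i)"       # real eigenvalues of opposite sign
    if eta < 0.0:
        return "source (case ii)" if delta > 0.0 else "unstable spiral (case iii)"
    return "sink (case v)" if delta > 0.0 else "stable spiral (case iv)"

for eta, xi in [(1.0, 2.0), (-1.0, -0.1), (-0.1, -1.0), (0.1, -1.0), (1.0, -0.1)]:
    zeta = np.roots([1.0, eta, -xi])   # eigenvalues of the linear system above
    print(eta, xi, classify_equilibrium(eta, xi), zeta)
\end{verbatim}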
\section{Additional results of the numerical simulation in Fig.\,3}
Here we provide additional details of the numerical simulations in Fig.\,3 of the main text. We summarize the numerically simulated hoop stretch ratio at the bubble wall $\lambda_w$ and the variables $\Delta$, $\xi^{i}$, $\xi^{e}$, and $\xi^{st}$ in Fig.\,\ref{fig:phase_diag}(b-f), where the variables $\xi^{i}$, $\xi^{e}$, $\xi^{st}$ account for inertial and viscous effects during cavitation, nonlinear hyperelastic deformations of the surrounding solid, and surface tension effects, respectively:
\begin{align}
& \xi^{i} = - \frac{R_{,tt}}{R} + \frac{4 \mu R_{,t}}{\rho R^3} + \frac{ 2 l(l+1) \mu R_{,t}}{ 3 \rho R^3}, \\
& \begin{aligned}
\xi^{e} = & - \frac{2 G R_{0} }{\rho R^3} \Big( 1 + \frac{R_{0}^3}{R^3} \Big) - \frac{G l (l+1)}{\rho (R^2 + R R_{0} + R_{0}^2)} \\
& - \frac{2 \alpha G}{\rho R^2} \frac{(R-R_{0})^2}{R R_{0}} \Big( 1+ \frac{R_{0}}{R} \Big)^3 \Big( 2- \frac{2R_{0}}{R} + \frac{3R_{0}^2}{R^2} - \frac{R_{0}^3}{R^3} + \frac{R_{0}^4}{R^4} \Big) \\
& - \frac{\alpha G l(l+1) (R - R_{0})^2}{5 \rho R R_{0} (R^2 + R R_{0} + R_{0}^2)} \Big( 10 + \frac{6R_{0}}{R} + \frac{3R_{0}^2}{R^2} + \frac{R_{0}^3}{R^3} \Big) ,
\end{aligned} \\
& \xi^{st} = - \frac{(l+2)(l-1)\gamma}{\rho R^3}.
\end{align}
Conditions in which non-spherical instabilities corresponding to a spherical harmonic function of mode shape 8 are predicted (i.e., $\eta<0$ or $\xi>0$) are marked as red shaded regions.
\begin{figure}[h]
\centering
\includegraphics[width = 1 \textwidth]{figures/figS3_update.pdf}
\caption{(a)\,Phase diagram summarizing the stability of the solution $\mathbf{a}^{*}$\,$=$\,$\lbrace a=0, a_{,t}=0 \rbrace$ for an inertial cavitation bubble in a nonlinear viscoelastic medium. Calculated (b)\,radial stretch ratio at the bubble wall $\lambda_w$, (c)\,$\Delta$, (d)\,$\xi^{i}$, (e)\,$\xi^{e}$, (f)\,$\xi^{st}$ in the numerical simulation in Fig.\,3 of the main text. Red shaded regions denote situations in which non-spherical instabilities corresponding to a spherical harmonic function of mode shape 8 are predicted.} \label{fig:phase_diag}
\end{figure}
\section{Details of theoretical predictions in Fig.\,4}\label{sec:eee}
We theoretically predict the critical value of the radial stretch ratio at the bubble wall, $\lambda_w$, at the onset of bubble surface instability during the first three collapse cycles as a function of the maximum bubble radius $R_{\text{max}}$ and the non-spherical mode shape number $l$. The initial $\lambda_{\text{max}}$ is fixed at 7.1, which is the same value as in all experimental tests shown in Figs.\,1-2 in the main text.
For each $R_{\text{max}}$ value, we numerically solve the Rayleigh-Plesset equation, Eqs.\,(1) and (3) in the main text, and compute the variables $\eta$ and $\xi$ using Eqs.\,(7)-(8) in the main text. Then, we find the critical $\lambda_{w}$ at the onset of each instability mode occurring in the first three collapse cycles, where $\eta <0$ or $\xi>0$, as shown in Figs.\,\ref{fig:R100}-\ref{fig:R500}, in which the theoretically predicted non-spherical mode shape 8 instabilities are marked as red shaded regions. In Figs.\,\ref{fig:R100}-\ref{fig:R500}, $R_{\text{max}}$ takes values of 100\,$\mu$m, 200\,$\mu$m, 300\,$\mu$m, 400\,$\mu$m, and 500\,$\mu$m, respectively.
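A minimal sketch of this procedure is given below (in Python). It integrates the Rayleigh-Plesset equation \eqref{eq:RP} with the qKV stress integral, evaluates $\eta$ and $\xi$ from \eqref{eq:a_sec_ode}-\eqref{eq:a_sec_ode_xi} along the trajectory, and flags the instability criterion $\eta<0$ or $\xi>0$. The constants $G$, $\alpha$, $\mu$ and $l$ are those quoted in section~\ref{sec:fem}, and $\lambda_{\text{max}}=7.1$ is the value quoted above; the density, surface tension, far-field pressure, polytropic bubble-pressure law and the particular $R_{\text{max}}$ are illustrative assumptions rather than values taken from the main text:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

rho, gam, mu = 1060.0, 0.056, 0.186        # density, surface tension, viscosity (assumed)
G, alpha, l = 2.77e3, 0.48, 8              # qKV parameters and mode number (Sec. FEM)
p_inf, p_g0, kappa = 101.3e3, 1.0e3, 1.4   # far-field/gas pressure, polytropic index (assumed)
R_max, lam_max = 300e-6, 7.1               # illustrative R_max; lambda_max as quoted above
R0 = R_max / lam_max                       # undeformed radius consistent with lambda_max

def stress_integral(R, Rt):                # qKV stress integral S
    x = R0 / R
    return ((3*alpha - 1)*G/2*(5 - x**4 - 4*x) - 4*mu*Rt/R
            + 2*alpha*G*(27/40 + x**8/8 + x**5/5 + x**2 - 2/x))

def rp_rhs(t, y):                          # Rayleigh-Plesset equation
    R, Rt = y
    p_b = p_g0*(R0/R)**(3*kappa)           # assumed polytropic gas law
    Rtt = ((p_b - p_inf + stress_integral(R, Rt) - 2*gam/R)/rho - 1.5*Rt**2)/R
    return [Rt, Rtt]

def eta_xi(R, Rt, Rtt):                    # Eqs. (7)-(8) of the main text
    L, x = l*(l + 1), R0/R
    eta = 3*Rt/R + 4*mu/(rho*R**2) + L*mu/(3*rho*R**2)
    xi = (-Rtt/R + 4*mu*Rt/(rho*R**3) - 2*L*mu*Rt/(3*rho*R**3)
          - (l + 2)*(l - 1)*gam/(rho*R**3)
          - 2*G*R0/(rho*R**3)*(1 + x**3) - G*L/(rho*(R**2 + R*R0 + R0**2))
          - 2*alpha*G/(rho*R**2)*(R - R0)**2/(R*R0)*(1 + x)**3
            *(2 - 2*x + 3*x**2 - x**3 + x**4)
          - alpha*G*L*(R - R0)**2/(5*rho*R*R0*(R**2 + R*R0 + R0**2))
            *(10 + 6*x + 3*x**2 + x**3))
    return eta, xi

sol = solve_ivp(rp_rhs, (0.0, 1.5e-4), [R_max, 0.0], max_step=1e-8)  # collapse from R_max
for t, (R, Rt) in zip(sol.t[::2000], sol.y.T[::2000]):
    eta, xi = eta_xi(R, Rt, rp_rhs(t, [R, Rt])[1])
    print(t, R, (eta < 0) or (xi > 0))     # time, radius, instability flag
\end{verbatim}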
\begin{figure} [h!]
\begin{center}
\includegraphics[width= 1 \textwidth]{figures/fig_R100.pdf}
\end{center}
\caption{(a,c,e)\,Numerically simulated bubble dynamics when $R_{\text{max}}=100\,\mu$m. (b,d,f)\,Bubble wall stretch ratio $\lambda_w$ and the variables $\eta$ and $\xi$ calculated using Eqs.\,(7)-(8) in the main text. Red shaded regions denote theoretically predicted non-spherical mode shape 8 instabilities.} \label{fig:R100}
\end{figure}
\begin{figure} [h!]
\begin{center}
\includegraphics[width= 1 \textwidth]{figures/fig_R200.pdf}
\end{center}
\caption{(a,c,e)\,Numerically simulated bubble dynamics when $R_{\text{max}}=200\,\mu$m. (b,d,f)\,Bubble wall stretch ratio $\lambda_w$ and the variables $\eta$ and $\xi$ calculated using Eqs.\,(7)-(8) in the main text. Red shaded regions denote theoretically predicted non-spherical mode shape 8 instabilities.} \label{fig:R200}
\end{figure}
\begin{figure} [h!]
\begin{center}
\includegraphics[width= 1 \textwidth]{figures/fig_R300.pdf}
\end{center}
\caption{(a,c,e)\,Numerically simulated bubble dynamics when $R_{\text{max}}=300\,\mu$m. (b,d,f)\,Bubble wall stretch ratio $\lambda_w$ and the variables $\eta$ and $\xi$ calculated using Eqs.\,(7)-(8) in the main text. Red shaded regions denote theoretically predicted non-spherical mode shape 8 instabilities.} \label{fig:R300}
\end{figure}
\begin{figure} [h!]
\begin{center}
\includegraphics[width= 1 \textwidth]{figures/fig_R400.pdf}
\end{center}
\caption{(a,c,e)\,Numerically simulated bubble dynamics when $R_{\text{max}}=400\,\mu$m. (b,d,f)\,Bubble wall stretch ratio $\lambda_w$ and the variables $\eta$ and $\xi$ calculated using Eqs.\,(7)-(8) in the main text. Red shaded regions denote theoretically predicted non-spherical mode shape 8 instabilities.} \label{fig:R400}
\end{figure}
\begin{figure} [h!]
\begin{center}
\includegraphics[width= 1 \textwidth]{figures/fig_R500.pdf}
\end{center}
\caption{(a,c,e)\,Numerically simulated bubble dynamics when $R_{\text{max}}=500\,\mu$m. (b,d,f)\,Bubble wall stretch ratio $\lambda_w$ and the variables $\eta$ and $\xi$ calculated using Eqs.\,(7)-(8) in the main text. Red shaded regions denote theoretically predicted non-spherical mode shape 8 instabilities.} \label{fig:R500}
\end{figure}
\section{Finite element numerical simulations}\label{sec:fem}
To examine the strain and stress fields induced by non-spherical deformation, we construct a finite-element model of an isolated 3D spherical cavity within an infinite material using the commercial software package Abaqus \cite{abaqus}. Using a user-material (UMAT) subroutine, we model the surrounding material as a quadratic-law strain-stiffening Kelvin-Voigt (qKV) viscoelastic material (Eq.\,(2) in the main text) with shear modulus $G$\,$=$\,$2.77$\,kPa, strain stiffening parameter $\alpha$\,$=$\,$0.48$, and viscosity $\mu$\,$=$\,$0.186$\,Pa$\cdot$s.
Due to the symmetry of the problem, we simulate one eighth of the sphere. The undeformed bubble radius is taken to be 45\,$\mu$m, and the outer radius of the simulation domain is taken to be large enough that an effectively infinite surrounding medium is approximated. The mesh consists of 7275 3D, 8-node, fully integrated hybrid elements. Next, to account for the non-spherical deformation, we impose a displacement boundary condition on the bubble wall surface consisting of the sum of a spherically symmetric deformation and a spherical harmonic perturbation (Eq.\,(4) in the main text), using a user-displacement (UDISP) subroutine. We perform a static simulation considering a time point between (iv) and (v) in Fig.\,1(a) of the main text, corresponding to a deformed bubble radius of 72\,$\mu$m, a perturbation amplitude of 1.6386\,$\mu$m, and mode shape $l=8$.
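As an illustration of this boundary condition (not the UDISP subroutine itself), the short Python sketch below evaluates the perturbed wall position $r_{\text{wall}}(\theta,\varphi)=R+a\,\mathrm{Re}\!\left[Y_l^m(\theta,\varphi)\right]$ with the values quoted above; the choice $m=0$ and the use of the real part are assumptions made purely for illustration:
\begin{verbatim}
import numpy as np
from scipy.special import sph_harm

R, a, l, m = 72.0e-6, 1.6386e-6, 8, 0   # deformed radius, perturbation amplitude, mode
theta = np.linspace(0.0, np.pi, 181)    # polar angle
phi = np.linspace(0.0, 2*np.pi, 361)    # azimuthal angle
PH, TH = np.meshgrid(phi, theta)

Y = sph_harm(m, l, PH, TH).real         # scipy convention: sph_harm(m, l, azimuth, polar)
r_wall = R + a * Y                      # perturbed bubble-wall radius
u_r = r_wall - R                        # radial displacement imposed on the cavity surface

print("max |u_r| (um):", np.abs(u_r).max()*1e6)
print("r_wall range (um):", r_wall.min()*1e6, r_wall.max()*1e6)
\end{verbatim}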
In Fig.\,\ref{fig:strain contour plots}(a) and (b), we plot contours of the radial component of the logarithmic strain, $E_{rr}$, while in Fig.\,\ref{fig:strain contour plots}(c) and (d) we plot contours of the circumferential component of the logarithmic strain, $E_{\theta \theta}$.
In Fig.\,\ref{fig:stress contour plots}(a) and (b) we plot contours of the radial component of the Cauchy stress, $\sigma_{rr}$, while in Fig.\,\ref{fig:stress contour plots}(c) and (d) we plot contours of the circumferential component of the Cauchy stress, $\sigma_{\theta \theta}$. We include both the spherically symmetric case and the perturbed case, illustrating the amplification of stress and strain due to non-spherical deformation, notably in the circumferential components.
\begin{figure} [h!]
\begin{center}
\includegraphics[width= 0.6 \textwidth]{figures/fig_FEA_strain.pdf}
\end{center}
\caption{Contours of the radial component of the logarithmic strain, $E_{rr}$, for (a) the spherical bubble and (b) the perturbed bubble. Contours of the circumferential component of the logarithmic strain, $E_{\theta \theta}$, for (c) the spherical bubble and (d) the perturbed bubble. The deformed radius of the spherical bubble is 72\,$\mu$m, and the perturbation amplitude in (b) and (d) is 1.6386\,$\mu$m.} \label{fig:strain contour plots}
\end{figure}
\begin{figure} [h!]
\begin{center}
\includegraphics[width= 0.6 \textwidth]{figures/fig_FEA_stress.pdf}
\end{center}
\caption{Contours of the radial component of the Cauchy stress, $\sigma_{rr}$, for (a) the spherical bubble and (b) the perturbed bubble. Contours of the circumferential component of the Cauchy stress, $\sigma_{\theta \theta}$, for (c) the spherical bubble and (d) the perturbed bubble. The deformed radius of the spherical bubble is 72\,$\mu$m, and the perturbation amplitude in (b) and (d) is 1.6386\,$\mu$m.} \label{fig:stress contour plots}
\end{figure}
\section{Introduction}
The most successful approach to the problem of quantizing gravity is, up to now, the so-called Loop Quantum Gravity (LQG) theory \cite{Cianfrani2014,Rovelli2004,Thiemann2007}. This formulation, of course, still contains a number of unsolved issues, such as the implementation of the quantum dynamics via the scalar constraint, the construction of a classical limit and the ambiguity in the meaning and value of the so-called Immirzi parameter \cite{Rovelli1998,Gambini1999,
Chou2005,Perez2006,Mercuri2006,Date2009,
Mercuri2009,Nicolai2005}. Nonetheless, the great interest in LQG is due to the possibility of constructing a kinematic Hilbert space for the quantum theory, resulting in geometrical operators like areas and volumes, endowed with discrete spectra \cite{Rovelli1995,Ashtekar1997,Lewandowski1997}. In other words, LQG is able to introduce space discretization starting from a classical Lagrangian for the gravitational field \cite{Cianfrani2014}, with the quantum theory relying only on the pre-metric concept of a graph. That is achieved via Ashtekar-Barbero-Immirzi variables \cite{Ashtekar1986,Ashtekar1987, BarberoG.1995,Immirzi1997}, which allow a Hamiltonian formulation of gravity in close analogy with non-Abelian gauge theories: the constraint associated with local spatial rotations can be put in the form of a standard Gauss constraint for the $SU(2)$ group. When tetrads and spin connections are considered independent fields, however, it is necessary to add to the Palatini action new terms, the Holst or Nieh-Yan contributions, which do not affect the classical dynamics (the former vanishes on half-shell, where the equation of motion for the connection holds, and the latter is a pure topological term \cite{Nieh1982,Holst1996,Soo1999,Nieh2007,
Mercuri2008,Banerjee2010}). In both cases, then, we deal with a restatement of General Relativity, suitable for loop quantization, which admits the Einstein equations as its classical limit.
\\ \indent In this respect, the recent interest in $f(\mathcal{R})$ modifications of General Relativity \cite{Sotiriou2010, Nojiri2017} makes it very timely to investigate possible LQG extensions of $f(\mathcal{R})$ models, especially via their scalar-tensor reformulation \cite{Bergmann1968,Capone2010,Ruf2018}. A first attempt in this direction was performed in \cite{Zhang2011a,Zhang2011,Han2014} (see also \cite{Cianfrani2009b,Bombacigno2018,Bombacigno2019}), where the problem was addressed by considering the metric as the only independent field, and the authors followed an extended phase-space method \cite{Thiemann2007} in defining Ashtekar-like variables. The conclusions of this study suggest that a suitable set of variables can be determined, and Gauss and vector constraints properly obtained, with the non-minimally coupled scalar field affecting the scalar constraint.
\\ \indent Here, we address the same problem in a more general framework, adopting the most natural first order formalism, i.e. we deal with Palatini $f(\mathcal{R})$ models \cite{Olmo2011} equipped with Holst and Nieh-Yan terms. In particular, we characterize the resulting classical theory and we discuss the reformulation in terms of $SU(2)$ variables.
\\ \indent In including the Holst and Nieh-Yan terms, we have two different choices, namely inserting these terms either inside or outside the argument of the function $f$. We first analyze the classical dynamics of these models, demonstrating that they correspond to two physically distinct scenarios. In fact, both when the Holst term is included in the argument of $f$ and when the Nieh-Yan term is added to the $f(\mathcal{R})$ Lagrangian, we recover the dynamics of Palatini $f(\mathcal{R})$ models, characterized by a non-dynamical scalar field. Conversely, when the Holst term is added to $f(\mathcal{R})$ or the Nieh-Yan term is inserted in the argument of the function, we deal with a scalar-tensor theory, where the kinetic term for the scalar field is modulated by the Immirzi parameter. We then specialize to these cases, showing how a non-vanishing value of the Immirzi parameter causes the scalar field to acquire an independent dynamical character. We perform the Hamiltonian formulation and we discuss the resulting morphology in terms of the constraints emerging after the Legendre transformation.\\
The main merit of this study consists in the determination of suitable generalized Ashtekar-Barbero-Immirzi variables starting from a genuine first order action, and we determine Gauss and vector constraints with the same form as in LQG, i.e. we are able to construct a kinematic Hilbert space suitable for LQG canonical quantization. We study the spectrum of the area operator, and, differently from what is assumed in \cite{Fatibene2010}, we clarify how the area operator has an unambiguous geometrical nature, being constructed with the real triad of the space. We stress, therefore, how the different link between the real triads and the particular $SU(2)$ variables considered affects the morphology of the area operator. In the case of a standard Palatini $f(\mathcal{R})$ model, the field is non-dynamical and, when it is expressed via the trace of the stress energy tensor, a coupling takes place between the size of the area associated with a graph and the nature of the matter filling the space. More interesting is the case in which the scalar field is truly dynamical and must be quantized as a proper scalar degree of freedom. In this case, following \cite{Lewandowski2016}, the discrete character of the area is spoiled by the continuous spectrum of the scalar field.\\
In this regard, we can infer that in the extended theories we considered the ambiguity of the Immirzi parameter is to some extent weakened. Our study, indeed, suggests that such a parameter could be reabsorbed in the definition of the scalar field, hinting at a more general formulation of the gravitational sector as an $SU(2)$ gauge theory. We emphasize that the quantization procedure for the scalar field requires particular attention in both cases. Indeed, when the field is non-dynamical, its quantization still relies on the quantization of the truly gravitational degrees of freedom, which are of course involved in the very definition of the trace of the stress energy tensor. On the other hand, the dynamical case is sensitive to the non-minimal coupling of the scalar field, and the $\phi$-factors appearing in the Hamiltonian constraint must be treated carefully.\\
Furthermore, when comparing the second order metric analysis of \cite{Zhang2011} to our Palatini formulation, we observe the emergence of a discrepancy in the scalar constraints. In other words, starting directly from a metric formulation of $f(R)$ gravity with $SU(2)$ variables provides dynamical constraints different from those of a first order formulation. In this respect, we outline the possibility of restoring a complete equivalence between these two approaches by restating our models in the Einstein frame and performing a canonical transformation. We note that similar issues hold also for \cite{Zhou2013}, in which the analysis is actually pursued in a first order formalism, starting however from an action which differs from the one considered here by the inclusion of additional contributions which eliminate torsion from the theory.
\\ \indent The paper is structured as follows. In section~\ref{sec2} the models are presented and their effective theories are derived, highlighting equivalences and differences with Palatini $f(\mathcal{R})$ theory in its scalar-tensor formulation. In section~\ref{Appendix} the spacetime splitting and Hamiltonian analysis of the constrained system are performed. In section~\ref{Sec mod asht var} the new modified variables and the corresponding set of constraints are derived. In particular, we depict some possible effects on the spectrum of the area operator in the presence of a scalar field, and we also briefly discuss the case when it is devoid of a proper dynamics. The analysis performed in the conformal Einstein frame and the comparison with earlier studies in the literature are contained in section~\ref{Sec Einstein frame}. In section~\ref{sec concl} conclusions are drawn, while some details regarding the results presented in section~\ref{Appendix} can be found in~\ref{new Appendix}.
\\ \indent Finally, the notation is established as follows. Spacetime indices are denoted by Greek letters from the middle of the alphabet $\mu,\nu,\rho$, spatial ones by letters from the beginning of the Latin alphabet $a,b,c$. Four dimensional internal indices are displayed by capital letters from the middle of the Latin alphabet $I,J,K$, while $i,j,k$ indicate three dimensional internal indices. The spacetime signature is chosen mostly plus, i.e. $\eta_{\mu\nu}= \text{diag}(-1,1,1,1)$.
\section{Extended $f(\mathcal{R})$ actions}\label{sec2}
\noindent We consider the following models
\begin{align}
S &= \frac{1}{2\chi} \int d^4x \sqrt{-g} f(\mathcal{R}+L), \label{fRH} \\
S&= \frac{1}{2\chi} \int d^4x \sqrt{-g} \left[ f(\mathcal{R})+L \right],\label{fR H}
\end{align}
where $\chi=8\pi G$. The Ricci scalar $\mathcal{R}=g^{\mu\nu}\mathcal{R}_{\mu\nu}$ is obtained by the contraction of the Ricci tensor $\mathcal{R}_{\mu\nu}$, here considered as a function of the independent connection $\Gamma\indices{^{\mu}_{\nu\rho}}$ and related to the Riemann tensor by $\mathcal{R}_{\mu\nu} = \mathcal{R}\indices{^{\rho}_{\mu\rho\nu}}$, with $\mathcal{R}\indices{^{\mu}_{\nu\rho\sigma}} = \partial_{\rho}\tensor{\Gamma}{^{\mu}_{\nu\sigma}} - \partial_{\sigma}\tensor{\Gamma}{^{\mu}_{\nu\rho}} + \tensor{\Gamma}{^{\mu}_{\lambda\rho}}\tensor{\Gamma}{^{\lambda}_{\nu\sigma}} - \tensor{\Gamma}{^{\mu}_{\lambda\sigma}}\tensor{\Gamma}{^{\lambda}_{\nu\rho}}$. The term $L$ either coincides with the Holst term or with the Nieh-Yan invariant, given by, respectively
\begin{align}
H & = -\frac{\beta}{2}\varepsilon\indices{^{\mu\nu\rho\sigma}}\mathcal{R}_{\mu\nu\rho\sigma},
\label{holst def}\\
NY & = \frac{\beta}{2}\varepsilon\indices{^{\mu\nu\rho\sigma}}\left( \frac{1}{2} T\indices{^{\lambda}_{\mu\nu}}T\indices{_{\lambda\rho\sigma}} - \mathcal{R}_{\mu\nu\rho\sigma} \right),
\label{ny def}
\end{align}
with $\beta$ the reciprocal of the Immirzi parameter. The torsion tensor is denoted by $T\indices{^{\mu}_{\nu\rho}}=\Gamma\indices{^{\mu}_{\nu\rho}} - \Gamma\indices{^{\mu}_{\rho\nu}}$ and, as outlined in \cite{Bombacigno2018,Bombacigno2019,Calcagni2009}, it cannot be neglected a priori in Palatini $f(\mathcal{R})$ generalizations of Holst and Nieh-Yan actions, since its form is to be determined dynamically.\\
It is worth noting that, when dealing with a proper metric-affine formalism, the affine connection $\Gamma\indices{^\mu_{\nu\rho}}$ can a priori be characterized by non-metricity as well, which measures the departure from metric compatibility, i.e. $Q\indices{_{\rho\mu\nu}}=-\nabla_\rho g_{\mu\nu}\neq 0$. However, it can be demonstrated (see \cite{Iosifidis:2019fsh,Iosifidis:2020dck,Jimenez:2020dpn} for details) that, by means of the projective transformation
\begin{equation}
\Gamma\indices{^\rho_{\mu\nu}}\to \Gamma\indices{^\rho_{\mu\nu}}+\delta\indices{^\rho_\mu}\xi_\nu,
\end{equation}
where $\xi_\nu$ is a vector degree of freedom, non-metricity contributions can always be neglected, so that we can safely assume the connection to be metric compatible. Therefore, without loss of generality, we can rewrite the connection as $\tensor{\Gamma}{^{\rho}_{\mu\nu}} = \tensor{C}{^{\rho}_{\mu\nu}} + \tensor{K}{^{\rho}_{\mu\nu}}$,
where $\tensor{C}{^{\rho}_{\mu\nu}}$ denotes the Christoffel symbol of $g_{\mu\nu}$ and the independent character of the connection is now encoded in the contortion tensor, given by $\tensor{K}{^{\rho}_{\mu\nu}}=\frac{1}{2}(\tensor{T}{^{\rho}_{\mu\nu}} - \tensor{T}{_{\mu}^{\rho}_{\nu}} - \tensor{T}{_{\nu}^{\rho}_{\mu}})$. Now, in complete analogy with $f(R)$ theories, the actions \eqref{fRH} and \eqref{fR H} can be expressed in the Jordan frame as
\begin{align}\label{Jordan frame models}
S &= \frac{1}{2\chi} \int d^4x \sqrt{-g} \left[ \phi \left(\mathcal{R}+L\right) - V(\phi) \right],\\
S &= \frac{1}{2\chi} \int d^4x \sqrt{-g} \left[ \phi \mathcal{R} +L - V(\phi) \right],\label{Jordan frame models 2}
\end{align}
with $\phi$ defined as the derivative of the function $f(\cdot)$ with respect to its generic argument, while the potential takes the same expression as in standard Palatini $f(\mathcal{R})$ theory.\\
Next, the effective theories dynamically equivalent on half-shell to models \eqref{Jordan frame models} and \eqref{Jordan frame models 2} can be computed by inserting into the actions the solution of the equations of motion for the independent connection. This can be easily achieved by decomposing the torsion tensor into its irreducible components under the Lorentz group. These are the trace vector $T_{\mu} \equiv T \indices{^{\nu}_{\mu\nu}}$, the pseudotrace axial vector
$S_{\mu} \equiv \varepsilon_{\mu\nu\rho\sigma}T^{\nu\rho\sigma}$
and the antisymmetric tensor $q_{\mu\nu\rho}$, satisfying $
\varepsilon^{\mu\nu\rho\sigma} q_{\nu\rho\sigma} = 0$ and $ q\indices{^{\mu}_{\nu\mu}} = 0$. Their equations of motion can be solved yielding $q_{\mu\nu\rho}\equiv 0$, whereas vectors $S_{\mu}$ and $T_{\mu}$ can be expressed in terms of $\partial_{\mu}\phi$ as
\begin{align}\label{tor sol t}
&T_\mu = \frac{3}{2\phi}\left[ \dfrac{1 + b_1b_2\Phi\beta^2/\phi}{1 + b_2\Phi^2\beta^2 /\phi^2} \right]\partial_{\mu}\phi,\\
&S_\mu=\frac{6\beta}{\phi} \left[ \dfrac{b_1-b_2\Phi/\phi}{1 + b_2\Phi^2\beta^2/\phi^2} \right]\partial_{\mu}\phi,\label{tor sol s}
\end{align}
where we introduced two parameters $b_1$ and $b_2$ which can take the values $0$ or $1$ according to the specific model considered. If the Holst term is taken into account, then $b_2=1$, while $b_1=0$ and $b_1=1$ for $f(\mathcal{R})+H$ and $f(\mathcal{R}+H)$, respectively. When the action features the Nieh-Yan contribution, $b_2=0$, while $b_1=0$ and $b_1=1$ for $f(\mathcal{R})+NY$ and $f(\mathcal{R}+NY)$. Finally, $\Phi$ coincides with $\phi$ in the $f(\mathcal{R}+H)$ case and is identically equal to $1$ in the $f(\mathcal{R})+H$ one.\\
Substituting these results back into the actions yields the effective theory
\begin{equation}
S=\frac{1}{2\chi} \int d^4x \sqrt{-g} \left[ \phi R - \frac{\Omega(\phi)}{\phi}\partial^{\mu}\phi\partial_{\mu}\phi - V(\phi) \right],
\label{effective brans dicke}
\end{equation}
where $\Omega(\phi)$ depends on the particular model addressed. Both for $f(\mathcal{R}+H)$ and $f(\mathcal{R})+NY$ it assumes the constant value $\Omega=-3/2$, corresponding to the effective description of standard Palatini $f(\mathcal{R})$ gravity. This implies that the field $\phi$ is not an actual degree of freedom, but it is simply determined by the structural equation as in the standard case \cite{Sotiriou2010,Olmo2011}, i.e.
\begin{equation}
2V(\phi)-\phi V'(\phi)=\chi T,
\label{structural equation}
\end{equation}
where a prime denotes differentiation with respect to the argument and $T$ is the trace of the stress energy tensor of the matter contributions, which are assumed not to couple to the connection. Thus, as one might expect, the topological character of the Nieh-Yan term is preserved if it is added directly to the Lagrangian. Less trivial, instead, is of course the outcome for the $f(\mathcal{R}+H)$ model. In this case, indeed, the vanishing of the Holst term on half-shell is to some extent recovered when it is included in the argument of the function $f(\cdot)$.\\
On the other hand, inserting the Nieh-Yan term in the argument of the function $f(\cdot)$, or supplementing the Palatini $f(\mathcal{R})$ theory with an additional Holst contribution, leads to the following expressions, respectively:
\begin{equation}
\Omega=\frac{3(\beta^2-1)}{2},\quad
\Omega=-\frac{3}{2}\frac{\phi^2}{\phi^2 + \beta^2},
\end{equation}
ensuring the classical equivalence to Palatini $f(\mathcal{R})$ gravity for $\beta=0$. In these cases, therefore, the scalar field acquires in general a dynamical character due to a non-vanishing value of the Immirzi parameter, and \eqref{structural equation} is replaced by
\begin{equation}
(3+2\Omega)\Box\phi+\Omega'(\partial\phi)^2+2V(\phi)-\phi V'(\phi)=\chi T.
\label{scalar equation phi}
\end{equation}
Now, by virtue of the dependence of $T_\mu,\,S_\mu$ on the derivatives of the scalar field $\phi$, we actually deal with a theory equipped with propagating torsion degrees of freedom, in close analogy with \cite{Bombacigno2018,Bombacigno2019}, where $f(\mathcal{R})+H$ and $f(\mathcal{R})+NY$ models were considered in the presence of a dynamical Immirzi field. As a result, even though the $f(\mathcal{R}+NY)$ model is formally identical to \textit{metric} $f(R)$ gravity for $\beta=\pm 1$ ($\Omega = 0$), the two theories are actually endowed with distinct phenomenologies.
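As a quick consistency check of the limits quoted above (a sketch, not part of the original derivation), the following lines of Python/sympy verify that both expressions for $\Omega$ reduce to the Palatini value $-3/2$ for $\beta=0$, and that the $f(\mathcal{R}+NY)$ case gives $\Omega=0$ for $\beta=\pm 1$:
\begin{verbatim}
import sympy as sp

phi, beta = sp.symbols('phi beta', positive=True)
Omega_fRNY = 3*(beta**2 - 1)/2                              # f(R + NY)
Omega_fRH = -sp.Rational(3, 2)*phi**2/(phi**2 + beta**2)    # f(R) + H

print(Omega_fRNY.subs(beta, 0))                             # -3/2
print(sp.simplify(Omega_fRH.subs(beta, 0)))                 # -3/2
print(Omega_fRNY.subs(beta, 1), Omega_fRNY.subs(beta, -1))  # 0 0
\end{verbatim}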
\section{Analysis of the constrained Hamiltonian system}\label{Appendix}
In this section we perform the spacetime splitting and Hamiltonian analysis of the models presented above, with the aim of characterizing the phase space structure of the theory.\\
Since we are eventually interested in the implementation of Ashtekar-like variables, let us first recall the two possible procedures available to achieve this result, namely the so-called extended phase space approach, which was followed in \cite{Zhang2011a,Zhang2011,Han2014}, and the first order approach based on the addition of Holst or Nieh-Yan contributions, which is the one we adopted in this paper, consistently with the Palatini formulation of $f(R)$ gravity.\\
The former is carried out in a second order formalism at the Hamiltonian level, by first defining new configuration variables, i.e. the extrinsic curvature and the densitized triad as in standard LQG, and then extending the phase space to a larger one, characterized by an additional constraint. Next, a symplectic reduction is performed, showing that the new phase space reproduces the correct Poisson brackets between the old configuration variables, namely the 3-metric and its conjugate momentum. Finally, the Ashtekar variables are introduced by means of a canonical transformation on the new phase space variables. This is the paradigm followed in \cite{Zhang2011a,Zhang2011,Han2014}. \\
The other approach is pursued directly at the Lagrangian level according to a first order (Palatini) formalism, by including additional terms in the action, such as the Holst or Nieh-Yan contributions. The formulation in terms of densitized triads and extrinsic curvature arises naturally after the 3+1 spacetime decomposition and the Legendre transform are performed. Finally, with the same canonical transformation, a constrained Hamiltonian system coordinatized by Ashtekar variables is obtained.\\
In this paper we follow this latter approach for both models \eqref{fRH}-\eqref{fR H}.\\
Let us start by performing the spacetime splitting of actions \eqref{Jordan frame models} and \eqref{Jordan frame models 2}, which, by means of tetrad fields $e^I_{\mu}$ and spin connections $\omega\indices{_{\mu}^{IJ}}$, can be simultaneously rewritten, modulo surface terms, as
\begin{align}\label{Appendix eq A1}
&S= \dfrac{1}{2\chi} \int d^4x \; e \left[\phi e^{\mu}_I e^{\nu}_J R^{IJ}_{\mu\nu} + \dfrac{\phi}{24}S^{\mu}S_{\mu} -\dfrac{2}{3}\phi T^{\mu}T_{\mu} \right. \\
&
\left. +2T^{\mu}\partial_{\mu}\phi - b_1 \dfrac{\beta}{2}S^{\mu}\partial_{\mu}\phi + b_2\Phi \frac{\beta}{3}T^{\mu}S_{\mu} - V(\phi) \right],\nonumber
\end{align}
where $e=\text{det}(e^I_{\mu})$ and $R^{IJ}_{\mu\nu} = 2\partial_{[\mu} \omega\indices{_{\nu]}^{IJ}} + 2\omega\indices{_{[\mu}^I_{K}}\omega\indices{_{\nu]}^{KJ}}$ is the strength tensor of the spin connection. In \eqref{Appendix eq A1}, terms containing $q_{\mu\nu\rho}$ have been neglected since they would eventually turn out to yield vanishing contributions as argued further on.\\
The spacetime splitting is achieved via a foliation of the manifold into a family of 3-dimensional hypersurfaces $\Sigma_t$ defined by the parametric equations $y^{\mu} = y^{\mu}(t,x^a)$, where $t\in \mathbb{R}$. The submanifold $\Sigma_t$ is globally defined by a time-like vector $n^{\mu}$ normal to the hypersurfaces, such that $n^{\mu}n_{\mu}=-1$, and an adapted basis on $\Sigma_t$ is then given by $b^{\mu}_a := \partial_{a}y^{\mu}$, satisfying the condition $g_{\mu\nu}n^{\mu}b^{\nu}_a=0$. Defining the deformation vector as $t^{\mu}=\partial_{t}y^{\mu}$, it can be decomposed on the basis vectors $\left\lbrace n^{\mu},b^{\mu}_a \right\rbrace$ as $t^{\mu} = N n^{\mu} + N^{\mu}$, where $N^{\mu}=N^{a} b^{\mu}_a$, $N$ is the lapse function and $N^a$ the shift vector.\\
The completeness relation $h_{\mu\nu} = g_{\mu\nu} + n_{\mu}n_{\nu}$ holds, where $h_{\mu\nu} = h_{ab} b^a_{\mu} b^{b}_{\nu}$ is the projector on the spatial hypersurfaces and $h_{ab}$ the 3-metric, related to the triads by $h_{ab} = e^i_a e_{ib}$. The lapse function, shift vector and the 3-metric are the new metric configuration variables, in terms of which the metric acquires the usual ADM expression, i.e.
\begin{equation}
ds^2=-N^2 dt^2 + h_{ab}(dx^a+N^adt)(dx^b+N^b dt),
\end{equation}
as in standard geometrodynamics.\\
Now, assuming the time gauge conditions $n^{\mu}=e^{\mu}_0$, $e^t_i = 0$ and using $e=N\bar{e}$, with $\bar{e}=\text{det}(e^i_a)$, the action can be rewritten as
\begin{align}\label{action Sb}
S_{b} &= \int dt d^3x \; \bar{e} \left\lbrace \phi e^a_i \left[ \mathcal{L}_t K^i_a - D^{(\omega)}_a (t\cdot \omega^i) + t\cdot \tensor{\omega}{^{i}_{k}}K^k_a\right] + \right. \nonumber \\&
+ \left( - n\cdot T +b_1\frac{\beta}{4}n\cdot S \right) \mathcal{L}_t \phi
-N^a \left[ 2 \phi e^b_i D^{(\omega)}_{[a}K^i_{b]} +\left(- n\cdot T + b_1\frac{\beta}{4}n\cdot S \right)\partial_a\phi \right]+
\nonumber \\ &
+N\left[ \frac{\phi e^a_i e^b_j}{2} \left( {}^3R^{ij}_{ab} + 2K^i_{[a}K^j_{b]} \right) + \left(T^a - b_1\frac{\beta}{4}S^a\right)\partial_{a}\phi -\frac{\phi}{48}(n\cdot S)^2 + \frac{\phi}{3}(n\cdot T)^2 + \right.\nonumber\\& \left.\left.
-b_2\Phi\frac{\beta}{6}(n\cdot T)(n \cdot S) +\frac{\phi}{48}S^aS_a - \frac{\phi}{3}T^aT_a + b_2 \Phi \frac{\beta}{6}T^aS_a -\frac{1}{2}V(\phi) \right] \right\rbrace,
\end{align}
where $K^i_a\equiv\omega^{0i}_a$, the Lie derivative along the vector field $t^{\mu}$ is defined as $\mathcal{L}_t V_{\mu} = t^{\nu}\partial_{\nu}V_{\mu} + V_{\nu}\partial_{\mu}t^{\nu}$, while $\cdot$ indicates spacetime indices contractions, namely $t\cdot \omega^i\equiv t^{\mu}\omega^{0i}_{\mu}$, $t \cdot \omega^{ij}\equiv t^{\mu}\omega^{ij}_{\mu}$, $n \cdot T \equiv n^{\mu}T_{\mu}$ and $n \cdot S \equiv n^{\mu}S_{\mu}$. Moreover, we defined the derivative $D^{(\omega)}_a$ acting only on spatial internal indices via the spatial components of the spin connection, i.e. $D^{(\omega)}_a V^i_{b}= \partial_a V^i_b + \omega\indices{_a^i_j}V^j_b$. Then, the computation of conjugate momenta of $K^i_a$ and $\phi$ yields, respectively
\begin{align}
\tilde{E}^a_i &\equiv \dfrac{\delta S_{b}}{\delta \mathcal{L}_t K^i_a} = \phi \bar{e} e^a_i,\\
\pi &\equiv \dfrac{\delta S_{b}}{\delta \mathcal{L}_t \phi} = \bar{e} \left(- n \cdot T + b_1\frac{\beta}{4} n \cdot S \right),
\end{align}
while all other momenta vanish. Thus, none of the momenta can be inverted for the corresponding velocities, implying the presence of just as many primary constraints (see \ref{new Appendix} for the detailed list of primary constraints). \\
A total Hamiltonian can therefore be defined by replacing each non-invertible velocity with a Lagrange multiplier, i.e.:
\begin{equation}\label{Htot}
\begin{aligned}
&H_T =\int d^3x \left( \tilde{E}^a_i \mathcal{L}_t K^i_a + \pi \mathcal{L}_t \phi + \lambda^m C_m - L \right)=\\
&= \int d^3x \left\lbrace
-t\cdot \omega^i D^{(\omega)}_a (\tilde{E}^a_i) +
t\cdot \tensor{\omega}{^{i}^{k}}K_{ai}\tilde{E}^a_k + \right.\\
&\left. N^a \left[ 2 \tilde{E}^b_i D^{(\omega)}_{[a}K^i_{b]} + \sqrt{\frac{\tilde{E}}{\phi^3}} \left( -n\cdot T + b_1\frac{\beta}{4} n \cdot S \right) \partial_a\phi \right] \right.\\&
\left.-N \sqrt{\phi} \frac{\tilde{E}^a_i\tilde{E}^b_j}{2\sqrt{\tilde{E}}} \left( {}^3R^{ij}_{ab} + 2K^i_{[a}K^j_{b]} \right) \right. \\&
\left. - N\sqrt{\frac{\tilde{E}}{\phi^3}}\left[ \left(T^a - b_1\frac{\beta}{4}S^a\right)\partial_{a}\phi - \frac{\phi}{48}(n\cdot S)^2 \right.\right.+ \\&
\left.\left.
+ \frac{\phi}{3}(n\cdot T)^2- b_2\frac{\Phi\beta}{6}(n\cdot T)(n \cdot S) +\frac{\phi}{48}S^aS_a \right.\right.\\
&\left.\left. - \frac{\phi}{3}T^aT_a + b_2 \Phi\frac{\beta}{6}T^aS_a - \frac{1}{2} V(\phi) \right] +\lambda^m C_m \right\rbrace
\end{aligned}
\end{equation}
where $\tilde{E}=\text{det}(\tilde{E}^a_i)$ and $\lambda^m C_m$ in the first line collectively indicates the primary constraints and their corresponding Lagrange multipliers, denoted by $\lambda$ characters (see \ref{new Appendix}). Finally, the phase space is equipped with the standard Poisson brackets, which are defined in the Appendix as well. \\
At this stage we have to impose that the primary constraints be preserved by the dynamics of the system. This amounts to computing their time evolution, evaluating their Poisson brackets with the total Hamiltonian using \eqref{Poisson1}-\eqref{Poisson2}, and imposing that the result be at least weakly vanishing on the constraint hypersurface. This yields
\begin{align}\label{kCdot}
{}^{(K)}\dot{C}^a_i &= Z^a_i - \phi\bar{e} \Delta^{aj}_{bi} {}^{(e)}\lambda^b_j \approx0,\\\label{Cdot}
\dot{C}&=W + \bar{e} \,{}^{(T)}\lambda - \bar{e}\, b_1\frac{\beta}{4} {}^{(S)}\lambda\approx0,\\\label{eCdot}
{}^{(e)}\dot{C}^i_a &= \phi \bar{e} \Delta^{bi}_{aj} {}^{(K)}\lambda^j_b\approx0,\\\label{omegatCdot}
{}^{(\omega_t)}\dot{C}_i&= D^{(\omega)}_a \tilde{E}^a_i\approx0,\\\label{omegaaCdot}
{}^{(\omega_a)}\dot{C}^a_{ij} &= t\cdot \omega^{[i} \tilde{E}^{j]a} - 2 N^{c}\tilde{E}^{a}_{[i}K_{j]c} - N^a \tilde{E}^c_{[i} K_{j]c} \nonumber\\
& - \frac{\tilde{E}^b_{[i}\tilde{E}_{j]a}}{\sqrt{\tilde{E}}}\partial_b \left( N\sqrt{\phi} \right) - N\sqrt{\phi} D^{(\omega)}_b \left( \frac{\tilde{E}^b_{[i}\tilde{E}_{j]a}}{\sqrt{\tilde{E}}} \right)\approx0,\\\label{NaCdot}
{}^{(N)}\dot{C}_a &= - \left( 2 \tilde{E}^b_i D^{(\omega)}_{[a}K^i_{b]} + \pi\partial_a\phi \right)
\equiv H_a \approx0,\\\label{NCdot}
{}^{(N)}\dot{C} &= \sqrt{\phi} \frac{\tilde{E}^a_i\tilde{E}^b_j}{2\sqrt{\tilde{E}}} \left( {}^3R^{ij}_{ab} + 2K^i_{[a}K^j_{b]} \right) + \sqrt{\frac{\tilde{E}}{\phi^3}}\left[ \left(T^a - b_1 \frac{\beta}{4}S^a\right)\partial_{a}\phi \right. \nonumber \\&
- \frac{\phi}{48}(n\cdot S)^2 + \frac{\phi}{3}(n\cdot T)^2 - b_2 \Phi\frac{\beta}{6}(n\cdot T)(n \cdot S) +\frac{\phi}{48}S^aS_a \nonumber\\&
\left.- \frac{\phi}{3}T^aT_a + b_2 \Phi \frac{\beta}{6}T^aS_a - \frac{1}{2} V(\phi) \right] \equiv -H\approx0,\\\label{rotCdot}
{}^{(\omega_t)}\dot{C}_{ik} &= K_{a[k}\tilde{E}^a_{i]}\approx0,\\\label{SaCdot}
{}^{(S)}\dot{C}^a &= N\sqrt{\frac{\tilde{E}}{\phi^3}} \left( -b_1\frac{\beta}{4}\partial^{a}\phi + \frac{\phi}{24}S^a + b_2\frac{\Phi\beta}{6}T^a \right)\approx0,\\\label{TaCdot}
{}^{(T)}\dot{C}^a &= N\sqrt{\frac{\tilde{E}}{\phi^3}} \left( \partial^{a}\phi -\frac{2}{3}\phi T^a + b_2 \frac{\Phi\beta}{6}S^a \right)\approx0,\\\label{SCdot}
{}^{(S)}\dot{C} &= -N\sqrt{\frac{\tilde{E}}{\phi^3}} \left( \frac{\phi}{24}n\cdot S + b_2\frac{\Phi\beta}{6} n\cdot T \right) +\bar{e} \lambda b_1 \frac{\beta}{4}\approx0,\\\label{TCdot}
{}^{(T)}\dot{C}&= N\sqrt{\frac{\tilde{E}}{\phi^3}} \left( \frac{2\phi}{3}n\cdot T - b_2\frac{\Phi\beta}{6}n\cdot S \right) -\bar{e} \lambda\approx0,
\end{align}
where we defined
\begin{align}
Z^a_i &\equiv\left\lbrace \tilde{E}^a_i , H_T \right\rbrace
- \bar{e} e^a_i \left\lbrace \phi , H_T \right\rbrace, \\
\Delta^{aj}_{bi} &\equiv \left( \delta^a_b \delta^j_i - e^a_i e^j_b \right),\\
W &\equiv\left\lbrace \pi , H_T \right\rbrace + \left( n\cdot T - b_1\frac{\beta}{4} n\cdot S \right) \left\lbrace \bar{e} , H_T \right\rbrace.
\end{align}
The expressions above are either functions of the phase space variables alone or they also depend on the Lagrange multipliers. In the former case, the weak vanishing of the expressions must be imposed, resulting in the presence of secondary constraints. In the latter, they have to be considered as equations for the Lagrange multipliers and must be solved for them, restricting their original arbitrariness.\\
Before moving on, we note that some phase space variables, namely $e^a_i$, $t\cdot \omega^i$, $\omega\indices{_a^{ij}}$, $N^a$, $N$, $t\cdot \omega^{ij}$, $S_a$, $T_a$, $n\cdot S$, $n\cdot T$, are actually completely arbitrary and can be considered as Lagrange multipliers themselves. This happens because their momenta only appear in the combination $\lambda^m C_m$ in $H_T$, and therefore their dynamics is itself arbitrary, being given only by the corresponding Lagrange multiplier.\\
Now, expressions \eqref{kCdot}, \eqref{eCdot}, \eqref{omegatCdot}, \eqref{omegaaCdot}, \eqref{SaCdot}, \eqref{TaCdot} contain Lagrange multipliers ${}^{(e)}\lambda^c_k$, ${}^{(K)}\lambda^j_b$, $\omega\indices{_a^{ij}}$, $t\cdot \omega^i$, $S_a$, $T_a$ and are solved by
\begin{align}
{}^{(e)}\lambda^c_k &= \frac{1}{\phi\bar{e}}\left(\Delta^{-1} \right)^{ci}_{ak} Z^a_i,\\
{}^{(K)}\lambda^j_b &= 0,\\\label{solution to omegatCdot}
\omega\indices{_a^{ij}}&=\tilde{\omega}\indices{_a^{ij}} \equiv\tilde{E}^{b[i} \left( 2 \partial_{[a}\tilde{E}^{j]}_{b]} + \tilde{E}^{j]d} \tilde{E}^k_a \partial_{d}\tilde{E}_{bk}\right) + \frac{1}{\tilde{E}}\tilde{E}^{[i}_a \tilde{E}^{j]b}\partial_{b}\tilde{E},\\
t\cdot \omega^i &= -2N^c K^i_c + \frac{\tilde{E}^{ib}}{\sqrt{\tilde{E}}}\partial_b\left( N\sqrt{\phi} \right),\label{solution to omegaaCdot}
\end{align}
where $(\Delta^{-1})^{ci}_{ak} = \left( \delta^c_a \delta^i_k -\frac{1}{2} e^c_k e^i_a \right)$ is the inverse of $\Delta^{aj}_{bi}$ defined such that $\Delta^{aj}_{bi}(\Delta^{-1})^{ci}_{ak}= \delta^c_b \delta^j_k$ and the solutions for $S_a$ and $T_a$ are given by the spatial part of expressions \eqref{tor sol t} and \eqref{tor sol s}, which is consistent with the expressions obtained at the Lagrangian level, solving the field equations for torsion.\\
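As a side remark, the algebraic identity defining $(\Delta^{-1})^{ci}_{ak}$ can be verified numerically; the following sketch (Python/numpy, with a randomly generated invertible triad, purely for illustration) checks that $\Delta^{aj}_{bi}(\Delta^{-1})^{ci}_{ak}= \delta^c_b \delta^j_k$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
e = rng.normal(size=(3, 3))        # e[i, a] = e^i_a: a generic (invertible) triad
e_inv = np.linalg.inv(e)           # e_inv[a, i] = e^a_i: the inverse triad
delta = np.eye(3)

# Delta^{aj}_{bi} = delta^a_b delta^j_i - e^a_i e^j_b, stored with index order [a, j, b, i]
Delta = np.einsum('ab,ji->ajbi', delta, delta) - np.einsum('ai,jb->ajbi', e_inv, e)
# (Delta^{-1})^{ci}_{ak} = delta^c_a delta^i_k - (1/2) e^c_k e^i_a, index order [c, i, a, k]
Delta_inv = np.einsum('ca,ik->ciak', delta, delta) - 0.5*np.einsum('ck,ia->ciak', e_inv, e)

lhs = np.einsum('ajbi,ciak->cjbk', Delta, Delta_inv)   # contract over a and i
rhs = np.einsum('cb,jk->cjbk', delta, delta)           # delta^c_b delta^j_k
print(np.allclose(lhs, rhs))                           # True
\end{verbatim}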
Note that \eqref{solution to omegaaCdot} is a solution to \eqref{omegaaCdot} since in the latter the third term is proportional to the rotational constraint, which will be derived shortly, while the last term can be dropped once the spin connection is set as in \eqref{solution to omegatCdot}.\\
As a result, the spatial components of the spin connection turn out to be functions of $\tilde{E}^a_i$. However, this does not imply that we have replaced the initial first order Palatini formulation with a second order one. Indeed, part of the original connection is now encoded in the components of torsion. Moreover, the modified Ashtekar connection ${}^{(\beta)}A^i_a$ obtained in section \ref{Sec mod asht var} contains $K^i_a\equiv \omega\indices{_a^{0i}}$, which is a part of the original spin connection that has not been expressed in terms of any other variable and is still completely independent.\\
Equations \eqref{SCdot} and \eqref{TCdot} are solved by
\begin{equation}\label{solution Lagrange mult non-dyn}
\lambda=\frac{2}{3}\phi N n\cdot T
\end{equation}
and $n \cdot S=0$ in models with $\phi$ non-dynamical, or
\begin{equation}\label{relaz S T NY}
n \cdot S = 4\beta \; n \cdot T
\end{equation}
and
\begin{equation}\label{relaz S T H}
n \cdot S = - \frac{4\beta}{\phi} n \cdot T
\end{equation}
in the $f(\mathcal{R}+NY)$ and $f(\mathcal{R})+H$ cases, respectively.\\
We emphasize that for the $f(\mathcal{R})+NY$ and $f(\mathcal{R}+H)$ models we can reproduce the structural equation from the secondary constraint \eqref{Cdot}, provided we fix one of ${}^{(S)}\lambda$ and ${}^{(T)}\lambda$. However, it is worth noting that, as shown in \cite{Olmo2011} for the pure Palatini $f(R)$ case, the same result can be obtained without fixing any of the Lagrange multipliers, but by making use of the equations of motion and taking into account the scalar constraint.\\
Then, one has to impose the conservation of the structural equation and check if a further constraint arises. This yields a linear homogeneous equation in $\lambda$, which in turn must be proportional to $n\cdot T$ as in \eqref{solution Lagrange mult non-dyn}. Therefore, no tertiary constraints arise and the conservation of the structural equation is just a restriction on the Lagrange multiplier $n\cdot T$, which has to vanish. Thus, the scalar field momentum in the non-dynamical models turns out to be weakly vanishing, in agreement with the non-dynamical character of its conjugate variable $\phi$.\\
In the other two models, instead, the time evolution of \eqref{constr phi} can be set to zero by fixing ${}^{(S)}\lambda$ or ${}^{(T)}\lambda$, which eventually implies that \textit{both} of them are no longer arbitrary, since the equations of motion for $n\cdot S$ and $n\cdot T$ are proportional to their Lagrange multipliers and relations \eqref{relaz S T NY} and \eqref{relaz S T H} hold.
Thus, in this case there are no arbitrary degrees of freedom in the definition of $\pi$ that can be used to freeze its dynamics. Moreover, if one of the Lagrange multipliers were used to reproduce the structural equation also in this case, this would imply the non-dynamical character of $\phi$, leading to an inconsistency and forcing us to choose another form for the Lagrange multiplier.\\
Finally, expressions \eqref{NCdot} and \eqref{NaCdot}, which do not contain arbitrary Lagrange multipliers anymore, are imposed to be weakly vanishing implying the presence of the vector, scalar and rotational constraints, namely
\begin{align}\label{vincolo Na sec}
&H_a \equiv - \left( 2 \tilde{E}^b_i D^{(\omega)}_{[a}K^i_{b]} + \pi\partial_a\phi \right) \approx 0,\\
\label{vincolo N sec}
& H \equiv -\sqrt{\phi} \frac{\tilde{E}^a_i\tilde{E}^b_j}{2\sqrt{\tilde{E}}} \left( {}^3R^{ij}_{ab} + 2K^i_{[a}K^j_{b]} \right) + \nonumber \\
& - \sqrt{\frac{\tilde{E}}{\phi^3}}\left[ \left(T^a - b_1 \frac{\beta}{4}S^a\right)\partial_{a}\phi - \frac{\phi}{48}(n\cdot S)^2 + \frac{\phi}{3}(n\cdot T)^2 + \right. \nonumber \\&
- b_2 \Phi\frac{\beta}{6}(n\cdot T)(n \cdot S) +\frac{\phi}{48}S^aS_a - \frac{\phi}{3}T^aT_a + \nonumber\\&
\left. + b_2 \Phi \frac{\beta}{6}T^aS_a - \frac{1}{2} V(\phi) \right] \approx 0,\\
\label{rot sec}
&K_{a[k}\tilde{E}^a_{i]} \approx 0.
\end{align}
Finally, taking into account the restrictions on the Lagrange multipliers obtained in this section, the above constraints take the form shown in equations \eqref{rotational constraint}, \eqref{vecotr constraint} and \eqref{scalar constraint} of section~\ref{Sec mod asht var}.\\
The algebra of the remaining constraints has already been studied in \cite{Zhang2011}. The treatment applies also to the present case since the two sets of constraints are linked by a canonical transformation, as shown in section \ref{Sec Einstein frame}.\\
Matter can be implicitly included in the theory by positing that its action depends only on the metric and the matter fields, and not on the connection. Assuming that no primary constraints arise in the matter sector, the constraint structure of the theory gets modified by the addition of the terms $\frac{\delta H_{matt}}{\delta N^a}$ and $\frac{\delta H_{matt}}{\delta N}$ to the vector and scalar constraints \eqref{vincolo Na sec} and \eqref{vincolo N sec}, respectively, where $H_{matt}$ is the matter Hamiltonian. In terms of it, from the usual definition of the stress energy tensor of matter in terms of the matter Lagrangian, its trace can be expressed as
\begin{equation}\label{Trace}
T = \frac{2 \phi^{5/2}}{N \sqrt{\tilde{E}}} \left( -\frac{N}{2\phi} \dfrac{\delta H_{matt}}{\delta N} + \dfrac{\delta H_{matt}}{\delta \phi} \right),
\end{equation}
a relation useful in order to recover the structural equation in the Einstein frame formulation developed in section~\ref{Sec Einstein frame}.\\
Finally, let us notice that, had the terms proportional to $q_{\mu\nu\rho}$ not been neglected, then, given the absence of derivatives of $q_{\mu\nu\rho}$, its conjugate momentum would have been weakly vanishing. The additional terms proportional to it via a Lagrange multiplier appearing in the total Hamiltonian would have produced secondary constraints whose solutions would in turn have implied the vanishing of the components of $q_{\mu\nu\rho}$, since it does not couple to any derivative of the scalar field $\phi$, contrary to what happens for the other components of torsion.
\section{Modified Ashtekar variables}\label{Sec mod asht var}
In this section we successfully implement a modified set of Ashtekar-like variables, still suitable for loop quantization and ensuring the presence of an $SU(2)$ Gauss constraint in the phase space of the theory.\\
We now focus on $f(\mathcal{R})+H$ and $f(\mathcal{R}+NY)$ models. As a result of the analysis pursued in the previous section, the gravitational sector of the phase space turns out to be characterized by the set of canonical variables $\{\pi, \phi;\;\tilde{E}^a_i,K^i_a\}$, where $\pi$ denotes the conjugate momentum to $\phi$ and $\{\tilde{E}^a_i,K^i_a\}$ are defined as
\begin{align}
K^i_a \equiv \omega\indices{_{a}^{0i}},\label{K}\qquad
\tilde{E}^a_i \equiv \phi E^a_i,
\end{align}
where $E^a_i = \text{det}(e^j_b) e^a_i$ is the ordinary densitized triad and $\omega\indices{_\mu^{IJ}}$ the independent spin connection. This set of variables is not posited and then justified via a symplectic reduction, as it would be done in the extended phase space approach. Instead, it naturally arises from the spacetime splitting and Legendre transform performed in section~\ref{Appendix} on actions \eqref{Jordan frame models}-\eqref{Jordan frame models 2}.
\\ The phase space is subject to a set of first class constraints consisting of the rotational constraint
\begin{equation}\label{rotational constraint}
R_i \equiv \tensor{\varepsilon}{_{ij}^{k}} K^j_a \tilde{E}^a_k \approx 0,
\end{equation}
the vector constraint
\begin{equation}\label{vecotr constraint}
H_a \equiv 2\tilde{E}^b_i D^{(\tilde{\omega})}_{[a}K^i_{b]} + \pi \partial_{a}\phi \approx 0,
\end{equation}
and the scalar constraint
\begin{align}\label{scalar constraint}
H &\equiv -\frac{\sqrt{\phi}}{2}\frac{\tilde{E}^a_i \tilde{E}^b_j}{\sqrt{\tilde{E}}}\left( {}^3R^{ij}_{ab}(\tilde{\omega}) + 2K^i_{[a}K^j_{b]} \right) + \frac{1}{2}\sqrt{\frac{\phi^{3}}{\tilde{E}}}\frac{\phi}{\Omega}\pi^2 + \nonumber \\
& +\frac{1}{2}\sqrt{\frac{\tilde{E}}{\phi^{3}}}\frac{\Omega}{\phi}\partial^a\phi\partial_a\phi +\frac{1}{2} \sqrt{\frac{\tilde{E}}{\phi^3}}V(\phi) \approx 0.
\end{align}
where $\tilde{E}=\text{det}(\tilde{E}^a_i)$, ${}^{3}R\indices{^{ij}_{ab}}(\tilde{\omega}) = 2\partial_{[a}\tilde{\omega}\indices{_{b]}^{ij}} + 2 \tilde{\omega}\indices{_{[a}^i_k} \tilde{\omega}\indices{_{b]}^{kj}}$. In particular, we defined a new type of covariant derivative $D^{(\tilde{\omega})}_a$, acting on internal spatial indices, by means of the modified spin connection
\begin{equation}\label{spin conn}
\tilde{\omega}\indices{_a^{ij}} =\tilde{E}^{b[i} \left( 2 \partial_{[a}\tilde{E}^{j]}_{b]} + \tilde{E}^{j]d} \tilde{E}^k_a \partial_{d}\tilde{E}_{bk}\right) + \frac{1}{\tilde{E}}\tilde{E}^{[i}_a \tilde{E}^{j]b}\partial_{b}\tilde{E},
\end{equation}
which can be expressed in terms of the Riemannian spin connection $\bar{\omega}\indices{_\mu^{IJ}}=e^{\nu I}\nabla_\mu e^J_{\nu}$ and the scalar field as $\tilde{\omega}\indices{_a^{ij}} =\bar{\omega}\indices{_a^{ij}}(E) + \frac{1}{\phi} E^{[i}_a E^{j]b}\partial_b\phi$. Then, performing the canonical transformation
\begin{align}\label{mod triads}
\tilde{E}^a_i &\rightarrow {}^{(\beta)}\tilde{E}^a_i = \beta \tilde{E}^a_i,\\ \label{mod conn}
K^i_a &\rightarrow {}^{(\beta)}A^i_a = \frac{1}{\beta} K^i_a + \tilde{\Gamma}^i_a,
\end{align}
where $\tilde{\Gamma}^i_a =- \frac{1}{2}\varepsilon^{ijk}\tilde{\omega}^{jk}_a$, a set of modified Ashtekar variables $\{{}^{(\beta)}\tilde{E}^a_i,\,{}^{(\beta)}A^i_a\}$ can be obtained, in terms of which the rotational constraint can be combined with the compatibility condition $D^{(\tilde{\omega})}_a\tilde{E}^b_i=0$, satisfied by \eqref{spin conn}, yielding the $SU(2)$ Gauss constraint
\begin{equation}
G_i = \partial_a {}^{(\beta)}\tilde{E}^a_i + \varepsilon_{ijk} {}^{(\beta)}A^j_a {}^{(\beta)}\tilde{E}^a_k \approx 0.
\end{equation}
This guarantees that the Palatini $f(\mathcal{R})$ models considered here are actually amenable to the LQG quantization procedure. In particular, by means of the new variables the vector constraint can be rearranged as in standard LQG, along with the additional term associated with the scalar field, i.e.
\begin{equation}
H_a = {}^{(\beta)}\tilde{E}^b_i F^i_{ab} + \pi \partial_a\phi,
\end{equation}
where $F^i_{ab}= 2\partial_{[a} {}^{(\beta)}A^i_{b]} + \varepsilon\indices{^i_{jk}} {}^{(\beta)}A^j_a {}^{(\beta)}A^k_b$.
Conversely, the scalar constraint turns out to be modified with respect to the standard case, namely
\begin{align}
&H = -\frac{\sqrt{\phi}}{2}\frac{{}^{(\beta)}\tilde{E}^a_i {}^{(\beta)}\tilde{E}^b_j}{\sqrt{\beta{}^{(\beta)}\tilde{E}}}\left[ \beta^2\varepsilon^{ijk}F^k_{ab}+(\beta^2+1) \; {}^3R^{ij}_{ab}(\tilde{\omega}) \right] + \nonumber \\
& +\frac{1}{2}\sqrt{\frac{\phi^{3}}{{}^{(\beta)}\tilde{E}}}\frac{\phi}{\Omega}\pi^2 +\frac{1}{2}\sqrt{\frac{{}^{(\beta)}\tilde{E}}{\phi^{3}}}\frac{\Omega}{\phi}\partial^a\phi\partial_a\phi +\frac{1}{2} \sqrt{\frac{{}^{(\beta)}\tilde{E}}{\phi^3}}V(\phi),
\label{scalar constraint modified var}
\end{align}
where ${}^{(\beta)}\tilde{E} = \text{det}({}^{(\beta)}\tilde{E}^a_i)$,
reflecting the difference in the dynamics which exists at a classical level between General Relativity and Palatini $f(\mathcal{R})$ Gravity.\\
The preservation of the Gauss and vector constraints assures that it is straightforward to extend the usual quantization procedure \cite{Cianfrani2014,Rovelli2004,Thiemann2007} to the new variables \eqref{mod triads}-\eqref{mod conn}, while the quantization of the scalar sector still requires some care, by virtue of the different dynamical character of $\phi$ in the different models. When the scalar field is dynamical, it embodies a proper gravitational degree of freedom and, following \cite{Lewandowski2016}, we can introduce a scalar field Hilbert space spanned by quantum states $\ket{\varphi}$, defined by $C^k$-functions $\varphi:\Sigma \rightarrow \mathbb{R}$ and endowed with the scalar product $\braket{\varphi|\varphi}=1,\;\braket{\varphi|\varphi'}=0$ whenever $\varphi\neq\varphi'$. The operator associated with the scalar field acts by multiplication as $\hat{\phi}(x)\ket{\varphi} = \varphi(x)\ket{\varphi}$,
allowing for well-defined operators $\widehat{\partial_a\phi}$ and $\widehat{\sqrt{\phi}}$, as shown in \cite{Lewandowski2016}. Conversely, the operator associated with the conjugate momentum only exists in its exponentiated version. However, as noticed in \cite{Lewandowski2016}, an operator $\int d^3x f(x) \hat{\pi}(x)$ can still be defined as a directional functional derivative acting in the dual space, spanned by linear functionals $\Phi: C^k(\Sigma)\rightarrow \mathbb{C}$. Now, the main difficulty in the present case comes from the $\phi^{-1}$ factors appearing both in the scalar constraint and in the area (see section~\ref{sec area}). Therefore, in order to define an operator associated with the inverse of the scalar field, one can restrict to the case $\phi > 0$ (a condition required at the classical level for the theory to be consistent) and resort to the following classical identity
\begin{equation}
\phi^{-1}(x)=4\left( \left\lbrace \sqrt{\phi(x)} , \int d^3z \pi(z) \right\rbrace \right)^2,
\end{equation}
eventually defining the operator $\hat{\phi}^{-1}$ via the replacement $\{\cdot,\cdot\}\rightarrow [\cdot,\cdot]/(i\hslash)$, where braces and square brackets denote Poisson brackets and commutators, respectively.
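As a short consistency check (a verification added here, assuming the standard normalization $\left\lbrace \phi(x),\pi(y)\right\rbrace=\delta^3(x-y)$), one finds
\begin{equation*}
\left\lbrace \sqrt{\phi(x)} , \int d^3z\, \pi(z) \right\rbrace = \int d^3z\, \frac{\delta^3(x-z)}{2\sqrt{\phi(x)}} = \frac{1}{2\sqrt{\phi(x)}},
\end{equation*}
so that the right-hand side of the identity above indeed reduces to $4\left(2\sqrt{\phi(x)}\right)^{-2}=\phi^{-1}(x)$.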
\subsection{Heuristic picture for the area spectrum}\label{sec area}
In standard LQG the classical expression for the area of a surface $S$ is written in terms of densitized triads as
\begin{equation}\label{classical area}
A(S)= \int_S ds \sqrt{E^a_iE^b_i n_a n_b},
\end{equation}
where $n_a$ is the normal vector to the surface, and then quantized by computing the action of fluxes on spin-network basis states. The area operator turns out to be diagonal in this basis, with the spectrum given by\footnote{We consider here only the simple case in which there are no nodes of the graph belonging to the surface nor edges lying on it.}
\begin{equation}\label{standard spectrum}
a = \frac{8\pi \ell_P^2}{\beta} \sum_p \sqrt{j_p(j_p+1)},
\end{equation}
where $\ell_P=\sqrt{\hslash G}$ is the Planck length and the sum runs over punctures $p$ of the surface $S$ due to edges of the spin-network, colored by spin quantum numbers $j_p$.\\
However, in the theories we analyzed, the phase space variable to be quantized is ${}^{(\beta)}\tilde{E}^a_i$, whereas the physical metric is still associated to the ordinary densitized triad $E^a_i$. Thus, equation \eqref{classical area} still holds and, in view of its quantization, it has to be rewritten in terms of ${}^{(\beta)}\tilde{E}^a_i$, namely
\begin{equation}\label{mod classical area}
A(S) = \frac{1}{\beta}\int_S ds\, \frac{\sqrt{{}^{(\beta)}\tilde{E}^a_i n_a {}^{(\beta)}\tilde{E}^b_i n_b}}{\phi} .
\end{equation}
Now, the square root can be quantized via a regularization procedure as in the standard case and, as long as $\phi$ is dynamical, one can treat the reciprocal of the scalar field as explained in section~\ref{Sec mod asht var}. Computing the action of both $\hat{\phi}^{-1}$ and the fluxes on a state obtained as the direct product of spin-network and (dual) scalar field states results in a modified area spectrum
\begin{equation}\label{mod spectrum2}
a_{\phi} = \frac{8\pi \ell_P^2}{\beta} \sum_p \dfrac{\sqrt{j_p(j_p+1)}}{\varphi(p)},
\end{equation}
where the only non-vanishing contributions are those in which the scalar field is evaluated at the punctures. This implies that the scalar field contribution to the area operator spoils the discrete character of its spectrum, similarly to what is argued in \cite{Veraguth2017,Wang2018}.\\
At the same time, this feature implies that the Immirzi parameter ambiguity, present in standard LQG, is here absent, since different values of $\beta$ do not label different values of physical observables but instead lead to the same, continuous spectrum. Such an outcome seems to suggest that in Palatini $f(\mathcal{R})$ extensions of LQG the Immirzi parameter can be conveniently set to unity in the definitions of geometrical objects as in \eqref{classical area}, and its effects on the dynamics absorbed in the value of $\Omega$. We note that in \cite{Veraguth2017,Wang2018} analogous results are achieved in the context of Conformal-LQG, where an additional conformal transformation is included among the symmetries of the theory. In that case, indeed, it is possible to build an area operator invariant under conformal transformations and independent of the Immirzi parameter, which acquires the role of a gauge parameter for the new conformal symmetry. For such a purpose, however, one is forced to consider the conformally rescaled metric as the physical one, in contrast to our assumptions and by analogy with \cite{Fatibene2010}.
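For illustration only, the short Python sketch below evaluates the modified eigenvalue \eqref{mod spectrum2} for a few punctures with arbitrarily chosen spins and scalar field values (all numbers are illustrative assumptions), showing how the continuous values $\varphi(p)$ enter the would-be discrete spectrum:
\begin{verbatim}
import numpy as np

l_P2, beta = 1.0, 1.0              # Planck area units; illustrative Immirzi parameter
j_p = np.array([0.5, 1.0, 1.5])    # spins colouring the puncturing edges
phi_p = np.array([0.8, 1.3, 2.1])  # scalar field evaluated at the punctures

standard = 8*np.pi*l_P2/beta * np.sqrt(j_p*(j_p + 1)).sum()
modified = 8*np.pi*l_P2/beta * (np.sqrt(j_p*(j_p + 1))/phi_p).sum()
print(standard, modified)
\end{verbatim}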
\subsection{Non-dynamical models}
For the two models in which the scalar field is non-dynamical, namely $f(\mathcal{R}+H)$ and $f(\mathcal{R})+NY$, the main result presented in this section still holds, namely the existence of a generalized set of Ashtekar variables allowing the preservation of the vector and Gauss constraints.\\
There are however some important caveats. As shown in section~\ref{sec2}, in both cases the parameter $\Omega$ assumes the constant value $\Omega=-3/2$, corresponding to the effective description of standard Palatini $f(\mathcal{R})$ gravity which is characterized by the non-dynamical character of the scalar field.\\
This aspect is classically encoded in the structural equation of motion \eqref{structural equation}. However, we proved that it is also reflected in a slightly different phase space structure. Indeed, the phase space of these models is endowed with an additional second class constraint which, provided we properly fix the Lagrange multipliers, can be recast in the form \eqref{structural equation}, proving the non-dynamical character of the field $\phi$. This result is reinforced by the fact that its conjugate momentum turns out to be weakly vanishing. In the other two models, instead, no additional constraints arise and the scalar field and its conjugate momentum are truly dynamical.\\
Thus, since in these models the scalar field is not an independent degree of freedom, it does not have to be quantized directly; rather, it has to be regarded as a function of the matter fields, $\phi(T)$, by means of \eqref{structural equation}. The same goes for the scalar field appearing in \eqref{mod classical area}. This introduces a dependence of the area operator on matter, and we expect that it could affect the purely geometric contribution as well, resulting in a modification of the eigenvalue expression \eqref{standard spectrum} by virtue of the dependence of $T$ on the metric tensor. However, given a matter Lagrangian, even for the simplest choices of the function $f$ the expression for $\phi(T)$ can be quite cumbersome, making its treatment unfeasible.
\section{Formulation in the Einstein frame}\label{Sec Einstein frame}
Here we compare the results of the Hamiltonian approach discussed in section~\ref{Sec mod asht var} to analogous studies present in the literature \cite{Zhang2011,Zhou2013}, where a different set of canonical variables was obtained, denoted by hatted symbols and related to ours by the canonical transformation
\begin{align}\label{canontransf1}
&\hat{E}^a_i = \frac{1}{\phi}\tilde{E}^a_i,
\qquad\hat{K}^i_a = \phi K^i_a,\\
&\label{canontransf2}
\hat{\pi} = \pi - \frac{1}{\phi} \tilde{E}^a_i K^i_a,
\qquad
\hat{\phi} = \phi.
\end{align}
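As a consistency check, one can verify in a minimal toy model, with a single pair $(K,E)$ and a single pair $(\phi,\pi)$ of canonically conjugate variables and with spatial dependence and internal indices suppressed, that \eqref{canontransf1}-\eqref{canontransf2} indeed define a canonical transformation. The following SymPy sketch, written only as an illustration under these simplifying assumptions, computes all Poisson brackets among the hatted variables.
\begin{verbatim}
import sympy as sp

K, E, phi, pi = sp.symbols('K E phi pi')
pairs = [(K, E), (phi, pi)]      # canonical pairs: {K,E} = {phi,pi} = 1

def pb(f, g):
    # Poisson bracket for the two canonical pairs above
    return sum(sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
               for q, p in pairs)

# Index-free toy version of the canonical transformation
E_hat   = E / phi
K_hat   = phi * K
pi_hat  = pi - E * K / phi
phi_hat = phi

new_pairs = [(K_hat, E_hat), (phi_hat, pi_hat)]
for i, (q1, p1) in enumerate(new_pairs):
    for j, (q2, p2) in enumerate(new_pairs):
        print(i, j, sp.simplify(pb(q1, q2)),
                    sp.simplify(pb(q1, p2)),
                    sp.simplify(pb(p1, p2)))
# The q-q and p-p brackets vanish and {q_i, p_j} = delta_ij, so the
# hatted variables are again canonically conjugate pairs.
\end{verbatim}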
These results are to some extent controversial, since performing such a transformation on \eqref{rotational constraint}-\eqref{scalar constraint} reproduces the set of constraints of \cite{Zhang2011,Zhou2013} only if we replace $\Omega$ with $\Omega + 3/2$ by hand. Furthermore, in \cite{Zhang2011}, starting from a second order analysis of \eqref{effective brans dicke}, an LQG formulation of scalar-tensor theories was achieved via a symplectic reduction technique \cite{Thiemann2007}. This seems to indicate that the implementation of Ashtekar variables can be affected by the particular choice of formalism when extensions of General Relativity are taken into account, as occurs for the Jordan frame formulation of the Palatini $f(\mathcal{R})$ models we considered.
In this sense, the fact that results equivalent to those of \cite{Zhang2011} were actually derived in \cite{Zhou2013} according to a first order approach can be traced back to the choice of including additional contributions in the action, featuring a Holst term, with the aim of eliminating torsion from the theory.
\\ Now, we want to show how the phase space structure obtained in \cite{Zhang2011,Zhou2013} can be reproduced by means of the canonical transformation \eqref{canontransf1}-\eqref{canontransf2}, provided the analysis of section~\ref{Appendix} is pursued in the so-called Einstein frame, endowed with the conformally rescaled metric $\tilde{g}_{\mu\nu} = \phi g_{\mu\nu}$. Of course, we note that such an equivalence does not exclude a priori the existence, even at the level of the Jordan frame, of a different, possibly \textit{gauged} canonical transformation (see \cite{Cianfrani2012}) able to tackle this problem.
\\With steps analogous to those followed in section~\ref{sec2}, the effective actions for the different models can be derived as
\begin{equation}\label{ST Einstein frame}
S = \frac{1}{2\chi} \int d^4x \sqrt{-\tilde{g}} \left\lbrace \widetilde{R} - \frac{\tilde{\Omega}(\phi)}{\phi}\tilde{g}^{\mu\nu}\partial_{\mu}\phi \partial_{\nu}\phi -U(\phi)\right\rbrace,
\end{equation}
where $\widetilde{R}$ is the Ricci scalar depending only on the conformal metric and $U(\phi)=V(\phi)/\phi^2$. As before, the models can be divided into two classes: $f(\mathcal{R}+H)$ and $f(\mathcal{R})+NY$, for which $\tilde{\Omega}=0$, and $f(\mathcal{R}+NY)$ and $f(\mathcal{R})+H$, for which $\tilde{\Omega}=\frac{3}{2}\phi \beta^2$ and $\tilde{\Omega}=\frac{3}{2}\frac{\phi\beta^2}{1+\beta^2\phi^2}$, respectively.\\
In particular, by comparing \eqref{ST Einstein frame} with \eqref{effective brans dicke}, we see that $\Omega$ and $\tilde{\Omega}$ are related by
\begin{equation}\label{Jordan Einstein relation}
\tilde{\Omega} = \frac{\Omega + 3/2}{\phi},
\end{equation}
which shows how the value $\Omega=-3/2$, associated with non-dynamical configurations of the field $\phi$ in the Jordan frame, corresponds to $\tilde{\Omega}=0$, in agreement with the Einstein frame representation of scalar-tensor theories.\\
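Solving \eqref{Jordan Einstein relation} for $\Omega$, i.e. $\Omega=\phi\tilde{\Omega}-3/2$, one can also read off the Jordan frame parameters implied by the Einstein frame expressions listed above; the short symbolic sketch below is meant only as a bookkeeping aid.
\begin{verbatim}
import sympy as sp

phi, beta = sp.symbols('phi beta', positive=True)

def Omega_from_tilde(Omega_tilde):
    # invert the Jordan-Einstein relation: Omega_tilde = (Omega + 3/2)/phi
    return sp.simplify(phi * Omega_tilde - sp.Rational(3, 2))

print(Omega_from_tilde(0))
# -> -3/2, i.e. the non-dynamical case
print(Omega_from_tilde(sp.Rational(3, 2) * phi * beta**2))
# -> equal to (3/2)*(beta**2*phi**2 - 1)
print(Omega_from_tilde(sp.Rational(3, 2) * phi * beta**2 / (1 + beta**2 * phi**2)))
# -> equal to -(3/2)/(1 + beta**2*phi**2)
\end{verbatim}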
Then, following the lines of section~\ref{Appendix}, we can perform the Hamiltonian analysis, which reveals a phase space coordinatized by the set of variables $\{\tilde{E}^a_i,K^i_a\}$, where the densitized triad is now defined in terms of the spatial part of the conformal tetrad $\tilde{e}^I_{\mu} = \sqrt{\phi}e^I_{\mu}$, together with a rotational constraint which coincides, for all models, with the expression derived in the Jordan frame. Regarding the vector and scalar constraints, in the non-dynamical models they read, respectively
\begin{align}
&H_a = 2\tilde{E}^b_i D^{(\tilde{\omega})}_{[a}K^i_{b]}\approx 0\\
&H = -\frac{\tilde{E}^a_i \tilde{E}^b_j}{2\sqrt{\tilde{E}}}\left( {}^3R^{ij}_{ab}(\tilde{\omega}) + 2K^i_{[a}K^{j}_{b]} \right) + \sqrt{\tilde{E}}U(\phi) \approx 0,
\end{align}
where $\tilde{\omega}\indices{_a^{ij}}$ is the spin connection compatible with $\tilde{e}^i_a$. In this case, the momentum conjugate to the scalar field is weakly vanishing, leading to the primary constraint $\pi\approx 0$, whose conservation along the dynamics simply amounts to requiring that the variation of the potential term with respect to $\phi$ weakly vanishes. If matter is included, this reproduces the structural equation as a secondary constraint, as shown in section \ref{Appendix}.\\
In models where $\phi$ is dynamical, instead, the vector constraint takes the form \eqref{vecotr constraint} and the scalar constraint reads
\begin{align}
H &= -\frac{\tilde{E}^a_i \tilde{E}^b_j}{2\sqrt{\tilde{E}}}\left( {}^3R^{ij}_{ab}(\tilde{\omega}) + 2K^i_{[a}K^{j}_{b]} \right) + \frac{1}{2\sqrt{\tilde{E}}}\frac{\phi}{\tilde{\Omega}}\pi^2 +\nonumber \\
& + \frac{1}{2}\frac{\tilde{\Omega}}{\phi} \frac{\tilde{E}^a_i\tilde{E}^b_i}{\sqrt{\tilde{E}}} \partial_{a}\phi\partial_b \phi + \sqrt{\tilde{E}}U(\phi) \approx 0.
\end{align}
If we then perform the canonical transformation \eqref{canontransf1}-\eqref{canontransf2}, the discrepancy consisting in the shift of $\Omega$ by $-3/2$ is now compensated by taking into account relation \eqref{Jordan Einstein relation}, and the final expression for the constraints is in agreement with \cite{Zhou2013}, both for $\Omega \neq -3/2$ and for $\Omega = -3/2$ ($\tilde{\Omega}\neq 0$ and $\tilde{\Omega}=0$, respectively). Specifically, in the latter case the primary constraint $\pi\approx 0$ becomes $\hat{\pi} + \frac{1}{\hat{\phi}} \hat{E}^a_i \hat{K}^i_a \approx 0$, which reproduces the so-called conformal constraint present in \cite{Zhou2013}, and proves the equivalence of the first and second order approaches in the Einstein frame.
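The last statement can be verified by direct substitution: using \eqref{canontransf1}-\eqref{canontransf2} one finds $\hat{\pi} + \hat{\phi}^{-1}\hat{E}^a_i\hat{K}^i_a = \pi$, so the primary constraint $\pi\approx 0$ is mapped precisely onto the conformal constraint. In the index-free toy notation used above, the check reads as follows.
\begin{verbatim}
import sympy as sp

E, K, phi, pi = sp.symbols('E K phi pi')
E_hat, K_hat, phi_hat = E / phi, phi * K, phi   # toy version of (canontransf1)
pi_hat = pi - E * K / phi                       # toy version of (canontransf2)

# pi_hat + (1/phi_hat) * E_hat * K_hat reduces to pi
print(sp.simplify(pi_hat + E_hat * K_hat / phi_hat))   # -> pi
\end{verbatim}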
\section{Concluding remarks}\label{sec concl}
In this work, we adopted the point of view that introducing Ashtekar-Barbero-Immirzi variables into modified $f(\mathcal{R})$ contexts requires a Palatini formulation of the action. This perspective implies the need to add to the Lagrangian the typical Holst and Nieh-Yan contributions, just as is done for the standard Einstein-Palatini action in \cite{Holst1996}. As shown in section~\ref{sec2}, we have different possible combinations for the Lagrangian, corresponding to including the aforementioned terms either inside the argument of the function $f$ or simply outside it. When the equations of motion are calculated and the torsion field is expressed via the metric and the scalar field variables, two different physical formulations emerge. In fact, plugging the Holst term into the function $f$ or adding the Nieh-Yan contribution directly to the Lagrangian simply corresponds to a standard Palatini $f(\mathcal{R})$ theory. Conversely, in the opposite case a truly scalar-tensor model is obtained, whose parameter $\Omega$ turns out to depend on the Immirzi parameter.\\
In both instances we are able to define new Ashtekar-Barbero-Immirzi variables, in terms of which the Gauss and vector constraints of LQG are recovered. The discrepancy between the two scenarios is also reflected in the morphology of the area operator, obtained starting from its expression in natural geometrical variables, i.e. the natural tetrad fields, and then rewritten via the proper $SU(2)$ variables suitable for loop quantization. As a result, the area spectrum now depends on the scalar field properties as well. In particular, in the case of a Palatini $f(\mathcal{R})$ theory, we have to deal with the intriguing feature that the geometrical structure of space depends on the nature of the matter by which it is filled. Another interesting issue arises when the scalar field must also be quantized, so that the eigenvalues of the area operator contain features of the scalar mode spectrum. This property is a consequence of the non-minimal coupling of the scalar field to gravity and suggests that space discretization could be influenced by the particular form considered for the function $f$, i.e. for the potential term $V(\phi)$. Thus, the form of the Lagrangian adopted to describe gravity seems to directly affect the quantum kinematics of space, differently from the classical scenario, where only the spatial metric fixes the kinematics of the geometry, regardless of the form of the Lagrangian.\\
We also showed that the scalar field impacts the area spectrum in such a way that it is possible to set the Immirzi parameter equal to one in all the kinematic constraints, while it still affects the scalar constraint morphology.\\
This is not surprising since we are able to directly link the Immirzi parameter to the scalar-tensor parameter $\Omega$. However, the $SU(2)$ morphology of the theory and its kinematic properties must not be influenced by $\Omega$, since we can re-absorb its value into the definition of the Ashtekar-Barbero-Immirzi variables. The dynamics, in turn, can be affected by $\Omega$, i.e. by $\beta$, since the theories are not dynamically equivalent and these parameters can be constrained by experimental observations \cite{Zakharov2006,Chiba2007,Schmidt2008,Berry2011}. \\ In this sense, the Immirzi parameter ambiguity is here completely resolved, by its link to the physical scalar-tensor parameter and by the fact that the kinematics of the theory does not depend on its specific value. This is, in our opinion, a very relevant result, opening a new perspective on the solution of some of the LQG shortcomings within a revised and extended formulation of the gravitational interaction.
\\ \indent We conclude by stressing a technical issue concerning the possibility of obtaining equivalent formulations when starting from a second order approach as in \cite{Zhang2011} or from the present Palatini approach. In fact, the scalar constraint appears different in the two analyses, and restoring a complete equivalence requires the choice of an Einstein framework for our formulation, i.e. a conformal rescaling of the tetrad field. This technical evidence suggests a possible physical interpretation for the dynamics of an $f(R)$ theory in the Einstein framework (at the present level it must be regarded simply as a mathematical tool making the scalar dynamics minimally coupled), which deserves further investigation.
\section*{Acknowledgements}
The work of F. B. is supported by the Fondazione Angelo della Riccia grant for the year 2020.
\section{Introduction}
\label{sec:intro}
\begin{Conv}
\label{Conv:G/k}%
We place ourselves in the setting of modular representation theory of finite groups. So unless mentioned otherwise, \emph{$G$ is a finite group and $\kk$ is a field of characteristic~\mbox{$p>0$}}, with $p$ typically dividing the order of~$G$.
\end{Conv}
\smallbreak
\subsection*{Permutation modules}
Among the easiest representations to construct, permutation modules are simply the $\kk$-linearizations $\kk(X)$ of $G$-sets~$X$.
And yet they play an important role in subjects as varied as derived equivalences~\cite{rickard:splendid}, Mackey functors~\cite{yoshida:G-functors},
or equivariant homotopy theory~\cite{mathew-naumann-noel:nilpotence-descent}, to name a few.
The authors' original interest stems from yet another connection, namely the one with Voevodsky's theory of motives~\cite{Voevodsky00}, specifically Artin motives.
For a gentle introduction to these ideas, we refer the reader to~\cite{balmer-gallauer:Dperm}.
We consider a `small' tensor triangulated category, the homotopy category
\begin{equation}\label{eq:K(G)}%
\cK(G):=\Kb\big(\perm(G;\kk)^\natural\big)
\end{equation}
of bounded complexes of permutation $\kkG$-modules, idempotent-completed.\,(\footnote{\,Direct summands of finitely generated permutation modules are called \emph{$p$-permutation} or \emph{trivial source} $\kkG$-modules and form the category denoted~$\perm(G;\kk)^\natural$.
It has only finitely many indecomposable objects up to isomorphism.
If $G$ is a $p$-group, all $p$-permutation modules are permutation and the indecomposable ones are of the form $\kk(G/H)$ for subgroups~$H\le G$.})
It sits as the compact part~$\cK(G)=\cT(G)^c$ of the `big' tensor triangulated category
\begin{equation}\label{eq:T(G)}%
\cT(G):=\DPerm(G;\kk)
\end{equation}
obtained for instance by closing~$\cK(G)$ under coproducts and triangles in the homotopy category $\K(\MMod{\kkG})$ of all $\kkG$-modules.
We call $\cT(G)$ the \emph{derived category of permutation $\kkG$-modules}.
Details can be found in~\Cref{Rec:perm}.
This derived category of permutation modules $\cT(G)$ is amenable to techniques of tensor-triangular geometry~\cite{balmer:icm}, a geometric approach that brings organization to otherwise bewildering tensor triangulated categories, in topology, algebraic geometry or representation theory. Tensor-triangular geometry has led to many new insights, for instance in equivariant homotopy theory~\cite{balmer-sanders:SH(G)-finite,barthel-et-al:spc-SHG-abelian,barthel-greenlees-hausmann:SH(G)-compact-Lie}.
Our goal in the present paper and its follow-up~\cite{balmer-gallauer:TTG-Perm-II} is to understand the tensor-triangular geometry of the derived category of permutation modules, both in its big variant~$\cT(G)$ and its small variant~$\cK(G)$.
In Part~III of the series~\cite{balmer-gallauer:TTG-Perm-III} we extend our analysis to profinite groups and to Artin motives.
\smallbreak
Having sketched the broad context and the aims of the series, let us now turn to the content of the present paper in more detail.
\subsection*{Stratification}
In colloquial terms, one of our main results says that the big derived category of permutation modules is strongly controlled by its compact part:
\begin{Thm}[\Cref{Thm:stratification}]
\label{Thm:stratification-intro}%
The derived category of permutation modules $\cT(G)$ is \emph{stratified} by~$\SpcKG$ in the sense of Barthel-Heard-Sanders~\cite{barthel-heard-sanders:stratification-Mackey}.
\end{Thm}
Let us remind the reader of BHS-stratification.
What we establish in \Cref{Thm:stratification} is an inclusion-preserving bijection between the localizing $\otimes$-ideals of~$\cT(G)$ and the subsets of the spectrum~$\SpcKG$.
This bijection is defined via a canonical support theory on~$\cT(G)$ that exists once we know that~$\SpcKG$ is a noetherian space (\Cref{Prop:Spc-noetherian}).
Note that \Cref{Thm:stratification-intro} cannot be obtained via `BIK-stratification' as in Benson-Iyengar-Krause~\cite{BIK:stratifying-stmod-kG}, since the endomorphism ring of the unit~$\Hom^\sbull_{\cK(G)}(\unit,\unit)=\kk$ is too small.
However, we shall see that \cite{BIK:stratifying-stmod-kG} plays an important role in our proof, albeit indirectly.
An immediate consequence of stratification is the Telescope Property (\Cref{Cor:telescope}):
\begin{Cor}
\label{Cor:smash-intro}%
Every smashing $\otimes$-ideal of~$\cT(G)$ is generated by its compact part.
\end{Cor}
The key question is now to understand the spectrum~$\SpcKG$.
For starters, recall from~\cite[Theorem~5.13]{balmer-gallauer:resol-small} that the innocent-looking category $\cK(G)$ actually captures much of the wilderness of modular representation theory. It admits as Verdier quotient the derived category $\Db(\kk G)$ of \emph{all} finitely generated $\kkG$-modules.
By Benson-Carlson-Rickard~\cite{benson-carlson-rickard:tt-class-stab(kG)}, the spectrum of $\Db(\kkG)$ is the homogeneous spectrum of the cohomology ring~$\rmH^\sbull(G,\kk)$.
We deduce in \Cref{Prop:VeeG} that $\SpcKG$ contains an open piece~$\VG$
\begin{equation}
\label{eq:VeeG}%
\Spech(\rmH^\sbull(G,\kk))\cong\Spc(\Db(\kkG))=:\,{\VG} \ \hook \,\SpcKG
\end{equation}
that we call the \emph{cohomological open} of~$G$.
\goodbreak
In good logic, the closed complement of~$\Vee{G}$ is the support
\begin{equation}
\label{eq:Z_G}%
\SpcKG\sminus \VG\quad = \quad \Supp(\Kac(G))
\end{equation}
of the tt-ideal $\Kac(G)=\Ker(\cK(G)\onto \Db(\kkG))$ of acyclic objects in~$\cK(G)$.
The problem becomes to understand this closed subset~$\Supp(\Kac(G))$.
To appreciate the issue, let us say a word of closed points.
\Cref{Cor:closed-pts} gives the complete list: There is one closed point~$\cM(H)$ of~$\SpcKG$ for every conjugacy class of $p$-subgroups~$H\le G$.
The open~$\VG$ only contains one closed point, for the trivial subgroup~\mbox{$H=1$}. All other closed points $\cM(H)$ for $H\neq 1$ are to be found in the complement $\Supp(\Kac(G))$.
It will turn out that $\SpcKG$ is substantially richer than the cohomological open~$\VG$, in a way that involves $p$-local information about~$G$.
To understand this, we need the right notion of fixed-points functor.
\smallbreak
\subsection*{Modular fixed-points}
Let $H\le G$ be a subgroup. We abbreviate by
\begin{equation}\label{eq:Weyl}%
\WGH:=W_G(H)=N_G(H)/H
\end{equation}
the Weyl group of~$H$ in~$G$.
If $H\normaleq G$ is normal then of course~$\WGH=G/H$.
For every $G$-set~$X$, its $H$-fixed-points~$X^H$ is canonically a $(\WGH)$-set.
We also have a naive fixed-points functor $M\mapsto M^H$ on $\kkG$-modules but it does not `linearize' fixed-points of $G$-sets, that is, $\kk(X)^H$ differs from~$\kk(X^H)$ in general. And it does not preserve the tensor product.
We would prefer a \emph{tensor}-triangular functor
\begin{equation}
\label{eq:Psi^H}%
\Psi^H\colon \cT(G)\to \cT(\WGH)
\end{equation}
such that $\Psi^H(\kk(X))=\kk(X^H)$ for every $G$-set~$X$.
A related problem was encountered long ago for the $G$-equivariant stable homotopy category~$\SH(G)$, see~\cite{LMSM:equivariant-stable-htpy}:
The naive fixed-points functor (\aka the `genuine' or `categorical' fixed-points functor) is not compatible with taking suspension spectra, and it does not preserve the smash product.
To solve both issues, topologists invented \emph{geometric} fixed-points~$\Phi^H$.
Those functors already played an important role in tensor-triangular geometry~\cite{balmer-sanders:SH(G)-finite,barthel-greenlees-hausmann:SH(G)-compact-Lie,psw:derived-mackey} and it would be reasonable, if not very original, to try the same strategy for~$\cT(G)$.
Such geometric fixed-points~$\Phi^H$ can indeed be defined in our setting but unfortunately they do \emph{not} give us the wanted~$\Psi^H$ of~\eqref{eq:Psi^H}, as we explain in \Cref{Rem:geom-fixed-pts}.
In summary, we need a third notion of fixed-points functor~$\Psi^H$, which is neither the naive one~$(-)^H$, nor the `geometric' one~$\Phi^H$ imported from topology.
It turns out (see \Cref{Rem:only-p-subgroups}) that it can only exist in characteristic~$p$ when $H$ is a~$p$-subgroup.
The good news is that this is the only restriction (see~\Cref{sec:modular-fixed-pts}):
\begin{Prop}
\label{Prop:modular-fixed-points-intro}%
For every $p$-subgroup~$H\le G$ there exists a coproduct-preserving tensor-triangular functor on the big derived category of permutation modules~\eqref{eq:T(G)}
\[
\Psi^H\colon \quad \cT(G)\too \cT(\WGH)
\]
such that $\Psi^H(\kk(X))\cong\kk(X^H)$ for every $G$-set~$X$.
In particular, this functor preserves compacts and restricts to a tt-functor $\Psi^H\colon \cK(G)\to \cK(\WGH)$ on~\eqref{eq:K(G)}.
\end{Prop}
We call the $\Psi^H$ the \emph{modular $H$-fixed-points functors}. They already exist at the level of additive categories~$\perm(G;\kk)^\natural\to \perm(\WGH;\kk)^\natural$, where they agree with the classical Brauer quotient, although our construction is quite different.
See~\Cref{Rem:Brauer}.
These $\Psi^H$ also recover motivic functors considered by Bachmann in~\cite[Corollary~5.48]{bachmann:thesis}.
For us, modular fixed-points functors are only a tool that we want to use to prove theorems. So let us return to~$\SpcKG$.
\smallbreak
\subsection*{The spectrum}
Each tt-functor~$\Psi^{H}$ induces a continuous map on spectra
\begin{equation}
\label{eq:psi^N}%
\psi^{H}:=\Spc(\Psi^{H})\,\colon \quad \Spc(\cK(\WGH))\too \SpcKG.
\end{equation}
In particular $\SpcKG$ receives via this map~$\psi^H$ the cohomological open~$\Vee{\WGH}$ of the Weyl group of~$H$:
\begin{equation}
\label{eq:psi^H-intro}
\Vee{\WGH}=\Spc(\Db(\kk(\WGH)))\hook\Spc(\cK(\WGH))\xto{\psi^H}\SpcKG.
\end{equation}
Using this, we describe the \emph{set} underlying $\SpcKG$ in~\Cref{Thm:all-points}:
\begin{Thm}
\label{Thm:Spc(K(G))-intro}%
Every point of~$\SpcKG$ is the image ${\psi}^H(\gp)$ of a point~$\gp\in \Vee{\WGH}$ for some $p$-subgroup~$H\le G$, in a unique way up to $G$-conjugation, \ie we have ${\psi}^H(\gp)={\psi}^{H'}(\gp')$ if and only if there exists $g\in G$ such that $H^g=H'$ and $\gp^g=\gp'$.
\end{Thm}
In this description, the trivial subgroup~$H=1$ contributes the cohomological open~$\VG$ (since $\Psi^1=\Id$).
Its closed complement $\Supp(\Kac(G))$, introduced in~\eqref{eq:Z_G}, is covered by images of the modular fixed-points maps~\eqref{eq:psi^H-intro}, for~$H$ running through all non-trivial $p$-subgroups of~$G$.
The main ingredient in proving \Cref{Thm:Spc(K(G))-intro} is our Conservativity \Cref{Thm:conservativity} on the associated big categories:\,(\footnote{\,Recall that Krause's homotopy category of injectives $\K\Inj(\kk(G))$ is a compactly generated tensor triangulated category whose compact part identifies with $\Db(\kk(G))$.})
\begin{Thm}
\label{Thm:conservativity-intro}%
The family of functors $\{\cT(G)\xto{\Psi^H}\cT(\WGH)\onto \K\Inj(\kk(\WGH))\}_{H}$, indexed by the (conjugacy classes of) $p$-subgroups~$H\le G$, is conservative.
\end{Thm}
This determines the set $\SpcKG$. The topology of~$\SpcKG$ is a separate plot, involving new characters.
The reader will find them in Part~II~\cite{balmer-gallauer:TTG-Perm-II}.
\smallbreak
\subsection*{Measuring progress by examples}
Before the present work, we only knew the case of the cyclic group~$C_p$ of order~$p=2$, where $\Spc(\cK(C_2))$ is a 3-point space\,{\rm(\footnote{\,A line indicates specialization: The higher point is in the closure of the lower one.})}
\begin{equation}
\label{eq:C_2}%
\vcenter{\xymatrix@R=.5em@C=.2em{
{\color{Brown}\scriptstyle\Supp(\Kac(C_2))}
& {\color{Brown}\bullet} \ar@{-}@[Gray][rd]
&&
{\color{OliveGreen}\bullet}
\\
&& {\color{OliveGreen}\bullet} \ar@{-}@[OliveGreen][ru]_-{{\color{OliveGreen}\Vee{C_2}}}
}}\kern5em
\end{equation}
This was the starting point of our study of real Artin-Tate motives~\cite[Theorem~3.14]{balmer-gallauer:rage}.
It appears independently in Dugger-Hazel-May~\cite[Theorem~5.4]{dugger-et-al:C2}.
We now have a description of~$\Spc(\cK(G))$ for arbitrary finite groups~$G$.
We gather several examples in \Cref{sec:examples} to illustrate the progress made since~\eqref{eq:C_2}, and also for later use in~\cite{balmer-gallauer:TTG-Perm-III}.
Let us highlight the case of the quaternion group~$G=Q_8$ (\Cref{Exa:Q_8}).
By Quillen, we know that the cohomological open~$\Vee{Q_8}$ is the same as for its center~$Z(Q_8)=C_2$, that is, the 2-point Sierpi\'{n}ski space displayed in green on the right-hand side of~\eqref{eq:C_2}, and again below:
\[
\vcenter{\xymatrix@R=.5em@C=.2em{
{\color{Brown}\scriptstyle\Supp(\Kac(Q_8))=?} \ar@{..}@[Gray][rd]
&&
{\color{OliveGreen}\bullet}
\\
& {\color{OliveGreen}\bullet} \ar@{-}@[OliveGreen][ru]_-{{\color{OliveGreen}\Vee{Q_8}\,\cong\,\Vee{C_2}}}
}}
\]
If intuition was solely based on~\eqref{eq:C_2} one could believe that $\SpcKG$ is just~$\Vee{G}$ with some discrete decoration for the acyclics, like the single (brown) point on the left-hand side of~\eqref{eq:C_2}. The quaternion group offers a stark rebuttal.
Indeed, the spectrum $\Spc(\cK(Q_8))$ is the following space:
\begin{equation}\label{eq:Q_8}%
\kern2em\vcenter{\xymatrix@C=0.1em@R=.4em{
{\color{Brown}\overset{}{\bullet}} \ar@{-}@[Gray][rrdd] \ar@{-}@[Gray][rrrrdd] \ar@{-}@[Gray][rrrrrrdd] \ar@{~}@[Gray][rrrrrrrrdd] &&& {\color{Brown}\overset{}{\bullet}} \ar@{-}@[Brown][ldd] \ar@{-}@[Gray][rrrrrrdd]
&& {\color{Brown}\overset{}{\bullet}} \ar@{-}@[Brown][ldd] \ar@{-}@[Gray][rrrrrrdd]
&& {\color{Brown}\overset{}{\bullet}} \ar@{-}@[Brown][ldd] \ar@{-}@[Gray][rrrrrrdd]
&& {\color{Brown}\overset{}{\bullet}} \ar@{~}@[Brown][ldd] \ar@{-}@[Brown][dd] \ar@{-}@[Brown][rrdd] \ar@{-}@[Brown][rrrrdd] \ar@{-}@[Gray][rrrrrrrdd]
&&&&&&&& {\color{OliveGreen}\bullet} \ar@{-}@[OliveGreen][ldd]^-{{\color{OliveGreen}\Vee{Q_8}\,\cong\,\Vee{C_2}}}
\\ \\
&& {\color{Brown}\bullet} \ar@{-}@[Gray][rrrrrrrdd]_-{\color{Brown}\Supp(\Kac(Q_8))\,\cong\,\Spc(\cK(C_2^{\times2}))\kern5em}
&& {\color{Brown}\bullet} \ar@{-}@[Gray][rrrrrdd]
&& {\color{Brown}\bullet} \ar@{-}@[Gray][rrrdd]
& \ar@{.}@[Brown][r]
& {\scriptstyle\color{Brown}\tinyPone} \ar@{~}@[Brown][rdd] \ar@{.}@[Brown][rrrrrrr]
& {\color{Brown}\bullet} \ar@{-}@[Brown][dd]
&& {\color{Brown}\bullet} \ar@{-}@[Brown][lldd]
&& {\color{Brown}\bullet} \ar@{-}@[Brown][lllldd]
&&& {\color{OliveGreen}\bullet}
\\ \\
&&&&&&&&& {\color{Brown}\bullet}
&&&&
}}
\end{equation}
Its support of acyclics (in brown) is actually way more complicated than the cohomological open itself: It has Krull dimension two and contains a copy of the projective line~$\bbP^1_{\!\kk}$. In fact, the map~$\psi^{C_2}$ given by modular fixed-points identifies the closed piece $\Supp(\Kac(Q_8))$ with the whole spectrum for $Q_8/C_2$, which is a Klein-four. We discuss the latter in \Cref{Exa:Klein4} where we also explain the meaning of~$\Pone$ and the undulated lines in~\eqref{eq:Q_8}.
\medbreak
We hope that the outline of the paper is now clear from the above introduction and the \href{#TOC}{table of contents}.
\begin{Ack}
We thank Tobias Barthel, Henning Krause and Peter Symonds for precious conversations and for their stimulating interest.
\end{Ack}
\tristar
\begin{Ter}
\label{Ter:0}%
A `tensor category' is an additive category with a symmetric-monoidal product additive in each variable.
We say `tt-category' for `tensor triangulated category' and `tt-ideal' for `thick (triangulated) $\otimes$-ideal'.
We say `big' tt-category for a rigidly-compactly generated tt-category, as in~\cite{balmer-favi:idempotents}.
We use a general notation~$(-)^\sbull$ to indicate everything related to graded rings. For instance, $\Spech(-)$ denotes the homogeneous spectrum.
For subgroups~$H,K\le G$, we write $H\le_G K$ to say that $H$ is $G$-conjugate to a subgroup of~$K$, that is, $H^g\le K$ for some~$g\in G$.
We write $\sim_G$ for $G$-conjugation.
As always $H^g=g\inv H\,g$ and~${}^{g\!}H=g\,H\,g\inv$. We write $N_G(H,K)$ for $\SET{g\in G}{H^g\le K}$ and $N_G(H)=N_G(H,H)$ for the normalizer.
We write $\Sub_p(G)$ for the set of $p$-subgroups of~$G$. It is a $G$-set via conjugation.
\end{Ter}
\begin{Conv}
\label{Conv:light}%
When a notation involves a subgroup~$H$ of an ambient group~$G$, we drop the mention of~$G$ if no ambiguity can occur, like with $\Res_H$ for~$\Res^G_H$.
Similarly, we sometimes drop the mention of the field~$\kk$ to lighten notation.
\end{Conv}
\section{Recollections and Koszul objects}
\label{sec:rec-red}%
\begin{Rec}
\label{Rec:ttg}%
We refer to~\cite{balmer:icm} for elements of tensor-triangular geometry.
Recall simply that the \emph{spectrum} of an essentially small tt-category~$\cK$ is $\SpcK=\SET{\cP\subsetneq\cK}{\cP\textrm{ is a prime tt-ideal}}$.
For every object~$x\in\cK$, its support is $\supp(x):=\SET{\cP\in\SpcK}{x\notin \cP}$.
These form a basis of closed subsets for the topology.
\end{Rec}
\begin{Rec}
\label{Rec:perm}%
(Here $\kk$ can be a commutative ring.)
Recall our reference~\cite{balmer-gallauer:Dperm} for details on permutation modules.
Linearizing a $G$-set~$X$, we let $\kk(X)$ be the free $\kk$-module with basis~$X$ and $G$-action $\kk$-linearly extending the $G$-action on~$X$.
A \emph{permutation $\kkG$-module} is a $\kkG$-module isomorphic to one of the form~$\kk(X)$.
These modules form an additive subcategory $\Perm(G;\kk)$ of~$\MMod{\kkG}$, with all $\kkG$-linear maps.
We write $\perm(G;\kk)$ for the full subcategory of finitely generated permutation $\kkG$-modules and $\perm(G;\kk)^\natural$ for its idempotent-completion.
We tensor $\kkG$-modules in the usual way, over~$\kk$ with diagonal $G$-action. The linearization functor~$\kk(-)\colon \Gasets[G] \too \Perm(G;\kk)$ turns the cartesian product of $G$-sets into this tensor product. For every finite~$X$, the module $\kk(X)$ is self-dual.
We consider the idempotent-completion $(-)^\natural$ of the homotopy category of bounded complexes in the additive category~$\perm(G;\kk)$
\begin{equation*}
\cK(G)=\cK(G;\kk):=\Kb(\perm(G;\kk))^\natural\cong \Kb(\perm(G;\kk)^\natural).
\end{equation*}
As $\perm(G;\kk)$ is an essentially small tensor-additive category, $\cK(G)$ becomes an essentially small tensor triangulated category. As $\perm(G;\kk)$ is rigid so is~$\cK(G)$, with degreewise duals. Its tensor-unit $\unit=\kk$ is the trivial $\kkG$-module $\kk=\kk(G/G)$.
The `big' \emph{derived category of permutation $\kkG$-modules}~\cite[Definition~3.6]{balmer-gallauer:Dperm} is
\begin{equation*}
\DPerm(G;\kk)=\K(\Perm(G;\kk))\big[\{G\textrm{-quasi-isos}\}\inv\big],
\end{equation*}
where a $G$-quasi-isomorphism $f\colon P\to Q$ is a morphism of complexes such that the induced morphism on $H$-fixed points $f^H$ is a quasi-isomorphism for every subgroup~$H\le G$.
It is also the localizing subcategory of $\K(\Perm(G;\kk))$ generated by~$\cK(G)$, and it follows that $\cK(G)=\DPerm(G;\kk)^c$.
\end{Rec}
\begin{Exa}
For $G$ trivial, the category~$\cK(1;\kk)=\Dperf(\kk)$ is that of perfect complexes over~$\kk$ (any ring) and $\DPerm(1;\kk)$ is the derived category of~$\kk$.
\end{Exa}
\begin{Rem}
The tt-category $\cK(G)$ depends functorially on~$G$ and~$\kk$. It is contravariant in the group. Namely if $\alpha\colon G\to G'$ is a homomorphism then restriction along~$\alpha$ yields a tt-functor $\alpha^*\colon \cK(G')\to \cK(G)$.
When $\alpha$ is the inclusion of a subgroup~$G\le G'$, we recover usual restriction
\[
\Res^{G'}_{G}\colon \cK(G')\to \cK(G).
\]
When~$\alpha$ is a quotient $G\onto G'=G/N$ for~$N\normaleq G$, we get \emph{inflation}, denoted here\,(\footnote{\,We avoid the traditional $\Infl^{G}_{G/N}$ notation which is not coherent with the restriction notation.})
\[
\Infl^{G/N}_G\colon \cK(G/N;\kk)\to \cK(G).
\]
The covariance of $\cK(G)$ in~$\kk$ is simply obtained by extension-of-scalars.
All these functors are the `compact parts' of similarly defined functors on $\DPerm$.
\end{Rem}
Let us say a word of $\kkG$-linear morphisms between permutation modules.
\begin{Rec}
\label{Rec:Hom}%
Let $H,K\le G$ be subgroups. Then $\Hom_{\kkG}(\kk(G/H),\kk(G/K))$ admits a $\kk$-basis $\{f_g\}_{[g]}$ indexed by classes $[g]\in \doublequot HGK$. Namely, choosing a representative in each class $[g]\in \doublequot HGK$, one defines
\begin{equation}
\label{eq:Hom-f_g}%
f_g\colon \quad \kk(G/H)\underset{\eta}{\ \into\ } \kk(G/L)\underset{c_g}{\isoto} \kk(G/L^g)\underset{\eps}{\ \onto\ } \kk(G/K)
\end{equation}
where we set $L:=H\cap {}^{g\!}K$, where $\eta$ and~$\eps$ are the usual maps using that $L\le H$ and $L^g\le K$ (thus $\eta$ maps $[e]_H$ to $\sum_{\gamma\in H/L} \gamma$ and $\eps$ extends $\kk$-linearly the projection $G/L^g\onto G/K$), and finally where the middle isomorphism~$c_g$ is
\begin{equation}\label{eq:Hom-c_g}%
\vcenter{\xymatrix@R=.1em{
c_g\colon & \kk(G/L) \ar[r] & \kk(G/L^g)
\\
& [x]_{L} \ar@{|->}[r] & [x\cdot g]_{L^g}\,.
}}
\end{equation}
This is a standard computation, using the adjunction $\Ind_H^G\adj\Res^G_H$ and the Mackey formula for~$\Res^G_H(\kk(G/K))\simeq\oplus_{[g]\in \doublequot HGK}\,\kk(H/H\cap {}^{g\!}K)$.
\end{Rec}
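For concreteness, here is a brute-force illustration, not needed in the sequel, of the double-coset count underlying \Cref{Rec:Hom} in the smallest non-abelian example: for $G=S_3$ and $H=K$ generated by a transposition there are exactly two double cosets, so $\Hom_{\kkG}(\kk(G/H),\kk(G/K))$ is $2$-dimensional.
\begin{verbatim}
from itertools import permutations

# G = S_3 acting on {0,1,2}; an element g is the tuple (g(0), g(1), g(2)).
G = list(permutations(range(3)))

def compose(g, h):                       # (g*h)(i) = g(h(i))
    return tuple(g[h[i]] for i in range(3))

e, t = (0, 1, 2), (1, 0, 2)              # identity and the transposition (0 1)
H = K = [e, t]                           # subgroup of order 2

# Double cosets HgK, each recorded as a frozenset of group elements.
double_cosets = {frozenset(compose(compose(h, g), k) for h in H for k in K)
                 for g in G}
print(len(double_cosets))                # -> 2
\end{verbatim}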
We can now begin our analysis of the spectrum of the tt-category~$\cK(G)$.
\begin{Prop}
\label{Prop:index-invertible}%
Let $G\le G'$ be a subgroup of index invertible in~$\kk$. Then the map $\Spc(\Res^{G'}_{G})\colon \Spc(\cK(G))\to \Spc(\cK(G'))$ is surjective.
\end{Prop}
\begin{proof}
This is a standard argument.
For a subgroup $G\le G'$, the restriction functor $\Res^{G'}_{G}$ has a two-sided adjoint $\Ind_{G}^{G'}\colon \cK(G)\to \cK(G')$ such that the composite of the unit and counit of these adjunctions $\Id\to \Ind\Res\to \Id$ is multiplication by the index. If the latter is invertible, it follows that $\Res^{G'}_{G}$ is a faithful functor.
The result now follows from~\cite[Theorem~1.3]{balmer:surjectivity}.
\end{proof}
\begin{Cor}
Let $\kk$ be a field of characteristic zero and $G$ be a finite group. Then $\Spc(\cK(G))=\ast$ is a singleton.
\end{Cor}
\begin{proof}
Direct from \Cref{Prop:index-invertible} since $\Spc(\cK(1;\kk))=\Spc(\Dperf(\kk))=\ast$.
\end{proof}
\begin{Rem}
\label{Rem:reduc}%
In view of these reductions, the fun happens with coefficients in a field~$\kk$ of positive characteristic~$p$ dividing the order of~$G$. Hence \Cref{Conv:G/k}.
\end{Rem}
Let us now identify what the derived category tells us about $\SpcKG$.
\begin{Not}
\label{Not:Kac(G)}%
We can define a tt-ideal of~$\cK(G)=\Kb(\perm(G;\kk)^\natural)$ by
\[
\cK_\ac(G):=\SET{x\in\cK(G)}{x\textrm{ is \emph{acyclic} as a complex of $\kkG$-modules}}.
\]
It is the kernel of the tt-functor $\Upsilon_G\colon \cK(G)\to \Db(\kk G):=\Db(\mmod{\kkG})$ induced by the inclusion $\perm(G;\kk)^\natural\hook\mmod{\kkG}$ of the additive category of $p$-permutation $\kkG$-modules inside the abelian category of all finitely generated $\kkG$-modules.
\end{Not}
\begin{Rec}
\label{Rec:K(G)/Kac}%
The canonical functor induced by~$\Upsilon_G$ on the Verdier quotient
\[
\frac{\cK(G)}{\cK_\ac(G)}\too \Db(\kk G)
\]
is an equivalence of tt-categories. This is \cite[Theorem~5.13]{balmer-gallauer:resol-small}. In other words,
\begin{equation}
\label{eq:Upsilon_G}%
\Upsilon_G\colon \cK(G)\onto \Db(\kk G)
\end{equation}
realizes the derived category of finitely generated $\kkG$-modules as a localization of our~$\cK(G)$, away from the Thomason subset~$\Supp(\Kac(G))$ of~\eqref{eq:Z_G}.
Following Neeman-Thomason, the above localization~\eqref{eq:Upsilon_G} is the compact part of a finite localization of the corresponding `big' tt-categories $\cT(G)\onto \K\Inj(\kkG)$, the homotopy category of complexes of injectives. See~\cite[Remark~4.21]{balmer-gallauer:resol-big}. We return to this localization of big categories in \Cref{Rec:J-lambda}.
\end{Rec}
We want to better understand the tt-ideal of acyclics~$\Kac(G)$ and in particular show that it has closed support.
\begin{Cons}
\label{Cons:kos}%
Let $H\le G$ be a subgroup. We define a complex of~$\kkG$-modules by tensor-induction (recall \Cref{Conv:light})
\[
\sH=\sHG:=\tInd_H^G(0\to \kk\xto{1} \kk\to 0)
\]
where $0\to \kk\xto{1} \kk\to 0$ is non-trivial in homological degrees~1 and~0; hence $\sH$ lives in degrees between~$[G\!:\!H]$ and~0. Since $H$ acts trivially on~$k$, the action of~$G$ on~$\sH$ is the action of~$G$ by permutation of the factors $\otimes_{G/H}(0\to\kk\xto{1} \kk\to 0)$. This can be described as a Koszul complex. For every $0\le d\le[G\!:\!H]$, the complex~$\sH$ in degree~$d$ is the $k$-vector space $\Lambda^d(\kk(G/H))$ of dimension~${[G:H]\choose d}$. If we choose a numbering of the elements of~$G/H=\{v_1,\ldots,v_{[G:H]}\}$ then $\sH_d$ has a $k$-basis $\SET{v_{i_1}\wedge\cdots\wedge v_{i_d}}{1\le i_1<\cdots<i_d\le [G\!:\!H]}$. The canonical diagonal action of~$G$ permutes this basis but introduces signs when re-ordering the $v_i$'s so that indices increase. When $p=2$ these signs are irrelevant. When $p>2$, every such `sign-permutation' $\kkG$-module is isomorphic to an actual permutation $\kkG$-module (by changing some signs in the basis, see~\cite[Lemma~3.8]{balmer-gallauer:resol-small}).
\end{Cons}
\begin{Prop}
\label{Prop:s(H;G)}%
Let $H\le G$ be a subgroup. Then~$\sHG$ is an acyclic complex of finitely generated permutation $\kkG$-modules which is concentrated in degrees between $[G\!:\!H]$ and~$0$ and such that it is~$k$ in degree~0 and~$\kk(G/H)$ in degree~1.
\end{Prop}
\begin{proof}
See \Cref{Cons:kos}. Exactness is obvious since the underlying complex of $\kk$-modules is $(0\to k\to k\to 0)\potimes{[G:H]}$.
The values in degrees~$0,1$ are immediate.
\end{proof}
\begin{Exa}
\label{Exa:s(1;G)}
We have $\kos[G]{G}=0$ in~$\cK(G)$. The complex $\kos[G]{1}$ is an acyclic complex of permutation modules that was important in~\cite[\S\,3]{balmer-gallauer:resol-small}:
\[
\kos[G]{1}=\xymatrix{\cdots 0\ar[r] & P_n\ar[r] & \cdots \ar[r] & P_2 \ar[r] & \kk G \ar[r] & \kk \ar[r] & 0 \cdots
}
\]
\end{Exa}
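To make \Cref{Cons:kos} and \Cref{Exa:s(1;G)} concrete in the smallest case, take $G=C_2$ with generator $\sigma$, $H=1$ and $\kk=\mathbb{F}_2$: then $\kos[C_2]{1}$ is the complex $0\to\kk\to\kk C_2\to\kk\to 0$ whose differentials send $1$ to $[e]+[\sigma]$ and both $[e],[\sigma]$ to $1$. The following brute-force check over $\mathbb{F}_2$, included only as an illustration, confirms that $d\circ d=0$ and that the complex is exact, as guaranteed by \Cref{Prop:s(H;G)}.
\begin{verbatim}
from itertools import product

p = 2
# kos_{C_2}(1) over F_2:   0 -> k --d2--> k C_2 --d1--> k -> 0
# with d2(1) = [e] + [sigma] and d1([e]) = d1([sigma]) = 1.  The generator
# sigma swaps the two coordinates of the middle term and fixes both ends
# (the sign in degree 2 is invisible mod 2); both maps are equivariant.
def d2(x):                  # k -> k^2
    return (x % p, x % p)

def d1(v):                  # k^2 -> k
    return (v[0] + v[1]) % p

assert all(d1(d2(x)) == 0 for x in range(p))                   # d1 o d2 = 0
ker_d1 = {v for v in product(range(p), repeat=2) if d1(v) == 0}
im_d2  = {d2(x) for x in range(p)}
assert ker_d1 == im_d2                                         # exact in the middle
assert len(im_d2) == p                                         # d2 injective
assert {d1(v) for v in product(range(p), repeat=2)} == set(range(p))  # d1 surjective
print("kos_{C_2}(1) is exact over F_2")
\end{verbatim}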
\begin{Lem}
\label{Lem:kos(N;G)}%
Let $H\normaleq G$ be a normal subgroup and $H\le K\le G$. Then $\kos[G]{K}\cong \Infl^{G/H}_G(\kos[G/H]{K/H})$.
In particular, $\kos[G]H\cong\Infl^{G/H}_G(\kos[G/H]{1})$.
\end{Lem}
\begin{proof}
The construction of~$\kos[G]{K}=\otimes_{G/K}(0\to\kk\xto{1} \kk\to 0)$ depends only on the $G$-set~$G/K$ which is inflated from the $G/H$-set $(G/H)/(K/H)$.
\end{proof}
In fact, $\sHG$ is not only exact. It is split-exact on~$H$. More generally:
\begin{Lem}
\label{Lem:Res(s(G;H))}%
For all subgroups $H,K\le G$ and every choice of representatives in $K\bs G/H$, we have a non-canonical isomorphism of complexes of~$\kk K$-modules
\[
\Res^G_K(\sHG)\simeq\ \bigotimes_{[g]\in K\bs G/H}\ \kos[K]{K\cap {}^{g\!}H}.
\]
In particular, if $K\le_G H$, we have $\Res^G_K(\sHG)=0$ in~$\cK(K)$.
\end{Lem}
\begin{proof}
By the Mackey formula for tensor-induction, we have in~$\Ch(\perm(K;\kk))$
\[
\Res^G_K(\sHG)\simeq\ \bigotimes_{[g]\in K\bs G/H}\ \tInd_{K\cap {}^{g\!}H}^K \big( {}^{g\!}\Res^H_{K\cap{}^{g\!}H}(0\to \kk\xto{1}\kk\to 0)\big).
\]
The result follows since $\Res(0\to \kk\xto{1}\kk\to 0)=(0\to \kk\xto{1}\kk\to 0)$.
If $K\le_GH$, the factor $\kos[K]{K}$ appears in the tensor product and $\kos[K]{K}=0$ in~$\cK(K)$.
\end{proof}
We record a general technical argument that we shall use a couple of times.
\begin{Lem}
\label{Lem:s-generates}%
Let $\cA$ be a rigid tensor category and $s=(\cdots s_2\to s_1\to s_0 \to 0\cdots)$ a complex concentrated in non-negative degrees.
Let $x\in\Ch(\cA)$ be a bounded complex such that $s_1\otimes x=0$ in~$\Kb(\cA)$. Then there exists $n\gg0$ such that $s_0\potimes{n}\otimes x$ belongs to the smallest thick subcategory $\ideal{s}'$ of~$\K(\cA)$ that contains~$s$ and is closed under tensoring with~$\Kb(\cA)\cup\{s\}$ in~$\K(\cA)$.
In particular, if $s\in\Kb(\cA)$ is itself bounded, then $s_0\potimes{n}\otimes x$ belongs to the tt-ideal $\ideal{s}$ generated by~$s$ in~$\Kb(\cA)$.
\end{Lem}
\begin{proof}
Let $u:=s_{\geq 1}[-1]$ be the truncation of~$s$ such that $s=\cone(d:u\to s_0)$.
Similarly we have $u=\cone(u_{\geq 1}[-1]\to s_1)$.
Note that $u_{\ge 1}$ is concentrated in positive degrees.
Since $x\otimes s_1=0$ we have $u\otimes x\cong u_{\ge 1}\otimes x$ in~$\K(\cA)$ and thus
\[
u\potimes{n}\otimes x\cong (u_{\geq 1})\potimes{n}\otimes x
\]
for all~$n\ge 0$. For $n$ large enough there are no non-zero maps of complexes from $(u_{\geq 1})\potimes{n}\otimes x$ to $s_0\potimes{n}\otimes x$, simply because the former `moves' further and further away to the left and $x$ is bounded. So $d\potimes{n}\otimes x\colon u\potimes{n}\otimes x\too s_0\potimes{n}\otimes x$ is zero in~$\K(\cA)$.
Let $\cL$ be the tt-subcategory of~$\K(\cA)$ generated by~$\Kb(\cA)\cup\{s\}$; then $\ideal{s}'$ is a tt-ideal in~$\cL$, and similarly we write $\ideal{\cone(d\potimes{n})}'$ for the tt-ideal in~$\cL$ generated by $\cone(d\potimes{n})$.
By the argument above, we have $s_0\potimes{n}\otimes x\in\ideal{\cone(d\potimes{n})}'\subseteq\ideal{s}'$.
\end{proof}
\begin{Cor}
\label{Cor:s-generates}%
Let $\cA$ be a rigid tensor category and $\cI\subseteq \Kb(\cA)$ a tt-ideal. Let $s\in\cI$ be a (bounded) complex concentrated in non-negative degrees such that
\begin{enumerate}[label=\rm(\arabic*), ref=\rm(\arabic*)]
\item
\label{it:s-generates-1}%
$\supp(s_0)\supseteq\supp(\cI)$ in~$\Spc(\Kb(\cA))$ (for instance if $s_0=\unit_{\cA}$), and
\smallbreak
\item
\label{it:s-generates-2}%
$\supp(s_1)\cap\supp(\cI)=\varnothing$, meaning that $s_1\otimes x=0$ in~$\Kb(\cA)$ for all~$x\in\cI$.
\end{enumerate}
Then $s$ generates $\cI$ as a tt-ideal in~$\Kb(\cA)$, that is, $\supp(\cI)=\supp(s)$ in~$\Spc(\Kb(\cA))$.
\end{Cor}
\begin{proof}
Let $x\in \cI$. By~\ref{it:s-generates-2}, \Cref{Lem:s-generates} gives us $s_0\potimes{n}\otimes x\in\ideal{s}$ for $n\gg0$. Hence $\supp(s_0)\cap \supp(x)\subseteq \supp(s)$. By~\ref{it:s-generates-1} we have $\supp(x)\subseteq\supp(s_0)$. Therefore $\supp(x)=\supp(s_0)\cap \supp(x)\subseteq\supp(s)$. In short $x\in\ideal{s}$ for all~$x\in\cI$.
\end{proof}
We apply this to the object $s=\sHG$ of \Cref{Cons:kos}.
\begin{Prop}
\label{Prop:Ker(Res)}%
For every subgroup~$H\le G$, the object $\sHG$ generates the tt-ideal $\Ker(\Res^G_H)$ of~$\cK(G)$.
\end{Prop}
\begin{proof}
We apply \Cref{Cor:s-generates} to $\cI=\Ker(\Res^G_H)$ and $s=\sHG$. We have $s\in\cI$ by \Cref{Lem:Res(s(G;H))}. Conditions~\ref{it:s-generates-1} and~\ref{it:s-generates-2} hold since $s_0=\kk$ and $s_1=\kk(G/H)$ by \Cref{Prop:s(H;G)} and Frobenius gives $s_1\otimes\cI=\kk(G/H)\otimes\cI=\Ind_H^G\Res^G_H(\cI)=0$.
\end{proof}
We can apply the above discussion to $H=1$ and $\cI=\Ker(\Res^G_1)=\cK_\ac(G)$.
\begin{Prop}
\label{Prop:VeeG}%
The tt-functor $\Upsilon_G \colon \cK(G)\onto\Db(\kkG)$ induces an \emph{open} inclusion $\upsilon_G\colon \VG\hook \SpcKG$ where $\VG=\Spc(\Db(\kkG))\cong\Spech(\rmH^\sbull(G,\kk))$. The closed complement of~$\VG$ is the support of~$\kos[G]{1}=\tInd_1^G(0\to \kk\xto{1}\kk\to 0)$.
\end{Prop}
\begin{proof}
The homeomorphism $\Spc(\Db(\kkG))\cong \Spech(\rmH^\sbull(G,\kk))$ follows from the tt-classification~\cite{benson-carlson-rickard:tt-class-stab(kG)}; see~\cite[Theorem~57]{balmer:icm}. By \Cref{Rec:K(G)/Kac}, the map $\upsilon_G:=\Spc(\Upsilon_G)$ is a homeomorphism onto its image, and the complement of this image is $\supp(\cK_\ac(G))=\supp(\kos[G]{1})$, by \Cref{Prop:Ker(Res)} applied to $H=1$. In particular, $\supp(\cK_\ac(G))$ is a closed subset, not just a Thomason subset.
\end{proof}
\begin{Rem}
The notation for the so-called \emph{cohomological open}~$\VG$ has been chosen to evoke the classical \emph{projective support variety}~$\mathcal{V}_G(\kk)=\Proj(\rmH^\sbull(G,\kk))\cong\Spc(\stmod(\kkG))$, which consists of~$\VG$ without its unique closed point, $\rmH^+(G;\kk)$.
\end{Rem}
We can also describe the kernel of restriction for classes of subgroups.
\begin{Cor}
\label{Cor:Ker(Res)}%
For every collection $\cH$ of subgroups of~$G$, we have an equality of tt-ideals in~$\cK(G)$
\[
\bigcap_{H\in\cH}\Ker(\Res^G_H)=\big\langle\ \bigotimes_{H\in\cH}\ \sHG\ \big\rangle.
\]
\end{Cor}
\begin{proof}
This is direct from \Cref{Prop:Ker(Res)} and the general fact that $\ideal{x}\cap \ideal{y}=\ideal{x\otimes y}$. (In the case of $\cH=\varnothing$, the intersection is~$\cK(G)$ and the tensor is~$\unit$.)
\end{proof}
\section{Restriction, induction and geometric fixed-points}
\label{sec:Spc(Res)}
In the previous section, we saw how much of $\SpcKG$ comes from~$\Db(\kkG)$. We now want to discuss how much is controlled by restriction to subgroups, to see how far the `standard' strategy of~\cite{balmer-sanders:SH(G)-finite} gets us.
\begin{Rem}
\label{Rem:not}%
The tt-categories~$\cK(G)$ and $\Db(\kk G)$, as well as the Weyl groups~$\WGH$ are functorial in~$G$. To keep track of this, we adopt the following notational system.
Let $\alpha\colon G\to G'$ be a group homomorphism.
We write $\alpha^*\colon \cK(G')\to \cK(G)$ for restriction along~$\alpha$, and similarly for~$\alpha^*\colon \Db(\kk G')\to \Db(\kk G)$.
When applying the contravariant $\Spc(-)$, we simply denote $\Spc(\alpha^*)$ by~$\alpha_*\colon\SpcKG\to \Spc(\cK(G'))$ and similarly for $\alpha_*\colon \Vee{G}\to \Vee{G'}$ on the spectrum of derived categories.
As announced, Weyl groups $\WGH=(N_G H)/H$ of subgroups~$H\le G$ will play a role.
Since $\alpha(N_G H)\le N_{G'}(\alpha(H))$, every homomorphism $\alpha\colon G\to G'$ induces a homomorphism $\bar \alpha\colon \WGH\to \Weyl{G'}{\alpha(H)}$.
Combining with the above, these homomorphisms $\bar{\alpha}$ define functors~$\bar{\alpha}^*$ and maps~$\bar{\alpha}_*$.
For instance, $\bar\alpha_*\colon \Vee{\WGH}\to \Vee{\Weyl{G'}{\alpha(H)}}$ is the continuous map induced on $\Spc(\Db(\kk(-)))$ by $\bar{\alpha}\colon \WGH\to \Weyl{G'}{\alpha(H)}$.
Following tradition, we have special names when $\alpha$ is an inclusion, a quotient or a conjugation. For the latter, we choose the lightest notation possible.
\smallbreak
\begin{enumerate}[wide, labelindent=0pt, label=\rm(\alph*), ref=\rm(\alph*)]
\item
For \emph{conjugation}, for a subgroup $G\le G'$ and an element ${x}\in G'$, the isomorphism $c_{x}\colon G\isoto G^{x}$ induces a tt-functor $c_{x}^*\colon \cK(G^{x})\isoto \cK(G)$ and a homeomorphism %
\[
\xymatrix@R=.1em{
(-)^{x}:=(c_{x})_*=\Spc(c_{x}^*)\colon
& \Spc(\cK(G)) \ar[r]^-{\sim}
& \Spc(\cK(G^{x}))
\\
& \cP \ar@{|->}[r]
& \cP^{x}.
}
\]
Note that if $x=g\in G$ belongs to~$G$ itself, the functor $c_g^*\colon \cK(G)\to \cK(G)$ is isomorphic to the identity and therefore we get the useful fact that
\begin{equation}\label{eq:no-conj}%
\qquad g\in G \qquad\Longrightarrow\qquad \cP^g=\cP\qquad\textrm{for all~}\cP\in \Spc(\cK(G)).
\end{equation}
Similarly we have a conjugation homeomorphism $\gp\mapsto \gp^x$ on the cohomological opens $\Vee{G}\isoto \Vee{G^x}$, which is the identity if~$x\in G$.
When $H\le G$ is a further subgroup then conjugation yields homeomorphisms~$\Vee{\Weyl{G}{H}}\isoto \Vee{\Weyl{G^x}{H^x}}$ still denoted $\cP\mapsto \cP^x$.
Again, if $x=g\in N_{G}H$, so $[g]_H$ defines an element in~$\Weyl{G}{H}$, the equivalence $(c_g)_*\colon \Db(\Weyl{G}{H})\isoto \Db(\Weyl{G}{H})$ is isomorphic to the identity. Thus
\begin{equation}\label{eq:no-conj-2}%
\qquad g\in N_G(H) \qquad\Longrightarrow\qquad \gp^g=\gp\qquad\textrm{for all~}\gp\in\Vee{\Weyl{G}{H}}.
\end{equation}
\smallbreak
\item
\label{it:rho}%
For \emph{restriction}, take $\alpha$ the inclusion $K\hook G$ of a subgroup. We write
\begin{equation}
\label{eq:rho}%
\rho_K=\rho^G_K:=\Spc(\Res^G_K)\,\colon\ \Spc(\cK(K))\to\SpcKG
\end{equation}
and similarly for derived categories.
When $H\le K$ is a subgroup, we write $\bar{\rho}_K\colon \Vee{\Weyl{K}{H}}\to \Vee{\WGH}$ for the map induced by restriction along~$\Weyl{K}{H}\hook \WGH$.
Beware that $\rho_K$ is not necessarily injective, already on~$\Vee{K}\to \Vee{G}$, as `fusion' phenomena can happen: If $g\in G$ normalizes~$K$, then $\cQ$ and~$\cQ^g$ in~$\Vee{K}$ have the same image in~$\VG$ by~\eqref{eq:no-conj} but are in general different in~$\Vee{K}$.
\smallbreak
\item
For \emph{inflation}, let $N\normaleq G$ be a normal subgroup and let~$\alpha=\proj\colon G\onto G/N$ be the quotient homomorphism.
We write
\begin{equation}
\label{eq:pi}%
\pi^{G/N}=\pi^{G/N}_G:=\Spc(\Infl^{G/N}_G)\,\colon\ \SpcKG\to\Spc(\cK(G/N))
\end{equation}
and similarly for derived categories. For $H\le G$ a subgroup, we write $\bar{\pi}^{G/N}_G\colon \Vee{\Weyl{G}{H}}\to \Vee{\Weyl{(G/N)}{(HN/N)}}$ for the map induced by~$\overline{\proj}\colon \Weyl{G}{H}\to\Weyl{(G/N)}{(HN/N)}$.
(Note that this homomorphism is not always surjective, \eg\ with $G=D_8$ and $N\simeq C_2^{\times 2}$.)
\end{enumerate}
\end{Rem}
\begin{Rec}
\label{Rec:tt-ring}%
One verifies that the $\Res^G_H\adj \Ind_H^G$ adjunction is monadic, see for instance~\cite[\S\,4]{balmer:tt-separable}, and that the associated monad~$A_H\otimes-$ is separable, where $A_H:=\kk(G/H)=\Ind^G_H\kk\in\perm(G;\kk)$. The ring structure on~$A_H$ is given by the usual unit~$\eta\colon \kk\to \kk(G/H)$, mapping $1$ to~$\sum_{\gamma\in G/H}\,\gamma$, and the multiplication~$\mu\colon A_H\otimes A_H\to A_H$ that is characterized by $\mu(\gamma\otimes\gamma)=\gamma$ and $\mu(\gamma\otimes\gamma')=0$ for all~$\gamma\neq\gamma'$ in~$G/H$.
The ring $A_H$ is separable and commutative. The tt-category $\MMod{A_H}=\Mod_{\cK(G)}(A_H)$ of~$A_H$-modules in~$\cK(G)$ identifies with~$\cK(H)$, in such a way that extension-of-scalars to~$A_H$ (\ie along $\eta$) coincides with restriction~$\Res^G_H$.
Similarly, extension-of-scalars along the isomorphism $c_{g\inv}\colon A_{H^g}\isoto A_{H}$, being an equivalence, is the inverse of its adjoint, that is~$((c_{g\inv})^*)\inv=c_g^*$, hence is the conjugation tt-functor~$c_g^*\colon \cK(H^g)\isoto \cK(H)$ of \Cref{Rem:not}.
\end{Rec}
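To illustrate \Cref{Rec:tt-ring} in the smallest case $G=C_2$, $H=1$ and $\kk=\mathbb{F}_2$: the ring $A_1=\kk(C_2)$ with the multiplication above is just $\kk\times\kk$ with componentwise product, and one checks that $\sum_{\gamma\in G}\gamma\otimes\gamma$ is a separability idempotent. The brute-force sketch below, included purely as an illustration, encodes elements of $A_1\otimes A_1$ as $2\times 2$ arrays of coefficients and verifies the separability conditions $\mu(e)=1_A$ and $(a\otimes 1)\cdot e=(1\otimes a)\cdot e$ for every $a$.
\begin{verbatim}
from itertools import product

p, n = 2, 2    # k = F_2 and A = A_1 = k(G) for G = C_2, with basis {g0, g1}

# A is k^G with componentwise product; the unit is eta(1) = g0 + g1 = (1, 1).
unit = (1, 1)

# Elements of A (x) A are n-by-n arrays c, where c[i][j] is the
# coefficient of g_i (x) g_j;  e = sum over gamma of gamma (x) gamma.
e = tuple(tuple(1 if i == j else 0 for j in range(n)) for i in range(n))

def mu(c):             # multiplication A (x) A -> A:  mu(g_i (x) g_j) = delta_ij g_i
    return tuple(c[i][i] % p for i in range(n))

def act_left(a, c):    # (a (x) 1) . c
    return tuple(tuple((a[i] * c[i][j]) % p for j in range(n)) for i in range(n))

def act_right(a, c):   # (1 (x) a) . c
    return tuple(tuple((c[i][j] * a[j]) % p for j in range(n)) for i in range(n))

assert mu(e) == unit                                  # mu(e) = 1_A
for a in product(range(p), repeat=n):                 # every a in A
    assert act_left(a, e) == act_right(a, e)          # (a (x) 1).e = (1 (x) a).e
print("A_1 = k(C_2) is separable over F_2")
\end{verbatim}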
\begin{Prop}
\label{Prop:Spc-Res}%
The continuous map~$\rho_H\colon \Spc(\cK(H))\to \SpcKG$ of~\eqref{eq:rho} is a closed map and for every $y\in \cK(H)$, we have $\rho_H(\supp(y))=\supp(\Ind_H^G(y))$ in~$\SpcKG$. In particular, $\Img(\rho_H)=\supp(\kk(G/H))$. Moreover, there is a coequalizer of topological spaces (independent of the choices of representatives~$g$)
\[
\coprod_{[g]\in \doublequot{H}{G}{H}}\Spc(\cK(H\cap {}^{g\!}H)) \ \rightrightarrows\ \Spc(\cK(H))\ \xto{\rho_H} \ \supp(k(G/H))
\]
where the two left horizontal maps are, on the $[g]$-component, induced by the restriction functor and by conjugation by~$g$ followed by restriction, respectively.
\end{Prop}
\begin{proof}
We invoke~\cite[Theorem~3.19]{balmer:tt-separable}. In particular, we have a coequalizer
\begin{equation}
\label{eq:aux-equaliz}%
\Spc(\MMod{A_H\otimes A_H})\rightrightarrows\Spc(\MMod{A_H}) \to \supp(A_H)
\end{equation}
where the two left horizontal maps are induced by the canonical ring morphisms $A_H\otimes \eta$ and $\eta\otimes A_H\colon A_H\to A_H\otimes A_H$.
For any choice of representatives $[g]\in \HGH$ the Mackey isomorphism
\[
\bigoplus_{[g]\in \doublequot{H}{G}{H}}A_{H\cap {}^{g\!}H}\isoto A_H\otimes A_H
\]
maps~$[x]_{H\cap {}^{g\!}H}$ to $[x]_H\otimes[x\cdot g]_H$. We can then plug this identification in~\eqref{eq:aux-equaliz}. The second homomorphism $\eta\otimes A_H$ followed by the projection onto the factor indexed by~$[g]$ becomes the composite $A_H\xto{c_{g\inv}} A_{{}^{g\!}H} \xinto{\eta} A_{H\cap {}^{g\!}H}$.
See \Cref{Rec:tt-ring}.
\end{proof}
\begin{Cor}
\label{Cor:Spc-Res}
For $\cP,\cP'\in\Spc(\cK(H))$ we have $\rho_H(\cP)=\rho_H(\cP')$ in $\SpcKG$ if and only if there exists $g\in G$ and $\cQ\in\Spc(\cK(H\cap {}^{g\!}H))$ such that
\[
\cP=\rho^{H}_{H\cap {}^{g\!}H}(\cQ)
\qquadtext{and}
\cP'=\big(\rho^{{}^{g\!}H}_{H\cap {}^{g\!}H}(\cQ)\big)^g
\]
using \Cref{Rem:not} for the notation $(-)^g\colon \Spc(\cK({}^{g\!}H))\isoto \Spc(\cK(H))$.
\end{Cor}
\begin{proof}
This is~\cite[Corollary~3.12]{balmer:tt-separable}, which implies the set-theoretic part of the coequalizer of \Cref{Prop:Spc-Res}.
\end{proof}
We single out a particular case.
\begin{Cor}
\label{Cor:Spc-Res-central}
If $H\le Z(G)$ is central in $G$ (for example, if $G$ is abelian) then restriction induces a closed immersion $\rho_H\colon\Spc(\cK(H))\hook\Spc(\cK(G))$.
\qed
\end{Cor}
\begin{Rem}
\label{Rem:geom-fixed-pts}%
In view of \Cref{Prop:Spc-Res}, the image of the map induced by restriction $\Img(\rho_H)=\supp(\kk(G/H))$ coincides with the support of the tt-ideal generated by~$\Ind_H^G(\cK(H))$.
Following the construction of the \emph{geometric} fixed-points functor~$\Phi^G\colon \SHc(G)\to \SHc$ in topology, we can consider the Verdier quotient
\[
\tilde{\cK}(G):=\frac{\cK(G)}{\ideal{\Ind_H^G(\cK(H))\mid H\lneqq G}}
\]
obtained by modding-out, in tensor-triangular fashion, everything induced from all proper subgroups~$H$.
This tt-category $\tilde{\cK}(G)$ has a smaller spectrum than~$\cK(G)$, namely the open complement in~$\SpcKG$ of the closed subset $\cup_{H\lneqq G}\Img(\rho_H)$ covered by proper subgroups.
This method has worked nicely in~\cite{balmer-sanders:SH(G)-finite,barthel-greenlees-hausmann:SH(G)-compact-Lie,psw:derived-mackey} because, in those instances, this Verdier quotient is equivalent to the non-equivariant version of the tt-category under consideration.
However, this fails for~$\tilde{\cK}(G)$, for instance $\tilde\cK(C_2)$ is \emph{not} equivalent to $\cK(1)=\Db(\kk)$:
\[
\frac{\SHc(G)}{\ideal{\Ind_H^G(\SHc(H))\mid H\lneqq G}} \cong\SHc
\qquadtext{but}
\frac{\cK(G)}{\ideal{\Ind_H^G(\cK(H))\mid H\lneqq G}}\not\cong \cK(1).
\]
For small groups, for instance for cyclic $p$-groups~$C_{p^n}$, the tt-category $\tilde{\cK}(G)$ is reasonably complicated and one could still compute $\SpcKG$ through an analysis of~$\tilde{\cK}(G)$.
However, the higher the $p$-rank, the harder it becomes to control~$\tilde{\cK}(G)$.
One can already see the germ of the problem with~$G=C_2$, see~\eqref{eq:C_2}:
\[
\Spc(\cK(C_2))=\qquad\vcenter{\xymatrix@R=1em@C=.5em{
{\color{Brown}\cM(C_2)}
\ar@{-}@[Gray][rd]
&&
{\color{OliveGreen}\cM(1)}
\\
& {\color{OliveGreen}\cP}
\ar@{-}@[Gray][ru]
}}
\]
We have given names to the three primes.
The only proper subgroup is~$H=1$ and the image of~$\rho_1=\Spc(\Res_1)$ is simply the single closed point~$\{\cM(1)\}=\supp(\kk C_2)$.
Chopping off this induced part, leaves us with the open $\Spc(\tilde\cK(C_2))=\{\cM(C_2),\cP\}$.
So geometric fixed points $\Phi^{C_2}\colon\cK(C_2)\to \tilde\cK(C_2)$ detects both of these points.
(This also proves that $\tilde\cK(G)\neq\cK(1)=\Db(\kk)$ since $\Db(\kk)$ would have only one point in its spectrum.)
However there is no need for a tt-functor detecting $\cM(C_2)$ \emph{and~$\cP$ again}, since $\cP$ is already in the cohomological open~$\Vee{C_2}$ detected by ~$\Db(\kk C_2)$.
In other words, geometric fixed points see \emph{too much}, not too little: The target category~$\tilde{\cK}(G)$ is too complicated in general.
And as the group grows, the issue only gets worse, as the reader can check with Klein-four in~\Cref{Exa:Klein4}.
In conclusion, we need tt-functors better tailored to the task, namely tt-functors that detect just what is missing from~$\VG$.
In the case of~$C_2$, we expect a tt-functor to~$\Db(\kk)$, to catch~$\cM(C_2)$, but for larger groups the story gets more complicated and involves more complex subquotients of~$G$, as we explain in the next section.
\end{Rem}
\section{Modular fixed-points functors}
\label{sec:modular-fixed-pts}%
Motivated by \Cref{Rem:geom-fixed-pts}, we want to find a replacement for geometric fixed points in the setting of modular representation theory.
In a nutshell, our construction amounts to taking classical Brauer quotients~\cite[\S\,1]{broue:permutation-brauer} on the level of permutation modules and then passing to the tt-categories~$\cK(G)$ and~$\cT(G)$.
We follow a somewhat different route than~\cite{broue:permutation-brauer} though, more in line with the construction of the geometric fixed-points discussed in \Cref{Rem:geom-fixed-pts}.
We hope some readers will benefit from our exposition.
It is here important that $\chara(\kk)=p$ is positive.
\begin{War}
\label{Rem:only-p-subgroups}%
A tt-functor $\Psi^H\colon \cK(G)\to \cK(\WGH)$ such that $\Psi^H(\kk(X))\cong\kk(X^H)$, as in~\eqref{eq:Psi^H}, cannot exist unless $H$ is a $p$-subgroup.
Indeed, if $P\le G$ is a $p$-Sylow then since $[G\!:\!P]$ is invertible in~$\kk$, the unit $\unit=\kk$ is a direct summand of~$\kk(G/P)$ in~$\cK(G)$.
A tt-functor~$\Psi^H$ cannot map~$\unit$ to zero.
Thus $\Psi^H(\kk(G/P))=\kk((G/P)^H)$ must be non-zero, forcing $(G/P)^H\neq\varnothing$.
If $[g]\in G/P$ is fixed by~$H$ then $H^g\le P$ and therefore $H$ must be a $p$-subgroup.
(If $\chara(\kk)=0$ this would force $H=1$.)
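For instance, for $p=2$ and $G=S_3$ there can be no such~$\Psi^{H}$ for $H=C_3$: the unit is a direct summand of~$\kk(S_3/C_2)$ and yet $(S_3/C_2)^{C_3}=\varnothing$.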
\end{War}
\begin{Rec}
\label{Rec:family}%
A collection~$\cF$ of subgroups of $G$ is called a \emph{family} if it is closed under conjugation and subgroups.
For instance, given $H\le G$, we have the family
\[
\cF_{H}=\SET{K\le G}{(G/K)^H=\varnothing}=\SET{K\le G}{H\not\le_G K}.
\]
For $N\normaleq G$ a normal subgroup, it is~$\cF_N=\SET{K\le G}{N\not\le K}$.
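For instance, if $G=C_2\times C_2$ and $H_1,H_2,H_3$ denote its three subgroups of order two, then $\cF_{H_1}=\{1,H_2,H_3\}$, while $\cF_G$ consists of all proper subgroups and $\cF_1=\varnothing$.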
\end{Rec}
In view of \Cref{Rem:only-p-subgroups}, we must focus attention on $p$-subgroups.
The following standard lemma would not be true without the characteristic $p$ hypothesis.
\begin{Lem}
\label{Lem:infl-stab}%
Let $N\normaleq G$ be a normal $p$-subgroup.
Let $H,K\le G$ be subgroups such that~$N\le H$ and~$N\not\le K$.
Then every $\kkG$-linear homomorphism $f\colon\kk(G/H)\to\kk$ that factors as a composite $\kk(G/H)\xto{\ell}\kk(G/K)\xto{m}\kk$ must be zero.
\end{Lem}
\begin{proof}
By \Cref{Rec:Hom} and $\kk$-linearity, we can assume that $m$ is the augmentation and that $\ell=\eps\circ c_g\circ \eta$ as in~\eqref{eq:Hom-f_g}, where $g\in G$ is some element, where we set $L=H\cap {}^{g\!}K$ and where $\eps\colon \kk(G/L^g)\onto \kk(G/K)$, $c_g\colon \kk(G/L)\isoto \kk(G/L^g)$ and $\eta\colon \kk(G/H)\into \kk(G/L)$ are the usual maps, using $L\le H$ and $L^g\le K$. The composite $m\circ \eps\circ c_g$ is an augmentation map again, hence our map $f$ is the composite
\[
f\,\colon \quad\kk(G/H) \xinto{\eta} \kk(G/L) \xonto{\eps} \kk.
\]
So $f$ maps~$[e]_H$ to $\sum_{\gamma\in H/L} 1=|H/L|$ in~$\kk$.
Now, the $p$-group $N\le H$ acts on the set $H/L$ by multiplication on the left. This action has no fixed point, for otherwise we would have $N\le_H L\le_G K$ and thus $N\le K$, a contradiction. Therefore the $N$-set $H/L$ has order divisible by~$p$. So $|H/L|=0$ in~$\kk$ and~$f=0$ as claimed.
\end{proof}
\begin{Prop}
\label{Prop:infl-pstab}%
Let $N\normaleq G$ be a normal $p$-subgroup.
Then the permutation category of the quotient~$G/N$ is an additive quotient of the permutation category of~$G$.
More precisely, consider $\proj(\cF_N)=\add^\natural\SET{\kk(G/K)}{K\in\cF_N}$, the closure of~$\SET{\kk(G/K)}{N\not\le K}$ under direct sum and summands in~$\perm(G;\kk)^\natural$.
Consider the additive quotient of~$\perm(G;\kk)^\natural$ by~this $\otimes$-ideal.\,{\rm\,(\footnote{\,Keep the same objects as $\perm(G;\kk)^\natural$ and define Hom groups by modding out all maps that factor via objects of~$\proj(\cF_N)$, as in the ordinary construction of the stable module category.})}
Then the composite
\begin{equation}
\label{eq:modular-fixed-equiv}%
\vcenter{\xymatrix@C=4em{
\perm(G/N;\kk)^\natural \ar@{ >->}[r]^-{\ \Infl^{G/N}_G}
& \perm(G;\kk)^\natural \ar@{->>}[r]^-{\quot}
& \frac{\perm(G;\kk)^\natural}{\proj(\cF_N)}
}}
\end{equation}
is an equivalence of tensor categories.
\end{Prop}
\begin{proof}
By the Mackey formula and since~$\cF_N$ is a family, $\proj(\cF_N)$ is a tensor ideal, hence $\quot$ is a tensor-functor.
Inflation $\Infl^{G/N}_G\colon \perm(G/N;\kk)^\natural\to \perm(G;\kk)^\natural$ is also a tensor-functor. It is moreover fully faithful with essential image the subcategory $\add^\natural\SET{\kk(G/H)}{N\le H}$. So we need to show that the composite
\[
\add^\natural\SET{\kk(G/H)}{N\le H}
\hook
\perm(G;\kk)^\natural \onto
\frac{\perm(G;\kk)^\natural}{\add^\natural\SET{\kk(G/K)}{N\not\le K}}
\]
is an equivalence.
Both functors in the composite are full.
The composite is faithful by \Cref{Lem:infl-stab}, rigidity, additivity and the Mackey formula.
Essential surjectivity is then easy (idempotent-completion is harmless since the functor is fully-faithful).
\end{proof}
\begin{Cons}
\label{Cons:fixed-pts}%
Let $N\normaleq G$ be a normal $p$-subgroup. The composite of the additive quotient functor with the inverse of the equivalence of \Cref{Prop:infl-pstab} yields a tensor-functor on the categories of $p$-permutation modules
\begin{equation}
\label{eq:Psi^N-perm}%
\Psi^N\colon\perm(G;\kk)^\natural\onto \frac{\perm(G;\kk)^\natural}{\proj(\cF_N)}\isoto\perm(G/N;\kk)^\natural.
\end{equation}
Applying the above degreewise, we get a tt-functor on homotopy categories~$\Kb(-)$
\[
\Psi^N=\Psi^{N\inn G}\colon\cK(G)\too \cK(G/N).
\]
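Concretely, on permutation modules we have $\Psi^N(\kk(G/K))=0$ if $N\not\le K$ and $\Psi^N(\kk(G/K))\cong\kk\big((G/N)/(K/N)\big)$ if $N\le K$; see \Cref{Prop:fixed-pts} below for the general statement.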
\end{Cons}
\begin{Rem}
\label{Rem:Brauer}%
Following up on \Cref{Rem:geom-fixed-pts}, we have constructed~$\Psi^N$ by modding-out, in \emph{additive} fashion this time, everything induced from subgroups not containing~$N$. We did it on the `core' additive category and only then passed to homotopy categories. Such a construction would not make sense on bounded derived categories, as $\Psi^N$ has no reason to preserve acyclic complexes.
The classical Brauer quotient seems different at first sight. It is typically defined at the level of individual $\kkG$-modules~$M$ by a formula like
\begin{equation}
\label{eq:dirty-brauer}%
\coker\big(\oplus_{Q\lneq N} M^Q \xto{\,(\mathrm{Tr}_{Q}^N)_Q\,} M^N\big).
\end{equation}
A priori, this definition uses the ambient abelian category of modules and one then needs to verify that it preserves permutation modules, the tensor structure, etc.
Our approach is a categorification of~\eqref{eq:dirty-brauer}: \Cref{Prop:infl-pstab} recovers the category~$\perm(G/N;\kk)^\natural$ as a tensor-additive quotient of~$\perm(G;\kk)^\natural$, at the categorical level, not at the individual module level. Amusingly, one can verify that it yields the same answer (\Cref{Prop:fixed-pts}) -- a fact that we shall not use at all.
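For instance, for $N=G=C_p$, formula~\eqref{eq:dirty-brauer} reads $\coker\big(M\xto{\mathrm{Tr}_1^{C_p}}M^{C_p}\big)$; it gives zero on $M=\kk C_p$ (the trace hits the sum of the group elements, which spans~$M^{C_p}$) and gives~$\kk$ on the trivial module $M=\kk$ (the trace is multiplication by~$p=0$), matching $\kk((C_p/1)^{C_p})=0$ and $\kk((C_p/C_p)^{C_p})=\kk$ as in~\eqref{eq:fixed-pts}.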
\end{Rem}
We relax the condition that the $p$-subgroup is normal in the standard way.
\begin{Def}
\label{Def:fixed-pts}%
Let $H\le G$ be an arbitrary $p$-subgroup. We define the \emph{modular (or Brauer) $H$-fixed-points functor} by the composite
\[
\Psi^{H\inn G}\,\colon\quad \cK(G)\xto{\Res^G_{N_G H}}\cK(N_G H)\xto{\Psi^{H\inn N_G H}}\cK(\WGH)
\]
where $N_G H$ is the normalizer of~$H$ in~$G$ and $\WGH=(N_G H)/H$ its Weyl group. The second functor comes from \Cref{Cons:fixed-pts}.
Note that $\Psi^{H\inn G}$ is computed degreewise, applying the functors $\Res^G_{N_G H}$ and $\Psi^{H\inn N_G H}$ at the level of~$\perm(-;\kk)^\natural$.
\end{Def}
\begin{Rem}
We prefer the phrase `modular fixed-points' to `Brauer fixed-points', out of respect for L.\,E.\,J.\ Brouwer and his fixed points.
It also fits nicely in the flow: naive fixed-points, geometric fixed-points, modular fixed-points.
Finally, the phrase `Brauer quotient' would be unfortunate, as $\Psi^H\colon\cK(G)\to \cK(\WGH)$ is \emph{not} a quotient of categories in any reasonable sense.
\end{Rem}
Let us verify that our~$\Psi^H$ linearize the $H$-fixed-points of~$G$-sets, as promised.
\begin{Prop}
\label{Prop:fixed-pts}%
Let $H\le G$ be a $p$-subgroup. The following square commutes up to isomorphism:
\[
\xymatrix@C=4em{
\gasets[G] \ar[r]^-{\kk(-)} \ar[d]_-{(-)^H}
& \perm(G;\kk)^\natural \ \ar@{^(->}[r] \ar[d]^-{\Psi^H}
& \cK(G) \ar[d]^-{\Psi^H}
\\
\gasets[(\WGH)] \ar[r]^-{\kk(-)}
& \perm(\WGH;\kk)^\natural\ \ar@{^(->}[r]
& \cK(\WGH).
}
\]
In particular, for every~$K\le G$, we have an isomorphism of~$\kk(\WGH)$-modules
\begin{equation}
\label{eq:fixed-pts}%
\Psi^H(\kk(G/K))\cong \kk((G/K)^H)=\kk(N_G(H,K)/K).
\end{equation}
This module is non-zero if and only if $H$ is subconjugate to~$K$ in~$G$.
\end{Prop}
\begin{proof}
We only need to prove the commutativity of the left-hand square.
As restriction to a subgroup commutes with linearization, we can assume that $H\normaleq G$ is normal. Let $X$ be a $G$-set. Consider its $G$-subset~$X^H$ (which is truly inflated from~$G/H$).
Inclusion yields a morphism in~$\perm(G;\kk)$, natural in~$X$,
\begin{equation}\label{eq:fixed-example-aux}%
f_X\colon\kk(X^H)\to \kk(X).
\end{equation}
We claim that this morphism becomes an isomorphism in the quotient $\frac{\perm(G;\kk)^\natural}{\proj(\cF_H)}$. By additivity, we can assume that $X=G/K$ for $K\le G$. It is a well-known exercise that $(G/K)^H=N_G(H,K)/K$, which in the normal case $H\normaleq G$ boils down to $G/K$ or $\varnothing$, depending on whether $H\le K$ or not, \ie whether $K\notin\cF_H$ or $K\in\cF_H$. In both cases, $f_X$ becomes an isomorphism (an equality or $0\isoto \kk(G/K)$, respectively) in the quotient by~$\proj(\cF_H)$. Hence the claim.
Let us now discuss the commutativity of the following diagram
\[
\xymatrix@C=3em{
\gasets[G] \ar[r]^-{\kk(-)} \ar[dd]_-{(-)^H}
& \perm(G;\kk)^\natural \ar@{->>}[rd]^-{\quot} \ar@{=}[rr]
&& \perm(G;\kk)^\natural \ar[dd]^-{\Psi^H}_-{\textrm{(Def.\,\ref{Def:fixed-pts})}}
\\
& \perm(G;\kk)^\natural \ar@{->>}[r]^-{\quot}
& \frac{\perm(G;\kk)^\natural}{\proj(\cF_H)}
\\
\gasets[G/H] \ar[r]^-{\kk(-)}
& \perm(G/H;\kk)^\natural \ar[u]^-{\Infl^{G/H}_G} \ar[ru]_-{\cong}^-{\textrm{(Prop.\,\ref{Prop:infl-pstab})\ \ }} \ar@{=}[rr]
&& \perm(G/H;\kk)^\natural }
\]
The module~$\kk(X^H)$ in~\eqref{eq:fixed-example-aux} can be written more precisely as $\kk(\Infl^{G/H}_G(X^H))\cong \Infl^{G/H}_G\kk(X^H)$. So the first part of the proof shows that the left-hand `hexagon' of the diagram commutes, \ie the two functors $\gasets[G]\to \frac{\perm(G;\kk)^\natural}{\proj(\cF_H)}$ are isomorphic. The result follows by definition of~$\Psi^H$, recalled on the right-hand side.
\end{proof}
Here is how modular fixed points act on restriction.
\begin{Prop}
\label{Prop:Psi-Res}%
Let $\alpha:G\to G'$ be a homomorphism and $H\le G$ a $p$-subgroup. Set $H'=\alpha(H)\le G'$. Then the following square commutes up to isomorphism
\[
\xymatrix@R=2em{\cK(G') \ar[r]^-{\alpha^*} \ar[d]_-{\Psi^{H',G'}}
& \cK(G) \ar[d]^-{\Psi^{H\inn G}}
\\
\cK(\Weyl{G'}{H'}) \ar[r]^-{\bar \alpha^*}
& \cK(\WGH).
}
\]
\end{Prop}
\begin{proof}
Exercise. This already holds at the level of $\perm(-;\kk)^\natural$.
\end{proof}
\begin{Cor}
\label{Cor:Psi-Infl}%
Let $N\normaleq G$ be a normal $p$-subgroup. Then the composite functor $\Psi^N\circ \Infl^{G/N}_G\colon \cK(G/N)\to \cK(G)\to \cK(G/N)$ is isomorphic to the identity.
Consequently, the map $\Spc(\Psi^N)$ is a split injection retracted by~$\Spc(\Infl^{G/N}_G)$.
\end{Cor}
\begin{proof}
Apply \Cref{Prop:Psi-Res} to $\alpha\colon G\onto G/N$ and $H=N$, and thus $H'=1$.
The second statement is just contravariance of~$\Spc(-)$.
\end{proof}
Composition of two `nested' modular fixed-points functors almost gives another modular fixed-points functor. We only need to beware of Weyl groups.
\begin{Prop}
\label{Prop:Psi-Psi}%
Let $H\le G$ be a $p$-subgroup and $\bar{K}=K/H$ a $p$-subgroup of~$\WGH$, for $H\le K\le N_G H$. Then there is a canonical inclusion
\[
\Weyl{(\WGH)}{\bar{K}}=(N_{\WGH}\bar{K})/\bar{K} \hook (N_G K)/K=\WGK
\]
and the following square commutes up to isomorphism
\[
\xymatrix@R=2em{
\cK(G) \ar[r]^-{\Psi^{H\inn G}} \ar[d]_-{\Psi^{K\inn G}}
& \cK(\WGH) \ar[d]^-{\Psi^{\bar{K}\inn\WGH}}
\\
\cK(\WGK) \ar[r]^-{\Res}
& \cK\big(\Weyl{(\WGH)}{\bar{K}}\big).
}
\]
\end{Prop}
\begin{proof}
The inclusion comes from $N_{N_{G}(H)}K\hook N_{G}K$ and the rest is an exercise. Again, everything already holds at the level of~$\perm(-;\kk)^\natural$.
\end{proof}
\begin{Cor}
\label{Cor:Psi-Psi}%
Let $H\le K\le G$ be two $p$-subgroups with $H\normaleq G$ normal. Then $\Weyl{(G/H)}{(K/H)}\cong \Weyl{G}{K}$ and the following diagram commutes up to isomorphism
\[
\xymatrix@R=2em{
\cK(G) \ar[r]^-{\Psi^{H\inn G}} \ar[rd]_-{\Psi^{K\inn G}}
& \cK(G/H) \ar[d]^-{\Psi^{K/H\inn\, G/H}}
\\
& \cK\big(\Weyl{G}{K}\big).
}
\]
\end{Cor}
\begin{proof}
The surjectivity of the canonical inclusion $\Weyl{G/H}{(K/H)}\hook \Weyl{G}{K}$ of \Cref{Prop:Psi-Psi} holds since $H$ is normal in~$G$. The result follows.
\end{proof}
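For instance, when $G$ is abelian every subgroup is normal and we obtain $\Psi^{K/H\inn G/H}\circ\Psi^{H\inn G}\cong\Psi^{K\inn G}$ for all $p$-subgroups $H\le K$ of~$G$, with $\Weyl{G}{K}=G/K$.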
\begin{Rem}
\label{Rem:Psi-big}%
We have essentially finished the proof of \Cref{Prop:modular-fixed-points-intro}.
It only remains to verify that there are variants of the constructions and results of this section for the big categories of \Cref{Rec:perm}.
For a normal $p$-subgroup $N\normaleq G$, the canonical functor on big additive categories
\begin{equation}\label{eq:Psi-big}%
\Add^\natural(\SET{\kk(G/H)}{N\le H})\to\frac{\Perm(G;\kk)^\natural}{\Proj(\cF_N)}
\end{equation}
is an equivalence of tensor categories, where
\[
\Proj(\cF_N)=\Add^\natural\SET{\kk(G/K)}{N\not\le K}
\]
is the closure of $\proj(\cF_N)$ under coproducts and summands.
Since the tensor product commutes with coproducts, $\Proj(\cF_N)$ is again a $\otimes$-ideal in $\Perm(G;\kk)^\natural$.
Fullness and essential surjectivity of~\eqref{eq:Psi-big} are easy, and faithfulness reduces to the finite case by compact generation.
(A map $f:P\to Q$ in $\Perm(G;\kk)$ is zero if and only if all composites $P'\xto{\alpha} P\xto{f} Q$ are zero, for $P'$ finitely generated. Such a composite necessarily factors through a finitely generated direct summand of~$Q$, etc.)
As a consequence, the analogue of \Cref{Prop:infl-pstab} also holds for big categories.
Let us write $\cS(G)$ for $\K(\Perm(G;\kk))=\K(\Perm(G;\kk)^\natural)$, which is a compactly generated tt-category with compact unit.
(Compactly generated is not obvious: see~\cite[Remark~5.12]{balmer-gallauer:Dperm}.)
By the above discussion, the modular fixed-points functor with respect to a $p$-subgroup $H\le G$ extends to the big categories~$\cS(-)$:
\[
\Psi^H=\Psi^{H\inn G}\colon\ \cS(G)\xto{\Res^G_{N_G H}}\cS(N_G H)\to \K\left(\frac{\Perm(N_G H;\kk)}{\Proj(\cF_H)}\right)\xleftarrow[\sim]{\Infl^{\WGH}_{N_G H}}\cS(\WGH).
\]
Note that $\Psi^H$ is a tensor triangulated functor that commutes with coproducts and that maps~$\cK(G)$ into~$\cK(\WGH)$.
It follows that it restricts to $\Psi^H\colon\DPerm(G;\kk)\to\DPerm(\WGH;\kk)$.
The analogues of \Cref{Prop:fixed-pts,Prop:Psi-Res,Cor:Psi-Infl,Prop:Psi-Psi,Cor:Psi-Psi} all continue to hold for both $\cS(-)$ and $\DPerm(-;\kk)$.
\end{Rem}
This finishes our exposition of modular fixed-points functors~$\Psi^H$ on derived categories of permutation modules. We now start using them to analyze the tt-geometry.
First, we apply them to the Koszul complexes~$\kos[G]{K}$ of \Cref{Cons:kos}.
\begin{Lem}
\label{Lem:Psi^H-of-s}
Let $H,K\le G$ be two subgroups, with $H$ a $p$-subgroup.
\begin{enumerate}[label=\rm(\alph*), ref=\rm(\alph*)]
\item
\label{it:Psi^H-1}%
If $H\not\le_G K$, then $\Psi^H(\sKG)$ generates~$\cK(\WGH)$ as a tt-ideal.
\smallbreak
\item
\label{it:Psi^H-2}%
If $H\le_G K$, then $\Psi^H(\sKG)$ is acyclic in~$\cK(\WGH)$.
\smallbreak
\item
\label{it:Psi^H-3}%
If $H\sim_G K$, then $\Psi^H(\sKG)$ generates~$\Kac(\WGH)$ as a tt-ideal.
\end{enumerate}
\end{Lem}
\begin{proof}
For~\ref{it:Psi^H-1}, we have $N_G(H,K)=\varnothing$ and thus $\Psi^H(\kk(G/K))=0$ by \Cref{Prop:fixed-pts}. It follows that $\Psi^H(\sKG)=(\cdots \to \ast \to 0\to \kk\to 0)$ by \Cref{Prop:s(H;G)}. Thus the $\otimes$-unit $\unit_{\cK(\WGH)}=\kk[0]$ is a direct summand of $\Psi^H(\sKG)$.
For~\ref{it:Psi^H-2} and~\ref{it:Psi^H-3}, by invariance under conjugation, we can assume that $H\le K$. Let $N:=N_G H$ be the normalizer of~$H$. We have by \Cref{Lem:Res(s(G;H))} that
\begin{equation}
\label{eq:res-s}
\Psi^{H\inn G}(\sKG)=\Psi^{H\inn N}\Res^G_{N}(\sKG)\simeq\!\!\bigotimes_{[g]\in N\backslash G/K}\kern-.5em\Psi^{H\inn N}\big(\kos[N]{N\cap {}^{g\!}K}\big).
\end{equation}
For the index $g=e$ (or simply $g\in N_GK$), we can use $H\normaleq N\cap K$ and compute
\[
\begin{array}{rll}
\Psi^{H\inn N}(\kos[N]{N\cap K}) \kern-.7em & \cong\Psi^{H\inn N}(\Infl^{N/H}_N\kos[N/H]{(N\cap K)/H}) & \textrm{by \Cref{Lem:kos(N;G)}}
\\[2pt]
& \cong \kos[N/H]{(N\cap K)/H} & \textrm{by \Cref{Cor:Psi-Infl}.}
\end{array}
\]
As this object is acyclic in~$\cK(N/H)$ so is the tensor in~\eqref{eq:res-s}. Hence~\ref{it:Psi^H-2}. Continuing in the special case~\ref{it:Psi^H-3} with $H=K$, we have $(N\cap K)/H=1$ and the above $\kos[N/H]{1}$ generates~$\Kac(N/H)$ by \Cref{Prop:Ker(Res)}. It suffices to show that all the other factors in the tensor product~\eqref{eq:res-s} generate the whole~$\cK(\WGH)$. This follows from Part~\ref{it:Psi^H-1} applied to the group~$N$; indeed when $g\in G\sminus N$ we have $H\not\le_N N\cap {}^{g\!}H$ (as $H\le_N N\cap {}^{g\!}H$ and $H\normaleq N$ would imply $H={}^{g\!}H$).
\end{proof}
\section{Conservativity via modular fixed-points}
\label{sec:conservativity}%
In this section, we explain why the spectrum of~$\cK(G)$ is controlled by modular fixed-points functors~$\Psi^H$ together with the localizations~$\Upsilon_G\colon \cK(G)\onto \Db(\kkG)$. It stems from a conservativity result on the `big' category~$\cT(G)=\DPerm(G;\kk)$, namely \Cref{Thm:conservativity}, for which we need some preparation.
\begin{Lem}
\label{Lem:nilpotence}%
Suppose that $G$ is a $p$-group. Let $H\le G$ be a subgroup and let $\bar{G}=\WGH$ be its Weyl group. The modular $H$-fixed-points functor $\Psi^H\colon \perm(G;\kk)^\natural\to \perm(\bar{G};\kk)^\natural$ induces a ring homomorphism
\begin{equation}\label{eq:Psi-nilpotence}%
\Psi^H\colon \End_{\kk G}(\kk(G/H))\too \End_{\kk \bar{G}}(\kk(\bar{G})).
\end{equation}
This homomorphism is surjective with nilpotent kernel: $(\ker(\Psi^H))^n=0$ for $n\gg1$. More precisely, it suffices to take~$n\in\bbN$ such that $\Rad(\kk G)^n=0$.
\end{Lem}
\begin{proof}
The reader can check this with Brauer quotients. We outline the argument.
By \eqref{eq:fixed-pts} we have $\Psi^H(\kk(G/H))\cong \kk(N_G(H,H)/H)=\kk(\bar{G})$, so the problem is well-stated.
\Cref{Rec:Hom} provides $\kk$-bases for both vector spaces in~\eqref{eq:Psi-nilpotence}, namely
\[
\{\,f_{g}=\eps\circ c_g\circ\eta\,\}_{[g]\in \HGH}
\qquadtext{and}
\{\,c_{\bar{g}}\,\}_{\bar{g}\in \bar{G}}
\]
using the notation of~\eqref{eq:Hom-f_g} and~\eqref{eq:Hom-c_g}.
The homomorphism~$\Psi^H$ in~\eqref{eq:Psi-nilpotence} respects those bases.
Even better, it is a bijection from the part of the first basis indexed by~$H\bs(N_G H)/H$ onto the second basis, and it maps the rest of the first basis to zero.
Indeed, when $g\in N_G H$, we have $f_g=c_g$ and $\Psi^H(f_g)=\Psi^H(c_g)=c_{\bar{g}}$ for $\bar{g}=[g]_H$.
On the other hand, when $g\in G\sminus N_G H$ then $\Psi^H(f_g)=0$, by the factorization~\eqref{eq:Hom-f_g} and the fact that $\Psi^H(\kk(G/L))=0$ for $L=H\cap {}^{g\!} H$ with $g\notin N_G H$, using again \eqref{eq:fixed-pts}.
Hence \eqref{eq:Psi-nilpotence} is surjective and $\ker(\Psi^H)$ has basis $\{f_g=\eps\circ c_g\circ\eta\}_{[g]\in \HGH,\,g\notin N_G H}$.
One easily verifies that such an~$f_g$ has image contained in $\Rad(\kk G)\cdot \kk(G/H)$, using that $H\cap {}^{g\!}H$ is strictly smaller than~$H$.
Composing $n$ such generators $f_{g_1}\circ\cdots \circ f_{g_n}$ then maps $\kk(G/H)$ into $\Rad(\kkG)^n\cdot \kk(G/H)$ which is eventually zero for~$n\gg1$, since~$G$ is a $p$-group.
\end{proof}
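For a concrete illustration, take $p=2$, let $G=D_8$ be the dihedral group of order~$8$ and let $H\le G$ be a non-central subgroup of order two, so that $N_G H$ has order four and $\bar{G}=\WGH\cong C_2$. There are three $(H,H)$-double cosets in~$G$, so the source of~\eqref{eq:Psi-nilpotence} has $\kk$-dimension three while the target $\End_{\kk\bar{G}}(\kk(\bar{G}))\cong\kk\bar{G}$ has dimension two; the kernel is spanned by the single~$f_g$ with $g\notin N_G H$, and one checks directly that this $f_g$ squares to zero.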
We now isolate a purely additive result that we shall of course apply to the case where $\Psi$ is a modular fixed-points functor.
\begin{Lem}
\label{Lem:conservativity}%
Let $\Psi\colon \cA\to \cD$ be an additive functor between additive categories. Let $\cB,\cC\subseteq \cA$ be full additive subcategories such that:
\begin{enumerate}[label=\rm(\arabic*), ref=\rm(\arabic*)]
\item
\label{it:cons-1}%
Every object of~$\cA$ decomposes as $B\oplus C$ with $B\in\cB$ and~$C\in\cC$.
\item
\label{it:cons-2}%
The functor~$\Psi$ vanishes on~$\cC$, that is, $\Psi(\cC)=0$.
\item
\label{it:cons-3}%
The restricted functor $\Psi\restr{\cB}\colon \cB\to \cD$ is full with nilpotent kernel.\,{\rm(\footnote{\,There exists $n\gg1$ such that if $n$ composable morphisms $f_1,\ldots,f_n$ in~$\cB$ all go to zero in~$\cD$ under~$\Psi$ then their composite $f_n\circ \cdots\circ f_1$ is zero in~$\cB$.})}
\end{enumerate}
Let $X\in\Chain(\cA)$ be a complex such that $\Psi(X)$ is contractible in~$\Chain(\cD)$. Then $X$ is homotopy equivalent to a complex in~$\Chain(\cC)$.
\end{Lem}
\begin{proof}
Decompose every $X_i=B_i\oplus C_i$ in~$\cA$, using~\ref{it:cons-1}, for all~$i\in \bbZ$.
We are going to build a complex on the objects~$C_i$ in such a way that $X_\sbull$ becomes homotopy equivalent to~$C_\sbull$ in~$\Chain(\cA^\natural)$, where $\cA^\natural$ is the idempotent-completion of~$\cA$. As both~$X_\sbull$ and~$C_\sbull$ belong to~$\Chain(\cA)$, this proves the result.
By~\ref{it:cons-2}, the complex $\cdots \to \Psi(B_i)\to \Psi(B_{i-1})\to \cdots$ is isomorphic to~$\Psi(X)$, hence it is contractible. So each $\Psi(B_i)$ decomposes in~$\cD^\natural$ as $D_{i}\oplus D_{i-1}$ in such a way that the differential $\Psi(B_i)=D_i\oplus D_{i-1} \too \Psi(B_{i-1})=D_{i-1}\oplus D_{i-2}$ is just $\smat{0&1\\0&0}$. Since $\Psi\restr{\cB}\colon \cB\to \cD$ is full with nilpotent kernel by~\ref{it:cons-3}, we can lift idempotents. In other words, we can decompose each $B_i$ in the idempotent-completion~$\cB^\natural$ (hence in~$\cA^\natural$) as
\[
B_i\simeq B_i'\oplus B_i''
\]
with $\Psi(B_i')\simeq D_i$ and $\Psi(B_i'')\simeq D_{i-1}$ in a compatible way with the decomposition in~$\cD^\natural$.
This means that when we write the differentials in~$X$ in components in~$\cA^\natural$
\[
\cdots \to X_i=B_i'\oplus B_i''\oplus C_i \xto{\smat{*&b_i&*\\ *&*&*\\ *&*&*}} X_{i-1}=B_{i-1}'\oplus B_{i-1}''\oplus C_{i-1} \to \cdots
\]
the component $b_i\colon B_i''\to B_{i-1}'$ maps to the isomorphism $\Psi(B_i'')\simeq D_{i-1}\simeq \Psi(B_{i-1}')$ in~$\cD^\natural$.
Hence $b_i$ is already an isomorphism in~$\cB^\natural$ by~\ref{it:cons-3} again.
(Note that~\ref{it:cons-3} passes to~$\cB^\natural\to\cD^\natural$.)
Using elementary operations on~$X_i$ and~$X_{i-1}$ we can replace $X$ by an isomorphic complex in~$\cA^\natural$ of the form
\begin{equation}\label{eq:aux-cons-1}%
\cdots \to X_{i+1} \to B_i'\oplus B_i''\oplus C_i \xto{\smat{0&b_i&0\\ *&0&*\\ *&0&*}} B_{i-1}'\oplus B_{i-1}''\oplus C_{i-1} \to X_{i-2}\to \cdots
\end{equation}
This being a complex forces the `previous' differential $X_{i+1}\to X_i$ to be of the form~$\smat{*&*&*\\ 0&0&0\\ *&*&*}$ and the `next' differential $X_{i-1}\to X_{i-2}$ to be of the form~$\smat{0&*&*\\ 0&*&*\\ 0&*&*}$. We can then remove from $X$ a direct summand in~$\Chain(\cA^\natural)$ that is a homotopically trivial complex of the form~$\cdots0\to B_i''\isoto B_{i-1}'\to 0\cdots$.
The reader might be concerned about how to perform this reduction in all degrees at once, since we do not put boundedness conditions on~$X$ (thus preventing the `obvious' induction argument). The solution is simple. Do the above for all differentials in \emph{even} indices $i=2j$. By elementary operations on~$X_{2j}$ and~$X_{2j-1}$ for all~$j\in \bbZ$, we can replace~$X$, up to isomorphism, by a complex whose \emph{even} differentials are of the form~\eqref{eq:aux-cons-1}. We then remove the contractible complexes $\cdots 0\to B_{2j}''\isoto B_{2j-1}'\to 0\cdots$.
We obtain in this way a homotopy equivalent complex in~$\cA^\natural$ that we call~$\tilde X$, where $B_i',B_i''\in\cB^\natural$ and~$C_i\in \cC$
\begin{equation}\label{eq:aux-X}%
\cdots \to B_{2j+1}''\oplus C_{2j+1} \xto{\smat{a_{2j+1}&*\\ *&*}} B_{2j}'\oplus C_{2j} \xto{\smat{*&*\\ *&*}} B_{2j-1}''\oplus C_{2j-1} \to \cdots
\end{equation}
in which the even differentials go to zero under~$\Psi$, by the above construction. In particular the homotopy trivial complex $\Psi(\tilde X)\simeq\Psi(X)$ in~$\cD^\natural$ has the form $\cdots \xto{0} \Psi(B_{2j+1}'')\xto{\Psi(a_{2j+1})} \Psi(B_{2j}')\xto {0} \cdots$ hence its odd-degree differentials $\Psi(a_{2j+1})$ are isomorphisms. It follows that $a_{2j+1}\colon B_{2j+1}''\to B_{2j}'$ is itself an isomorphism by~\ref{it:cons-3} again. Using new elementary operations (again in all degrees), we change the odd-degree differentials of the complex~$\tilde X$ in~\eqref{eq:aux-X} into diagonal ones and we remove the contractible summands $0\to B_{2j+1}''\isoto B_{2j}'\to 0$ as before, to get a complex consisting only of the $C_i$ in each degree~$i\in \bbZ$.
In summary, we have shown that~$X$ is homotopy equivalent to a complex $C\in\Chain(\cC)$ inside~$\Chain(\cA^\natural)$, as announced.
\end{proof}
\begin{Rem}
Of course, it would be silly to discuss conservativity of the functors $\{\Psi^H\}_{H\le G}$ since among them we find~$\Psi^1=\Id$. The interesting result appears when each $\Psi^H$ is used in conjunction with the derived category of~$\WGH$, or, in `big' form, its homotopy category of injectives.
Let us remind the reader.
\end{Rem}
\begin{Rec}
\label{Rec:J-lambda}%
In \cite{balmer-gallauer:resol-big}, we prove that the homotopy category of injective $RG$-modules, with coefficients in any regular ring~$R$ (\eg\ our field~$\kk$), is a localization of~$\DPerm(G;R)$. In our case, we have an inclusion $J_G\colon \K\Inj(\kk G)\hook \DPerm(G;\kk)$, inside~$\K(\Perm(G;\kk))$, and this inclusion admits a left adjoint~$\Upsilon_G$
\begin{equation}
\label{eq:J-lambda}%
\vcenter{\xymatrix{
\DPerm(G;\kk) \ar@<-.5em>@{->>}[d]_-{\Upsilon_G}
\\
\K\Inj(\kkG). \ar@<-.5em>@{ >->}[u]_-{J_G}
}}
\end{equation}
This realizes the finite localization of~$\DPerm(G;\kk)$ with respect to the subcategory~$\Kac(G)\subseteq\cK(G)=\DPerm(G;\kk)^c$. In particular, $\Upsilon_G$ preserves compact objects and yields the equivalence~$\Upsilon_G\colon\cK(G)/\Kac(G)\cong \Db(\kkG)\cong\K\Inj(\kkG)^c$ of~\eqref{eq:Upsilon_G}, also denoted~$\Upsilon_G$ for this reason. Note that $\Upsilon_G\circ J_G\cong\Id$ as usual with localizations.
Let $P\le G$ be a subgroup. Observe that induction~$\Ind_P^G$ preserves injectives so that $J_G\circ\Ind_P^G\cong\Ind_P^G\circ J_P$. Taking left adjoints, we see that
\begin{equation}\label{eq:Upsilon-Res}%
\Res^G_P\circ\Upsilon_G\cong \Upsilon_P\circ\Res^G_P.
\end{equation}
\end{Rec}
\begin{Not}
\label{Not:bar-Psi}%
For each $p$-subgroup~$H\le G$, we are interested in the composite
\[
\xymatrix@C=3em{\check{\Psi}^H=\check{\Psi}^{H\inn G}\colon \quad \DPerm(G;\kk) \ar[r]^-{\Psi^{H\inn G}} & \DPerm(\WGH;\kk) \ar@{->>}[r]^-{\Upsilon_{\WGH}} & \K\Inj(\kk(\WGH))}
\]
of the modular $H$-fixed-points functor followed by localization to the homotopy category of injectives~\eqref{eq:J-lambda}. We use the same notation on compacts
\begin{equation}
\label{eq:bar-Psi}%
\xymatrix@C=3em{\check{\Psi}^H=\check{\Psi}^{H\inn G}\colon \quad \cK(G;\kk) \ar[r]^-{\Psi^{H\inn G}} & \cK(\WGH;\kk) \ar@{->>}[r]^-{\Upsilon_{\WGH}} & \Db(\kk(\WGH)).}
\end{equation}
\end{Not}
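For instance, $\check{\Psi}^1=\Upsilon_G$, whereas for a $p$-group~$G$ we have $\Weyl{G}{G}=1$ and $\check{\Psi}^{G}=\Psi^{G}\colon\cK(G)\to\Db(\kk)$.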
We are now ready to prove the first important result of the paper.
\begin{Thm}[Conservativity]
\label{Thm:conservativity}%
Let $G$ be a finite group.
The above family of functors~$\check{\Psi}^H\colon \cT(G)\to \K\Inj(\kk(\WGH))$, indexed by all the (conjugacy classes of) $p$-sub\-groups~$H\le G$, collectively detects vanishing of objects of~$\DPerm(G;\kk)$.
\end{Thm}
\begin{proof}
Let $P\le G$ be a $p$-Sylow subgroup. For every subgroup~$H\le P$, we have $\Weyl{P}{H}\hook\WGH$ and $\check{\Psi}^{H\inn P}\circ \Res^G_P$ can be computed as $\Res^{\WGH}_{\Weyl{P}{H}}\circ\check{\Psi}^{H\inn G}$ thanks to \Cref{Prop:Psi-Res} and~\eqref{eq:Upsilon-Res}. On the other hand, $\Res^G_P$ is (split) faithful, as $\Ind_P^G\circ\Res^G_P$ admits the identity as a direct summand. Hence it suffices to prove the theorem for the group~$P$, \ie we can assume that $G$ is a $p$-group.
Let $G$ be a $p$-group and $\cF$ be a family of subgroups (\Cref{Rec:family}). We say that a complex~$X$ in~$\Chain(\Perm(G;\kk))$ is \emph{of type~$\cF$} if every $X_i$ is $\cF$-free, \ie a coproduct of $\kk(G/K)$ for $K\in\cF$. So every complex is of type~$\cF_{\textrm{all}}=\{\textrm{all }H\le G\}$. Saying that $X$ is of type $\cF_1=\varnothing$ means $X=0$.
We want to prove that if $X$ defines an object in $\DPerm(G;\kk)$ and $\check{\Psi}^H(X)=0$ for all~$H\le G$ then $X$ is homotopy equivalent to a complex $X'$ of type~$\cF_1=\varnothing$.
We proceed by a form of `descending induction' on~$\cF$. Namely, we prove:
\smallbreak
\noindent \textbf{Claim}: \textit{Let $X\in \DPerm(G;\kk)$ be a complex of type~$\cF$ for some family~$\cF$ and let $H\in \cF$ be a maximal element of~$\cF$ for inclusion.
If $\check{\Psi}^H(X)=0$ then $X\cong X'$ is homotopy equivalent to a complex $X'\in\Chain(\Perm(G;\kk))$ of type~$\cF'\subsetneqq \cF$.
}
\smallbreak
By the above discussion, proving this claim proves the theorem. Explicitly, we are going to prove this claim for $\cF'=\cF\cap \cF_H$, that is, $\cF$ with all conjugates of~$H$ removed.
By maximality of~$H$ in~$\cF$, every $K\in\cF$ is either conjugate to~$H$ or in~$\cF'$. Of course, for $H'$ conjugate to~$H$ we have $\kk(G/H')\simeq \kk(G/H)$ in~$\Perm(G;\kk)$.
We apply \Cref{Lem:conservativity} for $\cA=\Add\SET{\kk(G/K)}{K\in\cF}$, $\cB=\Add(\kk(G/H))$, $\cC=\Add\SET{\kk(G/K)}{K\in\cF'}$, $\cD=\Perm(\WGH;\kk)$ and the functor $\Psi=\Psi^H$ naturally.
Let us check the hypotheses of \Cref{Lem:conservativity}. Regrouping the terms $\kk(G/K)$ into those for which $K$ is conjugate to~$H$ and those not conjugate to~$H$, we get Hypothesis~\ref{it:cons-1}.
Hypothesis~\ref{it:cons-2} follows immediately from \eqref{eq:fixed-pts} since~$(G/K)^H=\varnothing$ for every $K\in\cF'$.
Finally, Hypothesis~\ref{it:cons-3} follows from \Cref{Lem:nilpotence} and additivity.
So it remains to show that $\Psi^H(X)$ is contractible.
Since $X$ is of type~$\cF$ and $H$ is maximal, we see that $\Psi^H(X)\in\Chain(\Inj(\kk(\WGH)))$ and applying~$\Upsilon_{\WGH}$ gives the same complex (up to homotopy). In other words, $\check{\Psi}^H(X)=0$ forces $\Psi^H(X)$ to be contractible and we can indeed get the above Claim from \Cref{Lem:conservativity}.
\end{proof}
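To illustrate the descending induction in the above proof, take $G=C_p$: the families of subgroups are $\varnothing\subseteq\{1\}\subseteq\{1,C_p\}$ and every complex is of type~$\{1,C_p\}$. The Claim applied with $\cF=\{1,C_p\}$ and $H=C_p$ replaces~$X$ up to homotopy by a complex of free modules (type~$\{1\}$), and applied once more with $\cF=\{1\}$ and $H=1$ it shows that $X\simeq0$.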
\section{The spectrum as a set}
\label{sec:spectrum}%
In this section, we deduce from the previous results the description of all points of~$\SpcKG$, as well as some elements of its topology.
We start with a general fact, which is now folklore.
\begin{Prop}
Let $F\colon \cT\to \cS$ be a coproduct-preserving tt-functor between `big' tt-categories. Suppose that $F$ is conservative. Then $F$ detects $\otimes$-nilpotence of morphisms~$f\colon x\to Y$ in~$\cT$ whose source $x\in \cT^c$ is compact, \ie if $F(f)=0$ in~$\cS$ then there exists $n\gg1$ such that $f\potimes{n}=0$ in~$\cT$. In particular, $F\colon \cT^c\to \cS^c$ detects $\otimes$-nilpotence of morphisms and therefore $\Spc(F)\colon \Spc(\cS^c)\to \Spc(\cT^c)$ is surjective.
\end{Prop}
\begin{proof}
Using rigidity of compacts, we can assume that $x=\unit$. Given a morphism $f\colon \unit\to Y$ we can construct in~$\cT$ the homotopy colimit $Y^\infty:=\hocolim_n Y\potimes{n}$ under the transition maps $f\otimes\id\colon Y\potimes{n}\to Y\potimes{(n+1)}$. Let $f^\infty\colon \unit\to Y^\infty$ be the resulting map. Now since $F(f)=0$ it follows that $F(Y^\infty)=0$ in~$\cS$, as it is a sequential homotopy colimit of zero maps. By conservativity of~$F$, we get $Y^\infty=0$ in~$\cT$. Since $\unit$ is compact, the vanishing of~$f^\infty\colon \unit\to\hocolim Y\potimes{n}$ must already happen at a finite stage, that is, the map $f\potimes{n}\colon \unit \to Y\potimes{n}$ is zero for~$n\gg1$, as claimed.
The second statement follows from this, together with~\cite[Theorem~1.4]{balmer:surjectivity}.
\end{proof}
Combined with our Conservativity \Cref{Thm:conservativity} we get:
\begin{Cor}
\label{Cor:Spc-surjection-derived}%
The family of functors $\check{\Psi}^H\colon\cK(G)\to\Db(\kk(\WGH))$, indexed by conjugacy classes of $p$-subgroups $H\le G$, detects $\otimes$-nilpotence. So the induced map
\[
\coprod_{H\in\Sub_p(G)}\Spc(\Db(\kk(\WGH)))\onto \SpcKG
\]
is surjective.
\qed
\end{Cor}
\begin{Def}
\label{Def:psi}%
Let $H\le G$ be a $p$-subgroup. We write (under \Cref{Conv:light})
\[
\psi^H=\psi^{H\inn G}:=\Spc(\Psi^{H})\colon\Spc(\cK(\WGH))\to \SpcKG
\]
for the map induced by the modular $H$-fixed-points functor. We write
\[
\check{\psi}^H=\check{\psi}^{H\inn G}:=\Spc(\check{\Psi}^{H})\colon\Spc(\Db(\kk(\WGH)))\to \SpcKG
\]
for the map induced by the tt-functor~$\check{\Psi}^H=\Upsilon_{\WGH}\circ\Psi^{H}$ of \eqref{eq:bar-Psi}.
In other words, $\check{\psi}^H$ is the composite of the inclusion of \Cref{Prop:VeeG} with the above~$\psi^H$
\[
\check{\psi}^H\ \colon \quad \Vee{\WGH}=\Spc(\Db(\kk(\WGH)))\ \overset{\upsilon_{\WGH}}{\hook}\ \Spc(\cK(\WGH))\ \xto{\psi^H}\ \SpcKG.
\]
\end{Def}
\begin{Def}
\label{Def:P(H;p)}%
Let $H\le G$ be a $p$-subgroup and $\gp\in\Vee{\WGH}$ a `cohomological' prime over the Weyl group of~$H$ in~$G$.
We define a point in $\SpcKG$ by
\[
\cP(H,\gp)=\cP_G(H,\gp):=\check{\psi}^H(\gp)=(\check{\Psi}^H)\inv(\gp).
\]
\Cref{Cor:Spc-surjection-derived} tells us that every point of~$\SpcKG$ is of the form~$\cP(H,\gp)$ for some $p$-subgroup~$H\le G$ and some cohomological point~$\gp\in\Vee{\WGH}$. Different subgroups and different cohomological points can give the same~$\cP(H,\gp)$. See \Cref{Thm:all-points}.
\end{Def}
\begin{Rem}
Although we shall not use it, we can unpack the definitions of~$\cP_G(H,\gp)$ for the nostalgic reader.
Let us start with the bijection~$\Vee{G}=\Spc(\Db(\kkG))\cong\Spech(\rmH^\sbull(G,\kk))$.
Let $\gp^{\sbull}\subset \rmH^\sbull(G;\kk)=\End^\sbull_{\Db(\kkG)}(\unit)$ be a homogeneous prime ideal of the cohomology.
The corresponding prime~$\gp$ in~$\Db(\kkG)$ can be described as
\[
\gp=\SET{x\in \Db(\kkG)}{\exists\, \zeta\in\rmH^\sbull(G;\kk)\textrm{ such that }\zeta\notin\gp^{\sbull}\textrm{ and }\zeta\otimes x=0}.
\]
Consequently, the prime~$\cP_G(H,\gp)$ of \Cref{Def:P(H;p)} is equal to
\[
\SET{x\in\cK(G)}{\exists\,\zeta\in\rmH^\sbull(\WGH;\kk)\sminus\gp^{\sbull}\textrm{ such that }\zeta\otimes \Psi^H(x)=0\textrm{ in }\Db(\kk(\WGH))}.
\]
\end{Rem}
\begin{Rem}
\label{Rem:Spc-Res}%
By \Cref{Prop:Psi-Res} and functoriality of~$\Spc(-)$, the primes~$\cP_G(H,\gp)$ are themselves functorial in~$G$. To wit, if $\alpha:G\to G'$ is a group homomorphism and $H$ is a $p$-subgroup of~$G$ then $\alpha(H)$ is a $p$-subgroup of~$G'$ and we have
\begin{equation}
\label{eq:Spc-Res}%
\alpha_*(\cP_G(H,\gp))=\cP_{G'}(\alpha(H),\bar\alpha_*\gp)
\end{equation}
in~$\Spc(\cK(G'))$, where $\alpha_*\colon \SpcKG\to \Spc(\cK(G'))$ and $\bar{\alpha}_*\colon \Vee{\WGH}\to \Vee{\Weyl{G'}{\alpha(H)}}$ are as in \Cref{Rem:not}.
We single out the usual suspects. Fix $H\le G$ a $p$-subgroup.
\begin{enumerate}[wide, labelindent=0pt, label=\rm(\alph*), ref=\rm(\alph*)]
\smallbreak
\item
\label{it:Spc-conj}%
For \emph{conjugation}, let $G\le G'$ and $x\in G'$.
We get $\cP_{G}(H,\gp)^x=\cP_{G^x}(H^x,\gp^x)$ for every~$\gp\in\Vee{\WGH}$.
In particular, when~$x$ belongs to~$G$ itself, we get by~\eqref{eq:no-conj}
\begin{equation}
\label{eq:Spc-conjugation}%
\qquad g\in G \qquad\Longrightarrow\qquad \cP_G(H,\gp)=\cP_G(H^g,\gp^g).
\end{equation}
\smallbreak
\item
For \emph{restriction}, let $K\le G$ be a subgroup containing~$H$ and let $\gp\in \Vee{\Weyl{K}{H}}$ be a cohomological point over the Weyl group of~$H$ in~$K$.
Then we have
\begin{equation}
\label{eq:Spc-Res-K}%
\rho_K(\cP_K(H,\gp))=\cP_G(H,\bar\rho_K(\gp)),
\end{equation}
in $\SpcKG$, where the maps $\rho_K=(\Res^G_K)_*\colon\Spc(\cK(K))\to \SpcKG$ and $\bar\rho_K\colon \Vee{\Weyl{K}{H}}\to \Vee{\WGH}$ are spelled out around~\eqref{eq:rho}.
\smallbreak
\item
For \emph{inflation}, let $N\normaleq G$ be a normal subgroup. Set $\bar G=G/N$ and $\bar H=HN/N$.
Then for every $\gp\in \Vee{\WGH}$, we have
\begin{equation}
\label{eq:Spc-infl}%
\pi^{\bar G}\,(\cP_{G}(H,\gp))=\cP_{\bar G}(\bar H,\overline{\pi}^{\bar G}\,\gp),
\end{equation}
in $\Spc(\cK(\bar G))$, where the maps $\pi^{\bar{G}}=(\Infl^{\bar{G}}_G)_*\colon\SpcKG\to\Spc(\cK(\bar{G}))$ and $\bar{\pi}^{\bar{G}}\colon \Vee{\Weyl{G}{H}}\to \Vee{\Weyl{\bar{G}}{\bar{H}}}$ are spelled out around~\eqref{eq:pi}.
\end{enumerate}
\end{Rem}
Our primes also behave nicely under modular fixed-points maps:
\begin{Prop}
\label{Prop:Spc-Psi}
Let $H\le G$ be a $p$-subgroup and let $H\le K\le N_G H$ be such that $K/H\le \WGH$ is a `further' $p$-subgroup.
Then for every $\gp\in \Vee{\Weyl{(\WGH)}{(K/H)}}$, we have
\[
\psi^{H\inn G}(\cP_{\WGH}(K/H,\gp))=\cP_G(K,\bar{\rho}(\gp))
\]
in~$\SpcKG$, where $\bar{\rho}=\Spc(\overline{\Res}^{\WGK}_{\Weyl{(\WGH)}{(K/H)}})\colon \Vee{\Weyl{(\WGH)}{(K/H)}}\too\Vee{\Weyl{G}{K}}$. In particular, if $H\normaleq G$ is normal, we have
\[
\psi^{H\inn G}(\cP_{G/H}(K/H,\gp))=\cP_G(K,\gp)
\]
in~$\SpcKG$, using that $\gp\in\Vee{\Weyl{(G/H)}{(K/H)}}=\Vee{\Weyl{G}{K}}$.
\end{Prop}
\begin{proof}
This is immediate from \Cref{Prop:Psi-Psi} and \Cref{Cor:Psi-Psi}.
\end{proof}
The relation between Koszul objects and modular fixed-points functors, obtained in \Cref{Lem:Psi^H-of-s}, can be reformulated in terms of the primes~$\cP_G(H,\gp)$.
\begin{Lem}
\label{Lem:P-to-s}%
Let $H\le G$ be a $p$-subgroup and $\gp\in \Vee{\WGH}$. Let $K\le G$ be a subgroup and~$\sKG$ be the Koszul object of~\Cref{Cons:kos}. Then $\sKG\in\cP_G(H,\gp)$ if and only if~$H\le_G K$. (Note that the latter condition does not depend on~$\gp$.)
\end{Lem}
\begin{proof}
We have seen in \Cref{Lem:Psi^H-of-s}\,\ref{it:Psi^H-2} that if $H\le_G K$ then $\check{\Psi}^H(\sKG)=0$ in~$\Db(\kk(\WGH))$, in which case $\sKG\in(\check{\Psi}^H)\inv(0)\subseteq (\check{\Psi}^H)\inv(\gp)=\cP_G(H,\gp)$ for every~$\gp$.
Conversely, we have seen in \Cref{Lem:Psi^H-of-s}\,\ref{it:Psi^H-1} that if $H\not\le_G K$ then $\check{\Psi}^H(\sKG)$ generates~$\Db(\kk(\WGH))$, hence is not contained in any cohomological point~$\gp$, in which case $\sKG\notin(\check{\Psi}^H)\inv(\gp)=\cP_G(H,\gp)$.
\end{proof}
\begin{Cor}
\label{Cor:P<P'}%
If $\cP_G(H,\gp)\subseteq \cP_G(H',\gp')$ then $H'\le_G H$.
Therefore if $\cP_G(H,\gp)=\cP_G(H',\gp')$ then $H$ and~$H'$ are conjugate in~$G$.
\end{Cor}
\begin{proof}
Apply \Cref{Lem:P-to-s} with $K=H$ twice: once for the subgroup~$H$, to see that $\kos[G]{H}$ belongs to~$\cP_G(H,\gp)\subseteq\cP_G(H',\gp')$, and once for the subgroup~$H'$, to deduce that $H'\le_G H$.
\end{proof}
\begin{Prop}
\label{Prop:bar-psi-injective}%
Let $H\le G$ be a $p$-subgroup. Then the map~$\check{\psi}^H\colon \Vee{\WGH}\to \Spc(\cK(G))$ is injective, that is, $\cP_G(H,\gp)=\cP_G(H,\gp')$ implies $\gp=\gp'$.
\end{Prop}
\begin{proof}
Let $N=N_G H$. By assumption we have $\rho^G_N(\cP_N(H,\gp))=\rho^G_N(\cP_N(H,\gp'))$. By \Cref{Cor:Spc-Res}, there exists $g\in G$ and a prime~$\cQ\in\Spc(\cK(N\cap{}^{g\!}N))$ such that
\begin{equation}
\label{eq:aux-inj-1}%
\cP_N(H,\gp)=\rho_{N\cap{}^{g\!}N}^{N}(\cQ)
\quadtext{and}
\cP_N(H,\gp')=\big(\rho_{N\cap{}^{g\!}N}^{{}^{g\!}N}(\cQ)\big)^g.
\end{equation}
By \Cref{Cor:Spc-surjection-derived} for the group~$N\cap{}^{g\!}N$, there exists a $p$-subgroup $L\le N\cap{}^{g\!}N$ and some~$\gq\in \Vee{\Weyl{(N\cap{}^{g\!}N)}{L}}$ such that $\cQ=\cP_{N\cap{}^{g\!}N}(L,\gq)$.
By \eqref{eq:Spc-Res} we know where such a prime $\cP_{N\cap{}^{g\!}N}(L,\gq)$ goes under the maps~$\rho=\Spc(\Res)$ of~\eqref{eq:aux-inj-1} and, for the second one, we also know what happens under conjugation by~\Cref{Rem:Spc-Res}\,\ref{it:Spc-conj}.
Applying these properties to the above relations~\eqref{eq:aux-inj-1} we get
\begin{equation*}
\label{eq:aux-inj-2}%
\cP_N(H,\gp)=\cP_N(L,\gq')
\quadtext{and}
\cP_N(H,\gp')=\cP_N(L^g,\gq'')
\end{equation*}
for suitable cohomological points $\gq'\in\Vee{\Weyl{N}{L}}$ and $\gq''\in\Vee{\Weyl{N}{L^g}}$ that we do not need to unpack.
By \Cref{Cor:P<P'} applied to the group~$N$, we must have $H\sim_N L$ and $H\sim_N L^g$. But since $H\normaleq N$, this forces $H=L=L^g$ and therefore $g\in N_G H=N$.
In that case, returning to~\eqref{eq:aux-inj-1}, we have $N\cap N^g=N=N^g$ and therefore
\[
\cP_N(H,\gp)=\cQ
\quadtext{and}
\cP_N(H,\gp')=\cQ^g=\cQ
\]
where the last equality uses~$g\in N$ and~\eqref{eq:no-conj}. Hence $\cP_N(H,\gp)=\cQ=\cP_N(H,\gp')$.
As $H$ is normal in~$N$ the map $\psi^{H\inn N}\colon \Spc(\cK(N/H))\too \Spc(\cK(N))$ is split injective by \Cref{Cor:Psi-Infl}, and we conclude that $\gp=\gp'$.
\end{proof}
We can now summarize our description of the set~$\SpcKG$.
\begin{Thm}
\label{Thm:all-points}
Every point in $\SpcKG$ is of the form $\cP_G(H,\gp)$ as in \Cref{Def:P(H;p)}, for some $p$-subgroup $H\le G$ and some point~$\gp\in\Vee{\WGH}$ of the cohomological open of the Weyl group of~$H$ in~$G$.
Moreover, we have $\cP_G(H,\gp)=\cP_G(H',\gp')$ if and only if there exists $g\in G$ such that
$H'=H^g$ and $\gp'=\gp^g$.
\end{Thm}
\begin{proof}
The first statement follows from \Cref{Cor:Spc-surjection-derived}.
For the second statement, the ``if''-direction follows from~\eqref{eq:Spc-conjugation}.
For the ``only if''-direction assume $\cP_G(H,\gp)=\cP_G(H',\gp')$. By \Cref{Cor:P<P'}, this forces $H\sim_G H'$.
Using~\eqref{eq:Spc-conjugation}, we can replace $H'$ by~$H^g$ and assume that $\cP_G(H,\gp)=\cP_G(H,\gp')$ for $\gp,\gp'\in\Vee{\WGH}$.
We can then conclude by \Cref{Prop:bar-psi-injective}.
\end{proof}
Here is an example of support, for the Koszul objects of \Cref{Cons:kos}.
\begin{Cor}
\label{Cor:supp(kos)}%
Let $K\le G$. Then $\supp(\sKG)=\SET{\cP(H,\gp)}{H\not\le_G K}$.
\end{Cor}
\begin{proof}
Since all primes are of the form~$\cP(H,\gp)$, it is a simple contraposition on \Cref{Lem:P-to-s}, for $\cP(H,\gp)\in\supp(\sKG)\Leftrightarrow\sKG\notin \cP(H,\gp)\Leftrightarrow H\not\le_G K$.
\end{proof}
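For instance, $K=1$ gives $\supp(\kos[G]{1})=\SET{\cP(H,\gp)}{H\neq1}$, the closed complement of the cohomological open~$\VG$, in accordance with the fact that $\kos[G]{1}$ generates~$\Kac(G)$ (\Cref{Prop:Ker(Res)}).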
We can use this result to identify the image of $\psi^H$. First, in the normal case:
\begin{Prop}
\label{Prop:Im(psi^N)}%
Let $H\normaleq G$ be a normal $p$-subgroup. Then the continuous map
\[
\psi^{H}=\Spc(\Psi^H)\colon \Spc(\cK(G/H))\to\SpcKG
\]
is a closed immersion, retracted by~$\Spc(\Infl^{G/H}_G)$. Its image is the closed subset
\begin{equation}
\label{eq:Im(psi^N)}%
\Img(\psi^H)=\SET{\cP_G(L,\gp)}{H\le L\in\Sub_p\!G,\ \gp\in\Vee{\WGL}}=\!\bigcap_{K\in\cF_H}\!\supp(\sKG)\kern-.8em
\end{equation}
where we recall that $\cF_H=\SET{K\le G}{H\not\le K}$.
Furthermore, this image of~$\psi^H$ is also the support of the object
\begin{equation}
\label{eq:y-normal}%
\bigotimes_{K\in\cF_H}\sKG
\end{equation}
and it is also the support of the tt-ideal $\cap_{K\in \cF_H}\Ker(\Res^G_K)$.
\end{Prop}
\begin{proof}
By \Cref{Cor:Psi-Infl}, the map $\psi^H$ has a continuous retraction hence is a closed immersion as soon as we know that its image is closed.
So let us prove~\eqref{eq:Im(psi^N)}.
By \Cref{Prop:Spc-Psi} and the fact that all points are of the form~$\cP(L,\gp)$, the image of~$\psi^H$ is the subset $\SET{\cP_G(L,\gp)}{H\le L,\ \gp\in\Vee{\WGL}}$. Here we use $H\normaleq G$.
\Cref{Cor:supp(kos)} tells us that every such point $\cP(L,\gp)$ belongs to the support of~$\sKG$ as long as $L\not\le_G K$, which clearly holds if $H\le L$ and $H\not\le K$. Therefore $\Img(\psi^H)\subseteq \cap_{K\in\cF_H}\supp(\sKG)$.
Conversely, let $\cP(L,\gp)\in \cap_{K\in\cF_H}\supp(\sKG)$ and let us show that $H\le L$. If \ababs, $H\not\le L$ then $L\in\cF_H$ is one of the indices~$K$ that appear in the intersection $\cap_{K\in\cF_H}\supp(\sKG)$. In other words, $\cP(L,\gp)\in\supp(\kos[G]{L})$. By \Cref{Cor:supp(kos)}, this means $L\not\le_G L$, which is absurd. Hence the result.
The `furthermore part' follows: The first claim is~\eqref{eq:Im(psi^N)} since $\supp(x)\cap \supp(y)=\supp(x\otimes y)$ and the second claim follows from \Cref{Cor:Ker(Res)}.
(For $H=1$, the result does not tell us much, as $\psi^1=\id$ and $\otimes_{\varnothing}=\unit$.)
\end{proof}
Let us extend the above discussion to not necessarily normal subgroups~$H$.
\begin{Not}
\label{Not:zul}%
Let $H\le G$ be an arbitrary subgroup. We define an object of~$\cK(G)$
\begin{equation}
\label{eq:zul}%
\tHG:=\Ind_{N_G H}^G \bigg(\bigotimes_{\SET{K\le N_G H}{H\not\le K}}\kos[N_{G}(H)]{K}\bigg).
\end{equation}
(Note that we use plain induction here, not tensor-induction as in \Cref{Cons:kos}.) If $H\normaleq G$ is normal this $\tHG$ is simply the object displayed in~\eqref{eq:y-normal}.
\end{Not}
\begin{Cor}
\label{Cor:Im(psi^H)}%
Let $H\le G$ be a $p$-subgroup. Then the continuous map
\[
\psi^{H\inn G}=\Spc(\Psi^{H\inn G})\colon \Spc(\cK(\WGH))\to\SpcKG
\]
is a closed map, whose image is $\supp(\tHG)$ where $\tHG$ is as in~\eqref{eq:zul}.
\end{Cor}
\begin{proof}
By definition $\Psi^{H\inn G}=\Psi^{H\inn N_G H}\circ\Res^G_{N_G H}$. We know the map induced on spectra by the second functor~$\Psi^{H\inn N_G H}$ by \Cref{Prop:Im(psi^N)} and we can describe what happens under the closed map~$\Spc(\Res)$ by \Cref{Prop:Spc-Res}.
\end{proof}
We record the answer to a question stated in the Introduction:
\begin{Cor}
\label{Cor:supp(Kac)}%
The support of the tt-ideal of acyclics~$\Kac(G)$ is the union of the images of the modular $H$-fixed-points maps~$\psi^H$, for non-trivial $p$-subgroups~$H\le G$.
\end{Cor}
\begin{proof}
The points of~$\SpcKG$ are of the form~$\cP(H,\gp)$. Such primes belong to~$\VG=\SET{\cP(1,\gq)}{\gq\in\VG}$ if and only if~$H$ is trivial. The complement is then $\Supp(\Kac(G))$. Hence $\Supp(\Kac(G))\subseteq \cup_{H\neq 1}\Img(\psi^H)$. Conversely, for every $p$-subgroup $H\neq 1$, the object~$\tHG$ of \Cref{Cor:Im(psi^H)} is acyclic, since the tensor is non-empty and any $\kos[N_G H]{K}$ is acyclic. So $\Img(\psi^H)\subseteq\Supp(\Kac(G))$.
\end{proof}
We wrap up this section about the spectrum by describing all closed points.
\begin{Rem}
\label{Rem:Db(kG)-local}%
Recall that in tt-geometry closed points~$\cM\in \Spc(\cK)$ are exactly the minimal primes for inclusion. Also every prime contains a minimal one.
For instance, the tt-category $\Db(\kkG)$ is local, with a unique closed point~$0=\Ker(\Db(\kkG)\to \Db(\kk))$. (In terms of homogeneous primes in~$\Spech(\rmH^\sbull(G,\kk))$ the zero tt-ideal~$\gp=0$ corresponds to the closed point~$\gp^\sbull=\rmH^+(G,\kk)$.)
\end{Rem}
\begin{Def}
\label{Def:m_H}%
Let $H\le G$ be a $p$-subgroup. (This definition only depends on the conjugacy class of~$H$ in~$G$.)
By \Cref{Prop:Psi-Res}, the following diagram commutes
\begin{equation}
\label{eq:F_H}%
\vcenter{\xymatrix@C=4em{
\cK(G) \ar[r]^-{\Res^G_H} \ar[d]_-{\Psi^{H\inn G}} \ar[rd]|-{\ \bbF^H\ }
& \cK(H) \ar[d]^-{\Psi^{H\inn H}}
\\
\cK(\Weyl{G}{H}) \ar[r]_-{\Res^{\WGH}_1}
& \cK(1)=\Db(\kk).\kern-3em
}}
\end{equation}
We baptize $\bbF^H=\bbF^{H\inn G}$ the diagonal. Its kernel is one of the primes of \Cref{Def:P(H;p)}
\begin{equation}
\label{eq:M(H)}%
\cM(H)=\cM_G(H):=\Ker(\bbF^H)=\cP_G(H,0)
\end{equation}
where $0\in\Spc(\Db(\kk(\WGH)))$ is the zero tt-ideal, \ie the unique closed point of the cohomological open~$\Vee{\WGH}$ of the Weyl group.
(See \Cref{Rem:Db(kG)-local}.)
We can think of $\bbF^H\colon \cK(G)\to \Db(\kk)$ as a tt-residue field functor at the (closed) point~$\cM(H)$.
\end{Def}
\begin{Exa}
\label{Exa:M(1)}%
For $H=1$, we have $\cM(1)=\Ker\big(\Res^G_1:\cK(G)\to \Db(\kk)\big)=\Kac(G)$. In other words, $\cM(1)=\Upsilon_G\inv(0)$ is the image under the open immersion $\upsilon_G\colon \VG\hook \SpcKG$ of \Cref{Prop:VeeG} of the unique closed point~$0\in\VG$ of \Cref{Rem:Db(kG)-local}. In general, a closed point of an open is not necessarily closed in the ambient space. Here $\cM(1)$ is closed since by definition $\{\cM(1)\}=\Img(\rho^G_1)$ where $\rho^G_1=\Spc(\Res^G_1)$. By \Cref{Prop:Spc-Res}, we know that $\Img(\rho^G_1)=\supp(\kk(G))$ is closed.
\end{Exa}
\begin{Exa}
\label{Exa:M(G)}%
For $H=G$ a $p$-group, we can give generators of the closed point
\[
\cM(G)=\ideal{\kk(G/K)\mid K\neq G}.
\]
As $\cM(G)=\ker(\Psi^G\colon\cK(G)\to\Db(\kk))$, the inclusion~$\supseteq$ follows from \Cref{Prop:fixed-pts}.
For~$\subseteq$, let $X\in\cM(G)$ be a complex that vanishes under~$\Psi^G$.
Splitting the module~$X_n$ in each homological degree~$n$ into the direct sum of a trivial permutation module (\ie a $\kk$-vector space with trivial action) and of modules~$\kk(G/K)$ with $K\neq G$, and noting that the acyclic complex~$\Psi^G(X)$ of $\kk$-vector spaces is contractible, \Cref{Lem:conservativity} shows that $X$ is homotopy equivalent to a complex in the additive category generated by the $\kk(G/K)$ with $K\neq G$.
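For instance, $\cM(C_p)=\ideal{\kk C_p}$ is generated by the free module and, for $p=2$, $\cM(C_2\times C_2)$ is generated by the three modules~$\kk(G/H)$ with $|H|=2$, the regular representation being isomorphic to the tensor product of any two of them.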
\end{Exa}
\begin{Cor}
\label{Cor:closed-pts}%
The closed points of $\SpcKG$ are exactly the tt-primes~$\cM_G(H)$ of \eqref{eq:M(H)} for the $p$-subgroups~$H\le G$. Furthermore, we have $\cM_G(H)=\cM_G(H')$ if and only if~$H$ is conjugate to~$H'$ in~$G$.
\end{Cor}
\begin{proof}
Let us first verify that $\cM_G(H)$ is closed for every $p$-subgroup~$H\le G$. For $H=1$, we checked it in \Cref{Exa:M(1)}. For $H\neq 1$, we have $\cM_G(H)=\cP_G(H,0)=\psi^{H}(\cM_{\WGH}(1))$. This gives the result since $\cM_{\WGH}(1)$ is closed in~$\Spc(\cK(\WGH))$, by \Cref{Exa:M(1)} again, and since $\psi^H$ is a closed map by \Cref{Cor:Im(psi^H)}.
Now, every point $\gp\in\Vee{\WGH}$ admits $0$ in its closure in~$\Spc(\Db(\kk(\WGH)))=\Vee{\WGH}$. (See \Cref{Rem:Db(kG)-local}.) By continuity of~$\check{\psi}^H\colon \Vee{\WGH}\to \SpcKG$, it follows that $\check{\psi}^H(0)=\cM_G(H)$ belongs to the closure of~$\check{\psi}^H(\gp)=\cP_G(H,\gp)$, which proves that the $\cM_G(H)$ are the only closed points.
We already saw that $\cP(H,0)=\cP(H',0)$ implies $H\sim_G H'$, in \Cref{Thm:all-points}.
\end{proof}
\begin{Prop}
\label{Prop:fibered}%
For every $p$-subgroup~$H\le G$, consider the subset
\[
\Vee{G}(H):=\Img(\check{\psi}^H)=\check{\psi}^H(\Vee{\WGH})
\]
of~$\Spc(\cK(G))$. Then $\cM_G(H)$ is the unique closed point of~$\SpcKG$ that belongs to~$\Vee{G}(H)$.
We have a set-partition indexed by conjugacy classes of $p$-subgroups
\begin{equation}
\label{eq:SpcKG-partition}%
\SpcKG=\coprod_{H\in(\Sub_p\!G)/G}\ \Vee{G}(H)
\end{equation}
where each $\Vee{G}(H)$ is open in its closure.
\end{Prop}
\begin{proof}
The partition is immediate from \Cref{Thm:all-points}.
Each subset $\Vee{G}(H)=\SET{\cP(H,\gp)}{\gp\in\Vee{\WGH}}$ is a subset of the closed set~$\Img(\psi^H)$.
By \Cref{Cor:supp(Kac)} and \Cref{Prop:Spc-Psi}, the complement of~$\Vee{G}(H)$ in~$\Img(\psi^H)$ consists of the images $\Img(\psi^K)$ for every `further'~$p$-group~$K$, \ie such that $H\lneqq K\le N_G(H)$ and these are closed by \Cref{Cor:Im(psi^H)}.
Thus~$\Vee{G}(H)$ is an open in the closed subset~$\Img(\psi^H)$.
\end{proof}
\begin{Rem}
\label{Rem:filtered}%
\label{Rem:psi^H-V(H)}%
We can use~\eqref{eq:SpcKG-partition} to define a map~$\SpcKG\to (\Sub_p\!G)/G$.
\Cref{Cor:P<P'} tells us that this map is continuous for the (opposite poset) topology on $(\Sub_p\!G)/G$ whose open subsets are the ones stable under subconjugacy.
Moreover, for $H\le G$ a $p$-subgroup, the square
\[
\xymatrix@R=1.5em{
\Spc(\cK(\WGH))
\ar[d]
\ar[r]_-{\psi^H}
&
\SpcKG
\ar[d]
\\
(\Sub_p(\WGH))/(\WGH)\
\ar@{^(->}[r]
&
(\Sub_p\!G)/G
}
\]
commutes, where the bottom horizontal arrow is the canonical inclusion sending the class of~$K/H$, for $H\le K\le N_G(H)$, to the class of~$K$ in~$G$.
This follows from \Cref{Prop:Spc-Psi}.
An immediate consequence is that while $\psi^H$ might not be injective in general, we still have $(\psi^H)\inv(\Vee{G}(H))=\Vee{\WGH}$.
\end{Rem}
\section{Cyclic, Klein-four and quaternion groups}
\label{sec:examples}%
Although the full treatment~\cite{balmer-gallauer:TTG-Perm-II} of the topology of~$\SpcKG$ will require more technology, we can already present the answer for small groups.
Some of the most interesting phenomena are already visible once we reach $p$-rank two in \Cref{Exa:Klein4}.
Let us start with the easy examples.
\begin{Not}
\label{Not:W*}%
Fix an integer $n\geq 0$ and consider the following space $\W^n$ consisting of $2n+1$ points, with specialization relations pointing upward as usual:
\begin{equation}
\label{eq:W*}%
\W^n=\qquad
\vcenter{\xymatrix@R=1em@C=.7em{
{\scriptstyle\gm_0\kern-1em}
& {\bullet} \ar@{-}@[Gray] '[rd] '[rr] '[drrr]
&&
{\bullet}
&{\kern-1em\scriptstyle\gm_1}
&&{\scriptstyle\gm_{n-1}\kern-1em}&\bullet&&\bullet&{\kern-1em\scriptstyle\gm_n}
\\
&{\scriptstyle\gp_1\kern-1em}& {\bullet}&&& {\cdots}& \ar@{-}@[Gray] '[ru] '[rr] '[rrru] &&\bullet&{\kern-1em}\scriptstyle\gp_{n}}}
\end{equation}
The closed subsets of~$\W^n$ are simply the specialization-closed subsets, \ie those that contain a $\gp_i$ only if they contain~$\gm_{i-1}$ and~$\gm_i$.
So the $\gm_i$ are closed points and the $\gp_i$ are generic points of the $n$ irreducible V-shaped closed subsets~$\{\gm_{i-1},\gp_i,\gm_i\}$.
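For instance, $\W^0$ is a single point, whereas $\W^1$ consists of two closed points $\gm_0,\gm_1$ and one generic point~$\gp_1$ whose closure is all of~$\W^1$.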
\end{Not}
\begin{Prop}
\label{Prop:cyclic}%
Let $G=C_{p^n}$ be a cyclic $p$-group. Then $\Spc(\cK(C_{p^n}))$ is homeomorphic to the space~$\W^n$ of~\eqref{eq:W*}.
More precisely, if we denote by $1=N_n<N_{n-1}<\cdots<N_0=G$ the $n+1$ subgroups of~$C_{p^n}$\,{\rm(\footnote{\,The numbering of the~$N_i$ keeps track of the index, that is, $G/N_i\cong C_{p^i}$. This choice will allow simple formulas for inflation and fixed-points, and for procyclic groups in Part~III~\cite{balmer-gallauer:TTG-Perm-III}.})},
then the points~$\gp_i$ and~$\gm_i$ in~$\SpcKG$ are given by
\[
\gm_i=(\check{\Psi}^{N_i})\inv(0)
\qquadtext{and}
\gp_i=(\check{\Psi}^{N_i})\inv(\Dperf(\kk(G/N_i)))
\]
where $\check{\Psi}^{N}=\Upsilon_{G/N}\circ\Psi^N\colon \cK(G)\to\cK(G/N)\onto\Db(\kk(G/N))$ is the tt-functor~\eqref{eq:bar-Psi}.
\end{Prop}
\begin{proof}
\label{Exa:Cyclic-revisited}%
By \Cref{Prop:fibered}, we have a partition of the spectrum into subsets
\[
\SpcKG=\coprod_{i=0}^n \ \Vee{G}(N_i)=\coprod_{i=0}^n \ \Img(\check{\psi}^{N_i})
\]
and each $\Vee{G}(N_i)$ is homeomorphic to~$\Spc(\Db(\kk G/N_i))=\Vee{G/N_i}$.
For $i>0$, each $\Vee{G}(N_i)$ is a Sierpi\'{n}ski space $\{\gp_i\rightsquigarrow\gm_i=\cM(N_i)\}$, while $\Vee{G}(N_0)$ is a singleton set~$\{\gm_0:=\cM(G)\}$.
In other words, we know the set~$\SpcKG$ has the announced~$2n+1$ points and the unmarked specializations~$\gp_i\rightsquigarrow\gm_i$ below
\begin{equation}
\label{eq:W*-temp}%
\vcenter{\xymatrix@R=1em@C=.7em{
{\scriptstyle\gm_0\kern-1em}
& {\bullet} \ar@{-}@[Gray] '[rd]|-{?} '[rr] '[rrrd]|-{?}
&& {\bullet} &{\kern-1em\scriptstyle\gm_1}
& \cdots
&{\scriptstyle\gm_{i-1}\kern-1em}&\bullet \ar@{-}@[Gray] '[rd]|-{?} '[rr]
&&\bullet&{\kern-1em\scriptstyle\gm_i}
& \cdots
&&\bullet&{\kern-1em\scriptstyle\gm_{n-1}}
&\bullet&{\kern-1em\scriptstyle\gm_n}
\\
&{\scriptstyle\gp_1\kern-1em}& {\bullet}
&&& {\cdots}
&&& \bullet&{\kern-1em}\scriptstyle\gp_{i}
&& {\cdots}
& \ar@{-}@[Gray] '[ru] '[rr]|-{?} '[rrru]
&&\bullet&{\kern-1em}\scriptstyle\gp_{n}}}
\end{equation}
We need to elucidate the topology.
Since all~$\gm_i=\cM(N_i)$ are closed (\Cref{Cor:closed-pts}), we only need to see where each $\gp_i$ specializes for~$1\le i\le n$.
By \Cref{Cor:P<P'}, the point $\gp_i=\cP(N_i,\gp)$ can only specialize to a $\cP(N_j,\gq)$ for $N_j\ge N_i$, that is, to the points $\gm_j$ or~$\gp_j$ for~$j\le i$.
On the other hand, direct inspection using \eqref{eq:fixed-pts} shows that $\supp(\kk(G/N_{i-1}))=\SET{\gm_j}{j\ge i-1}\cup\SET{\gp_j}{j\ge i}$.
This closed subset contains~$\gp_i$ hence its closure.
Combining those two observations, we have
\[
\adhpt{\gp_i}\subseteq \SET{\gm_j,\gp_j}{j\le i}\cap \big(\{\gm_{i-1}\}\cup\SET{\gm_j,\gp_j}{j\ge i}\big)=\{\gm_{i-1},\gp_{i},\gm_i\}.
\]
If any of the $\adhpt{\gp_{i}}$ were smaller than~$\{\gm_{i-1},\gp_{i},\gm_i\}$, that is, if one of the specialization relations $\gp_{i}\rightsquigarrow\gm_{i-1}$ marked with~`$?$' in~\eqref{eq:W*-temp} did not hold, then $\SpcKG$ would be a disconnected space. This would force the rigid tt-category~$\cK(G)$ to be the product of two tt-categories, which is clearly absurd, \eg because $\End_{\cK(G)}(\unit)=\kk$.
\end{proof}
With this identification, we can record the maps $\psi^H$ of~\Cref{Def:psi} and the maps $\rho_K$ and~$\pi^{G/N}$ of \Cref{Rem:Spc-Res}, that relate different cyclic $p$-groups.
\begin{Lem}
\label{Lem:Spc-cyclic}%
Let $n\ge 0$. We identify~$\Spc(\cK(C_{p^n}))$ with~$\W^n$ as in \Cref{Prop:cyclic}.
\begin{enumerate}[label=\rm(\alph*), ref=\rm(\alph*)]
\item
\label{it:Spc-cyclic.fixed-pts}%
Let $0\le i\le n$ and $H=N_i=C_{p^{n-i}}\le C_{p^n}$, so that $C_{p^n}/H\cong C_{p^i}$.
The map~$\psi^H\colon \W^i\to \W^n$ induced by modular fixed points $\Psi^H$ is the inclusion
\[
\psi\colon\W^i\hook\W^n
\]
that catches the left-most points: $\gp_\ell\mapsto \gp_\ell$ and~$\gm_\ell\mapsto \gm_\ell$.
\smallbreak
\item
\label{it:Spc-cyclic.res}%
Let $0\le j\le n$ and $K=C_{p^j}\le C_{p^n}$.
The map~$\rho_K\colon \W^j\to \W^n$ induced by restriction $\Res_K$ is the inclusion
\[
\rho:\W^{j}\hook\W^n
\]
that catches the right-most points: $\gm_\ell\mapsto\gm_{\ell+n-j}$ and $\gp_\ell\mapsto\gp_{\ell+n-j}$.
\smallbreak
\item
\label{it:Spc-cyclic.infl}%
Let $0\le m\le n$. Inflation along $C_{p^n}\onto C_{p^m}$ induces on spectra the map
\[
\pi\colon\W^n\onto\W^m
\]
that retracts $\psi$ and sends everything else to $\gm_m$, that is, for all~$0\le \ell\le n$
\[
\qquad\pi(\gp_\ell)=\left\{
\begin{array}{cl}
\gp_\ell & \textrm{if }\ell\le m
\\
\gm_{m} & \textrm{otherwise}
\end{array}\right.
\qquadtext{and}
\pi(\gm_\ell)=\left\{
\begin{array}{cl}
\gm_\ell & \textrm{if }\ell\le m
\\
\gm_{m} & \textrm{otherwise}.
\end{array}\right.
\]
\end{enumerate}
\end{Lem}
\begin{proof}
Part~\ref{it:Spc-cyclic.fixed-pts} follows from \Cref{Prop:Spc-Psi}, while parts~\ref{it:Spc-cyclic.res} and~\ref{it:Spc-cyclic.infl} follow from \Cref{Rem:Spc-Res}.
\end{proof}
Let us now move to higher $p$-rank.
\begin{Exa}
\label{Exa:elem-ab}%
Let $E=(C_p)^{\times r}$ be the elementary abelian $p$-group of rank~$r$.
We know that $\Vee{E}=\Spc(\Db(\kk E))\cong\Spech(\rmH^\sbull(E,\kk))$ is homeomorphic to the space
\begin{equation}
\label{eq:Aff}%
\Aff{r}:=\Spech(\kk[x_1,\ldots,x_r]),
\end{equation}
that is, projective space~$\bbP^{r-1}_\kk$ with one closed point `on top'.
For instance, $\Aff{0}$ is a single point and $\Aff{1}$ is a 2-point Sierpi\'{n}ski space.
The example of~$r=1$ (see \Cref{Prop:cyclic} for $n=1$) is not predictive of what happens in higher rank.
Indeed, by \Cref{Prop:fibered}, the closed complement $\Supp(\Kac(E))$ is far from discrete in general.
It contains~$\frac{p^r-1}{p-1}$ copies of~$\Aff{r-1}$ and more generally $|\mathrm{Gr}_p(d,r)|$ copies of the $d$-dimensional~$\Aff{d}$ for $d=0,\ldots,r-1$, where $|\mathrm{Gr}_p(d,r)|$ is the number of rank-$d$ subgroups of~$(C_p)^{\times r}$.
Here is a `low-resolution' picture for Klein-four $r=p=2$:
\begin{equation}
\label{eq:C_2xC_2-artist}%
\vcenter{\xymatrix@C=3.2em@R=.8em{
{\color{Brown} \Aff{0}} \ar@{--}@[Gray][rd] \ar@{--}@[Gray][rrd] \ar@{--}@[Gray][rrrd] \ar@/_.5em/@{--}@[Gray][rrrrdd]
\\
& {\color{Brown} \Aff{1}} \ar@/_.5em/@{--}@[Gray][rrrd]
& {\color{Brown} \Aff{1}} \ar@{--}@[Gray][rrd]
& {\color{Brown} \Aff{1}} \ar@{--}@[Gray][rd]
\\
&&&& {\color{OliveGreen} \Aff{2}}
}}\kern2em
\end{equation}
The dashed lines indicate `partial' specialization relations: \emph{Some} points in the lower variety specialize to \emph{some} points in the higher one; see~ \Cref{Cor:P<P'}.
In rank~3, the similar `low-resolution' picture of~$\Spc(\cK(C_2^{\times 3}))$, still for $p=2$, looks as follows:
\begin{equation}
\label{eq:C_2xC_2xC_2-artist}%
\vcenter{\xymatrix@R=1em@C=.3em{
&&&&&& {\color{Brown}\Aff{0}}
\ar@{--}@[Gray][lllllld]\ar@{--}@[Gray][lllld]\ar@{--}@[Gray][lld]\ar@{--}@[Gray][d]\ar@{--}@[Gray][rrd]\ar@{--}@[Gray][rrrrd]\ar@{--}@[Gray][rrrrrrd]
\ar@{--}@[Gray][llllldd]\ar@{--}@[Gray][llldd]\ar@{--}@[Gray][ldd]\ar@{--}@[Gray][rdd]\ar@{--}@[Gray][rrrdd]\ar@{--}@[Gray][rrrrrdd]\ar@{--}@[Gray][rrrrrrrdd]
\ar@{--}@[Gray][rddd]
\\
{\color{Brown}\Aff{1}}
\ar@{--}@[Gray][rd]\ar@{--}@[Gray][rrrd]\ar@{--}@[Gray][rrrrrd]
&& {\color{Brown}\Aff{1}}
\ar@{--}@[Gray][ld]\ar@{--}@[Gray][rrrrrd]\ar@{--}@[Gray][rrrrrrrd]
&& {\color{Brown}\Aff{1}}
\ar@{--}@[Gray][llld]\ar@{--}@[Gray][rrrrrrrd]\ar@{--}@[Gray][rrrrrrrrrd]
&& {\color{Brown}\Aff{1}}
\ar@{--}@[Gray][llld]\ar@{--}@[Gray][rd]\ar@{--}@[Gray][rrrrrrrd]
&& {\color{Brown}\Aff{1}}
\ar@{--}@[Gray][llld]\ar@{--}@[Gray][ld]\ar@{--}@[Gray][rrrd]
&& {\color{Brown}\Aff{1}}
\ar@{--}@[Gray][llllllld]\ar@{--}@[Gray][ld]\ar@{--}@[Gray][rd]
&& {\color{Brown}\Aff{1}}
\ar@{--}@[Gray][llllllld]\ar@{--}@[Gray][llld]\ar@{--}@[Gray][rd]
\\
& {\color{Brown}\Aff{2}}
&& {\color{Brown}\Aff{2}}
&& {\color{Brown}\Aff{2}}
&& {\color{Brown}\Aff{2}}
&& {\color{Brown}\Aff{2}}
&& {\color{Brown}\Aff{2}}
&& {\color{Brown}\Aff{2}}
\\
&&&&&&& {\color{OliveGreen}\Aff{3}}
\ar@{--}@[Gray][llllllu]\ar@{--}@[Gray][llllu]\ar@{--}@[Gray][llu]\ar@{--}@[Gray][u]\ar@{--}@[Gray][rru]\ar@{--}@[Gray][rrrru]\ar@{--}@[Gray][rrrrrru]
\ar@{--}@[Gray][llllllluu]\ar@{--}@[Gray][llllluu]\ar@{--}@[Gray][llluu]\ar@{--}@[Gray][luu]\ar@{--}@[Gray][ruu]\ar@{--}@[Gray][rrruu]\ar@{--}@[Gray][rrrrruu]
}}
\end{equation}
Each $\Aff{d}$ has Krull dimension~$d\in\{0,1,2,3\}$ and contains one of 16 closed points.
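Indeed, the group $(C_2)^{\times 3}$ has $1$, $7$, $7$ and~$1$ subgroups of rank $0$, $1$, $2$ and~$3$, respectively, giving $1+7+7+1=16$ pieces in the above picture, one for each subgroup~$H$; the piece indexed by a subgroup~$H$ of rank~$d$ is a copy of~$\Aff{3-d}$ and its unique closed point is~$\cM(H)$.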
\end{Exa}
Let us now discuss the example of the Klein four-group and `zoom in' on~\eqref{eq:C_2xC_2-artist} to display every point at its actual height, as well as all specialization relations.
\begin{Exa}
\label{Exa:Klein4}%
Let $E=C_2\times C_2$ be the Klein four-group, in characteristic~$p=2$.
The spectrum $\Spc(\cK(E))$ looks as follows, with colors matching those of~\eqref{eq:C_2xC_2-artist}:
\begin{equation}\label{eq:C_2xC_2}%
\kern2em\vcenter{\xymatrix@C=.0em@R=.4em{
{\color{Brown}\overset{\cM(E)}{\bullet}} \ar@{-}@[Gray][rrdd] \ar@{-}@[Gray][rrrrdd] \ar@{-}@[Gray][rrrrrrdd] \ar@{~}@<.1em>@[Gray][rrrrrrrrdd] &&& {\color{Brown}\overset{\cM(N_0)}{\bullet}} \ar@{-}@[Brown][ldd] \ar@{-}@<.1em>@[Gray][rrrrrrdd]
&& {\color{Brown}\overset{\cM(N_1)}{\bullet}} \ar@{-}@[Brown][ldd] \ar@{-}@[Gray][rrrrrrdd]
&& {\color{Brown}\overset{\cM(N_\infty)}{\bullet}} \ar@{-}@[Brown][ldd] \ar@{-}@[Gray][rrrrrrdd]
&& {\color{OliveGreen}\overset{\cM(1)}{\bullet}} \ar@{~}@[OliveGreen][ldd] \ar@{-}@[OliveGreen][dd] \ar@{-}@[OliveGreen][rrdd] \ar@{-}@[OliveGreen][rrrrdd]
\\ \\
&& {\color{Brown}\underset{\cP(N_0)}{\bullet}} \ar@{-}@<-.4em>@[Gray][rrrrrrrdd]
&& {\color{Brown}\underset{\cP(N_1)}{\bullet}} \ar@{-}@<-.1em>@[Gray][rrrrrdd]
&& {\color{Brown}\underset{\cP(N_\infty)}{\bullet}} \ar@{-}@[Gray][rrrdd]
&\ar@{.}@[OliveGreen][r]
& {\scriptstyle\color{OliveGreen}\tinyPone} \ar@{~}@[OliveGreen][rdd] \ar@{.}@[OliveGreen][rrrrrrr]
& {\color{OliveGreen}\bullet_{0}} \ar@{-}@[OliveGreen][dd]
&& {\color{OliveGreen}\bullet_{1}} \ar@{-}@[OliveGreen][lldd]
&& {\color{OliveGreen}\bullet_{\infty}} \ar@{-}@[OliveGreen][lllldd]
&&
\\ \\
&&&&&&&&& {\color{OliveGreen}\bullet_{\cP_0}}\kern-1em
&&&&
}}\kern-1.11em
\end{equation}
The green part on the right is the cohomological open~$\Vee{E}\simeq \Aff{2}$ as in \eqref{eq:Aff}, that is, a~$\bbP^1$ with a closed point on top.
We marked with~${\color{OliveGreen}\bullet}$ the closed point~$\cM(1)$, the three $\bbF_{\!2}$-rational points $0$, $1$, $\infty$ of~$\bbP^1$ and its generic point~$\cP_0$.
The notation $\Pone$ and the dotted line indicate $\bbP^1\sminus\{0,1,\infty,\cP_0\}$.
The specializations involving points of~$\Pone$ are displayed with undulated lines.
For instance, the gray undulated line indicates that \emph{all} points of~$\Pone$ specialize to~$\cM(E)$.
The (brown) part on the left is the support of the acyclics.
It contains the remaining four closed points, $\cM(E)$, and~$\cM(N_0),\ \cM(N_1),\ \cM(N_\infty)$ for the rank-one subgroups that we denote~$N_0,N_1,N_\infty<E$ to facilitate matching them with~$0,1,\infty\in\bbP^1(\FF_2)$.
The three Sierpi\'{n}ski subspaces $\{\cP(N_i)\rightsquigarrow\cM(N_i)\}$ are images of~$\Vee{E/N_i}\simeq \Aff{1}$.
The point~$\cM(E)$ is the image of~$\Vee{E/E}\simeq \Aff{0}$.
See \Cref{Prop:fibered}.
The (gray) specializations require additional arguments and will be examined in detail in~\cite{balmer-gallauer:TTG-Perm-II}.
Let us still say a few words.
The specializations between $\cP(N_i)$ and $\cM(E)$ are relatively easy, as one can show that $\{\cM(E),\cP(N_i),\cM(N_i)\}$ is the image of~$\psi^{N_i}$, using our description in the case of~$C_2=E/N_i$.
Similarly, the specializations between the $\bbF_2$-rational points~$0,1,\infty$ and~$\cM(N_0),\cM(N_1),\cM(N_\infty)$ can be verified using~$\rho_{N_i}\colon \Spc(\cK(N_i))\to \Spc(\cK(E))$. The image of the latter is~$\supp(\kk(E/N_i))=\{\cM(N_i),i,\cM(1)\}$, for each~$i\in\{0,1,\infty\}=\bbP^1(\bbF_2)$.
The fact that $\cP_0$ is a generic point for the whole~$\Spc(\cK(E))$ is not too hard either. The real difficulty is to prove that the non-$\bbF_{\!2}$-rational points specialize to the closed point~$\cM(E)$, avoiding the~$\cM(N_i)$ and~$\cP(N_i)$ entirely, as indicated by the undulated gray line between~$\Pone$ and~$\cM(E)$.
These facts can be found in~\cite{balmer-gallauer:TTG-Perm-II}.
\end{Exa}
\begin{Exa}
\label{Exa:Q_8}%
The spectrum of the quaternion group~$Q_8$ is very similar to that of its quotient~$E:=Q_8/Z(Q_8)\cong C_2\times C_2$, as we announced in~\eqref{eq:Q_8}.
The center~$Z:=Z(Q_8)\cong C_2$ is the maximal elementary abelian $2$-subgroup and it follows that $\Res^{Q_8}_{Z}$ induces a homeomorphism $\Vee{C_2}\isoto \Vee{Q_8}$. In other words, $\Vee{Q_8}$ is again a Sierpi\'{n}ski space~$\{\cP,\cM(1)\}$.
On the other hand, the center $Z$ is also the unique minimal non-trivial subgroup.
It follows from~\Cref{Cor:Spc-surjection-derived} and \Cref{Prop:Im(psi^N)} that $\Supp(\Kac(Q_8))$ is the image under the closed immersion~$\psi^Z$ of~$\Spc(\cK(Q_8/Z))$.
It only remains to describe the specialization relations between the cohomological open~$\Vee{Q_8}$ and its closed complement~$\Supp(\Kac(Q_8))$.
Since $\cM(1)\in\Vee{Q_8}$ is also a closed point in~$\Spc(\cK(Q_8))$, we only need to decide where the generic point~$\cP$ of~$\Vee{Q_8}$ specializes in~$\Spc(\cK(Q_8))$.
Interestingly, $\cP$ will not be generic in the whole of~$\Spc(\cK(Q_8))$.
As $\cP$ belongs to $\Img(\rho_{Z})$, it suffices to determine~$\rho_Z(\cM_{C_2}(C_2))$. The preimage of~$\Img(\rho_{Z})=\supp(\kk(Q_8/Z))$ under~$\psi^Z$ is $\supp_{E}(\Psi^Z(\kk(Q_8/Z)))=\supp_E(\kk(E))=\{\cM_E(1)\}$. It follows that $\cP$ specializes to exactly one point:~$\psi^Z(\cM_E(1))=\cM_{Q_8}(Z)$ as depicted in~\eqref{eq:Q_8}.
\end{Exa}
\section{Stratification}
\label{sec:stratification}%
It is by now well-understood how to deduce stratification in the presence of a noetherian spectrum and a conservative theory of supports. We follow the general method of Barthel-Heard-Sanders~\cite{barthel-heard-sanders:stratification-supph,barthel-heard-sanders:stratification-Mackey}.
\begin{Prop}
\label{Prop:Spc-noetherian}%
The spectrum $\SpcKG$ is a noetherian topological space.
\end{Prop}
\begin{proof}
Recall that a space is noetherian if every open is quasi-compact. It follows that the continuous image of a noetherian space is noetherian.
The claim now follows from \Cref{Cor:Spc-surjection-derived}.
\end{proof}
We start with the key technical fact. Recall that coproduct-preserving exact functors between compactly-generated triangulated categories have right adjoints by Brown-Neeman Representability. We apply this to~$\Psi^H$.
\begin{Lem}
\label{Lem:Psi^N_rho}%
Let $N\normaleq G$ be a normal $p$-subgroup and $\Psi^N_\rho\colon \DPerm(G/N;\kk)\to \DPerm(G;\kk)$ the right adjoint of modular $N$-fixed points~$\Psi^N\colon \DPerm(G;\kk)\to \DPerm(G/N;\kk)$. Then $\Psi^N_\rho(\unit)$ is isomorphic to a complex~$s$ in $\perm(G;\kk)$, concentrated in non-negative degrees
\[
s=\quad\big(\cdots \to s_n \to \cdots \to s_2 \to s_1 \to s_0\to 0\to 0\cdots\big)
\]
with $s_0=\kk$ and $s_1=\oplus_{H\in\cF_N}\kk(G/H)$, where $\cF_N=\SET{H\le G}{N\not\le H}$.
\end{Lem}
\begin{proof}
Following the recipe of Brown-Neeman Representability~\cite{neeman:brown}, we give an explicit description of~$\Psi^N_\rho(\unit)$ as the homotopy colimit in~$\cT(G)$ of a sequence of objects $x_0=\unit\xto{f_0} x_1\xto{f_1} \cdots \to x_n\xto{f_n} x_{n+1}\to \cdots$ in~$\cK(G)$. This sequence is built together with maps~$g_n\colon \Psi^N(x_n)\to \unit$ in~$\cK(G/N)$ making the following commute
\begin{equation}
\label{eq:aux-strat-1}%
\vcenter{\xymatrix@C=2em{
\Psi^N(x_0)=\unit \ar@{=}@/_1em/[rrrd]_(.3){g_0=\id\ } \ar[rr]^-{\Psi^N(f_0)}_-{}
&& {\cdots} \ar[r]|-{} \ar@{}[rd]|-{\cdots} & \Psi^N(x_n) \ar[rr]^-{\Psi^N(f_n)} \ar[d]_-{g_n} && \Psi^N(x_{n+1}) \ar[lld]^(.4){g_{n+1}} \ar@{}[r]|-{\cdots}&\ar@{}[lld]|-{\cdots}
\\
&&& \quad \unit \quad &
}}
\end{equation}
Note that such $g_n$ yield homomorphisms, natural in $t\in\DPerm(G;\kk)$, as follows
\begin{equation}
\label{eq:aux-strat-2}%
\alpha_{n,t}\colon\Hom_{G}(t,x_n)\xto{\Psi^N}
\Hom_{G/N}(\Psi^N(t),\Psi^N(x_n))\xto{(g_n)_*}
\Hom_{G/N}(\Psi^N(t),\unit)\kern-1em
\end{equation}
where we abbreviate $\Hom_G$ for $\Hom_{\DPerm(G;\kk)}$.
We are going to build our sequence of objects $x_0\to x_1\to \cdots$ and the maps~$g_n$ so that for each~$n\ge 0$
\begin{equation}\label{eq:aux-strat-3}%
\textrm{$\alpha_{n,t}$ is an isomorphism for every $t\in \SET{\Sigma^i\,\kk(G/H)}{i< n,\ H\le G}$.}
\end{equation}
It follows that, if we set~$x_\infty=\hocolim_n x_n$ and $g_\infty\colon \Psi^N(x_\infty)\cong\hocolim_n\Psi^N(x_n)\to \unit$ the colimit of the~$g_n$, then the map
\[
\alpha_{t}\colon\Hom_{G}(t,x_\infty)\xto{\Psi^N}
\Hom_{G/N}(\Psi^N(t),\Psi^N(x_\infty))\xto{(g_\infty)_*}
\Hom_{G/N}(\Psi^N(t),\unit)
\]
is an isomorphism for all~$t\in\SET{\Sigma^i\,\kk(G/H)}{i\in \bbZ,\ H\le G}$.
Since the $\kk(G/H)$ generate~$\DPerm(G;\kk)$, it follows that $\alpha_{t}$ is an isomorphism for all~$t\in\DPerm(G;\kk)$.
Hence~$x_\infty=\hocolim_n x_n$ is indeed the image of~$\unit$ by the right adjoint~$\Psi^N_\rho$.
Let us construct these sequences $x_n,\ f_n$ and~$g_n$, for $n\ge 0$.
In fact, every complex~$x_n$ will be concentrated in degrees between zero and~$n$, so that \eqref{eq:aux-strat-3} is trivially true for $n=0$ (that is, for $i<0$), both source and target of~$\alpha_{n,t}$ being zero in that case.
Furthermore, $x_{n+1}$ will only differ from $x_n$ in degree~$n+1$, with $f_n$ being the identity in degrees~$\le n$. So the verification of~\eqref{eq:aux-strat-3} for~$n+1$ will boil down to checking the cases of~$t=\Sigma^i\,\kk(G/H)$ for~$i=n$.
As indicated, we set $x_0=\unit$ and $g_0=\id$. We define $x_1$ by the exact triangle
\[
s_1\xto{\eps} \unit \xto{f_0} x_1 \to \Sigma(s_1)
\]
where $s_1:=\oplus_{H\in\cF_N}\kk(G/H)$ and $\eps_H\colon \kk(G/H)\to \kk$ is the usual map. Note that $\Psi^N(s_1)=0$ by \eqref{eq:fixed-pts}, hence $\Psi^N(f_0)\colon \unit \to \Psi^N(x_1)$ is an isomorphism. We call~$g_1$ its inverse. One verifies that~\eqref{eq:aux-strat-3} holds for~$n=1$: For $t=\kk(G/H)$ with $H\in \cF_N$, both the source and target of~$\alpha_{1,t}$ are zero thanks to the definition of~$s_1$. For the case where $H\ge N$, there are no non-zero homotopies for maps $\kk(G/H)\to x_1$ thanks to \Cref{Lem:infl-stab}.
Let us construct $x_{n+1}$ and $g_{n+1}$ for $n\ge 1$. For every $H\le G$ let $t=\Sigma^n(\kk(G/H))$ and choose generators $h_{H,1},\ldots,h_{H,r_H}\colon t\to x_n$ of the $\kk$-module~$\Hom_{G}(t,x_n)$, source of~$\alpha_{n,t}$.
Define $s_{n+1}=\oplus_{H\le G}\oplus_{i=1}^{r_H}\kk(G/H)$ in~$\perm(G;\kk)$, a sum of $r_H$ copies of~$\kk(G/H)$ for every~$H\le G$, and define $h_n\colon \Sigma^n(s_{n+1})\to x_n$ as being $h_{H,i}$ on the $i$-th summand~$\Sigma^n\,\kk(G/H)$. Define $x_{n+1}$ as the cone of~$h_n$ in~$\cK(G)$:
\begin{equation}
\label{eq:aux-strat-4}%
\Sigma^n(s_{n+1})\xto{h_n} x_n\xto{f_n} x_{n+1} \to \Sigma^{n+1}(s_{n+1}).
\end{equation}
Note that $x_{n+1}$ only differs from~$x_n$ in homological degree~$n+1$ as announced. Since $n\ge 1$, we get $\Hom_{G/N}(\Psi^N(x_{n+1}),\unit)\cong \Hom_{G/N}(\Psi^N(x_n),\unit)$ and there exists a unique $g_{n+1}\colon \Psi^{N}(x_{n+1})\to \unit$ making~\eqref{eq:aux-strat-1} commute.
It remains to verify that $\alpha_{n+1,t}$ is an isomorphism for $t\in \SET{\Sigma^n\,\kk(G/H)}{H\le G}$. Note that the target of this map is zero.
Applying $\Hom_G(\Sigma^{n}\,\kk(G/H),-)$ to the exact triangle~\eqref{eq:aux-strat-4} shows that the source of~$\alpha_{n+1,t}$ is also zero, by construction. Hence~\eqref{eq:aux-strat-3} holds for~$n+1$.
This realizes the desired sequence and therefore $\Psi^N_\rho(\unit)\simeq\hocolim_n(x_n)$ has the following form:
\[
\cdots \to s_{n} \to \cdots \to s_2 \to s_1 \to \kk \to 0\to 0\to\cdots
\]
where $s_1=\oplus_{H\in\cF_N}\kk(G/H)$ and $s_{n}\in\perm(G;\kk)$ for all~$n$.
\end{proof}
\begin{Rem}
The above description of~$\Psi^N_\rho(\unit)$ gives a formula for the right adjoint $\Psi^N_\rho\colon \DPerm(G/N;\kk)\to \DPerm(G;\kk)$ on all objects.
Indeed, for every $t\in\DPerm(G/N;\kk)$, we have a canonical isomorphism in~$\DPerm(G;\kk)$
\[
\Psi^N_\rho(t)\cong \Psi^N_\rho(\Psi^N\Infl^{G/N}_G(t)\otimes\unit)\cong \Infl^{G/N}_G(t)\otimes \Psi^N_\rho(\unit)
\]
using that $\Psi^N\circ\Infl^{G/N}_G\cong\Id$ and the projection formula. In other words, the right adjoint~$\Psi^N_\rho$ is simply inflation tensored with the commutative ring object~$\Psi^N_\rho(\unit)$.
\end{Rem}
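For example, if $G=N=C_p$ then $\cF_N=\{1\}$, so the complex~$s\simeq\Psi^{G}_\rho(\unit)$ of \Cref{Lem:Psi^N_rho} has $s_0=\kk$ and $s_1=\kk(C_p/1)=\kk C_p$, the free module of rank one.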
\begin{Lem}
\label{Lem:stratification}%
Let $H\normaleq G$ be a normal $p$-subgroup and $\Psi^H_\rho\colon \DPerm(G/H;\kk)\to \DPerm(G;\kk)$ the right adjoint of modular $H$-fixed points~$\Psi^H\colon \DPerm(G;\kk)\to \DPerm(G/H;\kk)$. Then the object~$\zul[G]{H}$ displayed in~\eqref{eq:y-normal} belongs to the localizing tt-ideal of~$\DPerm(G;\kk)$ generated by~$\Psi^H_\rho(\unit)$.
\end{Lem}
\begin{proof}
By \Cref{Prop:Im(psi^N)}, we know that the tt-ideal generated by~$\zul[G]{H}$ is exactly $\cap_{K\in\cF_H}\Ker\Res^G_K$.
By Frobenius, the latter is the tt-ideal $\SET{x\in\cK(G)}{s_1\otimes x=0}$ where $s_1=\oplus_{K\in\cF_H}\kk(G/K)$ is the degree one part of the complex~$s\simeq\Psi^H_\rho(\unit)$ of \Cref{Lem:Psi^N_rho}.
We can now conclude by \Cref{Lem:s-generates} applied to this complex~$s$ and $x=\zul[G]{H}$ that $x$ must belong to the localizing tensor-ideal of~$\DPerm(G;\kk)$ generated by~$\Psi^H_\rho(\unit)$.
(Note that $s_0=\unit$ here.)
\end{proof}
Recall from \Cref{Cor:Im(psi^H)} that the map $\psi^H$ has closed image in~$\SpcKG$.
\begin{Prop}
\label{Prop:stratification}%
Let $H\le G$ be a $p$-subgroup and let ${\Psi}^H_\rho\colon \DPerm(\WGH;\kk)\to \DPerm(G;\kk)$ be the right adjoint of~${\Psi}^H\colon \DPerm(G;\kk)\to \DPerm(\WGH;\kk)$. Then the tt-ideal of~$\cK(G)$ supported on the closed subset~$\Img(\psi^H)$ is contained in the localizing tt-ideal of~$\DPerm(G;\kk)$ generated by~${\Psi}^H_\rho(\unit)$.
\end{Prop}
\begin{proof}
Let $N=N_G H$.
By definition, ${\Psi}^{H\inn G}=\Psi^{H\inn N}\circ \Res^G_{N}$ and therefore the right adjoint is ${\Psi}^{H\inn G}_\rho\cong \Ind_{N}^G\circ\Psi^{H\inn N}_\rho$. Since $H\normaleq N$, \Cref{Lem:stratification} applies and we know (see also~\Cref{Prop:Im(psi^N)}) that the generator $\zul[N]{H}$ of the tt-ideal supported on~$\Img(\psi^{H\inn N})$ belongs to $\Loctens{\Psi^{H\inn N}_\rho(\unit)}$ in~$\DPerm(N;\kk)$. Applying~$\Ind_{N}^G$ and using the fact that $\Res^G_{N}$ is surjective up to direct summands (by separability), we see that $\zul[G]{H}\overset{\textrm{def}}{\ =\ }\Ind_{N}^G(\zul[N]{H})$ belongs to~$\Ind_{N}^G(\Loctens{\Psi^{H\inn N}_\rho(\unit)})\subseteq \Loctens{\Ind_{N}^G\Psi^{H\inn N}_\rho(\unit)}=\Loctens{{\Psi}^{H\inn G}_\rho(\unit)}$ in~$\DPerm(G;\kk)$.
\end{proof}
Let us now turn to stratification. By noetherianity, we can define a support for possibly non-compact objects in the `big' tt-category under consideration, here $\DPerm(G;\kk)$, following Balmer-Favi~\cite[\S\,7]{balmer-favi:idempotents}. We remind the reader of the construction.
\begin{Rec}
\label{Rec:idempotents}%
Every Thomason subset $Y\subseteq \SpcKG$ yields a so-called `idempotent triangle' $e(Y)\to\unit\to f(Y)\to \Sigma e(Y)$ in $\cT(G)=\DPerm(G;\kk)$,
meaning that $e(Y)\otimes f(Y)=0$, hence $e(Y)\cong e(Y)\potimes{2}$ and $f(Y)\cong f(Y)\potimes{2}$.
The left idempotent $e(Y)$ is the generator of $\Loctens{\cK(G)_{Y}}$, the localizing tt-ideal of~$\cT(G)$ `supported' on~$Y$. The right idempotent~$f(Y)$ realizes localization of~$\cT(G)$ `away' from~$Y$, that is, the localization on the complement~$Y^c$.
By noetherianity, for every point $\cP\in \SpcKG$, the closed subset~$\adhpt{\cP}$ is Thomason.
Note that $\adhpt{\cP}\cap (Y_{\cP})^c=\{\cP\}$, where $Y_\cP:=\supp(\cP)=\SET{\cQ}{\cP\not\subseteq\cQ}$ is always a Thomason subset.
The idempotent~$g(\cP)$ in~$\cT(G)$ is then defined as
\[
g(\cP)=e(\adhpt{\cP})\otimes f(Y_{\cP}).
\]
It is built to capture the part of~$\DPerm(G;\kk)$ that lives both `over~$\adhpt{\cP}$' (thanks to $e(\adhpt{\cP})$) and `over~$Y_{\cP}^c$' (thanks to~$f(Y_{\cP})$);
in other words, $g(\cP)$ lives exactly `at~$\cP$'.
This idea originates in~\cite{hovey-palmieri-strickland}.
It explains why the support is defined as
\[
\Supp(t)=\SET{\cP\in\SpcKG}{g(\cP)\otimes t\neq 0}
\]
for every (possibly non-compact) object~$t\in\DPerm(G;\kk)$.
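For instance, for $G=C_p$ we can identify $\Spc(\cK(C_p))$ with~$\W^1=\{\gm_0,\gp_1,\gm_1\}$ as in \Cref{Prop:cyclic}, with $\gm_0$ and~$\gm_1$ closed and $\adhpt{\gp_1}=\W^1$. The above definitions then give $Y_{\gp_1}=\{\gm_0,\gm_1\}$ and hence $g(\gp_1)\cong f(\{\gm_0,\gm_1\})$, since $e(\W^1)\cong\unit$, whereas for the closed point~$\gm_0$ we get $Y_{\gm_0}=\{\gm_1\}$ and $g(\gm_0)=e(\{\gm_0\})\otimes f(\{\gm_1\})$.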
\end{Rec}
\begin{Thm}
\label{Thm:stratification}%
Let $G$ be a finite group and let $\kk$ be a field. Then the big tt-category $\cT(G)=\DPerm(G;\kk)$ is stratified, that is, we have an order-preserving bijection
\[
\left\{
\textrm{Localizing tt-ideals $\cL\subseteq \cT(G)$}
\right\}
\overset{\sim}{\longleftrightarrow}
\left\{
\textrm{Subsets of $\SpcKG$}
\right\}
\]
given by sending a subcategory $\cL$ to the union of the supports of its objects; its inverse sends a subset $Y\subseteq \SpcKG$ to $\cL_{Y}:=\SET{t\in \cT(G)}{\Supp(t)\subseteq Y}$.
\end{Thm}
\begin{proof}
By induction on the order of the group, we can assume that the result holds for every proper subquotient~$\WGH$ (with $H\neq 1$).
By~\cite[Theorem~3.21]{barthel-heard-sanders:stratification-Mackey}, noetherianity of the spectrum of compacts reduces stratification to proving \emph{minimality} of $\Loctens{g(\cP)}$ for every $\cP\in\SpcKG$.
This means that $\Loctens{g(\cP)}$ admits no localizing tt-ideal subcategories other than~$0$ and itself.
If $\cP$ belongs to the cohomological open~$\VG=\Spc(\Db(\kkG))$ then minimality at~$\cP$ in~$\cT=\DPerm(G;\kk)$ is equivalent to minimality at~$\cP$ in $\cT(\VG)\cong\K\Inj(\kk G)$ by~\cite[Proposition~5.2]{barthel-heard-sanders:stratification-Mackey}. Since $\K\Inj(\kk G)$ is stratified by~\cite{BIK:stratifying-stmod-kG}, we have the result in that case.
Let now $\cP\in\Supp(\Kac(G))$. By \Cref{Cor:supp(Kac)}, we know that $\cP=\cP_G(H,\gp)$ for some non-trivial $p$-subgroup $1\neq H\le G$ and some cohomological point~$\gp\in\Vee{\WGH}$. (In the notation of \Cref{Prop:fibered}, this means $\cP\in \Vee{G}(H)$.)
Suppose that $t\in\Loctens{g(\cP)}$ is non-zero. We need to show that $\Loctens{t}=\Loctens{g(\cP)}$, that is, we need to show that $g(\cP)\in \Loctens{t}$.
Recall the tt-functor~$\check{\Psi}^H\colon \DPerm(G;\kk)\to \K\Inj(\kk(\WGH))$ from \Cref{Not:bar-Psi}.
By general properties of BF-idempotents~\cite[Theorem~6.3]{balmer-favi:idempotents}, we have $\check{\Psi}^K(g(\cP))=g((\check{\psi}^K)\inv(\cP))$ in~$\K\Inj(\kk(\WGK))$ for every~$K\in\Sub_p\!G$.
Since $\check{\psi}^K$ is injective by \Cref{Prop:bar-psi-injective}, the fiber $(\check{\psi}^K)\inv(\cP)$ is a singleton (namely~$\gp$) if~$K\sim H$ and is empty otherwise.
It follows that for all~$K\not\sim H$ we have $\check{\Psi}^K(g(\cP))=0$ and therefore $\check{\Psi}^K(t)=0$ as well. Since~$t$ is non-zero, the Conservativity \Cref{Thm:conservativity} forces the only remaining~$\check{\Psi}^H(t)$ to be non-zero in~$\K\Inj(\kk(\WGH))$.
This forces $\Psi^H(t)$ to be non-zero in~$\cT(\WGH)$ as well, since~$\check{\Psi}^H=\Upsilon_{\WGH}\circ\Psi^H$.
This object~$\Psi^H(t)$ belongs to~$\Loctens{\Psi^H(g(\cP))}=\Loctens{g((\psi^H)\inv(\cP))}$.
Note that $\upsilon_{\WGH}(\gp)$ is the only preimage of~$\cP=\cP_G(H,\gp)$ under~$\psi^H$ (see \Cref{Rem:psi^H-V(H)}).
By the induction hypothesis, this localizing tt-ideal $\Loctens{\Psi^H(g(\cP))}$ is minimal, and it contains our non-zero object~$\Psi^H(t)$.
Hence $\Psi^H(g(\cP))\in\Loctens{\Psi^H(t)}$. Applying the right adjoint~$\Psi^H_\rho$, it follows that $\Psi^H_\rho\Psi^H(g(\cP))\in \Psi^H_\rho(\Loctens{\Psi^H(t)})\subseteq\Loctens{t}$ where the last inclusion follows by the projection formula for $\Psi^H\adj \Psi^H_\rho$. Hence by the projection formula again we have in~$\cT(G)$ that
\begin{equation*}
\label{eq:aux-strat-5}%
\Psi^H_\rho(\unit)\otimes g(\cP)\in\Loctens{t}.
\end{equation*}
But we proved in \Cref{Prop:stratification} that the localizing tt-ideal generated by $\Psi^H_\rho(\unit)$ contains~$\cK(G)_{\Img(\psi^H)}$ and in particular $e(\adhpt{\cP})$ and a fortiori~$g(\cP)$. In short, we have $g(\cP)\cong g(\cP)\potimes{2}\in\Loctens{\Psi^H_\rho(\unit)\otimes g(\cP)}\subseteq \Loctens{t}$, as was to be shown.
\end{proof}
\begin{Cor}
\label{Cor:telescope}%
The Telescope Conjecture holds for~$\DPerm(G;\kk)$. Every smashing tt-ideal~$\cS\subseteq \DPerm(G;\kk)$ is generated by its compact part:~$\cS=\Loctens{\cS^c}$.
\end{Cor}
\begin{proof}
This follows from noetherianity of~$\SpcKG$ and stratification by~\cite[Theorem~9.11]{barthel-heard-sanders:stratification-Mackey}.
\end{proof}
\section{Introduction}
\label{Chapter1}
Temporal logics are a convenient and useful formalism for describing the behaviour of dynamical systems. Probabilistic CTL (PCTL) \cite{HS,HJ:logic-time-probability-FAC} is the probabilistic extension of the branching-time logic CTL \cite{EH82}, obtained by replacing the existential and universal path quantifiers with probabilistic operators, which allow us to quantify the probability of runs satisfying a given path formula. At first, the probabilities used were only 0 and 1 \cite{HS}, giving rise to the \emph{qualitative PCTL (qPCTL)}. This has been extended to arbitrary values from $[0,1]$ in \cite{HJ:logic-time-probability-FAC}, yielding the \emph{(quantitative) PCTL} (onwards denoted just \emph{PCTL}). More precisely, the syntax of these logics is built upon atomic propositions, Boolean connectives, temporal operators such as \textbf{X} (``next'') and \textbf{U} (``until''), and the probabilistic quantifier ${\bowtie q}$ where $\bowtie$ is a numerical comparison such as $\leq$ or $>$, and $q\in[0, 1]\cap \mathbb Q$ is a rational constant. A simple example of a PCTL formula is $\mathit{ok} \U{=1}(\X{\geq0.9} \mathit{finish})$, which says that on almost all runs we reach a state where there is at least a 90\% chance to $\mathit{finish}$ in the next step, and $\mathit{ok}$ holds at every state before it. PCTL formulae are interpreted over Markov chains \cite{Norris:book} where each state is assigned the subset of atomic propositions that hold in that state.
In this paper, we study the \emph{satisfiability problem}, asking whether a given formula has a \emph{model}, i.e. whether there is a Markov chain satisfying it. If a model does exist, we also want to construct it. Apart from being a fundamental problem, it is a possible tool for checking consistency of specifications or for reactive synthesis. The problem has been shown to be EXPTIME-complete for qPCTL in the setting where we quantify over finite models (\emph{finite satisfiability}) \cite{HS,LICS} as well as over generally countable models (\emph{infinite satisfiability}) \cite{LICS}. The problem for (the general quantitative) PCTL has remained open for decades. We address this question for fragments of PCTL. In order to gain a better understanding of this long-standing problem, we settle it for several fragments of PCTL that are
\begin{itemize}
\item quantitative, i.e.\ involving also probabilistic quantification over arbitrary rational numbers (not just 0 and 1),
\item step unbounded, i.e.\ not imposing any horizon for the temporal operators.
\end{itemize}
In addition, we consider models with unbounded size, i.e.\ countable models or finite models, but with no a priori restriction on the size of the state space. These are the three distinguishing features, compared to other works. The closest are the following. Firstly, solutions for qPCTL have been given in \cite{HS,LICS} and for the more general logic PCTL$^*$ in \cite{LS,KL}. Secondly, \cite{chakraborty2016satisfiability} shows decidability for \emph{bounded PCTL} where the scope of the operators is restricted by a step bound to a given time horizon. Thirdly, the \emph{bounded satisfiability problem} is to determine whether there exists a model of a given size for a given formula. This problem has been solved by encoding it into an SMT problem \cite{bertrand2012bounded}. This result has an important implication: if we are able to determine a maximum required model size for some formula, then it follows that the satisfiability of that formula can also be determined. We take this approach in some of our proofs. Additionally, we use the result of \cite{LICS} that the branching degree (number of successors) for a model of a formula $\phi$ can be bounded by $|\phi|+2$, where $|\phi|$ is the length of $\phi$.
\textbf{Our contribution} is as follows:
\begin{itemize}
\item We show decidability of the (finite and infinite) satisfiability problem for several quantitative unbounded fragments of PCTL, focusing on future- and globally-operators (\F{},\G{}).
\item We investigate the relationship between finite and infinite satisfiability on these fragments.
\item We identify a fundamental issue preventing us from extending our techniques to the general case.
We demonstrate this on a formula enforcing a more complicated form of its models.
This allows us to identify the ``smallest elegant'' fragment where the problem remains open and the solution requires additional techniques.
\end{itemize}
Note that the considered fragments are not that interesting in themselves. However, they illustrate the techniques that we developed and how far we can push decidability results when applying only those. Another fragment which might seem simple enough to be reasonable to consider is the pure \textbf{U}-fragment, but despite all efforts, we have not been able to show decidability for any interesting fragment thereof. For this reason, we will not consider general \textbf{U}-operators in this paper. Due to space constraints, the proofs are only sketched here and worked out in detail in the Appendix.
\subsection{Further related work}
As for the \emph{non-probabilistic} predecessors of PCTL, the satisfiability problem is known to be EXPTIME-complete for CTL \cite{EH82} as well as for the more general modal $\mu$-calculus \cite{BB:temp-logic-fixed-points,FL:PDL-regular-programs}. Both logics have the small model property \cite{EH82,Kozen:mu-calculus-finite-model}; more precisely, every satisfiable formula $\phi$ has a finite-state model whose size is at most exponential in the size of $\phi$. The complexity of the satisfiability problem has also been investigated for fragments of CTL \cite{KV:modular-model-checking} and the modal $\mu$-calculus \cite{HKM:univ-exis-mu-calculus-TCS}.
The satisfiability problem for qPCTL and qPCTL$^*$ was investigated already in the early 80's \cite{LS,KL,HS}, together with the existence of sound and complete axiomatic systems. The decidability for qPCTL over countable models also follows from these general results for qPCTL$^*$, but the complexity was not examined until \cite{LICS}, showing it is also EXPTIME-complete, both for finite and infinite satisfiability.
While the decidability of satisfiability is open, there are only a few negative results. \cite{LICS} proves \emph{undecidability} of the problem whether for a given PCTL formula there exists a model with a branching degree that is bounded by a given integer, where the branching degree is the number of successors of a state. However, the authors have not been able to extend their proof and show undecidability of the general problem.
The PCTL \emph{model checking problem} is the task of determining whether a given system satisfies a given formula, i.e.\ whether it is a model of the formula. This problem has been studied both for finite and infinite Markov chains and decision processes; see e.g. \cite{CY:probab-verification-JACM,HK:quantitative-mu-calculus-LICS,EY:RMC-SG-equations,EKM:prob-PDA-PCTL-LMCS,BKS:pPDA-temporal}. The PCTL \emph{strategy synthesis} problem asks whether the non-determinism in a given Markov decision process can be resolved so that the resulting Markov chain satisfies the formula \cite{BGLBC:MDP-controller,KS:MDP-controller,BBFK:Games-PCTL-objectives,BFK:MDP-PECTL-objectives}.
\section{Preliminaries}
\label{Chapter2}
In this section, we recall basic notions related to (discrete-time) Markov chains \cite{Norris:book} and the probabilistic CTL \cite{HJ:logic-time-probability-FAC}. Let $\mathcal{A}$ be a finite set of atomic propositions.
\subsection{Markov chains}
\begin{definition}[Markov chain]
A \emph{Markov chain} is a tuple $M = (S, P, s_0, L)$ where $S$ is a countable set of \emph{states}, $P: S \times S \rightarrow [0,1]$ is the \emph{probability transition matrix} such that, for all $s \in S$, $\sum_{t \in S}{P(s,t)} = 1$, $s_0\in S$ is the \emph{initial} state, and $L : S \rightarrow 2^{\mathcal{A}}$ is a labeling function.
\end{definition}
Whenever we write $M$, we implicitly mean a Markov chain $ (S, P, s_0,L)$. The semantics of a Markov chain $M$ is the probability space $(\mathit{Runs}_M,\mathcal F_M,\mathbb P_M)$, where $\mathit{Runs}_M=S^\omega$ is the set of \emph{runs} of $M$, $\mathcal F_M\subseteq 2^{S^\omega}$ is the $\sigma$-algebra generated by the set of cylinders of the form $\mathit{Cyl}_M(\rho)=\{ \pi \in S^{\omega} \mid \rho \text{ is a prefix of } \pi \}$, and the probability measure is uniquely determined \cite{baier2008principles} by $\mathbb P_M(\mathit{Cyl}_M(\rho_0\cdots\rho_n)) := \prod_{0 \leq i <n} P(\rho_i,\rho_{i+1})$ if $\rho_0=s_0$ and $0$ otherwise.
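For illustration, the measure of a cylinder can be computed directly from the transition matrix by multiplying transition probabilities along the given prefix. The following Python sketch and its toy chain are ours and purely illustrative; the chain is represented as in the definition above, with $P$ given as a dictionary of successor distributions.
\begin{verbatim}
# A finite Markov chain: transition matrix P as a dict of successor
# distributions, initial state s0, and labelling L (state -> set of atoms).
P = {'s0': {'s1': 0.5, 's2': 0.5},
     's1': {'s1': 1.0},
     's2': {'s0': 1.0}}
L = {'s0': set(), 's1': {'a'}, 's2': set()}
s0 = 's0'

def cylinder_probability(prefix, P, s0):
    """P_M(Cyl(rho_0...rho_n)) = product of P(rho_i, rho_{i+1})
    if rho_0 = s0, and 0 otherwise."""
    if not prefix or prefix[0] != s0:
        return 0.0
    prob = 1.0
    for s, t in zip(prefix, prefix[1:]):
        prob *= P[s].get(t, 0.0)
    return prob

print(cylinder_probability(['s0', 's1', 's1'], P, s0))   # 0.5 * 1.0 = 0.5
\end{verbatim}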
We say that a state is \emph{reached} on a run if it appears in the sequence; a set of states is reached if some of its states are reached. The immediate successors of a state $s$ are denoted by $\mathit{post}_M(s) := {\{t \in S \mid P(s,t) > 0 \}}$ and the set of states reachable with positive probability is the reflexive and transitive closure $ post^*_M(s)$. We will write $\mathbb P(\cdot), \mathit{post}(\cdot)$, and $\mathit{post}^*(\cdot)$, if $M$ is clear from the context.
The \emph{unfolding} of a Markov chain $M$ is the Markov chain $T_M := (S^+, P',s_0,L')$ with the form of an infinite tree given by $ P'(\rho s, \rho s s') = P(s,s')$ and $L'(\rho s)=L(s)$. Each state of $T_M$ maps naturally to a state of $M$ (the last one in the sequence), inducing an equivalence relation $\rho s\sim\rho s'$ iff $s=s'$. Consequently, each run of $T_M$ maps naturally to a run of $M$ and the unfolding preserves the measure of the respective events.
For a Markov chain $M$, a set $T \subseteq S$ is called \emph{strongly connected} if for all $s,t \in T$, $t \in \mathit{post}^*(s)$; it is a \emph{strongly connected component (SCC)} if it is maximal (w.r.t. inclusion) with this property. If, moreover, $\mathit{post}^*(t)\subseteq T$ for all $t\in T$ then it is a \emph{bottom SCC (BSCC)}. A classical result, see e.g.\ \cite{baier2008principles}, states that the set of states visited infinitely often is almost surely, i.e.\ with probability 1, a BSCC:
\begin{lemma}
\label{lem:bsccs}
In every finite Markov chain, the set of BSCCs is reached almost surely. Further, conditioning on runs reaching a BSCC $C$, every state of $C$ is reached infinitely often almost surely.
\end{lemma}
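For finite chains, the BSCCs featuring in Lemma~\ref{lem:bsccs} can be computed from reachability alone. The following naive Python sketch is ours (it reuses the dictionary representation from the previous listing) and suffices for the small models constructed in this paper.
\begin{verbatim}
def post_star(P, s):
    """All states reachable from s with positive probability (incl. s)."""
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for v, p in P[u].items():
            if p > 0 and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def bsccs(P):
    """Bottom strongly connected components of a finite Markov chain."""
    reach = {s: post_star(P, s) for s in P}
    result = []
    for s in P:
        scc = frozenset(t for t in reach[s] if s in reach[t])
        # bottom <=> nothing outside the SCC is reachable from it
        if all(reach[t] <= scc for t in scc) and scc not in result:
            result.append(scc)
    return result

P = {'s0': {'s1': 0.5, 's2': 0.5}, 's1': {'s1': 1.0}, 's2': {'s0': 1.0}}
print(bsccs(P))   # [frozenset({'s1'})]
\end{verbatim}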
\subsection{Probabilistic Computational Tree Logic}
\label{Chapter3}
The definition of probabilistic CTL (PCTL) \cite{HJ:logic-time-probability-FAC} is usually based on the next- and until-operators (\X{}, \U{}). In this paper, we restrict our attention to the future- and globally operators (\F{}, \G{}), which can be derived from the until-operator. Further, w.l.o.g.\ we impose the negation normal form and the lower-bound-comparison normal form; for the respective transformations see, e.g., \cite{baier2008principles}.
\begin{definition}[PCTL(\F{},\G{}) syntax and semantics]
The \emph{formulae} are given by the following syntax:
\[
\Phi ::= a \mid
\neg a \mid
\Phi \land \Phi \mid
\Phi \lor \Phi \mid
\F{\rhd q}\Phi \mid
\G{\rhd q}\Phi
\]
where $q \in [0,1]$, $\rhd \in \{ \geq, >\}$, and $a \in \mathcal{A}$ is an atomic proposition. Let $M$ be a Markov chain and $s \in S$ its state. We define the modeling relation $\models$ inductively as follows
\begin{enumerate}[label=(M\arabic*),align=left]
\item \label{def:model.a} $M, s \models a$ iff $a \in L(s)$
\item \label{def:model.neg} $M, s \models \neg a$ iff $a \notin L(s)$
\item \label{def:model.and} $M, s \models \phi \land \psi$ iff
$M,s \models \phi$ and $M,s \models \psi$
\item \label{def:model.or} $M, s \models \phi \lor \psi$ iff
$M,s \models \phi$ or $M,s \models \psi$
\item \label{def:model.F} $M, s \models
\F{\rhd q}{\varphi}$ iff $\mathbb P_{M(s)}(\{ \pi \mid
\exists i \in \mathbb{N}_0: M,\pi[i] \models \varphi \}) \rhd q$
\item \label{def:model.G} $M, s \models
\G{\rhd q}{\varphi}$ iff $\mathbb P_{M(s)}(\{ \pi \mid
\forall i \in \mathbb{N}_0: M,\pi[i] \models \varphi \}) \rhd q$
\end{enumerate}
where $M(s)$ is $M$ with $s$ being the initial state, and $\pi[i]$ is the $i$th element of $\pi$. We say that $M$ is a \emph{model} of $\varphi$ if $M,s_0 \models \varphi$.
\end{definition}
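The probabilities in clauses \ref{def:model.F} and \ref{def:model.G} are reachability probabilities: clause \ref{def:model.F} asks for the probability of reaching the set of states satisfying $\varphi$, while the probability in clause \ref{def:model.G} equals one minus the probability of reaching a state violating~$\varphi$. For finite chains these values can be computed exactly by solving a linear system, or approximated from below by value iteration, as in the following illustrative Python sketch of ours.
\begin{verbatim}
def reach_probabilities(P, targets, iterations=1000):
    """Approximate x(s) = Pr_{M(s)}(F targets) from below by value iteration."""
    x = {s: (1.0 if s in targets else 0.0) for s in P}
    for _ in range(iterations):
        x = {s: 1.0 if s in targets
                else sum(p * x[t] for t, p in P[s].items())
             for s in P}
    return x

P = {'s0': {'s1': 0.5, 's2': 0.5}, 's1': {'s1': 1.0}, 's2': {'s0': 1.0}}
x = reach_probabilities(P, targets={'s1'})
print(round(x['s0'], 6))   # 1.0, i.e. 's1' is reached almost surely from 's0'
\end{verbatim}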
We will denote the set of literals by $\mathcal{L} := \mathcal{A} \cup \{ \neg a \mid a \in \mathcal{A} \}$. Instead of the constraint $\geq1$, we often write $=1$. Further, we define the set of all subformulae. This definition slightly deviates from the usual definition of subformulae, e.g. the one in \cite{LICS}, in that $\neg a \in sub(\phi)$ does not necessarily imply $a \in sub(\phi)$.
\begin{definition}[Subformulae]
The set $sub(\phi)$ is recursively defined as follows
\begin{itemize}
\item $\phi \in sub(\phi)$
\item if $\psi \land \xi \in sub(\phi)$ or $\psi \lor \xi \in sub(\phi)$,
then $\psi, \xi \in sub(\phi)$
\item if $\F{\rhd q}{\psi} \in sub(\phi)$ or $\G{\rhd q}{\psi} \in sub(\phi)$
then $\psi \in sub(\phi)$
\end{itemize}
\end{definition}
Next, we introduce the satisfiability problems, which are the main topic of the paper.
\begin{definition}[The satisfiability problems]
A formula $\phi$ is called \emph{(finitely) satisfiable}, if there is a (finite) model for $\phi$. Otherwise, it is (finitely) unsatisfiable. The (finite) satisfiability problem is to determine whether a given formula is (finitely) satisfiable.
\end{definition}
Instead of simply writing ``satisfiable'' we sometimes stress the absence of ``finitely'' and write ``generally satisfiable'' for satisfiability on countable, i.e. finite or countably infinite, models. For some proofs, it is more convenient to consider the unfolding of a Markov chain instead of the original one. As we mentioned already, the measure of events is preserved in the unfolding of a chain. Hence, we can state the following lemma.
\begin{lemma}\label{lem:tree}
If $M$ is a model of $\phi$ then its unfolding $T_M$ is a model of $\phi$.
\end{lemma}
We say that formulae $\phi,\psi$ are (finitely) equivalent if they have the same set of (finite) models, written $\phi\equiv\psi$ ($\phi\equiv_{\mathit{fin}}\psi$); that they are (finitely) equisatisfiable if they are both (finitely) satisfiable or both (finitely) unsatisfiable; and that $\phi\Rightarrow\psi$ if every model of $\phi$ is also a model of $\psi$.
\section{Results}
In this section we present our results. A summary is schematically depicted in Fig.~\ref{fig:summary}. We briefly describe the considered fragments; the full formal definitions can be found in the respective sections. Since already the satisfiability for propositional logic in negation normal form has nontrivial instances only when all the constructs $a,\neg a$ and conjunction are present, we only consider fragments with all three included; see the bottom of the Hasse diagram. The fragments are named by the list of constructs they use, where we omit the three constructs above to avoid clutter. Here $1$ stands for $\geq1$ and $q$ stands for $\rhd q$ for all $q\in[0,1]\cap\mathbb Q$. Further, $\G x(\mathit{list})$ denotes the sub-fragment of $\mathit{list}$ where the topmost operator is $\G x$. Finally, $\F{q/1}$ denotes the use of $\F q$ with the restriction that inside $\G{}$ only $q=1$ can be used.
\begin{figure}
\begin{tikzpicture}[scale=0.8,text width=18mm,outer sep=3mm]
\node (gen) at (0,2) {$\F q,\G q,\vee$ };
\node (qual) at (4,0) {$\F q,\G 1,\vee$ Sec.~\ref{ss:qual} non-bottom SCCs};
\node (nodisj) at(4,-2.4) {$\F q,\G1$ Sec.~\ref{ss:nodisj} inf=fin=H:$|\phi|$};
\node (noqf) at(8,-2.4) {$\F{q/1},\G1,\vee$ Sec.~\ref{ss:noqf} inf=fin=H:$|\phi|^2$};
\node (g) at(-4,0) {$\G q(\F q,\G q,\vee)$ Sec.~\ref{ss:g} inf$\neq$fin=$|\phi|$};
\node (g1) at(-4,-2.4) {$\G1(\F q,\G 1,\vee)$ Sec.~\ref{ss:g1} inf=fin=$|\phi|$};
\node (base) at (0,-4.4) {$\G1(\F q,\G 1)$ Sec.~\ref{ss:base} inf=fin=$|\phi|$};
\node (bottom) at (0,-6.4) {$a,\neg a, \wedge$};
\draw
(gen) edge (qual)
(gen) edge (g)
(qual) edge (nodisj)
(qual) edge (noqf)
(g) edge (g1)
(g1) edge (base)
(qual) edge (g1)
(nodisj) edge (base)
(base) edge[line width=0.01pt] (bottom)
(noqf) edge[line width=0.01pt] (bottom)
;
\end{tikzpicture}
\caption{Hasse diagram summarizing the satisfiability results for the considered fragments of PCTL(\F{}, \G{}), all containing literals and conjunctions, and some form of quantitative comparisons. The fragments are described by the list of operators they allow (excluding the constructs of the minimal fragment). The subscript denotes the possible constraints on probabilistic operators. $\G{x}(list)$ denotes formulae in the fragment described by $list$ with \G{}-operators at the top-level. fin and inf abbreviate finite and general satisfiability, respectively. fin=inf denotes that the problems are equivalent. H:$x$ denotes that the height of a tree model can be bounded by $x$. By $=x$ we denote that the model size can be bounded by $x$. The $\F q, \G 1, \vee$-fragment might require non-bottom SCCs in finite models}
\label{fig:summary}
\end{figure}
The fragments are investigated in the respective sections. We examine the problems of the general satisfiability (``inf'') and the finite satisfiability (``fin''); equality denotes the problems are equivalent. We use two results to prove decidability of the problems. Firstly, \cite{bertrand2012bounded} shows that given a formula $\phi$ and an integer $n$, one can determine whether or not there is a model for $\phi$ that has at most $n$ states. Consequently, we obtain the decidability result whenever we establish an upper bound on the size of smallest models. Here ``$|\phi|$'' denotes the satisfiability of a given $\phi$ on models of size $\leq |\phi|$. Secondly, \cite{LICS} establishes that for any satisfiable PCTL formula there is a model with branching bounded by $|\phi|+2$. Consequently, we obtain the decidability result whenever it is sufficient to consider trees of a certain height $H$ (with back edges) since the number of their nodes is then bounded by $(|\phi|+2)^H$. Here ``H:$n$'' denotes that the models can be limited to a height $H\leq n$.
While we obtain decidability in the lower part of the diagram, the upper part only treats finite satisfiability, and in particular for $\F q,\G1,\vee$, we only demonstrate that models with more complicated structure are necessary. Namely, the models may be of unbounded sizes for structurally identical formulae---i.e. formulae which only differ in the constraints on the temporal operators---or require the presence of non-bottom SCCs; see Section~\ref{ss:qual} and the discussion in Section~\ref{sec:disc}.
\subsection{Finite satisfiability for $\G q(\F q,\G q,\vee)$}\label{ss:g}
This section treats \G{}-formulae of the $\F q,\G q$-fragment, i.e.\ of PCTL$(\F{},\G{})$. In particular, it includes \G{>0}-formulae. In general, formulae in this fragment (even without quantified \F{} and \G{}-operators) can enforce rather complicated behaviour \cite{LICS}. Therefore, we will focus on finitely satisfiable formulae. We will see that they can be satisfied by rather simple models.
\begin{definition}
$\G q(\F q,\G q,\vee)$-formulae are given by the grammar
\begin{align*}
\Phi &::=\G{\rhd q}{\Psi}\\
\Psi &::= a \mid
\neg a \mid
\Psi \land \Psi \mid
\Psi \lor \Psi \mid
\F{\rhd q}{\Psi} \mid
\G{\rhd q}{\Psi}
\end{align*}
\end{definition}
The main result of this section is that finitely satisfiable formulae in this fragment can be satisfied by models of size linear in $|\phi|$.
\begin{theorem}
\label{thm:Gq(Fq,Gq,v)-size} Let $\phi$ be a finitely satisfiable $\G q(\F q,\G q,\vee)$-formula. Then $\phi$ has a model of size at most $|\phi|$.
\end{theorem}
Intuitively, we obtain the result from the fact that in a finite model some BSCC is reached almost surely, and every state of a BSCC is reached almost surely once we have entered it. In infinite models, BSCCs need not be reached almost surely (indeed, they need not exist at all), and therefore the proofs cannot be extended to general satisfiability. The following lemma and its proof demonstrate how we can make use of the BSCC properties in order to obtain an equisatisfiable formula in a simpler fragment.
\begin{lemma}
\label{lem:Gq(Fq,Gq,v)-normal}
Let $\phi$ be a $\G q(\F q,\G q,\vee)$-formula. Then, $\phi$ is finitely equisatisfiable to a $\G1(\F 1, \G 1)$-formula $\phi'$, such that $\phi' \Rightarrow \phi$.
\end{lemma}
\begin{proofsketch}
Write $\phi$ as $\G{\rhd q}{\psi}$. Assume that we have a finite model $M$ for $\G{\rhd q}{\psi}$. Intuitively, we can select a BSCC that satisfies $\G{=1}{\psi}$. We know that there is a BSCC because we are dealing with a finite model. We also know that there is at least one BSCC satisfying our formula, for otherwise $M$ would not be a model for it. In a BSCC, every state is reached almost surely from every other state by Lemma~\ref{lem:bsccs}. Hence, we can select exactly one state for each \F{}-subformula which satisfies that formula's argument. Then we can create a new BSCC from these states, arranging them, e.g., in a circle. This BSCC models $\G{=1}{\hat{\psi}}$, where $\hat{\psi}$ replaces all probabilistic operators with their ``almost surely'' version. Hence, we have created a model for a $\G1(\F 1, \G 1)$ formula from a model for $\G{\rhd q}{\psi}$. The opposite direction follows from the fact that $\G{=1}{\hat{\psi}} \Rightarrow \G{\rhd q}{\psi}$.
\QED\end{proofsketch}
Note that the transformation does not produce an equivalent formula. Hence, we cannot simply replace an occurrence of such a formula inside a more complex formula. For instance, the formula $\G{\geq 1/2}\neg a \wedge \F{\geq 1/2}a$ is satisfiable, whereas $\G{=1}\neg a \wedge \F{\geq 1/2} a$ is not. The proof does not yield equivalence because we are selecting one BSCC while ignoring the rest. This example demonstrates why we cannot ignore certain BSCCs in general. Using the above result, it is easy to prove Theorem \ref{thm:Gq(Fq,Gq,v)-size}.
\begin{proofsketch}[Proof Sketch of Theorem \ref{thm:Gq(Fq,Gq,v)-size}]
This follows immediately from the proof of Lemma \ref{lem:Gq(Fq,Gq,v)-normal}. The BSCC that we have created has at most as many states as there are \F{}-subformulae, which is bounded by $|\phi|$.
\QED\end{proofsketch}
\begin{example}
Consider the formula
\begin{equation}
\label{form:Gq(Fq,Gq)}
\phi := \G{\geq 1/2}{(\F{\geq 1/3}{a} \land \F{\geq 1/3}{\neg a})}.
\end{equation}
The large Markov chain in Figure \ref{fig:ex-model-GqFq} models $\phi$. Unlabeled arcs indicate a uniform distribution over all successors. It is clear that the model is unnecessarily complicated. After reducing it according to Lemma~\ref{lem:Gq(Fq,Gq,v)-normal}, we obtain the smaller Markov chain on the right.
\begin{figure}
\centering
\subfloat{
\begin{tikzpicture}[bend angle=45]
\node (s0) at (0,0) {$\emptyset$};
\node (s1) at (-1,-1) {$\{a\}$};
\node (s2) at (1,-1) {$\emptyset$};
\node (s3) at (-2,-2) {$\emptyset$};
\node (s4) at (0,-2) {$\emptyset$};
\node (s5) at (-2,-3) {$\{a\}$};
\node (s6) at (0,-3) {$\{a\}$};
\node (s7) at (-1,-4) {$\emptyset$};
\draw [->] (s0) to (s1);
\draw [->] (s0) to (s2);
\draw [->] (s1) to (s3);
\draw [->] (s1) to (s4);
\draw [->] (s3) to (s5);
\draw [->] (s3) to (s6);
\draw [->] (s4) to (s5);
\draw [->] (s4) to (s6);
\draw [->] (s5) to (s7);
\draw [->] (s6) to (s7);
\draw [->] (s7) to (s1);
\end{tikzpicture}
}
\subfloat{
\begin{tikzpicture}[auto,bend angle=30]
\node (s0) at (0,0) {$\emptyset$};
\node (s1) at (0,1) {$\{a\}$};
\draw [->,bend left] (s0) to (s1);
\draw [->,bend left] (s1) to (s0);
\end{tikzpicture}
}
\caption{A large and a small model for Formula \eqref{form:Gq(Fq,Gq)}}
\label{fig:ex-model-GqFq}
\end{figure}
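Indeed, the two-state cycle on the right even satisfies the stronger formula $\G{=1}{(\F{=1}{a} \land \F{=1}{\neg a})}$ obtained in the proof of Lemma~\ref{lem:Gq(Fq,Gq,v)-normal}: from each of its two states, both an $a$-state and a $\neg a$-state are reached with probability~$1$, so every state on every run satisfies $\F{=1}{a} \land \F{=1}{\neg a}$, and hence the outer \G{}-constraint is met with probability $1 \geq 1/2$.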
\end{example}
The example below shows that satisfiability is not equivalent to finite satisfiability for this fragment, and that the proposed transformation does not preserve equisatisfiability over general models. The decidability of the general satisfiability thus remains open here.
\begin{example}
Note that we made use of the BSCC properties for the proofs of this subsection, such as the fact that some BSCC is reached almost surely. Since this is only guaranteed for finite Markov chains, our transformation only holds for finite satisfiability. If we consider the general satisfiability problem, then the analogue of Lemma~\ref{lem:Gq(Fq,Gq,v)-normal} is not true. For instance, the formula
\begin{equation}
\label{form:infinite}
\phi := \G{>0}{(\neg a \land \F{>0}{a} )}
\end{equation}
is satisfiable, but requires infinite models, as pointed out in \cite{LICS}. One such model is given in Figure \ref{fig:infinite-model}. Observe that the single horizontal run has measure greater than $0$. Now consider
\[
\hat{\phi} := \G{=1}{(\neg a\land\F{=1}{a} )}
\]
Obviously, this formula is unsatisfiable. Hence, in this case $\phi$ is not
equisatisfiable to $\hat{\phi}$.
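For completeness, the measure of the horizontal run in Figure~\ref{fig:infinite-model} can be made explicit: it equals $\prod_{i\geq 1}\bigl(1-2^{-i}\bigr)\approx 0.2888$, which is positive since $\sum_{i\geq 1}2^{-i}<\infty$.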
\begin{figure}
\centering
\begin{tikzpicture}[auto]
\node (first top) at (0,2) {$\emptyset$};
\node (second top) at (2,2) {$\emptyset$};
\node (third top) at (4,2) {$\emptyset$};
\node (fourth top) at (6,2) {$\emptyset$};
\node (first bottom) at (0,0) {$\{a\}$};
\node (second bottom) at (2,0) {$\{a\}$};
\node (third bottom) at (4,0) {$\{a\}$};
\node (fourth bottom) at (6,0) {$\{a\}$};
\node (first phantom) at (7,2) {};
\node (second phantom) at (8,2) {};
\draw [->] (first top) to node {$1/2$} (second top);
\draw [->] (second top) to node {$3/4$} (third top);
\draw [->] (third top) to node {$7/8$} (fourth top);
\draw [dotted] (fourth top) to (first phantom);
\draw [->] (first top) to node {$1/2$} (first bottom);
\draw [->] (second top) to node {$1/4$} (second bottom);
\draw [->] (third top) to node {$1/8$} (third bottom);
\draw [->] (fourth top) to node {$1/16$} (fourth bottom);
\draw [->,loop right] (first bottom) to node {$1$} (first bottom);
\draw [->,loop right] (second bottom) to node {$1$} (second bottom);
\draw [->,loop right] (third bottom) to node {$1$} (third bottom);
\draw [->,loop right] (fourth bottom) to node {$1$} (fourth bottom);
\end{tikzpicture}
\caption{An infinite model for Formula \eqref{form:infinite}}
\label{fig:infinite-model}
\end{figure}
\end{example}
\subsection{Satisfiability for $\G1(\F q,\G 1)$}\label{ss:base}
\label{sec:G1(Fq,G1)}
This section treats \G{}-formulae of a fragment where $\G{\rhd q}$ only appears with $q=1$ and there is no disjunction. The results are later utilized in a richer fragment in Section~\ref{sec:Fq,G1}. In fact, the main result of this section is an immediate consequence of the main theorem of Section~\ref{sec:G1(Fq,G1,v)}. Still, the results are interesting in themselves as they show some properties of models for formulae in this fragment which do not carry over to the generalized case.
\begin{definition}
$\G1(\F q, \G 1)$-formulae are given by the grammar
\begin{align*}
\Phi &::= \G{=1}{\Psi}\\
\Psi &::= a \mid
\neg a \mid
\Psi \land \Psi \mid
\F{\rhd q}{\Psi} \mid
\G{=1}{\Psi}
\end{align*}
\end{definition}
We prove that satisfiable formulae of this fragment are satisfiable by models of linear size and thus also finitely satisfiable.
\begin{theorem}
\label{thm:G1(Fq,G1)-size}
Let $\phi$ be a satisfiable $\G1(\F q, \G 1)$-formula. Then $\phi$ has a model of size at most $|\phi|$.
\end{theorem}
The idea here is that we can find a state which behaves similarly to a BSCC (even in infinite models) in that it satisfies all \G{}-subformulae. We can then use the states reachable from it to construct a small model. The outline of the proof is roughly as follows: First we show that from every state and for every subformula we can find a reachable state that satisfies this subformula. Using this, we can show that there is a state that satisfies all \G{}-subformulae.
\begin{lemma}
\label{lem:G1(Fq,G1)-subformulae}
Let $\phi$ be a satisfiable $\G1(\F q, \G 1)$-formula and $M$ its model. Then, for every $\psi \in \mathit{sub}(\phi)$, and $s \in S$, there is a state $t \in \mathit{post}^*(s)$, such that $M,t \models \psi$.
\end{lemma}
\begin{proofsketch}
This follows from the fact that we do not allow disjunctions in this fragment. We apply induction over the depth of a subformula $\psi$. If the formula is $\phi$ itself, then there is nothing to show. Otherwise, the induction hypothesis yields that the higher-level subformulae are satisfied at some state $s$. From this, we can easily see that in all possible cases the claim follows: If the higher-level formula is a conjunction, then $\psi$ is one of its conjuncts. Since both conjuncts must be satisfied by $s$, in particular $\psi$ must be satisfied at $s$. A similar argument applies to \G{}-formulae. If it is of the form $\F{\rhd q}\xi$, then we know that there must be a reachable state where $\xi$ holds.
\QED\end{proofsketch}
This concludes the first part of the proof. We continue with the second part and prove that we can find a state which satisfies all \G{}-subformulae.
\begin{lemma}
\label{lem:G1(Fq,G1)-allG} Let $\phi$ be a satisfiable $\G1(\F q, \G 1)$-formula, $M$ its model, and let $G := \{ \psi \in \mathit{sub}(\phi) \mid \psi = \G{=1}{\xi} \text{ for some } \xi \}$. Then there is a state $s \in S$ such that $M,s \models \psi$ for all $\psi \in G$.
\end{lemma}
\begin{proofsketch}
It is clear that after encountering a \G{}-formula at some state, all successors will also satisfy it. Therefore, the set of satisfied \G{}-formulae is monotonically growing and bounded. Hence, we can apply induction over the number of yet unsatisfied \G{}-formulae. In every step, we are looking for the next state to satisfy an additional \G{}-formula. This is always possible (as long as there are still unsatisfied ones), due to Lemma \ref{lem:G1(Fq,G1)-subformulae}.
\QED\end{proofsketch}
Now, we can prove Theorem~\ref{thm:G1(Fq,G1)-size}.
\begin{proofsketch}[Proof Sketch of Theorem \ref{thm:G1(Fq,G1)-size}]
By Lemma~\ref{lem:G1(Fq,G1)-allG}, we can find a state that satisfies all \G{}-subformulae. In some sense, this state's subtree resembles a BSCC. We can include exactly one state for each \F{}-subformula and create a BSCC out of those states, e.g., arrange them in a circle. We apply induction over $\psi \in sub(\phi)$. The satisfaction of literals and conjunctions is straightforward. Since every state is reached almost surely, every \F{}-formula will be satisfied that way. The satisfaction of the \G{}-formulae follows from the fact that all the states we selected satisfy all \G{}-subformulae in the original model, and from the induction hypothesis.
\QED\end{proofsketch}
For the case of finite satisfiability, we also present an alternative proof, which sheds more light on this fragment and its super-fragments. For details, see Appendix~\ref{app:proofs}. Let $\equiv_{\mathit{fin}}$ denote equivalence of PCTL formulae over finite models.
\begin{theorem}
\label{thm:G1(Fq,G1)-normal}
Let $\phi$ be a $\G1(\F q, \G 1)$-formula. Then, the following equivalence holds:
\begin{equation*}
\G{=1}{\phi} \equiv_{\mathit{fin}}
\G{=1}{(\bigwedge_{l \in A}{l} \land
\F{=1}{\G{=1}{\bigwedge_{l \in B}{l}}} \land
\bigwedge_{i \in I}\F{=1}{{\bigwedge_{l \in C_i}{l}}})}
\end{equation*}
for appropriate $I \subset \mathbb{N}$, and $A, B, C_i \subset \mathcal{L}$.
\end{theorem}
\begin{proofsketch}
The proof is based on the following auxiliary statements
\begin{align}
\G{=1}\G{=1}\phi &\equiv \G{=1}\phi\\
\G{=1}\F {\rhd q}\phi &\equiv_{\mathit{fin}}\G{=1}\F{=1}\phi\label{eq:fin}\\
\F{=1}\F{\rhd q}\phi &\equiv \F{\rhd q}\phi
\end{align}
and follows by induction.
The second statement is the most interesting one. Intuitively, it is a zero-one law, stating that infinitely repeating satisfaction with a positive probability ensures almost sure satisfaction. Notably, this only holds if the probabilities are bounded from below, hence for finite models, not necessarily for infinite models.
\QED\end{proofsketch}
It is an easy corollary of this theorem that a satisfiable formula has a model in the form of a single cycle in which $A$ and $B$ hold in every state and, for each $i$, all literals of $C_i$ hold simultaneously in some state. In general, the models can have a lasso shape, where the initial (transient) part only has to satisfy $A$; this allows for easy manipulation in extensions of this fragment.
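To make this shape concrete, the following Python sketch (ours, not used in any proof) builds such a cyclic model from the sets $A$, $B$ and $(C_i)_i$ of the normal form, provided the combined literal sets are consistent; negative literals are encoded as strings with a leading tilde.
\begin{verbatim}
def cycle_model(A, B, Cs):
    """Cyclic Markov chain in which every state satisfies A and B and the
    i-th state additionally satisfies all literals of Cs[i].
    Returns (P, L), or None if some state would need both a and ~a."""
    n = max(len(Cs), 1)
    labels = []
    for i in range(n):
        lits = set(A) | set(B) | (set(Cs[i]) if i < len(Cs) else set())
        pos = {l for l in lits if not l.startswith('~')}
        neg = {l[1:] for l in lits if l.startswith('~')}
        if pos & neg:
            return None          # inconsistent requirements at state i
        labels.append(pos)
    P = {i: {(i + 1) % n: 1.0} for i in range(n)}   # deterministic cycle
    return P, dict(enumerate(labels))

# e.g. the normal form with A = {a}, B = {a}, C_1 = {b}, C_2 = {~c}:
print(cycle_model({'a'}, {'a'}, [{'b'}, {'~c'}]))
\end{verbatim}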
\begin{remark}
Note that the equivalence does not hold over infinite models. Indeed, consider as simple a formula as $\G{=1}\F{>0}a$, which is satisfied on the Markov chain of Fig.~\ref{fig:infinite-model} \cite{LICS}, while this chain does not satisfy the transformed $\G{=1}\F{=1}a$. Crucially, equivalence (\ref{eq:fin}) thus already fails for this tiny fragment. Interestingly, when we build a model for the transformed formula, which is equisatisfiable but not equivalent, it turns out to be a model of the original formula. If, moreover, we consider $\neg a, \land, \G{>0}$, then finite and general satisfiability start to differ.
\end{remark}
Before we move on to the next fragment, we will prove another consequence of Lemma~\ref{lem:G1(Fq,G1)-subformulae}. It is a statement about the BSCCs of models for formulae in this fragment and will be used later for the proof of Theorem~\ref{thm:Fq,G1-size}.
\begin{corollary}
\label{cor:G1(Fq,G1)-bsccs}
Let $\G{=1}\phi$ be a satisfiable $\G1(\F q, \G 1)$-formula and $M$ its model. Then, for every BSCC $T \subseteq post^*(s_0)$ of $M$, the following holds
\begin{enumerate}
\item
\label{cor:G1(Fq,G1)-bsccs.F}
For all $\psi \in sub(\G{=1}\phi)$, there is a state $t \in T$, such that $M,t \models \psi$.
\item
\label{cor:G1(Fq,G1)-bsccs.G}
For all $\G{=1}{\psi} \in sub(\G{=1}\phi)$, and for all states $t \in T$, $M,t \models \G{=1}{\psi}$.
\end{enumerate}
\end{corollary}
\begin{proof}
Point \ref{cor:G1(Fq,G1)-bsccs.F} follows from the fact that every reachable BSCC must satisfy $\G{=1}\phi$, and from Lemma \ref{lem:G1(Fq,G1)-subformulae}. Point \ref{cor:G1(Fq,G1)-bsccs.G} follows immediately from point \ref{cor:G1(Fq,G1)-bsccs.F}.
\QED\end{proof}
Note that we did not assume finite satisfiability here, so the model might not contain any BSCC at all. In that case, the claim is trivially true. However, Theorem~\ref{thm:G1(Fq,G1)-size} allows us to focus on finitely satisfiable formulae in this fragment.
\subsection{Satisfiability for $\G1(\F q,\G1,\vee)$}
\label{sec:G1(Fq,G1,v)}\label{ss:g1}
This section treats \G{}-formulae of the fragment where $\G{\rhd q}$ only appears with $q=1$. We thus lift a restriction of the previous fragment and allow for disjunctions. We generalize the obtained results to this larger fragment. We mentioned earlier that some of the results of the previous fragment do not apply here. Concretely, Lemma~\ref{lem:G1(Fq,G1)-subformulae} does not hold here; that is, there might be subformulae which are not satisfied almost surely. Therefore, there is not necessarily a state that satisfies all \G{}-subformulae. For example, consider $\G{=1}(\F{>0}\G{=1}a \vee \F{>0}\G{=1}\neg a)$. There cannot be a single BSCC to satisfy both disjuncts. Although this is not a problem for the results of this section, it will turn out to be a fundamental problem when dealing with arbitrary formulae of the $\F q,\G1,\vee$ fragment.
\begin{definition}
$\G1(\F q, \G 1, \vee)$-formulae are given by the grammar
\begin{align*}
\Phi &::= \G{=1}{\Psi}\\
\Psi &::= a \mid
\neg a \mid
\Psi \land \Psi \mid
\Psi \lor \Psi \mid
\F{\rhd q}{\Psi} \mid
\G{=1}{\Psi}
\end{align*}
\end{definition}
We prove that satisfiable formulae of this fragment are satisfiable by models of linear size and thus also finitely satisfiable.
\begin{theorem}
\label{thm:G1(Fq,G1,v)-size}
Let $\phi$ be a satisfiable $\G1(\F q, \G 1, \vee)$-formula. Then $\phi$ has a model of size at most $|\phi|$.
\end{theorem}
\begin{proofsketch}
The proof of this theorem works essentially the same way as the proof of Theorem~\ref{thm:G1(Fq,G1)-size}. Recall that we looked for a state satisfying all \G{}-formulae. Although we will not necessarily find such a state in this fragment, we can look for states that satisfy maximal subsets of the satisfied \G{}-formulae. Then, we can continue in a similar way as in the simpler setting.
\QED\end{proofsketch}
\subsection{Satisfiability for $\F q,\G1$}\label{ss:nodisj}
\label{sec:Fq,G1}
This section treats general formulae of the fragment with no disjunction and where $\G{\rhd q}$ only appears with $q=1$.
\begin{definition}
$\F q, \G 1$-formulae are given by the grammar
\begin{align*}
\Phi ::= a \mid
\neg a \mid
\Phi \land \Phi \mid
\F{\rhd q}{\Phi} \mid
\G{=1}{\Phi}
\end{align*}
\end{definition}
In Section~\ref{sec:G1(Fq,G1)} we discussed a special case of this fragment, where the top-level operator is $\G{=1}$. Two results are particularly interesting for this section: Firstly, the construction of models for such formulae as explained in the proof of Theorem~\ref{thm:G1(Fq,G1)-size}, and secondly, the properties of BSCCs in models for such formulae as stated in Corollary~\ref{cor:G1(Fq,G1)-bsccs}. We will use those in order to simplify models in this generalized setting. We say a Markov chain has \emph{height $h$} if it is a tree with back edges of height $h$.
\begin{theorem}
\label{thm:Fq,G1-size}
A satisfiable $\F q, \G 1$-formula $\phi$ has a model of height at most $|\phi|$.
\end{theorem}
\begin{proofsketch}
Our aim is to transform a given, possibly infinite model into a tree-like shape. To do so, we first construct a tree by considering all non-nested \F{}-formulae. Each path in this tree will satisfy each of these \F{}-formulae at most once. At the end of each path, we will then insert BSCCs satisfying the \G{}-formulae, in the spirit of Theorem~\ref{thm:G1(Fq,G1)-size}.
The collapsing procedure from a state $s$ is as follows: We first determine which of the \F{}-formulae satisfied at $s$ are relevant. These are the formulae which are not nested in other temporal formulae and have not yet been satisfied on the current path. Once we have determined this set, say $I$, we need to find the successors which are required to satisfy the formulae in $I$. For this, we construct the set $sel(s)$. Informally, $sel(s)$ contains all states $t$ s.t. (i) $t$ satisfies at least one formula $\psi \in I$, and (ii) there is no state on the path between $s$ and $t$ satisfying $\psi$. Formally, $sel(s) := \{ t \in post^*(s) \mid \exists \F{\rhd r} \psi \in I.\ M,t \models \psi \land \forall t' \in post^*(s) \cap pre^*(t). M,t' \not\models \psi \}$. Then, we connect $s$ to every state in $sel(s)$ directly; i.e. for $t \in sel(s)$, we set $P(s,t) := P^*(s,t)$, and for all states $t \not\in sel(s)$, we set $P(s,t) := 0$.\footnote{In fact, we need to scale $P(s,t)$ in order to obtain a Markov chain, in general, as the probability to reach $sel(s)$ might be less than $1$. For details, refer to the formal proof in the Appendix.} A simple induction on the length of a path yields that every state that is reachable from $s$ in the constructed MC is reached with at least the same probability as in the original one. From this, one can easily see that every non-nested \F{}-formula is satisfied. The new set $post(s)$ might be infinite. However, we know that we can prune most of the successors and limit the branching degree to $|\phi|+2$ \cite{LICS}. Then, we repeat the procedure from each of the successors. Since the number of non-nested \F{}-formulae decreases with every step, on every branch we reach states which do not have to satisfy any non-nested \F{}-formulae. The number of steps needed to reach such states is bounded by the number of non-nested \F{}-formulae in $\phi$. At those states, we can use Theorem~\ref{thm:G1(Fq,G1)-size} to obtain models for the respective \G{}-formulae. These are of size linear in the size of the \G{}-formulae. The overall height is then bounded by $|\phi|$. The fact that the resulting MC is a model can be easily proved by induction over $|\phi|$.
\QED\end{proofsketch}
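To make the collapsing step concrete, the following Python sketch (our own illustration under simplifying assumptions, not the construction from the appendix) computes $sel(s)$ and the rewired outgoing distribution of $s$ for a finite chain given as a nested dictionary \texttt{P[u][v]} of transition probabilities. Here \texttt{I} stands for the arguments $\psi$ of the relevant non-nested \F{}-formulae, \texttt{sat(u, psi)} is an assumed satisfaction oracle, and reachability probabilities are approximated by value iteration.
\begin{verbatim}
from collections import defaultdict

def reach_prob(P, s, target, n_iter=2000):
    """Probability of eventually visiting `target` from s (value iteration)."""
    states = set(P) | {target}
    x = defaultdict(float, {target: 1.0})
    for _ in range(n_iter):
        x = defaultdict(float,
                        {u: 1.0 if u == target else
                            sum(p * x[v] for v, p in P.get(u, {}).items())
                         for u in states})
    return x[s]

def selection(P, s, I, sat):
    """sel(s): for every F psi in I, the first psi-states on paths from s."""
    sel = set()
    for psi in I:
        frontier, seen = [s], {s}
        while frontier:
            nxt = []
            for u in frontier:
                if u != s and sat(u, psi):
                    sel.add(u)          # do not search beyond a psi-state
                    continue
                for v in P.get(u, {}):
                    if v not in seen:
                        seen.add(v)
                        nxt.append(v)
            frontier = nxt
    return sel

def collapse_step(P, s, I, sat):
    """Rewire s directly to sel(s), normalising as in the footnote."""
    weights = {t: reach_prob(P, s, t) for t in selection(P, s, I, sat)}
    total = sum(weights.values())
    return {t: w / total for t, w in weights.items()} if total > 0 else {}
\end{verbatim}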
The models that we construct have a quite regular shape: they start as a tree and in every step ensure the satisfaction of one of the \F{}-formulae. As soon as all outer \F{}-formulae are satisfied, a circle-shaped model for the respective \G{}-formula follows on every branch. Since the branching degree is at most $|\phi|+2$ and the number of steps before we repeat a state is bounded by $|\phi|$, the overall size is bounded by $(|\phi|+2)^{|\phi|}$.
\begin{example}
\label{ex:selection-reduction}
Let $\phi^p := \F{\geq p}{\G{=1}{a}}$, and $\psi^p := \F{\geq p}{\G{=1}{\neg a}}$. The large Markov chain of Figure \ref{fig:ex-red-Fq,G1} is a model for $\phi^{1/2} \land \psi^{1/2}$. The grayed states illustrate the set $sel(s_0)$. The other boxes show the sets $sel(.)$ of the respective grayed state. Everything in between is omitted. The smaller Markov chain is the reduced version of the original model.
\begin{figure}
\subfloat{
\begin{tikzpicture}[auto]
\node (s0) [label=above:$s_0$] at (0,0)
{$\{\phi^{1/2}, \psi^{1/2}\}$};
\node (t1) at (-1,-2) {$\{\phi^{3/4}, \psi^{1/4}\}$};
\node (t2) at (1,-2) {$\{\phi^{1/4}, \psi^{3/4}\}$};
\node (s1) [fill=black!10,rectangle] at
(-2,-4) {$\{\phi^1\}$};
\node (s2) [fill=black!10, rectangle] at (-1,-4)
{$\{\psi^1\}$};
\node (s3) [fill=black!10,rectangle] at (1,-4)
{$\{\phi^1\}$};
\node (s4) [fill=black!10,rectangle] at (2,-4)
{$\{\psi^1\}$};
\node (t3) at (-2,-5) {$\{\phi^1\}$};
\node (t4) at (-1,-5) {$\{\psi^1\}$};
\node (t5) at (1,-5) {$\{\phi^1\}$};
\node (t6) at (2,-5) {$\{\psi^1\}$};
\node (t7) at (-2,-6) [draw=black,dotted,rectangle] {$\{a\}$};
\node (t8) at (-1,-6) [draw=black,dashed,rectangle] {$\emptyset$};
\node (t9) at (1,-6) [draw=black,solid,rectangle] {$\{a\}$};
\node (t10) at (2,-6) [draw=black,thick,rectangle] {$\emptyset$};
\draw [->] (s0) to node [swap] {$1/2$} (t1);
\draw [->] (s0) to node {$1/2$} (t2);
\draw [->] (t1) to node [swap] {$3/4$} (s1);
\draw [->] (t1) to node {$1/4$} (s2);
\draw [->] (t2) to node [swap] {$1/4$} (s3);
\draw [->] (t2) to node {$3/4$} (s4);
\draw [->] (s1) to node {$1$} (t3);
\draw [->] (s2) to node {$1$} (t4);
\draw [->] (s3) to node {$1$} (t5);
\draw [->] (s4) to node {$1$} (t6);
\draw [->] (t3) to node {$1$} (t7);
\draw [->] (t4) to node {$1$} (t8);
\draw [->] (t5) to node {$1$} (t9);
\draw [->] (t6) to node {$1$} (t10);
\draw [->,loop below] (t7) to node {$1$} (t7);
\draw [->,loop below] (t8) to node {$1$} (t8);
\draw [->,loop below] (t9) to node {$1$} (t9);
\draw [->,loop below] (t10) to node {$1$} (t10);
\end{tikzpicture}
}
\subfloat{
\begin{tikzpicture}[auto]
\node (s0) [label=above:$s_0$] at (0,0)
{$\{\phi^{1/2}, \psi^{1/2}\}$};
\node (s1) at (-2,-2) {$\{\phi^1\}$};
\node (s2) at (-0.75,-2) {$\{\psi^1\}$};
\node (s3) at (0.75,-2) {$\{\phi^1\}$};
\node (s4) at (2,-2) {$\{\psi^1\}$};
\node (t1) at (-2,-3) {$\{a\}$};
\node (t2) at (-0.75,-3) {$\emptyset$};
\node (t3) at (0.75,-3) {$\{a\}$};
\node (t4) at (2,-3) {$\emptyset$};
\draw [->] (s0) to node [swap,sloped] {$3/8$} (s1);
\draw [->] (s0) to node [swap,sloped] {$1/8$} (s2);
\draw [->] (s0) to node [sloped] {$1/8$} (s3);
\draw [->] (s0) to node [sloped] {$3/8$} (s4);
\draw [->] (s1) to node {$1$} (t1);
\draw [->] (s2) to node {$1$} (t2);
\draw [->] (s3) to node {$1$} (t3);
\draw [->] (s4) to node {$1$} (t4);
\draw [->,loop below] (t1) to node {$1$} (t1);
\draw [->,loop below] (t2) to node {$1$} (t2);
\draw [->,loop below] (t3) to node {$1$} (t3);
\draw [->,loop below] (t4) to node {$1$} (t4);
\end{tikzpicture}
}
\caption{Example of a reduction for a $\F q, \G 1$-formula. }
\label{fig:ex-red-Fq,G1}
\end{figure}
\end{example}
\subsection{Satisfiability for $\F{q/1},\G1,\vee$}\label{ss:noqf}
\label{sec:Fq1,G1,v}
In the previous section, we were able to construct simple models for formulae of the $\F q, \G1$-fragment by exploiting the nature of its \G{}-formulae as presented in Section~\ref{sec:G1(Fq,G1)}. This works because every formula nested within a \G{} is satisfied in every BSCC. Hence, we can simply postpone the satisfaction of those formulae until we reach a BSCC. In the $\F{q},\G1,\vee$-fragment, this is not the case anymore, as discussed in Section~\ref{sec:G1(Fq,G1,v)}. This can cause some complications, which are discussed in more detail in Section~\ref{sec:Fq,G1,v}. In order to apply techniques similar to those of the previous section, we can simplify the fragment and enforce the property that \F{}-formulae occur only with $q=1$ within \G{}s.
\begin{definition}
$\F {q/1}, \G 1,\vee$-formulae are given by the grammar
\begin{align*}
&\Phi ::= a \mid
\neg a \mid
\Phi \land \Phi \mid
\Phi \lor \Phi \mid
\F{\rhd q}{\Phi} \mid
\G{=1}{\Psi} \\
&\Psi ::= a \mid
\neg a \mid
\Psi \land \Psi \mid
\Psi \lor \Psi \mid
\F{=1}{\Psi} \mid
\G{=1}{\Psi}
\end{align*}
\end{definition}
Again, we show that the necessary minimal height of models can be bounded.
\begin{theorem}
\label{thm:Fq1,G1,v-size}
A satisfiable $\F {q/1}, \G 1,\vee$-formula $\phi$ has a model of height at most $|\phi|^2$.
\end{theorem}
\begin{proofsketch}
As a first step, we apply the same procedure as in the proof of Theorem~\ref{thm:Fq,G1-size}. The outer \F{}-formulae are then satisfied for the same reason as in the setting without disjunctions. However, the \G{}-nested \F{}-formulae might not be satisfied anymore because the BSCCs do not necessarily satisfy each of them. Since the \G{}-nested \F{}-formulae appear only with $q=1$, we know that once a state of the original model satisfies such a formula, almost every path from that state satisfies the respective path formula. Let $s$ be a state of the reduced chain, and $t \in post(s)$. In the original model, there might be states in between $s$ and $t$. If some \G{}-nested \F{}-formula (say $\F{=1}\psi$) which is satisfied at $s$ is also satisfied at $t$, we do not need to take care of it. If this is not the case, then we know that some of the states between $s$ and $t$ must satisfy $\psi$. We can determine such states for each \G{}-nested \F{}-formula. We include exactly one of those for each such formula. Then, preserving the order, we chain them in such a way that each one has a unique successor. The last one's unique successor is $t$. Let $s'$ be the first one. Then, we set $P(s,s') := P(s,t)$. We repeat this procedure for each state of the reduced chain.
This way, we preserve the reachability probabilities and therefore the satisfaction of the outer \F{}-formulae. The newly added states guarantee the satisfaction of the nested \F{}-formulae. An induction over $\phi$ shows that the constructed MC is again a model. Since we add at most $|\phi|$ new states between the states of the reduced MC, which is of height at most $|\phi|$, we obtain the claimed bound on the height.
\QED\end{proofsketch}
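The chain insertion itself can be summarised in a few lines of Python (again our own sketch; \texttt{witnesses(s, t)} is an assumed oracle returning, as a list in path order, one fresh witness state per \G{}-nested $\F{=1}$-formula that is satisfied at $s$ but not at $t$):
\begin{verbatim}
def insert_chains(reduced, witnesses):
    """reduced: the collapsed chain as {s: {t: prob}}; witnesses(s, t) is an
    assumed oracle returning fresh witness states between s and t, one per
    G-nested F=1-formula satisfied at s but not at t, in path order."""
    out = {s: dict(succ) for s, succ in reduced.items()}
    for s in reduced:
        for t, p in reduced[s].items():
            chain = witnesses(s, t)
            if not chain:
                continue
            del out[s][t]
            out[s][chain[0]] = p                     # P(s, s') := P(s, t)
            for w, w_next in zip(chain, chain[1:] + [t]):
                out.setdefault(w, {})[w_next] = 1.0  # unique successor
    return out
\end{verbatim}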
Figure \ref{fig:red-Fq,G1,v} illustrates the transformation of the models as described in the proof sketch. $sel_1$ and $sel_2$ are the selections. In the $\F q ,\G 1$-fragment, we directly connected those sets. Here, we insert simple chains between the selections. The construction guarantees that we have at most one state per \F{}-formula to satisfy. This is obtained by postponing the satisfaction until the last possible moment before $sel$.
\begin{figure}
\centering
\begin{tikzpicture}
[set/.style={rectangle, draw=black, fill=white},
state/.style={circle, inner sep=0, minimum size=1mm,
draw=black, fill=black}]
\node[state, label=above:$s_0$] (s0) at (0,0) {};
\node[state] (s1) at (-0.5,-0.5) {};
\node[state] (s2) at (-0.5,-0.75) {};
\node[state] (s3) at (-1,-2.25) {};
\node[state] (s4) at (-1,-2.5) {};
\node[state] (t1) at (0.5,-0.5) {};
\node[state] (t2) at (0.5,-0.75) {};
\node[state] (t3) at (1,-2.25) {};
\node[state] (t4) at (1,-2.5) {};
\node[set,minimum width=10mm] (sel1) at (0,-1.5) {$sel_1$};
\node[set,minimum width=20mm] (sel2) at (0,-3.25) {$sel_2$};
\node[set,minimum width=20mm,minimum height=10mm] (BSCCs)
at (0,-4) {BSCCs};
\draw[->] (s0) -- (s1);
\draw[->] (s1) -- (s2);
\draw[dotted] (s2) -- (sel1.north west);
\draw[dotted] (-0.25,-0.75) -- (0.25,-0.75);
\draw[->] (s0) -- (t1);
\draw[->] (t1) -- (t2);
\draw[dotted] (t2) -- (sel1.north east);
\draw[->] (sel1.south west) -- (s3);
\draw[->] (s3) -- (s4);
\draw[dotted] (s4) -- (sel2.north west);
\draw[dotted] (-0.5,-2.5) -- (0.5,-2.5);
\draw[->] (sel1.south east) -- (t3);
\draw[->] (t3) -- (t4);
\draw[dotted] (t4) -- (sel2.north east);
\end{tikzpicture}
\caption{Reduction of models for $\F {q/1} ,\G 1, \vee$}
\label{fig:red-Fq,G1,v}
\end{figure}
\begin{remark}
Note that the resulting models have a tree shape with BSCCs. There are no non-bottom SCCs. Even if the original model did contain some, they are removed by this construction: the reduction algorithm ensures that every non-nested \F{}-formula occurs at most once on every path, and the inserted chains contain at most one state per nested \F{}-formula and do not introduce cycles.
\end{remark}
\begin{example}\label{ex:Fq1}
Consider the formula
$$\phi := \F{\geq 1/2}{(\G{=1}{a})} \land \G{=1}{(\F{=1}{\neg a} \lor \F{=1}{b})}.$$
Figure \ref{fig:ex-Fq,G1,v-reduction} (a) shows a model for $\phi$. The boxes illustrate the selection of $s_0$. Figure \ref{fig:ex-Fq,G1,v-reduction} (b) shows the corresponding reduced chain. However, it is not a model for $\phi$: neither states satisfying $\neg a$ nor states satisfying $b$ are reached almost surely from $s_0$. By including additional states, the chain in Figure \ref{fig:ex-Fq,G1,v-reduction} (c) corrects this, and we thereby obtain a model of $\phi$.
\begin{figure}
\begin{tikzpicture}[auto]
\node (s0) [label=above:$s_0$] at (0,0) {$\{a\}$};
\node (s1) at (-0.5,-1) {$\{a\}$};
\node (s2) at (0.5,-1) {$\{a\}$};
\node (s3) at (-1.5,-2) {$\emptyset$};
\node (s4) [rectangle,draw=black] at (-0.5,-2) {$\emptyset$};
\node (s5) [rectangle,draw=black] at (0.5,-2) {$\emptyset$};
\node (s6) at (1.5,-2) {$\emptyset$};
\node (s7) [rectangle,draw=black] at (-1.5,-3) {$\{a,b\}$};
\node (s8) [rectangle,draw=black] at (1.5,-3) {$\{a,b\}$};
\draw [->] (s0) to (s1);
\draw [->] (s0) to (s2);
\draw [->] (s1) to (s3);
\draw [->] (s1) to (s4);
\draw [->] (s2) to (s5);
\draw [->] (s2) to (s6);
\draw [->] (s3) to (s7);
\draw [->] (s6) to (s8);
\draw [->,loop below] (s7) to (s7);
\draw [->,loop below] (s8) to (s8);
\draw [->,loop below] (s4) to (s4);
\draw [->,loop below] (s5) to (s5);
\end{tikzpicture}
\begin{tikzpicture}[auto]
\node (s0) [label=above:$s_0$] at (0,0) {$\{a\}$};
\node (s1) at (-1.5,-1) {$\{a,b\}$};
\node (s2) at (-0.5,-1) {$\emptyset$};
\node (s3) at (0.5,-1) {$\emptyset$};
\node (s4) at (1.5,-1) {$\{a,b\}$};
\node[white] (x) at (1.5,-2) {$\{a,b\}$};
\draw [->] (s0) to (s1);
\draw [->] (s0) to (s2);
\draw [->] (s0) to (s3);
\draw [->] (s0) to (s4);
\draw [->,loop below] (s1) to (s1);
\draw [->,loop below] (s2) to (s2);
\draw [->,loop below] (s3) to (s3);
\draw [->,loop below] (s4) to (s4);
\draw [->,loop below,white] (x) to (x);
\end{tikzpicture}
\begin{tikzpicture}[auto]
\node (s0) [label=above:$s_0$] at (0,0) {$\{a\}$};
\node (s1) at (-1.5,-1) {$\emptyset$};
\node (s2) at (-0.5,-1) {$\emptyset$};
\node (s3) at (0.5,-1) {$\emptyset$};
\node (s4) at (1.5,-1) {$\emptyset$};
\node (s5) at (-1.5,-2) {$\{a,b\}$};
\node (s6) at (1.5,-2) {$\{a,b\}$};
\draw [->] (s0) to (s1);
\draw [->] (s0) to (s2);
\draw [->] (s0) to (s3);
\draw [->] (s0) to (s4);
\draw [->] (s1) to (s5);
\draw [->] (s4) to (s6);
\draw [->,loop below] (s3) to (s3);
\draw [->,loop below] (s2) to (s2);
\draw [->,loop below] (s5) to (s5);
\draw [->,loop below] (s6) to (s6);
\end{tikzpicture}
\caption{Example of a reduction in the $\F {q/1},\G 1, \vee$ fragment: (a) original model, (b) reduced model, (c) corrected model}
\label{fig:ex-Fq,G1,v-reduction}
\end{figure}
\end{example}
\subsection{Finite Models for $\F q, \G 1, \vee$}\label{ss:qual}
\label{sec:Fq,G1,v}
In this section, we discuss PCTL$(\F{},\G{})$ where $\G{}$ appears only with the constraint $q=1$. Previously, for the $\F q ,\G 1$-fragment, i.e. without disjunctions, we started from a model which was unfolded for a number of steps; we simplified such a model by dropping states (including all non-bottom SCCs) and then inserted simple chains that guarantee the satisfaction of the nested \F{}-formulae. The resulting model thus (i) does not contain any non-bottom SCCs and (ii) has a size that depends only on the structure of the formula, not on the constraints of the \F{}-formulae. However, in the general $\F q,\G 1,\vee$-fragment, we cannot insert such simple chains to satisfy nested \F{}s. Instead, we may have to branch at several places. Intuitively, the reason for such complications is the presence of a \emph{repeated, controlled choice}. This enables us to find a formula which requires more complicated models, namely models which either have non-bottom SCCs, or are of a size that also depends on the constraints and not only on the structure of the formula, or can even be infinite.
\begin{example}\label{ex:rep-choice}
Consider $\phi := \G{=1} (\F{=1}(a \wedge \F{>0}\neg a) \vee a) \wedge \F{=1}\G{=1}a \wedge \neg a$. We can try to construct a model for $\phi$ as follows: Firstly, we have to start at a state which satisfies $\neg a$. This enforces the satisfaction of the first disjunct in the \G{}-formula. Therefore, almost all paths must lead to a state satisfying $a \land \F{>0}\neg a$. This state must eventually reach a state that satisfies $\neg a$ again with positive probability. Hence, we find ourselves in the same situation as in the initial state. So, we need to either create an SCC of alternating states that satisfy $a$ and $\neg a$, or create an infinite model. If we create an SCC, the side constraint $\F{=1}\G{=1}a$ forces us to eventually leave this SCC. Hence, it is a non-bottom SCC. The MC in Figure~\ref{fig:mod-Fq,G1,v} is a possible model: from $s_0$ it models $\phi$ for any $p \in (0,1)$.
\begin{figure}
\centering
\begin{tikzpicture}[auto]
\node (s0) [label=above:$s_0$] at (0,0) {$\emptyset$};
\node (s1) at (2,0) {$\{a\}$};
\node (s2) at (4,0) {$\{a\}$};
\draw[->,bend left] (s0) to node {$1$} (s1);
\draw[->,bend left] (s1) to node {$p$} (s0);
\draw[->] (s1) to node {$1-p$} (s2);
\draw[->,loop right] (s2) to node {$1$} (s2);
\end{tikzpicture}
\caption{Example model for $\phi$}
\label{fig:mod-Fq,G1,v}
\end{figure}
\end{example}
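As a sanity check (our own calculation), consider the chain of Figure~\ref{fig:mod-Fq,G1,v}: every visit to the middle state moves to the right-most state with probability $1-p$, so the probability of eventually reaching it from $s_0$ is
\begin{equation*}
\sum_{k\geq 0} p^{k}(1-p) = 1 \qquad\text{for every } p\in(0,1),
\end{equation*}
hence $s_0 \models \F{=1}\G{=1}a$. Moreover, $s_0$ satisfies $\neg a$ and, since its unique successor satisfies $a$ and returns to $s_0$ with probability $p>0$, also $\F{=1}(a \wedge \F{>0}\neg a)$; the two $a$-labelled states satisfy the second disjunct directly. The disjunction inside the $\G{=1}$ thus holds at every reachable state, so $\phi$ is indeed satisfied.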
Note that the formula given in the example is qualitative. For this fragment, the satisfiability problem is already known to be decidable \cite{LICS}. However, we can easily adapt the formula to be quantitative. In this case, we might still be able to obtain a model for the quantitative version by keeping the shape of the model for the qualitative version and only adapting its probabilities. Whether this is possible in general remains open and might be an interesting question for future work.
\section{Discussion, Conclusion, and Future Work}
\label{Chapter6}\label{sec:disc}
We have identified the pattern of the \emph{controlled repeated choice}, i.e. formulae of the form
$$\G{=1}(\phi_1 \vee \cdots \vee \phi_n)$$
where at least one of the $\phi_i$ contains an \F{}-formula with a constraint other than $q=1$. Additionally, we have ``controlling'' side constraints, as in Example~\ref{ex:rep-choice}. We have seen that the presence of this pattern enforces a more complicated structure of models even in the qualitative setting. This pattern is expressible in the $\F q,\G 1,\vee$-fragment. Whenever we
\begin{enumerate}[label=(\alph*)]
\item drop the side constraints, keeping only the $\G{=1}$-part, i.e. consider the $\G1(\F q,\G 1,\vee)$-fragment, or
\item drop the disjunction for the choice and consider the $\F q,\G1$-fragment, or
\item drop the quantity of the choice and consider the $\F{q/1},\G1,\vee$-fragment,
\end{enumerate}
the structure is simpler and we obtain decidability. For these fragments, we have even shown that the general satisfiability problem is equivalent to the finite satisfiability problem.
Further, adding quantities to $\G{}$-constraints obviously also makes the satisfiability problem more complicated. Already for the qualitative $\G{>0}(\F{>0})$-fragment, satisfiability and finite satisfiability differ. Nevertheless, we established the decidability of finite satisfiability even for the $\G q(\F q,\G q,\vee)$-fragment.
\bigskip
Consequently, instead of attacking the whole quantitative PCTL or even just PCTL($\F{},\G{}$), we suggest two easier tasks, which should lead to a fundamental increase of understanding the general problem, namely:
\begin{itemize}
\item finite (and also general) satisfiability of the $\F q,\G 1,\vee$-fragment, i.e. PCTL($\F{},\G{}$) where $\G{}$ is limited to the $=1$ constraint, and
\item infinite satisfiability of the $\G q(\F q,\G q,\vee)$-fragment, i.e. $\G{}$-formulae of PCTL($\F{},\G{}$).
\end{itemize}
While the former omits issues stemming from $\G{>0}$ \cite{LICS} and only deals with the repeated choice, the latter generalizes the qualitative results for $\G{>0},\G{=1}$ \cite{HS,LICS} in the presence of general quantitative $\F{}$'s.
Further, potentially more straightforward directions include generalizing the results obtained in this paper to the until- and release-operators instead of the future- and globally-operators, respectively, or introducing the next-operator.
\bibliographystyle{plainurl}
\IEEEraisesectionheading{\section{Introduction}\label{sec:introduction}}
\IEEEPARstart{W}{ith} the advent of social media and the shift of image capturing mode from digital cameras to smartphones and life-logging devices, users share massive amounts of personal photos online these days. Being able to recognise people in such photos would benefit the users by easing photo album organisation. Recognising people in natural environments poses interesting challenges; people may be focused on their activities with their faces not visible, or may change clothing or hairstyle. These challenges are largely new -- the traditional focus of computer vision research on human identification has been face recognition (frontal, fully visible faces) or pedestrian re-identification (no clothing changes, standing pose).
Intuitively, the ability to recognise faces in the wild \cite{Huang2007Lfw,Sun2014ArxivDeepId2plus} is still an important ingredient. However, when people are engaged in an activity (i.e. not posing), their faces become only partially visible (non-frontal, occluded) or simply fully invisible (back-view). Therefore, additional information is required to reliably recognise people. We explore other cues that include (1) the body of a person, which contains information about shape and appearance; (2) human attributes such as gender and age; and (3) scene context. See Figure \ref{fig:teaser} for examples that require an increasing number of contextual cues for successful recognition.
This paper presents an in-depth analysis of the person recognition task in the social media type of photos: given a few annotated training images per person, who is this person in the test image? The main contributions of the paper are summarised as follows:
\begin{itemize}
\item{Propose realistic and challenging person recognition scenarios on the PIPA benchmark (\S\ref{sec:PIPA-dataset}).}
\item{Provide a detailed analysis of the informativeness of different cues, in particular of a face recognition module DeepID2+ \cite{Sun2014ArxivDeepId2plus} (\S\ref{sec:Cues-for-recognition}).}
\item{Verify that \texttt{naeil2}, the final model of this journal version, achieves the new state-of-the-art performance on PIPA (\S\ref{sec:Test-set-results}).}
\item{Analyse the contribution of cues according to the amount of appearance and viewpoint changes (\S\ref{sec:challenges-analysis})}.
\item{Discuss the performance of our methods under the open-world recognition setup (\S{\color{red} B}, Appendix)}
\item Code and data are open source: available at \url{https://goo.gl/DKuhlY}.
\end{itemize}
\begin{figure}
\begin{centering}
\arrayrulecolor{gray}
\par\end{centering}
\begin{centering}
\includegraphics[bb=0bp 0bp 601bp 600bp,width=0.25\columnwidth,height=0.25\columnwidth]{figures/teaser50_allcorrect_face_28134}\includegraphics[bb=0bp 0bp 721bp 721bp,width=0.25\columnwidth,height=0.25\columnwidth]{figures/teaser50_headincorrect_whole_35166}\includegraphics[bb=0bp 0bp 501bp 501bp,width=0.25\columnwidth,height=0.25\columnwidth]{figures/teaser50_personincorrect_8850}\includegraphics[bb=0bp 0bp 401bp 401bp,width=0.25\columnwidth,height=0.25\columnwidth]{figures/teaser50_onlynaeil_4631}
\par\end{centering}
\vspace{0.5em}
\begin{raggedright}
\begin{tabular}{lcc|ccc|ccc|ccc}
\hspace*{-0.6em}Head & \hspace*{-0.6em}\includegraphics[height=0.8em]{figures/yes} & \hspace*{-0.15em} & \hspace{1.0em} & \includegraphics[height=1em]{figures/no} & \hspace{1.0em} & \hspace{1.0em} & \includegraphics[height=1em]{figures/no} & \hspace{1.0em} & \hspace{0.95em} & \includegraphics[height=1em]{figures/no} & \hspace{10em}\tabularnewline
\hspace*{-0.6em}Body & \hspace*{-0.6em}\includegraphics[height=0.8em]{figures/yes} & \hspace*{-0.17em} & & \includegraphics[height=0.8em]{figures/yes} & & & \includegraphics[height=1em]{figures/no} & & & \includegraphics[height=1em]{figures/no} & \tabularnewline
\hspace*{-0.6em}{\scriptsize{}Attributes} & \hspace*{-0.6em}\includegraphics[height=0.8em]{figures/yes} & \hspace*{-0.17em} & & \includegraphics[height=0.8em]{figures/yes} & & & \includegraphics[height=0.8em]{figures/yes} & & & \includegraphics[height=1em]{figures/no} & \tabularnewline
\hspace*{-0.6em}{\small{}All cues} & \hspace*{-0.6em}\includegraphics[height=0.8em]{figures/yes} & \hspace*{-0.17em} & & \includegraphics[height=0.8em]{figures/yes} & & & \includegraphics[height=0.8em]{figures/yes} & & & \includegraphics[height=0.8em]{figures/yes} & \tabularnewline
\end{tabular}
\par\end{raggedright}
\arrayrulecolor{black}\vspace{0.5em}
\caption{\label{fig:teaser}In social media photos, depending on face occlusion or pose, different cues may be effective. For example, the surfer in the third column is not recognised using only head and body cues due to unusual pose. However, she is successfully recognised when additional attribute cues are considered.}
\end{figure}
\subsection{\label{subsec:Related-work}Related work}
\subsubsection*{Data type}
The bulk of previous work on person recognition focuses on faces. The Labeled Faces in the Wild (LFW) \cite{Huang2007Lfw} has been a great testbed for a host of works on face identification and verification outside the lab setting. The benchmark has nearly been saturated in recent years, owing to deep features \cite{Taigman2014CvprDeepFace,Sun2014ArxivDeepId2plus,Zhou2015ArxivNaiveDeepFace,Schroff2015ArxivFaceNet,Parkhi15,chen2016unconstrained,wen2016discriminative} trained on large-scale face databases, which outperform the traditional methods involving sophisticated classifiers based on hand-crafted features and metric learning approaches \cite{Guillaumin2009Iccv,Chen2013CvprBlessing,Cao2013IccvTransferLearning,Lu2014ArxivGaussianFace}. However, LFW is not representative of social media photos: the data consists mainly of unoccluded frontal faces (face detections) and has a bias towards public figures. Indeed, more recent benchmarks have introduced more difficult types of data. IARPA Janus Benchmark A (IJB-A) \cite{klare2015pushing} includes faces with profile viewpoints, but is still limited to public figures.
Not only the face but also the body has been explored as a cue for human identification. For example, pedestrian re-identification (re-id) tackles the problem of matching pedestrian detections in different camera views. Standard benchmarks include VIPeR \cite{gray2007VIPeR}, CAVIAR \cite{cheng2011CAVIAR}, CUHK \cite{li2012CUHK1}, and Caltech Roadside Pedestrians \cite{hall2015fine}, with an active line of research that previously focused on devising good hand-crafted features \cite{Li2013Cvpr,Zhao2013IccvSalienceMatching,Bak2014WacvBrownian} and now focuses more on developing effective convnet architectures \cite{Li2014CvprDeepReID,Yi2014Arxiv,Hu2014Accvw,ahmed2015improved,Cheng_2016_CVPR,Xiao_2016_CVPR,Varior2016,Chen/cvpr2017}. However, typical re-id benchmarks do not fully cover the social media setup in three aspects: (1) subjects mostly appear in the standing pose, (2) the resolution is low, and (3) the matching is only evaluated across a short time span.
Human identification in natural, everyday environments was first covered by the ``Gallagher collection person dataset'' \cite{Gallagher2008Cvpr}. However, the dataset is small ($\sim\negmedspace 600$ images, 32 identities) compared to the size of a typical social media account, and again only the frontal faces are annotated. MegaFace \cite{kemelmacher2016megaface,nech2017level} is perhaps the largest known open source face database over social media photos. However, MegaFace does not contain any back-view subjects (they are pruned by a face detector), and the per-account statistics (e.g. the number of photos per account) are not preserved due to data processing steps. We build our paper upon the PIPA dataset \cite{Zhang2015CvprPiper}, also crawled from Flickr and fairly large in scale ($\sim$40k images, $\sim$2k identities), with diverse appearances and subjects at all viewpoints and occlusion levels. Heads are annotated with bounding boxes, each with an identity tag. We describe PIPA in greater detail in \S \ref{sec:PIPA-dataset}.
\subsubsection*{Recognition tasks}
There exist multiple tasks related to person recognition \cite{Gong2014PersonReIdBook}, differing mainly in the amount of training and testing data. Face and surveillance re-identification is most commonly done via verification: \emph{given one reference image (gallery) and one test image (probe), do they show the same person?} \cite{Huang2007Lfw,Bedagkar2014IvcPersonReIdSurvey}. In this paper, we consider two recognition tasks. (1) Closed world identification: \emph{given a single test image (probe), who is this person among the training identities (gallery set)?} (2) Open world recognition \cite{kemelmacher2016megaface} (\S{\color{red} B}, Appendix): \emph{given a single test image (probe), is this person among the training identities (gallery set)? If so, who?}
Other related tasks include face clustering \cite{Cui2007ChiEasyAlbum,Schroff2015ArxivFaceNet}, finding important people \cite{mathialagan2015vip}, and associating names in text with faces in images \cite{Everingham2006Bmvc,Everingham2009Ivc}.
\subsubsection*{Prior work with the same data type and task}
Since the introduction of the PIPA dataset \cite{Zhang2015CvprPiper}, multiple works have proposed different methods for solving the person recognition problem in social media photos. Zhang et al. proposed the Pose Invariant Person Recognition (\texttt{PIPER}) \cite{Zhang2015CvprPiper}, obtaining promising results by combining three ingredients: DeepFace \cite{Taigman2014CvprDeepFace} (face recognition module trained on a large private dataset), poselets \cite{Bourdev2009IccvPoselets} (pose estimation module trained with 2k images and 19 keypoint annotations), and convnet features trained on detected poselets \cite{Krizhevsky2012Nips,Deng2009CvprImageNet}.
Oh et al. \cite{oh2015person}, the conference version of this paper, have proposed a simple model \texttt{naeil} that extracts AlexNet cues from multiple \emph{fixed} image regions. In particular, it does not require data-heavy DeepFace or time-costly poselets, while achieving a slightly better recognition performance than \texttt{PIPER}.
There have been many follow-up works since then. Kumar et al. \cite{kumar2017pose} have improved the performance by normalising the body pose using pose estimation. Li et al. \cite{li2017sequential} considered exploiting people co-occurrence statistics. Liu et al. \cite{liu_2017_coco} have proposed to train a person embedding in a metric space instead of training a classifier on a fixed set of identities, thereby making the model more adaptable to unseen identities. Some works have exploited the photo-album metadata, allowing the model to reason over different photos \cite{joon16eccv,Li_2016_CVPR}.
In this journal version, we build \texttt{naeil2} from \texttt{naeil} and DeepID2+ \cite{Sun2014ArxivDeepId2plus} to achieve the state-of-the-art result among the published works on PIPA. We also provide an additional analysis of the cues with respect to time and viewpoint changes.
\section{\label{sec:PIPA-dataset}Dataset and experimental setup}
\subsubsection*{Dataset}
The PIPA dataset (``People In Photo Albums'') \cite{Zhang2015CvprPiper} is, to the best of our knowledge, the first dataset to annotate people's identities even when they are pictured from the back. The annotators labelled instances that can be considered hard even for humans (see qualitative examples in figure \ref{fig:success-O-split}, \ref{fig:failure-O-split}). PIPA features $37\,107$ Flickr personal photo album images (Creative Commons license), with $63\,188$ head bounding boxes of $2\,356$ identities. The head bounding boxes are tight around the skull, including the face and hair; occluded heads are hallucinated by the annotators. The dataset is partitioned into \emph{train}, \emph{val}, \emph{test}, and \emph{leftover} sets, with rough ratio $45\negmedspace:\negmedspace15\negmedspace:\negmedspace20\negmedspace:\negmedspace20$ percent of the annotated heads. The leftover set is not used in this paper. Up to annotation errors, neither identities nor photo albums by the same uploader are shared among these sets.
\subsubsection*{Task}
At test time, the system is given a photo and ground truth head bounding box corresponding to the test instance (probe). The task is to choose the identity of the test instance among a given set of identities (gallery set, 200$\sim$500 identities) each with $\sim$10 training samples.
In Appendix \S{\color{red}B}, we evaluate the methods when the test instance may be a background person (e.g. bystanders -- no training image given). The system is then also required to determine if the given instance is among the seen identities (gallery set).
\subsubsection*{Protocol}
We follow the PIPA protocol in \cite{Zhang2015CvprPiper} for data utilisation and model evaluation. The \emph{train} set is used for convnet feature training. The \emph{test} set contains the examples for the test identities. For each identity, the samples are divided into $\mbox{\emph{test}}_{0}$ and $\mbox{\emph{test}}_{1}$. For evaluation, we perform a two-fold cross validation by training on one of the splits and testing on the other. The \emph{val} set is likewise split into $\mbox{\emph{val}}_{0}$ and $\mbox{\emph{val}}_{1}$, and is used for exploring different models and tuning hyperparameters.
\subsubsection*{Evaluation}
We use the recognition rate (or accuracy), the rate of correct identity predictions among the test instances. For every experiment, we average two recognition rates obtained from the (training, testing) pairs ($\mbox{\emph{val}}_{0}$, $\mbox{\emph{val}}_{1}$) and ($\mbox{\emph{val}}_{1}$, $\mbox{\emph{val}}_{0}$) -- analogously for \emph{test}.
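For concreteness, the evaluation protocol can be written down in a few lines of Python (our own illustration, not released code); \texttt{train\_and\_predict} is an assumed callable that trains on one split and returns predicted identities for the other:
\begin{verbatim}
import numpy as np

def recognition_rate(y_pred, y_true):
    """Fraction of test instances whose identity is predicted correctly."""
    return float(np.mean(np.asarray(y_pred) == np.asarray(y_true)))

def two_fold_accuracy(train_and_predict, split0, split1):
    """Average of the two (training, testing) directions of the protocol."""
    (X0, y0), (X1, y1) = split0, split1
    acc01 = recognition_rate(train_and_predict(X0, y0, X1), y1)  # train 0, test 1
    acc10 = recognition_rate(train_and_predict(X1, y1, X0), y0)  # train 1, test 0
    return 0.5 * (acc01 + acc10)
\end{verbatim}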
\subsection{\label{subsec:PIPA-splits}Splits}
We consider four different ways of splitting the training and testing samples ($\mbox{\emph{val}}_{\nicefrac{0}{1}}$ and $\mbox{\emph{test}}_{\nicefrac{0}{1}}$) for each identity, aiming to evaluate different levels of generalisation ability. The first one is from prior work, and we introduce three new ones. Refer to table \ref{tab:stat-splits} for data statistics and figure \ref{fig:splits-visualisation} for visualisation.
\subsubsection*{Original split $\mathcal{O}$ \cite{Zhang2015CvprPiper}}
The Original split shares many similar examples per identity across the splits -- e.g. photos taken in a row. The Original split is thus easy: even nearest neighbour on raw RGB pixels works (\S \ref{subsec:face-rgb-baseline}). In order to evaluate the ability to generalise across long-term appearance changes, we introduce three new splits below.
\subsubsection*{Album split $\mathcal{A}$ \cite{oh2015person}}
The Album split divides training and test samples for each identity according to the photo album metadata. Each split takes the albums while trying to match the number of samples per identity as well as the total number of samples across the splits. A few albums are shared between the splits in order to match the number of samples. Since the Flickr albums are user-defined and do not always strictly cluster events and occasions, the split may not be perfect.
\subsubsection*{Time split $\mathcal{T}$ \cite{oh2015person}}
The Time split divides the samples according to the time the photo was taken. For each identity, the samples are sorted according to their ``photo-taken-date'' metadata, and then divided on a newest-versus-oldest basis. The instances without time metadata are distributed evenly. This split evaluates the temporal generalisation of the recogniser. However, the ``photo-taken-date'' metadata is very noisy, with many missing entries.
\subsubsection*{Day split $\mathcal{D}$ \cite{oh2015person}}
The Day split divides the instances via visual inspection to ensure a firm appearance change across the splits. We define two criteria for division: (1) firm evidence of a date change, such as a change of \{season, continent, event, co-occurring people\}, and/or (2) visible changes in \{hairstyle, make-up, head or body wear\}. We discard identities for whom such a division is not possible. After division, for each identity we randomly discard samples from the larger split until the sizes match. If the smaller split has $\leq\negmedspace 4$ instances, we discard the identity altogether. The Day split enables clean experiments for evaluating the generalisation performance across strong appearance and event changes.
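The balancing step after the manual division can be sketched as follows (our own illustration; the visual inspection itself is of course not modelled):
\begin{verbatim}
import random

def balance_day_split(division, seed=0):
    """division: {identity: (side_a, side_b)}, the manually divided lists of
    instance ids per identity."""
    rng = random.Random(seed)
    balanced = {}
    for ident, (a, b) in division.items():
        small, large = sorted((list(a), list(b)), key=len)
        if len(small) <= 4:            # too few instances: drop the identity
            continue
        rng.shuffle(large)             # discard surplus samples at random
        balanced[ident] = (small, large[:len(small)])
    return balanced
\end{verbatim}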
\subsection{\label{subsec:face-detection}Face detection}
Instances in PIPA are annotated by humans around their heads (tight around the skull). We additionally compute face detections over PIPA for three purposes: (1) to compare the amount of identity information in the head versus the face (\S\ref{sec:Cues-for-recognition}), (2) to obtain head orientation information for further analysis (\S\ref{sec:challenges-analysis}), and (3) to simulate the scenario without a ground truth head box at test time (Appendix \S{\color{red}B}). We use the open source DPM face detector \cite{Mathias2014Eccv}.
Given a set of detected faces (above a certain detection score threshold) and the ground truth heads, the matching is done according to the overlap (intersection over union). For matched heads, the corresponding face detections tell us which DPM component fired, thereby allowing us to infer the head orientation (frontal or side view). See Appendix \S{\color{red}A} for further details.
Using the DPM component, we partition instances in PIPA as follows: (1) detected and frontal ($\texttt{FR}$, 41.29\%), (2) detected and non-frontal ($\texttt{NFR}$, 27.10\%), and (3) no face detected ($\texttt{NFD}$, 31.60\%). We denote detections without matching ground truth head as Background. See figure \ref{fig:threeway-diagram} for visualisation.
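The matching itself is a standard intersection-over-union assignment; the following Python sketch is our own illustration (the $0.5$ threshold and the greedy assignment are our assumptions, the text above only specifies that matches are made by overlap):
\begin{verbatim}
def iou(a, b):
    """Intersection over union of boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def match_heads_to_faces(heads, detections, thr=0.5):
    """Greedy matching: each head gets the best-overlapping unused detection.
    detections: list of dicts with keys 'box' and 'component'."""
    matches, used = {}, set()
    for h_idx, head in enumerate(heads):
        best, best_iou = None, thr
        for d_idx, det in enumerate(detections):
            if d_idx in used:
                continue
            o = iou(head, det['box'])
            if o >= best_iou:
                best, best_iou = d_idx, o
        if best is not None:
            used.add(best)
            # det['component'] then indicates frontal vs. non-frontal
            matches[h_idx] = best
    return matches
\end{verbatim}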
\begin{figure}
\begin{centering}
\includegraphics[width=0.8\columnwidth]{figures/threeway_fixed}\vspace{-0.5em}
\par\end{centering}
\caption{\label{fig:threeway-diagram}Face detections and head annotations in PIPA. The matches are determined by overlap (intersection over union). For matched faces (heads), the detector DPM component gives the orientation information (frontal versus non-frontal).}
\end{figure}
\begin{table}
\begin{centering}
\setlength\tabcolsep{0.5em}
\begin{tabular}{cccccccccccc}
& & & \multicolumn{4}{c}{\emph{val}} & & \multicolumn{4}{c}{\emph{test}} \tabularnewline
\vspace{-1em}
& & & & & & & \tabularnewline
\cline{4-7} \cline{9-12}
\vspace{-1em}
& & & & & & & \tabularnewline
& & & $\mathcal{O}$ & $\mathcal{A}$ & $\mathcal{T}$ & $\mathcal{D}$ & & $\mathcal{O}$ & $\mathcal{A}$ & $\mathcal{T}$ & $\mathcal{D}$ \tabularnewline
\vspace{-1em}
& & & & & & & \tabularnewline
\cline{1-2} \cline{4-7} \cline{9-12}
\vspace{-1em}
& & & & & & & \tabularnewline
\cline{1-2} \cline{4-7} \cline{9-12}
\vspace{-0.5em}
& & & & & & & \tabularnewline
\multirow{2}{*}{{\rotatebox{90}{{spl.0}\hspace{0em}}}}&instance & & 4820 & 4859 & 4818 & 1076 & & 6443 & 6497 & 6441 & 2484 \tabularnewline
&identity & & 366 & 366 & 366 & 65 & & 581 & 581 & 581 & 199 \tabularnewline
\vspace{-1em}
& & & & & & &\tabularnewline
\cline{1-2} \cline{4-7} \cline{9-12}
\vspace{-0.5em}
& & & & & & & \tabularnewline
\multirow{2}{*}{\rotatebox{90}{spl.1\hspace{0em}}}&instance & & 4820 & 4783 & 4824 & 1076 & & 6443 & 6389 & 6445 & 2485 \tabularnewline
&identity & & 366 & 366 & 366 & 65 & & 581 & 581 & 581 & 199 \tabularnewline
\vspace{-1em}
& & & & & & &\tabularnewline
\cline{1-2} \cline{4-7} \cline{9-12}
\end{tabular}
\par\end{centering}
\vspace{1em}
\caption{\label{tab:stat-splits}Split statistics for \emph{val} and \emph{test} sets. The total number of instances and identities for each split is shown.}
\end{table}
\begin{figure*}
\begin{centering}
\hspace*{\fill}\includegraphics[width=1.8\columnwidth]{figures/split1_ID117.png}\hspace*{\fill}
\par\end{centering}
\begin{centering}
\vspace{0.5em}
\par\end{centering}
\begin{centering}
\hspace*{\fill}\includegraphics[width=1.8\columnwidth]{figures/split2_ID118.png}\hspace*{\fill}
\par\end{centering}
\begin{centering}
\vspace{0.5em}
\par\end{centering}
\begin{centering}
\hspace*{\fill}\includegraphics[width=1.8\columnwidth]{figures/split3_ID1394.png}\hspace*{\fill}
\par\end{centering}
\begin{centering}
\vspace{0em}
\par\end{centering}
\caption{\label{fig:splits-visualisation}Visualisation of the Original, Album, Time and Day splits for three identities (rows 1-3). A greater appearance gap is observed going from the Original to the Day split.}
\end{figure*}
\section{\label{sec:Cues-for-recognition}Cues for recognition}
In this section, we investigate the cues for recognising people in social media photos. We begin with an overview of our model. Then, we experimentally answer the following questions: how informative are fixed body regions (no pose estimation) (\S\ref{sec:how-informative-body-region})? How much does scene context help (\S\textcolor{red}{\ref{sec:Scene}})? Is it head or face (head minus hair and background) that is more informative (\S\ref{sec:Head-or-face})? And how much do we gain by using extended data (\S{\ref{sec:Additional-training-data}} \& \S{\ref{sec:Attributes}})? How effective is a specialised face recogniser (\S{\ref{sec:deepid}})? Studies in this section are based exclusively on the \emph{val} set.
\subsection{Model overview}
\begin{wrapfigure}{O}{0.45\columnwidth}%
\begin{centering}
\par\end{centering}
\centering{}
\includegraphics[height=0.6\columnwidth]{figures/method_overview_v2}
\caption{\label{fig:image-regions}Regions considered for feature extraction: face $\ensuremath{\texttt{f}}$, head $\ensuremath{\texttt{h}}$, upper body $\ensuremath{\texttt{u}}$, full body $\ensuremath{\texttt{b}}$, and scene $\ensuremath{\texttt{s}}$. More than one cue can be extracted per region (e.g. $\ensuremath{\texttt{h}}_{1}$, $\ensuremath{\texttt{h}}_{2}$ ).
}
\end{wrapfigure}%
At test time, given a ground truth head bounding box, we estimate five different regions depicted in figure \ref{fig:image-regions}. Each region is fed into one or more convnets to obtain a set of cues. The cues are concatenated to form a feature vector describing the instance. Throughout the paper we write $+$ to denote vector concatenation. Linear SVM classifiers are trained over this feature vector (one versus the rest). In our final system, all features except DeepID2+ \cite{Sun2014ArxivDeepId2plus} are computed using the seventh layer (fc7) of AlexNet \cite{Krizhevsky2012Nips} pre-trained for ImageNet classification. Apart from the DeepID2+ feature, the cues differ from each other only in the image area and in the fine-tuning (type of data or surrogate task) used to alter the AlexNet.
\subsection{\label{subsec:Body-regions}Image regions used}
We choose five different image regions based on the ground truth head
annotation (given at test time, see the protocol in \S\ref{sec:PIPA-dataset}). The
head rectangle $\ensuremath{\texttt{h}}$ corresponds to the ground
truth annotation. The full body rectangle $\ensuremath{\texttt{b}}$
is defined as $\left(3\negthinspace\times\negthinspace\mbox{head width},\right.$
$\left.6\negthinspace\times\negthinspace\mbox{head height}\right)$,
with the head at the top centre of the full body. The upper body rectangle
$\ensuremath{\texttt{u}}$ is the upper-half of $\ensuremath{\texttt{b}}$.
The scene region $\ensuremath{\texttt{s}}$ is the whole image containing
the head.
The face region $\ensuremath{\texttt{f}}$ is obtained using the DPM face detector discussed in \S\ref{subsec:face-detection}. For head boxes with no matching detection (e.g. back views and occluded faces), we regress the face area from the head using the face-head displacement statistics on the \emph{train} set. Five respective image regions are illustrated in figure \ref{fig:image-regions}.
Note that the regions overlap with each other, and that depending
on the person's pose they might be completely off. For example,
$\ensuremath{\texttt{b}}$ for a lying person is likely to contain
more background than the actual body.
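For reference, the fixed regions can be derived from the head box with a few lines of Python (our own illustration; boxes are not clipped to the image, and the face region $\texttt{f}$ is produced by the detector or the regression described above rather than by this geometry):
\begin{verbatim}
def regions_from_head(head, img_w, img_h):
    """head: (x, y, w, h) with (x, y) the top-left corner."""
    x, y, w, h = head
    bx = x + 0.5 * w - 1.5 * w          # head at the top centre of the body
    body = (bx, y, 3 * w, 6 * h)        # 3x head width, 6x head height
    upper = (bx, y, 3 * w, 3 * h)       # upper half of the full body
    scene = (0, 0, img_w, img_h)        # the whole image
    return {'h': head, 'u': upper, 'b': body, 's': scene}
\end{verbatim}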
\subsection{\label{subsec:Implementation}Fine-tuning and parameters}
Unless specified otherwise, AlexNet is fine-tuned using the PIPA \emph{train} set ($\sim\negmedspace30\mbox{k}$ instances, $\sim\negmedspace1.5\mbox{k}$ identities), cropped at five different image regions, with $300\mbox{k}$ mini-batch iterations (batch size $50$). We refer to the base cue thus obtained as $\ensuremath{\texttt{f}}$, $\ensuremath{\texttt{h}}$, $\ensuremath{\texttt{u}}$, $\ensuremath{\texttt{b}}$, or $\ensuremath{\texttt{s}}$, depending on the cropped region. On the \emph{val} set we found the fine-tuning to provide a systematic gain of $\sim\negmedspace10$ percent points (pp) over the non-fine-tuned AlexNet (figure \ref{fig:fine-tuning-effect}). We use the seventh layer (fc7) of AlexNet for each cue ($4\,096$ dimensions).
We train for each identity a one-versus-all SVM classifier with the regularisation parameter $C=1$; the results turned out to be insensitive to this parameter in our preliminary experiments. As an alternative, the naive nearest neighbour classifier has also been considered. However, on the \emph{val} set the SVMs consistently outperform the NNs by a $\sim\negmedspace10\ \mbox{pp}$ margin.
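The classification step can be sketched as follows (our own illustration; scikit-learn is our choice here and the fc7 features are assumed to be precomputed):
\begin{verbatim}
import numpy as np
from sklearn.svm import LinearSVC

def combine_cues(cue_features):
    """Concatenate the per-cue fc7 features ('+' in the paper); every array in
    cue_features has shape (n_instances, 4096)."""
    return np.concatenate(cue_features, axis=1)

def train_and_evaluate(train_cues, y_train, test_cues, y_test, C=1.0):
    X_train, X_test = combine_cues(train_cues), combine_cues(test_cues)
    clf = LinearSVC(C=C)               # one-versus-rest linear SVMs
    clf.fit(X_train, y_train)
    return float((clf.predict(X_test) == np.asarray(y_test)).mean())
\end{verbatim}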
\begin{figure}
\begin{centering}
\includegraphics[width=0.7\columnwidth]{figures/fine_tunning_effect}\vspace{-0.5em}
\par\end{centering}
\caption{\label{fig:fine-tuning-effect}PIPA \emph{val} set performance of different
cues versus the SGD iterations in fine-tuning.}
\end{figure}
\subsection{\label{sec:how-informative-body-region}How informative is each image
region?}
\begin{table}
\begin{centering}
\par\end{centering}
\begin{centering}
\begin{tabular}{lllcc}
&& Cue & & Accuracy\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
& \vspace{-0.9em}\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
{Chance level} &&&& \hspace{0.5em}1.04\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
Scene (\S\ref{sec:Scene}) && \texttt{$\texttt{s}$} && 27.06\tabularnewline
Body && $\mbox{\ensuremath{\texttt{b}}}$ && 80.81\tabularnewline
Upper body && $\texttt{u}$ && 84.76\tabularnewline
Head && $\texttt{h}$ && 83.88\tabularnewline
Face (\S\ref{sec:Head-or-face}) && $\texttt{f}$ && 74.45\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
Zoom out && $\texttt{f}$ && 74.45\tabularnewline
&& $\texttt{f}\negthinspace+\negthinspace\texttt{h}$ && 84.80\tabularnewline
&& $\texttt{f}\negthinspace+\negthinspace\texttt{h}\negthinspace+\negthinspace\texttt{u}$ && 90.65\tabularnewline
& & $\texttt{f}\negthinspace+\negthinspace\texttt{h}\negthinspace+\negthinspace\texttt{u}\negthinspace+\negthinspace\texttt{b}$ && 91.14\tabularnewline
& & $\texttt{f}\negthinspace+\negthinspace\texttt{h}\negthinspace+\negthinspace\texttt{u}\negthinspace+\negthinspace\texttt{b}\negthinspace+\negthinspace\texttt{s}$ && 91.16\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
Zoom in && \texttt{$\texttt{s}$} && 27.06\tabularnewline
&& $\texttt{s}\negthinspace+\negthinspace\texttt{b}$ && 82.16\tabularnewline
&& $\texttt{s}\negthinspace+\negthinspace\texttt{b}\negthinspace+\negthinspace\texttt{u}$ && 86.39\tabularnewline
& & $\texttt{s}\negthinspace+\negthinspace\texttt{b}\negthinspace+\negthinspace\texttt{u}\negthinspace+\negthinspace\texttt{h}$ && 90.40\tabularnewline
&& $\texttt{s}\negthinspace+\negthinspace\texttt{b}\negthinspace+\negthinspace\texttt{u}\negthinspace+\negthinspace\texttt{h}\negthinspace+\negthinspace\texttt{f}$ && 91.16\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
Head+body && $\texttt{h}\negthinspace+\negthinspace\texttt{b}$ && 89.42\tabularnewline
Full person && $\texttt{P}=\texttt{f}\negthinspace+\negthinspace\texttt{h}\negthinspace+\negthinspace\texttt{u}\negthinspace+\negthinspace\texttt{b}$\hspace*{-1.5em} && 91.14\tabularnewline
Full image && $\texttt{\ensuremath{\mbox{\ensuremath{\texttt{P}}}_{s}}}=\texttt{P}\negthinspace+\negthinspace\texttt{s}$ && 91.16\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
\end{tabular}
\par\end{centering}
\vspace{0.8em}
\caption{\label{tab:validation-set-regions-accuracy}PIPA \emph{val} set accuracy of cues based on different image regions and their concatenations ($+$ means concatenation).}
\end{table}
Table \ref{tab:validation-set-regions-accuracy} shows the \emph{val} set results of each region individually and in combination. Head $\ensuremath{\texttt{h}}$ and upper body $\ensuremath{\texttt{u}}$ are the strongest individual cues. The upper body is more reliable than the full body $\ensuremath{\texttt{b}}$ because the lower body is commonly occluded or cut out of the frame, and thus is usually a distractor. Scene $\ensuremath{\texttt{s}}$ is, unsurprisingly, the weakest individual cue, but it still provides useful information for person recognition (far above the chance level). Importantly, we see that all cues complement each other, despite their overlapping pixels. Overall, our features and combination strategy are effective.
\subsection{\label{sec:Scene}Scene ($\textnormal{\texttt{s}}$)}
\begin{table}
\begin{centering}
\begin{tabular}{lllcc}
&& Method && Accuracy\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
& \vspace{-0.9em}\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
Gist && \texttt{$\texttt{s}_{\texttt{gist}}$} && 21.56\tabularnewline
PlacesNet scores && \texttt{$\texttt{s}_{\texttt{places 205}}$} && 21.44\tabularnewline
raw PlacesNet && \texttt{$\texttt{s}_{0\texttt{ places}}$} && 27.37\tabularnewline
PlacesNet fine-tuned && \texttt{$\texttt{s}_{3\texttt{ places}}$} && 25.62\tabularnewline
raw AlexNet && \texttt{$\texttt{s}_{0}$} && 26.54\tabularnewline
AlexNet fine-tuned && \texttt{$\texttt{s}=\texttt{s}_{3}$} && 27.06\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
\end{tabular}
\par\end{centering}
\vspace{0.8em}
\caption{\label{tab:validation-set-scene}PIPA \emph{val} set accuracy of different scene cues. See descriptions in \S\ref{sec:Scene}.}
\end{table}
Other than a fine-tuned AlexNet we considered multiple feature types
to encode the scene information. \texttt{$\texttt{s}_{\texttt{gist}}$}:
using the Gist descriptor \cite{Oliva2001IjcvGist} ($512$ dimensions).
\texttt{$\texttt{s}_{0\texttt{ places}}$}: instead of using AlexNet
pre-trained on ImageNet, we consider an AlexNet (PlacesNet) pre-trained
on $205$ scene categories of the ``Places Database'' \cite{Zhou2014NipsPlaces}
($\sim\negmedspace2.5$ million images). \texttt{$\texttt{s}_{\texttt{places 205}}$}:
Instead of the $4\,096$ dimensions PlacesNet feature vector, we also
consider using the score vector for each scene category ($205$ dimensions).
$\texttt{s}_{0}$,$\texttt{s}_{3}$: finally we consider using AlexNet
in the same way as for body or head (with zero or $300\mbox{k}$ iterations
of fine-tuning on the PIPA person recognition training set). \texttt{$\texttt{s}_{3\texttt{ places}}$: $\texttt{s}_{0\texttt{ places}}$
}fine-tuned for person recognition.
\subsubsection*{Results}
Table \ref{tab:validation-set-scene} compares the different alternatives
on the \emph{val} set. The Gist descriptor \texttt{$\texttt{s}_{\texttt{gist}}$}
performs only slightly below the convnet options (we also tried the
$4\,608$ dimensional version of Gist, obtaining worse results).
Using the raw (and longer) feature vector of \texttt{$\texttt{s}_{0\texttt{ places}}$}
is better than the class scores of \texttt{$\texttt{s}_{\texttt{places 205}}$}.
Interestingly, in this context pre-training for places classification
is better than pre-training for object classification (\texttt{$\texttt{s}_{0\texttt{ places}}$}
versus \texttt{$\texttt{s}_{0}$}). After fine-tuning, $\texttt{s}_{3}$
reaches a performance similar to \texttt{$\texttt{s}_{0\texttt{ places}}$}.\\
Experiments trying different combinations indicate that there is little
complementarity between these features. Since there is not a large
difference between \texttt{$\texttt{s}_{0\texttt{ places}}$} and
\texttt{$\texttt{s}_{3}$}, for the sake of simplicity we use \texttt{$\texttt{s}_{3}$}
as our scene cue \texttt{$\texttt{s}$} in all other experiments.
\subsubsection*{Conclusion}
Scene $\ensuremath{\texttt{s}}$ by itself, albeit weak, can obtain
results far above the chance level. After fine-tuning, scene recognition
as pre-training surrogate task \cite{Zhou2014NipsPlaces} does not
provide a clear gain over (ImageNet) object recognition.
\subsection{\label{sec:Head-or-face}Head ($\textnormal{\texttt{h}}$) or face
($\textnormal{\ensuremath{\texttt{f}}}$)?}
A large portion of work on face recognition focuses on the face region
specifically. In the context of photo albums, we aim to quantify how
much information is available in the head versus the face region. As discussed in \S\ref{subsec:face-detection}, we obtain the face regions $\ensuremath{\texttt{f}}$ from the DPM face detector \cite{Mathias2014Eccv}.
\subsubsection*{Results}
There is a large performance gap of $\sim\negmedspace10$ percent points
between $\texttt{f}$ and $\texttt{h}$ in table \ref{tab:validation-set-regions-accuracy},
highlighting the importance of including the hair and background around
the face.
\subsubsection*{Conclusion}
Using $\texttt{h}$ is more effective than using $\texttt{f}$, but the $\texttt{f}$ result still shows fair performance. As with the other body cues, $\texttt{h}$ and $\texttt{f}$ are complementary; we suggest using them together.
\begin{table}
\begin{centering}
\begin{tabular}{lllcc}
&& Method && Accuracy\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
& \vspace{-0.9em} \tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
More data (\S\ref{sec:Additional-training-data}) && $\texttt{h}$ && 83.88\tabularnewline
&& $\texttt{h}+\mbox{\ensuremath{\texttt{h}}}_{\texttt{cacd}}$ && 84.88\tabularnewline
&& $\texttt{h}+\mbox{\ensuremath{\texttt{h}}}_{\texttt{casia}}$ && 86.08\tabularnewline
&& $\texttt{h}+\mbox{\ensuremath{\texttt{h}}}_{\texttt{casia}}+\mbox{\ensuremath{\texttt{h}}}_{\texttt{cacd}}$\hspace*{-1.5em} && 86.26\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
Attributes (\S\ref{sec:Attributes}) & &$\mbox{\ensuremath{\texttt{h}}}_{\texttt{pipa11m}}$ && 74.63\tabularnewline
&& $\mbox{\ensuremath{\texttt{h}}}_{\texttt{pipa11}}$ & & 81.74\tabularnewline
&& $\texttt{h}+\mbox{\ensuremath{\texttt{h}}}_{\texttt{pipa11}}$ && 85.00\tabularnewline
\arrayrulecolor{gray}
\cline{3-3} \cline{5-5}
\arrayrulecolor{black} && $\mbox{\ensuremath{\texttt{u}}}_{\texttt{peta5}}$ && 77.50\tabularnewline
&& $\texttt{u}+\mbox{\ensuremath{\texttt{u}}}_{\texttt{peta5}}$ && 85.18\tabularnewline
\arrayrulecolor{gray}
\cline{3-3} \cline{5-5}
\arrayrulecolor{black} && $\mbox{\ensuremath{\texttt{A}}}=\mbox{\ensuremath{\texttt{h}}}_{\texttt{pipa11}}+\mbox{\ensuremath{\texttt{u}}}_{\texttt{peta5}}$\hspace*{-1.5em} && 86.17\tabularnewline
&& $\texttt{h}+\texttt{u}$ && 85.77\tabularnewline
&& $\texttt{h}+\texttt{u}+\ensuremath{\texttt{A}}$ && 90.12\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
\texttt{naeil} (\S\ref{sec:conf-naeil}) && \texttt{naeil}\cite{oh2015person} && 91.70\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
\end{tabular}
\par\end{centering}
\vspace{0.8em}
\caption{\label{tab:validation-set-extended-data-accuracy}PIPA \emph{val} set accuracy of different cues based on extended data. See \S\ref{sec:Additional-training-data}, \S\ref{sec:Attributes}, and \S\ref{sec:conf-naeil} for details.}
\end{table}
\subsection{\label{sec:Additional-training-data}Additional training data ($\textnormal{\ensuremath{\mbox{\ensuremath{\texttt{h}}}_{\texttt{cacd}}},\,\ensuremath{\mbox{\ensuremath{\texttt{h}}}_{\texttt{casia}}}}$)}
It is well known that deep learning architectures benefit from additional
data. DeepFace \cite{Taigman2014CvprDeepFace} used by \texttt{PIPER} \cite{Zhang2015CvprPiper} is trained over $4.4\cdot10^{6}$
faces of $4\cdot10^{3}$ persons (the private SFC dataset \cite{Taigman2014CvprDeepFace}).
In comparison, our cues are trained on ImageNet and PIPA's $29\cdot10^{3}$
faces of $1.4\cdot10^{3}$ persons. To measure the effect of training
on larger data we consider fine-tuning using two open source face recognition
datasets: CASIA-WebFace (CASIA) \cite{Yi2014ArxivLearningFace} and
the ``Cross-Age Reference Coding Dataset'' (CACD) \cite{Chen2014Eccv}.
CASIA contains $0.5\cdot10^{6}$ images of $10.5\cdot10^{3}$ persons
(mainly actors and public figures). When fine-tuning AlexNet
over these identities (using the head area $\mbox{\ensuremath{\texttt{h}}}$),
we obtain the $\mbox{\ensuremath{\texttt{h}}}_{\texttt{casia}}$ cue.
CACD contains $160\cdot10^{3}$ faces of $2\cdot10^{3}$ persons
with varying ages. Although smaller than CASIA in total number of images, CACD features a greater number of samples per identity ($\sim\negmedspace2\times)$.
The $\mbox{\ensuremath{\texttt{h}}}_{\texttt{cacd}}$ cue is built
via the same procedure as $\mbox{\ensuremath{\texttt{h}}}_{\texttt{casia}}$.
\subsubsection*{Results}
See the top part of table \ref{tab:validation-set-extended-data-accuracy} for the results. $\texttt{h}+\mbox{\ensuremath{\texttt{h}}}_{\texttt{cacd}}$
and $\texttt{h}+\mbox{\ensuremath{\texttt{h}}}_{\texttt{casia}}$ improve
over $\texttt{h}$ (1.0 and 2.2 pp, respectively). Extra convnet training data seems to help. However, due to the mismatch in data distribution,
$\mbox{\ensuremath{\texttt{h}}}_{\texttt{cacd}}$ and $\mbox{\ensuremath{\texttt{h}}}_{\texttt{casia}}$
on their own are about $\sim\negmedspace5\ \mbox{pp}$ worse than
$\texttt{h}$.
\subsubsection*{Conclusion}
Extra convnet training data helps, even when it comes from different types of photos.
\subsection{\label{sec:Attributes}Attributes ($\textnormal{\texttt{\ensuremath{\mbox{\ensuremath{\texttt{h}}}_{\texttt{pipa11}}},\,\ensuremath{\mbox{\ensuremath{\texttt{u}}}_{\texttt{peta5}}}}}$)}
Although overall appearance may change from day to day, one could expect
that stable, long-term attributes provide a means for recognition. We build attribute cues by fine-tuning AlexNet features not for the person recognition task (as for all other cues), but rather for the attribute prediction surrogate task. We consider two sets of attributes, one on the head region and the other on the upper body region.
We have annotated identities in the PIPA \emph{train} and \emph{val} sets ($1409+366$ in total) with five long term attributes: age, gender, glasses, hair colour, and hair length (see table \ref{tab:attributes-details} for details). We build $\mbox{\ensuremath{\texttt{h}}}_{\texttt{pipa11}}$ by fine-tuning AlexNet features for the task of head attribute prediction.
For fine-tuning the attribute cue $\mbox{\ensuremath{\texttt{h}}}_{\texttt{pipa11}}$, we consider two approaches: training a single
network for all attributes as a multi-label classification problem
with the sigmoid cross entropy loss, or tuning one network per attribute
separately and concatenating the feature
vectors. The results on the \emph{val} set indicate that the latter
($\mbox{\ensuremath{\texttt{h}}}_{\texttt{pipa11}}$) performs better
than the former ($\mbox{\ensuremath{\texttt{h}}}_{\texttt{pipa11m}}$).
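For concreteness, the multi-label variant could be set up as in the following minimal PyTorch-style sketch; the backbone loading call, the attribute count, and the optimiser settings are illustrative assumptions rather than the exact configuration used in our experiments.
\begin{verbatim}
# Minimal sketch: multi-label attribute fine-tuning with sigmoid
# cross-entropy. Backbone, attribute count and hyper-parameters are
# illustrative assumptions.
import torch
import torch.nn as nn
import torchvision.models as models

NUM_ATTR = 11  # e.g. binarised head attributes (illustrative)

# ImageNet-pretrained backbone (newer torchvision versions use weights=)
model = models.alexnet(pretrained=True)
model.classifier[6] = nn.Linear(4096, NUM_ATTR)  # replace 1000-way head

criterion = nn.BCEWithLogitsLoss()               # sigmoid cross-entropy
optimiser = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def train_step(images, attr_targets):
    # images: (B, 3, 224, 224) head crops; attr_targets: (B, NUM_ATTR) in {0,1}
    logits = model(images)
    loss = criterion(logits, attr_targets.float())
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()

# At feature extraction time, the 4096-d fc7 activations (the classifier
# truncated before the last layer) would serve as the attribute cue.
\end{verbatim}
The per-attribute alternative would train one such network per attribute and concatenate the resulting feature vectors, as described above.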
For the upper body attribute features, we use the ``PETA pedestrian attribute dataset''
\cite{Deng2014AcmPeta}. The dataset originally has $105$ attribute annotations for $19\cdot10^{3}$ full-body pedestrian images. We chose five long-term attributes for our study: gender, age (young adult, adult), black hair, and short hair
(details in table \ref{tab:attributes-details}). We choose to use the upper body $\texttt{u}$ rather than the full body $\texttt{b}$ for attribute prediction -- the crops are much less noisy. We train the AlexNet feature on upper-body crops of PETA images with the attribute prediction task to obtain the cue $\mbox{\ensuremath{\texttt{u}}}_{\texttt{peta5}}$.
\subsubsection*{Results}
See results in table \ref{tab:validation-set-extended-data-accuracy}. Both PIPA ($\mbox{\ensuremath{\texttt{h}}}_{\texttt{pipa11}}$)
and PETA ($\mbox{\ensuremath{\texttt{u}}}_{\texttt{peta5}}$) annotations
behave similarly ($\sim\negmedspace1\ \mbox{pp}$ gain over $\texttt{h}$
and $\texttt{u}$, respectively), and show complementarity
($\sim\negmedspace5\ \mbox{pp}$ gain over $\texttt{h}\negmedspace+\negmedspace\texttt{u}$).
Amongst the attributes considered, gender contributes the most to
improve recognition accuracy (for both attributes datasets).
\subsubsection*{Conclusion}
Adding attribute information improves the performance.
\begin{table}
\begin{centering}
\begin{tabular}{lllll}
Attribute && Classes& & Criteria\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
& \vspace{-0.9em}\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
Age && Infant && {\footnotesize{}Not walking (due to young age)}\tabularnewline
&& Child & &{\footnotesize{}Not fully grown body size}\tabularnewline
&& Young Adult && {\footnotesize{}Fully grown \& Age $<45$}\tabularnewline
&& Middle Age && {\footnotesize{}$45\leq\mbox{Age}\leq60$}\tabularnewline
&& Senior && {\footnotesize{}Age$\geq60$}\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
Gender && Female && {\footnotesize{}Female looking}\tabularnewline
&& Male && {\footnotesize{}Male looking}\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
Glasses && None && {\footnotesize{}No eyewear}\tabularnewline
&& Glasses && {\footnotesize{}Transparent glasses}\tabularnewline
&& Sunglasses && {\footnotesize{}Glasses with eye occlusion}\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
Hair colour && Black && {\footnotesize{}Black}\tabularnewline
&& White && {\footnotesize{}Any hint of whiteness}\tabularnewline
&& Others && {\footnotesize{}Neither of the above}\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
Hair length && No hair && {\footnotesize{}Absolutely no hair on the scalp}\tabularnewline
&& Less hair && {\footnotesize{}Hairless for $>\frac{1}{2}$ upper scalp}\tabularnewline
&& Short hair && {\footnotesize{}When straightened, $<10$ cm}\tabularnewline
&& Med hair && {\footnotesize{}When straightened, $<$chin level}\tabularnewline
&& Long hair && {\footnotesize{}When straightened, $>$chin level}\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
\end{tabular}
\par\end{centering}
\begin{centering}
\par\end{centering}
\vspace{0.8em}
\caption{\label{tab:attributes-details}PIPA attributes details.}
\end{table}
\subsection{\label{sec:conf-naeil}Conference version final model ($\textnormal{\ensuremath{\mbox{\ensuremath{\texttt{naeil}}}}}$) \cite{oh2015person}}
The final model in the conference version of this paper combines five vanilla regional cues ($\texttt{\ensuremath{\mbox{\ensuremath{\texttt{P}}}_{s}}}=\texttt{P}\negthinspace+\negthinspace\texttt{s}$), two head cues trained with extra data ($\textnormal{\ensuremath{\mbox{\ensuremath{\texttt{h}}}_{\texttt{cacd}}}$, $\ensuremath{\mbox{\ensuremath{\texttt{h}}}_{\texttt{casia}}}}$), and ten attribute cues ($\mbox{\ensuremath{\texttt{h}}}_{\texttt{pipa11}}$, $\mbox{\ensuremath{\texttt{u}}}_{\texttt{peta5}}$), resulting in 17 cues in total. We name this method $\texttt{naeil}$ \cite{oh2015person}\footnote{``naeil'', \includegraphics[height=0.8em]{figures/naeil_in_korean}, means ``tomorrow'' and sounds like ``nail''.}.
\subsubsection*{Results}
See table \ref{tab:validation-set-extended-data-accuracy} for the results. \texttt{naeil}, by naively combining all the cues considered so far, achieves the best result (91.70\%) on the \emph{val} set.
\subsubsection*{Conclusion}
Cues considered thus far are complementary, and the combined model \texttt{naeil} is effective.
\subsection{\label{sec:deepid}DeepID2+ face recognition module ($\textnormal{\ensuremath{\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}}}$) \cite{Sun2014ArxivDeepId2plus}}
Face recognition performance has improved significantly in recent years with better architectures and larger open source datasets \cite{Huang2007Lfw,Zhu2013Iccv,Taigman2014CvprDeepFace,Sun2014ArxivDeepId2plus,Ding2015Arxiv,Schroff2015ArxivFaceNet,Parkhi15,chen2016unconstrained,wen2016discriminative}. In this section, we study how much face recognition helps in person recognition. While DeepFace \cite{Taigman2014CvprDeepFace} used by \texttt{PIPER} \cite{Zhang2015CvprPiper} would have enabled a more direct comparison against \texttt{PIPER}, it is not publicly available. We thus choose the DeepID2+ face recogniser \cite{Sun2014ArxivDeepId2plus}. Face recognition technology is still improving quickly, and larger and larger face datasets are being released -- the analysis in this section is thus likely an underestimate of what current and future face recognisers can achieve.
The DeepID2+ network is a siamese neural network that takes 25 different head crops as input and is trained with the joint verification-identification loss. The training is based on large databases consisting of CelebFaces+\cite{sun2014deep}, WDRef\cite{Chen2012}, and LFW\cite{Huang2007Lfw} -- totalling $2.9\cdot10^{5}$ faces of $1.2\cdot10^{4}$ persons. At test time, it ensembles the predictions from the 25 crop regions obtained by facial landmark detections. The resulting output is a $1\,024$ dimensional head feature that we denote $\textnormal{\ensuremath{\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}}}$.
Since the DeepID2+ pipeline begins with the facial landmark detection, the DeepID2+ features are not available for instances with e.g. occluded or back-view heads. As a result, only $52\,709$ out of $63\,188$ instances ($83.42\%$) have the DeepID2+ features available, and we use vectors of zeros as features for the rest.
\subsubsection*{Results - Original split}
See table \ref{tab:deepid-val} for the \emph{val} set results for $\textnormal{\ensuremath{\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}}}$ and related combinations. $\textnormal{\ensuremath{\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}}}$ in itself is weak ($68.46\%$) compared to the vanilla head feature $\texttt{h}$, due to the missing features for the back-views. However, when combined with $\texttt{h}$, the performance reaches $85.86\%$ by exploiting information from strong DeepID2+ face features and the viewpoint robust $\texttt{h}$ features.
Since the feature dimensions are not homogeneous ($4\,096$ versus $1\,024$), we try $L_{2}$ normalisation of $\texttt{h}$ and $\textnormal{\ensuremath{\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}}}$ before concatenation ($\texttt{h}\oplus\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$). This gives a further $\sim\negmedspace3\ \mbox{pp}$ boost ($88.74\%$) -- better than $\texttt{h}+\mbox{\ensuremath{\texttt{h}}}_{\texttt{cacd}}+\mbox{\ensuremath{\texttt{h}}}_{\texttt{casia}}$, the previous best model on the head region ($86.26\%$).
\subsubsection*{Results - Album, Time and Day splits}
Table \ref{tab:deepid-val} also shows results for the Album, Time, and Day splits on the \emph{val} set. While the general head cue \texttt{h} degrades significantly on the Day split, $\texttt{h}_{\texttt{deepid}}$ is a reliable cue with roughly the same level of recognition in all four splits ($60\negmedspace\sim\negmedspace70\%$). This is not surprising, since the face is largely invariant over time, compared to hair, clothing, and events.
On the other splits as well, the complementarity of $\texttt{h}$ and $\textnormal{\ensuremath{\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}}}$ is realised only when they are $L_2$ normalised before concatenation. The $L_2$ normalised concatenation $\texttt{h}\oplus\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$ envelops the performance of the individual cues on all splits.
\subsubsection*{Conclusion}
DeepID2+, with its face-specific architecture and loss and a massive amount of training data, contributes highly useful information for the person recognition task. However, being able to recognise only face-visible instances, it needs to be combined with the orientation-robust $\texttt{h}$ to ensure the best performance. Unsurprisingly, having a specialised face recogniser helps more in the setups with a larger appearance gap between training and testing samples (Album, Time, and Day splits). Better face recognisers will further improve the results in the future.
\begin{table}
\begin{centering}
\begin{tabular}{lccccc}
Method && Original & Album & Time & Day\tabularnewline
\cline{1-1} \cline{3-6}
&\vspace{-1em}\tabularnewline
\cline{1-1} \cline{3-6}
$\texttt{h}$ && 83.88 & 77.90 & 70.38 & 40.71\tabularnewline
$\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$ && 68.46 & 66.91 & 64.16 & 60.46\tabularnewline
$\texttt{h}+\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$ && 85.86 & 80.54 & 73.31 & 47.86\tabularnewline
$\texttt{h}\oplus\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$ && 88.74 & 85.72 & 80.88 & 66.91\tabularnewline
\cline{1-1} \cline{3-6}
$\texttt{naeil}$\cite{oh2015person} && 91.70 & 86.37 & 80.66 & 49.21\tabularnewline
$\texttt{naeil}+\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$ && 92.11 & 86.77 & 81.08 & 51.02\tabularnewline
$\texttt{naeil2}$ && {93.42} & {89.95} & {85.87} & {70.58}\tabularnewline
\cline{1-1} \cline{3-6}
\end{tabular}
\par\end{centering}
\vspace{0.8em}
\caption{\label{tab:deepid-val}PIPA \emph{val} set accuracy of methods involving $\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$. The optimal combination weights are $\lambda^{\star}=[0.60\;1.05\;1.00\;1.50]$ for Original, Album, Time, and Day splits, respectively.\protect \\
$\oplus$ means $L_{2}$ normalisation before concatenation.}
\end{table}
\subsection{\label{sec:naeil+deepid}Combining $\texttt{naeil}$ with $\texttt{h}_\texttt{deepid}$ ($\textnormal{\ensuremath{\mbox{\ensuremath{\texttt{naeil2}}}}}$)}
We build the final model of the journal version, namely $\texttt{naeil2}$, by combining $\texttt{naeil}$ and $\texttt{h}_\texttt{deepid}$. As seen in \S\ref{sec:deepid}, naive concatenation is likely to fail due to the even larger difference in dimensionality ($4\,096\times 17=69\,632$ versus $1\,024$). We therefore $L_{2}$ normalise $\texttt{naeil}$ and $\texttt{h}_\texttt{deepid}$ and then perform a weighted concatenation:
\begin{equation}
\label{eq:weighted-sum}
\texttt{naeil}\oplus_\lambda\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}} = \frac{\texttt{naeil}}{||\texttt{naeil}||_2}+\lambda\cdot\frac{\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}}{||\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}||_2},
\end{equation}
where $\lambda>0$ is a weighting parameter and $+$ denotes concatenation.
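For concreteness, the weighted, $L_{2}$-normalised concatenation above can be written as the following minimal NumPy sketch (variable names are illustrative):
\begin{verbatim}
import numpy as np

def weighted_concat(naeil_feat, deepid_feat, lam):
    # L2-normalise each block, scale the DeepID2+ block by lambda,
    # then concatenate. The small epsilon keeps all-zero DeepID2+
    # vectors (instances without detected facial landmarks) at zero.
    a = naeil_feat / (np.linalg.norm(naeil_feat) + 1e-12)
    b = deepid_feat / (np.linalg.norm(deepid_feat) + 1e-12)
    return np.concatenate([a, lam * b])

# Example: 69,632-d naeil feature, 1,024-d DeepID2+ feature, lambda = 1.5
x = weighted_concat(np.random.randn(69632), np.random.randn(1024), lam=1.5)
\end{verbatim}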
\subsubsection*{Optimisation of $\lambda$ on \emph{val} set}
$\lambda$ determines how much relative weight is given to $\texttt{h}_\texttt{deepid}$. As we have seen in \S\ref{sec:deepid}, the amount of additional contribution from $\texttt{h}_\texttt{deepid}$ differs for each split. In this section, we find $\lambda^\star$, the optimal value of $\lambda$, for each split on the \emph{val} set. The resulting combination of $\texttt{naeil}$ and $\texttt{h}_\texttt{deepid}$ is our final method, $\texttt{naeil2}$. $\lambda^\star$ is searched over the equidistant grid $\{0,0.05,0.1,\cdots,3\}$.
See figure \ref{fig:lambda-deepid+naeil} for the \emph{val} set performance of $\texttt{naeil}\oplus_\lambda\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$ with varying values of $\lambda$. The optimal weights are found at $\lambda^{\star}=[0.60\;1.05\;1.00\;1.50]$ for the Original, Album, Time, and Day splits, respectively. The relative importance of $\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$ is greater on splits with larger appearance changes. For each split, $\texttt{naeil2}$ denotes the combination of $\texttt{naeil}$ and $\texttt{h}_\texttt{deepid}$ based on the corresponding optimal weight.
Note that the performance curve is rather stable for $\lambda\geq1.5$ in all splits. In practice, when the expected amount of appearance change of the subjects is unknown, our advice would be to choose $\lambda\approx 1.5$. Finally, we remark that the weighted combination can also be applied to the $17$ cues in $\texttt{naeil}$; finding the optimal cue weights is left as future work.
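A minimal sketch of the grid search follows, assuming precomputed features and a generic linear SVM; the classifier call and data handling are illustrative rather than our exact pipeline.
\begin{verbatim}
import numpy as np
from sklearn.svm import LinearSVC

def val_accuracy(lam, naeil_tr, deepid_tr, y_tr, naeil_va, deepid_va, y_va):
    # weighted_concat as in the sketch above; train on the val-train part,
    # evaluate on the val-test part, for a given lambda.
    X_tr = np.stack([weighted_concat(a, b, lam)
                     for a, b in zip(naeil_tr, deepid_tr)])
    X_va = np.stack([weighted_concat(a, b, lam)
                     for a, b in zip(naeil_va, deepid_va)])
    clf = LinearSVC(C=1.0).fit(X_tr, y_tr)
    return clf.score(X_va, y_va)

grid = np.arange(0.0, 3.0 + 1e-9, 0.05)   # {0, 0.05, ..., 3}
# accs     = [val_accuracy(lam, ...) for lam in grid]
# lam_star = grid[int(np.argmax(accs))]
\end{verbatim}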
\subsubsection*{Results}
See table \ref{tab:deepid-val} for the results of combining $\texttt{naeil}$ and $\texttt{h}_\texttt{deepid}$. Naively concatenated, $\texttt{naeil}+\texttt{h}_\texttt{deepid}$ performs worse than $\texttt{h}_\texttt{deepid}$ on the Day split ($51.02\%$ vs $60.46\%$). However, the weighted combination \texttt{naeil2} achieves the best performance on all four splits.
\subsubsection*{Conclusion}
When combining $\texttt{naeil}$ and $\texttt{h}_\texttt{deepid}$, a weighted combination is desirable, and the resulting final model \texttt{naeil2} beats all the previously considered models on all four splits.
\begin{figure}
\begin{centering}
\hspace*{\fill}\includegraphics[width=0.7\columnwidth]{figures/lambda}\hspace*{\fill}\vspace{0.5em}
\par\end{centering}
\caption{\label{fig:lambda-deepid+naeil}PIPA \emph{val} set accuracy of $\texttt{naeil}\oplus_\lambda\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$
for varying values of $\lambda$. Round dots denote the maximal \emph{val} accuracy.}
\end{figure}
\section{\label{sec:Test-set-results}PIPA test set results and comparison}
\begin{table*}
\begin{centering}
\begin{tabular}{cclcllcllccccc}
&& && \multicolumn{2}{c}{Special modules} && \multicolumn{2}{c}{General features} && \tabularnewline
\cline{5-6} \cline{8-9}
&& Method && Face rec. & Pose est. && Data & Arch. && Original & Album & Time & Day\tabularnewline
\cline{3-3} \cline{5-6} \cline{8-9} \cline{11-14}
&\vspace{-1em}\tabularnewline
\cline{3-3} \cline{5-6} \cline{8-9} \cline{11-14}
&& Chance level && \ding{55} & \ding{55} && $-$ & $-$ && \hspace{0.5em}0.78 & \hspace{0.5em}0.89 & \hspace{0.5em}0.78 & \hspace{0.5em}1.97\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-6} \cline{8-9} \cline{11-14}
\multirow{6}{*}{\rotatebox{90}{Head\hspace{0.0em}}}&& $\mbox{\ensuremath{\texttt{h}}}_{\texttt{rgb}}$ && \ding{55} & \ding{55} && $-$ & $-$ && 33.77 & 27.19 & 16.91 & \hspace{0.5em}6.78\tabularnewline
&& $\mbox{\ensuremath{\texttt{h}}}$ && \ding{55} & \ding{55} && I+P & Alex && 76.42 & 67.48 & 57.05 & 36.48\tabularnewline
&& $\texttt{h}\negthinspace+\negthinspace\mbox{\ensuremath{\texttt{h}}}_{\texttt{casia}}\negthinspace+\negthinspace\mbox{\ensuremath{\texttt{h}}}_{\texttt{cacd}}$ && \ding{55} & \ding{55} && I+P+CC & Alex && 80.32 & 72.82 & 63.18 & 45.45\tabularnewline
&& \texttt{\small{}$\mbox{\ensuremath{\texttt{h}_{\texttt{deepid}}}}$} && DeepID2+\cite{Sun2014ArxivDeepId2plus} & \ding{55} && $-$ & $-$ && 68.06 & 65.49 & 60.69 & 61.49\tabularnewline
&& $\texttt{h}\oplus\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$ && DeepID2+\cite{Sun2014ArxivDeepId2plus} & \ding{55} && I+P & Alex && 85.94 & 81.95 & 75.85 & 66.00\tabularnewline
&& \texttt{DeepFace}\cite{Zhang2015CvprPiper} && DeepFace\cite{Taigman2014CvprDeepFace} & \ding{55} && $-$ & $-$ && 46.66 & $-$ & $-$ & $-$\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-6} \cline{8-9} \cline{11-14}
\multirow{7}{*}{\rotatebox{90}{Body\hspace{0.0em}}}&& $\mbox{\ensuremath{\texttt{b}}}$ && \ding{55} & \ding{55} && I+P & Alex && 69.63 & 59.29 & 44.92 & 20.38\tabularnewline
&& $\texttt{h}\negthinspace+\negthinspace\texttt{b}$ && \ding{55} & \ding{55} && I+P & Alex && 83.36 & 73.97 & 63.03 & 38.15\tabularnewline
&& $\texttt{P}=\texttt{f}\negthinspace+\negthinspace\texttt{h}\negthinspace+\negthinspace\texttt{u}\negthinspace+\negthinspace\texttt{b}$ && \ding{55} & \ding{55} && I+P & Alex && 85.33 & 76.49 & 66.55 & 42.17\tabularnewline
&& \texttt{GlobalModel}\cite{Zhang2015CvprPiper} && \ding{55} & \ding{55} && I+P & Alex && 67.60 & $-$ & $-$ & $-$\tabularnewline
&& \texttt{PIPER}\cite{Zhang2015CvprPiper} && DeepFace\cite{Taigman2014CvprDeepFace} & Poselets\cite{Bourdev2009IccvPoselets} && I+P & Alex && 83.05 & $-$ & $-$ & $-$\tabularnewline
&& \texttt{Pose}\cite{kumar2017pose} && \ding{55} & Pose group && I+P+V & Alex && 89.05 & 82.37 & 74.84 & 56.73\tabularnewline
&& \texttt{COCO}\cite{liu_2017_coco} && \ding{55} & Part det.\cite{ren15fasterrcnn} && I+P & Goog,Res && \textbf{92.78} & 83.53 & 77.68 & 61.73\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-6} \cline{8-9} \cline{11-14}
\multirow{4}{*}{\rotatebox{90}{Image\hspace{0.0em}}} && $\texttt{\ensuremath{\mbox{\ensuremath{\texttt{P}}}_{s}}}=\texttt{P}\negthinspace+\negthinspace\texttt{s}$ && \ding{55} & \ding{55} && I+P & Alex && 85.71 & 76.68 & 66.55 & 42.31\tabularnewline
&& $\texttt{naeil}=\texttt{\ensuremath{\mbox{\ensuremath{\texttt{P}}}_{s}}}\negthinspace+\negthinspace\texttt{E}$\cite{oh2015person} && \ding{55} & \ding{55} && I+P+E & Alex && 86.78 & 78.72 & 69.29 & 46.54\tabularnewline
&& \texttt{Contextual}\cite{Li_2016_CVPR} && DeepID\cite{sun2014deep} & \ding{55} && I+P & Alex && 88.75 & 83.33 & 77.00 & 59.35\tabularnewline
\cline{3-3} \cline{5-6} \cline{8-9} \cline{11-14}
&& $\texttt{naeil2}$ (this paper) && DeepID2+\cite{Sun2014ArxivDeepId2plus} & \ding{55} && I+P+E & Alex && {90.42} & \textbf{86.30} & \textbf{80.74} & \textbf{70.58}\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-6} \cline{8-9} \cline{11-14}
\end{tabular}
\par\end{centering}
\vspace{0.8em}
\caption{\label{tab:test-set-accuracy-four-splits}PIPA \emph{test} set accuracy (\%) of the proposed method and prior arts on the four splits. For each method, we indicate any face recognition or pose estimation module included, and the data and convnet architecture for other features. \protect \\
Cues on extended data $\texttt{E}=\mbox{\ensuremath{\texttt{h}}}_{\texttt{casia}}\negthinspace+\negthinspace\mbox{\ensuremath{\texttt{h}}}_{\texttt{cacd}}\negthinspace+\negthinspace\texttt{\ensuremath{\mbox{\ensuremath{\texttt{h}}}_{\texttt{pipa11}}}+\ensuremath{\mbox{\ensuremath{\texttt{u}}}_{\texttt{peta5}}}}$.\protect \\
$\oplus$ means concatenation after $L_{2}$ normalisation.\protect \\
In the data column, I indicates ImageNet\cite{Deng2009CvprImageNet} and P indicates PIPA \emph{train} set. CC means CACD\cite{Chen2014Eccv}$+$CASIA\cite{Yi2014ArxivLearningFace} and E means CC$+$PETA\cite{Deng2014AcmPeta}. V indicates the VGGFace dataset \cite{Parkhi15}.\protect \\
In the architecture column, (Alex,Goog,Res) refers to (AlexNet\cite{Krizhevsky2012Nips},GoogleNetv3\cite{szegedy2016rethinking},ResNet50\cite{He_2016_CVPR}).
}
\end{table*}
In this section, we measure the performance of our final model and key intermediate results on the PIPA \emph{test} set, and compare against the prior arts. See table \ref{tab:test-set-accuracy-four-splits} for a summary.
\subsection{\label{subsec:face-rgb-baseline}Baselines}
We consider two baselines for measuring the inherent difficulty of the task. The first baseline is the ``chance level'' classifier, which does not see the image content and simply picks the most commonly occurring class. It provides the lower bound for any recognition method, and gives a sense of how large the gallery set is.
Our second baseline is the raw RGB nearest neighbour classifier $\mbox{\ensuremath{\texttt{h}}}_{\texttt{rgb}}$. It uses the raw downsized ($40\negmedspace\times\negmedspace40\ \mbox{pixels}$) and blurred RGB head crop as the feature. The identity of the Euclidean distance nearest neighbour training image is predicted at test time. By design, $\mbox{\ensuremath{\texttt{h}}}_{\texttt{rgb}}$ is only able to recognise
near-identical head crops across the $\mbox{\emph{test}}_{\nicefrac{0}{1}}$ splits.
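A minimal sketch of this baseline is given below; the blur strength and crop handling are illustrative assumptions.
\begin{verbatim}
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

def rgb_feature(head_crop):
    # head_crop: H x W x 3 uint8 array.
    # Downsize the RGB head crop to 40x40, blur it, and flatten.
    img = Image.fromarray(head_crop).resize((40, 40))
    arr = gaussian_filter(np.asarray(img, dtype=np.float32),
                          sigma=(1.0, 1.0, 0.0))  # no blur across channels
    return arr.ravel()

def predict_identity(test_crop, train_feats, train_labels):
    # Label of the Euclidean-nearest training crop.
    d = np.linalg.norm(train_feats - rgb_feature(test_crop), axis=1)
    return train_labels[int(np.argmin(d))]
\end{verbatim}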
\subsubsection*{Results}
See results for ``chance level'' and $\mbox{\ensuremath{\texttt{h}}}_{\texttt{rgb}}$ in table \ref{tab:test-set-accuracy-four-splits}. While the ``chance level'' performance is low ($\leq2\%$ in all splits), we observe that $\mbox{\ensuremath{\texttt{h}}}_{\texttt{rgb}}$ performs unreasonably well on the Original split (33.77\%). This shows that the Original split shares many nearly identical person instances across its training and testing halves, which makes the task rather easy. On the harder splits, we see that the $\mbox{\ensuremath{\texttt{h}}}_{\texttt{rgb}}$ performance diminishes, reaching only 6.78\% on the Day split. Recognition on the Day split is thus far less trivial -- simply taking advantage of pixel value similarity would not work.
\subsubsection*{Conclusion}
Although the gallery set is large, the task can still be made easy by sharing many similar instances across the training and testing halves (Original split). We have remedied the issue by introducing three more challenging splits (Album, Time, and Day) on which the naive RGB baseline ($\mbox{\ensuremath{\texttt{h}}}_{\texttt{rgb}}$) no longer works (\S\ref{subsec:PIPA-splits}).
\subsection{Methods based on head}
We consider our four intermediate models ($\textnormal{\texttt{h}}$, $\texttt{h}\negthinspace+\negthinspace\mbox{\ensuremath{\texttt{h}}}_{\texttt{casia}}\negthinspace+\negthinspace\mbox{\ensuremath{\texttt{h}}}_{\texttt{cacd}}$, $\mbox{\ensuremath{\texttt{h}_{\texttt{deepid}}}}$, $\texttt{h}\oplus\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$) and a prior work \texttt{DeepFace} \cite{Zhang2015CvprPiper,Taigman2014CvprDeepFace}.
We observe the same trend as described in the previous sections on the \emph{val} set (\S\ref{sec:Head-or-face}, \ref{sec:deepid}). Here, we focus on the comparison against \texttt{DeepFace} \cite{Taigman2014CvprDeepFace}. Even without a specialised face module, $\textnormal{\texttt{h}}$ already performs better than \texttt{DeepFace} (76.42\% versus 46.66\%, Original split). We believe this is for two reasons: (1) \texttt{DeepFace} only takes face regions as input, leaving out valuable hair and background information (\S\ref{sec:Head-or-face}), (2) \texttt{DeepFace} only makes predictions on the 52\% of instances where the face can be registered. Note that $\mbox{\ensuremath{\texttt{h}_{\texttt{deepid}}}}$ also does not always make a prediction, due to facial landmark detection failures (17\% of instances on PIPA), but it performs better than \texttt{DeepFace} in the considered scenario (68.06\% versus 46.66\%, Original split).
\subsection{Methods based on body}
We consider three of our intermediate models ($\textnormal{\texttt{b}}$, $\texttt{h}\negthinspace+\negthinspace\texttt{b}$, $\texttt{P}=\texttt{f}\negthinspace+\negthinspace\texttt{h}\negthinspace+\negthinspace\texttt{u}\negthinspace+\negthinspace\texttt{b}$) and four prior arts (\texttt{GlobalModel}\cite{Zhang2015CvprPiper}, \texttt{PIPER}\cite{Zhang2015CvprPiper}, \texttt{Pose}\cite{kumar2017pose}, \texttt{COCO}\cite{liu_2017_coco}). \texttt{Pose} \cite{kumar2017pose} and \texttt{COCO} \cite{liu_2017_coco} methods appeared after the publication of the conference version of this paper \cite{oh2015person}. See table \ref{tab:test-set-accuracy-four-splits} for the results.
Our body cue $\mbox{\ensuremath{\texttt{b}}}$ and Zhang et al.'s \texttt{GlobalModel} \cite{Zhang2015CvprPiper} are the same method implemented independently. Unsurprisingly, they perform similarly (69.63\% versus 67.60\%, Original split).
Our $\texttt{h}\negthinspace+\negthinspace\texttt{b}$ method is the minimal system matching Zhang et al.'s \texttt{PIPER} \cite{Zhang2015CvprPiper} ($83.36\%$ versus 83.05\%, Original split). The feature vector of $\texttt{h}\negthinspace+\negthinspace\texttt{b}$ is about $50$ times smaller than that of \texttt{PIPER}, and the method does not make use of a face recogniser or a pose estimator.
In fact, \texttt{PIPER} captures the head region via one of its poselets. Thus, $\texttt{h}\negthinspace+\negthinspace\texttt{b}$ extracts cues from a subset of \texttt{PIPER}'s ``\texttt{GlobalModel+Poselets}''
\cite{Zhang2015CvprPiper}, but performs better (83.36\% versus $78.79\%$, Original split).
\subsubsection*{Methods since the conference version\cite{oh2015person}}
\texttt{Pose} by Kumar et al. \cite{kumar2017pose} uses extra keypoint annotations on the PIPA \emph{train} set to generate pose clusters, and trains separate models for each pose cluster (pose-specific models, PSM). By performing a form of pose normalisation, they improve the results significantly: 2.27 pp and 10.19 pp over \texttt{naeil} on the Original and Day splits, respectively.
\texttt{COCO} by Liu et al. \cite{liu_2017_coco} proposes a novel metric learning loss for the person recognition task. Metric learning gives an edge over classifier-based methods by enabling recognition of unseen identities without re-training. They further use Faster-RCNN detectors \cite{ren15fasterrcnn} to localise the face and body more accurately. The final performance is strong in all four splits compared to \texttt{Pose} \cite{kumar2017pose} or \texttt{naeil} \cite{oh2015person}. However, one should note that the face, body, upper body, and full body features in \texttt{COCO} are based on GoogleNetv3 \cite{szegedy2016rethinking} and ResNet50 \cite{He_2016_CVPR} -- the numbers are not fully comparable to those of the other methods, which are largely based on AlexNet.
\subsection{Methods based on full image}
We consider our two intermediate models ($\texttt{\ensuremath{\mbox{\ensuremath{\texttt{P}}}_{s}}}=\texttt{P}\negthinspace+\negthinspace\texttt{s}$, $\texttt{naeil}=\texttt{\ensuremath{\mbox{\ensuremath{\texttt{P}}}_{s}}}\negthinspace+\negthinspace\texttt{E}$) and \texttt{Contextual} \cite{Li_2016_CVPR}, a method which appeared after the conference version of this paper \cite{oh2015person}.
Our \texttt{naeil} performs better than \texttt{PIPER} \cite{Zhang2015CvprPiper} (86.78\% versus 83.05\%, Original split), while having a $6$ times smaller feature vector and not relying on a face recogniser or a pose estimator.
\subsubsection*{Methods since the conference version\cite{oh2015person}}
\texttt{Contextual} by Li et al. \cite{Li_2016_CVPR} makes use of person co-occurrence statistics to improve the results. It performs 1.97 pp and 12.81 pp better than \texttt{naeil} on the Original and Day splits, respectively. However, one should note that \texttt{Contextual} employs the face recogniser DeepID \cite{sun2014deep}. We have found that a specialised face recogniser greatly improves the recognition quality on the Day split (\S\ref{sec:deepid}).
\subsection{\label{subsec:naeil2}Our final model \texttt{naeil2}}
\texttt{naeil2} is a weighted combination of \texttt{naeil} and $\texttt{h}_{\texttt{deepid}}$ (see \S\ref{sec:naeil+deepid} for details). Observe that by attaching a face recogniser module to \texttt{naeil}, we achieve the best performance on the Album, Time, and Day splits. In particular, on the Day split, \texttt{naeil2} obtains an 8.85 pp boost over the second best method \texttt{COCO} \cite{liu_2017_coco} (table \ref{tab:test-set-accuracy-four-splits}). On the Original split, \texttt{COCO} performs better (2.36 pp gap), but note that \texttt{COCO} uses more advanced feature representations (GoogleNet and ResNet).
Since \texttt{naeil2} and \texttt{COCO} focus on orthogonal techniques, they can be combined to yield even better performance.
\subsection{Computational cost}
We report computational times for some parts of our pipeline. The feature training takes 2-3 days on a single GPU machine. The SVM training takes 42 seconds for $\texttt{h}$ ($4\,096$ dim) and $1\,118$ seconds for \texttt{naeil} on the Original split (581 classes, $6\,443$ samples). Note that this corresponds to a realistic user scenario in a photo sharing service, where $\sim\negmedspace500$ identities are known to the user and the average number of photos per identity is $\sim\negmedspace10$.
\section{\label{sec:challenges-analysis}Analysis}
In this section, we provide a deeper analysis of individual cues towards the final performance. In particular, we measure how contributions from individual cues (e.g. face and scene) change when either the system has to generalise across time or head viewpoint. We study the performance as a function of the number of training samples per identity, and examine the distribution of identities according to their recognisability.
\begin{figure}
\centering{}\hspace*{\fill}%
\begin{center}
\includegraphics[width=0.8\columnwidth]{figures/barplot2v4_fixed}
\vspace{0em}
\caption{\label{fig:splits-accuracy-relative}PIPA \emph{test} set relative accuracy of various methods in the four splits, against the final system \texttt{naeil2}.}
\par\end{center}%
\end{figure}
\subsection{Contribution of individual cues \label{subsec:Importance-of-features}}
We measure the contribution of individual cues towards the final system \texttt{naeil2} (\S\ref{sec:naeil+deepid}) by dividing the accuracy of each intermediate method by the performance of \texttt{naeil2}. We report results in the four splits in order to determine which cues contribute more when there is a larger time gap between training and testing samples, and vice versa.
\subsubsection*{Results}
See figure \ref{fig:splits-accuracy-relative} for the relative performances in the four splits. The cues based more on context (e.g. $\texttt{b}$ and $\texttt{s}$) see a greater drop from the Original to the Day split, whereas cues focused on the face $\texttt{f}$ and head $\texttt{h}$ regions tend to drop less. Intuitively, this is due to the greater changes in clothing and events in the Day split.
On the other hand, $\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$ increases in its relative contribution from the Original to the Day split, explaining nearly 90\% of the \texttt{naeil2} performance on the Day split. $\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$ provides a valuable invariant face feature, especially when the time gap is large. However, on the Original split $\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$ only reaches about 75\% of \texttt{naeil2}. The head-orientation-robust $\texttt{naeil}$ features should be added to attain the best performance.
\subsubsection*{Conclusion}
Cues involving context are stronger in the Original split; cues around face, especially the $\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$, are robust in the Day split. Combining both types of cues yields the best performance over all considered time/appearance changes.
\subsection{\label{subsec:Head-orientation-analysis}Performance by viewpoint}
We study the impact of the test instance viewpoint on the proposed systems. Cues relying on the face are less likely to be robust to occluded faces, while body or context cues should be more robust against viewpoint changes. We measure the performance of models on the head orientation partitions defined by the DPM face detector (see \S\ref{subsec:face-detection}): frontal $\texttt{FR}$, non-frontal $\texttt{NFR}$, and
no face detected $\texttt{NFD}$. The $\texttt{NFD}$ subset is a proxy for back-view and occluded-face instances.
\subsubsection*{Results}
Figure \ref{fig:threeway-bar} shows the accuracy of the methods on the three head orientation subsets for the Original and Day splits. All the considered methods perform progressively worse from the frontal $\texttt{FR}$ to the non-frontal $\texttt{NFR}$ and no-face-detected $\texttt{NFD}$ subsets. However, in the Original split, \texttt{naeil2} still robustly predicts the identities even for the \texttt{NFD} subset ($\sim\negmedspace 80\%$ accuracy). On the Day split, \texttt{naeil2} does struggle on the \texttt{NFD} subset ($\sim\negmedspace 20\%$ accuracy). Recognition of \texttt{NFD} instances under the Day split constitutes the main remaining challenge of person recognition.
In order to measure contributions from individual cues in different head orientation subsets, we report the relative performance against the final model \texttt{naeil2} in figure \ref{fig:threeway-relative}. The results are reported on the Original and Day splits. Generally, cues based on more context (e.g. $\texttt{b}$ and $\texttt{s}$) are more robust when the face is not visible than the face-specific cues (e.g. $\texttt{f}$ and $\texttt{h}$). Note that the $\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$ performance drops significantly in $\texttt{NDET}$, while $\texttt{naeil}$ generally improves its relative performance in harder viewpoints. \texttt{naeil2} envelops the performance of the individual cues in all orientation subsets.
\subsubsection*{Conclusion}
$\texttt{naeil}$ is more viewpoint robust than $\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$, in contrast to the time-robustness analysis (\S\ref{subsec:Importance-of-features}). The combined model \texttt{naeil2} takes the best of both worlds. The remaining challenge for person recognition lies in the no-face-detected \texttt{NFD} instances under the Day split. Perhaps image or social media metadata could be utilised (e.g. camera statistics, time and GPS location, the social media friendship graph).
\begin{figure}
\begin{centering}
\begin{center}
\includegraphics[width=0.8\columnwidth]{figures/threeway-test-bar_fixed}
\caption{\label{fig:threeway-bar}PIPA \emph{test} set accuracy of methods on the frontal ($\texttt{FR}$), non-frontal ($\texttt{NFR}$), and no face detected ($\texttt{NFD}$) subsets. Left: Original split, right: Day split.}
\par\end{center}
\end{centering}
\end{figure}
\begin{figure}
\begin{centering}
\begin{center}
\includegraphics[width=0.8\columnwidth]{figures/threeway-test-v2_fixed}
\caption{\label{fig:threeway-relative}PIPA \emph{test} set relative accuracy of frontal ($\texttt{FR}$), non-frontal ($\texttt{NFR}$), and non-detection ($\texttt{NDET}$) head orientations, relative to the final model \texttt{naeil2}. Left: Original split, right: Day split. }
\par\end{center}
\end{centering}
\end{figure}
\begin{figure*}
\begin{centering}
\begin{center}
\hspace*{\fill}\includegraphics[width=0.8\columnwidth]{figures/threeway-svm-fr_fixed}\hspace*{\fill}\includegraphics[width=0.8\columnwidth]{figures/threeway-svm-nondet_fixed}\hspace*{\fill}
\caption{\label{fig:threeway-svm}PIPA \emph{test} set performance when the identity classifier (SVM) is only trained on either frontal ($\texttt{FR}$, left) or no face detected ($\texttt{NFD}$, right) subset. Related scenario: a robot has only seen frontal views of people; who is this person shown from the back view? }
\par\end{center}
\end{centering}
\end{figure*}
\subsection{Generalisation across viewpoints\label{subsec:SVM-orientation}}
Here, we investigate the viewpoint generalisability of our models. For example, we challenge the system to identify a person from the back, having shown it only frontal face samples during training.
\subsubsection*{Results}
Figure \ref{fig:threeway-svm} shows the accuracies of the methods, when they are trained either only on the frontal subset $\texttt{FR}$ (left plot) or only on the no face detected subset $\texttt{NFD}$ (right plot). When trained on $\texttt{FR}$, \texttt{naeil2} has difficulties generalising to the $\texttt{NFD}$ subset ($\texttt{FR}$ versus $\texttt{NFD}$ performance is $\sim\negmedspace95\%$ to $\sim\negmedspace40\%$ in Original; $\sim\negmedspace85\%$ to $\sim\negmedspace35\%$ in Day). However, the absolute performance is still far above the random chance (see \S\ref{subsec:face-rgb-baseline}), indicating that the learned identity representations are to a certain degree generalisable. The \texttt{naeil} features are more robust in this case than $\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$, with less dramatic drop from \texttt{FR} to \texttt{NFD}.
When no face is given during training (training on the \texttt{NFD} subset), identities are much harder to learn in general. The recognition performance is low even in the no-generalisation case: $\sim\negmedspace60\%$ and $\sim\negmedspace30\%$ for Original and Day, respectively, when trained and tested on \texttt{NFD}.
\subsubsection*{Conclusion}
$\texttt{naeil2}$ does generalise marginally across viewpoints, largely owing to the \texttt{naeil} features. It seems quite hard to learn identity-specific features (generalisable or not) from back views or occluded faces (\texttt{NFD}).
\subsection{Viewpoint distribution does not matter for feature training\label{subsec:Feature-learning-orientation}}
We examine the effect of the ratio of head orientations in the feature training set on the quality of the head feature $\ensuremath{\texttt{h}}$. We fix the total number of training examples, consisting only of frontal \texttt{FR} and non-frontal \texttt{NFR} faces, while varying their ratio.
One would hypothesise that the maximal viewpoint robustness of the feature is achieved at a balanced mixture of \texttt{FR} and \texttt{NFR} for each person, and that $\ensuremath{\texttt{h}}$ trained on the $\texttt{FR}$ (resp. $\texttt{NFR}$) subset is relatively strong at predicting the $\texttt{FR}$ (resp. $\texttt{NFR}$) subset.
\subsubsection*{Results}
Figure \ref{fig:threeway-feat} shows the performance of $\ensuremath{\texttt{h}}$ trained with various $\texttt{FR}$ to $\texttt{NFR}$ ratios on the \texttt{FR}, \texttt{NFR}, and \texttt{NFD} subsets. Contrary to the hypothesis, changing the distribution of head orientations in the feature training set has a $<3\%$ effect on performance across all viewpoint subsets in both the Original and Day splits.
\subsubsection*{Conclusion}
No extra care is needed to control the distribution of head orientations in the feature training set to improve the head feature $\texttt{h}$. Features on larger image regions (e.g. \texttt{u} and \texttt{b}) are expected to be even less affected by the viewpoint distribution.
\begin{figure}
\begin{centering}
\hspace*{\fill}\includegraphics[width=0.5\columnwidth]{figures/threeway-feat-o_fixed}\hspace*{\fill}\includegraphics[width=0.5\columnwidth]{figures/threeway-feat-d_fixed}\hspace*{\fill}\vspace{0.5em}
\par\end{centering}
\caption{\label{fig:threeway-feat} Train the feature $\ensuremath{\texttt{h}}$ with different mixtures of frontal $\texttt{FR}$ and non-frontal $\texttt{NFR}$ heads. The viewpoint wise performance is shown for the Original (left) and Day (right) splits.}
\end{figure}
\subsection{Input resolution \label{subsec:Analysis-of-remaining-factors}}
This section analyses the impact of input resolution. We aim to identify methods that are robust over different ranges of resolution.
\subsubsection*{Results}
Figure \ref{fig:remaining_factors} shows the performance with respect to the input resolution (head height in pixels). The final model \texttt{naeil2} is robust against low input resolutions, reaching $\sim\negmedspace80\%$ even for instances with $<50$ pixel heads on the Original split. On the Day split, \texttt{naeil2} is less robust on low-resolution examples ($\sim\negmedspace55\%$).
Component-wise, note that the $\texttt{naeil}$ performance is nearly invariant to the resolution level. $\texttt{naeil}$ tends to be more robust for low-resolution input than $\texttt{h}_{\texttt{deepid}}$, as it is based on body and context features and does not need high-resolution faces.
\subsubsection*{Conclusion}
For low-resolution input $\texttt{naeil}$ should be exploited, while for high-resolution input $\texttt{h}_{\texttt{deepid}}$ should be. If unsure, \texttt{naeil2} is a good choice -- it envelops the performance of both at all resolution levels.
\begin{figure}
\centering{}\hspace*{\fill}%
\begin{center}
\includegraphics[width=0.8\columnwidth]{figures/res_bar_fixed}
\vspace{0em}
\caption{\label{fig:remaining_factors}PIPA \emph{test} set accuracy of systems at different levels of input resolution. Resolution is measured in terms of the head height (pixels).}
\par\end{center}%
\end{figure}
\subsection{Number of training samples\label{subsec:Importance-of-training}}
We are interested in two questions: (1) if we had more samples per identity, would person recognition be solved with the current method? (2) how many examples per identity are enough to gather a substantial amount of information about a person? To investigate these questions, we measure the performance of the methods at different numbers of training samples per identity. We perform 10 independent runs per data point with a fixed number of training examples per identity (the subset is uniformly sampled at each run).
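The sampling protocol can be sketched as follows (a minimal illustration; the training/evaluation call is a placeholder for our actual pipeline):
\begin{verbatim}
import numpy as np

def subsample_per_identity(labels, n_per_id, rng):
    # Indices of at most n_per_id uniformly sampled examples per identity.
    idx = []
    for identity in np.unique(labels):
        members = np.flatnonzero(labels == identity)
        take = min(n_per_id, len(members))
        idx.extend(rng.choice(members, size=take, replace=False))
    return np.array(idx)

def accuracy_curve(X_tr, y_tr, X_te, y_te, train_eval, n_values, runs=10):
    # train_eval(X, y, X_test, y_test) -> accuracy; placeholder for the
    # actual training/evaluation pipeline. Averaged over `runs` subsets.
    rng = np.random.default_rng(0)
    means = []
    for n in n_values:
        accs = []
        for _ in range(runs):
            idx = subsample_per_identity(y_tr, n, rng)
            accs.append(train_eval(X_tr[idx], y_tr[idx], X_te, y_te))
        means.append(float(np.mean(accs)))
    return means
\end{verbatim}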
\subsubsection*{Results}
Figure \ref{fig:numtrain-accuracy} shows recognition performance as a function of the number of training samples per identity. \texttt{naeil2} saturates after $10\sim15$ training examples per person in the Original and Day splits, reaching $\sim\negmedspace92\%$ and $\sim\negmedspace83\%$, respectively, at $25$ examples per identity. At the lower end, we observe that a single example per identity is already enough to recognise a person far above the chance level ($\sim\negmedspace67\%$ and $\sim\negmedspace35\%$ on Original and Day, respectively).
\subsubsection*{Conclusion}
Adding a few times more examples per person will not push the performance to 100\%. Methodological advances are required to fully solve the problem. On the other hand, the methods already collect a substantial amount of identity information from only a single sample per person (far above chance level).
\begin{figure}
\begin{centering}
\hspace*{\fill}\includegraphics[width=0.8\columnwidth]{figures/numtrain_accuracy_fixed}\hspace*{\fill}
\par\end{centering}
\begin{centering}
\vspace{0em}
\par\end{centering}
\caption{\label{fig:numtrain-accuracy}Recognition accuracy at different number of training samples per identity. Error bars indicate $\pm 1$ standard deviation from the mean.}
\end{figure}
\subsection{Distribution of per-identity accuracy \label{subsec:per-id-accuracy}}
Finally, we study what proportion of the identities is easy to recognise and how many are hopeless. We do so by computing the distribution of identities according to their per-identity recognition accuracies.
\subsubsection*{Results}
Figure \ref{fig:per-id-accuracy} shows the per-identity accuracy in descending order for each considered method. On the Original split, \texttt{naeil2} gives $100\%$ accuracy for $185$ out of the $581$ test identities, whereas there is only one identity for which the method totally fails. On the other hand, on the Day split there are $11$ out of the $199$ test identities for whom \texttt{naeil2} achieves $100\%$ accuracy and $12$ identities with zero accuracy. In particular, \texttt{naeil2} greatly improves the per-identity accuracy distribution over $\texttt{naeil}$, which gives zero accuracy for $40$ identities.
\subsubsection*{Conclusion}
In the Original split, \texttt{naeil2} is doing well on many of the identities already. In the Day split, the $\texttt{h}_{\texttt{deepid}}$ feature has greatly improved the per-identity performances, but \texttt{naeil2} still misses some identities. It is left as future work to focus on the hard identities.
\begin{figure}
\begin{centering}
\hspace*{\fill}\includegraphics[width=0.5\columnwidth]{figures/perid_O_fixed}\hspace*{\fill}\includegraphics[width=0.5\columnwidth]{figures/perid_D_fixed}\hspace*{\fill}
\par\end{centering}
\begin{centering}
\vspace{0em}
\par\end{centering}
\caption{\label{fig:per-id-accuracy}Per-identity accuracy on the Original and Day splits. The identities are sorted according to the per-identity accuracy for each method separately.}
\end{figure}
\section{\label{sec:Conclusion}Conclusion}
We have analysed the problem of person recognition in social media photos where people may appear with occluded faces, in diverse poses, and in various social events. We have investigated efficacy of various cues, including the face recogniser DeepID2+\cite{Sun2014ArxivDeepId2plus}, and their time and head viewpoint generalisability. For better analysis, we have contributed additional splits on PIPA \cite{Zhang2015CvprPiper} that simulate different amount of time gap between training and testing samples.
We have made four major conclusions. (1) Cues based on face and head are robust across time (\S\ref{subsec:Importance-of-features}). (2) Cues based on context are robust across head viewpoints (\S\ref{subsec:Head-orientation-analysis}). (3) The final model \texttt{naeil2}, a combination of face and context cues, is robust across both time and viewpoint and achieves a $\sim\negmedspace9$ pp improvement over a recent state of the art on the challenging Day split (\S\ref{subsec:naeil2}). (4) Better convnet architectures and face recognisers will improve the performance of the \texttt{naeil} and \texttt{naeil2} frameworks in the future (\S\ref{subsec:naeil2}).
The remaining challenges are mainly the large time gap and occluded face scenarios (\S\ref{subsec:Head-orientation-analysis}). One possible direction is to exploit non-visual cues like GPS and time metadata, camera parameters, or social media album/friendship graphs. Code and data are publicly available at \url{https://goo.gl/DKuhlY}.
\ifCLASSOPTIONcompsoc
\section*{Acknowledgments}
\else
\section*{Acknowledgment}
\fi
This research was supported by the German Research Foundation (DFG CRC 1223).
\section*{Copyright}
\copyright~2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\begin{figure*}
\begin{centering}
\includegraphics[bb=0bp 0bp 2275bp 1536bp,width=1.8\columnwidth]{figures/qualitativev3}
\par\end{centering}
\begin{raggedright}
\medskip{}
\par\end{raggedright}
\caption{\label{fig:success-O-split} Success and failure cases on the Original split. Single images: test examples. Arrows point to the training samples for the predicted identities. Green and red crosses indicate correct and wrong predictions.}
\vspace{-1em}
\end{figure*}
\begin{figure*}
\begin{centering}
\includegraphics[bb=0bp 0bp 797bp 501bp,width=1.8\columnwidth]{figures/qualitative_failure}
\par\end{centering}
\begin{raggedright}
\medskip{}
\par\end{raggedright}
\caption{\label{fig:failure-O-split} Failure cases of \texttt{naeil2} and $\texttt{PIPER}$ on the Original split. Single images: test examples. Arrows point to the training samples for the predicted identities. Green and red crosses indicate correct and wrong predictions. Typical hard cases are: 1) uniform clothing (top left), 2) babies (top right), 3) children (bottom left), and 4) annotation errors (bottom right). }
\vspace{-1em}
\end{figure*}
\newpage
\appendices
\section{\label{sec:face-detector-details}Face detection}
For face detection we use the DPM detector from Mathias et al. \cite{Mathias2014Eccv}. This detector is trained on $\sim\negmedspace15\mbox{k}$ faces from the AFLW database, and is composed of $6$ components which give a rough indication of the face orientation: $\pm0\degree$ (frontal), $\pm45\degree$ (diagonal left and right), and $\pm90\degree$ (side views). Figure \ref{fig:face-detection-examples} shows example face detections on the PIPA dataset. It shows detections, the estimated orientation, the regressed head bounding box, the corresponding ground truth head box, and some failure modes. Faces corresponding to $\pm0\degree$ are considered frontal (\texttt{FR}), and all others ($\pm45\degree$, $\pm90\degree$) are considered non-frontal (\texttt{NFR}). No ground truth is available to evaluate the face orientation estimation; except for a few mistakes, the $\pm0\degree$ component seems a rather reliable estimator (while more confusion is observed between $\pm45\degree$/$\pm90\degree$).
\section{\label{sec:open-world}Open-world recognition}
In the main paper, we have focused on the scenario where the test instances always come from a closed world of gallery identities. However, when for example person detectors are used to localise instances instead of head box annotations, the detected person may not belong to the gallery set. One may wonder how our person recognisers would perform when the test instance could be an unseen identity.
In this section, we study the task of ``open-world person recognition''. The test identity may be either from a gallery set (training identities) or from a background set (unseen identities). We consider the scenario where test instances are given by a face detector \cite{Mathias2014Eccv} while the training instance locations have been annotated by humans.
The key challenge for our recognition system is to tell gallery identities apart from background faces, while simultaneously classifying the gallery identities. Obtained from a detector, the background faces may contain any person in the crowd or even non-faces. We introduce a simple modification of our recognition systems' test-time algorithm to let them additionally make the gallery-versus-background prediction. We then discuss the relevant metrics for our systems' open-world performance.
\subsection{Method}
At test time, body part crops are inferred from the detected face region ($\texttt{f}$). First, $\texttt{h}$ is regressed from $\texttt{f}$, using the PIPA \emph{train} set statistics on the scaling and displacement transformation from \texttt{f} to \texttt{h}. All the other regions ($\texttt{u}$, $\texttt{b}$, $\texttt{s}$) are computed based on $\texttt{h}$ in the same way as in \S\textcolor{red}{3.2} of the main paper.
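A minimal sketch of this regression step is shown below; the statistics are placeholders and would in practice be estimated from paired face/head boxes on the PIPA \emph{train} set.
\begin{verbatim}
# Placeholder statistics: mean offset of the head centre relative to the
# face centre (in face-size units) and mean head-to-face size ratios.
MEAN_DX, MEAN_DY = 0.0, -0.1   # illustrative values
MEAN_SW, MEAN_SH = 1.8, 2.0    # illustrative values

def head_from_face(face_box):
    # face_box = (x, y, w, h); returns the inferred head box (x, y, w, h).
    x, y, w, h = face_box
    cx = x + w / 2 + MEAN_DX * w
    cy = y + h / 2 + MEAN_DY * h
    hw, hh = MEAN_SW * w, MEAN_SH * h
    return (cx - hw / 2, cy - hh / 2, hw, hh)
\end{verbatim}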
To measure whether the inferred head region \texttt{h} is sound and compatible with the models trained on \texttt{h} (as well as $\texttt{u}$ and $\texttt{b}$), we train the head model \texttt{h} on head annotations and test on the heads inferred from face detections. The recognition performance is $87.74\%$, while when trained and tested on the head annotations, the performance is $89.85\%$. We see a small but not significant drop -- the inferred regions are largely compatible.
The gallery-versus-background detection is done by thresholding the final SVM score output. Given a recognition system and a test instance $x$, let $\mathcal{S}_{k}\left(x\right)$ be the SVM score for identity $k$. Then, we apply a thresholding parameter $\tau>0$ to predict background if $\underset{k}{\max}\,\,\mathcal{S}_{k}\left(x\right)<\tau$, and predict the argmax gallery identity otherwise.
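In code, the decision rule amounts to the following minimal sketch (the per-identity scores are assumed to be given):
\begin{verbatim}
import numpy as np

BACKGROUND = -1  # label for "not in the gallery"

def open_world_predict(svm_scores, tau):
    # svm_scores: (K,) per-identity SVM scores S_k(x) for one test instance.
    k_star = int(np.argmax(svm_scores))
    return BACKGROUND if svm_scores[k_star] < tau else k_star
\end{verbatim}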
\subsection{Evaluation metric}
The evaluation metric should measure two aspects simultaneously: (1) the ability to tell apart background identities, (2) the ability to classify gallery identities. We first introduce a few terms to help define the metrics. Refer to figure \ref{fig:open-metric} for a visualisation. We say a detected test instance $x$ is a ``foreground prediction'' if $\underset{k}{\max}\,\,\mathcal{S}_{k}\left(x\right)\ge\tau$. A foreground prediction is either a true positive ($TP$) or a false positive ($FP$), depending on whether $x$ is a gallery identity or not. If $x$ is a $TP$, it is either a sound true positive $TP_s$ or an unsound true positive $TP_u$, depending on the classification result $\underset{k}{\arg\max}\,\,\mathcal{S}_{k}\left(x\right)$. A false negative ($FN$) is incurred if a gallery identity is predicted
as background.
\begin{figure}
\begin{centering}
\hspace*{\fill}\includegraphics[width=0.8\columnwidth]{figures/open-metric-v2_fixed}\hspace*{\fill}
\par\end{centering}
\caption{\label{fig:open-metric}Diagram of various subsets generated by a person recognition system in an open world setting (cf. Figure \textcolor{red}{2} of main paper). $TP_s$: sound true positive, $TP_u$: unsound true positive, $FP$: false positive, $FN$: false negative. See text for the definitions.}
\end{figure}
We first measure the system's ability to screen background identities while at the same time classifying the gallery identities. The \textbf{recognition recall (RR)} at threshold $\tau$ is defined as follows
\begin{equation}
\label{eq:RR}
\mathrm{RR}(\tau)=\frac{\left|TP_s\right|}{\left|\mbox{face det.}\cap\mbox{head anno.}\right|}=\frac{\left|TP_s\right|}{\left|TP\cup FN\right|}.
\end{equation}
To factor out the performance of face detection, we constrain our evaluation to the intersection between face detections and head annotations (the denominator $TP\cup FN$). Note that the metric is a decreasing function of $\tau$, and when $\tau\rightarrow-\infty$ the corresponding system is operating under the closed world assumption.
The system enjoys a higher RR when $\tau$ is decreased, but it then predicts many background cases as foreground ($FP$). To quantify this trade-off we introduce a second metric:
\textbf{false positive per image (FPPI)}. Given a threshold $\tau$, FPPI is defined as
\begin{equation}
\label{eq:FPPI}
\mathrm{FPPI}(\tau)=\frac{\left|FP\right|}{\left|\mbox{images}\right|},
\end{equation}
measuring how many wrong foreground predictions the system makes per image. It is also a decreasing function of $\tau$. When $\tau\rightarrow\infty$, the FPPI attains zero.
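For concreteness, the two metrics can be computed jointly by sweeping $\tau$; the sketch below assumes the instance bookkeeping (maximum SVM scores, arg-max predictions, and ground truth identities with \texttt{None} marking background detections) is given, and is not the exact evaluation code.
\begin{verbatim}
import numpy as np

def rr_fppi_curve(max_scores, pred_ids, gt_ids, n_images, taus):
    # max_scores: max SVM score per test instance
    # pred_ids:   arg-max gallery identity per instance
    # gt_ids:     ground-truth identity, or None for background faces
    max_scores = np.asarray(max_scores)
    is_gallery = np.array([g is not None for g in gt_ids])
    is_sound = np.array([g is not None and p == g
                         for p, g in zip(pred_ids, gt_ids)])
    n_gallery = is_gallery.sum()          # |TP u FN|
    curve = []
    for tau in taus:
        fg = max_scores >= tau            # foreground predictions
        rr = (fg & is_sound).sum() / n_gallery
        fppi = (fg & ~is_gallery).sum() / n_images
        curve.append((rr, fppi))
    return curve
\end{verbatim}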
\subsection{Results}
\begin{figure*}
\begin{centering}
\hspace*{\fill}\includegraphics[width=0.7\columnwidth]{figures/open_O_fixed}\hspace*{\fill}\includegraphics[width=0.7\columnwidth]{figures/open_D_fixed}\hspace*{\fill}
\par\end{centering}
\caption{\label{fig:open-world-results}Recognition recall (RR) versus false positive per image (FPPI) of our recognition systems in the open world setting. Curves are parametrised by $\tau$ -- see text for details.}
\end{figure*}
\begin{figure*}
\begin{centering}
\hspace*{\fill}\subfloat[$-90\degree$]{\protect\begin{centering}
\protect\includegraphics[width=0.11\textwidth]{figures/face_detections/face_-90_a}\hspace*{0.3em}\protect\includegraphics[width=0.11\textwidth]{figures/face_detections/face_-90_b}\hspace*{0.3em}\protect\includegraphics[width=0.11\textwidth]{figures/face_detections/face_-90_c}\protect
\par\end{centering}
}\hspace*{\fill}\subfloat[$+90\degree$]{\protect\centering{}\protect\includegraphics[width=0.11\textwidth]{figures/face_detections/face_+90_a}\hspace*{0.3em}\protect\includegraphics[width=0.11\textwidth]{figures/face_detections/face_+90_b}\hspace*{0.3em}\protect\includegraphics[width=0.11\textwidth]{figures/face_detections/face_+90_c}\protect}\hspace*{\fill}
\par\end{centering}
\begin{centering}
\hspace*{\fill}\subfloat[$-45\degree$]{\protect\begin{centering}
\protect\includegraphics[width=0.11\textwidth]{figures/face_detections/face_-45_a}\hspace*{0.3em}\protect\includegraphics[width=0.11\textwidth]{figures/face_detections/face_-45_b}\hspace*{0.3em}\protect\includegraphics[width=0.11\textwidth]{figures/face_detections/face_-45_c}\protect
\par\end{centering}
}\hspace*{\fill}\subfloat[$+45\degree$]{\protect\centering{}\protect\includegraphics[width=0.11\textwidth]{figures/face_detections/face_+45_a}\hspace*{0.3em}\protect\includegraphics[width=0.11\textwidth]{figures/face_detections/face_+45_b}\hspace*{0.3em}\protect\includegraphics[width=0.11\textwidth]{figures/face_detections/face_+45_c}\protect}\hspace*{\fill}
\par\end{centering}
\hspace*{\fill}\subfloat[$\pm0\degree$]{\protect\centering{}\protect\includegraphics[width=0.11\textwidth]{figures/face_detections/face_0_b}\hspace*{0.3em}\protect\includegraphics[width=0.11\textwidth]{figures/face_detections/face_0_a}\hspace*{0.3em}\protect\includegraphics[width=0.11\textwidth]{figures/face_detections/face_0_c}\protect}\hspace*{\fill}\subfloat[Missing detections]{\protect\centering{}\protect\includegraphics[width=0.11\textwidth]{figures/face_detections/face_missed_a}\hspace*{0.3em}\protect\includegraphics[width=0.11\textwidth]{figures/face_detections/face_missed_c}\hspace*{0.3em}\protect\includegraphics[width=0.11\textwidth]{figures/face_detections/face_missed_b}\protect}\hspace*{\fill}
\hspace*{\fill}\subfloat[Legend]{\protect\centering{}\hspace{1.6em}\protect\includegraphics[height=0.11\textwidth]{figures/face_detections/face_detections_legend}\hspace{1.6em}\protect}\hspace*{\fill}\subfloat[Detected heads, but wrong orientation estimate]{\protect\centering{}\protect\includegraphics[width=0.11\textwidth]{figures/face_detections/face_wrong_orientation_a}\hspace*{0.3em}\protect\includegraphics[width=0.11\textwidth]{figures/face_detections/face_wrong_orientation_c}\hspace*{0.3em}\protect\includegraphics[width=0.11\textwidth]{figures/face_detections/face_wrong_orientation_b}\protect}\hspace*{\fill}\protect\caption{\label{fig:face-detection-examples}Example results from the face detector (PIPA \emph{val} set), and estimated head boxes. }
\end{figure*}
Figure \ref{fig:open-world-results} shows the recognition recall (RR) versus false positive per image (FPPI) curves parametrised by $\tau$. As $\tau\to-\infty$, $RR(\tau)$ approaches the closed world performance on the face detected subset ($\texttt{FR}\cup\texttt{NFR}$): $87.74\%$ (Original) and $46.67\%$ (Day) for $\texttt{naeil}$. In the open-world case, for example when the system makes one FPPI, the recognition recall for $\texttt{naeil}$ is $76.25\%$ (Original) and $25.29\%$ (Day). Transitioning from the closed world to the open world, we thus see quite some drop, but one should note that the set of background face detections is more than $7\times$ larger than the set of foreground faces.
Note that DeepID2+ \cite{Sun2014ArxivDeepId2plus} is not publicly available, so we cannot compute the $\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$ features ourselves; we have therefore not included the $\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$ or \texttt{naeil2} results in this section.
\subsection{Conclusion}
Although performance is not ideal, a simple SVM score thresholding scheme can make our systems work in the open world recognition scenario.
\newpage
\bibliographystyle{IEEEtran}
\IEEEraisesectionheading{\section{Introduction}\label{sec:introduction}}
\IEEEPARstart{W}{ith} the advent of social media and the shift of image capturing from digital cameras to smartphones and life-logging devices, users nowadays share massive amounts of personal photos online. Being able to recognise people in such photos would benefit the users by easing photo album organisation. Recognising people in natural environments poses interesting challenges: people may be focused on their activities with the face not visible, or may change clothing or hairstyle. These challenges are largely new -- the traditional focus of computer vision research on human identification has been face recognition (frontal, fully visible faces) or pedestrian re-identification (no clothing changes, standing pose).
Intuitively, the ability to recognise faces in the wild \cite{Huang2007Lfw,Sun2014ArxivDeepId2plus} remains an important ingredient. However, when people are engaged in an activity (i.e. not posing) their faces become only partially visible (non-frontal, occluded) or simply fully invisible (back-view). Therefore, additional information is required to reliably recognise people. We explore other cues, including (1) the body of a person, which carries information about shape and appearance; (2) human attributes such as gender and age; and (3) scene context. See Figure \ref{fig:teaser} for examples that require an increasing number of contextual cues for successful recognition.
\begin{figure}
\begin{centering}
\arrayrulecolor{gray}
\par\end{centering}
\begin{centering}
\includegraphics[bb=0bp 0bp 601bp 600bp,width=0.25\columnwidth,height=0.25\columnwidth]{figures/teaser/teaser50_allcorrect_face_28134}\includegraphics[bb=0bp 0bp 721bp 721bp,width=0.25\columnwidth,height=0.25\columnwidth]{figures/teaser/teaser50_headincorrect_whole_35166}\includegraphics[bb=0bp 0bp 501bp 501bp,width=0.25\columnwidth,height=0.25\columnwidth]{figures/teaser/teaser50_personincorrect_8850}\includegraphics[bb=0bp 0bp 401bp 401bp,width=0.25\columnwidth,height=0.25\columnwidth]{figures/teaser/teaser50_onlynaeil_4631}
\par\end{centering}
\vspace{0.5em}
\begin{raggedright}
\begin{tabular}{lcc|ccc|ccc|ccc}
\hspace*{-0.6em}Head & \hspace*{-0.6em}\includegraphics[height=0.8em]{figures/yes} & \hspace*{-0.15em} & \hspace{1.0em} & \includegraphics[height=1em]{figures/no} & \hspace{1.0em} & \hspace{1.0em} & \includegraphics[height=1em]{figures/no} & \hspace{1.0em} & \hspace{0.95em} & \includegraphics[height=1em]{figures/no} & \hspace{10em}\tabularnewline
\hspace*{-0.6em}Body & \hspace*{-0.6em}\includegraphics[height=0.8em]{figures/yes} & \hspace*{-0.17em} & & \includegraphics[height=0.8em]{figures/yes} & & & \includegraphics[height=1em]{figures/no} & & & \includegraphics[height=1em]{figures/no} & \tabularnewline
\hspace*{-0.6em}{\scriptsize{}Attributes} & \hspace*{-0.6em}\includegraphics[height=0.8em]{figures/yes} & \hspace*{-0.17em} & & \includegraphics[height=0.8em]{figures/yes} & & & \includegraphics[height=0.8em]{figures/yes} & & & \includegraphics[height=1em]{figures/no} & \tabularnewline
\hspace*{-0.6em}{\small{}All cues} & \hspace*{-0.6em}\includegraphics[height=0.8em]{figures/yes} & \hspace*{-0.17em} & & \includegraphics[height=0.8em]{figures/yes} & & & \includegraphics[height=0.8em]{figures/yes} & & & \includegraphics[height=0.8em]{figures/yes} & \tabularnewline
\end{tabular}
\par\end{raggedright}
\arrayrulecolor{black}\vspace{0.5em}
\caption{\label{fig:teaser}In social media photos, depending on face occlusion or pose, different cues may be effective. For example, the surfer in the third column is not recognised using only head and body cues due to unusual pose. However, she is successfully recognised when additional attribute cues are considered.}
\end{figure}
This paper presents an in-depth analysis of the person recognition task in social media photos: given a few annotated training images per person, who is this person in the test image? The main contributions of the paper are summarised as follows:
\begin{itemize}
\item{Propose realistic and challenging person recognition scenarios on the PIPA benchmark (\S\ref{sec:PIPA-dataset}).}
\item{Provide a detailed analysis of the informativeness of different cues, in particular of a face recognition module DeepID2+ \cite{Sun2014ArxivDeepId2plus} (\S\ref{sec:Cues-for-recognition}).}
\item{Verify that our journal version final model \texttt{naeil2} achieves the new state of the art performance on PIPA (\S\ref{sec:Test-set-results}).}
\item{Analyse the contribution of cues according to the amount of appearance and viewpoint changes (\S\ref{sec:challenges-analysis})}.
\item{Discuss the performance of our methods under the open-world recognition setup (\S\ref{sec:open-world})}.
\item Code and data are open source: available at \url{https://goo.gl/DKuhlY}.
\end{itemize}
\section{\label{sec:Related-work}Related work}
\subsection{By data type}
We review work on human identification based on various visual cues. Faces are the most obvious and widely studied cue, while other biometric cues have also been considered. We discuss how our personal photo setup differs from these settings.
\subsubsection*{Face}
The bulk of previous work on person recognition focuses on faces. The Labeled Faces in the Wild (LFW) benchmark \cite{Huang2007Lfw} has been a great testbed for a host of works on face identification and verification outside the lab setting. The benchmark has saturated, owing to deep features \cite{Taigman2014CvprDeepFace,Sun2014ArxivDeepId2plus,Zhou2015ArxivNaiveDeepFace,Schroff2015ArxivFaceNet,Parkhi15,chen2016unconstrained,wen2016discriminative,ranjan2017l2,wang2018additive,deng2018arcface} trained on large scale face databases, which outperform the traditional methods involving sophisticated classifiers based on hand-crafted features and metric learning approaches \cite{Guillaumin2009Iccv,Chen2013CvprBlessing,Cao2013IccvTransferLearning,Lu2014ArxivGaussianFace}. While faces are clearly the most discriminative cue for recognising people in personal photos as well, they are often occluded in natural footage -- e.g. the person may be engaged in other activities. Since LFW contains largely frontal views of subjects, it does not fully represent the setup we are interested in. LFW is also biased towards public figures.
Some recent face recognition benchmarks have introduced more face occlusions. The IARPA Janus Benchmark A (IJB-A) \cite{klare2015pushing} and Celebrities in Frontal Profile (CFP)~\cite{sankaranarayanan2016triplet} datasets include faces with profile viewpoints; however, neither IJB-A nor CFP considers subjects fully turning away from the camera (back-view), and the subjects are limited to public figures. The Age Database (AgeDB) \cite{moschoglou2017agedb} evaluates recognition across long time spans (years), but is again biased towards public figures; recognition across age gaps is part of our task, and we focus on personal photos without celebrity bias.
MegaFace \cite{kemelmacher2016megaface,nech2017level} is perhaps the largest known open source face database built from personal photos on Flickr. However, MegaFace still does not contain any back-view subjects, and it is not designed to evaluate the ability to combine cues from multiple body regions. Face recognition datasets are therefore not suitable for training and evaluating systems that identify a person from the face and other body regions.
\subsubsection*{Pedestrian Re-Identification from RGB Images}
Not only the face, but also the entire body and clothing patterns have been explored as cues for human identification. For example, pedestrian re-identification (re-id) tackles the problem of matching pedestrian detections across different camera views. Early benchmarks include VIPeR \cite{gray2007VIPeR}, CAVIAR \cite{cheng2011CAVIAR}, and CUHK \cite{li2012CUHK1}, while nowadays most re-id papers report results on Market 1501~\cite{zheng2015scalable}, MARS~\cite{zheng2016mars}, CUHK03~\cite{Li2014CvprDeepReID}, and DukeMTMC-reID~\cite{ristani2016performance}. There is an active line of research on pedestrian re-id, which started with hand-crafted features \cite{Li2013Cvpr,Zhao2013IccvSalienceMatching,Bak2014WacvBrownian} and has moved towards deep feature based schemes \cite{Li2014CvprDeepReID,Yi2014Arxiv,Hu2014Accvw,ahmed2015improved,Cheng_2016_CVPR,Xiao_2016_CVPR,Varior2016,Chen/cvpr2017,zheng2017unlabeled}.
However, the re-id datasets and benchmarks do not fully cover the social media setup in the following aspects. (1) Subjects are pedestrians and mostly appear in a standing pose; in personal photos, people may be engaged in a diverse array of activities and poses -- e.g. skiing, performing arts, giving a presentation. (2) The resolution is typically low, whereas person recognition in personal photos involves matching identities across a huge resolution range -- from selfies to group photos.
\subsubsection*{Pedestrian Re-Identification from Depth Images}
In order to identify humans based on body shape, potentially enabling recognition independent of clothing changes, researchers have proposed depth-based re-identification setups. Datasets include RGBD-ID~\cite{barbosa2012re}, IAS-Lab RGBD-ID \cite{munaro2014one}, and the recent SOMAset \cite{barbosa2017looking}. SOMAset in particular has clothing changes enforced in the dataset. A line of work~\cite{barbosa2012re,munaro2014one,barbosa2017looking,wu2017robust} has improved the recognition technology under this setup. While recognition across clothing changes is related to our task of identifying humans in personal photos, RGBD based re-identification typically requires depth information for good performance, which is unavailable for personal photos. Moreover, the relevant datasets are collected in controlled lab setups, while personal photos are completely unconstrained.
\subsubsection*{Other biometric cues}
Traditionally, fingerprints and iris patterns have been considered strong visual biometric cues \cite{maltoni2009handbook,daugman2009iris}. Gaits \cite{CONNOR20181} are also known to be an identity correlated feature. We do not use them explicitly in this work, as such information is not readily given in personal photos.
\subsubsection*{Personal Photos}
Personal photos have distinct characteristics that set new challenges not fully addressed before. For example, people may be engaged in some activity, not cooperating with the photographer, and may change clothing over time. Some pre-convnet works have addressed this problem on a small scale \cite{zhang2003automated,song2006context,anguelov2007contextual,Gallagher2008Cvpr}, combining cues from the face as well as clothing regions. Among these, Gallagher et al. \cite{Gallagher2008Cvpr} have published the ``Gallagher collection person'' dataset for benchmarking ($\sim\negmedspace 600$ images, 32 identities). It was not until the appearance of the PIPA dataset \cite{Zhang2015CvprPiper} that a large-scale dataset of personal photos became available. The dataset consists of Flickr personal account images (Creative Commons) and is fairly large in scale ($\sim$40k images, $\sim$2k identities), with diverse appearances and subjects at all viewpoints and occlusion levels. Heads are annotated with bounding boxes, each with an identity tag. We describe PIPA in greater detail in \S\ref{sec:PIPA-dataset}.
\subsection{By recognition task}
There exist multiple tasks related to person recognition \cite{Gong2014PersonReIdBook} differing mainly in the amount of training and testing data. Face and surveillance re-identification is most commonly done via verification: \emph{given one reference image (gallery) and one test image (probe), do they show the same person?} \cite{Huang2007Lfw,Bedagkar2014IvcPersonReIdSurvey}. In this paper, we consider two recognition tasks.
\begin{itemize}
\item Closed world identification: \emph{given a single test image (probe), who is this person among the training identities (gallery set)?}
\item Open world recognition \cite{kemelmacher2016megaface} (\S\ref{sec:open-world}): \emph{given a single test image (probe), is this person among the training identities (gallery set)? If so, who?}
\end{itemize}
Other related tasks are face clustering \cite{Cui2007ChiEasyAlbum,Schroff2015ArxivFaceNet,OttoClustering}, finding important people \cite{mathialagan2015vip}, and associating names in text to faces in images \cite{Everingham2006Bmvc,Everingham2009Ivc}.
\subsection{Prior work on PIPA dataset~\cite{Zhang2015CvprPiper}}
Since the introduction of the PIPA dataset \cite{Zhang2015CvprPiper}, multiple works have proposed different methods for solving the person recognition problem in social media photos. Zhang et al. proposed the Pose Invariant Person Recognition (\texttt{PIPER}) \cite{Zhang2015CvprPiper}, obtaining promising results by combining three ingredients: DeepFace \cite{Taigman2014CvprDeepFace} (face recognition module trained on a large private dataset), poselets \cite{Bourdev2009IccvPoselets} (pose estimation module trained with 2k images and 19 keypoint annotations), and convnet features trained on detected poselets \cite{Krizhevsky2012Nips,Deng2009CvprImageNet}.
Oh et al. \cite{oh2015person}, the conference version of this paper, proposed a much simpler model, \texttt{naeil}, that extracts AlexNet cues from multiple \emph{fixed} image regions. Unlike \texttt{PIPER}, it requires neither the data-heavy DeepFace nor the time-costly poselets, and it uses only 17 cues (\texttt{PIPER} uses over 100), yet it still outperforms \texttt{PIPER}.
There have been many follow-up works since then. Kumar et al. \cite{kumar2017pose} improved the performance by normalising the body pose using pose estimation. Li et al. \cite{li2017sequential} considered exploiting people co-occurrence statistics. Liu et al. \cite{liu_2017_coco} proposed to train a person embedding in a metric space instead of training a classifier on a fixed set of identities, thereby making the model more adaptable to unseen identities. We discuss and compare against these works in greater detail in \S\ref{sec:Test-set-results}. Some works have exploited the photo-album metadata, allowing the model to reason over different photos \cite{joon16eccv,Li_2016_CVPR}.
In this journal version, we build \texttt{naeil2} from \texttt{naeil} and DeepID2+ \cite{Sun2014ArxivDeepId2plus} to achieve the state of the art result among the published work on PIPA. We provide additional analysis of cues according to time and viewpoint changes.
\section{\label{sec:PIPA-dataset}Dataset and experimental setup}
\subsubsection*{Dataset}
The PIPA dataset (``People In Photo Albums'') \cite{Zhang2015CvprPiper} is, to the best of our knowledge, the first dataset to annotate people's identities even when they are pictured from the back. The annotators labelled instances that can be considered hard even for humans (see qualitative examples in figure \ref{fig:success-O-split}, \ref{fig:failure-O-split}). PIPA features $37\,107$ Flickr personal photo album images (Creative Commons license), with $63\,188$ head bounding boxes of $2\,356$ identities. The head bounding boxes are tight around the skull, including the face and hair; occluded heads are hallucinated by the annotators. The dataset is partitioned into \emph{train}, \emph{val}, \emph{test}, and \emph{leftover} sets, with rough ratio $45\negmedspace:\negmedspace15\negmedspace:\negmedspace20\negmedspace:\negmedspace20$ percent of the annotated heads. The leftover set is not used in this paper. Up to annotation errors, neither identities nor photo albums by the same uploader are shared among these sets.
\subsubsection*{Task}
At test time, the system is given a photo and ground truth head bounding box corresponding to the test instance (probe). The task is to choose the identity of the test instance among a given set of identities (gallery set, 200$\sim$500 identities) each with $\sim$10 training samples.
In \S\ref{sec:open-world}, we evaluate the methods when the test instance may be a background person (e.g. bystanders -- no training image given). The system is then also required to determine if the given instance is among the seen identities (gallery set).
\subsubsection*{Protocol}
We follow the PIPA protocol in \cite{Zhang2015CvprPiper} for data utilisation and model evaluation. The \emph{train} set is used for convnet feature training. The \emph{test} set contains the examples for the test identities. For each identity, the samples are divided into $\mbox{\emph{test}}_{0}$ and $\mbox{\emph{test}}_{1}$. For evaluation, we perform a two-fold cross validation by training on one of the splits and testing on the other. The \emph{val} set is likewise split into $\mbox{\emph{val}}_{0}$ and $\mbox{\emph{val}}_{1}$, and is used for exploring different models and tuning hyperparameters.
\subsubsection*{Evaluation}
We use the recognition rate (or accuracy), the rate of correct identity predictions among the test instances. For every experiment, we average two recognition rates obtained from the (training, testing) pairs ($\mbox{\emph{val}}_{0}$, $\mbox{\emph{val}}_{1}$) and ($\mbox{\emph{val}}_{1}$, $\mbox{\emph{val}}_{0}$) -- analogously for \emph{test}.
\subsection{\label{subsec:PIPA-splits}Splits}
We consider four different ways of splitting the training and testing samples ($\mbox{\emph{val}}_{\nicefrac{0}{1}}$ and $\mbox{\emph{test}}_{\nicefrac{0}{1}}$) for each identity, aiming to evaluate different levels of generalisation ability. The first one is from prior work, and we introduce three new ones. Refer to table \ref{tab:stat-splits} for data statistics and figure \ref{fig:splits-visualisation} for a visualisation.
\subsubsection*{Original split $\mathcal{O}$ \cite{Zhang2015CvprPiper}}
The Original split shares many similar examples per identity across the split -- e.g. photos taken in a row. The Original split is thus easy -- even nearest neighbour on raw RGB pixels works (\S\ref{subsec:face-rgb-baseline}). In order to evaluate the ability to generalise across long-term appearance changes, we introduce three new splits below.
\subsubsection*{Album split $\mathcal{A}$ \cite{oh2015person}}
The Album split divides training and test samples for each identity according to the photo album metadata. Each split takes the albums while trying to match the number of samples per identity as well as the total number of samples across the splits. A few albums are shared between the splits in order to match the number of samples. Since the Flickr albums are user-defined and do not always strictly cluster events and occasions, the split may not be perfect.
\subsubsection*{Time split $\mathcal{T}$ \cite{oh2015person}}
The Time split divides the samples according to the time the photo was taken. For each identity, the samples are sorted according to their ``photo-taken-date'' metadata and then divided on a newest-versus-oldest basis. Instances without time metadata are distributed evenly. This split evaluates the temporal generalisation of the recogniser. However, the ``photo-taken-date'' metadata is quite noisy, with many missing entries.
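A minimal sketch of this division for a single identity is given below, assuming each instance record carries a (possibly missing) \texttt{date} field; the even distribution of undated instances is done by alternating assignment.
\begin{verbatim}
def time_split(instances):
    # Sort by photo-taken-date; undated instances are spread evenly.
    dated = sorted((i for i in instances if i["date"] is not None),
                   key=lambda i: i["date"])
    undated = [i for i in instances if i["date"] is None]
    half = len(dated) // 2
    oldest = dated[:half] + undated[0::2]
    newest = dated[half:] + undated[1::2]
    return oldest, newest
\end{verbatim}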
\subsubsection*{Day split $\mathcal{D}$ \cite{oh2015person}}
The Day split divides the instances via visual inspection to ensure a firm ``appearance change'' across the splits. We define two criteria for division: (1) firm evidence of a date change such as \{change of season, continent, event, co-occurring people\} and/or (2) visible changes in \{hairstyle, make-up, head or body wear\}. We discard identities for whom such a division is not possible. After division, for each identity we randomly discard samples from the larger split until the sizes match. If the smaller split has $\leq\negmedspace 4$ instances, we discard the identity altogether. The Day split enables clean experiments for evaluating the generalisation performance across strong appearance and event changes.
\subsection{\label{subsec:face-detection}Face detection}
Instances in PIPA are annotated by humans around their heads (tight around skull). We additionally compute face detections over PIPA for three purposes: (1) to compare the amount of identity information in head versus face (\S\ref{sec:Cues-for-recognition}), (2) to obtain head orientation information for further analysis (\S\ref{sec:challenges-analysis}), and (3) to simulate the scenario without ground truth head box at test time (\S\ref{sec:open-world}). We use the open source DPM face detector \cite{Mathias2014Eccv}.
Given a set of detected faces (above a certain detection score threshold) and the ground truth heads, the match is made according to the overlap (intersection over union). For matched heads, the corresponding face detections tell us which DPM component fired, thereby allowing us to infer the head orientation (frontal or side view). See Appendix \S{\color{red}A} for further details.
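A minimal sketch of this matching step is given below, assuming $(x, y, w, h)$ boxes; the greedy one-to-one assignment and the particular overlap threshold are illustrative assumptions.
\begin{verbatim}
def iou(a, b):
    # Intersection over union of two (x, y, w, h) boxes.
    ix = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def match_faces_to_heads(face_dets, head_annos, min_iou=0.3):
    # Greedy one-to-one matching; unmatched heads -> NFD,
    # unmatched detections -> background.
    matches, used = {}, set()
    for hi, head in enumerate(head_annos):
        best, best_o = None, min_iou
        for fi, face in enumerate(face_dets):
            if fi in used:
                continue
            o = iou(face, head)
            if o > best_o:
                best, best_o = fi, o
        if best is not None:
            matches[hi] = best
            used.add(best)
    return matches
\end{verbatim}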
Using the DPM component, we partition instances in PIPA as follows: (1) detected and frontal ($\texttt{FR}$, 41.29\%), (2) detected and non-frontal ($\texttt{NFR}$, 27.10\%), and (3) no face detected ($\texttt{NFD}$, 31.60\%). We denote detections without matching ground truth head as Background. See figure \ref{fig:threeway-diagram} for visualisation.
\begin{figure}
\begin{centering}
\includegraphics[width=0.8\columnwidth]{figures/threeway_fixed}\vspace{-0.5em}
\par\end{centering}
\caption{\label{fig:threeway-diagram}Face detections and head annotations in PIPA. The matches are determined by overlap (intersection over union). For matched faces (heads), the detector DPM component gives the orientation information (frontal versus non-frontal).}
\end{figure}
\begin{table}
\begin{centering}
\setlength\tabcolsep{0.5em}
\begin{tabular}{cccccccccccc}
& & & \multicolumn{4}{c}{\emph{val}} & & \multicolumn{4}{c}{\emph{test}} \tabularnewline
\vspace{-1em}
& & & & & & & \tabularnewline
\cline{4-7} \cline{9-12}
\vspace{-1em}
& & & & & & & \tabularnewline
& & & $\mathcal{O}$ & $\mathcal{A}$ & $\mathcal{T}$ & $\mathcal{D}$ & & $\mathcal{O}$ & $\mathcal{A}$ & $\mathcal{T}$ & $\mathcal{D}$ \tabularnewline
\vspace{-1em}
& & & & & & & \tabularnewline
\cline{1-2} \cline{4-7} \cline{9-12}
\vspace{-1em}
& & & & & & & \tabularnewline
\cline{1-2} \cline{4-7} \cline{9-12}
\vspace{-0.5em}
& & & & & & & \tabularnewline
\multirow{2}{*}{{\rotatebox{90}{{spl.0}\hspace{0em}}}}&instance & & 4820 & 4859 & 4818 & 1076 & & 6443 & 6497 & 6441 & 2484 \tabularnewline
&identity & & 366 & 366 & 366 & 65 & & 581 & 581 & 581 & 199 \tabularnewline
\vspace{-1em}
& & & & & & &\tabularnewline
\cline{1-2} \cline{4-7} \cline{9-12}
\vspace{-0.5em}
& & & & & & & \tabularnewline
\multirow{2}{*}{\rotatebox{90}{spl.1\hspace{0em}}}&instance & & 4820 & 4783 & 4824 & 1076 & & 6443 & 6389 & 6445 & 2485 \tabularnewline
&identity & & 366 & 366 & 366 & 65 & & 581 & 581 & 581 & 199 \tabularnewline
\vspace{-1em}
& & & & & & &\tabularnewline
\cline{1-2} \cline{4-7} \cline{9-12}
\end{tabular}
\par\end{centering}
\vspace{1em}
\caption{\label{tab:stat-splits}Split statistics for the \emph{val} and \emph{test} sets. The total number of instances and identities is shown for each split.}
\end{table}
\begin{figure*}
\begin{centering}
\hspace*{\fill}\includegraphics[width=1.8\columnwidth]{figures/split/split1_ID117.png}\hspace*{\fill}
\par\end{centering}
\begin{centering}
\vspace{0.5em}
\par\end{centering}
\begin{centering}
\hspace*{\fill}\includegraphics[width=1.8\columnwidth]{figures/split/split2_ID118.png}\hspace*{\fill}
\par\end{centering}
\begin{centering}
\vspace{0.5em}
\par\end{centering}
\begin{centering}
\hspace*{\fill}\includegraphics[width=1.8\columnwidth]{figures/split/split3_ID1394.png}\hspace*{\fill}
\par\end{centering}
\begin{centering}
\vspace{0em}
\par\end{centering}
\caption{\label{fig:splits-visualisation}Visualisation of Original, Album,
Time and Day splits for three identities (rows 1-3). Greater appearance gap is observed from Original to Day splits.}
\end{figure*}
\section{\label{sec:Cues-for-recognition}Cues for recognition}
In this section, we investigate the cues for recognising people in social media photos. We begin with an overview of our model. Then, we experimentally answer the following questions: how informative are fixed body regions (no pose estimation) (\S\ref{sec:how-informative-body-region})? How much does scene context help (\S\ref{sec:Scene})? Is it the head or the face (head minus hair and background) that is more informative (\S\ref{sec:Head-or-face})? How much do we gain by using extended data (\S\ref{sec:Additional-training-data} \& \S\ref{sec:Attributes})? And how effective is a specialised face recogniser (\S\ref{sec:deepid})? The studies in this section are based exclusively on the \emph{val} set.
\subsection{Model overview}
\begin{wrapfigure}{O}{0.45\columnwidth}%
\begin{centering}
\par\end{centering}
\centering{}
\includegraphics[height=0.6\columnwidth]{figures/method_overview_v2}
\caption{\label{fig:image-regions}Regions considered for feature extraction: face $\ensuremath{\texttt{f}}$, head $\ensuremath{\texttt{h}}$, upper body $\ensuremath{\texttt{u}}$, full body $\ensuremath{\texttt{b}}$, and scene $\ensuremath{\texttt{s}}$. More than one cue can be extracted per region (e.g. $\ensuremath{\texttt{h}}_{1}$, $\ensuremath{\texttt{h}}_{2}$ ).
}
\end{wrapfigure}%
At test time, given a ground truth head bounding box, we estimate five different regions, depicted in figure \ref{fig:image-regions}. Each region is fed into one or more convnets to obtain a set of cues. The cues are concatenated to form a feature vector describing the instance. Throughout the paper we write $+$ to denote vector concatenation. Linear SVM classifiers are trained over this feature vector (one versus the rest). In our final system, except for DeepID2+ \cite{Sun2014ArxivDeepId2plus}, all features are computed using the seventh layer (fc7) of AlexNet \cite{Krizhevsky2012Nips} pre-trained for ImageNet classification. Apart from the DeepID2+ feature, the cues differ from each other only in the image area used and in the fine-tuning (type of data or surrogate task) applied to AlexNet.
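The pipeline can be sketched as follows; the fc7 extractor functions are placeholders for the fine-tuned AlexNets, and the use of scikit-learn's \texttt{LinearSVC} is an assumption for illustration rather than the actual implementation.
\begin{verbatim}
import numpy as np
from sklearn.svm import LinearSVC

def instance_feature(crops, extractors):
    # crops:      dict region name -> image crop (f, h, u, b, s)
    # extractors: list of (region name, fc7 feature function) pairs;
    #             the '+' of the paper is a plain concatenation.
    return np.concatenate([fn(crops[name]) for name, fn in extractors])

def train_identity_classifiers(features, labels, C=1.0):
    # features: (n_instances, n_dims) stacked instance features.
    # LinearSVC trains one-versus-rest classifiers over the gallery
    # identities; decision_function then yields per-identity scores.
    clf = LinearSVC(C=C)
    clf.fit(features, labels)
    return clf
\end{verbatim}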
\subsection{\label{subsec:Body-regions}Image regions used}
We choose five different image regions based on the ground truth head
annotation (given at test time, see the protocol in \S\ref{sec:PIPA-dataset}). The
head rectangle $\ensuremath{\texttt{h}}$ corresponds to the ground
truth annotation. The full body rectangle $\ensuremath{\texttt{b}}$
is defined as $\left(3\negthinspace\times\negthinspace\mbox{head width},\right.$
$\left.6\negthinspace\times\negthinspace\mbox{head height}\right)$,
with the head at the top centre of the full body. The upper body rectangle
$\ensuremath{\texttt{u}}$ is the upper-half of $\ensuremath{\texttt{b}}$.
The scene region $\ensuremath{\texttt{s}}$ is the whole image containing
the head.
The face region $\ensuremath{\texttt{f}}$ is obtained using the DPM face detector discussed in \S\ref{subsec:face-detection}. For head boxes with no matching detection (e.g. back views and occluded faces), we regress the face area from the head using the face-head displacement statistics on the \emph{train} set. Five respective image regions are illustrated in figure \ref{fig:image-regions}.
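The geometry of the derived regions is simple enough to state explicitly; the sketch below computes $\texttt{b}$, $\texttt{u}$, and $\texttt{s}$ from a head box, assuming $(x, y, w, h)$ boxes with the origin at the top-left corner (the face region $\texttt{f}$ comes from the detector or the regression described above).
\begin{verbatim}
def regions_from_head(head, img_w, img_h):
    # b: 3x head width, 6x head height, head at the top centre of b.
    # u: upper half of b.  s: the whole image containing the head.
    x, y, w, h = head
    bw, bh = 3 * w, 6 * h
    body = (x + w / 2 - bw / 2, y, bw, bh)
    upper = (body[0], body[1], bw, bh / 2)
    scene = (0, 0, img_w, img_h)
    return {"h": head, "b": body, "u": upper, "s": scene}
\end{verbatim}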
Note that the regions overlap with each other, and that depending
on the person's pose they might be completely off. For example,
$\ensuremath{\texttt{b}}$ for a lying person is likely to contain
more background than the actual body. While precise body parts obtained via pose estimation~\cite{EldarPose,cao2017realtime} may contribute to even better performances~\cite{liu_2017_coco}, we choose not to use it for the sake of efficiency. Our simple region selection scheme still leads to the state of the art performances, even compared to methods that do rely on pose estimation (\S\ref{sec:Test-set-results}).
\subsection{\label{subsec:Implementation}Fine-tuning and parameters}
Unless specified otherwise AlexNet is fine-tuned using the PIPA \emph{train} set ($\sim\negmedspace30\mbox{k}$ instances, $\sim\negmedspace1.5\mbox{k}$ identities), cropped at five different image regions, with $300\mbox{k}$ mini-batch iterations (batch size $50$). We refer to the base cue thus obtained as $\ensuremath{\texttt{f}}$, $\ensuremath{\texttt{h}}$, $\ensuremath{\texttt{u}}$, $\ensuremath{\texttt{b}}$, or $\ensuremath{\texttt{s}}$, depending on the cropped region. On the \emph{val} set we found the fine-tuning to provide a systematic $\sim\negmedspace10$ percent points (pp) gain over the non-fine-tuned AlexNet (figure \ref{fig:fine-tuning-effect}). We use the seventh layer (fc7) of AlexNet for each cue ($4\,096$ dimensions).
We train for each identity a one-versus-all SVM classifier with the regularisation parameter $C=1$, which turned out to be an insensitive parameter in our preliminary experiments. As an alternative, a naive nearest neighbour classifier has also been considered. However, on the \emph{val} set the SVMs consistently outperform the NNs by a $\sim\negmedspace10\ \mbox{pp}$ margin.
\begin{figure}
\begin{centering}
\includegraphics[width=0.7\columnwidth]{figures/fine_tunning_effect}\vspace{-0.5em}
\par\end{centering}
\caption{\label{fig:fine-tuning-effect}PIPA \emph{val} set performance of different
cues versus the SGD iterations in fine-tuning.}
\end{figure}
\subsection{\label{sec:how-informative-body-region}How informative is each image
region?}
\begin{table}
\begin{centering}
\par\end{centering}
\begin{centering}
\begin{tabular}{lllcc}
&& Cue & & Accuracy\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
& \vspace{-0.9em}\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
{Chance level} &&&& \hspace{0.5em}1.04\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
Scene (\S\ref{sec:Scene}) && \texttt{$\texttt{s}$} && 27.06\tabularnewline
Body && $\mbox{\ensuremath{\texttt{b}}}$ && 80.81\tabularnewline
Upper body && $\texttt{u}$ && 84.76\tabularnewline
Head && $\texttt{h}$ && 83.88\tabularnewline
Face (\S\ref{sec:Head-or-face}) && $\texttt{f}$ && 74.45\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
Zoom out && $\texttt{f}$ && 74.45\tabularnewline
&& $\texttt{f}\negthinspace+\negthinspace\texttt{h}$ && 84.80\tabularnewline
&& $\texttt{f}\negthinspace+\negthinspace\texttt{h}\negthinspace+\negthinspace\texttt{u}$ && 90.65\tabularnewline
& & $\texttt{f}\negthinspace+\negthinspace\texttt{h}\negthinspace+\negthinspace\texttt{u}\negthinspace+\negthinspace\texttt{b}$ && 91.14\tabularnewline
& & $\texttt{f}\negthinspace+\negthinspace\texttt{h}\negthinspace+\negthinspace\texttt{u}\negthinspace+\negthinspace\texttt{b}\negthinspace+\negthinspace\texttt{s}$ && 91.16\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
Zoom in && \texttt{$\texttt{s}$} && 27.06\tabularnewline
&& $\texttt{s}\negthinspace+\negthinspace\texttt{b}$ && 82.16\tabularnewline
&& $\texttt{s}\negthinspace+\negthinspace\texttt{b}\negthinspace+\negthinspace\texttt{u}$ && 86.39\tabularnewline
& & $\texttt{s}\negthinspace+\negthinspace\texttt{b}\negthinspace+\negthinspace\texttt{u}\negthinspace+\negthinspace\texttt{h}$ && 90.40\tabularnewline
&& $\texttt{s}\negthinspace+\negthinspace\texttt{b}\negthinspace+\negthinspace\texttt{u}\negthinspace+\negthinspace\texttt{h}\negthinspace+\negthinspace\texttt{f}$ && 91.16\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
Head+body && $\texttt{h}\negthinspace+\negthinspace\texttt{b}$ && 89.42\tabularnewline
Full person && $\texttt{P}=\texttt{f}\negthinspace+\negthinspace\texttt{h}\negthinspace+\negthinspace\texttt{u}\negthinspace+\negthinspace\texttt{b}$\hspace*{-1.5em} && 91.14\tabularnewline
Full image && $\texttt{\ensuremath{\mbox{\ensuremath{\texttt{P}}}_{s}}}=\texttt{P}\negthinspace+\negthinspace\texttt{s}$ && 91.16\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
\end{tabular}
\par\end{centering}
\vspace{0.8em}
\caption{\label{tab:validation-set-regions-accuracy}PIPA \emph{val} set accuracy of cues based on different image regions and their concatenations ($+$ means concatenation).}
\end{table}
Table \ref{tab:validation-set-regions-accuracy} shows the \emph{val} set results for each region individually and in combination. Head $\ensuremath{\texttt{h}}$ and upper body $\ensuremath{\texttt{u}}$ are the strongest individual cues. Upper body is more reliable than the full body $\ensuremath{\texttt{b}}$ because the lower body is commonly occluded or cut out of the frame, and thus usually acts as a distractor. Scene $\ensuremath{\texttt{s}}$ is, unsurprisingly, the weakest individual cue, but it still provides useful information for person recognition (far above chance level). Importantly, we see that all cues complement each other, despite overlapping pixels. Overall, our features and combination strategy are effective.
\subsection{\label{sec:how-to-select-regions-otherwise}Empirical justification for the regions $\texttt{fhubs}$}
\begin{figure}
\centering
\footnotesize
\setlength\tabcolsep{0.15em}
\begin{tabular}{ccccc}
{\rotatebox{90}{\hspace{2.5em}Head ($\texttt{h}$) size \hspace{0.0em}}}
&\includegraphics[width=0.23\columnwidth]{figures/regions/regions_h_1_rand} &\includegraphics[width=0.23\columnwidth]{figures/regions/regions_h_2_grid} &\includegraphics[width=0.23\columnwidth]{figures/regions/regions_h_3_grid_o} &\includegraphics[width=0.23\columnwidth]{figures/regions/regions_h_3_grid_d} \\
&&&\includegraphics[width=0.2\columnwidth]{figures/regions/grid_heatmap_original_h_cbar.pdf}
&\includegraphics[width=0.2\columnwidth]{figures/regions/grid_heatmap_day_h_cbar.pdf} \\
{\rotatebox{90}{\hspace{1.5em}Upper body ($\texttt{u}$) size \hspace{0.0em}}}
&\includegraphics[width=0.23\columnwidth]{figures/regions/regions_u_1_rand} &\includegraphics[width=0.23\columnwidth]{figures/regions/regions_u_2_grid} &\includegraphics[width=0.23\columnwidth]{figures/regions/regions_u_3_grid_o} &\includegraphics[width=0.23\columnwidth]{figures/regions/regions_u_3_grid_d} \\
&&&\includegraphics[width=0.2\columnwidth]{figures/regions/grid_heatmap_original_u_cbar.pdf} &\includegraphics[width=0.2\columnwidth]{figures/regions/grid_heatmap_day_u_cbar.pdf} \\
& r-patch & sw & Original split & Day split \\
&&&\multicolumn{2}{c}{sw results}
\end{tabular}
\vspace{0em}
\caption{\label{fig:patch-analysis-visualisation}Regions considered for analysis. We consider cues from head ($\texttt{h}$) as well as upper body ($\texttt{u}$) sized patches that are either chosen randomly (r-patch, column 1) or in sliding window manner (sw, column 2). Recognition results for sw are visualised in Original (column 3) and Day (column 4) splits. }
\end{figure}
In order to further justify the choice of the five image regions ($\texttt{f}\texttt{h}\texttt{u}\texttt{b}\texttt{s}$), we compare their informativeness against three baseline types of cues: (1) \emph{random patch} (r-patch), (2) \emph{sliding window} (sw), and (3) \emph{random initialisation} (r-init). For each type, we consider head sized ($\texttt{h}$, ground truth head size) and upper body sized ($\texttt{u}$, $3\times3$ of head size) regions. See figure \ref{fig:patch-analysis-visualisation} columns 1 and 2 for r-patch and sw.
Specifically, (1) for head sized r-patch we sample regions from within the original upper body region ($\texttt{u}$); for upper body sized r-patch, we sample from within $\pm 1$ head away from the original upper body region. (2) For head sized sw, we set the stride as half of $\texttt{h}$ width/height, while for upper body sized ones, we set the stride as $\texttt{h}$ width/height themselves. The r-patch and sw are fixed across person instances with respect to the respective head locations. (3) The r-init are always based on the original head and upper body regions, but the features are trained with different random initialisations.
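As an illustration, the head sized sliding-window grid can be generated as below (stride of half the head width/height, centred on the annotated head); the grid extent \texttt{n\_steps} is a free parameter here and an assumption.
\begin{verbatim}
def head_sized_windows(head, n_steps=2):
    # Stride is half the head width/height; the grid is centred
    # on the annotated head box (x, y, w, h).
    x, y, w, h = head
    return [(x + i * w / 2, y + j * h / 2, w, h)
            for i in range(-n_steps, n_steps + 1)
            for j in range(-n_steps, n_steps + 1)]
\end{verbatim}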
The results for the sliding window regions are shown in figure \ref{fig:patch-analysis-visualisation}, columns 3 and 4, under the Original and Day splits, respectively. For all sizes and domain gaps, the original head region is the most informative one. The informativeness of the head region is amplified under the Day split (larger domain gap), with a larger performance gap between the head and context regions -- clothing and event changes in the context regions hamper identification. \S\ref{sec:challenges-analysis} contains a more in-depth analysis of this point.
\begin{figure*}
\centering
\hfill
\subfloat[Original]{
\includegraphics[width=0.8\columnwidth]{figures/regions/comb_strategy_plot_original.pdf}
}\hfill
\subfloat[Day]{
\includegraphics[width=0.8\columnwidth]{figures/regions/comb_strategy_plot_day.pdf}
}\hfill
\vspace{0em}
\caption{\label{fig:plot-region-choice}Comparison of our region choice $\texttt{fhubs}$ against three baseline region types, random patch (r-patch), sliding window (sw), and random initialisation (r-init), based on head ($\texttt{h}$) and upper body ($\texttt{u}$) sized regions. We report the \emph{val} set accuracy against the number of cues used. The combination orders for r-patch and r-init are random; for sw, we combine regions close to the head first.}
\end{figure*}
We compare the three baseline region types against our choice of regions $\texttt{f}\texttt{h}\texttt{u}\texttt{b}\texttt{s}$ quantitatively. See figure \ref{fig:plot-region-choice} for the plot showing the trade-off between accuracy and the number of cues used. For $\texttt{f}\texttt{h}\texttt{u}\texttt{b}\texttt{s}$, we progressively combine from $\texttt{f}$ to $\texttt{s}$ (5 cues maximum). For random patch (r-patch) and random initialisation (r-init), the combination order is randomised. For sliding window (sw), we combine the regions nearest to the head first, and then expand the combination diameter. Note that for every baseline region type (r-patch/init and sw), we consider combining head and upper body sized boxes together ($\texttt{hu}$).
From figure \ref{fig:plot-region-choice} we observe, most importantly, that our choice of regions $\texttt{fhubs}$ gives the best performance-complexity (measured in the number of cues) trade-off among the regions and combinations considered, under both small and large domain gaps (Original and Day splits). Amongst the baseline types, sw and r-init beat r-patch in general; it is important to focus on the head region, the most identity-relevant part. In the Original split, it helps to combine $\texttt{h}$ and $\texttt{u}$ for all baseline types; it is important to combine multi-scale cues. However, the best trade-off is given by $\texttt{fhubs}$, which samples from diverse regions and scales.
\subsubsection*{Conclusion}
Our choice of the regions $\texttt{fhubs}$ efficiently captures diverse identity information at diverse scales (only five cues). $\texttt{fhubs}$ beats the baseline region selection methods, including random patches, sliding windows, and ensembles of randomly initialised cues, in terms of the performance-complexity (number of cues) trade-off.
\subsection{\label{sec:Scene}Scene ($\textnormal{\texttt{s}}$)}
\begin{table}
\begin{centering}
\begin{tabular}{lllcc}
&& Method && Accuracy\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
& \vspace{-0.9em}\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
Gist && \texttt{$\texttt{s}_{\texttt{gist}}$} && 21.56\tabularnewline
PlacesNet scores && \texttt{$\texttt{s}_{\texttt{places 205}}$} && 21.44\tabularnewline
raw PlacesNet && \texttt{$\texttt{s}_{0\texttt{ places}}$} && 27.37\tabularnewline
PlacesNet fine-tuned && \texttt{$\texttt{s}_{3\texttt{ places}}$} && 25.62\tabularnewline
raw AlexNet && \texttt{$\texttt{s}_{0}$} && 26.54\tabularnewline
AlexNet fine-tuned && \texttt{$\texttt{s}=\texttt{s}_{3}$} && 27.06\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
\end{tabular}
\par\end{centering}
\vspace{0.8em}
\caption{\label{tab:validation-set-scene}PIPA \emph{val} set accuracy of different scene cues. See descriptions in \S\ref{sec:Scene}.}
\end{table}
Scene region is the whole image containing the person of interest.
Other than a fine-tuned AlexNet we considered multiple feature types
to encode the scene information. \texttt{$\texttt{s}_{\texttt{gist}}$}:
using the Gist descriptor \cite{Oliva2001IjcvGist} ($512$ dimensions).
\texttt{$\texttt{s}_{0\texttt{ places}}$}: instead of using AlexNet
pre-trained on ImageNet, we consider an AlexNet (PlacesNet) pre-trained
on $205$ scene categories of the ``Places Database'' \cite{Zhou2014NipsPlaces}
($\sim\negmedspace2.5$ million images). \texttt{$\texttt{s}_{\texttt{places 205}}$}:
Instead of the $4\,096$ dimensions PlacesNet feature vector, we also
consider using the score vector for each scene category ($205$ dimensions).
$\texttt{s}_{0}$,$\texttt{s}_{3}$: finally we consider using AlexNet
in the same way as for body or head (with zero or $300\mbox{k}$ iterations
of fine-tuning on the PIPA person recognition training set). \texttt{$\texttt{s}_{3\texttt{ places}}$: $\texttt{s}_{0\texttt{ places}}$
}fine-tuned for person recognition.
\subsubsection*{Results}
Table \ref{tab:validation-set-scene} compares the different alternatives
on the \emph{val} set. The Gist descriptor \texttt{$\texttt{s}_{\texttt{gist}}$}
performs only slightly below the convnet options (we also tried the
$4\,608$ dimensional version of Gist, obtaining worse results).
Using the raw (and longer) feature vector of \texttt{$\texttt{s}_{0\texttt{ places}}$}
is better than the class scores of \texttt{$\texttt{s}_{\texttt{places 205}}$}.
Interestingly, in this context pre-training for places classification
is better than pre-training for objects classification (\texttt{$\texttt{s}_{0\texttt{ places}}$}
versus \texttt{$\texttt{s}_{0}$}). After fine-tuning $\texttt{s}_{3}$
reaches a similar performance as \texttt{$\texttt{s}_{0\texttt{ places}}$}.\\
Experiments trying different combinations indicate that there is little
complementarity between these features. Since there is not a large
difference between \texttt{$\texttt{s}_{0\texttt{ places}}$} and
\texttt{$\texttt{s}_{3}$}, for the sake of simplicity we use \texttt{$\texttt{s}_{3}$}
as our scene cue \texttt{$\texttt{s}$} in all other experiments.
\subsubsection*{Conclusion}
Scene $\ensuremath{\texttt{s}}$ by itself, albeit weak, can obtain
results far above the chance level. After fine-tuning, scene recognition
as pre-training surrogate task \cite{Zhou2014NipsPlaces} does not
provide a clear gain over (ImageNet) object recognition.
\subsection{\label{sec:Head-or-face}Head ($\textnormal{\texttt{h}}$) or face
($\textnormal{\ensuremath{\texttt{f}}}$)?}
A large portion of work on face recognition focuses on the face region
specifically. In the context of photo albums, we aim to quantify how
much information is available in the head versus the face region. As discussed in \S\ref{subsec:face-detection}, we obtain the face regions $\ensuremath{\texttt{f}}$ from the DPM face detector \cite{Mathias2014Eccv}.
\subsubsection*{Results}
There is a large gap of $\sim\negmedspace10$ percent points in performance between $\texttt{f}$ and $\texttt{h}$ in table \ref{tab:validation-set-regions-accuracy}, highlighting the importance of including the hair and background around the face.
\subsubsection*{Conclusion}
Using $\texttt{h}$ is more effective than $\texttt{f}$, but the $\texttt{f}$ result still shows fair performance. As with the other body cues, $\texttt{h}$ and $\texttt{f}$ are complementary; we suggest using them together.
\begin{table}
\begin{centering}
\begin{tabular}{lllcc}
&& Method && Accuracy\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
& \vspace{-0.9em} \tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
More data (\S\ref{sec:Additional-training-data}) && $\texttt{h}$ && 83.88\tabularnewline
&& $\texttt{h}+\mbox{\ensuremath{\texttt{h}}}_{\texttt{cacd}}$ && 84.88\tabularnewline
&& $\texttt{h}+\mbox{\ensuremath{\texttt{h}}}_{\texttt{casia}}$ && 86.08\tabularnewline
&& $\texttt{h}+\mbox{\ensuremath{\texttt{h}}}_{\texttt{casia}}+\mbox{\ensuremath{\texttt{h}}}_{\texttt{cacd}}$\hspace*{-1.5em} && 86.26\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
Attributes (\S\ref{sec:Attributes}) & &$\mbox{\ensuremath{\texttt{h}}}_{\texttt{pipa11m}}$ && 74.63\tabularnewline
&& $\mbox{\ensuremath{\texttt{h}}}_{\texttt{pipa11}}$ & & 81.74\tabularnewline
&& $\texttt{h}+\mbox{\ensuremath{\texttt{h}}}_{\texttt{pipa11}}$ && 85.00\tabularnewline
\arrayrulecolor{gray}
\cline{3-3} \cline{5-5}
\arrayrulecolor{black} && $\mbox{\ensuremath{\texttt{u}}}_{\texttt{peta5}}$ && 77.50\tabularnewline
&& $\texttt{u}+\mbox{\ensuremath{\texttt{u}}}_{\texttt{peta5}}$ && 85.18\tabularnewline
\arrayrulecolor{gray}
\cline{3-3} \cline{5-5}
\arrayrulecolor{black} && $\mbox{\ensuremath{\texttt{A}}}=\mbox{\ensuremath{\texttt{h}}}_{\texttt{pipa11}}+\mbox{\ensuremath{\texttt{u}}}_{\texttt{peta5}}$\hspace*{-1.5em} && 86.17\tabularnewline
&& $\texttt{h}+\texttt{u}$ && 85.77\tabularnewline
&& $\texttt{h}+\texttt{u}+\ensuremath{\texttt{A}}$ && 90.12\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
\texttt{naeil} (\S\ref{sec:conf-naeil}) && \texttt{naeil}\cite{oh2015person} && 91.70\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
\end{tabular}
\par\end{centering}
\vspace{0.8em}
\caption{\label{tab:validation-set-extended-data-accuracy}PIPA \emph{val} set accuracy of different cues based on extended data. See \S\ref{sec:Additional-training-data}, \S\ref{sec:Attributes}, and \S\ref{sec:conf-naeil} for details.}
\end{table}
\subsection{\label{sec:Additional-training-data}Additional training data ($\textnormal{\ensuremath{\mbox{\ensuremath{\texttt{h}}}_{\texttt{cacd}}},\,\ensuremath{\mbox{\ensuremath{\texttt{h}}}_{\texttt{casia}}}}$)}
It is well known that deep learning architectures benefit from additional
data. DeepFace \cite{Taigman2014CvprDeepFace} used by \texttt{PIPER} \cite{Zhang2015CvprPiper} is trained over $4.4\cdot10^{6}$
faces of $4\cdot10^{3}$ persons (the private SFC dataset \cite{Taigman2014CvprDeepFace}).
In comparison, our cues are trained over ImageNet and PIPA's $29\cdot10^{3}$
faces of $1.4\cdot10^{3}$ persons. To measure the effect of training
on larger data we consider fine-tuning using two open source face recognition
datasets: CASIA-WebFace (CASIA) \cite{Yi2014ArxivLearningFace} and
the ``Cross-Age Reference Coding Dataset'' (CACD) \cite{Chen2014Eccv}.
CASIA contains $0.5\cdot10^{6}$ images of $10.5\cdot10^{3}$ persons
(mainly actors and public figures). When fine-tuning AlexNet
over these identities (using the head area $\mbox{\ensuremath{\texttt{h}}}$),
we obtain the $\mbox{\ensuremath{\texttt{h}}}_{\texttt{casia}}$ cue.
CACD contains $160\cdot10^{3}$ faces of $2\cdot10^{3}$ persons
with varying ages. Although smaller in total number of images than CASIA, CACD features a greater number of samples per identity ($\sim\negmedspace2\times$).
The $\mbox{\ensuremath{\texttt{h}}}_{\texttt{cacd}}$ cue is built
via the same procedure as $\mbox{\ensuremath{\texttt{h}}}_{\texttt{casia}}$.
\subsubsection*{Results}
See the top part of table \ref{tab:validation-set-extended-data-accuracy} for the results. $\texttt{h}+\mbox{\ensuremath{\texttt{h}}}_{\texttt{cacd}}$
and $\texttt{h}+\mbox{\ensuremath{\texttt{h}}}_{\texttt{casia}}$ improve
over $\texttt{h}$ (1.0 and 2.2 pp, respectively). Extra convnet training data seems to help. However, due to the mismatch in data distribution,
$\mbox{\ensuremath{\texttt{h}}}_{\texttt{cacd}}$ and $\mbox{\ensuremath{\texttt{h}}}_{\texttt{casia}}$
on their own are about $\sim\negmedspace5\ \mbox{pp}$ worse than
$\texttt{h}$.
\subsubsection*{Conclusion}
Extra convnet training data helps, even when it comes from a different type of photos.
\subsection{\label{sec:Attributes}Attributes ($\textnormal{\texttt{\ensuremath{\mbox{\ensuremath{\texttt{h}}}_{\texttt{pipa11}}},\,\ensuremath{\mbox{\ensuremath{\texttt{u}}}_{\texttt{peta5}}}}}$)}
Although the overall appearance might change from day to day, one could expect that stable, long-term attributes provide a means for recognition. We build attribute cues by fine-tuning AlexNet features not for the person recognition task (as for all other cues), but rather for the attribute prediction surrogate task. We consider two sets of attributes, one on the head region and the other on the upper body region.
We have annotated identities in the PIPA \emph{train} and \emph{val} sets ($1409+366$ in total) with five long term attributes: age, gender, glasses, hair colour, and hair length (see table \ref{tab:attributes-details} for details). We build $\mbox{\ensuremath{\texttt{h}}}_{\texttt{pipa11}}$ by fine-tuning AlexNet features for the task of head attribute prediction.
For fine-tuning the attribute cue $\mbox{\ensuremath{\texttt{h}}}_{\texttt{pipa11}}$, we consider two approaches: training a single
network for all attributes as a multi-label classification problem
with the sigmoid cross entropy loss, or tuning one network per attribute
separately and concatenating the feature
vectors. The results on the \emph{val} set indicate the latter
($\mbox{\ensuremath{\texttt{h}}}_{\texttt{pipa11}}$) performs better
than the former ($\mbox{\ensuremath{\texttt{h}}}_{\texttt{pipa11m}}$).
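For concreteness, the multi-label variant can be sketched as follows; the PyTorch-style code, the linear head on top of fc7 (instead of fine-tuning the full AlexNet), and the placeholder number of attribute targets are all assumptions for illustration.
\begin{verbatim}
import torch.nn as nn

N_TARGETS = 11   # placeholder for the number of attribute targets

# Linear head over fc7 features, shown instead of the full AlexNet
# fine-tuning only to keep the sketch short.
attribute_head = nn.Linear(4096, N_TARGETS)
criterion = nn.BCEWithLogitsLoss()   # sigmoid cross-entropy

def multilabel_loss(fc7, targets):
    # fc7: (batch, 4096); targets: (batch, N_TARGETS) in {0, 1}
    return criterion(attribute_head(fc7), targets.float())
\end{verbatim}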
For the upper body attribute features, we use the ``PETA pedestrian attribute dataset''
\cite{Deng2014AcmPeta}. The dataset originally has $105$ attribute annotations for $19\cdot10^{3}$ full-body pedestrian images. We chose five long-term attributes for our study: gender, age (young adult, adult), black hair, and short hair
(details in table \ref{tab:attributes-details}). We choose to use the upper body $\texttt{u}$ rather than the full body $\texttt{b}$ for attribute prediction -- the crops are much less noisy. We train the AlexNet feature on the upper body crops of PETA images with the attribute prediction task to obtain the cue $\mbox{\ensuremath{\texttt{u}}}_{\texttt{peta5}}$.
\subsubsection*{Results}
See results in table \ref{tab:validation-set-extended-data-accuracy}. Both PIPA ($\mbox{\ensuremath{\texttt{h}}}_{\texttt{pipa11}}$)
and PETA ($\mbox{\ensuremath{\texttt{u}}}_{\texttt{peta5}}$) annotations
behave similarly ($\sim\negmedspace1\ \mbox{pp}$ gain over $\texttt{h}$
and $\texttt{u}$), and show complementarity
($\sim\negmedspace5\ \mbox{pp}$ gain over $\texttt{h}\negmedspace+\negmedspace\texttt{u}$).
Amongst the attributes considered, gender contributes the most to
improving recognition accuracy (for both attribute datasets).
\subsubsection*{Conclusion}
Adding attribute information improves the performance.
\begin{table}
\begin{centering}
\begin{tabular}{lllll}
Attribute && Classes& & Criteria\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
& \vspace{-0.9em}\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
Age && Infant && {\footnotesize{}Not walking (due to young age)}\tabularnewline
&& Child & &{\footnotesize{}Not fully grown body size}\tabularnewline
&& Young Adult && {\footnotesize{}Fully grown \& Age $<45$}\tabularnewline
&& Middle Age && {\footnotesize{}$45\leq\mbox{Age}\leq60$}\tabularnewline
&& Senior && {\footnotesize{}Age $\geq60$}\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
Gender && Female && {\footnotesize{}Female looking}\tabularnewline
&& Male && {\footnotesize{}Male looking}\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
Glasses && None && {\footnotesize{}No eyewear}\tabularnewline
&& Glasses && {\footnotesize{}Transparent glasses}\tabularnewline
&& Sunglasses && {\footnotesize{}Glasses with eye occlusion}\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
Hair colour && Black && {\footnotesize{}Black}\tabularnewline
&& White && {\footnotesize{}Any hint of whiteness}\tabularnewline
&& Others && {\footnotesize{}Neither of the above}\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
Hair length && No hair && {\footnotesize{}Absolutely no hair on the scalp}\tabularnewline
&& Less hair && {\footnotesize{}Hairless for $>\frac{1}{2}$ upper scalp}\tabularnewline
&& Short hair && {\footnotesize{}When straightened, $<10$ cm}\tabularnewline
&& Med hair && {\footnotesize{}When straightened, $<$chin level}\tabularnewline
&& Long hair && {\footnotesize{}When straightened, $>$chin level}\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-5}
\end{tabular}
\par\end{centering}
\vspace{0.8em}
\caption{\label{tab:attributes-details}PIPA attributes details.}
\end{table}
\subsection{\label{sec:conf-naeil}Conference version final model ($\textnormal{\ensuremath{\mbox{\ensuremath{\texttt{naeil}}}}}$) \cite{oh2015person}}
The final model in the conference version of this paper combines five vanilla regional cues ($\texttt{\ensuremath{\mbox{\ensuremath{\texttt{P}}}_{s}}}=\texttt{P}\negthinspace+\negthinspace\texttt{s}$), two head cues trained with extra data ($\textnormal{\ensuremath{\mbox{\ensuremath{\texttt{h}}}_{\texttt{cacd}}}$, $\ensuremath{\mbox{\ensuremath{\texttt{h}}}_{\texttt{casia}}}}$), and ten attribute cues ($\mbox{\ensuremath{\texttt{h}}}_{\texttt{pipa11}}$, $\mbox{\ensuremath{\texttt{u}}}_{\texttt{peta5}}$), resulting in 17 cues in total. We name this method $\texttt{naeil}$ \cite{oh2015person}\footnote{``naeil'', \includegraphics[height=0.8em]{figures/naeil_in_korean}, means ``tomorrow'' and sounds like ``nail''.}.
\subsubsection*{Results}
See table \ref{tab:validation-set-extended-data-accuracy} for the results. \texttt{naeil}, by naively combining all the cues considered so far, achieves the best result (91.70\%) on the \emph{val} set.
\subsubsection*{Conclusion}
Cues considered thus far are complementary, and the combined model \texttt{naeil} is effective.
\subsection{\label{sec:deepid}DeepID2+ face recognition module ($\textnormal{\ensuremath{\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}}}$) \cite{Sun2014ArxivDeepId2plus}}
Face recognition performance has improved significantly in recent years with better architectures and larger open source datasets \cite{Huang2007Lfw,Zhu2013Iccv,Taigman2014CvprDeepFace,Sun2014ArxivDeepId2plus,Ding2015Arxiv,Schroff2015ArxivFaceNet,Parkhi15,chen2016unconstrained,wen2016discriminative}. In this section, we study how much face recognition helps person recognition. While DeepFace \cite{Taigman2014CvprDeepFace}, used by \texttt{PIPER} \cite{Zhang2015CvprPiper}, would have enabled a more direct comparison against \texttt{PIPER}, it is not publicly available. We thus choose the DeepID2+ face recogniser \cite{Sun2014ArxivDeepId2plus}. Face recognition technology is still improving quickly, and ever larger face datasets are being released -- the analysis in this section is therefore likely to underestimate what current and future face recognisers can contribute.
The DeepID2+ network is a siamese neural network that takes 25 different crops of the head as input and is trained with a joint verification-identification loss. The training is based on large databases consisting of CelebFaces+\cite{sun2014deep}, WDRef\cite{Chen2012}, and LFW\cite{Huang2007Lfw} -- totalling $2.9\cdot10^{5}$ faces of $1.2\cdot10^{4}$ persons. At test time, it ensembles the predictions from the 25 crop regions obtained via facial landmark detection. The resulting output is a $1\,024$-dimensional head feature that we denote $\textnormal{\ensuremath{\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}}}$.
Since the DeepID2+ pipeline begins with facial landmark detection, the DeepID2+ features are not available for instances with, e.g., occluded or backward-facing heads. As a result, only $52\,709$ out of $63\,188$ instances ($83.42\%$) have the DeepID2+ features available, and we use zero vectors as features for the rest.
\subsubsection*{Results - Original split}
See table \ref{tab:deepid-val} for the \emph{val} set results for $\textnormal{\ensuremath{\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}}}$ and related combinations. $\textnormal{\ensuremath{\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}}}$ in itself is weak ($68.46\%$) compared to the vanilla head feature $\texttt{h}$, due to the missing features for the back-views. However, when combined with $\texttt{h}$, the performance reaches $85.86\%$ by exploiting information from strong DeepID2+ face features and the viewpoint robust $\texttt{h}$ features.
Since the feature dimensions are not homogeneous ($4\,096$ versus $1\,024$), we try $L_{2}$ normalisation of $\texttt{h}$ and $\textnormal{\ensuremath{\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}}}$ before concatenation ($\texttt{h}\oplus\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$). This gives a further $\sim\negmedspace3\ \mbox{pp}$ boost ($88.74\%$) -- better than $\texttt{h}+\mbox{\ensuremath{\texttt{h}}}_{\texttt{cacd}}+\mbox{\ensuremath{\texttt{h}}}_{\texttt{casia}}$, the previous best model on the head region ($86.26\%$).
\subsubsection*{Results - Album, Time and Day splits}
Table \ref{tab:deepid-val} also shows results for the Album, Time, and Day splits on the \emph{val} set. While the general head cue \texttt{h} degrades significantly on the Day split, $\texttt{h}_{\texttt{deepid}}$ is a reliable cue with roughly the same level of recognition in all four splits ($60\negmedspace\sim\negmedspace70\%$). This is not surprising, since face is largely invariant over time, compared to hair, clothing, and event.
On the other splits as well, the complementarity of $\texttt{h}$ and $\textnormal{\ensuremath{\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}}}$ is realised only when they are $L_2$ normalised before concatenation. The $L_2$ normalised concatenation $\texttt{h}\oplus\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$ envelops the performance of the individual cues on all splits.
\subsubsection*{Conclusion}
DeepID2+, with a face-specific architecture and loss and a massive amount of training data, contributes highly useful information for the person recognition task. However, being able to recognise only face-visible instances, it needs to be combined with the orientation-robust $\texttt{h}$ to ensure the best performance. Unsurprisingly, having a specialised face recogniser helps more in setups with a larger appearance gap between training and testing samples (Album, Time, and Day splits). Better face recognisers will further improve the results in the future.
\begin{table}
\begin{centering}
\begin{tabular}{lccccc}
Method && Original & Album & Time & Day\tabularnewline
\cline{1-1} \cline{3-6}
&\vspace{-1em}\tabularnewline
\cline{1-1} \cline{3-6}
$\texttt{h}$ && 83.88 & 77.90 & 70.38 & 40.71\tabularnewline
$\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$ && 68.46 & 66.91 & 64.16 & 60.46\tabularnewline
$\texttt{h}+\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$ && 85.86 & 80.54 & 73.31 & 47.86\tabularnewline
$\texttt{h}\oplus\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$ && 88.74 & 85.72 & 80.88 & 66.91\tabularnewline
\cline{1-1} \cline{3-6}
$\texttt{naeil}$\cite{oh2015person} && 91.70 & 86.37 & 80.66 & 49.21\tabularnewline
$\texttt{naeil}+\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$ && 92.11 & 86.77 & 81.08 & 51.02\tabularnewline
$\texttt{naeil2}$ && {93.42} & {89.95} & {85.87} & {70.58}\tabularnewline
\cline{1-1} \cline{3-6}
\end{tabular}
\par\end{centering}
\vspace{0.8em}
\caption{\label{tab:deepid-val}PIPA \emph{val} set accuracy of methods involving $\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$. The optimal combination weights are $\lambda^{\star}=[0.60\;1.05\;1.00\;1.50]$ for Original, Album, Time, and Day splits, respectively.\protect \\
$\oplus$ means $L_{2}$ normalisation before concatenation.}
\end{table}
\subsection{\label{sec:naeil+deepid}Combining $\texttt{naeil}$ with $\texttt{h}_\texttt{deepid}$ ($\textnormal{\ensuremath{\mbox{\ensuremath{\texttt{naeil2}}}}}$)}
We build the final model of the journal version, namely $\texttt{naeil2}$, by combining $\texttt{naeil}$ and $\texttt{h}_\texttt{deepid}$. As seen in \S\ref{sec:deepid}, naive concatenation is likely to fail due to the even larger difference in dimensionality ($4\,096\times 17=69\,632$ versus $1\,024$). We therefore $L_{2}$-normalise $\texttt{naeil}$ and $\texttt{h}_\texttt{deepid}$ and then perform a weighted concatenation:
\begin{equation}
\label{eq:weighted-sum}
\texttt{naeil}\oplus_\lambda\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}} = \frac{\texttt{naeil}}{||\texttt{naeil}||_2}+\lambda\cdot\frac{\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}}{||\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}||_2},
\end{equation}
where $\lambda>0$ is a weighting parameter and $+$ denotes concatenation.
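For illustration, a minimal Python sketch of this weighted, $L_2$-normalised concatenation (with placeholder feature vectors; only the dimensions are taken from the text) reads:
\begin{verbatim}
import numpy as np

# Sketch of naeil (+)_lambda h_deepid: L2-normalise each block and scale the
# face block by lambda before concatenation. The random vectors below are
# placeholders for the actual cue features.
def l2_normalise(v, eps=1e-12):
    return v / (np.linalg.norm(v) + eps)

def weighted_concat(feat_naeil, feat_deepid, lam):
    return np.concatenate([l2_normalise(feat_naeil),
                           lam * l2_normalise(feat_deepid)])

feat_naeil = np.random.randn(17 * 4096)   # 17 cues x 4096 dims
feat_deepid = np.random.randn(1024)       # 1024-d face feature
x = weighted_concat(feat_naeil, feat_deepid, lam=1.5)
\end{verbatim}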
\subsubsection*{Optimisation of $\lambda$ on \emph{val} set}
$\lambda$ determines how much relative weight is given to $\texttt{h}_\texttt{deepid}$. As we have seen in \S\ref{sec:deepid}, the additional contribution from $\texttt{h}_\texttt{deepid}$ differs across splits. In this section, we find $\lambda^\star$, the optimal value of $\lambda$, for each split on the \emph{val} set. The resulting combination of $\texttt{naeil}$ and $\texttt{h}_\texttt{deepid}$ is our final method, $\texttt{naeil2}$. $\lambda^\star$ is searched over the equally spaced grid $\{0,0.05,0.1,\cdots,3\}$.
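In sketch form, the search amounts to the following (the helper \texttt{val\_accuracy} is a placeholder for building the weighted features, training the classifier, and scoring the \emph{val} split):
\begin{verbatim}
import numpy as np

# Sketch of the lambda grid search on the val set.
grid = np.round(np.arange(0.0, 3.0001, 0.05), 2)

def val_accuracy(lam):
    # placeholder: train on the weighted features and evaluate on val;
    # the dummy curve below merely makes the sketch runnable
    return -abs(lam - 1.05)

lam_star = max(grid, key=val_accuracy)
\end{verbatim}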
See figure \ref{fig:lambda-deepid+naeil} for the \emph{val} set performance of $\texttt{naeil}\oplus_\lambda\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$ with varying values of $\lambda$. The optimal weights are found at $\lambda^{\star}=[0.60\;1.05\;1.00\;1.50]$ for the Original, Album, Time, and Day splits, respectively. The relative importance of $\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$ is greater on splits with larger appearance changes. For each split, we denote by $\texttt{naeil2}$ the combination of $\texttt{naeil}$ and $\texttt{h}_\texttt{deepid}$ with the optimal weight.
Note that the performance curve is rather stable for $\lambda\geq1.5$ in all splits. In practice, when the expected amount of appearance change of the subjects is unknown, our advice would be to choose $\lambda\approx 1.5$. Finally, we remark that the weighted sum can also be applied to the $17$ cues in $\texttt{naeil}$; finding the optimal cue weights is left as future work.
\subsubsection*{Results}
See table \ref{tab:deepid-val} for the results of combining $\texttt{naeil}$ and $\texttt{h}_\texttt{deepid}$. Naively concatenated, $\texttt{naeil}+\texttt{h}_\texttt{deepid}$ performs worse than $\texttt{h}_\texttt{deepid}$ on the Day split ($51.02\%$ vs $60.46\%$). However, the weighted combination \texttt{naeil2} achieves the best performance on all four splits.
\subsubsection*{Conclusion}
When combining $\texttt{naeil}$ and $\texttt{h}_\texttt{deepid}$, a weighted combination is desirable, and the resulting final model \texttt{naeil2} beats all the previously considered models on all four splits.
\begin{figure}
\begin{centering}
\hspace*{\fill}\includegraphics[width=0.7\columnwidth]{figures/lambda}\hspace*{\fill}\vspace{0.5em}
\par\end{centering}
\caption{\label{fig:lambda-deepid+naeil}PIPA \emph{val} set accuracy of $\texttt{naeil}\oplus_\lambda\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$
for varying values of $\lambda$. Round dots denote the maximal \emph{val} accuracy.}
\end{figure}
\section{\label{sec:Test-set-results}PIPA test set results and comparison}
\begin{table*}
\begin{centering}
\begin{tabular}{cclcllcllccccc}
&& && \multicolumn{2}{c}{Special modules} && \multicolumn{2}{c}{General features} && \tabularnewline
\cline{5-6} \cline{8-9}
&& Method && Face rec. & Pose est. && Data & Arch. && Original & Album & Time & Day\tabularnewline
\cline{3-3} \cline{5-6} \cline{8-9} \cline{11-14}
&\vspace{-1em}\tabularnewline
\cline{3-3} \cline{5-6} \cline{8-9} \cline{11-14}
&& Chance level && \ding{55} & \ding{55} && $-$ & $-$ && \hspace{0.5em}0.78 & \hspace{0.5em}0.89 & \hspace{0.5em}0.78 & \hspace{0.5em}1.97\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-6} \cline{8-9} \cline{11-14}
\multirow{6}{*}{\rotatebox{90}{Head\hspace{0.0em}}}&& $\mbox{\ensuremath{\texttt{h}}}_{\texttt{rgb}}$ && \ding{55} & \ding{55} && $-$ & $-$ && 33.77 & 27.19 & 16.91 & \hspace{0.5em}6.78\tabularnewline
&& $\mbox{\ensuremath{\texttt{h}}}$ && \ding{55} & \ding{55} && I+P & Alex && 76.42 & 67.48 & 57.05 & 36.48\tabularnewline
&& $\texttt{h}\negthinspace+\negthinspace\mbox{\ensuremath{\texttt{h}}}_{\texttt{casia}}\negthinspace+\negthinspace\mbox{\ensuremath{\texttt{h}}}_{\texttt{cacd}}$ && \ding{55} & \ding{55} && I+P+CC & Alex && 80.32 & 72.82 & 63.18 & 45.45\tabularnewline
&& \texttt{\small{}$\mbox{\ensuremath{\texttt{h}_{\texttt{deepid}}}}$} && DeepID2+\cite{Sun2014ArxivDeepId2plus} & \ding{55} && $-$ & $-$ && 68.06 & 65.49 & 60.69 & 61.49\tabularnewline
&& $\texttt{h}\oplus\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$ && DeepID2+\cite{Sun2014ArxivDeepId2plus} & \ding{55} && I+P & Alex && 85.94 & 81.95 & 75.85 & 66.00\tabularnewline
&& \texttt{DeepFace}\cite{Zhang2015CvprPiper} && DeepFace\cite{Taigman2014CvprDeepFace} & \ding{55} && $-$ & $-$ && 46.66 & $-$ & $-$ & $-$\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-6} \cline{8-9} \cline{11-14}
\multirow{7}{*}{\rotatebox{90}{Body\hspace{0.0em}}}&& $\mbox{\ensuremath{\texttt{b}}}$ && \ding{55} & \ding{55} && I+P & Alex && 69.63 & 59.29 & 44.92 & 20.38\tabularnewline
&& $\texttt{h}\negthinspace+\negthinspace\texttt{b}$ && \ding{55} & \ding{55} && I+P & Alex && 83.36 & 73.97 & 63.03 & 38.15\tabularnewline
&& $\texttt{P}=\texttt{f}\negthinspace+\negthinspace\texttt{h}\negthinspace+\negthinspace\texttt{u}\negthinspace+\negthinspace\texttt{b}$ && \ding{55} & \ding{55} && I+P & Alex && 85.33 & 76.49 & 66.55 & 42.17\tabularnewline
&& \texttt{GlobalModel}\cite{Zhang2015CvprPiper} && \ding{55} & \ding{55} && I+P & Alex && 67.60 & $-$ & $-$ & $-$\tabularnewline
&& \texttt{PIPER}\cite{Zhang2015CvprPiper} && DeepFace\cite{Taigman2014CvprDeepFace} & Poselets\cite{Bourdev2009IccvPoselets} && I+P & Alex && 83.05 & $-$ & $-$ & $-$\tabularnewline
&& \texttt{Pose}\cite{kumar2017pose} && \ding{55} & Pose group && I+P+V & Alex && 89.05 & 82.37 & 74.84 & 56.73\tabularnewline
&& \texttt{COCO}\cite{liu_2017_coco} && \ding{55} & Part det.\cite{ren15fasterrcnn} && I+P & Goog,Res && \textbf{92.78} & 83.53 & 77.68 & 61.73\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-6} \cline{8-9} \cline{11-14}
\multirow{4}{*}{\rotatebox{90}{Image\hspace{0.0em}}} && $\texttt{\ensuremath{\mbox{\ensuremath{\texttt{P}}}_{s}}}=\texttt{P}\negthinspace+\negthinspace\texttt{s}$ && \ding{55} & \ding{55} && I+P & Alex && 85.71 & 76.68 & 66.55 & 42.31\tabularnewline
&& $\texttt{naeil}=\texttt{\ensuremath{\mbox{\ensuremath{\texttt{P}}}_{s}}}\negthinspace+\negthinspace\texttt{E}$\cite{oh2015person} && \ding{55} & \ding{55} && I+P+E & Alex && 86.78 & 78.72 & 69.29 & 46.54\tabularnewline
&& \texttt{Contextual}\cite{Li_2016_CVPR} && DeepID\cite{sun2014deep} & \ding{55} && I+P & Alex && 88.75 & 83.33 & 77.00 & 59.35\tabularnewline
\cline{3-3} \cline{5-6} \cline{8-9} \cline{11-14}
&& $\texttt{naeil2}$ (this paper) && DeepID2+\cite{Sun2014ArxivDeepId2plus} & \ding{55} && I+P+E & Alex && {90.42} & \textbf{86.30} & \textbf{80.74} & \textbf{70.58}\tabularnewline
\cline{1-1} \cline{3-3} \cline{5-6} \cline{8-9} \cline{11-14}
\end{tabular}
\par\end{centering}
\vspace{0.8em}
\caption{\label{tab:test-set-accuracy-four-splits}PIPA \emph{test} set accuracy (\%) of the proposed method and prior arts on the four splits. For each method, we indicate any face recognition or pose estimation module included, and the data and convnet architecture for other features. \protect \\
Cues on extended data $\texttt{E}=\mbox{\ensuremath{\texttt{h}}}_{\texttt{casia}}\negthinspace+\negthinspace\mbox{\ensuremath{\texttt{h}}}_{\texttt{cacd}}\negthinspace+\negthinspace\texttt{\ensuremath{\mbox{\ensuremath{\texttt{h}}}_{\texttt{pipa11}}}+\ensuremath{\mbox{\ensuremath{\texttt{u}}}_{\texttt{peta5}}}}$.\protect \\
$\oplus$ means concatenation after $L_{2}$ normalisation.\protect \\
In the data column, I indicates ImageNet\cite{Deng2009CvprImageNet} and P indicates PIPA \emph{train} set. CC means CACD\cite{Chen2014Eccv}$+$CASIA\cite{Yi2014ArxivLearningFace} and E means CC$+$PETA\cite{Deng2014AcmPeta}. V indicates the VGGFace dataset \cite{Parkhi15}.\protect \\
In the architecture column, (Alex,Goog,Res) refers to (AlexNet\cite{Krizhevsky2012Nips},GoogleNetv3\cite{szegedy2016rethinking},ResNet50\cite{He_2016_CVPR}).
}
\end{table*}
In this section, we measure the performance of our final model and key intermediate results on the PIPA \emph{test} set, and compare against the prior arts. See table \ref{tab:test-set-accuracy-four-splits} for a summary.
\subsection{\label{subsec:face-rgb-baseline}Baselines}
We consider two baselines for measuring the inherent difficulty of the task. The first baseline is the ``chance level'' classifier, which does not see the image content and simply predicts the most commonly occurring class. It provides a lower bound for any recognition method and gives a sense of how large the gallery set is.
Our second baseline is the raw RGB nearest neighbour classifier $\mbox{\ensuremath{\texttt{h}}}_{\texttt{rgb}}$. It uses the raw downsized ($40\negmedspace\times\negmedspace40\ \mbox{pixels}$) and blurred RGB head crop as the feature. At test time, it predicts the identity of the training image that is nearest in Euclidean distance. By design, $\mbox{\ensuremath{\texttt{h}}}_{\texttt{rgb}}$ is only able to recognise
near-identical head crops across the $\mbox{\emph{test}}_{\nicefrac{0}{1}}$ splits.
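A minimal sketch of this baseline (crop extraction and blurring are omitted; array shapes and helper names are assumptions):
\begin{verbatim}
import numpy as np

# h_rgb sketch: the feature is the flattened, downsized RGB head crop, and the
# predicted identity is that of the nearest training crop in Euclidean distance.
def rgb_feature(head_crop):              # head_crop: (40, 40, 3) uint8 array
    return head_crop.astype(np.float32).ravel()

def predict_nn(train_feats, train_ids, test_feat):
    d = np.linalg.norm(train_feats - test_feat[None, :], axis=1)
    return train_ids[np.argmin(d)]
\end{verbatim}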
\subsubsection*{Results}
See the results for ``chance level'' and $\mbox{\ensuremath{\texttt{h}}}_{\texttt{rgb}}$ in table \ref{tab:test-set-accuracy-four-splits}. While the ``chance level'' performance is low ($\leq2\%$ in all splits), we observe that $\mbox{\ensuremath{\texttt{h}}}_{\texttt{rgb}}$ performs unreasonably well on the Original split (33.77\%). This shows that the Original split shares many nearly identical person instances across the training/testing division, making the task very easy. On the harder splits, the $\mbox{\ensuremath{\texttt{h}}}_{\texttt{rgb}}$ performance diminishes, reaching only 6.78\% on the Day split. Recognition on the Day split is thus far less trivial -- simply exploiting pixel value similarity does not work.
\subsubsection*{Conclusion}
Although the gallery set is large, the task can be made arbitrarily easy when many similar instances are shared across the training/testing division (as in the Original split). We have remedied the issue by introducing three more challenging splits (Album, Time, and Day) on which the naive RGB baseline ($\mbox{\ensuremath{\texttt{h}}}_{\texttt{rgb}}$) no longer works (\S\ref{subsec:PIPA-splits}).
\subsection{Methods based on head}
We consider our four intermediate models ($\textnormal{\texttt{h}}$, $\texttt{h}\negthinspace+\negthinspace\mbox{\ensuremath{\texttt{h}}}_{\texttt{casia}}\negthinspace+\negthinspace\mbox{\ensuremath{\texttt{h}}}_{\texttt{cacd}}$, $\mbox{\ensuremath{\texttt{h}_{\texttt{deepid}}}}$, $\texttt{h}\oplus\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$) and a prior work \texttt{DeepFace} \cite{Zhang2015CvprPiper,Taigman2014CvprDeepFace}.
We observe the same trends as described in the previous sections on the \emph{val} set (\S\ref{sec:Head-or-face}, \ref{sec:deepid}). Here, we focus on the comparison against \texttt{DeepFace} \cite{Taigman2014CvprDeepFace}. Even without a specialised face module, $\textnormal{\texttt{h}}$ already performs better than \texttt{DeepFace} (76.42\% versus 46.66\%, Original split). We believe this is for two reasons: (1) \texttt{DeepFace} only takes face regions as input, leaving out valuable hair and background information (\S\ref{sec:Head-or-face}); (2) \texttt{DeepFace} only makes predictions on the 52\% of instances where the face can be registered. Note that $\mbox{\ensuremath{\texttt{h}_{\texttt{deepid}}}}$ also does not always make a prediction, due to failures of the facial landmark detection (17\% failure on PIPA), but it performs better than \texttt{DeepFace} in the considered scenario (68.06\% versus 46.66\%, Original split).
\subsection{Methods based on body}
We consider three of our intermediate models ($\textnormal{\texttt{b}}$, $\texttt{h}\negthinspace+\negthinspace\texttt{b}$, $\texttt{P}=\texttt{f}\negthinspace+\negthinspace\texttt{h}\negthinspace+\negthinspace\texttt{u}\negthinspace+\negthinspace\texttt{b}$) and four prior arts (\texttt{GlobalModel}\cite{Zhang2015CvprPiper}, \texttt{PIPER}\cite{Zhang2015CvprPiper}, \texttt{Pose}\cite{kumar2017pose}, \texttt{COCO}\cite{liu_2017_coco}). \texttt{Pose} \cite{kumar2017pose} and \texttt{COCO} \cite{liu_2017_coco} methods appeared after the publication of the conference version of this paper \cite{oh2015person}. See table \ref{tab:test-set-accuracy-four-splits} for the results.
Our body cue $\mbox{\ensuremath{\texttt{b}}}$ and Zhang et al.'s \texttt{GlobalModel} \cite{Zhang2015CvprPiper} are the same method implemented independently. Unsurprisingly, they perform similarly (69.63\% versus 67.60\%, Original split).
Our $\texttt{h}\negthinspace+\negthinspace\texttt{b}$ method is the minimal system matching Zhang et al.'s \texttt{PIPER} \cite{Zhang2015CvprPiper} ($83.36\%$ versus 83.05\%, Original split). The feature vector of $\texttt{h}\negthinspace+\negthinspace\texttt{b}$ is about $50$ times smaller than that of \texttt{PIPER}, and the method makes use of neither a face recogniser nor a pose estimator.
In fact, \texttt{PIPER} captures the head region via one of its poselets. Thus, $\texttt{h}\negthinspace+\negthinspace\texttt{b}$ extracts cues from a subset of \texttt{PIPER}'s ``\texttt{GlobalModel+Poselets}''
\cite{Zhang2015CvprPiper}, but performs better (83.36\% versus $78.79\%$, Original split).
\subsubsection*{Methods since the conference version\cite{oh2015person}}
\texttt{Pose} by Kumar et al. \cite{kumar2017pose} uses extra keypoint annotations on the PIPA \emph{train} set to generate pose clusters, and trains separate models for each pose cluster (PSM, pose-specific models). By performing a form of pose normalisation, they improve the results significantly: 2.27 pp and 10.19 pp over \texttt{naeil} on the Original and Day splits, respectively.
\texttt{COCO} by Liu et al. \cite{liu_2017_coco} proposes a novel metric learning loss for the person recognition task. Metric learning gives an edge over classifier-based methods by enabling recognition of unseen identities without re-training. They further use Faster-RCNN detectors \cite{ren15fasterrcnn} to localise the face and body more accurately. The final performance is arguably good in all four splits, compared to \texttt{Pose} \cite{kumar2017pose} or \texttt{naeil} \cite{oh2015person}. However, one should note that the face, body, upper body, and full body features in \texttt{COCO} are based on GoogleNetv3 \cite{szegedy2016rethinking} and ResNet50 \cite{He_2016_CVPR} -- the numbers are not fully comparable to those of the other methods, which are largely based on AlexNet.
\subsection{Methods based on full image}
We consider our two intermediate models ($\texttt{\ensuremath{\mbox{\ensuremath{\texttt{P}}}_{s}}}=\texttt{P}\negthinspace+\negthinspace\texttt{s}$, $\texttt{naeil}=\texttt{\ensuremath{\mbox{\ensuremath{\texttt{P}}}_{s}}}\negthinspace+\negthinspace\texttt{E}$) and \texttt{Contextual} \cite{Li_2016_CVPR}, a method which appeared after the conference version of this paper \cite{oh2015person}.
Our \texttt{naeil} performs better than \texttt{PIPER} \cite{Zhang2015CvprPiper} (86.78\% versus 83.05\%, Original split), while having a 6 times smaller feature vector and relying on neither a face recogniser nor a pose estimator.
\subsubsection*{Methods since the conference version\cite{oh2015person}}
\texttt{Contextual} by Li et al. \cite{Li_2016_CVPR} makes use of person co-occurrence statistics to improve the results. It performs 1.97 pp and 12.81 pp better than \texttt{naeil} on the Original and Day splits, respectively. However, one should note that \texttt{Contextual} employs a face recogniser, DeepID \cite{sun2014deep}. We have found that a specialised face recogniser greatly improves the recognition quality on the Day split (\S\ref{sec:deepid}).
\subsection{\label{subsec:naeil2}Our final model \texttt{naeil2}}
\texttt{naeil2} is a weighted combination of \texttt{naeil} and $\texttt{h}_{\texttt{deepid}}$ (see \S\ref{sec:naeil+deepid} for details). Observe that by attaching a face recogniser module to \texttt{naeil}, we achieve the best performance on the Album, Time, and Day splits. In particular, on the Day split, \texttt{naeil2} yields an 8.85 pp boost over the second best method \texttt{COCO} \cite{liu_2017_coco} (table \ref{tab:test-set-accuracy-four-splits}). On the Original split, \texttt{COCO} performs better (2.36 pp gap), but note that \texttt{COCO} uses more advanced feature representations (GoogleNet and ResNet).
Since \texttt{naeil2} and \texttt{COCO} focus on orthogonal techniques, they can be combined to yield even better performances.
\subsection{Computational cost}
We report computation times for the main stages of our method. The feature training takes 2-3 days on a single GPU machine. The SVM training takes 42 seconds for $\texttt{h}$ ($4\,096$ dim) and $1\,118$ seconds for \texttt{naeil} on the Original split (581 classes, $6\,443$ samples). Note that this corresponds to a realistic user scenario in a photo sharing service, where $\sim\negmedspace500$ identities are known to the user and the average number of photos per identity is $\sim\negmedspace10$.
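For reference, one possible instantiation of such an identity classifier is a linear one-vs-rest SVM on the concatenated cue features; the sketch below uses random placeholder data of the Original-split size and an assumed regularisation constant, and is not our exact training setup:
\begin{verbatim}
import numpy as np
from sklearn.svm import LinearSVC

# Placeholder data matching the Original-split sizes quoted above.
X_train = np.random.randn(6443, 4096).astype(np.float32)
y_train = np.random.randint(0, 581, size=6443)

clf = LinearSVC(C=1.0)                  # C=1.0 is an assumption
clf.fit(X_train, y_train)
scores = clf.decision_function(X_train[:5])   # per-identity scores, shape (5, 581)
\end{verbatim}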
\section{\label{sec:challenges-analysis}Analysis}
In this section, we provide a deeper analysis of individual cues towards the final performance. In particular, we measure how contributions from individual cues (e.g. face and scene) change when either the system has to generalise across time or head viewpoint. We study the performance as a function of the number of training samples per identity, and examine the distribution of identities according to their recognisability.
\begin{figure}
\centering{}\hspace*{\fill}%
\begin{center}
\includegraphics[width=0.8\columnwidth]{figures/barplot2v4_fixed}
\vspace{0em}
\caption{\label{fig:splits-accuracy-relative}PIPA \emph{test} set relative accuracy of various methods in the four splits, against the final system \texttt{naeil2}.}
\par\end{center}%
\end{figure}
\subsection{Contribution of individual cues \label{subsec:Importance-of-features}}
We measure the contribution of individual cues towards the final system \texttt{naeil2} (\S\ref{sec:naeil+deepid}) by dividing the accuracy of each intermediate method by the performance of \texttt{naeil2}. We report results on the four splits in order to determine which cues contribute more when there is a larger time gap between training and testing samples, and vice versa.
\subsubsection*{Results}
See figure \ref{fig:splits-accuracy-relative} for the relative performances on the four splits. The cues based more on context (e.g. $\texttt{b}$ and $\texttt{s}$) see a greater drop from the Original to the Day split, whereas cues focused on the face $\texttt{f}$ and head $\texttt{h}$ regions tend to drop less. Intuitively, this is due to the greater changes in clothing and events in the Day split.
On the other hand, $\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$ increases its relative contribution from the Original to the Day split, explaining nearly 90\% of the \texttt{naeil2} performance in the Day split. $\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$ provides a valuable, largely time-invariant face feature, especially when the time gap is large. However, on the Original split $\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$ only reaches about 75\% of \texttt{naeil2}; the head-orientation-robust $\texttt{naeil}$ must be added to attain the best performance.
\subsubsection*{Conclusion}
Cues involving context are stronger in the Original split; cues around face, especially the $\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$, are robust in the Day split. Combining both types of cues yields the best performance over all considered time/appearance changes.
\subsection{\label{subsec:Head-orientation-analysis}Performance by viewpoint}
We study the impact of the test instance viewpoint on the proposed systems. Cues relying on the face are less likely to be robust to occluded faces, while body or context cues should be robust against viewpoint changes. We measure the performance of the models on the head orientation partitions defined by a DPM head detector (see \S\ref{subsec:face-detection}): frontal $\texttt{FR}$, non-frontal $\texttt{NFR}$, and
no face detected $\texttt{NFD}$. The $\texttt{NFD}$ subset is a proxy for back-view and occluded-face instances.
\subsubsection*{Results}
Figure \ref{fig:threeway-bar} shows the accuracy of the methods on the three head orientation subsets for the Original and Day splits. All the considered methods show decreasing performance when going from the frontal $\texttt{FR}$ to the non-frontal $\texttt{NFR}$ and no-face-detected $\texttt{NFD}$ subsets. However, in the Original split, \texttt{naeil2} still robustly predicts the identities even for the \texttt{NFD} subset ($\sim\negmedspace 80\%$ accuracy). On the Day split, however, \texttt{naeil2} does struggle on the \texttt{NFD} subset ($\sim\negmedspace 20\%$ accuracy). Recognition of \texttt{NFD} instances under the Day split constitutes the main remaining challenge of person recognition.
In order to measure the contributions from individual cues in the different head orientation subsets, we report the relative performance against the final model \texttt{naeil2} in figure \ref{fig:threeway-relative}. The results are reported on the Original and Day splits. Generally, cues based on more context (e.g. $\texttt{b}$ and $\texttt{s}$) are more robust when the face is not visible than the face-specific cues (e.g. $\texttt{f}$ and $\texttt{h}$). Note that the $\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$ performance drops significantly on the $\texttt{NFD}$ subset, while $\texttt{naeil}$ generally improves its relative performance in harder viewpoints. \texttt{naeil2} envelops the performance of the individual cues in all orientation subsets.
\subsubsection*{Conclusion}
$\texttt{naeil}$ is more viewpoint-robust than $\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$, in contrast to the time-robustness analysis (\S\ref{subsec:Importance-of-features}). The combined model \texttt{naeil2} takes the best of both worlds. The remaining challenge for person recognition lies in the no-face-detected \texttt{NFD} instances under the Day split. Perhaps image or social media metadata could be utilised (e.g. camera statistics, time and GPS location, or the social media friendship graph).
\begin{figure}
\begin{centering}
\begin{center}
\includegraphics[width=0.8\columnwidth]{figures/threeway-test-bar_fixed}
\caption{\label{fig:threeway-bar}PIPA \emph{test} set accuracy of methods on the frontal ($\texttt{FR}$), non-frontal ($\texttt{NFR}$), and no face detected ($\texttt{NFD}$) subsets. Left: Original split, right: Day split.}
\par\end{center}
\end{centering}
\end{figure}
\begin{figure}
\begin{centering}
\begin{center}
\includegraphics[width=0.8\columnwidth]{figures/threeway-test-v2_fixed}
\caption{\label{fig:threeway-relative}PIPA \emph{test} set relative accuracy of frontal ($\texttt{FR}$), non-frontal ($\texttt{NFR}$), and non-detection ($\texttt{NDET}$) head orientations, relative to the final model \texttt{naeil2}. Left: Original split, right: Day split. }
\par\end{center}
\end{centering}
\end{figure}
\begin{figure*}
\begin{centering}
\begin{center}
\hspace*{\fill}\includegraphics[width=0.8\columnwidth]{figures/threeway-svm-fr_fixed}\hspace*{\fill}\includegraphics[width=0.8\columnwidth]{figures/threeway-svm-nondet_fixed}\hspace*{\fill}
\caption{\label{fig:threeway-svm}PIPA \emph{test} set performance when the identity classifier (SVM) is only trained on either frontal ($\texttt{FR}$, left) or no face detected ($\texttt{NFD}$, right) subset. Related scenario: a robot has only seen frontal views of people; who is this person shown from the back view? }
\par\end{center}
\end{centering}
\end{figure*}
\subsection{Generalisation across viewpoints\label{subsec:SVM-orientation}}
Here, we investigate the viewpoint generalisability of our models. For example, we challenge the system to identify a person from the back, having shown it only frontal face samples.
\subsubsection*{Results}
Figure \ref{fig:threeway-svm} shows the accuracies of the methods when they are trained either only on the frontal subset $\texttt{FR}$ (left plot) or only on the no-face-detected subset $\texttt{NFD}$ (right plot). When trained on $\texttt{FR}$, \texttt{naeil2} has difficulties generalising to the $\texttt{NFD}$ subset ($\texttt{FR}$ versus $\texttt{NFD}$ performance is $\sim\negmedspace95\%$ to $\sim\negmedspace40\%$ in Original; $\sim\negmedspace85\%$ to $\sim\negmedspace35\%$ in Day). However, the absolute performance is still far above random chance (see \S\ref{subsec:face-rgb-baseline}), indicating that the learned identity representations are generalisable to a certain degree. The \texttt{naeil} features are more robust in this case than $\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$, with a less dramatic drop from \texttt{FR} to \texttt{NFD}.
When no face is given during training (training on the \texttt{NFD} subset), identities are much harder to learn in general. The recognition performance is low even in the no-generalisation case: $\sim\negmedspace60\%$ and $\sim\negmedspace30\%$ for Original and Day, respectively, when trained and tested on \texttt{NFD}.
\subsubsection*{Conclusion}
$\texttt{naeil2}$ does generalise marginally across viewpoints, largely owing to the \texttt{naeil} features. It seems quite hard to learn identity-specific features (generalisable or not) from back views or occluded faces (\texttt{NFD}).
\subsection{Viewpoint distribution does not matter for feature training\label{subsec:Feature-learning-orientation}}
We examine the effect of the ratio of head orientations in the feature training set on the quality of the head feature $\ensuremath{\texttt{h}}$. We fix the total number of training examples, consisting only of frontal \texttt{FR} and non-frontal \texttt{NFR} faces, while varying their ratio.
One would hypothesise that the maximal viewpoint robustness of the feature is achieved at a balanced mixture of \texttt{FR} and \texttt{NFR} for each person, and also that $\ensuremath{\texttt{h}}$ trained on the $\texttt{FR}$ ($\texttt{NFR}$) subset is relatively strong at predicting the $\texttt{FR}$ ($\texttt{NFR}$) subset, respectively.
\subsubsection*{Results}
Figure \ref{fig:threeway-feat} shows the performance of $\ensuremath{\texttt{h}}$ trained with various $\texttt{FR}$ to $\texttt{NFR}$ ratios on the \texttt{FR}, \texttt{NFR}, and \texttt{NFD} subsets. Contrary to the hypothesis, changing the distribution of head orientations during feature training has a $<3\%$ effect on performance across all viewpoint subsets in both the Original and Day splits.
\subsubsection*{Conclusion}
No extra care is needed to control the distribution of head orientations in the feature training set to improve the head feature $\texttt{h}$. Features on larger image regions (e.g. \texttt{u} and \texttt{b}) are expected to be even less affected by the viewpoint distribution.
\begin{figure}
\begin{centering}
\hspace*{\fill}\includegraphics[width=0.5\columnwidth]{figures/threeway-feat-o_fixed}\hspace*{\fill}\includegraphics[width=0.5\columnwidth]{figures/threeway-feat-d_fixed}\hspace*{\fill}\vspace{0.5em}
\par\end{centering}
\caption{\label{fig:threeway-feat} Training the feature $\ensuremath{\texttt{h}}$ with different mixtures of frontal $\texttt{FR}$ and non-frontal $\texttt{NFR}$ heads. The viewpoint-wise performance is shown for the Original (left) and Day (right) splits.}
\end{figure}
\subsection{Input resolution \label{subsec:Analysis-of-remaining-factors}}
This section analyses the impact of input resolution. We aim to identify methods that are robust across different ranges of resolution.
\subsubsection*{Results}
Figure \ref{fig:remaining_factors} shows the performance with respect to the input resolution (head height in pixels). The final model \texttt{naeil2} is robust against low input resolutions, reaching $\sim\negmedspace80\%$ even for instances with $<50$ pixel heads on the Original split. On the Day split, \texttt{naeil2} is less robust on low resolution examples ($\sim\negmedspace55\%$).
Component-wise, note that the $\texttt{naeil}$ performance is nearly invariant to the resolution level. $\texttt{naeil}$ tends to be more robust to low resolution input than $\texttt{h}_{\texttt{deepid}}$, as it is based on body and context features and does not need high resolution faces.
\subsubsection*{Conclusion}
For low resolution input $\texttt{naeil}$ should be exploited, while for high resolution input $\texttt{h}_{\texttt{deepid}}$ is preferable. If unsure, \texttt{naeil2} is a good choice -- it envelops the performance of both at all resolution levels.
\begin{figure}
\centering{}\hspace*{\fill}%
\begin{center}
\includegraphics[width=0.8\columnwidth]{figures/res_bar_fixed}
\vspace{0em}
\caption{\label{fig:remaining_factors}PIPA \emph{test} set accuracy of systems at different levels of input resolution. Resolution is measured in terms of the head height (pixels).}
\par\end{center}%
\end{figure}
\subsection{Number of training samples\label{subsec:Importance-of-training}}
We are interested in two questions: (1) if we had more samples per identity, would person recognition be solved with the current method? (2) how many examples per identity are enough to gather a substantial amount of information about a person? To investigate these questions, we measure the performance of the methods at different numbers of training samples per identity. We perform 10 independent runs per data point with a fixed number of training examples per identity (the subset is uniformly sampled at each run).
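A sketch of the subsampling protocol (helper and variable names are ours):
\begin{verbatim}
import numpy as np

# Keep at most n_per_id training examples for every identity, drawn uniformly
# at random; repeat with different seeds for the 10 independent runs.
def subsample_per_identity(ids, n_per_id, rng):
    keep = []
    for pid in np.unique(ids):
        idx = np.flatnonzero(ids == pid)
        keep.extend(rng.choice(idx, size=min(n_per_id, idx.size), replace=False))
    return np.sort(np.array(keep))

train_ids = np.random.randint(0, 581, size=6443)     # placeholder labels
subsets = [subsample_per_identity(train_ids, 10, np.random.default_rng(run))
           for run in range(10)]
\end{verbatim}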
\subsubsection*{Results}
Figure \ref{fig:numtrain-accuracy} shows the recognition performance of the methods as a function of the number of training samples per identity. \texttt{naeil2} saturates after $10\sim15$ training examples per person in the Original and Day splits, reaching $\sim\negmedspace92\%$ and $\sim\negmedspace83\%$, respectively, at $25$ examples per identity. At the lower end, we observe that 1 example per identity is already enough to recognise a person far above the chance level ($\sim\negmedspace67\%$ and $\sim\negmedspace35\%$ on Original and Day, respectively).
\subsubsection*{Conclusion}
Adding a few times more examples per person will not push the performance to 100\%; methodological advances are required to fully solve the problem. On the other hand, the methods already gather a substantial amount of identity information from a single sample per person (far above chance level).
\begin{figure}
\begin{centering}
\hspace*{\fill}\includegraphics[width=0.8\columnwidth]{figures/numtrain_accuracy_fixed}\hspace*{\fill}
\par\end{centering}
\begin{centering}
\vspace{0em}
\par\end{centering}
\caption{\label{fig:numtrain-accuracy}Recognition accuracy at different number of training samples per identity. Error bars indicate $\pm 1$ standard deviation from the mean.}
\end{figure}
\subsection{Distribution of per-identity accuracy \label{subsec:per-id-accuracy}}
Finally, we study what proportion of the identities is easy to recognise and how many are hopeless. We do so by computing the distribution of identities according to their per-identity recognition accuracies.
\subsubsection*{Results}
Figure \ref{fig:per-id-accuracy} shows the per-identity accuracy in descending order for each considered method. On the Original split, \texttt{naeil2} gives $100\%$ accuracy for $185$ out of the $581$ test identities, whereas there is only one identity for which the method totally fails. On the other hand, on the Day split there are $11$ out of the $199$ test identities for whom \texttt{naeil2} achieves $100\%$ accuracy and $12$ identities with zero accuracy. In particular, \texttt{naeil2} greatly improves the per-identity accuracy distribution over $\texttt{naeil}$, which has zero accuracy for $40$ identities.
\subsubsection*{Conclusion}
In the Original split, \texttt{naeil2} is doing well on many of the identities already. In the Day split, the $\texttt{h}_{\texttt{deepid}}$ feature has greatly improved the per-identity performances, but \texttt{naeil2} still misses some identities. It is left as future work to focus on the hard identities.
\begin{figure}
\begin{centering}
\hspace*{\fill}\includegraphics[width=0.5\columnwidth]{figures/perid_O_fixed}\hspace*{\fill}\includegraphics[width=0.5\columnwidth]{figures/perid_D_fixed}\hspace*{\fill}
\par\end{centering}
\begin{centering}
\vspace{0em}
\par\end{centering}
\caption{\label{fig:per-id-accuracy}Per-identity accuracy on the Original and Day splits. The identities are sorted according to the per-identity accuracy for each method separately.}
\end{figure}
\section{\label{sec:open-world}Open-world recognition}
So far, we have focused on the scenario where the test instances always come from a closed world of gallery identities. However, when, for example, person detectors are used to localise instances instead of relying on head box annotations, the detected person may not belong to the gallery set. One may wonder how our person recognisers perform when the test instance may be an unseen identity.
In this section, we study the task of ``open-world person recognition''. The test identity may be either from a gallery set (training identities) or from a background set (unseen identities). We consider the scenario where test instances are given by a face detector \cite{Mathias2014Eccv} while the training instance locations have been annotated by humans.
The key challenge for our recognition system is to tell gallery identities apart from background faces while simultaneously classifying the gallery identities. Obtained from a detector, the background faces may contain any person in the crowd or even non-faces. We introduce a simple modification of our recognition systems' test-time procedure that additionally makes the gallery-versus-background decision. We then discuss the relevant metrics for our systems' open-world performance.
\subsection{Method}
At test time, body part crops are inferred from the detected face region ($\texttt{f}$). First, $\texttt{h}$ is regressed from $\texttt{f}$, using the PIPA \emph{train} set statistics of the scaling and displacement transformation from \texttt{f} to \texttt{h}. All the other regions ($\texttt{u}$, $\texttt{b}$, $\texttt{s}$) are computed from $\texttt{h}$ in the same way as in \S\textcolor{red}{3.2} of the main paper.
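A minimal sketch of such a face-to-head transfer is given below; the numeric constants are illustrative placeholders, not the actual PIPA \emph{train} set statistics:
\begin{verbatim}
# Infer the head box (x, y, w, h) from a detected face box via a fixed
# scaling/offset transform; all constants are placeholders.
def face_to_head(fx, fy, fw, fh, scale_w=1.8, scale_h=2.0, dx=0.0, dy=-0.3):
    hw, hh = scale_w * fw, scale_h * fh
    cx = fx + fw / 2.0 + dx * fw          # shifted box centre
    cy = fy + fh / 2.0 + dy * fh
    return cx - hw / 2.0, cy - hh / 2.0, hw, hh
\end{verbatim}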
To verify that the inferred head region \texttt{h} is sound and compatible with the models trained on \texttt{h} (as well as $\texttt{u}$ and $\texttt{b}$), we train the head model \texttt{h} on head annotations and test it on the heads inferred from face detections. The recognition performance is $87.74\%$, while when trained and tested on the head annotations the performance is $89.85\%$. We see a small but insignificant drop -- the inferred regions are thus largely compatible.
The gallery-background identity detection is done by thresholding the final SVM score output. Given a recognition system and test instance $x$, let $\mathcal{S}_{k}\left(x\right)$ be the SVM score for identity $k$. Then, we apply a thresholding parameter $\tau>0$ to predict background if $\underset{k}{\max}\,\,\mathcal{S}_{k}\left(x\right)<\tau$, and predict the argmax gallery identity otherwise.
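The decision rule can be sketched as follows (variable names are ours):
\begin{verbatim}
import numpy as np

BACKGROUND = -1

# Reject as background when the best per-identity SVM score falls below tau,
# otherwise return the argmax gallery identity.
def open_world_predict(svm_scores, tau):
    # svm_scores: 1-D array of scores S_k(x) over the gallery identities
    return BACKGROUND if svm_scores.max() < tau else int(np.argmax(svm_scores))
\end{verbatim}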
\subsection{Evaluation metric}
The evaluation metric should measure two aspects simultaneously: (1) the ability to tell apart background identities, and (2) the ability to classify gallery identities. We first introduce a few terms to help define the metrics; refer to figure \ref{fig:open-metric} for a visualisation. We say a detected test instance $x$ is a ``foreground prediction'' if $\underset{k}{\max}\,\,\mathcal{S}_{k}\left(x\right)\ge\tau$. A foreground prediction is either a true positive ($TP$) or a false positive ($FP$), depending on whether $x$ is a gallery identity or not. If $x$ is a $TP$, it is either a sound true positive $TP_s$ or an unsound true positive $TP_u$, depending on whether the classification result $\underset{k}{\arg\max}\,\,\mathcal{S}_{k}\left(x\right)$ is correct. A false negative ($FN$) is incurred if a gallery identity is predicted
as background.
\begin{figure}
\begin{centering}
\hspace*{\fill}\includegraphics[width=0.8\columnwidth]{figures/open-metric-v2_fixed}\hspace*{\fill}
\par\end{centering}
\caption{\label{fig:open-metric}Diagram of various subsets generated by a person recognition system in an open world setting (cf. Figure \textcolor{red}{2} of main paper). $TP_s$: sound true positive, $TP_u$: unsound true positive, $FP$: false positive, $FN$: false negative. See text for the definitions.}
\end{figure}
We first measure the system's ability to screen out background identities while at the same time classifying the gallery identities. The \textbf{recognition recall (RR)} at threshold $\tau$ is defined as
\begin{equation}
\label{eq:RR}
\mathrm{RR}(\tau)=\frac{\left|TP_s\right|}{\left|\mbox{face det.}\cap\mbox{head anno.}\right|}=\frac{\left|TP_s\right|}{\left|TP\cup FN\right|}.
\end{equation}
To factor out the performance of face detection, we constrain our evaluation to the intersection between face detections and head annotations (the denominator $TP\cup FN$). Note that the metric is a decreasing function of $\tau$, and when $\tau\rightarrow-\infty$ the corresponding system is operating under the closed world assumption.
The system enjoys a high RR when $\tau$ is decreased, but it then predicts many background cases as foreground ($FP$). To quantify this trade-off we introduce a second metric,
the \textbf{false positive per image (FPPI)}. Given a threshold $\tau>0$, the FPPI is defined as
\begin{equation}
\label{eq:FPPI}
\mathrm{FPPI}(\tau)=\frac{\left|FP\right|}{\left|\mbox{images}\right|},
\end{equation}
measuring how many wrong foreground predictions the system makes per image. It is also a decreasing function of $\tau$. When $\tau\rightarrow\infty$, the FPPI attains zero.
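Both metrics can be computed at a fixed $\tau$ as in the following sketch (variable names are ours; \texttt{scores} holds the per-identity SVM scores of each detection matched to a head annotation or to background, and \texttt{gt} holds the corresponding gallery identity or the background label):
\begin{verbatim}
import numpy as np

def rr_and_fppi(scores, gt, n_images, tau, background=-1):
    tp_s, fp, n_gallery = 0, 0, 0
    for s, g in zip(scores, gt):
        pred = background if s.max() < tau else int(np.argmax(s))
        if g != background:
            n_gallery += 1             # contributes to |TP u FN|
            tp_s += int(pred == g)     # sound true positive
        elif pred != background:
            fp += 1                    # false positive on a background face
    return tp_s / max(n_gallery, 1), fp / n_images
\end{verbatim}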
\subsection{Results}
\begin{figure*}
\begin{centering}
\hspace*{\fill}\includegraphics[width=0.8\columnwidth]{figures/open_O_fixed}\hspace*{\fill}\includegraphics[width=0.8\columnwidth]{figures/open_D_fixed}\hspace*{\fill}
\par\end{centering}
\caption{\label{fig:open-world-results}Recognition recall (RR) versus false positive per image (FPPI) of our recognition systems in the open world setting. Curves are parametrised by $\tau$ -- see text for details.}
\end{figure*}
Figure \ref{fig:open-world-results} shows the recognition recall (RR) versus false positive per image (FPPI) curves, parametrised by $\tau$. As $\tau\to-\infty$, $RR(\tau)$ approaches the closed-world performance on the face-detected subset ($\texttt{FR}\cup\texttt{NFR}$): $87.74\%$ (Original) and $46.67\%$ (Day) for $\texttt{naeil}$. In the open-world case, for example when the system makes one false positive per image, the recognition recall for $\texttt{naeil}$ is $76.25\%$ (Original) and $25.29\%$ (Day). Transitioning from the closed world to the open world thus incurs quite some drop, but one should note that the set of background face detections is more than $7\times$ larger than the set of foreground faces.
Note that DeepID2+ \cite{Sun2014ArxivDeepId2plus} is not publicly available, so we cannot compute the $\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$ features ourselves; we have therefore not included the $\mbox{\ensuremath{\texttt{h}}}_{\texttt{deepid}}$ or \texttt{naeil2} results in this section.
\subsection{Conclusion}
Although the performance is not yet ideal, a simple SVM score thresholding scheme makes our systems work in the open-world recognition scenario.
\section{\label{sec:Conclusion}Conclusion}
We have analysed the problem of person recognition in social media photos, where people may appear with occluded faces, in diverse poses, and in various social events. We have investigated the efficacy of various cues, including the face recogniser DeepID2+ \cite{Sun2014ArxivDeepId2plus}, and their generalisation across time and head viewpoint. For a finer analysis, we have contributed additional splits on PIPA \cite{Zhang2015CvprPiper} that simulate different amounts of time gap between training and testing samples.
We have drawn four major conclusions. (1) Cues based on face and head are robust across time (\S\ref{subsec:Importance-of-features}). (2) Cues based on context are robust across head viewpoints (\S\ref{subsec:Head-orientation-analysis}). (3) The final model \texttt{naeil2}, a combination of face and context cues, is robust across both time and viewpoint and achieves a $\sim\negmedspace9$ pp improvement over a recent state of the art on the challenging Day split (\S\ref{subsec:naeil2}). (4) Better convnet architectures and face recognisers will improve the performance of the \texttt{naeil} and \texttt{naeil2} frameworks in the future (\S\ref{subsec:naeil2}).
The remaining challenges are mainly the large time gap and occluded face scenarios (\S\ref{subsec:Head-orientation-analysis}). One possible direction is to exploit non-visual cues like GPS and time metadata, camera parameters, or social media album/friendship graphs. Code and data are publicly available at \url{https://goo.gl/DKuhlY}.
\ifCLASSOPTIONcompsoc
\section*{Acknowledgments}
\else
\section*{Acknowledgment}
\fi
This research was supported by the German Research Foundation (DFG CRC 1223).
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
We start with a scalar-potential expansion of the electromagnetic
field in the three media $j=1,2,3$, corresponding respectively to air,
metal, and substrate (i.e. glass or fused silica). There is no source
in the substrate, and the dipole (point source) is located in medium
$j=1$. We write for the field in each medium:
\begin{eqnarray}
\mathbf{D}_j=\boldsymbol{\nabla}\times\boldsymbol{\nabla}\times[\mathbf{\hat{z}}\Psi_{\textrm{TM},j}]+ik_0\varepsilon_j\boldsymbol{\nabla}\times[\mathbf{\hat{z}}\Psi_{\textrm{TE},j}]\nonumber\\
\mathbf{B}_j=\boldsymbol{\nabla}\times\boldsymbol{\nabla}\times[\mathbf{\hat{z}}\Psi_{\textrm{TE},j}]-ik_0\boldsymbol{\nabla}\times[\mathbf{\hat{z}}\Psi_{\textrm{TM},j}]\label{modale2}
\end{eqnarray} with
\begin{eqnarray}
[\boldsymbol{\nabla}^2+k_0^2\varepsilon_j]\Psi_{\textrm{TM,TE},j}=0.
\end{eqnarray}
Using the boundary conditions, one shows that, for the dipole component
perpendicular to the film, the only non-vanishing scalar potential in
medium $j=3$ is
\begin{eqnarray}
\Psi^{\textrm{TM},\bot}(\mathbf{x},z)=\frac{i\mu_\bot}{4\pi}\int_{0}^{+\infty} \frac{kdk}{k_1}\tilde{T}_{13}^{\textrm{TM}}(k)e^{ik_1h}e^{ik_3z}J_0(k\varrho)\nonumber\\
=\frac{i\mu_\bot}{8\pi}\int_{-\infty}^{+\infty} \frac{kdk}{k_1}\tilde{T}_{13}^{\textrm{TM}}(k)e^{ik_1h}e^{ik_3z}H_0^{(+)}(k\varrho),
\end{eqnarray}
where we defined $k_j=\sqrt{k_0^2\varepsilon_j-k^2}$ with
$\textrm{Imag}[k_j]\geq0$, $k_0=2\pi/\lambda=\omega/c$, and $z\geq
d$. To obtain Eq.~3 we also used the formula
$H^{(+)}_0(u)-H^{(+)}_0(-u)=2J_0(u)$, which is valid in the complex
plane $u=u'+iu''$ (provided $|\arg{(u)}|< \pi$). The Fresnel coefficient
characterizing the transmission through the metal film is defined for TM waves
by \begin{eqnarray}
\tilde{T}_{13}^{\textrm{TM}}(k)=\frac{T_{23}^{\textrm{TM}}T_{12}^{\textrm{TM}}}{1+R_{23}^{\textrm{TM}}R_{12}^{\textrm{TM}}e^{2ik_2d}}e^{i(k_2-k_3)d}
\end{eqnarray} where
\begin{eqnarray}
R_{ij}^{\textrm{TM}}=\frac{k_i/\varepsilon_i-k_j/\varepsilon_j}{k_i/\varepsilon_i+k_j/\varepsilon_j}\\
T_{ij}^{\textrm{TM}}=\frac{2k_i/\varepsilon_i}{k_i/\varepsilon_i+k_j/\varepsilon_j}.
\end{eqnarray}
We then introduce the variables $k=k_0n\sin{\xi}$, $k_3=k_0n\cos{\xi}$ with $\xi=\xi'+i\xi''$ and write
\begin{eqnarray}
\Psi^{\textrm{TM},\bot}(\mathbf{x},z)=\int_{\Gamma}d\xi F_+^{\textrm{TM},\bot}(\xi)e^{ik_0n((z-d)\cos{\xi}+\varrho\sin{\xi})}
\end{eqnarray}
with
\begin{eqnarray}
F_+^{\textrm{TM},\bot}(\xi)=\frac{i\mu_\bot}{8\pi}\frac{k_0n\sin{\xi}\cos{\xi}}{\sqrt{(\frac{\varepsilon_1}{\varepsilon_3}-\sin^2{\xi})}}
\tilde{T}_{13}^{\textrm{TM}}(k_0n\sin{\xi})\nonumber\\e^{i\sqrt{(\frac{\varepsilon_1}{\varepsilon_3}-\sin^2{\xi})}h}
\cdot e^{ik_0nd\cos{\xi}}H_0^{(+)}(k_0n\varrho\sin{\xi})e^{-ik_0n\varrho\sin{\xi}}\nonumber\\
\end{eqnarray} (we point out that the $\varrho$ and $\varphi$ dependencies are, here and in the following, left implicit in our notation: $F_+(\xi):=F_+(\xi,\varphi,\varrho)$).
Similar expressions can be obtained for the component $\boldsymbol{\mu}_{||}$ of the dipole parallel to the interface. More precisely, for the TM modes we have
\begin{eqnarray}
\Psi^{\textrm{TM},||}(\mathbf{x},z)=\int_{\Gamma}d\xi F_+^{\textrm{TM},||}(\xi)e^{ik_0n((z-d)\cos{\xi}+\varrho\sin{\xi})}
\end{eqnarray}
with
\begin{eqnarray}
F_+^{\textrm{TM},||}(\xi)=\frac{\boldsymbol{\mu}_{||}\cdot\hat{\boldsymbol{\varrho}}}{8\pi}k_0n\cos{\xi}
\tilde{T}_{13}^{\textrm{TM}}(k_0n\sin{\xi})\nonumber\\e^{i\sqrt{(\frac{\varepsilon_1}{\varepsilon_3}-\sin^2{\xi})}h}
\cdot e^{ik_0nd\cos{\xi}}H_1^{(+)}(k_0n\varrho\sin{\xi})e^{-ik_0n\varrho\sin{\xi}}.\nonumber\\
\end{eqnarray} Similarly for the TE waves we obtain:
\begin{eqnarray}
\Psi^{\textrm{TE},||}(\mathbf{x},z)=\int_{\Gamma}d\xi F_+^{\textrm{TE},||}(\xi)e^{ik_0n((z-d)\cos{\xi}+\varrho\sin{\xi})}
\end{eqnarray}
with\begin{eqnarray}
F_+^{\textrm{TE},||}(\xi)=-\frac{\boldsymbol{\mu}_{||}\cdot\hat{\boldsymbol{\varphi}}}{8\pi}\frac{k_0\cos{\xi}}{\sqrt{(\frac{\varepsilon_1}{\varepsilon_3}-\sin^2{\xi})}}
\tilde{T}_{13}^{\textrm{TE}}(k_0n\sin{\xi})\nonumber\\e^{i\sqrt{(\frac{\varepsilon_1}{\varepsilon_3}-\sin^2{\xi})}h}
\cdot e^{ik_0nd\cos{\xi}}H_1^{(+)}(k_0n\varrho\sin{\xi})e^{-ik_0n\varrho\sin{\xi}}.\nonumber\\
\end{eqnarray}We used the formula $H^{(+)}_1(u)+H^{(+)}_1(-u)=2J_1(u)$. Here the Fresnel coefficients are defined by \begin{eqnarray}
\tilde{T}_{13}^{\textrm{TE}}(k)=\frac{T_{23}^{\textrm{TE}}T_{12}^{\textrm{TE}}}{1+R_{23}^{\textrm{TE}}R_{12}^{\textrm{TE}}e^{2ik_2d}}e^{i(k_2-k_3)d}
\end{eqnarray} with
\begin{eqnarray}
R_{ij}^{\textrm{TE}}=\frac{k_i-k_j}{k_i+k_j}\\
T_{ij}^{\textrm{TE}}=\frac{2k_i}{k_i+k_j}.
\end{eqnarray}
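For concreteness, the layered-medium transmission coefficients defined above can be evaluated numerically as in the following Python sketch; the wavelength, film thickness, and permittivities are illustrative values only and are not meant to reproduce any figure of this paper:
\begin{verbatim}
import numpy as np

lam = 633e-9                                # vacuum wavelength (assumption)
k0 = 2 * np.pi / lam
eps = {1: 1.0 + 0j, 2: -16.0 + 1.0j, 3: 2.25 + 0j}   # air / metal / glass
d = 50e-9                                   # metal film thickness (assumption)

def kz(j, k):                               # k_j = sqrt(k0^2 eps_j - k^2), Imag >= 0
    val = np.sqrt(k0**2 * eps[j] - k**2 + 0j)
    return np.where(val.imag < 0, -val, val)

def r_t(i, j, k, pol):                      # single-interface Fresnel coefficients
    a = kz(i, k) / (eps[i] if pol == "TM" else 1.0)
    b = kz(j, k) / (eps[j] if pol == "TM" else 1.0)
    return (a - b) / (a + b), 2 * a / (a + b)

def T13(k, pol):                            # film transmission coefficient T~_13
    r12, t12 = r_t(1, 2, k, pol)
    r23, t23 = r_t(2, 3, k, pol)
    k2, k3 = kz(2, k), kz(3, k)
    return (t12 * t23 * np.exp(1j * (k2 - k3) * d)
            / (1 + r12 * r23 * np.exp(2j * k2 * d)))

k = np.linspace(0.0, 2 * k0, 500)           # in-plane wavevector
tm, te = T13(k, "TM"), T13(k, "TE")
\end{verbatim}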
The presence of the singular Hankel functions $H_1^{(+)}$ and
$H_0^{(+)}$ in all these expressions implies the existence of a branch
cut starting at the origin and associated with the function
$1/\sqrt{\sin{\xi}}$. This branch cut is chosen so as to have
no influence during the subsequent contour deformations: it runs
just below the actual path $\Gamma$, slightly off the real axis
$\xi'$, and to the left of the vertical line $\xi'=-\pi/2$ (the
original branch cut is composed of the line $\xi'=-\pi/2$ and of the
half-axis [$\xi''=0$, $\xi'\leq 0$]). We also introduce the polar
coordinates $\varrho=r\sin{\vartheta}$, $z=d+r\cos{\vartheta}$,
leading to $(z-d)\cos{\xi}+\varrho\sin{\xi}=r\cos{(\xi-\vartheta)}$
and therefore:
\begin{eqnarray}
\Psi(\mathbf{x},z)=\int_{\Gamma}d\xi F_+(\xi)e^{ik_0nr\cos{(\xi-\vartheta)}}.\label{field}
\end{eqnarray}
The definition of the square root
$k_1=k_0n\sqrt{(\frac{\varepsilon_1}{\varepsilon_3}-\sin^2{\xi})}$,
with $\varepsilon_3=n^2$ real and
$\varepsilon_1=\varepsilon_1'+i\varepsilon_1''\sim 1+i\delta$ with
$\delta\rightarrow 0^+$, implies the presence of a branch cut which
must be chosen carefully in order i) to be consistent with the
choice for $k_1$ made in Eq.~3 during integration along the contour
$\Gamma$, ii) to allow further contour deformations leading to
convergent calculations. The branch cut adapted to our problem is
shown in Figs.~1, 2 and corresponds to the choice
$\textrm{Imag}[k_1]\geq0$ in the whole complex $\xi$-plane. The
branch cut starts at the branch point $M$ of coordinate $\xi_c$
defined by the condition $k_1=0$. We point out that due to
invariance of the Fresnel coefficient
$\tilde{T}_{13}^{\textrm{TM},\textrm{TE}}(k_0n\sin{\xi})$ under the
permutation $k_2\leftrightarrow- k_2$, we do not actually need an
additional branch cut for $k_2$ (this important property will
survive for a larger number of layers).
\section{The different contributions along the closed contour}
After introducing the function $f(\xi)=i\cos(\xi-\vartheta)=i\cos{(\xi'-\vartheta)}\cosh{\xi''}+\sin{(\xi'-\vartheta)}\sinh{\xi''}$ we define the steepest descent path $SDP$ by the condition\begin{eqnarray}\textrm{Imag}[f(\xi)]=\cos{(\xi'-\vartheta)}\cosh{\xi''}=1.~\label{SDP}\end{eqnarray} $SDP$ goes through the saddle point $\xi_0$ defined by the condition $\frac{ df(\xi)}{d\xi}=0$, which has a solution at $\xi_0=\vartheta$. Importantly, there are actually two trajectories that are solutions of Eq.~\ref{SDP} and go through $\xi_0$. We choose the one such that the real part of $f(\xi)$ decays uniformly along the $SDP$ when going arbitrarily far to the left or to the right of the saddle point (see Fig.~1).\\
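As a side illustration (our own sketch, with an arbitrary observation angle $\vartheta$), the $SDP$ branch selected above can be constructed explicitly by solving Eq.~\ref{SDP} for $\xi''$ and checking that $\textrm{Real}[f]$ is maximal at the saddle point:
\begin{verbatim}
# Minimal sketch (arbitrary theta): explicit construction of the SDP of
# Eq. (SDP) on the branch where Real[f] decays away from the saddle point.
import numpy as np

theta = 0.6                                   # observation angle (assumed)

def f(xi):
    return 1j * np.cos(xi - theta)

# Parameterize the SDP by xi' in (theta - pi/2, theta + pi/2):
xip  = theta + np.linspace(-np.pi/2 + 1e-3, np.pi/2 - 1e-3, 401)
xipp = -np.sign(xip - theta) * np.arccosh(1.0 / np.cos(xip - theta))
xi_sdp = xip + 1j * xipp

# Checks: Imag[f] = 1 along the path and Real[f] <= 0 (maximal at the saddle).
assert np.allclose(f(xi_sdp).imag, 1.0, atol=1e-10)
assert np.all(f(xi_sdp).real <= 1e-12)
print("saddle at", theta, "; Re f range:",
      f(xi_sdp).real.min(), f(xi_sdp).real.max())
\end{verbatim}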
\indent Cauchy's theorem allows us to deform the original $\Gamma$
contour to include the $SDP$ as a part of the integration path. For this
we label $\Gamma$ by the letters $ABCD$ (see Fig.~1). The integral in
Eq.~\ref{field} is thus written $\int_{\Gamma}=\int_{ABCD}$. We will
consider two cases, depending on whether or not $\vartheta$ is larger
than the real part of the branch point $\xi'_c\simeq
\arcsin{(1/n)}=\vartheta_c$.
\subsection{Closing the contour in the case $\vartheta>\xi'_c$}
If $\vartheta>\xi'_c$ the closed integration contour contains eight
contributions (see Fig.~1) and we have:
\begin{equation}0=\int_{\Gamma}+\int_{DE}+\int_{EF}+\tilde{\int_{FG}}+\tilde{\int_{GH}}+\int_{HI}+\int_{IA}-I_{SP}.\end{equation}
The contribution \begin{eqnarray}
\int_{DE}:=\int_{DE}d\xi F_+(\xi)e^{ik_0nr\cos{(\xi-\vartheta)}}\nonumber \\
=\int_{\pi/2-i\infty}^{\pi/2+\vartheta-i\infty}d\xi F_+(\xi)e^{ik_0nr\cos{(\xi-\vartheta)}}
\end{eqnarray}
approaches zero asymptotically and can therefore be neglected.\\
Similarly, we can neglect $\int_{IA}:=\int_{+i\infty}^{-\pi/2+i\infty}d\xi F_+(\xi)e^{ik_0nr\cos{(\xi-\vartheta)}}$, which also approaches zero asymptotically.\\
The contributions $\int_{EF}$ and $\tilde{\int_{FG}}$ are calculated along the $SDP$. However, due to the presence of the branch cut crossing the $SDP$ at $F$, the integration along $FG$ actually corresponds to a change of Riemann sheet, associated with the second determination of the square root $k_1$ (we point out that, since the branch cut is very close to the imaginary axis at $F$, we have in this limit $\xi_F\simeq i\textrm{arccosh}{(1/\cos{\vartheta})}$). More precisely, if we call ``+'' the Riemann sheet in which $\textrm{Imag}[k_1]\geq0$, the second Riemann
\begin{figure*}[h!t]
\centering
\includegraphics[width=16cm]{image1.eps}
\caption{Integration contour in the complex $\xi$-plane for $\vartheta>\xi'_c$. }
\end{figure*}
surface ``-'' associated with the condition
$\textrm{Imag}[k_1]\leq0$ is connected to ``+'' through the branch
cut represented in Fig.~1. Therefore, crossing the branch cut at $F$
corresponds actually to a change in sign of the square root
$k_1\rightarrow-k_1$. We have consequently the contributions
\begin{eqnarray}
\int_{EF}:=\int_{EF}d\xi F_+(\xi)e^{ik_0nr\cos{(\xi-\vartheta)}}\nonumber \\
\tilde{\int_{FG}}:=\int_{FG}d\xi F_-(\xi)e^{ik_0nr\cos{(\xi-\vartheta)}}
\end{eqnarray}
where $F_{-}(\xi)$ is the same function as $F_+(\xi)$ but with $\sqrt{(\frac{\varepsilon_1}{\varepsilon_3}-\sin^2{\xi})}$ (defined with $\textrm{Imag}[\sqrt{(\frac{\varepsilon_1}{\varepsilon_3}-\sin^2{\xi})}]\geq 0$) now replaced by $-\sqrt{(\frac{\varepsilon_1}{\varepsilon_3}-\sin^2{\xi})}$. More precisely, the square root $z_{+}=\sqrt{g}$ of the complex variable $g'+ig''$ is defined on the ``+'' Riemann sheet by $z_{+}=\textrm{sign}(g'')\sqrt{((g'+|g|)/2)}+i\sqrt{((-g'+|g|)/2)}$ where $\textrm{sign}(x)=1$ if $x>0$, $\textrm{sign}(x)=-1$ if $x<0$ and $\textrm{sign}(x)=0$ if $x=0$. On the ``-'' sheet we therefore have $z_{-}=-z_{+}$. An important remark concerns the convergence of the integration along the SDP when approaching the vertical asymptotes $\pm\frac{\pi}{2}+\vartheta\mp i\infty$. Indeed, due to the presence of the coefficient $e^{\pm ik_0nh\sqrt{(\frac{\varepsilon_1}{\varepsilon_3}-\sin^2{\xi})}}$ in the definition of $F_{\pm}(\xi)$, it is not obvious that the integrand takes a finite value at infinity. Actually, a careful study of the limiting behaviour of $F_{\pm}(\xi)$, including the exponential terms as
well as the Hankel function contribution, shows that there is no convergence problem for $F_{+}(\xi)$ at infinity (this also explains why $\int_{DE}$ and $\int_{IA}$ go to zero asymptotically). However, when going onto the ``-'' Riemann sheet the convergence is not always ensured. We found that no problem occurs on this second sheet as soon as the condition $z+\varrho\tan{\vartheta}>h$ is verified. In particular, no problem appears if we impose $z>h$. Since here we are interested in the asymptotic behavior valid for $z\gg h$, this condition will be automatically satisfied.\\
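For completeness, the two determinations of the square root used here can be coded directly from the explicit formula given above; the short sketch below (illustration only, keeping the convention $\textrm{sign}(0)=0$ of the text) checks that $z_\pm^2=g$ and that $\textrm{Imag}[z_+]\geq0$:
\begin{verbatim}
# Sketch of the two Riemann-sheet determinations of the square root,
# following the explicit formula of the text.
import numpy as np

def sqrt_plus(g):
    g = np.asarray(g, dtype=complex)
    gp, gpp, mod = g.real, g.imag, np.abs(g)
    return np.sign(gpp) * np.sqrt((gp + mod) / 2) + 1j * np.sqrt((-gp + mod) / 2)

def sqrt_minus(g):
    return -sqrt_plus(g)       # second ('-') Riemann sheet

rng = np.random.default_rng(0)
g = rng.normal(size=50) + 1j * rng.normal(size=50)
zp = sqrt_plus(g)
assert np.allclose(zp**2, g)   # both determinations square back to g
assert np.all(zp.imag >= 0)    # '+' sheet: Imag[k1] >= 0
print(sqrt_plus(-1.0 + 0j), sqrt_minus(-1.0 + 0j))   # 1j and -1j
\end{verbatim}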
\indent This convergence condition is particularly relevant when we consider the contribution $\tilde{\int}_{GH}:=\int_{-\pi/2+\vartheta+i\infty}^{i\infty}d\xi F_-(\xi)e^{ik_0nr\cos{(\xi-\vartheta)}}$, which approaches zero if the previous condition $z>h$ is fulfilled.
\indent From $H$ we thus cross the branch cut and go back to the ``+'' sheet. We thus obtain a contribution $\int_{HI}=\int_{HI}d\xi F_+(\xi)e^{ik_0nr\cos{(\xi-\vartheta)}}$ running along the branch cut in the original ``+'' space and going around the branch point $k_1=0$ (corresponding nearly to $\xi_c\simeq \arcsin{(1/n)}=\vartheta_c$). We will see in subsection D that this contribution corresponds to a lateral wave associated with a Goos-H\"{a}nchen effect in transmission.\\
\indent Finally, due to the presence of isolated singularities in the complex plane (i.e. poles) for the TM waves, we must subtract a residue contribution $I_{SP}$ whose value precisely depends on the position $\vartheta$ along the real axis (i.e. on whether or not the poles are encircled by the closed contour in the complex $\xi$-plane). A complete analysis of these singularities shows that we can in principle extract from the transmission coefficient $\tilde{T}_{13}^{\textrm{TM}}(k)$ four poles corresponding to the four SP modes guided along the metal slab. However, the branch cut choice made here allows only the existence of three solutions called respectively symmetric leaky ($s_l$), symmetric bound ($s_b$) and asymmetric bound ($a_b$) modes. The two bound modes are always well outside the region of integration and are never encircled by the contour. Only the leaky mode $s_l$ can possibly contribute as a residue, depending on whether or not the angle $\vartheta$ is larger than the leakage radiation angle $\vartheta_{LR}$ defined by the condition $\cos{(\xi_p'-\vartheta_{LR})}\cosh{\xi_p''}=1$ (with $\xi_p$ the complex coordinate of the SP pole $s_l$). This implies:
\begin{equation}
\vartheta_{LR}=\xi_p'+\arccos{(1/\cosh{\xi_p''})}\simeq \xi_p',
\end{equation}
and therefore the residue contribution is written:
\begin{eqnarray}
I_{SP}=2\pi i \textrm{Res}[F_+(\xi_p)]e^{ik_0nr\cos{(\xi_p-\vartheta)}}\Theta(\vartheta-\vartheta_{LR}).\nonumber\\
\end{eqnarray} In the following we write $k_p=k_0n\sin{\xi_p}$, $k_{3,p}=k_0n\cos{\xi_p}$ and $k_{1,p}=k_0n\sqrt{(\frac{\varepsilon_1}{\varepsilon_3}-\sin^2{\xi_p})}$ the pole wavevectors associated with this $s_l$ mode.
The calculation of the different residues is straightforward and leads for the vertical dipole case to:
\begin{widetext}\begin{eqnarray}
\textrm{Res}[F_+^{\textrm{TM},\bot}(\xi_p)]e^{ik_0nr\cos{(\xi_p-\vartheta)}}=\frac{i\mu_\bot}{8\pi}\frac{k_0n\sin{\xi_p}\cos{\xi_p}}{\sqrt{(\frac{\varepsilon_1}{\varepsilon_3}-\sin^2{\xi_p})}}
\textrm{Res}[\tilde{T}_{13}^{\textrm{TM}}(k_0n\sin{\xi_p})]e^{i\sqrt{(\frac{\varepsilon_1}{\varepsilon_3}-\sin^2{\xi_p})}h}
\cdot e^{ik_0nz\cos{\xi_p}}H_0^{(+)}(k_0n\varrho\sin{\xi_p})\nonumber\\
\end{eqnarray}\end{widetext}
We now write $\tilde{T}_{13}^{\textrm{TM}}(k)$ as a rational fraction $\frac{N_{13}(k)}{D_{13}(k)}$ (with polynomial functions $N_{13}(k)$, $D_{13}(k)$ of the variable $k$) and therefore for the single pole $\xi_p$ we get $$\textrm{Res}[\tilde{T}_{13}^{\textrm{TM}}(k_0n\sin{\xi_p})]=\frac{N_{13}(k_p)}{\frac{\partial D_{13}(k_0n\sin{\xi_p})}{\partial \xi_p} }=\frac{1}{k_{3,p}}\frac{N_{13}(k_p)}{\frac{\partial D_{13}(k_p)}{\partial k_p} }.$$ We thus have finally
\begin{eqnarray}
\textrm{Res}[F_+^{\textrm{TM},\bot}(\xi_p)]e^{ik_0nr\cos{(\xi_p-\vartheta)}}\nonumber\\
=\frac{i\mu_\bot}{8\pi}\frac{k_p}{k_{1,p}}e^{ik_{1,p}h}e^{ik_{3,p}z}\frac{N_{13}(k_p)}{\frac{\partial D_{13}(k_p)}{\partial k_p} }H_0^{(+)}(k_p\varrho).
\end{eqnarray}
A similar expression is obtained for the horizontal dipole:
\begin{eqnarray}
\textrm{Res}[F_+^{\textrm{TM},||}(\xi_p)]e^{ik_0nr\cos{(\xi_p-\vartheta)}}\nonumber\\
=\frac{\boldsymbol{\mu}_{||}\cdot\hat{\boldsymbol{\varrho}}}{8\pi}e^{ik_{1,p}h}e^{ik_{3,p}z}\frac{N_{13}(k_p)}{\frac{\partial D_{13}(k_p)}{\partial k_p} }H_1^{(+)}(k_p\varrho).
\end{eqnarray}
There is no residue for the TE modes.\\
\indent Going back to the SDP contribution, we define the variable $\tau=e^{i\pi/4}\sqrt{2}\sin{((\xi-\vartheta)/2)}$ which leads to $f(\xi)=i-\tau^2$. Along the $SDP$, $\tau$ is real, such that $\tau^2=-\sin{(\xi'-\vartheta)}\sinh{\xi''}\geq 0$. We thus obtain $\tau=2\sin{((\xi'-\vartheta)/2)}\cosh{(\xi''/2)}$. The saddle point corresponds to $\tau=0$ ($\tau>0$ if $\xi''<0$ and $\tau<0$ if $\xi''>0$ along the $SDP$). With this new variable the point $F$ therefore has the coordinate $\tau_F\simeq-2\sin{(\vartheta/2)}\cosh{(\frac{1}{2}\textrm{arccosh}(1/\cos(\vartheta)))}=-\sqrt{2}\sin{(\vartheta/2)}\sqrt{1+\frac{1}{\cos{\vartheta}}}<0$. Defining the term $I_{SDP}=-\int_{EF}-\tilde{\int_{FG}}$ we therefore obtain
\begin{eqnarray}
I_{SDP}=e^{ik_0nr}\{\int_{-\infty}^{\tau_F^-}d\tau G_{-}(\tau)e^{-k_0nr\tau^2}\nonumber\\+\int_{\tau_F^+}^{+\infty}d\tau G_{+}(\tau)e^{-k_0nr\tau^2}\}\nonumber\\
=e^{ik_0nr}\int_{-\infty}^{+\infty}d\tau\{G_{+}(\tau)[1-\Theta(\tau_F-\tau)]\nonumber\\+G_{-}(\tau)[1-\Theta(\tau-\tau_F)]\}e^{-k_0nr\tau^2}
\end{eqnarray} where we defined $G_{\pm}(\tau)=F_{\pm}(\xi)\frac{d\xi}{d\tau}$ and used $\frac{d\xi}{d\tau}=\sqrt{2}e^{-i\pi/4}/\cos{((\xi-\vartheta)/2)}$. We introduced the Heaviside step function $\Theta(x)$ defined as: $\Theta(x)=1$ if $x\geq0$ and $\Theta(x)=0$ otherwise. Importantly, $\lim_{\tau\rightarrow \tau_F^+} G_{+}(\tau)=\lim_{\tau\rightarrow \tau_F^-}G_{-}(\tau)$, and therefore the function $G(\tau)=G_{+}(\tau)(1-\Theta(\tau_F-\tau))+G_{-}(\tau)(1-\Theta(\tau-\tau_F))$, which is not defined at $\tau_F$, can be continued without difficulty through $F$.
\subsection{Closing the contour in the case $\vartheta<\xi'_c$}
If $\vartheta<\xi'_c$ the closed integration contour contains six
contributions (see Fig.~2) and we have:
\begin{equation}0=\int_{\Gamma}+\int_{DE}+\int_{EF}+\tilde{\int_{FG}}+\int_{GH}+\int_{HA}.\end{equation}
All these contributions but $\tilde{\int_{FG}}$ are defined on the ``+'' Riemann sheet. $\int_{HA}$ and $\int_{DE}$ tend asymptotically to zero for reasons already discussed in the previous \begin{figure*}[h!t]
\centering
\includegraphics[width=16cm]{image2.eps}
\caption{Integration contour in the complex $\xi$-plane for $\vartheta<\xi'_c$. }
\end{figure*}paragraph. Importantly, there is no contribution along the branch cut, since the integration path along the SDP starts and finishes on the proper Riemann sheet ``+''. Regrouping the terms we thus have for $\vartheta<\xi'_c$: $\int_{\Gamma}=I_{SDP}=-\int_{EF}-\tilde{\int_{FG}}-\int_{GH}$ with
\begin{eqnarray}
\int_{EF}:=\int_{EF}d\xi F_+(\xi)e^{ik_0nr\cos{(\xi-\vartheta)}}\nonumber \\
\tilde{\int_{FG}}:=\int_{FG}d\xi F_-(\xi)e^{ik_0nr\cos{(\xi-\vartheta)}}\nonumber \\
\int_{GH}:=\int_{GH}d\xi F_+(\xi)e^{ik_0nr\cos{(\xi-\vartheta)}}.
\end{eqnarray}
We then use the same variable $\tau$ and function $G_{\pm}(\tau)$ and thus obtain
\begin{eqnarray}
I_{SDP}=e^{ik_0nr}\{\int_{-\infty}^{\tau_F^-}d\tau G_{+}(\tau)e^{-k_0nr\tau^2}\nonumber\\+\int_{\tau_F^+}^{\tau_G^-}d\tau G_{-}(\tau)e^{-k_0nr\tau^2}\nonumber\\+\int_{\tau_G^+}^{+\infty}d\tau G_{+}(\tau)e^{-k_0nr\tau^2}\}\end{eqnarray}
which is rewritten as
\begin{eqnarray}
I_{SDP}=e^{ik_0nr}\int_{-\infty}^{+\infty}d\tau G(\tau)e^{-k_0nr\tau^2}\end{eqnarray}
with\begin{eqnarray}
G(\tau)=G_{+}(\tau)[1-\Theta(\tau-\tau_F)]\nonumber\\
+G_{+}(\tau)[1-\Theta(\tau_G-\tau)]\nonumber\\+G_{-}(\tau)[1-\Theta(\tau_F-\tau)][1-\Theta(\tau-\tau_G)].
\end{eqnarray}
\subsection{The Steepest descent path contribution}
The previous integral $I_{SDP}$, for both $\vartheta>\xi'_c$ and $\vartheta<\xi'_c$, is of Gaussian form and can be evaluated by a Taylor expansion of $G(\tau)$ around $\tau=0$. Using well-known integrals we thus obtain
\begin{eqnarray}
I_{SDP}=e^{ik_0nr}\sum_{m\in \textrm{even}}\frac{\Gamma(\frac{m+1}{2})}{m!(k_0nr)^{\frac{m+1}{2}}}\frac{d^m}{d\tau^m}G(0).\label{taylor}
\end{eqnarray}
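Eq.~\ref{taylor} relies on the elementary Gaussian moments $\int_{-\infty}^{+\infty}d\tau\,\tau^m e^{-a\tau^2}=\Gamma(\frac{m+1}{2})\,a^{-\frac{m+1}{2}}$ for even $m$ (odd moments vanish), with $a=k_0nr$; a quick numerical check of this standard result (arbitrary value of $a$) reads:
\begin{verbatim}
# Check of the even Gaussian moments underlying Eq. (taylor); the value of
# a (playing the role of k0*n*r) is arbitrary.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

a = 3.7
for m in range(0, 9, 2):
    num, _ = quad(lambda t, m=m: t**m * np.exp(-a * t**2), -np.inf, np.inf)
    exact = gamma((m + 1) / 2) / a**((m + 1) / 2)
    print(m, num, exact)    # the two columns agree to quadrature accuracy
\end{verbatim}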
It is important to observe that $G(\tau)$ is highly singular in the vicinity of the SP pole $s_l$. Writing $\tau_p$ the coordinate of the pole in the $\tau$-space we thus define
\begin{eqnarray}
G(\tau):=G_0(\tau)+\frac{\textrm{Res}[G(\tau_p)]}{\tau-\tau_p}
\end{eqnarray} which (together with Eq.~\ref{taylor}) immediately implies
\begin{eqnarray}
I_{SDP}=e^{ik_0nr}\sum_{m\in \textrm{even}}\frac{\Gamma(\frac{m+1}{2})}{(k_0nr)^{\frac{m+1}{2}}}\{\frac{1}{m!}\frac{d^m}{d\tau^m}G_0(0)\nonumber\\
-\frac{\textrm{Res}[G(\tau_p)]}{\tau_p^{m+1}}\}.
\end{eqnarray}Remarkably, the singular integral $$I_{SDP}^{\textrm{pole}}:=e^{ik_0nr}\int_{-\infty}^{+\infty}d\tau\frac{\textrm{Res}[G(\tau_p)]}{\tau-\tau_p}e^{-k_0nr\tau^2}$$ can be directly calculated and we thus obtain
\begin{eqnarray}
I_{SDP}=e^{ik_0nr}\sum_{m\in \textrm{even}}\frac{\Gamma(\frac{m+1}{2})}{m!(k_0nr)^{\frac{m+1}{2}}}\frac{d^m}{d\tau^m}G_0(0)+I_{SDP}^{\textrm{pole}}\nonumber\\
\end{eqnarray}
with
\begin{eqnarray}
I_{SDP}^{\textrm{pole}}=-2i\pi\textrm{Res}[G(\tau_p)]e^{ik_0nr\cos{(\xi_p-\vartheta)}}\{\Theta(-\tau_p'')
\nonumber\\
-\frac{1}{2}\textrm{erfc}(-i\tau_p\sqrt{(k_0nr)})\}\nonumber\\
=-e^{ik_0nr}\sum_{m\in \textrm{even}}\frac{\Gamma(\frac{m+1}{2})}{(k_0nr)^{\frac{m+1}{2}}}\frac{\textrm{Res}[G(\tau_p)]}{\tau_p^{m+1}}
\end{eqnarray}
where $\textrm{erfc}(z)=(2/\sqrt\pi)\int_{z}^{+\infty} e^{-t^2}dt$ is the Gauss complementary error function.
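The closed form quoted above follows from the standard integral $\int_{-\infty}^{+\infty}d\tau\,\frac{e^{-a\tau^2}}{\tau-\tau_p}=-2i\pi e^{-a\tau_p^2}\{\Theta(-\tau_p'')-\frac{1}{2}\textrm{erfc}(-i\tau_p\sqrt{a})\}$ with $a=k_0nr$; the following sketch (generic values of $a$ and $\tau_p$, chosen only for illustration) verifies it for a pole in either half-plane:
\begin{verbatim}
# Numerical check of the pole integral used for I_SDP^pole (generic values).
# We use the Faddeeva function wofz(z) = exp(-z**2) erfc(-i z), so that
# exp(-a tau_p**2) erfc(-i tau_p sqrt(a)) = wofz(sqrt(a) tau_p).
import numpy as np
from scipy.integrate import quad
from scipy.special import wofz

def lhs(a, tau_p):
    f = lambda t: np.exp(-a * t**2) / (t - tau_p)
    re, _ = quad(lambda t: f(t).real, -np.inf, np.inf)
    im, _ = quad(lambda t: f(t).imag, -np.inf, np.inf)
    return re + 1j * im

def rhs(a, tau_p):
    theta = 1.0 if tau_p.imag <= 0 else 0.0       # Theta(-Im tau_p)
    return -2j * np.pi * (theta * np.exp(-a * tau_p**2)
                          - 0.5 * wofz(np.sqrt(a) * tau_p))

a = 5.0
for tau_p in (0.3 + 0.2j, -0.4 - 0.15j):   # poles above and below the axis
    print(lhs(a, tau_p), rhs(a, tau_p))    # the two columns should agree
\end{verbatim}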
Notably, we have (see appendix) $\textrm{Res}[G(\tau_p)]=\textrm{Res}[F_{+}(\xi_p)]$ and $\Theta(-\tau_p'')=\Theta(\vartheta-\vartheta_{LR})$; therefore $I_{SDP}^{\textrm{pole}}$ contains, up to a sign, the same contribution which already appeared in $I_{SP}$. Consequently, the sum $I_{SDP}^{\textrm{pole}}+I_{SP}$ of the two contributions proportional to the residue reduces to a simple explicit mathematical expression:
\begin{eqnarray}
I_{SDP}^{\textrm{pole}}+I_{SP}\nonumber\\=i\pi\textrm{Res}[G(\tau_p)]e^{ik_0nr\cos{(\xi_p-\vartheta)}}\textrm{erfc}(-i\tau_p\sqrt{(k_0nr)}).
\end{eqnarray}
This sum is sometimes, by definition, associated with the surface plasmon mode. We point out, however, that the error function is highly singular, and therefore we should preferably use the equivalent expression:
\begin{eqnarray}
I_{SDP}^{\textrm{pole}}+I_{SP}=2i\pi\textrm{Res}[G(\tau_p)]e^{ik_0nr\cos{(\xi_p-\vartheta)}}\Theta(\vartheta-\vartheta_{LR})\nonumber\\
-e^{ik_0nr}\sum_{m\in \textrm{even}}\frac{\Gamma(\frac{m+1}{2})}{(k_0nr)^{\frac{m+1}{2}}}\frac{\textrm{Res}[G(\tau_p)]}{\tau_p^{m+1}}.\nonumber\\
\end{eqnarray}We also note that most of the discussions and confusions during the XX$^{th}$ century on the role of SPs in the Sommerfeld integral resulted from the above-mentioned intricate relationship between the two singular terms $I_{SP}$ and $I_{SDP}^{\textrm{pole}}$. For a historical discussion see Collin~\cite{Collin}.
\subsection{The lateral wave contribution: Goos-H\"{a}nchen effect in transmission}
In the case $\vartheta>\xi'_c$ the integral $\int_{HI}$ along the
branch cut can be transformed using the method described in Ref.~2.
For this we separate the integral $\int_{HI}d\xi
F_+(\xi)e^{ik_0nr\cos{(\xi-\vartheta)}}$ into
\begin{figure*}[h!t]
\centering
\includegraphics[width=12cm]{image4.eps}
\caption{Integration contour in the complex $\xi$-plane along the branch cut $HI$ around the branch point $M$ for $\vartheta>\xi'_c$. (A) shows the closed contour used to deform analytically the contour $HM$. (B) shows the closed contour used to deform the contour $MI$. }
\end{figure*}
a contribution $\int_{HM}=\int_{HM}d\xi
F_+(\xi)e^{ik_0nr\cos{(\xi-\vartheta)}}$ starting at infinity at
$\xi=i\infty+0^+$ and stopping at the branch-point $M$ ($\xi_c\simeq
\vartheta_c$) and into a contribution $\int_{MI}=\int_{MI}d\xi
F_+(\xi)e^{ik_0nr\cos{(\xi-\vartheta)}}$ starting at $M$ and
finishing at infinity $\xi=i\infty+0^-$ on the other side of the
branch cut. As shown in Fig.~3(A), in order to calculate $\int_{HM}$
the integration contour is closed by following the modified steepest
descent path $MG$, defined by the equation
\begin{eqnarray}
\cos{(\xi'-\vartheta)}\cosh{\xi''}=\cos{(\xi_c'-\vartheta)}\cosh{\xi_c''}\nonumber\\ \simeq\cos{(\vartheta_c-\vartheta)}=K<1.
\end{eqnarray}
The curve $MG$ with a vertical asymptote at $\xi=-\pi/2+\vartheta$
is thus defined by
$\xi''=\textrm{arccosh}(K/\cos{(\xi'-\vartheta)})$. We thus have
$0=\int_{HM}+\int_{MN}+\tilde{\int}_{NG}+\tilde{\int}_{GH}$ where
$N$ is the intersection point between the modified steepest descent
path $MG$ and the branch cut ($\xi_N\simeq
i\textrm{arccosh}(K/\cos{(\vartheta)})$).
$\tilde{\int}_{NG}:=\int_{NG}d\xi
F_-(\xi)e^{ik_0nr\cos{(\xi-\vartheta)}}$ and
$\tilde{\int}_{GH}:=\int_{GH}d\xi
F_-(\xi)e^{ik_0nr\cos{(\xi-\vartheta)}}$ are evaluated on the ``-''
Riemann sheet while $\int_{HM}:=\int_{HM}d\xi
F_+(\xi)e^{ik_0nr\cos{(\xi-\vartheta)}}$ and
$\int_{MN}:=\int_{MN}d\xi F_+(\xi)e^{ik_0nr\cos{(\xi-\vartheta)}}$
are evaluated on the ``+'' Riemann sheet. From $H$ we cross a second
time the branch cut in order to close the contour on the ``+''
Riemann sheet.\\
A similar analysis is done for the integration contour $\int_{MI}$.
We have $0=\int_{MI}+\int_{IG}+\int_{GN}+\tilde{\int}_{NM}$ where
$\int_{MI}$, $\int_{IG}$ and $\int_{GN}$ are defined as previously
on the ``+'' Riemann sheet while $\tilde{\int}_{NM}:=\int_{NM}d\xi
F_-(\xi)e^{ik_0nr\cos{(\xi-\vartheta)}}$ is evaluated on the ``-''
Riemann sheet. In order to close the contour in ``+'' we must
finally cross the branch cut in the region surrounding $M$. The
infinitesimal loop surrounding $M$ gives however a vanishing
contribution which can be neglected.\\ Regrouping all these
expressions we define $I_{LW}=-\int_{HM}-\int_{MI}$ and we obtain
\begin{widetext}\begin{eqnarray}
I_{LW}=\int_{\xi_c}^{\xi_N} d\xi [F_+(\xi)-F_-(\xi)]e^{ik_0nr\cos{(\xi-\vartheta)}}
+\int_{\xi_N}^{-\pi/2+\vartheta+i\infty} d\xi [F_-(\xi)-F_+(\xi)]e^{ik_0nr\cos{(\xi-\vartheta)}}+\int_{IG}+\tilde{\int_{GH}}.
\end{eqnarray}\end{widetext}
The contributions $\int_{IG}$, $\tilde{\int_{GH}}$ vanish asymptotically as discussed before and therefore can be neglected.
Importantly, due to the definition of the square root, the function $F_-(\xi)-F_+(\xi)$ tends to vanish at the intersection point $N$.
We then define the function $\Phi(\xi)=[F_+(\xi)-F_-(\xi)]\textrm{sign}(\xi''_N-\xi'')$, vanishing at $\xi_c$, and write
\begin{eqnarray}
I_{LW}=\Theta(\vartheta-\xi'_c)\int_{\xi_c}^{-\pi/2+\vartheta+i\infty} d\xi \Phi(\xi)e^{ik_0nr\cos{(\xi-\vartheta)}}.\nonumber\\
\end{eqnarray} The Heaviside function was introduced in order to recall that $I_{LW}$ is only defined if $\xi'_c<\vartheta$.
In the present work we will only evaluate $I_{LW}$ approximately
using the method discussed in Ref.~2. First, we observe that
$e^{ik_0nr\cos{(\xi-\vartheta)}}=e^{ik_0nrK}e^{k_0nr\sin{(\xi'-\vartheta)}\sinh{\xi''}}$.
Second, considering that only $\xi$ values in the vicinity of
$\xi_c\simeq\vartheta_c$ contribute significantly to $I_{LW}$ we
write $d\xi\approx id\xi''$, and
$\sin{(\xi'-\vartheta)}\sinh{\xi''}\approx-\sin{(\vartheta-\vartheta_c)}\xi''<0$.
We therefore obtain
\begin{widetext}\begin{eqnarray}
I_{LW}\approx ie^{ik_0nrK}\Theta(\vartheta-\vartheta_c)\int_{0}^{+\infty} d\xi'' \Phi(\vartheta_c+i\xi'')e^{-k_0nr\sin{(\vartheta-\vartheta_c)}\xi''}\nonumber\\
=2ie^{ik_0nrK}\Theta(\vartheta-\vartheta_c)\int_{0}^{+\infty} udu \Phi(\vartheta_c+iu^2)e^{-k_0nr\sin{(\vartheta-\vartheta_c)}u^2}
\end{eqnarray}\end{widetext} where we used the variable $\xi''=u^2$. This integral is of the Gaussian kind and can be computed exactly using a Taylor expansion of $\Phi$ near $\vartheta_c$. We consequently deduce
\begin{widetext}\begin{eqnarray}
I_{LW}\approx ie^{ik_0nrK}\Theta(\vartheta-\vartheta_c)\sum_{m=1}^{+\infty}\frac{\Gamma(1+m/2)}{(k_0nr\sin{(\vartheta-\vartheta_c)})^{1+m/2}}\frac{H^{(m)}(0)}{m!}
\end{eqnarray}\end{widetext} where we used the series expansion $\Phi(\vartheta_c+iu^2)=H(u)=\sum_{m=1}^{+\infty}\frac{u^m}{m!}\frac{d^m}{du^m}H(u)|_{u=0}=\sum_{m=1}^{+\infty}u^m\frac{H^{(m)}(0)}{m!}$ (the term $m=0$ vanishes since $\Phi(\vartheta_c)=0$).\\
\begin{figure}[h]
\centering
\includegraphics[width=8.2cm]{image5.eps}\caption{Geometric construction of the Goos-H\"{a}nchen phase in transmission.}
\end{figure}
The phase
$\delta \varphi = k_0nr\cos{(\vartheta-\vartheta_c)}=k_0nr[\cos{\vartheta}\cos{\vartheta_c}+\sin{\vartheta}\sin{\vartheta_c}]$ has a simple interpretation if one defines the lengths $L_1$ and $L_2$ by:
\begin{eqnarray}
r\sin{\vartheta}=\varrho=L_1+L_2\sin{\vartheta_c}\nonumber\\
r\cos{\vartheta}=z-d=L_2\cos{\vartheta_c}.
\end{eqnarray}
Therefore we obtain
\begin{eqnarray}
\delta \varphi = k_0n(L_2+L_1\sin{\vartheta_c})=k_0nL_2+k_0L_1
\end{eqnarray}
where we used $n\sin{\vartheta_c}=1$. As is clear from Fig.~4, $L_1$ is the path length of a `creeping' wave propagating along the interface before being re-emitted at the critical angle $\vartheta_c$. The re-emitted wave propagates over a distance $L_2$ in the medium of optical index $n$ and then reaches the point defined by the coordinates $(r,\vartheta)$. The phase $\delta\varphi$ is thus generated by a virtual propagation of length $L_1$ along the air-dielectric interface $z=d$ (supposing no metal is present and that the volume corresponding to the film between $z=0$ and $z=d$ is filled with the medium of permittivity $\varepsilon_1\simeq 1$), followed by a re-emission at the critical angle into the glass substrate $\varepsilon_3=n^2$. The previous analysis therefore justifies the name ``lateral'' given to the contribution $I_{LW}$. This effect can be seen as a kind of Goos-H\"{a}nchen deflection in transmission and is somewhat analogous to the already known Goos-H\"{a}nchen effect associated with lateral waves in the reflection mode.\\
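A short numerical sketch (arbitrary geometry, for illustration only) makes this interpretation explicit: solving the relations above for $L_1$ and $L_2$ and comparing the two expressions of the phase gives identical results.
\begin{verbatim}
# Consistency check of the lateral-wave phase (assumed wavelength, index and
# observation point): delta_phi = k0*n*r*cos(theta-theta_c) = k0*n*L2 + k0*L1.
import numpy as np

n   = 1.5
k0  = 2 * np.pi / 800e-9          # assumed wavelength
theta_c = np.arcsin(1.0 / n)      # critical angle, n*sin(theta_c) = 1
r, theta = 50e-6, theta_c + 0.3   # observation point (assumed), theta > theta_c

L2 = r * np.cos(theta) / np.cos(theta_c)
L1 = r * np.sin(theta) - L2 * np.sin(theta_c)

dphi_direct  = k0 * n * r * np.cos(theta - theta_c)
dphi_lateral = k0 * n * L2 + k0 * L1
print(dphi_direct, dphi_lateral)   # identical up to rounding
assert np.isclose(dphi_direct, dphi_lateral)
\end{verbatim}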
\subsection{The Far-field Fraunhofer regime}
We are interested in evaluating the different integrals when $r\rightarrow +\infty$.
As a first approximation, concerning $I_{SDP}$ we calculate only the term $m=0$ in the sum which reads:
\begin{eqnarray}
I_{SDP,m=0}=\frac{\sqrt{\pi}e^{ik_0 nr}}{\sqrt{k_0nr}}G(0)=\frac{\sqrt{2\pi}e^{ik_0 nr}e^{-i\pi/4}}{\sqrt{k_0nr}}F_+(\vartheta).
\end{eqnarray}
In the far field, where $r\gg\lambda$, the Hankel functions can be approximated using the asymptotic formulas
\begin{eqnarray}
H^{(+)}_0(x)=\sqrt{\frac{2}{\pi x}}e^{-i\pi/4}(1-\frac{i}{8x})e^{ix}+O(x^{-5/2})\nonumber\\
H^{(+)}_1(x)=\sqrt{\frac{2}{\pi x}}e^{-i3\pi/4}(1+\frac{3i}{8x})e^{ix}+O(x^{-5/2})
\end{eqnarray} which are valid for $x\gg1$.
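Identifying $H^{(+)}_\nu$ with the standard Hankel function of the first kind $H^{(1)}_\nu$, these expansions are easily checked numerically (a generic sketch, independent of the physical parameters of the problem):
\begin{verbatim}
# Check of the large-argument Hankel expansions quoted above.
import numpy as np
from scipy.special import hankel1

x = np.linspace(20, 200, 10)
h0_asym = np.sqrt(2/(np.pi*x)) * np.exp(-1j*np.pi/4) * (1 - 1j/(8*x)) * np.exp(1j*x)
h1_asym = np.sqrt(2/(np.pi*x)) * np.exp(-3j*np.pi/4) * (1 + 3j/(8*x)) * np.exp(1j*x)

print(np.max(np.abs(hankel1(0, x) - h0_asym)))   # O(x^{-5/2}) residuals
print(np.max(np.abs(hankel1(1, x) - h1_asym)))
\end{verbatim}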
Therefore for the vertical dipole we get
\begin{widetext}\begin{eqnarray}
I_{SDP,m=0}^\bot=\frac{2\pi k_0n\cos{\vartheta}}{ir}e^{ik_0 nr}\tilde{\Psi}^{\textrm{TM},\bot}[k_0n\sin{\vartheta}\boldsymbol{\hat{\varrho}},z=d](1-\frac{i}{8k_0nr\sin{\vartheta}^2}+...)
\end{eqnarray}\end{widetext} where \begin{eqnarray}\tilde{\Psi}_{\textrm{TM},\bot}[\mathbf{k},z=d]=\frac{i\mu_\bot}{8\pi^2 k_1}\tilde{T}_{13}^{\textrm{TM}}(k)e^{ik_3 d}e^{ik_1 h}\nonumber\\=\frac{i\mu_\bot}{8\pi^2 k_0\sqrt{(1-n^2\sin{\vartheta}^2)}}\tilde{T}_{13}^{\textrm{TM}}(k)e^{ik_3 d}e^{ik_1 h}\end{eqnarray} is the 2D Fourier transform of $\Psi^{\textrm{TM},\bot}(\varrho,z)$ calculated at $z=d$ (i.e $\int \frac{d^2\mathbf{x}}{4\pi^2}\Psi^{\textrm{TM},\bot}(\varrho,z=d)e^{-i\mathbf{k}\cdot\mathbf{x}}$) for the wavevector $\mathbf{k}=k_0n\sin{\vartheta}\boldsymbol{\hat{\varrho}}$.
Similarly, for the horizontal dipole we obtain for the TM components:
\begin{widetext}\begin{eqnarray}
I_{SDP,m=0}^{||}=\frac{2\pi k_0n\cos{\vartheta}}{ir}e^{ik_0 nr}\tilde{\Psi}^{\textrm{TM},||}[k_0n\sin{\vartheta}\boldsymbol{\hat{\varrho}},z=d](1+\frac{3i}{8k_0nr\sin{\vartheta}^2}+...)
\end{eqnarray}\end{widetext}
where\begin{eqnarray}\tilde{\Psi}^{\textrm{TM},||}[\mathbf{k},z=d]=\frac{-i\boldsymbol{\mu}_{||}\cdot\mathbf{k}}{8\pi^2 k^2}\tilde{T}_{13}^{\textrm{TM}}(k)e^{ik_3 d}e^{ik_1 h}\nonumber\\=\frac{-i\boldsymbol{\mu}_{||}\cdot\boldsymbol{\hat{\varrho}}}{8\pi^2 k_0n\sin{\vartheta}}\tilde{T}_{13}^{\textrm{TM}}(k)e^{ik_3 d}e^{ik_1 h}.\end{eqnarray}For the TE components we also have:
\begin{widetext}\begin{eqnarray}
I_{SDP,m=0}^{||}=\frac{2\pi k_0n\cos{\vartheta}}{ir}e^{ik_0 nr}\tilde{\Psi}^{\textrm{TE},||}[k_0n\sin{\vartheta}\boldsymbol{\hat{\varrho}},z=d](1+\frac{3i}{8k_0nr\sin{\vartheta}^2}+...)
\end{eqnarray}\end{widetext} with now
\begin{eqnarray}\tilde{\Psi}^{\textrm{TE},||}[\mathbf{k},z=d]=\frac{ik_0\boldsymbol{\mu}_{||}\cdot(\mathbf{\hat{z}}\times\mathbf{k})}{8\pi^2 k_1k^2}\tilde{T}_{13}^{\textrm{TE}}(k)e^{ik_3 d}e^{ik_1 h}\nonumber\\=\frac{i\boldsymbol{\mu}_{||}\cdot\boldsymbol{\hat{\varphi}}}{8\pi^2 k_0 \sqrt{(1-n^2\sin{\vartheta}^2)}n\sin{\vartheta}}\tilde{T}_{13}^{\textrm{TE}}(k)e^{ik_3 d}e^{ik_1 h}.\end{eqnarray}
In the far field only the term in $1/r$ survives and (in agreement
with the Stratton-Chu formalism~\cite{stratton} and Richards and
Wolf~\cite{wolf})~we can always write:
\begin{eqnarray}
\Psi\simeq I_{SDP,m=0}\simeq\frac{2\pi k_0n\cos{\vartheta}}{ir}e^{ik_0nr}\nonumber\\ \tilde{\Psi}^{\textrm{TM or TE}}[k_0n\sin{\vartheta}\boldsymbol{\hat{\varrho}},z=d].\end{eqnarray}
\subsection{The intermediate regime: Generalization of the Norton wave }
The next term in the power expansion of $\Psi$ contributes proportionally to $1/r^2$. To evaluate this term we must take into account not only $I_{SDP,m=0}$ but also $I_{SDP,m=2}$ and $I_{LW,m=1}$.
We use the notation \begin{eqnarray}F_{\pm}(\xi)=\sqrt{(\frac{2\pi k_0n}{r})}e^{-i\frac{\pi}{4}}k_0n\cos{\xi}Q_{\pm}^{\alpha}(k_0n\sin{\xi})\nonumber\\ \cdot[1-i\frac{1-4\alpha^2}{k_0nr(\sin{\xi})^2}+...]\end{eqnarray} (with $\alpha=0$ or 1 depending whether the dipole is vertical or horizontal) and we obtain for the SDP contributions proportional to $1/r^2$:
\begin{eqnarray}I_{SDP,m=0}=\frac{2\pi(1-4\alpha^2)}{r^2}e^{ik_0nr}\frac{\cos{\vartheta}Q_{+}^{\alpha}(k_0n\sin{\vartheta})}{(\sin{\vartheta})^2}\end{eqnarray}
and
\begin{eqnarray}
I_{SDP,m=2}=\frac{e^{ik_0nr}\sqrt{\pi}}{4(k_0nr)^{3/2}}\frac{d^2G(\tau)}{d\tau^2}|_{\tau=0}\nonumber\\
=\frac{-\pi}{r^2}e^{ik_0nr}\frac{d^2[\frac{\cos{\xi}}{\cos{((\xi-\vartheta)/2)}}Q_{+}^{\alpha}(k_0n\sin{\xi})]}{d\xi^2}|_{\xi=\vartheta}.\nonumber\\
\end{eqnarray}
We also have to include the lateral wave (i.e. Goos-H\"{a}nchen) contribution:\begin{eqnarray}I_{LW,m=1}\simeq e^{ik_0nrK}\frac{i\sqrt{\pi}\Theta(\vartheta-\vartheta_c)}{2(k_0nr\sin{(\vartheta-\vartheta_c)})^{3/2}}\frac{d H(u)}{du}|_{u=0}\end{eqnarray} which reads
\begin{widetext}\begin{eqnarray}I_{LW,m=1}= \frac{\pi e^{ik_0(nL_2+L_1)}\Theta(\vartheta-\vartheta_c) e^{i\frac{\pi}{4}}}{r^2(\sin{(\vartheta-\vartheta_c)})^{3/2}}\frac{d\{\cos(\vartheta_c+iu^2)[Q_{+}^{\alpha}(k_0n\sin{(\vartheta_c+iu^2)})-Q_{-}^{\alpha}(k_0n\sin{(\vartheta_c+iu^2)})]\}}{du}|_{u=0}.
\end{eqnarray}\end{widetext}
The sum $I_{SDP,m=0}+I_{SDP,m=2}+I_{LW,m=1}$ describes an asymptotic field varying as $1/r^2$, which constitutes a generalization of the result obtained by Norton for the problem of a radio-wave antenna above a conducting earth.
\section{How to define the surface plasmon mode?}
\subsection{From the near-field to the far-field}
As seen in Section 2.E the dominant contribution in the far-field has the form
\begin{eqnarray}
\Psi(\mathbf{x},z)=\frac{2\pi k_0n\cos{\vartheta}}{ir}e^{ik_0 nr}\tilde{\Psi}[k_0n\sin{\vartheta}\boldsymbol{\hat{\varrho}},z=d].
\end{eqnarray}
From Eq.~53 we also have the relation
\begin{eqnarray}\tilde{\Psi}[k_0n\sin{\vartheta}\boldsymbol{\hat{\varrho}},z=d]:=Q(s)\nonumber\\=\sqrt{(\frac{r}{2\pi k_0n})}e^{i\frac{\pi}{4}}\frac{F_+(\xi)}{k_0n\cos{\xi}}\nonumber\\=\sqrt{(\frac{r}{2\pi k_0n})}e^{i\frac{\pi}{4}}F_+(\xi)\frac{d\xi}{ds}\end{eqnarray} where $s=k_0n\sin{\xi}$ and where $\xi$ is here identical to $\vartheta$ (as usual the $\varrho$ and $\varphi$ dependencies are here implicit in $Q(s):=Q(s,\varphi,\varrho)$ and $F_+(\xi):=F_+(\xi,\varphi,\varrho)$).
In the complex plane $\xi=\xi'+i\xi''$ and $s=s'+is''$ we have the singular/regular decomposition: $Q(s)=Q_0(s)+\textrm{Res}[Q(s_p)]/(s-s_p)$. Furthermore, from Eq.~59 this implies
\begin{eqnarray}
\frac{1}{2\pi i}\oint_{C_p}ds Q(s)=\textrm{Res}[Q(s_p)]\nonumber\\
=\sqrt{(\frac{r}{2\pi k_0n})}e^{i\frac{\pi}{4}}\frac{1}{2\pi i}\oint_{\mathcal{C}_p}d\xi F_+(\xi)
\nonumber\\
=\sqrt{(\frac{r}{2\pi k_0n})}e^{i\frac{\pi}{4}}\textrm{Res}[F_+(\xi_p)]
\end{eqnarray}
where $C_p$ and $\mathcal{C}_p$ are small closed contours surrounding the plasmon pole in respectively the complex $s$-plane and $\xi$-plane. Therefore, we can equivalently write
\begin{eqnarray}
Q(s)=Q_0(s)+\sqrt{(\frac{r}{2\pi k_0n})}e^{i\frac{\pi}{4}}\frac{\textrm{Res}[F_+(\xi_p)]}{s-s_p}.
\end{eqnarray}
The calculations being done in the far-field limit, where $r,\varrho\rightarrow +\infty$, we have for the vertical dipole case the residue:
\begin{widetext}\begin{eqnarray}
\textrm{Res}[F_+^{\textrm{TM},\bot}(\xi_p)]
=\frac{i\mu_\bot}{8\pi}\frac{k_p}{k_{1,p}}e^{ik_{1,p}h}e^{ik_{3,p}d}\frac{N_{13}(k_p)}{\frac{\partial D_{13}(k_p)}{\partial k_p} }H_0^{(+)}(k_p\varrho)e^{-ik_{p}\varrho}
\simeq\frac{i\mu_\bot}{8\pi}\frac{k_p}{k_{1,p}}e^{ik_{1,p}h}e^{ik_{3,p}d}\frac{N_{13}(k_p)}{\frac{\partial D_{13}(k_p)}{\partial k_p} }\sqrt{(\frac{2}{\pi k_p \varrho})}e^{-i\frac{\pi}{4}},
\end{eqnarray}\end{widetext}
and similarly for the horizontal dipole residue:
\begin{widetext}\begin{eqnarray}
\textrm{Res}[F_+^{\textrm{TM},||}(\xi_p)]=\frac{\boldsymbol{\mu}_{||}\cdot\hat{\boldsymbol{\varrho}}}{8\pi}e^{ik_{1,p}h}e^{ik_{3,p}d}\frac{N_{13}(k_p)}{\frac{\partial D_{13}(k_p)}{\partial k_p} }H_1^{(+)}(k_p\varrho)e^{-ik_{p}\varrho}
\simeq\frac{\boldsymbol{\mu}_{||}\cdot\hat{\boldsymbol{\varrho}}}{8\pi}e^{ik_{1,p}h}e^{ik_{3,p}d}\frac{N_{13}(k_p)}{\frac{\partial D_{13}(k_p)}{\partial k_p} }\sqrt{(\frac{2}{\pi k_p \varrho})}e^{-i\frac{3\pi}{4}}.
\end{eqnarray}\end{widetext}
Regrouping all the terms and using the fact that $Q(s)=\tilde{\Psi}[\mathbf{k},z=d]$ with $\mathbf{k}=k_0n\sin{\vartheta}\boldsymbol{\hat{\varrho}}$ and $\varrho=r\sin{\vartheta}$, this allows us to obtain a decomposition of the Fourier field $\tilde{\Psi}[\mathbf{k},z=d]$ into a singular (i.e. SP) and a regular contribution:
\begin{eqnarray}
\tilde{\Psi}^{\bot,||}[\mathbf{k},z=d]=\tilde{\Psi}^{\bot,||}_{0}[\mathbf{k},z=d]+\tilde{\Psi}^{\bot,||}_{SP}[\mathbf{k},z=d]
\end{eqnarray}
with
\begin{eqnarray}
\tilde{\Psi}^{\bot}_{SP}[\mathbf{k},z=d]=\frac{i\mu_\bot}{8\pi}\frac{k_p}{k_{1,p}}\frac{e^{ik_{1,p}h}e^{ik_{3,p}d}}{\pi\sqrt{kk_p}(k-k_p)}\frac{N_{13}(k_p)}{\frac{\partial D_{13}(k_p)}{\partial k_p} }\nonumber\\
\tilde{\Psi}^{||}_{SP}[\mathbf{k},z=d]=\frac{\boldsymbol{\mu}_{||}\cdot\hat{\mathbf{k}}}{8\pi}\frac{e^{ik_{1,p}h}e^{ik_{3,p}d}}{i\pi\sqrt{kk_p}(k-k_p)}\frac{N_{13}(k_p)}{\frac{\partial D_{13}(k_p)}{\partial k_p} }.
\end{eqnarray}
These formulas are rigorously valid only in the propagative sector where $|\mathbf{k}|\leq k_0n$ (i.e. from the far-field definition). However, due to the simplicity of the mathematical expressions obtained, one is free to extend the validity of Eqs.~65 to the full spectrum of $\mathbf{k}\in \mathbb{R}^2$ values, including both the propagative sector, for which $k_3=\sqrt{(k_0^2n^2-|\mathbf{k}|^2)}$, and the evanescent sector, for which $k_3=i\sqrt{(|\mathbf{k}|^2-k_0^2n^2)}$ (i.e. if $|\mathbf{k}|\geq k_0 n$).\\ It should now be observed that we can slightly modify the present analysis: Eq.~61 is not exactly a Laurent series, since there are other isolated singularities in the complex plane which were here included in the definition of $Q_0(s)$, i.e. $\tilde{\Psi}^{\bot,||}_{0}[\mathbf{k},z=d]$. The previous choice was justified for all practical purposes by the detailed calculation done in Section 2, in which only the $\xi_p$ singularity corresponding to the $s_l$ mode contributed to the integration contours used. Still, for the symmetry of the mathematical expressions it is clearly possible, and actually very useful (as we will see below), to extract a second SP contribution $\xi_{-p}=-\xi_p$ corresponding to $-k_p$. This is clearly the $s_l$ pole associated with propagation in the opposite radial direction. Taking into account this second pole and the symmetries of $k_{1p}$, $k_{3p}$, $N_{13}(k_p)$ and the antisymmetry of $\frac{\partial D_{13}(k_p)}{\partial k_p}$ under the substitution $k_p\rightarrow-k_p$, one obtains after straightforward calculations:
\begin{widetext}\begin{eqnarray}
\tilde{\Psi}^{\bot}_{SP}[\mathbf{k},z=d]:=\frac{i\mu_\bot}{8\pi}\frac{k_p}{k_{1,p}}\frac{e^{ik_{1,p}h}e^{ik_{3,p}d}}{\pi\sqrt{k_p}}\frac{N_{13}(k_p)}{\frac{\partial D_{13}(k_p)}{\partial k_p} }\frac{1}{\sqrt{k}}[\frac{1}{k-k_p}+\frac{1}{i(k+k_p)}]\nonumber\\
\tilde{\Psi}^{||}_{SP}[\mathbf{k},z=d]:=\frac{\boldsymbol{\mu}_{||}\cdot\hat{\mathbf{k}}}{8\pi}\frac{e^{ik_{1,p}h}e^{ik_{3,p}d}}{\pi\sqrt{k_p}}\frac{N_{13}(k_p)}{\frac{\partial D_{13}(k_p)}{\partial k_p} }\frac{1}{\sqrt{k}}[\frac{1}{i(k-k_p)}+\frac{1}{k+k_p}].
\end{eqnarray}\end{widetext}
From this definition we can calculate the SP field in the complete space. In particular, for $z\geq d$ we have $\Psi^{\bot,||}_{SP}(\mathbf{x},z)=\int d^2 \mathbf{k}\,\tilde{\Psi}^{\bot,||}_{SP}[\mathbf{k},z=d]e^{i\mathbf{k}\cdot\mathbf{x}}e^{ik_3Z}$ with $Z=z-d$. More precisely, using the symmetry of the system we obtain
\begin{eqnarray}
\Psi^{\bot}_{SP}(\mathbf{x},z)=\frac{i\mu_\bot}{8\pi}\frac{k_p}{k_{1,p}}\frac{e^{ik_{1,p}h}e^{ik_{3,p}d}}{\pi\sqrt{k_p}}\frac{N_{13}(k_p)}{\frac{\partial D_{13}(k_p)}{\partial k_p} }\nonumber\\
\cdot 2\pi\int_0^{+\infty}\frac{kdk e^{ik_3Z}J_0(k\varrho)}{\sqrt{k}}[\frac{1}{k-k_p}+\frac{1}{i(k+k_p)}]\end{eqnarray} and
\begin{eqnarray}\Psi^{||}_{SP}(\mathbf{x},z)=\frac{\boldsymbol{\mu}_{||}\cdot\hat{\boldsymbol{\varrho}}}{8\pi}\frac{e^{ik_{1,p}h}e^{ik_{3,p}d}}{\pi\sqrt{k_p}}\frac{N_{13}(k_p)}{\frac{\partial D_{13}(k_p)}{\partial k_p} }\nonumber\\
\cdot 2\pi\int_0^{+\infty}\frac{kdk e^{ik_3Z}J_1(k\varrho)}{\sqrt{k}}[\frac{1}{k-k_p}-\frac{1}{i(k+k_p)}].
\end{eqnarray}
with $\mathbf{x}=\varrho\hat{\boldsymbol{\varrho}}$. To obtain these last equations we also used the well known Bessel function properties:
\begin{eqnarray}
\oint d\varphi_ke^{ik\varrho\cos{(\varphi-\varphi_k)}}
\{\begin{array}{c}
\cos{(m\varphi_k)}\\
\sin{(m\varphi_k)}
\end{array}\}\nonumber\\
=2\pi i^m\{\begin{array}{c}\cos{(m\varphi)}\\ \sin{(m\varphi)}\end{array}\}J_m(k\varrho)
\end{eqnarray} ($m=0,1,...$) to integrate over the $\varphi_k$-coordinate of the 2D vector $\mathbf{k}$.\\ We point out that the convergence of the integrals in Eqs.~69, 70 is ensured, since the cosine integral $\int^{+\infty}_{a}dk \cos{(k\varrho)}/k=-\textrm{Ci}(a\varrho)\simeq \frac{\cos{(a\varrho)}}{(a\varrho)^2}-\frac{\sin{(a\varrho)}}{a\varrho}$ for $a\varrho\gg 1$ is bounded.
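These angular integrals are the standard Jacobi--Anger reductions; a quick numerical check, with generic values of $k\varrho$ and $\varphi$ chosen only for illustration, reads:
\begin{verbatim}
# Check of the angular integral used to reduce the 2D k-integral to Bessel
# functions: int_0^{2pi} dphi_k exp(i k rho cos(phi-phi_k)) cos(m phi_k)
#            = 2 pi i^m cos(m phi) J_m(k rho).
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

krho, phi = 3.2, 0.7                       # arbitrary values
for m in (0, 1, 2):
    f = lambda pk: np.exp(1j*krho*np.cos(phi - pk)) * np.cos(m*pk)
    re, _ = quad(lambda pk: f(pk).real, 0, 2*np.pi)
    im, _ = quad(lambda pk: f(pk).imag, 0, 2*np.pi)
    lhs = re + 1j*im
    rhs = 2*np.pi * (1j**m) * np.cos(m*phi) * jv(m, krho)
    print(m, lhs, rhs)                     # the two columns should match
\end{verbatim}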
\subsection{Asymptotic expansion}
Remarkably, using the relations $H^{(+)}_0(u)-H^{(+)}_0(-u)=2J_0(u)$ and $H^{(+)}_1(u)+H^{(+)}_1(-u)=2J_1(u)$ (valid for $|\arg{(z)}|< \pi$) as well as the parity properties of the functions $\frac{1}{\sqrt{k}}[\frac{1}{k-k_p}\pm\frac{1}{i(k+k_p)}]$ (i.e. under the transformation $k_p\rightarrow-k_p$) we obtain the practical relations
\begin{eqnarray}
\int_0^{+\infty}\frac{kdk e^{ik_3Z}J_0(k\varrho)}{\sqrt{k}}[\frac{1}{k-k_p}+\frac{1}{i(k+k_p)}]\nonumber\\
=\frac{1}{2}\int_{-\infty}^{+\infty}\frac{kdk e^{ik_3Z}H^{(+)}_0(k\varrho)}{\sqrt{k}}[\frac{1}{k-k_p}+\frac{1}{i(k+k_p)}]\end{eqnarray}
and
\begin{eqnarray}
\int_0^{+\infty}\frac{kdk e^{ik_3Z}J_1(k\varrho)}{\sqrt{k}}[\frac{1}{k-k_p}-\frac{1}{i(k+k_p)}]\nonumber\\
=\frac{1}{2}\int_{-\infty}^{+\infty}\frac{kdk e^{ik_3Z}H^{(+)}_1(k\varrho)}{\sqrt{k}}[\frac{1}{k-k_p}-\frac{1}{i(k+k_p)}].
\end{eqnarray}
These relations would not be possible if we did not include both the
$k_p$ and $-k_p$ poles in the analysis. Inserting Eqs.~72, 73 into
Eqs.~69, 70 and using the complex variable $\xi$ such that
$k=k_0n\sin{\xi}$ and the integration contour $\Gamma$ used in the
previous Sections we obtain
\begin{eqnarray}
\Psi^{\bot,||}_{SP}(\mathbf{x},z)=\int_{\Gamma}d\xi F^{\bot,||}_{SP}(\xi)e^{ik_0nr\cos{(\xi-\vartheta)}}
\end{eqnarray}
where
\begin{widetext}\begin{eqnarray}
F^{\bot}_{SP}(\xi)=\frac{i\mu_\bot}{8\pi}\frac{k_p}{k_{1,p}}\frac{\sqrt{(k_0n)}e^{ik_{1,p}h}e^{ik_{3,p}d}}{\sqrt{k_p}}\frac{N_{13}(k_p)}{\frac{\partial D_{13}(k_p)}{\partial k_p} }\frac{\sin{\xi}H_0^{(+)}(k_0n\varrho\sin{\xi})e^{-ik_0n\varrho\sin{\xi}}}{\sqrt{\sin{\xi}}}[\frac{\cos{\xi}}{\sin{\xi}-\sin{\xi_p}}+\frac{\cos{\xi}}{i(\sin{\xi}+\sin{\xi_p})}]\nonumber\\
F^{||}_{SP}(\xi)=\frac{\boldsymbol{\mu}_{||}\cdot\hat{\boldsymbol{\varrho}}}{8\pi}\frac{\sqrt{(k_0n)}e^{ik_{1,p}h}e^{ik_{3,p}d}}{\sqrt{k_p}}\frac{N_{13}(k_p)}{\frac{\partial D_{13}(k_p)}{\partial k_p} }\frac{\sin{\xi}H_1^{(+)}(k_0n\varrho\sin{\xi})e^{-ik_0n\varrho\sin{\xi}}}{\sqrt{\sin{\xi}}}[\frac{\cos{\xi}}{\sin{\xi}-\sin{\xi_p}}-\frac{\cos{\xi}}{i(\sin{\xi}+\sin{\xi_p})}].\nonumber\\
\end{eqnarray}\end{widetext}
The integral along $\Gamma$ can be evaluated by using the same
contour deformation as in Section 2. However, due to the absence of
the square root $k_1$ in Eq.~75 there is no branch cut contribution
to the integration contour. The integral can thus be split into one
contribution from the residue and one contribution from the SDP. We
get therefore:
\begin{eqnarray}
\Psi^{\bot,||}_{SP}(\mathbf{x},z)=2\pi i \textrm{Res}[F^{\bot,||}_{SP}(\xi_p)]e^{ik_0nr\cos{(\xi_p-\vartheta)}}\Theta(\vartheta-\vartheta_{LR})\nonumber\\+e^{ik_0nr}\sum_{m\in \textrm{even}}\frac{\Gamma(\frac{m+1}{2})}{m!(k_0nr)^{\frac{m+1}{2}}}\frac{d^m}{d\tau^m}G^{\bot,||}_{SP}(0)\nonumber\\
\end{eqnarray}with $G^{\bot,||}_{SP}(\tau)=F^{\bot,||}_{SP}(\xi)\frac{d\xi}{d\tau}$.\\
A few remarks are important here:\\
(i) First, the singular term $$2\pi i
\textrm{Res}[F^{\bot,||}_{SP}(\xi_p)]e^{ik_0nr\cos{(\xi_p-\vartheta)}}\Theta(\vartheta-\vartheta_{LR})$$
is exactly identical to the pole contribution appearing in Eq.~22.
This results from the equality
$\textrm{Res}[F^{\bot,||}_{SP}(\xi_p)]=\textrm{Res}[F_{+}^{\textrm{TM},\bot,||}(\xi_p)]$
(compare with Eqs.~24-25).\\
(ii) Second, the term $m=0$ in the SDP contribution is dominant in
the far-field regime and leads to
$\Psi^{\bot,||}_{SP}(\mathbf{x},z)=\frac{2\pi
k_0n\cos{\vartheta}}{ir}e^{ik_0
nr}\tilde{\Psi}^{\bot,||}_{SP}[k_0n\sin{\vartheta}\boldsymbol{\hat{\varrho}},z=d]$
as expected.\\
(iii) Third, the decomposition
$Q(s)=Q_0(s)+\textrm{Res}[Q(s_p)]/(s-s_p)+\textrm{Res}[Q(-s_{p})]/(s+s_p)$
leads to \begin{eqnarray}
G^{\bot,||}_{SP}(\tau)=G^{\bot,||}_{SP,0}(\tau)+\frac{\textrm{Res}[G^{\bot,||}_{SP}(\tau_p)]}{\tau-\tau_p}
\nonumber\\+\frac{\textrm{Res}[G^{\bot,||}_{SP}(\tau_{-p})]}{\tau-\tau_{-p}}
\end{eqnarray}
where $\tau_{-p}=-e^{i\pi/4}\sqrt{2}\sin{((\xi_{p}+\vartheta)/2)}$.
Therefore, if we compare with Eqs.~32-38 we see that
$\Psi^{\bot,||}_{SP}(\mathbf{x},z)$ is not exactly equal to
$I_{SP}+I_{SDP}^{\textrm{pole}}$ explicitly defined in Eqs.~37 and
36. More precisely we obtain:
\begin{eqnarray}
\Psi^{\bot,||}_{SP}(\mathbf{x},z)=2i\pi\textrm{Res}[G^{\bot,||}_{SP}(\tau_p)]e^{ik_0nr\cos{(\xi_p-\vartheta)}}\Theta(\vartheta-\vartheta_{LR})\nonumber\\
-e^{ik_0nr}\sum_{m\in \textrm{even}}\frac{\Gamma(\frac{m+1}{2})}{(k_0nr)^{\frac{m+1}{2}}}\frac{\textrm{Res}[G^{\bot,||}_{SP}(\tau_p)]}{\tau_p^{m+1}} \nonumber\\
-e^{ik_0nr}\sum_{m\in \textrm{even}}\frac{\Gamma(\frac{m+1}{2})}{(k_0nr)^{\frac{m+1}{2}}}\frac{\textrm{Res}[G^{\bot,||}_{SP}(\tau_{-p})]}{\tau_{-p}^{m+1}}\nonumber\\
+e^{ik_0nr}\sum_{m\in \textrm{even}}\frac{\Gamma(\frac{m+1}{2})}{m!(k_0nr)^{\frac{m+1}{2}}}\frac{d^m}{d\tau^m}G^{\bot,||}_{SP,0}(0)\nonumber\\
\end{eqnarray} which differs from Eqs.~37, 38 by the last two lines.
We can also rewrite these expressions as
\begin{eqnarray}
\Psi^{\bot,||}_{SP}(\mathbf{x},z)=e^{ik_0nr}\sum_{m\in \textrm{even}}\frac{\Gamma(\frac{m+1}{2})}{m!(k_0nr)^{\frac{m+1}{2}}}\frac{d^m}{d\tau^m}G^{\bot,||}_{SP,0}(0)\nonumber\\
+i\pi\textrm{Res}[G^{\bot,||}_{SP}(\tau_p)]e^{ik_0nr\cos{(\xi_p-\vartheta)}}\textrm{erfc}(-i\tau_p\sqrt{(k_0nr)})\nonumber\\
+i\pi\textrm{Res}[G^{\bot,||}_{SP}(\tau_{-p})]e^{ik_0nr\cos{(\xi_p+\vartheta)}}\textrm{erfc}(-i\tau_{-p}\sqrt{(k_0nr)})\nonumber\\
\end{eqnarray}
where we used Eq.~36 applied to $\tau_{-p}$ and $\tau_{p}$.
\section{More on intensity and field in the back focal plane and image plane of the microscope}
A general analysis of the imaging process occurring through a
microscope objective with high numerical aperture $NA$ and an ocular
tube lens is given, for example, in Ref.~\cite{Sheppard}. Here, we
give without proof the calculated field and intensity in the focal
plane of the
objective and in the image plane of the microscope, expressed in terms of the TE and TM scalar potentials defined in Eqs.~1,2.\\
For this purpose we use the Fourier transform of the electromagnetic
TM and TE field at the $z=d$ interface defined by:
\begin{eqnarray}
\tilde{\mathbf{D}}_{\textrm{TM}}[\mathbf{k},z]
=-\{\mathbf{k}k_3(k)-k^2\hat{\mathbf{z}}\}\tilde{\Psi}_{\textrm{TM}}[\mathbf{k},z]\nonumber\\
\tilde{\mathbf{D}}_{TE}[\mathbf{k},z]
=-k_0n^2\mathbf{k}\times\mathbf{\hat{z}}\tilde{\Psi}_{TE}[\mathbf{k},z].
\end{eqnarray}
This implies \cite{Sheppard} that the electric field recorded in the
back focal plane of the objective is (i.e. taking into account the
vectorial nature of the field and the transformation of the
spherical wave front to a planar wave front):
\begin{widetext}
\begin{eqnarray}
\mathbf{E}_{\textrm{back focal plane $\Pi$}}=\frac{2\pi
e^{ik_0nf}}{if}\frac{T_1\sqrt{k_0k_3(k)}}{n}\{-k_0n\mathbf{k}\tilde{\Psi}_{\textrm{TM}}[\mathbf{k},d]+k_0n^2k\boldsymbol{\hat{\varphi}}_1\tilde{\Psi}_{TE}[\mathbf{k},d]\}
\end{eqnarray}
\end{widetext}with by definition $\boldsymbol{\hat{\varphi}}_1=-\mathbf{k}\times\hat{\mathbf{z}}/k$.
The geometric coefficient $k_3(k)$ is reminiscent of the `sine'
condition \cite{Sheppard}, which leads to strong geometrical
aberrations at very large angles $\theta$. As a direct consequence
we deduce the intensity in the back focal plane:
\begin{eqnarray}
|\mathbf{E}_{\textrm{back focal plane $\Pi$}}|^2=\frac{4\pi^2t_1}{f^2n^2}k_0k_3(k)[|\mathbf{\tilde{D}}_{\textrm{TM},3}[\mathbf{k},d]|^2+|\mathbf{\tilde{D}}_{\textrm{TE},3}[\mathbf{k},d]|^2]\nonumber\\
\end{eqnarray} which is therefore proportional to the total Fourier field intensity for TM and TE waves taken
separately.\\
Finally, in the image
plane we obtain the electric field :
\begin{eqnarray}
\mathbf{E}(\mathbf{x}')=N'\int_{|\mathbf{k}|\leq
k_0NA}d^2\mathbf{k}\sqrt{k_3(k)}e^{-i\mathbf{k}\cdot\frac{\mathbf{x}'}{M}}
\cdot\{\tilde{\mathbf{D}}_{\textrm{TM},||}[\mathbf{k},d]\frac{k_0n}{k_3(k)}+\tilde{\mathbf{D}}_{\textrm{TE}}[\mathbf{k},d]\}\nonumber\\
\end{eqnarray}i.e.
\begin{eqnarray}
\mathbf{E}(\mathbf{x}')=N'\int_{|\mathbf{k}|\leq
k_0NA}d^2\mathbf{k}\sqrt{k_3(k)}e^{-i\mathbf{k}\cdot\frac{\mathbf{x}'}{M}}
\cdot\{-k_0n\mathbf{k}\tilde{\Psi}_{\textrm{TM}}[\mathbf{k},d]+k_0n^2k\boldsymbol{\hat{\varphi}}_1\tilde{\Psi}_{\textrm{TE}}[\mathbf{k},d]\}
\end{eqnarray}
where $N'$ is a constant characterizing the microscope. In the
Letter we used these formulas to compute the fields and intensities in
the Fourier and image planes (see Figs.~3,4 of the Letter).
\section{Motivations}
Wilsonian renormalisation provides an elegant way of building an effective theory, and gives an intuitive understanding of scale
dependence in Quantum Field Theory (QFT). Its fundamental idea is based on the expectation that physics at large scale should
be independent of most microscopic details, and predictions should involve a small portion only of all the parameters
describing these details. As an example, the description of water flowing in a stream is independent of the details of the water molecule, and the corresponding
effective description is provided by Fluid Mechanics instead of Quantum Mechanics.
In QFT, because one deals with an infinite number of degrees of freedom, naive quantum corrections diverge, and one needs to regularise momentum integrals.
Any regularisation necessarily involves an energy scale which must be put by hand, and physical quantities then depend on
this arbitrary scale.
The interpretation for this scale dependence is that a given system is described by different parameters at different energies. A would-be divergence is therefore
turned into a scale dependence, which is the essence of the concept of renormalisation.
An intuitive understanding of this scale dependence originates from Statistical Mechanics, in the study of phase transitions, as explained below.
The fundamental object, at the core of the method, is the partition function, whose properties allow for the derivation of exact functional identities.
The corresponding Wilsonian approach to renormalisation in QFT is explained
in the present lecture notes, which focus on a few essential points; more detailed reviews can be found in \cite{lectures}.
\subsection{Scale dependence of a theory}
In the 4-dimensional scalar theory with interaction $\phi^4$, the one-loop coupling $g^{(1)}$ is formally given by
\begin{equation}
ig^{(1)}=ig_b+\frac{3\hbar(ig_b)^2}{2}\int\frac{d^4p}{(2\pi)^4}\frac{i^2}{(p^2-m^2)^2}~,
\end{equation}
where $g_b$ is the bare coupling, the factor 3 arises from the three possible ways of attaching the external lines, and 1/2 is
the symmetry factor of the graph. A Wick rotation leads to
\begin{equation}
g^{(1)}=g_b-\frac{3\hbar g_b^2}{32\pi^2}\int \frac{xdx}{(x+m^2)^2}~,
\end{equation}
which is logarithmically divergent. To avoid this divergence, one introduces by hand the ultraviolet (UV) cut off $\Lambda$, which leads to
\begin{equation}\label{g1loop}
g^{(1)}=g_b-\frac{3\hbar g_b^2}{16\pi^2}\ln\left(\frac{\Lambda}{m}\right)~+~\mbox{finite}~=g_b-\frac{3\hbar (g^{(1)})^2}{16\pi^2}\ln\left(\frac{\Lambda}{m}\right)+{\cal O}(\hbar^2)~.
\end{equation}
The process of regularization of the loop integral therefore introduces a mass scale, such that a potential divergence has been replaced
by a scale dependence. If one uses dimensional regularization instead, the arbitrary scale is introduced in the bare coupling
$g_b\Lambda^\epsilon$, which has a positive mass dimension in space time dimension $d=4-\epsilon$. But
in any case, one needs to introduce an arbitrary mass scale $\Lambda$ if one wishes to avoid the divergence.\\
One can obtain a $\Lambda$-independent quantity though:
the beta function $\beta\equiv\Lambda\partial_\Lambda g_b$ at fixed $g^{(1)}$, which at one loop is
\begin{equation}
\beta^{(1)}=\frac{3\hbar (g^{(1)})^2}{16\pi^2}~.
\end{equation}
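As a simple illustration (with $\hbar=1$ and an arbitrary initial condition, not taken from the text), the one-loop running implied by $\beta^{(1)}$ can be integrated numerically and compared with its analytic solution $g(t)=g_0/\big(1-\frac{3g_0t}{16\pi^2}\big)$, with $t=\ln(\Lambda/\Lambda_0)$:
\begin{verbatim}
# Sketch (hbar = 1, arbitrary g0): one-loop running of the phi^4 coupling,
# d g / d ln(Lambda) = 3 g^2 / (16 pi^2), versus its analytic solution.
import numpy as np
from scipy.integrate import solve_ivp

beta = lambda t, g: 3 * g**2 / (16 * np.pi**2)
g0 = 0.5
t_span = (0.0, 40.0)
sol = solve_ivp(beta, t_span, [g0], dense_output=True, rtol=1e-10, atol=1e-12)

t = np.linspace(*t_span, 5)
exact = g0 / (1 - 3 * g0 * t / (16 * np.pi**2))
print(sol.sol(t)[0])
print(exact)          # the coupling grows with the cut-off scale
\end{verbatim}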
The scale-dependence obtained from the regularization of loop integrals might seem artificial, but is actually a deep feature of
QFT, which can be understood in a more intuitive way through the elegant process of Wilsonian renormalisation.
\subsection{Spin blocks}
Wilson's idea of renormalisation was originally motivated by the study of condensed matter systems in the vicinity of
a phase transition. Let's take the example of a ferromagnetic sample, where two effects are competing: \\
{\it(i)} Magnetic order, as a result of the interactions between spins located on a lattice, tending to align all the
spins in the same direction. The situation where this effect dominates corresponds to the spontaneous symmetry breaking phase,
where the specific direction of spins breaks the rotation group $O(3)$ to $U(1)$. The total magnetisation, which plays the
role of order parameter, is non-vanishing;\\
{\it(ii)} Thermal disorder, which tends to give spins random directions. If this effect dominates, the system is in the
symmetric phase, where the total magnetisation vanishes.
The situation where these two effects are of the same order corresponds to the phase transition, and the correlation between spins
becomes important: the system is very sensitive to the modification in the direction of one spin, which is felt by other spins
located many lattice sites away. The phase transition therefore involves many degrees of freedom, interacting with each other.
The concept of spin blocks is motivated by the idea that, when the correlation length $\xi$ becomes large
(compared to the lattice spacing), details with a typical size $\ll\xi$ should not play a role
in the features of the phase transition, such that these details can be integrated out in order to simplify the description of the system.
Integrating out spins $S_i^{(0)}$ can be achieved by defining new spin variables $S^{(1)}_j$ from a block of the original spins.
The new Hamiltonian $H^{(1)}[S^{(1)}]$ of the system can then be expressed in terms of the new spin variables as follows
\begin{equation}
\exp(-H^{(1)}[S^{(1)}])=\sum_{S_i^{(0)}}\Pi_j\delta\Big(S^{(1)}_j-f_j(S^{(0)})\Big)\exp(-H^{(0)}[S^{(0)}])~,
\end{equation}
where $H^{(0)}$ is the original Hamiltonian, defined with the original spin variables $S_i^{(0)}$, and $f_j$ corresponds to the definition
of the block $j$ which contains some original spins. The partition function of the system is independent of the spin variables
\begin{equation}
Z=\sum_{S_i^{(0)}}\exp(-H^{(0)}[S^{(0)}])=\sum_{S^{(1)}_j}\exp(-H^{(1)}[S^{(1)}])~,
\end{equation}
which leads to the same physical predictions. The blocking procedure can be repeated again and again, leading to a chain of Hamiltonians
$\{H^{(n)}\}$, each defined by a set of parameters which depend on the blocking step $n$. This construction therefore provides a
scale-dependent description of the system. The simplification in the description occurs because, among the potentially large
set of parameters defining the Hamiltonian, many will play no role in the infrared (IR) limit, where the system is zoomed out
(irrelevant parameters), and only a few of them will dominate the IR description (relevant parameters). The scale dependence of parameters
generates renormalisation flows, which are discussed below in the context of scalar field theory.
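As a concrete toy example of the blocking procedure (our own illustration, not taken from the text), the sum over block variables can be carried out exactly for a one-dimensional Ising chain with $H^{(0)}=-K\sum_i S_iS_{i+1}$ by integrating out every other spin (decimation): the partition function is preserved and the blocked Hamiltonian keeps the nearest-neighbour form with a new coupling $K'=\frac{1}{2}\ln\cosh(2K)$, up to a constant. A brute-force check on a small periodic chain:
\begin{verbatim}
# Toy illustration of spin blocking (assumed chain length and coupling):
# exact decimation of a periodic 1D Ising chain, H = -K sum_i s_i s_{i+1}.
import itertools
import numpy as np

N, K = 8, 0.7                      # chain length (even) and coupling

def weight(spins, coupling):
    s = np.array(spins)
    return np.exp(coupling * np.sum(s * np.roll(s, -1)))

# Exact partition function over all 2^N configurations.
Z = sum(weight(c, K) for c in itertools.product((-1, 1), repeat=N))

# Blocked weights: for each configuration of the even spins, sum over odd spins.
def blocked_weight(even_spins):
    total = 0.0
    for odd in itertools.product((-1, 1), repeat=N // 2):
        full = [None] * N
        full[0::2], full[1::2] = even_spins, odd
        total += weight(full, K)
    return total

Z_blocked = sum(blocked_weight(c) for c in itertools.product((-1, 1), repeat=N // 2))
print(Z, Z_blocked)                # identical: blocking preserves Z

# The blocked weights have the nearest-neighbour form with K' and a constant A.
Kp = 0.5 * np.log(np.cosh(2 * K))
A  = 2 * np.sqrt(np.cosh(2 * K))
for c in itertools.product((-1, 1), repeat=N // 2):
    assert np.isclose(blocked_weight(c), A**(N // 2) * weight(c, Kp))
\end{verbatim}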
\subsection{One-loop Wilsonian renormalisation flows}
We work in Euclidean space; the IR field is denoted $\phi$ and has non-vanishing Fourier components for $p\le k$;
the UV field is denoted $\psi$ and has non-vanishing Fourier components for $k<p\le\Lambda$.\\
The original action, defined at some scale $\Lambda$, is of the form
\begin{equation}
S_\Lambda[\Phi]=\int_x\left(\frac{1}{2}\partial_\mu\Phi\partial^\mu\Phi+U_\Lambda(\Phi)\right)~,
\end{equation}
where $\Phi=\phi+\psi$, and the effective action $S_k$ at the scale $k$ is defined by
\begin{equation}
\exp\left(-\frac{1}{\hbar}S_k[\phi]\right)=\int{\cal D}[\psi]\exp\left(-\frac{1}{\hbar}S_\Lambda[\phi+\psi]\right)~.
\end{equation}
This definition of effective action corresponds to defining ``block spins'' with lattice spacing $k^{-1}$ from the original lattice spacing $\Lambda^{-1}$.
Then we assume that $S_\Lambda[\phi+\psi]$ is minimised at $\psi=0$ (no instanton configuration in $\psi$), such that
\begin{equation}\label{blocking}
\exp\left(-\frac{1}{\hbar}S_k[\phi]\right)=\exp\left(-\frac{1}{\hbar}S_\Lambda[\phi]\right)
\int{\cal D}[\psi]\exp\left(-\frac{1}{2\hbar}\int_{x,y}\frac{\delta^2S_\Lambda[\phi]}{\delta\phi_x\delta\phi_y}\psi_x\psi_y+\cdots\right)~,
\end{equation}
where the dots represent higher orders in $\psi$, which will contribute to higher loop orders (this can be seen with the change of functional variable
$\psi\to\sqrt\hbar~\psi$).\\
\noindent Choosing a uniform IR field $\phi=\phi_0$, the action is $S_k[\phi_0]=VU_k(\phi_0)$, where $U_k$ is the running potential and
$V$ is the space time volume, also equal to
\begin{equation}
V\equiv\int d^4x=\left.\int d^4x\exp(ip x)\right|_{p=0}=\tilde\delta(0)~.
\end{equation}
Eq.(\ref{blocking}) leads then to
\begin{eqnarray}
&&\exp\left(-\frac{V}{\hbar}U_k(\phi_0)\right)\\
&=&\exp\left(-\frac{V}{\hbar}U_\Lambda(\phi_0)\right)
\int{\cal D}[\psi]
\exp\left(-\frac{1}{2\hbar}\int_{k\le|p|\le\Lambda}[p^2+U_\Lambda''(\phi_0)]\tilde\psi_p\tilde\psi_{-p}+\cdots\right)~,\nonumber
\end{eqnarray}
which is a Gaussian functional integral. The latter is calculated using
\begin{equation}
\int{\cal D}[\psi]\exp\left(-\tilde\psi_p{\cal O}_{pq}\tilde\psi_q\right)=\frac{1}{\sqrt{\mbox{det}{\cal O}_{pq}}}
=\exp\left(-\frac{1}{2}\mbox{Tr}\left\{\ln{\cal O}_{pq}\right\}\right)~,
\end{equation}
as well as the logarithm and the trace of a diagonal operator
\begin{eqnarray}
\ln[F(p)\tilde\delta(p+q)]&=&\tilde\delta(p+q)\ln[F(p)]\\
\mbox{Tr}\left\{G(p)\tilde\delta(p+q)\right\}&=&\int_p\int_q\tilde\delta(p+q)G(p)\tilde\delta(p+q)=V\int_p G(p)~.
\end{eqnarray}
In the present case, the Fourier modes integrated out are those with $k<|p|\leq\Lambda$, such that
\begin{eqnarray}\label{blockingbis}
U_k(\phi_0)&=&U_\Lambda(\phi_0)+\frac{\hbar}{2V}\mbox{Tr}_{k<|p|\le\Lambda}\left\{\tilde\delta(p+q)\ln[p^2+U_\Lambda''(\phi_0)]\right\}
+{\cal O}(\hbar^2)\nonumber\\
&=&U_\Lambda(\phi_0)+\frac{\hbar}{2}\int_k^\Lambda \frac{d^4p}{(2\pi)^4}
\ln\left(\frac{p^2+U_\Lambda''(\phi_0)}{p^2+U_\Lambda''(0)}\right)+{\cal O}(\hbar^2)~,
\end{eqnarray}
where the origin of the potential is chosen such that $U_k(0)=0$.
A derivative of eq.(\ref{blockingbis}) with respect to $k$ finally leads to the one-loop flow equation
\begin{equation}\label{1loopflow}
k\partial_k U_k(\phi_0)=-\frac{\hbar k^4}{16\pi^2}\ln\left(\frac{k^2+U_\Lambda''(\phi_0)}{k^2+U_\Lambda''(0)}\right)+{\cal O}(\hbar^2)~,
\end{equation}
which shows the explicit scale dependence of the effective potential defined at the scale $k$.
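As an illustration (with $\hbar=1$ and placeholder values of the bare parameters), the one-loop potential of eq.(\ref{blockingbis}) can be evaluated numerically for a quartic bare potential by performing the radial momentum integral:
\begin{verbatim}
# Sketch (hbar = 1, assumed parameters): one-loop Wilsonian potential,
# U_k(phi) = U_Lam(phi) + (1/16 pi^2) int_k^Lam dp p^3
#            ln[(p^2 + U_Lam''(phi)) / (p^2 + U_Lam''(0))].
import numpy as np
from scipy.integrate import quad

Lam, m2, g = 10.0, 1.0, 0.5          # cut-off, mass^2, quartic coupling

U_Lam  = lambda phi: 0.5 * m2 * phi**2 + g * phi**4 / 24
U2_Lam = lambda phi: m2 + 0.5 * g * phi**2          # second derivative

def U_k(phi, k):
    integrand = lambda p: p**3 * np.log((p**2 + U2_Lam(phi)) / (p**2 + U2_Lam(0.0)))
    loop, _ = quad(integrand, k, Lam)
    return U_Lam(phi) + loop / (16 * np.pi**2)

for k in (Lam, 5.0, 1.0, 0.1):
    print(k, [round(U_k(phi, k), 4) for phi in (0.5, 1.0, 2.0)])
\end{verbatim}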
\section{Exact flows (sharp cut off)}
The one-loop flow equation (\ref{1loopflow}) is sufficient when quantum effects are only perturbative, and higher orders in $\hbar$ can
indeed be neglected. If one considers non-perturbative effects though (as in section 3.3 for example),
it is necessary to consider an improved Wilsonian renormalisation procedure. This can be obtained by lowering the cut off infinitesimally,
as explained here.
\subsection{Wegner-Houghton equation}
We consider here the local potential approximation where, for all $k$, the running action has the form
\begin{equation}\label{locpot}
S_k[\phi]=\int_x\left(\frac{1}{2}\partial_\mu\phi\partial^\mu\phi+U_k(\phi)\right)~.
\end{equation}
This corresponds to a projection of the action on a subspace of functional space, where only the non-derivative part of the action
is allowed to evolve with $k$, but not the kinetic term, higher order derivatives or derivative interactions.\\
\noindent Instead of integrating Fourier modes from the cut off $\Lambda$ to some scale $k$ in one go,
we start the blocking procedure from $k$, and implement an infinitesimal step $dk\ll k$. We now eliminate Fourier modes $\psi_p$
which are non-zero for $k-dk<|p|\le k$, and we assume again that no instanton in $\psi$ is present (see section 2.4 for
the situation where a non-trivial saddle point in $\psi$ arises):
\begin{eqnarray}
&&\exp\left(-\frac{V}{\hbar}U_{k-dk}(\phi_0)\right)\\
&=&\exp\left(-\frac{V}{\hbar}U_k(\phi_0)\right)
\int{\cal D}[\psi]\exp\left(-\frac{1}{2\hbar}\int_{k-dk<|p|\le k}[p^2+U_k''(\phi_0)]\tilde\psi_p\tilde\psi_{-p}+\cdots\right)\nonumber
\end{eqnarray}
where, this time, higher orders in $\psi$ involve higher orders in $dk$, since the trace is
taken in the infinitesimal shell of radius $k$ and thickness $dk$. As a consequence, we get
\begin{eqnarray}
U_{k-dk}(\phi_0)&=&U_k(\phi_0)+\frac{\hbar}{2V}\mbox{Tr}_{k-dk\le|p|\le k}\left\{\tilde\delta(p+q)\ln[p^2+U_k''(\phi_0)]\right\}
+{\cal O}(dk/k)^2\nonumber\\
&=&U_k(\phi_0)+\frac{\hbar 2\pi^2 k^3 dk}{2(2\pi)^4}\ln\left(\frac{k^2+U_k''(\phi_0)}{k^2+U_k''(0)}\right)+{\cal O}(dk/k)^2~,
\end{eqnarray}
where $\tilde\delta(0)=V$ was used.
The limit $dk\to0$ then leads to the equation satisfied by the running potential
\begin{equation}\label{WH}
k\partial_k U_k(\phi_0)=-\frac{\hbar k^4}{16\pi^2}\ln\left(\frac{k^2+U_k''(\phi_0)}{k^2+U_k''(0)}\right)~.
\end{equation}
This exact equation was initially derived in \cite{WegnerHoughton}, and is self-consistent: the running potential appears on
both sides, so that the Wegner-Houghton equation is somewhat similar to a differential Schwinger-Dyson equation. For this reason, it
amounts to a partial resummation of all orders
in $\hbar$. The resummation is partial because eq.(\ref{WH}) is derived within the approximation (\ref{locpot}).
If one expands the potential in powers of $\hbar$, then $U_k(\phi_0)=U_\Lambda(\phi_0)+{\cal O}(\hbar)$, and the Wegner-Houghton equation
(\ref{WH}) reduces to the one-loop flow equation (\ref{1loopflow}). \\
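\noindent As an illustration (this numerical sketch is not part of the original notes), the flow (\ref{WH}) can be integrated on a grid in $\phi_0$ for an assumed convex bare potential, with $\hbar=1$; the second derivative is approximated by finite differences, so the result is only indicative, especially near the edges of the grid.
\begin{verbatim}
# Exploratory sketch (assumptions: hbar = 1, an arbitrary convex bare
# potential, a finite phi-grid): integrating the Wegner-Houghton flow (WH).
import numpy as np

Lambda, kIR, nk = 1.0, 0.05, 4000
phi  = np.linspace(-2.0, 2.0, 401)            # phi = 0 sits at index 200
dphi = phi[1] - phi[0]
U = 0.05 * phi**2 / 2 + 0.1 * phi**4 / 24     # assumed bare potential U_Lambda

ks = np.linspace(Lambda, kIR, nk)
dk = ks[0] - ks[1]
for k in ks[:-1]:
    Upp  = np.gradient(np.gradient(U, dphi), dphi)  # U_k''(phi), crude at edges
    dUdk = -(k**3 / (16 * np.pi**2)) * np.log((k**2 + Upp) / (k**2 + Upp[200]))
    U    = U - dk * dUdk                            # step from k to k - dk

Upp = np.gradient(np.gradient(U, dphi), dphi)
print("bare mass^2 =", 0.05, "  running mass^2 at k_IR =", Upp[200])
\end{verbatim}
For these couplings the effect is small: the curvature at the origin grows by less than one percent between $\Lambda$ and $k_{IR}$, in agreement with the sign of the flow of $g_2$ derived in the next subsection.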
\subsection{Fixed point and classification of coupling constants}
\noindent One then parametrises the running potential in the polynomial form
\begin{equation}
U_k(\phi_0)=\frac{g_2(k)}{2}\phi_0^2+\frac{g_4(k)}{24}\phi_0^4+\frac{g_6(k)}{6!}\phi_0^6+\cdots
\end{equation}
An expansion of the right-hand side of the flow equation (\ref{WH}) in powers of $\phi_0$ generates an infinite series of terms,
that we truncate here to $\phi_0^6$.
The identification of powers of $\phi_0$ of both sides of the equation leads then to
\begin{eqnarray}
k\partial_kg_2(k)&=&-\frac{\hbar k^4}{16\pi^2}~\frac{g_4(k)}{k^2+g_2(k)}\\
k\partial_kg_4(k)&=&-\frac{\hbar k^4}{16\pi^2}
\left(\frac{-3g_4^2(k)}{[k^2+g_2(k)]^2}+\frac{g_6(k)}{k^2+g_2(k)}\right)\\
k\partial_kg_6(k)&=&-\frac{15\hbar k^4}{16\pi^2}
\left(\frac{2g_4^3(k)}{[k^2+g_2(k)]^3}-\frac{g_4(k)g_6(k)}{[k^2+g_2(k)]^2}\right)~.
\end{eqnarray}
The dimensionless couplings $\tilde g_{2n}$ are, in 4 dimensions,
\begin{equation}
g_{2n}(k)=k^{4-2n}\tilde g_{2n}~,~~~~~\mbox{with}~~[\tilde g_{2n}]=0~,
\end{equation}
which lead to the renormalisation equations
\begin{eqnarray}\label{flow1}
k\partial_k \tilde g_{2}&=&-2\tilde g_2-\frac{\hbar}{16\pi^2}~\frac{\tilde g_4}{1+\tilde g_2}\\
k\partial_k\tilde g_4&=&~~0~~~-\frac{\hbar}{16\pi^2}\left(\frac{-3\tilde g_4^2}{[1+\tilde g_2]^2}+\frac{\tilde g_6}{1+\tilde g_2}\right)\\
k\partial_k \tilde g_6&=&~~2\tilde g_6~-\frac{15\hbar}{16\pi^2}\left(\frac{2\tilde g_4^3}{[1+\tilde g_2]^3}
-\frac{\tilde g_4\tilde g_6}{[1+\tilde g_2]^2}\right)~,
\end{eqnarray}
where $\tilde g_4=g_4$, since $[g_4]=0$.\\
By definition, a fixed point $\tilde g^\star=\{\tilde g^\star_{2n}\}$ of the renormalisation flows (\ref{flow1})
is invariant in the blocking procedure: $\partial_k \tilde g^\star=0$. Once a fixed point is found,
one can classify the coupling constants $g\ne g^\star$ according to their behaviour when $k\to0$:\\
{\it Relevant coupling}: $\tilde g_{2p}$ goes away from $\tilde g_{2p}^\star$ as $k\to0$\\
{\it Irrelevant coupling}: $\tilde g_{2q}$ converges to $\tilde g_{2q}^\star$ as $k\to0$\\
\noindent In the previous example, the only fixed point is the trivial one: $\tilde g^\star=0$.
This is called the Gaussian fixed point because the action contains the kinetic term only, which is quadratic in the field.
Compared to this trivial fixed point, one can classify the coupling constants (the classical scaling, if not zero, is dominant):\\
$\tilde g_2(k)$ is relevant, since it increases in the IR: $\partial_k \tilde g_{2}<0$;\\
$\tilde g_6(k)$ is irrelevant, since it decreases in the IR: $\partial_k \tilde g_6>0$.\\
If one truncates the theory to $\phi_0^4$, then $\partial_k \tilde g_4>0$: $\tilde g_4$
is irrelevant. Note that this is a consequence of quantum fluctuations only, since at the classical level ($\hbar=0$), $\tilde g_4$
is {\it marginal}: it does not depend on $k$.
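\noindent This classification can be illustrated numerically (the following sketch is not part of the original notes): integrating the truncated flows (\ref{flow1}) towards the IR, with $\hbar=1$ and an assumed set of small bare couplings, the relevant coupling $\tilde g_2$ grows by several orders of magnitude, while $\tilde g_4$ and $\tilde g_6$ remain close to their initial values.
\begin{verbatim}
# Minimal sketch (assumptions: hbar = 1, arbitrary small bare couplings):
# integration of the dimensionless flows (flow1), truncated at phi^6.
import numpy as np
from scipy.integrate import solve_ivp

def flows(t, g):                       # t = ln(k/Lambda) <= 0
    g2, g4, g6 = g
    c = 1.0 / (16 * np.pi**2)
    d = 1.0 + g2
    dg2 = -2*g2 - c * g4 / d
    dg4 = -c * (-3*g4**2 / d**2 + g6 / d)
    dg6 = 2*g6 - 15*c * (2*g4**3 / d**3 - g4*g6 / d**2)
    return [dg2, dg4, dg6]

g0  = [0.01, 0.1, 0.0]                 # bare couplings at k = Lambda
sol = solve_ivp(flows, [0.0, -6.0], g0, dense_output=True, rtol=1e-8)
for t in (0.0, -2.0, -4.0, -6.0):
    g2, g4, g6 = sol.sol(t)
    print(f"ln(k/Lambda) = {t:5.1f}  g2 = {g2:.4e}  g4 = {g4:.4e}  g6 = {g6:.4e}")
\end{verbatim}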
\vspace{0.5cm}
\noindent Once coupling constants are classified, one defines a {\it universality class} as a set of theories which differ only by irrelevant
parameters. In this case, the renormalisation flows of these different theories all lead to the same IR physics, defined by the set
of relevant parameters, because a modification to an irrelevant parameter will not have any consequence in the IR.
\vspace{1cm}
\noindent {\bf Exercise 1}: {\it Critical exponents}\\
A critical exponent $\alpha$ is defined by the power law $\xi=\xi_0(k/k_0)^\alpha$ which gives the evolution of the parameter $\xi$ with the scale $k$.
Consider a model defined by the dimensionless couplings $\tilde g=\{\tilde g_1,\tilde g_2,\cdots\}$ and the renormalisation group equations
$k\partial_k\tilde g=f(\tilde g)$, with the fixed point $\tilde g^\star$.
After linearising the renormalisation equations,
show that the eigenvalues of the matrix $M_{ab}\equiv\partial f_a/\partial g_b|_{\tilde g^\star}$ are the critical exponents of the model.\\
\noindent {\bf Exercise 2}: {\it Wilson-Fisher fixed point}\\
Write the Wegner-Houghton equation in dimension $d=4-\epsilon$ and the corresponding evolution equations of the dimensionless couplings, after truncating
the equation to $\phi^4$. Show that there is a non-trivial fixed point, obtained by expanding the flow equations to the quadratic order in the running couplings.
\subsection{Relation to renormalisability}
The blocking procedure defines renormalisation flows which go from high momenta to the IR. On the other hand, in Particle Physics,
one is interested in the high-energy behaviour, with a fixed scale $\Lambda$ and $k\to\infty$,
such that a relevant coupling corresponds to a super-renormalisable parameter: it decreases in the UV and its behaviour is controlled.
Conversely, an irrelevant coupling corresponds to a non-renormalisable parameter:
it increases in the UV and diverges as $k\to\infty$.\\
\noindent A classically marginal coupling corresponds to a renormalisable theory, and the behaviour of the coupling at high energy
depends on the sign of the quantum contribution to the renormalisation flow. In the situation of the renormalisable $\phi^4$ bare theory,
quantum fluctuations make $g_4$ increase with $k$:
if one truncates the theory to $\phi^4$, one obtains, for high momentum, the equation
\begin{equation}
k\partial_k g_4(k)=\frac{3\hbar}{16\pi^2}~\frac{g_4^2(k)}{[1+g_2(k)/k^2]^2}\simeq\frac{3\hbar}{16\pi^2}~g_4^2(k)~,
\end{equation}
with solution
\begin{equation}
g_4(k)=g_4(\Lambda)\left(1-\frac{3\hbar}{16\pi^2}g_4(\Lambda)\ln\left(\frac{k}{\Lambda}\right)\right)^{-1}~.
\end{equation}
One can make two comments at this point:
\begin{itemize}
\item The latter solution corresponds to the resummation of a geometric series of $n$-loop graphs, each given by the product of $n$
one-loop graphs (\ref{g1loop}). The present truncation of the renormalisation equation (\ref{WH})
therefore provides us with an improved one-loop calculation;
\item In the spirit of Wilsonian renormalisation, $k\le\Lambda$, such that $g_4(k)$ is never singular.
But if one fixes the scale $\Lambda$ and
increases $k$, a singularity occurs at
\begin{equation}
k_\infty=\Lambda\exp\left(\frac{16\pi^2}{3\hbar g_4(\Lambda)}\right)>>\Lambda~.
\end{equation}
This singularity also occurs in QED, where it corresponds to the {\it Landau pole}
\begin{equation}
k_{Landau}\simeq m_{electron}\exp(685)>>M_{Planck}~.
\end{equation}
In QCD though, the coupling decreases with $k$, such that the
theory is asymptotically free.
\end{itemize}
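\noindent The singularity scale quoted in the last item can be evaluated explicitly; the small script below (not part of the original text, with $\hbar=1$ and an assumed bare coupling $g_4(\Lambda)$) also tabulates the resummed one-loop running coupling.
\begin{verbatim}
# Illustration only (assumptions: hbar = 1, g4(Lambda) = 0.5): the resummed
# one-loop solution for g4(k) and the position k_infty of its singularity.
import numpy as np

g4_L = 0.5
c = 3.0 / (16 * np.pi**2)
g4 = lambda r: g4_L / (1.0 - c * g4_L * np.log(r))     # r = k / Lambda

print("k_infty / Lambda =", np.exp(1.0 / (c * g4_L)))  # ~ exp(16 pi^2/(3 g4))
for r in (1.0, 10.0, 100.0, 1e6):
    print(f"k/Lambda = {r:8.1e}   g4 = {g4(r):.6f}")
\end{verbatim}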
\subsection{Maxwell construction}
In the situation of a bare potential containing a concave part (as for the double-well potential), non-trivial saddle points
in $\psi$ appear at some stage of the blocking procedure \cite{ABP}, in order to avoid the ``spinodal instability'' which would
otherwise occur at the scale $k_s$ satisfying $k_s^2+U_{k_s}''(\phi_0)=0$, when the restoration force for quadratic
fluctuations vanishes. These saddle points
lead to the Maxwell construction in the limit $k\to0$, where the effective potential is flat between the bare minima.
This flattening ensures that the effective potential is convex, as expected from
general arguments (see Appendix B). Indeed, in the limit of infinite volume, the Wilsonian effective potential is identical
to the one-particle irreducible (1PI) effective potential (see Appendix C), and must therefore be convex in the IR limit $k=0$.
\begin{figure}
\epsfxsize=8cm
\centerline{\epsffile{ABP.eps}}
\caption{Evolution of the running potential in the situation of a concave bare potential. Convexity is recovered in the
IR limit $k\to0$, where the effective potential becomes flat, as expected from the Maxwell construction. (Figure taken from \cite{ABP}) }
\end{figure}
Taking into account the presence of a non-trivial saddle point $\psi_{s}$ in the integration over the shell of thickness $dk$,
the blocking reads
\begin{eqnarray}
&&\exp\left(-\frac{V}{\hbar}U_{k-dk}(\phi_0)\right)=\exp\left(-\frac{1}{\hbar}S_k[\phi_0+\psi_s]\right)\\
&\times&\int{\cal D}[\psi]
\exp\left(-\frac{1}{2\hbar}\int_p\frac{\delta^2S_\Lambda[\phi_0+\psi_s]}{\delta\phi_p\delta\phi_{-p}}(\psi-\psi_s)_p(\psi-\psi_s)_{-p}
+\cdots\right)~.\nonumber
\end{eqnarray}
Ignoring quantum fluctuations, the latter blocking gives
\begin{equation}
VU_{k-dk}(\phi_0)\simeq S_k[\phi_0+\psi_s]~,
\end{equation}
which leads to a finite-difference equation for the running potential $U_k$. This ``tree-level'' renormalisation flow has been studied
numerically in \cite{ABP}: the saddle point is assumed to be a plane wave, whose amplitude is evaluated at each blocking step, and contributes to the gradual elimination of
the concave part of the bare potential (see Fig.1).
Note that this convexity is not obtained if the scalar field is coupled to a gauge field, as in the Standard Model: in order
to define the partition function, one then needs to fix a gauge and therefore restrict the field space over which the path integral
is defined. This restriction forces one to quantise the theory over one given vacuum, whereas convexity is obtained by taking into account
the different vacua of the bare theory, when defining the partition function. An analytical derivation of the convex 1PI effective potential is given in
\cite{AT}, where the Maxwell construction is obtained in the limit of infinite volume: the flat dressed potential describes a system where the ground state is a
superposition of the two bare vacua.
\subsection{Problems with a sharp cut off}
The description given in this section has the advantage of being intuitive and free of any additional technicality.
The sharp cut off used here has two important problems though: it cannot be used for a theory including gauge invariance, and
it cannot predict the evolution of the derivative terms of the action.
As far as gauge invariance is concerned, one cannot impose a cut off $\Lambda$ on the Fourier components of a gauge field:
a gauge transformation would spoil this cut off, since it involves an arbitrary gauge function, which
can have non-vanishing Fourier modes for momenta larger than $\Lambda$.
The point concerning the derivative terms in the action is more subtle. In the previous example, a constant IR field has been used, which is enough to
evaluate the potential part of the action (the part containing no derivatives). In order to derive the evolution of derivative terms, one needs to
consider a coordinate-dependent IR field, for example
\begin{equation}\label{phi1}
\phi=\phi_0+\phi_1\sin(p^\mu x_\mu)~,
\end{equation}
where $p$ is fixed (with $|p|<k$) and $\phi_1$ is a constant.
Each blocking step introduces an infinite series of derivative terms, and the running action can be parametrised by
the derivative expansion
\begin{equation}\label{gradexp}
S_k[\phi]=\int d^4x\left\{\frac{1}{2}\sum_{n=0}^\infty Z_k^{(n)}(\phi)\partial_\mu\phi\Box^n\partial^\mu\phi+U_k(\phi)\right\}~,
\end{equation}
where $Z_k^{(n)}(\phi)$ are functions allowing wave function renormalisation and derivative interactions which are introduced by the blocking.
The different evolution equations are then obtained, from the renormalisation equation, by the identification of the
following terms:\\
$\bullet$ terms independent of $\phi_1$ for the potential $U_k(\phi)$;\\
$\bullet$ terms proportional to $p^{2n+2}\phi_1^2$ for $Z^{(n)}_k(\phi)$.\\
(Terms depending on $\phi_1$ but not on $p$ give the evolution equations for the field derivatives $\partial_\phi^n U_k(\phi)$, which are consistent with
the evolution for $U_k(\phi)$.)\\
For the IR field (\ref{phi1}), the second derivative of the action (\ref{gradexp}) is of the form
\begin{equation}
\frac{\delta^2S_k}{\delta\phi_x\delta\phi_y}=F(\sin(p x),\partial_\mu)\delta(x-y)~,
\end{equation}
such that the evolution equation contains terms of the form
\begin{equation}
\int_{\cal D}\tilde F(p,q)\tilde\psi_q\tilde\psi_{-p-q}~.
\end{equation}
Because one integrates Fourier modes $\tilde\psi$ in the shell of radius $k$ and thickness $dk$, the domain ${\cal D}$ of integration over $q$
of the latter integral is deformed and doesn't correspond to the spherical shell any more: ${\cal D}$ is defined by the simultaneous conditions
\begin{equation}
k-dk\le|q|\le k~,~~\mbox{and}~~k-dk\le|q+p|\le k~,
\end{equation}
which can be achieved for all $q$ only if $|p|<<dk$. But in order to obtain the flow equation, one takes the limit $dk\to0$,
which then forces $|p|\to0$ and hence $\phi=\phi_0$, and therefore we are left with the original situation where only the evolution
of the potential can be found.
\section{Exact flows (smooth cut off)}
We explain here how both problems with the Wegner-Houghton approach can be avoided, by introducing a cut off function which
allows a progressive (``smooth'') elimination of Fourier modes.
In this section and the following, we set $\hbar=1$, and Fourier modes are denoted without a tilde.
\subsection{Polchinski renormalisation equation}
The idea of a smooth cut off was introduced by Polchinski \cite{Polchinski}.
High momentum Fourier modes are gradually cut off above a given scale $k$, by replacing the inverse propagator $p^2+m^2$ with
a differentiable cut-off function $Q_k^{-1}(p^2)$ which satisfies the following conditions
\begin{itemize}
\item $Q_k^{-1}(p^2)= p^2+m^2$ for $p^2\le k^2$, such that Fourier modes for $p^2\le k^2$ propagate as expected;
\item $Q_k(p^2)$ decreases rapidly to 0 for $p^2>k^2$, such that the quadratic cut off term dominates the action for Fourier modes with $p^2>k^2$, which are thus suppressed and effectively integrated out.
\end{itemize}
We are interested in the evolution of the running action $S_k$, describing Fourier modes for $|p|\leq k$.
The total running action, including the cut off function and the source term, is
\begin{equation}
\Sigma_k=S_k+\frac{1}{2}\int_p\phi_{-p}Q_k^{-1}\phi_p+\int_pj_{-p}\phi_p~,
\end{equation}
and the scale dependence of the action $S_k$ is obtained by imposing that the partition function is independent of $k$.
\noindent The source $j_p$ is assumed to vanish for $p^2>k^2$, such that we have
\begin{equation}\label{jQ}
j_p\partial_kQ_k(p^2)=0~,
\end{equation}
and the partition function is
\begin{equation}
Z[j]=\int{\cal D}[\phi]\exp\left(-\Sigma_k[\phi,j]\right)~.
\end{equation}
The IR physics should be independent of the arbitrary scale $k$, which implies
\begin{equation}\label{partialkZ}
\partial_kZ=0=-\int{\cal D}[\phi]\left(\partial_kS_k[\phi]+\frac{1}{2}\int_p\phi_{-p}\partial_kQ_k^{-1}\phi_p\right)\exp\left(-\Sigma_k[\phi,j]\right)~.
\end{equation}
The next step is to find what the variation $\partial_kS_k$ should be, to satisfy $\partial_kZ=0$. For this,
we note that the following total functional derivative can be written
\begin{eqnarray}\label{exo2}
&&\partial_kQ_k^{-1}\left[\frac{\delta^2e^{-\Sigma_k}}{\delta\phi_p\delta\phi_{-p}}
+Q_k^{-1}\frac{\delta(\phi_pe^{-\Sigma_k})}{\delta\phi_p}
+Q_k^{-1}\frac{\delta(\phi_{-p}e^{-\Sigma_k})}{\delta\phi_{-p}}\right]\\
&=&Q_k^{-2}\left[Q_k\partial_k Q_k^{-1}\tilde\delta(0)-\phi_{-p}\partial_kQ_k^{-1}\phi_p-\partial_kQ_k
\left(\frac{\delta S_k}{\delta\phi_p}\frac{\delta S_k}{\delta\phi_{-p}}-\frac{\delta^2S_k}{\delta\phi_p\delta\phi_{-p}}\right)
\right]e^{-\Sigma_k}~,\nonumber
\end{eqnarray}
where the condition (\ref{jQ}) was used. We therefore see that, after ignoring the field-independent term
$Q_k\partial_k Q_k^{-1}\tilde\delta(0)$, if we choose
\begin{equation}\label{polchequa}
\partial_k S_k\equiv\frac{1}{2}\int_p\partial_kQ_k(p^2)\left(\frac{\delta S_k}{\delta\phi_p}\frac{\delta S_k}{\delta\phi_{-p}}
-\frac{\delta^2S_k}{\delta\phi_p\delta\phi_{-p}}\right)~,
\end{equation}
the scale-independence condition (\ref{partialkZ}) is satisfied: the functional integral of the functional derivative vanishes, since the integrand
decreases exponentially for large field amplitudes.
The self-consistent equation (\ref{polchequa}) describes how the bare action $S_k$ should evolve,
for the IR physics to be unchanged when $k$ varies.\\
\noindent Exercise 4 consists in solving the Polchinski equation in a simple context, where only quadratic and quartic terms in the field are taken into account, but where
any power of the momentum is allowed. On the other hand, the local potential approximation consists in allowing any function of the field for the running potential,
but neglecting any correction to momentum-dependent parts of the action. For the Polchinski equation, this consists in projecting the running action on the functional subspace
\begin{equation}
S_k=\int_x~U_k(\phi)~~~~~\mbox{for all}~k~,
\end{equation}
such that, for a constant IR configuration $\phi_0$,
\begin{eqnarray}
\frac{\delta S_k}{\delta\phi_p}&=&\int_x\frac{\delta S_k}{\delta\phi_x}\frac{\delta\phi_x}{\delta\phi_p}=\int_xU_k'(\phi_0)e^{ipx}=U_k'(\phi_0)\tilde\delta(p)\\
\frac{\delta^2 S_k}{\delta\phi_p\delta\phi_q}&=&\int_x\int_y\frac{\delta^2 S_k}{\delta\phi_x\delta\phi_y}\frac{\delta\phi_x}{\delta\phi_p}\frac{\delta\phi_y}{\delta\phi_q}
=\int_x\int_yU_k''(\phi_0)\delta(x-y)e^{ipx+iqy}=U_k''(\phi_0)\tilde\delta(p+q)~.\nonumber
\end{eqnarray}
For this constant IR configuration $\phi_0$, we also have $S_k=VU_k(\phi_0)$, such that the Polchinski equation gives
\begin{eqnarray}
\partial_kU_k(\phi_0)&=&\frac{1}{2V}[U_k'(\phi_0)]^2\int_p\partial_kQ_k(p^2)[\tilde\delta(p)]^2-\frac{1}{2V}U_k''(\phi_0)\int_p\partial_kQ_k(p^2)\tilde\delta(0)\nonumber\\
&=&\frac{1}{2}[U_k'(\phi_0)]^2\partial_kQ_k(0)-\frac{1}{2}U_k''(\phi_0)\int_p\partial_kQ_k(p^2)~.
\end{eqnarray}
Although there is no logarithm as in the Wegner-Houghton equation, it is here the quadratic term $[U_k']^2$ which leads to a non-trivial renormalisation flow.
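\noindent As a quick illustration (this worked example is not in the original text), one can truncate the potential to $U_k(\phi_0)=\frac{g_2(k)}{2}\phi_0^2+\frac{g_4(k)}{24}\phi_0^4$ and write $I_k\equiv\int_p\partial_kQ_k(p^2)$. Inserting this ansatz in the equation above and identifying the powers $\phi_0^2$ and $\phi_0^4$ (the $\phi_0^6$ term and the field-independent piece are dropped) gives
\begin{equation}
\partial_k g_2 = g_2^2~\partial_kQ_k(0)-\frac{g_4}{2}~I_k~,~~~~~~~~
\partial_k g_4 = 4~g_2~g_4~\partial_kQ_k(0)~,
\end{equation}
which shows explicitly how the quadratic term $[U_k']^2$ feeds the flow of the couplings for a generic cut off function $Q_k$.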
\vspace{1cm}
\noindent{\bf Exercise 3}: Derive eq.(\ref{exo2})\\
\noindent{\bf Exercise 4}: {\it Quartic ansatz for the Polchinski equation}\\
Assume the following quartic ansatz
$$
S_k=\frac{1}{2}\int_p\phi_pF_k(p^2)\phi_{-p}+\frac{1}{24}\int_{pqr}G_k(p,q,r)\phi_p\phi_q\phi_r\phi_{-p-q-r}~,
$$
and derive from eq.(\ref{polchequa}) the evolution equations for the functions $F$ and $G$.
\subsection{Wetterich average effective action}
An elegant way to implement exact Wilsonian renormalisation is through the average
effective action, introduced by Wetterich (see \cite{Wetterich} for a review),
which corresponds to a 1PI effective action for which modes with $|p|<k$ are ``frozen'' in the partition function,
and mainly modes with $|p|>k$ are integrated out.
In Polchinski's approach, the IR action is kept fixed and one studies the evolution of the bare action when the arbitrary scale $k$
is changed. The average effective action, on the other hand, does depend on the scale $k$, while the bare action is kept fixed.
The average effective action recovers the usual 1PI generating functional when $k\to0$, where all quantum fluctuations have the same
weight in the partition function.
This procedure is implemented by adding the following quadratic term to the bare action
\begin{equation}\label{additional}
S_k[\phi]=\frac{1}{2}\int_p\phi_p R_k(p^2) \phi_{-p}~,
\end{equation}
where the smooth cut off function $R_k$ satisfies:\\
{\it(i)} $R_k\to0$ when $k\to0$, in order to recover the usual 1PI action in the deep IR;\\
{\it(ii)} $R_k(p^2)$ goes quickly to 0 for $p^2>k^2$, in order to leave undisturbed the integration over UV modes;\\
{\it(iii)} $R_k(p^2)\simeq k^2$ for $p^2\leq k^2$, which ``freezes'' IR degrees of freedom,
by giving them the effective mass $k$.\\
Given the additional term (\ref{additional}) in the bare action, one builds the effective average action $\Gamma_k[\phi_c]$
from the average partition function
\begin{equation}
Z_k[j]\equiv\int{\cal D}[\phi]\exp\left(-S[\phi]-S_k[\phi]-\int_pj_p\phi_{-p}\right)\equiv\exp(-W_k[j])~,
\end{equation}
without forgetting to take into account the additional term (\ref{additional}) in the definition of the
Legendre transform:
\begin{equation}
\Gamma_k[\phi_c]=W_k[j]-S_k[\phi_c]-\int_xj\phi_c~.
\end{equation}
The effect of the Legendre transform is to change the description from the functional $W$ of the source $j$ to the
functional $\Gamma$ of the classical field $\phi_c$, assuming that there is a one-to-one mapping between $j$ and $\phi_c$.\\
This construction is similar to the introduction of the Gibbs free energy $G$, which is a Legendre transform of the energy $U$:
\begin{equation}
G\equiv U-S\left(\frac{\partial U}{\partial S}\right)_V-V\left(\frac{\partial U}{\partial V}\right)_S=U-TS+PV~~,~~~~\mbox{with}~~ dU=TdS-PdV~.
\end{equation}
The Legendre transform implies $dG=-SdT+VdP$, such that a description in terms of the variables $(S,V)$ is
turned into a description in terms of $(T,P)$, and there is a one-to-one mapping between $(S,V)$ and $(T,P)$.\\
We note that the partition function $Z_k$ contains all the graphs of the theory, whereas $W_k$ contains the connected graphs only. The cancellation of the non-connected graphs occurs
when taking the logarithm of $Z_k$, which can be seen with a perturbative expansion. The Legendre transform $\Gamma_k$ then contains the one-particle-irreducible graphs only, which
cannot be disconnected into two graphs by cutting a single internal line. As shown in Appendix A, $\Gamma_k$ corresponds to the bare action plus quantum corrections
$\Gamma_k[\phi_c]=S[\phi_c]+{\cal O}(\hbar)$.
\vspace{0.5cm}
One can then obtain an exact functional differential equation for $\Gamma_k$, as shown here.
Keeping in mind that, after the Legendre transform, the two independent variables are $k$ and $\phi_c$, we have
\begin{eqnarray}
\partial_k\Gamma_k[\phi_c]&=&\frac{dW_k[j]}{dk}-\partial_kS_k[\phi_c]-\partial_k\int_xj\phi_c\nonumber\\
&=&\partial_kW_k[j]+\int_x\frac{\delta W}{\delta j}\partial_kj-\frac{1}{2}\int_p\phi_c(p) \partial_kR_k(p^2)
\phi_c(-p)-\int_x\partial_kj\phi_c\nonumber\\
&=&\partial_kW_k[j]-\frac{1}{2}\int_p\phi_c(p) \partial_kR_k(p^2) \phi_c(-p)~.
\end{eqnarray}
We therefore need an expression for $\partial_kW$, which can also be written
\begin{equation}
\partial_k W_k[j]=\frac{1}{2}\int_p\partial_kR_k(p^2)\left<\phi_p\phi_{-p}\right>~,
\end{equation}
where
\begin{equation}
\left<\cdots\right>\equiv\frac{1}{Z_k}\int{\cal D}[\phi](\cdots)\exp\left(-S[\phi]-S_k[\phi]-\int_pj_p\phi_{-p}\right)~.
\end{equation}
The next step is to express the different functional derivatives of $W_k$ and $\Gamma_k$:
\begin{eqnarray}
\frac{\delta W_k}{\delta j_{-p}}&=&\phi_c(p)\\
\frac{\delta^2W_k}{\delta j_{-p}\delta j_{-q}}&=&\phi_c(q)\phi_c(p)-\left<\phi_q\phi_p\right>\\
\frac{\delta\Gamma_k}{\delta\phi_c(p)}&=&-R_k(p^2)\phi_c(-p)-j_{-p}\\
\label{d2Gd2W}\frac{\delta^2\Gamma_k}{\delta\phi_c(p)\delta\phi_c(q)}&=&-R_k(p^2)~\tilde\delta(p+q)
-\left(\frac{\delta^2W_k}{\delta j_{-p}\delta j_{-q}}\right)^{-1}~.
\end{eqnarray}
Taking into account the different relations, we finally obtain the exact flow equation for $\Gamma_k$
\begin{equation}\label{wettequa}
\partial_k\Gamma_k=\frac{1}{2}\mbox{Tr}\left\{\partial_kR_k\left(\frac{\delta^2\Gamma_k}{\delta\phi_c(p)\delta\phi_c(q)}
+R_k(p^2)~\tilde\delta(p+q)\right)^{-1}\right\}~.
\end{equation}
Like the Polchinski equation (\ref{polchequa}), the Wetterich equation (\ref{wettequa}) is an exact self-consistent equation, but
it describes the evolution of the average effective action instead of the bare action. Solving eq.(\ref{wettequa}) requires
projecting $\Gamma_k$ onto a subspace of functionals, usually defined by the derivative expansion (\ref{gradexp}), whose convergence
was originally studied by Morris \cite{Morris}. We also note that, because of the cut off function $R_k$, the trace
appearing in the renormalisation equation (\ref{wettequa}) is finite and does not require regularisation. \\
Finally, although the IR limit $k\to0$ is, by construction, independent of the choice of cut off function $R_k(p^2)$,
the flows do depend on $R_k(p^2)$. A particularly convenient choice is the cut off function introduced by Litim \cite{Litim}
\begin{equation}\label{optcutoff}
R_k(p^2)=(k^2-p^2)\Theta(k^2-p^2)~,
\end{equation}
which not only simplifies calculations, but also optimises the convergence of flows, with respect to different truncations of the
average effective action.
Although this cut off contains a Heaviside function, it is differentiable once, and therefore is suitable for the definition of
renormalisation flows.
\vspace{1cm}
\noindent{\bf Exercise 5}: {\it Evolution of the average effective potential}\\
Neglecting the evolution of the derivative terms of the average effective action
$$
\Gamma_k[\phi_c]=\int d^4x\left(\frac{1}{2}\partial_\mu\phi_c\partial^\mu\phi_c+U_k(\phi_c)\right)~,
$$
derive the flow equation satisfied by the potential $U_k(\phi_c)$, for the cut off function (\ref{optcutoff}).
\subsection{Example: asymptotic safety in quantum gravity}
The idea of average effective action for gravity was introduced by Reuter \cite{Reuter}. Technical details are also given
in the original articles \cite{LauscherReuter,ReuterSaueressig1} and reviews can be found in \cite{reviews}.
These studies are motivated by the existence of a non-trivial UV fixed point for Einstein Gravity in $2+\epsilon$ dimensions \cite{Weinberg},
together with the conjecture of a non-trivial UV fixed point in higher-order-derivative gravity in 4 dimensions (see \cite{asympsafe} for
a general review of asymptotic safety in gravity).
General Relativity is perturbatively non-renormalisable, which can be understood intuitively by introducing a cut off for graviton momentum.
If a physical quantity $P$ is calculated perturbatively, it can be expressed in the form
\begin{equation}
P=P_0+GP_1+G^2P_2+G^3P_3\cdots~,
\end{equation}
where $P_0$ is the bare quantity, $P_n$ consists in $n$-loop graphs and dots represent higher order terms in the gravitational constant $G$. Since the latter has mass dimension -2,
each term $P_n$ must be of the form $\Lambda^{2n}p_n$, where $p_n$ has the mass dimension of $P_0$. The expansion in powers of $G$ therefore seems to diverge
when $\Lambda$ is sent to infinity
\begin{equation}
P=P_0+G\Lambda^2p_1+(G\Lambda^2)^2p_2+(G\Lambda^2)^3p_3+\cdots
\end{equation}
But one could imagine that the latter expansion could actually be re-summed to give a final result in the limit $\Lambda\to\infty$, which would
secure the predictive power of quantum Einstein gravity.
The studies summarized here involve Wilsonian renormalisation flows for higher-order derivative gravity, assuming that the running
average effective action lies in the functional subspace of $f(R)$ gravities.
In this context, a non-trivial UV fixed point is found, indeed suggesting asymptotic safety for gravity.\\
\noindent {\bf General framework}\\
One considers a background field approach, where the total metric $g_{\mu\nu}+h_{\mu\nu}$ is decomposed into the (fixed) background metric
$g_{\mu\nu}$ and fluctuations $h_{\mu\nu}$ which are integrated out.
The partition function, depending on the background metric $g_{\mu\nu}$, is
\begin{eqnarray}
Z_k[t,\sigma,\overline\sigma]&=&\exp(-W_k[t,\sigma,\overline\sigma])\\
&=&\int{\cal D}[h,C,\overline C]\exp\Big\{-S[g+h]-S_{gf}[h]-S_{gh}[h,C,\overline C]\nonumber\\
&&~~~~~~~~~~~~~~~~~~~~~~~~-S_k[h,C,\overline C]-S_{source}[h,C,\overline C]\Big\}~,\nonumber
\end{eqnarray}
where
$S[g+h]$ is the bare action,
$S_{gf}[h]$ is the gauge fixing term,
$S_{gh}[h,C,\overline C]$ is the ghosts action,
$S_k[h,C,\overline C]$ is the cut off action, and the source term is
\begin{equation}
S_{source}[h,C,\overline C]=\int d^4x\sqrt{g}(h_{\mu\nu}t^{\mu\nu}+\overline\sigma_\mu C^\mu+\overline C_\mu\sigma^\mu)~.
\end{equation}
The usual Faddeev-Popov gauge fixing term, defined in terms of the background metric, is of the form
\begin{equation}
S_{gf}=\frac{1}{\alpha}\int d^4x\sqrt{g}g_{\mu\nu}F^\mu F^\nu~.
\end{equation}
For example, the harmonic gauge is obtained with
\begin{equation}
F_\mu=\kappa(\nabla^\nu h_{\mu\nu}-\frac{1}{2}\nabla_\mu h^\nu_{~\nu})~,
\end{equation}
where $\kappa$ is a parameter with the dimension of a mass (the covariant derivatives are taken with respect to the background metric $g_{\mu\nu}$),
and the corresponding ghost action $S_{gh}$ is quadratic in the ghosts.
The cut off action must be quadratic in the fluctuations $h_{\mu\nu}$ and the ghosts, in order to obtain a closed self-consistent evolution
equation for the effective action, based on the property (\ref{d2Gd2W}). The cut off action is then of the form
\begin{equation}
S_k[h,C,\overline C]=\int d^4x\sqrt{g}~h_{\mu\nu} R_k^{\mu\nu\rho\sigma} h_{\rho\sigma}+\int d^4x\sqrt{g}~\overline C_\mu Q_kC^\mu~,
\end{equation}
where the cut off functions $R_k$ and $Q_k$ depend on the background metric $g_{\mu\nu}$ only.
The cut off function classifies modes
to be eliminated according to the eigenvalues of the background-covariant derivatives, such that the elimination of degrees of
freedom can be done in a ``covariant way''. This procedure is less intuitive than in flat space time though, where these eigenvalues are simply the 4-momentum. \\
The corresponding classical fields are
\begin{equation}
h_{\mu\nu}^c=\frac{1}{\sqrt{g}}\frac{\delta W_k}{\delta t^{\mu\nu}}~~,~~~~
C^\mu_c=\frac{1}{\sqrt{g}}\frac{\delta W_k}{\delta\overline\sigma_\mu}~~,~~~~
\overline C_\mu^c=\frac{1}{\sqrt{g}}\frac{\delta W_k}{\delta\sigma^\mu}~,
\end{equation}
and the average effective action, defined on the background metric $g_{\mu\nu}$, is
\begin{equation}
\Gamma_k[h^c,C_c,\overline C_c]=W_k[t,\sigma,\overline\sigma]-S_{source}[h^c,C_c,\overline C_c]-S_k[h^c,C_c,\overline C_c]~.
\end{equation}
We are eventually interested in the effective action as a functional of the background metric $g_{\mu\nu}$ only,
such that we can set the classical fields to 0: $h_c=C_c^\mu=\overline C_\mu^c=0$, and look for the evolution of the relevant average effective action
\begin{equation}
\Gamma^{gr}_k\equiv\Gamma_k[0,0,0]~
\end{equation}
which depends on the background metric $g_{\mu\nu}$ in a gauge invariant way.\\
The graviton $h_{\mu\nu}$ can be decomposed into different fields
\begin{equation}
h_{\mu\nu}=\overline{h}_{\mu\nu}^\bot+\frac{h}{4}g_{\mu\nu}+\nabla_\mu\xi_\nu^\bot+\nabla_\nu\xi_\mu^\bot+\left(\nabla_\mu\nabla_\nu-\frac{1}{4}g_{\mu\nu}\nabla^2\right)\chi~,
\end{equation}
where the spin-0 components are $h=$tr$\{h_{\mu\nu}\}$ and $\chi$;
the spin-1 component $\xi_\mu^\bot$ is transverse;
the spin-2 component $\overline h_{\mu\nu}^\bot$ is traceless and transverse. These components are orthogonal in the sense that for any $b\ne a$, the averages vanish $\left<\Phi_a\Phi_b\right>=0$,
where $\Phi_a$ is a generic notation for the different components.
Finally, if one uses an appropriate choice of gauge, $\delta^2\Gamma_k$ is diagonal in $\Phi_a$.
If one chooses identical cut off functions for all the components $\Phi_a$ and the ghosts
(up to the tensorial structure), the evolution equation for the average effective action is of the form
\begin{equation}
\partial_k\Gamma_k=\frac{1}{2}\sum_a A_a\mbox{Tr}
\left\{\left(\frac{\delta^2\Gamma_k}{\delta\Phi\delta\Phi}+R_k\right)^{-1}\partial_kR_k\right\}~,
\end{equation}
where $A_a=1$ for bosonic fields and $A_a=-2$ for the ghosts.
The trace is calculated using the heat kernel representation of the trace,
for which a review can be found in \cite{heatkernel} (see also exercise 6 for the main idea).
\vspace{1cm}
\noindent {\bf Exercise 6}: {\it Heat kernel representation of the trace}\\
Let $D_{x,y}$ be an inverse propagator (containing second order space time derivatives), and $\tau$ a parameter.
By definition, the heat kernel $K(\tau,x,y)$ satisfies the diffusion equation $(\partial_\tau+D)K=0$,
with initial condition $K(0,x,y)=\delta(x-y)$, and can formally be written $K=\exp(-\tau D)$.\\
{\it a)} Assume that $\lim_{\tau\to\infty}K=0$ and show that $D^{-1}=\int_0^\infty d\tau K$;\\
{\it b)} Show that, up to an infinite constant, any positive parameter $\lambda$ satisfies
$$
\ln\lambda=-\int_0^\infty\frac{d\tau}{\tau}\exp(-\tau\lambda)~;
$$
{\it c)} Assume that $D$ has positive eigenvalues, and show that
$$
\ln\mbox{det}D=-\int_0^\infty\frac{d\tau}{\tau}\mbox{tr}\{K\}~.
$$
\vspace{1cm}
\noindent{\bf $f(R)$ gravity}\\
The functional space of average effective actions is infinite dimensional, and it is
impossible to take into account all the covariant operators. An approximation consists in
projecting the running average effective action onto the functional subspace of $f(R)$ gravities \cite{f(R)}.
A proof of asymptotic safety would in principle require the complete set of different curvature terms,
but the UV fixed point obtained for $f(R)$ gravity is a hint towards asymptotic safety.
One therefore assumes that the average effective action takes the form
\begin{equation}
\Gamma^{gr}_k=\sum_{n=0}^N a_n(k)\int d^4x\sqrt{g}~R^n~,
\end{equation}
and simplifications arise when the maximally symmetric de Sitter background is chosen, for which the Ricci scalar $R$ is constant.
This ansatz is equivalent to the local potential approximation (\ref{locpot}), and the mass dimensions of coupling constants are
$[a_n]=4-2n$. In this context, the Einstein-Hilbert action corresponds to the truncation $N=1$, defined by the relevant couplings $a_0^{EH}=2\Lambda M_{Pl}^2$ and $a_1^{EH}=M_{Pl}^2$:
\begin{equation}
S_{EH}=M_{Pl}^2\int d^4x\sqrt{g}(2\Lambda+R)~,
\end{equation}
where $M_{Pl}$ is the Planck mass and $\Lambda$ the cosmological constant. Within the Einstein truncation, one allows only the first two couplings to evolve
\begin{equation}
a_0(k)\equiv 2k^4\lambda(k)/g(k)~~~,~~\mbox{and}~~~ a_1(k)\equiv k^2/g(k)~,
\end{equation}
where $g(k)$ is the dimensionless running gravitational constant and $\lambda(k)$ is the dimensionless running cosmological constant.
The integration of the renormalisation flows shows \cite{ReuterSaueressig1} (see Fig.2):\\
{\it(i)} a Gaussian IR fixed point $g^\star=\lambda^\star=0$;\\
{\it(ii)} a non-trivial UV fixed point for which $g^\star,$ and $\lambda^\star$ are of order 1.\\
Finally, this UV fixed point is shown to be stable against the choice of gauge and cut off function. Also, the renormalisation flows converge with the order $N$ of truncations in powers of
the Ricci scalar $R$ \cite{FallsLitim}, which tends to confirm the existence of a non-trivial UV fixed point.
\begin{figure}
\epsfxsize=14cm
\centerline{\epsffile{RS.eps}}
\caption{Renormalisation flows in the parameter space $(g,\lambda)$ for the Einstein truncation, where the arrows show the IR direction, and
the different flows correspond to different initial conditions.
Only one flow, starting with a specific initial condition in the UV, leads to the IR Gaussian fixed point, which is therefore repulsive. On the other hand,
the non-trivial UV fixed point is attractive and any IR initial condition with $g>0$ leads to asymptotic safety.
(Figure taken from \cite{ReuterSaueressig1})}
\end{figure}
\section{Conclusion}
Three different implementations of the concept of ``blocking'' in QFT have been presented in these lectures:{\it(i)} the Wegner-Houghton, {\it(ii)} the Polchinski
and {\it(iii)} the Wetterich approaches.
The approaches {\it(i)} and {\it(ii)} both deal with the Wilsonian effective action, although they don't lead to the same renormalisation flows. The approach {\it(iii)} deals
with the Legendre transformed effective action, and thus leads to yet another flow.
Therefore these approaches are quite different in
their technical details, and one can question which is the most relevant in a given physical situation.
The essential point is that these approaches describe qualitatively the same Physics at large scale, and they predict the same universality classes.
The differences arising from their technical implementations are similar to choosing different
blocks of spins in Statistical Mechanics. Also, it has been shown in \cite{equivalent} that
there is an exact mapping between the Polchinski and the Wetterich equations, in the local potential approximation, and using the cut off function (\ref{optcutoff}).
\addcontentsline{toc}{section}{Appendix}
\section*{Appendix}
(See the introduction of \cite{AT} for the original articles on the following topics)
\addcontentsline{toc}{subsection}{A. Path integral quantisation}
\subsection*{A. Path integral quantisation}
We review here the steps of path integral quantisation. Starting from the bare action $S[\phi]$, the partition function is
\begin{equation}
Z[j]=\int{\cal D}[\phi]\exp\left(\frac{i}{\hbar}S[\phi]+\frac{i}{\hbar}\int_x j\phi\right)~,
\end{equation}
where $j(x)$ is the source. The connected graph generating functional is then
\begin{equation}
W[j]=-i\hbar\ln(Z[j])~,
\end{equation}
from which the classical field is defined as
\begin{equation}\label{classicphi}
\phi_c(x)=\frac{\delta W}{\delta j(x)}.
\end{equation}
The one-particle irreducible (1PI) graph generating functional $\Gamma[\phi_c]$ is defined as the Legendre transform of $W[j]$
\begin{equation}
\Gamma[\phi_c]=W[j]-\int_x j\phi_c~,
\end{equation}
where $j$ should be seen as a functional of $\phi_c$, obtained by inverting the definition (\ref{classicphi}).
This 1PI effective action contains all the quantum corrections of the theory, which can be seen with a saddle point approximation to
evaluate $Z$:
\begin{equation}
Z[j]=\exp\left(\frac{i}{\hbar}S[\phi_{saddle}]+\frac{i}{\hbar}\int_x j\phi_{saddle}\right)+{\cal O}(\hbar)~,\nonumber
\end{equation}
where, by definition of the saddle point configuration $\phi_{saddle}$,
\begin{equation}
\left.\frac{\delta S}{\delta\phi}\right|_{saddle}+j=0~.
\end{equation}
The connected graph generating functional is then
\begin{equation}
W[j]=-i\hbar\ln(Z[j])=S[\phi_{saddle}]+\int_x j\phi_{saddle}+{\cal O}(\hbar)~,
\end{equation}
and the classical field is
\begin{eqnarray}
\phi_c&=&\frac{\delta W}{\delta j}=\int_x\left(\left.\frac{\delta S}{\delta\phi}\right|_{saddle}+j\right)
\frac{\delta\phi_{saddle}}{\delta j}+\phi_{saddle}+{\cal O}(\hbar)\nonumber\\
&=&\phi_{saddle}+{\cal O}(\hbar)~.
\end{eqnarray}
From the definition of $\Gamma[\phi_c]$, one eventually obtains
\begin{equation}
\Gamma[\phi_c]=S[\phi_c]+{\cal O}(\hbar)~,
\end{equation}
such that the 1PI action is the bare action plus quantum corrections.
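\noindent As a simple check of this statement (the example is not in the original text), consider a free theory $S[\phi]=\frac{1}{2}\int_x\phi\,A\,\phi$, with a field-independent kernel $A$. The saddle point $\phi_{saddle}=-A^{-1}j$ is then exact, the quadratic fluctuations produce a $j$-independent determinant only, and
\begin{equation}
W[j]=-\frac{1}{2}\int_x j\,A^{-1}j+\mbox{const}~,~~~~\phi_c=-A^{-1}j~,~~~~
\Gamma[\phi_c]=\frac{1}{2}\int_x \phi_c\,A\,\phi_c+\mbox{const}=S[\phi_c]+\mbox{const}~,
\end{equation}
so that for a Gaussian theory all the quantum corrections to the 1PI effective action are field independent.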
\addcontentsline{toc}{subsection}{B. Convexity of the 1PI effective action}
\subsection*{B. Convexity of the 1PI effective action}
From the definition of the Legendre transform $\Gamma$, one can write the equation of motion for the dressed system as
\begin{equation}
\frac{\delta\Gamma}{\delta\phi_c}=\int_x\frac{\delta W}{\delta j}\frac{\delta j}{\delta \phi_c}
-\int_y\frac{\delta j}{\delta \phi_c}\phi_c-j=-j~,\nonumber
\end{equation}
and a further derivative gives
\begin{equation}\label{d2Gd2Wbis}
\frac{\delta^2\Gamma}{\delta\phi_c\delta\phi_c}=-\frac{\delta j}{\delta\phi_c}=-\left(\frac{\delta^2 W}{\delta j\delta j}\right)^{-1}~.
\end{equation}
But one also has
\begin{equation}
\frac{\delta^2W}{\delta j\delta j}=\left<\phi\right>\left<\phi\right>-\left<\phi\phi\right>~,
\end{equation}
where
\begin{equation}
\left<\cdots\right>\equiv\frac{1}{Z}\int{\cal D}[\phi](\cdots)\exp\left(-S[\phi]-\int j\phi\right)~,
\end{equation}
which shows that the second functional derivative of $W$ is necessarily negative, as the opposite of a variance.
As a consequence, and given the relation (\ref{d2Gd2Wbis}), the second functional derivative of $\Gamma$ is positive:
the 1PI effective action is a convex functional.
Its derivative-independent part, the 1PI effective potential, is thus a convex function.
\addcontentsline{toc}{subsection}{C. Equivalence between the Wilsonian and the 1PI effective potentials}
\subsection*{C. Equivalence between the Wilsonian and the 1PI effective potentials}
\noindent The Wilsonian effective potential is defined as
\begin{equation}
\exp\left( iVU_{Wils}(\phi_0)\right)=\int{\cal D}[\phi]\delta\left(\int_x(\phi-\phi_0)\right)\exp\left( iS[\phi]\right)~,
\end{equation}
where $V$ is the space time volume.
The Dirac distribution is then written as the Fourier transform of an exponential
\begin{eqnarray}
\exp\left( iVU_{Wils}(\phi_0)\right) &=&\int dj\int{\cal D}[\phi]\exp\left( iS[\phi]+ij\int_x (\phi-\phi_0)\right)\nonumber\\
&=&\int dj \exp\left(iW[j]-ijV\phi_0\right)~,
\end{eqnarray}
and the integration over $j$ is evaluated with the saddle point approximation, which is exact in the limit $V\to\infty$:
\begin{equation}
\exp\left( iVU_{Wils}(\phi_0)\right)=\exp\left(iW[j_0]-ij_0V\phi_0\right)~,
\end{equation}
where $j_0$ satisfies $\delta W/\delta j_0=\phi_0$, such that $\phi_0$ is the classical field corresponding to $j_0$. Finally
\begin{equation}
\exp\left( iVU_{Wils}(\phi_0)\right)=\exp\left( i\Gamma[\phi_0]\right)
=\exp\left( iVU_{1PI}(\phi_0)\right)~,
\end{equation}
which shows the equivalence between $U_{Wils}$ and $U_{1PI}$.
We note that this argument is valid with a Minkowski metric only, since
it is based on the Fourier transform of the Dirac distribution.
\eject
\addcontentsline{toc}{section}{References}
\section{Introduction}
Throughout, let $\N$, $\Z$, $\R$, and $\C$ be the sets of positive integers,
integers, real numbers, and complex numbers respectively.
Further, let $\mathbb{N}_0=\mathbb{N}\cup\{0\}$, $\mathbb{Z}_0^{-}=\mathbb{Z}\setminus\mathbb{N}$,
$\R^{+}=\R\setminus\{r\in\R:\ r\leq 0\}$,
and $\D = \C\setminus\{x\in\R:\ x\leq 0\}$.
The gamma function $\Ga(z)$ is one of the most important special functions in mathematics with applications in many disciplines like
Physics and Statistics. It was first introduced by Euler in the integral form
\begin{equation}\label{Ga-integral}
\Ga(z) = \int_{0}^{\infty} t^{z-1} e^{-t}\ dt.
\end{equation}
Well-known equivalent definitions for the gamma function include the following three forms:
\begin{equation}\label{Ga-weierstrass}
\Ga(z) = \left(z e^{z \ga}\prod_{n=1}^{\infty}(1+\frac{z}{n})e^{-\frac{z}{n}}\right)^{-1} ,
\end{equation}
\begin{equation}\label{Ga-limit}
\Ga(z) =\lim_{n\to\infty}\frac{n^z n!}{(z)_{n+1}},
\end{equation}
\begin{equation}\label{Ga-euler}
\Ga(z) = \frac{1}{z}\prod_{n=1}^{\infty} (1+\frac{1}{n})^z (1+\frac{z}{n})^{-1},
\end{equation}
where $\ga$ is the \emph{Euler-Mascheroni constant}
\[
\ga = \lim_{n\to\infty}(1+\frac{1}{2}+\ldots+\frac{1}{n} - \log n)
\]
and $(z)_{n}$ is the \emph{Pochhammer symbol}
\[ (z)_n= \begin{cases}
1& \text{if\ } n=0, \\
z(z+1)\ldots(z+n-1)& \text{if\ } n\in\N.
\end{cases}
\]
The gamma function satisfies the basic functional equation
$\Ga(z+1) = z\Ga(z)$.
Barnes~\cite{Barnes 2} and Post~\cite{Post} investigated the theory of difference equations
of the more general form $\phi(z+1)= f(z) \phi(z)$ under conditions on the function $f(z)$ and obtained generalized gamma functions as
solutions. See also Barnes~\cite{Barnes 2} where \emph{multiple gamma functions} have been introduced.
Many mathematicians have considered concrete cases of generalized gamma functions. Dilcher~\cite{Dilcher 94} introduced for
any nonnegative integer $k$ the function
\[
\Ga_k (z) := \lim_{n\to\infty}\frac{ \exp\left\{\frac{\log^{k+1}n}{k+1} z\right\}
\prod_{j=1}^n \exp\left\{ \frac{1}{k+1}\log^{k+1} j\right\}}
{\prod_{j=0}^n \exp\left\{\frac{1}{k+1}\log^{k+1} (j+z)\right\}}
\]
which for $k=0$ becomes $\Ga(z)$, see formula (\ref{Ga-limit}).
Di\'{a}z~and~Pariguan~\cite{Diaz-Pariguan} extended the integral representation~(\ref{Ga-integral}) to the function
\[
\Ga_k(z) = \int_{0}^{\infty} t^{z-1}e^{-\frac{t^k}{k}}\,dt \qquad (k\in\mathbb{R}^{+})
\]
which for $k=1$ is nothing else but $\Ga(z)$. Recently, Loc~and~Tai~\cite{Loc-Tai} used polynomials to define
\[
\Ga_f(z) = \int_{0}^{\infty}f(t)^{z-1}e^{-t}\,dt
\]
which for $f(t)=t$ clearly gives $\Ga(z)$.
In this paper we present a gamma function $\Ga(x,z)$ in two complex variables which is meromorphic in both variables
and which satisfies $\lim_{x\to 1}\Ga(x,z)= \Ga(z)$.
Our motivation is to extend the Weierstrass form (\ref{Ga-weierstrass}) in much the same way
the Hurwitz zeta function
\[ \zeta(x,s)=\sum_{n=0}^{\infty} \frac{1}{(n+x)^s} \]
extends the Riemann zeta function
\[ \zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}.
\]
So our definition involves the infinite product
\[
\pr(1+ \frac{z}{n+x})^{-1} e^{\frac{z}{n+x}}\ \quad \text{rather than\quad }
\prod_{n=1}^{\infty}(1+ \frac{z}{n})^{-1}e^{\frac{z}{n}}
\]
and, in order to preserve the analogues of the properties of $\Ga(z)$, the factor $e^{-z \ga}$ will be replaced by $e^{-z \ga(x)}$,
where $\ga(x)$ is defined as follows.
\begin{definition}\label{def:ga(x)}
For $x\in \D\setminus\Z_0^{-}$ let the function $\gamma(x)$ be
\[
\gamma(x) = \lim_{n\to\infty}(\frac{1}{x}+\frac{1}{x+1}+\ldots+\frac{1}{x+n-1} - \log n) =
\frac{1}{x}+\sum_{n=1}^{\infty}\big(\frac{1}{x+n}-\log\frac{n+1}{n}\big).
\]
\end{definition}
Note that $\gamma(1)= \gamma$ and that
$\gamma(x) = \gamma_0 (x) = -\psi(x)$ where
\[
\gamma_0 (x) =\lim_{n\to\infty}(\frac{1}{x}+\frac{1}{x+1}+\ldots+\frac{1}{x+n} - \log (n+x))
\]
is the \emph{zeroth Stieltjes constant} and
\[
\psi(x) = \log' \Gamma(x) = \frac{\Gamma'(x)}{\Gamma(x)}
\]
is the \emph{digamma function}.
For an account of these functions we refer to Coffey~\cite{Coffey 2012} and Dilcher~\cite{Dilcher 92}.
It is easily seen that the function $\gamma(x)$ represents an analytic function on $\C\setminus\Z_0^{-}$ and that
\begin{equation}\label{gamma-functional}
\gamma(x+1) = \frac{-1}{x}+ \gamma(x).
\end{equation}
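The relation $\gamma(x)=-\psi(x)$ is easy to confirm numerically. The following short script (not part of the paper) compares the truncated defining sum with the digamma function provided by the \texttt{mpmath} library, at a few sample points; the convergence of the partial sums is only of order $1/n$.
\begin{verbatim}
# Numerical sanity check: 1/x + 1/(x+1) + ... + 1/(x+n-1) - log(n) -> -psi(x).
from mpmath import mp, mpf, mpc, log, psi, fsum

mp.dps = 15
def gamma_x(x, n=100000):
    return fsum(1/(x + k) for k in range(n)) - log(n)

for x in (mpf(1), mpf('0.5'), mpc(3, 2)):
    print(x, gamma_x(x), -psi(0, x))
\end{verbatim}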
In section~2 we study the function $G(x,z)$ represented as an infinite product.
This prepares the ground for
section~3 where we introduce the gamma function $\Ga(x,z)$ along with some of its
basic properties including functional equations and a formula for the modulus $|\Ga(n+i,n+i)|$ for $n\in\N_0$.
Section~4 is devoted to the analogues of the forms~(\ref{Ga-limit})~and~(\ref{Ga-euler})
together with their consequences such as values at half-integers and residues at poles. In section~5 we give the analogue of
Gauss's duplication formula. Further, in section~6 we present the analogue of Stirling's formula, leading to asymptotic
estimates for our function. Finally, in section~7 we give series expansions in both variables and, as a result, we provide
recursive formulas for the coefficients of the series in terms of the Riemann and Hurwitz zeta functions.
\section{The function $G(x,z)$}
\begin{definition}\label{G(x,z)}
For $x\in\C\setminus\Z_0^{-}$ and $z\in\C$ let the function $G(x,z)$ be defined as follows
\[
G(x,z) = \pr (1+ \frac{z}{n+x}) e^{-\frac{z}{n+x}}.
\]
\end{definition}
Note that $G(x,z)$ is entire in $z$ for fixed $x \in \C\setminus\Z_0^{-}$ and that
$\lim_{z\to 0} G(x,z) = G(x,0) = 1$.
\begin{proposition}\label{G-functional}
We have:
\[ (a)\quad G(x,z-1) = (z+x-1) e^{\ga(x)} G(x,z). \]
\[ (b)\quad G(x-1,z) = \frac{z+x-1}{x-1} e^{-\frac{z}{x-1}} G(x,z). \]
\end{proposition}
\begin{proof}
(a)\ Clearly the zeros of $G(x,z)$ are $-x, -(x+1), -(x+2), \ldots$ and the zeros of $G(x,z-1)$ are
$-(x-1), -x, -(x+1), -(x+2), \ldots$. Then by the theory of Weierstrass products, we can write
\[
G(x,z-1) = e^{g(x,z)} (z+x-1) \prod_{n=0}^{\infty}(1+\frac{z}{x+n}) e^{-\frac{z}{x+n}}
\]
for an entire function $g(x,z)$. Taking logarithms and differentiating with respect to $z$ we find
\begin{equation} \label{G-help1}
\frac{d}{dz} \log G(x,z-1) = \frac{d}{dz}g(x,z) + \frac{1}{z+x-1} +
\su (\frac{1}{z+x+n}-\frac{1}{x+n}).
\end{equation}
On the other hand, from the definition of $G(x,z)$ we have
\[
\frac{d}{dz} \log G(x,z-1) = \su \left(\frac{1}{z+x+n-1}-\frac{1}{x+n}\right) \]
\[
= \frac{1}{z+x-1}-\frac{1}{x} + \su \left(\frac{1}{z+x+n}-\frac{1}{x+n}\right) + \su\left(\frac{1}{x+n}-\frac{1}{x+n+1}\right),
\]
which gives
\begin{equation} \label{G-help2}
\frac{d}{dz} \log G(x,z-1) = \frac{1}{z+x-1} + \su \left(\frac{1}{z+x+n}-\frac{1}{x+n}\right).
\end{equation}
Then the relations (\ref{G-help1}) and (\ref{G-help2}) imply that
$\frac{d}{dz} g(x,z) = 0$ and so $g(x,z)$ is independent of $z$, say $g(x,z)=g(x)$. It remains to prove that $g(x)=\ga(x)$.
From $G(x,z-1) = (z+x-1) e^{g(x)} G(x,z)$ and $G(x,0)=1$ we get
\[
e^{-g(x)} = x G(x,1) = x \pr \left(\frac{x+n+1}{x+n}\right) e^{-\frac{1}{x+n}}.
\]
Furthermore,
\[
\begin{split}
x \prod_{m=0}^{n-1} \left(\frac{x+m+1}{x+m}\right) e^{-\frac{1}{x+m}} &=
(x+n) e^{-(\frac{1}{x}+\frac{1}{x+1}+\ldots+\frac{1}{x+n-1})} \\
&=
x e^{-(\frac{1}{x}+\frac{1}{x+1}+\ldots+\frac{1}{x+n-1})} + n e^{-(\frac{1}{x}+\frac{1}{x+1}+\ldots+\frac{1}{x+n-1})},
\end{split}
\]
which yields
\[
e^{-g(x)} = \lim_{n\to\infty} x \prod_{m=0}^{n-1}\left(\frac{x+m+1}{x+m}\right) e^{-\frac{1}{x+m}}
= \lim_{n\to\infty} n e^{-(\frac{1}{x}+\frac{1}{x+1}+\ldots+\frac{1}{x+n-1})},
\]
or equivalently,
\[
g(x) = \lim_{n\to\infty}(\frac{1}{x}+\frac{1}{x+1}+\ldots+ \frac{1}{x+n-1} - \log n) = \ga (x),
\]
as desired.
Part (b) follows directly by the definition of $G(x,z)$. This completes the proof.
\end{proof}
\begin{proposition} \label{G-sin}
If $x\in\C\setminus\Z$, then
\[
G(x,-z) G(-x,z) = \frac{(z-x) \sin \pi (z-x)}{x \sin \pi x} e^{\pi z\cot (\pi x) +\frac{z}{x}}.
\]
\end{proposition}
\begin{proof}
As the zeros of $\sin(z-x)$ are $x, \pi+x, -\pi +x, 2\pi +x, -2\pi +x,\ldots$, by the theory of Weierstrass products we have
\begin{equation} \label{G-help3}
\sin(z-x) = (z-x) e^{g(x,z)} \prod_{n=1}^{\infty}\left(1-\frac{z}{n\pi +x}\right)e^{\frac{z}{n\pi +x}}
\prod_{n=1}^{\infty}\left(1+\frac{z}{n\pi -x}\right)e^{-\frac{z}{n\pi -x}}
\end{equation}
for an entire function $g(x,z)$. We find $e^{g(x,z)}$ as follows. Setting
\[ f_n(x,z) = e^{g(x,z)} (z-x) \prod_{k=1}^{n} \left(1-\frac{z}{k\pi +x}\right)e^{\frac{z}{k\pi +x}}
\left( 1+\frac{z}{k\pi -x}\right)e^{-\frac{z}{k\pi -x}},
\]
we have $\sin(z-x) = \lim_{n\to\infty} f_n(x,z)$.
Taking logarithms and differentiating we obtain
\[
\begin{split}
\frac{f_n'(x,z)}{f_n(x,z)}
&=
\frac{d}{dz}g(x,z) + \frac{1}{z-x} + \sum_{k=1}^{n} \left(\frac{-1}{k\pi +x-z}+\frac{1}{k\pi -x+z}+\frac{1}{k\pi +x}-\frac{1}{k\pi -x}\right) \\
&=
\frac{d}{dz}g(x,z) + \frac{1}{z-x} + \sum_{k=1}^{n}\frac{2(x-z)}{(k\pi)^2 - (x-z)^2} - \sum_{k=1}^{n}\frac{2x}{(k\pi)^2 - x^2}.
\end{split}
\]
But as is well-known,
\[
\lim_{n\to\infty} \frac{f_n'(x,z)}{f_n(x,z)} = \cot (z-x)
= \frac{1}{z-x} + \sum_{n=1}^{\infty} \frac{2(z-x)}{(z-x)^2 - (n\pi)^2}.
\]
Thus
\[
\frac{d}{dz} g(x,z)= \sum_{n=1}^{\infty}\frac{2x}{(n\pi)^2 - x^2} = \frac{1}{x}-\cot x,
\]
and hence $g(x,z)= z(\frac{1}{x}-\cot x) + h(x)$. Now equation (\ref{G-help3})
gives
\[
\frac{\sin(z-x)}{z-x} = e^{h(x)} e^{\frac{z}{x}-z\cot x}
\prod_{n=1}^{\infty}\left(1-\frac{z}{n\pi +x}\right)e^{\frac{z}{n\pi +x}}
\left(1+\frac{z}{n\pi -x}\right)e^{-\frac{z}{n\pi -x}},
\]
which by letting $z\to 0$ implies
\[ e^{h(x)} = \frac{\sin x}{x}. \]
Therefore we have
\[
\frac{\sin(z-x)}{z-x} = \frac{\sin x}{x} e^{\frac{z}{x}-z \cot x}
\prod_{n=1}^{\infty}(1-\frac{z}{n\pi +x})e^{\frac{z}{n\pi +x}} (1+\frac{z}{n\pi -x})e^{\frac{-z}{n\pi -x}} .
\]
In particular,
\begin{equation}\label{G-help4}
\begin{split}
\frac{\sin \pi(z-x)}{\pi(z-x)}
&=
\frac{\sin \pi x}{\pi x}e^{\frac{z}{x}-\pi z\cot \pi x}
\prod_{n=1}^{\infty}(1-\frac{z}{n +x}) e^{\frac{z}{n +x}}
\prod_{n=1}^{\infty}(1+\frac{z}{n -x}) e^{\frac{-z}{n -x}} \\
&=
\frac{\sin \pi x}{\pi x} e^{\frac{z}{x}-\pi z\cot \pi x} G(x,-z) G(-x,z) \left(1-\frac{z}{x}\right)^{-2} e^{-\frac{2z}{x}} \\
&=
\frac{\sin \pi x}{\pi x} e^{-\frac{z}{x}-\pi z\cot \pi x} \frac{x^2}{(x-z)^2} G(x,-z) G(-x,z),
\end{split}
\end{equation}
or equivalently
\[
G(x,-z) G(-x,z) = \frac{(z-x) \sin \pi (z-x)}{x \sin \pi x} e^{\pi z\cot (\pi x) +\frac{z}{x}}.
\]
This completes the proof.
\end{proof}
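The identity just proved can be tested numerically with truncated products. The short script below (not part of the paper) uses the \texttt{mpmath} library, an assumed truncation index $N$ and arbitrary sample values of $x$ and $z$; the two sides agree up to the truncation error, which is of order $1/N$.
\begin{verbatim}
# Numerical check of the product identity proved above.
from mpmath import mp, mpc, exp, sin, cot, pi, fprod

mp.dps = 20
def G(x, z, N=20000):                  # truncated Weierstrass-type product
    return fprod((1 + z/(n + x)) * exp(-z/(n + x)) for n in range(N))

x, z = mpc('0.3', '0.7'), mpc('1.1', '-0.4')
lhs = G(x, -z) * G(-x, z)
rhs = (z - x) * sin(pi*(z - x)) / (x * sin(pi*x)) * exp(pi*z*cot(pi*x) + z/x)
print(lhs)
print(rhs)
\end{verbatim}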
\begin{corollary}
If $x\in \C\setminus\Z$, then
\[
\prod_{n=1}^{\infty}\left(1-\frac{z^2}{(n+x)^2}\right)\left(1-\frac{z^2}{(n-x)^2}\right) =
\left(\frac{x}{\sin \pi x}\right)^2 \frac{\sin^2 \pi z - \sin^2 \pi x}{z^2 -x^2}.
\]
\end{corollary}
\begin{proof}
By the first identity in (\ref{G-help4}) we have
\[
\frac{\sin \pi(z-x)}{\pi(z-x)} \frac{\sin \pi(z+x)}{\pi(z+x)} = (\frac{\sin \pi x}{\pi x})^2
\prod_{n=1}^{\infty}\left(1-\frac{z^2}{(n+x)^2}\right)\left(1-\frac{z^2}{(n-x)^2}\right),
\]
which means that
\[
\prod_{n=1}^{\infty}\left(1-\frac{z^2}{(n+x)^2}\right)\left(1-\frac{z^2}{(n-x)^2}\right)
= \frac{x^2}{z^2 - x^2}\frac{\sin^2 \pi z - \sin^2 \pi x}{\sin^2 \pi x},
\]
which completes the proof.
\end{proof}
\section{The function $\Ga(x,z)$}
Throughout for any $x\in\C$ let
\[
S_x = \C\setminus\{-x-n:\ n\in\N_0 \cup\{-1\}\}.
\]
\begin{definition} \label{main-def}
For $x\in\C\setminus\Z_0^{-}$ and $z\in S_x$ let the function $\Ga(x,z)$ be defined as follows.
\[
\Ga(x,z) = \left( (z+x-1) e^{z \ga(x)} G(x,z) \right)^{-1}.
\]
\end{definition}
Note that for fixed $x\in \C\setminus\Z_0^{-}$ the function $\Ga(x,z)$ is meromorphic in $z$ with simple poles at the points of $\C\setminus S_x$,
and that $\lim_{x\to 1} \Ga(x,z)=\Ga(1,z) = \Ga(z)$.
\begin{proposition} \label{functional}
We have
\[
\begin{split}
(a)& \quad \Ga(x,z+1) = (z+x-1) \Ga(x,z), \qquad (x\in\C\setminus\Z_0^{-}, z+1\in S_x) \\
(b)& \quad \Ga(x+1,z) = \frac{z+x-1}{x} \Ga(x,z), \qquad (x+1\in\C\setminus\Z_0^{-}, z\in S_x) \\
(c)& \quad \Ga(x+1,z+1) = \frac{(z+x-1)(z+x)}{x} \Ga(x,z), \qquad (x+1\in\C\setminus\Z_0^{-}, z+1\in S_{x+1}).
\end{split}
\]
\end{proposition}
\begin{proof}
(a)\ We have
\[
\begin{split}
\Ga(x,z+1) &= \left( (z+x) e^{(z+1)\ga(x)} G(x,z+1) \right)^{-1} \\
&=
\left( (z+x) e^{\ga(x)} G(x,z+1) e^{z \ga(x)} \right)^{-1} \\
&=
\left( G(x,z) e^{z \ga(x)}\right)^{-1} \\
&=
(z+x-1) \Ga(x,z),
\end{split}
\]
where the third identity follows from Proposition~\ref{G-functional}(a).
(b)\ We have
\[
\begin{split}
\Ga(x+1,z) &= \left( (z+x) e^{z \ga(x+1)} G(x+1,z) \right)^{-1} \\
&=
\left( (z+x) e^{z(\frac{-1}{x}+ \ga(x))} G(x+1,z) \right)^{-1} \\
&=
\left( (z+x) e^{-\frac{z}{x}} G(x+1,z) e^{z \ga(x)} \right)^{-1} \\
&=
\frac{1}{x} \left( e^{z \ga(x)} G(x,z) \right)^{-1} \\
&=
\frac{z+x-1}{x} \Ga(x,z),
\end{split}
\]
where the second identity follows from the relation (\ref{gamma-functional}) and the
fourth identity from Proposition~\ref{G-functional}(b).
(c)\ This part follows by a combination of part (a) and part (b).
\end{proof}
\begin{corollary} \label{Gamma-integers}
Let $x\in\C\setminus\Z_0^{-}$ and let $n\in\N$. Then we have
\[
\begin{split}
(a)& \quad \Ga(x,1) = 1, \\
(b)&\quad \Ga(x,0) = \frac{1}{x-1},\quad (x\not= 1) \\
(c)&\quad \Ga(x,n) = (x)_{n-1},\quad (n\geq 2) \\
(d)&\quad \Ga(x,-n) = \frac{1}{(x-n-1)_{n+1}},\\
(e)&\quad \Ga(n,z) = \frac{(z)_{n-1}}{(n-1)!} \Ga(z),\quad (n\geq 2).
\end{split}
\]
\end{corollary}
\begin{proof}
(a)\ As $G(x,0)=1$, we have by Proposition~\ref{G-functional}(a)
\[ 1 = x e^{\ga(x)} G(x,1), \]
and thus by definition
\[ \Ga(x,1) = \left( x e^{\ga(x)} G(x,1) \right)^{-1} = 1. \]
Parts (b) and (c) follow directly from Proposition~\ref{functional}(a). As to part (d) combine part (b) and Proposition~\ref{functional}(a).
As to part (e) combine Proposition~\ref{functional}(b) with the fact that $\Ga(1,z)=\Ga(z)$.
\end{proof}
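The special values above are easily confirmed numerically from the defining product, truncated at a large index $N$. The following script (not part of the paper) uses the \texttt{mpmath} library and arbitrary sample points; all the printed pairs agree up to an error of order $1/N$.
\begin{verbatim}
# Numerical illustration of the corollary above, with Gamma(x,z) computed
# from the truncated Weierstrass-type product defining Gamma(x,z).
from mpmath import mp, mpf, mpc, exp, log, gamma, fprod, fsum

mp.dps = 20
def Gamma2(x, z, N=20000):
    gam_x = fsum(1/(x + n) for n in range(N)) - log(N)
    G = fprod((1 + z/(n + x)) * exp(-z/(n + x)) for n in range(N))
    return 1 / ((z + x - 1) * exp(z * gam_x) * G)

x, z = mpf('2.7'), mpc('0.5', '0.5')
print(Gamma2(x, 1), 1)                                  # (a)
print(Gamma2(x, 0), 1/(x - 1))                          # (b)
print(Gamma2(x, 4), x*(x + 1)*(x + 2))                  # (c), n = 4
print(Gamma2(x, -2), 1/((x - 3)*(x - 2)*(x - 1)))       # (d), n = 2
print(Gamma2(3, z), z*(z + 1)/2 * gamma(z))             # (e), n = 3
\end{verbatim}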
\begin{proposition}\label{Gamma-sin}
We have
\[
(a)\quad \Ga(x,1-z) \Ga(1-x, z) = \frac{- \sin \pi x}{(z-x)\sin \pi(z-x)}.
\]
\[
(b)\quad \Ga(x,z)\Ga(-x,-z) = \frac{-x \sin\pi x}{\left((z+x)^3 - (z+x)\right) \sin\pi(z+x)}.
\]
\end{proposition}
\begin{proof}
(a)\ We have
\[
\begin{split}
\Ga(x,1-z)\Ga(1-x,z)
&=
\left((-z+x) e^{(1-z)\ga(x)} (z-x) e^{z \ga(1-x)} G(x,1-z)G(1-x,z)\right)^{-1} \\
&=
\frac{- e^{-\ga(x)}e^{z\ga(x)} e^{-z(\frac{1}{x}+\ga(-x))}}{(z-x)^2}
\left( \frac{e^{-\ga(x)}}{-z+x}\frac{-x e^{\frac{-z}{x}}}{z-x} G(x,-z) G(-x,z) \right)^{-1} \\
&=
\frac{- e^{z\ga(x)} e^{-z(\frac{1}{x}+\ga(-x))}} {x e^{-\frac{z}{x}} \frac{(z-x)\sin \pi(z-x)}{x\sin\pi x} e^{z\cot \pi x-\frac{z}{x}}} \\
&=
-\frac{e^{z(\ga(x)-\ga(-x))}}{e^{z\cot \pi x-\frac{z}{x}}} \frac{\sin \pi x}{(z-x)\sin \pi(z-x)}.
\end{split}
\]
By Corollary~\ref{Gamma-integers}(a, b), the previous relation gives for $z=1$
\[
\frac{1}{x-1} = \frac{ -e^{\ga(x)-\ga(-x)-\pi\cot (\pi x) -\frac{1}{x}} }{1-x} \frac{\sin \pi x}{\sin \pi(1-x)}
= \frac{ -e^{\ga(x)-\ga(-x)-\pi\cot (\pi x) -\frac{1}{x}} }{1-x},
\]
which implies that
\[ e^{\ga(x)-\ga(-x)-\pi\cot (\pi x) -\frac{1}{x}} = 1, \]
giving part (a).
(b)\ By Proposition~\ref{functional}(a, b) we have
\[
\Ga(1-x,z) = \frac{z-x-1}{-x} \Ga(-x,z)\ \text{and\ }
\Ga(x,1-z) = (-z+x-1)\Ga(x,-z).
\]
Then by virtue of part (a) we get
\[
\frac{z-x-1}{-x} \Ga(-x,z) (-z+x-1)\Ga(x,-z) = \frac{-\sin \pi x}{(z-x)\sin \pi (z-x)},
\]
or equivalently
\[
\Ga(-x,z) \Ga(x,-z) = \frac{- x \sin\pi x}{\bigl((z-x)^3 - (z-x) \bigr)\sin\pi (z-x)}.
\]
This completes the proof.
\end{proof}
\begin{corollary}\label{x-neg-int}
If $n\in\N_0$ and $z\not\in\Z$, then
\[
\lim_{x\to -n}\Ga(x,z) = 0.
\]
\end{corollary}
\begin{proof}
By Proposition~\ref{Gamma-sin}(a) we have
\[
\Ga(x,z) = \Ga(x,1-(1-z)) = \frac{-\sin\pi x}{(1-z-x)\sin\pi(1-z-x)}\frac{1}{\Ga(1-x,1-z)}.
\]
Then
\[
\lim_{x\to -n}\Ga(x,z) = \frac{-\sin\pi n}{(1-z+n)\sin\pi(1-z+n)}\frac{1}{\Ga(1+n,1-z)} = 0.
\]
\end{proof}
\begin{corollary} \label{Norm}
If $n\in\N_0$, then
\[
| \Ga(n+i,n+i) |^2 = | \Ga(n-i,n-i)|^2 =
\frac{5 \prod_{k=0}^{2n-2}(4+k^2)}{\prod_{k=0}^{n-1}(1+k^2)}\frac{e^{\pi}}{10(e^{2\pi}+1)}.
\]
\end{corollary}
\begin{proof}
First note that
\begin{equation}\label{Ga-conjugate}
\Ga(\bar{x},\bar{z}) = \overline{\Ga(x,z)},
\end{equation}
from which the first identity immediately follows.
As to the second formula, using identity (\ref{Ga-conjugate}) and Proposition~\ref{Gamma-sin}(b) we obtain
\[
| \Ga(i,i) |^2 = \Ga(i,i) \overline{\Ga(i,i)} = \Ga(i,i) \Ga(\bar{i},\bar{i}) =
\frac{-i \sin \pi i}{\bigl( (2i)^3- (2i) \bigr) \sin 2\pi i} = \frac{e^{\pi}}{10(e^{2\pi}+1)},
\]
which gives the result for $n=0$. If $n\geq 1$ we have by Proposition~\ref{functional}(c)
\[
|\Ga(n+i,n+i)|^2= \Ga(n+i,n+i) \Ga(n-i,n-i)
\]
\[
= \frac{(2i-1)(2i)\ldots(2i+2n-2)}{i(i+1)\ldots(i+n-1)} \frac{(-2i-1)(-2i)\ldots(-2i+2n-2)}{-i(-i+1)\ldots(-i+n-1)} |\Ga(i,i)|^2
\]
\[
= \frac{(-1)^{2n}(2i-1)(2i+1)(2i)(2i)(2i+1)(2i-1)\ldots(2i+2n-2)(2i-(2n-2))}
{(-1)^n (i)(i)(i+1)(i-1)\ldots(i+n-1)(i-(n-1))}|\Ga(i,i)|^2
\]
\[
= \frac{(4+1)(4)(4+1)\ldots(4+(2n-2)^2)}{1 (1+1)\ldots (1+(n-1)^2)} \frac{e^{\pi}}{10(e^{2\pi}+1)}.
\]
This completes the proof.
\end{proof}
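For instance, for $n=1$ the corollary gives
\[
| \Ga(1+i,1+i) |^2 = \frac{5\cdot 4}{1}\,\frac{e^{\pi}}{10(e^{2\pi}+1)} = \frac{2e^{\pi}}{e^{2\pi}+1}.
\]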
\section{Analogues of Euler's formulas, residues, and values at half-integers}
\begin{proposition}\label{Euler-analogue}
We have
\[ (a)\quad \Ga(x,z) = \lim_{n\to\infty} \frac{n^z x(x+1)\ldots (x+n-1)}{(z+x-1)(z+x)\ldots(z+x+n-1)}
= \lim_{n\to\infty} \frac{n^z (x)_{n}}{(z+x-1)_{n+1}}. \]
\[ (b)\quad \Ga(x,z) = \frac{x}{(z+x-1)(z+x)}\prod_{n=1}^{\infty} \left(1+\frac{1}{n}\right)^z \left(1+\frac{z}{x+n}\right)^{-1}. \]
\end{proposition}
\begin{proof}
(a) We have
\[
\begin{split}
\Ga(x,z)
&=
\lim_{n\to\infty} \left((z+x-1)e^{z(\frac{1}{x}+\frac{1}{x+1}+\ldots+\frac{1}{x+n-1}-\log n)}
\prod_{k=0}^{n-1} (1+\frac{z}{x+k})e^{-\frac{z}{x+k}} \right)^{-1} \\
&=
\lim_{n\to\infty} \left((z+x-1)e^{-z\log n}\prod_{k=0}^{n-1}\frac{z+x+k}{x+k}\right)^{-1} \\
&=
\lim_{n\to\infty} \left(n^{-z} \frac{(z+x-1)(z+x)\ldots (z+x+n-1)}{x(x+1)\ldots (x+n-1)}\right)^{-1} \\
&=
\lim_{n\to\infty} \frac{n^z x(x+1)\ldots (x+n-1)}{(z+x-1)(z+x)\ldots(z+x+n-1)}.
\end{split}
\]
(b) By the previous proof we have
\[
\begin{split}
\Ga(x,z)
&=
\frac{1}{z+x-1} \lim_{n\to\infty} n^z \prod_{k=0}^{n-1}\left(1+\frac{z}{x+k}\right)^{-1} \\
&=
\frac{1}{z+x-1} \lim_{n\to\infty}\prod_{k=1}^{n-1}\left(1+\frac{1}{k}\right)^{z}
\prod_{k=0}^{n-1}\left(1+\frac{z}{x+k}\right)^{-1} \\
&=
\frac{x}{(z+x-1)(z+x)} \prod_{n=1}^{\infty}\left(1+\frac{1}{n}\right)^{z} \left(1+\frac{z}{x+n}\right)^{-1}.
\end{split}
\]
\end{proof}
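As a quick numerical sanity check of part (a), the limit can be evaluated at a large but finite $n$; in the sketch below the Pochhammer symbols are computed through log-Gamma functions purely as a computational device, and all names and cut-offs are ad hoc.
\begin{verbatim}
# approximate Gamma(x,z) by  n^z (x)_n / (z+x-1)_{n+1}  at a finite n,
# writing (a)_m = Gamma(a+m)/Gamma(a) via lgamma to avoid overflow
from math import log, exp, lgamma

def log_gamma2(x, z, n=10**6):
    return (z*log(n) + lgamma(x+n) - lgamma(x)
            - (lgamma(z+x+n) - lgamma(z+x-1)))

x, z = 2.7, 0.4
print(exp(log_gamma2(x, 1.0)))            # ~ 1, cf. Gamma(x,1) = 1
print(exp(log_gamma2(x, z+1) - log_gamma2(x, z)),
      z + x - 1)                          # recurrence of Proposition functional(a)
\end{verbatim}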
\begin{corollary}\label{duplicate}
If $x, x+z \in\C\setminus\Z_0^{-}$, then
\[
\Ga(x,z) \Ga(x+z,-z) = \frac{1}{(x-1)(z+x-1)}.
\]
\end{corollary}
\begin{proof}
By Proposition~\ref{Euler-analogue}(a) we have
\[
\Ga(x,z) = \lim_{n\to\infty} \frac{n^z (x)_{n}}{(z+x-1)_{n+1}} = \frac{1}{x-1}\lim_{n\to\infty} \frac{n^z (x-1)_{n+1}}{(z+x-1)_{n+1}}
= \frac{1}{z+x-1}\lim_{n\to\infty} \frac{n^z (x)_{n+1}}{(z+x)_{n+1}},
\]
where the last identity follows since $\lim_{n\to\infty}\frac{x+n}{z+x+n} =1$.
Then
\[
\Ga(x+z,-z) = \frac{1}{x-1}\lim_{n\to\infty} \frac{n^{-z} (x+z)_{n+1}}{(x)_{n+1}}
= \frac{1}{(x-1)(z+x-1) \Ga(x,z)}.
\]
This completes the proof.
\end{proof}
\begin{corollary} \label{half-integer}
If $k, l\in\N_0$ such that $k+l\not=0$, then
\[
\Ga\left(\frac{2k+1}{2},\frac{2l+1}{2}\right) = \frac{2}{\sqrt{\pi} (2k-1)}
\frac{(2l+2)!}{(-4)^{l+1} (l+1)!}
\frac{(k+l-1)!}{(-l-\frac{1}{2})_{k+l}}.
\]
\end{corollary}
\begin{proof}
On the one hand we have by Corollary~\ref{duplicate}
\[
\Ga\left(\frac{2k+1}{2},\frac{2l+1}{2}\right) \Ga\left(k+l+1,-\frac{2l+1}{2}\right) = \frac{2}{(2k-1)(k+l)}.
\]
On the other hand by Corollary~\ref{Gamma-integers}(e) and the well-known fact that
\[
\Ga(1/2 - k) = \frac{\sqrt{\pi}(-4)^k k!}{(2k)!}
\]
we have
\[
\Ga\left(k+l+1,-\frac{2l+1}{2}\right) = \frac{(-l-1/2)_{k+l}}{(k+l)!} \Ga(-l-1/2)
\]
\[
=
\frac{(-l-1/2)_{k+l}}{(k+l)!} \Ga(1/2 -(l+1)) = \frac{(-l-1/2)_{k+l}}{(k+l)!}
\frac{\sqrt{\pi} (-4)^{l+1} (l+1)!}{(2l +2)!}.
\]
Now combine these identities to deduce the required formula.
\end{proof}
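For instance, taking $k=1$ and $l=0$ the formula gives
\[
\Ga\left(\tfrac{3}{2},\tfrac{1}{2}\right) = \frac{2}{\sqrt{\pi}}\cdot\frac{2!}{-4}\cdot\frac{0!}{-\tfrac12} = \frac{2}{\sqrt{\pi}},
\]
in agreement with Corollary~\ref{duplicate}: since $\Ga(2,-\tfrac12) = -\tfrac12\,\Ga(-\tfrac12) = \sqrt{\pi}$, that corollary gives $\Ga(\tfrac32,\tfrac12) = 2/\sqrt{\pi}$ as well.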
\begin{corollary} \label{residues}
If $x\in\C\setminus\Z$ and $m\in\N_0\cup\{-1\}$, then the residue of $\Ga(x,z)$ at $z=-(x+m)$ is
\[
\begin{cases}
\frac{1}{(x-1)\Ga(x-1)}, & \text{if\ }m=-1 \\
\frac{(-1)^{m+1} (x)_{2m+1}}{(m+1)!}\frac{1}{\Ga(x+2m+1)},& \text{otherwise.} \\
\end{cases}
\]
\end{corollary}
\begin{proof}
Suppose first that $m=-1$. By Corollary~\ref{duplicate} and Proposition~\ref{functional}(c) we obtain
\[
\Ga(x,z) = \frac{1}{(x-1)(z+x-1)}\frac{1}{\Ga(z+x,-z)}.
\]
Then
\[
\lim_{z\to -(x-1)}(z+(x-1))\Ga(x,z) = \lim_{z\to -(x-1)} \frac{1}{(x-1)\Ga(1,x-1)} = \frac{1}{(x-1)\Ga(x-1)}.
\]
Suppose now that $m\not= -1$.
Then repeatedly application of Proposition~\ref{functional}(c) yields
\[
\Ga(x,z) = \frac{1}{(x-1)(z+x-1)}\frac{1}{\Ga(z+x,-z)}
\]
\[
=\frac{x}{(z+x-1)(z+x)}\frac{1}{\Ga(z+x+1,-z+1)}
\]
\[
= \frac{(x)_{2m+1}}{(z+x-1)_{m+2}}\frac{1}{\Ga(z+x+m+1,-z+m+1)},
\]
or equivalently,
\[
(z+x+m) \Ga(x,z) = \frac{(x)_{2m+1}}{(z+x-1)_{m+1}}\frac{1}{\Ga(z+x+m+1,-z+m+1)}.
\]
Thus
\[
\lim_{z\to -(x+m)} (z+x+m) \Ga(x,z) = \frac{(x)_{2m+1}}{(-1)^{m+1} (m+1)!}\frac{1}{\Ga(1,x+2m+1)},
\]
which implies the desired result since $\Ga(1,x+2m+1) = \Ga(x+2m+1)$.
\end{proof}
Note that if $x=1$ and $m=-1,0,1,2,\ldots$, then Corollary~\ref{residues} agrees with the well-known fact
that the residue of $\Ga(z)$ at $z=-(m+1)$ is
\[
\frac{(-1)^{m+1}}{(m+1)!}.
\]
\section{An analogue of the Gauss multiplication formula}
\begin{proposition} \label{constant-function}
If $x\in\C\setminus\Z_0^{-}$ and $n\in\N$, then the function
\[
f(x,z) = \frac{n^{nz} \Ga(x,z)\Ga(x,z+\frac{1}{n})\ldots \Ga(x,z+\frac{n-1}{n})}{n \Ga(n(x-1)+1,nz)}
\]
is independent of $z$.
\end{proposition}
\begin{proof}
By Proposition~\ref{Euler-analogue} we have
\[
f(x,z)
=
\frac{ n^{nz} \prod_{k=0}^{n-1}\lim_{m\to\infty}
\frac{m^{z +\frac{k}{n}} (x)_{m-1}}{(z+ \frac{k}{n}+x-1)(z+\frac{k}{n}+x)\ldots(z+\frac{k}{n}+x+m-2)} }
{ n \lim_{m\to\infty} \frac{ (mn)^{nz} \bigl(n(x-1)\bigr)_{mn-1} }{ \bigl(nz+n(x-1)\bigr)_{mn} }}
\]
\[
=
\lim_{m\to\infty} \frac{n^{mn-1} m^{\frac{n-1}{2}} \bigl((x)_{m-1}\bigr)^{n-1}}{ (n(x-1))_{mn-1} }
\frac{ \bigl( n(z+x-1) \bigr)_{mn} }{ \prod_{k=0}^{n-1}\prod_{j=0}^{m-1}(n(z+x-1)+k+jn) }
\]
\[
=
\lim_{m\to\infty} \frac{n^{mn-1} m^{\frac{n-1}{2}} \bigl((x)_{m-1}\bigr)^{n-1}}{ (n(x-1))_{mn-1} }
\]
where the last identity follows as
\[
\frac{ \bigl( n(z+x-1) \bigr)_{mn} }{ \prod_{k=0}^{n-1}\prod_{j=0}^{m-1}(n(z+x-1)+k+jn) } =1.
\]
This completes the proof.
\end{proof}
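For instance, for $n=2$ and $x=1$ the function reduces to $f(1,z) = \frac{2^{2z}\,\Ga(z)\Ga(z+\frac12)}{2\,\Ga(2z)}$, which by the classical Legendre duplication formula is the constant $\sqrt{\pi}$.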
\begin{corollary} \label{Gauss-analogue}
We have
\[
\Ga(x,z)\Ga(x,z+\frac{1}{2})\Ga(1-x,z)\Ga(1-x,z+\frac{1}{2})
\]
\[
= 2^{2-4z}\Ga(2x-1,2z)\Ga(1-2x,2z)\frac{\tan \pi x}{x-\frac{1}{2}}.
\]
\end{corollary}
\begin{proof}
Taking $z=\frac{1}{n}$ in Proposition~\ref{constant-function} we obtain
\[
f(x,z) f(1-x,z)
=
f(x,\frac{1}{n}) f(1-x,\frac{1}{n})
\]
\[
=
\frac{ \Ga(x,\frac{1}{n}) \Ga(x,\frac{2}{n})\ldots \Ga(x,\frac{n-1}{n}) }{ \Ga(n(x-1)+1,1) }
\frac{ \Ga(1-x,\frac{1}{n}) \Ga(1-x,\frac{2}{n})\ldots \Ga(1-x,\frac{n-1}{n}) }{ \Ga(-nx+1,1) }
\]
\[
=
\Ga(x,\frac{1}{n})\Ga(1-x,\frac{n-1}{n}) \Ga(x,\frac{2}{n})\Ga(1-x,\frac{n-2}{n}) \ldots
\Ga(x,\frac{n-1}{n}) \Ga(1-x,\frac{1}{n})
\]
\[
=
\frac{ (-1)^{n-1} \sin^{n-1} \pi x}{\prod_{k=1}^{n-1}\left((\frac{k}{n}-x)\sin \pi(\frac{k}{n}-x)\right)},
\]
where the last identity follows by Proposition~\ref{Gamma-sin}(a). Now if $n=2$, then Proposition~\ref{constant-function}
combined with the previous formula gives
\[
2^{4z-2} \frac{\Ga(x,z)\Ga(x,z+\frac{1}{2})\Ga(1-x,z)\Ga(1-x,z+\frac{1}{2})}
{\Ga(2x-1,2z) \Ga(1-2x,2z)} =
\frac{- \sin \pi x}{(\frac{1}{2}-x) \sin (\frac{\pi}{2}-\pi x)},
\]
or equivalently,
\[
\Ga(x,z)\Ga(x,z+\frac{1}{2})\Ga(1-x,z)\Ga(1-x,z+\frac{1}{2}) = 2^{2-4z} \Ga(2x-1,2z) \Ga(1-2x,2z) \frac{\tan \pi x}{x-\frac{1}{2}}.
\]
This completes the proof.
\end{proof}
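As a quick check, take $x=\frac34$ and $z=\frac12$. By Corollary~\ref{Gamma-integers}(a) the left-hand side reduces to $\Ga(\frac34,\frac12)\Ga(\frac14,\frac12)$, which equals $-4$ by Proposition~\ref{Gamma-sin}(a), while the right-hand side equals $2^{0}\,\Ga(\frac12,1)\Ga(-\frac12,1)\,\tan\frac{3\pi}{4}\big/(\tfrac34-\tfrac12) = -4$ as well.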
\section{An analogue of Stirling's formula}
In this section we essentially use the same ideas as in Lang~\cite[p. 422-427]{Lang} to derive a formula for
$\log \Ga(x,z)$ leading to asymptotic formulas for $\Ga(x,z)$ which are analogues of Stirling's formula.
For $t\in\R$, let $P(t)= t-\lfloor t\rfloor -\frac{1}{2}$ and, for convenience, for $z\in \D$ let
\[
I_n(z) =\int_{0}^{n}\frac{P(t)}{z+t}\,dt, \quad\text{and\quad }
I(z) = \lim_{n\to\infty}I_n(z)=\int_{0}^{\infty}\frac{P(t)}{z+t}\,dt.
\]
\begin{proposition} \label{pre-stirling}
If $x\in\R^{+}$ and $z\in\R^{+}\cap S_x$, then
\[
\log\Ga(x,z) = (z+x-\frac{3}{2})\log(z+x-1) -z + 1- (x-\frac{1}{2})\log x + I(x)- I(z+x-1).
\]
\end{proposition}
\begin{proof}
We have with the help of Euler's summation formula
\[
\log \frac{(z+x-1)(z+x)\ldots (z+x+n-1)}{x(x+1)\ldots(x+n)} =
\sum_{k=0}^n \log(z+x-1+k)-\sum_{k=0}^n \log(x+k)
\]
\[
=
\int_{0}^{n}\log(z+x-1+t)\,dt +\frac{1}{2}\bigl( \log(z+x-1+n)+\log(z+x-1) \bigr) + I_n(z+x-1)
\]
\[
- \int_{0}^{n}\log(x+t)\,dt - \frac{1}{2}\bigl( \log(x+n)+\log x \bigr) - I_n(x)
\]
\[
=
\Big[(z+x-1+t)\log(z+x-1+t)-(z+x-1+t) \Big]_0^{n} - \Big[ (x+t)\log(x+t)-(x+t)\Big]_0^{n}
\]
\[
+ \frac{1}{2}\big( \log(z+x-1+n) + \log(z+x-1)\big) - \frac{1}{2}\big(\log(x+n)+\log(x)\big)
+ I_n(z+x-1) - I_n(x),
\]
which after routine calculations becomes
\[
\log \frac{(z+x-1)(z+x)\ldots (z+x+n-1)}{x(x+1)\ldots(x+n)} =
\log n^z + z \log\left(1+ \frac{z+x-1}{n}\right)
\]
\[
+ (x+n-\frac{1}{2})\log\left(1+\frac{z+x-1}{n}\right)
- (z+x-\frac{3}{2})\log(z+x-1) - (x+n+\frac{1}{2})\log\left(1+\frac{x}{n}\right)
\]
\[
+ \left( x-\frac{1}{2}\right) \log x - \log n + I_n(z+x-1) - I_n(x).
\]
Equivalently,
\[
\log \frac{(z+x-1)(z+x)\ldots (z+x+n-1)}{n^z x(x+1)\ldots(x+n-1)} =
\log(x+n) + z \log\left(1+ \frac{z+x-1}{n}\right)
\]
\[
+ (x+n-\frac{1}{2})\log\left(1+\frac{z+x-1}{n}\right)
- (z+x-\frac{3}{2})\log(z+x-1) - (x+n+\frac{1}{2})\log\left(1+\frac{x}{n}\right)
\]
\[
+ \left( x-\frac{1}{2}\right) \log x - \log n + I_n(z+x-1) - I_n(x)
\]
\[
=
z \log\left(1+ \frac{z+x-1}{n}\right) + (x+n-\frac{1}{2})\log\left(1+\frac{z+x-1}{n}\right)
- (z+x-\frac{3}{2})\log(z+x-1)
\]
\[
-(x+n-\frac{1}{2})\log\left(1+\frac{x}{n}\right) + \left( x-\frac{1}{2}\right) \log x + I_n(z+x-1) - I_n(x).
\]
Now use the fact that
\[ \log\left(1 + \frac{z}{n}\right) = \frac{z}{n} + O\left(\frac{1}{n^2}\right)
\]
and Proposition~\ref{Euler-analogue}(a) and take $\lim_{n\to\infty}$ on both sides to get
\[
\log \frac{1}{\Ga(x,z)} = (z+x-1) - \left(z+x-\frac3{2} \right)\log(z+x-1) - x + (x-\frac{1}{2})\log x + I(z+x-1)-I(x),
\]
implying the required identity.
\end{proof}
\begin{corollary}\label{Strling}
Let $x\in\R^{+}$ and $z\in\R^{+}\cap S_x$. Then
(a)\ for $x\to\infty$ we have
\[
\Ga(x,z)\sim (z+x-1)^{z+x-3/2} e^{1-z} x^{1/2 - x},
\]
(b)\ for $z\to\infty$ we have
\[
\Ga(x,z) \sim (z+x-1)^{z+x-3/2} e^{1-z} x^{1/2 - x}\, e^{I(x)}.
\]
\end{corollary}
\begin{proof}
Combine Proposition~\ref{pre-stirling} with the fact that
$\lim_{z\to \infty} I(z) = 0$.
\end{proof}
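These asymptotics are easy to illustrate numerically. In the sketch below $\Ga(x,z)$ is again approximated through the Euler limit of Proposition~\ref{Euler-analogue}(a) at a large finite $n$ (all cut-offs are arbitrary); the two printed columns approach each other as $x$ grows, as part (a) predicts.
\begin{verbatim}
from math import log, lgamma

def log_gamma2(x, z, n=10**7):       # log of the Euler-limit approximant
    return (z*log(n) + lgamma(x+n) - lgamma(x)
            - (lgamma(z+x+n) - lgamma(z+x-1)))

def log_asymp(x, z):                 # log of the right-hand side of (a)
    return (z+x-1.5)*log(z+x-1) + 1 - z + (0.5-x)*log(x)

z = 0.7
for x in (5.0, 20.0, 80.0):
    print(x, log_gamma2(x, z), log_asymp(x, z))
\end{verbatim}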
\section{Series expansions and recursive formulas for the coefficients}
To use the property $\log (z_1 z_2)=\log z_1 + \log z_2$, we suppose in this section that $x\in\R^{+}$ and $z\in\R^{+}\cap S_x$.
\begin{proposition} \label{series-1}
If $|z|<\inf (1,|x|)$, then
\[
\log \Ga(x,z+1) = -z \ga(x) - \sum_{m=2}^{\infty} \frac{(-1)^{m-1}}{m} \zeta(m,x) z^{m}.
\]
\end{proposition}
\begin{proof}
On the one hand, we have by definition
\[
\log\Ga(x,z) = -\log(z+x-1) - z\ga(x) - \su\left(\log(1+\frac{z}{x+n}) - \frac{z}{x+n} \right).
\]
On the other hand, by Proposition~\ref{functional}(a) we have
\[
\log\Ga(x,z+1) = \log(z+x-1) + \log\Ga(x,z).
\]
Combining these two relations we obtain
\[
\begin{split}
\log\Ga(x,z+1)
&=
- z\ga(x) - \su\left(\log(1+\frac{z}{x+n}) - \frac{z}{x+n} \right) \\
&=
- z\ga(x) - \su \left(\left(\sum_{m=1}^{\infty}\frac{(-1)^{m-1}}{m}\frac{z^m}{(x+n)^m} \right)-\frac{z}{x+n}\right) \\
&=
- z\ga(x) - \sum_{m=2}^{\infty}\frac{(-1)^{m-1}}{m} z^m \su\frac{1}{(x+n)^m} \\
&=
- z\ga(x) - \sum_{m=2}^{\infty}\frac{(-1)^{m-1}}{m} \zeta(m,x) z^m.
\end{split}
\]
\end{proof}
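The expansion can be checked numerically. In the sketch below $\ga(x)$ is computed from the partial sums used in the proof of Proposition~\ref{Euler-analogue}(a), $\zeta(m,x)$ is the Hurwitz zeta function $\sum_{n\geq 0}(x+n)^{-m}$, and $\log\Ga(x,z+1)$ is evaluated through the Euler limit at a large finite $n$; all cut-offs below are arbitrary.
\begin{verbatim}
from math import log, lgamma
from mpmath import zeta                      # zeta(m, x) = Hurwitz zeta

x, z, M, N, n = 1.3, 0.2, 40, 10**6, 10**6
gamma_x = sum(1.0/(x+k) for k in range(N)) - log(N)
rhs = -z*gamma_x - sum((-1)**(m-1)*float(zeta(m, x))*z**m/m
                       for m in range(2, M+1))
lhs = ((z+1)*log(n) + lgamma(x+n) - lgamma(x)
       - (lgamma(z+1+x+n) - lgamma(z+x)))    # log Gamma(x,z+1), Euler limit
print(lhs, rhs)                              # agree to about five decimals
\end{verbatim}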
\begin{corollary}\label{cor-series-1}
If $|z|<\inf(1,|x|)$ and
\[
\Ga(x,z+1) = \sum_{m=0}^{\infty}a_m(x) z^m,
\]
then $a_0(x) = 1$ and for $m>0$ we have
\[
a_m(x) = \frac{1}{m}\left(-a_{m-1}(x) \ga(x) + \sum_{k=0}^{m-2}(-1)^{m-k} a_k(x)\zeta(m-k,x)\right).
\]
\end{corollary}
\begin{proof}
Clearly if $z=0$, then $\Ga(x,1)=a_0(x)=1$ by Corollary~\ref{Gamma-integers}(a).
Differentiating the power series with respect to $z$ gives
\begin{equation}\label{deriv1}
\frac{d}{dz}\Ga(x,z+1) = \sum_{m=1}^{\infty} ma_m(x) z^{m-1}.
\end{equation}
Further in Proposition~\ref{series-1} differentiating with respect to $z$ yields
\begin{equation}\label{deriv2}
\frac{d}{dz}\log \Ga(x,z+1) = \frac{\frac{d}{dz}\Ga(x,z+1)}{\Ga(x,z+1)}=
-\ga(x) - \sum_{m=2}^{\infty}(-1)^{m-1}\zeta(m,x) z^{m-1}.
\end{equation}
Next combining (\ref{deriv1}) with (\ref{deriv2}) gives
\[
\sum_{m=1}^{\infty}m a_m(x) z^{m-1} = \left(\sum_{m=0}^{\infty} a_m(x) z^m\right)
\left(-\ga(x)+ \sum_{m=2}^{\infty}(-1)^m \zeta(m,x) z^{m-1} \right).
\]
Now the desired identity follows by equating the coefficients.
\end{proof}
\begin{proposition}\label{series-2}
If $|x-1|<\inf(1,|z+1|)$, then
\[
\log\Ga(x+1,z) = \sum_{n=1}^{\infty}\left(z\log\frac{n+1}{n}-\log\frac{z+n}{n}\right)
\]
\[
- \frac{x-1}{z+1} + \sum_{n=2}^{\infty}\frac{z(x-1)}{n(n+z)}+
\sum_{m=2}^{\infty} \frac{(-1)^{m}\big(\zeta(m,z)-\zeta(m)-\frac{1}{z^m}+1\big)}{m} (x-1)^{m}.
\]
\end{proposition}
\begin{proof}
Note that it is easily checked that
\begin{equation}\label{help-gamma}
-z \ga(x)+ \sum_{n=2}^{\infty}\left(\frac{z}{x+n-1}-\log\frac{n+z}{n}\right)
\end{equation}
\[
=
-\frac{z}{x}+\log(1+z) + \sum_{n=1}^{\infty} \left(z\log\frac{n+1}{n}-\log\frac{n+z}{n}\right).
\]
Now combining the definition of $\Ga(x,z)$ with Proposition~\ref{functional}(b) yields
\[
\log\Ga(x+1,z)
=
-\log x - z\ga(x)
\]
\[- \su\left(\log(x+n+z)-\log(x+n)-\frac{z}{x+n}\right)
\]
\[
= -z \ga(x) - \log(x+z) + \frac{z}{x}
\]
\[
-\sum_{n=2}^{\infty}\left(\log(x-1+n+z)-\log(x-1+n)-\frac{z}{x-1+n}\right)
\]
\[
= -z \ga(x) - \log(x+z) + \frac{z}{x}
\]
\[ -\sum_{n=2}^{\infty}\left(\log(n+z)+\log(1+\frac{x-1}{n+z})
-\log n -\log(1+\frac{x-1}{n})-\frac{z}{x-1+n}\right)
\]
\[
= -z \ga(x)+\sum_{n=2}^{\infty}(\frac{z}{x-1+n}-\log \frac{n+z}{n}) -\log(x+z)+\frac{z}{x}
\]
\[
-\sum_{n=2}^{\infty}\left(\log(1+\frac{x-1}{n+z})-\log(1+\frac{x-1}{n}) \right)
\]
\[
= \log(1+z)-\log(x-1+z+1)+\sum_{n=1}^{\infty} \left(z\log\frac{n+1}{n}-\log\frac{n+z}{n}\right)
\]
\[
+ \sum_{n=2}^{\infty}\sum_{m=1}^{\infty}\frac{(-1)^{m-1}}{m}\left(\frac{1}{n^m}-\frac{1}{(n+z)^m}\right) (x-1)^m
\]
\[
= -\log(1+\frac{x-1}{z+1}) + \sum_{n=1}^{\infty} \left(z\log\frac{n+1}{n}-\log\frac{n+z}{n}\right)
+ \sum_{n=2}^{\infty}\frac{z(x-1)}{n(n+z)}
\]
\[
+ \sum_{m=2}^{\infty}\frac{(-1)^{m-1}}{m}\left( -1+ \zeta(m) +\frac{1}{z^m}+\frac{1}{(z+1)^m}-\zeta(m,z)\right) (x-1)^m
\]
\[
= \sum_{n=1}^{\infty} \left(z\log\frac{n+1}{n}-\log\frac{n+z}{n}\right)
\]
\[
- \frac{x-1}{z+1} + \sum_{n=2}^{\infty}\frac{z(x-1)}{n(n+z)}
+ \sum_{m=2}^{\infty}\frac{(-1)^{m-1}}{m} (-1+\zeta(m) +\frac{1}{z^m}-\zeta(m,z)) (x-1)^m,
\]
where the fifth identity follows with the help of (\ref{help-gamma}), and the last one by expanding $-\log(1+\frac{x-1}{z+1})$ into its power series: its terms with $m\geq 2$ cancel the $\frac{1}{(z+1)^m}$ contributions, while its $m=1$ term is $-\frac{x-1}{z+1}$.
This completes the proof.
\end{proof}
\begin{corollary}\label{coefficients}
If $|x-1|<\inf(1,|z+1|)$ and
\[
\Ga(x+1,z) = \sum_{m=0}^{\infty} b_m(z) (x-1)^m,
\]
then $b_0(z) = z \Ga(z)$ and for $m>0$
\[
b_m (z) = \frac{1}{m}\,b_{m-1}(z)\left(-\frac{1}{z+1}+ \sum_{n=2}^{\infty}\frac{z}{n(n+z)}\right)
\]
\[
+ \frac{1}{m}\sum_{k=0}^{m-2} (-1)^{m-k} b_k(z) \bigl( \zeta(m-k,z)-\zeta(m-k)- z^{-(m-k)}+1\bigr).
\]
\end{corollary}
\begin{proof}
Taking $x=1$, we have
$b_0(z) = \Ga(2,z) = z\Ga(z)$ by Corollary~\ref{Gamma-integers}(e). Further, by Proposition~\ref{series-2} we have
\begin{equation} \label{coeff-1}
\frac{d}{dx}\log \Ga(x+1,z) = \frac{ \frac{d}{dx}\Ga(x+1,z)}{\Ga(x+1,z)}
\end{equation}
\[
=
-\frac{1}{z+1}+\sum_{n=2}^{\infty}\frac{z}{n(n+z)}+\sum_{m=2}^{\infty}(-1)^m \bigl( \zeta(m,z)-\zeta(m) - z^{-m}+1\bigr) (x-1)^{m-1}.
\]
On the other hand, it follows from the assumption that
\begin{equation}\label{coeff-2}
\frac{d}{dx} \Ga(x+1,z) = \sum_{m=1}^{\infty} m b_m(z) (x-1)^{m-1}.
\end{equation}
Then from (\ref{coeff-1}) and (\ref{coeff-2}) we get
\[
\sum_{m=1}^{\infty} m b_m(z) (x-1)^{m-1} =
\left(\sum_{m=0}^{\infty} b_m (x-1)^m\right)\times
\]
\[
\left(-\frac{1}{z+1}+\sum_{n=2}^{\infty}\frac{z}{n(n+z)}+\sum_{m=2}^{\infty}(-1)^m \bigl(\zeta(m,z)-\zeta(m)- z^{-m}+1\bigr) (x-1)^{m-1}\right),
\]
and the result follows by equating the coefficients.
\end{proof}
\section{Introduction}
One of the major achievements of the {\sl Hubble Space Telescope
(HST)} to date has been the discovery that the progenitors of globular
clusters (GCs), once thought to be the oldest building blocks of
galaxies, continue to form until today (e.g., de Grijs et al. 2003b,
2005). {\sl HST} observations allow us to resolve and study GC systems
far beyond the well-studied GC systems in the \object{Milky Way} and
the \object{Magellanic Clouds}.
Young massive star clusters (YMCs; $M_{\rm cl} \ga 10^5$ M$_\odot$)
have been detected in a great variety of actively star-forming
galaxies. Systems containing a large number of YMCs are most often
interacting spiral-spiral pairs (e.g., the \object{``Antennae''}
galaxies, Whitmore \& Schweizer 1995; \object{NGC 7252}, Whitmore et
al. 1993; the \object{``Mice''} and \object{``Tadpole''} systems, de
Grijs et al. 2003d), since these naturally provide large reservoirs of
gas for star and cluster formation. In fact, massive star {\it
cluster} formation is likely the major mode of star formation in such
extreme starburst environments (cf. de Grijs et al. 2003d). Therefore,
we can use young and intermediate-age massive star clusters as
efficient tracers of the recent violent star formation and interaction
history of galaxies by determining accurate ages for the individual
clusters, even at those ages when morphological interaction features
in their host galaxies have already disappeared.
The formation of YMCs and field stars alike is limited by the supply
of gas. The colliding gas masses associated with interacting galaxies
determine the violence of the star and cluster formation, as well as
the overall gas pressure. Hence the number, and perhaps also the
nature (such as their masses and compactness), of the YMCs, and the
ratio of YMC to field-star formation in particular, is expected to
depend on Hubble type. While observations of gas-rich spiral-spiral
mergers are numerous, star-formation tracers in mixed-pair mergers
have thus far largely been overlooked, although they are undoubtedly
of great interest in the context of the parameter space covered by
the star cluster and field star formation processes.
One of the most important open questions in this field, and one that
we address in this {\it Research Note}, is whether the smaller amount
of gas available for star formation in early-type galaxies might act
as a threshold for star formation in general, and for massive and
compact cluster formation in particular, or possibly result in
enhanced field star formation (including star formation in small
clusters, $M_{\rm cl} \la 10^3$ M$_\odot$) with respect to the mode of
star formation in massive clusters.
The present analysis is of particular current relevance as ever more
sensitive observations increasingly reveal neutral and/or molecular
hydrogen in elliptical and early-type spiral galaxies (e.g., Wiklind
\& Henkel 1989; Cullen et al. 2003, 2006) and in late-stage merger
remnants, which might be the progenitors of present-day elliptical
galaxies (cf. Georgakakis et al. 2001). Star (cluster) formation in
early-type galaxies, long thought to be essentially dust- and gas-free
environments, has therefore returned to the forefront of interest.
\section{Object Selection, Data and Data Reduction}
\subsection{\object{Arp 116}: selection rationale}
\label{selection.sec}
We selected \object{Arp 116}, a combination of the bright ($M_B =
-21.48$ mag) early-type (E2) galaxy \object{NGC 4649} (\object{M60})
and its Sc-type companion \object{NGC 4647} ($M_B = -19.81$ mag), as
our prime target. This is one of the closest systems of mixed-type
interacting galaxies, at a distance of $\sim 16.8$ Mpc in the
\object{Virgo cluster} [Tonry et al. 2001; based on surface brightness
fluctuations (SBF), see Sect. \ref{distances.sec}], and with a
projected separation of $\sim 2\farcm5 \equiv 12$ kpc. At this
distance, the resolution of the Advanced Camera for Surveys (ACS)/Wide
Field Camera (WFC) on board the {\sl HST}, $\sim 0.05$ arcsec
pixel$^{-1}$ (corresponding to $\sim 4$ pc), allows us to (marginally)
resolve and (thanks to their intrinsic brightnesses) robustly identify
individual star clusters of ``typical'' GC size (i.e., with typical
half-mass radii of $R_{\rm hm} \sim 5$ pc).
The \object{Arp 116} system presents an ideal configuration for our
case study into the impact of gravitational interactions on the
triggering of star formation in the early-type member. First, studies
of the nucleus of \object{NGC 4649} by Rocca-Volmerange (1989) show an
excess of far-UV emission, which is indicative of a relatively high
molecular gas density. She postulates this to be either left over from
the main star-formation episode in the galaxy, or the result of
accretion over the galaxy's lifetime. In the context of her ``UV-hot''
model of galaxy evolution, Rocca-Volmerange (1989) interprets the
presence of the far-UV excess in \object{NGC 4649} as caused by a
mostly continuous star-formation rate, although at a level
(independently confirmed by X-ray observations) below her detection
threshold, unless the newly-formed stars are all concentrated in a
very dense cluster of stars. With our new {\sl HST} observations, we
can probe significantly fainter and at higher spatial resolution than
this earlier work, while our passband coverage is also sensitive to
the signatures of recent star formation.
Secondly, Randall et al. (2004) confirm that the X-ray bright
elliptical galaxy is characterised by a dominant excess of bright soft
X-ray emission, predominantly in its nuclear area, at a temperature of
$kT \approx 0.80$ keV (and a model-dependent metal abundance within a
factor of 2 of solar; but see Randall et al. 2006). They interpret
this as thermal emission from interstellar gas, combined with hard
emission from unresolved low-mass X-ray binaries. Moreover,
B\"ohringer et al. (2000) report the galaxy to have an {\it extended}
thermal X-ray halo.
Thirdly, Huchtmeier et al. (1995) and Georgakakis et al. (2001)
determined the total H{\sc i} gas mass in \object{NGC 4649} at,
respectively, $M_{\rm HI} < 1.62 \times 10^8$ and $< 4 \times 10^8$
M$_\odot$. Contamination of this mass by the gas in the late-type
companion is thought to be small. Finally, Cullen et al. (2006), using
CO observations, place an upper limit on the molecular gas mass of
$M_{\rm molec} < 0.72 \times 10^7$ M$_\odot$.
The presence of this moderate amount of atomic and molecular gas,
combined with the likely disturbance caused by the apparent
interaction (possibly linked to enhanced star-formation efficiencies),
suggests that dense gas flows could result in principle, possibly
followed by their turbulent collapse into stars and possibly (massive)
star clusters. In fact, Kundu \& Whitmore (2001; see also Larsen et
al. 2001) used archival {\sl HST}/WFPC2 data of (a section of)
\object{NGC 4649}, in which they detect a significant number (several
tens) of blue clusters -- comprising only the bright end of the GC
luminosity function. Based on their [$0 < (V-I) < 0.5$] mag colours,
these clusters could be as young as 20 Myr for solar metallicity, or
$\la 100$ Myr if their metallicity is as low as 0.02 Z$_\odot$.
\subsection{Observations and reduction}
We obtained archival observations of the NGC 4647/9 system, centred on
\object{NGC 4649}, taken with the ACS/WFC on board {\sl HST} (GO-9401,
PI C\^ot\'e) through the F475W and F850LP broad-band filters on UT
2003 June 17, with exposure times of 750 and 1120 s, respectively.
These filters closely correspond to the Sloan Digital Sky Survey
(SDSS) $b$ and $i$ bands. The ACCUM imaging mode was used to preserve
dynamic range and to facilitate the removal of cosmic rays.
For the purpose of the research reported on in this {\it Research
Note}, we used the standard on-the-fly data reduction pipeline
(CALACS) in {\sc iraf/stsdas}\footnote{The Image Reduction and
Analysis Facility ({\sc iraf}) is distributed by the National Optical
Astronomy Observatories, which is operated by the Association of
Universities for Research in Astronomy, Inc., under cooperative
agreement with the US National Science Foundation. {\sc stsdas}, the
Space Telescope Science Data Analysis System, contains tasks
complementary to the existing {\sc iraf} tasks. We used Version 3.5
(March 2006) for the data reduction performed in this paper.},
providing us with images corrected for the effects of flat fielding,
shutter shading, cosmic ray rejection and dark current. The use of the
latest flat fields is expected to result in a generic large-scale
photometric uniformity level of the images of $\sim 1$\%. We used the
final, dithered images produced by the most recent release of the {\sl
PyDrizzle} software tool. {\sl PyDrizzle} also performs a geometric
correction on all ACS data, using fourth-order geometric distortion
polynomials, and subsequently combines multiple images into a single
output image and converts the data to units of count rate at the same
time. We registered the individual images obtained for both passbands
to high (subpixel) accuracy, using the {\sc iraf} task {\sc imalign}.
\section{Pixel-by-pixel Analysis}
We constructed colour-magnitude diagrams (CMDs) on a pixel-by-pixel
basis from our ACS observations to study the field star population.
This technique proved very powerful in our analysis of the ACS early
release observations (ERO) of the \object{``Mice''} and the
\object{``Tadpole''} interacting systems (de Grijs et al. 2003d; see
also Eskridge et al. 2003). Since the ACS/WFC is somewhat undersampled
by its point-spread function at optical wavelengths, the individual
pixels are statistically independent. In our previous work on the ACS
ERO data we identified several subpopulations from the CMDs, which
were found to originate from spatially well-defined regions within the
interacting systems.
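In essence, the pixel-CMD construction reduces to a per-pixel magnitude conversion of the two registered, geometrically corrected frames. A minimal sketch is given below; the zero points and file names are placeholders rather than the calibrated ACS values.
\begin{verbatim}
import numpy as np
from astropy.io import fits

zp475, zp850 = 26.1, 24.9              # placeholder AB zero points (mag)
blue = fits.getdata('f475w_drz.fits')  # hypothetical file names
red  = fits.getdata('f850lp_drz.fits')

good = (blue > 0) & (red > 0)          # pixels with positive count rates
m475 = -2.5*np.log10(blue[good]) + zp475
m850 = -2.5*np.log10(red[good])  + zp850
colour = m475 - m850                   # one (colour, magnitude) pair per pixel
\end{verbatim}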
We applied the same techniques to the ACS images of the \object{NGC
4647/9} system, in order to search for any evidence of CMD features
corresponding to regions of enhanced star formation in \object{NGC
4649}. Our painstaking analysis of these high-quality {\sl HST}-based
CMD data proved conclusively that \object{NGC 4649} is essentially
composed of a similar stellar population mix throughout the entire
body of the galaxy. The only specific CMD features of note correspond
to (redder) dusty pockets and to the large population of blue and red
GCs (see below).
The galaxy's pixel-CMD does not reveal any significant subpopulation
of pixels that could be ascribed to (an enhanced level of) more recent
star formation, neither throughout the galaxy as a whole nor in
spatially confined regions. For the latter purpose, we carefully
scrutinized the pixel-CMD behaviour of the galaxy in its four
quadrants defined by the major and minor axes, as well as in smaller
wedge-shaped slices, all for galactocentric radii between 20 and 90
arcsec ($\sim 1.6 - 7.2$ kpc). These radial constraints were imposed
on the one hand to avoid the effects of the bright galactic nucleus,
and on the other to avoid spurious pixel-CMD values originating from
the spiral companion, \object{NGC 4647}.
In Fig. \ref{wedgecmds.fig} we show the pixel-CMDs for the eight
wedge-shaped areas studied independently in the galaxy; their locations
are indicated using an (F475W -- F850LP) colour image of the field as
guidance (the numbers in each panel correspond to the numbered wedges
superimposed onto the galaxy image). By applying
edge-fitting techniques based on colour histograms at a range of
magnitudes we conclude that both the blue and the red edges of the
individual wedge pixel-CMDs are identical, within the observational
uncertainties of $\Delta ({\rm F475W} - {\rm F850LP}) \la 0.08$ mag,
to some extent depending on pixel magnitude (in the sense of smaller
uncertainties at brighter magnitudes). For additional comparison
purposes, in Table \ref{wedgecols.tab} we list the mean (F475W --
F850LP) colours and standard deviations of the bright-peak pixels, for
$m_{\rm F475W} \le 27.0$ mag. Since these pixels are unaffected by any
recent star formation, the variation in the mean colour reflects the
intrinsic colour variation in the main body of the galaxy. We remind
the reader that the uncertainties are small because of the large
number of data points these statistical measures are based on; they
were obtained by slightly varying the magnitude cut-off used to obtain
the mean values.
\begin{table}
\caption{(F475W -- F850LP) colours and standard deviations of the
pixels with $m_{\rm F475W} \le 27.0$ mag.}
\label{wedgecols.tab}
\centering
\begin{tabular}{c c c}
\hline\hline
Wedge & Mean & Standard \\
& colour (mag) & deviation (mag) \\
\hline
1 & $0.194 \pm 0.004$ & $0.142 \pm 0.012$ \\
2 & $0.188 \pm 0.003$ & $0.098 \pm 0.003$ \\
3 & $0.205 \pm 0.002$ & $0.089 \pm 0.002$ \\
4 & $0.217 \pm 0.001$ & $0.088 \pm 0.003$ \\
5 & $0.205 \pm 0.001$ & $0.122 \pm 0.006$ \\
6 & $0.184 \pm 0.001$ & $0.121 \pm 0.002$ \\
7 & $0.181 \pm 0.002$ & $0.166 \pm 0.010$ \\
8 & $0.194 \pm 0.002$ & $0.100 \pm 0.005$ \\
\hline
\end{tabular}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=13.0cm]{reducedcmds.ps}
\caption{Pixel-CMDs of eight wedge-shaped fields in \object{NGC
4649}. The wedge numbers shown in the top left-hand corner of each
panel refer to the wedges indicated on the (F475W -- F850LP) colour
image of the galaxy, included in the bottom right-hand corner of the
figure. The areas of the ``blue excess'' pixels are indicated by the
dashed ellipses in all panels; the linear spurs and features are
identified by labels {\sl a--g}. The colour image shows the full {\sl
HST}/ACS field of view (after geometric corrections), with 4000 ACS
pixels, or 200 arcsec, on a side. The colour levels range linearly
from (F475W -- F850LP) $\simeq -0.6$ (black) to $\simeq 1.3$ (white).}
\label{wedgecmds.fig}
\end{figure*}
In all cases, and after having taken into account the reddening effect
introduced by Poissonian noise (shot noise) statistics at the faintest
magnitudes, the mean (F475W -- F850LP) colour was compared to the {\sc
galev} simple stellar population (SSP) models (cf. Anders \&
Fritze--v. Alvensleben 2003), which include model sets calculated for
the full set of {\sl HST} imaging filters. The mean colour is
consistent with old stellar populations, for a range of relevant
metallicities (from 0.02 to 1.0 Z$_\odot$).
In Fig. \ref{wedgecmds.fig} we have also indicated the area where we
observe a ``blue excess'' in all wedges (dashed ellipses), as well as
a number of almost linear spurs and feather-like features on both the
blue or red side of the pixel-CMD of the main galaxy body ({\it
a--g}). In all cases, the ``blue excess'' pixels correspond to a
combination of clearly identifiable blue GCs (which appear as
spatially clumped blue excess pixels) and shot noise at the faintest
isophote covered by our detailed investigation (down to $m_{\rm F475W}
= 28.5$ mag); the latter could also be clearly identified based on its
radially random spatial distribution and random noise
characteristics. [In wedge 7, the apparently larger amount of these
blue excess pixels is due to the presence of a foreground star with
similar colours.] We note that there is no clear enhancement of blue
excess pixels on the side of the galaxy facing its spiral companion;
the amount of such pixels is similar or even slightly reduced with
respect to the rest of the galaxy.
Similarly, the spurs and feathers ({\sl a--g}) correspond to the large
(red and blue) GC population, as well as to shot noise at the faintest
isophote covered by our study. This also applies to the spurs
extending to brighter magnitudes starting from the cut-off magnitude
of the main body of the galaxy in wedges 3 and 4. Once again, the
nature of these pixels which led us to identify them as GCs or noise
is very clear from their spatial distribution and brightness
characteristics. We note that these spurs seem to occur predominantly
on the side of the galaxy away from its spiral companion, which is
consistent with the distribution of the blue excess pixels.
Thus, we conclude that the apparent tidal interaction with \object{NGC
4647} (but see Sect. \ref{verdict.sec} for a discussion on the
robustness of the tidal-interaction assumption) has had a negligible
effect (if any) on the recent star-formation activity in \object{NGC
4649}. However, close inspection of the (F475W -- F850LP) colour image
in the bottom right-hand panel {\it does} show a darker (bluer) loop
of pixels close to the spiral companion. This may be the first
evidence of star formation in the elliptical component induced by the
close encounter between these two galaxies. The mean (F475W -- F850LP)
colour of this blue loop is $-0.04 \pm 0.01$, i.e., approximately 0.20
mag bluer than the pixels in the brightest part of the main body of
the galaxy\footnote{A back-of-the-envelope calculation suggests that,
if the colours in Table \ref{wedgecols.tab} represent the old stellar
population and (depending on metallicity) a population reddens by
$\sim 1.0$ mag between 10 Myr and 1 Gyr, the blue-loop pixels contain
a $\la 30$ per cent contribution from a young, recently formed stellar
population.}. Since these blue pixels occur at the
faint limit of our observations, they cannot be distinguished in the
individual pixel-CMDs of wedges 4 and 5 because of the overall
increase of the observational uncertainties at low
brightnesses. However, because the blue loop appears to be a distinct
feature in colour space, at a level unexpected from potential
flat-field variations after {\sc MultiDrizzle} and geometric
corrections ($\ll 1$ per cent; Pavlovsky et al. 2006; see also STScI
Analysis Newsletter for ACS, 9 August 2002), we believe this to be a
real feature intrinsic to the galaxy's stellar population. We also
point out that this colour variation of $\sim 0.20$ mag exceeds the
intrinsic colour variations seen in Table \ref{wedgecols.tab}, thus
further suggesting that this is indeed a real feature.
\section{Interaction or chance alignment?}
Our search for young bright blue pixels signifying recent star
formation, particularly in the south-eastern quadrant of the
elliptical, has proven futile. This indicates that there have been no
recent significant episodes of massive star formation within
\object{NGC 4649}, at least not on large spatial scales. This implies
that either the Jeans mass criterion for the formation of massive star
clusters was not met (or, at least, that the threshold for star
formation may not have been met) or that the galaxies may not actually
be interacting.
\subsection{Conditions for the onset of star formation}
There are a number of possible reasons as to why the Jeans mass
criterion, or the threshold conditions for star formation to proceed,
may not have been met in \object{NGC 4649}. The most likely of these
seem to be that (i) there may be insufficient gas, (ii) the gas
density may be too low with respect to the strength of the postulated
gravitational interaction, or (iii) the temperature may be too high
for any collapse to proceed unimpeded. The latter is of concern in
view of the X-ray temperature quoted in Sect. \ref{selection.sec},
although this temperature is comfortably in the range expected for
star-forming and starburst galaxies [see e.g. Hartwell et al. (2004)
for the starburst galaxy \object{NGC 4214}].
Regarding options (i) and (ii), from the example of the large
interacting spiral galaxy \object{NGC 6745a,b} and its much smaller
early-type companion \object{NGC 6745c}, there is evidence that star
and cluster formation in early-type, gas-poor galaxies may be
triggered if the gravitational interaction is sufficiently violent (de
Grijs et al. 2003a). We will return to the issue as to whether the
interaction in the \object{Arp 116} system is sufficiently violent
(and whether the galaxies are sufficiently close to one another) in
Sects. \ref{distances.sec} and \ref{verdict.sec}. However, we point
out here that Cullen et al. (2006) report an upper limit to the
detectable H{\sc i} column density (at the $3\sigma$ level) of $N_{\rm
HI} \la (3.5 \pm 0.3) \times 10^{20}$ cm$^{-2}$. They emphasize that
this is a factor of four lower than that observed in the elliptical
galaxy \object{NGC 1410} by Keel (2004), which does not show strong
evidence of recent star formation either, based on {\sl HST}/STIS
imaging and WIYN spectral mapping techniques. In fact, a conservative
back-of-the-envelope estimate of the mean surface density of the
atomic and molecular gas in \object{NGC 4649}, implies values of $\la
0.1$ M$_\odot$ pc$^{-2}$. We based this estimate on a total gas mass
estimate of $2 \times 10^8$ M$_\odot$ (cf. Sect. 2.1), smoothly
distributed within the galaxy's $D_{25}$ isophote ($7\farcm4 \times
6\farcm0$). This is an order of magnitude lower than the (critical)
threshold densities for star formation derived by Kennicutt (1989) for
the outer regions of spiral galaxies, a situation that is even
worsened if we realise that the H{\sc i} distribution is often
distributed well beyond the optical extent of most galaxies.
This suggests that the possibly weak interaction combined with the low
density of the interstellar gas in the galaxy, may not be sufficiently
conducive to trigger star-formation rates at the level that can be
observed with the current-best instrumentation.
We should also point out that the mass ratio of the \object{NGC
4647/9} system is opposite to that of the \object{NGC 6745} system;
from $K$-band imaging, Cullen et al. (2006) conclude that the ratio of
the masses of the early with respect to the late-type component is
$\sim 5.2$. If a gravitational encounter occurs between unevenly
matched galaxies, provided that they contain sufficient gas
reservoirs, then -- as expected, both intuitively and based on a large
number of dynamical simulations -- the effects of the gravitational
interaction are much more pronounced in the smaller galaxy. For
instance, when we compare the impact of the interaction as evidenced
by star cluster formation (which we take as the most violent mode of
star formation here) between \object{M82} (de Grijs et al. 2001,
2003b,c) and \object{M81} (Chandar et al. 2001), the evidence for
enhanced cluster formation in the larger galaxy is minimal if at all
detectable.
\subsection{The distance to \object{Arp 116}}
\label{distances.sec}
Despite these galaxies being members of the \object{Virgo cluster}
(they are, in fact, projected to be close to the cluster core), their
distance estimates are rather uncertain. Clearly, for the galaxies
to interact, they need to be sufficiently close to one another in
order to respond to their mutual gravitational effects.
Therefore, we embarked on a detailed literature search for reliable
distance measurements to both galaxies. The most comprehensive review
of the distance to \object{NGC 4649}, including an extensive
bibliography up to July 1999, was published by the {\sl HST} Key
Project on the Extragalactic Distance Scale (Ferrarese et
al. 2000a). The secondary distance indicators they used include the
method of SBF, the bright-end cut-off of the Planetary Nebula
Luminosity Function (PNLF), and the peak of the GC Luminosity Function
(GCLF); homogeneous calibration relations of these methods, relating
them to the Cepheid distance scale, were published by Ferrarese et
al. (2000b). Independent analysis of their sample galaxies based on
the $D_n-\sigma$ method resulted in fully consistent distance
measurements (Kelson et al. 2000), thus instilling confidence in the
robustness of these distance calibrations. Here we will discuss the
additional relevant evidence regarding the distance to \object{NGC
4649} presented in studies following the 1999 review.
Ferrarese et al. (2000a,b) adopted as the best average distance
modulus to \object{NGC 4649}, $m-M = 31.09 \pm 0.08$ mag. Subsequent
measurements largely agree with this value (e.g., Tonry et al. 2001;
Di Criscienzo et al. 2006; Mar\'\i n-Franch \& Aparicio 2006; but see
Neilsen \& Tsvetanov 2000). This distance modulus corresponds to a
physical distance to \object{NGC 4649} of $D = 16.52 \pm 0.60$
Mpc. For comparison, if we use the galaxy's systemic velocity, $v_{\rm
sys} = 1117 \pm 6$ km s$^{-1}$ (Ferrarese et al. 2006), and a Hubble
constant H$_0 = 67$ km s$^{-1}$ Mpc$^{-1}$, the corresponding distance
is in close agreement, at $D_{{\rm H}_0} = 16.67 \pm 0.09$ Mpc
(although we point out that large-scale peculiar motions may affect
this result).
In addition, Larsen et al. (2001) used {\sl HST}/WFPC2 data to obtain
the \object{NGC 4649} GCLF based on 345 GCs. They deduced a turn-over
magnitude of $m_V = 23.58 \pm 0.08$ mag for the entire GC sample, and
$m_V = 23.46 \pm 0.13$ mag for the GCs in the blue peak, which are
usually associated with the oldest GC population in a galaxy making up
the ``universal'' GCLF (e.g., Fritze--v. Alvensleben 2004). Using the
Ferrarese et al. (2000b) distance calibration of GCLFs in the
\object{Virgo cluster}, these turn-over magnitudes correspond to
distances of $D_{\rm GCLF} = 17.22^{+2.64}_{-2.29}$ and $D_{\rm GCLF}
= 16.29 \pm 2.50$ Mpc, respectively. The latest study by Forbes et
al. (2004), using GMOS on Gemini-North, detected 2647 GC candidates;
they find a turn-over magnitude of $m_I = 23.17 \pm 0.15$
mag. However, as we are specifically interested in the distance to
\object{NGC 4649}, we are limited to $V$-band measurements for reasons
of observational robustness. For this reason we convert their $I$-band
turnover magnitude to a $V$-band magnitude, using the {\sc galev} SSP
models. We adopt a Salpeter stellar initial mass function covering a
mass range from 0.1 to 70 M$_\odot$ and a metallicity of 0.2
Z$_\odot$. This yields $(V-I) = 1.03$ at an age of 10 Gyr, and hence
we deduce $m_V \simeq 24.2$ mag. Adopting an absolute turn-over
magnitude of $M_V = -7.3$ (Harris et al. 1991), gives a distance of
20.0 Mpc; using Ferrarese et al.'s (2000b) calibration, we derive a
distance of 22.9 Mpc to the galaxy. Considering the manipulation we
had to go through in order to reach this result, the uncertainty on
these $I$-band GCLF distances is $\ga 4$ Mpc.
We base our initial analysis of the distance to the spiral companion,
\object{NGC 4647}, on the compilation of measurements by Solanes et
al. (2002). We also consider the work by Sch\"oniger \& Sofue (1997),
who carefully compared the reliability of using the Tully-Fisher
relation (TFR) based on both H{\sc i} and CO observations (locally
calibrated using Cepheid distances). All of the Solanes et al. (2002)
measurements are TFR based.
Solanes et al.'s (2002) best distance modulus to \object{NGC 4647},
based on homogenisation of the available distance measurements, $m - M
= 31.25 \pm 0.26$ mag, corresponds to a physical distance of $D =
17.78^{+2.26}_{-2.00}$ Mpc. The spread in distance measurements
obtained for this galaxy is in essence within $\sim 2 \sigma$ of this
value, and straddled by the TFR distances of Sch\"oniger \& Sofue
(1997): $D_{\rm CO} = 17.5$ Mpc and $D_{\rm HI} = 22.0$ Mpc. For
comparison, the galaxy's recessional velocity of $v_{\rm sys} = 1419
\pm 63$ km s$^{-1}$ (from the compilation of Falco et al. 1999; but
note that the optical and H{\sc i} 21-cm velocities differ by $\sim
30$ km s$^{-1}$; cf. Huchtmeier \& Richtler 1986) implies a distance
of $D_{{\rm H}_0} \simeq 20.6$ Mpc (or 18.9 Mpc if we base our
estimate on the recessional velocity of $v_{\rm LG} \simeq 1305$ km
s$^{-1}$ with respect to the barycentre of the \object{Local Group};
Helou et al. 1984).
Thus, here we conclude that the two galaxies are most likely within
1--1.5 Mpc of each other. Their velocity differential of order 300 km
s$^{-1}$ (e.g., Gavazzi et al. 1999; Cullen et al. 2006) is also well
within the velocity dispersion of the \object{Virgo cluster} as a
whole, $\sigma_{\rm Virgo} \sim 821$ km s$^{-1}$ (e.g., Sandage \&
Tammann 1976). It is, however, unclear whether the galaxies are
approaching or receding from each other; even very careful modelling
of their luminosity profiles in search of extinction features at large
radii (White et al. 2000) proved inconclusive as to which galaxy might
be in front of the other.
\section{The final verdict?}
\label{verdict.sec}
Owing to the uncertainties in the absolute calibration of the various
secondary distance indicators we needed to rely on in
Sect. \ref{distances.sec}, combined with the relatively low gas
content in the elliptical galaxy, we can neither rule out nor confirm
that both galaxies are in fact sufficiently close to expect signs of
gravitational disturbances\footnote{In fact, we realise that the
\object{Milky Way} and \object{M31} are also well inside the
uncertainty range derived in Sect. \ref{distances.sec}, yet show no
sign of any mutual interaction...}.
Rubin et al. (1999) point out that, although their long-slit optical
spectra show that the kinematics of the spiral component, \object{NGC
4647}, appear nearly normal (but see Young et al. 2006), the H$\alpha$
and molecular (CO and H$_2$) gas disks are clearly asymmetric
(Koopmann et al. 2001; Young et al. 2006), while the more extended
H{\sc i} disk is also slightly extended toward \object{NGC 4649}
(Cayatte et al. 1990), but not nearly by as much as the molecular gas
(Young et al. 2006). This might reflect an H{\sc i} infall scenario,
as Cayatte et al. (1990) suggest, or possibly the {\it onset} of a
tidal interaction between the two galaxies (Rubin et al. 1999; Cullen
et al. 2006). The latter idea is supported by results from
observations as well as numerical simulations that the most apparent
effects of ongoing tidal encounters tend to occur {\it after} the
period of closest approach (e.g., Moore et al. 1998; Rubin et
al. 1999; Young et al. 2006). However, from their detailed analysis of
the gas pressure across the disk of \object{NGC 4647}, Young et
al. (2006) conclude that ram pressure effects alone, postulated to be
caused by tidal forces from \object{NGC 4649}, do not explain the
observations entirely satisfactorily. Instead, they favour a lopsided
gravitational potential as the primary cause for the asymmetries
observed, akin to those commonly seen in spiral galaxies.
Bender et al. (1994) analysed the resolved optical kinematic profiles
of \object{NGC 4649} along both its major and minor axes, out to radii
of $\sim 40$ arcsec. Interestingly, they found evidence for a weak
asymmetry in the line-of-sight velocity dispersion (LOSVD), as well as
negative $h_3$ values (i.e., third-order Gauss-Hermite coefficients,
giving higher-order kinematic information about the deviations of the
LOSVD from a Gaussian distribution), along the galaxy's major axis
(roughly pointing toward the spiral companion). On the other hand, the
minor-axis kinematics do not show any significant asymmetries.
Negative $h_3$ values are normally seen in disky elliptical galaxies,
and in ellipticals with kinematically decoupled cores, but Bender et
al. (1994) did not find any photometric or kinematic evidence for the
presence of such components. In addition, they concluded that the
observed asymmetry was unlikely caused by projection effects. These
conclusions are corroborated by both De Bruyne et al. (2001) and
Pinkney et al. (2003), who traced the galaxy's kinematics out to $\sim
90$ and 40 arcsec, respectively, along its major axis. De Bruyne et
al. (2001), in particular, detected a $\sim 70$ km s$^{-1}$ difference
between the rotation curves on either side of the galactic centre at
their outermost measured radii, just beyond the galaxy's effective
radius. These results provide circumstantial support for the tidal
interaction idea, with \object{NGC 4647} thought to have caused this
minor disturbance of the \object{NGC 4649} kinematics.
Thus, while there may be grounds for the tidal-interaction assumption
in the case of the \object{Arp 116} system, any interaction has thus
far been of insufficient strength to trigger an enhanced level of
recent star formation in the elliptical galaxy component (with the
possible exception of a $\sim 0.20$ mag bluer ``loop'' of pixels in
the elliptical galaxy on the side of its spiral companion). This was
shown conclusively by the null result we obtained from our careful
analysis of the elliptical galaxy's CMD on a pixel-by-pixel
basis. This suggests that we are currently witnessing the onset of the
tidal interaction between \object{NGC 4647} and \object{NGC 4649}. In
addition, the threshold for new star formation in \object{NGC 4649}
may be a further unmet constraint due to the much lower mass of the
spiral component. Detailed numerical ($N$-body) and hydrodynamical
(SPH) modelling, taking into account both the current shape of the
gaseous morphology of \object{NGC 4647} (as well as entertaining the
possibility of the presence of a lopsided gravitational potential),
and the kinematic disturbances seen along the \object{NGC 4649} major
axis, are required to shed light on the future evolution of this
intriguing system.
\begin{acknowledgements}
We acknowledge stimulating discussions with Simon Goodwin and Peter
Anders. RdG acknowledges an International Outgoing Short Visit grant
to the National Astronomical Observatories of China in Beijing from
the Royal Society, and hospitality and support from Profs. X.Q. Na
and L.C. Deng, Dr. J. Na, and Z.M. Jin during the final stages of
this project. We are grateful to the referee for suggesting
improvements to make this paper more robust. This paper is based on
archival observations with the NASA/ESA {\sl Hubble Space Telescope},
obtained from the ST-ECF archive facility. We acknowledge the use of
the HyperLeda database (http://leda.univ-lyon1.fr). This research has
also made use of NASA's Astrophysics Data System Abstract Service.
\end{acknowledgements}
\section{Introduction}
3D hand pose estimation has been greatly improving in the past few years, especially with the availability of depth cameras.
While new methods~\cite{oberweger2015training,ye2016spatial,ge20173d,wan2017crossing,tang2018opening} and datasets~\cite{tang2014latent,tompson2014real,sun2015cascade,yuan2017bighand2,garcia2018first} have been published, state-of-the-art methods are still
lacking in accuracy required for fine manipulations for AR or VR systems.
There is a large accuracy gap between pose estimation from RGB and depth image input, which several recent works have aimed to narrow~\cite{simon2017hand,zimmermann2017learning,mueller2017real,PantelerisArgyros2017}.
One of the difficulties has been the lack of large-scale realistic RGB datasets with accurate annotations. Recent papers have addressed this issue by creating synthetic datasets~\cite{zimmermann2017learning}, or employing GANs to generate training data~\cite{mueller2017ganerated}.
In this paper we propose using depth data as {\em privileged information} during training.
Fully annotated depth datasets~\cite{tang2014latent,tompson2014real,sun2015cascade,yuan2017bighand2,garcia2018first} are abundant in the literature, but so far no attempt has been made to use this data to support the task of 3D hand pose estimation from RGB images.
There are also a few RGB-D datasets proposed recently~\cite{zimmermann2017learning,zhang20163d} to tackle the problem of 3D hand pose estimation from RGB images, however all existing methods~\cite{zimmermann2017learning,mueller2017ganerated,zhang20163d} utilise only RGB images for training. The available depth images, either paired with RGB images~\cite{zhang20163d,zimmermann2017learning} or alone in the large-scale \textit{BigHand2.2M} dataset~\cite{yuan2017bighand2} could be used to aid the training.
The use of privileged information in training~\cite{vapnik2009new}, also called training with hidden information~\cite{wang2015classifier}, or side information~\cite{xu2013speedup}, has been shown to improve performance in other domains, such as image classification~\cite{chen2017training}, object detection~\cite{hoffman2016learning}, and action recognition~\cite{shi2017learning}.
But the concept of using privileged information to help 3D hand pose estimation from RGB images has not been attempted.
To the best of our knowledge, this paper proposes the first solution. Existing methods for 3D hand pose estimation from RGB images pursue two main directions: (1) using only RGB images for 3D hand pose estimation~\cite{zhang20163d,zimmermann2017learning,mueller2017ganerated}, with different CNN models being proposed. Given the limited size of real RGB datasets, a large number of synthetic images~\cite{zimmermann2017learning,mueller2017ganerated} are created to help the training, whether they are purely synthetic~\cite{zimmermann2017learning}, or using CycleGAN~\cite{zhu2017unpaired} to enforce a certain realism ~\cite{mueller2017ganerated}. (2) Using RGB-D images for 3D hand pose tracking~\cite{mueller2017real}, where the input is the depth channel in addition to the RGB channels. This works well when the paired RGB and depth images are available at test time. The lack of large-scale annotated training data limits the success of this approach. Our study proposes a new framework for 3D hand pose estimation from RGB images, by using the existing abundant fully annotated depth data in training, as privileged information. This helps improve 3D hand pose estimation using a single RGB image input at test time.
Our method transfers supervision from depth images to RGB images. We use two networks, an RGB-based network and a depth-based network, see Figure~\ref{fig:PI_cnn_model}.
We explore different ways to use depth data: (1) initially, we treat a large amount of independent external depth training data as privileged information to train the depth-based network. (2) After the initial training is completed, paired RGB and depth images are used to tune the RGB-based network and the depth-based network. The idea is to let the middle layer activations of the RGB network mimic that of the depth network. (3) We also explore the use of foreground hand masks to suppress background area activations in the middle layers of the RGB network. By doing this, we force the RGB network to extract features only from the foreground area.
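A minimal sketch of the second training stage is given below; the architectures, layer sizes, number of joints and loss weight are placeholders, and the particular loss functions are illustrative rather than the exact configuration used in our experiments. The RGB branch is trained with the 3D pose loss \textit{Loss\_C} plus a mid-level feature regression loss \textit{Loss\_Inter} computed against the frozen depth branch, cf. Figure~\ref{fig:PI_cnn_model}.
\begin{verbatim}
import torch
import torch.nn as nn

def backbone(in_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())

class PoseNet(nn.Module):
    def __init__(self, in_ch, n_joints=21):
        super().__init__()
        self.mid = backbone(in_ch)                  # mid-level features
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(64, 3*n_joints))
    def forward(self, x):
        f = self.mid(x)
        return self.head(f), f

rgb_net, depth_net = PoseNet(3), PoseNet(1)
for p in depth_net.parameters():                    # depth network is frozen
    p.requires_grad = False

opt = torch.optim.Adam(rgb_net.parameters(), lr=1e-4)
lam = 0.1                                           # placeholder loss weight

def pi_step(rgb, depth, joints_gt):
    pose, f_rgb = rgb_net(rgb)
    with torch.no_grad():
        _, f_depth = depth_net(depth)
    loss = nn.functional.mse_loss(pose, joints_gt.view(joints_gt.size(0), -1)) \
           + lam * nn.functional.mse_loss(f_rgb, f_depth)  # Loss_C + Loss_Inter
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
\end{verbatim}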
Compared to existing methods for 3D hand pose estimation by RGB images, our main contributions are:
\begin{itemize}
\item To the best of our knowledge, this paper is the first to introduce the concept of using privileged information (depth images) to help the training of a RGB-based hand pose estimator.
\item We propose three ways to use the privileged information: as external training data for a depth-based network, as paired depth data to transfer supervision from the depth-based network to the RGB-based network, as hand masks to suppress the background activations in the RGB-based network.
\item Our training strategy can easily be embedded into existing pose estimation methods. We demonstrate this in experiments on 2D hand pose estimation from a single RGB image using a different CNN model, where our training strategy improves over state-of-the-art methods for 2D hand pose estimation with RGB input.
\end{itemize}
Comprehensive experiments are conducted on three datasets: the Stereo dataset~\cite{zhang20163d}, the RHD dataset~\cite{zimmermann2017learning}, and the Dexter-Object dataset~\cite{sridhar2016real}. The Stereo dataset and RHD dataset are used for evaluating 3D pose estimation from an RGB input. All three datasets are used for evaluating 2D hand pose estimation from a single RGB image.
\begin{figure*}[t]
\centering
\includegraphics[trim=1.0cm 5.5cm 5.5cm 2cm, clip=true,width=1.0 \textwidth]{structure_6.pdf}
\caption{
\textbf{Proposed framework for 3D hand pose estimation from an RGB image using privileged depth data.} \textit{Training proceeds in two stages, a pre-training stage and privileged information (PI)-training stage. In the first stage, a depth-based network (top) and an RGB-based (bottom) network are trained independently to minimize 3D pose loss \textit{Loss\_D} and \textit{Loss\_C}. In the second stage, we freeze the parameters of the depth-based network and continue training with paired RGB and depth images, by minimizing a joint loss, which includes \textit{Loss\_C} and a mid-level feature regression loss \textit{Loss\_Inter}.}}
\label{fig:PI_cnn_model}
\end{figure*}
\section{Related Work}
\textbf{3D hand pose estimation.}
Hand pose estimation from depth data has made rapid progress in the past years~\cite{oberweger2015training,ge20173d,wan2017crossing,sharp2015accurate,choi2015collaborative}, where comprehensive studies~\cite{erol2007vision,supancic2015depth,yuan20183d} have been instrumental in advancing the field.
Random forests~\cite{tang2014latent,tang2015opening,wan2016hand} and CNNs~\cite{ye2016spatial,ge20173d,wan2017crossing,tompson2014real} trained on large-scale public depth image
datasets~\cite{tang2014latent,tompson2014real,sun2015cascade,yuan2017bighand2,garcia2018first} have shown good performance.
A recent benchmark evaluation~\cite{yuan20183d} showed that modern methods achieve mean 3D joint position errors of less than 10mm.
Hand pose estimation from RGB images is significantly more challenging
~\cite{simon2017hand,zimmermann2017learning,mueller2017real,PantelerisArgyros2017}.
Due to the difficulty in capturing real RGB datasets with accurate 3D annotations, recent methods employ synthetic CG data~\cite{zimmermann2017learning}, or \textit{GANerated} images~\cite{mueller2017ganerated}, which are more realistic synthetic images created with a CycleGAN~\cite{zhu2017unpaired}.
Mueller \etal~\cite{mueller2017ganerated} use an image-to-image translation network to create a large amount of RGB training images and combine a CNN with a kinematic 3D hand model for pose estimation. The method requires a predefined hand model, adapted for each user.
Simon \etal's {\textit{OpenPose}}~\cite{simon2017hand} system generates an annotated RGB dataset using a panoptic studio setup, using multiple views to bootstrap 2D hand pose estimation.
Zimmermann and Brox~\cite{zimmermann2017learning} proposed combining hand segmentation and 2D hand pose estimation (using \textit{CPM}~\cite{wei2016convolutional}), followed by estimating 3D hand pose relative to a canonical pose.
Panteleris and Argyros~\cite{PantelerisArgyros2017} estimate absolute 3D hand pose by first estimating 2D hand pose and then optimizing a 3D hand model with inverse kinematics.
Note that there also exists a large body of work on the related task of recovering full 3D human body pose from images.
One line of work aims to directly estimate the 3D pose from images~\cite{li20143d,zhou2016deep,toshev2014deeppose}.
A second approach is to first estimate 2D pose, often in terms of joint locations, and then lift this to 3D pose.
2D key points can be reliably estimated using CNNs and 3D pose is estimated using structured learning or a kinematic model~\cite{tome2017lifting,tompson2014joint,simo2012single,zhou2016sparseness}.\\
\textbf{Learning with privileged information and transfer learning.}
Privileged information denotes training data that is available only during training but not at test time.
The concept to provide teacher-like supervision at training time was introduced by Vapnik and Vashist~\cite{vapnik2009new}.
The idea has proven useful in other domains~\cite{chen2017training,hoffman2016learning,shi2017learning}.
Shi \etal~\cite{shi2017learning} treated skeleton data as privileged information in CNN-RNNs for action recognition from depth sequences. Chen \etal~\cite{chen2017training} manually annotated object masks in 10\% of the training data and treated these as privileged information for image classification.
The idea is related to network compression and mimic learning proposed by Ba and Caruana~\cite{ba2014nips} as well as network distillation by Hinton \etal~\cite{hinton2015distillation}, where intermediate layer outputs of one network are approximated by another, possibly smaller, network.
These techniques can be used to significantly reduce the number of model parameters without a significant drop in accuracy.
In our case, the application target is similar to transfer learning and domain adaptation. Information from one task, prediction from depth images, is shared with another, prediction from RGB images.
In transfer learning and domain adaptation information is shared across different data modalities~\cite{rad2018domain,chen2014recognizing,hoffman2016learning}. Chen \etal~\cite{chen2014recognizing} proposed recognition in RGB images by learning from labeled RGB-D data. A common feature representation is learned across two feature modalities.
Hoffman \etal~\cite{hoffman2016learning} learned an additional \textit{hallucination} representation, which is informed by the depth data during training. At test time, a softmax is used to select the final prediction between the predictions from the hallucination representation and those from the RGB representation.
Luo \etal~\cite{luo2017graph} recently proposed graph distillation for action detection with privileged modalities (RGB, depth, skeleton, and flow), where a novel graph distillation layer was used to dynamically learn to distill knowledge from the most effective modality, depending on the type of action.
In our case, we use paired depth and RGB images during training. Depth and RGB networks are first trained separately. Subsequently, the RGB network is progressively updated, while the depth network parameters remain fixed.
\textbf{Learning a latent space representation.}
Latent space representations have also shown promise for 3D hand pose estimation from RGB images~\cite{spurr2018cross,iqbal2018hand}.
Spurr \etal~\cite{spurr2018cross} learned a cross-modal statistical hand model, via learning of a latent space representation that embeds sample points from multiple data sources such as 2D keypoints, images, and 3D hand poses. Multiple encoders were used to project different data modalities into a unified low-dimensional latent space, where a set of decoders reconstruct the hand configuration in either modality.
Iqbal \etal~\cite{iqbal2018hand} used latent 2.5D heatmaps, containing the latent 2D heatmaps and latent depth maps, to ensure the scale and translation invariance. Absolute 3D hand poses are reconstructed from the latent 2.5D heatmaps.
Cai \etal~\cite{Caiweakly} proposed a weakly-supervised method for 3D hand pose estimation from RGB image by introducing an additional depth regularizer module, which rendered a depth image from the estimated 3D hand pose. Training was conducted by minimizing an additional loss term, which is the $L1$ distance between the rendered depth image and the ground truth depth image.
\section{Methods}
We propose a framework to train a hand pose estimation model from RGB images by using depth images as privileged information.
The model learns a new RGB representation which is influenced by the paired depth representation through mimicking the mid-level features of a depth network.
As shown in Figure~\ref{fig:PI_cnn_model}, we use depth images in two ways: (1) to train an initial depth-based network with the aim of regressing 3D hand poses. Depth data that is annotated with 3D full hand pose information is abundant in the literature, and we choose the largest real dataset BigHand2.2M~\cite{yuan2017bighand2} to train our depth-based model, see the top row of Figure~\ref{fig:PI_cnn_model}. (2) Paired RGB and depth images are fed into the RGB-based and depth-based network with the parameters of the depth-based network being frozen. The training of the RGB-based network continues with the aim of minimizing a joint loss function. The joint loss function has two parts, the first part being the 3D hand pose regression loss, \textit{Loss\_C}, and the second part the mid-level regression loss, \textit{Loss\_Inter}.
\subsection{Architecture}
Figure~\ref{fig:PI_cnn_model} shows our training architecture. There are two base models, each for one input channel. We use deep convolutional neural networks (CNNs), which have been widely used in hand pose estimation and have proven useful in transferring information from one network to another~\cite{hinton2015distillation}.
Prior work~\cite{mueller2017real} has shown that combining RGB and depth images into a four-channel RGB-D input to a single CNN model is useful for estimating 3D hand pose.
In our architecture, we share information in the middle layers of our two CNN models, one is a depth-based network and the other one is an RGB-based network. Each CNN model takes an input (a depth image or an RGB image) and produces a 3D hand pose estimation result.
For clarity, we denote the depth-based network \textit{Depth\_Net} and the RGB-based network \textit{RGB\_Net} when it is trained before privileged information is used. When privileged information is introduced in the training, we denote the RGB-based network \textit{RGB\_PI\_Net}. In summary, \textit{RGB\_Net} and \textit{RGB\_PI\_Net} are the same CNN model, trained before and after the paired RGB and depth images are used to train the RGB channel.
We aim at sharing information between the middle layers of our two CNN models, and in particular at using \textit{Depth\_Net} to inform \textit{RGB\_PI\_Net} at training time, when paired RGB and depth images are available.
To let the \textit{Depth\_Net} channel share information with \textit{RGB\_PI\_Net}, we introduce an intermediate regression loss between the paired layers in the two models.
This intermediate regression loss is inspired by prior works~\cite{hoffman2016learning,hinton2015distillation}, where similar techniques are used for model distillation~\cite{hinton2015distillation}, supervision transfer from well labeled RGB images to depth images with limited annotation~\cite{gupta2016cross}, and hallucination of different modalities~\cite{hoffman2016learning}.
We therefore introduce an intermediate loss,
which helps \textit{RGB\_PI\_Net} to extract middle level features that mimic the responses of the corresponding layer of the \textit{Depth\_Net} using the paired depth image.
The intermediate loss (or \textit{Loss\_Inter} as shown in Figure~\ref{fig:PI_cnn_model}) is defined as:
\begin{equation}
Loss\_Inter(k)= \|A_{k}^{Depth} - A_{k}^{RGB}\|_{2}^{2} \,,
\label{equ:interloss}
\end{equation}
where $A_{k}^{Depth}$ and $A_{k}^{RGB}$ are the $k$-th layer activations of the depth-based and RGB-based networks, respectively.
During testing, where only an RGB image is available, we feed the RGB image into \textit{RGB\_PI\_Net} to estimate the 3D hand pose.
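For illustration, the intermediate loss of Eqn.~\ref{equ:interloss} can be computed as in the PyTorch-style sketch below; the \texttt{features} module, the layer index, and the network objects are placeholders rather than the exact implementation used in this paper.
\begin{lstlisting}[language=Python, frame=top, frame=bottom]
import torch
import torch.nn.functional as F

def register_activation_hook(module, store, key):
    # Record the output of `module` in `store[key]` on every forward pass.
    def hook(_module, _inputs, output):
        store[key] = output
    return module.register_forward_hook(hook)

def intermediate_loss(depth_net, rgb_net, depth_img, rgb_img, k):
    """Squared L2 distance between the k-th layer activations (Eqn. 1)."""
    acts = {}
    h_d = register_activation_hook(depth_net.features[k], acts, "depth")
    h_r = register_activation_hook(rgb_net.features[k], acts, "rgb")
    with torch.no_grad():          # the depth branch is frozen
        depth_net(depth_img)
    rgb_net(rgb_img)               # gradients flow through the RGB branch
    h_d.remove(); h_r.remove()
    return F.mse_loss(acts["rgb"], acts["depth"].detach(), reduction="sum")
\end{lstlisting}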
\subsection{Training with privileged information}
This section explains the details of training the proposed architecture.
We choose a base CNN for \textit{Depth\_Net} and \textit{RGB\_PI\_Net} for 3D hand pose estimation. For the base model, we build on the feature extraction layers of the Convolutional Pose Machine (CPM)~\cite{wei2016convolutional} with two fully connected layers to regress a 63-dimensional 3D hand pose vector (21 joints).
In this initial stage, we treat these external depth images as privileged information. Our \textit{Depth\_Net} is first trained independently on the BigHand2.2M~\cite{yuan2017bighand2} dataset, which has 2.2 million depth images fully annotated with 21 joints.
After training, the model is further trained on the depth images of a smaller dataset (\eg, Stereo~\cite{zhang20163d} and RHD~\cite{zimmermann2017learning} datasets) that has fully annotated paired RGB and depth images.
The \textit{RGB\_Net} is initially trained on the RGB images from the same dataset.
When the initial training is completed for both CNN models, we freeze the parameters of the \textit{Depth\_Net} and start training \textit{RGB\_PI\_Net} with privileged information.
In this stage, our privileged information is the paired depth images, and comes into use in the form of the middle layer activations of the \textit{Depth\_Net}.
During the privileged training stage, we want the \textit{RGB\_PI\_Net}'s middle level layer's activations to match the activations of the corresponding layers of the \textit{Depth\_Net}.
We have two losses to optimize: (1) \textit{Loss\_Inter} (Eqn.~\ref{equ:interloss}) is used to match the middle layer activations of the two CNN models. (2) \textit{Loss\_C} (see Figure~\ref{fig:PI_cnn_model}) is the $L2$ loss between the ground truth and the estimated 3D hand pose. Here we use a joint loss:
\begin{equation}
Loss\_Joint(k)= Loss\_Inter(k) + \lambda \cdot Loss\_C,
\label{equ:jointloss}
\end{equation}
where $\lambda$ balances the two losses: a larger value of $\lambda$ means less supervision is taken from the privileged information, while a smaller value means the model depends more on it. We set $\lambda$ to 100 in all experiments.
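A compact sketch of one PI-training step under the joint loss of Equation~\ref{equ:jointloss} is shown below; it assumes PyTorch-style networks that return both the 63-dimensional pose vector and the selected mid-level activation, and all names are illustrative.
\begin{lstlisting}[language=Python, frame=top, frame=bottom]
import torch
import torch.nn.functional as F

LAMBDA = 100.0  # balance between Loss_Inter and Loss_C (Eqn. 2)

def pi_training_step(rgb_pi_net, depth_net, optimizer,
                     rgb_img, depth_img, pose_gt):
    """One optimization step of RGB_PI_Net with a frozen Depth_Net.

    Both networks are assumed to return (pose, activation), where `pose`
    is the 63-D joint vector and `activation` the k-th layer feature map.
    """
    depth_net.eval()
    with torch.no_grad():                        # privileged branch frozen
        _, act_depth = depth_net(depth_img)

    pose_pred, act_rgb = rgb_pi_net(rgb_img)
    loss_inter = F.mse_loss(act_rgb, act_depth, reduction="sum")  # Eqn. 1
    loss_c = F.mse_loss(pose_pred, pose_gt, reduction="sum")      # L2 pose loss
    loss_joint = loss_inter + LAMBDA * loss_c                     # Eqn. 2

    optimizer.zero_grad()
    loss_joint.backward()
    optimizer.step()
    return loss_joint.item()
\end{lstlisting}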
\begin{figure*}[t]
\centering
\includegraphics[trim=3cm 5.5cm 5.5cm 4.3cm, clip=true,width=.9\textwidth]{Mask_PI_4.pdf}
\caption{\textbf{Treating the hand mask as privileged information}. Hand masks are used as privileged information to suppress the responses from the background area in the middle layers.}
\label{fig:PI_cnn_sec_feature}
\end{figure*}
\subsection{Foreground mask as privileged information}
In addition to the supervision from depth images, we also explore the idea of extracting hand masks from depth images and embedding the hand masks into CNN layers of \textit{RGB\_PI\_Net} to suppress the background features.
As shown in Figure~\ref{fig:PI_cnn_sec_feature}, we treat the hand mask $M_{h}$ as privileged information.
At test time, when the hand mask is not available, the CNN model is viewed as a standard CNN with convolutional layers, pooling layers and fully-connected layers, where the \textit{Loss\_Mask} is not used.
In the training stage, the foreground hand mask is introduced in the last convolutional layer, as shown in Figure~\ref{fig:PI_cnn_sec_feature}. Pixels of the mask $M_{h}$ are zero on the hand region, and one otherwise. We suppress background features by minimizing the regression loss \textit{Loss\_Mask}:
\begin{equation}
Loss\_Mask = \|A_{k}^{RGB} \odot M_{h}\|_{2}^{2} \,,
\label{equ:maskloss}
\end{equation}
where $\odot$ denotes element-wise multiplication.
By minimizing the regression loss, where the response on the hand is multiplied by zero and the response outside the hand is multiplied by one, the response from outside the hand area is suppressed, focusing the response on the hand region.
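A minimal sketch of this mask loss (Eqn.~\ref{equ:maskloss}) is given below; it assumes the binary mask is simply resized to the spatial resolution of the chosen activation map.
\begin{lstlisting}[language=Python, frame=top, frame=bottom]
import torch
import torch.nn.functional as F

def mask_loss(act_rgb, hand_mask):
    """Loss_Mask (Eqn. 3): penalize activations outside the hand region.

    act_rgb:   (B, C, H, W) activations of the chosen RGB layer.
    hand_mask: (B, 1, h, w) binary mask, 0 on the hand, 1 on the background.
    """
    # Resize the mask to the activation resolution (nearest keeps it binary).
    mask = F.interpolate(hand_mask.float(), size=act_rgb.shape[-2:],
                         mode="nearest")
    masked = act_rgb * mask        # element-wise product of Eqn. 3
    return (masked ** 2).sum()     # squared L2 norm of background responses
\end{lstlisting}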
\begin{table}[t]
\centering
\small
\resizebox{\columnwidth}{!}{
\begin{tabular}{lrrrcl}
\toprule
\bf Dataset & \bf No. Training & \bf No. Test & \bf No. Joints & \bf Annotation & \bf Type \\
\midrule
Stereo~\cite{zhang20163d} & 15,000 & 3,000 & 21 & 2D, 3D & real \\
RHD~\cite{zimmermann2017learning} & 41,258 & 2,728 & 21 & 2D, 3D & synthetic \\
Dexter-Object~\cite{sridhar2016real} & - & 3,111 & 5 (tips) & 2D, 3D & real\\
\bottomrule
\end{tabular}}
\caption{\textbf{Public datasets used in our experiments.} }
\label{tab:datasets}
\vspace{-5mm}
\end{table}
\section{Experiments}
We carried out experiments on both 3D and 2D hand pose estimation from RGB images.
Our experiments are conducted on three public RGB-D datasets: the RHD dataset~\cite{zimmermann2017learning}, the Stereo dataset~\cite{zhang20163d}, and the Dexter-Object dataset~\cite{sridhar2016real}, as shown in Table~\ref{tab:datasets}.
The RHD dataset is created synthetically and contains 41,258 training and 2,728 test images, with a resolution of $320 \times 320$. Each pair of RGB and depth images contains 3D annotations for 21 hand joints, and intrinsic camera parameters. The RHD dataset is built from 20 different subjects performing 39 actions. The training set has 16 subjects performing 31 actions, while the test set has 4 subjects performing 8 actions. The dataset contains diverse backgrounds sampled from 1,231 Flickr images.
The Stereo~\cite{zhang20163d} dataset is a real RGB-D dataset, which has 18,000 pairs of RGB and depth images with a resolution of $640 \times 480$ pixels. Each pair is fully annotated with 21 joints. The dataset contains six different backgrounds representing different difficulties (\eg, textured/textureless, dynamic/static, near/far, highlights/no-highlights). For each background, there are two sequences, each containing 1,500 image pairs. The dataset is manually annotated. In our experiments, we follow the evaluation protocol of~\cite{zimmermann2017learning}, \ie, we train on 10 sequences (15,000 images) and test on the remaining 2 sequences (3,000 images).
The Dexter-Object~\cite{sridhar2016real} dataset contains 3,111 images of two subjects performing manipulations with a cuboid. The dataset provides RGB and depth images, but only fingertips are annotated. The RGB images have a resolution of $640\times320$ pixels. Due to the incomplete hand annotation, we use this dataset for cross-dataset generalization.
During testing on a GTX 1080 Ti, the network forward steps take 6ms for 3D pose estimation and 8ms for the 2D case. The image cropping and normalization are the same as in~\cite{zimmermann2017learning}. To crop the hand region, we use ground truth annotations to obtain an axis-aligned crop, resized to 256$\times$256 pixels by bilinear interpolation. Examples are shown in the first row of Figure~\ref{fig:feature_activations}.
For 3D hand pose estimation, we use the root joint's world coordinates and the hand's scale to normalize the results.
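A sketch of this preprocessing is shown below (NumPy/OpenCV style); the crop padding and the choice of reference bone for the hand scale are illustrative details not fixed by the paper.
\begin{lstlisting}[language=Python, frame=top, frame=bottom]
import numpy as np
import cv2

def crop_and_resize(image, joints_2d, out_size=256):
    """Axis-aligned crop around the annotated 2D joints, bilinear resize."""
    x0, y0 = np.floor(joints_2d.min(axis=0)).astype(int)
    x1, y1 = np.ceil(joints_2d.max(axis=0)).astype(int)
    crop = image[max(y0, 0):y1, max(x0, 0):x1]
    return cv2.resize(crop, (out_size, out_size),
                      interpolation=cv2.INTER_LINEAR)

def normalize_pose(joints_3d, root_idx=0):
    """Root-relative, scale-normalized 3D pose for a (21, 3) array."""
    rel = joints_3d - joints_3d[root_idx]
    # Use the length of one reference bone as the hand scale (illustrative).
    scale = np.linalg.norm(joints_3d[root_idx + 1] - joints_3d[root_idx])
    return rel / scale
\end{lstlisting}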
\subsection{3D hand pose estimation from RGB}
In this section, we investigate the usefulness of depth images to improve the performance of 3D hand pose estimation from an RGB image.
Our base CNN model is built upon the feature extraction layers of the Convolutional Pose Machine (CPM)~\cite{wei2016convolutional} with two fully connected layers. The final output is a 63-dimensional vector encoding the 3D locations of the 21 joints. Specifically, our base CNN model contains 14 convolutional layers, 4 pooling layers, and 2 fully-connected layers.
At the training stage, we have access to paired RGB and depth images.
Initially the \textit{Depth\_Net} is trained on \textit{BigHand2.2M}~\cite{yuan2017bighand2}.
We continue to train the \textit{Depth\_Net} using the depth images from the small dataset, \eg, Stereo dataset or RHD dataset. We train the \textit{RGB\_Net} with the RGB images from the small dataset.
When the initial training is completed, we start PI-training with the paired RGB and depth images. We freeze the weights of the \textit{Depth\_Net}, add the intermediate regression loss \textit{Loss\_Inter} between the mid-level features of \textit{Depth\_Net} and \textit{RGB\_PI\_Net}, and then continue the training of \textit{RGB\_PI\_Net} by minimizing the joint loss $Loss\_Joint$. We apply the intermediate loss to the last convolutional layers of both branches, where the parameter $k$ is set to 18 in Equation~\ref{equ:interloss} and Equation~\ref{equ:jointloss}.
\begin{figure*}[t]
\includegraphics[trim=2.9cm 2cm 5.5cm 4cm, clip=true, width=0.32\textwidth]{PCK_allowed_RHD_3d_self_v2-eps-converted-to.pdf}
\includegraphics[trim=2.9cm 2cm 5.5cm 4cm, clip=true, width=0.32\textwidth]{PCK_allowed_Stereo_3d_self_2-eps-converted-to.pdf}
\includegraphics[trim=2.9cm 2cm 5.5cm 4cm, clip=true, width=0.32\textwidth]
{sota_comparison_Stereo_3-eps-converted-to.pdf}
\hfill
\includegraphics[trim=2.9cm 2cm 5.5cm 4cm, clip=true, width=0.32\textwidth]{PCK_allowed_RHD_2d_self-eps-converted-to.pdf}
\includegraphics[trim=2.9cm 2cm 5.5cm 4cm, clip=true, width=0.32\textwidth]{PCK_allowed_Stereo_2d_self-eps-converted-to.pdf}
\includegraphics[trim=2.9cm 2cm 5.5cm 4cm, clip=true, width=0.32\textwidth]{sota_comparison_RHD_Dexter-eps-converted-to.pdf}
\hfill
\caption{\textbf{Results on the Stereo and RHD datasets for 3D and 2D hand pose accuracy}. \textit{The top row shows comparisons of 3D hand pose accuracy; the bottom row shows comparisons of 2D hand pose accuracy. Top-left: self-comparison on the RHD dataset; top-middle: self-comparison on the Stereo dataset; top-right: comparison with the state of the art on the Stereo dataset. Bottom-left: self-comparison on the RHD dataset; bottom-middle: self-comparison on the Stereo dataset; bottom-right: comparison with the state of the art on the Dexter-Object dataset.}}
\label{fig:sota_com}
\end{figure*}
\textbf{Effect of PI-Learning:}
We conduct experiments with the two baseline CNNs and the CNN after PI training; see the accuracy curves in Figure~\ref{fig:sota_com} (top-left plot).
Our networks estimate only relative 3D pose from a cropped RGB image patch containing the hand. To yield 3D hand pose in world coordinates, we follow a procedure similar to~\cite{zimmermann2017learning}, \ie, we add the absolute position of the root joint to our estimated results.
For comparison, we use the Percentage of Correct Keypoints (PCK) over a varying error threshold.
Training with depth data significantly improves the performance of the RGB-based network, narrowing the gap to the depth-based network.
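For reference, the 3D PCK curves in Figure~\ref{fig:sota_com} can be computed as in the short sketch below, where \texttt{preds} and \texttt{gts} hold per-frame joint positions in millimetres; the threshold range in the usage comment is illustrative.
\begin{lstlisting}[language=Python, frame=top, frame=bottom]
import numpy as np

def pck_curve(preds, gts, thresholds):
    """Percentage of Correct Keypoints over a range of error thresholds.

    preds, gts: arrays of shape (N, 21, 3) in millimetres.
    thresholds: iterable of distance thresholds in millimetres.
    """
    errors = np.linalg.norm(preds - gts, axis=-1)   # (N, 21) joint errors
    return [float((errors <= t).mean()) for t in thresholds]

# Example: PCK evaluated from 20 mm to 50 mm in 5 mm steps.
# curve = pck_curve(preds, gts, thresholds=range(20, 51, 5))
\end{lstlisting}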
\textbf{Comparison with the state of the art:}
We compare our results with state-of-the-art methods, including PSO~\cite{oikonomidis2011efficient}, ICPPSO~\cite{qianrealtime}, Zhang \etal~\cite{zhang20163d}, Z\&B~\cite{zimmermann2017learning}, GANerated~\cite{mueller2017ganerated}, Cai \etal~\cite{Caiweakly}, Spurr \etal~\cite{spurr2018cross}, Iqbal \etal~\cite{iqbal2018hand}, Panteleris \etal~\cite{Panteleris2018}, see Figure~\ref{fig:sota_com} (top-right plot).
Our method outperforms all existing state-of-the-art methods, including Z\&B~\cite{zimmermann2017learning} and GANerated~\cite{mueller2017ganerated}. Both used extra training data: \cite{zimmermann2017learning} trained on both the Stereo (real) and RHD (synthetic) datasets, while \cite{mueller2017ganerated} trained on synthetic (GANerated) images. With our privileged training strategy, the proposed method uses less RGB training data yet achieves the best performance, significantly outperforming both methods.
\begin{figure*}[t]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, clip=true, width=0.8\textwidth]{activiations_2.PNG}
\caption{\textbf{Feature activation maps}. \it (top row) input images, (row 2) activations of the RGB network trained on RGB only, (row 3) activations of the RGB network trained with additional depth data, (row 4) activations of the depth network, and (row 5) depth images. During training, depth data helps the RGB network focus on the region of interest, reducing the influence of background regions.}
\label{fig:feature_activations}
\end{figure*}
\begin{figure*}[t]
\includegraphics[trim=0cm 8cm 0cm 0cm, clip=true, width=1.0\textwidth]{testing_loss_Stereo.pdf}
\caption{\textbf{Loss function evolution on the Stereo dataset~\cite{zhang20163d}.} \it The 3D hand pose loss (left plot) of the RGB network on the test data converges at iteration 15,000; we continue training for another 5,000 iterations. From iteration 20,000, we fix the depth network parameters, connect the mid-level features of the RGB and depth networks, and continue training by minimizing the joint loss (right plot) using RGB-D image pairs. The intermediate loss (middle plot) penalizes the difference between the mid-level features of the RGB and depth networks. The 3D hand pose loss of the RGB network and the joint loss stop decreasing at around iteration 30,000.}
\label{fig:testing_loss}
\end{figure*}
\textbf{Feature activation maps:}
To give more intuition on the effectiveness of training with additional privileged information, we visualize the mid-level feature activations of the three networks. Feeding an RGB image into each network, we aggregate all mid-level feature maps into a single feature map by taking the maximum across all feature maps (similar to the maxout operation~\cite{goodfellow2013maxout}). As shown in Figure~\ref{fig:feature_activations}, training with privileged information helps to select more representative features, where the visualized activations are concentrated on the foreground (the hand).
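The aggregation used for these visualizations amounts to a channel-wise maximum followed by normalization, as sketched below.
\begin{lstlisting}[language=Python, frame=top, frame=bottom]
import torch

def aggregate_activation_map(features):
    """Collapse a (C, H, W) feature tensor into one (H, W) map by taking
    the maximum across channels (maxout-style aggregation)."""
    act_map, _ = features.max(dim=0)
    act_map = act_map - act_map.min()      # normalize to [0, 1] for display
    return act_map / (act_map.max() + 1e-8)
\end{lstlisting}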
\textbf{Loss function evolution:} We record the losses during training on the Stereo dataset, see Figure~\ref{fig:testing_loss}. The 3D hand pose loss of the RGB network on the test data converges at iteration 15,000; we continue training for another 5,000 iterations. From iteration 20,000, we fix the depth network parameters, connect the mid-level features of the RGB and depth networks, and continue training by minimizing the joint loss using RGB-D image pairs. The intermediate loss penalizes the difference between the mid-level features of the two networks. The 3D hand pose loss of the RGB network and the joint loss stop decreasing at around iteration 30,000.
\begin{figure*}[t]
\includegraphics[trim=0cm 0cm 0cm 0cm, clip=true, width=1.0\textwidth]{drawing_qualitative_stereo_3d.pdf}
\caption{\textbf{Qualitative 3D pose estimation results}. \it Comparing the outputs of the RGB network (blue, second row), the RGB network with PI training (red, third row), and the depth network (magenta, bottom row) with the ground truth 3D pose (green) on the Stereo dataset. The top row shows the original images.}
\label{fig:quali_stereo_3dpose}
\end{figure*}
\begin{figure*}[t]
\includegraphics[trim=0cm 0cm 0cm 0cm, clip=true, width=1.0\textwidth]{rhd_2d_pi_3.pdf}
\caption{\textbf{Qualitative 2D pose estimation results}. \it Comparing the outputs of (middle) the RGB network and (bottom) the RGB network with PI training on the RHD dataset. The top row shows the original images.}
\label{fig:quali_rhd_2}
\end{figure*}
\begin{figure*}[t]
\includegraphics[trim=0cm 0cm 0cm 0cm, clip=true, width=1.0\textwidth]{Stereo.png}
\caption{\textbf{Qualitative 2D pose estimation results on Stereo dataset}. \it Comparing the outputs of (top) the RGB network and (bottom) the RGB network with PI training.}
\label{fig:quali_stereo_2}
\end{figure*}
\subsection{2D hand pose estimation from RGB}
In this section, we choose CPM~\cite{wei2016convolutional} as the base CNN model, which has shown strong performance for 2D human pose estimation~\cite{wei2016convolutional} and 2D hand pose estimation~\cite{zimmermann2017learning}.
Results are reported in Table~\ref{tab:RHDbaselines}, where `EPE' stands for the `average end point error' in pixels; an end point is a hand joint. Qualitative examples are shown in Figure~\ref{fig:quali_rhd_2} and Figure~\ref{fig:quali_stereo_2}.
In this part of the experiments, we treat the hand mask as privileged data; the CNN base model is CPM~\cite{simon2017hand}. The baseline is obtained by the normal training procedure, \ie, feeding the pre-processed hand image into CPM and obtaining the 2D hand pose by finding the maximum location in each of the 21 heatmaps. For training with privileged information, we randomly select a certain proportion of the RGB training data and use the hand masks, obtained by thresholding the depth images, in \textit{Loss\_Mask} to suppress the background responses. In Table~\ref{tab:RHDbaselines}, 0.2 and 0.8 denote the proportion of training images for which \textit{Loss\_Mask} is used during training for 2D hand pose estimation.
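The hand masks are obtained by thresholding the paired depth image; a minimal sketch with an illustrative depth range is:
\begin{lstlisting}[language=Python, frame=top, frame=bottom]
import numpy as np

def hand_mask_from_depth(depth, near=100, far=700):
    """Binary mask with 0 on the hand and 1 on the background (cf. Eqn. 3).

    depth:     depth image in millimetres, invalid pixels set to 0.
    near, far: illustrative depth range assumed to contain the hand.
    """
    foreground = (depth > near) & (depth < far)
    return (~foreground).astype(np.uint8)   # 0 = hand, 1 = background
\end{lstlisting}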
\textbf{Performance on hand-object interaction dataset:}
In Figure~\ref{fig:sota_com} (bottom-right plot), we show a comparison in terms of 2D PCK (in pixels) on the Dexter-Object~\cite{sridhar2016real} dataset. \textit{Z\&B\_Joint} denotes the method of Z\&B~\cite{zimmermann2017learning} trained on both RHD and Stereo datasets, which is better than \textit{Z\&B\_Stereo} (trained on Stereo) and \textit{Z\&B\_RHD} (trained on RHD). Our approach outperformed \textit{Z\&B\_Joint} even though we used less RGB training data.
\begin{table}
\centering
\small
\resizebox{\columnwidth}{!}{
\begin{tabular}{llllll}
\toprule
\bf Method & \bf Testing & \bf Training & \bf EPE median & \bf EPE mean \\
\midrule
Z\&B~\cite{zimmermann2017learning} & RHD & RHD+Stereo & 5.001 & 9.135 \\
Baseline RGB & RHD & RHD & 3.708 & 7.841 \\
Baseline Depth & RHD & RHD & 2.087 & 3.902 \\
RGB + PI training & RHD & RHD & 2.642 & 5.223 \\
\hline
Z\&B~\cite{zimmermann2017learning} & Stereo & RHD+Stereo & 5.522 & 5.013 \\
Baseline RGB & Stereo & Stereo & 5.250 & 6.533 \\
Baseline Depth & Stereo & Stereo & 4.775 & 5.883 \\
RGB + PI training (0.2)& Stereo & Stereo & 5.068 & 6.280 \\
RGB + PI training (0.8)& Stereo & Stereo & 4.515 & 5.801 \\
\hline
Z\&B~\cite{zimmermann2017learning} & Dexter-Object & RHD+Stereo & 13.684 & 25.160 \\
Baseline RGB & Dexter-Object & RHD & 13.360 & 18.278 \\
RGB + PI training & Dexter-Object & RHD & 11.809 & 14.593 \\
\bottomrule
\end{tabular}}
\caption{\textbf{2D Hand Pose Accuracy}. \it Results when training on the RHD and Stereo datasets.
}
\label{tab:RHDbaselines}
\vspace{-5mm}
\end{table}
\section{Conclusions}
In this paper, we proposed a framework for 3D hand pose estimation from RGB images, with the training stage aided with privileged information, {\it i.e.} depth data.
To the best of our knowledge, our method is the first to introduce the concept of using privileged information (depth images) to support training an RGB-based 3D hand pose estimator.
We proposed three ways to use the privileged information: as external training data for a depth-based network branch, as paired depth data to transfer supervision from the depth-based network to the RGB-based network, and as a hand mask to suppress background activations in the RGB-based network.
Our training strategy can be easily embedded into existing pose estimation methods. As an illustration, we estimate 2D hand pose from an RGB image using a different CNN model. Results on 2D hand pose estimation using our training strategy improve over state-of-the-art methods for 2D hand pose estimation from RGB input.
During testing, when only RGB images are available, our model significantly outperforms the same model trained only using RGB images.
This training strategy can be incorporated into existing models to boost the performance of hand pose estimation from an RGB image.
One limitation of our method is the difficulty of handling occlusion by objects, which can be addressed by systematically adding synthetic objects in the depth data (privileged information).
\textbf{Acknowledgement}: This work was supported
by Huawei Technologies.
{\small
\bibliographystyle{ieee}
\section{Introduction} \label{sec:intro}
The rapid evolution of user-centric services and the increase in complexity of communication networks have inspired concerted efforts towards their management~\cite{ietfautnetrfc7576}. This has driven a paradigm shift in how service performance is evaluated, with complex, semantic \glspl{kpi} replacing communication \glspl{kpi} such as data rate or delay. The design and modeling of communication services is a key requirement and a renewed challenge to ensure the delivery of expected performance to service consumers.
\Gls{ibn} is considered a key enabler for the management of next generation networks and provisioning of services~\cite{intentdefsrfc9315}.
The management of modern networks focuses on determining a suitable set of actions required for the provisioning of a service as well as the acquisition of required resources to fulfill its requirements. In this process, consumer feedback provides the necessary adaptability for a dynamic management model for networked services. The coordination of the network and service-related information to achieve personalized provisioning strategies is a key design objective for \gls{ibn}.
\Gls{ibn} proposes a solution where consumers are able to interact with the network administrative processes for the requisition and deployment of desired services. Interested users may seek and obtain the required service from many providers by utilizing high-level descriptive language without the need to be informed about low-level network deployment configurations. A key design goal with \gls{ibn} management is the creation of a contextual model to describe, comprehend, and deploy a high-level intent as a service.
This process involves mapping functionalities between the domain-specific and agnostic representations of user input for network management decisions.
The processing of a domain agnostic context model is performed by a network administrator with closed-loop information exchange with the deployment infrastructure and intent-generating user. The user input is also refined to create dynamic network control policies after deliberation with the available service offerings and other stakeholders. These policies can be translated into domain-specific configurations based on the underlying infrastructure.
The process of intent consumption by the network requires a standardized intent representation model that also aids in maintaining interoperability between different domains. The \gls{3gpp} \cite{3gppintent28312} and \gls{tmf} \cite{tmforum-intent-common-model-tr290} have defined intent description models consisting of a set of expectations, each of which is defined in terms of a set of objectives, goals, and associated restrictions for the network. Within an expectation, the target parameters and all of their characteristics can be defined in a domain-agnostic manner.
Intent language models have been proposed with pre-defined vocabulary inspired by network or operational named entities. The choice of a natural language~\cite{intent-lang-nemo, intent-lang-nile} or domain specific language~\cite{ietf-aut-net-rfc7575} for intent expression remains an interesting and challenging direction with tradeoffs in either case. However, the representation of intent with domain-independent data models remains a fundamental challenge.
The intent representation problem requires defining an intent design model and data representation models to organize information regarding the network and service-level deliverables needed to fulfill user-generated intents. In order to aid the consumption of intents in different domains, these data models constitute the necessary organization of knowledge from different sources of information. This paper investigates a similar approach by considering the established data organization frameworks --- \gls{owl} and \gls{rdf}.
\begin{figure*}[!htbp]
\includegraphics[width=1\textwidth]{Resources/Images/kg-combined-labeled.png}
\caption{Proposed intent model with service, resource and KPI parameter extensions.}
\label{image: tmf-icm-mcptt-nmcptt}
\end{figure*}
This work builds on \gls{tmf} and \gls{3gpp} definitions of intent and associated relationships within the intent lifecycle to organize different sources of data in the networking and service orchestration domains. The proposed knowledge organization framework utilizes ontologies for the intent, resource, and service models for \gls{mc} and \gls{nmc} services. These models are used to propose an intent processing engine with contextual reasoning of the service requirements for intent translation into specific service intents for deployment via a slicing engine. The contributions of the paper are listed below:
\begin{itemize}
\item Implementation of an intent ontology and description model as a basic intent model by extending the standardized model proposed by \gls{tmf}~\cite{tmforum-intent-common-model-tr290}. This includes the definition of an intent model that is domain agnostic and provides sufficient flexibility for different application use cases.
\item Ontology and service description models to represent mission-critical \gls{ps} and non-mission critical services using the standardized performance metrics and \glspl{slo}\cite{3gpp-5gsystem-23-501}. We make these models publicly available~\cite{intentRDF23}, and they aid in the processing and translation of intents, leading to the refinement and deployment of requested services.
\item A knowledge graph-based IBN framework for processing and translation of intents into domain-specific service requests and deployment of the services
\item Mapping between the intents and \gls{ns-3} in a non-standalone \gls{5g} architecture for validating the deployment of intents as services.
\end{itemize}
The rest of the paper is organized as follows. Section \ref{sec:know-models} presents the proposed knowledge base consisting of ontological descriptions for the \gls{icm} and service extension models to be utilized for intent processing and translation. Section \ref{sec:framework} consists of the proposed context-based \gls{ibn} framework for the management and orchestration of intents as well as the requested services. Section \ref{sec: results} covers the proposed proof-of-concept orchestration framework for the deployment of intents as requested services. In addition, the results and performance evaluation of the proposed IBN framework are also presented in Section \ref{sec: results}. Section \ref{sec: conc} concludes the paper along with a discussion on the planned future activities using the proposed \gls{ibn} framework.
\section{Proposed Knowledge Models for Intents and Services}
\label{sec:know-models}
In the path towards evolved automated network management \cite{MEHMOOD2023109477}, understanding users' intent enables the automation of intelligent actions that were previously human-driven. It empowers the system to assess the circumstances and prioritize actions that shift the network's operating strategy in a more desirable direction. An intent communicates what is desired and what should be avoided, and enables the network to comprehend what is required of it. It introduces the concept of usefulness by allowing the source of the intent to convey its usefulness model. This allows the autonomous system to make informed decisions regarding the events it perceives and the actions it intends to take. As a result, an intent-driven system is not limited to following human-defined policies but can dynamically create and modify its strategies.
An intent can essentially be treated as a knowledge object with an associated lifecycle that is dynamically managed by intent management functionality in the communication network. This knowledge must be provided to the system in a way that automated reasoning processes can convert into suitable system actions. This means that knowledge about the consumer's expectations must be communicated, disseminated, and regulated in a standardized manner. The knowledge required to be expressed in an intent is described as follows:
\begin{itemize}
\item \textit{Functional requirements:} consist of the type of service to be delivered along with the expected functionality for the users;
\item \textit{Non-functional requirements:} consist of the information required to explain expectations of the required service e.g., \glspl{kpi} and associated metrics;
\item \textit{Restrictions and preferences:} convey the information related to the service user domain or any geographical constraints for any service or consumers.
\end{itemize}
The representation of intent knowledge must follow a structured approach, and knowledge graphs \cite{know-graphs} provide an excellent information organization tool.
\subsection{Knowledge Graphs in \gls{ibn}}
Intents are considered knowledge objects with well-defined expressions based on intrinsic vocabulary \cite{intentdefsrfc9315} and semantic roles for different entities like requirements, objectives, and associated constraints. A key goal in any intent model is the standardization of the intent model to provide interoperability for multi-domain and application use cases for \gls{ibn}. Utilizing a well-known standard knowledge model representation, such as \gls{rdf} and \gls{owl}, meets the required objective. Hence, the definition of an intent ontology is the first step in designing an \gls{ibn} framework consisting of relevant concepts arranged as an ontology model.
Previous works addressed the design of knowledge graph-based intent-driven networking \cite{intent-ont-kid,intent-ont-indira}. However, these proposals lack a standardized intent model, since each knowledge graph defines its own specific information classes.
The standardized intent common model (ICM) by the \gls{tmf} \cite{tmforum-intent-common-model-tr290} consists of the knowledge entities necessary to describe any type of intent, ranging from domain-specific to multi-domain high-level intents. We extend the \gls{icm} with additional information and implement it as the knowledge graph depicted in \figurename~\ref{image: tmf-icm-mcptt-nmcptt}. The proposed extensions are explained in the following subsections.
\import{./Resources/Tables}{services}
\subsection{Proposed Service and Resource, and Service KPI Extension Models}
A communication service is expected to have a distinct set of associated deliverable metrics and \glspl{kpi} from a variety of standard well-defined performance metrics \cite{3gpp-5gkpis-28-554}. Hence, a service model consists of a set of expectations that need to be fulfilled by the service provider for the service consumers as per \gls{sla}. These sets of expectations are defined as distinct \glspl{slo} mapped onto a set of parameters such as availability, reliability, latency, and bandwidth.
We utilize the \gls{icm} as the basis for our proposed knowledge base, consisting of an implementation of the intent model and service extension models for the \gls{5g} \gls{mc} and \gls{nmc} services. The proposed extension models ensure compatibility by complementing several information classes, including \textit{icm:Expectation}, \textit{icm:Parameter}, and \textit{icm:Target} from the \gls{icm}. These extended information classes are utilized to incorporate domain-specific data for the services and resources for intent-based deployment in the network. The \gls{rdf} classes in the service extension models are described as follows:
\subsubsection{Service and Resource Model}
The resource model for the \gls{icm} is extended via the \textit{icm:Target} class. This is the intended target for the deployment of an intent objective (for example, a service). In the proposed framework, this represents the necessary resource type (i.e., targetResource:GBR and targetResource:NGBR) for a specific category of services as listed in Table~\ref{tab:services}. Two possible subclasses are defined in the service extension models for \gls{mc} and \gls{nmc} services requiring \gls{gbr} and \gls{ngbr} resources (e.g.,\ \textit{service:McpttGBRService} and \textit{service:NonMcpttNGBRService}).
\subsubsection{Service KPI Model}
In our proposed \gls{kpi} knowledge graph, we utilize latency, packet error rate, priority, and 5G QoS identifier (5GQI) as service \gls{kpi} parameters included in the \textit{icm:PropertyParameter} subclass of the \gls{icm} \gls{rdf} model. These parameters contain the values as \gls{rdf} literal terms for different services requiring the \gls{gbr} and \gls{ngbr} resources in subclasses \textit{kpi:latency}, \textit{kpi:packeterrorrate}, \textit{met:priority}, and \textit{met:qi5G}, respectively. The \gls{kpi} extension models for the \gls{mc} and \gls{nmc} services, shown in \figurename~\ref{image: tmf-icm-mcptt-nmcptt}, are based on the \glspl{kpi} from the \gls{3gpp} Table-5.7.4-1 \cite{3gpp-5gsystem-23-501}.
A detailed performance metric specification for these services is provided in Table~\ref{tab:services}. These values can be extracted using SPARQL \cite{sparql-w3c} queries during the processing and translation of intents. Moreover, the information regarding expected resource type is also modeled in the service models in order to enable the orchestration of relevant resources from the available network resources. A generic framework for intent processing using the proposed knowledge base is presented in Section \ref{sec:framework}.
\subsection{Querying the Knowledge Base}
SPARQL is one of the three fundamental enablers, alongside \gls{rdf} and \gls{owl}, designed for querying graph-based data. In SPARQL, queries focus on what the user wants to know about the data rather than on the structure of the data. The SPARQL syntax utilizes specific keywords to manipulate RDF graphs and the information stored in the knowledge base. For example, the keyword \textit{SELECT} matches the required data values from the RDF graphs, \textit{WHERE} describes information (relations and properties) about the queried data, and \textit{FILTER} modifies the data query.
These common features between RDF graphs and SPARQL query structure motivate us to use SPARQL syntax to perform queries for the extraction of relevant information from the knowledge base. This is done successively for the extraction of intent templates, services, and resource information from the knowledge base, as well as the reported intent compliance information.
A sample of the SPARQL query to retrieve service specific \glspl{kpi} from the service extension model for a generic service \textbf{serv} with associated \glspl{kpi} \textbf{param} is shown in Listing ~\ref{lst:serv-kpi-query}.
\begin{lstlisting}[language=SPARQL, frame=top, frame=bottom, captionpos=b, caption={Sample query template for extraction of service KPIs from the knowledge base where \textbf{param} refers to the service parameter and \textbf{serv} refers to the service being queried.}, label=lst:serv-kpi-query,
]
PREFIX icm: <http://tio.models.tmforum.org/tio/ v2.0.0/IntentCommonModel/>
SELECT ?parameter ?value
WHERE {
?service ?property [ icm:valueBy [
?parameter ?value ] ] .
FILTER (?parameter = |\textbf{param}| && ?service = |\textbf{serv}|)
}
\end{lstlisting}
The query \textit{SELECT}s the service parameters from the service extension model in the knowledge base by exploiting the existing relationships between services and their relevant \glspl{kpi}. This relationship information is specified by the \textit{WHERE} keyword in the query; it is independent of the type of service but depends on the type of \gls{kpi} (for example, latency or packet error rate in the depicted query can represent the \textbf{param} variable). The \textit{SELECT, WHERE} statement pair matches the RDF triples, and the \textit{FILTER} then keeps only the matching values of \textbf{param} for the service \textbf{serv}. This query retrieves all pairs of parameters and their associated values from the current knowledge base.
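A minimal Python sketch of issuing such a query against the knowledge base with rdflib is shown below; the file name and the example IRIs are placeholders, and the template variables are here supplied as query bindings rather than by textual substitution.
\begin{lstlisting}[language=Python, frame=top, frame=bottom]
from rdflib import Graph, URIRef

kb = Graph()
kb.parse("knowledge_base.ttl", format="turtle")   # placeholder file name

KPI_QUERY = """
PREFIX icm: <http://tio.models.tmforum.org/tio/v2.0.0/IntentCommonModel/>
SELECT ?parameter ?value
WHERE {
  ?service ?property [ icm:valueBy [ ?parameter ?value ] ] .
  FILTER (?parameter = ?param && ?service = ?serv)
}
"""

def query_service_kpi(graph, service_iri, param_iri):
    """Retrieve the literal value of one KPI parameter for one service.

    service_iri, param_iri: rdflib URIRef terms identifying the service
    and the KPI parameter.
    """
    rows = graph.query(KPI_QUERY, initBindings={"serv": service_iri,
                                                "param": param_iri})
    return [(str(p), str(v)) for p, v in rows]

# Example with illustrative IRIs:
# serv = URIRef("http://example.org/service#McpttGBRService")
# param = URIRef("http://example.org/kpi#latency")
# print(query_service_kpi(kb, serv, param))
\end{lstlisting}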
\subsection{Applicability of Proposed Knowledge Graph-based Service Models}
A major advantage of the proposed domain-independent knowledge base is the feasibility to initialize new knowledge graphs to represent different concepts for the \gls{ibn} management and service orchestration. Thus, the proposed service and \gls{kpi} models can be extended for any type of service utilizing the \gls{icm} and having well-defined performance metrics as well as resource requirements.
The process for expanding the proposed intent model with new service extensions is intuitive and exemplified next:
\begin{enumerate}
\item The resource type can be specified by defining new subclasses of \textit{icm:Target}. For instance, a new subclass could be delay critical \gls{gbr} from the \gls{5g} system specifications \cite{3gpp-5gsystem-23-501}.
\item The additional specification of service \glspl{kpi} is done by extending the service \gls{kpi} model shown in \figurename~\ref{image: tmf-icm-mcptt-nmcptt}. A subclass of \textit{icm:PropertyParameter} would then be defined for each new \gls{kpi} along with respective definitions for the \textit{icm: Expectation} class.
\item A new service could then be defined using the new resource target subclass of the \textit{icm:Target} and the service \glspl{kpi}. This step consists of the recommended values for the literal bounds for each \gls{kpi} parameter for different service types.
\item The initialization of new information sources in the knowledge base is completed with the introduction of newly added parameters and service expectations in the reporting subclasses of the \gls{icm} namely \textit{icm:RequirementReporter}.
\end{enumerate}
The described extensibility is matched by SPARQL, which provides an efficient interface to the knowledge base for retrieval of information since one can design queries that remain valid even when new services or resources are added to the network. It is also intuitive to utilize SPARQL queries to introduce performance metrics of different deployed services in the form of \textit{icm:IntentReport} in order to make them accessible in the knowledge base.
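As an illustration of steps 1--3 above, the rdflib sketch below registers a hypothetical delay-critical \gls{gbr} service; the namespaces, the predicate names outside the \gls{icm}, and the KPI values are placeholders rather than normative definitions.
\begin{lstlisting}[language=Python, frame=top, frame=bottom]
from rdflib import Graph, Literal, Namespace, RDF, RDFS

ICM = Namespace("http://tio.models.tmforum.org/tio/v2.0.0/IntentCommonModel/")
SRV = Namespace("http://example.org/service#")         # placeholder namespace
KPI = Namespace("http://example.org/kpi#")              # placeholder namespace
TGT = Namespace("http://example.org/targetResource#")   # placeholder namespace

kb = Graph()
kb.parse("knowledge_base.ttl", format="turtle")          # existing models

# Step 1: new resource target as a subclass of icm:Target.
kb.add((TGT.DelayCriticalGBR, RDFS.subClassOf, ICM.Target))

# Step 2: new KPI parameter as a subclass of icm:PropertyParameter.
kb.add((KPI.latency, RDFS.subClassOf, ICM.PropertyParameter))

# Step 3: new service bound to its resource target and a KPI literal
# (the predicate names used here are illustrative).
kb.add((SRV.RemoteControlGBRService, RDF.type, ICM.Expectation))
kb.add((SRV.RemoteControlGBRService, ICM.target, TGT.DelayCriticalGBR))
kb.add((SRV.RemoteControlGBRService, KPI.latency, Literal("5ms")))

kb.serialize("knowledge_base.ttl", format="turtle")      # persist the update
\end{lstlisting}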
\begin{figure*}[!htbp]
\begin{center}
\includegraphics[width=0.9\textwidth]{Resources/Images/ibn-framework.png}
\end{center}
\caption{Proposed intent processing framework with knowledge-based \gls{ibn} model.}
\label{image: ibn-framework}
\end{figure*}
\section{An Intent Processing Framework with Contextual Knowledge Base} \label{sec:framework}
Next-generation networks will produce and consume various types of information that can be used to manage the decision-making process of the network deployment and service orchestration. The proposed framework builds on a knowledge base consisting of abstractions for the service and resource level ontologies that extend the intent ontology defined by the \gls{icm}. In this section, we demonstrate how to utilize a converged knowledge base for the \gls{ibn} management and orchestration of services. The proposed extension facilitates an organized intent processing lifecycle and knowledge management framework that will help in translating high-level intent templates from the service consumers into specific technical artifacts to be used by the network administrator and orchestrator. To this end, we propose a knowledge-based \gls{ibn} framework for the operational management of multi-domain next-generation networks offering heterogeneous services as shown in \figurename~\ref{image: ibn-framework}.
\subsection{Application Layer}
The expression and definition of intents for the orchestration of different services and network functions are achieved through an application portal with a catalog of services offered to consumers. In the proposed setup, the service consumers specify the type of expected service using pre-defined options such as \lq Mission Critical Voice\rq\ at the application portal. Therefore, the purpose of the application layer is to give the user an interface to express the service requirements of an intent.
As a possible extension, the application layer can also be designed to include a natural language interface to specify intents, but that is beyond the scope of this work. The focus of this paper is on the representation of the user intents in a standardized and inter-operable data format that is consumable by different agents (human or machine) throughout the network management and orchestration lifecycles. The application layer also contributes to the knowledge base with the available service profiles of active service consumers.
\subsection{Intent Layer}
The intent layer is responsible for providing context regarding the expected service. The input specification by the user applications is utilized to create an \gls{rdf} graph of intents by following the \gls{tmf}'s \gls{icm}. This model consists of the bare minimum components of intent for operational, monitoring, and reporting purposes. The operational part consists of the expected \glspl{slo} and the acceptable range of performance metric values. The monitoring part is concerned with the fulfillment of intents by the network functions. The reporting part defines the set of \glspl{slo} to consider while monitoring the lifecycle of an intent after deployment.
A user's intent is refined by adding required details regarding the construction of the service intent through queries to the intent model for service-related knowledge graph fields. Starting from the \gls{icm} intent template, extensions are added to create an empty service intent with the required expectations.
An example of a service intent modeled from a service requested by a user is depicted in \figurename~\ref{image: Intentlayer-intent}. It includes the proposed service and resource extension models with knowledge facts describing the expectations of the service in terms of \textit{icm:DeliveryExpectation} and \textit{icm:PropertyExpectation}, corresponding to fields from the \gls{icm}. After verification, the requested services and their respective parameters are embedded in the \gls{icm} to complete the service intent. There can be a single service intent requesting multiple services or an individual service intent for each service.
The intent monitoring and reporting objectives are achieved through the knowledge base subclasses of the \textit{icm:RequirementReporter} realized as \textit{icm:IntentReport} and \textit{icm:ExpectationReport} from the \gls{icm}. The \glspl{slo} expressed through the extension models of \textit{icm:Expectation} and \textit{icm:Parameter} classes help in generating these periodic reports for the deployed intents. The report consists of several events being provided to the intent monitoring engine with an updated status of the intent. These events represent the state of the intent throughout the intent management lifecycle. Key intent state events are \textit{icm:IntentStateReceived, icm:IntentStateCompliant, icm:StateDegraded, icm:StateUpdated} and \textit{icm:StateFinalized}.
\begin{figure*}[!htbp]
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{Resources/Images/Intent-layer-intent-vis.png}
\caption{Intent Layer}
\label{image: Intentlayer-intent}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{Resources/Images/Intent-layer-network-vis.png}
\caption{Network Layer}
\label{image: NetworkLayer-network}
\end{subfigure}
\caption{Intent State and Updates in different layers.}
\end{figure*}
\subsection{Network Layer}
The network layer in the proposed \gls{ibn} framework consists of the processing of service intents and converting them into network intents readily deployed using a service orchestrator.
The service intent is forwarded to the network layer for validation and orchestration as per available resources and service models. The network intent is created by querying the service extension model from the knowledge base for \glspl{slo} for embedding in the service intents. A sample of the network intent is shown in \figurename~\ref{image: NetworkLayer-network} which consists of the required information for a service to be deployed, pending a final resource-level validation for the required resources.
The validation at the network layer detects possible conflicts with respect to the allocation of resources to the requested service \glspl{kpi}, given the state of the available resources. This is performed via the resource validation query on the service models, which checks the available resource targets and the service-specified \glspl{kpi}. After this step, the resource intents are generated as slice creation requests for the different domains within the network infrastructure.
\subsection{Resource Layer}
The resource layer consists of the available infrastructure and associated resources for the deployment of different services requested by the network intents via the service orchestrator. The resource model information is included in the knowledge base in the form of resource type, as an extension and subclass of the \textit{icm:Target} class shown in \figurename~\ref{image: tmf-icm-mcptt-nmcptt}.
The proposed extension models for the resource model in the \gls{icm} consist of two types of \lq icm:targetResources\rq, namely NGBR and GBR. These resources are utilized by the service orchestrator to request allocation requests for the deployment of network intents.
In addition, the resource layer is also responsible for keeping a catalog for the available and utilized resources, as well as the compliance of the deployed services according to the \glspl{slo}. This information is updated in the knowledge base and made available upon request for intent monitoring and reporting. Finally, the completion of service orchestration for a given network intent is indicated by the \textit{icm:IntentAccepted} state in the \gls{icm}.
\section{Performance Evaluation via Intent-Driven \gls{5g} Service Orchestration} \label{sec: results}
This section covers an \gls{ibn} proof-of-concept using \gls{ns-3} to orchestrate \gls{5g} services in a non-standalone network. The simulated setup follows the workflow at the different layers shown in \figurename~\ref{image: ibn-framework}. The knowledge base and intent processing framework are implemented in Python, where the proposed service models and associated queries are made publicly available on github \cite{intentRDF23}.
The intents are deployed using a \gls{ns-3} network consisting of \gls{5g} new radio (NR) access network and \gls{lte} core. The simulation results are collected and made available in the knowledge base for the reporting modules in the \gls{icm} in order to verify compliance of the deployed intent with \glspl{slo}.
\begin{figure}[!htbp]
\includegraphics[width=0.49\textwidth]{Resources/Images/simulation-lifecycle.png}
\caption{Intent lifecycle in the simulated network where the colors correspond to the different layers in \figurename~\ref{image: ibn-framework}.}
\label{image: intent-lifecycle}
\end{figure}
\begin{figure*}[!htbp]
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{Resources/Results/simtes-bar.png}
\caption{Non-congested}
\label{image: service-timing}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{Resources/Results/simtes-cong-bar.png}
\caption{Congested}
\label{image: service-cong-timing}
\end{subfigure}
\caption{KPI performance for deployed services.}
\end{figure*}
\subsection{Simulation Setup}
The implemented processes and queries are given in \figurename~\ref{image: intent-lifecycle} with the main steps summarized as:
\begin{enumerate}
\item Acquisition of intent from the user (\textit{application layer});
\item Recognition of relevant keywords in the user intent and \gls{icm} query from the knowledge base (\textit{intent layer});
\item Querying the service extension model and creation of a service intent (\textit{intent layer});
\item Service \gls{kpi} query from the service extension model and creation of the network intent (\textit{network layer});
\item Deployment of the network intent in the underlying \gls{ns-3} network (\textit{resource layer});
\item Intent status reporting with compliance information of deployed services to the intent-generating user and manager.
\end{enumerate}
Queries to the knowledge base occur at various stages of the intent lifecycle, indicated as \textbf{\textcircled{1}} in \figurename~\ref{image: intent-lifecycle}.
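Since the knowledge base and the intent processing framework are implemented in Python, these interactions can also be illustrated programmatically. The snippet below is a minimal, hypothetical sketch using the \textit{rdflib} library: the file name, the namespace URIs and the \textit{icm:hasParameter} property are illustrative placeholders and do not reproduce the exact extension models and queries published in \cite{intentRDF23}.
\begin{lstlisting}[language=Python, frame=top, frame=bottom, captionpos=b, caption={A minimal (hypothetical) sketch of a knowledge base query during intent processing.}, label=lst:kb-query-sketch]
from rdflib import Graph

# Hypothetical knowledge base file; the published models use the
# TM Forum ICM vocabulary together with the proposed extensions.
g = Graph()
g.parse("knowledge_base.ttl", format="turtle")

# Illustrative query: list the parameters attached to each expectation.
# NOTE: icm:hasParameter and the prefix URI are placeholder names.
SLO_QUERY = """
PREFIX icm: <http://example.org/icm#>
SELECT ?expectation ?parameter ?value WHERE {
    ?expectation a icm:Expectation ;
                 icm:hasParameter ?parameter .
    ?parameter   icm:valueBy ?value .
}
"""

for row in g.query(SLO_QUERY):
    print(row.expectation, row.parameter, row.value)
\end{lstlisting}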
\subsection{Simulation Results}
The evaluation of the proposed \gls{ibn} framework is performed using the orchestration of the services from Table I in the underlying \gls{ns-3} network. The services are orchestrated in two different scenarios: (a) normal operational conditions with the specified \glspl{kpi} and (b) a congested network with accumulated data queues for different services. We have analyzed the proposed \gls{ibn} processing framework by observing the creation of intents and the associated interactions with the knowledge base. The intent creation times and the querying times for service and network intents are observed for the different services from Table I. We also investigate the performance of the deployed services against the specified \glspl{kpi}.
\subsubsection{Service Compliance to Intents}
The proposed \gls{ibn} framework for intent processing and deployment requires repeated testing for different scenarios and types of services.
The network intents for the services in Table I are generated and deployed, and the performance measurements presented in \figurename~\ref{image: service-timing} show that the \gls{pdb}, or experienced latency, remains within an acceptable range with minimal jitter. The deployed \gls{ns-3} network is designed to support the considered services with minimal congestion from queuing and packet overload, so that their actual experienced performance can be measured. The reporting parameters continue to periodically update the intent monitoring module on the status of the services to confirm compliance with the network intents. This event is a subclass of \textit{icm:event}, and the \gls{imo}\cite{tmforum-intent-mgmt-ont-tr292} defines it as \textit{imo:StateComplies} for the compliant services.
\begin{lstlisting}[language=SPARQL, frame=top, frame=bottom, captionpos=b, caption={RDF snippet from the \textit{Intent Report} generated for the \textbf{ConvVideo} service.}, label=lst:iReport-convVideo,
]
rep:ER2_ServiceProperty a icm:ExpectationReport ;
|\color{blue}\textbf{icm:compliant}| [ a icm:PropertyParameter ;
icm:reason |\textbf{icm:ReasonMeetsRequirement}| ;
icm:reportsAbout exI:Par3_per ;
icm:valueBy [ |\color{blue}\textbf{kpi:packeterrorrate}| "0"^^xsd:string ] ] ;
|\color{red}\textbf{icm:degraded}| [ a icm:PropertyParameter ;
icm:reason |\textbf{icm:ReasonNotCompliant}| ;
icm:reportsAbout exI:Par2_latency ;
icm:valueBy [ |\color{red}\textbf{kpi: latency}| "493.1097 ms"^^xsd:string ] ] ;
icm:hasTarget catalog:ExampleService ;
icm:reportsAbout exI:Exp1_property .
\end{lstlisting}
\subsubsection{Non-Compliance of Deployed Services}
We have also deployed the intents for different services in a congested network deployment in order to visualize the feedback mechanism towards the intent monitoring engine. The results obtained for the deployed intents in this scenario are shown in \figurename~\ref{image: service-cong-timing}.
The congestion is obtained by increasing the packet sizes of the services in order to produce a backlog of packets in the queues. We observe that 9 of the 11 services deployed via intents remain compliant, the exceptions being the service types \textit{icm:ConvVideo} and \textit{icm:ProcessMonitor}, which violate their acceptable level of \lq latency\rq. The expected latency thresholds for the \textit{ConvVideo} and \textit{ProcessMonitor} services are 150 and 50 milliseconds, respectively. Furthermore, we also report the measured jitter for the deployed services to provide a more complete latency profile. We compare the compliance of the deployed services with their respective network intents by analyzing the generated intent reports.
\begin{lstlisting}[language=SPARQL, frame=top, frame=bottom, captionpos=b, caption={RDF snippet from the \textit{Intent Report} generated for the \textbf{McpttData} service.}, label=lst:iReport-mcpttData
]
rep:ER2_ServiceProperty a icm:ExpectationReport ;
|\color{blue}\textbf{icm:compliant}| [ a icm:PropertyParameter ;
icm:reason |\textbf{icm:ReasonMeetsRequirement}| ;
icm:reportsAbout exI:Par3_per ;
icm:valueBy [ |\color{blue}\textbf{kpi:packeterrorrate}| "0"^^xsd:string ] ],
[ a icm:PropertyParameter ;
icm:reason |\textbf{icm:ReasonMeetsRequirement}| ;
icm:reportsAbout exI:Par2_latency ;
icm:valueBy [ |\color{blue}\textbf{kpi:latency}| "17.6459 ms"^^xsd:string ] ] ;
icm:hasTarget catalog:ExampleService ;
icm:reportsAbout exI:Exp1_property .
\end{lstlisting}
\subsubsection{Intent Status and Reporting}
Intent reporting forms the key aspect of the intent management lifecycle and is accomplished by implementing the \textit{icm:IntentReport} class from the \gls{icm}. The service parameters from the deployed intent are utilized to populate the \textit{icm:RequirementReporter} class using the \textit{icm:ReportingParameter} subclass of \textit{icm:Parameter}. This allows the creation of the required dependencies between the deployed services and the reported performance results. The \gls{ns-3} simulation is monitored for the reporting parameters, with the observed values being added to the generated \textit{icm:IntentReport} for the different deployed services. The compliance of a service to a particular parameter is specified through the \textit{icm:compliant} and \textit{icm:degraded} properties. The reason for the evaluation of a particular service \gls{kpi} (\textit{icm:Parameter}) is specified as \textit{icm:ReasonNotCompliant} for degraded and \textit{icm:ReasonMeetsRequirement} for compliant values, respectively. For example, we provide an RDF representation from the intent reports for the \textbf{ConvVideo} and \textbf{McpttData} services in Listings~\ref{lst:iReport-convVideo} and~\ref{lst:iReport-mcpttData}, respectively.
The values for \textit{kpi:packeterrorrate} and \textit{kpi:latency} are reported to the knowledge base in the intent report for the \textit{ConvVideo} service as compliant and degraded, respectively. These values are accessible through a simple SPARQL query (similar to Listing~\ref{lst:serv-kpi-query}) for the intent user and the intent monitoring engine to validate the status of the deployed intent. In this case, the \gls{imo} defines an \textit{imo:StateDegrades} event that notifies the intent monitoring engine of the performance degradation for services failing to meet their respective \glspl{kpi}.
Similarly, the intent report for the \textit{McpttData} service shows that both observed \glspl{kpi} are compliant and the intent state is reported as \textit{imo:StateComplies} to the knowledge base and the intent monitoring module. An excerpt from the intent report is provided in Listing~\ref{lst:iReport-mcpttData}.
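As an illustration of how such a report can be consumed, the following sketch runs a SPARQL query over an intent report with the structure of Listings~\ref{lst:iReport-convVideo} and~\ref{lst:iReport-mcpttData}; the file name and namespace URIs are hypothetical placeholders, and the query is only one possible way to list the degraded parameters together with their reported values.
\begin{lstlisting}[language=Python, frame=top, frame=bottom, captionpos=b, caption={A hypothetical sketch that lists the degraded parameters of a generated intent report.}, label=lst:report-query-sketch]
from rdflib import Graph

g = Graph()
g.parse("intent_report.ttl", format="turtle")  # assumed report file

# Placeholder namespace URI; the real one follows the TM Forum ICM models.
DEGRADED_QUERY = """
PREFIX icm: <http://example.org/icm#>
SELECT ?parameter ?reason ?kpi ?value WHERE {
    ?report  a icm:ExpectationReport ;
             icm:degraded ?entry .
    ?entry   icm:reportsAbout ?parameter ;
             icm:reason ?reason ;
             icm:valueBy ?node .
    ?node    ?kpi ?value .
}
"""

for row in g.query(DEGRADED_QUERY):
    print(row.parameter, row.reason, row.kpi, row.value)
\end{lstlisting}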
\section{Conclusion} \label{sec: conc}
This paper covers the intent representation and knowledge organization aspect of the \gls{ibn}. For this purpose, we study a service orchestration use case for cellular \gls{5g} networks and observe that it is possible to design a common knowledge base for the service, intent, and resource models. We focus on the intent representation and processing in the networking domain, assuming the availability of service information mapped from the declarative user intent expression. The proposed \gls{ibn} framework provides the ability to translate intents into various network-comprehensible forms, leading to the deployment of the requested services. The deployment is done using a \gls{ns-3} based infrastructure, with the observed \glspl{kpi} remaining within the \glspl{slo}.
We intend to extend the knowledge base to include data representation from the application domain and from different types of potential intent users. This will enrich the knowledge base, making it applicable to developing control loops for network modification and expansion that take the input from the intent users into account. Moreover, the current setup requires detailed investigation using a service orchestrator that provides active control loop management, such as Open Source MANO. A 5G lab and testbed are being developed at IIK --- NTNU and will be utilized as a potential resource infrastructure for the deployment of the proposed \gls{ibn} framework as a next step.
\section{Introduction}
The coupling of critical statistical models has recently been a subject of intense study~\cite{gefen1980critical,cheraghalizadeh2018self,barat1995statistics,cheraghalizadeh2018gaussian,daryaei2012watersheds,cheraghalizadeh2017mapping,kremer1981self,najafi2016bak,najafi2016water,najafi2016monte}. One way of coupling models is to define a model on a host system with internal degrees of freedom whose arrangement is realized by another statistical model~\cite{cheraghalizadeh2017mapping,cheraghalizadeh2018gaussian,cheraghalizadeh2018self}. This type of coupling may help to understand the structure of fixed points of the combined model~\cite{najafi2018coupling}. Among the 2D critical statistical models, the Gaussian free field (GFF) has a special importance due to its connection to a wide range of statistical models, ranging from free Bosons~\cite{francesco1996conformal} to the stationary state of the Edwards-Wilkinson (EW) growth process of rough surfaces~\cite{stanley2012random}. The coupling of GFF to other models is possible via (but not restricted to) the dilution of the host media, whose pattern is tuned by the other statistical model, or by distributing some disorders in the regular lattice according to a model that yields the position pattern of the disorders. An example is the Poisson equation due to a white-noise charge disorder (which is equivalent to GFF) in the presence of metallic regions whose formation pattern is modeled by the Ising model with an artificial temperature~\cite{cheraghalizadeh2018gaussian}. Such a study is expected to be relevant to the understanding of many solid state systems which are doped with metallic particles, with a vast range of applications ranging from optical devices~\cite{miller2013nonlinear,haglund1993picosecond,shalaev1998nonlinear,cai2005superlens,kravets2010plasmonic,kim2017ultrafast,kim2018dielectric,kim2013nondegenerate,arya1986anderson} and random lasers~\cite{meng2013metal}, to sensor technology~\cite{franke2006metal} and solar cells~\cite{huang2013mitigation,yu2017effects,li2016electrostatically}.\\
Along with the experimental interests, there are also theoretical interests in the various versions of the GFF model. Many condensed matter systems are directly mapped to Coulomb gases (which, for zero background charge, correspond to the GFF model)~\cite{nienhuis1982analytical}. Among them are the XY model~\cite{villain1975j}, the Ashkin-Teller model~\cite{knops1982renormalization}, the $q$-state Potts model~\cite{knops1982renormalization,nienhuis1982analytical}, the antiferromagnetic Potts model~\cite{den1982critical}, the $O(n)$ model, the frustrated Ising models~\cite{nienhuis1982b}, vortex dynamics in superfluids~\cite{kosterlitz1974jm} and the quantum Hall systems via the plasma analogy of the wave function~\cite{girvin1999quantum}. This correspondence is not restricted to equilibrium phenomena. For example, the Edwards-Wilkinson (EW) model of the growth process in the stationary state corresponds to GFF~\cite{kondev2000nonlinear,kondev1995geometrical}. On two-dimensional (2D) regular systems, it is well known that the GFF belongs to the $c=1$ conformal field theory (CFT)~\cite{francesco1996conformal} and also to the Coulomb gas with the coupling constant $g=1$~\cite{cardy2005sle}. All of these make the main aim of the present paper, i.e. the effect of environmental disorder on the GFF, which is a long-standing problem in any condensed matter system, very important on both the theoretical and experimental sides. In the CFT language, the problem of the GFF in the presence of un-correlated metallic disorder with critical occupation is interpreted as the coupling of the $c=1$ CFT with the $c=0$ (critical percolation) CFT, which has been poorly investigated in the literature. Also the structure of the presumable fixed points in the off-critical regime is a worthwhile problem.\\
In the present paper we realize the GFF by considering the Poisson equation in the background of random white-noise charges with normal distribution, and study the effect of metallic regions whose positional configurations are modeled by the un-correlated percolation model, which is tuned by the occupation probability $p$. We study various local and global statistical observables in terms of $p$ and obtain the behavior of the critical exponents. Interestingly, we observe that there are two regimes with distinct critical behaviors: small (called UV) scales and large (called IR) scales. The IR exponents fit properly to the regular GFF model, i.e. GFF$_{p=1}$, whereas the UV exponents show a new universality class which should be characterized in more detail in the community. By analyzing the finite-size effects along with the cross-over scale between these behaviors, we propose a fixed-point structure for this problem, whose phase space is drawn at the end.\\
The paper is organized as follows: SEC.~\ref{Review} is devoted to some concepts of rough surfaces and the important statistical observables in the problem. In SEC.~\ref{RoughSurfaces}, we motivate this study and introduce and describe the model. The numerical methods and details are explored in SEC.~\ref{NUMDet}. The results are presented in SEC.~\ref{results} for local (SEC.~\ref{local}) and global (SEC.~\ref{global}) quantities. We end the paper with a discussion and conclusion in SEC.~\ref{conclusion}.
\section{A review on the scale-invariant rough surfaces}\label{Review}
It is necessary to review some features of scale-invariant 2D random fields and rough surfaces. The methods employed in these systems have wide applications, ranging from classical~\cite{kondev2000nonlinear,kondev1995geometrical,cheraghalizadeh2018gaussian} to quantum systems~\cite{Najafi2017Scale,najafi2018scaling,najafi2018percolation,najafi2018interaction}. Let $V(x,y)\equiv V(\mathbf{r})$ be the \textit{height} profile (in this paper the electrostatic potential) of a scale-invariant 2D random rough field. The main property of self-affine random fields is their invariance under rescaling \cite{barabasi1995fractal,kirchner2003critical,falconer2004fractal}. The probability distribution function of these fields transforms under $\textbf{r}\rightarrow \lambda\textbf{r}$ according to the following scaling law:
\begin{eqnarray}\label{scaleinvariance}
V(\lambda \mathbf{r}) \stackrel{d}{=} \lambda ^{\alpha} V(\mathbf{r}),
\end{eqnarray}
where the parameter $\alpha$ is the \textit{roughness} or \textit{Hurst} exponent, $\lambda$ is a scaling factor and the symbol $\stackrel{d}{=}$ means the equality of the distributions. Let us denote the Fourier transform of $V(\textbf{r})$ by $V(\textbf{q})$. The distribution of a wide variety of random fields characterized by the roughness exponent $\alpha$ is Gaussian with the form
\begin{eqnarray}
P\left\lbrace V \right\rbrace \sim \exp \left[ -\frac{k}{2} \int _ 0 ^ {q_0} d \mathbf{q}q^{2(1+\alpha)}V_{\mathbf{q}}V_{-\mathbf{q}} \right],
\end{eqnarray}
where $q_0$ is the momentum cut-off, which is of the order of the inverse of the lattice constant~\cite{kondev1995geometrical}, and $k$ is some constant. Scale invariance, when combined with translational and rotational invariance, has many interesting consequences. For example the height-correlation function of $V(\mathbf{r})$, $C(r) \equiv \langle \left[ V(\mathbf{r}+\mathbf{r_0})-V(\mathbf{r_0}) \right]^2 \rangle$ is expected to behave like
\begin{eqnarray}\label{height-corr}
C(r) \sim |\mathbf{r}| ^{2\alpha_l},
\end{eqnarray}
where the parameter $\alpha_l$ is called the local roughness exponent \cite{barabasi1995fractal} and $\left\langle \right\rangle$ denotes the ensemble average. The above equation implies that the second moment of $V(\textbf{q})$ scales with $q$ for small values of $q$, i.e. $S(\mathbf{q})\equiv \langle |V(\mathbf{q})|^2\rangle\sim |\mathbf{q}|^{-2(1+\alpha)}$~\cite{falconer2004fractal} which is obtained from the relation \ref{height-corr}. Another measure to classify the scale invariant profile $V(\mathbf{r})$ is the total variance
\begin{eqnarray}\label{total variance}
W(L)\equiv \langle \left[ V(\mathbf{r}) - \bar{V} \right]^2 \rangle _L \sim L^{2\alpha_g}
\end{eqnarray}
where $\bar{V}=\langle V(\mathbf{r}) \rangle_L$, and $\langle \dots \rangle _L$ means that the average is taken over $\mathbf{r}$ in a box of size $L$. The parameter $\alpha_g$ is the global roughness exponent. Self-affine surfaces are mono-fractal if and only if $\alpha_g = \alpha_l = \alpha$ \cite{barabasi1995fractal}. \\
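To make these definitions concrete, the two estimators can be sketched in a few lines of Python (assuming the potential is stored as a square \textit{numpy} array and using periodic shifts for simplicity); this is only an illustration of Eqs.~\ref{height-corr} and \ref{total variance}, not the production code used for the figures of this paper.
\begin{verbatim}
import numpy as np

def height_correlation(V, r_max):
    """C(r) = <[V(x+r) - V(x)]^2>, averaged over the two lattice axes."""
    C = np.zeros(r_max)
    for r in range(1, r_max + 1):
        dx = V - np.roll(V, r, axis=0)   # shift along x (periodic wrap)
        dy = V - np.roll(V, r, axis=1)   # shift along y
        C[r - 1] = 0.5 * (np.mean(dx**2) + np.mean(dy**2))
    return C

def total_width(V, L):
    """W(L) = <(V - V_bar)^2>, averaged over non-overlapping L x L boxes."""
    n = V.shape[0] // L
    w = [np.var(V[i*L:(i+1)*L, j*L:(j+1)*L])
         for i in range(n) for j in range(n)]
    return np.mean(w)
\end{verbatim}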
Another test for $V(\textbf{r})$ to be Gaussian is that all of its finite-dimensional probability distribution functions are Gaussian \cite{adler1981geometry}. One of the requirements is that its one-point distribution is Gaussian:
\begin{eqnarray}
P( V ) \equiv \frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{V^2}{2\sigma^2}},
\end{eqnarray}
where $\sigma$ is the standard deviation. Another quantity whose distribution should be Gaussian is the local curvature, which is defined (at position $\mathbf{r}$ and at scale $b$) as~\cite{kondev2000nonlinear}
\begin{eqnarray}\label{local curvature}
C_b(\mathbf{r}) = \sum_{m=1}^M \left[ V(\mathbf{r}+ b\mathbf{e}_m) - V(\mathbf{r}) \right],
\end{eqnarray}
in which the offset directions $\left\lbrace\mathbf{e}_1,\dots,\mathbf{e}_M\right\rbrace$ are a fixed set of vectors whose sum is zero, i.e. $\sum_{m=1}^M \mathbf{e}_m =0$. If the rough surface is Gaussian, then the distribution of the local curvature $P (C_b)$ is Gaussian and the first and all the other odd moments of $C_b$ manifestly vanish since the random field has up/down symmetry $V(\mathbf{r})\longleftrightarrow -V(\mathbf{r})$. Additionally, for Gaussian random fields we have:
\begin{eqnarray}\label{fourth moment}
\frac{\langle C_b^4 \rangle }{\langle C_b^2 \rangle ^2} = 3.
\end{eqnarray}
This relation is an important test for the Gaussian/non-Gaussian character of a random field. \\
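A minimal numerical version of this test, assuming a square \textit{numpy} array for the field and the four nearest-neighbour offsets (which indeed sum to zero), simply evaluates Eq.~\ref{local curvature} and the ratio of Eq.~\ref{fourth moment}:
\begin{verbatim}
import numpy as np

def curvature_ratio(V, b):
    """Return <C_b^4>/<C_b^2>^2 for offsets (+b,0), (-b,0), (0,+b), (0,-b)."""
    C = (np.roll(V,  b, axis=0) + np.roll(V, -b, axis=0) +
         np.roll(V,  b, axis=1) + np.roll(V, -b, axis=1) - 4.0 * V)
    return np.mean(C**4) / np.mean(C**2)**2   # equals 3 for a Gaussian field
\end{verbatim}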
All of the analysis presented above is in terms of the local variable $V(\textbf{r})$. There is, however, a non-local point of view in such problems, i.e. the iso-height lines of the profile $V(\mathbf{r})$ at the level set $V(\mathbf{r}) = V_0$, which also show scaling properties. When we cut the self-affine surface $V(x,y)$, some non-intersecting loops result which come in many shapes and sizes \cite{kondev1995geometrical,kondev2000nonlinear}. We choose $10$ different $V_0$ between the maximum and minimum potentials and a \textit{contour loop ensemble} (CLE) is obtained. These geometrical objects are scale invariant and show various power-law behaviors, e.g., their size distribution is characterized by a few power-law relations and scaling exponents. The scaling theory of CLEs of self-affine Gaussian fields was introduced in Ref. \cite{kondev1995geometrical} and developed in Ref. \cite{kondev2000nonlinear}. In the following we introduce the various functions and relations, first introduced in Ref. \cite{kondev2000nonlinear}. The most important local quantities are $\alpha_l$ and $\alpha_g$. For the non-local quantities, the exponents of the distribution functions of loop lengths $l$ ($P(l)$) and of the gyration radius of loops $r$ ($P(r)$) are of special importance. In addition, the contour loop ensemble can be characterized through the loop correlation function $G(\mathbf{r})=G(r)$ ($r\equiv |\textbf{r}|$), which is the probability that two points separated by the distance $r$ lie on the same contour. For large $r$ this function scales with $r$ as
\begin{eqnarray}\label{loop correlation function}
G(r) \sim \frac{1}{r^{2x_l}},
\end{eqnarray}
where $x_l$ is the loop correlation exponent. It is believed that the exponent $x_l$ is superuniversal, i.e. for all the known mono-fractal Gaussian random fields in two dimensions this exponent is equal to $\frac{1}{2}$~\cite{kondev1995geometrical,kondev2000nonlinear}. \\
Now consider the probability distribution $P(l,r)$ which is the measure of having contours with length $(l, l + dl)$ and radius $(r, r + dr)$. For the scale invariant CLE, $P(l,r)$ is hypothesized to behave like~\cite{kondev1995geometrical}:
\begin{eqnarray}
P(l,r) \sim l^{-\tau_l -1/D_f} g(l/r^{D_f}),
\end{eqnarray}
where $g$ is a scaling function and the exponents $D_f$ and $\tau_l$ are the fractal dimension and the length distribution exponent, respectively. One also can define the fractal dimension of the loops by the relation $\langle l \rangle \sim r^{\gamma_{lr}}$. By the following straightforward calculation
\begin{eqnarray}\label{loop fractal dimesion}
\langle l \rangle \equiv \frac{\int _0^\infty lP(l,r) dl}{\int _0^\infty P(l,r) dl} \sim r^{D_f},
\end{eqnarray}
we see that $\gamma_{lr}=D_f$. Note also that the probability distribution of contour lengths $P(l)$ is obtained using the relation $P(l) \equiv \int_0^\infty P(l,r)dr \sim l^{-\tau_l}$. It is shown that there are important scaling relations between the scaling exponents $\alpha$, $D_f$, $\tau_l$ and $x_l$ as follows~\cite{kondev2000nonlinear}:
\begin{eqnarray}\label{hyper1}
D_f(\tau_l -1) = 2-\alpha,
\end{eqnarray}
and
\begin{eqnarray}\label{hyper2}
D_f(\tau_l-3) = 2x_l -2.
\end{eqnarray}\\
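As a quick consistency check of Eqs.~\ref{hyper1} and \ref{hyper2}, consider the regular GFF, for which $\alpha=0$ (logarithmic roughness) and $D_f=\frac{3}{2}$: the first relation gives $\tau_l=1+\frac{2-\alpha}{D_f}=\frac{7}{3}$, and inserting this into the second one gives $2x_l-2=\frac{3}{2}\left(\frac{7}{3}-3\right)=-1$, i.e. $x_l=\frac{1}{2}$, in agreement with the superuniversal value mentioned above and with the IR exponents reported in SEC.~\ref{global}.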
In the general theory of critical phenomena, each system in the critical state shows some power-law behaviors for the local and geometrical quantities, i.e. $P(x)\sim x^{-\tau_x}$ ($x=$ the local and geometrical statistical quantities). The estimation of these exponents is a challenging problem, for which a detailed finite-size analysis is required. For mono-fractal finite systems, the finite-size scaling (FSS) theory predicts that~\cite{goldenfeld1992lectures}:
\begin{equation}
P_x(x,L)=L^{-\beta_x}g_x(xL^{-\nu_x}),
\label{eq:FSS}
\end{equation}
in which $g_x$ is a universal function and $\beta_x$ and $\nu_x$ are some exponents that are related by $\tau_x=\frac{\beta_x}{\nu_x}$. For multi-fractal systems, this prediction does not work. For example, for a system with two distinct critical regions (UV and IR regions in our main problem), one expects some cross-over point $x^*$ which connects these two regions~\cite{najafi2012avalanche,najafi2016bak,najafi2016water}. For determining these points, we have followed the method presented in~\cite{najafi2018statistical}, in which the slope of each part of the graph is obtained (in the log-log plot) and the cross-over point is obtained as the point in which the linear fits meet each other. The exact determination of these points is not simple in simulations, since when the exponents of the two regions are close to each other, the statistical error bar for $x^*$ becomes large~\cite{najafi2018statistical}.\\
Finally we note that there is a hyper-scaling relation between the $\tau$ exponents and the fractal dimensions $\gamma_{x,y}$, which are defined by the relation $x\sim y^{\gamma_{x,y}}$, namely:
\begin{equation}
\gamma_{x,y}=\frac{\tau_y-1}{\tau_x-1}.
\end{equation}
This relation is valid only when the conditional probability function $p(x|y)$ is a function with a very narrow peak for both $x$ and $y$ variables.
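For instance, applying this relation with $x=l$, $y=r$ and $\gamma_{l,r}=D_f$ gives $\tau_r=1+D_f(\tau_l-1)$; for the regular GFF values $D_f=\frac{3}{2}$ and $\tau_l=\frac{7}{3}$ this yields $\tau_r=1+\frac{3}{2}\cdot\frac{4}{3}=3$, which coincides with the IR value reported in SEC.~\ref{global}.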
\section{The construction of the problem}\label{RoughSurfaces}
\label{sec:model}
In this section we construct the main idea of the present paper. The Gaussian free field (GFF) is a very important model to which many models are mapped, ranging from the free Boson field to the Edwards-Wilkinson (EW) model of the surface growth process. A realization of the GFF is the Poisson equation in the background of white-noise charge disorders, which itself is mapped to the EW model in the stationary state. If other kinds of disorder are present in the system (which is the case for any condensed matter system), the problem of the statistics of the electric potential becomes complicated, since the GFF is coupled to the other model which realizes the disorder. There are many experimental~\cite{miller2013nonlinear,haglund1993picosecond,shalaev1998nonlinear,cai2005superlens,kravets2010plasmonic,kim2017ultrafast,kim2018dielectric,kim2013nondegenerate,meng2013metal,subramanian2001semiconductor,huang2013mitigation,yu2017effects,li2016electrostatically} and theoretical~\cite{cheraghalizadeh2018gaussian} motivations to consider metallic disorders in dielectric media. If the mentioned disorders are metallic particles randomly distributed over the sample (which is the subject of the present paper), then the problem is simply finding the solution of the Poisson equation with some additional boundary conditions imposed by the metallic regions. The configuration of the positions of the metallic particles is naturally random and may be modeled by some well-understood models, like the percolation theory (for uncorrelated metallic disorders). In this case, noting that in the absence of iso-potential islands the system corresponds to the Gaussian free field (GFF), we can think of this problem as \textit{the coupling of the GFF with the percolation theory as a model of the position pattern of the metallic islands}. It is the easiest way of configuring metallic particles in the media, although the spatial pattern of connectedness of metallic particles is generally complex and many internal degrees of freedom play a role in the problem, e.g. the cohesive energy, the particle sizes and the effect of the surrounding media. When the host configuration is made, one can simulate the dynamical (GFF) model, assuming that the metallic islands are quenched. Some other examples of the coupling of statistical models can be seen in~\cite{najafi2018coupling,najafi2016monte}.\\
For the purposes mentioned above, the system is meshed by cells each of which can have one of two states: empty or occupied by a metallic particle (which we call a metallic site). The host medium is therefore tuned by the occupation probability $p$, which is the probability that a site is empty (not occupied by a metallic particle). Then the Poisson equation is solved in the background of the metallic islands for the white-noise random charges in the non-metallic (active) area. \\
Before describing the problem in this type of media, let us first briefly introduce the standard method of generating GFFs. As mentioned above, the EW model in the stationary state becomes GFF which is generated by the following equation for the height field $V(\textbf{r})$:
\begin{equation}
\partial_t V(\vec{r},t)=\nabla^2V(\vec{r},t)+\eta(\vec{r},t),
\label{Eq:EW}
\end{equation}
in which $\eta(\vec{r},t)$ is a space-time white noise with the properties $\left\langle \eta(\vec{r},t)\right\rangle = 0 $ and $\left\langle \eta(\vec{r},t)\eta(\vec{r}',t')\right\rangle = \zeta \delta^3(\vec{r}-\vec{r}')\delta(t-t') $, where $\zeta$ is the strength of the noise. $V(\vec{r},t)$ can serve as the electrostatic potential in our paper once it becomes $t$-independent (the stationary state of EW, in which $\partial_t V=0$), acquiring the following form (with the dielectric constant $\epsilon\equiv 1$):
\begin{equation}
\nabla^2V(\vec{r})=-\rho(\vec{r}),
\label{Eq:Poisson}
\end{equation}
where $\rho(\vec{r})$ is a spatial white noise with normal distribution and the properties $\left\langle \rho(\vec{r})\right\rangle = 0 $ and $\left\langle \rho(\vec{r})\rho(\vec{r}')\right\rangle = (n_ia)^2 \delta^3(\vec{r}-\vec{r}')$, where $n_i$ is the total density of the Coulomb disorder and $a$ is the lattice constant. It is well known that this model in the scaling limit is described by the Gaussian free field (GFF), which is a $c=1$ conformal field theory~\cite{francesco1996conformal}. It is also known that the contour lines of this model are described by the Schramm-Loewner evolution (SLE) theory with the diffusivity parameter $\kappa=4$~\cite{cardy2005sle}, which is understood in terms of the general CFT/SLE correspondence with the relation $c=(6-\kappa)(3\kappa-8)/(2\kappa)$. The fractal dimension of the contour loops is $D_f^{\text{GFF}}=\frac{3}{2}$, which is also compatible with the relation $D_f=1+\frac{\kappa}{8}$. \\
Now let us consider the problem of the GFF in the background of metallic islands. The effect of these islands is that the potential is constant over them, i.e. Neumann-type boundary conditions. This causes the contour lines of the potential to be deformed and also changes the fluctuations of the potential (as becomes clear in the following sections). To determine the shape of these islands, the percolation theory is defined on the $L\times L$ square lattice with the occupation probability $p$ and the connected clusters composed of active (non-metallic) sites are identified. The \textit{active space} is defined as the set of sites that are un-occupied and also are not completely surrounded by a metallic island, i.e. there are some free paths from the site to infinity (or to the system boundaries). The random charged impurities are put on the sites of the active space and the Poisson equation is solved with the imposed boundary condition, which is free in our paper. \\
This problem also belongs to the context of critical phenomena on fractal systems. This concept was mainly initiated by the work of Gefen \textit{et al.}~\cite{gefen1980critical}, in which it was claimed that the critical behavior of the models is tuned by the details of the topological quantities of the fractal lattice. The cluster fractal dimension, the order of ramification and the connectivity are some examples of these quantities~\cite{gefen1980critical}. This concept can be extended to dilute systems that are fractal in some limits~\cite{cheraghalizadeh2017mapping,najafi2016bak,najafi2016monte,najafi2016water,najafi2018coupling}. There are also some experimental motivations for such studies. An example is magnetic materials in porous media \cite{kose2009label,kikura2004thermal,matsuzaki2004real,philip2007enhancement,kim2008magnetic,keng2009colloidal,kikura2007cluster,najafi2016monte}. In this paper the critical percolation model ($p=p_c$, in which $p_c$ is the critical occupation for the percolation model above which there is almost surely one percolated cluster, i.e. a cluster of the same type which connects two opposite boundaries) plays the role of the fractal lattice on which the GFF is considered. For the case $p=1$, which is a regular lattice, one retrieves the results of the ordinary $c=1$ CFT and also SLE$_{4}$.\\
The off-critical occupations, i.e. $p>p_c$, are also very important, especially in the close vicinity of $p_c$, which helps to determine the universality class of the model. In this regime, some critical exponents may be obtained. Also, in many cases, the off-criticality parameter (here $\epsilon_0\equiv 1-p$) drives the original critical model (here GFF$_{p=1}$, i.e. regular GFF) towards some other fixed point. The relevance or irrelevance of this perturbation should be deduced from numerical evidence, which is part of the aims of this paper.
\subsection{Numerical methods}\label{NUMDet}
\begin{figure*}
\centering
\begin{subfigure}{0.40\textwidth}\includegraphics[width=\textwidth]{pGreaterThanPcSample.jpg}
\caption{}
\label{fig:pGreaterThanPcSample}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}\includegraphics[width=\textwidth]{pGreaterThanPcSolution.jpg}
\caption{}
\label{fig:pGreaterThanPcSolution}
\end{subfigure}
\begin{subfigure}{0.40\textwidth}\includegraphics[width=\textwidth]{pcSample2.jpg}
\caption{}
\label{fig:pcSample2}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}\includegraphics[width=\textwidth]{pcSolution2.jpg}
\caption{}
\label{fig:pcSolution2}
\end{subfigure}
\caption{(Color online) The (a) sample and (b) the corresponding electrostatic potential for $p=0.70>p_c$. The (c) sample and (d) the corresponding electrostatic potential for $p=p_c$.}
\label{fig:samples}
\end{figure*}
As explained in the previous subsection, we consider an $L\times L$ square lattice and place metallic particles on random sites of the lattice in such a way that each chosen site is completely covered by a metallic particle. Each site is occupied by a metallic particle with probability $1-p$ and is un-occupied (active) with probability $p$. When an electrostatic potential is obtained, we extract the contour lines by $10$ different cuts with the same spacing between the maximum and minimum values. We have run the program for lattice sizes $L=256, 512$ and $1024$ to control the finite-size effects. It is notable that for each $L=1024$ sample (for a given $p$) about $10^3$ loops were obtained. This means that for each $p$ and $L$, something like $10^8$ loops were generated. The Hoshen-Kopelman~\cite{hoshen1976percolation} algorithm has been employed for identifying the clusters in the lattice. Figure \ref{fig:samples} shows samples and their corresponding potentials for two cases: $p=p_c$ and $p>p_c$. We see that the metallic islands become larger and self-similar as $p$ approaches $p_c$. The contour lines for $10$ different cuts have also been shown.
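A schematic version of this procedure is sketched below in Python. It uses \textit{scipy.ndimage.label} as a stand-in for the Hoshen-Kopelman algorithm and a plain Jacobi relaxation of the lattice Poisson equation in which every metallic cluster is forced to a common (floating) potential by replacing its values with the cluster average after each sweep; the boundary treatment, the convergence control and the handling of sites enclosed by metallic islands used in the actual simulations are not reproduced here.
\begin{verbatim}
import numpy as np
from scipy.ndimage import label

def percolation_sample(L, p, rng):
    """True = active (non-metallic) site, occurring with probability p."""
    return rng.random((L, L)) < p

def solve_potential(active, rho, n_sweeps=2000):
    """Jacobi relaxation sketch of nabla^2 V = -rho with iso-potential metal."""
    clusters, _ = label(~active)          # stand-in for Hoshen-Kopelman
    counts = np.bincount(clusters.ravel())
    V = np.zeros(active.shape)
    for _ in range(n_sweeps):
        Vp = np.pad(V, 1, mode='edge')    # crude free-boundary treatment
        nbrs = (Vp[:-2, 1:-1] + Vp[2:, 1:-1] +
                Vp[1:-1, :-2] + Vp[1:-1, 2:])
        V_new = 0.25 * (nbrs + rho)       # lattice Poisson update (a = 1)
        # force every metallic cluster to a single (floating) potential:
        sums = np.bincount(clusters.ravel(), weights=V_new.ravel())
        V = np.where(active, V_new, (sums / np.maximum(counts, 1))[clusters])
    return V

rng = np.random.default_rng(0)
L, p = 256, 0.70
active = percolation_sample(L, p, rng)
rho = rng.normal(size=(L, L)) * active    # white-noise charges on active sites
V = solve_potential(active, rho)
\end{verbatim}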
\section{Results}\label{results}
We have simulated the system for some $p\geq p_c=0.5927$. The samples, along with the electrostatic solutions, have been shown in Fig.~\ref{fig:samples}. The blue area in the percolation samples is the metallic area or the area which is surrounded by metallic particles, and is therefore an iso-potential region. For all occupation probabilities ($p$), the system shows critical behaviors and the random potential pattern $V(\textbf{r})$ is self-similar and scale invariant. However, these critical behaviors are not the same for all scales, i.e. for all statistical observables we have observed two distinct regions with their own critical properties. One of them governs the properties of the model on small scales, and the other controls the large-scale behaviors. We call the former the \textit{UV properties} and the latter the \textit{IR properties}. In the following sections we report the critical properties of the model in each region. Our study contains two separate parts: the local observables and the global ones.
\subsection{local properties}\label{local}
Scale-invariant rough surfaces have special local and global properties, which were reviewed in SEC~\ref{RoughSurfaces}. The most important local exponent of a scale-invariant rough surface is the $\alpha$ exponent defined in Eqs~\ref{height-corr} and \ref{total variance} and generally via the relation~\ref{scaleinvariance}. These quantities have been shown in Figs~\ref{fig:C-r} and~\ref{fig:W-l}. In both graphs, two regions are distinguishable. For small scales (small $r$ in Fig.~\ref{fig:C-r} and small $L$ in Fig.~\ref{fig:W-l}) the behavior is power-law, whereas for large scales the behavior is logarithmic (with $\alpha_l^{\text{IR}}=\alpha_g^{\text{IR}}=0$). Since the logarithmic behavior is characteristic of the GFF in regular systems, we conclude that the large-scale (IR) properties of the model are described by this model, i.e. the GFF in regular systems. This is confirmed by calculating the geometrical exponents, to be reported in the following section. On the other hand, the UV region is characterized by non-zero $\alpha_l^{\text{UV}}$ and $\alpha_g^{\text{UV}}$, whose dependence on $p$ has been shown in the upper-right insets of Figs~\ref{fig:C-r} and~\ref{fig:W-l}. Our observations in this paper support the fact that there are two points in the phase space with robust exponents: $p=p_c$ (named the UV fixed point, i.e. GFF$_{p=p_c}$) and $p=1$ (named the IR fixed point, i.e. GFF$_{p=1}$), between which a cross-over occurs. This cross-over takes place at some point, named $r^*$ in Fig.~\ref{fig:C-r} and $L^*$ in Fig.~\ref{fig:W-l}. The determination of the $\alpha_l^{\text{UV}}$ and $\alpha_g^{\text{UV}}$ exponents (and all other exponents) requires the determination of $r^*$ and $L^*$, which is done by a linear fit of the log-log plot and is defined as the point at which the $R^2$ of the linear fit becomes lower than a threshold, i.e. $0.9$ in this paper. We see that $r^*$ and $L^*$ decrease with $p$ (in a power-law fashion) and vanish for large enough $p$, where the logarithmic dependence (assigned to the IR region) dominates the graphs. An interesting observation is that $r^*_p(L_0)/L_0$ and $L^*_p(L_0)/L_0$ are decreasing functions of both $L_0$ and $p$ (see the lower insets of Figs~\ref{fig:C-r} and~\ref{fig:W-l}). $r^*_p(L_0)/L_0$ and $L^*_p(L_0)/L_0$ are the parameters that separate the IR and UV behaviors, and the fact that their dependence on $p$ and $L_0$ is qualitatively the same shows that large (small) scales and large (small) $p$ favor the same regime, i.e. the IR (UV) regime. In other words, keeping in mind that for larger system sizes (larger $L_0$) the IR properties of any system are more visible and therefore the IR regime dominates the corresponding graphs, we interpret the above observation (the decrease of $r^*_{L_0}(p)/L_0$ and $L^*_{L_0}(p)/L_0$ in terms of $L_0$ and $p$) to mean that \textit{GFF$_{p=1}$ is the IR fixed point towards which GFF$_{p=p_c}$ is unstable}. The fact that the region of UV (power-law) behaviors shrinks to zero as $p$ increases supports this hypothesis. Such a behavior is regularly seen in the other statistical observables, as we will see in the following section. \\
Our numerical results show that $\alpha_l^{\text{UV}}\approx \alpha_g^{\text{UV}}=0.5\pm 0.1$. In the IR region, however, we have $C(r)=a\log r$ and $W(L)=b\log L$ (corresponding to $\alpha_l^{\text{IR}}=\alpha_g^{\text{IR}}=0$) with some proportionality constants $a$ and $b$, which have been reported in the insets. The finite-size dependence of $C(r)$ has been shown in Fig.~\ref{fig:C-r-L}, in which the logarithmic behavior has been explicitly shown in a semi-logarithmic graph. The decrease of $a$ and $b$ shows that \textit{the site-dilution of the system decreases the spatial correlations of the potential field and also the roughness of the system in the IR regime}. This is explicitly displayed in Fig.~\ref{fig:C_r_p}, in which $C(r,p)$ has been sketched in terms of $p$ for various values of $r$, according to which we see that $C(r,p)$ decreases with decreasing $p$. $C(r,p)$ changes linearly with $p-p_c$, as shown in the inset of Fig.~\ref{fig:C_r_p} for $r=10$; this holds for all $r$. According to these observations, we propose that this function has the following form in the vicinity of $p=p_c$:
\begin{equation}
C(r,p)\sim (p-p_c)\times \begin{cases} \log r & \text{ for large} \ r \\
r^{\alpha_l^{\text{UV}}} & \text{ for small} \ r
\end{cases}
\end{equation}
The graphs show deviations from this relation for larger $p$s. Note also that $C(r,p)$ becomes vanishingly small as $p\rightarrow p_c$, which confirms that the dilution of the system suppresses the spatial correlations.
\begin{figure*}
\centering
\begin{subfigure}{0.49\textwidth}\includegraphics[width=\textwidth]{c_r}
\caption{}
\label{fig:C-r}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}\includegraphics[width=\textwidth]{W_l}
\caption{}
\label{fig:W-l}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}\includegraphics[width=\textwidth]{c_r_L}
\caption{}
\label{fig:C-r-L}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}\includegraphics[width=\textwidth]{C_r_p}
\caption{}
\label{fig:C_r_p}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}\includegraphics[width=\textwidth]{p_h}
\caption{}
\label{fig:p-h}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}\includegraphics[width=\textwidth]{p_c1}
\caption{}
\label{fig:Cb}
\end{subfigure}
\caption{(Color online) The behavior of (a) $C(r)$ and (b) $W(L)$ with the power-law behavior in the small scales and the logarithmic behavior in the large scales. The upper insets show the exponents $\alpha_l^{\text{UV}}$ and $\alpha_g^{\text{UV}}$ for the small scales. The lower insets are the cross-over points $r^*/L_0$ and $L^*/L_0$ in terms of $p$ for various system sizes. The proportionality constants $a(p)$ (for $C(r)$) and $b(p)$ (for $W(L)$) also have been shown. (c) The finite size dependence of $C(r)$ at $p=0.6$. (d) The $p$ dependence of $C(r,p)$. (e) The distribution of the random potential $P(V)$ with respect to $\left( V-\left\langle V\right\rangle \right) /\sigma_p$ for various values of $p$ for $L=1024$. Note that $\left\langle V\right\rangle=0$ for all $p$. Inset: the variance $\sigma_p$ in terms of $p$. (f) The distribution of the curvature of the random potential $P(C_1)$ for various values of $p$. The fit is $\exp \left[ -\beta\left|C_1\right|\right] $. Left inset: $\beta$ in terms of $p$. Right inset: $F_b\equiv \frac{\left\langle C_1^4\right\rangle }{\left\langle C_1^2\right\rangle^2}$ in terms of $p$.}
\label{fig:Off-Tc}
\end{figure*}
One of the important issues for random-field surfaces is whether they are Gaussian or non-Gaussian. The question of whether the distribution of a random field is Gaussian can be addressed directly by calculating the distribution function of the field itself (here $P(V)$) and of the corresponding curvature field $P(C_b)$, as explained in the previous section. The fact that these functions are Gaussian is a necessary, but not sufficient, condition for Gaussian random fields. We have shown these quantities in Figs.~\ref{fig:p-h} and~\ref{fig:Cb}. We see that $P(V)$ preserves the Gaussian form for all $p$; however, the width of the distribution ($\sigma_p$) changes. From the inset, it is inferred that $\sigma_p\sim p^{-0.43\pm 0.03}$ for $L_0=1024$. On the other hand, the fact that $P(C_1)\sim \exp\left( -\beta |C_1|\right)$ (with $\beta\sim p^{-3.19\pm 0.02}$ in the close vicinity of $p=p_c$) reveals that for $p\ne 1$ we have a non-Gaussian surface. To test this more precisely, we have calculated $F_b\equiv \frac{\left\langle C_b^4\right\rangle }{\left\langle C_b^2\right\rangle^2}$ (which should be equal to $3$ for a Gaussian random field) in terms of $b$ in the inset of Fig.~\ref{fig:Cb}. We see that $F_b(p=0.95)$, after some changes for small $b$, settles to $3$ for larger $b$, whereas the final values for the other $p$ are different, which confirms that the surface becomes non-Gaussian for smaller $p$, especially at $p=p_c$. Therefore it is important to note that \textit{the GFF$_{p=p_c}$ fixed point is not a Gaussian field, and therefore the Kondev hyperscaling relations do not hold}.\\
The exponents of the local observables have been gathered in TABLE~\ref{tab:local-exponents}.
\begin{table}
\begin{tabular}{c|c|c}
\hline Exponent & Definition & value \\
\hline $\alpha_l^{\text{UV}}$ & $C^{\text{UV}}(r)\sim r^{\alpha_l^{\text{UV}}}$ & $0.5\pm 0.1$ \\
\hline $\alpha_g^{\text{UV}}$ & $W^{\text{UV}}(L)\sim L^{\alpha_g^{\text{UV}}}$ & $0.5\pm 0.1$ \\
\hline $\alpha_l^{\text{IR}}$ & $C^{\text{IR}}(r)\sim r^{\alpha_l^{\text{IR}}}$ & $0$ \\
\hline $\alpha_g^{\text{IR}}$ & $W^{\text{IR}}(L)\sim L^{\alpha_g^{\text{IR}}}$ & $0$ \\
\hline $\gamma_{\sigma}$ & $\sigma_p \sim p^{-\gamma_{\sigma}}$ & $0.43\pm 0.03$ \\
\hline $\gamma_{\beta}$ & $\beta \sim p^{-\gamma_{\beta}}$ & $3.19\pm 0.02$ \\
\hline
\end{tabular}
\caption{The critical exponents of the local quantities.}
\label{tab:local-exponents}
\end{table}
\subsection{geometrical properties}\label{global}
\begin{figure*}
\centering
\begin{subfigure}{0.49\textwidth}\includegraphics[width=\textwidth]{df}
\caption{}
\label{fig:Df}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}\includegraphics[width=\textwidth]{Df-l}
\caption{}
\label{fig:L-Df}
\end{subfigure}
\caption{(Color online) (a) The fractal dimension defined by $\left\langle l\right\rangle\sim r^{D_f} $ for various rates of $p$ and $L_0=1024$. Upper inset: $D_f^{\text{UV}}$ in terms of $p$, Lower inset: $D_f^{\text{IR}}$ in terms of $p$. (b) The finite size dependence of the fractal dimension.}
\label{fig:geometrical1}
\end{figure*}
The local features of the critical models imply some non-local properties, which make the problem worth investigating from the geometrical point of view. This helps to distinguish more precisely the universality class of the model at hand. Examples of such geometrical quantities can be found in Figs.~\ref{fig:pGreaterThanPcSolution} and~\ref{fig:pcSolution2}, in which the level lines of the potential sample have been shown. By looking at these figures some observations can be made. For example, we see that large iso-potential islands have a higher chance to appear for $p=p_c$ when compared with $p>p_c$, e.g. $p=0.7$. \\
In the bulk regions, these level lines are stochastic non-intersecting loops with different shapes and sizes. When a large number of potentials are obtained for various percolation samples, we have a contour loop ensemble (CLE) for which many geometrical observables can be obtained. In this paper we analyze the length of loops ($l$) and the gyration radius of loops ($r$). For mono-fractal systems one expects that the distribution functions of these quantities show power-law behavior, i.e. $P(x)\sim x^{-\tau_x}$ with $x=l$ and $r$ (which results from Eq.~\ref{eq:FSS}), and also $\left\langle l\right\rangle\sim r^{D_f} $, in which the exponent $D_f$ is the fractal dimension (FD) of the loops. The FD is a key geometrical exponent which is uniquely associated with the universality classes of critical systems and can be interpreted as a representative of that class. Also, according to the SLE theory, the 2D critical models are classified by means of the diffusivity parameter $\kappa$ (and the corresponding random curves are said to be SLE$_{\kappa}$), which is related to the FD by the relation $D_f=1+\frac{\kappa}{8}$, which shows the importance of this exponent. For multi-fractal systems, however, these relations do not hold. For example, in many cases when there is a cross-over between two fixed points, two distinct critical exponents are observed~\cite{najafi2012avalanche}, for which Eq.~\ref{eq:FSS} is not applicable. In such cases, one may proceed just like in the previous section, i.e. find the cross-over points and extract two exponents that are expected to be different for the two distinct regimes.\\
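A minimal sketch of how such a loop ensemble can be collected from a single potential sample is given below (in Python); it uses \textit{skimage.measure.find\_contours} merely as a convenient contour tracer, keeps only closed loops, and estimates $D_f$ from a logarithmically binned fit of $\left\langle l\right\rangle$ versus $r$, without the UV/IR cross-over analysis described above.
\begin{verbatim}
import numpy as np
from skimage.measure import find_contours

def loop_statistics(V, n_levels=10):
    """Collect (length, gyration radius) of closed contour loops."""
    lengths, radii = [], []
    for level in np.linspace(V.min(), V.max(), n_levels + 2)[1:-1]:
        for c in find_contours(V, level):
            if not np.allclose(c[0], c[-1]):
                continue                          # keep closed loops only
            seg = np.diff(c, axis=0)
            l = np.hypot(seg[:, 0], seg[:, 1]).sum()
            r = np.sqrt(np.mean(np.sum((c - c.mean(axis=0))**2, axis=1)))
            if r > 0:
                lengths.append(l)
                radii.append(r)
    return np.array(lengths), np.array(radii)

def fractal_dimension(lengths, radii, n_bins=20):
    """Estimate D_f from <l> ~ r^{D_f} using logarithmic binning in r."""
    bins = np.logspace(np.log10(radii.min()), np.log10(radii.max()), n_bins)
    idx = np.digitize(radii, bins)
    r_m = [radii[idx == k].mean() for k in range(1, n_bins) if np.any(idx == k)]
    l_m = [lengths[idx == k].mean() for k in range(1, n_bins) if np.any(idx == k)]
    slope, _ = np.polyfit(np.log(r_m), np.log(l_m), 1)
    return slope
\end{verbatim}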
Figure~\ref{fig:Df} contains the (shifted) $\log(\left\langle l\right\rangle )-\log(r)$ plot for $L_0=1024$ for various values of $p$. Just like for the local quantities in the previous section, here a smooth cross-over takes place between two regimes, UV and IR, and the FDs for these regimes are not the same. In the two insets, we have shown $D_f^{\text{UV}}$ and $D_f^{\text{IR}}$ in terms of $p$ with some fitting lines. Very similar fits have been found for $L_0=256$ and $512$. By analyzing the upper inset, we conclude that $\lim_{p\rightarrow p_c} D_f^{\text{UV}}=1.295\pm 0.005$. This approach is of power-law form with exponent $2.90\pm 0.02$, which has been shown in the graph. In the lower inset, however, we see that $\lim_{p\rightarrow 1} D_f^{\text{IR}}=1.50\pm 0.02$ and the corresponding exponent is $0.29\pm 0.04$. The obtained $D_f^{\text{IR}}(p\rightarrow 1)$ is just the FD of the level lines of the Edwards-Wilkinson model, and also of $\kappa=4$ SLE, i.e. $D_f=\frac{3}{2}$, which again confirms that the IR regime is described by GFF$_{p=1}$. In Fig.~\ref{fig:L-Df} the log-log plot of $l-r$ has been shown for $L_0=256,512$ and $1024$ for $p=0.6$. In the inset, $D_f^{\text{UV}}$ has been shown in terms of the system size $L_0$ for the UV regime.\\
\begin{figure*}
\centering
\begin{subfigure}{0.49\textwidth}\includegraphics[width=\textwidth]{p_r}
\caption{}
\label{fig:p-r-L}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}\includegraphics[width=\textwidth]{p_l}
\caption{}
\label{fig:p-l-L}
\end{subfigure}
\caption{(Color online) The log-log plot of the distribution function of (a) the gyration radius $r$ and (b) the loop length $l$. The $p$ dependence of $\tau_x^{\text{UV}}$ and $\tau_x^{\text{IR}}$, $x=r$ and $l$, has been shown in the insets. The trend lines have been sketched to guide the eye.}
\label{fig:geometrical2}
\end{figure*}
Other important data can be obtained by analyzing the distribution functions of $l$ and $r$, i.e. Figs~\ref{fig:p-r-L} and \ref{fig:p-l-L}, which have a two-slope character. The upper insets justify the theoretical prediction of $\tau_r^{\text{GFF}}=3.0$ and $\tau_l^{\text{GFF}}=\frac{7}{3}$~\cite{kondev2000nonlinear}, i.e. $\lim_{p\rightarrow 1} \tau_r^{\text{IR}}=3.0\pm 0.1$ and $\lim_{p\rightarrow 1} \tau_l^{\text{IR}}=2.33\pm 0.02$. In the UV regime, some new exponents appear. From the lower insets we see that $\lim_{p\rightarrow p_c} \tau_r^{\text{UV}}=2.2\pm 0.1$ and $\lim_{p\rightarrow p_c} \tau_l^{\text{UV}}=1.95\pm 0.1$. These exponents have been gathered in TABLE~\ref{tab:global-exponents1}.\\
\begin{table}
\begin{tabular}{c|c|c}
\hline & UV $p\rightarrow p_c$ regime & IR $p\rightarrow 1$ regime \\
\hline $\tau_r$ & $2.2\pm 0.1$ & $3.0\pm 0.1$ \\
\hline $\tau_l$ & $1.95\pm 0.1$ & $2.33\pm 0.03$ \\
\hline $D_f$ & $1.295\pm 0.005$ & $1.50\pm 0.02$ \\
\hline
\end{tabular}
\caption{The critical exponents of the global quantities ($\tau_r$ and $\tau_l$) for $L=1024$.}
\label{tab:global-exponents1}
\end{table}
\section*{Discussion and Conclusion}\label{conclusion}
\label{sec:conc}
In this paper we have considered the Gaussian free field (GFF) in 2D disordered media. To generate GFF samples on a regular lattice, one can consider the Edwards-Wilkinson (EW) model in the stationary state, which is equivalent to the Poisson model in the background of white-noise charges. For disordering the host media, we have considered each site to be in one of two possible states: empty or occupied by a metallic particle (namely a metallic site). The spatial arrangement of the metallic particles was modeled by the percolation theory: with probability $p$ the site is empty and with probability $1-p$ the site is metallic. Therefore the medium is composed of ordinary and metallic regions (inside which the potential is constant). We have mapped the problem to a rough random surface (with some iso-height islands) and have calculated the corresponding exponents. The most general finding of our (local) investigation has been that the incorporation of metallic particles into the system suppresses the spatial correlations of the potential field and also decreases its statistical fluctuations.\\
Two special points were seen in the phase space: $p=p_c$ and $p=1$. The first point is called GFF$_{p=p_c}$, which captures the critical behaviors of the system on small spatial scales (UV behaviors), and the second one is named GFF$_{p=1}$, which captures the critical behaviors of the system on large spatial scales (IR behaviors). We have detected a cross-over region between these two limits around some spatial scale $r^*$ (corresponding to $l^*$), in which the behavior smoothly changes from the UV to the IR region. By analyzing this point, we realized that, upon enlarging the system size, the IR properties dominate the phase space, and equivalently the UV fixed point is unstable towards the IR fixed point, which is described by the GFF in the regular system. On the other hand, the UV fixed point has new exponents, which have been gathered in TABLEs~\ref{tab:local-exponents} and~\ref{tab:global-exponents1}. Accordingly, we propose the phase space shown schematically in Fig.~\ref{fig:phasespace}, which demonstrates the structure of the fixed points. The hollow circle is representative of the unstable (UV) fixed point with a non-Gaussian distribution, and the solid circle shows the stable (IR) fixed point, which is the GFF in the regular system. The local and geometrical exponents of GFF$_p$ have been shown in this figure to facilitate comparison of the fixed points in future works.
\begin{figure*}
\centerline{\includegraphics[scale=.40]{pic}}
\caption{(Color Online) The schematic representation of the structure of fixed points for the GFF$_p$ problem.}
\label{fig:phasespace}
\end{figure*}
We conclude that the dilution perturbation, which is tuned by the off-criticality parameter $\epsilon_0\equiv 1-p$, is irrelevant for the $p=1$ fixed point. Also the unstable fixed point (GFF$_{p=p_c}$), which is composed of two ingredients ($c=1$ and $c=0$ CFTs, the former as the dynamical model and the latter as the host media), contains some critical exponents (importantly $D_f=1.295\pm 0.005$, corresponding to $\kappa=2.36\pm 0.04$) and should be characterized in more detail.
\section{Introduction}
The need for a study of the Green function diagonal is directly connected with the generalized zeta-function (GZF) theory of elliptic differential operators \cite{S},
which is successfully applied to the regularization of operator determinants \cite{H}. Such elliptic problems appear, for example, as the Laplace transform of heat kernel equations, generally with variable coefficients, which are conventionally named potentials. An important application of the theory is the evaluation of semiclassical quantum corrections to nontrivial classical solutions of important nonlinear equations and field theory \cite{RS,Kon}. The corrections are intimately linked to the fundamental solutions of related linear problems for the Laplace transform of the heat operator, whose diagonal enters the zeta-function definition. Such a regularization, for example, is realized in explicit form for kink solutions of the integrable Sine-Gordon equation \cite{kwant,ZL}
as well as of the non-integrable Landau-Ginzburg ($\phi^4$) models \cite{Kon,ZL}. The kink solution (as well as the multikink one) in this context corresponds to the case of a point spectrum of the elliptic operators that appear after separation of variables.
This paper is devoted to the investigation of a wide class of potentials whose spectrum is continuous with possible gaps - more precisely, the so-called finite-gap ones - see, e.g., the book \cite{BEB}.
Such potentials, and especially the three-gap one, correspond to the basic three-wave interaction, which is important in the description of many quasiperiodic processes; exemplary applications can be found in \cite{BB}.
In Sec. 2, starting from the Laplace transform of the heat equation in time, we derive a nonlinear equation for the Green function diagonal, along ideas similar to the ones mentioned in \cite{C.H, BEB} in the context of other equations. We construct its solutions in cases in which the potentials have a direct link to polynomial functions in appropriate variables (Sec. 3). The last section is devoted to examples, and the appendix contains a Mathematica program related to a class of illustrations.
\section{The equation}
We are interested in a class of problems connected with the Green function of a parabolic partial differential operator (the kernel of the heat equation)
\begin{equation}
\left(\frac{\partial}{\partial y}+\frac{\partial^2}{\partial x^2}-U(x)\right)g(x,x_0,y)=\delta(x-x_0)\delta(y),
\end{equation}
where $g(x,x_0,y) \in S$ is the fundamental solution over Schwartz space $S$; $\delta(x-x_0), \quad \delta(y)$ are Dirac delta-functions.
After the Laplace transform we obtain
\begin{equation}\label{a}
\left(p+\frac{\partial^2}{\partial x^2}-U(x)\right)\hat{g}(x,x_0,p)=\delta(x-x_0)
\end{equation}
The construction of the GZF in fact relies upon the Green function diagonal.
In \cite{ZL} a statement about
$\hat{g}(p,x,x)=G(p,x)$
is used. Namely,
$G(p,x)$ solves the equation
\begin{equation}\label{Hermit}
2GG''_{xx} - (G'_x)^2 - 4(U(x)-p)G^2+1=0.
\end{equation}
under the condition that $U(x)$ is bounded. The equation resembles one derived by Hermite \cite{C.H}.
\textbf{Proof:}
Let us consider homogeneous equation
\begin{equation}\label{aa}
\left(p + \frac{\partial^2}{\partial x^2} -U(x)\right)f\left(p,x,x_{0}\right)=0.
\end{equation}
The fundamental solution of (\ref{aa}) is built by the standard procedure \cite{MW}. The equation has two linearly independent solutions, say $\phi$ and $\psi$, converging at $-\infty$ and $+\infty$, respectively.
One can represent $\hat{g}_{D}$ through $\phi$ and $\psi$ respectively for $x<x_0$ and $x>x_0$ with a sewing condition determined by equation (\ref{a})
\begin{equation}\label{b}
\hat{g}_{D}(p,x,x_0)=\left\{\begin{array}{c}
A(x_0)\phi(p,x),\ x\leq x_0 \\
B(x_0)\psi(p,x),\ x\geq x_0
\end{array}\right. .
\end{equation}
From continuity condition of $\hat{g}_{D}$ one gets
$$A(x_0)\phi(p,x_0)=B(x_0)\psi(p,x_0).$$
which leads to:
$$A(x_0)=C(x_0)\psi(p,x_0),$$
$$B(x_0)=C(x_0)\phi(p,x_0).$$
Due to the symmetry of the Green function with respect to the exchange of $x$ and $x_0$, $C(x_0)$ is a constant (later referred to as $C$). To obtain a condition for the derivatives of $\phi$ and $\psi$ one integrates (\ref{a}) over $x$ in an $\varepsilon$ neighbourhood of $x_0$:
\begin{equation}\label{c}
\int_{x_0-\varepsilon}^{x_0+\varepsilon} \left(p + \frac{\partial^2}{\partial x^2} -U(x)\right)\hat{g}_{D}\left(p,x,x_{0}\right)dx=1,
\end{equation}
$$\left.\frac{\partial \hat{g}_{D}}{\partial x}\left(p,x,x_{0}\right)\right|_{x=x_0-\varepsilon}^{x_0+\varepsilon}+
\int_{x_0-\varepsilon}^{x_0+\varepsilon} \left(p-U(x)\right)\hat{g}_{D}\left(p,x,x_{0}\right)dx=1,$$
$$\frac{\partial \phi}{\partial x}(p,x_0+\varepsilon)\ C\psi(p,x_0)- \frac{\partial \psi}{\partial x}(p,x_0-\varepsilon)\ C\phi(p,x_0)+
\int_{x_0-\varepsilon}^{x_0+\varepsilon} \left(p-U(x)\right)\hat{g}_{D}\left(p,x,x_{0}\right)dx=1.$$
In $\varepsilon \rightarrow 0$ limit above equation reduces to
\begin{equation}\label{d1}
\frac{\partial \phi}{\partial x}(p,x_0)\ C\psi(p,x_0)-\frac{\partial \psi}{\partial x}(p,x_0)\ C\phi(p,x_0)=1.
\end{equation}
Since (\ref{aa}) is linear, its solutions can be rescaled, so one can assume $C=1$.
Then (\ref{d1}) reduces to
\begin{equation}\label{e}
\frac{\partial\phi}{\partial x}(p,x_0)\ \psi(p,x_0)=\frac{\partial\psi}{\partial x}(p,x_0)\ \phi(p,x_0)+1.
\end{equation}
The actual proof is made by inserting (\ref{b}) into (\ref{Hermit}). For brevity, function arguments will be omitted and $'$ will denote a derivative with respect to $x$:
$$2\psi \phi \left(\psi '' \phi + 2\psi' \phi' + \psi \phi '' \right)-\left(\psi '\phi+\psi \phi '\right)^2-4(U(x)-p)\psi^2 \phi^2 +1=0$$
$$2\psi^2 \phi \left(\phi''-(U(x)-p)\phi\right)+2\psi \phi^2 \left(\psi''-(U(x)-p)\psi\right)+4\psi' \phi'\psi \phi-\left(\psi '\phi+\psi \phi '\right)^2 +1=0.$$
Because of (\ref{aa}) the first two terms vanish. One also uses property (\ref{e}):
$$4\psi' \phi'\psi \phi-\left(2\psi '\phi+1\right)^2+1=0,$$
$$4\psi' \phi'\psi \phi-4\psi'^2\phi^2-4\psi'\phi-1+1=0,$$
\begin{equation}\label{g}
\psi' \phi'\psi \phi-\psi'^2\phi^2-\psi' \phi=0,
\end{equation}
\begin{equation}
\psi'^2\phi^2+\psi'\phi-\psi'^2\phi^2-\psi' \phi=0.
\end{equation}
Thus the proof is concluded.
It is important to note that the result is general and does not rely on the nature of $U(x)$ as long as it is bounded. Its usefulness, however, depends on a few qualities of the potential.
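As a quick illustration (added here as a consistency check, not part of the original argument), one can verify Eq. (\ref{Hermit}) symbolically for the simplest bounded potential, a constant $U(x)=u$, for which $G(p,x)=1/(2\sqrt{u-p})$; this case is discussed in detail in Sec.~\ref{const}.
\begin{verbatim}
import sympy as sp

x, p, u = sp.symbols('x p u')
U = u                              # simplest bounded potential: a constant
G = 1/(2*sp.sqrt(U - p))           # candidate Green function diagonal
lhs = 2*G*sp.diff(G, x, 2) - sp.diff(G, x)**2 - 4*(U - p)*G**2 + 1
print(sp.simplify(lhs))            # -> 0
\end{verbatim}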
\section{The main equation solution}
\subsection{Substitutions}
We consider a class of solutions of the equation (\ref{Hermit}), to be written in a form
\begin{equation}\label{rep}
G(p,x)=\frac{P(p,x)}{2\sqrt{Q(p)}}.
\end{equation}
This is most useful if there exists a change of variables $x\rightarrow z$, $U(x)\rightarrow u(z)$, in which $P$ and $Q$ are polynomials. Basic conditions for this to be possible are:
\begin{enumerate}
\item $u$ is a polynomial in $z$,
\item $(z'_x)^2$ is a polynomial in $z$ (note, that $z''_{xx}=\frac{1}{2}\frac{\partial}{\partial z}(z'_x)^2$),
\end{enumerate}
This does not ensure simplicity of the solutions, as will be shown further in the text. At this point it is important to notice that the second condition restricts $z(x)$ -- apart from a class of elementary functions -- to elliptic and hyperelliptic functions; see, e.g., \cite{BEB}.
Let us assume that $(z'_x)^2$ and $u$ are polynomials in the variable $z$ of degrees $L+1$ and $K$, respectively; hence $z''_{xx}$ is a polynomial of degree $L$. We also assume that the coefficient of the highest-power term of $(z'_x)^2$ is equal to $1$ (which is always attainable by rescaling $z$). After the change of variables, the equation takes the form:
\begin{equation}
2P(P''(z'_x)^2+P'z''_{xx})-\left(P'z'_x\right)^2-4(u(z)-p)P^2+4Q=0
\end{equation}
We also assume a solution of the form:
\begin{equation}\label{sol}
\begin{array}{c}
P(p,z)=\sum_{n=0}^{N}p^n \sum_{l=0}^{M_n}P_{n,l} z^l\\
Q(p)=\sum_{n=0}^{2N+1} q_n p^n
\end{array}
\end{equation}
\subsection{Classification}
We now proceed to analyse the solution by separating the equation with respect to powers of $p$ and $z$. The equation for $p^0, z^{2M_0+\max(K,L-1)}$ takes the following form:
If $K>L-1$
\begin{equation}
4u_{K}P_{0,M_0}=0
\end{equation}
If $K\leq L-1\quad \land\quad M_0\geq 1$
\begin{equation}
2P^2_{0,M_0}(M_0 (M_0-1)+\frac{L+1}{2}M_0)-P^2_{0,M_0}M^2_0-4u_{K}P^2_{0,M_0}\delta_{K,L-1}=0
\end{equation}
\begin{equation}\label{M0}
M_0^2+(L-1)M_0-4u_{K}\delta_{K,L-1}=0
\end{equation}
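Solving this quadratic explicitly, the admissible (non-negative) root is
\begin{equation}
M_0=\frac{1-L+\sqrt{(L-1)^{2}+16\,u_{K}\,\delta_{K,L-1}}}{2},
\end{equation}
and the polynomial ansatz additionally requires this value to be a non-negative integer.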
Note that $M_0=0$ leads to $K=0$ (this case will be examined later in the text). Another conclusion is that $K\leq L-1$ is necessary for the sought type of solutions. Furthermore, this leads to the following, more precise conditions:
\begin{equation}
\begin{array}{c}
L\geq 1 \qquad (due\ to\ K\geq 0) \\
K=L-1 \qquad (for\ M_0\ to\ have\ positive\ value)
\end{array}
\end{equation}
Equation for $p^{2N+1}$
\begin{equation}
4\left(\sum_{l=0}^{M_N}P_{N,l} z^l\right)^2+4q_{2N+1}=0
\end{equation}
leads to the following conclusions:
\begin{equation}
M_N=0\quad \land\quad P^2_{N,0}=-q_{2N+1}.
\end{equation}
Let us now look at subsequent equations for descending powers of $p$. For $p^{2N}$ we have
\begin{equation}
-4u(z)P^2_{N,0}+8P_{N,0}\sum_{l=0}^{M_{N-1}}P_{N-1, l}z^l+4q_{2N}=0,
\end{equation}
which leads to
\begin{equation}\label{2N}
\sum_{l=0}^{M_{N-1}}P_{N-1, l}z^l=\frac{1}{2}\left(P_{N,0}u(z)-\frac{q_{2N}}{P_{N,0}}\right),
\end{equation}
\begin{equation}
M_{N-1}=K.
\end{equation}
For $p^{2N-1}$ we get (for the highest power of $z$):
on condition $M_{N-2}>M_{N-1}+K$ ($z\neq x^2$)
\begin{equation}
4P_{N,0}P_{N-2,M_{N-2}}=0,
\end{equation}
on condition $M_{N-2}\leq M_{N-1}+K$
\begin{equation}
\begin{array}{c}
2P_{N,0}P_{N-1,M_{N-1}}(M_{N-1}(M_{N-1}-1)+\frac{K+2}{2}M_{N-1}) \\ -2P_{N,0}P_{N-1,M_{N-1}}-8u_{K}P_{N,0}P_{N-1,M_{N-1}} \\ +8P_{N,0}P_{N-2,M_{N-2}}\delta_{M_{N-2},L-1}+4q_{2N-1}=0.
\end{array}
\end{equation}
It is obvious that $M_{N-2}\leq M_{N-1}+K$ is a necessary condition for (\ref{sol}). Now we can consider a general rule for all remaining equations. Thesis: $\forall_{0\leq k<N-1}\; M_{k}\leq (N-k)K$. Proof by induction:
If $M_{k}> (N-k)K$, then the equation for $p^{N+k+1}$ and the highest power of $z$ takes the form:
\begin{equation}
4P_{N,0}P_{k,M_k}=0
\end{equation}
If $M_{k}\leq (N-k)K$ and $\forall_{k<l<N}\; M_l=(N-l)K$ (the possibility giving the highest possible value of $M_k$), then the equation for $p^{N+k+1}$ and the highest power of $z$ takes the form:
\begin{equation}
\begin{array}{c} 2P_{N,0}P_{k+1}(M_{k+1}(M_{k+1}-1)+\frac{K+2}{2}M_{k+1})\\ +2\sum_{n=k+2}^{N-1}P_{n,M_n}P_{N-n+k+1,M_{N-n+k+1}}(M_{N-n+k+1}(M_{N-n+k+1}-1) \\ +\frac{K+2}{2}M_{N-n+k+1}) -2\sum_{n=k+2}^{N-1}P_{n,M_n}P_{N-n+k+1}M_{N-n+k+1}M_{n} \\ -4u_{K}\sum_{n=k+1}^{N}P_{n,M_n}P_{N-n+k+1} \\ +4\sum_{n=k+1}^{N-1}P_{n,M_n}P_{N-n+k,M_{N-n+k}} \\ +4\delta_{M_{k},M_{k+1}+L-1}P_{N,0}P_{k,M_k}=0
\end{array}
\end{equation}
Thus the solution exists only if the thesis holds. This leads directly to a minimal condition on $N$:
\begin{equation}\label{Nmin}
N\geq\frac{M_0}{K} \qquad \forall M_0\geq K
\end{equation}
In summary: the existence of solutions of the form (\ref{sol}) depends on the degree of the potential ($K$), the degree of $(z'_x)^2$ ($L\leq 1$ can give abnormal results), and the amplitude of the highest-power term of the potential; moreover, there exists a definite formula for the minimal value of $N$.
\subsection{Solving algorithm}\label{Al}
\begin{enumerate}
\item If there exists a solution in the form (\ref{sol}) for a given $N$ (which fulfils requirement (\ref{Nmin})), it can be obtained in a straightforward manner. Since the actual value of $N$ is unknown, one starts with the minimal possible value $\frac{M_0}{K}$. After separating the equation with respect to powers of $p$ one analyzes the resulting equations starting from the highest power of $p$. All those equations can be written in a manner similar to (\ref{2N}) (here for $p^{N+n+1}$ with possible $n$ from $N-1$ to $0$):
\begin{equation}
P_{N,0}\sum_{l=0}^{(N-n)K}P_{n,l}z^l=F(P_{N,0},P_{N-1,K},P_{N-1,K-1},\dots,P_{n+1,0},z)-\frac{q_{N+n+1}}{2}
\end{equation}
where $F$ contains all elements of the equation not written explicitly. It is easy to see that we can obtain all coefficients, except for $P_{n,0}$, as solutions of linear equations, since all elements on the RHS, except for $q_{N+n+1}$, are known. As for the equation for $z^0$, it is more convenient to express $q_{N+n+1}$ in terms of $P_{n,0}$. Solving all equations down to $p^{N+1}$ gives us all $P_{n,l}$ as well as some of the $q_n$ in terms of $\{P_{n,0}\}_{n\in\{0,\dots,N\}}$. It is important to note that all calculations done up to that point stay relevant even if the value of $N$ has to be increased.
\item In the next step, we use the equation for $p^N$ to calculate the possible values of $P_{n,0}$. Again, if we start from the highest powers of $z$, we can obtain those coefficients as solutions of linear equations, since any element containing $P_{n,0}$ is proportional to at most $z^{K(N-n+1)}$ and any element containing $P^2_{n,0}$ is proportional to at most $z^{K(N-2n+1)}$ (a negative exponent means that such coefficients are not present in the equation for $p^N$).
\item Subsequently we check the solution. If it does not hold, we increase the value of $N$, add the relevant components to the $P$ and $Q$ polynomials, calculate the values of all new coefficients and go back to step 2.
\end{enumerate}
Since the $q_n$ are not necessary for the calculation of any $P_{n,l}$ or for checking the solution, we can slightly simplify the algorithm by calculating the $q_n$ after all other coefficients.
\noindent The described algorithm was implemented in Mathematica 7 and used to obtain the solutions presented in Section \ref{sol}.
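For illustration only, the same ansatz-and-solve strategy can be sketched in a few lines of sympy; this is a minimal reimplementation written for this text (the function name and the demo potentials are ours), not the Mathematica program of the appendix:
\begin{verbatim}
import sympy as sp

p, z = sp.symbols('p z')

def gf_diagonal_ansatz(u, w2, N, M):
    """Insert the polynomial ansatz P(p,z), Q(p) into the transformed equation
       2 P (P_zz w2 + P_z w2'/2) - (P_z)^2 w2 - 4 (u - p) P^2 + 4 Q = 0,
       where w2 = (z'_x)^2 and primes denote d/dz, then solve for the coefficients."""
    Pc = [[sp.Symbol('P_%d_%d' % (n, l)) for l in range(M[n] + 1)]
          for n in range(N + 1)]
    qc = [sp.Symbol('q_%d' % n) for n in range(2*N + 2)]
    P = sum(Pc[n][l]*p**n*z**l for n in range(N + 1) for l in range(M[n] + 1))
    Q = sum(qc[n]*p**n for n in range(2*N + 2))
    eq = (2*P*(sp.diff(P, z, 2)*w2 + sp.diff(P, z)*sp.diff(w2, z)/2)
          - sp.diff(P, z)**2*w2 - 4*(u - p)*P**2 + 4*Q)
    system = sp.Poly(sp.expand(eq), p, z).coeffs()   # one condition per monomial p^a z^b
    unknowns = [c for row in Pc for c in row] + qc
    return sp.solve(system, unknowns, dict=True)

# smoke test: constant potential u(z) = u0 with z = x, i.e. w2 = 1, N = 0, M = [0]
u0 = sp.Symbol('u0')
print(gf_diagonal_ansatz(u0, sp.Integer(1), 0, [0]))
# a hand-checkable non-constant case: u = z/2 + u0 with w2 = z**3, N = 1, M = [1, 0]
# print(gf_diagonal_ansatz(z/2 + u0, z**3, 1, [1, 0]))
\end{verbatim}
For the constant potential the solver reproduces, up to normalization, the simplest solution of Sec.~\ref{const}; richer cases such as the cnoidal potential below require the variable $z$ rescaled so that $(z'_x)^2$ is monic.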
\subsection{On uniqueness of solutions}
Using the above algorithm we obtain all coefficients as solutions of linear equations with respect to the sought coefficients. This leads to the simple conclusion that for a given $N$ solutions of the form (\ref{sol}) are unique, except for constant $u$, in which case the solution has no dependence on $z$ and, because of this, there is no relation between any of the $P_{n,0}$ (Sec.~\ref{const}). As yet there is no method of finding all allowed $N$ for a given potential; therefore the uniqueness of the solutions remains uncertain.
\section{Exemplary solutions}\label{sol}
\subsection{Constant potential}\label{const}
Let's consider a constant potential
\begin{equation}\label{con}
U(x)=u
\end{equation}
It is obvious that no change of variables is necessary; thus we can use $z=x$ ($L=-1$). The equation for $p^0 z^{2M_0}$ immediately gives
\begin{equation}
-4uP^2_{0,M_0} +4q_0\delta_{M_0,0}=0
\end{equation}
Since $u$ can take an arbitrary value if we shift the $p$ variable, this equation only holds for $M_0=0$. This means that the simplest solution is
\begin{equation}
G(p,x)=\frac{1}{2\sqrt{u-p}}
\end{equation}
This is not the only one, as will be shown. Let us consider a solution for potential (\ref{con}) and an arbitrary $N$. Equation for $p^{2N+1}$ gives as usual
\begin{equation}
M_N=0
\end{equation}
\begin{equation}
q_{2N+1}=-P^2_{N,0}
\end{equation}
Equation for $p^{2N}$ gives
\begin{equation}
M_{N-1}=0
\end{equation}
\begin{equation}
P_{N-1,0}=\frac{1}{2}\left(uP_{N,0}-\frac{q_{2N}}{P_{N,0}}\right)
\end{equation}
Subsequent equations look alike, with $M_i=0$ for all $i$. It is easy to see that one obtains a total of $3N+3$ parameters with only $2N+2$ equations, and one can obtain a solution for any value of $N$. In a sense this is a consequence of the condition (\ref{Nmin}), since $\frac{0}{0}$ is an indeterminate symbol.
\subsection{Triple-gap cnoidal potential}
Let us take a solution one order higher than that for the $\phi^4$ cnoidal solution:
\begin{equation}
U(x)=-12m^2k^2\ cn^2(mx;k)
\end{equation}
\begin{equation}
z=cn^2(mx;k)
\end{equation}
\begin{equation}
\left(z'_x\right)^2=4m^2z(1-z)(1-k^2+k^2z)
\end{equation}
\begin{equation}
z''_{xx}=2m^2(-3k^2z^2+(4k^2-2)z+1-k^2)
\end{equation}
\begin{equation}
M_0=3
\end{equation}
\begin{equation}
N=3
\end{equation}
The algorithm explained in Section \ref{Al} gives (assuming $P_{3,0}=1$ for simplicity):
\small
\begin{eqnarray}
P_{2} & = & -2m^2(7 + k^2 (-14 + 3 z)) \\
P_{1} & = & m^4(49 + k^2 (-256 + 78 z) + k^4 (256 + 3 z (-52 + 15 z))) \\
P_{0} & = & -3m^6(12 + 8 k^2 (-19 + 9 z) + 3 k^4 (128 + z (-121 + 45 z)) +\\ & &
k^6 (-256 + 3 z (121 + 5 z (-18 + 5 z)))) \\
Q(p) & = & -((-4 + 8 k^2) m^2 + p) ((9 - 96 k^2 + 96 k^4) m^4 +
10 (-1 + 2 k^2) m^2 p + p^2)\\ & & ((9 - 42 k^2 + 33 k^4) m^4 +
2 (-5 + 7 k^2) m^2 p + p^2)\\ & & (3 k^2 (-8 + 11 k^2) m^4 +
2 (-2 + 7 k^2) m^2 p + p^2)
\end{eqnarray}
\normalsize
\section{Conclusion}
The described equation allows the calculation of the heat equation Green function diagonal in a straightforward manner. The developed algorithm should be especially useful for finding solutions for finite-gap potentials, which naturally emerge in periodic and quasi-periodic structures.
\section{Introduction}
The search for exotic objects within Einstein's general theory of relativity has been receiving a lot of interest in the literature. A black hole, e.g. the Schwarzschild black hole, corresponds to one of the possible solutions of Einstein's field equations, see Ref.\cite{Schwz}. The recent detection of gravitational waves (GWs) \cite{Abbott:2016blz} demonstrated that stellar-mass black holes really exist in Nature. Interestingly, the author of Ref.\cite{Flamm} realized in 1916 that another solution was viable, which is presently known as a ``white hole''.
In 1935, Einstein and Rosen used the theory of general relativity to propose the existence of ``bridges'' through space-time \cite{ERb}. These bridges connect two different points in space-time, creating a shortcut called an Einstein-Rosen bridge, or wormhole. However, the existence of wormholes has yet to be observed experimentally. Moreover, Morris and Thorne \cite{Morris} demonstrated that wormholes are solutions of the Einstein field equations. Hypothetically, they connect two space-time regions of the universe by a throat. The first type of wormhole solution was the Schwarzschild wormhole \cite{Vladimir}, which would be present in the Schwarzschild metric describing an eternal black hole. However, it was found that it would collapse too quickly. In principle, it is possible to stabilize wormholes if there exists an exotic matter with negative energy density.
In order to maintain the structure of the wormhole, we need exotic matter which satisfies the flare-out condition and violates the weak energy condition \cite{Morris:1988cz,Morris:1988tu}. Classically, there are no traversable wormholes. However, it has recently been shown that quantum matter fields can provide enough negative energy to allow some wormholes to become traversable. As a result, to construct such a traversable wormhole one requires exotic matter with a negative energy density and a large negative pressure, whose magnitude exceeds the energy density.
In the literature, many authors have intensively studied various aspects of traversable wormhole (TW) geometries
within different modified gravitational theories
\cite{Harko:2013aya,Lobo:2009ip,Bohmer:2011si,Zangeneh:2015jda,Clement:1983fe,Bronnikov:2010tt,Visser:2003yf,Mehdizadeh:2017tcf,Jusufi:2017drg,Jusufi:2016leh,Jusufi:2018waj,Jusufi:2019knb,Dai:2019mse,Tsukamoto:2014swa,Tsukamoto:2016zdu,Tripathi:2019trz,Teo:1998dp,Ovgun:2018xys,Shaikh:2018yku,Shaikh:2016dpl,Ovgun:2018prw,MontelongoGarcia:2010xd,Rahaman:2016jds,Rahaman:2014dpa,Jamil:2013tva,Rahaman:2012pg,Sahoo:2017ual}. Recently the shadows of wormholes and Kerr-like wormholes was investigated in Refs.\cite{Shaikh:2018kfv,Amir:2018pcu,Gyulchev:2018fmd,Jusufi:2018gnz,Amir:2018szm,Tsukamoto:2012xs}.
These include $f(R)$ and $f(T)$ theories, see e.g. \cite{Bahamonde:2016jqq,Bahamonde:2016ixz,Jamil:2012ti,Jamil:2008wu}. Among them, the possibility of phantom energy presents a natural scenario for the existence of traversable wormholes \cite{Lobo:2005us}. In addition, the wormhole construction in $f(R)$ gravity is studied in Refs.\cite{Bahamonde:2016ixz,Rahaman:2013qza}. Interestingly, the Casimir effect also provides a way to produce negative energy density and can be used to stabilize traversable wormholes.
The main aim of this paper is to investigate the effect of the Generalized Uncertainty Principle (GUP) on the Casimir wormhole spacetime recently proposed by Garattini \cite{Garattini:2019ivd}. In particular, we consider three types of GUP relations: (1) the Kempf, Mangano and Mann (KMM) model, (2) the Detournay, Gabriel and Spindel (DGS) model, and (3) the so-called type II model of the GUP. We study a class of asymptotically flat wormhole solutions supported by the Casimir energy under the effect of the GUP.
This paper is organized as follows: In Sec.\ref{Sec2}, we take a short recap of the Casimir effect under the Generalized
Uncertainty Principle and consider three models with the generic functions $f\left(\hat{p}^{2}\right)$ and $g\left(\hat{p}^{2}\right)$. In Sec.\ref{gupWorm}, we construct the GUP Casimir wormholes, focusing on the three types of GUP relations. We then examine the energy conditions of our proposed models in Sec.\ref{Ener} and quantify the amount of exotic matter required for wormhole maintenance in Sec.\ref{amont}. Furthermore, we study the gravitational lensing effect in the spacetime of the GUP Casimir wormholes in Sec.\ref{chvi}. We finally conclude our findings in the last section. In the present work, we use geometrical units such that $G=c=1$.
\section{The Casimir Effect under the Generalized
Uncertainty Principle}
\label{Sec2}
The Casimir effect manifests itself as the interaction of a pair of neutral, parallel conducting planes caused by the disturbance of the vacuum
of the electromagnetic field. The Casimir effect can be described in terms of the zero-point energy of a quantized field in the intervening space between the objects. It is a macroscopic quantum effect which causes the plates to attract each other. In his famous paper \cite{Casimir}, Casimir derived the finite energy between plates and found that the energy per unit surface is given by
\begin{eqnarray}
\mathcal{E}=-\frac{\pi^{2}}{720}\frac{\hbar }{a^{3}},\label{eq:2}
\end{eqnarray}
where $a$ is the distance between the plates along the $z$-axis, the direction perpendicular to the plates. Consequently, we can determine the finite force per unit area acting between the plates to yield $\mathcal{F}=-\frac{\pi^{2}}{240}\frac{\hbar }{a^{4}}$. Notice that the minus sign corresponds to an attractive force. The resolution of small distances in spacetime is limited by the existence of a minimal length in the theory. Note that the prediction of a minimal measurable length of the order of the Planck length in various theories
of quantum gravity restricts the maximum energy that any particle can attain to the Planck
energy. This implies a modification of the linear momentum and of the quantum commutation
relations, and results in a modified dispersion relation, e.g., gravity's rainbow \cite{Magueijo:2002xx}; see some particular cosmological \cite{Chatrabhuti:2015mws,Channuie:2019kus,Hendi:2016tiy} and astrophysical implications \cite{Hendi:2016hbe,Feng:2017gms,Hendi:2018sbe,Panahiyan:2018fpb,Dehghani:2018qvn}. Moreover, this scale naturally arises in theories of quantum gravity in the form of an effective minimal uncertainty in position, $\Delta x_{0}>0$.
For instance, in string theory it is impossible to improve the spatial resolution below the characteristic length of the strings. As a result, a correction to the position-momentum uncertainty relation related to this characteristic length can be obtained. In one dimension, this minimal length can be implemented by adding corrections to the uncertainty relation to obtain
\begin{eqnarray}
\label{MUC}
\Delta x \Delta p \geq \frac{\hbar}{2}\left[1+\beta\left(\Delta p\right)^{2}+\gamma\right],\qquad\beta,\gamma>0, \label{cr}
\end{eqnarray}
where a finite minimal uncertainty $\Delta x_{0}=\hbar\sqrt{\beta}$ in terms of the minimum length parameter $\beta$ appears. As a result, the modification of the uncertainty relation Eq. \eqref{cr} implies a small correction term to the usual Heisenberg commutator relation of the form:
\begin{eqnarray}
\label{MCR}
\left[\hat{x},\hat{p}\right]=i\hbar\left(1+\beta\hat{p}^{2}+\ldots\right). \label{H}
\end{eqnarray}
It is worth noting in these theories that the eigenstates of the position operator are no longer physical states whose matrix elements would have the usual direct physical interpretation about positions. Therefore, one introduces the "quasi-position representation", which consists in projecting the states onto the set of maximally localized states. Interestingly, the usual commutation relation given in Eq.(\ref{H}) can be basically generalized. In $n$ spatial dimensions, the generalized commutation relations leading to the GUP that provides a minimal uncertainty are assumed of the form \cite{Frassino:2011aa}:
\begin{eqnarray}
\left[\hat{x}_{i},\,\hat{p}_{j}\right] & = & i\hbar\left[f\left(\hat{p}^{2}\right)\delta_{ij}+g\left(\hat{p}^{2}\right)\hat{p}_{i}\hat{p}_{j}\right]\,, \label{RC}
\end{eqnarray}
where $i,j=1, ... n$ and the generic functions $f\left(\hat{p}^{2}\right)$ and $g\left(\hat{p}^{2}\right)$ are not necessarily arbitrary. Note that the relations between them can be quantified by imposing translational and rotational invariance on the generalized commutation relations. As mentioned in Ref.\cite{Frassino:2011aa}, the specific form of these states depends on the number of dimensions and on the specific model considered. For example, when $n>1$ the generalized uncertainty relations are not unique and different models may be obtained by choosing different functions $f\left(\hat{p}^{2}\right)$ and/or $g\left(\hat{p}^{2}\right)$ which will yield different maximally localized states.
\subsection{Model I (KMM)}
In the literature there are at least two different approaches to construct maximally localized states: the procedure proposed by Kempf, Mangano and Mann (KMM) and the one proposed by Detournay, Gabriel and Spindel (DGS). Both correspond to the choice of the generic functions $f\left(\hat{p}^{2}\right)$ and $g\left(\hat{p}^{2}\right)$ given in Ref. \cite{Frassino:2011aa}:
\begin{eqnarray}f\left(\hat{p}^{2}\right)=\frac{\beta\hat{p}^{2}}{\sqrt{1+2\beta\hat{p}^{2}}-1},\qquad g\left(\hat{p}^{2}\right)=\beta\,.\label{eq:4}\end{eqnarray}
From now on we will remove the hat over the operator.
Following the KMM construction, one then obtains the final result, with the first-order correction term in the minimal uncertainty parameter $\beta$ introduced through the modified commutation relations of Eq.~\eqref{MCR}:
\begin{eqnarray}
\mathcal{E} & = & -\frac{\pi^{2}}{720}\frac{\hbar }{a^{3}}\left[1+\pi^{2}\left(\frac{28+3\sqrt{10}}{14}\right)\left(\frac{\hbar\sqrt{\beta}}{a}\right)^{2}\right]. \label{eq:RisultatoK}
\end{eqnarray}
The force per unit area relation in this model is given by
\begin{eqnarray}
\mathcal{F} & = & -\frac{\pi^{2}}{240}\frac{\hbar }{a^{4}}\left[1+\pi^{2}\left(\frac{10}{3}+\frac{5\sqrt{10}}{14}\right)\left(\frac{\hbar\sqrt{\beta}}{a}\right)^{2}\right]. \label{e7}
\end{eqnarray}
\subsection{Model I (DGS)}
In this model the Casimir energy per unit surface is given by \cite{Frassino:2011aa}
\begin{eqnarray}
\mathcal{E}& = & -\frac{\pi^{2}}{720}\frac{\hbar }{a^{3}}\left[1+\pi^{2}\frac{4\left(3+\pi^{2}\right)}{21}\left(\frac{\hbar\sqrt{\beta}}{a}\right)^{2}\right]\,.
\label{eq:RisultatoK}
\end{eqnarray}
On the other hand, the finite force per unit area acting between the plates is
\begin{eqnarray}
\mathcal{F}& = & -\frac{\pi^{2}}{240}\frac{\hbar }{a^{4}}\left[1+\pi^{2}\left(\frac{20}{21}+\frac{20\pi^{2}}{63}\right)\left(\frac{\hbar\sqrt{\beta}}{a}\right)^{2}\right]. \label{e9}
\end{eqnarray}
\subsection{Model II }
The model proposed here is completely different from that given by Eq. \eqref{eq:4}. This model has
the functions $f$ and $g$ as follows \cite{Frassino:2011aa}:
\begin{eqnarray}
f\left(p^{2}\right)=1+\beta p^{2},\qquad g\left(p^{2}\right)=0.
\end{eqnarray}
One obtains then the final result \cite{Frassino:2011aa}
\begin{eqnarray}
\mathcal{E}
= -\frac{\pi^{2}}{720}\frac{\hbar }{a^{3}}\left[1+\pi^{2}\frac{2}{3}\left(\frac{\hbar\sqrt{\beta}}{a}\right)^{2}\right].\label{eq:Risultato}
\end{eqnarray}
The first term in Eq. \eqref{eq:Risultato} is the usual Casimir energy reported in Eq. \eqref{eq:2} and is obtained without the cut-off function.
The second term is the correction due to the presence of a minimal length in the theory. We note that it is attractive.
The force per unit area in this model is given by
\begin{eqnarray}
\mathcal{F} & = & -\frac{\pi^{2}}{240}\frac{\hbar }{a^{4}}\left[1+\pi^{2}\frac{10}{9}\left(\frac{\hbar\sqrt{\beta}}{a}\right)^{2}\right]. \label{e12}
\end{eqnarray}
\subsection{GUP corrected energy density}
Let us now elaborate in more detail on the GUP-corrected energy densities by first writing the renormalized energies for the three GUP cases
\begin{equation}
E=-\frac{\pi^{2}S}{720}\frac{\hbar }{a^{3}}\left[1+C_i\left(\frac{\hbar\sqrt{\beta}}{a}\right)^{2}\right]
\end{equation}
where $S$ is the surface area of the plates and
$a$ is the separation between them. Note that we have introduced the constant $C_i$ where $i=1,2,3$. In particular we have the following three cases:
\begin{eqnarray}
C_1&=&\pi^{2}\left(\frac{28+3\sqrt{10}}{14}\right),\\
C_2&=&4\pi^{2}\left(\frac{3+\pi^2}{21}\right),\label{e19}\\
C_3&=&\frac{2 \pi^2}{3}.\label{e20}
\end{eqnarray}
Then the force can be obtained by computing
\begin{equation}
F=-\frac{dE}{da}=-\frac{3 \pi^{2}S}{720}\frac{\hbar }{a^{4}}\left[1+\frac{5}{3}C_i\left(\frac{\hbar\sqrt{\beta}}{a}\right)^{2}\right]
\end{equation}
Thus, using
\begin{equation}
P=\frac{F}{S}=-\frac{3 \pi^{2}}{720}\frac{\hbar }{a^{4}}\left[1+\frac{5}{3}C_i\left(\frac{\hbar\sqrt{\beta}}{a}\right)^{2}\right]=\omega \rho.
\end{equation}
At this point we note that in the case of the Casimir energy there is a natural EoS establishing this fundamental relationship, obtained by choosing $\omega=3$. From the last equation we obtain the GUP-corrected energy density in compact form as
\begin{equation}
\rho=-\frac{ \pi^{2}}{720}\frac{\hbar }{a^{4}}\left[1+\frac{5}{3}C_i\left(\frac{\hbar\sqrt{\beta}}{a}\right)^{2}\right].
\end{equation}
Setting $\beta=0$, we obtain the usual Casimir result. In this way we can introduce a new constant $D_i=5C_i/3$; however, the GUP extension does not appear to be uniquely defined, and different extensions lead to different $D_i$. This, on the other hand, suggests a possible extension of the energy density. For example, one can postulate the following form
\begin{equation}
\rho=-\frac{ \pi^{2}}{720}\frac{\hbar }{a^{4}}\left[1+A_i\left(\frac{\hbar\sqrt{\beta}}{a}\right)^{2}+B_i\left(\frac{\hbar\sqrt{\beta}}{a}\right)^{4}+... \right],
\end{equation}
where $A_i$ and $B_i$ are some constants. In the present work we shall use the expression (19) for the energy density and leave Eq. (20) for future work.
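As a cross-check of the algebra above (an added illustration, with the EoS $\omega=3$ imposed by hand as in the text), the following sympy lines reproduce the factor $D_i=5C_i/3$ relating the compact density to the energy coefficients $C_i$:
\begin{verbatim}
import sympy as sp

a, S, hbar, beta, C = sp.symbols('a S hbar beta C_i', positive=True)

E = -sp.pi**2*S*hbar/(720*a**3)*(1 + C*(hbar*sp.sqrt(beta)/a)**2)
F = -sp.diff(E, a)                 # attractive force between the plates
rho = F/S/3                        # EoS: P = F/S = 3*rho
target = -sp.pi**2*hbar/(720*a**4)*(1 + sp.Rational(5, 3)*C*(hbar*sp.sqrt(beta)/a)**2)
print(sp.simplify(rho - target))   # -> 0, i.e. D_i = 5 C_i/3
\end{verbatim}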
\section{GUP Casimir Wormholes}
\label{gupWorm}
We consider a static and spherically symmetric Morris-Thorne traversable wormhole in the Schwarzschild coordinates given by \cite{Morris}
\begin{equation}
\mathrm{d}s^{2}=-e^{2\Phi (r)}\mathrm{d}t^{2}+\frac{\mathrm{d}r^{2}}{1-\frac{b(r)}{r}}+r^{2}\left(
\mathrm{d}\theta ^{2}+\sin ^{2}\theta \mathrm{d}\phi ^{2}\right), \label{5}
\end{equation}
in which $\Phi (r)$ and $b(r)$ are the redshift and shape
functions, respectively. In the wormhole geometry, the redshift
function $\Phi (r)$ should be finite in order to avoid the
formation of an event horizon. Moreover, the shape function $b(r)$ determines the wormhole geometry, with the following condition $b(r_{0})=r_{0}$, in which $r_{0}$ is the radius of the wormhole throat. Consequently, the shape function must satisfy the flaring-out condition \cite{Morris}:
\begin{equation}
\frac{b(r)-rb^{\prime }(r)}{b^{2}(r)}>0,
\end{equation}%
in which $b^{\prime }(r)=\frac{db}{dr}<1$ must hold at the throat of the
wormhole. With the help of the line element \eqref{5}, we obtain the following set of equations from the energy-momentum components:
\begin{eqnarray}
\rho (r) &=&\frac{1}{8\pi r^{2}} b^{\prime }(r), \\
\mathcal{P}_{r}(r) &=&\frac{1}{8\pi }\left[ 2\left( 1-\frac{b(r)}{r}\right)
\frac{\Phi ^{\prime }}{r}-\frac{b(r)}{r^{3}}\right]
, \\
\mathcal{P}_t(r) &=&\frac{1}{8\pi }\left( 1-\frac{b(r)}{r}\right) \Big[\Phi
^{\prime \prime }+(\Phi ^{\prime })^{2}-\frac{b^{\prime }r-b}{2r(r-b)}\Phi
^{\prime } \notag \\
&-&\frac{b^{\prime }r-b}{2r^{2}(r-b)}+\frac{\Phi ^{\prime }}{r%
}\Big]. \label{18}
\end{eqnarray}%
where $\mathcal{P}_t=\mathcal{P}_{\theta }=\mathcal{P}_{\phi }$.
Given the energy density, we can find the shape function $b(r)$ and then use the EoS with a specific value of $\omega$ to determine the redshift function. However, it is known that, in general, most of the resulting solutions are unbounded for very large $r$, and hence may not be physical. In the present paper we are instead interested in deriving the equation of state (connecting the pressures with the density) for a given wormhole geometry. In other words, we fix the geometry using different redshift functions of the wormhole and then ask what the EoS parameter is in the corresponding case. Moreover, we also need to check the behavior of the energy conditions near the throat.
In order to simplify the notation from now on, we shall set the Planck constant to one, i.e., $\hbar=1$.
\subsection{Model $\Phi=constant$}
To simplify our calculations, we introduce $D_i$ and make the replacement $a \to r$ in the expression for the energy density. In that case, using Eq. (19), the energy density relation can be rewritten as
\begin{eqnarray}
\rho& = & -\frac{\pi^{2}}{720 r^4}\left[1+D_{i}\left(\frac{\sqrt{\beta}}{r}\right)^{2}\right]. \label{e18}
\end{eqnarray}
where $i=1,2,3$. In particular we have the following three cases:
\begin{eqnarray}
D_1&=&5\,\pi^{2}\left(\frac{28+3\sqrt{10}}{42}\right),\\
D_2&=&20\,\pi^{2}\,\left(\frac{3+\pi^2}{63}\right),\label{e19}\\
D_3&=&\frac{10 \pi^2}{9}.\label{e20}
\end{eqnarray}
The simplest case is a model with $\Phi=constant$, i.e. a spacetime with no tidal forces, $\Phi'(r)=0$. In other words, this is an asymptotically flat wormhole spacetime.
We find
\begin{equation}
b(r)=C+\frac{\pi^3 }{90 r}+\frac{\pi^3 D_{i} \beta}{270 r^3}.
\end{equation}
\begin{figure}
\includegraphics[width=8.2cm]{bprime.pdf}
\caption{ We check the flare out condition. Variation of $b'(r)$ against $r$. We have used $r_0=1$, $\hbar=1$ and $\beta=0.1$. }\label{fig1}
\end{figure}
\begin{figure}
\includegraphics[width=8.2cm]{shapef1.pdf}
\caption{The shape function of the GUP wormhole against $r$. We use $\hbar=1$ and $\beta=0.1$. }
\end{figure}
Finally we use $b(r_0)=b_0=r_0$ to fix the constant $C$, and the shape function becomes
\begin{equation}
b(r)=r_0+\frac{\pi^3 }{90}\left(\frac{1}{r}-\frac{1}{r_0}\right)+\frac{\pi^3 D_{i} \beta}{270} \left(\frac{1}{r^3}-\frac{1}{r_0^3}\right).\label{s22}
\end{equation}
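The shape function just obtained can be verified directly against the field equation, the throat condition and asymptotic flatness; a short symbolic check (ours, with $\hbar=1$ and a generic $D_i$) reads:
\begin{verbatim}
import sympy as sp

r, r0, beta, D = sp.symbols('r r_0 beta D_i', positive=True)

rho = -sp.pi**2/(720*r**4)*(1 + D*beta/r**2)      # GUP-corrected density
b = r0 + sp.pi**3/90*(1/r - 1/r0) + sp.pi**3*D*beta/270*(1/r**3 - 1/r0**3)

print(sp.simplify(sp.diff(b, r) - 8*sp.pi*r**2*rho))   # -> 0  (rho = b'/(8 pi r^2))
print(sp.simplify(b.subs(r, r0) - r0))                 # -> 0  (throat condition)
print(sp.limit(b/r, r, sp.oo))                         # -> 0  (asymptotic flatness)
\end{verbatim}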
Introducing the scaling of coordinate $\exp(2 \Phi) dt^2 \to dt^2$ (since $\exp(2 \Phi)=const$), the wormhole metric reads
\begin{eqnarray}\notag
ds^2&=&-dt^2+\frac{dr^2}{1-\frac{r_0}{r}-\frac{\pi^3}{90 r}\left(\frac{1}{r}-\frac{1}{r_0}\right)-\frac{\pi^3 D_{i} \beta}{270 r} \left(\frac{1}{r^3}-\frac{1}{r_0^3}\right)}\\
&+&r^2 (d\theta^2+\sin^2\theta \,d\phi^2),
\end{eqnarray}
Clearly in the limit $r \to \infty $, we obtain
\begin{equation}
\lim_{ r \to \infty} \frac{b(r)}{r} \to 0.
\end{equation}
The asymptotic flatness of the metric can also be seen from Fig.~\ref{fig1}. Using the EoS $\mathcal{P}_r(r)=\omega(r) \rho(r)$, one can easily see that when $\Phi(r)=0$ (tideless wormholes) we obtain
\begin{equation}\label{24}
8 \omega(r) \rho(r) \pi r^3+b(r)=0.
\end{equation}
Solving this equation for the EoS parameter we obtain
\begin{equation}
\omega=-\frac{\beta D_{i} \pi^3 (r^3-r_0^3)+3r_0^2r^2((\pi^3-90r_0^2)r-\pi^3 r_0) }{3 (D_{i} \beta +r^2) \pi^3 r_0^3 }.\label{e27}
\end{equation}
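The rearrangement leading to Eq. (\ref{e27}) can be confirmed symbolically as well; the following lines (again an added check, not part of the derivation) compare $\omega=-b/(8\pi\rho r^{3})$ with the quoted expression:
\begin{verbatim}
import sympy as sp

r, r0, beta, D = sp.symbols('r r_0 beta D_i', positive=True)

rho = -sp.pi**2/(720*r**4)*(1 + D*beta/r**2)
b = r0 + sp.pi**3/90*(1/r - 1/r0) + sp.pi**3*D*beta/270*(1/r**3 - 1/r0**3)

omega = -b/(8*sp.pi*rho*r**3)      # from  8 omega rho pi r^3 + b = 0
quoted = -(beta*D*sp.pi**3*(r**3 - r0**3)
           + 3*r0**2*r**2*((sp.pi**3 - 90*r0**2)*r - sp.pi**3*r0)) \
         / (3*(D*beta + r**2)*sp.pi**3*r0**3)
print(sp.simplify(omega - quoted))   # -> 0
\end{verbatim}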
\begin{figure}
\includegraphics[width=8.2cm]{omegaT.pdf}
\caption{The EoS parameter $\omega$ for the GUP wormhole with $\Phi=0$ as a function of $r$. We use $r_0=1$, $\hbar=1$ and $\beta=0.1$. }
\end{figure}
\subsection{Model with $\Phi(r)=\frac{r_0}{r}$}
\subsubsection{EoS: $\mathcal{P}_r(r)=\omega_r(r) \rho(r)$}
We shall begin our analysis by considering the EoS $\mathcal{P}_r(r)=\omega_r(r) \rho(r)$. From Einstein's field equations (\ref{18}), we find
\begin{equation}\label{24}
\Phi'(r)=-\frac{8 \omega_r(r) \rho(r) \pi r^3+b(r)}{2 r(-r+b(r))}.
\end{equation}
Now considering the model function
\begin{equation}
\Phi(r)=\frac{r_0}{r},\label{e26}
\end{equation}
we obtain the following equation
\begin{eqnarray}\notag
\frac{(r-2 r_0)b(r)+8 \omega_r(r) \rho(r) r^4 \pi+2 r_0 r}{8 \pi r^4}=0.
\end{eqnarray}
Finally, using the shape function (\ref{s22}), we obtain for the EoS parameter
\begin{equation}
\omega_r(r)=-\frac{\beta \left[D_{i} \pi^3 (r^4-2r^3 r_0-r r_0^3+2 r_0^4) \right]+\mathcal{F}}{3 (D_{i} \beta +r^2)r \pi^3 r_0^3 },\label{e27}
\end{equation}
where
\begin{eqnarray}
\mathcal{F}&=&3 \pi^3 r^4 r_0^2-9 \pi^3 r^3 r_0^3+6 \pi^3 r^2 r_0^4-810 r^4 r_0^4\nonumber\\&&+540 r^3 r_0^5.\label{e31}
\end{eqnarray}
\subsubsection{EoS: $\mathcal{P}_t(r)=\omega_t (r) \mathcal{P}_r(r)$}
Let us now consider the scenario in which the EoS is of the form $\mathcal{P}_t(r)=\omega_t(r) \mathcal{P}_r(r)$, where $\omega_t(r)$ is an arbitrary function of $r$. In this case, combining the second and the third equations in (\ref{18}) we find:
\begin{eqnarray}\notag
&2& r(r+1)(r-b(r))\Phi''(r)+2r^2 (r-b(r))(\Phi'(r))^2\\\notag
&-&r \Phi'(r)\left[ (-4 \omega_t(r)+r-1)b(r)+4 \omega_t(r) r \right]\\
&+&b(r)(2 \omega_t(r) -r +1)=0.
\end{eqnarray}
Using the shape function (\ref{s22}) along with Eq. (\ref{e26}) from the last equation we obtain
\begin{equation}
\omega_t(r)= \frac{(r-r_0)\Big[\beta D_{i} \pi^3(r^2+r r_0+r_0^2)\mathcal{H}+3 r_0^2 r^2\mathcal{G}\Big]}{2 r \left[\beta D_{i} \pi^3 (r^4-2 r^3 r_0-r r_0^3+2 r_0^4)+\mathcal{F}\right]},\label{e30}
\end{equation}
where
\begin{eqnarray}
\mathcal{H} &=&2 r_0^2+r_0(4+5 r-r^2)+r^2(r-1),\label{e31}\\
\mathcal{G}&=&180 r r_0^3+r_0^2(2 \pi^3-90 r^3+450 r^2+360 r)- \pi^3
\nonumber\\&&\times r_0 (r^2-5 r-4)+r^2 \pi^3(r-1),\label{e32}\\
\mathcal{F}&=& 3 \pi^3 r^4 r_0^2-9\pi^3 r^3 r_0^3+6\pi^3 r^2 r_0^4-810 r_0^4 r^4\nonumber\\&&+540 r^3 r_0^5.
\end{eqnarray}
\begin{figure}
\includegraphics[width=8.2cm]{omega1.pdf}
\caption{The EoS parameter $\omega_r(r)$ against $r$. We use $r_0=1$, $\hbar=1$ and $\beta=0.1$ along with a non-constant redshift
function $\Phi=r_0/r$. }
\end{figure}
\begin{figure}
\includegraphics[width=8.2cm]{n1.pdf}
\caption{The EoS parameter $\omega_t(r)$ for the GUP wormhole with a non-constant redshift
function $\Phi=r_0/r$ as a function of $r$. We use $r_0=1$, $\hbar=1$ and $\beta=0.1$. We consider only the case $D_1$. }\label{fig4}
\end{figure}
Finally the GUP Casimir wormhole metric can be written as
\begin{equation}
\mathrm{d}s^{2}=-\exp\left({\frac{2 r_0}{r}}\right)\mathrm{d}t^{2}+\frac{\mathrm{d}r^{2}}{1-\frac{b(r)}{r}}+r^{2}\left(
\mathrm{d}\theta ^{2}+\sin ^{2}\theta \mathrm{d}\phi ^{2}\right),
\end{equation}
with the shape function given by Eq.(\ref{s22}); the solution satisfies the EoS with the parameters $\omega_r$ and $\omega_t$ given by Eq.(\ref{e27}) and Eq.(\ref{e30}), respectively.
\subsection{Model $\exp(2\Phi(r))=1+\frac{\gamma^2}{r^2}$}
Our second example is the following wormhole metric given by
\begin{equation}
\mathrm{d}s^{2}=-\left(1+\frac{\gamma^2}{r^2}\right)\mathrm{d}t^{2}+\frac{\mathrm{d}r^{2}}{1-\frac{b(r)}{r}}+r^{2}\left(
\mathrm{d}\theta ^{2}+\sin ^{2}\theta \mathrm{d}\phi ^{2}\right),
\end{equation}
where $\gamma$ is some positive parameter and $r\geq r_0$. As in the last section, we can assume the EoS $\mathcal{P}_r(r)=\omega_r(r) \rho(r)$ and then solve the corresponding field equation for the EoS parameter $\omega_r(r)$. Due to the limitation of space, here we skip the full expression for $\omega_r(r)$ and give only its plot as a function of $r$, illustrated in Fig.\ref{f6}. Finally, we can use the EoS of the form $\mathcal{P}_t(r)=\omega_t(r) \mathcal{P}_r(r)$ and obtain an expression for $\omega_t(r)$. As already pointed out, we skip the full expression; the dependence of $\omega_t(r)$ on $r$ is shown in Fig.\ref{f7}.
\begin{figure}
\includegraphics[width=8.2cm]{omega22.pdf}
\caption{The EoS parameter $\omega_r(r)$ against $r$ using the model $\exp(2\Phi(r))=1+\frac{\gamma^2}{r^2}$ for different values of $\gamma$. We use $r_0=1$, $\hbar=1$ and $\beta=0.1$. Here we consider the case $D_1$. }\label{f6}
\end{figure}
\begin{figure}
\includegraphics[width=8.2cm]{n22.pdf}
\caption{The EoS parameter $\omega_t(r)$ against $r$ using the model $\exp(2\Phi(r))=1+\frac{\gamma^2}{r^2}$ and different values of $\gamma$. We use $r_0=1$, $\hbar=1$ and GUP parameter $\beta=0.1$ and $D_1$. }\label{f7}
\end{figure}
\subsection{Isotropic model with $\omega_r(r)=const.$}
From the conservation equation $\nabla_{\mu}T^{\mu\nu}=0$, we can obtain the hydrostatic equation for equilibrium
of the matter sustaining the wormhole
\begin{equation}
\mathcal{P'}_r(r)=\frac{2(\mathcal{P}_t(r)-\mathcal{P}_r(r))}{r}-(\rho(r)+\mathcal{P}_r(r))\Phi'(r),
\end{equation}
where we have considered a perfect fluid with $\mathcal{P}_t=\mathcal{P}_r$ and assumed the EoS $\mathcal{P}_r(r)=\omega_r \rho(r)$, where $\omega_r$ is now a constant parameter. The equation then reduces to
\begin{equation}
\omega_r \rho'(r)=-(1+\omega_r) \rho(r) \,\Phi'(r)
\end{equation}
in which $\rho(r)$ is given by Eq. (\ref{e18}). Solving the last differential equation by setting $\rho(r)\rightarrow -|\rho(r)|$, we obtain the following result
\begin{equation}
\Phi(r)=C+\frac{\omega_r}{\omega_r+1}\left[\ln\left(\frac{r^6}{r^2+\beta D_i}\right)\right].\label{iso}
\end{equation}
Absorbing the constant $C$ via a rescaling of the time coordinate, the wormhole metric element can be written as
\begin{eqnarray}\notag
\mathrm{d}s^{2}&=&-\left(\frac{r^6}{r^2+\beta D_i}\right)^{\frac{2}{1+1/\omega_r}}\mathrm{d}t^{2}+\frac{\mathrm{d}r^{2}}{1-\frac{b(r)}{r}}\\
&+& r^{2}\left(
\mathrm{d}\theta ^{2}+\sin ^{2}\theta \mathrm{d}\phi ^{2}\right),
\end{eqnarray}
where $r \geq r_0$. It is easy to see that the above solution is finite at the wormhole throat $r=r_0$, provided $\omega_r \neq -1$. Note that the redshift function $\Phi$ is unbounded for large $r$; as a result, one cannot construct asymptotically flat GUP wormholes with isotropic pressure and, in general, such solutions may not be physical.
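As a consistency check (added here for illustration), one can verify symbolically that the redshift function (\ref{iso}) satisfies the conservation equation for a constant $\omega_r$:
\begin{verbatim}
import sympy as sp

r, beta, D, w, C = sp.symbols('r beta D_i omega_r C', positive=True)

rho = -sp.pi**2/(720*r**4)*(1 + D*beta/r**2)
Phi = C + w/(w + 1)*sp.log(r**6/(r**2 + beta*D))

# conservation equation for the isotropic case: w*rho' = -(1 + w)*rho*Phi'
residual = w*sp.diff(rho, r) + (1 + w)*rho*sp.diff(Phi, r)
print(sp.simplify(residual))       # -> 0
\end{verbatim}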
\subsection{Anisotropic model with $\omega_r=const.$}
As we have observed, the isotropic model is of very limited physical interest. In this final example we shall elaborate an anisotropic GUP Casimir wormhole spacetime. To do so, we can use the relations $\mathcal{P}_t(r)= n\,\omega_r\, \rho(r)$ and $\mathcal{P}_r(r)=\omega_r \rho(r)$, where $n$ is some constant. We get the following relation
\begin{equation}
\omega_r\, \rho'(r)=\frac{2\,\omega_r\, \rho(r)\left(n-1\right)}{r}-(1+\omega_r) \rho(r)\,\Phi'(r).
\end{equation}
Solving this equation for the redshift function, we obtain
\begin{equation}
\Phi(r)=C+\frac{\omega_r}{\omega_r+1}\left[\ln\left(\frac{r^{2(n+2)}}{r^2+\beta D_i}\right) \right].
\end{equation}
We obtain the metric
\begin{eqnarray}\notag
\mathrm{d}s^{2}&=&-\left(\frac{r^{2(n+2)}}{r^2+\beta D_i}\right)^{\frac{2}{1+1/\omega_r}}\mathrm{d}t^{2}+\frac{\mathrm{d}r^{2}}{1-\frac{b(r)}{r}}\\
&+& r^{2}\left(
\mathrm{d}\theta ^{2}+\sin ^{2}\theta \mathrm{d}\phi ^{2}\right),
\end{eqnarray}
provided $r \geq r_0$. Notice that we recover the isotropic case (\ref{iso}) when setting $n=1$, and that there is a singularity at $\omega_r=-1$. In the anisotropic case it is not difficult to show that one can construct an asymptotically flat spacetime. Setting $n=-1$ and $\omega_r \neq -1$, the above metric reduces to
\begin{eqnarray}\notag
\mathrm{d}s^{2}&=&-\left(\frac{1}{1+\frac{\beta D_i}{r^2}}\right)^{\frac{2}{1+1/\omega_r}}\mathrm{d}t^{2}+\frac{\mathrm{d}r^{2}}{1-\frac{b(r)}{r}}\\
&+& r^{2}\left(
\mathrm{d}\theta ^{2}+\sin ^{2}\theta \mathrm{d}\phi ^{2}\right),
\end{eqnarray}
which is an asymptotically flat spacetime. In fact, it is easy to check that the case $n=-1$ gives the only asymptotically flat solution. As we can see from Fig.\ref{f88}, in the limit $r \to \infty$ we obtain $\exp(2\Phi(r))=1$, as expected.
\begin{figure}
\includegraphics[width=8.0 cm]{anisotropic.pdf}
\caption{We plot $\exp(2\Phi(r))$ for the anisotropic case. We have used $r_0=1$, $\hbar=1$, $\beta=0.1$, $n=-1$ and $\omega=1$. }\label{f88}
\end{figure}
\section{Embedding Diagram}
In this section we discuss the embedding diagram representing the GUP-corrected Casimir wormhole by considering an equatorial slice $\theta=\pi/2$ at some fixed moment in time $t=constant$. The metric can be written as
\begin{equation}
ds^2=\frac{dr^2}{1-\frac{b(r)}{r}}+r^2d\phi^2.\label{emb}
\end{equation}
We embed the metric (\ref{emb}) into three-dimensional Euclidean space to visualize this slice; in cylindrical coordinates the Euclidean metric can be written as
\begin{equation}
ds^2=dz^2+dr^2+r^2d\phi^2.
\end{equation}
From the last two equations we find that
\begin{equation}
\frac{dz}{dr}=\pm \sqrt{\frac{r}{r-b(r)}-1}.
\end{equation}
where $b(r)$ is given by Eq. (\ref{s22}). Note that the integration of the last expression cannot
be accomplished analytically. Invoking numerical techniques allows us to illustrate the
wormhole shape given in Fig.\ref{f8}. From Fig.~\ref{f8} we observe the effect of the GUP parameter on the wormhole geometry.
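For reference, the profile shown in Fig.~\ref{f8} can be reproduced with a few lines of numerical quadrature. The sketch below (ours) uses the KMM constant $D_1$, the parameters of the left panel, and an arbitrary outer radius $r=10$; it returns only the upper sheet $z(r)\geq 0$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

r0, beta = 1.0, 0.06                                  # throat radius and GUP parameter
D1 = 5*np.pi**2*(28 + 3*np.sqrt(10))/42               # KMM constant D_1 (hbar = 1)

def b(r):
    return (r0 + np.pi**3/90*(1/r - 1/r0)
            + np.pi**3*D1*beta/270*(1/r**3 - 1/r0**3))

def dz_dr(r):
    return np.sqrt(r/(r - b(r)) - 1.0)

eps = 1e-8                                            # avoid the sqrt singularity at r = r0
r_vals = np.linspace(r0 + eps, 10.0, 200)
z_vals = np.array([quad(dz_dr, r0 + eps, r, limit=200)[0] for r in r_vals])
print(z_vals[-1])     # height of the embedded surface at r = 10 (plot z_vals vs r_vals)
\end{verbatim}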
\begin{figure*}
\includegraphics[width=6.0 cm]{wormhole.pdf}\hspace{1.5 cm}
\includegraphics[width=6.5 cm]{wormhole2.pdf}
\caption{The GUP Corrected Casimir wormhole embedded
in a three-dimensional Euclidean space. Left panel: We have used $r_0=1$, $\hbar=1$, $\beta=0.06$. Right panel: We have used $r_0=1$, $\hbar=1$, $\beta=0.18$. In both plots we have used $D_1$. }\label{f8}
\end{figure*}
\section{ADM Mass of GUP wormhole}
Now let us compute the ADM mass of the GUP Casimir wormhole. We consider the asymptotically flat
spacetime
\begin{eqnarray}
ds^2_{\Sigma} = \psi(r)dr^2+r^2 \chi(r)\left(d\theta^2+\sin^2\theta d\phi^2\right),
\end{eqnarray}
where we have identified
\begin{equation}
\psi(r)=\frac{1}{1-\frac{b(r)}{r}},\quad \text{and} \quad \chi(r)=1.
\end{equation}
In order to compute the ADM mass, we follow the approach of Ref. \cite{Shaikh:2018kfv} and use the relation
\begin{equation}\label{for}
M_{ADM}=\lim_{r\to \infty} \frac{1}{2}\left[-r^2 \chi'+r(\psi -\chi) \right].
\end{equation}
On substituting the values in (\ref{for}) and after computing the limit we get the ADM mass for the
wormhole,
\begin{equation}\label{ADM}
M_{ADM}=r_0-\frac{\pi^3 }{90 r_0}-\frac{\beta D_{i} \pi^3 }{270 r_0^3}.
\end{equation}
Note that this is the mass of the wormhole as seen by an observer located at asymptotic spatial infinity. It is observed that the GUP effect decreases the ADM mass. Notice that the ADM mass (\ref{ADM}) consists of three terms: the geometric term $r_0$, a semiclassical quantum effect of the spacetime given by the second term, and finally the GUP effect given by the third term. Concerning the GUP parameter, let us point out that in Ref. \cite{Das:2008kaa} the authors have speculated about the possibility of predicting upper bounds on the quantum gravity parameter in the GUP, compatible with experiments at the electroweak scale.
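As a simple numerical illustration (ours), one can evaluate Eq. (\ref{ADM}) for the KMM case with $r_0=1$ and $\hbar=1$, with and without the GUP term:
\begin{verbatim}
import numpy as np

r0 = 1.0                                     # throat radius (hbar = 1)
D1 = 5*np.pi**2*(28 + 3*np.sqrt(10))/42      # KMM constant

def M_ADM(beta):
    return r0 - np.pi**3/(90*r0) - beta*D1*np.pi**3/(270*r0**3)

print(M_ADM(0.0), M_ADM(0.1))                # the GUP correction lowers the ADM mass
\end{verbatim}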
\section{Energy Conditions}
\label{Ener}
Given the redshift function and the shape function, we can compute the energy-momentum components. In
particular for the radial component we find
\begin{equation}
\mathcal{P}_r=\frac{\beta D_{i} \pi^3 (r^4-2 r^3 r_0-r r_0^3+2 r_0^4)+\mathcal{F} }{2160 r^7 r_0^3 \pi},
\end{equation}
where $\mathcal{F} $ is given by Eq. (\ref{e31}). On the other hand for the tangential component of the pressure we find the following result
\begin{equation}
\mathcal{P}_t=\frac{(r-r_0) \left[\beta D_{i} \pi^3 (r^2+r r_0+r_0^2)\mathcal{H}+3r_0^2 r^2 \mathcal{G}\right]}{4320 r_0^3 \pi r^8},
\end{equation}
in which $\mathcal{H}$ and $\mathcal{G}$ are given by Eq. (\ref{e31}) and (\ref{e32}), respectively.
\begin{figure}
\includegraphics[width=8cm]{En1.pdf}
\caption{ The variation of $\rho+\mathcal{P}_r$ as a function of $r$ using $\Phi=r_0/r$. We use $r_0=1$, $\hbar=1$, $\beta=0.1$ and $D_1$. }\label{fig5}
\end{figure}
\begin{figure}
\includegraphics[width=8cm]{En2.pdf}
\caption{ The variation of $\rho+2\mathcal{P}_t$ against $r$ and $\Phi=r_0/r$. We use $r_0=1$, $\hbar=1$, $\beta=0.1$ and $D_1$. }\label{fig6}
\end{figure}
\begin{figure}
\includegraphics[width=8cm]{En3.pdf}
\caption{ The variation of $\rho+\mathcal{P}_r+2 \mathcal{P}_t$ against $r$ and $\Phi=r_0/r$. We use $r_0=1$, $\hbar=1$, $\beta=0.1$ and $D_1$.}\label{fig7}
\end{figure}
With these results in hand we can continue our discussion of the energy conditions and make some regional plots to check their validity. In particular, we recall that the WEC is
defined by $T_{\mu \nu }U^{\mu }U^{\nu }\geq 0$, i.e.,
\begin{equation}
\rho (r)+\mathcal{P}_{r}(r)\geq 0,
\end{equation}
where $T_{\mu \nu }$ is the energy-momentum tensor and $U^{\mu }$ denotes a timelike vector. In other words, the
local energy density measured by any timelike observer is non-negative, and by continuity the WEC implies the NEC,
which is defined by $T_{\mu \nu }k^{\mu }k^{\nu }\geq 0$, i.e.,
\begin{equation}
\rho (r)+\mathcal{P}_{r}(r)\geq 0,
\end{equation}
where $k^{\mu }$ is a null vector. On the other hand the strong energy condition (SEC) stipulates that
\begin{equation}
\rho (r)+2\mathcal{P}_t(r)\geq 0,
\end{equation}
and
\begin{equation}
\rho (r)+\mathcal{P}_{r}(r)+2\mathcal{P}_t(r)\geq 0.
\end{equation}
We see from Figs.(\ref{fig5}-\ref{fig7}), and similarly Figs.(\ref{fig11}-\ref{figure13}), that the NEC, WEC, and SEC are not satisfied at the wormhole throat $r=r_0$. In fact, one can check numerically that in all plots at the wormhole throat $r=r_0$ we have $\left(\rho+\mathcal{P}_r\right)\vert_{r_0=1}<0$, along with $\left(\rho+\mathcal{P}_r+2\mathcal{P}_t\right)\vert_{r_0=1}<0$, by small amounts.
However, from quantum field theory it is known that quantum fluctuations violate most energy conditions without any restriction, and this opens the possibility that quantum fluctuations play an important role in wormhole stability. For instance, one can
examine the consequences of the constraint imposed by the Quantum Weak Energy Condition (QWEC) given by \cite{Garattini:2019ivd}
\begin{equation}
\rho (r)+\mathcal{P}_{r}(r)<f(r), \,\,\,f(r)>0,
\end{equation}
where $r \in[r_0,\infty)$. Thus such small violations of energy conditions due to the quantum fluctuations are possible in quantum field theory.
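The behavior near the throat can also be checked numerically. The sketch below (ours) evaluates $\rho+\mathcal{P}_r$ directly from the field equations (\ref{18}) for $\Phi=r_0/r$ and the KMM constant $D_1$, rather than from the closed-form expressions above:
\begin{verbatim}
import numpy as np

r0, beta = 1.0, 0.1
D1 = 5*np.pi**2*(28 + 3*np.sqrt(10))/42                # KMM constant (hbar = 1)

b    = lambda r: (r0 + np.pi**3/90*(1/r - 1/r0)
                  + np.pi**3*D1*beta/270*(1/r**3 - 1/r0**3))
db   = lambda r: -np.pi**3/(90*r**2) - np.pi**3*D1*beta/(90*r**4)
dPhi = lambda r: -r0/r**2                              # redshift Phi = r0/r

rho = lambda r: db(r)/(8*np.pi*r**2)
Pr  = lambda r: (2*(1 - b(r)/r)*dPhi(r)/r - b(r)/r**3)/(8*np.pi)

for r in (1.0, 1.1, 1.5, 3.0):
    print(r, rho(r) + Pr(r))     # negative values: NEC (and hence WEC) violated
\end{verbatim}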
\begin{figure}
\includegraphics[width=8cm]{Eg1.pdf}
\caption{ The variation of $\rho+\mathcal{P}_r$ as a function of $r$ using $\exp(2\Phi(r))=1+\frac{\gamma^2}{r^2}$. We use $r_0=1$, $\hbar=1$ and $\beta=0.1$. }\label{fig11}
\end{figure}
\begin{figure}
\includegraphics[width=8cm]{eg3.pdf}
\caption{ The variation of $\rho+2\mathcal{P}_t$ as a function of $r$ using $\exp(2\Phi(r))=1+\frac{\gamma^2}{r^2}$. We use $r_0=1$, $\hbar=1$ and $\beta=0.1$. }\label{fig12}
\end{figure}
\begin{figure}
\includegraphics[width=8cm]{eg2.pdf}
\caption{ The variation of $\rho+\mathcal{P}_r+2\mathit{P}_t$ as a function of $r$ using $\exp(2\Phi(r))=1+\frac{\gamma^2}{r^2}$. We use $r_0=1$, $\hbar=1$ and $\beta=0.1$. }\label{figure13}
\end{figure}
\section{Amount of exotic matter}
\label{amont}
In this section we shall briefly discuss the ``volume integral quantifier,'' which basically quantifies the amount of exotic matter required for wormhole maintenance. This quantity is related only to $\rho$ and $\mathcal{P}_r$, not to the transverse components, and is defined in terms of the following definite integral
\begin{eqnarray}
\mathcal{I}_V=\oint [\rho+\mathcal{P}_r]~\mathrm{d}V=2 \int_{r_0}^{\infty} \left( \rho+\mathcal{P}_r\right)~\mathrm{d}V,
\end{eqnarray}
which can be written also as
\begin{eqnarray}
\mathcal{I}_V =8 \pi \int_{r_0}^{\infty} \left( \rho+\mathcal{P}_r \right)r^2 dr.
\end{eqnarray}
As we already pointed out, the value of this volume integral encodes information about the ``total amount'' of exotic matter in the spacetime, and we are going to evaluate this integral for our shape function $b(r)$. It is convenient to introduce a cut-off such that the wormhole extends from $r_0$ to a radius situated at $a$, and then we get the very simple result
\begin{equation}
\mathcal{I}_V= 8 \pi \int_{r_0}^{a} \left( \rho+\mathcal{P}_r \right)r^2 dr.
\end{equation}
In the special case $a \rightarrow r_0$, we should find $\int(\rho+\mathcal{P}_r)\,\mathrm{d}V \rightarrow 0$. In the specific case with $\Phi=r_0/r$, the Casimir wormhole is therefore supported by arbitrarily small quantities of exotic matter. Evaluating the above integral we find that
\begin{equation}
\mathcal{I}_V=\frac{\beta D_{i} \pi^3 (a-r_0)\mathcal{M}+18 a^2 r_0^2 \mathcal{N}}{1620 a^4 r_0^3},
\end{equation}
where
\begin{equation}
\mathcal{M}=6 a^4 \ln(\frac{a}{r_0})-17 a^4+12a^3 r_0+8 a r_0^3-3 r_0^4,
\end{equation}
and
\begin{eqnarray}
\mathcal{N}&=&\ln(\frac{a}{r_0})(\pi^3-270r^2)\nonumber\\&&-3(a-r_0)[(\pi^3-60 r_0^2)a-\pi^3 r_0].
\end{eqnarray}
From Fig.(\ref{fig166}), we observe that the quantity $\mathcal{I}_V$ is negative, i.e., $\mathcal{I}_V<0$. On the other hand we can also use the redshift $\exp(2\Phi(r))=1+\frac{\gamma^2}{r^2}$ to obtain an expression for the amount of exotic matter. Due to the limitation of space, we are going to skip the full expression for $\mathcal{I}_V$ and give only the dependence of $\mathcal{I}_V$ against $r$ and $a$, given by Fig.(\ref{fig17}). Hence it demonstrates the existence of spacetime geometries containing traversable wormholes that are supported by arbitrarily small quantities of ``exotic matter''. Such small violations of this quantity can be linked to the quantum fluctuations. We leave this interesting topic for further investigation.
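The same ingredients allow a direct numerical evaluation of $\mathcal{I}_V$ independently of the closed form quoted above; the sketch below (ours) integrates $8\pi(\rho+\mathcal{P}_r)r^{2}$ from $r_0$ to the cut-off $a$ for the case $\Phi=r_0/r$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

r0, beta = 1.0, 0.1
D1 = 5*np.pi**2*(28 + 3*np.sqrt(10))/42                # KMM constant (hbar = 1)

b    = lambda r: (r0 + np.pi**3/90*(1/r - 1/r0)
                  + np.pi**3*D1*beta/270*(1/r**3 - 1/r0**3))
db   = lambda r: -np.pi**3/(90*r**2) - np.pi**3*D1*beta/(90*r**4)
dPhi = lambda r: -r0/r**2                              # Phi = r0/r

rho = lambda r: db(r)/(8*np.pi*r**2)
Pr  = lambda r: (2*(1 - b(r)/r)*dPhi(r)/r - b(r)/r**3)/(8*np.pi)

def I_V(a):
    return 8*np.pi*quad(lambda r: (rho(r) + Pr(r))*r**2, r0, a)[0]

for a in (1.001, 1.1, 2.0, 10.0):
    print(a, I_V(a))             # small negative values, with I_V -> 0 as a -> r0
\end{verbatim}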
\begin{figure}
\includegraphics[width=8.2cm]{Iv.pdf}
\caption{ The variation of $\mathcal{I}_V$ against $r$ and $a$ of the case $\Phi=r_0/r$. We use $r_0=1$, $\hbar=1$ and $\beta=0.1$.}\label{fig166}
\end{figure}
\begin{figure}
\includegraphics[width=8.2cm]{Iv1.pdf}
\caption{ The variation of $\mathcal{I}_V$ against $r$ and $a$ of the case $\exp(2\Phi(r))=1+\frac{\gamma^2}{r^2}$. We use $r_0=1$, $\gamma=2$, $\hbar=1$ and $\beta=0.1$. }\label{fig17}
\end{figure}
\section{Light deflection by GUP Casimir wormhole}
\label{chvi}
\subsection{Case with $\Phi(r)=const$.}
In this section we shall proceed to explore the gravitational lensing effect in the spacetime of the GUP Casimir wormhole with $\Phi(r)=const$. The optical metric of GUP wormhole, in the equatorial plane, is simply found by letting $\mathrm{d}s^2=0$, yielding
\begin{eqnarray}\notag
dt^2&=&\frac{dr^2}{1-\frac{r_0}{r}-\frac{\pi^3}{90 r}\left(\frac{1}{r}-\frac{1}{r_0}\right)-\frac{\pi^3 D_{i} \beta}{270 r} \left(\frac{1}{r^3}-\frac{1}{r_0^3}\right)}\\
&+&r^2d\phi^2.
\end{eqnarray}
In the present paper, we are going to use a recent geometric method based on the Gauss-Bonnet theorem (GBT) to calculate the deflection angle. Let $\mathcal{A}_{R}$ be a non-singular domain (or a region outside the light ray) with boundaries $\partial
\mathcal{A}_{R}=\gamma_{g^{(op)}}\cup C_{R}$, of an oriented two-dimensional surface $S$ with the optical metric $g^{(op)}$. Furthermore let $K$ and $\kappa $ be the Gaussian optical
curvature and the geodesic curvature, respectively. Then, the GBT can be stated as follows \cite{Jusufi:2018waj}
\begin{equation}
\iint\limits_{\mathcal{A}_{R}}K\,\mathrm{d}S+\oint\limits_{\partial \mathcal{%
A}_{R}}\kappa \,\mathrm{d}t+\sum_{k}\theta _{k}=2\pi \chi (\mathcal{A}_{R}).
\label{10}
\end{equation}
in which $\mathrm{d}S$ is the optical surface element and $\theta _{k}$ gives the exterior angle at the $k$th vertex. Basically, the GBT provides a relation between the geometry and the topology of the spacetime. By construction, we need to choose the domain of integration to be outside of the light
ray in the $(r,\phi)$ optical plane. Moreover, this domain can be thought of as having the topology of a disc, with Euler characteristic $\chi (\mathcal{A}_{R})=1$. Next, let us introduce a smooth curve defined as $\gamma:=\{t\}\to \mathcal{A}_{R}$, with the geodesic curvature given by
\begin{equation}
\kappa =g^{(op)}\,\left( \nabla _{\dot{\gamma}}\dot{\gamma},\ddot{\gamma}%
\right),
\end{equation}%
along with the unit speed condition $g^{(op)}(\dot{\gamma},\dot{\gamma})=1$, where $\ddot{\gamma}$ is the unit acceleration vector. Now consider a very large, but finite, radial distance $l\equiv R\rightarrow \infty$, such that the two jump angles (at the source $\mathcal{S}$ and at the observer $\mathcal{O}$) yield $\theta _{\mathit{O}}+\theta _{\mathit{S}}\rightarrow \pi $. Note that, by definition, the geodesic curvature of the light ray (geodesic) $\gamma_{g^{(op)}}$ vanishes, i.e. $\kappa (\gamma_{g^{(op)}})=0$. One should then only compute the contribution of the curve $C_{R}$. That being said, from the GBT we find
\begin{equation}
\lim_{R\rightarrow \infty }\int_{0}^{\pi+\hat{\alpha}}\left[\kappa \frac{d t}{d \phi}\right]_{C_R} d \phi=\pi-\lim_{R\rightarrow \infty }\iint\limits_{\mathcal{A}_{R}}K\,\mathrm{d}S
\end{equation}
The geodesic curvature of the curve $C_{R}$, located at a coordinate distance $R$ from the coordinate system chosen at the wormhole center, can be calculated via the relation
\begin{equation}
\kappa (C_{R})=|\nabla _{\dot{C}_{R}}\dot{C}_{R}|.
\end{equation}
With the help of the unit speed condition, one can show that the asymptotically Euclidean condition is satisfied:
\begin{eqnarray}
\lim_{R\rightarrow \infty } \left[\kappa \frac{\mathrm{d}t}{\mathrm{d}\phi}\right]_{C_{R}}=1.
\end{eqnarray}
From the GBT it is not difficult to solve for the deflection angle which gives
\begin{equation}
\hat{\alpha}=-\int\limits_{0}^{\pi }\int\limits_{r=\frac{b}{\sin \phi}%
}^{\infty } K \mathrm{d}S.
\end{equation}
where the light ray is described by $r(\phi)=\mathsf{b}/\sin \phi $, with $\mathsf{b}$ the impact parameter. The Gaussian optical curvature takes the form:
\begin{equation}
K=\frac{3 r_0^2 r^2 [(\pi^3-90 r_0^2)r-2 \pi^3 r_0]+ \beta D_{i} \pi^3 (r^3-4 r_0^3) }{540 r^6 r_0^3}.
\end{equation}
Approximating this expression in leading order, the deflection angle reads
\begin{equation}
\hat{\alpha}=-\int\limits_{0}^{\pi }\int\limits_{\frac{\mathsf{b}}{\sin \phi }%
}^{\infty }\left[ -\frac{ r_0}{2r^3}+\frac{ \pi^3 (r-2r_0)}{180 r^4 r_0}\right]r dr d\phi.
\end{equation}
Solving this integral, we find the following solution
\begin{equation}
\hat{\alpha}\simeq \frac{r_0}{\mathsf{b}}-\frac{\pi^3 }{90 r_0 \mathsf{b}}\left(1-\frac{\pi r_0}{4\,\mathsf{b}}\right).\label{alga}
\end{equation}
We see that the first term is due to the wormhole geometry, while the second term is related to the semiclassical quantum effects of the spacetime.
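Equation (\ref{alga}) can be cross-checked numerically. The sketch below (ours) evaluates the surface integral of the leading-order curvature with scipy and compares it with the closed form; the impact parameter $\mathsf{b}=20\,r_0$ is an arbitrary choice, and the integral converges since the integrand falls off as $r^{-2}$:
\begin{verbatim}
import numpy as np
from scipy.integrate import dblquad

r0   = 1.0
bimp = 20.0*r0                     # impact parameter (chosen arbitrarily, bimp >> r0)

# leading-order Gaussian optical curvature K times the area element factor r
def integrand(r, phi):
    return (-r0/(2*r**3) + np.pi**3*(r - 2*r0)/(180*r**4*r0))*r

# alpha = - int_0^pi int_{bimp/sin(phi)}^infty K r dr dphi
val, _ = dblquad(integrand, 0.0, np.pi,
                 lambda phi: bimp/np.sin(phi), lambda phi: np.inf)
alpha_num = -val

alpha_formula = r0/bimp - np.pi**3/(90*r0*bimp)*(1 - np.pi*r0/(4*bimp))
print(alpha_num, alpha_formula)    # the two agree to the order kept
\end{verbatim}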
\subsection{Case with $\Phi(r)=r_0/r$}
In this case, the optical metric in the equatorial plane takes the form
\begin{eqnarray}\notag
dt^2&=&\frac{\exp[-{\frac{2r_0}{r}}] dr^2}{1-\frac{r_0}{r}-\frac{\pi^3 }{90 r}\left(\frac{1}{r}-\frac{1}{r_0}\right)-\frac{\pi^3 \hbar^3 D_{i} \beta}{270 r} \left(\frac{1}{r^3}-\frac{1}{r_0^3}\right)}\\
&+&\frac{r^2d\phi^2}{\exp[{\frac{2r_0}{r}}]}.
\end{eqnarray}
The Gaussian optical curvature in leading order terms is approximated as
\begin{equation}
K \simeq \frac{ r_0}{2r^3}-\frac{r_0^2}{2 r^4}+\frac{ \pi^3(3r^3+9 r^2 r_0-14r_0^3)}{540 r^6 r_0}.
\end{equation}
Approximating this expression in leading order, the deflection angle reads
\begin{equation}
\hat{\alpha}\simeq -\int\limits_{0}^{\pi }\int\limits_{\frac{\mathsf{b}}{\sin \phi }%
}^{\infty }\left[ \frac{ r_0}{2r^3}-\frac{r_0^2}{2 r^4}+\frac{ \pi^3(3r^3+9 r^2 r_0-14r_0^3)}{540 r^6 r_0} \right]r dr d\phi.
\end{equation}
Solving this integral we find the following solution
\begin{equation}
\hat{\alpha} \simeq -\frac{r_0}{\mathsf{b}}+\frac{\pi r_0^2}{8 \mathsf{b}^2}-\frac{\pi^3 }{90 r_0 \mathsf{b}}\left(1+\frac{3 \pi r_0}{8 \mathsf{b}}\right).\label{alpha2}
\end{equation}
One can infer from the above result that the deflection angle is negative, which indicates that in this case light rays
always bend outward from the wormhole, due to the non-zero redshift function. Of course, the resulting negative value should be understood in terms of its absolute value $|\hat{\alpha} |$.
\subsection{Case with $\exp(2\Phi(r))=1+\frac{\gamma^2}{r^2}$}
In this particular case the optical metric in the equatorial plane reads
\begin{eqnarray}\notag
dt^2&=&\frac{\left(1+\frac{\gamma^2}{r^2}\right)^{-1}dr^2}{1-\frac{r_0}{r}-\frac{\pi^3}{90 r }\left(\frac{1}{r}-\frac{1}{r_0}\right)-\frac{\pi^3 D_{i} \beta}{270 r} \left(\frac{1}{r^3}-\frac{1}{r_0^3}\right)}\\
&+& \frac{r^2 d\phi^2}{1+\frac{\gamma^2}{r^2}}.
\end{eqnarray}
The Gaussian optical curvature in leading order terms is approximated as
\begin{eqnarray}\notag
K &\simeq & -\frac{ r_0}{2r^3}+\frac{ \pi^3 (r-2r_0)}{180 r^4 r_0} \\
&+&\gamma\left[\frac{180 r^2 r_0+(3\pi^2-270 r_0)r-4 \pi^3 r_0}{90 r^6 r_0}\right].
\end{eqnarray}
With this result in hand, in leading order the deflection angle is written as
\begin{eqnarray}
\hat{\alpha}&\simeq & -\int\limits_{0}^{\pi }\int\limits_{\frac{\mathsf{b}}{\sin \phi }%
}^{\infty }\left[ -\frac{ r_0}{2r^3}+\frac{ \pi^3 (r-2r_0)}{180 r^4 r_0} \right]r dr d\phi\\\notag
&-& \gamma \int\limits_{0}^{\pi }\int\limits_{\frac{\mathsf{b}}{\sin \phi }%
}^{\infty }\left[ \frac{180 r^2 r_0+(3\pi^2-270 r_0)r-4 \pi^3 r_0}{90 r^6 r_0} \right]r dr d\phi.
\end{eqnarray}
Solving this integral we find the following solution
\begin{equation}
\hat{\alpha} \simeq \frac{r_0}{\mathsf{b}}-\frac{\gamma \pi}{2 \mathsf{b}^2}-\frac{\pi^3 }{90 r_0 \mathsf{b}}\left(1-\frac{\pi r_0}{4\,\mathsf{b}}\right).\label{alpha3}
\end{equation}
As expected, in the limit $\gamma \to 0$ we recover the deflection angle given by Eq. (\ref{alga}). Moreover, the presence of the parameter $\gamma$ decreases the deflection angle compared to Eq. (\ref{alga}). The first and the second terms are related to the geometric structure of the wormhole, while the third term encodes the semiclassical quantum effects.
\subsection{Case with $\exp(2\Phi(r))=\left(\frac{1}{1+\frac{\beta D_i}{r^2}}\right)^{\frac{2}{1+1/\omega}}$}
In this particular case the optical metric in the equatorial plane reads
\begin{eqnarray}\notag
dt^2&=&\frac{\left(\frac{1}{1+\frac{\beta D_i}{r^2}}\right)^{-\frac{2}{1+1/\omega}}dr^2}{1-\frac{r_0}{r}-\frac{\pi^3}{90 r }\left(\frac{1}{r}-\frac{1}{r_0}\right)-\frac{\pi^3 D_{i} \beta}{270 r} \left(\frac{1}{r^3}-\frac{1}{r_0^3}\right)}\\
&+& \frac{r^2 d\phi^2}{\left(\frac{1}{1+\frac{\beta D_i}{r^2}}\right)^{\frac{2}{1+1/\omega}}}.
\end{eqnarray}
Let us consider the special case with $\omega=1$. The Gaussian optical curvature in leading order terms is approximated as
\begin{eqnarray}
K &\simeq & -\frac{ r_0}{2r^3}+\frac{ \pi^3 (r-2r_0)}{180 r^4 r_0} \\\notag
&& + \frac{\beta D_i[\pi^3 r^3-1080 r^2 r_0^3+\mathcal{J} r+20 \pi^3 r_0^3]}{540 r_0^3r^6},
\end{eqnarray}
where
\begin{equation}
\mathcal{J}=1620 r_0^4-18 \pi^3 r_0^2.
\end{equation}
With this result in hand, in leading order the deflection angle is written as
\begin{eqnarray}
\hat{\alpha}&\simeq & -\int\limits_{0}^{\pi }\int\limits_{\frac{\mathsf{b}}{\sin \phi }%
}^{\infty }\left[ -\frac{ r_0}{2r^3}+\frac{ \pi^3 (r-2r_0)}{180 r^4 r_0} \right]r dr d\phi\\\notag
&&-\beta D_i \int\limits_{0}^{\pi }\int\limits_{\frac{\mathsf{b}}{\sin \phi }%
}^{\infty }\frac{[\pi^3 r^3-1080 r^2 r_0^3+\mathcal{J} r+20 \pi^3 r_0^3]}{540 r_0^3r^6}r dr d\phi.
\end{eqnarray}
Solving this integral we find the following solution
\begin{equation}
\hat{\alpha} \simeq \frac{r_0}{\mathsf{b}}+\frac{\beta D_i \pi }{2 \mathsf{b}^2}-\frac{\pi^3 }{90 r_0 \mathsf{b}}\left(1-\frac{\pi r_0}{4\,\mathsf{b}}\right).\label{alpha4}
\end{equation}
In this case, besides the first term, which is related to the wormhole geometry, we find an effect of the GUP parameter $\beta$ on the deflection angle in leading order, encoded in the second term, while the third term is related to the semiclassical quantum effects. We show graphically the dependence of the deflection angle on the impact parameter in Fig. (\ref{fig18}).
\begin{figure}
\includegraphics[width=8.2cm]{def.pdf}
\caption{The deflection angle against the impact parameter $b$ using Eqs. (\ref{alga}), (\ref{alpha2}), (\ref{alpha3}) and (\ref{alpha4}), respectively. We use $r_0=1$, $\hbar=1$, $\beta=0.1$, and $\gamma=1$ for the case $D_1$. The blue curve corresponds to Eq. (\ref{alpha2}), showing that the light rays bend outward from the wormhole. On the other hand, the effect of $\gamma$ decreases the deflection angle (black curve) compared to (\ref{alga}) (red curve). The deflection angle (\ref{alpha4}) corresponds to the anisotropic wormhole (green curve). }\label{fig18}
\end{figure}
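
For completeness, the curves of Fig. (\ref{fig18}) can be reproduced with a short Python script that simply plots Eqs. (\ref{alga}), (\ref{alpha2}), (\ref{alpha3}) and (\ref{alpha4}); the numerical value of $D_1$ used below is an illustrative placeholder rather than the model value.
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

r0, beta, gamma, Di = 1.0, 0.1, 1.0, 1.0     # Di: placeholder value
b = np.linspace(3.0, 30.0, 400)
q = np.pi**3/(90.0*r0*b)                     # common semiclassical factor

a1 = r0/b - q*(1.0 - np.pi*r0/(4.0*b))                                # Eq. (alga)
a2 = -r0/b + np.pi*r0**2/(8.0*b**2) - q*(1.0 + 3.0*np.pi*r0/(8.0*b))  # Eq. (alpha2)
a3 = r0/b - gamma*np.pi/(2.0*b**2) - q*(1.0 - np.pi*r0/(4.0*b))       # Eq. (alpha3)
a4 = r0/b + beta*Di*np.pi/(2.0*b**2) - q*(1.0 - np.pi*r0/(4.0*b))     # Eq. (alpha4)

for curve, label, color in [(a1, 'Eq. (alga)', 'red'),
                            (a2, 'Eq. (alpha2)', 'blue'),
                            (a3, 'Eq. (alpha3)', 'black'),
                            (a4, 'Eq. (alpha4)', 'green')]:
    plt.plot(b, curve, color=color, label=label)
plt.xlabel('impact parameter b'); plt.ylabel('deflection angle')
plt.legend(); plt.show()
\end{verbatim}
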
\section{Conclusion}
In this paper, we have explored the effect of the Generalized Uncertainty Principle (GUP) on the Casimir wormhole spacetime. In particular, we have considered three types of GUP relations, namely the KMM model, the DGS model, and finally the so-called type II model. To this end, we have used three different models of the redshift function, i.e., $\Phi(r)=constant$, along with $\Phi(r)=r_0/r$ and $\exp(2\Phi(r))=1+\frac{\gamma^2}{r^2}$, to obtain a class of asymptotically flat wormhole solutions supported by Casimir energy under the effect of GUP. Having specified the wormhole geometry, we then used the two EoS models $\mathcal{P}_r(r)=\omega_{r}(r) \rho(r)$ and $\mathcal{P}_t(r)=\omega_{t}(r)\mathcal{P}_r(r)$ to obtain the corresponding relations for the EoS parameters $\omega_{r}(r)$ and $\omega_{t}(r)$, respectively. In addition, we have considered the isotropic wormhole and found an interesting solution describing an asymptotically flat GUP wormhole with anisotropic matter.
Furthermore, we have checked the null, weak, and strong energy conditions at the wormhole throat of radius $r_0$, and shown that in general the classical energy conditions are violated by some small and arbitrary quantities at the throat. However, we have also highlighted the Quantum Weak Energy Condition (QWEC), according to which such small violations are possible due to quantum fluctuations. In this direction, we have also examined the ADM mass of the wormhole and the volume integral quantifier, which measures the amount of exotic matter near the wormhole throat when the wormhole extends from $r_0$ to a cut-off radius located at $`a'$. We also studied the embedding diagram, showing that increasing the GUP parameter affects the effective geometry of the GUP wormhole.
Finally, we have used the GBT to obtain the deflection angle in three wormhole geometries. We argued that the deflection angle in leading order is affected by the semiclassical quantum effects as well as by the wormhole throat radius. As an interesting observation, we have found that the choice of the redshift function plays a significant role in determining the deflection angle. For example, in the cases $\Phi=constant$ and $\exp(2\Phi(r))=1+\frac{\gamma^2}{r^2}$ the light rays bend towards the wormhole, while for $\Phi(r)=r_0/r$ we found that light rays bend outward from the wormhole. We also found that the deflection angle depends upon the parameter $\gamma$, while there is an effect of the GUP parameter in leading order only in the case of the anisotropic GUP wormhole. A more thorough analysis of these effects is left for future investigation.
\subsection*{#1}}
\newcommand{\mwssec}[1]{\subsection*{#1}}
\newcommand{\smallpic}[2]{\begin{figure}\lbl{#2}
\vspace{30mm}\caption{#1}\end{figure}}
\newcommand{\medpic}[3]{\begin{figure}\lbl{#2}
\begin{center}
#3
\end{center}
\caption{#1}\end{figure}}
\newcommand{\bigpic}[2]{\begin{figure}\lbl{#2}\vspace{70mm}\caption{#1}\end{figure}}
\newcounter{note}
\newcounter{num}
\newcommand{\addtocounter{note}{1}$\,^{\thenote}$ }{\addtocounter{note}{1}$\,^{\thenote}$ }
\newcounter{notelist}
\newcommand{\addtocounter{notelist}{1}{\addtocounter{notelist}{1}
\medbreak \noindent \thenotelist.~}
\newcommand{\addtocounter{notelist}{1}{\addtocounter{notelist}{1}
\thenotelist.~}
\newcommand{\notesec}[1]{
\markright{Notes on Chapter \thechapter}
\section*{Notes on Chapter \thechapter}
\setcounter{notelist}{0} {\small #1}}
\newcommand{\anotesec}[1]{
\section*{Notes on Appendix \thechapter}
\setcounter{notelist}{0}{\small #1}}
\newcommand{\index}{\index}
\newcommand{\glossary}{\glossary}
\newcommand{\indexentry}[2]{\item #1, #2}
\newcommand{\subentry}[2]{\subitem #1, #2}
\newcommand{\gloentry}[2]{\item \makebox[1.8in]{\item #1\hfill #2}}
\newcommand{\disp}[1]{\vspace{1ex}\begin{center}
\parbox{30em}{\setlength{\parskip}{1ex}
\setlength{\parindent}{0em}#1}\end{center}\vspace{1ex}}
\newenvironment{Disp}
{\begin{description}} {\end{description}}
\newcommand{\doubfib}
{{\setlength{\unitlength}{1mm}
\begin{picture}(28,15)
\put(2,0){$U$} \put(24,0){${\cal P}$} \put(13,10){${\cal F}$}
\put(11,8){\vector(-1,-1){5}} \put(7,7){$q$} \put(20,7){$p$}
\put(17,8){\vector(1,-1){5}}
\end{picture}
}}
\newcommand{\doubfibred}
{{\setlength{\unitlength}{1mm}
\begin{picture}(28,15)
\put(2,0){$S$} \put(24,0){${\cal R}$} \put(13,10){${\cal F_{{\rm
r}}}$} \put(11,8){\vector(-1,-1){5}} \put(7,7){$q$}
\put(20,7){$p$} \put(17,8){\vector(1,-1){5}}
\end{picture}
}}
\newcommand{\doubfibredfrob}
{{\setlength{\unitlength}{1mm}
\begin{picture}(28,15)
\put(2,0){$M$} \put(24,0){$({\cal T},\Pi)$} \put(11,10){$({\cal
F},\Pi)$} \put(11,8){\vector(-1,-1){5}} \put(7,7){$q$}
\put(20,7){$p$} \put(17,8){\vector(1,-1){5}}
\end{picture}
}}
\newcommand{\mathrm{tr}}{\mathrm{tr}}
\newcommand{\ket}[1]{\left | #1 \right \rangle}
\newcommand{\bra}[1]{\left \langle #1 \right |}
\newcommand{\projop}[1]{\left | #1 \right \rangle \!
\left \langle #1 \right |}
\newcommand{\ket{\boldsymbol{\Psi}_{{\rm g}}}}{\ket{\boldsymbol{\Psi}_{{\rm g}}}}
\newcommand{\bra{\boldsymbol{\Psi}_{{\rm g}}}}{\bra{\boldsymbol{\Psi}_{{\rm g}}}}
\newcommand{\abs}[1]{\left | #1 \right |}
\newcommand{\mathrm{Re}}{\mathrm{Re}}
\newcommand{\mathrm{Im}}{\mathrm{Im}}
\newcommand\bA{\overline{A}}
\newcommand\bB{\overline{B}}
\newcommand{\boldsymbol{\phi}}{\boldsymbol{\phi}}
\newcommand{\boldsymbol{\psi}}{\boldsymbol{\psi}}
\newtheorem{definition}{Definition}
\newtheorem{theorem}{Theorem}
\newtheorem{lemma}{Lemma}
\newtheorem{proposition}{Proposition}
\newtheorem{remark}{Remark}
\newtheorem{corollary}{Corollary}
\def\mathrm{d}{\mathrm{d}}
\def\hbox{\rm i}{\hbox{\rm i}}
\def\tfrac#1#2{{\textstyle\frac{#1}{#2}}}
\def\ifrac#1#2{{#1}/{(#2)}}
\def\od#1{\frac{\mathrm{d}}{\mathrm{d}#1}}
\def\pd#1{\frac{\partial}{\partial#1}}
\def\pderiv#1#2{\frac{\partial#1}{\partial#2}}
\def\hide#1{}
\def{\rm P}_{\rm{\scriptstyle II}}{{\rm P}_{\rm{\scriptstyle II}}}
\def{\rm P}^{(n)}_{\rm{\scriptstyle II}}{{\rm P}^{(n)}_{\rm{\scriptstyle II}}}
\def\mathcal{L}{\mathcal{L}}
\begin{document}
\title{Entanglement entropy in quantum spin chains with finite range
interaction\footnote{A. Its was partially
supported by the NSF grants DMS-0401009 and DMS-0701768. F. Mezzadri and
M. Y. Mo acknowledge financial support by the EPSRC grant EP/D505534/1.}}
\author{A. R. Its, F. Mezzadri and M. Y. Mo}
\date{}
\maketitle
\begin{abstract}
We study the entropy of entanglement of the ground state in a wide
family of one-dimensional quantum spin chains whose interaction is of
finite range and translation invariant. Such systems can be thought
of as generalizations of the XY model. The chain is divided in two
parts: one containing the first consecutive $L$ spins; the second
the remaining ones. In this setting the entropy of entanglement is
the von Neumann entropy of either part. At the core of our
computation is the explicit evaluation of the leading order term
as $L \to \infty$ of
the determinant of a block-Toeplitz matrix with symbol
\[
\Phi(z) = \left(\begin{array}{cc} i\lambda & g(z) \\ g^{-1}(z) & i
\lambda \end{array}\right),
\]
where $g(z)$ is the square root of a rational function and
$g(1/z)=g^{-1}(z)$. The asymptotics of such a determinant is computed
in terms of multi-dimensional theta-functions associated to a
hyperelliptic curve $\mathcal{L}$ of genus $g \ge 1$, which enter into the
solution of a Riemann-Hilbert problem. Phase transitions for these
systems are characterized by the branch points of $\mathcal{L}$ approaching
the unit circle. In these circumstances the entropy diverges
logarithmically. We also recover, as particular cases, the formulae
for the entropy discovered by Jin and Korepin~\cite{JK} for the XX
model and Its, Jin and Korepin~\cite{IJK1,IJK2} for the XY model.
\end{abstract}
\renewcommand{\theequation}{\arabic{section}.\arabic{equation}}
\tableofcontents
\listoffigures
\section{Introduction}
\setcounter{equation}{0}
One dimensional quantum spin chains were introduced by Lieb
\textit{et. al.}~\cite{LSM61} in 1961 as a model to study the magnetic
properties of solids. Usually such systems depend on some parameter,
\textit{e.g.} the magnetic field. One of their most important features
is that at zero temperature, when the system is in the ground state,
as the number of spins tends to infinity they undergo a phase transition
at a critical value of the parameter. As a consequence, the rate of
decay of the correlations changes suddenly from exponential to
algebraic at the critical point. Furthermore, many examples of such
chains are exactly solvable. Because of these reasons over the years
the statistical mechanical properties of quantum spin chains have
been investigated in great detail.
More recently, Osterloh \textit{et al.}~\cite{OAFF02}, and Osborne and
Nielsen~\cite{ON02} realized that the existence of non-local physical
correlations at a phase transition is a manifestation of the
entanglement among the constituent parts of the chain. Entangled
quantum states are characterized by non-local correlations that cannot
be described by classical mechanics. Such correlations play an
important role in the transmission of quantum information. It is
therefore essential to be able to quantify entanglement. In its full
generality this is still an open problem. However, when a physical
system is in a pure state and is \textit{bipartite}, \textit{i.e.} is
made of two separate parts, say A and B, a suitable measure of the
entanglement shared between the two constituents is the von Neumann
entropy of either part~\cite{BBPS96}. In this situation the Hilbert
space of the whole system is $\mathcal{H}_{\mathrm{AB}} =
\mathcal{H}_{\mathrm{A}} \otimes \mathcal{H}_\mathrm{B}$, where
$\mathcal{H}_{\mathrm{A}}$ and $\mathcal{H}_{\mathrm{B}}$ are the
Hilbert spaces associated to A and B respectively. Now, if
$\rho_{\mathrm{AB}}$ is the density matrix of the composite system, then
the reduced density matrices of A and B are
\begin{equation}
\label{eq:red_mat}
\rho_{\mathrm{A}} = \mathrm{tr}_{\mathrm{B}}\, \rho_{\mathrm{AB}} \quad
\mathrm{and} \quad \rho_{\mathrm{B}} = \mathrm{tr}_{\mathrm{A}}\,
\rho_{\mathrm{AB}},
\end{equation}
where $\mathrm{tr}_\mathrm{A}$ and $\mathrm{tr}_\mathrm{B}$ are partial traces
over the degrees of freedom A and B respectively. The entropy of the
entanglement of formation is
\begin{equation}
\label{eq:von_neu_ent}
S(\rho_{\mathrm{A}}) = - \mathrm{tr} \rho_\mathrm{A}
\log \rho_\mathrm{A} = S(\rho_{\mathrm{B}})
=- \mathrm{tr}\rho_\mathrm{B} \log \rho_\mathrm{B}
\end{equation}
In this paper we compute the entropy of entanglement of the ground
state of a vast class of spin chains whose interaction among the
constituent spins is non-local and translation invariant. These
systems can be mapped into quadratic chains of fermionic operators by
a suitable transformation and are generalizations of the XY model. We
study the ground state of such systems, divide the chain in two halves
and compute the von Neumann entropy in the thermodynamic limit of one
of the two parts. If the ground state is not degenerate, then
$\rho_{\mathrm{AB}} = \projop{\boldsymbol{\Psi}_\mathrm{g}}$. At the
core of our derivation of the entropy of entanglement is the
computation of determinants of Toeplitz matrices for a wide class of
$2 \times 2$ matrix symbols. The explicit expressions for such
determinants were not available in the literature. The appearance of
Toeplitz matrices and their invariants in the study of lattice models
is a simple consequence of the translation invariance of the
interaction among the spins. Thus, Toeplitz determinants appear in
the computations of many other physical quantities like spin-spin
correlations or the probability of the emptiness of formation, not
only the entropy of entanglement. Therefore, our results have
consequences that go beyond the application to the study of bipartite
entanglement that we discuss.
Vidal \textit{et. al.}~\cite{V} were the first to investigate the
entanglement of formation of the ground state of spin chains by
dividing them in two parts. The models they considered were the XX, XY
and XXZ model. They computed numerically the von Neumann entropy of
one half of the chain and discovered that at a phase transition it
grows logarithmically with its length $L$. Jin and Korepin~\cite{JK}
computed the von Neumann entropy of the ground state of the XX model
using the Fisher-Hartwig formula for Toeplitz determinants. They
showed that at the phase transition the entropy grows like $\frac13
\log L$, which is in agreement with the numerical observations of
Vidal \textit{et al.} For lattice systems that have a conformal field
theory associated to them, the logarithmic growth of the entropy was
first discovered by Holzhey \textit{et. al.}~\cite{HLW94} in 1994.
This approach was later developed by Korepin~\cite{Kor04}, and by
Calabrese and Cardy~\cite{CC05}. Its, Jin and Korepin~\cite{IJK1,IJK2}
determined the entropy for the XY model by computing an explicit formula
for the asymptotics of the determinant of a block-Toeplitz matrix.
They expressed the entropy of entanglement in terms of an integral of
Jacobi theta functions.
Consider a $p \times p$ matrix-valued function on the unit circle
$\Xi$:
\[
\varphi(z)= \sum_{k=-\infty}^\infty \varphi_kz^k, \quad |z|=1.
\]
A block-Toeplitz matrix with symbol $\varphi$ is defined by
\[
T_L[\varphi]= (\varphi_{j-k})_{0\le j,k \le L-1}.
\]
Furthermore, we shall denote its determinant by $D_L=\det
T_L[\varphi]$. The main ingredient of the computation of Its, Jin and
Korepin was to use the Riemann-Hilbert approach to derive an
asymptotic formula for the Fredholm determinant
\begin{equation}
\label{eq:Fred_det}
D_L(\lambda) = \det T_L[\varphi] = \det \left(I - \mathbf{K}_L\right),
\end{equation}
where $\mathbf{K}_L$ is an appropriate integral operator on
$L^2(\Xi,\mathbb{C}^2)$. The symbol of the Toeplitz matrix
$T_L[\varphi]$ was
\begin{equation}
\label{eq:ijk_symb}
\varphi\left(e^{i\theta}\right) =\pmatrix{i\lambda &g(\theta)\cr
-g^{-1}(\theta)&i\lambda},
\end{equation}
where
\[
g(\theta)=
\frac{ \alpha \cos
\theta - 1 - i \gamma \alpha \sin \theta}{\left | \alpha \cos
\theta -1 - i \gamma \alpha \sin \theta \right |}.
\]
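
For orientation, the construction of $T_L[\varphi]$ from the symbol and the numerical evaluation of $D_L(\lambda)$ for moderate $L$ are straightforward; the following Python sketch (illustrative parameter values, not taken from \cite{IJK1,IJK2}) assembles the matrix from the Fourier coefficients of~(\ref{eq:ijk_symb}) computed by a discrete Fourier transform.
\begin{verbatim}
import numpy as np

alpha, gamma, lam, L, M = 0.8, 0.5, 1.5, 20, 2048   # illustrative values

theta = 2.0*np.pi*np.arange(M)/M
w = alpha*np.cos(theta) - 1.0 - 1j*gamma*alpha*np.sin(theta)
g = w/np.abs(w)              # g(theta) of the XY model, |g| = 1
gk = np.fft.fft(g)/M         # Fourier coefficients of g
hk = np.fft.fft(1.0/g)/M     # Fourier coefficients of g^{-1}

T = np.zeros((2*L, 2*L), dtype=complex)
for j in range(L):
    for k in range(L):
        T[2*j, 2*k] = T[2*j+1, 2*k+1] = 1j*lam*(j == k)
        T[2*j, 2*k+1] = gk[(j - k) % M]
        T[2*j+1, 2*k] = -hk[(j - k) % M]

D_L = np.linalg.det(T)
print(D_L, D_L/(1.0 - lam**2)**L)   # second ratio tends to a constant as L grows
\end{verbatim}
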
Keating and Mezzadri~\cite{KM04,KM05} introduced families of spin
chains that are characterized by the symmetries of the spin-spin
interaction. The entropy of entanglement of the ground state of these
systems, as well as other thermodynamical quantities like the
spin-spin correlation function, can be determined by computing
averages over the classical compact groups, which in turn means
computing determinants of Toeplitz matrices or of sums of Hankel
matrices. These models are solvable and can be mapped into a
quadratic chain of Fermi operators via the Jordan-Wigner
transformations. One of the main features of these families is that
symmetries of the interaction can be put in one to
one correspondence with the structure of the invariant measure of the
group to be averaged over. If the Hamiltonian is translation
invariant and the interaction is isotropic, then the relevant group
over is $\mathrm{U}(N)$ equipped with Haar measure. In turn such
averages are equivalent to Toeplitz determinants with a scalar symbol.
These systems are generalizations of the XX model.
In this paper we consider spin chains whose interaction is translation
invariant but the Hamiltonian is not isotropic. These are
generalization of the XY model. The Fredholm determinant that
we need to compute has the same structure as~(\ref{eq:Fred_det}), but now
the $2\times 2$ matrix symbol is
\begin{equation}
\label{eq:our_symb}
\Phi(z) :=\pmatrix{i\lambda & g(z)\cr
-g^{-1}(z)& i\lambda},
\end{equation}
where function $g(z)$ is defined by
\begin{equation}
\label{eq:g_def}
g(z) := \sqrt{\frac{p(z)}{z^{2n}p(1/z)}}
\end{equation}
and $p(z)$ is a polynomial of degree $2n$. We recover the XY model
if we set
\begin{equation}
\label{eq:Xypol}
p(z) = \frac{\alpha (1 - \gamma)}{2}z^2 - z +
\frac{\alpha (1 + \gamma)}{2}.
\end{equation}
In the above equation $\alpha=2/h$, where $h$ is the
magnetic field, and $\gamma$ measures the anisotropy of the Hamiltonian
in the XY plane.
\section{Statement of results}
\label{stat_res}
\setcounter{equation}{0}
Following~\cite{JK} and~\cite{IJK2}, we will identify the limiting von
Neumann entropy for the systems that we study with the double limit
\begin{equation}
\label{eq:intr_Ki}
S(\rho_A)= \lim_{\epsilon \to 0^+} \left[ \lim_{L\to \infty}
\frac{1}{4\pi i} \oint_{\Gamma(\epsilon)}
e(1 +\epsilon, \lambda)\frac{\mathrm{d}}{\mathrm{d}\lambda}
\log\left( D_L(\lambda)(\lambda^2 -1)^{-L}\right)\mathrm{d} \lambda\right].
\end{equation}
In the above formula $\Gamma(\epsilon)$ is the contour in
figure~\ref{fig1}, $D_L(\lambda)$ is the determinant of the
block-Toeplitz matrix $T_L[\Phi]$ with symbol~(\ref{eq:our_symb}) and
\begin{equation}
\label{binaryent}
e(x,\nu) := - \frac{x +
\nu}{2}\log\left(\frac{x + \nu}{2}\right) - \frac{x -
\nu}{2}\log\left(\frac{x - \nu}{2}\right).
\end{equation}
\begin{figure}
\centering
\begin{overpic}[scale=.75,unit=1mm]{residue}
\put(20,60){$\lambda$-plane}
\put(90,36.5){\tiny{$\mathrm{Re}(\lambda)$}}
\put(39,72){\tiny{$\mathrm{Im}(\lambda)$}}
\put(35,27){$\Gamma(\epsilon)$}
\put(48,55){$R = \infty$}
\put(20,3.5){$\Gamma'$}
\put(62.5,36.5){\tiny{$1$}}
\put(70,36.5){\tiny{$1 + \epsilon$}}
\put(17,36.5){\tiny{$1 - \epsilon$}}
\put(28,36.5){\tiny{$-1$}}
\put(29,42){\tiny{$r=\epsilon/2$}}
\put(55,42){\tiny{$r=\epsilon/2$}}
\put(71,42){\tiny{$r=\epsilon/2$}}
\put(13,41){\tiny{$r=\epsilon/2$}}
\end{overpic}
\caption{The contour $\Gamma(\epsilon)$ of the integral in
equation~(\ref{eq:intr_Ki}). The bold lines $(-\infty,-1-\epsilon)$
and $(1 + \epsilon, \infty)$ are the cuts of the integrand $e(1 +
\epsilon,\lambda)$. The zeros of $D_L(\lambda)$ are located on the
bold line $(-1,1)$. }
\label{fig1}
\end{figure}
The explicit Hamiltonians for the family of spin systems that we
consider and their connection to formula~(\ref{eq:intr_Ki}) will be
discussed in detail in sections~\ref{spin_chains}
and~\ref{vonneumanentr}.
One of the main objectives of this paper is to compute the double
limit~(\ref{eq:intr_Ki}), which, as we shall see, can be expressed as
an integral of multi-dimensional theta functions defined on Riemann
surfaces. Thus, in order to state our main results, we need to
introduce some definitions and notation.
Let us rewrite the function~(\ref{eq:g_def}) as
\begin{equation}
\label{eq:gn_def}
g^2(z) = \prod_{j=1}^{2n}{{z-z_j}\over{1-z_jz}},
\end{equation}
where the $z_j$'s are the $2n$ roots of the polynomial $p(z)$. This
representation of $g(z)$ will be used throughout the paper. We fix
the branch of $g(z)$ by requiring that $g(\infty)>0$ on the real axis.
The function $g(z)$ has jump discontinuities on the complex
$z$-plane. In order to define its branch cuts we need to introduce an
ordering of the roots $z_j$. Let
\begin{equation}
\label{eq:lambdai}
\{\lambda_1,\lambda_2,\ldots,\lambda_{4n}\}
=\{z_1,\ldots,z_{2n},z_{1}^{-1},\ldots,z_{2n}^{-1}\}
\end{equation}
where the above is merely an equality between sets, and we do not
necessarily have, for example, $\lambda_i=z_i$. We order the
$\lambda_i$'s such that
\begin{eqnarray}
\label{eq:order}
\mathrm{Re}(\lambda_i)&\leq& \mathrm{Re}(\lambda_j),\quad i<j\nonumber\\
\mathrm{Im}(\lambda_i)&\leq& \mathrm{Im}(\lambda_j) ,\quad i<j, \quad
|\lambda_i|, |\lambda_j|<1, \quad \mathrm{Re}(\lambda_i)=\mathrm{Re}(\lambda_j)\\
\mathrm{Im}(\lambda_i)&\leq& \mathrm{Im}(\lambda_j) ,\quad i>j,\quad
|\lambda_i|, |\lambda_j|>1,\quad \mathrm{Re}(\lambda_i)=
\mathrm{Re}(\lambda_j). \nonumber
\end{eqnarray}
This ordering need not coincide with that of the $z_j$'s. If
necessary, we can always assume that one of the $z_j^{-1}$ has the
smallest real part and set $\lambda_1=z_j^{-1}$. This choice is
equivalent to taking the transpose of $T_L[\Phi]$. The branch cuts for
$g(z)$ are defined by the intervals $\Sigma_i$ joining
$\lambda_{2i-1}$ and $\lambda_{2i}$:
\begin{equation}
\label{eq:branchcut}
\Sigma_i=[\lambda_{2i-1},\lambda_{2i}],\quad i=1,\ldots, 2n.
\end{equation}
Therefore, $g(z)$ has the following jump discontinuities:
\begin{equation}
\label{eq:jphi}
g_+(z)=-g_-(z), \quad z\in\Sigma_i,
\end{equation}
where $g_{\pm}(z)$ are the boundary values of $g(z)$ on the
left/right hand side of the branch cut.
Now, let $\mathcal{L}$ be the hyperelliptic curve
\begin{equation}
\label{eq:L}
\mathcal{L}: w^2=\prod_{i=1}^{4n}(z-\lambda_i).
\end{equation}
The genus of $\mathcal{L}$ is $g=2n-1$.
\begin{figure}[htbp]
\begin{center}
\resizebox{8cm}{!}{\input{cycle1.pstex_t}}
\caption{The choice of cycles on the hyperelliptic curve
$\mathcal{L}$. The arrows denote the orientations of the cycles and branch cuts. Note that we have $\lambda_1=z_1^{-1}$.}\label{fig:cycle}
\end{center}
\end{figure}
We now choose a canonical basis for the cycles $\{a_i,b_i\}$ on $\mathcal{L}$
as shown in figure \ref{fig:cycle}, and define $\mathrm{d}\omega_i$ to be
1-forms dual to this basis, \textit{i.e.}
\begin{equation}\label{eq:normalizeforms}
\int_{a_i}\mathrm{d}\omega_j=\delta_{ij}, \quad
\int_{b_i}\mathrm{d}\omega_j=\Pi_{ij}.
\end{equation}
Furthermore, let us
define the $g \times g$ matrix $\Pi$ by setting
$(\Pi)_{ij}=\Pi_{ij}$. The theta function
$\theta:\mathbb{C}^g\longrightarrow \mathbb{C}$ associated to $\mathcal{L}$ is
defined by
\begin{equation}
\label{eq:thetadef}
\theta (\overrightarrow{s}) := \sum_{\overrightarrow{n}\in
\mathbb{Z}^g} {\rm e}^{i\pi \overrightarrow{n}\cdot \Pi
\overrightarrow{n} + 2i\pi \overrightarrow{s}\cdot
\overrightarrow{n}},
\end{equation}
while the theta function with characteristics
$\overrightarrow{\epsilon}$ and $\overrightarrow{\delta}$ is defined
by
\begin{eqnarray}\label{eq:thetachar}
\theta\left[{ \overrightarrow{\epsilon} \atop \overrightarrow{\delta}}\right] (\overrightarrow{s}) := \exp\left(2i\pi\left( \frac {\overrightarrow{\epsilon}\cdot \Pi\cdot \overrightarrow{\epsilon} }8 + \frac 1 2 \overrightarrow{\epsilon} \cdot \overrightarrow{s} + \frac 1 4 \overrightarrow{\epsilon} \cdot \overrightarrow{\delta}\right) \right) \theta\left(\overrightarrow{s} + \frac {\overrightarrow{ \delta}} 2 + \Pi \frac {\overrightarrow{\epsilon} }2 \right)
\end{eqnarray}
where $\overrightarrow{\epsilon}$ and $\overrightarrow{\delta}$ are
$g$-dimensional complex vectors.
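
For numerical purposes (this is not needed for the analysis below), the sum in~(\ref{eq:thetadef}) can simply be truncated: since $\mathrm{Im}\,\Pi$ is positive definite, the terms decay like a Gaussian in $\overrightarrow{n}$. A minimal Python sketch, with an illustrative period matrix rather than one coming from $\mathcal{L}$, is:
\begin{verbatim}
import numpy as np
import itertools

def riemann_theta(s, Pi, cutoff=10):
    """theta(s) = sum over n in Z^g of exp(i pi n.Pi.n + 2 i pi s.n), truncated"""
    dim = len(s)
    total = 0.0 + 0.0j
    for n in itertools.product(range(-cutoff, cutoff + 1), repeat=dim):
        n = np.array(n)
        total += np.exp(1j*np.pi*(n @ Pi @ n) + 2j*np.pi*np.dot(s, n))
    return total

Pi = 1j*np.array([[2.0, 0.5],      # illustrative genus-2 period matrix,
                  [0.5, 2.0]])     # Im(Pi) positive definite
print(riemann_theta(np.array([0.1, 0.2]), Pi))
\end{verbatim}
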
Our main results are summarised by the following two theorems.
\begin{theorem}
\label{main_theo1}
Let $H_\alpha$ be the Hamiltonian of the one-dimensional quantum
spin system defined in equation (\ref{genmodel2}). Let A be the
subsystem made of the first L spins and B the one formed by the
remaining $M-L$. We also assume that the system is in a
non-degenerate ground state $\left | \Psi_{\mathrm{g}} \right
\rangle $ and that the thermodynamic limit, i.e. $M \to \infty$, has
been already taken. Then, the limiting (as $L\to \infty$) von
Neumann entropy (\ref{eq:intr_Ki}) is
\begin{equation}
\label{eq:m_res1}
S(\rho_A)=\frac{1}{2}
\int_{1}^{\infty}\log{{\theta\left(\beta(\lambda)\overrightarrow{e}
+{\tau\over 2}\right)\theta\left(\beta(\lambda)
\overrightarrow{e}-{\tau\over 2}\right)}
\over{\theta^2\left({\tau\over 2}\right)}}\mathrm{d}\lambda,
\end{equation}
where $\overrightarrow{e}$ is a $(2n-1)$-dimensional vector whose last $n$
entries are $1$ and whose first $n-1$ entries are $0$.
\end{theorem}
The parameter
$\tau$ in the argument of $\theta$ is introduced in
section~\ref{WH_fact} and is defined in equation~(\ref{eq:tau}), while
the expression of $\beta(\lambda)$ is
\begin{equation}
\label{eq:beta}
\beta(\lambda) :={1\over{2\pi
i}}\log{{\lambda+1}\over{\lambda-1}}.
\end{equation}
Theorem~\ref{main_theo1} generalizes the result by Its \textit{et
al.}~\cite{IJK1,IJK2} for the XY model. In that case the genus of
$\mathcal{L}$ is one, and the theta function in the integral reduces
to the Jacobi theta function $\theta_3$. However, for the XY model
the integral~(\ref{eq:m_res1}) can be expressed in term of the
infinite series
\begin{equation}
\label{eq:inf_ser}
S(\rho_{\mathrm{A}}) = \sum_{m=-\infty}^\infty (1 +
\mu_m)\log\frac{2}{1 + \mu_m}=
2 \sum_{m=0}^\infty e(1,\mu_m),
\end{equation}
where the numbers $\mu_m$ are the solutions of the equation
\begin{equation}
\label{eq:thet3_zer}
\theta_3\left(\beta(\lambda) + \frac{\sigma \tau}{2}\right)=0
\end{equation}
and $\sigma$ is $0$ or $1$ depending on the strength of the magnetic
field. The zeros of the one dimensional theta function are all known,
so that the numbers $\mu_{m}$ can be described by the explicit
formula
\[
\mu_{m} = -i\tan \left(m +\frac{1-\sigma}{2}\right)\pi \tau.
\]
Moreover, as it was shown by Peschel \cite{Pe} (who also suggested an
alternative heuristic derivation of equation (\ref{eq:inf_ser}) based
on the work of Calabrese and Cardy \cite{CC04}), the series
(\ref{eq:inf_ser}) can be summed up to an elementary function of the
complete elliptic integrals corresponding to the modular parameter
$\tau$.
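
As an illustration of~(\ref{eq:inf_ser}), the series converges very rapidly and is trivial to sum numerically once $\tau$ is known. In the sketch below $\tau$ is treated as a given purely imaginary number $\tau=i\tau_0$ (an assumption made only for this illustration; in the paper $\tau$ is fixed by the curve $\mathcal{L}$ through equation~(\ref{eq:tau})), so that $\mu_m=\tanh\left[\left(m+\tfrac{1-\sigma}{2}\right)\pi\tau_0\right]$.
\begin{verbatim}
import numpy as np

tau0, sigma = 0.6, 1          # illustrative tau = i*tau0, sigma in {0, 1}

def e(x, nu):                 # binary entropy of eq. (binaryent)
    s = 0.0
    for t in ((x + nu)/2.0, (x - nu)/2.0):
        if t > 1e-16:
            s -= t*np.log(t)
    return s

S, m = 0.0, 0
while True:
    mu = np.tanh((m + (1.0 - sigma)/2.0)*np.pi*tau0)
    term = 2.0*e(1.0, mu)
    S += term
    if m > 0 and term < 1e-15:
        break
    m += 1
print(S)                      # S(rho_A) for the XY model at this tau
\end{verbatim}
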
It is an open problem whether an analogous representation of the
integral~(\ref{eq:m_res1}) exists for $g > 1$.
The next step consists of understanding what happens to
formula~(\ref{eq:m_res1}) when we approach a phase transition. The
hyperelliptic curve $\mathcal{L}$, and hence all the parameters in the
integral~(\ref{eq:m_res1}), are determined by the roots of the
polynomial $p(z)$ which defines the symbol~(\ref{eq:our_symb}). In
section~\ref{spin_chains} we discuss how the coefficients of $p(z)$
are related to the the Hamiltonians of the spin chains. In the case
of the XY model $p(z)$ is given by equation~(\ref{eq:Xypol}); since
the degree of $p(z)$ is two the roots $\lambda_j$ can be easily
determined as a function of the parameters $\alpha$ and $\gamma$. It
was shown by Calabrese and Cardy~\cite{CC05} that when $\alpha = 1$
--- or the magnetic field $h=2$ --- the XY model undergoes a phase
transition and the entropy diverges. Jin and Korepin~\cite{JK} showed
that when $\gamma$ approaches $0$, \textit{i.e.} the XY model
approaches the XX model, and $\alpha \le 1$, then the entanglement
entropy diverges logarithmically. Its
\textit{et. al.}~\cite{IJK1,IJK2} discovered that the divergence of
the entropy for the XY and XX model corresponds to the
roots~(\ref{eq:lambdai}) of (\ref{eq:L}) approaching the unit circle.
\begin{figure}
\centering
\begin{overpic}[scale=.65,unit=1mm]{roots}
\put(68.5,66){$\lambda_j$}
\put(76,75.5){$1/\overline{\lambda}_j$}
\put(69,17){$\overline{\lambda}_j$}
\put(76,7.5){$1/\lambda_j$}
\end{overpic}
\caption{The location of one of the roots~(\ref{eq:lambdai}), say
$\lambda_j$, determines the positions of the other three:
$\overline{\lambda}_j$,
$1/\lambda_j$ and $1/\overline{\lambda}_j$.}
\label{fig1b}
\end{figure}
This phenomenon extends to the family of systems that we study. In
other words, a phase transition manifests itself when pairs of roots
of~(\ref{eq:L}) approach the unit circle; one root in each pair is
inside the unit circle, the other outside. As we shall see, in these
circumstances the entropy of entanglement diverges logarithmically.
From (\ref{eq:lambdai}) we see that if $\lambda_j$ is a root of
(\ref{eq:L}) so is $\lambda_j^{-1}$. Moreover, since (\ref{eq:L}) is a
polynomial with real coefficients, if $\lambda_j$ is complex then
$\overline{\lambda}_j$ and $\overline{\lambda}_j^{-1}$ will be roots
of (\ref{eq:L}) too (see figure~\ref{fig1b}). Now, suppose that $\lambda_j$
approaches the unit circle and $\abs{\lambda_j} < 1$, then
$\abs{\overline{\lambda}_j}^{-1} > 1$ and $\overline{\lambda}_j^{-1}$
will also be approaching the unit circle with
\[
\lambda_j-\overline{\lambda}_j^{-1}\to 0.
\]
At a phase transition the behavior of the entropy of entanglement is
captured by
\begin{theorem}
\label{thm:crit}
Let the $m$ pairs of roots $\lambda_{j}$, $\overline{\lambda}_{j}^{-1}$,
$j=1,\ldots, m$, approach the unit circle together, in such a way
that the limiting values of $\lambda_{j}$, $\overline{\lambda}_{j}^{-1}$ are
distinct from those of $\lambda_{k}$, $\overline{\lambda}_{k}^{-1}$ if $j\neq k$,
then the entanglement entropy is asymptotic to
\begin{equation}
\label{eq:critic_lim}
S(\rho_A)=-\frac{1}{6}\sum_{j=1}^m\log\abs{\lambda_{j} -
\overline{\lambda}_{j}^{-1}} + O(1),
\quad \lambda_{j} \to \overline{\lambda}_{j}^{-1}, \quad j=1,\ldots,m.
\end{equation}
\end{theorem}
From the integral~(\ref{eq:intr_Ki}) it is evident that in order to
prove theorems~\ref{main_theo1} and~\ref{thm:crit} we need an explicit
asymptotic formula for the determinant $D_L(\lambda)$. Indeed, the
following proposition gives us an asymptotic representation for the
determinants of block-Toeplitz matrices whose symbols belong to the
family defined in equations~(\ref{eq:our_symb}) and~(\ref{eq:g_def}).
\begin{proposition}
\label{th10_07}
Let $\Omega_{\epsilon}$ be the set
\begin{equation}
\label{Omegaepsilonsr}
\Omega_{\epsilon} : = \{\lambda \in {\Bbb R}: |\lambda| \geq 1 + \epsilon\}.
\end{equation}
Then the Toeplitz determinant $D_L(\lambda)$ admits the following
asymptotic representation, which is uniform in $\lambda \in
\Omega_{\epsilon}$:
\begin{equation}\label{DLasMay2}
D_L(\lambda)
= (1-\lambda^2)^{L}
\frac{ \theta \left(
\beta(\lambda)\overrightarrow{e}+\frac{ \tau}{2}\right)
\theta \left(\beta(\lambda)\overrightarrow{e}-\frac{\tau}{2}\right)}{
\theta^{2}\left(\frac{ \tau}{2}\right)}
\Bigl(1 + O\left(\rho^{-L}\right)\Bigr), \quad L \to \infty.
\end{equation}
Here $\rho$ is any real number satisfying the inequality
$$
1 < \rho < \mathrm{min}\{|\lambda_{j}|: |\lambda_{j}| > 1\}.
$$
\end{proposition}
\begin{remark}
The first factor on the right-hand side of equation (\ref{DLasMay2})
corresponds to the ``trivial'' factor $G[\Phi]$ of Widom's general
formula (\ref{eq:Wid_theo1}), which we discuss in detail in
section~\ref{Wid_theo}, while the ratio of the theta
functions provides an explicit expression of the most interesting
part of the formula --- Widom's pre-factor $E[\Phi] \equiv \det
\left(T_{\infty}[\Phi]T_{\infty}[\Phi^{-1}]\right)$, which is given
in formula~(\ref{eq:Wid_theo2}).
\end{remark}
\begin{remark}
The asymptotic representation~(\ref{DLasMay2}) is actually valid in
a much wider domain of the complex $\lambda$-plane. Indeed, it is
true everywhere away from the zeros of the right-hand side, which,
unfortunately, in the case of genus $g > 1$ are very difficult to
express in a simple closed form --- one faces a very transcendental
object, i.e. the {\it theta-divisor}. This constitutes an important
difference between the general case and the one with $g=1$ studied
in~\cite{IJK1} and \cite{IJK2}, where the zeros of
equation~(\ref{eq:thet3_zer}) can be easily evaluated.
\end{remark}
\section{Quantum spin chains with anisotropic
Hamiltonians}
\label{spin_chains}
\setcounter{equation}{0}
The XY model is a spin-1/2 ferromagnetic chain with an exchange
coupling $\alpha$ in a constant transversal magnetic field $h$. The
Hamiltonian is $H=hH_{\alpha}$ with $H_\alpha$ given by
\begin{equation}
\label{eq:XYmodel}
H_{\alpha} = -\frac{\alpha}{2}\sum_{j = 0}^{M-1}
\left[(1 + \gamma)\sigma_j^x \sigma_{j+1}^x + (1-\gamma)\sigma_j^y
\sigma_{j+1}^y\right] - \sum_{j=0}^{M-1} \sigma_j^z,
\end{equation}
where $\{\sigma^x,\sigma^y,\sigma^z\}$ are the Pauli matrices. The
parameter $\gamma$ lies in the interval $[0,1]$ and measures the
anisotropy of $H_\alpha$. When $\gamma =0$, equation (\ref{eq:XYmodel}) becomes
the Hamiltonian of the XX model. In the limit $M \to \infty$ the XY
model undergoes a phase transition at $\alpha_{\mathrm{c}}= 1$.
It is well known that the Hamiltonian~(\ref{eq:XYmodel}) can be mapped
into a quadratic form of Fermi operators and then diagonalized. To
this purpose, we introduce the Jordan-Wigner transformations. Let us
define
\begin{equation}
\label{eq:mop}
m_{2l + 1} = \left(\prod_{j =0}^{l-1} \sigma_j^z
\right)\sigma_l^x \quad {\rm and} \quad m_{2l} =
\left(\prod_{j=0}^{l-1} \sigma_j^z\right)\sigma_l^y.
\end{equation}
The inverse relations are
\begin{eqnarray}
\label{eq:invrel}
\sigma^z_l & = & i m_{2l} m_{2l + 1}, \nonumber \\
\sigma_l^x & = &\left(\prod_{j=0}^{l-1}i
m_{2j}m_{2j+1}\right)m_{2l+1}, \nonumber \\
\sigma_l^y &=&\left(\prod_{j=0}^{l-1}i m_{2j}m_{2j+1}\right)m_{2l}
\end{eqnarray}
These operators obey the anticommutation relations
$\{m_j,m_k\}=2\delta_{jk}$, but are not quite Fermi operators since they
are Hermitian. Thus, we define
\[
b_l = (m_{2l+1} -im_{2l})/2 \quad \textrm{and} \quad b_l^\dagger
= (m_{2l+1} + im_{2l})/2,
\]
which are proper Fermi operators, as
\[
\{b_j,b_k\} = 0 \quad \mathrm{and} \quad \{b_j,b_k^\dagger\}=\delta_{jk}.
\]
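
These relations are easy to verify numerically for a small chain. The following Python sketch (illustrative, $M=3$ spins) builds the operators~(\ref{eq:mop}) explicitly as $2^M\times 2^M$ matrices and checks that $\{m_j,m_k\}=2\delta_{jk}$ and that the $b_l$ obey the canonical anticommutation relations.
\begin{verbatim}
import numpy as np
from functools import reduce

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
M = 3                                   # number of spins (kept small)

def site_op(op, l):                     # op on site l, identity elsewhere
    return reduce(np.kron, [op if j == l else I2 for j in range(M)])

m = []
for l in range(M):
    string = reduce(np.dot, [site_op(sz, j) for j in range(l)], np.eye(2**M))
    m.append(string @ site_op(sy, l))   # m_{2l}
    m.append(string @ site_op(sx, l))   # m_{2l+1}

anti = lambda A, B: A @ B + B @ A
ok_m = all(np.allclose(anti(m[j], m[k]), 2*(j == k)*np.eye(2**M))
           for j in range(2*M) for k in range(2*M))

b = [(m[2*l+1] - 1j*m[2*l])/2 for l in range(M)]
ok_b = all(np.allclose(anti(b[j], b[k]), 0) and
           np.allclose(anti(b[j], b[k].conj().T), (j == k)*np.eye(2**M))
           for j in range(M) for k in range(M))
print(ok_m, ok_b)                       # both print True
\end{verbatim}
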
In terms of the operators $b_j$'s the Hamiltonian~(\ref{eq:XYmodel})
becomes\footnote{This is strictly true only for open-end Hamiltonians.
If we impose periodic boundary conditions, then the term
$b^{\dagger}_{M-1}b_0$ in~(\ref{eq:XYmodch}) should be replaced by
$\left[\prod_{j=0}^{M-1}\left(2b^\dagger_jb_j
-1\right)\right]b^\dagger_{M-1}b_0$. However, because we are interested
in the limit $M \rightarrow \infty$, the extra factor in front of
$b_{M-1}^\dagger b_0$ can be neglected.}
\begin{equation}
\label{eq:XYmodch}
H_\alpha = \frac{\alpha}{2} \sum_{j=0}^{M-1}\left[b^\dagger_jb_{j+1} +
b_{j+1}^\dagger b_j + \gamma \left(b_j^\dagger b^\dagger_{j+1}
-b_jb_{j+1}\right) \right] -2\sum_{j=0}^{M-1} b_j^\dagger b_j.
\end{equation}
It turns out that the expectation values of
the operators~(\ref{eq:mop}) with respect to the ground state $\left |
\Psi_{\mathrm{g}} \right \rangle $ are
\begin{eqnarray}
\label{exval1}
\left \langle \Psi_{\mathrm{g}} \right |m_k \left | \Psi_{\mathrm{g}} \right
\rangle & = & 0, \\
\label{exval2}
\left \langle \Psi_\mathrm{g} \right | m_jm_k \left | \Psi_{\mathrm{g}} \right
\rangle & = &\delta_{jk} + i (C_M)_{jk},
\end{eqnarray}
where the correlation matrix $C_M$ has the block structure
\begin{equation}
\label{eq:corr_mat}
C_M = \pmatrix{ C_{11} & C_{12} & \cdots & C_{1M} \cr
C_{21} & C_{22} & \cdots & C_{2M} \cr
\cdots & \cdots & \cdots & \cdots \cr
C_{M1} & C_{M2} & \cdots & C_{MM}}
\end{equation}
with
\[
C_{jk} = \pmatrix{0 & g_{j-k} \cr
-g_{k-j} & 0}.
\]
For large $M$, the real numbers $g_l$ are the Fourier coefficients of
\[
g(\theta) = \frac{ \alpha \cos
\theta - 1 - i \gamma \alpha \sin \theta}{\left | \alpha \cos
\theta -1 - i \gamma \alpha \sin \theta \right |}.
\]
In other words, $C_M$ is a block-Toeplitz matrix with symbol
\begin{equation}
\label{eq:symb_C}
\varphi(\theta) = \pmatrix{0 & g(\theta) \cr
- g^{-1}(\theta) & 0 }.
\end{equation}
(We outline the derivations of formulae~(\ref{exval1})
and~(\ref{exval2}) for the family of systems~(\ref{impH}) that we
study in the appendices B and C.)
Equation~(\ref{exval1}) is a straightforward consequence of the
invariance of $H_\alpha$ under the map $b_j \mapsto -b_j$; for the
same reason the expectation value of the product of an odd number of
$m_j$'s must be zero. Formula~(\ref{exval2}) was derived for the first
time by Lieb \textit{et al.}~\cite{LSM61}. The expectation values of
the product of an even number of the $m_j$'s can be computed using
Wick's theorem:
\begin{equation}
\label{Wick-Th}
\bra{\boldsymbol{\Psi}_{{\rm g}}} m_{j_1}m_{j_2}\cdots \, m_{j_{2n}} \ket{\boldsymbol{\Psi}_{{\rm g}}} =
\sum_{\mathrm{ all \; pairings}} (-1)^p \prod_{\mathrm{all \; pairs}}
\left(\mathrm{contraction \; of \; the \; pair}\right),
\end{equation}
where a contraction of a pair is defined by $\bra{\boldsymbol{\Psi}_{{\rm g}}} m_{j_l}m_{j_m} \ket{\boldsymbol{\Psi}_{{\rm g}}}$
and $p$ is the signature of the permutation, for a given pairing,
necessary to bring operators of the same pair next to each other from
the original order. Many important physical quantities, including the
von Neumann entropy and the spin-spin correlation functions, are
expressed in terms of the expectation values~(\ref{Wick-Th}).
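
A direct, if naive, way of evaluating~(\ref{Wick-Th}) is to expand recursively over the pairings of the first operator with the remaining ones. The sketch below implements this pairing sum; the two-point function $G_{jk}=\bra{\boldsymbol{\Psi}_{{\rm g}}} m_jm_k\ket{\boldsymbol{\Psi}_{{\rm g}}}$ used in the example is an illustrative placeholder rather than data from the model.
\begin{verbatim}
import numpy as np

def wick(indices, G):
    """<m_{j1} ... m_{j2n}> from the two-point function G via the pairing sum"""
    if len(indices) == 0:
        return 1.0
    if len(indices) % 2 == 1:
        return 0.0
    j1, rest = indices[0], indices[1:]
    total = 0.0
    for pos, jk in enumerate(rest):
        sign = (-1)**pos            # sign of moving m_{jk} next to m_{j1}
        total += sign*G[j1, jk]*wick(rest[:pos] + rest[pos+1:], G)
    return total

rng = np.random.default_rng(0)
C = rng.normal(size=(4, 4))
C = C - C.T                          # antisymmetric correlation matrix
G = np.eye(4) + 1j*C                 # <m_j m_k> = delta_jk + i C_jk
print(wick([0, 1, 2, 3], G))         # = G01*G23 - G02*G13 + G03*G12
\end{verbatim}
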
In this paper we study generalizations of the
Hamiltonian~(\ref{eq:XYmodch}) that are quadratic in the Fermi
operators and translation invariant. More explicitly, we consider the
family of systems
\begin{equation}
\label{impH}
H_{\alpha} = \alpha \left[\sum_{j,k=0}^{M-1}
b^\dagger_jA_{jk} b_k + \frac{\gamma}{2}\left(b^\dagger_j
B_{jk}b^\dagger_k - b_j B_{jk}b_k \right)\right] -
2\sum_{j=0}^{M-1} b^{\dagger}_jb_j
\end{equation}
with cyclic boundary conditions. In terms of Pauli operators this
Hamiltonian becomes
\begin{eqnarray}
H_{\alpha} & = & -\frac{\alpha}{2} \sum_{0 \le j \le k \le M-1}
\left[(A_{jk} + \gamma B_{jk})\sigma_j^x
\sigma_k^x \left(\prod_{l=j+1}^{k-1}\sigma_l^z\right)
\right. \nonumber \\
\label{genmodel2}
& & + (A_{jk}-\gamma B_{jk})\sigma_j^y \left.
\sigma_k^y\left(\prod_{l=j+1}^{k-1}\sigma_l^z\right) \right]
-\sum_{j=0}^{M-1}\sigma_j^z.
\end{eqnarray}
The translation invariance of the interaction implies that
$A_{jk}=A_{j-k}$ and $B_{jk}= B_{j-k}$, and the cyclic boundary
conditions force $A$ and $B$ to be circulant matrices. Furthermore,
since $H_\alpha$ is a Hermitian operator, the matrices $A$ and $B$
must be symmetric and anti-symmetric respectively. Now, let us
introduce two real functions,
\[
a:\mathbb{Z}/M\mathbb{Z} \longrightarrow \mathbb{R} \quad \textrm{and}
\quad b : \mathbb{Z}/M\mathbb{Z} \longrightarrow \mathbb{R},
\]
such that
\begin{equation}
\label{eq:identific}
a(j-k) = \alpha A_{j-k} -2 \delta_{jk} \quad \textrm{and} \quad
b(j-k) =\alpha B_{j-k}, \quad j,k \in \mathbb{Z}/M\mathbb{Z}.
\end{equation}
Since $A$ is symmetric and $B$ anti-symmetric, we must have
\[
a(-j)=a(j) \quad \mathrm{and} \quad b(-j)=-b(j).
\]
We shall consider systems with finite range interaction, which
implies that there exists a fixed $n < M$ such that
\begin{equation}
\label{eq:pol_con}
a(j) = b(j) = 0 \quad \mathrm{for} \quad j > n.
\end{equation}
In the appendices B and C we derive the expectation values in the
ground state of the Jordan-Wigner operators $m_j$'s. They have the same
structure as the expectation values~(\ref{exval1}) and~(\ref{exval2}),
but now in the limit as $M \to \infty$ the symbol~(\ref{eq:symb_C}) of
the correlation matrix $C_M$ is replaced by
\begin{equation}
\label{eq:new_symb}
\Phi(z)= \pmatrix{0 & g(z) \cr
-g^{-1}(z) & 0}, \quad \abs{z}=1,
\end{equation}
where
\begin{eqnarray}
\label{eq:new_g}
g(z) & = & \sqrt{\frac{q(z)}{q(1/z)}} =
\sqrt{\frac{p(z)}{z^{2n}p(1/z)}} \\
\label{eq:new_g2}
q(z) & = & \sum_{j=-n}^n\left(a(j) - \gamma
b(j)\right)z^j \\
\label{eq:new_g3}
p(z) & = & z^nq(z).
\end{eqnarray}
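
The passage from the couplings $a(j)$ and $b(j)$ to the symbol is purely mechanical. The short Python sketch below uses illustrative couplings (chosen so that $a(-j)=a(j)$, $b(-j)=-b(j)$ and the finite-range condition~(\ref{eq:pol_con}) holds) and checks that $|g(z)|=1$ on the unit circle and that $g(1/z)=g^{-1}(z)$, up to the choice of branch of the square root.
\begin{verbatim}
import numpy as np

n, gamma = 2, 0.5
a = {0: -1.2, 1: 0.32, 2: 0.16}     # illustrative a(j), with a(-j) = a(j)
b = {0: 0.0, 1: 0.24, 2: 0.08}      # illustrative b(j), with b(-j) = -b(j)

def q(z):
    tot = 0.0 + 0.0j
    for j in range(-n, n + 1):
        tot += (a[abs(j)] - gamma*np.sign(j)*b[abs(j)])*z**j
    return tot

def g(z):
    # g(z)^2 = q(z)/q(1/z); the branch of the square root is left implicit
    return np.sqrt(q(z)/q(1.0/z))

z = np.exp(1j*0.7)                  # a point on the unit circle
print(abs(g(z)))                    # ~ 1.0
print(g(z)**2 * g(1.0/z)**2)        # ~ 1.0, i.e. g(1/z) = g(z)^{-1} up to branch
\end{verbatim}
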
\section{The von Neumann entropy and
block-Toeplitz determinants}
\label{vonneumanentr}
\setcounter{equation}{0}
We now concentrate our attention to study the entanglement of
formation of the ground state $\ket{\boldsymbol{\Psi}_{{\rm g}}}$ of the family of
Hamiltonians~(\ref{impH}). Since the ground state is not degenerate,
the density matrix is simply the projection operator
$\projop{\boldsymbol{\Psi}_{\mathrm{g}}}$. We then divide the system
into two subchains: the first one A containing $L$ spins; the second one
B, made of the remaining $M-L$. We shall further assume that $1 \ll L
\ll M$. This division creates a bipartite system. The Hilbert space
of the whole system is the direct product $\mathcal{H}_{\mathrm{AB}}
=\mathcal{H}_{\mathrm{A}} \otimes
\mathcal{H}_{\mathrm{B}}$, where $\mathcal{H}_{\mathrm{A}}$ and
$\mathcal{H}_{\mathrm{B}}$ are spanned by the vectors
\[
\prod_{j=0}^{L-1}(b^\dagger_j )^{r_j} \ket{\boldsymbol{\Psi}_{{\rm
vac}}} \quad \mathrm{and} \quad \prod_{j=L}^{M-L}(b^\dagger_j
)^{r_j} \ket{\boldsymbol{\Psi}_{{\rm vac}}}, \quad r_j=0,1,
\]
respectively. The vector $\ket{\boldsymbol{\Psi}_{\mathrm{vac}}}$ is
the vacuum state, which is defined by
\[
b_j\ket{\boldsymbol{\Psi}_{{\rm vac}}} = 0, \quad
j=0,\ldots,M-1.
\]
Our goal is to determine the asymptotic behavior for large $L$, with
$L = o(M)$, of the von Neumann entropy
\begin{equation}
\label{entang2}
S(\rho_{{\rm A}}) = - \mathrm{tr} \rho_{{\rm A}} \log
\rho_{{\rm A}},
\end{equation}
where $\rho_{{\rm A}}= \mathrm{tr}_{{\rm B}} \rho_{{\rm AB}}$ and
$\rho_{{\rm AB}}= \projop{\boldsymbol{\Psi}_{\mathrm{g}}}$.
It turns out that after computing the partial trace of
$\rho_{\mathrm{AB}}$ over the degrees of freedom of $B$, the reduced
density matrix $\rho_{\mathrm{A}}$ can be expressed in terms of the first $L$
Fermi operators that generate a basis spanning
$\mathcal{H}_{\mathrm{A}}$. As a consequence, only the submatrix $C_L$
formed by the first $2L$ rows and columns of the correlation
matrix~(\ref{eq:corr_mat}) will be relevant in the computation of the
entropy~(\ref{entang2}). Now, $C_L$ is even dimensional and
skew-symmetric. Furthermore, since
\[
g\left(e^{-i\theta}\right)=\overline{g\left(e^{i\theta}\right)}
\]
its Fourier coefficients are real, therefore there exists an
orthogonal matrix $V$ that block-diagonalizes $C_L$:
\begin{equation}
\label{eq:block_diag}
V C_L V^t = \bigoplus_{j=0}^{L-1} \nu_j \pmatrix{0 & 1 \cr -1 & 0},
\end{equation}
where the $\pm i \nu_j$'s are purely imaginary numbers and are the eigenvalues
of the block-Toeplitz matrix $C_L = T_L[\varphi]$, where $\varphi$ is
the symbol~(\ref{eq:new_symb}).
Let us introduce the operators
\begin{equation}
\label{eq:new_fermi}
c_j = (d_{2j + 1} -id_{2j})/2, \quad j=0,\ldots, L-1,
\end{equation}
where
\begin{equation}
\label{eq:new_jor_wig}
d_j = \sum_{k=0}^{2L-1} V_{jk}m_k.
\end{equation}
Since $V$ is orthogonal, $\{d_j,d_k\}=2\delta_{jk}$ and the $c_j$'s are
Fermi operators. Combining equations~(\ref{eq:block_diag}),
(\ref{eq:new_fermi}) and~(\ref{eq:new_jor_wig}), we obtain the
expectation values
\begin{eqnarray}
\label{od_exv}
\bra{\boldsymbol{\Psi}_{{\rm g}}} c_j\ket{\boldsymbol{\Psi}_{{\rm g}}} & = & \bra{\boldsymbol{\Psi}_{{\rm g}}} c_j\, c_k \ket{\boldsymbol{\Psi}_{{\rm g}}} = 0, \\
\label{ev_ex_val}
\bra{\boldsymbol{\Psi}_{{\rm g}}} c_j^\dagger \,c_k\ket{\boldsymbol{\Psi}_{{\rm g}}} & = & \delta_{jk}\, \frac{1 - \nu_j}{2}.
\end{eqnarray}
The reduced density matrix $\rho_{\mathrm{A}}$ can be computed
directly from these expectation values. We report this computation in
appendix A. We have
\begin{equation}
\label{rop}
\rho_{{\rm A}} = \prod_{j=0}^{L-1}\left(\frac{1
-\nu_j}{2}\,c^\dagger_j \, c_j + \frac{1 + \nu_j}{2}\, c_j \,
c^\dagger_j \right).
\end{equation}
In other words, as equations~(\ref{od_exv}) and~(\ref{ev_ex_val})
already suggest, these fermionic modes are in a product of
uncorrelated states, therefore the density matrix is the direct
product
\begin{equation}
\rho_{{\rm A}} = \bigotimes_{j=0}^{L-1} \rho_j \quad {\rm with}
\quad \rho_j = \frac{1 - \nu_j}{2}\,c^\dagger_j \, c_j + \frac{1 +
\nu_j}{2}\, c_j \, c^\dagger_j.
\end{equation}
Since $(1 + \nu_j)/2$ and $(1-\nu_j)/2$ are eigenvalues of density
matrices, they must lie in the interval $(0,1)$; therefore,
\[
-1 < \nu_j < 1, \quad j=0,\ldots,L-1.
\]
At this point the entropy of the entanglement between the two
subsystems can be easily derived from equation~(\ref{entang2}):
\begin{equation}
\label{nent}
S(\rho_{{\rm A}}) = \sum_{j=0}^{L -1} e(1,\nu_j),
\end{equation}
where $e(x,\nu)$ is defined in equation~(\ref{binaryent}). Using the
residue theorem, formula~(\ref{nent}) can be rewritten as
\begin{eqnarray}
\label{korep_int}
S(\rho_{\mathrm{A}}) & = &
\lim_{\epsilon \to 0^+} \frac{1}{4\pi i} \oint_{\Gamma(\epsilon)}
\left((-1)^L\sum_{j=0}^{L-1}\frac{ 2\lambda }%
{\lambda^2 - \nu_j^2}\right)e(1 +\epsilon, \lambda) \mathrm{d} \lambda
\nonumber \\
&=& \lim_{\epsilon \to 0^+} \frac{1}{4\pi i} \oint_{\Gamma(\epsilon)}
e(1 +\epsilon, \lambda)\frac{\mathrm{d} \log D_L(\lambda)}{\mathrm{d}\lambda} \mathrm{d} \lambda
\end{eqnarray}
where $\Gamma(\epsilon)$ is the contour in figure~\ref{fig1} and
\begin{equation}
\label{eq:DL}
D_L(\lambda) = (-1)^L\prod_{j=0}^{L-1}(\lambda^2 - \nu_j^2)
\end{equation}
is the determinant of the block-Toeplitz\ matrix $T_L[\Phi](\lambda)$
with symbol~(\ref{eq:our_symb}).
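
Numerically, for moderate $L$ the route from the symbol to the entropy is straightforward: assemble $C_L=T_L[\varphi]$ with the symbol~(\ref{eq:new_symb}), read off the numbers $\nu_j$ from its eigenvalues $\pm i\nu_j$, and sum the binary entropies as in~(\ref{nent}). The following Python sketch does this for illustrative XY-type parameters.
\begin{verbatim}
import numpy as np

alpha, gamma, L, M = 0.8, 0.5, 40, 4096        # illustrative parameters

theta = 2.0*np.pi*np.arange(M)/M
w = alpha*np.cos(theta) - 1.0 - 1j*gamma*alpha*np.sin(theta)
g = w/np.abs(w)
gk, hk = np.fft.fft(g)/M, np.fft.fft(1.0/g)/M  # coefficients of g and 1/g

C = np.zeros((2*L, 2*L), dtype=complex)        # C_L = T_L[phi]
for j in range(L):
    for k in range(L):
        C[2*j, 2*k+1] = gk[(j - k) % M]
        C[2*j+1, 2*k] = -hk[(j - k) % M]

# eigenvalues come in pairs +/- i nu_j; keep one nu_j from each pair
nu = np.sort(np.abs(np.linalg.eigvals(C).imag))[::2]

def e(x, v):                                   # binary entropy of (binaryent)
    s = 0.0
    for t in ((x + v)/2.0, (x - v)/2.0):
        if t > 1e-14:
            s -= t*np.log(t)
    return s

print(sum(e(1.0, v) for v in nu))              # S(rho_A) of equation (nent)
\end{verbatim}
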
The integral~(\ref{korep_int}) was introduced for the first time by
Jin and Korepin~\cite{JK} to compute the entropy of entanglement in
the XX model. In this case $g^{-1}(\theta)=g(\theta)$ and
$D_L(\lambda)$ becomes the determinant of a Toeplitz matrix with a
scalar symbol. Keating and Mezzadri~\cite{KM04,KM05} generalized it
to lattice models where $D_L(\lambda)$ becomes an average over one of
the classical compact groups. Its \textit{et al.}~\cite{IJK1,IJK2}
computed the same integral for the XY model, for which $D_L(\lambda)$
is the determinant of a block-Toeplitz matrix with
symbol~(\ref{eq:ijk_symb}). Following the same approach as
Its~\textit{et al.}, in this paper we express $D_L(\lambda)$ as a
Fredholm determinant of an integrable operator on
$L^2(\Xi,\mathbb{C}^2)$ and solve the Riemann-Hilbert problem
associated to it. This will give an explicit formula for
$D_L(\lambda)$, which can then be used to compute the
integral~(\ref{korep_int}).
\section{The Asymptotics of Block Toeplitz Determinants.
Widom's Theorem}
\label{Wid_theo}
\setcounter{equation}{0}
A generalization of the strong Szeg\H{o}'s theorem to determinants of
block-Toeplitz matrices was first discovered by
Widom~\cite{Wid74,Wid75}. Consider a $p\times p$ matrix symbol
$\varphi$ and assume that
\[
|| \varphi || =\sum_{k=-\infty}^\infty ||\varphi_k|| +
\left(\sum_{k=-\infty}^\infty |k|\, ||\varphi_k||^2\right)^{1/2} < \infty.
\]
The norm that appears on the right-hand side of this equation is the
Hilbert-Schmidt norm of the $p \times p$ matrices that occur. In
addition, we shall require that
\[
\det \varphi(z) \neq 0 \quad \mathrm{and} \quad \Delta|_{|z|=1} \arg
\det \varphi(z) =0.
\]
Widom showed that if one defines
\begin{equation}
\label{eq:Wid_theo1}
G[\varphi] := \exp\left(\frac{1}{2\pi i} \int_{\Xi} \log \det
\varphi(z) \frac{\mathrm{d} z}{z} \right)
\end{equation}
then
\begin{equation}
\label{eq:Wid_theo2}
E[\varphi] := \lim_{L \to \infty} \frac{D_L[\varphi]}{G[\varphi]^{L + 1}} =
\det \left(T_\infty[\varphi]T_{\infty}[\varphi^{-1}]\right),
\end{equation}
where $T_\infty[\varphi]$ is a semi-infinite Toeplitz matrix acting on
the Hilbert space of semi-infinite sequences of $p$-vectors:
\[
l^2 = \left\{ \{\mathbf{v}_k\}_{k=0}^\infty \left |\,
\mathbf{v}_k \in \mathbb{C}^p, \quad \sum_{k=0}^\infty ||
\mathbf{v}_k||^2 < \infty \right.\right\}.
\]
Formulae~(\ref{eq:Wid_theo1}) and~(\ref{eq:Wid_theo2}) reduce to
Szeg\H{o}'s strong limit theorem when $p=1$. Although this beautiful
formula is very general, it is difficult to extract information from
the right-hand side of equation~(\ref{eq:Wid_theo2}) and determine
formulae that can be used in the applications. The advantage of our
approach is precisely to derive an explicit formula for the leading order
term of the asymptotics of block-Toeplitz determinants whose symbols
$\Phi(z)$ belong to the one-parameter family defined
in~(\ref{eq:our_symb}).
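
Widom's theorem is easy to probe numerically for the family~(\ref{eq:our_symb}): since $\det\Phi(z)=1-\lambda^{2}$ is independent of $z$, one has $G[\Phi]=1-\lambda^{2}$, and the ratios $D_L/G[\Phi]^{L+1}$ should settle to the constant $E[\Phi]$ as $L$ grows. The sketch below (illustrative XY-type parameters, same matrix assembly as in the Introduction) exhibits this convergence.
\begin{verbatim}
import numpy as np

alpha, gamma, lam, M = 0.8, 0.5, 1.5, 2048   # illustrative values

theta = 2.0*np.pi*np.arange(M)/M
w = alpha*np.cos(theta) - 1.0 - 1j*gamma*alpha*np.sin(theta)
g = w/np.abs(w)
gk, hk = np.fft.fft(g)/M, np.fft.fft(1.0/g)/M

def D(L):
    T = np.zeros((2*L, 2*L), dtype=complex)
    for j in range(L):
        for k in range(L):
            T[2*j, 2*k] = T[2*j+1, 2*k+1] = 1j*lam*(j == k)
            T[2*j, 2*k+1] = gk[(j - k) % M]
            T[2*j+1, 2*k] = -hk[(j - k) % M]
    return np.linalg.det(T)

G = 1.0 - lam**2                             # G[Phi]: det Phi(z) = 1 - lam^2
for L in (5, 10, 20, 40):
    print(L, D(L)/G**(L + 1))                # ratios approach E[Phi]
\end{verbatim}
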
A starting point of our analysis is the asymptotic representation of
the logarithmic derivative (with respect to the parameter $\lambda$)
of the determinant $D_{L}(\lambda) = \det T_L [\Phi](\lambda)$ in
terms of $2\times 2$ matrix-valued functions, denoted by $U_{\pm}(z)$
and $V_{\pm}(z)$, which solve the following Wiener-Hopf factorization
problem:
\begin{eqnarray}
\label{eq:WH}
\Phi(z)& = & U_+(z)U_-(z)=V_-(z)V_+(z), \\
U_-(z) & \mbox{and}& V_-(z)\quad (U_+(z) \quad \mbox{and} \quad
V_+(z)) \quad \mbox{are analytic outside (inside)} \nonumber\\
\mbox{the unit}& \mbox{circle}& \Xi, \\
U_-(\infty)& = & V_-(\infty)=I.
\end{eqnarray}
Now, let us fix $\epsilon > 0$ and define the set
\begin{equation}\label{Omegaepsilon}
\Omega_{\epsilon} : = \{\lambda \in {\Bbb R}: |\lambda| \geq 1 + \epsilon\}.
\end{equation}
In the next section we will show that
for every $\lambda \in \Omega_{\epsilon}$ the solution of the above
Wiener-Hopf factorization problem exists, and the corresponding matrix
functions, $U_{\pm}(z)$ and $V_{\pm}(z)$
satisfy the following uniform estimate:
\begin{equation}
\label{WHest}
\left|\frac{1}{\lambda}U_{+}(z)\right|,\,\,
\left|\frac{1}{\lambda}V_{+}(z)\right|,\,\,
|U_{-}(z)|,\,\, |V_{-}(z)| < C_{\epsilon},\quad \forall z \in
\mathcal{D}_{\pm}, \quad \forall \lambda
\in \Omega_{\epsilon},
\end{equation}
where the notation $ \mathcal{D}_{+}$ ($\mathcal{D}_{-}$) is used for
the interior (exterior) of the unit circle $\Xi$. Moreover,
generalizing the approach of \cite{IJK1,IJK2}, we will obtain
explicit formulae, in terms of multidimensional theta functions, for the
functions $U_{\pm}(z)$ and $V_{\pm}(z)$.
The asymptotic representation of the logarithmic derivative $\mathrm{d} \log
D_{L}(\lambda)/\mathrm{d}\lambda$ is given by the following theorem:
\begin{theorem}\label{widom1}
Let $\lambda \in \Omega_{\epsilon}$, and fix a positive number $R >
0$. Then, we have the following asymptotic representation for the
logarithmic derivative of the determinant $D_{L}(\lambda) = \det
T_L[\Phi] $:
\begin{eqnarray}
\label{our}
\frac{\mathrm{d}}{\mathrm{d}\lambda}\log
D_L(\lambda) & = & -\frac{2\lambda}{1-\lambda^{2}}L \nonumber \\
& & + \frac{1}{2\pi} \int_{\Xi}\mathrm{tr} \, \Bigl[\left(U_{+}'(z)U_{+}^{-1}(z)
+V_{+}^{-1}(z)V_{+}'(z)\right)\Phi^{-1}(z)\Bigr]\mathrm{d} z \nonumber \\
& & + r_{L}(\lambda),
\end{eqnarray}
where $(')$ means the derivative with respect to $z$, the error term
$r_{L}(\lambda)$ satisfies the estimate
\begin{equation}
\label{errorour}
|r_{L}(\lambda)|\leq C \rho^{-L}, \quad
\quad \lambda \in \Omega_{\epsilon}\cap\{|\lambda| \leq R\},
\quad L \geq 1,
\end{equation}
and $\rho$ is any real number such that $1 < \rho <
\mathrm{min}\{|\lambda_{j}|:
|\lambda_{j}| > 1\}$.
\end{theorem}
This theorem, without the error term estimate, is a specification of
one of the classical results of H. Widom \cite{Wid74} for the case of
the matrix generators $\Phi(z)$ whose dependence on the extra
parameter $\lambda$ is given by the equation
$$
\Phi(z) \equiv \Phi(z;\lambda) = i\lambda I + \Phi(z;0).
$$
The estimate (\ref{errorour}) of the error term as well as an
alternative proof of the theorem itself in the case of curves of
genus one is given in \cite{IJK1} and~\cite{IJK2}. The method of
\cite{IJK1} and~\cite{IJK2} is based on the Riemann-Hilbert approach
to the Toeplitz determinants \cite{D} and on the theory of the
integrable Fredholm operators~\cite{IIKS,HI}; its extension to
symbols (\ref{eq:our_symb}), where the polynomial $p(z)$ entering
in~(\ref{eq:g_def}) is of arbitrary degree, is straightforward.
Indeed, the following generalization of theorem \ref{widom1} follows
directly from the analytic considerations of \cite{IJK2}.
\begin{theorem}
\label{widom2}
Suppose that the matrix generator $\Phi(z)$ is analytic in the annulus,
$$
\mathcal{D}_{\delta} = \{1-\delta < |z| < 1 + \delta\}.
$$
Suppose also that $\Phi(z)$ depends analytically on an extra
parameter $\mu$ and that it admits a Wiener-Hopf factorisation
for all $\mu$ from a certain set $\mathcal{M}$. Finally, we shall
assume that the matrix functions
$$
\Phi(z),\,\, \Phi^{-1}(z),\,\, \frac{\partial \Phi(z)}{\partial \mu},\,\,
U_{\pm}(z), \,\,\mbox{and}\,\, V_{\pm}(z)
$$
are uniformly bounded for all $\mu \in \mathcal{M}$ and all
$z$ from the respective domains, i.e. $\mathcal{D}_{\delta}$ in the case
of $\Phi(z)$, $\Phi^{-1}(z)$, and ${\partial \Phi(z)}/{\partial \mu}$,
and $\mathcal{D}_{\pm}$ in the case of $U_{\pm}(z)$
and $V_{\pm}(z)$. Then, the logarithmic derivative of
the determinant $D_{L}(\mu) = \det T_L[\Phi] $ has the following
asymptotic representation:
\begin{eqnarray}
\label{our1}
\frac{\mathrm{d}}{\mathrm{d}\mu}\log
D_L(\mu)& = & \frac{L}{2\pi i}\int_{\Xi}\mathrm{tr}\,
\left(\Phi^{-1}(z)\frac{\partial \Phi(z)}{\partial \mu}\right)
\frac{\mathrm{d} z}{z}
+ \frac{1}{2\pi i}\int_{\Xi}\mathrm{tr}\,
\left((\Phi^{-1})'(z)\frac{\partial \Phi(z)}{\partial \mu}\right)
\frac{\mathrm{d} z}{z} \nonumber \\
& & + \frac{1}{2\pi i} \int_{\Xi}\mathrm{tr}\, \left(U_{+}'(z)U_{+}^{-1}(z)
\frac{\partial \Phi(z)}{\partial \mu}\Phi^{-1}(z)
+V_{+}^{-1}(z)V_{+}'(z)\Phi^{-1}(z)\frac{\partial \Phi(z)}{\partial
\mu}\right)\mathrm{d} z \nonumber \\
& & + r_{L}(\mu),
\end{eqnarray}
where the error term $r_{L}(\mu)$ satisfies the uniform estimate
\begin{equation}
\label{errorour1}
|r_{L}(\mu)|\leq C \rho^{-L}, \quad
\quad \mu \in \mathcal{M},
\quad L \geq 1,
\end{equation}
and $\rho$ is any positive number such that $1 < \rho < 1+\delta$.
\end{theorem}
This theorem, without the estimate of the error term and with much
weaker assumptions on the generator $\Phi(z)$, is exactly the
classical result of Widom from \cite{Wid74}.
\begin{remark}
Denote
$$
u_{\pm}(z) = V^{-1}_{\pm}(z), \quad \mbox{and}\quad v_{\pm}(z) =
U^{-1}_{\pm}(z),
$$
so that
$$
\Phi^{-1}(z) = u_{+}(z)u_{-}(z) = v_{-}(z)v_{+}(z).
$$
Then, equation~(\ref{our1}) can be re-written in a more compact way:
\begin{eqnarray}
\label{our11}
\frac{\mathrm{d}}{\mathrm{d}\mu}\log
D_L(\mu) & = & \frac{L}{2\pi i}\int_{\Xi}\mathrm{tr}\,
\left(\Phi^{-1}(z)\frac{\partial \Phi(z)}{\partial \mu}\right)
\frac{\mathrm{d} z}{z} \nonumber \\
& & + \frac{i}{2\pi} \int_{\Xi}\mathrm{tr}\, \left((u_{+}'(z)u_{-}(z)
-v_{-}'(z)v_{+}(z))\frac{\partial \Phi(z)}{\partial \mu}\right)\mathrm{d} z
\nonumber \\
&& + r_{L}(\mu).
\end{eqnarray}
This is the form in which this result is formulated in \cite{Wid74}.
\end{remark}
Theorem \ref{widom2} can be used to strengthen the statement of
theorem \ref{widom1} by removing the dependence of the constant $C$ on
$R$ in the estimate (\ref{errorour}). This leads to the following
extension of theorem \ref{widom1}:
\begin{theorem}
\label{widom3}
Let $\Omega_{\epsilon}$ be the set defined in (\ref{Omegaepsilon})
and let $\Phi(z)$ be the symbol defined in~(\ref{eq:our_symb}).
Then we have the following asymptotic representation of the
logarithmic derivative of the determinant $D_{L}(\lambda) = \det
T_L[\Phi] $ for all $\lambda \in \Omega_{\epsilon}$:
\begin{eqnarray}
\label{our2}
\frac{\mathrm{d}}{\mathrm{d}\lambda}\log
D_L(\lambda) & = & -\frac{2\lambda}{1-\lambda^{2}}L
+ \frac{1}{2\pi} \int_{\Xi}\mathrm{tr}\, \Bigl[\left(U_{+}'(z)U_{+}^{-1}(z)
+V_{+}^{-1}(z)V_{+}'(z)\right)\Phi^{-1}(z)\Bigr]\mathrm{d} z \nonumber \\
&& + r_{L}(\lambda),
\end{eqnarray}
where $(')$ means the derivative with respect to $z$,
the error term $r_{L}(\lambda)$ satisfies the uniform estimate
\begin{equation}
\label{errorour2}
|r_{L}(\lambda)|\leq \frac{C}{|\lambda|^3} \rho^{-L}, \quad
\quad \lambda \in \Omega_{\epsilon},
\quad L \geq 1
\end{equation}
and $\rho$ is any real number such that $1 < \rho <
\mathrm{min}\{|\lambda_{j}|:
|\lambda_{j}| > 1\}$.
\end{theorem}
\begin{proof}
Let $R>1+{\epsilon}$ and denote by $C_{1}$ the constant $C$ from the
estimate (\ref{errorour}). Take now $\lambda \in \Omega_{\epsilon}$,
$|\lambda| \geq R$ and set
$$
\mu = \frac{1}{\lambda} \in \mathcal{M} \equiv \left\{\mu \in {\Bbb
R}: |\mu| \leq \frac{1}{R} < \frac{1}{1+\epsilon}\right\}.
$$
By trivial algebra, we arrive at
$$
D_{L}(\lambda) = (-\lambda^{2})^{L}\tilde{D}_{L}(\mu),
$$
where $\tilde{D}_{L}(\mu) \equiv \det T_{L}[\tilde{\Phi}]$ and
\begin{equation}
\label{Phitildedef}
\tilde{\Phi}(z) \equiv \frac{1}{i\lambda}\Phi(z) = I -i\mu\Phi(z;0) \equiv
\pmatrix{1 & -i\mu g(z) \cr
i\mu g^{-1}(z) & 1}.
\end{equation}
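For completeness, the ``trivial algebra'' is the observation that
$T_{L}[\Phi]=i\lambda\,T_{L}[\tilde{\Phi}]$ and that $T_{L}[\Phi]$ is a
$2L\times 2L$ matrix, so that
$$
D_{L}(\lambda)=\det\bigl(i\lambda\,T_{L}[\tilde{\Phi}]\bigr)
=(i\lambda)^{2L}\,\tilde{D}_{L}(\mu)=(-\lambda^{2})^{L}\,\tilde{D}_{L}(\mu).
$$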
From this relation it also follows that
\begin{equation}
\label{DtildeD}
\frac{\mathrm{d}}{\mathrm{d}\lambda}\log D_{L}(\lambda) = \frac{2L}{\lambda}
-\frac{1}{\lambda^2}\frac{\mathrm{d}}{\mathrm{d} \mu}\log \tilde{D}_{L}(\mu),
\end{equation}
and hence the asymptotic analysis of the logarithmic derivative $\mathrm{d}\log
D_{L}(\lambda)/\mathrm{d}\lambda$ for $|\lambda| \geq R$ is reduced to
that of the logarithmic derivative $\mathrm{d}\log
\tilde{D}_{L}(\mu)/\mathrm{d} \mu$ for $\mu \in \mathcal{M} \equiv \left\{\mu
\in {\Bbb R}: |\mu| \leq \frac{1}{R} < \frac{1}{1+\epsilon}\right\}$.
Firstly, we notice that for all $\mu \in \mathcal{M}$ and $z\in
\mathcal{D}_{\delta}$ the functions $\tilde{\Phi}(z)$,
$\tilde{\Phi}^{-1}(z)$ and $\partial \tilde{\Phi}(z)/
\partial \mu$ are uniformly bounded. Secondly, we have that
$$
\tilde{\Phi}(z) = \frac{1}{i\lambda}\Phi(z) =
\frac{1}{i\lambda}U_{+}(z)U_{-}(z)
=\frac{1}{i\lambda}V_{-}(z)V_{+}(z),
$$
and hence the matrix valued functions $\tilde{U}_{\pm}(z)$
and $\tilde{V}_{\pm}(z)$ defined by the relations
$$
\tilde{U}_{+}(z) = \frac{1}{i\lambda}U_{+}(z), \quad
\tilde{V}_{+}(z) = \frac{1}{i\lambda}V_{+}(z), \quad
\tilde{U}_{-}(z) = U_{-}(z), \quad \tilde{V}_{-}(z) = V_{-}(z)
$$
provide the Wiener-Hopf factorization of the generator
$\tilde{\Phi}(z)$. Moreover, because of the estimates (\ref{WHest}),
the functions $\tilde{U}_{\pm}(z)$ and $\tilde{V}_{\pm}(z)$ are
uniformly bounded for all $\mu \in \mathcal{M}$ and $z\in
\mathcal{D}_{\pm}$. Hence, all the conditions of theorem \ref{widom2}
are met, and we can claim the uniform asymptotic representation
(\ref{our1}) of the logarithmic derivative of the determinant
$\tilde{D}_{L}(\mu)$ with the symbols $\Phi$, $U$, and $V$ replaced by
$\tilde{\Phi}$, $\tilde{U}$ and $\tilde{V}$ respectively. We shall
also use the notation $\tilde{r}_{L}(\mu)$ and $C_{2}$ for the error
term and constant $C$ from the corresponding estimate
(\ref{errorour1}) respectively.
The specific form (\ref{Phitildedef}) of dependence of the generator
$\tilde{\Phi}(z)$ on
the parameter $\mu$ implies that
\begin{equation}\label{trace1}
\tilde{\Phi}^{-1}(z)\frac{\partial \tilde{\Phi}(z)}{\partial \mu}
= \frac{1}{1-\mu^2}
\pmatrix{-\mu & -i g(z) \cr
ig^{-1}(z) & -\mu},
\end{equation}
and
\begin{equation}\label{trace2}
(\tilde{\Phi}^{-1})'(z)\frac{\partial \tilde{\Phi}(z)}{\partial \mu}
= \frac{1}{1-\mu^2}
\pmatrix{-\mu g^{-1}(z)g'(z)& 0 \cr
0 & \mu g^{-1}(z)g'(z)}.
\end{equation}
Hence
\begin{eqnarray*}
\mathrm{tr}\,\left(\tilde{\Phi}^{-1}(z)\frac{\partial
\tilde{\Phi}(z)}{\partial \mu}\right)
&=& -\frac{2\mu}{1-\mu^2} = \frac{2\lambda}{1-\lambda^2} \\
\mathrm{tr}\,\left((\tilde{\Phi}^{-1})'(z)\frac{\partial
\tilde{\Phi}(z)}{\partial \mu}\right) &=& 0
\end{eqnarray*}
and equation (\ref{our1}) for the determinant $\tilde{D}_{L}(\mu)$ becomes
\begin{eqnarray}
\label{our3}
\frac{\mathrm{d}}{\mathrm{d} \mu}\log
\tilde{D}_L(\mu) &=& \frac{2\lambda}{1-\lambda^2}L \nonumber \\
&& + \frac{1}{2\pi i} \int_{\Xi}\mathrm{tr}\,
\left(\tilde{U}_{+}'(z)\tilde{U}_{+}^{-1}(z)
\frac{\partial \tilde{\Phi}(z)}{\partial \mu}\tilde{\Phi}^{-1}(z)
+\tilde{V}_{+}^{-1}(z)\tilde{V}_{+}'(z)\tilde{\Phi}^{-1}(z)\frac{\partial
\tilde{\Phi}(z)}{\partial \mu}\right)\mathrm{d} z \nonumber \\
&& + \tilde{r}_{L}(\mu),
\end{eqnarray}
with
\begin{equation}\label{errorour3}
|\tilde{r}_{L}(\mu)|\leq C_{2} \rho^{-L}, \quad
\quad \mu \in \mathcal{M},
\quad L \geq 1.
\end{equation}
Observe now that equation (\ref{trace1}) can be rewritten as
$$
\tilde{\Phi}^{-1}(z)\frac{\partial \tilde{\Phi}(z)}{\partial \mu}
=\frac{\partial \tilde{\Phi}(z)}{\partial \mu} \tilde{\Phi}^{-1}(z)
=\left(\lambda I -i\lambda^2 \Phi^{-1}(z)\right).
$$
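A quick way to check this rewriting: since $\Phi(z)=i\lambda\tilde{\Phi}(z)$
and $\lambda\mu=1$, we have
$$
\lambda I-i\lambda^{2}\Phi^{-1}(z)=\lambda\bigl(I-\tilde{\Phi}^{-1}(z)\bigr)
=\frac{\lambda}{1-\mu^{2}}\pmatrix{-\mu^{2}&-i\mu g(z)\cr
i\mu g^{-1}(z)&-\mu^{2}\cr}
=\frac{1}{1-\mu^{2}}\pmatrix{-\mu&-ig(z)\cr
ig^{-1}(z)&-\mu\cr},
$$
which is precisely the right hand side of (\ref{trace1}).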
This relation, together with the obvious fact that
$$
\tilde{U}_{+}'(z)\tilde{U}_{+}^{-1}(z) = U_{+}'(z)U_{+}^{-1}(z)
\quad\mbox{and}\quad \tilde{V}^{-1}_{+}(z)\tilde{V}_{+}'(z) =
V_{+}^{-1}(z)V_{+}'(z),
$$
allows us to transform (\ref{our3}) into the asymptotic formula
\begin{eqnarray}
\label{our4}
\frac{\mathrm{d}}{\mathrm{d} \mu}\log
\tilde{D}_L(\mu) & = & \frac{2\lambda}{1-\lambda^2}L \nonumber \\
&& - \frac{\lambda^2}{2\pi } \int_{\Xi}\mathrm{tr}\,\left[
\left(U_{+}'(z)U_{+}^{-1}(z)
+V_{+}^{-1}(z)V_{+}'(z)\right)\Phi^{-1}(z)\right]\mathrm{d} z \nonumber \\
&& + \tilde{r}_{L}(\mu).
\end{eqnarray}
The substitution of this relation into the right hand side of
equation (\ref{DtildeD}) yields the following asymptotic formula ---
which is complementary to the equation (\ref{our}) ---
\begin{eqnarray}
\label{our5}
\frac{\mathrm{d}}{\mathrm{d}\lambda}\log
D_L(\lambda) & = &-\frac{2\lambda}{1-\lambda^{2}}L \nonumber \\
&& + \frac{1}{2\pi} \int_{\Xi}\mathrm{tr}\, \Bigl[\left(U_{+}'(z)U_{+}^{-1}(z)
+V_{+}^{-1}(z)V_{+}'(z)\right)\Phi^{-1}(z)\Bigr]\mathrm{d} z \nonumber \\
&& + r_{L}(\lambda),
\end{eqnarray}
with the error term $r_{L}(\lambda)$ satisfying the estimate
\begin{equation}\label{errorour5}
|r_{L}(\lambda)|\leq \frac{C_{2}}{|\lambda|^2} \rho^{-L}, \quad
\quad \lambda \in \Omega_{\epsilon}\cap\{|\lambda| \geq R\},
\quad L \geq 1.
\end{equation}
Choosing
$$
C = \mbox{max}\,\{C_{1}R, C_{2}\},
$$
we arrive at the statement of the theorem, but with a weaker estimate
for the error term $r_{L}(\lambda)$ than the one in~(\ref{errorour2}).
In order to improve the estimate (\ref{errorour5}), we notice that since
$\tilde{\Phi}(z)$ becomes the identity matrix as $\mu \to 0$, the
Wiener-Hopf factorization of $\tilde{\Phi}(z)$ exists for all $\mu$
from the small complex neighbourhood
$$
\mathcal{M}_{0} \equiv \{\mu \in {\Bbb C}: |\mu| < \epsilon_{0} \leq
\frac{1}{R}\}
$$
of the point $\mu =0$. In particular, this implies that the Wiener-Hopf
factors, $\tilde{U}_{\pm}(z)$
and $\tilde{V}_{\pm}(z)$, admit an analytic continuation to the disc
$\mathcal{M}_{0}$
and that the validity of the formulae (\ref{our3}) and (\ref{errorour3})
can be
extended to the set
$$
\mathcal{M}_{0}\cup \mathcal{M}.
$$
Moreover, from equation (\ref{our3}) it follows that
$\tilde{r}_{L}(\mu)$ is analytic in the disc $\mathcal{M}_{0}$ and
that $\tilde{r}_{L}(0) = 0$. In order to see that the latter equality
is true, one has to take into account that
$\tilde{U}_{\pm}(z)=\tilde{V}_{\pm}(z) = I$ for all $z$ when $\mu =0$,
together with the evenness of $\tilde{D}_{L}(\mu)$ as a function of $\mu$. Now,
define
$$
\hat{r}_{L}(\mu) = \frac{\tilde{r}_{L}(\mu)}{\mu}.
$$
The function $\hat{r}_{L}(\mu)$ is analytic in the disc
$\mathcal{M}_{0}$ and satisfies the estimate (\ref{errorour3}) uniformly
for $\mu \in C_{\epsilon'} \equiv \{|\mu| = \epsilon'\}$ and for any $0 <
\epsilon' < \epsilon_{0}$. With the help of the Cauchy formula,
$$
\hat{r}_{L}(\mu) = \frac{1}{2\pi i}\oint_{|\mu'| = \epsilon_{0}/2}
\frac{\hat{r}_{L}(\mu')}{\mu'-\mu}\mathrm{d} \mu',
$$
we conclude that
$$
|\hat{r}_{L}(\mu) | < C\rho^{-L}, \quad |\mu| \leq \epsilon_{0}/3,
\quad L > 1
$$
or
$$
|\tilde{r}_{L}(\mu) | < C|\mu|\rho^{-L}, \quad |\mu| \leq \epsilon_{0}/3,
\quad L > 1.
$$
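For definiteness, the constant here can be tracked as follows: by
(\ref{errorour3}), on the circle $|\mu'|=\epsilon_{0}/2$ one has
$|\hat{r}_{L}(\mu')|=|\tilde{r}_{L}(\mu')|/|\mu'|\leq
2C_{2}\epsilon_{0}^{-1}\rho^{-L}$, while $|\mu'-\mu|\geq\epsilon_{0}/6$ for
$|\mu|\leq\epsilon_{0}/3$, so that the Cauchy formula gives
$$
|\hat{r}_{L}(\mu)|\leq \frac{\epsilon_{0}/2}{\epsilon_{0}/6}\,
\max_{|\mu'|=\epsilon_{0}/2}|\hat{r}_{L}(\mu')|
\leq \frac{6C_{2}}{\epsilon_{0}}\,\rho^{-L},
\qquad |\mu|\leq\frac{\epsilon_{0}}{3},
$$
i.e. one may take $C=6C_{2}/\epsilon_{0}$ in the last two estimates.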
The last inequality, combined with (\ref{errorour3}), allows us to
replace it by the estimate
$$
|\tilde{r}_{L}(\mu) | < C|\mu|\rho^{-L}, \quad \mu \in \mathcal{M} ,
\quad L > 1,
$$
which, in turn, transforms estimate (\ref{errorour5}) into the estimate
\begin{equation}\label{errorour6}
|r_{L}(\lambda)|\leq \frac{C_{2}}{|\lambda|^3} \rho^{-L}, \quad
\quad \lambda \in \Omega_{\epsilon}\cap\{|\lambda| \geq R\},
\quad L \geq 1,
\end{equation}
and hence yields the improved estimate announced in (\ref{errorour2}).
This completes the proof of the theorem.
\end{proof}
\section{The Wiener-Hopf factorization of $\Phi(z)$}
\label{WH_fact}
\setcounter{equation}{0}
In this section we will compute the Wiener-Hopf factorization of
$\Phi(z)$. We will express the solution in terms of theta functions on
a hyperelliptic curve $\mathcal{L}$.
From the equality
\begin{eqnarray*}
(1-\lambda^2)\sigma_3\Phi^{-1}(z)\sigma_3=\Phi(z),\quad
\sigma_3=\pmatrix{1&0\cr
0&-1\cr},
\end{eqnarray*}
we can express $V$ in terms of $U$ as follows:
\begin{eqnarray}
\label{eq:UV}
V_-(z)&=&\sigma_3U_-^{-1}(z)\sigma_3,\nonumber\\
V_+(z)&=&\sigma_3U_+^{-1}(z)\sigma_3(1-\lambda^2), \quad
\lambda\neq\pm 1.
\end{eqnarray}
Therefore, we only need to compute $U_{\pm}(z)$. To do so, first note
that $\Phi(z)$ can be diagonalized by the matrix
\begin{equation}
\label{eq:dia}
Q(z)=\pmatrix{g(z)&-g(z)\cr
i&i\cr}.
\end{equation}
Indeed, it is straightforward to see that
\begin{eqnarray*}
\Phi(z)&=&Q(z)\Lambda Q^{-1}(z),\\
\Lambda&=&i\pmatrix{\lambda+1&0\cr
0&\lambda-1\cr}.
\end{eqnarray*}
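For the reader's convenience, this can be checked by direct multiplication:
since $\det Q(z)=2ig(z)$,
$$
Q^{-1}(z)=\frac{1}{2}\pmatrix{g^{-1}(z)&-i\cr
-g^{-1}(z)&-i\cr},
$$
and therefore
$$
Q(z)\Lambda Q^{-1}(z)=\pmatrix{i\lambda&g(z)\cr
-g^{-1}(z)&i\lambda\cr}=i\lambda I+\Phi(z;0)=\Phi(z).
$$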
The function $Q(z)$ has the following jump discontinuities on the
$z$-plane:
\begin{eqnarray*}
Q_+(z)&=&Q_-(z)\sigma_1,\quad z\in\Sigma_i,\\
\sigma_1&=&\pmatrix{0&1\cr
1&0\cr},
\end{eqnarray*}
where the branch cuts $\Sigma_i$ are defined in (\ref{eq:lambdai}),
(\ref{eq:order}) and (\ref{eq:branchcut}) and $Q_\pm(z)$ are the
boundary values of $Q(z)$ to the left/right of $\Sigma_i$. It also
has square-root singularities at each branch point with the
following behavior:
\begin{eqnarray*}
Q(z)=Q_{\pm i}(z)\pmatrix{(z-z_i^{\pm 1})^{{\pm}{1\over 2}}&0\cr
0&1\cr}\pmatrix{1&-1\cr
1&1\cr},\quad
z\rightarrow z_i^{\pm},
\end{eqnarray*}
where $Q_{\pm i}(z)$ are functions that are holomorphic and
invertible at $z_i^{\pm}$.
Let us define
\begin{eqnarray}\label{Sdef}
S(z)&=&U_-(z)Q(z)\Lambda^{-1},\quad |z|\geq 1,\nonumber \\
S(z)&=&U_+(z)^{-1}Q(z),\quad |z|\leq 1.
\end{eqnarray}
By direct computation we see $S(z)$ is the unique solution of the
following Riemann-Hilbert problem:
\begin{eqnarray}
\label{eq:RHtheta}
S_+(z)&=&S_-(z)\sigma_1,\quad z\in\Sigma_i, \quad i=1,\ldots, n\nonumber\\
S_+(z)&=&S_-(z)\Lambda\sigma_1\Lambda^{-1},\quad z\in\Sigma_i, \quad
i=n+1,\ldots, 2n\\
\lim_{z \to \infty} S(z)&=& Q(\infty)\Lambda^{-1}, \nonumber
\end{eqnarray}
where, as before, $S_\pm(z)$ denotes the boundary values of $S(z)$ to
the left/right of the branch cuts. The matrix function $S(z)$ is
holomorphic and invertible everywhere, except on the cuts $\Sigma_j$,
where it has the jump discontinuities given in~(\ref{eq:RHtheta}), and
in proximity of the branch points, where it behaves like
\begin{eqnarray}
\label{eq:singtheta} S(z)&=S_{\pm i}(z)\pmatrix{(z-z_i^{\pm
1})^{{\pm}{1\over 2}}&0\cr
0&1\cr}\pmatrix{1&-1\cr
1&1\cr},\quad
z\rightarrow
z_i^{\pm},\quad
|z_i|<1,\\
S(z)&=S_{\pm i}(z)\pmatrix{(z-z_i^{\pm 1})^{{\pm}{1\over 2}}&0\cr
0&1\cr}\pmatrix{1&-1\cr
1&1\cr}\Lambda^{-1},\quad
z\rightarrow
z_i^{\pm},\quad
|z_i|>1.\nonumber
\end{eqnarray}
where $S_{\pm i}(z)$ are holomorphic and invertible at $z_i^{\pm}$.
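For completeness, we indicate the direct computation behind
(\ref{eq:RHtheta}). Across the unit circle $\Xi$ the two formulae
in (\ref{Sdef}) match: $\Phi=U_{+}U_{-}=Q\Lambda Q^{-1}$ gives
$U_{+}^{-1}Q=U_{-}\Phi^{-1}Q=U_{-}Q\Lambda^{-1}$, so $S(z)$ has no jump across
$\Xi$, and the normalization at infinity follows from $U_{-}(\infty)=I$.
Across a cut $\Sigma_i$ lying inside the unit circle (as the indexing in
(\ref{eq:RHtheta}) indicates, these are $\Sigma_{1},\ldots,\Sigma_{n}$), the
analyticity of $U_{+}(z)$ there yields
$$
S_{+}(z)=U_{+}^{-1}(z)Q_{+}(z)=U_{+}^{-1}(z)Q_{-}(z)\sigma_{1}=S_{-}(z)\sigma_{1},
$$
while across a cut lying outside the unit circle the analyticity of
$U_{-}(z)$ yields
$$
S_{+}(z)=U_{-}(z)Q_{+}(z)\Lambda^{-1}=U_{-}(z)Q_{-}(z)\sigma_{1}\Lambda^{-1}
=S_{-}(z)\Lambda\sigma_{1}\Lambda^{-1}.
$$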
The Riemann-Hilbert problem~(\ref{eq:RHtheta}) can be solved in terms
of the multi-dimensional theta functions~(\ref{eq:thetadef}).
However, before we compute explicitly $S(z)$, we need to introduce
further notions and properties of $\theta$.
Throughout the rest of this section we shall use the
definitions~(\ref{eq:L}) of the hyperelliptic curve $\mathcal{L}$
and~(\ref{eq:thetadef}) of the theta function associated to $\mathcal{L}$.
Furthermore, recall that the choice of the canonical basis for the
cycles is described in figure~\ref{fig:cycle} and that the normalized 1-forms
dual to this basis are defined in equation~(\ref{eq:normalizeforms}).
Let us introduce some basic properties of the theta functions. The
proofs of such properties can be found in many standard textbooks in
Riemann surfaces like, for example, \cite{FK}.
\begin{proposition}
\label{pro:per}
The theta function is quasi-periodic with the following properties:
\begin{eqnarray}
\label{eq:period}
\theta (\overrightarrow{s}+\overrightarrow{M})&=&
\theta(\overrightarrow{s}), \\
\theta (\overrightarrow{s}+\Pi\overrightarrow{M})&=& \exp \left[ 2\pi
i\left(-\left<\overrightarrow{M},\overrightarrow{s}\right>-\left<\overrightarrow{M},{\Pi\over{2}}\overrightarrow{M}\right>\right)\right]
\theta(\overrightarrow{s}),
\end{eqnarray}
where $ \left \langle \cdot, \cdot \right \rangle$ denotes the usual
inner product in $\mathbb{C}^g$.
\end{proposition}
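As a check on conventions (we do not restate the definition
(\ref{eq:thetadef}) here): assuming the normalization
$\theta(\overrightarrow{s})=\sum_{\overrightarrow{m}\in\mathbb{Z}^{g}}
\exp\left[2\pi i\left({\textstyle\frac{1}{2}}\left<\overrightarrow{m},\Pi\overrightarrow{m}\right>
+\left<\overrightarrow{m},\overrightarrow{s}\right>\right)\right]$,
which is the one compatible with (\ref{eq:period}), the second relation
follows by shifting the summation index
$\overrightarrow{m}\to\overrightarrow{m}-\overrightarrow{M}$, since
$$
{\textstyle\frac{1}{2}}\left<\overrightarrow{m}-\overrightarrow{M},\Pi(\overrightarrow{m}-\overrightarrow{M})\right>
+\left<\overrightarrow{m}-\overrightarrow{M},\overrightarrow{s}+\Pi\overrightarrow{M}\right>
={\textstyle\frac{1}{2}}\left<\overrightarrow{m},\Pi\overrightarrow{m}\right>
+\left<\overrightarrow{m},\overrightarrow{s}\right>
-\left<\overrightarrow{M},\overrightarrow{s}\right>
-\left<\overrightarrow{M},{\textstyle\frac{\Pi}{2}}\overrightarrow{M}\right>.
$$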
A divisor $D$ of degree $m$ on a hyperelliptic curve $\mathcal{L}$ is a formal
sum of $m$ points on $\mathcal{L}$, \textit{i.e.}
\[
D := \sum_{i=1}^m d_i, \quad d_i \in \mathcal{L}.
\]
Let us introduce the Abel map $\omega:\mathcal{L}\longrightarrow\mathbb{C}^g$ by
setting
\begin{eqnarray*}
\omega(p) :=\left(\int_{p_0}^p\mathrm{d}\omega_1,\ldots,\int_{p_0}^p\mathrm{d}\omega_g\right),
\end{eqnarray*}
where $p_0$ is a chosen base point on $\mathcal{L}$ and $\omega_i$ are the normalized 1-forms given in (\ref{eq:normalizeforms}). In what follows we shall
set $p_0=z_1=\lambda_1$. The composition of the theta function with
the Abel map has $g$ zeros on $\mathcal{L}$. The following lemma tells us
where the zeros are.
\begin{lemma}
\label{le:zero}
Let $D=\sum_{i=1}^gd_i$ be a divisor of degree $g$ on $\mathcal{L}$,
then the multivalued function
\[
\theta (\omega(p)-\omega(D)-K)
\]
has precisely $g$ zeros located at the points $d_i$, $i=1,\ldots,g$. The
vector $K=(K_1,\ldots,K_g)$ is the Riemann constant
\[
K_j={{2\pi i+\Pi_{jj}}\over 2}-{1\over{2\pi i}}\sum_{l\neq
j}\int_{a_l}(\mathrm{d}\omega_l(p)\int_{z_1}^p\mathrm{d}\omega_j).
\]
\end{lemma}
The hyperelliptic curve $\mathcal{L}$ can be thought of as a branched cover
of the Riemann sphere $\mathbb{C} \cup \{\infty\}$. Indeed, a point
$p \in \mathcal{L}$ can be identified by two complex variables, $p=(z,w)$,
where $w$ and $z$ are related by equation~(\ref{eq:L}). We shall denote by
$\mathbb{C}_1$ the Riemann sheet where $g(\infty)>0$ on the real
axis, and by $\mathbb{C}_2$ the other Riemann sheet in
$\mathcal{L}$. Thus, a function $f$ on $\mathcal{L}$ can be thought of as a function in
two complex variables:
\begin{eqnarray*}
f(p)=f(z,w).
\end{eqnarray*}
Consider the map
\begin{eqnarray*}
T& : &\mathbb{C}/\cup_{i=1}^{2n}\Sigma_i \longrightarrow \mathcal{L}\\
T(z)&=&(z,w),
\end{eqnarray*}
where the branch of $w$ is chosen such that $(z,w)$ is on $\mathbb{C}_1$.
A function $f$ on $\mathcal{L}$ then defines the function $f\circ T$ on
$\mathbb{C}/\cup_{i=1}^{2n}\Sigma_i$ by
\begin{eqnarray*}
f\circ T(z)=f(z,w).
\end{eqnarray*}
For the sake of simplicity, and when there is no ambiguity, we shall write
$f(z)$ instead of $f\circ T(z)$ and $f(p)$ instead of $f(z,w)$.
Abelian integrals on $\mathcal{L}$ can be represented as integrals on the
Riemann sheet with jump discontinuities. To do so, let us first define
a Jordan arc $\Sigma$ as in figure \ref{fig:sigma}. Let $f(z,w)$ be a
function on $\mathcal{L}$ and $f(z)=f\circ T(z)$. Then an Abelian integral on
$\mathcal{L}$,
\[
I(p)=\int_{\lambda_1}^pf(p')dp',
\]
defines the following integral on $\mathbb{C}$:
\begin{eqnarray*}
I(z)=\int_{\lambda_1}^zf\circ T(z')dz',
\end{eqnarray*}
where the path of integration does not intersect
$\Sigma/\{\lambda_1\}$. Such an integral will in general have jump
discontinuities along $\Sigma$, and its value on the left hand side of
$\Sigma$ will be denoted by $I(z)_+$, while its value on the right
hand side of $\Sigma$ will be denoted by $I(z)_-$.
Let $\rho$ be the hyperelliptic involution that interchanges the two
sheets of $\mathcal{L}$, \textit{i.e.}
\begin{eqnarray*}
\rho(z,w)=(z,-w).
\end{eqnarray*}
The action of
$\rho$ on $f(z)$ is given by
\begin{eqnarray}\label{eq:rhof}
\rho(f)(z)=f(z,-w)
\end{eqnarray}
\textit{i.e.} it is the function evaluated on $\mathbb{C}_2$.
Similarly, the action of $\rho$ on an integral $I(z)$ is defined by
\begin{eqnarray}\label{eq:rhoI}
\rho(I)(z)=\int_{\lambda_1}^z\rho(f)(z')\mathrm{d} z'
\end{eqnarray}
\begin{figure}[htbp]
\begin{center}
\resizebox{8cm}{!}{\input{sigma1.pstex_t}}\caption{The Jordan arc
$\Sigma$ connects all the branch points and extends to infinity on
the left hand side of $\lambda_1$ and on the right hand side of
$\lambda_{4n}$. All branch cuts belong to $\Sigma$ and are denoted
by $\Sigma_i$, while the intervals between the branch cuts are
denoted by $\tilde{\Sigma}_i$.}\label{fig:sigma}
\end{center}
\end{figure}
From proposition \ref{pro:per} we see that the composition
of the Abel map $\omega$ with $\theta$ has the following jump
discontinuities when considered as a function on $\mathbb{C}$:
\begin{lemma}
\label{le:jumptheta}
Let $z$ be a point on $\mathbb{C}$, and let $\Sigma$ be a Jordan
arc joining all the branch cuts as in figure \ref{fig:sigma}, then
the quotient of theta functions has the following jump
discontinuities on $\Sigma$
\begin{eqnarray*}
\left({{\theta(\omega(z)+A)}\over{\theta(\omega(z)+B)}}\right)_+&=&
\left({{\theta(\omega(z)+A)}\over{\theta(\omega(z)+B)}}\right)_-,\quad z\in\tilde{\Sigma}_j\\
\left({{\theta(\omega(z)+A)}\over{\theta(\omega(z)+B)}}\right)_+&=&
\left({{\theta(-\omega(z)+A)}\over{\theta(-\omega(z)+B)}}\right)_-e^{-2\pi
i(A_{j-1}-B_{j-1})},\quad z\in\Sigma_j
\end{eqnarray*}
where $A$ and $B$ are arbitrary $2n-1$ vectors and $A_0=B_0=0$.
\end{lemma}
\begin{proof}
The holomorphic differentials $\mathrm{d}\omega_j$ are
given by
\begin{eqnarray*}
\mathrm{d}\omega_i={{P_i(z)}\over {w(z)}}\mathrm{d} z,
\end{eqnarray*}
for some polynomial $P_i(z)$ of degree less than $2n-1$ in $z$.
This means that, under the action of $\rho$, $\mathrm{d}\omega_i$ becomes
$-\mathrm{d}\omega_i$. In particular, we have
\begin{eqnarray}\label{eq:rhoo}
\rho(\omega)(z)=-\omega(z)
\end{eqnarray}
where the action of $\rho$ on $\omega$ is given by (\ref{eq:rhof})
and (\ref{eq:rhoI}).
We first consider the jumps across the gaps $\tilde{\Sigma}_j$. Take
two distinct paths from $\lambda_1$ to a point $z\in
\tilde{\Sigma}_j$. Assume also that both curves do not intersect
$\Sigma$ and that one extends to the left of $\Sigma$, while the other to
its right. The union of these paths lifts to a loop $\tilde{\gamma}$
on $\mathcal{L}$. Moreover, $\tilde{\gamma}$ is a linear combination of
$a$-cycles, \textit{i.e.}
\[
\tilde{\gamma}=\sum_{i=1}^gN_ia_i,
\]
where the $N_i$'s are non-negative integers.
Therefore, we have
\begin{eqnarray*}
\left({{\theta(\omega(z)+\aleph+A)}\over{\theta(\omega(z)+\aleph+B)}}\right)_+&=&
\left({{\theta(\omega(z)+A)}\over{\theta(\omega(z)+B)}}\right)_+\\
&=&\left({{\theta(\omega(z)+A)}\over{\theta(\omega(z)+B)}}\right)_-
,\quad z\in\tilde{\Sigma}_j\\
\aleph&=&\sum_{i=1}^gN_iE_i
\end{eqnarray*}
where $E_i$ is the column vector with 1 in the $i^{th}$ entry and
zero elsewhere.
Now consider the jumps on the branch cuts $\Sigma_j$. Let $z\in
\Sigma_j$, then take a loop $\gamma$ on $\mathcal{L}$ consisting of two
distinct curves joining $\lambda_1$ to $z$, both non-intersecting
$\Sigma$; one on the left of the cut in $\mathbb{C}_1$, the other on
the right of the cut in $\mathbb{C}_2$. This closed loop $\gamma$ is
homologous to the $b$-cycle $b_j$. Therefore,
\begin{eqnarray*}
\left({{\theta(\omega(z)+\Im+A)}\over{\theta(\omega(z)+\Im+B)}}\right)_-&=&
\left({{\theta(\omega(z)+A)}\over{\theta(\omega(z)+B)}}\right)_-e^{-2\pi
i(A_{j-1}-B_{j-1})}\\
&=&\left({{\theta(-\omega(z)+A)}\over{\theta(-\omega(z)+B)}}\right)_+
,\quad z\in\Sigma_j\\
\Im_k&=&\Pi_{ki}.
\end{eqnarray*}
This proves the lemma.
\end{proof}
We can now solve the Riemann-Hilbert problem (\ref{eq:RHtheta}),
(\ref{eq:singtheta}). Let us define
\begin{eqnarray}
\label{eq:tau}
\frac{\tau}{2} & := & -\sum_{i=2}^{2n}\omega(z_i^{-1})-K, \\
\Delta(z)& := &\int_{+\infty}^z\mathrm{d}\Delta, \nonumber
\end{eqnarray}
where $\mathrm{d}\Delta$ is the normalized differential of third type with
simple poles at $\infty^{\pm}$ and residues $\pm \frac12$
respectively. In addition, we write
\[
\kappa := \left(\frac{1}{2\pi i}\int_{b_1} \mathrm{d} \Delta,\ldots,
\frac{1}{2\pi i} \int_{b_g}\mathrm{d} \Delta \right).
\]
\begin{proposition}
\label{pro:sol}
Let $\infty^{\pm}$ be the points of $\mathcal{L}$ lying above $z=\infty$, with
$\infty^{+}$ on $\mathbb{C}_1$. The unique solution of the Riemann-Hilbert problem
(\ref{eq:RHtheta}), (\ref{eq:singtheta}) is given by
\begin{equation}\label{Ssol}
S(z)=Q(\infty)\Lambda^{-1}\Theta^{-1}(\infty)\Theta(z),
\end{equation}
where entries of $\Theta(z)$ are given by
\begin{eqnarray}
\label{eq:entrytheta}
\Theta_{11}(z)&=&
\sqrt{z-\lambda_1}e^{-\Delta(z)}{{\theta\left(\omega(z)
+\beta(\lambda)\overrightarrow{e}-\kappa+{{\tau}\over
2}\right)}\over{\theta\left(\omega(z)+{{\tau}\over 2}\right)}},\nonumber\\
\Theta_{12}(z)&=&-\sqrt{z-\lambda_1}e^{\Delta(z)}{{\theta\left(\omega(z)-\beta(\lambda)\overrightarrow{e}+\kappa-{{\tau}\over
2}\right)}\over{\theta\left(\omega(z)-{{\tau}\over 2}\right)}}, \nonumber\\
\Theta_{21}(z)&=&-\sqrt{z-\lambda_1}e^{\Delta(z)}{{\theta\left(\omega(z)+\beta(\lambda)\overrightarrow{e}+\kappa-{{\tau}\over
2}\right)}\over{\theta\left(\omega(z)-{{\tau}\over 2}\right)}},\\
\Theta_{22}(z)&=&\sqrt{z-\lambda_1}e^{-\Delta(z)}{{\theta\left(\omega(z)-\beta(\lambda)\overrightarrow{e}-\kappa+{{\tau}\over
2}\right)}\over{\theta\left(\omega(z)+{{\tau}\over 2}\right)}}, \nonumber
\end{eqnarray}
where $\overrightarrow{e}$ is a $(2n-1)$-dimensional vector whose
last $n$ entries are 1 and the first $n-1$ entries are 0. The branch
cut of $\sqrt{z-\lambda_1}$ is defined to be
$\Sigma/\tilde{\Sigma}_0$.
\end{proposition}
\begin{proof}
By using lemma \ref{le:jumptheta}, we see that
$\Theta(z)$ has the following jump discontinuities
\begin{eqnarray*}
\left(\Theta_{11}(z)\right)_+&=&\left(\Theta_{12}(z)\right)_-,\quad
z\in\Sigma_i,\quad i=1,\ldots, n\\
\left(\Theta_{12}(z)\right)_+&=&\left(\Theta_{11}(z)\right)_-,\quad
z\in\Sigma_i,\quad i=1,\ldots, n\\
\left(\Theta_{21}(z)\right)_+&=&\left(\Theta_{22}(z)\right)_-,\quad
z\in\Sigma_i,\quad i=1,\ldots, n\\
\left(\Theta_{22}(z)\right)_+&=&\left(\Theta_{21}(z)\right)_-,\quad
z\in\Sigma_i,\quad i=1,\ldots, n\\
\left(\Theta_{11}(z)\right)_+&=&{{\lambda-1}\over{\lambda+1}}
\left(\Theta_{12}(z)\right)_-,\quad
z\in\Sigma_i,\quad i=n+1,\ldots, 2n\\
\left(\Theta_{12}(z)\right)_+&=&{{\lambda+1}\over{\lambda-1}}
\left(\Theta_{11}(z)\right)_-,\quad
z\in\Sigma_i,\quad i=n+1,\ldots, 2n\\
\left(\Theta_{21}(z)\right)_+&=&{{\lambda-1}\over{\lambda+1}}
\left(\Theta_{22}(z)\right)_-,\quad
z\in\Sigma_i,\quad i=n+1,\ldots, 2n\\
\left(\Theta_{22}(z)\right)_+&=&{{\lambda+1}\over{\lambda-1}}\left(\Theta_{21}(z)\right)_-,\quad
z\in\Sigma_i,\quad i=n+1,\ldots, 2n
\end{eqnarray*}
This means that $\Theta(z)$ has the same jump discontinuities as in
(\ref{eq:RHtheta}).
To see that $\Theta(z)$ has the singularity structure given by
(\ref{eq:singtheta}), note that the function
\begin{eqnarray*}
\tilde{U}_+&=&Q(z)\Theta^{-1}(z),\quad |z|<1\\
\tilde{U}_-&=&\Theta(z)\Lambda Q^{-1}(z),\quad |z|>1
\end{eqnarray*}
has no jump discontinuities across the branch cuts $\Sigma_j$. It
can only have singularities of order less than or equal to
${1\over 2}$ at the points $z_j^{\pm 1}$. However, if it were
singular at $z_j^{\pm 1}$, then it would have jump discontinuities
across $\Sigma_j$ due to the branch-point type singularities.
Therefore it is holomorphic at the points $z_j^{\pm 1}$. Hence, the
function $\Theta(z)$ must have the singularity structure of the form
(\ref{eq:singtheta}).
To show that $S(z)$ has the correct asymptotic behavior at
$z=\infty$, we only need to prove that $\Theta(z)$ is invertible at
$z=\infty$. The asymptotic behavior of $\Theta(z)$ is given by
\begin{eqnarray*}
\Theta_{11}(\infty)&=&\theta\left(\omega(\infty)-\kappa+\beta(\lambda)\overrightarrow{e}+{{\tau}\over
2}\right)e^{-\Delta_0}\nonumber\\
\Theta_{22}(\infty)&=&\theta\left(\omega(\infty)-\kappa-\beta(\lambda)\overrightarrow{e}+{{\tau}\over
2}\right)e^{-\Delta_0}\\
\Theta_{12}(\infty)&=&\Theta_{21}(\infty)=0\nonumber
\end{eqnarray*}
where $\Delta_0=\lim_{z\rightarrow\infty}\left(\Delta(z)-{1\over
2}\log(z-\lambda_1)\right)$.
We will now show that $\omega(\infty)=\kappa$. Let $\eta$ be a third
type differential with simple poles at the points $x_i\in\mathcal{L}$ and
$\tilde{\eta}$ be a holomorphic differential. Let $\Pi^i$ and
$\tilde{\Pi}^i$ be their periods
\begin{eqnarray*}
\int_{a_i}\eta&=&\Pi^i, \quad \int_{b_i}\eta=\Pi^{i+g}\\
\int_{a_i}\tilde{\eta}&=&\tilde{\Pi}^i, \quad
\int_{b_i}\tilde{\eta}=\tilde{\Pi}^{i+g}
\end{eqnarray*}
Now, by the Riemann bilinear relation \cite{GH} we have
\begin{eqnarray*}
\sum_{i=1}^g\Pi^i\tilde{\Pi}^{i+g}-\Pi^{g+i}\tilde{\Pi}^{i}=2\pi
i\sum_{x_i}{\rm Res}_{x_i}(\eta)\int_{p_0}^{x_i}\tilde{\eta},
\end{eqnarray*}
where $p_0$ is an arbitrary point on $\mathcal{L}$. By substituting
$\eta=\mathrm{d}\Delta$ and $\tilde{\eta}=\mathrm{d}\omega_j$ for $j=1,\ldots, g$,
we see that
\begin{eqnarray*}
\kappa_j={1\over
2}\left(\omega_j(\infty^+)-\omega_j(\infty^-)\right)=\omega_j(\infty),
\end{eqnarray*}
where the last equality follows from (\ref{eq:rhoo}). Therefore, we
obtain
\begin{eqnarray}\label{eq:asyinf}
\Theta_{11}(\infty)&=&\theta\left(\beta(\lambda)\overrightarrow{e}+{{\tau}\over
2}\right)e^{-\Delta_0}\nonumber\\
\Theta_{22}(\infty)&=&\theta\left(-\beta(\lambda)\overrightarrow{e}+{{\tau}\over
2}\right)e^{-\Delta_0}\\
\Theta_{12}(\infty)&=&\Theta_{21}(\infty)=0\nonumber
\end{eqnarray}
Therefore $\Theta(z)$ is invertible at $\infty$ as long as
\begin{eqnarray}\label{eq:zero}
\theta\left(\beta(\lambda)\overrightarrow{e}+{{\tau}\over
2}\right)\theta\left(-\beta(\lambda)\overrightarrow{e}+{{\tau}\over
2}\right)\neq 0.
\end{eqnarray}
Thus, $S(z)$ is the unique solution of the Riemann-Hilbert problem
(\ref{eq:RHtheta}).
\end{proof}
\begin{remark}
In appendix F, we will show that the Wiener-Hopf factorization is
solvable for $\beta(\lambda)\in i\mathbb{R}$, i.e. the
Riemann-Hilbert problem (\ref{eq:RHtheta}) is solvable for these
$\beta(\lambda)$. This in turn implies that (\ref{eq:zero}) is true
for all $\beta(\lambda)\in i\mathbb{R}$. Define (cf.
(\ref{Omegaepsilon})) $$
\Omega_{\epsilon} = \{\lambda \in {\Bbb R}:
|\lambda| \geq 1+\epsilon\}. $$
The function $\lambda \to
\beta(\lambda)$ maps $\Omega_{\epsilon}$ onto the bounded subset
$\mathcal{N} \equiv \{\alpha \in i{\Bbb R}: 0 < |\alpha| \leq
\frac{1}{2\pi}\log (2\epsilon^{-1}+1)\}$. By continuity, the
inequality (\ref{eq:zero}) is valid for all $\alpha$ from the closure
of $\mathcal{N}$. This fact, together with the explicit formulae
(\ref{Ssol}), (\ref{Sdef}) and (\ref{eq:UV}) implies the uniform
estimates which have been stated in (\ref{WHest}) and used in the
proof of theorem \ref{widom3}.
\end{remark}
\section{The asymptotics of $\mathrm{d}\log D_{L}(\lambda)/\mathrm{d}\lambda$
and $D_{L}(\lambda)$}
\setcounter{equation}{0}
We are now ready to compute the derivative of the determinant
$D_L(\lambda)$. First we notice that by virtue of (\ref{eq:UV}),
equation (\ref{our}) can be re-written as
\begin{eqnarray}
\label{our2b}
\frac{\mathrm{d}}{\mathrm{d}\lambda}\log D_{L}(\lambda) & = &
-\frac{2\lambda}{1-\lambda^2}L \nonumber \\
&& + \frac{1}{2\pi} \int_{|z| = 1}\mathrm{tr}\,
\left[U_{+}'(z)U_{+}^{-1}(z)\left(\Phi^{-1}(z)-
\sigma_3\Phi^{-1}(z)\sigma_3\right)\right]\mathrm{d} z
\nonumber \\
&& + r_{L}(\lambda).
\end{eqnarray}
Define
$$\Psi(z):=\Phi^{-1}(z)-\sigma_3\Phi^{-1}(z)\sigma_3=
\frac{2}{1-\lambda^2}\left( \begin{array}{cc}
0 & -g(z)\\
g^{-1}(z)&0
\end{array} \right).$$
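The passage from (\ref{our}) to (\ref{our2b}) uses only (\ref{eq:UV}) and the
cyclicity of the trace: differentiating
$V_{+}(z)=\sigma_3U_{+}^{-1}(z)\sigma_3(1-\lambda^2)$ gives
$$
V_{+}^{-1}(z)V_{+}'(z)=\sigma_{3}U_{+}(z)\bigl(U_{+}^{-1}\bigr)'(z)\sigma_{3}
=-\sigma_{3}U_{+}'(z)U_{+}^{-1}(z)\sigma_{3},
$$
so that
$$
\mathrm{tr}\Bigl[\bigl(U_{+}'U_{+}^{-1}+V_{+}^{-1}V_{+}'\bigr)\Phi^{-1}\Bigr]
=\mathrm{tr}\Bigl[U_{+}'U_{+}^{-1}\bigl(\Phi^{-1}-\sigma_{3}\Phi^{-1}\sigma_{3}\bigr)\Bigr]
=\mathrm{tr}\bigl[U_{+}'U_{+}^{-1}\Psi\bigr].
$$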
From equations (\ref{Sdef}) and (\ref{Ssol}) we have
$$ U_{+}^{-1}(z)= A\Theta(z)Q^{-1}(z),\quad U'_{+}(z)=
Q'(z) \Theta^{-1}(z) A^{-1}+Q(z) (\Theta^{-1})'(z) A^{-1},$$ where we
denote $A = Q(\infty)\Lambda^{-1}\Theta^{-1}(\infty)$. Furthermore,
from equation (\ref{eq:dia}) we obtain
$$ Q^{-1}(z)=\frac{1}{2}\left(
\begin{array}{cc}
g^{-1}(z) & -i\\
-g^{-1}(z)&-i
\end{array} \right).
$$
Therefore, formula (\ref{our2b}) transforms into
the relation
\begin{eqnarray}
\frac{\mathrm{d}}{\mathrm{d} \lambda}\log D_{L}(\lambda) &=&
-\frac{2\lambda}{1-\lambda^2}L \nonumber \\
& &
+ {{i}\over{\pi(1-\lambda^2)}}\int_{\Xi}{\rm tr}\left[\Theta^{-1}{{\mathrm{d}}\over{\mathrm{d}
z}}\Theta(z)\sigma_3\right]\mathrm{d} z \nonumber \\
&& + r_{L}(\lambda).
\end{eqnarray}
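The intermediate algebra behind the last formula is easily verified: from
$U_{+}(z)=Q(z)\Theta^{-1}(z)A^{-1}$ one has
$$
U_{+}'(z)U_{+}^{-1}(z)=Q'(z)Q^{-1}(z)-Q(z)\Theta^{-1}(z)\Theta'(z)Q^{-1}(z),
$$
while a direct computation with the matrices above gives
$$
\mathrm{tr}\bigl[Q'(z)Q^{-1}(z)\Psi(z)\bigr]=0,\qquad
Q^{-1}(z)\Psi(z)Q(z)=-\frac{2i}{1-\lambda^{2}}\,\sigma_{3};
$$
together with the cyclicity of the trace this produces the coefficient
$\frac{i}{\pi(1-\lambda^{2})}$ in front of the integral.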
We will now prove the following:
\begin{theorem}
\label{th6}
Let $s(\lambda)$ be given by
\begin{eqnarray}\label{eq:s}
s(\lambda)&=&{i\over{\pi(1-\lambda^2)}}\int_{|z|=1}\alpha(z)\mathrm{d}
z, \\
\alpha(z)&=&{\rm tr}\left[\Theta^{-1}{\mathrm{d}\over {\mathrm{d}
z}}\Theta(z)\sigma_3\right]\nonumber
\end{eqnarray}
where the entries of the $2\times 2$ matrix $\Theta(z)$ are given
by (\ref{eq:entrytheta}).
Then $s(\lambda)$ can be written as
\begin{eqnarray*}
s(\lambda)=-{i\over{\pi(1-\lambda^2)}}{\mathrm{d}\over{\mathrm{d}{\beta}}}\log\left(\theta\left(\beta(\lambda)\overrightarrow{e}+{{\tau}\over
2}\right)\theta\left(\beta(\lambda)\overrightarrow{e}-{{\tau}\over
2}\right)\right)
\end{eqnarray*}
\end{theorem}
\begin{proof}
To begin with, we would like to treat $\alpha(z)\mathrm{d} z$
as a 1-form on the hyperelliptic curve $\mathcal{L}$.
We will show that it is, in fact, the holomorphic 1-form
\begin{eqnarray*}
\alpha(z)\mathrm{d}
z=\sum_{i=1}^{2n-1}\partial_i\log\left(\theta\left(\beta(\lambda)\overrightarrow{e}+{{\tau}\over
2}\right)\theta\left(\beta(\lambda)\overrightarrow{e}-{{\tau}\over
2}\right)\right)\mathrm{d}\omega_i
\end{eqnarray*}
where $\mathrm{d}\omega_i$ are the normalized holomorphic differentials on
$\mathcal{L}$ and $\partial_i$ is the partial derivative with
respect to the $i^{th}$ argument.
Suppose this is true, then by deforming the contour of the integral
(\ref{eq:s}), we see that it can be written as
\begin{eqnarray*}
s(\lambda){{\pi(1-\lambda^2)}\over
{i}}&=&-\sum_{k=n}^{2n-1}\int_{a_k}\alpha(z)\mathrm{d}
z\\
&=&-\sum_{k=n}^{2n-1}\int_{a_k}\sum_{j=1}^{2n-1}\partial_j\log\left(\theta\left(\beta(\lambda)\overrightarrow{e}+{{\tau}\over
2}\right)\theta\left(\beta(\lambda)\overrightarrow{e}-{{\tau}\over 2}\right)\right)\mathrm{d}\omega_j\\
&=&-\sum_{k=n}^{2n-1}\partial_k\log\left(\theta\left(\beta(\lambda)\overrightarrow{e}+{{\tau}\over
2}\right)\theta\left(\beta(\lambda)\overrightarrow{e}-{{\tau}\over 2}\right)\right)\\
&=&-{\mathrm{d}\over{\mathrm{d}{\beta}}}\log\left(\theta\left(\beta(\lambda)\overrightarrow{e}+{{\tau}\over
2}\right)\theta\left(\beta(\lambda)\overrightarrow{e}-{{\tau}\over
2}\right)\right)
\end{eqnarray*}
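In the last two equalities we used the normalization
$\int_{a_{k}}\mathrm{d}\omega_{j}=\delta_{jk}$ of the holomorphic differentials
(cf.~(\ref{eq:normalizeforms})) and the fact that the theta functions depend on
$\beta$ only through the combination $\beta(\lambda)\overrightarrow{e}$, whose
nonzero entries sit precisely in the positions $n,\ldots,2n-1$; by the chain rule,
$$
\frac{\mathrm{d}}{\mathrm{d}\beta}\log\Bigl(\theta\bigl(\beta(\lambda)\overrightarrow{e}+{\textstyle\frac{\tau}{2}}\bigr)
\theta\bigl(\beta(\lambda)\overrightarrow{e}-{\textstyle\frac{\tau}{2}}\bigr)\Bigr)
=\sum_{k=n}^{2n-1}\partial_{k}\log\Bigl(\theta\bigl(\beta(\lambda)\overrightarrow{e}+{\textstyle\frac{\tau}{2}}\bigr)
\theta\bigl(\beta(\lambda)\overrightarrow{e}-{\textstyle\frac{\tau}{2}}\bigr)\Bigr).
$$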
To see that $\alpha(z)\mathrm{d} z$ is given by the corresponding 1-form,
let us first compute $\alpha(z)\mathrm{d} z$. We have
\begin{eqnarray}\label{eq:alpha}
\alpha(z)\mathrm{d} z&=&\left(\det\Theta
(z)\right)^{-1}\Bigg(\Theta_{22}(z)\Theta_{11}^{\prime}(z)-\Theta_{11}(z)\Theta_{22}^{\prime}(z)
\nonumber\\
&&-\Theta_{12}(z)\Theta_{21}^{\prime}(z)+\Theta_{21}(z)\Theta_{12}^{\prime}(z)\Bigg)\mathrm{d}
z,
\end{eqnarray}
where the prime denotes the derivative with respect to $z$.
We can simplify equation~(\ref{eq:alpha}) by observing that
\begin{eqnarray*}
\Theta_{11}(z)&=&h_1(z)\theta\left(\omega(z)+\beta(\lambda)\overrightarrow{e}-\kappa+{{\tau}\over
2}\right)\\
\Theta_{22}(z)&=&h_1(z)\theta\left(\omega(z)-\beta(\lambda)\overrightarrow{e}-\kappa+{{\tau}\over
2}\right)\\
\Theta_{12}(z)&=&h_2(z)\theta\left(\omega(z)-\beta(\lambda)\overrightarrow{e}+\kappa-{{\tau}\over
2}\right) \\
\Theta_{21}(z)&=&h_2(z)\theta\left(\omega(z)+\beta(\lambda)\overrightarrow{e}+\kappa-{{\tau}\over
2}\right)\\
h_1(z)&=&\sqrt{z-\lambda_1}{{e^{-\Delta(z)}}\over{\theta\left(\omega(z)+{{\tau}\over
2}\right)}}\\
h_2(z)&=&-\sqrt{z-\lambda_1}{{e^{\Delta(z)}}\over{\theta\left(\omega(z)-{{\tau}\over
2}\right)}}.
\end{eqnarray*}
Therefore, we have
\begin{eqnarray*}
\Theta_{22}(z)\Theta_{11}^{\prime}(z)-\Theta_{11}(z)\Theta_{22}^{\prime}(z)&=&(h_1(z))^2\left(
\theta_2\theta_1^{\prime}-\theta_1\theta_2^{\prime}\right)\\
\Theta_{12}(z)\Theta_{21}^{\prime}(z)-\Theta_{21}(z)\Theta_{12}^{\prime}(z)&=&(h_2(z))^2\left(
\theta_3\theta_4^{\prime}-\theta_4\theta_3^{\prime}\right),
\end{eqnarray*}
where the $\theta_i$'s are given by
\begin{eqnarray*}
\theta_1&=&\theta\left(\omega(z)+\beta(\lambda)\overrightarrow{e}-\kappa+{{\tau}\over
2}\right)\\
\theta_2&=&\theta\left(\omega(z)-\beta(\lambda)\overrightarrow{e}-\kappa+{{\tau}\over
2}\right)\\
\theta_3&=&\theta\left(\omega(z)-\beta(\lambda)\overrightarrow{e}+\kappa-{{\tau}\over
2}\right) \\
\theta_4&=&\theta\left(\omega(z)+\beta(\lambda)\overrightarrow{e}+\kappa-{{\tau}\over
2}\right).
\end{eqnarray*}
Now, the $\theta_i^{\prime}$'s are just
\begin{eqnarray*}
\theta_i^{\prime}\mathrm{d}
z=\sum_{k=1}^{2n-1}\left(\partial_k\theta_i\right)\mathrm{d}\omega_k.
\end{eqnarray*}
By substituting the right hand side of this equation into
(\ref{eq:alpha}) we obtain
\begin{eqnarray*}
\alpha(z)\mathrm{d}
z&=&{\det\Theta(z)}^{-1}\sum_{k=1}^{2n-1}\mathrm{d}\omega_k\left((h_1(z))^2G_k^1(z)-(h_2(z))^2G_k^2(z)\right)\\
G_k^1(z)&=&\theta_2\partial_k\theta_1-\theta_1\partial_k\theta_2\\
G_k^2(z)&=&\theta_3\partial_k\theta_4-\theta_4\partial_k\theta_3.
\end{eqnarray*}
We would like to show that the expression
\begin{eqnarray*}
{\det\Theta(z)}^{-1}\left((h_1(z))^2G_k^1(z)-(h_2(z))^2G_k^2(z)\right)
\end{eqnarray*}
is a constant. First note that, by considering the jump and
singularity structure of $\det\Theta(z)$, we have
\begin{eqnarray*}
\det\Theta(z)=g(z)\det\Theta(\infty)g(\infty)^{-1},
\end{eqnarray*}
where $g(z)$ is given by (\ref{eq:gn_def}).
Since the $\Theta_{ij}(z)$'s have square root singularities at the $n$
points $z=z_j^{-1}$, the functions
\begin{eqnarray*}
(h_1(z))^2G_k^1(z)-(h_2(z))^2G_k^2(z)
\end{eqnarray*}
can have at most simple poles at the points $(z_j)^{\pm 1}$,
$j=1,\ldots, 2n$. Near each of these points, they behave like
\begin{eqnarray*}
(h_1(z))^2G_k^1(z)-(h_2(z))^2G_k^2(z)&=&A_0^j+A_1^j(z-z_j)^{1\over
2}+O(z-z_j), \quad z\rightarrow z_j\\
(h_1(z))^2G_k^1(z)-(h_2(z))^2G_k^2(z)&=&B_0^j(z-z_j^{-1})^{-1}+B_1^j(z-z_j^{-1})^{-{1\over
2}}+O(1), \quad z\rightarrow z_j^{-1}
\end{eqnarray*}
Since $\rho(\Delta)(z)=-\Delta(z)$, $\rho(\omega)(z)=-\omega(z)$ and
$\rho(z-\lambda_1)=z-\lambda_1$, we have
\begin{eqnarray*}
\rho(h_1^2)(z)=h_2^2(z),\quad \rho(\theta_1)(z)=\theta_3(z), \quad
\rho(\theta_2)(z)=\theta_4(z)
\end{eqnarray*}
and
\begin{eqnarray}\label{eq:invol}
(h_1(z))^2G_k^1(z)-(h_2(z))^2G_k^2(z)=(h_1(z))^2G_k^1(z)-\rho((h_1)^2G_k^1)(z).
\end{eqnarray}
Since the action of $\rho$ on a Laurent series near a branch point
$\lambda_j$ is given by
\begin{eqnarray*}
\rho\left(\sum_{k=-\infty}^{\infty}X_k(z-\lambda_j)^{k\over
2}\right)=\sum_{k=-\infty}^{\infty}X_k(-(z-\lambda_j))^{k\over 2},
\end{eqnarray*}
by (\ref{eq:invol}) we obtain $A_0^j=B_0^j=0$ for all $j$. Hence,
the function
\begin{eqnarray}\label{eq:ddl}
\det\Theta(z)^{-1}\left((h_1(z))^2G_k^1(z)-(h_2(z))^2G_k^2(z)\right)
\end{eqnarray}
does not have any poles on $\mathcal{L}$. To see that it does not have jumps
either, let us consider
\[
(h_1(z))^2G_k^1(z)=(h_1(z))^2
\left(\theta_2\partial_k\theta_1-\theta_1\partial_k\theta_2\right).
\]
The periodicity of the term inside the brackets is given by
proposition \ref{pro:per}:
\begin{eqnarray*}
\theta_1\partial_k\theta_2(z+a_j)&=&\theta_1\partial_k\theta_2\\
\theta_1\partial_k\theta_2(z+b_j)&=&\theta_1\partial_k\theta_2e^{-2\pi
i(2\omega_j(z)-2\kappa_j+{\tau}_j+\Pi_{jj})}\\&-&\theta_1\theta_2(2\pi
i\delta_{jk})e^{-2\pi i(2\omega_j(z)-2\kappa_j+{\tau}_j+\Pi_{jj})}\\
\theta_2\partial_k\theta_1(z+a_j)&=&\theta_2\partial_k\theta_1\\
\theta_2\partial_k\theta_1(z+b_j)&=&\theta_2\partial_k\theta_1e^{-2\pi
i(2\omega_j(z)-2\kappa_j+{\tau}_j+\Pi_{jj})}\\&-&\theta_2\theta_1(2\pi
i\delta_{jk})e^{-2\pi i(2\omega_j(z)-2\kappa_j+{\tau}_j+\Pi_{jj})},
\end{eqnarray*}
where $\omega_j(z)=\int_{\lambda_1}^z\mathrm{d}\omega_j$ is the $j^{th}$
component of the vector $\omega(z)$. Hence the multiplicative factor
picked up by $G_k^1(z)$ after going around a $b$-cycle cancels
exactly with the factor picked up by $\left(h_1(z)\right)^2$. It
follows that the function (\ref{eq:ddl}) does not have jumps on
$\mathcal{L}$ either. Hence, these functions are holomorphic on $\mathcal{L}$ without
any pole and must be constants. These constants can be computed by
taking $z=\infty$. In other words, they are given by
(\ref{eq:asyinf}). We therefore have
\begin{eqnarray*}
\det\Theta(z)^{-1}\left((h_1(z))^2G_k^1(z)-(h_2(z))^2G_k^2(z)\right)=\partial_k\log\left(\theta\left(\beta(\lambda)\overrightarrow{e}+{{\tau}\over
2}\right)\theta\left(\beta(\lambda)\overrightarrow{e}-{{\tau}\over
2}\right)\right)
\end{eqnarray*}
This proves the theorem.
\end{proof}
Theorem \ref{th6}, in its turn, yields our main asymptotic result.
\begin{theorem}\label{th7}
Let $\Omega_{\epsilon}$ be the domain of solvability (\ref{Omegaepsilon}).
Then the logarithmic derivative of the Toeplitz determinant $D_L(\lambda)$
admits the following asymptotic representation, which is uniform in
$\lambda \in \Omega_{\epsilon}$.
\begin{eqnarray}
\label{DLasMay}
\frac{\mathrm{d}}{\mathrm{d} \lambda}\log D_L(\lambda)
& = & -\frac{2\lambda}{1-\lambda^{2}}L
+\frac{\mathrm{d}}{\mathrm{d} \lambda}
\log \left[ \theta \left(
\beta(\lambda)\overrightarrow{e}+\frac{ \tau}{2}\right)
\theta \left(\beta(\lambda)\overrightarrow{e}-\frac{\tau}{2}\right)
\right] \nonumber \\
& & + O\left(\frac{\rho^{-L}}{\lambda^2}\right), \quad L \to \infty.
\end{eqnarray}
Here $\rho$ is any real number satisfying the inequality
$$
1 < \rho < \mathrm{min}\{|\lambda_{j}|: |\lambda_{j}| > 1\}.
$$
\end{theorem}
The uniformity of the estimate (\ref{DLasMay}) with respect to $\lambda \in
\Omega_{\epsilon}$
allows its integration over $\Omega_{\epsilon}$, which yields the equation
$$
\log\left(D_{L}(\lambda)(1-\lambda^2)^{-L}\right)
-\lim_{s \to \infty}\log\left(D_{L}(s)(1-s^2)^{-L}\right)
=\log\frac{ \theta \left(
\beta(\lambda)\overrightarrow{e}+\frac{ \tau}{2}\right)
\theta \left(\beta(\lambda)\overrightarrow{e}-\frac{\tau}{2}\right)}{
\theta^{2}\left(\frac{ \tau}{2}\right)}
$$
$$
+ r(L),
$$
where $r(L)=O\left(\rho^{-L}\right)$ as $L \to \infty$. Taking into
account (\ref{eq:DL}), the second term on the left hand side is
zero. This proves Proposition~\ref{th10_07}.
\section{The limiting entropy}
\setcounter{equation}{0}
Observe that equation (\ref{korep_int}) can also be rewritten as
\begin{eqnarray}
\label{eaaMay0}
S_{L}(\rho_A)=\lim_{\epsilon \to 0^+} \frac{1}{4\pi \mathrm{i}}
\oint_{\Gamma(\epsilon)} e(1+\epsilon, \lambda)
\frac{\mathrm{d}}{\mathrm{d} \lambda} \log
\left( D_{L}(\lambda)(\lambda^{2} - 1)^{-L}\right)\mathrm{d} \lambda.
\end{eqnarray}
The right hand side of this equation follows from
$$
\lim_{\epsilon \to 0^+}
\oint_{\Gamma(\epsilon)} e(1+\epsilon, \lambda)
\frac{\mathrm{d}}{\mathrm{d} \lambda} \log
(\lambda^{2} - 1)^{-L} \mathrm{d} \lambda = L
\lim_{\epsilon \to 0^+}
\oint_{\Gamma(\epsilon)} e(1+\epsilon, \lambda)
\frac{2\lambda}{1-\lambda^{2} } \mathrm{d} \lambda
$$
$$
=2\pi iL \lim_{\epsilon \to 0^+} \Biggl[\mbox{res}_{\lambda=1}
\left(e(1+\epsilon, \lambda)
\frac{2\lambda}{1-\lambda^{2} }\right)
+ \mbox{res}_{\lambda=-1}\left(e(1+\epsilon, \lambda)
\frac{2\lambda}{1-\lambda^{2} }\right)\Biggr]
$$
$$
=2\pi iL \lim_{\epsilon \to 0^+}
\left((2+\epsilon)\log{\frac{2+\epsilon}{2}} + \epsilon
\log{\frac{\epsilon}{2}}\right)
= 0.
$$
We identify the limiting entropy $S(\rho_A)$ as the following double limit
(cf.\cite{IJK2}),
\begin{eqnarray}
\label{eaaMay}
S(\rho_A)=\lim_{\epsilon \to 0^+}\left[\lim_{L\to \infty} \frac{1}{4\pi
\mathrm{i}}
\oint_{\Gamma(\epsilon)} e(1+\epsilon, \lambda)
\frac{\mathrm{d}}{\mathrm{d} \lambda} \log
\left(D_{L}(\lambda)(\lambda^{2} - 1)^{-L}\right)\mathrm{d} \lambda\right].
\end{eqnarray}
We now want to apply theorem \ref{th7} and evaluate the large $L$
limit in the right hand side of this equation. To this end we need
first to replace the integration along the contour $\Gamma(\epsilon)$
by the integration along a subset of the set $\Omega_{\epsilon}$ where
we can use the uniform asymptotic formula (\ref{DLasMay}).
Let us define
$$
\delta(\lambda):= \frac{\mathrm{d}}{\mathrm{d} \lambda} \log
\left(D_{L}(\lambda)(\lambda^{2} - 1)^{-L}\right).
$$
The function $\delta(\lambda)$ satisfies the following properties.
\begin{enumerate}
\item $\delta(\lambda)$ is analytic outside of the interval $[-1,1]$.
\item $\delta(-\lambda) = -\delta(\lambda)$.
\item $\delta(\lambda) = O\left(\lambda^{-3}\right), \quad \lambda \to
\infty$.
\item $\delta(\lambda) = O\left(\log|1-\lambda^{2}|\right), \quad \lambda
\to \pm 1$.
\end{enumerate}
Consider the identity $$
\oint_{\Gamma(\epsilon)} e(1+\epsilon,
\lambda) \frac{\mathrm{d}}{\mathrm{d} \lambda} \log
\left(D_{L}(\lambda)(\lambda^{2} - 1)^{-L}\right) \mathrm{d} \lambda \equiv
\oint_{\Gamma(\epsilon)} e(1+\epsilon, \lambda) \delta(\lambda) \mathrm{d}
\lambda. $$
Property 1 allows us to replace the contour of
integration $\Gamma(\epsilon)$ by the large contour $\Gamma'$ as
depicted in figure~1, so that $$
\oint_{\Gamma(\epsilon)} e(1+\epsilon, \lambda) \delta(\lambda) \mathrm{d}
\lambda =
\oint_{\Gamma'} e(1+\epsilon, \lambda)
\delta(\lambda) \mathrm{d} \lambda . $$
Simultaneously, property 3 allows us to push $R
\to \infty$ in the right hand side of the last formula and hence
to re-write it as the relation, $$
\oint_{\Gamma(\epsilon)} e(1+\epsilon, \lambda) \delta(\lambda)
\mathrm{d} \lambda
=\int_{-\infty}^{-1-\epsilon}\delta(\lambda)\left[ -\frac{1+\epsilon
+\lambda}{2}\left( \log_{+}\left(\frac{1+\epsilon
+\lambda}{2}\right) -\log_{-}\left(\frac{1+\epsilon
+\lambda}{2}\right)\right)\right]\mathrm{d} \lambda $$
\begin{equation}\label{enteval1}
+\int_{1+\epsilon}^{\infty}\delta(\lambda)\left[
-\frac{1+\epsilon -\lambda}{2}\left(
\log_{+}\left(\frac{1+\epsilon -\lambda}{2}\right)
-\log_{-}\left(\frac{1+\epsilon -\lambda}{2}\right)\right)\right]\mathrm{d} \lambda.
\end{equation}
Here $ \log_{+}\left(\frac{1+\epsilon \pm \lambda}{2}\right)$ and
$ \log_{-}\left(\frac{1+\epsilon \pm \lambda}{2}\right)$ denote,
respectively, the
upper and lower boundary values of the functions
$ \log\left(\frac{1+\epsilon \pm \lambda}{2}\right)$ on the real axis.
We note that
$$
\log_{+}\left(\frac{1+\epsilon + \lambda}{2}\right)
- \log_{-}\left(\frac{1+\epsilon + \lambda}{2}\right) =2\pi i,
\quad\mbox{for all }\quad \lambda < -1 -\epsilon,
$$
and
$$
\log_{+}\left(\frac{1+\epsilon - \lambda}{2}\right)
- \log_{-}\left(\frac{1+\epsilon - \lambda}{2}\right) =-2\pi i,
\quad\mbox{for all }\quad \lambda > 1 +\epsilon.
$$
Therefore, equation (\ref{enteval1}) becomes
\begin{eqnarray}
\label{enteval2}
\oint_{\Gamma(\epsilon)} e(1+\epsilon, \lambda)
\delta(\lambda)\mathrm{d} \lambda & = & -\pi \mathrm{i}\int_{-\infty}^{-1-\epsilon}(1+\epsilon
+\lambda)\delta(\lambda)\mathrm{d} \lambda
+ \pi \mathrm{i}\int_{1+\epsilon}^{\infty}(1+\epsilon
-\lambda)\delta(\lambda)\mathrm{d} \lambda \nonumber \\
&= & 2\pi \mathrm{i}\int_{1+\epsilon}^{\infty}(1+\epsilon
-\lambda)\delta(\lambda)\mathrm{d} \lambda,
\end{eqnarray}
where we have also taken into account
the oddness of the function $\delta(\lambda)$,
\textit{i.e.} property 2. Recalling the definition of the function
$\delta(\lambda)$,
we arrive at
\begin{equation}\label{enteval3}
\oint_{\Gamma(\epsilon)} e(1+\epsilon, \lambda)
\frac{\mathrm{d}}{\mathrm{d} \lambda} \log
\left(D_{L}(\lambda)(\lambda^{2} - 1)^{-L}\right)\mathrm{d} \lambda
=
2\pi \mathrm{i} \int_{1+\epsilon}^{\infty} (1+\epsilon- \lambda)
\frac{\mathrm{d}}{\mathrm{d} \lambda} \log
\left(D_{L}(\lambda)(\lambda^{2} - 1)^{-L}\right)\,
\mathrm{d} \lambda.
\end{equation}
The estimate (\ref{DLasMay}) can be used in the right hand side of formula
(\ref{enteval3}).
This enables us to perform an explicit evaluation of the large $L$
limit in (\ref{eaaMay}) so that the formula for the entropy $S(\rho_A)$
becomes
\begin{eqnarray}
\label{ent123}
S(\rho_A)& =& \frac{1}{2}\lim_{\epsilon \to 0^+}\left[
\int_{1+\epsilon}^{\infty}(1+\epsilon - \lambda)
\frac{\mathrm{d}}{\mathrm{d} \lambda}
\log \Bigl(\theta \left(
\beta(\lambda)\overrightarrow{e}+\frac{ \tau}{2}\right)
\theta \left(\beta(\lambda)\overrightarrow{e}-\frac{\tau}{2}\right)
\Bigr) \mathrm{d} \lambda \right] \nonumber \\
& =& \frac{1}{2}\lim_{\epsilon \to 0^+}
\int_{1+\epsilon}^{\infty}
\log{{\theta\left(\beta(\lambda)\overrightarrow{e}+{\tau\over
2}\right)\theta\left(\beta(\lambda)\overrightarrow{e}-{\tau\over
2}\right)}\over{\theta^2\left({\tau\over 2}\right)}}\mathrm{d}\lambda.
\end{eqnarray}
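The second equality in (\ref{ent123}) is an integration by parts. Writing
$F(\lambda)=\log\bigl[\theta(\beta(\lambda)\overrightarrow{e}+\frac{\tau}{2})
\theta(\beta(\lambda)\overrightarrow{e}-\frac{\tau}{2})/\theta^{2}(\frac{\tau}{2})\bigr]$,
which has the same $\lambda$-derivative as the logarithm in the first line, we have
$$
\int_{1+\epsilon}^{\infty}(1+\epsilon-\lambda)F'(\lambda)\,\mathrm{d}\lambda
=\Bigl[(1+\epsilon-\lambda)F(\lambda)\Bigr]_{1+\epsilon}^{\infty}
+\int_{1+\epsilon}^{\infty}F(\lambda)\,\mathrm{d}\lambda ,
$$
and the boundary terms vanish: the prefactor vanishes at $\lambda=1+\epsilon$,
while at infinity $F(\lambda)=O\bigl(\beta(\lambda)^{2}\bigr)$ (the terms linear
in $\beta$ cancel between the two theta factors) and $\beta(\lambda)\to 0$ fast
enough as $\lambda\to\infty$.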
To complete the evaluation of the entropy, we need to prove the existence
of this limit.
\section{Integrability at $\pm 1$.
The final formula for the entropy}
\label{se:integ}
\setcounter{equation}{0}
We will now prove the integrability of the function
\begin{eqnarray*}
\log{{\theta\left(\beta(\lambda)\overrightarrow{e}+{\tau\over
2}\right)\theta\left(\beta(\lambda)\overrightarrow{e}-{\tau\over
2}\right)}\over{\theta^2\left({\tau\over 2}\right)}}
\end{eqnarray*}
at $\pm 1$.
First let us denote the real and imaginary parts of the period
matrix $\Pi$ by $\mathrm{Re} \,\Pi$ and $\mathrm{Im} \,\Pi$. Since $\mathrm{Im} \,\Pi$ is non-singular, there exists a real vector
$\overrightarrow{v}$ such that
\begin{eqnarray*}
\overrightarrow{e}=\mathrm{Im} \,\Pi\overrightarrow{v}.
\end{eqnarray*}
We can now write
\begin{eqnarray*}
i\overrightarrow{e}=\left(\Pi-\mathrm{Re} \,\Pi\right)\overrightarrow{v}.
\end{eqnarray*}
Let $Q$ be a large real number, and let $\overrightarrow{m}$ be an
integer vector such that
\begin{eqnarray}\label{eq:qm}
Q\overrightarrow{v}=\overrightarrow{m}+\overrightarrow{q},
\end{eqnarray}
where the entries of $\overrightarrow{q}$ are between 0 and 1.
In particular, we have
\begin{eqnarray*}
\overrightarrow{m}=Q\left(\mathrm{Im} \, \Pi\right)^{-1}
\overrightarrow{e}-\overrightarrow{q}.
\end{eqnarray*}
Then, from the periodicity of the theta function (\ref{eq:period}),
we see that
\begin{eqnarray}
\label{eq:asym1}
\theta\left(iQ\overrightarrow{e}+\overrightarrow{c}_0\right)&=&
\theta\left((\overrightarrow{m}+
\overrightarrow{q})^T\left(\Pi-\mathrm{Re}\Pi\right)+\overrightarrow{c}_0\right)
\nonumber\\
&=&\exp\Bigg(Q^2\pi\Bigg[i\left<\overrightarrow{e},\left(\mathrm{Im} \,
\Pi\right)^{-1}\mathrm{Re}\Pi\left(\mathrm{Im} \,
\Pi\right)^{-1}\overrightarrow{e}\right>\nonumber\\
& & + \left<\overrightarrow{e},\left(\mathrm{Im}
\, \Pi\right)^{-1}\overrightarrow{e}\right>\Bigg]\nonumber\\
& &- 2i\pi Q\left[\left<\overrightarrow{e},\left(\mathrm{Im} \,
\Pi\right)^{-1}\mathrm{Re}
\,\Pi\overrightarrow{q}\right>+\left<\overrightarrow{e},\left(\mathrm{Im}
\,
\Pi\right)^{-1}\overrightarrow{c}_0\right>+\left<2i\overrightarrow{e},\overrightarrow{q}\right>
\right]\nonumber\\
& & + i\pi\left[\left<\overrightarrow{q},\Pi\overrightarrow{q}\right>
+2\left<\overrightarrow{q},\overrightarrow{c}_0\right>\right]
\Bigg)\nonumber\\
& &\times \theta\left(-(\overrightarrow{m}+\overrightarrow{q})^T\mathrm{Re}
\, \Pi+\overrightarrow{c}_0+\overrightarrow{q}^T\Pi\right)
\end{eqnarray}
for some bounded constant $\overrightarrow{c}_0$.
Note that there exists an integer vector $\overrightarrow{l}$ and
real vector $\overrightarrow{r}$ with entries between 0 and 1 such
that
\begin{eqnarray*}
(\overrightarrow{m}+\overrightarrow{q})^T \mathrm{Re} \, \Pi=\overrightarrow{l}+\overrightarrow{r}.
\end{eqnarray*}
Therefore, we have
\begin{eqnarray*}
\theta\left(-(\overrightarrow{m}+\overrightarrow{q})^T\mathrm{Re} \, \Pi+\overrightarrow{c}_0+\overrightarrow{q}^T\Pi\right)
=\theta\left(-\overrightarrow{r}+\overrightarrow{c}_0+\overrightarrow{q}^T\Pi\right).
\end{eqnarray*}
If $\theta\left(iQ\overrightarrow{e}+\overrightarrow{c}_0\right)$
is non-zero for all $Q$, then from (\ref{eq:asym1}) we see that
\begin{eqnarray*}
\log\theta\left(iQ\overrightarrow{e}+\overrightarrow{c}_0\right)&=&Q^2\pi\Bigg[i\left<\overrightarrow{e},\left(\mathrm{Im}
\, \Pi\right)^{-1}\mathrm{Re} \, \Pi\left(\mathrm{Im} \,
\Pi\right)^{-1}\overrightarrow{e}\right>\nonumber\\& &+ \left<\overrightarrow{e},\left(\mathrm{Im}
\,\Pi\right)^{-1}\overrightarrow{e}\right>+2 i
{N(Q,c_0)\over Q^2}+O(Q^{-1})\Bigg], \quad Q \to \infty,
\end{eqnarray*}
where $N(Q,c_0)$ is an integer that depends on the branch of the
logarithm. It may depend on $Q$ and $\overrightarrow{c}_0$. This
term arises because in the integral expression of the entropy,
\begin{eqnarray}\label{eq:int}
\frac{1}{2}\int_{1+\epsilon}^{\infty}\log{{\theta\left(\beta(\lambda)\overrightarrow{e}+{\tau\over
2}\right)\theta\left(\beta(\lambda)\overrightarrow{e}-{\tau\over
2}\right)}\over{\theta^2\left({\tau\over 2}\right)}}\mathrm{d}\lambda,
\end{eqnarray}
the branch of the logarithm must be chosen so that the integrand is
continuous in $\lambda$. We shall determine the asymptotic behavior
of $N(Q,c_0)$ as $Q \to \infty$.
Due to theorem \ref{thm:solv}, the inequality (\ref{eq:zero}) is
true when $\beta(\lambda)\in i\mathbb{R}$. Therefore, we can apply
the above result to compute the asymptotic behavior of the integrand
in (\ref{eq:int}):
\begin{eqnarray}
\label{eq:intasym}
\log{{\theta\left(\beta(\lambda)\overrightarrow{e}+{\tau\over
2}\right)\theta\left(\beta(\lambda)\overrightarrow{e}-{\tau\over
2}\right)}\over{\theta^2\left({\tau\over
2}\right)}}&=&-2\beta(\lambda)^2\pi\Bigg[\left<\overrightarrow{e},\left(\mathrm{Im}
\,
\Pi\right)^{-1}\overrightarrow{e}\right>\nonumber\\
&&+ i\left<\overrightarrow{e},\left(\mathrm{Im}
\, \Pi \right)^{-1}\mathrm{Re} \, \Pi\left(\mathrm{Im} \, \Pi
\right)^{-1}\overrightarrow{e}\right>\nonumber\\
&&-2 i {{N(\beta(\lambda),{\tau\over
2})+N(\beta(\lambda),-{\tau\over 2})}\over
\beta(\lambda)^2}\nonumber \\
&&+ O(\beta(\lambda)^{-1})\Bigg]
\end{eqnarray}
Since $D_L(\lambda)$ in (\ref{eq:DL}) is real and positive
for $\lambda\in(1,\infty)$, and since $\log\left(D_L(\lambda)(\lambda^{2} - 1)^{-L}\right)$ has to be
zero at $\lambda=\infty$ (which is needed to deform the contour to
obtain (\ref{ent123})), we see that $\log D_L(\lambda)$ has to be real for
$\lambda\in(1,\infty)$. Therefore, the imaginary part of the leading
order term in (\ref{eq:intasym}) must be zero. In particular, this
means that
\begin{eqnarray*}
\left<\overrightarrow{e},\left(\mathrm{Im} \,\Pi \right)^{-1}\mathrm{Re} \,
\Pi\left(\mathrm{Im} \, \Pi \right)^{-1}\overrightarrow{e}\right>-2{{N(\beta(\lambda),{\tau\over
2})+N(\beta(\lambda),-{\tau\over 2})}\over
\beta(\lambda)^2}=O(\beta(\lambda)^{-1}).
\end{eqnarray*}
Thus, the asymptotic behavior of the integrand in (\ref{eq:int}) is
\begin{eqnarray}
\label{eq:intasym1}
\log{{\theta\left(\beta(\lambda)\overrightarrow{e}+{\tau\over
2}\right)\theta\left(\beta(\lambda)\overrightarrow{e}-{\tau\over
2}\right)}\over{\theta^2\left({\tau\over
2}\right)}}=-2\pi\beta(\lambda)^2\left(\left<\overrightarrow{e},\left(\mathrm{Im}
\, \Pi\right)^{-1}\overrightarrow{e}\right>+O(\beta(\lambda)^{-1})\right),
\quad \lambda \to 1^+.
\end{eqnarray}
The left hand side of this equation grows only like
$\beta(\lambda)^{2}=O\left(\log^{2}(\lambda-1)\right)$ as $\lambda\to1^{+}$; it is
therefore integrable at $\lambda=1^+$ and we can take the limit $\epsilon\rightarrow 0$ in
(\ref{ent123}) to obtain our final result for the entropy:
\begin{eqnarray}\label{eq:entro}
S(\rho_A)=\frac{1}{2}\int_{1}^{\infty}\log{{\theta\left(\beta(\lambda)\overrightarrow{e}+{\tau\over
2}\right)\theta\left(\beta(\lambda)\overrightarrow{e}-{\tau\over
2}\right)}\over{\theta^2\left({\tau\over 2}\right)}}\mathrm{d}\lambda.
\end{eqnarray}
\section{Critical behavior as roots of $g(z)$ approach the unit circle}\label{se:real}
\setcounter{equation}{0}
The purpose of this section is to prove theorem~\ref{thm:crit}. We
shall study the critical behavior of the entropy of entanglement as
some pairs of the roots (\ref{eq:lambdai}) approach the unit circle.
As we discussed in section~\ref{stat_res}, in each pair one root
lies inside the unit circle, while the other outside. In this limit
the entropy becomes singular.
We shall study all the possible cases of such degeneracy, namely the
following three:
\begin{enumerate}
\item the limit of two real roots approaching 1;
\item the limit of $2r$ pairs of complex roots approaching the unit
circle;
\item the limit of $2r$ pairs of complex roots approaching the unit
circle together with one pair of real roots approaching 1.
\end{enumerate}
When pairs of roots in (\ref{eq:lambdai}) approach the unit circle,
the period matrix $\Pi$ in the definition of the theta function
(\ref{eq:thetadef}) becomes degenerate and some of its entries tend to
zero. This will lead to a divergence in the sum (\ref{eq:thetadef})
and hence a divergence in the entropy. It is very difficult to study
such divergence directly from the sum (\ref{eq:thetadef}). In order
to compute such limits, we need to perform modular transformations to
the theta functions. In particular, the following theorem from
\cite{FR} will be used throughout the whole section.
\begin{theorem}
\label{thm:modular}
If the canonical bases of cycles $(\tilde{A}\quad\tilde{B})$ and $(A\quad B)$ are related by
\begin{eqnarray*}
\pmatrix{\tilde{A}\cr \tilde{B}}\ &=&Z\pmatrix{A\cr
B}=\pmatrix{Z_{11}&Z_{12}\cr Z_{21}&Z_{22}}\ \pmatrix{A\cr B},
\end{eqnarray*}
where the matrix $Z$ is symplectic \textit{i.e.}
\begin{eqnarray*}
Z^T\pmatrix{0&-I_{2n-1}\cr
I_{2n-1}&0}Z&=&\pmatrix{0&-I_{2n-1}\cr
I_{2n-1}&0},\\
Z^{-1}&=&\pmatrix{Z_{22}^T&-Z_{12}^T\cr
-Z_{21}^T&Z_{11}^{T}},
\end{eqnarray*}
then we have the following relations between the theta functions
with different period matrices:
\begin{eqnarray}\label{eq:modular}
\theta\left[{\varepsilon\atop
\varepsilon^{\prime}}\right](\xi,\Pi)=\varsigma\exp\left[-\pi
i\tilde{\xi}^T(-Z_{12}^{T}\tilde{\Pi}+Z_{22}^T)^{-1}Z_{12}^T\tilde{\xi}\right]\theta\left[{\tilde{\varepsilon}\atop
\tilde{\varepsilon}^{\prime}}\right](\tilde{\xi},\tilde{\Pi}),
\end{eqnarray}
where
\begin{eqnarray}\label{eq:transxi}
\tilde{\xi}=\left((-Z_{12}^T\tilde{\Pi}+Z_{22}^T)^T\right)\xi
\end{eqnarray}
and $\varsigma$ is a constant. The characteristics of the theta
functions are related by
\begin{eqnarray*}
\varepsilon&=&Z_{22}^T\tilde{\varepsilon}+Z_{12}^T\tilde{\varepsilon}^{\prime}-{\rm diag}\left(Z_{12}^TZ_{22}\right)\\
\varepsilon^{\prime}&=&Z_{21}^T\tilde{\varepsilon}+Z_{11}^T\tilde{\varepsilon}^{\prime}-{\rm diag}\left(Z_{11}^TZ_{21}\right),
\end{eqnarray*}
where, for a square matrix $M$, ${\rm diag}(M)$ denotes the column vector whose entries are the
diagonal elements of $M$. The new period matrix is given by
\begin{eqnarray}\label{eq:modperiod}
\tilde{\Pi}=\left(Z_{22}\Pi+Z_{21}\right)\left(Z_{12}\Pi+Z_{11}\right)^{-1}
\end{eqnarray}
and the normalized one forms are related by
\begin{eqnarray}\label{eq:modform}
\mathrm{d}\tilde{\Omega}&=&\left((-Z_{12}^T\tilde{\Pi}+Z_{22}^T)^T\right)\mathrm{d}\Omega\\
\mathrm{d}\tilde{\Omega}^T&=&(\mathrm{d}\tilde{\omega}_1,\ldots,\mathrm{d}\tilde{\omega}_{2n-1})^T,\quad
\mathrm{d}\Omega^T=(\mathrm{d}\omega_1,\ldots,\mathrm{d}\omega_{2n-1})^T,\nonumber
\end{eqnarray}
which is the same transformation as in~(\ref{eq:transxi}).
\end{theorem}
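For later use, note that the formula for $Z^{-1}$ quoted in theorem
\ref{thm:modular} is an immediate consequence of the symplectic
condition: multiplying $Z^T\pmatrix{0&-I_{2n-1}\cr I_{2n-1}&0}Z=\pmatrix{0&-I_{2n-1}\cr I_{2n-1}&0}$
on the left by the inverse of the symplectic form gives
\begin{eqnarray*}
Z^{-1}=\pmatrix{0&I_{2n-1}\cr -I_{2n-1}&0}Z^T\pmatrix{0&-I_{2n-1}\cr
I_{2n-1}&0}=\pmatrix{Z_{22}^T&-Z_{12}^T\cr
-Z_{21}^T&Z_{11}^{T}}.
\end{eqnarray*}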
Our aim is to find a good choice of basis
$\pmatrix{\tilde{A}&\tilde{B}}$ such that
$\theta(\tilde{\xi},\tilde{\Pi})$ remains finite while some entries of
$\tilde{\Pi}$ tend to infinity as certain pairs of roots $\lambda_j$
approach the unit circle. This would confine the divergence of the
entropy within the exponential factor in (\ref{eq:modular}), which can
be computed.
\subsection{The limit of two real roots approaching 1}
In this section the choice of the basis $(\tilde{A} \quad\tilde{B})$
described in theorem \ref{thm:modular} is the one shown in
figure~\ref{fig:cycle2}.
\begin{figure}[htbp]
\begin{center}
\resizebox{8cm}{!}{\input{cycle2.pstex_t}}\caption{The choice of
cycles on the hyperelliptic curve $\mathcal{L}$. The arrows denote the
orientations of the cycles and branch cuts. }\label{fig:cycle2}
\end{center}
\end{figure}
In the notation of theorem \ref{thm:modular}, the new basis
$(\tilde{A}\quad \tilde{B})$ and the old one $(A \quad B)$ are related by
\begin{eqnarray}\label{eq:ci}
\pmatrix{\tilde{A}\cr
\tilde{B}}\ &=&Z \pmatrix{A \cr B}\nonumber\\
Z&=&\pmatrix{Z_{11}&Z_{12}\cr Z_{21}&Z_{22}}\ =\pmatrix{0&-C_2\cr C_1&0}\ \nonumber\\
\tilde{A}^T&=&(\tilde{a}_1,\ldots,\tilde{a}_{2n-1})^T,\quad \tilde{B}^T=(\tilde{b}_1,\ldots,\tilde{b}_{2n-1})^T\nonumber\\
A^T&=&(a_1,\ldots,a_{2n-1})^T,\quad B^T=(b_1,\ldots,b_{2n-1})^T\nonumber\\
(C_1)_{ij}&=&1, \quad j\geq i,\quad (C_1)_{ij}=0, \quad j<i\\
(C_2)_{ii}&=&1,\quad (C_2)_{i,i-1}=-1, \quad (C_2)_{ij}=0, \quad
j\neq i, i-1\nonumber\\
C_1&=&\left(C_2^{-1}\right)^T.\nonumber
\end{eqnarray}
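To illustrate these matrices, consider for instance the case $2n-1=3$. Then
\begin{eqnarray*}
C_2=\pmatrix{1&0&0\cr -1&1&0\cr 0&-1&1},\quad
C_2^{-1}=\pmatrix{1&0&0\cr 1&1&0\cr 1&1&1},\quad
C_1=\left(C_2^{-1}\right)^T=\pmatrix{1&1&1\cr 0&1&1\cr 0&0&1},
\end{eqnarray*}
in agreement with the definition $(C_1)_{ij}=1$ for $j\geq i$.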
The relation between the two period matrices can be found using
(\ref{eq:modperiod})
\begin{eqnarray}\label{eq:tildePi}
\tilde{\Pi}=-C_1\Pi^{-1} C_2^{-1}.
\end{eqnarray}
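Explicitly, since (\ref{eq:ci}) gives $Z_{11}=Z_{22}=0$, $Z_{12}=-C_2$
and $Z_{21}=C_1$, equation (\ref{eq:modperiod}) reduces to
\begin{eqnarray*}
\tilde{\Pi}=\left(Z_{22}\Pi+Z_{21}\right)\left(Z_{12}\Pi+Z_{11}\right)^{-1}
=C_1\left(-C_2\Pi\right)^{-1}=-C_1\Pi^{-1}C_2^{-1}.
\end{eqnarray*}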
To study the behavior of the entropy as the real roots approach each
other, $\lambda_{2n}\rightarrow\lambda_{2n}^{-1}$, we need to know the
behavior of the period matrix $\tilde{\Pi}$ in this limit. Now, we
have
\begin{equation}
\label{eq:limw}
w_0 = \lim_{\lambda_{2n} \to \lambda_{2n}^{-1}}
\sqrt{\prod_{i=1}^{4n}(z-\lambda_i)}=(z-1)
\sqrt{\prod_{i\neq 2n,2n+1}^{4n}(z-\lambda_i)}.
\end{equation}
Furthermore, as $\lambda_{2n} \to \lambda_{2n}^{-1}$ the integration
around $\tilde{a}_n$ tends to the residue at $z=1$; the hyperelliptic
curve $\mathcal{L}$ becomes a singular hyperelliptic curve $\mathcal{L}_0$ of genus
$2n-2$; the tilded basis of canonical cycles on this curve reduces to
\begin{eqnarray}
\label{eq:basis}
\tilde{A}_0^T&=&(\tilde{a}_1,\ldots,\tilde{a}_{n-1},\tilde{a}_{n+1},
\ldots,\tilde{a}_{2n-1})^T,\nonumber\\
\tilde{B}_0^T&=&(\tilde{b}_1,\ldots,
\tilde{b}_{n-1},\tilde{b}_{n+1},\ldots,\tilde{b}_{2n-1})^T.
\end{eqnarray}
\begin{figure}[htbp]
\begin{center}
\resizebox{6cm}{!}{\input{crit1.pstex_t}}\caption{As $\lambda_{2n}\rightarrow\lambda_{2n}^{-1}$, integration around $\tilde{a}_n$ becomes a residue integral around $z=1$. }\label{fig:crit1}
\end{center}
\end{figure}
The holomorphic 1-forms $\mathrm{d}\tilde{\omega}_j$ tend to the following
limit \cite{BBEIM}:
\begin{eqnarray*}
\mathrm{d}\tilde{\omega}_j^0={{\varphi_j(z)}\over{w_0}}\mathrm{d} z,
\end{eqnarray*}
where $\varphi_j(z)$ are degree $2n-2$ polynomials determined
by the normalization conditions
\begin{eqnarray*}
\int_{\tilde{a}_j}\mathrm{d}\tilde{\omega}_k^0&=&\delta_{kj},\quad j\neq n\\
2\pi i{\rm Res}_{z=1,w=w_0(1)}\mathrm{d}\tilde{\omega}_k^0&=&\delta_{kn}.
\end{eqnarray*}
Therefore, the 1-forms $\mathrm{d}\tilde{\omega}_k^0$, $k\neq n$, become the
holomorphic 1-forms that are dual to the basis $\tilde{A}_0$ on
$\mathcal{L}_0$. Furthermore, $\mathrm{d}\tilde{\omega}_n^0$ becomes a normalized
meromorphic 1-form with simple poles at the points above $z=1$ on
$\mathcal{L}_0$.
As in \cite{BBEIM}, we see that the entries of the period matrix
$\tilde{\Pi}$ tend to the following limits:
\begin{eqnarray*}
\lim_{\lambda_{2n} \to \lambda_{2n}^{-1}} \tilde{\Pi}_{jk}
& = & \tilde{\Pi}_{jk}^0, \quad (j,k)\neq (n,n)\\
\tilde{\Pi}_{nn}&=&2\sum_{j=1}^{n}
\int_{\lambda_{2j-1}}^{\lambda_{2j}}\mathrm{d}\tilde{\omega}_n\\
&=&{1\over {\pi i}}\log|\lambda_{2n}^{-1}-\lambda_{2n}|+O(1), \quad
\lambda_{2n} \to \lambda_{2n}^{-1},
\end{eqnarray*}
where $\tilde{\Pi}_{jk}^0$ is finite for $(j,k)\neq(n,n)$.
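The logarithmic growth of $\tilde{\Pi}_{nn}$ is the familiar signature
of a degenerating cycle: heuristically, the $\tilde{b}_n$-period picks
up, near the two coalescing branch points, a contribution of the type
\begin{eqnarray*}
\int_{|\lambda_{2n}-\lambda_{2n}^{-1}|}^{c}{{\mathrm{d}t}\over t}
=-\log\left|\lambda_{2n}-\lambda_{2n}^{-1}\right|+O(1),
\end{eqnarray*}
where $c$ is a cutoff independent of $\lambda_{2n}$; the precise
prefactor ${1\over{\pi i}}$ is the outcome of the local analysis of
\cite{BBEIM} quoted above.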
Let us adopt the notation of theorem \ref{thm:modular} and denote the
argument of the theta function in the entropy (\ref{eq:entro}) by
$\xi$, that is
\begin{eqnarray}\label{eq:xi1}
\xi=\beta(\lambda)\overrightarrow{e}\pm {{\tau}\over 2}.
\end{eqnarray}
We will now compute the behavior of the argument $\tilde{\xi}$ in
(\ref{eq:modular}) with $\xi$ given by (\ref{eq:xi1}). We have
\begin{lemma}\label{le:tildexi1} Let $\xi$ be given by (\ref{eq:xi1}) and $\tilde{\xi}$ be
\begin{eqnarray*}
\tilde{\xi}=\left((-Z_{12}^T\tilde{\Pi}+Z_{22}^T)^T\right)\xi,
\end{eqnarray*}
where $Z_{ij}$ are given by (\ref{eq:ci}). Then as
$\lambda_{2n}\rightarrow\lambda_{2n}^{-1}$ we have
\begin{eqnarray}\label{eq:tildearg}
\tilde{\xi}_i&=&\beta(\lambda)\tilde{\Pi}_{in}\pm\eta_i,\quad
i=1,\ldots, 2n-1,
\end{eqnarray}
where $\eta_i$ remains finite as
$\lambda_{2n}\rightarrow\lambda_{2n}^{-1}$.
\end{lemma}
\begin{proof}
To begin with, we will need to express $\frac{\tau}{2}$ in terms of
the Abel map.
Recall that the term ${\tau\over 2}$ in (\ref{eq:m_res1}) is given
by
\begin{eqnarray*}
{\tau\over 2}=-\sum_{j=2}^{2n}\omega(z_j^{-1})-K,
\end{eqnarray*}
where $K$ is the Riemann constant. As in \cite{FK} (see also
appendix D), the Riemann constant can be expressed as a sum of
images of branch points under the Abel map. In particular, we have
\begin{eqnarray*}
K=-\sum_{j=2}^{2n}\omega(\lambda_{2j-1}).
\end{eqnarray*}
Therefore we have
\begin{eqnarray*}
{\tau\over
2}=-\sum_{j=2}^{2n}\omega(z_j^{-1})+\sum_{j=2}^{2n}\omega(\lambda_{2j-1}).
\end{eqnarray*}
Now, by substituting (\ref{eq:ci}) into (\ref{eq:transxi}) and making
use of (\ref{eq:modform}) and (\ref{eq:tildePi}), we see that the argument $\tilde{\xi}$ in
$\theta(\tilde{\xi},\tilde{\Pi})$ can be expressed as follows:
\begin{equation}
\label{eq:sum_for}
\tilde{\xi}_i=\beta(\lambda)\tilde{\Pi}_{in}\pm\left(\sum_{j=2}^{2n}\tilde{\omega}_i(z_j^{-1})-\sum_{j=1}^{2n}\tilde{\omega}_i(\lambda_{2j-1})\right),\quad
i=1,\ldots, 2n-1,
\end{equation}
where $\tilde{\omega}$ is the Abel map with $\mathrm{d}\omega$ replaced by
$\mathrm{d}\tilde{\omega}$ and $\tilde{\omega}_i$ is the $i^{th}$ component
of the map.
We would like to show that the term
\begin{eqnarray*}
\sum_{j=2}^{2n}\tilde{\omega}_i(z_j^{-1})-\sum_{j=1}^{2n}\tilde{\omega}_i(\lambda_{2j-1})
\end{eqnarray*}
in (\ref{eq:sum_for}) remains finite as
$\lambda_{2n}\rightarrow\lambda_{2n}^{-1}$.
To see this, note that the set of points $\{z_j^{-1}\}$ contains
exactly one of the two points $\lambda_{2n}$ and $\lambda_{2n}^{-1}$,
while $\{\lambda_{2i-1}\}$ contains $\lambda_{2n}^{-1}$
only. As $\lambda_{2n}\rightarrow\lambda_{2n}^{-1}$, the terms
$\tilde{\omega}_n(\lambda_{2n})$ and
$\tilde{\omega}_n(\lambda_{2n}^{-1})$ in the sum in
equation~(\ref{eq:sum_for}) will tend to $-\infty$. However, since
they appear in the sum with opposite signs, these contributions
cancel and the quantity
\begin{eqnarray*}
\sum_{j=2}^{2n}\tilde{\omega}_n(z_j^{-1})-\sum_{j=1}^{2n}\tilde{\omega}_n(\lambda_{2j-1})
\end{eqnarray*}
remains finite as $\lambda_{2n}\rightarrow\lambda_{2n}^{-1}$.
We can therefore write $\tilde{\xi}$ as
\begin{eqnarray*}
\tilde{\xi}_i&=&\beta(\lambda)\tilde{\Pi}_{in}\pm\eta_i,\quad
i=1,\ldots, 2n-1
\end{eqnarray*}
where $\eta_i$ remains finite as
$\lambda_{2n}\rightarrow\lambda_{2n}^{-1}$. \end{proof}
We are now ready to apply theorem \ref{thm:modular} to compute the
theta function as $\lambda_{2n}\rightarrow\lambda_{2n}^{-1}$.
\begin{lemma}\label{le:theta1}
In the limit $\lambda_{2n}\rightarrow\lambda_{2n}^{-1}$ the theta
function $\theta(\xi,\Pi)$ behaves like
\begin{eqnarray}\label{eq:theta1}
\theta(\xi,\Pi)=\exp\left(\log|\lambda_{2n}-
\lambda_{2n}^{-1}|\beta^2(\lambda)+O(1)\right),
\end{eqnarray}
where $\xi$ is given by (\ref{eq:xi1}).
\end{lemma}
\begin{proof}
Firstly, let us use (\ref{eq:modular}) and (\ref{eq:tildePi}) to
express $\theta(\xi,\Pi)$ in terms of
$\theta(\tilde{\xi},\tilde{\Pi})$; we have
\begin{eqnarray}\label{eq:hat}
\theta(\xi,\Pi)&=&\varsigma\exp\left[\pi
i\tilde{\xi}^{T}\tilde{\Pi}^{-1}\tilde{\xi}\right]\theta\left(\tilde{\xi},\tilde{\Pi}\right).
\end{eqnarray}
Let us now use (\ref{eq:tildearg}) to compute the asymptotics of the
exponential term in (\ref{eq:hat}). We obtain
\begin{eqnarray}\label{eq:expon}
\tilde{\xi}^T\tilde{\Pi}^{-1}\tilde{\xi}
=\sum_{i,j}\left(\tilde{\Pi}^{-1}\right)_{ij}\tilde{\xi}_i\tilde{\xi}_j.
\nonumber
\end{eqnarray}
The behavior of the entries in $\tilde{\Pi}^{-1}$ can be calculated
by computing the determinant and the minors. We have
\begin{eqnarray*}
\left(\tilde{\Pi}^{-1}\right)_{ij}&=&O(1),\quad
\lambda_{2n}\rightarrow\lambda_{2n}^{-1}, \quad i,j\neq n\\
\left(\tilde{\Pi}^{-1}\right)_{nj}&=&O\left(\log^{-1}|\lambda_{2n}-\lambda_{2n}^{-1}|\right),\quad
\lambda_{2n}\rightarrow\lambda_{2n}^{-1}, \quad j\neq n\\
\left(\tilde{\Pi}^{-1}\right)_{nn}&=&\pi
i\log^{-1}|\lambda_{2n}-\lambda_{2n}^{-1}|+O\left(\log^{-2}|\lambda_{2n}-\lambda_{2n}^{-1}|\right),
\quad \lambda_{2n}\rightarrow\lambda_{2n}^{-1}.
\end{eqnarray*}
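These estimates follow from elementary linear algebra. After permuting
the $n^{\rm th}$ row and column to the last position, write
$\tilde{\Pi}=\pmatrix{A&b\cr b^T&d}$ with $d=\tilde{\Pi}_{nn}$; by the
limits above, $A$ and $b$ stay bounded while $d$ diverges, and we
assume, as is generically the case, that $A$ is invertible in the
limit. The Schur complement formula then gives
\begin{eqnarray*}
\pmatrix{A&b\cr b^T&d}^{-1}=\pmatrix{A^{-1}+{{A^{-1}bb^TA^{-1}}\over{d-b^TA^{-1}b}}&-{{A^{-1}b}\over{d-b^TA^{-1}b}}\cr
-{{b^TA^{-1}}\over{d-b^TA^{-1}b}}&{1\over{d-b^TA^{-1}b}}},
\end{eqnarray*}
so that the $(n,n)$ entry of $\tilde{\Pi}^{-1}$ equals
$d^{-1}+O(d^{-2})=\pi
i\log^{-1}|\lambda_{2n}-\lambda_{2n}^{-1}|+O\left(\log^{-2}|\lambda_{2n}-\lambda_{2n}^{-1}|\right)$,
the remaining entries of the $n^{\rm th}$ row and column are
$O\left(\log^{-1}|\lambda_{2n}-\lambda_{2n}^{-1}|\right)$, and all
other entries stay bounded.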
Therefore, equation (\ref{eq:expon}) becomes
\begin{eqnarray}\label{eq:expon1}
\pi
i\sum_{i,j}\left(\tilde{\Pi}^{-1}\right)_{ij}\tilde{\xi}_i\tilde{\xi}_j
=\log|\lambda_{2n}-\lambda_{2n}^{-1}|\beta^2(\lambda)+O(1), \quad
\lambda_{2n}\rightarrow\lambda_{2n}^{-1}.
\end{eqnarray}
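Indeed, by (\ref{eq:tildearg}) the only entry of $\tilde{\xi}$ that
diverges in this limit is
$\tilde{\xi}_n=\beta(\lambda)\tilde{\Pi}_{nn}\pm\eta_n$, so the
dominant contribution comes from the $(n,n)$ term,
\begin{eqnarray*}
\pi i\left(\tilde{\Pi}^{-1}\right)_{nn}\tilde{\xi}_n^2&=&\pi
i\left(\pi
i\log^{-1}|\lambda_{2n}-\lambda_{2n}^{-1}|+\ldots\right)\left({{\beta(\lambda)}\over{\pi
i}}\log|\lambda_{2n}-\lambda_{2n}^{-1}|+O(1)\right)^2\\
&=&\log|\lambda_{2n}-\lambda_{2n}^{-1}|\beta^2(\lambda)+O(1),
\end{eqnarray*}
while every term involving an index $i\neq n$ contains at most one
factor of $\tilde{\Pi}_{nn}$ against a factor
$\left(\tilde{\Pi}^{-1}\right)_{in}=O\left(\log^{-1}|\lambda_{2n}-\lambda_{2n}^{-1}|\right)$
and is therefore $O(1)$.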
Next, we will use the definition (\ref{eq:thetadef}) of the theta
function to compute its limit as
$\lambda_{2n}\rightarrow\lambda_{2n}^{-1}$. We have,
\begin{eqnarray}
\label{eq:thetacrit}
\theta(\tilde{\xi},\tilde{\Pi})&=&\sum_{\overrightarrow{m}\in\mathbb{Z}^{2n-1}}\exp\Bigg[\pi
i\sum_{jk\neq nn}\tilde{\Pi}_{jk}m_jm_k+2\pi i\sum_{j\neq n}\left(\beta(\lambda)\tilde{\Pi}_{jn}\pm\eta_j\right)m_j\nonumber\\
& & + \pi i\tilde{\Pi}_{nn}\left(m_n^2+2\beta(\lambda)m_n\right)\pm
2\pi i\eta_nm_n\Bigg].
\end{eqnarray}
Since
\[
\lim_{\lambda_{2n} \to \lambda_{2n}^{-1}} \mathrm{Re}(2\pi
i\tilde{\Pi}_{nn}) = -\infty
\]
and $\beta(\lambda)$ is purely imaginary, we see that in the limit
only the terms with $m_n=0$ contribute to the sum. Therefore,
equation~(\ref{eq:thetacrit}) reduces to
\begin{eqnarray}\label{eq:limthet1}
\lim_{\lambda_{2n} \to \lambda_{2n}^{-1}}
\theta(\tilde{\xi},\tilde{\Pi}) &=&
\theta\left(\tilde{\xi}^0,\tilde{\Pi}^0\right)\\
\tilde{\xi}^0&=&(\tilde{\xi}_1,\ldots,\hat{\tilde{\xi}}_{n},
\ldots,\tilde{\xi}_{2n-1})^T,\nonumber
\end{eqnarray}
where the $\hat{\tilde{\xi}}_n$ in the above equation means that the
$n^{th}$ entry of the vector is removed. The period matrix
$\tilde{\Pi}^0$ is a $(2n-2)\times(2n-2)$ matrix obtained by removing
the $n^{th}$ row and $n^{th}$ column of the period matrix
$\tilde{\Pi}$. Thus, the theta function
$\theta\left(\tilde{\xi}^0,\tilde{\Pi}^0\right)$ remains finite as
$\lambda_{2n}\rightarrow\lambda_{2n}^{-1}$. This fact, together with
(\ref{eq:expon1}), shows that $\theta(\xi,\Pi)$ behaves like
\begin{eqnarray*}
\theta(\xi,\Pi)=\varsigma\exp\left(\log|\lambda_{2n}-
\lambda_{2n}^{-1}|\beta^2(\lambda)+O(1)\right)\theta\left(\tilde{\xi}^0,
\tilde{\Pi}^0\right), \quad \lambda_{2n} \to \lambda_{2n}^{-1}.
\end{eqnarray*}
Since $\theta\left(\tilde{\xi}^0,\tilde{\Pi}^0\right)$ and
$\varsigma$ remain finite as $\lambda_{2n} \to
\lambda_{2n}^{-1}$, the above equation becomes~(\ref{eq:theta1}). This
proves the lemma.
\end{proof}
Finally, by substituting (\ref{eq:theta1}) into (\ref{eq:entro}), we
have
\begin{eqnarray*}
S(\rho_A)&=&\frac{1}{2}\int_{1}^{\infty}\log{{\theta\left(\beta(\lambda)
\overrightarrow{e}+{\tau\over 2}\right)
\theta\left(\beta(\lambda)\overrightarrow{e}-{\tau\over
2}\right)}\over{\theta^2\left({\tau\over 2}\right)}}\mathrm{d}\lambda\\
&=&\int_1^{\infty}
\left(\log|\lambda_{2n}-\lambda_{2n}^{-1}|\beta^2(\lambda)+O(1)\right)\mathrm{d}\lambda
\end{eqnarray*}
Since
\[
\int_1^\infty \beta^2(\lambda) \mathrm{d} \lambda = - \frac16,
\]
we arrive at the following expression for the entropy of entanglement
\[
S(\rho_A)=- \frac16\log|\lambda_{2n}-\lambda_{2n}^{-1}| + O(1), \quad
\lambda_{2n} \to \lambda_{2n}^{-1}.
\]
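For the reader's convenience, let us indicate where the value
$-{1\over 6}$ of the integral of $\beta^2$ used above comes from,
assuming the representation $\beta(\lambda)={1\over{2\pi
i}}\log{{\lambda+1}\over{\lambda-1}}$ that is commonly used for this
quantity (the actual definition of $\beta$ is fixed earlier in the
paper). With the substitution $\lambda=\coth t$, so that
$\log{{\lambda+1}\over{\lambda-1}}=2t$ and
$\mathrm{d}\lambda=-\sinh^{-2}t\,\mathrm{d}t$, one finds
\begin{eqnarray*}
\int_1^{\infty}\beta^2(\lambda)\,\mathrm{d}\lambda
=-{1\over{\pi^2}}\int_0^{\infty}{{t^2}\over{\sinh^2t}}\,\mathrm{d}t
=-{1\over{\pi^2}}\cdot{{\pi^2}\over 6}=-{1\over 6}.
\end{eqnarray*}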
\begin{figure}
\centering
\begin{overpic}[scale=.3,unit=1mm]{index}
\put(-4,59.5){$\lambda_{2(j+1)}$}
\put(22,59.5){$\lambda_{2n}$}
\put(42,59.5){$\lambda_{2n + 1}$}
\put(66,59.5){$\lambda_{2(2n - j)- 1}$}
\put(-4,17){$\lambda_{2j+1}$}
\put(22,17){$\lambda_{2n-1}$}
\put(42,17){$\lambda_{2n + 2}$}
\put(66,17){$\lambda_{2(2n - j)}$}
\put(11,14){$\ldots$}
\put(11,56.5){$\ldots$}
\put(55,14){$\ldots$}
\put(55,56.5){$\ldots$}
\end{overpic}
\caption{Two pairs of roots, labelled according to the
ordering~(\ref{eq:order}), approaching the unit circle in the
critical limit. We have $ \lambda_{2(j+1)} \to \lambda_{2(2n -j) -
1}$, $\lambda_{2n} \to \lambda_{2n + 1}$ and $\lambda_{2j + 1} \to
\lambda_{2(2n -j)}$ respectively.}\label{fig:index}
\end{figure}
\subsection{The limit of complex roots approaching the unit circle}
\label{se:im}
We will now study the case when $2r$ pairs of complex roots approach
the unit circle, the two roots in each pair approaching each other. Let $\lambda_{2j + 1}$ be a
complex root with $n-r \le j \le n-1$. As we discussed in
section~\ref{stat_res}, $\overline{\lambda}_{2j + 1}$, $1/\lambda_{2j
+ 1}$ and $1/\overline{\lambda}_{2j + 1}$ are roots too. The
ordering~(\ref{eq:order}) implies (see figure~\ref{fig:index})
\begin{eqnarray}
\label{eq:order_complex}
\lambda_{2(j + 1)} &=& \overline{\lambda}_{2j + 1} \hspace{.2cm} \quad \quad
\lambda_{2(2n - j )- 1} = \overline{\lambda}_{2(2n - j)} \nonumber \\
\lambda_{2(2n -j)} &=& 1/\lambda_{2(j+1)} \quad \lambda_{2(2n-j)-1} =
1/\lambda_{2j + 1}.
\end{eqnarray}
The critical limit occurs as $\lambda_{2(j + 1)} \to
\lambda_{2(2n-j)-1}$. From the relations~(\ref{eq:order_complex})
this implies $\lambda_{2j+ 1} \to \lambda_{2(2n - j)}$. Thus, in what
follows we shall mainly discuss the limit $\lambda_{2(j + 1)} \to
\lambda_{2(2n-j)-1}$.
\subsubsection{Case 1: $r< n$}
\label{se:case1}
We now choose the tilded canonical basis of the cycles
$(\tilde{A}\quad \tilde{B})$ as in figure \ref{fig:crit2}. Namely, we
have
\begin{eqnarray}
\label{eq:newbasis}
\tilde{a}_j& = &a_j,\quad j<n-r,\quad j>n+r-1\nonumber\\
\tilde{b}_j& = &b_j,\quad j<n-r,\quad j>n+r-1\nonumber\\
\tilde{a}_{n-k}& = &b_{n-k}-b_{n+k-1}+\sum_{j=n-k+1}^{n+k-2}a_{j},
\quad k=1,\ldots, r\\
\tilde{a}_{n+k}&=&b_{n+k}-b_{n-k-1}
+\sum_{j=n-k-1}^{n+k}a_{j},\quad k=0,\ldots, r-1\nonumber\\
\tilde{b}_{n-k}&=&b_{n-k}-
\sum_{j=n-k}^{n+k-2}a_{j}-\sum_{j=n-r}^{n-k-1}
(-1)^{n-k-j}\left(a_{j}-2b_{j}\right),\quad k=1,\ldots, r\nonumber\\
\tilde{b}_{n+k}&=&b_{n+k}+\sum_{j=n-r}^{n-k-2}(-1)^{n-k-j}
\left(a_{j}-2b_{j}\right),\quad k=0,\ldots, r-1.\nonumber
\end{eqnarray}
\begin{figure}[htbp]
\begin{center}
\resizebox{6cm}{!}{\input{crit2.pstex_t}}\caption{The choice of
cycles on the hyperelliptic curve $\mathcal{L}$. The arrows denote the
orientations of the cycles and branch cuts. }\label{fig:crit2}
\end{center}
\end{figure}
We will show in appendix E that this is indeed a canonical basis of
cycles. We can partition this basis as follows:
\begin{eqnarray*}
\pmatrix{\tilde{A}}&=&\pmatrix{\tilde{a}^I\cr
\tilde{a}^{II}\cr\tilde{a}^{III}}\\
\tilde{a}^I_j&=&\tilde{a}_j,\quad 1\leq j\leq n-r-1\\
\tilde{a}^{II}_j&=&\tilde{a}_{n-r+j-1},\quad 1\leq j\leq 2r\\
\tilde{a}^{III}_j&=&\tilde{a}_j,\quad n+r\leq j\leq 2n-1.
\end{eqnarray*}
The $b$-cycles and the untilded basis are partitioned analogously.
If we write this relation in matrix form as in theorem
\ref{thm:modular}, then the corresponding transformation matrix is
given by
\begin{eqnarray*}
\pmatrix{\tilde{A}\cr \tilde{B}}\ &=&Z\pmatrix{A\cr
B}=\pmatrix{Z_{11}&Z_{12}\cr Z_{21}&Z_{22}}\ \pmatrix{A\cr B},
\end{eqnarray*}
where the blocks $Z_{ij}$ can be written as
\begin{eqnarray*}
Z_{ij}&=&\pmatrix{\delta_{ij}I_{n-r-1}&0&0\cr
0&C_{ij}&0\cr
0&0&\delta_{ij}I_{n-r}},
\end{eqnarray*}
where $I_{n-r-1}$ is the identity matrix of dimension $n-r-1$ and
the $C_{ij}$'s are the following $2r\times2r$ matrices:
\begin{eqnarray*}
\left(C_{11}\right)_{kl}&=&\left\{
\begin{array}{ll}
1 & \hbox{$\quad k+1\leq l\leq 2r-k$,} \\
0 & \hbox{otherwise,}
\end{array}
\right. \quad 1\leq k\leq r\\
\left(C_{11}\right)_{kl}&=&\left\{
\begin{array}{ll}
1 & \hbox{$\quad k\leq l\leq 2r-k+1$,} \\
0 & \hbox{otherwise,}
\end{array}
\right.\quad r+1\leq k\leq 2r\\
\left(C_{12}\right)_{kl}&=&\delta_{kl}-\delta_{l,2r-k+1}\quad 1\leq k,l\leq 2r\\
\left(C_{21}\right)_{kl}&=&\left\{
\begin{array}{ll}
(-1)^{k-l+1}, & \hbox{$\quad 1\leq l\leq k-1$;} \\
-1 & \hbox{$k\leq l\leq 2r-k$,}\\
0 & \hbox{$\quad 2r-k+1\leq l$,}
\end{array}
\right.,\quad 1\leq k\leq r\\
\left(C_{21}\right)_{kl}&=&\left\{
\begin{array}{ll}
(-1)^{k+l} & \hbox{$\quad 1\leq l\leq 2r-k$,} \\
0 & \hbox{otherwise,}
\end{array}
\right.\quad r+1\leq k\leq 2r\\
\left(C_{22}\right)_{kl}&=&\delta_{kl}+\left\{
\begin{array}{ll}
2(-1)^{k-l} & \hbox{$\quad 1\leq l\leq k-1$,} \\
0 & \hbox{otherwise,}
\end{array}
\right.\quad 1\leq k\leq r\\
\left(C_{22}\right)_{kl}&=&\delta_{kl}-2\left(C_{21}\right)_{kl},\quad
r+1\leq k\leq 2r
\end{eqnarray*}
These are matrices of the form
\begin{eqnarray}\label{eq:cij}
C_{11}&=&\pmatrix{0&1&1&\ldots&\ldots&1&1&0\cr
0&0&1&\ldots&\ldots&1&0&0\cr
\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\cr
0&0&\ldots&1&1&\ldots&0&0\cr
0&0&\ldots&0&0&\ldots&0&0\cr
0&\ldots&0&1&1&0&\ldots&0\cr
\vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\cr
1&1&1&\ldots&\ldots&1&1&1}\nonumber\\
C_{12}&=&I_{2r}-J_{2r}\nonumber\\
J_r&=&\pmatrix{0&\ldots&\ldots&1\cr
0&\ldots&1&0\cr
\vdots&\vdots&\vdots&\vdots\cr
1&0&\ldots&0}\\
C_{21}&=&\pmatrix{-1&-1&-1&\ldots&\ldots&-1&-1&-1&-1&0\cr
1&-1&-1&\ldots&\ldots&\ldots&-1&-1&0&0\cr
-1&1&-1&-1&\ldots&\ldots&-1&0&0&0\cr
\vdots&\ddots&\ddots&\ddots&\vdots&\vdots&\vdots&\vdots&\vdots\cr
\ldots&\ldots&-1&1&-1&0&\ldots&\ldots&\ldots&0\cr
\ldots&1&-1&1&0&\ldots&\ldots&\ldots&0&0\cr
\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\cr
-1&1&0&\ldots&\ldots&\ldots&\ldots&\ldots&0&0\cr
1&0&0&\ldots&\ldots&\ldots&\ldots&0&0&0\cr
0&0&0&\ldots&\ldots&\ldots&\ldots&0&0&0}\nonumber\\
C_{22}&=&\pmatrix{1&0&0&\ldots&0&0&0\cr
-2&1&0&\ldots&0&0&0\cr
\vdots&\ddots&\ddots&\ddots&\vdots&\vdots&\vdots\cr
\ldots&2&-2&1&\ldots&0&0\cr
\ldots&2&-2&0&1&\ldots&0\cr
\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\cr
-2&0&0&\ldots&0&1&0\cr
0&0&0&\ldots&0&0&1}\nonumber
\end{eqnarray}
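For later use we also record an elementary property of the exchange
matrix $J_{2r}$: for instance, for $r=2$,
\begin{eqnarray*}
I_{4}-J_{4}=\pmatrix{1&0&0&-1\cr 0&1&-1&0\cr 0&-1&1&0\cr -1&0&0&1},
\end{eqnarray*}
and in general $(I_{2r}-J_{2r})v=0$ for every palindromic vector $v$
(that is, $v_l=v_{2r+1-l}$), while for a diagonal matrix $D$ the
$l^{\rm th}$ column of $(I_{2r}-J_{2r})D$ is proportional to
$e_l-e_{2r+1-l}$. These facts are used repeatedly in the leading order
analysis below.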
As in section \ref{se:real}, some holomorphic 1-forms
$\mathrm{d}\tilde{\omega}_j$ will become meromorphic as the roots approach the
unit circle.
In this case, the holomorphic 1-form $\mathrm{d}\tilde{\omega}_{j}$,
$n-r\leq j\leq n+r-1$, becomes a meromorphic 1-form with a simple
pole at $\lambda_{2(j+1)}$. All the other holomorphic 1-forms become
normalized holomorphic 1-forms on the resulting surface.
In particular, we have the following:
\begin{lemma}
\label{le:periodentries2} The entries of the period matrix
$\tilde{\Pi}$ behave like
\begin{eqnarray}\label{eq:periodentries2}
\lim_{\lambda_{2(j+1)} \rightarrow \lambda_{2(2n - j) -1}}
\tilde{\Pi}_{ij}&=&\tilde{\Pi}_{ij}^{0}, \quad i\neq j\nonumber\\
\lim_{\lambda_{2(j+1)} \rightarrow \lambda_{2(2n - j) -1}} \tilde{\Pi}_{jj}
&= & \tilde{\Pi}_{jj}^0,\quad j>n+r-1, \quad j<n-r\nonumber\\
\lim_{\lambda_{2(j+1)} \rightarrow \lambda_{2(2n - j) -1}} \tilde{\Pi}_{jj}
&= &\gamma_{j}+\tilde{\Pi}_{jj}^0,
\quad n-r\leq j\leq n+r-1 \nonumber\\
\gamma_j&=&{1\over{\pi
i}}\log\left|\lambda_{2(j+1)}-\lambda_{2(2n-j)-1}\right|,
\end{eqnarray}
where $\tilde{\Pi}_{ij}^0$ are finite.
\end{lemma}
Let us now consider the behavior of the terms $\tilde{\xi}$ in
(\ref{eq:modular}).
\begin{lemma}\label{le:tildexi2}
Let $\xi$ be given by (\ref{eq:xi1}) and $\tilde{\xi}$ be
\begin{eqnarray*}
\tilde{\xi}=\left((-Z_{12}^T\tilde{\Pi}+Z_{22}^T)^T\right)\xi,
\end{eqnarray*}
where $Z_{ij}$ are given by (\ref{eq:cij}). Then in the limit
$\lambda_{2(j+1)}\rightarrow\lambda_{2(2n-j)-1}$ we have
\begin{eqnarray}\label{eq:tildexi2}
\tilde{\xi}_i&=&\eta_i^{\pm}, \quad i>n+r-1, \quad i<n-r\nonumber\\
\tilde{\xi}_i&=&\epsilon_i\beta(\lambda)\gamma_i+\eta_i^{\pm}, \quad
n-r\leq i\leq n+r-1,\\
\epsilon_i& = &1, \quad i<n\nonumber\\
\epsilon_i& = &-1, \quad i\geq n, \nonumber
\end{eqnarray}
where $\eta_i^{\pm}$ remains finite as $\lambda_{2(j + 1)} \to
\lambda_{2(2n-j) - 1}$.
\end{lemma}
\begin{proof}Let
\begin{eqnarray}\label{eq:z11}
Z_{12}^T\tilde{\Pi}-Z_{22}^T&=&\pmatrix{0&0&0\cr
0&(I_{2r}-J_{2r})D_r&0\cr
0&0&0}\ +W\nonumber\\
D_r&=&{\rm diag} (\gamma_{n-r},\gamma_{n-r+1},\ldots, \gamma_{n-r+1},
\gamma_{n-r}),
\end{eqnarray}
where $W$ is a matrix that remains finite as $\lambda_{2(j+1)}
\rightarrow \lambda_{2(2n - j) -1}$ . Then from (\ref{eq:transxi})
and (\ref{eq:modform}), we see that $\tilde{\xi}$ is given by
\begin{eqnarray}
\label{eq:tildez}
\tilde{\xi}_i&=&
\beta(\lambda)\sum_{j=1}^{n-1}W_{n+j,i}\pm{{\tilde{\tau}_i}\over
2}, \quad i>n+r-1, \quad i<n-r\nonumber\\
\tilde{\xi}_i&=&\epsilon_i\beta(\lambda)\gamma_i+\beta(\lambda)\sum_{j=1}^{n-1}W_{n+j,i}\pm{{\tilde{\tau}_i}\over
2},\quad
n-r\leq i\leq n+r-1,
\end{eqnarray}
where
\begin{eqnarray}
\epsilon_i& = &1, \quad i<n\nonumber\\
\epsilon_i& = &-1, \quad i\geq n \nonumber \\
{{\tilde{\tau}_i}\over 2} & = &
\sum_{j=1}^{2n}\tilde{\omega}_i(z_j^{-1})
-\sum_{j=1}^{2n}\tilde{\omega}_i(\lambda_{2j-1}).\nonumber
\end{eqnarray}
Let $\lambda_{2(j+1)}$, $\lambda_{2(2n-j)-1}$ and $\lambda_{2j+1}$,
$\lambda_{2(2n-j)}$, $n-r\leq j\leq n-1$ be the pairs of points that
approach each other. From their ordering we have
$\lambda_{2(j+1)}=\lambda_{2(2n-j)}^{-1}$ and
$\lambda_{2j+1}=\lambda_{2(2n-j)-1}^{-1}$.
For each fixed $j$, the point $\lambda_{2j+1}$ is a pole of
$\mathrm{d}\tilde{\omega}_{2n-j-1}$, while $\lambda_{2(2n-j)-1}$ is a pole
of $\mathrm{d}\tilde{\omega}_{j}$. Therefore, the Riemann constant behaves
like
\begin{eqnarray*}
\sum_{j=1}^{2n}\tilde{\omega}_{i}(\lambda_{2j-1})={1\over
2}\gamma_{i}+ O(1), \quad \lambda_{2(j + 1)} \to \lambda_{2(2n-j) - 1}
\quad n-r\leq i\leq n+r-1.
\end{eqnarray*}
Moreover, among these four points there are exactly two points of the
form $z_k^{-1}$ for some $k$. However, since the $z_k$ are the roots of
a polynomial with real coefficients, if $\lambda_j=z_k^{-1}$ for
some $k$, then its complex conjugate $\overline{\lambda}_j$ is also
of the form $z_{k^{\prime}}^{-1}$ for some $k^{\prime}$. This means
that one of the following holds:
\begin{enumerate}
\item Both $\lambda_{2(j+1)}$ and $\lambda_{2j+1}$ are of the form
$z_k^{-1}$,
\item Both $\lambda_{2(2n-j)}$ and $\lambda_{2(2n-j)-1}$ are of the
form $z_k^{-1}$,
\end{enumerate}
Either way, we have
\begin{eqnarray*}
\sum_{j=1}^{2n}\tilde{\omega}_i(z_j^{-1})={1\over
2}\gamma_{i}+ O(1),\quad n-r\leq i\leq n+r-1.
\end{eqnarray*}
Therefore, we can rewrite (\ref{eq:tildez}) as
\begin{eqnarray*}
\tilde{\xi}_i&=&\eta_i^{\pm}, \quad i>n+r-1, \quad i<n-r\nonumber\\
\tilde{\xi}_i&=&\epsilon_i\beta(\lambda)\gamma_i+\eta_i^{\pm}, \quad
n-r\leq i\leq n+r-1,
\end{eqnarray*}
where $\eta_i^{\pm}$ remains finite as $\lambda_{2(j + 1)} \to
\lambda_{2(2n-j) - 1}$.\end{proof}
We now compute the behavior of the theta function $\theta(\xi,\Pi)$
in this limit.
\begin{lemma}\label{le:theta2}
In the limit $\lambda_{2(j+1)}\rightarrow\lambda_{2(2n-j)-1}$,
$n-r\leq j\leq n+r-1$, the theta function $\theta(\xi,\Pi)$ behaves
like
\begin{eqnarray}\label{eq:theta2}
\theta(\xi,\Pi)= \exp\left(2\pi
i\beta^2(\lambda)\sum_{j=n-r}^{n-1}\gamma_{j}+O(1)\right),
\end{eqnarray}
where $\xi$ is given by (\ref{eq:xi1}) and $\gamma_j$ by
(\ref{eq:periodentries2}).
\end{lemma}
\begin{proof}
From (\ref{eq:modular}) we see that
\begin{eqnarray}\label{eq:mod}
\theta(\xi,\Pi)=\varsigma\exp\left(\pi
i\tilde{\xi}^T\left(Z_{12}^T\tilde{\Pi}-Z_{22}^T\right)^{-1}Z_{12}^T\tilde{\xi}\right)\theta\left[{\varepsilon\atop
\varepsilon^{\prime}}\right](\tilde{\xi},\tilde{\Pi}),
\end{eqnarray}
where the characteristics on the right hand side are obtained by
solving the linear equations
\begin{eqnarray*}
{\rm diag}\left(Z_{12}^{T}Z_{22}\right)&=&Z_{22}^T\varepsilon+Z_{12}^T\varepsilon^{\prime}\\
{\rm diag}\left(Z_{11}^{T}Z_{21}\right)&=&Z_{21}^T\varepsilon+Z_{11}^T\varepsilon^{\prime}.
\end{eqnarray*}
The solution of this system is
\begin{eqnarray}
\label{eq:char}
\varepsilon_j&=&0\quad {\rm mod}\: 2, \quad j=1,\ldots,2n-1\nonumber\\
\varepsilon_{j}^{\prime}&=&\left\{
\begin{array}{ll}
1\quad {\rm mod} \: 2, & \hbox{$\quad n-r\leq j\leq n-1$;} \\
0\quad {\rm mod} \: 2, & \hbox{otherwise.}
\end{array}
\right.
\end{eqnarray}
Note that, from (\ref{eq:thetachar}) and the periodicity properties of the theta function (proposition \ref{pro:per}), characteristics that differ by an even integer vector give the same theta function. That is
\[
\theta\left[{\varepsilon\atop
\varepsilon^{\prime}}\right](\xi,\Pi)=\theta\left[{\varepsilon+2\overrightarrow{N}\atop
\varepsilon^{\prime}+2\overrightarrow{M}}\right](\xi,\Pi), \quad
\overrightarrow{N},\overrightarrow{M}\in\mathbb{Z}^{2n-1}
\]
We will now compute the exponential term of (\ref{eq:mod}). By
performing row and column operations on
$Z_{12}^T\tilde{\Pi}-Z_{22}^T$, we can transform its determinant
into the form
\begin{eqnarray*}
\det\left(Z_{12}^T\tilde{\Pi}-Z_{22}^T\right)&
=&\det\left(\pmatrix{0_{n-r-1}&0&0\cr
0&\mathcal{S}D_r&0\cr
0&0&0_{n-r}}\ +W^{\prime}\right)\\
\mathcal{S}_{ij}&=&\left\{
\begin{array}{ll}
0, & \hbox{$1\leq i\leq r$,} \\
\delta_{ij}, & \hbox{$r+1\leq i\leq 2r$,}
\end{array}
\right.
\end{eqnarray*}
for some matrix $W^{\prime}$ that remains finite as $\lambda_{2(j +
1)} \to \lambda_{2(2n-j) - 1}$.
This means that the leading order term of the determinant is of the
order of $\prod_{k=n-r}^{n-1}\gamma_{k}$. That is
\begin{eqnarray*}
\det\left(Z_{12}^T\tilde{\Pi}-Z_{22}^T\right)
&=&\mathcal{D}_{r}+O(\gamma_i^{r-1}), \quad \lambda_{2(j + 1)} \to
\lambda_{2(2n-j) - 1},
\nonumber\\
\mathcal{D}_{r}&=&\mathcal{W}^{\prime}\prod_{k=n-r}^{n-1}\gamma_{k},
\end{eqnarray*}
where the notation $O(\gamma_i^{r-1})$ means
\begin{equation}
\label{eq:nbigo} O(\gamma_i^{r-1}) = O\left(\prod_i
\gamma_i^{\alpha_i}\right), \quad \sum_i\alpha_i \le r-1,
\end{equation}
Furthermore, $\mathcal{W}^{\prime}$ is the determinant of the
$(2n-r-1)\times (2n-r-1)$ matrix formed by removing the $(n-r)^{th}$
up to the $(n-1)^{th}$ rows and columns in
$W^{\prime}$.
Similarly, we see that the minors of $Z_{12}^T\tilde{\Pi}-Z_{22}^T$
cannot contain more than $r$ factors of $\gamma$. In particular, this
means that the inverse matrix
$\left(Z_{12}^T\tilde{\Pi}-Z_{22}^T\right)^{-1}$ remains finite
as $\lambda_{2(j + 1)} \to \lambda_{2(2n-j) - 1}$ and behaves like
\begin{eqnarray}\label{eq:z11inverse}
\left(Z_{12}^T\tilde{\Pi}-Z_{22}^T\right)^{-1}&=&X^0+X^{-1}+O(\gamma_i^{-2}),
\quad \lambda_{2(j + 1)} \to \lambda_{2(2n-j) - 1},
\end{eqnarray}
where $X^{-1}$ is a term of order $-1$ in $\gamma_i$ and $X^0$ is a
finite matrix.
From (\ref{eq:z11inverse}) and (\ref{eq:z11}), we see that the
leading order term of
\begin{eqnarray}\label{eq:Id}
\left(Z_{12}^T\tilde{\Pi}-Z_{22}^T\right)^{-1}\left(Z_{12}^T\tilde{\Pi}-Z_{22}^T\right)=I_{2n-1}
\end{eqnarray}
gives the following
\begin{eqnarray*}
X^0\pmatrix{0&0&0\cr
0&(I_{2r}-J_{2r})D_r&0\cr
0&0&0}=0,
\end{eqnarray*}
while the leading order term of
\begin{eqnarray*}
\left(Z_{12}^T\tilde{\Pi}-Z_{22}^T\right)\left(Z_{12}^T\tilde{\Pi}-Z_{22}^T\right)^{-1}=I_{2n-1}
\end{eqnarray*}
gives
\begin{eqnarray*}
\pmatrix{0&0&0\cr
0&(I_{2r}-J_{2r})D_r&0\cr
0&0&0}X^0=0.
\end{eqnarray*}
Each column (and, since $D_r$ is palindromic, each row) of $(I_{2r}-J_{2r})D_r$ is a multiple of a difference $e_l-e_{2r+1-l}$ of standard basis vectors; this implies that
\begin{eqnarray}\label{eq:x0rel}
X^0_{i,j}=X^0_{i,2n-j-1},\quad 1\leq i\leq 2n-1,\quad n-r\leq j\leq
n+r-1\nonumber\\
X^0_{i,j}=X^0_{2n-i-1,j}, \quad n-r\leq i\leq n+r-1,\quad 1\leq
j\leq 2n-1.
\end{eqnarray}
The leading order term of the bilinear product in (\ref{eq:mod})
then becomes
\begin{eqnarray}\label{eq:expfactor}
\tilde{\xi}^T\left(Z_{12}^T\tilde{\Pi}-Z_{22}^T\right)^{-1}Z_{12}^T\tilde{\xi}&=&\beta^2(\lambda)\epsilon^TD_nX^{-1}\pmatrix{0&0&0\cr
0&(I_{2r}-J_{2r})&0\cr
0&0&0}\ D_n\epsilon \nonumber \\
&& + O(1), \quad \lambda_{2(j + 1)} \to
\lambda_{2(2n-j) - 1},\nonumber\\
\epsilon_i&=&0, \quad i<n-r, \quad i>n+r-1, \\
\epsilon_i&=&1, \quad n-r\leq i<n,\nonumber\\
\epsilon_i&=&-1,\quad n\leq i\leq n+r-1\nonumber\\
D_n&=&\pmatrix{0_{n-r-1}&0&0\cr
0&D_r&0\cr
0&0&0_{n-r}}.\nonumber
\end{eqnarray}
Let us define $\mathcal{P}$ as
\begin{eqnarray*}
\mathcal{P}=X^{-1}\pmatrix{0&0&0\cr
0&(I_{2r}-J_{2r})&0\cr
0&0&0} D_n,
\end{eqnarray*}
Then the constant term of (\ref{eq:Id}) gives the following
\begin{eqnarray*}
X^{-1}\pmatrix{0&0&0\cr
0&(I_{2r}-J_{2r})&0\cr
0&0&0}\ D_n +X^0W=I_{2n-1}.
\end{eqnarray*}
By applying (\ref{eq:x0rel}) to the above, we see that the entries
of $\mathcal{P}$ are related by
\begin{eqnarray*}
\mathcal{P}_{l,j}=\mathcal{P}_{2n-l-1,j}+\delta_{l,j}+\delta_{2n-l-1,j},
\quad n-r\leq l\leq n-1,\quad n-r\leq j\leq n+r-1.
\end{eqnarray*}
By substituting this back into (\ref{eq:expfactor}), we see that the
exponential factor in (\ref{eq:mod}) behaves like
\begin{eqnarray}\label{eq:expon2}
\tilde{\xi}^T\left(Z_{12}^T\tilde{\Pi}-Z_{22}^T\right)^{-1}Z_{12}^T\tilde{\xi}=2\beta^2(\lambda)\sum_{j=n-r}^{n-1}\gamma_{j}+O(1).
\end{eqnarray}
We will now show that the limit of the theta function with
characteristics remains finite. By using the
definition~(\ref{eq:thetadef}), we have
\begin{eqnarray*}
\theta\left[{\varepsilon\atop
\varepsilon^{\prime}}\right](\tilde{\xi},\tilde{\Pi})&=&\sum_{m_j \in
\mathbb{Z}} \exp\Bigg[\pi
i\sum_{j=n-r}^{n-1}\gamma_{j}\Bigg(\left(m_{j}+{{\varepsilon_{j}}\over{2}}\right)\Bigg(2\beta(\lambda)+m_{j}
\\& & + {{\varepsilon_{j}}\over{2}}\Bigg)+\left(m_{2n-j-1}
+{{\varepsilon_{2n-j-1}}\over{2}}\right)\\
& & \times
\Bigg(-2\beta(\lambda)+m_{2n-j-1}+{{\varepsilon_{2n-j-1}}\over{2}}\Bigg)+
O(1)\Bigg], \quad \lambda_{2(j + 1)} \to \lambda_{2(2n-j) - 1}.
\end{eqnarray*}
As before, since $\beta(\lambda)$ is purely imaginary, only terms such
that
\begin{eqnarray*}
\left(m_{j}+{{\varepsilon_{j}}\over{2}}\right)^2+\left(m_{2n-j-1}+{{\varepsilon_{2n-j-1}}\over{2}}\right)^2=0,\quad
n-r\leq j\leq n-1,
\end{eqnarray*}
contribute. Recall that from (\ref{eq:char}) we have
$\varepsilon_j=\varepsilon_{2n-j-1}=0$, therefore
\begin{eqnarray*}
m_{j}=m_{2n-j-1}=0,\quad n-r\leq j\leq n-1.
\end{eqnarray*}
Thus, as before, the theta function with characteristics
reduces to a $2n-2r-1$ dimensional theta function
\begin{eqnarray}\label{eq:limthet2}
\lim_{\lambda_{2(j + 1)} \to \lambda_{2(2n-j) - 1}}
\theta\left[{\varepsilon\atop
\varepsilon^{\prime}}\right](\tilde{\xi},\tilde{\Pi})
=\theta(\tilde{\xi}^0,
\tilde{\Pi}^0),
\end{eqnarray}
where the arguments on the right hand side are obtained by
removing the $(n-r)^{th}$ up to the $(n+r-1)^{th}$ entries, and
$\theta(\tilde{\xi}^0,
\tilde{\Pi}^0)$ is finite in the limit.
By combining (\ref{eq:expon2}) and (\ref{eq:limthet2}), we see that
the theta function $\theta(\xi,\Pi)$ behaves like
\begin{eqnarray*}
\theta(\xi,\Pi)=\varsigma\exp\left(2\pi
i\beta^2(\lambda)\sum_{j=n-r}^{n-1}\gamma_{j}+O(1)\right)\theta(\tilde{\xi}^0,
\tilde{\Pi}^0)
\end{eqnarray*}
This concludes the proof of the lemma. \end{proof}
Finally, from lemma \ref{le:theta2} we see that the entropy
(\ref{eq:entro}) behaves like
\begin{eqnarray*}
S(\rho_A)=-\frac13
\sum_{j=n-r}^{n-1}\log\left|\lambda_{2(j+1)}-\lambda_{2(2n-j)-1}\right|
+ O(1), \quad \lambda_{2(j + 1)} \to
\lambda_{2(2n-j) - 1}.
\end{eqnarray*}
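The coefficient $-{1\over3}$ arises exactly as in the previous
subsection: by (\ref{eq:entro}), each of the two theta functions in the
numerator contributes the exponent $2\pi i\beta^2(\lambda)\gamma_j$ per
degenerating pair, the contribution of
$\theta^2\left({\tau\over2}\right)$ is absorbed in the $O(1)$ term, and
\begin{eqnarray*}
{1\over 2}\cdot 2\cdot 2\pi i\gamma_{j}\int_1^{\infty}\beta^2(\lambda)\mathrm{d}\lambda
=2\pi i\gamma_{j}\cdot\left(-{1\over 6}\right)
=-{1\over 3}\log\left|\lambda_{2(j+1)}-\lambda_{2(2n-j)-1}\right|.
\end{eqnarray*}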
\subsubsection{Case 2: $r=n$}
We will now consider the case when $r=n$. That is, all roots are
complex and they all approach each other pairwise. The canonical
basis will be chosen as in (\ref{eq:newbasis}) but with $r$ replaced by $n-1$
(not $n$), while the last elements of the basis are given by
\[
\tilde{a}_{2n-1}=b_{2n-1},\quad \tilde{b}_{2n-1}=-a_{2n-1}.
\]
In other words, we have
\begin{eqnarray}
\label{eq:basiscase2}
\tilde{a}_{n-k}& = &
b_{n-k}-b_{n+k-1}+\sum_{j=n-k+1}^{n+k-2}a_{j},\quad
k=1,\ldots, n-1\nonumber\\
\tilde{a}_{n+k}&=&b_{n+k}-b_{n-k-1}+\sum_{j=n-k-1}^{n+k}a_{j},
\quad k=0,\ldots, n-2\\
\tilde{b}_{n-k}&=&b_{n-k}-\sum_{j=n-k}^{n+k-2}a_{j}-
\sum_{j=1}^{n-k-1}(-1)^{n-k-j}\left(a_{j}-2b_{j}\right),\quad
k=1,\ldots, n-1\nonumber\\
\tilde{b}_{n+k}&=&b_{n+k}+\sum_{j=1}^{n-k-2}(-1)^{n-k-j}
\left(a_{j}-2b_{j}\right),\quad
k=0,\ldots, n-2\nonumber\\
\tilde{a}_{2n-1}&=&b_{2n-1},\quad \tilde{b}_{2n-1}=-a_{2n-1}.
\end{eqnarray}
As before, we can partition the basis as follows:
\begin{eqnarray}\label{eq:part}
\pmatrix{\tilde{A}}&=&\pmatrix{\tilde{a}^I\cr
\tilde{a}^{II}}\nonumber\\
\tilde{a}^I_j&=&\tilde{a}_j,\quad 1\leq j\leq 2n-2\\
\tilde{a}^{II}_1&=&\tilde{a}_{2n-1}.\nonumber
\end{eqnarray}
Furthermore, the $b$-cycles and the untilded basis are connected by
analogous relations.
In the notation of theorem \ref{thm:modular} we have
\begin{eqnarray*}
\pmatrix{\tilde{A}\cr \tilde{B}}\ &=&Z\pmatrix{A\cr
B}=\pmatrix{Z_{11}&Z_{12}\cr Z_{21}&Z_{22}}\ \pmatrix{A\cr B},
\end{eqnarray*}
where the transformation matrix $Z$ can be written in block form
according to the partition (\ref{eq:part}):
\begin{eqnarray}\label{eq:Z3}
Z_{ij}&=&\pmatrix{
C_{ij}&0\cr
0&\mathcal{E}_{ij}},
\end{eqnarray}
where $C_{ij}$ are $2(n-1)\times2(n-1)$ matrices defined as in
(\ref{eq:cij}), and $\mathcal{E}$ is given by
\begin{eqnarray*}
\mathcal{E}_{ij}&=&0,\quad i=j\\
\mathcal{E}_{12}&=&1,\quad\mathcal{E}_{21}=-1.
\end{eqnarray*}
By deformation of the contours, we see that the cycles $\tilde{a}_j$
become closed loops around $\lambda_{2(j+1)}$ in the
critical limit.
Let $\tilde{a}_0$ be the closed curve that becomes a loop around
$\lambda_2$ as $\lambda_{2} \to \lambda_{4n-1}$ (see figure
\ref{fig:tildea0}). We have
\begin{eqnarray*}
\tilde{a}_0&=&-b_{2n-1}+\sum_{j=1}^{2n-2}a_{j}\\
\tilde{a}_0&=&-\tilde{a}_{2n-1}+\sum_{j=1}^{n-1}(-1)^{j+1}\tilde{a}_j+\sum_{j=n}^{2n-2}(-1)^{j}\tilde{a}_j
\end{eqnarray*}
\begin{figure}[htbp]
\begin{center}
\resizebox{4cm}{!}{\input{tildea0.pstex_t}}\caption{The curve going around $\lambda_2$.}\label{fig:tildea0}
\end{center}
\end{figure}
In particular, this means that in the limit, the 1-form
$\mathrm{d}\tilde{\omega}_j$ will have a simple pole at $\lambda_{2(j+1)}$
with residue ${1\over {2\pi i}}$ and a simple pole at $\lambda_{2}$
with residue $(-1)^{j+1}{1\over {2\pi i}}$ for $1\leq j\leq n-1$,
$(-1)^j{1\over {2\pi i}}$ for $n\leq j\leq 2n-2$ and $-{1\over{2\pi
i}}$ for $j=2n-1$. Thus, we arrive at the following
\begin{lemma}
\label{le:periodentries3}
The entries of the period matrix behave like
\begin{eqnarray*}
\lim_{\lambda_{2(j + 1)} \to
\lambda_{2(2n-j) - 1}}\tilde{\Pi}_{ij}&= &\tilde{\Pi}_{ij}^0, \quad i\neq j,
\quad
i,j\neq 2n-1\\
\lim_{\lambda_{2(j + 1)} \to
\lambda_{2(2n-j) - 1}} \tilde{\Pi}_{jj}&=&\gamma_j+\tilde{\Pi}_{jj}^0, \quad
1\leq j\leq 2n-2\\
\lim_{\lambda_{2(j + 1)} \to
\lambda_{2(2n-j) - 1}} \tilde{\Pi}_{2n-1,2n-1}&=&2\gamma_{2n-1}+\tilde{\Pi}_{2n-1,2n-1}^0\\
\lim_{\lambda_{2(j + 1)} \to
\lambda_{2(2n-j) - 1}} \tilde{\Pi}_{j,2n-1}&= &(-1)^{j}\gamma_{2n-1}+\tilde{\Pi}_{j,2n-1}^0,
\quad
1\leq j\leq n-1\\
\lim_{\lambda_{2(j + 1)} \to
\lambda_{2(2n-j) - 1}} \tilde{\Pi}_{j,2n-1}&=&(-1)^{j+1}\gamma_{2n-1}+\tilde{\Pi}_{j,2n-1}^0,
\quad
n\leq j\leq 2n-2\\
\tilde{\Pi}_{2n-1,j}&=&\tilde{\Pi}_{j,2n-1}\\
\gamma_j&=&{1\over{\pi
i}}\log\left|\lambda_{2(j+1)}-\lambda_{2(2n-j)-1}\right|,
\end{eqnarray*}
where $\tilde{\Pi}_{ij}^0$ are finite in the limit
$\lambda_{2(j+1)}\rightarrow\lambda_{2(2n-j)-1}$.
\end{lemma}
In this case, the argument $\tilde{\xi}$ in (\ref{eq:modular})
behaves as follows.
\begin{lemma}\label{le:tildexi3}
Let $\xi$ be given by (\ref{eq:xi1}) and $\tilde{\xi}$ be
\begin{eqnarray*}
\tilde{\xi}=\left((-Z_{12}^T\tilde{\Pi}+Z_{22}^T)^T\right)\xi,
\end{eqnarray*}
where $Z_{ij}$ are given by (\ref{eq:Z3}). Then in the limit
$\lambda_{2(j+1)}\rightarrow\lambda_{2(2n-j)-1}$ we have
\begin{eqnarray}\label{eq:tildexi3}
\tilde{\xi}_i&=&\sigma_i\beta(\lambda)\gamma_i+\eta_i^{\pm}, \quad
1\leq i\leq 2n-1,\\
\sigma_i&=&(1+(-1)^{i+1}), \quad 1\leq i\leq n-1\nonumber\\
\sigma_i&=&-(1+(-1)^{i+1}), \quad n\leq i\leq 2n-1, \nonumber
\end{eqnarray}
where $\eta_i^{\pm}$ remains finite as $\lambda_{2(j + 1)} \to
\lambda_{2(2n-j) - 1}$.
\end{lemma}
\begin{proof}
In this case the matrix $Z_{12}^T\tilde{\Pi}-Z_{22}^T$ takes the
form
\begin{eqnarray}\label{eq:transmat}
Z_{12}^T\tilde{\Pi}-Z_{22}^T&=&\pmatrix{
(I_{2n-2}-J_{2n-2})D_{n-1}&0\cr
\overrightarrow{D}_{n-1}&2\gamma_{2n-1}}\ +W\nonumber\\
D_{n-1}&=&{\rm diag} (\gamma_{1},\gamma_{2},\ldots, \gamma_{2}, \gamma_{1})\\
\overrightarrow{D}_{n-1}&=&(-\gamma_{1},\gamma_{2},\ldots,
\gamma_{2}, -\gamma_{1}),\nonumber
\end{eqnarray}
where $W$ is a finite matrix as $\lambda_{2(j + 1)} \to
\lambda_{2(2n-j) - 1}$.
Therefore, $\tilde{\xi}$ behaves like
\begin{eqnarray*}
\tilde{\xi}_i&=&\sigma_i\beta(\lambda)\gamma_i+\beta(\lambda)\sum_{j=1}^{n-1}W_{n+j,i}\pm{{\tilde{\tau}_i}\over
2},\quad
1\leq i\leq 2n-1\nonumber\\
\sigma_i&=&(1+(-1)^{i+1}), \quad 1\leq i\leq n-1\nonumber\\
\sigma_i&=&-(1+(-1)^{i+1}), \quad n\leq i\leq 2n-1 \\
{{\tilde{\tau}_i}\over
2}&=&\sum_{j=1}^{2n}\tilde{\omega}_i(z_j^{-1})-\sum_{j=1}^{2n}\tilde{\omega}_i(\lambda_{2j-1}).
\nonumber
\end{eqnarray*}
As in section \ref{se:case1}, the divergent contributions to
$\frac{\tilde{\tau}_i}{2}$ cancel. We can therefore rewrite
$\tilde{\xi}$ as
\begin{eqnarray*}
\tilde{\xi}_i&=&\sigma_i\beta(\lambda)\gamma_i+\eta_i^{\pm}, \quad
1\leq i\leq 2n-1,
\end{eqnarray*}
where $\eta_i^{\pm}$ are finite in the limit.\end{proof}
The behavior of the theta function for this case is given by
\begin{lemma}\label{le:theta3}
In the limit $\lambda_{2(j+1)}\rightarrow\lambda_{2(2n-j)-1}$,
$1\leq j\leq 2n-1$, the theta function $\theta(\xi,\Pi)$ behaves
like
\begin{eqnarray}\label{eq:theta3}
\theta(\xi,\Pi)= \exp\left(2\pi i
\beta^2(\lambda)\sum_{j=1}^{n-1}\gamma_{j}+O(1)\right),
\end{eqnarray}
where $\xi$ is given by (\ref{eq:xi1}) and $\gamma_j$ by lemma
\ref{le:periodentries3}.
\end{lemma}
\begin{proof}
As in section \ref{se:case1}, from (\ref{eq:modular}) we have,
\begin{eqnarray}\label{eq:hat3}
\theta(\xi,\Pi)=\varsigma\exp\left(\pi
i\tilde{\xi}^T\left(Z_{12}^T\tilde{\Pi}-Z_{22}^T\right)^{-1}Z_{12}^T\tilde{\xi}\right)\theta\left[{\varepsilon\atop
\varepsilon^{\prime}}\right](\tilde{\xi},\tilde{\Pi}),
\end{eqnarray}
where the characteristics on the right hand side are given by the
same formula as before, with $r$ replaced by $n-1$:
\begin{eqnarray*}
\varepsilon_j&=&0\quad {\rm mod} \:2, \quad j=1,\ldots,2n-1\nonumber\\
\varepsilon_{j}^{\prime}&=&\left\{
\begin{array}{ll}
1\quad {\rm mod}\: 2, & \hbox{$\quad 1\leq j\leq n-1$;} \\
0\quad {\rm mod} \; 2, & \hbox{otherwise.}
\end{array}
\right.
\end{eqnarray*}
Since there is no non-zero matrix $X^0$ that is independent of
$\gamma_j$ such that the leading order term of
\begin{eqnarray*}
\left(Z_{12}^T\tilde{\Pi}-Z_{22}^T\right)X_0
\end{eqnarray*}
is zero, we can write the inverse matrix
$\left(Z_{12}^T\tilde{\Pi}-Z_{22}^T\right)^{-1}$ as
\begin{eqnarray*}
\left(Z_{12}^T\tilde{\Pi}-Z_{22}^T\right)^{-1}=X^{-1}+ O(\gamma_i^{-2}),
\quad \lambda_{2(j + 1)} \to
\lambda_{2(2n-j) - 1}.
\end{eqnarray*}
where $X^{-1}$ is a term that is of order $-1$ in the $\gamma_j$.
Then, the leading order term of the bilinear product in
(\ref{eq:hat3}) is
\begin{eqnarray}\label{eq:exfactor2}
\tilde{\xi}^T\left(Z_{12}^T\tilde{\Pi}-Z_{22}^T\right)^{-1}Z_{12}^T\tilde{\xi}&=&\beta^2(\lambda)\sigma^TD_nX^{-1}\pmatrix{
(I_{2n-2}-J_{2n-2})&0\cr
0&1}\ D_n\sigma \nonumber \\
& & + O(1), \quad
\lambda_{2(j + 1)} \to
\lambda_{2(2n-j) - 1},\nonumber\\
\sigma_i&=&(1+(-1)^{i+1}), \quad 1\leq i\leq n-1\nonumber\\
\sigma_i&=&-(1+(-1)^{i+1}), \quad n\leq i\leq 2n-1 \\
D_n&=&{\rm diag}(\gamma_1,\gamma_2,\ldots,\gamma_2,\gamma_1,2\gamma_{2n-1}).\nonumber
\end{eqnarray}
Let $\tilde{{\Pi}}^1$ be the leading order term of $\tilde{\Pi}$:
\begin{eqnarray*}
\tilde{{\Pi}}^1=\pmatrix{D_{n-1}&\overrightarrow{D}_{n-1}^T\cr
\overrightarrow{D}_{n-1}&2\gamma_{2n-1}}.
\end{eqnarray*}
Equation (\ref{eq:exfactor2}) can now be rewritten as
\begin{eqnarray}\label{eq:product}
\tilde{\xi}^T\left(Z_{12}^T\tilde{\Pi}-Z_{22}^T\right)^{-1}Z_{12}^T\tilde{\xi}&=&
\beta^2(\lambda)\epsilon^T\tilde{{\Pi}}^1X^{-1}\pmatrix{
(I_{2n-2}-J_{2n-2})&0\cr
0&1}\tilde{{\Pi}}^1\epsilon \nonumber \\
&& +O(1), \quad
\lambda_{2(j + 1)} \to
\lambda_{2(2n-j) - 1}\nonumber\\
\epsilon_i&=&1, \quad 1\leq i\leq n-1\\
\epsilon_i&=&-1, \quad n\leq i\leq 2n-1 \nonumber
\end{eqnarray}
The constant term of
\begin{eqnarray*}
\left(Z_{12}^T\tilde{\Pi}-Z_{22}^T\right)^{-1}\left(Z_{12}^T\tilde{\Pi}-Z_{22}^T\right)=I_{2n-1}
\end{eqnarray*}
now gives
\begin{eqnarray*}
X^{-1}\pmatrix{
I_{2n-2}-J_{2n-2}&0\cr
0&1}\ \tilde{{\Pi}}^1=I_{2n-1}.
\end{eqnarray*}
By substituting this back into (\ref{eq:product}), we obtain
\begin{eqnarray*}
\pi
i\tilde{\xi}^T\left(Z_{12}^T\tilde{\Pi}-Z_{22}^T\right)^{-1}Z_{12}^T\tilde{\xi}=\sum_{j=1}^{2n-1}\log\left|\lambda_{2(j+1)}-\lambda_{2(2n-j)-1}\right|+O(1).
\end{eqnarray*}
To complete the proof, note that in this case, the theta function in
the right hand side of (\ref{eq:hat3}) becomes 1:
\begin{eqnarray*}
\lim_{\lambda_{2(j + 1)} \to \lambda_{2(2n-j) - 1}}
\theta\left[{\varepsilon\atop
\varepsilon^{\prime}}\right](\tilde{\xi},\tilde{\Pi})= 1.
\end{eqnarray*}
Therefore, we have
\begin{eqnarray*}
\theta\left(\xi,\Pi\right)=\varsigma\exp\left(\pi
i\beta^2(\lambda)\sum_{j=1}^{2n-1}\gamma_j+O(1)\right),\quad
\lambda_{2(j+1)}\rightarrow\lambda_{2(2n-j)-1}.
\end{eqnarray*}
This completes the proof of the lemma.
\end{proof}
Finally, by substituting (\ref{eq:theta3}) into (\ref{eq:entro}), we
find that the entropy behaves like
\begin{eqnarray*}
S(\rho_A)=-\frac13\sum_{j=1}^{2n-1}\log\left|\lambda_{2(j+1)}-\lambda_{2(2n-j)-1}\right|+O(1),
\quad \lambda_{2(j + 1)} \to
\lambda_{2(2n-j) - 1}.
\end{eqnarray*}
\subsection{Pairs of complex roots
approaching the unit circle together with one pair of real roots
approaching 1} The canonical basis used in this section is shown in
figure \ref{fig:crit3}:
\begin{eqnarray*}
\tilde{a}_k&=&-b_k+b_{k-1},\quad k<n-r,\quad k>n+r \quad b_0=0\\
\tilde{b}_k&=&\sum_{j=k}^{2n-1}a_j-\sum_{j=n-r}^{n+r-1}a_j,\quad k<n-r\\
\tilde{a}_{n-k}&=&b_{n-k}-b_{n+k-1}+\sum_{j=n-k+1}^{n+k-2}a_j,\quad k=1,\ldots, r\\
\tilde{a}_{n+k}&=&b_{n+k}-b_{n-k-1}+\sum_{j=n-k-1}^{n+k}a_{j},\quad k=0,\ldots, r-1\\
\tilde{b}_{n-k}&=&b_{n-k}+(-1)^{r-k}\sum_{j=n+r}^{2n-1}a_j-\sum_{j=n-k}^{n+k-2}a_{j}-\sum_{j=n-r}^{n-k-1}(-1)^{n-k-j}\left(a_{j}-2b_{j}\right),\quad k=1,\ldots, r\\
\tilde{b}_{n+k}&=&b_{n+k}+(-1)^{r-k}\sum_{j=n+r}^{2n-1}a_j+\sum_{j=n-r}^{n-k-2}(-1)^{n-k-j}\left(a_{j}-2b_{j}\right),\quad
k=0,\ldots, r-1\\
\tilde{a}_{n+r}&=&b_{n-r-1}-b_{n+r}+\sum_{j=0}^{r-1}(-1)^{r-j-1}\left(2b_{n+j}+a_{n+j}-2b_{n-j-1}+a_{n-j-1}\right)\\
\tilde{b}_k&=&\sum_{j=k}^{2n-1}a_j, \quad k\geq n+r.
\end{eqnarray*}
\begin{figure}[htbp]
\begin{center}
\resizebox{10cm}{!}{\input{crit3.pstex_t}}\caption{The choice of
cycles on the hyperelliptic curve $\mathcal{L}$. The arrows denote the
orientations of the cycles and branch cuts. }\label{fig:crit3}
\end{center}
\end{figure}
In the notation of theorem \ref{thm:modular}, the two bases are related by
\begin{eqnarray}\label{eq:Z4}
\pmatrix{\tilde{A}\cr \tilde{B}}\ &=&Z\pmatrix{A\cr
B}=\pmatrix{Z_{11}&Z_{12}\cr
Z_{21}&Z_{22}}\ \pmatrix{A\cr B}\ \nonumber\\
Z_{11}&=&\pmatrix{0&0&0\cr
0&C_{11}&0\cr
0&\mathcal{T}^{32}&0}\ \nonumber\\
Z_{12}&=&\pmatrix{-C_2^{n-r-1}&0&0\cr
0&C_{12}&0\cr
\mathcal{V}^{31}&\mathcal{V}^{32}&-C_2^{n-r-1}}\ \\
Z_{21}&=&\pmatrix{C_1^{n-r-1}&0&\mathcal{U}^{13}\cr
0&C_{21}&\mathcal{U}^{23}\cr
0&0&C_1^{n-r-1}}\ \nonumber\\
Z_{22}&=&\pmatrix{0&0&0\cr
0&C_{22}&0\cr
0&0&0},\nonumber
\end{eqnarray}
where $C_{ij}$ are defined in (\ref{eq:cij}) and $C_i^{k}$ are
$k\times k$ matrices with entries defined as in (\ref{eq:ci}). All the
entries of the matrix $\mathcal{U}^{13}$ are $1$, while the
entries of $\mathcal{T}^{32}$, $\mathcal{V}^{31}$, $\mathcal{V}^{32}$ and
$\mathcal{U}^{23}$ are given by
\begin{eqnarray*}
\mathcal{T}_{ij}^{32}&=&\delta_{i1}(-1)^{j+1}, \quad \mathcal{T}_{i,2r-j+1}^{32}=\mathcal{T}_{ij}^{32}, \quad 1\leq j\leq r, \\
\mathcal{V}_{ij}^{31}&=&\delta_{i1}\delta_{j,n-r-1}\\
\mathcal{V}_{ij}^{32}&=&2(-1)^{j}\delta_{i1}\\
\mathcal{U}_{ij}^{23}&=&(-1)^{i+1}.
\end{eqnarray*}
Performing the same analysis as in section \ref{se:case1}, we arrive at
\begin{lemma}
\label{le:periodentries4} The entries of the period matrix
$\tilde{\Pi}$ behave like
\begin{eqnarray}\label{eq:periodentries4}
\lim_{\lambda_{2(j+1)} \rightarrow \lambda_{2(2n - j) -1}}
\tilde{\Pi}_{ij}&=&\tilde{\Pi}_{ij}^{0}, \quad i\neq j\nonumber\\
\lim_{\lambda_{2(j+1)} \rightarrow \lambda_{2(2n - j) -1}}
\tilde{\Pi}_{jj}
&= & \tilde{\Pi}_{jj}^0,\quad j>n+r, \quad j<n-r\nonumber\\
\lim_{\lambda_{2(j+1)} \rightarrow \lambda_{2(2n - j) -1}}
\tilde{\Pi}_{jj} &= &\gamma_{j}+\tilde{\Pi}_{jj}^0,
\quad n-r\leq j\leq n+r \nonumber\\
\gamma_j&=&{1\over{\pi
i}}\log\left|\lambda_{2(j+1)}-\lambda_{2(2n-j)-1}\right|,
\end{eqnarray}
where $\tilde{\Pi}_{ij}^0$ are finite.
\end{lemma}
In this case, the argument $\tilde{\xi}$ is given by the following
\begin{lemma}\label{le:tildexi4}
Let $\xi$ be given by (\ref{eq:xi1}) and $\tilde{\xi}$ be
\begin{eqnarray*}
\tilde{\xi}=\left((-Z_{12}^T\tilde{\Pi}+Z_{22}^T)^T\right)\xi,
\end{eqnarray*}
where $Z_{ij}$ are given by (\ref{eq:Z4}). Then in the limit
$\lambda_{2(j+1)}\rightarrow\lambda_{2(2n-j)-1}$ we have
\begin{eqnarray}\label{eq:tildexi4}
\tilde{\xi}_i&=&\eta_i^{\pm}, \quad i>n+r, \quad i<n-r\nonumber\\
\tilde{\xi}_i&=&\epsilon_i\beta(\lambda)\gamma_i+\eta_i^{\pm}, \quad n-r\leq i\leq n+r-1\\
\tilde{\xi}_{n+r}&=&\beta(\lambda)\gamma_{n+r}+\eta_{n+r}^{\pm}\nonumber\\
\epsilon_i&=&1, \quad i<n, \quad \epsilon_i=-1, \quad i>n-1,\nonumber
\end{eqnarray}
where $\eta_i^{\pm}$ remains finite as $\lambda_{2(j + 1)} \to
\lambda_{2(2n-j) - 1}$, $n-r\leq j\leq n+r$.
\end{lemma}
The proof of this lemma follows from exactly the same type of
argument as in section \ref{se:case1}.
We will now compute the limit of the theta function.
\begin{lemma}\label{le:theta4}
In the limit $\lambda_{2(j+1)}\rightarrow\lambda_{2(2n-j)-1}$,
$n-r\leq j\leq n+r$, the theta function $\theta(\xi,\Pi)$ behaves
like
\begin{eqnarray}\label{eq:theta4}
\theta(\xi,\Pi)= \exp\left(2\pi
i\beta^2(\lambda)\sum_{j=n-r}^{n-1}\gamma_{j}+\pi i\beta^2(\lambda)\gamma_{n+r}+O(1)\right),
\end{eqnarray}
where $\xi$ is given by (\ref{eq:xi1}) and $\gamma_j$ by
(\ref{eq:periodentries4}).
\end{lemma}
\begin{proof}
The characteristics in the theta function in (\ref{eq:modular}) are
once more given by (\ref{eq:char}). The matrix
$Z_{12}^T\tilde{\Pi}-Z_{22}^T$ can now be written as
\begin{eqnarray*}
Z_{12}^T\tilde{\Pi}-Z_{22}^T&=&\pmatrix{0_{n-r-1}&0&0&0\cr
0&(I_{2r}-J_{2r})D_r&0&0\cr
0&0&\gamma_{n+r}&0\cr
0&0&0&0_{n-r-1}}\ +W\nonumber\\
D_r&=&{\rm diag} (\gamma_{n-r},\gamma_{n-r+1},\ldots, \gamma_{n-r+1},
\gamma_{n-r}),
\end{eqnarray*}
where $W$ is finite in the limit and $0_{n-r-1}$ is the zero matrix
of dimension $n-r-1$.
As in section \ref{se:case1}, by performing row and column
operations on the matrix $Z_{12}^T\tilde{\Pi}-Z_{22}^T$, we see that
the determinant has the following asymptotic behavior:
\begin{eqnarray*}
\det\left(Z_{12}^T\tilde{\Pi}-Z_{22}^T\right)&=&\gamma_{n+r}\mathcal{D}_{r}+O(\gamma_i^r),
\quad \lambda_{2(j + 1)} \to
\lambda_{2(2n-j) - 1},\nonumber\\
\mathcal{D}_{r}&=&\mathcal{W}^{\prime}\prod_{k=n-r}^{n-1}\gamma_{k},
\end{eqnarray*}
where the notation $O(\gamma_i^r)$ was defined in equation~(\ref{eq:nbigo})
and $\mathcal{W}^{\prime}$ is some constant.
The inverse matrix $\left(Z_{12}^T\tilde{\Pi}-Z_{22}^T\right)^{-1}$
can now be written as in (\ref{eq:z11inverse}):
\begin{eqnarray*}
\left(Z_{12}^T\tilde{\Pi}-Z_{22}^T\right)^{-1}&=&X^0+X^{-1}+O(\gamma_i^{-2}),
\quad \lambda_{2(j + 1)} \to
\lambda_{2(2n-j) - 1},
\end{eqnarray*}
where the entries of the $2r$ dimensional matrix $X^0$ satisfy
(\ref{eq:x0rel}) with $(X^0)_{n+r,n+r}=0$, and $X^{-1}$ is a matrix
of order $-1$ in the $\gamma_j$ with
$(X^{-1})_{n+r,n+r}=\gamma_{n+r}^{-1}$.
Following exactly the same analysis as in section \ref{se:im}, we see
that the leading order term in the exponential factor in
(\ref{eq:mod}) is
\begin{eqnarray*}
\tilde{\xi}^T\left(Z_{12}^T\tilde{\Pi}-Z_{22}^T\right)^{-1}Z_{12}^T\tilde{\xi}=
\beta^2(\lambda)\left(2\sum_{j=n-r}^{n-1}\gamma_{j}+\gamma_{n+r}\right)+O(1).
\end{eqnarray*}
We now look at the term $\theta\left(\tilde{\xi},\tilde{\Pi}\right)$
in (\ref{eq:modular}). As in section \ref{se:case1}, we see that the
theta function becomes $2n-2r-2$ dimensional:
\begin{eqnarray*}
\lim_{\lambda_{2(j + 1)} \to \lambda_{2(2n-j) - 1}}
\theta\left[{\varepsilon\atop
\varepsilon^{\prime}}\right](\tilde{\xi},\tilde{\Pi})
=\theta(\tilde{\xi}^0, \tilde{\Pi}^0),
\end{eqnarray*}
where the arguments on the right hand side are obtained by
removing the $(n-r)^{th}$ up to the $(n+r)^{th}$ entries.
Therefore the theta function $\theta\left(\xi,\Pi\right)$ behaves
like
\begin{eqnarray*}
\theta\left(\xi,\Pi\right)=\varsigma\exp\left(\pi
i\beta^2(\lambda)\left(2\sum_{j=n-r}^{n-1}\gamma_{j}+\gamma_{n+r}\right)+O(1)\right)\theta(\tilde{\xi}^0,
\tilde{\Pi}^0).
\end{eqnarray*}
This completes the proof of the lemma.
\end{proof}
By substituting (\ref{eq:theta4}) into (\ref{eq:entro}), we see that
the entropy is asymptotic to
\begin{eqnarray*}
S(\rho_A)&=& -\frac13 \sum_{j=n-r}^{n-1}\log\left|\lambda_{2(j+1)}-\lambda_{2(2n-j)-1}\right|-\frac{1}{6}\log\left|\lambda_{2(n-r)}-\lambda_{2(n+r)+1}\right|
\\
&& +O(1), \quad \lambda_{2(j + 1)} \to
\lambda_{2(2n-j) - 1}.
\end{eqnarray*}
This concludes the proof of theorem \ref{thm:crit}.
\section{Introduction}\label{sec:introduction}
\IEEEPARstart{R}{einforcement} learning (RL) is posed as a problem of sequential decision-making in an interactive environment. Recently, deep RL has led to tremendous progress in a variety of challenging domains \cite{mnih2015human, zhang2020designing}. Nevertheless, deep RL agents are often criticized for being black boxes, and their lack of interpretability has increasingly become a pressing concern. In many real-world scenarios where trust and reliability are critical, it is hardly satisfactory to merely pursue state-of-the-art performance. The temporal causal relations between sequential observations and decisions need to be revealed so that the provided insights are trustworthy. The inability to explain and justify their decisions makes it harder to deploy RL systems in safety-critical fields such as healthcare \cite{raghu2017continuous, zhang2020designing} and finance \cite{chia2019machines}. Therefore, it is crucial to develop the ability to reason about the behavior of RL agents in order to acquire the trust of users.
Explaining the decisions of black-box systems \cite{cao2019interpretable, monfort2019moments, liu2019tabby} is an area of active research, and several popular methods have been developed for generating visual interpretations, such as LIME \cite{ribeiro2016should}, LRP \cite{binder2016layer}, DeepLIFT \cite{shrikumar2017learning}, Grad-CAM \cite{selvaraju2017grad}, Kernel-SHAP \cite{lundberg2017unified} and network dissection \cite{bau2017network, zhou2018interpreting}. However, most of these methods were proposed exclusively for supervised learning and cannot be directly adapted to sequential decision-making. So far, some existing methods have provided valuable and insightful interpretations for vision-based RL from the spatial perspective; they typically focus on a better understanding of what information is attended to and why mistakes are made via visualization techniques, such as gradient backpropagation \cite{zahavy2016graying, wang2016dueling}, perturbation injection \cite{greydanus2018visualizing, puri2020explain} and attention mechanisms \cite{mott2019towards}. While gradient-based and perturbation-based methods generally interpret a single action rather than long-term behavior, attention-augmented methods need to adapt and retrain the agent model to be interpreted. Moreover, none of these methods can uncover temporal causal information, which is essential for understanding the behavior of RL agents. While a few works \cite{karpathy2015visualizing, bargal2018excitation} have studied temporal interpretations for recurrent neural networks (RNNs), very little work has provided reliable temporal-spatial interpretations for the sequential decisions of RL agents.
In the context of machine learning, RL is distinguished from other learning paradigms by two unique characteristics. First, a single-timestep observation alone is usually not enough to achieve optimal performance in RL environments that are partially observable. Second, an observation is relevant not only to the current decision but also to future decisions. In other words, the behavior of RL agents depends not only on spatial features but also on temporal features, which intuitively can be extracted from consecutive observations of the same environment. Our work is motivated by the goal of understanding the sequential decisions of vision-based RL agents from a temporal-spatial causal perspective. To that end, we draw on the notion of Granger causality \cite{granger1969investigating}, which is based on the intuition that a cause helps predict its effects in the future. Granger causality is an effective approach to reasoning about temporal causal relations between time series involving many features \cite{arnold2007temporal, gong2015discovering}. Prior works \cite{schwab2019granger, schwab2019cxplain} have applied a non-temporal variant of Granger causality to model interpretation in supervised learning, but it is of little practical use to discuss causality in RL without introducing time.
One of our main contributions is a \emph{Temporal-Spatial Causal Interpretation} (TSCI) model for understanding the sequential decisions of vision-based RL agents. The TSCI model builds on the concept of temporal causality, which characterizes the temporal causal relations between sequential observations and decisions. To identify temporal-spatial causal features, a separate causal discovery network is employed to learn the temporal causality. Once trained, the TSCI model can be used to generate causal interpretations of the agent's behavior in little time. Our approach does not depend on a specific way of obtaining the agent model and can be applied readily to deep RL agents that use recurrent structures. In particular, it does not require adapting or retraining the original agent model. We conduct comprehensive experiments on Atari 2600 games of the Arcade Learning Environment \cite{bellemare2013arcade}. The empirical results verify the effectiveness of our method and demonstrate that it can produce high-resolution and sharp attention masks that highlight the task-relevant temporal-spatial information constituting most of the evidence for the agent's behavior. In other words, our method discovers temporal-spatial causal features to interpret how vision-based RL agents make sequential decisions.
For RL environments that are partially observable, consecutive observations are generally stacked together to enable the learning of temporal representations of high-level semantic features. Another main contribution of this work is to further reveal and understand the role that temporal dependence plays in sequential decision-making. To that end, we perform additional experiments to evaluate the impact of temporal dependence on the agent's long-term performance based on counterfactual analysis, and then leverage the proposed TSCI model to further explain the resulting counterfactual phenomenon. In addition, this work provides empirical explanations of why frame stacking is generally necessary, even for agents that use a recurrent structure, from the point of view of temporal dependence. The results demonstrate that the proposed TSCI model can be applied to provide valuable causal interpretations of the agent's behavior from the temporal-spatial perspective.
The remainder of this paper is organized as follows. In the following two sections, we summarize the related works and give a brief introduction to the preliminaries used in this work. In Section \ref{sec:methodology}, we present the temporal-spatial causal interpretation (TSCI) model for interpreting vision-based RL agents. In Section \ref{sec:experiments}, empirical results are provided to verify the effectiveness of our method. In Section \ref{sec:temporal interpretaions}, the proposed method is applied to further reveal and understand the role that temporal dependence plays in sequential decision-making. In the last section, we draw conclusions and outline future work.
\section{Related Work}\label{sec:related work}
\subsection{Interpreting Deep RL Agents}
There is a substantial body of literature about how to interpret deep RL agents. While the broad objective of RL interpretation is to make RL policies more understandable, each work has its own special purposes, sets of applicable problems, limitations, and challenges \cite{alharin2020reinforcement}. Here we review some popular interpretation methods introduced in previous works.
\emph{Gradient-based methods} identify input features that are most salient to the trained deep neural network (DNN) by using the gradient to estimate their influence on the output. A feasible approach is to generate Jacobian saliency maps \cite{simonyan2014deep} to visualize which pixels in the state affect the action the most \cite{wang2016dueling}. There are several variants modifying gradient to obtain more meaningful saliency, such as Integrated Gradients \cite{sundararajan2017axiomatic}, Excitation Backpropagation \cite{zhang2018top}, DeepLIFT \cite{shrikumar2017learning} and Grad-CAM \cite{selvaraju2017grad}. Unfortunately, these gradient-based methods depend on the shape in the neighborhood of a few points and are vulnerable to adversarial attacks \cite{ghorbani2019interpretation}. Furthermore, they are unable to provide a valid interpretation from a temporal perspective. To enable video attribution in the temporal dimension, some methods extend Excitation Backpropagation and Grad-CAM to produce temporal maps \cite{bargal2018excitation, stergiou2019saliency}. However, the above problem still remains unsettled, and most of these methods require a well-designed network structure.
\emph{Perturbation-based methods} measure the variation of a black-box model's output when some of the input information is removed or perturbed \cite{fong2017interpretable, dabkowski2017real}. It is important to choose a perturbation that removes information without introducing any new information. The simplest perturbation approach is to replace part of an input image with a gray square \cite{zeiler2014visualizing} or region \cite{ribeiro2016should}. In order to provide reliable interpretations, some recent works attempt to estimate feature importance by combining the perturbation approach with Granger causal analysis \cite{granger1969investigating}, such as causal explanations (CXPlain) \cite{schwab2019cxplain} and the attentive mixture of experts (AME) \cite{schwab2019granger}. A particular example of the perturbation approach is Shapley values \cite{shapley1953value, lundberg2017unified, ancona2019explaining}, whose exact computation is NP-hard. To interpret deep RL agents, several works \cite{greydanus2018visualizing, puri2020explain, iyer2018transparency} use perturbation-based saliency maps to understand how an agent learns a policy, although it has been suggested that such saliency maps be viewed as an exploratory tool rather than an explanatory tool \cite{atrey2020exploratory}. However, this suggestion seems contentious. First, the evaluation methodology proposed there does not work for recurrent agents. Second, it emphasizes the consistency of attribution results with human inspection, which deviates from the target of attribution, i.e., discovering the regions relied upon by models rather than humans.
\emph{Attention-augmented methods} incorporate various attention mechanisms into the agent model. Learning attention to generate saliency maps for understanding internal decision patterns is one of the most popular approaches \cite{wang2020paying} in the deep learning community, and there are already a considerable number of works in the direction of interpretable RL. These works aim to obtain better interpretability without sacrificing the performance of RL agents. A simple approach is to augment the actor with customized self-attention modules \cite{manchin2019reinforcement, nikulin2019free, sorokin2015deep}, which learn to focus attention on semantically relevant areas. Another branch of this category implements the key-value structure of attention to learn explainable policies by sequentially querying the agent's view of the environment \cite{mott2019towards, annasamy2019towards, choi2017multi}. However, attention-augmented methods need to adapt and retrain the agent model, making them unable to interpret agent models that have already been trained or whose network structure cannot be changed. Moreover, attention and causality are two different concepts that are associated with interpretability. While attention aims to find the semantic information that is salient to the agent's decision, causality is the relationship between cause and effect; the principle of causality is that everything has a cause. Different from attention-augmented methods, our work draws on the notion of Granger causality, i.e., a cause helps predict its effects in the future, to discover temporal-spatial causal features for reliable interpretations of the RL agent's behavior.
Besides the above established categories of interpretation methods, structural causal models (SCMs) \cite{madumal2020explainable}, decision trees \cite{bastani2018verifiable} and mimic models \cite{zhang2020atari} have also recently been proposed for deep RL. However, while SCMs learn an action influence model whose causal structure must be given beforehand, the others are designed for specific models or build on human demonstration datasets. Lastly, a major limitation of most existing RL interpretation methods is that they generally interpret a single action rather than long-term behavior and cannot uncover temporal causal information. In contrast, this work follows the idea underlying SSINet \cite{shi2020self} to learn an end-to-end interpretation model, but aims to understand the agent's behavior from the temporal-spatial perspective.
\subsection{Causal Analysis of Time Series}
Another research field related to ours is causal analysis of time series, which aims to find temporal causal relations in time series. Several past works have explored revealing the temporal causal information underlying time series data, such as Granger causal analysis \cite{granger1969investigating, gong2015discovering}, graphical Granger methods \cite{eichler2006graphical, arnold2007temporal}, the SIN method \cite{drton2008sinful} and vector autoregression (VAR) \cite{valdes2005estimating, opgen2007learning}. However, these methods are developed exclusively for non-Markovian and low-dimensional time series data, and have not shown the ability to explain the behavior of deep RL agents whose observation spaces are generally high-dimensional, such as images. In contrast, this work builds on Granger causality to discover temporal-spatial causal features for interpreting the behavior of vision-based RL agents.
\section{Preliminaries}
Reinforcement learning (RL) is a general class of algorithms that allow an agent to learn how to make decisions sequentially by interacting with an environment $E$. Specifically, the agent takes an action $a_t$ given an observation $o_t$, and receives a scalar reward $r_{t+1}$. Meanwhile, the environment changes its observation to $o_{t+1}$. A history $h_t$ is then defined as the sequence of observations and actions $``o_0a_0o_1a_1\cdots a_{t-1}o_t"$ that occurred in the past. A state $s_t$ is a summary of all of the information that an agent could possibly have about its current situation, and is formally defined as a sufficient statistic for the history $h_t$ \cite{wiering2012reinforcement}. In practice, only finitely many historical observations $``o_{t-m+1}\cdots o_{t-1}o_t"$, denoted by $o_{t-m+1:t}$, are explicitly considered in a state $s_t$, while the others are usually discarded directly or encoded using memory cells, such as recurrent neural networks (RNNs).
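To make this state construction concrete, the following minimal Python sketch maintains the last $m$ observations and stacks them into $s_t$; the class name and the padding-on-reset behavior are illustrative choices rather than part of a particular implementation.
\begin{verbatim}
from collections import deque
import numpy as np

class FrameStack:
    """Keep the last m observations and stack them into the state
    s_t = (o_{t-m+1}, ..., o_t); m = 4 is the usual choice for Atari."""
    def __init__(self, m=4):
        self.m = m
        self.frames = deque(maxlen=m)

    def reset(self, first_obs):
        # pad the history with copies of the first frame at episode start
        for _ in range(self.m):
            self.frames.append(first_obs)
        return np.stack(self.frames, axis=0)

    def step(self, obs):
        self.frames.append(obs)               # drop o_{t-m}, append o_t
        return np.stack(self.frames, axis=0)  # shape: (m, H, W)
\end{verbatim}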
Formally, an RL task can be modeled as a Markov decision process (MDP) with state space $\mathcal{S}$, action space $\mathcal{A}$, initial state distribution $p_0$, transition dynamics $p(s_{t+1}|s_t,a_t)$, and reward function $r_{t+1}=r(s_t, a_t)$. An agent's behavior is defined by a policy $\pi$, which maps a state to a probability distribution over all actions, $\pi:\mathcal{S}\rightarrow \mathcal{P}(\mathcal{A})$. The value function of a state $s_t$ under a policy $\pi$, denoted $v_\pi(s_t)$, is the expected sum of discounted future rewards when starting in $s_t$ and following $\pi$ thereafter, i.e., $v_\pi(s_t)=\mathbb{E}_\pi[\sum_{k=0}^{\infty}\gamma^k r_{t+k+1}]$ with a discount factor $\gamma\in[0,1]$. In this work, the agents to be interpreted are obtained with the proximal policy optimization (PPO) algorithm \cite{schulman2017proximal}, which uses a trust region update to improve a general stochastic policy with gradient descent.
\section{Methodology}\label{sec:methodology}
In this section, we start with the description of the interpretability problem addressed in this work. Then we present a strict derivation about temporal causal objective, which forms the theoretical foundation of the temporal-spatial causal interpretation (TSCI) model that is proposed afterwards. Finally, a two-stage training procedure is given for training our TSCI model.
\subsection{Problem Setting}
Consider the setting in which we need to interpret an actor (or agent) model $\pi$ which sequentially takes as input the state $s_t$ to predict the action $a_t$. For the convenience of our formulation and without loss of generality, a state $s_t$ can be reformulated as a set of $p$ temporal features $X^\mathcal{D}_t=\{o^i_{t-m+1:t}, i\in\mathcal{D}\}$ with $\mathcal{D}=\{1,\cdots,p\}$, which represents all available information in the state $s_t$. The temporal feature $o^i_{t-m+1:t}$ is a sequence of past and present values for the $i$-th specific feature $o^i$. Under the above setting, the causality between state and action refers to the causal relation between temporal features $X^\mathcal{D}_{0:T}= \{o^i_{0:T}, i\in\mathcal{D}\}$ and action sequence $a_{0:T}$ over a horizon $T$. Then, our goal is to develop a separate \emph{Temporal-Spatial Causal Interpretation} (TSCI) model $f_{exp}$ that (i) can discover causal features $X^{\mathcal{D}_c}_{0:T}$ ($\mathcal{D}_c \subseteq \mathcal{D}$) from sequential observations, and (ii) is able to interpret the RL agent's behavior from the temporal-spatial perspective.
\subsection{Temporal Causal Objective}\label{subsec:temporal causal objective}
The core component of our TSCI model is the temporal causal objective that enables us to learn and discover temporal causal features for understanding the long-term behavior of RL agents. The temporal causal objective builds on Granger causality \cite{granger1969investigating}, which has been widely used to find causal relations in time series. However, the original Granger causal analysis usually assumes a linear model. In this work, we first contribute an adapted version of Granger causality for the RL domain, i.e., \emph{temporal causality}, which is independent of the form of the agent model.
\begin{definition}[Temporal Causality]\label{def:sequential causality}
The causality between temporal features $X^{\mathcal{D}_s}_{0:T} (\mathcal{D}_s\subseteq \mathcal{D})$ and action sequence $a_{0:T}$ exists, denoted by $X^{\mathcal{D}_s}_{0:T}\rightarrow a_{0:T}$, if the agent model $\pi$ is able to predict the actions $a_{0:T}$ better using all available information $X^\mathcal{D}_{0:T}$ than using only the information excluding $X^{\mathcal{D}_s}_{0:T}$.
\end{definition}
Given a state $s_t$ (or temporal features $X^\mathcal{D}_t$), we denote $\varepsilon^{\mathcal{D}-\mathcal{D}_s}_t$ as the prediction error without including any information from the temporal features $X^{\mathcal{D}_s}_t$ and $\varepsilon^\mathcal{D}_t$ as the prediction error when considering all available information. To calculate $\varepsilon^{\mathcal{D}-\mathcal{D}_s}_t$ and $\varepsilon^\mathcal{D}_t$, we first compute the predicted actions $a^{\mathcal{D}-\mathcal{D}_s}_t$ and $a^\mathcal{D}_t$ (abbreviated by $a_t$) without and with using $X^{\mathcal{D}_s}_t$, respectively:
\begin{align}\label{eqn:predicted actions}
a^{\mathcal{D}-\mathcal{D}_s}_t &= \pi(X^{\mathcal{D}-\mathcal{D}_s}_t), \\
a_t = a^\mathcal{D}_t &= \pi(X^\mathcal{D}_t).
\end{align}
Note that the predicted output is a probability distribution for discrete action spaces. Denoting $a^*_t$ and $v^*$ as the optimal action and the optimal state value function, respectively, we can calculate $\varepsilon^{\mathcal{D}-\mathcal{D}_s}_t$ and $\varepsilon^\mathcal{D}_t$ as:
\begin{align}
\label{eqn:action prediction errors}
\varepsilon^{\mathcal{D}-\mathcal{D}_s}_t &= \mathcal{L}(a^*_t, a^{\mathcal{D}-\mathcal{D}_s}_t) + \alpha \left\|v^*(X^\mathcal{D}_t)-v(X^{\mathcal{D}-\mathcal{D}_s}_t)\right\|_2, \\
\label{eqn:value prediction errors}
\varepsilon^\mathcal{D}_t &= \mathcal{L}(a^*_t, a^\mathcal{D}_t) + \alpha \left\|v^*(X^\mathcal{D}_t)-v(X^\mathcal{D}_t)\right\|_2,
\end{align}
where $\alpha$ is a small weight coefficient and $\|\cdot\|_2$ denotes the $L_2$-norm. It is worth emphasizing that the consideration of value consistency is optional in both $\varepsilon^{\mathcal{D}-\mathcal{D}_s}_t$ and $\varepsilon^\mathcal{D}_t$, making our method also applicable to the case where the value function is unavailable. The selection of the distance measure $\mathcal{L}$ hinges on the type of action space. Here we use the Euclidean distance for continuous action spaces and the Wasserstein distance \cite{arjovsky2017wasserstein} for discrete action spaces. Following the above definition of temporal causality, we define the degree $\Delta\varepsilon^{\mathcal{D}_s}_{0:T}$ to which the temporal features $X^{\mathcal{D}_s}_{0:T}$ causally contribute to the predicted action sequence $a_{0:T}$ as the decrease in the sum of discounted sequential errors
\begin{align}\label{eqn:discounted errors}
\Delta\varepsilon^{\mathcal{D}_s}_{0:T} = \sum_{t=0}^{T}\gamma^t\left(\varepsilon^{\mathcal{D}-\mathcal{D}_s}_t - \varepsilon^\mathcal{D}_t\right).
\end{align}
Then we have that if $\Delta\varepsilon^{\mathcal{D}_s}_{0:T}>0$, the temporal features $X^{\mathcal{D}_s}_{0:T}$ include at least one temporal causal feature that causes $a_{0:T}$ according to Definition \ref{def:sequential causality}. This temporal causal relation does not require direct access to the process by which the agent produces its output and thus does not depend on a specific agent model. Formally, our \emph{temporal causal objective} is to discover the temporal causal features $X^{\mathcal{D}_c}_{0:T}$ such that
\begin{align}\label{eqn:sequential causal objective}
\Delta\varepsilon^{\mathcal{D}^\prime}_{0:T}>0,~~ \forall~\mathcal{D}^\prime\subseteq\mathcal{D}_c~\text{and}~\mathcal{D}^\prime\neq\varnothing.
\end{align}
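To make the definitions above concrete, the following sketch computes the contribution degree $\Delta\varepsilon^{\mathcal{D}_s}_{0:T}$ for one episode in PyTorch. The agent is assumed to return action logits and a value estimate, the oracle quantities $a^*_t$ and $v^*$ are passed in as placeholders, and the cross-entropy to the oracle action is used as a simple stand-in for the distance $\mathcal{L}$; all of these are illustrative assumptions rather than the exact implementation.
\begin{verbatim}
import torch
import torch.nn.functional as F

def contribution_degree(agent, states, states_wo, a_star, v_star,
                        gamma=0.99, alpha=0.1):
    """Delta-epsilon for one episode. `states_wo` are the states with the
    candidate features D_s removed; `a_star`, `v_star` are oracle stand-ins
    that are not actually available in practice."""
    def pred_error(x):
        logits, value = agent(x)              # assumed agent interface
        act_err = F.cross_entropy(logits, a_star, reduction="none")
        val_err = (value.squeeze(-1) - v_star).abs()
        return act_err + alpha * val_err      # per-timestep error

    eps_wo, eps_full = pred_error(states_wo), pred_error(states)
    T = states.shape[0]
    discount = gamma ** torch.arange(T, dtype=torch.float32)
    return (discount * (eps_wo - eps_full)).sum()   # > 0 => D_s contains causal features
\end{verbatim}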
However, the optimal action $a^*_t$ and the optimal state value $v^*$ in equations (\ref{eqn:action prediction errors}) and (\ref{eqn:value prediction errors}) cannot be directly obtained, hampering the application of temporal causality in RL. To address this issue, we observe that $X^{\mathcal{D}_c}_{0:T}$ can be obtained by leaving out all non-causal temporal features $X^{\mathcal{D}_{nc}}_{0:T}$ such that $\Delta\varepsilon^{\mathcal{D}_{nc}}_{0:T} = \Delta\varepsilon^{\mathcal{D}-\mathcal{D}_c}_{0:T} \leq 0$. Then $a^*_t$ and $v^*$ can be eliminated by applying the following proposition to construct an upper bound for $\Delta\varepsilon^{\mathcal{D}_{nc}}_{0:T}$.
\begin{proposition}\label{pro:surrogate contribution degree}
For a specific distance measure $\mathcal{L}$, the following bound holds:
\begin{align}\label{eqn:surrogate contribution degree}
\begin{split}
\Delta\varepsilon^{\mathcal{D}_{nc}}_{0:T} &\leq \Delta\hat{\varepsilon}^{\mathcal{D}_{nc}}_{0:T} \\
&= \sum_{t=0}^{T}\!\gamma^t\!\left(\mathcal{L}(a^{\mathcal{D}_c}_t\!, a^\mathcal{D}_t)+\alpha\!\left\|v(X^{\mathcal{D}_c}_t)-v(X^\mathcal{D}_t)\right\|_2\right)\!.
\end{split}
\end{align}
\end{proposition}
\begin{proof}
In Euclidean geometry and some other geometries, the distance measure $\mathcal{L}$ satisfies the triangle inequality, which states that for any triangle, the sum of the lengths of any two sides must be greater than or equal to the length of the remaining side. Therefore, we have
\begin{align}
\Delta\varepsilon^{\mathcal{D}_{nc}}_{0:T}
& = \sum_{t=0}^{T}\gamma^t\left(\varepsilon^{\mathcal{D}-\mathcal{D}_{nc}}_t - \varepsilon^\mathcal{D}_t\right) \\
& = \sum_{t=0}^{T}\gamma^t\left(\varepsilon^{\mathcal{D}_c}_t - \varepsilon^\mathcal{D}_t\right) \\
& = \sum_{t=0}^{T}\!\gamma^t\!\bigg(\mathcal{L}(a^*_t, a^{\mathcal{D}_c}_t) + \alpha \big\|v^*(X^\mathcal{D}_t) - v(X^{\mathcal{D}_c}_t)\big\|_2\! \\ \nonumber
&~~~~~~~~~~~~~~~ - \mathcal{L}(a^*_t, a^\mathcal{D}_t) - \alpha \!\left\|v^*\!(X^\mathcal{D}_t) - v(X^\mathcal{D}_t)\right\|_2\!\bigg)\! \\
& \leq \sum_{t=0}^{T}\gamma^t\!\left(\!\mathcal{L}(a^{\mathcal{D}_c}_t, a^\mathcal{D}_t)+\alpha \big\|v(X^{\mathcal{D}_c}_t)-v(X^\mathcal{D}_t)\big\|_2\!\right)\! \\
& = \Delta\hat{\varepsilon}^{\mathcal{D}_{nc}}_{0:T}.
\end{align}
\end{proof}
Proposition \ref{pro:surrogate contribution degree} shows the possibility of formulating the temporal causality without the need to provide $a^*_t$ and $v^*$. More concretely, the inequality $\Delta\varepsilon^{\mathcal{D}_{nc}}_{0:T} \leq 0$ can be guaranteed by limiting the upper bound to no greater than zero, i.e., $\Delta\hat{\varepsilon}^{\mathcal{D}_{nc}}_{0:T} \leq 0$. On the other hand, we have $\Delta\hat{\varepsilon}^{\mathcal{D}_{nc}}_{0:T} \geq 0$ according to the definition in (\ref{eqn:surrogate contribution degree}). Based on the above observations, an easy-to-implement variant of the temporal causal objective (\ref{eqn:sequential causal objective}) is to leave out all non-causal temporal features $X^{\mathcal{D}_{nc}}_{0:T}$ that satisfy
\begin{align}\label{eqn:variant of sequential causal objective}
\Delta\hat{\varepsilon}^{\mathcal{D}_{nc}}_{0:T} = \Delta\hat{\varepsilon}^{\mathcal{D}-\mathcal{D}_c}_{0:T} = 0.
\end{align}
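In code, the surrogate quantity $\Delta\hat{\varepsilon}$ compares only the agent's own outputs on the candidate causal features with its outputs on the full states, so no oracle is needed. The sketch below reuses the illustrative agent interface from the previous snippet and employs a KL divergence as a simple stand-in for the distance on discrete action distributions; the exact distance and the weight $\alpha$ are configuration choices.
\begin{verbatim}
import torch
import torch.nn.functional as F

def surrogate_error(agent, states, causal_states, gamma=0.99, alpha=0.1):
    """Upper bound of Proposition 1, computed from the agent's behavior only."""
    logits_c, value_c = agent(causal_states)      # behavior on candidate causal features
    logits_f, value_f = agent(states)             # behavior on all available features
    act_err = F.kl_div(F.log_softmax(logits_c, dim=-1),
                       F.softmax(logits_f, dim=-1),
                       reduction="none").sum(-1)  # per-step stand-in for the distance L
    val_err = (value_c - value_f).abs().squeeze(-1)
    T = states.shape[0]
    discount = gamma ** torch.arange(T, dtype=torch.float32)
    return (discount * (act_err + alpha * val_err)).sum()
\end{verbatim}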
The above causality analysis method is applicable not only to high-dimensional image data but also to low-dimensional vector data. However, one of the main purposes of this work is to understand the RL agent's sequential decision-making from the temporal perspective, hence we only focus on vision-based RL environments that are partially observable and thus show obvious temporal dependence between consecutive observations of a state. In contrast, some RL environments that use vector data are usually fully observable, such as MuJoCo \cite{todorov2012mujoco}. For vision-based RL environments that have high-dimensional state space, another challenge of temporal causality is that it is difficult to separate temporal features $o^i_{0:T} (i\in \mathcal{D})$ from each other. In non-temporal scenarios, a sensible method is to group non-overlapping regions of adjacent pixels into super-pixels \cite{schwab2019cxplain, lundberg2017unified}. However, the same semantic feature may be in different locations of images at every timestep, hence the ``super-pixels'' method is not feasible in our temporal setting. To tackle this challenge, we use DNNs to build our TSCI model for vision-based RL in the next section. An advantage of deep models is that they can extract high-level feature representations from high-dimensional data \cite{lecun2015deep}, and thus remove the need to perform manual feature engineering.
\begin{figure*}[t]
\setlength{\abovecaptionskip}{-0.01cm}
\setlength{\belowcaptionskip}{-0.20cm}
\begin{center}
\includegraphics[width=0.95\linewidth]{CDNet.pdf}
\end{center}
\caption{Architecture diagram of our TSCI model and the agent model to be interpreted. The agent model consists of a feature extractor, a Gated Recurrent Unit (GRU) \cite{cho2014properties} and two fully-connected layers. TSCI model mainly involves an encoder-decoder structure with the encoder shared from the feature extractor. The agent model and the encoder are fixed during training.}
\label{fig:cdnet}
\end{figure*}
\subsection{Temporal-Spatial Causal Interpretation Model}
Here we first explain the temporal-spatial causality considered in this work, then we present a trainable TSCI model that learns to generate temporal-spatial causal interpretations about the sequential decisions of vision-based RL agent in an end-to-end manner.
\textbf{Temporal-Spatial Causality.}
In this work, temporal-spatial causality is based on the intuition that a cause helps predict its effects in the future, and it is supposed to suggest informative explanations that accurately represent the intrinsic reasons for the agent's decision-making from both the spatial and temporal dimensions. More concretely, spatial causality aims to uncover task-relevant semantic features that affect the agent's decision-making, i.e., what information is important and where to look, while temporal causality focuses on revealing the underlying temporal dependence between temporally adjacent observations and how the feature importance varies as time goes on.
\textbf{Training Objective.}
Based on temporal-spatial causality, the TSCI model aims to predict which parts of the input state are considered causal for the agent's behavior. To that end, the TSCI model learns a mask generation function $g(X^\mathcal{D}_t)$, taking values between 0 and 1, to discover causal features
\begin{align}\label{eqn:sequential causal features}
X^{\mathcal{D}_c}_t=f_{exp}(X^\mathcal{D}_t)= X^\mathcal{D}_t\odot g(X^\mathcal{D}_t),
\end{align}
where $\odot$ denotes the element-wise multiplication. The generated masks map sequential input pixels to saliency scores, which reflect the relative degree to which the causal features at every timestep causally contribute to the action. A reasonable choice of training objective is to adopt the variant of temporal causal objective (\ref{eqn:variant of sequential causal objective}), which is better for optimization than the vanilla form (\ref{eqn:sequential causal objective}). Substituting equation (\ref{eqn:sequential causal features}) into (\ref{eqn:variant of sequential causal objective}), we have that $X^{\mathcal{D}_{nc}}_{0:T}$ is the maximal set satisfying
\begin{align}\label{eqn:non-causal feature series}
\begin{split}
\Delta\hat{\varepsilon}^{\mathcal{D}_{nc}}_{0:T} = \sum_{t=0}^{T}\gamma^t \Big(&\mathcal{L}\big(\pi(f_{exp}(X^\mathcal{D}_t)), \pi(X^\mathcal{D}_t)\big) + \\
&\alpha\big\|v(f_{exp}(X^\mathcal{D}_t))-v(X^\mathcal{D}_t)\big\|_2\Big) = 0,
\end{split}
\end{align}
which requires the agent's behavior to be consistent with the original behavior after the states are overlaid with the attentions generated by $f_{exp}$. Taking the above conditions into consideration, the objective function is defined as the upper bound of the contribution degree $\Delta\hat{\varepsilon}^{\mathcal{D}_{nc}}_{0:T}$ minus a sparse regularization term
\begin{align}\label{eqn:objective function}
\mathcal{L}_{TSCI}(g) = \Delta\hat{\varepsilon}^{\mathcal{D}_{nc}}_{0:T} - \beta\sum_{t=0}^{T}\left\|1-g(X^\mathcal{D}_t)\right\|_1,
\end{align}
where $\|\cdot\|_1$ denotes the $L_1$-norm, and $\beta$ is a coefficient controlling the sparseness of the mask. In fact, the sparse regularization term requires $f_{exp}$ to attend to as little information as possible, which makes the resulting interpretations easier for humans to understand. Overall, this training objective leads to adversarial masks and is composed of two terms. The first term ensures that the change in prediction errors, after the non-causal temporal features are removed, is close to zero. The second term encourages the masked non-causal feature region to be large and thus pushes for better compliance with the temporal causal objective.
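Under the same illustrative assumptions as before, the training objective can be written as a short loss function that reuses the surrogate error above; the agent's parameters are assumed to be frozen elsewhere, and the hyperparameter values shown are placeholders.
\begin{verbatim}
def tsci_loss(agent, mask_net, states, gamma=0.99, alpha=0.1, beta=1e-4):
    """Discounted behavior-consistency error on masked states, minus the
    sparsity bonus that pushes the mask to cover as few pixels as possible."""
    masks = mask_net(states)                  # g(X_t), values in [0, 1]
    masked_states = states * masks            # f_exp(X_t) = X_t * g(X_t)
    causal_term = surrogate_error(agent, states, masked_states,
                                  gamma=gamma, alpha=alpha)
    sparsity = (1.0 - masks).abs().flatten(1).sum()   # sum_t ||1 - g(X_t)||_1
    return causal_term - beta * sparsity
\end{verbatim}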
\textbf{Causal Discovery Network.}
As mentioned above, the main purpose of the causal discovery network is to learn the mask generation function $g(X^\mathcal{D}_t)$, which produces an attention mask to highlight the task-relevant information for making decisions. To that end, the causal discovery network must learn which parts of the state are considered important by the agent. In the field of computer vision, learning such a mask is a dense prediction task, which arises in many vision problems, such as semantic segmentation \cite{ronneberger2015u, lin2017refinenet} and scene depth estimation \cite{mayer2016large}. In order to make the masks sharp and precise, we adapt a U-Net \cite{ronneberger2015u} architecture to build the causal discovery network for the TSCI model, as depicted in Figure \ref{fig:cdnet}. As discussed in the previous section, the input state is composed of temporally extended observations, hence it is no longer necessary to design a recurrent structure for the causal discovery network, the detailed architecture of which is given in Appendix A.1 of the supplementary material. In particular, instead of learning $g(X^\mathcal{D}_t)$ from scratch, we directly reuse the feature extractor of the agent model as the encoder. This greatly reduces the risk of overfitting and ensures that the generated masks are semantically consistent with the agent model. Therefore, we only need to optimize the decoder with the temporal causal objective.
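A minimal sketch of such a network is given below. It assumes a Nature-CNN-style agent encoder operating on four stacked $84\times 84$ frames (the architecture actually used is given in Appendix A.1); the frozen encoder blocks are reused, and only the mirrored decoder with skip connections is trainable.
\begin{verbatim}
import torch
import torch.nn as nn

class CausalDiscoveryNet(nn.Module):
    """U-Net-style mask generator g(.): the encoder is copied from the agent's
    feature extractor and frozen; only the decoder is trained. The output is a
    per-frame saliency mask in [0, 1] with the same spatial size as the input."""

    def __init__(self, agent_encoder=None, in_frames=4):
        super().__init__()
        # If the real agent encoder is unavailable, build a Nature-CNN-like
        # stand-in with the same layer shapes (an illustrative assumption).
        if agent_encoder is None:
            agent_encoder = nn.ModuleList([
                nn.Sequential(nn.Conv2d(in_frames, 32, 8, stride=4), nn.ReLU()),  # 84 -> 20
                nn.Sequential(nn.Conv2d(32, 64, 4, stride=2), nn.ReLU()),         # 20 -> 9
                nn.Sequential(nn.Conv2d(64, 64, 3, stride=1), nn.ReLU()),         # 9 -> 7
            ])
        self.encoder = agent_encoder
        for p in self.encoder.parameters():
            p.requires_grad_(False)                   # encoder stays fixed

        # Decoder mirrors the encoder; skip connections concatenate encoder maps.
        self.up1 = nn.Sequential(
            nn.ConvTranspose2d(64, 64, 3, stride=1), nn.ReLU())        # 7 -> 9
        self.up2 = nn.Sequential(
            nn.ConvTranspose2d(64 + 64, 32, 4, stride=2), nn.ReLU())   # 9 -> 20
        self.up3 = nn.ConvTranspose2d(32 + 32, in_frames, 8, stride=4) # 20 -> 84
        self.act = nn.Sigmoid()                       # mask values in [0, 1]

    def forward(self, x):
        f1 = self.encoder[0](x)                       # (B, 32, 20, 20)
        f2 = self.encoder[1](f1)                      # (B, 64, 9, 9)
        f3 = self.encoder[2](f2)                      # (B, 64, 7, 7)
        d1 = self.up1(f3)                             # (B, 64, 9, 9)
        d2 = self.up2(torch.cat([d1, f2], dim=1))     # (B, 32, 20, 20)
        return self.act(self.up3(torch.cat([d2, f1], dim=1)))  # (B, 4, 84, 84)
\end{verbatim}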
\subsection{Training Procedure}
We train the causal discovery network to directly minimize the objective function (\ref{eqn:objective function}) with supervised learning. The weights of the encoder (yellow block on the left in Figure \ref{fig:cdnet}) are kept fixed during the training.
Our training procedure includes two stages. In the first stage, to capture the temporal causal relations between states and actions at different timesteps, temporal data is collected for training. More concretely, we first use the agent model to collect $M$ episodes with a fixed horizon $T$. Each episode is divided into a state sequence, an action sequence and a state value sequence. While the state sequences are regarded as the input data, the action and state value sequences together form the label. Once the episode dataset is built, a standard supervised learning procedure is then applied to train the causal discovery network in the second stage. The pseudo-code of training is summarized in Algorithm \ref{alg:TSCI}. In fact, an alternative approach to training our causal discovery network is to use single-timestep state-action pairs and non-temporal objective functions. In our experiments, we will make a fair comparison between our method and similar methods that use non-temporal objective functions.
\setlength{\algomargin}{1.5em}
\begin{algorithm}[h]
\caption{The training procedure of TSCI model}
\label{alg:TSCI}
The agent model or actor $\pi$ to be interpreted and corresponding critic $v$\;
Initialize a causal discovery network $g$. The encoder is initialized with the feature extractor of $\pi$, while the decoder is initialized with random weights\;
Use $\pi$ to collect $M$ episodes with a fixed horizon $T$, and build the dataset for training\;
\For{Epoch = 1, $K$}
{
\For{Iteration = 1, $\lceil M/N \rceil$}
{
Sample $N$ episodes\;
Calculate the objective function defined by Equation \eqref{eqn:objective function}\;
Update the weights of decoder while the weights of encoder are fixed\;
}
}
\end{algorithm}
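The two stages of Algorithm \ref{alg:TSCI} can be sketched as follows. The environment and agent interfaces (classic Gym-style \texttt{reset}/\texttt{step} and an \texttt{agent.act} helper), as well as the hyperparameter values, are placeholders; for brevity the action and value labels are recomputed inside the loss rather than stored with the dataset, which is an equivalent simplification.
\begin{verbatim}
import torch

def collect_episodes(env, agent, num_episodes=10000, horizon=64):
    """Stage 1: roll out the frozen agent to build the episode dataset."""
    dataset = []
    for _ in range(num_episodes):
        obs, states = env.reset(), []
        for _ in range(horizon):
            states.append(torch.as_tensor(obs, dtype=torch.float32))
            with torch.no_grad():
                action = agent.act(obs)           # illustrative helper
            obs, _, done, _ = env.step(action)    # classic Gym API
            if done:
                obs = env.reset()
        dataset.append(torch.stack(states))       # (horizon, C, H, W)
    return dataset

def train_tsci(agent, mask_net, dataset, epochs=20, batch_episodes=8, lr=3e-4):
    """Stage 2: supervised training of the decoder only."""
    trainable = [p for p in mask_net.parameters() if p.requires_grad]
    optimizer = torch.optim.Adam(trainable, lr=lr)
    for _ in range(epochs):
        perm = torch.randperm(len(dataset))
        for i in range(0, len(dataset), batch_episodes):
            batch = [dataset[int(j)] for j in perm[i:i + batch_episodes]]
            loss = sum(tsci_loss(agent, mask_net, ep) for ep in batch) / len(batch)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
\end{verbatim}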
\begin{figure*}[t]
\setlength{\abovecaptionskip}{-0.01cm}
\setlength{\belowcaptionskip}{-0.10cm}
\begin{center}
\includegraphics[width=1.0\linewidth]{temporal_causal_features.pdf}
\end{center}
\caption{Visualization of temporal-spatial causal features discovered by our method. The last four frames are explicitly considered for each state as shown by the dashed black rectangle (time goes from left to right). The frames on the diagonal are identical as shown by the dashed red rectangles.}
\label{fig:temporal causal features}
\end{figure*}
\section{Validity of Our Method}\label{sec:experiments}
Before we apply the proposed method to render temporal-spatial causal interpretations for vision-based RL agents, we first verify the effectiveness of our method through performance evaluation and comparative evaluation in this section. Subsequently, in Section \ref{sec:temporal interpretaions}, the proposed TSCI model is applied to further reveal and understand the role that temporal dependence plays in sequential decision-making.
\subsection{Experiment Setup}
We conduct extensive experiments on Atari 2600 games of the Arcade Learning Environment \cite{bellemare2013arcade}, which is a widely used benchmark in the field of RL interpretability. The agent models to be interpreted are pretrained with an actor-critic setup and the standard PPO training procedure. Then we apply the agent model to generate $10^4$ episodes with a fixed horizon of 64. Finally, the decoder of the causal discovery network is trained using the collected episode data. More details regarding task setup and training hyperparameters are provided in Appendix A.2 of the supplementary material. To enable fair and meaningful evaluations, we mainly select vision-based tasks for three reasons. First, we are better able to manipulate the state on vision-based tasks for specific purposes. Second, most of the existing methods that we wish to compare to are developed exclusively for vision-based tasks. Third, these vision-based tasks are generally partially observable, making it convenient to verify the underlying temporal dependence of sequential decision-making. Nevertheless, we note that the temporal causality introduced in Section \ref{subsec:temporal causal objective} is compatible with any deep RL algorithm and task.
\begin{figure*}[t]
\setlength{\abovecaptionskip}{-0.01cm}
\setlength{\belowcaptionskip}{-0.05cm}
\begin{center}
\includegraphics[width=0.95\linewidth]{saliency_comparisons.pdf}
\end{center}
\caption{Comparing saliency maps generated by different methods including our method, SARFA \cite{puri2020explain}, Gaussian perturbation method \cite{greydanus2018visualizing} and gradient-based method \cite{wang2016dueling}.}
\label{fig:saliency comparisons}
\end{figure*}
\subsection{Evaluations}
The main goal of our evaluations is to demonstrate the effectiveness of our proposed TSCI model by answering the following three questions:
\textbf{\emph{Question 1: Is our method capable of discovering temporal-spatial causal features for a better interpretation of the RL agent's sequential decisions?}}~
Figure \ref{fig:temporal causal features} visualizes the temporal-spatial causal features discovered by our method and reveals how they change over time. The most dominant pattern is that the agent focuses its attention selectively on only small regions which are strongly task-relevant at each timestep, while other regions are very ``blurry'' and can be ignored. In other words, the agent learns what information is important for making decisions and where to look at each timestep. In addition, the agent's sequential decisions can be understood from at least two aspects. First, the agent's decision is causally attributed not only to the features of the current timestep but also to those of past timesteps. For example, as the yellow ellipses show, the driving cars in the last two frames are both found to be remarkably causal for the decision. In fact, multiple observations of the same object can provide information such as motion direction, velocity or acceleration. Second, the same causal features contribute to the actions to varying degrees at different timesteps. As illustrated by the red circles, the cars observed at past timesteps become decreasingly important for the current decision as time goes on. More results on other tasks are provided in Appendix B of the supplementary material.
In particular, it is worth noting that the results in Figure \ref{fig:temporal causal features} only represent the relative importance of discovered features at different timesteps. Therefore, the first two frames may still be important for the agent's decision-making, though they are visually less salient than the last two frames. In fact, the degree of feature saliency is related to the choice of regularization coefficient $\beta$ in Equation \eqref{eqn:objective function}. The ablation analysis of $\beta$ is given in Appendix A.2 of the supplementary material.
\textbf{\emph{Question 2: How does TSCI compare to existing RL interpretation methods in terms of the quality of discovered features?}}~
We compare TSCI against several popular RL interpretation methods including specific and relevant feature attribution (SARFA) \cite{puri2020explain}, Gaussian perturbation method \cite{greydanus2018visualizing} and gradient-based method \cite{wang2016dueling}. Here we do not consider attention-augmented methods for comparisons, since they generally require adapting and retraining the agent model to be interpreted. Figure \ref{fig:saliency comparisons} shows the saliency maps generated by different methods. It can be seen that our method produces higher-resolution and sharper saliency maps than the others. As illustrated by the yellow circles on MsPacman task, our method is able to locate precisely all causal features. In contrast, other methods either highlight lots of non-causal features or omit some causal features, as shown by the yellow circles on Enduro task.
To quantitatively evaluate the quality of features discovered by different methods, we further compare the average return of several policies that can access only the pixels of the particular features obtained by different methods during training. Table \ref{tab:comparison return} summarizes the results of the four methods, and all results are averaged across five random training runs. It can be seen that the policy still achieves good performance when accessing only the pixels of the features discovered by our method. In contrast, we observe obvious performance degradation when using the other methods to generate the features.
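This comparison can be implemented with a thin observation wrapper that exposes to the policy only the pixels kept by a given interpretation method; the sketch below follows the classic Gym interface and treats the mask function of each method as a black box.
\begin{verbatim}
import gym

class FeatureOnlyWrapper(gym.Wrapper):
    """Expose only the pixels highlighted by `mask_fn` (an interpretation
    method); everything else is zeroed out before the policy sees it."""
    def __init__(self, env, mask_fn):
        super().__init__(env)
        self.mask_fn = mask_fn                 # maps an observation to a [0, 1] mask

    def reset(self, **kwargs):
        obs = self.env.reset(**kwargs)
        return obs * self.mask_fn(obs)

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        return obs * self.mask_fn(obs), reward, done, info
\end{verbatim}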
\begin{table}[h]
\centering
\fontsize{9.0}{10}\selectfont
\setlength{\tabcolsep}{1.5mm}{}
\caption{The performance comparison of the features discovered by different methods.}
\label{tab:comparison return}
\begin{threeparttable}
\begin{tabular}{ccccc}
\toprule
Tasks & Ours & SARFA \cite{puri2020explain} & Perturbation \cite{greydanus2018visualizing} & Gradient \cite{wang2016dueling} \cr
\midrule
Enduro & \textbf{2903} & 2369 & 1741 & 819 \cr
Seaquest & \textbf{2517} & 2085 & 1830 & 836 \cr
\bottomrule
\end{tabular}
\end{threeparttable}
\end{table}
\begin{figure*}[t]
\setlength{\belowcaptionskip}{-0.20cm}
\centering
\begin{minipage}[t]{0.32\textwidth}
\centering
\includegraphics[width=0.88\linewidth]{Seaquest_action_matching.pdf}
\end{minipage}
\begin{minipage}[t]{0.32\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{Seaquest_state_matching.pdf}
\end{minipage}
\begin{minipage}[t]{0.32\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{Seaquest_return.pdf}
\end{minipage}
\begin{minipage}[t]{0.32\textwidth}
\centering
\includegraphics[width=0.92\linewidth]{Enduro_action_matching.pdf}
\end{minipage}
\begin{minipage}[t]{0.32\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{Enduro_state_matching.pdf}
\end{minipage}
\begin{minipage}[t]{0.32\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{Enduro_return.pdf}
\end{minipage}
\caption{Reliability evaluation metrics. Comparing against CXPlain objective \cite{schwab2019cxplain} and imitation learning (IL) objective \cite{zhang2020atari}. All results are averaged across seven evaluation episodes.}
\label{fig:evaluation metrics}
\end{figure*}
\textbf{\emph{Question 3: Does our temporal causal objective enable reliable causal interpretations about the RL agent's long-term behavior?}}~
As presented in Section \ref{sec:methodology}, our TSCI model mainly relies on the temporal causal objective to interpret the RL agent's long-term behavior, hence we consider two recently popular objectives for comparison against our temporal causal objective. 1) The CXPlain objective \cite{schwab2019cxplain}, which also builds on the definition of Granger causality \cite{granger1969investigating} used by our TSCI model but in a non-temporal form. In our implementation, we apply Proposition \ref{pro:surrogate contribution degree} to avoid the use of optimal actions in RL. 2) The imitation learning (IL) objective \cite{zhang2020atari, pomerleau1991efficient}, which treats the behavior matching task as a multi-class classification problem with a standard log-likelihood objective. In particular, similar to the temporal causal objective, we add the single-timestep value prediction error to the CXPlain and imitation objectives for fairness of comparison. It is worth emphasizing that both the CXPlain and imitation objectives use independent state-action pairs for training, while our temporal causal objective treats an episode (or state-action sequence) as a sample. Therefore, different from the others, our temporal causal objective minimizes the cumulative discounted error rather than the single-timestep prediction error. Overall, all of these methods use the same network architecture; only the objective functions used for training differ.
Now we consider the metrics used for comparisons. In the context of sequential decision-making, we are usually more concerned with the agent's long-term return than one-step reward. Therefore, a reasonable way to evaluate the reliability is to measure the degree of long-term behavior matching rather than one-step action matching. To that end, we use four evaluation metrics to measure the reliability of the generated causal interpretations, namely temporal causality error (TCE) $e_{tc}$, normalized return $\overline{R}$, action matching error (AME) $e_a(t)$ and state matching error (SME) $e_s(t)$. Specifically, suppose the trajectories $\{s_0,\pi(s_0),r_1,s_1,\pi(s_1)\cdots\}$ and $\{s_0,\pi(f_{exp}(s_0)),\hat{r}_1,\hat{s}_1,\pi(f_{exp}(\hat{s}_1))\cdots\}$ are generated by the agent model $\pi$ taking as input all available information and only the causal features discovered by $f_{exp}$ respectively, then the TCE $e_{tc}$ is calculated using equation \eqref{eqn:non-causal feature series} in an undiscounted form while the other evaluation metrics are calculated as follows:
\begin{align}\label{eqn:evaluation metrics}
\overline{R} &= \sum\nolimits_t\hat{r}_t\Big/\sum\nolimits_t r_t, \\
e_a(t) &= KL\big(\pi(f_{exp}(\hat{s}_t)), \pi(s_t)\big), \\
e_s(t) &= \big\|v(f_{exp}(\hat{s}_t)) - v(s_t)\big\|^2_2,
\end{align}
where $KL(\cdot)$ denotes the Kullback-Leibler divergence. In fact, similar metrics are suggested in prior work \cite{zhang2020atari}. In the above setting, two episodes are obtained by performing different rollouts from the same initial state, indicating that the SME $e_s(t)$ and AME $e_a(t)$ depend on the whole trajectory before time-step $t$ rather than only the current state-action pair. Consequently, these metrics are able to measure the consistency of long-term behaviors between two trajectories on deterministic RL environments.
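For reference, the per-step metrics can be computed as follows once the two rollouts have been logged as tensors; the KL ordering shown is one common reading of $e_a(t)$, and the squared value gap implements $e_s(t)$.
\begin{verbatim}
import torch
import torch.nn.functional as F

def rollout_metrics(rewards_full, rewards_masked,
                    logits_full, logits_masked,
                    values_full, values_masked):
    """Normalized return, action matching error e_a(t), state matching error e_s(t)."""
    norm_return = rewards_masked.sum() / rewards_full.sum()
    ame = F.kl_div(F.log_softmax(logits_full, dim=-1),   # KL(masked || full)
                   F.softmax(logits_masked, dim=-1),
                   reduction="none").sum(-1)             # one value per timestep
    sme = (values_masked - values_full).pow(2)           # squared value gap per step
    return norm_return, ame, sme
\end{verbatim}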
Figure \ref{fig:evaluation metrics} shows the comparison results of all three methods. More results on other tasks are provided in Appendix B of the supplementary material. It can be seen that although all three methods have no performance loss in terms of mean normalized return, our TSCI model that uses temporal causal objective has smaller behavior matching errors and temporal causality error than the others. Hence our temporal causal objective enables reliable interpretations about the RL agent's long-term behavior from the temporal causality perspective.
\section{Temporal Interpretations for Deep RL}\label{sec:temporal interpretaions}
As discussed above, an important observation about the temporal-spatial causal features discovered by the TSCI model is that the agent can extract high-level semantic information from consecutive observations of the same object. In other words, there exists an underlying temporal dependence between temporally adjacent frames, which are connected by the same semantic concepts. The goal of this section is to further reveal and understand the role that temporal dependence plays in sequential decision-making. To that end, we first provide a counterfactual analysis of temporal dependence by evaluating its impact on the agent's long-term performance, and leverage the TSCI model to explain the resulting counterfactual phenomenon. Then, the TSCI model is further applied to interpret temporally-extended RL agents and to reason, from the point of view of temporal dependence, about why frame stacking is generally necessary even for agents that use a recurrent structure. Finally, we apply the proposed method to provide more downstream interpretations for vision-based RL.
\subsection{Counterfactual Analysis of Temporal Dependence}
Here an intervention-based approach is proposed to render empirical evidence about the underlying temporal dependence in vision-based RL. Specifically, we intervene on the input state (or temporally extended sequence of frames) to produce counterfactual conditions. Prior work \cite{szegedy2014intriguing} has focused on manipulating the pixel input, but this does not modify the underlying temporal dependence. Instead, we intervene directly on the input state to change semantic concepts related to temporal dependence. For convenience of manipulation, we intervene on the input state by masking out partial semantic information located on different frames. For example, we denote by ``34'' the scheme where the semantic information in the third and fourth frames is partially masked out, as shown in Figure \ref{fig:example}. In particular, we denote by ``None'' the case without any semantic information masked out.
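As an illustration, such an intervention can be implemented by zeroing out a chosen region in the frames named by the scheme string; the region itself is task-specific and is therefore passed in explicitly in the sketch below.
\begin{verbatim}
import numpy as np

def intervene(state, scheme, region):
    """Mask out `region` on the frames named by `scheme`, e.g. "34" masks the
    third and fourth stacked frames; "None" leaves the state unchanged.
    `state` has shape (num_frames, H, W); `region` is a (rows, cols) pair of
    slices chosen per task."""
    if scheme == "None":
        return state
    state = state.copy()
    rows, cols = region                    # e.g. (slice(40, 70), slice(8, 76))
    for ch in scheme:                      # characters '1'..'4' index the frames
        state[int(ch) - 1, rows, cols] = 0.0
    return state
\end{verbatim}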
\begin{figure}[t]
\setlength{\abovecaptionskip}{-0.01cm}
\setlength{\belowcaptionskip}{-0.20cm}
\begin{center}
\includegraphics[width=0.95\linewidth]{example.pdf}
\end{center}
\caption{An example for how to intervene on the input state, which consists of the last four consecutive frames $``o_{t-3}\cdots o_t"$.}
\label{fig:example}
\end{figure}
In order to evaluate the impact of temporal dependence on the agent's long-term performance, we compare the average return of the agent model under different counterfactual conditions. In our implementation, we suppress the underlying temporal dependence between frames to varying degrees by applying different intervention schemes to the temporally-extended frames as described above. The agent to be evaluated remains unchanged except for the input state. The empirical results are summarized in Figure \ref{fig:reliability return}. It can be seen that the reduction of temporal dependence results in different degrees of performance degradation. More concretely, first, the performance gradually decreases as the degree to which we intervene on the input state increases (i.e., ``4''$\rightarrow$``34''$\rightarrow$``234''$\rightarrow$``1234''). Second, if the temporal dependence is destroyed completely, as in the schemes ``234'' and ``1234'', the agent is close to collapse. Third, although the scheme ``123'' also destroys the temporal dependence, its performance does not collapse dramatically since the fourth frame is the most important frame for making decisions. Fourth, intervening on a single previous frame does not lead to obvious performance degradation, as in the schemes ``1'', ``2'' and ``3''. Furthermore, we can conclude that the earlier a frame is observed, the less important it is to the current decision, since the intervention on the current frame leads to larger performance degradation than that on the previous frames, as can be seen from the schemes ``1'', ``2'', ``3'' and ``4''.
It is worth noting that the semantic information masked out mainly consists of task-irrelevant (or background) information and semantic features related to temporal dependence. Meanwhile, it can be seen from the last column of Figure \ref{fig:evaluation metrics} that the mean normalized return is greater than or equal to 1. In other words, the agent model taking as input only the causal features discovered by TSCI model achieves better performance than that taking as input all available information. Therefore, it can be concluded that the loss of background information does not cause performance degradation, and the performance degradation observed in Figure \ref{fig:reliability return} is mainly attributed to the destruction of semantic features related to temporal dependence.
\begin{figure}[t]
\setlength{\abovecaptionskip}{-0.01cm}
\setlength{\belowcaptionskip}{-0.15cm}
\begin{center}
\includegraphics[width=0.95\linewidth]{reliability_return.pdf}
\end{center}
\caption{Performance evaluations of temporal causal features when different intervention schemes are applied to the input state. The agent to be evaluated remains unchanged except for the input state.}
\label{fig:reliability return}
\end{figure}
To further explain and understand how the reduction of temporal dependence affects the agent's decision and performance, we apply the proposed TSCI model to visualize the agent's attention for making decisions under different counterfactual conditions. Specifically, for each intervention scheme, we retrain a separate TSCI model to discover temporal-spatial causal features while the agent to be interpreted remains unchanged. The results are shown in Figure \ref{fig:reliability}, which renders empirical causal reasoning about temporal dependence from several aspects. First, by comparing ``None'' to ``3'' or ``4'', we observe that the agent shifts some of its attention to the previous frames when we intervene on only a single frame of the input state. In other words, the agent can partially recover temporal dependence by extracting high-level semantic features from the unmasked part of the input state. Second, when the temporal dependence is destroyed completely, as in ``234'' and ``1234'', the agent fails to learn the representation of high-level semantic concepts and is thus prone to collapse. The above observations provide empirical explanations of why the agent is still able to perform well in schemes ``1'', ``2'', and ``3'' while there is obvious performance degradation in schemes ``234'' and ``1234''. Third, under the intervention scheme ``34'', the agent shifts some of its attention to the first two frames, hence the performance does not collapse dramatically as in ``234'' and ``1234''. Nevertheless, the scheme ``34'' also leads to obvious performance degradation, as can be seen in Figure \ref{fig:reliability return}, since the first two frames are far less important than the last two frames according to the result of ``None'' in Figure \ref{fig:reliability}. In summary, both the last frame and temporal dependence are important to the agent's performance. While destroying one of them only leads to varying degrees of performance degradation, destroying both is likely to cause a dramatic collapse in performance. In fact, due to the essential role of temporal dependence in vision-based RL, it may be untrustworthy to explain vision-based RL agents with some existing interpretation methods developed exclusively for supervised learning, which does not involve the temporal dimension.
\begin{figure*}[t]
\setlength{\belowcaptionskip}{-0.10cm}
\begin{center}
\includegraphics[width=0.96\linewidth]{reliability.pdf}
\end{center}
\caption{Visualization of temporal-spatial causal features discovered by the proposed TSCI model when different intervention schemes are applied to the input state. Every row is the same state consisting of four consecutive frames (time goes from left to right). The black areas are left out during the retraining of the TSCI model.}
\label{fig:reliability}
\end{figure*}
\subsection{Interpreting Temporally-Extended Agents}
In the context of vision-based RL, the agent is usually devised to have a recurrent structure such as an RNN, especially for partially observable environments. The main motivation for such a design is to enhance and utilize the underlying temporal dependence, which was empirically verified to play a significant role in sequential decision-making in the previous section. In principle, an agent with a recurrent structure should be able to guarantee temporal dependence by extracting high-level information from consecutive observations. However, the frame stacking technique is generally needed to obtain temporally-extended input states in addition to applying a recurrent structure in many applications. Therefore, a natural question is whether frame stacking is redundant for an agent with a recurrent structure. In other words, is it necessary for a temporally-extended agent model to use temporally-extended inputs in vision-based RL?
\begin{figure}[t]
\setlength{\abovecaptionskip}{-0.01cm}
\setlength{\belowcaptionskip}{-0.25cm}
\begin{center}
\includegraphics[width=0.95\linewidth]{frame_return.pdf}
\end{center}
\caption{Performance comparison of five agents that apply different forms of input state or structure. While the first agent applies a fully-connected (fc) feedforward structure, the last four agents use the same recurrent structure and their input states are obtained by stacking different numbers of the last consecutive frames together. All results are averaged across five independent runs.}
\label{fig:frame return}
\end{figure}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.97\linewidth]{frame_stacking_number.pdf}
\end{center}
\caption{Comparison between the attentions of four recurrent agent models that apply frame stacking in different ways. Each row visualizes the attention of the agent whose input states $s_k$ are obtained by stacking $m$ consecutive frames $``o_{k-m+1}\cdots o_{k-1}o_k"$ together. Each red dashed rectangle represents the input state $s_k$ at the $k$-th timestep. Here we consider the input states that only include the last four frames $``o_{t-3:t}"$, and thus not all of the last four input states $``s_{t-3:t}"$ are visualized due to space limitations.}
\label{fig:frame stacking number}
\end{figure*}
To answer the above question, we perform comparative experiments to assess the impact of both the recurrent structure and the frame stacking technique on the agent's performance. Specifically, we retrain two groups of agents that apply different forms of input state and network structure. The first group of agents have the same form of input state but different network structures, and the second group of agents use the same recurrent structure but their input states consist of different numbers of the last consecutive frames. The empirical results are shown in Figure \ref{fig:frame return}. It can be seen that the agent performs badly when the input state includes only the current frame and no previous frames (i.e., the frame stacking technique is not applied). In contrast, using a fully-connected feedforward structure does not lead to significant performance degradation, indicating that a recurrent structure by itself may not be enough to capture the complete temporal dependence between consecutive frames, although it shows remarkable memory ability for low-dimensional sequential data. In addition, the results in Figure \ref{fig:temporal causal features} show that the first two frames are not as important as the last two frames, but we can see that using only the last two or three frames in states causes information loss and thus results in performance degradation, as shown by the third and fourth columns of Figure \ref{fig:frame return}. In summary, we observe that the agent performs better as the input state includes more consecutive frames. Based on these observations, it can be concluded that only using a recurrent structure is not enough for the agent to achieve good performance in vision-based RL, and it is generally necessary to apply the frame stacking technique to obtain temporally-extended input states, especially for partially observable RL environments.
Intuitively, the agent with a recurrent structure is expected to be capable of learning temporal representations for making sequential decisions. To explain and understand why a recurrent agent model still needs temporally-extended inputs in vision-based RL, we leverage the proposed TSCI model to visualize how the recurrent agent's attention changes as the number of frames stacked together in states varies. The empirical results are visualized in Figure \ref{fig:frame stacking number}. It can be seen that the agent fails to attend to causal features located at the previous frames through recurrent structure. For example, the agent mistakenly focuses most of its attention on the current frame while the previous frames are considered almost irrelevant to the current decision as shown in the first row. More concretely, the green background of $s_t$ in the first row indicates that the background in the current frame is mistakenly considered more important than the lane lines in the previous frames, although the lane lines in the current frame are properly considered the most important. As a consequence, the agent cannot build completely the temporal dependence between consecutive frames that are input to the model at different timesteps. Additionally, we can observe that the temporal dependence between the frames that are stacked together in the same state is built successfully, as shown in the last three rows. Therefore, it is generally necessary for temporally-extended agent model to use temporally-extended inputs in vision-based RL from the perspective of temporal dependence.
In fact, recurrent structures have been shown to be effective in other fields like natural language processing (NLP). The reduced effectiveness of recurrent structures in vision-based RL may be attributed to the following reasons. First, there exists information loss in recurrent structures such as GRUs, and the decision of an RL agent relies heavily on the temporal dependence between consecutive observations and is thus more sensitive to information loss than in NLP, where there are usually only semantic and syntactic connections between different words. Second, the learning of low-dimensional latent vectors of images is generally coupled with the learning of credit assignment, but sparse task rewards do not provide enough signal for the agent to learn what to store in memory, and RL agents require self-supervised auxiliary training to learn an abstract, compressed representation of each input frame \cite{fortunato2019generalization, wayne2018unsupervised, lampinen2021towards}. More potential reasons and their verification are important research directions for future work.
\subsection{More Downstream Interpretations for Deep RL}
In addition to the network structure discussed above, there are many other factors that also affect the behavior of RL agents. In this section, we further discuss the impact of different RL algorithms and environments on the causal features of vision-based RL agents. Such an analysis can provide explainable basis for the selection of models in different scenarios.
\textbf{Case 1: RL algorithm.}
Here we compare the temporal-spatial causal features of two RL agents that are trained using the PPO and Advantage Actor-Critic (A2C) algorithms, respectively. The empirical results are visualized in Figure \ref{fig:downstream_algorithm}. It can be seen that the overall structure of the temporal-spatial causal features discovered by the proposed TSCI method is similar for both agents, but the agent trained with the PPO algorithm exhibits better feature saliency and completeness than the agent trained with A2C. This observation partly explains why PPO generally performs better than A2C on most RL tasks.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.95\linewidth]{downstream_algorithm.pdf}
\end{center}
\caption{The temporal-spatial causal features of two RL agents that are trained using PPO and A2C respectively.}
\label{fig:downstream_algorithm}
\end{figure}
\textbf{Case 2: Environment.}
To evaluate the performance of our method on different environments, we perform comparative experiments on two similar tasks from two different environments, namely Enduro of the Arcade Learning Environment and Lane-following of Duckietown \cite{gym_duckietown}. While the Arcade Learning Environment builds on top of the Atari 2600 emulator Stella, Duckietown is a self-driving car simulator that builds on top of the ROS environment. As shown in Figure \ref{fig:downstream_environment}, our method is able to discover causal features on both the Arcade Learning Environment and Duckietown, although they show slight differences in feature saliency.
\begin{figure}[t]
\setlength{\belowcaptionskip}{-0.15cm}
\begin{center}
\includegraphics[width=0.95\linewidth]{downstream_environment.pdf}
\end{center}
\caption{The temporal-spatial causal features discovered by our method on Duckietown and Arcade Learning environments.}
\label{fig:downstream_environment}
\end{figure}
\section{Conclusion}
We presented a trainable temporal-spatial causal interpretation (TSCI) model for vision-based RL agents. The TSCI model is based on the formulation of temporal causality between sequential observations and decisions. To identify temporal-spatial causal features, a separate causal discovery network is employed to learn the temporal causality, which emphasizes the explanation of the agent's long-term behavior rather than a single action. This approach has several appealing advantages. First, it is compatible with most RL algorithms and applicable to recurrent agents. Second, we do not require adapting or retraining the agent model to be interpreted. Third, the TSCI model can, once trained, be used to discover temporal-spatial causal features in little time. We showed experimentally that the TSCI model can produce high-resolution and sharp attention masks to highlight task-relevant temporal-spatial information that constitutes most evidence about how vision-based RL agents make sequential decisions. We also demonstrated that our method is able to provide valuable causal interpretations about the agent's behavior from the temporal-spatial perspective. In summary, this work provides significant insights towards interpretable vision-based RL from a temporal-spatial causal perspective. A more extensive study will be carried out to reason about the temporal dependence between causal features of different observations, and to explore how to devise better RL agent structures with strong temporal dependence.
\section*{Acknowledgments}
Wenjie Shi and Gao Huang contribute equally to this work. This work is supported in part by the National Science and Technology Major Project of the Ministry of Science and Technology of China under Grant 2018AAA0100701, the Major Research and Development Project of Guangdong Province under Grant 2020B1111500002, the National Natural Science Foundation of China under Grants 61906106 and 62022048. We would like to thank the reviewers for their valuable comments.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
Azimuth disambiguation of the transverse field in vector
magnetograph data is a key problem that determines the reliability of
physical research utilizing the knowledge of the full vector of the
photospheric magnetic field. A complete and exact
solution to this problem is hardly possible because of the high level
of noise in transverse field data and the impossibility of spatially
resolving the real pattern of the magnetic field (whose fine
structure is significantly smaller than the spatial resolution of
modern magnetographs). In this connection, the main requirement
for azimuth disambiguation algorithms is to reach
invariably high quality that adequately describes (with
minimum distortions) the physically significant real structural
elements of magnetic configurations. Also of importance is the
code efficiency for regions regardless of their location on the
solar disc; this allows us to use data from all available
measurements, including those near the limb. To perform on-line
processing of continuously incoming data, the final preparation
time should be comparable to the time of measurement data acquisition.
Currently, the best codes providing invariably high quality and
meeting the requirements for physical investigation are the codes
based on the simulated annealing algorithm (different variants of
the "minimum energy" method, ME; \inlinecite{Metcalf1};
\inlinecite{Leka2}; \inlinecite{Rudenko}). The quality of their
outcome is significantly higher than the results of other available
codes \cite{Metcalf1}. Unfortunately,
ME codes are among the slowest: the required time grows
with the increasing number of nodes of the data grid. Therefore they are
not suitable for real-time processing of a continuous data flow with
high spatial resolution. The SDO/HMI instrument, for instance,
generates a vector magnetogram with a resolution of 4096x4096 once
every 12 minutes.
The fast NPFC algorithm \cite{Georgoulis} is now being examined for use
in data flow processing, since it represents the best
compromise between quality and processing time.
The quality of the NPFC algorithm is ranked second among well-known
methods, though it is inferior to that of ME codes. It does not
always show invariably satisfactory results for different real
configurations of magnetic regions, particularly for those near the
limb.
In this paper, we present examples of the azimuth disambiguation
of model and real ambiguous magnetograms with the use of three
codes: the new Super Fast and Quality (SFQ), NPFC, and SME
\cite{Rudenko}. We show that the quality of the SFQ outcome far
surpasses that of NPFC. Besides, SFQ is much faster than NPFC.
Moreover, the time for the azimuth disambiguation of
vector-magnetograph data with any spatial resolution (including
SDO/HMI measurements of the full disc) may be significantly
reduced by parallelizing SFQ into several processes.
This makes on-line processing of the current data flow feasible.
\section{The method}
The SFQ method is a two-step processing system. Step 1 involves
a preliminary azimuth disambiguation, using a special metric (the grid
difference metric) relying on reference information from the
potential field. Step 2 (cleaning) comprises the application of
smoothing masks at several scales. Our solution does not use
random search (as in ME) or convergent iterations (as in NPFC),
which is why it is very fast.
\subsection{Step 1}
The key point of the method is the application of the "grid difference
metric" defining a measure of difference between the initial
ambiguous field ${\bf B}^{amb}$ and the potential (reference)
field ${\bf B}^{ref}$. The metric is constructed for each node $i$,
$j$ of the magnetogram grid as follows:
\begin{eqnarray}\label{eq:1}
g^{ij}\left( {\bf B}_{\perp }^{amb}\right)
=\sqrt{\sum_{s=0}^{1}\sum_{t=0}^{1}\left[ \Delta _{s}\left(
B_{t}^{amb}-B_{t}^{ref}\right) \right] ^{2}};\nonumber\\
\Delta _{0}f=f^{i+1,j}-f^{i,j},\quad \Delta _{1}f=f^{i,j}-f^{i,j+1};\\
B_{0}=B_{x},B_{1}=B_{y}.\nonumber
\end{eqnarray}
Metric (\ref{eq:1}) is then used as a conditional mask to
determine the common sign for the transverse components of the preliminary
field ${\bf B}^{Step\_1}$:
\begin{equation}\label{eq:2}
\left( {\bf B}_{\perp }^{Step\_1} \right)^{i,j}=\left\{
\begin{array}{cl}
\left( {\bf B}_{\perp }^{amb} \right)^{i,j} & if \quad g^{i,j}\left( {\bf B}_{\perp }^{amb}\right) \leq g^{i,j}\left( -{\bf B}_{\perp }^{amb}\right) \\
-\left( {\bf B}_{\perp }^{amb}\right)^{i,j} & if \quad g^{i,j}\left( {\bf B}_{\perp }^{amb}\right) > g^{i,j}\left( -{\bf B}_{\perp }^{amb}\right) \\
\end{array}
\right.
\end{equation}
According to the definition of metric (\ref{eq:1}), its application is
valid only for nodes with a locally continuous ${\bf B}^{amb}$
(i.e., a random distribution of azimuth directions over the nodes is not
allowed for ${\bf B}^{amb}_{\perp }$). Consequently, the requirement that
the initial distribution of the transverse field be locally continuous in most
nodes is a necessary condition. This requirement is easily
satisfied, for instance, if a component of the transverse field
($x$ or $y$) is assumed to be everywhere positive. The potential
reference field can be obtained using the standard method of
FFT extrapolation from the longitudinal component $B_z$. In our
case, we used the potential extrapolation in quasi-spherical
geometry \cite{Rudenko} and subsequent extrapolation of the field
into the nodes of the initial magnetogram grid.
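For illustration, the whole of Step 1 can be written in a few lines of
array code. The Python sketch below is not the published IDL
implementation but only a compact restatement of Equations (\ref{eq:1})
and (\ref{eq:2}); variable names are ours, a flat regular grid is
assumed, and $g^{i,j}(-{\bf B}_{\perp}^{amb})$ is evaluated by flipping
the transverse field everywhere in the stencil, as the formula is
written.
\begin{verbatim}
import numpy as np

def grid_difference_metric(Bx, By, Bx_ref, By_ref):
    """Grid difference metric g of Eq. (1) at every inner node (i, j).
    Edge nodes, where the forward differences are undefined, are set to 0."""
    g2 = np.zeros_like(Bx, dtype=float)
    for B, B_ref in ((Bx, Bx_ref), (By, By_ref)):
        d = B - B_ref
        # Delta_0 f = f[i+1, j] - f[i, j];  Delta_1 f = f[i, j] - f[i, j+1]
        g2[:-1, :-1] += (d[1:, :-1] - d[:-1, :-1]) ** 2
        g2[:-1, :-1] += (d[:-1, :-1] - d[:-1, 1:]) ** 2
    return np.sqrt(g2)

def step1_disambiguation(Bx, By, Bx_ref, By_ref):
    """Chooses the azimuth sign per node by comparing g(B) and g(-B), Eq. (2)."""
    g_plus  = grid_difference_metric( Bx,  By, Bx_ref, By_ref)
    g_minus = grid_difference_metric(-Bx, -By, Bx_ref, By_ref)
    flip = g_plus > g_minus      # -B fits the reference better at this node
    sign = np.where(flip, -1.0, 1.0)
    return sign * Bx, sign * By
\end{verbatim}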
We think that the principle of construction of metric $g$
(\ref{eq:1}) is similar to the metric used in the most effective
ME method:
\begin{equation}\label{eq:3}
E=\left| j_z \right|+\left| \nabla_\perp \cdot {\bf B} +
\partial_z B^{pot}_z \right|\equiv \left| j_z -j^{pot}_z \right|+\left| \nabla_\perp \cdot {\bf B} -\nabla_\perp \cdot {\bf B}^{pot}\right|
\end{equation}
Indeed, close inspection of the equation shows that, formally, the
terms in the right part of identity (\ref{eq:3}) are
particular linear combinations of the differences in (\ref{eq:1}). For
simplicity of comparison, we confine ourselves to the case of the
flat approximation and $B_l=B_z$. We attach a certain physical
meaning to the summands of (\ref{eq:3}). The first summand in (\ref{eq:3})
provides minimisation of the vertical current. This term is
responsible for nodes in which the local continuity of the transverse
ambiguous field is disturbed. In other nodes, it does not depend
on the azimuth direction of the transverse field. The second summand
in (\ref{eq:3}) simulates the divergence; it reflects the
correlation of some differences between the test and reference
fields (i.e., it performs a function analogous to (\ref{eq:1})). It
is thus reasonable to suppose that the main positive function of
the azimuth selection in both cases is performed by differential
relations between the test and reference fields. We do not assign
the value of a geometric object's element (similar to the value of
field derivatives making up tensor components of vector field
derivatives) to each difference $\Delta _{s}f$ in (\ref{eq:1}). We
consider these differences as formal functions of two nearby points,
reflecting the local coupling between the test and reference fields.
This approach allows us to consider all magnetogram nodes as
equivalent (regardless of their location in the visible part of
the photosphere) and relieves us of having to use additional grids
and geometric transformations. Notice that the NPFC method (which
is referred to as nonpotential) uses the approximation $\frac{\partial
B_{z}}{\partial z}=\frac{\partial B_{z}^{pot}}{\partial z}$
\cite{Metcalf2} when deducing the equations for the current component
$B_c$ in indirect form. This condition implies a differential
reference relation with the potential field.
\subsection{Step 2}
The necessity to clean the preliminary magnetogram after Step 1 is
caused by the peculiar features of our application of metric
(\ref{eq:1}). Let us suppose that we apply Step 1 to a noise-free
magnetogram of the potential field. In this case, we would
probably observe the following: the solution to the disambiguation
problem is rigorous in the main continuum of points in whose
vicinities the initial distribution of the transverse field is
continuous. Random chains of isolated contrast (bad) pixels in the
transverse component images would be observed only on lines of
local data discontinuity. Almost all of these bad pixels may be
cleaned in one go by comparing with the smoothed transverse
field $\overline{{\bf B}}_\perp$:
\begin{equation}\label{eq:4}
\left( {\bf B}_{\perp }^{n+1} \right)^{i,j}=\left\{
\begin{array}{cl}
\left( {\bf B}_{\perp }^{n} \right)^{i,j} & if \quad \left( {\bf B}_{\perp }^{n}\cdot\overline{{\bf B}}_\perp^n\right)^{i,j} > 0 \\
-\left( {\bf B}_{\perp }^{n}\right)^{i,j} & if \quad \left( {\bf B}_{\perp }^{n}\cdot\overline{{\bf B}}_\perp^n\right)^{i,j} \leq 0 \\
\end{array}
\right.
\end{equation}
The typical distribution of bad pixels after Step 1 processing of real
magnetograms is well exemplified by images of the transverse
components $(B_x, B_y)$ of Hinode/SOT SP data (Level 2) in the
magnetic region AR10930 for two of its locations - near the disc
centre (Fig. \ref{fig:1}) and near the limb (Fig. \ref{fig:2}).
The figures show that the general pattern of the transverse field is
satisfactorily neat already after Step 1 processing, apart from the
bad pixels; noteworthy is the fact that the outcome quality near
the limb is comparable to that near the centre. This suggests that
the chosen metric (\ref{eq:1}) is quite effective, irrespective of
proximity to the limb. Close inspection of the images in Figures
\ref{fig:1} and \ref{fig:2} shows that there are many bad
microfragments (groups of adjacent bad pixels). It is obvious that
a single application of (\ref{eq:4}) may be insufficient for many of
such microfragments. On the other hand, it is reasonable to expect
that multiple applications of (\ref{eq:4}), given relevant
smoothing parameters, may lead to the "collapse" of fragments due
to the motion of their boundaries towards decreasing absolute
magnitude of the transverse field. In practice, most types of bad
fragments have the necessary feature: closed fragments, as a rule,
collapse (towards the necessary side), whereas open fragments
(where part of the boundary lies in the weak noisy field and is not seen)
shift their boundaries into the noise region.
We have conducted numerous tests and selected the following type
of cleaning (Step 2). Two types of smoothing procedures are used
for cleaning. In terms of the IDL code, they have the form: \\
a) Smoothing(s) - sBx=smooth(Bx,s,$\backslash$edge)-Bx/float(s\symbol{"5E}2) \& sBy=smooth(By,s,$\backslash$edge)-By/float(s\symbol{"5E}2) \\
b) Median(s) - sBx=median(Bx,s,$\backslash$even) \& sBy=median(By,s,$\backslash$even) \\
Here, $s\times s$ is the size of the smoothing window. In this form,
smoothing at each node of the grid is performed only over the adjacent
nodes, excluding the node value itself. This modification enhances
the features of the collapse. Either of the smoothing types a) or b), with
the chosen parameter $s$, is repeatedly applied together with the subsequent
procedure (\ref{eq:4}) in a cycle with the exit condition: the
number of iterations reaches 300, or the number of modified pixels is
less than 5 or 0.01 \% of the total number of pixels.
The final cleaning (Step 2) is the following set of consecutive
cycles: \\
Loop1 - \emph{Median}(3); \\
Loop2 - \emph{Smoothing}(19); \\
Loop3 - \emph{Smoothing}(9); \\
Loop4 - \emph{Smoothing}(5); \\
Loop5 - \emph{Smoothing}(3).\\
We have used this scheme for all the examples of calculations of
model and real magnetograms presented below\footnote{Many
configurations of the "cleaning" mode are possible, and some of
them may provide better results. The "cleaning" version
described in this paper is therefore most likely not final and may be
improved in the future. The latest version of the SFQ code
implemented in the IDL language is available at
\url{http://bdm.iszf.irk.ru/sfq_idl}}.
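A compact Python sketch of this Step 2 configuration is given below. It
mimics the IDL procedures above with scipy filters, follows our reading
of the exit condition, and is only an illustration; the published IDL
code at the URL above remains the reference implementation.
\begin{verbatim}
import numpy as np
from scipy.ndimage import uniform_filter, median_filter

def smoothing_mask(B, s):
    """'Smoothing(s)': s x s boxcar average with the central node excluded."""
    return uniform_filter(B, size=s, mode='nearest') - B / float(s ** 2)

def median_mask(B, s):
    """'Median(s)': running s x s median."""
    return median_filter(B, size=s, mode='nearest')

def cleaning_cycle(Bx, By, s, kind='smooth', max_iter=300):
    """Repeatedly applies Eq. (4) with the chosen mask until few pixels change."""
    stop = max(5, int(1e-4 * Bx.size))   # our reading of the exit condition
    mask = smoothing_mask if kind == 'smooth' else median_mask
    for _ in range(max_iter):
        sBx, sBy = mask(Bx, s), mask(By, s)
        flip = (Bx * sBx + By * sBy) <= 0   # pixel disagrees with smoothed field
        Bx = np.where(flip, -Bx, Bx)
        By = np.where(flip, -By, By)
        if flip.sum() < stop:
            break
    return Bx, By

def step2_cleaning(Bx, By):
    """Full Step 2: Median(3), then Smoothing(19), (9), (5), (3)."""
    Bx, By = cleaning_cycle(Bx, By, 3, kind='median')
    for s in (19, 9, 5, 3):
        Bx, By = cleaning_cycle(Bx, By, s, kind='smooth')
    return Bx, By
\end{verbatim}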
\section{Model tests}
\subsection{Known models} \label{sect:known_models}
To compare SFQ with other codes, we have processed all ambiguous
magnetograms\footnote{The magnetograms are available on our web
page \url{http://bdm.iszf.irk.ru/SFQ_Disambig}} of known models
used by \inlinecite{Metcalf2} and \inlinecite{Leka1} to estimate
the majority of known methods for the azimuth disambiguation:
\begin{itemize}
\item \url{http://www.cora.nwra.com/AMBIGUITY_WORKSHOP/2005/DATA_FILES/Barnes_TPD7.sav}
\item \url{http://www.cora.nwra.com/AMBIGUITY_WORKSHOP/2005/DATA_FILES/fan_simu_ts56.sav}
\item \url{http://www.cora.nwra.com/AMBIGUITY_WORKSHOP/2006_workshop/TPD10/TPD10a.sav}
\item \url{http://www.cora.nwra.com/AMBIGUITY_WORKSHOP/2006_workshop/TPD10/TPD10b.sav}
\item \url{http://www.cora.nwra.com/AMBIGUITY_WORKSHOP/2006_workshop/TPD10/TPD10c.sav}
\item \url{http://www.cora.nwra.com/AMBIGUITY_WORKSHOP/2006_workshop/FLOWERS/flowers13a.sav}
\item \url{http://www.cora.nwra.com/AMBIGUITY_WORKSHOP/2006_workshop/FLOWERS/flowers13b.sav}
\item \url{http://www.cora.nwra.com/AMBIGUITY_WORKSHOP/2006_workshop/FLOWERS/flowersc.sav}
\end{itemize}
All processing results are presented at
\url{http://bdm.iszf.irk.ru/SFQ_Disambig/models.zip} as a set of
graphs of the transverse components of the field (for a visual
estimate) and as the corresponding set of digital IDL sav files
(for a quantitative assessment of the vector magnetograms' quality).
\subsection{Answer vector model of AR 10930}\label{sect:answer}
The following type of testing of $\pi$-disambiguation methods
relies on an answer vector model of the photospheric field of a
real magnetic region. Such a model can be easily obtained by
fixing the vector components of a real magnetogram with the
$\pi$-ambiguity removed (the reference magnetogram) by the reference
method providing the most reliable results. The answer model is the field
components of the reference magnetogram transformed into the
Carrington spherical coordinate system, with interpolation into
nodes of a uniform spherical grid. The resulting field is then
used to generate answer magnetograms simulating the transit of the
selected magnetic configuration across the Sun's disc.
Transforming the answer magnetograms into ambiguous ones and comparing
their processing results, we can obtain a quantitative assessment
of the method under study. Such an approach to modelling allows
us to represent the peculiarities of vector data and the features of the
real magnetic structure in the model to a great extent. The
quality of the quantitative assessment depends on the quality of the
reference magnetogram. Disambiguation errors in the reference
magnetogram affect the quality of the subsequent tests. The tests in
section \ref{sect:known_models} do not have this disadvantage; on
the other hand, there is no assurance that they properly represent
the real peculiarities of the data.
When making the answer vector model in our study, we used vector
data of SOT/SP level 2 (12-17 December 2006) in AR 10930 and
the reference SME method of $\pi$ disambiguation \cite{Rudenko}.
Using the obtained model as a basis, we simulated the transit of a fixed
magnetic structure across the disc by a set of answer magnetograms
corresponding to the moments of the real AR 10930 measurements. To
make a quantitative assessment, we used the standard parameters from
\inlinecite{Metcalf2} and \inlinecite{Leka1}:
\begin{eqnarray}\label{eq:metrics}
M_{area}=\#pixels(\Delta\theta =0)/\#pixels, \quad \Delta\theta
=0^{\circ}\lor \pm 180^{\circ};\nonumber \\
M_{flux}=\sum\left(|B_n|_{\Delta\theta=0}\right)/\sum
|B_n|;\nonumber\\
M_{B_\perp >T}=\sum(B_\perp (s)_{\Delta\theta=0,B_\perp
>T})/\sum(B_\perp (s)_{B_\perp
>T});\\
M_{\Delta B}=\sum |{\bf B}_{answer}(s)-{\bf B}_{solution}(s)|/\#pixels ;\nonumber\\
M_{J_z}=M(a,s)_{J_z}=1-\frac{\sum(J_{n(answer)}-J_{n(solution)})}{2\sum
J_{n(answer)} } \nonumber.
\end{eqnarray}
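For reference, the first three metrics can be computed directly from
the transverse components of the answer and solution magnetograms. The
short Python sketch below is only an illustration; function and
variable names are ours, and $T$ denotes the chosen transverse-field
threshold.
\begin{verbatim}
import numpy as np

def disambiguation_metrics(Bx_ans, By_ans, Bn_ans, Bx_sol, By_sol, T=100.0):
    """M_area, M_flux and M_{B_perp > T} of the metrics above; a pixel is
    'correct' when the solution azimuth coincides with the answer."""
    correct = (Bx_ans * Bx_sol + By_ans * By_sol) > 0
    Bperp = np.hypot(Bx_ans, By_ans)
    m_area = correct.mean()
    m_flux = np.abs(Bn_ans)[correct].sum() / np.abs(Bn_ans).sum()
    strong = Bperp > T
    m_bperp = Bperp[correct & strong].sum() / Bperp[strong].sum()
    return m_area, m_flux, m_bperp
\end{verbatim}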
Parameter values of the $\pi$-disambiguation metrics (\ref{eq:metrics}) of the model magnetograms for the three methods (SFQ, SME, and NPFC) are presented in Tables 1-6.
The first and second columns show the position of the magnetograms on the
disc: the first corresponds to the time of the real magnetograms with
the same position on the disc; the second gives the angular
distance from the disc centre to the centre of an arbitrarily
chosen region. Under close examination, we see that the SFQ
quality in these tables is close to that of SME. NPFC demonstrates the
worst quality near the limb. When approaching the limb, most SFQ
parameters appear preferable to those of SME. Besides, it is easily
seen that the SFQ quality all along the region transit is almost
always at the same high level, whereas the results of the other methods
deteriorate to a variable degree when approaching the limb. The
last line of the table does not correspond to the time of a real
magnetogram. The parameters in this line show the SFQ efficiency for
an artificial position of the region when the major part of the
strong-field structure is behind the limb. Images of the transverse
components of the answer magnetogram and the SFQ magnetogram in this
case are presented in Fig. \ref{fig:aranswer} and
\ref{fig:aranswersfq}, respectively. This example shows that we
can apply the SFQ method for $\pi$ disambiguation of full-disc
magnetograms without significant distortions of magnetic
structures at the limb.
All other images corresponding to Tables 1-6 are available on:
\begin{description}
\item[\url{http://bdm.iszf.irk.ru/SFQ_Disambig/Answer_AR10930_model.zip}] -- transverse components of answer magnetograms;
\item[\url{http://bdm.iszf.irk.ru/SFQ_Disambig/SFQ_AR10930_model.zip}] -- transverse components of SFQ magnetograms;
\item[\url{http://bdm.iszf.irk.ru/SFQ_Disambig/SME_AR10930_model.zip}] -- transverse components of SME magnetograms;
\item[\url{http://bdm.iszf.irk.ru/SFQ_Disambig/NPFC_AR10930_model.zip}] -- transverse components of NPFC magnetograms
\end{description}
\section{Disambiguation of real magnetogram data}
\subsection{AR 10930}
The complete set of graphics files containing the results of $\pi$-disambiguation of real Hinode/SOT SP (Level~2) magnetogram data of AR 10930 by the three methods is available at:
\begin{description}
\item[\url{http://bdm.iszf.irk.ru/SFQ_Disambig/SFQ_AR10930.ZIP}] -- transverse components of SFQ magnetograms;
\item[\url{http://bdm.iszf.irk.ru/SFQ_Disambig/SME_AR10930.ZIP}] -- transverse components of SME magnetograms;
\item[\url{http://bdm.iszf.irk.ru/SFQ_Disambig/NPFC_AR10930.ZIP}] -- transverse components of NPFC magnetograms.
\end{description}
Let us comment on these results, using two cases of the magnetic field location - near the centre (Fig. \ref{fig:ar10930_1}-\ref{fig:ar10930_1_npfc}) and near the limb (Fig. \ref{fig:ar10930_2}-\ref{fig:ar10930_2_npfc}).
Fig. \ref{fig:ar10930_1}-\ref{fig:ar10930_1_npfc} illustrate
similar quality of disambiguation by the three methods. When the
magnetic region is at its nearest to the limb (Fig.
\ref{fig:ar10930_2}-\ref{fig:ar10930_2_npfc}), only the SFQ code copes
with the task successfully. Throughout this series of
magnetograms, only the SFQ code demonstrates invariably satisfactory quality. SME
provides good quality for all magnetograms except for the last
one. NPFC maintains a satisfactory level of quality from 12 December
2006, 20:30:05 (Lc=50.1348770), to 15 December 2006, 05:45:05
(Lc=50.1348770). Unlike the other codes, SFQ maintains a stable level of
quality regardless of the distance to the limb. This is consistent
with the quantitative analysis presented in section
\ref{sect:answer}.
\subsection{Disambiguation of full-disc SDO/HMI data}
In this subsection, we demonstrate the disambiguation of SDO/HMI data
obtained on 2 July 2010 for the entire disc at the original spatial
resolution. The initial data were taken from
\url{http://sun.stanford.edu/~todd/HMICAL/VectorB/}. We divided the
data into 10x10 rectangular fragments and sequentially applied the
SFQ code to them. Fig. \ref{fig:hmi} presents the result of the
fragment assembly (a file containing this image in full resolution
and FITS files of the field components are available at
\url{http://bdm.iszf.irk.ru/SFQ_Disambig/20100702_010000_hmi.zip}).
It takes about one hour to disambiguate with a Core 2 Quad (2.6
GHz) workstation. The result that we have obtained demonstrates
good quality for all large-scale and small-scale magnetic
structures with magnitudes higher than the noise level throughout
the disc. Notice that the corrected magnetogram has a patchwork
structure in regions of very weak fields. This is due to the
extremely low signal-to-noise ratio in the regions with very weak
fields.
\section{Conclusion}
This paper presents a new code for the azimuth disambiguation of
vector magnetograms. Among all well-known algorithms for the
azimuth disambiguation, SFQ is the fastest (more than 4 times
faster than the NPFC algorithm). Besides, our algorithm
provides invariably high quality in all parts of the solar disc.
This has been confirmed by testing on well-known analytical
models and on real data from the Hinode SOT/SP and SDO/HMI
instruments. In the case of real magnetograms, the method is
efficient near the limb, where other algorithms (ME and NPFC) do
not give reliable results. Due to the high speed and quality of
disambiguation, the SFQ method can be applied to process SDO/HMI
vector magnetograms at full resolution (4096x4096) and in an almost
real-time mode.
All magnetograms presented in the paper are available at
\url{http://bdm.iszf.irk.ru/SFQ_Disambig}.
\begin{acks}
Model data provided by G. Barnes and K. D. Leka (PI) under NASA/LWS
contract NNH05CC75C are used here.
This work was supported by the Ministry of Education and Science
of the Russian Federation (GS 8407 and GK 14.518.11.7047) and by the
RFBR (12-02-31746 mol\_a).
\end{acks}
\section{Introduction}
Question answering (QA) and question generation (QG) are two fundamental tasks in natural language processing \cite{Manning1999,Jurafsky2000}.
Both tasks involve reasoning between a question sequence $q$ and an answer sentence $a$.
In this work, we take answer sentence selection \cite{yang2015wikiqa} as the QA task, which is a fundamental QA task and is very important for many applications such as search engines and conversational bots.
The task of QA takes a question sentence $q$ and a list of candidate answer sentences as the input, and finds the top relevant answer sentence from the candidate list.
The task of QG takes a sentence $a$ as input, and generates a question sentence $q$ which could be answered by $a$.
It is obvious that the input and the output of these two tasks are (almost) reverse, which is referred to as ``duality'' in this paper.
This duality connects QA and QG, and potentially could help these two tasks to improve each other.
Intuitively, QA could improve QG through measuring the relevance between the generated question and the answer.
This QA-specific signal could enhance the QG model to generate not only literally similar question strings, but also questions that can be answered by the answer.
In turn, QG could improve QA by providing an additional signal which stands for the probability of generating a question given the answer.
Moreover, QA and QG have probabilistic correlation as both tasks relate to the joint probability between $q$ and $a$.
Given a question-answer pair $\langle q, a \rangle$, the joint probability $P(q, a)$ can be computed in two equivalent ways.
\begin{equation}\label{equation:pqa}
P(q, a) = P(a) P(q|a) = P(q)P(a|q)
\end{equation}
The conditional distribution $P(q|a)$ is exactly the QG model, and the conditional distribution $P(a|q)$ is closely related to the QA model\footnote{In this work, our QA model is $f_{qa}(a,q;\theta_{qa})$. The conditional distribution $P(a|q)$ could be derived from the QA model, which will be detailed in the next section.}.
Existing studies typically learn the QA model and the QG model separately by minimizing their own loss functions, while ignoring the probabilistic correlation between them.
Based on these considerations, we introduce a training framework that exploits the duality of QA and QG to improve both tasks.
There might be different ways of exploiting the duality of QA and QG.
In this work, we leverage the probabilistic correlation between QA and QG as the regularization term to influence the training process of both tasks.
Specifically, the training objective of our framework is to jointly learn the QA model parameterized by $\theta_{qa}$ and the QG model parameterized by $\theta_{qg}$ by minimizing their loss functions subject to the following constraint.
\begin{equation}\label{equation:regular}
P_a(a) P(q|a;\theta_{qg}) = P_q(q)P(a|q;\theta_{qa})
\end{equation}
$P_a(a)$ and $P_q(q)$ are the language models for answer sentences and question sentences, respectively.
We examine the effectiveness of our training criterion by applying it to strong neural network based QA and QG models.
Specifically, we implement a generative QG model based on sequence-to-sequence learning, which takes an answer sentence as input and generates a question sentence in an end-to-end fashion.
We implement a discriminative QA model based on recurrent neural networks, where both question and answer are represented as continuous vectors in a sequential way.
As every component in the entire framework is differentiable, all the parameters can be conveniently trained through back propagation.
We conduct experiments on three datasets \cite{yang2015wikiqa,rajpurkar-EtAl:2016:EMNLP2016,nguyen2016ms}. Empirical results show that our training framework improves both QA and QG tasks. The improved QA model performs comparably with strong baseline approaches on all three datasets.
\section{The Proposed Framework}
In this section, we first formulate the task of QA and QG, and then present the proposed algorithm for jointly training the QA and QG models.
We also describe
the connections and differences between this work and existing studies.
\subsection{Task Definition and Notations}
This work involves two tasks, namely question answering (QA) and question generation (QG).
There are different kinds of QA tasks in the natural language processing community.
In this work, we take answer sentence selection \cite{yang2015wikiqa} as the QA task, which takes a question $q$ and a list of candidate answer sentences $A = \{a_1, a_2, ... , a_{|A|}\}$ as input, and outputs one answer sentence $a_i$ from the candidate list which has the largest probability to be the answer. This QA task is typically viewed as a ranking problem. Our QA model is abbreviated as $f_{qa}(a,q;\theta_{qa})$, which is parameterized by $\theta_{qa}$ and the output is a real-valued scalar.
The task of QG takes a sentence $a$ as input, and outputs a question $q$ which could be answered by $a$.
In this work, we regard QG as a generation problem and develop a generative model based on sequence-to-sequence learning. Our QG model is abbreviated as $P_{qg}(q|a;\theta_{qg})$, which is parameterized by $\theta_{qg}$ and the output is the probability of generating a natural language question $q$.
\subsection{Algorithm Description}
We describe the proposed algorithm in this subsection.
Overall, the framework includes three components, namely a QA model, a QG model and a regularization term that reflects the duality of QA and QG.
Accordingly, the training objective of our framework includes three parts, which is described in Algorithm 1.
The QA specific objective aims to minimize the loss function $l_{qa}(f_{qa}(a,q;\theta_{qa}), label)$,
where $label$ is 0 or 1, indicating whether $a$ is the correct answer of $q$ or not.
Since the goal of a QA model is to predict whether a question-answer pair is correct or not, it is necessary to use negative QA pairs whose labels are zero.
The details about the QA model will be presented in the next section.
For each correct question-answer pair, the QG specific objective is to minimize the following loss function,
\begin{equation}
l_{qg}(q, a) = -log P_{qg}(q|a;\theta_{qg})
\end{equation}
where $a$ is the correct answer of $q$.
The negative QA pairs are not necessary because the goal of a QG model is to generate the correct question for an answer.
The QG model will be described in the following section.
\begin{algorithm}[tb]
\caption{Algorithm Description}
\label{alg:example}
\begin{algorithmic}
\STATE {\bfseries Input:} Language models $P_a(a)$ and $P_q(q)$ for answer and question, respectively; hyper parameters $\lambda_q$ and $\lambda_a$; optimizer $opt$
\STATE {\bfseries Output:} QA model $f_{qa}(a,q)$ parameterized by $\theta_{qa}$; QG model $P_{qg}(q|a)$ parameterized by $\theta_{qg}$
\STATE
\STATE Randomly initialize $\theta_{qa}$ and $\theta_{qg}$
\REPEAT
\STATE Get a minibatch of positive QA pairs $\langle q^p_i, a^p_i \rangle_{i=1}^m$, where $a^p_i$ is the answer of $q^p_i$;
\STATE Get a minibatch of negative QA pairs $\langle q^n_i, a^n_i \rangle_{i=1}^m$, where $a^n_i$ is not the answer of $q^n_i$;
\STATE Calculate the gradients for $\theta_{qa}$ and $\theta_{qg}$.
\vspace{-0.3cm}
\STATE \begin{align}\nonumber G_{qa} = \triangledown_{\theta_{qa}} &\frac{1}{m}\sum_{i = 1}^{m}[l_{qa}(f_{qa}(a^p_i,q^p_i;\theta_{qa}), 1) \\
&\nonumber + l_{qa}(f_{qa}(a^n_i,q^n_i;\theta_{qa}),0) \\
& +\lambda_al_{dual}(a^p_i,q^p_i;\theta_{qa}, \theta_{qg})]\end{align}
\vspace{-0.8cm}
\STATE \begin{align}\nonumber G_{qg} = \triangledown_{\theta_{qg}} &\frac{1}{m}\sum_{i = 1}^{m}[\ l_{qg}(q^p_i,a^p_i) \\& + \lambda_ql_{dual}(q^p_i,a^p_i;\theta_{qa}, \theta_{qg})]\end{align}
\STATE Update $\theta_{qa}$ and $\theta_{qg}$:
\STATE $\theta_{qa} \leftarrow opt(\theta_{qa}, G_{qa})$, $\theta_{qg} \leftarrow opt(\theta_{qg}, G_{qg})$
\UNTIL{models converged}
\end{algorithmic}
\end{algorithm}
The third objective is the regularization term which satisfies the probabilistic duality constrains as given in Equation~\ref{equation:regular}.
Specifically, given a correct $\langle q, a \rangle$ pair, we would like to minimize the following loss function,
\begin{align} \nonumber
l_{dual}(a,q;\theta_{qa}, \theta_{qg}) &= [logP_a(a) + log P(q|a;\theta_{qg}) \\
& - logP_q(q) - logP(a|q;\theta_{qa})]^2
\end{align}
where $P_a(a)$ and $P_q(q)$ are marginal distributions, which could be easily obtained through language models. $P(q|a;\theta_{qg})$ can be easily calculated with the chain rule:
$P(q|a;\theta_{qg}) = \prod_{t=1}^{|q|} P(q_t|q_{<t}, a;\theta_{qg})$, where the function $P(q_t|q_{<t}, a;\theta_{qg})$ is the same as the decoder of the QG model (detailed in the following section).
However, the conditional probability $P(a|q;\theta_{qa})$ is different from the output of the QA model $f_{qa}(a,q;\theta_{qa})$. To address this, given a question $q$, we sample a set of answer sentences $A'$, and derive the conditional probability $P(a|q;\theta_{qa})$ based on our QA model with the following equation.
\begin{align}\nonumber
&P(a|q;\theta_{qa}) = \\
&\dfrac{exp(f_{qa}(a,q;\theta_{qa}))}{exp(f_{qa}(a,q;\theta_{qa})) + \sum_{a' \in A'} exp(f_{qa}(a',q;\theta_{qa}))}
\end{align}
In this way, we learn the models of QA and QG by minimizing the weighted combination between the original loss functions and the regularization term.
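For concreteness, the regularization term for a single positive pair
can be sketched as follows (a PyTorch-style illustration; tensor names
are ours, and the QG log-probability is assumed to be already summed
over decoding steps).
\begin{verbatim}
import torch

def dual_regularizer(log_pa, log_pq, log_q_given_a, qa_scores, pos_idx=0):
    """Duality regularization l_dual for one positive pair <q, a>.

    log_pa, log_pq : log P_a(a) and log P_q(q) from the two language models
    log_q_given_a  : log P(q|a; theta_qg), summed over decoding steps (tensor)
    qa_scores      : f_qa scores of the positive answer and the sampled set A'
                     (1-D tensor with the positive answer at index pos_idx)
    """
    # P(a|q; theta_qa) is derived from the QA scores by a softmax over {a} U A'
    log_a_given_q = torch.log_softmax(qa_scores, dim=0)[pos_idx]
    gap = (log_pa + log_q_given_a) - (log_pq + log_a_given_q)
    return gap ** 2
\end{verbatim}
The returned value corresponds to $l_{dual}$ and is weighted by $\lambda_a$ or $\lambda_q$ before being added to the QA and QG losses, as in Algorithm 1.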
\subsection{Relationships with Existing Studies}
Our work differs from \cite{yang2017semi} in that they regard reading comprehension (RC) as the main task, and regard question generation as the auxiliary task to boost the main task RC.
In our work, the roles of QA and QG are the same, and our algorithm enables QA and QG to improve the performance of each other simultaneously.
Our approach differs from Generative Domain-Adaptive Nets \cite{yang2017semi} in that we do not pretrain the QA model. Our QA and QG models are jointly learned from random initialization.
Moreover, our QA task differs from RC in that the answer in our task is a sentence rather than a text span from a sentence.
Our approach is inspired by dual learning \cite{xia2016dual,xia2017dual}, which leverages the duality between two tasks to improve each other. Different from the dual learning \cite{xia2016dual} paradigm, our framework learns both models from scratch and does not need task-specific pretraining.
The recently introduced supervised dual learning \cite{xia2017dual} has been successfully applied to image recognition, machine translation and sentiment analysis. Our work could be viewed as the first work that leveraging the idea of supervised dual learning for question answering.
Our approach differs from Generative Adversarial Nets (GAN) \cite{goodfellow2014generative} in two respects.
On one hand, the goal of the original GAN is to learn a powerful generator, while the discriminative task is regarded as the auxiliary task. The roles of the two tasks in our framework are the same.
On the other hand, the discriminative task of GAN aims to distinguish between the real data and the artificially generated data, while we focus on the real QA task.
\section{The Question Answering Model}
We describe the details of the question answering (QA) model in this section.
Overall, a QA model could be formulated as a function $f_{qa}(q, a;\theta_{qa})$ parameterized by $\theta_{qa}$ that maps a question-answer pair to a scalar. In the inference process, given a $q$ and a list of candidate answer sentences, $f_{qa}(q, a;\theta_{qa})$ is used to calculate the relevance between $q$ and every candidate $a$.
The top ranked answer sentence is regarded as the output.
We develop a neural network based QA model.
Specifically, we first represent each word as a low dimensional and real-valued vector, also known as word embedding \cite{Bengio2003,Mikolov2013a,Pennington2014}.
Afterwards, we use recurrent neural network (RNN) to map a question of variable length to a fixed-length vector.
To avoid the problem of gradient vanishing, we use gated recurrent unit (GRU) \cite{cho-EtAl:2014:EMNLP2014} as the basic computation unit.
The approach recursively calculates the hidden vector ${h}_{i}$ based on the current word vector $e^q_i$ and the output vector ${h}_{i-1}$ from the last time step,
\begin{align}
&z_i = \sigma(W_{z}e^q_{i} + U_{z}{h}_{i-1}) \\
&r_i = \sigma(W_{r}e^q_{i} + U_{r}{h}_{i-1}) \\
&\widetilde{h}_i = \tanh(W_{h}e^q_{i} + U_{h}(r_i \odot {h}_{i-1})) \\
&{h}_{i} = z_i \odot \widetilde{h}_i + (1-z_i) \odot {h}_{i-1}
\end{align}
where $z_i$ and $r_i$ are the update and reset gates, $\odot$ stands for element-wise multiplication, and $\sigma$ is the sigmoid function.
We use a bi-directional RNN to get the meaning of a question from both directions, and use the concatenation of two last hidden states as the final question vector $v_q$. We compute the answer sentence vector $v_a$ in the same way.
After obtaining $v_q$ and $v_a$, we implement a simple yet effective way to calculate the relevance of a question-answer pair.
Specifically, we represent a question-answer pair as the concatenation of four vectors, namely $v(q, a) = [v_q; v_a; v_q \odot v_a ; e_{c(q,a)}]$, where $\odot$ means element-wise multiplication, $c(q,a)$ is the number of co-occurred words in $q$ and $a$.
We observe that incorporating the embedding of the word co-occurrence $e_{c(q,a)}$ could empirically improve the QA performance.
We use an additional embedding matrix $L_c \in \mathbb{R}^{d_c \times |V_c|}$, where $d_c$ is the dimension of word co-occurrence vector and $|V_c|$ is vocabulary size.
The values of $L_c$ are jointly learned during training. The output scalar $f_{qa}(a,q)$ is calculated by feeding $v(q,a)$ to a linear layer followed by $tanh$.
We feed $f_{qa}(a,q)$ to a $softmax$ layer and use negative log-likelihood as the QA specific loss function. The basic idea of this objective is to classify whether a given question-answer is correct or not.
We also implemented a ranking based loss function $max(0, 1 - f_{qa}(q,a) + f_{qa}(q,a^*))$, whose basic idea is to assign the correct QA pair a higher score than a randomly selected QA pair. However, our empirical results showed that the ranking loss performed worse than the negative log-likelihood loss function. We use the log-likelihood as the QA loss function in the experiments.
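For clarity, the pair scoring described above can be sketched as follows (a PyTorch-style illustration; the BiGRU encoders that produce $v_q$ and $v_a$ are omitted, and the default sizes are illustrative rather than the exact ones used in our experiments).
\begin{verbatim}
import torch
import torch.nn as nn

class PairScorer(nn.Module):
    """Scores a question-answer pair from the BiGRU sentence vectors v_q, v_a
    and the word co-occurrence count c(q, a)."""

    def __init__(self, hidden=100, cooc_vocab=10, cooc_dim=10):
        super().__init__()
        d = 2 * hidden                                       # both BiGRU directions
        self.cooc_emb = nn.Embedding(cooc_vocab, cooc_dim)   # embedding matrix L_c
        self.out = nn.Linear(3 * d + cooc_dim, 1)

    def forward(self, v_q, v_a, cooc_count):
        # v(q, a) = [v_q ; v_a ; v_q * v_a ; e_{c(q,a)}]
        idx = cooc_count.clamp(max=self.cooc_emb.num_embeddings - 1)  # LongTensor
        v = torch.cat([v_q, v_a, v_q * v_a, self.cooc_emb(idx)], dim=-1)
        return torch.tanh(self.out(v)).squeeze(-1)           # scalar f_qa(a, q)
\end{verbatim}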
\begin{table*}[t]
\centering
\begin{tabular}{l|ccc|ccc|ccc}
\hline
\multirow{2}{*}{} & \multicolumn{3}{c|}{MARCO} & \multicolumn{3}{c|}{SQUAD} & \multicolumn{3}{c}{WikiQA} \\
\cline{2-10}
& Train & Dev & Test & Train & Dev & Test& Train & Dev & Test \\
\hline
{\# questions} & 82,326 & 4,806 & 5,241 & 87,341 & 5,273 & 5,279 & 1,040 & 140 & 293\\
\# question-answer pairs & 676,193 & 39,510 & 42,850 & 440,573 & 26,442 & 26,604 & 20,360 & 2,733 & 6,165\\
{Avg \# answers per question} & 8.21& 8.22&8.18 & 5.04 & 5.01 & 5.04 & 19.57 & 19.52 & 21.04\\
{Avg length of questions} & 6.05& 6.05& 6.10 & {11.37} &11.57 &11.46 & 6.40& 6.46& 6.42\\
{Avg length of answers} & 82.73& 82.54& 82.89 & {27.80} & 28.70 & 28.66 & 30.04& 29.65& 28.91\\
\hline
\end{tabular}
\caption{Statistics of the MARCO, SQUAD and WikiQA datasets for answer sentence selection.}
\label{table:statistic}
\end{table*}
\section{The Question Generation Model}
We describe the question generation (QG) model in this section.
The model is inspired by the recent success of sequence-to-sequence learning in neural machine translation.
Specifically, the QG model first calculates the representation of the answer sentence with an encoder, and then takes the answer vector to generate a question in a sequential way with a decoder.
We will present the details of the encoder and the decoder, respectively.
The goal of the encoder is to represent a variable-length answer sentence ${a}$ as a fixed-length continuous vector.
The encoder could be implemented with different neural network architectures such as convolutional neural network \cite{kalchbrenner-blunsom:2013:EMNLP,meng2015encoding} and recurrent neural network (RNN) \cite{bahdanau2014neural,sutskever2014sequence}.
In this work, we use bidirectional RNN based on GRU unit, which is consistent with our QA model as described in Section 3.
The concatenation of the last hidden vectors from both directions is used as the output of the encoder, which is also used as the initial hidden state of the decoder.
The decoder takes the output of the encoder and generates the question sentence.
We implement a RNN based decoder, which works in a sequential way and generates one question word at each time step.
The decoder generates a word $q_{t}$ at each time step $t$ based on the representation of $a$ and the previously predicted question words $q_{<t}=\{q_1,q_2,...,q_{t-1}\}$.
This process is formulated as follows.
\begin{equation}
p(q|a)=\prod^{|q|}_{t=1}p(q_{t}|q_{<t},a)
\end{equation}
Specifically, we use an attention-based architecture \cite{luong-pham-manning:2015:EMNLP}, which selectively finds relevant information from the answer sentence when generating the question word. Therefore, the conditional probability is calculated as follows.
\begin{equation}
p(q_{t}|q_{<t},a)=f_{dec}(q_{t-1},s_{t}, c_t)
\end{equation}
where $s_{t}$ is the hidden state of GRU based RNN at time step $t$, and $c_t$ is the attention state at time step $t$.
The attention mechanism assigns a probability/weight to each hidden state in the encoder at one time step, and calculates the attention state $c_t$ through weighted averaging the hidden states of the encoder: $c_{t}=\sum^{|a|}_{i=1}\alpha_{\langle t,i\rangle}h_i$.
When calculating the attention weight of $h_i$ at time step $t$, we also take into account the attention distribution from the last time step. Potentially, the model could remember which contexts from the answer sentence have been used before, and avoid repeatedly using these words to generate question words.
\begin{align}
\alpha_{\langle t,i\rangle}=\frac{\exp{[z(s_{t},h_i,\sum^{|a|}_{j=1}\alpha_{\langle t-1,j\rangle}h_j)]}}{\sum^{|a|}_{i'=1}\exp{[z(s_{t},h_{i'},\sum^{|a|}_{j=1}\alpha_{\langle t-1,j\rangle}h_{j})]}}
\end{align}
Afterwards, we feed the concatenation of $s_t$ and $c_t$ to a linear layer followed by a $softmax$ function.
The output dimension of the $softmax$ layer is equal to the number of top frequent question words (e.g. 30K or 50K) in the training data.
The output values of the $softmax$ layer form the probability distribution of the question words to be generated.
Furthermore, we observe that question sentences typically include informative but low-frequency words such as named entities or numbers.
These low-frequency words are closely related to the answer sentence but could not be well covered in the target vocabulary.
To address this, we add a simple yet effective post-processing step which replaces each ``unknown word'' with the most relevant word from the answer sentence.
Following \cite{luong-EtAl:2015:ACL-IJCNLP}, we use the attention probability as the relevance score of each word from the answer sentence.
Copying mechanism \cite{gulcehre2016pointing,gu2016incorporating} is an alternative solution that adaptively determines whether the generated word comes from the target vocabulary or from the answer sentence.
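The replacement step itself is straightforward; a minimal sketch (function and variable names here are illustrative) is:
\begin{verbatim}
def replace_unknowns(question_tokens, answer_tokens, attention, unk='<unk>'):
    """Replaces every generated '<unk>' with the answer word that received the
    highest attention weight at that decoding step.

    attention[t][i] is the weight of answer word i when question word t was
    generated (a list of lists or a 2-D array).
    """
    fixed = []
    for t, word in enumerate(question_tokens):
        if word == unk:
            best = max(range(len(answer_tokens)), key=lambda i: attention[t][i])
            word = answer_tokens[best]
        fixed.append(word)
    return fixed
\end{verbatim}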
Since every component of the QG model is differentiable, all the parameters could be learned in an end-to-end way with back propagation.
Given a question-answer pair $\langle q,a\rangle$, where $a$ is the correct answer of the question $q$, the training objective is to minimize the following negative log-likelihood.
\begin{equation}
l_{qg}(q,a)=-\sum^{|q|}_{t=1}\log[p(q_t|q_{<t},a)]
\end{equation}
In the inference process, we use beam search to get the top-$K$ confident results, where $K$ is the beam size.
The inference process stops when the model generates the symbol $\langle eos \rangle$ which stands for the end of sentence.
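To make the attention computation above concrete, one decoding step can be sketched as follows (a PyTorch-style illustration; the single linear layer used for the scoring function $z$ is an illustrative choice, not necessarily the exact architecture used in our experiments).
\begin{verbatim}
import torch

def attention_step(s_t, enc_h, alpha_prev, score_layer):
    """One attention step of the decoder, with the accumulated attention of
    the previous step fed into the scoring function z(.) so that already
    attended answer words can be down-weighted.

    s_t         : decoder hidden state at step t, shape [d_s]
    enc_h       : encoder hidden states h_1..h_n, shape [n, d_h]
    alpha_prev  : attention weights of step t-1,  shape [n]
    score_layer : callable mapping [d_s + 2*d_h] -> 1, e.g.
                  torch.nn.Linear(d_s + 2*d_h, 1)  (illustrative choice for z)
    """
    n = enc_h.size(0)
    coverage = alpha_prev @ enc_h                      # sum_j alpha_{t-1,j} h_j
    feats = torch.cat([s_t.unsqueeze(0).expand(n, -1),
                       enc_h,
                       coverage.unsqueeze(0).expand(n, -1)], dim=-1)
    scores = score_layer(feats).squeeze(-1)            # z(s_t, h_i, coverage)
    alpha_t = torch.softmax(scores, dim=0)
    c_t = alpha_t @ enc_h                              # attention state c_t
    return alpha_t, c_t
\end{verbatim}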
\begin{table*}[t]
\centering
\begin{tabular}{l|ccc|ccc}
\hline
\multirow{2}{*}{Method} & \multicolumn{3}{c|}{MARCO} & \multicolumn{3}{c}{SQUAD} \\
\cline{2-7}
& MAP & MRR & P@1 & MAP & MRR & P@1 \\
\hline
WordCnt & 0.3956 &0.4014&0.1789 & 0.8089&0.8168&0.6887\\
WgtWordCnt & 0.4223& 0.4287&0.2030 & 0.8714&0.8787&0.7958 \\
CDSSM \cite{shen2014CDSSM} & 0.4425 &0.4489 &0.2284 & 0.7978 & 0.8041 &0.6721 \\
ABCNN \cite{yin2015abcnn} & 0.4691 & 0.4767 & 0.2629 & 0.8685 & 0.8750 & 0.7843 \\
\hline
Basic QA & 0.4712 & 0.4783 & 0.2628 & 0.8580 & 0.8647 & 0.7740 \\
Dual QA & 0.4836 & 0.4911 & 0.2751 & 0.8643 & 0.8716 & 0.7814 \\
\hline
\end{tabular}
\caption{QA Performance on the MARCO and SQUAD datasets.}
\label{table:results-qa}
\end{table*}
\section{Experiment}
We describe the experimental setting and report empirical results in this section.
\subsection{Experimental Setting}
We conduct experiments on three datasets, including MARCO \cite{nguyen2016ms}, SQUAD \cite{rajpurkar-EtAl:2016:EMNLP2016}, and WikiQA \cite{yang2015wikiqa}.
The MARCO and SQUAD datasets
were originally developed for the reading comprehension (RC) task, the goal of which is to answer a question with a text span from a document.
Although our QA task (answer sentence selection) is different from RC, we use these two datasets for two reasons. The first reason is that, to our knowledge, they are the QA datasets containing the largest number of manually labeled question-answer pairs.
The second reason is that we can derive two QA datasets for answer sentence selection from the original MARCO and SQUAD datasets, under the assumption that the answer sentences containing the correct answer span are correct, and vice versa.
We believe that our training framework could be easily applied to the RC task, but that is out of the focus of this work.
We also conduct experiments on WikiQA \cite{yang2015wikiqa}, which is a benchmark dataset for answer sentence selection.
Although its data size is relatively small compared with MARCO and SQUAD, we still apply our algorithm to this data and report empirical results to further compare with existing algorithms.
It is worth noting that a common characteristic of MARCO and SQUAD is that the ground truth of the test set is not publicly available.
Therefore, we randomly split the original validation set into the dev set and the test set.
The statistics of SQUAD and MARCO datasets are given in Table \ref{table:statistic}.
We use the official split of the WikiQA dataset.
We apply exactly the same model to these three datasets.
We evaluate our QA system with three standard evaluation metrics: \textit{Mean Average Precision (MAP)}, \textit{Mean Reciprocal Rank (MRR)} and \textit{Precision@1 (P@1)} \cite{manning2008ir}.
It is hard to find a perfect way to automatically evaluate the performance of a QG system. In this work, we use BLEU-4~\cite{papineni2002bleu} score as the evaluation metric, which measures the overlap between the generated question and the ground truth.
\subsection{Implementation Details}
We train the parameters of the QA model and the QG model simultaneously.
We randomly initialize the parameters in both models with a combination of the fan-in and fan-out \cite{glorot2010understanding}.
The parameters of word embedding matrices are shared in the QA model and the QG model.
In order to learn question and answer specific word meanings, we use two different embedding matrices for question words and answer words. The vocabularies are the most frequent 30K words from the questions and answers in the training data.
We set the dimension of word embedding as 300, the hidden length of encoder and decoder in the QG model as 512, the hidden length of GRU in the QA model as 100, the dimension of word co-occurrence embedding as 10, the vocabulary size of the word co-occurrence embedding as 10, the hidden length of the attention layer as 30.
We initialize the learning rate as 2.0, and use AdaDelta~\cite{zeiler2012adadelta} to adaptively decrease the learning rate.
We use mini-batch training, and empirically set the batch size as 64.
The sampled answer sentences do not come from the same passage.
We get 10 batches (640 instances) and sort them by answer length to accelerate the training process. The negative samples come from these 640 instances, which are from different passages.
In this work, we use smoothed bigram language models as $p_a(a)$ and $p_q(q)$. We also tried a trigram language model but did not get improved performance.
Alternatively, one could also implement a neural language model and jointly learn its parameters in the training process.
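As an illustration, a smoothed bigram language model of the kind used for $p_a(a)$ and $p_q(q)$ can be written as follows (the add-one smoothing here is an illustrative choice; any standard smoothing scheme could be substituted).
\begin{verbatim}
import math
from collections import Counter

class BigramLM:
    """Add-one smoothed bigram language model (one instance per side)."""

    def __init__(self, sentences):          # sentences: list of token lists
        self.uni, self.bi = Counter(), Counter()
        for s in sentences:
            toks = ['<s>'] + s + ['</s>']
            self.uni.update(toks[:-1])
            self.bi.update(zip(toks[:-1], toks[1:]))
        self.vocab = len(self.uni) + 1      # +1 for unseen events

    def log_prob(self, sentence):
        toks = ['<s>'] + sentence + ['</s>']
        return sum(math.log((self.bi[(u, v)] + 1.0) / (self.uni[u] + self.vocab))
                   for u, v in zip(toks[:-1], toks[1:]))
\end{verbatim}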
\subsection{Results and Analysis}
We first report results on the MARCO and SQUAD datasets.
As the datasets are split by ourselves, we do not have previously reported results for comparison.
We compare with the following four baseline methods.
It has been proven that word co-occurrence is a very simple yet effective feature for this task \cite{yang2015wikiqa,yin2015abcnn}, so the first two baselines are based on the word co-occurrence between a question sentence and the candidate answer sentence.
\textbf{WordCnt} and \textbf{WgtWordCnt} use unnormalized and normalized word co-occurrence.
The rankers in these two baselines are trained with FastTree, which performs better than SVMRank and linear regression in our experiments.
We also compare with \textbf{CDSSM} \cite{shen2014CDSSM}, which is a very strong neural network approach to model the semantic relatedness of a sentence pair.
We further compare with \textbf{ABCNN} \cite{yin2015abcnn}, which has been proven very powerful in various sentence matching tasks.
\textbf{Basic QA} is our QA model which does not use the duality between QA and QG.
Our ultimate model is abbreviated as \textbf{Dual QA}.
The QA performance on MARCO and SQUAD datasets are given in Table \ref{table:results-qa}.
We can find that CDSSM performs better than the word co-occurrence based method on the MARCO dataset.
On the SQUAD dataset, Dual QA achieves the best performance among all these methods.
On the MARCO dataset, Dual QA performs comparably with ABCNN.
We can find that Dual QA still yields better accuracy than Basic QA, which shows the effectiveness of the joint training algorithm.
It is interesting that the word co-occurrence based method (WgtWordCnt) is very strong and hard to beat on the MARCO dataset. Incorporating sophisticated features might further improve performance on both datasets; however, this is not the focus of this work and we leave it to future work.
\begin{table}[h]
\centering
\begin{tabular}{l|cc}
\hline
Method & MRR & MAP \\
\hline
CNN \cite{yang2015wikiqa} &0.6652& 0.6520 \\
APCNN \cite{dos2016attentive} & 0.6957& 0.6886 \\
NASM \cite{miao2016neural} &0.7069 & 0.6886 \\
ABCNN \cite{yin2015abcnn} & 0.7018 & 0.6921 \\
\hline
Basic QA & 0.6914 & 0.6747 \\
Dual QA & 0.7002 & 0.6844 \\
\hline
\end{tabular}
\caption{QA performance on the WikiQA dataset.}
\label{table:results-qa-wikiqa}
\end{table}
\begin{table*}[t]\small
\centering
\begin{tabular}{p{3cm}|p{6.5cm}|p{3cm}|p{3cm}}
\hline
\textbf{question} & \textbf{correct answer} & \textbf{question generated by \ \ \ \ \ Dual QG} & \textbf{question generated by Basic QG}\\
\hline
\textit{what 's the name of the green space north of the center of newcastle ?} & \textit{Another green space in Newcastle is the Town Moor , lying immediately north of the city centre .} & \textit{what is the name of the green building in the city ?} &\textit{ what is the name of the city of new haven ? }\\
\hline
\textit{for what purpose do organisms make peroxide and superoxide ?} & \textit{Parts of the immune system of higher organisms create peroxide , superoxide , and singlet oxygen to destroy invading microbes .} & \textit{what is the purpose of the immune system ?} & \textit{what is the main function of the immune system ?} \\
\hline
\textit{how much money was spent on other festivities in the bay area to help celebrate the coming super bowl 50 ?} & \textit{In addition , there are \$ 2 million worth of other ancillary events , including a week - long event at the Santa Clara Convention Center , a beer , wine and food festival at Bellomy Field at Santa Clara University , and a pep rally .}& \textit{how much of the beer is in the santa monica convention center ?} & \textit{what is the name of the beer in the santa monica center ?} \\
\hline
\end{tabular}
\caption{Sampled examples from the SQUAD dataset.}
\label{table:results-example}
\end{table*}
Results on the WikiQA dataset are given in Table \ref{table:results-qa-wikiqa}.
On this dataset, previous studies typically report results based on their deep features plus the number of words that occur both in the question and in the answer \cite{yang2015wikiqa,yin2015abcnn}. We also follow this experimental protocol. We can find that our basic QA model is simple yet effective. The Dual QA model performs comparably to strong baseline methods.
To give a quantitative evaluation of our training framework on the QG model, we report BLEU-4 scores on the MARCO, SQUAD and WikiQA datasets.
The results of our QG model with and without joint training are given in Table \ref{table:results-qg}.
We can find that, although the overall BLEU-4 scores are relatively low, using our training algorithm improves the performance of the QG model.
\begin{table}[h]
\centering
\begin{tabular}{lccc}
\hline
{Method} & MARCO & SQUAD &WikiQA \\
\hline
Basic QG & 8.87 & 4.34 & 2.91\\
Dual QG & 9.31 & 5.03 & 3.15\\
\hline
\end{tabular}
\caption{QG performance (BLEU-4 scores) on MARCO, SQUAD and WikiQA datasets.}
\label{table:results-qg}
\end{table}
We would like to investigate how the joint training process improves the QA and QG models. To this end, we analyze the results on the development set of the SQUAD dataset.
We randomly sample several cases in which the Basic QA model gets the wrong answers while the Dual QA model obtains the correct results.
Examples are given in Table \ref{table:results-example}.
From these examples, we can find that the questions generated by Dual QG tend to have more word overlap with the correct question, although sometimes the point of the question is not correct.
For example, compared with the Basic QG model, the Dual QG model generates more informative words, such as ``green'' in the first example, ``purpose'' in the second example, and ``how much'' in the third example.
We believe this helps QA because the QA model is trained to assign a higher score to the question which looks similar to the generated question.
It also helps QG because the QA model is trained to give a higher score to the real question-answer pair, so that generating more answer-related words gives the generated question a higher QA score.
Although the proposed training framework obtains some improvements on QA and QG, we believe the work could be further improved in several directions.
We find that our QG model does not always find the point of the reference question.
This is not surprising because the questions from these two reading comprehension datasets only focus on some spans of a sentence, rather than the entire sentence. Therefore, the source side (answer sentence) carries more information than the target side (question sentence).
Moreover, we do not use the answer position information in our QG model.
Accordingly, the model may pay attention to a point that is different from the annotator's intention, and generate totally different questions.
We are aware that incorporating the position of the answer span could improve performance \cite{zhou2017neural}; however, the focus of this work is a sentence level QA task rather than reading comprehension.
Therefore, although MARCO and SQUAD are of large scale, they are not the ideal datasets for investigating the duality of our QA and QG tasks.
Pushing forward this area also requires large scale sentence level QA datasets.
\subsection{Discussion}
We would like to discuss our understanding about the duality of QA and QG, and also present our observations based on the experiments.
In this work, ``duality'' means that the QA task and the QG task are equally important. This characteristic makes our work different from Generative Domain-Adaptive Nets \cite{yang2017semi} and Generative Adversarial Nets (GAN) \cite{goodfellow2014generative}, both of which have a main task and regard another task as the auxiliary one.
There are different ways to leverage the ``duality'' of QA and QG to improve both tasks.
We categorize them into two groups. The first group is about the training process and the second group is about the inference process.
From this perspective, dual learning \cite{xia2016dual} is a solution that leverages the duality in the training process.
In particular, dual learning first pretrains the models for two tasks separately, and then iteratively fine-tunes the models.
Our work also belongs to the first group.
Our approach uses the duality as a regularization item to guide the learning of QA and QG models simultaneously from scratch.
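Concretely, the ``duality'' here refers to the two factorizations of the joint probability of a question--answer pair,
\[
P(q,a) \;=\; P(q)\,P(a \mid q) \;=\; P(a)\,P(q \mid a).
\]
Schematically, $P(a \mid q)$ corresponds to the QA direction, $P(q \mid a)$ to the QG direction, and the marginals $P(q)$ and $P(a)$ can be estimated with the language models mentioned below; the regularization term then penalizes the discrepancy between the two sides of this identity during training.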
After the QA and QG models are trained, we could also use the duality to improve the inference process, which falls into the second group.
The process could be conducted on separately trained models or on models jointly trained with our approach.
This is reasonable because the QA model could directly add a feature that compares $q$ with $q'$, where $q'$ is the question generated by the QG model.
The first example in Table \ref{table:results-example} also motivates this direction.
Similarly, the QA model could assign a score to each pair $\langle q', a \rangle$, which could then be used to rerank the generated questions $q'$.
In this work we do not apply the duality in the inference process. We leave it as a future plan.
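To make this future direction concrete, the following sketch illustrates one way a duality feature could be used to rerank QA candidates at inference time. It is purely illustrative and is not part of our system: all function names are hypothetical, and the word-overlap scorers are placeholders standing in for the trained Dual QA and Dual QG models.
\begin{verbatim}
# Illustrative only: rerank candidate answer sentences by combining a QA
# score with a duality feature that compares the input question q to a
# question q' generated from each candidate answer.

def qa_score(question, answer):
    # Placeholder for the trained QA model's relevance score.
    q, a = set(question.lower().split()), set(answer.lower().split())
    return len(q & a) / max(len(q), 1)

def qg_generate(answer):
    # Placeholder for the trained QG model (returns a generated question q').
    return "what is " + " ".join(answer.lower().split()[:5])

def jaccard(s1, s2):
    # Simple similarity between the input question and the generated one.
    a, b = set(s1.lower().split()), set(s2.lower().split())
    return len(a & b) / max(len(a | b), 1)

def rerank(question, candidates, weight=0.5):
    # Combine the QA score with the duality feature sim(q, q').
    scored = []
    for answer in candidates:
        q_prime = qg_generate(answer)
        score = qa_score(question, answer) + weight * jaccard(question, q_prime)
        scored.append((score, answer))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [answer for _, answer in scored]

if __name__ == "__main__":
    question = "when was the company founded"
    candidates = ["the company was founded in 1998",
                  "the company sells software products"]
    print(rerank(question, candidates))
\end{verbatim}
In a complete system, \texttt{qa\_score} and \texttt{qg\_generate} would be replaced by the trained Dual QA and Dual QG models, and the combination weight would be tuned on the development set.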
This work could be improved by refining every component involved in our framework.
For example, we use a simple yet effective QA model, which could be improved by using more complex neural network architectures \cite{hu2014convolutional,yin2015abcnn} or more external resources.
We use a smoothed language model for both question and answer sentences, which could be replaced by neural language models whose parameters are learned jointly with those of the QA and QG models.
The QG model could be improved as well, for example by developing more complex neural network architectures that take into account more information about the answer sentence in the generation process.
In addition, it is also very important to investigate an automatic evaluation metric to effectively measure the performance of a QG system.
BLEU score only measures the literal similarity between the generated question and the ground truth.
However, it does not measure whether the question really looks like a question or not.
A desirable evaluation system should also have the ability to judge whether the generated question could be answered by the input sentence, even if the generated question uses totally different words to express the meaning.
\section{Related Work}
Our work relates to existing studies on question answering (QA) and question generation (QG).
There are different types of QA tasks, including text-level QA \cite{yu2014deep}, knowledge-based QA \cite{berant2013semantic}, community-based QA \cite{qiu2015convolutional} and reading comprehension \cite{rajpurkar-EtAl:2016:EMNLP2016,nguyen2016ms}.
Our work belongs to text-level QA, where the answer is a sentence.
In recent years, neural network approaches \cite{hu2014convolutional,yu2014deep,yin2015abcnn} show promising ability in modeling the semantic relation between sentences and achieve strong performances on QA tasks.
Question generation has also drawn a lot of attention in recent years.
QG is important in real applications, as it is time-consuming to create large-scale QA datasets.
In the literature, \cite{yao2010question} use Minimal Recursion Semantics (MRS) to represent the meaning of a sentence, and then realize the MRS structure as a natural language question.
\cite{heilman2011automatic} present an overgenerate-and-rank framework consisting of three stages. They first transform a sentence into a simpler declarative statement, and then transform the statement into candidate questions by executing well-defined
syntactic transformations. Finally, a ranker is used to select high-quality questions.
\cite{chali2015towards} focus on generating questions from a topic.
They first get a list of texts related to the topic, and then generate questions by exploiting the named entity information and the predicate argument structures of the texts.
\cite{labutov2015deep} propose an ontology-crowd-relevance approach to generate questions from novel text.
They encode the original text in a low-dimensional ontology, and then align the question templates obtained via crowd-sourcing to that space. A final ranker is used to select the top relevant templates.
There also exists some studies on generating questions from knowledge base \cite{song2016domain,serban-EtAl:2016:P16-1}.
For example, \cite{serban-EtAl:2016:P16-1} develop a neural network approach which takes a knowledge fact (including a subject, an object, and a predicate) as input, and generates the question with a recurrent neural network.
Recent studies also investigate question generation for the reading comprehension task \cite{du2017question,zhou2017neural}.
The approaches are typically based on the encoder-decoder framework, which could be conventionally learned in an end-to-end way.
As the answer is a text span from the sentence/passage, it is helpful to incorporate the position of the answer span \cite{zhou2017neural}.
In addition, the computer vision community also pays attention to generating natural language questions about an image \cite{mostafazadeh2016generating}.
\section{Conclusion}
We focus on jointly training the question answering (QA) model and the question generation (QG) model in this paper.
We exploit the ``duality'' of QA and QG tasks, and introduce a training framework to leverage the probabilistic correlation between the two tasks.
In our approach, the ``duality'' is used as a regularization term to influence the learning of QA and QG models.
We implement simple yet effective QA and QG models, both of which are neural network based approaches.
Experimental results show that the proposed training framework improves both QA and QG on three datasets.
\subsubsection{Acknowledgments.}
We sincerely thank Wenpeng Yin for running the powerful ABCNN model on our setup.
\section{Introduction}
The $n$th Catalan number $C_n$, given explicitly by
$\frac{1}{n+1}\binom{2n}{n}$, is well-known to be the answer to many
different counting problems; for example, it is the number of
bracketings of an $(n+1)$-fold product. Thus there are many $\mathbb
N$-indexed families of sets whose cardinalities are the Catalan
numbers; Stanley~\cite{StanleyEC2, StanleyAddendum} describes at
least 205 such.
A Catalan family of sets may bear extra structure that is invisible in
the mere sequence of Catalan numbers. For example, one presentation of
the $n$th Catalan set is as the set of functions $f \colon \{1, \dots,
n\} \to \{1, \dots, n\}$ which preserve order and satisfy $f(k)
\leqslant k$ for each $k$. The set of such functions is a monoid under
composition, and in this way we obtain the \emph{Catalan
monoids}~\cite{Sol} which are of importance to combinatorial
semigroup theory. For another example, a result due to
Tamari~\cite{Tamari1962The-algebra} makes each Catalan set into a
lattice, whose ordering is most clearly understood in terms of
bracketings of words, as the order generated by the basic inequality
$(xy)z \leqslant x(yz)$ under substitution.
The first main objective of this paper is to describe another kind of
structure borne by Catalan families of sets. We shall show how to
define functions between them in such a way as to produce a simplicial
set $\mathbb C$, which is the ``Catalan simplicial set'' of the title.
The simplicial structure can be defined in various ways, but the most
elegant makes use of what seems to be a new presentation of the
Catalan sets that relies heavily on the Boolean algebra $\mathbf 2$.
Simplicial sets are abstract, combinatorial entities, most often used
as models of spaces in homotopy theory, but flexible enough to also
serve as models of higher
categories~\cite{Lurie2009Higher,Verity2008Complicial}. Therefore, we
might hope that the Catalan simplicial set had some natural role to
play in homotopy theory or higher category theory. Our second
objective in this paper is to affirm this hope, by showing that
the Catalan simplicial set has a \emph{classifying property} with
respect to certain kinds of categorical structure. More precisely, we
will consider simplicial maps from $\mathbb C$ into the nerves of
various kinds of higher category (the \emph{nerve} of such a structure
is a simplicial set which encodes its cellular data). We will see that:
\begin{enumerate}[(a)]
\item Maps from $\mathbb C$ to the nerve of a monoidal category $\mathscr V$ are the same thing
as monoids in $\mathscr V$;
\item Maps from $\mathbb C$ to the nerve of a bicategory $\mathscr B$ are the same thing
as monads in $\mathscr B$;
\item Maps from $\mathbb C$ to the \emph{pseudo} nerve of the monoidal
bicategory $\mathrm{Cat}$ of categories and functors are the same
thing as monoidal categories;
\item Maps from $\mathbb C$ to the \emph{lax} nerve of the monoidal
bicategory $\mathrm{Cat}$ are the same thing as \emph{skew-monoidal
categories}.
\end{enumerate}
Skew-monoidal categories generalise Mac~Lane's notion of monoidal
category~\cite{ML1963} by dropping the requirement of invertibility
of the associativity and unit constraints; they were introduced
recently by Szlach\'anyi~\cite{Szl2012} in his study of bialgebroids,
which are themselves an extension of the notion of quantum group. The
result in (d) can be seen as a coherence result for the notion of
skew-monoidal category, providing an abstract justification for the
axioms. Thus the work presented here lies at the interface of several
mathematical disciplines:
\begin{enumerate}[\ \ \textbullet\,]
\item combinatorics, in the form of the Catalan numbers;
\item algebraic topology, via simplicial sets and nerves;
\item quantum groups, through recent work on bialgebroids;
\item logic, through the distinguished role of the Boolean algebra $\mathbf{2}$; and
\item category theory.
\end{enumerate}
Nor is this the end of the story. Monoidal categories and
skew-monoidal categories can be generalised to notions of
\emph{monoidale} and \emph{skew monoidale} in a monoidal bicategory;
this has further relevance for quantum algebra, since Lack and Street
showed in~\cite{Smswqc} that quantum categories in the sense
of~\cite{QCat} can be described using skew monoidales. In a sequel to
this paper, we will generalise (c) and (d) to prove that:
\begin{enumerate}[(a)]
\addtocounter{enumi}{4}
\item Maps from $\mathbb C$ to the pseudo nerve of a monoidal
bicategory $\mathscr W$ are the same thing as monoidales in
$\mathscr W$; and
\item Maps from $\mathbb C$ to the lax nerve of a monoidal bicategory
$\mathscr W$ are the same thing as skew monoidales in $\mathscr W$.
\end{enumerate}
The results (a)--(f) use only the lower dimensions of the Catalan
simplicial set, and we expect that its higher dimensions in fact
encode \emph{all} of the coherence that a higher-dimensional monoidal
object should satisfy. We therefore hope also to show that:
\begin{enumerate}[(a)]
\addtocounter{enumi}{6}
\item Maps from $\mathbb C$ to the pseudo nerve of the monoidal
tricategory $\mathrm{Bicat}$ of bicategories are the same thing as
monoidal bicategories;
\item Maps from $\mathbb C$ to the homotopy-coherent nerve of the
monoidal simplicial category $\infty\text-\mathrm{Cat}$ of
$\infty$-categories are the same thing as monoidal
$\infty$-categories in the sense of~\cite{LurieHA};
\end{enumerate}
together with appropriate skew analogues of these results.
Finally, a note on the genesis of this work. We have chosen to present
the Catalan simplicial set as basic, and its classifying properties as
derived. This belies the method of its discovery, which was to look
for a simplicial set with the classifying property (d); the link with
the Catalan numbers only later came to light. The notion that a
classifying object as in (d) might exist is based on an old idea of
Michael Johnson's on how to capture not only associativity but also
unitality constraints simplicially. He reminded us of this in a
recent talk~\cite{MJ2013} to the Australian Category Seminar.
\section{The Catalan simplicial set}
\label{sec:catal-simpl-set}
In this section we define and investigate the Catalan simplicial set.
We begin by recalling some basic definitions. We write $\Delta$ for
the \emph{simplicial category}, whose objects are non-empty finite
ordinals $[n] = \{0,\dots,n\}$ and whose morphisms are
order-preserving functions, and write $\mathrm{SSet}$
for the category of presheaves
on $\Delta$. Objects $X$ of $\mathrm{SSet}$ are called {\em simplicial
sets}; we think of them as glueings-together of discs, with the
$n$-dimensional discs in that glueing labelled by the set $X_n :=
X([n])$ of \emph{$n$-simplices} of $X$. We write $\delta_i \colon
[n-1]\to [n]$ and $\sigma_{i} \colon [n+1]\to [n]$ for the maps of
$\Delta$ defined by
\begin{equation*}
\delta_i(x) = \begin{cases} x & \text{if } x < i \\ x+1 & \text{otherwise}
\end{cases} \quad \text{and} \quad \sigma_i(x) = \begin{cases} x &
\text{if } x \leqslant i \\ x-1 & \text{otherwise.}
\end{cases}
\end{equation*}
The action of these morphisms on a simplicial set $X$ yields functions
$d_{i} \colon X_n \to X_{n-1}$ and $s_i \colon X_n \to X_{n+1}$, which
we call \emph{face} and \emph{degeneracy} maps. An $(n+1)$-simplex $x$
is called {\em degenerate} when it is in the image of some $s_i$, and
\emph{non-degenerate} otherwise. The face and degeneracy maps of a
simplicial set satisfy the following \emph{simplicial identities}:
\begin{equation*}
\begin{aligned}
d_i d_j &= d_{j-1} d_i \qquad \!\text{for $i < j$} \\
s_i s_j &= s_{j+1} s_i \qquad \text{for $i \leqslant j$}
\end{aligned} \qquad \quad
\begin{aligned}
d_i s_j &= \begin{cases}
s_{j-1} d_i & \text{for $i < j$} \\
\mathrm{id} & \text{for $i = j, j+1$} \\
s_j d_{i-1} & \text{for $i > j+1$;}
\end{cases}
\end{aligned}
\end{equation*}
and in fact, a simplicial set may be completely specified by giving
its sets of $n$-simplices, together with face and degeneracy maps satisfying
the simplicial identities.
\begin{definition}\label{def:catalan}
The \emph{Catalan simplicial set} $\mathbb C$ has its $n$-simplices
given by \emph{Dyck words} of length $2n+2$; these are strings
comprised of $(n+1)$ $U$'s and $(n+1)$ $D$'s such that the $i$th $U$
precedes the $i$th $D$ for each $1 \leqslant i \leqslant n+1$.
The face maps $d_i \colon \mathbb C_n \to \mathbb C_{n-1}$ act on a
word $W$ by deleting the $(i+1)$st $U$ and the $(i+1)$st $D$; the degeneracy maps
$s_i \colon \mathbb C_{n-1} \to \mathbb C_n$ act on a word $W$ by
repeating the $(i+1)$st $U$ and the $(i+1)$st $D$.
\end{definition}
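For instance, $d_0 \colon \mathbb C_2 \to \mathbb C_1$ sends $UUDUDD$ to $UUDD$ (delete the first $U$ and the first $D$), while $s_0 \colon \mathbb C_1 \to \mathbb C_2$ sends $UDUD$ to $UUDDUD$ (repeat the first $U$ and the first $D$).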
The sets of Dyck words of length $2n$ are a Catalan family of
sets---corresponding to (i) or (r) in Stanley's
enumeration~\cite{StanleyEC2}---and so we have that $\abs{\mathbb C_n}
= C_{n+1}$, the $(n+1)$st Catalan number.
\begin{remark}
The sets of $n$-simplices of $\mathbb C$ are not quite a Catalan
family, due to the dimension shift causing us to omit the $0$th
Catalan number. We may rectify this by viewing $\mathbb C$ as an
\emph{augmented} simplicial
set.
An augmented simplicial set is a presheaf on $\Delta_+$, the
category of all finite ordinals and order-preserving maps; it is
equally given by a simplicial set $X$ together with a set $X_{-1}$
of $(-1)$-simplices and an ``augmentation'' map $d_0 \colon X_0 \to
X_{-1}$ satisfying $d_0d_0 = d_0d_1 \colon X_1 \to X_{-1}$. By
allowing $n$ to range over $\{-1\} \cup \mathbb N$ in the definition
of the Catalan simplicial set $\mathbb C$, it becomes an augmented
simplicial set with the property that its sets of $(n-1)$-simplices
(for $n \in \mathbb N$) are a Catalan family.
\end{remark}
In order to understand the Catalan simplicial set as a simplicial
set, we must understand the face and degeneracy relations between its
simplices. In low dimensions, we see directly that
$\mathbb C$ has:
\begin{itemize}
\item A unique $0$-simplex $UD$, which we write as $\star$;
\item Two $1$-simplices $UUDD$ and $UDUD$, the first of which is
$s_0(\star)$ and the second of which is non-degenerate; we write these as $e =
s_0(\star) \colon \star \to \star$ and $c \colon \star \to
\star$;
\item Five $2$-simplices: three degenerate ones $UUUDDD$, $UUDDUD$ and
$UDUUDD$, and two non-degenerate ones $UUDUDD$ and $UDUDUD$.
We depict these, and their faces, by:
\begin{equation}
\begin{gathered}
\twosimp{\star}{\star}{\star}{e}{e}{e}{\substack{\,s_0(e)\\=s_1(e)}} \qquad
\twosimp{\star}{\star}{\star}{e}{c}{c}{s_0(c)} \qquad
\twosimp{\star}{\star}{\star}{c}{e}{c}{s_1(c)} \\
\twosimp{\star}{\star}{\star}{c}{c}{c}{t} \qquad
\twosimp{\star}{\star}{\star}{e}{e}{c}{i} \rlap{\quad .}
\end{gathered}\label{eq:1}
\end{equation}
\end{itemize}
In higher dimensions, the simplices of $\mathbb C$ will be determined
by \emph{coskeletality}. A simplicial set is called
\emph{$r$-coskeletal} when every $n$-boundary with $n > r$ has a
unique filler; here, an \emph{$n$-boundary} in a simplicial set is a
collection of $(n-1)$-simplices $(x_0, \dots, x_n)$ satisfying
$d_j(x_i) = d_i(x_{j+1})$ for all $0 \leqslant i \leqslant j < n$; a
\emph{filler} for such a boundary is an $n$-simplex $x$ with $d_i(x)
= x_i$ for $i = 0, \dots, n$.
\begin{proposition}
\label{prop:1}
The Catalan simplicial set is $2$-coskeletal.
\end{proposition}
\begin{proof}
For each natural number $n$, let $\mathbb K_n$ be the set of binary
relations $R \subset \{0, \dots, n\}^2$ such that
\begin{enumerate}[(i)]
\item $i \mathbin R j$ implies $i < j$;
\item $i < j < k$ and $i \mathbin R k$ implies $i \mathbin R j$ and
$j \mathbin R k$.
\end{enumerate}
For each $n \geqslant 0$, there is a bijection $\mathbb C_{n} \to
\mathbb K_{n}$ which sends a Dyck word $W$ to the set of those
pairs $i <j$ such that the $(j+1)$st $U$ precedes the $(i+1)$st $D$
in $W$; these bijections induce a simplicial structure on the
$\mathbb K_n$'s, and it suffices to prove that this induced
structure is
$2$-coskeletal.
We may identify the faces of an $n$-simplex $R \in \mathbb K_n$
with the restrictions of $R$ to the $(n+1)$ distinct $n$-element
subsets of $\{0, \dots, n\}$. An arbitrary collection $(R_0, \dots,
R_n)$ of such relations, seen as elements of $\mathbb K_{n-1}$,
comprises an $n$-boundary just when each $R_i$ and $R_j$ agree on
the intersections of their domains. In this situation, there is a a
unique relation $R \subset \{0, \dots, n\}^2$ restricting back to
the given $R_i$'s, and satisfying (i) since each $R_i$ does. If
$n>2$, then each triple $0 \leqslant i < j < k \leqslant n$ will
lie entirely inside the domain of some $R_\ell$, and so the
relation $R$ will satisfy (ii) since each $R_\ell$ does, and thus
constitute an element of $\mathbb K_n$. Thus for $n >
2$, each $n$-boundary of $\mathbb K \cong \mathbb C$ has a unique filler.
\end{proof}
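To illustrate the bijection used in the proof, the five $2$-simplices $UUUDDD$, $UUDDUD$, $UDUUDD$, $UUDUDD$ and $UDUDUD$ of $\mathbb C_2$ are sent, respectively, to the relations $\{(0,1),(0,2),(1,2)\}$, $\{(0,1)\}$, $\{(1,2)\}$, $\{(0,1),(1,2)\}$ and $\emptyset$ in $\mathbb K_2$.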
We now give one further description of the Catalan
simplicial set, perhaps the most appealing: we will exhibit it as the
monoidal nerve of a particularly simple monoidal category, namely the
poset $\mathbf 2 = \bot \leqslant \top$, seen as a monoidal category
with tensor product given by disjunction.
We first explain what we mean by this. Recall that if $\CA$ is a
category, then its \emph{nerve} $\mathrm N(\CA)$ is the simplicial
set whose $0$-simplices are objects of $\CA$, and whose $n$-simplices
for $n > 0$ are strings of $n$ composable morphisms. Since the face
and degeneracy maps are obtained from identities and composition in
$\CA$, the nerve in fact encodes the entire category structure of
$\CA$.
Suppose now that $\CA$ is a \emph{monoidal category} in the sense
of~\cite{ML1963}---thus, equipped with a tensor product functor $\otimes
\colon \CA \times \CA \to \CA$, a unit object $I \in \CA$, and
families of natural isomorphisms $\alpha_{ABC} \colon (A \otimes B)
\otimes C \cong A \otimes (B \otimes C)$, $\lambda_A \colon I \otimes
A \cong A$ and $ \rho_A \colon A \cong A \otimes I$, satisfying
certain coherence axioms which we recall in detail in
Section~\ref{sec:higher} below. In this situation, the nerve of $\CA$
as a category fails to encode any information concerning the monoidal
structure. However, by viewing $\CA$ as a one-object bicategory
(=weak $2$-category), we may form a different nerve which \emph{does}
encode this extra information.
\begin{definition}\label{def:monnerve}
Let $\CA$ be a monoidal category. The \emph{monoidal nerve} of $\CA$ is the
simplicial set $\mathrm N_\otimes(\CA)$ defined as follows:
\begin{itemize}
\item There is a unique $0$-simplex, denoted $\star$.
\item A $1$-simplex is an object $A \in \CA$; its two faces are
necessarily $\star$.
\item A $2$-simplex is a morphism $A_{12} \otimes A_{01} \to A_{02}$
in $\CA$; its three faces are $A_{12}$,
$A_{02}$ and $A_{01}$.
\item A $3$-simplex is a commuting diagram
\def\xab{A_{01}}
\def\xbc{A_{12}}
\def\xcd{A_{23}}
\def\xac{A_{02}}
\def\xad{A_{03}}
\def\xbd{A_{13}}
\def\xbcd{A_{123}}
\def\xacd{A_{023}}
\def\xabd{A_{013}}
\def\xabc{A_{012}}
\begin{equation}\label{eq:threesimpdiag}
\monnerve
\end{equation} in $\CA$; its four faces are $A_{123}, A_{023},
A_{013}$ and $A_{012}$.
\item Higher-dimensional simplices are determined by $3$-coskeletality.
\end{itemize}
The degeneracy of the unique $0$-simplex is the unit object $I \in
\CA$; the two degeneracies $s_0(A), s_1(A)$ of a $1$-simplex are the
respective coherence constraints $\rho^{-1}_A \colon A \otimes I \to
A$ and $\lambda_A \colon I \otimes A \to A$; the three degeneracies of
a $2$-simplex are simply the assertions that certain diagrams commute,
which is so by the axioms for a monoidal category. Higher degeneracies
are determined by coskeletality.
\end{definition}
Note that, because the monoidal nerve arises from viewing a monoidal
category as a one-object bicategory, we have a dimension shift:
objects and morphisms of $\CA$ become $1$- and $2$-simplices of the
nerve, rather than $0$- and $1$-simplices.
\begin{proposition}
The simplicial set $\mathbb C$ is uniquely isomorphic to the
monoidal nerve of the poset $\mathbf 2 = \mathord \bot \leqslant \mathord \top$, seen as a monoidal category
under disjunction.
\end{proposition}
\begin{proof}
In any monoidal nerve $\mathrm N_\otimes(\CA)$, each $3$-dimensional
boundary has at most one filler, existing just when the
diagram~\eqref{eq:threesimpdiag} associated to the boundary
commutes. Since every diagram in a poset commutes, the nerve
$\mathrm{N}_\otimes(\mathbf 2)$, like $\mathbb C$, is
$2$-coskeletal. It remains to show that $\mathbb C \cong \mathrm
N_\otimes(\mathbf 2)$ in dimensions $0,1,2$. In dimension $0$ this
is trivial. In dimension $1$, any isomorphism must send $s_0(\star)
= e \in \mathbb C_1$ to $s_0(\star) = \bot \in \mathrm
N_\otimes(\mathbf 2)_1$ and hence must send $c$ to $\top$. In
dimension $2$, the $2$-simplices of $\mathrm N_\otimes(\mathbf 2)$
are of the form
\begin{equation*}
\twosimp{\star}{\star}{\star}{x_{01}}{x_{12}}{x_{02}}{}
\end{equation*}
where $x_{12} \vee x_{01} \leqslant x_{02}$ in $\mathrm{N}_\otimes(\mathbf
2)$. Thus in $\mathrm N_\otimes(\mathbf 2)$, as in $\mathbb C$, there is at
most one $2$-simplex with a given boundary, and by examination of
\eqref{eq:1}, we see that the same possibilities
arise on both sides; thus there is a unique isomorphism $\mathbb C_2
\cong \mathrm N_\otimes(\mathbf 2)_2$ compatible with the face maps, as required.
\end{proof}
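For instance, under this isomorphism the non-degenerate $2$-simplices $t$ and $i$ of \eqref{eq:1} correspond to the $2$-simplices of $\mathrm N_\otimes(\mathbf 2)$ witnessing the inequalities $\top \vee \top \leqslant \top$ and $\bot \vee \bot \leqslant \top$, respectively.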
We conclude this section by investigating the non-degenerate simplices of the
Catalan simplicial set; these will be of importance in the following
sections, where they will play the role of basic coherence data in
higher-dimensional monoidal structures. We will see that these
non-degenerate simplices form a sequence of Motzkin sets. The
\emph{Motzkin numbers}~\cite{Donaghey1977Motzkin} $1,1,2,4,9,\dots$
are defined by the recurrence relations
\[
M_0 = 1 \qquad \text{and} \qquad M_{n+1} = M_n + \textstyle\sum_{k=0}^{n-1} M_{k}
M_{n-1-k}\rlap{ .}
\]
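For instance, $M_4 = M_3 + (M_0 M_2 + M_1 M_1 + M_2 M_0) = 4 + (2+1+2) = 9$.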
An $\mathbb N$-indexed family of sets is a \emph{sequence of Motzkin
sets} if there are a Motzkin number of elements in each dimension.
\begin{example}
A \emph{Motzkin word} is a string in the alphabet $\{U,C,D\}$ which,
on striking out every $C$, becomes a Dyck word. The sets $\mathbb M_n$
of Motzkin words of length $n$ are a sequence of Motzkin sets.\label{ex:2}
\end{example}
\begin{proposition}\label{prop:nd}
The family $(\mathrm{nd}\,{\mathbb C}_n : n \in \mathbb N)$ of
non-degenerate simplices of $\mathbb C$ is a sequence of Motzkin
sets.
\end{proposition}
\begin{proof}
It suffices to construct a bijection $\mathrm{nd}\,{\mathbb C}_n
\cong \mathbb M_n$ for each $n$. In one direction, we have a map
$\mathrm{nd}\,{\mathbb C}_n \to \mathbb M_n$ sending a
non-degenerate Dyck word $W$ to the Motzkin word $M_1 \dots M_n$
defined as follows: if the $i$th and $(i+1)$st $U$'s are adjacent in
$W$, then $M_i = U$; if the $i$th and $(i+1)$st $D$'s are adjacent
in $W$, then $M_i = D$; otherwise $M_i = C$. (Note that the first
two cases are disjoint; a Dyck word $W$ satisfying both would have
to be in the image of the $i$th degeneracy map).
In the other direction, suppose given a Motzkin word $M = M_1 \dots
M_n$. Let $a_1 < \dots < a_k$ enumerate all $i$ for which $M_i$ is
$D$ or $C$, and let $b_1 < \dots < b_k$ enumerate all $i$ for which
$M_i$ is $U$ or $C$. The inverse mapping $\mathbb M_n \to \mathrm{nd}\,{\mathbb C}_n
$ now sends $M$ to the Dyck word
\[U^{a_1}D^{b_1}U^{a_2-a_1}D^{b_2-b_1} \cdots U^{a_k - a_{k-1}}D^{b_k - b_{k-1}}
U^{n+1-a_k} D^{n+1-b_k}\rlap{ .}\qedhere
\]
\end{proof}
Using this result, we may re-derive a well-known combinatorial
identity relating the Catalan and Motzkin numbers.
\begin{corollary}
For each $n \geqslant 0$, we have $C_{n+1} = \sum_k {n \choose k} M_k$.
\end{corollary}
\begin{proof}
Recall that the \emph{Eilenberg--Zilber lemma}~\cite[\S II.3]{GZ} states that
every simplex $x \in X_n$ of a simplicial set $X$ is the image under
a unique surjection $\phi \colon [n] \twoheadrightarrow [k]$ in
$\Delta$ of a unique non-degenerate simplex $y \in X_k$. Since there
are $n \choose k$ order-preserving surjections $[n]
\twoheadrightarrow [k]$,
\[C_{n+1} = \abs{\mathbb C_n} = \textstyle\sum_{\phi \colon [n]
\twoheadrightarrow [k]} \abs{\mathrm{nd}\,{\mathbb C}_k} = \sum_k
{n \choose k} \abs{\mathrm{nd}\,{\mathbb C}_k} = \sum_k {n \choose k} M_k \] as required.
\end{proof}
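As a check, for $n = 3$ one has $\sum_k \binom{3}{k} M_k = 1\cdot 1 + 3\cdot 1 + 3\cdot 2 + 1\cdot 4 = 14 = C_4$.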
\renewcommand{\aa}{a} \renewcommand{\ll}{\ell} \newcommand{\rr}{r}
\newcommand{\kk}{k}
\section{First classifying properties}
\label{sec:class-prop-catal}
We now begin to investigate the \emph{classifying properties} of the
Catalan simplicial set, by looking at the structure picked out by maps
from $\mathbb C$ into the nerves of certain kinds of categorical
structure.
For our first classifying property, recall that a \emph{monoid} in a
monoidal category $\CA$ is given by an object $A \in \CA$ and
morphisms $\mu \colon A \otimes A \to A$ and $\eta \colon I \to A$
rendering commutative the three diagrams
\begin{equation*}
\def\xab{A}
\def\xbc{A}
\def\xcd{A}
\def\xac{A}
\def\xad{A}
\def\xbd{A}
\def\xbcd{\mu}
\def\xacd{\mu}
\def\xabd{\mu}
\def\xabc{\mu}
\monnerve
\quad
\cd[@C-0.5em]{
A \ar[r]^-{\rho_A} \ar@{=}[d] & A \otimes I \ar[d]^{1 \otimes \eta} \\
A & A \otimes A \ar[l]^-{\mu}
} \quad \cd{
I \otimes A \ar[r]^{\lambda_A} \ar[d]_{\eta \otimes 1} & A \\
A \otimes A \ar[ur]_{\mu}
}
\end{equation*}
\begin{proposition}\label{prop:classifymonoids}
If $\CA$ is a monoidal category, then to give a simplicial map $f
\colon \mathbb C \to \mathrm N_\otimes(\CA)$ is equally to give a
monoid in $\CA$.
\end{proposition}
\begin{proof}
Since $\mathrm N_\otimes(\CA)$ is $3$-coskeletal, a simplicial map $f \colon
\mathbb C \to \mathrm N_\otimes(\CA)$ is uniquely determined by where it
sends non-degenerate simplices of dimension $\leqslant 3$. We have
already described the non-degenerate simplices in dimensions
$\leqslant 2$, while in dimension $3$, there are four such, given
by
\begin{align*}
\aa &= (\tee,\tee,\tee,\tee) &
\ll &= (\eye,s_1(c),\tee,s_1(c)) \\
\rr &= (s_0(c),\tee,s_0(c),\eye) &
\kk &= (\eye, s_1(c), s_0(c), \eye) \rlap{ .}
\end{align*}
Here, we take advantage of $2$-coskeletality of $\mathbb C$ to
identify a $3$-simplex $x$ with its tuple $(d_0(x), d_1(x), d_2(x),
d_3(x))$ of $2$-dimensional faces. We thus see that to
give $f \colon \mathbb C \to \mathrm N_\otimes(\CA)$ is to give:
\begin{itemize}
\item In dimension $0$, no data: $f$ must send $\star$ to $\star$;
\item In dimension $1$, an object $A \in \CA$, the image of the
non-degenerate simplex $c \in \mathbb C_1$;
\item In dimension $2$, morphisms $\mu \colon A \otimes A \to A$ and
$\eta' \colon I \otimes I \to A$, the images of the non-degenerate
simplices $t, i \in \mathbb C_2$;
\item In dimension $3$, commutative diagrams
\begin{equation*}
\def\xab{A}
\def\xbc{A}
\def\xcd{A}
\def\xac{A}
\def\xad{A}
\def\xbd{A}
\def\xbcd{\mu}
\def\xacd{\mu}
\def\xabd{\mu}
\def\xabc{\mu}
f(a) = \monnerve
\end{equation*}
\begin{equation*}
\def\xab{A}
\def\xbc{I}
\def\xcd{I}
\def\xac{A}
\def\xad{A}
\def\xbd{A}
\def\xbcd{\eta'}
\def\xacd{\lambda_A}
\def\xabd{\mu}
\def\xabc{\lambda_A}
f(\ell) = \monnerve
\end{equation*} \begin{equation*}
\def\xab{I}
\def\xbc{I}
\def\xcd{A}
\def\xac{A}
\def\xad{A}
\def\xbd{A}
\def\xbcd{\rho^{-1}_A}
\def\xacd{\mu}
\def\xabd{\rho^{-1}_A}
\def\xabc{\eta'}
f(r) = \monnerve
\end{equation*}
\begin{equation*}
\def\xab{I}
\def\xbc{I}
\def\xcd{I}
\def\xac{A}
\def\xad{A}
\def\xbd{A}
\def\xbcd{\eta'}
\def\xacd{\lambda_A}
\def\xabd{\rho^{-1}_A}
\def\xabc{\eta'}
f(k) = \monnerve
\end{equation*}
the images as displayed of the non-degenerate $3$-simplices
of $\mathbb C$.
\end{itemize}
On defining $\eta = \eta' \circ \rho_A \colon I \to I \otimes I \to
A$, we obtain a bijective correspondence between the data
$(A,\mu,\eta')$ for a simplicial map $\mathbb C \to \mathrm N_\otimes(\CA)$
and the data $(A,\mu,\eta)$ for a monoid in $\CA$. Under this
correspondence, the axiom $f(a)$ for $(A, \mu, \eta')$ is clearly
the same as the first monoid axiom for $(A,\mu,\eta)$; a short
calculation with the axioms for a monoidal category shows that
$f(\ell)$ and $f(r)$ correspond likewise with the second and third
monoid axioms. This leaves only $f(k)$; but it is easy to show that
this is automatically satisfied in any monoidal category. Thus
monoids in $\CA$ correspond bijectively with simplicial maps
$\mathbb C \to \mathrm N_\otimes(\CA)$ as claimed.
\end{proof}
\begin{remark}
A generalisation of this classifying property concerns maps
from $\mathbb C$ into the nerve of a \emph{bicategory} $\CB$ in the
sense of~\cite{Ben1967}. Bicategories are ``many object'' versions
of monoidal categories, and the nerve of a bicategory is a ``many
object'' version of the monoidal nerve of
Definition~\ref{def:monnerve}. An easy modification of the
preceding argument shows that simplicial maps $\mathbb C \to
\mathrm N(\CB)$ classify monads in the bicategory $\CB$.
\end{remark}
\section{Higher classifying properties}\label{sec:higher}
The category $\mathrm{Cat}$ of small categories and functors bears a
monoidal structure given by cartesian product, and monoids with respect
to this are precisely small \emph{strict} monoidal categories---those
for which the associativity and unit constraints $\alpha$, $\lambda$
and $\rho$ are all identities. It follows by
Proposition~\ref{prop:classifymonoids} that simplicial maps $\mathbb C
\to \mathrm N_\otimes(\mathrm{Cat})$ classify small strict monoidal
categories.
The purpose of this section is to show that, in fact, we may also classify both
\begin{enumerate}[(i)]
\item Not-necessarily-strict monoidal categories; and
\item \emph{Skew-monoidal} categories in the sense of~\cite{Szl2012}
\end{enumerate}
by simplicial maps from $\mathbb C$ into suitably modified nerves of
$\mathrm{Cat}$, where the modifications at issue involve changing the
simplices from dimension $3$ upwards. The $3$-simplices will no longer
be commutative diagrams as in~\eqref{eq:threesimpdiag}, but rather
diagrams commuting up to a natural transformation, invertible in the
case of (i) but not necessarily so for (ii). The $4$-simplices will
be, in both cases, suitably commuting diagrams of natural
transformations, while higher simplices will be determined by coskeletality
as before.
Note that, to obtain these new classification results, we do not need
to change $\mathbb C$ itself, only what we map it into. The
change is from something $3$-coskeletal to something $4$-coskeletal,
which means that the non-degenerate $4$-simplices of $\mathbb C$ come
into play. As we will see, these encode precisely the coherence
axioms for monoidal or skew-monoidal structure.
Before continuing, let us make precise the definition of skew-monoidal
category. As explained in the introduction, the notion was introduced
by Szlach\'anyi in~\cite{Szl2012} to describe structures arising in
quantum algebra, and generalises Mac~Lane's notion of monoidal
category by dropping the requirement that the coherence constraints be
invertible.
\begin{definition}
A \emph{skew-monoidal category} is a
category $\CA$ equipped with a unit element $I \in \CA$, a tensor
product $\otimes \colon \CA \times \CA \to \CA$, and natural families
of (not necessarily invertible) constraint maps
\begin{equation}
\begin{gathered}
\alpha_{ABC} \colon (A \otimes B) \otimes C \to A \otimes (B \otimes
C)\\
\lambda_{A} \colon I \otimes A \to A \quad \text{and} \quad \rho_A \colon A \to A
\otimes I
\end{gathered}\label{eq:data}
\end{equation}
subject to the commutativity of the
following diagrams---wherein tensor is denoted by juxtaposition---for
all $A,B,C,D \in \CA$:
\begin{equation*}
\def\xa{((AB)C)D}
\def\xc{(AB)(CD)}
\def\xe{A(B(CD))}
\def\xb{\llap{$(A(B$}C))D}
\def\xd{A((B\rlap{$C)D)$}}
\def\xo{\textstyle (5.1)}
\def\xac{\alpha}
\def\xce{\alpha}
\def\xab{\alpha 1}
\def\xbd{\alpha}
\def\xde{1 \alpha}
\pent[0.1cm]
\quad
\cd[@!C@C-6pt]{
(AI)B \ar[r]^{\alpha} & A(IB) \ar[d]^{1\lambda}
\ar@{}[dl]|{\textstyle (5.2)}& \\
AB \ar[u]^{\rho1} \ar[r]_{\mathrm{id}} & AB
}
\end{equation*}
\begin{equation*}
\cd[@!C@C-22pt]{
& I(AB) \ar[dr]^{\lambda} \ar@{}[d]|(0.6){\textstyle (5.3)} & \\
(IA)B \ar[ur]^{\alpha} \ar[rr]_{\lambda1} & & AB
}
\quad
\cd[@!C@C-20pt]{
& (AB)I \ar[dr]^{\alpha} \ar@{}[d]|(0.6){\textstyle (5.4)} & \\
AB \ar[ur]^{\rho} \ar[rr]_{1\rho} & & A(BI)
}
\quad
\cd[@!C@C-3pt@R+4pt]{
& II \ar[dr]^{\lambda} \ar@{}[d]|(0.6){\textstyle (5.5)} & \\
I \ar[ur]^{\rho} \ar[rr]_{\mathrm{id}} && I\rlap{ .}
}
\end{equation*}
\end{definition}
A skew-monoidal category in which $\alpha$, $\lambda$ and $\rho$ are
invertible is exactly a monoidal category; the axioms (5.1)--(5.5) are
then Mac~Lane's original five axioms~\cite{ML1963}, justified by the
fact that they imply the commutativity of \emph{all} diagrams of
constraint maps. In the skew case, this justification no longer
applies, as the axioms no longer force every diagram of constraint
maps to commute; for example, we need not have $1_{I \otimes I} =
\rho_I \circ \lambda_I \colon I \otimes I \to I \otimes I$. The
classification of skew-monoidal structure by maps out of the Catalan
simplicial set can thus be seen as an alternative justification of the
axioms in the absence of such a result.
Before giving our classification result, we describe the modified
nerves of $\mathrm{Cat}$ which will be involved. The possibility of
taking natural transformations as $2$-cells makes $\mathrm{Cat}$ not
just a monoidal category, but a \emph{monoidal bicategory} in the
sense of~\cite{GPS}. Just as one can form a nerve of a monoidal
category by viewing it as a one-object bicategory, so one can form a
nerve of a monoidal bicategory by viewing it as a one-object
tricategory (=weak 3-category), and in fact, various nerve
constructions are possible---see~\cite{GRoT}. The following
definitions are specialisations of some of these nerves to the
case of $\mathrm{Cat}$.
\begin{definition}
The \emph{lax nerve} $\mathrm{N}_\ell(\mathrm{Cat})$ of the monoidal bicategory
$\mathrm{Cat}$ is the simplicial set defined as follows:
\begin{itemize}
\item There is a unique $0$-simplex, denoted $\star$.
\item A $1$-simplex is a (small) category $\CA_{01}$; its two faces are
both $\star$.
\item A $2$-simplex is a functor $A_{012} \colon \CA_{12} \times \CA_{01} \to \CA_{02}$.
\item A $3$-simplex is a natural transformation
\begin{equation*}
\def\xab{\CA_{01}}
\def\xbc{\CA_{12}}
\def\xcd{\CA_{23}}
\def\xac{\CA_{02}}
\def\xad{\CA_{03}}
\def\xbd{\CA_{13}}
\def\xbcd{A_{123}}
\def\xacd{A_{023}}
\def\xabd{A_{013}}
\def\xabc{A_{012}}
\def\xabcd{A_{0123}}
\laxnerve
\end{equation*} with $1$-cell components
\[
(A_{0123})_{a_{23}, a_{12}, a_{01}} \colon A_{013}(A_{123}(a_{23},a_{12}),a_{01}) \to
A_{023}(a_{23},A_{012}(a_{12},a_{01}))\rlap{ .}
\]
\item A $4$-simplex is a quintuple of appropriately-formed natural transformations
$(A_{1234}, A_{0234}, A_{0134}, A_{0124}, A_{0123})$ making the
pentagon
\begin{equation*}
\cd[@[email protected]]{& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ &
A_{024}(A_{234}(a_{34}, a_{23}), A_{012}(a_{12}, a_{01}))
\ar[dddr]^{A_{0234}}\\ \\
A_{014}(A_{124}(A_{234}(a_{34}, a_{23}), a_{12}), a_{01})
\ar[uurr]^{A_{0124}} \ar[dd]_{A_{014}(A_{1234}, 1)}\\
& & &
A_{034}(a_{34}, A_{023}(a_{23}, A_{012}(a_{12}, a_{01})))
\\
A_{014}(A_{134}(a_{34}, A_{123}(a_{23}, a_{12})), a_{01}) \ar[ddrr]_{A_{0134}}
\\ \\
&&A_{034}(a_{34}, A_{013}(A_{123}(a_{23}, a_{12}), a_{01}))
\ar[uuur]_{A_{034}(1, A_{0123})}
}
\end{equation*}
commute in $\CA_{04}$ for all $(a_{01}, a_{12}, a_{23}, a_{34}) \in
\CA_{01} \times \CA_{12} \times \CA_{23} \times \CA_{34}$.
\item Higher-dimensional simplices are determined by $4$-coskeletality, and
face and degeneracy maps are defined as before.
\end{itemize}
The \emph{pseudo nerve} $\mathrm{N}_p(\mathrm{Cat})$
is defined identically except that the natural transformations
occurring in dimensions $3$ and $4$ are required to be invertible.
\end{definition}
We are now ready to give our higher classifying property of the
Catalan simplicial set.
\begin{proposition}
To give a simplicial map $f
\colon \mathbb C \to \mathrm N_p(\mathrm{Cat})$ is equally to give a
small monoidal category; to give a simplicial map $f
\colon \mathbb C \to \mathrm N_\ell(\mathrm{Cat})$ is equally to
give a small skew-monoidal category.
\end{proposition}
\begin{proof}
First we prove the second statement. Since $\mathrm N_\ell(\mathrm{Cat})$
is $4$-coskeletal, a simplicial map into it is uniquely determined
by where it sends non-degenerate simplices of dimension $\leqslant
4$. In dimensions $\leqslant 3$, to give $f \colon \mathbb C \to
\mathrm N_\ell(\mathrm{Cat})$ is to give:
\begin{itemize}
\item In dimension $0$, no data: $f$ must send $\star$ to $\star$;
\item In dimension $1$, a small category $\CA = f(c)$;
\item In dimension $2$, a functor $\otimes = f(t) \colon \CA \times \CA
\to \CA$ and an object $I \in \CA$ picked out by the functor $f(i)
\colon 1 \times 1 \to \CA$;
\item In dimension $3$, natural transformations
\begin{equation*}
\def\xab{\CA}
\def\xbc{\CA}
\def\xcd{\CA}
\def\xac{\CA}
\def\xad{\CA}
\def\xbd{\CA}
\def\xbcd{\otimes}
\def\xacd{\otimes}
\def\xabd{\otimes}
\def\xabc{\otimes}
\def\xabcd{\alpha}
f(a) = \laxnerve
\end{equation*}
\begin{equation*}
\def\xab{\CA}
\def\xbc{1}
\def\xcd{1}
\def\xac{\CA}
\def\xad{\CA}
\def\xbd{\CA}
\def\xbcd{f(i)}
\def\xacd{\cong}
\def\xabd{\otimes}
\def\xabc{\cong}
\def\xabcd{\lambda}
f(\ell) = \laxnerve
\end{equation*} \begin{equation*}
\def\xab{1}
\def\xbc{1}
\def\xcd{\CA}
\def\xac{\CA}
\def\xad{\CA}
\def\xbd{\CA}
\def\xbcd{\cong}
\def\xacd{\otimes}
\def\xabd{\cong}
\def\xabc{f(i)}
\def\xabcd{\rho}
f(r) = \laxnerve
\end{equation*}
\begin{equation*}
\def\xab{1}
\def\xbc{1}
\def\xcd{1}
\def\xac{\CA}
\def\xad{\CA}
\def\xbd{\CA}
\def\xbcd{f(i)}
\def\xacd{\cong}
\def\xabd{\cong}
\def\xabc{f(i)}
\def\xabcd{\kappa}
f(k) = \laxnerve
\end{equation*}
which are equally well natural families $\alpha$, $\lambda$ and
$\rho$ as in~\eqref{eq:data} together with a map $\kappa_\star
\colon I \to I$.
\end{itemize}
So the data in dimensions $\leqslant 3$ for a simplicial map
$\mathbb C \to \mathrm N_\ell(\mathrm{Cat})$ is the data
$(\CA,\otimes,I,\alpha,\lambda,\rho)$ for a small skew-monoidal
category augmented with a map $\kappa_\star \colon I \to I$ in
$\CA$. It remains to consider the action on non-degenerate
$4$-simplices of $\mathbb C$. There are nine such, given by:
\begin{equation*}
\begin{tabular}{ l l l }
$A1 = (\aa,\aa,\aa,\aa,\aa)$ \hspace{0.7cm} &
$A6 = (s_0(\eye),\ll, \kk, \rr, s_2(\eye))$ \\
$A2 = (\rr,s_1(\tee),\aa,s_1(\tee),\ll)$ &
$A7 = (\kk,\ll, s_0s_1(c), \rr, \kk)$ \\
$A3 = (\ll,\ll,s_2(\tee),\aa,s_2(\tee))$ &
$A8 = (\rr, s_1(\tee), s_0(\tee), \rr, \kk)$ \\
$A4 = (s_0(\tee),\aa,s_0(\tee),\rr,\rr)$ \hspace{0.7cm} &
$A9 = (\kk, \ll, s_2(\tee), s_1(\tee), \ll)$ \\
$A5 = (s_1(\eye),s_2(\eye), \kk, s_0(\eye), s_1(\eye))$ &
\end{tabular}
\end{equation*}
where as before, we take advantage of coskeletality of $\mathbb C$ to
identify a $4$-simplex with its tuple of $3$-dimensional faces.
The images of these simplices each assert the commutativity of a
pentagon of natural transformations involving $\alpha$, $\rho$,
$\lambda$ or $\kappa$; explicitly, they assert that for any $A,B,C,D \in \CA$,
the following pentagons commute in $\CA$:
\begin{equation*}
\!\!\!
\def\xa{((AB)C)D}
\def\xc{(AB)(CD)}
\def\xe{A(B(CD))}
\def\xb{\llap{$(A(B$}C))D}
\def\xd{A((B\rlap{$C)D)$}}
\def\xo{\textstyle\text{(A1)}}
\def\xac{\alpha}
\def\xce{\alpha}
\def\xab{\alpha 1}
\def\xbd{\alpha}
\def\xde{1 \alpha}
\pent[0.1cm]
\
\def\xa{AB}
\def\xc{AB}
\def\xe{AB}
\def\xb{(AI)B}
\def\xd{A(IB)}
\def\xo{\textstyle\text{(A2)}}
\def\xac{1}
\def\xce{1}
\def\xab{\rho 1}
\def\xbd{\alpha}
\def\xde{1 \lambda}
\pent[0.1cm]
\
\def\xa{(IA)B}
\def\xc{I(AB)}
\def\xe{AB}
\def\xb{AB}
\def\xd{AB}
\def\xo{\textstyle\text{(A3)}}
\def\xac{\alpha}
\def\xce{\lambda}
\def\xab{\lambda 1}
\def\xbd{1}
\def\xde{1}
\pent[0.1cm]
\end{equation*}
\begin{equation*}
\def\xa{AB}
\def\xc{(AB)I}
\def\xe{A(BI)}
\def\xb{AB}
\def\xd{AB}
\def\xo{\textstyle\text{(A4)}}
\def\xac{\rho}
\def\xce{\alpha}
\def\xab{1}
\def\xbd{1}
\def\xde{1\rho}
\pent[0.1cm]\
\def\xa{I}
\def\xc{I}
\def\xe{I}
\def\xb{I}
\def\xd{I}
\def\xo{\textstyle\text{(A5)}}
\def\xac{1}
\def\xce{1}
\def\xab{1}
\def\xbd{\kappa_\star}
\def\xde{1}
\pent[0.1cm]\
\def\xa{I}
\def\xc{II}
\def\xe{I}
\def\xb{I}
\def\xd{I}
\def\xo{\textstyle\text{(A6)}}
\def\xac{\rho_I}
\def\xce{\lambda_I}
\def\xab{1}
\def\xbd{\kappa_\star}
\def\xde{1}
\pent[0.1cm]
\end{equation*}
\begin{equation*}
\def\xa{I}
\def\xc{II}
\def\xe{I}
\def\xb{I}
\def\xd{I}
\def\xo{\textstyle\text{(A7)}}
\def\xac{\rho_I}
\def\xce{\lambda_I}
\def\xab{\kappa_\star}
\def\xbd{1}
\def\xde{\kappa_\star}
\pent[0.1cm]\
\def\xa{A}
\def\xc{AI}
\def\xe{AI}
\def\xb{AI}
\def\xd{AI}
\def\xo{\textstyle\text{(A8)}}
\def\xac{\rho}
\def\xce{1}
\def\xab{\rho}
\def\xbd{1}
\def\xde{1\kappa}
\pent[0.1cm]\
\def\xa{IA}
\def\xc{IA}
\def\xe{A\rlap{ .}}
\def\xb{IA}
\def\xd{IA}
\def\xo{\textstyle\text{(A9)}}
\def\xac{1}
\def\xce{\lambda}
\def\xab{\kappa 1}
\def\xbd{1}
\def\xde{\lambda}
\pent[0.1cm]
\end{equation*}
Note first that (A5) forces $\kappa_\star = 1_I \colon I \to I$. Now
(A1)--(A4) express the axioms (5.1)--(5.4), both (A6) and (A7) express
axiom (5.5), whilst (A8) and (A9) are trivially satisfied. Thus the
$4$-simplex data of a simplicial map $\mathbb C \to
\mathrm{N}_\ell(\mathrm{Cat})$ exactly express the skew-monoidal
axioms and the fact that the additional datum $\kappa_\star \colon I
\to I$ is trivial; whence a simplicial map $\mathbb C \to
\mathrm{N}_\ell(\mathrm{Cat})$ is precisely a small skew-monoidal
category.
The same proof now shows that a simplicial map $\mathbb C \to
\mathrm{N}_p(\mathrm{Cat})$ is precisely a small monoidal category,
under the identification of monoidal categories with skew-monoidal
categories whose constraint maps are invertible.
\end{proof}
\section{Introduction}
We are interested in the following two fundamental problems in astrophysics and cosmology: \begin{enumerate}[A.]
\item What does the interior of a black hole look like, and how strong is the (potential) singularity within it?\label{PbI}
\item How does the universe behave near its initial time, and is there a ``Big Bang'' singularity? \label{PbII}
\end{enumerate} As it turns out, these two questions are intimately connected, loosely speaking because a black hole interior's terminal boundary corresponds to the time-reverse of an initial-time singularity (at least locally).
In the present manuscript, we analyze a class of spatially-homogeneous singular spacetimes, with the underlying motivation of shedding some light on Problem~\ref{PbI}. We will also emphasize the deep connections between Problem~\ref{PbI} and Problem~\ref{PbII}: the common theme of both problems is \emph{spacelike singularities} and whether they are dynamically stable. A very important example of such a spacelike singularity at $\{\tau=0\}$ is given by the so-called Kasner metrics \cite{Kasner}, which are spatially homogeneous (but anisotropic) spacetimes of the form \begin{equation}\label{Kasner}
g_{K} = -d\tau^2 + \tau^{2 p_1} dx_1^2 + \tau^{2p_{2}} dx_2^2 + \tau^{2p_{3}} d x_3^2, \text{ with } p_1+ p_{2}+p_{3}=1.
\end{equation}
The main conjectured dynamics of spacelike singularities (in (3+1)-dimensional vacuum, or for gravity coupled to a reasonable matter model), corresponding respectively to Problem~\ref{PbI} and Problem~\ref{PbII} are as follows:
\begin{conj}[Spacelike singularity conjecture, \cite{KerrStab,Kommemi,JonathanICM,r0}] \label{conj.BH}
In the setting of gravitational collapse (one-ended data, left picture in Figure~\ref{Fig1}), the terminal boundary of a generic asymptotically flat black hole consists of \begin{itemize}
\item a null piece emanating from infinity $i^+$ -- the \emph{Cauchy horizon} $\mathcal{CH}_{i^+}$, as depicted in Figure~\ref{Fig1}.
\item a non-empty \underline{spacelike singularity} $\mathcal{S}$, such that (by definition) for all $p \in \mathcal{S}$, the causal past of $p$ has relatively compact intersection with the initial data hypersurface $\Sigma$.
\end{itemize}
\end{conj}
\begin{figure}[H]
\begin{center}
\includegraphics[width=160 mm, height=50 mm]{onetwoendedconj}
\end{center}
\caption{Penrose diagram of a (non-hairy) black hole interior with a Cauchy horizon $\mathcal{CH}_{i^+}$ and a spacelike singularity $\mathcal{S}$. Left: one-ended black hole (gravitational collapse case). Right: two-ended black hole.}
\label{Fig1}
\end{figure}
\noindent The following conjecture regroups a series of heuristics \cite{BK1, BKL1, BKL2, KL} and somewhat imprecise statements regarding the typical behavior near spacelike singularities, and is often termed the ``BKL scenario''.
\begin{conj}[BKL proposal]\label{conj.cosmo} The dynamics near a generic spacelike, initial cosmological singularity $\mathcal{S}$, once restricted to a region sufficiently close to $\mathcal{S}$, are described as follows. \begin{enumerate}
\item \label{BKL1} \underline{Asymptotically velocity term
dominated behavior}. The causal future $J^{+}(p)$ of any given point $p \in \mathcal{S}$ on the singularity is well-described by a nonlinear system of ODEs, once one is sufficiently close to $\mathcal{S}$. Solutions to these ODEs resemble a sequence of Kasner-like regimes, which may be stable or unstable.
\item\label{BKL2bii} \underline{Kasner inversions}. Any unstable Kasner regime transitions towards a different (stable or unstable) Kasner regime.
\end{enumerate}
\begin{enumerate}
\item[3.a.] \label{BKL2a} \label{monotone.conj} (For stiff matter models only) \underline{Monotonic regime}. There are finitely many Kasner inversions: ultimately the spacetime only approaches a single (stable) Kasner metric with monotonic dynamics.
\item[3.b.] \label{BKL2b} \label{osc.conj} (For vacuum or non-stiff matter) \underline{Chaotic regime}. There are infinitely many Kasner inversions between unstable Kasner-like regimes in any generic $J^{+}(p)$.
\end{enumerate}
\end{conj} Here, a stiff matter model is either a stiff fluid or scalar field, see \cite{AnderssonRendall,BK1, BKL2} for a discussion. \\\indent Our objective is to study a class of \textit{spatially-homogeneous solutions}
of a stiff-matter model: the Einstein--Maxwell--Klein--Gordon equations, in which a \emph{charged} scalar field $\phi$ is coupled to electromagnetism and gravity. \begin{align}\label{E1} & Ric_{\mu \nu}(g)- \frac{1}{2}R(g)g_{\mu \nu}+ \Lambda g_{\mu \nu}= \mathbb{T}^{EM}_{\mu \nu}+ \mathbb{T}^{KG}_{\mu \nu} , \\ & \label{E2} \mathbb{T}^{EM}_{\mu \nu}=2\left(g^{\alpha \beta}F _{\alpha \nu}F_{\beta \mu }-\frac{1}{4}F^{\alpha \beta}F_{\alpha \beta}g_{\mu \nu}\right), \hskip 7 mm \nabla^{\mu}F_{\mu \nu}= iq_{0}\left( \frac{ \phi \overline{D_{\nu}\phi} -\overline{\phi} D_{\nu}\phi}{2}\right) , \; F=dA, \\ & \label{E3} \mathbb{T}^{KG}_{\mu \nu}= 2\left( \Re(D _{\mu}\phi \overline{D _{\nu}\phi}) -\frac{1}{2}(g^{\alpha \beta} D _{\alpha}\phi \overline{D _{\beta}\phi} + m ^{2}|\phi|^2 )g_{\mu \nu} \right), \hskip 7 mm D_{\mu}= \nabla_{\mu}+ iq_0 A_{\mu}, \\ &\label{E5} g^{\mu \nu} D_{\mu} D_{\nu}\phi = m ^{2} \phi, \end{align} with $\Lambda \in \mathbb{R}$ the cosmological constant, $m^2 \in \mathbb{R}$ the Klein--Gordon mass, and $q_0 \neq 0$ the scalar field charge.
Our study is based on the evolution of initial data posed on bifurcate characteristic hypersurfaces $\mathcal{H}_L$ and $\mathcal{H}_R$ emulating the event horizons of a two-ended black hole, and we show they lead to a spacelike singularity.
Our main Theorem~\ref{thm.intro} will also provide a precise near-singularity behavior in terms of one or two Kasner metrics of the form \eqref{Kasner}. In cosmological terms, the spacetimes we construct in Theorem~\ref{thm.intro} are so-called Kantowski-Sachs metrics, namely spatially homogeneous, but anisotropic cosmological spacetimes with spatial topology $\mathbb{R}\times \mathbb{S}^2$.
We shall also view each of our constructed spacetimes as the interior region of a so-called \emph{hairy black hole} (see Section~\ref{holo.intro}), namely a stationary black hole with non-trivial matter fields (the ``hair(s)'') on the horizon (see \cite{Review2,Review3,Review1} and references therein for a review of various types of hairy black holes). We comment that, although we only expect to be able to construct an exterior region to our spacetime in the asymptotically Anti-de-Sitter case, we nonetheless name our spacetimes hairy black holes for all choices of $\Lambda \in \mathbb{R}$.
Furthermore, our construction from Theorem~\ref{thm.intro} has bearings on the interior of (non-hairy) \emph{asymptotically flat black holes} as well, as we explain in Section~\ref{non-hairy.intro}. To summarize: the domain of dependence property allows to consider the black hole interior region independently from the black hole exterior (see Figure~\ref{construction}); thus the repercussions of Theorem~\ref{thm.intro} extend significantly beyond asymptotically AdS hairy black holes.
\subsection{Rough version of our main result} \label{roughthm.intro}
Before explaining the relevance to Conjectures~\ref{conj.BH} and \ref{conj.cosmo} of these novel hairy black hole interiors that we construct, we will first give a rough (but detailed) version of our main result immediately below. To this effect, we recall the well-known interior region of the Reissner--Nordstr\"{o}m-(dS/AdS) black hole (see also Section~\ref{sub:reissner_nordstrom}).$$g_{RN} = - \left ( 1 - \frac{2M}{r} + \frac{\mathbf{e}^2}{r^2} - \frac{\Lambda r^2}{3}\right ) dt^2 + \left ( 1 - \frac{2M}{r} + \frac{\mathbf{e}^2}{r^2} - \frac{\Lambda r^2}{3} \right )^{-1} dr^2 + r^2 d \sigma_{\mathbb{S}^2}$$ with parameters $(M, \mathbf{e},\Lambda)$. The hairy black hole constructed in the following Theorem~\ref{thm.intro} has initial data (\ref{rough.data1}), (\ref{rough.data2}) that are $O(\epsilon^2)$-perturbations of $g_{RN}$, with scalar hair of initial size $\epsilon$.
\begin{theo} \textup{[Rough version]} \label{thm.intro}
Fix the following characteristic initial data on bifurcate event horizons $\mathcal{H}_L \cup \mathcal{H}_R$: \begin{align}
& \phi \equiv \epsilon,\label{rough.data1}\\ & g= g_{RN} \color{black} + O(\epsilon^2) \label{rough.data2}
\end{align} where $ g_{RN}$ is a Reissner--Nordstr\"{o}m-(dS/AdS) metric with sub-extremal parameters $(M, \mathbf{e},\Lambda)$.
Define $(\mathcal{M}=\mathbb{R}\times (-\infty,s_{\infty}) \times \mathbb{S}^2,g,F,\phi)$ to be the maximal globally hyperbolic future development of this data. $(\mathcal{M}, g)$ is a spatially-homogeneous, spherically symmetric spacetime, and we write $g$, $F$ and $\phi$ in a suitable gauge as
\begin{equation}\label{MGHD}
g= -\Omega^2(s) [- dt^2+ ds^2] + r^2(s) d\sigma_{\mathbb{S}^2},\; F = \frac{Q(s)}{r^2(s)}\Omega^2(s) ds \wedge dt,\; \phi=\phi(s),
\end{equation}
solving \eqref{E1}--\eqref{E5} with $q_0\neq 0$ and initial data given by \eqref{rough.data1}, \eqref{rough.data2} and $Q_{|\mathcal{H}_L \cup \mathcal{H}_R} \equiv \mathbf{e}$.
Let $\eta > 0$ be sufficiently small. Then there exists $\epsilon_0(M,\mathbf{e},\Lambda,m^2,q_0,\eta)>0$ and a set $E_{\eta} \subset (- \epsilon_0, \epsilon_0) \setminus \{0\}$, satisfying $\frac{|(-\delta, \delta) \setminus E_{\eta}|}{2 \delta} = O(\eta)$ for all $0 < \delta \leq \epsilon_0$, such that for all $\epsilon \in E_{\eta}$, the spacetime $(\mathcal{M},g)$ terminates at a \textbf{spacelike singularity $\mathcal{S}=\{r=0\}$}, asymptotically described by a Kasner metric of positive exponents $(p_1,p_{2},p_{3}) \in (0,1)^3$.
The spacetime $(\mathcal{M}, g)$ may be partitioned into several regions, as illustrated by the Penrose diagram of Figure~\ref{Penrose_simplified}, and has the following features:
\begin{enumerate}
\item \label{I1} \underline{Almost formation of a Cauchy horizon}.\ \label{thm.a}In the early regions, $(g,F,\phi)$ are uniformly close to the Reissner--Nordstr\"{o}m-(dS/AdS) background Cauchy horizon, and the scalar field is approximated by a linearly-oscillating profile:
\begin{equation}\label{linear.osc}
\phi(s) = B(M, \mathbf{e},\Lambda,m^2,q_0)\cdot \epsilon \cdot e^{i\omega_{RN}(q_0,M,\mathbf{e},\Lambda) s} + \overline{B}(M,\mathbf{e},\Lambda,m^2,q_0) \cdot \epsilon \cdot e^{-i \omega_{RN}(q_0,M,\mathbf{e},\Lambda) s}+ O(\epsilon^2),
\end{equation}
where $B(M,\mathbf{e},\Lambda,m^2,q_0) \in \mathbb{C}\setminus\{0\}$ is a linear scattering parameter, and $\omega_{RN}(M,\mathbf{e},\Lambda,q_0)= |q_0 \mathbf{e}| \cdot (\frac{1}{r_-}-\frac{1}{r_+})\neq 0$. Here $r_{\pm}(M,\mathbf{e},\Lambda)>0$ are respectively the radii of the event and Cauchy horizons of the background Reissner--Nordstr\"{o}m-(dS/AdS) metric as defined in Section~\ref{sub:reissner_nordstrom}.
\item \label{I2}\underline{Collapsed oscillations}.\ \label{thm.b} The scalar field experiences \emph{growing} oscillations while $r$ shrinks towards $0$. Consequently \begin{equation}\label{Jo.intro}
\phi := \alpha(\epsilon) \approx C \cdot \sin( \omega_0\cdot \epsilon^{-2}+ O(\log (\epsilon^{-1}))), \text{ and } r \approx \epsilon \text{ at the end of the collapsed oscillations region }
\end{equation} with $C(M,\mathbf{e}, \Lambda, m^2, q_0) \in \mathbb{R}$ and $\omega_0(M,\mathbf{e},\Lambda,m^2,q_0)>0$. Note that $\phi|_{r \approx \epsilon} =O(1)$ despite data being $O(\epsilon)$.
\item \label{I3} \underline{Formation of the first Kasner regime}.\ \label{thm.c}A Kasner regime starts developing with Kasner exponents \begin{equation}\label{first.Kasner}
p_1 = P(\alpha) =\frac{\alpha^2-1}{3+\alpha^2},\quad p_2=p_3=\frac{2}{3+\alpha^2}.
\end{equation}
\item \label{I4} \label{thm.d}\underline{The final Kasner regime}. For any $\sigma\in(0,1)$, we introduce the following disjoint subsets of $E_{\eta}$. \begin{itemize}
\item (Non-inversion case) $\epsilon\in E_{\eta,\sigma}^{'\ Ninv} \subset E_{\eta}$ if $|\alpha(\epsilon)|\geq 1+\sigma$ (i.e.\ $p_1>0$). When $\epsilon \in E_{\eta,\sigma}^{'\ Ninv} $, the first Kasner regime is also the final Kasner regime and continues all the way to $\mathcal{S}=\{r=0\}$ in a monotonic fashion.
\item (Kasner inversion case) $\epsilon\in E_{\eta,\sigma}^{'\ inv} \subset E_{\eta}$ if $ \eta \leq |\alpha(\epsilon)| \leq 1-\sigma $. We have $| E_{\eta,\sigma}^{'\ inv}| >0$ (in particular $E_{\eta,\sigma}^{'\ inv}\neq \emptyset$). When $\epsilon\in E_{\eta,\sigma}^{'\ inv} $, the above Kasner regime eventually transitions towards a (different) final Kasner regime with positive Kasner exponents \begin{equation}\label{last.Kasner}
p_1 = P\left(\frac{1}{\alpha}\right) =\frac{1-\alpha^2}{1+3\alpha^2},\quad p_{2}=p_{3}=\frac{2\alpha^2}{1+3\alpha^2}.
\end{equation}
The final Kasner regime then continues all the way to $\mathcal{S} = \{ r = 0 \}$ in a monotonic fashion.
\end{itemize}
\item \label{charge.retention} \label{I5}\underline{Charge retention of the Kasner singularity}.\ \label{thm.e}The charge $Q(r)$ admits a limit $Q_{\infty} \neq 0$ as $r\rightarrow 0$. More precisely \begin{align}
& \label{retention1} Q_{\infty} = (1 - \delta(M,\mathbf{e},\Lambda)) \cdot \mathbf{e} + O(\epsilon) \text{ with } \delta(M,\mathbf{e},\Lambda) \in \left( 0,\frac{1}{2} \right) ,\\ & \text{ and } \delta(M,\mathbf{e},\Lambda) \left\{
\begin{array}{ll}
= \frac{1}{4} & \mbox{if } \Lambda=0, \\
\in (0,\frac{1}{4}) & \mbox{if } \Lambda>0, \\
\in (\frac{1}{4},\frac{1}{2}) & \mbox{if } \Lambda<0.
\end{array}
\right.
\end{align}
\end{enumerate}
\end{theo} Theorem~\ref{thm.intro} is a rough version of our main results later stated as Theorem~\ref{maintheorem} and Theorem~\ref{maintheorem2}. The content of Theorem~\ref{maintheorem} covers the statements \ref{I1}, \ref{I2} and \ref{I5} in Theorem~\ref{thm.intro}, together with some preliminary estimates towards the statements \ref{I3}-\ref{I4}. Theorem~\ref{maintheorem2} is specifically dedicated to Kasner regimes (including the Kasner inversion phenomenon) and covers the statements \ref{I3} and \ref{I4} in Theorem~\ref{thm.intro}.
\begin{rmk}
The constants $\eta, \sigma > 0$, presumed small throughout this article, are present to ensure that $\alpha(\epsilon)$ is bounded away from $\{0,1\}$. Note that we do not obtain a statement for every sufficiently small $\epsilon$, but only for a set $E_{\eta}$, which features an $O(\eta)$ loss in the sense given above. However, the methods used to prove Theorem~\ref{thm.intro} allow for $|1-\alpha|\, \epsilon^{-0.01}>1$ or $|\alpha|\, \epsilon^{-0.01}>1$, with only minor adjustments. Understanding what happens when, say, $|1-\alpha| \ll \epsilon^{2}$ or $|\alpha| \ll \epsilon^{2}$ is an interesting open problem (see Sections~\ref{cosmo.intro} and \ref{holo.intro}).
\end{rmk}
\begin{figure}
\centering
\scalebox{0.7}{
\begin{tikzpicture}
\path[fill=gray, opacity=0.3] (0, -6) -- (-6, 0)
.. controls (0, -1) .. (6, 0) -- (0, -6);
\path[fill=orange!70!black, opacity=0.5] (6, 0)
.. controls (0, 1.5) .. (-6, 0)
.. controls (0, -1) .. (6, 0);
\path[fill=brown!70!black, opacity=0.6] (6, 0)
.. controls (0, 1.5) .. (-6, 0)
.. controls (0, 4) .. (6, 0);
\path[fill=yellow!80!black, opacity=0.4] (6, 0)
.. controls (0, 4) .. (-6, 0)
.. controls (0, 6) .. (6, 0);
\node (p) at (0, -6) [circle, draw, inner sep=0.5mm, fill=black] {};
\node (r) at (6, 0) [circle, draw, inner sep=0.5mm] {};
\node (l) at (-6, 0) [circle, draw, inner sep=0.5mm] {};
\draw [thick] (p) -- (r)
node [midway, below right] {$\mathcal{H}_R$};
\draw [thick] (p) -- (l)
node [midway, below left] {$\mathcal{H}_L$};
\draw [color=gray]
(l) .. controls (0, -1) .. (r);
\draw [orange] (l) .. controls (0, 1.5) .. (r);
\draw [brown] (l) .. controls (0, 4) .. (r);
\draw [very thick, dotted] (l) .. controls (0, 6) .. (r)
node [midway, above, align=center] {\scriptsize spacelike singularity \\ \scriptsize $\mathcal{S} = \{ r = 0 \}$};
\node at (0, -2.6) {\footnotesize Early regions, $g \approx g_{RN}$};
\node at (0, +0.2) {\footnotesize Region of \textit{collapsed oscillations}};
\node at (0, +1.95) {\footnotesize First Kasner regime};
\node at (0, +3.6) {\footnotesize Final Kasner regime};
\end{tikzpicture}}
\caption{Penrose diagram of the hairy black hole interior from Theorem \ref{thm.intro}. If $|\alpha| \geq 1 + \sigma > 1$, then the first Kasner regime matches the final Kasner regime and continues to $\{ r = 0 \}$. If $0 < \eta \leq |\alpha| \leq 1 - \sigma < 1$, then there is a Kasner inversion between the first and final Kasner regimes. A more detailed breakdown is given in Figure~\ref{Penrose_detailed}.}
\label{Penrose_simplified}
\end{figure}
We will now discuss the relations between Theorem~\ref{thm.intro}, Problem~\ref{PbI} and Problem~\ref{PbII}. The following paragraphs provide short summaries of the subsequent sections in the introduction. One of the main features is the existence of a \emph{Kasner inversion}: we provide the first rigorous example of a spacetime in which \textbf{an unstable Kasner regime forms dynamically, and then disappears under Kasner inversion}.
\paragraph{Differences and connections with non-hairy black holes} The hairy black holes of Theorem~\ref{thm.intro} and the uncharged-matter ones from \cite{VDM21} have a spacelike singularity $\mathcal{S}=\{r=0\}$ and no Cauchy horizon (see Section~\ref{non-hairy.intro}) and, in that, differ globally from the non-hairy black holes
corresponding to Conjecture~\ref{conj.BH} (compare Figure~\ref{Fig1} and Figure~\ref{Penrose_simplified}). However, the domain of dependence property (in two-ended black holes\footnote{The differences between one-ended and two-ended black holes in Figure~\ref{Fig1} will be elaborated upon in Section~\ref{non-hairy.intro}. For now, let us note that the \emph{appearance of the spacelike singularity}, i.e.\ $\mathcal{S}\neq \emptyset$, crucially depends on whether the black hole is one-ended or two-ended \cite{r0}. However, numerics indicate that the quantitative behavior near the spacelike singularity is similar in the one or two ended case.}) shows that the spacelike singularity $\mathcal{S}$ in non-hairy black holes can arise from initial data
that is locally similar to the data of Theorem~\ref{thm.intro}, see Figure~\ref{construction}. Therefore, our new theorem on hairy black holes may also dictate the qualitative behavior of the spacelike singularity $\mathcal{S}$ inside the \emph{non-hairy black holes} of Conjecture~\ref{conj.BH} (see Section~\ref{non-hairy.intro} for a discussion). Lastly, our proof opens the door to studying spacelike singularities beyond the spatially homogeneous case in subsequent works -- notably in spherical symmetry -- see Open Problem~\ref{open1}.
\paragraph{Comparison with hairy black hole interiors for other matter models}
The Kasner exponents' dependency on $\sin\left(\omega_0\cdot \epsilon^{-2}+O(\log (\epsilon^{-1}))\right)$ found in Theorem~\ref{thm.intro} gives rise to fluctuations near $\epsilon=0$: we term this phenomenon the \emph{fluctuating collapse}.
The fluctuating collapse and Kasner inversions from Theorem~\ref{thm.intro} contrast with the \emph{violent nonlinear collapse} of the \emph{uncharged} hairy black holes (\eqref{E1}-\eqref{E5} with $q_0=0$, also a stiff model) found in \cite{VDM21} (see Section~\ref{other.intro}). Recall from \cite{VDM21} that in the $q_0=0$ case, \emph{there is no Kasner inversion}, and the final Kasner exponents and curvature are of the form $$ (p_1,p_{2},p_{3})=(1-O(\epsilon^2),\ O(\epsilon^2),\ O (\epsilon^2)),\ K(r) \approx r^{-O(\epsilon^{-2})}.$$ The name ``violent collapse'' in the $q_0=0$ case comes from the $O(\epsilon^{-2})$ power in the blow-up rate of the curvature $K(r)$, which becomes \emph{more singular} as $\epsilon \rightarrow0$. This is an example of a singular limit, since $\epsilon=0$ corresponds to Reissner--Nordstr\"{o}m, while we can make sense of an $\epsilon\rightarrow 0$ (weak) limit in the appropriate region using $$ \lim_{\epsilon\rightarrow 0} (p_1,p_{2},p_{3})(\epsilon)= (1,0,0) \text{, which corresponds to a subset of Minkowski}.$$ In contrast, in the $q_0 \neq 0$ case newly studied in our Theorem~\ref{thm.intro}, there is \emph{not even a weak limit} as $\epsilon\rightarrow 0$, since $(p_1,p_{2},p_{3})(\epsilon)$ depends on $\sin(\omega_0 \cdot \epsilon^{-2} + O(\log (\epsilon^{-1})))$ as discussed above. Also, in the $q_0\neq 0$ case, any neighborhood of $\epsilon=0$ contains all (positive) Kasner exponents (away from the degenerate cases), whereas in the $q_0=0$ case of \cite{VDM21} a neighborhood of $\epsilon=0$ is mapped into a neighborhood of $(p_1,p_{2},p_{3})=(1,0,0)$.
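As a purely heuristic cross-check of this comparison (this is our own bookkeeping, not a statement taken from either work), one may formally evaluate the exponent map $P$ of \eqref{first.Kasner} at an argument of size $\epsilon^{-1}$, which is the size of the dimensionless quantity $r\frac{d\phi}{dr}$ in the $q_0=0$ case (see the footnote in Section~\ref{osc.intro}): $$ P(\epsilon^{-1})=\frac{\epsilon^{-2}-1}{3+\epsilon^{-2}}=\frac{1-\epsilon^2}{1+3\epsilon^2}=1-O(\epsilon^2), \qquad \frac{2}{3+\epsilon^{-2}}=\frac{2\epsilon^2}{1+3\epsilon^2}=O(\epsilon^2),$$ which indeed reproduces the shape of the exponents found in the $q_0=0$ violent collapse.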
\paragraph{Collapsed oscillations and charge retention} The main new phenomenon driving the dynamics of Theorem~\ref{thm.intro} is what we term \emph{collapsed oscillations}. Even at the linear level, a (spatially-homogeneous) charged scalar field has (infinite) linear oscillations of the form \eqref{linear.osc} near the Cauchy horizon of the Reissner--Nordstr\"{o}m interior spacetime. In the nonlinear setting, such linear oscillations appear in some ``early regions'' in the dynamics (namely $\mathcal{EB}$ and $\mathcal{LB}$ in Figure~\ref{Penrose_detailed}). Subsequently these oscillations start interacting nonlinearly with the collapse process (as $r$ gets closer to $0$), leading to a Bessel-function type behavior in terms of $\frac{r}{\epsilon}$. Because $\frac{r}{\epsilon}$ is a \emph{decreasing} function of time, this Bessel behavior leads to \emph{growth of the scalar field}, which transitions from amplitude $\epsilon$ to amplitude $\alpha(\epsilon)=O(1)$. We will elaborate on collapsed oscillations and their mechanism in Section~\ref{osc.intro}.
Another important question relating to Problem~\ref{PbI} is whether spacelike singularities can retain charge/angular momentum. This issue is puzzling, as the only explicit black hole solution with a spacelike singularity is the Schwarzschild interior, which is uncharged and non-rotating. In point~\ref{thm.e} of Theorem~\ref{thm.intro} we exhibit a \emph{mechanism of discharge} of the black hole, passing from $\mathbf{e}$ at the event horizon $\mathcal{H} = \mathcal{H}_L \cup \mathcal{H}_R$ to $(1-\delta)\mathbf{e}$ at the spacelike singularity $\mathcal{S}$, where $\delta \in (0,\frac{1}{2})$. Note that the \emph{discharge is not complete} and the spacelike singularity retains a non-zero final charge $(1-\delta)\mathbf{e}$. It is remarkable that for $\Lambda=0$, the discharge ratio $\delta=\frac{1}{4}$ is independent of the black hole parameters.
To the best of our knowledge, the spacetime of Theorem~\ref{thm.intro} is the first example of a spacelike singularity retaining\footnote{We note that the charged hairy black holes with uncharged matter from \cite{VDM21} also had non-zero charge, but it was not dynamical.} charge. The main mechanism of discharge and charge retention occurs at the same time as the collapsed oscillations, and the charge varies little past the collapsed oscillations region (see Figure~\ref{Fig1}) until the spacelike singularity, see Remark~\ref{retention.rmk}.
\paragraph{Kasner inversions} In Theorem~\ref{thm.intro}, we show that if $|\alpha(\epsilon)| \approx C \cdot |\sin(\omega_0 \epsilon^{-2} +O(\log (\epsilon^{-1})))| < 1$, then there will be a Kasner inversion. For $(M,\mathbf{e},\Lambda,m^2, q_0)$ such that $C(M,\mathbf{e},\Lambda,m^2,q_0)>1$, the inversion condition $C \cdot |\sin(\omega_0 \epsilon^{-2} +O(\log (\epsilon^{-1})))|<1$ holds for all $\epsilon \in E_{\eta}^{inv} \cap (0,\epsilon_0)$, where $E_{\eta}^{inv} $ has non-zero measure: in fact $|E_{\eta}^{inv} \cap (0, \epsilon_0)| \approx \frac{\epsilon_0}{C}$ if $C\gg 1$ (see the heuristic count below). On the other hand, if $C(M, \mathbf{e}, \Lambda, m^2, q_0) < 1$, then the inversion condition always holds. These Kasner inversions have the following features: \begin{enumerate}
\item For some range of proper time $ \{ e^{- b^2(\alpha) \cdot \epsilon^{-2} } \ll \tau < \epsilon^{q^2(\alpha)}\} $ with $q(\alpha)>0,\ b(\alpha)>0$, the metric is uniformly close to a $(p_1,\frac{1-p_1}{2},\frac{1-p_1}{2})$ Kasner, where $p_1(\epsilon) \approx \frac{\alpha^2-1}{ 3+\alpha^2 }<0$.
\item At the smallest values of proper time from the singularity $\{ 0 < \tau \ll e^{- b^2(\alpha) \cdot \epsilon^{-2} }\}$, the metric is uniformly close to, and indeed converges as $\tau \rightarrow 0$ towards, a $(\acute{p}_1,\frac{1-\acute{p}_1}{2},\frac{1-\acute{p}_1}{2})$ Kasner, where $\acute{p}_1(\epsilon) \approx \frac{1-\alpha^2}{ 1+3\alpha^2 }>0$.
\end{enumerate}
\begin{rmk}
When writing (imprecisely) $ e^{- b^2(\alpha) \cdot \epsilon^{-2}} \ll \tau $, we mean $ \epsilon^{-N_1} \cdot e^{- b^2(\alpha) \cdot \epsilon^{-2}}< \tau$ for some $N_1>0$, and similarly $\tau \ll e^{- b^2(\alpha) \cdot \epsilon^{-2}} $ means $\tau < \epsilon^{N_2} \cdot e^{- b^2(\alpha) \cdot \epsilon^{-2}}$ for $N_2>0$, so that the transition region in between the two Kasner regimes has size $O(\log(\epsilon^{-1}))$ in terms of $\log(\tau^{-1})$, while $\log(\tau^{-1}) \approx \epsilon^{-2}$ at the times where the inversion is occurring. We interpret this to mean that the inversion occurs very quickly in terms of proper time $\tau$.
\end{rmk}
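Let us also include, only as an informal plausibility check for the measure estimate $|E_{\eta}^{inv} \cap (0, \epsilon_0)| \approx \frac{\epsilon_0}{C}$ stated above (and not as an indication of how it is proved), the following heuristic count: the inversion condition $C \cdot |\sin(\omega_0 \epsilon^{-2} +O(\log (\epsilon^{-1})))|<1$ asks that $|\sin \theta|<C^{-1}$ for $\theta=\omega_0 \epsilon^{-2}+O(\log(\epsilon^{-1}))$, an event of relative frequency $\frac{2}{\pi}\arcsin(C^{-1}) \approx \frac{2}{\pi C}$ per period of $\theta$ when $C \gg 1$. Since $\theta$ sweeps out a large number of periods as $\epsilon$ ranges over $(0,\epsilon_0)$, one expects $$ |E_{\eta}^{inv} \cap (0,\epsilon_0)| \approx \frac{2}{\pi}\arcsin(C^{-1}) \cdot \epsilon_0 \approx \frac{2 \epsilon_0}{\pi C},$$ consistent, up to an absolute constant, with the estimate above.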
We note that the final (post-inversion) Kasner regime has $\acute{p}_1>0$, while the pre-inversion Kasner has $p_1<0$. This is consistent with the early predictions of BKL \cite{BK1, BKL1, BKL2} that Kasner metrics with positive Kasner exponents are stable, while those with at least one negative exponent are unstable (see Section \ref{cosmo.intro} for an extended discussion).
\paragraph{Numerics and holographic superconductors in the AdS-CFT correspondence}
The recent numerical study \cite{hartnolletal2021} already predicted the scenario of Theorem~\ref{thm.intro} and coined the term ``Kasner inversion''. Beyond the setting of Theorem~\ref{thm.intro} (which features zero or one Kasner inversion, depending on $\epsilon$), \cite{hartnolletal2021} also discusses the possibility of having \emph{two} (or more) Kasner inversions. The original motivation of \cite{hartnolletal2021} is related to a body of work on the physical significance of the hairy black hole (including its asymptotically AdS exterior region) of Theorem~\ref{thm.intro}, claimed to be the model for a \emph{holographic superconductor} in the context of the AdS-CFT correspondence (see Section~\ref{holo.intro}).
\paragraph{Outline of the rest of the Introduction}
\begin{itemize}
\item In Section~\ref{non-hairy.intro}, we will first extend our discussion of Problem~\ref{PbI} and Conjecture~\ref{conj.BH} and elaborate on the links between non-hairy black holes and Theorem~\ref{thm.intro}. We will also review in this context the existing literature on the interior of black holes, and provide outstanding open problems.
\item In Section~\ref{other.intro}, we will compare our new hairy black holes with charged matter from Theorem~\ref{thm.intro} to hairy black holes arising from other matter models. We will in particular discuss the charged hairy black holes with uncharged matter from \cite{VDM21}, whose setting and model are very similar to that of Theorem~\ref{thm.intro}, but the late-time spacetime dynamics end up being very different.
\item In Section~\ref{osc.intro}, we will discuss one of the two primary nonlinear mechanisms governing the dynamics of the spacetime of Theorem~\ref{thm.intro}: \emph{the collapsed oscillations}, which lead to the growth of the scalar field. We will explain in particular how this phenomenon arises from the interaction between the linear oscillatory behaviour for charged scalar fields at the Reissner--Nordstr\"{o}m Cauchy horizon on the one hand, and the tendency of the Einstein equations to form a spacelike singularity $\{r=0\}$ on the other hand.
\item In Section~\ref{cosmo.intro}, we will discuss the other primary nonlinear mechanism at play: the occurrence of \emph{Kasner inversions}. This phenomenon has previously been investigated in depth in the cosmological setting, and we will elaborate on the connection with the pre-existing literature regarding such phenomena.
\item In Section~\ref{holo.intro}, we will discuss the \emph{exterior region} corresponding to the black hole interior of Theorem~\ref{thm.intro}, the most physically relevant case of which is asymptotically Anti-de-Sitter. We will emphasize the physical motivation in studying these spacetimes, called \emph{holographic superconductors}, which have been discovered and studied in the high-energy physics literature most notably in the context of the AdS-CFT correspondence.
\item In Section~\ref{outline.sec}, we give an outline of the paper and introduce the different regions of Figure~\ref{Penrose_detailed}.
\end{itemize}
\subsection{Differences and connections with non-hairy black holes} \label{non-hairy.intro}
The spacetimes constructed in Theorem~\ref{thm.intro} are spatially-homogeneous and we interpret them as the interior region of so-called ``hairy black holes''. The main distinctive feature of ``hairy black holes'' is the presence of so-called \emph{scalar hair}, meaning that the scalar field $\phi$ in \eqref{E5} \emph{does not decay on the event horizon} $\mathcal{H}^+$ and tends to a non-zero constant instead. (In the case of the hairy black holes of Theorem~\ref{thm.intro}, $\phi$ is identically equal to this constant on $\mathcal{H} = \mathcal{H}_L \cup \mathcal{H}_R$, see \eqref{rough.data1}).
As we will discuss in Section~\ref{holo.intro}, the construction of a hairy black hole exterior for \eqref{E1}-\eqref{E5} in spherical symmetry is only contemplated for asymptotically-AdS data (where $\Lambda < 0$). This exterior region will be static; note indeed, that if a black hole solution is $t$-independent, where $t$ is a timelike coordinate in the exterior, then the exterior is typically static, while the interior is spatially-homogeneous, due to $t$ becoming a spacelike coordinate in the interior.
Nonetheless, one may also wish to consider asymptotically flat (where $\Lambda = 0$) spacetimes which are relevant to the study of astrophysical black holes. In this context, one anticipates that solutions of
\eqref{E1}--\eqref{E5} with regular Cauchy data \emph{decay} towards a Reissner--Nordstr\"{o}m exterior solution, in particular $\phi$ tends to $0$ on $\mathcal{H}^+$ (in spherical symmetry, see \cite{PriceLaw,JonathanStabExt} for \eqref{E1}--\eqref{E5} with $q_0=m^2=0$, and also \cite{Moi2} for small $|q_0\mathbf{e}|$ on a fixed Reissner--Nordstr\"{o}m exterior). The resulting black holes thus feature $\phi$ decaying on $\mathcal{H}^+$ at the following rate: for all $v>1$, \begin{equation}\label{decay.s}
|\phi|_{\mathcal{H}^+}(v)+ |D_v\phi|_{\mathcal{H}^+}(v) \lesssim v^{-s},
\end{equation} where $v$ is a standard Eddington--Finkelstein type advanced-time coordinate and $s>\frac{1}{2}$. We will call such black holes ``non-hairy'' in the sequel, to mark the contrast with the black holes from Theorem~\ref{thm.intro}.
The interior of the non-hairy black holes solving \eqref{E1}-\eqref{E5} in spherical symmetry was studied in \cite{MoiChristoph,Moi,MoiThesis,r0,Moi4} and their Penrose diagram was completely characterized (modulo issues related to locally naked singularities, see the second paragraph below). In this section, we will briefly explain these results and provide contrast with our hairy black hole interiors from Theorem~\ref{thm.intro}. We also comment that one can use our hairy black hole interiors as a tool to retrieve information on non-hairy black holes, connecting Theorem~\ref{thm.intro} to Conjecture~\ref{conj.BH}, see the third paragraph below.
\paragraph{Local structure of the non-hairy black hole interior near infinity $i^+$}
We discuss the terminal boundary of the black hole interior. In the case of the hairy black holes of Theorem~\ref{thm.intro}, it is entirely spacelike: $\mathcal{S}=\{r=0\}$. In contrast, for (spherically symmetric) non-hairy black hole solutions of \eqref{E1}-\eqref{E5}, the terminal boundary is \emph{not} entirely spacelike and admits a null component near $i^+$ -- called the Cauchy horizon $\mathcal{CH}^+ \neq \emptyset$. This fact constitutes the most important difference between hairy and non-hairy black holes.
\begin{thm}[\cite{Moi}] \label{CH.thm} Consider regular spherically symmetric characteristic data on $\mathcal{H}^+ \cup \underline{C}_{in}$, where $\mathcal{H}^+:= [1,+\infty)_v \times \mathbb{S}^2$, converging to a sub-extremal Reissner--Nordstr\"{o}m exterior at the rate \eqref{decay.s}. Then, restricting $\underline{C}_{in}$ to be sufficiently short, the future domain of dependence of $\mathcal{H}^+ \cup \underline{C}_{in}$ is bounded by a Cauchy horizon $\mathcal{CH}^+$, namely a null boundary emanating from $i^+$ and foliated by spheres of strictly positive area-radius $r$, as depicted in Figure~\ref{Fig.relax}.
\end{thm}
\begin{figure}
\begin{center}
\includegraphics[width= 58 mm, height=40
mm]{Kerr3}
\end{center}
\caption{Penrose diagram of the spacetime corresponding to Theorem~\ref{CH.thm}}
\label{Fig.relax}
\end{figure}
\begin{rmk}
We note that the presence of a Cauchy horizon $\mathcal{CH}^+ \neq \emptyset$ in the interior of dynamical black holes is not specific to spherical symmetry: for instance, it has been obtained for perturbations of Kerr for the Einstein vacuum equations (without symmetry) in \cite{KerrStab}. Whether Theorem~\ref{CH.thm} can be generalized to \eqref{E1}-\eqref{E5} without spherical symmetry is an interesting open problem, in view of the particularly slow decay of the form \eqref{decay.s} one has to assume in the presence of matter (see \cite{MoiChristoph2,JonathanICM,MoiThesis} for an extended discussion of slow decay).
\end{rmk}
\paragraph{Global structure of non-hairy black hole interiors}
Theorem~\ref{CH.thm} only gives information on a local region located near $i^+$ (see Figure~\ref{Fig.relax}). The \emph{global nature of the terminal boundary}, as it turns out (see Theorem~\ref{r0.thm}), depends on the topology of the initial data; we distinguish two important cases: \begin{enumerate}[a.]
\item two-ended (topology of time-slices: $\mathbb{R}\times \mathbb{S}^2$). The maximally-extended Schwarzschild/Reissner-Nordstr\"{o}m/Kerr spacetimes, and our hairy black holes from Theorem~\ref{thm.intro} possess the two-ended topology.
\item one-ended (topology of time-slices: $\mathbb{R}^3$), which is the topology suitable for studying the gravitational collapse of a star into a black hole \cite{Christo1,Kommemi,JonathanICM,r0} (referred to as ``gravitational collapse'' for short).
\end{enumerate}
\begin{thm}[Black hole interior in gravitational collapse, \cite{r0,Moi2,Moi4}] \label{r0.thm} We consider a one-ended black hole interior, under the assumptions of Theorem~\ref{CH.thm} and additional inverse-polynomial lower bounds on $\phi$ consistent with \eqref{decay.s}. Then, assuming the absence of locally naked singularities emanating from the center $\Gamma$, there is a (non-empty) spacelike singularity $\mathcal{S}=\{r=0\}$ and the Penrose diagram is given by the left-most of Figure~\ref{Fig1}.
\end{thm}
\begin{rmk} \label{loc.naked.rmk}
Locally naked singularities are (outgoing) null boundaries $\mathcal{CH}_{\Gamma}$ emanating from the center $\Gamma$. Assuming their absence in Theorem~\ref{r0.thm} is unavoidable, since examples of locally naked singularities have been constructed \cite{Christo.existence} for \eqref{E1}-\eqref{E5}. However, for \eqref{E1}-\eqref{E5} with $F\equiv 0$, such locally naked singularities are non-generic within spherical symmetry \cite{Christo2,Christo3} and one may conjecture the same statement in the more general situation where $F\neq 0$. See \cite{Kommemi,r0} for an extended discussion of this delicate issue.
\end{rmk}
For two-ended black holes (irrelevant, however, to gravitational collapse), Theorem~\ref{r0.thm} is false because small perturbations of Reissner--Nordstr\"{o}m obeying \eqref{decay.s} feature no spacelike singularity \cite{nospacelike}. However, it is conjectured \cite{nospacelike,KerrDaf,Kommemi} that, even in the two-ended case, large perturbations would yield a spacelike singularity $\mathcal{S}=\{r=0\} \neq \emptyset$ and a Penrose diagram corresponding to the rightmost one in Figure~\ref{Fig1}.
\begin{rmk}\label{domain.rmk}
Note that for a two-ended black hole as in the rightmost Penrose diagram of Figure~\ref{Fig1}, the causal past of any compact subset of $\mathcal{S}$ intersects the event horizon $\mathcal{H}^+$ on a set with compact closure. This observation will be important in the discussion of the next paragraph, see Figure~\ref{construction}.
\end{rmk}
We conclude this section by mentioning previous works providing a pretty detailed characterization of spacelike singularities in spherical symmetry \cite{DejanAn,AnZhang,Christo4,Christo1,Christo2,Christo3} in the uncharged case (i.e.\ \eqref{E1}-\eqref{E5} with $F\equiv 0$).
\paragraph{Connections between hairy and non-hairy black holes and related open problems}
We come back to our original motivation: Conjecture~\ref{conj.BH} and understanding spacelike singularities inside black holes. Our goal is to construct a large class of (asymptotically flat) black holes with \emph{both} a spacelike singularity $\mathcal{S}=\{r=0\}$ and a null Cauchy horizon $\mathcal{CH}^+$, with precise quantitative information on (at least part of) $\mathcal{S}=\{r=0\}$.
Note that Theorem~\ref{r0.thm} shows that either there is a locally naked singularity (conjecturally non-generic, see Remark~\ref{loc.naked.rmk}), or there is a singularity $\mathcal{S}=\{r=0\}\neq \emptyset$ and a null Cauchy horizon $\mathcal{CH}^+\neq \emptyset$, but does not provide quantitative estimates on $\mathcal{S}$.
To obtain quantitative information on $\mathcal{S}$, we will use Theorem~\ref{thm.intro}, through the following construction. Parameterize the two event horizons by $\mathcal{H}_R=\{(-\infty,v),\ v \in \mathbb{R}\}$, $\mathcal{H}_L=\{(u,-\infty),\ u \in \mathbb{R}\}$. \begin{enumerate}[i.]
\item \label{G1} Fix $\phi_{|\mathcal{H}^+_R}(v)\equiv \epsilon$ for $v \leq A$, and $\phi_{|\mathcal{H}^+_L}(u)\equiv \epsilon$ for $u \leq A$. Evolving this characteristic data on $(\mathcal{H}_R \cap \{ v\leq A\}) \cup (\mathcal{H}_L \cap \{ u\leq A\})$ towards the past, we obtain a (non-unique) solution to \eqref{E1}--\eqref{E5} up to the bifurcate null cones $C_{out} \cup \underline{C}_{in}$ (see Figure~\ref{construction}).
\item \label{G2}Extend $C_{out}$ (respectively $\underline{C}_{in}$) into an outgoing (respectively ingoing) cone which is asymptotically flat (or even eventually flat) using a gluing argument. The bifurcate null cones $\tilde{C}_{out} \cup \underline{\tilde{C}}_{in}$ thus obtained intersect (what should be thought of as) future null infinity $\mathcal{I}^+=\mathcal{I}_L^+ \cup \mathcal{I}_R^+$, and $\phi_{|\tilde{C}_{out} \cup \underline{\tilde{C}}_{in}}$ decays towards $\mathcal{I}^+$.
\item \label{G3}Solve \emph{forward} for the above characteristic data on $\tilde{C}_{out} \cup \underline{\tilde{C}}_{in}$. By the domain of dependence property, the following spacetime region (consisting of the dark grey, orange, brown and yellow regions in Figure~\ref{construction}) $$\{ (u,v): u \leq A,\ v\leq A\}\cap \mathcal{D}^+((\mathcal{H}_R \cap \{ v\leq A\}) \cup (\mathcal{H}_L \cap \{ u\leq A\})), \text{ where } \mathcal{D}^+ \text{ denotes the domain of dependence }$$ is isometric to a subset of the hairy black hole $g_{\epsilon}$ of Theorem~\ref{thm.intro} that contains a large portion of the spacelike singularity $\mathcal{S}$ (in green in Figure~\ref{construction}).
\item \label{G4} Using that $\phi_{|\tilde{C}_{out} \cup \underline{\tilde{C}}_{in}}$ decays towards $\mathcal{I}^+$, prove that the decay condition \eqref{decay.s} is satisfied on the new event horizons $\mathcal{H}^*_R$ and $\mathcal{H}^*_L$ (note that the event horizons $\mathcal{H}^*_R$, $\mathcal{H}^*_L$ for the newly constructed black hole do not coincide with the original event horizons $\mathcal{H}_R$, $\mathcal{H}_L$ of the hairy black hole). As a consequence of Theorem~\ref{CH.thm}, obtain the existence of Cauchy horizons $\mathcal{CH}_L, \mathcal{CH}_R \neq \emptyset$.
\end{enumerate}
\begin{figure}[h]
\centering
\scalebox{0.6}{
\begin{tikzpicture}
\coordinate (p) at (0, -6);
\coordinate (r) at (6, 0);
\coordinate (l) at (-6, 0);
\path[fill=gray, opacity=0.5] (0, -6) -- (-6, 0)
.. controls (0, -1) .. (6, 0) -- (0, -6);
\path[fill=orange!70!black, opacity=0.5] (6, 0)
.. controls (0, 1.5) .. (-6, 0)
.. controls (0, -1) .. (6, 0);
\path[fill=brown!70!black, opacity=0.6] (6, 0)
.. controls (0, 1.5) .. (-6, 0)
.. controls (0, 4) .. (6, 0);
\path[fill=yellow!80!black, opacity=0.4] (6, 0)
.. controls (0, 4) .. (-6, 0)
.. controls (0, 6) .. (6, 0);
\draw [color=gray]
(l) .. controls (0, -1) .. (r);
\draw [orange] (l) .. controls (0, 1.5) .. (r);
\draw [brown] (l) .. controls (0, 4) .. (r);
\draw [very thick, dotted] (l) .. controls (0, 6) .. (r)
node [midway, above] {\scriptsize spacelike singularity};
\draw [thick, dashed] (p) -- (r)
node [midway, above left] {$\mathcal{H}_R \cap \{ v \leq A \}$};
\draw [thick, dashed] (p) -- (l)
node [midway, above right] {$\mathcal{H}_L \cap \{ u \leq A \}$};
\path[fill=white] (0, -6.5) -- (6.75, 0.25) -- (3, 4)
.. controls (2, 4) and (1.5, 4.2) .. (0.5, 4.45)
-- (5.475, -0.525) -- (p) -- (-5.475, -0.525) -- (-0.5, 4.45)
.. controls (-1.5, 4.2) and (-2, 4) .. (-3, 4)
-- (-6.75, 0.25) -- (0, -6.5);
\path[fill=lightgray, opacity=0.3] (0, -6.5) -- (6.75, 0.25) -- (3, 4)
.. controls (2, 4) and (1.5, 4.2) .. (0.5, 4.45)
-- (5.475, -0.525) -- (p) -- (-5.475, -0.525) -- (-0.5, 4.45)
.. controls (-1.5, 4.2) and (-2, 4) .. (-3, 4)
-- (-6.75, 0.25) -- (0, -6.5);
\node (p) at (0, -6) [circle, draw, inner sep=0.5mm, fill=black] {};
\node (s) at (0.5, 4.45) [circle, draw, inner sep=0.5mm, fill=red, red] {};
\node (t) at (-0.5, 4.45) [circle, draw, inner sep=0.5mm, fill=red, red] {};
\node (c) at (0, -8) [circle, draw, inner sep=0.5mm, fill=black] {};
\node (r2) at (7.5, -0.5) [circle, draw, inner sep=0.5mm] {};
\node (l2) at (-7.5, -0.5) [circle, draw, inner sep=0.5mm] {};
\node (h) at (0, -6.5) [circle, draw, inner sep=0.5mm, fill=blue, blue] {};
\node (r3) at (6.75, 0.25) [circle, draw, inner sep=0.5mm, blue] {};
\node (l3) at (-6.75, 0.25) [circle, draw, inner sep=0.5mm, blue] {};
\node (chr) at (3, 4) [circle, draw, inner sep=0.5mm, black] {};
\node (chl) at (-3, 4) [circle, draw, inner sep=0.5mm, black] {};
\node at (r3) [anchor=south west, blue] {$i^+_R$};
\node at (l3) [anchor=south east, blue] {$i^+_L$};
\draw [thick, dashed] (p) -- (1, -7);
\draw [thick, dashed] (p) -- (-1, -7);
\draw (c) -- (7.5, -0.5) node [midway, below right] {$C_{out}$ extends to $\tilde{C}_{out}$};
\draw (c) -- (-7.5, -0.5) node [midway, below left] {$\underline{C}_{in}$ extends to $\tilde{\underline{C}}_{in}$};
\draw [very thick] (c) -- (6.475, -1.525) node [midway, below right] {};
\draw [very thick] (c) -- (-6.475, -1.525) node [midway, below left] {};
\draw [thick, dashed, blue] (r3) -- node [midway, below right] {$\mathcal{H}^*_R$} (h) -- node [midway, below left] {$\mathcal{H}^*_L$} (l3);
\draw [thick, dashed] (r3) -- (r2) node [midway, above right] {$\mathcal{I}^+_R$};
\draw [thick, dashed] (l3) -- (l2) node [midway, above left] {$\mathcal{I}^+_L$};
\draw [thick, dashed] (r3) -- (chr) node [midway, above right] {$\mathcal{CH}_R$};
\draw [thick, dashed] (l3) -- (chl) node [midway, above left] {$\mathcal{CH}_L$};
\draw [very thick, dotted] (s) .. controls (1.5, 4.2) and (2, 4) .. (chr);
\draw [very thick, dotted] (t) .. controls (-1.5, 4.2) and (-2, 4) .. (chl);
\draw [very thick, red] (s) -- (6.475, -1.525);
\draw [very thick, red] (t) -- (-6.475, -1.525);
\draw [very thick, green] (s) .. controls (0, 4.55) .. (t);
\node at (0, -0.3) [align = center] {Region isometric \\ to the hairy \\ black hole metric \\ $g_{\epsilon}$ from Theorem~\ref{thm.intro}};
\end{tikzpicture}}
\caption{The proposed construction of a two-ended black hole with a spacelike singularity $\mathcal{S}$ via a gluing argument. The union of the dark grey, orange, brown and yellow regions (including the green part of the spacelike singularity) is isometric to a subset of the hairy black hole of Figure~\ref{Penrose_simplified}. $\mathcal{I}_L^+$ and $\mathcal{I}_R^+$ are the components of null infinity $\mathcal{I}^+$.}
\label{construction}
\end{figure}
We want to point out that the only unknown step is the proof of \eqref{decay.s} (step~\ref{G4}), which relies on establishing polynomial decay on the event horizon. We note, furthermore, that for small charge $|q_0 \mathbf{e}|\ll 1$ and $m^2=0$, step~\ref{G4} should follow from (a slight generalization of) \cite{Moi2}.
Running steps~\ref{G1}-\ref{G4} successfully provides a (two-ended) black hole with Penrose diagram as in the rightmost picture of Figure~\ref{Fig1}, i.e.\ a black hole with a Cauchy horizon $\mathcal{CH}^+ \neq \emptyset$ and a spacelike singularity $\mathcal{S}\neq \emptyset$ partly given by $\mathcal{S}$ in the hairy black hole of Theorem~\ref{thm.intro}. We formalize the above strategy into the following open problem. \begin{open}\label{open1}
Construct a (one or two)-ended black hole with a Cauchy horizon $\mathcal{CH}^+ \neq \emptyset$ and a spacelike singularity $\mathcal{S}\neq \emptyset$, which coincides with the hairy black hole singularity $\mathcal{S}$ from Theorem~\ref{thm.intro} away from $\mathcal{CH}^+ \cap \mathcal{S}$.
\end{open}
Our road-map towards a resolution of Open Problem~\ref{open1} indicates that the fluctuations and Kasner inversions of Theorem~\ref{thm.intro} \textbf{should play a role in the interior of asymptotically flat, non-hairy black holes}. We find it striking that, even when restricted to spherical symmetry, the spacelike singularity inside a black hole can obey such intricate dynamics.
To understand a larger class of black holes with both a Cauchy horizon $\mathcal{CH}^+ \neq \emptyset$ and a spacelike singularity $\mathcal{S}\neq \emptyset$, it is of interest to perturb the hairy black hole of Theorem~\ref{thm.intro} within spherical symmetry but relaxing spatial homogeneity. Subsequently following steps~\ref{G1}-\ref{G4}, where $g_{\epsilon}$ is replaced by a perturbed spacetime, will yield even more general insights than Open Problem~\ref{open1} into spherically-symmetric spacelike singularities inside black holes, which we formalize in the following open problem (note Open Problem~\ref{open2} is the charged ($q_0\neq0$) version of Open Problem v in \cite{VDM21}).
\begin{open}\label{open2}
Consider (two-ended) initial data on $\mathcal{H}^+$ such that, instead of \eqref{rough.data1}: \begin{equation}
|\phi_{|\mathcal{H}^+}(v) -\epsilon | \leq |\epsilon|^N \cdot e^{-C_0 v}
\end{equation} for $\epsilon \in E_{\eta}$, as defined in Theorem~\ref{thm.intro}, with $N>0$ and $C_0>0$ sufficiently large constants. Prove (or disprove) that the terminal boundary is spacelike, and provide (reasonably) precise quantitative estimates.
Then, construct a (one or two)-ended black hole with a Cauchy horizon $\mathcal{CH}^+ \neq \emptyset$ and a spacelike singularity $\mathcal{S}\neq \emptyset$, which coincides with the above perturbed hairy black hole singularity $\mathcal{S}$ away from $\mathcal{CH}^+ \cap \mathcal{S}$.
\end{open}
We finally want to emphasize that our quantitative methods give hope to transpose some results of Theorem~\ref{thm.intro} towards Open Problem~\ref{open2}. We hope to return to these very interesting questions in future work.
\subsection{Comparison with hairy black hole interiors for other matter models} \label{other.intro}
\paragraph{The charged hairy black holes with uncharged matter from \cite{VDM21}}
An alternative to studying \eqref{E1}-\eqref{E5} with a charged scalar field ($q_0\neq 0$) is to study the uncharged scalar field case $q_0=0$ where the Maxwell field $F\neq 0$ does not interact with $\phi$. This was first done numerically in \cite{hartnolletal2020} and then rigorously by the second author \cite{VDM21} and qualified as ``violent nonlinear collapse''. It is remarkable that the behavior in the $q_0=0$ case differs drastically from what we found in the $q_0\neq 0$ case in Theorem~\ref{thm.intro}, as the following result shows.
\begin{thm}[\cite{VDM21}] \label{violent.thm}
Assume the same hypotheses as in Theorem~\ref{thm.intro}, except that now $q_0=0$; hence $ F= \frac{\mathbf{e}}{r^2(s)} \Omega^2 ds \wedge dt$ with $\mathbf{e}\neq 0$.
Then, for almost every choice of sub-extremal parameters $(M,\mathbf{e},\Lambda,m^2)$, there exists $\epsilon_0(M,\mathbf{e},\Lambda,m^2)>0$ such that for all $0<|\epsilon|<\epsilon_0$, the spacetime $(\mathcal{M},g)$ ends at a \textbf{spacelike singularity $\mathcal{S}=\{r=0\}$} asymptotically described by a Kasner metric with exponents $(p_1,p_{2},p_{3}) =(1,0,0)+O(\epsilon^2)\in(0,1)^3$ and given by Figure~\ref{Fig7}.
Moreover, the Kretschmann scalar $\mathcal{K}=R^{\alpha \beta \gamma \delta} R_{\alpha \beta \gamma \delta}$ blows up at a rate $r^{-C\cdot \epsilon^{-2}+O(\epsilon^{-1})}$ on $\mathcal{S}=\{r=0\}$ for $C(M,\mathbf{e},\Lambda,m^2)>0$.
\end{thm}
We point out the following similarities and differences between Theorem~\ref{violent.thm} and Theorem~\ref{thm.intro}. \begin{enumerate}
\item In both cases, the terminal boundary is a spacelike singularity $\mathcal{S}=\{r=0\}$ approximately described by a Kasner metric \eqref{Kasner} with positive Kasner exponents (compare Figure~\ref{Penrose_simplified} and Figure~\ref{Fig7}).
\item In both cases, the early regions are similar and governed by the almost formation of a Cauchy horizon.
\item In both cases, the Maxwell charge $Q$ is uniformly bounded away from $0$: in Theorem~\ref{violent.thm} this is trivial ($Q=\mathbf{e}\neq 0$ is constant), in Theorem~\ref{thm.intro} this is item~\ref{charge.retention}, a surprising property that we call ``charge retention''.
\item Even at the linear level (i.e.~\eqref{E5} on a fixed Reissner--Nordstr\"{o}m interior), \eqref{linear.osc} is not true if $q_0=0$: the scalar field does not oscillate but instead grows like $\phi \approx \epsilon \cdot s$, where $\mathcal{CH}^+=\{s=\infty\}$ (except possibly for an exceptional set of $(M,\mathbf{e},\Lambda,m^2)$ of $0$-Lebesgue measure that leads to the absence of growth, see \cite{Kehle2018,VDM21}, which is why in Theorem~\ref{violent.thm} one restricts to \emph{almost} every choice of parameters).
\item The oscillating profile \eqref{linear.osc} interacts with the mechanism $r\rightarrow 0$ leading to Bessel-type oscillations after which $|\phi|$ becomes $O(1)$ around $r\approx \epsilon$ (collapsed oscillations). There is no such mechanism in Theorem~\ref{violent.thm}.
\item The final Kasner exponent $p_1 \in (0,1)$ in Theorem~\ref{thm.intro} is related to $| \sin( \omega_0\epsilon^{-2}+O(\log(\epsilon^{-1})))|$, but we restrict $\epsilon$ so that $p_1$ is bounded away from $0$ and $1$, so there is no overlap (by our assumptions) with any of the Kasner exponents obtained in Theorem~\ref{violent.thm} where $|p_1-1| \lesssim \epsilon^2$.
\item As a consequence, the collapse in Theorem~\ref{thm.intro} is \emph{not violent} but instead rapidly fluctuating in $\epsilon$: one can easily see that $\mathcal{K}$ blows up at a rate $r^{-q}$ where $q$ depends on $\sin( \omega_0 \cdot \epsilon^{-2}+O(\log(\epsilon^{-1})))$.
\item There is no Kasner inversion in the $q_0=0$ setting: in fact, in Theorem~\ref{violent.thm} one proves that the final exponent $p_1 \in (0,1)$, so there is no mechanism triggering a Kasner inversion. In contrast, in Theorem~\ref{thm.intro}, in the regimes where $|\sin( \omega_0 \cdot \epsilon^{-2} + O(\log (\epsilon^{-1})))|$ is too small, a Kasner regime with $p_1 < 0$ forms, which is unstable, and ultimately disappears under Kasner inversion, giving rise to a second Kasner regime with $p_1 \in (0,1)$.
\end{enumerate}
\begin{figure}
\begin{center}
\includegraphics[width=94 mm, height=102 mm]{Maldcolor22B}
\end{center} \caption{The Penrose diagram of the hairy black hole interiors constructed in Theorem~\ref{violent.thm}.}
\label{Fig7}
\end{figure}
We also remark that Theorem~\ref{thm.intro} restricts $\epsilon$ to a subset of $(-\epsilon_0,\epsilon_0)\setminus\{0\}$, which is the complement of a set of small measure, while Theorem~\ref{violent.thm} does not have this restriction. The restriction is to ensure that the final $p_1$ is bounded away from $\{0,1\}$ in Theorem~\ref{thm.intro}, as we explained above.
Finally, we note that Theorem~\ref{violent.thm} should lead to a resolution of Open Problem~\ref{open1} in the case of an uncharged, massive scalar field (i.e.\ \eqref{E1}-\eqref{E5} with $q_0=0$, $m^2\neq 0$), upon the proof of \eqref{decay.s} for $m^2\neq 0$ (i.e.\ step~\ref{G4} in the last paragraph of Section~\ref{non-hairy.intro}).
\paragraph{Other hairy black holes}
The study of spatially homogeneous hairy black holes has been abundant both in the mathematics and physics literature: we first mention the important examples of Einstein--Yang--Mills hairy black holes \cite{Bizon,Maison,Yau2,Galtsov1,Sarbach,Yau1}. For the Einstein--Yang--Mills black holes, even though the above works suggest that spacelike singularities play an important role in some regime, they also indicate that the qualitative behavior is different than what we obtained for charged scalar fields in Theorem~\ref{thm.intro}. Finally, we mention the existence of rotating hairy black holes with massive Klein--Gordon fields \cite{OtisYakov,numerics.KerrAdS,numerics.Kerr}.
We refer the reader to the introductions of \cite{OtisYakov,VDM21} for an extended discussion of various hairy black holes.
\subsection{The collapsed oscillations resulting from the charge of the scalar field} \label{osc.intro}
The collapsed oscillations occur in a region $\mathcal{O}=\{ \epsilon \lessapprox r \lessapprox r_-\}$ (see Figure~\ref{Penrose_simplified}). The key point is that, schematically, $\phi$ will be shown to obey the following Bessel equation of order $0$ in $\mathcal{O}$, with respect to a new variable which is the renormalized square of the area-radius $z:=\frac{r^2}{\epsilon^2}$: \begin{equation}\label{Bessel.eq}
\frac{d}{dz} \left(z \frac{d\phi}{dz}\right)+ \xi_0^2 z \phi=\text{error}.
\end{equation}
Here $\xi_0\neq 0$ is a constant proportional\footnote{One sees, as predicted by Theorem~\ref{violent.thm} (see \cite{VDM21}), that in the $q_0=0$ case, we have $\frac{d}{dr}\left(r\frac{d\phi}{dr}\right)\approx 0$, hence $r\frac{d\phi}{dr} \approx \text{constant}=\epsilon^{-1}$, which is why in the $q_0=0$ case, the behavior is violent, and not fluctuating as in the $q_0\neq0$ case, see also the discussion in Section~\ref{other.intro}.} to $q_0$. To simplify the discussion here, we normalize $\xi_0=1$. Since $1 \lessapprox z \lessapprox \epsilon^{-2}$, we need to understand the large $z$ behavior: it is given by damped oscillations of the form
\begin{equation}\label{Bessel}
J_0(z) \sim \sqrt{\frac{2}{\pi z}}\cos(z-\frac{\pi}{4})\text{ and } Y_0(z) \sim \sqrt{\frac{2}{\pi z}}\sin(z-\frac{\pi}{4}) \text{ as } z \rightarrow +\infty.
\end{equation}Note, however, that $z$ is a \underline{past}-directed timelike variable, so the damping is ``backwards-in-time''. \sloppy Thus $|Y_0|(z),\ |J_0|(z)\lesssim \epsilon$ on the past boundary $z\sim \epsilon^{-2}$ of $\mathcal{O}$, but $|Y_0|(z),\ |J_0|(z)\lesssim 1$ on the future boundary $z\approx 1$ of $\mathcal{O}$; modulo the oscillations, this means that \emph{the scalar field amplitude has experienced growth of size $\epsilon^{-1}$} in $\mathcal{O}$.
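To make the previous schematic count explicit: neglecting the error term, the general solution of \eqref{Bessel.eq} (with the normalization $\xi_0=1$) is a combination $a\, J_0(z)+b\, Y_0(z)$, whose amplitude is governed by the envelope $\sqrt{\frac{2}{\pi z}}$ appearing in \eqref{Bessel}. Comparing this envelope at the two ends of $\mathcal{O}$ gives, schematically, $$ \sqrt{\frac{2}{\pi z}}\,\bigg|_{z \sim \epsilon^{-2}} \sim \epsilon, \qquad \sqrt{\frac{2}{\pi z}}\,\bigg|_{z \approx 1} \sim 1,$$ which is precisely the amplification by a factor $\epsilon^{-1}$ referred to above.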
\begin{rmk}
Note that, as long as $r$ is bounded away from $0$, \eqref{Bessel.eq} is consistent with (linear) oscillations giving rise to \eqref{linear.osc}: it is only as $r$ gets close to $0$ that these oscillations provide growth, hence the name ``collapsed oscillations''. We will show, however, that as soon as $r\ll \epsilon$, $\phi$ no longer oscillates, see Section~\ref{cosmo.intro}.
\end{rmk}
The algebraic relations connecting \eqref{Bessel} to the ODE initial conditions $\phi(r\approx r_-)$ will ultimately show that $\phi$ has the following schematic form at the exit of the collapsed oscillations region $\mathcal{O}$: \begin{equation}\label{exit.osc}
\phi(r) \approx C \cos( \omega_0\cdot \epsilon^{-2}+O(\log(\epsilon^{-1}))) J_0 \left( \xi_0 \frac{r^2}{\epsilon^2}\right) + C \sin( \omega_0 \cdot \epsilon^{-2}+O(\log(\epsilon^{-1}))) Y_0 \left(\xi_0 \frac{r^2}{\epsilon^2}\right).
\end{equation} Contrary to appearances, \eqref{exit.osc} is not symmetric in $J_0$ and $Y_0$: when $r\ll\epsilon$, the function $ Y_0 \left(\xi_0 \frac{r^2}{\epsilon^2}\right)$ dominates $J_0 \left(\xi_0 \frac{r^2}{\epsilon^2} \right)$, since the Bessel functions $J_0(z)$ and $Y_0(z)$ obey the asymptotics \begin{equation}
J_0(z) = O(1) \text{ and } Y_0(z) \sim \log(z^{-1}) \text{ as } z \rightarrow 0.
\end{equation}
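For the reader's convenience, we also recall the standard precise small-argument expansions behind this schematic statement (see also Appendix~\ref{sec:appendix_A}): $$ J_0(z) = 1 + O(z^2), \qquad Y_0(z) = \frac{2}{\pi}\Big(\log\frac{z}{2} + \gamma\Big) + O(z^2 \log z), \qquad z \rightarrow 0,$$ where $\gamma$ is the Euler--Mascheroni constant; in particular $|Y_0(z)| \approx \frac{2}{\pi}\log(z^{-1})$ for small $z>0$, which is the $\log(z^{-1})$ behavior used above, up to a sign and a multiplicative constant.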
Hence, for $ e^{-\delta_0 \epsilon^{-2} } \ll r\ll \epsilon$ (the lower bound will be explained in Section~\ref{cosmo.intro}), we show schematically \begin{equation}\label{phi.protoKasner}
\phi(r) \approx C \sin( \omega_0\cdot \epsilon^{-2}+O(\log(\epsilon^{-1}))) \log (\xi_0^{-1} \frac{\epsilon^2}{r^2}) \approx C \sin( \omega_0\cdot \epsilon^{-2}+O(\log(\epsilon^{-1}))) \log (\frac{\epsilon}{r}).
\end{equation}
Since on a fixed Kasner metric \eqref{Kasner}, we find $\phi = p_{\phi} \log(\tau^{-1})$, where $\tau$ is roughly a power of $r$, and $p_{\phi}$ is chosen so that $p_{1}^2 + p_{2}^2+ p_{3}^2+2p_{\phi}^2=1$ (see already \eqref{kasner2}), the expression \eqref{phi.protoKasner} explains why we obtain final Kasner exponents that depend on $ \sin( \omega_0\cdot \epsilon^{-2}+O(\log(\epsilon^{-1}))) $.
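As an illustrative consistency check (our own bookkeeping, not an additional claim), the exponents \eqref{first.Kasner}, together with the scalar-field exponent that \eqref{kasner2} then forces, namely $p_{\phi} = \pm\frac{2\alpha}{3+\alpha^2}$, indeed satisfy both Kasner relations: $$ p_1+p_2+p_3 = \frac{(\alpha^2-1)+2+2}{3+\alpha^2}=1, \qquad p_1^2+p_2^2+p_3^2+2p_{\phi}^2 = \frac{(\alpha^2-1)^2+4+4+8\alpha^2}{(3+\alpha^2)^2} = \frac{(\alpha^2+3)^2}{(3+\alpha^2)^2}=1,$$ and the post-inversion exponents \eqref{last.Kasner} correspond to the same computation with $\alpha$ replaced by $\frac{1}{\alpha}$.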
\begin{rmk}\label{retention.rmk}
For most other quantities, such as the charge $Q$, the final value at $r=0$ is already determined inside $\mathcal{O}$ (up to $O(\epsilon)$-errors). Therefore, the charge retention mechanism from Theorem~\ref{thm.intro} results from an explicit computation in $\mathcal{O}$, see Lemma~\ref{lem:jo_charge_retention}.
\end{rmk}
For more details on Bessel equations and functions, we refer the reader to our Appendix~\ref{sec:appendix_A}.
\subsection{Kasner inversions and connections to cosmology} \label{cosmo.intro}
We now relate the results of this paper to the heuristic observations of BKL \cite{BK1, BKL1, BKL2, KL} regarding problems in relativistic cosmology, and explain how these heuristics manifest themselves rigorously in our work.
\paragraph{The BKL heuristics and Kasner inversions}
In \cite{KL}, Khalatnikov and Lifschitz propose an asymptotic form of the metric for a spacetime obeying the vacuum Einstein equations in the vicinity of a spacelike singularity. Assuming the spacetime $\mathcal{M}$ to be $I \times \Sigma = (0, T) \times \Sigma$ for some spatial $3$-manifold $\Sigma$, they write:
\begin{equation} \label{kl_ansatz}
g \approx - d \tau^2 + \sum_{I = 1}^3 \tau^{2 p_I(x)} \omega_I(x) \otimes \omega_I(x).
\end{equation}
Here the exponents $p_I(x)$ are smooth functions on $\Sigma$, the $\omega_I(x)$ form a basis of $1$-forms on $\Sigma$, and the metric is `synchronized' so that the singularity is located at $\tau = 0$. The exponents $p_I(x)$ are further constrained to obey the following two so-called \emph{Kasner relations}:
\begin{equation} \label{kasner1}
\sum_{I = 1}^3 p_I(x) = \sum_{I = 1}^3 p_I^2(x) = 1.
\end{equation}
However, \cite{KL} argues that generically, there is an inconsistency in the ansatz \eqref{kl_ansatz}, so long as near $\tau = 0$, one fails to obey the \emph{subcriticality condition}:
\begin{equation} \label{subcriticality}
\tau^{p_I - p_J - p_K} \ll \tau^{-1} \text{ for all } I, J , K \in \{1, 2, 3\} \text{ with } J \neq K.
\end{equation}
Further, in $1+3$-dimensional vacuum, the relations \eqref{kasner1} mean that the subcriticality condition \eqref{subcriticality} can never hold, outside of the exceptional case where $(p_1, p_2, p_3) = (1, 0, 0)$ or a permutation thereof. \cite{KL} thus concludes that singularities of the form \eqref{kl_ansatz} are not generic.
Subsequently, in \cite{BKL1}, the authors suggest that the metric \eqref{kl_ansatz} may be valid in some interval $(\tau_1, \tau_2) \subset I$, but as $\tau$ decreases further towards $0$, there must be a transition to a new modified Kasner-like regime:
\begin{equation*}
g \approx - d \tau^2 + \sum_{I = 1}^3 \tau^{2 \acute{p}_I(x)} \acute{\omega}_I(x) \otimes \acute{\omega}_I(x),
\end{equation*} \begin{equation} \label{transition}
\acute{p}_1 = \frac{- p_1}{1 + 2 p_1}, \quad \acute{p}_2 = \frac{p_2 + 2 p_1}{1 + 2 p_1}, \quad \acute{p}_3 = \frac{p_3 + 2 p_1}{1 + 2 p_1}.
\end{equation}
Such a transition is what we call a \emph{Kasner inversion}, and also may be described in the literature as a Kasner bounce or oscillation.
The new Kasner exponents $\acute{p}_I$ also obey the Kasner relations \eqref{kasner1}, and as such, will also fail to obey the subcriticality condition. Hence \cite{BKL1} predicts that the generic behaviour in the vicinity of a spacelike singularity \emph{in vacuum} is an infinite cascade of such transitions, which they term the \emph{oscillatory approach} to the singularity, and which is expected to be highly chaotic in nature, see again Conjecture~\ref{conj.cosmo}.
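For the reader's convenience, here is the short algebraic verification (using only \eqref{kasner1}) that the transition map \eqref{transition} preserves the vacuum Kasner relations: $$ \sum_{I=1}^3 \acute{p}_I = \frac{-p_1+(p_2+2p_1)+(p_3+2p_1)}{1+2p_1} = \frac{(p_1+p_2+p_3)+2p_1}{1+2p_1}=1,$$ $$ \sum_{I=1}^3 \acute{p}_I^{\,2} = \frac{p_1^2+(p_2+2p_1)^2+(p_3+2p_1)^2}{(1+2p_1)^2} = \frac{\big(\sum_{I} p_I^2\big) + 4p_1(p_2+p_3)+8p_1^2}{(1+2p_1)^2} = \frac{1+4p_1(1-p_1)+8p_1^2}{(1+2p_1)^2}=1,$$ provided $p_1 \neq -\frac{1}{2}$.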
To avoid this infinite cascade of transitions occurring in vacuum, the authors of \cite{BK1} then consider gravity coupled to a massless scalar field $\phi$, and modify the ansatz \eqref{kl_ansatz} and relations \eqref{kasner1} to
\begin{gather} \label{kl_ansatz2}
g \approx - d \tau^2 + \sum_{I = 1}^3 \tau^{2 p_I(x)} \omega_I(x) \otimes \omega_I(x), \quad \phi \approx p_{\phi}(x) \log \tau,
\\[0.5em] \label{kasner2}
\sum_{I = 1}^3 p_I(x) = \sum_{I = 1}^3 p_I^2(x) + 2 p_{\phi}^2(x) = 1.
\end{gather}
For particular choices of generalized exponents $(p_1, p_2, p_3, p_{\phi})$, for instance the tuple $(\frac{1}{3}, \frac{1}{3}, \frac{1}{3}, \frac{1}{\sqrt{3}})$, it is now possible for the subcriticality condition \eqref{subcriticality} to hold near $\tau = 0$, and as such the ansatz \eqref{kl_ansatz2} is consistent, and moreover, conjecturally stable. We note that in this context, the condition \eqref{subcriticality} is identical to $\min \{p_I(x) \} > 0$, i.e.~all Kasner exponents being positive.
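(For concreteness, the tuple above satisfies \eqref{kasner2} since $\frac{1}{3}+\frac{1}{3}+\frac{1}{3}=1$ and $3\cdot\frac{1}{9}+2\cdot\frac{1}{3}=\frac{1}{3}+\frac{2}{3}=1$, and it satisfies \eqref{subcriticality} since all of its Kasner exponents equal $\frac{1}{3}>0$.)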
There still exist, of course, choices of exponents that violate \eqref{subcriticality}; the corresponding spacetimes are then subject to an instability with the same Kasner transition map \eqref{transition}, to which we append the transition of the scalar field coefficient: $p_{\phi} \mapsto \acute{p}_{\phi} = \frac{p_{\phi}}{1 + 2 p_1}$. After a finite number of such transitions \cite{BK1}, one will reach a tuple of generalized Kasner coefficients obeying \eqref{subcriticality}. Hence a scalar field is often referred to as a stiff matter model, as in Conjecture~\ref{conj.cosmo}.
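The same bookkeeping as in the vacuum case shows that this extended map preserves \eqref{kasner2}: using $\sum_I p_I=1$ and $\sum_I p_I^2+2p_{\phi}^2=1$, $$ \sum_{I=1}^3 \acute{p}_I^{\,2} + 2\acute{p}_{\phi}^{\,2} = \frac{\big(\sum_{I} p_I^2 + 2p_{\phi}^2\big) + 4p_1(p_2+p_3) + 8 p_1^2}{(1+2p_1)^2} = \frac{1+4p_1+4p_1^2}{(1+2p_1)^2} = 1,$$ while $\sum_I \acute{p}_I = 1$ exactly as in the vacuum case.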
We make one final observation. The source of the instability in \cite{BK1, BKL1} is a spatial curvature term, which is actually suppressed in spherical symmetry. However, in \cite{BK2}, the authors argue that one can alternatively use an electromagnetic field to source the instability, and that the transition map \eqref{transition} between different regimes of Kasner exponents is identical. This is consistent with the stability of the Schwarzschild interior in spherical symmetry for electromagnetism-free matter models \cite{DejanAn, Christo1}, in contrast with Theorem~\ref{thm.intro} and Theorem~\ref{violent.thm}.
For further discussions regarding the BKL ansatz in relativistic cosmology, including generalization to higher dimension and other matter models, see also \cite{BelinskiHenneaux, Billiards, DemaretHenneauxSpindel, Henneaux}.
\paragraph{Rigorous constructions and stability results of Kasner metrics}
Beyond the heuristics of \cite{BK1, BK2, BKL1, BKL2, KL}, one may ask the following questions -- can one actually
construct a large class of spacetimes containing a spacelike singularity, obeying the asymptotics \eqref{kasner1} and \eqref{kasner2}, and what does one know about their stability?
For the first problem, we mention \cite{AnderssonRendall} constructing a large class of \emph{real analytic} solutions to the Einstein-scalar field system obeying the asymptotics \eqref{kl_ansatz2}. Beyond the real analytic regime, \cite{FournodavlosLuk} recently constructed a reasonably general class of \underline{vacuum} spacetimes
obeying \eqref{kl_ansatz}, which are moreover allowed to be only $C^k$, for large $k$.
Regarding stability, the state of the art is due to Fournodavlos--Rodnianski--Speck \cite{FournodavlosRodnianskiSpeck} which, loosely speaking, proves the stability of the exact generalized Kasner spacetime on $(0, + \infty) \times \mathbb{T}^3$, so long as certain exponents obey the subcriticality condition \eqref{subcriticality}. For other related results, we refer the reader to \cite{RodnianskiSpeck2, RodnianskiSpeck1, SpeckS3}.
Extending stability results beyond those for explicit Kasner and FLRW spacetimes (for instance for the spacetimes of Theorem~\ref{thm.intro} or Theorem~\ref{violent.thm}, see also Open Problem v in \cite{VDM21}) is an interesting question. In particular, note that existing results only deal with spacetimes obeying subcritical asymptotics in the sense of \eqref{subcriticality}, and hence do not feature any Kasner inversions, contrary to our (spatially-homogeneous) spacetime of Theorem~\ref{thm.intro}.
Finally, we mention the work of Ringstr\"om \cite{ringstrombianchi} on Bianchi IX cosmologies containing a rigorous study of a large class of spatially homogeneous spacetimes.
Among other things, \cite{ringstrombianchi} provides examples of spacetimes with infinitely many Kasner bounces in vacuum and, in contrast, proves the convergence to a stable Kasner-like regime in the presence of stiff matter.
\paragraph{The Kasner inversion mechanism for charged scalar fields}
We will now explain the schematic mechanism behind the Kasner inversion phenomenon as obtained in our Theorem~\ref{thm.intro}, which is the second main novelty with respect to the $q_0=0$ case of \cite{VDM21}. We will show that the Kasner inversion, \emph{in the regimes where it occurs} (namely, for $\epsilon \in E_{\eta, \sigma}^{'\ inv}$ of positive measure), is located in a region of the following form; for constants $D>0$, $N>0$: \begin{equation}\label{Kinv.intro}
\mathcal{K}_{inv} \subset \{ e^{- D \cdot \epsilon^{-2}} \epsilon^{N} \lesssim r \lesssim e^{- D \cdot \epsilon^{-2}} \epsilon^{-N} \}.
\end{equation}
We define the key quantity $\Psi$, which is a dimensionless derivative of $\phi$: for $r_-(M,\mathbf{e},\Lambda)>0$ and $\delta_0(M, \mathbf{e}, \Lambda) > 0$ to be defined later, let \begin{equation}\label{Psi.def}
\Psi:= -r \frac{d\phi}{dr}, \text{ and define } \Psi_{i}:= \Psi \big|_{{r}=e^{-\delta_0\cdot \epsilon^{-2}}r_-}.
\end{equation}
The condition for the presence of an inversion will end up being \begin{equation}\label{inv.cond} \tag{inv}
\eta \leq |\Psi_i| \leq 1-\sigma, \text{ for some } \eta, \sigma>0 \text{ independent of } \epsilon.
\end{equation}
The reason for assuming $\eta \leq |\Psi_i|$ is that, based on numerics (see Section~\ref{holo.intro}), there could be multiple Kasner inversions when $|\Psi_i|$ is close to $0$, and the dynamics would then be even more complicated.
As a consequence of \eqref{phi.protoKasner}, we find that for some $C(M,\mathbf{e},\Lambda,m^2,q_0) \neq 0$ \begin{equation}\label{psii}
|\Psi_i| \approx |C| \cdot |\sin( \omega_0\cdot \epsilon^{-2}+O(\log (\epsilon^{-1})))| .
\end{equation} Combining \eqref{inv.cond} and \eqref{psii} explains heuristically why the presence of an inversion depends on $\epsilon$, and why any small neighborhood of $0$ of the form $(-\delta,\delta)$ still contains infinitely many values of $\epsilon$ whose spacetimes feature an inversion.
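To illustrate this heuristic numerically, the following short Python sketch samples $\epsilon$ in shrinking intervals and measures the proportion for which $|C \sin(\omega_0 \epsilon^{-2})|$ falls in a window of the form \eqref{inv.cond}; the values of $C$, $\omega_0$, $\eta$, $\sigma$ below are illustrative placeholders rather than the actual constants of \eqref{psii}.
\begin{verbatim}
# Fraction of epsilon in (delta/2, delta) for which |C sin(omega_0/epsilon^2)|
# lies in the window [eta, 1 - sigma]; all constants are placeholders.
import numpy as np

rng = np.random.default_rng(0)
C, omega_0, eta, sigma = 1.2, 2.0, 0.05, 0.05
for delta in (1e-1, 1e-2, 1e-3):
    eps = rng.uniform(delta / 2, delta, 200_000)
    Psi_i = np.abs(C * np.sin(omega_0 * eps**(-2)))
    frac = np.mean((Psi_i >= eta) & (Psi_i <= 1 - sigma))
    print(f"delta = {delta:.0e}: proportion satisfying (inv) ~ {frac:.3f}")
\end{verbatim}
The proportion stabilizes at a value strictly between $0$ and $1$ as $\delta$ shrinks, consistent with the claim that every neighborhood of $0$ contains a set of $\epsilon$ of positive measure featuring an inversion.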
Our non-inversion condition in Theorem~\ref{thm.intro} is not the complement of \eqref{inv.cond}; it is instead \begin{equation}\label{ninv.cond}
|\Psi_i| \geq 1+\sigma, \text{ for some } \sigma>0 \text{ independent of } \epsilon.
\tag{no-inv} \end{equation} If $|\Psi_i|$ is too close to $1$, though we are still able to produce a spacelike singularity (see already Theorem~\ref{maintheorem}), we do not claim further quantitative estimates, as some Kasner exponents degenerate towards $0$ in this case.
We now explain why an inversion occurs if \eqref{inv.cond} is satisfied, whereas no inversion occurs if \eqref{ninv.cond} is satisfied.
Since the wave equation for $\phi$ is second-order, the first-order derivative quantity $\Psi$ should satisfy a first-order ODE.
Though our system is highly nonlinear, this ODE surprisingly turns out to be presentable in a simple form, written schematically\footnote{Note that the discussion of the ODE \eqref{Psi.ODE} and its solutions was already present in \cite{hartnolletal2021} at the heuristic and numerical level.} as follows: \begin{equation}\label{Psi.ODE}
\frac{d\Psi}{dR} = - \Psi (\Psi-\Psi_i) (\Psi- \Psi_i^{-1})+error, \text{ where } R:= \log(\frac{r_-}{r}).
\end{equation}
The dynamics of $\Psi$ hinges on the linearized stability of \eqref{Psi.ODE} near $\Psi=\Psi_i$, which is of the schematic form $$ \frac{d(\delta \Psi)}{dR} = - (\Psi_i^2- 1) \cdot \delta \Psi +error.$$
If $|\Psi_i|> 1$, then $\Psi=\Psi_i$ is a stable fixed point as $R \rightarrow+\infty$ (corresponding to $r\rightarrow 0$): this is what happens when \eqref{ninv.cond} holds, and then there is no inversion and $\Psi \approx \Psi_i$ up to $r=0$.
In contrast, if $|\Psi_i|<1$, then $\Psi=\Psi_i$ is an \emph{unstable} fixed point, but $\Psi=\Psi_i^{-1}$ is a stable fixed point. If \eqref{inv.cond} is true, then we find that $\Psi$ gets inverted from $\Psi_i$ to $\Psi_i^{-1}$ in the region \eqref{Kinv.intro}, over which the change in $R$ is $\Delta R=O(\log(\epsilon^{-1}))$.
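The following minimal numerical sketch illustrates this dichotomy on the model ODE \eqref{Psi.ODE}, with the error term dropped and illustrative values of $\Psi_i$: starting from the slightly perturbed value $\Psi(0)=\Psi_i+0.01$, the solution settles at $\Psi_i^{-1}$ when $|\Psi_i|<1$, and relaxes back to $\Psi_i$ when $|\Psi_i|>1$.
\begin{verbatim}
# Toy version of (Psi.ODE) with the error term dropped:
#     dPsi/dR = -Psi (Psi - Psi_i) (Psi - 1/Psi_i).
# The values of Psi_i and the size of the perturbation are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(R, Psi, Psi_i):
    return -Psi * (Psi - Psi_i) * (Psi - 1.0 / Psi_i)

for Psi_i in (0.4, 1.6):    # |Psi_i| < 1: inversion;  |Psi_i| > 1: no inversion
    # start from a small perturbation of the fixed point Psi = Psi_i
    sol = solve_ivp(rhs, (0.0, 40.0), [Psi_i + 0.01], args=(Psi_i,),
                    rtol=1e-9, atol=1e-12)
    print(f"Psi_i = {Psi_i}:  Psi(R = 40) = {sol.y[0, -1]:.4f}"
          f"  (compare 1/Psi_i = {1.0 / Psi_i:.4f})")
\end{verbatim}
When $|\Psi_i|<1$, a perturbation of the opposite sign would instead flow to the fixed point $\Psi=0$ of the toy ODE; the discussion above asserts that, in the regime \eqref{inv.cond}, the actual dynamics land on $\Psi_i^{-1}$.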
Once the behavior of $\Psi$ has been quantified, one can immediately retrieve the Kasner exponents from Theorem~\ref{thm.intro}, using techniques similar to those in \cite{VDM21}. The most important (but algebraically trivial) feature of these relations is that $p_1=P(\beta)>0$ if and only if $|\beta|>1$, see the formulas \eqref{first.Kasner} and \eqref{last.Kasner} (where $\alpha \approx \Psi_i$): thus the final Kasner always has $p_1>0$ in Theorem~\ref{thm.intro}. Hence the final Kasner indeed lies in the subcritical regime of exponents, as explained earlier in Section~\ref{cosmo.intro}.
\subsection{Numerics and holographic superconductors in the AdS-CFT correspondence} \label{holo.intro}
In this section, we will discuss two separate questions, which end up being connected by a vast literature. \begin{itemize}
\item \label{holo1} In what sense is the spacetime of Theorem~\ref{thm.intro} the interior of a hairy black hole, i.e.\ can we construct a corresponding ``hairy black hole exterior''?
\item \label{holo2} What do numerics tell us about the hairy black hole interior of Theorem~\ref{thm.intro}?
\end{itemize}
The first question will take us outside the realm of asymptotically flat black holes: indeed, by \cite{Bekenstein}, there are no non-trivial static, spherically symmetric solutions of \eqref{E1}-\eqref{E5} with $\Lambda = 0$ (albeit under the restrictions $m^2\geq0$ and $q_0=0$).
However, in the $\Lambda < 0$ case, there is hope to construct an asymptotically Anti-de-Sitter hairy black hole exterior (see Open Problem iv in \cite{VDM21}) with appropriate boundary conditions corresponding to the interior of Theorem~\ref{thm.intro}. In fact, such a construction has been proposed heuristically and numerically in the pioneering works \cite{holorigin1,holorigin3,holorigin2} as a model of a \emph{holographic superconductor} in the AdS/CFT correspondence. More recently, there have been follow-up works \cite{holo3} (\eqref{E1}-\eqref{E5} with $F\equiv 0$, giving hairy perturbations of Schwarzschild), \cite{hartnolletal2020} (\eqref{E1}-\eqref{E5} with $q_0=0$ but $F\neq 0$, corresponding to Theorem~\ref{violent.thm}) and \cite{hartnolletal2021} (\eqref{E1}-\eqref{E5} with $q_0\neq 0$, the setting of Theorem~\ref{thm.intro}), on which we focus in what follows. Before entering into specifics, we also mention the interesting follow-up works \cite{holo1,holo5,holo4,holo2,holo6} for different matter models.
Anti-de-Sitter asymptotics impose that, for a negative cosmological constant $\Lambda<0$ in \eqref{E1}: \begin{equation}\label{ads1}
g= \left(1-\frac{2M}{r}- \frac{\Lambda r^2}{3}+o(r^{-1})\right)[-dt^2+ds^2] + r^2(s) d\sigma_{\mathbb{S}^2},
\end{equation} which gives the following asymptotics on $\phi$ in \eqref{E5}: for $m^2<0$, there exist constants $\phi_{(0)}$, $\phi_{(1)}$ such that
\begin{equation}\label{ads2}\phi(r) = \phi_{(0)} \cdot u_D(r)+ \phi_{(1)}\cdot u_N(r), \text{ where } u_D (r)\sim r^{-\frac{3}{2}+\sqrt{\frac{9}{4}-m^2}} \text{ and } u_N (r)\sim r^{-\frac{3}{2}-\sqrt{\frac{9}{4}-m^2}} \text{ as } r \rightarrow+\infty.\end{equation} Here $\phi_{(0)}=0$ corresponds to Dirichlet-type boundary conditions, while $\phi_{(1)}=0$ is Neumann-type. In \cite{holorigin3,holorigin2}, the authors propose the construction of a static hairy black hole obeying \eqref{ads1} and \eqref{ads2}, either with $\phi_{(0)}\neq 0$ (stimulated emission) or $\phi_{(0)}= 0$ (spontaneous emission). This concludes our discussion of the first question.
We turn to the second question and start by remarking that the collapsed oscillations and Kasner inversions from Theorem~\ref{thm.intro} were previously anticipated numerically in \cite{hartnolletal2021}, in which an analogy was drawn with the (AC) Josephson effect inside a ``standard'' superconductor (see \cite{Jo1,Jo2}); this mechanism ultimately causes the Kasner inversion in some regime. This scenario, verified numerically, is entirely consistent with our findings from Theorem~\ref{thm.intro}.
Finally, note that, by the formula \eqref{first.Kasner}, when $\alpha \approx \Psi_i$ gets close to $0$, the Kasner exponents $p_{2}=p_{3}$ get close to $0$: when this happens, \cite{hartnolletal2021} observes numerically a \emph{second Kasner inversion} and also mentions the possibility of arbitrarily many Kasner inversions. These very interesting aspects are not covered by Theorem~\ref{thm.intro}, since its assumptions \emph{specifically require choosing $\epsilon$} so that $\alpha(\epsilon)$ is bounded away from $0$ (at the cost, of course, of reducing the measure of the set of eligible $\epsilon$ by an arbitrarily small amount $\eta>0$). It would be of great interest to prove rigorous results confirming the numerical breakthroughs of \cite{hartnolletal2021}. \begin{open}\label{openc}
Generalize the conclusions of Theorem~\ref{thm.intro} to a larger set of $\epsilon \in (-\epsilon_0,\epsilon_0)\setminus\{0\}$, allowing $|\Psi_i| \lesssim \epsilon^N$ for a suitably chosen $N>0$, where $\Psi_i$, defined in \eqref{Psi.def}, is schematically of the form \eqref{psii}. In the situation $|\Psi_i| \lesssim \epsilon^N$, control the occurrence of two (or more) Kasner inversions.
\end{open}
We finally note that the techniques of Theorem~\ref{thm.intro} still allow us to control the spacetime quantitatively up to the proto-Kasner region (see Figure~\ref{Penrose_detailed} and Section~\ref{sec:protokasner}), even if $|\Psi_i|$ is close to $0$ or $1$, but not beyond.
\subsection{Outline of the paper and the different regions of Figure~\ref{Penrose_detailed}}\label{outline.sec}
The paper (and the proof) will follow the various regions $\mathcal{R}$, $\mathcal{N}$, $\mathcal{EB}$, $\mathcal{LB}$, $\mathcal{O}$, $\mathcal{PK}$ and $\mathcal{K}$ depicted on Figure~\ref{Penrose_detailed}.
\begin{itemize}
\item In Section~\ref{prelim.section}, we give some geometric preliminaries, and explain the gauge used in Theorem~\ref{thm.intro}.
\item In Section~\ref{sec:theorem}, we give a precise statement of the main result corresponding to Theorem~\ref{thm.intro}, together with the precise definition of the regions $\mathcal{R}$, $\mathcal{N}$, $\mathcal{EB}$, $\mathcal{LB}$, $\mathcal{O}$, $\mathcal{PK}$ and $\mathcal{K}$.
\item In Section~\ref{sec:einsteinrosen}, we prove estimates in the red-shift region $\mathcal{R}$, the no-shift region $\mathcal{N}$, the early blue-shift region $\mathcal{EB}$, and the late blue-shift region $\mathcal{LB}$. These estimates are similar to the ones appearing in \cite{VDM21} and feature the almost formation of a Cauchy horizon.
\item In Section~\ref{sec:oscillations}, we prove estimates in the oscillations region $\mathcal{O}$. This section corresponds to the collapsed oscillations discussed in Section~\ref{osc.intro}.
\item In Section~\ref{sec:protokasner}, we prove estimates in the proto-Kasner region $\mathcal{PK}$. In this section, we demonstrate the onset of a Kasner geometry transitioning from the collapsed oscillations in $\mathcal{O}$ to the Kasner behavior in $\mathcal{K}$.
\item In Section~\ref{sec:sing}, we linearize the system of Einstein equations to control precisely the phase $\Theta(\epsilon)= \omega_0 \epsilon^{-2} + O(\log (\epsilon^{-1}))$ appearing in \eqref{psii}; this step is essential in constructing the set $E_{\eta}$ of acceptable $\epsilon$ in Theorem~\ref{thm.intro}.
\item In Section~\ref{sec:kasner}, we prove estimates in the Kasner region $\mathcal{K}$. In particular, we prove in this section the Kasner inversion phenomenon discussed in Section~\ref{cosmo.intro}.
\item In Section~\ref{section:quantitative}, we conclude the proof of (the precise version of) Theorem~\ref{thm.intro} by providing precise geometric estimates characteristic of the Kasner behavior in a sub-region of $\mathcal{PK}\cup\mathcal{K}$.
\end{itemize}
We will also introduce the following regions, which overlap with some of the regions of Figure~\ref{Penrose_detailed}: the restricted proto-Kasner region $\mathcal{PK}_1$, the first Kasner region $\mathcal{K}_1$, the second Kasner region $\mathcal{K}_2$ and the Kasner-inversion region $\mathcal{K}_{inv}$ (see Figure~\ref{FigN} and Section~\ref{sec:theorem}). Note, however, that in the absence of a Kasner inversion, $\mathcal{K}_2=\mathcal{K}_{inv}=\emptyset$.
\begin{figure}
\centering
\scalebox{0.7}{
\begin{tikzpicture}
\path[fill=red!70!black, opacity=0.3] (0, -6) -- (-6, 0)
.. controls (0, -6) .. (6, 0) -- (0, -6);
\path[fill=green!70!black, opacity=0.3] (6, 0)
.. controls (0, -4.5) .. (-6, 0)
.. controls (0, -6) .. (6, 0);
\path[fill=blue, opacity=0.3] (6, 0)
.. controls (0, -4.5) .. (-6, 0)
.. controls (0, -2.5) .. (6, 0);
\path[fill=blue!70!black, opacity=0.4] (6, 0)
.. controls (0, 0.2) .. (-6, 0)
.. controls (0, -2.5) .. (6, 0);
\path[fill=orange!70!black, opacity=0.6] (6, 0)
.. controls (0, 1.5) .. (-6, 0)
.. controls (0, -1) .. (6, 0);
\path[fill=brown!70!black, opacity=0.6] (6, 0)
.. controls (0, 1.5) .. (-6, 0)
.. controls (0, 4) .. (6, 0);
\path[fill=yellow!80!black, opacity=0.4] (6, 0)
.. controls (0, 4) .. (-6, 0)
.. controls (0, 6) .. (6, 0);
\node (p) at (0, -6) [circle, draw, inner sep=0.5mm, fill=black] {};
\node (r) at (6, 0) [circle, draw, inner sep=0.5mm] {};
\node (l) at (-6, 0) [circle, draw, inner sep=0.5mm] {};
\draw [thick] (p) -- (r)
node [midway, below right] {$\mathcal{H}_R$};
\draw [thick] (p) -- (l)
node [midway, below left] {$\mathcal{H}_L$};
\draw [red] (l) .. controls (0, -6) .. (r)
node [black, midway, above] {};
\draw [color=green!50!black] (l) .. controls (0, -4.5) .. (r)
node [black, midway, above] {};
\draw [blue] (l) .. controls (0, -2.5) .. (r)
node [black, midway, above] {};
\draw [dashed, thick, color=orange]
(l) .. controls (0, -1) .. (r)
node [midway, above, black] {};
\draw [dashed, thick, color=blue] (l) .. controls (0, 0.2) .. (r)
node [midway, above, black] {};
\draw [orange] (l) .. controls (0, 1.5) .. (r)
node [black, midway, above] {};
\draw [brown!70!black, dashed, thick]
(l) .. controls (0, 3.2) .. (r)
node [black, midway, above=-0.7mm] {};
\draw [dashed, brown, thick] (l) .. controls (0, 4) .. (r)
node [black, midway, above=-0.7mm] {};
\draw [very thick, dotted] (l) .. controls (0, 6) .. (r)
node [midway, above] {\tiny $r = 0$};
\node at (0, -5.1) {$\mathcal{R}$};
\node at (0, -3.8) {$\mathcal{N}$};
\node at (0, -2.6) {$\mathcal{EB}$};
\node at (0, -0.8) {$\mathcal{LB}$};
\node at (0, +0.2) {$\mathcal{O}$};
\node at (0, +2) {$\mathcal{PK}$};
\node at (0, +2.65) {$\mathcal{PK}_1$};
\node at (0, +3.8) {$\mathcal{K}$};
\end{tikzpicture}}
\caption{A more detailed version of Figure~\ref{Penrose_simplified}, partitioning the hairy black hole interior into the different regions $\mathcal{R}, \mathcal{N}, \mathcal{EB}, \mathcal{LB}, \mathcal{O}, \mathcal{PK}, \mathcal{K}$, to be precisely defined in Section~\ref{sub:regions}.}
\label{Penrose_detailed}
\end{figure}
\begin{figure}
\begin{center}
\scalebox{0.7}{
\begin{tikzpicture}
\path [shade, top color=orange!70!black, opacity=0.8] (-6, 0) .. controls (0, -1.5) .. (6, 0)
-- (4, -2) -- (-4, -2) -- (-6, 0);
\path[fill=brown!70!black, opacity=0.6] (6, 0)
.. controls (0, -1.5) .. (-6, 0)
.. controls (0, +1) .. (6, 0);
\path[fill=yellow!80!black, opacity=0.4] (6, 0)
.. controls (0, 0.2) .. (-6, 0)
.. controls (0, 3) .. (6, 0);
\path[pattern=crosshatch, pattern color=purple!50!white] (6, 0)
.. controls (0, 3) .. (-6, 0)
.. controls (0, 3.5) .. (6, 0);
\path[fill=yellow!80!black, opacity=0.4] (6, 0)
.. controls (0, 5.5) .. (-6, 0)
.. controls (0, 3.5) .. (6, 0);
\node (r) at (6, 0) [circle, draw, inner sep=0.5mm] {};
\node (l) at (-6, 0) [circle, draw, inner sep=0.5mm] {};
\draw [thick] (4, -2) -- (r)
node [midway, below right] {$\mathcal{H}_R$};
\draw [thick] (-4, -2) -- (l)
node [midway, below left] {$\mathcal{H}_L$};
\draw [orange] (l) .. controls (0, -1.5) .. (r)
node [midway, above, black] {\tiny $r \sim \epsilon$};
\draw [brown, dashed, thick] (l) .. controls (0, +1) .. (r)
node [midway, above=-0.7mm, black] {\tiny $r = r_i$};
\draw [brown!70!black, dashed, thick] (l) ..controls (0, +0.2) .. (r)
node [midway, below=-0.7mm, black] {\tiny $r \sim \epsilon^2$};
\draw [yellow] (l) ..controls (0, 3) .. (r)
node [midway, below, black] {\tiny $r = r_{in}$};
\draw [yellow] (l) ..controls (0, 3.5) .. (r)
node [midway, above=-0.5mm, black] {\tiny $r = r_{out}$};
\draw [very thick, dotted] (l) .. controls (0, 5.5) .. (r)
node [midway, above] {\tiny $r=0$};
\node at (0, -0.3) {$\mathcal{PK}$};
\node at (0, +0.45) {$\mathcal{PK}_1$};
\node at (0, +1.3) {$\mathcal{K}_1$};
\node at (0, +2.45) {$\mathcal{K}_{inv}$};
\node at (0, +3.3) {$\mathcal{K}_2$};
\draw [->] (4, 5) -- (2, 1.2);
\node [align=left, fill=white] at (4, 5) {\footnotesize $\mathcal{K}_1$ is an unstable Kasner \\ \footnotesize regime with $\Psi \approx \alpha$};
\draw [->] (-5, 5.5) -- (-2, 2.7);
\node [align=left, fill=white] at (-5, 5.5) {\footnotesize $\mathcal{K}_2$ is a stable Kasner \\ \footnotesize regime with $\Psi \approx \alpha^{-1}$};
\end{tikzpicture}}
\vspace{0.5cm}
\scalebox{0.7}{
\begin{tikzpicture}
\path [shade, top color=orange!70!black, opacity=0.8] (-6, 0) .. controls (0, -1.5) .. (6, 0)
-- (4, -2) -- (-4, -2) -- (-6, 0);
\path[fill=brown!70!black, opacity=0.6] (6, 0)
.. controls (0, -1.5) .. (-6, 0)
.. controls (0, +1) .. (6, 0);
\path[fill=yellow!80!black, opacity=0.4] (6, 0)
.. controls (0, 0.2) .. (-6, 0)
.. controls (0, 3.5) .. (6, 0);
\node (r) at (6, 0) [circle, draw, inner sep=0.5mm] {};
\node (l) at (-6, 0) [circle, draw, inner sep=0.5mm] {};
\draw [thick] (4, -2) -- (r)
node [midway, below right] {$\mathcal{H}_R$};
\draw [thick] (-4, -2) -- (l)
node [midway, below left] {$\mathcal{H}_L$};
\draw [orange] (l) .. controls (0, -1.5) .. (r)
node [midway, above, black] {\tiny $r \sim \epsilon$};
\draw [brown, dashed, thick] (l) .. controls (0, +1) .. (r)
node [midway, above=-0.5mm, black] {\tiny $r= r_i$};
\draw [brown!70!black, dashed, thick] (l) ..controls (0, +0.2) .. (r)
node [midway, below=-0.7mm, black] {\tiny $r \sim \epsilon^2$};
\draw [very thick, dotted] (l) .. controls (0, 3.5) .. (r)
node [midway, above] {\tiny $r=0$};
\node at (0, -0.3) {$\mathcal{PK}$};
\node at (0, +0.45) {$\mathcal{PK}_1$};
\node at (0, +1.5) {$\mathcal{K}_1$};
\draw [->] (4, 4) -- (1.6, 1.4);
\node [align=left, fill=white] at (4, 4) {\footnotesize $\mathcal{K}_1$ is a stable Kasner \\ \footnotesize regime with $\Psi \approx \alpha$};
\end{tikzpicture}}
\end{center}
\caption{A zoom on $\mathcal{PK} \cup \mathcal{K}$ in Figure~\ref{Penrose_detailed}, with the top picture representing the inversion case ($\mathcal{K}_2, \mathcal{K}_{inv}\neq \emptyset$) and the bottom picture the non-inversion case ($\mathcal{K}_2, \mathcal{K}_{inv}= \emptyset$). Note the inclusion $\mathcal{PK}_1 \subsetneq \mathcal{PK}$. }
\label{FigN}
\end{figure}
\paragraph{Acknowledgements} We are grateful to Mihalis Dafermos for suggesting the construction of Figure~\ref{construction} and for useful comments on the manuscript. We would also like to thank Grigorios Fournodavlos, Jonathan Luk and Yakov Shlapentokh-Rothman for useful comments on the manuscript, and Jorge Santos for helpful discussions.
\section{Geometric set-up and preliminaries}\label{prelim.section}
\subsection{Einstein-Maxwell-Klein-Gordon in double null coordinates}
We consider a spherically symmetric Lorentzian manifold $(M, g)$, with the metric written in a choice of double null coordinates $(u, v)$:
\begin{equation} \label{eq:metric}
g = g_{\mathcal{Q}} + r^2 (u, v) d \sigma_{\mathbb{S}^2} = - \Omega^2(u, v) du dv + r^2 (u, v) d \sigma_{\mathbb{S}^2}.
\end{equation}
Here $(u, v)$ are coordinates on the quotient manifold $\mathcal{Q} = M / SO(3)$ and $d\sigma_{\mathbb{S}^2} = d \theta^2 + \sin^2 \theta d \varphi^2$ is the standard metric on the unit sphere. We call $r = r(u, v)$ the \textit{area-radius} function.
Due to the presence of charged scalar matter, the Maxwell field will itself be dynamical, and is described via the following function $Q(u, v)$ on $\mathcal{Q}$:
\begin{equation} \label{eq:em_doublenull}
F = \frac{Q \Omega^2}{2r^2} du \wedge dv.
\end{equation}
To describe the coupling to the scalar field, we must choose a gauge for the Maxwell field. In spherical symmetry, we specify the gauge using a one-form $A = A_u du + A_v dv$ on $\mathcal{Q}$ which satisfies $dA = F$.
Define the covariant derivative by $D_{\mu} = \nabla_{\mu} + i q_0 A_{\mu}$. Then the scalar field $\phi$ is a complex-valued function on $\mathcal{Q}$ satisfying the following covariant wave equation:
\begin{equation}
g^{\mu \nu} D_{\mu} D_{\nu} \phi = 0.
\end{equation}
Recall that the whole system of equations must be invariant under the gauge transformation $A \mapsto A + df$, $\phi \mapsto \phi e^{- i q_0 f}$, where $f$ is any smooth function on $\mathcal{Q}$.
We make a few more standard definitions. The \textit{Hawking mass} $\rho$ is given by
\begin{equation} \label{eq:hawking_mass}
\rho \coloneqq \frac{r}{2} ( 1 - g_{\mathcal{Q}}(\nabla r, \nabla r)) = \frac{r}{2} ( 1 + 4 \Omega^{-2} \partial_u r \partial_v r).
\end{equation}
In the presence of the Maxwell field and the cosmological constant, we further define the renormalized Vaidya mass $\varpi$ and the $r$-constant surface gravity $2K$ as
\begin{equation} \label{eq:vaidya_surface}
\varpi = \rho + \frac{Q^2}{2r} - \frac{\Lambda r^3}{6}, \hspace{0.5cm} 2K = \frac{2}{r^2} \left( \varpi - \frac{Q^2}{r} - \frac{\Lambda r^3}{3} \right).
\end{equation}
Suppose that $(M, g, F, \phi)$ are a solution to the Einstein-Maxwell-Klein-Gordon system \eqref{E1}--\eqref{E5}. In the double-null coordinates of (\ref{eq:metric}), the quantities $(r, \Omega^2, Q, A, \phi)$ then satisfy the following system of PDEs:
\begin{equation} \label{eq:raych_u_emkgss}
\partial_u ( \Omega^{-2} \partial_u r ) = - \Omega^{-2} r |D_u \phi|^2,
\end{equation}
\begin{equation} \label{eq:raych_v_emkgss}
\partial_v ( \Omega^{-2} \partial_v r ) = - \Omega^{-2} r |D_v \phi|^2,
\end{equation}
\begin{equation} \label{eq:wave_r_emkgss}
\partial_u \partial_v r = - \frac{\Omega^2}{4r} - \frac{\partial_u r \partial_v r}{r} + \frac{\Omega^2 Q^2}{4r^3} + \frac{\Omega^2r (m^2 |\phi|^2 + \Lambda)}{4} ,
\end{equation}
\begin{equation} \label{eq:wave_omega_emkgss}
\partial_u \partial_v \log(\Omega^2) = \frac{\Omega^2}{2r^2} + \frac{2 \partial_u r \partial_v r}{r^2} - \frac{\Omega^2 Q^2}{r^4} - 2 \mathfrak{Re} (D_u \phi \overline{D_v \phi}),
\end{equation}
\begin{equation} \label{eq:q_u_emkgss}
\partial_u Q = - q_0 r^2 \mathfrak{Im} (\phi \overline{D_u \phi}),
\end{equation}
\begin{equation} \label{eq:q_v_emkgss}
\partial_v Q = + q_0 r^2 \mathfrak{Im} (\phi \overline{D_v \phi}),
\end{equation}
\begin{equation} \label{eq:wave_psi_emkgss}
D_u D_v \phi = - \frac{\partial_u r \cdot D_v \phi}{r} - \frac{\partial_v r \cdot D_u \phi}{r} + \frac{i q_0 Q \Omega^2}{4r^2} \phi - \frac{m^2 \Omega^2}{4} \phi,
\end{equation}
\begin{equation} \label{eq:emgauge_emkgss}
\partial_u A_v - \partial_v A_u = \frac{Q \Omega^2}{2 r^2}.
\end{equation}
The equations (\ref{eq:raych_u_emkgss}) and (\ref{eq:raych_v_emkgss}) are the celebrated \textit{Raychaudhuri equations}, the equations (\ref{eq:wave_r_emkgss}) and (\ref{eq:wave_omega_emkgss}) can be viewed as wave equations for the geometric quantities $r$ and $\Omega^2$ on $\mathcal{Q}$, and the remaining equations describe the dynamics of the coupled Maxwell field and charged scalar field.
We recall also the transport equations for the Vaidya mass $\varpi$:
\begin{equation} \label{eq:modifiedhawkingmass_u_emkgss}
\partial_u \varpi = - 2r^2 (\Omega^{-2} \partial_v r)^{-1} | D_u \phi |^2 + \frac{m^2}{2} r^2 |\phi|^2 \partial_u r - q_0 Q r \mathfrak{Im} (\phi \overline{D_u \phi}),
\end{equation}
\begin{equation} \label{eq:modifiedhawkingmass_v_emkgss}
\partial_v \varpi = - 2r^2 (\Omega^{-2} \partial_u r)^{-1} | D_v \phi |^2 + \frac{m^2}{2} r^2 |\phi|^2 \partial_v r + q_0 Q r \mathfrak{Im} (\phi \overline{D_v \phi}).
\end{equation}
\subsection{The Reissner-Nordstr\"om(-dS/AdS) interior metric} \label{sub:reissner_nordstrom}
We are interested in charged hairy perturbations of sub-extremal Reissner-Nordstr\"om interiors. To define sub-extremality, given some parameters $M > 0$, $\mathbf{e}, \Lambda \in \mathbb{R}$, consider the polynomial
\begin{equation} \label{eq:rn_polynomial}
P_{M, \mathbf{e}, \Lambda}(X) = X^2 - 2 M X + \mathbf{e}^2 - \tfrac{1}{3} \Lambda X^4.
\end{equation}
Then the set of sub-extremal parameters $(M, \mathbf{e}, \Lambda)$ is $\mathcal{P}_{se} = \mathcal{P}_{se}^{\Lambda \leq 0} \cup \mathcal{P}_{se}^{\Lambda > 0}$, where $\mathcal{P}_{se}^{\Lambda \leq 0}$ consists of the parameters with $\Lambda \leq 0$ for which the polynomial $P_{M, \mathbf{e}, \Lambda}(X)$ has two distinct positive real roots $r_- < r_+$, and $\mathcal{P}_{se}^{\Lambda > 0}$ consists of those with $\Lambda > 0$ for which $P_{M, \mathbf{e}, \Lambda}(X)$ has three distinct positive real roots $r_- < r_+ < r_c$.
The Reissner-Nordstr\"om(-dS/AdS) spacetime is a solution to (\ref{E1})--(\ref{E5}) in electrovacuum (i.e.\ $\phi \equiv 0$), and can be written in standard $(t, r)$ coordinates as
\begin{equation} \label{eq:reissner_nordstrom1}
g_{RN} = - \left ( 1 - \frac{2M}{r} + \frac{\mathbf{e}^2}{r^2} - \frac{\Lambda r^2}{3}\right ) dt^2 + \left ( 1 - \frac{2M}{r} + \frac{\mathbf{e}^2}{r^2} - \frac{\Lambda r^2}{3} \right )^{-1} dr^2 + r^2 d \sigma_{\mathbb{S}^2}.
\end{equation}
%
In particular, the Reissner-Nordstr\"om(-dS/AdS) interior metric is given by (\ref{eq:reissner_nordstrom1}), restricted to the coordinate range $r_- < r < r_+$, $t \in \mathbb{R}$. Note that in the interior, $t$ is a spacelike coordinate while $r$ is a timelike coordinate.
The Maxwell field is given by having constant $Q \equiv \mathbf{e}$ in (\ref{eq:em_doublenull}), and $\Omega^2 = \Omega^2_{RN}$ as will be defined shortly. One choice of gauge field $A$ which will be consistent with the remainder of this article is
\begin{equation} \label{eq:reissner_nordstrom_gauge}
A = - \left( \frac{\mathbf{e}}{r_+} - \frac{\mathbf{e}}{r} \right) dt.
\end{equation}
To recast the metric (\ref{eq:reissner_nordstrom1}) into the double null form (\ref{eq:metric}), we define
\begin{equation}
\frac{dr}{dr^*} \coloneqq -\frac{\Omega^2_{RN}}{4}, \hspace{0.5cm} \Omega^2_{RN} \coloneqq - 4 \left( 1 - \frac{2M}{r} + \frac{\mathbf{e}^2}{r^2} - \frac{\Lambda r^2}{3} \right),
\end{equation}
\begin{equation}
u \coloneqq \frac{r^* - t}{2}, \hspace{0.5cm} v \coloneqq \frac{r^* + t}{2}.
\end{equation}
%
In this $(u, v)$ coordinate system, the metric can now be written as
\begin{equation} \label{eq:reissner_nordstrom}
g_{RN} = - \Omega^2_{RN} du dv + r^2 d \sigma_{\mathbb{S}^2}.
\end{equation}
In the sequel, we denote the Reissner-Nordstr\"om area-radius function by $r_{RN}$.
Recalling the definition of $2K$ in (\ref{eq:vaidya_surface}), we define the surface gravity of the event horizon $2K_+$, and the surface gravity of the Cauchy horizon $2K_-$ by
\begin{equation} \label{eq:surface_gravity}
2K_{\pm} = 2K(r = r_{\pm}) = \frac{2}{r_{\pm}^2} \left( M - \frac{\mathbf{e}^2}{r_{\pm}} - \frac{\Lambda r_{\pm}^3}{3} \right).
\end{equation}
It is then a well-known fact that the null lapse $\Omega^2_{RN}$ obeys the following asymptotics, where $\alpha_{\pm} > 0$ are fixed constants depending on the black hole parameters:
\begin{equation} \label{eq:omega_asymptotic_1}
\Omega^2_{RN} \sim \alpha_+ e^{2 K_+ (M, \mathbf{e}, \Lambda) r^*} = \alpha_+ e^{2 K_+ (M, \mathbf{e}, \Lambda) \cdot (u + v)} \; \text{ as } \; r^* \to - \infty,
\end{equation}
\begin{equation} \label{eq:omega_asymptotic_2}
\Omega^2_{RN} \sim \alpha_- e^{2 K_- (M, \mathbf{e}, \Lambda) r^*} = \alpha_- e^{2 K_- (M, \mathbf{e}, \Lambda) \cdot (u + v)} \; \text{ as } \; r^* \to + \infty.
\end{equation}
We note that we always have $2K_+ > 0$ while $2K_- < 0$.
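For concreteness, the following short numerical sketch computes the roots $r_- < r_+$ of $P_{M, \mathbf{e}, \Lambda}$ and the corresponding surface gravities from (\ref{eq:surface_gravity}) for arbitrary sample sub-extremal parameters; the chosen values are purely illustrative.
\begin{verbatim}
# Roots r_- < r_+ of P_{M,e,Lambda} and the surface gravities 2K_{+/-}
# of (eq:surface_gravity), for sample (sub-extremal) parameters.
import numpy as np

M, e, Lam = 1.0, 0.5, 0.0
# P(X) = -(Lam/3) X^4 + X^2 - 2 M X + e^2   (degree 2 when Lam = 0)
coeffs = [-Lam/3, 0.0, 1.0, -2.0*M, e**2] if Lam != 0 else [1.0, -2.0*M, e**2]
roots = np.roots(coeffs)
pos_real = np.sort(roots[(np.abs(roots.imag) < 1e-10) & (roots.real > 0)].real)
r_minus, r_plus = pos_real[0], pos_real[1]   # (r_c = pos_real[2] when Lam > 0)

def two_K(r):   # 2K(r) of (eq:vaidya_surface) with varpi = M, Q = e
    return 2.0 / r**2 * (M - e**2 / r - Lam * r**3 / 3.0)

print(f"r_- = {r_minus:.4f},  r_+ = {r_plus:.4f}")
print(f"2K_- = {two_K(r_minus):.4f},  2K_+ = {two_K(r_plus):.4f}")
\end{verbatim}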
Introducing the following ``regular coordinates'' $U$ and $V$,
\begin{equation} \label{eq:coord_trans}
U = e^{2K_+u}, \hspace{0.5cm} V = e^{2K_+ v},
\end{equation}
it is well-known that in the coordinate system $(U, v)$, the metric $g_{RN}$ can be smoothly extended beyond $U = 0$, and the right event horizon $\mathcal{H}_R = \{ (U, v): U = 0, v \in [- \infty, + \infty)\}$ is realised as a smooth null hypersurface. A similar construction can be made for the coordinates $(u, V)$, with $\mathcal{H}_L = \{ (u, V): u \in [-\infty, + \infty), V = 0 \}$.
Indeed, using the coordinate system $(U, V)$, the metric $g_{RN}$ is defined for $0 \leq U, V < + \infty$, and can be smoothly extended beyond both $\mathcal{H}_R$ and $\mathcal{H}_L$, including the bifurcation sphere $\mathcal{H}_R \cap \mathcal{H}_L = \{ U = 0, V = 0 \}$.
\subsection{Black hole interiors with charged scalar hair} \label{sub:data}
\begin{figure}[h]
\centering
\begin{tikzpicture}[domain=-5:5]
\coordinate (p) at (0,0) node[below] {};
\coordinate (L) at (-4.5,4.5);
\coordinate (R) at (4.5,4.5);
\draw (p) --
node[midway, below, sloped] {$\mathcal{H}_R$}
(R);
\draw (p) --
node[midway, below, sloped] {$\mathcal{H}_L$}
(L);
\path[fill=lightgray, opacity=0.7] (p) -- (1.5, 1.5)
-- (0, 3)
-- (-1.5, 1.5) -- (p);
\draw plot[id=x,domain=-4.3:4.3, samples=100]
(\x, {sqrt(4 + \x * \x)});
\fill [lightgray, opacity=0.5]
(p)
-- (R) -- (4, 5)
-- plot[domain = 5:-5] ({\x}, {sqrt(9 + \x * \x)})
-- (-4, 5) -- (L)
-- cycle;
\draw[->, very thick]
(-2, {sqrt(8)}) -- ++({sqrt(8)/5}, -2/5)
node [above, midway] {$T$};
\draw[->, very thick]
(+1.5, {sqrt(6.25)}) -- ++({sqrt(6.25)/5}, +1.5/5)
node [above, midway] {$T$};
\node at (0, 2) [anchor=north] {\scriptsize $s = \text{const}$};
\node at (0, 1) {$\mathcal{R}$};
\end{tikzpicture}
\caption{The local solution to the characteristic initial value problem for a spatially homogeneous black hole interior with charged scalar hair} \label{fig:char_ivp}
\end{figure}
\noindent
Consider the characteristic initial value problem with initial data given on the two affine complete null hypersurfaces $\mathcal{H}_L = \{ (U, V): U \geq 0, V = 0 \} = \{ (u, v): u \in [- \infty, + \infty), v = - \infty \}$ and $\mathcal{H}_R = \{ (U, V): U = 0, V \geq 0 \} = \{ (u, v): u = -\infty, v \in [- \infty, + \infty) \}$, intersecting at the bifurcation sphere $(U, V) = (0, 0)$.
We normalize the regular coordinates $(U, V)$, which are related to the usual interior coordinates $(u,v)$ via (\ref{eq:coord_trans}), using the following gauge choice:
\begin{equation} \label{eq:gauge_v}
\Omega_R^2(U, v) |_{\mathcal{H}_R} = \frac{1}{2K_+} e^{-2K_+(M ,\mathbf{e}, \Lambda) \cdot u} \, \Omega^2(u, v) |_{\mathcal{H}_R} = \frac{\alpha_+}{2K_+} e^{2K_+ (M, \mathbf{e}, \Lambda) \cdot v},
\end{equation}
\begin{equation} \label{eq:gauge_u}
\Omega_L^2(u, V) |_{\mathcal{H}_L} = \frac{1}{2 K_+} e^{-2K_+(M ,\mathbf{e}, \Lambda) \cdot v} \, \Omega^2(u, v) |_{\mathcal{H}_L} = \frac{\alpha_+}{2 K_+} e^{2K_+ (M, \mathbf{e}, \Lambda) \cdot u},
\end{equation}
so that\footnote{Note that the gauge choice \eqref{eq:gauge_data} is slightly different from the one in \cite{VDM21}, without any consequences.} in the generalized Kruskal-Szekeres coordinate system $(U, V)$, one has on $\mathcal{H} = \mathcal{H}_R \cup \mathcal{H}_L$,
\begin{equation} \label{eq:gauge_data} \tag{$\Omega$-data}
\Omega^2_{reg}(U, V) |_{\mathcal{H}} = \frac{\alpha_+}{(2 K_+)^2}.
\end{equation}
In the context of this article, we pose the following characteristic initial data:
\begin{gather}
\tag{$r$-data} \label{eq:r_data}
r |_{\mathcal{H}} = r_+ (M ,\mathbf{e}, \Lambda), \\
\tag{$\varpi$-data} \label{eq:m_data}
\varpi|_{\mathcal{H}} =M>0, \\
\tag{$Q$-data} \label{eq:Q_data}
Q |_{\mathcal{H}} = \mathbf{e}\in \mathbb{R} \setminus \{0\},\\
\tag{$\phi$-data} \label{eq:phi_data}
\phi |_{\mathcal{H}} = \epsilon \in \mathbb{R} \setminus \{0\},
\end{gather}
as well as a gauge for the Maxwell field such that the components $A_V, A_U$ vanish on $\mathcal{H}_R, \mathcal{H}_L$ respectively. In particular $D_V \phi$ and $D_U \phi$ will vanish on their respective horizon pieces.
It is clear, therefore, that this data is compatible with the null constraints (\ref{eq:raych_u_emkgss}), (\ref{eq:raych_v_emkgss}). We would also like to understand the transversal derivatives of $r$ and $\phi$ along $\mathcal{H}$, for which we shall need the full system of equations (\ref{eq:raych_u_emkgss})--(\ref{eq:emgauge_emkgss}). It will be convenient to work in the regular coordinates $(U, V)$.
For $r$, we see that using (\ref{eq:wave_r_emkgss}) on $\mathcal{H}_R$,
\begin{equation} \label{eq:initial_data_r}
\partial_V \partial_U r = \frac{\alpha_+}{4 (2 K_+)^2} \left( -\frac{1}{r_+} + \frac{\mathbf{e}^2}{r_+^3} + r_+ \Lambda+ r_+ m^2 \epsilon^2 \right).
\end{equation}
Since $r_+ = r_+(M, \mathbf{e}, \Lambda)$ satisfies the equation
\begin{equation*}
P_{M, \mathbf{e}, \Lambda} (r_+) = r_+^2 - 2 M r_+ + \mathbf{e}^2 - \tfrac{1}{3} \Lambda r_+^4 = 0,
\end{equation*}
it is readily checked that the expression in the parentheses in (\ref{eq:initial_data_r}) is equal to $ - 2 K_+ + r_+ m^2 \epsilon^2$. We then integrate (\ref{eq:initial_data_r}), noting that $\partial_U r = 0$ at the bifurcation sphere $(U, V) = (0, 0)$, to find
\begin{equation*}
\partial_U r |_{\mathcal{H}_R} = \frac{\alpha_+ V}{8 K_+} \left( - 1 + \frac{r_+ m^2 \epsilon^2}{2K_+} \right).
\end{equation*}
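The algebraic identity just used (that the parenthetical expression in (\ref{eq:initial_data_r}) equals $-2K_+ + r_+ m^2 \epsilon^2$) can also be verified symbolically. In the following sympy sketch, $2K(r)$ denotes $\tfrac{2}{r^2}\big(M - \tfrac{\mathbf{e}^2}{r} - \tfrac{\Lambda r^3}{3}\big)$ as in (\ref{eq:surface_gravity}), and the $r_+ m^2 \epsilon^2$ term, which appears on both sides, has been cancelled.
\begin{verbatim}
# Check: expr := -1/r + e^2/r^3 + r*Lambda + 2K(r) satisfies
#        expr * r^3 = -P_{M,e,Lambda}(r),  hence expr vanishes at r = r_+.
import sympy as sp

r, M, e, Lam = sp.symbols('r M e Lambda', positive=True)
P = r**2 - 2*M*r + e**2 - Lam*r**4/3
twoK = 2/r**2 * (M - e**2/r - Lam*r**3/3)
expr = -1/r + e**2/r**3 + r*Lam + twoK
print(sp.simplify(expr * r**3 + P))   # prints 0
\end{verbatim}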
Returning to $(u, v)$ coordinates, and performing a similar procedure on $\mathcal{H}_L$, we therefore deduce
\begin{equation}
\lim_{u \to - \infty} \frac{- 4 \partial_u r}{\Omega^2} (u, v) =
\lim_{v \to - \infty} \frac{- 4 \partial_v r}{\Omega^2} (u, v) = 1 - \frac{r_+ m^2 \epsilon^2}{2K_+}.
\end{equation}
We remark here that due to the presence of the Klein-Gordon mass, there is already an $O(\epsilon^2)$ deviation from the corresponding Reissner-Nordstr\"om quantity.
A similar procedure applied to the equation (\ref{eq:wave_psi_emkgss}) will yield
\begin{gather}
\frac{2 K_+}{\alpha_+ V} \cdot D_U \phi (0, V) = \lim_{u \to - \infty} \frac{D_u \phi}{\Omega^2} (u, v) = \beta_+ \epsilon, \\
\frac{2 K_+}{\alpha_+ U} \cdot D_V \phi (U, 0) = \lim_{v \to - \infty} \frac{D_v \phi}{\Omega^2} (u, v) = \beta_+ \epsilon,
\end{gather}
where $\beta_+ = \beta_+(M, \mathbf{e}, \Lambda, m)$ is some fixed constant we do not explicitly determine.
We should like to say that the data (\ref{eq:gauge_data}), (\ref{eq:r_data}), (\ref{eq:Q_data}), (\ref{eq:phi_data}) uniquely specifies a solution to the Einstein-Maxwell-Klein-Gordon system, which should moreover be spatially homogeneous. Of course, in order to have uniqueness we must impose a gauge for the Maxwell field $A$, and we choose to have
\begin{equation} \tag{$A$-gauge} \label{eq:maxwell_choice}
U A_U + V A_V = 0.
\end{equation}
We now identify the spacetime describing the hairy black hole interior spacetimes studied in this article, which we firstly describe in the regular coordinate system $(U, V)$.
\begin{proposition} \label{prop:char_ivp}
Consider characteristic initial data (\ref{eq:gauge_data}), (\ref{eq:r_data}), (\ref{eq:Q_data}), (\ref{eq:phi_data}) to the Einstein-Maxwell-Klein-Gordon system (\ref{eq:raych_u_emkgss})--(\ref{eq:emgauge_emkgss}), with $u, v$ replaced by $U, V$ respectively. Then imposing also (\ref{eq:maxwell_choice}), there exists a unique maximal future development\footnote{Maximality is meant in the sense that there is no larger causal region of the $(U, V)$-plane where one may smoothly extend the solution.} of the system $(\Omega^2, r, \phi, Q, A_U, A_V)$, regular up to the horizon $\mathcal{H} = \mathcal{H}_L \cup \mathcal{H}_R$.
Furthermore, the domain of definition of this maximal development is given by $\{ (U, V): 0 \leq UV < D_{max} \}$ for some $D_{max}(M, \mathbf{e}, \Lambda, m^2, q_0, \epsilon) \in (0, + \infty]$, and letting $T$ be the vector field (see Figure \ref{fig:char_ivp}):
\begin{equation} \label{eq:kvf}
T \coloneqq 2K_+ \left( - U \frac{\partial}{\partial U} + V \frac{\partial}{\partial V} \right),
\end{equation}
one finds that $T$ is a Killing vector field, satisfying $T r = T \Omega^2 = T Q = T \phi = T (U A_U) = 0$. In particular, the spacetime is foliated by Cauchy hypersurfaces $UV = const$, which are each spatially homogeneous with isometry group $\mathbb{R} \times SO(3)$.
\end{proposition}
\begin{proof}
The first step is to appeal to a local well-posedness result, which would state that there is a unique smooth solution in the rectangle $(U, V) \in [0, \delta] \times [0, \delta]$ for $\delta > 0$ chosen sufficiently small (see the darker shaded region of Figure \ref{fig:char_ivp}). One could, for instance, appeal to Proposition 4.1 of \cite{Kommemi} -- however the issue is that our gauge choice (\ref{eq:maxwell_choice}) is not well-suited to such local well-posedness statements.
We therefore instead first find a solution with respect to a Maxwell gauge choice adapted to such local results; for instance, following \cite{Kommemi}, we impose:
\begin{equation} \tag{$A$-gauge'} \label{eq:maxwell_choice'}
A^{(0)}_V (U, V) = 0, \; A^{(0)}_U (U, 0) = 0.
\end{equation}
Standard local well-posedness results then assert that there exists a unique solution to (\ref{eq:raych_u_emkgss})--(\ref{eq:emgauge_emkgss}) in $[0, \delta] \times [0, \delta]$, with the prescribed data and the gauge (\ref{eq:maxwell_choice'}), with $A$ replaced by $A^{(0)}$ in the equations. In particular, Maxwell gauge-independent quantities such as $\Omega^2_{reg}, r, Q$ are already uniquely determined.
We seek a gauge transformation that relates an $A^{(0)}$ satisfying (\ref{eq:maxwell_choice'}) to an $A$ satisfying (\ref{eq:maxwell_choice}). We claim that the correct gauge transformation is $A = A^{(0)} - dh$, with $h$ given by (here and below we abbreviate the component $A^{(0)}_U$ by $A^{(0)}$, which is legitimate since $A^{(0)}_V \equiv 0$)
\begin{equation} \label{eq:h}
h(U, V) \coloneqq \int^1_0 U A^{(0)} (UT, VT) \, dT.
\end{equation}
The reason is that this choice of $h$ satisfies
\begin{align*}
U \frac{\partial h}{\partial U} + V \frac{\partial h}{\partial V}
&=
\int^1_0 U A^{(0)} (UT, VT) + U^2 T \frac{\partial A^{(0)}}{\partial U} (UT, VT) + UVT \frac{\partial A^{(0)}}{\partial V} (UT, VT) \, dT, \\
&= \int^1_0 \frac{d}{dT} \big( T \, U A^{(0)}(UT, VT) \big) \, dT = U A^{(0)} (U, V),
\end{align*}
so that
\begin{equation*}
U A_U + V A_V = U A^{(0)} - U \frac{\partial h}{ \partial U} - V \frac{\partial h}{\partial V} = 0
\end{equation*}
as required. Furthermore, the $h$ chosen in (\ref{eq:h}) is the unique choice of gauge transformation that is regular at $(0,0)$, since if $\tilde{h}$ was another such function, then the difference $g = h - \tilde{h}$ would satisfy
\begin{equation*}
U \frac{\partial g}{\partial U} + V \frac{\partial g}{\partial V} = 0,
\end{equation*}
whose general solution is of the form $g(U, V) = G( U / V)$. So regularity at $(0,0)$ implies that $G$, and thus $g$, is constant, and $dh = d \tilde{h}$ after all.
Hence we have constructed a unique regular solution in the characteristic rectangle $[0, \delta] \times [0, \delta]$. We next show that the vector field $T$ defined in (\ref{eq:kvf}) annihilates all the relevant quantities.
%
For this purpose, we argue geometrically as follows: let $a > 0$ be any positive real number, and consider the double null coordinate transformation $U \mapsto U' = a U, V \mapsto V' = a^{-1} V$. Then in the $(U', V')$ coordinate system, we note that it still holds that on $\mathcal{H} = \{ (U', V'): U' = 0 \text{ or } V' = 0\}$, we have
\begin{gather*}
\Omega'^2_{reg} (U', V') = \frac{\alpha_+}{(2 K_+)^2}, \\
r = r_+,\\ Q = \mathbf{e},\\ \phi = \epsilon, \\
U' A_{U'} + V' A_{V'} = 0.
\end{gather*}
Hence by the aforementioned existence and uniqueness result, we have a unique solution in the characteristic rectangle $(U', V') \in [0, \delta] \times [0, \delta]$. Furthermore, this agrees with the solution in the original $(U, V)$ coordinates, so that for $f \in \{ \Omega^2, r, Q, \phi, UA_U \}$, we have
\begin{equation} \label{eq:magic}
f(U', V') = f(a U, a^{-1} V) = f(U, V).
\end{equation}
Allowing $a$ to vary across all positive reals, it is clear that we have a solution in the whole of $\{ (U, V) \in \mathbb{R}_{\geq 0}^2: 0 \leq UV \leq \delta^2 \}$, i.e.\ the lighter shaded region in Figure \ref{fig:char_ivp} that arises from sweeping out the darker shaded region for different choices of $a > 0$. Furthermore, it is immediate from (\ref{eq:magic}) that $T$ annihilates all such quantities $f$.
The extension to the region $\{ 0 \leq UV < D_{max} \}$ is then straightforward using standard extension principle arguments for nonlinear waves and again appealing to this geometric trick.
\end{proof}
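As an elementary sanity check of the formula (\ref{eq:h}), the following sympy sketch verifies the identity $U \partial_U h + V \partial_V h = U A^{(0)}_U$ for an arbitrary polynomial sample of $A^{(0)}_U$ (with $A^{(0)}_V \equiv 0$, as in (\ref{eq:maxwell_choice'})); the sample component is of course a placeholder, not the gauge potential of an actual solution.
\begin{verbatim}
# Check of (eq:h): with A0V = 0 and h(U,V) = int_0^1 U * A0U(U T, V T) dT,
# one has  U dh/dU + V dh/dV = U A0U(U, V).  A0U below is an arbitrary sample.
import sympy as sp

U, V, T = sp.symbols('U V T', positive=True)

def A0U(u, v):                      # placeholder smooth gauge component
    return 1 + u**2 * v + u * v**3

h = sp.integrate(U * A0U(U * T, V * T), (T, 0, 1))
check = sp.simplify(U * sp.diff(h, U) + V * sp.diff(h, V) - U * A0U(U, V))
print(check)   # prints 0
\end{verbatim}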
We remark that by the generalized extension principle of \cite{Kommemi}, if the quantity $D_{max}$ of Proposition \ref{prop:char_ivp} is finite, then one must have $r \to 0$ as $UV \to D_{max}$. However, Proposition \ref{prop:char_ivp} is highly qualitative in nature and says little about quantitative properties of the interior, or if and how any spacelike singularity is formed.
\subsection{System of ODEs for spatially homogeneous solutions} \label{sub:evol_eqs}
Define $s = u + v$, $t = v - u$, where the null coordinates $(u, v)$ are fixed by the gauge choices (\ref{eq:gauge_v}), (\ref{eq:gauge_u}), (\ref{eq:coord_trans}). Since $\partial_t = \frac{1}{2} ( \partial_v - \partial_u ) = \frac{1}{2} T$, Proposition~\ref{prop:char_ivp} proves that the maximal future development arising from the characteristic data of Section~\ref{sub:data} obeys $\partial_t r = 0, \partial_t \Omega^2 = 0, \partial_t Q = 0, \partial_t \phi = 0$.
So we may consider these just as functions of $s$.
Of course, this is only true after imposing (\ref{eq:maxwell_choice}).
In the $(u, v)$ coordinate system, we notice that
\begin{equation} \label{eq:gauge_maxwell}
A = A_U dU + A_V dV = 2 K_+ (U A_U du + V A_V dv ) \eqqcolon \tilde{A} (du - dv) = - \tilde{A} dt,
\end{equation}
where, due to Proposition \ref{prop:char_ivp}, $\tilde{A} = 2K_+ \, (UA_U)(s)$ is a real-valued function of $s$, with $\lim_{s \to -\infty} \tilde{A}(s) = 0$.
We next show that this choice of gauge will in fact constrain the scalar field $\phi$ to be real.
\begin{lemma}
With the gauge choice (\ref{eq:gauge_maxwell}) and the initial data of Section \ref{sub:data}, $\phi = \phi(s)$ is everywhere real.
\end{lemma}
\begin{proof}
Consider the transport equations (\ref{eq:q_u_emkgss}), (\ref{eq:q_v_emkgss}) for the quantity $Q$. Since $\partial_t Q = \frac{1}{2} (\partial_v - \partial_u) Q = 0$,
\begin{equation*}
q_0 r^2 \mathfrak{Im} ( \phi \overline{ D_s \phi } )
= \tfrac{1}{2} q_0 r^2 \mathfrak{Im} ( \phi \overline{ (D_u \phi + D_v \phi) } ) = 0.
\end{equation*}
Hence by (\ref{eq:gauge_maxwell}) we must have $\mathfrak{Im} ( \phi \overline{D_s \phi} ) = \mathfrak{Im} (\phi \overline{\partial_s \phi}) = 0$.
Next, we decompose $\phi$ into its modulus and argument; $\phi = \Phi e^{i \theta}$. Then
\begin{equation*}
\mathfrak{Im} ( \phi \overline{\partial_s \phi} ) = - \Phi^2 \partial_s \theta = 0,
\end{equation*}
so that the phase $\theta$ is constant whenever $\phi$ is nonzero. But as $\phi$ is smooth in the variable $s$, it does not change phase when it reaches $0$, hence $\phi$ is real everywhere.
%
\end{proof}
\noindent
Using the identities $\partial_u = \partial_s - \partial_t, \partial_v = \partial_s + \partial_t$, we have that for $f \in \{ r(s), \Omega(s), \phi(s), Q(s), \tilde{A}(s) \}$, one has
\begin{equation*}
\partial_u f = \partial_v f = \frac{df}{ds} \eqqcolon \dot{f}.
\end{equation*}
We now proceed to rewrite the Einstein-Maxwell-Klein-Gordon system (\ref{eq:raych_u_emkgss})--(\ref{eq:emgauge_emkgss}) as a system of ODEs.
Firstly, the Raychaudhuri equation becomes
\begin{equation} \label{eq:raych}
\frac{d}{ds} (- \Omega^{-2} \dot{r}) = \Omega^{-2} r ( |\dot{\phi}|^2 + |\tilde{A}|^2 q_0^2 |\phi|^2 ).
\end{equation}
Defining the quantity $\kappa = - \frac{1}{4} \Omega^2 \dot{r}^{-1}$, which is exactly $1$ in Reissner-Nordstr\"om(-dS/AdS), we may rewrite this as
\begin{equation} \label{eq:raych_kappa}
\frac{d}{ds} \kappa^{-1} = 4 \Omega^{-2} r (|\dot{\phi}|^2 + |\tilde{A}|^2 q_0^2 |\phi|^2 ).
\end{equation}
We also at times appeal to \eqref{eq:raych} in the form:
\begin{equation} \label{eq:raych_transport}
\frac{d}{ds} (- \dot{r}) - \frac{d}{ds} \log (\Omega^2) \cdot (- \dot{r}) = r ( |\dot{\phi}|^2 + |\tilde{A}|^2 q_0^2 |\phi|^2).
\end{equation}
The wave equation for $r$ is now written as
\begin{equation} \label{eq:r_evol}
\ddot{r} = - \frac{\Omega^2}{4r} - \frac{\dot{r}^2}{r} + \frac{\Omega^2}{4r^3} Q^2 + \frac{\Omega^2 r}{4} (m^2 |\phi|^2 + \Lambda),
\end{equation}
which may be conveniently rewritten as
\begin{equation} \label{eq:r_evol_2}
\frac{d}{ds} ( - r \dot{r}) = \frac{\Omega^2}{4} - \frac{\Omega^2 Q^2}{4 r^2} - \frac{\Omega^2 r^2}{4} (m^2|\phi|^2 + \Lambda).
\end{equation}
The wave equation (\ref{eq:wave_omega_emkgss}) for the null lapse $\Omega^2$ becomes
\begin{equation} \label{eq:lapse_evol}
\frac{d^2}{ds^2} \log(\Omega^2) = \frac{\Omega^2}{2r^2} + 2 \frac{\dot{r}^2}{r^2} - \frac{\Omega^2}{r^4} Q^2 - 2 \dot{\phi}^2 + 2 |\tilde{A}|^2 q_0^2 |\phi|^2,
\end{equation}
or alternatively
\begin{equation} \label{eq:omega_evol_2}
\frac{d^2}{ds^2} \log (r \Omega^2) = \frac{\Omega^2}{4r^2} - \frac{3}{4} \frac{\Omega^2 Q^2}{r^4} - 2 |\dot{\phi}|^2 + 2 |\tilde{A}|^2 q_0^2 |\phi|^2 + \frac{\Omega^2 m^2}{4} |\phi|^2 + \frac{\Omega^2 \Lambda}{4}.
\end{equation}
For the Maxwell field $Q$ and the gauge field $\tilde{A}$, the equations (\ref{eq:q_u_emkgss}), (\ref{eq:q_v_emkgss}), (\ref{eq:emgauge_emkgss}) become
\begin{equation} \label{eq:Q_evol}
\dot{Q} = \tilde{A} q_0^2 r^2 |\phi|^2,
\end{equation}
\begin{equation} \label{eq:gauge_evol}
\dot{\tilde{A}} = -\frac{Q \Omega^2}{4r^2}.
\end{equation}
Finally, the wave equation (\ref{eq:wave_psi_emkgss}) for the scalar field may be written as the second order ODE
\begin{equation} \label{eq:phi_evol}
\ddot{\phi} = - \frac{2 \dot{r} \dot{\phi}}{r} - q_0^2 |\tilde{A}|^2 \phi - \frac{m^2 \Omega^2}{4} \phi,
\end{equation}
which we often use in the form
\begin{equation} \label{eq:phi_evol_2}
\frac{d}{ds} (r^2 \dot{\phi}) = - r^2 q_0^2 |\tilde{A}|^2 \phi - \frac{m^2 \Omega^2 r^2}{4} \phi.
\end{equation}
We also reformulate the initial data of Section~\ref{sub:data} so as to satisfy the ODE system (\ref{eq:raych})--(\ref{eq:phi_evol_2}). Data is posed at the limit $s \to -\infty$, and is given by
\begin{gather}
\lim_{s \to -\infty} r(s) = r_+,\; \lim_{s\to -\infty} Q(s) = \mathbf{e},\; \lim_{s\to -\infty} \phi(s) = \epsilon, \label{eq:ode_data_rqphi} \\
\lim_{s \to -\infty} \Omega^2(s) \cdot e^{- 2 K_+ s} = \alpha_+, \label{eq:ode_data_lapse} \\
\lim_{s \to -\infty} - 4 \Omega^{-2}(s)\dot{r}(s)= 1 - \frac{r_+ m^2 \epsilon^2}{2K_+}, \label{eq:ode_data_rdot} \\
\lim_{s \to - \infty} \frac{d}{ds} \log(\Omega^2) (s) = 2 K_+, \label{eq:ode_data_lapsedot}\\
\lim_{s \to - \infty} \Omega^{-2}(s) \tilde{A}(s) = - \frac{\mathbf{e}}{8K_+ r_+^2}, \label{eq:ode_data_gauge} \\
\lim_{s \to - \infty} \Omega^{-2}(s) \dot{\phi}(s) = \beta_+ \epsilon. \label{eq:ode_data_phidot}
\end{gather}
This concludes the set-up for the analytical problem considered in this paper.
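Before turning to the linear scattering theory, we note that the ODE formulation above also lends itself to quick numerical experiments in the spirit of \cite{hartnolletal2020, hartnolletal2021}. The following Python sketch integrates \eqref{eq:r_evol}, \eqref{eq:lapse_evol}, \eqref{eq:phi_evol}, \eqref{eq:Q_evol} and \eqref{eq:gauge_evol}, starting from approximate data imposed at a large negative $s_0$ that mimics \eqref{eq:ode_data_rqphi}--\eqref{eq:ode_data_phidot}; the parameter values, and in particular the constants $\alpha_+$ and $\beta_+$, are illustrative placeholders, and the sketch is not claimed to reproduce the quantitative regimes established below.
\begin{verbatim}
# Illustrative integration of the spatially homogeneous system
# (eq:r_evol), (eq:lapse_evol), (eq:phi_evol), (eq:Q_evol), (eq:gauge_evol).
import numpy as np
from scipy.integrate import solve_ivp

M, e, Lam, m2, q0, eps = 1.0, 0.5, 0.0, 0.1, 1.0, 0.3   # placeholders
r_p = M + np.sqrt(M**2 - e**2)                 # r_+ for Lambda = 0
twoKp = 2.0 / r_p**2 * (M - e**2 / r_p)        # 2K_+ for Lambda = 0
alpha_p, beta_p = 1.0, 0.0                     # placeholder constants

def rhs(s, y):
    r, rdot, lnOm, lnOmdot, phi, phidot, Q, At = y
    Om2 = np.exp(lnOm)
    return [rdot,
            -Om2/(4*r) - rdot**2/r + Om2*Q**2/(4*r**3)
                + Om2*r*(m2*phi**2 + Lam)/4,                    # (eq:r_evol)
            lnOmdot,
            Om2/(2*r**2) + 2*rdot**2/r**2 - Om2*Q**2/r**4
                - 2*phidot**2 + 2*At**2*q0**2*phi**2,           # (eq:lapse_evol)
            phidot,
            -2*rdot*phidot/r - q0**2*At**2*phi - m2*Om2*phi/4,  # (eq:phi_evol)
            At*q0**2*r**2*phi**2,                               # (eq:Q_evol)
            -Q*Om2/(4*r**2)]                                    # (eq:gauge_evol)

s0 = -40.0
Om2_0 = alpha_p * np.exp(twoKp * s0)                    # (eq:ode_data_lapse)
y0 = [r_p, -Om2_0/4*(1 - r_p*m2*eps**2/twoKp),          # (eq:ode_data_rdot)
      np.log(Om2_0), twoKp,                             # (eq:ode_data_lapsedot)
      eps, beta_p*eps*Om2_0,                            # (eq:ode_data_phidot)
      e, -e*Om2_0/(8*twoKp*r_p**2)]                     # (eq:ode_data_gauge)

small_r = lambda s, y: y[0] - 1e-3                      # stop as r approaches 0
small_r.terminal = True
sol = solve_ivp(rhs, (s0, 500.0), y0, method='LSODA',
                rtol=1e-8, atol=1e-10, events=small_r)
print(f"integration stopped at s = {sol.t[-1]:.2f} with r = {sol.y[0, -1]:.3e}")
\end{verbatim}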
\subsection{Linear scattering in the Reissner-Nordstr\"om interior} \label{sub:scattering}
We will often need to make comparisons to various quantities in exact Reissner-Nordstr\"om(-dS/AdS). While this is straightforward for $(r, \Omega^2, Q)$, the scalar field $\phi$ vanishes in Reissner-Nordstr\"om, so instead we make comparisons to solutions to the linear (\textit{charged}) covariant Klein-Gordon equation in the Reissner-Nordstr\"om interior:
\begin{equation} \label{eq:rn_kg}
g^{\mu \nu}_{RN} D_{\mu} D_{\nu} \phi = m^2 \phi.
\end{equation}
Here $g_{RN}$ and $D$ are the metric and covariant derivative in Reissner-Nordstr\"om(-dS/AdS), with $A$ as specified in Section \ref{sub:reissner_nordstrom}, and $m^2 \in \mathbb{R}$ and $q_0 \neq 0$ are fixed.
Since we are interested in the spatially homogeneous problem, we consider only solutions $\phi = \phi_{\mathcal{L}}$ to (\ref{eq:rn_kg}) that satisfy $T \phi_{\mathcal{L}} = 0$, $S \phi_{\mathcal{L}} = 0$, where $T = \partial_t$ is the Killing vector field of Reissner-Nordstr\"om(-dS/AdS) associated to stationarity and $S$ is any vector field on the sphere.
Then denoting $\psi (s) = \phi_{\mathcal{L}}(t = 0, r^* = s)$, where $r^*, t$ are as defined in Section \ref{sub:reissner_nordstrom}, it can be checked that $\psi$ satisfies the following second order ODE in $s$ (see, for instance, the equation (\ref{eq:phi_evol})):
\begin{equation} \label{eq:rn_kg_coord}
\ddot{\psi} = \frac{\Omega^2_{RN}(s)}{4} \frac{2\dot{\psi}}{r_{RN}(s)} - \frac{\Omega^2_{RN}(s)}{4} m^2 \psi - q_0^2 \left( \frac{\mathbf{e}}{r_+} - \frac{\mathbf{e}}{r_{RN}(s)} \right)^2 \psi.
\end{equation}
We define the quantities
\begin{equation} \label{eq:rn_omega}
\tilde{A}_{RN, \infty} = \frac{\mathbf{e}}{r_+} - \frac{\mathbf{e}}{r_-} \neq 0, \hspace{1cm}
\omega_{RN} \coloneqq q_0 \tilde{A}_{RN, \infty} \neq 0.
\end{equation}
Then following \cite{MoiChristoph} we define four functions solving (\ref{eq:rn_kg_coord}): $\psi_{\mathcal{H}, 1}, \psi_{\mathcal{H}, 2}, \psi_{\mathcal{CH}, 1}$ and $\psi_{\mathcal{CH}, 2}$. These satisfy the following asymptotics towards the event horizon $\mathcal{H} = \{ s = - \infty \}$ and the Cauchy horizon $\mathcal{CH} = \{ s = + \infty \}$ \begin{align}
& \psi_{\mathcal{H}, 1}(s) = 1 + o(1) \hspace{4cm} \text{ as } s \to - \infty, \\ & \psi_{\mathcal{H}, 2}(s) = s + o(1) \hspace{4cm} \text{ as } s \to - \infty,\\ & \psi_{\mathcal{CH}, 1}(s) = e^{i \omega_{RN} s} + o(1) \hspace{3cm} \text{ as } s \to + \infty, \\ & \psi_{\mathcal{CH}, 2}(s) = \overline{\psi_{\mathcal{CH}, 1}} (s) = e^{-i \omega_{RN} s} + o(1) \hspace{1cm} \text{ as } s \to + \infty.
\end{align}
The results of \cite{MoiChristoph} then imply the following:
\begin{proposition}
Recalling the definition of $2K_-(M, \mathbf{e}, \Lambda) < 0$ from (\ref{eq:surface_gravity}), and $\alpha_-(M, \mathbf{e}, \Lambda)>0$ from (\ref{eq:omega_asymptotic_2}), there exists some constant $C > 0$ such that
\begin{equation}
|\psi_{\mathcal{CH}, 1}(s) - e^{i \omega_{RN} s}| + \left| \frac{d \psi_{\mathcal{CH}, 1}}{ds}(s) - i \omega_{RN} e^{i \omega_{RN} s} \right| \leq C \Omega^2_{RN}(s) \leq 2 C \alpha_- e^{2 K_- s},
\end{equation}
\begin{equation}
|\psi_{\mathcal{CH}, 2}(s) - e^{- i \omega_{RN} s}| + \left| \frac{d \psi_{\mathcal{CH}, 2}}{ds}(s) + i \omega_{RN} e^{- i \omega_{RN} s} \right| \leq C \Omega^2_{RN}(s) \leq 2 C \alpha_- e^{2 K_- s}.
\end{equation}
Furthermore, there exists a scattering coefficient $B = B(M, q_0,\mathbf{e}, \Lambda) \in \C \setminus \{0\}$ such that
\begin{equation}\label{B.def}
\psi_{\mathcal{H}, 1}(s) = B \psi_{\mathcal{CH}, 1}(s) + \overline{B} \psi_{\mathcal{CH}, 2}(s) = 2 \mathfrak{Re}( B \psi_{\mathcal{CH}, 1}(s)).
\end{equation}
\end{proposition}
\begin{corollary} \label{cor:scattering}
Let $\phi_{\mathcal{L}}$ be the solution to (\ref{eq:rn_kg}) with constant data $\phi = \epsilon$ on the event horizon $\mathcal{H} = \{ s = - \infty \}$. Then there exist $C(M, \mathbf{e}, \Lambda, m^2, q_0) > 0$ and $\tilde{S}(M, \mathbf{e}, \Lambda, m^2, q_0) > 0$ such that for $s \geq \tilde{S}$, one has
\begin{equation}
\left| \phi_{\mathcal{L}}(s) - B \epsilon e^{i \omega_{RN} s} - \overline{B} \epsilon e^{- i \omega_{RN} s} \right| \leq C \epsilon e^{2K_- s}.
\end{equation}
\begin{equation}
\left| \dot{\phi}_{\mathcal{L}}(s) - i \omega_{RN} B \epsilon e^{i \omega_{RN} s} + i \omega_{RN} \overline{B} \epsilon e^{- i \omega_{RN} s} \right| \leq C \epsilon e^{2K_- s}.
\end{equation}
\end{corollary}
\section{Precise statement of the main theorems}\label{sec:theorem}
\subsection{Definition of the spacetime sub-regions} \label{sub:regions}
We now give a precise definition of the regions in Figure~\ref{Fig1}, under the gauge-choice for $s \in \mathbb{R}$ given by \eqref{eq:gauge_data}, for the spacetime described in Proposition~\ref{prop:char_ivp}:
\begin{itemize}
\item The red-shift region $\mathcal{R} \coloneqq \{ - \infty < s \leq - \Delta_{\mathcal{R}} \ll 0 \}$ for some $\Delta_{\mathcal{R}}>0$: here we make strong use of the positive surface gravity of the event horizon (red-shift effect, see \cite{MihalisPHD, Red} and subsequent works).
\item The no-shift region $\mathcal{N} \coloneqq \{ - \Delta_{\mathcal{R}} \leq s \leq S \gg 1 \}$, for some $S>0$: here we use a Cauchy stability argument and Gr\"onwall's inequality to show that quantities are still $\epsilon^2$-close to their Reissner-Nordstr\"om values.
\item The early blue-shift region $\mathcal{EB} \coloneqq \{ S \leq s \leq s_{lin} := |2 K_-|^{-1} \log(\nu \epsilon^{-1}) \}$ for $\nu>0$: here, we begin to exploit the \textit{blue-shift} effect of the Cauchy horizon of Reissner-Nordstr\"om (i.e.\ the negative surface gravity $2K_- < 0$).
\item The late blue-shift region $\mathcal{LB} \coloneqq \{ s_{lin} \leq s \leq \Delta_{\mathcal{B}} \epsilon^{-1} \}$ for some $\Delta_{\mathcal{B}}>0$: here the spacetime geometry begins to depart from that of Reissner-Nordstr\"om, and we provide the key ingredients to help us with the analysis of subsequent regions. In particular, this region starts to see a growth of the Hawking mass (a relic of mass inflation, see \cite{Ori,Moi4} and the introduction of \cite{VDM21}).
\item The oscillations region $\mathcal{O}:=\{ s_O(\epsilon):= 50 s_{lin} \leq s \leq s_{PK}(\epsilon)\}$, where $r ( s_{PK})= 2 |B| \mathfrak{W} r_- \epsilon$ for some $\mathfrak{W}>0$: here the Bessel-type behavior kicks in, leading to the collapsed oscillations discussed in Section~\ref{osc.intro}.
\item The proto-Kasner region $\mathcal{PK}:=\{ s_{PK} \leq s \leq s_{i}(\epsilon)\}$, where $r ( s_{i})= e^{-\delta_0 \epsilon^{-2}}$ for some $\delta_0 > 0$: here the Bessel-type behavior continues but ceases to oscillate, presenting a logarithmic divergence instead. In this region, the Kasner-type behavior starts manifesting itself.
We also define the sub-region $\mathcal{PK}_1 :=\{ s_{K_1} \leq s \leq s_{i}(\epsilon)\} \subset \mathcal{PK}$, where $r ( s_{K_1})= 2 |B| \mathfrak{W} r_- \epsilon^2$. Only in this sub-region will we establish that a Kasner-type behavior takes place.
\item The Kasner region $\mathcal{K}:=\{ s_{i} \leq s < s_{\infty}(\epsilon)\}$, where $\lim_{s\rightarrow s_{\infty}}r (s)= 0$. In this region, we prove that the metric is close to one (or two) Kasner regimes and that the scalar field is governed by a first-order ODE.
\end{itemize}
We also introduce the additional sub-regions of $\mathcal{PK} \cup \mathcal{K}$ depicted in Figure~\ref{FigN}.
\begin{itemize}
\item The first Kasner region $\mathcal{K}_1\subset \mathcal{PK}_1 \cup \mathcal{K}$ overlaps with $\mathcal{PK}$ and $\mathcal{K}$. In this region, we will show that the metric is in a Kasner regime.
Anticipating Theorem~\ref{maintheorem2}, we will find that $\mathcal{K}_1= \mathcal{PK}_1 \cup \mathcal{K}$ in the no Kasner inversion case \eqref{Ninv.eq} (i.e.\ the first Kasner regime is the final Kasner regime, with all Kasner exponents being positive) and $\mathcal{K}_1=\{s_{K_1} \leq s \leq s_{in}\} \neq \mathcal{PK}_1 \cup \mathcal{K}$ in the Kasner inversion case \eqref{inv.eq} (with one negative Kasner exponent, which is thus expected to be unstable). Here $s_{in}:= \min \{s \in \mathcal{K}: |\Psi(s)| = |\alpha|+ \epsilon^2\}$.
\item The Kasner inversion region $\mathcal{K}_{inv} \subset \mathcal{K}$. We have $\mathcal{K}_{inv}=\emptyset$ in the no Kasner inversion case \eqref{Ninv.eq}, and $\mathcal{K}_{inv}=\{s_{in} \leq s \leq s_{out}\}$ in the Kasner inversion case \eqref{inv.eq}, where $s_{out}:= \min \{s \in \mathcal{K}: |\Psi(s)| = |\alpha|^{-1}- \epsilon^2\}$. We have weaker control of the metric in $\mathcal{K}_{inv}$, but we show that it is very short in terms of proper time.
\item The second Kasner region $\mathcal{K}_{2} \subset \mathcal{K}$. We will have $\mathcal{K}_{2}=\emptyset$ in the no Kasner inversion case \eqref{Ninv.eq}, and $\mathcal{K}_{2}=\{s_{out} \leq s < s_{\infty}\}$ in the Kasner inversion case \eqref{inv.eq}, where we exhibit a second Kasner regime (with positive Kasner exponents, in contrast to the first Kasner regime $\mathcal{K}_{1} $).
\end{itemize}
\subsection{First statement: formation of a spacelike singularity}
We start with our main result, which covers part of Theorem~\ref{thm.intro} (namely the formation of the spacelike singularity), up to the restriction $\epsilon \in E_{\eta} \subset (-\epsilon_0,\epsilon_0)\setminus\{0\}$, where $E_{\eta}$ has measure $2\epsilon_0-O(\eta \epsilon_0)$. We reiterate that Theorem~\ref{maintheorem} contains the statements~\ref{I1}, \ref{I2}, \ref{I5} of our rough Theorem~\ref{thm.intro}; more precisely, statement~\ref{I1} of Theorem~\ref{thm.intro} corresponds to statement~\ref{SS1} of Theorem~\ref{maintheorem}, while statements~\ref{I2} and \ref{I5} of Theorem~\ref{thm.intro} are covered by statement~\ref{SS2} of Theorem~\ref{maintheorem} as well as the estimate (\ref{Q.PK}).
The statements~\ref{SS3} and \ref{SS4} of Theorem~\ref{maintheorem} will also lay the groundwork towards proving the more specific Kasner asymptotics claimed in statements~\ref{I3} and \ref{I4} of Theorem~\ref{thm.intro}; the precise nature of these asymptotics will be covered in Theorem~\ref{maintheorem2}, upon restricting further to $\epsilon \in E_{\eta,\sigma}' \subset E_{\eta}$.
In the following theorem and the rest of the paper, we will use the notation $A \lesssim B$ if there exists $C(M,\mathbf{e},\Lambda,q_0,m^2, \eta)>0$ such that $A \leq C B$.
\begin{thm}\label{maintheorem}
Let $(M,\mathbf{e},\Lambda) \in \mathcal{P}_{se}$ with $\mathbf{e}\neq0$, $q_0\neq 0$, $m^2 \in \mathbb{R}$.
Then, for any $\eta>0$ chosen sufficiently small, there exists $\epsilon_0(M,\mathbf{e},\Lambda,q_0,m^2,\eta)>0$ and a subset $E_{\eta} \subset (-\epsilon_0,\epsilon_0)\setminus \{0\}$, satisfying $\frac{|(-\delta,\delta) \setminus E_{\eta}|}{2\delta}= O(\eta)$ for any $0<\delta \leq \epsilon_0$, such that for all $\epsilon\in E_{\eta}$, the maximal future development $\mathcal{M}$ for \eqref{E1}-\eqref{E5} of the characteristic data from Section~\ref{prelim.section} (i.e.\ \eqref{eq:gauge_data}, \eqref{eq:r_data}, \eqref{eq:Q_data}, and \eqref{eq:phi_data}) terminates at a spacelike singularity $\mathcal{S}$ on which $r$ extends continuously to $0$, and the Penrose diagram is given by Figure~\ref{Penrose_detailed}.
More precisely, there exists a foliation of $\mathcal{M}$ by spacelike hypersurfaces $\Sigma_{s}$, with $s\in (-\infty,s_{\infty}(\epsilon))$, where $s$ is defined in \eqref{eq:gauge_data}, and $s_{\infty}= \frac{ b_-^{-2} }{ 4 |K_-|} \epsilon^{-2}+O(\log(\epsilon^{-1}))$; here $B(M,\mathbf{e},\Lambda,m^2, q_0) \neq 0$ is defined in \eqref{B.def}, $2K_-(M,\mathbf{e},\Lambda)<0$ and $\omega_{RN}(M,\mathbf{e},\Lambda,q_0)\neq 0$ are defined in Section~\ref{sub:reissner_nordstrom}, and $b_-=\frac{2|B|\omega_{RN}}{2|K_-|}$.
The subsequent spacetime dynamics are described as follows:
\begin{enumerate}[i.]
\item \label{SS1} (Almost formation of a Cauchy horizon). In the late blue-shift region $\mathcal{LB} \coloneqq \{ s_{lin}(\epsilon) \leq s \leq \Delta_{\mathcal{B}} \epsilon^{-1} \}$, we have the following stability estimates with respect to the Reissner--Nordstr\"om(-dS/AdS) metric:
\begin{multline}
\epsilon^{-1} |\phi(s) - 2 \epsilon \Re( Be^{i \omega_{RN} s}) | + \epsilon^{-1}\left|\frac{d}{ds}[\phi(s) - 2 \epsilon \Re( B e^{i \omega_{RN} s})] \right|+ | r(s) - r_- | \\[0.5em] + | Q(s)-\mathbf{e} | + \left| \frac{d}{ds} \log (\Omega^2)(s) - 2 K_- \right| \lesssim \epsilon^2 s \lesssim \epsilon.
\end{multline} For $s \in \mathcal{LB}$, we also find the following estimate for $-r \dot{r}(s)$:
\begin{equation} \label{lb_rrdot} \left | - r\dot{r}(s) - \frac{4 |B|^2 \omega_{RN}^2 \epsilon^2 r_-} {2 |K_-|} \right | \lesssim e^{2K_- s} + \epsilon^4 s. \end{equation}
\item \label{SS2} (Collapsed oscillations and loss of charge). In the oscillations region $\mathcal{O} \coloneqq \{ s_{O}(\epsilon) \leq s \leq s_{PK}(\epsilon) \}$, we have the following Bessel-type oscillations for the scalar field: for some $\xi_0 = \xi_0^{\epsilon=0}(M,\mathbf{e},\Lambda,q_0,m^2)+O(\epsilon^2 \log(\epsilon^{-1}))\neq 0$,
\begin{equation}\label{Bessel.statement}
\left | \phi(s) - \left( C_J (\epsilon) J_0 \left( \frac{\xi_0 r^2(s)}{r_-^2 \epsilon ^2} \right) + C_Y(\epsilon) Y_0 \left( \frac{ \xi_0 r^2(s)} {r_-^2 \epsilon^2} \right)\right) \right| \lesssim\epsilon^2 \log(\epsilon^{-1}),\end{equation}
\begin{equation} \label{Bessel.statement.2} \left |\frac{d}{ds}\left( \phi(s) - \left( C_J (\epsilon) J_0 \left( \frac{\xi_0 r^2(s)}{r_-^2 \epsilon ^2} \right) + C_Y(\epsilon) Y_0 \left( \frac{ \xi_0 r^2(s)} {r_-^2 \epsilon^2} \right)\right) \right)\right| \lesssim\epsilon^2 \log(\epsilon^{-1}),\end{equation} where the constants $ C_J (\epsilon)$, $ C_Y (\epsilon)$ are highly oscillatory in $\epsilon$; namely for $\mathfrak{W}(M,\mathbf{e},\Lambda,q_0) >0$ given by
\begin{equation} \mathfrak{W}(M,\mathbf{e},\Lambda,q_0)=\sqrt{\frac{|\omega_{RN}|}{2|K_-|}}= \sqrt{ \frac{|q_0 \mathbf{e}|}{|\frac{\mathbf{e}^2}{r_-^2}- \frac{\Lambda}{3} r_+ (r_+ + 2r_-)|}}>0,\label{M.formula}
\end{equation}
one finds that
\begin{equation}
\left| C_J(\epsilon) - \frac{\sqrt{\pi}}{2} \mathfrak{W}^{-1} \cos (\Theta(\epsilon)) \right|+ \left| C_Y(\epsilon) - \frac{\sqrt{\pi}}{2} \mathfrak{W}^{-1} \sin (\Theta(\epsilon)) \right| \lesssim \epsilon^2 \log (\epsilon^{-1}),\end{equation}
\begin{equation} \left| \Theta(\epsilon)-\frac{\epsilon^{-2}}{8 |B|^2 \mathfrak{W} } \right|+ \epsilon \cdot \left|\frac{d}{d\epsilon}\left( \Theta(\epsilon)-\frac{\epsilon^{-2}}{8 |B|^2 \mathfrak{W} }\right) \right|\lesssim\log(\epsilon^{-1}). \end{equation}
For $s \in \mathcal{O}$, one has (note this improves upon \eqref{lb_rrdot} in $\mathcal{LB} \cap \mathcal{O}$):
\begin{equation} \label{o_rrdot} \left | - r\dot{r}(s) - \frac{4 |B|^2 \omega_{RN}^2 \epsilon^2 r_-} {2 |K_-|} \right | \lesssim \epsilon^4 \log(\epsilon^{-1}). \end{equation}
Moreover, the charge $Q$ transitions from $\mathbf{e}$ to $Q_{\infty}(M, \mathbf{e},\Lambda):=\frac{3}{4}\mathbf{e} + \Lambda\frac{r_-^2 r_+(2r_-+r_+)}{12 \mathbf{e}}$ (up to $O(\epsilon^{2-})$ errors), and \begin{equation}
\left| Q(s)- \mathbf{e} +(\mathbf{e}- Q_{\infty})\left(1-\frac{r^2(s)}{r_-^2}\right)\right|\lesssim \epsilon^2 \log(\epsilon^{-1} ) \text{ for all } s\in \mathcal{O},\end{equation}
\begin{equation} \frac{Q_{\infty}(M, \mathbf{e}, 0)}{\mathbf{e}}=\frac{3}{4},\end{equation}
\begin{equation} \left\{ \frac{Q_{\infty}(M,\mathbf{e},\Lambda)}{\mathbf{e}}:\ (M,\mathbf{e},\Lambda)\in \mathcal{P}_{se},\ \Lambda<0 \right\}= \left( \frac{1}{2},\frac{3}{4} \right),\end{equation}
\begin{equation} \left\{\frac{Q_{\infty}(M,\mathbf{e},\Lambda)}{\mathbf{e}}:\ (M,\mathbf{e},\Lambda)\in \mathcal{P}_{se},\ \Lambda>0 \right \}= \left(\frac{3}{4},1\right). \end{equation}
\item (Logarithmic Bessel-type divergence). \label{SS3} \sloppy In the proto-Kasner region $\mathcal{PK}:=\{ s_{PK}(\epsilon) \leq s \leq s_i(\epsilon)\}$, the Bessel-type behavior persists with slightly different coefficients: there exists $(C_{YK}(\epsilon), C_{JK}(\epsilon),\xi_K)= (C_Y(\epsilon),C_J(\epsilon),\xi_0) + O(\epsilon^2 \log(\epsilon^{-1}))$ such that \eqref{Bessel.statement}, \eqref{Bessel.statement.2} remain true for all $s\in \mathcal{PK}$, after replacing $ (C_Y(\epsilon),C_J(\epsilon),\xi_0) $ by $(C_{YK}(\epsilon), C_{JK}(\epsilon),\xi_K)$. Consequently, we have the following logarithmic divergence: for all $s\in \mathcal{PK}$, \begin{equation}
\left | \phi(s) + \frac{2}{\pi} C_{YK}(\epsilon)\log(\frac{r_-^2 \epsilon^2}{ \xi_K r^2(s)}) - C_{JK}(\epsilon) - 2\pi^{-1} (\gamma-\log 2) C_{YK}(\epsilon)\right| \lesssim\epsilon^2 \log(\epsilon^{-1}),\end{equation} \begin{equation} \left |\frac{d}{ds}\left( \phi(s) + \frac{2}{\pi} C_{YK}\log(\frac{r_-^2 \epsilon^2}{ \xi_K r^2(s)}) \right)\right| \lesssim\epsilon^2 \log(\epsilon^{-1}).
\end{equation}
Moreover, \eqref{o_rrdot} still holds for $s \in \mathcal{PK}$, while the charge $Q$ remains very close to its value at $s_{PK}$. In particular for all $s\in \mathcal{PK}$: \begin{equation}\label{Q.PK}
| Q(s) - Q_{\infty}|\lesssim \epsilon^2 \log(\epsilon^{-1}).
\end{equation}
Finally, in the sub-region $\mathcal{PK}_1=\{ s_{K_1}(\epsilon) \leq s \leq s_i(\epsilon)\} \subset \mathcal{PK}$, we define the quantity $\Psi(s):= -r(s) \frac{d\phi}{dr}(s)= \frac{r(s)}{-\frac{dr}{ds}(s)}\frac{d\phi}{ds}(s)$. This obeys the following estimates:
\begin{equation}
\left|\Psi(s) + \frac{2}{\sqrt{\pi}} \mathfrak{W}^{-1} \sin(\Theta(\epsilon))\right| \lesssim \epsilon^2 \log(\epsilon^{-1}).
\end{equation}
\begin{equation}
|\Psi(s) - \Psi(s_i) | \lesssim r^2(s)\log(\epsilon^{-1}) \lesssim \epsilon^4\log(\epsilon^{-1}).
\end{equation}
\item \label{SS4} In the Kasner region $\mathcal{K}:=\{ s_{i} \leq s < s_{\infty}(\epsilon)\}$, $-r \dot{r}(s)$ obeys the lower bound
\begin{equation} \label{k_rrdot}
- r \dot{r} (s) \geq \frac{4 |B|^2 \omega_{RN}^2 \epsilon^2 r_-} {2 |K_-|} \cdot \frac{1}{2} \eta^2.
\end{equation}
The quantity $\Psi$ obeys the following ODE: introducing $R:= \log(\frac{r_-}{r})$, \begin{equation}
\frac{d\Psi}{dR} = - \Psi (\Psi-\alpha) (\Psi-\alpha^{-1}) + \mathcal{F},
\end{equation}\begin{equation} \label{alpha.def}
|\alpha- \Psi(s_i)| \lesssim e^{-\delta_0 \epsilon^{-2}},\ \mathcal{F}(R) \lesssim e^{-\delta_0 \epsilon^{-2}} r(R).
\end{equation}
Finally, charge retention persists, in the sense that \eqref{Q.PK} remains valid for all $s \in \mathcal{K}$.
\end{enumerate}
\end{thm}
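To illustrate the fixed-point structure underlying statement~\ref{SS4}, the following purely illustrative numerical sketch (not part of the proof) integrates the model equation $\frac{d\Psi}{dR} = - \Psi (\Psi-\alpha) (\Psi-\alpha^{-1})$ with the forcing $\mathcal{F}$ set to zero and an arbitrary small initial perturbation of $\alpha$, both simplifying assumptions made only for this illustration. For a sample $|\alpha|>1$ the fixed point $\Psi=\alpha$ is attracting, while for a sample $|\alpha|<1$ it is repelling and an orbit starting slightly above $\alpha$ migrates to $\Psi=\alpha^{-1}$, consistent with the Kasner inversion described in Theorem~\ref{maintheorem2} below.
\begin{verbatim}
# Illustrative sketch only: the forcing F is dropped and the sample values of
# alpha and of the initial perturbation (1e-6) are arbitrary choices of ours.
from scipy.integrate import solve_ivp

def rhs(R, Psi, alpha):
    # model ODE of statement iv. with the forcing term set to zero
    return -Psi * (Psi - alpha) * (Psi - 1.0 / alpha)

for alpha in (1.4, 0.6):  # |alpha| > 1 (no inversion) vs |alpha| < 1 (inversion)
    sol = solve_ivp(rhs, (0.0, 60.0), [alpha + 1e-6], args=(alpha,),
                    rtol=1e-10, atol=1e-12)
    target = alpha if abs(alpha) > 1 else 1.0 / alpha
    print(f"alpha={alpha}: Psi(R=60)={sol.y[0, -1]:.6f}, expected limit={target:.6f}")
\end{verbatim}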
\subsection{Second statement: Kasner asymptotics in the $\mathcal{PK}$ and $\mathcal{K}$ regions}
We now turn to the details of the region $\mathcal{PK}_1 \cup \mathcal{K}$, in which the Kasner-like behavior manifests itself. The following theorem requires a (slightly) stronger assumption on $\epsilon$ than Theorem~\ref{maintheorem}.
\begin{thm}\label{maintheorem2}
Let $(M,\mathbf{e},\Lambda) \in \mathcal{P}_{se}$ with $\mathbf{e}\neq0$, $q_0\neq 0$, $m^2 \in \mathbb{R}$. Then, for any sufficiently small $\eta, \sigma > 0$, there exists $\epsilon_0(M,\mathbf{e},\Lambda,q_0,m^2,\eta,\sigma)>0$, and we may define a subset $E_{\eta,\sigma}' \subset E_{\eta}$ with $\frac{|(-\delta,\delta) \setminus E'_{\eta,\sigma}|}{2\delta}= O(\eta + \sigma)$ for any $0< \delta \leq \epsilon_0$, as
$E'_{\eta,\sigma}=E_{\eta,\sigma}^{'\ Ninv} \cup E_{\eta,\sigma}^{'\ inv}$, where $E_{\eta,\sigma}^{'\ Ninv}$ and $E_{\eta,\sigma}^{'\ inv}$ are disjoint sets defined as follows (recall \eqref{M.formula}): \begin{align}\label{inv.eq}
& \epsilon \in E_{\eta,\sigma}^{'\ inv} \text{ if } \eta < |\Psi(s_i)| = \frac{2}{\sqrt{\pi}} \mathfrak{W}^{-1}|\sin(\Theta(\epsilon))| + O(\epsilon^2 \log (\epsilon^{-1})) \leq 1- \sigma <1.\\
\label{Ninv.eq} &
\epsilon \in E_{\eta,\sigma}^{'\ Ninv} \text{ if } |\Psi(s_i)| = \frac{2}{\sqrt{\pi}} \mathfrak{W}^{-1}|\sin(\Theta(\epsilon))| + O(\epsilon^2 \log(\epsilon^{-1})) \geq 1+ \sigma >1.
\end{align}
We call these respectively the Kasner inversion case and the no Kasner inversion case.
Moreover, we have the following two possibilities. \begin{itemize}
\item If $\mathfrak{W}(M,\mathbf{e},\Lambda,q_0) \geq \frac{2}{\sqrt{\pi}}$, then $E_{\eta,\sigma}^{'\ Ninv}=\emptyset$, and therefore $\frac{|(-\epsilon,\epsilon)\setminus E_{\eta,\sigma}^{'\ inv}|}{2\epsilon}=O(\eta + \sigma)$, for $\eta,\sigma$ small.
\item If $\mathfrak{W}(M,\mathbf{e},\Lambda,q_0) < \frac{2}{\sqrt{\pi}}$, then $|E_{\eta,\sigma}^{'\ Ninv}|, |E_{\eta,\sigma}^{'\ inv}|>0$, and for $\eta,\sigma$ small:
\begin{align*}
& \frac{|E_{\eta,\sigma}^{'\ Ninv}\cap(-\epsilon, \epsilon)|}{2\epsilon}= \frac{2}{\pi} \arcsin(1-\frac{\sqrt{\pi}}{2} \mathfrak{W})+O(\sigma), \\ & \frac{|E_{\eta,\sigma}^{'\ inv} \cap (-\epsilon, \epsilon)|}{2\epsilon}= 1-\frac{2}{\pi} \arcsin(1-\frac{\sqrt{\pi}}{2} \mathfrak{W})+O(\eta + \sigma).
\end{align*}
\end{itemize}
Then, for all $\epsilon \in E_{\eta,\sigma}^{'}$, we have the following Kasner-like behavior.
\begin{enumerate}
\item In the first Kasner region $\mathcal{K}_1$, we have the following, recalling $\alpha$ from \eqref{alpha.def}:
\begin{enumerate}
\item In the no Kasner inversion case \eqref{Ninv.eq}, we have $|\alpha|>1$, $\mathcal{K}_1=\mathcal{PK}_1 \cup \mathcal{K}$, and for all $s\in \mathcal{K}_1$: \begin{equation}
|\Psi(s)-\alpha| \lesssim e^{-\delta_0 \epsilon^{-2}} \cdot \left( \frac{r(s)}{r(s_i)} \right)^{\beta},
\end{equation} where we define $\beta:= \min\{\frac{1}{2}, \alpha^2-1\}>0$. Moreover, the metric takes the following Kasner-like form
\begin{equation}
g = - d \tau^2 + \mathcal{X}_1 \cdot ( 1 + \mathfrak{E}_{X, 1}(\tau)) \,\tau^{\frac{2 (\alpha^2 -1)}{\alpha^2 + 3}} \, dt^2 + \mathcal{R}_1 \cdot ( 1 + \mathfrak{E}_{R, 1}(\tau)) \, r_-^2 \tau^{ \frac{4}{\alpha^2 + 3} } \, d \sigma_{\mathbb{S}^2} .
\end{equation} \begin{equation}\label{K}
\left| \log \mathcal{X}_1 + \frac{\alpha^2 + 1}{\alpha^2 + 3} \frac{4|K_-|^2}{|B|^2}\ \epsilon^{-2} \right| + \left| \log \mathcal{R}_1 - \frac{1}{\alpha^2 + 3} \frac{4|K_-|^2}{|B|^2}\ \epsilon^{-2} \right| \lesssim \log (\epsilon^{-1}),
\end{equation}
\begin{equation}
|\mathfrak{E}_{X, 1}(\tau)| + |\mathfrak{E}_{R, 1}(\tau)| \lesssim \epsilon^2 \cdot \left(\frac{\tau}{\tau(s_{K_1})}\right)^{\frac{2\beta}{\alpha^2 + 3}},
\end{equation} where $\tau$ is the proper time\footnote{More precisely, $\tau$ is a past-directed timelike variable, orthogonal to the hypersurfaces $\Sigma_s$, and normalized such that $g(d \tau, d \tau) = 1$ and $\tau = 0$ at the spacelike singularity $\{ r = 0 \}$.}, and we call $(p_1,p_{2},p_{3})= (\frac{\alpha^2-1}{\alpha^2+3}, \frac{2}{\alpha^2+3}, \frac{2}{\alpha^2+3}) \in (0,1)^3$ the Kasner exponents.
\item In the Kasner inversion case \eqref{inv.eq}, we have $|\alpha|<1$, $\mathcal{PK}_1\subset\mathcal{K}_1 =\{ s_{K_1} \leq s \leq s_{in}\}\subset \mathcal{PK}_1 \cup \mathcal{K}$, and for all $s\in \mathcal{K}_1$: \begin{equation}
|\Psi(s)-\alpha| \lesssim \epsilon^2,
\end{equation} where $s_{in} = \frac{b_-^{-2}}{4|K_-|}\epsilon^{-2}+ O(\log(\epsilon^{-1}))$ is such that \begin{equation}\label{rsin}
r(s_{in}) = r_- \cdot \exp(-\frac{\epsilon^{-2}}{2b_-^2\cdot (1-\alpha^2)} +O(\log(\epsilon^{-1}))).
\end{equation} Moreover, the metric takes the following Kasner-like form, where $\tau$ is the proper time, and $\tau_0>0$ a constant.
\begin{equation}
g = - d \tau^2 + \mathcal{X}_1 \cdot ( 1 + \mathfrak{E}_{X, 1}(\tau)) \,(\tau-\tau_0)^{\frac{2 (\alpha^2 -1)}{\alpha^2 + 3}} \, dt^2 + \mathcal{R}_1 \cdot ( 1 + \mathfrak{E}_{R, 1}(\tau)) \, r_-^2 (\tau-\tau_0)^{ \frac{4}{\alpha^2 + 3} } \, d \sigma_{\mathbb{S}^2}.
\end{equation}
We call $(p_1,p_{2},p_{3})= (\frac{\alpha^2-1}{\alpha^2+3}, \frac{2}{\alpha^2+3}, \frac{2}{\alpha^2+3}) \in (-\frac{1}{3},0)\times (\frac{1}{2},\frac{2}{3})^2$ the first Kasner exponents.
Moreover, $\mathcal{X}_1$ and $\mathcal{R}_1$ obey \eqref{K}, and for all $\tau(s_{K_1}) \leq \tau \leq \tau(s_{in})$:
\begin{equation}
|\mathfrak{E}_{X, 1}(\tau)| + |\mathfrak{E}_{R, 1}(\tau)| \lesssim \epsilon^2 .
\end{equation}
\end{enumerate}
\item In the Kasner inversion region $\mathcal{K}_{inv}$ (only in the Kasner inversion case \eqref{inv.eq}), we have for all $s \in \mathcal{K}_{inv}$ \begin{equation}
r(s) = r_- \cdot \exp(-\frac{\epsilon^{-2}}{2b_-^2\cdot (1-\alpha^2)} +O(\log(\epsilon^{-1}))).
\end{equation} Moreover, in terms of proper time, we have \begin{equation}
0<\tau(s_{in})-\tau(s_{out}) \lesssim \exp(-\frac{\epsilon^{-2} }{b_-^2\cdot(1-\alpha^2)} +O(\log(\epsilon^{-1}))).
\end{equation}
\item In the second Kasner region $\mathcal{K}_{2}$ (only in the Kasner inversion case \eqref{inv.eq}), we define $\beta:= \min\{\frac{1}{2}, \alpha^{-2}-1\}>0$ (since $|\alpha|<1$). We have
for all $s\in \mathcal{K}_2$: \begin{equation}
|\Psi(s)-\alpha^{-1}| \lesssim \epsilon^2 \cdot \left( \frac{r(s)}{r(s_{out})}\right)^{\beta}.
\end{equation}
Moreover, the metric takes the following Kasner-like form, for all $0<\tau \leq \tau(s_{out})$ \begin{equation}
g = - d \tau^2 + \mathcal{X}_2 \cdot ( 1 + \mathfrak{E}_{X, 2}(\tau)) \, \tau^{\frac{2 (1 - \alpha^2)}{1 + 3 \alpha^2}} \, dt^2 + \mathcal{R}_2 \cdot ( 1 + \mathfrak{E}_{R, 2}(\tau)) \, r_-^2 \tau^{ \frac{4 \alpha^2}{1 + 3 \alpha^2} } \, d \sigma_{\mathbb{S}^2} .
\end{equation} \begin{equation}
\left| \log \mathcal{X}_2 + \frac{1 + \alpha^{-2}}{1 + 3 \alpha^2} \frac{4|K_-|^2}{|B|^2}\ \epsilon^{-2} \right| + \left| \log \mathcal{R}_2 - \frac{1}{1 + 3\alpha^2} \frac{4|K_-|^2}{|B|^2}\ \epsilon^{-2} \right| \lesssim \log (\epsilon^{-1}),
\end{equation}
\begin{equation}
|\mathfrak{E}_{X, 2}(\tau)| + |\mathfrak{E}_{R, 2}(\tau)| \lesssim \epsilon^2 \cdot \left(\frac{\tau}{\tau(s_{out})}\right)^{\frac{2\beta}{\alpha^{-2} + 3}}.
\end{equation} We call $(p_1,p_{2},p_{3})= (\frac{1-\alpha^2}{1+3\alpha^2}, \frac{2\alpha^2}{1+3\alpha^2}, \frac{2\alpha^2}{1+3\alpha^2}) \in (0,1)^3$ the second Kasner exponents.
Finally, in terms of proper time, we have (recalling that $\tau$ is normalized so that $\underset{s\rightarrow s_{\infty}}{\lim}\tau(s)=0$):
\begin{equation}
\exp(-\frac{1+\alpha^{-2}}{2b_-^2\cdot(1-\alpha^2)}\ \epsilon^{-2}+O(\log(\epsilon^{-1})))\lesssim \tau(s_{out}) \lesssim \exp(-\frac{1+\alpha^{-2}}{2b_-^2\cdot(1-\alpha^2)}\ \epsilon^{-2}+O(\log(\epsilon^{-1}))).
\end{equation}
\end{enumerate}
\end{thm}
Without loss of generality, we will choose $\epsilon>0$ in all the subsequent sections.
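As a purely illustrative consistency check of the exponents appearing in Theorem~\ref{maintheorem2} (and not a part of its proof), the short computation below verifies, for one arbitrary sample value of $\alpha$ with $|\alpha|<1$, that the first Kasner exponents $(\frac{\alpha^2-1}{\alpha^2+3}, \frac{2}{\alpha^2+3}, \frac{2}{\alpha^2+3})$ sum to $1$ with $p_1<0$, and that substituting $\alpha \mapsto \alpha^{-1}$ reproduces the second (all positive) Kasner exponents $(\frac{1-\alpha^2}{1+3\alpha^2}, \frac{2\alpha^2}{1+3\alpha^2}, \frac{2\alpha^2}{1+3\alpha^2})$, which also sum to $1$.
\begin{verbatim}
# Illustrative arithmetic check only; the sample alpha = 3/5 is arbitrary.
from fractions import Fraction

def kasner_exponents(a):
    # (p1, p2, p3) = ((a^2 - 1)/(a^2 + 3), 2/(a^2 + 3), 2/(a^2 + 3))
    denom = a * a + 3
    return ((a * a - 1) / denom, 2 / denom, 2 / denom)

alpha = Fraction(3, 5)                 # |alpha| < 1: Kasner inversion case
first = kasner_exponents(alpha)        # first exponents: p1 = -4/21 < 0, p2 = p3 = 25/42
second = kasner_exponents(1 / alpha)   # second exponents: p1 = 4/13, p2 = p3 = 9/26
assert sum(first) == 1 and sum(second) == 1
assert first[0] < 0 < min(second)
print("first :", first)
print("second:", second)
\end{verbatim}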
\section{Almost formation of a Cauchy horizon} \label{sec:einsteinrosen}
In this section, we provide estimates up to a region $s \sim \epsilon^{-1}$. The analysis will be perturbative in nature, and we always bear in mind the comparison to the linear charged scalar field problem in Reissner-Nordstr\"om, see Section \ref{sub:scattering}. The analysis will largely follow \cite{VDM21}, with minor modifications due to the now dynamical nature of the charge $Q$ and the charge term for the scalar field.
As in \cite{VDM21}, the estimates up to $s \sim \epsilon^{-1}$ will be divided into the four regions $\mathcal{R}$, $\mathcal{N}$, $\mathcal{EB}$, and $\mathcal{LB}$ (see Figure~\ref{Penrose_detailed} and Figure~\ref{Fig1}). Where differences from \cite{VDM21} are minor, we shall aim to be relatively brief, and focus on the new techniques required to deal with the charge and the scalar field.
\subsection{Estimates up to the no-shift region}
We begin with some notation. As the arguments of this section are perturbative, it will help to introduce differences between the quantities $(r, \Omega^2, \phi, Q, \tilde{A})$ and their Reissner-Nordstr\"om(-dS/AdS) values. Therefore define
\begin{equation*}
\delta r (s) = r (s) - r_{RN}(s), \hspace{0.5cm} \delta \Omega^2(s) = \Omega^2(s) - \Omega^2_{RN}(s),
\end{equation*}
\begin{equation*}
\delta \phi (s) = \phi (s) - \phi_{\mathcal{L}}(s),
\end{equation*}
\begin{equation*}
\delta Q (s) = Q(s) - \mathbf{e}, \hspace{0.5cm} \delta \tilde{A}(s) = \tilde{A}(s) + \left( \frac{\mathbf{e}}{r_+} - \frac{\mathbf{e}}{r_{RN}(s)} \right),
\end{equation*}
where we used (\ref{eq:reissner_nordstrom_gauge}) to provide the Maxwell gauge field $\tilde{A}_{RN}$ to which we compare, and $\phi_{\mathcal{L}}$ is the solution to the linear charged scalar field scattering problem in Reissner-Nordstr\"om, see Corollary \ref{cor:scattering}.
We also use the quantity $\delta \log \Omega^2 (s) = \log \Omega^2 (s) - \log \Omega^2_{RN} (s) $. Assuming that $\delta \log \Omega^2 (s)$ is bounded above by say $1$, it is easily seen that
\begin{equation} \label{eq:logomegadiff}
C^{-1} \delta \Omega^2(s) \leq \Omega^2(s) \,\delta \log \Omega^2(s) \leq C \,\delta \Omega^2(s).
\end{equation}
We now proceed to the estimates in the red-shift region $\mathcal{R}$.
\begin{proposition} \label{prop:redshift}
There exist $D_R(M, \mathbf{e}, \Lambda, m^2, q_0) > 0$ and a large $\Delta_{\mathcal{R}}(M, \mathbf{e}, \Lambda) > 0$ such that in the red-shift region $\mathcal{R} = \{ - \infty < s < - \Delta_{\mathcal{R}} \}$, the following estimates hold:
\begin{equation} \label{eq:rs_phi}
|\phi| + |\Omega^{-2} \dot{\phi}| \leq D_R \epsilon,
\end{equation}
\begin{equation} \label{eq:rs_kappa}
| \kappa^{-1} - 1 | \leq D_R \epsilon^2,
\end{equation}
\begin{equation} \label{eq:rs_mass}
| \varpi - M | \leq D_R \epsilon^2,
\end{equation}
\begin{equation} \label{eq:rs_omega}
\left | \frac{d \log (\Omega^2)}{ds} - 2 K(s) \right | + | \log(\alpha_+^{-1} \Omega^2) - 2 K_+ s | \leq D_R \Omega^2 \ll 1,
\end{equation}
\begin{equation} \label{eq:rs_diffs}
| \delta \log (\Omega^2) | + |\delta r| + |\delta \dot{r}| + |\delta \tilde{A}| \Omega^{-2} + | \delta Q | \leq D_R \epsilon^2,
\end{equation}
\begin{equation} \label{eq:rs_phidiff}
| \delta \phi | + | \delta \dot{\phi} | \leq D_R \epsilon^3.
\end{equation}
\end{proposition}
\begin{proof}
We provide a short sketch. For more details, see Proposition 4.5 in \cite{Moi} or Lemma 4.1 in \cite{VDM21}. We make the following bootstrap assumptions:
\begin{equation}
\left | \frac{d}{ds} \log(\Omega^2) - 2 K_+ \right | \leq K_+,
\tag{RS1} \label{eq:rs_bootstrap_lapse}
\end{equation}
\begin{equation}
| \phi | \leq 4 \epsilon,
\tag{RS2} \label{eq:rs_bootstrap_phi}
\end{equation}
\begin{equation}
|Q - \mathbf{e}| \leq \frac{\mathbf{e}}{2},
\tag{RS3} \label{eq:rs_bootstrap_Q}
\end{equation}
\begin{equation}
|r - r_+| \leq \frac{r_+}{2}.
\tag{RS4} \label{eq:rs_bootstrap_r}
\end{equation}
These will hold in a neighborhood of $s = - \infty$ due to the asymptotic data (\ref{eq:ode_data_rqphi}) and (\ref{eq:ode_data_lapsedot}).
The bootstrap assumption (\ref{eq:rs_bootstrap_lapse}) will give the important estimate
\begin{equation} \label{eq:rs_omega_estimate}
\int^s_{-\infty} \Omega^2(s') \, ds' \lesssim \int^s_{-\infty} \frac{d}{ds'} \log (\Omega^2)(s') \cdot \Omega^2(s') \, ds' = \int^s_{-\infty} \frac{d}{ds'} \Omega^2(s') \, ds' = \Omega^2(s).
\end{equation}
The evolution equation (\ref{eq:gauge_evol}) for $\tilde{A}$ will immediately yield that $|\tilde{A}| \lesssim \Omega^2$. Therefore turning to the equation (\ref{eq:phi_evol_2}), we use (\ref{eq:rs_omega_estimate}) to see that
\begin{equation*}
|r^2 \dot{\phi}| \lesssim \int_{- \infty}^s \Omega^2(s') \epsilon \, ds' \lesssim \Omega^2(s) \epsilon,
\end{equation*}
which after another round of integration gives
$|\phi| \leq \epsilon + C \Omega^2(s) \epsilon,$
for some positive constant $C$.
Since the right hand side of the equation (\ref{eq:raych}) is now bounded by $C \Omega^2 \epsilon^2$, the estimate (\ref{eq:rs_kappa}) is straightforward, and from this point (\ref{eq:rs_mass}) and (\ref{eq:rs_omega}) are also immediate. In particular, the remaining bootstraps are all improved, so long as $\Delta_{\mathcal{R}}$ is chosen large enough that $\Omega^2(s) < C^{-1}$ and $\epsilon$ is sufficiently small.
In order to get the difference estimates (\ref{eq:rs_diffs}), (\ref{eq:rs_phidiff}), we make yet another bootstrap assumption:
\begin{equation} \label{eq:rs_bootstrap_diffs} \tag{RS5}
|\delta r| + | \delta \dot{r} | + | \delta \log(\Omega^2) | + | \delta \tilde{A} | + \epsilon^{-1} | \delta \phi | + \epsilon^{-1} | \delta \dot{\phi} | \leq \epsilon^2.
\end{equation}
The claim is that we can then use the ODEs for differences (found by subtracting from the relevant ODE in Section \ref{sub:evol_eqs} the analogous ODE in Reissner-Nordstr\"om) to improve the RHS of (\ref{eq:rs_bootstrap_diffs}) to $C \Omega^2 \epsilon^2$, hence improving the bootstrap for $\Delta_{\mathcal{R}} > 0$ sufficiently large.
We demonstrate this for the $\delta \phi$ estimate as an illustration. Taking the differences of (\ref{eq:phi_evol}), we have
\begin{equation*}
|\delta \ddot{\phi}| \lesssim |\delta \dot{r}| |\dot{\phi}| + |\dot{r}| |\delta \dot{\phi}| + |\dot{r} \dot{\phi}| |\delta r| + |\delta \tilde{A}| |\tilde{A}| |\phi| + (\tilde{A}^2 + \Omega^2) |\delta \phi| + |\delta \Omega^2| |\phi| .
\end{equation*}
Using (\ref{eq:rs_phi})--(\ref{eq:rs_omega}) as well as (\ref{eq:rs_bootstrap_diffs}) to estimate the RHS by appropriate powers of $\epsilon$ and $\Omega^2$ ((\ref{eq:logomegadiff}) is also useful here), we see that
\begin{equation*}
|\delta \ddot{\phi}| \lesssim \Omega^2 \epsilon^3,
\end{equation*}
so that integrating this from $s = -\infty$ once and then twice, we indeed get
\begin{equation*}
|\delta \dot{\phi}| + |\delta \phi| \leq C \Omega^2 \epsilon^3
\end{equation*}
as claimed. A similar procedure can be executed for the remaining equations of Section \ref{sub:evol_eqs}. This improves (\ref{eq:rs_bootstrap_diffs}) and completes the proof of this proposition.
\end{proof}
Next, we use a Gr\"onwall argument to provide estimates in the no-shift region $\mathcal{N}$.
\begin{proposition} \label{prop:noshift}
Take $S > 0$ to be any fixed real number. Then there exists some $C(M ,\mathbf{e}, \Lambda, m^2, q_0) > 1$ and $D_N (M ,\mathbf{e}, \Lambda, m^2, q_0) > 0$ such that in the region $\mathcal{N} = \{ - \Delta_{\mathcal{R}} \leq s \leq S \}$, the following estimates hold for $\epsilon$ sufficiently small:
\begin{equation}
|\phi| + |\dot{\phi}| \leq D_N C^s \epsilon, \label{eq:ns_phi}
\end{equation}
\begin{equation}
| \kappa^{-1} - 1 | \leq D_N C^s \epsilon^2, \label{eq:ns_kappa}
\end{equation}
\begin{equation}
| Q - \mathbf{e} | \leq D_N C^s \epsilon^2, \label{eq:ns_Q}
\end{equation}
\begin{equation}
| \varpi - M | \leq D_N C^s \epsilon^2, \label{eq:ns_mass}
\end{equation}
\begin{equation}
| \delta r | + | \delta \dot{r} | + | \delta \log (\Omega^2) | + \left| \frac{d}{ds} \delta \log (\Omega^2) \right| + |\delta \tilde{A}| \leq D_N C^s \epsilon^2, \label{eq:ns_diff}
\end{equation}
\begin{equation}
| \delta \phi | + | \delta \dot{\phi} | \leq D_N C^s \epsilon^3. \label{eq:ns_phidiff}
\end{equation}
In the sequel, we only apply these estimates at $s = S$, so that allowing $D_N$ to depend also on $S$, the terms involving $C^s$ can be absorbed into $D_N$ -- indeed we define $D_N' = C^S D_N$ for convenience.
\end{proposition}
\begin{proof}
Here, we begin with some bootstrap assumptions on the geometry and the Maxwell field which clearly hold in a neighborhood of $ s = - \Delta_{\mathcal{R}}$:
\begin{equation}
|Q| \leq 2|\mathbf{e}|,
\tag{NS1} \label{eq:ns_bootstrap_Q}
\end{equation}
\begin{equation}
|\tilde{A}| \leq \frac{2|\mathbf{e}|}{r_+},
\tag{NS2} \label{eq:ns_bootstrap_gauge}
\end{equation}
\begin{equation}
|\kappa^{-1} - 1| \leq \tfrac{1}{2},
\tag{NS3} \label{eq:ns_bootstrap_raych}
\end{equation}
\begin{equation}
r \geq \tfrac{r_-}{2},
\tag{NS4} \label{eq:ns_bootstrap_r}
\end{equation}
\begin{equation}
\Omega^2 \leq 2 \Omega^2_{\max} (M, \mathbf{e}, \Lambda) \coloneqq 2 \sup_{s \in \mathbb{R}} \Omega^2_{RN}(s).
\tag{NS5} \label{eq:ns_bootstrap_lapse}
\end{equation}
Using these bootstraps and (\ref{eq:phi_evol}), it is straightforward to see that
\begin{equation*}
\left | \frac{d}{ds} (\phi, \dot{\phi}) \right | \leq C_1(M, \mathbf{e}, \Lambda, q_0, m^2) | (\phi, \dot{\phi} )|,
\end{equation*}
which immediately yields (\ref{eq:ns_phi}). Using the evolution equation (\ref{eq:Q_evol}) for $Q$ then gives (\ref{eq:ns_Q}), improving (\ref{eq:ns_bootstrap_Q}) -- in fact we have $|Q - \mathbf{e}| \lesssim C_1^{2s} \epsilon^2$.
Next, we consider the following tuple of geometric and gauge quantities
\begin{equation*}
\mathbf{X} = (\delta r, \delta \dot{r}, \delta \log \Omega^2, \tfrac{d}{ds} \delta \log \Omega^2, \delta \tilde{A}).
\end{equation*}
Taking the differences from Reissner-Nordstr\"om for the equations (\ref{eq:r_evol}), (\ref{eq:lapse_evol}), (\ref{eq:gauge_evol}), along with the bootstrap assumptions, for some $C_2(M, \mathbf{e}, \Lambda, m^2, q_0) > 0$ we get
\begin{equation}
\left| \frac{d \mathbf{X}}{ds} \right| \leq C_2 (|\mathbf{X}| + C_1^{2s} \epsilon^2).
\end{equation}
Thus using Gr\"onwall we deduce (\ref{eq:ns_diff}) for some $C$ with appropriate dependence on $C_1, C_2$. For $\epsilon$ small, this improves (\ref{eq:ns_bootstrap_gauge}), (\ref{eq:ns_bootstrap_r}), (\ref{eq:ns_bootstrap_lapse}).
Next, for $s \geq - \Delta_{\mathcal R}$, by the above we know that $\Omega^{-2} \leq C_3 e^{- 2K_- s}$ for some $C_3 (M, \mathbf{e}, \Lambda, m^2, q_0) > 0$. Hence we use the Raychaudhuri equation (\ref{eq:raych}) to estimate
\begin{equation}
| \kappa^{-1} (s) - \kappa^{-1} (- \Delta_{\mathcal{R}}) | \lesssim C_3 e^{-2 K_- s} C_1^{2s} \epsilon^2.
\end{equation}
So for $C$ chosen sufficiently large, we get (\ref{eq:ns_kappa}) and improve (\ref{eq:ns_bootstrap_raych}). The final estimate (\ref{eq:ns_phidiff}) is straightforward.
\end{proof}
\begin{corollary} \label{cor:noshift}
Let $S' > S$ be another arbitrarily chosen real number. Then in the region $S \leq s \leq S'$, so long as $\epsilon$ is sufficiently small, there exists some $D_-(M, \mathbf{e}, \Lambda, m^2, q_0, S, S') > 0$ such that:
\begin{equation}
|r(s) - r_-| \leq D_-( \epsilon^2 + e^{2K_- s}),
\end{equation}
\begin{equation}
\left| \tilde{A} + \frac{\mathbf{e}}{r_+} - \frac{\mathbf{e}}{r_-} \right| \leq D_- ( \epsilon^2 + e^{2 K_- s} ),
\end{equation}
\begin{equation}
D_-^{-1} e^{2 K_- s} \leq \Omega^2 \leq D_- e^{2K_- s},
\end{equation}
\begin{equation} \label{eq:ns_cor_phi}
|\phi - B \epsilon e^{i \omega_{RN} s} - \overline{B} \epsilon e^{-i \omega_{RN} s}| + | \dot{\phi} - i \omega_{RN} B \epsilon e^{i \omega_{RN} s} + i \omega_{RN} \overline{B} \epsilon e^{-i \omega_{RN} s}| \leq D_- (\epsilon^3 + \epsilon e^{2K_- s}),
\end{equation}
\begin{equation} \label{eq:ns_cor_lapse}
\left | \frac{d \log(\Omega^2)}{ds} - 2 K_- \right | \leq D_- (\epsilon^2 + e^{2K_- s}) \leq \frac{|K_-|}{100}.
\end{equation}
\end{corollary}
\begin{proof}
Note that the previous proposition holds in the same way with $S$ replaced by $S'$. Therefore, the corollary follows from the difference estimates of that proposition, Corollary \ref{cor:scattering}, and some further basic computations on Reissner-Nordstr\"om such as
\begin{enumerate}
\item
$ r_{RN}(s) - r_- \lesssim e^{2 K_- s} $ for $s \geq S \gg 1$.
\item
$ \Omega^2_{RN} \sim e^{2 K_- s} $ for $s \geq S \gg 1$.
\end{enumerate}
The final part of (\ref{eq:ns_cor_lapse}) clearly follows by taking $S$ large and $\epsilon$ small. In the sequel, we will take advantage of the fact that $S$ and $S'$ can be taken to be as large as required.
\end{proof}
\subsection{Estimates on the early blue-shift region} \label{sub:earlyblueshift}
The early blue-shift region $\mathcal{EB}$ is defined by $\mathcal{EB} = \{ S \leq s \leq s_{lin} = (|2 K_-|)^{-1} \log (\nu \epsilon^{-1})\}$, where $\nu(M, \mathbf{e}, \Lambda, m^2, q_0) > 0$ is a small constant to be determined later. In the blue-shift regions, we begin to exploit the fact that $\Omega^2$ is exponentially decaying in $s$.
\begin{proposition} \label{prop:earlyblueshift}
Take the quantity $S$ in Proposition \ref{prop:noshift} to be sufficiently large, and $\nu(M, \mathbf{e}, \Lambda, m^2, q_0) > 0$ to be sufficiently small. Then there exists some $D_E(M, \mathbf{e}, \Lambda, m^2, q_0) > 0$ such that the following hold:
\begin{equation}
| \delta \log (\Omega^2) | + s \left| \frac{d}{ds} \delta \log (\Omega^2) \right| \leq D_E \epsilon^2 s^2, \label{eq:eb_lapse}
\end{equation}
\begin{equation}
| \phi | + | \dot{\phi} | \leq D_E |B| \epsilon, \label{eq:eb_phi}
\end{equation}
\begin{equation}
| \delta r | s^{-1} + |\delta \dot{r} | + |\delta \tilde{A}| \leq D_E \epsilon^2, \label{eq:eb_diff}
\end{equation}
\begin{equation}
\left | \kappa^{-1} - 1 \right | \leq \tfrac{1}{100}, \label{eq:eb_kappa}
\end{equation}
\begin{equation}
| \varpi - M | + | Q - \mathbf{e} | \leq D_E \epsilon^2 s, \label{eq:eb_masscharge}
\end{equation}
\begin{equation}
|\delta \phi| + |\delta \dot{\phi}| \leq D_E \epsilon^3 s. \label{eq:eb_phidiff}
\end{equation}
\end{proposition}
\begin{proof}
Recall the quantities $D_N'$ from Proposition \ref{prop:noshift} and $B \in \C$ from the linear scattering theory. We bootstrap the following estimates:
\begin{equation}
| \delta \log \Omega^2 | \leq 4 D_N' \epsilon^2 s^2,
\tag{EB1} \label{eq:eb_bootstrap_lapse}
\end{equation}
\begin{equation}
|\phi| + \omega_{RN}^{-1}|\dot{\phi}| \leq 20 |B| \epsilon,
\tag{EB2} \label{eq:eb_bootstrap_phi}
\end{equation}
\begin{equation}
r \geq \tfrac{r_-}{2},
\tag{EB3} \label{eq:eb_bootstrap_r}
\end{equation}
\begin{equation}
|\tilde{A}| \leq \tfrac{2|\mathbf{e}|}{r_+}.
\tag{EB4} \label{eq:eb_bootstrap_gauge}
\end{equation}
To start with, note that by (\ref{eq:eb_bootstrap_lapse}) and the Reissner-Nordstr\"om asymptotics, we have that for some $C > 0$,
\begin{equation*}
C^{-1} e^{2 K_- s} \leq \Omega^2 \leq C e^{2 K_- s}.
\end{equation*}
Note also that $|\delta Q| \lesssim \epsilon^2 s$ is immediate from (\ref{eq:ns_Q}) and the bootstraps.
Next, we use (\ref{eq:eb_bootstrap_phi}) along with the Raychaudhuri equation (\ref{eq:raych}) to get the estimate on $\kappa$:
\begin{equation} \label{eq:eb_kappaestimate}
\left | \kappa^{-1} - 1 \right | \leq D_N' \epsilon^2 + C |B|^2 \epsilon^2 e^{2|K_-|s} \leq \frac{1}{100},
\end{equation}
where $\nu$ is chosen sufficiently small such that the second inequality holds. One can rewrite the left hand side of this expression as
\begin{equation*}
\left| \kappa^{-1} - 1 \right| = 4 \Omega^{-2}_{RN} | \delta \dot{r} + \dot{r} ( e^{- \delta \log \Omega^2} - 1 ) |.
\end{equation*}
Hence we can use (\ref{eq:eb_kappaestimate}) and (\ref{eq:eb_bootstrap_lapse}) to produce a preliminary estimate on $\delta \dot{r}$:
\begin{equation}
|\delta \dot{r}| \lesssim \Omega^2 | \delta \log \Omega^2 | + e^{2K_-s} \epsilon^2 + |B|^2 \epsilon^2 \lesssim e^{2K_- s} ( D_N' + D_N' s^2 ) \epsilon^2 + |B|^2 \epsilon^2.
\end{equation}
Integrating this up\footnote{
We use here a straightforward calculus lemma: given $\alpha > 0$ and $N \in \mathbb{N}$, then for $s_{0} \in \mathbb{R}$ large, $\int^{\infty}_{s_0} s^N e^{- \alpha s} \, ds \lesssim_{\alpha, N} s_0^N e^{- \alpha s_0}.$
}
from $s = S$, we get $|\delta r| \lesssim \epsilon^2 s$, or to be more specific we have the following, where $C$ depends on the black hole parameters, $m^2$ and $q_0$ but not the choice of $S$ (unlike the quantity $D_N' = C^S D_N$ which does have exponential dependence on $S$):
\begin{equation}
|\delta r| \leq C \epsilon^2 (|B|^2 s + D_N' + D_N' S^2 e^{2 K_-S}) \leq C \epsilon^2 (|B|^2 s + D_N').
\end{equation}
Here we use that $S^2 e^{2K_- S}$ is uniformly bounded for $S > 0$.
We wish to improve the bootstrap (\ref{eq:eb_bootstrap_lapse}). So we use the equation (\ref{eq:lapse_evol}) and take differences from Reissner-Nordstr\"om, leading to the following inequality
\begin{align*}
\left | \frac{d^2}{ds^2} \delta \log \Omega^2 \right |
&\lesssim \Omega^2 ( |\delta \log \Omega^2| + |\delta r| + |\delta \dot{r}| + |\delta Q| ) + |\phi|^2 + |\dot{\phi}|^2, \\
&\lesssim e^{2K_-s} (D_N' s^2 + |B|^2 s + D_N') \epsilon^2 + |B|^2 \epsilon^2.
\end{align*}
Integrating this expression up from $s = S$ twice, we get
\begin{align*}
\left| \frac{d}{ds} \delta \log \Omega^2 \right|
&\lesssim D_N' \epsilon^2 + |B|^2 \epsilon^2 s, \\
|\delta \log \Omega^2 |
&\lesssim D_N' \epsilon^2 s + |B|^2 \epsilon^2 s^2.
\end{align*}
So for $S$ sufficiently large, we improve the bootstrap (\ref{eq:eb_bootstrap_lapse}) -- note that $D_N' = C^S D_N$ grows exponentially in $S$ so that $D_N' \gg |B|^2$ for $S$ large.
Then taking differences between our ODEs and the Reissner-Nordstr\"om ODEs will lead to the estimates (\ref{eq:eb_diff}) and (\ref{eq:eb_masscharge}), thus improving upon the bootstraps (\ref{eq:eb_bootstrap_r}) and (\ref{eq:eb_bootstrap_gauge}).
We now turn to the estimates on the charged scalar field. We shall use the equation (\ref{eq:phi_evol_2}) and consider the quantity
\begin{equation}
H = r^4 \dot{\phi}^2 + r^4 q_0^2 |\tilde{A}|^2 \phi^2.
\end{equation}
As we already control $r$ and $|\tilde{A}|$ from below in this region, producing an upper bound for $H$ would give us the desired estimates on $|\phi|, |\dot{\phi}|$. Using (\ref{eq:phi_evol_2}) as well as (\ref{eq:gauge_evol}), we compute
\begin{equation*}
\frac{dH}{ds} = - \frac{m^2 \Omega^2 r^4}{2} \phi \dot{\phi} - \frac{r^2 q_0^2 \tilde{A} Q \Omega^2}{2} \phi^2 + 4 r^3 \dot{r} q_0^2 |\tilde{A}|^2 \phi^2.
\end{equation*}
The last term is negative since $\dot{r} < 0$, so we can apply Cauchy-Schwarz (again recalling the lower bounds on $r$ and $|\tilde{A}|$) to see that
\begin{equation*}
\frac{dH}{ds} \leq C_H \Omega^2 H,
\end{equation*}
for some $C_H = C_H(M, \mathbf{e}, \Lambda, m^2, q_0) > 0$. Therefore, by Gr\"onwall, so long as we pick $S$ large enough such that
\begin{equation*}
\int_S^{s_{lin}} C_H \Omega^2 \, ds \leq C C_H \int_S^{\infty} e^{2 K_- s} \, ds < \log 2,
\end{equation*}
we deduce that $H(s) \leq 2 H(S) \leq 16 r_-^4 \omega_{RN}^2 |B|^2 \epsilon^2$, where the latter estimate for $H(S)$ is computed using (\ref{eq:ns_cor_phi}). This improves (\ref{eq:eb_bootstrap_phi}).
Finally, for the difference estimate on the scalar field we use the modified quantity $\tilde{H}$, given by
\begin{equation}
\tilde{H} = r^4 | \delta \dot{\phi} |^2 + r^4 q_0^2 |\tilde{A}|^2 |\delta \phi|^2.
\end{equation}
To find the analogous differential inequality here, we first need to find the difference version of (\ref{eq:phi_evol}):
\begin{equation} \label{eq:eb_htilde}
\delta \ddot{\phi} = - \frac{2\dot{r}}{r} \delta \dot{\phi} - q_0^2 |\tilde{A}|^2 \delta \phi - \frac{m^2 \Omega^2}{4} \delta \phi + \mathcal{E}_{EB},
\end{equation}
where the error\footnote{In this expression for the error term, our estimates in the region $\mathcal{EB}$ show that both $- s \dot{r}$ and $s^2 \Omega^2_{RN}$ are $O(1)$. Nonetheless, we choose to write it in this form both for clarity and because we shall use the same expression again in the late blue-shift region, where both these are still $O(1)$ but for different reasons.} term $\mathcal{E}_{EB}$ is estimated by $|\mathcal{E}_{EB}| \lesssim \epsilon^3 ( 1 - s \dot{r} + s^2 \Omega^2_{RN})$.
Using a similar procedure as before, one can use this equation to get the differential inequality:
\begin{equation}
\frac{d\tilde{H}}{ds} \lesssim \Omega^2 \tilde{H} + \epsilon^3 (1 - s \dot{r}+ s^2 \Omega_{RN}^2) \sqrt{\tilde{H}}.
\end{equation}
Rewriting this expression as a differential inequality in terms of $\sqrt{\tilde{H}}$ and using the usual Gr\"onwall inequality with an inhomogeneous term, we get
\begin{equation*}
\sqrt{\tilde{H}} (s) \lesssim \sqrt{\tilde{H}}(S) + \int_S^s \epsilon^3 (1 - s' \dot{r}(s') + s'^2 \Omega^2_{RN}(s')) \, ds' \lesssim \epsilon^3 s
\end{equation*}
as required. This concludes the proof of (\ref{eq:eb_phidiff}) and the proposition.
\end{proof}
\subsection{Estimates on the late blue-shift region} \label{sub:lateblueshift}
To close this section, we give the estimates on the late blue-shift region. Here, we leave the regime where Cauchy stability holds, in particular showing that $\dot{r}$ remains at size $O(\epsilon^2)$ rather than continuing to decay exponentially.
\begin{proposition} \label{prop:lateblueshift}
Choose $\Delta_{\mathcal{B}} > 0$ sufficiently small, and define $b_- = \frac{2|B|\omega_{RN}}{2|K_-|}$ . Then in the region $ \mathcal{LB} = \{ s_{lin} \leq s \leq \Delta_{\mathcal{B}} \epsilon^{-1} \}$, there exists some $D_L (M, \mathbf{e}, \Lambda, m^2, q_0) > 0$ such that the following estimates hold:
\begin{equation}
| r(s) - r_- | \leq D_L \epsilon^2 s, \label{eq:lb_rdiff}
\end{equation}
\begin{equation}
\left | - \dot{r} - \frac{4 |B|^2 \omega_{RN}^2 \epsilon^2 r(s)} {2 |K_-|} \right | \leq D_L( \Omega^2 + \epsilon^4 s), \label{eq:lb_rdot}
\end{equation}
\begin{equation}
\left | \frac{d}{ds} \log (\Omega^2) - 2 K_- \right | \leq D_L \epsilon^2 s, \label{eq:lb_lapse}
\end{equation}
\begin{equation}
\left | \frac{d}{ds} \log ( r(s)^{- b_-^{-2} \epsilon^{-2}} \cdot \Omega^2 (s) ) \right | \leq D_L (\epsilon^{-2} \Omega^2 + \epsilon^2 s), \label{eq:lb_rlapse}
\end{equation}
\begin{equation}
\left | \log \left ( \frac{\Omega^2}{C_- e^{2K_-s}} \right) \right| \leq D_L \epsilon^2 s^2, \label{eq:lb_lapse_2}
\end{equation}
\begin{equation}
|\phi| + |\dot{\phi}| \leq D_L \epsilon, \label{eq:lb_phi}
\end{equation}
\begin{equation}
|Q - \mathbf{e}| s^{-1} + |\delta \tilde{A}| \leq D_L \epsilon^2, \label{eq:lb_Q}
\end{equation}
\begin{equation}
|\phi - B \epsilon e^{i \omega_{RN} s} - \overline{B} \epsilon e^{-i \omega_{RN} s}| + | \dot{\phi} - i \omega_{RN} B \epsilon e^{i \omega_{RN} s} + i \omega_{RN} \overline{B} \epsilon e^{-i \omega_{RN} s}| \leq D_L \epsilon^3 s. \label{eq:lb_phidiff}
\end{equation}
\end{proposition}
\begin{proof}
We use the following bootstrap assumptions, where the constant $B_1(M, \mathbf{e}, \Lambda, m^2, q_0) > 0$ will be described later in the proof:
\begin{equation}
\left| \frac{d}{ds} \log \Omega^2 - 2K_- \right| \leq |K_-|,
\tag{LB1} \label{eq:lb_bootstrap_lapse}
\end{equation}
\begin{equation}
|\tilde{A}| \leq \tfrac{2|\mathbf{e}|}{r_+},
\tag{LB2} \label{eq:lb_bootstrap_gauge}
\end{equation}
\begin{equation}
- \dot{r} \leq \frac{8 |B|^2 \omega_{RN}^2 r_-}{|2 K_-|} \epsilon^2,
\tag{LB3} \label{eq:lb_bootstrap_r}
\end{equation}
\begin{equation}
|Q| \leq 2|\mathbf{e}|,
\tag{LB4} \label{eq:lb_bootstrap_Q}
\end{equation}
\begin{equation}
| \delta \log \Omega^2 | \leq B_1 \epsilon^2 s^2.
\tag{LB5} \label{eq:lb_bootstrap_lapsediff}
\end{equation}
From (\ref{eq:lb_bootstrap_r}), we have the trivial estimate $|\delta \dot{r}| \lesssim \epsilon^2$, and thus $| \delta r | \lesssim \epsilon^2 s$.
Due to (\ref{eq:lb_bootstrap_lapse}), we have that in $\mathcal{LB}$,
\begin{equation*}
\int_{s_{lin}}^s \Omega^2 (s') \, ds' \leq |K_-|^{-1} \Omega^2 (s_{lin}) \lesssim \epsilon^2.
\end{equation*}
So given all the bootstrap assumptions, we may proceed exactly as in the proof of Proposition \ref{prop:earlyblueshift} to recover the estimates (\ref{eq:lb_phi}) and (\ref{eq:lb_phidiff}). Note that the error $\mathcal{E}_{EB}$ in (\ref{eq:eb_htilde}) is replaced by an error $\mathcal{E}_{LB}$, which still satisfies the same estimate
\begin{equation*}
|\mathcal{E}_{LB}| \lesssim \epsilon^3 ( 1 - s \dot{r} + s^2 \Omega^2_{RN} ) \lesssim \epsilon^3,
\end{equation*}
so the proof follows in an identical fashion.
We also get the estimate (\ref{eq:lb_Q}) just as in Proposition \ref{prop:earlyblueshift}, improving the two bootstraps (\ref{eq:lb_bootstrap_gauge}) and (\ref{eq:lb_bootstrap_Q}).
Using the difference version of (\ref{eq:omega_evol_2}), we get the inequality
\begin{equation}
\left| \frac{d^2}{ds^2} \delta \log ( r \Omega^2 ) \right| \lesssim B_1 \Omega^2 \epsilon^2 s^2 + \epsilon^2.
\end{equation}
Integrating once, we deduce that
\begin{equation} \label{eq:lb_omegaestimate}
\left| \frac{d}{ds} \delta \log (r \Omega^2) \right| \lesssim B_1 \epsilon^2 s_{lin}^2(\epsilon) \cdot \epsilon^2 + \epsilon^2 s \lesssim \epsilon^2 s.
\end{equation}
Therefore, in light of the previous estimates on $\delta r, \delta \dot{r}$, for $\epsilon$ sufficiently small we have
\begin{equation}
\left| \frac{d}{ds} \log (r \Omega^2)(s) - 2 K_- \right|
+
\left| \frac{d}{ds} \log (\Omega^2)(s) - 2 K_- \right|
\leq C \epsilon^2 s \leq \frac{|K_-|}{10},
\end{equation}
where the final inequality follows for $\Delta_{\mathcal{B}}$ taken sufficiently small.
This improves the bootstrap (\ref{eq:lb_bootstrap_lapse}), and moreover integrating (\ref{eq:lb_omegaestimate}) once again will yield $|\delta \log (r \Omega^2)| \leq C (\epsilon^2 s_{lin}^2(\epsilon) + B_1 \epsilon^4 s_{lin}^2(\epsilon) + \epsilon^2 s^2 )$, thus improving (\ref{eq:lb_bootstrap_lapsediff}) after once again accounting for $\delta r$, so long as $B_1$ is chosen sufficiently large (i.e.\ larger than $4C$ for the $C$ appearing in this expression).
It remains to improve upon the bootstrap (\ref{eq:lb_bootstrap_r}). For this, as in the uncharged scalar field model of \cite{VDM21}, we use the Raychaudhuri equation in the convenient form (\ref{eq:raych_transport}). We begin by estimating the expression involving the scalar field; using (\ref{eq:lb_phidiff}) and (\ref{eq:lb_Q}), we have
%
\begin{equation*}
|\dot{\phi}|^2 + |\tilde{A}|^2 q_0^2 |\phi|^2
=
|B \omega_{RN} \epsilon e^{i \omega_{RN}s} - \overline{B} \omega_{RN} \epsilon e^{-i \omega_{RN}s}|^2 + \omega_{RN}^2 | B \epsilon e^{i \omega_{RN} s} + \overline{B} \epsilon e^{-i\omega_{RN} s}|^2 + \mathcal{E}_B.
\end{equation*}
Expanding out the trigonometric expressions on the right, we see that this can be rewritten as
\begin{equation}
|\dot{\phi}|^2 + |\tilde{A}|^2 q_0^2 |\phi|^2
= 4 |B|^2 \omega_{RN}^2 \epsilon^2 + \mathcal{E}_B,
\end{equation}
where the error term is bounded by $|\mathcal{E}_B| \lesssim \epsilon^4 s$.
Therefore, using also (\ref{eq:lb_rdiff}) and (\ref{eq:lb_omegaestimate}), one finds that (\ref{eq:raych_transport}) may be written in the form
\begin{equation} \label{eq:lb_raych_1}
\frac{d}{ds} ( - \dot{r} ) - 2 K_- (- \dot{r} ) = 4 |B|^2 \omega_{RN}^2 r_- \epsilon^2 + \mathcal{E}_B(s),
\end{equation}
with the error $\mathcal{E}_B(s)$ once again satisfying $|\mathcal{E}_B(s)| \lesssim \epsilon^4 s$.
We now use a classical integrating factor of $e^{- 2K_-s}$ to integrate (\ref{eq:lb_raych_1}) between $s_{lin}$ and $s$, yielding
\begin{equation} \label{eq:lb_raych_2}
- \dot{r}(s) = - e^{2 K_- (s - s_{lin})} \dot{r}(s_{lin}) + \int_{s_{lin}}^s 4 |B|^2 \omega^2_{RN} r_- \epsilon^2 e^{2 K_-(s - s')} + \mathcal{E}_B(s') e^{2 K_-(s - s')} \, ds'.
\end{equation}
Using $|\mathcal{E}_B(s')| \lesssim \epsilon^4 s'$ and computing these integrals, we find that
\begin{equation} \label{eq:lb_raych_3}
\left| - \dot{r}(s) - \frac{4 |B|^2 \omega_{RN}^2 r_- \epsilon^2}{|2K_-|} \right| \lesssim e^{2K_-(s - s_{lin})} \left| - \dot{r}(s_{lin}) - \frac{4 |B|^2 \omega_{RN}^2 r_- \epsilon^2}{|2K_-|} \right| + \epsilon^4 s.
\end{equation}
From this and the prior estimates we deduce (\ref{eq:lb_rdot}), thus improving the remaining bootstrap (\ref{eq:lb_bootstrap_r}).
The final estimate (\ref{eq:lb_rlapse}) simply comes from combining (\ref{eq:lb_rdot}) and (\ref{eq:lb_lapse}).
\end{proof}
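The mechanism in the last step of the proof can be isolated in the following purely illustrative sketch (not part of the argument): dropping the error term in (\ref{eq:lb_raych_1}) leaves the linear ODE $y' - 2K_- y = c$ for $y = -\dot{r}$, whose explicit solution via the integrating factor $e^{-2K_- s}$ relaxes exponentially to the constant $c/|2K_-|$, in accordance with (\ref{eq:lb_rdot}). The numerical values of $K_-$, $c$ and $y(s_{lin})$ below are arbitrary stand-ins chosen by us for illustration.
\begin{verbatim}
# Illustrative sketch only; all numerical values are arbitrary stand-ins.
import numpy as np

K_minus = -0.8   # stands in for the negative constant K_-
c = 3.0e-4       # stands in for 4|B|^2 omega_RN^2 r_- eps^2
y0 = 1.0e-6      # stands in for -r_dot(s_lin), small after the blue-shift
s = np.linspace(0.0, 30.0, 7)
# exact solution of y' - 2 K_- y = c with y(0) = y0, via integrating factor e^{-2 K_- s}
y = np.exp(2 * K_minus * s) * y0 + (c / (2 * abs(K_minus))) * (1 - np.exp(2 * K_minus * s))
print("limit c/|2K_-| =", c / (2 * abs(K_minus)))
print("y(s) =", y)
\end{verbatim}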
We conclude this section with a straightforward corollary concerning quantities evaluated at a specific point $s = s_{O}(\epsilon) \coloneqq 50 s_{lin}(\epsilon)$ that will be useful in the next region. For ease of notation, we first define the dimensionless parameter $\mathfrak{W}(M, \mathbf{e}, \Lambda, q_0) > 0$ as:
\begin{equation} \label{eq:frakw}
\mathfrak{W} \coloneqq \sqrt{ \frac{\omega_{RN}(M, \mathbf{e}, \Lambda, q_0)}{|2 K_- (M, \mathbf{e}, \Lambda)|}}.
\end{equation}
\begin{corollary} \label{cor:lateblueshift}
Consider $ s_O=50 s_{lin}(\epsilon) \in \mathcal{LB}$. Then, defining the quantities
\begin{gather}
r_0 = r( s_O),\label{r_0.def} \\
\omega_0 = |q_0 \tilde{A}( s_O)|, \label{w_0.def} \\
\xi_0 = \omega_0 \left( - \frac{d}{ds} \left( \frac{r^2}{r_-^2 \epsilon^2} \right) \right)^{-1}( s_O), \label{xi_0.def}
\end{gather}
we have the following estimates:
\begin{equation}\label{a0.est}
|r_0 - r_-| + \log (\epsilon^{-1}) |\omega_0 - \omega_{RN}| + \left| \xi_0 - \frac{1}{8 |B|^2 \mathfrak{W}^2} \right| \leq D_L \epsilon^2 \log (\epsilon^{-1}).
\end{equation}
\end{corollary}
\section{The collapsed oscillations} \label{sec:oscillations}
From this point, the results begin to diverge from the uncharged scalar field spacetime of \cite{VDM21}. We next study the region of `collapsed oscillations' (see Section~\ref{Jo.intro}), a region where $r$ eventually becomes $O(\epsilon)$ small.
The dynamics in this section are largely driven by the charged scalar field, which remains oscillatory but also exhibits a slowly growing behaviour, in that the amplitude grows from $O(\epsilon)$ at the start of the regime to $O(1)$ at its end. Indeed, after a change of variables, the behaviour of $\phi$ will be well approximated by a \textit{Bessel function} of order $0$. Schematically, in the region $\epsilon \lesssim r \ll r_-$ the scalar field will indeed behave as \begin{equation}\label{schm}
\phi(r) \approx \frac{C \epsilon}{r}\ \cos
\left( \frac{r^2- r_-^{2}}{\epsilon^2} +O(\log(\epsilon^{-1})) \right).
\end{equation}
We will in fact need to track the growth-oscillatory behavior more precisely with the help of the known Bessel functions of order $0$, namely $J_0(x)$ and $Y_0(x)$, defined as the two standard linearly independent solutions of Bessel's equation of order $0$:
\begin{equation}
\frac{d}{dz} \left( z \frac{df}{dz} \right) + z f = 0,
\end{equation}
with the following asymptotic behavior (see also Facts \ref{fact:bessel1_taylor}, \ref{fact:bessel2_taylor} and \ref{fact:bessel_bigasymp} in Appendix \ref{sec:appendix_A} for more detailed asymptotics):
\begin{equation} \label{eq:jo_besselj}
J_0(z) =
\begin{cases}
\sqrt{ \frac{2}{\pi z} } \cos \left( z - \frac{\pi}{4} \right) + O( z^{-3/2} )
& \text{ as } z \to +\infty, \\
1 + O(z)
& \text{ as } z \to 0,
\end{cases}
\end{equation}
\begin{equation} \label{eq:jo_bessely}
Y_0(z) =
\begin{cases}
\sqrt{ \frac{2}{\pi z} } \sin \left( z - \frac{\pi}{4} \right) + O( z^{-3/2} )
& \text{ as } z \to +\infty, \\
\frac{2}{\pi} \left( \ln \left( \frac{z}{2} \right) + \gamma \right) + O(z)
& \text{ as } z \to 0.
\end{cases}
\end{equation}
Here $\gamma$ is the Euler-Mascheroni constant ($0.5772\ldots$). We see, therefore, that Bessel functions are both oscillatory and decaying\footnote{
As $z$ is decreasing in $s$ in our setting, and we are interested in behaviour as $z$ decreases, the decay of Bessel functions exhibits itself as inverse polynomial growth of the amplitude of the scalar field $\phi$.
}
at a slow inverse polynomial rate as $z \to \infty$, but $J_0(z)$ and $Y_0(z)$ exhibit very different behaviour in the $z \to 0$ regime. Therefore, it will be important to track the coefficients of each of these. In fact, we will show that the schematic estimate \eqref{schm} takes the following more precise form involving the renormalized variable $z=\frac{r^2}{\epsilon^2}$: for some frequency $\xi_0 \approx \frac{|K_-(M,\mathbf{e},\Lambda)|}{|B|^2 \cdot \omega_{RN}(M,\mathbf{e},\Lambda,q_0)}>0$:
\begin{equation*}
\phi(r) \approx C_J(\epsilon) \cdot J_0 \left( \frac{\xi_0}{r_-^2} \cdot \frac{r^2}{ \epsilon^2} \right) + C_Y(\epsilon) \cdot Y_0 \left(
\frac{\xi_0}{r_-^2} \cdot \frac{r^2}{ \epsilon^2} \right),
\end{equation*}
\begin{equation*}
C_J(\epsilon) \approx \sqrt{2\pi \xi_0 } \cdot |B| \cdot \cos( \xi_0 \cdot \epsilon^{-2}),\ C_Y(\epsilon) \approx \sqrt{2\pi \xi_0 } \cdot |B| \cdot \sin( \xi_0 \cdot \epsilon^{-2}).
\end{equation*}
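As a purely illustrative numerical check of the asymptotics (\ref{eq:jo_besselj})--(\ref{eq:jo_bessely}) quoted above (using SciPy's standard Bessel functions $J_0$, $Y_0$; the sample arguments are arbitrary choices of ours):
\begin{verbatim}
# Illustrative check only; sample arguments z = 50 and z = 1e-3 are arbitrary.
import numpy as np
from scipy.special import j0, y0

z = 50.0   # large-argument regime
print(j0(z), np.sqrt(2 / (np.pi * z)) * np.cos(z - np.pi / 4))
print(y0(z), np.sqrt(2 / (np.pi * z)) * np.sin(z - np.pi / 4))

z = 1.0e-3  # small-argument regime: J_0(z) ~ 1, Y_0(z) ~ (2/pi)(log(z/2) + gamma)
print(j0(z))
print(y0(z), (2 / np.pi) * (np.log(z / 2) + np.euler_gamma))
\end{verbatim}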
The main objective of the present section is to prove that these schematic estimates are indeed satisfied. In view of the logarithmic divergence of $Y_0(z)$ for $z\ll 1$, the precise value of the coefficient $C_Y(\epsilon)$ will be crucial for the geometry of the later regions where $r\ll \epsilon$ (see Section \ref{sec:protokasner}).
Once the asymptotics of the scalar field are understood, we also account for the scalar field \emph{backreaction} onto both the Maxwell field and the geometry. In this region, the backreaction on the spacetime geometry is minimal; however, the backreaction on the Maxwell field has a rather curious effect: through the oscillatory region the charge $Q(s)$ will decrease in absolute value from a neighborhood of $\mathbf{e}$ to a neighborhood of $Q_{\infty}$, where $Q_{\infty}(M, \mathbf{e}, \Lambda)$ depends on the black hole parameters and lies strictly between $\frac{\mathbf{e}}{2}$ and $\mathbf{e}$; see (\ref{eq:qinfty}) and Lemma \ref{lem:jo_charge_retention}. The fact that $Q_{\infty}$ is bounded away from $0$ causes the \emph{charge retention} in Theorem~\ref{thm.intro}; see Section~\ref{osc.intro}.
We now state the main result of this section.
\begin{proposition} \label{prop:oscillation}
Define the region $\mathcal{O}$ to be $\mathcal{O} = \{ s \geq s_O(\epsilon): r(s) \geq 2 |B| \mathfrak{W} \epsilon r_- \}$. Within $\mathcal{O}$, the lapse $\Omega^2$ satisfies
\begin{equation} \label{eq:osc_lapse}
\Omega^2(s) \leq \alpha_- \exp( K_- s ).
\end{equation}
There exists some $D_O(M, \mathbf{e}, \Lambda, m^2, q_0) > 0$ such that we have the following estimates for all $s \in \mathcal{O}$ (recalling that $ \mathfrak{W}$ was defined in \eqref{eq:frakw}):
\begin{equation} \label{eq:osc_r}
| - r \dot{r} (s) - 4 |B|^2 \mathfrak{W}^2 r_-^2 \epsilon^2 \omega_{RN} | \leq D_O \epsilon^4 \log \epsilon^{-1},
\end{equation}
\begin{equation} \label{eq:osc_lapse_2}
\left| \frac{d}{ds} \log \Omega^2 - 2 K_- \right| \leq D_O \epsilon^2 ( \log (\epsilon^{-1}) + r^{-2}(s) ).
\end{equation}
\begin{equation} \label{eq:osc_gauge}
| q_0 \tilde{A}(s) - q_0 \tilde{A}_{RN, \infty} | \leq D_O \epsilon^2,
\end{equation}
\begin{equation} \label{eq:osc_Q}
\left| Q(s) - \mathbf{e} + \frac{1}{4} \frac{2 K_-}{\tilde{A}_{RN, \infty}} (r_-^2 - r^2(s)) \right| \leq D_O \epsilon^2 \log (\epsilon^{-1}).
\end{equation}
In particular, if one defines $Q_{\infty}$ as
\begin{equation} \label{eq:qinfty}
Q_{\infty} (M, \mathbf{e}, \Lambda) \coloneqq \mathbf{e} - \frac{2 K_- (M, \mathbf{e}, \Lambda) \cdot r_-^2(M, \mathbf{e}, \Lambda)}{4 \tilde{A}_{RN, \infty} (M, \mathbf{e}, \Lambda)},
\end{equation}
then for $s= s_{PK}$ such that $r(s_{PK}) = 2 |B| \mathfrak{W} \epsilon r_-$ one has
\begin{equation} \label{eq:q_precise_end}
|Q(s_{PK}) - Q_{\infty}| \leq D_O \epsilon^2 \log (\epsilon^{-2}).
\end{equation}
Finally, there exist coefficients $C_J(\epsilon)$ and $C_Y(\epsilon)$, determined via the formula (\ref{eq:jo_linalg}) and satisfying (\ref{eq:bessel_j_coeff}) and (\ref{eq:bessel_y_coeff}),
such that we have the following two estimates on the scalar field: for all $s \in \mathcal{O}$
\begin{equation} \label{eq:osc_phi_1}
\left | \phi(s) - \left( C_J (\epsilon) J_0 \left( \frac{\xi_0 r^2(s)}{r_-^2 \epsilon ^2} \right) + C_Y(\epsilon) Y_0 \left( \frac{ \xi_0 r^2(s)} {r_-^2 \epsilon^2} \right)\right) \right| \leq D_O \epsilon^2 \log(\epsilon^{-1}),
\end{equation}
\begin{equation} \label{eq:osc_phi_2}
\left | \dot{\phi}(s) - \omega_0 \left( C_J (\epsilon) J_1 \left( \frac{\xi_0 r^2(s)}{r_-^2 \epsilon^2} \right) + C_Y(\epsilon) Y_1 \left( \frac{\xi_0 r^2(s)}{r_-^2 \epsilon^2} \right) \right) \right | \leq D_O \epsilon^2 \log(\epsilon^{-1}).
\end{equation}
Here $\xi_0$ and $\omega_0$ are as defined in Corollary \ref{cor:lateblueshift}.
\end{proposition}
This section will be organized into three parts. In Section \ref{sub:osc_prelim} we introduce the main bootstrap assumptions for the region $\mathcal{O}$, and make several preliminary observations. In Section \ref{sub:bessel}, we derive the main scalar field estimates, establishing the aforementioned Bessel type behaviour.
In Section \ref{sub:osc_qomega}, we use the results on the scalar field to improve the bootstrap assumptions, proving in particular the estimates (\ref{eq:osc_lapse}), (\ref{eq:osc_lapse_2}) and (\ref{eq:osc_Q}) regarding $\Omega^2(s)$ and $Q(s)$,
and then complete the proof of Proposition \ref{prop:oscillation}.
\subsection{Bootstraps and preliminary estimates} \label{sub:osc_prelim}
For the oscillatory region $\mathcal{O}$ of the spacetime as defined in Proposition \ref{prop:oscillation}, we make reference to the following $3$ bootstrap assumptions:
\begin{equation}
\Omega^2 \leq \epsilon^{40},
\tag{O1} \label{eq:jo_bootstrap_lapse}
\end{equation}
\begin{equation}
|\phi| + |\dot{\phi}| \leq \epsilon^{-1},
\tag{O2} \label{eq:jo_bootstrap_phi}
\end{equation}
\begin{equation}
|Q| \leq \epsilon^{-2}.
\tag{O3} \label{eq:jo_bootstrap_Q}
\end{equation}
By Proposition \ref{prop:lateblueshift}, we see that these will hold within a neighbourhood of $ s = s_O= 50 s_{lin}(\epsilon)$. We now make some preliminary estimates using these bootstraps and Proposition \ref{prop:lateblueshift}.
Morally, the following lemma will allow us to treat $- r \dot{r}$ and $\tilde{A}$ as constant inside $\mathcal{O}$, at least up to an extremely small error.
\begin{lemma} \label{lemma:jo_geometry}
Assuming the bootstraps (\ref{eq:jo_bootstrap_lapse})--(\ref{eq:jo_bootstrap_Q}), there exists some $s_{PK}$ satisfying $s_{PK} \lesssim \epsilon^{-2}$ such that $r(s_{PK}) = 2 |B| \mathfrak{W} \epsilon r_-$, i.e.\ $\mathcal{O} = \{ s_O \leq s \leq s_{PK}\}$.
Furthermore, there will exist some constant $D_O(M, \mathbf{e}, \Lambda, m^2, q_0) > 0$ such that we have
\begin{equation} \label{eq:jo_rd}
\left |\frac{d}{ds} (-r \dot{r}) \right |+ \left |r\dot{r}(s)- r\dot{r}(s_{O})\right | \leq D_O \epsilon^{30},
\end{equation}
\begin{equation} \label{eq:jo_r}
\left| -r\dot{r}(s) - 4 |B|^2 \mathfrak{W}^2 \omega_{RN} r_-^2 \epsilon^2 \right| \leq D_O \epsilon^4 \log(\epsilon^{-1}),
\end{equation}
\begin{equation} \label{eq:jo_gauge}
|q_0\tilde{A} (s) - q_0 \tilde{A} ( s_O )| \leq D_O \epsilon^{30},
\end{equation}
\begin{equation} \label{eq:jo_gauge_2}
||q_0 \tilde{A}| (s) - \omega_{RN}| \leq D_O \epsilon^2.
\end{equation}
\end{lemma}
\begin{proof}
These estimates are immediate from the equations (\ref{eq:r_evol_2}) and (\ref{eq:gauge_evol}). Indeed, letting $f$ be either $-r \dot{r}$ or $\tilde{A}$, we see that
\begin{equation*}
\left| \frac{df}{ds} \right| \leq \Omega^2 \cdot P ( |\phi|, |Q|, |\dot{\phi}|, r^{-2})
\end{equation*}
for some polynomial $P$ of degree less than $2$.
But then (\ref{eq:jo_bootstrap_lapse}) provides the large power of $\epsilon$, and the remaining bootstraps plus the fact we are in a region where $r \gtrsim \epsilon$ give the estimates (\ref{eq:jo_rd}) and (\ref{eq:jo_gauge}). The other two estimates then follow straightforwardly from Proposition \ref{prop:lateblueshift}, (\ref{eq:jo_rd}) and (\ref{eq:jo_gauge}).
As $- r \dot{r} \gtrsim \epsilon^2$, we also deduce that $s_{PK} \lesssim \epsilon^{-2}$, since $r^2(s)$ is decreasing from $r_-^2 + O(\epsilon^2 \log(\epsilon^{-1}))$ to $O(\epsilon^2)$.
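Quantitatively, since $- \frac{d}{ds} r^2 = - 2 r \dot{r} \gtrsim \epsilon^2$ throughout $\mathcal{O}$ by (\ref{eq:jo_r}), one has
\begin{equation*}
s_{PK} - s_O \leq \frac{ r^2(s_O) - r^2(s_{PK}) }{ \inf_{[s_O, s_{PK}]} \left( - \tfrac{d}{ds} r^2 \right) } \lesssim \frac{r_-^2}{\epsilon^2}.
\end{equation*}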
\end{proof}
We use Lemma \ref{lemma:jo_geometry} to rewrite several of the equations of Section \ref{sub:evol_eqs} as simplified equations with constant coefficients, plus an error term which is small enough to ignore. To simplify notation further, we denote
\begin{equation} \label{eq:xlambda}
x = x(s) \coloneqq \frac{r^2(s)}{r_-^2 \epsilon^2}, \; \text{ so that } \; \xi_0 = \left | \left(- \frac{dx}{ds} \right)^{-1} \cdot q_0 \tilde{A} \right| (s = s_O).
\end{equation}
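Note for later use that, by (\ref{eq:jo_r}) and (\ref{eq:jo_gauge_2}),
\begin{equation*}
- \frac{dx}{ds} = \frac{- 2 r \dot{r}}{r_-^2 \epsilon^2} = 8 |B|^2 \mathfrak{W}^2 \omega_{RN} + O( \epsilon^2 \log(\epsilon^{-1}) ), \quad \text{ so that } \quad \xi_0 = \frac{1}{8 |B|^2 \mathfrak{W}^2} + O( \epsilon^2 \log(\epsilon^{-1}) );
\end{equation*}
in particular, $\xi_0$ is bounded above and below independently of $\epsilon$.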
We then proceed to rewrite (\ref{eq:phi_evol_2}) and (\ref{eq:Q_evol}) in terms of the new variable $x$. Performing this change of variables, one gets the two equations:
\begin{equation} \label{eq:jo_phi_evol}
\frac{d}{dx} \left( x \frac{d\phi}{dx} \right) + \xi_0^2 x \phi = \mathcal{E}_{\phi},
\end{equation}
\begin{equation} \label{eq:jo_q_evol}
\frac{dQ}{dx} = \sgn(\mathbf{e}) q_0 r_-^2 \xi_0 \epsilon^2 x |\phi|^2 + \mathcal{E}_{Q}.
\end{equation}
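Observe that, once the error term $\mathcal{E}_{\phi}$ is discarded, the rescaling $z = \xi_0 x$ turns (\ref{eq:jo_phi_evol}) into
\begin{equation*}
\frac{d}{dz} \left( z \frac{d \phi}{dz} \right) + z \phi = 0, \quad \text{i.e.} \quad z^2 \frac{d^2 \phi}{dz^2} + z \frac{d \phi}{dz} + z^2 \phi = 0,
\end{equation*}
which is Bessel's equation of order $0$, whose solution space is spanned by $J_0(z)$ and $Y_0(z)$; this is the structure exploited in Section \ref{sub:bessel}.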
We should like to estimate the error terms $\mathcal{E}_{\phi}$ and $\mathcal{E}_Q$. To do so, we apply Lemma \ref{lemma:jo_geometry} extensively to prove the following corollary.
\begin{corollary} \label{cor:jo_errors}
Define $x$ and $\xi_0$ as in (\ref{eq:xlambda}). Then one finds
\begin{equation} \label{eq:jo_lambda}
\left| \xi_0 - \left(- \frac{dx}{ds}\right)^{-1} \cdot |q_0 \tilde{A}| (s) \right| \leq D_O \epsilon^{20}.
\end{equation}
Furthermore, in the equations (\ref{eq:jo_phi_evol}) and (\ref{eq:jo_q_evol}), one has the following control on the error terms:
\begin{equation*}
|\mathcal{E}_{\phi}| + |\mathcal{E}_{Q}| \leq D_O \epsilon^{20}.
\end{equation*}
\end{corollary}
\subsection{Scalar field oscillations} \label{sub:bessel}
The focus of this subsection will be understanding the behaviour of the \textit{charged} scalar field $\phi(s)$ via the equation (\ref{eq:jo_phi_evol}). This equation is a Bessel type ODE that we would like to understand as $x$ decreases from $\epsilon^{-2} + O(\log (\epsilon^{-1}))$ to $r^2(s_{PK}) ( r_-^2 \epsilon^2)^{-1} = 4 |B|^2 \mathfrak{W}^2$.
The asymptotic behaviour and the corresponding estimates are summarized in the proposition below.
\begin{proposition} \label{prop:oscillation_phi}
Assuming for now the bootstraps (\ref{eq:jo_bootstrap_lapse})-(\ref{eq:jo_bootstrap_Q}), in the region $\mathcal{O}$ there exists a constant $D_O>0$, and coefficients $C_J(\epsilon)$ and $C_Y(\epsilon)$ such that\footnote{
For a definition of the Bessel functions $J_1(z)$ and $Y_1(z)$, see Appendix \ref{sec:appendix_A} and Fact \ref{fact:besselrelation}.
}
\begin{equation} \label{eq:jo_phi}
| \phi - C_J(\epsilon) J_0( \xi_0 x ) - C_Y(\epsilon) Y_0 (\xi_0 x) | \leq D_O \epsilon^{10},
\end{equation}
\begin{equation} \label{eq:jo_phi_x}
\left | \frac{d\phi}{dx} + \xi_0 C_J(\epsilon) J_1( \xi_0 x ) + \xi_0 C_Y(\epsilon) Y_1 (\xi_0 x) \right | \leq D_O \epsilon^{10}.
\end{equation}
Defining $x_{\mathcal{B}} = x(s_O)$, the coefficients $C_J(\epsilon)$ and $C_Y(\epsilon)$ are determined by
\begin{equation} \label{eq:jo_linalg}
\begin{bmatrix}
C_J(\epsilon) \\ C_Y(\epsilon)
\end{bmatrix}
=
\frac{\pi x_{\mathcal{B}}}{2}
\begin{bmatrix}
- \xi_0 Y_1(\xi_0 x_{\mathcal{B}}) & - Y_0(\xi_0 x_{\mathcal{B}}) \\
\xi_0 J_1(\xi_0 x_{\mathcal{B}}) & J_0(\xi_0 x_{\mathcal{B}})
\end{bmatrix}
\begin{bmatrix}
\phi (x_{\mathcal{B}}) \\ \tfrac{d \phi}{d x}(x_{\mathcal{B}})
\end{bmatrix}
.
\end{equation}
%
The coefficients obey the estimates
\begin{equation} \label{eq:bessel_j_coeff}
\left| C_J(\epsilon) - \frac{\sqrt{\pi}}{2} \mathfrak{W}^{-1} \cos (\Theta(\epsilon)) \right| \leq D_O \epsilon^2 \log (\epsilon^{-1}),
\end{equation}
\begin{equation} \label{eq:bessel_y_coeff}
\left| C_Y(\epsilon) - \frac{\sqrt{\pi}}{2} \mathfrak{W}^{-1} \sin (\Theta(\epsilon)) \right| \leq D_O \epsilon^2 \log (\epsilon^{-1}),
\end{equation}
where the phase function $\Theta(\epsilon)$ is given by:
\begin{equation} \label{eq:bessel_coeff_phase}
\Theta(\epsilon) = \left . |q_0 \tilde{A}|(s) \cdot r^2(s) \cdot \left( - \frac{d}{ds} r^2(s) \right)^{-1} + \omega_{RN} s + \arg B - \frac{\pi}{4} \right|_{s = s_O}.
\end{equation}
Finally, one has the following upper bounds for $\phi(s)$ and $\dot{\phi}(s)$:
\begin{equation} \label{eq:jo_phi_upperbound}
\max\{|\phi|, \omega^{-1}_{RN} | \dot{\phi} | \} \leq \frac{100 |B| \epsilon r_-}{r}.
\end{equation}
\end{proposition}
\begin{proof}
Ignoring the error term $\mathcal{E}_{\phi}$, the equation (\ref{eq:jo_phi_evol}) is exactly Bessel's equation of order $0$ after a rescaling $z = \xi_0 x$. We should like to separate the analysis of the linear behaviour arising from Bessel's equation and analysis of the error term.
For this purpose, we recast the equation (\ref{eq:jo_phi_evol}) as a first-order linear evolution equation in the variables $(\phi, \frac{d \phi}{dx})$, treating the error term as an inhomogeneity. We get the following (see also the discussion preceding Lemma \ref{lem:bessel_solution}):
\begin{equation} \label{eq:firstorder}
\frac{d}{dx} \begin{bmatrix} \phi(x) \\ \frac{d \phi}{dx} (x) \end{bmatrix}
=
\begin{bmatrix} 0 & 1 \\ -\xi_0^2 & - \frac{1}{x} \end{bmatrix} \begin{bmatrix} \phi(x) \\ \frac{d \phi}{dx}(x) \end{bmatrix} + \begin{bmatrix} 0 \\ \frac{1}{x} \mathcal{E}_{\phi} \end{bmatrix}.
\end{equation}
The analysis of this first-order system then follows from Lemma \ref{lem:bessel_solution}, or more precisely Corollary \ref{cor:bessel_solution_rescale}.
Following these, we aim to write the solution using Duhamel's principle; let $\mathbf{S}(z_1; z_0)$ be the linear solution operator for the usual Bessel's equation defined in Lemma \ref{lem:bessel_solution}, and $\mathbf{S}_{\xi_0}(x_1; x_0) \coloneqq \mathbf{Q}_{\xi_0} \circ \mathbf{S}(\xi_0 x_1; \xi_0 x_0) \circ \mathbf{Q}_{\xi_0}^{-1}$ as in Corollary \ref{cor:bessel_solution_rescale}, where $\mathbf{Q}_{\chi}: \mathbb{R}^2 \to \mathbb{R}^2$ is the linear stretching operator $\mathbf{Q}_{\chi}: (A, B) \mapsto (A, \chi B)$.
Then following Corollary \ref{cor:bessel_solution_rescale}, $\mathbf{S}_{\xi_0}(x_1; x_0)$ can be interpreted as the linear solution (semigroup) operator for the first-order system (\ref{eq:firstorder}), without the inhomogeneous term $\mathcal{E}_{\phi}$.
Defining $x_{\mathcal{B}}$ as in the statement of the proposition, we have from Corollary \ref{cor:lateblueshift} that $|x_{\mathcal{B}} - \epsilon^{-2}| \lesssim \log(\epsilon^{-1})$.
Duhamel's principle (i.e.\ Corollary \ref{cor:bessel_solution_rescale}) applied to first-order systems\footnote{
Note that in this section, we are always integrating backwards in $x$. Nonetheless, the standard theory of first order systems will still apply, with the usual convention that $\int_{x_0}^{x_1} = - \int_{x_1}^{x_0}$.
} gives that
\begin{equation} \label{eq:jo_bessel_inhomog}
\begin{bmatrix}
\phi \\ \frac{d \phi}{dx}
\end{bmatrix}
(x) =
\mathbf{S}_{\xi_0}(x; x_{\mathcal{B}})
\begin{bmatrix}
\phi \\ \frac{d \phi}{dx}
\end{bmatrix}
(x_{\mathcal{B}})
+
\bigintsss_{x_{\mathcal{B}}}^x \mathbf{S}_{\xi_0}(x; \tilde{x})
\begin{bmatrix}
0 \\ \frac{1}{\tilde{x}} \mathcal{E}_{\phi}(\tilde{x})
\end{bmatrix}
\, d \tilde{x} .
\end{equation}
We define:
\begin{equation}
\begin{bmatrix} \phi_B \\ \frac{d \phi_B}{dx} \end{bmatrix} (x) \coloneqq
\mathbf{S}_{\xi_0}(x; x_{\mathcal{B}}) \begin{bmatrix} \phi \\ \frac{d\phi}{dx} \end{bmatrix} (x_{\mathcal{B}}), \hspace{0.5cm}
\begin{bmatrix} \phi_e \\ \frac{d \phi_e}{dx} \end{bmatrix} (x) \coloneqq
\bigintsss_{x_{\mathcal{B}}}^x \mathbf{S}_{\xi_0}(x; \tilde{x})
\begin{bmatrix}
0 \\ \frac{1}{\tilde{x}} \mathcal{E}_{\phi}(\tilde{x})
\end{bmatrix}
\, d \tilde{x} .
\end{equation}
Thus $\phi_B(x)$ represents exactly the solution of the homogeneous linear Bessel ODE with data given at $x = x_{\mathcal{B}}$, while $\phi_e(x)$ is to be treated as an error term due to the small inhomogeneity. To treat $\phi_B(x)$ and $\phi_e(x)$, we shall use parts (\ref{bessel_uno'}) and (\ref{bessel_dos'}) of Corollary \ref{cor:bessel_solution_rescale} respectively.
As $\phi_B(x)$ must be a solution to (a rescaled) Bessel's equation, the solution must be exactly (using Fact \ref{fact:besselrelation} to justify the appearance of $J_1(\xi_0 x)$ and $Y_1(\xi_0 x)$ in the derivatives):
\begin{equation} \label{eq:jo_bessel_exact}
\phi_B(x) = C_J J_0(\xi_0 x) + C_Y Y_0(\xi_0 x),
\end{equation}
\begin{equation} \label{eq:jo_bessel_exact2}
\frac{d \phi_B}{d x}(x) = - \xi_0 C_J J_1(\xi_0 x) - \xi_0 C_Y Y_1(\xi_0 x).
\end{equation}
In these equations, the coefficients $C_J = C_J(\epsilon)$ and $C_Y = C_Y(\epsilon)$ must be chosen such that at $x = x_{\mathcal{B}}$, $\phi_{B}(x)$ agrees with $\phi(x)$ up to first order, i.e.\ $\phi(x_{\mathcal{B}})=\phi_B(x_{\mathcal{B}})$ and $\frac{d\phi}{dx}(x_{\mathcal{B}})=\frac{d\phi_B}{dx}(x_{\mathcal{B}})$. (It is immediate from the definition that $\phi_e(x)$ vanishes to first order at $x = x_{\mathcal{B}}$.) Part (\ref{bessel_uno'}) of Corollary \ref{cor:bessel_solution_rescale} then implies (\ref{eq:jo_linalg}).
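Concretely, the matrix inverted here has determinant $\xi_0 \left( J_1 Y_0 - J_0 Y_1 \right)(\xi_0 x_{\mathcal{B}}) = \frac{2}{\pi x_{\mathcal{B}}}$, by the standard Wronskian-type identity $J_1(z) Y_0(z) - J_0(z) Y_1(z) = \frac{2}{\pi z}$ (cf.\ Lemma \ref{lem:bessel_wronskian}); this is the origin of the prefactor $\frac{\pi x_{\mathcal{B}}}{2}$ in (\ref{eq:jo_linalg}).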
In order to deal with $\phi_e(x)$, we recall part (\ref{bessel_dos'}) of Corollary \ref{cor:bessel_solution_rescale}. For all $\epsilon$ small, $\xi_0$ is uniformly bounded above and below due to (\ref{eq:jo_lambda}); hence the operator norm of the rescaled solution operator satisfies $$\| \mathbf{S}_{\xi_0}(x_1; x_0) \|_{l^2 \to l^2} \lesssim \max \left \{ 1, \frac{x_0}{x_1} \right \},$$
with an implicit constant independent of $\epsilon$. We are always integrating backwards in the variable $x$, i.e.\ $x \leq \tilde{x}$ in (\ref{eq:jo_bessel_inhomog}), hence we can see that, defining the $l^{\infty}$ norm on $\mathbb{R}^2$ as $\|(x,y)\|_{l^{\infty}}= \max\{|x|,|y|\}$:
\begin{equation*}
\left \| \mathbf{S}_{\xi_0} (x, \tilde{x}) \begin{bmatrix} 0 \\ \frac{1}{\tilde{x}} \mathcal{E}_{\phi}(\tilde{x}) \end{bmatrix} \right \|_{l^{\infty}}
\lesssim \frac{1}{x} \sup |\mathcal{E}_{\phi}| \lesssim \epsilon^{20},
\end{equation*}
where we use that $\max \{ 1, \tilde{x}/x \} \cdot \tilde{x}^{-1} = x^{-1}$ for $\tilde{x} \geq x$, together with Corollary \ref{cor:jo_errors} and the lower bound $x \geq 4 |B|^2 \mathfrak{W}^2$ in the last step. Furthermore, the length of the interval of integration in (\ref{eq:jo_bessel_inhomog}) is bounded by $x_{\mathcal{B}} - x \leq x_{\mathcal{B}} \leq 2 \epsilon^{-2}$, so we have
\begin{equation} \label{eq:jo_bessel_error}
\left \| \begin{bmatrix} \phi_e(x) \\ \frac{d \phi_e}{dx} (x) \end{bmatrix} \right \|_{l^{\infty}} = \left \|
\bigintsss_{x_{\mathcal{B}}}^x \mathbf{S}_{\xi_0}(x; \tilde{x})
\begin{bmatrix}
0 \\ \frac{1}{\tilde{x}} \mathcal{E}_{\phi}(\tilde{x})
\end{bmatrix}
\, d \tilde{x} \right \|_{l^{\infty}} \lesssim \epsilon^{10}
\end{equation}
as required. Combining (\ref{eq:jo_bessel_exact}), (\ref{eq:jo_bessel_exact2}) and (\ref{eq:jo_bessel_error}) then gives both (\ref{eq:jo_phi}) and (\ref{eq:jo_phi_x}).
The most computationally intensive part of this proof is recovering the precise form of the coefficients $C_J(\epsilon), C_Y(\epsilon)$. We carefully compute $C_J(\epsilon)$; the other coefficient $C_Y(\epsilon)$ follows in an analogous manner. By (\ref{eq:jo_linalg}), we have the formula
\begin{equation} \label{eq:jo_cj}
C_J(\epsilon) = \frac{\pi x_{\mathcal{B}}}{2}
\left[ - \xi_0 Y_1 (\xi_0 x_{\mathcal{B}}) \phi (x_{\mathcal{B}}) - Y_0 (\xi_0 x_{\mathcal{B}}) \frac{d \phi}{d x} (x_{\mathcal{B}}) \right].
\end{equation}
We therefore obtain a precise expression for $C_J(\epsilon)$ using the large-$z$ asymptotics for the Bessel function $Y_{\nu}(z)$, and the scalar field estimates at the comparison point $s = s_O$ given by (\ref{eq:lb_phidiff}) in Proposition \ref{prop:lateblueshift}. By Fact \ref{fact:bessel_bigasymp} and $x_{\mathcal{B}} = \epsilon^{-2} + O(\log(\epsilon^{-1}))$,
\begin{equation} \label{eq:jo_coeff_1}
Y_0 (\xi_0 x_{\mathcal{B}}) = \sqrt{ \frac{2}{\pi \xi_0 x_{\mathcal{B}}}} \sin \left( \xi_0 x_{\mathcal{B}} -\frac{\pi}{4} \right) + O(\epsilon^3),
\end{equation}
\begin{equation} \label{eq:jo_coeff_2}
Y_1 (\xi_0 x_{\mathcal{B}}) = - \sqrt{ \frac{2}{\pi \xi_0 x_{\mathcal{B}}}} \cos \left( \xi_0 x_{\mathcal{B}} -\frac{\pi}{4} \right) + O(\epsilon^3).
\end{equation}
On the other hand, writing $B$ as $B = |B| (\cos \arg B + i \sin \arg B)$ in (\ref{eq:lb_phidiff}) gives for $\phi(x_{\mathcal{B}}) = \phi(s)|_{s = 50s_{lin}}$:
\begin{equation} \label{eq:jo_coeff_3}
\phi(x_{\mathcal{B}}) = 2 |B| \epsilon \cos ( \omega_{RN} s + \arg B)|_{s = s_O} + O(\epsilon^3 \log (\epsilon^{-1})).
\end{equation}
%
For $\frac{d \phi}{dx}$, we proceed in several steps, using in particular Lemma \ref{lemma:jo_geometry}, \eqref{eq:jo_lambda}, \eqref{eq:xlambda} and \eqref{a0.est}:
\begin{align}
\frac{d \phi}{d x} (x_{\mathcal{B}}) &= \left. \left( \frac{dx}{ds} \right)^{-1} \dot{\phi} \, \right|_{s = s_O} \nonumber \\[0.5em]
&= - 2 |B| \epsilon \, \omega_{RN} \left. \left( \frac{dx}{ds} \right)^{-1} \sin(\omega_{RN} s + \arg B) \right|_{s = s_O} + O(\epsilon^3 \log(\epsilon^{-1})) \nonumber \\[0.5em]
&= + 2 |B| \epsilon \, \omega_{RN} (8 |B|^2 \mathfrak{W}^2 \omega_{RN})^{-1} \sin (\omega_{RN} s + \arg B)|_{s = s_O} + O(\epsilon^3 \log(\epsilon^{-1})) \nonumber \\[0.5em]
&= 2 |B| \epsilon \, \xi_0 \sin (\omega_{RN} s + \arg B )|_{s = s_O} + O(\epsilon^3 \log(\epsilon^{-1})). \label{eq:jo_coeff_4}
\end{align}
Substituting all of (\ref{eq:jo_coeff_1}), (\ref{eq:jo_coeff_2}), (\ref{eq:jo_coeff_3}), (\ref{eq:jo_coeff_4}) into (\ref{eq:jo_cj}), one deduces that
\begin{equation}
C_J(\epsilon) = \frac{\pi x_{\mathcal{B}}}{2} \sqrt{ \frac{2 \xi_0}{\pi x_{\mathcal{B}}}} \cdot 2 |B| \epsilon \cos \left( \xi_0 x_{\mathcal{B}} - \frac{\pi}{4} + \omega_{RN} \cdot s_O + \arg B \right) + O(\epsilon^2 \log(\epsilon^{-1})).
\end{equation}
%
To conclude, one uses Corollary \ref{cor:lateblueshift} and $x_{\mathcal{B}} = \epsilon^{-2} + O(\log (\epsilon^{-1}))$ to yield, since $\sqrt{8 \xi_0} |B| = \mathfrak{W}^{-1} + O(\epsilon \log(\epsilon^{-1}))$:
\begin{equation}
C_J(\epsilon) =
\frac{\sqrt{\pi}}{2} \mathfrak{W}^{-1}
\cos (\Theta(\epsilon)) + O(\epsilon^2 \log(\epsilon^{-1})),
\end{equation}
where by the definition of $\xi_0$ in Corollary \ref{cor:lateblueshift}, $\Theta(\epsilon)$ is as in (\ref{eq:bessel_coeff_phase}).
%
Finally, to obtain the upper bound (\ref{eq:jo_phi_upperbound}), note first of all that $\xi_0 x \geq 4 \xi_0 |B|^2 \mathfrak{W}^2 \geq 1/4$ by Corollary \ref{cor:lateblueshift}. Since one can check (see also the numerical verification recorded after this proof) that for $z \geq \frac{1}{4}$,
\begin{equation*}
\max \{ |J_0(z)|, |J_1(z)|, |Y_0(z)|, |Y_1(z)| \} \leq 10 z^{-1/2},
\end{equation*}
one can use (\ref{eq:jo_phi}) and (\ref{eq:jo_phi_x}) along with (\ref{eq:bessel_j_coeff}) and (\ref{eq:bessel_y_coeff}) to deduce that
\begin{equation}
\max \left \{ |\phi(x)|, \xi_0^{-1} \left| \frac{d \phi}{dx} \right| \right\} \leq \sqrt{\pi} \, \mathfrak{W}^{-1} \cdot 10 (\xi_0 x)^{-1/2} + O(\epsilon^2 \log(\epsilon^{-1})) \leq \frac{60 |B| \epsilon r_-}{r}.
\end{equation}
Then one can use Lemma \ref{lemma:jo_geometry} to translate the $x$-derivative to an $s$-derivative with negligible error to arrive at (\ref{eq:jo_phi_upperbound}). Note, in particular, that this improves the bootstrap (\ref{eq:jo_bootstrap_phi}) for $\epsilon$ small.
\end{proof}
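For the reader's convenience, we record a quick numerical sanity check of the crude Bessel bound $\max \{ |J_0(z)|, |J_1(z)|, |Y_0(z)|, |Y_1(z)| \} \leq 10 z^{-1/2}$, $z \geq \tfrac{1}{4}$, used in the last step of the above proof. The snippet below is an illustration only and is not part of the argument; it assumes the availability of NumPy and of SciPy's Bessel routines \texttt{j0}, \texttt{j1}, \texttt{y0}, \texttt{y1}, and checks the bound on a finite grid (for $z$ beyond the grid the bound follows at once from the large-$z$ asymptotics of Fact \ref{fact:bessel_bigasymp}).
\begin{verbatim}
# Numerical sanity check (illustration only, not part of the proof):
# verify max{|J0|,|J1|,|Y0|,|Y1|}(z) <= 10 * z^(-1/2) on a grid z in [1/4, 500].
import numpy as np
from scipy.special import j0, j1, y0, y1

z = np.linspace(0.25, 500.0, 2_000_000)
worst = max(np.max(np.sqrt(z) * np.abs(f(z))) for f in (j0, j1, y0, y1))
print(f"max of sqrt(z)*|B(z)| over the grid: {worst:.3f} (the bound requires <= 10)")
assert worst <= 10.0
\end{verbatim}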
\subsection{Precise estimates for $Q$ and $\Omega^2$} \label{sub:osc_qomega}
It remains to estimate $Q$ and $\Omega^2$ in the region $\mathcal{O}$, and hence to close the remaining bootstraps (\ref{eq:jo_bootstrap_Q}) and (\ref{eq:jo_bootstrap_lapse}). We shall show that $Q$ approaches (a value close to) the quantity $Q_{\infty}$, while $\Omega^2$ remains exponentially decaying in $s$.
\begin{proposition} \label{prop:jo_qomega}
Assuming as usual the bootstraps (\ref{eq:jo_bootstrap_lapse})--(\ref{eq:jo_bootstrap_Q}), for $s \in \mathcal{O}$ we have the following estimates for $Q(s)$ and $\Omega^2(s)$ (allowing $D_O$ to be larger if necessary):
\begin{equation} \label{eq:jo_q_precise}
\left| Q(s) - \mathbf{e} + \frac{1}{4} \frac{2 K_-}{\tilde{A}_{RN, \infty}}(r_-^2 - r^2(s)) \right| \leq D_O \epsilon^2 \log(\epsilon^{-1}) ,
\end{equation}
\begin{equation} \label{eq:jo_lapse}
\left| \frac{d}{ds} \log \Omega^2 - 2K_- \right| \leq D_O \epsilon^2 \left( \log (\epsilon^{-1}) + r^{-2}(s) \right).
\end{equation}
\end{proposition}
\begin{proof}
We shall prove (\ref{eq:jo_q_precise}) using the equation (\ref{eq:jo_q_evol}) and Proposition \ref{prop:oscillation_phi}. By the $\mathcal{LB}$ estimates of Proposition \ref{prop:lateblueshift}, we know $|Q (x_{\mathcal{B}}) - \mathbf{e}| \lesssim \epsilon^2 \log(\epsilon^{-1})$, so it remains to estimate the integral:
\begin{equation} \label{eq:jo_integral_estimate}
\sgn(\mathbf{e}) \int_{x_{\mathcal{B}}}^x q_0 r_-^2 \xi_0 \epsilon^2 \tilde{x} |\phi|^2 (\tilde{x}) \, d\tilde{x}.
\end{equation}
Firstly, using (\ref{eq:jo_phi}), (\ref{eq:bessel_j_coeff}), (\ref{eq:bessel_y_coeff}), and the Bessel function asymptotics in (\ref{eq:jo_besselj}), (\ref{eq:jo_bessely}), one may find that
\begin{equation}
\left| \phi - \sqrt{\frac{1}{2 \xi_0 x}} \mathfrak{W}^{-1} \cos (\xi_0 x - \pi/4 - \Theta(\epsilon)) \right| \lesssim \epsilon^2 \log(\epsilon^{-1}) x^{-1/2} + x^{-3/2},
\end{equation}
which we may use to deduce the estimate
\begin{equation}
\left| \xi_0 x \phi^2 - \frac{\mathfrak{W}^{-2}}{2} \cos^2 (\xi_0 x - \pi/4 - \Theta(\epsilon)) \right| \lesssim \epsilon^2 \log(\epsilon^{-1}) + x^{-1}.
\end{equation}
Therefore, recalling that $\mathfrak{W}^{-2} = \frac{|2 K_-|}{\omega_{RN}} = \frac{|2 K_-|}{q_0 |\tilde{A}_{RN,\infty}|}$ by definition, we can integrate this to estimate (\ref{eq:jo_integral_estimate}) by
\begin{multline}
\left| \int_{x_{\mathcal{B}}}^x q_0 r_-^2 \xi_0 \epsilon^2 \tilde{x} |\phi|^2 (\tilde{x}) \, d\tilde{x} - \frac{2 |K_-| r_-^2}{2 |\tilde{A}_{RN, \infty}|} \epsilon^2
\int_{x_\mathcal{B}}^x \cos^2 (\xi_0 \tilde{x} - \pi/4 - \Theta(\epsilon)) \, d\tilde{x} \right| \\ \lesssim
\epsilon^2 ( \epsilon^2 \log(\epsilon^{-1}) x_{\mathcal{B}} + \log(x_{\mathcal{B}} / x) ) \lesssim \epsilon^2 \log(\epsilon^{-1}).
\end{multline}
Then integrating $\cos^2(\xi_0 x - \pi/4 - \Theta(\epsilon)) = \frac{1}{2}(1 + \cos(2\xi_0 x - \pi/2 - 2 \Theta(\epsilon)))$ in the usual manner, and recalling that $x = (\frac{r}{r_-})^2 \epsilon^{-2}$, one sees that
\begin{equation}
\left| \int_{x_{\mathcal{B}}}^x q_0 r_-^2 \xi_0 \epsilon^2 \tilde{x} |\phi|^2 (\tilde{x}) \, d\tilde{x}
- \frac{1}{4} \frac{2 |K_-|}{|\tilde{A}_{RN, \infty}|} (r^2(x) - r^2(x_{\mathcal{B}}) ) \right| \lesssim \epsilon^2 \log(\epsilon^{-1}).
\end{equation}
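Here we used the elementary primitive
\begin{equation*}
\int_{x_{\mathcal{B}}}^{x} \cos^2 \left( \xi_0 \tilde{x} - \tfrac{\pi}{4} - \Theta(\epsilon) \right) d \tilde{x} = \frac{x - x_{\mathcal{B}}}{2} + \frac{1}{4 \xi_0} \left[ \sin \left( 2 \xi_0 \tilde{x} - \tfrac{\pi}{2} - 2 \Theta(\epsilon) \right) \right]_{\tilde{x} = x_{\mathcal{B}}}^{\tilde{x} = x}.
\end{equation*}
The oscillatory boundary term is bounded by $(2 \xi_0)^{-1} = O(1)$ and is therefore absorbed into the $\epsilon^2 \log(\epsilon^{-1})$ error after multiplication by the $O(\epsilon^2)$ prefactor, while the first term produces exactly the $r^2$ difference upon multiplication by $r_-^2 \epsilon^2$.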
By replacing $r^2(x_{\mathcal{B}})$ by $r_-^2$ (which carries an error of $O(\epsilon^2 \log (\epsilon^{-1}))$), we get the estimate (\ref{eq:jo_q_precise}) -- to ensure we have the right sign in (\ref{eq:jo_q_precise}), recall that $\mathbf{e}$ and $\tilde{A}_{RN, \infty}$ have opposite signs, while $2K_-$ is negative.
We now move on to the estimates for $\Omega^2$. We return to the original $s$-coordinate, consider (\ref{eq:omega_evol_2}), and integrate by parts using (\ref{eq:phi_evol_2}) as follows, where we use (\ref{eq:jo_bootstrap_lapse}) to control error terms involving $\Omega^2$:
\begin{align}
\left| \frac{d}{ds} \log( r \Omega^2) - 2K_- \right| (s)
&\lesssim \left| \frac{d}{ds} \delta \log (r\Omega^2) \right| ( s_O) + \left | \int_{ s_O}^s (- |\dot{\phi}|^2 + q_0^2 |\tilde{A}|^2 |\phi|^2 )\, ds \right | + \epsilon^{30} \nonumber \\[0.5em]
&\lesssim \epsilon^2 \log(\epsilon^{-1}) + |\phi \dot{\phi} (s)| + |\phi \dot{\phi} ( s_O)| + \left| \int_{ s_O}^s \phi (\ddot{\phi} + q_0^2 |\tilde{A}|^2 \phi) \, ds\right| \nonumber \\[0.5em]
&\lesssim \epsilon^2 \log(\epsilon^{-1}) + |\phi \dot{\phi} (s)| + \left| \int_{ s_O}^s \frac{- \dot{r} \phi \dot{\phi}}{r} \, ds\right|. \label{eq:jo_lapse_final}
\end{align}
By (\ref{eq:jo_phi_upperbound}) in Proposition \ref{prop:oscillation_phi}, in the region $\mathcal{O}$ we have
\begin{equation*}
| \phi \dot{\phi} (s) | \leq \frac{10^4 |B|^2 \omega_{RN} \epsilon^2 r_-^2}{r^2}.
\end{equation*}
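In particular, since $\dot{r} < 0$, the integral term appearing in (\ref{eq:jo_lapse_final}) can be evaluated directly:
\begin{equation*}
\left| \int_{ s_O}^s \frac{- \dot{r} \, \phi \dot{\phi}}{r} \, ds \right| \lesssim \epsilon^2 r_-^2 \int_{ s_O}^s \frac{- \dot{r}}{r^3} \, ds = \frac{\epsilon^2 r_-^2}{2} \left( \frac{1}{r^2(s)} - \frac{1}{r^2( s_O)} \right) \lesssim \epsilon^2 r^{-2}(s).
\end{equation*}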
Inserting these bounds into (\ref{eq:jo_lapse_final}), one arrives at the estimate (\ref{eq:jo_lapse}), at least after applying (\ref{eq:jo_r}) once more. Indeed, one has
\begin{equation*}
\left| \frac{d}{ds} \log \Omega^2 - 2 K_- \right| \leq \left| \frac{d}{ds} \log(r \Omega^2) - 2 K_- \right| + \frac{ - \dot{r}}{r} \lesssim \epsilon^2 \left[ \log (\epsilon^{-1}) + r^{-2}(s) \right].
\end{equation*}
This completes the proof of the proposition.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:oscillation}]
Suppose the bootstrap assumptions (\ref{eq:jo_bootstrap_lapse}), (\ref{eq:jo_bootstrap_Q}), (\ref{eq:jo_bootstrap_phi}) hold in $[ s_O=50 s_{lin}, s^*] \subset \mathcal{O}$. Then the conclusions of Lemma \ref{lemma:jo_geometry} and Propositions \ref{prop:oscillation_phi} and \ref{prop:jo_qomega} hold. In particular, (\ref{eq:jo_phi_upperbound}) and (\ref{eq:jo_q_precise}) show that (\ref{eq:jo_bootstrap_phi}) and (\ref{eq:jo_bootstrap_Q}) are indeed improved in the bootstrap region. It remains only to improve (\ref{eq:jo_bootstrap_lapse}).
For this purpose, we simply need to integrate (\ref{eq:jo_lapse}). The idea is that due to (\ref{eq:jo_r}), we know that $\epsilon^2 r^{-2} \lesssim - \dot{r}/r$, hence due to (\ref{eq:lb_lapse_2}), we have
\begin{align}
| \log \Omega^2 (s) - 2 K_- s - \log C_- |
&\lesssim \epsilon^2 \log(\epsilon^{-1})^2 + \int^s_{ s_O} \left( \epsilon^2 \log (\epsilon^{-1}) - \frac{\dot{r}}{r} \right) \nonumber \\[0.5em]
&\lesssim \epsilon^2 \log(\epsilon^{-1}) s + \log \frac{r( s_O)}{r(s)}.
\label{eq:jo_lapse_final2}
\end{align}
In fact, the final term on the right hand side is also bounded by $\epsilon^2 \log(\epsilon^{-1}) s$. To see this, note that if $s$ is such that $r(s) \geq r_- / 2$, then (\ref{eq:jo_r}) implies that
\begin{equation*}
\log \frac{r( s_O)}{r(s)} = \int_{ s_O}^s \frac{-\dot{r}(\tilde{s})}{r(\tilde{s})} \, d \tilde{s} \lesssim \int_{ s_O}^s \frac{\epsilon^2}{r(\tilde{s})^2} \, d\tilde{s} \lesssim \epsilon^2 s.
\end{equation*}
On the other hand, if $r(s) \leq r_-/2$ then (\ref{eq:jo_r}) implies that $s \gtrsim \epsilon^{-2}$, hence we have $\log (r( s_O) / r(s)) \lesssim \log (\epsilon^{-1}) \lesssim \epsilon^2 \log(\epsilon^{-1}) s$.
Hence (\ref{eq:jo_lapse_final2}) implies that
\begin{equation}
\Omega^2 \leq C_- e^{(2 K_- + D_O \epsilon^2 \log(\epsilon^{-1}))s},
\end{equation}
which clearly implies (\ref{eq:osc_lapse}) for $\epsilon$ chosen small enough, since $K_- < 0$ gives $2 K_- + D_O \epsilon^2 \log(\epsilon^{-1}) \leq K_-$ for such $\epsilon$. Moreover, since $e^{K_- \cdot s_O(\epsilon)} \lesssim \epsilon^{50}$, we have improved the final bootstrap (\ref{eq:jo_bootstrap_lapse}), and we are allowed to extend all the way to $s =s_{PK}$ such that $r(s_{PK}) = 2 |B| \mathfrak{W} \epsilon r_-$. The remaining parts of the proposition are straightforward.
\end{proof}
Note that substituting $s = s_{PK}$ into (\ref{eq:jo_q_precise}) and noting that $r(s_{PK})^2 \lesssim \epsilon^2$, one immediately gets (\ref{eq:q_precise_end}). Remarkably, the estimate (\ref{eq:q_precise_end}) shows that the spacetime exhibits a nonzero yet controlled \textit{discharge} within the oscillatory region $\mathcal{O}$. We conclude this section with a (purely algebraic) lemma revealing that the final charge $Q_{\infty}(M, \mathbf{e}, \Lambda)$ lies strictly between $\mathbf{e}$ and $\mathbf{e}/2$.
\begin{lemma} \label{lem:jo_charge_retention}
Define $Q_{\infty}$ as in \eqref{eq:qinfty}.
Then one has the following alternative form for $Q_{\infty}$:
\begin{equation} \label{eq:qinfty2}
Q_{\infty} = \frac{3}{4} \mathbf{e} + \frac{\Lambda r_-^2 r_+ (2 r_- + r_+)}{ 12 \mathbf{e}}.
\end{equation}
From this, we make the following observations regarding $Q_{\infty}$:
\begin{enumerate}[(i)]
\item \label{case:lambda0}
If $\Lambda = 0$, then $Q_{\infty} = \frac{3}{4} \mathbf{e}$.
\item \label{case:lambda<}
If $\Lambda<0$, then $Q_{\infty}$ lies strictly between $\frac{1}{2} \mathbf{e}$ and $\frac{3}{4} \mathbf{e}$. Furthermore, as $(M, \mathbf{e}, \Lambda)$ varies across the sub-extremal parameter space $\mathcal{P}_{se}^{\Lambda \leq 0} \setminus \{ \Lambda = 0 \}$, $Q_{\infty} / \mathbf{e}$ achieves all values in $(\frac{1}{2}, \frac{3}{4})$.
\item \label{case:lambda>}
If $\Lambda > 0$, then $Q_{\infty}$ lies strictly between $\frac{3}{4} \mathbf{e}$ and $\mathbf{e}$. Furthermore, as $(M, \mathbf{e}, \Lambda)$ varies across the sub-extremal parameter space $\mathcal{P}_{se}^{\Lambda > 0}$, $Q_{\infty} / \mathbf{e}$ achieves all values in $(\frac{3}{4}, 1)$.
\end{enumerate}
In particular, in all cases, one has $|Q_{\infty}| > \frac{1}{2}|\mathbf{e}| > 0$.
\end{lemma}
\begin{proof}
To get (\ref{eq:qinfty2}), we need to find a clean expression for $2 K_-$ in terms of the black hole parameters. For this purpose, recall the polynomial (\ref{eq:rn_polynomial}), and define the function $f(X)$ as
\begin{equation*}
f(X) = X^{-2} P_{M, \mathbf{e}, \Lambda}(X) = X^{-2} (r_+ - X) (r_- - X) \left( \tfrac{\mathbf{e}^2}{r_+ r_-} - \tfrac{\Lambda}{3} (r_+ + r_-) X - \tfrac{\Lambda}{3} X^2 \right),
\end{equation*}
from which we may alternatively define the surface gravity $2K_-$ as:
\begin{equation*}
2 K_-(M, \mathbf{e}, \Lambda) = f'(X) |_{X = r_-} =r_-^{-2} (r_- - r_+) \left( \tfrac{\mathbf{e}^2}{r_+r_-} - \tfrac{\Lambda}{3} r_-(r_+ + r_-) - \tfrac{\Lambda}{3} r_-^2 \right).
\end{equation*}
Recalling once more that $\tilde{A}_{RN, \infty} (M, \mathbf{e}, \Lambda) = \frac{\mathbf{e}}{r_+} - \frac{\mathbf{e}}{r_-} = \frac{\mathbf{e}(r_- - r_+)}{r_+ r_-}$, we therefore get (\ref{eq:qinfty2}).
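Spelling out the algebra: the factors $(r_- - r_+)$ cancel, and since $- \tfrac{\Lambda}{3} r_- (r_+ + r_-) - \tfrac{\Lambda}{3} r_-^2 = - \tfrac{\Lambda}{3} r_- (r_+ + 2 r_-)$ one finds
\begin{equation*}
\frac{2 K_- \, r_-^2}{4 \tilde{A}_{RN, \infty}} = \frac{r_+ r_-}{4 \mathbf{e}} \left( \frac{\mathbf{e}^2}{r_+ r_-} - \frac{\Lambda}{3} r_- (r_+ + 2 r_-) \right) = \frac{\mathbf{e}}{4} - \frac{\Lambda r_-^2 r_+ (2 r_- + r_+)}{12 \mathbf{e}},
\end{equation*}
so that (\ref{eq:qinfty}) indeed reduces to (\ref{eq:qinfty2}).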
Once we have (\ref{eq:qinfty2}), the case (\ref{case:lambda0}) is immediate. The first statement of (\ref{case:lambda>}) is also immediate, since assuming without loss of generality that $\mathbf{e} > 0$, if $\Lambda > 0$ then (\ref{eq:qinfty2}) gives $Q_{\infty} > \frac{3}{4} \mathbf{e}$ while (\ref{eq:qinfty}) gives $Q_{\infty} < \mathbf{e}$ (recall indeed that $\tilde{A}_{RN,\infty}$ and $\mathbf{e}$ have opposite signs). For the $\Lambda < 0$ case, we require one final observation, namely
\begin{equation*}
0= r_+^{-1} P_{M, \mathbf{e}, \Lambda} (r_+) - r_-^{-1} P_{M, \mathbf{e}, \Lambda} (r_-) = \mathbf{e}^2 \left( \frac{1}{r_+} -\frac{1}{ r_-}\right) + (r_+ - r_-) - \frac{\Lambda}{3} (r_+^3 - r_-^3).
\end{equation*}
Dividing by $r_+ - r_-$ and multiplying by $r_+ r_-$, this gives
\begin{equation*}
\tfrac{\Lambda}{3}r_+ r_- (r_+^2 + r_+ r_- + r_-^2) = - \mathbf{e}^2 + r_+ r_-.
\end{equation*}
In particular, in the case $\Lambda < 0$, we then have
\begin{equation*}
0 < - \tfrac{\Lambda}{3} r_-^2 r_+ (2 r_- + r_+) < - \tfrac{\Lambda}{3} r_+ r_- (r_+^2 + r_+ r_- + r_-^2) < \mathbf{e}^2.
\end{equation*}
Substituting this into (\ref{eq:qinfty2}), we get the first statement of (\ref{case:lambda<}).
To prove the second statements of (\ref{case:lambda<}), (\ref{case:lambda>}), we first state without proof several facts about the sub-extremal parameter spaces. Without loss of generality, we fix $\mathbf{e}$ to be positive. Then letting $\mathcal{P}^+_{se} = ( \mathcal{P}_{se}^{\Lambda \leq 0} \cup \mathcal{P}_{se}^{\Lambda > 0} ) \cap \{ \mathbf{e} > 0 \}$ be the space of subextremal parameters with positive $\mathbf{e}$,
\begin{itemize}
\item
$\mathcal{P}^+_{se}$ is an open, connected set in $\mathbb{R}^3$.
\item
Define the set $\mathcal{P}_{ex}^{\Lambda \leq 0}$ to be the set of $(M, \mathbf{e}, \Lambda) \in \mathbb{R}^+ \times \mathbb{R} \times (- \infty, 0]$ such that the polynomial $P_{M, \mathbf{e}, \Lambda}(X)$ has a single repeated positive root $X = R$, and $\mathcal{P}_{ex}^{\Lambda > 0}$ to be the set of $(M, \mathbf{e}, \Lambda) \in \mathbb{R}^+ \times \mathbb{R} \times (0, + \infty)$ such that the polynomial $P_{M, \mathbf{e}, \Lambda}(X)$ has two positive roots $X = R$ and $X = R_C$, with $X = R$ having multiplicity $2$ and $R < R_C$. Then, defining $\mathcal{P}^+_{ex} = ( \mathcal{P}_{ex}^{\Lambda \leq 0} \cup \mathcal{P}_{ex}^{\Lambda > 0}) \cap \{ \mathbf{e} > 0 \}$, $\mathcal{P}_{ex}^+$ is a subset of the boundary $\partial \mathcal{P}^+_{se}$.
\item
For $(M, \mathbf{e}, \Lambda) \in \mathcal{P}_{se}^+$, one must have $ - \infty < \Lambda M^2 < \frac{2}{9}$, i.e.\ we are constrained to have $\Lambda M^2 < \frac{2}{9}$ in order for there to exist a choice of $\mathbf{e} > 0$ such that the polynomial $P_{M, \mathbf{e}, \Lambda}(X)$ has the correct number of distinct positive roots. Moreover, all these values are achieved when restricting to the extremal case i.e.\ $\{ \Lambda M^2: (M, \mathbf{e}, \Lambda) \in \mathcal{P}_{ex}^+ \} = (- \infty, \frac{2}{9})$.
\item
The functions $r_+(M, \mathbf{e}, \Lambda)$ and $r_-(M, \mathbf{e}, \Lambda)$ are continuous in $\mathcal{P}_{se}^+$. Furthermore, they extend continuously to $\mathcal{P}_{ex}^+ \subset \partial \mathcal{P}_{se}^+$ so long as we define $r_- = r_+ = R$ on this set.
\end{itemize}
In light of these facts, the expression $Q_{\infty}$ will itself be a continuous function of $(M, \mathbf{e}, \Lambda)$ in $\mathcal{P}_{se}^+$, and furthermore, we can continuously extend $Q_{\infty}$ to the space of extremal parameters $\mathcal{P}_{ex}^+$, where we have
\begin{equation} \label{eq:qinfty3}
Q_{\infty} = \frac{3}{4} \mathbf{e} + \frac{\Lambda R^4}{4 \mathbf{e}}.
\end{equation}
We would like to estimate $\Lambda R^4$. For extremal parameters, we must have $P_{M, \mathbf{e}, \Lambda} (R) = \frac{d}{dX}P_{M, \mathbf{e}, \Lambda} (R) = 0$, from which we can get the two identities\footnote{These will arise from $\frac{d}{dX}P_{M,\mathbf{e},\Lambda}(R) = 0$ and $\frac{d}{dX}(X^{-4} P_{M, \mathbf{e},\Lambda})(R) = 0$ respectively.}:
\begin{equation} \label{eq:rel1}
- 2 \Lambda R^3 + 3 R - 3 M = 0,
\end{equation}
\begin{equation} \label{eq:rel2}
R^2 - 3 M R + 2 \mathbf{e}^2 = 0.
\end{equation}
Using (\ref{eq:rel1}) and then (\ref{eq:rel2}) in (\ref{eq:qinfty3}), we then get
\begin{equation} \label{eq:qinfty4}
Q_{\infty} = \frac{3}{4} \mathbf{e} \left( 1 + \frac{R (R - M)}{2 \mathbf{e}^2} \right) = \frac{3}{4} \mathbf{e} \left( 1 + \frac{R - M}{3 M - R} \right)= \frac{3}{2} \mathbf{e} \cdot \frac{1}{3 - R/M}.
\end{equation}
It remains to work out the range of values that $R/M$ can take. We rearrange (\ref{eq:rel1}) to get the equation:
\begin{equation}
1 - \frac{R}{M} + \frac{2}{3} \Lambda M^2 \left ( \frac{R}{M} \right )^3 = 0.
\end{equation}
As $\Lambda M^2$ varies between $- \infty$ and $\frac{2}{9}$, the positive root of this cubic corresponding to $R/M$ (for $\Lambda \leq 0$ it is the unique positive root, while for $\Lambda > 0$ it is the smaller of the two positive roots) varies continuously between $0$ and $\frac{3}{2}$, not including the endpoints. (Note that if $\Lambda M^2 > \frac{2}{9}$, then in fact there are no positive roots! This is why $\Lambda M^2$ must be upper bounded.)
Therefore, as $\Lambda M^2$ and thus $R/M$ vary within their allowed ranges, from (\ref{eq:qinfty4}) we have that $Q_{\infty}$ is allowed to take every value between $\mathbf{e}/2$ and $\mathbf{e}$, not including the endpoints.
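As a quick consistency check of the endpoint behaviour: at $\Lambda M^2 = 0$ the root is $R/M = 1$, so (\ref{eq:qinfty4}) gives $Q_{\infty} = \frac{3}{4} \mathbf{e}$, matching case (\ref{case:lambda0}); at $\Lambda M^2 = \frac{2}{9}$ one checks that $1 - \frac{3}{2} + \frac{4}{27} \left( \frac{3}{2} \right)^3 = 0$, so $R/M = \frac{3}{2}$ and $Q_{\infty} \to \mathbf{e}$; and as $\Lambda M^2 \to - \infty$ one has $R/M \to 0$ and hence $Q_{\infty} \to \frac{1}{2} \mathbf{e}$.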
Of course, this computation was done for extremal parameters in $\mathcal{P}_{ex}^+$ rather than the subextremal parameters in $\mathcal{P}_{se}^+$. But $\mathcal{P}_{se}^+$ is connected, so it is straightforward to show that indeed $\{ Q_{\infty}/\mathbf{e} : (M, \mathbf{e}, \Lambda) \in \mathcal{P}_{se}^+ \} = ( \frac{1}{2}, 1 )$ also, which together with the strict inequalities established above yields the second statements of (\ref{case:lambda<}) and (\ref{case:lambda>}). This concludes the proof of Lemma \ref{lem:jo_charge_retention}.
\end{proof}
\paragraph{Estimate of $s_{PK}$:} A posteriori, we have shown that $\mathcal{O}= \{ s_O= 50 s_{lin} \leq s \leq s_{PK} \}$ for some $s_{PK}> s_O$ such that $r(s_{PK})=2 |B| \mathfrak{W} \epsilon r_-$. Note that integrating (\ref{eq:jo_r}) in the region $\mathcal{O}$, one gets:
\begin{equation}
\left| \frac{1}{2} r^2 (s_O) - \frac{1}{2} r^2(s_{PK}) - 4 |B|^2 \mathfrak{W}^2 \omega_{RN} r_-^2 \epsilon^2 (s_{PK} - s_O) \right| \lesssim \epsilon^2 \log(\epsilon^{-1}),
\end{equation}
which simplifies after applying the estimates of Proposition \ref{prop:lateblueshift} to
\begin{equation} \label{eq:s0}
\left| r_-^2 - 8 |B|^2 \mathfrak{W}^2 \omega_{RN} r_-^2 \epsilon^2 s_{PK}\right| \lesssim \epsilon^2 \log( \epsilon^{-1} ).
\end{equation}
\section{The Proto-Kasner Region} \label{sec:protokasner}
\subsection{Estimates beyond the oscillatory region -- statement of Proposition~\ref{prop:oscillation+}} \label{sub:osc_kasner}
In the previous section we considered the region $\mathcal{O} = \{ s \geq s_O : r(s) \geq 2 |B| \mathfrak{W} \epsilon r_- \}=\{ s_O \leq s \leq s_{PK} \}$, where $s_O(\epsilon)=50s_{lin}$ and where $s_{PK}$ (recall $s_{PK}=O(\epsilon^{-2})$) was defined such that $r(s_{PK})=2 |B| \mathfrak{W} \epsilon r_- $. The reason it was necessary to end this region at $r \sim \epsilon r_-$ was twofold:
\begin{enumerate}
\item \label{block1}
Considering the Bessel function form of $\phi$,
\begin{equation*}
\phi = C_J (\epsilon) J_0 \left( \frac{\xi_0 r^2}{r_-^2 \epsilon^2} \right) + C_Y (\epsilon) Y_0 \left( \frac{\xi_0 r^2}{r_-^2 \epsilon^2} \right) + \text{error},
\end{equation*}
the Bessel functions $J_0(z), Y_0(z)$ change behaviour from oscillatory (at large $z$) to convergent or logarithmically growing (at small $z$) once $r^2(s) \leq \xi_0^{-1} r_-^2 \epsilon^2 \approx 8 |B|^2 \mathfrak{W}^2 \epsilon^2 r_-^2$; see (\ref{eq:jo_besselj}), (\ref{eq:jo_bessely}).
\item \label{block2}
We encountered many error terms using the `small' term $\Omega^2(s)$ to dominate polynomial powers of $r^{-1}$ and $|\phi|$. At the start of the region $\mathcal{O}$, i.e.\ $s = s_O= 50 s_{lin}$, we only had that $\Omega^2 \lesssim \epsilon^{100}$, so in order for the errors to be controlled we required $r^{-1}$ to be at worst an inverse power of $\epsilon$.
\end{enumerate}
Nevertheless, we will show that we may extend many of the important estimates beyond $s = s_{PK}$ to a region $\mathcal{PK} = \{ s \geq s_{PK}: r(s) \geq e^{- \delta_0 \epsilon^{-2}} r_-\}$, which for reasons that will later become clear we denote the \textit{proto-Kasner} region. Here the dimensionless constant $\delta_0(M, \mathbf{e}, \Lambda, q_0)$ is selected to be (recall that $b_- = \frac{2 |B| \omega_{RN}}{|2 K_-|}= 2|B| \mathfrak{W}^2$)
\begin{equation} \label{eq:delta0}
\delta_0 \coloneqq \frac{1}{80} |B(M, \mathbf{e}, \Lambda)|^{-2} \, \mathfrak{W}^{-4}(M, \mathbf{e}, \Lambda, q_0) = \frac{1}{20} b_-^{-2}.
\end{equation}
In the region $\mathcal{PK}$, we will overcome the difficulty \ref{block1} by instead using new bootstrap assumptions that reflect the now monotonic behaviour of $\phi$ and $\dot{\phi}$. The more fundamental difficulty \ref{block2} is dealt with by using the fact that $\Omega^2$ now satisfies, schematically, $\Omega^2 \lesssim \exp(K_- s) \lesssim \exp(-O(\epsilon^{-2}))$ when $s$ is of order $\epsilon^{-2}$ (which corresponds to the sub-region where $r$ is particularly small, $r\approx\exp( -O(\epsilon^{-2}))$).
\begin{proposition} \label{prop:oscillation+}
Choose $\delta_0(M, \mathbf{e}, \Lambda, q_0) > 0$ as in (\ref{eq:delta0}). Then in the region $\mathcal{PK} = \{ 2 |B| \mathfrak{W} \epsilon r_- \geq r(s) \geq e^{- \delta_0 \epsilon^{-2}} r_- \}$, there exists some $D_{PK}(M, \mathbf{e}, \Lambda, m^2, q_0) > 0$ such that one has the following exponential behaviour for the lapse $\Omega^2(s)$:
\begin{equation} \label{eq:pk_lapse}
\Omega^2(s) \leq D_{PK} \exp( K_- s ) \leq
D_{PK} \exp( - 50 \delta_0 \epsilon^{-2}).
\end{equation}
Recalling the quantity $Q_{\infty}$ as defined in (\ref{eq:qinfty}), we have the following for all $s\in \mathcal{PK}$:
\begin{equation} \label{eq:pk_r}
\left| - r \dot{r}(s) - 4 |B|^2 \mathfrak{W}^2 \omega_{RN} r_-^2 \epsilon^2 \right| \leq D_{PK} \epsilon^4 \log(\epsilon^{-1}),
\end{equation}
\begin{equation} \label{eq:pk_gauge}
\left| q_0 \tilde{A}(s) - q_0 \tilde{A}_{RN, \infty} \right| \leq D_{PK} \epsilon^2,
\end{equation}
\begin{equation} \label{eq:pk_Q}
| Q(s) - Q_{\infty} | \leq D_{PK} \epsilon^2 \log(\epsilon^{-1}).
\end{equation}
Using these, we define the quantities $\omega_K$ and $\xi_K$ by
\begin{gather}
\omega_K \coloneqq | q_0 \tilde{A} | (s_{PK}) = \omega_{RN} + O(\epsilon^2), \label{eq:omegak} \\[0.5em] \xi_K \coloneqq \omega_K \left( - \frac{d}{ds} \frac{r^2}{r_-^2 \epsilon^2} \right)^{-1} (s_{PK}) = \frac{1}{8 |B|^2 \mathfrak{W}^2} + O(\epsilon^2 \log(\epsilon^{-1})). \label{eq:xik}
\end{gather}
For the scalar field $\phi$, there will exist coefficients $C_{JK}(\epsilon)$ and $C_{YK}(\epsilon)$ obeying, for $C_J(\epsilon)$ and $C_Y(\epsilon)$ as in Proposition \ref{prop:oscillation_phi}, the estimates $|C_{JK}(\epsilon) - C_J(\epsilon)| + |C_{YK}(\epsilon) - C_Y(\epsilon)| \leq D_{PK}\epsilon^2 \log(\epsilon^{-1})$, such that for all $s\in \mathcal{PK}$:
\begin{equation} \label{eq:pk_phi_1}
\left | \phi(s) - \left( C_{JK} (\epsilon) J_0 \left( \frac{\xi_K r^2(s)}{r_-^2 \epsilon ^2} \right) + C_{YK}(\epsilon) Y_0 \left( \frac{ \xi_K r^2(s)} {r_-^2\epsilon^2} \right)\right) \right| \leq D_{PK} \epsilon^2 \log(\epsilon^{-1}),
\end{equation}
\begin{equation} \label{eq:pk_phi_2}
\left | \dot{\phi}(s) - \omega_K \left( C_{JK} (\epsilon) J_1 \left( \frac{\xi_K r^2(s)}{r_-^2\epsilon^2} \right) + C_{YK}(\epsilon) Y_1 \left( \frac{\xi_K r^2(s)}{r_-^2 \epsilon^2} \right) \right) \right | \leq D_{PK} \epsilon^2 \log(\epsilon^{-1}).
\end{equation}
Moreover, the following upper bounds for $\phi$ and $\dot{\phi}$ are satisfied for all $s \in \mathcal{PK}$:
\begin{equation} \label{eq:obar_phi}
|\phi(s)| \leq 100 \, \mathfrak{W}^{-1} \left( 1 + \log \left( \frac{8 |B|^2 \mathfrak{W}^2 r_-^2 \epsilon^2}{r^2 }\right)\right),
\end{equation}
\begin{equation} \label{eq:obar_phid}
|\dot{\phi}| \leq 100 \, \omega_{RN} \cdot \frac{8 |B|^2 \mathfrak{W} r_-^2 \epsilon^2}{r^2}.
\end{equation}
\end{proposition}
\begin{remark}
Both the statement and the proof of Proposition \ref{prop:oscillation+} will show large similarities to Proposition \ref{prop:oscillation} in Section \ref{sec:oscillations}. One notes, however, that in order to obtain optimal error estimates it is required to adjust the quantities $\xi_0$ and $\omega_0$, which are defined as the values of certain quantities evaluated at $s = s_O = 50 s_{lin}$, to the quantities $\xi_K$ and $\omega_K$, which are the values of the same quantities but evaluated at $s = s_{PK}$ instead.
\end{remark}
\subsection{Proof of Proposition \ref{prop:oscillation+}} \label{sub:proof*}
In light of the results we wish to find, we make reference to the following $4$ bootstrap assumptions in the region $\mathcal{PK}$; indeed, $3$ of these are part of the conclusion of Proposition \ref{prop:oscillation+}:
\begin{equation} \label{eq:pk_bootstrap_lapse} \tag{PK1}
\Omega^2 \leq C_- e^{K_- s} \leq C_- \exp( - 50 \delta_0 \epsilon^{-2}),
\end{equation}
\begin{equation} \label{eq:pk_bootstrap_phi} \tag{PK2}
|\phi(s)| \leq 100 \, \mathfrak{W}^{-1} \left( 1 + \log \left( \frac{8 |B|^2 \mathfrak{W}^2 r_-^2 \epsilon^2}{r^2 }\right)\right),
\end{equation}
\begin{equation} \label{eq:pk_bootstrap_phi_s} \tag{PK3}
|\dot{\phi}| \leq 100 \, \omega_{RN} \cdot \frac{8 |B|^2 \mathfrak{W} r_-^2 \epsilon^2}{r^2}.
\end{equation}
\begin{equation} \label{eq:pk_bootstrap_Q} \tag{PK4}
|Q| \leq 2 |\mathbf{e} |.
\end{equation}
It is clear following Proposition \ref{prop:oscillation} that the bootstraps (\ref{eq:pk_bootstrap_phi}), (\ref{eq:pk_bootstrap_phi_s}), (\ref{eq:pk_bootstrap_Q}) hold in a neighborhood of $s=s_{PK}$.
%
The second part of \eqref{eq:pk_bootstrap_lapse} follows from \eqref{eq:s0}: indeed, for $\epsilon$ chosen sufficiently small and recalling the definition (\ref{eq:delta0}) of $\delta_0$, \eqref{eq:s0} gives
\begin{equation} \label{eq:s0lower}
| K_- | s_{PK} \geq \frac{2 |K_-|}{\omega_{RN}} \frac{1}{2 |B|^2 \mathfrak{W}^2} \geq \frac{1}{2 |B|^2 \mathfrak{W}^4} = 50 \delta_0 \epsilon^{-2},
\end{equation} as required.
We now proceed in largely the same manner as in the region $\mathcal{O}$ (Section~\ref{sec:oscillations}): always assuming the bootstraps, we begin as in Section~\ref{sub:osc_prelim} with some preliminary estimates on $- r \dot{r}$ and $\tilde{A}$, then as in Section~\ref{sub:bessel} we use these together with the equation (\ref{eq:phi_evol_2}) written in Bessel form to find precise asymptotics for the scalar field $\phi$. Finally, one uses these Bessel asymptotics to close the estimates for $Q$ and $\Omega^2$ as in Section~\ref{sub:osc_qomega}. We shall aim to be rather terse, as the modifications relative to the analysis of the region $\mathcal{O}$ are generally minor.
Before proceeding we briefly address how to deal with the difficulty \ref{block2} mentioned in Section \ref{sub:osc_kasner}. The idea is that the bootstrap (\ref{eq:pk_bootstrap_lapse}) now ensures that $\Omega^2$ is small enough to dominate any fixed power of $r^{-1}$, so terms in the system (\ref{eq:raych})--(\ref{eq:phi_evol_2}) involving $\Omega^2$ are generally ignorable (since $r^{-1} \lesssim e^{\delta_0 \epsilon^{-2}}$ in the region under consideration). For instance consider the following expression arising in (\ref{eq:r_evol_2}):
\begin{equation*}
\frac{\Omega^2}{4} \left( \frac{Q^2}{r^2} + m^2 r^2|\phi|^2 \right) \lesssim e^{- 48 \delta_0 \epsilon^{-2}}
\end{equation*}
in light of the various bootstrap assumptions (\ref{eq:pk_bootstrap_lapse})--(\ref{eq:pk_bootstrap_Q}). We now proceed with the proof in a series of lemmas similar to those of Section \ref{sec:oscillations}.
\begin{lemma} \label{lem:pk_geometry}
Assuming the bootstraps (\ref{eq:pk_bootstrap_lapse})--(\ref{eq:pk_bootstrap_Q}) for $s \in \mathcal{PK}$, one finds
\begin{equation} \label{eq:pk_rd}
\left |\frac{d}{ds} (-r \dot{r})(s) \right | + \left| r \dot{r}(s) - r \dot{r}(s_{PK}) \right| \leq D_{PK} e^{-40 \delta_0 \epsilon^{-2}},
\end{equation}
\begin{equation} \label{eq:pk_gauge_2}
|q_0\tilde{A} (s) - q_0 \tilde{A} ( s_{PK} )| \leq D_{PK} e^{- 40 \delta_0 \epsilon^{-2}}.
\end{equation}
One also finds (\ref{eq:pk_r}) and (\ref{eq:pk_gauge}). Moreover, letting $x(s) = \frac{r^2(s)}{r_-^2 \epsilon^2}$ one gets the equation
\begin{equation} \label{eq:pk_phi_evol}
\frac{d}{dx} \left( x \frac{d\phi}{dx} \right) + \xi_K^2 x \phi = \mathcal{E}_{\phi},
\end{equation}
where the error term $\mathcal{E}_{\phi}$ is bounded by
\begin{equation*}
|\mathcal{E}_{\phi}(s)| \leq D_{PK} e^{- 30 \delta_0 \epsilon^{-2}}.
\end{equation*}
\end{lemma}
\begin{proof}
The proof is almost identical to those of Lemma \ref{lemma:jo_geometry} and Corollary \ref{cor:jo_errors}, in light of the comments above. Note that on several occasions one must integrate in subintervals of $\mathcal{PK}$. But since for $s \in \mathcal{PK}$,
\begin{equation} \label{eq:pk_interval}
s - s_{PK} \leq \frac{ r^2(s_{PK}) - r^2(s) }{ \inf_{\tilde{s} \in [s_{PK}, s]} (- 2 r \dot{r} (\tilde{s})) } \leq \frac{ r^2(s_{PK})}{ \inf_{\tilde{s} \in [s_{PK}, s]} (- 2 r \dot{r} (\tilde{s})) } \leq \omega_{RN}^{-1} = O(1),
\end{equation}
in light of (\ref{eq:pk_r}), such integrations are always harmless.
\end{proof}
\begin{lemma} \label{lem:pk_phi}
Assuming the bootstraps (\ref{eq:pk_bootstrap_lapse})--(\ref{eq:pk_bootstrap_Q}) in the region $\mathcal{PK}$, let $x_{PK} = x(s_{PK})$, and define the coefficients $C_{JK}(\epsilon)$ and $C_{YK}(\epsilon)$ via
\begin{equation}
\begin{bmatrix} C_{JK}(\epsilon) \\ C_{YK}(\epsilon) \end{bmatrix} =
\begin{bmatrix} J_0(\xi_K x_{PK}) & Y_0(\xi_K x_{PK}) \\ - \xi_K J_1(\xi_K x_{PK}) & - \xi_K Y_1(\xi_K x_{PK}) \end{bmatrix}^{-1}
\begin{bmatrix} \phi(x_{PK}) \\ \frac{d\phi}{dx}(x_{PK}) \end{bmatrix}. \label{eq:pk_coeffs}
\end{equation}
Then one has that $|C_{JK}(\epsilon) - C_J(\epsilon)| + |C_{YK}(\epsilon) - C_Y(\epsilon)| \leq D_{PK} \epsilon^2 \log(\epsilon^{-1})$, and that
\begin{equation} \label{eq:pkl_phi}
| \phi(x) - C_{JK}(\epsilon) J_0( \xi_K x ) - C_{YK}(\epsilon) Y_0 (\xi_K x) | \leq D_{PK} e^{- 20 \delta_0 \epsilon^{-2}},
\end{equation}
\begin{equation} \label{eq:pkl_phi_x}
\left | \frac{d\phi}{dx}(x) + \xi_K C_{JK}(\epsilon) J_1( \xi_K x ) + \xi_K C_{YK}(\epsilon) Y_1 (\xi_K x) \right | \leq D_{PK} e^{- 20 \delta_0 \epsilon^{-2}}.
\end{equation}
\end{lemma}
\begin{proof}
The idea is that when dealing with the scalar field, since we have the same familiar form for the equation (\ref{eq:pk_phi_evol}), we can use the equation (\ref{eq:jo_bessel_inhomog}) describing the solution for the scalar field using the solution operator to the linear Bessel equation, just as in Proposition \ref{prop:oscillation_phi}.
To get optimal error estimates, however, we evolve the system from $x = x_{PK}$ as opposed to $x = x_{\mathcal{B}}$, hence the redefinition of the Bessel coefficients as in (\ref{eq:pk_coeffs}). We first show these are close to the original coefficients as claimed; from Proposition \ref{prop:oscillation_phi} we have
\begin{equation*}
\begin{bmatrix} \phi(x_{PK}) \\ \frac{d \phi}{dx} (x_{PK}) \end{bmatrix}
=
\begin{bmatrix} J_0(\xi_0 x_{PK}) & Y_0(\xi_0 x_{PK}) \\ - \xi_0 J_1(\xi_0 x_{PK}) & - \xi_0 Y_1(\xi_0 x_{PK}) \end{bmatrix}
\begin{bmatrix} C_J(\epsilon) \\ C_Y(\epsilon) \end{bmatrix}
+ O(\epsilon^{10}).
\end{equation*}
Moreover, since $x_{PK} = 4 |B|^2 \mathfrak{W}^2$, by Proposition \ref{prop:oscillation} we know that $1/4 \leq \xi_0 x_{PK}, \xi_K x_{PK} \leq 1$ and $|\xi_0 x_{PK} - \xi_K x_{PK}| \lesssim \epsilon^2 \log(\epsilon^{-1})$, so since the derivatives of $J_{\nu}(z)$ and $Y_{\nu}(z)$ are bounded for $1/4 \leq z \leq 1$, and the coefficients $C_J(\epsilon)$, $C_Y(\epsilon)$ are uniformly bounded, we can modify the above formula to
\begin{equation} \label{eq:pk_coeff_linalg_0}
\begin{bmatrix} \phi(x_{PK}) \\ \frac{d \phi}{dx} (x_{PK}) \end{bmatrix}
=
\begin{bmatrix} J_0(\xi_K x_{PK}) & Y_0(\xi_K x_{PK}) \\ - \xi_K J_1(\xi_K x_{PK}) & - \xi_K Y_1(\xi_K x_{PK}) \end{bmatrix}
\begin{bmatrix} C_J(\epsilon) \\ C_Y(\epsilon) \end{bmatrix}
+ O(\epsilon^2 \log(\epsilon^{-1})).
\end{equation}
Finally, since by Lemma \ref{lem:bessel_wronskian} the inverse matrix
\begin{equation*}
\begin{bmatrix} J_0(\xi_K x_{PK}) & Y_0(\xi_K x_{PK}) \\ - \xi_K J_1(\xi_K x_{PK}) & - \xi_K Y_1(\xi_K x_{PK}) \end{bmatrix}^{-1}
=
\frac{2}{\pi x_{PK}}
\begin{bmatrix} - \xi_K Y_1(\xi_K x_{PK}) & - Y_0(\xi_K x_{PK}) \\ \xi_K J_1(\xi_K x_{PK}) & J_0(\xi_K x_{PK}) \end{bmatrix}
\end{equation*}
has uniformly bounded entries, combining (\ref{eq:pk_coeff_linalg_0}) with (\ref{eq:pk_coeffs}) shows, as claimed, the relation
\begin{equation*}
\begin{bmatrix} C_{JK}(\epsilon) \\ C_{YK}(\epsilon) \end{bmatrix}
=
\begin{bmatrix} C_J(\epsilon) \\ C_Y(\epsilon) \end{bmatrix}
+ O(\epsilon^2 \log(\epsilon^{-1})).
\end{equation*}
The remainder of the proof is as in Proposition \ref{prop:oscillation_phi}. Adopting the same notation from the proof of this proposition, we record the analogue of (\ref{eq:jo_bessel_inhomog}) again here:
\begin{equation} \label{eq:pk_bessel_inhomog}
\begin{bmatrix}
\phi \\ \frac{d \phi}{dx}
\end{bmatrix}
(x) =
\mathbf{S}_{\xi_K}(x; x_{PK})
\begin{bmatrix}
\phi \\ \frac{d \phi}{dx}
\end{bmatrix}
(x_{PK})
+
\bigintsss_{x_{PK}}^x \mathbf{S}_{\xi_K}(x; \tilde{x})
\begin{bmatrix}
0 \\ \frac{1}{\tilde{x}} \mathcal{E}_{\phi}(\tilde{x})
\end{bmatrix}
\, d\tilde{x} .
\end{equation}
The first term on the right hand side will correspond to a solution of the linear Bessel equation after a rescaling $z = \xi_K x$, just as before. Using once again part (\ref{bessel_uno'}) of Corollary \ref{cor:bessel_solution_rescale} we see this corresponds to the objects on the left hand sides of (\ref{eq:pkl_phi}) and (\ref{eq:pkl_phi_x}).
For the second term on the right hand side of (\ref{eq:pk_bessel_inhomog}), we once again appeal to part (\ref{bessel_dos'}) of Corollary \ref{cor:bessel_solution_rescale}. From this corollary, the operator norm of $\mathbf{S}_{\xi_K}(x; \tilde{x})$ contributes at worst an additional factor of $x^{-1} \leq \epsilon^2 e^{2 \delta_0 \epsilon^{-2} }$ in our $l^{\infty}$ estimates. Since the length of the integration interval $|x - x_{PK}| \leq x_{PK}$ is uniformly bounded, (\ref{eq:pkl_phi}) and (\ref{eq:pkl_phi_x}) follow straightforwardly.
\end{proof}
\begin{proof}[Completing the proof of Proposition \ref{prop:oscillation+}]
We are now in a position to close all the bootstraps and complete the proof. We first use Lemma \ref{lem:pk_phi} to improve (\ref{eq:pk_bootstrap_phi}) and (\ref{eq:pk_bootstrap_phi_s}). In fact, we shall only show the latter; the former follows similarly.
By Lemma \ref{lem:pk_phi} and the bounds on the coefficients in Proposition \ref{prop:oscillation_phi}, one has
\begin{equation*}
\left| \frac{d \phi}{dx} \right| \leq 2 \sqrt{\pi} \, \mathfrak{W}^{-1} \, \xi_K\cdot ( |J_1(\xi_K x)| + |Y_1(\xi_K x)|).
\end{equation*}
In light of Facts \ref{fact:bessel1_taylor} and \ref{fact:bessel2_taylor}, one can check e.g.\ numerically that for $0 < z \leq 1$, one has $\max \{ z |J_1(z)|, z |Y_1(z)| \} \leq 1$, hence $\left|\frac{d \phi}{dx}\right| \leq 4 \sqrt{\pi} \, \mathfrak{W}^{-1} x^{-1}$. Using $- \frac{dx}{ds} = 8 |B|^2 \mathfrak{W}^2 \omega_{RN} + O(\epsilon^2 \log(\epsilon^{-1}))$ from Lemma \ref{lem:pk_geometry}, it is then clear that we improve (\ref{eq:pk_bootstrap_phi_s}).
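Since the inequality above is checked numerically, we record a minimal sketch of such a check (assuming \texttt{numpy} and \texttt{scipy} are available; \texttt{j1}, \texttt{y1} are the standard SciPy Bessel routines):
\begin{verbatim}
# Minimal numerical check that max{ z|J_1(z)|, z|Y_1(z)| } <= 1 on (0, 1].
import numpy as np
from scipy.special import j1, y1

z = np.linspace(1e-8, 1.0, 200_000)
vals = np.maximum(z * np.abs(j1(z)), z * np.abs(y1(z)))
print(vals.max())  # observed < 1; z|Y_1(z)| -> 2/pi as z -> 0+, z|J_1(z)| <= z^2/2
\end{verbatim}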
We now move onto $Q$ and $\Omega^2$. For $Q(s)$, it remains to understand only how $Q$ changes in the region $\mathcal{PK}$, in particular that it changes only by $O(\epsilon^2 \log(\epsilon^{-1}))$. Looking at (\ref{eq:Q_evol}), we need to study
\begin{equation*}
\left| \int_{s_{PK}}^s \tilde{A} q_0^2 r^2 |\phi|^2 (s') \, ds' \right|.
\end{equation*}
Now, we use (\ref{eq:jo_gauge_2}) and (\ref{eq:obar_phi}) to estimate
\begin{equation}
\left| \int_{s_{PK}}^s \tilde{A} q_0^2 r^2 |\phi|^2 (s') \, ds' \right| \lesssim \int_{s_{PK}}^s \epsilon^2 \cdot \frac{r^2}{r_-^2 \epsilon^2} \left( 1 + \left| \log \frac{\epsilon^2}{r^2}\right| \right)^2 (s') \, ds'.
\end{equation}
Substituting $x = \frac{r^2}{r_-^2 \epsilon^2}$ as usual, and noting that $-\frac{dx}{ds} \sim 1$, we therefore see that
\begin{equation}
\left| \int_{s_{PK}}^s \tilde{A} q_0^2 r^2 |\phi|^2 (s') \, ds' \right| \lesssim \int_0^{4|B|^2 \mathfrak{W}^2} \epsilon^2 x ( 1 + |\log x| )^2 \, dx \lesssim \epsilon^2.
\end{equation}
Combined with (\ref{eq:q_precise_end}), this will yield (\ref{eq:pk_Q}), and hence improve the bootstrap (\ref{eq:pk_bootstrap_Q}).
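For completeness, the last integral is indeed finite and independent of $\epsilon$: a sketch of the computation, writing $X = 4 |B|^2 \mathfrak{W}^2$, is
\begin{equation*}
\int_0^{X} x \, (1+|\log x|)^2 \, dx \leq \int_0^1 x\,(1-\log x)^2\,dx + X^2 (1+|\log X|)^2 = \tfrac{5}{4} + X^2 (1+|\log X|)^2,
\end{equation*}
which depends on the background parameters only through $|B|$ and $\mathfrak{W}$.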
Last of all, for the quantity $\Omega^2$, we proceed using the Raychaudhuri equation (\ref{eq:raych}), which implies the monotonicity of $- \Omega^{-2} \dot{r}$. One finds from this that for $s \in \mathcal{PK}$:
\begin{equation*}
\frac{\Omega^2(s)}{- r \dot{r}(s)} \leq \frac{\Omega^2(s_{PK})}{- r \dot{r}(s_{PK})} \cdot \frac{r(s_{PK})}{r(s)}.
\end{equation*}
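For clarity, this is simply the monotone quantity rewritten as a ratio: assuming, as used here, that the Raychaudhuri equation forces $- \Omega^{-2} \dot{r}$ to be non-decreasing in $s$, we have for $s \geq s_{PK}$
\begin{equation*}
\frac{\Omega^2}{-\dot{r}}(s) \leq \frac{\Omega^2}{-\dot{r}}(s_{PK})
\quad \Longrightarrow \quad
\frac{\Omega^2(s)}{- r \dot{r}(s)} \leq \frac{1}{r(s)} \cdot \frac{\Omega^2}{-\dot{r}}(s_{PK}) = \frac{\Omega^2(s_{PK})}{- r \dot{r}(s_{PK})} \cdot \frac{r(s_{PK})}{r(s)}.
\end{equation*}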
Therefore one applies (\ref{eq:pk_r}) and (\ref{eq:osc_lapse}) to find:
\begin{equation}
{\Omega^2(s)} \leq (1 + 2 D_O \epsilon^2 \log(\epsilon^{-1})) \cdot C_- e^{(2 K_- + D_O \epsilon^2 \log(\epsilon^{-1}))s_{PK} }\cdot e^{\delta_0 \epsilon^{-2}}.
\end{equation}
By (\ref{eq:s0lower}) and (\ref{eq:pk_interval}), it is straightforward to bound the right hand side such that we find (\ref{eq:pk_lapse}), which improves the final bootstrap (\ref{eq:pk_bootstrap_lapse}), in view of the fact that $\delta_0=\frac{1}{80 |B|^2 \mathfrak{W}^2}$ as we defined it.
\end{proof}
\subsection{The onset of the Kasner-like geometry}
Following the proof of Proposition \ref{prop:oscillation+}, we mention a corollary of Proposition \ref{prop:oscillation+} that will be important in later sections, and in interpreting the subset $\mathcal{PK}_1 = \mathcal{K}_1 \cap \mathcal{PK} = \{2 |B| \mathfrak{W} \epsilon^2 r_- \geq r(s) \geq e^{- \delta_0 \epsilon^{-2}} r_- \}$ as a genuine Kasner-like region. (Note that for this step we restrict ourselves to $r(s) \lesssim \epsilon^2 r_-$ rather than $r(s) \lesssim \epsilon r_-$ to be able to ignore the $O(1)$ terms in the Bessel asymptotics.)
\begin{corollary} \label{cor:protokasner}
Consider the region $s \in \mathcal{PK}_1 = \{ 2 |B| \mathfrak{W} \epsilon^2 r_- \geq r(s) \geq e^{- \delta_0 \epsilon^{-2}}r_- \}$. In this region, we have the following forms for $(\phi,\dot{\phi})$, where $c_1=1$, $c_2 = 2 \pi^{-1} (\gamma-\log2)$, and $\gamma=0.577\ldots$ is the Euler--Mascheroni constant:
\begin{equation} \label{eq:pk_phi_asymp}
\left| \phi(s) + \frac{2}{\pi} C_{YK}(\epsilon) \log \left( \frac{r_-^2 \epsilon^2 }{r^2(s) \xi_K} \right) - c_1 C_{JK}(\epsilon) - c_2 C_{YK}(\epsilon) \right| \leq D_{PK} \epsilon^2 \log (\epsilon^{-1}),
\end{equation}
\begin{equation} \label{eq:pk_phi_s_asymp}
\left| \dot{\phi}(s) + \frac{2}{\pi} C_{YK}(\epsilon) \frac{r_-^2 \omega_{K} \epsilon^2 }{r^2(s) \xi_K} \right| \leq D_{PK} \epsilon^2 \log (\epsilon^{-1}).
\end{equation}
Next, recall the function $\Theta(\epsilon)$ arising in (\ref{eq:bessel_coeff_phase}).
Then, defining $\Psi(s):= \frac{r^2 \dot{\phi}}{- r \dot{r}}(s)$ [see already \eqref{eq:Psi} in the next section], one finds that
\begin{equation} \label{eq:pk1_Psi}
\left|\Psi(s) + \frac{2}{\sqrt{\pi}} \, \mathfrak{W}^{-1} \sin(\Theta(\epsilon)) \right| \leq D_{PK} \epsilon^2 \log(\epsilon^{-1}).
\end{equation}
Furthermore, if we let $\Psi_i = \Psi(s=s_i)$, one can determine that $\Psi(s)$ changes only slowly within $\mathcal{PK}_1$:
\begin{equation} \label{eq:pk1_Psi_2}
|\Psi(s) - \Psi_i| \leq D_{PK} r^2(s) \log (\epsilon^{-1}) \leq D_{PK}^2 \epsilon^4 \log (\epsilon^{-1}) .
\end{equation}
Finally, one obtains the following consequences for the lapse $\Omega^2$:
\begin{equation} \label{eq:pk1_lapse_der}
\left| \frac{ \frac{d}{ds} \log \Omega^2(s) }{ \frac{d}{ds} \log r(s) } - \Psi_i^2 + 1 \right| \leq D_{PK} \epsilon^4 \log(\epsilon^{-1}),
\end{equation}
\begin{equation} \label{eq:pk1_lapse}
\left| \log \left[ \Omega^2(s) \left( \frac{r(s)}{r_-} \right)^{1 - \Psi_i^2} \right] + \frac{1}{2} b_-^{-2} \epsilon^{-2} \right| \leq D_{PK} \log (\epsilon^{-1}).
\end{equation}
\end{corollary}
\begin{proof}
The equations (\ref{eq:pk_phi_asymp}) and (\ref{eq:pk_phi_s_asymp}) follow immediately from (\ref{eq:pk_phi_1}), (\ref{eq:pk_phi_2}) and the Bessel function asymptotics in Facts \ref{fact:bessel1_taylor} and \ref{fact:bessel2_taylor}. Indeed, by restricting $r(s) \leq 2 |B| \mathfrak{W} \epsilon^2 r_-$, we guarantee that
\begin{equation*}
\frac{\xi_K r^2(s)}{r_-^2 \epsilon^2} \leq \epsilon^2,
\end{equation*}
hence Facts \ref{fact:bessel1_taylor} and \ref{fact:bessel2_taylor} guarantee that
\begin{equation*}
\left| J_0 \left( \frac{\xi_K r^2(s)}{r_-^2 \epsilon^2} \right) - 1 \right| \lesssim \epsilon^4, \quad \left| J_1\left( \frac{\xi_K r^2(s)}{r_-^2 \epsilon^2} \right) \right| \lesssim \epsilon^2,
\end{equation*}
\begin{equation*}
\left| Y_0 \left( \frac{\xi_K r^2(s)}{r_-^2 \epsilon^2} \right) + \frac{2}{\pi} \log \frac{r_-^2 \epsilon^2}{\xi _K r^2(s)} - \frac{2}{\pi} ( \gamma - \log 2 ) \right| \lesssim \epsilon^4, \quad \left| Y_1\left( \frac{\xi_K r^2(s)}{r_-^2 \epsilon^2} \right) + \frac{2}{\pi} \frac{r_-^2 \epsilon^2}{\xi_K r^2(s)} \right| \lesssim \epsilon^2 \log(\epsilon^{-1}).
\end{equation*}
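As a sanity check of the constant $\frac{2}{\pi}(\gamma - \log 2)$ appearing in the expansion of $Y_0$, one may compare against SciPy numerically (a minimal sketch; \texttt{y0} is the standard SciPy Bessel routine):
\begin{verbatim}
# Check Y_0(z) = -(2/pi) log(1/z) + (2/pi)(gamma - log 2) + O(z^2 log(1/z)).
import numpy as np
from scipy.special import y0

for z in (1e-2, 1e-3, 1e-4):
    err = y0(z) + (2/np.pi)*np.log(1/z) - (2/np.pi)*(np.euler_gamma - np.log(2))
    print(z, err)  # error decays roughly like z^2 log(1/z)
\end{verbatim}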
Substituting these into (\ref{eq:pk_phi_1}) and (\ref{eq:pk_phi_2}), we get (\ref{eq:pk_phi_asymp}) and (\ref{eq:pk_phi_s_asymp}) with $c_1 = 1$ and $c_2 = 2 \pi^{-1} (\gamma - \log 2)$.
Next, combining (\ref{eq:pk_phi_s_asymp}) with the estimates (\ref{eq:omegak}) and (\ref{eq:xik}) it is straightforward to get
\begin{equation} \label{eq:pk_rphi_s_asymp}
\left| r^2 \dot{\phi}(s) + 16 \pi^{-1} C_{YK}(\epsilon) |B|^2 \mathfrak{W}^2 \omega_{RN} r_-^2 \epsilon^2 \right | \lesssim \epsilon^4 \log (\epsilon^{-1}).
\end{equation}
Hence from (\ref{eq:pk_r}) we find that
\begin{equation}
\left| \Psi (s) + 4 \pi^{-1} C_{YK}(\epsilon) \right| = \left| \frac{r^2 \dot{\phi}(s)}{- r \dot{r}(s)} + 4 \pi^{-1} C_{YK}(\epsilon) \right| \lesssim \epsilon^2 \log(\epsilon^{-1}),
\end{equation}
thus (\ref{eq:pk1_Psi}) follows from $|C_{YK}(\epsilon) - C_Y(\epsilon)| \lesssim \epsilon^2 \log(\epsilon^{-1})$ and (\ref{eq:bessel_y_coeff}).
To get (\ref{eq:pk1_Psi_2}), note that (\ref{eq:pk_phi_s_asymp}) will also yield $|r^2 \dot{\phi} (s) - r^2 \dot{\phi} (s_i) | \lesssim \epsilon^2 \log(\epsilon^{-1}) r^2(s)$, while we also know from (\ref{eq:pk_rd}) that $- r \dot{r}$ changes little in the region $\mathcal{PK}$, namely $|r \dot{r}(s) - r \dot{r}(s_i)| \lesssim e^{-40 \delta_0 \epsilon^{-2}} \leq \epsilon^2 \log(\epsilon^{-1}) r^2(s)$. Combining these two estimates, given also that $- r \dot{r}(s) \sim \epsilon^2$ by (\ref{eq:pk_r}), yields (\ref{eq:pk1_Psi_2}).
We now move on to the estimate (\ref{eq:pk1_lapse_der}) for the derivative of $\Omega^2(s)$. We proceed here using the Raychaudhuri equation in the form (\ref{eq:raych_transport}), which after multiplying by $r(s) (\dot{r}(s))^{-2}$ gives:
\begin{equation*}
\frac{r}{\dot{r}} \frac{d}{ds} \log \Omega^2 - \frac{r \ddot{r}}{\dot{r}^2} - \frac{r^2 \dot{\phi}^2}{\dot{r}^2} = \frac{r^2 |\tilde{A}|^2 q_0^2 |\phi|^2}{\dot{r}^2}.
\end{equation*}
Using (\ref{eq:r_evol}) to substitute for $\ddot{r}$, and noticing that the rightmost term on the left hand side is exactly $\Psi^2$, one finds
\begin{equation*}
\frac{r}{\dot{r}} \frac{d}{ds} \log \Omega^2 + 1 - \Psi^2 = ( - r \dot{r})^{-2} \left[ r^4 \tilde{A}^2 q_0^2 |\phi|^2 + \frac{\Omega^2}{4} \left( Q^2 - r^2 + r^4 \Lambda + r^4 m^2 |\phi|^2 \right) \right].
\end{equation*}
From Proposition \ref{prop:oscillation+} and \eqref{eq:pk_phi_asymp}, the right hand side of this is bounded by a multiple of $ \epsilon^{-4} (r^2 \log(\frac{1}{r^2}))^2\lesssim \epsilon^4 \log (\epsilon^{-1})$, so that if we use (\ref{eq:pk1_Psi_2}) also then we get (\ref{eq:pk1_lapse_der}) as required.
Finally, for (\ref{eq:pk1_lapse}), one would like to integrate (\ref{eq:pk1_lapse_der}). However, one needs to first estimate $\Omega^2(s_{K_1})$, where $s = s_{K_1}$ is such that $r(s_{K_1}) = 2 |B| \mathfrak{W} \epsilon^2 r_-$ (corresponding to the past boundary of $\mathcal{PK}_1$). For this purpose, one finds exactly as in the proof of Proposition \ref{prop:oscillation}\footnote{We need to extend from $r(s) \gtrsim \epsilon$ to $r(s) \gtrsim \epsilon^2$, but the proof still applies as we only needed $|\log (r / r_-)| \lesssim \log (\epsilon^{-1})$.} that for $s \in \mathcal{O} \cup ( \mathcal{PK} \setminus \mathcal{K}_1 ) = \{ s_O(\epsilon) \leq s \leq s_{K_1} \}$:
\begin{equation}
\left | \frac{d}{ds} \log \Omega^2(s) - 2 K_- \right| \lesssim \epsilon^2 [\log (\epsilon^{-1}) + r^{-2}(s)].
\end{equation}
Integrating this in exactly the region $s \in \mathcal{O} \cup (\mathcal{PK} \setminus \mathcal{K}_1)$, using that the $s$-length of the integration interval is of $O(\epsilon^{-2})$, one gets
\begin{equation*}
\left | \log \Omega^2(s_{K_1}) - 2 K_- s_{K_1} \right | \lesssim \log (\epsilon^{-1})+ |\log \Omega^2 ( s_O )| \lesssim \log (\epsilon^{-1}).
\end{equation*}
We also need to estimate the expression $2 K_- s_{K_1}$. This follows from an identical computation to that of $s_{PK}$ as in (\ref{eq:s0}), and one finds that
\begin{equation*}
2 K_- s_{K_1} = - \frac{|2 K_-|}{8 |B|^2 \mathfrak{W}^2 \omega_{RN} \epsilon^2} + O(\log (\epsilon^{-1})) = - \frac{1}{2} b_-^2 \epsilon^{-2} + O(\log (\epsilon^{-1})).
\end{equation*}
Making a final observation that $\frac{r(s_{K_1})}{r_-} \sim \epsilon^2$ so that additional factors of $\log(r(s_{K_1})/r_-)$ can also be added, we arrive at
\begin{equation} \label{eq:pk1_lapse_init}
\left | \log \left[ \Omega^2 (s_{K_1}) \left( \frac{ r(s_{K_1}) }{r_-} \right)^{1- \Psi_i^2} \right] + \frac{1}{2} b_-^2 \epsilon^{-2} \right | \lesssim \log(\epsilon^{-1}).
\end{equation}
Finally, we rewrite (\ref{eq:pk1_lapse_der}) as
\begin{equation*}
\left| \frac{d}{ds} \log \left[ \Omega^2(s) \left( \frac{r(s)}{r_-} \right)^{1 - \Psi_i^2} \right] \right| \lesssim \epsilon^2 \log (\epsilon^{-1}) \frac{- \dot{r}(s)}{r(s)}.
\end{equation*}
Integrating this in the region $s \in \mathcal{PK} \cap \mathcal{K}_1$ and using (\ref{eq:pk1_lapse_init}) to estimate the boundary term at $s = s_{K_1}$, we indeed find (\ref{eq:pk1_lapse}).
\end{proof}
\section{Construction of the sets $E_{\eta}$ and $E'_{\eta,\sigma}$ for further quantitative estimates} \label{sec:sing}
So far, we have a description of the hairy black hole interior up to the region $\mathcal{PK}$, where $s \approx (8 |B|^2 \mathfrak{W}^2 \omega_{RN} \epsilon^2)^{-1}$ and $r(s) \geq \exp(- \delta_0 \epsilon^{-2}) r_-$. These estimates hold for $0 < \epsilon \leq \epsilon_0$, where $\epsilon_0(M, \mathbf{e}, \Lambda, m^2, q_0) > 0$ is taken sufficiently small, and $\delta_0$ is a fixed quantity determined by (\ref{eq:delta0}).
In particular, up to this point we have placed no restriction on the value of $\epsilon$, other than its smallness.
However, at least within the present article, to proceed further we will be required to restrict attention to a smaller subset of values of $\epsilon$ verifying a certain condition. We define the following important quantity:
\begin{equation} \label{eq:Psi}
\Psi \coloneqq \frac{ r^2 \dot{\phi}}{-r \dot{r}} = - r \frac{d \phi}{dr}.
\end{equation}
Then the precise condition on $\epsilon$ we shall use is that
\begin{equation} \tag{$\dagger$} \label{condition_eta}
|\Psi_i|:= |\Psi(s_i)| \geq \eta > 0,
\end{equation}
where $s_i$ marks the end of the region $\mathcal{PK}$, i.e.\ $r(s_i) = e^{- \delta_0 \epsilon^{-2}} r_-$, and $\eta > 0$ is an arbitrary small constant.
%
Before using (\ref{condition_eta}) to analyse the spacetime beyond $\mathcal{PK}$ in Section~\ref{sec:kasner}, we first quantitatively characterize the set of $\epsilon$ for which a condition such as (\ref{condition_eta}) holds. In light of Corollary \ref{cor:protokasner}, we are required to study the quantity $\Theta(\epsilon)$ defined in (\ref{eq:bessel_coeff_phase}).
\subsection{Improved estimates on $\Theta(\epsilon)$}
From Proposition \ref{prop:lateblueshift}, $\Theta(\epsilon)$ is identified to have the following dependence on $\epsilon$:
\begin{equation} \label{eq:vartheta}
\Theta(\epsilon) = \frac{1}{8 |B|^2 \mathfrak{W}^2 \epsilon^2} + O(\log (\epsilon^{-1})).
\end{equation}
However, the $O(\log (\epsilon^{-1}))$ error prevents us from having any quantitative control on quantities such as $\sin(\Theta(\epsilon))$, e.g.\ given only the above expression, for any fixed $\eta > 0$, if $|\cdot|$ denotes Lebesgue measure, the \textit{limiting density}
\begin{equation*}
\limsup_{\epsilon_0 \to 0} \epsilon_0^{-1} | \{ \epsilon \in (0, \epsilon_0]: |\sin (\Theta(\epsilon))| > \eta \} |
\end{equation*}
could be arbitrarily small, or even vanish asymptotically.
To overcome this issue, we instead consider the quantity:
\begin{equation} \label{eq:bessel_coeff_phase_der}
\frac{d}{d \epsilon} \Theta(\epsilon) = \left ( \frac{\partial}{\partial \epsilon} + \frac{d \, s_O(\epsilon)}{d \epsilon} \frac{\partial}{\partial s} \right) \left (|q_0 \tilde{A}|(s) \cdot r^2(s) \cdot \left( - \frac{d}{ds} r^2(s) \right)^{-1} + \omega_{RN} s \right ),
\end{equation}
where to make sense of the right hand side, we now interpret $f(s) \in \{ r(s), \log \Omega^2(s), Q(s), \tilde{A}(s), \phi(s) \}$ as (smooth) functions of $\epsilon$ as well as $s$.
Denoting the $\epsilon$-derivatives by using a subscript, i.e.\ $f_{\epsilon} = \frac{\partial}{\partial \epsilon} f$, while still using $\dot{f} = \frac{\partial}{\partial s} f$ to denote $s$-derivatives, we shall take an $\epsilon$-derivative of the system (\ref{eq:raych})--(\ref{eq:phi_evol_2}) to find a system of \textit{linear} evolution equations for the quantities $f_{\epsilon}(s)$. For instance, the $\epsilon$-derivative of (\ref{eq:gauge_evol}) is
\begin{equation*}
\dot{\tilde{A}}_{\epsilon} = - \frac{\Omega^2}{4 r^2} \left( Q_{\epsilon} + Q (\log \Omega^2)_{\epsilon} - \frac{2 Q r_{\epsilon}}{r} \right),
\end{equation*}
while the corresponding evolution equations for the other linearized quantities are more complicated and will not be written explicitly.
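To illustrate the schematic structure of these equations (a sketch only: we suppress numerical constants and the sign convention of (\ref{eq:Q_evol}), and write $\phi \phi_{\epsilon}$ schematically for the relevant real part, as in the inequalities below), the charge equation linearizes as
\begin{equation*}
\dot{Q}_{\epsilon} = \pm q_0^2 \left( \tilde{A}_{\epsilon} \, r^2 |\phi|^2 + 2 \tilde{A} \, r r_{\epsilon} |\phi|^2 + 2 \tilde{A} \, r^2 \phi \phi_{\epsilon} \right),
\end{equation*}
which is the source of the bound (\ref{eq:Q_e_evol}) recorded below in the proof of Proposition \ref{prop:epsilon_der}.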
Along with the evolution equations, we need to pose data for the various quantities $f_{\epsilon}(s)$ in the $s \to - \infty$ limit. For this purpose, one should return to the $1+1$-dimensional formulation of the problem in the regular $(U, V)$ coordinates as in Section \ref{sub:data}, and take the appropriate $\epsilon$-derivatives there. One finds that the correct asymptotic data is
\begin{gather}
\lim_{s \to -\infty} r_{\epsilon}(s) = 0, \; \lim_{s \to - \infty} Q_{\epsilon} (s) = 0, \; \lim_{s \to - \infty} \phi_{\epsilon} = 1, \label{eq:ode_datae_rqphi} \\
\lim_{s \to - \infty} (\log \Omega^2)_{\epsilon} = \lim_{s \to - \infty} \frac{d}{ds} (\log \Omega^2)_{\epsilon} = 0, \label{eq:ode_datae_omega} \\
\lim_{s \to - \infty} \Omega^{-2} \tilde{A}_{\epsilon} (s) = 0, \label{eq:ode_datae_gauge} \\
\lim_{s \to - \infty} 4 \Omega^{-2} \dot{r}_{\epsilon}(s) = \frac{2 r_+ m^2 \epsilon}{2 K_+}, \label{eq:ode_datae_rdot} \\
\lim_{s \to - \infty} \Omega^{-2} \dot{\phi}_{\epsilon} = \beta_+. \label{eq:ode_datae_phidot}
\end{gather}
The plan is now to use a similar procedure to Section \ref{sec:einsteinrosen} to find sufficiently strong estimates for the quantities $f_{\epsilon} \in \{ r_{\epsilon}, (\log \Omega^2)_{\epsilon}, Q_{\epsilon}, \tilde{A}_{\epsilon}, \phi_{\epsilon} \}$ up to the late blue shift region $\mathcal{LB}$, where we have nontrivial overlap with the oscillatory region $\mathcal{O}$, and compute $\frac{d}{d \epsilon}\Theta(\epsilon)$. The most crucial estimate will be to determine that $\dot{r}_{\epsilon}$ is comparable to $\epsilon$.
\begin{proposition} \label{prop:epsilon_der}
For $s \in \mathcal{EB} \cup \mathcal{LB} = \{ S \leq s \leq \Delta_{\mathcal{B}} \epsilon^{-1} \}$, there exists some constant $D_{LE}(M, \mathbf{e}, \Lambda, m^2, q_0)$ such that:
\begin{gather}
|\dot{r}_{\epsilon}| + s^{-1} |r_{\epsilon}| \leq D_{LE} \epsilon, \label{eq:lb_e_r} \\
\left| \frac{d}{ds} (\log \Omega^2)_{\epsilon} \right| + s^{-1} |(\log \Omega^2)_{\epsilon}| \leq D_{LE} \epsilon s, \label{eq:lb_e_lapse} \\
|\tilde{A}_{\epsilon}| + s^{-1} |Q_{\epsilon}| \leq D_{LE} \epsilon, \label{eq:lb_e_maxwell} \\
|\dot{\phi}_{\epsilon}| + |\phi_{\epsilon}| \leq D_{LE}, \label{eq:lb_e_phi} \\
|\delta \dot{\phi}_{\epsilon}| + | \delta \phi_{\epsilon}| \leq D_{LE} \epsilon^2 s. \label{eq:lb_e_phidiff}
\end{gather}
Furthermore, we have the more precise estimate for $ s_O=50 s_{lin} \leq s \leq \Delta_{\mathcal{B}} \epsilon^{-1}$:
\begin{equation} \label{eq:lb_e_r_precise}
\left| - \dot{r}_{\epsilon}(s) - 8 |B|^2 \mathfrak{W}^2 \omega_{RN} r_- \epsilon \right| \leq D_{LE} \epsilon^3 s.
\end{equation}
\end{proposition}
\begin{corollary} \label{cor:phase_der}
Consider the expression (\ref{eq:bessel_coeff_phase_der}). Then
\begin{equation} \label{eq:bessel_coeff_phase_epsilon}
\left| \frac{d}{d \epsilon} \Theta(\epsilon) + \frac{1}{4 |B|^2 \mathfrak{W}^2 \epsilon^3} \right| \leq D_{LE} \epsilon^{-1} \log (\epsilon^{-1}).
\end{equation}
\end{corollary}
\begin{proof}[Proof of Corollary \ref{cor:phase_der} given Proposition \ref{prop:epsilon_der}]
First consider the $\frac{\partial}{\partial s}$ derivative in (\ref{eq:bessel_coeff_phase_der}). One checks from (\ref{eq:r_evol_2}), (\ref{eq:gauge_evol}) and Proposition \ref{prop:lateblueshift}, that for some polynomial $P$ of degree less than $2$, one has
\begin{equation*}
\left |\frac{\partial}{\partial s} \left (|q_0 \tilde{A}|(s) \cdot r^2(s) \cdot \left( - \frac{d}{ds} r^2(s) \right)^{-1} + \omega_{RN} s \right ) \right|
\lesssim \epsilon^{-4} \, \Omega^2 \cdot P(r^{-1}, |\phi|) + \big| \omega_{RN} - |q_0 \tilde{A}| \big| \lesssim \epsilon^2
\end{equation*}
where the final step follows from $\Omega^2 \lesssim \epsilon^{100}$ at $s = s_O$ and Proposition \ref{prop:lateblueshift}. Hence even with the $\frac{d s_O}{d \epsilon} \sim \epsilon^{-1}$ factor in front, this term contributes at worst $O(\epsilon)$ and can be ignored.
The main term $(4 |B|^2 \mathfrak{W}^2 \epsilon^3)^{-1}$ on the left hand side of (\ref{eq:bessel_coeff_phase_epsilon}) comes from taking the $\epsilon$-derivative of $\left( - \frac{d}{ds} r^2(s) \right)^{-1}$. Indeed, the expression that arises from this is
\begin{equation*}
I = |q_0 \tilde{A}|(s) \cdot r^2(s) \cdot \left( - \frac{d}{ds} r^2(s) \right)^{-2} \cdot 2 ( r_{\epsilon}(s) \dot{r}(s) + r(s) \dot{r}_{\epsilon}(s)).
\end{equation*}
Using Propositions \ref{prop:lateblueshift} and \ref{prop:epsilon_der}, particularly \eqref{eq:lb_e_r_precise}, we can evaluate (note $\dot{r}(s) r_{\epsilon}(s) = O(\epsilon^3 s)$ is treated as part of the error):
\begin{align*}
I &= \frac{\omega_{RN} r_-^2}{(8 |B|^2 \mathfrak{W}^2 r_-^2 \omega_{RN} \epsilon^2)^2} \cdot 2(- 8 |B|^2 \mathfrak{W}^2 \omega_{RN} r_-^2 \epsilon )+ O(\epsilon^{-1} \log (\epsilon^{-1})) \\
&= - \frac{1}{4 |B|^2 \mathfrak{W}^2 \epsilon^3} + O(\epsilon^{-1} \log(\epsilon^{-1})).
\end{align*}
Therefore, the $\epsilon$-derivative on the left hand side of (\ref{eq:bessel_coeff_phase_epsilon}) can be evaluated using Proposition~\ref{prop:epsilon_der} as:
\begin{equation*}
I + (|q_0 \tilde{A}|_{\epsilon}(s) r^2(s) + 2 |q_0 \tilde{A}|(s) r(s) r_{\epsilon}(s) ) \cdot \left( - \frac{d}{ds} r^2(s) \right)^{-1} = - \frac{1}{4 |B|^2 \mathfrak{W}^2 \epsilon^3} + O(\epsilon^{-1} \log(\epsilon^{-1})),
\end{equation*}
completing the proof of the corollary.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:epsilon_der}]
The first step will be to find the upper bounds. For this purpose, we would have to write out the full linear system of ODEs; however, as mentioned previously, this would be extremely complicated, and we therefore only record upper bounds for the various quantities $|\dot{f}_{\epsilon}|$. Differentiating (\ref{eq:r_evol})--(\ref{eq:phi_evol}) in $\epsilon$, the appropriate inequalities and equations are:
\begin{equation} \label{eq:r_e_evol}
|\ddot{r_{\epsilon}}| \lesssim \Omega^2 ( |r_{\epsilon}| + |(\log \Omega^2)_{\epsilon}| + |Q_{\epsilon}| ) + ( - \dot{r} ) (|r_{\epsilon}| + |\dot{r}_{\epsilon}|) + \Omega^2 | \phi \phi_{\epsilon} |,
\end{equation}
\begin{equation} \label{eq:lapse_e_evol}
\left| \frac{d^2}{ds^2} (\log \Omega^2)_{\epsilon} \right| \lesssim \Omega^2 ( |r_{\epsilon}| + |(\log \Omega^2)_{\epsilon}| + |Q_{\epsilon}| ) + ( - \dot{r} ) (|r_{\epsilon}| + |\dot{r}_{\epsilon}|) + |\tilde{A}| |\phi \phi_{\epsilon}| + |\dot{\phi} \dot{\phi}_{\epsilon}| + |\tilde{A}_{\epsilon}| |\phi|^2,
\end{equation}
\begin{equation} \label{eq:Q_e_evol}
| \dot{Q}_{\epsilon} | \lesssim (|\tilde{A}_{\epsilon}| + |\tilde{A} r_{\epsilon}|) |\phi|^2 + |\tilde{A} \phi \phi_{\epsilon}|,
\end{equation}
\begin{equation} \label{eq:gauge_e_evol}
|\dot{\tilde{A}}_{\epsilon}| \lesssim \Omega^2 ( |r_{\epsilon}| + |(\log \Omega^2)_{\epsilon}| + |Q_{\epsilon}|),
\end{equation}
\begin{equation} \label{eq:phi_e_evol}
\ddot{\phi}_{\epsilon} = - \frac{2 \dot{r} \dot{\phi}_{\epsilon}}{r} - q_0^2 \tilde{A}^2 \phi_{\epsilon} - \frac{m^2 \Omega^2}{4} \phi_{\epsilon} + J_{\phi}, \hspace{1cm} |J_{\phi}| \lesssim |\dot{r}_{\epsilon} - \dot{r} r_{\epsilon}| |\dot{\phi}| + |\tilde{A}_{\epsilon}| |\phi| + \Omega^2 (\log \Omega^2)_{\epsilon} |\phi|.
\end{equation}
Note that for the final equation (\ref{eq:phi_e_evol}), it was necessary to exactly keep the terms corresponding to $\phi$ being a solution of the linear charged scalar wave equation. We now proceed through the regions $\mathcal{R}$, $\mathcal{N}$, $\mathcal{EB}$ and $\mathcal{LB}$ exactly as in Section \ref{sec:einsteinrosen}.
\medskip \noindent
\underline{Step 1: The redshift region $\mathcal{R} = \{ - \infty < s \leq - \Delta_{\mathcal{R}} \}$}
\medskip \noindent
In this region, we consider the following bootstrap assumptions, which hold in a neighborhood of $s = -\infty$ by the asymptotic data above:
\begin{equation}
|r_{\epsilon}| + |\dot{r}_{\epsilon}| + |(\log \Omega^2)_{\epsilon}| + |Q_{\epsilon}| + |\Omega^{-2} \tilde{A}_{\epsilon} | \leq \epsilon,
\end{equation}
\begin{equation}
|\phi_{\epsilon}| + |\dot{\phi}_{\epsilon}| \leq 2.
\end{equation}
Note that by Proposition \ref{prop:redshift}, we know already that $- \dot{r}, |\tilde{A}| \lesssim \Omega^2$, and therefore given these bootstrap assumptions and the inequality (\ref{eq:rs_omega_estimate}) we may simply integrate up the equations (\ref{eq:r_e_evol})--(\ref{eq:phi_e_evol}) to get
\begin{equation}
|r_{\epsilon}| + |\dot{r}_{\epsilon}| + |(\log \Omega^2)_{\epsilon}| + \left| \frac{d}{ds} (\log \Omega^2)_{\epsilon} \right| + |Q_{\epsilon}| + |\Omega^{-2} \tilde{A}_{\epsilon} | \lesssim \epsilon \Omega^2,
\end{equation}
\begin{equation}
|\phi_{\epsilon} - 1| + |\dot{\phi}_{\epsilon}| \lesssim \Omega^2.
\end{equation}
(Recall that the asymptotic data for $r_{\epsilon}$, $\dot{r}_{\epsilon}$, $(\log \Omega^2)_{\epsilon}$ and $Q_{\epsilon}$ is $0$.) Hence with a choice of $\Delta_{\mathcal{R}}$ large enough, the bootstraps are easily improved.
\medskip \noindent
\underline{Step 2: The no-shift region $\mathcal{N} = \{ - \Delta_{\mathcal{R}} \leq s \leq S \}$}
\medskip \noindent
Since we are integrating only in a finite $s$-region, the no-shift region is easily dealt with using Gr\"onwall. To do this, let $\mathbf{X}_{\epsilon}$ and $\mathbf{\Phi}_{\epsilon}$ denote the tuples:
\begin{equation*}
\mathbf{X}_{\epsilon} = \left( r_{\epsilon}, \dot{r}_{\epsilon}, (\log \Omega^2)_{\epsilon}, \frac{d}{ds} (\log \Omega^2)_{\epsilon}, Q_{\epsilon}, \tilde{A}_{\epsilon} \right), \; \mathbf{\Phi}_{\epsilon} = (\phi_{\epsilon}, \dot{\phi}_{\epsilon}),
\end{equation*}
then in light of Proposition \ref{prop:noshift}, the system (\ref{eq:r_e_evol})--(\ref{eq:phi_e_evol}) can be translated into
\begin{equation*}
|\dot{\mathbf{X}}_{\epsilon}| \lesssim |\mathbf{X}_{\epsilon}| + \epsilon |\mathbf{\Phi}_{\epsilon}|, \;
|\dot{\mathbf{\Phi}}_{\epsilon}| \lesssim |\mathbf{\Phi}_{\epsilon}| + \epsilon |\mathbf{X}_{\epsilon}|.
\end{equation*}
Hence a straightforward use of Gr\"onwall in the bounded $s$-region $s \in[- \Delta_{\mathcal{R}}, S]$ yields
\begin{equation*}
\sup_{s \in \mathcal{N}} |\mathbf{X}_{\epsilon}|(s) \lesssim |\mathbf{X}_{\epsilon}|(-\Delta_{\mathcal{R}}) + \epsilon \sup_{s \in \mathcal{N}} |\mathbf{\Phi}_{\epsilon}|(s)
\lesssim |\mathbf{X}_{\epsilon}|(-\Delta_{\mathcal{R}}) + \epsilon |\mathbf{\Phi}_{\epsilon}|(-\Delta_{\mathcal{R}}) + \epsilon^2 \sup_{s \in \mathcal{N}} |\mathbf{X}_{\epsilon}|(s)
\end{equation*}
So for $\epsilon$ sufficiently small, we absorb the rightmost term into the left hand side, and we can see that there exists some constant $D_{NE}(M, \mathbf{e}, \Lambda, m^2, q_0)$ such that
\begin{equation}
|\mathbf{X}_{\epsilon}|(S) \leq D_{NE} \epsilon, \; |\mathbf{\Phi}_{\epsilon}|(S) \leq D_{NE}.
\end{equation}
\medskip \noindent
\underline{Step 3: Upper bounds in the blue shift regions $\mathcal{EB} \cup \mathcal{LB} = \{ S \leq s \leq \Delta_{\mathcal{B}} \epsilon^{-1} \}$.}
\medskip \noindent
This step will be much simpler than the corresponding nonlinear estimates in Sections \ref{sub:earlyblueshift} and \ref{sub:lateblueshift}. We use the bootstrap assumptions:
\begin{gather}
|\dot{r}_{\epsilon}| \leq 10 D_{NE} \epsilon, \label{eq:b_e_bootstrap_r} \\
|(\log \Omega^2)_{\epsilon}| \leq 10 D_{NE} \epsilon s^3, \label{eq:b_e_bootstrap_lapse} \\
|Q_{\epsilon}| \leq 10 D_{NE} \epsilon s^2.
\end{gather}
Note that the first of these trivially implies $|r_{\epsilon}| \leq 10 D_{NE} \epsilon s$. Then integration of (\ref{eq:gauge_e_evol}) gives that $|\tilde{A}_{\epsilon} |\lesssim D_{NE} \epsilon$.
We next use these bootstraps to estimate $\phi_{\epsilon}$ and $\dot{\phi}_{\epsilon}$. Note that the expression $J_{\phi}$ in \eqref{eq:phi_e_evol} now obeys the estimate $|J_{\phi}| \lesssim D_{NE} \epsilon^2$. We now follow the proof of Proposition \ref{prop:earlyblueshift} and consider the quantity
\begin{equation*}
H^{(\epsilon)} = r^4 \dot{\phi}^2_{\epsilon} + r^4 q_0^2 |\tilde{A}|^2 \phi_{\epsilon}^2.
\end{equation*}
Completely analogously to before, one finds that
\begin{equation*}
\dot{H}^{(\epsilon)} \lesssim \Omega^2 H^{(\epsilon)} + |J_{\phi}| \, r^2 |\dot{\phi}_{\epsilon}| \lesssim \Omega^2 H^{(\epsilon)} + D_{NE} \epsilon^2 \sqrt{ H^{(\epsilon)} }.
\end{equation*}
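For the reader's convenience, here is a sketch of where this comes from (schematic, with the same conventions as above): differentiating $H^{(\epsilon)}$ and substituting (\ref{eq:phi_e_evol}) for $\ddot{\phi}_{\epsilon}$, the terms involving $\dot{\phi}_{\epsilon}^2$ and $q_0^2 |\tilde{A}|^2 \phi_{\epsilon} \dot{\phi}_{\epsilon}$ cancel, leaving
\begin{equation*}
\dot{H}^{(\epsilon)} = 4 r^3 \dot{r} \, q_0^2 |\tilde{A}|^2 \phi_{\epsilon}^2 + 2 r^4 q_0^2 \tilde{A} \dot{\tilde{A}} \, \phi_{\epsilon}^2 - \frac{m^2 \Omega^2}{2} r^4 \phi_{\epsilon} \dot{\phi}_{\epsilon} + 2 r^4 \dot{\phi}_{\epsilon} J_{\phi}.
\end{equation*}
The first term is non-positive, $|\dot{\tilde{A}}| \lesssim \Omega^2$ by (\ref{eq:gauge_evol}), and $r^2 |\dot{\phi}_{\epsilon}| \leq \sqrt{H^{(\epsilon)}}$; since $r$ and $|q_0 \tilde{A}|$ are bounded above and below in this region, the displayed inequality follows.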
So as $\Omega^2(s)$ is integrable for $s \in \mathcal{EB} \cup \mathcal{LB}$, one can apply Gr\"onwall to the quantity $\sqrt{ H^{(\epsilon)} }$ to yield
\begin{equation*}
\sqrt{ H^{(\epsilon)} } (s) \lesssim \sqrt{ H^{(\epsilon)} }(S) + D_{NE} \epsilon^2 s \lesssim |\phi_{\epsilon}|(S) + |\dot{\phi}_{\epsilon}|(S) + D_{NE} \epsilon^2 s \lesssim D_{NE}.
\end{equation*}
Since $H^{(\epsilon)} \sim \phi_{\epsilon}^2 + \dot{\phi}_{\epsilon}^2$, one gets the estimate (\ref{eq:lb_e_phi}).
The next step is to integrate (\ref{eq:r_e_evol}). Note that by Propositions \ref{prop:earlyblueshift} and \ref{prop:lateblueshift}, we have $- \dot{r} \lesssim \max \{ \Omega^2, \epsilon^2 \}$, and $\Omega^2 \lesssim e^{2 K_- s}$, so
\begin{equation*}
|\dot{r}_{\epsilon}| \leq D_{NE} \epsilon + C D_{NE} \int_S^{\Delta_{\mathcal{B}} \epsilon^{-1}} (e^{2 K_- s'} s'^3 \epsilon + \epsilon^3 s') \, ds' \leq 2 D_{NE} \epsilon
\end{equation*}
where $C$ here is a constant independent of $\epsilon$ and $D_{NE}$, hence the second inequality follows for $S$ chosen sufficiently large and $\Delta_{\mathcal{B}}$ chosen sufficiently small. This improves (\ref{eq:b_e_bootstrap_r}) and in fact yields (\ref{eq:lb_e_r}) after a further integration.
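For concreteness, the two contributions to this integral can be estimated separately (a sketch, using $K_- < 0$):
\begin{equation*}
\int_S^{\Delta_{\mathcal{B}} \epsilon^{-1}} e^{2 K_- s'} (s')^3 \epsilon \, ds' \lesssim e^{2 K_- S} (1 + S^3) \, \epsilon, \qquad
\int_S^{\Delta_{\mathcal{B}} \epsilon^{-1}} \epsilon^3 s' \, ds' \leq \tfrac{1}{2} \Delta_{\mathcal{B}}^2 \, \epsilon,
\end{equation*}
so the right hand side is at most $2 D_{NE} \epsilon$ once $S$ is large and $\Delta_{\mathcal{B}}$ is small.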
The estimates (\ref{eq:lb_e_lapse}) and (\ref{eq:lb_e_maxwell}) then follow from integration of (\ref{eq:lapse_e_evol}), (\ref{eq:Q_e_evol}) and (\ref{eq:gauge_e_evol}), and this also improves the remaining two bootstrap assumptions.
\medskip \noindent
\underline{Step 4: Precise bounds for the scalar field}
\medskip \noindent
We now move onto the estimate (\ref{eq:lb_e_phidiff}). Note that $\delta \phi_{\epsilon} = \phi_{\epsilon} - (\phi_{\mathcal{L}})_{\epsilon}$, but recall that since $\phi_{\mathcal{L}}$ is exactly the solution to the linear charged wave equation in a Reissner-Nordstr\"om background having initial data $\lim_{s \to - \infty} \phi_{\mathcal{L}} = \epsilon$, it is clear that $(\phi_{\mathcal{L}})_{\epsilon}$ is exactly the solution to the linear charged wave equation in Reissner-Nordstr\"om with $\lim_{s \to - \infty} (\phi_{\mathcal{L}})_{\epsilon} (s) = 1$. Namely, we have
\begin{equation*}
(\ddot{\phi}_{\mathcal{L}})_{\epsilon} = - \frac{2 \dot{r}_{RN} (\dot{\phi}_{\mathcal{L}})_{\epsilon}}{r_{RN}} - q_0^2 \tilde{A}_{RN}^2 (\phi_{\mathcal{L}})_{\epsilon} - \frac{m^2 \Omega_{RN}^2}{4} (\phi_{\mathcal{L}})_{\epsilon}.
\end{equation*}
Hence subtracting this equation from (\ref{eq:phi_e_evol}), and using both the estimates of Section \ref{sec:einsteinrosen} and earlier within the proof of this proposition, one finds
\begin{equation} \label{eq:lb_e_equation}
\delta \ddot{\phi}_{\epsilon} = - \frac{2 \dot{r} \, \delta \dot{\phi}_{\epsilon}}{r} - q_0^2 \tilde{A}^2 \delta \phi_{\epsilon} - \frac{m^2 \Omega^2}{4} \delta \phi_{\epsilon} + J_{\phi} + \tilde{J}_{\phi},
\end{equation}
where $J_{\phi}$ is as in (\ref{eq:phi_e_evol}), $\tilde{J}_{\phi}$ arises from taking the differences of $r, \dot{r}, \tilde{A}, \Omega^2$ from their Reissner-Nordstr\"om quantities, and one gets the estimate
\begin{equation*}
|J_{\phi}(s)| + |\tilde{J}_{\phi}(s)| \lesssim
\begin{cases}
\epsilon^2 \Omega^2 & \text{ for } s \in \mathcal{R}, \\
\epsilon^2 & \text{ for } s \in \mathcal{N} \cup \mathcal{EB} \cup \mathcal{LB}.
\end{cases}
\end{equation*}
We now proceed in the usual way using the quantity
\begin{equation*}
\tilde{H}^{(\epsilon)} = r^4 | \delta \dot{\phi}_{\epsilon} |^2 + r^4 q_0^2 |\tilde{A}|^2 |\delta \phi_{\epsilon}|^2.
\end{equation*}
Then by using the equation (\ref{eq:lb_e_equation}), one finds that for all $s \in \mathcal{R} \cup \mathcal{N} \cup \mathcal{EB} \cup \mathcal{LB}$, we get
\begin{equation*}
\frac{d}{ds} \tilde{H}^{(\epsilon)} \lesssim \Omega^2 \tilde{H}^{(\epsilon)} + (|J_{\phi}| + |\tilde{J}_{\phi}|) \sqrt{ \tilde{H}^{(\epsilon)} }.
\end{equation*}
Using now the fact that $\lim_{s \to - \infty} \tilde{H}^{(\epsilon)}(s) = 0$ and $\int_{- \infty}^{\Delta_{\mathcal{B}} \epsilon^{-1}} \Omega^2$ is uniformly bounded in $\epsilon$, Gr\"onwall applied to this differential inequality gives:
\begin{equation}
\sqrt{\tilde{H}^{(\epsilon)}}(s) \lesssim \int_{- \infty}^{s} (|J_{\phi}(\tilde{s})| + |\tilde{J}_{\phi}(\tilde{s})| )\, d\tilde{s} \lesssim \epsilon^2 s.
\end{equation}
This of course will yield the estimate (\ref{eq:lb_e_phidiff}). Note that by Corollary \ref{cor:scattering}, one then has for $s \in \mathcal{LB}$,
\begin{gather}
\left| \phi_{\epsilon} - B e^{i \omega_{RN} s} - \overline{B} e^{- i \omega_{RN} s} \right| \lesssim \epsilon^2 s, \label{eq:lb_e_phi_est1} \\
\left| \dot{\phi}_{\epsilon} - i \omega_{RN} B e^{i \omega_{RN} s} + i \omega_{RN} \overline{B} e^{- i \omega_{RN} s} \right| \lesssim \epsilon^2 s. \label{eq:lb_e_phi_est2}
\end{gather}
\medskip \noindent
\underline{Step 5: The precise estimate for $\dot{r}_{\epsilon}$}
\medskip \noindent
We finally move to the estimate (\ref{eq:lb_e_r_precise}). We now use the differentiated version of the Raychaudhuri equation in the convenient form (\ref{eq:raych_transport}). We see that using (\ref{eq:lb_e_r}), (\ref{eq:lb_e_lapse}) and (\ref{eq:lb_e_maxwell}) we can find
\begin{equation*}
\frac{d}{ds}(- \dot{r}_{\epsilon}) - \frac{d}{ds} \log (\Omega^2) \cdot ( - \dot{r}_{\epsilon}) = 2 r ( \dot{\phi} \dot{\phi}_{\epsilon} + |\tilde{A}|^2 q_0^2 \phi \phi_{\epsilon}) + J_{\Omega},
\end{equation*}
where the error $J_{\Omega}$ satisfies $|J_{\Omega}| \lesssim \epsilon^3 s$.
Now we use (\ref{eq:lb_e_phi_est1}), (\ref{eq:lb_e_phi_est2}), alongside the estimates (\ref{eq:lb_rdiff}), (\ref{eq:lb_lapse}), (\ref{eq:lb_phidiff}), to find further that
\begin{equation*}
\frac{d}{ds}(- \dot{r}_{\epsilon}) - 2 K_- ( - \dot{r}_{\epsilon}) = 8 |B|^2 \omega_{RN}^2 r_- \epsilon + J_{\Omega} + \tilde{J}_{\Omega},
\end{equation*}
where $|\tilde{J}_{\Omega}| \lesssim \epsilon^3 s$. We now use a standard integrating factor to integrate between $s_{lin}$ and $s \in \mathcal{LB}$, yielding
\begin{equation*}
- \dot{r}_{\epsilon}(s) = e^{2 K_-(s - s_{lin})} (- \dot{r}_{\epsilon})(s_{lin}) + \int_{s_{lin}}^s e^{2 K_-(s - s')} (8 |B|^2 \omega_{RN}^2 r_- \epsilon + J_{\Omega}(s') + \tilde{J}_{\Omega}(s')) \, ds' .
\end{equation*}
One may then simply compute the relevant integrals to find that
\begin{equation*}
\left| - \dot{r}_{\epsilon} - \frac{8 |B|^2 \omega_{RN}^2 r_- \epsilon}{2 |K_-|} \right| \lesssim \epsilon^{-1} \Omega^2 + \epsilon^3 s.
\end{equation*}
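For concreteness, since $K_- < 0$ the kernel integrates to
\begin{equation*}
\int_{s_{lin}}^{s} e^{2 K_- (s - s')} \, ds' = \frac{1 - e^{2 K_- (s - s_{lin})}}{2 |K_-|},
\end{equation*}
so the constant forcing produces the main term $\frac{8 |B|^2 \omega_{RN}^2 r_- \epsilon}{2 |K_-|}$, while $\int_{s_{lin}}^{s} e^{2 K_- (s - s')} (|J_{\Omega}| + |\tilde{J}_{\Omega}|)(s') \, ds' \lesssim \epsilon^3 s$; the homogeneous term and the remaining exponentially decaying contributions are what give rise to the $\epsilon^{-1} \Omega^2$ part of the error.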
But for $s \geq s_O$, we know $\Omega^2 \lesssim \epsilon^{100}$, and (\ref{eq:lb_e_r_precise}) follows immediately.
\end{proof}
\subsection{The measure of the set $E_{\eta}$}
We now use Proposition~\ref{prop:epsilon_der}, or more precisely Corollary~\ref{cor:phase_der}, to control the measure of the set of values of $\epsilon$ such that condition (\ref{condition_eta}) holds. Let $\epsilon_0(M, \mathbf{e}, \Lambda, m^2, q_0) > 0$ be such that the results of Section~\ref{sec:protokasner} hold for $0 < |\epsilon| < \epsilon_0$. Then we have the following corollary:
\begin{corollary} \label{cor:condition_eta}
Let $\eta > 0$ be a sufficiently small constant. We define $E_{\eta}$ to be the set of $\epsilon$ such that the hairy black hole interior corresponding to $\phi = \epsilon$ on $\mathcal{H}$ obeys the condition (\ref{condition_eta}) at $s = s_i$:
\begin{equation} \label{eq:eeta}
E_{\eta} = \{ \epsilon \in (0, \epsilon_0) : |\Psi_i| \geq \eta \}.
\end{equation}
%
Then the set $E_{\eta}$ is non-empty, has $0$ as a limit point, and we have the following upper bound for the limiting density of values of $\epsilon$ violating (\ref{condition_eta}): there exists some constant $K$ such that
\begin{equation} \label{eq:density}
\limsup_{\tilde{\epsilon} \downarrow 0} \tilde{\epsilon}^{-1} | (0, \tilde{\epsilon}) \setminus E_{\eta} | \leq K \mathfrak{W} \eta.
\end{equation}
\end{corollary}
\begin{proof}
We will estimate the measure of the set $F_{\eta, \tilde{\epsilon}} = (0, \tilde{\epsilon}) \setminus E_{\eta} = \{ \epsilon \in (0, \tilde{\epsilon}): |\Psi_i| < \eta \}$. We first use Corollary~\ref{cor:protokasner} to change variable from $\epsilon$ to $\Theta(\epsilon)$; for $\epsilon_0$ and $\eta$ sufficiently small,
\begin{align*}
|F_{\eta, \tilde{\epsilon}}|
&= \int_0^{\tilde{\epsilon}} \mathbbm{1}_{\{|\Psi_i| < \eta\}} \, d \epsilon,\\[0.5em]
&\leq \int_0^{\tilde{\epsilon}} \mathbbm{1}_{\{|\sin(\Theta(\epsilon))| < \sqrt{\pi} \mathfrak{W} \eta\} } \, d \epsilon, \\[0.5em]
&= \int_{\Theta(\tilde{\epsilon})}^{+\infty} \mathbbm{1}_{\{|\sin \Theta| < \sqrt{\pi} \mathfrak{W} \eta\}} \, \left| \frac{d}{d\epsilon} \Theta(\epsilon) \right|^{-1} \, d \Theta,
\end{align*}
We now apply Corollary~\ref{cor:phase_der} along with the previous estimate (\ref{eq:vartheta}) for $\Theta(\epsilon)$. This will yield:
\begin{equation}
\frac{d}{d\epsilon} \Theta(\epsilon) = - 4 \sqrt{2} |B| \mathfrak{W} \, \Theta(\epsilon)^{3/2} + O(\Theta^{1/2} \log \Theta).
\end{equation}
Combining with the above, we therefore see that
\begin{equation*}
| F_{\eta, \tilde{\epsilon}} | \leq \int^{+\infty}_{\Theta(\tilde{\epsilon})} \mathbbm{1}_{\{|\sin \Theta| < \sqrt{\pi} \mathfrak{W} \eta\}} \, \frac{1}{4 \sqrt{2} |B|\mathfrak{W} }\, \Theta^{-3/2} \, d \Theta.
\end{equation*}
To evaluate this integral, note that if $\eta$ is taken sufficiently small, then the set $\{ |\sin\Theta| < \sqrt{\pi} \mathfrak{W} \eta \}$ is simply a union of intervals of width $2 \sqrt{\pi} \mathfrak{W} \eta + O(\eta^2)$ centred on the points of the lattice $\pi \mathbb{Z}$. So the integral is akin to taking a discrete sum of the form $\sum_{n \gtrsim \Theta(\tilde{\epsilon})} \frac{\eta}{n^{3/2}}$, multiplied by appropriate weights. Keeping only the weights $|B|$ and $\mathfrak{W}$ which depend on the background parameters, we have
\begin{equation}
|F_{\eta, \tilde{\epsilon}}| \lesssim |B|^{-1} \, \Theta(\tilde{\epsilon})^{-1/2} \eta \lesssim \mathfrak{W}\eta \tilde{\epsilon},
\end{equation}
using (\ref{eq:vartheta}) again in the last step. This yields (\ref{eq:density}), and the remainder of the corollary follows easily.
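In slightly more detail, the interval-counting step can be sketched as follows: setting $\delta := \sqrt{\pi} \mathfrak{W} \eta$ (assumed small), the set $\{ \Theta > 0 : |\sin \Theta| < \delta \}$ is contained in $\bigcup_{n \geq 0} [\pi n - 2\delta, \pi n + 2 \delta]$, whence
\begin{equation*}
\int^{+\infty}_{\Theta(\tilde{\epsilon})} \mathbbm{1}_{\{|\sin \Theta| < \delta\}} \, \Theta^{-3/2} \, d \Theta
\leq \sum_{n \geq \Theta(\tilde{\epsilon})/\pi - 1} 4 \delta \, \big( \max\{ \pi n - 2 \delta, \Theta(\tilde{\epsilon}) \} \big)^{-3/2}
\lesssim \delta \, \Theta(\tilde{\epsilon})^{-1/2};
\end{equation*}
combining this with the prefactor $\frac{1}{4\sqrt{2}|B|\mathfrak{W}}$ and $\Theta(\tilde{\epsilon})^{-1/2} \lesssim |B| \mathfrak{W} \tilde{\epsilon}$ from (\ref{eq:vartheta}) gives the bound just displayed.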
\end{proof}
\begin{remark}
Recalling Theorem~\ref{maintheorem2}, as well as studying the values of $\epsilon$ obeying (\ref{condition_eta}), we also need to investigate the measure of the set of values of $\epsilon$ obeying the `inversion' condition $\eta \leq |\Psi_i| \leq 1 - \sigma$ or the `non-inversion' condition $|\Psi_i| \geq 1 + \sigma$. This can be done in the same way as the proof of Corollary~\ref{cor:condition_eta}, and we leave it as an exercise.
\end{remark}
\section{Kasner regimes and inversion} \label{sec:kasner}
Now assuming the condition \eqref{condition_eta}, i.e.\ that $\epsilon$ lies in $E_{\eta}$ as defined in Theorem~\ref{maintheorem}, we will complete the proof of Theorem~\ref{maintheorem}. In particular, we claim there exists some $\epsilon_0(\eta) > 0$ depending on $\eta$ as well as the usual parameters $M, \mathbf{e}, \Lambda, m^2, q_0$, such that if $0 < \epsilon \leq \epsilon_0 = \epsilon_0(\eta)$ \textbf{and} \eqref{condition_eta} holds, then our corresponding hairy black hole interior contains a crushing spacelike singularity, with more quantitative Kasner-like asymptotics to follow.
Firstly, we briefly describe the expected dynamics between the end of the region $\mathcal{PK}$ and the eventual spacelike singularity at $s = s_{\infty}$. It turns out the intermediate dynamics will be highly sensitive to the value of $\Psi(s_i)$, often denoted as $\Psi_i$ in the sequel, where $\Psi(s)$ is given by \eqref{eq:Psi}.
Following the discussion of the introduction, particularly Section \ref{cosmo.intro}, if $|\Psi_i| > 1$, then the region $\mathcal{PK}_1 \subset \mathcal{PK}$ (see Corollary~\ref{cor:protokasner}) should already lie in a regime associated with positive Kasner exponents. For $\epsilon$ sufficiently small, we will show that many of the estimates of Proposition \ref{prop:oscillation+} will persist all the way to the spacelike singularity $\{ s = s_{\infty}\}$, meaning a single (stable) Kasner-like regime.
On the other hand, if $\eta \leq |\Psi_i| < 1$, then initially, the spacetime lies in a regime associated with one negative and two positive Kasner exponents -- known to be unstable in the cosmological setting. In this case, we shall observe the aforementioned \textit{Kasner inversion} phenomenon. In simplified terms, between $s=s_i$ and $s = s_{\infty}$, the quantity $\Psi$ will \textit{invert} from its initial value $\Psi_i = \Psi(s_i)$ to a final value $\Psi_f \approx \Psi_i^{-1}$ satisfying $|\Psi_f| > 1$. The spacetime in turn evolves into a regime associated with positive Kasner exponents, which in turn persists up to the singularity.
The cause and nature of such Kasner inversions, as well as why this allows for spacelike singularity formation, will be the focus of this section. Denote by $\mathcal{K}$ the region $\{s \geq s_i\} = \{s: r(s) \leq e^{- \delta_0 \epsilon^{-2}} r_- \}$. The section will be organized as follows:
\begin{itemize}
\item
In Section \ref{sub:kasner_why}, we give further background into why we distinguish between $|\Psi| < 1$ and $|\Psi| > 1$, and hence the need to study Kasner inversions. We will also introduce new renormalized quantities and the equations that they obey. In particular, we state a ``soft result'' [Proposition~\ref{prop:yes_inversion}] which says that if $|\Psi_i|<1$ then there is necessarily a Kasner inversion. As any application of this will be superseded by the propositions in the subsequent sections, we do not provide a proof of this proposition.
\item
In Section \ref{sub:kasner_prelim}, we state the main Proposition \ref{prop:k_ode} regarding the quantity $\Psi$ in the region $\mathcal{K}$. We then state the main bootstrap assumptions used in this proof, and prove some preliminary results used in the proof of Proposition \ref{prop:k_ode}.
\item
In Section \ref{sub:kasner_main}, we provide the proof of Proposition \ref{prop:k_ode}. This will entail a detailed estimate for all the error terms involved in deriving an ODE of the form (\ref{Psi.ODE}). Once completed, this proof shows that the spacetime will exist up to a spacelike singularity at some $s = s_{\infty}$ with $r(s_{\infty}) = 0$.
\item
In Section \ref{sub:kasner_geom}, we apply our results regarding $\Psi$ in Proposition \ref{prop:k_ode} to find quantitative estimates on geometric quantities such as $\Omega^2$. This will be crucial in showing quantitative closeness to Kasner spacetimes in Section \ref{section:quantitative}.
\end{itemize}
\subsection{Background on Kasner inversions} \label{sub:kasner_why}
In this section, we briefly explore the important role played by the quantity $\Psi$, and the differing evolutionary dynamics that occur when $|\Psi_i| = |\Psi(s_i)|$ is greater than or less than $1$.
The key actors are the following $2$ equations: the evolution equation (\ref{eq:r_evol_2}) for $-r \dot{r}$ and the Raychaudhuri equation (\ref{eq:raych}).
It will be useful to instead rewrite the Raychaudhuri equation as an evolution equation for $\log (\Omega^2 / (- \dot{r}))$, and rewrite part of the right hand side in terms of the quantity $\Psi$:
\begin{equation} \label{eq:raych_2}
\frac{d}{ds} \log \left( \frac{\Omega^2}{-\dot{r}} \right) = \frac{\dot{r}}{r} \Psi^2 + \frac{r^2}{r \dot{r}} |\tilde{A}|^2 q_0^2 |\phi|^2.
\end{equation}
Supposing for now that the rightmost term is integrable and small, one sees that the value of $\Psi^2$ determines the leading order behaviour of $\Omega^2$ in $r$. Letting $\alpha^2 = \inf \Psi^2$ and $\beta^2 = \sup \Psi^2$ in our region of interest, integrating (\ref{eq:raych_2}) gives:
\begin{equation} \label{eq:kasner_upperlower}
C_{lower} r^{\beta^2} \leq \frac{\Omega^2}{- \dot{r}} \leq C_{upper} r^{\alpha^2}.
\end{equation}
For this heuristic discussion, it is only important to note that $C_{lower}$ and $C_{upper}$ are independent of $r$; indeed, the important feature will be the powers of $r$.
In light of (\ref{eq:kasner_upperlower}), we turn to the evolution equation for $-r \dot{r}(s)$ written above. Inside a region such as $\mathcal{K}$ where $r$ is small, one expects that the dominant term on the right hand side is the term $\frac{Q^2 \Omega^2}{4 r^2}$, and (\ref{eq:kasner_upperlower}) will yield upper and lower bounds of the form:
\begin{equation}
- C_{lower} r^{\beta^2 - 2} \dot{r} \lesssim - \frac{d}{ds} (- r \dot{r}) \lesssim - C_{upper} r^{\alpha^2 - 2} \dot{r}.
\end{equation}
Integrating this expression, and ignoring the degenerate case where $\alpha$ or $\beta$ are equal to $1$ for now:
\begin{equation}
| r^{\beta^2-1}(s) - r^{\beta^2 - 1}(s_i) | \lesssim r \dot{r}(s) - r \dot{r}(s_i) \lesssim | r^{\alpha^2 - 1}(s) - r^{\alpha^2 - 1}(s_i) |.
\end{equation}
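The integration step here is just a change of variables (a sketch, absorbing $C_{lower}$, $C_{upper}$ into the implicit constants): since $-\dot{r}\, ds = - dr$ and $r$ is decreasing,
\begin{equation*}
\int_{s_i}^{s} \big( - C \, r^{p - 2} \dot{r} \big)(\tilde{s}) \, d\tilde{s} = C \int_{r(s)}^{r(s_i)} \tilde{r}^{\,p-2} \, d\tilde{r} = \frac{C}{p-1} \left( r^{p-1}(s_i) - r^{p-1}(s) \right), \qquad p \in \{ \alpha^2, \beta^2 \},
\end{equation*}
which is the origin of the powers $r^{\alpha^2 - 1}$, $r^{\beta^2 - 1}$ in the previous display.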
We can therefore make $2$ observations:
\begin{itemize}
\item
If $\beta < 1$, then $|r(s)^{\beta^2 - 1} - r(s_i)^{\beta^2 - 1}|$ is unbounded as $r(s) \to 0$, which suggests that $- r \dot{r}$ is unbounded from below; in particular, $r \dot{r}(s)$ would at some point become positive, meaning we would somehow have exited the trapped region. So something must have gone wrong, either in our a priori assumptions or in assuming that $\beta < 1$. In our context, we shall show the latter issue arises; there must be an `inversion' which forces $\beta \geq 1$.
\item
If $\alpha > 1$, then $|r(s)^{\alpha^2 - 1} - r(s_i)^{\alpha^2 - 1}|$ is bounded by a multiple of $r(s_i)^{\alpha^2 - 1}$, which in our case has order of magnitude $O (e^{- (\alpha - 1) \delta_0 \epsilon^{-2}}) \ll - r \dot{r}(s_i)$. So $- r \dot{r}$ changes little from its value at $s = s_i$. The insight is that if $\Psi(s_i) > 1$ initially, then there is a hope of closing estimates in a bootstrap argument, such that $- r \dot{r}$ only changes by this extremely small quantity, and thus allows for formation of a spacelike singularity.
\end{itemize}
We now formalize the former observation in the following Proposition~\ref{prop:yes_inversion}, which is stated in a way such that it could potentially be applied in a more general setting in which data is supplied at some $s=s_i$.
Proposition~\ref{prop:yes_inversion} proves the presence of an inversion \emph{by contradiction}: if $|\Psi(s_i)| < 1$, then in order to form a spacelike singularity with both $|Q|$ and $- r \dot{r}$ uniformly bounded below in a neighborhood of the singularity, one must have $\sup_{s \in \mathcal{K}} |\Psi(s)| \geq 1$.
As our applications of Proposition \ref{prop:yes_inversion} are subsumed under Proposition \ref{prop:k_ode} which is proved later in Section \ref{sub:kasner_main}, we omit its proof.
\begin{proposition} \label{prop:yes_inversion}
Let $(r, \Omega^2, \phi, Q, \tilde{A})$ be a solution to the system of equations (\ref{eq:raych})--(\ref{eq:phi_evol_2}), with data given at some $s = s_i$. Suppose that the solution exists in the interval $s_i \leq s < s_{\infty} < + \infty$, with $r(s_{\infty}) = 0$. Assume that there exist lower bounds on $|Q|$ and $- r \dot{r}$ in $s_i \leq s < s_{\infty}$:
\begin{equation}
\inf_{s_i \leq s < s_{\infty}} |Q(s)| > 0, \hspace{1cm} \inf_{s_i \leq s < s_{\infty}} - r \dot{r}(s) > 0.
\end{equation}
Then one must have $\sup_{s_i \leq s < s_{\infty}} |\Psi(s)| > 1$. In particular, if $|\Psi(s_i)| < 1$ there must exist some $s_* \in (s_i, s_{\infty})$ such that $|\Psi(s_*)| = 1$.
\end{proposition}
\begin{comment}
\begin{proof}[Proof of Proposition \ref{prop:no_inversion}]
We begin with similar bootstrap assumptions to those used in Section \ref{sec:oscillations}, although it is now crucial that we also keep track of powers of $r$. The three bootstrap assumptions are:
\begin{equation} \label{eq:c_bootstrap_lapse} \tag{C1}
\frac{\Omega^2}{- \dot{r}} (s) \leq 2 a^2 \cdot (- r^2 \dot{r}(s_i)) \cdot \left( \frac{r(s)}{r(s_i)} \right)^{\eta},
\end{equation}
\begin{equation} \label{eq:c_bootstrap_phi} \tag{C2}
|\phi(s)| \leq 2 ( P_0 + Z_0 ) \log r^{-1}(s),
\end{equation}
\begin{equation} \label{eq:c_bootstrap_Q} \tag{C3}
|Q(s)| \leq 2 Q_0.
\end{equation}
It is clear that all of these will hold in a neighborhood of $s = s_i$.
The first step is to provide a lower bound on the quantity $-r \dot{r}(s)$, which of course will allow for the formation of a spacelike singularity. Allowing the implicit constant in $\lesssim$ to depend on all the quantities $\eta, Z_0, Q_0, P_0, \Lambda, m^2 q_0$, one has from (\ref{eq:r_evol_2}) and the bootstraps that
\begin{equation*}
\left| \frac{d}{ds} (- r \dot{r})(s) \right| \lesssim
\frac{- \dot{r}}{r^2} \frac{\Omega^2}{- \dot{r}} (s) \lesssim
a^2 \cdot ( - r^{\eta-2} \dot{r} (s)) \cdot (- r^{2 - \eta} \dot{r}(s_i)).
\end{equation*}
Since $\eta > 1$, we may integrate this to yield
\begin{equation*}
| -r \dot{r}(s) + r \dot{r}(s_i) | \lesssim \frac{a^2}{\eta - 1} r^{\eta - 1}(s_i) \cdot ( - r^{2 - \eta} \dot{r} (s_i) ) \lesssim a^2 ( -r \dot{r} (s_i)).
\end{equation*}
This immediately yields (\ref{eq:c_r}). Note also that from (\ref{eq:gauge_evol}), essentially the same calculation gives
\begin{equation*}
| \tilde{A}(s) - \tilde{A}(s_i) | \lesssim a^2 ( - r \dot{r} (s_i)).
\end{equation*}
Combining this with (\ref{eq:c_r}) and the smallness assumption (\ref{eq:c_smallness}), we can get
\begin{equation} \label{eq:c_gauge}
\left| \frac{\tilde{A}}{- \dot{r}}(s) \right| \leq 2a \cdot \frac{r(s)}{r(s_i)} \leq 2 a.
\end{equation}
We now estimate the Maxwell charge $Q$. Using Maxwell's equation (\ref{eq:Q_evol}), the various bootstraps and the estimate (\ref{eq:c_gauge}), we have
\begin{align*}
|Q(s) - Q(s_i)|
&\leq \int^s_{s_i} |\tilde{A}| q_0^2 r^2 |\phi|^2 (\tilde{s}) \, d\tilde{s} \\
&\lesssim \int^s_{s_i} - \left| \frac{A}{- \dot{r}} \right| \dot{r}(\tilde{s}) \, d\tilde{s} \\
&\lesssim a \int^{r(s_i)}_0 \, dr = a r(s_i) \leq a^2.
\end{align*}
This is exactly (\ref{eq:c_Q}), and also improves the bootstrap (\ref{eq:c_bootstrap_Q}).
The next step is to estimate the scalar field. We actually begin by considering the derivative; from (\ref{eq:phi_evol_2}), the bootstraps, (\ref{eq:c_r}) and (\ref{eq:c_gauge}) one finds
\begin{equation*}
\left| \frac{d}{ds} (r^2 \dot{\phi}(s)) \right| \lesssim a^2 \cdot (- r \dot{r}(s_i)) \cdot (- \dot{r}(s)).
\end{equation*}
Thus one finds:
\begin{equation*}
| r^2 \dot{\phi}(s) - r^2 \dot{\phi}(s_i) | \lesssim a^3 ( - r \dot{r} (s_i) ).
\end{equation*}
Combining this with (\ref{eq:c_r}) once again, we get the estimate (\ref{eq:c_Psi}). Furthermore, by recalling that $\Psi = - r \frac{d \phi}{dr}$, it is straightforward to find $|\phi(s)| \leq (P_0 + 2 Z_0) \log r^{-1}(s)$, and hence improve the bootstrap (\ref{eq:c_bootstrap_phi}).
Finally, we deal with the null lapse $\Omega^2$. For this purpose, we choose $a$ small enough such that $D_0 a^2 < \eta - \eta^{1/2}$, hence by (\ref{eq:c_Psi}) one must have $|\Psi(s)| \geq \eta^{1/2}$ for $s_i \leq s < s_{\infty}$. Consider now the Raychaudhuri equation in the form (\ref{eq:raych_2}). As we have $\Psi^2 \geq \eta$, we have
\begin{equation*}
\frac{d}{ds} \log \left( \frac{\Omega^2}{- \dot{r}} \right) \leq \eta \frac{\dot{r}}{r},
\end{equation*}
which may be rewritten as
\begin{equation*}
\frac{d}{ds} \log \left( \frac{\Omega^2 r^{- \eta}}{- \dot{r}} \right) \leq 0.
\end{equation*}
So the expression inside the $\log(\cdot)$ is a decreasing quantity, hence one has:
\begin{equation*}
\frac{\Omega^2(s) \, r^{-\eta}(s)}{- \dot{r}(s)} \leq \frac{\Omega^2(s_i) \, r^{-\eta}(s_i)}{- \dot{r}(s_i)} \leq a^2 \cdot (- r^2 \dot{r} (s_i)) \cdot r^{-\eta}(s_i),
\end{equation*}
where the second line follows from the smallness assumption (\ref{eq:c_smallness}). We rearrange this and then apply (\ref{eq:c_r}) once again to get:
\begin{equation*}
\frac{\Omega^2(s)}{(- r \dot{r})^2} (s) \leq a^2 \cdot \frac{- r \dot{r} (s_i)}{ - r \dot{r}(s)} \cdot \left( \frac{r(s)}{r(s_i)}\right)^{\eta-1} \leq 2 a^2 \left( \frac{r(s)}{r(s_i)} \right)^{\eta-1}.
\end{equation*}
From this point, improving the bootstrap (\ref{eq:c_bootstrap_lapse}) is also straightforward. This concludes the proof of the proposition.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:yes_inversion}]
We use a contradiction argument. Suppose on the contrary that $|\Psi(s)| \leq 1$ for $s_i \leq s < s_{\infty}$. For concreteness, we also set the lower bounds on the quantities $|Q|$ and $- r \dot{r}$ to be
\begin{equation} \label{eq:inv_assmp}
\inf_{s_i \leq s < s_{\infty}} |Q(s)| = Q_{min}, \hspace{1cm} \inf_{s_i \leq s < s_{\infty}} - r \dot{r}(s) = k_{min}.
\end{equation}
Without loss of generality, it will be fine to suppose that $r(s_i) < Q_{min} / 2$. Looking at the equation (\ref{eq:r_evol_2}), this means that $- r \dot{r}$ is a decreasing quantity, and that by this monotonicity, we may in fact bound
\begin{equation} \label{eq:inv_omega_bound}
\int_{s_i}^{s_{\infty}} \frac{Q^2 \Omega^2}{4 r^2} \, d\tilde{s} \leq - 2 r \dot{r} (s_i) < + \infty.
\end{equation}
We first bound the Maxwell gauge field $\tilde{A}$ through a clever trick; using the equation (\ref{eq:gauge_evol}) one has
\begin{equation} \label{eq:inv_gauge}
|\tilde{A}(s) - \tilde{A}(s_i)| \leq \int_{s_i}^s \left| \frac{Q \Omega^2}{4 r^2} \right| \, d \tilde{s} \leq Q_{min}^{-1} \int_{s_i}^s \frac{Q^2 \Omega^2}{4r^2} \,d \tilde{s} < + \infty.
\end{equation}
So $\tilde{A}$ is bounded in $s$ as $s \to s_{\infty}$; we let $A_{max} = \sup_{s_i \leq s < s_{\infty}} |\tilde{A}|$.
Next, using the bound $|\Psi| \leq 1$ and $\Psi = - r \frac{d\phi}{dr}$ it is straightforward to see that $|\phi(s)| \leq P + \log r^{-1}(s)$ for some constant $P > 0$.
We now have all the ingredients to be able to use the Raychaudhuri equation (\ref{eq:raych_2}). We first need to investigate the second term on the right hand side of (\ref{eq:raych_2}), in particular showing that it is integrable. But indeed,
\begin{align*}
\int_{s_i}^{s_{\infty}} \frac{r}{-\dot{r}} |\tilde{A}|^2 q_0^2 |\phi|^2 (\tilde{s}) \, d \tilde{s}
&\leq k_{min}^{-2} q_0^2 A_{max}^2 \int_{s_i}^{s_{\infty}} - \dot{r} r^3(\tilde{s}) (P + \log r^{-1}(\tilde{s}))^2 \, d \tilde{s} \\
&= k_{min}^{-2} q_0^2 A_{max}^2 \int^{r(s_i)}_0 \tilde{r}^3 (P + \log \tilde{r}^{-1})^2 \, d \tilde{r} \\
&\eqqcolon V < +\infty.
\end{align*}
Now, the equation (\ref{eq:raych_2}) gives
\begin{equation*}
\frac{d}{ds} \log \left( \frac{\Omega^2}{- r \dot{r}}(s) \right) = \frac{\dot{r}}{r} (\Psi^2 - 1) - \frac{r}{-\dot{r}} |\tilde{A}|^2 q_0^2 |\phi|^2 \geq - \frac{r}{- \dot{r}} |\tilde{A}|^2 q_0^2 |\phi|^2,
\end{equation*}
which, combined with the previous estimate, integrates to give
\begin{equation} \label{eq:inv_lb}
\frac{\Omega^2}{- r \dot{r}} (s) \geq \frac{\Omega^2}{-r \dot{r}}(s_i) e^{-V}.
\end{equation}
The contradiction arises when trying to compute the integral in (\ref{eq:inv_omega_bound}) again using this bound. This is because from (\ref{eq:inv_lb}) and (\ref{eq:inv_assmp}), one gets
\begin{equation*}
\int_{s_i}^{s_{\infty}} \frac{Q^2 \Omega^2}{4 r^2} (\tilde{s}) \, d\tilde{s}
\geq \frac{Q_{min}^2 \Omega^2}{-4 r \dot{r}} (s_i) e^{-V} \cdot \int_{s_i}^{s_{\infty}} \frac{- \dot{r}}{r} (\tilde{s}) \, d \tilde{s} = + \infty.
\end{equation*}
This contradicts (\ref{eq:inv_omega_bound}), so in order to form a spacelike singularity with the conditions (\ref{eq:inv_assmp}), one must have $\sup_{s_i \leq s < s_{\infty}} |\Psi| > 1$.
\end{proof}
\end{comment}
\begin{comment}
\begin{proposition} \label{prop:no_inversion*}
Let $\eta > 1$ be some fixed constant. Suppose we have the extra assumption that
\begin{equation} \tag{$\dagger$} \label{eq:no_inversion}
\Psi(s_i) = \chi > \eta > 1.
\end{equation}
Then there exists some $\epsilon_0 > 0$ depending on $\eta$ as well as the usual parameters $M, \mathbf{e}, \Lambda, m^2, q_0$ such that if $|\epsilon| < \epsilon_0$ and the extra assumption (\ref{eq:no_inversion}) applies, then there exists some $D_K(M, \mathbf{e}, \Lambda, m^2, q_0, \eta) > 0$ such that throughout the region $\mathcal{K}$ we have
\begin{equation}
\left| - r \dot{r} (s) - \frac{4 |B|^2 \omega^2_{RN} \epsilon^2 r_-^2}{2 |K_-|} \right| \leq D_K \epsilon^3.
\end{equation}
In particular we hit the spacelike singularity $\{r = 0\}$ at some $s = s_{\infty} \sim \epsilon^{-2}$.
Furthermore, turning to the scalar field, there exists some $\delta_1(M, \mathbf{e}, \Lambda, m^2, q_0, \eta) > 0$ such that
\begin{equation} \label{eq:c_Psi}
| \Psi(s) - \chi | \leq D_K \exp( - \delta_1 \epsilon^{-2} ).
\end{equation}
In fact, as $s \to s_{\infty}$, then $\Psi(s)$ tends to a limit $\Psi_{\infty}$ which trivially also satisfies this bound. For the lapse $\Omega^2$, we have the upper bound
\begin{equation}
\Omega^2 \leq D_K e^{- \delta_1 \epsilon^{-2}} \left( \frac{r}{r_-} \right)^{\eta - 1}.
\end{equation}
\end{proposition}
\begin{proof}
We begin with some similar bootstraps to the region $\bar{\mathcal{O}}$, though now we also keep track of powers of $r$ in $\Omega^2$:
\begin{equation} \tag{C1} \label{eq:c_bootstrap_lapse}
\Omega^2 \leq D_O \exp(-5 \delta_1 \epsilon^{-2}) \left( \frac{r}{r_-} \right)^{\eta - 1},
\end{equation}
\begin{equation} \tag{C2} \label{eq:c_bootstrap_phi}
|\phi| \leq 100 \sqrt{ \frac{2 |K_-|}{\omega_{RN}}} \log \left( \frac{\epsilon^2}{\xi^* r^2} \right),
\end{equation}
\begin{equation} \tag{C3} \label{eq:c_bootstrap_Q}
|Q| \leq 2 |\mathbf{e}|.
\end{equation}
Here $\delta_1$ must be chosen small enough so that we improve on (\ref{eq:c_bootstrap_lapse}) at $s= s_i$; to be precise, choose $\delta_1 < \delta_0$ such that $\Omega^2 \leq D_O \exp(- 10 \delta_1 \epsilon^{-2}) ( r(s_i)/r_- )^{\eta - 1}.$
We next show that we must form a spacelike singularity. Turning to the equation (\ref{eq:r_evol_2}), the above discussion together with the bootstrap assumptions will show that
\begin{equation*}
\left| \frac{d}{ds} (- r \dot{r}) \right| \lesssim \exp( - 4 \delta_1 \epsilon^{-2}) r^{\eta-3}.
\end{equation*}
The implicit constant here is allowed to depend on $\eta$. Multiplying both sides by $- r \dot{r}$ and integrating, we see that
\begin{equation} \label{eq:c_rd}
| (- r \dot{r})^2(s_i) - (-r \dot{r})^2(s) | \lesssim \exp( - 4 \delta_1 \epsilon^{-2}) r(s_i)^{\eta-1} \lesssim \epsilon^5.
\end{equation}
Combining with the results of Theorem \ref{thm:oscillation}, we therefore get
\begin{equation}
\left| -r \dot{r}(s) - \frac{4 |B|^2 \omega_{RN}^2 r_-^2 \epsilon^2}{2 |K_-|} \right| \lesssim \epsilon^3.
\end{equation}
Hence we form a spacelike singularity at some $s = s_{\infty} \sim \epsilon^{-2}$. Note that using the equation (\ref{eq:gauge_evol}) for $\tilde{A}$, it is then straightforward to get
\begin{equation}
| |q_0 \tilde{A} (s)| - \omega_{RN} | \lesssim \epsilon^2.
\end{equation}
Using this and (\ref{eq:Q_evol}), together with the bootstraps, one may easily improve the bootstrap (\ref{eq:c_bootstrap_Q}).
We now move to the scalar field. From equation (\ref{eq:phi_evol_2}) and the bootstraps, we have
\begin{equation}
\left| \frac{d}{ds} (r^2 \dot{\phi})(s) \right| \lesssim \epsilon^2 \frac{\xi^* r^2}{\epsilon^2} \log \left ( \frac{\epsilon^2}{\xi^* r^2} \right) + \exp(-5 \delta_1 \epsilon^{-2}) \lesssim \epsilon^2.
\end{equation}
But the $s$-domain of the region $\mathcal{K}$ has length
\begin{equation*}
|s_{\infty} - s_i| \leq r(s_i)^2 / \inf_{\mathcal{K}} (- 2 r \dot{r}(s)) \leq \exp(- 2 \delta_0 \epsilon^{-2}) \frac{2 |K_-|}{4 |B|^2 \omega_{RN}^2 r_-^2} \epsilon^{-2}.
\end{equation*}
So we have
\begin{equation} \label{eq:c_phid}
|r^2 \dot{\phi} (s) - r^2 \dot{\phi} (s_i)| \lesssim \exp(-2 \delta_0 \epsilon^{-2}).
\end{equation}
Combining (\ref{eq:c_rd}) with (\ref{eq:c_phid}), the estimate (\ref{eq:c_Psi}) is immediate, and in particular $|\Psi| \leq 50 \sqrt{ \frac{2 |K_-|}{\omega_{RN}}}$. Recalling that $\Psi = - r \frac{d \phi}{ dr}$, we can use this and Theorem \ref{thm:oscillation} to improve the bootstrap (\ref{eq:c_bootstrap_phi}).
Finally, we study the lapse $\Omega^2$. Using Raychaudhuri in the form (\ref{eq:raych_2}), we see that
\begin{equation}
\frac{d}{ds} \log \left( \frac{\Omega^2}{-\dot{r}} \right) (s) \leq \frac{\dot{r}}{r} \inf_{\mathcal{K}} \Psi^2.
\end{equation}
So integrating we see that
\begin{equation}
\frac{\Omega^2}{- \dot{r}}(s) \cdot r(s)^{- \inf_{\mathcal{K}} \Psi^2}
\leq
\frac{\Omega^2}{- \dot{r}}(s_i) \cdot r(s_i)^{- \inf_{\mathcal{K}} \Psi^2} .
\end{equation}
Combining with the equation (\ref{eq:c_rd}) for $-r \dot{r}(s)$, we deduce that
\begin{equation}
\Omega^2 (s) \leq 2 \Omega^2 (s_i) \cdot r(s)^{\inf_{\mathcal{K}} \Psi^2 - 1} r(s_i)^{- (\inf_{\mathcal{K}} \Psi^2 - 1)}.
\end{equation}
To conclude, note that $\Psi^2 \geq ( \chi + O(e^{-\delta_1 \epsilon^{-2}}) )^2 \geq \eta$, and $\Omega^2 (s_i) \leq D_O \exp(- 10 \delta_0 \epsilon^{-2})$ by (\ref{eq:jo*_lapse}), so we improve (\ref{eq:c_bootstrap_lapse}) so long as $\delta_1$ is taken sufficiently small. This completes the proof.
\end{proof}
\begin{proposition} \label{prop:yes_inversion*}
Suppose, on the other hand, that there exists some $\eta > 0$ such that
\begin{equation} \tag{$\ddagger$} \label{eq:yes_inversion}
\eta < |\Psi(s_i)| < 1.
\end{equation}
Then there exists some $\epsilon_0 > 0$ again depending on $M, \mathbf{e}, \Lambda, m^2, q_0, \eta$ such that if $|\epsilon| < \epsilon_0$ then within the region $\mathcal{K}$, we must have $\sup_{\mathcal{K}} |\Psi(s)| \geq 1$.
\end{proposition}
\begin{proof}
Suppose otherwise, that we have $|\Psi(s)| \leq \Xi < 1$ for all $s \in \mathcal{K}$. We first use (\ref{eq:yes_inversion}) to produce some a priori estimates on the quantities $\phi$, $-r \dot{r}$. To do this, we first add a bootstrap assumption
\begin{equation} \tag{C4} \label{eq:c_bootstrap_r2}
\eta^{-2} \geq \frac{-r \dot{r}(s)}{-r \dot{r} (s_i)} \geq \eta^2.
\end{equation}
In particular, this forces $-r \dot{r} \gtrsim \epsilon^2$, so we still form a spacelike singularity at $s_{\infty} \sim \epsilon^{-2}$. Let us see what consequences this has for $\phi$. Even without the bootstrap, the initial assumption (\ref{eq:yes_inversion}) tells us $ \left| - r \frac{d \phi}{dr} \right| \leq 1$, so that combining with Theorem \ref{thm:oscillation} we have
\begin{equation} \label{eq:c_phi_init}
|\phi| \lesssim \log \left( \frac{\epsilon^2}{\xi^* r^2} \right).
\end{equation}
To improve the additional bootstrap assumption (\ref{eq:c_bootstrap_r2}), we will actually produce estimates on the quantity $r^2 \dot{\phi}(s)$, then apply the assumption (\ref{eq:yes_inversion}) to bound $- r \dot{r} (s)$. We need to start with an initial bound on $\Omega^2$ via Raychaudhuri; since $\Omega^2/(-\dot{r})$ is decreasing,
\begin{equation} \label{eq:c_lapse_init}
\Omega^2 r (s) \leq \frac{-r \dot{r}(s)}{-r \dot{r}(s_i)} \cdot \Omega^2 r(s_i) \leq D_O \eta^{-2} \exp( - \delta_0 \epsilon^{-2}).
\end{equation}
Therefore, using (\ref{eq:c_phi_init}) and (\ref{eq:c_lapse_init}) in (\ref{eq:r_evol_2}), along with an additional result $||q_0 \tilde{A}(s)| - \omega_{RN}| \lesssim \epsilon^2$ which we prove under looser assumptions in Section \ref{sub:kasner_main}, we get that
\begin{equation}
\left| \frac{d}{ds} (r^2 \dot{\phi})(s) \right| \lesssim \exp(- 2 \delta_0 \epsilon^{-2}) \log \left( \frac{\epsilon^2}{\xi^* r^2} \right).
\end{equation}
Integrating up and performing a substitution to the variable $r$, we see that
\begin{equation}
|r^2 \dot{\phi} (s) - r^2 \dot{\phi} (s_i)| \lesssim
\int^{\exp(- \delta_0 \epsilon^{-2})}_0 \exp(-2 \delta_0 \epsilon^{-2} ) \frac{1}{-\dot{r}} \log \left( \frac{\epsilon^2}{\xi^* r^2} \right) \, dr \lesssim \exp ( - 2 \delta_0 \epsilon^{-2}),
\end{equation}
where we used once again $- r \dot{r} \sim \epsilon^2$ in the final step.
To combine this with the assumption (\ref{eq:yes_inversion}), it is more convenient to write this in the form of a ratio:
\begin{equation}
\left| \frac{r^2 \dot{\phi} (s)}{ r^2 \dot{\phi} (s_i) } - 1 \right|
\lesssim \frac{ \Psi(s_i) }{ - r \dot{r} (s_i)} \cdot \exp(-2 \delta_0 \epsilon^{-2})
\lesssim \exp(- \delta_0 \epsilon^{-2}).
\end{equation}
Therefore, we improve the lower bound of (\ref{eq:c_bootstrap_r2}) via the following computation:
\begin{equation}
\frac{-r \dot{r}(s)}{ - r \dot{r} (s_i)} = \frac{\Psi(s_i)}{\Psi(s)} \frac{r^2 \dot{\phi} (s)}{ r^2 \dot{\phi}(s_i)} \geq \eta \, \Xi^{-1} (1 + C \exp (- \delta_0 \epsilon^{-2})) \geq \eta^{3/2}.
\end{equation}
To improve the upper bound of (\ref{eq:c_bootstrap_r2}), we note that the only way for $ - r \dot{r}$ to increase is via the $\Omega^2 / 4$ term in (\ref{eq:r_evol_2}); but the integral of this is treated easily using Raychaudhuri -- we will perform this calculation in more detail in Section \ref{sub:kasner_main}.
So we have shown that the estimate (\ref{eq:c_bootstrap_r2}) holds in $\mathcal{K}$ up to the spacelike singularity. We will provide a contradiction by showing that in fact, we cannot reach $r = 0$. In order to do this, look again at Raychaudhuri in the form (\ref{eq:raych_2}):
\begin{equation}
\frac{d}{ds} \log \left( \frac{\Omega^2}{- \dot{r}} \right) (s) \geq \frac{\dot{r}}{r} \Xi^2 + \frac{r^2}{r \dot{r}} |\tilde{A}|^2 q_0^2 |\phi|^2.
\end{equation}
Using all previous estimates, it is clear that the rightmost term of this inequality is integrable in $[s_i, s_{\infty}]$, so there exists some constant $k$ such that
\begin{equation}
\frac{\Omega^2}{- \dot{r}} r^{- \Xi^2} (s) \geq k \cdot \frac{\Omega^2}{- \dot{r}} r^{- \Xi^2} (s_i).
\end{equation}
The real upshot of this is that after applying (\ref{eq:c_bootstrap_r2}) once again, we find that there exists some constant $\tilde{k}$ such that $\Omega^2 (s) \geq \tilde{k} r^{\Xi^2 - 1} (s)$ in the whole of $\mathcal{K}$. But assuming a lower bound on $|Q|$ that we once again delay until Section \ref{sub:kasner_main}, this yields that
\begin{equation}
\int^s_{s_i} \frac{Q^2 \Omega^2}{4 r^2} \, ds = \int_{r(s)}^{r(s_i)} \frac{Q^2 \Omega^2}{ - 4 r^2 \dot{r}} \, dr \gtrsim \int_{r(s)}^{r(s_i)} \tilde{k} \epsilon^{-2} r^{\Xi^2 - 2} \, dr = \frac{\tilde{k} \epsilon^{-2}}{1- \Xi^2} (r(s)^{\Xi^2 - 1} - r(s_i)^{\Xi^2 - 1}).
\end{equation}
In particular, this integral is certainly unbounded as $s \to s_{\infty}$. But this is a genuine contradiction, as looking at (\ref{eq:r_evol_2}) this would imply that $- r \dot{r}$ turns negative for some $s \in [s_i, s_{\infty}]$, contradicting the fact that we are in a trapped region! This concludes the proposition, which suggests that there must indeed be a Kasner inversion given (\ref{eq:yes_inversion}).
\end{proof}
\end{comment}
In what follows, we will obtain more \emph{quantitative information} on the behavior of spacetime, both when the inversion occurs (which will then imply Proposition~\ref{prop:yes_inversion}), and when it does not.
\begin{comment}
\begin{remark}
Note we have not mentioned the case where $|\Psi(s_i)|$ is close to $0$ -- it should surely be true that we have inversion here as well! But what makes this difficult is that it becomes much harder to control $-r \dot{r}$ and $Q$; in fact we may not be able to rule out odd behaviour such as a singular Cauchy horizon or a spacelike singularity where $Q = 0$ and $\Omega^2$ behaves `badly' as in Schwarzschild. This will be outside the scope of this paper.
We also did not discuss the case where $\Psi(s_i)$ is close to $1$ -- this more subtle case will be dealt with later once we have a more sophisticated understanding of the inversion.
\end{remark}
\end{comment}
\subsection{The bootstraps, preliminary estimates and statement of Proposition~\ref{prop:k_ode}} \label{sub:kasner_prelim}
In this section, we state and initiate the proof of Proposition \ref{prop:k_ode}. As well as asserting the eventual formation of an $r=0$ spacelike singularity, the key content of this proposition is that we can control the quantity $\Psi(s)$ via a certain nonlinear ODE -- which, as we show in Section \ref{sub:kasner_geom}, in turn allows us to control the other geometric quantities including $\Omega^2(s)$.
\begin{proposition} \label{prop:k_ode}
Fix some $\eta \in \mathbb{R}$ with $0 <\eta < \min \{ \frac{1}{2} \mathfrak{W}, \frac{1}{4} \}$. Then there exists some $\epsilon_0(\eta) > 0$ depending on $\eta$ as well as the usual parameters $M, \mathbf{e}, \Lambda, m^2, q_0$, such that if both
\begin{equation} \label{eq:condition} \tag{$*$}
0 < \epsilon \leq \epsilon_0(\eta) \hspace{0.5cm} \textbf{and} \hspace{0.5cm} |\Psi_i| \coloneqq |\Psi(s_i)| \geq \eta,
\end{equation}
then the solution of the system (\ref{eq:raych})--(\ref{eq:phi_evol_2}) exists in the interval $s \in (- \infty, s_{\infty})$, with $s_i < s_{\infty} < + \infty$ and $r(s) \to 0$ as $s \to s_{\infty}$. In fact one has in the region $\mathcal{K} = \{ s \geq s_i: r(s) > 0 \}$ the following lower bound on $- r \dot r(s)$:
\begin{equation} \label{eq:k_r_lower}
- r \dot{r} (s) \geq 2 |B|^2 \mathfrak{W}^2 r_-^2 \omega_{RN} \eta^2 \epsilon^2.
\end{equation}
Regarding the quantity $\Psi$ defined in (\ref{eq:Psi}), there exists some constant $D_K(\eta, M, \mathbf{e}, \Lambda, m^2, q_0) > ~0$, a real number $\alpha = \alpha(\epsilon)$ and a function $\mathcal{F}(s)$ satisfying:
\begin{equation*}
| \alpha - \Psi_i | \leq D_K \exp( -\delta_0 \epsilon^{-2} ), \hspace{0.5cm} |\mathcal{F}(s)| \leq D_K \exp( -\delta_0 \epsilon^{-2} ) r(s) = D_K r_- e^{- \delta_0 \epsilon^{-2} - R},
\end{equation*}
where we define $R = \log (r_- / r(s))$, so that $\Psi = \Psi(R)$ satisfies the following ODE in the region $\mathcal{K}$:
\begin{equation} \label{eq:k_ode_main}
\frac{d \Psi}{d R} = - \Psi \left( \Psi - \alpha \right) \left( \Psi - \frac{1}{\alpha} \right) + \mathcal{F}.
\end{equation}
Finally, one has the following upper and lower bounds for $|\Psi(s)|$ for $s \in \mathcal{K}$:
\begin{equation} \label{eq:k_Psi_upperlower}
\min\{ |\Psi_i|, |\Psi_i^{-1}| \} - D_K e^{-\delta_0 \epsilon^{-2}} \leq |\Psi(s)|
\leq \max\{ |\Psi_i|, |\Psi_i^{-1}| \} + D_K e^{-\delta_0 \epsilon^{-2}}.
\end{equation}
\end{proposition}
\begin{remark}
We use the same notation $D_K = D_K(\eta, M, \mathbf{e}, \Lambda, m^2, q_0) > 0$ throughout the various lemmas and propositions of Section \ref{sec:kasner}. Note that it is possible to track the dependence of $D_K$ on $\eta$, and hence strengthen the result from having a fixed $\eta$ in (\ref{eq:condition}) to allowing $\eta = \eta(\epsilon)$ to decay as $\epsilon \to 0$, for instance $\eta = \epsilon^{0.01}$. However for the purpose of a simpler exposition we do not pursue that here.
\end{remark}
The proof of Proposition \ref{prop:k_ode} will be broken into several lemmas in Sections \ref{sub:kasner_prelim} and \ref{sub:kasner_main}. We first list the three main bootstrap assumptions of the region $\mathcal{K}$:
\begin{equation} \label{eq:k_bootstrap_Psi} \tag{K1}
|\Psi| \leq 4 \eta^{-1},
\end{equation}
\begin{equation} \label{eq:k_bootstrap_r} \tag{K2}
- r \dot{r}(s) \geq
|B|^2 \mathfrak{W}^2 r_-^2 \omega_{RN} \eta^2 \epsilon^2,
\end{equation}
\begin{equation} \label{eq:k_bootstrap_Q} \tag{K3}
\frac{|Q_{\infty}|}{2} \leq |Q(s)| \leq 2 |Q_{\infty}|.
\end{equation}
Here $Q_{\infty} = Q_{\infty}(M, \mathbf{e}, \Lambda) \neq 0$ is defined in (\ref{eq:qinfty}); see also Lemma \ref{lem:jo_charge_retention}, which shows that $Q_{\infty}$ lies strictly between $\mathbf{e}/2$ and $\mathbf{e}$. In light of Proposition \ref{prop:oscillation+} and Corollary \ref{cor:protokasner}, these bootstrap assumptions hold in a neighborhood of $s = s_i$.
In the remainder of Section \ref{sub:kasner_prelim}, we state and prove two preliminary lemmas, the first of which provides estimates for the Maxwell quantities $\tilde{A}(s)$ and $Q(s)$, as well as the scalar field $\phi(s)$. The second lemma then produces a crucial lower bound for $|\Psi|$ as well as a useful preliminary upper bound on $\Omega^2(s)$.
\begin{lemma} \label{lem:k_prelim_1}
Assuming the bootstraps (\ref{eq:k_bootstrap_Psi}), (\ref{eq:k_bootstrap_r}), (\ref{eq:k_bootstrap_Q}), we have the following preliminary estimates on $\phi$ as well as the Maxwell quantities $Q$ and $\tilde{A}$:
\begin{equation} \label{eq:k_phi}
|\phi(s)| \leq 2 \eta^{-1} \log \left( \frac{8 |B|^2 \mathfrak{W}^2 r_-^2 \epsilon^2}{r^2(s)} \right),
\end{equation}
\begin{equation} \label{eq:k_Q}
|Q(s) - Q(s_i)| \leq D_K \exp(- \delta_0 \epsilon^{-2}),
\end{equation}
\begin{equation} \label{eq:k_gauge}
| |q_0 \tilde{A}|(s) - \omega_{RN} | \leq D_K \epsilon^2.
\end{equation}
In particular, using (\ref{eq:k_Q}) we immediately improve the bootstrap assumption (\ref{eq:k_bootstrap_Q}).
\end{lemma}
\begin{lemma} \label{lem:k_prelim_2}
Assuming the bootstraps (\ref{eq:k_bootstrap_Psi}), (\ref{eq:k_bootstrap_r}), (\ref{eq:k_bootstrap_Q}), we have the following upper bound on $- r \dot{r}$ and estimate on $r^2\dot{\phi}$ which we use to get a corresponding lower bound on $|\Psi|$:
\begin{equation} \label{eq:k_r_upper}
- r \dot{r} (s) \leq 4 |B|^2 \mathfrak{W}^2 \omega_{RN} r_-^2 \epsilon^2 + D_K \epsilon^4 \log (\epsilon^{-1}),
\end{equation}
\begin{equation} \label{eq:k_phid}
\left| \frac{r^2 \dot{\phi} (s)}{ r^2 \dot{\phi} (s_i) } - 1 \right| \leq D_K \exp( - \delta_0 \epsilon^{-2} ),
\end{equation}
\begin{equation} \label{eq:k_Psi_lower}
|\Psi| \geq \eta - D_K \exp( - \delta_0 \epsilon^{-2}).
\end{equation}
In particular, $\Psi$ will never vanish, and thus never change sign, in the region $\mathcal{K}$. Moreover, we have the following initial upper bound for $\Omega^2(s)$:
\begin{equation} \label{eq:k_lapse_upper}
\frac{\Omega^2}{-\dot{r}} (s) \leq \frac{\Omega^2}{- \dot{r}} (s_i) \leq \exp(- 50 \delta_0 \epsilon^{-2}).
\end{equation}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lem:k_prelim_1}]
To prove the preliminary bound on $\phi$, we recall once again that we may rewrite $\Psi$ as $- r \frac{d \phi}{dr}$. So we use (\ref{eq:k_bootstrap_Psi}) as follows:
\begin{equation} \label{eq:psiPsi1}
|\phi(s) - \phi(s_i)| \leq \int^{r(s_i)}_{r(s)} \frac{|\Psi(r)|}{r} \, dr \leq 4 \eta^{-1} \log \frac{r(s_i)}{r(s)} \leq 2 \eta^{-1} \log \frac{r^2(s_i)}{r^2(s)}.
\end{equation}
But from \eqref{eq:pk_phi_asymp} in Corollary \ref{cor:protokasner}, one easily finds (using $\eta < \frac{1}{2} \mathfrak{W}$) that [recall the definition \eqref{eq:xik}]:
\begin{equation} \label{eq:psiPsi2}
|\phi(s_i)| \leq \frac{4}{\sqrt{\pi}} \mathfrak{W}^{-1} \log \left( \frac{r_-^2 \epsilon^2}{\xi_K r^2(s_i)} \right) \leq 2 \eta^{-1} \log \left( \frac{8 |B|^2 \mathfrak{W}^2 r_-^2 \epsilon^2}{r^2(s_i)} \right).
\end{equation} where we have used $C_{YK} \approx \frac{\sqrt{\pi}}{2}\mathfrak{W} \sin(\Theta(\epsilon)) + O(\epsilon^2 \log (\epsilon^{-1}))$.
Combining the two inequalities \eqref{eq:psiPsi1} and \eqref{eq:psiPsi2} will clearly yield (\ref{eq:k_phi}).
For the Maxwell gauge field $\tilde{A}(s)$, we use the following trick (in using this trick it is crucial that (\ref{eq:k_bootstrap_Q}) gives us a lower bound $|Q| \geq |Q_{\infty}| / 2$): as $r(s) \leq \exp(-\delta_0 \epsilon^{-2}) r_- \leq |Q_{\infty}|/4$ for $s \in \mathcal{K}$, (\ref{eq:r_evol_2}) tells us that $- r \dot{r}(s)$ is decreasing in $\mathcal{K}$. Furthermore, since we must have $- r \dot{r} (s) > 0$ for $s \in \mathcal{K}$, the same equation provides a bound on the integral:
\begin{equation}
\int_{s_i}^s \frac{Q^2 \Omega^2}{4 r^2} (\tilde{s}) \, d\tilde{s} \leq - 2 r \dot{r} (s_i).
\end{equation}
Hence using the lower bound of (\ref{eq:k_bootstrap_Q}) once more, the equation (\ref{eq:gauge_evol}) yields
\begin{equation}
| q_0 \tilde{A}(s) - q_0 \tilde{A}(s_i) | \leq \int_{s_i}^s \frac{|Q| \Omega^2}{4 r^2} (\tilde{s}) \, d \tilde{s} \leq \frac{2}{|Q_{\infty}|} \int_{s_i}^s \frac{Q^2 \Omega^2}{4 r^2} (\tilde{s}) \, d \tilde{s} \leq \frac{- 4 r \dot{r}(s_i)}{|Q_{\infty}|}.
\end{equation}
But since Proposition \ref{prop:oscillation+} tells us that $- r \dot{r}(s_i) \sim \epsilon^2$, we may use this to deduce (\ref{eq:k_gauge}).
To close the estimate on $Q$, we simply integrate (\ref{eq:Q_evol}):
\begin{equation}
|Q(s) - Q(s_i)| \leq \int^s_{s_i} q_0^2 |\tilde{A}| r^2 |\phi|^2 (\tilde{s}) \, d\tilde{s} \lesssim \int^s_{s_i} r^2(\tilde{s}) \log^2 \left( \frac{8 |B|^2 \mathfrak{W}^2 r_-^2 \epsilon^2}{r^2(\tilde{s})} \right) \, d\tilde{s}.
\end{equation}
But in the region $\mathcal{K}$, where $r(s) \leq e^{- \delta_0 \epsilon^{-2}} r_-$ and the interval of integration has length $|s- s_i| = O (\epsilon^{-2} e^{- 2 \delta_0 \epsilon^{-2}})$ by (\ref{eq:k_bootstrap_r}), it is straightforward to deduce (\ref{eq:k_Q}) and improve the bootstrap (\ref{eq:k_bootstrap_Q}).
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:k_prelim_2}]
The first estimate (\ref{eq:k_r_upper}) is immediate by monotonicity. Indeed, so long as $r(s) < |Q(s)|$, as shown in Lemma~\ref{lem:k_prelim_1}, the quantity $- r \dot{r}(s)$ is decreasing in $s$. So the estimate (\ref{eq:pk_r}) evaluated at $s= s_i$ yields (\ref{eq:k_r_upper}).
For the lower bound on $|\Psi|$, we first produce the estimate (\ref{eq:k_phid}) regarding $r^2 \dot{\phi}$. Using the monotonicity of $- \Omega^{-2} \dot{r}(s)$ from the Raychaudhuri equation (\ref{eq:raych}), and Proposition \ref{prop:oscillation+}, we first find the preliminary estimate
\begin{equation}
\frac{\Omega^2}{- \dot{r}} (s) \leq \frac{\Omega^2}{- \dot{r}} (s_i) \leq \frac{\Omega^2 r(s_i)}{-r \dot{r}(s_i)} \leq \exp(- 50 \delta_0 \epsilon^{-2}) \cdot \epsilon^{-2} \exp(- \delta_0 \epsilon^{-2}) \leq \exp(- 50 \delta_0 \epsilon^{-2}).
\end{equation}
This is (\ref{eq:k_lapse_upper}), and we use this together with (\ref{eq:k_r_upper}) and the conclusions of Lemma \ref{lem:k_prelim_1} to yield from \eqref{eq:phi_evol_2}
\begin{equation}
\left| \frac{d}{ds} (r^2 \dot{\phi})(s) \right| \lesssim \exp( - \delta_0 \epsilon^{-2}) \cdot r(s) \log \left( \frac{8 |B|^2 \mathfrak{W}^2 r_-^2 \epsilon^2}{r^2(s)} \right) \lesssim \epsilon^2 \exp( - \delta_0 \epsilon^{-2}).
\end{equation}
Integrating this up, and using once again that the interval of integration has length bounded by $O(\epsilon^{-2} e^{ - 2\delta_0 \epsilon^{-2}})$, we therefore find that
\begin{equation} \label{eq:k_phiddiff}
|r^2 \dot{\phi} (s) - r^2 \dot{\phi} (s_i)| \lesssim \exp(- 3 \delta_0 \epsilon^{-2}).
\end{equation}
Finally, since $|\Psi(s_i)| \geq \eta$ and $- r \dot{r}(s_i)$ is bounded below using (\ref{eq:jo_r}), we have $|r^2 \dot{\phi} (s_i)| = - r \dot{r}(s_i) \cdot |\Psi(s_i)| \gtrsim \epsilon^2$. Combining this with (\ref{eq:k_phiddiff}) we obtain (\ref{eq:k_phid}).
Therefore, for $\epsilon$ sufficiently small we see that
\begin{equation}
|\Psi(s)| = \frac{ - r \dot{r} (s_i) }{- r \dot{r} (s)} \cdot \frac{ r^2 \dot{\phi} (s)}{ r^2 \dot{\phi} (s_i)} \cdot |\Psi(s_i)| \geq \eta - D_K \exp(- \delta_0 \epsilon^{-2}).
\end{equation}
The inequality here used (\ref{eq:k_phid}) and the fact that $-r \dot{r}(s)$ is decreasing for $s \in \mathcal{K}$, as well as the original assumption (\ref{eq:condition}).
\end{proof}
\subsection{The dynamical system for $\Psi$ and the proof of Proposition~\ref{prop:k_ode}} \label{sub:kasner_main}
In this section, we complete the proof of Proposition \ref{prop:k_ode}. The main step will be to find the ODE (\ref{eq:k_ode_main}). For this purpose, we start with a lemma concerning the first and second derivatives of $\Psi$ with respect to the timelike variable $r$.
\begin{lemma} \label{lem:k_first_second}
Assume that the bootstraps (\ref{eq:k_bootstrap_Psi}), (\ref{eq:k_bootstrap_r}), (\ref{eq:k_bootstrap_Q}) hold. Then if $\mathcal{E}$ is defined such that
\begin{equation} \label{eq:k_first_derivative}
\frac{d\Psi}{dr} =
\Psi \frac{1}{-r\dot{r}} \frac{\Omega^2}{-4\dot{r}}
\left( 1 - \frac{Q^2}{r^2} - m^2 r^2 |\phi|^2 -r^2 \Lambda \right)
+
\mathcal{E},
\end{equation}
then we have the following estimates in the region $s \in \mathcal{K}$:
\begin{equation} \label{eq:k_error}
| \mathcal{E} | \leq D_K \exp( - 2 \delta_0 \epsilon^{-2}) \cdot r , \hspace{0.5cm}
\left| \frac{d \mathcal{E}}{d r} \right| \leq D_K \exp( - 2 \delta_0 \epsilon^{-2} ) \cdot r^{-1}.
\end{equation}
For the second derivative, if the error term $\mathcal{F}_1$ is defined such that
\begin{equation} \label{eq:k_ode_1}
\frac{d^2 \Psi}{d r^2} - 2 \Psi^{-1} \left( \frac{ d \Psi } { d r} \right)^2 - \frac{\Psi^2 - 2}{r} \frac{d\Psi}{dr} = \mathcal{F}_1,
\end{equation}
then for $s \in \mathcal{K}$, the expression $\mathcal{F}_1$ satisfies the following bound:
\begin{equation} \label{eq:k_error_1}
|\mathcal{F}_1| \leq D_K \exp(- 2\delta_0 \epsilon^{-2}) r^{-1}.
\end{equation}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lem:k_first_second}]
Assuming the bootstraps, the conclusions of Lemmas \ref{lem:k_prelim_1} and \ref{lem:k_prelim_2} will hold. In light of (\ref{eq:r_evol_2}) and (\ref{eq:phi_evol_2}), differentiating $\Psi$ in the variable $r$ one yields:
\begin{equation} \label{eq:k_first_derivative*}
\frac{d\Psi}{dr} = \frac{1}{\dot{r}} \frac{d \Psi}{ds} =
\Psi \frac{1}{-r\dot{r}} \frac{\Omega^2}{-4\dot{r}}
\left( 1 - \frac{Q^2}{r^2} - m^2 r^2 |\phi|^2-r^2 \Lambda \right)
+
\mathcal{E},
\end{equation}
where the error $\mathcal{E}$ is given by
\begin{equation} \label{eq:k_error*}
\mathcal{E} \coloneqq \frac{1}{-r\dot{r}} \frac{d}{dr} ( r^2 \dot{\phi} )
=
\frac{1}{-r\dot{r}} \left( \frac{q_0^2 |\tilde{A}|^2 r^3 \phi}{-r \dot{r}} + \frac{m^2 \Omega^2 r^2 \phi }{-4 \dot{r}} \right).
\end{equation}
We seek the estimates (\ref{eq:k_error}). Using Lemmas \ref{lem:k_prelim_1} and \ref{lem:k_prelim_2}, it is straightforward to get
\begin{equation} \label{eq:k_error_est1}
|\mathcal{E}| \lesssim \epsilon^{-4} r^3 \log \left( \frac{\epsilon^2}{\xi^* r^2} \right) + \exp(- 2 \delta_0 \epsilon^{-2}) \epsilon^{-2} r^2 \log \left( \frac{\epsilon^2}{\xi^* r^2} \right) \lesssim \exp(- 2 \delta_0 \epsilon^{-2}) r.
\end{equation}
For the derivative estimate in (\ref{eq:k_error}), we simply differentiate the expression (\ref{eq:k_error*}) term by term. This is rather cumbersome, and we simplify the exposition by merely considering the new multiplicative factors that arise when differentiating the various terms, noting that it is most crucial to keep track of the additional powers of $r^{-1}$ that arise:
\begin{enumerate}[(i)]
\item \label{k_i}
Differentiating $(- r \dot{r})^{-1}$ in $r$ will yield an extra multiplicative factor of
\begin{equation*}
\frac{\Omega^2}{- 4\dot{r}} \frac{1}{-r\dot{r}} \left( \frac{Q^2}{r^2} + m^2 r^2 |\phi|^2+ r^2 \Lambda - 1 \right)
\end{equation*}
so since $Q \neq 0$ this will contribute a multiplicative factor of $r^{-2}$. Note that though one could fear that additional powers of $\epsilon^{-1}$ will appear (e.g.\ in the $-r \dot{r}$ appearing in the denominator), these will be negated by the smallness of $\frac{\Omega^2}{- \dot{r}}$, see (\ref{eq:k_lapse_upper}).
\item
Differentiating $\tilde{A}$ in $r$ will introduce a similar factor of
\begin{equation*}
\frac{Q \Omega^2}{- 4 r^2 \dot{r}}
\end{equation*}
which we treat in exactly the same way as (\ref{k_i}).
\item
Differentiating $\phi$ in $r$ simply yields
\begin{equation*}
\frac{d \phi}{dr} = - \Psi r^{-1},
\end{equation*}
so that in light of bootstrap (\ref{eq:k_bootstrap_Psi}) providing an upper bound on $|\Psi|$, differentiating this term contributes only one power of $r^{-1}$.
\item
Differentiating the $\frac{\Omega^2}{4\dot{r}}$ in the second term of (\ref{eq:k_error*}) introduces, by the Raychaudhuri equation (\ref{eq:raych_2}), an extra multiplicative factor of
\begin{equation*}
\frac{r}{\dot{r}^2} ( |\dot{\phi}|^2 + q_0^2 |\tilde{A}|^2 \phi^2 ) = \frac{\Psi^2}{r} + \frac{r^3 q_0^2 |\tilde{A}|^2 \phi^2}{ (- r \dot{r})^2} \leq \frac{\Psi^2}{r} + 1,
\end{equation*}
where we used the bootstrap (\ref{eq:k_bootstrap_r}) and Lemma \ref{lem:k_prelim_1}. So differentiating this term contributes at worst one power of $r^{-1}$.
\item \label{k_v}
Finally, differentiating any powers of $r$ that arise in (\ref{eq:k_error*}) will trivially only lose one power of $r$.
\end{enumerate}
From (\ref{k_i})--(\ref{k_v}) we get the estimate (\ref{eq:k_error}).
Due to this estimate and (\ref{eq:k_lapse_upper}), we can find an initial upper bound on the first derivative using (\ref{eq:k_first_derivative}):
\begin{equation} \label{eq:k_psi_r_weak}
\left| \frac{d \Psi}{d r} \right| \leq D_K \exp(-50 \delta_0 \epsilon^{-2}) r^{-2} + D_K \exp(- 2 \delta_0 \epsilon^{-2}) r \leq D_K \exp( - 5 \delta_0 \epsilon^{-2}) r^{-2}.
\end{equation}
We now treat the second derivative. We differentiate the expression (\ref{eq:k_first_derivative}) again in $r$, and find
\begin{align} \label{eq:k_second_derivative}
\frac{d^2 \Psi}{dr^2} =\ &
\Psi^{-1} \left( 2 \frac{d\Psi}{dr} - \mathcal{E} \right) \left( \frac{d \Psi}{dr} - \mathcal{E} \right)
+ \left( \frac{d\Psi}{dr} - \mathcal{E} \right) \left( \frac{\Psi^2}{r} + \frac{r}{\dot{r}^2} q_0^2 |\tilde{A}|^2 \phi^2 \right) \\[0.5em] \nonumber &
+ \left( \frac{d\Psi}{dr} - \mathcal{E} \right) \frac{d}{dr} \log \left | 1 - \frac{Q^2}{r^2} - m^2 r^2 \phi ^2 - r^2 \Lambda\right | + \frac{d \mathcal{E}}{dr}.
\end{align}
To explain the derivation of the equation (\ref{eq:k_second_derivative}): we take (\ref{eq:k_first_derivative}), subtract $\mathcal{E}$ from both sides, take a logarithm, and then differentiate to write
\begin{equation*}
\frac{d}{dr} \log \left| \frac{d\Psi}{dr} - \mathcal{E} \right| = \frac{d}{dr} \left( \log |\Psi| - \log (- r \dot{r}) + \log \left( \frac{\Omega^2}{- 4 \dot{r}} \right) + \log \left| 1 - \frac{Q^2}{r^2} - m^2 r^2 \phi^2 - r^2 \Lambda \right| \right ),
\end{equation*}
from which (\ref{eq:k_second_derivative}) follows after using the usual evolution equations (\ref{eq:r_evol_2}) and (\ref{eq:raych_2}).
We now wish to recover the ODE (\ref{eq:k_ode_1}) along with (\ref{eq:k_error_1}), i.e.\ we just need to show that all error terms present within (\ref{eq:k_second_derivative}) are bounded by $D_K \exp(- 2 \delta_0 \epsilon^{-2}) r^{-1}$. This is mostly straightforward by (\ref{eq:k_error}) and (\ref{eq:k_psi_r_weak}), with the most complicated term being the one involving the $\log$.
Firstly, using (\ref{eq:k_psi_r_weak}) and (\ref{eq:k_error}) as well as the lower bound (\ref{eq:k_Psi_lower}) for $|\Psi|$, we see that
\begin{equation} \label{eq:k_ode_part1}
\left|
\Psi^{-1} \left( 2 \frac{d \Psi}{dr} - \mathcal{E} \right) \left( \frac{d \Psi}{dr} - \mathcal{E} \right) - 2 \Psi^{-1} \left( \frac{d \Psi}{dr} \right)^2
\right|
\lesssim \exp( - 2\delta_0 \epsilon^{-2} ) r^{-1}.
\end{equation}
Next, note that by (\ref{eq:k_r_upper}), (\ref{eq:k_phi}) and (\ref{eq:k_gauge}), the expression involving $\phi^2$ can be bounded by
\begin{equation*}
\frac{r}{\dot{r}^2} q_0^2 |\tilde{A}|^2 \phi^2 \lesssim \epsilon^{-4} r^3 \log^2(r^{-1}) \leq \exp(- \delta_0 \epsilon^{-2}) r^2 \leq \exp(-2 \delta_0 \epsilon^{-2}) r,
\end{equation*}
from which we may also deduce
\begin{equation} \label{eq:k_ode_part2}
\left|
\left( \frac{d\Psi}{dr} - \mathcal{E} \right) \left( \frac{\Psi^2}{r} + \frac{r}{\dot{r}^2} q_0^2 |\tilde{A}|^2 \phi^2 \right)
- \frac{\Psi^2}{r} \frac{d \Psi}{dr}
\right|
\lesssim \exp(- 2\delta_0 \epsilon^{-2}) r^{-1}.
\end{equation}
We now move onto the more tedious term involving $\frac{d}{dr} \log(\cdots)$. Computing the derivative using the system of equations (\ref{eq:raych})--(\ref{eq:phi_evol_2}) one eventually finds
\begin{align*}
\frac{d}{dr} \log \left| 1 - \frac{Q^2}{r^2} - m^2 r^2 \phi^2 -r^2 \Lambda\right|
&=
- \frac{2}{r} \frac{1 - f}{1 - g},
\end{align*}
where the expressions $f$ and $g$ are given by
\begin{equation*}
f = \frac{r^3}{Q^2} \left ( - \frac{Q \tilde{A} q_0^2 \phi^2}{ - \dot{r}} + m^2 r \phi^2 - m^2 r \phi \Psi + r \Lambda \right ),
\end{equation*}
\begin{equation*}
g = \frac{r^2}{Q^2} \cdot ( 1 - m^2 r^2 \phi^2 -r^2 \Lambda).
\end{equation*}
Using the bootstraps along with Lemmas \ref{lem:k_prelim_1} and \ref{lem:k_prelim_2}, it is straightforward to deduce that $|f| + |g| \lesssim r^2$.
The conclusion of this computation is therefore that
\begin{equation} \label{eq:k_ode_aux}
\left|
\frac{d}{dr} \log \left | 1 - \frac{Q^2}{r^2} - m^2 r^2 \phi^2 - r^2 \Lambda \right | + \frac{2}{r}
\right|
\lesssim r,
\end{equation}
which along with (\ref{eq:k_psi_r_weak}) and (\ref{eq:k_error}) finally yields the required estimate
\begin{equation} \label{eq:k_ode_part3}
\left|
\left( \frac{d \Psi}{dr} - \mathcal{E} \right) \frac{d}{dr} \log \left | 1 - \frac{Q^2}{r^2} - m^2 r^2 \phi^2 - r^2 \Lambda \right |
+ \frac{2}{r} \frac{d \Psi}{dr}
\right|
\lesssim \exp(-2 \delta_0 \epsilon^{-2}) r^{-1}.
\end{equation}
The desired equation (\ref{eq:k_ode_1}) is then found by combining the identity (\ref{eq:k_second_derivative}) with the estimates (\ref{eq:k_ode_part1}), (\ref{eq:k_ode_part2}), (\ref{eq:k_ode_part3}) and (\ref{eq:k_error}) to get the required error term $\mathcal{F}_1$ satisfying (\ref{eq:k_error_1}).
\end{proof}
We have now finished all the preparation for the proof of Proposition \ref{prop:k_ode}. Before turning to the actual proof below, we first give a brief sketch for the benefit of the reader. After changing variables to $R = \log (r_- / r(s))$, we may integrate up the second-order ODE (\ref{eq:k_ode_1}) to derive a first-order ODE of a similar form to (\ref{eq:k_ode_main}). More precisely, one finds an equation of the form
\begin{equation} \label{eq:k_ode_main_sketch}
- r \frac{ d \Psi}{dr} = \frac{d \Psi}{d R} = - \Psi (\Psi^2 - K \Psi + 1) + \text{error},
\end{equation}
where the expression $K$ appearing here is a constant of integration. By evaluating (\ref{eq:k_ode_main_sketch}) at $s = s_i$ we find that $K \approx \Psi_i + \Psi_i^{-1}$.
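As a purely heuristic sketch of this last step (the precise version, with exponentially small errors, is carried out in the proof below): at $s = s_i$ both the error term and the derivative $\frac{d\Psi}{dR}$ turn out to be negligible, so that
\begin{equation*}
0 \approx - \Psi_i \left( \Psi_i^2 - K \Psi_i + 1 \right), \qquad \text{hence} \qquad K \approx \Psi_i + \frac{1}{\Psi_i},
\end{equation*}
where we have divided by $\Psi_i$, which is legitimate in view of the lower bound (\ref{eq:k_Psi_lower}).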
We treat (\ref{eq:k_ode_main_sketch}) as a one-dimensional dynamical system for the unknown $\Psi = \Psi(R)$, and consider what happens as $r \to 0$, i.e.\ $R \to + \infty$. Assuming the error to be negligible, the dynamical system (\ref{eq:k_ode_main_sketch}) will prohibit $\Psi$ from growing too large -- in particular allowing us to improve the bootstrap (\ref{eq:k_bootstrap_Psi}). This in turn will allow us to improve the bootstrap (\ref{eq:k_bootstrap_r}) due to the definition of $\Psi$ and the upper bound on $r^2 \dot{\phi}$ from (\ref{eq:k_phid}) [recall that bootstrap \eqref{eq:k_bootstrap_Q} has already been improved in the earlier section].
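To see heuristically why (\ref{eq:k_ode_main_sketch}) prevents $\Psi$ from growing too large, neglect the error term and take $\Psi, K > 0$ for concreteness; then whenever $\Psi > K$ one has $\Psi^2 - K \Psi + 1 > 1$, and hence
\begin{equation*}
\frac{d \Psi}{d R} \approx - \Psi \left( \Psi^2 - K \Psi + 1 \right) < - \Psi < 0,
\end{equation*}
so the cubic vector field immediately drives $\Psi$ back down. The rigorous version of this observation is the continuity argument used in the proof below to improve (\ref{eq:k_bootstrap_Psi}).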
Having improved all the bootstraps, the lower bound (\ref{eq:k_bootstrap_r}) allows us to continue the solution all the way to some $s = s_{\infty}$ with $r(s_{\infty}) = 0$. To obtain (\ref{eq:k_ode_main}), we again integrate up the second-order ODE (\ref{eq:k_ode_1}), but by determining the constant of integration $K$ teleologically at $R = + \infty$ (i.e.\ $r\rightarrow0$), we get the precise bound for the error term $\mathcal{F}$ as in Proposition \ref{prop:k_ode}.
Finally, some soft arguments using known upper and lower bounds for $|\Psi|$ will allow us to determine that $K \geq 2$, so that writing $K = \alpha + \alpha^{-1}$ we get the required ODE (\ref{eq:k_ode_main}). The remaining assertions are then straightforward. We now make this argument precise.
\begin{proof}[Proof of Proposition \ref{prop:k_ode}]
Performing the change of variables $R = \log ( r_- / r(s)) = \log r_- - \log r(s)$ on (\ref{eq:k_ode_1}), one finds the ODE
\begin{equation} \label{eq:k_ode_3}
\frac{d^2 \Psi}{d R^2} - 2 \Psi^{-1} \left( \frac{d \Psi}{d R} \right)^2 + (\Psi^2 - 1) \frac{d \Psi}{d R}
= r^2 \mathcal{F}_1.
\end{equation}
Multiplying by the integrating factor $\Psi^{-2}$, we write the left hand side as a total derivative:
\begin{equation} \label{eq:k_ode_4}
\frac{d}{dR} \left( \Psi^{-2} \frac{d \Psi}{dR} + \Psi + \frac{1}{\Psi} \right)
= \Psi^{-2} r^2 \mathcal{F}_1.
\end{equation}
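(For the reader's convenience, the verification that the left hand side of (\ref{eq:k_ode_4}) is indeed $\Psi^{-2}$ times the left hand side of (\ref{eq:k_ode_3}) is the one-line computation
\begin{equation*}
\frac{d}{dR} \left( \Psi^{-2} \frac{d \Psi}{dR} \right) + \frac{d}{dR} \left( \Psi + \frac{1}{\Psi} \right)
= \Psi^{-2} \frac{d^2 \Psi}{dR^2} - 2 \Psi^{-3} \left( \frac{d \Psi}{dR} \right)^2 + \left( 1 - \Psi^{-2} \right) \frac{d \Psi}{dR},
\end{equation*}
which is well defined since $\Psi$ does not vanish in $\mathcal{K}$ by (\ref{eq:k_Psi_lower}).)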
Due to (\ref{eq:k_Psi_lower}) and (\ref{eq:k_error_1}), the right hand side of (\ref{eq:k_ode_4}) can be bounded by $ 4 \, D_K \eta^{-2} \exp( - 2 \delta_0 \epsilon^{-2}) r \leq \exp( - \delta_0 \epsilon^{-2}) e^{-R}$. So this error is integrable as $R \to + \infty$, and we may proceed to integrate up the equation (\ref{eq:k_ode_4}).
For now, we can only integrate (\ref{eq:k_ode_4}) in a finite bootstrap region; for $R_0 > R_i := \log (r_- / r(s_i)) = \delta_0 \epsilon^{-2}$ lying in our bootstrap region there exists a constant of integration $K_{R_0}$ and an error term $\mathcal{F}_{R_0}(R)$ such that for $R \in [R_i, R_0]$, the following holds:
\begin{equation} \label{eq:k_ode_5}
\Psi^{-2} \frac{d \Psi}{d R} + \Psi - K_{R_0} + \frac{1}{\Psi} = \mathcal{F}_{R_0}(R).
\end{equation}
The choice of $K_{R_0}$ is made such that $\mathcal{F}_{R_0}(R_0) = 0$, and hence from the aforementioned bound on $\mathcal{F}_1$, one has $|\mathcal{F}_{R_0}(R)| \leq \exp( -\delta_0 \epsilon^{-2}) e^{-R}$. We also rewrite the above in the form
\begin{equation} \label{eq:k_ode_6}
\frac{d \Psi}{d R} = - \Psi ( \Psi^2 - K_{R_0} \Psi + 1 ) + \Psi^2 \mathcal{F}_{R_0}.
\end{equation}
In order to proceed, we must estimate the constant of integration $K_{R_0}$. For this purpose, we evaluate (\ref{eq:k_ode_5}) at $R = R_i$, and then apply Proposition \ref{prop:oscillation+} (\eqref{eq:pk_lapse} specifically). This proposition, together with (\ref{eq:k_first_derivative}) will give
\begin{equation*}
\left| \Psi^{-2} \frac{d \Psi}{d R} (R_i) \right | \leq \exp ( -2 \delta_0 \epsilon^{-2}),
\end{equation*}
so that when evaluating (\ref{eq:k_ode_5}) at $R=R_i$, one finds the estimate
\begin{equation} \label{eq:k_ode_const}
\left| K_{R_0} - \Psi_i - \frac{1}{\Psi_i} \right| \leq 2 \exp( -2 \delta_0 \epsilon^{-2} ).
\end{equation}
In particular, for $\epsilon$ chosen sufficiently small one can get the key upper bound $|K_{R_0}| \leq \frac{5}{3} \eta^{-1}$. Here we used the estimate (\ref{eq:pk1_Psi}) at $s=s_i$ in Corollary \ref{cor:protokasner} and the fact that $\eta \leq \min \{\frac{1}{2} \mathfrak{W}, \frac{1}{4} \}$, as well as $\pi^{-\frac{1}{2}}< \frac{2}{3}$.
We can use this to improve the bootstraps (\ref{eq:k_bootstrap_Psi}) and (\ref{eq:k_bootstrap_r}). We first improve the upper bound (\ref{eq:k_bootstrap_Psi}) on $|\Psi|$; we prove $|\Psi| \leq \frac{5}{3} \eta^{-1}$. Without loss of generality, suppose that $\Psi$, and hence $K_{R_0}$, are positive for $s \in \mathcal{K}$, and suppose for contradiction that $\sup_{s \in \mathcal{K}} \Psi > \frac{5}{3} \eta^{-1}$.
So we may choose $R_1$ to be $R_1 = \inf \{ R > R_i: \Psi(R) = \frac{5}{3} \eta^{-1} \}$. This trivially implies that $\frac{d \Psi}{d R} (R_1) \geq 0$. However, looking at (\ref{eq:k_ode_6}) for any $R_0 \geq R_1$ we get
\begin{equation*}
\frac{d \Psi}{d R}(R_1) = - \left( \tfrac{5}{3} \eta^{-1} \right)^2 \left( \tfrac{5}{3} \eta^{-1} - K_{R_0} \right) - \tfrac{5}{3} \eta^{-1} + \Psi^2 \mathcal{F}_{R_0} (R_1) < 0,
\end{equation*}
where the final step follows from $K_{R_0} \leq \frac{5}{3} \eta^{-1}$ and $ \Psi^2 |\mathcal{F}_{R_0}(R_1)| \leq \Psi^2 \exp( - 2 \delta_0 \epsilon^{-2} ) \leq \eta^{-1}$ for $\epsilon$ small. This is a contradiction, and thus ensures that $\sup_{s \in \mathcal{K}} |\Psi(s)| \leq \frac{5}{3} \eta^{-1}$. This improves the bootstrap (\ref{eq:k_bootstrap_Psi}).
For the remaining bootstrap (\ref{eq:k_bootstrap_r}), we combine the above with the estimate (\ref{eq:k_phid}). To be precise, we have
\begin{equation}
- r \dot{r} (s) = - r \dot{r} (s_i) \cdot \frac{ \Psi(s_i) }{ \Psi(s)} \cdot \frac{ r^2 \dot{\phi}(s) }{ r^2 \dot{\phi}(s_i) } \geq \frac{3}{5} \eta^2 \cdot ( -r \dot{r} (s_i)) \cdot \frac{ r^2 \dot{\phi}(s)}{r^2 \dot{\phi}(s_i)}.
\end{equation}
So that once we apply Proposition \ref{prop:oscillation+} and (\ref{eq:k_phid}), one finds the lower bound
\begin{equation} \label{eq:k_r_eventuallb}
- r \dot{r} (s) \geq \frac{3}{5} \eta^2 \cdot [ 4 |B|^2 \mathfrak{W}^2 \omega_{RN} r_-^2 \epsilon^2 - D_K \epsilon^4 \log(\epsilon^{-1}) ],
\end{equation}
which clearly improves (\ref{eq:k_bootstrap_r}) for $\epsilon$ sufficiently small. So the bootstrap argument is complete, and in light of (\ref{eq:k_r_eventuallb}), we conclude that the spacetime extends all the way to $r = 0$, i.e.\ $R = + \infty$.
To find the final one-dimensional dynamical system (\ref{eq:k_ode_main}) we require an argument involving taking limits. Consider the identity (\ref{eq:k_ode_5}); another consequence is that for $R_0 > R_1 \geq R_i$,
\begin{equation*}
|K_{R_0} - K_{R_1}| = | \mathcal{F}_{R_0}(R_1) | \leq e^{- \delta_0 \epsilon^{-2}} e^{-R_1}.
\end{equation*}
In particular, if $(R_n)_{n \in \mathbb{N}}$ is a sequence with $R_n \to + \infty$, then the sequence $(K_{R_n})_{n \in \mathbb{N}}$ is Cauchy, and the limit is independent of the sequence taken. So there exists some $K \in \mathbb{R}$ such that $K_R \to K$ as $R \to + \infty$, which by (\ref{eq:k_ode_const}) also satisfies
\begin{equation} \label{eq:k_constant}
|K - \Psi_i - \Psi_i^{-1}| \leq 2 \exp(- 2 \delta_0 \epsilon^{-2}).
\end{equation}
Hence we may take the limit in equation (\ref{eq:k_ode_6}) where we fix $R$ and take $R_0 \to + \infty$. We find that there will exist some function $\mathcal{F}$ with $\Psi^2 \mathcal{F}_{R_0}(R) \to \mathcal{F}(R)$ as $R_0 \to + \infty$, also satisfying $|\mathcal{F}| \leq D_K e^{- \delta_0 \epsilon^{-2}} e^{-R}$, such that one has
\begin{equation} \label{eq:k_ode_7}
\frac{d \Psi}{d R} = - \Psi ( \Psi^2 - K \Psi + 1) + \mathcal{F}.
\end{equation}
By construction, $K$ has the same sign as $\Psi(s)$ in $\mathcal{K}$. We argue now that $|K| \geq 2$. Suppose otherwise that $|K| < 2$, which implies that $\Psi^2 - K \Psi + 1$ is bounded below by a positive constant $\beta$. Then (\ref{eq:k_ode_7}) implies that
\begin{equation*}
\frac{d |\Psi|}{d R} \leq - \beta |\Psi| + |\mathcal{F}|.
\end{equation*}
We then apply Gr\"onwall's inequality, finding that
\begin{equation*}
|\Psi(R)| \leq e^{- \beta(R - R^*)} |\Psi(R^*)| + \int^{R}_{R^*} e^{- \beta(R - \tilde{R})} | \mathcal{F} (\tilde{R}) | \, d \tilde{R} \xrightarrow{R \to + \infty} 0.
\end{equation*}
But this is a contradiction to the lower bound (\ref{eq:k_Psi_lower})! Hence $|K| \geq 2$, and we may write $K = \alpha + \alpha^{-1}$ for some $\alpha \in \mathbb{R}$. By $|K| \leq \frac{5}{3} \eta^{-1}$, it is clear that $|\alpha| \leq \frac{5}{3} \eta^{-1}$ also. So from (\ref{eq:k_constant}), we can deduce that
\begin{equation*}
| ( \alpha - \Psi_i ) (\alpha - \Psi_i^{-1}) | \lesssim \exp( - 2 \delta_0 \epsilon^{-2}) \eta^{-1} \lesssim \exp( - 2 \delta_0 \epsilon^{-2}).
\end{equation*}
So noting that we have left ourselves the freedom to interchange $\alpha \leftrightarrow \alpha^{-1}$, and that the smaller of the two factors on the left hand side is at most the square root of their product, we may choose $\alpha$ accordingly such that $|\alpha - \Psi_i| \lesssim \exp(- \delta_0 \epsilon^{-2})$ as required.
So we have arrived at the ODE (\ref{eq:k_ode_main}). For the final statement in Proposition \ref{prop:k_ode}, note that the upper bound follows from a rather straightforward analysis of this ODE, while the lower bound arises from a straightforward modification of the proof of (\ref{eq:k_Psi_lower}) in Lemma \ref{lem:k_prelim_2}.
\end{proof}
\subsection{Geometric features of the region $\mathcal{K}$ in the inversion case $|\Psi_i|<1$} \label{sub:kasner_geom}
In this section, we make use of Proposition \ref{prop:k_ode} to derive more quantitative information regarding the quantities $r(s)$ and $\Omega^2(s)$ in the region $s \in \mathcal{K}$, focusing for now on the interesting case where $|\Psi_i| < 1$ and there is a Kasner inversion. In particular, we will estimate the value of $r(s)$ where the inversion occurs, and bound $\Omega^2(s)$ in such a way that we can infer quantitative closeness to Kasner-like spacetimes before and after the inversion.
To make precise statements about the convergence of the metric towards Kasner-like geometries in these regions, we will have to assume further that $\Psi_i$, and therefore the quantities $\alpha$ and $\alpha^{-1}$ of Proposition \ref{prop:k_ode}, are bounded strictly away from $1$ in absolute value, the value at which several important quantities begin to degenerate\footnote{Assuming the Kasner correspondence of Section \ref{cosmo.intro}, having $|\alpha| = 1$ would imply a spacetime of Kasner exponents $0$, $1/2$ and $1/2$, which already begins to display degenerate features in the BKL picture, see Section~\ref{cosmo.intro}.}.
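To illustrate the role of the threshold $|\alpha| = 1$ (assuming the Kasner correspondence indicated in Section~\ref{cosmo.intro} and anticipating the metric form obtained in Theorem~\ref{thm.***} below), the associated Kasner exponents are
\begin{equation*}
p_1 = \frac{\alpha^2 - 1}{\alpha^2 + 3}, \qquad p_2 = p_3 = \frac{2}{\alpha^2 + 3}, \qquad p_1 + p_2 + p_3 = 1,
\end{equation*}
so that $\alpha^2 = 1$ corresponds precisely to the degenerate exponents $(0, \tfrac{1}{2}, \tfrac{1}{2})$ mentioned in the footnote.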
So for this section we strengthen the condition (\ref{eq:condition}) to the following assumption on $\Psi_i$, for any fixed $0< \sigma< 1/4$:
\begin{equation} \tag{$**$} \label{inversion_assumption}
\eta \leq |\Psi_i| \leq 1 - \sigma.
\end{equation}
In light of Corollary \ref{cor:condition_eta}, or more precisely the remark following it, the assumption (\ref{inversion_assumption}) is not vacuous; for $\eta$ sufficiently small and any choice of $\sigma \in \left( 0, \frac{1}{4}\right)$, there are certainly arbitrarily small $\epsilon$ such that (\ref{inversion_assumption}) is satisfied, and in fact the measure of this set of $\epsilon$ is controlled, as claimed in Theorem~\ref{maintheorem2}.
Assuming (\ref{inversion_assumption}), we make precise the region of spacetime where the inversion occurs in the following lemma.
\begin{lemma} \label{lem:inv_interval}
Given $n \in \mathbb{N}$ with $n \geq 2$, there exists $\epsilon_0(n, \eta, \sigma) > 0$ such that if $0 < |\epsilon| < \epsilon_0(n, \eta, \sigma)$ \textbf{and} the assumption (\ref{inversion_assumption}) holds, then for any $z > 0$ such that $z \in [|\alpha| + \epsilon^n, |\alpha|^{-1} - \epsilon^n]$, there exists a unique $s_z \in \mathcal{K}$ such that $|\Psi(s_z)| = z$. For this domain of $z$, the function $z \mapsto s_z$ is increasing, smooth and invertible, and we may define the inversion interval $\mathcal{K}_{inv}^n$ to be $ \{ s_{in}(\epsilon) \leq s \leq s_{out}(\epsilon) \}$ with $|\Psi(s_{in})| = |\alpha| + \epsilon^n$ and $|\Psi(s_{out})| = |\alpha|^{-1} - \epsilon^n$.
Moreover there exists a constant $D_I(M ,\mathbf{e}, \Lambda, m^2, q_0, \eta, \sigma, n) > 0$ depending on $\eta$ and $\sigma$ as well as the usual parameters $M, \mathbf{e}, \Lambda, m^2, q_0$ such that we have the following for all $s \in \mathcal{K}_{inv}^n$:
\begin{equation} \label{eq:inv_interval}
\left| \epsilon^2 \cdot \log \frac{r_-}{r(s)} - \frac{b_-^{-2}}{2 (1 - \alpha^2)} \right| \leq D_I \epsilon^2 \log(\epsilon^{-1}).
\end{equation}
\end{lemma}
\begin{remark}
The idea is that Lemma \ref{lem:inv_interval} identifies precisely the region where the quantity $\Psi$ transitions from having absolute value smaller than $1$ to having absolute value greater than $1$, and that this transition occurs entirely within a region where $\log (\frac{r_-}{ r(s)})$ is of size $\sim \epsilon^{-2}$, but the $\log (\frac{r_-}{ r(s)})$-difference within the region is only $O(\log(\epsilon^{-1}))$.
\end{remark}
In what follows, we will for the most part take $n=2$ and define $\mathcal{K}_{inv}:= \mathcal{K}_{inv}^{n=2}$.
\begin{proof}
We shall prove Lemma \ref{lem:inv_interval} in four steps:
\begin{itemize}
\item
First, we show using equations (\ref{eq:raych_2}) and (\ref{eq:r_evol_2}) that when $\log(\frac{r_- } {r(s)}) \leq \frac{1}{2} b_-^{-2} \epsilon^{-2} (1 - \alpha^{2})^{-1} - O(\log (\epsilon^{-1}))$, we still have $- r \dot{r} (s) \approx - r \dot{r} (s_i)$ and $\Psi(s) \approx \Psi_i \approx \alpha$.
\item
Next we show, using the same equations, that conversely, once we proceed to $\log(\frac{r_- } {r(s)}) \geq \frac{1}{2} b_-^{-2} \epsilon^{-2} (1 - \alpha^2)^{-1} + O(\log (\epsilon^{-1}))$, we must have $|\Psi(s)| \geq |\alpha| + \epsilon^n$, hence identifying an $s = s_{in}(\epsilon)$ which satisfies (\ref{eq:inv_interval}).
\item
Now applying Proposition \ref{prop:k_ode}, particularly the ODE (\ref{eq:k_ode_main}), we find that we proceed from $|\Psi| =|\alpha| + \epsilon^n$ to $|\Psi| = |\alpha|^{-1} - \epsilon^n$ in an $R$-interval of length $O(\log (\epsilon^{-1}))$, thus showing (\ref{eq:inv_interval}) for the whole of $s \in \mathcal{K}_{inv}$.
\item
Finally we use this ODE (\ref{eq:k_ode_main}) again to deduce that $\frac{d \Psi}{d R}$ is bounded strictly away from $0$ for $|\Psi| \in [ |\alpha| + \epsilon^n, |\alpha|^{-1} - \epsilon^n ]$, which proves the remaining assertions of the lemma.
\end{itemize}
For ease of notation we suppose, without loss of generality, that $\Psi$ and $\alpha$ are positive in this proof.
By (\ref{eq:k_Psi_upperlower}), we know that $\Psi \geq \alpha - \epsilon^2$ in $\mathcal{K}$. Therefore, one has from (\ref{eq:raych_2}) that
\begin{equation*}
\frac{d}{ds} \log \left[ \frac{\Omega^2(s)}{- \dot{r}} \left( \frac{r(s)}{r_-} \right)^{-(\alpha - \epsilon^2)^2} \right] \leq \frac{d}{ds} \log \left( \frac{\Omega^2(s)}{- \dot{r}(s)} \right) - \frac{\dot{r}}{r} \Psi^2 \leq 0.
\end{equation*}
So $\frac{\Omega^2}{- \dot{r}}(s) \left( \frac{r(s)}{r_-} \right)^{- (\alpha - \epsilon^2)^2}$ is decreasing. But by Corollary \ref{cor:protokasner}, specifically \eqref{eq:pk1_lapse}, and $|\alpha - \Psi_i| \lesssim \epsilon^2 \log (\epsilon^{-1})$,
\begin{equation}\label{monot}
\frac{\Omega^2}{- \dot{r}}(s) \left( \frac{r(s)}{r_-} \right)^{- (\alpha - \epsilon^2)^2} \leq \frac{\Omega^2}{- \dot{r}}(s_i) \left( \frac{r(s_i)}{r_-} \right)^{- (\alpha - \epsilon^2)^2} \leq \exp \left( - \frac{1}{2} b_-^{-2} \epsilon^{-2} \right) \cdot \exp \left( D_K \log(\epsilon^{-1}) \right).
\end{equation} Note we have also used the fact that $\frac{1}{|\dot{r}|} \lesssim r \cdot \epsilon^{-2}$, absorbing the $\epsilon^{-2}$ weight into $\exp \left( D_K \log(\epsilon^{-1})\right)$. Furthermore we used that, because $\Psi_i^2 - (\alpha-\epsilon^2)^2 = O(\epsilon^2)$, one has $(\frac{r(s_i)}{r_-})^{\Psi_i^2 - (\alpha-\epsilon^2)^2} \lesssim e^{O(\epsilon^{-2}) \cdot O(\epsilon^2)} \lesssim 1$.
We now use this upper bound when integrating the equation (\ref{eq:r_evol_2}). Due to Lemma \ref{lem:k_prelim_1}, we can use the following estimate for the integral of the right hand side of (\ref{eq:r_evol_2}):
\begin{align*}
|- r \dot{r}(s) + r \dot{r}(s_i)|
&\leq \int^s_{s_i} \frac{Q_{\infty}^2 \Omega^2(\tilde{s})}{2 r^2(\tilde{s})} \, d \tilde{s}+ \int^s_{s_i} \frac{ \Omega^2(\tilde{s}) r^2(\tilde{s})}{4} (|\Lambda| + m^2 |\phi|^2 ) \, d \tilde{s} \\[0.5em]
&\lesssim \int^{r(s_i)}_{r(s)} \frac{\Omega^2}{- \dot{r}} \frac{1}{r^2} \, dr \\[0.5em]
&\lesssim \exp \left( - \frac{1}{2} b_-^{-2} \epsilon^{-2} \right) \cdot \exp( D_K \log(\epsilon^{-1}) ) \int^{r(s_i)}_{r(s)} \left( \frac{r}{r_-} \right)^{(\alpha - \epsilon^2)^2 - 2} \, dr \\[0.5em]
&\lesssim \left( \frac{r_-}{r(s)} \right)^{1 - (\alpha - \epsilon^2)^2} \cdot \exp \left( - \frac{1}{2} b_-^{-2} \epsilon^{-2} \right) \cdot \exp ( D_K \log(\epsilon^{-1})).
\end{align*}
Here, the last step follows as $1 - (\alpha - \epsilon^2)^2 \geq \sigma > 0$ for sufficiently small $\epsilon$ depending on $\sigma$. If $\frac{r(s)}{r_-} \geq~ \exp \left( (1 - \alpha^2)^{-1} \left(- \frac{1}{2} b_-^{-2} \epsilon^{-2} + (D_K + 6n) \log (\epsilon^{-1})\right) \right)$, then one finds
\begin{align*}
\left( \frac{r_-}{r(s)} \right)^{1 - (\alpha - \epsilon^2)^2}
&\leq \exp \left( - \frac{1 - (\alpha - \epsilon^2)^2}{ 1 - \alpha^2 } \left( - \frac{1}{2}b_-^{-2} \epsilon^{-2} + (D_K + 6n) \log(\epsilon^{-1}) \right) \right) \\[0.5em]
&\lesssim \epsilon^{6n} \cdot \exp \left( \frac{1}{2}b_-^{-2} \epsilon^{-2} \right) \cdot \exp( - D_K \log (\epsilon^{-1}) ).
\end{align*}
Putting this into the above, one sees that $|- r \dot{r}(s) + r \dot{r}(s_i)| \lesssim \epsilon^{6n}$, or
\begin{equation*}
\left| \frac{- r \dot{r} (s)}{-r \dot{r}(s_i)} - 1 \right | \lesssim \epsilon^{6n - 2}.
\end{equation*}
But $- r \dot{r}(s)$ changing little from its initial value means that $\Psi(s)$ also changes little from its initial value; to see this use also (\ref{eq:k_phid}), which combined with the above implies $|\Psi(s) - \Psi_i| \lesssim \epsilon^{6n - 2}$. As $|\alpha - \Psi_i| \lesssim e^{- \delta_0 \epsilon^{-2}}$, we thus know that for $\log (r(s)/r_-) \geq - \frac{1}{2} (1 - \alpha^2)^{-1} b_-^{-2} \epsilon^{-2} + O(\log (\epsilon^{-1}))$ as specified, we have not yet entered the regime $\mathcal{K}_{inv}$.
On the other hand, we show that for $s_{in} = \sup \{ s \in \mathcal{K}: \Psi(s) \leq \alpha + \epsilon^n \}$, we must have $\log r (s_{in}) \geq - \frac{1}{2} ( 1 - \alpha^2)^{-1} b_-^{-2} \epsilon^{-2} - O(\log (\epsilon^{-1}))$. For this purpose, we use the Raychaudhuri equation (\ref{eq:raych_2}) to see that for $s_i \leq s \leq s_{in}$, one has
\begin{equation*}
\frac{d}{ds} \log \left[ \frac{\Omega^2(s)}{- \dot{r}} \left( \frac{r(s)}{r_-} \right)^{-(\alpha + \epsilon^n)^2} \right] \geq \frac{d}{ds} \log \left( \frac{\Omega^2(s)}{- \dot{r}(s)} \right) - \frac{\dot{r}}{r} \Psi^2 \geq - \frac{r^2}{- r \dot{r}} |\tilde{A}|^2 q_0^2 |\phi|^2.
\end{equation*}
Since $r(s) \leq e^{- \delta_0 \epsilon^{-2}} r_-$ for $s \in \mathcal{K}$, one sees using Proposition \ref{prop:k_ode} and Lemma \ref{lem:k_prelim_1} that the integral of the right hand side is bounded below by $- \log 2$, say. Therefore a similar application of Corollary \ref{cor:protokasner} will yield that
\begin{equation*}
\frac{\Omega^2}{- \dot{r}} \left( \frac{r(s)}{r_-} \right)^{- (\alpha + \epsilon^n)^2} \geq \frac{1}{2} \exp \left( - \frac{1}{2} b_-^{-2} \epsilon^{-2} \right) \cdot \exp( - D_K \log(\epsilon^{-1})).
\end{equation*}
One now again integrates the equation (\ref{eq:r_evol_2}), this time obtaining the lower bound
\begin{align*}
|- r \dot{r}(s) + r \dot{r}(s_i)|
&\geq \int^s_{s_i} \frac{Q_{\infty}^2 \Omega^2(\tilde{s})}{8 r^2(\tilde{s})} \, d \tilde{s}- |\Lambda| \int^s_{s_i} \frac{ \Omega^2(\tilde{s}) r^2(\tilde{s})}{4} \, d \tilde{s}
\gtrsim \int^{r(s_i)}_{r(s)} \frac{\Omega^2}{- \dot{r}} \frac{1}{r^2} \, dr \\[0.5em]
&\gtrsim \exp \left( - \frac{1}{2} b_-^{-2} \epsilon^{-2} \right) \cdot \exp( - D_K \log(\epsilon^{-1}) ) \int^{r(s_i)}_{r(s)} \left( \frac{r}{r_-} \right)^{(\alpha + \epsilon^n)^2 - 2} \, dr \\[0.5em]
&\gtrsim \left( \frac{r_-}{r(s)} \right)^{1 - (\alpha + \epsilon^n)^2} \cdot \exp \left( - \frac{1}{2} b_-^{-2} \epsilon^{-2} \right) \cdot \exp ( - D_K \log(\epsilon^{-1})).
\end{align*}
But by Proposition \ref{prop:k_ode}, we always have $- r \dot{r} \sim \epsilon^2$, so that
\begin{equation*}
\left( \frac{r(s)}{r_-} \right)^{1 - (\alpha + \epsilon^n)^2} \gtrsim \exp \left( - \frac{1}{2} b_-^{-2} \epsilon^{-2} \right) \cdot \exp ( (2- D_K)\log(\epsilon^{-1})).
\end{equation*}
Hence we do indeed find that for $s_i \leq s \leq s_{in}$, we must have $\log (r(s)/r_-) \geq - \frac{1}{2} (1 - \alpha^2)^{-1} b_-^{-2} \epsilon^{-2} - O(\log(\epsilon^{-1}))$ as claimed. This identifies $s = s_{in}$ obeying (\ref{eq:inv_interval}).
The remainder of this proof then proceeds entirely using the ODE (\ref{eq:k_ode_main}), which for $R = \log(\frac{r_-}{r(s)})$ we record again here as
\begin{equation} \label{eq:k_ode_main_copy}
\frac{d \Psi}{dR} = - \Psi ( \Psi - \alpha ) ( \Psi - \alpha^{-1} ) + \mathcal{F}, \quad |\mathcal{F}(R)| \leq D_K e^{- \delta_0 \epsilon^{-2} - R}.
\end{equation}
We have identified $R_{in} = R(s_{in}) = \frac{1}{2} (1 - \alpha^2)^{-1} b_-^{-2} \epsilon^{-2} + O(\log(\epsilon^{-1}))$ such that $\Psi(R_{in}) = \alpha + \epsilon^n$. We want to use that $\alpha$ is an unstable fixed point and $\alpha^{-1}$ a stable fixed point of this one-dimensional dynamical system.
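Indeed, as a quick sketch (neglecting the error term $\mathcal{F}$), linearizing the right hand side $V(\Psi) \coloneqq - \Psi (\Psi - \alpha)(\Psi - \alpha^{-1})$ of (\ref{eq:k_ode_main_copy}) at the two nonzero fixed points gives
\begin{equation*}
V'(\alpha) = \alpha \left( \alpha^{-1} - \alpha \right) > 0, \qquad V'(\alpha^{-1}) = - \alpha^{-1} \left( \alpha^{-1} - \alpha \right) < 0 \qquad (0 < \alpha < 1),
\end{equation*}
so $\Psi = \alpha$ is exponentially repelling while $\Psi = \alpha^{-1}$ is exponentially attracting; the estimates (\ref{eq:k_unstable}) and (\ref{eq:k_stable}) below are the quantitative versions of this statement on the full interval.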
Note that for $\Psi(s) \in [\alpha + \epsilon^n, \alpha^{-1} - \epsilon^n]$, one knows that (i) $\Psi \geq \eta$, and (ii) $\alpha^{-1} - 1 \geq 1 - \alpha \geq \sigma - O(e^{-\delta_0 \epsilon^{-2}})$, so we absorb the error term $\mathcal{F}$ and quantify the stability and instability of the fixed points to find
\begin{align}
\frac{d}{dR} (\Psi - \alpha) \geq \frac{\eta \sigma}{2} (\Psi - \alpha) \quad &\text{ if } \Psi \in [\alpha + \epsilon^n, 1], \label{eq:k_unstable} \\[0.5em]
\frac{d}{dR} (\alpha^{-1} - \Psi) \leq - \frac{\sigma}{2} (\alpha^{-1} - \Psi) \quad &\text{ if } \Psi \in [1, \alpha^{-1} - \epsilon^n]. \label{eq:k_stable}
\end{align} In particular, $\frac{d\Psi}{dR}>0$ as long as $\Psi\in [\alpha + \epsilon^n, \alpha^{-1} - \epsilon^n]$. From (\ref{eq:k_unstable}), one finds that $\Psi - \alpha$ proceeds from $\epsilon^n$ to $1 - \alpha$ in $O(\log (\epsilon^{-1}))$ time in $R$, and from (\ref{eq:k_stable}) that $\alpha^{-1} - \Psi$ proceeds from $\alpha^{-1} - 1$ to $\epsilon^n$ also in $O(\log (\epsilon^{-1}))$ time in $R$. Therefore defining $R_{out}$ to be the minimal $R$ such that $\Psi(R_{out}) = \alpha^{-1} - \epsilon^n$, one finds $R_{out} = R_{in} + O(\log (\epsilon^{-1})) = \frac{1}{2} (1- \alpha^2)^{-1} b_-^{-2} \epsilon^{-2} + O (\log(\epsilon^{-1}))$ also.
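To spell out the first of these transit-time claims: integrating (\ref{eq:k_unstable}) from $R_{in}$, one has, as long as $\Psi(R) \leq 1$,
\begin{equation*}
\Psi(R) - \alpha \geq e^{\frac{\eta \sigma}{2} (R - R_{in})} \left( \Psi(R_{in}) - \alpha \right) = \epsilon^n \, e^{\frac{\eta \sigma}{2} (R - R_{in})},
\end{equation*}
so $\Psi$ must reach the value $1$ before $R - R_{in}$ exceeds $\frac{2}{\eta \sigma} \log \left( \frac{1 - \alpha}{\epsilon^n} \right) = O(\log (\epsilon^{-1}))$; the second claim follows in the same way from (\ref{eq:k_stable}).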
Finally, since $\frac{d \Psi}{d R} \geq \frac{1}{2} \epsilon^n \eta \sigma > 0$ when $\Psi \in [\alpha + \epsilon^n, \alpha^{-1} - \epsilon^n]$, we have that upon entering this region of $\Psi$ it is impossible to return, and the remaining claims of the lemma are immediate.
\end{proof}
\section{Quantitative Kasner-like asymptotics}\label{section:quantitative}
Let us recapitulate the various regions considered so far: \begin{enumerate}
\item In Section~\ref{sec:protokasner}, we worked in the Proto-Kasner region $\mathcal{PK}=\{ s_{PK} \leq s \leq s_i\} =\{ 2|B| \mathfrak{W} \epsilon \geq \frac{r}{r_-} \geq e^{-\delta_0 \epsilon^{-2}}\}$.\\ We now also define the restricted region $\mathcal{PK}_1=\{ s_{K_1} \leq s \leq s_i\} =\{ 2|B| \mathfrak{W} \epsilon^2 \geq \frac{r}{r_-} \geq e^{-\delta_0 \epsilon^{-2}}\}$.
\item In Section~\ref{sec:kasner}, we have worked in the Kasner region $\mathcal{K}=\{ s_{i} \leq s < s_{\infty}\} =\{ 0< \frac{r}{r_-} \leq e^{-\delta_0 \epsilon^{-2}}\}$ and showed that $\lim_{s\rightarrow s_{\infty}}r(s)=0$.
\end{enumerate}
In what follows, we want to prove quantitative estimates on the ``Kasner behavior'' of the metric; for this, we will first have to restrict $\mathcal{PK}$ to its aforementioned subset $\mathcal{PK}_1$ on which $r\gtrsim \epsilon^2$ (recall that $r(s_{K_1})=2|B| \mathfrak{W} r_- \epsilon^2$). While in the previous sections the analysis was oblivious to the absence or presence of a Kasner inversion, we will now be obliged to distinguish the two cases and treat them differently.
The ``No-Kasner-inversion'' condition will be (as we will show) the condition \eqref{***} below: for some $\sigma>0$,
\begin{equation} \tag{$***$} \label{***}
|\Psi(s_i)| \geq 1 + \sigma.
\end{equation}
whereas the ``Kasner-inversion'' condition will be (as we will show) the condition \eqref{**} below: for some $\eta, \sigma>0$,
\begin{equation} \tag{$**$} \label{**}
0 < \eta \leq |\Psi(s_i)| \leq 1 - \sigma.
\end{equation}
(note that \eqref{***} and \eqref{**} together do not exhaust all possibilities: the regimes $|\Psi(s_i)|<\eta$ and $1-\sigma<|\Psi(s_i)|<1+\sigma$ are not covered).
In each case, we will further sub-divide $\mathcal{PK}\cup \mathcal{K}$ \emph{differently} as follows \begin{enumerate}[i.]
\item If the ``No-Kasner-inversion'' condition \eqref{***} is satisfied, we define the first Kasner region \begin{equation}\label{K1.def.no.inv}\mathcal{K}_1= \mathcal{PK}_1\cup \mathcal{K.}\end{equation}
Because there is no inversion in that case, we show indeed that the metric is close to a single Kasner spacetime on the whole of $\mathcal{K}_1= \mathcal{PK}_1\cup \mathcal{K}$ in Theorem~\ref{thm.***}.
\item If the ``Kasner-inversion'' condition \eqref{**} is satisfied, we define the first Kasner region \begin{equation}\label{K1.def.inv}
\mathcal{K}_1= \{s_{K_1} \leq s \leq s_{in}\}\supset \mathcal{PK}_1
\end{equation} with $s_{in} \in \mathcal{K}$, \begin{equation}\label{Kinv.def}
\mathcal{K}_{inv} = \{ s_{in} \leq s \leq s_{out}\}
\end{equation} to be the Kasner-inversion region, and \begin{equation}\label{K2.def.inv}
\mathcal{K}_2=\{ s_{out} \leq s < s_{\infty}\}
\end{equation} to be the second Kasner region, where $s_{in}$, $s_{out}$, given by Lemma~\ref{lem:inv_interval} applied with $n=2$, are defined so that \eqref{eq:inv_interval} is satisfied on $\mathcal{K}_{inv}$. Because of the Kasner inversion, we show that the metric is close to a first Kasner spacetime in $\mathcal{K}_1$, and close to a second, different Kasner spacetime in $\mathcal{K}_2$ in Theorem~\ref{thm.**}. Note also that by \eqref{eq:inv_interval}, the transition region $\mathcal{K}_{inv}$ is (relatively) small.
\end{enumerate}
We now state the two main theorems of this section, i.e.\ Theorem~\ref{thm.***} and \ref{thm.**}.
\begin{theorem}\label{thm.***}
Let $(r, \Omega^2, \phi, Q, \tilde{A})$ be a solution to the system (\ref{eq:raych})--(\ref{eq:phi_evol_2}), which by Proposition \ref{prop:oscillation+} exists at least up to the value $s = s_i = \frac{b_-^{-2} \epsilon^{-2}}{4 |K_-| } + O(\log (\epsilon^{-1}))$ at which $r(s) = e^{- \delta_0 \epsilon^{-2}} r_-$. Suppose that for some given $\sigma > 0$, \eqref{***} is satisfied.
Then there exists some $\epsilon_0 (M, \mathbf{e}, \Lambda, m^2, q_0, \sigma) > 0$, such that if $0 < |\epsilon| < \epsilon_0$, there exists some $s_{\infty} > s_i$ such that the solution of (\ref{eq:raych})--(\ref{eq:phi_evol_2}) exists for $s \in (- \infty, s_{\infty})$ with $\lim_{s \to s_{\infty}} r(s) = 0$.
Furthermore, we have the following Kasner-like asymptotics: denote $s_{K_1}$ such that $r(s_{K_1}) = 2 |B| \mathfrak{W} \epsilon^2 r_-$, then in the region $\mathcal{K}_1 = \{ s_{K_1} \leq s < s_{\infty} \}$, one may write the metric in the following form, where $\alpha$ is as determined in Proposition \ref{prop:k_ode}:
\begin{equation}
g = - d \tau^2 + \mathcal{X}_1 \cdot ( 1 + \mathfrak{E}_{X, 1}(\tau)) \, \tau^{\frac{2 (\alpha^2 -1)}{\alpha^2 + 3}} \, dt^2 + \mathcal{R}_1 \cdot ( 1 + \mathfrak{E}_{R, 1}(\tau)) \, r_-^2 \tau^{ \frac{4}{\alpha^2 + 3} } \, d \sigma_{\mathbb{S}^2} .
\end{equation}
Here, $\mathcal{X}_1$ and $\mathcal{R}_1$ are constants, and $\mathfrak{E}_{X, 1}(\tau)$ and $\mathfrak{E}_{R, 1}(\tau)$ are small functions of $\tau$ satisfying the following bounds, where $\beta =\min \{\frac{1}{2},\alpha^2-1\}$:
\begin{equation}
\left| \log \mathcal{X}_1 + \frac{\alpha^2 + 1}{\alpha^2 + 3} b_-^{-2} \epsilon^{-2} \right| + \left| \log \mathcal{R}_1 - \frac{1}{\alpha^2 + 3} b_-^{-2} \epsilon^{-2} \right| \leq C_K \log (\epsilon^{-1}),
\end{equation}
\begin{equation} \label{***.error}
|\mathfrak{E}_{X, 1}(\tau)| + |\mathfrak{E}_{R, 1}(\tau)| \leq C_K \epsilon^2 \cdot \left(\frac{\tau}{\tau(s_{K_1})}\right)^{\frac{2\beta}{\alpha^2 + 3}}.
\end{equation}
Hence the spacetime corresponds to a Kasner-like spacetime with Kasner exponents $\frac{\alpha^2 - 1}{\alpha^2 + 3}$, $\frac{2}{\alpha^2 + 3}$, $\frac{2}{\alpha^2 +3}$.
\end{theorem}
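We record, as a consistency check which is not needed in the proofs, that the exponents $(p_1,p_2,p_3)=\big(\frac{\alpha^2-1}{\alpha^2+3},\frac{2}{\alpha^2+3},\frac{2}{\alpha^2+3}\big)$ above satisfy
\begin{equation*}
p_1+p_2+p_3 = \frac{(\alpha^2-1)+2+2}{\alpha^2+3} = 1, \qquad 1 - \left(p_1^2+p_2^2+p_3^2\right) = \frac{(\alpha^2+3)^2-(\alpha^2-1)^2-8}{(\alpha^2+3)^2} = \frac{8\alpha^2}{(\alpha^2+3)^2},
\end{equation*}
so the Kasner relation $\sum_i p_i = 1$ holds exactly, while $\sum_i p_i^2 < 1$ for $\alpha \neq 0$, as expected for a Kasner-like spacetime supported by a nontrivial scalar field rather than a vacuum one.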
\begin{theorem}\label{thm.**}
Let $(r, \Omega^2, \phi, Q, \tilde{A})$ be a solution to the system (\ref{eq:raych})--(\ref{eq:phi_evol_2}), which by Proposition \ref{prop:k_ode} exists at least up to the value $s = s_i = \frac{b_-^{-2} \epsilon^{-2}}{4 |K_-| } + O(\log (\epsilon^{-1}))$ at which $r(s) = e^{- \delta_0 \epsilon^{-2}} r_-$. Suppose that for given $\eta, \sigma > 0$, one instead assumes \eqref{**} is satisfied.
Then there exists some $\epsilon_0 (M, \mathbf{e}, \Lambda, m^2, q_0, \eta, \sigma) > 0$, such that if $0 < |\epsilon| < \epsilon_0$, there exists some $s_{\infty} > s_i$ such that the solution of (\ref{eq:raych})--(\ref{eq:phi_evol_2}) exists for $s \in (- \infty, s_{\infty})$ with $\lim_{s \to s_{\infty}} r(s) = 0$.
In this case, we single out two different regions with Kasner-like asymptotics, between which there is an intermediate region where the Kasner inversion occurs. Letting $s_{K_1}$ be such that $r(s_{K_1}) = 2 |B| \mathfrak{W} \epsilon^2 r_-$, we define the following three regions:
\begin{equation*}
\mathcal{K}_1 = \{ s_{K_1} \leq s \leq s_{in} \}, \; \mathcal{K}_{inv} = \{ s_{in} \leq s \leq s_{out} \}, \; \mathcal{K}_2 = \{ s_{out} \leq s < s_{\infty} \}.
\end{equation*}
We will describe Kasner-like asymptotics for the two regions $\mathcal{K}_1$ and $\mathcal{K}_2$.
In the region $\mathcal{K}_1$, one writes the metric in the following form, for $\alpha$ as in Proposition \ref{prop:k_ode}:
\begin{equation}
g = - d \tau^2 + \mathcal{X}_1 \cdot ( 1 + \mathfrak{E}_{X, 1}(\tau)) \,(\tau-\tau_0)^{\frac{2 (\alpha^2 -1)}{\alpha^2 + 3}} \, dt^2 + \mathcal{R}_1 \cdot ( 1 + \mathfrak{E}_{R, 1}(\tau)) \, r_-^2 (\tau-\tau_0)^{ \frac{4}{\alpha^2 + 3} } \, d \sigma_{\mathbb{S}^2} .
\end{equation}
Here $\tau_0>0$, $\mathcal{X}_1$ and $\mathcal{R}_1$ are constants, and $\mathfrak{E}_{X, 1}(\tau)$ and $\mathfrak{E}_{R, 1}(\tau)$ are functions of $\tau$ satisfying
\begin{equation}
\left| \log \mathcal{X}_1 + \frac{\alpha^2 + 1}{\alpha^2 + 3} b_-^{-2} \epsilon^{-2} \right| + \left| \log \mathcal{R}_1 - \frac{1}{\alpha^2 + 3} b_-^{-2} \epsilon^{-2} \right| \leq C_K \log (\epsilon^{-1}),
\end{equation}
\begin{equation}
|\mathfrak{E}_{X, 1}(\tau)| + |\mathfrak{E}_{R, 1}(\tau)| \leq C_K \epsilon^2.
\end{equation}
On the other hand, in the region $\mathcal{K}_2$, one instead has the following form for the metric
\begin{equation}
g = - d \tau^2 + \mathcal{X}_2 \cdot ( 1 + \mathfrak{E}_{X, 2}(\tau)) \, \tau^{\frac{2 (1 - \alpha^2)}{1 + 3 \alpha^2}} \, dt^2 + \mathcal{R}_2 \cdot ( 1 + \mathfrak{E}_{R, 2}(\tau)) \, r_-^2 \tau^{ \frac{4 \alpha^2}{1 + 3 \alpha^2} } \, d \sigma_{\mathbb{S}^2} .
\end{equation}
Here, $\mathcal{X}_2$ and $\mathcal{R}_2$ are constants, and the functions $\mathfrak{E}_{X, 2}(\tau)$ and $\mathfrak{E}_{R, 2}(\tau)$ now satisfy the following bounds, where $\beta =\min \{\frac{1}{2},1-\alpha^2\}$:
\begin{equation}
\left| \log \mathcal{X}_2 + \frac{1 + \alpha^{-2}}{1 + 3 \alpha^2} b_-^{-2} \epsilon^{-2} \right| + \left| \log \mathcal{R}_2 - \frac{1}{1 + 3\alpha^2} b_-^{-2} \epsilon^{-2} \right| \leq C_K \log (\epsilon^{-1}),
\end{equation}
\begin{equation} \label{**.error}
|\mathfrak{E}_{X, 2}(\tau)| + |\mathfrak{E}_{R, 2}(\tau)| \leq C_K \epsilon^2 \cdot \left(\frac{\tau}{\tau(s_{out})}\right)^{\frac{2\beta}{\alpha^{-2} + 3}}.
\end{equation}
One sees that the spacetime exhibits a Kasner bounce from the region $\mathcal{K}_1$ with Kasner exponents of $\frac{\alpha^2 - 1}{\alpha^2 + 3}, \frac{2}{\alpha^2 + 3}, \frac{2}{\alpha^2 + 3}$ to the region $\mathcal{K}_2$ with exponents of $\frac{1 - \alpha^2}{1 + 3 \alpha^2}, \frac{2\alpha^2}{1 + 3 \alpha^2}, \frac{2 \alpha^2}{1 + 3 \alpha^2}$. We further find the following estimates regarding the proper time length of the regions $\mathcal{K}_2$ and $\mathcal{K}_{inv}$. For $\mathcal{K}_2$, the proper time $\tau$ from the singularity varies between $0$ and $\tau(s_{out})$, obeying:
\begin{equation} \label{eq:propertimek2}
\left| \log \tau(s_{out}) - \frac{1}{2} \frac{\alpha^{-2} + 1}{1 - \alpha^2} \cdot b_-^{-2} \epsilon^{-2} \right| \leq C_K \log(\epsilon^{-1}),
\end{equation}
while we have the following (non-sharp) upper bound for the size of the inversion region:
\begin{equation} \label{eq:propertimekinv}
0 \leq \tau(s_{in}) - \tau(s_{out}) \leq \exp\left( - \frac{b_-^{-2} \epsilon^{-2}}{1 - \alpha^2} \right) \cdot \exp( C_K \log (\epsilon^{-1})).
\end{equation}
\end{theorem}
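Note also, as a simple algebraic check, that the post-inversion exponents are obtained from the pre-inversion ones by the substitution $\alpha \mapsto \alpha^{-1}$, consistently with $\Psi$ transitioning from the value $\alpha$ to the value $\alpha^{-1}$ across $\mathcal{K}_{inv}$:
\begin{equation*}
\frac{\alpha^{-2}-1}{\alpha^{-2}+3} = \frac{1-\alpha^2}{1+3\alpha^2}, \qquad \frac{2}{\alpha^{-2}+3} = \frac{2\alpha^2}{1+3\alpha^2},
\end{equation*}
and in particular both triples of exponents sum to $1$.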
\subsection{Asymptotics for $\Psi$ near the $\{r = 0\}$ singularity}
The first step in proving that the $\{ r = 0 \}$ singularity has Kasner-like asymptotics relies on showing that $\Psi$ tends to the appropriate constant $\alpha$ or $\alpha^{-1}$ sufficiently quickly near the singularity. The aim will be to show that $\Psi = \alpha + O( (\frac{r}{r_-})^{\beta})$ or $\Psi = \alpha^{-1} + O( (\frac{r}{r_-})^{\beta})$ for some positive exponent $\beta > 0$.
After the usual change of coordinates $r = e^{-R }\, r_-$, this translates to showing that $\Psi$ decays exponentially to $\alpha$ or $\alpha^{-1}$ in the variable $R$. For this purpose, we shall need to use the ODE (\ref{eq:k_ode_main}) from Proposition \ref{prop:k_ode}. By standard dynamical systems theory, one expects that $\Psi$ tends towards its stable fixed point at an exponential rate. We quantify this in the following lemma:
\begin{lemma} \label{lem:ode_decay}
Fix some constants $0 < \sigma, \eta < \frac{1}{2}$. For all $\alpha \in \mathbb{R}$ satisfying $1 + \sigma \leq |\alpha| \leq \eta^{-1}$, consider the following ODE for the function $\Psi = \Psi(R)$:
\begin{equation} \label{eq:ode_general}
\frac{d \Psi}{d R} = - \Psi ( \Psi - \alpha ) ( \Psi - \alpha^{-1} ) + \mathcal{F}(R), \qquad |\mathcal{F}(R)| \leq e^{-R}.
\end{equation}
Define $\beta = \min\{ \frac{1}{2}, \alpha^2 - 1\} \geq \sigma$. Then there exists some $\nu_0 = \nu_0(\sigma, \eta) > 0$ such that for all $|\nu| < \nu_0$ and $R_*$ satisfying both
\begin{equation} \label{eq:ode_general_init}
|\Psi(R_*) - \alpha| \leq \nu^2 \quad \textbf{ and } \quad e^{-R_*} \leq \nu^2,
\end{equation}
one finds that $\Psi(R)$ decays to $\alpha$ at the following exponential rate for $R \geq R_*$:
\begin{equation} \label{eq:ode_general_decay}
|\Psi(R) - \alpha | \leq 8 \nu^2 e^{- \beta( R - R_* )}.
\end{equation}
\end{lemma}
The above lemma will be applied with $\nu=\epsilon$ or $\nu=\epsilon^2$ in what follows.
\begin{proof}
We use a bootstrap argument along with Gr\"onwall's inequality. We take the bootstrap assumption to be the desired estimate (\ref{eq:ode_general_decay}). Assuming this holds in some bootstrap region, we have
\begin{equation*}
\Psi ( \Psi - \alpha^{-1} ) \geq \alpha^2 - 1 - | 2 \alpha - \alpha^{-1} | \cdot 8 \nu^2 e^{-\beta(R - R_*)}.
\end{equation*}
Therefore, for $R, \tilde{R}$ in the bootstrap region one computes that for $\nu$ small enough,
\begin{align*}
- \int_{\tilde{R}}^R \Psi(R') ( \Psi(R') - \alpha^{-1} ) \, dR'
&\leq - (\alpha^2 - 1) (R - \tilde{R}) + \beta^{-1} |2 \alpha - \alpha^{-1}| \cdot 8 \nu^2 e^{- \beta(\tilde{R} - R_*)} \\
&\leq - \beta (R - \tilde{R}) + \log 2.
\end{align*}
Hence, after finding from (\ref{eq:ode_general}) the differential inequality
\begin{equation*}
\frac{d}{dR} | \Psi - \alpha | \leq - \Psi ( \Psi - \alpha^{-1} ) |\Psi - \alpha| + e^{-R},
\end{equation*}
one uses Gr\"onwall to deduce that
\begin{equation*}
|\Psi(R) - \alpha| \leq 2 \nu^2 e^{- \beta(R - R_*)} + \int^R_{R_*} 2 e^{-\tilde{R}} e^{- \beta(R - \tilde{R})} \, d\tilde{R} \leq 2(\nu^2 + (1 - \beta)^{-1} e^{-R_*}) e^{-\beta(R - R_*)}.
\end{equation*}
Taking into account the second assumption of (\ref{eq:ode_general_init}) and $\beta \leq 1/2$, this improves the bootstrap assumption (\ref{eq:ode_general_decay}), hence the estimate holds for all $R \geq R_*$.
\end{proof}
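Although it plays no role in the arguments, the decay rate of Lemma~\ref{lem:ode_decay} is easy to corroborate numerically. The following short Python sketch is purely illustrative: the values of $\alpha$ and $\nu$ and the choice of forcing $e^{-R}$ are arbitrary choices within the hypotheses of the lemma, and the snippet simply fits the observed decay rate of $|\Psi - \alpha|$ and compares it with $\beta$.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative check of the model ODE (not part of the proof): beta is defined
# as min(1/2, alpha^2 - 1); the value alpha^2 - 1 comes from linearizing the
# cubic at the stable root Psi = alpha.
alpha, nu = 1.2, 1e-2
beta = min(0.5, alpha**2 - 1.0)

def rhs(R, Psi):
    forcing = np.exp(-R)            # plays the role of |F(R)| <= e^{-R}
    return -Psi * (Psi - alpha) * (Psi - 1.0 / alpha) + forcing

R_star, R_end = 10.0, 40.0          # e^{-R_star} <= nu^2, as required
sol = solve_ivp(rhs, (R_star, R_end), [alpha + nu**2],
                dense_output=True, rtol=1e-10, atol=1e-14)

R_grid = np.linspace(R_star + 5.0, R_end, 50)
err = np.abs(sol.sol(R_grid)[0] - alpha)
fitted_rate = -np.polyfit(R_grid, np.log(err), 1)[0]
print("fitted decay rate:", round(fitted_rate, 3), " beta =", round(beta, 3))
\end{verbatim}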
We emphasize that Lemma~\ref{lem:ode_decay} will be useful both in the case of an inversion and in its absence.
\subsection{First case: absence of Kasner inversion}\label{no.inv.section}
In this subsection, assuming that (\ref{***}) holds, we prove the quantitative Kasner asymptotics in the region $\mathcal{K}_1 = \{ s_{K_1} \leq s < s_{\infty} \}$. The essential ingredient is the following lemma:
\begin{lemma} \label{lem:noinv_lapse}
Consider a solution $(r, \Omega^2, \phi, Q, \tilde{A})$ to the system (\ref{eq:raych})--(\ref{eq:phi_evol_2}) as in Proposition \ref{prop:k_ode}. Assuming also (\ref{***}), one finds that there exists some $\mathcal{Y}_1>0$ satisfying, for $s_{K_1} \leq s < s_{\infty}$:
\begin{equation} \label{eq:noinv_lapse}
\left| \log \left( \frac{\Omega^2}{- \dot{r}} \left( \frac{r}{r_-} \right)^{-\alpha^2} \right) (s) - \log \mathcal{Y}_1 \right| \leq D_1
\epsilon^2 \left( \frac{r(s)}{r_-} \right)^{\beta},
\end{equation} with $\mathcal{Y}_1$ satisfying \begin{equation}
\left| \log \mathcal{Y}_1 + \frac{1}{2} b_-^{-2} \epsilon^{-2}\right|\leq D_1 \cdot \log(\epsilon^{-1}).
\end{equation}
Furthermore, one may find a constant $\mathcal{Z}_1>0$ such that
\begin{equation} \label{eq:noinv_r}
\left| - r \dot{r} (s) - \mathcal{Z}_1 \right| \leq D_1 \epsilon^4 \left( \frac{r (s)}{r_-} \right)^{\beta},
\end{equation} with $\mathcal{Z}_1$ satisfying \begin{equation}
\left| \mathcal{Z}_1 - 4 |B|^2 \mathfrak{W}^2 \omega_{RN} r_-^2 \epsilon^2 \right|\leq D_1 \cdot \epsilon^4\log(\epsilon^{-1}).
\end{equation}
\end{lemma}
\begin{proof}
We will use the Raychaudhuri equation (\ref{eq:raych_2}). In light of Lemma \ref{lem:ode_decay}, it is preferable to change variables once again, now to the $R$-coordinate:
\begin{equation} \label{eq:raych_3}
\frac{d}{dR} \log \left( \frac{\Omega^2}{- \dot{r}} \right) = - \Psi^2 - \frac{r^4 q_0^2 \tilde{A}^2 \phi^2}{ (- r\dot{r})^2}.
\end{equation}
Proposition \ref{prop:k_ode} tells us that the second term on the right hand side of this expression is $O( \epsilon^{-4} e^{-4 R} R^2)$. Therefore, by adding $\alpha^2$ to both sides and using that $\Psi$ is bounded, we have
\begin{equation} \label{eq:noinv_lapse_der}
\left| \frac{d}{dR} \log \left( \frac{\Omega^2}{- \dot{r}} \right) + \alpha^2 \right| \lesssim |\Psi - \alpha| + \epsilon^{-4} e^{-4 R} R^2.
\end{equation}
The crucial observation is that the right-hand side will be integrable (and with small integral) in the region $\mathcal{K}_1$. Indeed, we claim that if $R_{K_1} = - \log( \frac{r_{K_1}} {r_-}) = - \log(2 |B| \mathfrak{W} \epsilon^2)$, then for $R_{K_1} \leq R < + \infty$, we have
\begin{equation} \label{eq:noinv_lapse_int}
\int_R^{+\infty} |\Psi(\tilde{R}) - \alpha| \, d \tilde{R} + \int_R^{+\infty} \epsilon^{-4} e^{-4 \tilde{R}} \tilde{R}^2 \, d\tilde{R}\lesssim \epsilon^2 e^{- \beta R}.
\end{equation}
The bound for the latter integral follows by straightforward calculus, using also $\beta < 1/2$ and $e^{-R} \leq e^{- R_{K_1}} \sim \epsilon^2$ to get the correct dependence on $\epsilon$. For the former integral, we proceed in two steps; first we use Lemma \ref{lem:ode_decay} to deal with the region $\mathcal{K}$, then use Corollary \ref{cor:protokasner} to handle the remaining part $\mathcal{PK} \cap \mathcal{K}_1$.
Note that for the region $\mathcal{K}$ where we have access to the ODE (\ref{eq:k_ode_main}), by Proposition \ref{prop:k_ode} we have the bound $|\Psi(R_i) - \alpha| \lesssim e^{-\delta_0 \epsilon^{-2}}$. Hence by Lemma \ref{lem:ode_decay}, with $\nu^2=e^{-\delta_0 \epsilon^{-2}}$, we know that for $R \geq R_i$, we must have $|\Psi - \alpha| \lesssim e^{-\delta_0 \epsilon^{-2}} e^{-\beta(R - R_i)}$. Thus for $R \geq R_i$, one has
\begin{equation*}
\int_R^{+\infty} |\Psi(\tilde{R}) - \alpha | \, d\tilde{R} \lesssim e^{-\delta_0 \epsilon^{-2}} e^{- \beta(R - R_i)} \lesssim e^{- (1 - \beta) \delta_0 \epsilon^{-2}} e^{- \beta R}.
\end{equation*}
The last line here follows from the definition $R_i = \delta_0 \epsilon^{-2}$. The smallness of the expression $e^{- (1-\beta) \delta_0 \epsilon^{-2}}$ thus proves (\ref{eq:noinv_lapse_int}) for $R \geq R_i$.
For the remaining portion $R \in [R_{K_1}, R_i]$, we use Corollary \ref{cor:protokasner}. The point is that in this region, we have $|\Psi - \alpha| \leq |\Psi - \Psi_i| + |\Psi_i - \alpha| \lesssim e^{-2 R} \log(\epsilon^{-1}) + e^{- \delta_0 \epsilon^{-2}}$. Therefore, one finds that
\begin{equation*}
\int_R^{R_i} |\Psi(\tilde{R}) - \alpha| \, d\tilde{R} \lesssim e^{-2 R} \log(\epsilon^{-1}) + e^{-\delta_0 \epsilon^{-2}} R_i \lesssim \epsilon^2 e^{- \beta R}.
\end{equation*}
So we have proved the estimate (\ref{eq:noinv_lapse_int}). From (\ref{eq:noinv_lapse_der}), this shows that the expression
\begin{equation}
\log \left( \frac{\Omega^2}{- \dot{r}} \right) + \alpha^2 R = \log \left( \frac{\Omega^2}{- \dot{r}} \left( \frac{r}{r_-} \right)^{-\alpha^2} \right)
\end{equation}
indeed has a finite limit $\log \mathcal{Y}_1$ as $R \to \infty$, and moreover satisfies the estimate (\ref{eq:noinv_lapse}).
We next estimate the constant $\log \mathcal{Y}_1$. To do this, we evaluate (\ref{eq:noinv_lapse}) at $r = r_{K_1}$ i.e.\ $s = s_{K_1}$ and use the estimate (\ref{eq:pk1_lapse}) from Corollary \ref{cor:protokasner}. As $|\log r(s_{K_1})|, |\log (- \dot{r}(s_{K_1}))| = O(\log (\epsilon^{-1}))$ here, we find that
\begin{equation} \label{eq:noinv_y}
\left| \log \mathcal{Y}_1 + \frac{1}{2} b_-^{-2} \epsilon^{-2} \right| \lesssim \log (\epsilon^{-1}).
\end{equation}
Finally, we show the estimate (\ref{eq:noinv_r}) by considering the equation (\ref{eq:r_evol_2}) after changing variables to $r$; Propositions \ref{prop:oscillation+} and \ref{prop:k_ode} and then the now-known (\ref{eq:noinv_lapse}) tell us that for $s \in \mathcal{K}_1$, we have
\begin{equation*}
\left| \frac{d}{dr} (- r \dot{r}) \right| \,\lesssim\, \frac{\Omega^2}{- r^2 \dot{r}} \, \lesssim \, \mathcal{Y}_1 \cdot \left( \frac{r}{r_-} \right)^{\alpha^2 - 2}.
\end{equation*}
Integrating this expression then yields (\ref{eq:noinv_r}) -- of course it is essential here that $\alpha^2 - 1 \geq \frac{\sigma }{2} > 0$, and we have the smallness (\ref{eq:noinv_y}) for the expression $\mathcal{Y}_1$.
\end{proof}
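For instance, the last integration may be spelled out as follows (we record this only as a sketch): taking $\mathcal{Z}_1 := \lim_{s \to s_{\infty}} (- r \dot{r})(s)$, which exists by the integrability of the right-hand side above, one has for $s \in \mathcal{K}_1$
\begin{equation*}
\left| - r \dot{r}(s) - \mathcal{Z}_1 \right| \lesssim \int_0^{r(s)} \mathcal{Y}_1 \left( \frac{r'}{r_-} \right)^{\alpha^2 - 2} dr' = \frac{\mathcal{Y}_1 r_-}{\alpha^2 - 1} \left( \frac{r(s)}{r_-} \right)^{\alpha^2 - 1} \leq D_1 \epsilon^4 \left( \frac{r(s)}{r_-} \right)^{\beta},
\end{equation*}
where the convergence of the integral at $r'=0$ uses $\alpha^2 - 1 > 0$, and the final inequality uses $\alpha^2 - 1 \geq \beta$, $r(s) \leq r_-$ and the exponential smallness of $\mathcal{Y}_1$ from (\ref{eq:noinv_y}).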
\subsection{Second case: presence of a Kasner inversion}
Assuming instead that (\ref{**}) holds, Proposition \ref{prop:k_ode} and Lemma \ref{lem:inv_interval} (for $n=2$) show that the spacetime must exhibit a Kasner inversion. We proceed to show that if we define $\mathcal{K}_1$ and $\mathcal{K}_2$ as in \eqref{K1.def.inv}, \eqref{K2.def.inv}, there are Kasner-like asymptotics in both regimes. As in Section \ref{no.inv.section}, we first find the precise asymptotics for the lapse $\Omega^2$, as well as $- \dot{r}$.
\subsubsection{The pre-inversion Kasner regime}
We start by looking at the region $\mathcal{K}_1$ (the pre-inversion regime).
\begin{lemma} \label{lem:preinv_lapse}
Consider a solution $(r, \Omega^2, \phi, Q, \tilde{A})$ to the system (\ref{eq:raych})--(\ref{eq:r_evol_2}) as in Proposition \ref{prop:k_ode}. Assuming now the condition (\ref{**}) on the value of $\Psi$ at $s = s_i$, there exists some $\mathcal{Y}_1>0$ satisfying, for $s_{K_1} \leq s \leq s_{in}$,
\begin{equation} \label{eq:preinv_lapse}
\left| \log \left( \frac{\Omega^2}{- \dot{r}} \left( \frac{r}{r_-} \right)^{-\alpha^2} \right) (s) - \log \mathcal{Y}_1 \right| \leq D_1
\epsilon^2,
\end{equation} with $\mathcal{Y}_1$ satisfying \begin{equation} \label{eq:inv_y}
\left| \log \mathcal{Y}_1 + \frac{1}{2} b_-^{-2} \epsilon^{-2} \right| \lesssim \log (\epsilon^{-1}).
\end{equation}
Furthermore, one may find a constant $\mathcal{Z}_1>0$ such that
\begin{equation} \label{eq:preinv_r}
\left| - r \dot{r} (s) - \mathcal{Z}_1 \right| \leq D_1 \epsilon^4,
\end{equation} with $\mathcal{Z}_1$ satisfying \begin{equation} \label{eq:inv_z}
\left| \mathcal{Z}_1 -4|B|^2 \mathfrak{W}^2 \omega_{RN} r_-^2 \epsilon^2 \right| \lesssim \epsilon^4 \log (\epsilon^{-1}).
\end{equation}
\end{lemma}
\begin{proof}
We follow a similar template to the proof of Lemma \ref{lem:noinv_lapse}, though unlike that case we do not need $(\frac{r}{r_-})^{\beta}$ decay rates. In particular, we are able to integrate from $R = R_{K_1}$ rather than backwards from $R = + \infty$. Using the same Raychaudhuri equation (\ref{eq:raych_3}), we need to prove the analogue of (\ref{eq:noinv_lapse_int}), i.e.\
\begin{equation} \label{eq:preinv_lapse_int}
\int_{R_{K_1}}^R |\Psi(\tilde{R}) - \alpha| \, d\tilde{R} + \int_{R_{K_1}}^R \epsilon^{-4} e^{-4 \tilde{R}} \tilde{R}^2 \, d\tilde{R} \lesssim \epsilon^2.
\end{equation}
As in Lemma \ref{lem:noinv_lapse}, the latter integral is straightforward, and we focus on the former. Following Corollary \ref{cor:protokasner} and Proposition \ref{prop:k_ode} we still know that in the region $\mathcal{PK}_1 = \mathcal{PK} \cap \mathcal{K}_1$, where $R \in [R_{K_1}, R_i]$, we have $|\Psi(R) - \alpha| \leq |\Psi(R) - \Psi_i| + |\Psi_i - \alpha| \lesssim e^{-2R} \log(\epsilon^{-1}) + e^{-\delta_0 \epsilon^{-2}}$. So we know as before that
\begin{equation} \label{eq:preinv_est1}
\int_{R_{K_1}}^{R_i} |\Psi(\tilde{R}) - \alpha| \, d \tilde{R} \lesssim \epsilon^2.
\end{equation}
On the other hand, when integrating between $R_i$ and $R_{in}$, $\Psi$ is growing away from $\alpha$ exponentially as opposed to exponentially decaying, so we need a new tactic. The key observation is that following the definition of $R_{in}$ from Lemma \ref{lem:inv_interval} for $n=2$, we know a priori that $|\Psi(R) - \alpha| \leq \epsilon^2$ when $R \in [R_i, R_{in}]$. The same lemma also tells us that $|R_i - R_{K_1}| \lesssim \epsilon^{-2}$, so these two observations combined tell us that $\int^{R_{in}}_{R_i} |\Psi(\tilde{R}) - \alpha| \, d\tilde{R} \lesssim 1$.
Of course, this is not quite the claimed estimate (\ref{eq:preinv_lapse_int}). To improve the $O(1)$ bound to $O(\epsilon^2)$, we apply Lemma \ref{lem:inv_interval} with $n=4$. Defining $R_{in}'$ to instead be the first (hence the unique) $R > R_i$ with $|\Psi(R_{in}') - \alpha| = \epsilon^4$, we find that for $R \in [R_{in}', R_{in}]$, we have $|\Psi(R) - \alpha| \leq e^{\frac{\eta\sigma}{2}(R - R_{in})} \epsilon^2$. Furthermore, for $R \in [R_i, R_{in}']$ we now know a priori that $|\Psi(R) - \alpha| \leq \epsilon^4$. So
\begin{equation}
\int_{R_i}^{R_{in}} |\Psi(\tilde{R}) - \alpha| \, d \tilde{R} \leq \int_{R_i}^{R_{in}'} \epsilon^4 \, d \tilde{R} + \int_{R_{in}'}^{R_{in}} e^{\frac{\eta \sigma}{2} (\tilde{R} - R_{in})} \epsilon^2 \, d \tilde{R} \lesssim \epsilon^2.
\end{equation}
Combining this with (\ref{eq:preinv_est1}) yields the desired (\ref{eq:preinv_lapse_int}). The estimate (\ref{eq:preinv_lapse}) then follows exactly as in the proof of (\ref{eq:noinv_lapse}), as does the estimate (\ref{eq:inv_y}) on $\mathcal{Y}_1$.
For the estimate (\ref{eq:preinv_r}), let us define $\mathcal{Z}_1$ as $- r \dot{r}(s_i)$. By Proposition \ref{prop:oscillation+}, it is easy to integrate (\ref{eq:r_evol_2}) and deduce (\ref{eq:preinv_r}) in the region $s \in [s_{K_1}, s_i]$. The difficulty lies in showing (\ref{eq:preinv_r}) in the region $s \in [s_i, s_{in}]$, not least because $- r \dot{r}$ is expected to change value during the inversion process.
What helps us here is that $- r \dot{r}$ is approximately inversely proportional to the quantity $\Psi$, given that $r^2 \dot{\phi} = - r \dot{r} \cdot \Psi$ is approximately constant. To be quantitatively precise, we combine the estimate (\ref{eq:k_phid}) with the trivial a priori observation that $\left|\frac{\Psi(s)}{\Psi_i} - 1\right| \lesssim \epsilon^2$ for $ s \in [s_i, s_{in}]$ to get
\begin{equation}
\left| \frac{- r \dot{r}(s)}{- r \dot{r}(s_i)} - 1 \right| = \left| \frac{\Psi(s_i)}{\Psi(s)} \cdot \frac{r^2 \dot{\phi}(s)}{r^2 \dot{\phi}(s_i)} - 1 \right| \lesssim \epsilon^2.
\end{equation}
Since $-r\dot{r}(s_i) \sim \epsilon^2$, the estimate (\ref{eq:preinv_r}) follows immediately.
\end{proof}
\subsubsection{The post-inversion Kasner regime}
Finally, we consider the post-inversion regime, i.e.\ the region $\mathcal{K}_2 = \{ s_{out} \leq s < s_{\infty}\}$.
\begin{lemma} \label{lem:postinv_lapse}
Consider a solution $(r, \Omega^2, \phi, Q, \tilde{A})$ to the system (\ref{eq:raych})--(\ref{eq:r_evol_2}) as in Proposition \ref{prop:k_ode}. Assuming the condition (\ref{**}) on the value of $\Psi$ at $s = s_i$, for $s_{out} \leq s < s_{\infty}$ there exists some constant $\mathcal{Y}_2$ such that
\begin{equation} \label{eq:postinv_lapse}
\left| \log \left( \frac{\Omega^2}{- \dot{r}} \left( \frac{r}{r_-} \right)^{-\alpha^{-2}} \right) (s) - \log \mathcal{Y}_2 \right| \leq D_2
\epsilon^2 \cdot \left( \frac{r(s)}{r(s_{out})} \right)^{\beta},
\end{equation} with $ \mathcal{Y}_2 >0$ satisfying \begin{equation}\label{Y2.inv}
\left|\log\mathcal{Y}_2 - \frac{1}{2} \alpha^{-2} b_-^{-2} \epsilon^{-2} \right|\leq D_2 \cdot \log(\epsilon^{-1}).
\end{equation}
Here $\beta = \min \{ \alpha^{-2} - 1, \frac{1}{2} \}$. One may also find a constant $\mathcal{Z}_2>0$ with $|\mathcal{Z}_2| \sim \epsilon^2$ such that in the same region,
\begin{equation} \label{eq:postinv_r}
\left| - r \dot{r} (s) - \mathcal{Z}_2 \right| \leq D_2 \epsilon^4 \cdot \left( \frac{r(s)}{r(s_{out})} \right)^{\beta}.
\end{equation}
\end{lemma}
\begin{proof}
We again begin the proof of (\ref{eq:postinv_lapse}) by using the Raychaudhuri equation (\ref{eq:raych_3}). In this case, this equation alongside Proposition \ref{prop:k_ode} will tell us that
\begin{equation} \label{eq:postinv_lapse_der}
\left| \frac{d}{dR} \log \left( \frac{\Omega^2}{- \dot{r}} \right) - \alpha^{-2} \right| \lesssim
|\Psi(R) - \alpha^{-1}| + \epsilon^{-4} e^{-4R} R^2.
\end{equation}
To show that the expression $|\Psi - \alpha^{-1}|$ is integrable as $R \to \infty$, we apply Lemma \ref{lem:ode_decay}, with $R_* = R_{out}$, $\nu=\epsilon$ and $\alpha$ replaced by $\alpha^{-1}$ [note that $\alpha^{-1}> 1+\sigma>0$ assuming \eqref{**}]. The lemma thus tells us that for $\epsilon$ chosen sufficiently small, we have $|\Psi - \alpha^{-1}| \leq 8 \epsilon^2 e^{- \beta (R - R_{out})}$ for all $R\geq R_{out}$. Hence we have from (\ref{eq:postinv_lapse_der}) that
\begin{equation*}
\left| \frac{d}{dR} \log \left( \frac{\Omega^2}{- \dot{r}} \left( \frac{r}{r_-} \right)^{\alpha^{-2}} \right) \right| \lesssim \epsilon^2 e^{- \beta(R - R_{out})}.
\end{equation*}
As the right hand side is integrable, $\frac{\Omega^2}{-\dot{r}} \left( \frac{r}{r_-} \right)^{\alpha^{-2}}$ has a well defined limit $\mathcal{Y}_2$ as $R \to + \infty$, and we obtain (\ref{eq:postinv_lapse}).
For (\ref{eq:postinv_r}), we will not estimate $- r \dot{r}(s)$ directly but instead use $\Psi$ and $r^2 \dot{\phi} = - r \dot{r} \cdot \Psi$. We already have $|\Psi - \alpha^{-1}| \lesssim \epsilon^2 e^{- \beta(R - R_{out})}$, while we integrate (\ref{eq:phi_evol_2}) backwards from $R = + \infty$ to find that
\begin{equation*}
| r^2 \dot{\phi} (s) - \lim_{\tilde{s} \to s_{\infty}} r^2 \dot{\phi}(\tilde{s})| \lesssim \int_s^{s_{\infty}} r^2 |\phi|(\tilde{s}) + \Omega^2 r^2 |\phi|(\tilde{s}) \, d \tilde{s} \lesssim \int_{0}^{r(s)} \frac{r^3 |\phi|}{- r \dot{r}} + \frac{\Omega^2}{- \dot{r}} r^2 |\phi| \, dr.
\end{equation*}
Using Proposition \ref{prop:k_ode} and the fact that $\frac{\Omega^2}{-\dot{r}}$ is monotonically decreasing in $s$ and thus uniformly small (see e.g.\ (\ref{eq:k_lapse_upper})), it is straightforward to find that, defining $\mathcal{P}_2 = \lim_{s \to s_{\infty}} r^2 \dot{\phi}(s) \sim \epsilon^2$ (this is easy since the right-hand side above is $O(r^{4-})$), one has
\begin{equation}
|r^2 \dot{\phi}(s) - \mathcal{P}_2| \lesssim \epsilon^4 e^{- \beta(R - R_{out})}.
\end{equation}
Combining this with the aforementioned $|\Psi - \alpha^{-1}| \lesssim \epsilon^2 e^{- \beta(R - R_{out})}$, we deduce (\ref{eq:postinv_r}) for $\mathcal{Z}_2 = \alpha \mathcal{P}_2$.
Finally, we provide the estimate (\ref{Y2.inv}) for $\log \mathcal{Y}_2$. Note that (\ref{eq:postinv_lapse_der}) is valid in the whole region $\mathcal{K} = \{ R_i \leq R < + \infty \}$, so that in particular we may integrate in the interval $R \in [R_{in}, + \infty)$ to find
\begin{equation}
\left| \log \mathcal{Y}_2 - \log \left( \frac{\Omega^2}{- \dot{r}} \left( \frac{r}{r_-} \right)^{- \alpha^{-2}} \right) (R_{in})\right| \lesssim \epsilon^2 + \int_{R_{in}}^{R_{out}} |\Psi(R) - \alpha^{-1}| \, dR.
\end{equation}
We now compare this to the estimate (\ref{eq:preinv_lapse}) evaluated at $R = R_{in}$. As we are changing the exponent from $\alpha^{-2}$ to $\alpha^2$, we generate an extra term on the left hand side:
\begin{equation}
\left| \log \mathcal{Y}_2 - \log \mathcal{Y}_1 - (\alpha^{-2} - \alpha^2) R_{in} \right| \lesssim \epsilon^2 + \int_{R_{in}}^{R_{out}} |\Psi(R) - \alpha^{-1}| \, dR.
\end{equation}
Now we appeal to Lemma \ref{lem:inv_interval}. By this lemma, we know that $|R_{out} - R_{in}| \lesssim \log (\epsilon^{-1})$, so the integral on the RHS is $O(\log (\epsilon^{-1}))$. Using also (\ref{eq:inv_interval}) to estimate $R_{in}$ and the estimate (\ref{eq:inv_y}) for $\log \mathcal{Y}_1$, we find that
\begin{equation}
\left| \log \mathcal{Y}_2 - \frac{1}{2} \alpha^{-2} b_-^{-2} \epsilon^{-2} \right| \lesssim \log(\epsilon^{-1}).
\end{equation}
This completes the proof of the lemma.
\end{proof}
\subsection{Kasner-like asymptotics in synchronous coordinates in both cases}
To complete the proofs of Theorems \ref{thm.***} and \ref{thm.**}, we simply need to use the estimates of Lemmas \ref{lem:noinv_lapse}, \ref{lem:preinv_lapse} and \ref{lem:postinv_lapse} to put the metric $g$ in the Kasner-like form stated. The following lemma will justify the change of coordinates.
\begin{lemma} \label{lem:synchronous}
Let $(M, g)$ be a spherically symmetric spacetime with metric (\ref{eq:metric}). Defining the coordinates $s = u + v$, $t = v - u$, suppose further that $T = \partial_t = \frac{1}{2} (\partial_v - \partial_u)$ is a Killing field for the metric, i.e.\ $r(s)$ and $\Omega^2(s)$ are functions of $s$, and that we are in a trapped region with $\dot{r}(s) < 0$.
Suppose that for some interval $I \subset \mathbb{R}$, there exist constants $\mathcal{Y}, \mathcal{Z} > 0$ and an exponent $\gamma \geq 0$, as well as sufficiently small `lower-order terms' $\mathfrak{E}(s)$, such that we have the following asymptotics for the expressions $\frac{\Omega^2}{- \dot{r}}$ and $- r \dot{r}$ when $s \in I$:
\begin{equation} \label{eq:synch_y}
\frac{\Omega^2}{- \dot{r}} (s) = \mathcal{Y} \cdot \left( \frac{r(s)}{r_-} \right)^{\gamma} \cdot \left( 1 + \mathfrak{E}(s) \right),
\end{equation}
\begin{equation} \label{eq:synch_z}
- r \dot{r} (s) = \mathcal{Z} \cdot (1 + \mathfrak{E}(s)).
\end{equation}
We quantify the required smallness of $\mathfrak{E}$ in the following way: there exists some $\epsilon_* > 0$ chosen sufficiently small and a non-increasing function $\bar{\mathfrak{E}}(s)$ such that $|\mathfrak{E}(s)| \leq \bar{\mathfrak{E}}(s) \leq \epsilon_*$. Define (up to translation) the past-directed timelike coordinate $\tau$ by
\begin{equation} \label{eq:tau}
\frac{d \tau}{d s} = -\frac{ \Omega(s) }{2}.
\end{equation}
Then there exist constants $\mathcal{X}$ and $\mathcal{R}$ depending on $\mathcal{Y}, \mathcal{Z}, \gamma, r_-$, and some $\tau_0 \in \mathbb{R}$ such that we have the following asymptotics for $\Omega^2$ and $r^2$ with respect to $\tau$:
\begin{equation} \label{eq:synch_x}
\Omega^2(\tau) = 4 \mathcal{X} \cdot (\tau - \tau_0)^{\frac{2(\gamma - 1)}{\gamma + 3}} \cdot (1 + \mathfrak{E}_{X}(\tau)),
\end{equation}
\begin{equation} \label{eq:synch_r}
r^2(\tau) = \mathcal{R} \cdot r_-^2 \cdot (\tau - \tau_0)^{\frac{4}{\gamma + 3}} \cdot (1 + \mathfrak{E}_{R}(\tau)).
\end{equation}
Here, there exists some $C = C(\gamma) > 0$ such that $|\mathfrak{E}_X(\tau)|, |\mathfrak{E}_R(\tau)| \leq C \bar{\mathfrak{E}}(s(\tau))$.
In particular, the metric $g$ can be written in the form
\begin{equation} \label{eq:metric_synch}
g = - d \tau^2 + \mathcal{X} \cdot (\tau - \tau_0)^{\frac{2(\gamma - 1)}{\gamma + 3}} \cdot (1 + \mathfrak{E}_X(\tau)) \, dt^2 + \mathcal{R} \cdot (\tau - \tau_0)^{\frac{4}{\gamma + 3}} \cdot ( 1 + \mathfrak{E}_R(\tau)) \cdot r_-^2 \, d \sigma_{\mathbb{S}^2}.
\end{equation}
Finally, one can find the following relationships between $\mathcal{X}, \mathcal{R}$ and $\mathcal{Y}$:
\begin{equation} \label{eq:xy}
\left| \log \mathcal{X} - \frac{2 (\gamma + 1)}{\gamma + 3}\log \mathcal{Y} \right| \lesssim 1 + | \log \mathcal{Z} |,
\end{equation}
\begin{equation} \label{eq:ry}
\left| \log \mathcal{R} + \frac{2}{\gamma + 3}\log \mathcal{Y} \right| \lesssim 1 + | \log \mathcal{Z} |.
\end{equation}
\end{lemma}
\begin{proof}
In this proof we shall schematically write all lower-order terms as $\mathfrak{E}$, leaving the precise claims on these errors to the reader. The crucial estimate is to find an expression for $\tau$ in terms of $r$. Using (\ref{eq:tau}), (\ref{eq:synch_y}) and (\ref{eq:synch_z}), we find
\begin{equation} \label{eq:taur}
\frac{d \tau}{dr} = \frac{\Omega}{- 2 \dot{r}} = \frac{ \mathcal{Y}^{1/2} r_-^{1/2}}{ 2 \mathcal{Z}^{1/2}} \cdot \left( \frac{r}{r_-} \right)^{\frac{\gamma+1}{2}} \cdot ( 1 + \mathfrak{E}).
\end{equation}
Integrating this expression, and normalizing $\tau_0$ so that $r(\tau_0) = 0$ (where we artificially extend the range of $r$ and $\tau$ to still satisfy (\ref{eq:taur}) if necessary), we find
\begin{equation} \label{eq:taur2}
\tau - \tau_0 = \frac{\mathcal{Y}^{1/2} r_-^{3/2}}{(\gamma + 3) \mathcal{Z}^{1/2}} \cdot \left( \frac{r}{r_-} \right)^{\frac{\gamma + 3}{2}} \cdot (1 + \mathfrak{E}).
\end{equation}
The remaining assertions of the lemma are all immediate by combining (\ref{eq:taur2}) with the assumptions (\ref{eq:synch_y}) and (\ref{eq:synch_z}).
\end{proof}
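For instance, for the radius function one may invert (\ref{eq:taur2}) to get
\begin{equation*}
\frac{r}{r_-} = \left[ \frac{(\gamma+3)\, \mathcal{Z}^{1/2}}{\mathcal{Y}^{1/2} r_-^{3/2}} \,(\tau - \tau_0) \right]^{\frac{2}{\gamma+3}} \cdot (1 + \mathfrak{E}),
\end{equation*}
so that (\ref{eq:synch_r}) holds with $\mathcal{R} = \big[ (\gamma+3)\, \mathcal{Z}^{1/2} \mathcal{Y}^{-1/2} r_-^{-3/2} \big]^{\frac{4}{\gamma+3}}$, and taking logarithms yields (\ref{eq:ry}); the expression (\ref{eq:synch_x}) for $\Omega^2(\tau)$ is obtained in the same manner, starting from the identity $\Omega^2 = \frac{\Omega^2}{-\dot{r}} \cdot \frac{- r \dot{r}}{r}$.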
\textit{Conclusion of the proofs of Theorems~\ref{thm.***} and \ref{thm.**}}. Theorem~\ref{thm.***} follows immediately from combining Lemmas \ref{lem:noinv_lapse} and \ref{lem:synchronous}, taking $\tau_0=0$ (note indeed that in the no-inversion case \eqref{***}, \eqref{eq:taur} is true up to $\{r=0\}$ so $\tau_0=0$). Here $\gamma = \alpha^2$, and in order to obtain the final error estimate (\ref{***.error}) we also need \eqref{eq:taur} to change variables from $r$ to $\tau$.
For the proof of Theorem~\ref{thm.**} in case \eqref{**}, we first run the proof in $\mathcal{K}_1$ and apply Lemma~\ref{lem:preinv_lapse} to show that \eqref{eq:synch_y} and \eqref{eq:synch_z} with $\mathcal{Y}= \mathcal{Y}_1$, $\mathcal{Z}= \mathcal{Z}_1$, $\gamma=\alpha^2$ are true in $\mathcal{K}_1$. Unlike in case \eqref{***}, we are not able to continue these estimates all the way up to $r = 0$, so we take $\tau_0 \neq 0$ to account for the fact that the proper time variable with respect to which $\mathcal{K}_1$ is Kasner-like must be modified. Thus, we can apply Lemma~\ref{lem:synchronous} to deduce the Kasner-like behavior of $\mathcal{K}_1$ claimed in Theorem~\ref{thm.**}. On the other hand, Lemma~\ref{lem:postinv_lapse} and Lemma~\ref{lem:synchronous}, applied to $\mathcal{Y} = \mathcal{Y}_2$, $\mathcal{Z} = \mathcal{Z}_2$, $\gamma = \alpha^{-2}$ determine the Kasner-like behavior of $\mathcal{K}_2$ (where here we take $\tau_0 = 0$).
Finally, we prove the proper time estimates \eqref{eq:propertimek2} and \eqref{eq:propertimekinv}. For the former, taking the logarithm of \eqref{eq:taur2} in the context of the region $\mathcal{K}_2$ (where $\tau_0 = 0$) yields:
\begin{equation*}
\log \tau = \frac{1}{2} \log \mathcal{Y}_2 - \frac{1}{2} \log \mathcal{Z}_2 + \frac{\alpha^{-2} + 3}{2} \log r + O(1).
\end{equation*}
In particular, evaluating this at $s = s_{out}$, then using Lemma~\ref{lem:inv_interval} to estimate $r(s_{out}) = r_{out}$ yields
\begin{equation*}
\log \tau^{-1}(s_{out}) = \frac{1}{4} \alpha^{-2} b_-^{-2} \epsilon^{-2} +\frac{\alpha^{-2} + 3}{4 (1 - \alpha^2)} b_-^{-2} \epsilon^{-2} + O(\log (\epsilon^{-1})) = \frac{\alpha^{-2}+1}{2(1 - \alpha^{2})} b_-^{-2} \epsilon^{-2} + O(\log (\epsilon^{-1})).
\end{equation*}
For the final estimate \eqref{eq:propertimekinv}, we shall use the fact that $\frac{d \tau}{dr} = \frac{\Omega}{-2 \dot{r}}$ to find that
\begin{equation*}
|\tau(s_{in}) - \tau(s_{out})|
\leq (r_{in} - r_{out}) \cdot \max_{s \in \mathcal{K}_{inv}} \frac{\Omega}{-2 \dot{r}} \leq r_{in}^{3/2} \cdot \max_{s \in {\mathcal{K}_{inv}}} \left( \frac{\Omega^2}{-4 \dot{r}} \frac{1}{- r \dot{r}} \right)^{1/2}.
\end{equation*}
Now applying the estimate \eqref{monot} and then \eqref{eq:inv_interval} again (as $- r \dot{r} \sim \epsilon^2$ it is absorbed into the $\exp(D \log(\epsilon^{-1}))$ term in the next line):
\begin{align*}
|\tau(s_{in}) - \tau(s_{out})|
&\leq (\frac{r_{in}}{r_-})^{\frac{3 + (\alpha - \epsilon^2)^2}{2}} \cdot \exp( - \frac{1}{4} b_-^{-2} \epsilon^{-2}) \cdot \exp(D \log (\epsilon^{-1})), \\[0.5em]
&\leq \exp(- \frac{1}{4} b_-^{-2} \epsilon^{-2} \left[\frac{3+ \alpha^2}{1 - \alpha^2} + 1\right]) \cdot \exp(D \log (\epsilon^{-1})),
\end{align*}
yielding \eqref{eq:propertimekinv} as required. This completes the proof of Theorem~\ref{thm.**}, and also Theorem~\ref{maintheorem2}.
\section{Introduction}\label{sec:intro}
James Clerk Maxwell \cite{maxwell1} showed in 1861 that the electric and magnetic fields are not separate phenomena: they instead exchange energy as their amplitudes oscillate in wave patterns, which propagate through space at the speed of light. The resulting celebrated Maxwell equations have withstood the revolutions of modern physics and, to the present day, remain indispensable to accurately describe radio-frequency devices in industry or to explain experimental findings in electromagnetism.
A hundred years after Maxwell's original theory, later succinctly recast by Heaviside \cite{heaviside1} in the language of vector calculus, Yee showed in \cite{yee} how a Maxwell initial boundary value problem (MIBVP) in 3+1 dimensions of space and time can be solved efficiently on computers, by appropriately choosing the points where fields and their derivatives are to be approximated by finite difference equations on two staggered and uniformly spaced Cartesian--orthogonal grids. Since then Yee's algorithm has slowly become ubiquitous (see \cite{taflove,meep}), yet a plethora of other methods has also been proposed, analysed and tested to account for its various shortcomings: ineffectiveness in the case of material discontinuities which cannot be aligned with the Cartesian axes, and a fixed $\mathcal{O}(h^2 + \tau^2)$ order of convergence, where $h$ and $\tau$ are the discrete steps in the spatial and temporal grids, respectively.
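To fix ideas about the staggering and leapfrogging just described, we report a minimal one-dimensional sketch of the update rules, written in Python with normalised units $\varepsilon_0 = \mu_0 = 1$; the grid sizes, source and boundary treatment are arbitrary illustrative choices and are not taken from any of the references above.
\begin{verbatim}
import numpy as np

# Minimal 1D Yee-type leapfrog sketch in normalised units (illustrative only).
nx, nt = 200, 400
h = 1.0 / nx
tau = 0.5 * h                      # time step respecting the CFL condition

Ez = np.zeros(nx + 1)              # electric field samples at integer nodes
Hy = np.zeros(nx)                  # magnetic field samples at staggered (half) nodes

for n in range(nt):
    # magnetic update (half a time step ahead of the electric one)
    Hy += (tau / h) * (Ez[1:] - Ez[:-1])
    # electric update, using the freshly updated magnetic field
    Ez[1:-1] += (tau / h) * (Hy[1:] - Hy[:-1])
    # PEC-like boundary: tangential electric field kept at zero at both ends
    Ez[0] = Ez[-1] = 0.0
    # soft Gaussian source injected in the middle of the grid
    Ez[nx // 2] += np.exp(-((n - 30.0) / 10.0) ** 2)

print("max |Ez| after", nt, "steps:", np.abs(Ez).max())
\end{verbatim}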
Without any pretence of being exhaustive, we mention in this introductory section some families of approaches which try to mend these drawbacks.
There are approaches based on conforming finite element spaces (see \cite{jin,monk} and references therein), which work on unstructured space grids and provide (tangentially continuous) piecewise-polynomial vector basis-functions of arbitrary degree (mainly the ones introduced by Nedelec in \cite{nedelec1980}). Unfortunately these approaches lose the efficiency inherent in the Yee algorithm, since the system matrices\footnote{usually the mass-matrices.} which need to be inverted at every time-step are banded but not (block-)diagonal. This amenable structure can be retrieved if mass-lumping techniques are employed (e.g. \cite{white}), where basis-functions are strongly tied to inexact numerical integration rules and need to be completely re-computed (or are simply unavailable) if the order of approximation needs to be increased.
Later developments led instead to the adoption of Discontinuous Galerkin (DG) Finite Element Method (FEM) approaches, which ignore the conformity constraint on the basis-functions and use orthonormal bases (which in principle lead to spectral convergence rates) compactly supported inside each finite element in the spatial discretisation of the domain. This choice, of course, destroys the geometry of the continuous Maxwell system, introducing spurious numerical solutions which do not converge to physical ones as the mesh size $h$ tends to zero, and the presence of which can be easily detected by applying the same discretisation method for solving the Maxwell eigenvalue problem (MEP) instead of the MIBVP \cite{spurious_modes}. Counter-measures can be taken, in the form of penalization terms for the tangential jumps in the approximated solutions: for example, using up-wind fluxes (as in \cite{warburton_jcp}) eliminates spurious solutions by introducing numerical energy dissipation in the scheme, which fact can become unacceptable when long-time behaviours of electromagnetic systems have to be studied. On the other hand, symmetric-interior-penalty (SIP) schemes (see \cite{grote,christophs,huang_sip}) preserve the hyperbolic nature of the system by introducing more unknowns which live on the skeleton of the mesh and do not approximate any physical quantity. Furthermore, a positive definite scalar penalty parameter, which must be tuned by the algorithm's user in accordance with $h$ and the maximum polynomial degree in the chosen bases, must also be inserted in the formulation.
There is a third class of mutually related methods which mimic more closely Yee's original algorithm: the Finite Integration Technique of \cite{weiland,matsuo}, which recasts the equations in integral form to apply the Yee algorithm to general staggered cuboidal elements but does not improve the accuracy of the original method otherwise (although we note that higher order versions of the method restricted to Cartesian-orthogonal grids do exist, e.g. \cite{chung}); and the cell method (CM) of \cite{tonti,marrone,codecasa_politi,pigitd,dgatap}, which is also developed on two spatial grids in the more general setting of unstructured meshes, where the dual mesh is obtained either by the barycentric subdivision (a procedure we will review in the present contribution) or by the circumcentric subdivision of the primal mesh. These methods can be theoretically studied in a wider framework (see also \cite{Stern2015,teixeira_lattice}) of approaches particularly fitting for Maxwell's equations (since they encode the so-called De Rham complex), in which differential operators are discretised using only topological information about the input mesh and all the metric information is instead encapsulated in the mass-matrix (which is in this context much rather seen as a discrete Hodge-star operator, e.g. \cite{hiptmair2001, kettunen_hodge,bossavit_yee,auchmann}). The structure-preserving nature of these methods comes at the price of not being able to extend their convergence order to asymptotics steeper than $\mathcal{O}(h)$ (or $\mathcal{O}(h^2)$ at best if strict conditions on the mesh are imposed). This elusive higher order approximation remains a much desired property, since, far from material discontinuities, solutions of the MIBVP are smooth and oscillatory.
In the present paper we are strongly inspired by this latter framework: we start from the set of basis-functions introduced by Codecasa and co-authors in \cite{codecasa_politi,dgatap}, and more recently studied in \cite{dga_as_dg}, where an equivalence between their formulation and a peculiar DG one using two barycentric-dual unstructured meshes and piecewise-constant basis-functions was proven by some of the present authors. Building on this result, we show how to extend the method to arbitrary degree in the local polynomial basis-functions. To make the present work as self-contained as possible, we use Section \ref{sec:maxwell} to review the continuous problem and the associated notation and Section \ref{sec:mesh} to review concepts related to barycentric-dual cellular complexes. In Section \ref{sec:funspaces} the abstract setting in terms of involved functional spaces for the new algorithm is introduced (which fact also gives new and valuable mathematical background for the cell method), followed by an explicit construction of the bases for finite-dimensional (arbitrary-order) approximations of the newly introduced spaces. A proof is given for the electromagnetic energy conservation property of the ensuing semi-discrete scheme.
Section \ref{sec:basis} provides some insight on the relationship between the new arbitrary-order scheme and known lowest order ones, in so far they present the same explicit splitting of topological and geometric operators. Some details on the optimization of a possible computer implementation are also given. Section \ref{sec:num} provides numerical experiments to validate the correctness and performance of the proposed method: particular focus is devoted to showing the spectral correctness of the method (which is paramount for practical high bandwidth applications). Some general remarks, open questions, and directions for future work conclude the paper in Section \ref{sec:conclusio}.
\section{The Maxwell system of equations}\label{sec:maxwell}
In two space dimensions, the most general form for the MIBVP is
\begin{align}
& \partial_t\bm{D}\left(\bm{r},t\right) = \bm{curl}(H\left(\bm{r},t\right)) -\bm{J}(\bm{r},t), \label{eq:ampmax}\\
& \partial_t B\left(\bm{r},t\right) = - curl(\bm{E}\left(\bm{r},t\right)), \label{eq:faraday}\\
& div(\bm{D}(\bm{r},t)) = \rho_c (\bm{r},t), \label{eq:egauss}\\
& div(B(\bm{r},t)) = 0, \label{eq:mgauss}
\end{align}
\noindent to be solved $\forall t \!\in\! [0,+\infty[$ and for all $\bm{r}(x,y)$ in the bounded domain $\Omega \subset \mathbb{R}^{2}$. The fields $\bm{E}(\bm{r},t)$ and $\bm{D}(\bm{r},t)$ go by the names of electric field and electric displacement field, respectively, while the fields $H(\bm{r},t)$ and $B(\bm{r},t)$ are called magnetic field and magnetic induction field. Fields $\bm{J}(\bm{r},t)$ (the convective electric current) and $\rho_c(\bm{r},t)$ (the free electric charge) are source-terms which cause the dynamics of electromagnetic fields, i.e. they are the true right-hand side (r.h.s.) in the system of partial differential equations.
Since we set ourselves in the $\mathbb{R}^2$ ambient space, we denote only some of the unknown fields in bold-face: $\bm{E}(\bm{r},t)$ is in fact a (polar) vector field living in the Cartesian plane, while $H(\bm{r},t)$ is a pseudo-vector aligned with the $z$-axis: the true vector field would live in $\mathbb{R}^3$, with the condition $\bm{H}(\bm{r},t) = \left(0,\;0,\;H(\bm{r},t)\right)^{\mathrm{T}}$ (where the $(\cdot)^{\mathrm{T}}$ superscript denotes vector or matrix transposition).
In the applied jargon of microwave engineers this is the so-called \emph{Transverse-Magnetic} (TM) field.
We have accordingly used the appropriate $curl$ and $div$ (for divergence) operators for any vector field $\bm{v}(\bm{r},t) = ( v_x (\bm{r},t),\; v_y (\bm{r},t) )^{\mathrm{T}}$, defined (in Cartesian coordinates) as
\begin{align*}
& curl(\bm{v}(\bm{r},t)) = \partial_x v_y(\bm{r},t)-\partial_y v_x(\bm{r},t),\\
& div(\bm{v}(\bm{r},t)) = \partial_x v_x(\bm{r},t) + \partial_y v_y(\bm{r},t),
\end{align*}
\noindent as well as the $\bm{curl}$ and $div$ operators for any pseudo-vector $u(\bm{r},t)$, defined as
\begin{align*}
& \bm{curl}(u(\bm{r},t)) = \left(\partial_y u(\bm{r},t), \; -\partial_x u (\bm{r},t)\right)^\mathrm{T},\\
& div(u(\bm{r},t)) = \partial_z u(\bm{r},t) = 0,
\end{align*}
\noindent all valid for suitably differentiable components of $\bm{v}$, $u$. We will also make use of the identities
\begin{align}
& curl\left( u\bm{v}\right) = curl(\bm{v})\,u - \bm{curl}(u)\cdot\bm{v}, \label{eq:green_id}\\
& \int_\Omega curl\left(\bm{v}\right)\,\mathrm{d}\bm{r} = \oint_{\partial\Omega} \bm{v}\cdot\hat{\bm{t}}(\ell)\,\mathrm{d}\ell, \label{eq:green_th}
\end{align}
\noindent namely the product rule for partial derivatives and the Green theorem, again valid for suitably differentiable functions.
The notation $\hat{\bm{t}}(\ell)$ will denote the tangential unit vector on a directed curve (for example the boundary of $\Omega$, denoted $\partial\Omega$) for which $\ell$ is the arc-length parameter.
Furthermore, we remark that the tangent unit vector is taken to always induce a counter-clockwise circulation on contours in accordance with the well-known cork-screw rule.
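For concreteness, the first identity can be verified directly from the definitions above: for sufficiently smooth $u$ and $\bm{v}$ one computes
\begin{equation*}
curl\left( u\bm{v}\right) = \partial_x (u v_y) - \partial_y (u v_x) = u \left( \partial_x v_y - \partial_y v_x \right) + v_y\, \partial_x u - v_x\, \partial_y u = curl(\bm{v})\,u - \bm{curl}(u)\cdot\bm{v},
\end{equation*}
since $\bm{curl}(u)\cdot\bm{v} = v_x\, \partial_y u - v_y\, \partial_x u$.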
One can promptly argue that equations (\ref{eq:egauss})--(\ref{eq:mgauss}) are not dynamical equations but rather constraints on the initial conditions. It is easy to see that, if (\ref{eq:egauss})--(\ref{eq:mgauss}) hold true for $t=0$, then they are satisfied for any $t$ with $0< t < +\infty$: it suffices to take the divergence of both sides in the remaining two equations and integrate them with respect to time from zero to the chosen instant. We are therefore left with two equations and four unknowns: to close the system again, (\ref{eq:ampmax})--(\ref{eq:faraday}) must be supplemented with the phenomenological\footnote{experimentally determined.} constitutive equations
\begin{align}
& \bm{D}(\bm{r},t) = \varepsilon(\bm{r},t) \bm{E}\left(\bm{r},t\right),\label{eq:const_DE}\\
& B(\bm{r},t) = \mu(\bm{r},t) H(\bm{r},t),\label{eq:const_BH}
\end{align}
\noindent where $\varepsilon = \varepsilon_0\varepsilon_r$, $\mu = \mu_0\mu_r$ are respectively called dielectric permittivity and magnetic permeability, with $\mu_0$ and $\varepsilon_0$ also being experimental constants and $c_0 = \left(\mu_0\varepsilon_0\right)^{-\frac{1}{2}}$ being the speed of light (i.e. the wave-speed of electromagnetic radiation) in a vacuum.
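To make the preservation of (\ref{eq:egauss})--(\ref{eq:mgauss}) noted above fully explicit: taking the divergence of (\ref{eq:ampmax}) and using $div(\bm{curl}(H)) = \partial_x \partial_y H - \partial_y \partial_x H = 0$ gives
\begin{equation*}
\partial_t \, div(\bm{D}(\bm{r},t)) = - div(\bm{J}(\bm{r},t)) = \partial_t \rho_c(\bm{r},t),
\end{equation*}
where the last equality holds whenever the sources satisfy the usual charge-continuity relation $\partial_t \rho_c + div(\bm{J}) = 0$ (trivially so in the source-free setting adopted below); integrating in time then propagates (\ref{eq:egauss}) from $t=0$ to any later instant, while (\ref{eq:mgauss}) is automatically preserved in the present two-dimensional setting, since $B$ does not depend on $z$.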
We will now make some mildly restrictive assumptions: we consider, in all that follows, time-invariant materials (for which generalization to general dispersive ones is, as for all numerical methods, more involved and will be the object of future studies). We further assume the material parameters to be symmetric positive-definite (s.p.d.) tensors (of rank two for $\varepsilon$, rank one for $\mu$) with piecewise-smooth and point-wise bounded (in space) real coefficients.
For simplicity of presentation only, we also consider the source-free equations, i.e. $\bm{J}=\bm{0}$, $\rho_c = 0$ (the generalization of the analysis to problems with sources is straightforward and will be employed in the numerical experiments of Section \ref{sec:num}).
We finally assume the spatial domain $\Omega$ to be a bounded polygon and allow homogeneous Dirichlet boundary conditions on either field: $\bm{E}(\bm{r},t)\cdot\hat{\bm{t}}(\ell) = \boldsymbol{0}$, $\forall \bm{r}\!\in\!\partial\Omega$, i.e. perfect electric conductor (PEC) boundary conditions in the applied jargon, or $H(\bm{r},t) = 0$, $\forall \bm{r}\!\in\!\partial\Omega$, i.e. perfect magnetic conductor (PMC). It is easy to deduce from the system of equations that Dirichlet boundary conditions for any of the two fields imply Neumann ones for the remaining unknown, and vice-versa.
\section{Barycentric-dual complexes}\label{sec:mesh}
Having as a goal the numerical solution of (\ref{eq:ampmax})--(\ref{eq:faraday}), we assume a conforming (see \ref{sec:app1}) partition of $\Omega$ into triangles (a triangular mesh) to be available, which can be easily provided from any black-box mesher (e.g. \cite{netgen,distmesh}).
Rigorously speaking, said partition is a particular kind of simplicial complex. We define a simplicial complex for $\Omega$, denoted $\mathcal{C}^\Omega$, as a sequence of sets of simplexes in various dimensions
$$\mathcal{C}^\Omega = \{\mathcal{C}_k^\Omega\}_{k=0,1,\dots,d},$$
\noindent where $d=2$ is the ambient space dimension. Keeping in mind that a $k$--simplex is the convex hull of $k\!+\!1$ affinely independent points, $\mathcal{C}_2^{\Omega}$ will denote the set of triangles (2-simplexes), $\mathcal{C}_1^\Omega$ will denote the set of edges (1-simplexes), while $\mathcal{C}_0^\Omega$ will denote the set of vertices (0-simplexes) in the mesh. We also define the skeleton of a complex:
\begin{align}
& \mathcal{S}\left(\mathcal{C}^\Omega\right) = \bigcup_{k=0}^{d-1} \mathcal{C}_k^\Omega , \label{eq:skeleton_primal}
\end{align}
\noindent i.e. the set of all simplexes of dimension smaller than the maximal one.
Furthermore, we assume that the vertices in $\mathcal{C}_0^\Omega$ are ordered by means of an index set $\mathcal{I} \subset \mathbb{N}^+$. Consequently, all edges in $\mathcal{C}_1^\Omega$ possess a global (in $\mathcal{C}^\Omega$) inner orientation, induced by the ordering of the vertices in their boundary.
A generic simplicial complex is itself a particular type of \emph{cellular} (or cell) complex, which is the more general structure obtained by relaxing the requirement that the geometric entities of $\mathcal{C}^\Omega$ be simplexes and allowing, for example, generic polytopes (called $k$-cells instead of $k$-simplexes). Our starting mesh, like any simplicial complex, possesses a dual complex, which we denote (in 2D) with $\tilde{\mathcal{C}}^\Omega = \{\tilde{\mathcal{C}}_0^\Omega,\tilde{\mathcal{C}}_1^\Omega,\tilde{\mathcal{C}}_2^\Omega\}$ and which is indeed a cellular complex \emph{but not a simplicial one}. The existence of a dual cellular complex hinges on a sequence of one-to-one mappings $\{D_k\}_{k=0,1,2}$ such that
\begin{align}
&D_k: \mathcal{C}_k^\Omega \mapsto \tilde{\mathcal{C}}_{d-k}^\Omega.\label{eq:duality_mapping}
\end{align}
This mathematical concept originally arose in the solution of algebraic-topological problems \cite{munkres}, and the geometric realization (which is non-unique) of such a dual cellular complex is very often outside of the computational needs of topologists. On the contrary, for what follows, it is a fundamental choice to construct $\tilde{\mathcal{C}}^\Omega$ via the barycentric subdivision of $\mathcal{C}^\Omega$: each vertex $\tilde{\mathbf{v}} \in \tilde{\mathcal{C}}_0^\Omega$ is the centroid of some $\mathcal{T}\in\mathcal{C}_2^\Omega$, each $\tilde{e} \in \tilde{\mathcal{C}}_1^\Omega$ is a polyline obtained by joining the centroid of some $E\in\mathcal{C}_1^\Omega$ to the centroids of the neighbouring triangles, while each $\tilde{\mathcal{T}} \in \tilde{\mathcal{C}}_2^\Omega$ is a (generally non-convex) polygon bounded by dual edges (elements of $\tilde{\mathcal{C}}_1^\Omega$) and containing exactly one vertex $\mathbf{v}\in\mathcal{C}_0^\Omega$. A depiction of one simplicial complex and its barycentric-dual companion is given in the two leftmost panels of Fig.~\ref{fig:staggered_grids}, while the whole formal procedure is more thoroughly described in \ref{sec:app1}.
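For the reader who prefers an algorithmic description, the dual 1-cells just introduced can be assembled from a vertex/triangle description of the mesh roughly as in the following Python sketch; this is a purely illustrative fragment with a placeholder two-triangle mesh, and not the data structures actually employed in our implementation.
\begin{verbatim}
import numpy as np

# Illustrative construction of barycentric-dual 1-cells: each dual edge is a
# polyline joining the midpoint (centroid) of a primal edge to the centroids
# of the triangles sharing it (one triangle only for boundary edges).
points = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
triangles = np.array([[0, 1, 2], [0, 2, 3]])        # placeholder mesh

centroids = points[triangles].mean(axis=1)          # dual 0-cells, one per triangle

edge_to_tris = {}                                   # primal edge -> incident triangles
for t, tri in enumerate(triangles):
    for a, b in ((0, 1), (1, 2), (2, 0)):
        e = tuple(sorted((int(tri[a]), int(tri[b]))))
        edge_to_tris.setdefault(e, []).append(t)

dual_edges = []                                     # dual 1-cells, stored as polylines
for (i, j), tris in edge_to_tris.items():
    midpoint = 0.5 * (points[i] + points[j])
    polyline = [centroids[tris[0]], midpoint] + [centroids[t] for t in tris[1:]]
    dual_edges.append(np.array(polyline))

print(len(centroids), "dual vertices,", len(dual_edges), "dual edges")
\end{verbatim}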
\begin{figure}[!h]
\centering
\begin{minipage}{0.3\textwidth}
\centering
\begin{tikzpicture}[thick,scale=3.5, every node/.style={scale=3.5}]
\draw (0.0,0.0) -- (0.5,0.0) -- (0.5,0.5) -- (0.0,0.0);
\draw[fill=gray] (0.0,0.0) -- (0.5,0.5) -- (0.0,0.5) -- (0.0,0.0);
\draw (0.0,0.5) -- (0.5,0.5) -- (0.0,1.0) -- (0.0,0.5);
\draw (0.5,0.5) -- (0.0,1.0) -- (0.5,1.0) -- (0.5,0.5);
\draw (0.5,0.5) -- (0.5,0.0) -- (1.0,0.0) -- (0.5,0.5);
\draw (0.5,0.5) -- (1.0,0.0) -- (1.0,0.5) -- (0.5,0.5);
\draw (0.5,0.5) -- (1.0,0.5) -- (1.0,1.0) -- (0.5,0.5);
\draw (0.5,0.5) -- (1.0,1.0) -- (0.5,1.0) -- (0.5,0.5);
\draw[color=red] (1/2,1/2) -- (1.0,1/2);
\node[scale=0.3] at (1/2+0.05,1/2+0.1) {$\mathbf{v}$};
\node[scale=0.3,color=red,thick] at (3/4,1/2+0.05) {$E$};
\node[scale=0.3] at (1/6,1/3) {$\mathcal{T}$};
\node[circle,fill,scale=0.05] at (1/2,1/2) {};
\end{tikzpicture}\end{minipage}
\begin{minipage}{0.3\textwidth}
\centering
\begin{tikzpicture}[thick,scale=3.5, every node/.style={scale=3.5}]
\draw[color=black,fill=gray] (1/2,1/4) -- (2/3,1/6) -- (3/4,1/4) -- (5/6,1/3) -- (3/4,1/2)
-- (5/6,2/3) -- (3/4,3/4) -- (2/3,5/6) -- (1/2,3/4) -- (1/3,5/6) -- (1/4,3/4)
-- (1/6,2/3) -- (1/4,1/2) -- (1/6,1/3) -- (1/4,1/4) -- (1/3,1/6) -- (1/2,1/4);
\draw[dashed] (0.0,0.0) -- (0.5,0.0) -- (0.5,0.5) -- (0.0,0.0);
\draw[dashed] (0.0,0.0) -- (0.5,0.5) -- (0.0,0.5) -- (0.0,0.0);
\draw[dashed] (0.0,0.5) -- (0.5,0.5) -- (0.0,1.0) -- (0.0,0.5);
\draw[dashed] (0.5,0.5) -- (0.0,1.0) -- (0.5,1.0) -- (0.5,0.5);
\draw[dashed] (0.5,0.5) -- (0.5,0.0) -- (1.0,0.0) -- (0.5,0.5);
\draw[dashed] (0.5,0.5) -- (1.0,0.0) -- (1.0,0.5) -- (0.5,0.5);
\draw[dashed] (0.5,0.5) -- (1.0,0.5) -- (1.0,1.0) -- (0.5,0.5);
\draw[dashed] (0.5,0.5) -- (1.0,1.0) -- (0.5,1.0) -- (0.5,0.5);
\draw[color=black] (1/4,0.0) -- (1/3,1/6);
\draw[color=black] (3/4,0.0) -- (2/3,1/6);
\draw[color=black] (1.0,1/4) -- (5/6,1/3);
\draw[color=black] (1.0,3/4) -- (5/6,2/3);
\draw[color=black] (3/4,1.0) -- (2/3,5/6);
\draw[color=black] (1/4,1.0) -- (1/3,5/6);
\draw[color=black] (0.0,3/4) -- (1/6,2/3);
\draw[color=black] (0.0,1/4) -- (1/6,1/3);
\draw[color=red] (5/6,1/3) -- (3/4,1/2) -- (5/6,2/3);
\node[scale=0.3] at (1/2+0.05,1/2+0.1) {$\tilde{\mathcal{T}}$};
\node[scale=0.3,thick,color=red] at (5/6,1/2+0.07) {$\tilde{E}$};
\node[circle,fill,scale=0.05] at (1/6,1/3) {};
\node[scale=0.3] at (1/6,1/4) {$\tilde{\mathbf{v}}$};
\end{tikzpicture}\end{minipage}
\begin{minipage}{0.3\textwidth}
\centering
\begin{tikzpicture}[thick,scale=3.5, every node/.style={scale=3.5}]
\draw[color=black] (1/2,1/4) -- (2/3,1/6) -- (3/4,1/4) -- (5/6,1/3) -- (3/4,1/2)
-- (5/6,2/3) -- (3/4,3/4) -- (2/3,5/6) -- (1/2,3/4) -- (1/3,5/6) -- (1/4,3/4)
-- (1/6,2/3) -- (1/4,1/2) -- (1/6,1/3) -- (1/4,1/4) -- (1/3,1/6) -- (1/2,1/4);
\draw[color=black] (0.0,0.0) -- (0.5,0.0) -- (0.5,0.5) -- (0.0,0.0);
\draw[color=black] (0.0,0.0) -- (0.5,0.5) -- (0.0,0.5) -- (0.0,0.0);
\draw[color=black] (0.0,0.5) -- (0.5,0.5) -- (0.0,1.0) -- (0.0,0.5);
\draw[color=black] (0.5,0.5) -- (0.0,1.0) -- (0.5,1.0) -- (0.5,0.5);
\draw[color=black] (0.5,0.5) -- (0.5,0.0) -- (1.0,0.0) -- (0.5,0.5);
\draw[color=black] (0.5,0.5) -- (1.0,0.0) -- (1.0,0.5) -- (0.5,0.5);
\draw[color=black] (0.5,0.5) -- (1.0,0.5) -- (1.0,1.0) -- (0.5,0.5);
\draw[color=black] (0.5,0.5) -- (1.0,1.0) -- (0.5,1.0) -- (0.5,0.5);
\draw[color=black] (1/4,0.0) -- (1/3,1/6);
\draw[color=black] (3/4,0.0) -- (2/3,1/6);
\draw[color=black] (1.0,1/4) -- (5/6,1/3);
\draw[color=black] (1.0,3/4) -- (5/6,2/3);
\draw[color=black] (3/4,1.0) -- (2/3,5/6);
\draw[color=black] (1/4,1.0) -- (1/3,5/6);
\draw[color=black] (0.0,3/4) -- (1/6,2/3);
\draw[color=black] (0.0,1/4) -- (1/6,1/3);
\draw[dashed,fill=gray] (1/2,0.0) -- (3/4,0.0) -- (2/3,1/6) -- (1/2,1/4) -- (1/2,0.0);
\draw[color=red] (3/4,0.0) -- (2/3,1/6);
\draw[color=green] (1/2,1/4) -- (1/2,0.0);
\node[scale=0.3] at (3/5,1/9) {$K$};
\node[scale=0.3,thick,color=red] at (3/4+0.05,1/12) {$\tilde{e}$};
\node[scale=0.3,thick,color=green] at (1/2-0.05,1/8) {${e}$};
\end{tikzpicture}\end{minipage}
\caption{The primal and dual complex: a glossary. On the left we mesh the unit square $\Omega=[0,1]\times[0,1]$ with the simplicial complex $\mathcal{C}^\Omega$ and we show $\mathbf{v}\in\mathcal{C}_0^\Omega$, $E\in\mathcal{C}_1^\Omega$, $\mathcal{T}\in\mathcal{C}_2^\Omega$. In the middle, where the primal complex is shown dashed, we have constructed the barycentric-dual complex: $\tilde{\mathbf{v}}\in\tilde{\mathcal{C}}_0^\Omega$ is dual to $\mathcal{T}$, $\tilde{E}\in\tilde{\mathcal{C}}_1^\Omega$ is dual to $E$, $\tilde{\mathcal{T}}\in\tilde{\mathcal{C}}_2^\Omega$ is dual to $\mathbf{v}$. On the right, we finally draw the resulting auxiliary complex $\mathcal{K}^\Omega$ and also emphasize a quadrilateral $K\in\mathcal{K}_2^\Omega$, an edge ${e}\in\{\mathcal{K}_1^\Omega\cap\mathcal{S}(\mathcal{C}^\Omega)\}$, and an edge $\tilde{e} \in \{\mathcal{K}_1^\Omega \cap \mathcal{S}( \tilde{\mathcal{C}}^\Omega )\}$.}
\label{fig:staggered_grids}
\end{figure}
We will also need, for what lies ahead in the paper, to define an additional complex $\mathcal{K}^\Omega =\{\mathcal{K}_k^\Omega\}_{k=0}^2$, where
\begin{align}
& \mathcal{K}_2^\Omega = \left\{ \emptyset \neq K=\mathcal{T}\cap\tilde{\mathcal{T}}, \;\; \forall\mathcal{T}\!\in\!\mathcal{C}_2^\Omega, \forall\tilde{\mathcal{T}}\!\in\!\tilde{\mathcal{C}}_2^\Omega \right\},\label{eq:kite_complex}
\end{align}
\noindent and where we note that each $d$-dimensional simplex of the original mesh (and hence the whole of $\Omega$) is thus further partitioned into $d+1$ disjoint subsets $K\!\in\!\mathcal{K}_2^\Omega$ (see again Fig.~\ref{fig:staggered_grids}, rightmost panel). For each original triangle we thus get three irregular quadrilaterals, which will be of utmost importance: we call them \emph{fundamental 2-cells} (see the definition of \emph{micro-cell} in \cite{marrone} or see \cite{dga_as_dg}) and denote them with $K$ in the rest of the paper. The definitions of the lower dimensional sets $\mathcal{K}_1^\Omega$ and $\mathcal{K}_0^\Omega$ are intuitive, but we additionally provide here an explicit decomposition of $\mathcal{K}_1^\Omega$ into the two sets of segments
\begin{align*}
& \mathcal{K}_1^\Omega\cap\mathcal{S}(\mathcal{C}^\Omega) = \{ \emptyset \neq {e} = E \cap \partial K \; s.t. \; E \in \mathcal{C}_1^\Omega, K\in\mathcal{K}_2^\Omega\},\\
& \mathcal{K}_1^\Omega\cap\mathcal{S}(\tilde{\mathcal{C}}^\Omega) = \{ \emptyset \neq \tilde{{e}} =
\tilde{E} \cap \partial K \; s.t. \; \tilde{E} \in \tilde{\mathcal{C}}_1^\Omega, K\in\mathcal{K}_2^\Omega\},
\end{align*}
\noindent of which $\mathcal{K}_1^\Omega$ is the disjoint union. We furthermore note that every $K\in\mathcal{K}_2^\Omega$ is uniquely identified by a triangle--vertex pair $(\mathcal{T},\mathbf{v})$, for some triangle $\mathcal{T}\!\in\!\mathcal{C}_2^\Omega$ and some mesh vertex $\mathbf{v} \!\in\! \{\mathcal{C}_0^\Omega \cap \partial\mathcal{T}\}$.
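As a bookkeeping aid, the identification of fundamental 2-cells with triangle--vertex pairs can be made explicit in a couple of lines of Python (the connectivity array below is a purely hypothetical example):
\begin{verbatim}
import numpy as np

triangles = np.array([[0, 1, 2], [0, 2, 3]])   # hypothetical connectivity
# each fundamental 2-cell K corresponds to one (triangle, vertex) pair
fundamental_cells = [(t, int(v)) for t, tri in enumerate(triangles) for v in tri]
assert len(fundamental_cells) == 3 * len(triangles)   # d+1 = 3 cells per triangle
\end{verbatim}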
Having constructed the appropriate dual complex we can refer hereinafter to the starting simplicial one as the primal complex.
We mention in passing the circumcentric-dual (see \cite{tonti}) as another popular construction employed in the literature, which has favourable properties for finite volume schemes (see \cite{leveque_2002} and references therein), but requires the triangulation of $\Omega$ to be a Delaunay one, which is often too restrictive or simply not guaranteed by the meshing algorithm at hand.
We conclude the section by remarking that there is an equivalent definition of the skeleton $\mathcal{S}(\tilde{\mathcal{C}}^\Omega)$ for the dual complex and by noting that in the following the notation $|\mathcal{C}_k^\Omega|$ and $|\tilde{\mathcal{C}}_k^\Omega|$ will, as customary, denote the cardinality of the argument set (e.g. the number of triangles in the primal complex is $|\mathcal{C}_2^\Omega|$, the number of edges in the primal complex is $|\mathcal{C}_1^\Omega|$, etc.).
\section{The new formulation: continuous and discrete}\label{sec:funspaces}
In the present section we turn our attention to functions supported on these complexes and use ingredients from the theory of Sobolev spaces to develop the mathematical background for our method. We will then return to the Maxwell system we want to solve and make use of all this machinery.
\subsection{Barycentric-dual discontinuous functional spaces}
For any bounded $\mathcal{D}\subset\mathbb{R}^2$, we recall the usual real Hilbert spaces:
\begin{align*}
& L^2(\mathcal{D}) = \left\{ f:\mathcal{D}\mapsto\mathbb{R} \; s.t. \; \int_\mathcal{D} \vert f \vert^2\,\mathrm{d}\bm{r} < +\infty \right\},\\
& \bm{L}^2(\mathcal{D}) = \bigm\{ \bm{v} = \left(f,\;g\right)^{\mathrm{T}} : \mathcal{D}\mapsto\mathbb{R}^2 \; s.t. \; f,g \!\in\! L^2(\mathcal{D})\bigm\},
\end{align*}
\noindent from which we infer the standard inner product and its induced norm:
\begin{align*}
& \left( f,g \right)_\mathcal{D} := \int_\mathcal{D} fg\,\mathrm{d}\bm{r},\;\;\;
\Norm{f}[\mathcal{D}] = \left(\int_\mathcal{D} \vert f \vert^2\,\mathrm{d}\bm{r}\right)^{\frac{1}{2}} = \left( f,f \right)_\mathcal{D}^{\frac{1}{2}},
\end{align*}
\noindent for all $f, g \!\in\! L^2(\mathcal{D})$. The following inner product and norm are also implied from the definition of $\bm{L}^2(\mathcal{D})$:
\begin{align*}
& \left( \bm{v},\bm{w} \right)_\mathcal{D} := \int_\mathcal{D} \bm{v}\cdot\bm{w}\,\mathrm{d}\bm{r},\;\;\;
\Norm{\bm{v}}[\mathcal{D}] = \left(\int_\mathcal{D} \vert \bm{v} \vert^2\,\mathrm{d}\bm{r}\right)^{\frac{1}{2}} = \left( \bm{v},\bm{v} \right)_\mathcal{D}^{\frac{1}{2}},
\end{align*}
\noindent for all $\bm{v}, \bm{w} \!\in\! \bm{L}^2(\mathcal{D})$.
To properly define the electromagnetic energy for a generic computational domain, we will often need a weighted norm which includes $\varepsilon$ and $\mu$, which we will denote by adding the appropriate symbol to the subscripts of standard $L^2$ or $\bm{L}^2$ norms:
\begin{align*}
& \Norm{\bm{v}}[\mathcal{D},\varepsilon] =
\left( \varepsilon\bm{v},\bm{v}\right)_\mathcal{D}^{\frac{1}{2}},\;\;
\Norm{u}[\mathcal{D},\mu] =
\left( \mu u,u\right)_\mathcal{D}^{\frac{1}{2}},
\end{align*}
\noindent which are well-defined by virtue of the s.p.d. assumption on the material tensors. We finally introduce the following real Sobolev spaces
\begin{align*}
& {\bm{H}}^{curl} (\mathcal{D}) = \left\{ \bm{v} \!\in\! \bm{L}^2(\mathcal{D}) \; s.t. \;
curl(\bm{v}) \!\in\! L^2(\mathcal{D})\right\},\\
& H^{\bm{curl}}(\mathcal{D}) = \left\{ u \!\in\! L^2(\mathcal{D}) \; s.t. \; \bm{curl}(u)\!\in\! \bm{L}^2(\mathcal{D})\right\},
\end{align*}
\noindent where all derivatives are now taken in the distributional sense. Since we assume that time and space are separable, the semi-weak solutions of the Maxwell system live in function spaces which are well-established in the literature:
\begin{align*}
& \bm{E}(\bm{r},t) \in AC\!\left([0,\mathrm{T}]\right)\otimes \bm{H}^{curl}(\Omega),\\
& {H}(\bm{r},t) \in AC\!\left([0,\mathrm{T}]\right) \otimes H^{\bm{curl}}(\Omega),
\end{align*}
for end-time $t=\mathrm{T}$ s.t. $0<\mathrm{T}<+\infty$, where $AC\left([0,\mathrm{T}]\right)$ denotes the space of absolutely continuous functions on $[0,\mathrm{T}]$. We keep the differentiability condition in the strong sense in the time variable, since we will be discretising it with finite differences (as in the Yee algorithm), and we postpone the inclusion of boundary conditions to a later point in the paper.
We define now, with reference to the complexes introduced in Section \ref{sec:mesh}, the new \emph{broken} Sobolev spaces
\begin{align}
\bm{H}^{curl}(\tilde{\mathcal{C}}_2^\Omega) &= \left\{ \bm{v} \!\in\! \bm{L}^2(\Omega) \; s.t. \;
\bm{v}|_{\tilde{\mathcal{T}}} \!\in\! \bm{H}^{curl}(\tilde{\mathcal{T}}),\,
\forall\tilde{\mathcal{T}} \!\in\! \tilde{\mathcal{C}}_2^\Omega
\right\},\label{eq:broken_vector_valued}\\
H^{\bm{curl}}(\mathcal{C}_2^\Omega) &= \left\{ u \!\in\! L^2(\Omega) \; s.t. \;
u|_\mathcal{T} \!\in\! H^{\bm{curl}}(\mathcal{T}),\,
\forall \mathcal{T} \!\in\! \mathcal{C}_2^\Omega\right\}. \label{eq:broken_scalar_valued}
\end{align}
Informally speaking, these are locally conforming spaces which are globally non-conforming on $\Omega$, yet the non-conformity has a different support for the two spaces. Our next step is now to apply \emph{local} testing in space to equations (\ref{eq:ampmax}) and (\ref{eq:faraday}) with respect to the new broken spaces, that is
\begin{align}
& \sum_{\tilde{\mathcal{T}}\in\tilde{\mathcal{C}}_2^\Omega}
\left( \varepsilon \partial_t\bm{E},\bm{v}\right)_{\tilde{\mathcal{T}}} =
\sum_{\tilde{\mathcal{T}}\in\tilde{\mathcal{C}}_2^\Omega} \left( \bm{curl}({H}), \bm{v}\right)_{\tilde{\mathcal{T}}},
\;\; & \forall \bm{v}\in \bm{H}^{curl}(\tilde{\mathcal{C}}_2^\Omega),\label{eq:local_am}\\
& \sum_{\mathcal{T}\in\mathcal{C}_2^\Omega}
\left( \mu \partial_t{H},u \right)_\mathcal{T} =
-\sum_{\mathcal{T}\in\mathcal{C}_2^\Omega} \left( curl(\bm{E}),u \right)_\mathcal{T},
\;\; &\forall u\in H^{\bm{curl}}(\mathcal{C}_2^\Omega),\label{eq:local_fa}
\end{align}
where the constitutive equations (\ref{eq:const_DE})--(\ref{eq:const_BH}) have been used and we stress the different local integration domains $\mathcal{T}$ and $\tilde{\mathcal{T}}$. The interplay of the two dual complexes can be exploited by making the r.h.s. of (\ref{eq:local_am})--(\ref{eq:local_fa}) ultra-weak, i.e. performing the following formal integration by parts
\begin{align*}
& \sum_{\tilde{\mathcal{T}}\in\tilde{\mathcal{C}}_2^\Omega}
\left( \varepsilon \partial_t\bm{E},\bm{v}\right)_{\tilde{\mathcal{T}}} =
\sum_{\tilde{\mathcal{T}}\in\tilde{\mathcal{C}}_2^\Omega} \left(
\int_{\partial\tilde{\mathcal{T}}} \hspace{-2.5mm} H\bm{v}\cdot\hat{\bm{t}}(\ell)\,\mathrm{d}\ell
-\left( {H}, curl\left(\bm{v}\right)\right)_{\tilde{\mathcal{T}}} \right),
\;\; & \forall \bm{v}\in \bm{H}_{0}^{curl}(\tilde{\mathcal{C}}_2^\Omega),\\
& \sum_{\mathcal{T}\in\mathcal{C}_2^\Omega}
\left( \mu \partial_t{H},u \right)_\mathcal{T} =
\sum_{\mathcal{T}\in\mathcal{C}_2^\Omega} \left(
\int_{\partial\mathcal{T}} \hspace{-2.5mm}u\bm{E}\cdot\hat{\bm{t}}(\ell)\,\mathrm{d}\ell
-\left( \bm{E},\bm{curl}(u) \right)_\mathcal{T}\right),
\;\; &\forall u\in H^{\bm{curl}}(\mathcal{C}_2^\Omega),
\end{align*}
\noindent where boundary terms arise from the tangential discontinuity of the test-functions. This latter step proves to be a crucial part of the novel derivation: as we will show in the following subsection, the tangential traces of the solutions appearing in the line-integral terms all remain single-valued even when the trial-spaces for $\bm{E}$ and $H$ in our Galerkin approximation are broken in the same manner as the test-spaces.
\subsection{Finite-dimensional approximation}
In non-conforming DG methods, once a mesh is available, the equations are independently tested on each triangle against some polynomial basis (or some other kind of locally smooth functions, if the DG FEM is combined with spectral or Trefftz approaches, e.g. \cite{egger}). This gives rise, once the solution is approximated within the same finite-dimensional basis, to block-diagonal (hence easily invertible) mass-matrices on the left-hand side (l.h.s.) of the weak formulation of (\ref{eq:ampmax})--(\ref{eq:faraday}). We can here generate a similar block-diagonal structure by virtue of the two newly defined broken spaces. This will be done by using basis-functions of finite-dimensional subspaces for $\bm{H}^{curl}(\tilde{\mathcal{C}}_2^\Omega)$ and $H^{\bm{curl}}(\mathcal{C}_2^\Omega)$ with compact support limited to some $\tilde{\mathcal{T}}\in\tilde{\mathcal{C}}_2^\Omega$ and $\mathcal{T}\in\mathcal{C}_2^\Omega$, respectively.
The basis-functions will be as usual piecewise--polynomial (vectors) up to some fixed degree $p\geq0$.
Nevertheless, the procedure is far from equivalent to the existing literature, since we have decided to use two different partitions of $\Omega$ which overlap and must be forced to exchange information. This is not a drawback, since it allows us to avoid introducing numerical fluxes (and handling all their consequences), as is instead common in all popular DG approaches.
Let us start by defining local Cartesian--orthogonal coordinates $(\xi_1,\xi_2)$ and denote with $\hat{\mathcal{T}}$ the reference (or master) triangle, i.e. the convex hull of the point-set $\{(0,0)^\mathrm{T},(1,0)^\mathrm{T},(0,1)^\mathrm{T}\}$ in the given coordinates. We will denote with $\hat{\bm{r}} = \hat{\bm{r}}(\xi_1,\xi_2)$ position vectors on $\hat{\mathcal{T}}$.
This is a standard domain for FEM practitioners, as the usual procedure consists in defining local ``shape-functions'' on $\hat{\mathcal{T}}$ and subsequently using a family of continuous and invertible mappings $\varphi_\mathcal{T}$, which map $\hat{\mathcal{T}}$ to each physical triangle $\mathcal{T}\in\mathcal{C}_2^\Omega$, to ``patch-up'' global basis-functions on the whole of $\Omega$.
However, we note that there are, for each $\mathcal{T}\in\mathcal{C}_2^\Omega$, actually three different choices of affine transformations which map vertices of $\hat{\mathcal{T}}$ to vertices of $\mathcal{T}$ (up to a reversal of orientation for the triangle), and they are of the form:
\begin{align*}
&\bm{r} = \varphi_{\mathcal{T},i}(\hat{\bm{r}}) := \mathbf{A}_{\mathcal{T},i} \hat{\bm{r}} + \mathbf{b}_{\mathcal{T},i}
\end{align*}
\noindent where $i\in\{1,2,3\}$, $\hat{\bm{r}}\in\hat{\mathcal{T}}$, $\bm{r}\in \mathcal{T}$, $\mathbf{A}_{\mathcal{T},i} \in \mathbb{R}^{2\!\times\!2}$, $\mathbf{b}_{\mathcal{T},i}\in\mathbb{R}^2$.
If we take any triangle $\mathcal{T}\in\mathcal{C}_2^\Omega$, denote with $\mathbf{v}_{\mathcal{T},1}, \mathbf{v}_{\mathcal{T},2}, \mathbf{v}_{\mathcal{T},3}$ the Euclidean position vectors (now in the global mesh coordinates) of the three vertices in the set $\{\partial\mathcal{T} \cap \mathcal{C}_0^\Omega\}$,
and recall that (as already remarked) each pair $(\mathcal{T},\mathbf{v}_{\mathcal{T},i})$ uniquely identifies a quadrilateral $K\in\mathcal{K}_2^\Omega$, we can make the notation less cumbersome by writing $\varphi_K$ (and $\mathbf{A}_K$, $\mathbf{b}_K$ as well) instead of using two subscripts. In connection with this, the following result additionally holds:
\begin{lem}\label{thm:kmap}
For each $\mathcal{T}\in\mathcal{C}_2^\Omega$, $\{\mathbf{v}_{\mathcal{T},i}\}_{i=1,2,3}$ (defined as above), the affine mapping $\varphi_{\mathcal{T},i} := \varphi_K$ is invertible, and the inverse $\varphi_K^{-1}$ maps $K\in\mathcal{K}_2^\Omega$ to the kite\footnote{a quadrilateral where two disjoint pairs of adjacent sides are equal.}-cell (KC), denoted with $\hat{K}$ and defined as
\begin{align*}\hat{K}=\mathrm{Conv}\left\{\left(0,0\right)^\mathrm{T}, \left(1/2,0\right)^\mathrm{T}, \left(1/3,1/3\right)^\mathrm{T}, \left(0,1/2\right)^\mathrm{T}\right\},\end{align*}
\noindent where $\mathrm{Conv}\{\cdot,\dots,\cdot\}$ denotes the convex hull of its arguments.
\end{lem}\qed
Here lies in fact our biggest departure from the classical FEM approach: we work on a proper subset of the reference triangle $\hat{\mathcal{T}}$, namely $\hat{K}$. Both $\hat{\mathcal{T}}$ and $\hat{K}$ are depicted in Fig.~\ref{fig:ref_kite}.
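To make Lemma~\ref{thm:kmap} concrete, the following hedged Python sketch builds the affine map associated with one triangle--vertex pair, under the (assumed, but standard) choice $\mathbf{A}_{K}=[\mathbf{v}_{\mathcal{T},2}-\mathbf{v}_{\mathcal{T},1}\,|\,\mathbf{v}_{\mathcal{T},3}-\mathbf{v}_{\mathcal{T},1}]$ and $\mathbf{b}_{K}=\mathbf{v}_{\mathcal{T},1}$, and checks that it sends the vertices of $\hat{K}$ to the corner vertex, the two adjacent edge midpoints and the centroid of the triangle, i.e. to the vertices bounding the physical fundamental cell $K$.
\begin{verbatim}
import numpy as np

def phi_K(v1, v2, v3):
    # affine map r = A @ r_hat + b sending the reference triangle
    # (0,0), (1,0), (0,1) to the physical triangle (v1, v2, v3)
    A = np.column_stack((v2 - v1, v3 - v1))
    b = v1
    return A, b

# reference kite-cell vertices (Lemma: corner, midpoints, barycenter)
K_hat = np.array([[0., 0.], [0.5, 0.], [1/3, 1/3], [0., 0.5]])

v1, v2, v3 = np.array([0., 0.]), np.array([2., 0.]), np.array([0., 1.])
A, b = phi_K(v1, v2, v3)
K_phys = K_hat @ A.T + b   # images of the kite-cell vertices under phi_K

expected = np.array([v1, 0.5*(v1 + v2), (v1 + v2 + v3)/3.0, 0.5*(v1 + v3)])
assert np.allclose(K_phys, expected)
\end{verbatim}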
\begin{figure}[th]
\centering
\begin{tikzpicture}
\begin{axis}[axis lines=middle,axis equal,
grid=both,xlabel=$\xi_1$,ylabel=$\xi_2$,
ymin=-0.3,ymax=1.2,xmax=1,xmin=-0.3,
xtick={-1.0,-0.75,...,0.75,1.0},
ytick={-1.0,-0.75,...,0.75,1.0},
extra x ticks = {0.333333333333333333}, extra y ticks = {0.333333333333333333},
xticklabel=\empty,yticklabel=\empty]
\addplot[color=black,thick=true] coordinates{(0,0) (1/2,0) (1/3,1/3) (0,1/2) (0,0)};
\addplot[color=red] coordinates{(0,0) (1,0) (0,1) (0,0)};
\node[fill,circle,scale=0.2] (a) at (1/3,1/3) {};
\node[] at (1/3+1/18,1/3+1/18) {$\tilde{O}$};
\node[] at (-1/18,-1/18) {$O$};
\node[] at (1/6,1/6) {$\hat{K}$};
\node (b) at (-1/4,5/8) {$\tilde{\xi}_1$};
\node (c) at (5/8,-1/4) {$\tilde{\xi}_2$};
\draw[->] (a) to node {} (b);
\draw[->] (a) to node {} (c);
\node[scale=0.1,label={181:{$\frac{1}{2}$}},inner sep=2pt] at (axis cs:0,1/2) {};
\node[scale=0.1,label={181:{$\frac{1}{3}$}},inner sep=2pt] at (axis cs:0,1/3) {};
\node[scale=0.1,label={270:{$\frac{1}{2}$}},inner sep=2pt] at (axis cs:1/2,0) {};
\node[scale=0.1,label={270:{$\frac{1}{3}$}},inner sep=2pt] at (axis cs:1/3,0) {};
\node[scale=0.1,label={180:{$1$}},inner sep=2pt] at (axis cs:0,1) {};
\node[scale=0.1,label={270:{$1$}},inner sep=2pt] at (axis cs:1,0) {};
\end{axis}
\end{tikzpicture}
\caption{The reference kite-cell and the reference triangle, from which it is derived.} \label{fig:ref_kite}
\end{figure}
\noindent With the introduction of a reference fundamental 2-cell $\hat{K}$ we want to develop a new (semi-conforming) finite element, where we can still work with the exact same local set of coordinates $(\xi_1,\xi_2)$ and define the following set of shape-functions:
\begin{align}
&\hat{\bm{w}}_{l}^{ij} (\hat{\bm{r}}) = C_{ijl}\,(\xi_{l})^i(\xi_{3-l})^j \hat{\nabla}\xi_{l},\;\;\;
&l\in\{1,2\},\;i,j\geq0,\;i+j \leq p,\label{eq:ref_vecfuns}
\end{align}
\noindent where $i$,$j$ are integers, and $\hat{\nabla}$ denotes the gradient operator in the local coordinates.
The scaling factors $C_{ijl} \in\mathbb{R}^+$ are normalisation constants, chosen so that the shape-functions take values in $[0,1]^2$ for all $\hat{\bm{r}}\in\hat{K}$.
We remark that the local shape-functions defined in (\ref{eq:ref_vecfuns}) are of two kinds. For example, by setting $j=0$ we get ``edge'' functions, in the following sense: the selected shape-functions yield monomials in arc-length when their tangential trace is computed on the line $\xi_{3-l}=0$ (i.e. along the $\xi_l$-axis) and yield a vanishing tangential trace on the line $\xi_{l}=0$.
This is a useful property when mapping vector-valued functions back to the \emph{physical} element $K\in\mathcal{K}_2^\Omega$. To do so we must briefly digress on the index $l$, which is in fact a function of two additional indices: we can write (with some harmless abuse of notation in identifying sets with their indexing) $l=l(e,K)$, for any $e \in \mathcal{K}_1^\Omega \cap \mathcal{S}(\mathcal{C}^\Omega)$ and any $K\in\mathcal{K}_2^\Omega$ s.t. ${e}\subset\partial{K}$.
This completely specifies which one of the local coordinates $(\xi_1, \xi_2)$ provides an arc-length parametrization for the image of segment ${e}$ (under the appropriate mapping $\varphi_K^{-1}$) and allows us to introduce the set of functions:
\begin{align}
& \bm{w}_{e}^i(\bm{r}) :=
\begin{cases}\mathbf{A}_K^{-\mathrm{T}} \hat{\bm{w}}_{l}^{i0} (\,\varphi_K^{-1}(\bm{r})\,),
& \forall\bm{r}\in K,\,\forall K \in\mathcal{K}_2^\Omega \;\;s.t.\;\;
{e} \subset \{\partial{K}\},\, l = l(e,K),\\
0 & \text{otherwise,}\end{cases} \label{eq:phys_edgefuns}
\end{align}
\noindent where $(\cdot)^{-\mathrm{T}}$ denotes the inverse-transpose matrix. In (\ref{eq:phys_edgefuns}) a (piecewise-)covariant transformation has been used, as it preserves tangential traces (see \cite{monk}) on two relevant boundary segments (while allowing fully discontinuous functions on the intersections of $\partial K$ with the skeleton $\mathcal{S}(\tilde{\mathcal{C}}^\Omega)$ of the dual complex).
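The trace-preservation property just invoked can also be checked with a one-line computation: for any invertible $\mathbf{A}_K$, reference value $\hat{\bm{w}}$ and reference tangent vector $\hat{\bm{t}}$, one has $(\mathbf{A}_K^{-\mathrm{T}}\hat{\bm{w}})\cdot(\mathbf{A}_K\hat{\bm{t}})=\hat{\bm{w}}\cdot\hat{\bm{t}}$, so tangential line integrals are invariant under the covariant mapping. A minimal numerical sketch (with an arbitrary, purely illustrative Jacobian) follows.
\begin{verbatim}
import numpy as np

A = np.array([[2.0, 0.3], [0.1, 1.5]])   # an arbitrary invertible Jacobian
w_hat = np.array([0.7, -0.2])            # reference shape-function value at a point
t_hat = np.array([1.0, 0.5])             # a reference tangent vector (not normalised)

w_phys = np.linalg.solve(A.T, w_hat)     # covariant map: A^{-T} w_hat
t_phys = A @ t_hat                       # tangent vectors are pushed forward by A

# the tangential line-integral density is preserved
assert np.isclose(w_phys @ t_phys, w_hat @ t_hat)
\end{verbatim}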
For fixed polynomial order $p$, (\ref{eq:phys_edgefuns}) is not sufficient for a complete basis: we must move back to $\hat{K}$ and also take the local shape-functions in (\ref{eq:ref_vecfuns}) with $j\neq 0$. These are ``bulk'' basis-functions, as their tangential component now vanishes on both local coordinate axes. To preserve this feature onto the global mesh, we again use their covariantly mapped versions:
\begin{align}
& \bm{w}_{K}^{ijl}(\bm{r}) =
\begin{cases}\mathbf{A}_K^{-\mathrm{T}} \hat{\bm{w}}_{l}^{ij} (\,\varphi_K^{-1}(\bm{r})\,), & \forall\bm{r}\in K, j>0,\\
0 & \text{otherwise,}\end{cases} \label{eq:phys_bulkfuns}
\end{align}
\noindent where we note the appearance of $K$ as a subscript index, rather than ${e}$, and we note that both admissible values of $l$ now produce bulk functions.
Summarizing, by grouping the $\bm{w}_{e}^i$ (for all $i$ s.t. $0\leq i \leq p$ and all ${e}\in\{\mathcal{K}_1^\Omega\cap\mathcal{S}(\mathcal{C}^\Omega)\}$) together with the $\bm{w}_{K}^{ijl}$ (for all admissible $\{i,j,l\}$ and all $K\in\mathcal{K}_2^\Omega$) into a new sequence $\{\bm{w}_n^p\}_{n=1}^{N}$, we achieve a complete set of basis-functions for the space
\begin{align*}\bm{W}^p := Span\{\,\{\bm{w}_n^p\}_{n=1}^{N}\,\} = \bm{H}^{curl}(\tilde{\mathcal{C}}_2^\Omega) \cap \boldsymbol{P}^p(\mathcal{K}_2^\Omega;\mathbb{R}^2),\end{align*}
\noindent where $\boldsymbol{P}^p(\mathcal{K}_2^\Omega;\mathbb{R}^2)$ denotes the space of vector-valued functions whose components are piecewise-polynomials of degree at most $p$ on each $K\in\mathcal{K}_2^\Omega$.
It is not difficult to compute the dimension of this global space for a given mesh: the $n$ index runs from 1 to $N$, with
\begin{align}
& N = (p+1)\left(2|\mathcal{C}_1^\Omega| + 3p|\mathcal{C}_2^\Omega|\right) =
2|\mathcal{C}_1^\Omega| + p|\mathcal{K}_1^\Omega\cap\mathcal{S}(\mathcal{C}^\Omega)| + p(p+1)|\mathcal{K}_2^\Omega|,\label{eq:dimvec}
\end{align}
\noindent where the relationships between $\mathcal{K}^\Omega$ and $\mathcal{C}^\Omega$ have been used to make the splitting into lowest order, edge and bulk basis-functions manifest.
For the finite-dimensional space which will be approximating the pseudo-vector ${H}(\bm{r},t)$ instead, we proceed by first defining a new pair of oblique local coordinates $(\tilde{\xi}_1,\tilde{\xi}_2)$ on the KC element through an additional family of affine mappings $\tilde{\varphi}_K$ (and their inverses $\tilde{\varphi}_{K}^{-1}$),
which we can construct by enforcing the origin of the associated oblique coordinate system to coincide with the point $\tilde{O}=(1/3,1/3)$ (for its sketch, we refer the reader again to Fig.~\ref{fig:ref_kite}), and by enforcing $0\leq \tilde{\xi}_1,\tilde{\xi}_2 \leq 1$ on $\hat{K}$.
Thus, by denoting with $\tilde{\bm{r}}$ the position vector in the new coordinate system, we can concisely give the expressions of the scalar-valued local shape-functions on $\hat{K}$.
Namely, we introduce the monomials
\begin{align*}
& \hat{\tilde{w}}_{\tilde{l}}^{ij}(\tilde{\bm{r}}) := (\tilde{\xi}_{\tilde{l}})^i(\tilde{\xi}_{3-{\tilde{l}}})^j,\;\;
& \tilde{l}\in\{1,2\}, \;\; i>0, j\geq 0,\, i+j \leq p,
\end{align*}
\noindent where we stress the fact that $i$ is now a strictly positive integer.
In this case, differently from the vector-valued setting, we have to consider segments $\tilde{e} \in \{\mathcal{K}_1^\Omega \cap \mathcal{S}(\tilde{\mathcal{C}}^\Omega)\}$ s.t. $\tilde{\xi}_1$ and $\tilde{\xi}_2$ provide arc-length parameters on them when moving back to any $K\in\mathcal{K}_2^\Omega$ in the physical mesh. Consequently we have introduced a different index $\tilde{l}=\tilde{l}(\tilde{e},K)$. Setting $j=0$ yields a first subset of basis-functions for the global space
\begin{align}
& \tilde{w}_{\tilde{e}}^{i}(\bm{r}) :=
\begin{cases} \hat{\tilde{w}}_{\tilde{l}}^{i0} (\,\tilde{\varphi}_{K}^{-1}(\bm{r})\,) & \forall \bm{r}\in K \;\;s.t.\;\;
\tilde{e} \subset \partial{K}, \tilde{l}=\tilde{l}(\tilde{e},K),\\
0 & \text{otherwise,}\end{cases} \label{eq:phys_scaledgefuns}
\end{align}
\noindent obtained by simple piecewise combinations of local shape-functions' pull-backs. The ones in (\ref{eq:phys_scaledgefuns}) are again edge functions (even if scalar-valued ones, their support being an edge-patch in $\mathcal{K}^\Omega$). A new set of bulk basis-functions is also present,
defined by setting $\tilde{l}=1$ (without loss of generality) and requiring $i>0$ and $j>0$ to hold simultaneously. The condition on $i$ and $j$ ensures that the associated shape-functions have vanishing trace on both $\tilde{\xi}_1=0$ and $\tilde{\xi}_2=0$ lines.
Once more via pull-backs of local shape-functions onto the generic physical fundamental cell $K\in\mathcal{K}_2^\Omega$, we get
\begin{align}
& \tilde{w}_{K}^{ij}(\bm{r}) =
\begin{cases} \hat{\tilde{w}}_{1}^{ij} (\,\tilde{\varphi}_{K}^{-1}(\bm{r})\,) & \forall \bm{r}\in K,\, i,j>0,\\
0 & \text{otherwise,}\end{cases} \label{eq:phys_scalbulkfuns}
\end{align}
\noindent again using $K$ as an index.
To complete the scalar-valued basis, a third set of functions is required, namely the set
\begin{align*}\tilde{w}_\mathcal{T}:=\frac{\mathbbm{1}_\mathcal{T}}{|\mathcal{T}|},\end{align*}
\noindent for all the triangles $\mathcal{T}\in\mathcal{C}_2^\Omega$, where $|\mathcal{T}|$ denotes the measure of $\mathcal{T}$ and $\mathbbm{1}_\mathcal{T}$ is the characteristic (or indicator) function of $\mathcal{T}$, i.e. the discontinuous function which takes value one for any $\bm{r}\in \mathcal{T}$ and zero elsewhere. Since the latter are piecewise-constant, scalar-valued functions, the mapping from reference to physical elements is trivial.
We can again group all the $\tilde{w}_{\tilde{e}}^i$, $\tilde{w}_K^{ij}$ and $\tilde{w}_\mathcal{T}$ in a new sequence of basis-functions $\{\tilde{w}_m^p\}_{m=1}^M$, which provides the basis of a finite-dimensional subspace $\tilde{W}^p \subset H^{\bm{curl}}(\mathcal{C}_2^\Omega)$, where again $\tilde{W}^p := Span\{\,\{\tilde{w}_m^p\}_{m=1}^M\,\}$. Namely, we have constructed a basis for the vector space
\begin{align*}\tilde{W}^p = H^{\bm{curl}}(\mathcal{C}_2^\Omega) \cap {P}^p(\mathcal{K}_2^\Omega;\mathbb{R}),\end{align*}
\noindent where ${P}^p(\mathcal{K}_2^\Omega;\mathbb{R})$ is the space of piecewise-polynomials of degree at most $p$ on each $K\in\mathcal{K}_2^\Omega$.
Once more, we can easily compute the dimension of $\tilde{W}^p$, which amounts to
\begin{align}
& M = \left(1 + 3p + \frac{3}{2}p(p-1)\right)|\mathcal{C}_2^\Omega| =
|\mathcal{C}_2^\Omega| + p|\mathcal{K}_1^\Omega\cap\mathcal{S}(\tilde{\mathcal{C}}^\Omega)| + \frac{p}{2}(p-1)|\mathcal{K}_2^\Omega|,\label{eq:dimscal}
\end{align}
\noindent where the contributions due to the three different flavours of basis-functions have been again manifestly split.
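The counts (\ref{eq:dimvec}) and (\ref{eq:dimscal}) depend only on the polynomial degree and on a few mesh cardinalities, so the bookkeeping can be made explicit with a small helper like the following Python sketch (the example sizes, 16 primal edges and 8 triangles, are those of the leftmost mesh of Fig.~\ref{fig:staggered_grids}).
\begin{verbatim}
def dof_counts(p, n_edges, n_triangles):
    # dimension of W^p (electric field) and of W~^p (magnetic field),
    # with n_edges = |C_1| and n_triangles = |C_2|
    N = (p + 1) * (2 * n_edges + 3 * p * n_triangles)
    M = (1 + 3 * p + 3 * p * (p - 1) // 2) * n_triangles
    return N, M

print(dof_counts(0, 16, 8))   # lowest order: (32, 8)
print(dof_counts(2, 16, 8))   # p = 2: (240, 80)
\end{verbatim}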
With the aid of $\bm{W}^p$ and $\tilde{W}^p$, we can finally approximate the unknown fields with a Galerkin method: we seek $\bm{E}^{h,p}(\bm{r},t) \in AC\!\left([0,\mathrm{T}]\right)\otimes \bm{W}^p$ and ${H}^{h,p}(\bm{r},t) \in AC\!\left([0,\mathrm{T}]\right) \otimes \tilde{W}^p$ such that
\begin{align}
& \sum_{\tilde{\mathcal{T}}\in\tilde{\mathcal{C}}_2^\Omega}
\left( \varepsilon \partial_t\bm{E}^{h,p},\bm{v}\right)_{\tilde{\mathcal{T}}} =
\sum_{\tilde{\mathcal{T}}\in\tilde{\mathcal{C}}_2^\Omega} \left(
\int_{\partial\tilde{\mathcal{T}}} \hspace{-2.5mm} {H}^{h,p}\bm{v}\cdot\hat{\bm{t}}(\ell)\,\mathrm{d}\ell
-\left( {H}^{h,p}, curl\left(\bm{v}\right)\right)_{\tilde{\mathcal{T}}} \right),\label{eq:weak_ampmax}\\
& \sum_{\mathcal{T}\in\mathcal{C}_2^\Omega}
\left( \mu \partial_t{H}^{h,p},u \right)_\mathcal{T} =
\sum_{\mathcal{T}\in\mathcal{C}_2^\Omega} \left(
\int_{\partial\mathcal{T}} \hspace{-2.5mm}u\bm{E}^{h,p}\cdot\hat{\bm{t}}(\ell)\,\mathrm{d}\ell
-\left( \bm{E}^{h,p},\bm{curl}(u) \right)_\mathcal{T}\right),\label{eq:weak_faraday}
\end{align}
\noindent hold $\forall\bm{v} \in \bm{W}^p$ and $\forall u\in \tilde{W}^p$ simultaneously.
Furthermore, the following assertion holds:
\begin{thm}\label{thm:lemma1}
\textbf{ (Consistency and stability)}
The semi-discrete formulation (\ref{eq:weak_ampmax})--(\ref{eq:weak_faraday}) is consistent, meaning that it is satisfied by the true (conforming) weak solution of (\ref{eq:local_am})--(\ref{eq:local_fa}) in the limit $h\rightarrow 0$. Furthermore the semi-discrete electromagnetic energy $\mathcal{E}_K^{h,p}$ stored inside each $K\in\mathcal{K}_2^\Omega$ (and therefore in the whole of $\Omega$) is conserved through time:
\begin{align}
& \partial_t\mathcal{E}_K^{h,p} := \partial_t \left(\frac{1}{2}\Vert\bm{E}^{h,p}\Vert_{K,\varepsilon}^2 + \frac{1}{2}\Vert {H}^{h,p} \Vert_{K,\mu}^2\right) = 0, \; \forall t \in [0,\mathrm{T}],\label{eq:stability}
\end{align}
\noindent provided that $\varepsilon$ and $\mu$ are piecewise-smooth inside each $K\in\mathcal{K}_2^\Omega$.
\end{thm}
\begin{proof}
Consistency is trivial; we prove (\ref{eq:stability}). We start by splitting all integrals into their contributions from each fundamental cell $K\in\mathcal{K}_2^\Omega$, which is straightforward for double integrals but requires some care for the boundary terms. From (\ref{eq:weak_ampmax})--(\ref{eq:weak_faraday}) it follows that
\begin{align*}
& \sum_{ {\color{black}K \in \mathcal{K}_2^\Omega} }
\left( \varepsilon \partial_t\bm{E}^{h,p},\bm{v}\right)_{ {\color{black}K} } \!=\!
\sum_{ {\color{black}K \in \mathcal{K}_2^\Omega} } \left(
\int_{ {\color{black}\partial{K}\cap\mathcal{S}(\tilde{\mathcal{C}}^\Omega)}} \hspace{-12.5mm} {H}^{h,p}\bm{v}\cdot\hat{\bm{t}}(\ell)\,\mathrm{d}\ell
-\left( {H}^{h,p}, curl\left(\bm{v}\right)\right)_{{\color{black}K}} \right),
&\!\forall \bm{v}\in \bm{W}^p,\\
& \sum_{ {\color{black}K \in \mathcal{K}_2^\Omega} }
\left( \mu \partial_t{H}^{h,p},u \right)_{{\color{black}K}} \!=\!
\sum_{ {\color{black}K \in \mathcal{K}_2^\Omega} } \left(
\int_{ {\color{black}\partial{K}\cap\mathcal{S}(\mathcal{C}^\Omega)} } \hspace{-12.5mm} u\bm{E}^{h,p}\cdot\hat{\bm{t}}(\ell)\,\mathrm{d}\ell
-\left( \bm{E}^{h,p},\bm{curl}(u) \right)_{{\color{black}K}}\right),
&\!\forall u\in \tilde{W}^p,
\end{align*}
where the definitions of sets $\mathcal{K}_1^\Omega\cap\mathcal{S}(\tilde{\mathcal{C}}^\Omega)$ and $\mathcal{K}_{1}^\Omega\cap\mathcal{S}(\mathcal{C}^\Omega)$ have been used to split line-integrals along the boundary of each $\tilde{\mathcal{T}}$ and $\mathcal{T}$ into local contributions. We now use the fact that our approximate solutions ${H}^{h,p}$ and $\bm{E}^{h,p}$ are themselves admissible test-functions (being linear combinations of the basis-functions) and plug them as such in the weak formulation:
\begin{align*}
& \sum_{ {\color{black}K \in \mathcal{K}_2^\Omega} }
\left( \varepsilon \partial_t\bm{E}^{h,p},\bm{E}^{h,p}\right)_{ {\color{black}K} } =
\sum_{ {\color{black}K \in \mathcal{K}_2^\Omega} } \left(
\int_{ {\color{black}\partial{K}\cap\mathcal{S}(\tilde{\mathcal{C}}^\Omega)}} \hspace{-12.5mm} {H}^{h,p}\bm{E}^{h,p}\cdot\hat{\bm{t}}(\ell)\,\mathrm{d}\ell
-\left( {H}^{h,p}, curl\left(\bm{E}^{h,p}\right)\right)_{{\color{black}K}} \right),\\
& \sum_{ {\color{black}K \in \mathcal{K}_2^\Omega} }
\left( \mu \partial_t{H}^{h,p},{H}^{h,p} \right)_{{\color{black}K}} =
\sum_{ {\color{black}K \in \mathcal{K}_2^\Omega} } \left(
\int_{ {\color{black}\partial{K}\cap\mathcal{S}(\mathcal{C}^\Omega)}} \hspace{-12.5mm}
{H}^{h,p}\bm{E}^{h,p}\cdot\hat{\bm{t}}(\ell)\,\mathrm{d}\ell
-\left( \bm{E}^{h,p},\bm{curl}({H}^{h,p}) \right)_{{\color{black}K}}\right).
\end{align*}
By adding the two equations together side-by-side, using the product rule for time derivatives on the l.h.s., while also using Green's theorem on the r.h.s., the assertion follows locally $\forall K\in\mathcal{K}_2^\Omega$.\end{proof}
The following remarks are in order: firstly, the definition of numerical fluxes is irrelevant as predicted (in a nutshell: when discretising the (ultra-)weak curls, wherever the test-functions exhibit tangential trace jumps, the trial-functions are tangentially continuous, and vice-versa).
On the other hand, the standard practice in Finite Element analysis is to set material tensors $\varepsilon$ and $\mu$ to a constant value on each triangle, since the primal complex is the one which is usually built (by some external tool) to resolve the geometry of discontinuities between materials. In this respect, the result of Theorem \ref{thm:lemma1} accommodates the output of any standard triangular mesher and at the same time suggests that well-behaved finite-dimensional approximations of the two new broken spaces should have local approximation properties not on whole primal and dual 2-cells, but on each $K\in\mathcal{K}_2^\Omega$, as is the case in our construction.
Lastly, we comment on boundary conditions: since $\partial \Omega$ is a subset of the skeleton $\mathcal{S}({\mathcal{C}}^\Omega)$ of the primal complex, the space $H^{\bm{curl}}(\mathcal{C}_2^\Omega)$ does not have a well-defined tangential trace on the boundary of the computational domain. In the above derivation this non-conformity is only apparently ignored: instead formal \emph{natural boundary conditions} have been employed, which, as can be deduced by integrating by parts the r.h.s. of (\ref{eq:local_am}) on any $K\in\mathcal{K}_2^\Omega$ for which $\partial{K}\cap\partial\Omega\neq\emptyset$, amount to weakly enforcing $H|_{\partial\Omega}=0$.
On the other hand, the definition of $\bm{H}^{curl}(\tilde{\mathcal{C}}_2^\Omega)$ and, more precisely, the definitions of the $\bm{w}_n^p$ basis-functions ensure that the PEC boundary condition, if sought, can be enforced in the strong sense, since $\bm{W}^p$ possesses a tangential trace almost everywhere on $\partial\Omega$. Additionally, we remark that both proposed bases are hierarchical by construction.
We finally note that (informally speaking) one can also \emph{swap the broken spaces}, i.e. we can define
\begin{align}
\bm{H}^{curl}(\mathcal{C}_2^\Omega) &= \left\{ \bm{v} \!\in\! \bm{L}^2(\Omega) \; s.t. \;
\bm{v}|_\mathcal{T} \!\in\! \bm{H}^{curl}(\mathcal{T}),\,\forall \mathcal{T} \!\in\! \mathcal{C}_2^\Omega\right\},\label{eq:dual_broken_vector_valued}\\
H^{\bm{curl}}(\tilde{\mathcal{C}}_2^\Omega) &= \left\{ u \!\in\! L^2(\Omega) \; s.t. \;
u|_{\tilde{\mathcal{T}}} \!\in\! H^{\bm{curl}}(\tilde{\mathcal{T}}),\,\forall\tilde{\mathcal{T}} \!\in\! \tilde{\mathcal{C}}_2^\Omega\right\}, \label{eq:dual_broken_scalar_valued}
\end{align}
\noindent in the continuous setting, where a new formulation with an analogous weak form, appropriate sets of basis-functions, and an equivalent of Theorem \ref{thm:lemma1} can be easily deduced from our previous construction. The only key difference of such a formulation would lie in boundary conditions: if (\ref{eq:dual_broken_vector_valued})--(\ref{eq:dual_broken_scalar_valued}) were to be again approximated by finite-dimensional spaces, the PMC boundary condition ${H}^{h,p}|_{\partial\Omega}= 0$ would become an essential one and be incorporated in the strong sense.
\section{Implementing the fully discrete scheme}\label{sec:basis}
Owing to the explicit construction of Section \ref{sec:funspaces}, we can expand the approximated unknown fields as
\begin{align}
& \bm{E}^{h,p}(\bm{r},t) = \sum\limits_{n=1}^{N} u_{n} (t) \bm{w}_{n}^p(\bm{r}), \;\;
{H}^{h,p}(\bm{r},t) = \sum\limits_{m=1}^{M} f_{m} (t) \tilde{w}_{m}^p (\bm{r}), \label{eq:solution_expansion}
\end{align}
\noindent where the space-time separation of variables assumption on the solution is incorporated via the time dependence of coefficients in the linear combinations.
The semi-discrete scheme, which we obtained by using (\ref{eq:solution_expansion}) and testing against the same basis-functions, has the following matrix-representation:
\begin{align}
& \begin{pmatrix*}[c]
\tilde{\mathbf{M}}_p^\varepsilon & \mathbf{0} \\
\mathbf{0} & \mathbf{M}_p^\mu
\end{pmatrix*} \frac{\mathrm{d}}{\mathrm{d}{t}}
\begin{pmatrix*}[c]
\bm{u}(t) \\ \bm{f}(t)
\end{pmatrix*} =
\begin{pmatrix*}[c]
\mathbf{0} & \mathbf{C}_p^\mathrm{T} \\
-\mathbf{C}_p & \mathbf{0}
\end{pmatrix*}
\begin{pmatrix*}[c]
\bm{u}(t) \\ \bm{f}(t)
\end{pmatrix*}, \label{eq:semidiscrete_cmp}
\end{align}
\noindent where $\bm{f}$ is the column-vector containing the semi-discrete magnetic field degrees of freedom (DoFs), $\bm{u}$ is the column-vector containing the semi-discrete electric field DoFs, and where the proven energy conservation property is reflected by the skew-symmetry of the r.h.s. We remark that the basis-functions can be appropriately re-ordered, by grouping members whose support is contained in some common $\mathcal{T}\in\mathcal{C}_2^\Omega$ for the ${H}^{h,p}(\bm{r},t)$ field, and in a common $\tilde{\mathcal{T}}\in\tilde{\mathcal{C}}_2^\Omega$ for the $\bm{E}^{h,p}(\bm{r},t)$ field, respectively. This is not mandatory, but it emphasises the block-diagonal nature of the mass-matrices $\tilde{\mathbf{M}}_p^\varepsilon$ and $\mathbf{M}_p^\mu$, which is achieved by breaking the standard Sobolev spaces. To discretise time, we use the well-known leap-frog scheme, which is the symplectic time-integrator used by Yee in his seminal paper. The search for symplectic integrators of arbitrary order which keep the time-stepping explicit is an active topic of research (see \cite{RuthForest, Titarev2002, jcp_symplectic}) which goes beyond the scope of the present contribution (yet it also provides a further research direction). For the fully discrete scheme we obtain
\begin{align}
\begin{pmatrix*}[c]
\mathbf{u}^{\left(n+1/2\right)\tau} \\ \mathbf{f}^{\left(n+1\right)\tau}
\end{pmatrix*} &=
\begin{pmatrix*}[c]
\mathbf{u}^{\left(n-1/2\right) \tau} \\ \mathbf{f}^{n\tau}
\end{pmatrix*} + \nonumber\\
&+\tau\begin{pmatrix*}[c]
(\tilde{\mathbf{M}}_p^\varepsilon)^{-1} \!&\! \mathbf{0} \\
\mathbf{0} \!&\! (\mathbf{M}_p^\mu)^{-1}
\end{pmatrix*}
\begin{pmatrix*}[c]
\mathbf{0} & \mathbf{C}_p^\mathrm{T} \\
-\mathbf{C}_p & \mathbf{0}
\end{pmatrix*}
\begin{pmatrix*}[c]
\mathbf{u}^{\left(n+1/2\right)\tau} \\ \mathbf{f}^{n\tau}
\end{pmatrix*}, \label{eq:discrete_cmp}
\end{align}
\noindent where $\mathbf{f}$ is the column-vector containing (now fully discrete) magnetic field DoFs, $\mathbf{u}$ is the column-vector containing electric field DoFs, $\tau\in\mathbb{R}^+$ is the discrete time-step (whose upper bound for a stable scheme can quickly be estimated by, e.g., a power-iteration algorithm) and $n=0,1,\dots,\lceil \mathrm{T}/\tau \rceil$. The inverses of mass-matrices are easily computed by solving very small local systems of equations, once-and-for-all and block-by-block (see sparsity patterns in Fig.~\ref{fig:sparsity_h_check} and Fig.~\ref{fig:sparsity_p_check}).
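For the reader's convenience, a single update of (\ref{eq:discrete_cmp}) can be sketched in a few lines of Python; the sparse objects below are mere placeholders of compatible sizes (they are not the matrices produced by the method), and the update mirrors the staggering of electric and magnetic DoFs in time.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp

def leapfrog_step(u, f, inv_Meps, inv_Mmu, C, tau):
    # electric DoFs u live at half-integer steps, magnetic DoFs f at integer steps
    u_new = u + tau * (inv_Meps @ (C.T @ f))     # Ampere-Maxwell update
    f_new = f - tau * (inv_Mmu  @ (C @ u_new))   # Faraday update (uses updated u)
    return u_new, f_new

# toy usage with placeholder matrices of compatible sizes
N, M = 32, 8
inv_Meps = sp.identity(N, format="csr")          # stand-in for an inverse mass-matrix
inv_Mmu  = sp.identity(M, format="csr")          # stand-in for an inverse mass-matrix
C = sp.random(M, N, density=0.1, format="csr")   # stand-in for the discrete curl
u, f = np.zeros(N), np.zeros(M)
u, f = leapfrog_step(u, f, inv_Meps, inv_Mmu, C, tau=1e-3)
\end{verbatim}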
\begin{figure}[!h]
\centering
\begin{minipage}{0.1\textwidth}
\vspace{-3cm}
$p=0$
\end{minipage}
\begin{minipage}{0.3\textwidth}
\vspace{-3cm}
\includegraphics[width=\textwidth]{spymesh1.pdf}
\end{minipage}
\includegraphics[width=0.25\textwidth,height=0.25\textwidth]{spymeps1.pdf}
\includegraphics[width=0.25\textwidth,height=0.25\textwidth]{spymmu1.pdf}\\
\centering
\begin{minipage}{0.1\textwidth}
\vspace{-3cm}
$p=0$
\end{minipage}
\begin{minipage}{0.3\textwidth}
\vspace{-3cm}
\includegraphics[width=\textwidth]{spymesh2.pdf}
\end{minipage}
\includegraphics[width=0.25\textwidth,height=0.25\textwidth]{spymeps2.pdf}
\includegraphics[width=0.25\textwidth,height=0.25\textwidth]{spymmu2.pdf}\\
\centering
\begin{minipage}{0.1\textwidth}
\vspace{-3cm}
$p=0$
\end{minipage}
\begin{minipage}{0.3\textwidth}
\vspace{-3cm}
\includegraphics[width=\textwidth]{spymesh3.pdf}
\end{minipage}
\includegraphics[width=0.25\textwidth,height=0.25\textwidth]{spymeps3.pdf}
\includegraphics[width=0.25\textwidth,height=0.25\textwidth]{spymmu3.pdf}
\caption{The sparsity pattern for the lowest order $\tilde{\mathbf{M}}_0^\varepsilon$ (second column) and $\mathbf{M}_0^\mu$ (third column) under uniform $h$-refinement, i.e. meshes in the first column are constructed by barycentric subdivision of uniform refinements (by means of edge-bisection) of the starting primal $\mathcal{C}^\Omega$ mesh. The label $\mathrm{nz}$ denotes the number of non-zero entries. Since for $p=0$ only $\tilde{w}_\mathcal{T}$ functions survive for the ${H}^{h,p}$ field, $\mathbf{M}_0^\mu$ is fully diagonal.}
\label{fig:sparsity_h_check}
\end{figure}
Going now into greater depth with implementation-related details, we provide a procedure for the explicit computation of all matrix entries.
It holds:
\begin{align}
\left(\tilde{\mathbf{M}}_p^\varepsilon\right)_{n,n'} &:=
\left(\varepsilon(\bm{r})\bm{w}_{n'}^p(\bm{r}),\bm{w}_{n}^p(\bm{r})\right)_\Omega
= \sum\limits_{K\in\mathcal{K}_2^\Omega} \left(\varepsilon(\bm{r})\bm{w}_{n'}^p(\bm{r}),\bm{w}_{n}^p(\bm{r})\right)_K = \nonumber \\
&=\sum\limits_{\substack{K\in \\ \{\mathcal{K}_2^\Omega \cap \,\supp(\bm{w}_n)\}}} \hspace{-5mm}
\left( J_K \varepsilon(\varphi_K(\hat{\bm{r}}))\mathbf{A}_K^{-\mathrm{T}}\hat{\bm{w}}_{l'}^{i'j'}(\hat{\bm{r}}),
\;\mathbf{A}_K^{-\mathrm{T}}\hat{\bm{w}}_{l}^{ij}(\hat{\bm{r}})\right)_{\hat{K}}= \nonumber\\
&=\sum\limits_{\substack{K\in \\ \{\mathcal{K}_2^\Omega\cap\,\supp(\bm{w}_n)\}}} \hspace{-5mm}
\left( J_K \mathbf{A}_K^{-1}\varepsilon(\varphi_K(\hat{\bm{r}}))\mathbf{A}_K^{-\mathrm{T}}\hat{\bm{w}}_{l'}^{i'j'}(\hat{\bm{r}}),
\;\hat{\bm{w}}_{l}^{ij}(\hat{\bm{r}})\right)_{\hat{K}}:= \nonumber\\
&:= \sum\limits_{\substack{K\in \\ \{\mathcal{K}_2^\Omega\cap\,\supp(\bm{w}_n)\}}} \hspace{-5mm}
\int_{\hat{K}} \left(\hat{\varepsilon}_K(\hat{\bm{r}})\hat{\bm{w}}_{l'}^{i'j'}(\hat{\bm{r}})\right) \cdot \hat{\bm{w}}_{l}^{ij}(\hat{\bm{r}})\,\mathrm{d}\hat{\bm{r}},\label{eq:mass_matrix_algebra}
\end{align}
\noindent where we reach line two (in which $\supp(\cdot)$ denotes the support of a function and $J_K$ is the Jacobian determinant of $\varphi_K$) by virtue of the transformation rules in (\ref{eq:phys_edgefuns}) and by using the local definition of shape-functions in (\ref{eq:ref_vecfuns}). We assume that local indices $i,j,l$ (in place of $n$) and $i',j',l'$ (in place of $n'$) exist such that the functional forms of some local shape-functions match the given $\bm{w}_{n'}^p$, $\bm{w}_{n}^p$. We remark that this is always true by construction of the space $\bm{W}^p$. Finally, (\ref{eq:mass_matrix_algebra}) is just a consistent re-definition where, for the sake of clarity, the modified material tensor $\hat{\varepsilon}_K := J_K \mathbf{A}_K^{-1}\varepsilon\mathbf{A}_K^{-\mathrm{T}}$ has been introduced.
What (\ref{eq:mass_matrix_algebra}) means in practice is that, if the input mesh consists of straight-edged triangles and $\varepsilon$ is piecewise-constant on each $K$, all inner products in the mass-matrix can be computed (off-line with respect to the rest of the computation) by working on the KC element, since the Jacobian (and hence $\hat{\varepsilon}_K$) is then piecewise-constant. These conditions are very often met in practical setups. A very similar procedure applies to the mass-matrix involving the scalar unknown, where a slightly different $\hat{\mu}_K:=J_K\mu$ will arise, as the reader may also easily derive.
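As an illustration, under the just-mentioned simplifying assumptions (straight edges, constant scalar permittivity on $K$) and additionally taking unit scaling factors $C_{00l}$, the lowest-order local mass block on a single fundamental cell reduces to a constant matrix times the area of $\hat{K}$; a hedged Python sketch follows.
\begin{verbatim}
import numpy as np

def local_mass_block_p0(A_K, eps):
    # 2 x 2 local mass block on one fundamental cell K for p = 0:
    # the two vector shape-functions reduce to the constant reference
    # gradients grad(xi_1) = (1,0)^T and grad(xi_2) = (0,1)^T on K_hat
    J_K = abs(np.linalg.det(A_K))                 # Jacobian determinant of phi_K
    A_inv = np.linalg.inv(A_K)
    eps_hat = J_K * A_inv @ (eps * A_inv.T)       # modified material tensor
    area_K_hat = 1.0 / 6.0                        # area of the reference kite-cell
    G = np.eye(2)                                 # columns: the two constant shape-functions
    return area_K_hat * (G.T @ eps_hat @ G)

A_K = np.array([[2.0, 0.5], [0.0, 1.0]])          # Jacobian of some (hypothetical) phi_K
print(local_mass_block_p0(A_K, eps=1.0))
\end{verbatim}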
For the r.h.s. of (\ref{eq:semidiscrete_cmp}), we need to compute only half of the non-zero entries, as $\left(\mathbf{C}_p^{\mathrm{T}}\right)_{n,m} = \left(\mathbf{C}_p\right)_{m,n}$ (where $1\leq n \leq N$ and $1\leq m \leq M$).
Since the involved algebra is a bit tedious, we omit in the following the $\bm{r}$ and $\hat{\bm{r}}$ dependences to make the manipulations easier to read. Furthermore we assume that we are computing an entry related to a pair of trial- and test-functions whose supports have non-empty intersection (the matrix entry is trivially null otherwise). We compute
\begin{align}
\left(\mathbf{C}_p\right)_{m,n} &:=
\sum_{K\in \mathcal{K}_2^\Omega}\!\int_{\partial{K}\cap\mathcal{S}(\mathcal{C}^\Omega)} \hspace{-10mm}\tilde{w}_{m}^p \bm{w}_{n}^p\cdot\hat{\bm{t}}(\ell)\,\mathrm{d}\ell
+\sum_{K\in \mathcal{K}_2^\Omega}\!\left( \bm{w}_{n}^p,\bm{curl}(\,\tilde{w}_{m}^p\,) \right)_K = \nonumber \\
&= \int_{\substack{ {} \\ \partial{K}\cap\mathcal{S}(\mathcal{C}^\Omega)}} \hspace{-10mm}\tilde{w}_{m}^{p} \bm{w}_{n}^{p}\cdot\hat{\bm{t}}(\ell)\,\mathrm{d}\ell
+\left( \bm{w}_{n}^{p},\bm{curl}(\,\tilde{w}_{m}^{p}\,) \right)_K = \nonumber \\
&= \int_{\substack{ \\ \partial{\hat{K}}\cap\mathcal{S}(\hat{\mathcal{T}})}} \hspace{-10mm}
J_K\hat{\tilde{w}}_{\tilde{l}'}^{i'j'} \left(\mathbf{A}_K^{-\mathrm{T}}\hat{\bm{w}}_{l}^{ij}\right) \cdot \left(J_K^{-1}\mathbf{A}_K\hat{\bm{t}}(\hat{\ell})\right)\,\mathrm{d}\hat{\ell}
\!+\nonumber\\
&+\left( J_K \mathbf{A}_K^{-\mathrm{T}} \hat{\bm{w}}_{l}^{ij},J_K^{-1}\mathbf{A}_K\hat{\bm{curl}}(\, \hat{\tilde{w}}_{\tilde{l}'}^{i'j'}\,) \right)_{\hat{K}} = \nonumber \\
&= \int_{\substack{ \\ \partial{\hat{K}}\cap\mathcal{S}(\hat{\mathcal{T}})}} \hspace{-10mm} \hat{\tilde{w}}_{\tilde{l}'}^{i'j'} \hat{\bm{w}}_{l}^{ij}\cdot\hat{\bm{t}}(\hat{\ell})\,\mathrm{d}\hat{\ell}
\!+\left(\hat{\bm{w}}_{l}^{ij}, \hat{\bm{curl}}(\,\hat{\tilde{w}}_{\tilde{l}'}^{i'j'}\,) \right)_{\hat{K}} = \nonumber \\
\begin{split}&=\!\int\limits_0^{\frac{1}{2}} \hat{\tilde{w}}_{\tilde{l}'}^{i'j'} \left(\hat{\bm{w}}_{l}^{ij}\cdot\hat{\bm{\xi_1}}\right)\!\big\rvert_{\xi_2=0}\,\mathrm{d}\xi_1
\!-\!\int\limits_0^{\frac{1}{2}} \hat{\tilde{w}}_{\tilde{l}'}^{i'j'} \left(\hat{\bm{w}}_{l}^{ij}\cdot\hat{\bm{\xi_2}}\right)\!\big\rvert_{\xi_1=0}\,\mathrm{d}\xi_2 +\\
&+ \left(\hat{\bm{w}}_{l}^{ij}, \hat{\bm{curl}}(\,\hat{\tilde{w}}_{\tilde{l}'}^{i'j'}\,) \right)_{\hat{K}},\end{split} \label{eq:weak_curl_algebra}
\end{align}
\noindent where the salient details are the following.
The disappearance of the summation symbols on line two follows from the fact that, for any pair of basis-functions in $\tilde{W}^p \times \bm{W}^p$, there will be at most one $K\in\mathcal{K}_2^\Omega$ on which neither of the two identically vanishes, which we label precisely $K$.
Consequently, the definition of global basis-functions again yields the existence of appropriate ``matching'' local shape-functions with respective indices $\{i,j,l\}$ and $\{i',j',\tilde{l}'\}$.
Line three uses the chosen transformation rules for the local shape-functions, plus the fact that the $\mathbf{curl}$ of a covariant (pseudo-)vector field, under coordinate changes, transforms according to the contra-variant (also known as Piola) mapping, which is also the appropriate transformation rule for the tangent unit vector under the same change of coordinates (proofs of these standard facts can be found, e.g., in \cite{monk}).
The notation $\hat{\bm{curl}}$ is consequently introduced for the classical differential curl operator in the local (Cartesian) $\xi_1$ and $\xi_2$ coordinates.
Line four is achieved by virtue of standard matrix-algebra simplifications. This yields the final result, in which the line- and double integrals are explicitly written in the local coordinates on $\hat{K}$.
\begin{figure}[!h]
\centering
\begin{minipage}{0.1\textwidth}
\vspace{-3cm}
$p=1$
\end{minipage}
\begin{minipage}{0.3\textwidth}
\vspace{-3cm}
\includegraphics[width=\textwidth]{spymesh1.pdf}
\end{minipage}
\includegraphics[width=0.25\textwidth,height=0.25\textwidth]{spymeps4.pdf}
\includegraphics[width=0.25\textwidth,height=0.25\textwidth]{spymmu4.pdf}\\
\centering
\begin{minipage}{0.1\textwidth}
\vspace{-3cm}
$p=2$
\end{minipage}
\begin{minipage}{0.3\textwidth}
\vspace{-3cm}
\includegraphics[width=\textwidth]{spymesh1.pdf}
\end{minipage}
\includegraphics[width=0.25\textwidth,height=0.25\textwidth]{spymeps5.pdf}
\includegraphics[width=0.25\textwidth,height=0.25\textwidth]{spymmu5.pdf}\\
\centering
\begin{minipage}{0.1\textwidth}
\vspace{-3cm}
$p=3$
\end{minipage}
\begin{minipage}{0.3\textwidth}
\vspace{-3cm}
\includegraphics[width=\textwidth]{spymesh1.pdf}
\end{minipage}
\includegraphics[width=0.25\textwidth,height=0.25\textwidth]{spymeps6.pdf}
\includegraphics[width=0.25\textwidth,height=0.25\textwidth]{spymmu6.pdf}
\caption{The sparsity pattern for the mass-matrices $\tilde{\mathbf{M}}_p^\varepsilon$ (second column) and $\mathbf{M}_p^\mu$ (third column) can also be studied under uniform $p$-refinement, i.e. the meshes remain unchanged in size, but the polynomial order is increased, namely we have $p=1,2,3$. The label $\mathrm{nz}$ again denotes the number of non-zero entries.}
\label{fig:sparsity_p_check}
\end{figure}
The formula in (\ref{eq:weak_curl_algebra}) is arguably more impressive than (\ref{eq:mass_matrix_algebra}), since no dependence on the geometry of the mesh is left after all the algebraic manipulations. In more detail, all non-zero entries in the $\mathbf{C}_p$ matrix are copies\footnote{up to orientation of edges and triangles, from which a very small number of equivalence classes for $\hat{\mathbf{C}}_p$ can be derived.} of entries of a local, entirely topological, template $\hat{M}\times\hat{N}$ matrix $\hat{\mathbf{C}}_p$, where
\begin{align*}
& \hat{M} = \binom{p+2}{2} = \frac{(p+2)(p+1)}{2}, \;\;
\hat{N} = 2\hat{M},
\end{align*}
\noindent are the dimensions of the local scalar and vector shape-function spaces. As a consequence, with limited additional bookkeeping effort, no huge sparse discrete curl-matrix needs to be stored in memory, and the ultra-weak curl operator can be applied efficiently, via its pre-computed low-storage representation, throughout the time integration of the problem. We remark that a similar result is achievable in more conventional DG formulations, as demonstrated in \cite{christophs, warburton_matrix_free, bk_js}, yet the fact that this feat can be achieved even when using the presented novel formulation on barycentric-dual complexes is highly non-trivial.
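A schematic sketch of such a matrix-free application of $\mathbf{C}_p$ is given below; the gather/scatter index and sign arrays are hypothetical bookkeeping structures (one per fundamental cell) and are not objects prescribed by the formulation.
\begin{verbatim}
import numpy as np

def apply_curl_matrix_free(u, C_hat, cell_rows, cell_cols,
                           cell_row_signs, cell_col_signs, M):
    # accumulate y = C_p @ u cell-by-cell from the single topological
    # template C_hat (Mhat x Nhat); rows/cols map local DoFs to global
    # indices and the sign arrays encode relative orientations
    y = np.zeros(M)
    for rows, cols, rs, cs in zip(cell_rows, cell_cols,
                                  cell_row_signs, cell_col_signs):
        y[rows] += rs * (C_hat @ (cs * u[cols]))
    return y

# degenerate toy usage: one cell, a 1 x 2 template, trivial orientations
C_hat = np.array([[1.0, -1.0]])
y = apply_curl_matrix_free(np.array([3.0, 1.0]), C_hat,
                           [np.array([0])], [np.array([0, 1])],
                           [np.array([1.0])], [np.array([1.0, 1.0])], M=1)
assert np.isclose(y[0], 2.0)
\end{verbatim}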
A final important hint on implementation of the proposed discrete formulation is motivated by the fact that we found, by direct computation, the following remarkable identity to hold:
\begin{align}
& \int_{\hat{K}} \xi_1^r\xi_2^s\,\mathrm{d}\hat{\bm{r}} =
\frac{B_{\frac{1}{3}}(r+1,s+1)}{2^{s+1}(r+s+2)} + \frac{B_{\frac{1}{3}}(s+1,r+1)}{2^{r+1}(r+s+2)},\label{eq:closed_form_ints}
\end{align}
\noindent for arbitrary non-negative integers $r$, $s$, where $B_{\alpha}(a,b)$ is the (lower) incomplete Beta function, i.e. the incomplete version of the Euler integral of the first kind, defined as
\begin{align*}
B_{\alpha}(a,b) = \int_0^\alpha z^{a-1}(1-z)^{b-1}\,\mathrm{d}z,
\end{align*}
\noindent the values of which can be computed to arbitrary precision and stored for all needed positive integer values of $a$, $b$ and for the particular value $\alpha=1/3$. Since all double integrals in the discrete formulation can be shown to reduce to linear combinations of terms equivalent to the l.h.s. of (\ref{eq:closed_form_ints}), no need for numerical integration arises (as long as the material parameters are piecewise-constant).
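Identity (\ref{eq:closed_form_ints}) is also easy to verify numerically, e.g. with the following Python sketch, which splits $\hat{K}$ into two triangles and integrates the monomial by adaptive quadrature; note that SciPy's \texttt{betainc} returns the \emph{regularized} incomplete Beta function, so it is multiplied by \texttt{beta} to recover $B_{\alpha}(a,b)$.
\begin{verbatim}
import numpy as np
from scipy.integrate import dblquad
from scipy.special import beta, betainc

def B_inc(x, a, b):
    # non-regularized incomplete Beta function B_x(a, b)
    return betainc(a, b, x) * beta(a, b)

def closed_form(r, s):
    # right-hand side of the closed-form identity
    return (B_inc(1/3, r + 1, s + 1) / (2**(s + 1) * (r + s + 2))
            + B_inc(1/3, s + 1, r + 1) / (2**(r + 1) * (r + s + 2)))

def quad_over_kite(r, s):
    # adaptive quadrature of xi1^r xi2^s over the kite-cell,
    # split into two triangles sharing the diagonal (0,0)-(1/3,1/3)
    total = 0.0
    for v0, v1, v2 in [((0.0, 0.0), (0.5, 0.0), (1/3, 1/3)),
                       ((0.0, 0.0), (1/3, 1/3), (0.0, 0.5))]:
        v0, v1, v2 = np.array(v0), np.array(v1), np.array(v2)
        d1, d2 = v1 - v0, v2 - v0
        jac = abs(d1[0] * d2[1] - d1[1] * d2[0])       # = 2 * (triangle area)
        def f(v, u, v0=v0, d1=d1, d2=d2, jac=jac):
            x = v0 + u * d1 + v * d2                   # unit triangle -> sub-triangle
            return (x[0] ** r) * (x[1] ** s) * jac
        val, _ = dblquad(f, 0.0, 1.0, lambda u: 0.0, lambda u: 1.0 - u)
        total += val
    return total

for r, s in [(0, 0), (1, 0), (2, 3)]:
    assert np.isclose(quad_over_kite(r, s), closed_form(r, s))
\end{verbatim}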
\subsection{The lowest order element and the cell method}
It is known that, both for conforming discretisations relying on finite elements of the Nedelec \cite{nedelec1980} type and for DG formulations based on central fluxes, the requirement is to have basis-functions which are piecewise-polynomials of degree $p$ and $p+1$ for the magnetic and electric field respectively (or vice-versa). This leads to sub-optimal convergence rates in the electromagnetic energy norm and also implies that, for the lowest admissible order, we need basis-functions which are piecewise-affine for one of the two unknown fields. For the proposed method instead, the two unknowns are approximated up to the same polynomial degree, and the lowest admissible one is $p=0$, i.e. piecewise-constant fields.
\begin{figure}[!h]
\centering
\begin{tikzpicture}[thick,scale=7.0, every node/.style={scale=7.0}]
\coordinate (a) at (0.0,0.0);
\coordinate (b) at (1/2,1/2);
\coordinate (c) at (0.0,1/2);
\coordinate (d) at (0.0,1/4);
\coordinate (e) at (1/4,1/4);
\coordinate (f) at (1/4,1/2);
\coordinate (g) at (1/6,1/3);
\coordinate (i) at (1/2,0.0);
\coordinate (j) at (1/3,1/6);
\coordinate (k) at (1/4,0.0);
\coordinate (l) at (1/2,1/4);
\node[scale=0.2] at (-1/20,-1/20) {$\bm{v}_1$};
\node[scale=0.2] at (1/2+1/20,-1/20) {$\bm{v}_2$};
\node[scale=0.2] at (1/2+1/18,1/2+1/18) {$\bm{v}_4$};
\node[scale=0.2] at (-1/18,1/2+1/18) {$\bm{v}_3$};
\node[scale=0.2,blue] at (1/8,-1/18) {$u_1$};
\node[scale=0.2,blue] at (1/8+1/18,1/8) {$u_2$};
\node[scale=0.2,blue] at (-1/18,1/8) {$u_3$};
\node[scale=0.2,blue] at (3/8,-1/18) {$u_4$};
\node[scale=0.2,blue] at (1/2+1/18,1/8) {$u_5$};
\node[scale=0.2,blue] at (3/8+1/18,3/8) {$u_8$};
\node[scale=0.2,blue] at (1/2+1/18,3/8) {$u_9$};
\node[scale=0.2,blue] at (3/8,1/2+1/18) {$u_{10}$};
\node[scale=0.2,blue] at (1/8,1/2+1/18) {$u_7$};
\node[scale=0.2,blue] at (-1/18,3/8) {$u_6$};
\node[scale=0.2,red] at (1/3+1/36,1/6+1/18) {$f_1$};
\node[scale=0.2,red] at (1/6+1/18,1/3+1/36) {$f_2$};
\draw[color=black,thick] (a) -- (b) -- (c) -- (a);
\draw[color=black,thick] (a) -- (i);
\draw[color=black,thick] (i) -- (b);
\draw[color=black,dashed] (d) -- (g);
\draw[color=black,dashed] (e) -- (g);
\draw[color=black,dashed] (f) -- (g);
\draw[color=black,dashed] (j) -- (e);
\draw[color=black,dashed] (j) -- (k);
\draw[color=black,dashed] (j) -- (l);
\draw[-latex,color=red] (2/7,1/8) arc (-150:180:1mm) node[near start,left] {};
\draw[-latex,color=red] (1/12,2/7) arc (-150:180:1mm) node[near start,left] {};
\draw[draw,blue,->] (a) -- (1/8,0.0);
\draw[draw,blue,->] (k) -- (3/8,0.0);
\draw[draw,blue,->] (i) -- (1/2,1/8);
\draw[draw,blue,->] (a) -- (1/8,1/8);
\draw[draw,blue,->] (a) -- (0.0,1/8);
\draw[draw,blue,->] (e) -- (3/8,3/8);
\draw[draw,blue,->] (f) -- (3/8,1/2);
\draw[draw,blue,->] (d) -- (0.0,3/8);
\draw[draw,blue,->] (c) -- (1/8,1/2);
\draw[draw,blue,->] (l) -- (1/2,3/8);
\end{tikzpicture}
\caption{The $p=0$ method at work on a mesh consisting of two triangles $\mathcal{T}_1$ (with vertices $\bm{v}_1$,$\bm{v}_2$,$\bm{v}_4$) and $\mathcal{T}_2$ (with vertices $\bm{v}_1$, $\bm{v}_3$, $\bm{v}_4$).}
\label{fig:low_order_incidence}
\end{figure}
We may in fact take a closer look at the dimensions of the spaces in (\ref{eq:dimvec}) and (\ref{eq:dimscal}). We notice that, if we set $p=0$, we still have a non-empty basis.
Specifically, we are left with one degree of freedom (DoF) per triangle for the pseudo-vector ${H}^{h,0}(\bm{r},t)$ and two DoFs per each triangle edge in the primal complex for the vector field $\bm{E}^{h,0}(\bm{r},t)$.
A simple illustrative example is given in Fig.~\ref{fig:low_order_incidence} for a mesh consisting of two triangles $\mathcal{T}_1$ and $\mathcal{T}_2$: there are ten DoFs for the electric field, $u_{n}$ with $n=1,2,\dots,10$, and two DoFs (one per triangle) for the magnetic field, $f_{m}$ with $m=1,2$. Their indexing is induced directly by the ordering of vertices in the primal complex: it is easy to prove that the $\bm{u}(t)$ DoFs are line-integrals of the electric field along edges $e\in\mathcal{K}_1^\Omega\cap\mathcal{S}(\mathcal{C}^\Omega)$, while the $\bm{f}(t)$ DoFs are fluxes of the $B^{h,0}(\bm{r},t) := \mu {H}^{h,0}(\bm{r},t)$ pseudo-vector field across the triangles $\mathcal{T}\in\mathcal{K}_2^\Omega$. Given an edge $e\in\{\mathcal{K}_1^\Omega\cap\mathcal{S}(\mathcal{C}^\Omega)\}$, the following in fact holds:
\begin{align*}
\int_{e} \bm{E}^{h,0} (\bm{r},t)\cdot \hat{\bm{t}}(\ell)\,\mathrm{d}\ell &=
\int_{e} u_e(t)\,\bm{w}_{e}^0(\bm{r})\cdot \hat{\bm{t}}(\ell)\,\mathrm{d}\ell \\
&= u_e(t) \int_{e} C_{00l} \left(\mathbf{A}_K^{-\mathrm{T}} \hat{\nabla}{\xi}_{l} \right)\cdot \hat{\bm{t}}(\ell)\,\mathrm{d}\ell \\
&= u_e(t),
\end{align*}
\noindent where we again abuse the notation by using $e$ both as an index and as the integration domain, and $K \in\mathcal{K}_2^\Omega$ is such that ${e} \subset \{\partial{K}\}$, $l = l(e,K)$. We also exploited the fact that only one basis-function has a non-vanishing tangential component on the edge $e$, by setting $C_{00l}=1/|e|$. Completely analogous computations hold for the magnetic field approximation ${H}^{h,0}$.
The discrete operator $\mathbf{C}_0$ and its transpose are instead exactly incidence matrices: this can be proved by direct computation of (\ref{eq:weak_curl_algebra}) where, as the reader may notice, the double integrals on the kite vanish identically (since $p=0$) but the line-integrals do not.
Left-multiplication of the DoF vector by the incidence matrix, $\mathbf{C}_0 \bm{u}$, amounts to an application of Stokes' theorem:
\begin{align*}\int_{\mathcal{T}_m} \partial_t(\mu{H}^{h,0})\,\mathrm{d}\bm{r} = \oint_{\partial\mathcal{T}_m} \bm{E}^{h,0}\cdot\hat{\bm{t}}(\ell)\,\mathrm{d}\ell, & \;\;\;m=1,2.\end{align*}
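As a minimal illustration of this discrete Stokes relation (with an arbitrary orientation convention and arbitrary DoF values, unrelated to the numbering of Fig.~\ref{fig:low_order_incidence}), a single triangle with three boundary edges gives:
\begin{verbatim}
import numpy as np

# Signed edge-to-cell incidence of one triangle whose boundary is traversed
# counter-clockwise: +1 if the edge orientation agrees with the traversal.
C = np.array([[+1, +1, -1]])        # (number of cells) x (number of edges)

# Edge DoFs: line integrals of E along the oriented edges (arbitrary values).
u = np.array([0.3, -0.1, 0.4])

# C @ u is the discrete circulation of E along the cell boundary, i.e. the
# right-hand side of the relation above.
print(C @ u)                        # -> [-0.2]
\end{verbatim}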
If PEC boundary conditions are enforced, we note that the number of unconstrained DoFs becomes equal (to two) for both unknowns $\bm{E}^{h,0}$ and ${H}^{h,0}$ (as predicted, for example, in \cite{he_tex_dofs}). We finally again stress that the $\bm{E}^{h,0}(\bm{r},t)$ field is allowed to be fully discontinuous on the dashed dual edges.
We remark that this is exactly the Cell Method framework advocated by Tonti~\cite{tonti,marrone}, while also being a generalization of the Yee algorithm to unstructured meshes. The peculiarity of having to split each $E\in\mathcal{C}_1^\Omega$ into two segments (while still preserving the physical interpretation of DoFs) is also not new: it was studied by some of the authors in the most general 3D setting in \cite{codecasa_politi,dgatap,dgagpu}, where tetrahedral meshes are used. In fact the (covariantly mapped) function
\begin{align*}\mathbf{A}_K^{-\mathrm{T}} \hat{\nabla}{\xi}_{l}\end{align*}
is a one-form which coincides exactly with the basis-functions introduced in \cite{codecasa_politi} directly in the global coordinates.
As anticipated in the introduction, a recent equivalence proof (in \cite{dga_as_dg}) between the lowest order 3D CM and a DG approach was a leading cause for the present developments.
\section{Numerical Results}\label{sec:num}
We shall here validate the method through numerical experiments. All computations are in natural units, i.e. physical units have been rescaled such that the speed of light in a vacuum is normalized to one, which means in practice $\varepsilon = \varepsilon_r$ and $\mu = \mu_r$.
\subsection{MIBVP results}
To test the transient behaviour of the method we use a manufactured time-domain problem whose solution is available in closed form in \cite{dgatap}; the computational domain is a waveguide $\Omega = [0,1]\!\times\![0,2]$. The fundamental propagating mode\footnote{the waveguide-mode with the lowest cut-off frequency.} is enforced as a time-dependent boundary condition at the entrance $y=0$ of the waveguide, while all other segments in $\partial\Omega$ are set to PEC.
To simulate a structure invariant in the $z$ direction (a necessary assumption for genuinely 2D problems) we need a transverse-electric (TE) mode, i.e. only one component of the electric field is not identically zero. It is very convenient for this purpose to swap the field approximation spaces with respect to the theory and make the $\bm{E}$ field a pseudo-vector. This poses no real difficulty, as the input field can be injected as an equivalent magnetic current by projecting it on the vector-valued trial-space (which requires 1D Gauss integration on mesh edges at $y=0$).
\begin{figure}[!h]
\centering
\includegraphics[width=1.0\textwidth]{transient_three_fields_coarse.pdf}
\caption{The transient field in the waveguide at $t=2$.}
\label{fig:contourf}
\end{figure}
The behaviour in the whole waveguide for the three non-zero components of the electromagnetic field is shown in Fig.~\ref{fig:contourf}, for polynomial degree $p=5$, average mesh size $h=0.2$ and at time $t=2$ (again in natural units).
Due to the reflections at $y=2$, the $z$-aligned field is not everywhere continuously differentiable in time. The presence of critical points in the temporal behaviour is visible in Fig.~\ref{fig:td_full}, which shows how the various polynomial orders behave for the same mesh, chosen to be rather coarse with a maximum mesh size of $h=0.2$. All polynomial degrees in the bases are tested with the leap-frog time-stepping scheme using $\tau$ equal to the upper limit for stability (the usual practical choice). The qualitatively better approximation properties of the higher order versions of the method are clearly visible.
We do not claim to have programmed the fastest possible version of the method, yet it is useful to remark that for $p=6$ and the given mesh size $h$, the method requires $21\,472$ DoFs ($6\,464$ for the scalar-valued unknown, $15\,008$ for the vector-valued one), the maximum allowed time-step is $\tau=3.833\times 10^{-3}$, and the computation sustains $129.633$ time-steps per second (in wall-time, averaged over simulations with $10^5$ time-steps) on a modest laptop computer (Intel Core i7-6500U CPU, clocked at 2.50GHz with 4 physical cores, 8 GB of RAM), which amounts to roughly $2.783\times 10^6$ DoFs/second of average performance.
\begin{figure}[h!]
\centering
\centering
\includegraphics[width=0.49\textwidth]{time_domain_full.pdf}
\includegraphics[width=0.49\textwidth]{time_domain_detail.pdf}
\caption{The left panel shows the time-dependent solution to the waveguide problem. The field is measured at the centroid of the domain, as can be inferred from the delay in the propagation at the start. The right panel shows a blow-up of a small interval around $t = 3.0$, where a critical point in the solution must be approximated.}
\label{fig:td_full}
\end{figure}
\subsection{Spectral accuracy}
Due to the low regularity of the true solution of the transient waveguide problem, we cannot expect to observe the theoretical order of convergence of the method.
A good way to assess the superior approximation properties of higher order basis-functions is to use the proposed method to solve an associated generalized eigenvalue problem. In fact, since we are using a kind of discontinuous Galerkin approach, the spectral accuracy of the method is interesting in its own right, and not just as a means to study convergence: we have no formal guarantee for the absence of spurious modes, which would tarnish the appeal of any new numerical method. By acting directly on the semi-discrete system (\ref{eq:semidiscrete_cmp}) and making it time-harmonic ($\partial_t \mapsto -i\omega$, where now $i=\sqrt{-1}$) we arrive at the following two ``dual'' formulations:
\begin{align}
& \mathbf{C}_p (\mathbf{M}_p^\varepsilon)^{-1} \mathbf{C}_p^{\mathrm{T}} \hat{\mathbf{f}} = \lambda \mathbf{M}_p^\mu \hat{\mathbf{f}},\label{eq:mevp}\\
& \mathbf{C}_p^{\mathrm{T}} (\mathbf{M}_p^\mu)^{-1} \mathbf{C}_p \hat{\mathbf{u}} = \lambda^* \mathbf{M}_p^\varepsilon \hat{\mathbf{u}},\label{eq:eevp}
\end{align}
\noindent where the hat super-script denotes the time-harmonic solutions and $\lambda$, $\lambda^*$ are the squared eigenfrequencies. Depending on boundary conditions, (\ref{eq:mevp})--(\ref{eq:eevp}) approximate either the Dirichlet MEP or the Neumann one.
We choose to work with (\ref{eq:mevp}) since the $H^{\bm{curl}}(\Omega)$ space, as defined in Section \ref{sec:funspaces}, coincides in two dimensions with the standard Sobolev space $H^1(\Omega)$, i.e. we are basically approximating the Laplace operator with the matrix $\mathbf{C}_p (\mathbf{M}_p^\varepsilon)^{-1} \mathbf{C}_p^{\mathrm{T}}$.
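Once the matrices are assembled, solving (\ref{eq:mevp}) is a standard generalized symmetric eigenvalue problem. The following Python sketch illustrates the linear-algebra workflow only: the block-diagonal mass matrices and the matrix standing in for $\mathbf{C}_p$ are small random placeholders, not the output of the actual assembly.
\begin{verbatim}
import numpy as np
import scipy.linalg as la
import scipy.sparse as sp

rng = np.random.default_rng(0)

def random_spd(n):
    B = rng.standard_normal((n, n))
    return B @ B.T + n * np.eye(n)

# Placeholder sizes: 8 scalar DoFs and 10 blocks of 2 vector DoFs each.
mu_blocks  = [random_spd(1) for _ in range(8)]
eps_blocks = [random_spd(2) for _ in range(10)]

M_mu  = sp.block_diag(mu_blocks,  format="csc")
M_eps = sp.block_diag(eps_blocks, format="csc")

# The block-diagonal structure makes M_eps trivially invertible block by block.
M_eps_inv = sp.block_diag([np.linalg.inv(B) for B in eps_blocks], format="csc")

# Placeholder for the discrete curl C_p (here just a random sparse matrix).
C = sp.random(M_mu.shape[0], M_eps.shape[0], density=0.3, random_state=0)

# Generalized symmetric eigenproblem  (C M_eps^{-1} C^T) f = lambda M_mu f.
A = (C @ M_eps_inv @ C.T).toarray()
eigvals = la.eigh(A, M_mu.toarray(), eigvals_only=True)
print(np.sort(eigvals)[:5])
\end{verbatim}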
As a first example we take the unit square domain $\Omega=[0,1]\times[0,1]$ with uniform material coefficients $\mu_r=1$ and $\varepsilon_r=1$. In this case the eigenvalues are of the form $\lambda = (a^2+b^2)\pi^2$ where $a,b \in \mathbb{N}^+$ for Dirichlet boundary conditions on the $H$ field and $a,b\in\mathbb{N}_0$ for Neumann ones.
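The reference values used for comparison below can be tabulated directly from this formula; a short sketch (again outside the solver itself) is:
\begin{verbatim}
import numpy as np

def reference_eigenvalues(num, neumann=False):
    # First `num` eigenvalues (a^2 + b^2) * pi^2 of the unit square.
    # Dirichlet: a, b >= 1.  Neumann: a, b >= 0 (includes the zero eigenvalue).
    lo = 0 if neumann else 1
    vals = sorted(a * a + b * b for a in range(lo, 40) for b in range(lo, 40))
    return np.pi ** 2 * np.array(vals[:num])

print(reference_eigenvalues(10) / np.pi ** 2)                 # 2, 5, 5, 8, ...
print(reference_eigenvalues(10, neumann=True) / np.pi ** 2)   # 0, 1, 1, 2, ...
\end{verbatim}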
\begin{figure}[!h]
\centering
\includegraphics[width=0.65\textwidth]{spectrum_essential_BC.pdf}\\
\includegraphics[width=0.65\textwidth]{spectrum_natural_BC.png}
\caption{Spectral correctness of the proposed method when solving the generalized eigenvalue problem (\ref{eq:mevp}) with Neumann boundary conditions (upper panel, note the single zero eigenvalue) and with Dirichlet boundary conditions (lower panel). Here $\mu_r=\varepsilon_r=1$ holds on the whole domain $\Omega$.}
\label{fig:eigvals_uniform_material}
\end{figure}
\noindent Fig.~\ref{fig:eigvals_uniform_material} shows the first 80 eigenvalues (all scaled by $\pi^2$) for both cases, computed with $p=4$ and $h=0.2$ using the \textbf{eigs} function \cite{matlab_eigs} in MATLAB. No spurious eigenvalues appear (we note there is exactly one zero eigenvalue for the Neumann problem). Thorough testing for all $p<8$ and various mesh sizes confirms the absence of spurious eigenmodes due to the method. The accuracy is quite impressive for the shown test, for which we also present the first ten computed eigenfunctions in Fig.~\ref{fig:eigfuns_uniform_material}. We note that, when the associated eigenvalue has algebraic multiplicity greater than one, we cannot easily force the chosen solver to yield a pair of mutually orthogonal eigenfunctions rather than two of their linear combinations.
\begin{figure}[h!]
\centering
\includegraphics[width=0.48\textwidth]{h_convergence_uniform_material.png}
\includegraphics[width=0.48\textwidth]{h_convergence_uniform_material_neumann.pdf}
\caption{The error in approximating the 16th eigenvalue of the Dirichlet (left panel) and Neumann (right panel) problem with respect to the mesh size $h$ vanishes with the expected rate for the various tested polynomial orders $p$ for the case of a uniformly filled cavity.}
\label{fig:h_conv_uniform_material}
\end{figure}
A more formal study of convergence is shown in Fig.~\ref{fig:h_conv_uniform_material}, which reveals $\mathcal{O}(h^{2p})$ convergence when polynomial degree $p$ is used and the mesh size $h$ vanishes. This has been found to hold for the eigenvalues of both generalized problems (\ref{eq:mevp})--(\ref{eq:eevp}). The obtained rates are in perfect agreement with the theoretical studies of Buffa \& Perugia in \cite{perugia_buffa} for DG methods. We nevertheless stress that the analysis therein relies on the introduction of (mesh and polynomial degree dependent) penalty parameters, which must be large enough to ensure coercivity of the bilinear form on the l.h.s. of the weak formulation of the eigenvalue problem. By contrast, no free parameters are present in the formulation proposed herein.
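An empirical rate of this kind is obtained, as usual, from the slope of the error curve in log-log coordinates; a minimal sketch with placeholder error values (chosen to mimic $\mathcal{O}(h^{4})$ decay and to be replaced by measured data) is:
\begin{verbatim}
import numpy as np

# Mesh sizes and corresponding eigenvalue errors (placeholder values that
# behave like O(h^{2p}) with p = 2; replace with measured data).
h   = np.array([0.2, 0.1, 0.05, 0.025])
err = np.array([3.0e-4, 1.9e-5, 1.2e-6, 7.5e-8])

# Least-squares slope in log-log coordinates gives the empirical order.
order, _ = np.polyfit(np.log(h), np.log(err), deg=1)
print(f"estimated order: {order:.2f}")   # close to 4 = 2p for p = 2
\end{verbatim}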
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\textwidth]{eigenfuns_essential_BC.png}\\
\includegraphics[width=0.9\textwidth]{eigenfuns_natural_BC.png}
\caption{The first ten computed eigenfunctions for the case of the uniformly filled cavity: Neumann and Dirichlet case.}
\label{fig:eigfuns_uniform_material}
\end{figure}
We furthermore remark that the $p=0$ version of the method shows an $\mathcal{O}(h^{2p+2})$ convergence rate, but this super-convergence phenomenon does not carry over to higher polynomial degrees, at least for the proposed sets of basis-functions. This fact clearly calls for further theoretical investigation.
As a more demanding setup, we split our square $\Omega$ exactly into two halves, with a material discontinuity aligned with the $y$ axis. We fill the left half of the cavity, $\Omega_1 =[0,1/2]\!\times\![0,1]$, with a higher-index material, $\varepsilon_1=4$, which corresponds to halving the speed of light with respect to the vacuum parameters; the latter are kept intact on $\Omega_2 = \Omega\setminus\Omega_1$.
The exact Neumann eigenvalues are no longer easily computable by pen and paper, as one needs to solve a transcendental equation (see \cite{pincherle}) involving hyperbolic functions. Yet, using any symbolic mathematics toolbox, their values can be estimated to arbitrary precision.
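We do not reproduce the matching condition here; the snippet below only illustrates the kind of arbitrary-precision root finding involved, applied to a placeholder transcendental equation.
\begin{verbatim}
import mpmath as mp

mp.mp.dps = 30   # thirty significant digits

def dispersion(k):
    # Placeholder equation mixing trigonometric and hyperbolic terms; it is
    # NOT the matching condition of the two-material cavity.
    return mp.tan(k) + mp.tanh(k)

# Roots are bracketed by a coarse scan (or graphically) and then polished.
root = mp.findroot(dispersion, 2.3)
print(root)      # first positive root, approximately 2.365
\end{verbatim}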
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{eigenfuns_2materials.png}
\caption{Eigenfunctions for the cavity with discontinuous permittivity.}
\label{fig:eigfuns_2materials}
\end{figure}
We show the first ten eigenfunctions we computed (again with $p=4$ and $h=0.2$) as a reference in Fig.~\ref{fig:eigfuns_2materials}, where we stress the fact that ``partially evanescent'' modes are clearly visible: more formally these are modes with real wave-number $k = \left(k_x^2 + k_y^2\right)^{\frac{1}{2}}$ (due to the positive-definiteness property of the Laplace operator) but imaginary $k_x$.
\begin{figure}[h!]
\centering
\includegraphics[width=0.45\textwidth]{spectrum_2materials.pdf}
\includegraphics[width=0.50\textwidth]{h_convergence_discontinuous_material.png}
\caption{The method remains spectrally correct when approximating the Neumann problem for the inhomogeneously filled square (left panel). The right panel shows that the error in approximating the 20th eigenvalue still vanishes at the optimal rate with respect to the mesh size $h$ for the various tested polynomial orders $p$.}
\label{fig:h_conv_2materials}
\end{figure}
This behaviour is confirmed by the distribution of eigenvalues in Fig.~\ref{fig:h_conv_2materials} (left panel, which again shows no spurious solutions), where the eigenvalues are seen to be shifted towards zero and clustered more closely together. The optimal order of convergence for the various polynomial degrees is again confirmed for the discontinuous material case in Fig.~\ref{fig:h_conv_2materials} (right panel).
\begin{figure}[h!]
\centering
\includegraphics[width=0.75\textwidth]{eigenfuns_Lshape.png}
\caption{The first six eigenfunctions for the $L$-shape domain.}
\label{fig:eigfuns_lshape}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=0.48\textwidth]{spectrum_Lshape.pdf}
\includegraphics[width=0.48\textwidth]{p_convergence_Lshape.pdf}
\caption{Spectral correctness check for the L-shape domain: first six eigenvalues (on the left). On the right we show convergence under $p$ refinement for the approximation of the first five non-zero eigenvalues.}
\label{fig:eigvals_lshape}
\end{figure}
As a final test we show how the method behaves when singular solutions are expected. To this end we use the celebrated $L$-shaped domain: $\Omega = \{[-1,1]\times[-1,1]\}\setminus \{[0,1]\times[-1,0]\}$, for which the first six eigenfunctions of the Neumann problem are shown in Fig.~\ref{fig:eigfuns_lshape}, computed with a fine mesh. We show the six associated eigenvalues in Fig.~\ref{fig:eigvals_lshape}, where values from \cite{dauge} (numerically estimated with the standard FEM, with eleven digits expected to be correct) are taken as a reference solution. Again no spurious solutions are observed. Naturally, optimal convergence cannot be expected (at least not with a naive mesh-refinement strategy) for the second and for the sixth eigenvalue, as the associated eigenfunctions have a strong unbounded singularity at the origin. Restoring optimal convergence by appropriate $hp$--refinement is beyond the scope of the present contribution, while again providing an obvious research direction for future work.
\section{Conclusions}\label{sec:conclusio}
The proposed method presents very promising approximation properties, as shown both by theoretical and numerical investigations. Its potential for high-performance is preserved by the block-diagonal structure of the mass-matrices. Furthermore, the arbitrary order version also preserves the explicit splitting of the involved discrete operators into topological and geometric ones.
It has also not escaped our notice that, with slight modifications in the definitions of shape-functions and transformation rules, a method applicable to the acoustic wave equation (in the velocity--pressure first order formulation) instead of the Maxwell system can be obtained.
Even in the absence of a complete theory, we expect that the extension of the method to three spatial dimensions, which is currently being carried out and will be the topic of a subsequent submission, will further demonstrate its effectiveness as a fast solver for the time-dependent Maxwell equations.
Nevertheless, a more thorough theoretical analysis of the introduced functional spaces and the development of a spectral theory for the involved operators remain necessary and are left to future investigation.
On a more critical note, we remark that the proposed local shape-functions, although in principle of arbitrary degree and hierarchical, are not practical for polynomial degrees $p>5$, since bases consisting of scaled monomials on a subset of the unit square quickly yield ill-conditioned mass-matrix blocks (see also \cite{poptimal_basis}). This can be remedied by partial orthonormalization techniques, which pose no substantial theoretical difficulties.
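The phenomenon is easy to reproduce even in one dimension, where the Gram (mass) matrix of the monomials $1, x, \dots , x^p$ on $[0,1]$ is the notoriously ill-conditioned Hilbert matrix; the sketch below also shows how orthonormalization (here via a Cholesky factor) restores a perfectly conditioned block.
\begin{verbatim}
import numpy as np

def monomial_gram(p):
    # Gram matrix of 1, x, ..., x^p in L^2(0,1): the (p+1)x(p+1) Hilbert matrix.
    i = np.arange(p + 1)
    return 1.0 / (i[:, None] + i[None, :] + 1.0)

for p in (3, 5, 8, 10):
    print(p, np.linalg.cond(monomial_gram(p)))   # grows roughly exponentially

# Orthonormalizing the basis maps the mass matrix to (numerically) the identity.
G = monomial_gram(8)
L = np.linalg.cholesky(G)
Linv = np.linalg.inv(L)
print(np.linalg.cond(Linv @ G @ Linv.T))         # close to 1
\end{verbatim}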
We finally remark that a reduction of the present high-order method to Cartesian-orthogonal meshes is straightforward (via the same barycentric subdivision procedure), and the resulting scheme degenerates to Yee's algorithm when piecewise-constant bases are used.
\section*{Acknowledgements}
Author Bernard Kapidani was financially supported by the Austrian Science Fund (FWF) under grant number F65: \emph{Taming Complexity in Partial Differential Equations}.
\section{Introduction}
Let $F$ be a field, $X$ a non-empty set and let $F \langle X
\rangle$ be the free unitary associative algebra over $F$ on the
set $X$. Recall that a \textit{T-ideal} of $F \langle X \rangle$
is an ideal closed under all endomorphisms of $F \langle X
\rangle$. Similarly, a \emph{$T$-subspace} (or a \emph{$T$-space})
is a vector subspace in $F \langle X \rangle$ closed under all
endomorphisms of $F \langle X \rangle$.
Let $I$ be a $T$-ideal in $F \langle X \rangle$. A subset $S
\subset I$ \textit{generates $I$ as a $T$-ideal} if $I$ is the
minimal $T$-ideal in $F \langle X \rangle$ containing $S$. A
$T$-subspace of $F \langle X \rangle$ generated by $S$ (as a
$T$-subspace) is defined in a similar way. It is clear that the
$T$-ideal ($T$-subspace) generated by $S$ is the ideal (vector
subspace) generated by all the polynomials $f(g_1,\ldots, g_m)$,
where $f=f(x_1, \ldots , x_m) \in S$ and $g_i\in F \langle X
\rangle$ for all $i$.
Note that if $I$ is a $T$-ideal in $F \langle X \rangle$ then
$T$-ideals and $T$-subspaces can be defined in the quotient
algebra $F \langle X \rangle/I$ in a natural way. We refer to
\cite{drbook, drformanekbook, gz, K-BR, kemerbook, rowenbook} for
the terminology and basic results concerning $T$-ideals and
algebras with polynomial identities and to \cite{BOR_cp, BKKS,
GrishCyb, grshchi, K-BR} for an account of the results concerning
$T$-subspaces.
From now on we write $X$ for $\{ x_1, x_2, \ldots \}$ and $X_n$
for $\{ x_1, \ldots , x_n \}$, $X_n \subset X$. If $F$ is a field
of characteristic $0$ then every $T$-ideal in $F \langle X
\rangle$ is finitely generated (as a $T$-ideal); this is a
celebrated result of Kemer \cite {Kemer, kemerbook} that solves
the Specht problem. Moreover, over such a field $F$ each
$T$-subspace in $F \langle X \rangle$ is finitely generated; this
has been proved more recently by Shchigolev \cite{Shchigolev01}.
Very recently Belov \cite{Belov10} has proved that, for each
Noetherian commutative and associative unitary ring $K$ and each
$n \in \mathbb N$, each $T$-ideal in $K \langle X_n \rangle$ is
finitely generated.
On the other hand, over a field $F$ of characteristic $p>0$ there
are $T$-ideals in $F \langle X \rangle$ that are not finitely
generated. This has been proved by Belov \cite{Bel99}, Grishin
\cite{Grishin99} and Shchigolev \cite{Shchigolev99} (see also
\cite{Bel00, Grishin00, K-BR}). The construction of such
$T$-ideals uses the non-finitely generated $T$-subspaces in $F
\langle X \rangle$ constructed by Grishin \cite{Grishin99} for
$p=2$ and by Shchigolev \cite{Shchigolev00} for $p>2$ (see also
\cite{Grishin00}). Shchigolev \cite{Shchigolev00} also constructed
non-finitely generated $T$-subspaces in $F \langle X_n \rangle$,
where $n>1$ and $F$ is a field of characteristic $p>2$.
A $T$-subspace $V^*$ in $F \langle X \rangle$ is called
\emph{limit} if every larger $T$-subspace $W \gneqq V^*$ is
finitely generated as a $T$-subspace but $V^*$ itself is not. A
\textit{limit $T$-ideal} is defined in a similar way. It follows
easily from Zorn's lemma that if a $T$-subspace $V$ is not
finitely generated then it is contained in some limit $T$-subspace
$V^*$. Similarly, each non-finitely generated $T$-ideal is
contained in a limit $T$-ideal. In this sense limit $T$-subspaces
($T$-ideals) form a ``border'' between those $T$-subspaces
($T$-ideals) which are finitely generated and those which are not.
By \cite{Bel99, Grishin99, Shchigolev99}, over a field $F$ of
characteristic $p>0$ the algebra $F \langle X \rangle$ contains
non-finitely generated $T$-ideals; therefore, it contains at least
one limit $T$-ideal. No example of a limit $T$-ideal is known so
far. Even the cardinality of the set of limit $T$-ideals in $F
\langle X \rangle$ is unknown; it is possible that, for a given
field $F$ of characteristic $p>0$, there is only one limit
$T$-ideal. The non-finitely generated $T$-ideals constructed in
\cite{AladovaKras} come closer to being limit than any other known
non-finitely generated $T$-ideal. However, it is unlikely that
these $T$-ideals are limit.
About limit $T$-subspaces in $F \langle X \rangle$ we know more
than about limit $T$-ideals. Recently Brand\~ao Jr., Koshlukov,
Krasilnikov and Silva \cite{BKKS} have found the first example of
a limit $T$-subspace in $F \langle X \rangle$ over an infinite
field $F$ of characteristic $p>2$. To state their result precisely
we need some definitions.
For an associative algebra $A$, let $Z(A)$ denote the centre of
$A$,
\[
Z(A) = \{ z \in A \mid za= az \mbox{ for all } a \in A \}.
\]
A polynomial $f(x_1,\ldots,x_n)$ is \emph{a central polynomial}
for $A$ if $f(a_1,\ldots, a_n) \in Z(A)$ for all $a_1, \dots , a_n
\in A$. For a given algebra $A$, its central polynomials form a
$T$-subspace $C(A)$ in $F \langle X \rangle$. However, not every
$T$-subspace can be obtained as the $T$-subspace of the central
polynomials of some algebra.
Let $V$ be the vector space over a field $F$ of characteristic
$\ne 2$, with a countably infinite basis $e_1$, $e_2, \dots $ and
let $V_s$ denote the subspace of $V$ spanned by $e_1, \ldots , e_s$
$(s = 2,3 , \ldots ) .$ Let $G$ and $G_s$ denote the unitary
Grassmann algebras of $V$ and $V_s$, respectively. Then as a
vector space $G$ has a basis that consists of 1 and of all
monomials $e_{i_1}e_{i_2}\ldots e_{i_k}$, $i_1<i_2<\cdots<i_k$,
$k\ge 1$. The multiplication in $G$ is induced by $e_ie_j=-e_je_i$
for all $i$ and $j$. The algebra $G_s$ is the subalgebra of $G$
generated by $e_1, \ldots ,e_s$, and $ \mbox{dim }G_s = 2^s$. We
refer to $G$ and $G_s$ $(s = 2,3, \ldots )$ as the infinite
dimensional Grassmann algebra and the finite dimensional Grassmann
algebras, respectively.
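For instance, $e_1e_2=-e_2e_1$ gives $[e_1,e_2]=2e_1e_2$, while the product $e_1e_2$ commutes with every generator:
\[
e_3(e_1e_2)=-e_1e_3e_2=e_1e_2e_3=(e_1e_2)e_3,
\]
so that $[e_1,e_2,e_3]=[2e_1e_2,e_3]=0$. This simple computation lies behind the identity $[x_1,x_2,x_3]$ of $G$ recalled in the Preliminaries below.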
The result of \cite{BKKS} concerning a limit $T$-subspace is as
follows:
\begin{theorem}[\cite{BKKS}]
\label{C_limit} Let $F$ be an infinite field of characteristic
$p>2$ and let $G$ be the infinite dimensional Grassmann algebra
over $F$. Then the vector space $C(G)$ of the central polynomials
of the algebra $G$ is a limit T-space in $F \langle X \rangle$.
\end{theorem}
It was conjectured in \cite{BKKS} that a limit $T$-subspace in $F
\langle X \rangle$ is unique, that is, $C(G)$ is the only limit
$T$-subspace in $F \langle X \rangle$. In the present article we
show that this is not the case. Our first main result is as
follows.
\begin{theorem}
\label{theorem_main1} Over an infinite field $F$ of characteristic
$p>2$ the algebra $F \langle X \rangle$ contains infinitely many
limit $T$-subspaces.
\end{theorem}
Let $F$ be an infinite field of characteristic $p >0.$ In order to
prove Theorem \ref{theorem_main1} and to find infinitely many
limit $T$-subspaces in $F \langle X \rangle$ we first find limit
$T$-subspaces in $F \langle X_n \rangle$ for $n = 2 k$, $k \ge 1$.
Let $C_n = C(G) \cap F \langle X_n \rangle$ be the set of the
central polynomials in at most $n$ variables of the unitary Grassmann
algebra $G$. Our second main result is as follows.
\begin{theorem}
\label{theorem_main2} Let $F$ be an infinite field of
characteristic $p>2.$ If $n = 2k$, $k \ge 1$, then $C_{n}$ is a
limit $T$-subspace in $F \langle X_{n} \rangle$. If $n = 2k+1$, $k
>1$, then $C_n$ is finitely generated as a $T$-subspace in $F
\langle X_n \rangle$.
\end{theorem}
\noindent \textbf{Remark. } We do not know whether the
$T$-subspace $C_3$ is finitely generated.
\bigskip
Define $[a,b]=ab-ba$, $[a,b,c] = [[a,b],c]$. For $k \ge 1$, let
$T^{(3,k)}$ denote the $T$-ideal in $F \langle X \rangle$
generated by $[x_1,x_2,x_3]$ and $[x_1,x_2][x_3,x_4] \ldots
[x_{2k-1},x_{2k}]$ and let $R_k$ denote the $T$-subspace in $F \langle X
\rangle$ generated by $C_{2k}$ and $T^{(3,k+1)}$. Theorem
\ref{theorem_main1} follows immediately from our third main result
that is as follows.
\begin{theorem}
\label{theorem_main3} Let $F$ be an infinite field of
characteristic $p>2.$ For each $k \ge 1$, $R_k$ is a limit
$T$-subspace in $F \langle X \rangle$. If $k \ne l$ then $R_k \ne
R_l$.
\end{theorem}
Now we modify the conjecture made in \cite{BKKS}.
\begin{problem}
Let $F$ be an infinite field of characteristic $p>2$. Is each
limit $T$-subspace in $F \langle X \rangle$ equal to either $C(G)$
or $R_k$ for some $k$? In other words, are $C(G)$ and $R_k$ $(k
\ge 1)$ the only limit $T$-subspaces in $F \langle X \rangle$?
\end{problem}
\medskip
In the proof of Theorems \ref{theorem_main2} and
\ref{theorem_main3} we will use the following theorem that has
been proved independently by Bekh-Ochir and Rankin \cite{BOR_cp},
by Brand\~ao Jr., Koshlukov, Krasilnikov and Silva \cite{BKKS} and
by Grishin \cite{Grishin10}. Let
\[
q(x_1,x_2)=x_1^{p-1}[x_1,x_2]x_2^{p-1}, \qquad
q_k(x_1,\dots,x_{2k})=q(x_1,x_{2}) \cdots q(x_{2k-1},x_{2k}).
\]
\begin{theorem}[\cite{BOR_cp}, \cite{BKKS}, \cite{Grishin10}]
\label{generators_of_C} Over an infinite field $F$ of
characteristic $p>2$ the vector space $C(G)$ of the central
polynomials of $G$ is generated (as a T-space in $F \langle X
\rangle $) by the polynomial
\[
x_1[x_2,x_3,x_4]
\]
and the polynomials
\[
x_1^p \ , \ x_1^p \, q_1(x_2,x_3) \ , \ x_1^p \, q_2(x_2,x_3,x_4,
x_5) \ , \ldots ,\ x_1^p \, q_n(x_2, \ldots , x_{2n+1}) \, ,
\ldots .
\]
\end{theorem}
In order to prove Theorems \ref{theorem_main2} and
\ref{theorem_main3} we need some auxiliary results. Define, for
each $l \ge 0$,
\[
q^{(l)}(x_1,x_2)= x_1^{p^l-1}[x_1,x_2]x_2^{p^l-1},
\]
\[
q_{k}^{(l)}(x_1,\dots,x_{2k})= q^{(l)}(x_1,x_{2}) \cdots
q^{(l)}(x_{2k-1},x_{2k}).
\]
Recall that $C_n = C(G) \cap F \langle X_n \rangle$. To prove
Theorem \ref{theorem_main2} we need the following assertions that
are also of independent interest.
\begin{proposition}
\label{generators_of_C_n} If $n=2k$, $k>1$, then $C_n$ is
generated as a $T$-subspace in $F \langle X_n \rangle$ by the
polynomials
\[
x_1[x_2,x_3,x_4], \quad x_1^p, \quad x_1^p q_1(x_2,x_3), \quad
\ldots , \quad x_1^p q_{k-1}(x_2, \ldots , x_{2k-1})
\]
together with the polynomials
\[
\{ q_k^{(l)}(x_1, \ldots , x_{2k}) \mid l=1,2, \ldots \} .
\]
If $n=2k+1$, $k>1$, then $C_n$ is generated as a $T$-subspace in
$F \langle X_n \rangle$ by the polynomials
\[
x_1[x_2,x_3,x_4], \quad x_1^p, \quad x_1^p q_1(x_2,x_3), \quad
\ldots , \quad x_1^p q_{k}(x_2, \ldots , x_{2k+1}).
\]
\end{proposition}
Let $T^{(3)}$ denote the $T$-ideal in $F \langle X \rangle$
generated by $[x_1,x_2,x_3].$ Define $T^{(3)}_n = T^{(3)} \cap F
\langle X_n \rangle$. We deduce Proposition
\ref{generators_of_C_n} from the following.
\begin{proposition}
\label{generators_of_C_n/T3n} If $n=2k$, $k \ge 1$, then
$C_n/T^{(3)}_n$ is generated as a $T$-subspace in $F \langle X_n
\rangle /T^{(3)}_n$ by the polynomials
\begin{equation}\label{gen_C_n/T3n_1}
x_1^p + T^{(3)}_n, \quad x_1^p q_1(x_2,x_3)+T^{(3)}_n, \quad
\ldots , \quad x_1^p q_{k-1}(x_2, \ldots , x_{2k-1}) + T^{(3)}_n
\end{equation}
together with the polynomials
\begin{equation}\label{gen_C_n/T3n_2}
\{ q_k^{(l)}(x_1, \ldots , x_{2k})+ T^{(3)}_n \mid l=1,2, \ldots
\} .
\end{equation}
If $n=2k+1$, $k \ge 1$, then the $T$-subspace $C_n/T^{(3)}_n$ in
$F \langle X_n \rangle/T^{(3)}_n$ is generated by the polynomials
\begin{equation}\label{gen_C_n/T3n_3}
x_1^p + T^{(3)}_n, \quad x_1^p q_1(x_2,x_3)+T^{(3)}_n, \quad
\ldots , \quad x_1^p q_{k}(x_2, \ldots , x_{2k+1})+ T^{(3)}_n.
\end{equation}
\end{proposition}
\noindent \textbf{Remarks. } 1. For each $k \ge 1$, the limit
$T$-subspace $R_k$ does not coincide with the $T$-subspace $C(A)$
of all central polynomials of any algebra $A$.
Indeed, suppose that $R_k = C(A)$ for some $A$. Let $T(A)$ be the
$T$-ideal of all polynomial identities of $A$. Then, for each $f
\in C(A)$ and each $g \in F \langle X \rangle$, we have $[f,g] \in
T(A)$. Since $[x_1,x_2] \in R_k = C(A)$, we have $[x_1,x_2,x_3]
\in T(A)$. It follows that $T^{(3)} \subseteq T(A)$.
It is well-known that if a $T$-ideal $T$ in the free unitary
algebra $F \langle X \rangle$ over an infinite field $F$ contains
$T^{(3)}$ then either $T = T^{(3)}$ or $T = T^{(3, n)}$ for some
$n$ (see, for instance, \cite[Proof of Corollary 7]{gk}). Hence,
either $T(A) = T^{(3)}$ or $T(A) = T^{(3, n)}$ for some $n$. Note
that $T^{(3)} = T(G)$ and $T^{(3,n)} = T(G_{2n-1})$ (see, for
example, \cite{gk}) so we have either $T(A) = T(G)$ or $T(A) =
T(G_{2n-1})$ for some $n$.
For an associative algebra $B$, we have $f(x_1, \ldots , x_r) \in C(B)$ if and only if $[ f(x_1, \ldots , x_r), x_{r+1} ] \in T(B)$. It follows that if $B_1, B_2$ are algebras such that $T(B_1) = T(B_2)$ then $C(B_1) = C(B_2)$. In particular, if $T(A) = T(G)$ then $C(A) = C(G)$, and if $T(A) = T(G_{2n-1})$ then $C(A) = C(G_{2n-1})$.
However,
\[
x_1[x_2,x_3] \ldots [x_{2k+2},x_{2k+3}] \in R_k \setminus C(G)
\]
so $R_k \ne C(G)$. Furthermore, the $T$-subspaces $C(G_s)$ of the
central polynomials of the finite dimensional Grassmann algebras
$G_s$ $(s =2, 3, \ldots )$ have been described recently by
Bekh-Ochir and Rankin \cite{BOR_cpfd} and by Koshlukov,
Krasilnikov and Silva \cite{KKS}; these $T$-subspaces are finitely
generated and do not coincide with $R_k$. This contradiction
proves that $R_k \ne C(A)$ for any algebra $A$, as claimed.
2. For an associative unitary algebra $A$, let $C_n (A)$ and $T_n(A)$
denote the set of the central polynomials and the set of the
polynomial identities in $n$ variables $x_1, \ldots , x_n$ of $A$,
respectively; that is, $C_n (A) = C(A) \cap F \langle X_n \rangle$
and $T_n (A) = T(A) \cap F \langle X_n \rangle$. Then $C_n(A)$ is
a $T$-subspace and $T_n(A)$ is a $T$-ideal in $F \langle X_n \rangle$.
Note that, by Belov's result \cite{Belov10}, the $T$-ideal $T_n(A)$ is
finitely generated for each algebra $A$ over a Noetherian ring and
each positive integer $n$.
On the other hand, there exist unitary algebras $A$ over
an infinite field $F$ of characteristic $p>2$ such that, for some
$n>1$, the $T$-subspace $C_n (A)$ of the central polynomials of
$A$ in $n$ variables is not finitely generated. Moreover, such
an algebra $A$ can be finite dimensional. Indeed, take $A = G_s$,
where $s \ge n$. It can be checked that $C(G_s) \cap F \langle X_n
\rangle = C_n$ if $s \ge n$. By Proposition \ref{C2k_0}, the
$T$-subspace $C_{2k} (G_s)$ in $F \langle X_{2k} \rangle$ is
not finitely generated provided that $s \ge 2k$.
However, the following problem remains open.
\begin{problem}
Does there exist a finite dimensional algebra $A$ over an infinite
field $F$ of characteristic $p >0$ such that the $T$-subspace
$C(A)$ of all central polynomials of $A$ in $F \langle X \rangle$
is not finitely generated?
\end{problem}
Note that a similar problem for the $T$-ideal $T(A)$ of all
polynomial identities of a finite
dimensional algebra $A$ over an infinite field $F$ of characteristic
$p >0$ remains open as well; it is one of the most interesting and
long-standing open problems in the area.
\section{Preliminaries}
Let $\langle S \rangle^{TS}$ denote the $T$-subspace generated by
a set $S \subseteq F \langle X \rangle$. Then $\langle S
\rangle^{TS}$ is the span of all polynomials $f(g_1,\ldots, g_n)$,
where $f\in S$ and $g_i\in F \langle X \rangle$ for all $i$. It is
clear that for any polynomials $f_1,$ \dots, $f_s \in F \langle X
\rangle$ we have $\langle f_1, \dots, f_s \rangle^{TS}= \langle
f_1 \rangle^{TS}+\dots+\langle f_s \rangle^{TS}.$
Recall that a polynomial $f(x_1, \ldots , x_n) \in F \langle X
\rangle$ is called a \textit{polynomial identity} in an algebra
$A$ over $F$ if $f (a_1, \ldots , a_n) =0$ for all $a_1, \ldots ,
a_n \in A$. For a given algebra $A$, its polynomial identities
form a $T$-ideal $T(A)$ in $F \langle X \rangle$ and for every
$T$-ideal $I$ in $F \langle X \rangle$ there is an algebra $A$
such that $I = T(A)$, that is, $I$ is the ideal of all polynomial
identities satisfied in $A.$ Note that a polynomial
$f=f(x_1,\ldots,x_n)$ is central for an algebra $A$ if and only if
$[f,x_{n+1}]$ is a polynomial identity of $A.$
Let $f = f(x_1, \ldots , x_n) \in F \langle X \rangle$. Then $f =
\sum_{0 \le i_1, \ldots , i_n} f_{i_1 \ldots i_n},$ where each
polynomial $f_{i_1 \ldots i_n}$ is multihomogeneous of degree
$i_s$ in $x_s$ $(s = 1, \ldots , n).$ We refer to the polynomials
$f_{i_1 \ldots i_n}$ as the \textit{multihomogeneous
components} of the polynomial $f.$ Note that if $F$ is an infinite
field, $V$ is a $T$-ideal in $F \langle X \rangle$ and $f \in V$
then $f_{i_1 \ldots i_n} \in V$ for all $i_1, \ldots , i_n$ (see,
for instance, \cite{Baht, drbook, gz, rowenbook}). Similarly, if
$V$ is a $T$-subspace in $F \langle X \rangle$ and $f \in V$ then
all the multihomogeneous components $f_{i_1 \ldots i_n}$ of $f$
belong to $V$.
Over an infinite field $F$ the $T$-ideal $T(G)$ of the polynomial
identities of the infinite dimensional unitary Grassmann algebra
$G$ coincides with $T^{(3)}$. This was proved by Krakowski and
Regev \cite{krreg} if $F$ is of characteristic $0$ (see also
\cite{lat}) and by several authors in the general case, see for
example \cite{gk}.
It is well known (see, for example, \cite{krreg,
lat}) that over any field $F$ we have
\begin{eqnarray} \label{prop}
&&[g_1,g_2][g_1,g_3]+T^{(3)}=T^{(3)}; \nonumber\\
&&[g_1,g_2][g_3,g_4]+T^{(3)}=-[g_3,g_2][g_1,g_4]+T^{(3)}; \\
&&[g_1^m,g_2]+T^{(3)}=m g_1^{m-1}[g_1,g_2]+T^{(3)} \nonumber
\end{eqnarray}
for all $g_1,g_2,g_3,g_4 \in F \langle X \rangle$. Also it is well known (see, for
instance, \cite{BKKS, grshchi}) that a basis of the vector space $F\langle X
\rangle/T^{(3)}$ over $F$ is formed by the elements of the form
\begin{eqnarray} \label{basa}
x_{i_1}^{m_1} \cdots x_{i_d}^{m_d} [x_{j_1},x_{j_2}] \cdots
[x_{j_{2s-1}},x_{j_{2s}}]+T^{(3)},
\end{eqnarray}
where $d,s \ge 0$, $i_1< \dots <i_d,$ $j_1<\dots<j_{2s}.$
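To illustrate the last congruence in (\ref{prop}), note for instance that for $m=2$ one has
\[
[g_1^2,g_2] = g_1[g_1,g_2] + [g_1,g_2]g_1 = 2g_1[g_1,g_2] + [[g_1,g_2],g_1] \equiv 2g_1[g_1,g_2] \pmod{T^{(3)}},
\]
since $[[g_1,g_2],g_1] \in T^{(3)}$; the general case follows by induction on $m$.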
Define $T_n^{(3)} = T^{(3)} \cap F \langle X_n \rangle.$ We claim
that if $n < 2i$ then
\begin{equation} \label{T3iTn3}
T^{(3,i)} \cap F \langle X_n \rangle = T_n^{(3)}.
\end{equation}
Indeed, a basis of the vector space $( F
\langle X_n \rangle + T^{(3)})/T^{(3)}$ is formed by the elements
of the form (\ref{basa}) such that $1 \le i_1 < \ldots < i_d \le
n,$ $1 \le j_1< \ldots < j_{2s} \le n.$ In particular, we have $2s
\le n.$ On the other hand, it can be easily checked that
$T^{(3,i)}/T^{(3)}$ is contained in the linear span of the
elements of the form (\ref{basa}) such that $s \ge i$. Since $n <
2i$, we have
\[
((F \langle X_n \rangle + T^{(3)})/T^{(3)}) \cap (T^{(3,i)}/T^{(3)}) = \{ 0 \} ,
\]
that is, $T^{(3,i)} \cap F \langle X_n \rangle \subseteq
T^{(3)}$. It follows immediately that $T^{(3,i)} \cap F \langle
X_n \rangle \subseteq T_n^{(3)}$. Since $T_n^{(3)}
\subseteq T^{(3,i)} \cap F \langle X_n \rangle$ for all $i$,
we have $T^{(3,i)} \cap F \langle X_n \rangle = T_n^{(3)}$
if $n < 2i$, as claimed.
Let $F$ be a field of characteristic $p>2.$ It is well known (see,
for example, \cite{regevca, BOR_cp, BKKS, GrishCyb}) that, for each
$g, g_1, \ldots , g_n \in F \langle X \rangle$, we have
\begin{eqnarray} \label{prop2}
&& g^p + T^{(3)} \mbox{ is central in } F \langle X \rangle /
T^{(3)}; \nonumber \\
&&(g_1 \cdots g_n)^p+T^{(3)} = g_1^p \cdots g_n^p+T^{(3)};\\
&&(g_1 +\dots + g_n)^p+T^{(3)}= g_1^p +\dots + g_n^p+T^{(3)}.
\nonumber
\end{eqnarray}
Let $F$ be an infinite field of characteristic $p>2$. Let
$Q^{(k,l)}$ be the $T$-subspace in $F\langle X \rangle$ generated
by $q_{k}^{(l)}$ $(l \ge 0)$, $Q^{(k,l)}=\langle q_{k}^{(l)}(x_1,
\dots, x_{2k}) \rangle^{TS}$. Note that the multihomogeneous
component of the polynomial
\begin{eqnarray*}
&&q_k^{(l)}(1+x_1, \ldots , 1+x_{2k}) \\
&&= (1 + x_1)^{p^l-1}[x_1,x_2](1 + x_2)^{p^l-1} \ldots (1 +
x_{2k-1})^{p^l-1}[x_{2k-1},x_{2k}](1 + x_{2k})^{p^l-1}
\end{eqnarray*}
of degree $p^{l-1}$ in all the variables $x_1, \ldots , x_{2k}$ is
equal to
\[
\gamma \, q_k^{(l-1)}(x_1, \ldots , x_{2k}) = \gamma \,
x_1^{p^{l-1}-1}[x_1,x_2]x_2^{p^{l-1}-1} \ldots
x_{2k-1}^{p^{l-1}-1}[x_{2k-1},x_{2k}]x_{2k}^{p^{l-1}-1},
\]
where $\gamma = {p^{l}-1 \choose p^{l-1}-1}^{2k} \equiv 1
\pmod{p}$ by Lucas' theorem. It follows that $q_k^{(l-1)} \in Q^{(k,l)}$ for all $l
>0$ so $Q^{(k,l-1)} \subseteq Q^{(k,l)}$. Hence, for each $l >0$ we have
\begin{equation}\label{sum_q}
\sum \limits_{i=0}^{l} Q^{(k,i)} = Q^{(k,l)}.
\end{equation}
The following lemma is a reformulation of a result of Grishin and
Tsybulya \cite[Theorem 1.3, item 1)]{GrishCyb}.
\begin{lemma}\label{lemma_GTs}
Let $F$ be an infinite field of characteristic $p>2$. Let $k \ge
1$, $a_i \ge 1$ for all $i = 1, 2 \ldots , 2k$ and let
\[
m = x_1^{a_1-1}x_2^{a_2-1} \ldots x_{2k}^{a_{2k}-1}[x_1,x_2]\ldots
[x_{2k-1},x_{2k}] \in F \langle X \rangle.
\]
Suppose that, for some $i_0$, $1 \le i_0 \le 2k$, we have $a_{i_0}
= p^l b$, where $l \ge 0$ and $b$ is coprime to $p$. Suppose also
that, for each $i$, $1 \le i \le 2k$, we have $a_i \equiv 0
\pmod{p^l}$. Then
\[
\langle m \rangle^{TS} + T^{(3)} = Q^{(k,l)} + T^{(3)}.
\]
\end{lemma}
\section{Proof of Propositions \ref{generators_of_C_n} and \ref{generators_of_C_n/T3n}}
In the rest of the paper, $F$ will denote an infinite field of
characteristic $p>2$.
\subsection*{Proof of Proposition \ref{generators_of_C_n/T3n}}
Let $U$ be the $T$-subspace of $F \langle X_n \rangle$ defined as
follows:
\begin{itemize}
\item[i)] $T_n^{(3)} \subset U$;
\item[ii)] the $T$-subspace $U/T_n^{(3)}$ of $F \langle X_n
\rangle/T_n^{(3)}$ is generated by the polynomials
(\ref{gen_C_n/T3n_1}) and (\ref{gen_C_n/T3n_2}) if $n=2k$ and by
the polynomials (\ref{gen_C_n/T3n_3}) if $n=2k+1$.
\end{itemize}
To prove the proposition we have to show that
$C_n/T_n^{(3)}=U/T_n^{(3)}$ (equivalently, $C_n = U$). It can be
easily seen that $U/T_n^{(3)} \subseteq C_n/T_n^{(3)}$. Thus, it
remains to prove that $C_n/T_n^{(3)} \subseteq U/T_n^{(3)}$
(equivalently, $C_n \subseteq U$).
Let $h$ be an arbitrary element of $C_n$. We are going to check
that $h+T_n^{(3)} \in U/T_n^{(3)}$.
Since $h \in C(G)$, it follows from The\-o\-rem
\ref{generators_of_C} that
\[
h=\sum_{j} \alpha_{j} \ v_j^{p} + \sum_{i, j } \alpha_{i j} \
w_{i j}^{p} \ q_i(f_1^{(i j)}, \dots, f_{2i}^{(i j)})+ h',
\]
where $v_j, w_{i j}, f_s^{(i j)} \in F \langle X \rangle$,
$\alpha_j, \alpha_{i j} \in F$, $h' \in T^{(3)}$. Note that $h \in
F \langle X_n \rangle$ so we may assume that $v_j, w_{i j},
f_s^{(i j)}, h' \in F \langle X_n \rangle$ for all $i,j,s$. It
follows that
\[
h + T_n^{(3)} =\sum_{j} \alpha_{j} \ v_j^{p} + \sum_{i, j }
\alpha_{i j} \ w_{i j}^{p} \ q_i(f_1^{(i j)}, \dots, f_{2i}^{(i
j)})+ T_n^{(3)}.
\]
Recall that $T^{(3,i)}$ is the $T$-ideal in $F \langle X \rangle$
generated by the polynomials $[x_1,x_2,x_3]$ and $[x_1,x_2] \dots
[x_{2i-1},x_{2i}]$. By (\ref{T3iTn3}), we have $T^{(3,i)} \cap F
\langle X_n \rangle = T^{(3)}_{n}$ for each $i$ such that
$2i>n$. Since, for each $i, j$,
\[
w_{i j}^p \ q_i(f_1^{(i j)}, \dots, f_{2i}^{(i j)}) \in T^{(3,i)},
\]
we have
\[
\sum_{i > \frac{n}{2}} \sum_{ j } \alpha_{i j} \ w_{i j}^{p}
\ q_i(f_1^{(i j)}, \dots, f_{2i}^{(i j)}) \in T^{(3,i)} \cap F
\langle X_n \rangle = T_n^{(3)}.
\]
It follows that
\[
h + T_n^{(3)} =\sum_{j} \alpha_{j} \ v_j^{p} + \sum_{i \le
\frac{n}{2}} \sum_{ j } \alpha_{i j} \ w_{i j}^{p} \ q_i(f_1^{(i
j)}, \dots, f_{2i}^{(i j)})+ T_n^{(3)}.
\]
If $n=2k+1$ ($k \ge 1$) then we have
\[
h + T^{(3)}_{n} = \sum_{j} \alpha_{j} v_{j}^{p} + \sum_{i=1}^k
\sum_j \alpha_{i j} \ w_{i j}^{p} \ q_i(f_1^{(i j)}, \dots,
f_{2i}^{(i j)}) + T^{(3)}_{n}
\]
so $h + T^{(3)}_{n} \in U/T_n^{(3)}$, as required.
If $n=2k$ ($k \ge 1$) then we have
\[
h + T^{(3)}_{n} = h_1 + h_2 + T^{(3)}_{n},
\]
where
\[
h_1 = \sum_{j} \alpha_{j} v_j^{p} + \sum_{i=1}^{k-1} \sum_j
\alpha_{i j} \ w_{i j}^{p} \ q_i(f_1^{(i j)},\dots,f_{2i}^{(i
j)})
\]
and
\[
h_2 = \sum_j \alpha_{k j} \ w_{k j}^{p} \ q_k(f_1^{(k j)},\dots,
f_{2k}^{(k j)}).
\]
It is clear that $h_1 + T^{(3)}_{n}$ belongs to the $T$-subspace
generated by the polynomials (\ref{gen_C_n/T3n_1}); hence, $h_1 +
T^{(3)}_{n} \in U/T^{(3)}_{n}$. On the other hand, it can be
easily seen that $h_2 + T^{(3)}_{n}$ is a linear combination of
polynomials of the form $m + T^{(3)}_{n}$, where
\[
m = x_1^{b_1} \cdots x_{2k}^{b_{2k}}[x_1,x_2] \cdots [x_{2k-1},x_{2k}].
\]
We claim that, for each $m$ of this form, the polynomial $m + T_{2k}^{(3)}$ belongs to $U/T_{2k}^{(3)}$.
Indeed, by Lemma \ref{lemma_GTs}, we have $\langle m \rangle^{TS} + T^{(3)} = \langle q_k^{(l)} \rangle^{TS} + T^{(3)}$ for some $l \ge 0$. Since both $m$ and $q_k^{(l)}$ are polynomials in $x_1, \dots , x_{2k}$, this equality implies that $m + T_{2k}^{(3)}$ belongs to the $T$-subspace of $F \langle X_{2k} \rangle / T_{2k}^{(3)}$ that is generated by $q_k^{(l)} + T_{2k}^{(3)}$ for some $l \ge 0$. If $l \ge 1$ then $m + T_{2k}^{(3)}\in U/T_{2k}^{(3)}$ because, for $l \ge 1$, $q_k^{(l)} + T_{2k}^{(3)}$ is a polynomial of the form (\ref{gen_C_n/T3n_2}). If $l = 0$ then $m + T_{2k}^{(3)}$ belongs to the $T$-subspace of $F \langle X_{2k} \rangle / T_{2k}^{(3)}$ generated by $q_k^{(1)} + T_{2k}^{(3)}$. Indeed, in this case $m + T_{2k}^{(3)}$ belongs to the $T$-subspace generated by $q_k^{(0)} + T_{2k}^{(3)}$ and the latter $T$-subspace is contained in the $T$-subspace generated by $q_k^{(1)} + T_{2k}^{(3)}$ because $q_k^{(0)}$ is equal to the multilinear component of $q_k^{(1)}(1+x_1, \dots , 1+x_{2k})$. It follows that, again, $m + T_{2k}^{(3)}\in U/T_{2k}^{(3)}$. This proves our claim.
It follows that $h_2 + T^{(3)}_{n} \in U/T^{(3)}_{n}$ and,
therefore, $h + T^{(3)}_{n} \in U/T^{(3)}_{n}$, as required.
Thus, $C_n \subseteq U$ for each $n$. This completes the proof of
Proposition \ref{generators_of_C_n/T3n}.
\subsection*{Proof of Proposition \ref{generators_of_C_n}}
It is clear that the polynomial $x_1 [x_2,x_3,x_4] x_5$ generates
$T^{(3)}$ as a $T$-subspace in $F \langle X \rangle$. Since
$g_1 [g_2,g_3,g_4] g_5=g_1 [g_2,g_3,g_4, g_5] + g_1 g_5
[g_2,g_3,g_4]$
for all
$g_i \in F \langle X \rangle,$
the polynomial $x_1 [x_2,x_3,x_4]$ generates $T^{(3)}$ as a
$T$-subspace in $F \langle X \rangle$ as well. It follows that
$x_1 [x_2,x_3,x_4]$ generates $T_n^{(3)}$ as a $T$-subspace in $F
\langle X_n \rangle$ for each $n \ge 4$. Proposition
\ref{generators_of_C_n} follows immediately from Proposition
\ref{generators_of_C_n/T3n} and the observation above.
\section{Proof of Theorem \ref{theorem_main2} }
If $n = 2k +1$, $k > 1$, then Theorem \ref{theorem_main2} follows
immediately from Proposition \ref{generators_of_C_n}.
Suppose that $n = 2k$, $k \ge 1$. Then Theorem \ref{theorem_main2}
is an immediate consequence of the following two propositions.
\begin{proposition} \label{C2k_0}
For all $k \ge 1$, $C_{2k}$ is not finitely generated as a
$T$-subspace in $F \langle X_{2k} \rangle$.
\end{proposition}
\begin{proposition}\label{C_2k_limit}
Let $k \ge 1$ and let $W$ be a $T$-subspace of $F \langle X_{2k}
\rangle$ such that $C_{2k} \subsetneqq W$. Then $W$ is a finitely
generated $T$-subspace in $F \langle X_{2k} \rangle$.
\end{proposition}
\subsection*{Proof of Proposition \ref{C2k_0}}
The proof is based on a result of Grishin and Tsybulya
\cite[Theorem 3.1]{GrishCyb}.
By Proposition \ref{generators_of_C_n/T3n}, $C_{2k}$ is generated
as a $T$-subspace in $F \langle X_{2k} \rangle$ by $T_{2k}^{(3)}$
together with the polynomials
\begin{equation}\label{gen_C2k}
x_1^p,\ x_1^p q_1 (x_2,x_3), \ \ldots , \ x_1^p q_{k-1}(x_2,
\ldots , x_{2k-1})
\end{equation}
and
\[
\{ q_k^{(l)} (x_1, \ldots , x_{2k}) \mid l = 1, 2, \ldots \} .
\]
Let $V_l$ be the $T$-subspace of $F \langle X_{2k} \rangle $
generated by $T_{2k}^{(3)}$ together with the polynomials
(\ref{gen_C2k}) and the polynomials $ \{ q_k^{(i)}(x_1, \ldots ,
x_{2k}) \mid i \le l \} $. Then we have
\begin{equation}\label{union_wl}
C_{2k} = \bigcup_{l \ge 1} V_l.
\end{equation}
Also, it is clear that $V_1 \subseteq V_2 \subseteq \ldots .$
Let $U^{(k-1)}$ be the $T$-subspace in $F \langle X \rangle$
generated by the polynomials (\ref{gen_C2k}). The following
proposition is a particular case of \cite[Theorem 3.1]{GrishCyb}.
\begin{proposition}[\cite{GrishCyb}]\label{G-Ts}
For each $l \ge 1$,
\[
(Q^{(k,l+1)}+T^{(3)}) /T^{(3)} \not \subseteq (U^{(k-1)}+
Q^{(k,l)} + T^{(3,k+1)})/T^{(3)}.
\]
\end{proposition}
\noindent \textbf{Remark. } The $T$-subspaces $(U^{(k-1)} +
T^{(3)}) / T^{(3)}$, $(Q^{(k,l)}+T^{(3)})/T^{(3)}$ and
$T^{(3,k+1)}/T^{(3)}$ are denoted in \cite{GrishCyb} by
$\sum_{i<k} CD_p^{(i)}$, $C_{p^l}^{(k)}$ and $C^{(k+1)}$,
respectively.
\medskip
Since the $T$-subspace $Q^{(k,l+1)}$ is generated by the
polynomial $q_k^{(l+1)} $ and $T^{(3)} \subset T^{(3,k+1)}$,
Proposition \ref{G-Ts} immediately implies that
\[
q_k^{(l+1)} \notin U^{(k-1)} + Q^{(k,l)} + T^{(3,k+1)}.
\]
Further, since $T_{2k}^{(3)} \subset T^{(3)} \subset T^{(3,k+1)}$, we have
\[
V_l \subset U^{(k-1)} + \sum_{i \le l} Q^{(k,i)} + T^{(3,k+1)} =
U^{(k-1)} + Q^{(k,l)} + T^{(3,k+1)}
\]
(recall that, by (\ref{sum_q}), $\sum_{i \le l} Q^{(k,i)} =
Q^{(k,l)}$). It follows that $q_k^{(l+1)} \notin V_l$ for all $l
\ge 1$; on the other hand, $q_k^{(l+1)} \in V_{l+1}$ by the
definition of $V_{l+1}$. Hence,
\begin{equation}\label{strictly_ascending_wl}
V_1 \subsetneqq V_2 \subsetneqq \ldots .
\end{equation}
It follows immediately from (\ref{union_wl}) and
(\ref{strictly_ascending_wl}) that $C_{2k}$ is not finitely
generated as a $T$-subspace in $F \langle X_{2k} \rangle $. The
proof of Proposition \ref{C2k_0} is completed.
\subsection*{Proof of Proposition \ref{C_2k_limit}}
For all integers $i_1, \ldots , i_t$ such that $1 \leq i_1<\ldots <i_t \leq n$ and all integers $a_1, \ldots , a_n \ge 0$ such that $a_{i_1}, \ldots , a_{i_t} \ge 1$, define $\frac{x_1^{a_1} x_2^{a_2} \ldots x_n^{a_n}}{x_{i_1} x_{i_2} \ldots x_{i_t}}$ to be the monomial
\[
\frac{x_1^{a_1} x_2^{a_2} \ldots x_n^{a_n}}{x_{i_1}x_{i_2}\ldots x_{i_t}}= x_1^{b_1}x_2^{b_2}\ldots x_n^{b_n} \in F \langle X \rangle,
\]
where $b_j=a_j-1$ if $j\in \{i_1,i_2,\ldots,i_t\}$ and $b_j=a_j$ otherwise.
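For instance, with this convention,
\[
\frac{x_1^{3}x_2^{2}x_3}{x_1x_3}=x_1^{2}x_2^{2} \qquad \mbox{and} \qquad \frac{x_1x_2^{p}x_3}{x_1x_3}=x_2^{p}.
\]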
\begin{lemma} \label{WC(G)}
Let $f(x_1,\dots,x_n) \in F \langle X \rangle$ be a multihomogeneous polynomial of the form
\begin{equation}\label{f_modulo_T3}
f=\alpha \, x_1^{a_1} \ldots x_n^{a_n}+\sum \limits_{1 \leq i_1<\ldots <i_{2t} \leq n }
\alpha_{(i_1,\ldots,i_{2t})}\frac{x_1^{a_1}\ldots x_n^{a_n}}{x_{i_1}\ldots x_{i_{2t}}}[x_{i_1},x_{i_2}]\ldots
[x_{i_{2t-1}},x_{i_{2t}}]
\end{equation}
where $\alpha, \alpha_{(i_1,\ldots,i_{2t})}\in F$. Let $L = \langle f \rangle^{TS} + \langle [x_1,x_2] \rangle ^{TS} +T^{(3)}$.
Suppose that $a_i = 1$ for some $i$, $1 \le i \le n.$ Then either $L = F \langle X \rangle$ or $L = \langle [x_{1},x_2] \rangle^{TS} + T^{(3)}$ or $L = \langle x_1 [x_2,x_3] \ldots [x_{2 \theta},x_{2 \theta + 1}] \rangle^{TS} + \langle [x_{1},x_2] \rangle^{TS} + T^{(3)}$ for some $\theta \leq \frac{n-1}{2}$.
\end{lemma}
\begin{proof}
Note that each multihomogeneous polynomial $f(x_1,\dots,x_n) \in F \langle X \rangle$ can be written, modulo $T^{(3)}$, in the form (\ref{f_modulo_T3}). Hence, we can assume without loss of generality (permuting the free generators $x_1, \ldots , x_n$ if necessary) that $a_1=1$.
Note that if $\alpha \neq 0$, then $f(x_1,1,\ldots,1)=\alpha x_1 \in L$ so $ L = \langle x_1 \rangle^{TS} = F \langle X \rangle$. Suppose that $\alpha =0$.
We claim that we may assume without loss of generality that $f$ is of the form $f(x_1, \ldots, x_n) = x_1 g(x_2, \ldots, x_n),$ where
\begin{equation}\label{form_of_g}
g=\sum \limits_{\mathop{2 \le i_1<\ldots <i_{2t}\le n}\limits_{t \ge 1}}
\alpha_{(i_1,\ldots,i_{2t})}\frac{x_2^{a_2}\ldots x_n^{a_n}}{x_{i_1}\ldots x_{i_{2t}}}[x_{i_1},x_{i_2}]\ldots [x_{i_{2t-1}},x_{i_{2t}}].
\end{equation}
Indeed, consider a term $m = \frac{x_1^{a_1}\ldots
x_n^{a_n}}{x_{i_1}\ldots x_{i_{2t}}}[x_{i_1},x_{i_2}]\ldots
[x_{i_{2t-1}},x_{i_{2t}}]$ in (\ref{f_modulo_T3}). If $i_1 >1$
then
\begin{equation}\label{m'}
m = x_1 \, \frac{x_2^{a_2}\ldots x_n^{a_n}}{x_{i_1}\ldots x_{i_{2t}}}[x_{i_1},x_{i_2}]\ldots [x_{i_{2t-1}},x_{i_{2t}}].
\end{equation}
Suppose that $i_1 =1$; then $m = m'[x_1,x_{i_2}]\ldots [x_{i_{2t-1}},x_{i_{2t}}]$, where $m' = \frac{x_2^{a_2}\ldots x_n^{a_n}}{x_{i_2}\ldots x_{i_{2t}}}$. We have
\begin{eqnarray*}
&&m + T^{(3)} = m'[x_1,x_{i_2}]\ldots [x_{i_{2t-1}},x_{i_{2t}}]+T^{(3)} \\
&& = [m' x_1,x_{i_2}]\ldots [x_{i_{2t-1}},x_{i_{2t}}]-x_1
[m',x_{i_2}]\ldots [x_{i_{2t-1}},x_{i_{2t}}]+T^{(3)} \\
&& = [m' x_1 [x_{i_3},x_{i_4}]\ldots [x_{i_{2t-1}},x_{i_{2t}}],x_{i_2}] -x_1
[m',x_{i_2}]\ldots [x_{i_{2t-1}},x_{i_{2t}}]+T^{(3)}.
\end{eqnarray*}
Hence,
\begin{equation}\label{m''}
m = - x_1 [m',x_{i_2}]\ldots [x_{i_{2t-1}},x_{i_{2t}}] + h,
\end{equation}
where $h \in \langle [x_1,x_2] \rangle^{TS} + T^{(3)}$.
It follows easily from (\ref{m'}) and (\ref{m''}) that there
exists a multihomogeneous polynomial $g_1 = g_1(x_2, \ldots, x_n)
\in F \langle X \rangle$ such that $f = x_1 g_1 + h_1$, where $h_1
\in \langle [x_1,x_2] \rangle^{TS} + T^{(3)}$. Further, there is a
multihomogeneous polynomial $g$ of the form (\ref{form_of_g}) such
that $g \equiv g_1 \pmod{T^{(3)}}$; then $f = x_1 g + h_2$, where
$h_2 \in \langle [x_1,x_2] \rangle^{TS} + T^{(3)}$. It follows
that $L = \langle x_1g(x_2, \ldots , x_n) \rangle^{TS} + \langle
[x_1,x_2] \rangle^{TS} + T^{(3)}$. Thus, we can assume without
loss of generality that $f = x_1 g(x_2, \ldots , x_n)$, where $g$
is of the form (\ref{form_of_g}), as claimed.
If $f = 0$ then $L = \langle [x_{1},x_2] \rangle^{TS} + T^{(3)}$.
Suppose that $f \ne 0$. Let $\theta = \mbox{ min }\{ t \mid
\alpha_{(i_1,\ldots,i_{2t})} \neq 0 \}$. It is clear that $2
\theta + 1 \le n$ so $\theta \le \frac{n-1}{2}$. We can assume
that $\alpha_{(2,\ldots,2\theta+1)}\neq 0$; then
\begin{eqnarray}\label{f_final_form}
f &=& x_1 \Big( \alpha_{(2, \ldots , 2 \theta +1)}
\frac{x_2^{a_2} \ldots x_n^{a_n}}{ x_2 \ldots x_{2 \theta
+1}}[x_{2},x_{3}]\ldots
[x_{2 \theta},x_{2 \theta +1}] \nonumber\\
&+& \sum \limits_{\mathop{2 \leq i_1<\ldots <i_{2t} \leq n }
\limits_{t \ge \theta, \ i_{2t} > 2 \theta +1}}
\alpha_{(i_1,\ldots,i_{2t})} \frac{x_2^{a_2} \ldots
x_n^{a_n}}{x_{i_1}\ldots x_{i_{2t}}}[x_{i_1},x_{i_2}]\ldots
[x_{i_{2t-1}},x_{i_{2t}}] \Big).
\end{eqnarray}
Let $f_1(x_1, \ldots , x_{2 \theta + 1}) = f(x_1,x_2, \ldots,
x_{2\theta+1},1,\ldots,1) \in L$; then
\[
f_1= \alpha_{(2, \ldots , 2 \theta +1)} \ x_1 \frac{x_2^{a_2}
\ldots x_n^{a_n}}{ x_2 \ldots x_{2 \theta +1}}[x_{2},x_{3}]\ldots
[x_{2 \theta},x_{2 \theta +1}].
\]
It can be easily seen that the multihomogeneous component of
degree 1 in the variables $x_1,x_2,\ldots, x_{2\theta+1}$ of the
polynomial $f_1(x_1,x_2+1, \ldots, x_{2\theta+1}+1)$ is equal to
\[
\alpha_{(2, \ldots , 2 \theta +1)} \, x_1[x_2,x_3]\ldots
[x_{2\theta}, x_{2\theta+1}].
\]
It follows that $x_1[x_2,x_3]\ldots [x_{2\theta}, x_{2\theta+1}]
\in L$; hence,
\[
\langle x_1 [x_2,x_3] \ldots [x_{2 \theta},x_{2 \theta + 1}]
\rangle^{TS} + \langle [x_{1},x_2] \rangle^{TS} + T^{(3)}
\subseteq L.
\]
On the other hand, it is clear that the polynomial $f$ of the form
(\ref{f_final_form}) belongs to the $T$-subspace of $F \langle X
\rangle$ generated by $x_1[x_2,x_3]\ldots [x_{2\theta},
x_{2\theta+1}]$; it follows that $\langle f \rangle^{TS} \subseteq
\langle x_1 [x_2,x_3] \ldots [x_{2 \theta},x_{2 \theta + 1}]
\rangle^{TS}$ and, therefore,
\[
L \subseteq \langle x_1 [x_2,x_3] \ldots [x_{2 \theta},x_{2 \theta
+ 1}] \rangle^{TS} + \langle [x_{1},x_2] \rangle^{TS} + T^{(3)}.
\]
Thus, $L = \langle x_1 [x_2,x_3] \ldots [x_{2 \theta},x_{2 \theta
+ 1}] \rangle^{TS} + \langle [x_{1},x_2] \rangle^{TS} + T^{(3)}$.
The proof of Lemma \ref{WC(G)} is completed.
\end{proof}
\begin{proposition}
\label{C_2k/T32k_limit} Let $W$ be a $T$-subspace of $F \langle X_{2k} \rangle$ such that $C_{2k} \subsetneqq W$. Then $W=F \langle X_{2k} \rangle$ or $W$ is generated as a $T$-subspace by the polynomials
\[
x_1^p, \ x_1^pq_1(x_2,x_3), \ldots, \ x_1^pq_{\lambda -1}(x_2,\ldots,x_{2\lambda-1}),
\]
\[
x_1[x_2,x_3,x_4], \ x_1[x_2,x_3]\ldots [x_{2\lambda},x_{2\lambda+1}],
\]
for some positive integer $\lambda \leq k-1$.
\end{proposition}
\begin{proof}
It is well-known that over a field $F$ of characteristic $0$ each $T$-ideal in $F \langle X \rangle$ can be generated by its multilinear polynomials. It is easy to check that the same is true for each $T$-subspace in $F \langle X \rangle$. Over an infinite field $F$ of characteristic $p>0$ each $T$-ideal in $F \langle X \rangle$ can be generated by all its multihomogeneous polynomials $f(x_1, \ldots , x_n)$ such that, for each $i$, $ 1 \le i \le n$, $\mbox{deg}_{x_i} f = p^{s_i}$ for some integer $s_i$ (see, for instance, \cite{Baht}). Again, the same is true for each $T$-subspace in $F \langle X \rangle$.
Let $f(x_1,\ldots,x_{2k})\in W \setminus C_{2k}$ be an arbitrary multihomogeneous polynomial such that, for each $i$ ($1\leq i \leq 2k$), we have either $\mbox{deg}_{x_i} f=p^{s_i}$ or $\mbox{deg}_{x_i} f=0$. We may assume that $\mbox{deg}_{x_i} f=p^{s_i}$ for $i = 1, \ldots , l$ and $\mbox{deg}_{x_i} f=0$ for $i = l+1, \ldots , 2k$ (that is, $f = f(x_1, \ldots , x_l$)). Then we have
\[
f+T^{(3)}_{2k}=\alpha \, m+\sum \limits_{1 \le i_1<\ldots <i_{2t} \le l}
\alpha_{(i_1,\ldots,i_{2t})}\frac{m}{x_{i_1}\ldots x_{i_{2t}}}[x_{i_1},x_{i_2}]\ldots [x_{i_{2t-1}},x_{i_{2t}}]+T^{(3)}_{2k},
\]
where $\alpha, \alpha_{(i_1,\ldots,i_{2t})}\in F$, \ $m=x_1^{p^{s_1}} \ldots x_{l}^{p^{s_{l}}}.$
If $s_i > 0$ for all $i = 1, \ldots , l$ then it can be easily seen that $f \in C(G)$ so $f \in C_{2k}$, a contradiction with the choice of $f$. Thus, $s_i =0$ for some $i$, $1 \le i \le l$. Let $L_f$ be the $T$-subspace of $F \langle X \rangle$ generated by $f$, $[x_1,x_2]$ and $T^{(3)}$. By Lemma \ref{WC(G)}, we have either $L_f = F\langle X \rangle$ or
\[
L_f = \langle x_1 [x_2,x_3]\ldots [x_{2\theta},x_{2\theta+1}] \rangle^{TS} + \langle [x_1,x_2] \rangle^{TS} + T^{(3)}
\]
for some $\theta < k$ (since $f \notin C_{2k}$, we have $L_f \ne \langle [x_1,x_2] \rangle^{TS} + T^{(3)}$). Note that if $k=1$ (that is, $f=f(x_1,x_2)$) then the only possible case is $L_f = F\langle X \rangle$.
It is clear that if $L_f = F \langle X \rangle$ for some $f \in W \setminus C_{2k}$ then $x_1 \in W$ so $W = F \langle X_{2k} \rangle$. Suppose that $W \ne F \langle X_{2k} \rangle$; then $k>1$ and $L_f \ne F \langle X \rangle $ for all $f \in W \setminus C_{2k}$. For each $f \in W \setminus C_{2k}$ satisfying the conditions of Lemma \ref{WC(G)}, the $T$-subspace $L_f$ in $F \langle X \rangle$ can be generated, by Lemma \ref{WC(G)}, by the polynomials
\begin{equation}\label{polyn_generators}
[x_1,x_2], \quad x_1[x_2,x_3,x_4]\quad \mbox{and} \quad x_1 [x_2,x_3]\ldots [x_{2\theta},x_{2\theta+1}]
\end{equation}
for some $\theta= \theta_f < k$. Since the polynomials (\ref{polyn_generators}) belong to $F \langle X_{2k} \rangle$ (recall that $k>1$), the $T$-subspace in $F \langle X_{2k} \rangle$ generated by $f$, $[x_1,x_2]$ and $ T^{(3)}$ is also generated (as a $T$-subspace in $F \langle X_{2k} \rangle$) by the polynomials (\ref{polyn_generators}). Note that $[x_1,x_2]$ and $x_1[x_2, x_3, x_4]$ belong to $C_{2k}$ so the $T$-subspace $V_f$ in $F \langle X_{2k} \rangle$ generated by $f$ and $C_{2k}$ can be generated by $C_{2k}$ and $x_1 [x_2,x_3]\ldots [x_{2\theta},x_{2\theta+1}]$ for some $\theta= \theta_f < k$.
Let $\lambda = \mbox{ min } \{ \theta \mid x_1 [x_2,x_3]\ldots [x_{2\theta},x_{2\theta+1}] \in W \}$. Since $W$ is the sum of the $T$-subspaces $V_f$ for all suitable multihomogeneous polynomials $f \in W \setminus C_{2k}$ and each $V_f$ is generated by $C_{2k}$ and $x_1 [x_2,x_3]\ldots [x_{2\theta},x_{2\theta+1}]$ for some $\theta = \theta_f < k$, $W$ can be generated as a $T$-subspace in $F \langle X_{2k} \rangle$ by $C_{2k}$ and $x_1 [x_2,x_3]\ldots [x_{2\lambda},x_{2\lambda+1}]$. Now it follows easily from Proposition \ref{generators_of_C_n} that $W$ can be generated by the polynomials
\[
x_1^p, \ x_1^p q_1(x_2,x_3), \ldots, \ x_1^p q_{\lambda -1}(x_2,\ldots,x_{2\lambda-1})
\]
together with the polynomials
\[
x_1[x_2,x_3,x_4] \ \mbox{ and } \ x_1[x_2,x_3] \ldots [x_{2\lambda},x_{2\lambda+1}],
\]
where we note that $\lambda <k$.
This completes the proof of Proposition \ref{C_2k/T32k_limit}.
\end{proof}
Proposition \ref{C_2k_limit} follows immediately from Proposition
\ref{C_2k/T32k_limit}. The proof of Theorem \ref{theorem_main2} is
completed.
\section{Proof of Theorem \ref{theorem_main3} }
\begin{proposition} \label{rk}
For each $k \ge 1$, $R_k$ is not finitely generated as a
$T$-subspace in $F \langle X \rangle$.
\end{proposition}
\begin{proof}
Recall that $R_k$ is the $T$-subspace in $F \langle X \rangle$
generated by $C_{2k}$ and $T^{(3,k+1)}$. By Proposition
\ref{generators_of_C_n/T3n}, $C_{2k}$ is generated as a
$T$-subspace in $F \langle X_{2k} \rangle$ by $T_{2k}^{(3)}$
together with the polynomials (\ref{gen_C2k}) and the polynomials
$\{ q_k^{(l)} (x_1, \ldots ,x_{2k}) \mid l = 1, 2, \ldots \} .$
Since $T_{2k}^{(3)} \subset T^{(3)} \subset T^{(3,k+1)}$, we have
\[
R_k=U^{(k-1)}+ \sum_{l \ge 1} Q^{(k,l)}+T^{(3, k+1)},
\]
where $U^{(k-1)}$ and $Q^{(k,l)}$ are the $T$-subspaces in $F
\langle X \rangle$ generated by the polynomials (\ref{gen_C2k})
and by the polynomial $q_k^{(l)}(x_1, \ldots , x_{2k})$,
respectively.
Let $V_l = U^{(k-1)}+ \sum_{i \le l} Q^{(k,i)}+T^{(3, k+1)}$. Then
\begin{equation}\label{union_rk}
R_k = \bigcup_{l \ge 1} V_l
\end{equation}
and $V_1 \subseteq V_2 \subseteq \ldots .$ Recall that, by
(\ref{sum_q}), $\sum_{i \le l} Q^{(k,i)} = Q^{(k,l)}$ so $V_l =
U^{(k-1)}+ Q^{(k,l)}+T^{(3, k+1)}$. By Proposition \ref{G-Ts},
$Q^{(k,l+1)} \not \subseteq V_l$ for all $l \ge 1$ so
\begin{equation}\label{strictly-ascending_rk}
V_1 \subsetneqq V_2 \subsetneqq \ldots .
\end{equation}
The result follows immediately from (\ref{union_rk}) and
(\ref{strictly-ascending_rk}): indeed, if $R_k$ were finitely generated as a
$T$-subspace, then its (finitely many) generators would all belong to $V_l$ for
some $l \ge 1$, whence $R_k = V_l$, contradicting (\ref{strictly-ascending_rk}).
\end{proof}
\begin{lemma}\label{fTSRk}
Let $f = f(x_1, \ldots,x_{n}) \in F \langle X \rangle$ be a
multihomogeneous polynomial of the form
\begin{equation}\label{f_modulo_T3_2}
f=\alpha \, x_1^{p^{s_1}} \ldots x_n^{p^{s_n}} + \sum_{i_1<
\ldots <i_{2t}} \alpha_{(i_1,\ldots,i_{2t})} \,
\frac{x_1^{p^{s_1}} \ldots x_n^{p^{s_n}}}{x_{i_1}\ldots
x_{i_{2t}}}[x_{i_1},x_{i_2}]\ldots [x_{i_{2t-1}},x_{i_{2t}}],
\end{equation}
where $\alpha, \alpha_{(i_1,\ldots,i_{2t})}\in F$, $s_i \geq 0$
for all $i$. Let $L = \langle f \rangle^{TS} + R_k$, $k \ge 1$.
Then one of the following holds:
\begin{enumerate}
\item $L = F \langle X \rangle$;
\item $L = R_k$;
\item $L = \langle x_1[x_2,x_3] \ldots [x_{2 \theta},x_{2 \theta
+1}] \rangle^{TS} + R_k$ for some $\theta$, $1 \le \theta \le k$;
\item $L=\langle x_1^{p^s}q_k^{(s)}(x_2,\ldots,x_{2k+1}) \rangle
^{TS}+R_k$ for some $s \ge 1$.
\end{enumerate}
\end{lemma}
\begin{proof}
Note that each multihomogeneous polynomial $f(x_1,\dots,x_n) \in
F \langle X \rangle$ of degree $p^{s_i}$ in $x_i$ ($1 \le i \le
n$) can be written, modulo $T^{(3)}$, in the form
(\ref{f_modulo_T3_2}). Hence, we can assume without loss of
generality (permuting the free generators $x_1, \ldots , x_n$ if
necessary) that $s_1 \le s_i$ for all $i$. Write $s = s_1$.
Suppose that $s=0$. Then, by Lemma \ref{WC(G)}, we have either
\[
\langle f \rangle^{TS} + \langle [x_1,x_2] \rangle^{TS} + T^{(3)}
= F \langle X \rangle
\]
or
\[
\langle f \rangle^{TS} + \langle [x_1,x_2] \rangle^{TS} + T^{(3)}
= \langle [x_1,x_2] \rangle^{TS} + T^{(3)}
\]
or
\[
\langle f \rangle^{TS} + \langle [x_1,x_2] \rangle^{TS} + T^{(3)}
= \langle x_1[x_2,x_3] \ldots [x_{2 \theta},x_{2 \theta +1}]
\rangle^{TS} + \langle [x_1,x_2] \rangle^{TS} + T^{(3)}
\]
for some $\theta$. Since $\langle [x_1,x_2] \rangle^{TS} +
T^{(3)} \subset R_k$ and $x_1[x_2,x_3] \ldots [x_{2 \theta},x_{2
\theta +1}] \in R_k$ if $\theta > k$, we have either $L = F
\langle X \rangle$ or $L = R_k$ or
\[
L = \langle x_1[x_2,x_3] \ldots [x_{2 \theta},x_{2 \theta +1}]
\rangle^{TS} + R_k
\]
for some $\theta \le k$.
Now suppose that $s>0$; then $s_i >0$ for all $i$, $1 \leq i \leq
n$. It can be easily seen that, by (\ref{prop2}), $ x_1^{p^{s_1}} \ldots
x_n^{p^{s_n}} \in \left(\langle x_1^p \rangle^{TS} + T^{(3)}
\right) \subset R_k$ and, for all $t <k$,
\[
\frac{x_1^{p^{s_1}} \ldots x_n^{p^{s_n}}}{x_{i_1}\ldots
x_{i_{2t}}}[x_{i_1},x_{i_2}]\ldots [x_{i_{2t-1}},x_{i_{2t}}] \in
\left(\langle x_1^pq_t(x_2, \ldots , x_{2t+1})\rangle^{TS} +
T^{(3)}\right) \subset R_k.
\]
Also we have $\frac{x_1^{p^{s_1}} \ldots
x_n^{p^{s_n}}}{x_{i_1}\ldots x_{i_{2t}}}[x_{i_1},x_{i_2}]\ldots
[x_{i_{2t-1}},x_{i_{2t}}] \in T^{(3,k+1)} \subset R_k$ for each $t
> k$. It follows that we can assume without loss of generality
that the polynomial $f$ is of the form
\begin{equation}\label{form_f}
f = \sum \limits_{1 \le i_1<\ldots <i_{2k} \le n}
\alpha_{(i_1,\ldots,i_{2k})} \,\frac{x_1^{p^{s_1}} \ldots
x_n^{p^{s_n}}}{x_{i_1}\ldots x_{i_{2k}}}[x_{i_1},x_{i_2}]\ldots
[x_{i_{2k-1}},x_{i_{2k}}].
\end{equation}
Note that if $n < 2k$ then $f=0$ and if $n=2k$ then
\[
f = \alpha_{(1,2, \ldots , 2k)} \frac{x_1^{p^{s_1}} \ldots
x_{2k}^{p^{s_{2k}}}}{x_{1}x_2\ldots x_{2k}}[x_{1},x_{2}]\ldots
[x_{2k-1},x_{2k}]
\]
so, by Lemma \ref{lemma_GTs}, we have $f \in Q^{(k,s)} +
T^{(3)}$, where $s=s_1>0$. In both cases we have $f \in R_k$ and
$L=R_k$.
Suppose that $n > 2k$. We claim that we may assume that $f$ is of
the form
\begin{equation}\label{form_f2}
f(x_1, \ldots , x_n) = x_1^{p^s} g(x_2, \ldots , x_n),
\end{equation}
where
\[
g= \sum \limits_{2 \le i_1<\ldots <i_{2k} \le n}
\alpha_{(i_1,\ldots,i_{2k})} \, \frac{x_2^{p^{s_2}} \ldots
x_n^{p^{s_n}}}{x_{i_1}\ldots x_{i_{2k}}}[x_{i_1},x_{i_2}]\ldots
[x_{i_{2k-1}},x_{i_{2k}}].
\]
Indeed, consider a term $m = \frac{x_1^{p^{s_1}} \ldots
x_n^{p^{s_n}}}{x_{i_1}\ldots x_{i_{2k}}}[x_{i_1},x_{i_2}]\ldots
[x_{i_{2k-1}},x_{i_{2k}}]$ in (\ref{form_f}). If $i_1 >1$ then
\begin{equation}\label{m_x1_1}
m = x_1^{p^{s}} \frac{x_2^{p^{s_2}} \ldots
x_n^{p^{s_n}}}{x_{i_1}\ldots x_{i_{2k}}}[x_{i_1},x_{i_2}]\ldots
[x_{i_{2k-1}},x_{i_{2k}}].
\end{equation}
Suppose that $i_1 = 1$. Let $a_i = p^{s_i}$ for all $i$. Then
\[
m +T^{(3,k+1)} = x_1^{p^{s}-1} \frac{x_2^{p^{s_2}} \ldots
x_n^{p^{s_n}}}{x_{i_2}\ldots x_{i_{2k}}}[x_{1},x_{i_2}]\ldots
[x_{i_{2k-1}},x_{i_{2k}}] +T^{(3,k+1)}
\]
\[
= x_{j_1}^{a_{j_1}} \cdots x_{j_{l}}^{a_{j_{l}}} x_{1}^{a_{1}-1}
\cdots x_{i_{2k}}^{a_{i_{2k}}-1} [x_1,x_{i_{2}}] \cdots
[x_{i_{2k-1}},x_{i_{2k}}] + T^{(3,k+1)}
\]
\[
= x_1^{a_1-1}x_{j_1}^{a_{j_1}} \ldots
x_{j_l}^{a_{j_l}}[x_1,x_{i_2}]x_{i_2}^{a_{i_2}-1} m'+T^{(3,k+1)},
\]
where
\[
m'=x_{i_{3}}^{a_{i_{3}}-1} [x_{i_{3}},x_{i_{4}}]
x_{i_{4}}^{a_{i_{4}}-1} \ldots x_{i_{2k-1}}^{a_{i_{2k-1}}-1}
[x_{i_{2k-1}},x_{i_{2k}}] x_{i_{2k}}^{a_{i_{2k}}-1},
\]
$\{j_1,\ldots,j_l\}=\{1,\ldots,n\} \setminus
\{1,i_2,\ldots,i_{2k}\}$, $l=n-2k>0$. Suppose that
\[
a_1=a_{j_1}=a_{j_2}=\ldots=a_{j_z} \ \ \mbox{and} \ \
a_{j_{z+1}},a_{j_{z+2}},\ldots,a_{j_l}>a_1.
\]
Let
\[
u=x_1x_{j_1} \cdots x_{j_z}x_{j_{z+1}}^{a'_{j_{z+1}}}\cdots
x_{j_{l}}^{a'_{j_{l}}},
\]
where $a'_i = a_i/p^s$ for all $i$. Let
\[
h= h(x_1, \ldots ,x_{2k}) = x_1^{a_{1}-1}[x_1,x_2]x_2^{a_{i_2}-1} \ldots
x_{2k-1}^{a_{i_{2k-1}}-1}[x_{2k-1},x_{2k}] x_{2k}^{a_{i_{2k}}-1}.
\]
By (\ref{prop}), $h \in C(G)$; hence, $h \in C_{2k} \subset R_k$.
It follows that $h(u,x_{i_2}, \ldots , x_{i_{2k}}) \in R_k$, that
is,
\begin{equation}\label{uxi2}
u^{p^s-1}[u,x_{i_2}]x_{i_2}^{a_{i_2}-1} m'\in R_k.
\end{equation}
Since, by (\ref{prop2}), $[v_1^p,v_2] \in T^{(3)} \subset
T^{(3,k+1)}$ for all $v_1,v_2 \in F \langle X \rangle$, we have
\begin{eqnarray*}
&&u^{p^s-1}[u,x_{i_2}]x_{i_2}^{a_{i_2}-1} m' +T^{(3,k+1)}\\
&&= \left(x_1x_{j_1} \cdots
x_{j_z}\right)^{p^s-1}x_{j_{z+1}}^{a_{j_{z+1}}}\cdots
x_{j_l}^{a_{j_l}}[x_1x_{j_1} \ldots
x_{j_z},x_{i_2}]x_{i_2}^{a_{i_2}-1} m' +T^{(3,k+1)} \\
&& = \left(x_1 x_{j_1} \cdots
x_{j_z}\right)^{p^s-1} x_{j_{z+1}}^{a_{j_{z+1}}}\cdots
x_{j_l}^{a_{j_l}}[x_1,x_{i_2}]x_{j_1} \ldots x_{j_z}
x_{i_2}^{a_{i_2}-1} m' \\
&& + \left(x_1x_{j_1} \cdots
x_{j_z}\right)^{p^s-1}x_{j_{z+1}}^{a_{j_{z+1}}}\cdots
x_{j_l}^{a_{j_l}}x_1[x_{j_1} \ldots
x_{j_z},x_{i_2}]x_{i_2}^{a_{i_2}-1} m' + T^{(3,k+1)} \\
&&= m + x_1^{p^s} x_{j_1}^{p^s -1} \cdots x_{j_z}^{p^s-1} x_{j_{z+1}}^{a_{j_{z+1}}}\cdots x_{j_l}^{a_{j_l}} [x_{j_1} \ldots
x_{j_z},x_{i_2}]x_{i_2}^{a_{i_2}-1} m' +T^{(3,k+1)}
\end{eqnarray*}
where the second summand is not present if $z = 0$ (that is, if $a_{j_i} > a_1$ for all $i$), in which case $m \in R_k$. Since
\begin{eqnarray*}
&&x_1^{p^s} x_{j_1}^{p^s -1} \cdots x_{j_z}^{p^s-1} x_{j_{z+1}}^{a_{j_{z+1}}}\cdots x_{j_l}^{a_{j_l}} [x_{j_1} \ldots
x_{j_z},x_{i_2}]x_{i_2}^{a_{i_2}-1} m' +T^{(3,k+1)} \\
&& = x_1^{p^s} \sum \limits_{2 \le i_1<\ldots <i_{2k}}
\beta_{(i_1,\ldots,i_{2k})} \, \frac{x_2^{p^{s_2}}
\ldots x_n^{p^{s_n}}}{x_{i_1}\ldots
x_{i_{2k}}}[x_{i_1},x_{i_2}]\ldots [x_{i_{2k-1}},x_{i_{2k}}]
+T^{(3,k+1)}
\end{eqnarray*}
for some $\beta_{(i_1,\ldots,i_{2k})} \in F$, we have
\begin{equation}\label{mi1i2k_2}
m + x_1^{p^s} \sum \limits_{2 \le i_1<\ldots <i_{2k}}
\beta_{(i_1,\ldots,i_{2k})} \, \frac{x_2^{p^{s_2}}
\ldots x_n^{p^{s_n}}}{x_{i_1}\ldots
x_{i_{2k}}}[x_{i_1},x_{i_2}]\ldots [x_{i_{2k-1}},x_{i_{2k}}] \in
R_k.
\end{equation}
It is clear that, using (\ref{m_x1_1}) and (\ref{mi1i2k_2}), we
can write $f = f_1 + f_2$, where
\[
f_1 = x_1^{p^s} \left( \sum \limits_{2 \le i_1<\ldots <i_{2k}}
\gamma_{(i_1,\ldots,i_{2k})} \, \frac{x_2^{p^{s_2}} \ldots
x_n^{p^{s_n}}}{x_{i_1}\ldots x_{i_{2k}}}[x_{i_1},x_{i_2}]\ldots
[x_{i_{2k-1}},x_{i_{2k}}] \right)
\]
is of the form (\ref{form_f2}) and $f_2 \in R_k$. Then we have
$\langle f \rangle^{TS} + R_k = \langle f_1 \rangle^{TS} + R_k.$
Thus, we can assume (replacing $f$ with $f_1$) that the polynomial
$f$ is of the form (\ref{form_f2}), as claimed.
If $f = 0$ then $L = R_k$. Suppose that $f \ne 0$. Then we can
assume without loss of generality that
$\alpha_{(2,3,\ldots,2k+1)}\neq 0$. It follows that the
$T$-subspace $\langle f \rangle^{TS}$ contains the polynomial
\begin{eqnarray*}
&&h(x_1,\ldots ,x_{2k+1})=\alpha_{(2,3,\ldots,2k+1)}^{-1}f(x_1,\ldots ,x_{2k+1},1,1,\ldots,1)\\
&&= x_1^{p^s}x_2^{p^{s_2}-1}\ldots x_{2k+1}^{p^{s_{2k+1}}-1}[x_2,x_3]\ldots [x_{2k},x_{2k+1}].
\end{eqnarray*}
Then $\langle f \rangle^{TS} + R_k$ also contains the homogeneous
component of the polynomial $h(x_1+1,\ldots,x_{2k+1}+1)$ of degree
$p^s$ in each variable $x_i$ $(i = 1,2, \ldots , 2k+1)$, which is
equal, modulo $T^{(3)}$, to
\[
\gamma \ x_1^{p^s}x_2^{p^s-1}\ldots x_{2k+1}^{p^s-1}[x_2,x_3]
\ldots [x_{2k}, x_{2k+1}],
\]
where $\gamma = \prod_{i=2}^{2k+1} {p^{s_i}-1 \choose p^s-1}
\equiv 1 \pmod{p}$. It follows that
\[
x_1^{p^s}q_k^{(s)}(x_2,\ldots,x_{2k+1}) \in \langle f \rangle^{TS} + R_k.
\]
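The congruence $\gamma \equiv 1 \pmod{p}$ used above can be checked, for instance, with Lucas' theorem;
we record this short supplementary verification here.
Since $s_i \ge s \ge 1$ for all $i$, the base-$p$ expansions of $p^{s_i}-1$ and of $p^s-1$
consist only of digits equal to $p-1$ ($s_i$ and $s$ of them, respectively), so
\[
{p^{s_i}-1 \choose p^s-1} \equiv {p-1 \choose p-1}^{s}\,{p-1 \choose 0}^{\,s_i-s}=1 \pmod{p},
\quad i=2,\ldots,2k+1,
\]
and the product of these binomial coefficients is indeed congruent to $1$ modulo $p$.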
On the other hand, for all $i_1, \ldots ,i_{2k}$ such that $2 \le
i_1 < \ldots < i_{2k} \le n$, we have
\[
x_1^{p^s} \frac{x_2^{p^{s_2}} \ldots x_n^{p^{s_n}}}{x_{i_1}\ldots
x_{i_{2k}}}[x_{i_1},x_{i_2}]\ldots [x_{i_{2k-1}},x_{i_{2k}}] \in
\langle x_1^{p^s}q_k^{(s)}(x_2,\ldots,x_{2k+1}) \rangle^{TS} +
T^{(3,k+1)}
\]
(recall that $s_i \ge s$ for all $i$) so
\[
f \in \langle x_1^{p^s}q_k^{(s)}(x_2,\ldots,x_{2k+1})
\rangle^{TS} + R_k.
\]
Thus,
\[
\langle f \rangle^{TS}+R_k=\langle
x_1^{p^s}q_k^{(s)}(x_2,\ldots,x_{2k+1}) \rangle ^{TS}+R_k,
\]
where $s \ge 1$. The proof of Lemma \ref{fTSRk} is completed.
\end{proof}
\begin{proposition} \label{Rk_gen}
Let $W$ be a $T$-subspace of $F \langle X \rangle$ such that $R_k
\subsetneqq W.$ Then one of the following holds:
\begin{enumerate}
\item $W=F\langle X \rangle$.
\item $W$ is generated as a $T$-subspace by the polynomials
\[
x_1^p, \ x_1^p q_1(x_2,x_3), \ldots, \ x_1^p q_{\lambda-1}(x_2,
\ldots ,x_{2 \lambda -1}),
\]
\[
x_1[x_2,x_3,x_4],\ x_1[x_2,x_3]\ldots [x_{2\lambda},x_{2\lambda+1}]
\]
for some $\lambda \leq k$.
\item $W$ is generated as a $T$-subspace by the polynomials
\[
x_1^p, \ x_1^pq_1(x_2,x_3), \ldots, \ x_1^pq_{k-1}(x_2, \ldots
,x_{2 k -1}),
\]
\[
\{q_k^{(l)}(x_1, \ldots ,x_{2k}) \mid 1\leq l \leq \mu-1 \}, \
x_1^{p^{\mu}}q_k^{(\mu)}(x_2, \ldots ,x_{2k+1}),
\]
\[
x_1[x_2,x_3,x_4], \ x_1[x_2,x_3]\ldots [x_{2k+2},x_{2k+3}]
\]
for some $\mu \ge 1$.
\end{enumerate}
\end{proposition}
\begin{proof}
Let $f = f(x_1, \ldots , x_n)$ be an arbitrary polynomial in $W
\setminus R_k$ satisfying the conditions of Lemma \ref{fTSRk},
that is, an arbitrary multihomogeneous polynomial such that
$\mbox{deg}_{x_i}f = p^{s_i}$ for some $s_i \ge 0$ $(1 \le i \le
n)$. Let $L_f = \langle f \rangle^{TS} + R_k$. By Lemma
\ref{fTSRk}, we have either $L_f = F \langle X \rangle$ or
\[
L_f = \langle x_1[x_2,x_3] \ldots [x_{2 \theta},x_{2 \theta +1}]
\rangle^{TS} + R_k
\]
for some $\theta \le k$ or
\[
L_f=\langle x_1^{p^s}q_k^{(s)}(x_2,\ldots,x_{2k+1}) \rangle ^{TS}+R_k
\]
for some $s \ge 1$.
Note that $W$ is generated as a $T$-subspace in $F \langle X
\rangle$ by $R_k$ together with the polynomials $f \in W
\setminus R_k$ satisfying the conditions of Lemma
\ref{fTSRk}. It follows that $W = \sum L_f $, where the sum
is taken over all the polynomials $f \in W \setminus R_k$
satisfying these conditions.
It is clear that if $L_f = F \langle X \rangle$ for some $f \in W
\setminus R_k$ then $W = F \langle X \rangle$. Suppose that $L_f
\ne F \langle X \rangle$ for all $f \in W \setminus R_k$. Suppose that, for
some $f \in W \setminus R_k$, we have $L_f = \langle x_1[x_2,x_3]
\ldots [x_{2 \theta},x_{2 \theta +1}] \rangle^{TS} + R_k$, $\theta
\le k$. Define $\lambda = \mbox{ min }\{ \theta \mid x_1[x_2,x_3]
\ldots [x_{2 \theta},x_{2 \theta +1}] \in W \}$; note that
$\lambda \le k$. We have
\[
x_1[x_2,x_3] \ldots [x_{2 \theta},x_{2 \theta +1}] \in \langle
x_1[x_2,x_3] \ldots [x_{2 \lambda},x_{2 \lambda +1}] \rangle^{TS}
\]
for all $\theta \ge \lambda$ and
\[
x_1^{p^s}q_k^{(s)}(x_2,\ldots,x_{2k+1}) \in \langle x_1[x_2,x_3]
\ldots [x_{2 \lambda},x_{2 \lambda +1}] \rangle^{TS} + T^{(3)}
\]
for all $s$. Hence, $W = \langle x_1[x_2,x_3] \ldots [x_{2
\lambda},x_{2 \lambda +1}] \rangle^{TS} + R_k$, where $\lambda \le
k$. It follows that $W$ is generated as a $T$-subspace by the
polynomials
\[
x_1^p, \ x_1^p q_1(x_2,x_3), \ldots, \ x_1^p q_{\lambda-1}(x_2,
\ldots ,x_{2 \lambda -1}),
\]
\[
x_1[x_2,x_3,x_4], \ x_1[x_2,x_3] \ldots [x_{2 \lambda}, x_{2
\lambda+1}],
\]
where $\lambda \leq k$.
Now suppose that, for all $f \in W \setminus R_k$ satisfying the
conditions of Lemma \ref{fTSRk}, we have
\[
L_f=\langle x_1^{p^s}q_k^{(s)}(x_2,\ldots,x_{2k+1}) \rangle ^{TS}+R_k
\]
for some $s = s_f \ge 1$. Note that if $s \leq r$ then
\[
x_1^{p^r}q_k^{(r)}(x_2, \ldots, x_{2k+1}) \in \langle
x_1^{p^s}q_k^{(s)}(x_2,\ldots,x_{2k+1}) \rangle ^{TS} + T^{(3)}.
\]
Take $\mu = \mbox{ min } \{ s \mid
x_1^{p^s}q_k^{(s)}(x_2,\ldots,x_{2k+1}) \in W \}$. Then we have
$W=R_k+\langle x_1^{p^\mu} q_k^{(\mu)} (x_2,\ldots,x_{2k+1})
\rangle^{TS}$ and it is straightforward to check that $W$ can be
generated as a $T$-subspace in $F \langle X \rangle$ by the
polynomials
\[
x_1^p, \ x_1^pq_1(x_2,x_3), \ldots, \
x_1^pq_{k-1}(x_2,\ldots,x_{2k-1})
\]
and the polynomials $\{ q_k^{(l)}(x_1,\ldots,x_{2k}) \mid 1\leq l
\leq \mu-1 \}$, $ x_1^{p^{\mu}} q_k^{(\mu)}(x_2, \ldots,
x_{2k+1})$ together with the polynomials
\[
x_1[x_2,x_3,x_4] \ \ \mbox{and} \ \ x_1[x_2,x_3]\ldots
[x_{2k+2},x_{2k+3}].
\]
This completes the proof of Proposition \ref{Rk_gen}.
\end{proof}
Proposition \ref{Rk_gen} immediately implies the following corollary.
\begin{corollary} \label{Rk_limit}
Let $W$ be a $T$-subspace of $F \langle X \rangle$ such that $R_k
\subsetneqq W$ ($k \ge 1$). Then $W$ is a finitely generated
$T$-subspace in $F \langle X \rangle$.
\end{corollary}
\begin{proposition} \label{Not_equal}
If $k \ne l$ then $R_k \ne R_l.$
\end{proposition}
\begin{proof}
Suppose, in order to get a contradiction, that $R_k = R_l$ for
some $k,l$, $k < l$. Then we have $C(G) \subseteq R_l.$
Indeed, by Theorem \ref{generators_of_C}, the $T$-subspace $C(G)$
is generated by the polynomial $x_1[x_2,x_3,x_4]$ and the
polynomials $x_1^p, x_1^p q_1(x_2,x_3), \ldots , x_1^p q_n(x_2,
\ldots , x_{2n+1}), \ldots .$ Clearly,
\[
x_1[x_2,x_3,x_4] \in T^{(3)} \subset R_l.
\]
Further,
\[
x_1^p, \ x_1^p q_1(x_2,x_3), \ \ldots , \ x_1^p q_{l-1}(x_2,
\ldots , x_{2l-1}) \in R_l
\]
by the definition of $R_l$ and
\[
x_1^p q_{k+1}(x_2, \ldots , x_{2k+3}), \ x_1^p q_{k+2}(x_2, \ldots
, x_{2k+5}), \ldots \in T^{(3, k+1)} \subseteq R_k = R_l
\]
by the definition of $T^{(3,k+1)}$. Since $k < l$, we have
\[
x_1^p, \ x_1^p q_1(x_2,x_3), \ \ldots , \ x_1^p q_{k}(x_2,
\ldots , x_{2k+1}), \ x_1^p q_{k+1}(x_2, \ldots , x_{2k+3}), \
\ldots \in R_l.
\]
Hence, all the generators of the $T$-subspace $C(G)$ belong to
$R_l$ so $C(G) \subseteq R_l$, as claimed.
Note that $T^{(3,k+1)} \subseteq R_l$ and $T^{(3,k+1)} \not
\subseteq C(G)$ so $ C(G) \subsetneqq R_l$. By Theorem
\ref{C_limit}, $C(G)$ is a limit $T$-subspace so each $T$-subspace
$W$ such that $ C(G) \subsetneqq W$ is finitely generated. In
particular, $R_l$ is a finitely generated $T$-subspace. On the
other hand, by Proposition \ref{rk}, the $T$-subspace $R_l$ is not
finitely generated. This contradiction proves that $R_k \ne R_l$
if $k \ne l$, as required.
\end{proof}
Theorem \ref{theorem_main3} follows immediately from Proposition \ref{rk},
Corollary \ref{Rk_limit} and Proposition \ref{Not_equal}.
\section{Acknowledgement}
This work was partially supported by DPP/UnB and by CNPq-FAPDF PRONEX grant 2009/00091-0 (193.000.580/2009). The work of the second and the third authors was partially supported by CNPq; the work of the third author was also partially supported by FEMAT. Thanks are due to the referee whose remarks and suggestions improved the paper.
\section{Introduction}
It is generally not an easy task to construct finite elements for biharmonic problems with both high accuracy and a simple structure.
Although there are several conforming elements achieving high order convergence rate,
such as the Argyris element \cite{Argyris1968},
the Bogner-Fox-Schmit (BFS) element \cite{Bogner1965}, the Hsieh-Clough-Tocher (HCT) element \cite{Clough1965}
and the Fraijes de Veubeke-Sander (FVS) element \cite{Ciavaldini1974, Verbeke1968},
they suffer from the drawbacks caused by the strong $C^1$ continuity requirement.
For example, many degrees of freedom (DoFs) have to be utilized,
the degrees of the polynomials in shape function spaces must be very high,
or the structures of shape function spaces are too complicated.
An alternative way is to adopt nonconforming elements.
A successful construction is the Adini element on rectangular meshes,
whose DoFs are of the nodal type at the vertices of a mesh, resulting in fewer total DoFs.
In \cite{Lascaux1975}, it was shown that a second order convergence rate can be guaranteed in energy norm if the rectangular cells are of the same size, a result based on the interior symmetry of a rectangle.
Thereafter, Luo and Lin \cite{Luo2004} pointed out that the result above is still valid without such a uniformity assumption.
This fact was recently extended to higher-dimensional cases in \cite{Hu2016}.
However, one can only expect the same convergence order in discrete $H^1$- and $L^2$-norms due to the lower bound estimate given by Hu et al.~\cite{Hu2013, Hu2016}.
In recent years, the design of nonconforming elements has drawn rapidly increasing attention.
The bubble function technique has become a standard tool to achieve a high convergence order
if the desired element is $C^0$-continuous \cite{Gao2011,Wang2012,Chen2012,Chen2013}.
There are also some completely nonconforming constructions utilizing a high order inter-element continuity \cite{Zhang2018}.
In addition, we remark that high accuracy can also be achieved by nonstandard methods such as the double set parameter (DSP) method \cite{Shi2000} or superconvergence analysis and postprocessing for low order elements \cite{Mao2009,Hu2016b}.
Biharmonic elements are often used to design divergence-free Stokes elements with discrete Stokes complexes.
A divergence-free Stokes element is usually preferred to a non-divergence-free one,
as the former enjoys several advantages: for instance,
the conservation law is preserved numerically for incompressible flow,
and the discrete scheme is more robust and accurate with respect to the time discretization.
A comprehensive review of this topic can be found in \cite{John2016}.
Conforming examples include the Stokes elements and the discrete Stokes complexes
derived from the Argyris element \cite{Falk2013}, the BFS element \cite{Neilan2016},
the HCT element \cite{Christiansen2018} and the FVS element \cite{Neilan2018}, etc.
High order nonconforming complexes such as in \cite{Guzman2012} and \cite{Zhang2018} are also successful constructions.
Recently, the de Rham complex from the Adini element was also shown by Gillette et al.~\cite{Gillette2018} as a member of a nonstandard family.
In 1996, in order to construct a family of biharmonic elements by the DSP method,
Chen and Shi \cite{Chen1996} designed an intermediate rectangular element of 12 local DoFs,
which are the four point values, the four edge integrals of the shape functions
and the edge integrals of their normal derivatives.
The shape function space is selected as $P_3$ plus a space spanned by two monomials of degree four.
It was then used as a non $C^0$ nonconforming element for fourth order elliptic singular perturbation problems
with a uniform convergence property \cite{Chen2005}.
Very recently, we extended this method to general convex quadrilateral meshes
with some slight modifications to the shape function space,
and induced a divergence-free Stokes element for the Brinkman problem \cite{Zhou2018}.
The associated discrete Stokes complex was also provided.
Based on a standard consistency error estimate,
only the lowest order convergence rates of this complex can be derived.
Surprisingly, in our numerical tests,
we found that the velocity has an $O(h^2)$ approximation order in energy norm if the mesh consists of uniform rectangles,
one order higher than that given by the theoretical analysis therein.
Moreover, as far as we are aware,
although there has been much research on the DSP method motivated by \cite{Chen1996},
a high accuracy analysis for the 12-DoF intermediate rectangular element introduced in \cite{Chen1996} is still missing.
The aim of this work is to give a high accuracy analysis of the intermediate rectangular element given in \cite{Chen1996}
and the induced discrete Stokes complex.
We prove, by the special properties of the shape function space, that if the rectangular mesh is uniform,
then this plate element actually has an $O(h^2)$ convergence rate in discrete $H^2$-norm
when it is applied to the biharmonic problem.
In addition, with the aid of an auxiliary biharmonic element and through a duality argument,
an $O(h^3)$ convergence order in discrete $H^1$-norm can be achieved,
provided that the solution region is convex.
In comparison with the well-known Adini element,
the numbers of local DoFs are the same.
Although there are more global DoFs,
this element space is highly nonconforming with many edge-oriented basis functions,
enjoying a low bandwidth when the stiffness matrix is assembled.
This might be more convenient in implementation, especially for parallel computing.
Moreover, the convergence rate in discrete $H^1$-norm is one order higher than that of the Adini element.
From this element, a nonconforming discrete complex is presented using the strategy in \cite{Zhou2018}.
The 1-form and 2-form constitute a divergence-free Stokes pair for incompressible flow.
Under uniform partitions, we also show its higher accuracy than that derived from a usual error estimate,
in both energy norm and $L^2$-norm for the velocity.
The convergence order of the pressure can be improved by a simple postprocessing.
This complex has the same key features as that provided in our previous work \cite{Zhou2018} over uniform rectangular meshes,
and so a rigorous explanation for the phenomenon observed in \cite{Zhou2018} is obtained.
The rest of this work is arranged as follows.
In Section \ref{s: stokes complex},
we give an overview of the investigated discrete Stokes complex.
Section \ref{s: scalar} provides the high accuracy analysis of the 0-form for biharmonic problems.
The error estimate for the Stokes pair determined by the 1-form and 2-form will be presented in Section \ref{s: vector}.
Finally, numerical examples are given in Section \ref{s: numer ex} to verify our theoretical analysis.
Standard notations in Sobolev spaces are used throughout this work.
For a domain $D\subset\mathbb{R}^2$,
$\bm{n}$ and $\bm{t}$ will be the unit outward normal and tangent vectors on $\partial D$, respectively.
The notation $P_k(D)$ denotes the usual polynomial space over $D$ of degree no more than $k$.
The norms and semi-norms of order $m$ in the Sobolev spaces $H^m(D)$
are indicated by $\|\cdot\|_{m,D}$ and $|\cdot|_{m,D}$, respectively.
The space $H_0^m(D)$ is the closure in $H^m(D)$ of $C_0^{\infty}(D)$.
The space $\bm{H}_0(\mathrm{div};\Omega)$ is related to $\bm{H}(\mathrm{div};\Omega)$ in a similar manner.
We also adopt the convention that $L^2(D):=H^0(D)$,
where the inner-product is denoted by $(\cdot,\cdot)_D$.
The functions in its subspace $L_0^2(D)$ are of zero integral.
These notations of norms, semi-norms and inner-products also work for vector- and matrix-valued Sobolev spaces,
where the subscript $\Omega$ will be omitted if the domain $D=\Omega$.
Moreover, $C$ is a positive constant independent of the mesh size $h$ and may be different in different places.
\section{Preliminaries: A nonconforming discrete Stokes complex}
\label{s: stokes complex}
Let $\Omega\subset\mathbb{R}^2$ be a polygonal domain and $\partial\Omega$ be its boundary.
We assume that $\Omega$ can be uniformly partitioned into rectangular cells,
denoted by $\mathcal{T}_h$, with $h$ being the length of the diagonal of each rectangle.
For a cell $K\in\mathcal{T}_h$, the vertices are designated as $V_{i,K}$, $i=1,2,3,4$.
We select the coordinate system fulfilling that $V_{1,K}=(x'_K,y'_K)$, $V_{2,K}=(x''_K,y'_K)$,
$V_{3,K}=(x''_K,y''_K)$, $V_{4,K}=(x'_K,y''_K)$.
The edges $E_{i,K}=V_{i,K}V_{i+1,K}$ are parallel to the coordinate axes, and their lengths are given by
\[
h_x=|E_{1,K}|=|E_{3,K}|=x''_K-x'_K,~h_y=|E_{2,K}|=|E_{4,K}|=y''_K-y'_K.
\]
Note that here $i$ is taken modulo four.
We can now give the following two elements over $K$.
See Figure \ref{fig: elements} as an example.
\begin{definition}
(See also \cite{Chen1996}) The element $(K,W_K,T_K)$ is defined through
\begin{itemize}
\setlength{\itemsep}{-\itemsep}
\item $K\in\mathcal{T}_h$ is a rectangle;
\item $W_K=P_3(K)\oplus\mathrm{span}\{x^4,y^4\}$ is the shape function space;
\item $T_K=\{\tau_{i,K},~i=1,2,\ldots,12\}$ is the DoF set where
\[
\begin{aligned}
\tau_{i,K}(w)&=w(V_{i,K}),~\tau_{i+4,K}(w)=\int_{E_{i,K}}w\,\mathrm{d}s,\\
\tau_{i+8,K}(w)&=\int_{E_{i,K}}\frac{\partial w}{\partial\bm{n}}\,\mathrm{d}s,
~\forall w\in W_K,~i=1,2,3,4.
\end{aligned}
\]
\end{itemize}
\end{definition}
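As a side remark (not part of the original definition), the number of local DoFs matches the dimension of the shape function space:
\[
\dim W_K=\dim P_3(K)+2=10+2=12,
\]
which equals the number of functionals in $T_K$, in accordance with the unisolvency result quoted below.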
\begin{definition}
The element $(K,\bm{V}_K,\Sigma_K)$ is defined through
\begin{itemize}
\setlength{\itemsep}{-\itemsep}
\item $K\in\mathcal{T}_h$ is a rectangle;
\item $\bm{V}_K$ is the shape function space, where
\[
\bm{V}_K=[P_1(K)]^2\oplus\mathrm{span}\{\curl\,x^3,\curl\,x^2y,\curl\,xy^2,\curl\,y^3,\curl\,x^4,\curl\,y^4\};
\]
\item $\Sigma_K=\{\sigma_{i,K},~i=1,2,\ldots,12\}$ is the DoF set:
\[
\begin{aligned}
\sigma_{i,K}(\bm{v})&=\int_{E_{i,K}}\bm{v}\cdot\bm{n}\,\mathrm{d}s,
~\sigma_{i+4,K}(\bm{v})=\int_{E_{i,K}}\bm{v}\cdot\bm{n}\xi_{i,K}\,\mathrm{d}s,\\
\sigma_{i+8,K}(\bm{v})&=\int_{E_{i,K}}\bm{v}\cdot\bm{t}\,\mathrm{d}s,
~\forall \bm{v}\in \bm{V}_K,~i=1,2,3,4,
\end{aligned}
\]
where $\xi_{i,K}\in P_1(K)$ are monomials in variable $x$ or $y$ such that
\[
\begin{aligned}
\xi_{1,K}&=\xi_{3,K},~\xi_{1,K}(V_{2,K})=\xi_{1,K}(V_{3,K})=-\xi_{1,K}(V_{1,K})=-\xi_{1,K}(V_{4,K})=1,\\
\xi_{2,K}&=\xi_{4,K},~\xi_{2,K}(V_{3,K})=\xi_{2,K}(V_{4,K})=-\xi_{2,K}(V_{1,K})=-\xi_{2,K}(V_{2,K})=1.
\end{aligned}
\]
\end{itemize}
\end{definition}
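As another side remark, we record a dimension count and the explicit form of the weights $\xi_{i,K}$;
both follow directly from the definition and involve no additional assumptions. First,
\[
\dim \bm{V}_K=2\dim P_1(K)+6=6+6=12,
\]
which equals the number of functionals in $\Sigma_K$, and, since $\mathrm{div}\,\curl=0$,
\[
\mathrm{div}\,\bm{V}_K=\mathrm{div}\,[P_1(K)]^2=P_0(K).
\]
Second, the stated conditions force
\[
\xi_{1,K}=\xi_{3,K}=\frac{2x-x'_K-x''_K}{h_x},\qquad
\xi_{2,K}=\xi_{4,K}=\frac{2y-y'_K-y''_K}{h_y}.
\]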
\begin{figure}[!htb]
\centering
\subfigure[DoFs of the element $(K,W_K,T_K)$] {
\label{fig: element 1}
\begin{overpic}[scale=0.4]{element1.eps}
\put(48,39){$K$} \put(8,10){$V_{1,K}$} \put(83,10){$V_{2,K}$} \put(83,67){$V_{3,K}$} \put(8,68){$V_{4,K}$}
\put(40,15){$\int$} \put(57,63){$\int$} \put(84,27){$\int$} \put(12,50){$\int$}
\end{overpic}
}
\subfigure[DoFs of the element $(K,\bm{V}_K,\Sigma_K)$] {
\label{fig: element 2}
\begin{overpic}[scale=0.4]{element2.eps}
\put(48,39){$K$}
\end{overpic}
}
\caption{DoFs of the two elements.\label{fig: elements}}
\end{figure}
\begin{theorem}
These two elements are well-defined.
\end{theorem}
\begin{proof}
The unisolvency of $W_K$ with respect to $T_K$ can be found in \cite{Chen1996}.
The assertion for the second element can be directly obtained by the technique provided in Theorem 2.5 in \cite{Zhou2018}
using the first element.
\end{proof}
For a given partition $\mathcal{T}_h$,
the sets of all vertices, interior vertices, boundary vertices,
edges, interior edges and boundary edges are correspondingly denoted by $\mathcal{V}_h$,
$\mathcal{V}_h^i$, $\mathcal{V}_h^b$, $\mathcal{E}_h$, $\mathcal{E}_h^i$ and $\mathcal{E}_h^b$.
For each $E\in\mathcal{E}_h$, $\bm{n}_E$ is a fixed unit vector perpendicular to $E$
and $\bm{t}_E$ is the vector obtained by rotating $\bm{n}_E$ by ninety degrees counterclockwise.
Moreover, for $E\in\mathcal{E}_h^i$, the jump of a function $v$ across $E$ is defined as $[v]_E=v|_{K_1}-v|_{K_2}$,
where $K_1$ and $K_2$ are the cells sharing $E$ as a common edge, and $\bm{n}_E$ points from $K_1$ to $K_2$.
For $E\in\mathcal{E}_h^b$, we set $[v]_E=v|_{K}$ if $E$ is an edge of $K$.
Under these notations, we introduce the following finite element spaces over $\Omega$:
\[
\begin{aligned}
W_h=\Bigg\{w&\in L^2(\Omega):~w|_K\in W_K,~\forall K\in\mathcal{T}_h,
~\mbox{$w$ is continuous at all $V\in\mathcal{V}_h^i$ and}\\
&\mbox{vanishes at all $V\in\mathcal{V}_h^b$},
~\int_E\left[w\right]_E\,\mathrm{d}s
=\int_E\left[\frac{\partial w}{\partial\bm{n}_E}\right]_E\,\mathrm{d}s=0
~\mbox{for all $E\in\mathcal{E}_h$}\Bigg\};\\
\bm{V}_h=\Bigg\{\bm{v}&\in [L^2(\Omega)]^2:~\bm{v}|_K\in\bm{V}_K,~\forall K\in\mathcal{T}_h,\\
&\int_E[\bm{v}\cdot\bm{n}_E]_E\xi\,\mathrm{d}s=0,~\forall\xi\in P_1(E),~\int_E[\bm{v}\cdot\bm{t}_E]_E\,\mathrm{d}s=0,~\forall E\in\mathcal{E}_h\Bigg\};\\
P_h=\Big\{q&\in L_0^2(\Omega):~q|_K\in P_0(K),~\forall K\in\mathcal{T}_h\Big\}.
\end{aligned}
\]
For each $K\in\mathcal{T}_h$,
we define interpolation operators $\mathcal{I}_K:H^2(K)\rightarrow W_K$ and $\bm{\Pi}_K:[H^1(K)]^2\rightarrow \bm{V}_K$,
such that $\tau_{i,K}(\mathcal{I}_Kw)=\tau_{i,K}(w)$, $\sigma_{i,K}(\bm{\Pi}_K\bm{v})=\sigma_{i,K}(\bm{v})$, $i=1,2,\ldots,12$.
The global interpolation operators $\mathcal{I}_h:H_0^2(\Omega)\rightarrow W_h$ and $\bm{\Pi}_h:[H_0^1(\Omega)]^2\rightarrow \bm{V}_h$ are naturally set as $\mathcal{I}_h|_K=\mathcal{I}_K$ and $\bm{\Pi}_h|_K=\bm{\Pi}_K$, $\forall K\in \mathcal{T}_h$.
Similarly, we write $\mathcal{P}_h$ as the $L^2$-projection operator from $L^2(\Omega)$ to $P_h$.
Moreover, the differential operators $\curl$ and $\mathrm{div}$ have their discrete versions $\curl_h$ and $\mathrm{div}_h$,
respectively, determined by $\curl_h|_K=\curl$ on $K$, and $\mathrm{div}_h|_K=\mathrm{div}$ on $K$, $\forall K\in \mathcal{T}_h$. The following commutative diagram provides a discrete Stokes complex.
\begin{theorem}
\label{th: discrete complex}
It holds that
\begin{equation}
\label{e: discrete complex}
\begin{tikzcd}[column sep=large, row sep=large]
0 \arrow{r} & H_0^2(\Omega) \arrow{r}{\curl} \arrow{d}{\mathcal{I}_h}
& \left[H_0^1(\Omega)\right]^2 \arrow{r}{\mathrm{div}} \arrow{d}{\bm{\Pi}_{h}} &
L_0^2(\Omega)\arrow{r} \arrow{d}{\mathcal{P}_h} &0\\
0 \arrow{r} & W_h \arrow{r}{\curl_h}
& \bm{V}_h \arrow{r}{\mathrm{div}_h}& P_h \arrow{r}&0.
\end{tikzcd}
\end{equation}
Moreover, the sequence in the second row is exact.
\end{theorem}
\begin{proof}
Please see Theorem 4.1 in \cite{Zhou2018} for quite a similar argument.
\end{proof}
Define the semi-norms
\[
|v|_{m,h}=\left(\sum_{K\in\mathcal{T}_h}|v|_{m,K}^2\right)^{1/2},~m=0,1,2,3,
\]
then $|\cdot|_{2,h}$ and $|\cdot|_{1,h}$ are norms over $W_h$ and $\bm{V}_h$, respectively.
Since $\mathcal{I}_K$ preserves $P_3(K)$ for each $K\in\mathcal{T}_h$, the Bramble-Hilbert lemma gives
\begin{equation}
\label{e: interp err scalar}
|w-\mathcal{I}_hw|_{j,h}\leq Ch^{k-j}|w|_k,~k=3,4,~j=0,1,2,3,~\forall w\in H_0^2(\Omega)\cap H^4(\Omega).
\end{equation}
Moreover, if $\bm{v}\in [H_0^1(\Omega)\cap H^3(\Omega)]^2$ and $\mathrm{div}\,\bm{v}=0$,
we can find $\bm{v}=\curl\,\phi$ for some $\phi\in H_0^2(\Omega)\cap H^4(\Omega)$.
The commutative diagram (\ref{e: discrete complex}) implies
\begin{equation}
\label{e: interp err vector}
\left|\bm{v}-\bm{\Pi}_h\bm{v}\right|_{1,h}=\left|\curl\,\phi-\curl_h\mathcal{I}_h\phi\right|_{1,h}
\leq C|\phi-\mathcal{I}_h\phi|_{2,h}\leq Ch^2|\phi|_4.
\end{equation}
For a cell $K\in\mathcal{T}_h$,
we write $\mathcal{P}_k^K$ as the $L^2$-projection operator from $L^2(K)$ to $P_k(K)$.
If $E$ is an edge of $K$,
we also define $\mathcal{P}_k^E$ that
$L^2$-projects the traces of the functions in $H^1(K)$ on $E$ to $P_k(E)$.
\section{High accuracy analysis of $W_h$ for biharmonic problems}
\label{s: scalar}
Let us turn to the application of $W_h$ to the biharmonic problem.
Given $f\in H^{-2}(\Omega)$, the biharmonic problem is to find $u$ such that
\begin{equation}
\label{e: model problem scalar}
\begin{aligned}
&\Delta^2u=f~~~&\mbox{in}~\Omega,\\
&u=\frac{\partial u}{\partial\bm{n}}=0~&\mbox{on}~\partial\Omega.
\end{aligned}
\end{equation}
A weak form can be represented as: Find $u\in H_0^2(\Omega)$ such that
\begin{equation}
\label{e: weak scalar}
(\nabla^2 u,\nabla^2 v)=(f,v),~\forall v\in H_0^2(\Omega),
\end{equation}
where $\nabla^2 v$ is the Hessian matrix of $v$.
Note that we have fixed the Poisson ratio $\sigma=0$.
For other $\sigma\in(0,1/2]$, the coming analysis has no intrinsic difference.
The discrete form of (\ref{e: weak scalar}) reads:
Find $u_h\in W_h$ such that
\begin{equation}
\label{e: discrete weak scalar}
\sum_{K\in\mathcal{T}_h}(\nabla^2 u_h,\nabla^2 v_h)_K=(f,v_h),~\forall v_h\in W_h.
\end{equation}
This problem has a unique solution due to the Lax-Milgram theorem.
Note that
\begin{equation}
\label{e: normal weak scalar}
\int_E\left[\frac{\partial w_h}{\partial \bm{n}_E}\right]_E\,\mathrm{d}s=0,~\forall E\in\mathcal{E}_h,~\forall w_h\in W_h.
\end{equation}
Hence, based on a classical consistency error analysis,
one can only predict an $O(h)$ convergence order in discrete $H^2$-norm.
Nevertheless, the result below shows that a higher accuracy is actually achieved.
\begin{theorem}
\label{th: converge scalar}
Let $u\in H_0^2(\Omega)\cap H^4(\Omega)$ and $u_h\in W_h$ be the solutions of (\ref{e: weak scalar})
and (\ref{e: discrete weak scalar}), respectively.
If $\mathcal{T}_h$ is uniform, then
\begin{equation}
\label{e: err scalar}
|u-u_h|_{2,h}\leq Ch^2|u|_4.
\end{equation}
\end{theorem}
\begin{proof}
Let us begin with the Strang lemma
\begin{equation}
\label{e: Strang}
|u-u_h|_{2,h}\leq C\left(\inf_{v_h\in W_h}|u-v_h|_{2,h}
+\sup_{w_h\in W_h}\frac{|E_{2,h}(u,w_h)|}{|w_h|_{2,h}}\right)
\end{equation}
with
\[
E_{2,h}(u,w_h)=\sum_{K\in\mathcal{T}_h}(\nabla^2 u,\nabla^2 w_h)_K-(f,w_h).
\]
It follows from (\ref{e: interp err scalar}) that
\[
\inf_{v_h\in W_h}|u-v_h|_{2,h}\leq|u-\mathcal{I}_hu|_{2,h}\leq Ch^2|u|_4,
\]
therefore it suffices to consider the consistency error $E_{2,h}(u,w_h)$.
If $u\in H^4(\Omega)$ then $f\in L^2(\Omega)$.
Let $\mathcal{I}^{S_2}_K$ be the nodal interpolation operator from $W_K$ to $S_2(K)$,
the second order serendipity element space $P_2(K)\oplus\mathrm{span}\{x^2y,xy^2\}$,
satisfying
\[
\mathcal{I}^{S_2}_K(w_h|_K)(V_{i,K})=(w_h|_K)(V_{i,K}),
~\int_{E_{i,K}}\mathcal{I}^{S_2}_K(w_h|_K)\,\mathrm{d}s=\int_{E_{i,K}}w_h|_K\,\mathrm{d}s,~i=1,2,3,4,
\]
and then set $\mathcal{I}^{S_2}_h$ via $\mathcal{I}^{S_2}_h|_K=\mathcal{I}^{S_2}_K$, $\forall K\in\mathcal{T}_h$.
Owing to the weak continuity of $w_h$, we find $\mathcal{I}^{S_2}_hw_h\in H_0^1(\Omega)$.
An integration by parts using (\ref{e: model problem scalar}) gives
\begin{equation}
\label{e: E2h}
\begin{aligned}
E_{2,h}(u,w_h)=&\sum_{K\in\mathcal{T}_h}(\nabla^2 u,\nabla^2 w_h)_K-(f,\mathcal{I}^{S_2}_hw_h)-(f,w_h-\mathcal{I}^{S_2}_hw_h)\\
=&-\sum_{K\in\mathcal{T}_h}(\nabla\Delta u,\nabla(w_h-\mathcal{I}^{S_2}_hw_h))_K-(f,w_h-\mathcal{I}^{S_2}_hw_h)\\
&+\sum_{K\in\mathcal{T}_h}\int_{\partial K}\frac{\partial^2u}{\partial\bm{n}^2}\frac{\partial w_h}{\partial\bm{n}}\,\mathrm{d}s+\sum_{K\in\mathcal{T}_h}\int_{\partial K}\frac{\partial^2u}{\partial\bm{n}\partial\bm{t}}\frac{\partial w_h}{\partial\bm{t}}\,\mathrm{d}s\\
:=&I_1(u,w_h)+I_2(u,w_h)+I_3(u,w_h)+I_4(u,w_h).
\end{aligned}
\end{equation}
As far as $I_1(u,w_h)$ is concerned, noting that
\[
\int_K\nabla(w_h-\mathcal{I}^{S_2}_hw_h)\,\mathrm{d}\bm{x}=\int_{\partial K}(w_h-\mathcal{I}^{S_2}_hw_h)\bm{n}\,\mathrm{d}s=\bm{0}
\]
and $\mathcal{I}^{S_2}_K$ preserves $P_2(K)$, we derive
\begin{equation}
\label{e: I1}
\begin{aligned}
|I_1(u,w_h)|&=\left|\sum_{K\in\mathcal{T}_h}\left(\nabla\Delta u-\mathcal{P}_0^K\nabla\Delta u,
\nabla(w_h-\mathcal{I}^{S_2}_hw_h)\right)_K\right|\\
&\left\{
\begin{aligned}
&\leq\sum_{K\in\mathcal{T}_h}Ch|\nabla\Delta u|_{1,K}Ch|w_h|_{2,K}\leq Ch^2|u|_4|w_h|_{2,h};\\
&\leq\sum_{K\in\mathcal{T}_h}Ch|\nabla\Delta u|_{1,K}Ch^2|w_h|_{3,K}\leq Ch^3|u|_4|w_h|_{3,h}.
\end{aligned}
\right.
\end{aligned}
\end{equation}
One can also estimate $|I_2(u,w_h)|$ as
\begin{equation}
\label{e: I2}
|I_2(u,w_h)|\leq \|f\|_0\|w_h-\mathcal{I}^{S_2}_hw_h\|_0
\left\{
\begin{aligned}
&\leq Ch^2\|f\|_0|w_h|_{2,h}\leq Ch^2|u|_4|w_h|_{2,h};\\
&\leq Ch^3\|f\|_0|w_h|_{3,h}\leq Ch^3|u|_4|w_h|_{3,h}.
\end{aligned}
\right.
\end{equation}
Moreover, since the vertex values and the edge integrals of $w_h$ are continuous,
integrating by parts results in
\begin{equation}
\label{e: tangential weak scalar}
\int_E\left[\frac{\partial w_h}{\partial \bm{t}_E}\right]_E\xi_E\,\mathrm{d}s=0,~\forall \xi_E\in P_1(E),
~\forall E\in\mathcal{E}_h,~\forall w_h\in W_h,
\end{equation}
which asserts that
\begin{equation}
\label{e: I4}
\begin{aligned}
|I_4(u,w_h)|&=\left|\sum_{K\in\mathcal{T}_h}\sum_{E\subset\partial K}\int_E
\left(\frac{\partial^2u}{\partial\bm{n}\partial\bm{t}}-\mathcal{P}_1^E\frac{\partial^2u}{\partial\bm{n}\partial\bm{t}}
\right)\left(\frac{\partial w_h}{\partial \bm{t}}-\mathcal{P}_1^E\frac{\partial w_h}{\partial \bm{t}}\right)\,\mathrm{d}\sigma\right|\\
&\leq\sum_{K\in\mathcal{T}_h}\sum_{E\subset\partial K}\left\|\frac{\partial^2u}{\partial\bm{n}\partial\bm{t}}
-\mathcal{P}_1^E\frac{\partial^2u}{\partial\bm{n}\partial\bm{t}}\right\|_{0,E}\left\|\frac{\partial w_h}{\partial \bm{t}}-\mathcal{P}_1^E\frac{\partial w_h}{\partial \bm{t}}\right\|_{0,E}\\
&\left\{
\begin{aligned}
&\leq\sum_{K\in\mathcal{T}_h}Ch^{3/2}|u|_{4,K}Ch^{1/2}|w_h|_{2,K}\leq Ch^2|u|_4|w_h|_{2,h};\\
&\leq\sum_{K\in\mathcal{T}_h}Ch^{3/2}|u|_{4,K}Ch^{3/2}|w_h|_{3,K}\leq Ch^3|u|_4|w_h|_{3,h}.
\end{aligned}
\right.
\end{aligned}
\end{equation}
Hence, it remains to estimate $|I_3(u,w_h)|$.
To this end, notice from the weak continuity of $w_h$ that
\begin{equation}
\label{e: I3}
\begin{aligned}
I_3(u,w_h)=&\sum_{K\in\mathcal{T}_h}\left(
\int_{E_{3,K}}\frac{\partial^2u}{\partial y^2}
\left(\frac{\partial w_h}{\partial y}-\mathcal{P}_0^{E_{3,K}}\frac{\partial w_h}{\partial y}\right)\,\mathrm{d}s
-\int_{E_{1,K}}\frac{\partial^2u}{\partial y^2}
\left(\frac{\partial w_h}{\partial y}-\mathcal{P}_0^{E_{1,K}}\frac{\partial w_h}{\partial y}\right)\,\mathrm{d}s\right)\\
&+\sum_{K\in\mathcal{T}_h}\left(
\int_{E_{2,K}}\frac{\partial^2u}{\partial x^2}
\left(\frac{\partial w_h}{\partial x}-\mathcal{P}_0^{E_{2,K}}\frac{\partial w_h}{\partial x}\right)\,\mathrm{d}s
-\int_{E_{4,K}}\frac{\partial^2u}{\partial x^2}
\left(\frac{\partial w_h}{\partial x}-\mathcal{P}_0^{E_{4,K}}\frac{\partial w_h}{\partial x}\right)\,\mathrm{d}s\right)\\
:=&\sum_{K\in\mathcal{T}_h}I_{1,K}(u,w_h)+\sum_{K\in\mathcal{T}_h}I_{2,K}(u,w_h).
\end{aligned}
\end{equation}
Let us consider the summation involving $I_{1,K}(u,w_h)$. Clearly,
\begin{equation}
\label{e: bilinear scalar est 1}
\begin{aligned}
|I_{1,K}(u,w_h)|&=\Bigg|
\int_{E_{3,K}}\left(\frac{\partial^2u}{\partial y^2}-\mathcal{P}_0^{E_{3,K}}\frac{\partial^2u}{\partial y^2}\right)
\left(\frac{\partial w_h}{\partial y}-\mathcal{P}_0^{E_{3,K}}\frac{\partial w_h}{\partial y}\right)\,\mathrm{d}s\\
&~~~~~-\int_{E_{1,K}}\left(\frac{\partial^2u}{\partial y^2}-\mathcal{P}_0^{E_{1,K}}\frac{\partial^2u}{\partial y^2}\right)
\left(\frac{\partial w_h}{\partial y}-\mathcal{P}_0^{E_{1,K}}\frac{\partial w_h}{\partial y}\right)\,\mathrm{d}s
\Bigg|\\
&\leq Ch\left|\frac{\partial^2u}{\partial y^2}\right|_{1,K}|w_h|_{2,K}.
\end{aligned}
\end{equation}
On the other hand, introduce the bilinear form $I'_{1,K}(u,w_h)$:
\begin{equation}
\label{e: I'}
I'_{1,K}(u,w_h)=\frac{h_x^2}{12}\int_K\frac{\partial^3u}{\partial x\partial y^2}\frac{\partial^3w_h}{\partial x\partial y^2}
+\frac{\partial^4u}{\partial x^2\partial y^2}\frac{\partial^2w_h}{\partial y^2}\,\mathrm{d}x\mathrm{d}y.
\end{equation}
It follows from the inverse inequality that
\begin{equation}
\label{e: bilinear scalar est 2}
\begin{aligned}
|I'_{1,K}(u,w_h)|&\leq Ch^2\left|\frac{\partial^2u}{\partial y^2}\right|_{1,K}|w_h|_{3,K}+Ch^2\left|\frac{\partial^2u}{\partial y^2}\right|_{2,K}|w_h|_{2,K}\\
&\leq Ch\left(\left|\frac{\partial^2u}{\partial y^2}\right|_{1,K}+h\left|\frac{\partial^2u}{\partial y^2}\right|_{2,K}\right)|w_h|_{2,K}.
\end{aligned}
\end{equation}
Moreover, we find
\begin{equation}
\label{e: equal scalar y}
I_{1,K}(u,w_h)=I'_{1,K}(u,w_h)=0,~\mbox{if}~\frac{\partial^2 u}{\partial y^2}\in\mathrm{span}\{1,y\}.
\end{equation}
Next, since $\frac{\partial w_h}{\partial y}\in P_2(K)\oplus\mathrm{span}\{y^3\}$ on each $K$ and $y$ is constant on the horizontal edges $E_{1,K}$ and $E_{3,K}$, an easy verification shows
\[
\frac{\partial w_h}{\partial y}\bigg|_{E_{i,K}}\in P_2(E_{i,K}),~i=1,3,
\]
therefore by the Simpson quadrature rule,
\begin{equation}
\label{e: normal relation scalar}
\begin{aligned}
\frac{1}{h_x}\int_{E_{i,K}}&\left(\frac{\partial w_h}{\partial y}-\mathcal{P}_0^{E_{i,K}}\frac{\partial w_h}{\partial y}\right)\xi_{i,K}\,\mathrm{d}s=\frac{1}{h_x}\int_{E_{i,K}}\frac{\partial w_h}{\partial y}\xi_{i,K}\,\mathrm{d}s\\
&=\frac{1}{6}\left(\frac{\partial w_h}{\partial y}(V_{i'',K})-\frac{\partial w_h}{\partial y}(V_{i',K})\right)
=\frac{1}{6}\int_{E_{i,K}}\frac{\partial^2 w_h}{\partial x\partial y}\,\mathrm{d}s,
\end{aligned}
\end{equation}
where $i''=3$ and $i'=4$ for $i=3$, and $i''=2$ and $i'=1$ for $i=1$.
This gives
\begin{equation}
\label{e: derivation x scalar}
\begin{aligned}
\frac{1}{h_x}&\left(\int_{E_{3,K}}\left(\frac{\partial w_h}{\partial y}-\mathcal{P}_0^{E_{3,K}}\frac{\partial w_h}{\partial y}\right)\xi_{3,K}\,\mathrm{d}s
-\int_{E_{1,K}}\left(\frac{\partial w_h}{\partial y}-\mathcal{P}_0^{E_{1,K}}\frac{\partial w_h}{\partial y}\right)\xi_{1,K}\,\mathrm{d}s
\right)\\
&=\frac{1}{6}\int_{x'_K}^{x''_K}\frac{\partial^2 w_h}{\partial x\partial y}\bigg|_{y=y''_K}
-\frac{\partial^2 w_h}{\partial x\partial y}\bigg|_{y=y'_K}\,\mathrm{d}x
=\frac{1}{6}\int_K\frac{\partial^3 w_h}{\partial x\partial y^2}\,\mathrm{d}x\mathrm{d}y\\
&=\frac{h_x}{12}\int_K\frac{\partial\xi_{3,K}}{\partial x}\frac{\partial^3 w_h}{\partial x\partial y^2}\,\mathrm{d}x\mathrm{d}y,
\end{aligned}
\end{equation}
and thus
\begin{equation}
\label{e: equal scalar x}
I_{1,K}(u,w_h)=I'_{1,K}(u,w_h),~\mbox{if}~\frac{\partial^2 u}{\partial y^2}=x.
\end{equation}
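For the reader's convenience, we spell out the Simpson-rule step used in (\ref{e: normal relation scalar});
the midpoint of $E_{i,K}$ is denoted here by $M_{i,K}$, a notation introduced only for this remark.
With $g=\frac{\partial w_h}{\partial y}\big|_{E_{i,K}}\in P_2(E_{i,K})$, the product $g\,\xi_{i,K}$ belongs to $P_3(E_{i,K})$,
for which the Simpson rule is exact, and $\xi_{i,K}(M_{i,K})=0$, $\xi_{i,K}(V_{i'',K})=-\xi_{i,K}(V_{i',K})=1$, so that
\[
\begin{aligned}
\frac{1}{h_x}\int_{E_{i,K}}g\,\xi_{i,K}\,\mathrm{d}s
&=\frac{1}{6}\Big(g(V_{i',K})\xi_{i,K}(V_{i',K})+4g(M_{i,K})\xi_{i,K}(M_{i,K})+g(V_{i'',K})\xi_{i,K}(V_{i'',K})\Big)\\
&=\frac{1}{6}\left(g(V_{i'',K})-g(V_{i',K})\right),
\end{aligned}
\]
which is exactly the second equality in (\ref{e: normal relation scalar}).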
It follows from (\ref{e: equal scalar y}) and (\ref{e: equal scalar x}) that
\begin{equation}
\label{e: key 1}
I_{1,K}(u,w_h)-I'_{1,K}(u,w_h)=0,~\forall \frac{\partial^2u}{\partial y^2}\in P_1(K),~\forall w_h\in W_K.
\end{equation}
Owing to (\ref{e: bilinear scalar est 1}), (\ref{e: bilinear scalar est 2}), (\ref{e: key 1}) and the Bramble-Hilbert lemma, we derive
\begin{equation}
\label{e: bilinear scalar est 3}
|I_{1,K}(u,w_h)-I'_{1,K}(u,w_h)|\leq Ch^2\left|\frac{\partial^2u}{\partial y^2}\right|_{2,K}|w_h|_{2,K}
\leq Ch^2|u|_{4,K}|w_h|_{2,K}.
\end{equation}
Substituting (\ref{e: bilinear scalar est 3}) into (\ref{e: I3}) results in
\begin{equation}
\label{e: I1K}
\sum_{K\in\mathcal{T}_h}I_{1,K}(u,w_h)\leq \sum_{K\in\mathcal{T}_h}I'_{1,K}(u,w_h)+Ch^2|u|_4|w_h|_{2,h}.
\end{equation}
Now we are in the position to estimate the right-hand side in (\ref{e: I1K}).
Indeed,
\[
\begin{aligned}
\sum_{K\in\mathcal{T}_h}I'_{1,K}(u,w_h)&=\frac{h_x^2}{12}\sum_{K\in\mathcal{T}_h}\int_{y'_K}^{y''_K}\left(\int_{x'_K}^{x''_K}
\frac{\partial^3u}{\partial x\partial y^2}\frac{\partial^3w_h}{\partial x\partial y^2}+\frac{\partial^4u}{\partial x^2\partial y^2}\frac{\partial^2w_h}{\partial y^2}\,\mathrm{d}x\right)\mathrm{d}y\\
&=\frac{h_x^2}{12}\sum_{K\in\mathcal{T}_h}\int_{y'_K}^{y''_K}\frac{\partial^3u}{\partial x\partial y^2}\frac{\partial^2w_h}{\partial y^2}\bigg|_{x=x''_K}-\frac{\partial^3u}{\partial x\partial y^2}\frac{\partial^2w_h}{\partial y^2}\bigg|_{x=x'_K}\,\mathrm{d}y\\
&=\frac{h_x^2}{12}\sum_{K\in\mathcal{T}_h}\Bigg(\int_{E_{2,K}}\frac{\partial^3u}{\partial x\partial y^2}
\left(\frac{\partial^2w_h}{\partial y^2}-\frac{\partial^2\mathcal{I}^{S_2}_Kw_h}{\partial y^2}\right)\,\mathrm{d}s\\
&~~~~~~~~~~~~~~~~~~-\int_{E_{4,K}}\frac{\partial^3u}{\partial x\partial y^2}
\left(\frac{\partial^2w_h}{\partial y^2}-\frac{\partial^2\mathcal{I}^{S_2}_Kw_h}{\partial y^2}\right)\,\mathrm{d}s\Bigg),
\end{aligned}
\]
where we have used the integration by parts formula,
the weak continuity of $w_h$ and the uniformity of the partition $\mathcal{T}_h$.
For each $K\in\mathcal{T}_h$,
\[
\begin{aligned}
(w_h|_K)|_{E_{2,K}}&\in P_2(E_{2,K})\oplus\mathrm{span}\{y^3,y^4\},\\
(w_h|_K)|_{E_{4,K}}&\in P_2(E_{4,K})\oplus\mathrm{span}\{y^3,y^4\}.
\end{aligned}
\]
Note that the $y^3$- and $y^4$-components of $w_h|_K$ do not involve $x$ and hence coincide on these two edges,
so the difference of the two restrictions, regarded as polynomials in $y$, lies in $P_2$;
since the restriction of $\mathcal{I}^{S_2}_Kw_h$ to each of these edges is the quadratic matching the two vertex values
and the edge integral of $w_h$ there, a closer observation for $\mathcal{I}^{S_2}_h$ gives
\begin{equation}
\label{e: key 2}
(w_h|_K-\mathcal{I}^{S_2}_hw_h)\big|_{E_{2,K}}=(w_h|_K-\mathcal{I}^{S_2}_hw_h)\big|_{E_{4,K}},~\forall w_h\in W_h.
\end{equation}
Hence, it follows from a standard argument and the inverse inequality that
\begin{equation}
\label{e: tangential estimate scalar}
\begin{aligned}
\left|\sum_{K\in\mathcal{T}_h}I'_{1,K}(u,w_h)\right|&=\frac{h_x^2}{12}\Bigg|\sum_{K\in\mathcal{T}_h}\Bigg(\int_{E_{2,K}}
\left(\frac{\partial^3u}{\partial x\partial y^2}-\mathcal{P}_0^K\frac{\partial^3u}{\partial x\partial y^2}\right)
\left(\frac{\partial^2w_h}{\partial y^2}-\frac{\partial^2\mathcal{I}^{S_2}_hw_h}{\partial y^2}\right)\,\mathrm{d}s\\
&~~~~~~~~~~~~~~~~~~-\int_{E_{4,K}}
\left(\frac{\partial^3u}{\partial x\partial y^2}-\mathcal{P}_0^K\frac{\partial^3u}{\partial x\partial y^2}\right)
\left(\frac{\partial^2w_h}{\partial y^2}-\frac{\partial^2\mathcal{I}^{S_2}_hw_h}{\partial y^2}\right)\,\mathrm{d}s\Bigg)\Bigg|\\
&\leq Ch^2\sum_{K\in\mathcal{T}_h}\sum_{i=2,4}Ch^{1/2}\left|\frac{\partial^3u}{\partial x\partial y^2}\right|_{1,K}
Ch^{-2}\|w_h-\mathcal{I}^{S_2}_Kw_h\|_{0,E_{i,K}}\\
&\left\{
\begin{aligned}
&\leq Ch^2\sum_{K\in\mathcal{T}_h}\sum_{i=2,4}Ch^{1/2}|u|_{4,K}Ch^{-2}Ch^{3/2}|w_h|_{2,K}\leq Ch^2|u|_4|w_h|_{2,h};\\
&\leq Ch^2\sum_{K\in\mathcal{T}_h}\sum_{i=2,4}Ch^{1/2}|u|_{4,K}Ch^{-2}Ch^{5/2}|w_h|_{3,K}\leq Ch^3|u|_4|w_h|_{3,h}.
\end{aligned}
\right.
\end{aligned}
\end{equation}
Combining this estimate with (\ref{e: I1K}) we get
\[
\left|\sum_{K\in\mathcal{T}_h}I_{1,K}(u,w_h)\right|\leq Ch^2|u|_4|w_h|_{2,h}.
\]
Similarly, it holds that
\[
\left|\sum_{K\in\mathcal{T}_h}I_{2,K}(u,w_h)\right|\leq Ch^2|u|_4|w_h|_{2,h},
\]
which along with (\ref{e: I1}), (\ref{e: I2}), (\ref{e: I4}) and (\ref{e: I3}) gives
\[
|E_{2,h}(u,w_h)|\leq Ch^2|u|_4|w_h|_{2,h},
\]
and the proof is completed.
\end{proof}
Thanks to the zeroth order weak continuity for the gradients of the functions in $W_h$,
we can obtain the discrete $H^1$ error estimate using the duality argument provided in \cite{Shi1986,Park2013}.
To proceed, we need to construct an auxiliary $C^0$ nonconforming element with the first order orthogonality for normal derivatives, which will be carried out in two steps.
In the first step,
define the local shape function space
\[
\widetilde{W}_K^*=P_3(K)\oplus\mathrm{span}\{x^3y,xy^3\}\oplus\mathrm{span}\{b_K\phi_{i,K},~i=1,2,\ldots,8\},~K\in\mathcal{T}_h,
\]
where $b_K$ is a bubble polynomial of degree four fulfilling $b_K|_{\partial K}=0$.
The functions $\phi_{i,K}$ are selected such that $\widetilde{W}_K^*$ is unisolvent to the DoF set
\[
\widetilde{T}_K^*=\left\{v(V_{i,K}),\nabla v(V_{i,K}),\int_{E_{i,K}}\frac{\partial v}{\partial \bm{n}}\,\mathrm{d}s,
\int_{E_{i,K}}\frac{\partial v}{\partial \bm{n}}\xi_{i,K}\,\mathrm{d}s,~i=1,2,3,4\right\},~\forall v\in \widetilde{W}_K^*.
\]
According to \cite{Chen2012}, a successful selection of these $\phi_{i,\widehat{K}}$ over the reference square $\widehat{K}=[-1,1]^2$ can be given as
\[
\widehat{x},\widehat{y},\widehat{x}^2\widehat{y},\widehat{x}\widehat{y}^2,
\widehat{x}^4,\widehat{y}^4,\widehat{x}^3\widehat{y},\widehat{x}\widehat{y}^3,
\]
then the physical $\phi_{i,K}$ will be obtained by an affine equivalent technique, $i=1,2,\ldots,8$.
The second step is to take a subspace of $\widetilde{W}_K^*$, named as $W_K^*$,
satisfying the following four linear relations of the DoFs in $\widetilde{T}_K^*$:
\begin{equation}
\label{e: linear relations scalar}
\begin{aligned}
\frac{1}{h_x}\int_{E_{i,K}}\frac{\partial v}{\partial y}\xi_{i,K}\,\mathrm{d}s
&=\frac{1}{6}\left(\frac{\partial v}{\partial y}(V_{i'',K})-\frac{\partial v}{\partial y}(V_{i',K})\right),~i=1,3,\\
\frac{1}{h_y}\int_{E_{j,K}}\frac{\partial v}{\partial x}\xi_{j,K}\,\mathrm{d}s
&=\frac{1}{6}\left(\frac{\partial v}{\partial x}(V_{j'',K})-\frac{\partial v}{\partial x}(V_{j',K})\right),~j=2,4,
~\forall v\in W_K^*,
\end{aligned}
\end{equation}
where $i''=3$ and $i'=4$ for $i=3$, and $i''=2$ and $i'=1$ for $i=1$;
$j''=3$ and $j'=2$ for $j=2$, and $j''=4$ and $j'=1$ for $j=4$.
By the Simpson quadrature rule, we find $P_3(K)\subset W_K^*$.
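To make this explicit (a short supplementary check): for $v\in P_3(K)$ and $i=1,3$, the restriction of
$\frac{\partial v}{\partial y}\,\xi_{i,K}$ to $E_{i,K}$ belongs to $P_3(E_{i,K})$, so the Simpson rule is exact and yields
\[
\frac{1}{h_x}\int_{E_{i,K}}\frac{\partial v}{\partial y}\,\xi_{i,K}\,\mathrm{d}s
=\frac{1}{6}\left(\frac{\partial v}{\partial y}(V_{i'',K})-\frac{\partial v}{\partial y}(V_{i',K})\right),
\]
and the relations for $j=2,4$ follow in the same way with the roles of $x$ and $y$ exchanged;
hence every $v\in P_3(K)$ fulfills (\ref{e: linear relations scalar}).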
The actual DoF set is then set as
\[
T_K^*=\left\{v(V_{i,K}),\nabla v(V_{i,K}),\int_{E_{i,K}}\frac{\partial v}{\partial \bm{n}}\,\mathrm{d}s,~i=1,2,3,4\right\},~\forall v\in W_K^*,
\]
and the global finite element space $W_h^*$ reads
\[
\begin{aligned}
W_h^*=\Bigg\{w&\in L^2(\Omega):~w|_K\in W_K^*,~\forall K\in\mathcal{T}_h,
~\mbox{$w$ and $\nabla w$ are continuous at all}\\
&\mbox{$V\in\mathcal{V}_h^i$ and vanish at all $V\in\mathcal{V}_h^b$},
~\int_E\left[\frac{\partial w}{\partial\bm{n}_E}\right]_E\,\mathrm{d}s=0
~\mbox{for all $E\in\mathcal{E}_h$}\Bigg\}.
\end{aligned}
\]
Clearly, $W_h^*\subset H_0^1(\Omega)$.
For a fixed $K$, the interpolation $\mathcal{I}_K^*$ from $H^3(K)$ to $W_K^*$ is defined such that $\tau(\mathcal{I}_K^*v)=\tau(v)$,
$\forall v\in H^3(K)$, $\forall \tau\in T_K^*$.
The global version $\mathcal{I}_h^*$ from $H^3(\Omega)\cap H_0^2(\Omega)$ to $W_h^*$ is naturally determined by $\mathcal{I}_h^*|_K=\mathcal{I}_K^*$, $\forall K\in\mathcal{T}_h$.
This element is indeed a rectangular extension of the triangular $C^0$ nonconforming element designed by Gao et al.~\cite{Gao2011}.
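For orientation, we also record a count of local dimensions (a side remark; the linear independence of the four
relations (\ref{e: linear relations scalar}) on $\widetilde{W}_K^*$ is taken for granted here, as the construction suggests):
\[
\dim \widetilde{W}_K^*=\dim P_3(K)+2+8=20,\qquad \dim W_K^*=20-4=16,
\]
which matches the $4+8+4=16$ functionals in $T_K^*$
(four vertex values, eight first-order derivative values and four edge integrals of normal derivatives).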
\begin{theorem}
\label{th: converge scalar H1}
Let the solution domain $\Omega$ be convex.
In addition, under the same assumptions in Theorem \ref{th: converge scalar}, we have
\[
|u-u_h|_{1,h}\leq Ch^3|u|_4.
\]
\end{theorem}
\begin{proof}
We first set $e_h=u-u_h$ and consider the following dual problem: Find $\phi\in H_0^2(\Omega)\cap H^3(\Omega)$,
such that
\begin{equation}
\label{e: dual problem scalar}
(\nabla^2 \phi,\nabla^2 v)=\sum_{K\in\mathcal{T}_h}(\nabla e_h,\nabla v)_K,~\forall v\in H_0^2(\Omega).
\end{equation}
For the convex domain $\Omega$,
it follows from \cite{Grisvard1985} that $\Delta^2$ is an isomorphism from $H_0^2(\Omega)\cap H^3(\Omega)$ onto $H^{-1}(\Omega)$, and therefore (\ref{e: dual problem scalar}) has a unique solution with the bound
\begin{equation}
\label{e: dual regular scalar}
|\phi|_3\leq C\sup_{v\in H_0^1(\Omega)}\frac{\sum_{K\in\mathcal{T}_h}(\nabla e_h,\nabla v)_K}{|v|_1}\leq C|e_h|_{1,h}.
\end{equation}
We will use the decomposition
\begin{equation}
\label{e: decompose scalar}
|e_h|_{1,h}^2=\sum_{K\in\mathcal{T}_h}\left(\nabla e_h,\nabla (e_h-\mathcal{I}^{S_2}_he_h)\right)_K+\sum_{K\in\mathcal{T}_h}\left(\nabla e_h,\nabla \mathcal{I}^{S_2}_he_h\right)_K.
\end{equation}
The first summation is estimated with the aid of (\ref{e: err scalar}) as
\begin{equation}
\label{e: dual part 1 scalar}
\begin{aligned}
\left|\sum_{K\in\mathcal{T}_h}\left(\nabla e_h,\nabla (e_h-\mathcal{I}^{S_2}_he_h)\right)_K\right|
&\leq C|e_h|_{1,h}|e_h-\mathcal{I}^{S_2}_he_h|_{1,h}\\
&\leq Ch|e_h|_{1,h}|e_h|_{2,h}\leq Ch^3|u|_4|e_h|_{1,h}.
\end{aligned}
\end{equation}
As far as the second summation in (\ref{e: decompose scalar}) is concerned,
note that $\mathcal{I}^{S_2}_he_h\in H_0^1(\Omega)$,
and a density argument using (\ref{e: dual problem scalar}) with an integration
by parts shows that
\begin{equation}
\label{e: dual error part 2 main scalar}
\begin{aligned}
\left|\sum_{K\in\mathcal{T}_h}\left(\nabla e_h,\nabla \mathcal{I}^{S_2}_he_h\right)_K\right|
&=\left|-\left(\nabla\Delta\phi,\nabla \mathcal{I}^{S_2}_he_h\right)\right|\\
&=\left|\sum_{K\in\mathcal{T}_h}\left(\nabla\Delta\phi,\nabla (e_h-\mathcal{I}^{S_2}_he_h)\right)_K
-\sum_{K\in\mathcal{T}_h}\left(\nabla\Delta\phi,\nabla e_h\right)_K\right|\\
&\leq C|\phi|_3|e_h-\mathcal{I}^{S_2}_he_h|_{1,h}+|I_3(\phi,e_h)|+|I_4(\phi,e_h)|
+\left|\sum_{K\in\mathcal{T}_h}(\nabla^2 \phi,\nabla^2 e_h)_K\right|\\
&\leq Ch|\phi|_3|e_h|_{2,h}+Ch|\phi|_3|e_h|_{2,h}+\left|\sum_{K\in\mathcal{T}_h}(\nabla^2 \phi,\nabla^2 e_h)_K\right|,
\end{aligned}
\end{equation}
where we have used the weak continuity of $e_h$, see e.g.~(\ref{e: normal weak scalar}) and (\ref{e: tangential weak scalar}). The first two terms in the last line of (\ref{e: dual error part 2 main scalar}) are bounded by
\[
Ch|\phi|_3|e_h|_{2,h}\leq Ch^3|\phi|_3|u|_4\leq Ch^3|u|_4|e_h|_{1,h}
\]
according to (\ref{e: err scalar}) and (\ref{e: dual regular scalar}).
Hence it remains to estimate the last term in the last line of (\ref{e: dual error part 2 main scalar}).
To this end,
we select $\phi^*=\mathcal{I}_h^*\phi\in W_h^*$, and then $\mathcal{I}_h\phi^*\in W_h$ due to the definitions of $W_h^*$ and $W_h$.
As a result,
it follows from (\ref{e: discrete weak scalar}) and (\ref{e: E2h}) that
\begin{equation}
\label{e: dual part 2 2 scalar}
\begin{aligned}
\sum_{K\in\mathcal{T}_h}(\nabla^2 \phi,\nabla^2 e_h)_K
&=\sum_{K\in\mathcal{T}_h}(\nabla^2 \mathcal{I}_h\phi^*,\nabla^2 e_h)_K
+\sum_{K\in\mathcal{T}_h}(\nabla^2(\phi-\mathcal{I}_h\phi^*),\nabla^2 e_h)_K\\
&=\sum_{K\in\mathcal{T}_h}(\nabla^2 u, \nabla^2 \mathcal{I}_h\phi^*)_K
-(f,\mathcal{I}_h\phi^*)+\sum_{K\in\mathcal{T}_h}(\nabla^2(\phi-\mathcal{I}_h\phi^*),\nabla^2 e_h)_K\\
&=E_{2,h}(u,\mathcal{I}_h\phi^*)+\sum_{K\in\mathcal{T}_h}(\nabla^2(\phi-\mathcal{I}_h\phi^*),\nabla^2 e_h)_K\\
&=\sum_{i=1}^4I_i(u,\mathcal{I}_h\phi^*)+\sum_{K\in\mathcal{T}_h}(\nabla^2(\phi-\mathcal{I}_h\phi^*),\nabla^2 e_h)_K.
\end{aligned}
\end{equation}
Invoking (\ref{e: I1}), (\ref{e: I2}) and (\ref{e: I4}), one sees
\begin{equation}
\label{e: collect dual 1 scalar}
|I_i(u,\mathcal{I}_h\phi^*)|\leq Ch^3|u|_4|\mathcal{I}_h\phi^*|_{3,h}\leq Ch^3|u|_4|\phi^*|_{3,h}
\leq Ch^3|u|_4|\phi|_3,~i=1,2,4.
\end{equation}
Furthermore, by the triangle inequality,
\begin{equation}
\label{e: collect dual 2 scalar}
\begin{aligned}
\left|\sum_{K\in\mathcal{T}_h}(\nabla^2(\phi-\mathcal{I}_h\phi^*),\nabla^2 e_h)_K\right|
&\leq C|\phi-\mathcal{I}_h\phi^*|_{2,h}|e_h|_{2,h}\\
&\leq C|\phi-\mathcal{I}_h^*\phi|_{2,h}|e_h|_{2,h}+C|\phi^*-\mathcal{I}_h\phi^*|_{2,h}|e_h|_{2,h}\\
&\leq Ch^3|u|_4|\phi|_3.
\end{aligned}
\end{equation}
It remains to estimate $I_3(u,\mathcal{I}_h\phi^*)$.
Indeed, we write $I_3(u,\mathcal{I}_h\phi^*)$ as
\[
I_3(u,\mathcal{I}_h\phi^*)=I_3(u,\mathcal{I}_h\phi^*-\phi^*)+I_3(u,\phi^*).
\]
Note that the linear relations in (\ref{e: linear relations scalar}) and the weak continuity of $\phi^*$ ensure that
\begin{equation}
\label{e: dual normal weak scalar}
\int_E\left[\frac{\partial \phi^*}{\partial \bm{n}_E}\right]_E\xi_E\,\mathrm{d}s=0,~\forall \xi_E\in P_1(E),
~\forall E\in\mathcal{E}_h,
\end{equation}
which gives
\begin{equation}
\label{e: collect dual 3 scalar}
\begin{aligned}
|I_3(u,\phi^*)|&=\left|\sum_{K\in\mathcal{T}_h}\sum_{E\subset\partial K}\int_E
\left(\frac{\partial^2u}{\partial\bm{n}^2}-\mathcal{P}_1^E\frac{\partial^2u}{\partial\bm{n}^2}
\right)\left(\frac{\partial \phi^*}{\partial \bm{n}}-\mathcal{P}_1^E\frac{\partial \phi^*}{\partial \bm{n}}\right)\,\mathrm{d}s\right|\\
&\leq\sum_{K\in\mathcal{T}_h}Ch^{3/2}|u|_{4,K}Ch^{3/2}|\phi^*|_{3,K}\\
&\leq Ch^3|u|_4|\phi^*|_{3,h}\leq Ch^3|u|_4|\phi|_3.
\end{aligned}
\end{equation}
Next we consider $I_3(u,\mathcal{I}_h\phi^*-\phi^*)$.
Just as in (\ref{e: I3}), by the weak continuity of both $W_h$ and $W_h^*$, we can write
\begin{equation}
\label{e: I3 dual}
I_3(u,\mathcal{I}_h\phi^*-\phi^*)=\sum_{K\in\mathcal{T}_h}I_{1,K}(u,\mathcal{I}_h\phi^*-\phi^*)
+\sum_{K\in\mathcal{T}_h}I_{2,K}(u,\mathcal{I}_h\phi^*-\phi^*).
\end{equation}
Let us investigate the relation between $I_{1,K}(u,\phi^*)$ and $I'_{1,K}(u,\phi^*)$
defined in (\ref{e: I3}) and (\ref{e: I'}) for each $K\in\mathcal{T}_h$.
It is clear that (\ref{e: equal scalar y}) still holds if $w_h$ is replaced by $\phi^*$.
Moreover, a careful comparison between (\ref{e: normal relation scalar}) and the first line in (\ref{e: linear relations scalar}) shows that they coincide if $w_h$ in (\ref{e: normal relation scalar}) is replaced by $v$
in (\ref{e: linear relations scalar}).
Hence, one can follow the same argument as in (\ref{e: normal relation scalar}) and (\ref{e: derivation x scalar})
to derive an analogue of (\ref{e: equal scalar x}).
As a consequence,
\[
I_{1,K}(u,\phi^*)-I'_{1,K}(u,\phi^*)=0,~\forall \frac{\partial^2u}{\partial y^2}\in P_1(K),
\]
which implies, by (\ref{e: bilinear scalar est 1}), (\ref{e: bilinear scalar est 2}), (\ref{e: key 1}) and the Bramble-Hilbert lemma, that
\begin{equation}
\label{e: normal to tangent dual}
\begin{aligned}
\left|I_{1,K}(u,\mathcal{I}_h\phi^*-\phi^*)-I'_{1,K}(u,\mathcal{I}_h\phi^*-\phi^*)\right|
&\leq Ch^2\left|\frac{\partial^2u}{\partial y^2}\right|_{2,K}|\mathcal{I}_h\phi^*-\phi^*|_{2,K}\\
&\leq Ch^3|u|_{4,K}|\phi^*|_{3,K}\leq Ch^3|u|_{4,K}|\phi|_{3,K}.
\end{aligned}
\end{equation}
Substituting (\ref{e: normal to tangent dual}) into (\ref{e: I3 dual}) results in
\begin{equation}
\label{e: I1K dual}
\sum_{K\in\mathcal{T}_h}I_{1,K}(u,\mathcal{I}_h\phi^*-\phi^*)\leq \sum_{K\in\mathcal{T}_h}I'_{1,K}(u,\mathcal{I}_h\phi^*-\phi^*)+Ch^3|u|_4|\phi|_3.
\end{equation}
The right-hand side of (\ref{e: I1K dual}) can be estimated by an integration by parts:
\[
\begin{aligned}
\sum_{K\in\mathcal{T}_h}I'_{1,K}(u,\mathcal{I}_h\phi^*-\phi^*)
=&\frac{h_x^2}{12}\Bigg(\sum_{K\in\mathcal{T}_h}\int_{y'_K}^{y''_K}\frac{\partial^3u}{\partial x\partial y^2}\frac{\partial^2\mathcal{I}_h\phi^*}{\partial y^2}\bigg|_{x=x''_K}-\frac{\partial^3u}{\partial x\partial y^2}\frac{\partial^2\mathcal{I}_h\phi^*}{\partial y^2}\bigg|_{x=x'_K}\,\mathrm{d}y\\
&~~~~~~~~~-\sum_{K\in\mathcal{T}_h}\int_{y'_K}^{y''_K}\frac{\partial^3u}{\partial x\partial y^2}\frac{\partial^2\phi^*}{\partial y^2}\bigg|_{x=x''_K}-\frac{\partial^3u}{\partial x\partial y^2}\frac{\partial^2\phi^*}{\partial y^2}\bigg|_{x=x'_K}\,\mathrm{d}y
\Bigg).\\
\end{aligned}
\]
The last summation immediately vanishes as $\phi^*\in H_0^1(\Omega)$.
Therefore using the argument in (\ref{e: tangential estimate scalar}), one derives
\[
\left|\sum_{K\in\mathcal{T}_h}I'_{1,K}(u,\mathcal{I}_h\phi^*-\phi^*)\right|
\leq Ch^3|u|_4|\mathcal{I}_h\phi^*|_{3,h}\leq Ch^3|u|_4|\phi|_3,
\]
which along with (\ref{e: I1K dual}) gives
\begin{equation}
\label{e: I1K final dual}
\left|\sum_{K\in\mathcal{T}_h}I_{1,K}(u,\mathcal{I}_h\phi^*-\phi^*)\right|\leq Ch^3|u|_4|\phi|_3.
\end{equation}
In a similar fashion, it holds that
\begin{equation}
\label{e: I2K final dual}
\left|\sum_{K\in\mathcal{T}_h}I_{2,K}(u,\mathcal{I}_h\phi^*-\phi^*)\right|\leq Ch^3|u|_4|\phi|_3.
\end{equation}
Substituting (\ref{e: I1K final dual}), (\ref{e: I2K final dual}) into (\ref{e: I3 dual})
and combining (\ref{e: collect dual 1 scalar}), (\ref{e: collect dual 2 scalar}) and (\ref{e: collect dual 3 scalar})
with (\ref{e: dual part 2 2 scalar}), we have
\[
\left|\sum_{K\in\mathcal{T}_h}(\nabla^2 \phi,\nabla^2 e_h)_K\right|\leq Ch^3|u|_4|\phi|_3\leq Ch^3|u|_4|e_h|_{1,h}.
\]
Note that we have again used the regularity condition (\ref{e: dual regular scalar}).
Finally, it follows from (\ref{e: decompose scalar}), (\ref{e: dual part 1 scalar}) and (\ref{e: dual error part 2 main scalar}) that
\[
|e_h|_{1,h}^2\leq Ch^3|u|_4|e_h|_{1,h}.
\]
The desired conclusion is obtained by dividing both sides by $|e_h|_{1,h}$.
\end{proof}
\begin{remark}
In our previous work \cite{Zhou2018} over general quadrilateral meshes,
the shape function space $W_K$ is of degree six,
so that the edge integrals can be replaced by edge midpoint values in the DoF set,
which seems cheaper in practical applications, especially for nonhomogeneous boundary conditions.
Nevertheless, when applied to uniform rectangular meshes,
the key features (\ref{e: key 1}) and (\ref{e: key 2}) remain valid,
and so the error estimates in Theorems \ref{th: converge scalar} and \ref{th: converge scalar H1} still hold.
\end{remark}
\section{High accuracy analysis of $\bm{V}_h\times P_h$ for Stokes problems}
\label{s: vector}
In a similar fashion, we will give the high accuracy analysis of the Stokes element $\bm{V}_h\times P_h$.
The Stokes problem for incompressible flow reads:
For a given force field $\bm{f}$,
find the velocity $\bm{u}$ and the pressure $p$ satisfying
\begin{equation}
\label{e: stokes problem}
\begin{aligned}
-\Delta\bm{u}+\nabla p&=\bm{f}~~~~~~&\mbox{in}~\Omega,\\
\mbox{div}\,\bm{u}&=0~&\mbox{in}~\Omega,\\
\bm{u}&=\bm{0}~&\mbox{on}~\partial\Omega.
\end{aligned}
\end{equation}
A weak formulation of (\ref{e: stokes problem}) is to find a
pair $(\bm{u},p)\in [H_0^1(\Omega)]^2\times L_0^2(\Omega)$ such that
\begin{equation}
\label{e: stokes continuous}
\begin{aligned}
(\nabla\bm{u},\nabla\bm{v})-(\mathrm{div}\,\bm{v},p)&=(\bm{f},\bm{v}),~&\forall \bm{v}\in[H_0^1(\Omega)]^2,\\
(\mathrm{div}\,\bm{u},q)&=0,~&\forall q\in L_0^2(\Omega).
\end{aligned}
\end{equation}
We will approximate (\ref{e: stokes continuous}) by seeking $(\bm{u}_h,p_h)\in\bm{V}_h\times P_h$ yielding
\begin{equation}
\label{e: stokes discrete}
\begin{aligned}
a_h(\bm{u}_h,\bm{v}_h)-b_h(\bm{v}_h,p_h)&=(\bm{f},\bm{v}_h),~\forall \bm{v}_h\in\bm{V}_h,\\
b_h(\bm{u}_h,q_h)&=0,~~\forall q_h\in P_h
\end{aligned}
\end{equation}
with bilinear forms
\[
a_h(\bm{u}_h,\bm{v}_h)=\sum_{K\in\mathcal{T}_h}(\nabla\bm{u}_h,\nabla\bm{v}_h)_K,
~b_h(\bm{v}_h,q_h)=(\mathrm{div}_h\bm{v}_h,q_h).
\]
Following a classical consistency error analysis,
only an $O(h)$ convergence order can be derived for $|\bm{u}-\bm{u}_h|_{1,h}$,
since one only has the following weak continuity for the tangential component:
\begin{equation}
\label{e: tangential weak stokes}
\int_E\left[\bm{w}_h\cdot\bm{t}_E\right]_E\,\mathrm{d}s=0,~\forall E\in\mathcal{E}_h,~\forall \bm{w}_h\in \bm{V}_h.
\end{equation}
As before, we have the following higher order error estimate.
\begin{theorem}
\label{th: converge stokes}
The discrete problem (\ref{e: stokes discrete}) has a unique solution $(\bm{u}_h,p_h)\in\bm{V}_h\times P_h$.
Let $(\bm{u},p)\in\left([H_0^1(\Omega)\cap H^3(\Omega)]^2\right)\times(L_0^2(\Omega)\cap H^2(\Omega))$ be the weak solution of (\ref{e: stokes continuous}) fulfilling $\bm{u}=\curl\,\phi$, $\phi\in H_0^2(\Omega)\cap H^4(\Omega)$.
If $\mathcal{T}_h$ is uniform, the following estimates hold:
\begin{equation}
\label{e: converge stokes}
|\bm{u}-\bm{u}_h|_{1,h}\leq Ch^2(|\phi|_4+|p|_2),~\|p-p_h\|_0\leq Ch|p|_1+Ch^2(|\phi|_4+|p|_2).
\end{equation}
\end{theorem}
\begin{proof}
From the commutative diagram (\ref{e: discrete complex}) we see $\mathrm{div}_h\bm{V}_h\subset P_h$ and
\[
\mathrm{div}_h\bm{\Pi}_h\bm{v}=\mathcal{P}_h\mathrm{div}\,\bm{v},~\forall\bm{v}\in [H_0^1(\Omega)]^2.
\]
Hence, using Fortin's trick and the continuous inf-sup condition, we obtain the following discrete version
\[
\sup_{\bm{v}_h\in\bm{V}_h}\frac{b_h(\bm{v}_h,q_h)}{|\bm{v}_h|_{1,h}}\geq C\|q_h\|_0,~\forall q_h\in P_h.
\]
By a standard argument of the mixed finite element method, e.g., Theorem 3.1 in \cite{Zhou2018} or
Theorem 5.2.6 in \cite{Boffi2013}, (\ref{e: stokes discrete}) has a unique solution $(\bm{u}_h,p_h)\in\bm{V}_h\times P_h$.
Moreover,
\begin{equation}
\label{e: abstract error}
\begin{aligned}
|\bm{u}-\bm{u}_h|_{1,h}&\leq C\left(\inf_{\bm{v_h}\in\bm{Z}_h}|\bm{u}-\bm{v}_h|_{1,h}
+\sup_{\bm{w}_h\in\bm{V}_h}\frac{E_{1,h}(\bm{u},p,\bm{w}_h)}{|\bm{w}_h|_{1,h}}\right),\\
\|p-p_h\|_0&\leq \|p-\mathcal{P}_hp\|_0+C\left(\inf_{\bm{v_h}\in\bm{Z}_h}|\bm{u}-\bm{v}_h|_{1,h}
+\sup_{\bm{w}_h\in\bm{V}_h}\frac{E_{1,h}(\bm{u},p,\bm{w}_h)}{|\bm{w}_h|_{1,h}}\right),
\end{aligned}
\end{equation}
where
\[
\bm{Z}_h=\{\bm{v}_h\in\bm{V}_h:~b_h(\bm{v}_h,q_h)=0,~\forall q_h\in P_h\}
=\{\bm{v}_h\in\bm{V}_h:~\mathrm{div}_h\bm{v}_h=0\},
\]
and the consistency term $E_{1,h}(\bm{u},p,\bm{w}_h)$ is given by
\begin{equation}
\label{e: consistency error vector}
\begin{aligned}
E_{1,h}(\bm{u},p,\bm{w}_h)&=b_h(\bm{w}_h,p)-a_h(\bm{u},\bm{w}_h)+(\bm{f},\bm{w}_h)\\
&=\sum_{K\in\mathcal{T}_h}\left(-\int_{\partial K}\frac{\partial\bm{u}}{\partial\bm{n}}\cdot
\bm{w}_h\,\mathrm{d}s+\int_{\partial K}p\bm{w}_h\cdot\bm{n}\,\mathrm{d}s\right).
\end{aligned}
\end{equation}
It follows from (\ref{e: interp err vector}) that
\[
\inf_{\bm{v_h}\in\bm{Z}_h}|\bm{u}-\bm{v}_h|_{1,h}\leq |\bm{u}-\bm{\Pi}_h\bm{u}|_{1,h}\leq Ch^2|\phi|_4
\]
as $\bm{\Pi}_h\bm{u}\in\bm{Z}_h$.
We shall therefore consider the consistency error.
To this end, we rewrite $E_{1,h}(\bm{u},p,\bm{w}_h)$ as
\[
\begin{aligned}
E_{1,h}(\bm{u},p,\bm{w}_h)=&-\sum_{K\in\mathcal{T}_h}
\int_{\partial K}\frac{\partial\bm{u}}{\partial\bm{n}}\cdot
(\bm{w}_h\cdot\bm{n})\bm{n}\,\mathrm{d}s
-\sum_{K\in\mathcal{T}_h}\int_{\partial K}\frac{\partial\bm{u}}{\partial\bm{n}}\cdot
(\bm{w}_h\cdot\bm{t})\bm{t}\,\mathrm{d}s\\
&+\sum_{K\in\mathcal{T}_h}\int_{\partial K}p\bm{w}_h\cdot\bm{n}\,\mathrm{d}s\\
:=&-J_1(\bm{u},\bm{w}_h)-J_2(\bm{u},\bm{w}_h)+J_3(p,\bm{w}_h).
\end{aligned}
\]
In fact, the weak continuity of $\bm{w}_h$ implies
\begin{equation}
\label{e: normal weak stokes}
\int_E\left[\bm{w}_h\cdot\bm{n}_E\right]_E\xi_E\,\mathrm{d}s=0,~\forall \xi_E\in P_1(E),
~\forall E\in\mathcal{E}_h,~\forall \bm{w}_h\in \bm{V}_h.
\end{equation}
Using a similar strategy as in (\ref{e: I4}) one sees
\begin{equation}
\label{e: 1 order otho vector est}
\begin{aligned}
|J_1(\bm{u},\bm{w}_h)|&
\left\{
\begin{aligned}
&\leq Ch^2|\bm{u}|_3|\bm{w}_h|_{1,h}\leq Ch^2|\phi|_4|\bm{w}_h|_{1,h};\\
&\leq Ch^3|\bm{u}|_3|\bm{w}_h|_{2,h}\leq Ch^3|\phi|_4|\bm{w}_h|_{2,h},\\
\end{aligned}
\right.\\
|J_3(p,\bm{w}_h)|&
\left\{
\begin{aligned}
&\leq Ch^2|p|_2|\bm{w}_h|_{1,h};\\
&\leq Ch^3|p|_2|\bm{w}_h|_{2,h},
\end{aligned}
\right.
\end{aligned}
\end{equation}
thus it remains to estimate $J_2(\bm{u},\bm{w}_h)$.
If we write $\bm{u}=(u_1,u_2)^T$ and $\bm{w}_h=(w_{h1},w_{h2})^T$,
we notice from the weak continuity of $\bm{w}_h$ that
\begin{equation}
\label{e: J2}
\begin{aligned}
J_2(\bm{u},\bm{w}_h)=&\sum_{K\in\mathcal{T}_h}\left(
\int_{E_{3,K}}\frac{\partial u_1}{\partial y}
\left(w_{h1}-\mathcal{P}_0^{E_{3,K}}w_{h1}\right)\,\mathrm{d}s
-\int_{E_{1,K}}\frac{\partial u_1}{\partial y}
\left(w_{h1}-\mathcal{P}_0^{E_{1,K}}w_{h1}\right)\,\mathrm{d}s\right)\\
&+\sum_{K\in\mathcal{T}_h}\left(
\int_{E_{2,K}}\frac{\partial u_2}{\partial x}
\left(w_{h2}-\mathcal{P}_0^{E_{2,K}}w_{h2}\right)\,\mathrm{d}s
-\int_{E_{4,K}}\frac{\partial u_2}{\partial x}
\left(w_{h2}-\mathcal{P}_0^{E_{4,K}}w_{h2}\right)\,\mathrm{d}s\right)\\
:=&\sum_{K\in\mathcal{T}_h}J_{1,K}(u_1,w_{h1})+\sum_{K\in\mathcal{T}_h}J_{2,K}(u_2,w_{h2}).
\end{aligned}
\end{equation}
Let us again focus only on $J_{1,K}(u_1,w_{h1})$.
On one hand,
\begin{equation}
\label{e: bilinear vector est 1}
|J_{1,K}(u_1,w_{h1})|\leq Ch\left|\frac{\partial u_1}{\partial y}\right|_{1,K}|w_{h1}|_{1,K}.
\end{equation}
On the other hand, define
\[
J'_{1,K}(u_1,w_{h1})=\frac{h_x^2}{12}\int_K\frac{\partial^2u_1}{\partial x\partial y}\frac{\partial^2w_{h1}}{\partial x\partial y}+\frac{\partial^3u_1}{\partial x^2\partial y}\frac{\partial w_{h1}}{\partial y}\,\mathrm{d}x\mathrm{d}y,
\]
then
\begin{equation}
\label{e: bilinear vector est 2}
|J'_{1,K}(u_1,w_{h1})|\leq Ch\left(\left|\frac{\partial u_1}{\partial y}\right|_{1,K}+h\left|\frac{\partial u_1}{\partial y}\right|_{2,K}\right)|w_{h1}|_{1,K}.
\end{equation}
Moreover, it follows from the definition of $\bm{V}_K$ that $w_{h1}|_K\in P_2(K)\oplus\mathrm{span}\{y^3\}$,
and hence,
\[
w_{h1}|_{E_{i,K}}\in P_2(E_{i,K}),~i=1,3.
\]
The Simpson quadrature rule ensures
\begin{equation}
\label{e: wh1 relation vector}
\frac{1}{h_x}\int_{E_{i,K}}w_{h1}\xi_{i,K}\,\mathrm{d}s=\frac{1}{6}\left(w_{h1}(V_{i'',K})-w_{h1}(V_{i',K})\right),
\end{equation}
where $i''=3$ and $i'=4$ for $i=3$, and $i''=2$ and $i'=1$ for $i=1$.
An argument just like in (\ref{e: normal relation scalar}) and (\ref{e: derivation x scalar}) leads to
\begin{equation}
\label{e: bilinear vector est 3}
J_{1,K}(u_1,w_{h1})-J'_{1,K}(u_1,w_{h1})=0,~\forall \frac{\partial u_1}{\partial y}\in P_1(K),
~\forall w_{h1}\in P_2(K)\oplus\mathrm{span}\{y^3\}.
\end{equation}
Collecting (\ref{e: bilinear vector est 1}), (\ref{e: bilinear vector est 2}) and (\ref{e: bilinear vector est 3})
with the Bramble-Hilbert lemma gives
\begin{equation}
\label{e: bilinear vector est 4}
|J_{1,K}(u_1,w_{h1})-J'_{1,K}(u_1,w_{h1})|\leq Ch^2\left|\frac{\partial u_1}{\partial y}\right|_{2,K}|w_{h1}|_{1,K}
\leq Ch^2|\bm{u}|_{3,K}|\bm{w}_h|_{1,K}.
\end{equation}
Substituting (\ref{e: bilinear vector est 4}) into (\ref{e: J2}) results in
\begin{equation}
\label{e: J1K}
\sum_{K\in\mathcal{T}_h}J_{1,K}(u_1,w_{h1})\leq \sum_{K\in\mathcal{T}_h}J'_{1,K}(u_1,w_{h1})+Ch^2|\phi|_4|\bm{w}_h|_{1,h}.
\end{equation}
Let us turn to the right-hand side in (\ref{e: J1K}).
We need to define the following interpolation operator $\Pi_{E_{i,K}}$ from $P_2(K)\oplus\mathrm{span}\{y^3\}$ to $P_1(E_{i,K})$
by setting
\[
\begin{aligned}
\int_{E_{i,K}}\Pi_{E_{i,K}}v\,\mathrm{d}s&=\int_{E_{i,K}}v\,\mathrm{d}s,\\
\int_{E_{i,K}}\left(\Pi_{E_{i,K}}v\right)\xi_{i,K}\,\mathrm{d}s&=\int_{E_{i,K}}v\xi_{i,K}\,\mathrm{d}s,\\
\forall v\in P_2(K)\oplus\mathrm{span}&\{y^3\},~i=2,4.
\end{aligned}
\]
A simple calculation shows that this interpolation is well-defined.
Moreover,
\begin{equation}
\label{e: key vector 2}
v|_{E_{2,K}}-\Pi_{E_{2,K}}v=v|_{E_{4,K}}-\Pi_{E_{4,K}}v,~\forall v\in P_2(K)\oplus\mathrm{span}\{y^3\}.
\end{equation}
Hence, according to the weak continuity of the normal components of $\bm{w}_h$ over $E_{2,K}$ and $E_{4,K}$,
it holds that
\[
\begin{aligned}
\sum_{K\in\mathcal{T}_h}J'_{1,K}(u_1,w_{h1})
&=\frac{h_x^2}{12}\sum_{K\in\mathcal{T}_h}\int_{y'_K}^{y''_K}\frac{\partial^2u_1}{\partial x\partial y}\frac{\partial w_{h1}}{\partial y}\bigg|_{x=x''_K}-\frac{\partial^2u_1}{\partial x\partial y}\frac{\partial w_{h1}}{\partial y}\bigg|_{x=x'_K}\,\mathrm{d}y\\
&=\frac{h_x^2}{12}\sum_{K\in\mathcal{T}_h}\Bigg(\int_{E_{2,K}}\left(\frac{\partial^2u_1}{\partial x\partial y}
-\mathcal{P}_0^K\frac{\partial^2u_1}{\partial x\partial y}\right)
\left(\frac{\partial w_{h1}}{\partial y}-\frac{\partial\Pi_{E_{2,K}}w_{h1}}{\partial y}\right)\,\mathrm{d}s\\
&~~~~~~~~~~~~~~~~~~-\int_{E_{4,K}}\left(\frac{\partial^2u_1}{\partial x\partial y}
-\mathcal{P}_0^K\frac{\partial^2u_1}{\partial x\partial y}\right)
\left(\frac{\partial w_{h1}}{\partial y}-\frac{\partial\Pi_{E_{4,K}}w_{h1}}{\partial y}\right)\,\mathrm{d}s\Bigg).
\end{aligned}
\]
Hence,
\begin{equation}
\label{e: tangential estimate stokes}
\left|\sum_{K\in\mathcal{T}_h}J'_{1,K}(u_1,w_{h1})\right|
\left\{
\begin{aligned}
&\leq Ch^2\sum_{K\in\mathcal{T}_h}Ch^{1/2}|u_1|_{3,K}Ch^{-1}Ch^{1/2}|w_{h1}|_{1,K}
\leq Ch^2|\bm{u}|_3|\bm{w}_h|_{1,h},\\
&\leq Ch^2\sum_{K\in\mathcal{T}_h}Ch^{1/2}|u_1|_{3,K}Ch^{-1}Ch^{3/2}|w_{h1}|_{2,K}
\leq Ch^3|\bm{u}|_3|\bm{w}_h|_{2,h},
\end{aligned}
\right.
\end{equation}
and therefore by (\ref{e: J1K}),
\[
\left|\sum_{K\in\mathcal{T}_h}J_{1,K}(u_1,w_{h1})\right|\leq Ch^2|\phi|_4|\bm{w}_h|_{1,h}.
\]
An almost identical argument applies to $J_{2,K}(u_2,w_{h2})$, which along with (\ref{e: J2}) gives
\[
|J_2(\bm{u},\bm{w}_h)|\leq Ch^2|\phi|_4|\bm{w}_h|_{1,h},
\]
and then
\[
|E_{1,h}(\bm{u},p,\bm{w}_h)|\leq Ch^2(|\phi|_4+|p|_2)|\bm{w}_h|_{1,h},
\]
which establishes the error estimate for $|\bm{u}-\bm{u}_h|_{1,h}$ due to (\ref{e: abstract error}).
Again owing to (\ref{e: abstract error}), the estimate for the pressure is simply based on the fact above and the approximation order $O(h)$ of the projection $\mathcal{P}_h$.
\end{proof}
From Theorem \ref{th: converge stokes}, the velocity error has a higher accuracy of order $O(h^2)$,
while the pressure can only achieve an $O(h)$ convergence rate,
as the approximation error is only $O(h)$.
Nevertheless, this can be improved by a simple postprocessing
based on the idea from \cite{Falk2013} or \cite{Neilan2018}.
For any $K\in\mathcal{T}_h$, we define $p_K^*\in P_1(K)$ by setting
\[
\begin{aligned}
(\nabla p^*_K,\nabla q)_K &= (\Delta\bm{u}_h+\bm{f},\nabla q)_K,~\forall q \in P_1(K),\\
\int_K p^*_K \,\mathrm{d}x\mathrm{d}y&=\int_K p_h\,\mathrm{d}x\mathrm{d}y.
\end{aligned}
\]
Then the postprocessed solution $p_h^*$ is discontinuous and piecewise linear,
defined via $p_h^*|_K=p_K^*$, $\forall K\in\mathcal{T}_h$.
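In an implementation, this post-processing reduces to an elementwise computation:
testing the gradient condition with $q=x$ and $q=y$ determines the linear part of $p_K^*$
from the mean value of $\Delta\bm{u}_h+\bm{f}$ over $K$, and the mean-value constraint fixes
the constant part. The following sketch illustrates one possible realization in Python;
the quadrature rule on $K$ and the callable returning $\Delta\bm{u}_h+\bm{f}$ are assumed
to be supplied, and all names are illustrative.
\begin{verbatim}
import numpy as np

# Illustrative sketch of the local pressure post-processing on one element K.
# quad_pts/quad_wts: a quadrature rule on K; lap_uh_plus_f(x, y): a 2-vector
# with the value of Delta u_h + f at (x, y); ph_mean: mean of p_h on K.
def postprocess_on_K(quad_pts, quad_wts, lap_uh_plus_f, ph_mean, K_area):
    r_mean = np.zeros(2)
    xbar = ybar = 0.0
    for (x, y), w in zip(quad_pts, quad_wts):
        r_mean += w * np.asarray(lap_uh_plus_f(x, y))
        xbar += w * x
        ybar += w * y
    r_mean /= K_area; xbar /= K_area; ybar /= K_area
    c1, c2 = r_mean                       # coefficients of x and y in p*_K
    c0 = ph_mean - c1 * xbar - c2 * ybar  # mean of p*_K equals mean of p_h
    return c0, c1, c2                     # p*_K(x, y) = c0 + c1*x + c2*y
\end{verbatim}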
The following result can be derived in a similar manner as Theorem 4.4 in \cite{Neilan2018},
and so the proof is omitted.
\begin{theorem}
\label{th: post p}
Under the same assumptions as in Theorem \ref{th: converge stokes}, we have
\[
\|p-p_h^*\|_0\leq Ch^2(|\phi|_4+|p|_2).
\]
\end{theorem}
In what follows, we shall provide the $L^2$-error estimate for the velocity by a duality argument.
To begin with, an $H(\mathrm{div})$-conforming but $H^1$-nonconforming element will be constructed using $W_h^*$.
Over each $K\in\mathcal{T}_h$, the shape function space $\bm{V}_K^*$ is defined by
\[
\begin{aligned}
\bm{V}_K^*&=[P_1(K)]^2+\mathrm{span}\{\curl\,w,~w\in W_K^*\}\\
&=\mathrm{span}\left\{(x,y)^T\right\}\oplus\mathrm{span}\{\curl\,w,~w\in W_K^*\},
\end{aligned}
\]
and the DoF set is given as
\[
\Sigma_K^*=\left\{\bm{v}(V_{i,K}),\int_{E_{i,K}}\bm{v}\,\mathrm{d}s,~i=1,2,3,4\right\},~\forall \bm{v}\in \bm{V}_K^*.
\]
Using the technique provided in Theorem 2.5 in \cite{Zhou2018} and the unisolvency of $W_K^*$ with respect to $T_K^*$,
the element $(K,\bm{V}_K^*,\Sigma_K^*)$ is well-defined.
By the Simpson quadrature rule applied to $(x,y)^T$ and (\ref{e: linear relations scalar}),
the following relations hold:
\begin{equation}
\label{e: linear relations Stokes}
\begin{aligned}
\frac{1}{h_x}\int_{E_{i,K}}v_1\xi_{i,K}\,\mathrm{d}s
&=\frac{1}{6}\left(v_1(V_{i'',K})-v_1(V_{i',K})\right),~i=1,3,\\
\frac{1}{h_y}\int_{E_{j,K}}v_2\xi_{j,K}\,\mathrm{d}s
&=\frac{1}{6}\left(v_2(V_{j'',K})-v_2(V_{j',K})\right),~j=2,4,
~\forall \bm{v}=(v_1,v_2)^T\in \bm{V}_K^*
\end{aligned}
\end{equation}
with $i''=3$ and $i'=4$ for $i=3$, and $i''=2$ and $i'=1$ for $i=1$;
$j''=3$ and $j'=2$ for $j=2$, and $j''=4$ and $j'=1$ for $j=4$.
The global finite element space $\bm{V}_h^*$ is set as
\[
\begin{aligned}
\bm{V}_h^*=\Bigg\{\bm{v}&\in [L^2(\Omega)]^2:~\bm{v}|_K\in \bm{V}_K^*,~\forall K\in\mathcal{T}_h,
~\mbox{$\bm{v}$ is continuous at all}\\
&\mbox{$V\in\mathcal{V}_h^i$ and vanishes at all $V\in\mathcal{V}_h^b$},
~\int_E\left[\bm{v}\right]_E\,\mathrm{d}s=\bm{0}
~\mbox{for all $E\in\mathcal{E}_h$}\Bigg\}.
\end{aligned}
\]
Clearly, $\bm{V}_h^*\subset \bm{H}_0(\mathrm{div};\Omega)$.
For a fixed $K$, the interpolation $\bm{\Pi}_K^*$ from $[H^2(K)]^2$ to $\bm{V}_K^*$ is defined such that $\sigma(\bm{\Pi}_K^*\bm{v})=\sigma(\bm{v})$,
$\forall \bm{v}\in [H^2(K)]^2$, $\forall \sigma\in \Sigma_K^*$.
The global version $\bm{\Pi}_h^*$ from $[H^2(\Omega)\cap H_0^1(\Omega)]^2$ to $\bm{V}_h^*$ is naturally determined by $\bm{\Pi}_h^*|_K=\bm{\Pi}_K^*$, $\forall K\in\mathcal{T}_h$.
\begin{remark}
Using a standard argument (see e.g.~Theorem 4.1 in \cite{Zhou2018}),
we have constructed the following discrete Stokes complex:
\[
\begin{tikzcd}[column sep=large, row sep=large]
0 \arrow{r} & W_h^* \arrow{r}{\curl_h}
& \bm{V}_h^* \arrow{r}{\mathrm{div}_h}& P_h\arrow{r}&0.
\end{tikzcd}
\]
\end{remark}
\begin{theorem}
\label{th: converge stokes L2}
Let the solution domain $\Omega$ be convex.
In addition, under the same assumptions as in Theorem \ref{th: converge stokes}, we have
\[
\|\bm{u}-\bm{u}_h\|_0\leq Ch^3(|\phi|_4+|p|_2).
\]
\end{theorem}
\begin{proof}
Set $\bm{e}_h=\bm{u}-\bm{u}_h$ and consider the following dual problem:
Find $\bm{\psi}$ and $\chi$ satisfying
\begin{equation}
\label{e: stokes dual problem}
\begin{aligned}
-\Delta\bm{\psi}+\nabla \chi&=\bm{e}_h~~~~~~&\mbox{in}~\Omega,\\
\mbox{div}\,\bm{\psi}&=0~&\mbox{in}~\Omega,\\
\bm{\psi}&=\bm{0}~&\mbox{on}~\partial\Omega,
\end{aligned}
\end{equation}
with the weak form
\[
\begin{aligned}
(\nabla\bm{\psi},\nabla\bm{v})-(\mathrm{div}\,\bm{v},\chi)&=(\bm{e}_h,\bm{v}),~&\forall \bm{v}\in[H_0^1(\Omega)]^2,\\
(\mathrm{div}\,\bm{\psi},q)&=0,~&\forall q\in L_0^2(\Omega),
\end{aligned}
\]
where $(\bm{\psi},\chi)\in [H_0^1(\Omega)]^2\times L_0^2(\Omega)$.
Owing to the convexity of $\Omega$, it follows from \cite{Grisvard1985} that
$(\bm{\psi},\chi)\in [H^2(\Omega)]^2\times H^1(\Omega)$ yielding the regularity
\begin{equation}
\label{e: dual regular stokes}
|\bm{\psi}|_2+|\chi|_1\leq C\|\bm{e}_h\|_0.
\end{equation}
Multiplying both sides of (\ref{e: stokes dual problem}) by $\bm{e}_h$ and integrating by parts show that
\begin{equation}
\label{e: dual test e_h}
\begin{aligned}
\|\bm{e}_h\|_0^2&=a_h(\bm{\psi},\bm{e}_h)-b_h(\bm{e}_h,\chi)+E_{1,h}(\bm{\psi},\chi,\bm{e}_h)\\
&=a_h(\bm{\psi},\bm{e}_h)+E_{1,h}(\bm{\psi},\chi,\bm{e}_h).
\end{aligned}
\end{equation}
On the other hand,
we define $\bm{\psi}^*=\bm{\Pi}_h^*\bm{\psi}\in\bm{V}_h^*$ and then $\bm{\Pi}_h\bm{\psi}^*\in\bm{V}_h$ in light of the definitions of $\bm{V}_h^*$ and $\bm{V}_h$ and the fact that $\bm{\psi}^*\in\bm{H}_0(\mathrm{div};\Omega)$.
Moreover, on each $K$, by Green's formula,
\[
\begin{aligned}
\mathrm{div}\,(\bm{\Pi}_h\bm{\psi}^*|_K)&=\frac{1}{|K|}\int_K\mathrm{div}\,\bm{\Pi}_h\bm{\psi}^*\,\mathrm{d}x\mathrm{d}y
=\frac{1}{|K|}\int_K\mathrm{div}\,\bm{\psi}^*\,\mathrm{d}x\mathrm{d}y\\
&=\frac{1}{|K|}\int_K\mathrm{div}\,\bm{\psi}\,\mathrm{d}x\mathrm{d}y=0,
\end{aligned}
\]
therefore $\bm{\Pi}_h\bm{\psi}^*\in\bm{Z}_h$.
Taking $\bm{v}_h=\bm{\Pi}_h\bm{\psi}^*$ in (\ref{e: stokes discrete}), multiplying both sides of (\ref{e: stokes problem})
by $\bm{\Pi}_h\bm{\psi}^*$ and integrating by parts, we get
\begin{equation}
\label{e: test pihe_h^*}
a_h(\bm{e}_h,\bm{\Pi}_h\bm{\psi}^*)+E_{1,h}(\bm{u},p,\bm{\Pi}_h\bm{\psi}^*)=0.
\end{equation}
The difference of (\ref{e: dual test e_h}) and (\ref{e: test pihe_h^*}) gives
\begin{equation}
\label{e: basic dual stokes}
\|\bm{e}_h\|_0^2=a_h(\bm{\psi}-\bm{\Pi}_h\bm{\psi}^*,\bm{e}_h)+E_{1,h}(\bm{\psi},\chi,\bm{e}_h)
-E_{1,h}(\bm{u},p,\bm{\Pi}_h\bm{\psi}^*).
\end{equation}
The first term is bounded by the triangle inequality, the Bramble-Hilbert lemma and (\ref{e: converge stokes}):
\begin{equation}
\label{e: final dual 1}
\begin{aligned}
|a_h(\bm{\psi}-\bm{\Pi}_h\bm{\psi}^*,\bm{e}_h)|&\leq |\bm{\psi}-\bm{\Pi}_h\bm{\psi}^*|_{1,h}|\bm{e}_h|_{1,h}\\
&\leq \left(|\bm{\psi}-\bm{\Pi}_h^*\bm{\psi}|_{1,h}+|\bm{\psi}^*-\bm{\Pi}_h\bm{\psi}^*|_{1,h}\right)|\bm{e}_h|_{1,h}\\
&\leq Ch^3(|\phi|_4+|p|_2)|\bm{\psi}|_2.
\end{aligned}
\end{equation}
The second term is estimated using the lowest order inter-element orthogonality
(\ref{e: tangential weak stokes}) and (\ref{e: normal weak stokes}):
\begin{equation}
\label{e: final dual 2}
|E_{1,h}(\bm{\psi},\chi,\bm{e}_h)|\leq Ch(|\bm{\psi}|_2+|\chi|_1)|\bm{e}_h|_{1,h}
\leq Ch^3(|\bm{\psi}|_2+|\chi|_1)(|\phi|_4+|p|_2).
\end{equation}
As far as the last term is concerned, we see from (\ref{e: 1 order otho vector est}) that
\begin{equation}
\label{e: final dual 3}
\begin{aligned}
|E_{1,h}(\bm{u},p,\bm{\Pi}_h\bm{\psi}^*)|&\leq |J_1(\bm{u},\bm{\Pi}_h\bm{\psi}^*)|+|J_2(\bm{u},\bm{\Pi}_h\bm{\psi}^*)|+|J_3(p,\bm{\Pi}_h\bm{\psi}^*)|\\
&\leq Ch^3(|\phi|_4+|p|_2)|\bm{\Pi}_h\bm{\psi}^*|_{2,h}+|J_2(\bm{u},\bm{\Pi}_h\bm{\psi}^*)|\\
&\leq Ch^3(|\phi|_4+|p|_2)|\bm{\psi}|_2+|J_2(\bm{u},\bm{\Pi}_h\bm{\psi}^*)|.
\end{aligned}
\end{equation}
Hence, it suffices to investigate $J_2(\bm{u},\bm{\Pi}_h\bm{\psi}^*)$.
To proceed, we decompose $J_2(\bm{u},\bm{\Pi}_h\bm{\psi}^*)$ as
\begin{equation}
\label{e: J2 dual}
J_2(\bm{u},\bm{\Pi}_h\bm{\psi}^*)=J_2(\bm{u},\bm{\Pi}_h\bm{\psi}^*-\bm{\psi}^*)+J_2(\bm{u},\bm{\psi}^*).
\end{equation}
It follows from (\ref{e: linear relations Stokes}) and the continuity of $\bm{\psi}^*$ at the two endpoints of each edge that
\[
\int_E\left[\bm{\psi}^*\cdot\bm{t}_E\right]_E\xi_E\,\mathrm{d}s=0,~\forall \xi_E\in P_1(E),~\forall E\in\mathcal{E}_h,
\]
therefore
\begin{equation}
\label{e: J2 dual part 2}
|J_2(\bm{u},\bm{\psi}^*)|\leq\sum_{K\in\mathcal{T}_h}Ch^{3/2}|\bm{u}|_{3,K}Ch^{3/2}|\bm{\psi}^*|_{2,K}
\leq Ch^3|\phi|_4|\bm{\psi}|_2.
\end{equation}
If we write $\bm{\Pi}_h\bm{\psi}^*-\bm{\psi}^*=(\psi_{h1}^-,\psi_{h2}^-)^T$ for short,
then
\begin{equation}
\label{e: J2 dual new}
J_2(\bm{u},\bm{\Pi}_h\bm{\psi}^*-\bm{\psi}^*)=\sum_{K\in\mathcal{T}_h}J_{1,K}(u_1,\psi_{h1}^-)
+\sum_{K\in\mathcal{T}_h}J_{2,K}(u_2,\psi_{h2}^-).
\end{equation}
From the first relation in (\ref{e: linear relations Stokes}), (\ref{e: bilinear vector est 3})
and the derivation of (\ref{e: bilinear vector est 3}),
we have
\[
J_{1,K}(u_1,\psi_{h1}^-)-J'_{1,K}(u_1,\psi_{h1}^-)=0,~\forall \frac{\partial u_1}{\partial y}\in P_1(K),
\]
and thus by the Bramble-Hilbert lemma as in (\ref{e: bilinear vector est 4}),
\begin{equation}
\label{e: normal to tangent dual stokes}
\begin{aligned}
\left|J_{1,K}(u_1,\psi_{h1}^-)-J'_{1,K}(u_1,\psi_{h1}^-)\right|
&\leq Ch^2\left|u_1\right|_{3,K}|\psi_{h1}^-|_{1,K}\\
&\leq Ch^3|\bm{u}|_{3,K}|\bm{\Pi}_h\bm{\psi}^*-\bm{\psi}^*|_{1,K}\\
&\leq Ch^3|\bm{u}|_{3,K}|\bm{\psi}^*|_{2,K}\leq Ch^3|\phi|_{4,K}|\bm{\psi}|_{2,K}.
\end{aligned}
\end{equation}
Substituting (\ref{e: normal to tangent dual stokes}) into (\ref{e: J2 dual}) results in
\begin{equation}
\label{e: J1K dual}
\sum_{K\in\mathcal{T}_h}J_{1,K}(u_1,\psi_{h1}^-)
\leq \sum_{K\in\mathcal{T}_h}J'_{1,K}(u_1,\psi_{h1}^-)+Ch^3|\phi|_4|\bm{\psi}|_2.
\end{equation}
Moreover, since $\bm{\psi}^*\in\bm{H}_0(\mathrm{div};\Omega)$,
we find by considering (\ref{e: tangential estimate stokes}) that
\[
\left|\sum_{K\in\mathcal{T}_h}J'_{1,K}(u_1,\psi_{h1}^-)\right|
\leq Ch^3|\bm{u}|_3|\bm{\Pi}_h\bm{\psi}^*|_{2,h}\leq Ch^3|\phi|_4|\bm{\psi}|_2,
\]
and therefore by (\ref{e: J1K dual}),
\[
\left|\sum_{K\in\mathcal{T}_h}J_{1,K}(u_1,\psi_{h1}^-)\right|\leq Ch^3|\phi|_4|\bm{\psi}|_2.
\]
The analysis for $J_{2,K}(u_2,\psi_{h2}^-)$ is similar, which along with (\ref{e: J2 dual}), (\ref{e: J2 dual part 2})
and (\ref{e: J2 dual new}) gives
\[
|J_2(\bm{u},\bm{\Pi}_h\bm{\psi}^*)|\leq Ch^3|\phi|_4|\bm{\psi}|_2.
\]
Collecting this result and (\ref{e: final dual 1}), (\ref{e: final dual 2}), (\ref{e: final dual 3}) with
(\ref{e: basic dual stokes}), one gets
\[
\|\bm{e}_h\|_0^2\leq Ch^3(|\phi|_4+|p|_2)(|\bm{\psi}|_2+|\chi|_1)\leq Ch^3(|\phi|_4+|p|_2)\|\bm{e}_h\|_0,
\]
where we have used the regularity (\ref{e: dual regular stokes}).
Finally, this theorem is established by dividing both sides by $\|\bm{e}_h\|_0$.
\end{proof}
\begin{remark}
Again, the mixed finite element designed for Brinkman problems in \cite{Zhou2018} has similar properties
to the pair $\bm{V}_h\times P_h$ in this work,
and so the arguments in Theorems \ref{th: converge stokes} and \ref{th: converge stokes L2} are also valid.
This explains the high accuracy phenomenon observed in the numerical tests
in \cite{Zhou2018} under uniform rectangular partitions.
\end{remark}
\section{Numerical examples}
\label{s: numer ex}
Numerical tests are given in this section.
The solution domain is set as $[0,2]\times[0,1]$,
with uniform $n\times n$ rectangular partitions $\{\mathcal{T}_h\}$ constructed.
It is clear that
\[
h_x=\frac{2}{n},~h_y=\frac{1}{n},~h^2=h_x^2+h_y^2.
\]
We first test $W_h$ for the biharmonic problem (\ref{e: model problem scalar}),
where the exact solution is given by
\begin{equation}
\label{e: example scalar}
u =(3x^2-2y+6xy^2)(x(x-2)y(y-1))^2.
\end{equation}
The errors in various norms are illustrated in Table \ref{t: scalar ours}.
One sees that the convergence orders in discrete $H^2$- and $H^1$-norms are precisely $O(h^2)$ and $O(h^3)$,
respectively, as predicted in Theorems \ref{th: converge scalar} and \ref{th: converge scalar H1}.
The $L^2$ error seems to be of order four, which will be studied in our future work.
\begin{table}[!htb]
\begin{center}
\begin{tabular}{p{1cm}<{\centering}p{2cm}<{\centering}p{1cm}<{\centering}p{2cm}<{\centering}p{1cm}<{\centering}
p{2cm}<{\centering}p{1.5cm}<{\centering}}
\toprule
$n$ &$|u-u_h|_{2,h}$ & order & $|u-u_h|_{1,h}$ & order &$\|u-u_h\|_0$ & order \\
\midrule
$4$ &1.209E0 & &7.041E-2 & &9.209E-3 & \\
$8$ &3.528E-1 &1.78 &9.173E-3 &2.94 &4.139E-4 &4.48 \\
$16$ &8.880E-2 &1.99 &1.063E-3 &3.11 &2.060E-5 &4.33 \\
$32$ &2.140E-2 &2.05 &1.242E-4 &3.10 &1.202E-6 &4.10 \\
$64$ &5.198E-3 &2.04 &1.487E-5 &3.06 &7.428E-8 &4.02 \\
\bottomrule
\end{tabular}
\caption{The discrete $H^2$, $H^1$ and $L^2$ errors produced by $W_h$ applied to the biharmonic problem determined by (\ref{e: example scalar}) for different $n$.
\label{t: scalar ours}}
\end{center}
\end{table}
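The ``order'' columns here and in the subsequent tables are the observed rates under mesh
doubling, i.e.~$\log_2(e_n/e_{2n})$ for two consecutive errors $e_n$ and $e_{2n}$. As a quick
illustration, using the $|u-u_h|_{2,h}$ column of Table \ref{t: scalar ours} as input:
\begin{verbatim}
import math

# Observed convergence rates under mesh doubling: order = log2(e_n / e_{2n}).
errors = [1.209e0, 3.528e-1, 8.880e-2, 2.140e-2, 5.198e-3]
for e_coarse, e_fine in zip(errors, errors[1:]):
    print(round(math.log2(e_coarse / e_fine), 2))
# prints 1.78, 1.99, 2.05, 2.04, matching the "order" column of the table
\end{verbatim}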
As a comparison, we also check the performance of the well-known Adini element \cite{Lascaux1975} for the same problem.
The number of local DoFs is 12 for both elements,
and the global DoFs of the Adini element are fewer than those of $W_h$.
However, the space $W_h$ is highly nonconforming,
enjoying a cheap local communication when the method is implemented,
especially for parallel computing.
As shown by the error analysis in \cite{Lascaux1975,Luo2004,Hu2016},
the Adini element can also achieve a second order convergence rate in the discrete $H^2$-norm.
Moreover, its absolute errors are slightly lower than those produced by $W_h$ (see Table \ref{t: scalar adini}) due to the strong continuity in the tangential direction.
However, its errors in the discrete $H^1$- and $L^2$-norms are only $O(h^2)$,
of lower order than those of $W_h$,
which is consistent with the lower bound estimate given in \cite{Hu2013,Hu2016}.
\begin{table}[!htb]
\begin{center}
\begin{tabular}{p{1cm}<{\centering}p{2cm}<{\centering}p{1cm}<{\centering}p{2cm}<{\centering}p{1cm}<{\centering}
p{2cm}<{\centering}p{1.5cm}<{\centering}}
\toprule
$n$ &$|u-u_h|_{2,h}$ & order & $|u-u_h|_{1,h}$ & order &$\|u-u_h\|_0$ & order \\
\midrule
$4$ &1.112E0 & &1.270E-1 & &2.195E-2 & \\
$8$ &3.113E-1 &1.84 &3.060E-2 &2.05 &6.283E-3 &1.80 \\
$16$ &7.681E-2 &2.02 &7.642E-3 &2.00 &1.636E-3 &1.94 \\
$32$ &1.901E-2 &2.01 &1.915E-3 &2.00 &4.137E-4 &1.98 \\
$64$ &4.738E-3 &2.00 &4.791E-4 &2.00 &1.037E-4 &2.00 \\
\bottomrule
\end{tabular}
\caption{The discrete $H^2$, $H^1$ and $L^2$ errors produced by the Adini element for the biharmonic problem determined by (\ref{e: example scalar}) for different $n$.
\label{t: scalar adini}}
\end{center}
\end{table}
We end this work by testing the divergence-free Stokes element $\bm{V}_h\times P_h$ for the model problem (\ref{e: stokes problem}) determined by
\begin{equation}
\label{e: example stokes}
\bm{u}=\curl\left(\exp(x+2y)(x(x-2)y(y-1))^2\right),~p=-\sin2\pi x\sin2\pi y.
\end{equation}
The numerical results are provided in Table \ref{t: stokes}.
One can observe that the convergence orders for the velocity, the pressure and the postprocessed pressure
are as predicted in Theorems \ref{th: converge stokes} and \ref{th: post p}.
Moreover,
the $L^2$ errors of the velocity have an $O(h^3)$ convergence order,
which agrees with the assertion in Theorem \ref{th: converge stokes L2}.
\begin{table}[!htb]
\begin{center}
\begin{tabular}{p{0.7cm}<{\centering}p{1.6cm}<{\centering}p{0.8cm}<{\centering}p{1.6cm}<{\centering}p{0.8cm}<{\centering}
p{1.6cm}<{\centering}p{0.8cm}<{\centering}p{1.6cm}<{\centering}p{0.8cm}<{\centering}}
\toprule
$n$ &$|\bm{u}-\bm{u}_h|_{1,h}$ & order & $\|\bm{u}-\bm{u}_h\|_0$ & order
& $\|p-p_h\|_0$ & order & $\|p-p_h^*\|_0$ & order\\
\midrule
$4$ &2.473E0 & &1.100E-1 & &6.569E-1 & &1.045E0 & \\
$8$ &7.505E-1 &1.72 &1.657E-2 &2.73 &3.485E-1 &0.91 &3.187E-1 &1.71\\
$16$ &1.968E-1 &1.93 &2.305E-3 &3.03 &1.772E-1 &0.98 &6.173E-2 &2.37\\
$32$ &4.846E-2 &2.02 &2.399E-4 &3.08 &8.932E-2 &0.99 &9.938E-3 &2.64\\
$64$ &1.188E-2 &2.03 &2.878E-5 &3.06 &4.477E-2 &1.00 &1.869E-3 &2.41\\
\bottomrule
\end{tabular}
\caption{The errors for the velocity and the pressure produced by $\bm{V}_h\times P_h$ for the Stokes problem determined by (\ref{e: example stokes}) for different $n$.
\label{t: stokes}}
\end{center}
\end{table}
\newpage
\section{Introduction}
In this article we develop the method of expansion
and mean-square approximation of multiple Ito stochastic integrals,
based on generalized multiple Fourier series, converging in the mean,
which was proposed by the author in \cite{2006} - \cite{2017b}.
Hereinafter, this method is referred to as the method of multiple Fourier series.
As it turned out, the adaptation of the method of multiple
Fourier series to multiple Stratonovich stochastic integrals
leads to simpler expansions of multiple stochastic integrals (see \cite{2011} - \cite{2017},
\cite{2017f}).
The article is devoted to the adaptation
of the method of multiple Fourier series to the triple Stratonovich
stochastic integrals from the so-called stochastic Taylor-Stratonovich
expansion \cite{1}, \cite{2}. In this article we consider multiple Fourier-Legendre series
as well as multiple trigonometric Fourier series. Moreover, we
consider the general case of series summation.
In Section 1 we formulate the theorem (Theorem 1) which is the
basis of the method of multiple Fourier series.
In Section 2 we formulate and prove the theorem (Theorem 2)
on the expansion of triple Stratonovich stochastic integrals with
constant weight functions, using triple Fourier-Legendre series (the case
of a multi-dimensional Wiener process). In
Section 3 we consider the generalization of Theorem 2 to the
case of binomial weight functions.
In the last section (Section 4) we obtain the analogue
of Theorem 1, using triple trigonometric Fourier series.
The results of the article can be useful for
numerical integration of Ito stochastic differential equations
in accordance with the strong (mean-square) criterion \cite{1}, \cite{2}.
Let $(\Omega,$ ${\rm F},$ ${\sf P})$ be a complete probability space, let
$\{{\rm F}_t, t\in[0,T]\}$ be a nondecreasing right-continuous family of
$\sigma$-subfields of ${\rm F},$
and let ${\bf f}_t$ be a standard $m$-dimensional Wiener stochastic process, which is
${\rm F}_t$-measurable for any $t\in[0, T].$ We assume that the components
${\bf f}_{t}^{(i)}$ $(i=1,\ldots,m)$ of this process are independent.
Hereafter we call a stochastic process
$\xi:\ [0,T]\times\Omega\rightarrow \Re^1$
non-anticipative when it is
measurable with respect to the pair of variables
$(t,\omega),$
the function $\xi(t,\omega)\stackrel{\sf \small def}{=}\xi_t$ is
${\rm F}_t$-measurable for all
$t\in[0,T],$\ and $\xi_{\tau}$\ is
independent of the increments ${\bf f}_{t+\Delta}-{\bf f}_{\Delta}$\
for $\Delta\ge \tau,\ t>0.$
Let's consider
the following multiple Ito and Stratonovich
stochastic integrals:
\begin{equation}
\label{ito}
J[\psi^{(k)}]_{T,t}=\int\limits_t^T\psi_k(t_k) \ldots \int\limits_t^{t_{2}}
\psi_1(t_1) d{\bf w}_{t_1}^{(i_1)}\ldots
d{\bf w}_{t_k}^{(i_k)},
\end{equation}
\begin{equation}
\label{str}
J^{*}[\psi^{(k)}]_{T,t}=
\int\limits_t^{*T}\psi_k(t_k) \ldots \int\limits_t^{*t_{2}}
\psi_1(t_1) d{\bf w}_{t_1}^{(i_1)}\ldots
d{\bf w}_{t_k}^{(i_k)},
\end{equation}
where every $\psi_l(\tau)\ (l=1,\ldots,k)$ is
a non-random function on $[t, T]$;
${\bf w}_{\tau}^{(i)}={\bf f}_{\tau}^{(i)}$\
when $i=1,\ldots,m;$\
${\bf w}_{\tau}^{(0)}=\tau;$\
$i_1,\ldots,i_k=0,\ 1,\ldots,m;$
$\int\limits$ and $\int\limits^{*}$ denote It$\hat{\rm o}$ and
Stratonovich integrals,
respectively.
Suppose that every $\psi_l(\tau)$ $(l=1,\ldots,k)$ is a continuous
function on $[t, T]$.
Define the following function on a hypercube $[t, T]^k:$
\begin{equation}
\label{ppp}
K(t_1,\ldots,t_k)=
\prod\limits_{l=1}^k\psi_l(t_l)\prod\limits_{l=1}^{k-1}{\bf 1}_{\{t_l<t_{l+1}\}};\ t_1,\ldots,t_k\in[t, T];\ k\ge 2,
\end{equation}
and
$K(t_1)=\psi_1(t_1);\ t_1\in[t, T],$
where ${\bf 1}_A$ is the indicator of the set $A$.
Suppose that $\{\phi_j(x)\}_{j=0}^{\infty}$
is a complete orthonormal system of functions in
$L_2([t, T])$.
The function $K(t_1,\ldots,t_k)$ is piecewise continuous in the
hypercube $[t, T]^k.$
In this situation it is well known that the multiple Fourier series
of $K(t_1,\ldots,t_k)\in L_2([t, T]^k)$ converges
to $K(t_1,\ldots,t_k)$ in the hypercube $[t, T]^k$ in
the mean-square sense, i.e.
\begin{equation}
\label{sos1z}
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm lim}\cr
$\stackrel{}{{}_{p_1,\ldots,p_k\to \infty}}$\cr
}} }\biggl\Vert
K(t_1,\ldots,t_k)-
\sum_{j_1=0}^{p_1}\ldots \sum_{j_k=0}^{p_k}
C_{j_k\ldots j_1}\prod_{l=1}^{k} \phi_{j_l}(t_l)\biggr\Vert=0,
\end{equation}
where
\begin{equation}
\label{ppppa}
C_{j_k\ldots j_1}=\int\limits_{[t,T]^k}
K(t_1,\ldots,t_k)\prod_{l=1}^{k}\phi_{j_l}(t_l)dt_1\ldots dt_k,
\end{equation}
and
$$
\left\Vert f\right\Vert^2=\int\limits_{[t,T]^k}
f^2(t_1,\ldots,t_k)dt_1\ldots dt_k.
$$
Consider the partition $\{\tau_j\}_{j=0}^N$ of $[t,T]$ such that
\begin{equation}
\label{1111}
t=\tau_0<\ldots <\tau_N=T,\
\Delta_N=
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm max}\cr
$\stackrel{}{{}_{0\le j\le N-1}}$\cr
}} }\Delta\tau_j\to 0\ \hbox{if}\ N\to \infty,\ \Delta\tau_j=\tau_{j+1}-\tau_j.
\end{equation}
{\bf Theorem 1} (see \cite{2006} - \cite{2017}). {\it Suppose that
every $\psi_l(\tau)$ $(l=1,\ldots, k)$ is a continuous function on
$[t, T]$ and
$\{\phi_j(x)\}_{j=0}^{\infty}$ is a complete orthonormal system
of continuous functions in $L_2([t,T]).$ Then
\begin{equation}
\label{tyyy}
J[\psi^{(k)}]_{T,t}=
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm l.i.m.}\cr
$\stackrel{}{{}_{p_1,\ldots,p_k\to \infty}}$\cr
}} }\sum_{j_1=0}^{p_1}\ldots\sum_{j_k=0}^{p_k}
C_{j_k\ldots j_1}\biggl(
\prod_{g=1}^k\zeta_{j_g}^{(i_g)}-
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm l.i.m.}\cr
$\stackrel{}{{}_{N\to \infty}}$\cr
}} }\sum_{(l_1,\ldots,l_k)\in {G}_k}
\prod_{g=1}^k
\phi_{j_{g}}(\tau_{l_g})
\Delta{\bf w}_{\tau_{l_g}}^{(i_g)}\biggr),
\end{equation}
where
$$
{\rm G}_k={\rm H}_k\backslash{\rm L}_k;\
{\rm H}_k=\{(l_1,\ldots,l_k):\ l_1,\ldots,l_k=0,\ 1,\ldots,N-1\};
$$
$$
{\rm L}_k=\{(l_1,\ldots,l_k):\ l_1,\ldots,l_k=0,\ 1,\ldots,N-1;\
l_g\ne l_r\ (g\ne r);\ g, r=1,\ldots,k\};
$$
\vspace{2mm}
\noindent
${\rm l.i.m.}$ is a limit in the mean-square sense;
$i_1,\ldots,i_k=0,1,\ldots,m;$
every
\begin{equation}
\label{rr23}
\zeta_{j}^{(i)}=
\int\limits_t^T \phi_{j}(s) d{\bf w}_s^{(i)}
\end{equation}
is a standard Gaussian random variable, and these variables are independent
for various
$i$\ or $j$ {\rm(}if $i\ne 0${\rm);}
$C_{j_k\ldots j_1}$ is the Fourier coefficient {\rm(\ref{ppppa});}
$\Delta{\bf w}_{\tau_{j}}^{(i)}=
{\bf w}_{\tau_{j+1}}^{(i)}-{\bf w}_{\tau_{j}}^{(i)}$
$(i=0,\ 1,\ldots,m);$\
$\left\{\tau_{j}\right\}_{j=0}^{N}$ is a partition of
$[t,T],$ which satisfies the condition {\rm (\ref{1111})}.
}
Let's consider the following transformed particular case of Theorem 1 for
$k=3:$
$$
J[\psi^{(3)}]_{T,t}=
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm l.i.m.}\cr
$\stackrel{}{{}_{p_1,\ldots,p_3\to \infty}}$\cr
}} }\sum_{j_1=0}^{p_1}\sum_{j_2=0}^{p_2}\sum_{j_3=0}^{p_3}
C_{j_3j_2j_1}\biggl(
\zeta_{j_1}^{(i_1)}\zeta_{j_2}^{(i_2)}\zeta_{j_3}^{(i_3)}
-\biggr.
$$
\begin{equation}
\label{leto5002}
-{\bf 1}_{\{i_1=i_2\ne 0\}}
{\bf 1}_{\{j_1=j_2\}}
\zeta_{j_3}^{(i_3)}
-{\bf 1}_{\{i_2=i_3\ne 0\}}
{\bf 1}_{\{j_2=j_3\}}
\zeta_{j_1}^{(i_1)}-
{\bf 1}_{\{i_1=i_3\ne 0\}}
{\bf 1}_{\{j_1=j_3\}}
\zeta_{j_2}^{(i_2)}\biggr),
\end{equation}
where ${\bf 1}_A$ is the indicator of the set $A$.
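For illustration, the expansion (\ref{leto5002}) can be evaluated in a simulation as follows:
the random variables $\zeta_j^{(i)}$ from (\ref{rr23}) are approximated by forward sums over a
fine partition of $[t,T]$, and the truncated triple sum is then formed. The sketch below shows
this for pairwise distinct $i_1,i_2,i_3$ (so that the indicator corrections vanish); the array
of Fourier coefficients $C_{j_3j_2j_1}$ and the routine $\phi(j,s)$ implementing the chosen
orthonormal system are assumed to be available, and the code is only illustrative.
\begin{verbatim}
import numpy as np

# Illustrative sketch: approximate zeta_j^{(i)} by forward sums over a fine
# partition of [t, T], then evaluate the truncated expansion for k = 3 with
# pairwise distinct i_1, i_2, i_3 (the indicator terms vanish in this case).
# phi(j, s) is assumed to evaluate the chosen orthonormal system (vectorized).
def triple_integral_sample(C, p, phi, t, T, N, rng):
    tau = np.linspace(t, T, N + 1)
    dW = rng.standard_normal((3, N)) * np.sqrt(np.diff(tau))   # increments
    zeta = np.empty((3, p + 1))
    for j in range(p + 1):
        zeta[:, j] = dW @ phi(j, tau[:-1])   # phi_j at left endpoints
    J = 0.0
    for j1 in range(p + 1):
        for j2 in range(p + 1):
            for j3 in range(p + 1):
                J += C[j3, j2, j1] * zeta[0, j1] * zeta[1, j2] * zeta[2, j3]
    return J
\end{verbatim}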
\vspace{2mm}
\section{Expansion of multiple Stratonovich stochastic integrals of multiplicity 3.
The case of Legendre Polynomials}
\vspace{2mm}
{\bf Theorem 2} (see \cite{2011} - \cite{2017}). {\it Suppose that
$\{\phi_j(x)\}_{j=0}^{\infty}$ is a complete orthonormal
system of Legendre polynomials
in the space $L_2([t, T])$.
Then, for the multiple Stratonovich stochastic integral of {\rm 3}rd multiplicity
$$
{\int\limits_t^{*}}^T
{\int\limits_t^{*}}^{t_3}
{\int\limits_t^{*}}^{t_2}
d{\bf f}_{t_1}^{(i_1)}
d{\bf f}_{t_2}^{(i_2)}d{\bf f}_{t_3}^{(i_3)}\
$$
$(i_1, i_2, i_3=1,\ldots,m)$
the following expansion, converging in the mean-square sense,
\begin{equation}
\label{feto19001}
{\int\limits_t^{*}}^T
{\int\limits_t^{*}}^{t_3}
{\int\limits_t^{*}}^{t_2}
d{\bf f}_{t_1}^{(i_1)}
d{\bf f}_{t_2}^{(i_2)}d{\bf f}_{t_3}^{(i_3)}\
=
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm l.i.m.}\cr
$\stackrel{}{{}_{p_1,p_2,p_3\to \infty}}$\cr
}} }\sum_{j_1=0}^{p_1}\sum_{j_2=0}^{p_2}\sum_{j_3=0}^{p_3}
C_{j_3 j_2 j_1}\zeta_{j_1}^{(i_1)}\zeta_{j_2}^{(i_2)}\zeta_{j_3}^{(i_3)}
\stackrel{\rm def}{=}
\sum\limits_{j_1, j_2, j_3=0}^{\infty}
C_{j_3 j_2 j_1}\zeta_{j_1}^{(i_1)}\zeta_{j_2}^{(i_2)}\zeta_{j_3}^{(i_3)}
\end{equation}
is valid, where
$$
C_{j_3 j_2 j_1}=\int\limits_t^T
\phi_{j_3}(s)\int\limits_t^s
\phi_{j_2}(s_1)
\int\limits_t^{s_1}
\phi_{j_1}(s_2)ds_2ds_1ds.
$$
}
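For small indices the coefficients $C_{j_3 j_2 j_1}$ can be computed by iterated numerical
quadrature; the following sketch (an illustration only, assuming NumPy and SciPy are available,
and using the normalized Legendre system on $[t,T]$):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_legendre

# Illustrative computation of C_{j3 j2 j1} for the normalized Legendre
# system phi_j on [t, T] by iterated quadrature (slow, small indices only).
def phi(j, s, t, T):
    x = (s - (T + t) / 2.0) * 2.0 / (T - t)
    return np.sqrt((2 * j + 1) / (T - t)) * eval_legendre(j, x)

def C(j3, j2, j1, t=0.0, T=1.0):
    f1 = lambda s1: quad(lambda s2: phi(j1, s2, t, T), t, s1)[0]
    f2 = lambda s: quad(lambda s1: phi(j2, s1, t, T) * f1(s1), t, s)[0]
    return quad(lambda s: phi(j3, s, t, T) * f2(s), t, T)[0]

# For example, C(0, 0, 0) equals (T-t)^{3/2}/6, i.e. about 0.1667 for T-t=1.
\end{verbatim}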
{\bf Proof.} If we prove the following formulas:
\begin{equation}
\label{ogo12}
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm l.i.m.}\cr
$\stackrel{}{{}_{p_1, p_3\to \infty}}$\cr
}} }
\sum\limits_{j_1=0}^{p_1}\sum\limits_{j_3=0}^{p_3}
C_{j_3 j_1 j_1}\zeta_{j_3}^{(i_3)}
\stackrel{{\rm def}}{=}
\sum\limits_{j_1, j_3=0}^{\infty}
C_{j_3 j_1 j_1}\zeta_{j_3}^{(i_3)}=
\frac{1}{4}(T-t)^{\frac{3}{2}}\left(
\zeta_0^{(i_3)}+\frac{1}{\sqrt{3}}\zeta_1^{(i_3)}\right),
\end{equation}
\begin{equation}
\label{ogo13}
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm l.i.m.}\cr
$\stackrel{}{{}_{p_1, p_3\to \infty}}$\cr
}} }
\sum\limits_{j_1=0}^{p_1}\sum\limits_{j_3=0}^{p_3}
C_{j_3 j_3 j_1}\zeta_{j_1}^{(i_1)}
\stackrel{{\rm def}}{=}
\sum\limits_{j_1, j_3=0}^{\infty}
C_{j_3 j_3 j_1}\zeta_{j_1}^{(i_1)}=
\frac{1}{4}(T-t)^{\frac{3}{2}}\left(
\zeta_0^{(i_1)}-\frac{1}{\sqrt{3}}\zeta_1^{(i_1)}\right),
\end{equation}
\begin{equation}
\label{ogo13a}
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm l.i.m.}\cr
$\stackrel{}{{}_{p_1, p_3\to \infty}}$\cr
}} }
\sum\limits_{j_1=0}^{p_1}\sum\limits_{j_3=0}^{p_3}
C_{j_1 j_3 j_1}\zeta_{j_3}^{(i_2)}
\stackrel{{\rm def}}{=}
\sum\limits_{j_1, j_3=0}^{\infty}
C_{j_1 j_3 j_1}\zeta_{j_3}^{(i_2)}=0,
\end{equation}
then, in accordance with Theorem 1 (see (\ref{leto5002})),
formulas (\ref{ogo12}) -- (\ref{ogo13a}),
standard
relations between multiple Stratonovich and Ito
stochastic integrals, as well as the following formulas (which also follow
from Theorem 1):
$$
\frac{1}{2}\int\limits_t^T\int\limits_t^{\tau}dsd{\bf f}_{\tau}^{(i_3)}=
\frac{1}{4}(T-t)^{\frac{3}{2}}\left(
\zeta_0^{(i_3)}+\frac{1}{\sqrt{3}}\zeta_1^{(i_3)}\right)\ \hbox{w. p. 1},
$$
$$
\frac{1}{2}\int\limits_t^T\int\limits_t^{\tau}d{\bf f}_{s}^{(i_1)}d\tau=
\frac{1}{4}(T-t)^{\frac{3}{2}}\left(
\zeta_0^{(i_1)}-\frac{1}{\sqrt{3}}\zeta_1^{(i_1)}\right)\ \hbox{w. p. 1}
$$
we will have
$$
\int\limits_t^T\int\limits_t^{t_3}\int\limits_t^{t_2}
d{\bf f}_{t_1}^{(i_1)}d{\bf f}_{t_2}^{(i_2)}d{\bf f}_{t_3}^{(i_3)}=
\sum\limits_{j_1, j_2, j_3=0}^{\infty}
C_{j_3 j_2 j_1}\zeta_{j_1}^{(i_1)}\zeta_{j_2}^{(i_2)}\zeta_{j_3}^{(i_3)}-
{\bf 1}_{\{i_1=i_2\}}
\frac{1}{2}\int\limits_t^T\int\limits_t^{\tau}dsd{\bf f}_{\tau}^{(i_3)}-
{\bf 1}_{\{i_2=i_3\}}
\frac{1}{2}\int\limits_t^T\int\limits_t^{\tau}d{\bf f}_{s}^{(i_1)}d\tau.
$$
This means that the expansion (\ref{feto19001}) will be proven.
Let's first prove that:
\begin{equation}
\label{ogo3}
\sum\limits_{j_1=0}^{\infty}C_{0 j_1 j_1}=\frac{1}{4}(T-t)^{\frac{3}{2}},
\end{equation}
\begin{equation}
\label{ogo4}
\sum\limits_{j_1=0}^{\infty}C_{1 j_1 j_1}=\frac{1}{4\sqrt{3}}(T-t)^{\frac{3}{2}}.
\end{equation}
We have
$$
C_{000}=\frac{(T-t)^{\frac{3}{2}}}{6};
$$
\begin{equation}
\label{ogo6}
C_{0 j_1 j_1}=\int\limits_t^T\phi_0(s)\int\limits_t^s\phi_{j_1}(s_1)
\int\limits_t^{s_1}\phi_{j_1}(s_2)ds_2ds_1ds
=\frac{1}{2}\int\limits_t^T\phi_0(s)\left(
\int\limits_t^s\phi_{j_1}(s_1)ds_1\right)^2ds;\ j_1\ge 1.
\end{equation}
Here $\phi_j(s)$ looks as follows:
\begin{equation}
\label{ogo7}
\phi_j(s)=\sqrt{\frac{2j+1}{T-t}}P_j\left(\left(
s-\frac{T+t}{2}\right)\frac{2}{T-t}\right);\ j\ge 0,
\end{equation}
where $P_j(x)$ is the Legendre polynomial.
Let's substitute (\ref{ogo7}) into (\ref{ogo6}) and
calculate $C_{0 j_1 j_1};$ $j_1\ge 1$:
$$
C_{0 j_1 j_1}=\frac{2j_1+1}{2(T-t)^{\frac{3}{2}}}
\int\limits_t^T
\left(\int\limits_{-1}^{z(s)}
P_{j_1}(y)\frac{T-t}{2}dy\right)^2ds=
$$
$$
=\frac{(2j_1+1)\sqrt{T-t}}{8}
\int\limits_t^T
\left(\int\limits_{-1}^{z(s)}
\frac{1}{2j_1+1}\left(P_{j_1+1}^{'}(y)-P_{j_1-1}^{'}(y)\right)dy
\right)^2ds=
$$
\begin{equation}
\label{ogo8}
=\frac{\sqrt{T-t}}{8(2j_1+1)}
\int\limits_t^T\left(P_{j_1+1}(z(s))-P_{j_1-1}(z(s))\right)^2ds,
\end{equation}
where here and below
$$
z(s)=\left(s-\frac{T+t}{2}\right)\frac{2}{T-t},
$$
and
we used the following well-known properties of Legendre polynomials:
$$
P_j(y)=\frac{1}{2j+1}\left(P_{j+1}^{'}(y)-P_{j-1}^{'}(y)\right);\
P_j(-1)=(-1)^j;\ j\ge 1.
$$
Also, we denote
$$
\frac{dP_j}{dy}(y)\stackrel{{\rm def}}{=}P_j^{'}(y).
$$
From (\ref{ogo8}) using the property of orthogonality of Legendre
polynomials we get the following relation
$$
C_{0 j_1 j_1}=\frac{(T-t)^{\frac{3}{2}}}{16(2j_1+1)}
\int\limits_{-1}^1\left(P_{j_1+1}^2(y)+P_{j_1-1}^2(y)\right)dy=
\frac{(T-t)^{\frac{3}{2}}}{8(2j_1+1)}
\left(\frac{1}{2j_1+3}+\frac{1}{2j_1-1}\right),
$$
where we used the relation
$$
\int\limits_{-1}^1 P_j^2(y)dy=\frac{2}{2j+1};\ j\ge 0.
$$
Then
$$
\sum\limits_{j_1=0}^{\infty}C_{0 j_1 j_1}=
\frac{(T-t)^{\frac{3}{2}}}{6}+
\frac{(T-t)^{\frac{3}{2}}}{8}
\left(
\sum_{j_1=1}^{\infty}\frac{1}{(2j_1+1)(2j_1+3)}+
\sum_{j_1=1}^{\infty}\frac{1}{4j_1^2-1}\right)=
$$
$$
=\frac{(T-t)^{\frac{3}{2}}}{6}+\frac{(T-t)^{\frac{3}{2}}}{8}
\left(\sum_{j_1=1}^{\infty}\frac{1}{4j_1^2-1}-\frac{1}{3}
+\sum_{j_1=1}^{\infty}\frac{1}{4j_1^2-1}\right)=
$$
$$
=\frac{(T-t)^{\frac{3}{2}}}{6}+\frac{(T-t)^{\frac{3}{2}}}{8}
\left(\frac{1}{2}-\frac{1}{3}+\frac{1}{2}\right)=
\frac{(T-t)^{\frac{3}{2}}}{4}.
$$
\vspace{1mm}
The relation (\ref{ogo3}) is proven.
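This identity is also easy to confirm numerically from the closed form of $C_{0 j_1 j_1}$
obtained above (here $T-t=1$; the check is only illustrative):
\begin{verbatim}
# Partial sums of C_{0 j1 j1} for T - t = 1, using the closed form above:
# C_{000} = 1/6,  C_{0 j1 j1} = (1/(8(2j1+1))) (1/(2j1+3) + 1/(2j1-1)).
s = 1.0 / 6.0
for j1 in range(1, 10**6):
    s += (1.0 / (8.0 * (2 * j1 + 1))) * (1.0 / (2 * j1 + 3) + 1.0 / (2 * j1 - 1))
print(s)   # tends to 0.25 = 1/4
\end{verbatim}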
Let's check the correctness of (\ref{ogo4}). We represent $C_{1 j_1 j_1}$ in the form:
$$
C_{1 j_1 j_1}=\frac{1}{2}\int\limits_t^T
\phi_1(s)\left(\int\limits_t^s\phi_{j_1}(s_1)ds_1\right)^2 ds=
$$
$$
=\frac{(T-t)^{\frac{3}{2}}(2j_1+1)\sqrt{3}}{16}
\int\limits_{-1}^{1}
P_1(y)\left(\int\limits_{-1}^y P_{j_1}(y_1)dy_1\right)^2 dy;\ j_1\ge 1.
$$
Since the functions
$$
\left(\int\limits_{-1}^y P_{j_1}(y_1)dy_1\right)^2;\ j_1\ge 1
$$
are even, the functions
$$
P_1(y)\left(\int\limits_{-1}^y P_{j_1}(y_1)dy_1\right)^2;\ j_1\ge 1
$$
are odd.
This means that $C_{1 j_1 j_1}=0;$ $j_1\ge 1.$ On the other hand:
$$
C_{100}=\frac{\sqrt{3}(T-t)^{\frac{3}{2}}}{16}
\int\limits_{-1}^1 y(y+1)^2 dy=\frac{(T-t)^{\frac{3}{2}}}{4\sqrt{3}}.
$$
Then
$$
\sum\limits_{j_1=0}^{\infty}C_{1 j_1 j_1}=C_{100}+
\sum\limits_{j_1=1}^{\infty}C_{1 j_1 j_1}=
\frac{(T-t)^{\frac{3}{2}}}{4\sqrt{3}}.
$$
The relation (\ref{ogo4}) is proven.
Let's prove the equality (\ref{ogo12}). Using (\ref{ogo4}) we get
$$
\sum\limits_{j_1=0}^{p_1}\sum\limits_{j_3=0}^{p_3}
C_{j_3 j_1 j_1}\zeta_{j_3}^{(i_3)}=
\sum\limits_{j_1=0}^{p_1}C_{0 j_1 j_1}\zeta_{0}^{(i_3)}+
\frac{(T-t)^{\frac{3}{2}}}{4\sqrt{3}}\zeta_{1}^{(i_3)}+
\sum\limits_{j_1=0}^{p_1}\sum\limits_{j_3=2}^{p_3}
C_{j_3 j_1 j_1}\zeta_{j_3}^{(i_3)}=
$$
\begin{equation}
\label{ogo15}
=\sum\limits_{j_1=0}^{p_1}C_{0 j_1 j_1}\zeta_{0}^{(i_3)}+
\frac{(T-t)^{\frac{3}{2}}}{4\sqrt{3}}\zeta_{1}^{(i_3)}+
\sum\limits_{j_1=0}^{p_1}\sum\limits_{j_3=2, j_3 - {\rm even}}^{2j_1+2}
C_{j_3 j_1 j_1}\zeta_{j_3}^{(i_3)}.
\end{equation}
Since
$$
C_{j_3j_1j_1}=\frac{(T-t)^{\frac{3}{2}}(2j_1+1)\sqrt{2j_3+1}}{16}
\int\limits_{-1}^{1}
P_{j_3}(y)\left(\int\limits_{-1}^y P_{j_1}(y_1)dy_1\right)^2 dy
$$
and the degree of polynomial
$$
\left(\int\limits_{-1}^y P_{j_1}(y_1)dy_1\right)^2
$$
equals
$2j_1+2$, we have
$C_{j_3j_1j_1}=0$ for $j_3>2j_1+2.$ This explains
why we put
$2j_1+2$ instead of $p_3$ in the right-hand side of the formula (\ref{ogo15}).
Moreover, the function
$$\left(\int\limits_{-1}^y P_{j_1}(y_1)dy_1\right)^2
$$
is even, which means that the function
$$
P_{j_3}(y)\left(\int\limits_{-1}^y P_{j_1}(y_1)dy_1\right)^2
$$
is odd
for odd
$j_3.$ Hence $C_{j_3 j_1j_1}=0$ for
odd
$j_3.$
That is why we
sum over even
$j_3$ in the right-hand
side of the formula (\ref{ogo15}).
Then we have
\begin{equation}
\label{ogo16}
\sum\limits_{j_1=0}^{p_1}\sum\limits_{j_3=2, j_3 - {\rm even}}^{2j_1+2}
C_{j_3 j_1 j_1}\zeta_{j_3}^{(i_3)}=
\sum\limits_{j_3=2, j_3 - {\rm even}}^{2p_1+2}
\sum\limits_{j_1=\frac{j_3-2}{2}}^{p_1}
C_{j_3 j_1 j_1}\zeta_{j_3}^{(i_3)}=
\sum\limits_{j_3=2, j_3 - {\rm even}}^{2p_1+2}
\sum\limits_{j_1=0}^{p_1}
C_{j_3 j_1 j_1}\zeta_{j_3}^{(i_3)}.
\end{equation}
We replaced $\frac{j_3-2}{2}$ by zero in the right-hand side
of the formula (\ref{ogo16}), since $C_{j_3j_1j_1}=0$ for
$0\le j_1< \frac{j_3-2}{2}.$
Let's substitute (\ref{ogo16}) into (\ref{ogo15}):
\begin{equation}
\label{ogo17}
\sum\limits_{j_1=0}^{p_1}\sum\limits_{j_3=0}^{p_3}
C_{j_3 j_1 j_1}\zeta_{j_3}^{(i_3)}=
\sum\limits_{j_1=0}^{p_1}C_{0 j_1 j_1}\zeta_{0}^{(i_3)}+
\frac{(T-t)^{\frac{3}{2}}}{4\sqrt{3}}\zeta_{1}^{(i_3)}+
\sum\limits_{j_3=2, j_3 - {\rm even}}^{2p_1+2}
\sum\limits_{j_1=0}^{p_1}
C_{j_3 j_1 j_1}\zeta_{j_3}^{(i_3)}.
\end{equation}
It is easy to see that the right-hand side
of the formula (\ref{ogo17}) doesn't depend on $p_3.$
If we prove that
\begin{equation}
\label{ogo18}
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm lim}\cr
$\stackrel{}{{}_{p_1\to \infty}}$\cr
}} }
{\rm M}\left\{\left(
\sum\limits_{j_1=0}^{p_1}\sum\limits_{j_3=0}^{p_3}
C_{j_3 j_1 j_1}\zeta_{j_3}^{(i_3)}-
\frac{1}{4}(T-t)^{\frac{3}{2}}\left(
\zeta_0^{(i_3)}+\frac{1}{\sqrt{3}}\zeta_1^{(i_3)}\right)\right)^2\right\}=0,
\end{equation}
then the relation (\ref{ogo12}) will be proven.
Using (\ref{ogo17}) and (\ref{ogo3}) we may rewrite the left-hand side
of (\ref{ogo18})
in the following form:
$$
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm lim}\cr
$\stackrel{}{{}_{p_1\to \infty}}$\cr
}} }
{\rm M}\left\{\left(
\left(\sum\limits_{j_1=0}^{p_1}C_{0j_1j_1}-
\frac{(T-t)^{\frac{3}{2}}}{4}\right)\zeta_0^{(i_3)}+
\sum\limits_{j_3=2, j_3 - {\rm even}}^{2p_1+2}
\sum\limits_{j_1=0}^{p_1}
C_{j_3 j_1 j_1}\zeta_{j_3}^{(i_3)}\right)^2\right\}
$$
$$
=\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm lim}\cr
$\stackrel{}{{}_{p_1\to \infty}}$\cr
}} }\left(\sum\limits_{j_1=0}^{p_1}C_{0j_1j_1}-
\frac{(T-t)^{\frac{3}{2}}}{4}\right)^2+
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm lim}\cr
$\stackrel{}{{}_{p_1\to \infty}}$\cr
}} }
\sum\limits_{j_3=2, j_3 - {\rm even}}^{2p_1+2}
\left(\sum\limits_{j_1=0}^{p_1}
C_{j_3 j_1 j_1}\right)^2=
$$
\begin{equation}
\label{ogo19}
=\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm lim}\cr
$\stackrel{}{{}_{p_1\to \infty}}$\cr
}} }
\sum\limits_{j_3=2, j_3 - {\rm even}}^{2p_1+2}
\left(\sum\limits_{j_1=0}^{p_1}
C_{j_3 j_1 j_1}\right)^2.
\end{equation}
If we prove that
\begin{equation}
\label{ogo20}
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm lim}\cr
$\stackrel{}{{}_{p_1\to \infty}}$\cr
}} }
\sum\limits_{j_3=2, j_3 - {\rm even}}^{2p_1+2}
\left(\sum\limits_{j_1=0}^{p_1}
C_{j_3 j_1 j_1}\right)^2=0,
\end{equation}
then the relation (\ref{ogo12}) will be proven.
We have
$$
\sum\limits_{j_3=2, j_3 - {\rm even}}^{2p_1+2}
\left(\sum\limits_{j_1=0}^{p_1}
C_{j_3 j_1 j_1}\right)^2=
\frac{1}{4}
\sum\limits_{j_3=2, j_3 - {\rm even}}^{2p_1+2}
\left(\int\limits_t^T\phi_{j_3}(s)\sum\limits_{j_1=0}^{p_1}
\left(\int\limits_t^s\phi_{j_1}(s_1)ds_1\right)^2ds\right)^2=
$$
$$
=\frac{1}{4}
\sum\limits_{j_3=2, j_3 - {\rm even}}^{2p_1+2}
\left(\int\limits_t^T\phi_{j_3}(s)\left((s-t)-\sum\limits_{j_1=p_1+1}^{\infty}
\left(\int\limits_t^s\phi_{j_1}(s_1)ds_1\right)^2\right)ds\right)^2=
$$
$$
=\frac{1}{4}
\sum\limits_{j_3=2, j_3 - {\rm even}}^{2p_1+2}
\left(\int\limits_t^T\phi_{j_3}(s)\sum\limits_{j_1=p_1+1}^{\infty}
\left(\int\limits_t^s\phi_{j_1}(s_1)ds_1\right)^2 ds\right)^2\le
$$
\begin{equation}
\label{ogo21}
\le\frac{1}{4}
\sum\limits_{j_3=2, j_3 - {\rm even}}^{2p_1+2}
\left(\int\limits_t^T|\phi_{j_3}(s)| \sum\limits_{j_1=p_1+1}^{\infty}
\left(\int\limits_t^s\phi_{j_1}(s_1)ds_1\right)^2 ds\right)^2.
\end{equation}
In obtaining (\ref{ogo21}) we used
the Parseval equality in the form:
\begin{equation}
\label{ogo10}
\sum_{j_1=0}^{\infty}\left(\int\limits_t^s\phi_{j_1}(s_1)ds_1\right)^2=
\int\limits_t^T ({\bf 1}_{\{s_1<s\}})^2ds_1=s-t
\end{equation}
and a property of orthogonality of Legendre polynomials:
\begin{equation}
\label{ogo11}
\int\limits_t^T\phi_{j_3}(s)(s-t)ds=0;\ j_3\ge 2.
\end{equation}
Then we have
$$
\left(\int\limits_t^s\phi_{j_1}(s_1)ds_1\right)^2=
\frac{(T-t)(2j_1+1)}{4}
\left(\int\limits_{-1}^{z(s)}
P_{j_1}(y)dy\right)^2=
$$
$$
=\frac{T-t}{4(2j_1+1)}
\left(\int\limits_{-1}^{z(s)}
\left(P_{j_1+1}^{'}(y)-P_{j_1-1}^{'}(y)\right)dy
\right)^2=
$$
\begin{equation}
\label{ogo22}
=\frac{T-t}{4(2j_1+1)}
\left(P_{j_1+1}\left(z(s)\right)-
P_{j_1-1}\left(z(s)\right)\right)^2
\le
\frac{T-t}{2(2j_1+1)}
\left(P_{j_1+1}^2\left(z(s)\right)+
P_{j_1-1}^2\left(z(s)\right)\right).
\end{equation}
\vspace{2mm}
For the Legendre polynomials the following well-known
estimate holds:
\begin{equation}
\label{ogo23}
|P_n(y)|<\frac{K}{\sqrt{n+1}(1-y^2)^{\frac{1}{4}}};\
y\in (-1, 1);\ n\in N,
\end{equation}
where the constant $K$ doesn't depend on $y$ and $n.$
The estimate (\ref{ogo23}) may be rewritten for the
function $\phi_n(s)$ in
the following form:
\begin{equation}
\label{ogo24}
|\phi_n(s)|< \sqrt{\frac{2n+1}{n+1}}\frac{K}{\sqrt{T-t}}
\frac{1}
{\left(1-z^2(s)\right)^{\frac{1}{4}}}
<\frac{K_1}{\sqrt{T-t}}
\frac{1}
{\left(1-z^2(s)\right)^{\frac{1}{4}}},\ K_1=K\sqrt{2},\ s\in (t, T).
\end{equation}
Let's estimate the right-hand side of (\ref{ogo22}) using the estimate
(\ref{ogo23}):
\begin{equation}
\label{ogo25}
\left(\int\limits_t^s\phi_{j_1}(s_1)ds_1\right)^2 <
\frac{T-t}{2(2j_1+1)}\left(\frac{K^2}{j_1+2}+\frac{K^2}{j_1}\right)
\frac{1}
{(1-(z(s))^2)^{\frac{1}{2}}} <
\frac{(T-t)K^2}{2j_1^2}
\frac{1}
{(1-(z(s))^2)^{\frac{1}{2}}},
\end{equation}
where $s\in(t, T).$
Substituting the estimate (\ref{ogo25}) into the relation (\ref{ogo21})
and using in (\ref{ogo21}) the estimate (\ref{ogo24})
for $|\phi_{j_3}(s)|$, we get:
$$
\sum\limits_{j_3=2, j_3 - {\rm even}}^{2p_1+2}
\left(\sum\limits_{j_1=0}^{p_1}
C_{j_3 j_1 j_1}\right)^2<
\frac{(T-t)K^4 K_1^2}{16}
\sum\limits_{j_3=2, j_3 - {\rm even}}^{2p_1+2}
\left(\int\limits_t^T
\frac{ds}
{\left(1-\left(z(s)\right)^2
\right)^{\frac{3}{4}}}\sum\limits_{j_1=p_1+1}^{\infty}\frac{1}{j_1^2}
\right)^2=
$$
\begin{equation}
\label{ogo26}
=\frac{(T-t)^3K^4 K_1^2(p_1+1)}{64}
\left(\int\limits_{-1}^1
\frac{dy}
{\left(1-y^2\right)^{\frac{3}{4}}}\right)^2\left(
\sum\limits_{j_1=p_1+1}^{\infty}\frac{1}{j_1^2}
\right)^2.
\end{equation}
Since
\begin{equation}
\label{ogo27}
\int\limits_{-1}^1
\frac{dy}
{\left(1-y^2\right)^{\frac{3}{4}}}<\infty
\end{equation}
and
\begin{equation}
\label{ogo28}
\sum\limits_{j_1=p_1+1}^{\infty}\frac{1}{j_1^2}
\le \int\limits_{p_1}^{\infty}\frac{dx}{x^2}=\frac{1}{p_1},
\end{equation}
then from (\ref{ogo26}) we find:
\begin{equation}
\label{ogo29}
\sum\limits_{j_3=2, j_3 - {\rm even}}^{2p_1+2}
\left(\sum\limits_{j_1=0}^{p_1}
C_{j_3 j_1 j_1}\right)^2<\frac{C(T-t)^3 (p_1+1)}{p_1^2} \to 0\ \hbox{as}\
p_1\to \infty,
\end{equation}
where the constant $C$ doesn't depend on $p_1$ and $T-t.$
The relation (\ref{ogo20}) follows from (\ref{ogo29}), and (\ref{ogo12})
follows from (\ref{ogo20}).
Let's prove the equality (\ref{ogo13}).
Let's first prove that:
\begin{equation}
\label{ogo30}
\sum\limits_{j_3=0}^{\infty}C_{j_3 j_3 0}=\frac{1}{4}(T-t)^{\frac{3}{2}},
\end{equation}
\begin{equation}
\label{ogo31}
\sum\limits_{j_3=0}^{\infty}C_{j_3 j_3 1}=
-\frac{1}{4\sqrt{3}}(T-t)^{\frac{3}{2}}.
\end{equation}
We have
$$
\sum_{j_3=0}^{\infty}C_{j_3 j_3 0}=C_{000}+\sum_{j_3=1}^{\infty}C_{j_3 j_3 0};\
C_{000}=\frac{(T-t)^{\frac{3}{2}}}{6};
$$
$$
C_{j_3 j_3 0}=\frac{(T-t)^{\frac{3}{2}}}{16(2j_3+1)}
\int\limits_{-1}^1\left(P_{j_3+1}^2(y)+P_{j_3-1}^2(y)\right)dy=
\frac{(T-t)^{\frac{3}{2}}}{8(2j_3+1)}
\left(\frac{1}{2j_3+3}+\frac{1}{2j_3-1}\right);\ j_3\ge 1.
$$
Then
$$
\sum\limits_{j_3=0}^{\infty}C_{j_3 j_3 0}=
\frac{(T-t)^{\frac{3}{2}}}{6}+
\frac{(T-t)^{\frac{3}{2}}}{8}
\left(
\sum_{j_3=1}^{\infty}\frac{1}{(2j_3+1)(2j_3+3)}+
\sum_{j_3=1}^{\infty}\frac{1}{4j_3^2-1}\right)=
$$
$$
=\frac{(T-t)^{\frac{3}{2}}}{6}+\frac{(T-t)^{\frac{3}{2}}}{8}
\left(\sum_{j_3=1}^{\infty}\frac{1}{4j_3^2-1}-\frac{1}{3}
+\sum_{j_3=1}^{\infty}\frac{1}{4j_3^2-1}\right)=
$$
$$
=\frac{(T-t)^{\frac{3}{2}}}{6}+\frac{(T-t)^{\frac{3}{2}}}{8}
\left(\frac{1}{2}-\frac{1}{3}+\frac{1}{2}\right)=
\frac{(T-t)^{\frac{3}{2}}}{4}.
$$
\vspace{4mm}
The relation (\ref{ogo30}) is proven.
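The closed form for $C_{j_3 j_3 0}$ obtained above also makes a direct numerical
sanity check of (\ref{ogo30}) possible. The sketch below (Python with SciPy,
illustrative only; it assumes the Legendre normalization
$\phi_j(s)=\sqrt{(2j+1)/(T-t)}\,P_j(2(s-t)/(T-t)-1)$ used throughout and the
arbitrary choice $[t,T]=[0,1]$) compares a few coefficients computed by
quadrature with the closed form and sums the series:
\begin{verbatim}
# Illustrative check of (ogo30): sum_{j3} C_{j3 j3 0} = (T-t)^{3/2}/4.
import numpy as np
from scipy.special import eval_legendre
from scipy.integrate import quad

t, T = 0.0, 1.0                      # hypothetical interval
z = lambda s: 2.0 * (s - t) / (T - t) - 1.0
phi = lambda j, s: np.sqrt((2 * j + 1) / (T - t)) * eval_legendre(j, z(s))

def C_closed(j3):                    # closed form derived in the text
    if j3 == 0:
        return (T - t) ** 1.5 / 6.0
    return (T - t) ** 1.5 / (8 * (2 * j3 + 1)) * \
           (1.0 / (2 * j3 + 3) + 1.0 / (2 * j3 - 1))

def C_quad(j3):                      # C_{j3 j3 0} by direct quadrature
    inner = lambda s2: quad(lambda s1: phi(j3, s1), s2, T)[0] ** 2
    return 0.5 * quad(lambda s2: phi(0, s2) * inner(s2), t, T)[0]

for j3 in range(4):                  # spot check the closed form
    assert abs(C_closed(j3) - C_quad(j3)) < 1e-6

partial = sum(C_closed(j3) for j3 in range(2000))
print(partial, (T - t) ** 1.5 / 4.0)  # the partial sum approaches 0.25
\end{verbatim}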
Let's check the equality (\ref{ogo31}). We have
$$
C_{j_3 j_3 j_1}=\int\limits_t^T
\phi_{j_3}(s)\int\limits_t^s
\phi_{j_3}(s_1)\int\limits_t^{s_1}
\phi_{j_1}(s_2)ds_2ds_1ds=
\int\limits_t^T\phi_{j_1}(s_2)ds_2
\int\limits_{s_2}^T
\phi_{j_3}(s_1)ds_1\int\limits_{s_1}^T
\phi_{j_3}(s)ds=
$$
$$
=\frac{1}{2}\int\limits_t^T
\phi_{j_1}(s_2)\left(\int\limits_{s_2}^T\phi_{j_3}(s_1)ds_1\right)^2 ds_2=
$$
\begin{equation}
\label{ogo33}
=\frac{(T-t)^{\frac{3}{2}}(2j_3+1)\sqrt{2j_1+1}}{16}
\int\limits_{-1}^{1}
P_{j_1}(y)\left(\int\limits_{y}^1 P_{j_3}(y_1)dy_1\right)^2 dy;\ j_3\ge 1.
\end{equation}
Since the functions
$$
\left(\int\limits_{y}^1 P_{j_3}(y_1)dy_1\right)^2;\ j_3\ge 1
$$
are even, the functions
$$
P_1(y)\left(\int\limits_{y}^1 P_{j_3}(y_1)dy_1\right)^2;\ j_3\ge 1
$$
are odd. It means that $C_{j_3 j_3 1}=0;$ $j_3\ge 1.$
Moreover
$$
C_{001}=\frac{\sqrt{3}(T-t)^{\frac{3}{2}}}{16}
\int\limits_{-1}^1 y(1-y)^2 dy=-\frac{(T-t)^{\frac{3}{2}}}{4\sqrt{3}}.
$$
Then
$$
\sum\limits_{j_3=0}^{\infty}C_{j_3 j_3 1}=C_{001}+
\sum\limits_{j_3=1}^{\infty}C_{j_3 j_3 1}=
-\frac{(T-t)^{\frac{3}{2}}}{4\sqrt{3}}.
$$
The relation (\ref{ogo31}) is proven.
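The vanishing of $C_{j_3 j_3 1}$ for $j_3\ge 1$ and the value of $C_{001}$ can be
confirmed in the same illustrative way (same assumed normalization and interval
as in the previous sketch):
\begin{verbatim}
# Illustrative check of (ogo31): C_{j3 j3 1} = 0 for j3 >= 1 and
# C_{001} = -(T-t)^{3/2}/(4*sqrt(3)), using the representation
# C_{j3 j3 1} = (1/2) int_t^T phi_1(s2) ( int_{s2}^T phi_{j3}(s1) ds1 )^2 ds2.
import numpy as np
from scipy.special import eval_legendre
from scipy.integrate import quad

t, T = 0.0, 1.0                      # hypothetical interval
z = lambda s: 2.0 * (s - t) / (T - t) - 1.0
phi = lambda j, s: np.sqrt((2 * j + 1) / (T - t)) * eval_legendre(j, z(s))

def C_j3j31(j3):
    inner = lambda s2: quad(lambda s1: phi(j3, s1), s2, T)[0] ** 2
    return 0.5 * quad(lambda s2: phi(1, s2) * inner(s2), t, T)[0]

print(C_j3j31(0), -(T - t) ** 1.5 / (4 * np.sqrt(3)))  # both about -0.1443
print([C_j3j31(j3) for j3 in range(1, 6)])  # vanish up to quadrature error
\end{verbatim}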
Using the obtained results we have:
$$
\sum\limits_{j_1=0}^{p_1}\sum\limits_{j_3=0}^{p_3}
C_{j_3 j_3 j_1}\zeta_{j_1}^{(i_1)}=
\sum\limits_{j_3=0}^{p_3}C_{j_3 j_3 0}\zeta_{0}^{(i_1)}-
\frac{(T-t)^{\frac{3}{2}}}{4\sqrt{3}}\zeta_{1}^{(i_1)}+
\sum\limits_{j_3=0}^{p_3}\sum\limits_{j_1=2}^{p_1}
C_{j_3 j_3 j_1}\zeta_{j_1}^{(i_1)}=
$$
\begin{equation}
\label{ogoo5}
=\sum\limits_{j_3=0}^{p_3}C_{j_3 j_3 0}\zeta_{0}^{(i_1)}-
\frac{(T-t)^{\frac{3}{2}}}{4\sqrt{3}}\zeta_{1}^{(i_1)}+
\sum\limits_{j_3=0}^{p_3}\sum\limits_{j_1=2, j_1 - {\rm even}}^{2j_3+2}
C_{j_3 j_3 j_1}\zeta_{j_1}^{(i_1)}.
\end{equation}
Since
$$
C_{j_3j_3j_1}=
\frac{(T-t)^{\frac{3}{2}}(2j_3+1)\sqrt{2j_1+1}}{16}
\int\limits_{-1}^{1}
P_{j_1}(y)\left(\int\limits_{y}^1 P_{j_3}(y_1)dy_1\right)^2 dy;\ j_3\ge 1,
$$
and the degree of polynomial
$$
\left(\int\limits_{y}^1 P_{j_3}(y_1)dy_1\right)^2
$$
equals
$2j_3+2$,
we have
$C_{j_3j_3j_1}=0$ for $j_1>2j_3+2.$ This explains
why we put $2j_3+2$ instead of $p_1$ on the right-hand side
of the formula (\ref{ogoo5}).
Moreover, the function
$$
\left(\int\limits_{y}^1 P_{j_3}(y_1)dy_1\right)^2
$$
is even,
so the
function
$$
P_{j_1}(y)\left(\int\limits_{y}^1 P_{j_3}(y_1)dy_1\right)^2
$$
is odd for odd $j_1.$
Hence $C_{j_3 j_3 j_1}=0$ for odd $j_1,$
which explains why only even $j_1$
appear in the sum on the right-hand side of (\ref{ogoo5}).
Then we have
\begin{equation}
\label{ogoo11}
\sum\limits_{j_3=0}^{p_3}\sum\limits_{j_1=2, j_1 - {\rm even}}^{2j_3+2}
C_{j_3 j_3 j_1}\zeta_{j_1}^{(i_1)}=
\sum\limits_{j_1=2, j_1 - {\rm even}}^{2p_3+2}
\sum\limits_{j_3=\frac{j_1-2}{2}}^{p_3}
C_{j_3 j_3 j_1}\zeta_{j_1}^{(i_1)}
=\sum\limits_{j_1=2, j_1 - {\rm even}}^{2p_3+2}
\sum\limits_{j_3=0}^{p_3}
C_{j_3 j_3 j_1}\zeta_{j_1}^{(i_1)}.
\end{equation}
We replaced $\frac{j_1-2}{2}$ by zero on the right-hand side
of (\ref{ogoo11}), since $C_{j_3j_3j_1}=0$ for
$0\le j_3< \frac{j_1-2}{2}.$
Let's substitute (\ref{ogoo11}) into (\ref{ogoo5}):
\begin{equation}
\label{ogoo12}
\sum\limits_{j_1=0}^{p_1}\sum\limits_{j_3=0}^{p_3}
C_{j_3 j_3 j_1}\zeta_{j_1}^{(i_1)}=
\sum\limits_{j_3=0}^{p_3}C_{j_3 j_3 0}\zeta_{0}^{(i_1)}-
\frac{(T-t)^{\frac{3}{2}}}{4\sqrt{3}}\zeta_{1}^{(i_1)}+
\sum\limits_{j_1=2, j_1 - {\rm even}}^{2p_3+2}
\sum\limits_{j_3=0}^{p_3}
C_{j_3 j_3 j_1}\zeta_{j_1}^{(i_1)}.
\end{equation}
It is easy to see that the right-hand side of the formula
(\ref{ogoo12}) doesn't depend on $p_1.$
If we prove that
\begin{equation}
\label{ogoo13}
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm lim}\cr
$\stackrel{}{{}_{p_3\to \infty}}$\cr
}} }
{\rm M}\left\{\left(
\sum\limits_{j_1=0}^{p_1}\sum\limits_{j_3=0}^{p_3}
C_{j_3 j_3 j_1}\zeta_{j_1}^{(i_1)}-
\frac{1}{4}(T-t)^{\frac{3}{2}}\left(
\zeta_0^{(i_1)}-\frac{1}{\sqrt{3}}\zeta_1^{(i_1)}\right)\right)^2\right\}=0,
\end{equation}
then (\ref{ogo13}) will be proven.
Using (\ref{ogoo12}) and (\ref{ogo30}), (\ref{ogo31}) we may rewrite
the left part of the formula (\ref{ogoo13}) in the
following form:
$$
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm lim}\cr
$\stackrel{}{{}_{p_3\to \infty}}$\cr
}} }
{\rm M}\left\{\left(
\left(\sum\limits_{j_3=0}^{p_3}C_{j_3j_3 0}-
\frac{(T-t)^{\frac{3}{2}}}{4}\right)\zeta_0^{(i_1)}+
\sum\limits_{j_1=2, j_1 - {\rm even}}^{2p_3+2}
\sum\limits_{j_3=0}^{p_3}
C_{j_3 j_3 j_1}\zeta_{j_1}^{(i_1)}\right)^2\right\}=
$$
$$
=\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm lim}\cr
$\stackrel{}{{}_{p_3\to \infty}}$\cr
}} }\left(\sum\limits_{j_3=0}^{p_3}C_{j_3j_3 0}-
\frac{(T-t)^{\frac{3}{2}}}{4}\right)^2+
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm lim}\cr
$\stackrel{}{{}_{p_3\to \infty}}$\cr
}} }
\sum\limits_{j_1=2, j_1 - {\rm even}}^{2p_3+2}
\left(\sum\limits_{j_3=0}^{p_3}
C_{j_3 j_3 j_1}\right)^2=
$$
$$
=\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm lim}\cr
$\stackrel{}{{}_{p_3\to \infty}}$\cr
}} }
\sum\limits_{j_1=2, j_1 - {\rm even}}^{2p_3+2}
\left(\sum\limits_{j_3=0}^{p_3}
C_{j_3 j_3 j_1}\right)^2.
$$
If we prove that
\begin{equation}
\label{ogoo15}
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm lim}\cr
$\stackrel{}{{}_{p_3\to \infty}}$\cr
}} }
\sum\limits_{j_1=2, j_1 - {\rm even}}^{2p_3+2}
\left(\sum\limits_{j_3=0}^{p_3}
C_{j_3 j_3 j_1}\right)^2=0,
\end{equation}
then the relation (\ref{ogo13}) will be proven.
From (\ref{ogo33}) we get
$$
\sum\limits_{j_1=2, j_1 - {\rm even}}^{2p_3+2}
\left(\sum\limits_{j_3=0}^{p_3}
C_{j_3 j_3 j_1}\right)^2=
\frac{1}{4}
\sum\limits_{j_1=2, j_1 - {\rm even}}^{2p_3+2}
\left(\int\limits_t^T\phi_{j_1}(s_2)\sum\limits_{j_3=0}^{p_3}
\left(\int\limits_{s_2}^T\phi_{j_3}(s_1)ds_1\right)^2ds_2\right)^2=
$$
$$
=\frac{1}{4}
\sum\limits_{j_1=2, j_1 - {\rm even}}^{2p_3+2}
\left(\int\limits_t^T\phi_{j_1}(s_2)\left((T-s_2)-
\sum\limits_{j_3=p_3+1}^{\infty}
\left(\int\limits_{s_2}^T\phi_{j_3}(s_1)ds_1\right)^2\right)ds_2\right)^2=
$$
$$
=\frac{1}{4}
\sum\limits_{j_1=2, j_1 - {\rm even}}^{2p_3+2}
\left(\int\limits_t^T\phi_{j_1}(s_2)\sum\limits_{j_3=p_3+1}^{\infty}
\left(\int\limits_{s_2}^T\phi_{j_3}(s_1)ds_1\right)^2 ds_2\right)^2\le
$$
\begin{equation}
\label{ogoo21}
\le\frac{1}{4}
\sum\limits_{j_1=2, j_1 - {\rm even}}^{2p_3+2}
\left(\int\limits_t^T|\phi_{j_1}(s_2)|\sum\limits_{j_3=p_3+1}^{\infty}
\left(\int\limits_{s_2}^T\phi_{j_3}(s_1)ds_1\right)^2 ds_2\right)^2.
\end{equation}
In order to get (\ref{ogoo21}) we used
the Parseval equality in the form:
\begin{equation}
\label{ogo10ee}
\sum_{j_1=0}^{\infty}\left(\int\limits_s^T\phi_{j_1}(s_1)ds_1\right)^2=
\int\limits_t^T \left({\bf 1}_{\{s<s_1\}}\right)^2ds_1=T-s
\end{equation}
and the following orthogonality property of the Legendre polynomials:
\begin{equation}
\label{ogo11e}
\int\limits_t^T\phi_{j_1}(s)(T-s)ds=0;\ j_1\ge 2.
\end{equation}
Then we have
$$
\left(\int\limits_{s_2}^T\phi_{j_3}(s_1)ds_1\right)^2=
\frac{(T-t)}{4(2j_3+1)}
\left(P_{j_3+1}\left(
z(s_2)\right)-
P_{j_3-1}\left(
z(s_2)\right)\right)^2\le
$$
$$
\le
\frac{T-t}{2(2j_3+1)}
\left(P_{j_3+1}^2\left(
z(s_2)\right)+
P_{j_3-1}^2\left(
z(s_2)\right)\right)
<\frac{T-t}{2(2j_3+1)}\left(\frac{K^2}{j_3+2}+\frac{K^2}{j_3}\right)
\frac{1}
{(1-(z(s_2))^2)^{\frac{1}{2}}} <
$$
\begin{equation}
\label{ogoo25}
< \frac{(T-t)K^2}{2j_3^2}
\frac{1}
{(1-(z(s_2))^2)^{\frac{1}{2}}},\ s_2\in(t, T).
\end{equation}
In order to get (\ref{ogoo25}) we used the estimation
(\ref{ogo23}).
Substituting the estimation (\ref{ogoo25}) into relation (\ref{ogoo21})
and using in (\ref{ogoo21}) the estimation (\ref{ogo24})
for $|\phi_{j_1}(s_2)|$ we get:
$$
\sum\limits_{j_1=2, j_1 - {\rm even}}^{2p_3+2}
\left(\sum\limits_{j_3=0}^{p_3}
C_{j_3 j_3 j_1}\right)^2<
\frac{(T-t)K^4 K_1^2}{16}
\sum\limits_{j_1=2, j_1 - {\rm even}}^{2p_3+2}
\left(\int\limits_t^T
\frac{ds_2}
{(1-z^2(s_2))^{\frac{3}{4}}}\sum\limits_{j_3=p_3+1}^{\infty}\frac{1}{j_3^2}
\right)^2=
$$
\begin{equation}
\label{ogoo26}
=\frac{(T-t)^3K^4 K_1^2(p_3+1)}{64}
\left(\int\limits_{-1}^1
\frac{dy}
{\left(1-y^2\right)^{\frac{3}{4}}}\right)^2\left(
\sum\limits_{j_3=p_3+1}^{\infty}\frac{1}{j_3^2}
\right)^2.
\end{equation}
Using (\ref{ogo27}) and (\ref{ogo28}) from
(\ref{ogoo26}) we find:
\begin{equation}
\label{ogoo29}
\sum\limits_{j_1=2, j_1 - {\rm even}}^{2p_3+2}
\left(\sum\limits_{j_3=0}^{p_3}
C_{j_3 j_3 j_1}\right)^2<\frac{C(T-t)^3 (p_3+1)}{p_3^2} \to 0\ \hbox{as}\
p_3\to \infty,
\end{equation}
where the constant $C$ doesn't depend on $p_3$ and $T-t.$
The relation (\ref{ogoo15}) follows from (\ref{ogoo29}), and
(\ref{ogo13}) follows from (\ref{ogoo15}).
The relation (\ref{ogo13}) is proven.
Let's prove the equality (\ref{ogo13a}).
Since $\psi_1(\tau),$ $\psi_2(\tau),$ $\psi_3(\tau)\equiv 1,$
the following relation
for the Fourier coefficients holds:
$$
C_{j_1 j_1 j_3}+C_{j_1 j_3 j_1}+C_{j_3 j_1 j_1}=\frac{1}{2}
C_{j_1}^2 C_{j_3},
$$
where $C_j=0$ for $j\ge 1$ and $C_0=\sqrt{T-t}.$
Then w.\ p.\ 1
\begin{equation}
\label{sodom31}
\sum\limits_{j_1, j_3=0}^{\infty}
C_{j_1 j_3 j_1}\zeta_{j_3}^{(i_2)}=
\sum\limits_{j_1, j_3=0}^{\infty}
\left(\frac{1}{2}C_{j_1}^2 C_{j_3}-C_{j_1 j_1 j_3}-C_{j_3 j_1 j_1}
\right)\zeta_{j_3}^{(i_2)}.
\end{equation}
Therefore, considering
(\ref{ogo12}) and (\ref{ogo13}), w.\ p.\ 1 we can write down the following:
$$
\sum\limits_{j_1, j_3=0}^{\infty}
C_{j_1 j_3 j_1}\zeta_{j_3}^{(i_2)}=
\frac{1}{2}C_0^3\zeta_0^{(i_2)}-
\sum\limits_{j_1, j_3=0}^{\infty}
C_{j_1 j_1 j_3}\zeta_{j_3}^{(i_2)}-
\sum\limits_{j_1, j_3=0}^{\infty}
C_{j_3 j_1 j_1}\zeta_{j_3}^{(i_2)}
=
$$
\begin{equation}
\label{sodom3}
=\frac{1}{2}(T-t)^{\frac{3}{2}}
\zeta_0^{(i_2)}
-\frac{1}{4}(T-t)^{\frac{3}{2}}\left(
\zeta_0^{(i_2)}-\frac{1}{\sqrt{3}}\zeta_1^{(i_2)}\right)
-\frac{1}{4}(T-t)^{\frac{3}{2}}\left(
\zeta_0^{(i_2)}+\frac{1}{\sqrt{3}}\zeta_1^{(i_2)}\right)=0.
\end{equation}
The relation (\ref{ogo13a}) is proven. The theorem 2 is proven.
It is easy to see that the formula (\ref{feto19001})
may be proven for the case $i_1=i_2=i_3$
using the Ito formula:
$$
\int\limits_t^{*T}\int\limits_t^{*t_3}\int\limits_t^{*t_2}
d{\bf f}_{t_1}^{(i_1)}d{\bf f}_{t_2}^{(i_1)}d{\bf f}_{t_3}^{(i_1)}=
\frac{1}{6}\left(\int\limits_t^T d{\bf f}_{s}^{(i_1)}\right)^3=
\frac{1}{6}\left(C_0\zeta_{0}^{(i_1)}\right)^3=
$$
$$
=
C_{000}\zeta_{0}^{(i_1)}\zeta_{0}^{(i_1)}\zeta_{0}^{(i_1)},
$$
where the equality is fulfilled w. p. 1.
\section{Generalization of the theorem 2}
Let's consider the generalization of theorem 2.
{\bf Theorem 3} (see \cite{2011} - \cite{2017}). {\it Suppose that
$\{\phi_j(x)\}_{j=0}^{\infty}$ is a complete orthonormal
system of Legendre polynomials
in the space $L_2([t, T])$.
Then, for multiple Stratonovich stochastic integral of {\rm 3}rd multiplicity
$$
I_{{l_1l_2l_3}_{T,t}}^{*(i_1i_2i_3)}={\int\limits_t^{*}}^T(t-t_3)^{l_3}
{\int\limits_t^{*}}^{t_3}(t-t_2)^{l_2}
{\int\limits_t^{*}}^{t_2}(t-t_1)^{l_1}
d{\bf f}_{t_1}^{(i_1)}
d{\bf f}_{t_2}^{(i_2)}d{\bf f}_{t_3}^{(i_3)}\
$$
$(i_1, i_2, i_3=1,\ldots,m)$
the following converging in the mean-square sense
expansion
\begin{equation}
\label{feto1900}
I_{{l_1l_2l_3}_{T,t}}^{*(i_1i_2i_3)}=
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm l.i.m.}\cr
$\stackrel{}{{}_{p_1,p_2,p_3\to \infty}}$\cr
}} }\sum_{j_1=0}^{p_1}\sum_{j_2=0}^{p_2}\sum_{j_3=0}^{p_3}
C_{j_3 j_2 j_1}\zeta_{j_1}^{(i_1)}\zeta_{j_2}^{(i_2)}\zeta_{j_3}^{(i_3)}
\stackrel{\rm def}{=}
\sum\limits_{j_1, j_2, j_3=0}^{\infty}
C_{j_3 j_2 j_1}\zeta_{j_1}^{(i_1)}\zeta_{j_2}^{(i_2)}\zeta_{j_3}^{(i_3)}
\end{equation}
is reasonable for each of the following cases:
\noindent
{\rm 1}.\ $i_1\ne i_2,\ i_2\ne i_3,\ i_1\ne i_3$\ and
$l_1,\ l_2,\ l_3=0,\ 1,\ 2,\ldots;$\\
{\rm 2}.\ $i_1=i_2\ne i_3$ and $l_1=l_2\ne l_3$\ and
$l_1,\ l_2,\ l_3=0,\ 1,\ 2,\ldots;$\\
{\rm 3}.\ $i_1\ne i_2=i_3$ and $l_1\ne l_2=l_3$\ and
$l_1,\ l_2,\ l_3=0,\ 1,\ 2,\ldots;$\\
{\rm 4}.\ $i_1, i_2, i_3=1,\ldots,m;$ $l_1=l_2=l_3=l$\ and $l=0,\ 1,\ 2,\ldots$,\\
\noindent
where
$$
C_{j_3 j_2 j_1}=\int\limits_t^T(t-s)^{l_3}\phi_{j_3}(s)
\int\limits_t^s(t-s_1)^{l_2}\phi_{j_2}(s_1)
\int\limits_t^{s_1}(t-s_2)^{l_1}\phi_{j_1}(s_2)ds_2ds_1ds.
$$
}
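Before turning to the proof, note that the coefficients $C_{j_3 j_2 j_1}$ of
the theorem 3 are ordinary iterated Riemann integrals and can therefore be
evaluated by nested quadrature. The sketch below (Python with SciPy, shown only
as an illustration of the definition; the Legendre normalization is assumed as
before and $[t,T]=[0,1]$ is an arbitrary choice) computes a coefficient for
given weight powers $l_1, l_2, l_3$:
\begin{verbatim}
# Illustrative evaluation of the coefficients of the theorem 3,
#   C_{j3 j2 j1} = int_t^T (t-s)^{l3} phi_{j3}(s)
#                  int_t^s (t-s1)^{l2} phi_{j2}(s1)
#                  int_t^{s1} (t-s2)^{l1} phi_{j1}(s2) ds2 ds1 ds,
# by (slow but simple) nested adaptive quadrature.
import numpy as np
from scipy.special import eval_legendre
from scipy.integrate import quad

t, T = 0.0, 1.0                      # hypothetical interval
phi = lambda j, s: np.sqrt((2 * j + 1) / (T - t)) * \
                   eval_legendre(j, 2.0 * (s - t) / (T - t) - 1.0)

def C(j3, j2, j1, l3=0, l2=0, l1=0):
    f1 = lambda s1: quad(lambda s2: (t - s2) ** l1 * phi(j1, s2), t, s1)[0]
    f2 = lambda s: quad(lambda s1: (t - s1) ** l2 * phi(j2, s1) * f1(s1), t, s)[0]
    return quad(lambda s: (t - s) ** l3 * phi(j3, s) * f2(s), t, T)[0]

print(C(0, 0, 0), (T - t) ** 1.5 / 6.0)  # C_{000} for l1=l2=l3=0
\end{verbatim}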
{\bf Proof.} The case 1 directly follows from (\ref{leto5002}).
Let's consider the case 2 ($i_1=i_2\ne i_3$,\ $l_1=l_2=l\ne l_3$\ and
$l_1,\ l_3=0,\ 1,\ 2,\ldots$). So, we prove
the following expansion
\begin{equation}
\label{ogo101}
I_{{l_1 l_1 l_3}_{T,t}}^{*(i_1i_1i_3)}=
\sum\limits_{j_1, j_2, j_3=0}^{\infty}
C_{j_3 j_2 j_1}\zeta_{j_1}^{(i_1)}\zeta_{j_2}^{(i_1)}\zeta_{j_3}^{(i_3)}\
(i_1, i_2, i_3=1,\ldots,m),
\end{equation}
where the series converges in the mean-square sense;
$l, l_3=0,\ 1,\ 2,\ldots$ and
\begin{equation}
\label{ogo199}
C_{j_3 j_2 j_1}=\int\limits_t^T
\phi_{j_3}(s)(t-s)^{l_3}\int\limits_t^s(t-s_1)^{l}
\phi_{j_2}(s_1)
\int\limits_t^{s_1}(t-s_2)^l
\phi_{j_1}(s_2)ds_2ds_1ds.
\end{equation}
If we prove the formula:
\begin{equation}
\label{ogo200}
\sum\limits_{j_1, j_3=0}^{\infty}
C_{j_3 j_1 j_1}\zeta_{j_3}^{(i_3)}=
\frac{1}{2}\int\limits_t^T(t-s)^{l_3}
\int\limits_t^s(t-s_1)^{2l}ds_1d{\bf f}_s^{(i_3)},
\end{equation}
where the series converges in the mean-square sense and the
coefficients $C_{j_3 j_1 j_1}$ have the form (\ref{ogo199}),
then using the theorem 1 and
standard relations between multiple
Stratonovich and Ito stochastic integrals we get the expansion (\ref{ogo101}).
Using the theorem
1 we may write down:
$$
\frac{1}{2}\int\limits_t^T(t-s)^{l_3}
\int\limits_t^s(t-s_1)^{2l}ds_1d{\bf f}_s^{(i_3)}=
\frac{1}{2}\sum\limits_{j_3=0}^{2l+l_3+1}
\tilde C_{j_3}\zeta_{j_3}^{(i_3)}\ \hbox{w. p. 1},
$$
where
$$
\tilde C_{j_3}=
\int\limits_t^T
\phi_{j_3}(s)(t-s)^{l_3}\int\limits_t^s(t-s_1)^{2l}ds_1ds.
$$
Then
$$
\sum\limits_{j_3=0}^{p_3}\sum\limits_{j_1=0}^{p_1}
C_{j_3 j_1 j_1}\zeta_{j_3}^{(i_3)}-
\frac{1}{2}\sum\limits_{j_3=0}^{2l+l_3+1}
\tilde C_{j_3}\zeta_{j_3}^{(i_3)}=
$$
$$
=\sum\limits_{j_3=0}^{2l+l_3+1}
\left(\sum\limits_{j_1=0}^{p_1}
C_{j_3 j_1 j_1}-\frac{1}{2}\tilde C_{j_3}\right)
\zeta_{j_3}^{(i_3)}+
\sum\limits_{j_3=2l+l_3+2}^{p_3}
\sum\limits_{j_1=0}^{p_1}
C_{j_3 j_1 j_1}\zeta_{j_3}^{(i_3)}.
$$
Therefore
$$
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm lim}\cr
$\stackrel{}{{}_{p_1,p_3\to \infty}}$\cr
}} }
{\rm M}\left\{\left(
\sum\limits_{j_3=0}^{p_3}\sum\limits_{j_1=0}^{p_1}C_{j_3j_1 j_1}
\zeta_{j_3}^{(i_3)}-
\frac{1}{2}\int\limits_t^T(t-s)^{l_3}
\int\limits_t^s(t-s_1)^{2l}ds_1d{\bf f}_s^{(i_3)}\right)^2\right\}=
$$
\begin{equation}
\label{ogo210}
=\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm lim}\cr
$\stackrel{}{{}_{p_1\to \infty}}$\cr
}} }\sum\limits_{j_3=0}^{2l+l_3+1}
\left(\sum\limits_{j_1=0}^{p_1}C_{j_3j_1 j_1}-
\frac{1}{2}\tilde C_{j_3}\right)^2+
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm lim}\cr
$\stackrel{}{{}_{p_1,p_3\to \infty}}$\cr
}} }{\rm M}\left\{\left(
\sum\limits_{j_3=2l+l_3+2}^{p_3}
\sum\limits_{j_1=0}^{p_1}
C_{j_3 j_1 j_1}\zeta_{j_3}^{(i_3)}\right)^2\right\}.
\end{equation}
Let's prove that
\begin{equation}
\label{ogo211}
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm lim}\cr
$\stackrel{}{{}_{p_1\to \infty}}$\cr
}} }
\left(\sum\limits_{j_1=0}^{p_1}C_{j_3j_1 j_1}-
\frac{1}{2}\tilde C_{j_3}\right)^2=0.
\end{equation}
We have
$$
\left(\sum\limits_{j_1=0}^{p_1}C_{j_3j_1 j_1}-
\frac{1}{2}\tilde C_{j_3}\right)^2=
$$
$$
=\left(\frac{1}{2}\sum\limits_{j_1=0}^{p_1}
\int\limits_t^T\phi_{j_3}(s)(t-s)^{l_3}
\left(\int\limits_t^s\phi_{j_1}(s_1)(t-s_1)^{l}ds_1\right)^2ds-
\frac{1}{2}
\int\limits_t^T
\phi_{j_3}(s)(t-s)^{l_3}\int\limits_t^s(t-s_1)^{2l}ds_1ds\right)^2=
$$
$$
=\frac{1}{4}\left(
\int\limits_t^T\phi_{j_3}(s)(t-s)^{l_3}\left(
\sum\limits_{j_1=0}^{p_1}
\left(\int\limits_t^s\phi_{j_1}(s_1)(t-s_1)^{l}ds_1\right)^2
-\int\limits_t^s(t-s_1)^{2l}ds_1\right)ds\right)^2=
$$
$$
=\frac{1}{4}\left(
\int\limits_t^T\phi_{j_3}(s)(t-s)^{l_3}\left(
\int\limits_t^s(t-s_1)^{2l}ds_1-\sum\limits_{j_1=p_1+1}^{\infty}
\left(\int\limits_t^s\phi_{j_1}(s_1)(t-s_1)^{l}ds_1\right)^2
-\int\limits_t^s(t-s_1)^{2l}ds_1\right)ds\right)^2
$$
\begin{equation}
\label{ogo300}
=\frac{1}{4}\left(
\int\limits_t^T\phi_{j_3}(s)(t-s)^{l_3}
\sum\limits_{j_1=p_1+1}^{\infty}
\left(\int\limits_t^s\phi_{j_1}(s_1)(t-s_1)^{l}ds_1\right)^2
ds\right)^2.
\end{equation}
In order to get (\ref{ogo300}) we used the Parseval equality,
which in this case has the form:
\begin{equation}
\label{ogo301}
\sum_{j_1=0}^{\infty}\left(\int\limits_t^s\phi_{j_1}(s_1)
(t-s_1)^lds_1\right)^2=
\int\limits_t^T K^2(s,s_1)ds_1\ \left(K(s,s_1)=(t-s_1)^l{\bf 1}_{\{s_1<s\}};\ s, s_1\in
[t, T]\right).
\end{equation}
\vspace{2mm}
Taking into account that the functional sequence
$$
u_n(s)=\sum_{j_1=0}^{n}\left(\int\limits_t^s\phi_{j_1}(s_1)
(t-s_1)^lds_1\right)^2
$$
is nondecreasing, that its members are continuous and that the limit function
$$
u(s)=\int\limits_t^s(t-s_1)^{2l}ds_1
$$
is continuous on the interval $[t, T]$,
in accordance with the Dini test we
have uniform
convergence of the functional sequence $u_n(s)$ to the limit function
$u(s)$ on the interval $[t, T]$.
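This uniform convergence is also easy to observe numerically. The sketch below
(Python with SciPy, purely illustrative; the same Legendre normalization is
assumed, and $[t,T]=[0,1]$, $l=1$ are arbitrary choices) prints the uniform
distance $\max_s|u(s)-u_n(s)|$ on a grid for increasing $n$:
\begin{verbatim}
# Illustration of the uniform (Dini) convergence of the Parseval partial sums
#   u_n(s) = sum_{j<=n} ( int_t^s phi_j(s1) (t-s1)^l ds1 )^2
# to u(s) = int_t^s (t-s1)^{2l} ds1 on [t, T].
import numpy as np
from scipy.special import eval_legendre
from scipy.integrate import quad

t, T, l = 0.0, 1.0, 1                 # hypothetical interval and weight power
phi = lambda j, s: np.sqrt((2 * j + 1) / (T - t)) * \
                   eval_legendre(j, 2.0 * (s - t) / (T - t) - 1.0)
grid = np.linspace(t, T, 101)

def a(j, s):                          # int_t^s phi_j(s1) (t-s1)^l ds1
    return quad(lambda s1: phi(j, s1) * (t - s1) ** l, t, s)[0]

u = np.array([quad(lambda s1: (t - s1) ** (2 * l), t, s)[0] for s in grid])
for n in (2, 5, 10, 20, 40):
    u_n = sum(np.array([a(j, s) for s in grid]) ** 2 for j in range(n + 1))
    print(n, np.max(np.abs(u - u_n)))  # the uniform error decreases with n
\end{verbatim}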
From (\ref{ogo300}) using the inequality
of Cauchy-Bunyakovsky we get:
$$
\left(\sum\limits_{j_1=0}^{p_1}C_{j_3j_1 j_1}-
\frac{1}{2}\tilde C_{j_3}\right)^2\le
\frac{1}{4}
\int\limits_t^T\phi_{j_3}^2(s)(t-s)^{2l_3}ds
\int\limits_t^T\left(\sum\limits_{j_1=p_1+1}^{\infty}
\left(\int\limits_t^s\phi_{j_1}(s_1)(t-s_1)^{l}ds_1\right)^2
\right)^2 ds\le
$$
\begin{equation}
\label{ogo302}
\le\frac{1}{4}\varepsilon^2 (T-t)^{2l_3}\int\limits_t^T\phi_{j_3}^2(s)ds
(T-t)=\frac{1}{4}(T-t)^{2l_3+1}\varepsilon^2
\end{equation}
for $p_1>N(\varepsilon),$ where such $N(\varepsilon)$
exists for any $\varepsilon>0$ due to the uniform convergence established above.
The relation (\ref{ogo211}) follows from (\ref{ogo302}).
Further
\begin{equation}
\label{ogo303}
\sum\limits_{j_1=0}^{p_1}
\sum\limits_{j_3=2l+l_3+2}^{p_3}
C_{j_3 j_1 j_1}\zeta_{j_3}^{(i_3)}=
\sum\limits_{j_1=0}^{p_1}
\sum\limits_{j_3=2l+l_3+2}^{2(j_1+l+1)+l_3}
C_{j_3 j_1 j_1}\zeta_{j_3}^{(i_3)}.
\end{equation}
We put $2(j_1+l+1)+l_3$ instead of $p_3$, since
$C_{j_3j_1j_1}=0$ for $j_3>2(j_1+l+1)+l_3.$ This conclusion
follows from the relation:
$$
C_{j_3j_1j_1}=
\frac{1}{2}
\int\limits_t^T\phi_{j_3}(s)(t-s)^{l_3}
\left(
\int\limits_t^s\phi_{j_1}(s_1)(t-s_1)^{l}ds_1\right)^2ds=
\frac{1}{2}\int\limits_t^T\phi_{j_3}(s)Q_{2(j_1+l+1)+l_3}(s)ds,
$$
where $Q_{2(j_1+l+1)+l_3}(s)$ is a polynomial of degree
$2(j_1+l+1)+l_3.$
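This vanishing is straightforward to confirm numerically for small indices
(Python with SciPy, illustration only; same assumed normalization, and
$[t,T]=[0,1]$, $l=l_3=1$, $j_1=1$ are arbitrary choices):
\begin{verbatim}
# Illustrative check that C_{j3 j1 j1} = 0 once j3 exceeds the degree
# 2(j1+l+1)+l3 of the polynomial multiplying phi_{j3}.
import numpy as np
from scipy.special import eval_legendre
from scipy.integrate import quad

t, T, l, l3 = 0.0, 1.0, 1, 1          # hypothetical parameters
phi = lambda j, s: np.sqrt((2 * j + 1) / (T - t)) * \
                   eval_legendre(j, 2.0 * (s - t) / (T - t) - 1.0)

def C(j3, j1):                        # C_{j3 j1 j1} via the reduced form
    inner = lambda s: quad(lambda s1: phi(j1, s1) * (t - s1) ** l, t, s)[0] ** 2
    return 0.5 * quad(lambda s: phi(j3, s) * (t - s) ** l3 * inner(s), t, T)[0]

j1 = 1
deg = 2 * (j1 + l + 1) + l3           # the critical degree (= 7 here)
print([C(j3, j1) for j3 in range(deg, deg + 4)])
# the entries with j3 > deg vanish up to quadrature error
\end{verbatim}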
It is easy to see that
\begin{equation}
\label{ogo304}
\sum\limits_{j_1=0}^{p_1}
\sum\limits_{j_3=2l+l_3+2}^{2(j_1+l+1)+l_3}
C_{j_3 j_1 j_1}\zeta_{j_3}^{(i_3)}=
\sum\limits_{j_3=2l+l_3+2}^{2(p_1+l+1)+l_3}
\sum\limits_{j_1=0}^{p_1}
C_{j_3 j_1 j_1}\zeta_{j_3}^{(i_3)}.
\end{equation}
Note that we introduced into the sum $\sum\limits_{j_1=0}^{p_1}$
some coefficients $C_{j_3 j_1 j_1}$
which equal zero.
From (\ref{ogo303}) and (\ref{ogo304}) we get:
$$
{\rm M}\left\{\left(\sum\limits_{j_1=0}^{p_1}
\sum\limits_{j_3=2l+l_3+2}^{p_3}
C_{j_3 j_1 j_1}\zeta_{j_3}^{(i_3)}\right)^2\right\}=
{\rm M}\left\{\left(
\sum\limits_{j_3=2l+l_3+2}^{2(p_1+l+1)+l_3}
\sum\limits_{j_1=0}^{p_1}
C_{j_3 j_1 j_1}\zeta_{j_3}^{(i_3)}\right)^2\right\}=
$$
$$
=\sum\limits_{j_3=2l+l_3+2}^{2(p_1+l+1)+l_3}
\left(\sum\limits_{j_1=0}^{p_1}
C_{j_3 j_1 j_1}\right)^2=
\sum\limits_{j_3=2l+l_3+2}^{2(p_1+l+1)+l_3}
\left(\frac{1}{2}\sum\limits_{j_1=0}^{p_1}
\int\limits_t^T\phi_{j_3}(s)(t-s)^{l_3}
\left(\int\limits_t^s\phi_{j_1}(s_1)(t-s_1)^{l}ds_1\right)^2ds\right)^2
$$
$$
=\frac{1}{4}\sum\limits_{j_3=2l+l_3+2}^{2(p_1+l+1)+l_3}
\left(
\int\limits_t^T\phi_{j_3}(s)(t-s)^{l_3}
\sum\limits_{j_1=0}^{p_1}
\left(\int\limits_t^s\phi_{j_1}(s_1)(t-s_1)^{l}ds_1\right)^2
ds\right)^2=
$$
$$
=\frac{1}{4}\sum\limits_{j_3=2l+l_3+2}^{2(p_1+l+1)+l_3}\left(
\int\limits_t^T\phi_{j_3}(s)(t-s)^{l_3}\left(
\int\limits_t^s(t-s_1)^{2l}ds_1-\sum\limits_{j_1=p_1+1}^{\infty}
\left(\int\limits_t^s\phi_{j_1}(s_1)(t-s_1)^{l}ds_1\right)^2
\right)ds\right)^2
$$
\begin{equation}
\label{ogo310}
=\frac{1}{4}\sum\limits_{j_3=2l+l_3+2}^{2(p_1+l+1)+l_3}\left(
\int\limits_t^T\phi_{j_3}(s)(t-s)^{l_3}
\sum\limits_{j_1=p_1+1}^{\infty}
\left(\int\limits_t^s\phi_{j_1}(s_1)(t-s_1)^{l}ds_1\right)^2
ds\right)^2.
\end{equation}
In
order to
get
(\ref{ogo310}) we used the Parseval equality
of type (\ref{ogo301}) and the following
relation:
$$
\int\limits_t^T\phi_{j_3}(s)Q_{2l+1+l_3}(s)ds=0;\ j_3>2l+1+l_3,
$$
where $Q_{2l+1+l_3}(s)$ is a polynomial of degree
$2l+1+l_3.$
Further we have
$$
\left(\int\limits_t^s\phi_{j_1}(s_1)(t-s_1)^lds_1\right)^2=
\frac{(T-t)^{2l+1}(2j_1+1)}{2^{2l+2}}
\left(\int\limits_{-1}^{z(s)}
P_{j_1}(y)(1+y)^ldy\right)^2=
$$
$$
=\frac{(T-t)^{2l+1}}{2^{2l+2}(2j_1+1)}\left(
\left(1+z(s)\right)^l
R_{j_1}(s)
-l\int\limits_{-1}^{z(s)}
\left(P_{j_1+1}(y)-P_{j_1-1}(y)\right)\left(1+y\right)^{l-1}dy\right)^2
\le
$$
$$
\le\frac{(T-t)^{2l+1}2}{2^{2l+2}(2j_1+1)}\left(
\left(\frac{2(s-t)}{T-t}\right)^{2l}
R_{j_1}^2(s)
+l^2
\left(
\int\limits_{-1}^{z(s)}
\left(P_{j_1+1}(y)-P_{j_1-1}(y)\right)\left(1+y\right)^{l-1}dy\right)^2
\right)
$$
$$
\le\frac{(T-t)^{2l+1}}{2^{2l+1}(2j_1+1)}\left(
2^{2l+1}
Z_{j_1}(s)+
l^2
\int\limits_{-1}^{z(s)}
(1+y)^{2l-2}dy
\int\limits_{-1}^{z(s)}
\left(P_{j_1+1}(y)-P_{j_1-1}(y)\right)^2dy
\right)\le
$$
$$
\le\frac{(T-t)^{2l+1}}{2^{2l+1}(2j_1+1)}\left(
2^{2l+1}
Z_{j_1}(s)
+\frac{2l^2}{2l-1}\left(\frac{2(s-t)}{T-t}\right)^{2l-1}
\int\limits_{-1}^{z(s)}
\left(P_{j_1+1}^2(y)+P_{j_1-1}^2(y)\right)dy
\right)\le
$$
\begin{equation}
\label{ogo400}
\le\frac{(T-t)^{2l+1}}{2(2j_1+1)}\left(
2
Z_{j_1}(s)
+\frac{l^2}{2l-1}
\int\limits_{-1}^{z(s)}
\left(P_{j_1+1}^2(y)+P_{j_1-1}^2(y)\right)dy
\right),
\end{equation}
where
$$
R_{j_1}(s)=P_{j_1+1}(z(s))-P_{j_1-1}(z(s)),\
Z_{j_1}(s)=P_{j_1+1}^2(z(s))+P_{j_1-1}^2(z(s)).
$$
\vspace{2mm}
Let's estimate the right part
of (\ref{ogo400}) using (\ref{ogo23}):
$$
\left(\int\limits_t^s\phi_{j_1}(s_1)(t-s_1)^lds_1\right)^2 <
\frac{(T-t)^{2l+1}}{2(2j_1+1)}\left(\frac{K^2}{j_1+2}+\frac{K^2}{j_1}
\right)\left(\frac{2}
{(1-\left(z(s))^2
\right)^{\frac{1}{2}}}
+\frac{l^2}{2l-1}
\int\limits_{-1}^{z(s)}
\frac{dy}{\left(1-y^2\right)^{\frac{1}{2}}}\right)<
$$
\begin{equation}
\label{ogo401}
<\frac{(T-t)^{2l+1}K^2}{2j_1^2}\left(
\frac{2}
{(1-\left(z(s))^2
\right)^{\frac{1}{2}}}+
\frac{l^2\pi}{2l-1}\right),\ s\in(t, T).
\end{equation}
From (\ref{ogo310}) and (\ref{ogo401}) we get:
$$
{\rm M}\left\{\left(\sum\limits_{j_1=0}^{p_1}
\sum\limits_{j_3=2l+l_3+2}^{p_3}
C_{j_3 j_1 j_1}\zeta_{j_3}^{(i_3)}\right)^2\right\}\le
$$
$$
\le
\frac{1}{4}\sum\limits_{j_3=2l+l_3+2}^{2(p_1+l+1)+l_3}\left(
\int\limits_t^T|\phi_{j_3}(s)|(t-s)^{l_3}
\sum\limits_{j_1=p_1+1}^{\infty}
\left(\int\limits_t^s\phi_{j_1}(s_1)(t-s_1)^{l}ds_1\right)^2
ds\right)^2\le
$$
$$
\le
\frac{1}{4}(T-t)^{2l_3}\sum\limits_{j_3=2l+l_3+2}^{2(p_1+l+1)+l_3}\left(
\int\limits_t^T|\phi_{j_3}(s)|
\sum\limits_{j_1=p_1+1}^{\infty}
\left(\int\limits_t^s\phi_{j_1}(s_1)(t-s_1)^{l}ds_1\right)^2
ds\right)^2\le
$$
$$
<
\frac{(T-t)^{4l+2l_3+1}K^4 K_1^2}{16}
\sum\limits_{j_3=2l+l_3+2}^{2(p_1+l+1)+l_3}
\left(\left(\int\limits_t^T
\frac{2ds}
{\left(1-\left(z(s)\right)^2
\right)^{\frac{3}{4}}}
+\frac{l^2\pi}{2l-1}
\int\limits_t^T
\frac{ds}
{\left(1-\left(z(s)\right)^2
\right)^{\frac{1}{4}}}\right)
\sum\limits_{j_1=p_1+1}^{\infty}\frac{1}{j_1^2}
\right)^2
$$
$$
\le
\frac{(T-t)^{4l+2l_3+3}K^4 K_1^2}{64}\cdot\frac{2p_1+1}{p_1^2}
\left(\int\limits_{-1}^1
\frac{2dy}
{(1-y^2)^{\frac{3}{4}}}
+\frac{l^2\pi}{2l-1}
\int\limits_{-1}^1
\frac{dy}
{(1-y^2)^{\frac{1}{4}}}\right)^2\le
$$
\begin{equation}
\label{ogo500}
\le (T-t)^{4l+2l_3+3}C\frac{2p_1+1}{p_1^2}\to 0\ \hbox{when}\ p_1\to \infty,
\end{equation}
where the constant $C$ doesn't depend on $p_1$ and $T-t$.
The relation (\ref{ogo200}) follows from (\ref{ogo210}), (\ref{ogo211})
and (\ref{ogo500}), and the expansion (\ref{ogo101}) follows
from (\ref{ogo200}).
Let's consider the case 3 ($i_2=i_3\ne i_1$,\ $l_2=l_3=l\ne l_1$\ and
$l_1,\ l_3=0,\ 1,\ 2,\ldots$). So, we prove
the following expansion
\begin{equation}
\label{ogo101ee}
I_{{l_1 l_3 l_3}_{T,t}}^{*(i_1i_3i_3)}=
\sum\limits_{j_1, j_2, j_3=0}^{\infty}
C_{j_3 j_2 j_1}\zeta_{j_1}^{(i_1)}\zeta_{j_2}^{(i_3)}\zeta_{j_3}^{(i_3)}\
(i_1, i_2, i_3=1,\ldots,m),
\end{equation}
where the series converges in the mean-square sense;
$l, l_1=0,\ 1,\ 2,\ldots$ and
\begin{equation}
\label{ogo1991}
C_{j_3 j_2 j_1}=\int\limits_t^T
\phi_{j_3}(s)(t-s)^{l}\int\limits_t^s(t-s_1)^{l}
\phi_{j_2}(s_1)
\int\limits_t^{s_1}(t-s_2)^{l_1}
\phi_{j_1}(s_2)ds_2ds_1ds.
\end{equation}
If we prove the formula:
\begin{equation}
\label{ogo2000}
\sum\limits_{j_1, j_3=0}^{\infty}
C_{j_3 j_3 j_1}\zeta_{j_1}^{(i_1)}=
\frac{1}{2}\int\limits_t^T(t-s)^{2l}
\int\limits_t^s(t-s_1)^{l_1}d{\bf f}_{s_1}^{(i_1)}ds,
\end{equation}
where the series converges in the mean-square sense and the
coefficients $C_{j_3 j_3 j_1}$ have the form (\ref{ogo1991}),
then using the theorem 1
and standard relations between multiple Ito and
Stratonovich stochastic integrals we get the expansion (\ref{ogo101ee}).
Using the theorem 1 and the Ito formula we
may write down:
$$
\frac{1}{2}\int\limits_t^T(t-s)^{2l}
\int\limits_t^s(t-s_1)^{l_1}d{\bf f}_{s_1}^{(i_1)}ds=
\frac{1}{2}\int\limits_t^T(t-s_1)^{l_1}
\int\limits_{s_1}^T(t-s)^{2l}dsd{\bf f}_{s_1}^{(i_1)}=
\frac{1}{2}\sum\limits_{j_1=0}^{2l+l_1+1}
\tilde C_{j_1}\zeta_{j_1}^{(i_1)}\ \hbox{w. p. 1},
$$
where
$$
\tilde C_{j_1}=
\int\limits_t^T
\phi_{j_1}(s_1)(t-s_1)^{l_1}\int\limits_{s_1}^T(t-s)^{2l}dsds_1.
$$
Then
$$
\sum\limits_{j_1=0}^{p_1}\sum\limits_{j_3=0}^{p_3}
C_{j_3 j_3 j_1}\zeta_{j_1}^{(i_1)}-
\frac{1}{2}\sum\limits_{j_1=0}^{2l+l_1+1}
\tilde C_{j_1}\zeta_{j_1}^{(i_1)}=
\sum\limits_{j_1=0}^{2l+l_1+1}
\left(\sum\limits_{j_3=0}^{p_3}
C_{j_3 j_3 j_1}-\frac{1}{2}\tilde C_{j_1}\right)
\zeta_{j_1}^{(i_1)}+
\sum\limits_{j_1=2l+l_1+2}^{p_1}
\sum\limits_{j_3=0}^{p_3}
C_{j_3 j_3 j_1}\zeta_{j_1}^{(i_1)}.
$$
Therefore
$$
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm lim}\cr
$\stackrel{}{{}_{p_1,p_3\to \infty}}$\cr
}} }
{\rm M}\left\{\left(
\sum\limits_{j_1=0}^{p_1}\sum\limits_{j_3=0}^{p_3}C_{j_3j_3 j_1}
\zeta_{j_1}^{(i_1)}-
\frac{1}{2}\int\limits_t^T(t-s)^{2l}
\int\limits_t^s(t-s_1)^{l_1}d{\bf f}_{s_1}^{(i_1)}ds\right)^2\right\}=
$$
\begin{equation}
\label{ogo2100}
=\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm lim}\cr
$\stackrel{}{{}_{p_3\to \infty}}$\cr
}} }\sum\limits_{j_1=0}^{2l+l_1+1}
\left(\sum\limits_{j_3=0}^{p_3}C_{j_3j_3 j_1}-
\frac{1}{2}\tilde C_{j_1}\right)^2+
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm lim}\cr
$\stackrel{}{{}_{p_1,p_3\to \infty}}$\cr
}} }{\rm M}\left\{\left(
\sum\limits_{j_1=2l+l_1+2}^{p_1}
\sum\limits_{j_3=0}^{p_3}
C_{j_3 j_3 j_1}\zeta_{j_1}^{(i_1)}\right)^2\right\}.
\end{equation}
Let's prove that
\begin{equation}
\label{ogo2110}
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm lim}\cr
$\stackrel{}{{}_{p_3\to \infty}}$\cr
}} }
\left(\sum\limits_{j_3=0}^{p_3}C_{j_3j_3 j_1}-
\frac{1}{2}\tilde C_{j_1}\right)^2=0.
\end{equation}
We have
$$
\left(\sum\limits_{j_3=0}^{p_3}C_{j_3j_3 j_1}-
\frac{1}{2}\tilde C_{j_1}\right)^2=
\left(\sum\limits_{j_3=0}^{p_3}
\int\limits_t^T\phi_{j_1}(s_2)(t-s_2)^{l_1}ds_2
\int\limits_{s_2}^T\phi_{j_3}(s_1)(t-s_1)^{l}ds_1
\int\limits_{s_1}^T\phi_{j_3}(s)(t-s)^{l}ds-\right.
$$
$$
-\left.
\frac{1}{2}
\int\limits_t^T
\phi_{j_1}(s_1)(t-s_1)^{l_1}\int\limits_{s_1}^T(t-s)^{2l}dsds_1\right)^2=
$$
$$
=\left(\frac{1}{2}\sum\limits_{j_3=0}^{p_3}
\int\limits_t^T\phi_{j_1}(s_2)(t-s_2)^{l_1}
\left(\int\limits_{s_2}^T\phi_{j_3}(s_1)(t-s_1)^{l}ds_1\right)^2ds_2-
\frac{1}{2}
\int\limits_t^T
\phi_{j_1}(s_1)(t-s_1)^{l_1}\int\limits_{s_1}^T(t-s)^{2l}dsds_1\right)^2
$$
$$
=\frac{1}{4}\left(
\int\limits_t^T\phi_{j_1}(s_1)(t-s_1)^{l_1}\left(
\sum\limits_{j_3=0}^{p_3}
\left(\int\limits_{s_1}^T\phi_{j_3}(s)(t-s)^{l}ds\right)^2
-\int\limits_{s_1}^T(t-s)^{2l}ds\right)ds_1\right)^2=
$$
$$
=\frac{1}{4}\left(
\int\limits_t^T\phi_{j_1}(s_1)(t-s_1)^{l_1}\left(
\int\limits_{s_1}^T(t-s)^{2l}ds-
\sum\limits_{j_3=p_3+1}^{\infty}
\left(\int\limits_{s_1}^T\phi_{j_3}(s)(t-s)^{l}ds\right)^2
-\int\limits_{s_1}^T(t-s)^{2l}ds\right)ds_1\right)^2=
$$
\begin{equation}
\label{ogo3000}
=\frac{1}{4}\left(
\int\limits_t^T\phi_{j_1}(s_1)(t-s_1)^{l_1}
\sum\limits_{j_3=p_3+1}^{\infty}
\left(\int\limits_{s_1}^T\phi_{j_3}(s)(t-s)^{l}ds\right)^2
ds_1\right)^2.
\end{equation}
In order to
get (\ref{ogo3000}) we used the Parseval equality, which in
this case has the form:
\begin{equation}
\label{ogo3010}
\sum_{j_3=0}^{\infty}\left(\int\limits_{s_1}^T\phi_{j_3}(s)
(t-s)^lds\right)^2=
\int\limits_t^T K^2(s,s_1)ds\ \left(K(s,s_1)=(t-s)^l
{\bf 1}_{\{s_1<s\}};\ s, s_1\in [t, T]\right).
\end{equation}
Taking into account that the functional sequence
$$
u_n(s_1)=\sum_{j_3=0}^{n}\left(\int\limits_{s_1}^T\phi_{j_3}(s)
(t-s)^lds\right)^2
$$
is nondecreasing, that its members are continuous and that the limit
function
$$
u(s_1)=\int\limits_{s_1}^T(t-s)^{2l}ds
$$
is continuous on the interval $[t, T]$, according to the Dini test we have
uniform
convergence
of the functional sequence $u_n(s_1)$ to the limit
function $u(s_1)$ on the interval $[t, T]$.
From (\ref{ogo3000}) using the inequality of Cauchy-Bunyakovsky we get:
$$
\left(\sum\limits_{j_3=0}^{p_3}C_{j_3j_3 j_1}-
\frac{1}{2}\tilde C_{j_1}\right)^2\le
\frac{1}{4}
\int\limits_t^T\phi_{j_1}^2(s_1)(t-s_1)^{2l_1}ds_1
\int\limits_t^T\left(\sum\limits_{j_3=p_3+1}^{\infty}
\left(\int\limits_{s_1}^T\phi_{j_3}(s)(t-s)^{l}ds\right)^2
\right)^2 ds_1\le
$$
\begin{equation}
\label{ogo3020}
\le\frac{1}{4}\varepsilon^2 (T-t)^{2l_1}\int\limits_t^T\phi_{j_1}^2(s_1)ds_1
(T-t)=\frac{1}{4}(T-t)^{2l_1+1}\varepsilon^2
\end{equation}
for $p_3>N(\varepsilon),$ where such $N(\varepsilon)$
exists for any $\varepsilon>0$ due to the uniform convergence established above.
The relation (\ref{ogo2110}) follows from (\ref{ogo3020}).
We have
\begin{equation}
\label{ogo3030}
\sum\limits_{j_3=0}^{p_3}
\sum\limits_{j_1=2l+l_1+2}^{p_1}
C_{j_3 j_3 j_1}\zeta_{j_1}^{(i_1)}=
\sum\limits_{j_3=0}^{p_3}
\sum\limits_{j_1=2l+l_1+2}^{2(j_3+l+1)+l_1}
C_{j_3 j_3 j_1}\zeta_{j_1}^{(i_1)}.
\end{equation}
We put $2(j_3+l+1)+l_1$ instead of $p_1$, since
$C_{j_3j_3j_1}=0$ when $j_1>2(j_3+l+1)+l_1.$
It follows from the relation:
$$
C_{j_3j_3j_1}=
\frac{1}{2}
\int\limits_t^T\phi_{j_1}(s_2)(t-s_2)^{l_1}
\left(
\int\limits_{s_2}^T\phi_{j_3}(s_1)(t-s_1)^{l}ds_1\right)^2ds_2=
\frac{1}{2}\int\limits_t^T\phi_{j_1}(s_2)Q_{2(j_3+l+1)+l_1}(s_2)ds_2,
$$
where $Q_{2(j_3+l+1)+l_1}(s)$ is a polynomial of degree
$2(j_3+l+1)+l_1.$
It is easy to see that
\begin{equation}
\label{ogo3040}
\sum\limits_{j_3=0}^{p_3}
\sum\limits_{j_1=2l+l_1+2}^{2(j_3+l+1)+l_1}
C_{j_3 j_3 j_1}\zeta_{j_1}^{(i_1)}=
\sum\limits_{j_1=2l+l_1+2}^{2(p_3+l+1)+l_1}
\sum\limits_{j_3=0}^{p_3}
C_{j_3 j_3 j_1}\zeta_{j_1}^{(i_1)}.
\end{equation}
Note that we
included in the sum
$\sum\limits_{j_3=0}^{p_3}$
some coefficients $C_{j_3 j_3 j_1}$
which equal zero.
From (\ref{ogo3030}) and (\ref{ogo3040}) we get:
$$
{\rm M}\left\{\left(\sum\limits_{j_3=0}^{p_3}
\sum\limits_{j_1=2l+l_1+2}^{p_1}
C_{j_3 j_3 j_1}\zeta_{j_1}^{(i_1)}\right)^2\right\}=
{\rm M}\left\{\left(
\sum\limits_{j_1=2l+l_1+2}^{2(p_3+l+1)+l_1}
\sum\limits_{j_3=0}^{p_3}
C_{j_3 j_3 j_1}\zeta_{j_1}^{(i_1)}\right)^2\right\}=
$$
$$
=\sum\limits_{j_1=2l+l_1+2}^{2(p_3+l+1)+l_1}
\left(\sum\limits_{j_3=0}^{p_3}
C_{j_3 j_3 j_1}\right)^2=
$$
$$
=\sum\limits_{j_1=2l+l_1+2}^{2(p_3+l+1)+l_1}
\left(\frac{1}{2}\sum\limits_{j_3=0}^{p_3}
\int\limits_t^T\phi_{j_1}(s_2)(t-s_2)^{l_1}
\left(\int\limits_{s_2}^T\phi_{j_3}(s_1)(t-s_1)^{l}ds_1\right)^2ds_2\right)^2=
$$
$$
=\frac{1}{4}\sum\limits_{j_1=2l+l_1+2}^{2(p_3+l+1)+l_1}
\left(
\int\limits_t^T\phi_{j_1}(s_2)(t-s_2)^{l_1}
\sum\limits_{j_3=0}^{p_3}
\left(\int\limits_{s_2}^T\phi_{j_3}(s_1)(t-s_1)^{l}ds_1\right)^2
ds_2\right)^2=
$$
$$
=\frac{1}{4}\sum\limits_{j_1=2l+l_1+2}^{2(p_3+l+1)+l_1}\left(
\int\limits_t^T\phi_{j_1}(s_2)(t-s_2)^{l_1}\left(
\int\limits_{s_2}^T(t-s_1)^{2l}ds_1-
\sum\limits_{j_3=p_3+1}^{\infty}
\left(\int\limits_{s_2}^T\phi_{j_3}(s_1)(t-s_1)^{l}ds_1\right)^2
\right)ds_2\right)^2=
$$
\begin{equation}
\label{ogo3100}
=\frac{1}{4}\sum\limits_{j_1=2l+l_1+2}^{2(p_3+l+1)+l_1}\left(
\int\limits_t^T\phi_{j_1}(s_2)(t-s_2)^{l_1}
\sum\limits_{j_3=p_3+1}^{\infty}
\left(\int\limits_{s_2}^T\phi_{j_3}(s_1)(t-s_1)^{l}ds_1\right)^2
ds_2\right)^2.
\end{equation}
In order to
get
(\ref{ogo3100}) we used the Parseval equality of type
(\ref{ogo3010}) and the following relation:
$$
\int\limits_t^T\phi_{j_1}(s)Q_{2l+1+l_1}(s)ds=0;\ j_1>2l+1+l_1,
$$
where $Q_{2l+1+l_1}(s)$ is a polynomial of degree
$2l+1+l_1.$
Further we have
$$
\left(\int\limits_{s_2}^T\phi_{j_3}(s_1)(t-s_1)^lds_1\right)^2=
\frac{(T-t)^{2l+1}(2j_3+1)}{2^{2l+2}}
\left(\int\limits_{z(s_2)}^1
P_{j_3}(y)(1+y)^ldy\right)^2=
$$
$$
=\frac{(T-t)^{2l+1}}{2^{2l+2}(2j_3+1)}\left(
\left(1+z(s_2)\right)^l
Q_{j_3}(s_2)-
l\int\limits_{z(s_2)}^1
\left(P_{j_3+1}(y)-P_{j_3-1}(y)\right)\left(1+y\right)^{l-1}dy\right)^2
$$
$$
\le\frac{(T-t)^{2l+1}2}{2^{2l+2}(2j_3+1)}\left(
\left(\frac{2(s_2-t)}{T-t}\right)^{2l}
Q_{j_3}^2(s_2)+
l^2
\left(
\int\limits_{z(s_2)}^1
\left(P_{j_3+1}(y)-P_{j_3-1}(y)\right)\left(1+y\right)^{l-1}dy\right)^2
\right)
$$
$$
\le\frac{(T-t)^{2l+1}}{2^{2l+1}(2j_3+1)}\left(
2^{2l+1}
H_{j_3}(s_2)+
l^2
\int\limits_{z(s_2)}^1
(1+y)^{2l-2}dy
\int\limits_{z(s_2)}^1
\left(P_{j_3+1}(y)-P_{j_3-1}(y)\right)^2dy
\right)
$$
$$
\le\frac{(T-t)^{2l+1}}{2^{2l+1}(2j_3+1)}\left(
2^{2l+1}
H_{j_3}(s_2)
+\frac{2^{2l}l^2}{2l-1}\left(1-\left(\frac{(s_2-t)}{T-t}\right)^{2l-1}\right)
\int\limits_{z(s_2)}^1
\left(P_{j_3+1}^2(y)+P_{j_3-1}^2(y)\right)dy
\right)
$$
\begin{equation}
\label{ogo4000}
\le\frac{(T-t)^{2l+1}}{2(2j_3+1)}\Biggl(
2
H_{j_3}(s_2)
\Biggl.
+\frac{l^2}{2l-1}
\int\limits_{z(s_2)}^1
\left(P_{j_3+1}^2(y)+P_{j_3-1}^2(y)\right)dy
\Biggr),
\end{equation}
where
$$
Q_{j_3}(s_2)=P_{j_3-1}(z(s_2))-P_{j_3+1}(z(s_2)),\
H_{j_3}(s_2)=P_{j_3-1}^2(z(s_2))+P_{j_3+1}^2(z(s_2)).
$$
\vspace{2mm}
Let's estimate the right-hand side
of (\ref{ogo4000}) using
(\ref{ogo23}):
$$
\left(\int\limits_{s_2}^T\phi_{j_3}(s_1)(t-s_1)^lds_1\right)^2 <
\frac{(T-t)^{2l+1}}{2(2j_3+1)}\left(\frac{K^2}{j_3+2}+\frac{K^2}{j_3}
\right)\left(\frac{2}
{(1-\left(z(s_2))^2
\right)^{\frac{1}{2}}}
+\frac{l^2}{2l-1}
\int\limits_{z(s_2)}^1
\frac{dy}{\left(1-y^2\right)^{\frac{1}{2}}}\right)
$$
\begin{equation}
\label{ogo4010}
<\frac{(T-t)^{2l+1}K^2}{2j_3^2}\left(
\frac{2}
{(1-\left(z(s_2))^2
\right)^{\frac{1}{2}}}+
\frac{l^2\pi}{2l-1}\right),\ s_2\in(t, T).
\end{equation}
From (\ref{ogo3100}) and (\ref{ogo4010}) we get:
$$
{\rm M}\left\{\left(\sum\limits_{j_3=0}^{p_3}
\sum\limits_{j_1=2l+l_1+2}^{p_1}
C_{j_3 j_3 j_1}\zeta_{j_1}^{(i_1)}\right)^2\right\}\le
$$
$$
\le
\frac{1}{4}\sum\limits_{j_1=2l+l_1+2}^{2(p_3+l+1)+l_1}\left(
\int\limits_t^T|\phi_{j_1}(s_2)|(t-s_2)^{l_1}
\sum\limits_{j_3=p_3+1}^{\infty}
\left(\int\limits_{s_2}^T\phi_{j_3}(s_1)(t-s_1)^{l}ds_1\right)^2
ds_2\right)^2\le
$$
$$
\le
\frac{1}{4}(T-t)^{2l_1}\sum\limits_{j_1=2l+l_1+2}^{2(p_3+l+1)+l_1}\left(
\int\limits_t^T|\phi_{j_1}(s_2)|
\sum\limits_{j_3=p_3+1}^{\infty}
\left(\int\limits_{s_2}^T\phi_{j_3}(s_1)(t-s_1)^{l}ds_1\right)^2
ds_2\right)^2<
$$
$$
<
\frac{(T-t)^{4l+2l_1+1}K^4 K_1^2}{16}
\sum\limits_{j_1=2l+l_1+2}^{2(p_3+l+1)+l_1}
\left(\left(\int\limits_t^T
\frac{2ds_2}
{(1-\left(z(s_2))^2
\right)^{\frac{3}{4}}}\right.\right.
\left.\left.
+\frac{l^2\pi}{2l-1}
\int\limits_t^T
\frac{ds_2}
{(1-\left(z(s_2))^2
\right)^{\frac{1}{4}}}\right)
\sum\limits_{j_3=p_3+1}^{\infty}\frac{1}{j_3^2}
\right)^2
$$
$$
\le
\frac{(T-t)^{4l+2l_1+3}K^4 K_1^2}{64}\cdot\frac{2p_3+1}{p_3^2}
\left(\int\limits_{-1}^1
\frac{2dy}
{(1-y^2)^{\frac{3}{4}}}
+\frac{l^2\pi}{2l-1}
\int\limits_{-1}^1
\frac{dy}
{(1-y^2)^{\frac{1}{4}}}\right)^2
$$
\begin{equation}
\label{ogo5000}
\le (T-t)^{4l+2l_1+3}C\frac{2p_3+1}{p_3^2}\to 0\ \hbox{when}\ p_3\to \infty,
\end{equation}
where the constant $C$ doesn't depend on $p_3$ and $T-t$.
The relation (\ref{ogo2000}) follows from (\ref{ogo2100}), (\ref{ogo2110})
and (\ref{ogo5000}), and the expansion (\ref{ogo101ee}) follows
from (\ref{ogo2000}).
Let's consider the case 4 ($l_1=l_2=l_3=l=0,\ 1,\ 2,\ldots$ and
$i_1, i_2, i_3=1,\ldots,m$). So, we will prove the following expansion
for multiple Stratonovich stochastic integral of 3rd multiplicity:
\begin{equation}
\label{ogo10100}
I_{{l l l}_{T,t}}^{*(i_1i_2i_3)}=\sum\limits_{j_1, j_2, j_3=0}^{\infty}
C_{j_3 j_2 j_1}\zeta_{j_1}^{(i_1)}\zeta_{j_2}^{(i_2)}\zeta_{j_3}^{(i_3)}\
(i_1, i_2, i_3=1,\ldots,m),
\end{equation}
where the series converges in the mean-square sense;
$l=0,\ 1,\ 2,\ldots$ and
\begin{equation}
\label{ogo19900}
C_{j_3 j_2 j_1}=\int\limits_t^T
\phi_{j_3}(s)(t-s)^{l}\int\limits_t^s(t-s_1)^{l}
\phi_{j_2}(s_1)
\int\limits_t^{s_1}(t-s_2)^{l}
\phi_{j_1}(s_2)ds_2ds_1ds.
\end{equation}
If we prove w.\ p.\ 1\ the formula:
\begin{equation}
\label{ogo20000}
\sum\limits_{j_1, j_3=0}^{\infty}
C_{j_1 j_3 j_1}\zeta_{j_3}^{(i_2)}=0,
\end{equation}
where the series converges in the mean-square sense and
the coefficients $C_{j_3 j_2 j_1}$
have
the form (\ref{ogo19900}), then
using the theorem 1, relations (\ref{ogo200}), (\ref{ogo2000})
when $l_1=l_3=l$
and standard relations
between
multiple Stratonovich and Ito stochastic integrals
we will obtain the expansion (\ref{ogo10100}).
Since
$\psi_1(s),$ $\psi_2(s),$ $\psi_3(s)\equiv (t-s)^l$,
the following relation for the Fourier coefficients takes place:
$$
C_{j_1 j_1 j_3}+C_{j_1 j_3 j_1}+C_{j_3 j_1 j_1}=\frac{1}{2}
C_{j_1}^2 C_{j_3},
$$
where $C_{j_3 j_2 j_1}$ has the form (\ref{ogo19900}) and
$$
C_{j_1}=\int\limits_t^T
\phi_{j_1}(s)(t-s)^{l}ds.
$$
Then w.\ p.\ 1:
\begin{equation}
\label{sodom310}
\sum\limits_{j_1, j_3=0}^{\infty}
C_{j_1 j_3 j_1}\zeta_{j_3}^{(i_2)}=
\sum\limits_{j_1, j_3=0}^{\infty}
\left(\frac{1}{2}C_{j_1}^2 C_{j_3}-C_{j_1 j_1 j_3}-C_{j_3 j_1 j_1}
\right)\zeta_{j_3}^{(i_2)}.
\end{equation}
\vspace{2mm}
Taking into account (\ref{ogo200}) and (\ref{ogo2000})
when $l_3=l_1=l$ and the Ito formula
we have w.\ p.\ 1:
$$
\sum\limits_{j_1, j_3=0}^{\infty}
C_{j_1 j_3 j_1}\zeta_{j_3}^{(i_2)}=
\frac{1}{2}\sum\limits_{j_1=0}^{l}
C_{j_1}^2\sum\limits_{j_3=0}^{l}C_{j_3}\zeta_{j_3}^{(i_2)}-
\sum\limits_{j_1, j_3=0}^{\infty}
C_{j_1 j_1 j_3}\zeta_{j_3}^{(i_2)}-
\sum\limits_{j_1, j_3=0}^{\infty}
C_{j_3 j_1 j_1}\zeta_{j_3}^{(i_2)}
=
$$
$$
=\frac{1}{2}\sum\limits_{j_1=0}^{l}
C_{j_1}^2\int\limits_t^T(t-s)^ld{\bf f}_s^{(i_2)}
-\frac{1}{2}\int\limits_t^T(t-s)^{l}
\int\limits_t^s(t-s_1)^{2l}ds_1d{\bf f}_s^{(i_2)}-
\frac{1}{2}\int\limits_t^T(t-s)^{2l}
\int\limits_t^s(t-s_1)^{l}d{\bf f}_{s_1}^{(i_2)}ds=
$$
$$
=\frac{1}{2}\sum\limits_{j_1=0}^{l}
C_{j_1}^2\int\limits_t^T(t-s)^ld{\bf f}_s^{(i_2)}
+\frac{1}{2(2l+1)}\int\limits_t^T(t-s)^{3l+1}d{\bf f}_s^{(i_2)}-
\frac{1}{2}\int\limits_t^T(t-s_1)^{l}
\int\limits_{s_1}^T(t-s)^{2l}dsd{\bf f}_{s_1}^{(i_2)}=
$$
$$
=\frac{1}{2}\sum\limits_{j_1=0}^{l}
C_{j_1}^2\int\limits_t^T(t-s)^ld{\bf f}_s^{(i_2)}
+\frac{1}{2(2l+1)}\int\limits_t^T(t-s)^{3l+1}d{\bf f}_s^{(i_2)}-
$$
$$
-\frac{1}{2(2l+1)}\left((T-t)^{2l+1}\int\limits_t^T(t-s)^{l}
d{\bf f}_{s}^{(i_2)}+\int\limits_t^T(t-s)^{3l+1}
d{\bf f}_{s}^{(i_2)}\right)=
$$
$$
=\frac{1}{2}\sum\limits_{j_1=0}^{l}
C_{j_1}^2\int\limits_t^T(t-s)^ld{\bf f}_s^{(i_2)}-
\frac{(T-t)^{2l+1}}{2(2l+1)}\int\limits_t^T(t-s)^{l}
d{\bf f}_{s}^{(i_2)}=
\frac{1}{2}\left(\sum\limits_{j_1=0}^{l}
C_{j_1}^2-\int\limits_t^T(t-s)^{2l}ds\right)
\int\limits_t^T(t-s)^ld{\bf f}_s^{(i_2)}=0.
$$
\vspace{2mm}
Here, the Parseval equality looks as follows:
$$
\sum\limits_{j_1=0}^{\infty}
C_{j_1}^2=
\sum\limits_{j_1=0}^{l}
C_{j_1}^2=\int\limits_t^T(t-s)^{2l}ds=\frac{(T-t)^{2l+1}}{2l+1}
$$
and
$$
\int\limits_t^{T}(t-s)^{l}
d{\bf f}_{s}^{(i_2)}=
\sum\limits_{j_3=0}^{l}C_{j_3}\zeta_{j_3}^{(i_2)}\ \hbox{w. p. 1}.
$$
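Both of these facts are immediate to verify numerically (Python with SciPy,
illustration only; same assumed Legendre normalization, $[t,T]=[0,1]$ and $l=2$
chosen arbitrarily):
\begin{verbatim}
# Illustrative check: for the weight (t-s)^l only the first l+1 Legendre
# coefficients C_j are nonzero, and sum_j C_j^2 = (T-t)^{2l+1}/(2l+1).
import numpy as np
from scipy.special import eval_legendre
from scipy.integrate import quad

t, T, l = 0.0, 1.0, 2                 # hypothetical parameters
phi = lambda j, s: np.sqrt((2 * j + 1) / (T - t)) * \
                   eval_legendre(j, 2.0 * (s - t) / (T - t) - 1.0)

C = [quad(lambda s: phi(j, s) * (t - s) ** l, t, T)[0] for j in range(10)]
print(C)                              # C_j vanishes (numerically) for j > l
print(sum(c * c for c in C), (T - t) ** (2 * l + 1) / (2 * l + 1))
\end{verbatim}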
The expansion (\ref{ogo10100}) is proven. The theorem 3 is proven.
It is easy to see that, using the Ito formula, for
$i_1=i_2=i_3$
we get:
$$
\int\limits_t^{*T}(t-s)^{l}\int\limits_t^{*s}
(t-s_1)^l\int\limits_t^{*s_1}(t-s_2)^{l}
d{\bf f}_{s_2}^{(i_1)}d{\bf f}_{s_1}^{(i_1)}d{\bf f}_{s}^{(i_1)}=
$$
\begin{equation}
\label{sodom400}
=\frac{1}{6}\Biggl(
\int\limits_t^{T}(t-s)^{l}
d{\bf f}_{s}^{(i_1)}\Biggr)^3
=\frac{1}{6}\left(
\sum\limits_{j_1=0}^{l}C_{j_1}\zeta_{j_1}^{(i_1)}\right)^3=
\sum\limits_{j_1, j_2, j_3=0}^{l}
C_{j_3 j_2 j_1}\zeta_{j_1}^{(i_1)}\zeta_{j_2}^{(i_1)}\zeta_{j_3}^{(i_1)}\
\hbox{w. p. 1}.
\end{equation}
\vspace{2mm}
\section{Expansions of multiple Stratonovich stochastic integrals
of 3rd multiplicity, based on theorem 1. Trigonometric case}
\vspace{2mm}
In this section we will prove the following theorem.
{\bf Theorem 4} (see \cite{2011} - \cite{2017}). {\it Suppose that
$\{\phi_j(x)\}_{j=0}^{\infty}$ is a complete orthonormal
system of trigonometric functions
in the space $L_2([t, T])$.
Then, for multiple Stratonovich stochastic integral of {\rm 3}rd multiplicity
$$
{\int\limits_t^{*}}^T
{\int\limits_t^{*}}^{t_3}
{\int\limits_t^{*}}^{t_2}
d{\bf f}_{t_1}^{(i_1)}
d{\bf f}_{t_2}^{(i_2)}d{\bf f}_{t_3}^{(i_3)}\
$$
$(i_1, i_2, i_3=1,\ldots,m)$
the following converging in the mean-square sense
expansion
\begin{equation}
\label{feto19001ee}
{\int\limits_t^{*}}^T
{\int\limits_t^{*}}^{t_3}
{\int\limits_t^{*}}^{t_2}
d{\bf f}_{t_1}^{(i_1)}
d{\bf f}_{t_2}^{(i_2)}d{\bf f}_{t_3}^{(i_3)}\
=
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm l.i.m.}\cr
$\stackrel{}{{}_{p_1,p_2,p_3\to \infty}}$\cr
}} }\sum_{j_1=0}^{p_1}\sum_{j_2=0}^{p_2}\sum_{j_3=0}^{p_3}
C_{j_3 j_2 j_1}\zeta_{j_1}^{(i_1)}\zeta_{j_2}^{(i_2)}\zeta_{j_3}^{(i_3)}
\stackrel{\rm def}{=}
$$
$$
\stackrel{\rm def}{=}
\sum\limits_{j_1, j_2, j_3=0}^{\infty}
C_{j_3 j_2 j_1}\zeta_{j_1}^{(i_1)}\zeta_{j_2}^{(i_2)}\zeta_{j_3}^{(i_3)}
\end{equation}
is reasonable, where
$$
C_{j_3 j_2 j_1}=\int\limits_t^T
\phi_{j_3}(s)\int\limits_t^s
\phi_{j_2}(s_1)
\int\limits_t^{s_1}
\phi_{j_1}(s_2)ds_2ds_1ds.
$$
}
{\bf Proof.}\ If we prove the following formulas:
\begin{equation}
\label{ogo1299}
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm l.i.m.}\cr
$\stackrel{}{{}_{p_1, p_3\to \infty}}$\cr
}} }
\sum\limits_{j_1=0}^{p_1}\sum\limits_{j_3=0}^{p_3}
C_{j_3 j_1 j_1}\zeta_{j_3}^{(i_3)}
\stackrel{{\rm def}}{=}
\sum\limits_{j_1, j_3=0}^{\infty}
C_{j_3 j_1 j_1}\zeta_{j_3}^{(i_3)}=
\frac{1}{2}\int\limits_t^T\int\limits_t^{\tau}dsd{\bf f}_{\tau}^{(i_3)},
\end{equation}
\begin{equation}
\label{ogo1399}
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm l.i.m.}\cr
$\stackrel{}{{}_{p_1, p_3\to \infty}}$\cr
}} }
\sum\limits_{j_1=0}^{p_1}\sum\limits_{j_3=0}^{p_3}
C_{j_3 j_3 j_1}\zeta_{j_1}^{(i_1)}
\stackrel{{\rm def}}{=}
\sum\limits_{j_1, j_3=0}^{\infty}
C_{j_3 j_3 j_1}\zeta_{j_1}^{(i_1)}=
\frac{1}{2}\int\limits_t^T\int\limits_t^{\tau}d{\bf f}_{s}^{(i_1)}d\tau,
\end{equation}
\begin{equation}
\label{ogo13a99}
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm l.i.m.}\cr
$\stackrel{}{{}_{p_1, p_3\to \infty}}$\cr
}} }
\sum\limits_{j_1=0}^{p_1}\sum\limits_{j_3=0}^{p_3}
C_{j_1 j_3 j_1}\zeta_{j_3}^{(i_2)}
\stackrel{{\rm def}}{=}
\sum\limits_{j_1, j_3=0}^{\infty}
C_{j_1 j_3 j_1}\zeta_{j_3}^{(i_2)}=0,
\end{equation}
then from the theorem 1, formulas (\ref{ogo1299}) -- (\ref{ogo13a99}) and
standard relations
between
multiple
Stratonovich and Ito stochastic integrals
the expansion (\ref{feto19001ee}) will follow.
We have:
$$
\sum\limits_{j_3=0}^{p_3}\sum\limits_{j_1=0}^{p_1}
C_{j_3 j_1 j_1}\zeta_{j_3}^{(i_3)}=
\frac{(T-t)^{\frac{3}{2}}}{6}\zeta_0^{(i_3)}+\sum\limits_{j_1=1}^{p_1}C_{0,2j_1,2j_1}
\zeta_0^{(i_3)}+\sum\limits_{j_1=1}^{p_1}C_{0,2j_1-1,2j_1-1}\zeta_0^{(i_3)}+
\sum\limits_{j_3=1}^{p_3}C_{2j_3,0,0}\zeta_{2j_3}^{(i_3)}+
$$
$$
+
\sum\limits_{j_3=1}^{p_3}\sum_{j_1=1}^{p_1}
C_{2j_3,2j_1,2j_1}\zeta_{2j_3}^{(i_3)}+
\sum\limits_{j_3=1}^{p_3}\sum_{j_1=1}^{p_1}
C_{2j_3,2j_1-1,2j_1-1}\zeta_{2j_3}^{(i_3)}+
\sum\limits_{j_3=1}^{p_3}
C_{2j_3-1,0,0}\zeta_{2j_3-1}^{(i_3)}+
$$
\begin{equation}
\label{ogo900}
+\sum\limits_{j_3=1}^{p_3}\sum_{j_1=1}^{p_1}
C_{2j_3-1,2j_1,2j_1}\zeta_{2j_3-1}^{(i_3)}+
\sum\limits_{j_3=1}^{p_3}\sum_{j_1=1}^{p_1}
C_{2j_3-1,2j_1-1,2j_1-1}\zeta_{2j_3-1}^{(i_3)},
\end{equation}
\vspace{1mm}
\noindent
where the summation is stopped when $2j_1,$ $2j_1-1> p_1$
or $2j_3,$ $2j_3-1> p_3$ and
\begin{equation}
\label{ogo901}
C_{0,2l,2l}=\frac{(T-t)^{\frac{3}{2}}}{8\pi^2l^2},\ \
C_{0,2l-1,2l-1}=\frac{3(T-t)^{\frac{3}{2}}}{8\pi^2l^2},\ \
C_{2l,0,0}=\frac{\sqrt{2}(T-t)^{\frac{3}{2}}}{4\pi^2l^2};
\end{equation}
\begin{equation}
\label{ogo903}
C_{2r-1,2l,2l}=0,\ \
C_{2l-1,0,0}=-\frac{\sqrt{2}(T-t)^{\frac{3}{2}}}{4\pi l},\ \
C_{2r-1,2l-1,2l-1}=0;
\end{equation}
\begin{equation}
\label{ogo902}
C_{2r,2l,2l}=
\begin{cases}
-\frac{\sqrt{2}(T-t)^{\frac{3}{2}}}{16\pi^2l^2},\ r=2l
\cr
\cr
0,\ r\ne 2l\
\end{cases},\ \
C_{2r,2l-1,2l-1}=
\begin{cases}
\frac{\sqrt{2}(T-t)^{\frac{3}{2}}}{16\pi^2l^2},\ r=2l
\cr
\cr
-\frac{\sqrt{2}(T-t)^{\frac{3}{2}}}{4\pi^2l^2},\ r=l
\cr
\cr
0,\ r\ne l,\ r\ne 2l
\end{cases}.
\end{equation}
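These closed-form values can be spot-checked by direct integration. The sketch
below (Python with SciPy, illustrative only) assumes the usual normalization of
the complete orthonormal trigonometric system,
$\phi_0(s)=1/\sqrt{T-t}$, $\phi_{2r-1}(s)=\sqrt{2/(T-t)}\,\sin(2\pi r(s-t)/(T-t))$,
$\phi_{2r}(s)=\sqrt{2/(T-t)}\,\cos(2\pi r(s-t)/(T-t))$, and recomputes a few of
the coefficients in (\ref{ogo901})--(\ref{ogo903}) by nested quadrature:
\begin{verbatim}
# Illustrative spot check of the trigonometric Fourier coefficients
# C_{j3 j1 j1} listed in (ogo901)-(ogo903), via nested quadrature.
import numpy as np
from scipy.integrate import quad

t, T = 0.0, 1.0                       # hypothetical interval

def phi(j, s):                        # assumed trigonometric basis on [t, T]
    if j == 0:
        return 1.0 / np.sqrt(T - t)
    r = (j + 1) // 2
    x = 2.0 * np.pi * r * (s - t) / (T - t)
    return np.sqrt(2.0 / (T - t)) * (np.sin(x) if j % 2 == 1 else np.cos(x))

def C(j3, j2, j1):
    f1 = lambda s1: quad(lambda s2: phi(j1, s2), t, s1)[0]
    f2 = lambda s: quad(lambda s1: phi(j2, s1) * f1(s1), t, s)[0]
    return quad(lambda s: phi(j3, s) * f2(s), t, T)[0]

l = 1
print(C(0, 2 * l, 2 * l), (T - t) ** 1.5 / (8 * np.pi ** 2 * l ** 2))
print(C(2 * l, 0, 0), np.sqrt(2) * (T - t) ** 1.5 / (4 * np.pi ** 2 * l ** 2))
print(C(2 * l - 1, 0, 0), -np.sqrt(2) * (T - t) ** 1.5 / (4 * np.pi * l))
\end{verbatim}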
After
substituting
(\ref{ogo901}) -- (\ref{ogo902})
into (\ref{ogo900}) we get:
\begin{equation}
\label{ogo905}
\sum\limits_{j_3=0}^{p_3}\sum\limits_{j_1=0}^{p_1}
C_{j_3 j_1 j_1}\zeta_{j_3}^{(i_3)}=(T-t)^{\frac{3}{2}}\left(
\left(\frac{1}{6}+\frac{1}{2\pi^2}
\sum\limits_{j_1=1}^{p_1}\frac{1}{j_1^2}\right)
\zeta_{0}^{(i_3)}-\frac{\sqrt{2}}{4\pi}\sum\limits_{j_3=1}^{p_3}\frac{1}{j_3}
\zeta_{2j_3-1}^{(i_3)}\right).
\end{equation}
Using the theorem 1 and the system of trigonometric functions we find
\begin{equation}
\label{ogo906}
\frac{1}{2}\int\limits_t^T\int\limits_t^s ds_1d{\bf f}_{s}^{(i_3)}=
\frac{1}{2}\int\limits_t^T(s-t)d{\bf f}_{s}^{(i_3)}
=\frac{1}{4}(T-t)^{\frac{3}{2}}
\Biggl(\zeta_0^{(i_3)}-\frac{\sqrt{2}}{\pi}\sum_{r=1}^{\infty}
\frac{1}{r}
\zeta_{2r-1}^{(i_3)}
\Biggr).
\end{equation}
From (\ref{ogo905}) and (\ref{ogo906}) it follows
$$
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm lim}\cr
$\stackrel{}{{}_{p_1, p_3\to \infty}}$\cr
}} }
{\rm M}\left\{\left(
\sum\limits_{j_3=0}^{p_3}\sum\limits_{j_1=0}^{p_1}
C_{j_3 j_1 j_1}\zeta_{j_3}^{(i_3)}-
\frac{1}{2}\int\limits_t^T\int\limits_t^s ds_1d{\bf f}_{s}^{(i_3)}
\right)^2\right\}=
$$
$$
=\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm lim}\cr
$\stackrel{}{{}_{p_1, p_3\to \infty}}$\cr
}} }
(T-t)^3\left(\left(\frac{1}{6}+\frac{1}{2\pi^2}\sum\limits_{j_1=1}^{p_1}
\frac{1}{j_1^2}-\frac{1}{4}\right)^2+
\frac{1}{8\pi^2}\left(\frac{\pi^2}{6}-
\sum\limits_{j_3=1}^{p_3}\frac{1}{j_3^2}\right)\right)=0.
$$
So, the relation (\ref{ogo1299}) holds for the case of the
trigonometric system of functions.
Let's prove the relation (\ref{ogo1399}). We have
$$
\sum\limits_{j_1=0}^{p_1}\sum\limits_{j_3=0}^{p_3}
C_{j_3 j_3 j_1}\zeta_{j_1}^{(i_1)}=
\frac{(T-t)^{\frac{3}{2}}}{6}\zeta_0^{(i_1)}+\sum\limits_{j_3=1}^{p_3}C_{2j_3,2j_3,0}
\zeta_0^{(i_1)}+\sum\limits_{j_3=1}^{p_3}C_{2j_3-1,2j_3-1,0}\zeta_0^{(i_1)}+
$$
$$
+\sum\limits_{j_1=1}^{p_1}\sum\limits_{j_3=1}^{p_3}
C_{2j_3,2j_3,2j_1-1}\zeta_{2j_1-1}^{(i_1)}+
\sum\limits_{j_1=1}^{p_1}\sum_{j_3=1}^{p_3}
C_{2j_3-1,2j_3-1,2j_1-1}\zeta_{2j_1-1}^{(i_1)}+
$$
$$
+\sum\limits_{j_1=1}^{p_1}
C_{0,0,2j_1-1}\zeta_{2j_1-1}^{(i_1)}+
\sum\limits_{j_1=1}^{p_1}\sum\limits_{j_3=1}^{p_3}
C_{2j_3,2j_3,2j_1}\zeta_{2j_1}^{(i_1)}+
$$
\begin{equation}
\label{ogo9000}
+\sum\limits_{j_1=1}^{p_1}\sum_{j_3=1}^{p_3}
C_{2j_3-1,2j_3-1,2j_1}\zeta_{2j_1}^{(i_1)}+
\sum_{j_1=1}^{p_1}
C_{0,0,2j_1}\zeta_{2j_1}^{(i_1)},
\end{equation}
\vspace{2mm}
\noindent
where the summation is stopped, when
$2j_3,$ $2j_3-1> p_3$
or $2j_1,$ $2j_1-1> p_1$ and
\begin{equation}
\label{ogo9010}
C_{2l,2l,0}=\frac{(T-t)^{\frac{3}{2}}}{8\pi^2l^2},\ \
C_{2l-1,2l-1,0}=\frac{3(T-t)^{\frac{3}{2}}}{8\pi^2l^2},\ \
C_{0,0,2r}=\frac{\sqrt{2}(T-t)^{\frac{3}{2}}}{4\pi^2r^2};
\end{equation}
\begin{equation}
\label{ogo9030}
C_{2l-1,2l-1,2r-1}=0,\ \
C_{0,0,2r-1}=\frac{\sqrt{2}(T-t)^{\frac{3}{2}}}{4\pi r},\ \
C_{2l,2l,2r-1}=0;
\end{equation}
\begin{equation}
\label{ogo9020}
C_{2l,2l,2r}=
\begin{cases}
-\frac{\sqrt{2}(T-t)^{\frac{3}{2}}}{16\pi^2l^2},\ r=2l
\cr
\cr
0,\ r\ne 2l
\end{cases},\ \
C_{2l-1,2l-1,2r}=
\begin{cases}
-\frac{\sqrt{2}(T-t)^{\frac{3}{2}}}{16\pi^2l^2},\ r=2l
\cr
\cr
\frac{\sqrt{2}(T-t)^{\frac{3}{2}}}{4\pi^2l^2},\ r=l
\cr
\cr
0,\ r\ne l,\ r\ne 2l
\end{cases}.
\end{equation}
After substituting
(\ref{ogo9010}) -- (\ref{ogo9020})
into (\ref{ogo9000}) we get:
\begin{equation}
\label{ogo9050}
\sum\limits_{j_1=0}^{p_1}\sum\limits_{j_3=0}^{p_3}
C_{j_3 j_3 j_1}\zeta_{j_1}^{(i_1)}=(T-t)^{\frac{3}{2}}\left(
\left(\frac{1}{6}+\frac{1}{2\pi^2}
\sum\limits_{j_3=1}^{p_3}\frac{1}{j_3^2}\right)
\zeta_{0}^{(i_1)}+\frac{\sqrt{2}}{4\pi}\sum\limits_{j_1=1}^{p_1}\frac{1}{j_1}
\zeta_{2j_1-1}^{(i_1)}\right).
\end{equation}
\vspace{2mm}
Using the Ito formula, theorem 1 and the system of trigonometric
functions we find
\begin{equation}
\label{ogo9060}
\frac{1}{2}\int\limits_t^T\int\limits_t^sd{\bf f}_{s_1}^{(i_1)}ds=
\frac{1}{2}\left((T-t)
\int\limits_t^Td{\bf f}_{s}^{(i_1)}+
\int\limits_t^T(t-s)d{\bf f}_{s}^{(i_1)}\right)=
\frac{1}{4}(T-t)^{\frac{3}{2}}
\Biggl(\zeta_0^{(i_1)}+\frac{\sqrt{2}}{\pi}\sum_{r=1}^{\infty}
\frac{1}{r}
\zeta_{2r-1}^{(i_1)}
\Biggr).
\end{equation}
\vspace{2mm}
From (\ref{ogo9050}) and (\ref{ogo9060}) it follows
$$
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm lim}\cr
$\stackrel{}{{}_{p_1, p_3\to \infty}}$\cr
}} }
{\rm M}\left\{\left(
\sum\limits_{j_1=0}^{p_1}\sum\limits_{j_3=0}^{p_3}
C_{j_3 j_3 j_1}\zeta_{j_1}^{(i_1)}-
\frac{1}{2}\int\limits_t^T\int\limits_t^s d{\bf f}_{s_1}^{(i_1)}ds
\right)^2\right\}=
$$
$$
=\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm lim}\cr
$\stackrel{}{{}_{p_1, p_3\to \infty}}$\cr
}} }
(T-t)^3\left(\left(\frac{1}{6}+\frac{1}{2\pi^2}\sum\limits_{j_3=1}^{p_3}
\frac{1}{j_3^2}-\frac{1}{4}\right)^2+\frac{1}{8\pi^2}\left(\frac{\pi^2}{6}-
\sum\limits_{j_1=1}^{p_1}\frac{1}{j_1^2}\right)\right)=0.
$$
\vspace{2mm}
So, the relation (\ref{ogo1399}) also holds for the case
of the trigonometric system of functions.
Let's prove the equality (\ref{ogo13a99}).
Since $\psi_1(\tau),$ $\psi_2(\tau),$ $\psi_3(\tau)\equiv 1,$
the following
relation
for the Fourier coefficients holds:
$$
C_{j_1 j_1 j_3}+C_{j_1 j_3 j_1}+C_{j_3 j_1 j_1}=\frac{1}{2}
C_{j_1}^2 C_{j_3}.
$$
Then w.\ p. 1
\begin{equation}
\label{ogo2010}
\sum\limits_{j_1, j_3=0}^{\infty}
C_{j_1 j_3 j_1}\zeta_{j_3}^{(i_2)}=
\sum\limits_{j_1, j_3=0}^{\infty}
\left(\frac{1}{2}C_{j_1}^2 C_{j_3}-C_{j_1 j_1 j_3}-C_{j_3 j_1 j_1}
\right)\zeta_{j_3}^{(i_2)}.
\end{equation}
\vspace{2mm}
Taking into account (\ref{ogo1299}) and (\ref{ogo1399})
w.\ p.\ 1 let's write down the following:
$$
\sum\limits_{j_1, j_3=0}^{\infty}
C_{j_1 j_3 j_1}\zeta_{j_3}^{(i_2)}=
\frac{1}{2}C_0^3\zeta_0^{(i_2)}-
\sum\limits_{j_1, j_3=0}^{\infty}
C_{j_1 j_1 j_3}\zeta_{j_3}^{(i_2)}-
\sum\limits_{j_1, j_3=0}^{\infty}
C_{j_3 j_1 j_1}\zeta_{j_3}^{(i_2)}=
\frac{1}{2}(T-t)^{\frac{3}{2}}
\zeta_0^{(i_2)}-
$$
$$
-
\frac{1}{4}(T-t)^{\frac{3}{2}}
\Biggl(\zeta_0^{(i_2)}+
\frac{\sqrt{2}}{\pi}\sum_{r=1}^{\infty}
\frac{1}{r}
\zeta_{2r-1}^{(i_2)}
\Biggr)
-\frac{1}{4}(T-t)^{\frac{3}{2}}
\Biggl(\zeta_0^{(i_2)}-\frac{\sqrt{2}}{\pi}\sum_{r=1}^{\infty}
\frac{1}{r}
\zeta_{2r-1}^{(i_2)}
\Biggr)=0.
$$
\vspace{2mm}
From (\ref{ogo1299}) -- (\ref{ogo13a99}) and the theorem 1
we get the expansion
(\ref{feto19001ee}). The theorem 4 is proven.
\bigskip
\section{Introduction}
\label{sec:introduction}
Mapping the cosmic shear signal with weak gravitational lensing has long been regarded as an excellent probe of cosmology \citep[see e.g.][for a recent review]{2015RPPh...78h6901K}. In particular, future weak lensing measurements are one of the most promising observables for constraining the history of the growth of cosmic structure (and the physics which caused it) through direct sensitivity to the total mass along a line of sight \citep[e.g.][]{2013PhR...530...87W}.
From early detections \citep{2000MNRAS.318..625B, 2000Natur.405..143W,2000A&A...358...30V,2000astro.ph..3338K}, progress has been made to the point whereby current experiments \citep{2013MNRAS.432.2433H, 2015arXiv151003962J, 2015arXiv150705552T} are able to provide matter contents and dark energy constraints comparable with the best available from other probes such as the Cosmic Microwave Background (CMB, \citealt{2015arXiv150201589P}) and galaxy clustering \citep{2012PhRvD..86j3518P, 2013A&A...557A..54D, 2014MNRAS.441...24A}. As the depth and sky area of these and future experiments increases, uncertainties on these constraints will begin to become dominated by the numerous systematic effects which come into play when turning the raw astronomical data into shear maps and subsequent parameter confidence regions. These systematics include (but are not limited to) telescope systematics, galaxy intrinsic alignments \citep[see e.g.][]{2015SSRv..193....1J}, image analysis algorithm errors and uncertainties associated with modelling the non-linearity of matter clustering on small physical scales.
In this paper we will consider in particular the promise of future weak lensing experiments involving the Square Kilometre Array (SKA)\footnote{\url{http://www.skatelescope.org}} radio interferometer telescope, both alone and in cross-correlation with representative optical weak lensing surveys. The SKA has unique value by itself, the exact extent of which will depend on the properties of the faint radio source population which will be probed by surveys with SKA pathfinders and precursors. In an ideal scenario, the properties of this population will contain a long-tailed source redshift distributions, expected for the star-forming galaxy (SFG) population that will dominate the SKA surveys, and add unique additional information on the lensing shear signal from radio polarisation and resolved spectral line observations \citep[see][for a summary]{2015aska.confE..23B}. Even without the addition of more information, extra advantages can also be gained by cross-correlating the shear maps produced from SKA data with shear maps generated by other experiments in different wavebands, as recently demonstrated by \cite{2015arXiv150705977D}. In this procedure, any spurious shear generated by systematics which are uncorrelated between the wavebands should be instantly eliminated \citep[e.g.][]{2010MNRAS.401.2572P}. In particular, contamination from an incorrectly deconvolved spatially varying Point Spread Function (PSF) and errors from algorithms used to measure the shapes of individual galaxies to infer the shear should be uncorrelated between the different experiments. When measuring an observed shear map $\widetilde{\gamma}$ made in waveband ${X}$, the observed signal receives contributions from the true gravitational shearing $\gamma$ (which is achromatic and identical in both wavebands), the intrinsic shape of the galaxy $\gamma^{\rm int}$ and spurious shear from incorrectly deconvolved PSF or shape measurement error $\gamma^{\rm sys}$. The cross-correlation of shear maps in different wavebands then has terms:
\begin{equation}
\begin{split}
\langle\widetilde{\gamma}_{X}\widetilde{\gamma}_{Y}\rangle =
\langle \gamma \gamma \rangle +& \langle \gamma^{\rm int}_{X} \gamma \rangle + \langle \gamma^{\rm int}_{Y} \gamma \rangle \\ &\quad + \langle \gamma^{\rm int}_{X} \gamma^{\rm int}_{Y} \rangle + \langle \gamma^{\rm sys}_{X} \gamma^{\rm sys}_{Y} \rangle.
\end{split}
\end{equation}
The first term is the cosmological signal that we are interested in, the following three terms are contaminating `intrinsic alignment' terms \citep[see][for a recent review]{2015SSRv..193....1J, 2015SSRv..193...67K, 2015SSRv..193..139K} and the final term is a systematics term (we have ignored terms correlating systematics with signals on the sky). Any contributions to these systematics terms which are uncorrelated between different experiments and wavebands will be suppressed by the cross-correlation, greatly increasing the robustness of cosmological constraints. {If polarised and neutral hydrogen (HI) 21 cm line emission fractions from high redshift sources prove to be high enough, radio weak lensing experiments can also provide useful information} on intrinsic alignment systematics through polarisation \citep{2011MNRAS.410.2057B} and rotational velocity information \citep{2002ApJ...570L..51B, 2006ApJ...650L..21M}, though we do not consider such approaches in these forecasts. Instead, we consider what can be achieved with `vanilla' SKA weak lensing surveys in which cosmological information come from forming shear power spectra from measured galaxy ellipticities, just as in typical optical experiments. Adopting the survey categorisation scheme of the Dark Energy Task Force \citep[DETF,][]{2006astro.ph..9591A}, we will show that surveys conducted with the first phase of the SKA (SKA1) will be competitive with `Stage III' optical weak lensing surveys such as DES\footnote{\url{http://www.darkenergysurvey.org}}, KiDS\footnote{\url{http://kids.strw.leidenuniv.nl}} and HSC\footnote{\url{http://subarutelescope.org/Projects/HSC}}, and that full SKA (SKA2) weak lensing surveys can provide `Stage IV' constraints similar to those achievable with the weak lensing components of the \textit{Euclid}\footnote{\url{http://euclid-ec.org}}, WFIRST-AFTA\footnote{\url{http://wfirst.gsfc.nasa.gov}} and LSST\footnote{\url{http://www.lsst.org}} surveys. We will also show that constraints obtained from cross-power spectra measured between shear maps made in different wavebands will provide measurements which are still just as tight as each experiment by itself, but should be free of any wavelength dependent systematics.
Here we make forecasts using simple prescriptions for the noise spectra and covariance matrices within a weak lensing experiment, and choose a fiducial experimental configuration for the SKA weak lensing surveys. In a companion paper \citep[][hereafter Paper II]{bonaldi2016} we construct a sophisticated simulation pipeline to produce mock weak lensing catalogues for future SKA surveys which we also process through a tomographic weak lensing power spectrum analysis. We then use this pipeline to explore the optimal instrumental configuration for performing SKA weak lensing surveys in the presence of real-world effects such as signal-to-noise-dependent shape measurement errors; realistic distributions in galaxy sizes, fluxes and redshifts; and ionospheric distortions.
The outline of this paper is as follows. We first provide a brief review of radio weak lensing in \cref{sec:wl}. In \cref{sec:experiments} we then describe the experimental surveys considered for the forecasts and describe our methodology for construction of cross-experiment shear power spectra. In \cref{sec:forecasts} we describe the methods used in producing our forecasts. Then, in \cref{sec:results} we show results for cosmological parameter constraints using SKA, Stage III optical (DES\xspace), Stage IV optical (\textit{Euclid}-like\xspace) and cross-correlations, demonstrating the power of using optical and radio experiments together. Finally in \cref{sec:conclusions} we discuss these results and conclude.
\section{Weak Lensing Cosmology}
\label{sec:wl}
We refer the reader to \cite{2001PhR...340..291B} for a comprehensive overview of weak lensing cosmology, which we will briefly introduce here. Weak lensing analyses typically involve the measurement of the individual shapes of large numbers of galaxies on the sky. For a large number density of sources in a single patch of sky, the cosmic shear along the line of sight to that patch ($\hat{\gamma}$) can be estimated by taking a simple average over the observed ellipticities of the galaxies ($\epsilon^{\mathrm{obs}}$), assuming that the intrinsic shapes before shearing are uncorrelated:
\begin{equation}
\label{eqn:gamma_hat}
\hat{\gamma} = \frac{1}{N}\sum_{i=1}^{N}\epsilon^{\mathrm{obs}}_{i}.
\end{equation}
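As a concrete illustration of this estimator, the short Python sketch below applies \cref{eqn:gamma_hat} to synthetic ellipticities (our own toy example with an assumed input shear, not data from any of the surveys discussed here); the per-component error falls as $\sigma_g/\sqrt{N}$, which is the origin of the shape-noise terms used later in this paper.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)

# Assumed (toy) true shear applied to one patch of sky
gamma_true = 0.02 + 0.01j

# Intrinsic ellipticities with dispersion sigma_g per component
sigma_g, n_gal = 0.3, 10000
eps_int = sigma_g * (rng.standard_normal(n_gal)
                     + 1j * rng.standard_normal(n_gal))

# In the weak lensing limit the observed ellipticity is, to first
# order, the intrinsic ellipticity plus the shear
eps_obs = eps_int + gamma_true

# Shear estimate: simple average over observed ellipticities
gamma_hat = eps_obs.mean()

# Shape-noise error per component scales as sigma_g / sqrt(N)
sigma_est = sigma_g / np.sqrt(n_gal)
print(f"estimate {gamma_hat.real:.4f} + {gamma_hat.imag:.4f}i, "
      f"expected error per component ~ {sigma_est:.4f}")
\end{verbatim}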
The two-point statistics of this observed shear field, such as the power spectrum $\widetilde{C}_\ell$, can then be related to the underlying matter power spectrum $P_{\delta}$, which can be predicted theoretically for different cosmological models. For sources confined to a thin shell in redshift, the $\widetilde{C}_\ell$ are sensitive to the integrated matter power spectrum out to this redshift. In practice, sources are distributed across a range of redshifts $\d n_{\rm gal}/\d z$ (which is in turn affected by imprecise knowledge of the redshifts of individual sources) and extra information is gained about the growth of structures along the line of sight by constructing the auto- and cross-power spectra of shear maps made using sources divided into different tomographic redshift bins.
The full relation for the power spectrum between two different tomographic bins $i,j$ is given by \citep{2001PhR...340..291B}:
\begin{equation}
\label{eqn:limber}
C ^{ij} _{\ell} = \frac{9H_0^4 \Omega_\mathrm m^2}{4c^4} \int_0^{\chi_\mathrm h}
\mathrm d \chi \, \frac{g^{i}(\chi) g^{j}(\chi)}{a^2(\chi)} P_{\delta} \left(\frac{\ell}{f_K(\chi)},\chi \right) \,.
\end{equation}
Here, $H_0$ is the Hubble constant, $\Omega_\mathrm m$
is the (total) matter density, $c$ is the speed of light, $a(\chi)$
is the scale factor of the Universe at co-moving distance $\chi$,
$f_K(\chi)$ is the comoving angular diameter distance (given simply by
$f_K(\chi) = \chi$ in a flat Universe), $P_{\delta}(k, \chi)$ is the
matter power spectrum and the functions $g^{i}(\chi)$ are the lensing
kernels for the redshift bins in question. The lensing kernels are given by:
\begin{equation}
g^{i}(\chi) = \int_\chi^{\chi_{\mathrm h}} \mathrm d \chi' n_{i} (\chi') \frac{f_K (\chi'-\chi)}{f_K (\chi')} \,.
\end{equation}
The number density distributions $n_{i} (\chi)$ give the normalised number of galaxies with radial co-ordinate $\chi$ in this tomographic bin. For single-experiment weak lensing cosmology, the $i,j$ label different tomographic redshift bins and the uncertainty on the power spectrum depends on $n_{\rm gal}$, the number density of detected galaxies on the sky, and $\sigma_{g}$, the dispersion of the distribution of intrinsic galaxy ellipticities (or `shape noise'). We will generalise these measurement and noise terms to include cross-experiment power spectra in \cref{sec:cross-spectra}.
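As an illustrative (and deliberately simplified) numerical implementation of \cref{eqn:limber}, the Python sketch below evaluates the lensing kernels and a tomographic cross-spectrum for a flat Universe, assuming toy Gaussian redshift bins and a toy, redshift-independent matter power spectrum; the forecasts themselves instead use the full non-linear $P_\delta$ described in \cref{sec:forecasts}.
\begin{verbatim}
import numpy as np

# Fiducial flat LCDM background (the paper's central values)
Om, h = 0.3, 0.72
H0, c = 100.0 * h, 299792.458          # km/s/Mpc, km/s

# Comoving distance chi(z) on a uniform z grid (flat: f_K(chi) = chi)
z = np.linspace(1e-3, 3.0, 1500)
dz = z[1] - z[0]
E = np.sqrt(Om * (1.0 + z)**3 + (1.0 - Om))
dchi = (c / (H0 * E)) * dz             # Mpc per grid step
chi = np.cumsum(dchi)                  # Mpc (simple Riemann sum)

# Toy normalised n(z) for two tomographic bins, converted to n(chi)
def n_of_z(z0, sig):
    n = np.exp(-0.5 * ((z - z0) / sig)**2)
    return n / (n.sum() * dz)

n_chi = [n_of_z(0.5, 0.15) * dz / dchi, n_of_z(1.0, 0.20) * dz / dchi]

# Lensing kernels g^i(chi) = int dchi' n_i(chi') (chi' - chi)/chi'
g = [np.array([np.sum(nc[k:] * (chi[k:] - chi[k]) / chi[k:] * dchi[k:])
               for k in range(len(chi))]) for nc in n_chi]

# Toy matter power spectrum (arbitrary shape and normalisation)
def P_delta(k):
    return 2.0e4 * (k / 0.02)**0.96 / (1.0 + (k / 0.02)**3.8)

def C_ell(ell, i, j):
    a = 1.0 / (1.0 + z)
    integrand = g[i] * g[j] / a**2 * P_delta(ell / chi)
    return 9.0 * H0**4 * Om**2 / (4.0 * c**4) * np.sum(integrand * dchi)

print(C_ell(200.0, 0, 1))
\end{verbatim}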
\subsection{Cosmological Parameters}
In this paper we will consider the ability of weak lensing experiments to measure a base six-parameter {\ensuremath{\Lambda\mathrm{CDM}}}\xspace model and two well-motivated extensions: dynamical dark energy and a phenomenological modification to Einstein's gravity. We note that these choices are merely common parametrisations of these extensions and are not specifically tailored to the strengths of SKA weak lensing. Different parametrisations (for example, non-parametric dark energy equation of state reconstruction which equally weights information at all redshifts) may more optimally use the information from these experiments for model selection, but are not considered here.
\subsubsection{Base {\ensuremath{\Lambda\mathrm{CDM}}}\xspace}
For our base cosmology we consider six parameters: total matter content $\Omega_\mathrm m$, baryonic matter content $\Omega_\mathrm b$, amplitude of matter fluctuations $\sigma_8$, Hubble expansion parameter $h_0$, scalar fluctuation spectral index $n_s$ and reionisation optical depth $\tau$. Unless otherwise stated, all constraints presented are marginalised over the first five of these parameters (with $\tau$ kept fixed) with central values of $\boldsymbol\vartheta_{\ensuremath{\Lambda\mathrm{CDM}}}\xspace = \lbrace \Omega_\mathrm m, \Omega_\mathrm b, \sigma_8, h_0, n_s \rbrace = \lbrace 0.3, 0.04, 0.8, 0.72, 0.96 \rbrace $. Weak lensing is highly effective at probing the overall amplitude of the matter power spectrum, which depends on a degenerate combination of the total matter $\Omega_\mathrm m$ and clustering strength $\sigma_8$; we will therefore present constraints in these two parameters only.
\subsubsection{Dark Energy}
As one extension to $\Lambda$CDM, we will consider measuring the parameters in a simple model of evolving dark energy where the equation of state $w$ evolves as a linear function of the scale factor $a$ (known as the Chevallier-Polarski-Linder parameterisation, see \citealt{Chevallier:2000qy} and \citealt{Linder:2002et}):
\begin{equation}
w(a) = w_0 + w_a(1 - a).
\end{equation}
This model represents the first order term in a Taylor expansion of a generally evolving equation of state. We consider these parameters in $\boldsymbol\vartheta_w = \boldsymbol\vartheta_{\ensuremath{\Lambda\mathrm{CDM}}}\xspace + \lbrace w_0, w_a \rbrace$.
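For reference, this parametrisation also admits a simple closed form for the evolution of the dark energy density, obtained by integrating $\mathrm{d}\ln\rho_{\rm DE}/\mathrm{d}\ln a = -3\left[1 + w(a)\right]$; the short sketch below (a standard result, independent of the forecast pipeline used here) encodes both $w(a)$ and this density evolution.
\begin{verbatim}
import numpy as np

def w_cpl(a, w0=-1.0, wa=0.0):
    """CPL equation of state w(a) = w0 + wa (1 - a)."""
    return w0 + wa * (1.0 - a)

def rho_de_ratio(a, w0=-1.0, wa=0.0):
    """rho_DE(a)/rho_DE(a=1) from the continuity equation:
    a**(-3 (1 + w0 + wa)) * exp(-3 wa (1 - a))."""
    return a**(-3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * (1.0 - a))

# Example: an illustrative model evaluated at z = 1 (a = 0.5)
print(w_cpl(0.5, w0=-0.95, wa=0.3), rho_de_ratio(0.5, w0=-0.95, wa=0.3))
\end{verbatim}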
\subsubsection{Modified Gravity}
We also consider modifications to gravity as parametrised in \cite{2011PhRvD..84l3001D, 2015PhRvD..92b3003D}. In General Relativity, from the perturbed Friedmann-Lema\^{i}tre-Robertson-Walker (FLRW) metric in the conformal Newtonian gauge:
\begin{equation}
ds^2 = a^2(\eta) \left[ -(1 + 2\Psi)d\eta^2 + (1- 2\Phi)dx^adx_a \right],
\end{equation}
we define the Newtonian gravitational potential $\Psi$ felt by matter and the lensing potential $\Phi$, which is also felt by relativistic particles. We now define the modified gravity parameter $Q_0$, which modifies the potential $\Phi$ in the relativistic Poisson equation:
\begin{equation}
k^2 \Phi = -4\pi G a^2 \rho \Delta Q_0
\end{equation}
and the gravitational slip $R$ which, in the case of anisotropic stress, gives the ratio between the two potentials:
\begin{equation}
R = \frac{\Psi}{\Phi}.
\end{equation}
As $R$ is degenerate with $Q_0$ it is convenient to define the derived parameter $\Sigma_0 = Q_0(1+R)/2$ and our constraints are given in terms of this. Weak lensing probes the sum of potentials $\Phi + \Psi$ and is hence extremely effective at constraining $\Sigma_0$ but much less sensitive to $Q_0$. Combination with probes for which the opposite is true (i.e. which are sensitive to the Newtonian potential), such as redshift space distortions, is then capable of breaking the degeneracy inherent in each probe individually \citep[see e.g.][]{2013MNRAS.429.2249S, 2015PhRvD..91h3504L}. We consider these parameters in $\boldsymbol\vartheta_{mg} = \boldsymbol\vartheta_{\ensuremath{\Lambda\mathrm{CDM}}}\xspace + \lbrace \Sigma_0, Q_0 \rbrace$.
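Combining the two expressions above gives $k^2 (\Phi + \Psi) = -8\pi G a^2 \rho \Delta \, \Sigma$ with $\Sigma = Q(1+R)/2$, which is why cosmic shear pins down $\Sigma_0$ so effectively. The minimal sketch below (our own illustration of this rescaling of the Weyl potential; the forecasts themselves use the ISiTGR implementation referred to in \cref{sec:forecasts}) simply encodes these definitions.
\begin{verbatim}
def sigma_param(Q0, R):
    """Derived lensing parameter Sigma_0 = Q_0 (1 + R) / 2."""
    return Q0 * (1.0 + R) / 2.0

def weyl_sum(k2_phi_gr, Q0, R):
    """k^2 (Phi + Psi), given the GR value of k^2 Phi.

    From k^2 Phi = -4 pi G a^2 rho Delta Q_0 and Psi = R Phi, the Weyl
    potential is rescaled by Sigma relative to its GR value, which is
    the combination probed by weak lensing.
    """
    return 2.0 * k2_phi_gr * sigma_param(Q0, R)

# GR limit: Q_0 = R = 1 gives Sigma_0 = 1 and an unmodified Weyl sum
print(sigma_param(1.0, 1.0), weyl_sum(-1.0, 1.0, 1.0))
\end{verbatim}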
\subsection{Weak Lensing Systematics}
Whilst the statistical error on a weak lensing measurement of a cosmological parameter can be beaten down through increasing the number density of galaxies $n_{\rm gal}$ with measured shapes on the sky (or by selecting a population with a smaller intrinsic shape dispersion $\sigma_{g}$), forthcoming Stage III and Stage IV experiments will begin to enter the regime where the contribution from systematic errors on shear measurement will become comparable to, and larger than, the statistical noise. Here we provide a brief overview of many (although not all) of these systematics, whereas a more detailed analysis of their effects and ways to overcome them will be provided in a companion paper \citep[][hereafter Paper III]{camera2016}.
\begin{itemize}
\item PSF uncertainties. The light from all sources used in weak lensing is convolved with the telescope point spread function. This convolution will induce changes in the size and ellipticity of the apparent galaxy shape in the image data, and must be accounted for when estimating the true observed ellipticity. Typically, a model is created for the PSF which is then deconvolved during shear measurement. For ground-based optical experiments, the primary systematic is residual, un-modelled PSF shape distortions due to instabilities in the atmosphere above the telescope (i.e. seeing). For space-based telescopes the atmosphere is not a consideration, but other effects from detectors and telescope optics can still create an anisotropic and time-varying PSF. {In addition, the deterministic nature of the changes in interferometer dirty beam shape with observing frequency may potentially avoid issues with shear bias from colour gradients in source galaxies \citep[see e.g.][for a full description of the problem]{2012MNRAS.421.1385V}. However, care will need to be taken to ensure the primary beam of each antenna is well-characterised enough to avoid the return of shear biases originating from the beam.}
\item Shear measurement uncertainties \citep[see][and references therein for an overview]{2014ApJS..212....5M}. Using the observed galaxy ellipticity as a shear estimator as in \cref{eqn:gamma_hat} depends on having a reliable, unbiased estimator of the ellipticity. Whilst in the noise-free case, $\epsilon$ can be defined as a simple function of the quadrupole moments of the image, significant complications arise whenever noise is present as the un-weighted quadrupoles will diverge. In general, maximum likelihood estimators for ellipticity will become increasingly biased at lower signal-to-noise ratios (as ellipticity is a ratio of quadrupole moments), and so must be calibrated \citep[e.g.][]{2012MNRAS.425.1951R}. Shear estimators which measure $\epsilon$ using parametrised models with elliptical isophotes also suffer from `model bias' caused by under-fitting of real galaxy intensity profiles \citep{2010MNRAS.404..458V}. Accounting for these biases correctly, through either explicit calibration or application of correct Bayesian priors, is a major step in the analysis pipeline for most surveys and requires sophisticated, large scale simulations which correctly reflect the observations.
\item Intrinsic Alignment (IA) contamination. A key assumption in \cref{eqn:gamma_hat} is that intrinsic galaxy shapes are uncorrelated and so any coherent shape must be due to cosmic shear. However, in reality there are two other astrophysical effects which contaminate the shear signal. Galaxies which are nearby on the sky form within the same large scale structure environment as one another, creating spurious `II' (Intrinsic-Intrinsic) correlations. In addition, galaxies which are local in redshift to an overdensity will develop intrinsic shapes in anti-correlation with the shearing of background galaxies by that same overdensity -- the `GI' (Gravitational-Intrinsic) alignment. Typically, these alignments can be {mitigated} through modelling their effect on the power spectrum, or discounting galaxies which are expected to be most affected (such as close pairs on the sky or redder galaxies). An overview of IA effects can be found in \citet{2015SSRv..193....1J}, \citet{2015SSRv..193...67K} and \citet{2015SSRv..193..139K}.
\item Non-linear evolution and baryonic feedback effects. Cosmology with cosmic shear relies on the comparison between an observed shear power spectrum and a theoretically predicted one. However, outside of the regime of linear evolution of large scale structures (i.e. on smaller scales $k \gtrsim 0.2 h \, \mathrm{Mpc}^{-1} $), a variety of physical effects will affect the shape of this power spectrum in uncertain ways which are possibly degenerate with changes in cosmological parameters \citep[e.g.][]{2005APh....23..369H}.
\item Redshift uncertainty estimation. Placing sources into tomographic bins usually requires an estimate of the source's redshift from a small number of broad photometric bands. Significant biases may arise due to insufficient freedom in Spectral Energy Distribution (SED) templates, incorrect spectroscopic calibration and noisy data. For a discussion of these issues see \cite{2015arXiv150705909B} and references therein.
\end{itemize}
\subsection{Radio Weak Lensing}
Performing weak lensing experiments in the radio band offers a number of potential advantages compared to using optical telescopes alone. In addition to opening the door to powerful cross-correlation techniques (which we consider in more detail in the following subsection), the radio band has the potential to bring unique added value to this area of cosmology by way of new approaches to measuring the weak lensing signal using polarisation and rotational velocity observations. Here we summarise the key benefits that radio weak lensing experiments can offer and highlight some of the challenges which need to be met. We refer the reader to \cite{2015aska.confE..23B} for more information.
\begin{itemize}
\item Weak lensing surveys conducted with radio telescopes are, in principle, much less susceptible to instrumental systematic effects associated with residual PSF anisotropies. {The image-plane PSF (or `dirty beam') is set by the baseline distribution and time and frequency sampling of the telescope, all of which are deterministic and known to the observer and may be controlled. An anisotropic PSF can mimic the sought-after cosmic shear signal and is one of the most worrisome systematic effects in optical lensing analyses.} Whilst the turbulent ionosphere can cause similar effects in the radio, these effects scale strongly with frequency, meaning that at the high frequency considered here ($1.355\,\mathrm{GHz}$, see Paper II for a full discussion) this is less of a concern for radio weak lensing.
{\item However, whilst the dirty beam is precisely known and highly deterministic, the incomplete sampling of the Fourier plane by the finite number of interferometer baselines leads to significant sidelobes which may extend across the entire visible sky. Deconvolving this PSF then becomes a complicated non-local problem as flux from widely-separated sources is mixed together and traditional methods (such as the CLEAN algorithm, \citealt{1974A&AS...15..417H}) have been shown to be inadequate for preserving morphology to the degree necessary for weak lensing.}
\item The SFGs which are expected to dominate the deep, wide-field surveys to be undertaken with the SKA are also expected to be widely distributed in redshift space \citep[see][and Paper II]{wilman08}. In particular, a high-redshift tail containing significant numbers of such galaxies, extending beyond $z \sim 1$, would provide additional high-$z$ bins beyond what is already accessible with optical surveys. {See the end of \cref{sec:fisher} for a demonstration of the increase in cosmological constraining power from the inclusion of these high-redshift sources. The details of the flux and size distributions of this population are still somewhat uncertain (see Paper II for a full discussion) and will benefit from the efforts of SKA precursor and pathfinder surveys.}
\item The orientation of the integrated polarised emission from SFGs is not altered by gravitational lensing. If the polarisation orientation is also related to the intrinsic structure of the host galaxy then this provides a powerful method for calibrating and controlling intrinsic galaxy alignments which are the most worrying astrophysical systematic effect for precision weak lensing studies \citep{2011MNRAS.410.2057B, 2015MNRAS.451..383W}. {Again, the polarisation fraction and the scatter between position angle and polarisation angle are currently subject to much uncertainty and have only been tested on small, low-redshift samples \citep{2009ApJ...693.1392S}. These results may not hold for the high-redshift SFGs we are interested in here, but will become better constrained by other surveys leading up to the SKA.}
\item Much like the polarisation technique, observations of the rotation axis of disk galaxies also provides information on the original (un-lensed) galaxy shape \citep{2002ApJ...570L..51B, 2006ApJ...650L..21M, huff2013}. Such rotation axis measurements {may be available} for significant numbers of galaxies with future SKA surveys through resolved 21 cm HI line observations.
\item HI line observations also provide an opportunity to obtain spectroscopic redshifts for sources used in weak lensing surveys \cite[e.g.][]{2015MNRAS.450.2251Y}, greatly improving the tomographic reconstruction {for the sources for which spectra are available. For SKA1 this will be a relatively small fraction of sources ($\sim10\%$) at low redshifts (which are less useful for gravitational lensing) but this will improve significantly for SKA2.}
\item Because Galactic radio emission at relevant frequencies is smooth, it is `resolved out' by radio interferometers. This means that radio surveys have access to more of the sky than experiments in other wavebands, which cannot see through the Galaxy because of dust obscuration effects.
\end{itemize}
A detection of a weak lensing signal in radio data was first made by \cite{2004ApJ...617..794C} in a shallow, wide-area survey. More recently \cite{2015arXiv150705977D} have made a measurement in cross-correlation with optical data, and the SuperCLASS\footnote{\url{http://www.e-merlin.ac.uk/legacy/projects/superclass.html}} survey is currently gathering data with the express purpose of pushing forward radio weak lensing techniques.
\subsection{Shear Cross-Correlations}
\label{sec:cross-spectra}
Whilst radio weak lensing surveys have worth in themselves, as discussed above, combining shear maps made at different observational wavelengths has further potential to remove systematics which can otherwise overwhelm the cosmological signal. Here we construct a formalism for forecasting the precision with which cross-correlation power spectra can be measured from shear maps obtained from two different experiments $X,Y$, which may be in different wavebands. We may still split sources in each experiment into different redshift bins $i,j$, giving the cross power spectra:
\begin{equation}
\label{eqn:limber_cross}
C ^{X_{i}Y_{j}} _\ell = \frac{9H_0^4 \Omega_\mathrm m^2}{4c^4} \int_0^{\chi_\mathrm h}
\mathrm d \chi \, \frac{g^{X_i}(\chi) g^{Y_j}(\chi)}{a^2(\chi)} P_{\delta} \left(\frac{\ell}{f_K(\chi)},\chi \right) \,.
\end{equation}
Here the bins can be defined differently for each experiment, taking advantage of e.g. higher median redshift distributions or better measured photometric redshifts in one or the other of the two experiments.
When observed, each power spectrum also includes a noise power spectrum from the galaxy sample:
\begin{equation}
\widetilde{C}_\ell^{X_i Y_j} = C_\ell^{X_i Y_j} + \mathcal{N}_\ell^{X_i Y_j}.
\end{equation}
The noise is a function of the number density of galaxies in each experiment individually $n_{\rm gal}^{X_i}, n_{\rm gal}^{Y_j}$, the number of objects which are common to both experiments $n_{\rm gal}^{X_iY_j}$ and the covariance of galaxy shapes between the two experiments and redshift bins $\mathrm{cov}(\epsilon_{X_i}, \epsilon_{Y_j})$. Note that this final term $\mathrm{cov}(\epsilon_{X_i}, \epsilon_{Y_j})$ is in general a function of both waveband $X,Y$ and redshift bin $i,j$, describing how galaxy shapes are correlated between the two wavebands and how this correlation evolves with redshift. We can then write the expression for the noise on an observed shear power spectrum:
\begin{align}
\mathcal{N}_\ell^{X_i Y_j} &= \frac{1}{n_{\rm gal}^{X_i}n_{\rm gal}^{Y_j}}\langle \sum_{\alpha \in X_i}\epsilon_\alpha \sum_{\beta \in Y_j}\epsilon_\beta \rangle \nonumber \\
&= \frac{n_{\rm gal}^{X_i Y_j}}{n_{\rm gal}^{X_i}n_{\rm gal}^{Y_j}}\mathrm{cov}(\epsilon_{X_i}, \epsilon_{Y_j}).
\end{align}
For correlations between redshift bins in the same experiment this reduces to the familiar shape noise term \citep[e.g.][]{2004PhRvD..70d3009H}:
\begin{equation}
\label{eqn:autonoise}
\mathcal{N}_\ell^{ij} = \delta^{ij}\frac{\sigma^2_{g_i}}{n^{i}_{\rm gal}}.
\end{equation}
If we make the simplifying assumption that for cross-experiment correlations, where redshift bins overlap, both experiments probe the same populations of galaxies which have the same shape and shape variance in both wavebands and across all redshift bins, the noise term becomes:
\begin{equation}
\label{eqn:simplenoise}
\mathcal{N}_\ell^{X_i Y_j} = \frac{n_{\rm gal}^{X_i Y_j}}{n_{\rm gal}^{X_i}n_{\rm gal}^{Y_j}}\sigma^2_{g}.
\end{equation}
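To make these noise terms concrete, the sketch below evaluates \cref{eqn:autonoise,eqn:simplenoise} for illustrative per-bin number densities (the survey totals of \cref{tab:experiments} split evenly between ten tomographic bins, an assumption made purely for this example), including the conversion from $\mathrm{arcmin}^{-2}$ to $\mathrm{sr}^{-1}$ required for dimensionless $C_\ell$.
\begin{verbatim}
import numpy as np

ARCMIN2_PER_SR = (180.0 * 60.0 / np.pi)**2    # arcmin^2 per steradian

def per_sr(n_arcmin2):
    return n_arcmin2 * ARCMIN2_PER_SR

def noise_auto(sigma_g, n_arcmin2):
    """Auto-spectrum shape noise N_l^ii = sigma_g^2 / n_gal."""
    return sigma_g**2 / per_sr(n_arcmin2)

def noise_cross(sigma_g, n_X, n_Y, n_XY):
    """Cross-experiment noise n^{XY} sigma_g^2 / (n^X n^Y); it
    vanishes when the two samples do not overlap (n_XY = 0)."""
    return per_sr(n_XY) * sigma_g**2 / (per_sr(n_X) * per_sr(n_Y))

sigma_g = 0.3
n_radio, n_optical = 2.7 / 10, 12.0 / 10      # per bin, arcmin^-2
for f_overlap in (0.0, 0.1, 1.0):
    n_common = f_overlap * min(n_radio, n_optical)
    print(f_overlap, noise_auto(sigma_g, n_radio),
          noise_cross(sigma_g, n_radio, n_optical, n_common))
\end{verbatim}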
{Here, for the two sets of tomographic redshift bins for each experiment we consider the fraction of sources which may be expected to appear in both the radio and optical shape catalogues. In reality, this overlap will be between a deep optical sample and a deep radio sample of SFGs on a wide area. Data sets with this combination of area coverage and depth do not as yet exist, but useful information can be gained from some shallower or narrower archival surveys. Here we consider the large but shallow SDSS-DR10 optical catalogue \citep{2014ApJS..211...17A} and the FIRST radio catalogue \citep{1995ApJ...450..559B, 2004ApJ...617..794C}; and deep but narrow observations of the COSMOS field using the Hubble Space Telescope \citep{2010ApJ...708..202M} in the optical and VLA in the radio \citep{2010ApJS..188..384S}. The SDSS-FIRST overlap region contains a significant part ($\sim10,000\,\mathrm{deg}^2$) of the northern sky, but the radio catalogue is shallow (a $10\,\sigma$ detection limit of $1.5\,\mathrm{mJy}$). The COSMOS overlap survey is deep (a $10\,\sigma$ detection limit of $0.28\,\mathrm{mJy}$) but covers only $1\,\mathrm{deg}^2$. These data sets appear to indicate that matching fractions are low ($<10\%$) and do not evolve significantly with redshift. In addition, the optical and radio weak lensing samples constructed by \cite{2010MNRAS.401.2572P} in an $8.5' \times 8.5'$ field in the HDF-N region contain a $4.2\%$ matching fraction across all redshifts.}
To investigate how much a non-vanishing radio-optical matching fraction could degrade the radio-optical cross-correlation constraining power for cosmology, we proceed as follows. We introduce a parameter $f_{\rm O-R}\in[0,1]$ quantifying the fraction of sources that appear in both the radio and the optical/near-infrared catalogues for a given combination of tomographic bins. In other words, we keep $n_{\rm gal}^{X_i Y_j}$ fixed to the number of sources present in the overlap between two given radio-optical bin pairs $X_i-Y_j$. We then multiply this quantity by $f_{\rm O-R}$ and perform a Fisher matrix analysis letting $f_{\rm O-R}$ vary continuously between 0 and 1 (but identically across all redshift bins). \Cref{fig:FoM_fOR} illustrates the degradation of the Dark Energy Task Force Figure of Merit (DETF FoM, the inverse area of a Fisher ellipse in the $w_0$-$w_a$ plane; see \citealt{2006astro.ph..9591A} and \cref{eqn:de_fom}) as the fraction of matching radio-optical sources, $f_{\rm O-R}$, increases (note that for simplicity we assume $\mathrm{cov}(\epsilon_{X_i}, \epsilon_{Y_j})=\sigma_\epsilon^2$). We show the ratio between the DETF FoM for a non-vanishing radio-optical matching fraction $f_{\rm O-R}$ and the same quantity for $f_{\rm O-R}=0$. It is easy to see that even if 100\% of the sources appeared in both catalogues, the degradation of the dark energy FoM would be $<5\%$ for Stage III cosmic shear surveys, and even lower for Stage IV experiments. If we then consider the available data described previously in this section, for which the corresponding range of values of $f_{\rm O-R}$ is indicated by the shaded area, we see that realistic noise terms have a minimal impact on the cross-correlation power spectra.
{In order to account for this in the following forecasts we consider the regime where overlap fractions are high and photometric redshifts are provided for the $85\%$ and $50\%$ of sources which do not have spectroscopic HI 21cm line redshifts in the case of SKA1 and SKA2 respectively (as described in \cref{tab:experiments}). However, as mentioned in \cref{sec:cross-correlations}, it may be possible for radio surveys alone to provide significantly more redshifts than those from only high-significance HI detections.}
\begin{figure}
\includegraphics[width=0.5\textwidth]{ratio-for.png}
\caption{{Ratio of the dark energy FoM to that for the case with no radio-optical matching fraction ($f_{\rm O-R}=0$), as a function of $f_{\rm O-R}$, for the cross-correlation between Stage III (dashed line) and Stage IV (solid line) experiments. The shaded region shows the range of values of $f_{\rm O-R}$ for the data sets discussed in the text.}}\label{fig:FoM_fOR}
\end{figure}
In the regime where systematics are controlled, the maximum amount of information is available by using both cross and auto-experiment power spectra. For a data vector consisting of both:
\begin{equation}
\widetilde{\mathbf{d}} =
\begin{pmatrix}
\widetilde{C}_\ell^{XX} \\
\widetilde{C}_\ell^{XY} \\
\widetilde{C}_\ell^{YY}
\end{pmatrix},
\end{equation}
we can also write the covariance matrix between two bins in different experiments (now suppressing the $i,j$ for clarity and with $\nu =\delta_{\ell \ell'}/[(2\ell + 1)f_{\rm sky}]$):
\begin{align}
\widetilde{\mathbf{\Gamma}}_{\ell\ell'} &= \\
& \nu \begin{pmatrix}
2(\widetilde{C}_\ell^{XX})^2 & 2\widetilde{C}_\ell^{XX}\widetilde{C}_\ell^{XY} & 2(\widetilde{C}_\ell^{XY})^2 \\
2\widetilde{C}_\ell^{XX}\widetilde{C}_\ell^{XY} & (\widetilde{C}_\ell^{XY})^2 + \widetilde{C}_\ell^{XX}\widetilde{C}_\ell^{YY} & 2\widetilde{C}_\ell^{XY}\widetilde{C}_\ell^{YY} \\
2(\widetilde{C}_\ell^{XY})^2 & 2\widetilde{C}_\ell^{XY}\widetilde{C}_\ell^{YY} & 2(\widetilde{C}_\ell^{YY})^2
\end{pmatrix}, \nonumber
\end{align}
making the simplifying assumption that different $\ell$ modes are uncorrelated and hence the covariance matrix is diagonal in $\ell$ and $\ell'$. However, here we are interested in forecasting constraints which are free of systematics caused by e.g. incorrect PSF deconvolution within an experiment and so consider only cross-experiment spectra (as such systematics will be uncorrelated between the two experiments), giving data vector:
\begin{equation}
\label{eqn:simple_data}
\widetilde{\mathbf{d}} =
\begin{pmatrix}
\widetilde{C}_\ell^{XY}
\end{pmatrix},
\end{equation}
and covariance matrix:
\begin{equation}
\label{eqn:simple_cov}
\widetilde{\mathbf{\Gamma}}_{\ell\ell'} =
\nu \begin{pmatrix}
(\widetilde{C}_\ell^{XY})^2 + \widetilde{C}_\ell^{XX}\widetilde{C}_\ell^{YY}
\end{pmatrix}.
\end{equation}
Forecasts presented here for cross-correlation experiments will be of this cross-only form and with noise terms given by \cref{eqn:simplenoise}.
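As a numerical illustration of these expressions, the sketch below evaluates the cross-only variance of \cref{eqn:simple_cov} and, for comparison, the full auto-plus-cross Gaussian covariance at a single multipole; the input spectra are illustrative numbers rather than the outputs of any of the surveys considered here.
\begin{verbatim}
import numpy as np

def nu(ell, f_sky):
    """Mode-counting prefactor 1 / [(2 ell + 1) f_sky] at fixed ell."""
    return 1.0 / ((2.0 * ell + 1.0) * f_sky)

def cov_cross_only(ell, Cxx, Cxy, Cyy, f_sky):
    """Variance of the cross-experiment spectrum (cross-only case)."""
    return nu(ell, f_sky) * (Cxy**2 + Cxx * Cyy)

def cov_full(ell, Cxx, Cxy, Cyy, f_sky):
    """3x3 Gaussian covariance of (C^XX, C^XY, C^YY) at one ell."""
    return nu(ell, f_sky) * np.array([
        [2 * Cxx**2,    2 * Cxx * Cxy,      2 * Cxy**2],
        [2 * Cxx * Cxy, Cxy**2 + Cxx * Cyy, 2 * Cxy * Cyy],
        [2 * Cxy**2,    2 * Cxy * Cyy,      2 * Cyy**2]])

# Illustrative observed (signal plus noise) spectra at ell = 500
ell, f_sky = 500, 5000.0 / 41253.0
Cxx, Cxy, Cyy = 1.2e-9, 0.9e-9, 1.0e-9
print(cov_cross_only(ell, Cxx, Cxy, Cyy, f_sky))
print(cov_full(ell, Cxx, Cxy, Cyy, f_sky))
\end{verbatim}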
\section{Experiments Considered}
A number of surveys across multiple wavebands, with weak lensing cosmology as a prominent science driver, are either currently taking place or planned for the near future. We adopt the language of the Dark Energy Task Force \citep[DETF,][]{2006astro.ph..9591A} in loosely grouping these experiments into `Stage III' and `Stage IV' experiments, where Stage III refers to experiments which were imminent when the DETF report was prepared, with Stage IV experiments following these in time. The distinction can also be cast in terms of the expected level of constraining power, with Stage III weak lensing-only experiments giving $\mathcal{O}(50\%)$ constraints on the dark energy equation of state $w$ and Stage IV experiments giving $\mathcal{O}(10\%)$ constraints. We point out that we present here constraints from weak lensing analyses only; in reality, significant improvements on constraints will be gained by both the SKA and optical surveys' measurements of galaxy clustering and other probes (such as supernovae and Intensity Mapping), as well as combination with external data sets.
For each stage we consider a representative experiment from both the optical and the radio. We now give short background descriptions of the source populations assumed and the particulars of each experiment considered.
\label{sec:experiments}
\begin{table*}
\begin{tabular}{lccccccccccc}
\hline
Experiment & $A_{\rm sky} \, [\mathrm{deg}^2]$ & $n_{\rm gal} \, [\mathrm{arcmin}^{-2}]$ & $z_{m}$ & $\alpha$ & $\beta$ & $\gamma$ & $f_{\textrm{spec-}z}$ & $z_{\textrm{spec-max}}$ & $\sigma_{\textrm{photo-}z}$ & $z_{\textrm{photo-max}}$ & $\sigma_{\textrm{no-}z}$ \\
\hline
SKA1 & 5,000 & 2.7 & 1.1 & $\sqrt{2}$ & 2 & 1.25 & 0.15 & 0.6 & 0.05 & 2.0 & 0.3 \\
DES\xspace & 5,000 & 12 & 0.6 & $\sqrt{2}$ & 2 & 1.5 & 0.0 & 2.0 & 0.05 & 2.0 & 0.3 \\
\hline
SKA2 & 30,000 & 10 & 1.3 & $\sqrt{2}$ & 2 & 1.25 & 0.5 & 2.0 & 0.03 & 2.0 & 0.3 \\
\textit{Euclid}-like\xspace & 15,000 & 30 & 0.9 & $\sqrt{2}$ & 2 & 1.5 & 0.0 & 0.0 & 0.03 & 4.0 & 0.3 \\
\hline
\end{tabular}
\caption{Parameters used in the creation of simulated data sets for the representative experiments considered in this paper.}
\label{tab:experiments}
\end{table*}
\subsection{Source Populations}
For the number density of sources in each tomographic bin in each experiment we use a redshift number density distribution of the form:
\begin{equation}
\label{eqn:nofz}
\frac{\d n_{\rm gal}}{\d z} = z^{\beta} \exp\left( -(z/z_0)^{\gamma} \right),
\end{equation}
where $z_0 = z_{m} / \alpha$ ($\alpha$ is a scale parameter) and $z_{m}$ is the median redshift of sources. For the SKA experiments we use the source counts in the SKADS S3-SEX simulation of radio source populations \citep{wilman08}; we have applied re-scalings of these populations in both size distributions and number counts in order to match recent data (see Paper II for a full description). Values of the parameters in \cref{eqn:nofz} are given in \cref{tab:experiments}, including the best-fit parameters to the SKADS S3-SEX distribution. The top panel of \cref{fig:nofz} shows these distributions for the experiments considered, including the high-redshift tail present in the radio source populations. For each experiment we then subdivide these populations into ten tomographic redshift bins, giving equal numbers of galaxies in each bin. We also add redshift errors, spreading the edges of each redshift bin and causing them to overlap. We assume a fraction of sources with spectroscopic redshifts (i.e. with no redshift error) $f_{\textrm{spec-}z}$ up to a redshift of $z_{\textrm{spec-max}}$. For the remaining sources we assign a Gaussian-distributed (with the prior $z > 0$) redshift error of width $(1+z)\sigma_{\textrm{photo-}z}$ up to a redshift of $z_{\textrm{photo-max}}$, beyond which we assume no `good' photometric redshift estimate and assign a far greater error $(1+z)\sigma_{\textrm{no-}z}$. Values for these parameters for each representative experiment are shown in \cref{tab:experiments} and the resulting binned distributions for SKA2 and the \textit{Euclid}-like\xspace experiment (see Section~\ref{sec:stage4_expts} below) are shown in the lower panel of \cref{fig:nofz}. We take an intrinsic galaxy shape dispersion of $\sigma_{g_i} = 0.3$ for all redshift bins and experiments{, consistent with that found for the radio and optical lensing samples used in previous radio weak lensing analyses \citep{2010MNRAS.401.2572P}}.
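The sketch below illustrates this construction for SKA2-like parameters from \cref{tab:experiments}: redshifts are drawn from \cref{eqn:nofz}, split into ten equally populated bins, and all sources without a spectroscopic redshift are scattered with Gaussian errors. It is a deliberately simplified version of the procedure just described (for instance, the $z>0$ prior is imposed by clipping), not the code used for the forecasts.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# SKA2-like parent distribution dn/dz ~ z^beta exp(-(z/z0)^gamma)
z_m, alpha, beta, gamma = 1.3, np.sqrt(2.0), 2.0, 1.25
z0 = z_m / alpha

z_grid = np.linspace(0.0, 6.0, 4000)
pdf = z_grid**beta * np.exp(-(z_grid / z0)**gamma)
cdf = np.cumsum(pdf)
cdf /= cdf[-1]

# Sample true redshifts by inverse-transform sampling
z_true = np.interp(rng.uniform(size=200000), cdf, z_grid)

# Ten tomographic bins containing equal numbers of galaxies
edges = np.quantile(z_true, np.linspace(0.0, 1.0, 11))

# Redshift errors: a fraction f_spec of low-z sources get exact (HI)
# redshifts; the rest get Gaussian scatter of width (1 + z) * sigma
f_spec, z_spec_max = 0.5, 2.0
sigma_photo, z_photo_max, sigma_no_z = 0.03, 2.0, 0.3

has_spec = (rng.uniform(size=z_true.size) < f_spec) & (z_true < z_spec_max)
sigma = np.where(z_true < z_photo_max, sigma_photo, sigma_no_z)
scatter = rng.standard_normal(z_true.size) * (1.0 + z_true) * sigma
z_obs = np.where(has_spec, z_true, np.clip(z_true + scatter, 0.0, None))

# "Observed" membership of, e.g., the highest tomographic bin
in_top_bin = (z_obs >= edges[-2]) & (z_obs < edges[-1])
print(np.round(edges, 2), in_top_bin.sum())
\end{verbatim}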
\begin{figure}
\includegraphics[width=0.5\textwidth]{nofz_theory_dists.png}\\
\includegraphics[width=0.5\textwidth]{nofz_dists.png}
\caption{Source (top) and ``observed'' (bottom, split into ten tomographic bins for each experiment) redshift distributions $\d n_{\rm gal}/\d z$ for the \textit{Euclid}-like\xspace and SKA2 experiments described in Section~\ref{sec:stage4_expts}. The curves in both panels are normalised such that the total area under the curves is equal to the total $n_{\rm gal}$ for each experiment.}
\label{fig:nofz}
\end{figure}
\subsection{Stage III Experiments}
\subsubsection{SKA Phase 1 (SKA1)}
The Square Kilometre Array (SKA) will be built in two phases: the first (SKA1) will consist of a low frequency aperture array in Western Australia (SKA1-LOW) and a dish array to be built in South Africa (SKA1-MID) with expected commencement of science observations in 2020. Of these, it is SKA1-MID which will provide the necessary sensitivity and resolution to conduct weak lensing surveys. Here we have assumed source number densities expected to emanate from a $5,000 \, \mathrm{deg}^2$ survey conducted at the centre of observing Band 2 (1.355 GHz) and with baselines weighted to give an image-plane PSF of size 0.5 arcsec full width at half maximum (FWHM). This experimental configuration is expected to give a close-to-optimal combination of high galaxy number density and quiescent ionosphere, as well as maximise commensality with other SKA science goals (see Paper II and \citealt{2015arXiv150706639H} for further discussion). We then calculate the expected sensitivity of the instrument when used in this configuration using the curves from the SKA1 Imaging Science Performance Memo \citep{braun2014}, which assumes a two year survey, and including all sources which are resolved and detected at a signal-to-noise greater than 10. We note that estimates for the number densities and distribution of sizes for SFGs at micro-Jansky fluxes are currently somewhat uncertain. To arrive at our estimates, we follow the procedure described in Paper II. In brief, we once again make use of the SKADS S3-SEX simulation \citep{wilman08} but we have re-calibrated the absolute numbers and sizes of SFGs found in that simulation so that they match the latest observational data from deep radio surveys. For both SKA experiments we also include fractions of spectroscopic redshifts, obtained by detection of HI line emission from the source galaxies.
\subsubsection{Dark Energy Survey (DES)}
For our Stage III optical weak lensing survey we follow the performance specifications of the weak lensing component of the Dark Energy Survey (DES). DES is an optical survey with a primary focus on weak lensing cosmology, covering $5,000 \, \mathrm{deg}^2$ of the Southern hemisphere sky using the 4-metre Blanco telescope at the Cerro Tololo Inter-American Observatory in Chile. It has already produced cosmological parameter measurements from weak lensing with Science Verification data \citep{2015arXiv150705552T} and represents a `Stage III' weak lensing survey along with contemporaries such as the Kilo-Degree Survey \citep[KiDS,][]{2015MNRAS.454.3500K} and Hyper Suprime Cam (HSC) weak lensing projects. Here we use the expected performance of the full five-year survey data, with observations in $g,r,i,z,Y$ bands and a limiting magnitude of 24. {The achievable weak lensing source number densities and redshift distributions considered here are drawn from \citet{2005astro.ph.10346T} and \citet{2016MNRAS.tmp..452D}}.
\subsection{Stage IV Experiments}
\label{sec:stage4_expts}
\subsubsection{Full SKA (SKA2)}
{As described in \cite{dewdney2013}, the full SKA (SKA2)} will be a significant expansion of SKA1, with the current plan for SKA-MID increasing the number of dishes from 194 to $\sim 2000$ (with the initial 194 integrated into the larger array) and spreading long baselines over Southern Africa, undergoing construction between 2023 and 2030. As the sensitivity scales approximately with the total collecting area, for SKA2 we assume a tenfold increase in the sensitivity of the instrument and make our forecasts for a $3\pi$ steradian survey, again at the centre of observing Band 2 (1.355 GHz) and with a 0.5 arcsec PSF.
\subsubsection{\textit{Euclid}-like\xspace}
For a Stage IV optical weak lensing experiment we consider as a reference a space-based survey capable of obtaining a galaxy number density of $n_{\rm gal} = 30 \, \mathrm{arcmin}^{-2}$ over $15,000\,\mathrm{deg}^2$ of the sky, with more accurate photometric redshifts than the DES\xspace survey, but still no spectroscopic redshift measurements. We expect this to be similar to the performance of the weak lensing component of the \textit{Euclid} satellite \citep{Laureijs:2011gra,Amendola:2012ys} planned for launch in 2020. We refer to this representative Stage IV optical weak lensing-only experiment as ``\textit{Euclid}-like\xspace''.
\subsection{Cross-Correlations}
\label{sec:cross-correlations}
For cross-correlation experiments, we take combinations of Stage III experiments (DES\xspace and SKA1) and Stage IV experiments (\textit{Euclid}-like\xspace\ and SKA2). For DES\xspace$\times$SKA1 we assume the $5,000 \, \mathrm{deg}^2$ sky coverage is the same for both surveys and construct theoretical power spectra $C_{\ell}$ with lensing kernels given by $g^{\mathrm{DES\xspace}_i}$ and $g^{\mathrm{SKA1}_i}$, with ten tomographic bins from each experiment defined to have equal numbers of sources in each bin (i.e. bin $i$ for DES\xspace does not correspond to, but may overlap with, bin $i$ for SKA1). For the noise power spectra $\mathcal{N}_\ell^{X_i Y_j}$ we assume a limiting case in which there is negligible overlap between the source populations probed by the different experiments \citep[as found in][]{2015arXiv150705977D} and for objects which do exist in both surveys, shapes are uncorrelated, as suggested by the findings of \cite{2010MNRAS.401.2572P}, meaning the populations in the twenty different bins are treated as wholly independent. {As demonstrated in \cref{fig:FoM_fOR}, the relaxation of this assumption should not significantly affect the achievable constraints. In the case where the samples are completely separate, redshift information will be necessary for the SKA sources, but could be obtainable from sub-threshold techniques which make use of the HI 21 cm line below the detection limit traditionally used for spectroscopic redshifts (techniques we are exploring in ongoing work), an approach which should be capable of providing approximate $\d n_{\rm gal}/\d z$ estimates (in the manner of photometric redshifts) for tomographically binned sources.}
For {\textit{Euclid}-like\xspace}$\times$SKA2 we consider only the $15,000 \, \mathrm{deg}^2$ survey region available to both experiments. Again, ten equally populated tomographic redshift bins are chosen for each experiment and observed cross-spectra are formed. We emphasise that we are not merely considering the lowest $n_{\rm gal}$ of the two experiments for the cross-correlations, but using the full $\d n_{\rm gal}/\d z$ distributions in twenty bins, ten from each experiment, making use of all the galaxies present.
\section{Forecasting Methods}
\label{sec:forecasts}
For forecasting constraints on cosmological parameters which will be possible with the SKA and cross-correlations we use two approaches: Markov Chain Monte Carlo (MCMC) mapping of the likelihood distribution and the Fisher Matrix approximation. For a given likelihood function and covariance matrix, MCMC methods are accurate and capable of tracing complicated posterior probability distribution surfaces in multiple dimensions, but are computationally expensive. Here, we run MCMC chains for all of our experiments and use them as a calibration for Fisher matrices, allowing the latter to be robustly used for future similar work. The calculation of realistic covariance matrices beyond the approximation in \cref{eqn:simple_cov} typically requires large-scale simulations of data of the type expected to be generated in an experiment; in Paper II we construct such simulations for a fiducial cosmology.
\subsection{Forecasts with \textsc{CosmoSIS}\xspace}
For our MCMC parameter constraint forecasts we make use of the \textsc{CosmoSIS}\xspace modular cosmological parameter estimation code \citep{2015A&C....12...45Z}. For a given set of cosmological parameters $\boldsymbol\vartheta$ we calculate a non-linear matter power spectrum using CAMB \citep{2000ApJ...538..473L} (with modifications from ISiTGR for the modified gravity models from \citealt{2011PhRvD..84l3001D, 2015PhRvD..92b3003D}) and halofit \citep{2003MNRAS.341.1311S, 2012ApJ...761..152T}. This is then converted to a shear power spectrum using \cref{eqn:limber} and the assumed $n_{X_i}(z)$ for the relevant experiment and redshift bin.
These shear power spectra are compared in a Gaussian likelihood to an `observed' data vector $\widetilde{d}_{\ell}$ and covariance matrix, calculated using the same method at our fiducial cosmological parameters:
\begin{align}
-2 \ln \mathcal{L} &= \nonumber \\
& \sum_{\ell,\ell'=\ell_\mathrm{min}}^{\ell_\mathrm{max}} \left(C_\ell^{XY}(\boldsymbol\vartheta) - \widetilde{d}_{\ell}\right) \left[ \mathbf{\Gamma}^{XY}_{\ell\ell'} \right]^{-1} \left(C_{\ell'}^{XY}(\boldsymbol\vartheta) - \widetilde{d}_{\ell'}\right),
\end{align}
summing over all multipoles as $\Gamma^{XY}_{\ell\ell'}$ is assumed to be diagonal in $\ell$ and $\ell'$. We then use the MultiNest \citep{2013arXiv1306.2144F} code to sample over this parameter space and form the posterior confidence regions shown in our results plots. For all of our MCMC forecasts we include information up to a multipole of $\ell_{\rm max} = 3000$, capturing mildly non-linear scales, dependent on the redshift being probed.
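For a covariance that is diagonal in $\ell$ and $\ell'$, this likelihood collapses to a single sum over multipoles; the sketch below evaluates that sum for a toy single-spectrum data vector, as a stand-in for (not a reproduction of) the \textsc{CosmoSIS}\xspace likelihood module.
\begin{verbatim}
import numpy as np

def neg2_log_like(C_model, d_obs, gamma_diag):
    """-2 ln L for a single spectrum with covariance diagonal in
    (ell, ell'), so the double sum reduces to one sum over ell."""
    r = C_model - d_obs
    return np.sum(r**2 / gamma_diag)

# Toy 'observed' spectrum and its Gaussian per-ell variance
ells = np.arange(100, 3001)
f_sky = 0.12
d_obs = 1e-9 * (ells / 100.0)**-1.0
gamma_diag = 2.0 * d_obs**2 / ((2.0 * ells + 1.0) * f_sky)

# A model offset from the data by 0.1 per cent in amplitude
C_model = 1.001 * d_obs
print(neg2_log_like(C_model, d_obs, gamma_diag))
\end{verbatim}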
\subsection{Comparison with Fisher Matrices}
\label{sec:fisher}
Whilst fully sampling the posterior distribution with Markov Chain methods provides a robust and accurate prediction for parameter constraints, it is typically computationally expensive and time consuming.
The Fisher matrix is an alternative approach for parameter estimation which assumes the presence of
a likelihood function $L(\boldsymbol\vartheta)$ that quantifies the
agreement between a certain set of experimental data and the set of parameters of
the model, $\boldsymbol\vartheta=\{\vartheta_\alpha\}$. It also assumes that
the behaviour of the likelihood near its maximum characterises the whole
likelihood function sufficiently well to be used to estimate errors on the
model parameters \citep{Jeffreys:1961,1996ApJ...465...34V,Tegmark:1996bz}.
Under the hypothesis of a Gaussian likelihood, the Fisher matrix
is defined as the inverse of the parameter covariance matrix. Thence, it is
possible to infer the statistical accuracy with which the data encoded in the
likelihood can measure the model parameters. If the data is
taken to be the expected measurements performed by future experiments, the Fisher
matrix method can be used, as we do here, to determine its prospects for
detection and the corresponding level of accuracy. The $1\sigma$ marginal error on
parameter $\vartheta_\alpha$ reads
\begin{equation}
\sigma(\vartheta_\alpha) = \sqrt{ \left( \mathbfss F^{-1} \right)_{\alpha\alpha}},
\label{eq:marginal}
\end{equation}
where $\mathbfss F^{-1}$ is the inverse of the Fisher matrix, and no summation
over equal indices is applied here.
Our experimental data will come from the measurement of the (cross-)correlation
angular power spectrum $C^{XY}_\ell$ between the observables $X$ and $Y$. From an observational point of view, we can consider each single mode
$\widetilde{C}^{XY}_\ell$ in tomographic and multipole space as a parameter of the theory. Then, to recast the Fisher matrix in the space of the model parameters, $\boldsymbol\vartheta$, it is sufficient to multiply the inverse of the covariance matrix by the Jacobian of the change of variables, viz.\
\begin{equation}
\mathbfss{F}_{\alpha\beta} = \sum_{\ell,\ell'=\ell_\mathrm{min}}^{\ell_\mathrm{max}}
\frac{\partial \mathbfss{C}^{XY}_\ell}{\partial \vartheta_\alpha}
\left[ \mathbf{\Gamma}^{XY}_{\ell\ell'} \right]^{-1}
\frac{\partial \mathbfss{C}^{XY}_{\ell'}}{\partial \vartheta_\beta},
\label{eq:fisher}
\end{equation}
where again we sum over all the multipoles because $\Gamma^{XY}_{\ell\ell'}$ is here assumed to be diagonal in $\ell$ and $\ell'$.
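In practice the derivatives in \cref{eq:fisher} can be evaluated with finite differences of the theory spectra; the schematic implementation below does this for a toy two-parameter model of $C_\ell$ (an illustration of the formalism only, not the pipeline used for our forecasts).
\begin{verbatim}
import numpy as np

ells = np.arange(100, 1001)
f_sky = 0.3

def c_ell_model(theta):
    """Toy stand-in for the theory spectrum C_l(amplitude, tilt)."""
    amp, tilt = theta
    return 1e-9 * amp * (ells / 500.0)**tilt

def gamma_diag(theta):
    """Per-ell Gaussian variance for a single (auto) spectrum."""
    return 2.0 * c_ell_model(theta)**2 / ((2.0 * ells + 1.0) * f_sky)

def fisher_matrix(theta0, step=1e-3):
    inv_gamma = 1.0 / gamma_diag(theta0)       # diagonal in ell, ell'
    # Central finite-difference derivatives dC_l / dtheta_alpha
    dC = []
    for a in range(len(theta0)):
        tp, tm = np.array(theta0, float), np.array(theta0, float)
        tp[a] += step
        tm[a] -= step
        dC.append((c_ell_model(tp) - c_ell_model(tm)) / (2.0 * step))
    return np.array([[np.sum(dC[a] * inv_gamma * dC[b])
                      for b in range(len(theta0))]
                     for a in range(len(theta0))])

F = fisher_matrix([1.0, -1.0])
print(np.sqrt(np.diag(np.linalg.inv(F))))      # marginal 1-sigma errors
\end{verbatim}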
Fisher matrices can be quickly computed, requiring computation of observational shear spectra only at the set of points in parameter space necessary for approximating the derivative, rather than at enough points to create a good, smooth approximation to the true posterior. This allows exploration of the impact of different systematics and analysis choices on forecast parameter constraints, which we intend to pursue in a following paper. Here, we validate the use of the Fisher approximation for such an exploration by comparing, for a simple case, the predictions from our MCMC chains and Fisher matrices. We use simplified versions of the SKA2 and \textit{Euclid}-like\xspace experiments (intended to maximise the Gaussianity of the contours and be quicker to compute), in which we consider both as covering the full sky ($A_{\rm sky} = 41,253\,\mathrm{deg}^2$), only use information up to $\ell=1000$ and cut off both redshift distributions at $z=4$. For these simplified experiments we calculate the parameter covariance matrix in the two parameters $\lbrace w_0, w_a \rbrace$ using both the MCMC procedure and the Fisher matrix approximation. \Cref{fig:fisher_comp} shows confidence region ellipses corresponding to both these methods and \cref{tab:fisher_comparison} the associated one dimensional parameter constraints, showing $\mathcal{O}(5\%)$ agreement.
{As a demonstration of the usefulness of this approach, we show the benefit of the high-redshift tail in the source distribution for SKA by calculating constraints in $\lbrace w_0, w_a \rbrace$ both including and excluding all sources above $z=2$. For SKA1, excluding these sources leads to a $\lbrace 3.63, 4.04 \rbrace$ factor increase in the width of the uncertainties, whilst for SKA2 the factors are $\lbrace 1.32, 1.51 \rbrace$.}
\begin{table}
\centering
\begin{tabular}{lcc}
\hline
Experiment & $\sigma_{w_0}$ MC, Fisher & $\sigma_{w_a}$ MC, Fisher \\
\hline
SKA2-simple & 0.0161, 0.0168 & 0.0651, 0.0660 \\
\textit{Euclid}-like\xspace-simple & 0.0226, 0.0236 & 0.104, 0.108 \\
\hline
\end{tabular}
\caption{One dimensional parameter constraints from covariance matrices calculated using full MCMC chains and the Fisher matrix formalism for the simplified weak lensing-only experiments described in \cref{sec:fisher}, showing good agreement (see also \cref{fig:fisher_comp}). The constraints for the simplified SKA2 experiment correspond to a DETF figure-of-merit of $\sim2500$.}
\label{tab:fisher_comparison}
\end{table}
\begin{figure}
\includegraphics[width=0.5\textwidth]{fisher_comparison.png}
\caption{Fisher (unfilled contours) and MCMC (filled contours) predictions for the simplified weak lensing-only experiments considered in \cref{sec:fisher}, showing agreement in both size and degeneracy direction. One dimensional uncertainties for both cases are shown in \cref{tab:fisher_comparison}.}
\label{fig:fisher_comp}
\end{figure}
\section{Results}
\label{sec:results}
In \crefrange{fig:matter}{fig:mg} we show the two dimensional parameter constraints {from our MCMC forecasts} on matter $\lbrace \sigma_8, \Omega_\mathrm m \rbrace$, dark energy $\lbrace w_0, w_a \rbrace$ and modified gravity $\lbrace \Sigma_0, Q_0 \rbrace$ parameter pairs, each marginalised over the full base {\ensuremath{\Lambda\mathrm{CDM}}}\xspace parameter set $\lbrace \Omega_\mathrm m, \Omega_\mathrm b, \sigma_8, h_0, n_s \rbrace$, with the light (dark) regions representing $95\%$ ($68\%$) confidence regions for the parameter values, and \cref{tab:marginals} showing one dimensional 1$\sigma$ confidence regions for each parameter individually. \Cref{tab:marginals} also shows the DETF Figure of Merit (FoM) for each experiment, calculated as the inverse area of an elliptical confidence region defined from the parameter covariance matrix of the simulated experiments:
\begin{equation}
\label{eqn:de_fom}
\mathrm{FoM} = \left( \sigma_{w_0} \sigma_{w_a} \sqrt{1 - \rho^2}\right)^{-1}
\end{equation}
where $\rho$ is the correlation coefficient and $\sigma_{w_0}$ and $\sigma_{w_a}$ are the one dimensional parameter standard deviations.
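Equivalently, the FoM is the inverse square root of the determinant of the marginalised $2\times2$ $(w_0, w_a)$ covariance block; the snippet below checks this using the SKA2 $1\sigma$ widths from \cref{tab:marginals} together with an assumed (illustrative) correlation coefficient.
\begin{verbatim}
import numpy as np

def detf_fom(cov_w0_wa):
    """DETF FoM from the marginalised 2x2 (w0, wa) covariance:
    1 / (sigma_w0 sigma_wa sqrt(1 - rho^2)) = 1 / sqrt(det C)."""
    return 1.0 / np.sqrt(np.linalg.det(np.asarray(cov_w0_wa)))

# sigma_w0 = 0.14, sigma_wa = 0.42 (SKA2), with an assumed rho = -0.95
s0, sa, rho = 0.14, 0.42, -0.95
cov = [[s0**2, rho * s0 * sa], [rho * s0 * sa, sa**2]]
print(detf_fom(cov))   # ~ 54 for these illustrative numbers
\end{verbatim}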
The left column of \crefrange{fig:matter}{fig:mg} shows these for the three Stage III experiments: DES\xspace, SKA1 and their cross-correlation. SKA1 performs only slightly worse than DES\xspace, as expected given its significantly lower galaxy number density; part of this deficit is made up for by the higher-median redshift distribution, which may be expected to provide a stronger lensing signal. The DES$\times$SKA1 contours, which make use of all of the galaxies in both experiments, outperform each experiment individually in the $\lbrace \sigma_8, \Omega_\mathrm m \rbrace$ case.
The right column of \crefrange{fig:matter}{fig:mg} shows the constraints for Stage IV experiments. Here SKA2, for which Galactic foregrounds are not a consideration and which hence has access to a full $30,000\,\mathrm{deg}^2$, outperforms the \textit{Euclid}-like\xspace experiment in the $\lbrace \sigma_8, \Omega_\mathrm m \rbrace$ contours. The cross-correlation contours, which only include galaxies in the $15,000\,\mathrm{deg}^2$ available to both experiments, are slightly larger than the individual experiments, but may be expected to be significantly more robust due to the removal of wavelength-dependent systematics.
\begin{figure*}
\includegraphics[width=0.475\textwidth]{stage3-matter.png}\includegraphics[width=0.475\textwidth]{stage4-matter.png}
\caption{Stage III (left) and Stage IV (right) weak lensing-only constraints on matter content ($\sigma_8$,$\Omega_\mathrm m$) parameters, including those from cross-correlation spectra between SKA1 and DES, and between SKA2 and the \textit{Euclid}-like\xspace experiment.}
\label{fig:matter}
\includegraphics[width=0.475\textwidth]{stage3-w0wa.png}\includegraphics[width=0.475\textwidth]{stage4-w0wa.png}
\caption{Stage III (left) and Stage IV (right) weak lensing-only constraints on dark energy ($w_0$,$w_a$) parameters, including those from cross-correlation spectra between SKA1 and DES, and between SKA2 and the \textit{Euclid}-like\xspace experiment. Note the different axis scales between the two plots.}
\label{fig:w0wa}
\includegraphics[width=0.475\textwidth]{stage3-mg.png}\includegraphics[width=0.475\textwidth]{stage4-mg.png}
\caption{Stage III (left) and Stage IV (right) weak lensing-only constraints on modified gravity ($\Sigma_0$,$Q_0$) parameters, including those from cross-correlation spectra between SKA1 and DES, and between SKA2 and the \textit{Euclid}-like\xspace experiment.}
\label{fig:mg}
\end{figure*}
\subsection{Application of \textit{Planck}\xspace Priors}
We also show constraints obtained by combining the results from our experiments with results from observations of the CMB by the \textit{Planck}\xspace satellite \citep{2015arXiv150201589P} in \cref{fig:planck_priors}. For this, we re-weight our MCMC chains using the plikHM-TTTEEE-lowTEB-BAO \textit{Planck}\xspace likelihood chain\footnote{Obtained from the Planck Legacy Archive \url{http://www.cosmos.esa.int/web/planck/pla}}, re-centred around our fiducial cosmology. We also show the combined, marginalised parameter constraints for both auto and cross-correlation experiments in \cref{tab:marginals}. Whilst these result in little difference in the matter parameters, the different degeneracy direction of the \textit{Planck}\xspace constraints on $(w_0,w_a)$ allows for a significantly smaller area in the contours, improving the DETF FoM by a factor $\sim5$ for each experiment and allowing $\mathcal{O}(10\%)$ constraints on both parameters.
\begin{figure*}
\includegraphics[width=0.475\textwidth]{stage3-w0wa-planck.png}\includegraphics[width=0.475\textwidth]{stage4-w0wa-planck.png}
\caption{Dark energy ($w_0$,$w_a$) parameter constraints when Stage III and Stage IV weak lensing-only experiments are combined with Cosmic Microwave Background priors from \citet{2015arXiv150201589P}.}
\label{fig:planck_priors}
\end{figure*}
\begin{table*}
\begin{tabular}{lrl|rl|rl|c}
\hline
Experiment & ($\sigma_{\Omega_\mathrm m}/\Omega_\mathrm m$, & $\sigma_{\sigma_{8}}/\sigma_8$) & ($\sigma_{w_0}$, & $\sigma_{w_a}$) & ($\sigma_{\Sigma_{0}}/\Sigma_0$, & $\sigma_{Q_{0}}/Q_0$) & DETF FoM\\
\hline
SKA1 & 0.083 & 0.040 & 0.52 & 1.6 & 0.19 & 0.43 & 1.6\\
SKA1 + \textit{Planck}\xspace & 0.084 & 0.040 & 0.28 & 0.43 & - & - & 77\\
DES\xspace & 0.056 & 0.032 & 0.43 & 1.4 & 0.13 & 0.43 & 3.5\\
DES\xspace + \textit{Planck}\xspace & 0.058 & 0.033 & 0.22 & 0.33 & - & - & 89\\
SKA1$\times$DES\xspace & 0.046 & 0.024 & 0.45 & 1.3 & 0.13 & 0.39 & 3.3\\
SKA1$\times$DES\xspace + \textit{Planck}\xspace & 0.046 & 0.024 & 0.23 & 0.36 & - & - & 106\\
\hline
SKA2 & 0.010 & 0.0046 & 0.14 & 0.42 & 0.04 & 0.13 & 51\\
SKA2 + \textit{Planck}\xspace & 0.010 & 0.0047 & 0.086 & 0.15 & - & - & 305\\
\textit{Euclid}-like\xspace & 0.011 & 0.0058 & 0.13 & 0.38 & 0.053 & 0.17 & 54\\
\textit{Euclid}-like\xspace + \textit{Planck}\xspace & 0.012 & 0.059 & 0.095 & 0.16 & - & - & 244\\
SKA2$\times$\textit{Euclid}-like\xspace & 0.013 & 0.0064 & 0.15 & 0.43 & 0.053 & 0.17 & 45\\
SKA2$\times$\textit{Euclid}-like\xspace + \textit{Planck}\xspace & 0.013 & 0.0064 & 0.10 & 0.17 & - & - & 240\\
\hline
\end{tabular}
\caption{One dimensional marginalised constraints on the parameters considered, where all pairs (indicated by brackets) are also marginalised over the base {\ensuremath{\Lambda\mathrm{CDM}}}\xspace parameter set.}
\label{tab:marginals}
\end{table*}
\section{Conclusions}
\label{sec:conclusions}
In this paper we have presented forecasts for cosmological parameter constraints from weak lensing experiments involving the Square Kilometre Array (SKA), both in isolation and in cross-correlation with comparable optical weak lensing surveys. We have shown that the first phase of the SKA (SKA1) can provide $\mathcal{O}(5\%)$ constraints on matter parameters $\Omega_\mathrm m$ and $\sigma_8$, $\mathcal{O}(50\%)$ constraints on dark energy equation of state parameters $w_0$ and $w_a$, and $\mathcal{O}(10\%)$ constraints on modified gravity parameters $\Sigma_0$ and $Q_0$, competitive with the Dark Energy Survey (DES). The full SKA (SKA2) can significantly improve on all of these constraints and be competitive with the surveys planned with Stage IV optical weak lensing experiments. Furthermore, we have explored what may be achieved with weak lensing constraints from the cross-correlation power spectra between radio and optical experiments. Such cross-correlation experiments are important as they will be free of wavelength-dependent systematics which can otherwise cause large biases which dominate statistical errors and can lead to erroneous cosmological model selection. For both the Stage III (SKA1, DES\xspace) and Stage IV (SKA2, \textit{Euclid}-like\xspace) experiments, such systematics are potentially larger than the statistical errors available from the number density of galaxies probed. We have shown that parameter constraints made using only the cross-waveband power spectra can be as powerful as traditional approaches considering each experiment separately, but with the advantage of being more robust to systematics. Such cross-correlation experiments represent significant promise in allowing weak lensing to maximise its potential in extracting cosmological information. At both Stage III and Stage IV, constraints on $(w_0,w_a)$ are significantly improved with the addition of Cosmic Microwave Background priors from the \textit{Planck}\xspace satellite, down to $\mathcal{O}(10\%)$ in both parameters for SKA2 + \textit{Planck}\xspace.
The realisation of this promise in practice will rely on a number of developments:
\begin{itemize}
\item The accuracy and reliability of shape measurements of galaxies from SKA data (which will arrive in the poorly-sampled Fourier plane as visibilities) will need to match that available from image-plane optical experiments \citep[see][for further discussion]{2015aska.confE..30P}.
\item Understanding of the star-forming radio galaxy populations making up the sources in SKA weak lensing surveys, and how these correspond to the source populations in optical surveys.
\item The extraction of redshift information for the radio sources, either from cross-matching catalogues, requiring deep data in wavebands capable of providing photometric redshifts, or extracting HI 21 cm line redshifts from below a traditional survey threshold.
\item Optimisation of SKA survey strategies to maximise the amount of information gained in radio weak lensing surveys. For more discussion of this see \cite{bonaldi2016} (Paper II).
\item Inclusion of additional information from radio polarisation and spectral line measurements, which may mitigate other, wavelength-independent systematics which are not removed by cross-correlations, such as galaxy intrinsic alignments. We intend to explore the impact of these approaches on parameter constraints in a future work using Fisher matrix forecasts to quantify the impact of such systematics and how well they may be removed.
\end{itemize}
These problems are currently being addressed through the radioGREAT data simulation programme\footnote{\url{http://radiogreat.jb.man.ac.uk}}, precursor experiments and exploitation of archival data \citep[][SuperCLASS]{2015arXiv150705977D}, large-scale simulations (Paper II) and theoretical work \citep[e.g.][]{2015MNRAS.451..383W}. If these aspects can be understood sufficiently well, the use of radio and radio-optical cross-correlation experiments will maximise the potential of weak lensing, allowing us to approach more closely the full precision available from the data and giving the best possible chance of starting to understand the true physical nature of dark matter and dark energy.
\section*{Acknowledgments}
IH, SC and MLB are supported by an ERC Starting Grant (grant no. 280127). MLB is an STFC Advanced/Halliday fellow. JZ is supported by an ERC Starting Grant (grant no. 240672). We thank Anna Bonaldi for useful discussions and Constantinos Demetroullas and Ben Tunbridge for help with the matched radio-optical catalogues.
\section{Introduction}
\label{sec:Intro}
The experimental search for a permanent electric dipole moment of the neutron (nEDM) has been an important topic of fundamental research since the early 1950s \cite{Pur50, Smi57}.
\hl{
Since then, the experimental sensitivity has been improved by more than six orders of magnitude.
The largest leap in sensitivity was due to the development of sources of ultracold neutrons (UCN) \cite{UCNsource, Golubbook} permitting the storage of neutrons within a material ``bottle" for hundreds of seconds \cite{Pendlebury1984}.
This, in turn, created the requirement to keep experimental conditions, especially the magnetic field, stable over similar time spans, which resulted in the development of magnetometers placed close to \cite{Smith90} or within \cite{Green98} the storage bottle.
}
The experimental method applied to search for an nEDM with ultracold neutrons is based on a precise determination of the neutron spin precession frequency in static homogeneous parallel/antiparallel magnetic and electric fields by the Ramsey technique of \mbox{(time-)}separated oscillatory fields \cite{Ram50}.
The statistical and systematic uncertainties of this method are strongly dependent on the (non)uniformity of the magnetic field $\vec{B}$ in which the neutrons precess.
This article is the second episode in a trilogy of papers that comprehensively treat the uncertainties in nEDM searches that originate from the inhomogeneity of the magnetic field.
The first episode \cite{Abe19a} gives a general introduction to the subject, defines the way we characterize gradients, and derives the relevant criteria for nEDM experiments.
\hl{
In the second episode, this paper, we discuss the general approach to measuring and compensating magnetic field gradients using an array of magnetometers.
We describe in detail the specific implementation of this approach used in the 2015 and 2016 data runs of the nEDM experiment at the Paul Scherrer Institute (PSI).
The general concept and aspects of the implementation are applicable to other experiments where magnetic field homogeneity is a concern.
}
The third part will present the offline characterization of the magnetic field uniformity in the apparatus with an automated field-mapping device.
The nEDM apparatus at PSI is an upgraded version of the \SRI{} apparatus \cite{Bak14} that is equipped with two high-sensitivity systems for monitoring magnetic field changes, namely a \ensuremath{{}^{199}\mathrm{Hg}}{} co-magnetometer~\cite{Green98, Bak14} and an array of sixteen laser-pumped Cs{} magnetometers~\cite{Weis05,Gro06}.
The PSI-nEDM experiment \cite{psiprl} was the first to use a co-magnetometer and an array of external magnetometers simultaneously during data taking.
The Hg co-magnetometer employs an ensemble of spin-polarized \ensuremath{{}^{199}\mathrm{Hg}}{} atoms which occupy the same storage volume as the UCN, and whose spin precession frequency is used to correct for drifts of the magnetic field in every Ramsey cycle.
The array of Cs{} magnetometers located above and below the storage chamber measures the spatial distribution of the magnetic field, allowing for control of the field homogeneity and extraction of the gradients across the neutron storage chamber.
The focus of this article is the implementation and application of the Cs{} magnetometer array.
In Section \ref{sec:nEDMexperiment} we describe the principle of the PSI-nEDM measurement with emphasis on the required magnetic field sensitivity and resolution of the magnetic field gradient.
Section \ref{sec:TheCsmagnetometerarray} provides a technical description of the Cs{} magnetometer array, including the design of the Cs{} magnetometers, their modes of operation and their performance in terms of magnetic field sensitivity and accuracy.
Section \ref{sec:Applications} details the applications of the Cs{} magnetometer array in the nEDM experiment.
A description of how to extract magnetic field gradients from the array field measurements is provided in
Section \ref{sec:GzExtraction} and Section \ref{sec:T2opt} presents the procedure used to optimize the magnetic field.
\section{The \lowercase{n}EDM experiment at PSI}
\label{sec:nEDMexperiment}
Figure~\ref{fig:setup_1} shows the general scheme of the PSI-nEDM experiment \cite{Bak11}, further called the `nEDM experiment'.
The cylindrical neutron storage chamber, \hl{ which also contains the \ensuremath{{}^{199}\mathrm{Hg}}{} co-magnetometer,} consists of a polystyrene ring coated with deuterated polystyrene
\cite{Bod} and aluminum end caps coated with diamond-like carbon \cite{Atc}.
The latter serve as high-voltage and ground electrodes, which can generate a vertical electric field of up to 15\,\ensuremath{\mathrm{kV}}/\ensuremath{\mathrm{cm}}{} in the chamber.
The height of the cylinder \hl{(i.e., the distance between the electrodes)} is 120\,mm, and the radius is 235\,mm.
The Cs{} magnetometers that measure the magnetic field gradients are mounted on the high-voltage and ground electrodes.
The storage chamber is located inside an aluminum vacuum chamber, onto which a cos-theta coil is wound.
\hl{The vacuum tank also supports a set of 30 trim-coils and the $B_1$ coils used to generate magnetic resonance pulses for the neutrons and the Hg atoms.}
The cos-theta coil produces a vertical, static magnetic field of $\approx \SI{1}{\micro T}$, while the trim-coils are used to homogenize the field and to apply specific field gradients when necessary.
The vacuum chamber is surrounded by a passive four-layer $\mu$-metal shield.
The whole setup is enclosed in an air-conditioned, temperature-stabilized wooden hut.
Three pairs of large ($\approx$ 8\,m$\times$6\,m) rectangular coils are mounted outside the hut and dynamically compensate the outer ambient field \cite{Afa14b}.
\hl{
The system attenuates fluctuations of the ambient field by a factor of 5--50 in a bandwidth from DC to \SI{0.5}{Hz}, which compensates for the drop of the passive shielding factor at low frequencies.
}
\begin{figure}
\includegraphics[width=\linewidth]{Apparatus.pdf}
\caption[Setup1]{Scheme of the nEDM apparatus. \hl{The magnetic and electric fields in the storage chamber are oriented vertically, each either parallel or anti-parallel to $z$.}}
\label{fig:setup_1}
\end{figure}
The operation of the apparatus during data taking with UCN was recently reviewed in \cite{Abe19b}.
The effect of a finite nEDM $d_{\textrm{n}}$ when the neutron is exposed to both an electric field $\vec{E}$ and a magnetic field $\vec{B}$ is an electric-field-dependent shift of the neutron spin precession frequency \hl{$f_n$}.
Statistical uncertainties in the determination of that frequency by Ramsey's method \cite{Ram50} propagate to the sensitivity of the nEDM measurement
\begin{equation}
\sigma{}(d_{\textrm{n}}) = \frac{\hbar }{2\,\alpha\, T \, E\, \sqrt{N} \sqrt{N_\mathrm{cycles}}},
\label{eq:nustatistical}
\end{equation}
where $\alpha{}$ is the contrast of the Ramsey fringe, $T$ is the precession time, $E$ the electric field strength, $N$ the number of detected neutrons in one Ramsey cycle, and $N_\mathrm{cycles}$ the number of such cycles.
\hl{In real measurements the statistical sensitivity is typically 10\% worse due to imperfections and data cuts.}
Details of this procedure are given in \cite{Harris07}.
The contrast $\alpha$ is determined by the transverse neutron spin depolarization time, and can be significantly improved by homogenizing the longitudinal (vertical) component of the magnetic field, as discussed in Section \ref{sec:T2opt}.
The statistical sensitivity of the \SRI{} experiment \cite{Harris99,Bak06}, \hl{which led to the former best value for $\ensuremath{d_{\textrm{n}}}$ \cite{Pen15},} was $\sigma{}_{\textrm{day}}(d_\mathrm{n})\approx2\times10^{-25} \ensuremath{e\cdot\ensuremath{\mathrm{cm}}}$ per day ($N_\mathrm{cycles}=400$).
In the nEDM experiment at PSI \cite{psiprl} this value was improved by increasing $\alpha$ (see Section \ref{sec:T2optResults}), $E$, and neutron counting statistics and was on average $\sigma{}_{\textrm{day}}(d_\mathrm{n})\approx1.1\times10^{-25} \ensuremath{e\cdot\ensuremath{\mathrm{cm}}}$ per day ($N_\mathrm{cycles}=288$).
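As a minimal illustration of Eq.~\eqref{eq:nustatistical}, the short Python sketch below evaluates the daily sensitivity for a set of assumed, representative cycle parameters; the contrast, precession time, electric field and neutron counts used here are illustrative values and not the published run parameters.
\begin{verbatim}
# Minimal sketch: evaluate the daily statistical sensitivity with
# assumed, representative values (alpha, T, E and N are illustrative).
hbar = 1.054571817e-34       # J s
e    = 1.602176634e-19       # C

alpha    = 0.75              # assumed Ramsey contrast
T        = 180.0             # s, assumed precession time
E        = 11e3 * 1e2        # V/m, assumed 11 kV/cm electric field
N        = 15000             # assumed detected neutrons per cycle
N_cycles = 288               # cycles per day, as quoted in the text

sigma_dn = hbar / (2 * alpha * T * E * N**0.5 * N_cycles**0.5)   # C m
print(f"sigma_day(d_n) ~ {sigma_dn / e * 100:.2e} e cm")         # ~1.1e-25
\end{verbatim}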
In order to keep the systematic uncertainty related to the control of the magnetic field and its gradients below the statistical sensitivity in Eq.~\eqref{eq:nustatistical}, the resolution of the magnetic field measurement, \hl{$\sigma (B)$}, in one Ramsey cycle should be:
\begin{equation}
\sigma (B) \ll \frac{ E \sqrt{2 N_\mathrm{cycles}}~\sigma{}_{\textrm{day}}(d_{\mathrm{n}})}{\mu_n} ,
\label{eq:edmB}
\end{equation}
which gives $\sigma (B) \ll 0.5\,\mbox{pT}$ for the PSI-nEDM experiment.
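The numerical value quoted above can be reproduced with the following sketch; the electric field of 11\,kV/cm is an assumed typical value (the electrodes allow up to 15\,kV/cm), while the daily sensitivity and number of cycles are taken from the previous paragraph.
\begin{verbatim}
# Minimal sketch: required field resolution so that magnetic-field drifts
# stay below the statistical sensitivity. E is an assumed value.
e    = 1.602176634e-19       # C
mu_n = 9.6623651e-27         # J/T, magnitude of the neutron magnetic moment

E        = 11e3 * 1e2        # V/m (assumed 11 kV/cm)
sigma_dn = 1.1e-25 * e * 1e-2    # 1.1e-25 e cm converted to C m
N_cycles = 288

sigma_B = E * (2 * N_cycles)**0.5 * sigma_dn / mu_n
print(f"sigma(B) << {sigma_B*1e12:.2f} pT")      # ~0.5 pT
\end{verbatim}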
This resolution is provided by the \ensuremath{{}^{199}\mathrm{Hg}}{} co-magnetometer, whose spin precession frequency $f_{\textrm{Hg}}$ is used to monitor and correct for changes of the magnetic field from one Ramsey cycle to the next \cite{Bak14}.
\hl{
Mercury, and specifically its isotope \ensuremath{{}^{199}\mathrm{Hg}}{}, was chosen because in its ground state it has no electronic contribution to the atomic spin.
The atomic spin, which can be optically pumped and probed, is thus a pure nuclear spin with coherence times of up to hundreds of seconds.
This makes it possible to monitor the magnetic field during a Ramsey cycle with a coherent spin precession signal, achieving a sensitivity that is on average better than 80~\mbox{fT}{}.
Using the co-magnetometer signal as a magnetic reference reduces the uncertainty of the neutron precession frequency due to magnetic field fluctuations to a few \% of the total uncertainty.
}
\hl{All \ensuremath{{}^{199}\mathrm{Hg}}{} atoms are in the gas phase as the vapor pressure is much below the saturation pressure at room temperature.
The atoms thus move with typical thermal velocities and sample the volume uniformly.
The ultracold neutrons, however, are noticeably affected by gravity because of their much lower velocity and thus preferentially inhabit the lower portion of the storage chamber.
}
As a consequence, the ratio $\mathcal{R} ={f_{\textrm{n}}}/{f_{\textrm{Hg}}} $ is affected by any vertical magnetic field gradient $\frac{\partial B_z}{\partial z}$ across the storage chamber.
Adopting the notation of \cite{Abe19a},
\begin{equation}
\mathcal{R} =\frac{f_{\textrm{n}}}{f_{\textrm{Hg}}} = \frac{\gamma_{\textrm{n}}}{\gamma_{\textrm{Hg}}} \bigg(1 + \frac{G_{\textrm{grav}}\langle z \rangle}{B_0}+\delta_{\textrm{other}}\bigg) ,
\label{eq:Rratio}
\end{equation}
where $\gamma_{\textrm{n}}$ and $\gamma_{\textrm{Hg}}$ are the gyromagnetic ratios of the neutron and \ensuremath{{}^{199}\mathrm{Hg}}{} atom respectively, $G_{\textrm{grav}}$ is a combination of the relevant vertical gradients (see Section \ref{sec:GzExtraction}), $\langle z \rangle$ is the vertical displacement of the center of mass of the neutrons with respect to the center of the storage chamber, $B_0 = \langle B_z\rangle_{\textrm{Hg}}$ is the magnetic field averaged over the precession volume as measured by the \ensuremath{{}^{199}\mathrm{Hg}}{} co-magnetometer and $\delta_{\textrm{other}}$ encompasses all other effects that change the $\mathcal{R}$-ratio, such as, e.g., the motional false EDM \cite{Afa15c} and the rotation of the Earth \cite{EarthRot}.
\hl{The positive $z$-direction is defined upwards with respect to gravity so that a negative value} is expected for the average displacement $\langle z \rangle$ of the neutrons.
The required resolution of the gradient measurements $\sigma{}(G_{\textrm{grav}})$ \hl{ for one Ramsey cycle} can be estimated in a similar way as $\sigma(B)$ leading to
\begin{equation}
\sigma{}(G_{\textrm{grav}}) \ll \frac{\sigma(B)}{|\langle z\rangle|}
\simeq\text{1.3\,pT/cm},
\label{eq:edmgz}
\end{equation}
using $\langle z\rangle = -0.38(3)$\,cm as determined in \cite{Abe19a}.
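The corresponding numerical estimate is a one-line calculation; the sketch below simply divides the field-resolution requirement derived above by the quoted center-of-mass offset.
\begin{verbatim}
# Minimal sketch: gradient resolution requirement.
sigma_B = 0.5e-12            # T, field resolution requirement from above
z_mean  = -0.38e-2           # m, neutron center-of-mass offset
sigma_G = sigma_B / abs(z_mean)                       # T/m
print(f"sigma(G_grav) << {sigma_G*1e10:.1f} pT/cm")   # ~1.3 pT/cm
\end{verbatim}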
The temporal evolution of the magnetic field gradients was monitored with the array of sixteen Cs{} magnetometers installed close to the precession chamber.
\hl{This allowed corrections to be made for gradient drifts (Section \ref{sec:GzExtraction}) and the homogenization of the magnetic field using the variometer principle \cite{Ver06} (Sections \ref{sec:Variometer} and \ref{sec:T2opt}).}
The latter resulted in larger values for the contrast $\alpha$ leading to a 35\% increase in statistical sensitivity.
\section{The Cs{} magnetometer array}
\label{sec:TheCsmagnetometerarray}
This section describes the design, implementation and modes of operation of the Cs sensors installed above and below the precession chamber for monitoring magnetic field gradients.
\hl{The design decisions were guided by the requirement to minimize any potential interference between the Cs sensors and the neutron EDM measurement.
We chose to operate the sensors at room temperature since temperature gradients can lead to electrical currents that disturb the magnetic field in the experiment.
Using Cs as the sensor medium combines two advantages in this situation: (i) Cs has the highest vapor pressure of all stable alkali metals and (ii) it has only one stable isotope, \ensuremath{{}^{133}\mathrm{Cs}}, with a large hyperfine splitting which suppresses interference from neighboring transitions.
The sensors were operated in the $M_x$-mode \cite{Gro06, Ale, Weis16} which features a stable steady state due to the continuous magnetic resonance driven by an oscillating magnetic field.
This weak field was suppressed by aluminum shielding cans and did not interfere with the neutron EDM measurement due to the large difference in resonance frequency (\SI{3.5}{kHz} for Cs vs. \SI{30}{Hz} for the neutrons).
}
\hl{
Similar sensor arrays have previously been used to measure the magnetic field generated by the human heart \cite{Array19,Array57}.
For those biomagnetic measurements the performance of the array is limited by statistical uncertainties in the individual sensors.
The sensors presented here are related to the ones used in \cite{Array19} but have been optimized for stability and accuracy since statistical uncertainties are not the limiting factor for the large integration times relevant in nEDM measurements.
}
\subsection{Design and implementation}
\label{sec:Design}
The magnetometer array consists of sixteen Cs{} sensors that are made of nonmagnetic materials and are vacuum-compatible.
The compact design allows their mounting close to the storage chamber.
The sixteen magnetometers are arranged in a three-layer gradiometer configuration with sensors located both above and below the storage chamber.
Seven sensors are installed on the high-voltage electrode, the centers of these sensors being 127.9\,\mm{} above the center plane of the neutron storage chamber.
Nine sensors are installed below the ground electrode.
They are arranged on two levels: 6 sensors are mounted on the aluminum plate directly below the ground electrode (128.5\,\mm{} below the center of the storage chamber), while three more sensors are positioned in a plane located 75\,\mm{} lower, as shown in Fig.~\ref{fig:CsMa}.
\hl{All sensors are placed with a position accuracy of about 0.5\,mm.}
\begin{figure}
\subfigure[]
{\includegraphics[width=0.48\linewidth]{CsMaHV.pdf}
\label{fig:CsMaHV}}
\hfill
\subfigure[]
{\includegraphics[width=0.48\linewidth]{CsHV.pdf}
\label{fig:CsAll1}}
\vfill
\subfigure[]
{\centering\includegraphics[width=0.38\linewidth]{AllSensors.pdf}
\label{fig:AllSensors}}
\qquad{}
\subfigure[]
{\includegraphics[width=0.48\linewidth]{CsCut_corrected.png}
\label{fig:CsAll2}}
\caption[Location of the Cs{} magnetometers in the nEDM experiment.]{Positions of the 16 Cs{} magnetometers in the nEDM experiment.
%
Each sensor is enclosed in an aluminum cylinder which suppresses the interaction of its RF-field with the neighboring Cs{} sensors.
%
(a)~Storage chamber removed from the vacuum chamber (in the background) with 6 HV-compatible Cs{} magnetometers installed on an aluminum plate fixed to the HV electrode with corona ring.
%
(b)~Schematic view of the neutron storage chamber, the electrodes and the Cs{} magnetometers.
%
(c)~The blue spheres indicate the positions of the Cs{} sensors, they are arranged in three layers above and below the storage chamber.
%
(d)~Central vertical cut through (b) with dimensions in mm.
%
The vertical distance of the Cs{} sensors from the center of the storage chamber is +127.9\,mm, -128.5\,mm, or -203.5\,mm, the 13 closest magnetometers thus being a factor of 2.8 closer to the center of the precession chamber in comparison to the $^{87}$Rb magnetometers in the earlier \SRI{} experiment
\cite{Pen92}.
}
\label{fig:CsMa}
\end{figure}
\subsubsection{Principle of the Cs{} magnetometer}
\label{sec:Principleofthemagnetometer}
The main components of a Cs{} magnetometer are shown in Fig.~\ref{fig:sensor}.
\begin{figure}
\includegraphics[width=\linewidth]{sensorDAQ.pdf}
\caption[sensor]{
Schematic of the Cs{} magnetometers' main components and electronics as described in the text.}
\label{fig:sensor}
\end{figure}
The actual field-sensing element of each sensor is an evacuated glass cell, with an inner diameter of $\sim$28~\mm{}, whose inner wall is coated by a thin layer of paraffin \cite{Cas}.
The Cs density in the cell is determined by the saturated vapor pressure of a metallic droplet of \ensuremath{{}^{133}\mathrm{Cs}}{} at room temperature.
The droplet is contained in a sidearm connected to the main cell volume by a capillary.
The cesium atoms are spin-polarized by optical pumping using circularly polarized laser light whose frequency is resonant with the $F_g{=}4 \rightarrow F_e{=}3$ hyperfine component of the D$_1$ transition.
The laser beam traverses the cell at an angle of 45$^\circ$ with respect to the magnetic field $\vec{B}$.
The light from a frequency-stabilized laser is delivered to the sensor by a \SI{400}{\micro m} multimode fiber.
Before entering the cell, the light is collimated by a lens and circularly polarized by a linear polarizer and a quarter-wave plate (Fig.~\ref{fig:sensor}).
The laser beam serves both to polarize the Cs{} atoms and to read out the precessing atomic spin polarization (optically detected magnetic resonance).
When exposed to the magnetic field $\vec{B}$, the magnetic moment associated with the spin polarization precesses at the Larmor frequency
\begin{equation}
f_\mathrm{L} = \frac{\gamma_4}{2\pi} \|\vec{B}\|, \label{eq:Larmor}
\end{equation}
where $\gamma_4{} \simeq 2\pi \times 3.50$~\mbox{Hz}/\mbox{nT}~ is the gyromagnetic ratio of the $F{=}4$ hyperfine level of the cesium ground state.
\hl{The spin precession can be either continuously driven by an oscillating magnetic field $\vec{B}_1$ or initiated by a magnetic resonance ($\vec{B}_1$) pulse (see Sec.~\ref{sec:PLLaccuracy}).
In both cases the $\vec{B}_1$ field is generated by a Helmholtz-like pair of coils surrounding the Cs cell.
The coils were optimized to provide a homogeneous magnetic field over the volume of the Cs cell and are historically named RF-coils, a convention we adopt here despite the low oscillation frequency of \SI{3.5}{kHz}.}
The precession of Cs{} atoms imposes an oscillation on the transmission of the laser light, which is detected on the photodiode.
All 16 magnetometers were operated with light delivered by a single high-stability diode laser (Toptica, DL\,pro\,100) that was mounted in a dedicated housing in the temperature-stabilized room of the nEDM experiment.
The laser frequency was actively locked to the $F_g{=}4\rightarrow F_e{=}3$ hyperfine component of the Cs{} D$_1$ ($6\textrm{S}_{1/2} \rightarrow 6\textrm{P}_{1/2}$) transition at $\sim$895~\ensuremath{\mathrm{nm}}{} using Doppler-free saturation absorption spectroscopy (Toptica, CoSy), which allowed us to keep the laser continuously in frequency lock for weeks.
The beam from this laser was divided into multiple beams by a splitter system which was directly attached to the main vacuum chamber of the nEDM apparatus.
The original beam was carried by a single \SI{400}{\micro m} multimode fiber to a beam homogenizer (SUSS MicroOptics) producing a flat-topped intensity profile of square cross section.
The homogenized beam was then imaged onto a bundle of 36 fibers with \SI{400}{\micro m} core diameter whose flat-polished input ends are arranged into a square brass-epoxy holder with an aperture of 3$\times$3\,\mm{}$^2$.
Five of these fibers, including the four located at the corners of the bundle, were used for monitoring purposes outside of the vacuum chamber.
The remaining 31 fibers ($\sim$4.5\,m long) were brought into the vacuum chamber, each with its own individual vacuum feedthrough.
\hl{In order to achieve stable transmission efficiencies, the fibers ran uninterrupted through modified Swagelok feedthroughs which provided the vacuum sealing.}
Each fiber was terminated by a ferrule made of carbon-reinforced plastic that was inserted into the machined receptacle in the Cs{} sensor.
On average, each output fiber carried $\sim$1.4$\%$ of the input fiber's power.
\subsubsection{HV-compatible sensor modules}
\label{sec:HVcompatiblesensormodules}
\hl{The magnetometers mounted on the HV-electrode had to be fully opto-coupled.}
The light transmitted by the cell was not detected by a photodiode mounted next to the cell, but rather coupled into a \SI{3}{m} long \SI{800}{\micro m} diameter multimode fiber carrying the light to a photodiode mounted on the grounded vacuum tank.
Tefzel$^\circledR$ (dielectric constant 2.6) was selected as a fiber coating in order to allow good electrical isolation of the sensor.
\hl{
The RF signal driving the magnetic resonance was transmitted to the sensor by light generated by an IR LED (Lite-On Technology, model HSDL 4230) coupled to a \SI{5}{m} long \SI{800}{\micro m} multimode fiber.
The plastic of the LED's casing was partly removed (down to a distance of $\sim$1--2\,\mm{} from the semiconductor die) and polished to optimize coupling into the fiber.
The light power had a constant and a sinusoidally modulated component which were converted to a current using a Si photodiode (Hamamatsu, model S6775-01) mounted near the sensor.
The photo current was sufficient to drive the RF-coils after it passed through a non-magnetic 470\,nF capacitor (WIMA 0.47 63/40) to suppress the DC component.
}
\hl{
All sensors were operated with RF-field amplitudes approximately equal to the linewidth converted to magnetic field units, $< \SI{4}{nT}$.
}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{sensorHV.pdf}
\caption[CsHV]{HV-compatible magnetometer.
The three fibers connected to the sensor provide the laser light (1), the RF signal (2), and collect the transmitted light (3).
The Cs{} cell (of which only the sidearm, 5, is visible) is placed in a poly-carbonate housing and surrounded by the RF coils printed on the (green) PCB boards (6).
The photodiode and the capacitor forming the opto-coupler that drives the RF coils are mounted on a plastic holder (4).}
\label{fig:CsHV}
\end{figure}
\subsection{Phase-feedback mode of operation}
\label{sec:pfmode}
\subsubsection{Description}
\label{sec:PLLdescription}
The magnetometer is operated in the $M_x$ configuration \cite{Gro06, Ale, Weis16} in which the precession of the Cs{} atoms' magnetization around $\vec{B}$ is continuously driven by a weak oscillating magnetic field $\vec{B}_\mathrm{RF}(t)=\vec{B}_{1}\sin(2\pi f_{\textrm{RF}} t)$.
\hl{
The $\vec{B}_{1}$ field is parallel to the wave vector of the laser beam, $\vec{B}_{1} \parallel \vec{k}$, in order to avoid heading errors.
In this geometry, the shape and center of the magnetic resonance do not depend on the orientation of $\vec{B}$ with respect to $\vec{k}$ \cite{Weis16, Col}.
}
The light absorption by the Cs vapor depends on the projection of the atoms' magnetization onto $\vec{k}$.
\hl{The continuous magnetic resonance leads to a steady state magnetization which precesses at the driving frequency $f_{\textrm{RF}}$ and thus the transmitted light power has a component $\delta P(t)$ modulated at that frequency}
\begin{equation}
\delta P(t) = P_\mathrm{R}\sin(2\pi f_\mathrm{RF} \, t+\phi).
\end{equation}
Here $P_\mathrm{R}$ is the modulation amplitude which depends on the light power, the degree of polarization, and the atomic absorption cross section.
The phase $\phi$ is the phase difference with respect to the driving field $\vec{B}_\mathrm{RF}$.
It has a characteristic resonant behavior \cite{Weis16}
\begin{equation}
\phi_E =\phi - \phi_0 = - \arctan\left(\frac{f_\mathrm{RF} - f_\mathrm{L}}{\Gamma/2\pi}\right).
\label{eq:idealphase}
\end{equation}
\hl{Here $\Gamma = 1/T_1 = 1/T_2$ is the Cs spin relaxation rate, which is assumed to be isotropic.}
In absence of any additional phase shifts in the electronic circuits, the reference phase $\phi_0$ has the values of $\pm \pi/2$ depending on the direction of the magnetic field to be measured.
The representation of the phase in Eq.~\eqref{eq:idealphase} is chosen such that the variable $\phi_E$ has a zero-crossing in the center of the resonance at $f_\mathrm{RF} = f_\mathrm{L} = \gamma_4 B/2\pi$.
Close to that point $\phi_E$ is proportional to the difference between the driving frequency and the Larmor frequency.
Its slope with respect to a change of the magnetic field magnitude can thus be expressed as
\begin{align}
\left.\frac{d \phi_E} {d B}\right|_{f_\mathrm{RF} = f_\mathrm{L}}
&=\left.- \frac{d} {d B} \arctan\left(\frac{f_\mathrm{RF} - \gamma_4 B/2\pi }{\Gamma/2\pi}\right) \right|_{f_\mathrm{RF} = f_\mathrm{L}} \nonumber\\
&= \frac{\gamma_4}{\Gamma}.
\label{eq:phaseslope}
\end{align}
The phase $\phi_E$ is determined by a digital signal processing (DSP) system that generates the driving frequency $f_\mathrm{RF}$ via a digital-to-analog converter and samples the photocurrent of the photodiode via an analog-to-digital converter\@.
For this, the photocurrent which is proportional to the light power transmitted through the Cs cell is converted to a voltage by a transimpedance amplifier, prior to digitization.
The sampled voltage signal is then demodulated by a two-phase lock-in algorithm \cite{Weis16} that determines the amplitude of the oscillation and its phase.
The reference phase $\phi_0$ can be programmed via the digital interface of the DSP system which is also used to periodically read out the determined amplitude and phase values.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{fitPhase_2.pdf}
\caption[The phase calibration curve.]{Typical calibration curve of a Cs{} sensor shown with the fit using Eq.~\eqref{eq:idealphase}.
The resulting fit parameters are: $\phi_0{=}$3.6032(8)~rad, $f_\mathrm{L}{=}$3619.980(8)~\mbox{Hz}, and $\Gamma/2\pi{=}$5.358(7)~\mbox{Hz}.}
\label{fig:phase}
\end{figure}
Figure \ref{fig:phase} shows a measurement of $\phi$ as a function of $f_\mathrm{RF}$.
Such scans are used to determine the reference phase $\phi_0$ which is necessary to compute the shifted phase $\phi_E$.
Phase shifts in the electronic circuits that are used in the generation of $f_\mathrm{RF}$ and the sampling of the photocurrent can cause changes in the reference phase $\phi_0$.
The distinctive $\arctan$ line shape shown in Fig.~\ref{fig:phase} permits the determination of $\phi_0$ independently of external references.
This procedure thus constitutes an internal calibration and is performed periodically.
In normal operation $f_\mathrm{RF}$ is not scanned.
It is rather controlled by a servo algorithm that uses $\phi_E$ as its error signal.
If $\phi_0$ was correctly determined, keeping $\phi_E=0$ is equivalent to ensuring that $f_\mathrm{RF} = f_\mathrm{L}$.
As a consequence, $f_\mathrm{RF}$, \hl{which is digitally synthesized in the DSP system,} becomes a measure for the magnetic field which is periodically sampled directly in the DSP system.
This mode of operation using a feedback loop is similar to standard phase-locked-loop schemes.
Here, however, a frequency offset does not result in a linearly changing error signal.
Thus, in contrast to standard phase-locked loop systems, the error signal $\phi_E$ must not only be kept constant but also equal to zero in order to match $f_\mathrm{RF}$ and $f_\mathrm{L}$.
This means that an offset $\Delta \phi_E$ in the determination of $\phi_E$ translates to an offset in the measured magnetic field according to Eq.~\eqref{eq:phaseslope}
\begin{equation}
\Delta B = \left(\frac{d \phi_E} {d B}\right)^{-1} \Delta \phi_E = \frac{\Gamma}{\gamma_4}\, \Delta \phi_E .
\label{eq:Berror}
\end{equation}
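To make the connection between a reference-phase drift and a field-reading offset concrete, the sketch below fits the $\arctan$ model of Eq.~\eqref{eq:idealphase} to a synthetic calibration scan (generated with the parameters of the example in Fig.~\ref{fig:phase}) and then converts an assumed \SI{1.5}{mrad} drift of $\phi_0$ into a field offset via Eq.~\eqref{eq:Berror}; all numerical inputs are illustrative.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# arctan phase model: phi = phi0 - arctan((f_rf - f_L)/(Gamma/2pi))
def phase(f_rf, f_L, gamma_hz, phi0):
    return phi0 - np.arctan((f_rf - f_L) / gamma_hz)

# Fit a (here synthetic, noise-free) calibration scan
f_rf = np.linspace(3600.0, 3640.0, 41)
data = phase(f_rf, 3619.98, 5.36, 3.603)    # parameters of the example fit
(f_L, gamma_hz, phi0), _ = curve_fit(phase, f_rf, data, p0=[3620, 5, 3.6])

# Convert an assumed 1.5 mrad drift of phi0 into a field offset
gamma4 = 3.50                               # Hz/nT, gyromagnetic ratio / 2pi
dB = gamma_hz / gamma4 * 1.5e-3             # nT
print(f"Delta B = {dB*1e3:.1f} pT")         # ~2 pT
\end{verbatim}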
\subsubsection{Magnetometric sensitivity}
\label{sec:PLLsensitivity}
The statistical uncertainty of the magnetic field measurement can be computed according to the propagation of noise from the sampled photocurrent $I^\mathrm{PD}$.
The phase noise spectral density is given by
\begin{equation}
\rho{}(\phi) = \frac{\rho{}(I^\mathrm{PD})}{I^\mathrm{PD}_\mathrm{RF}} ,
\label{eq:phasesigma}
\end{equation}
where $I^\mathrm{PD}_\mathrm{RF}$ is the amplitude of the oscillation in the photocurrent at the applied RF frequency.
Using Eq.~\eqref{eq:Berror} we find
\begin{equation}
\rho(B) =
\frac{ \Gamma \rho(\phi)}{\gamma_4}
=\frac{\Gamma }{\gamma_4}\frac{\rho(I^\mathrm{PD})}{I^{\mathrm{PD}}_\mathrm{RF}} .
\label{eq:sens2}
\end{equation}
In the shot noise limit, $\rho(I^\mathrm{PD}) = \sqrt{2\,e\,I^{\mathrm{PD}}_\mathrm{DC}}$ with $I^{\mathrm{PD}}_\mathrm{DC}$ the DC component of the photocurrent, the magnetometric sensitivity for all sensors used was better than $\rho(B) = \SI{50}{fT/\sqrt{Hz}}$ \hl{after the light power and $B_\mathrm{RF}$ amplitude were individually optimized for each sensor.}
The shot noise limit was used as the figure of merit for this optimization since it can be computed independently of the external magnetic noise which depended significantly on the changing experimental environment.
During nEDM measurements the typical statistical sensitivity of the Cs magnetometers was $\rho(B) = \SI{750}{fT/\sqrt{Hz}}$.
\hl{The increase in statistical noise was due to the Johnson noise generated by the aluminum shielding cans} (thickness 2 mm) that had to be installed around each sensor to suppress interference from the $B_\mathrm{RF}$ fields of neighboring sensors.
Even with the cans installed, a small amount of beating was observed due to the remaining interference.
This is the reason why some magnetometers show a pronounced structure in the Allan deviations shown in Fig.~\ref{fig:ASDBz}.
The resulting average sensitivity (including the beating effect) ranges from 0.75 to 8\,pT/$\sqrt{\mathrm{Hz}}$.
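For orientation, the shot-noise limit of Eq.~\eqref{eq:sens2} can be evaluated with assumed photocurrents and a typical linewidth; the values of $I^\mathrm{PD}_\mathrm{DC}$, $I^\mathrm{PD}_\mathrm{RF}$ and $\Gamma$ below are illustrative, since the actual operating points were optimized individually for each sensor.
\begin{verbatim}
# Minimal sketch: shot-noise-limited sensitivity with assumed values.
e = 1.602176634e-19          # C

I_dc     = 1.0e-6            # A, assumed DC photocurrent
I_rf     = 0.05e-6           # A, assumed oscillation amplitude (5% of DC)
gamma_hz = 5.0               # Hz, assumed linewidth Gamma/2pi
gamma4   = 3.50e9            # Hz/T, gyromagnetic ratio / 2pi

rho_I = (2 * e * I_dc)**0.5                  # A/sqrt(Hz), shot-noise density
rho_B = gamma_hz / gamma4 * rho_I / I_rf     # T/sqrt(Hz)
print(f"rho(B) ~ {rho_B*1e15:.0f} fT/sqrt(Hz)")   # ~16 fT/sqrt(Hz) here
\end{verbatim}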
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Paper_Bz_Allan.pdf}
\caption{Allan deviations of the magnetic field magnitude measured by 15 of the Cs{} magnetometers. The straight lines indicate the $\tau^{-1/2}$ behavior of pure white noise. The oscillations that are visible for some sensors are caused by the RF field of a neighboring sensor, as explained in the text. }
\label{fig:ASDBz}
\end{figure}
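The curves in Fig.~\ref{fig:ASDBz} are standard Allan deviations of the magnetometer readings. A minimal sketch of the non-overlapping Allan deviation, assuming a uniformly sampled field record, is given below; the white-noise test signal is only meant to reproduce the $\tau^{-1/2}$ reference lines.
\begin{verbatim}
import numpy as np

def allan_deviation(b, dt, m):
    """Non-overlapping Allan deviation of the series b for an
    averaging time tau = m*dt (m samples per bin)."""
    n_bins = len(b) // m
    means = b[:n_bins * m].reshape(n_bins, m).mean(axis=1)   # bin averages
    return np.sqrt(0.5 * np.mean(np.diff(means)**2)), m * dt

# Usage sketch: white noise of 1 pT/sqrt(Hz) sampled at 10 Hz
dt = 0.1
b = 1e-12 * np.sqrt(1.0 / (2 * dt)) * np.random.randn(100000)
for m in (1, 10, 100, 1000):
    adev, tau = allan_deviation(b, dt, m)
    print(f"tau = {tau:7.1f} s   sigma_A = {adev*1e12:.3f} pT")
\end{verbatim}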
\subsubsection{Accuracy}
\label{sec:PLLaccuracy}
One can distinguish two types of effects that influence the accuracy of the Cs{} magnetometer.
The first relates to inaccuracies in determining the Larmor precession frequency $f_{\mathrm{L}}$.
The second category includes all effects that change $f_\mathrm{L}$ itself, modifying the relation between the Larmor precession frequency and the magnetic field as given by Eq.~\eqref{eq:Larmor}.
Below follows a short discussion of both types, concluding with recommendations on how to keep the offsets as stable as possible, allowing for high relative accuracy of the magnetic field reading.
As the extraction of the Larmor precession frequency relies heavily on the reference phase $\phi_0$, any drift of $\phi_0$ without recalibration will worsen the accuracy of the sensor.
Such drifts can occur due to temperature-related effects in the electronics or when, for example, the laser intensity, and thus the capacitance of the photodiode, changes \cite{JariThesis}.
In order to quantify such drifts in the nEDM Cs{} magnetometer array, we have performed calibrations before and after each nEDM run, typically 1 to 4 days apart.
Figure \ref{fig:phiDiff} shows a histogram of the extracted phase change $\Delta\phi_0 = \phi_{0,\textrm{after}} - \phi_{0,\textrm{before}}$ for one of the sixteen sensors.
The typical change of reference phase is on the order of 1 to 2\,mrad.
An uncorrected drift $\Delta\phi_0$ of the on-resonance phase during phase-feedback operation results in an offset in the magnetic field measurement according to Eq.~(\ref{eq:Berror}).
Figure \ref{fig:phaseBdiff} shows the results of converting the phase differences in Fig.~\ref{fig:phiDiff} to offsets in the magnetic field reading.
The standard deviation of the magnetic field reading offset depends on the sensor properties and ranges from 1 to at most 7\,pT.
This is of the same order of magnitude as the inherent uncertainty provided by the calibration procedure itself, which is about 1\,pT.
\begin{figure}
\centering
\subfigure[]
{\includegraphics[width=0.48\linewidth]{TSM105_Phase_Hist_NoOutliers_f.pdf}
\label{fig:phiDiff}}
\hfill
\subfigure[]
{\includegraphics[width=0.48\linewidth]{TSM105_Bfield_Hist_NoOutliers_f.pdf}
\label{fig:phaseBdiff}}
\caption{(a)~Histogram of the difference in extracted reference phase $\phi_0$ between two consecutive calibrations.
%
The typical time between the two calibrations is 1 to 4 days.
%
(b)~The corresponding offset in the magnetic field reading, as calculated by Eq.~\eqref{eq:Berror}.
%
The distributions in (a) and (b) are not identical, as the width $\Gamma$ in the conversion factor depends on the light intensity, which varies from sensor to sensor.
%
\hl{Over days of data taking the values of $\Gamma$ were typically stable to better than 5\%. During the whole two-year data-taking period, all sensors had values of $\Gamma/2\pi$ between \SI{4}{Hz} and \SI{17}{Hz}.}
}
\label{fig:phasedrift}
\end{figure}
Regarding the second category of inaccuracies, there are several effects that modify the Larmor precession frequency, or to be more precise, the energy separation of adjacent Zeeman sublevels of the $F{=}4$ ground state of the \ensuremath{{}^{133}\mathrm{Cs}}{} atoms.
The resonance frequency that is measured by the Cs{} magnetometer in phase-feedback mode is a weighted average of the energy differences between the $m$ and $m+1$ magnetic sublevels.
In a system without laser interaction, the energy levels are the eigenvalues of the Cs{} ground state Hamiltonian containing the hyperfine interaction $A\,\vec{J}\cdot\vec{I}$ between the electronic spin $\vec{J}$ and the nuclear spin $\vec{I}$, and the interaction of the magnetic moment with the applied magnetic field $\vec{\mu}\cdot\vec{B}$.
Applying perturbation theory to first order in $\mu B/A$ (for $\mu B$ small compared to the scale given by the hyperfine structure constant $A$) then yields the linear Zeeman level splitting.
The exact solution for this $J=1/2$ system is given by the Breit-Rabi equation \cite{BreitRabi}.
For a magnetic field of \SI{1}{ \micro T}, the nonlinear terms in the Zeeman effect result in a maximum deviation equivalent to 3\,pT for neighboring magnetic sublevels, giving an upper limit on the inaccuracy due to nonlinear Breit-Rabi splitting.
A second effect of this category has to do with the use of a nonrotating driving field $\vec{B}_{\textrm{RF}} = \vec{B}_1 \sin(2\pi f_\mathrm{RF} t)$.
The nonrotating field produces a Bloch-Siegert shift \cite{BlochSiegert,Sudyka}, which shifts the resonance by
\begin{equation}
\frac{\left(B_{1}\sin\theta_B\right)^2}{16B_0}= \frac{B_1^2}{32B_0}\approx 0.5\,\textrm{pT} ,
\label{eq:BlochSiegert}
\end{equation}
as the RF field of \SI{4}{nT} makes an angle of $\theta_B=\pi/4$ with the main $B_0$ field of \SI{1}{\micro T}.
Another interaction that modifies the energy of the magnetic sublevels is the AC Stark shift induced by the coherent laser light, otherwise known as the virtual light shift \cite{Happer}.
It entails an interaction $\vec{d}\cdot\vec{E}$ between the electric dipole moment operator $\vec{d}$ of the Cs{} atoms and the oscillating electric field $\vec{E}$ of the laser light.
Apart from modifying the hyperfine splitting and the common energy of all levels, it also produces a linear splitting and a quadratic splitting of the magnetic sublevels.
The former is called a vector light shift, the latter a tensor light shift.
The vector light shift can be interpreted as an effective magnetic field that is oriented along the direction of the laser beam for $\sigma^+$ light.
As the laser light propagates at an angle of 45$^\circ$ with respect to $\vec{B}$, this effective magnetic field will add or subtract to the magnitude of the main magnetic field, depending on the direction of $\vec{B}$.
Both the vector and the tensor light shift in the $F_g{=}4$ ground state depend linearly on the intensity of the light and have a dispersive line shape relative to the laser detuning around each hyperfine transition.
Although the dispersive function vanishes when the laser frequency is resonant with the respective transition $F_g{=}4 \rightarrow F_e{=}3$, the light shift itself does not, as the dispersive function of the neighboring transition $F_g{=}4 \rightarrow F_e{=}4$ is quite broad and nonzero at that laser frequency.
In order to determine the size of this effect in the nEDM experiment, dedicated measurements were done by changing the intensity of the light in a controlled way and scanning the detuning of the laser around the $F_g{=}4 \rightarrow F_e{=}3$ transition.
To avoid the inaccuracy issues of the first type, the magnetometers were run in the free spin precession (FSP) mode \cite{Gru15,Afach15}.
\hl{They could be operated in FSP mode without changing the sensor hardware or the laser power.
The waveform of the signal driving the RF-coils was changed to a burst which alternates between RF-pulses and periods of zero RF amplitude.
During the periods without RF field the ensemble spin precesses freely while the constant laser interaction pumps it slowly to an equilibrium state parallel to $\vec{B}$.
The RF-pulses were tuned to flip the accumulated spin polarization by approximately 90$^\circ$ to the plane perpendicular to $\vec{B}$.
During the next free precession period of about \SI{50}{ms} the laser, which is oriented at 45$^\circ$ with respect to $\vec{B}$,
probes the spin component parallel to $\vec{k}$, which contains both the precessing signal of the spin component perpendicular to $\vec{B}$ and the growing spin polarization created along $\vec{B}$ due to optical pumping.
}
The advantage of operating the magnetometer in the FSP mode is that one directly detects the Larmor spin precession frequency $f_{\textrm{L}}$ of the Cs{} atoms.
These FSP studies \cite{EliseThesis} have shown that the sensors display shifts ranging from $\pm10$\,pT to $\pm50$\,pT at their typical light intensities, which are correlated to the light intensity, depend on the laser detuning and indeed change sign as the magnetic field is reversed.
The FSP mode of operation was only used to test the Cs magnetometers since the pulse repetition frequency is close to the Larmor frequency of the \ensuremath{{}^{199}\mathrm{Hg}}{} atoms.
Oscillating magnetic fields with frequency components close to the resonance frequency can cause changes in the Larmor precession of the \ensuremath{{}^{199}\mathrm{Hg}}{} atoms via the Ramsey-Bloch-Siegert shift which was not acceptable during nEDM data taking.
Recent implementations of the FSP mode avoid interference via the RF field by using all-optical designs \cite{Gru15}.
A fourth effect that modifies the Hamiltonian of the atom-light system is due to spin-exchange collisions between the \ensuremath{{}^{133}\mathrm{Cs}}{} atoms \cite{Grossetete,SpinExchRelax}.
The frequency shift operator contains a term proportional to $\vec{S}\cdot \langle \vec{S}\rangle$, where $\vec{S}$ is the electron spin of the Cs{} atoms.
This effect scales with the number density of the alkali atoms \cite{SpinExchange} and is therefore exponentially dependent on temperature.
The exact implications for our magnetometer are not yet fully understood theoretically, but preliminary measurements comparing the precession frequency in different parts of the FSP signal (and thus at different directions of $\langle \vec{S}\rangle$) seem to indicate that the effect is smaller than 30\,pT for all sixteen sensors \cite{EliseThesis}.
An overview of the effects discussed above is given in Table~\ref{tab:Systematics}. Combining the values of the different effects, the absolute accuracy of the sensors lies in the range of 45 to 90\,pT.
For the purpose of measuring drifts of the vertical magnetic field gradient $G_{\textrm{grav}}$, the absolute accuracy of the magnetometers is not crucial, but it is important that the relative reading offsets of all sensors remain stable in time.
It is therefore recommended to keep the light intensity sufficiently stable to avoid drifts in the reference phase and to keep the light shift in check.
Additionally, large changes in temperature should be avoided, both for the stability of the electronics and the spin exchange effect.
The achieved stability in the nEDM experiment was significantly better than the requirements for time scales up to \SI{10000}{s} as discussed in section \ref{sec:GzExtraction}.
\begin{table}
\caption{Overview of effects that relate to inaccuracies in determining the Larmor precession frequency $f_{\mathrm{L}}$ (line 1), or that change $f_\mathrm{L}$ itself (lines 2 to 5) thereby modifying the relation between the Larmor precession frequency and the magnetic field as given in Eq.~\eqref{eq:Larmor}.}
\centerline{
\begin{tabular}{l c}
\hline
\hline
Effect & size (pT) \\
\hline
Reference phase drifts & 1 to 7 \\
Quadratic Zeeman splitting & 3 \\
Bloch-Siegert shift & 0.5 \\
Vector light shift & 10 to 50 \\
Spin exchange & $<$30 \\
\hline
\hline
\end{tabular}
}
\label{tab:Systematics}
\end{table}
\subsection{Variometer method}
\label{sec:Variometer}
The array of Cs{} magnetometers can be used to obtain the vector components of the magnetic field by applying the variometer principle \cite{Ver06}.
The implementation of this method will be explained in Section \ref{sec:VarioWp}, its sensitivity and accuracy will be discussed in Sections \ref{sec:VarioSensitivity} and \ref{sec:VarioAccuracy} respectively.
\subsubsection{Working principle}
\label{sec:VarioWp}
The variometer method consists of applying a well known magnetic field $\vec{B}_{\textrm{T}}$ transverse to the main magnetic field of \SI{1}{\micro T}.
Using the Cs{} magnetometers in phase-feedback mode to measure the magnitude of the total magnetic field, the additional transverse magnetic field changes the magnitude to:
\begin{equation}
\|\vec{B}_{0}+ \vec{B}_{\textrm{T}} I \|^2 =\|\vec{B}_{0}\|^2+ 2 \vec{B}_{0}\cdot\vec{B}_{\textrm{T}} I+ \|\vec{B}_{\textrm{T}}\|^2 I^2,
\label{eq:Parabola}
\end{equation}
where $\vec{B}_{0}$ represents the main magnetic field, $I$ the current applied to the transverse coil, and $\vec{B}_{\textrm{T}}$ the field produced by this transverse coil at the position of the Cs{} magnetometer when applying one unit of current.
Probing the field magnitude with a set of different currents, one can extract $\|\vec{B}_{0}\|$, $\|\vec{B}_{\textrm{T}}\|$ and $\vec{B}_{0}\cdot\vec{B}_{\textrm{T}}$ from the quadratic behavior of $\|\vec{B}_{0}+\vec{B}_{\textrm{T}}I \|^2$ as a function of the current.
The scalar product $\vec{B}_{0}\cdot\vec{B}_{\textrm{T}}$ contains the angle between the applied transverse magnetic field and the main field $\vec{B}_0$.
Projecting on two known transverse magnetic field directions, one can reconstruct the direction of $\vec{B}_0$.
\begin{figure}
\centerline{\includegraphics[width=\linewidth]{VarioExample.pdf}}
\caption{
On the left: the response of one of the Cs magnetometers to a current pattern of 1, 0.5, 0, -0.5 and -1\,mA in steps of \SI{5}{s}, first applied to a coil in the $x$-direction (0--25\,s indicated in red), then to a coil in the $y$-direction (25--50\,s indicated in blue). The main field of \SI{1.051}{\micro T} is maintained along the $z$-direction. A current of 1\,mA corresponds to an applied field of about \SI{50}{nT}. On the right: the corresponding parabolic behavior of the magnitude as a function of the applied current to a coil in the $x$-direction (red diamonds) and a coil in the $y$-direction (blue crosses).
}
\label{fig:CsMResponse}
\end{figure}
An example of the readout of a Cs{} sensor during the application of the variometer method is shown in Fig.~\ref{fig:CsMResponse}.
Here, a sequence of five equally spaced currents is applied for five seconds each, first to a coil in the $x$-direction, then to a coil in the $y$-direction, whereas the main magnetic field is maintained in the $z$-direction.
The currents are applied with an Agilent 33500B function generator, using a resistor of 10\,k$\Omega$ in series with the transverse coils to convert the voltage generated by the function generator to a proportional current.
In order to avoid magnetization of the $\mu$-metal shield, the maximal current $I$ is chosen such that the transverse field is about a factor of 20 smaller than the main magnetic field of \SI{1}{\micro T}.
This results in a change of the magnetic field magnitude by typically 5\,nT.
As the Cs{} magnetometer is run in the phase-feedback mode, the reaction of the sensor to this sudden change of the magnetic field is not instantaneous, but has a time constant of a few 100\,ms, depending on the parameters of the stabilizing PID algorithm.
Consequently, the ramping parts of the signal have to be cut when averaging the magnitude over one current setting, effectively increasing the measurement uncertainty calculated in Section \ref{sec:VarioSensitivity}.
In order to extract the vector components of the main magnetic field, knowledge of the direction of the applied transverse field is crucial.
The coils that are used to generate $\vec{B}_{\textrm{T}}$ are normally used for applying the UCN and \ensuremath{{}^{199}\mathrm{Hg}}{} $\pi/2$ spin-flip pulses in the nEDM experiment.
The magnetic fields produced by these coils were measured in 2014 with a nonmagnetic mapping device \hl{(the topic of the third episode in this trilogy)} consisting of a three-axis fluxgate magnetometer mounted on a trolley.
The trolley could move along a horizontal arm, which itself could rotate along a vertical axis and move up and down along the same vertical axis.
Scanning the volume in discrete steps, the magnetic field map can be reconstructed from the corresponding fluxgate readings \cite{GWmap}.
The resulting accuracy of these field maps at the specific Cs{} magnetometer positions is about 1\,nT on each magnetic field component for a 50\,nT total field produced by the coil.
This 2\% inaccuracy of the field maps translates into a similar inaccuracy of all three vector components of $\vec{B}_0$ if the extraction is based purely on the two transverse projections.
For this reason, we additionally use the fact that the magnetic field is predominantly homogeneous and assume that the $B_{0z}$ component of the main field is closely approximated by the field magnitude $B_{0z}=\pm\|\vec{B}_0\|$ (true at the tens-of-pT level), with the sign being determined by the set $B_0$ direction.
Using this approximation, one can extract $B_{0x}$ and $B_{0y}$ by solving the following set of equations:
\begin{equation}
\begin{bmatrix}
\vec{B}_{0}\cdot\vec{B}_1 - B_{0} B_{1z} \\
\vec{B}_{0}\cdot\vec{B}_2 - B_{0} B_{2z}
\end{bmatrix}
=
\begin{bmatrix}
B_{1x} & B_{1y}\\
B_{2x} & B_{2y}
\end{bmatrix}
\begin{bmatrix}
B_{0x}\\
B_{0y}
\end{bmatrix}\label{eq:BtExtraction} ,
\end{equation}
where $\vec{B}_1$ and $\vec{B}_2$ are the two applied transverse fields.
To take into account slight differences in applied currents during the maps and the variometer measurement, the $\vec{B}_1$ and $\vec{B}_2$ maps are scaled using the $\|\vec{B}_{\textrm{T}}\|^2$ parameter from the quadratic fit in Eq.~\eqref{eq:Parabola}.
Matrix inversion of Eq.~\eqref{eq:BtExtraction} yields
\begin{equation}
\begin{split}
B_{0x} &= \frac{B_{2y}\left(\vec{B}_{0}\cdot\vec{B}_1 - B_{0} B_{1z}\right) - B_{1y} \left(\vec{B}_{0}\cdot\vec{B}_2 - B_{0} B_{2z}\right) }{B_{1x} B_{2y} - B_{2x} B_{1y}}\\
&\approx \frac{\vec{B}_{0}\cdot\vec{B}_1 - B_{0} B_{1z} }{B_{1x}}
\end{split}
\label{eq:B0xSolution}
\end{equation}
and
\begin{equation}
\begin{split}
B_{0y} &= \frac{B_{1x} \left(\vec{B}_{0}\cdot\vec{B}_2 - B_{0} B_{2z}\right)- B_{2x}\left(\vec{B}_{0}\cdot\vec{B}_1 - B_{0} B_{1z}\right) }{B_{1x} B_{2y} - B_{2x} B_{1y}}\\
&\approx \frac{\vec{B}_{0}\cdot\vec{B}_2 - B_{0} B_{2z} }{B_{2x}}
\end{split} ,
\label{eq:B0ySolution}
\end{equation}
where the second lines are obtained by assuming that $B_{1y}$ and $B_{2x}$ are negligible (meaning $\vec{B}_1$ is oriented predominantly along $x$ and $\vec{B}_2$ predominantly along $y$).
It is worth noting here that the statistical uncertainties on $B_{0x}$ and $B_{0y}$ originate from the terms proportional to the scalar products, whereas the accuracy is determined by the terms proportional to $B_{0}B_{1z}$ and $B_{0} B_{2z}$.
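A minimal numerical sketch of this analysis chain is given below: for each transverse coil the squared field magnitude is fit as a parabola in the applied current, Eq.~\eqref{eq:Parabola}, and the resulting scalar products are converted to $B_{0x}$ and $B_{0y}$ with the simplified (second) lines of Eqs.~\eqref{eq:B0xSolution} and \eqref{eq:B0ySolution}. The measured magnitudes and coil-map components used here are placeholder values, not actual calibration data.
\begin{verbatim}
import numpy as np

def scalar_product(I, B_mag):
    """Fit |B0 + BT*I|^2 = c0 + c1*I + c2*I^2; return (B0.BT, |BT|^2, |B0|)."""
    c2, c1, c0 = np.polyfit(I, np.asarray(B_mag)**2, 2)
    return 0.5 * c1, c2, np.sqrt(c0)

# Placeholder data: currents in mA, measured field magnitudes in nT
I       = np.array([1.0, 0.5, 0.0, -0.5, -1.0])
B_mag_x = np.array([1052.1, 1051.5, 1051.0, 1050.8, 1050.9])  # x-coil pattern
B_mag_y = np.array([1050.2, 1050.5, 1051.0, 1051.7, 1052.6])  # y-coil pattern

B0_dot_B1, B1_sq, B0 = scalar_product(I, B_mag_x)
B0_dot_B2, B2_sq, _  = scalar_product(I, B_mag_y)

# Coil-map components per unit current at this sensor (placeholders, nT/mA)
B1x, B1z = 50.0, 2.0
B2y, B2z = 50.0, 2.0

# Simplified solution: B1 mainly along x, B2 mainly along y, B0z ~ +|B0|
B0x = (B0_dot_B1 - B0 * B1z) / B1x
B0y = (B0_dot_B2 - B0 * B2z) / B2y
print(f"B0 = ({B0x:.2f}, {B0y:.2f}, {B0:.1f}) nT")
\end{verbatim}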
\subsubsection{Magnetometric sensitivity }
\label{sec:VarioSensitivity}
Based on the second line of Eqs.~\eqref{eq:B0xSolution} and \eqref{eq:B0ySolution}, the statistical uncertainty of the variometer method is determined by
\begin{equation}
\sigma(B_{0\textrm{j}}) = \frac{\sigma(\vec{B}_{0}\cdot\vec{B}_{\textrm{T}}) - \sigma(B_{0}) B_{\textrm{T}z}}{B_{\textrm{T}j}} ,
\label{eq:VarioSens}
\end{equation}
with $j$ indicating the direction of the transverse coil producing $\vec{B}_{\textrm{T}}$.
The components of $\vec{B}_{\textrm{T}}$ do not introduce a statistical uncertainty, as they are fixed by the magnetic field maps.
The precision with which the scalar product between $\vec{B}_{0}$ and $\vec{B}_{\textrm{T}}$ can be determined depends on the amplitude and the duration of the currents applied to the transverse coils.
Let us consider the case of a sequence of $n$ steps of equal duration \hl{$t_s$} with applied currents $I_i$, assuming an anti-symmetric sequence of currents: $\sum_i I_i = 0$.
The uncertainty on the square of the magnetic field magnitude during one step is then given by \hl{$\sigma(B^2) = 2 B \sigma(B) = 2 B \rho(B)/\sqrt{2 t_s}$}, with $\rho(B)$ the noise density of the magnitude (Eq.~\eqref{eq:sens2}).
Using weighted linear least squares fitting, the uncertainty on the coefficient of the linear term in Eq.~\eqref{eq:Parabola} is given by
\begin{equation}
\sigma(2\vec{B}_{0}\cdot\vec{B}_{\textrm{T}})
= \frac{\sigma(B^2)}{\sqrt{\sum\limits_{i=1}^n I_i^2}}
= \frac{2 B\, \rho(B)}{ \sqrt{2 t_s}\, \sqrt{\sum\limits_{i=1}^n I_i^2} } .
\label{eq:ScalarProductUncertainty}
\end{equation}
As $ B_{\textrm{T}z}$ is typically not larger than a few nT, the uncertainty on the scalar product $\sigma(\vec{B}_{0}\cdot\vec{B}_{\textrm{T}})$ is about a factor of 1000 larger than $\sigma(B_{0}) B_{\textrm{T}z}$, hence one can neglect the second term in Eq.~\eqref{eq:VarioSens}.
The uncertainty during one measurement cycle is then
\begin{equation}
\sigma(B_{0\textrm{j}})
= \frac{ B}{B_{\textrm{T}j}}\frac{\rho(B)}{ \sqrt{2 t_s}}\frac{1}{\sqrt{\sum\limits_{i=1}^n I_i^2} } .
\label{eq:B0Tuncertainty}
\end{equation}
Taking into account that two transverse projections are needed, the duration of one full variometer measurement cycle is \hl{$2n\,t_s$,} hence giving the following noise density:
\begin{equation}
\rho(B_{0\textrm{j}})
=\sigma(B_{0\textrm{j}})\sqrt{4n\,t_s} = \rho(B)\frac{ B}{B_{\textrm{T}j}}\frac{\sqrt{2n}}{\sqrt{\sum\limits_{i=1}^n I_i^2} } .
\label{eq:B0TnoiseDensity}
\end{equation}
It is clear that in order to get the best sensitivity, one has to use the smallest number of steps $n=3$ ($I$, 0 and $-I$) per transverse field direction at the highest possible current $I$.
A typical variometer measurement cycle for the nEDM experiment then consists of applying a sequence of 3 steps of 6\,s per transverse direction with a maximum applied transverse field of 50\,nT.
Such a measurement typically results in an uncertainty of about 10\,pT, which is about a factor of 3 larger than expected from the calculated noise density.
The reason is that, at this level of precision, the stability of the current source is a limiting factor.
The uncertainty on the squared magnitude of the field should thus be modified to
\begin{equation}
\sigma(B^2) = \sqrt{\bigg(2B\sigma(B)\bigg)^2 + \left(\sigma(I)\frac{\partial B^2}{\partial I}\right)^2} ,
\end{equation}
such that the \si{\micro A} precision of the current source can be taken into account.
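Inserting representative numbers into Eq.~\eqref{eq:B0Tuncertainty}, i.e.\ a three-step pattern of 6\,s per step, a maximal transverse field of 50\,nT and the in-situ noise density quoted in Section~\ref{sec:PLLsensitivity}, reproduces the expected few-pT statistical uncertainty; the parameters below are representative rather than run-specific.
\begin{verbatim}
import numpy as np

# Minimal sketch: expected statistical uncertainty of one variometer cycle.
B     = 1.0e-6               # T, main field
B_Tj  = 50.0e-9              # T at maximal current, transverse coil component
rho_B = 750e-15              # T/sqrt(Hz), typical in-situ noise density
t_s   = 6.0                  # s per current step
I     = np.array([1.0, 0.0, -1.0])   # currents in units of the maximal current

sigma = (B / B_Tj) * rho_B / np.sqrt(2 * t_s) / np.sqrt(np.sum(I**2))
print(f"sigma(B0j) ~ {sigma*1e12:.1f} pT")   # ~3 pT (about 10 pT is observed)
\end{verbatim}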
\subsubsection{Stability and accuracy}
\label{sec:VarioAccuracy}
The accuracy of the variometer method is determined by the accuracy of the field maps of $\vec{B}_{1}$ and $\vec{B}_{2}$ at the positions of the Cs{} sensors.
These maps typically have an inaccuracy of 1\,nT in all three components.
Particularly the inaccuracy of the $z$-component propagates into a systematic error in $B_{0x}$ and $B_{0y}$ through the terms $ B_{0} B_{1z} /B_{1x}$ and $B_{0} B_{2z}/B_{2x}$ of Eqs.~\eqref{eq:B0xSolution} and \eqref{eq:B0ySolution} respectively.
Using typical values of \SI{1}{\micro T} for $B_{0z}$ and 50\,nT for $B_{1x}$ and $B_{2y}$, the estimated accuracy is \hl{20\,nT for $B_{0x}$ and $B_{0y}$.
However, $B_{0z}$ can be determined much more accurately as it is well approximated by the (directly-measured) magnitude $\|\vec{B}_0\|$.}
If the transverse components remain smaller than 10\,nT, as is typically the case in the nEDM experiment, the error made with this approximation is less than 100\,pT.
Fortunately, the inaccuracy due to the $B_{0z} B_{Tz}/B_{Tj}$ term cancels when comparing two variometer measurements of similar main magnetic fields.
Assuming the main magnetic field direction is not changed too much, the difference between two magnetic fields can be determined with a relative accuracy of a few percent, since the main contribution of $B_{0z}$ to the 20\,nT cancels out when taking a difference.
This of course does not hold when inverting the magnetic field direction.
As shown in Section \ref{sec:Applications}, these relative measurements are very useful for characterizing drifts of the main magnetic field and provide access to higher order magnetic field gradients that are inaccessible with the regular phase-feedback mode.
\section{Applications of the magnetometer array}
\label{sec:Applications}
The Cs{} magnetometer array can be used for a variety of applications.
The remainder of this paper will focus on two important ones directly beneficial to the nEDM experiment: (i)~the implementation of a strategy to correct for the drift of the vertical magnetic field gradient and (ii)~a procedure to optimize the homogeneity of the magnetic field.
Section \ref{sec:GzExtraction} describes how to extract the magnetic field gradients from the magnetometer array when vector or scalar magnetic field information is collected.
This procedure is then applied to the data taken during the nEDM experiment to characterize the typical gradient drifts and to estimate the accuracy of the gradient extraction that is solely based on the magnitude readings.
Section \ref{sec:T2opt} outlines the optimization procedure that significantly improved the sensitivity of our nEDM experiment during the 2015 and 2016 data taking campaigns.
\subsection{Spatial field distribution and gradient extraction}
\label{sec:GzExtraction}
\hl{
In order to extract the relevant magnetic field gradients, we model the spatial field distribution using a multipole expansion.
The multipoles were chosen such that the relevant gradients can be described by a small number of expansion coefficients.
Specifically, we use the multipole expansion as presented in \cite{Abe19a}, where the magnetic field at position $\vec{r}$ is expanded in the form:
}
\begin{equation}
\vec{B}(\vec{r}) = \sum\limits_{l,m} G_{l,m}
\begin{bmatrix}
\Pi_{x,l,m} (\vec{r})\\
\Pi_{y,l,m}(\vec{r}) \\
\Pi_{z,l,m}(\vec{r})
\end{bmatrix} ,
\end{equation}
with the $\vec{\Pi}_{l,m}$ harmonic polynomials of degree $l$ in the Cartesian coordinates $x$, $y$ and $z$, and $G_{l,m}$ the corresponding gradient coefficients.
Each degree $l$ has $2l+3$ polynomials, with $m$ ranging from $-(l+1)$ to $l+1$.
The origin of the coordinate system is chosen at the center of the cylindrical precession chamber, as this significantly simplifies averaging over the chamber volume.
The harmonic polynomials up to third order are listed in Table II of \cite{Abe19a}.
\hl{The gradient $G_{\textrm{grav}}$ (introduced in Eq.~\eqref{eq:Rratio}), relevant for the nEDM experiment, is a specific combination of the harmonic coefficients \cite{Abe19a}:}
\begin{equation}
G_{\textrm{grav}} = G_{1,0} + G_{3,0} \left(\frac{3H^2}{20}-\frac{3R^2}{4}\right),
\label{eq:GGrav}
\end{equation}
where $H$ is the height and $R$ the radius of the cylindrical storage chamber.
\hl{Evaluating this expression with} the dimensions of the nEDM precession chamber, the vertical gradient is given by $G_{\textrm{grav}} = G_{1,0} - G_{3,0} (393\textrm{\,cm}^2)$.
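For reference, the geometric factor in Eq.~\eqref{eq:GGrav} is easy to verify numerically; the chamber dimensions used in the sketch below ($H \approx 12$\,cm and $R \approx 23.5$\,cm) are approximate values adopted here only to reproduce the quoted coefficient.
\begin{verbatim}
# Cross-check of the geometric factor in Eq. (GGrav) for assumed
# chamber dimensions H = 12 cm, R = 23.5 cm.
H, R  = 12.0, 23.5                              # cm
coeff = 3.0 * H**2 / 20.0 - 3.0 * R**2 / 4.0
print("3H^2/20 - 3R^2/4 = %.0f cm^2" % coeff)   # about -393 cm^2
\end{verbatim}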
\subsubsection{Gradient extraction in the variometer mode}
\label{sec:GzVario}
If the vector components of the magnetic field are known at positions $\vec{r}_i$, the gradients $G_{l,m}$ can be determined by solving the matrix equation
\begin{equation}
\begin{bmatrix}
{B_x}\\
{B_y}\\
{B_z}
\end{bmatrix}
=
\begin{bmatrix}
{\Pi_x}\\
{\Pi_y}\\
{\Pi_z}\\
\end{bmatrix}
{G} ,
\label{eq:GradientMatrixEq}
\end{equation}
where ${B_x}$ is a column vector with elements $B_{x}^i$ representing the $x$-component of the magnetic field measured at positions $\vec{r}_i$, ${\Pi_x}$ is a matrix with elements $({\Pi_x})^{ij} = \Pi_{x,l_j,m_j}(\vec{r}_i)$, i.e., the harmonic polynomial defined by $l_j$ and $m_j$ evaluated at position $\vec{r}_i$, and ${G}$ is a column vector containing the harmonic coefficients $G_{l_j,m_j}$.
The expressions are similar for the $y$- and $z$-matrices.
In the particular case of measurements with the variometer method there is, however, a significant difference between the uncertainty on $B_z$ and the uncertainties on the transverse components $B_x$ and $B_y$. Therefore, each line in the matrix equation is weighted with the inverse of the squared uncertainty of the corresponding magnetic field component value.
Since one of the HV-compatible magnetometers failed after an electrical discharge burned one of its optical fibers at an early stage of data taking, we only have 15 sensors available to fit the harmonic coefficients.
This results in $3\times15 = 45$ equations, enabling us to comfortably fit up to third order (24 harmonics) while still having enough degrees of freedom for error estimation.
This means that the harmonic coefficients necessary for the estimation of $G_{\textrm{grav}}$ are easily accessible using the variometer method.
However, since the method involves applying additional magnetic fields, it is not used during a typical nEDM measurement cycle as it would disturb the neutron EDM measurement.
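A minimal sketch of the weighted least-squares fit described above is given below; the arrays are placeholders (harmonic polynomials evaluated at the sensor positions, measured field components and their uncertainties), so the snippet is an illustration rather than the analysis code itself.
\begin{verbatim}
# Minimal sketch of the weighted least-squares fit of Eq.
# (GradientMatrixEq).  Pi (3N x M), B (3N) and sigma (3N) are
# placeholders: harmonic polynomials at the sensor positions, the
# measured field components and their uncertainties.
import numpy as np

def fit_gradients(Pi, B, sigma):
    W   = np.diag(1.0 / sigma**2)       # per-row weights
    A   = Pi.T @ W @ Pi                 # normal matrix
    b   = Pi.T @ W @ B
    G   = np.linalg.solve(A, b)         # harmonic coefficients G_{l,m}
    cov = np.linalg.inv(A)              # parameter covariance
    return G, cov
\end{verbatim}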
\subsubsection{Gradient extraction in the phase-feedback mode}
\label{sec:GzPLL}
Since in phase-feedback mode only the magnitude of the magnetic field is known at positions $\vec{r}_i$, we first have to make the following approximation:
\begin{equation}
\pm\|\vec{B}\|=B_z+ \frac{B^2_x+B^2_y}{2B_z}+\cdots\approx B_z ,
\end{equation}
where the sign is determined by the main direction of $\vec{B}$, which is oriented along the $z$-axis.
This approximation is valid in the nEDM experiment as the field maps have shown that the transverse components of the main \SI{1}{\micro T} field are typically smaller than \SI{10}{nT}.
To extract the magnetic field gradients $G_{l,m}$, one has to solve the matrix equation ${B_z} = {\Pi_z^{\textrm{S}}}{G^{\textrm{S}}}$, with the matrices being defined as in Eq.~\eqref{eq:GradientMatrixEq}, with the exception that the polynomials with $m_j=\pm(l_j+1)$ are not included.
The reason for this is that these modes are purely transverse and do not contribute to $B_z$, and are therefore not accessible via the magnitude.
The superscript S (scalar) is added to make a clear distinction between gradients ${G}$ determined from vector measurements and gradients ${G^{\textrm{S}}}$ extracted from scalar measurements.
Again, the uncertainty on the magnitude measurements can be used to assign weights to the equations.
As the nEDM experiment has sixteen Cs{} magnetometers, we typically limit the scalar harmonic expansion to second order (with 9 fit parameters), providing the following magnetic field description:
\begin{align}
B_z(x, y, z) \, =\, & G_{0,0}^{\textrm{S}} + y\, G_{1,-1}^{\textrm{S}}+ z\,G_{1,0}^{\textrm{S}} +x\, G_{1,1}^{\textrm{S}}
\nonumber
\\
& + 2xy\,G_{2,-2}^{\textrm{S}}+2yz\,G_{2,-1}^{\textrm{S}}
\nonumber
\\
&+\left(z^2-\frac{1}{2}(x^2+y^2)\right)\, G_{2,0}^{\textrm{S}}
\nonumber
\\
&+ 2xz\,G_{2,1}^{\textrm{S}} + (x^2-y^2)\,G_{2,2}^{\textrm{S}} ~ .
\label{eq:polynomial}
\end{align}
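Written out explicitly, the model of Eq.~\eqref{eq:polynomial} corresponds to the short function below; the ordering of the coefficients in the list is a convention chosen here for illustration only.
\begin{verbatim}
# B_z model of Eq. (polynomial); G is the list of scalar harmonic
# coefficients in the order
# [G00, G1-1, G10, G11, G2-2, G2-1, G20, G21, G22].
def Bz_model(x, y, z, G):
    return (G[0] + y*G[1] + z*G[2] + x*G[3]
            + 2*x*y*G[4] + 2*y*z*G[5]
            + (z**2 - 0.5*(x**2 + y**2))*G[6]
            + 2*x*z*G[7] + (x**2 - y**2)*G[8])
\end{verbatim}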
\hl{The cubic vertical gradient $G_{3,0}$ clearly cannot be determined using Eq.~(\ref{eq:polynomial}). }
However, the higher order terms do affect the extracted scalar gradients ${G^{\textrm{S}}}$.
Assuming a multipole expansion ${B_z}={\Pi_z} {G}$, the contribution of the higher order terms to the scalar fit parameters can be calculated explicitly:
\begin{equation}
{G^{\textrm{S}}} = \left(\left({\Pi_z^{\textrm{S}}}\right)^{\textrm{T}} W \, {\Pi_z^{\textrm{S}}}\right)^{-1} \left({\Pi_z^{\textrm{S}}}\right)^{\textrm{T}} W \,{\Pi_z} \,{G} ,
\end{equation}
where $W$ is a diagonal matrix containing the weight of each equation.
Using the positions of the 15 Cs{} magnetometers that were operational during the 2015/2016 nEDM data taking and assuming equal weights for each magnetometer, the influence of the third order gradients on the vertical linear gradient $G_{1,0}^{\textrm{S}} = \sum a_{l,m} G_{l,m}$ is summarized in Table \ref{tab:G10Scubic}.
\hl{By comparing the prefactors in the definition of $G_{\textrm{grav}} = G_{1,0} - \SI{393}{cm^2} \,G_{3,0}$ in Eq.~\eqref{eq:GGrav} to the prefactors $a_{1,0}{=}1$ and $a_{3,0}{=}\SI{-288}{cm^2}$, we can conclude that $G^\textrm{S}_{1,0}$ is a reasonable but slightly inaccurate estimator for $G_{\textrm{grav}}$.
Adding weights $W$ based on the typical uncertainties of each sensor changes the factors $a_{3,m}$ in Table \ref{tab:G10Scubic}, but the prefactor for $G_{3,0}$ remains about 3/4 of the factor in $G_{\textrm{grav}}$.
}
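The projection above, which is how coefficients such as those in Table~\ref{tab:G10Scubic} can be obtained, is sketched below; the design matrices and weights are placeholders built from the sensor positions and are not the actual analysis inputs.
\begin{verbatim}
# Sketch of the aliasing calculation behind Table (G10Scubic):
# project the full expansion matrix Pi_z (N x M_full) onto the
# truncated scalar basis Pi_z_S (N x M_scalar).  The inputs are
# placeholders built from the sensor positions and weights.
import numpy as np

def aliasing_matrix(Pi_z_S, Pi_z, weights):
    W = np.diag(weights)
    N = Pi_z_S.T @ W @ Pi_z_S
    M = np.linalg.solve(N, Pi_z_S.T @ W @ Pi_z)
    return M        # row j gives G^S_j = sum_k M[j, k] * G_k
\end{verbatim}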
\subsubsection{Gradient extraction during nEDM data taking}
\label{sec:TypicalGz}
In order to show that the Cs{} magnetometer array meets the requirements for gradient drift correction outlined at the end of Section \ref{sec:nEDMexperiment}, we have to quantify the sensitivity and accuracy of the gradient extraction procedure based on the real magnetic field conditions in the nEDM experiment.
To monitor the magnetic field during nEDM data taking, the typical measurement procedure regarding the Cs{} sensors consists of: (i)~calibrations before and after each nEDM run to monitor the light intensity and the reference phase of the phase-feedback mode, (ii)~followed by variometer measurements to monitor the higher order gradient drifts, and (iii)~continuous measurements in the phase-feedback mode during the nEDM run.
A run typically takes a few days, corresponding to about 500 Ramsey cycles of five minutes each, with the electric field reversed every 56 cycles.
%
In order to quantify the gradient drift sensitivity during a Ramsey cycle, we extract $G_{1,0}^{\textrm{S}}$ from the data used in Fig.~\ref{fig:ASDBz} and calculate its Allan deviation (ADEV).
The results are shown in Fig.~\ref{fig:ASDGz}.
It is clear that the realized gradient sensitivity during the neutron storage time of 180\,s is significantly better than the requirement of 1.3\,pT/cm calculated in Section \ref{sec:nEDMexperiment}.
The ADEV slowly increases for longer integration times but remains far below the limit for all relevant time-scales.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Paper_Gz_Allan_2.pdf}
\caption{The Allan deviation of the vertical gradient $G_{1,0}^{\textrm{S}}$ extracted from the data shown in Fig.~\ref{fig:ASDBz} using the model in Eq.~\eqref{eq:polynomial} is shown in blue. The achievable statistical uncertainty at the nEDM cycle duration of $180$\,s is 8\,fT/cm, which is significantly below the upper limit indicated as a dashed green line. Statistical uncertainties in the magnetometers cause the rising slope towards small $\tau$ values. The result at $180$\,s is not limited by the slope but rather by the stability of the measurement system.}
\label{fig:ASDGz}
\end{figure}
Regarding the accuracy of the gradient drift measurement, there are two effects that play a role.
On the one hand there are sensor-related drifts that translate into an artificial gradient drift, on the other hand there are drifts of $G_{1,0}^{\textrm{S}}$ induced by changes in the higher order magnetic field gradients.
To estimate the former, we compare the calibrations before and after each nEDM run, to characterize the latter, we use the variometer measurements.
As discussed in Section \ref{sec:PLLaccuracy}, the typical change in reference phase between calibration pairs before and after an nEDM run results in reading offsets corresponding to a few pT.
Using the magnetic field gradient model of Section \ref{sec:GzPLL}, these offsets produce a change of the fit-parameter $G_{1,0}^{\textrm{S}}$ with a standard deviation of 0.1\,pT/cm in the time span of a few days.
Similarly, the light intensity changes slightly over the course of an nEDM run, modifying the light shift in each sensor, resulting in an artificial gradient drift with standard deviation of 0.03\,pT/cm.
Since the light intensity typically decreases over time and the direction of the laser beams is opposite for the sensors above and below the storage chamber, the average change is about $-0.01$\,pT/cm.
Comparing the variometer measurements before and after each nEDM run, we can extract the total change of each gradient $\Delta G_{l,m}$ during the run.
The distribution of $\Delta G_{l,m}$ is Gaussian, with the standard deviation of the terms relevant to $G_{1,0}^{\textrm{S}}$ summarized in Table~\ref{tab:G10Scubic}.
Taking into account the correlation between the drifts of $G_{1,0}$ and $G_{3,0}$, and using Eq.~\eqref{eq:GGrav}, the standard deviation of $\Delta G_{\textrm{grav}}$ is 1.4\,pT/cm.
Using the magnitude of the same data, the extracted drift of $G_{1,0}^{\textrm{S}}$ is in agreement with the drift of $G_{\textrm{grav}}$ within the error bars of the parameters, which are typically 0.7\,pT/cm for $G_{\textrm{grav}}$ due to the inaccuracy of the variometer mode \hl{including map-related inaccuracies}.
This gives an upper limit on the relative accuracy of $G_{1,0}^{\textrm{S}}$: the accuracy is at least a factor of 2 better than the standard deviation of the drift on the timescale of an nEDM run.
It follows that the dominant uncertainty on the extracted gradients is not due to the accuracy of the individual sensors, but rather due to the `aliasing effect' of the higher order modes which are not included in the fit.
\begin{table}
\setlength{\tabcolsep}{8pt}
\caption{Overview of the harmonic coefficients contributing to the fit parameter $G_{1,0}^{\textrm{S}} = \sum a_{l,m}G_{l,m}$ up to degree 3.
For each harmonic coefficient $G_{l,m}$, the weighting factor $a_{l,m}$ and the standard deviation of the gradient drift $\sigma(\Delta G_{l,m})$ during a typical nEDM run are given.
Taking into account the correlations between the different contributions, an estimation of the standard deviations of the drift of $G_{1,0}^{\textrm{S}}$ and its accuracy $G_{1,0}^{\textrm{S}} - G_{\textrm{grav}}$ are given in the last two lines.
\hl{In the last line, the error estimation is scaled with $\sqrt{\chi^2/\nu}$ of the variometer fit to take into account the map-related inaccuracies of the method.}
}
\centerline{
\begin{tabular}{l c c }
\hline
\hline
$G_{l,m}$& $a_{l,m}$ (cm$^{l-1}$) & $\sigma(\Delta G_{l,m})$ (pT/cm$^{l}$) \\
\hline
$G_{1,0}$ &1&1.71 \\
$G_{3,-3}$ &-135&0.0009 \\
$G_{3,-2}$ &344& 0.0006\\
$G_{3,-1}$ &22& 0.0015\\
$G_{3,0}$ &-288& 0.0023\\
$G_{3,1}$ &-23& 0.0010\\
$G_{3,2}$ &466&0.0010 \\
$G_{3,3}$ &1& 0.0017\\
\hline
$G_{1,0}^{\textrm{S}}$&&1.4--1.7\\
$G_{1,0}^{\textrm{S}} - G_{\textrm{grav}}$ &&$<0.7$\\
\hline
\hline
\end{tabular}
}
\label{tab:G10Scubic}
\end{table}
\subsection{Homogenization of the magnetic field}
\label{sec:T2opt}
The homogeneity of the magnetic field influences both the statistical precision of the nEDM experiment and its systematic effects.
To improve the former without exacerbating the latter, we have developed a procedure for optimizing the magnetic field in the precession chamber.
The principles behind this optimization strategy are explained in Section \ref{sec:T2optStrategy}.
The implementation of the routine is described in Section \ref{sec:T2optImpl}, followed by a discussion of the tuning of the algorithm in Section \ref{sec:T2optTuning}.
Finally, the resulting improvement in sensitivity is presented in Section \ref{sec:T2optResults}.
\subsubsection{Principles behind the optimization}
\label{sec:T2optStrategy}
Improving the statistical sensitivity and minimizing the systematic effects impose different requirements on the magnetic field optimization.
The magnetic-field-related contribution to the statistical precision of the nEDM measurement is captured in the parameter $\alpha$ of Eq.~\eqref{eq:nustatistical}, which is the visibility or contrast of the Ramsey resonance.
This parameter is predominantly defined by the neutrons' transverse spin relaxation time $T_2$ via $\alpha(T) = \alpha_0 \exp{(-T/T_2)}$ where $\alpha_0$ is the polarization at the start of the Ramsey procedure and $T$ the precession time of the neutrons.
The transverse relaxation time results from a combination of three types of neutron depolarization in the storage chamber, as discussed in \cite{Abe19a}.
The first mechanism is depolarization due to wall collisions, which is an effect that does not depend on the magnetic field.
The second is gravitationally enhanced depolarization \cite{GravEnhDepol1,GravEnhDepol2}, which is caused by the extremely low kinetic energy of the ultracold neutrons.
Different energy groups of neutrons have a different average height in the chamber, \hl{so in the presence of a vertical gradient of the field's main component their precession frequencies differ slightly.}
This causes a dephasing of the different energy groups, which results in a lower polarization at the end of the Ramsey procedure.
To reduce this effect, \hl{it is crucial to minimize specifically the vertical gradient $\partial B_{z}/\partial z$}.
The third mechanism is intrinsic depolarization, which refers to the depolarization within each given energy group.
Even though the neutrons have the same energy, their trajectories through the chamber differ, resulting in dephasing if the magnetic field is not homogeneous over the chamber volume.
\hl{Such local changes in Larmor frequency are caused by all gradients of the main field component $B_{z}$ while gradients of the transverse components $B_{x}$ and $B_{y}$ play a negligible role.}
In contrast, the magnetic-field-related systematic effects that are not dealt with in the extension of the {\it crossing point analysis} of \cite{Abe19a} involve the quantities $\langle B_{\textrm{T}}^2\rangle$ and $G_{3,0}$.
The first is defined as
\begin{equation}
\langle B_{\textrm{T}}^2\rangle = \Big\langle \big(B_x-\langle B_x\rangle\big)^2+\big(B_y-\langle B_y\rangle\big)^2\Big\rangle
\end{equation}
and stands for the square of the transverse magnetic field components averaged over the storage volume.
It is a second order combination of the harmonic expansion coefficients $G_{l,m}$:
\begin{equation}
\langle B_{\textrm{T}}^2\rangle = \sum a_{ij} G_{l_i,m_i}G_{l_j,m_j} .
\end{equation}
The coefficients $a_{ij}$ are given in Appendix B of \cite{Abe19a}.
The smaller the gradients of the transverse magnetic field components, the smaller this systematic effect.
\hl{The quantity $G_{3,0}$ is the cubic vertical gradient of $B_z$ with a characteristic $z$-dependence $\vec{B}(x{=}0,y{=}0,z)\propto (0,0,z^3)$.
The systematic uncertainties related to $G_{3,0}$ can thus be suppressed by ensuring the homogeneity of $B_z$.}
\hl{In summary, optimizing the homogeneity of the longitudinal field component $B_z$ helps to suppress certain systematic uncertainties and is crucial to maintain long $T_2$ times and thus a high statistical sensitivity.
Optimizing the homogeneity of the transverse field components $B_x$ and $B_y$ is equally important since a different systematic effect is related to those components.
}
\subsubsection{Implementation}
\label{sec:T2optImpl}
\hl{Firstly, the homogeneity of the longitudinal magnetic field component $B_z$ can be directly accessed by the Cs{} magnetometer array}.
However, since the sensors are not perfectly accurate and require offline corrections, $B_z$ was only available with an accuracy of about 45 to 90\,pT during online data taking (Table~\ref{tab:Systematics}).
Therefore, the goal of the optimization routine is to reduce the spread of the Cs{} magnetometer readings to this level.
Secondly, the transverse components are accessible with the variometer method, but the accuracy is not sufficient to keep $\langle B_{\textrm{T}}^2\rangle$ below the goal of 2\,nT$^2$, which would correspond to a systematic effect at the level of a few 10$^{-27}\ensuremath{e\cdot\ensuremath{\mathrm{cm}}}$.
\hl{For this reason, offline field maps, which were recorded before the period of nEDM data taking, are used to provide an estimate of $\langle B_{\textrm{T}}^2\rangle$.}
The final correction of this systematic effect will be performed with more accurate values extracted from a more recent mapping campaign (the analysis of which will be included in the third part of the trilogy).
Combining the online information of the Cs{} sensors with the offline magnetic field maps, we developed a routine to optimize the currents $I_{\textrm{coil}}$ applied to a set of 30 trim-coils wound around the vacuum tank.
The magnetic field produced by each coil when applying one unit of current was characterized both online and offline, providing $\vec{B_{\textrm{coil}}^{\textrm{CsM}}}$ measured by the Cs{} magnetometer (CsM) in the variometer mode, and the harmonic expansion coefficients $G_{\textrm{coil}}^{\textrm{map}}$ as extracted from the magnetic field maps.
After measuring the main magnetic field $\vec{B_{0}^{\textrm{CsM}}}$ on-line with the Cs{} magnetometer array, the optimal currents are calculated by minimizing the sum of the following three terms:
\begin{equation}
\label{eq:SumToMin}
S = S_{\textrm{Long}}+ T_{\textrm{Trans}} S_{\textrm{Trans}} +T_{\textrm{Reg}} S_{\textrm{Reg}} ,
\end{equation}
where $S_{\textrm{Long}}(I_{\textrm{coil}})$ quantifies the homogeneity of the longitudinal component, $S_{\textrm{Trans}}(I_{\textrm{coil}})$ evaluates the systematic effect due to the transverse components and $S_{\textrm{Reg}}(I_{\textrm{coil}})$ is added as a regularization term since there are more parameters than constraints (30 $>$ 16+1).
The factors $T_{\textrm{Trans}}$ and $T_{\textrm{Reg}}$ are tuning parameters and assign a weight to the respective sums relative to $S_{\textrm{Long}}$.
The explicit expression for $S_{\textrm{Long}}$ as a function of the currents $I_{\textrm{coil}}$ is given by
\begin{equation}
S_{\textrm{Long}} = \sum\limits_{\textrm{CsM}}\left(B_{0,z}^{\textrm{CsM}} + \sum\limits_{\textrm{coil}}I_{\textrm{coil}} B_{\textrm{coil},z}^{\textrm{CsM}} - B_{\textrm{goal}}\right)^2 ,
\end{equation}
where $B_{0,z}^{\textrm{CsM}}$ and $B_{\textrm{coil},z}^{\textrm{CsM}}$ are the $z$-components of the main magnetic field and of the field produced by the coil at one unit of current, respectively, both as measured by the corresponding Cs{} magnetometer. $B_{\textrm{goal}}$ is the goal value for the Cs{} sensor magnitude readings.
Typically, the sensors are all assigned the same goal value to improve the homogeneity, but other configurations are possible.
The transverse requirements are taken into account by the following sum
\begin{equation}
S_{\textrm{Trans}} = \langle (B_{\textrm{T}}^{\textrm{map}})^2 \rangle = \sum\limits_{i,j} a_{ij} G^{\textrm{map}}_{l_i,m_i}G^{\textrm{map}}_{l_j,m_j} ,
\end{equation}
where $G_{l_i,m_i}^{\textrm{map}} = G_{0,l_i,m_i}^{\textrm{map}} + \sum\limits_{\textrm{coil}} I_{\textrm{coil}} G_{\textrm{coil},l_i,m_i}^{\textrm{map}}$ is the harmonic coefficient $G_{l_i,m_i}$ of the total magnetic field that would be produced if the currents $I_{\textrm{coil}}$ were applied to the coils, as determined from the field maps.
The coefficients $a_{ij}$ are defined in \cite{Abe19a}.
The regularization term is given by
\begin{equation}
S_{\textrm{Reg}} = \sum\limits_{\textrm{coil}} \left(I_{\textrm{coil}}\max_{\textrm{CsM}}(\| \vec{B^{\textrm{CsM}}_{\textrm{coil}}}\|)\right)^2 ,
\end{equation}
where $\max\limits_{\textrm{CsM}}(\| \vec{B^{\textrm{CsM}}_{\textrm{coil}}}\|)$ is the maximum magnitude measured by the Cs{} magnetometers when one unit of current is applied to the coil.
This term makes sure that the \hl{magnetic field produced} per coil is not too large, avoiding a loss in sensitivity due to local inhomogeneities created by the coils themselves.
In order to minimize Eq.~\eqref{eq:SumToMin}, we solve the set of equations $\partial S/\partial I_{\textrm{coil}} = 0$.
Since the terms in $S$ are at most of order 2 in $I_{\textrm{coil}}$, $\partial S/\partial I_{\textrm{coil}}$ is of order 1 and can be solved efficiently using matrix inversion.
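A minimal sketch of this linear solve is given below; all inputs are placeholders standing for the measured coil responses, the sensor offsets, the map-based harmonic coefficients, the quadratic form of $\langle B_{\textrm{T}}^2\rangle$ and the regularization scales.
\begin{verbatim}
# Sketch of the minimization of Eq. (SumToMin): S is quadratic in
# the coil currents, so dS/dI = 0 is a linear system.  All inputs
# are placeholders: C[s,c] = B_coil,z at sensor s for unit current
# in coil c, d[s] = B0z[s] - B_goal, Gc and g0 the map-based coil
# and background harmonic coefficients, A the quadratic form of
# <B_T^2>, m[c] the largest field magnitude produced by coil c.
import numpy as np

def optimal_currents(C, d, Gc, g0, A, m, T_trans, T_reg):
    A   = 0.5 * (A + A.T)                        # symmetrize a_ij
    H   = C.T @ C + T_trans * Gc.T @ A @ Gc + T_reg * np.diag(m**2)
    rhs = -(C.T @ d + T_trans * Gc.T @ A @ g0)
    return np.linalg.solve(H, rhs)               # optimal I_coil
\end{verbatim}
The tuning factors $T_{\textrm{Trans}}$ and $T_{\textrm{Reg}}$ enter only through this small linear system, which makes the parameter scans described below computationally cheap.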
\subsubsection{Optimizing the tuning parameters}
\label{sec:T2optTuning}
The success of the algorithm is determined by the choice of the tuning parameters $T_{\textrm{Trans}}$ and $T_{\textrm{Reg}}$.
To determine the optimal values, we start off with an estimate of the optimal size of each sum in Eq.~\eqref{eq:SumToMin}.
Given the on-line accuracy of the Cs{} magnetometers, we estimate the final standard deviation of $(B_{z}^{\textrm{CsM}}-B_{\textrm{goal}})$ at 100\,pT, resulting in a longitudinal term $S_{\textrm{Long}}$ of $(0.1\,\textrm{nT})^2\times16 = 0.16$\,nT$^2$.
The value of $\langle B_{\textrm{T}}^2\rangle$ should be as small as possible, but since the maps provide only a rough estimate, we set the goal value for $S_{\textrm{Trans}}$ at 0.5\,nT$^2$.
To avoid producing local inhomogeneities due to strong currents in the trim-coils, the tuning is started with a trial value of 2\,nT produced per coil on average, resulting in a regularization term $S_{\textrm{Reg}} $ of $ (2\,\textrm{nT})^2\times 30 = 120$\,nT$^2$.
Comparing the size of each sum, first guesses for the tuning parameters are $T_{\textrm{Trans}} = S_{\textrm{Long}}/S_{\textrm{Trans}} = 0.32$ and $T_{\textrm{Reg}} = S_{\textrm{Long}}/S_{\textrm{Reg}} = 0.0013$.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{Sall_new2.pdf}
\caption{The behavior of $S_{\textrm{Long}}$ (top), $S_{\textrm{Trans}}$ (middle) and $S_{\textrm{Reg}}$ (bottom) evaluated at the optimal trim-coil currents as a function of the tuning parameters $T_{\textrm{Trans}}$ and $T_{\textrm{Reg}}$. All scales (including the color scale) are logarithmic.}
\label{fig:TuningParameters}
\end{figure}
Figure~\ref{fig:TuningParameters} shows the minimized values of each sum $S_i$ in Eq.~\eqref{eq:SumToMin} as a function of the tuning parameters, with the ranges centered around our initial guesses.
The terms are calculated using a typical magnetic field which is measured on-line 30 minutes after degaussing the $\mu$-metal shield, as is the typical procedure during nEDM data taking.
As is clearly visible in the two uppermost plots of Fig.~\ref{fig:TuningParameters}, the tuning parameter $T_{\textrm{Trans}}$ (horizontal axis) determines the relative importance of the longitudinal spread (top) versus the transverse homogeneity (middle).
For values of $T_{\textrm{Trans}}$ smaller than $1.0$, the longitudinal spread is almost solely determined by the regularization parameter $T_{\textrm{Reg}}$.
The smaller $T_{\textrm{Reg}}$, the larger the applied currents (bottom), and the smaller the predicted spread of $B_{z}$.
For $T_{\textrm{Trans}}$ larger than 1, the value of $\langle (B_{\textrm{T}}^{\textrm{map}})^2 \rangle$ is significantly reduced at the cost of a worse \hl{$B_z$ homogeneity} and much larger currents.
The behavior at large $T_{\textrm{Trans}}$ and small $T_{\textrm{Reg}}$ (bottom right corner of each plot) suggests that it is nearly impossible to have both a small spread in on-line $B_{z}^{\textrm{CsM}}$-component and a small $\langle B_{\textrm{T}}^2\rangle$ predicted from the maps, even if the restriction on the applied currents is relaxed.
This indicates that the estimation of $\langle B_{\textrm{T}}^2\rangle$ from the maps is only reliable down to the 0.3\,nT$^2$ level.
As the exact size of $S_{\textrm{Trans}}$ is not crucial, $T_{\textrm{Trans}}$ is typically fixed at a value smaller than $1.0$ leading to $\langle (B_{\textrm{T}}^{\textrm{map}})^2 \rangle$ values smaller than the limit of 2\,nT$^2$.
The optimal choice for $T_{\textrm{Reg}}$ is not so straightforward.
It depends on the initial homogeneity of the magnetic field, as a larger inhomogeneity implies a larger amount of current necessary to compensate.
Moreover, as the applied currents become larger, the uncertainty on the measurement of $B^{\textrm{CsM}}_{\textrm{coil},z}$ will make the estimation of the longitudinal spread inaccurate and thus reduce the predictive power for the value of $\alpha$.
On top of that, making the magnetic field magnitude the same at all sensor positions does not mean that the field in the storage chamber itself is homogeneous, especially when the applied trim-coil currents are large.
For this reason, we typically selected a scan range of 0.0002 to 0.0020 for $T_{\textrm{Reg}}$ and picked out the best setting by measuring the resulting $\alpha$ on-line.
\subsubsection{Results}
\label{sec:T2optResults}
Different iterations of the optimization procedure were used during the nEDM data taking period of 2015 and 2016.
For each chosen current setting during data taking, the value of $\langle B_{\textrm{T}}^2 \rangle$ was smaller than 2\,nT$^2$.
The corresponding Ramsey visibilities are shown in Fig.~\ref{fig:alpha}.
The effect of gravitational depolarization is clearly visible as $\alpha$ decreases when the vertical gradient $\Delta G_{1,0}^{\textrm{S}}$ moves away from zero.
From dedicated measurements at different storage times, we know that the initial polarization $\alpha_0$ in our storage bottle is 0.86.
The $\alpha$ values of 0.76-0.81 at zero gradient then correspond to transverse neutron relaxation times between 1450\,s and 3000\,s.
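These values follow directly from inverting $\alpha(T) = \alpha_0 \exp{(-T/T_2)}$; a short consistency check, assuming the precession time coincides with the 180\,s storage time quoted earlier, is:
\begin{verbatim}
# Consistency check of the quoted T2 range, assuming T = 180 s and
# alpha(T) = alpha0 * exp(-T / T2) with alpha0 = 0.86.
import math
T, alpha0 = 180.0, 0.86
for alpha in (0.76, 0.81):
    T2 = -T / math.log(alpha / alpha0)
    print("alpha = %.2f  ->  T2 = %.0f s" % (alpha, T2))
# prints about 1460 s and 3000 s
\end{verbatim}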
The improvement of the neutron spin relaxation time $T_2$ and the corresponding increase of Ramsey contrast $\alpha$ is summarized in Table~\ref{tab:T2}, comparing data from 2014 without CsM based homogenization with data from 2015 and 2016.
The transverse relaxation time has more than doubled with the new homogenization procedure, resulting in an increase of $\alpha$ by about 35\% and an equal improvement of the nEDM sensitivity.
In order to realize the same improvement with neutron statistics, the total number of detected neutrons would have to be increased by a factor of $1.35^2 \approx 1.8$ due to the $\sqrt{N}$ scaling (see Eq.~\eqref{eq:nustatistical}).
This is a significant improvement for an experiment that is scheduled to take data for several years.
\begin{table}
\caption{\label{tab:T2} Comparison of the transverse neutron spin relaxation time $T_2$ and the Ramsey contrast $\alpha$ at zero vertical gradient before and after the field homogenization was introduced in 2015. The polarization $\alpha_0$ at the start of the Ramsey procedure is 0.86 in both datasets. In 2014 the $\alpha$ values were significantly different for the two $B_0$ field orientations.}
\centerline{
\begin{tabular}{c c c c}
\hline
\hline
Year &$B_0$ direction & $T_2$ (s)& $\alpha$ \\
\hline
2014 & up & 760 & 0.64 \\
& down & 439 & 0.52 \\
\hline
2015 \& 2016 & up & 1620-3000 & 0.77-0.81 \\
& down & 1450-3000 & 0.76-0.81 \\
\hline
\hline
\end{tabular}
}
\end{table}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Alphas2.pdf}
\caption{The Ramsey contrast or visibility $\alpha$ measured during the nEDM data taking period of 2015 and 2016 as a function of the vertical gradient. The `zero' gradient is defined per magnetic field base configuration (or equivalently per set of measurements that are based on the same homogenization result) as the gradient at which the visibility-parabola reaches its highest point. For nonzero vertical gradient, gravitational depolarization reduces the contrast of the Ramsey curve. Note that both $B_0$ up and $B_0$ down reach similar visibilities.}
\label{fig:alpha}
\end{figure}
\section{Summary}
We have discussed the design, implementation and performance of the Cs{} magnetometer array installed at the PSI-nEDM experiment.
The compact optical magnetometers are vacuum and HV compatible and are placed on the electrodes above and below the UCN storage chamber, providing on-line gradient information.
The sensors are driven by a single diode laser, using beam multiplexing to bring the light to the individual sensors in the vacuum chamber of the experiment.
We have explained the phase-feedback mode of sensor operation in the $M_x$ configuration and demonstrated an intrinsic magnetometer sensitivity which is below 50\,fT/$\sqrt{\mathrm{Hz}}$ in the shot noise limit.
The final magnetometer noise in the nEDM experiment was significantly larger than the shot noise limit but it did not limit the extraction of important field parameters at the relevant integration time of 180\,s.
At such large integration times the performance is rather limited by system stability which we could demonstrate to be significantly better than required (see Fig.~\ref{fig:ASDGz}).
We have discussed various systematic effects that influence the reading of the magnetometer and estimated an on-line accuracy of 45 to 90\,pT.
Using a set of two transverse coils, we can run the magnetometers in variometer mode, providing vector information of the local magnetic field.
A model was presented to describe the spatial field distribution, and the precision and accuracy of gradient extraction during nEDM data taking was discussed.
Further, we presented a magnetic-field homogenization procedure that more than doubled the transverse spin relaxation time of the neutrons while keeping the magnetic-field-related systematic effects under control.
This resulted in a 35\% improvement of the statistical sensitivity of the nEDM experiment, reducing the time needed to reach a given statistical sensitivity by a factor of 1.8.
\hl{
The presented techniques are useful in general for the measurement and control of magnetic field uniformity.
We will use an upgraded version of the magnetometer array, based on all-optical sensors \cite{Gru15}, in our new neutron EDM experiment (n2EDM).
The new sensors use free spin precession in contrast to the driven spin precession in a $M_x$ magnetometer.
This leads to improved stability and accuracy, necessary to fulfill the requirements of our next-generation experiment.
}
\section{Acknowledgements}
We would like to thank the mechanical workshop at the University of Fribourg for manufacturing the construction elements of the Cs{} sensors, C.~Macchione for the preparation of the paraffin-coated Cs{} cells, and M.~Meier and F.~Burri for their support during the installation of the Cs{} magnetometer array and measurements at PSI.
The LPC Caen and the LPSC acknowledge the support of the French Agence Nationale de la Recherche (ANR) under Reference No. ANR-09-BLAN-0046.
P.M. would like to acknowledge support from SNSF-FCS Grant No. 2015.0594 (ETHZ).
E.W. acknowledges the fellowship of the Fund for Scientific Research Flanders (FWO).
This research was partly financed by the Fund for Scientific Research, Flanders;
Grant No. GOA/2010/10 of KU Leuven; the Swiss National Science Foundation projects 126562 (PSI), 140421 (UNIFR), 144473 (PSI), 149211 (ETH), 162574 (ETH), 172626 (PSI), 172639 (ETH), and 181996 (Bern),
the Deutsche Forschungsgemeinschaft projects BI 1424/2-1 and /3-1, and Grants No. ST/K001329/1, No. ST/M003426/1, and No. ST/L006472/1 from the Science and Technology Facilities Council (STFC) of the United Kingdom. The original nEDM apparatus without the Cs magnetometer array was funded by grants from the PPARC (now STFC) of the United Kingdom. Our Polish partners wish to acknowledge support from the National Science Centre, Poland, under Grants No. UMO-2015/18/M/ST2/00056 and UMO-2016/23/D/ST2/00715.
\section{The ALICE Collaboration}
\begingroup
\small
\begin{flushleft}
S.~Acharya\Irefn{org141}\And
D.~Adamov\'{a}\Irefn{org94}\And
A.~Adler\Irefn{org74}\And
J.~Adolfsson\Irefn{org80}\And
M.M.~Aggarwal\Irefn{org99}\And
G.~Aglieri Rinella\Irefn{org33}\And
M.~Agnello\Irefn{org30}\And
N.~Agrawal\Irefn{org10}\textsuperscript{,}\Irefn{org53}\And
Z.~Ahammed\Irefn{org141}\And
S.~Ahmad\Irefn{org16}\And
S.U.~Ahn\Irefn{org76}\And
A.~Akindinov\Irefn{org91}\And
M.~Al-Turany\Irefn{org106}\And
S.N.~Alam\Irefn{org141}\And
D.S.D.~Albuquerque\Irefn{org122}\And
D.~Aleksandrov\Irefn{org87}\And
B.~Alessandro\Irefn{org58}\And
H.M.~Alfanda\Irefn{org6}\And
R.~Alfaro Molina\Irefn{org71}\And
B.~Ali\Irefn{org16}\And
Y.~Ali\Irefn{org14}\And
A.~Alici\Irefn{org10}\textsuperscript{,}\Irefn{org26}\textsuperscript{,}\Irefn{org53}\And
A.~Alkin\Irefn{org2}\And
J.~Alme\Irefn{org21}\And
T.~Alt\Irefn{org68}\And
L.~Altenkamper\Irefn{org21}\And
I.~Altsybeev\Irefn{org112}\And
M.N.~Anaam\Irefn{org6}\And
C.~Andrei\Irefn{org47}\And
D.~Andreou\Irefn{org33}\And
H.A.~Andrews\Irefn{org110}\And
A.~Andronic\Irefn{org144}\And
M.~Angeletti\Irefn{org33}\And
V.~Anguelov\Irefn{org103}\And
C.~Anson\Irefn{org15}\And
T.~Anti\v{c}i\'{c}\Irefn{org107}\And
F.~Antinori\Irefn{org56}\And
P.~Antonioli\Irefn{org53}\And
R.~Anwar\Irefn{org125}\And
N.~Apadula\Irefn{org79}\And
L.~Aphecetche\Irefn{org114}\And
H.~Appelsh\"{a}user\Irefn{org68}\And
S.~Arcelli\Irefn{org26}\And
R.~Arnaldi\Irefn{org58}\And
M.~Arratia\Irefn{org79}\And
I.C.~Arsene\Irefn{org20}\And
M.~Arslandok\Irefn{org103}\And
A.~Augustinus\Irefn{org33}\And
R.~Averbeck\Irefn{org106}\And
S.~Aziz\Irefn{org61}\And
M.D.~Azmi\Irefn{org16}\And
A.~Badal\`{a}\Irefn{org55}\And
Y.W.~Baek\Irefn{org40}\And
S.~Bagnasco\Irefn{org58}\And
X.~Bai\Irefn{org106}\And
R.~Bailhache\Irefn{org68}\And
R.~Bala\Irefn{org100}\And
A.~Baldisseri\Irefn{org137}\And
M.~Ball\Irefn{org42}\And
S.~Balouza\Irefn{org104}\And
R.~Barbera\Irefn{org27}\And
L.~Barioglio\Irefn{org25}\And
G.G.~Barnaf\"{o}ldi\Irefn{org145}\And
L.S.~Barnby\Irefn{org93}\And
V.~Barret\Irefn{org134}\And
P.~Bartalini\Irefn{org6}\And
K.~Barth\Irefn{org33}\And
E.~Bartsch\Irefn{org68}\And
F.~Baruffaldi\Irefn{org28}\And
N.~Bastid\Irefn{org134}\And
S.~Basu\Irefn{org143}\And
G.~Batigne\Irefn{org114}\And
B.~Batyunya\Irefn{org75}\And
D.~Bauri\Irefn{org48}\And
J.L.~Bazo~Alba\Irefn{org111}\And
I.G.~Bearden\Irefn{org88}\And
C.~Bedda\Irefn{org63}\And
N.K.~Behera\Irefn{org60}\And
I.~Belikov\Irefn{org136}\And
A.D.C.~Bell Hechavarria\Irefn{org144}\And
F.~Bellini\Irefn{org33}\And
R.~Bellwied\Irefn{org125}\And
V.~Belyaev\Irefn{org92}\And
G.~Bencedi\Irefn{org145}\And
S.~Beole\Irefn{org25}\And
A.~Bercuci\Irefn{org47}\And
Y.~Berdnikov\Irefn{org97}\And
D.~Berenyi\Irefn{org145}\And
R.A.~Bertens\Irefn{org130}\And
D.~Berzano\Irefn{org58}\And
M.G.~Besoiu\Irefn{org67}\And
L.~Betev\Irefn{org33}\And
A.~Bhasin\Irefn{org100}\And
I.R.~Bhat\Irefn{org100}\And
M.A.~Bhat\Irefn{org3}\And
H.~Bhatt\Irefn{org48}\And
B.~Bhattacharjee\Irefn{org41}\And
A.~Bianchi\Irefn{org25}\And
L.~Bianchi\Irefn{org25}\And
N.~Bianchi\Irefn{org51}\And
J.~Biel\v{c}\'{\i}k\Irefn{org36}\And
J.~Biel\v{c}\'{\i}kov\'{a}\Irefn{org94}\And
A.~Bilandzic\Irefn{org104}\textsuperscript{,}\Irefn{org117}\And
G.~Biro\Irefn{org145}\And
R.~Biswas\Irefn{org3}\And
S.~Biswas\Irefn{org3}\And
J.T.~Blair\Irefn{org119}\And
D.~Blau\Irefn{org87}\And
C.~Blume\Irefn{org68}\And
G.~Boca\Irefn{org139}\And
F.~Bock\Irefn{org33}\textsuperscript{,}\Irefn{org95}\And
A.~Bogdanov\Irefn{org92}\And
S.~Boi\Irefn{org23}\And
L.~Boldizs\'{a}r\Irefn{org145}\And
A.~Bolozdynya\Irefn{org92}\And
M.~Bombara\Irefn{org37}\And
G.~Bonomi\Irefn{org140}\And
H.~Borel\Irefn{org137}\And
A.~Borissov\Irefn{org92}\textsuperscript{,}\Irefn{org144}\And
H.~Bossi\Irefn{org146}\And
E.~Botta\Irefn{org25}\And
L.~Bratrud\Irefn{org68}\And
P.~Braun-Munzinger\Irefn{org106}\And
M.~Bregant\Irefn{org121}\And
M.~Broz\Irefn{org36}\And
E.J.~Brucken\Irefn{org43}\And
E.~Bruna\Irefn{org58}\And
G.E.~Bruno\Irefn{org105}\And
M.D.~Buckland\Irefn{org127}\And
D.~Budnikov\Irefn{org108}\And
H.~Buesching\Irefn{org68}\And
S.~Bufalino\Irefn{org30}\And
O.~Bugnon\Irefn{org114}\And
P.~Buhler\Irefn{org113}\And
P.~Buncic\Irefn{org33}\And
Z.~Buthelezi\Irefn{org72}\textsuperscript{,}\Irefn{org131}\And
J.B.~Butt\Irefn{org14}\And
J.T.~Buxton\Irefn{org96}\And
S.A.~Bysiak\Irefn{org118}\And
D.~Caffarri\Irefn{org89}\And
A.~Caliva\Irefn{org106}\And
E.~Calvo Villar\Irefn{org111}\And
R.S.~Camacho\Irefn{org44}\And
P.~Camerini\Irefn{org24}\And
A.A.~Capon\Irefn{org113}\And
F.~Carnesecchi\Irefn{org10}\textsuperscript{,}\Irefn{org26}\And
R.~Caron\Irefn{org137}\And
J.~Castillo Castellanos\Irefn{org137}\And
A.J.~Castro\Irefn{org130}\And
E.A.R.~Casula\Irefn{org54}\And
F.~Catalano\Irefn{org30}\And
C.~Ceballos Sanchez\Irefn{org52}\And
P.~Chakraborty\Irefn{org48}\And
S.~Chandra\Irefn{org141}\And
W.~Chang\Irefn{org6}\And
S.~Chapeland\Irefn{org33}\And
M.~Chartier\Irefn{org127}\And
S.~Chattopadhyay\Irefn{org141}\And
S.~Chattopadhyay\Irefn{org109}\And
A.~Chauvin\Irefn{org23}\And
C.~Cheshkov\Irefn{org135}\And
B.~Cheynis\Irefn{org135}\And
V.~Chibante Barroso\Irefn{org33}\And
D.D.~Chinellato\Irefn{org122}\And
S.~Cho\Irefn{org60}\And
P.~Chochula\Irefn{org33}\And
T.~Chowdhury\Irefn{org134}\And
P.~Christakoglou\Irefn{org89}\And
C.H.~Christensen\Irefn{org88}\And
P.~Christiansen\Irefn{org80}\And
T.~Chujo\Irefn{org133}\And
C.~Cicalo\Irefn{org54}\And
L.~Cifarelli\Irefn{org10}\textsuperscript{,}\Irefn{org26}\And
F.~Cindolo\Irefn{org53}\And
J.~Cleymans\Irefn{org124}\And
F.~Colamaria\Irefn{org52}\And
D.~Colella\Irefn{org52}\And
A.~Collu\Irefn{org79}\And
M.~Colocci\Irefn{org26}\And
M.~Concas\Irefn{org58}\Aref{orgI}\And
G.~Conesa Balbastre\Irefn{org78}\And
Z.~Conesa del Valle\Irefn{org61}\And
G.~Contin\Irefn{org24}\textsuperscript{,}\Irefn{org127}\And
J.G.~Contreras\Irefn{org36}\And
T.M.~Cormier\Irefn{org95}\And
Y.~Corrales Morales\Irefn{org25}\And
P.~Cortese\Irefn{org31}\And
M.R.~Cosentino\Irefn{org123}\And
F.~Costa\Irefn{org33}\And
S.~Costanza\Irefn{org139}\And
P.~Crochet\Irefn{org134}\And
E.~Cuautle\Irefn{org69}\And
P.~Cui\Irefn{org6}\And
L.~Cunqueiro\Irefn{org95}\And
D.~Dabrowski\Irefn{org142}\And
T.~Dahms\Irefn{org104}\textsuperscript{,}\Irefn{org117}\And
A.~Dainese\Irefn{org56}\And
F.P.A.~Damas\Irefn{org114}\textsuperscript{,}\Irefn{org137}\And
M.C.~Danisch\Irefn{org103}\And
A.~Danu\Irefn{org67}\And
D.~Das\Irefn{org109}\And
I.~Das\Irefn{org109}\And
P.~Das\Irefn{org85}\And
P.~Das\Irefn{org3}\And
S.~Das\Irefn{org3}\And
A.~Dash\Irefn{org85}\And
S.~Dash\Irefn{org48}\And
S.~De\Irefn{org85}\And
A.~De Caro\Irefn{org29}\And
G.~de Cataldo\Irefn{org52}\And
J.~de Cuveland\Irefn{org38}\And
A.~De Falco\Irefn{org23}\And
D.~De Gruttola\Irefn{org10}\And
N.~De Marco\Irefn{org58}\And
S.~De Pasquale\Irefn{org29}\And
S.~Deb\Irefn{org49}\And
B.~Debjani\Irefn{org3}\And
H.F.~Degenhardt\Irefn{org121}\And
K.R.~Deja\Irefn{org142}\And
A.~Deloff\Irefn{org84}\And
S.~Delsanto\Irefn{org25}\textsuperscript{,}\Irefn{org131}\And
D.~Devetak\Irefn{org106}\And
P.~Dhankher\Irefn{org48}\And
D.~Di Bari\Irefn{org32}\And
A.~Di Mauro\Irefn{org33}\And
R.A.~Diaz\Irefn{org8}\And
T.~Dietel\Irefn{org124}\And
P.~Dillenseger\Irefn{org68}\And
Y.~Ding\Irefn{org6}\And
R.~Divi\`{a}\Irefn{org33}\And
D.U.~Dixit\Irefn{org19}\And
{\O}.~Djuvsland\Irefn{org21}\And
U.~Dmitrieva\Irefn{org62}\And
A.~Dobrin\Irefn{org33}\textsuperscript{,}\Irefn{org67}\And
B.~D\"{o}nigus\Irefn{org68}\And
O.~Dordic\Irefn{org20}\And
A.K.~Dubey\Irefn{org141}\And
A.~Dubla\Irefn{org106}\And
S.~Dudi\Irefn{org99}\And
M.~Dukhishyam\Irefn{org85}\And
P.~Dupieux\Irefn{org134}\And
R.J.~Ehlers\Irefn{org146}\And
V.N.~Eikeland\Irefn{org21}\And
D.~Elia\Irefn{org52}\And
H.~Engel\Irefn{org74}\And
E.~Epple\Irefn{org146}\And
B.~Erazmus\Irefn{org114}\And
F.~Erhardt\Irefn{org98}\And
A.~Erokhin\Irefn{org112}\And
M.R.~Ersdal\Irefn{org21}\And
B.~Espagnon\Irefn{org61}\And
G.~Eulisse\Irefn{org33}\And
D.~Evans\Irefn{org110}\And
S.~Evdokimov\Irefn{org90}\And
L.~Fabbietti\Irefn{org104}\textsuperscript{,}\Irefn{org117}\And
M.~Faggin\Irefn{org28}\And
J.~Faivre\Irefn{org78}\And
F.~Fan\Irefn{org6}\And
A.~Fantoni\Irefn{org51}\And
M.~Fasel\Irefn{org95}\And
P.~Fecchio\Irefn{org30}\And
A.~Feliciello\Irefn{org58}\And
G.~Feofilov\Irefn{org112}\And
A.~Fern\'{a}ndez T\'{e}llez\Irefn{org44}\And
A.~Ferrero\Irefn{org137}\And
A.~Ferretti\Irefn{org25}\And
A.~Festanti\Irefn{org33}\And
V.J.G.~Feuillard\Irefn{org103}\And
J.~Figiel\Irefn{org118}\And
S.~Filchagin\Irefn{org108}\And
D.~Finogeev\Irefn{org62}\And
F.M.~Fionda\Irefn{org21}\And
G.~Fiorenza\Irefn{org52}\And
F.~Flor\Irefn{org125}\And
S.~Foertsch\Irefn{org72}\And
P.~Foka\Irefn{org106}\And
S.~Fokin\Irefn{org87}\And
E.~Fragiacomo\Irefn{org59}\And
U.~Frankenfeld\Irefn{org106}\And
U.~Fuchs\Irefn{org33}\And
C.~Furget\Irefn{org78}\And
A.~Furs\Irefn{org62}\And
M.~Fusco Girard\Irefn{org29}\And
J.J.~Gaardh{\o}je\Irefn{org88}\And
M.~Gagliardi\Irefn{org25}\And
A.M.~Gago\Irefn{org111}\And
A.~Gal\Irefn{org136}\And
C.D.~Galvan\Irefn{org120}\And
P.~Ganoti\Irefn{org83}\And
C.~Garabatos\Irefn{org106}\And
E.~Garcia-Solis\Irefn{org11}\And
K.~Garg\Irefn{org27}\And
C.~Gargiulo\Irefn{org33}\And
A.~Garibli\Irefn{org86}\And
K.~Garner\Irefn{org144}\And
P.~Gasik\Irefn{org104}\textsuperscript{,}\Irefn{org117}\And
E.F.~Gauger\Irefn{org119}\And
M.B.~Gay Ducati\Irefn{org70}\And
M.~Germain\Irefn{org114}\And
J.~Ghosh\Irefn{org109}\And
P.~Ghosh\Irefn{org141}\And
S.K.~Ghosh\Irefn{org3}\And
P.~Gianotti\Irefn{org51}\And
P.~Giubellino\Irefn{org58}\textsuperscript{,}\Irefn{org106}\And
P.~Giubilato\Irefn{org28}\And
P.~Gl\"{a}ssel\Irefn{org103}\And
D.M.~Gom\'{e}z Coral\Irefn{org71}\And
A.~Gomez Ramirez\Irefn{org74}\And
V.~Gonzalez\Irefn{org106}\And
P.~Gonz\'{a}lez-Zamora\Irefn{org44}\And
S.~Gorbunov\Irefn{org38}\And
L.~G\"{o}rlich\Irefn{org118}\And
S.~Gotovac\Irefn{org34}\And
V.~Grabski\Irefn{org71}\And
L.K.~Graczykowski\Irefn{org142}\And
K.L.~Graham\Irefn{org110}\And
L.~Greiner\Irefn{org79}\And
A.~Grelli\Irefn{org63}\And
C.~Grigoras\Irefn{org33}\And
V.~Grigoriev\Irefn{org92}\And
A.~Grigoryan\Irefn{org1}\And
S.~Grigoryan\Irefn{org75}\And
O.S.~Groettvik\Irefn{org21}\And
F.~Grosa\Irefn{org30}\And
J.F.~Grosse-Oetringhaus\Irefn{org33}\And
R.~Grosso\Irefn{org106}\And
R.~Guernane\Irefn{org78}\And
M.~Guittiere\Irefn{org114}\And
K.~Gulbrandsen\Irefn{org88}\And
T.~Gunji\Irefn{org132}\And
A.~Gupta\Irefn{org100}\And
R.~Gupta\Irefn{org100}\And
I.B.~Guzman\Irefn{org44}\And
R.~Haake\Irefn{org146}\And
M.K.~Habib\Irefn{org106}\And
C.~Hadjidakis\Irefn{org61}\And
H.~Hamagaki\Irefn{org81}\And
G.~Hamar\Irefn{org145}\And
M.~Hamid\Irefn{org6}\And
R.~Hannigan\Irefn{org119}\And
M.R.~Haque\Irefn{org63}\textsuperscript{,}\Irefn{org85}\And
A.~Harlenderova\Irefn{org106}\And
J.W.~Harris\Irefn{org146}\And
A.~Harton\Irefn{org11}\And
J.A.~Hasenbichler\Irefn{org33}\And
H.~Hassan\Irefn{org95}\And
D.~Hatzifotiadou\Irefn{org10}\textsuperscript{,}\Irefn{org53}\And
P.~Hauer\Irefn{org42}\And
S.~Hayashi\Irefn{org132}\And
S.T.~Heckel\Irefn{org68}\textsuperscript{,}\Irefn{org104}\And
E.~Hellb\"{a}r\Irefn{org68}\And
H.~Helstrup\Irefn{org35}\And
A.~Herghelegiu\Irefn{org47}\And
T.~Herman\Irefn{org36}\And
E.G.~Hernandez\Irefn{org44}\And
G.~Herrera Corral\Irefn{org9}\And
F.~Herrmann\Irefn{org144}\And
K.F.~Hetland\Irefn{org35}\And
T.E.~Hilden\Irefn{org43}\And
H.~Hillemanns\Irefn{org33}\And
C.~Hills\Irefn{org127}\And
B.~Hippolyte\Irefn{org136}\And
B.~Hohlweger\Irefn{org104}\And
D.~Horak\Irefn{org36}\And
A.~Hornung\Irefn{org68}\And
S.~Hornung\Irefn{org106}\And
R.~Hosokawa\Irefn{org15}\textsuperscript{,}\Irefn{org133}\And
P.~Hristov\Irefn{org33}\And
C.~Huang\Irefn{org61}\And
C.~Hughes\Irefn{org130}\And
P.~Huhn\Irefn{org68}\And
T.J.~Humanic\Irefn{org96}\And
H.~Hushnud\Irefn{org109}\And
L.A.~Husova\Irefn{org144}\And
N.~Hussain\Irefn{org41}\And
S.A.~Hussain\Irefn{org14}\And
D.~Hutter\Irefn{org38}\And
J.P.~Iddon\Irefn{org33}\textsuperscript{,}\Irefn{org127}\And
R.~Ilkaev\Irefn{org108}\And
M.~Inaba\Irefn{org133}\And
G.M.~Innocenti\Irefn{org33}\And
M.~Ippolitov\Irefn{org87}\And
A.~Isakov\Irefn{org94}\And
M.S.~Islam\Irefn{org109}\And
M.~Ivanov\Irefn{org106}\And
V.~Ivanov\Irefn{org97}\And
V.~Izucheev\Irefn{org90}\And
B.~Jacak\Irefn{org79}\And
N.~Jacazio\Irefn{org53}\And
P.M.~Jacobs\Irefn{org79}\And
S.~Jadlovska\Irefn{org116}\And
J.~Jadlovsky\Irefn{org116}\And
S.~Jaelani\Irefn{org63}\And
C.~Jahnke\Irefn{org121}\And
M.J.~Jakubowska\Irefn{org142}\And
M.A.~Janik\Irefn{org142}\And
T.~Janson\Irefn{org74}\And
M.~Jercic\Irefn{org98}\And
O.~Jevons\Irefn{org110}\And
M.~Jin\Irefn{org125}\And
F.~Jonas\Irefn{org95}\textsuperscript{,}\Irefn{org144}\And
P.G.~Jones\Irefn{org110}\And
J.~Jung\Irefn{org68}\And
M.~Jung\Irefn{org68}\And
A.~Jusko\Irefn{org110}\And
P.~Kalinak\Irefn{org64}\And
A.~Kalweit\Irefn{org33}\And
V.~Kaplin\Irefn{org92}\And
S.~Kar\Irefn{org6}\And
A.~Karasu Uysal\Irefn{org77}\And
O.~Karavichev\Irefn{org62}\And
T.~Karavicheva\Irefn{org62}\And
P.~Karczmarczyk\Irefn{org33}\And
E.~Karpechev\Irefn{org62}\And
A.~Kazantsev\Irefn{org87}\And
U.~Kebschull\Irefn{org74}\And
R.~Keidel\Irefn{org46}\And
M.~Keil\Irefn{org33}\And
B.~Ketzer\Irefn{org42}\And
Z.~Khabanova\Irefn{org89}\And
A.M.~Khan\Irefn{org6}\And
S.~Khan\Irefn{org16}\And
S.A.~Khan\Irefn{org141}\And
A.~Khanzadeev\Irefn{org97}\And
Y.~Kharlov\Irefn{org90}\And
A.~Khatun\Irefn{org16}\And
A.~Khuntia\Irefn{org118}\And
B.~Kileng\Irefn{org35}\And
B.~Kim\Irefn{org60}\And
B.~Kim\Irefn{org133}\And
D.~Kim\Irefn{org147}\And
D.J.~Kim\Irefn{org126}\And
E.J.~Kim\Irefn{org73}\And
H.~Kim\Irefn{org17}\textsuperscript{,}\Irefn{org147}\And
J.~Kim\Irefn{org147}\And
J.S.~Kim\Irefn{org40}\And
J.~Kim\Irefn{org103}\And
J.~Kim\Irefn{org147}\And
J.~Kim\Irefn{org73}\And
M.~Kim\Irefn{org103}\And
S.~Kim\Irefn{org18}\And
T.~Kim\Irefn{org147}\And
T.~Kim\Irefn{org147}\And
S.~Kirsch\Irefn{org38}\textsuperscript{,}\Irefn{org68}\And
I.~Kisel\Irefn{org38}\And
S.~Kiselev\Irefn{org91}\And
A.~Kisiel\Irefn{org142}\And
J.L.~Klay\Irefn{org5}\And
C.~Klein\Irefn{org68}\And
J.~Klein\Irefn{org58}\And
S.~Klein\Irefn{org79}\And
C.~Klein-B\"{o}sing\Irefn{org144}\And
M.~Kleiner\Irefn{org68}\And
A.~Kluge\Irefn{org33}\And
M.L.~Knichel\Irefn{org33}\And
A.G.~Knospe\Irefn{org125}\And
C.~Kobdaj\Irefn{org115}\And
M.K.~K\"{o}hler\Irefn{org103}\And
T.~Kollegger\Irefn{org106}\And
A.~Kondratyev\Irefn{org75}\And
N.~Kondratyeva\Irefn{org92}\And
E.~Kondratyuk\Irefn{org90}\And
J.~Konig\Irefn{org68}\And
P.J.~Konopka\Irefn{org33}\And
L.~Koska\Irefn{org116}\And
O.~Kovalenko\Irefn{org84}\And
V.~Kovalenko\Irefn{org112}\And
M.~Kowalski\Irefn{org118}\And
I.~Kr\'{a}lik\Irefn{org64}\And
A.~Krav\v{c}\'{a}kov\'{a}\Irefn{org37}\And
L.~Kreis\Irefn{org106}\And
M.~Krivda\Irefn{org64}\textsuperscript{,}\Irefn{org110}\And
F.~Krizek\Irefn{org94}\And
K.~Krizkova~Gajdosova\Irefn{org36}\And
M.~Kr\"uger\Irefn{org68}\And
E.~Kryshen\Irefn{org97}\And
M.~Krzewicki\Irefn{org38}\And
A.M.~Kubera\Irefn{org96}\And
V.~Ku\v{c}era\Irefn{org60}\And
C.~Kuhn\Irefn{org136}\And
P.G.~Kuijer\Irefn{org89}\And
L.~Kumar\Irefn{org99}\And
S.~Kumar\Irefn{org48}\And
S.~Kundu\Irefn{org85}\And
P.~Kurashvili\Irefn{org84}\And
A.~Kurepin\Irefn{org62}\And
A.B.~Kurepin\Irefn{org62}\And
A.~Kuryakin\Irefn{org108}\And
S.~Kushpil\Irefn{org94}\And
J.~Kvapil\Irefn{org110}\And
M.J.~Kweon\Irefn{org60}\And
J.Y.~Kwon\Irefn{org60}\And
Y.~Kwon\Irefn{org147}\And
S.L.~La Pointe\Irefn{org38}\And
P.~La Rocca\Irefn{org27}\And
Y.S.~Lai\Irefn{org79}\And
R.~Langoy\Irefn{org129}\And
K.~Lapidus\Irefn{org33}\And
A.~Lardeux\Irefn{org20}\And
P.~Larionov\Irefn{org51}\And
E.~Laudi\Irefn{org33}\And
R.~Lavicka\Irefn{org36}\And
T.~Lazareva\Irefn{org112}\And
R.~Lea\Irefn{org24}\And
L.~Leardini\Irefn{org103}\And
J.~Lee\Irefn{org133}\And
S.~Lee\Irefn{org147}\And
F.~Lehas\Irefn{org89}\And
S.~Lehner\Irefn{org113}\And
J.~Lehrbach\Irefn{org38}\And
R.C.~Lemmon\Irefn{org93}\And
I.~Le\'{o}n Monz\'{o}n\Irefn{org120}\And
E.D.~Lesser\Irefn{org19}\And
M.~Lettrich\Irefn{org33}\And
P.~L\'{e}vai\Irefn{org145}\And
X.~Li\Irefn{org12}\And
X.L.~Li\Irefn{org6}\And
J.~Lien\Irefn{org129}\And
R.~Lietava\Irefn{org110}\And
B.~Lim\Irefn{org17}\And
V.~Lindenstruth\Irefn{org38}\And
S.W.~Lindsay\Irefn{org127}\And
C.~Lippmann\Irefn{org106}\And
M.A.~Lisa\Irefn{org96}\And
V.~Litichevskyi\Irefn{org43}\And
A.~Liu\Irefn{org19}\And
S.~Liu\Irefn{org96}\And
W.J.~Llope\Irefn{org143}\And
I.M.~Lofnes\Irefn{org21}\And
V.~Loginov\Irefn{org92}\And
C.~Loizides\Irefn{org95}\And
P.~Loncar\Irefn{org34}\And
X.~Lopez\Irefn{org134}\And
E.~L\'{o}pez Torres\Irefn{org8}\And
J.R.~Luhder\Irefn{org144}\And
M.~Lunardon\Irefn{org28}\And
G.~Luparello\Irefn{org59}\And
Y.~Ma\Irefn{org39}\And
A.~Maevskaya\Irefn{org62}\And
M.~Mager\Irefn{org33}\And
S.M.~Mahmood\Irefn{org20}\And
T.~Mahmoud\Irefn{org42}\And
A.~Maire\Irefn{org136}\And
R.D.~Majka\Irefn{org146}\And
M.~Malaev\Irefn{org97}\And
Q.W.~Malik\Irefn{org20}\And
L.~Malinina\Irefn{org75}\Aref{orgII}\And
D.~Mal'Kevich\Irefn{org91}\And
P.~Malzacher\Irefn{org106}\And
G.~Mandaglio\Irefn{org55}\And
V.~Manko\Irefn{org87}\And
F.~Manso\Irefn{org134}\And
V.~Manzari\Irefn{org52}\And
Y.~Mao\Irefn{org6}\And
M.~Marchisone\Irefn{org135}\And
J.~Mare\v{s}\Irefn{org66}\And
G.V.~Margagliotti\Irefn{org24}\And
A.~Margotti\Irefn{org53}\And
J.~Margutti\Irefn{org63}\And
A.~Mar\'{\i}n\Irefn{org106}\And
C.~Markert\Irefn{org119}\And
M.~Marquard\Irefn{org68}\And
N.A.~Martin\Irefn{org103}\And
P.~Martinengo\Irefn{org33}\And
J.L.~Martinez\Irefn{org125}\And
M.I.~Mart\'{\i}nez\Irefn{org44}\And
G.~Mart\'{\i}nez Garc\'{\i}a\Irefn{org114}\And
M.~Martinez Pedreira\Irefn{org33}\And
S.~Masciocchi\Irefn{org106}\And
M.~Masera\Irefn{org25}\And
A.~Masoni\Irefn{org54}\And
L.~Massacrier\Irefn{org61}\And
E.~Masson\Irefn{org114}\And
A.~Mastroserio\Irefn{org52}\textsuperscript{,}\Irefn{org138}\And
A.M.~Mathis\Irefn{org104}\textsuperscript{,}\Irefn{org117}\And
O.~Matonoha\Irefn{org80}\And
P.F.T.~Matuoka\Irefn{org121}\And
A.~Matyja\Irefn{org118}\And
C.~Mayer\Irefn{org118}\And
M.~Mazzilli\Irefn{org52}\And
M.A.~Mazzoni\Irefn{org57}\And
A.F.~Mechler\Irefn{org68}\And
F.~Meddi\Irefn{org22}\And
Y.~Melikyan\Irefn{org62}\textsuperscript{,}\Irefn{org92}\And
A.~Menchaca-Rocha\Irefn{org71}\And
C.~Mengke\Irefn{org6}\And
E.~Meninno\Irefn{org29}\textsuperscript{,}\Irefn{org113}\And
M.~Meres\Irefn{org13}\And
S.~Mhlanga\Irefn{org124}\And
Y.~Miake\Irefn{org133}\And
L.~Micheletti\Irefn{org25}\And
D.L.~Mihaylov\Irefn{org104}\And
K.~Mikhaylov\Irefn{org75}\textsuperscript{,}\Irefn{org91}\And
A.~Mischke\Irefn{org63}\Aref{org*}\And
A.N.~Mishra\Irefn{org69}\And
D.~Mi\'{s}kowiec\Irefn{org106}\And
A.~Modak\Irefn{org3}\And
N.~Mohammadi\Irefn{org33}\And
A.P.~Mohanty\Irefn{org63}\And
B.~Mohanty\Irefn{org85}\And
M.~Mohisin Khan\Irefn{org16}\Aref{orgIII}\And
C.~Mordasini\Irefn{org104}\And
D.A.~Moreira De Godoy\Irefn{org144}\And
L.A.P.~Moreno\Irefn{org44}\And
I.~Morozov\Irefn{org62}\And
A.~Morsch\Irefn{org33}\And
T.~Mrnjavac\Irefn{org33}\And
V.~Muccifora\Irefn{org51}\And
E.~Mudnic\Irefn{org34}\And
D.~M{\"u}hlheim\Irefn{org144}\And
S.~Muhuri\Irefn{org141}\And
J.D.~Mulligan\Irefn{org79}\And
M.G.~Munhoz\Irefn{org121}\And
R.H.~Munzer\Irefn{org68}\And
H.~Murakami\Irefn{org132}\And
S.~Murray\Irefn{org124}\And
L.~Musa\Irefn{org33}\And
J.~Musinsky\Irefn{org64}\And
C.J.~Myers\Irefn{org125}\And
J.W.~Myrcha\Irefn{org142}\And
B.~Naik\Irefn{org48}\And
R.~Nair\Irefn{org84}\And
B.K.~Nandi\Irefn{org48}\And
R.~Nania\Irefn{org10}\textsuperscript{,}\Irefn{org53}\And
E.~Nappi\Irefn{org52}\And
M.U.~Naru\Irefn{org14}\And
A.F.~Nassirpour\Irefn{org80}\And
C.~Nattrass\Irefn{org130}\And
R.~Nayak\Irefn{org48}\And
T.K.~Nayak\Irefn{org85}\And
S.~Nazarenko\Irefn{org108}\And
A.~Neagu\Irefn{org20}\And
R.A.~Negrao De Oliveira\Irefn{org68}\And
L.~Nellen\Irefn{org69}\And
S.V.~Nesbo\Irefn{org35}\And
G.~Neskovic\Irefn{org38}\And
D.~Nesterov\Irefn{org112}\And
L.T.~Neumann\Irefn{org142}\And
B.S.~Nielsen\Irefn{org88}\And
S.~Nikolaev\Irefn{org87}\And
S.~Nikulin\Irefn{org87}\And
V.~Nikulin\Irefn{org97}\And
F.~Noferini\Irefn{org10}\textsuperscript{,}\Irefn{org53}\And
P.~Nomokonov\Irefn{org75}\And
J.~Norman\Irefn{org78}\textsuperscript{,}\Irefn{org127}\And
N.~Novitzky\Irefn{org133}\And
P.~Nowakowski\Irefn{org142}\And
A.~Nyanin\Irefn{org87}\And
J.~Nystrand\Irefn{org21}\And
M.~Ogino\Irefn{org81}\And
A.~Ohlson\Irefn{org80}\textsuperscript{,}\Irefn{org103}\And
J.~Oleniacz\Irefn{org142}\And
A.C.~Oliveira Da Silva\Irefn{org121}\textsuperscript{,}\Irefn{org130}\And
M.H.~Oliver\Irefn{org146}\And
C.~Oppedisano\Irefn{org58}\And
R.~Orava\Irefn{org43}\And
A.~Ortiz Velasquez\Irefn{org69}\And
A.~Oskarsson\Irefn{org80}\And
J.~Otwinowski\Irefn{org118}\And
K.~Oyama\Irefn{org81}\And
Y.~Pachmayer\Irefn{org103}\And
V.~Pacik\Irefn{org88}\And
D.~Pagano\Irefn{org140}\And
G.~Pai\'{c}\Irefn{org69}\And
J.~Pan\Irefn{org143}\And
A.K.~Pandey\Irefn{org48}\And
S.~Panebianco\Irefn{org137}\And
P.~Pareek\Irefn{org49}\textsuperscript{,}\Irefn{org141}\And
J.~Park\Irefn{org60}\And
J.E.~Parkkila\Irefn{org126}\And
S.~Parmar\Irefn{org99}\And
S.P.~Pathak\Irefn{org125}\And
R.N.~Patra\Irefn{org141}\And
B.~Paul\Irefn{org23}\textsuperscript{,}\Irefn{org58}\And
H.~Pei\Irefn{org6}\And
T.~Peitzmann\Irefn{org63}\And
X.~Peng\Irefn{org6}\And
L.G.~Pereira\Irefn{org70}\And
H.~Pereira Da Costa\Irefn{org137}\And
D.~Peresunko\Irefn{org87}\And
G.M.~Perez\Irefn{org8}\And
E.~Perez Lezama\Irefn{org68}\And
V.~Peskov\Irefn{org68}\And
Y.~Pestov\Irefn{org4}\And
V.~Petr\'{a}\v{c}ek\Irefn{org36}\And
M.~Petrovici\Irefn{org47}\And
R.P.~Pezzi\Irefn{org70}\And
S.~Piano\Irefn{org59}\And
M.~Pikna\Irefn{org13}\And
P.~Pillot\Irefn{org114}\And
O.~Pinazza\Irefn{org33}\textsuperscript{,}\Irefn{org53}\And
L.~Pinsky\Irefn{org125}\And
C.~Pinto\Irefn{org27}\And
S.~Pisano\Irefn{org10}\textsuperscript{,}\Irefn{org51}\And
D.~Pistone\Irefn{org55}\And
M.~P\l osko\'{n}\Irefn{org79}\And
M.~Planinic\Irefn{org98}\And
F.~Pliquett\Irefn{org68}\And
J.~Pluta\Irefn{org142}\And
S.~Pochybova\Irefn{org145}\Aref{org*}\And
M.G.~Poghosyan\Irefn{org95}\And
B.~Polichtchouk\Irefn{org90}\And
N.~Poljak\Irefn{org98}\And
A.~Pop\Irefn{org47}\And
H.~Poppenborg\Irefn{org144}\And
S.~Porteboeuf-Houssais\Irefn{org134}\And
V.~Pozdniakov\Irefn{org75}\And
S.K.~Prasad\Irefn{org3}\And
R.~Preghenella\Irefn{org53}\And
F.~Prino\Irefn{org58}\And
C.A.~Pruneau\Irefn{org143}\And
I.~Pshenichnov\Irefn{org62}\And
M.~Puccio\Irefn{org25}\textsuperscript{,}\Irefn{org33}\And
J.~Putschke\Irefn{org143}\And
R.E.~Quishpe\Irefn{org125}\And
S.~Ragoni\Irefn{org110}\And
S.~Raha\Irefn{org3}\And
S.~Rajput\Irefn{org100}\And
J.~Rak\Irefn{org126}\And
A.~Rakotozafindrabe\Irefn{org137}\And
L.~Ramello\Irefn{org31}\And
F.~Rami\Irefn{org136}\And
R.~Raniwala\Irefn{org101}\And
S.~Raniwala\Irefn{org101}\And
S.S.~R\"{a}s\"{a}nen\Irefn{org43}\And
R.~Rath\Irefn{org49}\And
V.~Ratza\Irefn{org42}\And
I.~Ravasenga\Irefn{org30}\textsuperscript{,}\Irefn{org89}\And
K.F.~Read\Irefn{org95}\textsuperscript{,}\Irefn{org130}\And
K.~Redlich\Irefn{org84}\Aref{orgIV}\And
A.~Rehman\Irefn{org21}\And
P.~Reichelt\Irefn{org68}\And
F.~Reidt\Irefn{org33}\And
X.~Ren\Irefn{org6}\And
R.~Renfordt\Irefn{org68}\And
Z.~Rescakova\Irefn{org37}\And
J.-P.~Revol\Irefn{org10}\And
K.~Reygers\Irefn{org103}\And
V.~Riabov\Irefn{org97}\And
T.~Richert\Irefn{org80}\textsuperscript{,}\Irefn{org88}\And
M.~Richter\Irefn{org20}\And
P.~Riedler\Irefn{org33}\And
W.~Riegler\Irefn{org33}\And
F.~Riggi\Irefn{org27}\And
C.~Ristea\Irefn{org67}\And
S.P.~Rode\Irefn{org49}\And
M.~Rodr\'{i}guez Cahuantzi\Irefn{org44}\And
K.~R{\o}ed\Irefn{org20}\And
R.~Rogalev\Irefn{org90}\And
E.~Rogochaya\Irefn{org75}\And
D.~Rohr\Irefn{org33}\And
D.~R\"ohrich\Irefn{org21}\And
P.S.~Rokita\Irefn{org142}\And
F.~Ronchetti\Irefn{org51}\And
E.D.~Rosas\Irefn{org69}\And
K.~Roslon\Irefn{org142}\And
A.~Rossi\Irefn{org28}\textsuperscript{,}\Irefn{org56}\And
A.~Rotondi\Irefn{org139}\And
A.~Roy\Irefn{org49}\And
P.~Roy\Irefn{org109}\And
O.V.~Rueda\Irefn{org80}\And
R.~Rui\Irefn{org24}\And
B.~Rumyantsev\Irefn{org75}\And
A.~Rustamov\Irefn{org86}\And
E.~Ryabinkin\Irefn{org87}\And
Y.~Ryabov\Irefn{org97}\And
A.~Rybicki\Irefn{org118}\And
H.~Rytkonen\Irefn{org126}\And
O.A.M.~Saarimaki\Irefn{org43}\And
S.~Sadhu\Irefn{org141}\And
S.~Sadovsky\Irefn{org90}\And
K.~\v{S}afa\v{r}\'{\i}k\Irefn{org36}\And
S.K.~Saha\Irefn{org141}\And
B.~Sahoo\Irefn{org48}\And
P.~Sahoo\Irefn{org48}\textsuperscript{,}\Irefn{org49}\And
R.~Sahoo\Irefn{org49}\And
S.~Sahoo\Irefn{org65}\And
P.K.~Sahu\Irefn{org65}\And
J.~Saini\Irefn{org141}\And
S.~Sakai\Irefn{org133}\And
S.~Sambyal\Irefn{org100}\And
V.~Samsonov\Irefn{org92}\textsuperscript{,}\Irefn{org97}\And
D.~Sarkar\Irefn{org143}\And
N.~Sarkar\Irefn{org141}\And
P.~Sarma\Irefn{org41}\And
V.M.~Sarti\Irefn{org104}\And
M.H.P.~Sas\Irefn{org63}\And
E.~Scapparone\Irefn{org53}\And
B.~Schaefer\Irefn{org95}\And
J.~Schambach\Irefn{org119}\And
H.S.~Scheid\Irefn{org68}\And
C.~Schiaua\Irefn{org47}\And
R.~Schicker\Irefn{org103}\And
A.~Schmah\Irefn{org103}\And
C.~Schmidt\Irefn{org106}\And
H.R.~Schmidt\Irefn{org102}\And
M.O.~Schmidt\Irefn{org103}\And
M.~Schmidt\Irefn{org102}\And
N.V.~Schmidt\Irefn{org68}\textsuperscript{,}\Irefn{org95}\And
A.R.~Schmier\Irefn{org130}\And
J.~Schukraft\Irefn{org88}\And
Y.~Schutz\Irefn{org33}\textsuperscript{,}\Irefn{org136}\And
K.~Schwarz\Irefn{org106}\And
K.~Schweda\Irefn{org106}\And
G.~Scioli\Irefn{org26}\And
E.~Scomparin\Irefn{org58}\And
M.~\v{S}ef\v{c}\'ik\Irefn{org37}\And
J.E.~Seger\Irefn{org15}\And
Y.~Sekiguchi\Irefn{org132}\And
D.~Sekihata\Irefn{org132}\And
I.~Selyuzhenkov\Irefn{org92}\textsuperscript{,}\Irefn{org106}\And
S.~Senyukov\Irefn{org136}\And
D.~Serebryakov\Irefn{org62}\And
E.~Serradilla\Irefn{org71}\And
A.~Sevcenco\Irefn{org67}\And
A.~Shabanov\Irefn{org62}\And
A.~Shabetai\Irefn{org114}\And
R.~Shahoyan\Irefn{org33}\And
W.~Shaikh\Irefn{org109}\And
A.~Shangaraev\Irefn{org90}\And
A.~Sharma\Irefn{org99}\And
A.~Sharma\Irefn{org100}\And
H.~Sharma\Irefn{org118}\And
M.~Sharma\Irefn{org100}\And
N.~Sharma\Irefn{org99}\And
A.I.~Sheikh\Irefn{org141}\And
K.~Shigaki\Irefn{org45}\And
M.~Shimomura\Irefn{org82}\And
S.~Shirinkin\Irefn{org91}\And
Q.~Shou\Irefn{org39}\And
Y.~Sibiriak\Irefn{org87}\And
S.~Siddhanta\Irefn{org54}\And
T.~Siemiarczuk\Irefn{org84}\And
D.~Silvermyr\Irefn{org80}\And
G.~Simatovic\Irefn{org89}\And
G.~Simonetti\Irefn{org33}\textsuperscript{,}\Irefn{org104}\And
R.~Singh\Irefn{org85}\And
R.~Singh\Irefn{org100}\And
R.~Singh\Irefn{org49}\And
V.K.~Singh\Irefn{org141}\And
V.~Singhal\Irefn{org141}\And
T.~Sinha\Irefn{org109}\And
B.~Sitar\Irefn{org13}\And
M.~Sitta\Irefn{org31}\And
T.B.~Skaali\Irefn{org20}\And
M.~Slupecki\Irefn{org126}\And
N.~Smirnov\Irefn{org146}\And
R.J.M.~Snellings\Irefn{org63}\And
T.W.~Snellman\Irefn{org43}\textsuperscript{,}\Irefn{org126}\And
C.~Soncco\Irefn{org111}\And
J.~Song\Irefn{org60}\textsuperscript{,}\Irefn{org125}\And
A.~Songmoolnak\Irefn{org115}\And
F.~Soramel\Irefn{org28}\And
S.~Sorensen\Irefn{org130}\And
I.~Sputowska\Irefn{org118}\And
J.~Stachel\Irefn{org103}\And
I.~Stan\Irefn{org67}\And
P.~Stankus\Irefn{org95}\And
P.J.~Steffanic\Irefn{org130}\And
E.~Stenlund\Irefn{org80}\And
D.~Stocco\Irefn{org114}\And
M.M.~Storetvedt\Irefn{org35}\And
L.D.~Stritto\Irefn{org29}\And
A.A.P.~Suaide\Irefn{org121}\And
T.~Sugitate\Irefn{org45}\And
C.~Suire\Irefn{org61}\And
M.~Suleymanov\Irefn{org14}\And
M.~Suljic\Irefn{org33}\And
R.~Sultanov\Irefn{org91}\And
M.~\v{S}umbera\Irefn{org94}\And
S.~Sumowidagdo\Irefn{org50}\And
S.~Swain\Irefn{org65}\And
A.~Szabo\Irefn{org13}\And
I.~Szarka\Irefn{org13}\And
U.~Tabassam\Irefn{org14}\And
G.~Taillepied\Irefn{org134}\And
J.~Takahashi\Irefn{org122}\And
G.J.~Tambave\Irefn{org21}\And
S.~Tang\Irefn{org6}\textsuperscript{,}\Irefn{org134}\And
M.~Tarhini\Irefn{org114}\And
M.G.~Tarzila\Irefn{org47}\And
A.~Tauro\Irefn{org33}\And
G.~Tejeda Mu\~{n}oz\Irefn{org44}\And
A.~Telesca\Irefn{org33}\And
C.~Terrevoli\Irefn{org125}\And
D.~Thakur\Irefn{org49}\And
S.~Thakur\Irefn{org141}\And
D.~Thomas\Irefn{org119}\And
F.~Thoresen\Irefn{org88}\And
R.~Tieulent\Irefn{org135}\And
A.~Tikhonov\Irefn{org62}\And
A.R.~Timmins\Irefn{org125}\And
A.~Toia\Irefn{org68}\And
N.~Topilskaya\Irefn{org62}\And
M.~Toppi\Irefn{org51}\And
F.~Torales-Acosta\Irefn{org19}\And
S.R.~Torres\Irefn{org9}\textsuperscript{,}\Irefn{org120}\And
A.~Trifiro\Irefn{org55}\And
S.~Tripathy\Irefn{org49}\And
T.~Tripathy\Irefn{org48}\And
S.~Trogolo\Irefn{org28}\And
G.~Trombetta\Irefn{org32}\And
L.~Tropp\Irefn{org37}\And
V.~Trubnikov\Irefn{org2}\And
W.H.~Trzaska\Irefn{org126}\And
T.P.~Trzcinski\Irefn{org142}\And
B.A.~Trzeciak\Irefn{org63}\And
T.~Tsuji\Irefn{org132}\And
A.~Tumkin\Irefn{org108}\And
R.~Turrisi\Irefn{org56}\And
T.S.~Tveter\Irefn{org20}\And
K.~Ullaland\Irefn{org21}\And
E.N.~Umaka\Irefn{org125}\And
A.~Uras\Irefn{org135}\And
G.L.~Usai\Irefn{org23}\And
A.~Utrobicic\Irefn{org98}\And
M.~Vala\Irefn{org37}\And
N.~Valle\Irefn{org139}\And
S.~Vallero\Irefn{org58}\And
N.~van der Kolk\Irefn{org63}\And
L.V.R.~van Doremalen\Irefn{org63}\And
M.~van Leeuwen\Irefn{org63}\And
P.~Vande Vyvre\Irefn{org33}\And
D.~Varga\Irefn{org145}\And
Z.~Varga\Irefn{org145}\And
M.~Varga-Kofarago\Irefn{org145}\And
A.~Vargas\Irefn{org44}\And
M.~Vasileiou\Irefn{org83}\And
A.~Vasiliev\Irefn{org87}\And
O.~V\'azquez Doce\Irefn{org104}\textsuperscript{,}\Irefn{org117}\And
V.~Vechernin\Irefn{org112}\And
A.M.~Veen\Irefn{org63}\And
E.~Vercellin\Irefn{org25}\And
S.~Vergara Lim\'on\Irefn{org44}\And
L.~Vermunt\Irefn{org63}\And
R.~Vernet\Irefn{org7}\And
R.~V\'ertesi\Irefn{org145}\And
L.~Vickovic\Irefn{org34}\And
Z.~Vilakazi\Irefn{org131}\And
O.~Villalobos Baillie\Irefn{org110}\And
A.~Villatoro Tello\Irefn{org44}\And
G.~Vino\Irefn{org52}\And
A.~Vinogradov\Irefn{org87}\And
T.~Virgili\Irefn{org29}\And
V.~Vislavicius\Irefn{org88}\And
A.~Vodopyanov\Irefn{org75}\And
B.~Volkel\Irefn{org33}\And
M.A.~V\"{o}lkl\Irefn{org102}\And
K.~Voloshin\Irefn{org91}\And
S.A.~Voloshin\Irefn{org143}\And
G.~Volpe\Irefn{org32}\And
B.~von Haller\Irefn{org33}\And
I.~Vorobyev\Irefn{org104}\And
D.~Voscek\Irefn{org116}\And
J.~Vrl\'{a}kov\'{a}\Irefn{org37}\And
B.~Wagner\Irefn{org21}\And
M.~Weber\Irefn{org113}\And
S.G.~Weber\Irefn{org144}\And
A.~Wegrzynek\Irefn{org33}\And
D.F.~Weiser\Irefn{org103}\And
S.C.~Wenzel\Irefn{org33}\And
J.P.~Wessels\Irefn{org144}\And
J.~Wiechula\Irefn{org68}\And
J.~Wikne\Irefn{org20}\And
G.~Wilk\Irefn{org84}\And
J.~Wilkinson\Irefn{org10}\textsuperscript{,}\Irefn{org53}\And
G.A.~Willems\Irefn{org33}\And
E.~Willsher\Irefn{org110}\And
B.~Windelband\Irefn{org103}\And
M.~Winn\Irefn{org137}\And
W.E.~Witt\Irefn{org130}\And
Y.~Wu\Irefn{org128}\And
R.~Xu\Irefn{org6}\And
S.~Yalcin\Irefn{org77}\And
K.~Yamakawa\Irefn{org45}\And
S.~Yang\Irefn{org21}\And
S.~Yano\Irefn{org137}\And
Z.~Yin\Irefn{org6}\And
H.~Yokoyama\Irefn{org63}\And
I.-K.~Yoo\Irefn{org17}\And
J.H.~Yoon\Irefn{org60}\And
S.~Yuan\Irefn{org21}\And
A.~Yuncu\Irefn{org103}\And
V.~Yurchenko\Irefn{org2}\And
V.~Zaccolo\Irefn{org24}\And
A.~Zaman\Irefn{org14}\And
C.~Zampolli\Irefn{org33}\And
H.J.C.~Zanoli\Irefn{org63}\And
N.~Zardoshti\Irefn{org33}\And
A.~Zarochentsev\Irefn{org112}\And
P.~Z\'{a}vada\Irefn{org66}\And
N.~Zaviyalov\Irefn{org108}\And
H.~Zbroszczyk\Irefn{org142}\And
M.~Zhalov\Irefn{org97}\And
S.~Zhang\Irefn{org39}\And
X.~Zhang\Irefn{org6}\And
Z.~Zhang\Irefn{org6}\And
V.~Zherebchevskii\Irefn{org112}\And
D.~Zhou\Irefn{org6}\And
Y.~Zhou\Irefn{org88}\And
Z.~Zhou\Irefn{org21}\And
J.~Zhu\Irefn{org6}\textsuperscript{,}\Irefn{org106}\And
Y.~Zhu\Irefn{org6}\And
A.~Zichichi\Irefn{org10}\textsuperscript{,}\Irefn{org26}\And
M.B.~Zimmermann\Irefn{org33}\And
G.~Zinovjev\Irefn{org2}\And
N.~Zurlo\Irefn{org140}\And
\renewcommand\labelenumi{\textsuperscript{\theenumi}~}
\section*{Affiliation notes}
\renewcommand\theenumi{\roman{enumi}}
\begin{Authlist}
\item \Adef{org*}Deceased
\item \Adef{orgI}Dipartimento DET del Politecnico di Torino, Turin, Italy
\item \Adef{orgII}M.V. Lomonosov Moscow State University, D.V. Skobeltsyn Institute of Nuclear Physics, Moscow, Russia
\item \Adef{orgIII}Department of Applied Physics, Aligarh Muslim University, Aligarh, India
\item \Adef{orgIV}Institute of Theoretical Physics, University of Wroclaw, Poland
\end{Authlist}
\section*{Collaboration Institutes}
\renewcommand\theenumi{\arabic{enumi}~}
\begin{Authlist}
\item \Idef{org1}A.I. Alikhanyan National Science Laboratory (Yerevan Physics Institute) Foundation, Yerevan, Armenia
\item \Idef{org2}Bogolyubov Institute for Theoretical Physics, National Academy of Sciences of Ukraine, Kiev, Ukraine
\item \Idef{org3}Bose Institute, Department of Physics and Centre for Astroparticle Physics and Space Science (CAPSS), Kolkata, India
\item \Idef{org4}Budker Institute for Nuclear Physics, Novosibirsk, Russia
\item \Idef{org5}California Polytechnic State University, San Luis Obispo, California, United States
\item \Idef{org6}Central China Normal University, Wuhan, China
\item \Idef{org7}Centre de Calcul de l'IN2P3, Villeurbanne, Lyon, France
\item \Idef{org8}Centro de Aplicaciones Tecnol\'{o}gicas y Desarrollo Nuclear (CEADEN), Havana, Cuba
\item \Idef{org9}Centro de Investigaci\'{o}n y de Estudios Avanzados (CINVESTAV), Mexico City and M\'{e}rida, Mexico
\item \Idef{org10}Centro Fermi - Museo Storico della Fisica e Centro Studi e Ricerche ``Enrico Fermi'', Rome, Italy
\item \Idef{org11}Chicago State University, Chicago, Illinois, United States
\item \Idef{org12}China Institute of Atomic Energy, Beijing, China
\item \Idef{org13}Comenius University Bratislava, Faculty of Mathematics, Physics and Informatics, Bratislava, Slovakia
\item \Idef{org14}COMSATS University Islamabad, Islamabad, Pakistan
\item \Idef{org15}Creighton University, Omaha, Nebraska, United States
\item \Idef{org16}Department of Physics, Aligarh Muslim University, Aligarh, India
\item \Idef{org17}Department of Physics, Pusan National University, Pusan, Republic of Korea
\item \Idef{org18}Department of Physics, Sejong University, Seoul, Republic of Korea
\item \Idef{org19}Department of Physics, University of California, Berkeley, California, United States
\item \Idef{org20}Department of Physics, University of Oslo, Oslo, Norway
\item \Idef{org21}Department of Physics and Technology, University of Bergen, Bergen, Norway
\item \Idef{org22}Dipartimento di Fisica dell'Universit\`{a} 'La Sapienza' and Sezione INFN, Rome, Italy
\item \Idef{org23}Dipartimento di Fisica dell'Universit\`{a} and Sezione INFN, Cagliari, Italy
\item \Idef{org24}Dipartimento di Fisica dell'Universit\`{a} and Sezione INFN, Trieste, Italy
\item \Idef{org25}Dipartimento di Fisica dell'Universit\`{a} and Sezione INFN, Turin, Italy
\item \Idef{org26}Dipartimento di Fisica e Astronomia dell'Universit\`{a} and Sezione INFN, Bologna, Italy
\item \Idef{org27}Dipartimento di Fisica e Astronomia dell'Universit\`{a} and Sezione INFN, Catania, Italy
\item \Idef{org28}Dipartimento di Fisica e Astronomia dell'Universit\`{a} and Sezione INFN, Padova, Italy
\item \Idef{org29}Dipartimento di Fisica `E.R.~Caianiello' dell'Universit\`{a} and Gruppo Collegato INFN, Salerno, Italy
\item \Idef{org30}Dipartimento DISAT del Politecnico and Sezione INFN, Turin, Italy
\item \Idef{org31}Dipartimento di Scienze e Innovazione Tecnologica dell'Universit\`{a} del Piemonte Orientale and INFN Sezione di Torino, Alessandria, Italy
\item \Idef{org32}Dipartimento Interateneo di Fisica `M.~Merlin' and Sezione INFN, Bari, Italy
\item \Idef{org33}European Organization for Nuclear Research (CERN), Geneva, Switzerland
\item \Idef{org34}Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture, University of Split, Split, Croatia
\item \Idef{org35}Faculty of Engineering and Science, Western Norway University of Applied Sciences, Bergen, Norway
\item \Idef{org36}Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University in Prague, Prague, Czech Republic
\item \Idef{org37}Faculty of Science, P.J.~\v{S}af\'{a}rik University, Ko\v{s}ice, Slovakia
\item \Idef{org38}Frankfurt Institute for Advanced Studies, Johann Wolfgang Goethe-Universit\"{a}t Frankfurt, Frankfurt, Germany
\item \Idef{org39}Fudan University, Shanghai, China
\item \Idef{org40}Gangneung-Wonju National University, Gangneung, Republic of Korea
\item \Idef{org41}Gauhati University, Department of Physics, Guwahati, India
\item \Idef{org42}Helmholtz-Institut f\"{u}r Strahlen- und Kernphysik, Rheinische Friedrich-Wilhelms-Universit\"{a}t Bonn, Bonn, Germany
\item \Idef{org43}Helsinki Institute of Physics (HIP), Helsinki, Finland
\item \Idef{org44}High Energy Physics Group, Universidad Aut\'{o}noma de Puebla, Puebla, Mexico
\item \Idef{org45}Hiroshima University, Hiroshima, Japan
\item \Idef{org46}Hochschule Worms, Zentrum f\"{u}r Technologietransfer und Telekommunikation (ZTT), Worms, Germany
\item \Idef{org47}Horia Hulubei National Institute of Physics and Nuclear Engineering, Bucharest, Romania
\item \Idef{org48}Indian Institute of Technology Bombay (IIT), Mumbai, India
\item \Idef{org49}Indian Institute of Technology Indore, Indore, India
\item \Idef{org50}Indonesian Institute of Sciences, Jakarta, Indonesia
\item \Idef{org51}INFN, Laboratori Nazionali di Frascati, Frascati, Italy
\item \Idef{org52}INFN, Sezione di Bari, Bari, Italy
\item \Idef{org53}INFN, Sezione di Bologna, Bologna, Italy
\item \Idef{org54}INFN, Sezione di Cagliari, Cagliari, Italy
\item \Idef{org55}INFN, Sezione di Catania, Catania, Italy
\item \Idef{org56}INFN, Sezione di Padova, Padova, Italy
\item \Idef{org57}INFN, Sezione di Roma, Rome, Italy
\item \Idef{org58}INFN, Sezione di Torino, Turin, Italy
\item \Idef{org59}INFN, Sezione di Trieste, Trieste, Italy
\item \Idef{org60}Inha University, Incheon, Republic of Korea
\item \Idef{org61}Institut de Physique Nucl\'{e}aire d'Orsay (IPNO), Institut National de Physique Nucl\'{e}aire et de Physique des Particules (IN2P3/CNRS), Universit\'{e} de Paris-Sud, Universit\'{e} Paris-Saclay, Orsay, France
\item \Idef{org62}Institute for Nuclear Research, Academy of Sciences, Moscow, Russia
\item \Idef{org63}Institute for Subatomic Physics, Utrecht University/Nikhef, Utrecht, Netherlands
\item \Idef{org64}Institute of Experimental Physics, Slovak Academy of Sciences, Ko\v{s}ice, Slovakia
\item \Idef{org65}Institute of Physics, Homi Bhabha National Institute, Bhubaneswar, India
\item \Idef{org66}Institute of Physics of the Czech Academy of Sciences, Prague, Czech Republic
\item \Idef{org67}Institute of Space Science (ISS), Bucharest, Romania
\item \Idef{org68}Institut f\"{u}r Kernphysik, Johann Wolfgang Goethe-Universit\"{a}t Frankfurt, Frankfurt, Germany
\item \Idef{org69}Instituto de Ciencias Nucleares, Universidad Nacional Aut\'{o}noma de M\'{e}xico, Mexico City, Mexico
\item \Idef{org70}Instituto de F\'{i}sica, Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, Brazil
\item \Idef{org71}Instituto de F\'{\i}sica, Universidad Nacional Aut\'{o}noma de M\'{e}xico, Mexico City, Mexico
\item \Idef{org72}iThemba LABS, National Research Foundation, Somerset West, South Africa
\item \Idef{org73}Jeonbuk National University, Jeonju, Republic of Korea
\item \Idef{org74}Johann-Wolfgang-Goethe Universit\"{a}t Frankfurt Institut f\"{u}r Informatik, Fachbereich Informatik und Mathematik, Frankfurt, Germany
\item \Idef{org75}Joint Institute for Nuclear Research (JINR), Dubna, Russia
\item \Idef{org76}Korea Institute of Science and Technology Information, Daejeon, Republic of Korea
\item \Idef{org77}KTO Karatay University, Konya, Turkey
\item \Idef{org78}Laboratoire de Physique Subatomique et de Cosmologie, Universit\'{e} Grenoble-Alpes, CNRS-IN2P3, Grenoble, France
\item \Idef{org79}Lawrence Berkeley National Laboratory, Berkeley, California, United States
\item \Idef{org80}Lund University Department of Physics, Division of Particle Physics, Lund, Sweden
\item \Idef{org81}Nagasaki Institute of Applied Science, Nagasaki, Japan
\item \Idef{org82}Nara Women{'}s University (NWU), Nara, Japan
\item \Idef{org83}National and Kapodistrian University of Athens, School of Science, Department of Physics, Athens, Greece
\item \Idef{org84}National Centre for Nuclear Research, Warsaw, Poland
\item \Idef{org85}National Institute of Science Education and Research, Homi Bhabha National Institute, Jatni, India
\item \Idef{org86}National Nuclear Research Center, Baku, Azerbaijan
\item \Idef{org87}National Research Centre Kurchatov Institute, Moscow, Russia
\item \Idef{org88}Niels Bohr Institute, University of Copenhagen, Copenhagen, Denmark
\item \Idef{org89}Nikhef, National institute for subatomic physics, Amsterdam, Netherlands
\item \Idef{org90}NRC Kurchatov Institute IHEP, Protvino, Russia
\item \Idef{org91}NRC «Kurchatov Institute» - ITEP, Moscow, Russia
\item \Idef{org92}NRNU Moscow Engineering Physics Institute, Moscow, Russia
\item \Idef{org93}Nuclear Physics Group, STFC Daresbury Laboratory, Daresbury, United Kingdom
\item \Idef{org94}Nuclear Physics Institute of the Czech Academy of Sciences, \v{R}e\v{z} u Prahy, Czech Republic
\item \Idef{org95}Oak Ridge National Laboratory, Oak Ridge, Tennessee, United States
\item \Idef{org96}Ohio State University, Columbus, Ohio, United States
\item \Idef{org97}Petersburg Nuclear Physics Institute, Gatchina, Russia
\item \Idef{org98}Physics department, Faculty of science, University of Zagreb, Zagreb, Croatia
\item \Idef{org99}Physics Department, Panjab University, Chandigarh, India
\item \Idef{org100}Physics Department, University of Jammu, Jammu, India
\item \Idef{org101}Physics Department, University of Rajasthan, Jaipur, India
\item \Idef{org102}Physikalisches Institut, Eberhard-Karls-Universit\"{a}t T\"{u}bingen, T\"{u}bingen, Germany
\item \Idef{org103}Physikalisches Institut, Ruprecht-Karls-Universit\"{a}t Heidelberg, Heidelberg, Germany
\item \Idef{org104}Physik Department, Technische Universit\"{a}t M\"{u}nchen, Munich, Germany
\item \Idef{org105}Politecnico di Bari, Bari, Italy
\item \Idef{org106}Research Division and ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum f\"ur Schwerionenforschung GmbH, Darmstadt, Germany
\item \Idef{org107}Rudjer Bo\v{s}kovi\'{c} Institute, Zagreb, Croatia
\item \Idef{org108}Russian Federal Nuclear Center (VNIIEF), Sarov, Russia
\item \Idef{org109}Saha Institute of Nuclear Physics, Homi Bhabha National Institute, Kolkata, India
\item \Idef{org110}School of Physics and Astronomy, University of Birmingham, Birmingham, United Kingdom
\item \Idef{org111}Secci\'{o}n F\'{\i}sica, Departamento de Ciencias, Pontificia Universidad Cat\'{o}lica del Per\'{u}, Lima, Peru
\item \Idef{org112}St. Petersburg State University, St. Petersburg, Russia
\item \Idef{org113}Stefan Meyer Institut f\"{u}r Subatomare Physik (SMI), Vienna, Austria
\item \Idef{org114}SUBATECH, IMT Atlantique, Universit\'{e} de Nantes, CNRS-IN2P3, Nantes, France
\item \Idef{org115}Suranaree University of Technology, Nakhon Ratchasima, Thailand
\item \Idef{org116}Technical University of Ko\v{s}ice, Ko\v{s}ice, Slovakia
\item \Idef{org117}Technische Universit\"{a}t M\"{u}nchen, Excellence Cluster 'Universe', Munich, Germany
\item \Idef{org118}The Henryk Niewodniczanski Institute of Nuclear Physics, Polish Academy of Sciences, Cracow, Poland
\item \Idef{org119}The University of Texas at Austin, Austin, Texas, United States
\item \Idef{org120}Universidad Aut\'{o}noma de Sinaloa, Culiac\'{a}n, Mexico
\item \Idef{org121}Universidade de S\~{a}o Paulo (USP), S\~{a}o Paulo, Brazil
\item \Idef{org122}Universidade Estadual de Campinas (UNICAMP), Campinas, Brazil
\item \Idef{org123}Universidade Federal do ABC, Santo Andre, Brazil
\item \Idef{org124}University of Cape Town, Cape Town, South Africa
\item \Idef{org125}University of Houston, Houston, Texas, United States
\item \Idef{org126}University of Jyv\"{a}skyl\"{a}, Jyv\"{a}skyl\"{a}, Finland
\item \Idef{org127}University of Liverpool, Liverpool, United Kingdom
\item \Idef{org128}University of Science and Technology of China, Hefei, China
\item \Idef{org129}University of South-Eastern Norway, Tonsberg, Norway
\item \Idef{org130}University of Tennessee, Knoxville, Tennessee, United States
\item \Idef{org131}University of the Witwatersrand, Johannesburg, South Africa
\item \Idef{org132}University of Tokyo, Tokyo, Japan
\item \Idef{org133}University of Tsukuba, Tsukuba, Japan
\item \Idef{org134}Universit\'{e} Clermont Auvergne, CNRS/IN2P3, LPC, Clermont-Ferrand, France
\item \Idef{org135}Universit\'{e} de Lyon, Universit\'{e} Lyon 1, CNRS/IN2P3, IPN-Lyon, Villeurbanne, Lyon, France
\item \Idef{org136}Universit\'{e} de Strasbourg, CNRS, IPHC UMR 7178, F-67000 Strasbourg, France
\item \Idef{org137}Universit\'{e} Paris-Saclay Centre d'Etudes de Saclay (CEA), IRFU, D\'{e}partment de Physique Nucl\'{e}aire (DPhN), Saclay, France
\item \Idef{org138}Universit\`{a} degli Studi di Foggia, Foggia, Italy
\item \Idef{org139}Universit\`{a} degli Studi di Pavia, Pavia, Italy
\item \Idef{org140}Universit\`{a} di Brescia, Brescia, Italy
\item \Idef{org141}Variable Energy Cyclotron Centre, Homi Bhabha National Institute, Kolkata, India
\item \Idef{org142}Warsaw University of Technology, Warsaw, Poland
\item \Idef{org143}Wayne State University, Detroit, Michigan, United States
\item \Idef{org144}Westf\"{a}lische Wilhelms-Universit\"{a}t M\"{u}nster, Institut f\"{u}r Kernphysik, M\"{u}nster, Germany
\item \Idef{org145}Wigner Research Centre for Physics, Budapest, Hungary
\item \Idef{org146}Yale University, New Haven, Connecticut, United States
\item \Idef{org147}Yonsei University, Seoul, Republic of Korea
\end{Authlist}
\endgroup
\section*{\refname}}{}{}{}
\newcommand{$\sqrt{s_{\mathrm{NN}}}~$}{$\sqrt{s_{\mathrm{NN}}}~$}
\newcommand{$\sqrt{s}~$}{$\sqrt{s}~$}
\newcommand{GeV/$c$}{GeV/$c$}
\newcommand{$\mathrm{K^{*0}}$}{$\mathrm{K^{*0}}$}
\newcommand{$\mathrm{K^{*\pm}}$}{$\mathrm{K^{*\pm}}$}
\newcommand{$\overline{\mathrm{K}}^{*0}~$}{$\overline{\mathrm{K}}^{*0}~$}
\newcommand{$\overline{\mathrm{K}}^{*0}$}{$\overline{\mathrm{K}}^{*0}$}
\newcommand{\kstf} {K$^{*}(892)^{0}~$}
\newcommand{\ph} {$\mathrm{\phi}~$}
\newcommand{\pha} {$\mathrm{\phi}$}
\newcommand{\phf} {$\mathrm{\phi(1020)}~$}
\newcommand{$p_{\mathrm{T}}~$}{$p_{\mathrm{T}}~$}
\newcommand{$p_{\mathrm{T}}$}{$p_{\mathrm{T}}$}
\newcommand{$\langle p_{\mathrm{T}} \rangle~$}{$\langle p_{\mathrm{T}} \rangle~$}
\newcommand{$\langle\mathrm{d}N_{\mathrm{ch}}/\mathrm{d}\eta\rangle^{1/3}$}{$\langle\mathrm{d}N_{\mathrm{ch}}/\mathrm{d}\eta\rangle^{1/3}$}
\newcommand{\ensuremath{p_{\mathrm{T}}}\xspace}{\ensuremath{p_{\mathrm{T}}}\xspace}
\newcommand{\mbox{$\mathrm {K^0_S}$}}{\mbox{$\mathrm {K^0_S}$}}
\newcommand{\mbox{Pb--Pb}\xspace}{\mbox{Pb--Pb}\xspace}
\newcommand{\mbox{p--Pb}\xspace}{\mbox{p--Pb}\xspace}
\newcommand{\ensuremath{\mathrm{K}^{-}}\xspace}{\ensuremath{\mathrm{K}^{-}}\xspace}
\newcommand{$\mathrm{K^{*0}/K^{-}}$}{$\mathrm{K^{*0}/K^{-}}$}
\newcommand{$\mathrm{\phi/K^{-}}$}{$\mathrm{\phi/K^{-}}$}
\newcommand{\ensuremath{\mathrm{\rho_{00}}}}{\ensuremath{\mathrm{\rho_{00}}}}
\usepackage{float}
\begin{document}%
\begin{titlepage}
\PHyear{2019}
\PHnumber{251}
\PHdate{30 October}
\title{Measurement of spin-orbital angular momentum interactions in relativistic heavy-ion collisions}
\ShortTitle{Spin alignment of vector mesons}
\Collaboration{ALICE Collaboration\thanks{See Appendix~\ref{app:collab} for the list of collaboration members}}
\ShortAuthor{ALICE Collaboration}
\begin{abstract}
The first measurement of spin alignment of vector mesons ($\mathrm{K^{*0}}$~and \pha) in heavy-ion
collisions at the Large Hadron Collider (LHC) is reported. The measurements are carried out as a function of transverse momentum ($p_{\mathrm{T}}$) and collision centrality with the ALICE detector using the particles produced at midrapidity ($|y| <$ 0.5) in Pb--Pb collisions at a center-of-mass energy ($\sqrt{s_{\mathrm{NN}}}~$) of 2.76 TeV.
The second diagonal spin density matrix element (\ensuremath{\mathrm{\rho_{00}}}) is measured from the angular distribution of the decay daughters of the vector meson in the decay rest frame, with respect to the normal of both the
event plane and the production plane. The \ensuremath{\mathrm{\rho_{00}}}~values are found to be less
than 1/3 (= 1/3 implies no spin alignment) at low $p_{\mathrm{T}}~$ ($<$ 2 GeV/$c$) for
both vector mesons. The observed deviations from 1/3 are maximal for mid-central collisions at a level of 3$\sigma$ for $\mathrm{K^{*0}}$~and 2$\sigma$ for \ph mesons. As control measurements, the analysis is also performed using the \mbox{$\mathrm {K^0_S}$}~meson, which has zero spin, and for the vector mesons in pp collisions; in both cases no significant spin alignment is observed. The \ensuremath{\mathrm{\rho_{00}}}~values at low $p_{\mathrm{T}}~$ with respect to the production plane are closer to 1/3 than for the event plane; they are related to each other through correlations introduced by the elliptic flow in the system. The measured spin alignment is surprisingly large compared to the polarization measured for $\Lambda$ hyperons, but qualitatively consistent with the expectation from models which attribute the spin alignment to a polarization of quarks in the presence of large initial angular momentum in non-central heavy-ion collisions and a subsequent hadronization by the process of recombination.
\end{abstract}
\end{titlepage}
\setcounter{page}{2}
Ultra-relativistic heavy-ion collisions create a system of deconfined
quarks and gluons, called the Quark--Gluon Plasma
(QGP), and provide the opportunity to study its properties.
In collisions with non-zero impact parameter, a large angular momentum and magnetic field are also expected. Theoretical
calculations estimate a total angular momentum of
$O(10^{7})$~$\hslash$~\cite{Becattini:2007sr} and a magnetic field of
$O(10^{14})$~T~\cite{Kharzeev:2007jp}. While the
magnetic field is expected to be short lived (a few fm/$c$), the
angular momentum is conserved and could be felt throughout the evolution of the
system formed in the collision. Experimental observables sensitive to these initial conditions~\cite{Fries:2017ina,Voronyuk:2011jd} could be used to study the influence of angular momentum and a magnetic field on the properties and the dynamical evolution of the QGP and its subsequent hadronization.
Spin-orbit interactions have wide observable consequences in several branches of
physics~\cite{Jdjackson,landau,Mayer:1949pd}.
The direction of the angular momentum in non-central heavy-ion
collisions is perpendicular to the reaction plane (subtended by the
beam axis and impact parameter)~\cite{Liang:2004ph}. In the presence of such a large angular momentum, the
spin-orbit coupling of quantum chromodynamics (QCD) could
lead to a polarization of quarks followed by a net-polarization of vector mesons ($\mathrm{K^{*0}}$~and
\ph)~\cite{Voloshin:2004ha,Liang:2004ph,Liang:2004xn,Liang:2007ma,Yang:2017sdk} along the direction of
the angular momentum.
The spin alignment of a vector meson is described by a 3 $\times$ 3
Hermitian spin-density matrix~\cite{Yang:2017sdk}.
The trace of the spin-density matrix is 1 and diagonal elements $\mathrm{\rho_{11}}$ and $\mathrm{\rho_{-1-1}}$ cannot be measured separately. As a result, there is only one independent diagonal element, \ensuremath{\mathrm{\rho_{00}}}. The elements of the spin-density matrix can be studied by measuring the angular distributions of the decay products of the vector mesons with respect to a
quantization axis. In the analysis presented here, two different quantization axes are used: i) a vector perpendicular to the production
plane (PP) of the vector meson and ii) the normal to the reaction plane (RP) of the
system. The PP is defined by the flight direction of the vector meson and the
beam direction.
The spin density element \ensuremath{\mathrm{\rho_{00}}}{} is determined from the distribution of the angle $\theta^{*}$ between the kaon decay daughter and the quantization
axis in the decay rest frame~\cite{Schilling:1969um},
\begin{equation}
\frac{\mathrm{d}N}{\mathrm{d}\cos{\theta^{*}}} \propto [ 1 - \rho_{00} + \cos^{2}{\theta^{*}}(3\rho_{00} - 1 ) ].
\label{eqn1}
\end{equation}
The complete expression is given in~\cite{formula} and
Eq.~\ref{eqn1} is obtained by applying the parity symmetry of QCD and the unit
trace condition of the spin density matrix, and by integrating over the azimuthal angle. The probability of finding a vector meson in spin state zero, \ensuremath{\mathrm{\rho_{00}}}, is~1/3 in the absence of spin alignment, in which case the angular distribution in Eq.~\ref{eqn1} is uniform. Deviations from $\ensuremath{\mathrm{\rho_{00}}}=1/3$ indicate that the vector meson has a preferred spin state, leading to a non-uniform angular distribution. This is the experimental signature of spin alignment.
The large initial angular momentum in combination with the spin-orbit interaction is expected to lead to spin alignment with respect to the reaction plane (RP). The reaction plane orientation cannot be measured directly, but is estimated from the final state distributions of particles. This experimentally measured plane is called the event plane (EP)~\cite{Poskanzer:1998yz}. To account for the spread of the EP with respect to the RP, the observed value $\rho^{\mathrm{obs}}_{00}$ is corrected for the EP resolution ($R$) using~\cite{Tang:2018qtu},
\begin{equation}
\rho_{00} = \frac{1}{3} + \left (\rho^{\mathrm{obs}}_{00} - \frac{1}{3}\right)\frac{4}{1+3R}.
\label{eqn2}
\end{equation}
There are specific qualitative predictions for the spin alignment
effect~\cite{Liang:2004xn}: (a) \ensuremath{\mathrm{\rho_{00}}}~$>$ 1/3 if the hadronization of a polarized parton
proceeds via fragmentation and \ensuremath{\mathrm{\rho_{00}}}~$<$ 1/3 for hadronization via
recombination, (b) \ensuremath{\mathrm{\rho_{00}}}~is expected to have a maximum deviation from 1/3 for
mid-central heavy-ion collisions, where the angular momentum is also
maximal, and a smaller deviation for both peripheral (large impact parameter) and
central (small impact parameter) collisions, (c)
the \ensuremath{\mathrm{\rho_{00}}}~value is expected to show its maximum deviation from 1/3 at low $p_{\mathrm{T}}~$ and
to reach the value of 1/3 at high $p_{\mathrm{T}}~$ in the recombination hadronization scenario, and (d) the effect is
expected to be larger for $\mathrm{K^{*0}}$~compared to \ph due
to their constituent quark composition. All of these features are probed
for $\mathrm{K^{*0}}$~and \ph vector mesons in the Pb--Pb collisions presented in this letter.
In addition, to establish the results, a control measurement is carried out using pp collisions, which do not possess large
initial angular momentum, and the same analysis is done in \mbox{Pb--Pb}\xspace
collisions for \mbox{$\mathrm {K^0_S}$}~mesons, which have zero spin.
As a further cross check, the measurements are carried out by randomizing the directions of the event (RndEP) and production planes (RndPP).
The analyses are carried out using 43 million minimum bias pp collisions at $\sqrt{s}$ = 13 TeV, taken in the year 2015 and 14 million minimum bias \mbox{Pb--Pb}\xspace collisions at $\sqrt{s_{\mathrm{NN}}}~$ = 2.76 TeV, collected in the year 2010. The measurements for vector mesons are performed at midrapidity ($|y| <$ 0.5) as a function of $p_{\mathrm{T}}~$ and are reported for pp collisions as well as for different centrality classes in \mbox{Pb--Pb}\xspace collisions. The \mbox{$\mathrm {K^0_S}$}~analysis is performed only for \mbox{Pb--Pb}\xspace collisions in the 20--40\% centrality class. The details of the ALICE detector, trigger conditions, centrality selection, and second order event plane~\cite{Abelev:2014pua} estimation using the V0 detectors at forward rapidity, can be found in~\cite{Aamodt:2008zz,Aamodt:2010cz,Abelev:2013qoq}. For the analysis, events are accepted with a primary vertex position within $\pm$~10 cm of the detector center along the beam axis. The event selection in Pb--Pb collisions further requires at least one hit in any of V0A, V0C, and Silicon Pixel Detectors while in pp collisions at least one hit in both V0A and V0C is required. The events were classified by the collision centrality based on the amplitude measured in the V0 counters~\cite{Abelev:2013qoq}.
The $\mathrm{K^{*0}}$~and \ph vector mesons are reconstructed via their decays into charged K$\pi$ and KK pairs, respectively, while the \mbox{$\mathrm {K^0_S}$}~is reconstructed via its decay into two pions. The Time Projection Chamber (TPC)~\cite{Alme:2010ke} and Time-of-Flight (TOF) detector~\cite{Dellacasa:2000kh} are used to identify the decay products of these mesons via specific ionization energy loss and time-of-flight measurements, respectively.
The $\mathrm{K^{*0}}$~and \ph yields are determined via the invariant mass technique~\cite{Adam:2017zbf,Abelev:2014uua,Abelev:2013xaa}. The background coming from combinatorial pairs and misidentified particles is removed by constructing the invariant mass distribution from the so-called mixed events for the $\mathrm{K^{*0}}$~and \ph\cite{Adam:2017zbf,Abelev:2014uua}. The combinatorial background for the \mbox{$\mathrm {K^0_S}$}~candidates is significantly reduced by using topological criteria to select the distinctive V-shaped decay topology~\cite{Abelev:2013xaa}.
The invariant mass distributions are fitted with a Breit-Wigner (Voigtian: convolution of Breit-Wigner and Gaussian distributions)
function for the $\mathrm{K^{*0}}$(\pha) signal and a $2^{\mathrm{nd}}$ order polynomial that describes the residual background, in order to extract the yields~\cite{Adam:2017zbf,Abelev:2014uua}. Extracted yields are then corrected for the reconstruction efficiency and acceptance in each $\cos{\theta^{*}}$ and $p_{\mathrm{T}}~$ bin~\cite{Adam:2017zbf,Abelev:2014uua}. The reconstruction efficiency is determined from Monte Carlo simulations of the ALICE detector response based on GEANT3 simulation~\cite{Adam:2017zbf,Abelev:2014uua}. The signal extraction procedures for the vector mesons and \mbox{$\mathrm {K^0_S}$}~are identical to those used in earlier publications reporting the $p_{\mathrm{T}}~$ distribution of the mesons~\cite{Adam:2017zbf,Abelev:2014uua,Abelev:2013xaa}. The mass peak positions and widths of the resonances across all the $\cos{\theta^{*}}$ bins for various $p_{\mathrm{T}}~$ intervals in pp collisions and in different centrality classes of Pb--Pb collisions are consistent with those obtained from earlier analyses~\cite{Adam:2017zbf,Abelev:2014uua,Abelev:2013xaa} and no significant dependence on $\cos{\theta^{*}}$ is seen.
The resulting efficiency and acceptance corrected
$\mathrm{d}N/\mathrm{d}\cos{\theta^{*}}$ distributions for selected
$p_{\mathrm{T}}~$ intervals in minimum bias pp collisions and in 10--50\% central
\mbox{Pb--Pb}\xspace collisions are shown in Fig.~\ref{dist} along with those for
\mbox{$\mathrm {K^0_S}$}~in 20--40\% central \mbox{Pb--Pb}\xspace collisions. These distributions are
fitted with the functional form given in Eq.~\ref{eqn1} to determine
$\rho_{00}$ for each $p_{\mathrm{T}}~$ bin in pp and \mbox{Pb--Pb}\xspace collisions. For the EP
results, the values of the resolution, $R$, used are 0.71, 0.53, 0.72, 0.66, and 0.40 for the 10--50\%, 0--10\%, 10--30\%, 30--50\%, and 50--80\% centrality classes, respectively~\cite{Abelev:2014pua}.
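As an illustration of this procedure, the short Python sketch below fits a hypothetical efficiency-corrected $\cos{\theta^{*}}$ distribution with the functional form of Eq.~\ref{eqn1} and then applies the resolution correction of Eq.~\ref{eqn2}; the binned yields, uncertainties and the chosen value of $R$ are placeholder numbers and the snippet is not part of the analysis code used for the results reported here.
\begin{verbatim}
# Illustrative sketch of the rho00 extraction (Eq. 1) and the event-plane
# resolution correction (Eq. 2); all numbers below are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def angular_pdf(cos_theta, norm, rho00):
    # dN/dcos(theta*) ~ norm * [ (1 - rho00) + cos^2(theta*) * (3 rho00 - 1) ]
    return norm * ((1.0 - rho00) + cos_theta**2 * (3.0 * rho00 - 1.0))

# hypothetical efficiency- and acceptance-corrected yields in cos(theta*) bins
cos_centres = np.linspace(-0.875, 0.875, 8)
yields = 1e4 * np.array([1.05, 1.02, 0.99, 0.97, 0.96, 0.98, 1.01, 1.04])
errors = 0.02 * yields

popt, pcov = curve_fit(angular_pdf, cos_centres, yields, sigma=errors,
                       p0=[yields.mean(), 1.0 / 3.0], absolute_sigma=True)
rho00_obs = popt[1]

R = 0.71  # event-plane resolution, e.g. 10-50% Pb-Pb (placeholder)
rho00 = 1.0 / 3.0 + (rho00_obs - 1.0 / 3.0) * 4.0 / (1.0 + 3.0 * R)
print(rho00_obs, rho00)
\end{verbatim}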
\begin{figure}
\begin{center}
\includegraphics[scale=0.56]{figs/fig1.eps}
\caption{(Color online) Angular distribution of the decay
daughter in the rest frame of the meson with respect to the
quantization axis at $|y| <$ 0.5 for pp collisions at $\sqrt{s}~$ = 13 TeV and \mbox{Pb--Pb}\xspace collisions at $\sqrt{s_{\mathrm{NN}}}~$ = 2.76
TeV. Panels (a)--(c) show results for $\mathrm{K^{*0}}$~and \ph with respect to the EP, PP, and random event plane. Panels (d) and (e) show the results for \mbox{$\mathrm {K^0_S}$}~with respect to both the PP and EP and for vector mesons in pp collisions with respect to the PP, respectively.
}
\label{dist}
\vspace{-0.8cm}
\end{center}
\end{figure}
There are three main sources of systematic uncertainties in the measurements of the angular distribution of vector meson decays: (a) Meson yield extraction procedure: this contribution is estimated by varying the fit ranges for the yield extraction, the normalization range for the signal+background and background invariant mass distributions, the procedure to integrate the signal function to get the yields, and by varying the width of the resonance peak by leaving the corresponding parameter free in the fit, instead of keeping it fixed to the PDG value and the mass resolution obtained from simulations. These sources contribute to the uncertainties on the \ensuremath{\mathrm{\rho_{00}}}~value at a level of 12(8)\% at the lowest $p_{\mathrm{T}}~$ and decrease with $p_{\mathrm{T}}~$ to 4(3)\% at the highest $p_{\mathrm{T}}~$ studied for the $\mathrm{K^{*0}}$(\pha). (b) Track selection criteria: this contribution includes variations of the selection on the distance of closest approach to the collision vertex, the number of crossed pad rows in the TPC~\cite{Alme:2010ke}, the ratio of the number of found clusters to the number of expected clusters, and the quality of the track fit. The systematic uncertainties on the \ensuremath{\mathrm{\rho_{00}}}~value due to variations of the track selection criteria are 14(6)\% at the lowest $p_{\mathrm{T}}~$ and about 11(5)\% at the highest $p_{\mathrm{T}}~$ for $\mathrm{K^{*0}}$(\pha). (c) Particle identification procedure: this is evaluated by varying the particle identification criteria related to the TPC and
TOF detectors. The corresponding uncertainty is 5(3)\% at the lowest
$p_{\mathrm{T}}~$ and about 4(4.5)\% at the highest $p_{\mathrm{T}}~$ studied for $\mathrm{K^{*0}}$(\pha).
The total systematic uncertainty on \ensuremath{\mathrm{\rho_{00}}}{} is obtained by adding all the contributions in quadrature.
Several consistency checks are carried out. Specifically the
yields of vector mesons are summed over $\cos{\theta^{*}}$ bins for
each $p_{\mathrm{T}}~$ interval to obtain the $p_{\mathrm{T}}~$ distributions, which are found to be consistent within the
statistical uncertainties with the published $p_{\mathrm{T}}~$ distributions in \mbox{Pb--Pb}\xspace collisions ~\cite{Adam:2017zbf,Abelev:2014uua}. Similarly a closure test (comparison between generated and reconstructed angular distribution) is carried out for the Monte Carlo (MC) data which is
used to obtain the reconstruction efficiencies for the mesons. Two different event generators are used to determine the reconstruction efficiency and the results are consistent. The effect of the shape of the $p_{\mathrm{T}}~$ distributions in the MC simulations is studied in detail and the impact on the \ensuremath{\mathrm{\rho_{00}}}{} measurement is found to be small. The dependence of the reconstruction efficiency for a $\cos{\theta^{*}}$ range on the azimuthal angle of vector meson ($\phi_{V}$) relative to the event plane angle ($\Psi$) is also studied. The reconstruction efficiencies obtained in a $\cos{\theta^{*}}$ range by integrating over $\phi_{V} - \Psi$ are similar to the efficiency obtained by averaging over the $\phi_{V} - \Psi$ bins.
Data samples with two different magnetic field polarities in the experiment are separately analyzed and the $\cos{\theta^{*}}$ distributions are found to be consistent. In addition, the analysis is performed separately for
positive (0 $<$ $y$ $<$ 0.5) and negative (--0.5~$<$~$y$~$<$ 0)
rapidity and also for $\mathrm{K^{*0}}$~versus $\overline{\mathrm{K}}^{*0}$; the different samples are also consistent. The final result is reported for the average yield of particles ($\mathrm{K^{*0}}$) and anti-particles ($\overline{\mathrm{K}}^{*0}$), obtained from the combined mass distribution.
\begin{figure}
\begin{center}
\includegraphics[scale=0.55]{figs/fig2.eps}
\caption{(Color online)
Transverse momentum dependence of \ensuremath{\mathrm{\rho_{00}}}{}
corresponding to $\mathrm{K^{*0}}$, \ph, and \mbox{$\mathrm {K^0_S}$}~mesons at $|y| <$ 0.5 in Pb--Pb collisions at $\sqrt{s_{\mathrm{NN}}}~$ = 2.76 TeV and minimum bias pp collisions at $\sqrt{s}~$ = 13 TeV.
Results are shown for spin alignment with respect to event plane (panels a,b), production plane (c,d) and random event plane (e,f) for $\mathrm{K^{*0}}$~(left column) and \ph (right column).
The statistical and systematic uncertainties are
shown as bars and boxes, respectively.}
\label{Fig:momentum}
\end{center}
\end{figure}
Figure~\ref{Fig:momentum} shows the measured \ensuremath{\mathrm{\rho_{00}}}{} as a function of $p_{\mathrm{T}}~$
for $\mathrm{K^{*0}}$~and \ph mesons in pp collisions and \mbox{Pb--Pb}\xspace collisions, along with the measurements for \mbox{$\mathrm {K^0_S}$}~in Pb--Pb collisions.
In mid-central (10--50\%) \mbox{Pb--Pb}\xspace collisions, \ensuremath{\mathrm{\rho_{00}}}{} is below 1/3 at the
lowest measured $p_{\mathrm{T}}~$ and increases to 1/3 within uncertainties
for $p_{\mathrm{T}}~$ $>$ 2 GeV/$c$. At low $p_{\mathrm{T}}$, the central value of \ensuremath{\mathrm{\rho_{00}}} ~is smaller for $\mathrm{K^{*0}}$~than for~\pha, although the results are compatible within uncertainties. In pp collisions, \ensuremath{\mathrm{\rho_{00}}}{} is independent of $p_{\mathrm{T}}~$ and
equal to 1/3 within uncertainties. For the spin zero hadron
\mbox{$\mathrm {K^0_S}$}, \ensuremath{\mathrm{\rho_{00}}}{} is consistent with 1/3 within uncertainties in \mbox{Pb--Pb}\xspace collisions.
The results with random event plane directions are also compatible with no spin alignment for the studied $p_{\mathrm{T}}~$ range, except for the smallest $p_{\mathrm{T}}~$ bin, where \ensuremath{\mathrm{\rho_{00}}}~is less than 1/3 but still larger than for the EP and PP measurements. The origin of this is discussed later in the context of Fig.~\ref{toymodel}. The results for the random production plane (the momentum vector direction
of each vector meson is randomized) are similar to RndEP measurements.
These results indicate that a spin alignment is present at lower $p_{\mathrm{T}}$,
which is qualitatively consistent with the predictions~\cite{Liang:2004xn}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.55]{figs/fig3.eps}
\caption{(Color online) Measurements of \ensuremath{\mathrm{\rho_{00}}}~as a function of $\langle
N_{\mathrm {part}} \rangle$ for $\mathrm{K^{*0}}$~and \ph mesons at ranges of low
and high $p_{\mathrm{T}}~$ in \mbox{Pb--Pb}\xspace collisions. The statistical and systematic uncertainties are
shown as bars and boxes, respectively. Some data points are shifted horizontally for better visibility.}
\label{Fig:Centrality}
\end{center}
\end{figure}
Figure~\ref{Fig:Centrality} shows \ensuremath{\mathrm{\rho_{00}}}{} for $\mathrm{K^{*0}}$~and \ph
mesons as a function of average number of participating
nucleons ($\langle N_{\mathrm {part}} \rangle$)~\cite{Abelev:2013qoq} for \mbox{Pb--Pb}\xspace
collisions at $\sqrt{s_{\mathrm{NN}}}~$ = 2.76~TeV. Large $\langle N_{\mathrm {part}} \rangle$ values correspond to central collisions,
while peripheral collisions have low $\langle N_{\mathrm {part}} \rangle$.
In the lowest $p_{\mathrm{T}}~${} range, the \ensuremath{\mathrm{\rho_{00}}}~values show their maximum deviation from 1/3 for intermediate centralities and approach 1/3 for both central and peripheral collisions. This centrality dependence
is qualitatively consistent with the dependence of initial angular momentum on
impact parameter in
heavy-ion collisions~\cite{Becattini:2007sr}.
At higher $p_{\mathrm{T}}$, the \ensuremath{\mathrm{\rho_{00}}}~measurements are consistent with 1/3 for all the collision centrality
classes studied for both vector mesons.
For the low-$p_{\mathrm{T}}~$ measurements in mid-central \mbox{Pb--Pb}\xspace collisions, the maximum deviations of \ensuremath{\mathrm{\rho_{00}}}~from 1/3 are 3.2 (2.6)
$\sigma$ and 2.1 (1.9) $\sigma$ for $\mathrm{K^{*0}}$~and \ph mesons, respectively,
with respect to the PP (EP). The significances are
calculated by adding the statistical and systematic uncertainties in quadrature.
The relation between the $\mathrm{\rho_{00}}$ values with respect to different quantization axes can be expressed using Eq.~\ref{eqn2} and calculating the corresponding factor $R$. This gives $\mathrm{\rho_{00}}(\mathrm{RndEP}) - \frac{1}{3}~=~(\mathrm{\rho_{00}}(\mathrm{EP})~-~\frac{1}{3})~\times~\frac{1}{4}$ ($R$ = 0 for a random plane) and $\mathrm{\rho_{00}}(\mathrm{PP}) - \frac{1}{3}~=~(\mathrm{\rho_{00}}(\mathrm{EP})~-~\frac{1}{3})~\times~\frac{1~+~3v_{2}}{4}$ ($R$ = $\frac{1}{2\pi}\int_{-\pi}^{\pi} \cos(2\psi_{\mathrm{EP}})[1+2v_{2}\cos(2\psi_{\mathrm{EP}})]\,\mathrm{d}\psi_{\mathrm{EP}}$, where $\psi_{\mathrm{EP}}$ is the event plane angle and $v_{2}$ is the azimuthal anisotropy coefficient). This is further confirmed (see~Fig.~\ref{toymodel}) using a toy model simulation with the PYTHIA 8.2 event generator, in which $v_{2}$ and spin alignment are incorporated through appropriate rotations of the $\mathrm{K^{*0}}$ and its decay product momenta~\cite{aziangle,thangle}.
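The resolution factor entering the PP relation above can also be checked numerically: the following sketch (an illustration only, with a hypothetical value of $v_{2}$) evaluates the integral for $R$ and confirms that it reduces to $v_{2}$, so that the PP correction factor equals $(1+3v_{2})/4$.
\begin{verbatim}
# Numerical check that R = (1/2pi) * integral of cos(2 psi) [1 + 2 v2 cos(2 psi)]
# over psi in [-pi, pi] equals v2; illustration only, v2 is a placeholder.
import numpy as np
from scipy.integrate import quad

v2 = 0.1
integrand = lambda psi: np.cos(2 * psi) * (1 + 2 * v2 * np.cos(2 * psi))
R, _ = quad(integrand, -np.pi, np.pi)
R /= 2 * np.pi
print(R, (1 + 3 * R) / 4)  # R ~ v2, correction factor ~ (1 + 3*v2)/4
\end{verbatim}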
\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{figs/fig4.eps}
\caption{(Color online) \ensuremath{\mathrm{\rho_{00}}}~values from data in 10--50\% \mbox{Pb--Pb}\xspace
collisions at 0.8 $<$ $p_{\mathrm{T}}~$ $<$ 1.2 GeV/$c$ with respect to various planes compared with expectations from model simulations with and without added elliptic
flow ($v_{2}$). The statistical and systematic uncertainties are
shown as bars and boxes, respectively.}
\label{toymodel}
\end{center}
\end{figure}
Spin alignment measurements have been performed in various collision systems in the past. Several measurements in $\mathrm{e}^{+}\mathrm{e}^{-}$~\cite{Ackerstaff:1997kj,Ackerstaff:1997kd,Abreu:1997wd}, hadron--proton~\cite{Barth:1982td} and nucleon--nucleus
collisions~\cite{Aleev:2000tb} were carried out to understand the role of spin in the dynamics of particle production. These measurements in small collision systems with respect to the production plane have
\ensuremath{\mathrm{\rho_{00}}}~$>$ 1/3 and off-diagonal elements close to
zero. For pp collisions at $\sqrt{s}$ = 13 TeV, \ensuremath{\mathrm{\rho_{00}}}~$\sim$~1/3 for the $p_{\mathrm{T}}~${} range studied (see Fig.~\ref{Fig:momentum}). Initial measurements at RHIC\footnote{The STAR experiment results have a different event plane resolution correction.} with a relatively small sample of Au--Au
collisions at $\sqrt{s_{\mathrm{NN}}}~$ = 200 GeV did not find significant spin alignment
for the vector mesons~\cite{Abelev:2008ag}.
Significant polarization of $\Lambda$ baryons (spin = 1/2) was reported at
low RHIC energies. The polarization is found to decrease with increasing $\sqrt{s_{\mathrm{NN}}}~$~\cite{STAR:2017ckg}. At the LHC energies,
the global polarization of $\Lambda$ baryons was measured to be less than 0.15$\%$~\cite{alice-gp} and is compatible with zero within uncertainties. Measurements of particles with spin-1/2 are performed with respect to the 1$^{\mathrm{st}}$ order event
plane in order to know the orientation of the angular momentum
vector. However, the effect of ``spin up''
and ``spin down'' is the same for particles with spin-1, hence the second order event plane suffices.
In the recombination model, \ensuremath{\mathrm{\rho_{00}}}~is expected to depend on the square of the quark polarization whereas the $\Lambda$ polarization depends linearly on it, therefore using quark polarization information from $\Lambda$ measurements will yield a \ensuremath{\mathrm{\rho_{00}}}~$\sim$~1/3 at LHC energies. The large effect observed for the central value of \ensuremath{\mathrm{\rho_{00}}}~for mid-central Pb--Pb collisions at low $p_{\mathrm{T}}~$~is therefore puzzling.
However, the magnitude of the spin alignment also
depends on the details of the transfer of the quark
polarization to the hadrons (baryon vs. meson), details of the
hadronization mechanism (recombination vs. fragmentation), re-scattering, regeneration, and
possibly the lifetime and mass of the hadrons in the system. Moreover,
the vector mesons are predominantly primordially produced whereas the
hyperons are expected to have large contributions from resonance decays. To date, no
quantitative theory expectation for \ensuremath{\mathrm{\rho_{00}}}~at LHC energies exists. We expect that these measurements will
encourage further theoretical work on this topic.
In conclusion, for the first time we obtain evidence of a significant spin alignment effect for vector mesons in heavy-ion collisions.
The effect is strongest when the alignment is measured at low $p_{\mathrm{T}}~$ with respect to a vector perpendicular to the reaction plane and for mid-central (10--50\%) collisions.
These observations are qualitatively consistent with expectations from the effect of large initial angular momentum in non-central heavy-ion collisions, which leads to quark polarization via spin-orbit coupling and is subsequently transferred to hadronic degrees of freedom by hadronization via recombination.
However, the measured spin alignment is surprisingly large compared to the polarization measured for $\Lambda$ hyperons where in addition a strong decrease in polarization with $\sqrt{s_{\mathrm{NN}}}~$~is observed.
\newenvironment{acknowledgement}{\relax}{\relax}
\begin{acknowledgement}
\section*{Acknowledgements}
\input{fa_2019-10-12.tex}
\end{acknowledgement}
\bibliographystyle{utphys}
\section{Introduction}
Carrier sense multiple access (CSMA) networks form an attractive random access solution for wireless networks
due to their fully distributed nature and low complexity. In order to guarantee a certain set of
feasible throughputs for the links that are part of a CSMA network (the throughput being defined as the fraction of the time that a
link is active), the {\it back-off rate} of each link has to be set
appropriately, in a manner that depends on the
network topology, i.e., how the different links in the network interfere with each other.
In order to obtain a better understanding of how these back-off rates affect the throughputs,
the ideal CSMA network model was introduced (see \cite{boorstyn1,durvy2,jiang2,jiang1,vandeven5,vandeven1,yun1})
and this model was shown to provide good estimates for the throughput achieved
in real CSMA-like networks \cite{wang5}.
The product form solution of the ideal CSMA model was established long ago \cite{boorstyn1}
(for exponential back-off durations) and the set $\Gamma$ of achievable throughput vectors $\vec \phi = (\phi_1,\ldots,\phi_n)$,
where $\phi_i$ is the throughput of link $i$, was characterized in \cite{jiang1}.
Further, for each vector $\vec \phi \in \Gamma$ the existence of a unique vector of back-off rates that achieves $\vec \phi$ was proven in \cite{vandeven5}.
None of the above results indicates how to set the back-off rates to achieve a given vector $\vec \phi \in \Gamma$
(except for very small networks).
For line networks with a fixed interference range this problem was solved in \cite{vandeven1} for any
$\vec \phi = (\alpha,\ldots,\alpha) \in \Gamma$,
while \cite{yun1} presented a closed form expression for the back-off rates to achieve any
$\vec \phi \in \Gamma$ in case the conflict graph is a tree. This expression was obtained from the
zero gradient points of the {\it Bethe free energy} and can be used as an approximation in general conflict graphs,
termed the Bethe approximation.
The explicit results for line and tree networks were generalized in \cite{vanhoudt_ton17}, where a simple formula for
the back-off rates was presented for any {\it chordal} conflict graph $G$. This formula was subsequently
used to develop the local chordal subgraph (LCS) approximation for general conflict graphs.
Another method to approximate unique back-off rates given an achievable target throughput vector
consists of using inverse (generalized) belief
propagation (I(G)BP) algorithms \cite{kai1,yedidia2}. These algorithms are
message passing algorithms that in general are not guaranteed to converge to a fixed point.
In \cite{kai1} the IBP algorithm for CSMA was argued to converge to the exact vector of back-off rates
when the conflict graph is a tree, but convergence of IGBP for loopy graphs to a (unique) fixed point was not established.
Belief propagation algorithms are intimately related to free energy approximations as their fixed points can be shown
to correspond to the zero gradient points of an associated free energy approximation \cite{yedidia1}.
The main objective of this paper is to introduce more refined free energy approximations (compared to the Bethe approximation)
for the ideal CSMA model that yield closed form approximations for the back-off rates and to compare their accuracy with the Bethe and LCS approximations.
The contributions of the paper are as follows.
First, we introduce a class of region-based free energy approximations with clique belief and
a closed form expression for the CSMA back-off rates based on its zero gradient points (Section \ref{sec:cliquebelief}).
Second, we propose the size $k_{max}$ clique approximation as a special case and present a closed form
expression for its counting numbers, as well as a recursive algorithm to compute the back-off rates more efficiently
(Section \ref{sec:kmax}). Setting $k_{max}=2$ reduces the size $k_{max}$ clique approximation to the Bethe approximation of \cite{yun1}.
Third, we prove that the size $k_{max}$ clique approximation coincides with a Kikuchi approximation
(Section \ref{sec:kikuchi}).
As the Kikuchi approximation used to devise the IGBP algorithm of \cite{kai1} corresponds to
setting $k_{max}=n$, the size $n$ clique approximation gives a closed form expression for a
fixed point of the IGBP algorithm.
Fourth, an exact free energy approximation for chordal conflict graphs is introduced and is proven to
coincide with the size $n$ clique approximation (Section \ref{sec:chordal}). This implies that a fixed point of the
IGBP algorithm gives exact results on chordal conflict graphs.
Finally, simulation results are presented that compare the accuracy of the size $k_{max}$ clique approximation
with the LCS algorithm presented in \cite{vanhoudt_ton17} (Section \ref{sec:eval}).
The main observation is that the LCS approximation is less accurate and less robust for denser conflict
graphs compared to the size $k_{max}$ clique approximation.
Before presenting the above results (in Sections \ref{sec:cliquebelief} to \ref{sec:eval}), we start with a model description in Section \ref{sec:model} and
a basic introduction on (region-based) free energy approximations in Section \ref{sec:free}.
Conclusions are drawn in Section \ref{sec:conc}.
We end this section by noting that more advanced free energy approximations have very recently
and independently been proposed by other researchers to approximate the back-off rates in CSMA networks.
More specifically, in \cite{swamy_SPCOM} the authors proposed the Kikuchi approximation induced by all
the maximal cliques of the conflict graph. In Section \ref{sec:kikuchi} of this paper we prove that this approximation
coincides with the size $n$ clique approximation. Further, the authors also prove the exactness
of the maximal clique based Kikuchi approximation on chordal conflict graphs as is done in
Section \ref{sec:chordal} in this paper. In \cite{swamy_arXiv2017} the authors generalized
their work using the region-based free energy framework of \cite{yedidia1} and also consider an approximation
that includes the $4$-cycles as regions.
\section{System model}\label{sec:model}
The ideal CSMA model considers a fixed set of $n$ links where a link is said to be active if a packet is being
transmitted on the link and inactive otherwise. Whether two links $i$ and $j$ can be active simultaneously is determined by the undirected
conflict graph $G=(V,E)$, where $V$ is the set of $n$ links. If $(i,j) \in E$ then link $i$ and $j$ cannot be active at the
same time. The time that a link remains active is represented by an independent and identically distributed random variable
with mean one, after which the link becomes inactive. When link $i$ becomes inactive it starts a back-off period, the length
of which is an independent and identically distributed random variable with mean $1/\nu_i$. When the back-off period
of link $i$ ends and none of the neighbors of $i$ in $G$ are active, link $i$ becomes active; otherwise a new back-off period
starts. Note, in such a setting sensing is assumed to be instantaneous and therefore no collisions occur. Also, links
attempt to become active all the time, which corresponds to considering a saturated network\footnote{It is worth noting that a considerable body of work exists that considers unsaturated CSMA networks where each link maintains its own buffer to store packets that arrive according to a Poisson
process (e.g., \cite{shah1,ni1,bouman1,laufer1,cecchi1}).}.
Hence, the ideal CSMA model is fully characterized by the conflict graph $G=(V,E)$ and the vector of back-off rates
$(\nu_1,\ldots,\nu_n)$.
While the set of links, interference graph and target throughputs are all fixed in the ideal CSMA network,
the results presented in this paper are still meaningful in a network that undergoes gradual changes, since in such a case
the proposed approximation for the back-off rates can be recomputed at regular times in a {\it fully distributed} manner.
Let $x_i = 1$ if node $i$ is active and $0$ otherwise, for $i=1,\ldots,n$.
Define $f_i(x_i) = \nu_i^{x_i}$ and $f_{(i,j)}(x_i,x_j) = 1-x_ix_j$, for $i,j \in \{1,\ldots,n\}$.
It is well known \cite{boorstyn1,vandeven3} that the probability that the network
is in state $(x_1,\ldots,x_n)$ at time $t$ converges to
\begin{align}\label{eq:pf}
p(x_1&,\ldots,x_n) = \nonumber \\
& \frac{1}{Z} \left(\prod_{i=1}^n f_i(x_i) \right) \left(\prod_{(i,j)\in E} f_{(i,j)}(x_i,x_j)\right),
\end{align}
for $(x_1,\ldots,x_n) \in \{0,1\}^n$ as $t$ tends to infinity, where $Z$ is the normalizing constant.
Note the factors $f_{(i,j)}(x_i,x_j)$ make sure that
the probability of being in state $(x_1,\ldots,x_n)$ is zero whenever two neighbors in $G$ would be active at the same time.
The throughput of link $i$ is simply given by the marginal probability
\[p_i(1) = \sum_{x \in \{0,1\}^n} p(x) 1_{\{x_i = 1\}}.\]
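For a small conflict graph both $Z$ and the throughputs $p_i(1)$ can be obtained by brute-force enumeration of $\{0,1\}^n$; the following sketch (in Python, using the five-node conflict graph of Figure \ref{fig:factorgraph} and arbitrary back-off rates, purely for illustration) makes this forward computation concrete.
\begin{verbatim}
from itertools import product

# Assumed example: a five-node conflict graph and arbitrary back-off rates.
n = 5
E = {(1, 2), (1, 4), (2, 3), (2, 5), (3, 5), (4, 5)}   # edges (i, j), 1-based
nu = {1: 1.0, 2: 2.0, 3: 1.5, 4: 0.5, 5: 1.0}          # back-off rates nu_i

def weight(x):
    """Unnormalised weight prod_i nu_i^{x_i} * prod_{(i,j) in E} (1 - x_i*x_j)."""
    w = 1.0
    for i in range(1, n + 1):
        w *= nu[i] ** x[i - 1]
    for (i, j) in E:
        w *= 1 - x[i - 1] * x[j - 1]        # zero whenever two neighbours are active
    return w

Z = sum(weight(x) for x in product((0, 1), repeat=n))
p1 = {i: sum(weight(x) for x in product((0, 1), repeat=n) if x[i - 1] == 1) / Z
      for i in range(1, n + 1)}             # per-link throughputs p_i(1)
\end{verbatim}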
The focus in this paper is not on computing $p_i(1)$ given the back-off rates $(\nu_1,\ldots,\nu_n)$,
but on the inverse problem: how to set/estimate $(\nu_1,\ldots,\nu_n)$ such that a given target throughput vector
$(\phi_1,\ldots,\phi_n) \in \Gamma$ is achieved, that is, such that $p_i(1)=\phi_i$ for all $i$.
In \cite{jiang1} the set of achievable throughput vectors was shown to equal
\begin{align}\label{eq:Gamma}
\Gamma = \left\{ \sum_{\vec x \in \Omega} \xi(\vec x) \vec x \middle| \sum_{\vec x \in \Omega} \xi(\vec x) =1, \xi(\vec x) > 0 \mbox{ for } \vec x \in \Omega \right\},
\end{align}
where $\Omega = \{(x_1,\ldots,x_n) \in \{0,1\}^n | x_i x_j = 0 \mbox{ if } (i,j) \in E\}$.
In other words, a throughput vector $\vec \theta$ is achievable if and only if it belongs to the interior of the
convex hull of the set $\Omega$.
\section{Free energy and region-based approximations}\label{sec:free}
We start with a brief introduction on (region-based) free energy approximations and describe these in the
context of factor graphs; we refer the reader to \cite{yedidia1} for a more detailed exposition.
A factor graph \cite{kschischang1} is a bipartite graph that contains a set of variable and
factor nodes that represents the factorization of a function.
The factor graph associated with \eqref{eq:pf} contains a variable node for each variable $x_i$, for $i=1,\ldots,n$,
and $n+|E|$ factor nodes $f_a$: one for each factor $f_i$ and $f_{(i,j)}$.
Further, a variable node $x_i$ is connected to a factor node $f_a$ if and only if
$x_i$ is an argument of $f_a$. As illustrated in Figure \ref{fig:factorgraph} the factor graph of \eqref{eq:pf} can be obtained from the conflict graph $G$ by labeling node $i$ as $x_i$,
replacing each edge $(i,j)$ by a factor node $f_{(i,j)}$, by connecting $f_{(i,j)}$ to $x_i$ and $x_j$ and by adding the factor nodes $f_i$,
where $f_i$ is connected to $x_i$.
\begin{figure}[t]
\begin{center}
\begin{tikzpicture}[scale=0.8]
\tikzstyle{every node}=[circle, draw, fill=black!10,
inner sep=0pt, minimum width=10pt]
\node (1) at (1,0) []{$1$};
\node (2) at (2,0) []{$2$};
\node (3) at (3,0) []{$3$};
\node (5) at (2,1) []{$5$};
\node (4) at (1,1) []{$4$};
\draw [] (1) -- (2);
\draw [] (2) -- (3);
\draw [] (2) -- (5);
\draw [] (3) -- (5);
\draw [] (4) -- (5);
\draw [] (1) -- (4);
\tikzstyle{every node}=[circle, draw, fill=black!10,
inner sep=0pt, minimum width=10pt]
\node (f1) at (5,-0.5) [rectangle, minimum width=15pt, minimum height=12pt]{$f_1$};
\node (f2) at (6,-0.5) [rectangle, minimum width=15pt, minimum height=12pt]{$f_2$};
\node (f3) at (7,-0.5) [rectangle, minimum width=15pt, minimum height=12pt]{$f_3$};
\node (f4) at (8,-0.5) [rectangle, minimum width=15pt, minimum height=12pt]{$f_4$};
\node (f5) at (9,-0.5) [rectangle, minimum width=15pt, minimum height=12pt]{$f_5$};
\node (v1) at (5,0.65) []{$x_1$};
\node (v2) at (6,0.65) []{$x_2$};
\node (v3) at (7,0.65) []{$x_3$};
\node (v4) at (8,0.65) []{$x_4$};
\node (v5) at (9,0.65) []{$x_5$};
\node (f12) at (4.5,2) [rectangle, minimum width=15pt, minimum height=12pt]{$f_{1,2}$};
\node (f14) at (5.5,2) [rectangle, minimum width=15pt, minimum height=12pt]{$f_{1,4}$};
\node (f23) at (6.5,2) [rectangle, minimum width=15pt, minimum height=12pt]{$f_{2,3}$};
\node (f25) at (7.5,2) [rectangle, minimum width=15pt, minimum height=12pt]{$f_{2,5}$};
\node (f35) at (8.5,2) [rectangle, minimum width=15pt, minimum height=12pt]{$f_{3,5}$};
\node (f45) at (9.5,2) [rectangle, minimum width=15pt, minimum height=12pt]{$f_{4,5}$};
\draw [] (v1) -- (f1);
\draw [] (v2) -- (f2);
\draw [] (v3) -- (f3);
\draw [] (v4) -- (f4);
\draw [] (v5) -- (f5);
\draw [] (f12) -- (v1);
\draw [] (f12) -- (v2);
\draw [] (f14) -- (v1);
\draw [] (f14) -- (v4);
\draw [] (f23) -- (v3);
\draw [] (f23) -- (v2);
\draw [] (f25) -- (v5);
\draw [] (f25) -- (v2);
\draw [] (f35) -- (v3);
\draw [] (f35) -- (v5);
\draw [] (f45) -- (v4);
\draw [] (f45) -- (v5);
\end{tikzpicture}
\end{center}
\caption{Conflict graph $G$ and its associated bipartite factor graph}
\label{fig:factorgraph}
\end{figure}
Given a distribution $p$ on $\{0,1\}^n$ (as in \eqref{eq:pf}) with an associated factor graph with variable nodes $\{x_1,\ldots,x_n\}$
and factor nodes $\{f_1,\ldots,f_M\}$,
the Gibbs free energy $F(b)$, where $b$ is a distribution on $\{0,1\}^n$, is defined as $F(b) = U(b)-H(b)$, where
\[U(b) = - \sum_{x \in \{0,1\}^n} b(x) \sum_{a=1}^M \ln(f_a(x_a)), \]
is the Gibbs average energy, where the subset of the elements of $(x_1,\ldots,x_n)$ that are arguments of the function $f_a$ is
denoted as $x_a$, and
\begin{align}\label{eq:GibbsH}
H(b) = - \sum_{x \in \{0,1\}^n} b(x) \ln(b(x)),
\end{align}
is the Gibbs entropy. For the factor graph of \eqref{eq:pf} we have
\begin{align*}
\sum_{a=1}^M \ln(f_a(x_a)) = \sum_{i=1}^n x_i \ln(\nu_i) +
\sum_{(i,j) \in E} \ln(1-x_i x_j).
\end{align*}
It is well-known that the Gibbs free energy associated with a factor graph is minimized when the distribution $b$ matches $p$.
Although the minimizer $p$ of the Gibbs free energy $F(b)$ may be known explicitly as in the CSMA setting, computing
marginal distributions of the form
\[p_S(x_S) = \sum_{x_i:i \not\in S} p(x_1,\ldots,x_n),\]
where $S$ is a subset of $\{1,\ldots,n\}$,
is often computationally prohibitive. As such, approximations for the Gibbs free energy have been developed that
allow approximating marginal distributions of the form $p_S(x)$ at a (much) lower computational cost.
Such approximations have also been used to attack the {\it inverse} problem which attempts to estimate
the model parameters (e.g., the back-off rates $\nu_i$ in CSMA or the couplings in the Ising model)
given some values for some of the marginal distributions (e.g., the target throughput vector in CSMA or the
magnetizations and correlations in the Ising model). For the Bethe approximation this has led to
explicit formulas for the approximate solution of the inverse problem for both the ideal CSMA model \cite{yun1}
and the Ising model \cite{ricci1}.
The class of free energy approximations that is used in this paper for the inverse problem is the class of
region-based free energy approximations \cite{yedidia1}.
A region-based free energy approximation is characterized by a set $\mathcal{R}$ of regions and a
counting number $c_R$ for each $R \in \mathcal{R}$.
Each region $R$ has an associated set of variables $\mathcal{V}_R$, which is a subset of the variable nodes in the factor graph,
and a set of factors denoted as $\mathcal{F}_R$, which is a subset of the factor nodes in the factor graph.
The following three conditions must be met for the sets $\mathcal{V}_R$ and $\mathcal{F}_R$. First, if $f_a \in \mathcal{F}_R$, then
the arguments of $f_a$ must belong to $\mathcal{V}_R$. Second, the set $\cup_{R \in \mathcal{R}} \mathcal{V}_R = \{x_1,\ldots,x_n\}$
and $\cup_{R \in \mathcal{R}} \mathcal{F}_R = \{f_1,\ldots,f_M\}$, in other words each variable node and factor node must
belong to at least one region. Third,
the counting numbers $c_R$ are integers such that for each factor node $f_a$ and variable node $x_i$ we have
\begin{align}\label{eq:valid}
\sum_{R \in \mathcal{R}} c_R 1_{\{f_a \in \mathcal{F}_R\}} = \sum_{R \in \mathcal{R}} c_R 1_{\{x_i \in \mathcal{V}_R\}} = 1.
\end{align}
For example for the Bethe approximation of \cite{yun1} one associates a single region $R$ with every node in the bipartite factor
graph. For the region $R$ associated with a factor node $f_a$ one sets $\mathcal{F}_R = \{f_a\}$, $\mathcal{V}_R =
\{x_i | x_i \mbox{ is an argument of } f_a \}$ and $c_R = 1$. For the region $R$ corresponding
to a variable node $x_i$ one sets $\mathcal{F}_R = \emptyset$, $\mathcal{V}_R = \{x_i\}$ and
$c_R = -d_i$, where $d_i$ is the number of neighbors of node $i$ in the conflict graph $G$
such that \eqref{eq:valid} holds.
As in \cite{yedidia1} we denote sums of the form
\[\sum_{x \in \{0,1\}^{|\mathcal{V}_R|}} g(x),\]
where $g$ is a function from $\{0,1\}^{|\mathcal{V}_R|}$
to $\mathbb{R}$ and $R$ is a region, as $\sum_{x_R} g(x_R)$.
Using this notation, the region-based free energy is a function of
the set of beliefs $\{ b_R(x_R) | R \in \mathcal{R} \}$, where $b_R$ is a distribution on $\{0,1\}^{|\mathcal{V}_R|}$, and is
defined as
\begin{align}\label{eq:FRbR}
F_\mathcal{R}(\{b_R\}) = U_\mathcal{R}(\{b_R\}) - H_\mathcal{R}(\{b_R\}),
\end{align}
where $U_\mathcal{R}(\{b_R\})$
is the region-based average energy defined as
\begin{align}\label{eq:URbR}
U_\mathcal{R}(\{b_R\}) = - \sum_{R \in \mathcal{R}} c_R \sum_{x_R} b_{R}(x_R) \sum_{f_a \in \mathcal{F}_R} \ln(f_a(x_a)),
\end{align}
and $H_\mathcal{R}(\{b_R\})$ is the region-based entropy given by
\begin{align}\label{eq:HRbR}
H_\mathcal{R}(\{b_R\}) &= - \sum_{R \in \mathcal{R}} c_R \sum_{x_R} b_{R}(x_R) \ln(b_R(x_R)).
\end{align}
Note that the requirement that $f_a \in \mathcal{F}_R$ implies that
the arguments of $f_a$ must belong to $\mathcal{V}_R$ is necessary for \eqref{eq:URbR} to be well defined.
The beliefs $b_R(x_R)$ are used as approximations for the marginal probabilities
$p_R(x_R) = \sum_{x_i \not\in \mathcal{V}_R} p(x_1,\ldots,x_n)$.
The approximation consists in finding the beliefs $b_R(x_R)$ such that the region-based free energy is minimized
over the set $\Delta_{\mathcal{R}}$ of {\it consistent} beliefs $b_R(x_R)$ defined as
\begin{align*}
\Delta_{\mathcal{R}} = & \left\{ \{b_R, R \in \mathcal{R} \} \middle| b_R(x_R) \geq 0, \sum_{x_R} b_R(x_R) = 1, \right. \\
&\hspace*{0.3cm} \left. \sum_{x_i \in \mathcal{V}_{R'}\setminus \mathcal{V}_R} b_{R'}(x_{R'}) = \sum_{x_j \in \mathcal{V}_R\setminus \mathcal{V}_{R'}} b_{R}(x_{R}) \mbox{ for all } R,R' \in \mathcal{R} \right\}.
\end{align*}
We note that having a consistent set of beliefs does not imply that they are the marginals of a
single distribution $b(x)$ on $\{0,1\}^n$ \cite[Section V.A]{yedidia1}.
Further, the average energy given by \eqref{eq:URbR} is known to be exact, that is, equal to the Gibbs average energy $U(p)$,
if $b_R(x_R) = p_R(x_R)$ for all $R$ and $x_R$. This condition is however not sufficient for
the region-based entropy to be exact (that is, equal to the Gibbs entropy $H(p)$) \cite{yedidia1}.
\section{Clique Belief}\label{sec:cliquebelief}
In this section we introduce the notion of clique belief and indicate how to select the
back-off rates to obtain a zero gradient point of the region-based
free energy under clique belief. These back-off rates, presented in Theorem \ref{th:backoff},
are used as an approximation for the vector of back-off rates that achieves
a given throughput vector $(\phi_1,\ldots,\phi_n)$.
Clique belief treats the nodes $i \in V$ with $x_i \in \mathcal{V}_R$ as if they formed a clique in the
conflict graph $G$, for any $R \in \mathcal{R}$: the belief that any two nodes within a region $R$ are active at the same
time is zero. More specifically, we define the set of clique beliefs $\Delta^{C}_{\mathcal{R}}$
as the set of beliefs $\{b_R\}$ for which $b_R(x_R)$ has the form
\begin{align}\label{eq:cbelief}
b_R(x_R) = \left\{ \begin{array}{ll}
1-\sum_{i: x_i \in \mathcal{V}_R} \phi_i & \mbox{for } \sum_{x_i \in \mathcal{V}_R} x_i = 0,\\
\phi_i & \mbox{for }x_i = 1 \mbox{ and } \\
& \ \ \ \ \ \sum_{x_j \in \mathcal{V}_R} x_j = 1,\\
0 & \mbox{otherwise}.
\end{array}\right.
\end{align}
for some set $\{\phi_1,\ldots,\phi_n\}$ with $\phi_i \geq 0$ and $\sum_{i:x_i \in \mathcal{V}_R} \phi_i < 1$ for all $R \in \mathcal{R}$.
Clique beliefs are clearly consistent, that is, $\Delta^{C}_{\mathcal{R}} \subseteq \Delta_{\mathcal{R}}$.
The next condition somewhat limits the set of region-based free energy approximations considered by
imposing some minor restrictions on the manner in which the regions are selected.
\begin{cond}\label{cond:region}
The set of regions $\mathcal{R}$ is such that $\mathcal{R} = \{R_{f_1},\ldots,R_{f_n}\} \cup \{R_{x_1},\ldots,R_{x_n}\} \cup \mathcal{R}'$ with
\begin{enumerate}
\item $\mathcal{V}_{R_{f_i}} = \mathcal{V}_{R_{x_i}} = \{x_i\}$, $\mathcal{F}_{R_{f_i}} = \{f_i\}$ and $\mathcal{F}_{R_{x_i}} = \emptyset$,
for $i=1,\ldots,n$,
\item for $R \in \mathcal{R}'$ we have $\emptyset \not= \mathcal{F}_R \subseteq \{f_{(i,j)} | (i,j) \in E\}$
and $\mathcal{V}_R = \{x_i | \exists j: f_{(i,j)} \in \mathcal{F}_R\}$,
\item $c_{R_{x_i}} = 1-(1+\sum_{R \in \mathcal{R}'} c_R 1_{\{x_i \in \mathcal{V}_R\}})$ and $c_{R_{f_i}} = 1$, for $i = 1,\ldots,n$.
\end{enumerate}
\end{cond}
Note that this condition states that there are $2n$ special regions for which the set of variable nodes,
factor nodes as well as the counting numbers are fixed. For each of the remaining regions (that is, for each $R \in \mathcal{R}'$)
the set of variable and factor nodes is determined by some nonempty set of edges, while its counting number can be chosen
arbitrarily as long as \eqref{eq:valid} holds.
\begin{theorem}\label{th:backoff}
Let $\mathcal{R}$ be a set of regions that meets Condition \ref{cond:region}. For the zero gradient points
of the region-based free energy $F(\{b_R\})$ defined by \eqref{eq:FRbR} over the set $\Delta^{C}_{\mathcal{R}}$ of clique beliefs we have
\begin{align}\label{eq:backoff}
\nu_i = \frac{\phi_i}{(1-\phi_i)^{1+c_{R_{x_i}}}} \prod_{\substack{R \in \mathcal{R}':\\ x_i \in \mathcal{V}_R}} \left( 1-\sum_{j:x_j \in \mathcal{V}_R} \phi_j \right)^{-c_R}.
\end{align}
\end{theorem}
\begin{proof}
Under clique belief the entropy given by \eqref{eq:HRbR} equals
\begin{align*}
H_\mathcal{R}&(\{b_R\}) = - \sum_{R \in \mathcal{R}} c_R \sum_{i: x_i\in \mathcal{V}_R} \phi_i \ln(\phi_i) \nonumber \\
&- \sum_{R \in \mathcal{R}} c_R \left( 1-\sum_{i:x_i\in \mathcal{V}_R} \phi_i\right) \ln(1-\sum_{i:x_i \in \mathcal{V}_R} \phi_i)
\end{align*}
due to \eqref{eq:cbelief}, which combined with \eqref{eq:valid} yields
\begin{align}\label{eq:HRbR2}
H_\mathcal{R}&(\{b_R\}) = - \sum_{i=1}^n \phi_i \ln(\phi_i) \nonumber\\
&- \sum_{R \in \mathcal{R}} c_R \left( 1-\sum_{i:x_i \in \mathcal{V}_R} \phi_i\right) \ln(1-\sum_{i:x_i \in \mathcal{V}_R} \phi_i).
\end{align}
To determine $U_\mathcal{R}(\{b_R\})$ first note that $\ln(f_{(i,j)}(x_i,x_j))$ is zero unless $x_i = x_j = 1$.
When $x_i = x_j = 1$, we have $b_R(x_R) = 0$ if $f_{(i,j)} \in \mathcal{F}_R$ for $R \in \mathcal{R}'$.
Using the common convention that $0 \ln(0) = \lim_{x \downarrow 0} x \ln(x) = 0$ yields
\[- \sum_{R \in \mathcal{R'}} c_R \sum_{x_R} b_{R}(x_R) \sum_{f_a \in \mathcal{F}_R} \ln(f_a(x_a)) = 0\]
As $\ln(f_i(0))=0$, $\ln(f_i(1))=\ln(\nu_i)$ and $b_{R_{f_i}}(1) = \phi_i$, one therefore finds
\begin{align}\label{eq:URbR2}
U_\mathcal{R}(\{b_R\}) = - \sum_{i \in V} \phi_i \ln(\nu_i).
\end{align}
Demanding that the partial derivatives $\partial F_\mathcal{R}(\{b_R\})/\partial\phi_i$ are equal to zero is equivalent to the requirement that
\begin{align*}
\ln(\nu_i) = &-\sum_{R \in \mathcal{R}:x_i \in \mathcal{V}_R} c_R \left( 1+\ln(1-\sum_{j:x_j\in \mathcal{V}_R} \phi_j) \right)\nonumber\\
& + 1+\ln(\phi_i).
\end{align*}
The expression in \eqref{eq:backoff} therefore follows from \eqref{eq:valid}.
\end{proof}
Formula \eqref{eq:backoff} proposes an approximation for the back-off rates $\nu_i$ if the target throughputs of the links are
given by the vector $(\phi_1,\ldots,\phi_n)$ provided that $\sum_{i:x_i \in \mathcal{V}_R} \phi_i < 1$ for all $R \in \mathcal{R}$.
Depending on the choice of the regions in $\mathcal{R}'$, this condition may be more restrictive than demanding that $(\phi_1,\ldots,\phi_n) \in \Gamma$.
However for the size $k_{max}$ clique approximation introduced in the next section, the requirement $\sum_{i:x_i \in \mathcal{V}_R} \phi_i < 1$
holds for any $(\phi_1,\ldots,\phi_n) \in \Gamma$ as for the size $k_{max}$ clique approximation
each region $R$ corresponds to a clique and the sum of the throughputs of all the nodes belonging
to a clique is clearly bounded by one in $\Gamma$.
It is important to stress that formula \eqref{eq:backoff} often leads to a distributed computation of $\nu_i$ as node $i$ only needs to know
its own target throughput, the target throughput of any node $j$ sharing a region with $i$ (that is, any $j$
for which there exists an $R \in \mathcal{R}'$ such that $x_i,x_j \in \mathcal{V}_R$) as well as the counting numbers
$c_R$ for the regions $R$ to which it belongs. The size $k_{max}$ clique approximation presented in the next sections is such that
two nodes only belong to the same region if they are neighbors in the conflict graph $G$ and the required counting numbers
can be computed from the subgraph induced by a node and its one hop neighborhood.
Thus, for the size $k_{max}$ clique approximation a node can compute its approximate back-off rate using information from its {\it one-hop neighbors only for any feasible throughput vector}.
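As a minimal sketch (not part of any of the referenced algorithms), the evaluation of \eqref{eq:backoff} can be written as a short routine; here \texttt{regions} is assumed to be a list of pairs holding the node set of $\mathcal{V}_R$ and the counting number $c_R$ for each region in $\mathcal{R}'$, and $c_{R_{x_i}}$ is recovered from Condition \ref{cond:region}.
\begin{verbatim}
def backoff_rate(i, phi, regions):
    """Back-off rate nu_i of Theorem th:backoff under clique belief.

    phi     -- dict: node -> target throughput phi_i
    regions -- list of (nodes, c_R) pairs for the regions in R' (nodes is a set)
    By the condition on the regions, c_{R_{x_i}} equals minus the sum of c_R
    over the regions in R' that contain i.
    """
    c_xi = -sum(c for nodes, c in regions if i in nodes)
    nu = phi[i] / (1.0 - phi[i]) ** (1 + c_xi)
    for nodes, c in regions:
        if i in nodes:
            nu *= (1.0 - sum(phi[j] for j in nodes)) ** (-c)
    return nu

# Hypothetical usage: the Bethe regions (one region per edge of G, c_R = 1).
phi = {1: 0.2, 2: 0.2, 3: 0.2, 4: 0.2, 5: 0.2}
edges = [{1, 2}, {1, 4}, {2, 3}, {2, 5}, {3, 5}, {4, 5}]
print(backoff_rate(2, phi, [(e, 1) for e in edges]))
\end{verbatim}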
\section{Size $k_{max}$ clique approximation}\label{sec:kmax}
In this section we introduce a region-based free energy approximation for general conflict graphs $G$, called
the size $k_{max}$ clique approximation and present explicit expressions for the counting numbers.
We start by considering two special cases.
\subsection{Bethe approximation}
A first special case is to define $\mathcal{R}$ such that
Condition \ref{cond:region} is met and setting $\mathcal{R}' = \{R_{(i,j)} | (i,j) \in E\}$ such that
$V_{R_{(i,j)}} =\{x_i,x_j\}$ and $F_{R_{(i,j)}} = \{f_{(i,j)}\}$.
The associated counting numbers are $c_{R_{(i,j)}} = 1$, which implies that $c_{R_{x_i}} = -d_i$, where $d_i$ denotes the number of
neighbors of $i$ in $G$.
The entropy given in \eqref{eq:HRbR2} therefore becomes
\begin{align*}
H_\mathcal{R}&(\{b_R\}) = - \sum_{(i,j) \in E} \left( 1 -\phi_i-\phi_j \right) \ln(1-\phi_i-\phi_j) \\
&+ \sum_{i=1}^n \left[(d_i-1)(1-\phi_i)\ln(1-\phi_i) - \phi_i \ln(\phi_i) \right]
\end{align*}
and \eqref{eq:backoff} implies that the back-off rate, denoted as $\nu_i^{(2)}$, should be set as
\begin{align}\label{eq:backoffBethe}
\nu_i^{(2)} = \frac{\phi_i(1-\phi_i)^{d_i-1}}{\prod_{(i,j)\in E} (1-\phi_i-\phi_j)}.
\end{align}
The above expression corresponds to the Bethe approximation for CSMA networks proposed in \cite{yun1}.
It is worth noting that \eqref{eq:backoffBethe} is
a fixed point of the inverse belief propagation (IBP) algorithm presented in \cite{kai1}.
More specifically, the update rule in \cite[Section IV.C]{kai1} can be written as
\[\frac{m_{ji}(0)}{m_{ji}(1)} = 1 + \frac{m_{ij}(0)}{m_{ij}(1)} \frac{\phi_j}{1-\phi_j},\]
where $\phi_j$ is the target
throughput of node $j$. It is easy to check that this update rule has $m_{ij}(0)/m_{ij}(1)=(1-\phi_j)/(1-\phi_i-\phi_j)$
as a fixed point and if we plug this into Equation (8) of \cite[Section IV.C]{kai1} we obtain
\eqref{eq:backoffBethe}.
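As a quick numerical sanity check (a sketch, not taken from \cite{kai1}), one can verify that the stated ratio is indeed a fixed point of this update rule for an arbitrary feasible pair $(\phi_i,\phi_j)$.
\begin{verbatim}
# Check that m_ij(0)/m_ij(1) = (1 - phi_j)/(1 - phi_i - phi_j) is a fixed point
# of the update  m_ji(0)/m_ji(1) = 1 + m_ij(0)/m_ij(1) * phi_j/(1 - phi_j).
phi_i, phi_j = 0.3, 0.25                      # arbitrary feasible target throughputs
r_ij = (1 - phi_j) / (1 - phi_i - phi_j)      # claimed ratio m_ij(0)/m_ij(1)
r_ji = (1 - phi_i) / (1 - phi_i - phi_j)      # same expression with i and j swapped
assert abs(r_ji - (1 + r_ij * phi_j / (1 - phi_j))) < 1e-12
\end{verbatim}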
\subsection{Triangle approximation}
A second special case, called the {\it triangle} approximation, is obtained by extending the set of regions $\mathcal{R}'$
as defined in the previous subsection with the regions $\{R_{(i,j,k)} | (i,j),(i,k),(j,k) \in E\}$
such that $V_{R_{(i,j,k)}} =\{x_i,x_j,x_k\}$ and $F_{R_{(i,j,k)}} = \{f_{(i,j)},f_{(i,k)},f_{(j,k)}\}$.
The counting numbers are now set as $c_{R_{(i,j,k)}} = 1$ and $c_{R_{(i,j)}}=1-t_{i,j}$ such that $c_{R_{x_i}}=t_i-d_i$,
where $t_i$ and $t_{i,j}$ are the number of triangles in $G$ that contain
node $i$ and edge $(i,j)$, respectively. Note that these counting numbers obey the requirement given
in \eqref{eq:valid} as $\sum_{j \in \mathcal{N}_i} t_{i,j} = 2 t_i$.
For the triangle approximation the back-off rates, denoted as $\nu_i^{(3)}$, given by \eqref{eq:backoff} correspond to
\begin{align}
\nu_i^{(3)} = \frac{\phi_i (1-\phi_i)^{d_i-1}
\prod_{(i,j)\in E} (1-\phi_i-\phi_j)^{t_{i,j}-1}}{(1-\phi_i)^{t_i}\prod_{(i,j,k)\in \Delta_E} (1-\phi_i-\phi_j-\phi_k)},
\end{align}
where $\Delta_E$ denotes the set of triangles in $E$.
\subsection{General case}
The idea behind the Bethe and triangle approximation can be generalized to cliques of larger sizes, at the
expense of an increased complexity to compute the back-off rates.
In this section the set $\mathcal{R}'$ corresponds to the set of all the cliques $K$ in the conflict graph $G$ with
a size in $\{2,\ldots,k_{max}\}$, where $k_{max}$ is a predefined maximum allowed clique size. Note that setting $k_{max}=2$ and $3$ corresponds to the previous two approximations.
If $K$ is a clique of size $k \in \{2,3,\ldots,k_{max}\}$ and $R(K)$ its associated region, then
$V_{R(K)} = \{x_i | i \in K\}$ and $F_{R(K)} = \{f_{(i,j)} | i,j \in K\}$.
The counting number $c_{R(K)}=1$ for any region $R(K)$ associated with a clique $K$ of size $k_{max}$.
For a region $R(K)$ corresponding to a clique $K$ of size $k$ with $1 \leq k < k_{max}$, we set
\[ c_{R(K)} = 1_{\{k > 1\}} - \sum_{ R' \in \mathcal{R}'} c_{R'} 1_{\{V_{R(K)} \subset \mathcal{V}_{R'}\} }.\]
Note, for any maximal clique $K$ of size $2 \leq k \leq k_{max}$, we have $c_{R(K)} = 1$ irrespective of its size.
The next theorem provides an explicit expression for $c_{R(K)}$.
\begin{theorem}
If $K$ is a clique of size $k \in \{1,\ldots,k_{max}\}$ then
\begin{align}\label{prop:cRK}
c_{R(K)} = 1_{\{k > 1\}} + \sum_{s=k+1}^{k_{max}} (-1)^{s-k} n_{K,s},
\end{align}
where $n_{K,s}$ denotes the number of cliques $K'$ in $G$ with $|K'| = s$ and $K \subset K'$.
\end{theorem}
\begin{proof}
The result clearly holds for $k=k_{max}$.
We use backward induction on $|K|=k$ to find
\begin{align}
-&c_{R(K)}+1_{\{k > 1\}} = \nonumber\\
& \sum_{u=k+1}^{k_{max}} \sum_{\substack{\text{cliques } K':\\K \subset K', |K'|=u}} \left( 1
+\sum_{s=u+1}^{k_{max}} (-1)^{s-u} n_{K',s}\right)=\nonumber \\
& \sum_{u=k+1}^{k_{max}} n_{K,u} + \sum_{u=k+1}^{k_{max}} \sum_{s=u+1}^{k_{max}} (-1)^{s-u} \hspace*{-0.6cm}
\sum_{\substack{\text{cliques } K':\\K \subset K', |K'|=u}} n_{K',s} \label{eq:cRKstep1}
\end{align}
The latter sum equals $\binom{s-k}{u-k} n_{K,s}$ as we can pick $u-k$ elements from the $s-k$ elements not belonging to $K$ of a
size $s$ clique that contains $K$ to obtain a $K'$. Hence, switching sums in \eqref{eq:cRKstep1} yields
\begin{align}
-&c_{R(K)} + 1_{\{k > 1\}} =\nonumber\\
&\sum_{s=k+1}^{k_{max}} n_{K,s} + \sum_{s=k+2}^{k_{max}} \sum_{u=k+1}^{s-1} (-1)^{s-u} \binom{s-k}{u-k} n_{K,s}= \nonumber\\
& \sum_{s=k+1}^{k_{max}} \left( 1 + (-1)^{s-k}\sum_{z=1}^{s-k-1} (-1)^{-z} \binom{s-k}{z}\right)n_{K,s}\label{eq:cRKstep2}
\end{align}
By noting that $\sum_{i=0}^n (-1)^i \binom{n}{i} = 0$, \eqref{eq:cRKstep2} becomes
\begin{align*}
&c_{R(K)} = 1_{\{k > 1\}} +\nonumber \\
&\ - \sum_{s=k+1}^{k_{max}} \left( 1 + (-1)^{s-k}(-1-(-1)^{s-k})\right)n_{K,s}\nonumber\\
&\ = 1_{\{k> 1\}} + \sum_{s=k+1}^{k_{max}} (-1)^{s-k}n_{K,s},
\end{align*}
as required.
\end{proof}
Note that increasing $k_{max}$ by one simply adds one additional term to $c_{R(K)}$ in \eqref{prop:cRK},
which allows us to compute the back-off rates of the size $k_{max}$ clique approximation in a recursive
manner as follows.
\begin{corollary}
Let $\nu_i^{(k_{max})}$ be the back-off rate for node $i$ corresponding with the size $k_{max}$ clique approximation, then
\begin{align}\label{eq:recurs}
&\nu_i^{(k_{max})} = \nu_i^{(k_{max}-1)} \cdot \nonumber \\
&\prod_{\substack{\text{cliques } K \text{ in } G:\\ i \in K, 1 \leq |K| \leq k_{max}}}
\left(1-\sum_{s\in K} \phi_s \right)^{-n_{K,k_{max}}(-1)^{k_{max}-|K|}},\end{align}
where $\nu_i^{(1)}= \phi_i/(1-\phi_i)$ and $n_{K,k_{max}}$ denotes the number of size $k_{max}$ cliques $K'$ in $G$ with $K \subseteq K'$ (so that $n_{K,k_{max}}=1$ whenever $|K|=k_{max}$).
\end{corollary}
\begin{proof}
The result follows from \eqref{eq:backoff} when combined with \eqref{prop:cRK}.
\end{proof}
We now briefly discuss the complexity to compute the back-off rate of node $i$ when using the $k_{max}$ clique approximation.
Node $i$ can be part of at most $\min(2^{d_i},d_i^{k_{max}-1})$ cliques $K$ with $|K| \leq k_{max}$, where $d_i$
is the number of neighbors of $i$ in $G$. This set can be computed by first listing the $d_i$ size $2$ cliques
containing $i$. Having obtained the set of size $k$ cliques that contain $i$, the set of size $k+1$ cliques is found by
considering all its one element extensions. By using an ordered list $\{i_1,\ldots,i_{k-1}\}$ of the $k-1$ other nodes
belonging to a size $k$ clique containing $i$, only one element extensions with a node $j > i_{k-1}$ need to be considered
and the creation of identical cliques of size $k+1$ is avoided. Having obtained the list of cliques $K$ that contain $i$
with $|K| \leq k_{max}$, the back-off rate given by \eqref{eq:recurs} can be readily computed by noting that
\begin{align*}
&\nu_i^{(k_{max})} = \nu_i^{(k_{max}-1)} \prod_{\substack{\text{cliques } K' \text{ in } G:\\ i \in K', |K'| = k_{max}}} \cdot \nonumber \\
& \prod_{K \subseteq K': i \in K}
\left(1-\sum_{s\in K} \phi_s \right)^{(-1)^{k_{max}-|K|+1}}.
\end{align*}
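The clique enumeration just described and the recursion \eqref{eq:recurs}, in the rewritten form displayed above, can be combined into the following sketch; the adjacency structure \texttt{adj} of $G$ (mapping each node to its set of neighbours, with integer node labels) and the target throughputs \texttt{phi} are assumed inputs.
\begin{verbatim}
from itertools import combinations

def cliques_containing(i, adj, kmax):
    """All cliques K of G with i in K and |K| <= kmax; adj maps a node to its
    neighbour set.  A clique is stored as (i, i_1, ..., i_{k-1}) with
    i_1 < ... < i_{k-1}; extending only with j > i_{k-1} avoids duplicates."""
    frontier, result = [(i,)], [(i,)]
    for _ in range(kmax - 1):
        nxt = []
        for K in frontier:
            common = set.intersection(*(adj[v] for v in K))
            last = K[-1] if len(K) > 1 else float('-inf')
            nxt += [K + (j,) for j in sorted(common) if j > last]
        result, frontier = result + nxt, nxt
    return result

def backoff(i, adj, phi, kmax):
    """nu_i^{(kmax)} via the recursion, starting from nu_i^{(1)} = phi_i/(1-phi_i)."""
    nu = phi[i] / (1.0 - phi[i])
    cliques = cliques_containing(i, adj, kmax)
    for m in range(2, kmax + 1):
        for Kp in (K for K in cliques if len(K) == m):     # size-m cliques containing i
            for r in range(1, m + 1):
                for K in combinations(sorted(Kp), r):      # subcliques K of K'
                    if i in K:
                        nu *= (1.0 - sum(phi[s] for s in K)) ** ((-1) ** (m - r + 1))
    return nu
\end{verbatim}
On a single triangle with equal targets $\phi$, for instance, this returns $\phi(1-\phi)/(1-2\phi)^2$ for $k_{max}=2$ and the exact value $\phi/(1-3\phi)$ for $k_{max}=3$.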
\section{Kikuchi approximations}\label{sec:kikuchi}
The IGBP algorithm of \cite{kai1} is a message passing algorithm to estimate the
back-off rates to achieve a given throughput vector $(\phi_1,\ldots,\phi_n) \in \Gamma$.
This algorithm is based on a so-called Kikuchi free energy approximation. In this section
we show that the size $k_{max}$ clique approximation also coincides with a Kikuchi approximation.
In fact for $k_{max}=n$ this Kikuchi approximation corresponds to the one associated to the IGBP
algorithm. As such the expression for the back-off rates of the size $n$ clique
approximation gives us an explicit expression for a fixed point of the IGBP algorithm
in \cite[Section VI.B]{kai1} due to \cite[Section VII]{yedidia1} (as the Bethe approximation did for the IBP algorithm).
In a Kikuchi approximation (see \cite[Appendix B]{yedidia1} for more details)
the set of regions $\tilde{\mathcal{R}}$ can be written as $\tilde{\mathcal{R}} = \bigcup_{i=0}^s \mathcal{R}_i$, for some $s$.
We say that a region $R$ is a subset of a region $R'$ if $\mathcal{V}_R \subseteq \mathcal{V}_{R'}$ and $\mathcal{F}_R \subseteq \mathcal{F}_{R'}$.
The regions in $\mathcal{R}_0$ fully characterize a Kikuchi approximation as follows.
The regions in $\mathcal{R}_{i+1}$, for $i=0,\ldots,s$, are constructed from the sets $\mathcal{R}_0,\ldots,\mathcal{R}_i$ by taking all
the different intersections $R_i \cap R_j \not= \emptyset$, with $R_i \not\subseteq R_j$ and $R_j \not\subseteq R_i$,
of the regions $R_i \in \mathcal{R}_i$ with the regions $R_j \in \bigcup_{k=0}^i \mathcal{R}_k$ and subsequently removing the
sets $R \in \mathcal{R}_{i+1}$ for which there exists an $R' \in \mathcal{R}_{i+1}$ with $R \subseteq R'$.
Note, $s$ is the smallest integer such that $\mathcal{R}_{s+1}$ is empty.
The counting number $\tilde c_R$ of region $R \in \mathcal{R}_i$ in a Kikuchi approximation is given by
\[\tilde c_R = 1-\sum_{R' \in \tilde{\mathcal{R}}: R \subset R'} \tilde c_{R'} = 1-\sum_{R' \in \cup_{k=0}^{i-1}
\mathcal{R}_k: R \subset R'} \tilde c_{R'},\]
as a region $R \in \mathcal{R}_i$ cannot be a subset of a region $R' \in \mathcal{R}_j$ with $j \geq i$
(since this would imply the existence of a superset of $R$ in $\mathcal{R}_i$).
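A literal (and unoptimised) sketch of this construction is given below; a region is represented as a pair of frozensets holding its variable and factor nodes, \texttt{R0} is the initial set of regions, and subsets, intersections and maximality are taken componentwise. The names are illustrative only.
\begin{verbatim}
def kikuchi_regions(R0):
    """Kikuchi construction: returns a dict region -> counting number tilde c_R.

    A region is a pair (vars, factors) of frozensets.  Each new level consists
    of the maximal pairwise intersections of the latest level with all regions
    so far; the counting numbers then follow from 1 minus the sum over strict
    supersets."""
    def subset(r, s):
        return r[0] <= s[0] and r[1] <= s[1]

    levels = [set(R0)]
    while True:
        prev = set().union(*levels)
        cand = set()
        for r in levels[-1]:
            for s in prev:
                if not subset(r, s) and not subset(s, r):
                    inter = (r[0] & s[0], r[1] & s[1])
                    if (inter[0] or inter[1]) and inter not in prev:
                        cand.add(inter)
        nxt = {r for r in cand
               if not any(subset(r, s) and r != s for s in cand)}
        if not nxt:
            break
        levels.append(nxt)

    counts = {}
    for lvl in levels:                 # level by level, exactly as in the text
        for r in lvl:
            counts[r] = 1 - sum(c for s, c in counts.items()
                                if subset(r, s) and r != s)
    return counts
\end{verbatim}
For instance, with $\mathcal{R}_0$ consisting of the three regions $R_{f_i}$ and the single maximal clique region of a triangle conflict graph, the construction returns the three regions $R_{x_i}$ with counting number $-1$, as expected.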
\begin{theorem}\label{th:Kikuchi}
The size $k_{max}$ clique approximation coincides with a Kikuchi approximation with $\mathcal{R}_0 = \{R_{f_1},\ldots,R_{f_n}\}
\cup \{R(K) | K \in \mathcal{K}_G(k_{max})\}$, where $\mathcal{K}_G(k_{max})$ is the union of the set of all the cliques of size $k_{max}$
and the set of the maximal cliques of size $k \in \{2,\ldots, k_{max}-1\}$ in $G$.
\end{theorem}
\begin{proof}
See Appendix \ref{app:Kikuchi_proof}.
\end{proof}
The above theorem shows that the maximal clique based Kikuchi approximation considered in \cite{swamy_SPCOM, swamy_arXiv2017} coincides
with the size $n$ clique approximation. We note that no explicit expression for the counting numbers or
a recursive scheme similar to \eqref{eq:recurs} is presented in \cite{swamy_SPCOM, swamy_arXiv2017}.
\section{Chordal conflict graphs}\label{sec:chordal}
In this section we establish two results: (a) we show that the exact explicit expressions for the
back-off rates for chordal conflict graphs, presented in \cite{vanhoudt_ton17}, corresponds to
a zero gradient point of a region-based free energy approximation defined for chordal conflict graphs only and
(b) we prove that the size $n$ clique approximation coincides with this chordal free energy approximation.
This implies that the size $n$ clique approximation (and therefore also a fixed point of the
IGBP algorithm of \cite{kai1}) provides exact results for chordal conflict graphs $G$.
A graph $G$ is chordal if and only if all cycles consisting of more than $3$ nodes have a {\it chord}. A chord of a cycle
is an edge joining two nonconsecutive nodes of the cycle. Let $\mathcal{K}_G = \{K_1,\ldots,K_m\}$ be the set of maximal cliques of $G$.
A clique tree $T=(\mathcal{K}_G,\mathcal{E})$ is a tree in which the nodes correspond
to the maximal cliques and the edges are such that the subgraph of $T$ induced by the maximal cliques that contain the node $v$
is a subtree of $T$ for any $v \in V$. A graph $G$ is chordal if and only if it has at least one clique tree
(see Theorem 3.1 in \cite{blair1}).
For chordal conflict graphs $G$ we can define a region-based free energy approximation, called the
{\it chordal} region-based free energy approximation, by making use of {\it any}
clique tree $T = (\mathcal{K}_G,\mathcal{E})$ of $G$ in the following manner.
We define a set $\mathcal{R}$ containing
$2n+2|\mathcal{K}_G|-1$ regions: one region $R_K$ for each maximal clique $K \in \mathcal{K}_G$, one region $R_{(K,K')}$ for each edge $(K,K') \in
\mathcal{E}$, one region $R_{f_i}$ for each factor node $f_i$ and one region $R_{x_i}$ for each variable node $x_i$.
Let $\mathcal{V}_R$ and $\mathcal{F}_R$ denote the set of variable and factor nodes associated with region $R \in \mathcal{R}$, then
$V_{R_K} = \{x_i | i \in K\}$ and
\begin{align*}
F_{R_K} &= \{f_{(i,j)}| i,j \in K\},
\end{align*}
for $K \in \mathcal{K}_G$, $V_{R_{(K,K')}} =\{x_i | i \in K \cap K'\}$ and
\begin{align*}
F_{R_{(K,K')}} &= \{f_{(i,j)}| i,j \in K \cap K'\},
\end{align*}
for $(K,K') \in \mathcal{E}$ and $V_{R_{x_i}} = V_{R_{f_i}} =\{x_i\}$, $F_{R_{x_i}} =\emptyset$ and $F_{R_{f_i}} =\{f_i\}$.
The counting numbers $c_R$ are defined as follows: $c_{R_K} = c_{R_{f_i}} =1$ and $c_{R_{(K,K')}} = c_{R_{x_i}} = -1$.
Note the set of regions $\mathcal{R}$ fulfills Condition \ref{cond:region}, therefore under clique belief we have
$U_\mathcal{R}(\{b_R\}) = - \sum_{i \in V} \phi_i \ln(\nu_i)$.
As the nodes in $K$ and $K\cap K'$ form a clique, the clique belief matches the exact marginal probabilities $p_R(x_R)$
for each $R \in \mathcal{R}$ when $\phi_i = \sum_{x \in \{0,1\}^n} p(x) 1_{\{x_i = 1\}}$ for $i \in V$.
As the beliefs $b_{R_{f_i}}(x_i)$ and $b_{R_{x_i}}(x_i)$ are the same and $c_{R_{f_i}} = -c_{R_{x_i}}$, these
regions cancel each other in the expression for the entropy. Thus for the entropy we have
\begin{align}\label{eq:HRbRchor}
H_\mathcal{R}&(\{b_R\}) = - \sum_{i \in V} \phi_i \ln(\phi_i) \nonumber \\
&+ \sum_{(K,K') \in \mathcal{E}} \left(1-\sum_{s \in K\cap K'} \phi_s \right) \ln(1-\sum_{s \in K\cap K'} \phi_s ) \nonumber \\
&- \sum_{K \in \mathcal{K}_G} \left(1-\sum_{s \in K} \phi_s \right) \ln(1-\sum_{s \in K} \phi_s ).
\end{align}
As noted before, even when the beliefs are equal to the exact marginal probabilities,
the region-based entropy is in general not exact. Below we prove that the entropy (and therefore
also the energy) is exact in this particular case by leveraging existing results on the junction
graph approximation method \cite[Appendix A]{yedidia1}.
\begin{theorem}\label{th:chordal}
The expression for the region-based entropy $H_\mathcal{R}(\{b_R\})$ given by \eqref{eq:HRbRchor}
is equal to the Gibbs entropy $H(p)$ defined by \eqref{eq:GibbsH}. Further, $H(p) = - \ln(\frac{1}{Z}\prod_{i\in V} \nu_i^{p_i(1)})$ and
if $x \in \{0,1\}^n$ such that $x_ix_j = 0$ if $(i,j) \in E$ ($p(x)=0$ otherwise), we have
\begin{align}\label{eq:px}
p&(x) = \frac{\prod_{K \in \mathcal{K}_G} (\vartheta_1(K)+\vartheta_0(K))}{\prod_{(K,K') \in \mathcal{E}}
(\vartheta_1(K\cap K')+\vartheta_0(K\cap K'))},
\end{align}
where
$\vartheta_1(S) = \sum_{i\in S} p_i(1) 1_{\{x_i=1, \sum_{i \in S} x_i=1\}}$
and
$\vartheta_0(S) = \left(1-\sum_{s\in S} p_s(1)\right) 1_{\{\sum_{i \in S} x_i=0\}}$.
\end{theorem}
\begin{proof}
See Appendix \ref{app:chordal_proof}.
\end{proof}
\begin{corollary}
For a chordal conflict graph $G$ with clique tree $(\mathcal{K}_G,\mathcal{E})$, the normalizing constant $Z$ is given by
\begin{align}\label{eq:Z}
Z = \frac{\prod_{(K,K') \in \mathcal{E}} \left(1-\sum_{s\in K\cap K'} p_s(1)\right)}{\prod_{K \in \mathcal{K}_G} \left(1-\sum_{s\in K} p_s(1)\right)},
\end{align}
and the back-off rate $\nu_i$ obeys
\begin{align}\label{eq:nui}
\nu_i = p_i(1) \frac{\prod_{(K,K') \in \mathcal{E}: i \in K \cap K'} \left(1-\sum_{s\in K\cap K'} p_s(1)\right)}{\prod_{K \in \mathcal{K}_G: i \in K}
\left(1-\sum_{s\in K} p_s(1)\right)},
\end{align}
where the marginal probability $p_i(1)$ is the throughput of link $i$.
\end{corollary}
\begin{proof}
The expression for $Z$ follows from equating \eqref{eq:pf} and \eqref{eq:px} with $x=(0,\ldots,0)$.
The back-off rate $\nu_i$ is found in the same way using $x$ with $x_i=1$ and $x_j = 0$ for $i \not= j$.
\end{proof}
Note the above formula for the back-off rate $\nu_i$ for chordal conflict graphs $G$ was derived earlier
in \cite{vanhoudt_ton17} and corresponds to \eqref{eq:backoff}.
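For completeness, a sketch of \eqref{eq:nui} is given below; \texttt{max\_cliques} and \texttt{tree\_edges} are assumed to describe a clique tree of the chordal conflict graph $G$, and \texttt{phi} holds the target throughputs $p_i(1)$.
\begin{verbatim}
def chordal_backoff(i, phi, max_cliques, tree_edges):
    """Exact back-off rate nu_i for a chordal conflict graph.

    max_cliques -- list of maximal cliques of G (each a set of nodes)
    tree_edges  -- pairs (a, b) of indices into max_cliques forming a clique tree
    phi         -- dict: node -> target throughput p_i(1)
    """
    nu = phi[i]
    for a, b in tree_edges:
        sep = max_cliques[a] & max_cliques[b]        # separator K cap K'
        if i in sep:
            nu *= 1.0 - sum(phi[s] for s in sep)
    for K in max_cliques:
        if i in K:
            nu /= 1.0 - sum(phi[s] for s in K)
    return nu

# Hypothetical usage: the path 1 - 2 - 3 (chordal), maximal cliques {1,2} and {2,3}
# joined by one clique-tree edge; returns 0.3 * 0.7 / (0.5 * 0.5) = 0.84 for node 2.
print(chordal_backoff(2, {1: 0.2, 2: 0.3, 3: 0.2}, [{1, 2}, {2, 3}], [(0, 1)]))
\end{verbatim}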
\begin{theorem}\label{th:coin}
When the conflict graph $G$ is chordal the Kikuchi approximation with $\mathcal{R}_0 = \{R_{f_1},\ldots,R_{f_n}\}
\cup \{R(K) |K \in \mathcal{K}_G\}$, where $\mathcal{K}_G$ is the set of the maximal cliques in $G$,
coincides with the chordal region-based free energy approximation (defined for chordal conflict graphs only).
\end{theorem}
\begin{proof}
See Appendix \ref{app:coin}.
\end{proof}
\begin{corollary}
When $G$ is chordal, the back-off rates given by \eqref{eq:backoff} for the
size $k_{max}=n$ clique approximation or the Kikuchi approximation
with $\mathcal{R}_0 = \{R_{f_1},\ldots,R_{f_n}\} \cup \{R(K) |K \in \mathcal{K}_G\}$,
where $\mathcal{K}_G$ is the set of the maximal cliques in $G$, are both equal to \eqref{eq:nui}
and are therefore exact.
\end{corollary}
The exactness of the above Kikuchi approximation for chordal conflict graphs was established
independently in \cite{swamy_SPCOM, swamy_arXiv2017}.
\section{Experimental evaluation}\label{sec:eval}
To study the accuracy of the size $k_{max}$ clique approximation we perform simulation
experiments similar to the ones presented in \cite[Section IV.E]{kai1} for IGBP, except that we
consider a different set of conflict graphs and also compare with the local
chordal subgraph (LCS) approximation presented in \cite{vanhoudt_ton17}. More
specifically, we simulate the ideal CSMA model with the back-off rates set as estimated by
each approximation method and compute the mean relative error between the
given target throughputs (used as input by the approximation method) and
the throughputs observed during simulation.
The set of conflict graphs considered is similar to the ones used in \cite{vanhoudt_ton17}:
the $n$ nodes of the conflict graph are placed randomly in a square of size $1$ and there
exists an edge between two nodes if and only if the Euclidean distance between them is less
than some threshold $R$. In the experiments we set $n=100$ nodes with $R$ values equal to
$0.15, 0.2$ and $0.25$. At this point we also note that node $i \in V$ requires the same input to compute its back-off rate
irrespective of whether it uses the size $k_{max}$ clique approximation or the LCS approximation:
it needs to construct the subgraph $G_i=(V_i,E_i)$ induced by node $i$ and its $d_i$ neighbors.
Hence both approximations can be implemented in a fully distributed manner.
\begin{figure}[t]
\center
\includegraphics[width=0.4\textwidth]{Gin020.pdf}
\caption{Conflict graph with $n=100$ used in Figure \ref{fig:relerror} for $R = 0.20$.}
\label{fig:R020}
\end{figure}
In a first set of experiments the target throughput of each node was set equal to $\phi$ divided
by the size of the largest clique in the graph, where $\phi$ equals $0.55, 0.7$ and $0.85$.
Note that for $\phi > 1$ this vector does not belong to $\Gamma$, the set of achievable throughput vectors.
Figure \ref{fig:relerror} presents the mean relative error of the LCS and size $k_{max}$ clique approximation
for various combinations of $R$ and $\phi$ (the corresponding conflict graph for $R=0.20$ is shown in Figure \ref{fig:R020}).
For each value of $R$, the largest value for $k_{max}$ presented in this
figure corresponds to the size of the largest clique in the graph, that is, the same result is obtained when
setting $k_{max}=n$. Recall that we showed earlier that
the size $n$ clique approximation coincides with a fixed point of the IGBP algorithm of \cite{kai1}.
Thus, the rightmost bar in Figure \ref{fig:relerror} corresponds to a fixed point of the IGBP algorithm.
Figure \ref{fig:relerror} shows that the size $k_{max}$ clique approximation is more accurate than the LCS approximation in this setting,
except for small $k_{max}$, at the expense of being more complex.
\begin{figure}[t]
\center
\includegraphics[width=0.45\textwidth]{RelativeErrorSizekmax.pdf}
\caption{Mean relative error in the achieved throughput of the LCS and size $k_{max}$ clique approximation
in a conflict graph with $n=100$ nodes for $\phi = 0.55, 0.7$ and $0.85$ and $R = 0.15, 0.2$ and $0.25$.}
\label{fig:relerror}
\end{figure}
We further note that the relative errors grow as the graph becomes more dense (increasing $R$) and this growth seems
more pronounced for the LCS approximation. We also note that the approximation becomes
worse as the target throughput of the links increases (increasing $\phi$). Nevertheless the mean relative
error of the size $n$ clique approximation remains below $2\%$ in all cases. This is somewhat higher
than the values reported in \cite{kai1} for IGBP, but this is mostly due to the fact that more
dense conflict graphs are considered here (for $R=0.15$ we have $301$ edges, while for $R=0.25$
we have as many as $788$ edges).
While the results in Figure \ref{fig:relerror} are based on three conflict graphs only,
Figure \ref{fig:relerror2} compares the accuracy of the LCS and size $k_{max}$ approximations
with $k_{max} = 2, 5, n$ on a set of $75$ conflict graphs: $25$ for each of the three $R$ values.
The target throughput of node $i$ is set equal to $0.85/(1+d_i)$, meaning not all nodes
have the same target throughput (as opposed to Figure \ref{fig:relerror}).
For each $R$ value and approximation method considered, Figure \ref{fig:relerror2} depicts the
mean relative throughput error (obtained by simulation) for the $2$ conflict graphs
that resulted in the smallest and largest mean relative throughput error, as well as the average
taken over the $25$ conflict graphs.
\begin{figure}[t]
\center
\includegraphics[width=0.45\textwidth]{RelativeErrorSizekmax2.pdf}
\caption{Smallest, average and largest mean relative error in the achieved throughput
of the LCS and size $k_{max}$ clique approximation
for $25$ conflict graphs with $n=100$ nodes when $\phi_i = 0.85/(1+d_i)$ for $R = 0.15, 0.2$ and $0.25$.}
\label{fig:relerror2}
\end{figure}
The results in Figure \ref{fig:relerror2} are in agreement with Figure \ref{fig:relerror}:
the LCS approximation outperforms the Bethe approximation; increasing $k_{max}$ reduces the relative throughput errors; and the
LCS error increases more significantly than that of the size $n$ approximation when the graph becomes denser.
We further note that the size $k_{max} = 5$ approximation
produces errors close to the size $n$ approximation, which is a useful observation in case we
wish to limit the time needed to compute the required back-off rates.
To get an idea on the computation times of the back-off vector for the different approximations,
we generated 1000 conflict graphs with $n=100$ and $R \in (0,0.25]$. Figure \ref{fig:times} depicts
the average time needed to compute the vector of back-off rates for all the conflict graphs
with a maximum clique size between $4$ and $15$ (only $9$ of the $1000$ conflict graphs contained a $15+$ clique).
The results show that the computation times of the LCS and Bethe approximation have a similar shape. They
also highlight that for denser graphs limiting $k_{max}$ may offer an attractive trade-off between the
computation times and accuracy of the approximation.
We end by noting that the time complexity to compute the back-off rate for node $i$ only depends on
the structure of the subgraph $G_i$ induced by node $i$ and its neighbors. Thus the overall network size $n$
has no impact on the computation times and as such the approximation method is also suitable for very large
graphs as long as the size of the one hop neighborhood does not scale with the overall network size.
Further the complexity to compute $\nu_i$ for the size $n$ approximation
is similar to performing a single iteration of the IGBP algorithm, for
which convergence to a (unique) fixed point is not guaranteed and the number of iterations grows
with the density of the conflict graph (see \cite[Tables IV and V]{kai1}).
\begin{figure}[t]
\center
\includegraphics[width=0.45\textwidth]{RunTimesApprox.pdf}
\caption{Computation time of the vector of back-off rates as a function of the maximum clique size of $G$.}
\label{fig:times}
\end{figure}
\section{Conclusions}\label{sec:conc}
In this paper we presented the class of region-based free energy approximations for the ideal CSMA model,
which contains the Bethe approximation of \cite{yun1} as a special case.
We obtained a closed form expression for the vector of back-off rates that corresponds to a zero gradient point
of the free energy within the set of clique beliefs (in terms of its counting numbers).
We subsequently focused on the size $k_{max}$ clique
approximation (which can be implemented in a fully distributed manner)
and derived explicit expressions for its counting numbers as well as a recursive method
to compute the back-off rates more efficiently. We further showed that this approximation is exact on
chordal conflict graphs and coincides with a Kikuchi approximation. The latter result implies that
the size $k_{max}$ clique approximation with $k_{max}=n$ gives an explicit expression for a fixed point
of the IGBP algorithm of \cite{kai1}.
The paper also contains an alternate proof for the back-off rates needed to achieve any achievable
throughput vector in a chordal graph $G$, as presented in \cite{vanhoudt_ton17}.
There are a number of possible extensions to the work presented in this paper. First, while this paper has
focused on achieving a given target throughput vector, it should be possible to consider utility maximization
problems as in \cite{yun1}. Second, one could try to relax the conflict graph based interference model considered in this
paper to a more realistic SINR (signal-to-interference-plus-noise ratio) model. In fact, such a relaxation
of the Bethe approximation presented in \cite{yun1} was recently developed in \cite{swamy1}.
Finally, other free energy approximation techniques such as the tree-based reparameterization framework of \cite{wainwright1}
could be considered as well.
\bibliographystyle{plain}
\section{Introduction}
The volatility of the price movements reflects the ubiquitous uncertainty within financial markets. It is critical that the level of risk (i.e., the degree of variation), indicated by volatility, is taken into consideration before investment decisions are made and portfolios are optimised \cite{hull2006options}; volatility is essentially a key variable in the pricing of derivative securities. Hence, estimating and forecasting volatility is of great importance in branches of financial studies, including investment, risk management, security valuation and monetary policy making \cite{poon2003forecasting}.
Volatility is measured typically by employing the standard deviation of price change in a fixed time interval, such as a day, a month or a year. The higher the volatility is, the riskier the asset should be. One of the primary challenges in designing volatility models is to identify the existence of latent stochastic processes and to characterise the underlying dependences or interactions between variables within a certain time span. A classic approach has been to handcraft the characteristic features of volatility models by imposing assumptions and constraints, given prior knowledge and observations. Notable examples include autoregressive conditional heteroscedasticity (ARCH) model \cite{engle1982autoregressive} and the extension, generalised ARCH (GARCH) \cite{bollerslev1986generalized}, which makes use of autoregression to capture the properties of time-varying volatility within many time series. As an alternative to the GARCH model family, the class of stochastic volatility (SV) models specify the variance to follow some latent stochastic process \cite{hull1987pricing}.
Heston \cite{heston1993closed} proposed a continuous-time model with the volatility following an Ornstein-Uhlenbeck process and derived a closed-form solution for options pricing.
Since the temporal discretisation of continuous-time dynamics sometimes leads to a deviation from the original trajectory of the system, those continuous-time models are seldom applied in forecasting. For practical purposes of forecasting, the canonical model \cite{jacquier2002bayesian,kim1998stochastic} formulated in a discrete-time fashion for regularly spaced data such as daily prices of stocks is of great interest.
While theoretically sound, those approaches require strong assumptions which might involve detailed insight into the target sequences and are difficult to determine without a thorough inspection.
In this paper, we take a fully data-driven approach and determine the configurations with as little exogenous input as possible, or even purely from the historical data. We propose a neural network re-formulation of stochastic volatility by leveraging stochastic models and recurrent neural networks (RNNs).
Inspired by the work of Chung et al. \cite{DBLP:conf/nips/ChungKDGCB15} and Fraccaro et al. \cite{DBLP:conf/nips/FraccaroSPW16}, the proposed model is rooted in variational inference and equipped with the latest advances of stochastic neural networks. The model inherits the fundamentals of the SV model and provides a general framework for volatility modelling; it extends previous sequential frameworks with an autoregressive and bidirectional architecture and provides a more systematic and volatility-specific formulation of stochastic volatility modelling for financial time series.
We presume that the latent variables follow a Gaussian autoregressive process, which is then utilised to model the variance process. Our neural network formulation is essentially a general framework for volatility modelling, which covers two major classes of volatility models in financial study as special cases, with specific weights and activations on the neurons.
Experiments with real-world stock price datasets are performed. The result shows that the proposed model produces more accurate estimation and prediction, outperforming various widely-used deterministic models in the GARCH family and several recently proposed stochastic models on average negative log-likelihood; the high flexibility and rich expressive power are validated.
\section{Related Work}
A notable framework for volatility is autoregressive conditional heteroscedasticity (ARCH) model \cite{engle1982autoregressive}: it can accurately identify the characteristics of time-varying volatility within many types of time series. Inspired by ARCH model, a large body of diverse work based on stochastic process for volatility modelling has emerged \cite{bollerslev1994arch}. Bollerslev \cite{bollerslev1986generalized} generalised ARCH model to the generalised autoregressive conditional heteroscedasticity (GARCH) model in a manner analogous to the extension from autoregressive (AR) model to autoregressive moving average (ARMA) model by introducing the past conditional variances in the current conditional variance estimation. Engle and Kroner \cite{engle1995multivariate} presented theoretical results on the formulation and estimation of multivariate GARCH model within simultaneous equations systems. The extension to multivariate model allows the covariance to present and depend on the historical information, which are particularly useful in multivariate financial models. An alternative to the conditionally deterministic GARCH model family is the class of stochastic volatility (SV) models, which first appeared in the theoretical finance literature on option pricing \cite{hull1987pricing}. The SV models specify the variance to follow some latent stochastic process such that the current volatility is no longer a deterministic function even if the historical information is provided. As an example, Heston's model \cite{heston1993closed} characterises the variance process as a Cox-Ingersoll-Ross process driven by a latent Wiener process. While theoretically sound, those approaches require strong assumptions which might involve complex probability distributions and non-linear dynamics that drive the process.
Nevertheless, empirical evidences have confirmed that volatility models provide accurate prediction \cite{andersen1998answering} and models such as ARCH and its descendants/variants have become indispensable tools in asset pricing and risk evaluation.
Notably, several models have been recently proposed for practical forecasting tasks: Kastner et al. \cite{DBLP:journals/csda/KastnerF14} implemented the MCMC-based framework \emph{stochvol} where the ancillarity-sufficiency interweaving strategy (ASIS) is applied for boosting MCMC estimation of stochastic volatility; Wu et al. \cite{DBLP:conf/nips/WuHG14} designed \emph{GP-Vol}, a non-parametric model which utilises Gaussian processes to characterise the dynamics and jointly learns the process and hidden states via an online inference algorithm. Although they provide a practical approach towards stochastic volatility forecasting, both models require a relatively large volume of samples to ensure accuracy, which involves a very expensive sampling routine at each time step. Another drawback is that those models are incapable of handling the forecasting task for multivariate time series.
On the other hand, deep learning \cite{DBLP:journals/nature/LeCunBH15,DBLP:journals/nn/Schmidhuber15}, which utilises nonlinear structures known as deep neural networks, powers various applications. It has triumphed over pattern recognition challenges, such as image recognition \cite{DBLP:conf/nips/KrizhevskySH12}, speech recognition \cite{DBLP:conf/nips/ChorowskiBSCB15} and machine translation \cite{DBLP:journals/corr/BahdanauCB14}, to name a few.
Time-dependent neural networks models include RNNs with neuron structures such as long short-term memory (LSTM) \cite{DBLP:journals/neco/HochreiterS97}, bidirectional RNN (BRNN) \cite{DBLP:journals/tsp/SchusterP97}, gated recurrent unit (GRU) \cite{DBLP:conf/emnlp/ChoMGBBSB14} and attention mechanism \cite{DBLP:conf/icml/XuBKCCSZB15}.
Recent results show that RNNs excel for sequence modelling and generation in various applications \cite{DBLP:conf/icml/OordKK16,DBLP:conf/emnlp/ChoMGBBSB14,DBLP:conf/icml/XuBKCCSZB15}. However, despite its capability as non-linear universal approximator, one of the drawbacks of neural networks is its deterministic nature. Adding latent variables and their processes into neural networks would easily make the posteriori computationally intractable. Recent work shows that efficient inference can be found by variational inference when hidden continuous variables are embedded into the neural networks structure \cite{DBLP:journals/corr/KingmaW13,DBLP:conf/icml/RezendeMW14}. Some early work has started to explore the use of variational inference to make RNNs stochastic:
Chung et al. \cite{DBLP:conf/nips/ChungKDGCB15} defined a sequential framework with complex interacting dynamics of coupling observable and latent variables whereas Fraccaro et al. \cite{DBLP:conf/nips/FraccaroSPW16} utilised heterogeneous backward propagating layers in inference network according to its Markovian properties.
In this paper, we apply the stochastic neural networks to solve the volatility modelling problem. In other words, we model the dynamics and stochastic nature of the degree of variation, not only the mean itself. Our neural network treatment of volatility modelling is a general one and existing volatility models (e.g., the Heston and GARCH models) are special cases in our formulation.
\section{Preliminaries: Volatility Models}
Volatility models characterise the dynamics of volatility processes, and help estimate and forecast the fluctuation within time series. As it is often the case that one seeks for prediction on quantity of interest with a collection of historical information at hand, we presume the conditional variance to have dependency -- either deterministic or stochastic -- on history, which results in two categories of volatility models.
\subsection{Deterministic Volatility Models: the GARCH Model Family}
The GARCH model family comprises various linear models that formulate the conditional variance at present as a linear function of observations and variances from the past. Bollerslev's extension \cite{bollerslev1986generalized} of Engle's primitive ARCH model \cite{engle1982autoregressive}, referred as generalised ARCH (GARCH) model, is one of the most well-studied and widely-used volatility models:
\begin{align}
\label{eq:garch}
\sigma^2_t &= \alpha_0 + \sum_{i=1}^p \alpha_i x^2_{t-i} + \sum_{j=1}^q \beta_j \sigma^2_{t-j}, \\
\label{eq:assumption}
x_t &\sim \mathscr{N}(0, \sigma^2_t),
\end{align}
where Eq. \eqref{eq:assumption} represents the assumption that the observation $x_t$ follows from the Gaussian distribution with mean 0 and variance $\sigma^2_t$; the (conditional) variance $\sigma^2_t$ is fully determined by a linear function (Eq. \eqref{eq:garch}) of previous observations $\{x_{<t}\}$ and variances $\{\sigma^2_{<t}\}$. Note that if $q=0$ in Eq. \eqref{eq:garch}, GARCH model degenerates to ARCH model.
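As an illustration of \eqref{eq:garch} and \eqref{eq:assumption}, the following sketch (in Python, with arbitrary parameter values chosen purely for demonstration) simulates a GARCH(1,1) path.
\begin{verbatim}
import numpy as np

def simulate_garch11(T, alpha0=0.1, alpha1=0.2, beta1=0.7, seed=0):
    """Simulate x_1,...,x_T from GARCH(1,1):
    sigma_t^2 = alpha0 + alpha1*x_{t-1}^2 + beta1*sigma_{t-1}^2,  x_t ~ N(0, sigma_t^2)."""
    rng = np.random.default_rng(seed)
    x, sigma2 = np.zeros(T), np.zeros(T)
    sigma2[0] = alpha0 / (1.0 - alpha1 - beta1)   # start at the unconditional variance
    x[0] = rng.normal(0.0, np.sqrt(sigma2[0]))
    for t in range(1, T):
        sigma2[t] = alpha0 + alpha1 * x[t - 1] ** 2 + beta1 * sigma2[t - 1]
        x[t] = rng.normal(0.0, np.sqrt(sigma2[t]))
    return x, sigma2

x, sigma2 = simulate_garch11(1000)
\end{verbatim}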
Various variants have been proposed ever since. Glosten, Jagannathan and Runkle \cite{glosten1993relation} extended GARCH model with additional terms to account for asymmetries in the volatility and proposed GJR-GARCH model; Zakoian \cite{zakoian1994threshold} replaced the quadratic operators with absolute values, leading to threshold ARCH/GARCH (TARCH) models. The general functional form is formulated as
\begin{align}
\label{eq:tarch}
\sigma^d_t = \alpha_0 &+ \sum_{i=1}^p \alpha_i |x_{t-i}|^d + \sum_{j=1}^q \beta_j \sigma^d_{t-j}\notag \\
&+ \sum_{k=1}^o \gamma_k |x_{t-k}|^d I[x_{t-k} < 0],
\end{align}
where $I[x_{t-k} < 0]$ is the indicator function: $I[x_{t-k} < 0] = 1$ if $x_{t-k} < 0$, and 0 otherwise, which allows for asymmetric reactions of the volatility to the sign of the previous observations $\{x_{<t}\}$.
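To make the recursion concrete, the following minimal sketch (Python/NumPy; all coefficient values are purely illustrative, not calibrated) filters a return series through the general form of Eq. \eqref{eq:tarch}; setting $(p,o,q,d)$ as in the list below recovers the individual variants.
\begin{verbatim}
import numpy as np

def conditional_variance(x, alpha0, alpha, gamma, beta, d=2.0, sigma_d_init=1.0):
    """Recursion of the general form: sigma_t^d from past |x| and past sigma^d.

    alpha, gamma, beta are coefficient lists of lengths p, o, q (any may be empty).
    All parameter values used below are purely illustrative.
    """
    p, o, q = len(alpha), len(gamma), len(beta)
    T = len(x)
    sig_d = np.full(T, sigma_d_init)              # sigma_t^d, constant initialisation
    for t in range(max(p, o, q, 1), T):
        s = alpha0
        s += sum(alpha[i] * abs(x[t - 1 - i]) ** d for i in range(p))
        s += sum(gamma[k] * abs(x[t - 1 - k]) ** d * (x[t - 1 - k] < 0) for k in range(o))
        s += sum(beta[j] * sig_d[t - 1 - j] for j in range(q))
        sig_d[t] = s
    return sig_d ** (1.0 / d)                     # return sigma_t itself

# GARCH(1,1): p=1, q=1, o=0, d=2 (toy coefficients)
rng = np.random.default_rng(0)
x = rng.standard_normal(500) * 0.01
sigma = conditional_variance(x, alpha0=1e-6, alpha=[0.05], gamma=[], beta=[0.9], d=2.0)
\end{verbatim}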
Various variants of GARCH model can be expressed by assigning values to parameters $p,o,q,d$ in Eq. \eqref{eq:tarch}:
{
\begin{enumerate}
\setlength\itemsep{0em}
\item{ARCH($p$): $p \in \mathbb{N}^+$; $q \equiv 0$; $o \equiv 0$; $d \equiv 2$}
\item{GARCH($p,q$): $p \in \mathbb{N}^+$; $q \in \mathbb{N}^+$; $o \equiv 0$; $d \equiv 2$}
\item{GJR-GARCH($p,o,q$): $p \in \mathbb{N}^+$; $q \in \mathbb{N}^+$; $o \in \mathbb{N}^+$; $d \equiv 2$}
\item{AVARCH($p$): $p \in \mathbb{N}^+$; $q \equiv 0$; $o \equiv 0$; $d \equiv 1$}
\item{AVGARCH($p,q$): $p \in \mathbb{N}^+$; $q \in \mathbb{N}^+$; $o \equiv 0$; $d \equiv 1$}
\item{TARCH($p,o,q$): $p \in \mathbb{N}^+$; $q \in \mathbb{N}^+$; $o \in \mathbb{N}^+$; $d \equiv 1$}
\end{enumerate}
}%
Another fruitful specification is Nelson's exponential GARCH (EGARCH) model \cite{nelson1991conditional}, which instead formulates the dependencies in terms of the log-variance $\log(\sigma^2_t)$:
\begin{align}
\label{eq:egarch}
\log(\sigma^2_t) &= \alpha_0 + \sum_{i=1}^p \alpha_i g(x_{t-i}) + \sum_{j=1}^q \beta_j \log(\sigma^2_{t-j}), \\
\label{eq:g}
g(x_t) &= \theta x_t + \gamma (|x_t| - \mathbb{E}[|x_t|]),
\end{align}
where $g(x_t)$ (Eq. \eqref{eq:g}) accommodates the asymmetric relation between observations and volatility changes. If $q \equiv 0$ in Eq. \eqref{eq:egarch}, EGARCH($p,q$) degenerates to EARCH($p$).
\subsection{Stochastic Volatility Models}
An alternative to the (conditionally) deterministic volatility models is the class of stochastic volatility (SV) models. First introduced in the theoretical finance literature, the earliest SV models, such as Hull and White's \cite{hull1987pricing} as well as the Heston model \cite{heston1993closed}, are formulated as stochastic differential equations in continuous time for analytical convenience. In particular, the Heston model instantiates a continuous-time stochastic volatility model for univariate processes:
\begin{align}
\label{eq:heston_z}
\diff{\sigma}(t) &= -\beta \sigma(t) \diff{t} + \delta \diff{w^{\sigma}}(t), \\
\diff{x}(t) &= (\mu - 0.5\sigma^2(t)) \diff{t} + \sigma(t) \diff{w^{x}}(t),
\end{align}
where $x(t)=\log{s}(t)$ is the logarithm of the stock price $s(t)$ at time $t$, $w^{x}(t)$ and $w^{\sigma}(t)$ represent two correlated Wiener processes, and the correlation between $\diff{w^{x}}(t)$ and $\diff{w^{\sigma}}(t)$ is expressed as $\mathbb{E}[\diff{w^{x}}(t) \cdot \diff{w^{\sigma}}(t)] = \rho \diff{t}$.
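As an illustration, the following sketch (NumPy; all parameter values are hypothetical) integrates the two coupled SDEs above with an Euler--Maruyama scheme, generating correlated Wiener increments with correlation $\rho$.
\begin{verbatim}
import numpy as np

def simulate_heston_like(T=1.0, n=1000, mu=0.05, beta=2.0, delta=0.3,
                         rho=-0.5, sigma0=0.2, x0=0.0, seed=1):
    """Euler-Maruyama integration of the continuous-time SV model above.

    d sigma = -beta*sigma*dt + delta*dW_sigma,
    d x     = (mu - 0.5*sigma^2)*dt + sigma*dW_x,
    with E[dW_x dW_sigma] = rho*dt.  Parameter values are illustrative only.
    """
    rng = np.random.default_rng(seed)
    dt = T / n
    sigma = np.empty(n + 1); x = np.empty(n + 1)
    sigma[0], x[0] = sigma0, x0
    for t in range(n):
        z1, z2 = rng.standard_normal(2)
        dw_x = np.sqrt(dt) * z1
        dw_s = np.sqrt(dt) * (rho * z1 + np.sqrt(1 - rho**2) * z2)  # correlated increment
        sigma[t + 1] = sigma[t] - beta * sigma[t] * dt + delta * dw_s
        x[t + 1] = x[t] + (mu - 0.5 * sigma[t]**2) * dt + sigma[t] * dw_x
    return x, sigma

x_path, sigma_path = simulate_heston_like()
\end{verbatim}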
For practical use, empirical versions of the SV model, typically formulated in a discrete-time fashion, are of great interest. The canonical model \cite{jacquier2002bayesian,kim1998stochastic} for regularly spaced data is formulated as
\begin{align}
\label{eq:sv1}
\log(\sigma^2_t) &= \eta + \phi (\log(\sigma^2_{t-1})-\eta) + z_t, \\
\label{eq:sv2}
z_t &\sim \mathscr{N}(0, \sigma^2_{z}), \quad x_t \sim \mathscr{N}(0, \sigma^2_t).
\end{align}
Equation~\eqref{eq:sv1} indicates that the (conditional) log-variance $\log(\sigma^2_t)$ depends not only on the historical log-variances $\{\log(\sigma^2_{<t})\}$ but also on a latent stochastic process $\{z_t\}$. The latent process $\{z_t\}$ is, according to Eq. \eqref{eq:sv2}, a white-noise process of i.i.d. Gaussian variables.
Notably, the volatility $\sigma^2_t$ is no longer conditionally deterministic (i.e. deterministic given the complete history $\{\sigma^2_{<t}\}$) but to some extent stochastic in the setting of SV models: Heston model involves two correlated continuous-time Wiener processes while the canonical model is driven by a discrete-time Gaussian white-noise process.
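A minimal simulation of the canonical discrete-time SV model of Eqs. \eqref{eq:sv1}--\eqref{eq:sv2} (NumPy; the parameter values are illustrative) reads:
\begin{verbatim}
import numpy as np

def simulate_canonical_sv(T=2000, eta=-1.0, phi=0.95, sigma_z=0.2, seed=0):
    """Simulate the canonical SV model: AR(1) log-variance driven by Gaussian noise."""
    rng = np.random.default_rng(seed)
    log_var = np.empty(T)
    x = np.empty(T)
    log_var[0] = eta
    for t in range(T):
        if t > 0:
            log_var[t] = eta + phi * (log_var[t - 1] - eta) + rng.normal(0.0, sigma_z)
        x[t] = rng.normal(0.0, np.exp(0.5 * log_var[t]))   # x_t ~ N(0, sigma_t^2)
    return x, np.exp(log_var)                              # observations and sigma_t^2

x, sigma2 = simulate_canonical_sv()
\end{verbatim}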
\subsection{Volatility Models in a General Form}
Hereafter we denote the sequence of observations as $\{x_t\}$ and the latent stochastic process as $\{z_t\}$. As seen in previous sections, the dynamics of the volatility process $\{\sigma^2_t\}$ can be abstracted in the form of
\begin{align}
\label{eq:f}
\sigma^2_t = f(\sigma^2_{<t}, x_{<t}, z_{\le t}) = \Sigma^x(x_{<t}, z_{\le t}).
\end{align}
The latter equality holds when we recursively substitute $\sigma^2_\tau$ with $f(\sigma^2_{<\tau}, x_{<\tau}, z_{\le \tau})$ for all $\tau<t$. For models within the GARCH family, we discard $z_{\le t}$ in $\Sigma^x(x_{<t}, z_{\le t})$ (Eq. \eqref{eq:f}); for the primitive SV model, on the other hand, $x_{<t}$ is ignored instead. For more flexibility, we can loosen the constraint that $x_t$ is zero-mean to allow a time-varying mean $\mu^x(x_{<t}, z_{\le t})$.
Recall that the latent stochastic process $\{z_t\}$ (Eq. \eqref{eq:sv2}) in the SV model is by definition an i.i.d. Gaussian white-noise process. We may extend this process to one with inherent autoregressive dynamics and more flexibility, in which the mean $\mu^z(z_{<t})$ and variance $\Sigma^z(z_{<t})$ are functions of the historical values through an autoregressive structure. Hence, the generalised model can be formulated in the following framework:
\begin{align}
\label{eq:z}
z_t | z_{<t} &\sim \mathscr{N}(\mu^z(z_{<t}), \Sigma^z(z_{<t})),\\
\label{eq:x}
x_t | x_{<t}, z_{\le t} &\sim \mathscr{N}(\mu^x(x_{<t}, z_{\le t}), \Sigma^x(x_{<t}, z_{\le t})),
\end{align}
where we have presumed that both the observation $x_t$ and the latent variable $z_t$ are normally distributed. Note that the autoregressive process degenerates to an i.i.d. white-noise process when $\mu^z(z_{<t}) \equiv 0$ and $\Sigma^z(z_{<t}) \equiv \sigma^2_z$. It should be emphasised that the purpose of reinforcing an autoregressive structure \eqref{eq:z} for the latent variable $z_t$ is that we believe such a formulation fits real scenarios better, from a financial perspective, than the i.i.d. convention: the price fluctuation of a certain stock is the consequence of not only its own history but also the influence from the environment, e.g. its competitors, up/downstream industries, relevant companies in the market, etc. Such external influence is ever-changing and may preserve memory, and it is hence hard to characterise if restricted to i.i.d. noise. The latent variable $z_t$ with an autoregressive structure provides a possibility of decoupling the internal influential factors from the external ones, which we believe is the essence of introducing $z_t$.
\section{Neural Stochastic Volatility Models}
In this section, we establish the \emph{neural stochastic volatility model} (NSVM) for volatility estimation and prediction.
\subsection{Generating Observable Sequence}
Recall that the observable variable $x_t$ (Eq. \eqref{eq:x}) and the latent variable $z_t$ (Eq. \eqref{eq:z}) are described by autoregressive models (as $x_t$ also involves an exogenous input $z_{\le t}$). Let $p_{\ps{\Phi}}(x_t | x_{<t}, z_{\le t})$ and $p_{\ps{\Phi}}(z_t | z_{<t})$ denote the probability distributions of $x_t$ and $z_t$ at time $t$. The joint distributions of the sequences $\{x_t\}$ and $\{z_t\}$ then factorise as follows:
{
\begin{align}
\label{eq:pz}
p_{\ps{\Phi}}(Z) &= \prod_t p_{\ps{\Phi}}(z_t | z_{<t})\notag \\
&= \prod_t \mathscr{N}(z_{t} ; \mu^z_{\ps{\Phi}}(z_{<t}), \Sigma^z_{\ps{\Phi}}(z_{<t})),\\
\label{eq:px|z}
p_{\ps{\Phi}}(X|Z) &= \prod_t p_{\ps{\Phi}}(x_t | x_{<t}, z_{\le t})\notag \\
&= \prod_t \mathscr{N}(x_{t} ; \mu^x_{\ps{\Phi}}(x_{<t}, z_{\le t}), \Sigma^x_{\ps{\Phi}}(x_{<t}, z_{\le t})),
\end{align}
}%
where $X = \{x_{1:T}\}$ and $Z = \{z_{1:T}\}$ represent the sequences of observable and latent variables, respectively, whereas $\ps{\Phi}$ stands for the collection of parameters of the generative model. The unconditional generative model is defined as the joint distribution over the latent variable $Z$ and the observable $X$:
\begin{align}
\label{eq:pxz}
p_{\ps{\Phi}}(X, Z) =& \prod_t p_{\ps{\Phi}}(x_t | x_{<t}, z_{\le t}) p_{\ps{\Phi}}(z_t|z_{<t}).
\end{align}
It can be observed that the mean and variance are conditionally deterministic: given the historical information $\{z_{<t}\}$, the current mean $\mu^z_t = \mu^z_{\ps{\Phi}}(z_{<t})$ and variance $\Sigma^z_t = \Sigma^z_{\ps{\Phi}}(z_{<t})$ of $z_t$ are obtained and hence the distribution $\mathscr{N}(z_t; \mu^z_t, \Sigma^z_t)$ of $z_t$ is specified; after sampling $z_t$ from this distribution, we incorporate $\{x_{<t}\}$, calculate the current mean $\mu^x_t = \mu^x_{\ps{\Phi}}(x_{<t}, z_{\le t})$ and variance $\Sigma^x_t = \Sigma^x_{\ps{\Phi}}(x_{<t}, z_{\le t})$ of $x_t$, and determine its distribution $\mathscr{N}(x_t; \mu^x_t, \Sigma^x_t)$. It is natural and convenient to present such a procedure in a recurrent fashion because of its autoregressive nature. Since RNNs can essentially approximate arbitrary functions of recurrent form, the means and variances, which may be driven by complex non-linear dynamics, can be efficiently computed using RNNs.
The unconditional generative model consists of two pairs of RNN and multi-layer perceptron (MLP), namely $\RNN^z_G$/$\MLP^z_G$ for the latent variable and $\RNN^x_G$/$\MLP^x_G$ for the observable. We stack those two RNN/MLP pairs together according to the causal dependency between variables. The unconditional generative model is implemented as the \emph{generative network} abstracted as follows:
\begin{align}
\label{eq:mlp_zg}
\{\mu^z_t, \Sigma^z_t\} &= \MLP^z_G(h^z_t; \ps{\Phi}),\\
\label{eq:rnn_zg}
h^z_t &= \RNN^z_G(h^z_{t-1}, z_{t-1}; \ps{\Phi}),\\
\label{eq:zg_t}
z_t &\sim \mathscr{N}(\mu^z_t, \Sigma^z_t),\\
\label{eq:mlp_xg}
\{\mu^x_t, \Sigma^x_t\} &= \MLP^x_G(h^x_t; \ps{\Phi}),\\
\label{eq:rnn_xg}
h^x_t &= \RNN^x_G(h^x_{t-1}, x_{t-1}, z_t; \ps{\Phi}),\\
\label{eq:xg_t}
x_t &\sim \mathscr{N}(\mu^x_t, \Sigma^x_t),
\end{align}
where $h^z_t$ and $h^x_t$ denote the hidden states of the corresponding RNNs. The MLPs map the hidden states of the RNNs into the means and variances of the variables of interest. The collection of parameters $\ps{\Phi}$ comprises the weights of the RNNs and MLPs. NSVM relaxes the conventional constraint that the latent variable $z_t$ is $\mathscr{N}(0,1)$, in the sense that $z_t$ is no longer i.i.d. noise but a time-varying signal from an external, self-evolving process. As discussed above, this relaxation should benefit the effectiveness in real scenarios.
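The following sketch (PyTorch; layer sizes and hyperparameters are illustrative and do not reproduce the exact architecture used in the experiments) implements the generative recurrence of Eqs. \eqref{eq:mlp_zg}--\eqref{eq:xg_t} with GRU cells and two output heads per chain.
\begin{verbatim}
import torch
import torch.nn as nn

class GenerativeNetwork(nn.Module):
    """Forward sampling of the generative model: the z-chain feeds the x-chain."""
    def __init__(self, dim_x=6, dim_z=4, dim_h=10):
        super().__init__()
        self.rnn_z = nn.GRUCell(dim_z, dim_h)            # RNN^z_G
        self.rnn_x = nn.GRUCell(dim_x + dim_z, dim_h)    # RNN^x_G
        self.mlp_z_mu, self.mlp_z_var = nn.Linear(dim_h, dim_z), nn.Linear(dim_h, dim_z)
        self.mlp_x_mu, self.mlp_x_var = nn.Linear(dim_h, dim_x), nn.Linear(dim_h, dim_x)
        self.dim_x, self.dim_z, self.dim_h = dim_x, dim_z, dim_h

    def forward(self, T, batch=1):
        h_z = torch.zeros(batch, self.dim_h); h_x = torch.zeros(batch, self.dim_h)
        z_prev = torch.zeros(batch, self.dim_z); x_prev = torch.zeros(batch, self.dim_x)
        xs = []
        for _ in range(T):
            h_z = self.rnn_z(z_prev, h_z)                       # latent recurrence
            mu_z = self.mlp_z_mu(h_z)
            var_z = torch.exp(self.mlp_z_var(h_z))              # exponential head keeps the variance positive
            z = mu_z + var_z.sqrt() * torch.randn_like(mu_z)    # z_t ~ N(mu_z, var_z)
            h_x = self.rnn_x(torch.cat([x_prev, z], dim=-1), h_x)   # observable recurrence
            mu_x = self.mlp_x_mu(h_x)
            var_x = torch.exp(self.mlp_x_var(h_x))
            x = mu_x + var_x.sqrt() * torch.randn_like(mu_x)    # x_t ~ N(mu_x, var_x)
            xs.append(x)
            z_prev, x_prev = z, x
        return torch.stack(xs)                                  # shape (T, batch, dim_x)

sample = GenerativeNetwork()(T=50)
\end{verbatim}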
One should notice that when the latent variable $z_t$ is obtained, e.g. by inference (see details in the next subsection), the conditional distribution $p_{\ps{\Phi}}(X|Z)$ (Eq. \eqref{eq:px|z}) will be involved in generating the observable $x_t$ instead of the joint distribution $p_{\ps{\Phi}}(X, Z)$ (Eq. \eqref{eq:pxz}). This is essentially the scenario of predicting future values of the observable variable given its history. We will use the term ``generative model'' without distinguishing between the unconditional and the conditional generative model, as the intended meaning can be inferred from the context.
\subsection{Inferring the Latent Process}
As the generative model involves the latent variable $z_t$, of which the true values are inaccessible even when we have observed $x_t$, the marginal distribution $p_{\ps{\Phi}}(X)$ becomes the key that bridges the model and the data. However, the calculation of $p_{\ps{\Phi}}(X)$ itself or of its counterpart, the posterior distribution $p_{\ps{\Phi}}(Z | X)$, is often intractable as complex integrals are involved. We are unable to learn the parameters by differentiating the marginal log-likelihood $\log p_{\ps{\Phi}}(X)$ or to infer the latent variables through the true posterior. Therefore, we consider instead a restricted family of tractable distributions $q_{\ps{\Psi}}(Z | X)$, referred to as the approximate posterior family, as approximations to the true posterior $p_{\ps{\Phi}}(Z | X)$, such that the family is sufficiently rich and of high capacity to provide good approximations.
It is straightforward to verify that, given a sequence of observations $X=\{x_{1:T}\}$, for any $1\le t\le T$ the latent variable $z_t$ depends on the entire observation sequence. Hence, we define the inference model in the spirit of the mean-field approximation, where the approximate posterior is Gaussian and the following factorisation applies:
{
\begin{align}
\label{eq:qz|x}
q_{\ps{\Psi}}(Z | X) &= \prod^T_{t=1} q_{\ps{\Psi}}(z_t | z_{<t}, x_{1:T})\notag \\
&= \prod_t \mathscr{N}(z_t ; \tilde{\mu}^z_{\ps{\Psi}}(z_{<t}, x_{1:T}), \tilde{\Sigma}^z_{\ps{\Psi}}(z_{<t}, x_{1:T})),
\end{align}
}%
where $\tilde{\mu}^z_{\ps{\Psi}}(z_{<t}, x_{1:T})$ and $\tilde{\Sigma}^z_{\ps{\Psi}}(z_{<t}, x_{1:T})$ are functions of the latent history and of the given observation sequence $\{x_{1:T}\}$, representing the approximate mean and variance of the latent variable $z_t$; $\ps{\Psi}$ denotes the collection of parameters of the inference model.
The neural network implementation of the model, referred to as the \emph{inference network}, is designed as a cascaded architecture composed of an autoregressive RNN and a bidirectional RNN, where the bidirectional RNN incorporates both the forward and backward dependencies on the entire observations whereas the autoregressive RNN models the temporal dependencies on the latent variables:
\begin{align}
\label{eq:mlp_zi}
\{\tilde{\mu}^z_t, \tilde{\Sigma}^z_t\} &= \MLP^z_I(\tilde{h}^z_{t}; \ps{\Psi}),\\
\label{eq:rnn_zi}
\tilde{h}^z_{t} &= \RNN^z_I(\tilde{h}^z_{t-1}, z_{t-1}, [\tilde{h}^{\rightarrow}_t, \tilde{h}^{\leftarrow}_t]; \ps{\Psi}),\\
\label{eq:rnn_ri}
\tilde{h}^{\rightarrow}_t &= \RNN^{\rightarrow}_I(\tilde{h}^{\rightarrow}_{t-1}, x_t; \ps{\Psi}),\\
\label{eq:rnn_li}
\tilde{h}^{\leftarrow}_t &= \RNN^{\leftarrow}_I(\tilde{h}^{\leftarrow}_{t+1}, x_t; \ps{\Psi}),\\
\label{eq:zi_t}
z_t &\sim \mathscr{N}(\tilde{\mu}^z_t, \tilde{\Sigma}^z_t; \ps{\Psi}),
\end{align}
where $\tilde{h}^{\rightarrow}_t$ and $\tilde{h}^{\leftarrow}_t$ represent the hidden states of the forward and backward directions of the bidirectional RNN. The autoregressive RNN with hidden state $\tilde{h}^z_{t}$ takes the joint state $[\tilde{h}^{\rightarrow}_t, \tilde{h}^{\leftarrow}_t]$ of the bidirectional RNN and the previous value $z_{t-1}$ as input. The inference mean $\tilde{\mu}^z_t$ and variance $\tilde{\Sigma}^z_t$ are computed by an MLP from the hidden state $\tilde{h}^z_t$ of the autoregressive RNN. We use the subscript $I$ instead of $G$ to distinguish the architecture used in the inference model from that of the generative model. It should be emphasised that the inference network collaborates with the generative network in the conditional generation procedure.
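A corresponding sketch of the inference recurrence in Eqs. \eqref{eq:mlp_zi}--\eqref{eq:zi_t} (PyTorch; again with illustrative sizes) first runs a bidirectional pass over the whole observation sequence and then an autoregressive pass that samples the latent path:
\begin{verbatim}
import torch
import torch.nn as nn

class InferenceNetwork(nn.Module):
    """Approximate posterior q(z_t | z_<t, x_{1:T}) built from a bidirectional
    RNN over x_{1:T} and an autoregressive RNN over the latent path."""
    def __init__(self, dim_x=6, dim_z=4, dim_h=10):
        super().__init__()
        self.rnn_fwd = nn.GRUCell(dim_x, dim_h)            # forward direction
        self.rnn_bwd = nn.GRUCell(dim_x, dim_h)            # backward direction
        self.rnn_z = nn.GRUCell(dim_z + 2 * dim_h, dim_h)  # autoregressive latent RNN
        self.mlp_mu, self.mlp_var = nn.Linear(dim_h, dim_z), nn.Linear(dim_h, dim_z)
        self.dim_z, self.dim_h = dim_z, dim_h

    def forward(self, x):                                  # x: (T, batch, dim_x)
        T, batch, _ = x.shape
        h_f = torch.zeros(batch, self.dim_h); h_b = torch.zeros(batch, self.dim_h)
        fwd, bwd = [], [None] * T
        for t in range(T):                                 # forward pass over x_{1:T}
            h_f = self.rnn_fwd(x[t], h_f); fwd.append(h_f)
        for t in reversed(range(T)):                       # backward pass over x_{1:T}
            h_b = self.rnn_bwd(x[t], h_b); bwd[t] = h_b
        h_z = torch.zeros(batch, self.dim_h)
        z_prev = torch.zeros(batch, self.dim_z)
        zs, mus, varis = [], [], []
        for t in range(T):                                 # autoregressive latent pass
            h_z = self.rnn_z(torch.cat([z_prev, fwd[t], bwd[t]], dim=-1), h_z)
            mu = self.mlp_mu(h_z); var = torch.exp(self.mlp_var(h_z))
            z = mu + var.sqrt() * torch.randn_like(mu)     # reparameterised sample of z_t
            zs.append(z); mus.append(mu); varis.append(var)
            z_prev = z
        return torch.stack(zs), torch.stack(mus), torch.stack(varis)

z_path, mu_q, var_q = InferenceNetwork()(torch.randn(50, 1, 6))
\end{verbatim}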
\begin{algorithm}
\centering
\caption{Recursive Forecasting}\label{alg:recursiveforecast}
\begin{algorithmic}[1]
\Loop
\State $\{z^{\langle 1:S \rangle}_{1:t}\}$ $\gets$ draw $S$ paths from $q(z_{1:t} | x_{1:t})$
\State $\{z^{\langle 1:S \rangle}_{1:t+1}\}$ $\gets$ extend $\{z^{\langle 1:S \rangle}_{1:t}\}$ for 1 step via $p(z_{t+1} | z_{1:t})$
\State $\hat{p}(x_{t+1} | x_{1:t})$ $\gets$ $1/S\times \sum_{s} p(x_{t+1} | z^{\langle s \rangle}_{1:t+1}, x_{1:t})$
\State $\hat{\sigma}^2_{t+1} \gets \mathtt{var}\{\hat{x}^{1:S}_{t+1}\}$, $\{\hat{x}^{1:S}_{t+1}\} \sim \hat{p}(x_{t+1} | x_{1:t})$
\State $\{x_{1:t+1}\} \gets$ extend $\{x_{1:t}\}$ with new observation $x_{t+1}$
\State $t \gets t+1$, (optionally) retrain the model
\EndLoop
\end{algorithmic}
\end{algorithm}
\subsection{Forecasting Observations in Future}
For a volatility model to be practically applicable in forecasting, the generating procedure conditioned on the history is of essential interest. We start with 1-step-ahead prediction, which serves as the building block of multi-step forecasting.
Given the historical observations $\{x_{1:T}\}$ up to time step $T$, 1-step-ahead prediction of either $\Sigma^x_{T+1}$ or $x_{T+1}$ is fully depicted by the conditional predictive distribution:
\begin{align}
\label{eq:1-step-ahead_exact}
p(x_{T+1} | x_{1:T}) &= \int_{z} p(x_{T+1} | z_{1:T+1}, x_{1:T})\notag \\
&\qquad \cdot p(z_{T+1} | z_{1:T}) p(z_{1:T} | x_{1:T})~\diff{z},
\end{align}
where the distributions on the right-hand side refer to those in the generative model with the generative parameters $\ps{\Phi}$ omitted. As the true posterior $p(z_{1:T} | x_{1:T})$ involved in Eq. \eqref{eq:1-step-ahead_exact} is intractable, the exact evaluation of conditional predictive distribution $p(x_{T+1} | x_{1:T})$ is difficult.
A straightforward solution is to substitute the true posterior $p(z_{1:T} | x_{1:T})$ with the approximation $q(z_{1:T} | x_{1:T})$ (see Eq. \eqref{eq:qz|x}) and leverage $q(z_{1:T} | x_{1:T})$ to infer $S$ sample paths $\{z^{\langle 1:S \rangle}_{1:T}\}$ of the latent variables according to the historical observations $\{x_{1:T}\}$. The approximate posterior from a well-trained model is presumed to be a good approximation to the truth; hence the sample paths should mimic the true but unobservable path. We then extend the sample paths one step further from $T$ to $T+1$ using the autoregressive generative distribution $p(z_{T+1} | z_{1:T})$ (see Eq. \eqref{eq:pz}). The conditional predictive distribution is thus approximated as
\begin{align}
\label{eq:1-step-ahead_approx}
\hat{p}(x_{T+1} | x_{1:T}) &\approx \frac{1}{S} \sum_{s} p(x_{T+1} | z^{\langle s \rangle}_{1:T+1}, x_{1:T}),
\end{align}
which is essentially a mixture of $S$ Gaussians. In the case of multi-step forecasting, a common solution in practice is to perform a recursive 1-step-ahead forecasting routine with the model updated as new observations come in; the very same procedure can be applied except that more sample paths should be evaluated due to the accumulation of uncertainty. Algorithm~\ref{alg:recursiveforecast} gives the detailed rolling scheme.
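Since the approximate predictive distribution in Eq. \eqref{eq:1-step-ahead_approx} is a uniform mixture of $S$ Gaussians, its mean and variance (the quantity that Algorithm \ref{alg:recursiveforecast} estimates from samples as $\hat{\sigma}^2_{t+1}$) also follow in closed form from the standard mixture moments. A minimal sketch, assuming the per-path predictive means and variances have already been produced by the generative network:
\begin{verbatim}
import numpy as np

def mixture_moments(mu_paths, var_paths):
    """Mean and variance of a uniform mixture of S Gaussians N(mu_s, var_s).

    mu_paths, var_paths: arrays of shape (S,) holding the per-sample-path
    predictive means and variances of x_{T+1} (assumed given).
    """
    mean = np.mean(mu_paths)
    # E[x^2] - E[x]^2, with E[x^2] averaged over the mixture components
    var = np.mean(var_paths + mu_paths**2) - mean**2
    return mean, var

# illustrative values for S = 4 sample paths
mu_s = np.array([0.01, -0.02, 0.00, 0.03])
var_s = np.array([0.04, 0.05, 0.03, 0.06])
print(mixture_moments(mu_s, var_s))
\end{verbatim}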
\section{Experiment}
In this section, we present experiments on real-world stock price time series to validate the effectiveness and to evaluate the performance of the proposed model.
\subsection{Dataset and Pre-processing}
The raw dataset comprises 162 univariate time series of daily closing stock prices, chosen from China's A-shares and collected from 3 institutions. The choice is made by selecting stocks with an earlier listing date (2006 or earlier) and fewer suspension days (at most 50 within the entire period of observation), so that the undesired noise introduced by insufficient observations or missing values -- highly influential on the performance but essentially irrelevant to the purpose of volatility modelling -- is reduced to a minimum. The raw price series is cleaned by aligning and removing abnormalities: we manually aligned the mismatched parts and interpolated the missing values by stochastic regression imputation \cite{little2014statistical}, where the imputed value is drawn from a Gaussian distribution with mean and variance calculated by regression on the empirical values within a short interval of the 20 most recent days. The series is then transformed from actual prices $s_t$ into log-returns $x_t = \log(s_t/s_{t-1})$ and normalised. Moreover, we combinatorially choose a predefined number $d$ out of the 162 univariate log-return series and aggregate the selected series at each time step to form a $d$-dimensional multivariate time series; the choice of $d$ is in accordance with the rank of correlation, e.g. $d=6$ in our experiments. Theoretically, this leads to a much larger volume of data, as ${{162}\choose{6}} > 2\times 10^{10}$. Specifically, the actual dataset for training and evaluation comprises a collection of 2000 series of $d$-dimensional normalised log-return vectors of length $2570$ ($\sim$ 7 years) with no missing values. We divide the whole dataset into two subsets for training and testing along the time axis: the first 2000 time steps of each series are used as training samples whereas the remaining 570 steps of each series serve as test samples.
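A minimal sketch of the core preprocessing step (NumPy; the price array below is a synthetic stand-in, since the actual cleaned price data are not reproduced here):
\begin{verbatim}
import numpy as np

def to_normalised_log_returns(prices):
    """Convert a (T,) price series s_t into normalised log-returns x_t = log(s_t/s_{t-1})."""
    returns = np.diff(np.log(prices))
    return (returns - returns.mean()) / returns.std()

# illustrative stand-in for one cleaned price series
prices = np.cumprod(1.0 + 0.01 * np.random.default_rng(0).standard_normal(2571))
x = to_normalised_log_returns(prices)          # length 2570, as in the experiments

# stacking d = 6 such series columnwise yields one d-dimensional multivariate sample
\end{verbatim}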
\subsection{Baselines}
We select several deterministic volatility models from the GARCH family as baselines:
{
\begin{enumerate}
\setlength\itemsep{0em}
\item{Quadratic models}
\begin{itemize}
\setlength\itemsep{0em}
\item{ARCH(1); GARCH(1,1); GJR-GARCH(1,1,1);}
\end{itemize}
\setlength\itemsep{0em}
\item{Absolute value models}
\begin{itemize}
\setlength\itemsep{0em}
\item{AVARCH(1); AVGARCH(1,1); TARCH(1,1,1);}
\end{itemize}
\setlength\itemsep{0em}
\item{Exponential models.}
\begin{itemize}
\setlength\itemsep{0em}
\item{EARCH(1); EGARCH(1,1);}
\end{itemize}
\end{enumerate}
}%
\noindent Moreover, two stochastic volatility models are compared:
\begin{enumerate}
\setlength\itemsep{0em}
\item{MCMC volatility model: \emph{stochvol};}
\item{Gaussian process volatility model \emph{GP-Vol}.}
\end{enumerate}
For the listed models, we retrieve the authors' implementations or tools: \emph{stochvol}\footnote{\scriptsize\url{https://cran.r-project.org/web/packages/stochvol}}, \emph{GP-Vol}\footnote{\scriptsize\url{http://jmhl.org}} (the hyperparameters are chosen as suggested in \cite{DBLP:conf/nips/WuHG14}) and implement the models, such as GARCH, EGARCH, GJR-GARCH, etc., based on several widely-used packages\footnote{\scriptsize\url{https://pypi.python.org/pypi/arch/4.0}}\footnote{\scriptsize\url{https://www.kevinsheppard.com/MFE_Toolbox}}\footnote{\scriptsize\url{https://cran.r-project.org/web/packages/fGarch}} for time series analysis. All baselines are evaluated in terms of the negative log-likelihood on the test samples, where 1-step-ahead forecasting is carried out in a recursive fashion similar to Algorithm \ref{alg:recursiveforecast}.
\begin{table*}[t]
\setlength{\tabcolsep}{3pt}
\centering
\caption{The performance of the proposed model and the baselines in terms of negative log-likelihood (NLL) evaluated on the test samples of real-world stock price time series: each row from 1 to 10 lists the average NLL for a specific individual stock; the last row summarises the average NLL of the entire test samples of all 162 stocks.
}
\label{tbl:performance}
{\footnotesize
\begin{tabular}{rrrrrrrrrrrrr}
\Xhline{1.2pt}
Stock & NSVM-corr & NSVM-diag & ARCH & GARCH & GJR & AVARCH & AVGARCH & TARCH & EARCH & EGARCH & stochvol & GP-Vol\Tstrut\Bstrut\\
\Xhline{0.7pt}
1 & \bf{1.11341} & 1.42816 & 1.36733 & 1.60087 & 1.60262 & 1.34792 & 1.57115 & 1.58156 & 1.33528 & 1.53651 & 1.39638 & 1.56260\Tstrut\\
2 & \bf{1.04058} & 1.28639 & 1.35682 & 1.63586 & 1.59978 & 1.32049 & 1.46016 & 1.45951 & 1.35758 & 1.52856 & 1.37080 & 1.47025 \\
3 & \bf{1.03159} & 1.32285 & 1.37576 & 1.44640 & 1.45826 & 1.34921 & 1.44437 & 1.45838 & 1.33821 & 1.41331 & 1.25928 & 1.48203 \\
4 & \bf{1.06467} & 1.32964 & 1.38872 & 1.45215 & 1.43133 & 1.37418 & 1.44565 & 1.44371 & 1.35542 & 1.40754 & 1.36199 & 1.32451 \\
5 & \bf{0.96804} & 1.22451 & 1.39470 & 1.31141 & 1.30394 & 1.37545 & 1.28204 & 1.27847 & 1.37697 & 1.28191 & 1.16348 & 1.41417 \\
6 & \bf{0.96835} & 1.23537 & 1.44126 & 1.55520 & 1.57794 & 1.39190 & 1.47442 & 1.47438 & 1.36163 & 1.48209 & 1.15107 & 1.24458 \\
7 & \bf{1.13580} & 1.43244 & 1.36829 & 1.65549 & 1.71652 & 1.32314 & 1.50407 & 1.50899 & 1.29369 & 1.64631 & 1.42043 & 1.19983 \\
8 & \bf{1.03752} & 1.26901 & 1.39010 & 1.47522 & 1.51466 & 1.35704 & 1.44956 & 1.45029 & 1.34560 & 1.42528 & 1.26289 & 1.47421 \\
9 & \bf{0.95157} & 1.15896 & 1.42636 & 1.32367 & 1.24404 & 1.42047 & 1.35427 & 1.34465 & 1.42143 & 1.32895 & 1.12615 & 1.35478 \\
10 & \bf{0.99105} & 1.13143 & 1.36919 & 1.55220 & 1.29989 & 1.24032 & 1.06932 & 1.04675 & 23.35983 & 1.20704 & 1.32947 & 1.18123\Bstrut\\
\Xhline{0.7pt}
AVG & \bf{1.18354} & 1.23521 & 1.27062 & 1.27051 & 1.28809 & 1.28827 & 1.27754 & 1.29010 & 1.33450 & 1.36465 & 1.27098 & 1.34751\Tstrut\Bstrut\\
\Xhline{1.2pt}
\end{tabular}
}%
\end{table*}
\subsection{Model Implementation}
In our experiments, we predefine the dimension of the observable variables to be $\dim{x_t} = 6$ and that of the latent variables to be $\dim{z_t} = 4$. Note that the dimension of the latent variable is smaller than that of the observable, which allows us to extract a compact representation. The NSVM implementation in our experiments is composed of two neural networks, namely the generative network (see Eq. \eqref{eq:mlp_zg}-\eqref{eq:xg_t}) and the inference network (see Eq. \eqref{eq:mlp_zi}-\eqref{eq:zi_t}). Each RNN module contains one hidden layer of size $10$ with GRU cells; MLP modules are 2-layered fully-connected feedforward networks, where the hidden layer is also of size $10$ whereas the output layer splits into two equal-sized sublayers with different activation functions: one applies the exponential function to ensure the non-negativity of the variance while the other uses a linear function to calculate the mean estimates. Thus $\MLP^z_I$'s output layer is of size $4+4$ for $\{\tilde{\mu}^z,\tilde{\Sigma}^z\}$ whereas the size of $\MLP^x_G$'s output layer is $6+6$ for $\{\mu^x,\Sigma^x\}$. During the training phase, the inference network is connected with the conditional generative network (see Eq. \eqref{eq:mlp_zg}-\eqref{eq:zg_t}) to establish a bottleneck structure; the latent variable $z_t$ inferred by variational inference \cite{DBLP:journals/corr/KingmaW13,DBLP:conf/icml/RezendeMW14} follows a Gaussian approximate posterior, and the number of sample paths is set to $S=100$. The parameters of both networks are jointly learned, including those for the prior. We introduce Dropout \cite{DBLP:journals/jmlr/SrivastavaHKSS14} into each RNN module and impose an $L2$-norm penalty on the weights of the MLP modules as regularisation to prevent overfitting; the Adam optimiser \cite{DBLP:journals/corr/KingmaB14} is exploited for fast convergence, and exponential learning rate decay is adopted to anneal the variations of convergence as training proceeds. Two covariance configurations are adopted: (1) we stick with a diagonal covariance configuration; (2) we start with a diagonal covariance and then apply a rank-1 perturbation \cite{DBLP:conf/icml/RezendeMW14} during fine-tuning until training is finished. The recursive 1-step-ahead forecasting routine illustrated in Algorithm \ref{alg:recursiveforecast} is applied in the experiment for both the training and the test phase: during the training phase, a single NSVM is trained, at each time step, on the entire training samples to learn a holistic dynamics, where the latent variable should reflect the evolution of the environment; in the test phase, on the other hand, the model is optionally retrained, every 20 time steps, on each particular input series of the test samples to keep track of the specific trend of that series. In other words, the trained NSVM predicts 20 consecutive steps before it is retrained using all historical time steps of the input series at present. Correspondingly, all baselines are trained and tested at every time step of each univariate series using standard calibration procedures. The negative log-likelihood on test samples has been collected for performance assessment. We train the model on a single-GPU (Titan X Pascal) server for roughly two hours before it converges to a certain degree of accuracy on the training samples. Empirically, the training phase can be processed on a CPU in reasonable time, as the complexity of the model as well as the number of parameters is moderate.
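For reference, the per-time-step variational objective used in training (the evidence lower bound) combines a Gaussian reconstruction term with a Gaussian KL term between the approximate posterior and the prior of the generative model. A minimal sketch of these two ingredients (NumPy, diagonal covariances; all input values are illustrative placeholders, not outputs of the trained networks):
\begin{verbatim}
import numpy as np

def gaussian_log_likelihood(x, mu, var):
    """log N(x; mu, diag(var)), summed over dimensions."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def gaussian_kl(mu_q, var_q, mu_p, var_p):
    """KL( N(mu_q, diag(var_q)) || N(mu_p, diag(var_p)) ), summed over dimensions."""
    return 0.5 * np.sum(np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

# one-step ELBO contribution: reconstruction term minus the KL between
# q(z_t | .) from the inference network and p(z_t | z_<t) from the generative network
x_t = np.zeros(6)
mu_x, var_x = np.zeros(6), np.ones(6)          # illustrative generative-network outputs
mu_q, var_q = np.zeros(4), 0.5 * np.ones(4)    # illustrative inference-network outputs
mu_p, var_p = np.zeros(4), np.ones(4)          # illustrative prior parameters
elbo_t = gaussian_log_likelihood(x_t, mu_x, var_x) - gaussian_kl(mu_q, var_q, mu_p, var_p)
\end{verbatim}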
\vspace{-.5em}
\begin{figure}[!t]
\centering
\subfloat[The volatility forecasting for Stock 37.]{\includegraphics[width=0.90\columnwidth]{case1.pdf}
\label{fig:case1}}
\hfil
\subfloat[The volatility forecasting for Stock 82.]{\includegraphics[width=0.90\columnwidth]{case2.pdf}
\label{fig:case2}}
\caption{Case studies of volatility forecasting.}
\label{fig:casestudy}
\end{figure}
\vspace{-.5em}
\subsection{Result and Discussion}
The performance of NSVM and the baselines is listed for comparison in Table \ref{tbl:performance}: the performance on the first 10 individual stocks (chosen in alphabetical order but anonymised here) and the average score on all 162 stocks are reported in terms of the negative log-likelihood (NLL) measure.
The results show that NSVM achieves higher accuracy than the baselines on the task of volatility modelling and forecasting in terms of NLL, which validates the high flexibility and rich expressive power of NSVM. In particular, NSVM with rank-1 perturbation (referred to as NSVM-corr in Table \ref{tbl:performance}) beats all other models in terms of NLL, while NSVM with a diagonal covariance matrix (i.e. NSVM-diag) outperforms GARCH(1,1) on 142 out of 162 stocks. Although the improvement comes at the cost of a longer training time before convergence, this can be mitigated by applying parallel computing techniques as well as more advanced network architectures or training methods. Apart from the higher accuracy obtained, NSVM provides us with a rather general framework to generalise univariate time series models of any specific functional form to the corresponding multivariate cases by extending the network dimensions and manipulating the covariance matrices. A case study on real-world financial datasets is illustrated in Fig.~\ref{fig:casestudy}.
NSVM shows higher sensitivity to drastic changes and better stability during moderate fluctuations: the response of NSVM in Fig.~\ref{fig:case1} is more stable in $t\in [1600, 2250]$, the period of moderate price fluctuation, while for the drastic price change at $t=2250$ the model responds with a sharper spike compared with the quadratic GARCH model. Furthermore, NSVM demonstrates its inherent non-linearity in both Fig.~\ref{fig:case1} and \ref{fig:case2}: at each time step within $t\in [1000, 2000]$, the model quickly adapts to the current fluctuation level, whereas GARCH suffers from a relatively slower decay of the previous influences.
The cyan vertical line at $t=2000$ separates the training samples from the test samples. We show only one instance from our dataset owing to page limitations; the performance on other instances is similar.
\section{Conclusion}
In this paper, we proposed a new volatility model, referred to as NSVM, for volatility estimation and prediction. We integrated statistical models with deep neural networks, leveraged the characteristics of each model, organised the dependencies between random variables in the form of graphical models, implemented the mappings among variables and parameters through RNNs and MLPs, and finally established a powerful stochastic recurrent model with universal approximation capability. The proposed architecture comprises a pair of complementary stochastic neural networks: the generative network and the inference network. The former models the joint distribution of the stochastic volatility process with both observable and latent variables of interest; the latter provides the approximate posterior, i.e. an analytical approximation to the (intractable) conditional distribution of the latent variables given the observable ones. The parameters (and consequently the underlying distributions) are learned (and inferred) via variational inference, which maximises the lower bound on the marginal log-likelihood of the observable variables. NSVM has shown higher accuracy on the task of volatility modelling and forecasting on real-world financial datasets, compared with various widely-used models, such as GARCH, EGARCH, GJR-GARCH and TARCH in the GARCH family, the MCMC volatility model \emph{stochvol}, as well as the Gaussian process volatility model \emph{GP-Vol}. Future work on NSVM will investigate the modelling of time series with non-Gaussian residual distributions, in particular heavy-tailed distributions, e.g. the LogNormal $\log\mathscr{N}$ and Student's $t$-distribution.
\clearpage
{\small
\bibliographystyle{aaai}
\section{Medium-induced emission (MIE)}
Hard jet partons of energy $E$ much larger
than the temperature $T$ propagating through the QCD medium experience
frequent soft interactions with the medium, which
can eventually source collinear, medium-induced radiation.
Its long formation time makes it sensitive to the
Landau--Pomeranchuk--Migdal (LPM) effect, i.e. the
quantum-mechanical interference of many soft scatterings.
These effective $1\leftrightarrow 2$
processes are not only key in jet modification; through their
number-changing nature they
ensure chemical equilibration and energy transport in \emph{bottom-up
thermalisation} \cite{Baier:2000sb}.
The celebrated BDMPS-Z \cite{Baier:1996kr,Zakharov:1996fv}
MIE probability, i.e.
\begin{equation}
\frac{dI}{dx}=\frac{{\alpha_s P_{1\to 2}(x)}}{[x(1-x)E]^2}
\mathrm{Re}\int_{t_1<t_2}dt_1dt_2 {{\bm\nabla}_{\boldsymbol{b}_2}\cdot
{\bm\nabla}_{\boldsymbol{b}_1}}\bigg[
\big\langle {\boldsymbol{b}_2},t_2\vert {\boldsymbol{b}_1},t_1
\big\rangle_{ \boldsymbol{b}_2=0}^{ \boldsymbol{b}_1=0}-
\text{vac}\bigg],
\end{equation}
is factorised into a DGLAP splitting kernel $P_{1\to 2}(x)$
multiplying a propagator $ \big\langle {\boldsymbol{b}_2},t_2\vert {\boldsymbol{b}_1},t_1
\big\rangle$ that describes diffusion in transverse position space
from the emission in the amplitude at time $t_1$ and vanishing $\boldsymbol{b}$
to the emission in the conjugate amplitude at time $t_2$.
This propagator is a Green's function of
\begin{equation}
\mathcal{H}={-\frac{\nabla^2_{\boldsymbol{b}}}{2x(1-x)E}
+\sum_{i}\frac{{m_i^2}}{2E_i}}
{-i\mathcal{C}(\boldsymbol{b},x \boldsymbol{b}, (1-x) \boldsymbol{b})}.
\label{prop}
\end{equation}
This 2D Hamiltonian has a real kinetic term
with in-medium masses $m_i$. The imaginary part is the
\emph{scattering kernel}. It encodes jet-medium interactions
and it is tightly related to transverse momentum broadening (TMB).
Determining the propagator from Eq.~\eqref{prop} is difficult.
Historically, approaches concentrated on a few limiting
cases. For thin media one can truncate the LPM resummation series
at first order in the \emph{opacity expansion}~\cite{Gyulassy:1999zd},
while for thick media one can perform the
\emph{harmonic oscillator approximation}, introducing the
\emph{momentum broadening coefficient} $\hat{q}$, i.e.
${\mathcal{C}(\boldsymbol{b},x \boldsymbol{b}, (1-x) \boldsymbol{b})}
\approx \mathcal{C}_\mathrm{HO}\equiv \frac{{\hat{q}}}{4}\big[b^2+(xb)^2+((1-x)b)^2\big]$ \cite{Baier:1996kr}.
The determination of the propagator simplifies also in
an infinite, static medium~\cite{Arnold:2002ja}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=7.5cm]{plot-integrated-spectrum-crop}
\end{center}
\caption{The ``full'' line is the numerical solution from~\cite{Andres:2020vxs,Andres:2020kfg}.
The ``LO+NLO'' is the improved opacity expansion, LO is the harmonic oscillator and GLV the
first order in opacity.
Figure taken from~\cite{Barata:2021wuf}.
\label{fig:ioe}}
\end{figure}
The \emph{improved opacity expansion}, introduced in \cite{Mehtar-Tani:2019ygg,Barata:2021wuf},
is an economical, analytical prescription for overcoming these limiting cases.
It corresponds
to treating the non-harmonic parts of the scattering kernel as perturbations,
i.e. $\mathcal{C}=\mathcal{C}_\mathrm{HO}+
[\mathcal{C}-\mathcal{C}_\mathrm{HO}]$. In this way, the Coulomb logarithm,
$\mathcal{C}(\boldsymbol{b})\propto b^2\ln(b)$, is captured, thus incorporating
the rarer harder \emph{Molière scatterings}.
This new prescription was recently tested against
new numerical determinations of the full propagator, obtained
in~\cite{Andres:2020vxs,Andres:2020kfg,carlota}, finding good, 10\% or better
agreement over its validity range, as shown in Fig.~\ref{fig:ioe}.
\section{Transverse momentum broadening}
The discussion so far was agnostic to the specifics of the TMB kernel $\mathcal{C}$
and associated probability $\mathcal{P}(k_\perp)$, other than featuring
a $1/k_\perp^4$ Coulomb tail in the UV and a diffusive gaussian in the IR.
If we were to determine the TMB kernel perturbatively, we would
run into the known issues associated with the Linde problem: soft
gluons ($\omega\ll T$) are \textit{classical} high-occupancy modes, distributed on the $T/\omega$ IR
tail of the Bose distribution. As the expansion parameter becomes $g^2 T/\omega$,
convergence can be seriously hampered.
A breakthrough came from the realisation in~\cite{CaronHuot:2008ni} that these soft
classical modes at space-like separations become Euclidean.
As $\mathcal{C}$ is determined
from Wilson lines at space-like separations,
the large-distance contribution $b\gtrsim 1/gT$ can be determined
non-perturbatively using the dimensionally-reduced theory on the lattice,
\cite{Panero:2013pla,Moore:2019lgw}. At shorter distance one can use pQCD
and merge the two~\cite{Moore:2021jwe,Schlichting:2021idr,soudi}.\footnote{
The same method is also being applied to in-medium masses~\cite{Moore:2020wvy,Ghiglieri:2021bom,philipp}.}
\begin{figure}[t]
\begin{center}
\includegraphics[width=6.8cm]{fig1-InverseCompare-T500-crop}
\end{center}
\caption{The LO perturbative line comes from \cite{Aurenche:2002pd,Arnold:2008vd},
the NLO includes the $\mathcal{O}(g)$ corrections from~\cite{CaronHuot:2008ni}
following~\cite{Ghiglieri:2018ltw}.
Figure taken from~\cite{Schlichting:2021idr}.\label{fig:kernel}}
\end{figure}
The resulting kernel is shown in Fig.~\ref{fig:kernel} from~\cite{Schlichting:2021idr}:
a transition from $1/q_\perp^4$ Coulomb in the UV into a
non-perturbative $1/q_\perp^3$ behaviour in the IR takes place,
providing the bare minimum of
``screening'' to make $\hat{q}$, the second moment of the kernel, IR finite.
In the $q_\perp\sim gT$ range the non-perturbative curve differs appreciably
from either the LO or NLO curves.
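As a concrete (toy) illustration of $\hat{q}$ as the second moment of the kernel, the sketch below numerically evaluates $\hat{q}(\Lambda_\perp)=\int^{\Lambda_\perp}\!\frac{d^2 q_\perp}{(2\pi)^2}\, q_\perp^2\, \mathcal{C}(q_\perp)$ for a generic screened-Coulomb ansatz with a $1/q_\perp^4$ UV tail; the kernel, its normalisation and the screening mass are purely illustrative and are \emph{not} the lattice or (N)LO results. The only point being made is the logarithmic sensitivity to the UV cutoff.
\begin{verbatim}
import numpy as np
from scipy import integrate

# Toy screened-Coulomb broadening kernel with a 1/q^4 UV tail (illustrative
# normalisation g4T3 and screening mass mD; not the lattice or (N)LO kernel).
g4T3, mD = 1.0, 1.0
kernel = lambda q: g4T3 / (q**2 * (q**2 + mD**2))

def qhat(cutoff):
    # qhat = int d^2q/(2pi)^2 q^2 C(q) = (1/2pi) int dq q^3 C(q)
    val, _ = integrate.quad(lambda q: q**3 * kernel(q) / (2 * np.pi), 0.0, cutoff)
    return val

for cut in (5.0, 50.0, 500.0):
    print(cut, qhat(cut))   # grows only logarithmically with the cutoff
\end{verbatim}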
\cite{Moore:2021jwe,Schlichting:2021idr,soudi} further analysed the impact
of the non-perturbative kernel on MIEs, both for infinite and finite media.
They find an $\mathcal{O}(1)$ difference between the MIE rates
obtained with the NLO and the non-perturbative kernels,
a much larger effect than
the deviation of the improved opacity expansion from the full numerical
propagator.
See~\cite{kumar,ilia,Kumar:2020wvb,Grishmanovskii:2022tpb,soloveva,Soloveva:2021quj}
for other non-perturbative approaches to $\hat{q}$ and transport.
Let us now shift to \emph{quantum corrections}. In~\cite{Liou:2013qya,Blaizot:2013vha}
it was pointed out that radiative corrections to TMB from the recoil
off the radiated gluon are responsible for a double-log enhanced, $\mathcal{O}(\alpha_s)$
correction to $\hat{q}$, arising from soft and collinear logarithms
in the single-scattering regime, i.e.
\begin{equation}
\delta\hat{q}=\frac{\alpha_s N_c}{\pi}\hat{q}_0 \int_{\tau_0}^\tau \frac{d\tau}{\tau}
\int_{\hat{q}_0\tau}^{k_\perp^2} \frac{dk_\perp^2}{k_\perp^2},
\end{equation}
where the $\hat{q}_0\tau$ boundary keeps the phase space
away from the $k_\perp^2\sim \hat{q} \tau$ multiple scattering regime.
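As a numerical illustration of this double-logarithmic phase space (a sketch only: the upper transverse-momentum boundary is taken here to be $k_\perp^2 = \hat{q}_0 L$ with $L$ the medium length, and the values of $\tau_0$, $L$ and the coupling prefactor are purely illustrative), the nested integral reproduces $\tfrac{1}{2}\ln^2(L/\tau_0)$:
\begin{verbatim}
import numpy as np
from scipy import integrate

# Illustrative values (not fitted to any medium): formation-time cutoff tau0,
# medium length L, and the coupling prefactor alpha_s * Nc / pi.
tau0, L = 0.1, 10.0
prefactor = 0.3 * 3 / np.pi

# The inner k_perp^2 integral from qhat0*tau to qhat0*L gives log(L/tau);
# the remaining tau integral is done numerically.
double_log, _ = integrate.quad(lambda tau: np.log(L / tau) / tau, tau0, L)

print(double_log, 0.5 * np.log(L / tau0) ** 2)   # the two agree
print("delta qhat / qhat0 =", prefactor * double_log)
\end{verbatim}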
This double-log correction can be understood as
renormalising $\hat{q}_0$, the LO value of $\hat{q}$. By
promoting $\hat{q}_0$ to a variable and moving it (and the coupling)
within the double integral, one obtains the resummation equation for these
logs~\cite{Iancu:2014kga}.
It was solved numerically and semi-analytically in~\cite{Caucal:2021lgf,Caucal:2022fhc,yacine},
finding how an initial $\hat{q}(\tau_0,k_{\perp\,0}^2)$ evolves to a
$\hat{q}(\tau,k_{\perp}^2)$ resumming arbitrary numbers of long duration, quantum
fluctuations. One of the main results is that the non-local nature
of these fluctuations affects the UV tail of the TMB probability,
shifting it from its tree-level $\mathcal{P}\propto k_\perp^{-4}$ Coulomb
form to a less steep $\mathcal{P}\propto k_\perp^{-4+2\alpha_s N_c/\pi}$. This
larger probability for wider-angle scatterings corresponds to
more efficient diffusion.
These quantum, radiative corrections are universal: as shown
in~\cite{Blaizot:2014bha,Wu:2014nca,Iancu:2014kga}, they also arise
in the case of a double MIE with overlapping formation times
in the soft limit. Over the past years there has been an ongoing
effort~\cite{Arnold:2015qya,Arnold:2016kek,Arnold:2016mth,
Arnold:2020uzm,Arnold:2021pin,Arnold:2022epx}
to determine all these double-splitting real and virtual corrections,
beyond the soft limit. The underlying goal is understanding whether
the assumed Markovian nature of the MIE kernel holds when used
to construct in-medium cascades. In the most recent developments
\cite{Arnold:2021mow,Arnold:2021pin,shahin} it was shown
that, with some caveats, the single-logarithmic radiative corrections
determined in~\cite{Liou:2013qya} are also universal, in that they apply
to double splitting as well, opening a promising pathway for their resummation.
\section{Kinetic theory, transport and thermalisation}
As mentioned, MIE and TMB are key ingredients of the kinetic description of QCD media.
They are complemented by drag, longitudinal momentum broadening and identity-change
processes in the leading-order effective kinetic theory of QCD~\cite{Arnold:2002zm}.
We refer to \cite{Schlichting:2020lef,Dai:2020rlu,Ke:2020clc,ke,dai}
for recent applications of the kinetic framework to jet modifications. Here we
instead connect the previous sections with
transport coefficients and thermalisation.
Let us consider the shear viscosity $\eta$, which damps flow gradients.
It is then clear that microscopic processes that isotropise momentum are
dominant contributions. Hence, the direct isotropizing effect of TMB
is more important for $\eta$ than its indirect effect
as the driver of MIEs, as seen also in the LO determination
of $\eta$~\cite{Arnold:2003zc}. This determination was recently extended to (almost)
NLO in~\cite{Ghiglieri:2018dib}. These corrections are large --- they reduce
$\eta/s$ by a factor of a few in the phenomenological $T\sim \text{few}\;T_c$ range ---
and are dominated by the $\mathcal{O}(g)$ classical corrections to $\hat{q}$ of~\cite{CaronHuot:2008ni}
shown in Fig.~\ref{fig:kernel}. What should we make of this? Recently,
it was pointed out in \cite{Muller:2021wri} that the problem might lie in the LO determination of TMB,
which then affects $\eta$ at LO. If that determination has excessive screening --- see Fig.~\ref{fig:kernel}
--- it underestimates broadening, potentially explaining qualitatively this LO-NLO discrepancy.
\begin{figure}[t]
\begin{center}
\includegraphics[width=6cm]{overoccupied-crop}
\end{center}
\caption{Thermalisation time as a function
of the coupling $\lambda=g^2N_c$
for an overoccupied
initial condition. Figure adapted from \cite{Fu:2021jhl}.\label{fig:therm}}
\end{figure}
Several applications of kinetic theory to thermalisation have been discussed
at this conference~\cite{Du:2020dvp,du,Brewer:2022vkq,Mikheev:2022fdl,bruno,plumari,Almaalol:2020rnu,almalool}.
Can similarly
problematic perturbative expansions arise in these kinetic frameworks? What
are the systematics of extrapolations to $\alpha_s=0.3$ ($g\approx 2$)?
In~\cite{Fu:2021jhl,fu}, NLO corrections to thermalisation for isotropic systems
have been presented. Fig.~\ref{fig:therm} shows the LO and NLO thermalisation
times for an overoccupied initial condition. Two different NLO collision operators have been
constructed, which resum differently higher-order effects. Their spread, indicated
by the shaded band, is a proxy for the size of even higher-order corrections.
This band is smaller than its separation from the LO result, as expected for a convergent expansion. Moreover, the extrapolation
to intermediate coupling seems controlled, with a 40\% correction for $g^2N_c=10$. However,
these isotropic systems are by their nature insensitive to the isotropizing effect
of TMB, which we argued to play a determining role in corrections to transport.
It remains then to be understood how reliable the extrapolation could be in situations
typical of heavy-ion collisions with anisotropic initial conditions and expansions.
In these systems,
plasma instabilities~\cite{Mrowczynski:1993qm,Romatschke:2003ms,Arnold:2003rq} ---
another classical phenomenon --- prevent at the moment consistent LO kinetic treatments.
Recently, the instability-subtracted TMB kernel, together with a recipe for dealing
with the unstable modes, was provided in \cite{Hauksson:2021okc}, finding that
anisotropy reduces the scattering kernel in the QGP phase. In the earlier
glasma phase, large anisotropic TMB effects have been reported
in~\cite{Ipp:2020nfu,Carrington:2022bnv,alina,schuh,meg}.
\textbf{Summary}: The reviewed advances in the
microscopic description of
QCD media are instrumental in better quantification of theory uncertainties and
in narrowing the gap between the QCD Lagrangian and phenomenology.
\bibliographystyle{myJHEP}
\section{Introduction}
Extremely low-mass white dwarfs (ELMs) are thought to be the end products of binary star interactions and generally have a companion \citep[e.g.,][]{Marsh1995,2011MNRAS.413.1121R}. Although still far from understood, binary models predict that the companions to these ELMs are white dwarfs because the system has undergone one or two common-envelope phases \citep{2001A&A...365..491N,2006A&A...460..209V,Woods2012,2014A&A...562A..14T}. Knowing the distribution of the companion's mass to these ELMs could provide useful constraints on the various parameters that
enter binary star evolution and the common-envelope phase, as well as to predict whether and when they will merge and, more generally, how they will evolve \citep{2012MNRAS.422.2417D,2012ApJ...751..141K}. Thanks to the ELM Survey, which has identified 61 ELMs and provided orbital parameters for 54 of them\footnote{\citet{ori} used the 55 systems of \citet{Gianninas2014}, which are essentially based on the list of 54 systems from \citet{Brown2013}. Here I used these 54 latter systems because they come from a homogeneous sample.} \citep[see][and references therein]{Brown2013}, it is now possible to begin envisaging conducting a statistical analysis as was done by \citet{ori}. The problem is that ELMs are single-lined spectroscopic binaries and as such, it is not possible to obtain the mass ratio directly \citep[see, e.g.,][]{2010A&A...524A..14B,2012ocpd.conf...41B,Cure2014}, but one needs to apply statistical methods to derive the mass-ratio (or companion mass) distribution. \citet{ori} developed a Bayesian probabilistic model to infer the companion mass distribution for the above-mentioned sample, {\it \textup{assuming a functional form}} -- a two-component Gaussian, with one component representing white dwarfs with masses between 0.2 and 1.44~M$_\odot$, and the other neutron stars with masses centred around 1.4~M$_\odot$ and a standard deviation of 0.05~M$_\odot$. Using a Markov chain Monte Carlo algorithm, they found that their best fit is given by a population of white dwarfs centred around 0.74~M$_\odot$ and a standard deviation of 0.24~M$_\odot$, without a neutron star. This is quite an interesting result that also indicates that in contrast to population synthesis models, the majority of companions to ELMs are CO-core WDs, and not another He WD. As such, it is important to examine whether this result holds when using different methods, including when the functional form is not fixed a priori. This is what I present here.
\section{Methods}
The derivation of the orbital elements for a single-lined spectroscopic binary (period, radial velocity amplitude, and eccentricity) allows obtaining the spectroscopic mass function, $f(m)$, which is a combination of the masses of the two components and the (unknown) inclination of the orbit on the line of sight, $i$:
$$ f(m) = \frac{M_2^3}{(M_1+M_2)^2}~\sin^3 i ,$$
where $M_1$ is the mass of the primary (in this case, the ELM), and $M_2$, the mass of its companion. If $M_1$ is known, as it is the case here, then one can rewrite this as a function of the mass ratio, $q=M_2/M_1$:
$$ Y= \frac{f(m)}{M_1} = \frac{q^3}{(1+q)^2}~\sin^3 i.$$
The distribution of the logarithm of $f(m)$ -- or $Y$ -- can
be used to determine the distribution of $M_2$ or $q$. This was done here using two different methods, in which I always assumed that the inclination $i$ is randomly distributed on the sky, that is, $P(i)=\sin i$. I used the sample of \citet[][similar to that used by \citet{ori}, but see the footnote]{Brown2013}, which provides a list of 54 systems with known $f(m)$ and $M_1$.
In the first method, I assumed a functional form for the distribution of $M_2$: this is either a Gaussian with mean $\mu$ and standard deviation $\sigma$, or a uniform distribution, defined between a lower, $M_{\rm 2,l}$, and an upper, $M_{\rm 2,u}$ value of the companion mass. I then applied a Kolmogorov-Smirnov statistical test.
I assumed a single population of companions, without distinguishing between a neutron star (NS) and a white dwarf companion population,
for instance. This is based on the fact that \citet{ori} found the NS fraction to be very low, as I confirm here as well.
In each case, I ran 10,000 Monte Carlo simulations, where $M_1$ is distributed according to the observed distribution (see the
online Fig.~\ref{fig:m1}), $M_2$ according to the chosen distribution with one set of parameters, and $i$ is assumed to be randomly distributed on the sky. For each sample of simulations, I calculated the cumulative distribution of $\log f(m)$, which I compared with the observed one. To do this, I calculated the largest deviation between the two distributions, $D^*=D_{54,10000}$. Running 10,000 simulations provides very good precision, and there would be no gain in running more, as the estimator $D^*$ saturates for large numbers\footnote{This is because $D_{\rm n,n'} \propto \sqrt{\frac{n+n'}{ n n'}} \propto \sqrt{\frac{1}{n}}$ for $n'>>n$ and $n>>1$.}.
This estimator allows determining the probability with which the simulated and the observed distribution are extracted from the same population. Thus, if $D^*>0.2628$, the chance is only 0.1\% that the two populations are extracted from the same population, and we may most likely ignore such solutions. For $D^*>0.1274$ and $D^*>0.0506$, these probabilities become 33\% and 99.9\%, respectively. The first value provides a 1-$\sigma$ estimate of the parameters that are allowed, while the latter number can be used to estimate the 0.1\% confidence interval of parameters that provide a very good match to the observed distribution.
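A minimal sketch of this Monte Carlo comparison (Python with NumPy/SciPy; the ``observed'' $M_1$ and $f(m)$ arrays below are synthetic placeholders, since the actual ELM Survey values are not reproduced here, and the Gaussian parameters are only illustrative):
\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def simulate_log_fm(m1_obs, n, mu=0.76, sigma=0.27):
    """Draw n systems: M1 resampled from the observed values, M2 from a Gaussian,
    and inclination i with P(i) = sin i (i.e. cos i uniform); return log10 f(m)."""
    m1 = rng.choice(m1_obs, size=n)
    m2 = np.abs(rng.normal(mu, sigma, size=n))   # keep the toy companion masses positive
    cos_i = rng.uniform(0.0, 1.0, size=n)
    sin_i = np.sqrt(1.0 - cos_i**2)
    fm = (m2 * sin_i) ** 3 / (m1 + m2) ** 2
    return np.log10(fm)

# placeholders standing in for the 54 observed systems (NOT the real ELM Survey data)
m1_obs = rng.normal(0.17, 0.03, size=54)
logfm_obs = simulate_log_fm(m1_obs, 54)

# 10,000 simulated systems for one (mu, sigma) choice, compared via the two-sample KS statistic
logfm_sim = simulate_log_fm(m1_obs, 10_000)
D_star = stats.ks_2samp(logfm_obs, logfm_sim).statistic
print(D_star)
\end{verbatim}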
In the second method, I took a more direct approach and inverted the distribution of $Y$ to derive the mass-ratio distribution.
\section{Results}
\subsection{Functional form}
\subsubsection{Gaussian}
\begin{figure}[htbp]
\centering
\includegraphics[width=6cm]{gaussM2Sig.pdf}
\caption{Standard deviation, $\sigma$, versus the mean mass of the companion, $\mu$, for all simulated samples where the companion mass is distributed according to a Gaussian, and which have $D^*< 0.1274$ (in black) and those that have $D^*< 0.0506$ (in green). The heavy, red dot shows the location $\mu=0.74$, $\sigma=0.24$.}
\label{fig:gaussM2}
\end{figure}
As mentioned above, I here assumed that the distribution of the companion mass, $\Phi(M_2)$, is given by a Gaussian:
$$ \Phi(M_2) \propto \exp\left(-\frac{(M_2-\mu)^2}{2 \sigma^2}\right),$$ and I determined the value of $D^*$ over the plane $0 < \mu< 2$, $0 < \sigma < 1$. The relation of $D^*$ with $\mu$ is shown in the online Fig.~\ref{fig:gaussD}, where I also show the lines corresponding to $D^*=0.0506,$ 0.1274, and 0.2628. This figure shows that one can find simulations spanning the whole range $0 < \mu < 1.22~{\rm M}_\odot$ that would lead to
$D^*<0.2628$, while the 1-$\sigma$ range covers the values $0.3 < \mu < 0.92~{\rm M}_\odot$ . This indicates that it is difficult to constrain the parameters of such a functional form based on the small observed sample.
It is clear, however, that the simulations with $\mu$ between 0.7 and 0.8~M$_\odot$ correspond to the lowest values of $D^*$, with the minimum being around $\mu$=0.76~M$_\odot$ and $\sigma$=0.27~M$_\odot$, leading to $D^*=0.027$. This value indicates that this simulation and the observed distribution are consistent at a very high level
because the null hypothesis that the two populations are drawn from the same population can only be rejected at a level of $1.3 \times 10^{-12}$\%.
Of course, the standard deviation $\sigma$ is correlated with $\mu$, and Fig.~\ref{fig:gaussM2} shows the solutions that lead to $D^*<0.0506$ and $D^*<0.1274$, illustrating the allowed range. For $D^*<0.1274$, I found that the mean value of the companion mass is $\mu_m = 0.70 \pm 0.12~{\rm M}_\odot$, while the mean value of the standard deviation is $\sigma_m=0.45\pm 0.22~{\rm M}_\odot$. For $D^*<0.0506$, these values become $\mu_m = 0.76 \pm 0.03~{\rm M}_\odot$ and $\sigma_m=0.27\pm 0.06~{\rm M}_\odot$.
A $\chi^2$ analysis that computes the deviation of the computed and observed distribution of $\log f(m)$ provides a similar result, with the best fit being given by $\mu = 0.76 \pm 0.02 ~{\rm M}_\odot$ and $\sigma = 0.27\pm 0.02~{\rm M}_\odot$ (see the online Fig.~\ref{fig:x2map}).
\begin{figure}[tbp]
\centering
\includegraphics[width=6cm]{uniMminMmax.pdf}
\caption{Highest mass of the companion as a function of the lowest mass of the companions for all simulated uniformly distributed samples that have $D^*< 0.1274$ (in black) and those that have $D^*< 0.0506$ (in green). The two masses are clearly correlated, and I show a linear fit to the green dots as the heavy, solid red line.}
\label{fig:uniMminMmax}
\end{figure}
\subsubsection{Uniform distribution}
I repeated the same analysis but using a uniform distribution of the companion mass, between a lowest ($M_{2,\rm l}$) and a highest value ($M_{2,\rm u}$), which were assumed to be in the range
$0 < M_{2,\rm l} < 0.9$~M$_\odot$ and $M_{2,\rm l} < M_{2,\rm u} < M_{2,\rm l}+1.5$~M$_\odot$. The results are shown in the
online Fig.~\ref{fig:uniD} and in Fig.~\ref{fig:uniMminMmax}. Again, the range of allowed values is very wide. If this is
restricted to $D^* < 0.2628$, the whole range of $M_{2,\rm l}$ is allowed, while for $M_{2,\rm u}$, it is restricted to values between 0.5 and 1.95~M$_\odot$. The range becomes narrower for lower values of $D^*$, as shown in Fig.~\ref{fig:uniMminMmax}, which also shows
that the acceptable values of $M_{2,\rm l}$ and $M_{2,\rm u}$ are correlated. A linear fit gives
$$ M_{2,\rm u} = -1.25 M_{2,\rm l} + 1.60. $$
The mean values of $M_{2,\rm l}$ and $M_{2,\rm u}$ are $0.36\pm0.18$~M$_\odot$ and $1.20\pm0.23$~M$_\odot$ for $D^*<0.1274$, and $0.33\pm0.07$~M$_\odot$ and $1.19\pm0.10$~M$_\odot$ for $D^*<0.0506$. The lowest value of $D^*$ (0.02793) is reached for $M_{2,\rm l}=0.25$~M$_\odot$ and $M_{2,\rm u}=1.28$~M$_\odot$. This value corresponds to a probability that the two distributions are not extracted from the same population of $4 \times 10^{-10}$\%. Perhaps most importantly, this shows that a uniform distribution of the companion mass fits the data very well and cannot be discarded.
With such functional forms, neither a Gaussian nor a uniform distribution
can be preferred; they do, of course, overlap considerably (see the
online Fig.~\ref{fig:distmass}).
\begin{figure}[tbp]
\centering
\includegraphics[width=9cm]{logf.pdf}
\caption{Comparison between the observed distribution of the logarithm of the spectroscopic mass function (solid black line) and the best fits for the uniformly distributed (red dotted line connected by heavy dots) and Gaussian-distributed (green dashed line connected by open squares) companion masses. For the former, I used $M_{2,\rm l}=0.25$~M$_\odot$ and $M_{2,\rm u}=1.28$~M$_\odot$, while for the latter, I used $\mu = 0.76$~M$_\odot$ and $\sigma=0.27$~M$_\odot$. The top panel shows the fraction of systems, while the bottom panel is the cumulative fraction of systems. It is clear that both samples are good fits, given the intrinsic errors of the observed distribution. }
\label{fig:logf}
\end{figure}
This is further illustrated in Fig.~\ref{fig:logf}, which shows the distribution of $\log f(m)$ for the observed sample as well as for the two best fits of the functional form, a Gaussian, and a uniform distribution. It is clear that both simulated distributions are good fits to the observed distribution, while it is very hard to distinguish the results from the two different functional forms.
\subsection{Inversion method}
Using a functional form for the mass distribution allows a better control of the systematics of a method, but at the cost of risking missing some interesting deviations. This is for example the case in the paper by \citet{ori} in their test 4, where they compare the distribution they obtain for a sample of post-common envelope systems to that obtained from spectroscopy. Although they clearly reproduce the bulk distribution, they miss the tail and other details of the distribution (see their Fig. 3). Moreover, \citet{ori} also pointed out that for the sample of ELMs, their result ``could indicate that the true WD distribution may not be exactly Gaussian'' -- and indeed the previous section has shown that a uniform distribution also provides a good fit to the observations. It is therefore useful to consider exploring methods that do not require the a priori input of a functional form. This is the case of the Richardson-Lucy (R-L) inversion method, as used by \citet{1993A&A...271..125B,2010A&A...524A..14B,2012ocpd.conf...41B}, and \citet{Cure2014}.
\begin{figure}[htbp]
\centering
\includegraphics[width=9cm]{massratio.pdf}
\caption{Distribution of the mass ratios as determined by the Richardson-Lucy algorithm (solid black line) and as obtained for the best fits with a Gaussian (red dotted line) and a uniform (blue dashed line) distribution of companions.}
\label{fig:massratio}
\end{figure}
I refer to these papers and references therein for a full discussion of the method, and in particular to \citet{CB1994} for a more formal presentation. Here, I just mention that the Richardson-Lucy method relies on the Bayes theorem on conditional probabilities and solves the Fredholm integral equation that links $Y$ with $q$ by an iterative scheme. It is
important to note that we do not directly have the distribution of the companion mass, but only of the mass ratio. This is, however, a very important parameter for binary evolution models, and given that the mass of the primary is very peaked at 0.17~M$_\odot$, the distribution of the mass ratio can give an idea of the distribution of the companion mass. The outcome of this method is shown in Fig.~\ref{fig:massratio}, where I also compare it with the distribution derived from the best fits of the functional form method. For the latter, I derived the mass-ratio distribution by using the functional form for the companion mass and the primary mass distribution as determined by \citet{Brown2013}.
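For concreteness, the iterative scheme can be sketched as follows (a minimal discrete version only; the construction of the kernel, the binning, and the stopping criterion used in the works cited above differ in their details, and all array names are placeholders):
\begin{verbatim}
import numpy as np

def richardson_lucy(p_obs, kernel, n_iter=50):
    # p_obs  : observed, normalized distribution of the proxy quantity Y.
    # kernel : kernel[y, q] = P(Y = y | mass ratio q), each column
    #          normalized so that kernel[:, q].sum() == 1.
    n_q = kernel.shape[1]
    phi = np.full(n_q, 1.0 / n_q)          # flat first guess for phi(q)
    for _ in range(n_iter):
        p_model = kernel @ phi              # predicted Y distribution
        ratio = np.where(p_model > 0.0, p_obs / p_model, 0.0)
        phi = phi * (kernel.T @ ratio)      # multiplicative R-L correction
        phi = phi / phi.sum()               # keep phi(q) normalized
    return phi
\end{verbatim}
Each iteration multiplies the current estimate of the mass-ratio distribution by the ratio of the observed to the predicted distribution of the proxy quantity, projected back through the kernel.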
All three methods provide a rather similar mass-ratio distribution, with some small differences. The outcome of the R-L method gives a broad Gaussian distribution, but with two more pronounced peaks, around $q\sim2-2.5$ and around $q\sim 4-5$.
This may indicate that a simple functional form misses some small-scale structure in the data, although a much larger sample
is needed to confirm this \citep{2012ocpd.conf...41B}. A K-S test indicates that the hypothesis that the functional forms and the outcome of the R-L method are drawn from the same population can be rejected at the 92\% level -- a rather high number, but perhaps not convincing enough. Indeed, given the small sample, this is at most a two-sigma result. However, it shows that the true mass-ratio distribution (and companion mass distribution) {\it may} have a more complicated structure than any simple functional form we can think of. Only with much larger samples will we be able to know this.
At the suggestion of the referee, I have also examined the mass-ratio distribution derived with the R-L method when limiting to the least massive primary stars, that is, the 33 systems with masses lower than 0.2~M$_\odot$. The resulting distribution is shown in the online Fig.~\ref{fig:mrlowmass}. It shows a single-peaked distribution centred around $q\sim4-5$, that is (given that we now examine systems with a primary mass $M_1=0.17$~M$_\odot$), $M_2=0.68-0.85$~M$_\odot$. In the figure, the corresponding mass-ratio distributions for the functional forms were computed with a single value of the primary mass. The outcome of the R-L method apparently agrees better with the Gaussian distribution, but given that we have now an even smaller sample, the data should not be overinterpreted
because the three distributions are again compatible within 2-$\sigma$. This peak may correspond to the similar peak seen in the mass-ratio distribution of the entire sample, while the possible second peak in Fig.~\ref{fig:massratio} at a lower mass ratio is most probably linked to the more massive primaries.
The very small excess of high mass ratios seen in Fig.~\ref{fig:massratio} and in the online Fig.~\ref{fig:mrlowmass} should not be given too much importance: firstly, smoothing the distribution is a well-known effect of the R-L method, an effect that weakens as the number of iterations increases \citep{CB1994}, and secondly, the value of this excess is compatible with zero, given the size of the sample.
\section{Discussion and conclusions}
The results of \citet{ori} in the study of ELM WDs are very important,
therefore I have reanalysed the same sample of spectroscopic binaries they used with two different techniques. In the first one, I assumed functional forms and applied K-S statistics; the results obtained were confirmed by a $\chi^2$ statistical test. The parameters for the Gaussian functional form are very similar to those found using a different method by \citet{ori}, but I showed that a uniform distribution of the companion mass can provide as good a fit to the observed data and that the range of allowed parameters is rather large, making it hard to provide a definitive answer as to the real distribution.
If the uniform distribution illustrated in Fig.~\ref{fig:distmass} is more representative of the companion mass distribution, then more double He WD binary systems may be expected, such as the eclipsing double white dwarf binary CSS 41177
\citep{2014MNRAS.438.3399B}: the uniform distribution shown in this figure leads to a 24\% probability (this is the fraction of systems that have a secondary mass lower than 0.5~M$_\odot$), compared to 16\% for the Gaussian fit shown in the same figure or as derived by \citet{ori}. If I take into account all the possible values at the 1-$\sigma$ level (i.e. those with $D^* < 0.1274$), I also derive a 26\% probability for both my Gaussian and uniform distributions.
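For reference, the 24\% quoted above follows from simple arithmetic, assuming the curve in Fig.~\ref{fig:distmass} corresponds to the best-fit uniform bounds $M_{2,\rm l}=0.25$~M$_\odot$ and $M_{2,\rm u}=1.28$~M$_\odot$:
$$ \frac{0.5-0.25}{1.28-0.25} \simeq 0.24, $$
and the Gaussian value follows similarly from the normal cumulative distribution evaluated at 0.5~M$_\odot$.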
In addition, I applied an inversion method to derive the mass-ratio distribution, without the need to assume any functional form. The results are compatible with those derived from the functional form, although they seem to indicate some additional fine-grained structures.
An important question that \citet{ori} addressed is the possibility of having a neutron star as a companion to the ELM WD. \citet{ori} found that the probability of this is very low, which is also confirmed by my results. For example, the uniform distribution of companion masses indicates that the range allowed for $D^*<0.0506$ ends at 1.4~M$_\odot$. If this is relaxed to $D^*<0.1274$, however,
the highest companion mass can be up to 1.7~M$_\odot$; but all in all, the probability of having one system with such a high mass ($> 1.44$~M$_\odot$) is very low: for the uniform distribution, the 1-$\sigma$ probability is zero, while for the Gaussian distribution it is 6.5\%.
\begin{acknowledgements}
I would like to thank the referee, J.J. Andrews, for a careful reading of the manuscript and for providing suggestions to improve the paper.
\end{acknowledgements}
\section{INTRODUCTION \label{sec:introduction}}
As protoplanetary disks are often thought to be turbulent
(\citealt{armitage2011araa}; but see \citealt{flahertyetal2015}),
understanding how disk solids interact with
turbulent gas is crucial to modelling
the formation of planetesimals and planets
\citep{weidenschillingcuzzi93,chiangyoudin10,johansenetal2014prpl,testietal2014prpl}
and to explaining observations of disks \citep{william_cieza2011araa,andrews2015PASP}.
Turbulence determines the spatial distribution of solid
particles and their relative collision velocities
\citep[``turbulent stirring"; ][]{voelketal1980,cuzzietal1993, YL2007,OC2007}.
For example, dust particles can be
concentrated in local pressure maxima
at the interstices of turbulent eddies;
pressure bumps can also be found in
spiral density waves, anti-cyclonic vortices, or zonal
flows associated with whatever mechanism drives
disk transport
\citep{maxey87,Klahr2003,riceetal2004,barrancomarcus05,mamarice2009,Johansen2009,
johansenetal2011,panetal2011}.
Turbulent density fluctuations can also exert stochastic gravitational torques on solid objects and alter their orbital dynamics
\citep[``gravitational stirring";][]{laughlinetal2004,nelson2005,ogiharaetal2007,IGM2008}.
Disk self-gravity can drive turbulence, provided
disks are sufficiently massive and provided
their cooling times are longer than their dynamical times
\citep{Paczynski78,gammie01,Forgan2012,ShiChiang2014}.
``Gravito-turbulence'' may characterize
the early phase of star formation, when disks are
still massive
\citep{rodriguez2005,eisneretal2005,andrews_williams2007}. Observations show signs of early grain
growth in some very young stellar systems \citep{riccietal2010,tobinetal2013}. Millimeter-sized chondrules in primitive meteorites indicate that they
might once have been melted by strong shock waves in
self-gravitating disks \citep{cuzzi2006nature,alexanderetal2008sci}. Simulations
suggest that dust can be strongly concentrated by the spiral waves and vortices present in gravito-turbulent disks,
possibly leading to
planetesimal formation via gravitational instability in
the dust itself
\citep{riceetal2004,gibbonsetal2015}.
Many studies of dust dynamics to date focus on particles in disks made turbulent by the magneto-rotational
instability (MRI; \citealt{BH91,BH98}).
Both
analytical and numerical works have been carried out to obtain radial/vertical diffusivities and particle
relative velocities due to turbulent stirring \citep{cuzzietal1993, YL2007,OC2007,carb2010,carb2011,
zhuetal2015}.
For large particles, gravitational forces by MRI-turbulent
density fluctuations exceed aerodynamic drag forces
by gas and generate relative particle velocities
too high to be conducive to planetesimal formation
\citep{nelson2005,johnsonetal2006,
yangetal2009,yangetal2012,NG2010,gnt2012mnras}.
A few useful metrics common to many of these papers include: (1)\,the diffusion coefficient, which
characterizes how quickly solids
random walk through the disk, (2)\, the particle eccentricity or velocity dispersion,
and (3) the pairwise relative velocity,
which is crucial for determining collision outcomes.
Quantity (2) usually serves as a good proxy
for quantity (3).\footnote{We will show in section~\ref{sec:vel} that this
approximation actually breaks down for $\Omega\tstop\sim 1$ particles in gravito-turbulent disks.}
Relatively few groups have investigated the dynamics of dust in turbulence driven by disk self-gravity.
\citet{gibbonsetal2012,gibbonsetal2014,gibbonsetal2015} study how particles with a
range of sizes (with stopping times $\tstop = 10^{-2}$--$10^2\Omega^{-1}$, where $\Omega$
is the local orbital frequency) accumulate in
local, two-dimensional (in the disk plane) simulations.
They find that intermediate-sized dust can concentrate by
up to two orders of magnitude, and that the dispersion of
particle velocities can approach the gas sound speed,
consistent with
the results of 2D global simulations
\citep{riceetal2004,riceetal2006}.
\citet{britschetal2008} and \citet{walmswelletal2013}
find strong eccentricity growth
for large-sized planetesimals forced more by
gravitational stirring than by gas drag.
\citet{boss2015} investigates the radial diffusion process for particles $1$\,cm--$10$\,m in size
($\tstop \sim 10^{-2}$--$1\Omega^{-1}$ in their model),
finding enhanced diffusion for $m$-sized or
larger bodies.
However, no systematic study has yet been performed to
directly measure the dynamical properties
(diffusivities, eccentricities, and relative speeds as
listed above) of solids in gravito-turbulent disks as has
been done for MRI-active disks.
Gravito-turbulence tends to produce relatively
stronger density fluctuations ($\delta\rho/\rho\sim 1$ for a typical Shakura-Sunyaev turbulence parameter $\alpha\sim 10^{-2}$; see \citealt{ShiChiang2014})
than are seen in MRI turbulence ($\delta\rho/\rho\sim 0.1$ for $\alpha\sim 10^{-2}$).
The prominent spiral density features that characterize self-gravitating disks
and that help trap dust particles are
also absent in MRI-turbulent disks.
It is the goal of this paper to study the dynamics and
spatial distribution of dust in gravito-turbulent
disks in a systematic manner, placing our measurements
into direct comparison with analogous measurements
made for MRI-turbulent disks.
We first describe our simulation setup in section~2. Results are given in section~3, where we describe the radial diffusion, eccentricity
growth, and relative velocities of particles,
and how these quantities are affected by gravitational
stirring, particle stopping time, gas cooling rate, numerical resolution, and simulation domain
size. In section~4, we put our results into physical context, discuss their astrophysical
implications, and make comparison with MRI-active disks.
We conclude in section~5.
\section{METHODS\label{sec:methods}}
\subsection{Equations solved and code description \label{sec:eqn}}
We study the diffusion of solids in
gravito-turbulent disks using hybrid (particle+fluid) hydro simulations in the disk plane.
For the gas, we solve the hydrodynamic equations governing 2D,
self-gravitating accretion disks, including the effects of secular cooling. The disk is modeled in
the local shearing sheet approximation assuming the disk aspect ratio $H/r\ll 1$. In a Cartesian
reference frame corotating with the disk at fixed orbital frequency $\Omega$, the equations solved
are similar to those of \citet{ShiChiang2014}, but restricted
to be in the disk plane:
\begin{align}
\frac{\partial \Sigmag}{\partial t} + \nabla\cdot (\Sigmag \mathbf{u}) = 0 \,,
\label{eq:continuity} \\
\frac{\partial \Sigmag\mathbf{u}}{\partial t} + \nabla\cdot\left(\Sigmag\mathbf{u}\mathbf{u}
+P\mathbf{I} + \mathbf{T_{\rm g}} \right) = \nonumber \\
2q\Sigmag\Omega^2 x \hat{\mathbf{x}} -2\Sigmag\Omega\hat{\mathbf{z}}\times\mathbf{u}\,,
\label{eq:eom} \\
\frac{\partial E}{\partial t} + \nabla\cdot (E+P)\mathbf{u} =
-\Sigmag\mathbf{u}\cdot\nabla\Phi \nonumber \\
+\Sigmag\Omega^2\mathbf{u}\cdot\left(2 q x \hat{\mathbf{x}} - z\hat{\mathbf{z}}\right) -\Sigmag\qloss \,,
\label{eq:eoe} \\
\nabla^2\Phi = 4\pi G (\Sigmag-\Sigma_0)\delta(z)\,,
\label{eq:poisson}
\end{align}
where $\hat{\mathbf{x}}$ points in the radial direction,
$\Sigmag$ is the gas surface density, $\mathbf{u} = (u_{\rm x}, u_{\rm y}) $ is the gas velocity relative
to the background Keplerian flow $\mathbf{u_0} = (0, -q\Omega x)$,
$P$ is the gas pressure, $\Phi$ is the self-gravitational potential of a razor-thin disk,
$q = 3/2$ is the Keplerian shear parameter,
\beq
E = {U} + {K} = \frac{P}{\Gamma -1} + \frac{1}{2}\Sigmag u^2
\label{eq:eos}
\enq
is the sum of the internal energy density $U$ and bulk kinetic energy
density $K$ for an ideal gas with 2D specific heat ratio $\Gamma = 2$, and
\beq
\mathbf{T_{\rm g}} = \frac{1}{4\pi G}\left[\nabla\Phi\nabla\Phi -
\frac{1}{2}\left(\nabla\Phi\right)\cdot\left(\nabla\Phi\right)\mathbf{I}\right]
\label{eq:grav_tensor}
\enq
is the gravitational stress tensor with identity tensor $\mathbf{I}$.
We choose a very simple cooling function,
\beq
\Sigmag\qloss = U/\tcool = \Omega U / \beta \,\,\,\,
{\rm (constant \,\, cooling \,\, time)}
\enq
with $\beta \equiv \Omega \tcool$ constant everywhere.
The assumption of constant cooling time $\tcool$ is adopted
by many 2D
\citep[e.g.,][]{gammie01,JG2003,Paardekooper2012} and three-dimensional (3D)
\citep[e.g.,][]{Rice2003,LR2004,LR2005,Mejia2005,Cossins2009,MB2011}
simulations of self-gravitating disks.
This prescription enables direct experimental control over the
rate of energy loss.
For the solids,
we assume that the dust particles only passively respond to the gravito-turbulence via aerodynamic drag
and the gravitational acceleration from the gas. No particle feedback is included in this study. In the 2D local
approximation, the equation of particle motion reads
\beq
\frac{d\mathbf{v_i}}{dt} = \frac{\mathbf{u}-\mathbf{v_i}}{\tstopi} - \nabla\Phi
-2\Omega\hat{\mathbf{z}}\times \mathbf{v_i} + 2q\Omega^2 x\hat{\mathbf{x}} \,,
\label{eq:eom_par}
\enq
in which the first term is the drag force per unit mass
and the second term, $-\nabla\Phi$, is the gravitational pull from the self-gravitating gas.
The velocity $\mathbf{v_i} = (v_{\rm x},v_{\rm y})$ of the $i$-th particle species is measured relative
to the background shear.
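For illustration, a single velocity update consistent with Equation~\ref{eq:eom_par} can be sketched as below. This is only a minimal operator-split scheme with a semi-implicit drag step, not the actual integrator of the \texttt{Athena} particle module, and the interpolation of $\mathbf{u}$ and $\nabla\Phi$ to the particle position is omitted:
\begin{verbatim}
import numpy as np

def particle_kick(v, u_gas, grad_phi, x, tstop, Omega, dt, q=1.5):
    # v, u_gas, grad_phi : 2-component arrays (x and y components);
    # x : radial coordinate of the particle in the shearing sheet.
    # Explicit accelerations:
    #   -grad(Phi) - 2 Omega z_hat x v + 2 q Omega^2 x x_hat
    ax = -grad_phi[0] + 2.0 * Omega * v[1] + 2.0 * q * Omega**2 * x
    ay = -grad_phi[1] - 2.0 * Omega * v[0]
    v_star = v + dt * np.array([ax, ay])
    # Semi-implicit (stable) drag relaxation toward the local gas velocity
    return (v_star + (dt / tstop) * u_gas) / (1.0 + dt / tstop)
\end{verbatim}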
Our simulations are run with \texttt{Athena} \citep{stoneetal08} with the built-in particle module
\citep{baistone10a}. We adopt
the van Leer integrator \citep{vl06,sg09}, a piecewise linear spatial
reconstruction in the primitive variables, and the HLLC
(Harten-Lax-van Leer-Contact) Riemann solver. We solve Poisson's
equation of a razor-thin disk using fast Fourier transforms
\citep{gammie01,Paardekooper2012}\footnote{{In this
paper, we do not smooth the potential over a length $\delta$ in the $z$-direction to mimic the finite
thickness of the disk, as we find the density and velocity dispersions in our unsmoothed 2D simulations to be
similar to those in 3D \citep{ShiChiang2014}. The effect of
smoothing on the perturbations is modest; e.g., $\delta\Sigma/\Sigma$ decreases by at most a factor of two when $\delta=\cs/\Omega$ is used.}}.
Boundary conditions for our physical variables ($\rho$,
$\mathbf{v}$, $U$, and $\Phi$)
are shearing-periodic in radius ($x$) and periodic in azimuth ($y$).
We also use orbital advection algorithms to shorten the timestep
and improve conservation \citep{Masset2000, Johnson2008, sg10}.
\subsection{Initial conditions and run setup \label{sec:ic}}
\begin{figure*}
\includegraphics[width=8cm]{f1.pdf}
\includegraphics[width=8cm]{f3.pdf} \\
\includegraphics[width=8cm]{f4.pdf}
\includegraphics[width=8cm]{f5.pdf}
\caption{\small{Time history of the gravito-turbulent disk for the
pure-gas case with $\tcool\Omega=10$. Top left: the gravitational (green) and Reynolds
(red) stresses normalized by the averaged pressure as a function of time. Top right: the density
dispersion versus time.
Bottom left: the velocity dispersion with (red) and without (black) density weight. Bottom right: the averaged
Toomre Q-parameter using the sound speed with (red dashed) and without (black solid) density weight.
In our dust+gas hybrid simulations, we choose
$t=50\Omega^{-1}$ (indicated by vertical dotted lines) as our initial state and distribute dust particles randomly in space.}}
\label{fig:sst}
\end{figure*}
We start with pure gas simulations. At $t=0$ we initialize a uniformly distributed gas disk and set
$\Sigma_0 = \Omega = G =1$. The thermal energy is such that $Q = \cs\Omega/(\pi G\Sigma_0) = 1.1$
close to the critical Toomre $Q$-parameter for a razor-thin disk \citep[$\simeq
1$;][]{toomre64,goldreichbell65}, where $\cs = \sqrt{\Gamma(\Gamma-1)U/\Sigma}$ is the sound speed.
The velocity is $\mathbf{u_0} = (\delta_x,-q\Omega x + \delta_y)$, where $\delta_x$ and $\delta_y$ are
randomized perturbations at $\sim$$1\%$ of the initial sound speed $\cs{_0=1.1\pi}$. Our simulation domain is a box which covers $[-80, 80]\,G\Sigma_0/\Omega^2$ in
both radial ($x$) and azimuthal ($y$) direction with $512^2$ grid points. This amounts to
a spatial span of $160 H/(\pi Q)\simeq 51 Q^{-1} H$ and a resolution of $\simeq 10 Q/H$, where
$H\equiv \cs/\Omega = \pi Q (G\Sigma_0/\Omega^2)$ is the disk scale height.
We allow the disk to cool off immediately with a fixed cooling time $\tcool\Omega \in
\{5,10,20,40\}$. After a short transient phase which normally takes about twice the cooling time,
the disk settles to a quasi-steady gravito-turbulent state in which the heating from compression and
shocks is balanced by the imposed cell-by-cell cooling. For example, we show the time evolution of Reynolds and
gravitational stresses, surface density and velocity dispersions and Toomre's Q parameter
from the $\tcool\Omega=10$ case in Figure~\ref{fig:sst}.
All quantities saturate after $\geq
20\Omega^{-1}$, and well-established turbulence is sustained to the end of the simulations (hundreds of
dynamical times). The time-averaged ($t>50\Omega^{-1}$) nominal $\alpha$, i.e., the stresses normalized
by the pressure, is $\alpha\simeq 0.020$ for the sum of the Reynolds and gravitational stresses.
Toomre's Q hovers around $\sim 3$ \citep{gammie01}, which also sets the spatially averaged sound speed $\langle
\cs\rangle = \pi Q (G\Sigma_0/\Omega)$. The Q-parameter computed with the density-weighted sound speed
$\langle \cs\drangle$ (red dashed curve in the bottom right panel) shows less fluctuation and a slightly smaller ($\sim
10\%$) mean. We adopt the density-weighted measure hereafter and simply use $\langle\cs\rangle$ to
represent the density-weighted sound speed.
We do note, however, that the spatially and temporally averaged velocity dispersion $\langle \ux \rangle$
(black) is about twice the density-weighted value $\langle\ux\drangle$ (red), as shown in the bottom left
panel of Figure~\ref{fig:sst}. The cause and effects are discussed in
section~\ref{sec:dp}.
We also emphasize that the density dispersion is
$\delta\Sigma_g/\Sigma_0\simeq
0.6$ for
$\alpha\sim 0.02$, much stronger than found in the MRI-driven turbulence case, where
$\delta\rho/\rho\sim \sqrt{0.5\alpha}\sim 0.1$ for
similar $\alpha$ values with or without a weak net magnetic field \citep[][]{NG2010,OH2011,ssh2016}. We discuss this further in section~\ref{sec:tcool}.
We then randomly distribute dust particles in space at $t =
50\Omega^{-1}$ (for $\tcool\Omega \leq 20$) or $100\Omega^{-1}$ (for $\tcool\Omega = 40$), and
evolve the particle+fluid system for another $200\Omega^{-1}$. We
implemented seven types of particles with constant stopping time such that the Stokes number
$\tau_s\equiv \Omega\tstop \in [10^{-3},10^{-2},0.1,1,10,10^2,10^3]$, evenly spaced in logarithmic
scale. For each type, we use $2^{19}$ ($\sim$$5\times
10^5$) particles, or $\sim$ 2 particles per grid cell on average. Their velocities follow the background shear
initially.
Since gas densities vary in time and space,
it would be more physical to fix
the size of each particle rather than its
stopping time; nevertheless, our default
simulations fix stopping times to
more easily compare with previous
simulations that do likewise. We also verify
explicitly that our simulations with
fixed stopping time agree well with a
simulation that uses fixed particle
sizes (see Section \ref{sec:fixed_size}).
For this latter run,
we employ a stopping time based on
the Epstein drag law:
\beq
\tstop{_{,i}} = \frac{f a^*_i}{\Sigma \cs} \,,
\label{eq:epstein}
\enq
where $a^*_i = 10^{i-4}$ for $i=1$--$7$ is the dimensionless
size of the $i$-th particle species. The conversion factor $f \simeq 3\pi (G\Sigma_0^2/\Omega^2)$ is
chosen such that $f a^*_i/\langle\!\langle\Sigma\cs\rangle\!\rangle_{\rm
t}\sim a^*_i$, matching the $\tau_s$ used in the fixed stopping time runs.
\section{RESULTS\label{sec:results}}
\subsection{Standard run \label{sec:std}}
\begin{figure}
\includegraphics[width=\columnwidth]{8-panel.png}
\caption{\small{Snapshots of the gas and dust surface density distributions at the end of the simulation.
The domain size is $\lx = \ly = 160 G\Sigma_0/\Omega^2 \simeq 17 H$, and the cooling time is
$10\Omega^{-1}$.
Values are normalized to their initial values and color coded on a log scale. Small-$\tstop$ particles
trace the gas; particles with intermediate $\tau_s = 0.1$--$1$ cluster strongly; and
larger particles are diffused across the domain. }}
\label{fig:8panel}
\end{figure}
We first present our standard run tc=10, i.e., the $\tcool\Omega = 10$ run (see Table~\ref{tab:tab1} for
the gas properties).
After being distributed randomly on the grid, the particles quickly adjust in
response to the dynamical
gas flows. After $\sim 20\Omega^{-1}$, the distribution of particles reaches a steady state. As an illustration, we show the gas and dust densities in Figure~\ref{fig:8panel}
at $t=180\Omega^{-1}$.
Clearly, the small particles ($\tau_s\leq 10^{-2}$) are nearly
perfectly coupled to the gas and therefore share the same density structure as the gas.
Particles with intermediate stopping times, both $\tau_s = 0.1$ and $1$,
concentrate
along the dense gas filaments and produce dust density enhancements of two orders of magnitude
relative to the mean (note that the color bars for $\tau_s = 0.1$ and $1$ extend to higher
values). {Transient vortices are also observable in the snapshots for gas and $\tau_s < 1$ particles, but are probably under-resolved; see \citet{gibbonsetal2015} for the effects
of vortices on particle concentration.}
Large particles ($\tau_s\ge 10$) are strongly disturbed by
gravitational stirring from the fluctuating gas (see further discussion of the effects of gravity in
section~\ref{sec:selfg}). After $\sim 20 \Omega^{-1}$, they are completely redistributed and their
final state recovers a random distribution similar to the initial one.
In Figure~\ref{fig:mass}, we show the time-averaged fraction of cumulative mass for each type of
particle, binned
in local surface density (number of particles in each cell) normalized to the mean
($2$/cell).
The running profiles of small-sized
particles resemble that of the time-averaged gas (solid black curve). Those with large stopping times track the
initial random distribution (dotted black). In contrast, the blue ($\tau_s=1$)
and green ($\tau_s=0.1$) curves show that intermediate-sized particles exhibit relatively higher densities,
$\Sigma_p/\langle\Sigma_p\rangle \sim 10$--$100$.
\begin{figure}
\includegraphics[width=\columnwidth]{f6.pdf}
\caption{\small{The time-averaged cumulative fraction of the total particle mass as a function of particle
surface density. The surface density is normalized by the averaged value (equal to the initial value) and
thus represents the concentration factor of the dust particles. We also show the time-averaged gas distribution (black solid) and the initial dust distribution (black dotted) for comparison. }}
\label{fig:mass}
\end{figure}
After reaching a quasi-steady state, we further evolve the dust+gas mixture for a total duration of $200\Omega^{-1}$. We
then measure the radial diffusion coefficients, particle eccentricities, and relative velocities, averaged over all particles of the same type. The
results are presented in the following subsections.
\subsection{Radial diffusion\label{sec:dp}}
\begin{figure*}
\includegraphics[width=8cm]{f7.pdf}\hfill
\includegraphics[width=8cm]{f8.pdf}
\caption{\small{Left: The squared radial displacement versus time for all types of
particles ($\tau_s \equiv \tstop\Omega$) in run tc=10 ($\tcool\Omega = 10$).
The colored symbols (with solid curves) are the measurements, and the dashed curves
are linear fits to the data. Their slopes are twice the diffusion
coefficients based on Equation~\ref{eq:def_dp}.
Right: Symbols mark the radial diffusion coefficients for particles with
different stopping times. At larger stopping times, the measured coefficients
differ strongly from the models of either \citet{YL2007} (dashed gray
curve) or \citet{cuzzietal1993} (dotted) owing to extra stirring from the
gas self-gravity.}}
\label{fig:fixed_ts}
\end{figure*}
We utilize the following formula to derive the radial diffusion coefficients $\Dp$ of different
types of particles in our
simulations \citep{YL2007,carb2011}:
\beq
\Dp \equiv \frac{1}{2}\frac{d\langle |x_p(t)-x_p(0)|^2\rangle}{dt} \,,
\label{eq:def_dp}
\enq
where $\Dp$ is the diffusion coefficient for a given particle stopping time, and $x_p(t)$ and $x_p(0)$ are
the radial positions at time $t$ and at the initial time (taken to be $50\Omega^{-1}$ after
injecting the particles into the gas disk). The radial coordinate is extended beyond the edges of the sheet so that particles move
radially without being wrapped by the periodic boundary conditions.
The measurements are made at
every time interval $\delta t=0.5\Omega^{-1}$, and are performed for a duration of $150\Omega^{-1}$.
The average $\langle\,\rangle$ here is taken over all particles of the same type.
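In practice this measurement reduces to a linear fit of the mean squared displacement; a minimal sketch, assuming the unwrapped radial positions of one particle species are stored in a single array, is:
\begin{verbatim}
import numpy as np

def radial_diffusion_coefficient(x_tracks, dt):
    # x_tracks : array of shape (n_snapshots, n_particles) holding the
    #            unwrapped radial positions of one particle species,
    #            sampled every dt (here dt = 0.5/Omega).
    msd = np.mean((x_tracks - x_tracks[0])**2, axis=1)  # <|x(t)-x(0)|^2>
    t = dt * np.arange(len(msd))
    slope, _ = np.polyfit(t, msd, 1)                    # linear fit of the MSD
    return 0.5 * slope                                  # Eq. (def_dp)
\end{verbatim}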
The squared displacements as a function of time on the right hand side of Equation~(\ref{eq:def_dp})
are shown in the left panel of Figure~\ref{fig:fixed_ts}. Each curve represents the displacement of
one particle type. For particles with $\tau_s\leq 0.1$, the curves overlap. As they
are well-coupled with the turbulent gas flows, their displacements reflect the properties of gas
diffusion.
For larger particles, the gravitational stirring dominates over the drag force and introduces some
extra effective diffusion. As a result, the displacement curves of these particles
have bigger slopes. For the largest two dust species we implemented, the curves show periodic
oscillations which are due to the epicyclic motion of individual particles. The large amplitude of the
epicyclic motion is a result of the strong gravitational forcing of the background gas.
Now with the help of Equation~\ref{eq:def_dp} and Figure~\ref{fig:fixed_ts}, we can derive the
radial diffusion coefficients based on the slope of the squared displacements using linear fitting.
The results are shown in the right panel of Figure~\ref{fig:fixed_ts} and also recorded in
Table~\ref{tab:tab2}. The radial diffusion
coefficients $\Dp$ are normalized against the gas diffusion coefficient $D_{g,x}$; the latter is
approximated using the $\Dp$ of particles with $\tau_s = 10^{-3}$. We find $\Dp\simeq
\Dg$ for $\tau_s\leq 0.1$. However, $\Dp$ exceeds $\Dg$ for particles with
longer $\tstop$, and stays roughly constant at $\Dp \sim 6$--$8 \Dg$. For comparison, we also
plot the diffusion coefficient predicted by models in \citet[][dotted gray
curve]{cuzzietal1993} and \citet[][dashed gray]{YL2007}. Both predict small and decreasing values
for larger particles based on homogeneous turbulence without extra forcing like self-gravity,
in clear contrast with what we obtain in our gravito-turbulent disk. When gravitational stirring is artificially suppressed (see section~\ref{sec:selfg}), we do recover a relationship similar to their predictions.
\begin{figure}
\includegraphics[width=\columnwidth]{f9.pdf}
\caption{\small{ The radial velocity auto-correlation
functions of the gravito-turbulent gas disk calculated with (solid) and without (dashed) density weight. }}
\label{fig:rxx}
\end{figure}
We also check the validity of approximating $\Dg$ with $\Dp$ by measuring the auto-correlation
function of the turbulent velocity field directly. In general,
\beq
\Dg = \int_0^{\infty}R_{\rm xx}(\tau) d\tau \,,
\label{eq:def_dg}
\enq
where $R_{\rm xx}(\tau) = \langle \ux(\tau) \ux(0) \rangle$
is the auto-correlation of the gas velocity at time lag $\tau$. However, in a gravito-turbulent disk, the
spiral density shock waves produce low-density ($\Sigma/\Sigma_0\sim O(10^{-2})$) valleys between high-density ($\Sigma/\Sigma_0\sim
O(1)$) ridges. Most of the matter in the high-density regions has small velocity, but the gas
in the low-density regions has very high velocity ($\sim \cs$). The auto-correlation function
defined above would therefore be strongly biased toward the low-density regions instead of the high-density regions where
most of the small dust particles reside. One way to remove the bias is to calculate the
gas or dust\footnote{We use the surface density of the dust in our calculation; using the gas density would
change the estimated diffusion coefficient by $\sim$10\%.}
density weighted auto-correlation function
\beq
R_{\rm xx}(\tau) = \frac{\int\! \Sigma(\tau)\ux(\tau)\Sigma(0)\ux(0) dxdy}{\int\! \Sigma(\tau)\Sigma(0)
dxdy} = \langle \ux(\tau)\ux(0)\drangle \,,
\label{eq:rxx}
\enq
in which $\ux(0)$ and $\Sigma(0)$ are velocity and density at a reference time, $\ux(\tau)$ and
$\Sigma(\tau)$ are measured at time $\tau$ from the reference point and are both sheared back to
that point in order to calculate the correlation. Shown in Figure~\ref{fig:rxx} is the velocity
auto-correlation calculated with (solid curve) and without (dashed curve) density weight. Integrating
the density weighted $R_{\rm xx}$, we get $\Dg\simeq 0.014 \langle\!\langle \cs
\rangle\rangle\Omega^{-1}$ close to the $\Dp\simeq 0.012 \langle\!\langle \cs
\rangle\rangle\Omega^{-1}$ measured with particles of $\tau_s = 10^{-3}$. However, using $R_{\rm xx}$
without density weight would overestimate the diffusion coefficient by a factor of $15$.
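A minimal sketch of this density-weighted estimate, assuming the snapshots have already been sheared back to the reference time and truncating the time integral at the length of the data set, is:
\begin{verbatim}
import numpy as np

def gas_diffusion_coefficient(sigma, ux, dt):
    # sigma, ux : arrays of shape (n_snapshots, nx, ny) with the surface
    #             density and radial velocity, sheared back to the
    #             reference time (snapshot 0).
    n = len(ux)
    rxx = np.empty(n)
    for k in range(n):                    # density-weighted R_xx(tau)
        w = sigma[0] * sigma[k]
        rxx[k] = np.sum(w * ux[0] * ux[k]) / np.sum(w)
    return np.trapz(rxx, dx=dt)           # Eq. (def_dg), truncated integral
\end{verbatim}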
\subsection{Eccentricity growth \label{sec:ecc}}
When particle eccentricity is small, we have the relation
\beq
\vx^2 \simeq 2 \vy^2 \simeq e^2\Omega^2 r^2 \,.
\label{eq:def_ecc}
\enq
We can therefore measure the orbital eccentricity of each individual particle according to this
relation, and obtain the evolution of the eccentricity as shown in Figure~\ref{fig:ecc}. Although
the initial $e = 0$, we find the
mean eccentricity quickly saturates at $e\simeq 0.2$-$0.3 (H/R)$ for particles of $\tau_s
\leq 1$. For increasing $\tau_s > 1$, the saturation is slower and the eccentricity levels off at larger values. The
saturated
values are $e \simeq 0.7 (H/R)$ and $1.3 (H/R)$ for $\tau_s = 10$ and $10^2$ particles (see
Figure~\ref{fig:ecc_ts}). The eccentricity
of the $\tau_s = 10^3$ particles keeps rising gradually through the end of the
simulation, suggesting that a saturation level of $\gtrsim 2 (H/R)$ might be achieved in a longer
simulation, which is consistent with previous simulations of planetesimals in gravito-turbulent disks
\citep{britschetal2008,walmswelletal2013}.
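One simple per-particle estimator consistent with Equation~\ref{eq:def_ecc} is sketched below; this assumes the instantaneous velocities obey the epicyclic relation above, and equivalent estimators are possible:
\begin{verbatim}
import numpy as np

def eccentricity(vx, vy, Omega, r):
    # vx, vy : particle velocities relative to the background shear.
    # If vx^2 ~ 2 vy^2 ~ e^2 Omega^2 r^2 (Eq. def_ecc), then
    # (vx^2 + 2 vy^2) / 2 ~ e^2 Omega^2 r^2.
    return np.sqrt(0.5 * (vx**2 + 2.0 * vy**2)) / (Omega * r)
\end{verbatim}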
We also find that the particle eccentricity obtained in gravito-turbulent disks is in
general much greater
than can be excited in MRI-driven turbulent disks with a similar turbulent stress-to-pressure
ratio $\alpha$. The latter usually give $e \sim 10^{-2}$--$10^{-1} (H/R) Q^{-1}$
\citep{yangetal2009,yangetal2012,NG2010}, orders of
magnitude smaller than what we have observed here in our simulations.
\begin{figure}
\includegraphics[width=\columnwidth]{f10.pdf}\\
\includegraphics[width=\columnwidth]{f11.pdf}
\caption{\small{Top: The time evolution of orbital eccentricities averaged over all
particles of the same type.
The dashed line is $0.15 (\Omega t)^{1/2}$, which characterizes the early
growth of the eccentricity for large particles.
Bottom: The relative number distribution of particle eccentricity at late times in the
simulation. We choose
100 bins evenly divided between $\log_{10} (eR/H) = -3 $ and $1$ for the distribution plot.
The dashed line shows a typical Rayleigh distribution for comparison.
}}
\label{fig:ecc}
\end{figure}
Since the particle eccentricity is excited by a nearly random gravity field, its evolution follows the
general $\propto \sqrt{t}$ law \citep{ogiharaetal2007},
\beq
e = C \left(\frac{H}{R}\right) (\Omega t)^{1/2} \,,
\label{eq:ecc}
\enq
where the dimensionless coefficient $C$ determines the growth rate and can be measured with our
simulation. We fit the early growing phase of both the $\tau_s=10^2$ (cyan squares) and $10^3$
(magenta crosses) particles with the above relation and obtain $C\simeq 0.15$ as the best-fit coefficient (see
the black dashed curve in Figure~\ref{fig:ecc}).
The
excitation time scale of the eccentricity can be estimated as $t_{\rm exc}\sim e/(de/dt)\sim 2e^2(R/H)^2
C^{-2}\Omega^{-1}$. We can therefore predict the saturated eccentricity by equating $t_{\rm exc}$
with the damping time scale, which in our case is simply the stopping time $\tstop$. The
eccentricity at saturation is therefore
\beq
e \sim C (H / R) (\tau_s/2)^{1/2} \,.
\enq
For $\tau_s=10^2$ particles, this gives $e \sim 1.1 (H/R)$,
which matches what we observe in Figure~\ref{fig:ecc} very well. It also predicts a saturation level
of $e\sim 4.7 (H/R)$ if we extend the simulation for another $\sim 300\Omega^{-1}$.
Equation~\ref{eq:ecc} also allows us to measure the dimensionless parameter $\gamma$ which
characterizes the amplitude of the fluctuating gravity field as defined in
\citet{ogiharaetal2007}. Following \citet{IGM2008} and \citet{OO2013a}, we write the eccentricity
growth as
\beq
e \simeq 1.6\gamma \left(\frac{M_{\odot}}{M_*}\right)
\left(\frac{\Sigma}{10\,{\rm g}\,{\rm cm^{-2}}}\right)
\left(\frac{R}{100\,{\rm AU}}\right)^2
\left(\Omega t\right)^{1/2} \,.
\label{eq:ecc_IGM}
\enq
After comparing with Equation~\ref{eq:ecc}, we find the dimensionless turbulent strength
$\gamma\simeq 0.01$ in our gravito-turbulent
disk with $\Omega\tcool=10$ or $\alpha\simeq 0.02$, considerably larger than that in MRI disks with
similar $\alpha$ \citep[e.g., $\gamma\sim 10^{-4}$ in][]{yangetal2012}.
\begin{figure}
\includegraphics[width=\columnwidth]{f12.pdf}
\caption{\small{The time-averaged (last $15$ orbits) particle eccentricity (or equivalently the velocity
dispersion $\delta v/\cs$) as a function of stopping time.
The standard run using $\tcool = 10\Omega^{-1}$ is shown with asterisks.
The labels `high resolution', `large domain', and `no
self-gravity' represent results from tc=10.hires with doubled resolution,
tc=10.dble with doubled sheet size, and tc=10.wosg without gravitational forcing
from the gas, respectively, and are discussed in sections~\ref{sec:selfg} and~\ref{sec:box}.
}}
\label{fig:ecc_ts}
\end{figure}
We also calculate the saturated eccentricity distributions of particles. In Figure~\ref{fig:ecc}, we find that particles of all types obey a Rayleigh-like distribution
\citep{yangetal2009,yangetal2012}.
The probability rises toward greater eccentricity, and then drops at roughly the
mean eccentricity measured in the top
panel of Figure~\ref{fig:ecc}. Particles having $\tau_s < 1$ share nearly the same properties as gas. As $\tau_s$ increases above unity, increasingly many particles obtain higher eccentricities owing to stochastic gravitational forcing by gas. {This behavior contrasts with that shown in Figure $6$ of \citet{gibbonsetal2012}, in which the particle velocity distribution narrows as $\tau_s$ exceeds unity; their simulations
omit gravitational stirring.}
\subsection{Relative velocity \label{sec:vel}}
\begin{figure*}
\includegraphics[width=8cm]{f13.pdf}\hfill
\includegraphics[width=8cm]{vrelij_ts_numerical.pdf}\hfill
\includegraphics[width=8cm]{f14.pdf}\hfill
\includegraphics[width=8cm]{f15.pdf}
\caption{\small{Average relative velocities versus particle stopping time (top row),
and underlying probability density distributions of relative velocity (bottom row).
Averages are taken over the final orbit of
the simulations. The relative velocity $v_{\rm rel}$ is evaluated between particles of the same type (stopping time), while $v^{\prime}_{\rm rel}$
is measured with respect to $\tau_s=10^{-3}$ particles. Error bars show
the standard deviation for run tc=10 alone (asterisks).
The probability density curve for $v_{\rm rel}$ between $\tau_s = 0.1$ particles (green squares
in bottom left panel) has a diminished amplitude because there is significant probability
at $v_{\rm rel}/\langle\cs\rangle < 10^{-4}$, which
is off the scale of the plot. For comparison, a Maxwellian distribution is shown in the bottom right panel as a dashed curve.
}}
\label{fig:vrel}
\end{figure*}
The high eccentricities obtained for these
particles indicate a large velocity dispersion, which raises the question of what the relative
velocity is in pairwise collisions.
We thus measure, in each grid cell, the relative velocity between particles of the same type (the mono-disperse
case), $v_{\rm rel}$, and the relative
velocity with respect to the smallest grains ($\tau_s=10^{-3}$; the bi-disperse case),
$v_{\rm rel}^{\prime}$, assuming
there is no sub-structure below the grid scale. The results are shown in Figure~\ref{fig:vrel}.
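A minimal per-cell sketch of these two measurements is given below; the array layout is a placeholder, and the exact pairing and averaging conventions of our analysis may differ in detail:
\begin{verbatim}
import numpy as np

def cell_relative_velocities(cell_id, v, v_small):
    # cell_id : (N,) grid-cell index of each particle of one species.
    # v       : (N, 2) velocities of that species.
    # v_small : (n_cells, 2) mean velocity of the tau_s = 1e-3 particles
    #           in each cell (placeholder layout).
    v_rel, v_rel_b = [], []
    for c in np.unique(cell_id):
        vc = v[cell_id == c]
        if len(vc) > 1:                                  # mono-disperse pairs
            dv = vc[:, None, :] - vc[None, :, :]
            iu = np.triu_indices(len(vc), k=1)
            v_rel.append(np.linalg.norm(dv[iu], axis=-1).mean())
        v_rel_b.append(np.linalg.norm(vc - v_small[c], axis=-1).mean())
    return np.mean(v_rel), np.mean(v_rel_b)
\end{verbatim}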
In general, the relative velocities (see the asterisk symbols in the top two panels) $v_{\rm rel}$ and
$v_{\rm rel}^{\prime}$ increase from
$\sim 10^{-2}\cs$ to $\gtrsim \cs$ as $\tau_s$ increases from $10^{-3}$ to $10^3$, consistent with
the increasing velocity dispersion or eccentricity discussed in the previous section.
Exceptions occur at $\tau_s= 0.1$--$1$, where particles show the strongest clustering effects
(coherent and therefore less relative motion within the filaments): the
relative velocity between particles of the same type is greatly reduced, even dropping below
$10^{-3}\cs$ in the $\tau_s=0.1$ case. The general approach of approximating particle relative velocities
via the measurement of the velocity dispersion fails here.
Our $v_{\rm rel}$ and $v_{\rm rel}^{\prime}$ for smaller particles ($\tau_s \leq 0.1$) seem similar to what are found in MRI-driven turbulent disks \citep[][]{carb2010}.
However, our $v_{\rm rel}$ and $v^{\prime}_{\rm rel}$ increase with $\tstop$
for larger particles with $\tau_s>0.1$, in contrast to MRI-driven turbulence results \citep[cf. Figures~2 and 3
in][]{carb2010}, where turbulent torquing due to the fluctuating gravity field of the gas is not
considered. The latter show that $v_{\rm rel}$ falls continuously, and
$v^{\prime}_{\rm rel}$ stays roughly constant with $\tstop$, in agreement with the
theoretical prediction in isotropic turbulence \citep{OC2007}. We will discuss the effects of
gravitational stirring in section~\ref{sec:selfg}.
We also show the probability density functions (PDFs) for $v_{\rm rel}$ and
$v_{\rm rel}^{\prime}$ in Figure~\ref{fig:vrel}.
The PDFs of larger particles ($\tau_s > 1$) are similar to a
Maxwellian distribution (see the dashed curve in the bottom right panel for
an example of a Maxwellian) \citep{windmark2012b,garaudetal2013}.
Particles smaller than $\tau_s\sim 1$, however, show broader distributions and
smaller mean values than the bigger particles. They deviate significantly from a Maxwellian and resemble more of a log-normal distribution \citep{mitraetal2013,PPS2014b}.
Relative velocities with respect to the smallest particles, i.e., $v^{\prime}_{\rm rel}$, have
distributions that shift gradually to higher values as
the size of the larger particle increases. {Between $\tau_s = 0.01$ and $\tau_s = 0.001$ particles, the largest
relative velocities $v^{\prime}_{\rm rel} > 0.1 \cs$, which might lead to collisional destruction, are mostly due to their different
deceleration rates in post-shock regions \citep{NM2004,JT2014}.
As plotted in Figure~\ref{fig:postshock}, the locations where $v^{\prime}_{\rm rel} >0.1 \cs$ (white
symbols) are found just behind shock fronts,
where gas radial velocities are discontinuous.}
\begin{figure}
\includegraphics[width=\columnwidth]{f16.pdf}
\caption{\small{Snapshot of gas radial velocities (normalized by the volume-averaged sound speed) at $t=200\Omega^{-1}$. Large relative velocities ($>0.1\cs$) between $\tau_s=10^{-2}$ and $10^{-3}$ particles are overlaid as white symbols.
Evidently, large relative velocities between
particles characterize regions just behind shock fronts; in post-shock regions, particles decelerate at different rates according to their different stopping times \citep{NM2004,JT2014}.
}}
\label{fig:postshock}
\end{figure}
Relative velocities $v_{\rm rel}$ between
particles of the same type can be separated
into two groups.
One group corresponds to the small-size particles ($\tau_s\leq 1$), for which
relative velocities peak
at $v_{\rm rel} \simeq 0.01 \cs$.
The other group corresponds to large-sized particles ($\tau_s > 1$), for which the peak velocities are sonic. We note that the $\tau_s = 0.1$ (green squares) particles
show a diminished amplitude and a significant shift toward smaller relative velocities
due to the clustering effect: $\sim 80\%$ of the contribution, from
$v_{\rm rel}/\cs \ll 10^{-4}$, is not shown in this plot. For $\tau_s = 1$
particles, there is also a second, near-sonic contribution, which shows the
transitional behavior from small (friction-dominated) to large (gravity-dominated)
particles.
\subsection{Effects of gravitational stirring\label{sec:selfg}}
The gravitational stirring effect has been studied in MRI-driven turbulent
disks and shown to induce high velocity dispersion for solid bodies bigger than
$\sim 100\,\rm{m}$ (or $\tau_s \gtrsim 10^3$ at radius $R\sim$\,AU)
\citep{NG2010,yangetal2009,yangetal2012,OO2013a,OO2013b}. Here we show that gravitational
forcing becomes dominant for even smaller particles owing to the much stronger density
fluctuations, $\delta\Sigma/\Sigma\sim 5\sqrt{\alpha}$ in gravito-turbulent disks (see
section~\ref{sec:tcool}) rather than
$\sim \sqrt{0.5\alpha}$ in MRI-driven turbulent disks \citep{NG2010,yangetal2012}.
As the stopping time increases, the first term in
Equation~(\ref{eq:eom_par}), the drag force, becomes less important compared to the second term, the
gravitational acceleration. Assuming the characteristic length scale of the spiral density features
is $l$, then $|\nabla \Phi |\sim \Phi / l \sim G\delta\Sigma$, where $\delta\Sigma\sim \Sigma$ in
our gravito-turbulent disk is the density fluctuation.
The length scale can be approximated with the most unstable wavelength for axisymmetric
disturbances, or simply the disk scaleheight,
$l \sim \cs^2/G\Sigma \sim Q^2 (G\Sigma/\Omega^2) \sim H$. This is also supported by our
simulations; the density waves have typical radial wavelength $\sim H$ in the top left panel of
Figure~\ref{fig:8panel}.
The drag force is $|\ux - \vx|/\tstop \lesssim \Omega l /\tstop \sim \cs /\tstop$.
The ratio of these two thus gives
\beq
\frac{|\nabla\Phi|} {|\ux-\vx|/\tstop} \sim \frac{1}{Q} \frac{\delta\Sigma}{\Sigma} \tau_s \,,
\label{eq:ratio}
\enq
so the gravitational stirring becomes more important than aerodynamic drag when
$\tau_s > Q \sim 1$ in gravito-turbulent disks. However, for non-self-gravitating turbulent disks
such as MRI-driven turbulent disks, $\delta\Sigma/\Sigma$ is one order of magnitude smaller, and
$Q\gg 1$. As a result, gravitational stirring only affects particles with $\tau_s > 10 Q \gg 1$.
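As a rough numerical illustration, treating Equation~\ref{eq:ratio} as an equality and inserting the measured values of the standard run ($Q\simeq 3$ and $\delta\Sigma/\Sigma\simeq 0.6$; section~\ref{sec:ic}) places the crossover at
\beq
\tau_s \sim Q \left(\frac{\delta\Sigma}{\Sigma}\right)^{-1} \simeq 5 \,,
\enq
which is of the same order as the transition seen between $\tau_s\sim 0.1$ and $10$ in Figure~\ref{fig:fixed_ts}.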
\begin{figure}
\includegraphics[width=\columnwidth]{f17.pdf}\\
\includegraphics[width=\columnwidth]{f18.pdf}
\caption{\small{Examples of the radial displacement of two particles with fixed stopping
times $\tau_s = 10^{-3}$ (black) and $10^3$ (magenta) in runs with (top panel)
and without (bottom panel) the gravitational pull from the gas included in the calculation. The
gravitational stirring of the $\tau_s=10^3$ particle causes large-amplitude oscillations, which are absent in the bottom panel where gravity from the gas
is removed. }}
\label{fig:disp}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{f19.pdf} \\
\includegraphics[width=\columnwidth]{f20.pdf}
\caption{\small{Similar to Figure~\ref{fig:fixed_ts} and \ref{fig:mass}, we show radial
diffusion coefficients (top panel) and cumulative fraction of dust mass (bottom
panel) for run tc=10.wosg, in which the gravitational forcing from the gas is
artificially removed. At larger stopping time, the measured $\Dp$ without
gravitational stirring are now closer to models of \citet{YL2007} (dashed gray
curve) or \citet{cuzzietal1993} (dotted). The clustering effect for particles of $\tau_s=0.1$ observed in Figure~\ref{fig:mass} is absent when gravitational stirring is removed.}}
\label{fig:wosg}
\end{figure}
In the top panel of Figure~\ref{fig:disp}, we show the time variation of the radial positions of two representative
particles in the standard simulation. The particle with the larger $\tau_s=10^3$ oscillates with a
peak-to-peak amplitude of $\sim 40 G\Sigma_0/\Omega^2$ at the orbital
frequency $\Omega$ after it is released in the turbulent disk. In contrast, the
small particle, with $\tau_s=10^{-3}$, scatters randomly over time and does not show strong
periodic variations.
To reveal the effect of self-gravity on the dust dynamics, we perform a rerun (tc=10.wosg) of the
standard simulation in which the gravitational forcing is manually suppressed, i.e., the
$-\nabla\Phi$ term in Equation~\ref{eq:eom_par} is removed. The radially projected
trajectories of the same two representative particles are plotted in the bottom panel of
Figure~\ref{fig:disp} for comparison. Clearly, the small particle is not affected by the
absence of the
gravitational acceleration and behaves similarly to the standard run.
However, the
larger particle in tc=10.wosg barely moves radially when the only acceleration/deceleration
is friction.
Without the gravitational stirring, the particle diffusion also looks significantly
different from
that in Figure~\ref{fig:fixed_ts}. In Figure~\ref{fig:wosg}, we present the diffusion coefficients
measured in run tc=10.wosg using the
same method as in Figure~\ref{fig:fixed_ts}. Again, the small-$\tstop$ particles are unaffected
and therefore give nearly the same $\Dp$ as in the standard run. The $\Dp$ of particles with
$\tau_s > 1$ drops rapidly as $\Dp \propto \tau_s^{-2}$, which reflects the lack of
gravitational forcing. We note that the exact relation between the diffusion coefficient and stopping
time deviates slightly from the models of \citet{cuzzietal1993, YL2007}. We speculate that this is a
result of the power spectrum in gravito-turbulent disks differing from the homogeneous turbulence adopted in those models.
In Figure~\ref{fig:wosg}, we also compute the cumulative mass fraction for run tc=10.wosg. As
expected, the $\Omega\tstop = 1$ particles show the strongest concentration and contribute more
towards the highest concentrations ($\gtrsim 100$) than in the standard tc=10 run. Both
$\Omega\tstop = 0.1$ and $\Omega\tstop = 10$ show similar, secondary clustering, but much
weaker than the concentration of $\Omega\tstop = 0.1$ in the case where gravity is present.
Without the stochastic gravitational stirring, the averaged particle eccentricity for
all types of
particles never exceeds
$e\sim 0.3 (H/R)$, as shown in Figure~\ref{fig:ecc_ts} (green squares).
The small particles ($\tau_s\leq 1$) have similar eccentricities as in the
standard run; for the larger particles, however, the drag is dynamically unimportant, and $e$
remains near zero (its initial value) as $\tau_s$ increases.
The gravitational stirring is also responsible for the increasing relative velocities of
particles with $\tau_s\ge 1$.
As shown by the green squares in the top row of Figure~\ref{fig:vrel}, when gravitational forcing is not included,
$v_{\rm rel}$ decreases and $v^{\prime}_{\rm rel}$ stays constant with increasing $\tau_s$,
rather than rising, which is consistent with the results of
previous studies \citep{OC2007,carb2010}.
\subsection{Fixed particle size\label{sec:fixed_size}}
\begin{figure*}
\includegraphics[width=8cm]{f21.pdf} \hfill
\includegraphics[width=8cm]{f22.pdf} \hfill
\includegraphics[width=8cm]{f23.pdf}\hfill
\includegraphics[width=8cm]{f24.pdf}
\caption{\small{
Diffusion coefficients (top left), particle eccentricity (top right), relative velocities $v_{\rm rel}$ (bottom left) and $v^{\prime}_{\rm rel}$ (bottom right) versus the effective stopping time for fixed size simulation tc=10.size. The effective stopping times are calculated by time and particle averaging of $\tstop$ of individual particle. Results of tc=10 from Figure~\ref{fig:fixed_ts}, which are shown with smaller grey squares, almost match fixed size results and therefore validate the usage of fixed stopping time particles in this work. We note the constant $\tstop$ approach could overestimate the relative speed for intermediate size particles with $a^*=1$ or effective $\tau_s\sim 1$.
}}
\label{fig:fixed_size}
\end{figure*}
\begin{figure}
\includegraphics[width=\columnwidth]{f25.pdf}
\caption{\small{How particles concentrate
in a run with fixed stopping time
$\tau_s = 1$ (from run tc=10)
versus~a run with fixed particle size
$a^*=1$ (from run tc=10.size). The latter
case, having an effective $\tau_s \simeq 0.83$, shows stronger concentration. }}
\label{fig:cluster_fixsize}
\end{figure}
When the fluctuations of the density and sound speed are large (of order unity), as is the case in
gravito-turbulent disks, the constant-stopping-time assumption for individual particles might not be
valid, since the particles experience a varying $\tstop$ as they travel through different regions of the
disk. In this section, we therefore test whether our results still hold statistically when we fix the particle size
instead of the stopping time.
We set up seven types of particles with fixed size, dimensionless size $a_i^* = 10^{i-4}$ for
$i=1$-$7$, as described in Equation~(\ref{eq:epstein}) and section~\ref{sec:ic}. Everything else is
kept the same as the standard run tc=10.
For each type of particles, we define an effective stopping time
$\langle\langle \tstop{_{,i}}\drangle\rangle_t$ by
first measuring the stopping time of each individual particle according to
Equation~\ref{eq:epstein}, and then averaging over all particles
of the same type and over time. We find the averaged stopping times
$\langle\langle\tstop{_{,i}}\rangle\rangle_t\Omega\simeq
[1.04\times 10^{-3},1.01\times10^{-2}, 0.086, 0.827, 13.5, 2.10\times 10^2, 2.31\times 10^3]$ for $a_i^* = 10^{-3}$--$10^3$. For small particles ($a_i^* < 0.1$),
$\langle\langle \tstop{_{,i}}\rangle\rangle_t \simeq a_i^* \Omega^{-1}$ as
designed, because we choose the conversion factor $f$ in Equation~\ref{eq:epstein}
to match $a^*$ with the $\tau_s$ of the fixed stopping time case. For intermediate
sizes, the averaged stopping time is only $\lesssim 20\%$ smaller than the
designed value. For larger particles with $a_i^* > 1$, the effective stopping
time is roughly $\sim 2 a_i^* \Omega^{-1}$.
After converting $a^*$ to the effective stopping time, we find that the results stay the same as in
the simulations using particles with constant stopping times. In Figure~\ref{fig:fixed_size},
we show the diffusion coefficients, particle eccentricities, and relative velocities as large
colored symbols. They almost follow the results using fixed $\tstop$ (gray squares). Statistically, these results therefore validate the use of fixed-$\tstop$ particles even in the highly
fluctuating
density field of a gravito-turbulent disk. In general, the results for fixed $\tstop$ can
approximate the results for fixed-size particles.
This approach (constant $\tstop$) might, however, underestimate the
clustering effect for the intermediate-size particles with $\tau_s = 1$, as shown in
Figure~\ref{fig:cluster_fixsize}. This also leads to an overestimate of the relative speed $v_{\rm rel}$ for intermediate-size particles with $\tau_s \sim 1$. The actual relative speed for intermediate-size particles would be even smaller than what we have obtained.
\subsection{Effects of cooling time\label{sec:tcool}}
\begin{figure*}
\includegraphics[width=8cm]{f26.pdf} \hfill
\includegraphics[width=8cm]{f27.pdf} \hfill
\includegraphics[width=8cm]{f28.pdf}\hfill
\includegraphics[width=8cm]{f29.pdf}
\caption{\small{The dependence on cooling
time (different symbols) of the radial diffusion coefficient (top left), saturated
particle eccentricity (top right), relative velocity between particles of the same
type (bottom left), and relative
velocity with respect to $\tau_s=10^{-3}$ particles (bottom right).
The measured $\Dp$ is nearly inversely proportional to $\tcool$, while particle eccentricity and relative velocities roughly follow $\tcool^{-1/2}$. }}
\label{fig:tcool}
\end{figure*}
The dust dynamics is also affected by the strength of the turbulence. In a
gravito-turbulent disk, the strength of the turbulence, e.g., in terms of $\alpha$, is inversely
proportional to the cooling time \citep{gammie01,ShiChiang2014}. We can therefore
study how the dust dynamics changes with the turbulent strength by varying $\tcool$.
We perform simulations with $\tcool\Omega = 5$ (run tc=5), $20$ (run tc=20), and $40$ (run tc=40)
using the same particle distribution as in
the $\tcool\Omega = 10$ standard run. We then measure the diffusion coefficient in each $\tcool$
case and present the results in Figure~\ref{fig:tcool}. We find that $\Dp$ for the various cooling times shares
a rather similar profile. It is flat at the $\tau_s < 0.1$ end, where aerodynamic
coupling is strong. $\Dp$ traces $\Dg$ in this regime, and the Schmidt number, which measures the ratio of angular momentum transport to mass diffusion,
${\rm Sc} \equiv \alpha \cs^2\Omega^{-1}/\Dg \sim 1.4$--$2.4$, stays roughly constant.
The diffusion coefficient rises for $ 0.1 \leq \tau_s \leq 1$, indicating
the increasing effect of the gravitational stirring.
In the very long stopping time regime, $\tau_s \geq 10$, the profile levels off again, as the dominant gravitational acceleration does not
depend on $\tstop$. The magnitude of $\Dp$ is usually $5$--$9$ times that of the small particles or the gas.
For a given particle size, we find that $\Dp$ scales roughly as
$\propto 1/\tcool$, which indicates stronger diffusion in stronger turbulence.
Varying the cooling time also affects the particle concentration. In the top row of
Figure~\ref{fig:cluster_tcool}, we show density snapshots of the simulations using different cooling
times. As $\tcool$ gets longer, the turbulent amplitude diminishes, as does the tilt angle
of the shearing wave features. This leads to less frequent interactions between separated shearing
waves and longer lifetimes for the existing density features.
In the bottom row of Figure~\ref{fig:cluster_tcool}, we find that the strong interactions of shearing waves
in the $\Omega\tcool = 5$ run lead to plume-like structures of the $\tau_s = 1$ particles with typical
concentrations of $\sim 10$--$100$. The concentration is greater, $\gtrsim 100$--$10^3$, for the $\Omega\tcool =
40$ case, and the particles are more spatially confined in very narrow streams. In
Figure~\ref{fig:cdf_tcool}, we quantitatively confirm this result by showing the cumulative mass
fraction of the $\tau_s=1$ particles for the various cooling times. As $\tcool$ rises, more and
more mass concentrates toward larger dust densities, even though the density dispersion of the gas
decreases.
\begin{figure*}
\includegraphics[width=16cm]{cluster_collage_small.png}
\caption{\small{The logarithmic values of the gas (top) and $\Omega\tstop = 1$ particle density
                (bottom) at the end of the simulations with various cooling times, as labelled in the top
                panel of each
                column. Color bars are on the left of each row. As $\tcool$ increases, the
                fluctuation of the gas density becomes weaker and the tilt angle of the shearing waves
                gets smaller, both indicating relatively weaker angular momentum transport as the
                cooling time scale gets longer. However, stronger clustering effects are found for
                the longer cooling cases as the velocity dispersion imparted by the gas gets weaker.
}}
\label{fig:cluster_tcool}
\end{figure*}
The particle relative velocities and eccentricities drop as the cooling time gets longer (see Figure~\ref{fig:tcool}). In general, from $\Omega\tcool=10$ to
$40$, $v_{\rm rel}$, $v^{\prime}_{\rm rel}$ and $e$ diminish by less than a factor of $\sim 2$-$3$,
roughly scaling as $\alpha^{1/2}$, similar to the way the velocity/density dispersion scales.
Exceptions occur for the intermediate-size particles (as found in section~\ref{sec:vel}),
$\tau_s=0.1$ and $1$, where the relative velocity between particles of the same kind, $v_{\rm rel}$, drops
rapidly with increasing cooling time. It diminishes by a factor of $\sim 30$ from $\Omega\tcool=5$ to $40$
for $\tau_s=0.1$ particles and by $\lesssim 10^3$ times for particles with $\tau_s=1$. As
the intermediate-sized particles show the strongest clustering, their rather low relative velocities
would favor gravitational collapse.
Based on our results in
Table~\ref{tab:tab1}, the density dispersion usually follows
$\delta\Sigma/\Sigma \simeq 2(\Omega\tcool)^{-1/2} \simeq 5 \alpha^{1/2}$ in
gravito-turbulent disks. As already mentioned in section~\ref{sec:selfg}, this is a much stronger
fluctuation than that found in MRI disks ($\lesssim \alpha^{1/2}$). As in
section~\ref{sec:ecc}, we also measure the dimensionless turbulent strength $\gamma$ based on the
eccentricity growth for the different $\tcool$ runs and list the values in Table~\ref{tab:tab1}. The strength
parameter, $\gamma\simeq 0.07 \alpha^{1/2}$, is about one order of magnitude greater than those found in MRI-driven
turbulent disks \citep{NG2010,yangetal2012}, which also explains why we observe much larger
eccentricities and relative speeds for large particles ($\tau_s > 1$) in our gravito-turbulent disks.
\begin{figure}
\includegraphics[width=\columnwidth]{f30.pdf}
\caption{\small{Similar to Figure~\ref{fig:mass}, the cumulative mass fraction of the particles with
$\Omega\tstop = 1$ for various cooling times. Longer $\tcool$ results in stronger
concentration and clustering.
}}
\label{fig:cdf_tcool}
\end{figure}
\subsection{Domain size and resolution effects \label{sec:box}}
In this section, we investigate whether our results are affected by the size of the computational domain
we adopt. By doubling the size of the shearing sheet while keeping the numerical resolution
the same, run tc=10.dble quadruples the numbers of grid cells and
dust particles.
Following the discussion in section~\ref{sec:ic}, we first start with a pure gas simulation with
$\Omega\tcool=10$ using the extended
domain. This simulation runs for $\sim 200\Omega^{-1}$, and a quasi-steady gravito-turbulent state is
reached after $\lesssim 20\Omega^{-1}$. We then inject the particles at $t=50\Omega^{-1}$, similar
to our standard run, and
run for another $200\Omega^{-1}$ with both gas and particles on the grid.
The results are summarized in Table~\ref{tab:tab1} (for the gas) and \ref{tab:tab2} (for the dust
diffusion).
We find that the gas properties of tc=10.dble do not deviate much
from those of the
standard run tc=10. The diffusion coefficients for the different types of dust particles are also very
similar, differing by $\sim 20\%$ at most. No significant variations are observed when comparing the
particle eccentricities/relative velocities between the standard (black asterisks) and double-sized (cyan triangles) simulations
in Figures~\ref{fig:ecc_ts} and \ref{fig:vrel} either. We find similar clustering for the intermediate-size
particles as well. All of this suggests that converged results are obtained with the domain size of our standard run.
In another run, tc=10.hires, we double the number of grid cells in
the $x$ and $y$ directions and {quadruple} the total number of particles of each species to study the effects of numerical resolution.
The gas properties and the radial diffusion coefficients in this
higher resolution run are listed in Table~\ref{tab:tab2}. We find that doubling the
resolution slightly increases $\alpha_{\rm tot}$ and $\Dp$ by
$\lesssim 20\%$. The particle eccentricity and relative velocities (see red diamond symbols in
Figure~\ref{fig:ecc_ts} and top two panels of
Figure~\ref{fig:vrel}) are also very similar to those from the standard run. We therefore confirm
good numerical convergence in our results.
\section{DISCUSSION\label{sec:disc}}
In this section, we construct a model of a gravito-turbulent disk and convert the
scale-free results of the previous sections into more physical and realistic terms.
An optically thin turbulent flow driven by its own gravity can easily be found on the outskirts of
protoplanetary disks. At an orbital distance of $R\sim 100$~AU from a solar-mass star, a disk mass
of $\Sigma R^2\sim 0.01 M_{\odot}$ or surface density of $\Sigma\sim 10 \rm{g}\,\rm{cm}^{-2}$, and a
temperature of $T\sim 10\,\rm{K}$ would lead to a Toomre $Q$ of about unity and a cooling time scale much
longer than the local dynamical time. Depending on the dust opacity of the disk, the cooling time
can vary from several times to orders of magnitude longer than the local $\Omega^{-1}$
\citep{ShiChiang2014}. In this disk, $H/R \sim 0.1$ and the gas density is $\rho_{\rm g}\sim \Sigma/H\sim
10^{-13}\rm{g}\,\rm{cm^{-3}}$.
\subsection{Particle sizes\label{sec:disc_size}}
The mean free path of the gas at radius $R\sim 100$~AU is $\lambda \sim 10^4\,\rm{cm}$, which is typically much
greater than the dust particle sizes we have simulated. Therefore the particles' stopping time
can be characterized as
\begin{eqnarray}
\tau_s \simeq 0.1 \left(\frac{s}{\rm{cm}}\right)
\left(\frac{R}{100\,\rm{AU}}\right)^{-3/2}
\left(\frac{T}{10\,\rm{K}}\right)^{-1/2}\nonumber \\
\times\left(\frac{\rho_s}{1\,\rm{g\,cm^{-3}}}\right)
\left(\frac{\rho_g}{10^{-13}\,\rm{g\,cm^{-3}}}\right)^{-1} \,,
\end{eqnarray}
using Epstein's law, where $s$ is the physical size of the dust particle.
Specifically, in our simulations particles with $\tau_s = 10^{-3}$
translate to $0.1$~mm in particle size, and $\tau_s = 10^3$ corresponds to $0.1$~km
planetesimals.
Particles even smaller than $0.1$~mm are perfectly coupled to the gas and
will behave very similarly to the sub-mm-sized particles we explored in our simulations. For planetesimals even bigger than
$0.1$~km in size, the stopping time starts to depend on the relative velocity between the dust and
gas.
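For reference, the short sketch below implements the Epstein-regime conversion between particle size and dimensionless stopping time given above; the default arguments are the fiducial disk values at $R=100$~AU, and the snippet is meant purely as an illustration of the scaling.
\begin{verbatim}
def tau_s_epstein(s_cm, R_AU=100.0, T_K=10.0, rho_s=1.0, rho_g=1e-13):
    """Dimensionless stopping time tau_s = t_stop * Omega in the Epstein regime,
    following the scaling above (s in cm, densities in g/cm^3)."""
    return (0.1 * s_cm * (R_AU / 100.0)**-1.5 * (T_K / 10.0)**-0.5
            * (rho_s / 1.0) * (rho_g / 1e-13)**-1)

# tau_s = 1e-3 corresponds to ~0.1 mm grains and tau_s = 1e3 to ~0.1 km planetesimals:
for s in [1e-2, 1.0, 1e2, 1e4]:   # particle radius in cm
    print(f"s = {s:8.0e} cm  ->  tau_s = {tau_s_epstein(s):.0e}")
\end{verbatim}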
However, similar to Equation~\ref{eq:ratio}, we can show that in the Stokes regime, the gravitational stirring always
dominates the gas drag in our gravito-turbulent disk. The general empirical formula for
the drag force reads $F_{\rm D} = 0.5 C_{\rm D}({\rm{Re}})\pi s^2\rho_g {\rm v_{gs}^2}$, where $\rm{Re}\sim s
\rm{v_{gs}}/\nu\sim s {\rm v_{gs}}/\lambda \cs$ is the fluid Reynolds number, the relative velocity between gas
and dust ${\rm v_{gs}}\sim v^{\prime}_{\rm rel} \gtrsim \cs$ for large particles as implied by our simulations, and the
dimensionless coefficient $C_{\rm D}(\rm{Re})\sim 0.1$-$10$ for typical Re of
$s > 0.1$~km planetesimals in our gravito-turbulent disk described above
\citep{Cheng2009,PMC2011}. The ratio between the specific drag force and the
gravitational acceleration is therefore
\begin{eqnarray}
\label{eq:ratio2}
\frac{F_{\rm D}/m_s}{|\nabla\Phi|} \sim 7\times 10^{-3} \left(\frac{C_{\rm D}}{10}\right)
\left(\frac{s}{10^4\,\rm{cm}}\right)^{-1} \nonumber \\
\times
\left(\frac{\rho_s}{1\,\rm{g}\,\rm{cm^{-3}}}\right)^{-1}
\left(\frac{H/R}{0.1}\right)^{-1} \nonumber \\
\times
\left(\frac{R}{100\,{\rm{AU}}}\right)^{-1}
\left(\frac{{\rm v_{gs}}}{0.1\,\rm{km}\,\rm{s^{-1}}}\right)^2 \,,
\end{eqnarray}
where $|\nabla\Phi|\sim G\delta\Sigma\sim 5\sqrt{\alpha}\,G\Sigma$ (see section~\ref{sec:tcool}) is
the gravitational acceleration due to the surface density fluctuations, and
$m_s$ is the mass of the assumed spherical planetesimal of radius $s$. Given
Equation~\ref{eq:ratio2}, we can see that the dynamics of the km- or larger
planetesimals is mostly determined by the gravity of the gas in gravito-turbulent disks.
Therefore, our results for $\tau_s=10^3$, or $s=0.1$\,km, particles could be easily extrapolated
to even bigger planetesimals.
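As an order-of-magnitude check of Equation~\ref{eq:ratio2}, the snippet below evaluates the specific Stokes drag and the gravitational acceleration directly for a $0.1$~km planetesimal. The value $\alpha\simeq 0.02$ (that of the tc=10 run) is inserted here as an assumption so that $|\nabla\Phi|\sim 5\sqrt{\alpha}\,G\Sigma$ can be evaluated; the numbers are indicative only.
\begin{verbatim}
import numpy as np

G      = 6.674e-8    # cgs
Sigma  = 10.0        # gas surface density [g cm^-2]
rho_g  = 1e-13       # gas density [g cm^-3]
alpha  = 0.02        # turbulence strength of the tc=10 run (assumed)
C_D    = 10.0        # drag coefficient
s      = 1e4         # planetesimal radius, 0.1 km [cm]
rho_s  = 1.0         # internal density [g cm^-3]
v_gs   = 1e4         # gas-dust relative velocity, 0.1 km/s [cm s^-1]

# specific drag force, F_D/m_s with m_s = (4/3) pi s^3 rho_s
a_drag = 3.0 * C_D * rho_g * v_gs**2 / (8.0 * s * rho_s)
# gravitational acceleration from the surface-density fluctuations
a_grav = 5.0 * np.sqrt(alpha) * G * Sigma

print(f"drag/gravity ~ {a_drag / a_grav:.0e}")   # ~1e-2, i.e. gravity dominates
\end{verbatim}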
\subsection{Implications\label{sec:disc_imply}}
\subsubsection{Fast mass transport via turbulent diffusion\label{sec:diffusion}}
Radial diffusion due to turbulent and gravitational stirring can contribute to the mass transport of
the solids.
The time for particles to cross a radial distance $\Delta R$ solely through diffusion is
$t_{\rm diff} \sim \Delta R^2/\Dp$. If we take
the $\Omega\tcool = 10$ run as an example,
$t_{\rm diff} \sim 10^5 \left(\Delta R/10\rm{AU}\right)^2$\,yr for cm or smaller particles ($\tau_s < 1$) or
$\sim 10^4\left(\Delta R/10\rm{AU}\right)^2$\,yr for particles of $10$\,cm or bigger ($\tau_s
\geq 1$). Both suggest that radial diffusion in gravito-turbulent disks might play a role in the
radial transport of the solids. For comparison, the drift time scale due to the radial
pressure gradient of the gas is $t_{\rm drift}\sim \Delta R/v_{\rm drift}\gtrsim 10^4 \left(\Delta
R/10\rm{AU}\right)$\,yr, with radial drift
velocity $v_{\rm drift}\sim \tau_s (1+\tau_s^2)^{-1}\eta v_{\rm K}$ and $\eta\sim \left(\cs/v_{\rm
K}\right)^2$.
We thus find
\begin{eqnarray}
\frac{t_{\rm diff}}{t_{\rm drift}} \simeq
\begin{cases}
10\tau_s \left(\frac{\Delta
R}{10\rm{AU}}\right) \left(\frac{\alpha}{0.02}\right)^{-1} & \rm{if~~} \tau_s < 1 \\
\tau_s^{-1} \left(\frac{\Delta
R}{10\rm{AU}}\right) \left(\frac{\alpha}{0.02}\right)^{-1} & \rm{if~~} \tau_s \geq 1 \,,
\end{cases}
\end{eqnarray}
in which we put back in the cooling time dependence of the diffusion coefficient (see
section~\ref{sec:tcool}) as $\Dp \propto \alpha \propto (\Omega\tcool)^{-1}$ roughly.
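The timescales above can be reproduced with the rough sketch below. The sound speed is an assumption here: we set it by requiring a Toomre parameter of order unity, $\cs\simeq \pi G\Sigma/\Omega\simeq 0.1\,{\rm km\,s^{-1}}$ at $100$~AU, which is also consistent with the normalization of ${\rm v_{gs}}$ in Equation~\ref{eq:ratio2}; the $\Dp$ values are taken from Table~\ref{tab:tab2} for the tc=10 run.
\begin{verbatim}
import numpy as np

G, Msun, AU, yr = 6.674e-8, 1.989e33, 1.496e13, 3.156e7   # cgs

R     = 100.0 * AU
Omega = np.sqrt(G * Msun / R**3)
v_K   = Omega * R
Sigma = 10.0                          # g cm^-2
c_s   = np.pi * G * Sigma / Omega     # Toomre Q ~ 1  ->  c_s ~ 0.1 km/s (assumed)
eta   = (c_s / v_K)**2
dR    = 10.0 * AU

for tau_s, Dp_code in [(0.1, 0.011), (1.0, 0.049), (10.0, 0.091)]:  # D_p from Table 2, tc=10
    D_p     = Dp_code * c_s**2 / Omega
    t_diff  = dR**2 / D_p
    v_drift = tau_s / (1.0 + tau_s**2) * eta * v_K
    t_drift = dR / v_drift
    print(f"tau_s={tau_s:5.1f}: t_diff ~ {t_diff/yr:.0e} yr, "
          f"t_drift ~ {t_drift/yr:.0e} yr, ratio ~ {t_diff/t_drift:.1f}")
\end{verbatim}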
Radial transport due to turbulent diffusion and gravitational stirring thus becomes comparable to the radial drift,
especially for the large particles with $\tau_s \geq 1$. Outward
diffusive transport counters, to some extent, the inward drift of large particles and {partially alleviates} the radial drift barrier problem \citep{weidenschilling77,boss2015}. {In Figure~\ref{fig:dd}, we show this effect for particles with $\tau_s=1$ --- those that drift fastest --- using a diffusion coefficient $\Dp=0.1 \cs^2\Omega^{-1}$. We
evolve a narrow Gaussian distribution of particles at $100$\,AU for $10^5$ years by solving the Fokker-Planck
equation, including both radial drift and turbulent diffusion \citep{AdamsBloch2009}. The
finite volume algorithm \texttt{FiPy} \citep{FiPy:2009} is used to solve the partial differential
equation.\footnote{Downloadable at \url{http://www.ctcms.nist.gov/fipy/}}
About 10\% of the original set of particles (black solid curve in Figure~\ref{fig:dd}) can still be found at $R=100$\,AU; by contrast, if radial
diffusion is turned off (green dashed curve), practically none remain at the original location.}
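For readers who wish to reproduce the qualitative behaviour of Figure~\ref{fig:dd} without the full model, the following minimal explicit finite-difference sketch evolves a narrow Gaussian ring of $\tau_s=1$ particles under constant drift and diffusion coefficients evaluated at $100$~AU, neglecting their radial dependence and all geometric factors. It is a simplified stand-in for, not a reproduction of, the \citet{AdamsBloch2009} model solved with \texttt{FiPy}, so the exact numbers differ.
\begin{verbatim}
import numpy as np

AU, yr = 1.496e13, 3.156e7                  # cgs conversions
Omega  = 1.99e-10                           # 1/s at R = 100 AU around 1 Msun
c_s    = 1.05e4                             # cm/s (Toomre Q ~ 1 disk, as above)

D   = 0.1 * c_s**2 / Omega * yr / AU**2     # diffusion coefficient [AU^2/yr]
eta = (c_s / (Omega * 100 * AU))**2
v   = -0.5 * eta * Omega * 100 * yr         # tau_s = 1 drift speed [AU/yr], inward

r  = np.arange(20.0, 201.0, 1.0)            # radial grid [AU]
dr = r[1] - r[0]
S  = np.exp(-0.5 * ((r - 100.0) / 3.0)**2)  # initial narrow Gaussian ring

dt = 10.0                                   # yr, satisfies the explicit stability limits
for _ in range(int(1e5 / dt)):
    # upwind advection (v < 0, inward drift) plus central-difference diffusion
    adv  = -v * (np.roll(S, -1) - S) / dr
    diff = D * (np.roll(S, -1) - 2 * S + np.roll(S, 1)) / dr**2
    S   += dt * (adv + diff)
    S[0] = S[-1] = 0.0                      # crude absorbing boundaries

i100 = np.argmin(np.abs(r - 100.0))
print(f"surface density left at 100 AU, relative to the initial peak: {S[i100]:.2f}")
\end{verbatim}
With diffusion switched off ($D=0$) in this sketch, essentially nothing is left at $100$\,AU after $10^5$~yr, in line with the green dashed curve of Figure~\ref{fig:dd}.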
On the other hand, as we discuss in
the next section,
inward diffusion might also help particles move out of turbulent regions and avoid
disruption induced by gravitational torques. It is clear that
radial diffusion of dust in gravito-turbulent disks is significant.
\begin{figure}
\includegraphics[width=\columnwidth]{f31.pdf}
\caption{\small{The particle ($\tau_s=1$) distribution after $10^5$ years due to the effects of
radial drift and diffusion (using $\Dp = 0.1\cs^2/\Omega$). A 1D
advection-diffusion model \citep{AdamsBloch2009} is used to evolve the distribution.
About $10\%$ of the initial particles can still be found at $R=100$\,AU (black solid), in contrast to
the radial-drift-only case (green dashed), in which nearly all particles drift away from
$R=100$\,AU.
}}
\label{fig:dd}
\end{figure}
\subsubsection{Fragmentation and run-away accretion barriers\label{sec:fragment}}
Large relative velocities between particle pairs might lead to catastrophic disruptions.
For micron-to-meter-sized particles, the critical collisional speed which results in
fragmentation is $v_{\rm f}\gtrsim 1\,\rm{m}\,\rm{s^{-1}}$
\citep{blumwurm08,SL2009,carb2010}, which is $\gtrsim 0.01 \cs$ in our disk. Based on the
probability distribution function of the
relative velocity for equal-sized particles (bottom left panel of Figure~\ref{fig:vrel}),
particles of decimeter size or smaller ($\tau_s \leq 1 $) mostly stay below this fragmentation barrier.
As a particle grows even bigger, its self-gravity strengthens it against catastrophic disruption.
The criterion for fragmentation is that the specific kinetic energy of a collision ($v_{\rm
rel}^2/2$) exceeds the critical disruption energy
\beq
Q_{\rm D} \simeq \left[Q_0\left(\frac{s}{1\rm{cm}}\right)^a\!\! +
B\left(\frac{\rho_s}{1 \rm{g}\,\rm{cm^{-3}}}\right)
\left(\frac{s}{1\rm{cm}}\right)^b \right]\,\rm{erg}\,\rm{g^{-1}}\,,
\label{eq:Qd}
\enq
where $Q_0\sim 10^7\rm{-}10^8$ is the material strength, $B\simeq
0.3\rm{-}2.1$ parameterizes the self-gravity effect, and $a\simeq -0.4$ and $b\simeq
1.3$ for pair-collisions between equal-sized particles \citep{BA1999,IGM2008}. We note that
different material properties, projectile-to-target mass ratios
and impact velocities could all cause differences in this criterion \citep{SL2009}.
Therefore, the above energy criterion should be taken as a rough order-of-magnitude estimate.
Following \citet{IGM2008} and using Equation~\ref{eq:Qd}, we find that a collision results in
destruction if the relative velocity exceeds the fragmentation velocity
\beq
v_{\rm f} \simeq 0.13 \cs \left(\frac{s}{1\,\rm{km}}\right)^{0.65}
\left(\frac{\rho_s}{3\,\rm{g}\,\rm{cm^{-3}}}\right)^{1/2}
\left(\frac{R}{100\,\rm{AU}}\right)^{1/2}\,.
\label{eq:vf}
\enq
Thus particles with ${\rm{m}} \leq s \leq {\rm{km}}$ ($10\leq\tau_s\leq10^4$) would have
fragmentation velocity $\sim (0.01{\rm{-}} 0.1) \cs$, much lower than the most probable relative
velocity of those particles found in our simulations (see again the bottom left panel of
Figure~\ref{fig:vrel}).
The high relative velocities found in our
gravito-turbulent disks would therefore limit the sizes that the majority of planetesimals
could reach through collisional growth.
If the main channel of planetesimal formation is through gravitational run-away accretion, it would
require the relative velocity to be even lower than the escape velocity of the collisional outcome
\citep{IGM2008,OO2013a,OO2013b},
\beq
v_{\rm esc} \simeq 4\times 10^{-3} \cs
\left(\frac{s}{1\,\rm{km}}\right)
\left(\frac{\rho_s}{3\,\rm{g}\,\rm{cm^{-3}}}\right)^{1/2}
\left(\frac{R}{100\,\rm{AU}}\right) \,.
\label{eq:vesc}
\enq
Planetesimal formation through collisional accretion is therefore strongly disfavored in our
gravito-turbulent disks.
\begin{figure*}
\includegraphics[width=16cm]{cluster_time.png}
\caption{\small{Illustration of the time evolution of the dust concentration for the $\Omega\tcool=40$
                run, in which a selection of particles ($\sim 20,000$ in number) with $\tau_s =1$
                are marked as black dots, and the color contours in the background show the density
                fluctuation on a linear scale. We first identify a high-density ($\Sigma_p\gtrsim
                1000$) particle cluster at
                $t=48\Omega^{-1}$, as shown in the second panel with black dots. We then track each
                individual particle backward/forward in time. The compact cluster disappears only after
                $t=84\Omega^{-1}$, i.e., it lasts nearly $6$ orbits before being sheared away.
}}
\label{fig:cluster_time}
\end{figure*}
There are a few conditions that might allay the difficulties:
(1)~When taking $\tcool$ into account, a longer
cooling time scale would reduce the relative velocity (though the dependence on $\tcool$ is only weak; see
Figure~\ref{fig:tcool}) and thus lower the probability of fragmentation.
(2)~We note that the distribution functions show a sizeable spread toward smaller relative velocities,
therefore a small fraction of planetesimals might still survive collisions or even accrete
although the mean relative velocity is high \citep{windmark2012b}.
(3)~The strong diffusion may bring planetesimals outside the gravito-turbulent zone, where
further growth can proceed in a less violent environment.
(4)~If the pre-existing sizes of the planetesimals are big enough,
they might avoid the fragmentation and/or run-away accretion barriers, since $v_{\rm rel}$ scales with $\tau_s$
differently than $v_{\rm esc}$ and $v_{\rm f}$ do. We find $v_{\rm rel}/\cs \simeq (\tau_s/10)^{0.16}$ after
extrapolating our relative speed results in the bottom left panel of Figure~\ref{fig:tcool} toward the
high-$\tau_s$ end. We thus find that only planetesimals $\gtrsim 100\,{\rm km}$ in size could pass the fragmentation
barrier, and only $\gtrsim 1000\,{\rm km}$-sized planetesimals would undergo collisional accretion (see the sketch below).
Such large planetesimals might form via gravitational instability, as we discuss next.
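The size thresholds in point~(4) follow from combining Equations~\ref{eq:vf} and \ref{eq:vesc} with the extrapolated relative-velocity scaling. The sketch below carries out that comparison; the conversion between $\tau_s$ and size assumes the Epstein scaling of section~\ref{sec:disc_size}, and the crossing sizes it returns (a few hundred and a few thousand km) depend on the adopted prefactors, so they should be read only as order-of-magnitude estimates consistent with the $\gtrsim 100$~km and $\gtrsim 1000$~km thresholds quoted above.
\begin{verbatim}
import numpy as np

s_km  = np.logspace(0, 4, 400)           # planetesimal radius in km
tau_s = 0.1 * (s_km * 1e5)               # Epstein scaling at R = 100 AU,
                                         # extrapolated into the Stokes regime for illustration

v_rel = (tau_s / 10.0)**0.16             # extrapolated relative speed, in units of c_s
v_f   = 0.13 * s_km**0.65                # fragmentation threshold, in units of c_s
v_esc = 4e-3 * s_km                      # escape speed, in units of c_s

s_frag = s_km[np.argmax(v_f  > v_rel)]   # smallest size with v_f  > v_rel
s_acc  = s_km[np.argmax(v_esc > v_rel)]  # smallest size with v_esc > v_rel
print(f"fragmentation barrier passed for s >~ {s_frag:.0f} km")
print(f"run-away accretion possible for s >~ {s_acc:.0f} km")
\end{verbatim}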
\subsubsection{Gravitational collapse\label{sec:collapse}}
Despite the problems caused by the large relative velocities of particles bigger than meter size,
cm-to-decimeter-sized ($\tau_s=0.1\rm{-}1$) pebbles find themselves mostly in very dense coherent
structures and possess very low relative velocities. As shown in Figures~\ref{fig:8panel} and
\ref{fig:cluster_tcool}, the concentration factor $\Sigma_{\rm p}/\langle\Sigma_{\rm p}\rangle$ is
typically $\gtrsim 10$-$100$ in those filament-like structures. Concentrations can go even higher
than $\gtrsim 100$-$10^3$ times the original particle density for longer cooling time $\tcool =
40\Omega^{-1}$.
This density enhancement of the dust particles would push the dust-to-gas ratio from $\sim 1/100$ to
$\Sigma_{\rm p}/\Sigma_g \gtrsim 10$. More importantly, the resulting local dust density
\beq
\rho_{\rm p} \gtrsim 10^{-12} \left(\frac{\Sigma_{\rm p}/\Sigma_g}{10}\right) {\rm g}\,{\rm cm^{-3}}\,,
\enq
will exceed the Roche density
\beq
\rho_{\rm Roche} \sim \frac{M_*}{R^3}\sim 10^{-12}\left(\frac{M_*}{M_\odot}\right)
\left(\frac{R}{100\,\rm{AU}}\right)^{-3}\,,
\enq
where $M_*$ is the stellar mass, and therefore trigger gravitational collapse.
This opens up a potential channel of planetesimal formation via gravitational collapse
\citep{boss2000,britschetal2008,gibbonsetal2012,gibbonsetal2014,sc2013}.
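The Roche criterion above amounts to requiring a concentration of several hundred to $10^3$ times the mean dust density, as the following short check shows (assuming a background dust-to-gas ratio of $1/100$ and the fiducial $\rho_{\rm g}\sim 10^{-13}\,{\rm g\,cm^{-3}}$ of section~\ref{sec:disc}):
\begin{verbatim}
Msun, AU  = 1.989e33, 1.496e13                # cgs
rho_Roche = Msun / (100 * AU)**3              # ~1e-12 g/cm^3 at R = 100 AU
rho_p_bg  = 0.01 * 1e-13                      # mean dust density for dust-to-gas = 1/100
print(f"rho_Roche ~ {rho_Roche:.0e} g/cm^3; required concentration ~ "
      f"{rho_Roche / rho_p_bg:.0f} x the mean dust density")
\end{verbatim}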
In Figure~\ref{fig:cluster_time}, we show that a clump of $\tau_s = 1$ particles with over-density
$\Sigma_{\rm p}/\langle\Sigma_{\rm p}\rangle \gtrsim 10^3$ can survive for $\simeq 6 \,{\rm
orbits}$, or $6,000\,{\rm yr}$ at $R=100\,\rm{AU}$, before getting disrupted in a gravito-turbulent
disk with $\Omega\tcool = 40$. This is significantly longer than the dynamical time needed for
gravitational collapse.
Even if the dust density does not cross the Roche density, the fact that $\rho_{\rm p}\sim \rho_{g}$ in our
gravito-turbulent disks could still trigger other instabilities, such as the streaming
instability \citep{goodmanpindor00,youdingoodman05}, for those intermediate-size particles. As a result, the dust density would increase
further and gravitational collapse would still occur eventually.
The resulting planetesimals would then diffuse out of the gravito-turbulent region, as discussed in
section~\ref{sec:diffusion}, and avoid being destroyed (if $s\lesssim 100\,\rm{km}$) by collisions
with planetesimals of the same size.
\subsubsection{Comparison with MRI-driven turbulent disks\label{sec:mri_disk}}
In general, we find relatively stronger radial diffusion and eccentricity growth in our
gravito-turbulent disks than in MRI-driven turbulent disks of similar $\alpha$.
According to \citet{yangetal2009,yangetal2012}, the standard deviation of radial drift can be
described as
\beq
\sigma(\Delta x) = C_x\,\xi \,H\left( \frac{t\Omega}{2\pi} \right)^{1/2}\,,
\label{eq:dx_yang}
\enq
where $C_x$ is a dimensionless coefficient and $\xi \equiv 4\pi G\rho_0(2\pi/\Omega)^2=4(2\pi)^{3/2} / Q$
assuming $\rho_0= \Sigma_0/2H$. Comparing it with Equation~(\ref{eq:def_dp}), we can convert our
diffusion coefficient to
\beq
C_x \simeq 0.056\, Q \left(\frac{\Dp}{\cs^2/\Omega}\right)^{1/2}\,.
\label{eq:dp2cx}
\enq
Taking our standard tc=10 run ($\alpha\simeq 0.02$) as an example, with
$\Dp\geq 0.05\cs^2/\Omega$ for large particles with $\tau_s > 1$, we
obtain a dimensionless $C_x \geq 0.038$, more than one order of magnitude larger than the $C_x$
measured in MRI disks (cf. Table~1 in \citet{NG2010} and Table~2 in \citet{yangetal2012}).
The excitation of eccentricity can also be described as \citep{yangetal2009,yangetal2012}
\beq
\sigma(\Delta e) = C_e\,\xi \left(\frac{H}{R}\right)\left(\frac{t\Omega}{2\pi}\right)^{1/2}\,,
\label{eq:de_yang}
\enq
where $C_e$ is a dimensionless coefficient that characterizes the growth rate. Comparing it with
Equation~\ref{eq:ecc}, we have
\beq
C_e \simeq 0.040\, Q C \,.
\label{eq:c2ce}
\enq
Recalling $C \simeq 0.15$ for $\tau_s=10^2$ when $\Omega\tcool=10$ from section~\ref{sec:ecc}, we find
a typical $C_e \simeq 0.018$, much greater than the $10^{-3}$--$10^{-4}$ found in \citet{yangetal2012}
using local shearing boxes, and one order of magnitude greater than that reported in \citet{NG2010} with
global simulations.
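The conversions in Equations~\ref{eq:dp2cx} and \ref{eq:c2ce} are simple enough to script; the sketch below reproduces, to within rounding, the numbers quoted above for the tc=10 run ($Q$ from Table~\ref{tab:tab1}, $\Dp$ from Table~\ref{tab:tab2}, and $C$ from section~\ref{sec:ecc}).
\begin{verbatim}
import numpy as np

Q   = 3.19    # Toomre Q of the tc=10 run (Table 1)
D_p = 0.05    # radial diffusion coefficient in units of c_s^2/Omega (large particles)
C   = 0.15    # eccentricity growth coefficient for tau_s = 100 (section 4)

C_x = 0.056 * Q * np.sqrt(D_p)
C_e = 0.040 * Q * C
print(f"C_x ~ {C_x:.3f}, C_e ~ {C_e:.3f}")  # roughly an order of magnitude above MRI values
\end{verbatim}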
The increased eccentricities and diffusion are a result of the stronger gravitational forcing in
gravito-turbulent disks than in MRI disks.
Quantitatively, the parameter $\gamma$ (closely related to $\delta\Sigma/\Sigma$) reflects the
strength of this stirring. As discussed in
section~\ref{sec:tcool} and also shown in Table~\ref{tab:tab1}, this dimensionless parameter
$\gamma$ is at least one order of magnitude larger than that found in MRI disks \citep[e.g.,
][]{yangetal2012}.
As
discussed in section~\ref{sec:selfg}, the stronger forcing also pushes more types of particles
($\tau_s >1$) into the gravity-dominated regime, while
in MRI disks, as shown in Figures 15 and 23 of \citet{NG2010}, this occurs only for particles with
$\tau_s > 100$.
\section{SUMMARY AND CONCLUSION\label{sec:conclusion}}
We have studied the dynamics of dust in gravito-turbulent disks, i.e., gaseous
disks whose turbulence is driven by self-gravity, using 2D
hybrid (particle and gas) simulations in a local shearing sheet approximation. For dust particles, we
included the aerodynamic drag and gravitational pull from self-gravitating gas, and neglected
particle self-gravity and feedback. We obtained the density distribution, radial diffusion
coefficient and relative velocities for dust particles with stopping times distributed
from $10^{-3}\Omega^{-1}$ to
$10^{3}\Omega^{-1}$.
We summarize our main results as follows:
\begin{enumerate}
\item{Particles with small stopping times ($\tau_s < 0.1$) are aerodynamically well-coupled
to gas and therefore trace the gas distribution. Diffusion coefficients for
small particles are
close to those for gas. The gas diffusion
coefficient $\Dg$ is related to the angular momentum transport parameter $\alpha$
via a
roughly constant Schmidt number ${\rm Sc} = \alpha \cs^2\Omega^{-1}/\Dg = 1.4{\rm -}2.4$. Small particles also have low
eccentricities ($e \sim 0.1 (H/R) (\alpha/0.01)^{1/2}$)
and low relative velocities ($\lesssim 0.01{\rm -} 0.1\,(\alpha/0.01)^{1/2}\cs$). }
\item{Particles with larger stopping times ($\tau_s \gtrsim 1$) are forced more strongly by the gravity of the
self-gravitating gas than by aerodynamic drag. The stronger forcing results in diffusion
coefficients that are $\gtrsim 5$ times greater than those of smaller particles.
Turbulent diffusion in gravito-turbulent disks therefore plays an important role in radial transport of large bodies.
Strong stochastic gravitational stirring generates large eccentricities ($e \gtrsim 0.5{\rm -}1 (H/R) (\alpha/0.01)^{1/2}$)
and large relative velocity ($\gtrsim (\alpha/0.01)^{1/2} \cs$). These disfavor planetesimal
formation by pairwise collision as the typical collisional speed exceeds both the escape
and fragmentation velocities.}
\item{Particles of intermediate size ($\tau_s= 0.1 {\rm -}1$) are marginally coupled to gas, and
are collected by both gas drag and gravitational torques (see equation~\ref{eq:ratio})
into filament-like structures having large
overdensities ($\gtrsim 10{\rm-} 10^3$ times the background) and low relative velocities
($\lesssim 0.01 \cs$) between like-sized particles. The density concentration is high enough to
trigger direct gravitational collapse. Nascent planetesimals can avoid collisional disruption by
diffusing to less turbulent regions of the disk.}
\item{Longer cooling times result in weaker turbulence ($\alpha\propto (\Omega\tcool)^{-1}$),
weaker particle diffusion ($\propto\alpha$), lower relative velocities and
eccentricities
($\propto \sqrt{\alpha}$), and stronger clustering for
intermediate-size ($\tau_s=0.1{\rm -}1$) particles. }
\item{Compared to MRI-turbulent disks, gravito-turbulent disks show almost one order-of-magnitude
stronger density fluctuations for similar $\alpha$ ($\delta\Sigma/\Sigma \simeq 5
\sqrt{\alpha}$ versus
$\sqrt{0.5\alpha}$). Stronger gravitational stirring leads to
higher eccentricities and relative velocities between particles.}
\end{enumerate}
{In a recent paper that parallels ours, \citet{BoothClarke2016} study the relative velocities of dust particles
in global smoothed-particle hydrodynamics (SPH) simulations of
gravito-turbulent protoplanetary disks. Like us, they find large relative velocities for particles with $\tau_s \gtrsim 3$ (compare our Figure~\ref{fig:vrel} with their
Figures~6 and 7).}
Future investigations could
improve upon our work in any number
of ways.
As we use massless test particles in 2D hydrodynamic local
simulations with a simplistic cooling prescription, the effects of
vertical sedimentation, particle feedback, particle self-gravity {\citep{gibbonsetal2014}}, and self-consistent heating and
cooling {on dust dynamics could be further explored}. Local and eventually global 3D simulations which account for these
physical effects would be clear next steps.
\section*{Acknowledgements}
We thank Xuening Bai for fixing a bug related to the orbital advection scheme for particles.
We also thank the anonymous referee for stimulating comments which led to improvement of this
paper.
This work was supported in part by the
National Science Foundation under grant PHY-1144374, ``A Max-Planck/Princeton Research Center for
Plasma Physics'' and grant PHY-0821899, ``Center for Magnetic Self-Organization''. ZZ acknowledges support by NASA through Hubble
Fellowship grant HST-HF-51333.01-A awarded by the Space Telescope
Science Institute, which is operated by the Association of
Universities for Research in Astronomy, Inc., for NASA, under
contract NAS 5-26555.
Financial support for EC was provided
by the NSF and NASA Origins.
Resources supporting this work were provided by the Princeton Institute for Computational Science and Engineering (PICSciE) and by Stampede at the Texas Advanced Computing Center (TACC), the University of Texas at
Austin, through XSEDE grant TG-AST130002.
\clearpage
{\renewcommand{\arraystretch}{1.4}
\begin{deluxetable}{ccccccccc}
\tabletypesize{\footnotesize}
\tablecolumns{9} \tablewidth{0pc}
\tablecaption{Gas Properties in Gravito-Turbulent Disks \label{tab:tab1}}
\setlength{\tabcolsep}{0.1in}
\tablehead{\colhead{Name} &
\colhead{$\Omega\tcool$} &
\colhead{$Q$} &
\colhead{$\delta\Sigma_g/\Sigma_g$}\tablenotemark{(a)} &
\colhead{$\gamma$}\tablenotemark{(b)} &
\colhead{$\delta\ux/\cs$}\tablenotemark{(c)} &
\colhead{$\atot$}\tablenotemark{(d)} &
\colhead{$D_{g,x}\,(\cs^2\Omega^{-1})$\tablenotemark{(e)} } &
\colhead{${\rm Sc}$\tablenotemark{(f)} }}
\startdata
tc=5
& $5$ & $3.35$ & $0.82$ & $0.013$ & $0.38\, (0.91)$ & $0.035$ & $0.022$ & $1.6\, (1.7)$ \\
tc=10
& $10$ & $3.19$ & $0.63$ & $9.4\times 10^{-3}$ & $0.31\, (0.61)$ & $0.020$ & $0.014$ & $1.4\,
(1.7)$\\
tc=10.dble \tablenotemark{(g)}
& $10$ & $3.15$ & $0.61$ & $9.0\times 10^{-3}$ & $0.34\, (0.64)$ & $0.022$ & $0.016$ & $1.4\,
(1.5)$\\
tc=10.hires \tablenotemark{(h)}
& $10$ & $3.09$ & $0.59$ & $9.5\times 10^{-3}$ & $0.33\, (0.59)$ & $0.024$ & $0.015$ & $1.6\, (1.7)$\\
tc=20
& $20$ & $3.16$ & $0.52$ & $8.3\times 10^{-3}$ & $0.26\, (0.38)$ & $0.011$ & $0.0061$ & $1.8\,
(2.0)$\\
tc=40
& $40$ & $2.99$ & $0.32$ & $5.2\times 10^{-3}$ & $0.20\, (0.23)$ & $0.006$ & $0.0025$ & $2.4\,
(2.9)$\\
\enddata
\tablenotetext{(a)}{Time- and spatially averaged density dispersion, i.e., $\langle\!\langle\delta\Sigma_g^2\rangle\!\rangle_{t}^{1/2}/\Sigma_g$.}
\tablenotetext{(b)}{Dimensionless parameter used in \citet{IGM2008} which characterizes the amplitude
of the random gravity field.}
\tablenotetext{(c)}{Density-weighted velocity dispersion, i.e., $\langle\!\langle
\ux^2\rangle\!\rangle_{t}^{1/2}/\cs$, where $\cs$ is the density-weighted sound speed with
$\Gamma=2$. The dispersions calculated without density weighting are given in parentheses.}
\tablenotetext{(d)}{$\atot =\ar + \ag$, the total internal stress normalized with averaged
pressure.}
\tablenotetext{(e)}{Gas diffusion coefficient calculated using the autocorrelation function of Equation~\ref{eq:def_dg}.}
\tablenotetext{(f)}{Schmidt number ${\rm Sc}=\alpha \cs^2\Omega^{-1}/\Dg$. The values in parentheses are calculated
assuming $\Dg\simeq \Dp$ for $\tau_s=10^{-3}$ particles (see Table~\ref{tab:tab2}). }
\tablenotetext{(g)}{Same as run tc=10 but with a doubled domain size at fixed grid resolution,
which quadruples the numbers of grid cells and particles.}
\tablenotetext{(h)}{Same as run tc=10 but with doubled grid resolution in $x$ and $y$ and a quadrupled
number of particles per species.}
\end{deluxetable}
}
{\renewcommand{\arraystretch}{1.4}
\begin{deluxetable}{cccccccc}
\tabletypesize{\footnotesize}
\tablecolumns{8} \tablewidth{0pc}
\tablecaption{ $\Dp /(\cs^2\Omega^{-1})$: Dust Radial Diffusion coefficient\label{tab:tab2}}
\setlength{\tabcolsep}{0.1in}
\tablehead{\colhead{$$} & \colhead{$\tau_s = 10^{-3}$} & \colhead{$10^{-2}$} & \colhead{$0.1$} &
\colhead{$1$} & \colhead{$10$} & \colhead{$10^2$} & \colhead{$10^3$} }
\startdata
tc=5
& $0.021$ & $0.021$ & $0.028$ & $0.14$ & $0.15$ & $0.089$ & $0.092$ \\
tc=10
& $0.012$ & $0.011$ & $0.011$ & $0.049$ & $0.091$ & $0.058$ & $0.050$ \\
tc=10.wosg \tablenotemark{(a)}
& $0.011$ & $0.011$ & $0.011$ & $0.016$ & $0.003$ & $7\times 10^{-5}$ & $9\times 10^{-7}$\\
tc=10.dble
& $0.015$ & $0.015$ & $0.015$ & $0.050$ & $0.085$ & $0.056$ & $0.051$\\
tc=10.hires
& $0.014$ & $0.014$ & $0.016$ & $0.058$ & $0.096$ & $0.063$ & $0.057$\\
tc=20
& $0.0055$ & $0.0052$ & $0.0049$ & $0.010$ & $0.042$ & $0.034$ & $0.0033$\\
tc=40
& $0.0021$ & $0.0020$ & $0.0021$ & $0.0067$ & $0.019$ & $0.017$ & $0.018$\\
\enddata
\tablenotetext{(a)}{Similar to run tc=10 but the gravitational forcing from the gas is artificially
removed.}
\end{deluxetable}
}
\clearpage
\bibliographystyle{mnras}
\section{Back-of-envelope estimation of error on the response function}
\label{sec:error_estimate}
We expect to measure the bispectrum of the 21-cm signal during reionization with observations by the future radio telescope, the SKA. The response function ($\hat{f}_\mathrm{21,21}$) of the small-scale 21-cm power spectrum to the large-scale fluctuations will provide an estimate of the squeezed-limit bispectrum. However, the measured response function will be affected by instrumental limitations. In this section, we provide a back-of-envelope estimate of the error on the $\hat{f}_\mathrm{21,21}(k)$ constructed from radio observations, due to both thermal noise and sample variance.
Based on ref.~\cite{chiangthesis:2015}, we write the error due to sample variance on the estimated integrated bispectrum $\hat{iB}_\mathrm{sv}(k)$ (see eq.~2.15) as
\begin{eqnarray}
\Delta \hat{iB}_\mathrm{sv}(k) \approx \sqrt{\frac{V_L}{V_r N_{kL}}}\Bar{\sigma}_{L}\Bar{P}_L(k) \ ,
\end{eqnarray}
\noindent where $V_L$, $V_r$ and $N_{kL}$ are the volume of a subvolume, the volume of the entire box and the number of Fourier modes in each $k$-bin used when calculating the power spectra of the subvolumes, respectively. The quantities $\Bar{\sigma}^2_{L}$ and $\Bar{P}_L(k)$ are the variance and power spectrum averaged over all the subvolumes (defined in Section~2.3).
Assuming that the error is dominated by the bispectrum term rather than by the normalization, we can obtain the error on the response function ($\Delta \hat{f}_\mathrm{sv}$) by dividing $\Delta \hat{iB}_\mathrm{sv}(k)$ by $\Bar{P}_L(k)$ and $\Bar{\sigma}^2_{L}$.
For a detailed calculation see appendix~B of ref.~\cite{chiangthesis:2015}.
Assuming the thermal noise of the telescope to be an additive offset to the signal, we carry out a similar calculation and obtain the error on the estimated integrated bispectrum $\hat{iB}_\mathrm{noise}(k)$ as
\begin{eqnarray}
\Delta \hat{iB}_\mathrm{noise}(k) \approx \sqrt{\frac{V_L}{V_r}}\frac{\sigma_\mathrm{noise}}{\Bar{\sigma}^2_{L}}\frac{P_\mathrm{noise}(k)}{P_L(k)} \ ,
\end{eqnarray}
where $\sigma_\mathrm{noise}$ and $P_\mathrm{noise}$ are the rms noise of the image constructed from the radio observations and the power spectrum of the thermal noise, respectively.
\begin{table*}
\centering
\caption{The telescope configuration considered in this work. The parameters for SKA-Low are taken from the latest configuration document\footnote{\url{http://astronomers.skatelescope.org/documents/}}.}
\label{tab:tele_param}
\begin{tabular}{lc}
\hline
Parameters & Values \\
\hline
Observation time ($t_\mathrm{int}$) & 1000 h \\
System temperature ($T_\mathrm{sys}$) & $60\left(\frac{300~\mathrm{MHz}}{\nu}\right)^{2.55}$~K \\
Effective collecting area ($A_\mathrm{eff}$) & $962~\mathrm{m}^2$ \\
Core area ($A_\mathrm{core}$) & $785000~\mathrm{m}^2$ \\
Bandwidth ($\mathcal{B}$) & 10 MHz \\
Number of stations ($N_\mathrm{stat}$) & 224
\\ \hline
\end{tabular}
\end{table*}
We consider a simple estimate of the noise power spectrum $P_\mathrm{noise}(k)$ based on ref.~\cite{2013ExA....36..235M}, which is given as:
\begin{eqnarray}
P_\mathrm{noise}(k) = 4\pi k^{-3/2}(D^2_\mathrm{c}\Delta D_\mathrm{c} \Omega_\mathrm{FOV})^{1/2}\frac{T^2_\mathrm{sys}}{\mathcal{B}~t_\mathrm{int}}\frac{A_\mathrm{core}}{N^2_\mathrm{stat}A_\mathrm{eff}}\ ,
\end{eqnarray}
\noindent where $D_\mathrm{c}$ and $\Delta D_\mathrm{c}$ are the comoving distance to the observed redshift $z$ and the comoving distance corresponding to the bandwidth $\mathcal{B}$, respectively. The solid angle subtended by the field of view is $\Omega_\mathrm{FOV} = \frac{\lambda^2_\mathrm{21}}{A_\mathrm{eff}}$, where $\lambda_\mathrm{21}$ and $A_\mathrm{eff}$ are the observed wavelength of the 21-cm signal and the effective collecting area of an antenna station, respectively. $A_\mathrm{core}$, $T_\mathrm{sys}$ and $t_\mathrm{int}$ represent the core area of the telescope, the system temperature of the telescope and the total observation time, respectively.
We consider the telescope parameters for the low frequency band of the SKA (SKA-Low), which are shown in Table~\ref{tab:tele_param}.
The $\sigma_\mathrm{noise}$ for a fully covered $uv$ space can be given as \cite[e.g.][]{2007MNRAS.382..809D,giri2018optimal},
\begin{eqnarray}
\sigma_\mathrm{noise} = \frac{2k_BT_\mathrm{sys}}{A_\mathrm{eff}\sqrt{\mathcal{B}~t_\mathrm{int}~N_\mathrm{stat}(N_\mathrm{stat}-1)}} \ ,
\end{eqnarray}
where $k_B$ is the Boltzmann constant.
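To give a feel for the numbers entering these expressions, the short sketch below evaluates $T_\mathrm{sys}$, the field of view and $\sigma_\mathrm{noise}$ at $z=8$ for the SKA-Low parameters of Table~\ref{tab:tele_param}. It is only a back-of-envelope evaluation of the formulas above in SI units; converting the flux-density rms into a brightness-temperature rms would additionally require the synthesized beam, which we do not model here.
\begin{verbatim}
import numpy as np

k_B  = 1.381e-23            # J/K
nu21 = 1420.4e6             # rest-frame 21-cm frequency [Hz]
z    = 8.0
nu   = nu21 / (1 + z)       # observed frequency [Hz]

# SKA-Low parameters from the table above
t_int  = 1000 * 3600.0      # s
B      = 10e6               # Hz
A_eff  = 962.0              # m^2
N_stat = 224

T_sys     = 60.0 * (300e6 / nu)**2.55      # K
lam       = 3.0e8 / nu                     # observed wavelength [m]
Omega_FOV = lam**2 / A_eff                 # sr

sigma_noise = 2 * k_B * T_sys / (A_eff * np.sqrt(B * t_int * N_stat * (N_stat - 1)))
print(f"T_sys ~ {T_sys:.0f} K, Omega_FOV ~ {Omega_FOV:.1e} sr, "
      f"sigma_noise ~ {sigma_noise / 1e-32:.2f} microJy")
\end{verbatim}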
\begin{figure}[t]
\centering
\includegraphics[width=0.8\textwidth]{images/21-21-21-response_withError.png}
\caption{The error on the response function due to sample variance and thermal noise in the radio observations of 1000 h. The curve gives the response of the 21-cm power spectrum to the large scale 21-cm fluctuation when the Universe is 50\% ionized. The shaded region gives the error estimate.}
\label{fig:error_on_response}
\end{figure}
In Figure~\ref{fig:error_on_response}, we show the error estimated for the response function when the Universe is 50\% ionized. While sample variance dominates the error at large scales, the thermal noise dominates at small scales. The response function constructed from the observations should therefore be reliable enough to distinguish between source models for 0.1 Mpc$^{-1} < k <$ 1 Mpc$^{-1}$. A similar range of $k$ is also used for the 21-cm power spectrum \cite{2015MNRAS.449.4246G}. We note that our error calculation is simplistic; we will investigate it further in future work.
\bibliographystyle{JHEP}
\section{Introduction}
\label{sec:intro}
The spin-flip transition of the ground state of neutral hydrogen corresponds to a rest frame wavelength of 21.1 cm, and is commonly known as the 21-cm signal. This signal has the potential to map the epoch of reionization (EoR) -- the last major phase change of the Universe when the intergalactic hydrogen transitioned from a cold and neutral state to a hot and ionized state \citep{1997ApJ...475..429M}. Recent indirect observations of the EoR suggest that reionization has completed at around a redshift of 6 \citep[e.g.][]{2015ApJ...802L..19R, 2015MNRAS.454L..76M, 2016A&A...594A..13P}; thus, we expect to find a 21-cm signal from the EoR at radio wavelengths longward of 1.47~m. Future 21-cm observations at different wavelengths will be able to follow the progress of the reionization process \cite[e.g.][]{2012RPPh...75h6901P}.
One of the challenging aspects of studying the EoR with the 21-cm signal is the optimal extraction of information. A new generation of radio telescopes, such as the Giant Metrewave Radio Telescope\footnote{\url{http://www.gmrt.tifr.res.in/}} \citep[GMRT; e.g.][]{2011MNRAS.413.1174P}, the Low Frequency Array\footnote{\url{http://www.lofar.org/}} \citep[LOFAR; e.g.][]{2010MNRAS.405.2492H}, the Murchison Widefield Array\footnote{\url{http://www.mwatelescope.org/}} \citep[MWA; e.g.][]{2009IEEEP..97.1497L} and the Precision Array for Probing the Epoch of Reionization\footnote{\url{http://eor.berkeley.edu/}} \citep[PAPER; e.g.][]{2010AJ....139.1468P} have been trying to detect the EoR statistically. However, they are only sensitive enough to measure the two point statistics (or power spectrum). Since the 21-cm signal from reionization is highly non-Gaussian \citep[e.g.][]{2014MNRAS.443.3090W,2016MNRAS.458.3003S}, the power spectrum will not provide a complete statistical description of this process.
The Square Kilometre Array\footnote{\url{http://www.skatelescope.org/}} \citep[SKA;][]{2013ExA....36..235M} will be able to deliver images of the 21-cm background during the EoR. By combining images at different frequencies it will provide three-dimensional -- so-called tomographic -- data sets which will show the temporal evolution along the frequency direction. These observations will be sensitive to non-Gaussian information and thus will allow the first measurement of the 21-cm bispectrum -- the Fourier transform of the three-point correlation function. The bispectrum is non-zero for triangular configurations of three wave vectors in Fourier space, i.e., ${\mathbf k}_1+{\mathbf k}_2+{\mathbf k}_3=0$. Since there are many different triangular configurations, it is customary to characterize the bispectrum in various limits for simplicity, such as equilateral ($k_1=k_2=k_3$), isosceles ($k_1>k_2=k_3$), and squeezed ($k_1\approx k_2\gg k_3$) triangles, etc.
There are at least two hurdles in studying the 21-cm bispectrum. One is a computational challenge: it is computationally expensive to calculate all possible configurations of the bispectrum and their covariance matrix from the (simulated) observations \citep{2015MNRAS.451..467S, 2016MNRAS.458.3003S,2018MNRAS.476.4007M}, though there have been attempts to develop faster algorithms using the Fast Fourier Transform \citep[FFT;][]{2017MNRAS.472.2436W}. The other is physical interpretation: it is not easy to understand the information content of the 21-cm bispectrum, which is complicated further by many possible choices for triangles.
Refs. \citep{2016MNRAS.458.3003S,2018MNRAS.476.4007M} have taken appreciable steps forward in this regard.
In the context of lower-redshift large-scale structure surveys, Refs. \cite{2014JCAP...05..048C,chiangthesis:2015} introduced a computationally inexpensive method to probe the bispectrum in the squeezed limit, in which the wave vectors form isosceles triangles with the unequal side much smaller than the other two ($k_3 \ll k_1\approx k_2$). The method relies on the estimation of the local (or position dependent) power spectra of the signal: one divides the survey volume into subvolumes, and correlates power spectra and mean densities of the tracers (such as the galaxies and the 21-cm signal) locally measured within the subvolumes. This method was successfully applied to the Baryon Oscillation Spectroscopic Survey\footnote{\url{http://www.sdss3.org/surveys/boss.php}} (BOSS) to measure non-Gaussianity due to gravitational evolution and non-linear galaxy bias \cite{2015JCAP...09..028C}.
In this paper we apply the position-dependent power spectrum approach to brightness fluctuations in the 21-cm background from reionization.
This method is useful not only for its computational advantage, but also for physical interpretation. It describes how the small-scale power spectrum of tracers {\it responds} to the large-scale environment, i.e., long-wavelength fluctuations. It therefore naturally and intuitively captures mode-coupling of large- and small-scale fluctuations, which sheds light on how reionization proceeds.
The rest of this paper is organized as follows. In Section~\ref{sec:formalism} we summarize the position-dependent power spectrum approach of Refs.~\cite{2014JCAP...05..048C,chiangthesis:2015}. As we do not have real 21-cm observations yet, we rely on simulated observations, which are described in Section~\ref{sec:21-cm_sig}. We show our main results in Section~\ref{sec:mainresults}, and show how the position-dependent power spectrum approach can distinguish between ``inside-out" and ``outside-in" reionization scenarios in Section~\ref{sec:source_models}. We conclude in Section~\ref{sec:discuss}.
We explain our analytical model for the response function on large scales in Appendix~\ref{sec:toy_model}.
\section{Formalism}
\label{sec:formalism}
\subsection{Redshifted 21-cm signal}
\label{sec:bright_temp}
The intensity of the 21-cm background can be expressed as a differential brightness temperature with respect to the Cosmic Microwave Background (CMB) \citep[e.g.,][]{2012RPPh...75h6901P,2013ExA....36..235M},
\begin{eqnarray}
T_{21} (\mathbf{r}, z) \approx 27~\mathrm{mK}~ x_\mathrm{HI}(\mathbf{r}) (1 + \delta_\mathrm{d}(\mathbf{r}))\left( \frac{1+z}{10} \right)^\frac{1}{2}
\left( 1 -\frac{T_\mathrm{CMB}(z)}{T_\mathrm{s}(\mathbf{r})} \right)\nonumber\\
\times
\left(\frac{\Omega_\mathrm{b}}{0.044}\frac{h}{0.7}\right)
\left(\frac{\Omega_\mathrm{m}}{0.27} \right)^{-\frac{1}{2}}
\left(\frac{1-Y_\mathrm{p}}{1-0.248}\right)\,.
\label{eq:dTb}
\end{eqnarray}
In the above equation there are three position-dependent quantities: $x_\mathrm{HI}$ is the neutral fraction of hydrogen, $\delta_\mathrm{d}$ is the matter density contrast defined in the usual way, and $T_\mathrm{s}$ is the excitation temperature of the two spin states of the hydrogen ground state, known as the spin temperature. The global quantities are $T_\mathrm{CMB}(z)$, the CMB temperature at redshift $z$, $\Omega_\mathrm{b}$ and $\Omega_\mathrm{m}$, the present-day density parameters of baryons and total matter, and the primordial helium mass fraction, $Y_\mathrm{p}$.
There will be no 21-cm signal when $T_\mathrm{CMB} =T_\mathrm{s}$. It is expected that in the early phases of reionization the spin temperature will decouple from $T_\mathrm{CMB}$ and approach the gas temperature due to the Wouthuysen-Field effect \citep{field1958excitation,wouthuysen1952excitation,1997ApJ...475..429M}. Previous studies have shown that the inter-galactic medium (IGM) will likely be heated by the first X-ray sources before substantial reionization starts \citep{pritchard200721}.
Therefore we will adopt the common assumption that the spin temperature is much greater than the CMB temperature, $\left(1-\frac{T_\mathrm{CMB}}{T_\mathrm{s}}\right) \rightarrow 1.$\footnote{It is expected that this assumption breaks down at higher redshifts, but exactly when that occurs is highly uncertain.} Then, spatial variations in the 21-cm signal depend only on the neutral fraction and density and eq.~(\ref{eq:dTb}) can be rewritten as
\begin{equation}
\label{eq:T21}
T_{21}(\mathbf{r}, z)=\hat{T}_\mathrm{21} x_\mathrm{HI}(\mathbf{r}) (1 + \delta_\mathrm{d}(\mathbf{r}))\,,
\end{equation}
where $\hat{T}_\mathrm{21}$ contains all the global cosmological terms in eq.~(\ref{eq:dTb}).
Observations will provide three-dimensional (tomographic) data sets for $T_{21}$ consisting of images at different frequencies. Along the frequency direction these data sets will be affected by various line-of-sight effects such as the light-cone effect \citep{2012MNRAS.424.1877D,giri2017bubble} and redshift space distortions \citep{2013MNRAS.435..460J,2016MNRAS.456...66J}. We will ignore these effects in this initial study.
\subsection{Position-dependent power spectrum as a probe of the squeezed-limit bispectrum}
In this section, we formulate the position-dependent power spectrum and its connection to the squeezed-limit bispectrum following Refs.~\cite{2014JCAP...05..048C,chiangthesis:2015}. Position-dependent power spectrum is an intuitive statistic that captures mode-coupling between large- and small-scale fluctuations. Instead of measuring the power spectrum from the entire survey volume, we divide it into many subvolumes, in which we compute the local power spectra and local mean density fields. The correlation between these two quantities measures coupling between the long-wavelength fluctuation (the local mean density field) and the short-wavelength fluctuation (the local power spectrum).
The correlation between the local matter density power spectra and the local mean matter density field is easy to understand. As structure formation proceeds faster in overdense regions, the local power spectrum is larger than the average. Therefore there is a positive correlation between the local matter density power spectra and the mean matter density field. We can understand this correlation by treating each subvolume as a ``separate universe'' \cite{1980lssu.book.....P}; namely, an overdense region behaves as if it were a separate Friedmann-Lema\^itre-Robertson-Walker universe with positive spatial curvature, even if the geometry of the background universe is flat. This picture allows us to calculate theoretically the correlation between the local matter density power spectra and the mean matter density field using perturbation theory \cite{2014JCAP...05..048C} in a mildly non-linear regime and so-called separate universe simulations in a deeply non-linear regime \cite{2015MNRAS.448L..11W}.
While the position-dependent power spectrum measures only a fraction of information contained in the full bispectrum, the fact that we can understand it physically and intuitively is a unique advantage of this approach, as the bispectrum is often too complicated to understand physically.
We do not have to use the same tracer for the local power spectra and the local mean density field. For example, one can correlate the local Lyman-$\alpha$ forest power spectra with the large-scale fluctuation fields of quasars \cite{2017JCAP...06..022C} and weak lensing \cite{2016PhRvD..94j3506D,2018JCAP...01..012C}. Such correlations yield information on non-gravitational effects such as radiative transfer. In this paper we consider the correlation between the local 21-cm power spectra and the mean matter density field or the mean 21-cm field.
Let $\delta(\mathbf{r})$ be an arbitrary zero-mean field. (For example, below we will consider the matter density contrast, $\delta_\mathrm{d}$, as well as 21-cm brightness temperature fluctuations, $\delta_{21}=T_\mathrm{21}/\bar{T}_\mathrm{21}-1$.) The position-dependent power spectrum of $\delta(\mathbf{r})$ can be calculated by dividing the total volume into subvolumes $V_L$. Formally, the subvolumes are represented by a window function $W_L(\mathbf{r}-\mathbf{r}_L)$ centered on a position $\mathbf{r}_L$. In what follows, we use the coordinate-space top-hat function,
\begin{equation}
W(\mathbf{x}) = \prod_{i=1}^{3} \theta (x_i),~~~\theta (x_i) = \left\{\begin{matrix}
1, \ |x_i| \leq L/2,\\
0, \ |x_i| > L/2,
\end{matrix}\right.
\end{equation}
where $L$ is the length of each side of the subvolume. The position-dependent power spectrum is then given by
\begin{equation}
P (\mathbf{k},\mathbf{r}_L) = \frac{1}{V_L}|\delta(\mathbf{k},\mathbf{r}_L)|^2\,,
\label{eq:PDS_def}
\end{equation}
\noindent where
\begin{eqnarray}
\nonumber
\delta (\mathbf{k},\mathbf{r}_L) &=&\int \mathrm{d}^3r~\delta(\mathbf{r})W_L(\mathbf{r}-\mathbf{r}_L)e^{-i\mathbf{k}.\mathbf{r}}\\
&=& \int \frac{\mathrm{d}^3q}{(2\pi)^3}~\delta(\mathbf{k}-\mathbf{q})W_L(\mathbf{q})~e^{-i\mathbf{q}.\mathbf{r}_L}\,.
\label{local_field}
\end{eqnarray}
We find
\begin{equation}
P(\mathbf{k},\mathbf{r}_L)=\frac{1}{V_L}\int \frac{\mathrm{d}^3q_1}{(2\pi)^3} \int \frac{\mathrm{d}^3q_2}{(2\pi)^3}~\delta(\mathbf{k}-\mathbf{q}_1)\delta(-\mathbf{k}-\mathbf{q}_2)W_L(\mathbf{q}_1)W_L(\mathbf{q}_2)~e^{-i\mathbf{r}_L.(\mathbf{q}_1+\mathbf{q}_2)}.
\end{equation}
Defining $\bar{\delta}(\mathbf{r}_L)$ to be the mean of $\delta(\mathbf{r})$ in a subvolume $V_L$, the cross-correlation with $P(\mathbf{k},\mathbf{r}_L)$ can be written as
\begin{eqnarray}
\label{eq:int_bispec}
\left \langle P(\mathbf{k},\mathbf{r}_L) \bar{\delta}(\mathbf{r}_L) \right \rangle = \frac{1}{V_L^2} \int \frac{\mathrm{d}^3q_1}{(2\pi)^3} \int \frac{\mathrm{d}^3q_2}{(2\pi)^3} \int \frac{\mathrm{d}^3q_3}{(2\pi)^3} \left \langle \delta(\mathbf{k}-\mathbf{q}_1) \delta(-\mathbf{k}-\mathbf{q}_2) \delta(-\mathbf{q}_3) \right \rangle \nonumber\\
W_L(\mathbf{q}_1) W_L(\mathbf{q}_2) W_L(\mathbf{q}_3)~e^{-i\mathbf{r}_L.(\mathbf{q}_1+\mathbf{q}_2+\mathbf{q}_3)} \nonumber\\
= \frac{1}{V_L^2} \int \frac{\mathrm{d}^3q_1}{(2\pi)^3} \int \frac{\mathrm{d}^3q_3}{(2\pi)^3} B(\mathbf{k}-\mathbf{q}_1,-\mathbf{k}+\mathbf{q}_1+\mathbf{q}_3,-\mathbf{q}_3) \nonumber\\
W_L(\mathbf{q}_1) W_L(\mathbf{q}_1+\mathbf{q}_3) W_L(\mathbf{q}_3) \ .
\end{eqnarray}
Here, $B$ is the bispectrum,
\begin{equation}
\left \langle \delta(\mathbf{q}_1) \delta(\mathbf{q}_2) \delta(\mathbf{q}_3) \right \rangle = B(\mathbf{q}_1,\mathbf{q}_2,\mathbf{q}_3) (2\pi)^3 \delta_\mathrm{D}(\mathbf{q}_1+\mathbf{q}_2+\mathbf{q}_3) \ ,
\end{equation}
where $\delta_\mathrm{D}$ is the Dirac delta function. Following Ref.~\cite{2014JCAP...05..048C}, we define the right hand side of eq.~(\ref{eq:int_bispec}) to be the integrated bispectrum, denoted as $iB(\mathbf{k})$. In this study, we will consider the spherically-averaged integrated bispectrum,
\begin{equation}
iB(k) \equiv \left \langle P(k,\mathbf{r}_L) \bar{\delta}(\mathbf{r}_L) \right \rangle,
\label{eq:int_bispec_sphavg}
\end{equation}
where $P(k,\mathbf{r}_L)$ is the spherically averaged local power spectrum of the tracer within a subvolume centered at $\mathbf{r}_L$. Since $W_L(\mathbf{q}_3)$ limits ${\mathbf q}_3$ to the scale defined by the size of subvolumes, $2\pi/L$, the cross correlation, $\left\langle P(k,\mathbf{r}_L) \bar{\delta}(\mathbf{r}_L) \right \rangle$, probes the squeezed-limit bispectrum as long as $q_1$ and $q_2$ correspond to scales much smaller than our sub-volumes, i.e., $q_1 \approx q_2 \gg q_3$. When applied to the 21-cm signal, eq.~(\ref{eq:int_bispec_sphavg}) has the key virtue that it contains more information than the power spectrum, but is simpler to estimate compared to the conventional bispectrum estimators \citep{2016MNRAS.458.3003S,2017MNRAS.472.2436W,2018MNRAS.476.4007M}.
For physical interpretation, it is useful to think of the local power spectrum as Taylor expansion in terms of $\bar\delta$, i.e.,
\begin{equation}
P(k,\mathbf{r}_L) = P(k) + \left.\frac{dP(k,\mathbf{r}_L)}{d\bar{\delta}}\right|_{\bar\delta=0}\bar{\delta}+\cdots\,,
\end{equation}
where $P(k)$ is the global power spectrum. The second term describes how the local power spectrum responds to $\bar\delta$. To extract this information, let us define the so-called {\it response function} \cite{2014JCAP...05..048C,chiangthesis:2015} as
\begin{equation}
\label{eq:factorized_iBk}
f(k) \equiv \frac{iB(k)}{\sigma_L^2 P(k)},
\end{equation}
where $\sigma_L^2$ is the variance of $\bar{\delta}_L$. To first order in $\bar\delta$, the response function reduces to
\begin{eqnarray}
\label{eq:resp_ln_form}
f(k) = \frac{d\mathrm{ln}P(k,\mathbf{r}_L)}{d\bar{\delta}}\Big|_{\bar{\delta}=0}.
\end{eqnarray}
The integrated bispectrum defined by eq.~(\ref{eq:int_bispec_sphavg}) thus measures the response of the local power spectrum to long-wavelength fluctuations.
See Refs.~\cite{2014JCAP...05..048C,2015MNRAS.448L..11W,chiangthesis:2015} for the way to calculate $f(k)$ theoretically using perturbation theory and separate universe simulations, when both $P(k,\mathbf{r}_L)$ and $\bar\delta$ refer to the underlying matter density fields. In linear theory, $f_{\rm d,d}(k)=68/21-(1/3)d\ln [k^3P_{\rm dd}(k)]/d\ln k$, where $P_{\rm{dd}}(k)$ is the linear matter power spectrum.
In this paper, we go beyond the matter density response function and consider the 21-cm signal response function. $P(k)$ in the rest of this paper therefore denotes the 21-cm power spectrum.
In Appendix A, we develop a simple analytical model for the response function of the 21-cm signal from reionization. The chief assumption of the model is that, on large enough scales, the neutral fraction of hydrogen, $x_{\mathrm{HI}}$, is a biased tracer of the underlying density field. Under this assumption, we find that the response functions of the 21-cm power spectrum to large-scale density and 21-cm brightness temperature fluctuations are given respectively by
\begin{equation}
\label{eq:f_21d_final}
{f}_\mathrm{21,d}(k) = \frac{2(b_1+b_2)}{b_1+1} + \frac{d \mathrm{ln} P_\mathrm{dd}(k,\mathbf{r}_L)}{d \bar{\delta}_\mathrm{d}} \Big|_{\bar{\delta}_\mathrm{d}=0} \,,
\end{equation}
\begin{equation}
\label{eq:f_2121_final}
{f}_\mathrm{21,21}(k)= \frac{{f}_\mathrm{21,d}(k)}{\bar{x}_\mathrm{HI}(1+b_1)}\,.
\end{equation}
Here, $b_1$ and $b_2$ are the first and second local bias parameters of the neutral fraction, defined in eq.~(\ref{eq:b_N}), $P_{\rm{dd}}(k)$ is the linear matter power spectrum, and $\bar{x}_{\rm{HI}}$ is the global neutral fraction. We will show that these simple expressions capture remarkably well the main features of the response functions measured from our reionization simulation.
\subsection{Estimators}
The response functions can be estimated by dividing our simulation box into $N_\mathrm{cut}^3$ subvolumes, where $N_\mathrm{cut}$ is the number of cuts in each direction.
The shortest side of the squeezed triangle is given by $q_3 = 2\pi N_\mathrm{cut}/L_\mathrm{B}$, where $L_\mathrm{B}=714$~Mpc is the box size of our simulation described in Section~\ref{sec:21-cm_sig}. We can study different squeezed-limit triangle configurations by changing the value of $N_\mathrm{cut}$.
We estimate the cross correlation of small scale 21-cm power spectra with the large scale density fluctuations using the following estimator
\begin{equation}
\label{eq:bispectrum_estimator}
\hat{iB}_{21,\mathrm{d}}(k) = \frac{1}{N_\mathrm{cut}^3} \sum_{i=1}^{N_\mathrm{cut}^3} P(k,\mathbf{r}_{L,i})~\bar{\delta}_\mathrm{d}(\mathbf{r}_{L,i}) \ .
\end{equation}
We then divide $\hat{iB}_{21,\mathrm{d}}(k)$ by the average power spectrum of all subvolumes, $\bar{P}(k)=\frac{1}{N_\mathrm{cut}^3} \sum_{i=1}^{N_\mathrm{cut}^3} P(k,\mathbf{r}_{L,i})$, and the average variance of all subvolumes, $\bar{\sigma}_L^2=\frac{1}{N_\mathrm{cut}^3} \sum_{i=1}^{N_\mathrm{cut}^3}\bar{\delta}^2_\mathrm{d}(\mathbf{r}_{L,i})$. This quantity is known as the normalized integrated bispectrum and is the estimator for the response function defined in eq.~(\ref{eq:factorized_iBk}),
\begin{equation}
\label{eq:norm_int_bs_21d}
\hat{f}_{21,\mathrm{d}}(k) = \frac{\hat{iB}_{21,\mathrm{d}}}{\bar{P}(k)\bar{\sigma}_L^2}\,.
\end{equation}
The estimator for $\hat{f}_{21,21}(k)$ is found by replacing $\bar{\delta}_\mathrm{d}$ by $\bar{\delta}_{21}$ in eqs.~(\ref{eq:bispectrum_estimator}) and (\ref{eq:norm_int_bs_21d}).
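To make the simplicity of this estimator concrete, the sketch below implements eqs.~(\ref{eq:bispectrum_estimator}) and (\ref{eq:norm_int_bs_21d}) for a periodic field on a uniform grid using only NumPy FFTs. The test field, grid size and binning are placeholders for illustration (the real input would be $\delta_{21}$ on our simulation grid), and with only $N_\mathrm{cut}^3=8$ subvolumes a single realization necessarily yields a noisy estimate.
\begin{verbatim}
import numpy as np

def response_function(delta, box_len, n_cut=2, n_kbins=10):
    """Divide the box into n_cut^3 subvolumes, measure the local P(k) and the local
    mean of `delta` in each, and correlate the two (the f_{21,21} case; for f_{21,d}
    one would take the subvolume means from a second, density, field)."""
    n, sub = delta.shape[0], delta.shape[0] // n_cut
    dx, L_sub = box_len / n, box_len / n_cut

    # |k| on the subvolume grid and spherical k-bins
    k1d = 2 * np.pi * np.fft.fftfreq(sub, d=dx)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    bins = np.linspace(kmag[kmag > 0].min(), kmag.max() / 2, n_kbins + 1)
    which = np.digitize(kmag, bins) - 1

    P_loc, dbar = [], []
    for i, j, k in np.ndindex(n_cut, n_cut, n_cut):
        d_sub = delta[i*sub:(i+1)*sub, j*sub:(j+1)*sub, k*sub:(k+1)*sub]
        dbar.append(d_sub.mean())
        dk = np.fft.fftn(d_sub) * dx**3            # approximate continuum Fourier transform
        pk3d = np.abs(dk)**2 / L_sub**3            # local power spectrum
        P_loc.append([pk3d[which == b].mean() for b in range(n_kbins)])
    P_loc, dbar = np.array(P_loc), np.array(dbar)

    iB    = (P_loc * dbar[:, None]).mean(axis=0)            # integrated-bispectrum estimator
    f_hat = iB / (P_loc.mean(axis=0) * (dbar**2).mean())    # normalized integrated bispectrum
    return 0.5 * (bins[1:] + bins[:-1]), f_hat

# mechanical demonstration on a featureless random test field; over many realizations
# the estimate scatters around zero, since a Gaussian field has no bispectrum
rng = np.random.default_rng(1)
test = rng.normal(size=(64, 64, 64))
k_cen, f_hat = response_function(test - test.mean(), box_len=714.0)
print(np.round(f_hat, 1))
\end{verbatim}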
\section{Simulation of Reionization}
\label{sec:21-cm_sig}
We use a radiative transfer simulation of reionization to measure the 21-cm response functions. The simulation is performed in two steps. The first step is to run an N-body simulation of the matter density field and collapsed structures (halos) in a cosmological volume. This simulation is performed with the code CUBEP$^3$M\footnote{\tt http://wiki.cita.utoronto.ca/mediawiki/index.php/CubePM} \citep{2013MNRAS.436..540H}. We used 6912$^3$ particles and the comoving length of each side of the volume is 714 Mpc. The cosmological parameters used here are $\Omega_m$=0.27, $\Omega_k$=0, $\Omega_b$=0.044, $h=0.7$, $n=0.96$ and $\sigma_8$=0.8, consistent with the \textit{Wilkinson Microwave Anisotropy Probe} (WMAP) \citep{2011ApJS..192...18K,hinshaw2013nine} and \textit{Planck} \citep{2016A&A...596A.108P,2018arXiv180706209P} results.
We postprocess the matter density fields from the N-body simulation with C$^2$-RAY\footnote{\tt https://github.com/garrelt/C2-Ray3Dm}
\citep{2006NewA...11..374M}, a fully numerical 3D radiative transfer code. Our radiative transfer calculations are performed on a uniform grid of size $N_{\rm{rt}}=600^3$. We populate the dark matter haloes from our N-body simulation with ionizing sources. Here we assume that only haloes with masses above $10^9$~M$_\odot$ release ionizing photons into the IGM \citep{2016MNRAS.456.3011D}.
We take the rate of ionizing photon production to be proportional to the masses such that 3 ionizing photons per halo baryon escape every $10^7$~years. These assumptions result in a reionization history which is consistent with the existing observational constraints \citep[][]{2015ApJ...802L..19R, 2015MNRAS.454L..76M, 2016A&A...596A.108P}. C$^2$-RAY employs the short-characteristics ray-tracing method to solve the radiative transfer equation \citep{raga19993d,lim20033d}. The ray-tracing is performed up to a comoving distance of 71~Mpc from each source. This limit is meant to approximate the effect of the presence of optically thick absorbers in the IGM that are unresolved in our N-body simulation. For more details we refer the reader to Refs.~\cite{2006MNRAS.372..679M, 2006MNRAS.369.1625I, 2012MNRAS.424.1877D}.
We will use the radiative transfer simulation described above as our fiducial simulation, labelled as FN-600.
For reference, the volume(mass)-weighted mean ionized fraction is $20\%$, $50\%$, and $80\%$ at $z=8.26(8.59)$, $7.57(7.72)$, and $7.22(7.28)$, respectively. The volume-weighted ionized fraction is lower than the mass-weighted fraction throughout reionization. This indicates that reionization of FN-600 has an ``inside-out'' nature. The Thomson scattering optical depth for this reionization history is $\tau=0.056$, consistent with the \textit{Planck} results \citep[][]{2016A&A...596A.108P,2018arXiv180706209P}. We note that there is evidence from high-$z$ Lyman-$\alpha$ forest observations that reionization may have ended somewhat later than $z=6$ (in contrast to the models considered here, in which reionization ends before $z=7$). This would not affect our main conclusions, however; here we are focused on describing the general features of the 21-cm response functions.
\section{Results}
\label{sec:mainresults}
\subsection{Local mean fields}
\label{sec:PdPS}
Before investigating the position-dependent power spectrum and response functions, we first show the behaviour of the local mean fields, $\bar{\delta}_\mathrm{d}$ and $\bar\delta_{21}$.
Here and below we limit ourselves to the case $N_\mathrm{cut}=2$.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{images/one_stats_subvolumes.png}
\caption{Redshift evolution of the long-wavelength mode, $\bar{\delta}_\mathrm{d}$ (left) and $\bar{\delta}_\mathrm{21}$ (right), in the FN-600 simulation. The comoving density of higher density subvolumes increases with time while the reverse is true for lower density subvolumes. The mean 21-cm signal shows the opposite behaviour. The signal in subvolumes with a lower $\bar{\delta}_\mathrm{21}$ at high $z$ increases with time and vice versa. This is indicative of the `inside-out' nature of the reionization.}
\label{fig:mean_subv}
\end{figure}
Figure~\ref{fig:mean_subv} shows the redshift evolution of $\bar{\delta}_\mathrm{d}$ and $\bar\delta_{21}$ of the 8 subvolumes during reionization in our fiducial numerical simulation FN-600. For reference, the top axis shows the mass-weighted mean ionized fraction ($x_\mathrm{m}$) in the simulation. The left and right panels show $\bar{\delta}_\mathrm{d}$ and $\bar{\delta}_{21}$ for each subvolume, respectively. For ease of comparison, the colours representing each subvolume are matched between the left and right panels. We find that each subvolume evolves quite differently. For the mass density, the result is easy to understand: the denser subvolumes increase their density with time while the less dense ones decrease their density. The curves for different subvolumes never cross.
The values of $\bar{\delta}_{21}$ on the other hand display more complex evolution. The behaviour of $\bar{\delta}_{21}$ is similar to that of the matter density field when the Universe is nearly neutral. However, since the denser regions reionize earlier in the ``inside-out'' scenario of reionization, $\bar{\delta}_{21}$ of the densest region quickly starts decreasing (red curves). By redshift 10, it is no longer the region with the highest $\bar{\delta}_{21}$; by redshift 8.8, it has the {\it lowest} $\bar{\delta}_{21}$. At around the same redshift the subvolume with the lowest density has the highest $\bar{\delta}_{21}$.
The evolutionary curves for different subvolumes cross each other just after $z\approx 9$ ($x_m\approx 0.1$).
\subsection{Local 21-cm power spectrum vs matter density field}
\label{sec:PdPS_densityfield}
We calculate the position-dependent power spectra of the 21-cm signal using eq.~(\ref{eq:PDS_def}). In Figure~\ref{fig:PdPS_NumSim_d}, we show the position-dependent dimensionless power spectrum, $\Delta^2 (k, \mathbf{r}_L) = \frac{k^3\mathrm{P}(k, \mathbf{r}_L)}{2\pi^2}$, constructed from FN-600. We have chosen the subvolumes with $\bar{\delta}_\mathrm{d}$ close to zero and those close to the two extreme ends. The $\Delta^2 (k, \mathbf{r}_L)$ at each epoch is normalised by the average of all the $\Delta^2 (k, \mathbf{r}_L)$ at that epoch. In each panel, we indicate the stage of reionization by the mass-weighted mean ionized fraction of the full simulation box. We show results for $x_\mathrm{m}=0.2$, 0.5 and 0.8 ($z=8.34,7.76$ and $7.305$).
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{images/PdPS_vsd_FN600_3epochs_3lines.png}
\caption{Redshift evolution of the position-dependent power spectra of the 21-cm signal constructed from the FN-600 simulation at three different reionization epochs. For clarity each of the spectra is normalised by the average of all the spectra at that epoch. In each panel, we show the spectra from the subvolume with the highest (red, dotted), mean (green, solid), and lowest (blue, dashed) $\bar{\delta}_\mathrm{d}$ at the corresponding epoch.
The reionization progresses from the left to right panels, which is marked with mean ionization fraction, $x_\mathrm{m}$, of the full simulation box. We see that the correlation with the long-wavelength mode flips as reionization progresses.}
\label{fig:PdPS_NumSim_d}
\end{figure}
The size of the subvolume determines the smallest wavenumber accessible to the local power spectra, $2\pi N_{\rm cut}/L_B=0.0176$~Mpc$^{-1}$. The squeezed-limit bispectrum is recovered when the wavenumbers of the local power spectra are much greater than this value. For our choice of parameters, we find that the squeezed limit is reached reasonably well for $k\gtrsim 0.5$~Mpc$^{-1}$ ($\log_{10}(k)\gtrsim -0.3$). Our description below will focus on this large-$k$ regime, unless stated otherwise.
During the early stages of reionization, the 21-cm signal follows the matter distribution. Therefore, we see a positive correlation between the local 21-cm power and $\bar{\delta}_\mathrm{d}$, similar to what has been previously noted for the matter power spectrum \citep[see figure~1 in Ref.][]{2014JCAP...05..048C}.
However, once a non-negligible fraction of the simulation volume has been ionized, an anti-correlation develops at larger $k$ values. This is easy to understand.
The ionizing sources form \rm H~{\sc ii } regions around them, which grow and overlap with time \citep{2004ApJ...613....1F,2012RPPh...75h6901P}. A characteristic scale of the \rm H~{\sc ii } regions increases as reionization progresses \citep[e.g.][]{2004ApJ...613....1F,furlanetto2006characteristic,2007ApJ...669..663M,giri2017bubble}. This characteristic scale leaves an imprint on the 21-cm power spectra which appears as a ``knee''-shaped feature \citep{zaldarriaga200421,wyithe2004characteristic}. The 21-cm power at scales smaller than this feature is lowered by reionization\footnote{In figure~1 of Ref.~\citep{2008ApJ...680..962L}, we see that $\Delta^2$ changes the slope as the ionization fraction increases. The slope initially increases (up to $x_\mathrm{m}\lesssim 0.15$) but the slope starts decreasing when the knee feature develops. After $x_\mathrm{m}\approx 0.70$ the slope no longer changes but the amplitude of $\Delta^2$ decreases as more hydrogen is ionized. Each subvolume follows a different reionization history (seen in Figure~\ref{fig:mean_subv}) and the subvolumes which start reionizing earlier will be ahead in this slope changing process.}. As reionization progresses inside-out in our FN-600 simulation, the high-density subvolumes are ionized first, which suppresses the 21-cm power spectra earlier. This explains an anti-correlation between the 21-cm power spectrum at large $k$ and $\bar\delta_{\rm d}$. The negative correlation spreads to lower $k$ values as more volume is reionized. By $x_\mathrm{m}=0.8$, the local 21-cm power spectra are anti-correlated with $\bar{\delta}_\mathrm{d}$ at all wavenumbers.
\subsection{Local 21-cm power spectrum vs 21-cm brightness field}
\label{sec:PdPS_21cmfield}
While the dependence of the local 21-cm power spectra on the local mean matter density field is useful for physical interpretation, it may not be immediately observable because we may not have an adequate tracer of the mean matter density field during the EoR. Therefore we investigate the dependence of the local 21-cm power spectra on the local mean 21-cm brightness field, which is readily observable using the same data set.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{images/PdPS_vs21_FN600_5epochs_3lines.png}
\caption{
Same as Figure~\ref{fig:PdPS_NumSim_d} but with respect to $\bar\delta_{21}$ instead of $\bar\delta_{\rm d}$.
}
\label{fig:PdPS_NumSim_21}
\end{figure}
In Figure~\ref{fig:PdPS_NumSim_21}, we show $\Delta^2 (k, \mathbf{r}_L)$ for three values of $\bar\delta_{21}$. The interpretation is straightforward. At the very start, when $x_\mathrm{m}=0$, the correlation with $\bar\delta_{21}$ is positive, as expected for the matter density power spectrum. As reionization proceeds, we find results that are opposite to those in Figure~\ref{fig:PdPS_NumSim_d}. Namely, the sign of the correlation flips because $\bar\delta_{\rm d}$ and $\bar\delta_{21}$ are anti-correlated for $x_\mathrm{m}\gtrsim 0.2$ (see Figure~\ref{fig:mean_subv}).
The correlation of the local 21-cm power spectra with $\bar{\delta}_{21}$ therefore displays an additional phenomenon during the earliest phases of reionization: it evolves from a positive correlation, through an almost uncorrelated phase, to the anti-correlation seen at $x_\mathrm{m}= 0.2$. After this, the high $k$ power shows a positive correlation due to the size of \rm H~{\sc ii } regions when $x_\mathrm{m} =0.5$. Eventually we find a positive correlation at all wavenumbers when $x_\mathrm{m}=0.8$.
\subsection{21cm response functions}
\label{sec:resp_func}
We now turn our attention to the 21-cm response functions.
Let us first consider the response with respect to the mean matter density field, $\hat{f}_{21,\mathrm{d}}$, which is more straightforward to interpret physically. We will then consider $\hat{f}_{21,21}$, which is the quantity that can be measured from observations.
\subsubsection{The matter-21cm-21cm response function}
Figure~\ref{fig:response_d2121} shows $\hat{f}_{21,\mathrm{d}}$ at different phases of reionization. The left panel shows the early phases ($x_\mathrm{m}=0.0$, 0.1 and 0.2) while the right panel shows the later ones ($x_\mathrm{m}=0.3$, 0.5 and 0.6). By construction, for $x_\mathrm{m}=0.0$ the 21-cm signal follows the density field exactly and thus the response function is identical to that of the matter density field. For our choice of parameters, the squeezed limit is reached for $k\gtrsim 0.5$~Mpc$^{-1}$ ($\log_{10}(k)\gtrsim -0.3$), where the response function agrees with $68/21-(1/3)d\ln [k^3P_{\rm dd}(k)]/d\ln k$, shown by the solid black line.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{images/matter-21-21-response_withfspt.png}
\caption{Response function of the local 21-cm power spectrum to the long-wavelength matter density field at different stages of reionization. We show the early and late stages in the left and right panels, respectively. In the beginning of reionization, the response function resembles the response function of the matter power spectrum. The squeezed limit is achieved for $\log_{10}(k)\gtrsim -0.3$. The black solid line shows the analytical response function of the linear matter power spectrum, $68/21-(1/3)d\ln [k^3P_{\rm dd}(k)]/d\ln k$.
}
\label{fig:response_d2121}
\end{figure}
\begin{figure}[t]
\centering
$\renewcommand{\arraystretch}{-0.75}
\begin{array}{c}
\includegraphics[width=\textwidth]{images/response_vs_z_withmodel_new.png}
\end{array}$
\caption{
Redshift evolution of the response functions at various wavenumbers. The 21cm-21cm-matter and 21cm-21cm-21cm response functions are shown in the left and right panels, respectively. We see the sign change at the epochs predicted by our analytical model (solid black lines).
}
\label{fig:response_vs_z}
\end{figure}
At $x_\mathrm{m}=0.2$, the small-scale ($k\gtrsim 1$~Mpc$^{-1}$) response function decreases because of the size of \rm H~{\sc ii } regions, while it still remains positive. At $x_\mathrm{m}=0.5$ the small-scale response shows a negative value, in agreement with the physical picture we gave in Section~\ref{sec:PdPS_densityfield}. The negative response then spreads to lower $k$ values as reionization proceeds.
The behaviour of the response function on larger scales, $k\lesssim 0.5~{\rm Mpc}^{-1}$, is more complex, but we can reproduce it qualitatively using a simple analytical model we develop in Appendix A. In the left panel of Figure~\ref{fig:response_vs_z}, we compare the redshift evolution of $f_\mathrm{21,d}$ for four different wavenumbers, $k=0.1$, 0.3, 0.5 and 0.7~Mpc$^{-1}$, with the prediction from the analytical model given in eq.~(\ref{eq:f_21d_final}). For $k\lesssim 0.5~{\rm Mpc}^{-1}$, $f_\mathrm{21,d}$ decreases during the early phases of reionization, but then rapidly changes sign at $x_m \approx 0.13$. The timing of this sign-change can be understood using our model. The crossover occurs when the linear bias of the neutral hydrogen with respect to the underlying density field is $b_1 = -1$. At this time, the fluctuations in the hydrogen density are perfectly anti-correlated with the density fluctuations and the 21-cm power spectrum (to linear order) vanishes\footnote{In reality, higher order terms prevent the 21-cm power spectrum from vanishing.}. In all of our models, this epoch of the minimum 21-cm power occurs at $x_m \approx 0.13$.
While the model predicts a positive response function near the end of reionization, the simulation shows a negative correlation. This is because the size of \rm H~{\sc ii } regions becomes larger than the spatial scales of the wavenumbers shown in this figure near the end of reionization, and our analytical model breaks down. However we have already understood the origin of the negative correlation: it is due to suppression of the 21-cm power spectrum by ionization.
\subsubsection{The 21cm-21cm-21cm response function}
While $\hat{f}_{21,\mathrm{d}}$ is simpler to interpret, we cannot construct it from the 21-cm observations alone. In this section we consider $\hat{f}_{21,21}$. Figure~\ref{fig:response_212121} shows $\hat{f}_{21,21}$ at the same global ionized fractions as in Figure~\ref{fig:response_d2121}.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{images/21-21-21-response_withfspt.png}
\caption{
Same as figure~\ref{fig:response_d2121} but for the response function of the local 21-cm power spectrum to the long-wavelength 21-cm brightness field.
}
\label{fig:response_212121}
\end{figure}
The signs of $\hat{f}_{21,\mathrm{d}}$ and $\hat{f}_{21,21}$ are opposite when $\bar\delta_\mathrm{d}$ and $\bar\delta_{21}$ become anti-correlated (Figure~\ref{fig:mean_subv}).
According to our model, $f_\mathrm{21,d}$ and $f_{21,21}$ only differ by a multiplication factor $[\bar{x}_\mathrm{HI}(1+b_1)]^{-1}$, see eq.~(\ref{eq:f_2121_final}). When $b_1 < -1$ the multiplication factor becomes negative and the model predicts that features in $f_\mathrm{21,21}$ will be inverted compared to those seen in $f_\mathrm{21,d}$. This is indeed seen in the simulation results.
Unlike $\hat{f}_{21,\mathrm{d}}$, however, $\hat{f}_{21,21}$ rapidly approaches zero as reionization proceeds for $x_\mathrm{m}\gtrsim 0.3$ (see the right panels of Figures~\ref{fig:response_vs_z} and \ref{fig:response_212121}).
\section{Inside-out versus outside-in reionization}
\label{sec:source_models}
One of the most basic questions about the reionization process is whether it proceeded generally from high to low densities (``inside-out'') or from low to high densities (``outside-in''). Answering this question would inform us about the nature of sources and sinks of ionizing photons \cite{2000ApJ...530....1M,2009MNRAS.394..960C}.
Here we explore the 21-cm response functions as a powerful discriminant of these two scenarios.
\subsection{Models}
In addition to FN-600, we use two simple models to explore whether the 21-cm response function can discriminate between the inside-out and outside-in scenarios. These models assume that the ionization state of a cell is fully determined by its local density. The only two inputs for these simulations are the mass-weighted mean ionized fraction, $x_\mathrm{m}(z)$, and the matter density field. For the inside-out simulation (SN-in-out) the densest fraction $x_\mathrm{m}(z)$ of the mass is assumed to be fully ionized, whereas for the outside-in simulation (SN-out-in) the lowest density fraction $x_\mathrm{m}(z)$ is ionized. These two simulations therefore correspond to a perfect correlation and anti-correlation between ionization and density, respectively. We note that these models are unrealistically extreme, but our purpose here is to demonstrate different behaviours of the response functions in the total inside-out and outside-in limits. We use the same gridded density fields of $600^3$ for these models and use a reionization history (by mass) which approximately matches that of our radiative transfer simulation FN-600. Due to their similar reionization histories, the Thomson scattering optical depth is approximately the same for all models, $\tau\approx0.056$.
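The construction of these two toy ionization fields can be sketched as follows (a simplified version of our actual procedure; the density grid and the target ionized fraction are placeholder inputs):
\begin{verbatim}
# Sketch: perfectly inside-out / outside-in ionization maps obtained by
# thresholding the gridded density field rho so that a mass fraction x_m
# of the box is ionized (simplified; placeholder inputs).
import numpy as np

def toy_ionization(rho, x_m, inside_out=True):
    order = np.argsort(rho, axis=None)   # cells from least to most dense
    if inside_out:
        order = order[::-1]              # start from the densest cells
    cum_mass = np.cumsum(rho.ravel()[order])
    n_ion = np.searchsorted(cum_mass, x_m * cum_mass[-1])
    ionized = np.zeros(rho.size, dtype=bool)
    ionized[order[:n_ion + 1]] = True    # ionize until mass fraction x_m
    return ionized.reshape(rho.shape)
\end{verbatim}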
\subsection{Results}
In Figure~\ref{fig:source_response_vs_z}, we compare the redshift evolution of the two response functions $\hat{f}_{21,\rm d}$ and $\hat{f}_{21,21}$ at $k=0.3~\mathrm{Mpc}^{-1}$ for FN-600, SN-in-out, and SN-out-in. At $k=0.3~\mathrm{Mpc}^{-1}$, the error on the response function $\hat{f}_{21,21}$ will be small ($\leq 0.1$) for a 1000-hour observation with SKA-Low. We provide a back-of-the-envelope calculation of the error on $\hat{f}_{21,21}$ constructed from radio observations in Appendix~\ref{sec:back-of-envelope}.
The evolution in SN-in-out is qualitatively similar to that of FN-600. We note, however, that SN-in-out shows a much deeper minimum for $\hat{f}_{21,21}$.
The results of SN-out-in, on the other hand, show a very different evolution; $\hat{f}_{21,\rm d}$ always remains positive and increases throughout much of the reionization process. Unlike SN-in-out, $\hat{f}_{21,\rm d}$ does not display any extrema, and $f_\mathrm{21,21}$ is found to be almost flat throughout reionization.
In this scenario, the regions with higher densities remain unaffected until the very end of reionization. Thus, the bias of the neutral hydrogen with respect to the density field ($b_1$ in the analytical model for the response function) will remain positive, and eqs.~(\ref{eq:f_21d_final}) and (\ref{eq:f_2121_final}) indicate that the response functions will never attain negative values.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{images/source_comparision_vs_z_k0p3.png}
\caption{Redshift evolution of the response functions at $k=0.3$ Mpc$^{-1}$ for the three different reionization simulations. The 21cm-21cm-matter and 21cm-21cm-21cm response functions are shown in the left and right panels, respectively.}
\label{fig:source_response_vs_z}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{images/pk_vs_bk_k0p3.png}
\caption{Evolution of reionization in the $f(k)-\Delta^2(k)$ plane at $k=0.3$ Mpc$^{-1}$ for the three different reionization simulations. The $f_\mathrm{21,d}$ and $f_\mathrm{21,21}$ response functions are shown in the left and right panels, respectively. The reionization epoch is given with the colour of the markers.
}
\label{fig:pk_vs_bk}
\end{figure}
To further illustrate how the response functions help discriminate between SN-in-out and SN-out-in, we show in Figure~\ref{fig:pk_vs_bk} the evolutionary track of each simulation in the $f(k)-\Delta^2(k)$ plane for $k=0.3$ Mpc$^{-1}$. Ref.~\citep{2018MNRAS.476.4007M} introduced this representation in the study of their bispectrum results. The left panel shows the evolutionary tracks of $f_\mathrm{21,d}$ for FN-600, SN-in-out, and SN-out-in. All the tracks start in roughly the same region of the $f(k)-\Delta^2(k)$ plane. For the $f_\mathrm{21,d}$ case, SN-in-out (SN-out-in) moves leftward (rightward) in the plane. Both SN-in-out and FN-600 create clockwise spiral tracks due to the simultaneous action of the dip in the 21-cm power spectrum and the sign change in $f_\mathrm{21,d}$. SN-out-in does not show this sign change in $f_\mathrm{21,d}$. Its evolutionary track is more parabolic in nature.
The right panel of Figure~\ref{fig:pk_vs_bk} shows the evolutionary tracks of $f_\mathrm{21,21}$. Again, the tracks start in roughly the same region. Both the SN-in-out and FN-600 tracks exhibit a dip feature instead of a spiral because $f_\mathrm{21,21}$ does not change sign. The track of SN-out-in is similar to that in the left panel. These results show that the evolution of the 21-cm response function will provide a useful tool for probing the ``topology" of reionization.
\section{Summary and discussion}
\label{sec:discuss}
Due to the non-Gaussian nature of the signal, higher-order statistics such as the bispectrum are needed to extract the full information content of the 21-cm brightness fluctuations. In this paper we have investigated a particular choice for the bispectrum; namely, the squeezed-limit bispectrum. Following Refs.~\cite{2014JCAP...05..048C,chiangthesis:2015}, we have measured this bispectrum from the correlation between position-dependent 21-cm power spectra and large-scale variations in the matter density or the 21-cm signal fields. This statistic provides a clean measurement of mode-coupling of the large- and small-scale fluctuations, by capturing how the small-scale power in the 21-cm signal responds to the large-scale fluctuation field. Not only is this statistic easy to measure from the data, but it also provides a clear physical interpretation of the measured correlation. This property allowed us to derive a simple analytical model based on a bias expansion and the excursion-set formalism.
As structure formation in an overdense region proceeds faster than the average, we find a positive correlation between the small-scale matter power spectrum and the large-scale matter density field. In other words, the small-scale matter power has a positive response to the large-scale matter density field. When the dense regions are ionized first, as in the ``inside-out'' scenario, the response function of the small-scale 21-cm power spectrum is suppressed relative to the matter response function (e.g., at $k\gtrsim 1~{\rm Mpc}^{-1}$ for $x_m=0.2$), and becomes negative as reionization progresses (e.g., at $k\gtrsim 0.5~{\rm Mpc}^{-1}$ for $x_m=0.5$). This negative response is seen in the scales below the typical sizes of \rm H~{\sc ii } regions during reionization.
Eventually we find a negative response at all scales towards the end of reionization.
In this way, reionization produces mode-coupling naturally; large-scale \rm H~{\sc ii } regions suppress the 21-cm power spectrum on small scales. The sign of the response of the small-scale 21-cm power spectrum to the large-scale 21-cm brightness field is opposite to this once the large-scale matter density and 21-cm brightness fields become anti-correlated.
While the behaviour of the response function below the size of \rm H~{\sc ii } regions is understood as described above, the behaviour on larger scales ($k\lesssim 0.5~{\rm Mpc}^{-1}$) is more complex. To this end we have developed an analytical model that is valid on scales larger than the size of \rm H~{\sc ii } regions. The analytical model predicts that the detailed evolution of the response functions must depend on the distribution of neutral material with respect to the density field and hence on the distribution, brightness and clustering of sources of ionizing photons. We found that our model is able to explain the basic features of the response functions observed in our reionization simulation qualitatively.
We applied the position-dependent power spectrum approach to two extreme models: one in which ionization correlates perfectly with density (perfect ``inside-out'' reionization) and one in which it anti-correlates perfectly (perfect ``outside-in'' reionization). We found that the response functions in these scenarios are very different, suggesting that the response functions are useful for studying the topology of reionization. The response functions will also be useful to confirm that the detected 21-cm power spectra are not due to foreground emission or instrumental systematic errors. The power spectrum is susceptible to foregrounds and systematics because these effects can easily add power at a given scale. However, the response function measures mode-coupling between large- and small-scale fluctuations, and reionization predicts a particular form of coupling which would be harder to mimic precisely with foregrounds or systematics. The unique evolution of the response function in the $f(k)-\Delta^2(k)$ plane can be used to establish the reliability of the measurements.
Our study only represents a first exploration of the application of the position-dependent power spectra and response function techniques to the 21-cm signal. A wider exploration of the parameter space of sources is needed. Another potentially important effect is the impact of spin temperature fluctuations. While we assumed the high spin temperature limit, the spin temperature may be closer to the CMB temperature during an early phase of reionization,
which will impact the strength of the 21-cm signal. Refs.~\citep{2017MNRAS.468.3785R,2018arXiv180803287R} showed that spin temperature fluctuations can have a strong impact on the non-Gaussianity of the signal. Ref.~\citep{2018arXiv180802372W} indeed confirmed that there is a clear effect present in the measurements of the bispectrum.
What are the observational prospects for the response functions? Addressing this question requires dividing the observational data set into subvolumes and extracting the local power spectrum and mean signal from each of these. The implementation of this procedure on real interferometric data should be studied, not only from the point of view of the signal-to-noise ratio but also from that of calibration effects. Ref.~\citep{2018arXiv180802372W} carried out a theoretical study of the detectability of the bispectrum in SKA-Low observations in the presence of telescope noise. They find that for a 1000-hour integration time the bispectrum from the equilateral triangle configuration will be detectable. A similar study should be done for the response function, and hence the squeezed-limit bispectrum.
The squeezed-limit bispectrum (as measured through the position-dependent power spectrum method) clearly constitutes a powerful probe of reionization and should be added to the palette of analysis methods for the 21-cm signal from the EoR. The arrival of real observational data, in hopefully the not-too-distant future, will establish which of the many available methods are the most useful for confirming the nature of the detected signal as well as for the extraction of astrophysical and cosmological parameters from it. The results of our initial study give us confidence that just as for galaxy surveys at lower redshifts \cite{2015JCAP...09..028C}, the squeezed-limit bispectrum will prove to be a valuable tool.
\acknowledgments
This work was supported by Swedish Research Council grant 2016-03581. We acknowledge that the results in this paper have been achieved using the PRACE Research Infrastructure resources Curie based at the Très Grand Centre de Calcul (TGCC) operated by CEA near Paris, France and Marenostrum based in the Barcelona Supercomputing Center, Spain. Time on these resources was awarded by PRACE under PRACE4LOFAR grants 2012061089 and 2014102339 as well as under the Multi-scale Reionization grants 2014102281 and 2015122822. Some of the numerical computations were done on the resources provided by the Swedish National Infrastructure for Computing (SNIC) at PDC, Royal Institute of Technology, Stockholm.
SM acknowledges financial support from the European Research Council under ERC grant number 638743-FIRSTDAWN. EK was supported in part by JSPS KAKENHI Grant Number JP15H05896.
\section{Introduction}
\input{tables/comparison}
Attack campaigns from criminal organizations and nation state actors are one of the most powerful forms of disruption, costing the U.S. economy as much as \$109 billion a year~\cite{council2018}.
These cyber attacks are highly sophisticated, targeting governments and large-scale enterprises to interrupt critical services and steal intellectual property~\cite{freitas2020d2m}.
Defending against these attacks requires the development of strong antivirus tools to identify new variants of malicious software before they can infect a network.
Unfortunately, as a majority of newly identified malware is \textit{polymorphic} in nature, where a few subtle source code changes result in significantly different compiled code (e.g., instruction reordering, branch inversion, register allocation)~\cite{dullien2005graph,you2010malware}, the predominant signature-based form of malware detection is rendered inert~\cite{sathyanarayan2008signature}.
To combat these issues, the cybersecurity industry~\cite{chen2020stamina} has turned to image-based malware representations as they are quick to generate, require no feature engineering, and are resilient to common obfuscation techniques (e.g., section encryption~\cite{nataraj2011malware}, file packing~\cite{nataraj2011comparative}).
For all of these reasons, image-based malware detection and classification research has surged in popularity.
Unfortunately, a majority of this research uses small-scale or private data repositories, making it increasingly difficult to characterize and differentiate existing work, develop new research methodologies, and disseminate new ideas~\cite{chen2020stamina,conti2010visual,fang2020android,fu2018malware,gennissen2017gamut,han2015malware,lu2019new,luo2017binary,nataraj2011malware,nataraj2011comparative,raff2018malware}.
To address these issues, we constructed \textsc{MalNet-Image}\xspace, the first large-scale ontology of malicious software images.
\subsection{Contributions}
\noindent\textbf{1. Largest Cybersecurity Image Database.}
\textsc{MalNet-Image}\xspace contains over 1.2 million software images across a hierarchy of $47$ types and $696$ families, enabling researchers and practitioners to conduct experiments on an industry-scale dataset, and evaluate techniques that were previously reported in proprietary settings.
Compared to the next largest public database~\cite{noever2021virus}, \textsc{MalNet-Image}\xspace offers $24\times$ more images and nearly $70\times$ more classes (see Table~\ref{table:dataset_comparison}).
We report the first public large-scale malware detection and classification results on binary images, where we are able to detect malicious files with an AUC of $0.94$ and classify them across $47$ types and $696$ families with a macro-F1 score of $0.49$ and $0.45$, respectively.
\medskip\noindent\textbf{2. Permissive Licensing \& Open Source Code.}
We release \textsc{MalNet-Image}\xspace with a CC-BY license, allowing researchers and practitioners to share and adapt the database to their needs.
We open-source the code to create the images and run the experiments on \href{https://github.com/safreita1/malnet-image}{Github}.
\medskip\noindent\textbf{3. Visual Exploration Without Downloading.}
We develop \textsc{\textsc{MalNet-Image}\xspace{} Explorer}, an image exploration and visualization tool that enables researchers and practitioners to easily study the data without installation or download.
\textsc{\textsc{MalNet-Image}\xspace{} Explorer} is available online at: \url{https://mal-net.org}.
\medskip\noindent\textbf{4. Community Impact.}
\textsc{MalNet-Image}\xspace offers new and unique opportunities to advance the frontiers of cybersecurity research.
In particular, \textsc{MalNet-Image}\xspace offers researchers a chance to study imbalanced classification on a large-scale cybersecurity database with a natural imbalance ratio of $16,901\times$ (see Figure~\ref{fig:imbalance}); and explore explainability research in a high impact domain, where it is critical that security analysts can interpret and trust the model.
\section{Advancing the State-of-the-Art}
Aside from \textsc{MalNet-Image}\xspace, there are only two publicly available binary-image based cybersecurity datasets--- {Malimg}~\cite{nataraj2011comparative} and Virus-MNIST~\cite{noever2021virus}---containing 9,458 images across 25 classes, and 51,880 images across 10 classes, respectively.
In surveying the malware detection and classification literature \cite{nataraj2011comparative,chen2020stamina,gennissen2017gamut,kancherla2013image,choi2017malware,fu2018malware,han2015malware,su2018lightweight,mclaughlin2017deep,mercaldo2020deep,burks2019data,azab2020msic,yue2017imbalanced,catak2020data,ren2020end,chen2018deep,luo2017binary,jain2015enriching,kumar2016machine,fang2020android}, we observed that almost all experiments were conducted on small-scale or private data.
As the field advances, large-scale public databases are necessary to develop the next generation of algorithms.
In Table~\ref{table:dataset_comparison}, we compare \textsc{MalNet-Image}\xspace with other public and private cybersecurity image datasets.
We find that \textsc{MalNet-Image}\xspace offers 24$\times$ more images and 70$\times$ more classes compared to the largest alternative public binary image database (Virus-MNIST~\cite{noever2021virus}); and $479,800$ more images and $694$ more classes than the largest private database (Stamina~\cite{chen2020stamina}).
We do not compare against repositories of malicious binaries such as AndroZoo~\cite{li2017androzoo++}, AMD~\cite{wei2017deep}, Microsoft-BIG~\cite{ronen2018microsoft}, Malicia~\cite{nappa2013driving}, VirusShare, and VirusTotal in this discussion, as none of them have images available to use.
\medskip
\noindent\textbf{Security Implications.}
With the release of \textsc{MalNet-Image}\xspace, researchers will now have access to a critical resource to develop advanced, image-based malware detection and classification algorithms.
Like most open data resources, there is a potential for misuse by malicious actors who aim to craft new variants to evade detection.
We believe \textsc{MalNet-Image}\xspace's contribution to the research community significantly outweighs such risk.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/image-distribution.pdf}
\vspace{-6mm}
\caption{Class imbalance distribution for \textit{type} and \textit{family}.
}
\label{fig:imbalance}
\end{figure}
\subsection{Constructing MalNet-Image}\label{sec:construction}
\textsc{MalNet-Image}\xspace is an ambitious project to collect and process over 1.2 million binary images, and is a major extension to the graph representation learning database \textsc{MalNet}~\cite{freitas2020large}, offering significant new malware detection capabilities.
Below, we describe the provenance and construction of \textsc{MalNet-Image}\xspace.
\medskip
\noindent\textbf{Collecting Candidate Images.}
We construct \textsc{MalNet-Image}\xspace using the Android ecosystem due to its large market share~\cite{popper2017}, easy accessibility~\cite{li2017androzoo++} and diversity of malicious software~\cite{nokia2019}.
With the generous permission of AndroZoo~\cite{allix2016androzoo,li2017androzoo++}, we collected 1,262,024 Android APK files, specifically selecting APKs containing both a \textit{family} and \textit{type} label obtained from Euphony~\cite{hurier2017euphony}, a state-of-the-art malware labeling system that aggregates and learns from the labelling results of up to 70 antivirus vendors from VirusTotal~\cite{total2012virustotal}.
\begin{figure}[b]
\centering
\includegraphics[width=\linewidth]{figures/dex-structure.png}
\caption{\textbf{Left:} Android DEX file structure, composed of three major components---(1) header, (2) ids, and (3) data.
\textbf{Right:} binary image representation of the DEX file.
}
\label{fig:dex-structure}
\end{figure}
\medskip
\noindent\textbf{Processing the Images.}
The first step in constructing the image representation was to extract the DEX file (bytecode) from each Android APK.
The extracted DEX file was then converted into a 1D array of $8$ bit unsigned integers.
Each entry in the array is in the range $[0, 255]$ where $0$ corresponds to a black pixel and $255$ a white pixel.
We then convert each 1D byte array into a 2D array using standard linear plotting where the width of the image is fixed and the height is allowed to vary based on the file size.
We use the same image width proposed in the seminal work \cite{nataraj2011malware} (and follow up work~\cite{kalash2018malware,cui2018detection,rezende2018malicious,yakura2018malware}), and scale each image to $256\times 256$ using a standard Lanczos filter from the Pillow library.
Finally, we color each byte according to its use, adding a layer of semantic information on top of the raw bytecode.
While a variety of techniques can be used to encode semantic information into the image, there is currently no accepted standard.
We follow \cite{gennissen2017gamut} and assign each byte to a particular RGB channel depending on its position in the DEX file structure---(i) \textit{header}, (ii) \textit{identifiers} and \textit{class definitions}, and (iii) \textit{data} (see Figure~\ref{fig:dex-structure}).
Distributed across Google Cloud general-purpose (N2) machines with $16$ cores, running $24$ hours a day, this process took approximately a week.
We release the source code used to process the APKs on \href{https://github.com/safreita1/malnet-image}{Github}.
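The core of this conversion can be sketched in a few lines (a simplified version of the released code; the DEX extraction from the APK and the RGB channel assignment are omitted, and the file path is a placeholder):
\begin{verbatim}
# Sketch: convert a DEX file's raw bytes into a 256x256 grayscale image
# (simplified; the released code also handles DEX extraction from the APK
# and the semantic RGB colouring).
import numpy as np
from PIL import Image

def dex_to_image(dex_path, width=256, out_size=(256, 256)):
    data = np.fromfile(dex_path, dtype=np.uint8)   # 1D byte array, 0-255
    height = int(np.ceil(len(data) / width))       # height varies with size
    padded = np.zeros(width * height, dtype=np.uint8)
    padded[:len(data)] = data
    img = Image.fromarray(padded.reshape(height, width), mode='L')
    return img.resize(out_size, resample=Image.LANCZOS)

# Example (placeholder path):
# dex_to_image('classes.dex').save('sample.png')
\end{verbatim}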
\medskip
\noindent\textbf{MalNet-Image Tiny.}
We construct \textsc{\textsc{MalNet-Image}\xspace Tiny}, containing $61,201$ training, $8,743$ validation and $17,486$ test images, for \textit{type level classification} experiments by removing the 4 largest types in \textsc{MalNet-Image}\xspace.
The goal of \textsc{\textsc{MalNet-Image}\xspace Tiny} is to enable users to rapidly prototype new ideas, since it requires only a fraction of the time needed to train a new model.
\textsc{\textsc{MalNet-Image}\xspace Tiny} is released alongside the full dataset at {\small\url{https://mal-net.org}}.
\section{MalNet-Image Applications}\label{sec:application}
\textsc{MalNet-Image}\xspace offers new and unique opportunities to advance the frontiers of cybersecurity research.
As examples, we show three exciting new applications made possible by the \textsc{MalNet-Image}\xspace database---(1) as a state-of-the-art cybersecurity image benchmark in Section~\ref{subsec:app_baselines};
(2) as the first large-scale public analysis of malicious software detection using binary images in Section~\ref{subsec:malware-detection};
and (3) how to categorize high-risk malware threats (e.g., is this Ransomware or Spyware?) in Section~\ref{subsec:malware-classification}.
Then, in Section~\ref{subsec:research-challenges} we highlight new research directions enabled by \textsc{MalNet-Image}\xspace.
\medskip\noindent
\textbf{Application Setup.}
We divide \textsc{MalNet-Image}\xspace into three stratified sets of data, with a training-validation-test split of $70$-$10$-$20$ respectively;
repeated for both type and family labels (suggested splits available at {\small\url{https://mal-net.org}}).
In addition, we conduct malware detection experiments by grouping all 46 malicious software images into one type while the benign type maintains its original label.
We evaluate 3 common architectures---ResNet~\cite{he2016deep}, DenseNet~\cite{huang2017densely} and MobileNet~\cite{howard2017mobilenets}---comparing them based on their macro-F1 scores, as is typical for highly imbalanced datasets~\cite{duggal2020elf,duggal2020rest,duggal2021har,freitas2020large}.
Each model is trained for 100 epochs using cross entropy loss (unless specified otherwise) and an Adam optimizer on an Nvidia DGX-1 containing 8 V100 GPUs and 512GB of RAM using Keras with a Tensorflow backend.
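A sketch of the stratified splitting step is shown below (with hypothetical path and label arrays); the official split files released on the project website should be used to reproduce our numbers exactly.
\begin{verbatim}
# Sketch: stratified 70-10-20 split (hypothetical `paths`/`labels` arrays;
# the released split files should be preferred for exact reproducibility).
from sklearn.model_selection import train_test_split

def stratified_split(paths, labels, seed=0):
    p_train, p_rest, y_train, y_rest = train_test_split(
        paths, labels, test_size=0.30, stratify=labels, random_state=seed)
    # split the remaining 30% into 10% validation and 20% test
    p_val, p_test, y_val, y_test = train_test_split(
        p_rest, y_rest, test_size=2/3, stratify=y_rest, random_state=seed)
    return (p_train, y_train), (p_val, y_val), (p_test, y_test)
\end{verbatim}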
\subsection{Application 1: Benchmarking Techniques}\label{subsec:app_baselines}
Leveraging the unprecedented scale and diversity of \textsc{MalNet-Image}\xspace, we evaluate numerous malware detection and classification techniques that have previously been studied using only private or small-scale databases.
Specifically, we evaluate recent techniques including: (a) semantic information encoding via colored channels, (b) model architecture, (c) imbalanced classification techniques, and (d) the performance of \textsc{\textsc{MalNet-Image}\xspace Tiny}, a small-scale version of \textsc{MalNet-Image}\xspace.
We detail the setup, results, and analysis of each experiment below.
\medskip\noindent
\textbf{Semantic Information Encoding.}
We evaluate the effect of information encoding in the classification process by training two ResNet18 models---one on the RGB images, where each byte is assigned to a particular color channel depending on its position in the DEX file structure as proposed in \cite{gennissen2017gamut}, and another on grayscale converted images.
We find no improvement in the macro-F1 score using semantically encoded RGB images compared to grayscale ones.
As there are alternative encoding techniques~\cite{gennissen2017gamut}, we believe comparing the effects of different encodings could be an interesting future research direction.
Going forward, all models are trained using grayscale images.
\input{tables/results-models}
\medskip\noindent
\textbf{Evaluating Model Architectures.}
We evaluate malware detection and classification performance on 3 popular deep learning architectures (ResNet, DenseNet and MobileNetV2) across a variety of model sizes, using grayscale encoded images, and cross entropy loss.
In Table~\ref{tab:large_scale_results}, we report the macro-F1, macro-precision, and macro-recall of each model.
We find that all models obtain similar macro-F1 scores, indicating that a small model has enough capacity to learn the features present in the binary images.
Going forward, all experiments use a ResNet18 model due to its strong performance and fast training time.
\medskip\noindent
\textbf{Accounting for Class Imbalance.}
We evaluate 3 imbalanced classification techniques---(1) class reweighting with cross entropy loss, (2) focal loss, and (3) class reweighting with focal loss; and compare this to a model trained using cross entropy loss without class weighting.
For class reweighting, each example of a class $c$ is weighted by the inverse of its effective number of samples, i.e., by $\frac{1-\beta}{1-\beta^{n_c}}$, where $n_c$ is the number of images in class $c$ and $\beta=0.999$ is selected through a line search across the standard values~\cite{cui2019class} of $\{0.9, 0.99, 0.999, 0.9999\}$.
For focal loss~\cite{lin2017focal}, a regularization technique that tackles imbalance by establishing margins based on the class size, we set the hyperparameter $\gamma=2$ as suggested in \cite{lin2017focal}.
Analyzing the results, we find that cross entropy loss with class reweighting improves the \textit{type} macro-F1 score by $0.021$, but lowers the binary and family classification scores by $0.002$ and $0.006$, respectively.
In particular, we notice that \textsc{MalNet-Image}\xspace's smallest types benefit the most from class reweighting, where the `Click' type (113 examples), sees its F1 score rise from $0$ to $0.91$.
On the other hand, focal loss shows no improvement over the baseline model, likely due to its design for use in dense object detectors such as RetinaNet.
Going forward, all experiments use cross entropy loss with class reweighting due to the strong improvement in smaller malware types.
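For reference, the class-reweighting scheme can be sketched as follows (the class counts are placeholders, and normalising the weights to sum to the number of classes is a common convention rather than a requirement):
\begin{verbatim}
# Sketch: "effective number of samples" class weights (Cui et al. 2019)
# with beta = 0.999; `class_counts` is a placeholder array of n_c values.
import numpy as np

def effective_number_weights(class_counts, beta=0.999):
    # weight per class is proportional to (1 - beta) / (1 - beta^n_c)
    weights = (1.0 - beta) / (1.0 - np.power(beta, class_counts))
    # normalise so the weights sum to the number of classes (a common choice)
    return weights * len(class_counts) / weights.sum()

# Example: pass the result to Keras via
#   model.fit(..., class_weight=dict(enumerate(weights)))
\end{verbatim}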
\medskip\noindent
\textbf{\textsc{MalNet-Image Tiny} Performance.}
We train a ResNet18 model on grayscale images using cross entropy loss and class reweighting, and achieve a macro-F1 score of $0.65$.
Compared to the full dataset, the macro-F1 score is significantly higher ($0.65$ vs.\ $0.49$), which is unsurprising since the 4 largest types contained a significant proportion of the image diversity (based on the number of families), so removing them results in an easier classification task.
\medskip\noindent
\textbf{Limitations.} Methods that work well on other datasets may not work well on \textsc{MalNet-Image}\xspace due to structural differences in the images; vice-versa, methods that work on \textsc{MalNet-Image}\xspace may not transfer well to other datasets.
We hope this work inspires new research in the binary image domain, enabling the development of methods that generalize across key domains such as cybersecurity.
\subsection{Application 2: Malware Detection}\label{subsec:malware-detection}
Researchers and practitioners can now conduct malware detection experiments on an industry-scale dataset, evaluating techniques that were previously reported only in proprietary settings.
Using the model selected in Section~\ref{subsec:app_baselines}---a ResNet18 model trained on grayscale images using cross entropy loss and class reweighting---we perform an in-depth analysis of this highly imbalanced detection problem containing $1,182,905$ malicious and $79,119$ benign images.
We find that the model is able to obtain a strong macro-F1 score of $0.86$, macro-precision of $0.89$ and a macro-recall of $0.84$.
We further study the model's detection capabilities by analyzing its ROC curve, where the model achieves an AUC score of $0.94$, and is able to identify $84\%$ of all malicious files with a false positive rate of $10\%$ (a common threshold used in security~\cite{chen2020stamina}).
This first-of-its-kind analysis gives researchers insight into malware detection results that are usually restricted to a handful of industry labs.
\textsc{MalNet-Image}\xspace also opens new opportunities in the nascent and promising research direction of analyzing attention maps to interpret malware detection results.
Yakura et al.~\cite{yakura2019neural,yakura2018malware} showed that specific byte sequences found in the attention map closely correlate with malicious code payloads.
We evaluate the potential of attention maps on \textsc{MalNet-Image}\xspace using the popular Grad-Cam~\cite{selvaraju2017grad} technique to highlight regions of interest across 3 types of malware and benignware in Figure~\ref{fig:model-attention}.
In the malware images (left three), we see the attention map is focused on thin regions of bytecode in the data section (where malicious payloads are often stored), while in the benign images (right side) the attention map is dispersed across the larger data region.
This type of visual analysis can significantly reduce the amount of time and effort required to manually investigate a file by guiding security analysts to suspicious regions of the bytecode.
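The heatmaps in Figure~\ref{fig:model-attention} can be produced with a short Grad-CAM routine of the following form (a generic sketch rather than our exact script; the convolutional layer name is a placeholder that depends on the backbone):
\begin{verbatim}
# Generic Grad-CAM sketch for a Keras CNN (layer name is a placeholder).
import tensorflow as tf

def grad_cam(model, image, conv_layer='conv5_block3_out', class_idx=None):
    grad_model = tf.keras.Model(
        model.inputs, [model.get_layer(conv_layer).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])
        if class_idx is None:
            class_idx = tf.argmax(preds[0])
        score = preds[:, class_idx]
    grads = tape.gradient(score, conv_out)               # d score / d feature
    weights = tf.reduce_mean(grads, axis=(1, 2))          # spatial average
    cam = tf.reduce_sum(weights[:, None, None, :] * conv_out, axis=-1)
    cam = tf.nn.relu(cam)[0]
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()    # heatmap in [0, 1]
\end{verbatim}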
\subsection{Application 3: Malware Classification}\label{subsec:malware-classification}
\textsc{MalNet-Image}\xspace opens up new research into binary images as a tool for multi-class malware classification (e.g., is this file Ransomware or Spyware?).
Using the model selected in Section~\ref{subsec:app_baselines}, we perform an in-depth analysis of its multi-class classification capability across $47$ types and $696$ families of malware.
We find the model is able to classify the malware \textit{type} and malware \textit{family} with a macro-F1 score of $0.49$ and $0.45$, respectively.
In Figure~\ref{fig:cm}, we conduct an in-depth analysis into \textit{type} level classification performance through a confusion matrix heatmap.
A dark diagonal indicates strong classifier performance, where a dark off-diagonal entry indicates poor performance.
Each square in the diagonal indicates the percent of examples correctly classified for a particular malware type, and each off-diagonal row entry indicates the percent of incorrectly classified examples for a particular malware type.
We find that four types of malware comprise the majority of misclassifications: Adware, Benign, Riskware, and Trojan.
Unsurprisingly, these are the 4 largest types of malware (based on the number of images in each class), indicating the strong effect that data imbalance has in the malware classification process.
In addition, the heatmap can be used to identify potential naming disagreements between vendor labels (e.g., ``adware'' and ``adsware''), serving as the basis for merging certain types of malware.
To the best of our knowledge, this is the first public large-scale analysis of malware classification, providing a new state-of-the-art benchmark to compare against.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/attention.pdf}
\caption{Model attention patterns across $4$ malware types (each with $2$ images).
\textbf{Ransom++Trojan:} focus on thin region of data section.
\textbf{Benign:} wide range of attention across data section.
\textbf{Adware:} attention on circular bytecode ``hotspots''.
\textbf{Monitor:} focus on ``empty'' black region of data section.
}
\label{fig:model-attention}
\end{figure}
\begin{figure*}
\includegraphics[width=\textwidth]{figures/cm.pdf}
\caption{
Malware classification results using confusion matrix heatmap (classes in descending order of number of samples).
We analyze type level classification performance, where a dark diagonal indicates strong performance, and a dark off-diagonal indicates poor performance.
Each square in the diagonal indicates the percent of examples correctly classified for a particular malware type, and each off-diagonal entry indicates the percent of incorrectly classified examples for a particular type.
}
\label{fig:cm}
\end{figure*}
\subsection{Enabling New Research Directions}\label{subsec:research-challenges}
The scale and diversity of \textsc{MalNet-Image}\xspace opens up new exciting research opportunities to the ML and security communities. Below, we present 3 promising directions (R1-R3).
\begin{enumerate}[topsep=2mm, itemsep=1mm, parsep=1mm, leftmargin=*, label=\textbf{R\arabic*.}]
\item \textbf{Advancing Vision Based Cybersecurity Research.}
Research into developing image-based malware detection and classification algorithms has recently surged across industry (e.g., Intel-Microsoft collaboration on Stamina~\cite{chen2020stamina}, security companies~\cite{noever2021virus,gennissen2017gamut}) and academia~\cite{kancherla2013image,choi2017malware,fu2018malware,han2015malware,su2018lightweight,mclaughlin2017deep,mercaldo2020deep,burks2019data,azab2020msic,yue2017imbalanced,catak2020data,ren2020end,chen2018deep,luo2017binary,jain2015enriching,kumar2016machine,fang2020android}.
However, existing public datasets contain only a handful of classes and thousands of images, and as the field advances, larger and more challenging datasets are needed for the next generation of models.
With the release of \textsc{MalNet-Image}\xspace, researchers have access to a critical resource to develop and benchmark advanced image-based malware detection and classification algorithms, previously restricted to a few industry labs and research teams.
\item \textbf{Extending Imbalanced Classification to a New Domain.}
Only preliminary work has studied binary-image malware classification under data imbalance~\cite{yue2017imbalanced} due to the limited number of classes and images available in public datasets.
As a result, it is unknown whether many techniques may generalize to the binary-image domain, and how they will perform in highly imbalanced classification scenarios.
We take a first step in this direction in Section~\ref{subsec:app_baselines}, where we show that classes containing only a few examples typically underperform relative to their more populous counterparts---highlighting the significant challenge of imbalanced classification in the cybersecurity domain.
By releasing \textsc{MalNet-Image}\xspace, one of the largest naturally imbalanced databases to date, we hope to foster new interest in this important research area, enabling the machine learning community to impact and generalize across domains.
\item \textbf{Interpretable Cybersecurity Research.}
Preliminary research has demonstrated the importance of attention mechanisms in binary-image malware classification, where extracted regions can provide strong indicators to human analysts, helping guide them to suspicious parts of the bytecode for additional analysis~\cite{yakura2019neural,yakura2018malware}.
This includes recent research in saliency-based methods that automatically discover concepts, helping to identify correlated regions of bytecode~\cite{zhao2015saliency}.
Prior to \textsc{MalNet-Image}\xspace, researchers were limited to a small number of malicious families and types, hindering their ability to conduct large-scale explainability studies.
With \textsc{MalNet-Image}\xspace's nearly 700 classes, researchers can explore a wide variety of malicious software, enabling new breakthroughs and discoveries.
For example, researchers might discover that new types of visualization and sense-making techniques are needed to accurately summarize large volumes of binary-image data to enhance security analysts' decision-making capabilities.
\end{enumerate}
\section{Conclusion}\label{sec:conclusion}
Computer vision research into binary-image malware detection and classification is a crucial tool in protecting enterprise networks and governments from cyber attacks seeking to interrupt critical services and steal intellectual property.
Leveraging \textsc{MalNet-Image}\xspace's scale and diversity---containing $1,262,024$ binary images across a hierarchy of $47$ types and $696$ families---researchers and practitioners can now conduct experiments that were previously restricted to a few industry labs and research teams.
We hope \textsc{MalNet-Image}\xspace becomes a central resource for a broad range of research into vision-based cyber defenses, multi-class imbalanced classification, and interpretable security.
\section{Introduction}
The classical Baade-Wesselink method allows us to measure the diameters
of pulsating stars by using the radial velocity and colour curves over the pulsation period.
Instead of the colour curve, the interferometric version of the Baade-Wesselink approach
uses the angular variations of the stellar diameter, thus becoming a direct method
to determine the distance of the pulsating star.
A few years ago we started a
collaboration between the {\it Observatoire C\^ote d'Azur} (Nice, France) and
the {\it INAF-Osservatorio Astronomico di Brera} (Milano, Italy)
aimed at improving the use of classical pulsators as standard
candles.
To pursue this goal we put together our expertises in stellar
interferometry and high-resolution spectroscopy \citep{guiglion, rhopup, vegachara}.
\section{The physics behind the projection factor $p$}
When we measure the Doppler effect in the atmosphere of a pulsating star like
a Cepheid, we actually measure the radial component of the pulsation velocity field,
i.e., the projection of the pulsation velocity along the line of sight
(Fig.~\ref{fig:p0}).
The recipe to go back to the true pulsation velocity $V_{\rm puls}$
from the observed $V_{\rm rad}$ contains three ingredients, reflecting three different physical
effects. They can be summarized in the decomposition of the projection factor
$p=V_{\rm puls}/V_{\rm rad}$ into
the geometrical factor $p_0$, the gradient $f_{\mathrm{grad}}$, and the correction factor
$f_{\mathrm{o-g}}$ \citep{nardetto2007}:
\begin{equation} \label{pfact}
p= p_{\mathrm{o}}\,f_{\mathrm{grad}}\,f_{\mathrm{o-g}}.
\end{equation}
The physical effects behind each factor are:
\begin{enumerate}
\item
we must consider that the flux
coming from the borders of the stellar disc is reduced by the limb-darkening law, which describes the
changes in the surface brightness (Fig.~\ref{fig:p}, lower panel).
Such a quantity acts as a weight on the radial component
as a function of the distance from the photocenter.
The so-called geometrical factor
$p_{\mathrm{o}}$ has to be introduced (Fig.~\ref{proj}, shift indicated with ``2") to compensate for this
effect;
\item
we measure the radial velocities of the lines in the spectra
by means of their shifts in wavelength. We have to take into account that
the absorption lines form in different regions of the stellar atmosphere.
In the extended atmospheres of Cepheids, different regions have different
radial velocity amplitudes and mean values. Therefore, such values are line-dependent
and thus prone to show a gradient $f_{\mathrm{grad}}$ (Fig.~\ref{proj}, straight line
indicated with ``1");
\item
the interferometric and photometric radii correspond to the photosphere of the star,
hence we have to extrapolate the spectroscopic one to it (hypothetical line of null depth;
Fig.~\ref{proj}, indicated with ``3").
Moreover, spectroscopy is sensitive to the motion of the gas ({\rm g}) while interferometry and photometry are
sensitive to the optical ({\rm o}) continuum. A final correction $f_{\mathrm{o-g}}$ is necessary
to combine the different techniques (Fig.~\ref{proj}, shift indicated with ``4").
\end{enumerate}
It is quite evident that
without knowing the projection factor $p$ we cannot use the
Baade-Wesselink method to determine the distances of the sources and
of the environments in which they are embedded.
\begin{figure}
\begin{minipage}{0.35\textwidth}
\centering
\includegraphics[width=\textwidth]{PorettiFig1.PNG}
\caption{Graphical representation of the projection factor $p$ between the
true pulsational velocity $V_{\rm puls}$ of the star and the radial velocity $V_{\rm rad}$
measured by the observer.}
\label{fig:p0}
\end{minipage}
\quad
\begin{minipage}{0.6\textwidth}
\centering
\includegraphics[width=\textwidth]{PorettiFig2.PNG}
\caption{The limb-darkening effect (lower panel) decreases the value
of the geometric factor (upper panel) due to the minor contribution of the
flux received from the borders of the stellar disc.}
\label{fig:p}
\end{minipage}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{PorettiFig3.pdf}
\caption{Graphical representation of the decomposition of the projection
factor $p$: atmospheric gradient (line ``1"), geometric (shift ``2"), extrapolation
to the photosphere (``3"), and correction for the radial velocity of the gas
(shift ``4").}
\label{proj}
\end{figure}
\section{The HARPS-N contribution}
The high-precision radial-velocity spectrograph HARPS-N \citep{co12} is installed at the 3.58-m {\it Telescopio Nazionale Galileo} (TNG),
located at the Roque de los Muchachos Observatory (La Palma, Canary Islands, Spain).
One hundred and three spectra of $\delta$ Cep were secured between March 27th and September 6th,
2015 in the framework of the OPTICON proposal 2015B/015. Interferometric data of $\delta$ Cep were
previously obtained \citep{merand} with the Fiber Linked Unit for Optical Recombination
\citep[FLUOR; ][]{fluor}
operating at the Center for High Angular Resolution Astronomy array
\citep[CHARA, Mount Wilson Observatory, USA; ][]{chara}.
By using the known trigonometric
distance of $\delta$~Cep \citep[$d$=272~pc; ][]{majaess} we could apply the {\it inverse}
interferometric Baade-Wesselink method to the CHARA and HARPS-N data to derive an observed
value of the projection factor $p$. We obtained $p_{\rm cc-g}$=1.239$\pm$0.031.
\begin{figure}
\includegraphics[width=\textwidth]{PorettiFig4.pdf}
\caption{Variation of the amplitude of the HARPS-N radial velocity curve $\Delta RV_{\mathrm{c}}$
as a function of the line depth $D$.}
\label{depth}
\end{figure}
In the context of the quantitative evaluation of the three physical effects
concurring to form the projection factor,
we used the HARPS-N spectra to derive the atmospheric gradient $f_{\rm grad}$ directly.
To do this we need a proxy for the height
of the line-forming region in the stellar atmosphere. Such a proxy
is provided by the line depth $D$ taken at the minimum radius phase
\citep{nardetto2007}. The photosphere sets the zero line depth.
Figure~{\ref{depth}} shows the plot obtained from 15 unblended spectral lines
from 4683 to 6336~\AA\, \citep[twelve of Fe~{\sc i}, one each of Ni~{\sc i},
Si~{\sc i} and Ti~{\sc ii}; two more lines of Fe~{\sc i} are not discussed here
for the sake of simplicity; see ][ for further details]{harpsn}. We computed the amplitude of the radial velocity
curve $\Delta\,RV_c$ (the suffix $c$ stands for the centroid method, different
from the
cross correlation $cc$ one used by the HARPS-N pipeline)
for each line by means of a Fourier decomposition.
After this, we computed the least-squares fitting line
\begin{equation} \label{Eq_grad}
\Delta RV_{\mathrm{c}}= a_0 D + b_0 .
\end{equation}
The correcting factor $f_{\mathrm{grad}}$ due to the velocity gradient can be
computed as the ratio between the amplitude extrapolated at the photosphere
($D$=0) and that of the given line \citep{nardetto2007}
\begin{equation} \label{Eq_grad2}
f_{\mathrm{grad}}= \frac{b_0}{a_0 D + b_0},
\end{equation}
where $a_0$ and $b_0$ are the slope and zero-point of the linear fit.
Therefore, for the first time we could determine
$f_{\mathrm{grad}}$ from
the observations: the values are ranging from 0.964 (Fe~{\sc i} at 5367~\AA)
to 0.991 (Fe~{\sc i} at 4896~\AA),
with a typical error bar of 0.010.
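As a purely illustrative application of Eqs.~(\ref{Eq_grad}) and (\ref{Eq_grad2})
(the numbers are invented for the illustration and are not the HARPS-N coefficients),
a fit with $a_0=1.0$~km\,s$^{-1}$ and $b_0=26.0$~km\,s$^{-1}$ gives, for a line of depth $D=0.25$,
\[
\Delta RV_{\mathrm{c}} = 26.25\ \mathrm{km\,s^{-1}},
\qquad
f_{\mathrm{grad}} = \frac{26.0}{26.25} \simeq 0.990,
\]
so that, for $a_0>0$, deeper lines have larger amplitudes and correspondingly smaller values of
$f_{\mathrm{grad}}$.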
\section{Conclusions}
The first measurement of $f_{\mathrm{grad}}$ was
not the only result obtained from HARPS-N spectra.
By introducing a hydrodynamical model we could also determine the semi-theoretical
values $f_{\mathrm{o-g}}$=0.975$\pm$0.002 and $f_{\mathrm{o-g}}$=1.006$\pm$0.002
by assuming radiative transfer in plane-parallel or spherically symmetric geometries,
respectively.
The whole procedure is described in \citet{harpsn}.
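Purely as an illustration of Eq.~(\ref{pfact}), and neglecting the distinction between
cross-correlation and line-by-line radial velocities, combining the measured
$p_{\rm cc-g}$=1.239 with a representative $f_{\mathrm{grad}}\simeq 0.975$ and the spherically
symmetric value $f_{\mathrm{o-g}}\simeq 1.006$ corresponds to a geometrical factor of order
\[
p_{\mathrm{o}}\simeq\frac{1.239}{0.975\times 1.006}\simeq 1.26 .
\]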
We are planning to observe other bright Cepheids both in interferometry
and high-resolution spectroscopy.
An improvement of the current performance
of the BRITE two-colour photometry could allow us to link the classical and
interferometric Baade-Wesselink methods, thus
getting a closer look at the physics related to the projection factor and
very important feedback on the distance scale \citep{spips}. This exercise would
be very useful to validate a new task merging interferometry, spectroscopy
and photometry to be included in the PLATO
complementary science, making the best use of the
two onboard telescopes equipped with $B$ and $V$ filters.
\acknowledgements{The observations leading to these results have received funding
from the European Commission's Seventh Framework Programme (FP7/2013-2016)
under grant agreement No. 312430 (OPTICON).
EP and NN acknowledge financial support from the PRIN-INAF 2014 and the ANR-15-CE31-0012-01, respectively.
Based in part on data collected by the BRITE-Constellation satellite mission, built, launched
and operated thanks to support from the Austrian Aeronautics and Space Agency (FFG-ALR)
and the University of Vienna, the Canadian Space Agency (CSA), and the Foundation for
Polish Science \& Technology (FNiTP MNiSW) and National Science Centre (NCN).
}
\bibliographystyle{ptapap}
\section{Introduction}
On the Lie group $G=\so_0(n,1)$, we give a left-invariant metric which
comes from the Killing-Cartan form. The maximal compact subgroup
$\so(n)$ $=\so(n)\x\{1\}$ is denoted by $K$.
Then the group of isometries is
$$
\isom_0(G)=G\x K,
$$
the left translation by $G$ and the right translation by $K$. Thus, there
are two actions of $K$, $\ell(K)\subset G$ and $r(K)=K$.
\bigskip
The homogeneous Riemannian submersion by the isometric $r(K)$-action
(which is free and proper)
$$
\so(n)\ra\so_0(n,1)\ra \so_0(n,1)/\so(n)
$$
is very well understood; $\so_0(n,1)/\so(n)$ is the $n$-dimensional hyperbolic space
$\bbh^n$.
It is the purpose of this paper to study the homogeneous Riemannian
submersion by the $\ell(K)$-action
$$
\so(n)\ra\so_0(n,1)\ra \so(n)\bs\so_0(n,1).
$$
It can be seen that this space $\mathcal H^n=\so(n)\bs\so_0(n,1)$ is diffeomorphic to
$\bbh^n$, but metrically it is not as nice as the case of right
actions. More specifically, it will be shown that
the metric is not conformal to $\bbh^n$, and the space has fewer symmetries.
The following facts will be proven:
\bigskip
1. $\isom_0(\so(n)\bs\so_0(n,1))=r(\so(n))$, and it has one fixed point
$\{\mathbf i\}$, (Theorem \ref{isom-sono}).
2. $\mathcal H^n-\{\mathbf i\}$ is a warped product $(1,\infty)\x_{e^{2\phi}} S^{n-1}$,
(Theorem \ref{warped-prod}).
3. The sectional curvature $\kappa$ satisfies:
$0<\kappa\leq 5$, and $\kappa=5$ is achieved only at $\mathbf i$,
(Theorem \ref{sect-curvature-n}).
\bigskip
\bigskip
\section{Iwasawa Decomposition}
\step
We shall establish some notation first.
Let
$$
J=\left[\begin{matrix} -I_p & 0\\ 0&I_q \end{matrix}\right],
$$
where $I_p$ and $I_q$ are the identity matrices of size $p$ and $q$.
The group $O(p,q)$ is the subgroup of $\gl(p+q,\bbr)$ satisfying
$A J A^t = J$. It has 4 connected components (for $p,q>0$) and we denote
the connected component of the identity by $\so_0(p,q)$.
It is a semi-simple Lie group. The Iwasawa decomposition is
best described on its Lie algebra. We specialize to $\so_0(n,1)$.
\step
Let $e_{ij}$ denote the matrix whose $(i,j)$-entry is 1 and 0 elsewhere.
The standard metric on $\so_0(n,1)$ is given by the orthonormal basis for the
Lie algebra
$$
E_{ij}=\epsilon_{ij} e_{ij}+e_{ji},\quad 1\leq i < j\leq n+1,
$$
where $\epsilon_{ij}=-1$ if $j<n+1$ and $\epsilon_{ij}=1$ if $j=n+1$.
An Iwasawa decomposition $K A N$ is defined as follows. Let
$$
N_i=E_{i,n}+E_{i,n+1},\text{ for $i=1,2,\dots,n-1$,}
$$ be a basis for the nilpotent Lie algebra $\mathfrak n$;
$A_1=E_{n,n+1}$ be a basis for the abelian $\frak a$.
The compact subalgebra $\frak k=\frak{so}(n)$ sits inside
$\frak{so}(n,1)$ as the block diagonal matrices $\frak{so}(n)\oplus (0)$.
For an explicit discussion of such a decomposition using positive roots,
see, for example, \cite{Kn}.
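For instance, a direct matrix computation from these definitions gives
\[
[A_1,N_i]=N_i
\quad\text{and}\quad
[N_i,N_j]=0
\qquad (1\leq i,j\leq n-1),
\]
so $\frak n$ is abelian and $\operatorname{ad}(A_1)$ acts on $\frak n$ as the identity.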
\bigskip
\step
It is well known that $NA(=AN)$ forms a (solvable) subgroup. As a
Riemannian \emph{subspace}, $NA$ is an Einstein space; i.e., has a Ricci
tensor which is proportional to the metric.
However, our concern here is $NA$, not as
a subspace, but rather as a \emph{quotient space} of $G$ because it
provides a smooth cross-section for both $G\lra G/K$ and $G\lra K\bs
G$.
\bigskip
\step
From now on, in a slight abuse of notation,
`$r(K)$-action' means the
right action of $K= \so(n)$ on either $\so_0 (n,1)$ or $\mathcal H^n$
under appropriate situations. Also `$\ell(K)$-action' means the left
action of $K= \so(n)$ on either $\so_0 (n,1)$ or $\bbh^n$.
Note that $\mathcal H^n$ (respectively, $\bbh^n$) does not have an
$\ell(K)$-action (respectively, $r(K)$-action).
\bigskip
\section{$\mathcal H^2=\so(2)\bs\so_0(2,1)$}
\label{sec:SP^2}
\step[{Metric on $\so_0(2,1)$}]
We shall study the case when $n=2$ first, because this is the building
block for the general case.
The orthonormal basis for the Lie algebra $\frak{so}(2,1)$ is
$$
E_{13}=
\left[\begin{matrix}
0 &0 &1\\
0 &0 &0\\
1 &0 &0\\
\end{matrix}\right],\quad
E_{23}=
\left[\begin{matrix}
0 &0 &0\\
0 &0 &1\\
0 &1 &0\\
\end{matrix}\right],\quad
E_{12}=
\left[\begin{matrix}
0 &-1 &0\\
1 &0 &0\\
0 &0 &0\\
\end{matrix}\right].
$$
The Lie algebras for the Iwasawa decomposition are
$$
\frak{k}=\angles{E_{12}},\quad
\frak{a}=\angles{A_{1}}\quad\text{and}\ \
\frak{n}=\angles{N_{1}},
$$
where
$$
A_1=E_{23}\quad\text{and}\ \
N_1=E_{13}+E_{12}.
$$
The corresponding Lie subgroups are denoted by $K$, $A$ and $N$, respectively.
\bigskip
\step[Global trivialization of $\mathcal H^2$]
In order to study {$\so(2)\bs\so_0(2,1)$}, it is advantageous to use the
notation $\so_0(2,1)=NAK$ rather than $KAN$. That is, every element $p$
of $\so_0(2,1)$ is uniquely written as a product
$$
p=n a k,\quad n\in N,\ a\in A,\ k\in K.
$$
The nilpotent subgroup $N$ is normalized by $A$, and $NA$ forms a subgroup.
We give a global coordinate to $NA$ by
\begin{align}
\label{varphi}
\varphi:\ \bbr\x \bbr^+ &\lra NA\\
\notag (x,y)&\ \mapsto\ e^{x N_1} e^{\ln(y) A_{1}}.
\end{align}
Note that this is different from the restriction of the exponential map
$\exp: \frak{so}(2,1)\ra\so_0(2,1)$.
Sometimes we shall suppress $\varphi$ and write $(x,y)$ for $\varphi(x,y)$.
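For concreteness, $N_1^3=0$ and $A_1^2=e_{22}+e_{33}$, so the two exponential factors can be
written out explicitly as
\[
e^{x N_1}=
\left[\begin{matrix}
1 & -x & x\\
x & 1-\tfrac{x^2}{2} & \tfrac{x^2}{2}\\
x & -\tfrac{x^2}{2} & 1+\tfrac{x^2}{2}
\end{matrix}\right],
\qquad
e^{\ln(y) A_{1}}=
\left[\begin{matrix}
1 & 0 & 0\\
0 & \cosh(\ln y) & \sinh(\ln y)\\
0 & \sinh(\ln y) & \cosh(\ln y)
\end{matrix}\right],
\]
where $\cosh(\ln y)=\tfrac12(y+y^{-1})$ and $\sinh(\ln y)=\tfrac12(y-y^{-1})$.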
\bigskip
\step[Comparison with $\sltr$]
We use the standard isomorphism of Lie algebras $\frak{sl}(2,\bbr)$ and
$\frak{so}(2,1)$, sending the basis
$$
\left[\begin{matrix}
0 &1\\
0 &0
\end{matrix}\right],\quad
\tfrac12
\left[\begin{matrix}
1 &0\\
0 &-1
\end{matrix}\right],\quad
\tfrac12
\left[\begin{matrix}
0 &-1\\
1 &0
\end{matrix}\right]
$$
to the basis
$$
N_1,\quad A_{1},\quad -E_{12}.
$$
With the above identification $\varphi$ in diagram (\ref{varphi}), we see the
following correspondence:
$$
\bbr\x\bbr^+ \ni
(x,y)
\longleftrightarrow
\left[\begin{matrix}
1 &x\\
0 &1
\end{matrix}\right]
\left[\begin{matrix}
\sqrt{y} &0\\
0 &\tfrac{1}{\sqrt{y}}
\end{matrix}\right]
\longleftrightarrow
e^{x N_1} e^{\ln(y) A_{1}}
\in NA.
$$
For the compact subgroup $\so(2)$, this isomorphism yields a 2-to-1
covering transformation
$$
\left[\begin{matrix}
\cos \tfrac z2 &-\sin \tfrac z2\\
\sin \tfrac z2 &\phantom{-}\cos \tfrac z2
\end{matrix}\right]
\longleftrightarrow
\left[\begin{matrix}
\phantom{-}\cos z &\sin z &0\\
-\sin z &\cos z&0\\
0&0&1
\end{matrix}\right]
=e^{z(-E_{12})}.
$$
Therefore, in order to conform with the ordinary M\"obius
transformations of $\so(2)$ on the upper half-plane model, the group
$\so(2)\subset\so_0(2,1)$ will be parametrized by $e^{z (-E_{12})}$
rather than by $e^{z E_{12}}$.
\step[Riemannian metric on $\mathcal H^2$]
With the Riemannian metric on $\so_0(2,1)$ induced by the orthonormal basis
$\{E_{13}, E_{23}, E_{12}\}$,
the group of isometries is
$$
\isom_0(\so_0(2,1))=\so_0(2,1)\x\so(2).
$$
The subgroup $\so(2)\subset\so_0(2,1)$ acts on $\so_0(2,1)$ as left translations,
$\ell(K)$, freely and properly, yielding a submersion. The quotient space
$\so(2)\bs\so_0(2,1)$ acquires a unique Riemannian metric that makes the projection,
$\proj: \so_0(2,1)\lra\so(2)\bs\so_0(2,1)$, a Riemannian submersion.
It has a natural smooth (non-metric) cross section $NA$ in $KNA=NAK$.
At any $p\in\so_0(2,1)$, the vector
$\ell(p)_*(E_{ij})$ is just matrix multiplication $p E_{ij}$, and
$$
\{p E_{13}, p E_{23}, p E_{12}\}
$$
is an orthonormal basis at $p$.
The isometric $\ell(K)$-action induces a homogeneous foliation on $\so_0(2,1)$.
The leaf passing through $p$ is $K p$, the orbit containing $p$. Therefore,
the vertical vector is $E_{12} \, p$. We can find a new orthonormal basis
$\{\mathbf v_1,\mathbf v_2,\mathbf v_3\}$, where the last vector $\mathbf v_3$ is the normalized
$E_{12} \, p$. More explicitly, we write $E_{12}p$ as a combination
of the above orthonormal basis:
\begin{align*}
\mathbf u_3&=E_{12}p=g_1(p)\ p E_{13} + g_2(p)\ p E_{23} + g_3(p)\ p E_{12},\\
\intertext{and set}
\mathbf u_1&=-g_3(p)\ p E_{13} + g_1(p)\ p E_{12}.
\end{align*}
Then take the cross product $\mathbf u_3\x \mathbf u_1$ as $\mathbf u_2$. Now normalize
$\{\mathbf u_1,\mathbf u_2,\mathbf u_3\}$ to get $\{\mathbf v_1,\mathbf v_2,\mathbf v_3\}$.
Thus, $\{\mathbf v_1,\mathbf v_2\}$ is an orthonormal basis for the
horizontal distribution to the homogeneous foliation generated by the
$\ell(K)$-action. We want the projection, $\proj: \so_0(2,1)\ra\mathcal H^2=\so(2)\bs\so_0(2,1)$,
to be an isometry on the horizontal spaces. Since we are using the
global coordinate system
\begin{equation}
\label{trivialization-2}
\xymatrix@C=4.5pc@R=1pc{
\bbr\x\bbr^+\ar[r]^{\varphi}_\cong &NA\ar[r]^{\kern-48pt\proj|_{NA}}_{\kern-48pt \cong} &\mathcal H^2=\so(2)\bs\so_0(2,1)
}
\end{equation}
on $\mathcal H^2$, we take the projection $T_p(\so_0(2,1))=T_p(NAK)\ra T_p(NA)$
for $p\in NA$.
Expressing the images of $\mathbf v_1, \mathbf v_2$ by this projection in terms of
$\{\frac{\partial}{\partial x},\frac{\partial}{\partial y}\}$, we get
\begin{align*}
\mathbf w_1&=
-\frac{\sqrt{\left(x^2+1\right)^2+y^4}}{\sqrt{2} y} \frac{\partial}{\partial x} \Big|_{(x,y)}
-\frac{\sqrt{2} x \left(x^2+1\right)}{\sqrt{\left(x^2+1\right)^2+y^4}}\
\frac{\partial}{\partial y}\Big|_{(x,y)} \\
\mathbf w_2&=
y \sqrt{\frac{2 x^2 y^2}{\left(x^2+1\right)^2+y^4}+1}\
\frac{\partial}{\partial y} \Big|_{(x,y)}.
\end{align*}
\bigskip
\begin{proposition}
\label{ON-on-r2}
The Riemannian metric on the quotient of the
Riemannian submersion $\so_0(2,1)\ra \mathcal H^2=\so(2)\bs\so_0(2,1)$ is given by the
orthonormal basis $\{\mathbf w_1,\mathbf w_2\}$.
\end{proposition}
The space $\mathcal H^2$ is always assumed to have this metric.
\bigskip
\step[Subgroup $NA$ with the left-invariant metric]
We mention that the left-invariant metric restricted on the subgroup $NA$
yields a space isometric to the quotient $\bbh^2=\so_0(2,1)/\so(2)$:
The subgroup $NA$ with the Riemannian metric induced from that of
$\so_0(2,1)$ has an orthonormal basis
$\{\frac{1}{\sqrt{2}} N_1, \ A_1\}$
at the identity,
while the quotient $\so_0(2,1)/\so(2)$ is isometric to the Lie group
$NA$ with a new left-invariant metric coming from the orthonormal basis
$\{N_1, \ A_1\}$.
These two are isometric by $(x,y)\mapsto (\sqrt{2} x,y)$,
and have the same constant sectional curvatures $-1$.
Similar statements are true for general $n$.
\bigskip
\step[Global trivialization of $\bbh^2$]
With the same global coordinate system
$$
\xymatrix@C=4.5pc@R=1pc{
\bbr\x\bbr^+\ar[r]^{\varphi}_\cong
&NA\ar[r]^{\kern-48pt\proj|_{NA}}_{\kern-48pt \cong} &\bbh^2=\so_0(2,1)/\so(2),
}
$$
$\bbh^2$
has the orthonormal basis
$\{y\frac{\partial}{\partial x},y\frac{\partial}{\partial y}\}$
(on the plane $\bbr\x\bbr^+$).
Then the projection $T_p(\so_0(2,1))=T_p(NAK)\ra T_p(NA)$ for $p\in NA$
is a Riemannian submersion, that is, an isometry on the horizontal spaces.
\bigskip
\step[Special point $(0,1)$]
Note also that the vector fields $\{\mathbf w_1,\mathbf w_2\}$ are globally defined
and smooth (including the point $(0,1)$). This is significant because
we shall use the fact that our space with the point $(0,1)$ removed is a
warped product to calculate curvatures etc. Since the curvature is a smooth
function of the orthonormal basis, the curvatures at the point $(0,1)$
will simply be the limit of the curvature, $\lim_{(x,y)\ra(0,1)}
\kappa(x,y)$.
\bigskip
\step[$r(K)$-action on $\mathcal H^2$ vs. $\ell(K)$-action on $\bbh^2$]
Observe that $r(K)$ normalizes (in fact, centralizes) the left action
$\ell(K)$, and hence, it induces an isometric action on the quotient
{$K\bs G$}. We need to study this isometric $r(K)$-action in detail.
First we consider the isometric action $\ell(K)$ on the hyperbolic
space $\mathbb H^2=G/K$. For $p\in NA$ and $k\in K$, suppose $k\cdot p=p_1
k_1$. Then $k\cdot (p K)=p_1 K$. That is
$$
\ell(k)\cdot \bar p =\bar p_1 \text{ in } G/K \quad\text{ (if $k\cdot p=p_1 k_1$ for some $k_1$)}.
$$
Now for our $r(K)$-action on $\mathcal H^2=K\bs G$, let $p\in NA$ and $k\in
K$. Suppose $p\cdot k=k_2 p_2$. Then $(K p)\cdot k=K p_2$.
That is
$$
r(k)\cdot \bar p =\bar p_2 \text{ in } K\bs G \quad\text{ (if $p\cdot k=k_2 p_2$ for some $k_2$)}.
$$
\begin{proposition}
\label{r-moebius}
In $xy$-coordinates for $\mathcal H^2=\so(2)\bs\so_0(2,1)$ {\rm(}upper half-plane{\rm)},
the isometric $r(K)$-action on $\mathcal H^2$ is given by:
\noindent For\ ${\hat z}=e^{z(-E_{12})}= \left[\begin{matrix}
\phantom{-}\cos z &\sin z &0\\ -\sin z &\cos z &0\\ 0&0&1
\end{matrix}\right]
\in K$ and $(x,y)\in \mathcal H^2$,
\begin{multline*}
r({\hat z}) \cdot (x,y) =\frac{1}{2y}
\Big(-(-x^2+y^2-1)\sin z+2x y\cos z,\Big.\\
\Big.(-x^2+y^2-1) \cos z+2 x y \sin z+x^2+y^2+1\Big).
\end{multline*}
In vector notation,
$$
r(\hat z)\cdot\left[\begin{matrix} x \\ y \end{matrix}\right]
=\left[\begin{matrix} \cos z &-\sin z \\ \sin z &\phantom{-}\cos z \end{matrix}\right]
\left(\left[\begin{matrix} x \\ y \end{matrix}\right]
-\left[\begin{matrix} 0 \\ \tfrac{1+x^2+y^2}{2y} \end{matrix}\right]\right)
+\left[\begin{matrix} 0 \\ \tfrac{1+x^2+y^2}{2y} \end{matrix}\right].
$$
\end{proposition}
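For example, the formula gives $r({\hat z})\cdot(0,1)=(0,1)$ for every $z$, so that $(0,1)$ is a
fixed point of the $r(K)$-action, while taking $z=\pi$ yields
\[
r(\hat\pi)\cdot(x,y)=\Big(-x,\ \frac{x^2+1}{y}\Big),
\]
the point of the Euclidean circle described in the next paragraph diametrically opposite to
$(x,y)$; in particular $r(\hat\pi)\cdot(0,y)=(0,\tfrac{1}{y})$.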
\bigskip
\step
Note that $r(\hat z)$ is a ``Euclidean rotation'' with an appropriate center.
More precisely, each $(x,y)$ is on the Euclidean circle centered at
$\left(0,\tfrac{1+x^2+y^2}{2y}\right)$ with radius
$\sqrt{x^2+\left(y-\tfrac{1+x^2+y^2}{2y}\right)^2}$, and $r(\hat z)$ rotates the
point $(x,y)$ along this circle. This can be seen by calculations.
The $\ell(K)$-action on $\bbh^2$ is the genuine M\"obius
transformation, and is given by
$$
\ell({\hat z}) \cdot (x,y)=
\frac{1}{L}
\Big(\left(x^2+y^2-1\right) \sin z+2 x \cos z,\ 2 y\Big)
$$
with
$$
L={-\left(x^2+y^2-1\right) \cos z+2 x \sin z+x^2+y^2+1}.
$$
The relation between $r(K)$-action on $\mathcal H^2$ and $\ell(K)$-action on
$\bbh^2$ will be stated in Proposition \ref{tau2} more clearly.
\bigskip
\step
Both $r(K)$- and $\ell(K)$-actions have a unique fixed point at
$(0,1)$, and all the other orbits are Euclidean circles
{centered on the $y$-axis}. This implies that the geometry is
completely determined by the geometry
at the points on the $y$-axis (more economically,
on the subset $[1,\infty)$ of the $y$-axis). The orthonormal bases at the points
of $y$-axis are important. From Proposition \ref{ON-on-r2}, we have
\begin{corollary}
\label{onbasis-on-y-axis}
At $(0,y)\in\mathcal H^2$ with $y>1$, the orthonormal system is
\begin{align*}
\mathbf w_1&=-\sqrt{\cosh(2\ln y)} \frac{\partial}{\partial x} \Big|_{(0,y)}\\
\mathbf w_2&=y \frac{\partial}{\partial y} \Big|_{(0,y)}.
\end{align*}
\end{corollary}
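This follows directly from Proposition \ref{ON-on-r2} by setting $x=0$, since
\[
\frac{\sqrt{(x^2+1)^2+y^4}}{\sqrt{2}\, y}\bigg|_{x=0}
=\frac{\sqrt{1+y^4}}{\sqrt{2}\, y}
=\sqrt{\frac{y^2+y^{-2}}{2}}
=\sqrt{\cosh(2\ln y)}.
\]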
\bigskip
With the orthonormal basis on the upper half-plane model given in
Proposition \ref{ON-on-r2}, we can calculate the sectional curvature.
\begin{theorem}
\label{2-dim_sec_curv}
On the space $\mathcal H^2=\so(2)\bs\so_0(2,1)$, the sectional curvature at $(x,y)$ is
$$
\kappa(x,y)=
\frac{4 y^2 \left(x^4+2 x^2 \left(y^2+1\right)+y^4+3 y^2+1\right)}{\left(x^4+2 x^2 \left(y^2+1\right)+y^4+1\right)^2}.
$$
In particular, $0<\kappa\leq 5$
and the maximum $5$ is attained at the point $(0,1)$.
\end{theorem}
\step
Note that, because of the isometric $r(K)$-action (see Proposition
\ref{r-moebius}), it is enough to know the
curvatures at the points on the $y$-axis,
$$
\kappa(0,y)=\frac{4 y^2 (1 + 3 y^2 + y^4)}{(1 + y^4)^2}.
$$
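In particular, $\kappa(0,1)=\tfrac{20}{4}=5$, while
\[
\kappa(0,y)\sim\frac{4}{y^{2}}\longrightarrow 0
\qquad (y\to\infty),
\]
so the curvature decays to $0$ far from the fixed point.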
As we shall see in Proposition \ref{tau2}, the $r(K)$-orbits
will be the geometric concentric circles centered at $(0,1)$.
These are Euclidean circles with different
centers, see Proposition \ref{r-moebius}. Over these
$r(K)$-orbits, $\kappa(x,y)$ is constant, of course. In fact, on the
geometric circle of radius $|\ln y|$, the curvature is $\kappa(0,y)$.
Here are graphs of the sectional curvatures.
Figure \ref{Curv-2dim}
shows that $\kappa=5$ is the maximum at $(0,1)$. The level
curves are the geometric circles centered at $(0,1)$ of $\mathcal H^2$.
\begin{figure}[!ht]
\centering
\mbox{\subfigure{\includegraphics[width=2.5in]{Curv01.eps}
\label{Curv-2dim}
\quad
\subfigure{\includegraphics[width=2.5in]{Curv02.eps} }}}
\caption{$\kappa$ for $(-5<x<5,\ 0<y<10)$, and the cross section at $x=0$}
\end{figure}
\bigskip
\bigskip
\section{$\mathcal H^2=\so(2)\bs\so_0(2,1)$ vs. $\bbh^2=\so_0(2,1)/\so(2)$}
Recall that both spaces $\so(2)\bs\so_0(2,1)$ and $\so_0(2,1)/\so(2)$
have isometric actions by circles, $r(K)$ and $\ell(K)$,
respectively.
The trivialization functions $\varphi$ and $\proj|_{NA}\circ
\varphi$ in diagram (\ref{trivialization-2}) will be suppressed sometimes.
From the weak $G$-equivariant diffeomorphism from $K\bs G$ to $G/K$
given by $K g\mapsto g\inv K$, we can define $\tau:\bbr\x\bbr^+ \lra
\bbr\x\bbr^+$ as in the following
\begin{proposition}
\label{tau2}
For $x\in\bbr^{1}$ and $y\in \bbr^+$,
$(0,y)(x,1)(0,y)\inv =(yx,1)$
(with the notation in the diagram {\rm (\ref{trivialization-2})})
so that
$$
(x,y)\inv=(-\tfrac{x}{y},\tfrac{1}{y}).
$$
The map
$$
\tau:\ \mathcal H^2=\so(2)\bs\so_0(2,1)\lra \mathbb H^2=\so_0(2,1)/\so(2)
$$
{\rm(}as a map $\bbr\x\bbr^+ \lra \bbr\x\bbr^+${\rm)}
defined by
$$
\tau(x,y)=(-\tfrac{x}{y},\tfrac{1}{y})
$$
has the following properties:
{\rm (1)}
$\tau$ is a weakly $\so(2)$-equivariant diffeomorphism of period 2.
More precisely,
$$
\tau(r({\hat z})\cdot(x,y))=\ell({\hat z}\inv)\cdot\tau(x,y)
$$
for $\hat{z}\in\so(2)$. In other words, the identification
of $\mathcal H^2$, $\bbh^2$ and $NA$ with $\bbr \times \bbr^+$ as sets
permits some abuse of $\tau$ and gives the following relation between
$r(K)$-action and $\ell(K)$-action:
$r({\hat z})\cdot(x,y)=\tau\left(\ell({\hat z}\inv)\cdot\tau(x,y)\right)$.
{\rm (2)}
$\tau$ leaves the geometric circles centered at $(0,1)$ in each geometry
invariant. That is, for $m>0$, the Euclidean circle
$$
x^2+(y-\cosh(\ln m))^2=\sinh^2(\ln m)
$$
is a geometric circle centered at $(0,1)$ with radius $|\ln m|$, in both
geometries, and the map $\tau$ maps such a circle to itself.
These circles are $r(K)$-orbits in $\mathcal H^2$ and $\ell(K)$-orbits in $\bbh^2$
(when $\mathcal H^2$ and $\bbh^2$ are identified with $\bbr\x\bbr^+$) at the same time.
{\rm (3)} $\tau$ gives a 1-1 correspondence
between the two sets of all the geodesics passing through $(0,1)$
in the two geometries $\mathcal H^2$ and $\mathbb H^2$. In fact, $\tau$ maps
the $y$-axis to itself and half-circles $\{
(x-\alpha)^2+y^2=\alpha^2+1 \}_{\alpha\in\bbr}$ to hyperbolas $\{
x^2+2\alpha xy-y^2+1=0 \}_{\alpha\in\bbr}$.
\end{proposition}
\begin{proof}
(1) Observe that $\tau(x,y)$ corresponds to the inverse of $\varphi(x,y)$
in the group $N A$. In fact, we have
$$
\varphi(\tau(x,y))=(\varphi(x,y))\inv.
$$
For $\varphi(x,y)\in N A$ and $\hat z\in K$, one can find $k\in K$ for which
$$
k \cdot \varphi(x,y) \cdot \hat{z} \in NA.
$$
Thus,
\begin{align*}
\varphi\big(\tau(r(\hat z)\cdot (x,y))\big)
&=\big(\varphi(r(\hat z)\cdot (x,y))\big) \inv \\
&=(k \cdot \varphi(x,y)\cdot \hat z\big)\inv\\
&=\hat z\inv\cdot (\varphi(x,y))\inv \cdot k \inv\\
&=\hat z\inv\cdot \big(\varphi (\tau (x,y)) \big) \cdot k \inv\\
&=\varphi\big(\ell(\hat z)\inv\cdot \tau(x,y)\big).
\end{align*}
(2)
Any $(x,y)$ lies on the Euclidean circle centered at $(0,c)$,
where $c=\tfrac{1+x^2+y^2}{2y}$ and radius
$r=\sqrt{x^2+\left(y-\tfrac{1+x^2+y^2}{2y}\right)^2}$. In particular,
$(0,m)$ lies on the Euclidean circle centered at $(0,c)$, where
$c=\cosh(\ln m)$ and radius $r= | \sinh(\ln m) |$. Note $m=\cosh(\ln
m) + \sinh(\ln m)$ and $\tfrac{1}{m}=\cosh(\ln m) - \sinh(\ln m)$,
which show that both $(0,m)$ and $(0, \tfrac{1}{m})$ lie on the same
circle. Then, in $\bbr \times \bbr^+$,
$$
r(K)\cdot(0,m)=\tau(\ell(K)\cdot(0,\tfrac 1m))=\ell(K)\cdot(0,\tfrac
1m)=\ell(K)\cdot(0,m)
$$
shows this circle is both $r(K)$-orbit of the point $(0,m)$ (in
$\mathcal H^2$) and its $\ell(K)$-orbit (in $\bbh^2$) at the same time.
Since both $r(K)$-action on $\mathcal H^2$ and $\ell(K)$-action on $\bbh^2$
are isometric, every point on the circle has the same distance from
$(0,1)$ in each geometry.
In $xy$-coordinates, the equations
for geodesics in $\mathcal H^2$ are a system of 2 equations
\small{
\begin{align*}
0=&x''(t){\left(2 x (t)^2 y (t)^3+\left(x (t)^2+1\right)^2 y (t)+y (t)^5\right)^2}
\\
&
-2 y (t) x'(t) y'(t)
\Big(
x (t)^6 \left(4 y (t)^2+2\right)+x (t)^4 \left(6 y (t)^4+8 y
(t)^2\right)\Big) \\
&
-2 y (t) x'(t) y'(t)
\Big(
2 x (t)^2 \left(2 y (t)^6+y (t)^4+2 y (t)^2-1\right)+x (t)^8+y
(t)^8-1
\Big) \\
&-4 x (t) y (t)^2 x'(t)^2\left(x (t)^2+1\right)^2 \\
&+ x (t) y'(t)^2
\left(4 \left(x (t)^2+1\right) y (t)^6+2 \left(3 x (t)^4+4 x(t)^2+1\right)
y (t)^4\right)\\
&+ x (t) y'(t)^2
\left(
4 \left(x (t)^2+1\right)^3 y (t)^2+\left(x (t)^2+1\right)^4+y (t)^8\right)\\
\intertext{and}
0=&
{y (t) \left(2 x (t)^2 \left(y (t)^2+1\right)+x (t)^4+y (t)^4+1\right)^2}
y''(t)\\
&
-4 x (t) y (t) x'(t)y'(t)
\left(
x (t)^4 \left(3 y (t)^2+1\right)+x (t)^2 \left(3 y
(t)^4+4 y (t)^2-1\right)\right) \\
&-4 x (t) y (t) x'(t)y'(t)
\left(
x (t)^6+y (t)^6+y (t)^4+y (t)^2-1\right)
\\
&
+2 y (t)^2 x'(t)^2
\left(
3 x (t)^4 \left(y (t)^2-1\right)+x (t)^2 \left(y
(t)^2+1\right) \left(3 y (t)^2-5\right)\right)\\
&+2 y (t)^2 x'(t)^2
\left(
x (t)^6+y (t)^6-y (t)^4-y
(t)^2-1
\right)
\\
&+
y'(t)^2
\left(
2 x (t)^6 \left(y (t)^2+1\right)+4 x (t)^4 y (t)^2-2 x (t)^2 \left(y
(t)^6+y (t)^4-y (t)^2+1\right)\right) \\
&+
y'(t)^2
\left(
x (t)^8-\left(y (t)^4+1\right)^2
\right).
\end{align*}
}
\noindent
One can readily check that
$$
\gamma(t)=(0,e^t)\in\mathcal H^2,\ 0\leq t\leq |\ln (m)|=|\ln (\tfrac{1}{m})|
$$
is a unit-speed geodesic, and therefore,
$\text{Length}(\gamma)=|\ln(m)|$.
This is the geometric radius of the circle centered at $\mathbf i=(0,1)$
$\in\mathcal H^2$.
{(3) Let $\mathcal{G}_{\mathcal H^2}$ and
$\mathcal{G}_{\mathbb H^2}$ be the sets of all the unit-speed geodesics
starting from $\mathbf i$ in $\mathcal H^2$ and $\mathbb H^2$, respectively. Then
$$
\mathcal{G}_{\mathcal H^2}= \{r(k)\cdot \gamma(\bullet): \bbr \lra \mathcal H^2 \}_{k\in
K}
$$
and
$$
\mathcal{G}_{\mathbb H^2}= \{l(k)\cdot \gamma(\bullet) : \bbr \lra \mathbb H^2 \}_{k\in K},
$$
since $\gamma\in \mathcal{G}_{\mathcal H^2}\cap \mathcal{G}_{\mathbb H^2}$.}
The 1-1 correspondence between
$\mathcal{G}_{\mathcal H^2}$ and $\mathcal{G}_{\mathbb H^2}$ by $\tau$ comes
from the weak equivariance of $\tau$ and the fact
$\tau(\gamma(t))=\gamma(-t)$. In fact, for $k\in K$ and $t\in
\bbr$,
$$
r(k)\cdot \gamma(t)=\tau(\ell(k^{-1})\cdot \tau(\gamma(t)))
=\tau(\ell(k^{-1})\cdot\gamma(-t))
=\tau(\ell(k^{-1})\cdot \ell(\hat{\pi})\cdot \gamma(t)).
$$
Finally, we can check easily that for each
$\alpha\in\bbr$, the hyperbola $x^2+2\alpha xy-y^2+1=0 $, a $\mathcal H^2$-geodesic,
corresponds to the half-circle $(x-\alpha)^2+y^2=\alpha^2+1$, a
$\mathbb H^2$-geodesic.
\end{proof}
\bigskip
\begin{theorem} \label{prop:warped-2-dim}
The space $\mathcal H^2-\{\mathbf i\}$ is isometric to the warped
product $B \times_{e^{2 \phi}} S^1$, where $B=(1,\infty)=\{(0,y):\
1<y<\infty\} \subset\mathcal H^2$ has the induced metric; that is,
$|\tfrac{\partial}{\partial t}(t_0)| = \tfrac{1}{t_0}$ for $t_0 \in
(1, \infty)$, $S^1$ has the standard metric; and $e^{2 \phi (t)} =
\frac{\sinh ^2 (\ln t)}{\cosh (2\ln t)}$.
\end{theorem}
\begin{proof}
The crucial points are that $r(K)\subset\isom(\mathcal H^2)$ and
that all the other orbits are circles, except for the one fixed point
$\mathbf i=(0,1)$.
This will make our space a warped product of
$S^1$ by the base space $B$, and we need to find a map $\phi$ in $B
\times_{e^{2 \phi}} {S}^1$. The $r(K)$-orbit through $(0,y)\in\mathcal H^2$
is, by Proposition \ref{r-moebius},
$$
r({\hat z})\cdot (0,y)=
\big(-\sinh(\ln y) \sin z,\ \sinh(\ln y)\cos z + \cosh(\ln y)\big).
$$
Define a map
$$
f : B \times_{e^{2 \phi}} {S}^1 \lra \mathcal H^2 =\so(2)\bs\so_0(2,1)
$$
by
\begin{align*}
f(t,{\hat z})
&=f(t,{\hat z} \cdot \hat{0})\\
&=r({\hat z} ^{-1})\cdot (0,t)\\
&=r(\widehat{-z}) \cdot (0,t)\\
&=\big(\sinh(\ln t) \sin z,\ \sinh(\ln t)\cos z + \cosh(\ln t)\big).
\end{align*}
Note that the definition of $f$ does not depend on $e^{2\phi}$, and that $f$ is
weakly equivariant with respect to the $r(\so(2))$-action before any metric
is involved. Since $f$ maps the base $B\x \hat 0$ of the warped
product to the $y$-axis of $\mathcal H^2$, it is enough to find $e^{2\phi}$
which makes $f$ isometric on $B\x \hat 0$.
Recall that $\mathcal H^2$ has an orthonormal basis
$$
\Big\{
-\sqrt{\cosh(2\ln t)} \frac{\partial}{\partial x} \Big|_{(0,t)}, \
t \frac{\partial}{\partial y} \Big|_{(0,t)}
\Big\}
$$
at $f(t,\hat0)=(0,t)$,\ $t>1$, see Corollary
\ref{onbasis-on-y-axis}.
Also note that the metric on $B\x_{e^{2\phi}} S^1$ is given by the
orthonormal basis
$$
\Big\{t\frac{\partial}{\partial t} \Big|_{(t, {\hat z})},\
-e^{-\phi (t)} \frac{\partial}{\partial {\hat z}} \Big|_{(t, {\hat z})} \Big\}
$$
at $(t,{\hat z})$.
Observe
\begin{align*}
f_* \left(\frac{\partial}{\partial t} \Big|_{(t,\hat 0)}\right)
&=\frac{d (f\circ t)}{d t} \Big|_{(t,\hat 0)}\\
&=\frac{\partial}{\partial t}\big(f(t,{\hat z})\big)\Big|_{z=0}\\
&=\frac{1}{t}\Big(\cosh(\ln t)\sin z
\frac{\partial}{\partial x} \Big|_{f(t,{\hat z})}\\
&{\hskip48pt}+\big(\cosh(\ln t)\cos z
+\sinh(\ln t)\big)
\frac{\partial}{\partial y} \Big|_{f(t,{\hat z})}
\Big)\Big|_{z=0}\\
&=\frac{\partial}{\partial y} \Big|_{f(t,\hat 0)}\\
&=\frac{\partial}{\partial y} \Big|_{(0,t)}\\
\intertext{and, we have}
f_* \left(t\frac{\partial}{\partial t }\Big|_{(t,\hat 0)}\right) &=
t \frac{\partial}{\partial y} \Big|_{(0,t)}.\\
\end{align*}
\noindent
Thus, if
\begin{align}
\label{warp1}
f_* \left(e^{-\phi (t)} \frac{\partial}{\partial {\hat z}} \Big|_{(t,\hat 0)}\right)
&= \sqrt{\cosh \big(2 \ln t
\big)} \frac{\partial}{\partial x} \Big|_{(0,t)},\\
\intertext{then $f$ will be an isometry. Now,}
\label{warp2}
f_* \left(e^{-\phi (t)} \frac{\partial}{\partial {\hat z}} \Big|_{(t,\hat 0)}\right)
&=e^{-\phi (t)} \frac{d (f\circ {\hat z})}{d {\hat z}} \Big|_{(t,\hat 0)}\\
\notag
&=e^{-\phi (t)}\Big(\sinh(\ln t) \cos z \frac{\partial}{\partial x} \Big|_{f(t,{\hat z})}\\
\notag
&{\hskip48pt}-\sinh(\ln t)\sin z \frac{\partial}{\partial y} \Big|_{f(t,{\hat z})}\Big)
\Big|_{z=0}\\
\notag
&=e^{-\phi (t)} \sinh(\ln t)\frac{\partial}{\partial x} \Big|_{(0,t)}.
\end{align}
From the equalities (\ref{warp1}) and (\ref{warp2}), the condition is then
$$
\sqrt{\cosh(2\ln t)}=e^{-\phi (t)} \sinh(\ln t),
$$
which implies $ e^{2 \phi (t)} = \frac{\sinh ^2 (\ln t)}{\cosh(2\ln
t)}$.
\end{proof}
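Under the isometry $f$, the fibre $\{t\}\x S^1$ corresponds to the $r(K)$-orbit through $(0,t)$
and therefore has length $2\pi e^{\phi(t)}$. Since
\[
e^{2 \phi (t)} = \frac{\sinh ^2 (\ln t)}{\cosh (2\ln t)}
=\frac12-\frac{1}{2\cosh(2\ln t)}\nearrow\frac12
\qquad(t\to\infty),
\]
these orbit circles all have length less than $\sqrt{2}\,\pi$, although their geometric radii
$|\ln t|$ are unbounded; this is in sharp contrast with $\bbh^2$, where a circle of radius
$\rho$ has length $2\pi\sinh\rho$.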
\bigskip
We calculate $\kappa(0,y)$ again using the warped product. The result
conforms with Theorem \ref{2-dim_sec_curv}.
\begin{corollary}
For $(t,0) \in B \times_{e^{2 \phi}} S^1$,
$$
\kappa(t,0)=\frac{4 t^2 (1 + 3 t^2 + t^4)}{(1 + t^4)^2}.
$$
\end{corollary}
\begin{proof}
From $ e^{2 \phi (t)} = \frac{\sinh ^2 (\ln t)}{\cosh (2\ln t)}$, we
get
$$
\{
t \tfrac{\partial}{\partial t} \! \mid_{(t,0)} ,
-\tfrac{\sqrt{\cosh (2 \ln t)}}{\sinh (\ln t)}
\tfrac{\partial}{\partial \hat{z}} \! \mid_{(t,0)}
\}
$$
is an orthonormal basis at $(t,0) \in B \times_{e^{2 \phi}}
S^1$ and
$$\phi(t) = \ln (\sinh (\ln t)) - \tfrac{1}{2} \ln (\cosh (2 \ln t)).$$
Since $\phi$ is constant along each circle,
\begin{align*}
\nabla \phi \! \mid_{(t,0)}
&= \langle \nabla \phi, t
\tfrac{\partial}{\partial t}\rangle \,
t \tfrac{\partial}{\partial t}\mid_{(t,0)} \\
&= (t \tfrac{\partial \phi}{\partial t}) \,
t \tfrac{\partial}{\partial t} \mid_{(t,0)} \\
&= (\coth(\ln t) - \tanh (2 \ln t)) \,
t \tfrac{\partial}{\partial t} \mid_{(t,0)} .
\end{align*}
For a tangent vector $T \in T(S^1)$ and $X, Y \in T(S^{1})^\perp$
in the warped product, we have
$$
R(X, T)Y =
\big(
h_{\phi} (X,Y) +
\langle \nabla \phi , X \rangle \langle \nabla \phi , Y \rangle
\big)
T
$$
and so
$$
\langle R(X, T)T, Y \rangle _{\phi} =
- e^{2 \phi} |T|^2 _{S^1}
\big(
h_{\phi} (X,Y) + \langle \nabla \phi , X \rangle \langle \nabla \phi , Y \rangle
\big),
$$
where $h_{\phi}$ is the Hessian form, see \cite[p.60,
Proposition 2.2.2, Corollary 2.2.1]{GW}. Since
\begin{align*}
h_{\phi}(t \tfrac{\partial}{\partial t}, t \tfrac{\partial}{\partial t})
&= \langle
\nabla _{t \tfrac{\partial}{\partial t}} \nabla \phi, \,
t \tfrac{\partial}{\partial t}
\rangle \\
&= - \csch^2 (\ln t) - 2 \, \sech ^2 (2 \ln t) \\
&= - \csch^2 (\ln t) - 2 + 2 \tanh ^2 (2 \ln t),
\end{align*}
\begin{align*}
\kappa (
t \tfrac{\partial}{\partial t} ,
-\tfrac{\sqrt{\cosh (2 \ln t)}}{\sinh (\ln t)}
\tfrac{\partial}{\partial \hat{z}}
)
&= - \big(
\langle
\nabla \phi, t \tfrac{\partial}{\partial t}
\rangle ^2
+
h_{\phi}(t \tfrac{\partial}{\partial t}, t \tfrac{\partial}{\partial t})
\big) \\
&= 1 - 3 \tanh ^2 (2 \ln t) + 2 \coth (\ln t) \cdot \tanh (2 \ln t) \\
&= \tfrac{4 t^2 (1 + 3 t^2 + t^4)}{(1 + t^4)^2}.\qedhere
\end{align*}
\end{proof}
\begin{remark}
The following are well known: the space $\mathbb H^2-\{\mathbf i\}$ is isometric
to the warped product $(0,1) \times_{e^{2 \psi}} S^1$, where
$(0,1)\subset\mathbb H^2$ has the induced metric from $\mathbb H^2$, that
is, $|\tfrac{\partial}{\partial t}(t_0)| = \tfrac{1}{t_0}$ for $t_0
\in (0,1)$; $S^1$ has the standard metric; and $e^{2 \psi (t)}
= {\sinh ^2 (\ln t)}$.\\
The isometry can be given by
$$
\tilde{f} :
(0,1) \times_{e^{2 \psi}} {S}^1\lra \mathbb H^2-\{\mathbf i\}
$$
defined by
$$
\tilde{f}(s,\hat{u}) = \ell(\hat{u}) \cdot (0, s).
$$
See, for example, \cite[p.58, Theorem 2.2.1]{GW}.
\end{remark}
\begin{corollary}
The map $\tau$ induces a map on the warped products
$$
\tau': (1,\infty) \x_{e^{2\phi}} S^1 \lra (0,1) \x_{e^{2\psi}} S^1
$$
given by
$$\tau' (t,\hat{z}) = (\tfrac{1}{t}, \hat{z}),$$
which is $\so(2)$-equivariant and satisfies
$\tilde{f} \circ \tau ' = \tau \circ f$.
\end{corollary}
The following commutative diagram shows more detail:
$$
\CD
(1,\infty) \x_{e^{2\phi}} S^1 @>{\tau'}>>
(0,1) \x_{e^{2\psi}} S^1
\\
@V{f}VV @V{\tilde{f}}VV
\\
\mathcal H^2=\so(2)\bs\so_0(2,1) @>\tau>>
\bbh^2=\so_0(2,1)/\so(2)
\endCD
$$
\vspace{0.5cm}
$$
\CD
(t,\hat{z} \cdot \hat{0})=(t,\hat{z}) @>{\tau'}>>
(\tfrac{1}{t},\hat{z})=(\tfrac{1}{t},\hat{z}\cdot\hat{0})
\\
@V{f}VV @V{\tilde{f}}VV
\\
r(\widehat{-z}) \cdot (0,t) @>\tau>>
\ell({\hat z}) \cdot (-\tfrac{0}{t}, \tfrac{1}{t})
\endCD
$$
\begin{figure}[!ht]
\centering
\includegraphics[width=2.5in]{Curv03.eps}
\caption{Geometric circles and orthogonal geodesics in two geometries.\
$R= r(\hat{\tfrac{\pi}{7}})\cdot P$,\
$L=\ell(\hat{\tfrac{\pi}{7}})\cdot P$, and $R'=\tau(R)$, $L'=\tau(L)$.}
\end{figure}
\bigskip
\section{The general case: $\so(n)\bs\so_0(n,1)$}
\step[Subgroup $NA$ with the left-invariant metric]
As is well known, the subgroup $NA$ has the structure of a solvable Lie group
$N\rx A$, where
$$
N\cong \bbr^{n-1},\quad A\cong\bbr^+.
$$
The subgroup $NA$ with the Riemannian metric induced from that of
$\so_0(n,1)$ has an orthonormal basis
$$
\{\tfrac{1}{\sqrt{2}} N_1,\ \tfrac{1}{\sqrt{2}}N_2,\ \dots,\
\tfrac{1}{\sqrt{2}}N_{n-1},\ A_{1}\}.
$$
at the identity
while the quotient $\so_0(n,1)/\so(n)$ is isometric to the Lie group
$NA$ with a new left-invariant metric coming from the orthonormal basis
$$
\{N_1,\ N_2,\ \dots,\ N_{n-1},\ A_{1}\}.
$$
These two are isometric by $(\mathbf x,y)\mapsto (\sqrt{2}\mathbf x,y)$,
and have the same constant sectional curvatures $-1$.
\step[Global trivialization of $\bbh^n$]
With the Riemannian metric on $\so_0(n,1)$ induced by the orthonormal basis
$\{E_{ij}:\ 1\leq i < j\leq n+1\}$,
the group of isometries is
$$
\isom_0(\so_0(n,1))=\so_0(n,1)\x\so(n).
$$
The subgroup $\so(n)\subset\so_0(n,1)$ acts on $\so_0(n,1)$ as left translations,
$\ell(K)$, freely and properly, yielding a submersion. The quotient space
$\so(n)\bs\so_0(n,1)$ acquires a unique Riemannian metric that makes the projection,
$\proj: \so_0(n,1)\lra\so(n)\bs\so_0(n,1)$, a Riemannian submersion.
It has a natural smooth (non-metric) cross section $NA$ in $KNA=NAK$.
A map
\begin{align*}
\varphi:\ \bbr^{n-1}\x\bbr^+ &\lra NA\\
(\mathbf x,y)&\ \mapsto\ e^{\sum_{i=1}^{n-1}x_i N_i} e^{\ln(y) A_1},
\end{align*}
where $\mathbf x=(x_1,\dots,x_{n-1})$, gives rise to a global
trivialization for the subgroup $NA$ and our space $\mathcal H^n$. Thus, we
shall use $(\mathbf x,y)$ to denote a point in $\mathcal H^n\cong NA$.
\step
Note, for $\mathbf x\in\bbr^{n-1}$ and $y\in \bbr^+$,
$$
(\mathbf 0,y)(\mathbf x,1)(\mathbf 0,y)\inv =(y \mathbf x,1).
$$
Even though we use the local trivialization $\mathcal H^n=\so(n)\bs \so_0(n,1)\ra N A$,
the metric on $\mathcal H^n$ is not related to the group structure of $N A$. That
is, the metric is neither left-invariant nor right-invariant.
\bigskip
\begin{theorem}
\label{isom-sono}
$\isom_0(\so(n)\bs\so_0(n,1))=r(\so(n))$.
\end{theorem}
\begin{proof}
The normalizer of $\ell(\so(n))$ in $\isom_0(\so_0(n,1))=\ell(\so_0(n,1))\x
r(\so(n))$ is $\ell(\so(n))\x r(\so(n))$.
Since $\ell(\so(n))$ acts ineffectively on the quotient, only $r(\so(n))$
acts effectively on the quotient as isometries.
Thus,
$\isom_0(\so(n)\bs\so_0(n,1))\supset r(\so(n))$.
Suppose these are not equal. Then there exists a point whose orbit contains
an open subset, since the $r(\so(n))$-orbits are already codimension 1.
This implies the sectional curvature is constant on such an open subset.
But this is impossible by Theorem \ref{sect-curvature-n}.
Notice that, for the calculation of the sectional curvature, we only need
the inequality above.
\end{proof}
For $a\in A$ and $k\in\so(n-1)\x\so(1) \subset K=\so(n)$,
$$ak = ka$$
and
$$
(K a)\cdot k = K ka = K a
$$
so that the stabilizer of $r(\so(n))$ at $a=\varphi(\mathbf 0,y),\ y\not=1,\ y
\in \bbr^+$, contains $\so(n-1)\x\so(1)$.
Let $S$ be the only subgroup of $K=\so(n)$ properly containing
$\so(n-1)\x\so(1)$. Then $\so(n-1)\x\so(1)$ has index 2 in $S$, and no
element of $S-\so(n-1)\x\so(1)$ can fix $a$. Thus, we have
\begin{corollary}
\label{son-stablizer} For the $r(\so(n))$-action on
{$\varphi\inv(NA)=\bbr^{n-1}\x\bbr^+$}, the stabilizer at
$(\mathbf 0,y),\ y\not=1$, is $\so(n-1)\x\so(1)$.
\end{corollary}
This can also be proved from the similar fact on $\bbh^n$ using the
weak $\so(n)$-equivariant map.
\bigskip
\step[Embedding of $\so_0(2,1)$ into $\so_0(n,1)$]
Consider the subgroup $\so_0(2,1)$ of $\so_0(n,1)$, as
$$
I_{n-2}\x\so_0(2,1)\subset\so_0(n,1),
$$
where $I_{n-2}$ is the identity matrix of size $n-2$.
For $k\in\ell(K)$ and $p\in\so_0(2,1)$, $k\cdot p\in\so_0(2,1)$ if and only if
$k\in\so_0(2,1)$.
Therefore, the space $\ell(\so(2))\bs\so_0(2,1)$ is isometrically embedded into
$\ell(K)\bs\so_0(n,1)$.
With this embedding, there is an orthonormal basis for this 2-dimensional
subspace:
\begin{align*}
\mathbf w_{n-1}&=c \ \frac{\partial}{\partial x_{n-1}} \Big|_{(\mathbf 0,y)}\\
\mathbf w_n&=y \frac{\partial}{\partial y} \Big|_{(\mathbf 0,y)}\\
\end{align*}
where {$c=-\sqrt{\cosh(2\ln y)}$.}
\bigskip
\step[Orthonormal basis of $\mathcal H^n$]
The right action of a matrix $k=\exp(\tfrac{\pi}{2}\cdot
E_{j,n-1})\in K$ ($j<n$) maps $(\mathbf x, y)=(x_1,\dots,x_j,\dots,$
$x_{n-1},$ $y)\in\mathcal H^n$ to $(\mathbf x' , y)=(x_{1},\dots,x_{n-1},\dots,
-x_{j},y)\in\mathcal H^n$. (i.e., exchanges the $(n-1)$st and $j$th slot).
More precisely, $\varphi(\mathbf x,y)\cdot k=k'\cdot \varphi(\mathbf x
',y)$ in $\so_0(n,1)$ for some $k'\in \so(n)$. By applying such a
right action on $NA$ for $j=1,2,\dots,n-2$, we get the orthonormal
system at $(0,\dots,0,y)\in\mathcal H^n$ with $y>1$:
\begin{align*}
\mathbf w_1&=c \ \frac{\partial}{\partial x_1} \Big|_{(\mathbf 0,y)}\\
\mathbf w_2&=c \ \frac{\partial}{\partial x_{2}} \Big|_{(\mathbf 0,y)}\\
\mathbf w_3&=c \ \frac{\partial}{\partial x_{3}} \Big|_{(\mathbf 0,y)}\\
&\cdots\\
\mathbf w_{n-1}&=c \ \frac{\partial}{\partial x_{n-1}} \Big|_{(\mathbf 0,y)}\\
\mathbf w_n&=y \ \frac{\partial}{\partial y} \Big|_{(\mathbf 0,y)}\\
\end{align*}
where {$c=-\sqrt{\cosh(2\ln y)}$.}
As before, we denote the upper half-space $\bbr^{n-1}\x\bbr^+$
with this metric by $\mathcal H^n$. The above shows that the
metric is very close to being conformal to the standard $\bbr^n$.
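More precisely, at the points $(\mathbf 0,y)$ the horizontal directions are scaled by the factor
$1/\sqrt{\cosh(2\ln y)}$ while the vertical direction is scaled by $1/y$; the two factors agree
exactly at $y=1$, and their ratio satisfies
\[
\frac{y}{\sqrt{\cosh(2\ln y)}}=\sqrt{\frac{2}{1+y^{-4}}}\in[1,\sqrt{2})
\qquad(y\geq 1),
\]
so along the $y$-axis the metric tensor is a multiple of the Euclidean one only at $y=1$,
although the two scaling factors never differ by more than a factor of $\sqrt{2}$.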
\bigskip
\step
Recall that both spaces $\mathcal H^n=\so(n)\bs\so_0(n,1)$ and
$\bbh^n=\so_0(n,1)/\so(n)$ have isometric actions by the maximal
compact subgroup, $r(K)$ and $\ell(K)$, respectively. The latter has
more isometries, $\ell(\so_0 (n,1))$.
\begin{proposition}
The map
$$
\tau:\ \mathcal H^n\lra \bbh^n
$$
{\rm(}as a map $\bbr^{n-1}\x\bbr^+ \lra \bbr\x\bbr^+${\rm)}
defined by
$$
\tau(\mathbf x,y)=(-\tfrac{\mathbf x}{y},\tfrac{1}{y})
$$
has the following properties:
{\rm (1)}
$\tau$ is a weakly $\so(n)$-equivariant diffeomorphism of period 2.
More precisely,
$$
\tau(r(z)\cdot(\mathbf x,y))=\ell(z\inv)\cdot\tau(\mathbf x,y)
$$
for $z\in\so(n)$.
In other words, the identification of $\mathcal H^n,
\bbh^n, NA, \text{ and } \bbr^{n-1} \times \bbr^+$ as sets permits
the following abuse of $\tau$ and gives the following relation between
the $r(K)$-action and the $\ell(K)$-action:
$r(z)\cdot(\mathbf x,y)=\tau\left(\ell(z\inv)\cdot\tau(\mathbf x,y)\right)$.
{\rm (2)} $\tau$ leaves the geometric spheres centered at
{$\mathbf i=(\mathbf 0,1)$} in each geometry invariant. That is,
for $m>0$, the Euclidean sphere
$$
|\mathbf x|^2+(y-\cosh(\ln m))^2=\sinh^2(\ln m)
$$
is a geometric sphere centered at $\mathbf i=(\mathbf 0,1)$ with radius $|\ln
m|$, in both geometries, and the map $\tau$ maps such a sphere to
itself. These spheres are $r(K)$-orbits in $\mathcal H^n$ and
$\ell(K)$-orbits in $\bbh^n$ (when $\mathcal H^n$ and $\bbh^n$ are
identified with $\bbr^{n-1}\x\bbr^+$) at the same time.
{\rm (3)} $\tau$ gives a 1-1 correspondence
between the two sets of all the geodesics passing through $\mathbf i$
in the two geometries $\mathcal H^n$ and $\mathbb H^n$.
\end{proposition}
\step
For the $\ell(K)$-action on the hyperbolic space $\mathbb H^n=G/K$, we can
take the ray $\{\mathbf 0\}\x (0,1]$ as a cross section to the
$\ell(K)$-action. Clearly, $\{\mathbf 0\}\x [1,\infty)$ is another cross
section. The cross section to the $r(K)$-action on $\mathcal H^n=K\bs G$ is
the ray {$\{\mathbf 0\}\x [1,\infty)$}. The action has a fixed point
$\mathbf i=(\mathbf 0,1)$, and all the other orbits are $\so(n)/\so(n-1) \cong
S^{n-1} \cong \so(n-1) \bs \so(n)$. The geometry of the whole space
$\mathcal H^n=K\bs G$ is completely determined by the geometry on the line
$\{\mathbf 0\}\x [1,\infty)$ as shown below.
\begin{theorem}
\label{warped-prod}
The space $\mathcal H^n-\{\mathbf i\}$ is isometric to the warped product
$(1,\infty) \times_{e^{2 \phi}} S^{n-1}$, where
$(1,\infty)$ has the induced metric from
$\{\mathbf 0\}\x(1,\infty)\subset\mathcal H^n$, that is,
$|\tfrac{\partial}{\partial t}(t_0)| = \tfrac{1}{t_0}$
for $t_0 \in (1, \infty)$;
$S^{n-1}$ has the standard metric;
and $e^{2 \phi (t)} = \frac{\sinh ^2 (\ln t)}{\cosh (2\ln t)}$.
\end{theorem}
\begin{proof}
The sphere $S^{n-1}\subset\bbr^n$ has a canonical $\so(n)$-action by matrix
multiplication.
Choose the north pole $\mathbf n=(0, \dots, 0,1) \in S^{n-1}$ as a base point.
Then the $\so(n)$-action induces an action on $(1,\infty)\x S^{n-1}$,
acting trivially on the first factor.
The space $\mathcal H^n-\{\mathbf i\}$ also has an (isometric) action by $r(\so(n))$.
Using these actions, we define
$$
f: (1,\infty)\x S^{n-1} \lra \mathcal H^n-\{\mathbf i\}
$$
by
$$
f(t,a\cdot\mathbf n)=f(a\cdot(t,\mathbf n))=r(a\inv)\cdot(\mathbf 0,t),
$$
where $(\mathbf 0,t)\in \bbr^{n-1}\x\bbr^+\subset\mathcal H^n$.
Since both actions have orbits $S^{n-1}$, and the stabilizers at
$(t,\mathbf n)$ and $(\mathbf 0,t)$ are both $\so(n-1)\x\so(1)\subset\so(n)$
(see Corollary \ref{son-stablizer}),
$f$ is well-defined, bijective and smooth.
\bigskip
Consider the subgroup $K_2$,
$$
K_2=I_{n-2}\x\so(2)\x I_1\subset\so(n)\x I_1\subset\so_0 (n,1).
$$
By taking the intersection of $(1,\infty)\x S^{n-1}$ and $\mathcal H^n-\{\mathbf i\}$
with the last 2-dimensional plane, we get isometric embeddings
$$
\CD
(1,\infty)\x S^{n-1} @>>> \mathcal H^n-\{\mathbf i\}\\
@A{\cup}AA @A{\cup}AA\\
(1,\infty)\x S^{1} @>>> \mathcal H^2-\{\mathbf i\}
\endCD
$$
Furthermore, when we give a warped product structure to $(1,\infty)\x S^{1}$
by the function $e^{2 \phi (t)} =\frac{\sinh ^2 (\ln t)}{\cosh (2\ln t)}$,
the restriction of the map $f$,
$$
\CD
(1,\infty)\x_{e^{2\phi}} S^{1} @>>> \mathcal H^2-\{\mathbf i\}
\endCD
$$
$$
f(t,a\cdot\mathbf n)=f(a\cdot(t,\mathbf n))=r(a\inv)\cdot(\mathbf 0,t),
$$
where $(0,t)\in\mathcal H^2\subset\mathcal H^n$, becomes an isometry by Theorem
\ref{prop:warped-2-dim}.
Now it is clear that the $\so(n)$-action on both spaces make
the weakly equivariant map $f$ a global isometry.
Thus, the geometry of $\mathcal H^n$ is completely determined by the geometry on
the cross section $\{\mathbf 0\}\x(1,\infty)\subset\mathcal H^n$
to the $r(K)$-action.
\end{proof}
\step
The sectional curvature of a plane containing the
$(1,\infty)$-direction in $(1,\infty) \times_{e^{2 \phi}} S^{n-1}$
is easy to calculate, since such a plane is a rotation of the corresponding
plane for $\so(2)\bs\so_0(2,1)$ by $\so(n)$.
Thus, the curvature of such a plane is exactly the same as the
2-dimensional case.
\bigskip
\step
For a general plane (not containing the $(1,\infty)$-direction), we
need some work. Notice that $\{f^{-1}_* \mathbf w_1, \, \dots, \, f^{-1}_*
\mathbf w_{n-1}, f^{-1}_* \mathbf w_n\}$ is an orthonormal basis on $(1,\infty)
\times_{e^{2 \phi}} \{\mathbf n\} $ such that $f^{-1}_* \mathbf w_n$ is a normal
vector to each sphere and the others are tangent to the sphere. By
abusing notation, denote $f^{-1}_* \mathbf w_i $ as $\mathbf w_i$ again.
\begin{lemma}
\label{kappa-lemma}
For $(\mathbf 0,y) \in \mathcal H^n-\{(\mathbf 0,1)\}
=(1,\infty) \times_{e^{2 \phi}} S^{n-1}$, with $y >1,$ and
$
\mathbf w, \tilde{\mathbf w} \in \mathrm{Span}\{ \mathbf w_1, \dots \mathbf w_{n-1} \} ,
$
with
$|\mathbf w|_\phi = |\tilde{\mathbf w}|_\phi = 1$
and $ \langle \mathbf w, \tilde{\mathbf w} \rangle_\phi = 0$, we have
$$
\kappa (a \mathbf w_n + b \mathbf w, \, c \mathbf w_n + d \tilde{\mathbf w}) =
(a^2 d^2 + b^2 c^2) \kappa(\mathbf w_n , \mathbf w) + b^2 d^2 \, \kappa (\mathbf w, \tilde{\mathbf w}).
$$
\end{lemma}
\begin{proof}
For tangent vectors $T, T_1,T_2,T_3\in T(S^{n-1})$ and $X, Y \in T(S^{n-1})^\perp$
in the warped product, we
have
\begin{align*}
R(T_1, T_2)T_3 &= R_{S^{n-1}} (T_1, T_2)T_3 -
e^{2 \phi} \mid \! \nabla \phi \! \mid ^2
\big( \langle T_2, T_3 \rangle _{S^{n-1}} T_1
- \langle T_1, T_3 \rangle _{S^{n-1}} T_2 \big),\\
R(X, T)Y &= \big(
h_{\phi} (X,Y) +
\langle \nabla \phi , X \rangle \langle \nabla \phi , Y \rangle
\big)
T,
\end{align*}
see \cite[p.60, Proposition 2.2.2]{GW}. So,
$$
\langle R (\tilde{\mathbf w}, \mathbf w)\mathbf w, \mathbf w_n \rangle_\phi = 0
\quad\text{and}\quad
\langle R (\mathbf w, \tilde{\mathbf w}) \tilde{\mathbf w}, \mathbf w_n \rangle_\phi = 0,
$$
also
$$
\langle R(\mathbf w_n,\mathbf w)\mathbf w_n, \tilde{\mathbf w} \rangle _{\phi}
= e^{2 \phi} \langle \mathbf w, \tilde{\mathbf w} \rangle _{S^{n-1}}
\big( h_{\phi} (\mathbf w_n,\mathbf w_n) + \langle \nabla \phi , \mathbf w_n \rangle ^2
\big)
=0.
$$
Using an isometric $r(K)$-action rotating the $\{ \mathbf w_n ,
\mathbf w\}$-plane to $\{ \mathbf w_n , \tilde{\mathbf w} \}$-plane, we have $
\kappa(\mathbf w_n, \mathbf w) = \kappa(\mathbf w_n, \tilde{\mathbf w})$. Thus,
\begin{align*}
\kappa (a \mathbf w_n + b \mathbf w, \, c \mathbf w_n + d \tilde{\mathbf w})
&=
\langle
R(a \mathbf w_n + b \mathbf w, \, c \mathbf w_n + d \tilde{\mathbf w}) (c \mathbf w_n + d \tilde{\mathbf w}),
\, a \mathbf w_n + b \mathbf w
\rangle_\phi \\
&= a^2 d^2 \kappa(\mathbf w_n, \tilde{\mathbf w}) + b^2 c^2 \kappa(\mathbf w_n, \mathbf w)
+ b^2 d^2 \kappa(\mathbf w, \tilde{\mathbf w}) \\
& = (a^2 d^2 + b^2 c^2) \kappa(\mathbf w_n , \mathbf w) + b^2 d^2 \, \kappa (\mathbf w,
\tilde{\mathbf w}).
\qedhere
\end{align*}
\end{proof}
\begin{theorem}[
The sectional curvature of the space $\mathcal H^n=\so(n)\bs\so_0(n,1)$]
\label{sect-curvature-n}
For $(\mathbf 0,y) \in \mathcal H^n-\{(\mathbf 0,1)\}
=(1,\infty) \times_{e^{2 \phi}} S^{n-1}$, with $y >1$, let $\sigma$ be a
2-dimensional tangent plane at $(\mathbf 0,y)$ whose angle with the
$y$-axis is $\theta$. Then its sectional curvature
$\kappa(y,\theta):=\kappa(\sigma)$ is
$$
\kappa(y,\theta)=
\cos ^2 \theta \, \frac{4 y^2 (1 + 3 y^2 + y^4)}{(1 + y^4)^2}
+ \sin ^2 \theta \,
\frac{2 (1+ 2y^2 + 4y^4 + 2y^6 + y^8)}{(1 + y^4)^2}.
$$
This curvature formula is valid for all $1\leq y <\infty$.
Therefore $0<\kappa_{(\mathbf 0,y)}\leq 5$ for all $y\geq 1$, and at $y=1$,
$\kappa(1,\theta)=5$ gives the maximum curvature for all $y\geq 1$.
\end{theorem}
\begin{proof}
It is obvious in the case of either $\theta = 0$ or
$\theta=\tfrac{\pi}{2}.$
Assume $0< \theta < \tfrac{\pi}{2}.$ Let $\hat{\mathbf w}$ be the
orthogonal projection of $\mathbf w_n$ to $\sigma$. There is a unique $\mathbf w
\in T(S^{n-1})$, which lies in the plane $\{ \hat{\mathbf w}, \mathbf w_n \}$,
such that we can write $\hat{\mathbf w}$ as a linear combination of $\mathbf w_n$
and $\mathbf w$ with respect to $\theta$: $\hat{\mathbf w} = r \cos \theta \
\mathbf w_n + r \sin \theta \ \mathbf w$ for some $r> 0.$ Now let $\tilde{\mathbf w}$
be a unit vector in $\sigma \cap T(S^{n-1})$.
Since $\tilde{\mathbf w}, \hat{\mathbf w} \in \sigma$,
$$
0 = \langle \mathbf w_n, \tilde{\mathbf w} \rangle_\phi
= \langle \hat{\mathbf w}, \tilde{\mathbf w} \rangle_\phi
= \langle r \cos \theta \mathbf w_n + r \sin \theta \mathbf w, \, \tilde{\mathbf w} \rangle_\phi
= r \sin \theta \langle \mathbf w, \tilde{\mathbf w} \rangle_\phi,
$$
which implies
$$\langle \mathbf w, \tilde{\mathbf w} \rangle_\phi = 0$$
and from the above lemma
\begin{align*}
\kappa(y,\theta)
&= \kappa(\hat{\mathbf w}, \tilde{\mathbf w}) \\
&= \kappa(\cos \theta \ \mathbf w_n + \sin \theta \ \mathbf w, \, \tilde{\mathbf w}) \\
&= \cos ^2 \theta \, \kappa(\mathbf w_n, \tilde{\mathbf w})
+ \sin ^2 \theta \, \kappa(\mathbf w, \tilde{\mathbf w}) \\
&= \cos ^2 \theta \, \kappa (y) + \sin ^2 \theta \, \kappa(\mathbf w, \tilde{\mathbf w}),
\end{align*}
where $\kappa(y)$ is the curvature of any tangent 2-plane containing
$\mathbf w _n$. Now, we get
$$\mid \! \mathbf w \! \mid_{S^{n-1}} = \mid \! \tilde{\mathbf w} \! \mid_{S^{n-1}} = e^{- \phi (y)}$$
with respect to the standard metric on $S^{n-1}$ and, from the
formula of $R(T_1, T_2)T_3$ in the proof of Lemma \ref{kappa-lemma},
\\
\begin{align*}
\kappa(\mathbf w, \tilde{\mathbf w})
&= \langle R (\mathbf w, \tilde{\mathbf w})\tilde{\mathbf w}, \mathbf w \rangle_\phi \\
&= e^{2 \phi (y)}
\langle R (\mathbf w, \tilde{\mathbf w})\tilde{\mathbf w}, \mathbf w \rangle _{S^{n-1}} \\
&= e^{2 \phi (y)}
\big(
\kappa_{S^{n-1}} (\mathbf w, \tilde{\mathbf w}) -
e^{2 \phi (y)} \mid \! \nabla \phi \! \mid ^2
(
\mid \! \mathbf w \! \mid ^2 _{S^{n-1}}
\mid \! \tilde{\mathbf w} \! \mid ^2 _{S^{n-1}} -
\langle \mathbf w , \tilde{\mathbf w} \rangle _{S^{n-1}} ^2
)
\big) \\
&= e^{2 \phi (y)}
\big(
e^{-4 \phi (y)} -
e^{2 \phi(y)}
\mid \! \nabla \phi \! \mid ^2 e^{-4 \phi (y)}
\big) \\
&= e^{- 2 \phi (y)} - \langle \nabla \phi , \mathbf w _n \rangle ^2 \\
&= \tfrac{\cosh (2 \ln y)}{\sinh ^2 (\ln y)} - \big( \mathbf w_n (\phi)\big)^2 \\
&= \tfrac{2 (y^4 +1)}{(y^2 -1)^2} -
\big( y \tfrac{\partial \phi}{\partial y} \big)^2 \\
&= \frac{2 (1+ 2y^2 + 4y^4 + 2y^6 + y^8)}{(1 + y^4)^2}.
\end{align*}
Thus,
$$
\kappa(y,\theta)=
\cos ^2 \theta \, \frac{4 y^2 (1 + 3 y^2 + y^4)}{(1 + y^4)^2}
+ \sin ^2 \theta \, \frac{2 (1+ 2y^2 + 4y^4 + 2y^6 + y^8)}{(1 + y^4)^2}.
$$
By the remark after Proposition \ref{ON-on-r2} and a continuity argument,
this curvature formula is valid even at the removed point $(\mathbf 0,1)$
with $\kappa_{(\mathbf 0,1)}=5$.
\bigskip
To estimate the values $\kappa(y,\theta)$, let
\begin{align*}
f(y) &= \frac{4 y^2 (1 + 3 y^2 + y^4)}{(1 + y^4)^2} \\
g(y) &= \frac{2 (1+ 2y^2 + 4y^4 + 2y^6 + y^8)}{(1 + y^4)^2}
\end{align*}
for $y > 1$. Then
$$
0 < f(y) < 5 \quad \text{and} \quad 0 < g(y) < 5.
$$
The relation,
$$
\kappa(y,\theta)=
\cos ^2 \theta \, f(y) + \sin ^2 \theta \, g(y)
= \frac{f(y) + g(y) + \cos (2 \theta) \big( f(y) - g(y) \big)}{2}
$$
gives us the following inequality
$$
\frac{f(y) + g(y) - \mid \! f(y) - g(y) \! \mid}{2}\leq
\kappa(y,\theta)
\leq \frac{f(y) + g(y) + \mid \! f(y) - g(y) \! \mid}{2},
$$
so that
$$
\mathrm{min}\{f(y), g(y)\} \leq
\kappa(y,\theta)
\leq \mathrm{max}\{f(y), g(y)\},
$$
which shows $0<\kappa(y,\theta)<5$.
\end{proof}
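It is also worth recording the two extreme values of the curvature for large $y$:
\[
\kappa(y,0)=\frac{4 y^2 (1 + 3 y^2 + y^4)}{(1 + y^4)^2}\longrightarrow 0
\qquad\text{and}\qquad
\kappa(y,\tfrac{\pi}{2})=\frac{2 (1+ 2y^2 + 4y^4 + 2y^6 + y^8)}{(1 + y^4)^2}\longrightarrow 2
\qquad (y\to\infty),
\]
so far from $\mathbf i$ the planes containing the radial direction become flat, while the planes
tangent to the orbit spheres (present when $n\geq 3$) have curvature approaching $2$.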
\bigskip
\bigskip
\bibliographystyle{amsalpha}
\section{Introduction}
Let $\mathsf{H}$ be a Hilbert space and $\sigma = (\sigma_t)_{t \geqslant 0}$ be a
semigroup of unital endomorphisms of the algebra $B(\mathsf{H})$ --- an
$E_0$-semigroup (\cite{wAEbook}). A family of operators $X = (X_t)_{t
\geqslant 0} \subset B(\mathsf{H})$ is a \ti{left cocycle} (respectively a
\ti{right cocycle}) for $\sigma$ if it satisfies
\begin{equation} \label{cocycles}
X_0 = I \ \text{ and } \ X_{r+t} = X_r \sigma_r (X_t) \quad
(\text{resp.\ } X_{r+t} = \sigma_r (X_t) X_r)
\end{equation}
for all $r,t \geqslant 0$. \textit{Unitary} cocycles (i.e.\ each $X_t$ is
unitary) play a fundamental role in the classification of
$E_0$-semigroups, which is carried out up to conjugation by such
objects. The $E_0$-semigroups of type I (those that possess a
sufficiently large number of cocycles) turn out to be precisely those
that are cocycle conjugate to the CCR flow on symmetric Fock space,
the particular flow being uniquely specified by a choice of Hilbert
space $\mathsf{k}$, called the \ti{noise dimension space} in the language
of quantum stochastic calculus (QSC). In QSC cocycles arise naturally
as solutions of a quantum stochastic differential equation (QSDE) of
Hudson-Parthasarathy type, and it is standard practice to ampliate the
CCR flow so that it acts on $B(\mathfrak{h} \otimes \mathcal{F}_+)$, where $\mathfrak{h}$ is
another Hilbert space (the \ti{initial space}) and $\mathcal{F}_+$ denotes
the symmetric Fock space over $L^2 (\Real_+; \mathsf{k})$. The coefficient
driving the QSDE is some operator $F \in B(\mathfrak{h} \otimes {\widehat{\noise}})$, where
${\widehat{\noise}} := \mathbb{C} \oplus \mathsf{k}$, and (conjugation by) the resulting
cocycle can be viewed as a Feynman-Kac perturbation of the free
evolution given by $\sigma$ (\cite{lAqFK}). Conversely, any contraction
cocycle that is \textit{Markov-regular} necessarily satisfies such a QSDE
for some such $F$, and moreover the collection of those $F$ that are
generators of contraction cocycles is now well-known, as are the
subsets corresponding to the generators of isometric, coisometric and
unitary cocycles. For more details see \cites{fFqp8, LPgran, mother,
father}, or the lecture notes~\cite{jmLGreif}.
In this paper we characterise the generators of many other classes of
cocycle, namely self-adjoint cocycles, positive cocycles, projection
cocycles and partially isometric cocycles, going beyond the case of
contractive cocycles for the first two classes. Positive contraction
cocycles have appeared in the work of Bhat~(\cite{bvrBCCR}), where
they are used to study dilations and compressions between
$E$-semigroups and quantum Markov semigroups on $B(\mathsf{H})$. In the
third section of the paper we discuss a one-parameter family of
transformations on the class of positive contraction cocycles, and
describe the corresponding transformation on the stochastic generators
in the Markov-regular case. This leads naturally to a polar
decomposition result in the final section, where it is shown that any
Markov-regular contraction cocycle with commutative component von
Neumann algebra can be written as a product of a partial isometry
cocycle and a positive cocycle.
\subsection{Notational conventions}
Algebraic tensor products are denoted by $\underline{\otimes}$, with $\otimes$ reserved
for the (completed) tensor product of Hilbert spaces and the tensor
product of von Neumann algebras. The tensor symbol between Hilbert
space vectors in elementary tensors will usually be suppressed. Given
Hilbert spaces $\mathsf{H}$ and $\mathsf{h}$, and $x \in \mathsf{h}$, we define maps $E_x
\in B(\mathsf{H}; \mathsf{H} \otimes \mathsf{h})$ and $E^x \in B(\mathsf{H} \otimes \mathsf{h}; \mathsf{H})$ by
\[
E_x u = u \otimes x = ux \quad \text{ and } \quad E^x = (E_x)^*,
\]
with context indicating the choice of $\mathsf{H}$ and $\mathsf{h}$.
\section{Operator cocycles on Fock space}
Fix two Hilbert spaces, the initial space $\mathfrak{h}$ and the noise
dimension space $\mathsf{k}$. Let $\mathcal{F}_+$ denote the symmetric Fock
space over $L^2 (\Real_+; \mathsf{k})$, and in general let $\mathcal{F}_I$ denote
the symmetric Fock space over $L^2 (I; \mathsf{k})$ for $I \subset \mathbb{R}$.
We shall make frequent use of the time shift and time reversal
operators on $\mathfrak{h} \otimes \mathcal{F}_+$, $S_t$ and $R_t$ respectively, which
are the ampliated second quantisations of
\[
(s_t f)(u) = \begin{cases} 0 & \text{if } u < t, \\ f(u-t) & \text{if
} u \geqslant t, \end{cases}
\ \text{ and } \
(r_t f)(u) = \begin{cases} f(t-u) & \text{if } u \leqslant t, \\ f(u) &
\text{if } u > t, \end{cases}
\]
so that $S_t u\e{f} = u\e{s_t f}$, where $\e{f} = (1, f, (2!)^{-1/2} f
\otimes f, \ldots)$ is the exponential vector associated to $f \in L^2
(\Real_+; \mathsf{k})$. Note that the $S_t$ are isometries and the $R_t$
are self-adjoint unitaries, with both maps $t \mapsto S_t$ and $t
\mapsto R_t$ continuous in the strong operator topology, that is,
\ti{strongly continuous}. The endomorphism semigroup $(\sigma_t)_{t \geqslant
0}$ on $B(\mathfrak{h} \otimes \mathcal{F}_+)$ is constructed from $(S_t)_{t \geqslant 0}$
by using the obvious isomorphism these maps induce between $\mathcal{F}_+$
and $\mathcal{F}_{[t,\infty[}$. More concretely, for any $X \in B(\mathfrak{h} \otimes
\mathcal{F}_+)$ the operator $\sigma_t (X)$ is determined by
\begin{equation} \label{sig defn}
\ip{u\e{f}}{\sigma_t (X) v\e{g}} = \ip{u\e{s^*_t f}}{X v\e{s^*_t g}} \exp
\int^t_0 \ip{f(u)}{g(u)} \, du.
\end{equation}
The time reversal operators on $B(\mathfrak{h} \otimes \mathcal{F}_+)$ are
\begin{equation} \label{rho defn}
\rho_t (X) = R_t X R_t.
\end{equation}
A \ti{Fock-adapted} left (respectively right) cocycle on $\mathfrak{h} \otimes
\mathcal{F}_+$ is any family $X = (X_t)_{t \geqslant 0} \subset B(\mathfrak{h} \otimes
\mathcal{F}_+)$ that satisfies the functional equation~\eqref{cocycles}
together with the adaptedness condition
\begin{equation} \label{adapted}
X_t \in B(\mathfrak{h} \otimes \mathcal{F}_{[0,t[}) \otimes I_{\mathcal{F}_{[t,\infty[}},
\end{equation}
where we utilise the continuous tensor product factorisation of Fock
space: $\mathcal{F}_+ \cong \mathcal{F}_{[0,t[} \otimes \mathcal{F}_{[t,\infty[}$ via $\e{f}
\longleftrightarrow \e{f|_{[0,t[}} \otimes \e{f|_{[t,\infty[}}$. All
cocycles in this paper will be assumed to satisfy~\eqref{adapted}.
Continuity was not given as part of the definition; the next result
mirrors/relies on the corresponding result in semigroup theory.
\begin{propn} \label{coc cts}
Let $X = (X_t)_{t \geqslant 0}$ be a left cocycle on $\mathfrak{h} \otimes
\mathcal{F}_+$. If $X_t \rightarrow I$ weakly as $t \rightarrow 0$ then there are
constants $M, a \in \mathbb{R}$ such that
\begin{equation}
\label{exp growth}
\norm{X_t} \leqslant Me^{at} \quad \text{for all } t \geqslant 0.
\end{equation}
Moreover, the map $t \mapsto X_t$ is strongly continuous.
\end{propn}
Cocycles satisfying these continuity conditions will be called
\ti{$C_0$-cocycles}.
\begin{proof}
Weak convergence to $I$ implies, via two applications of the
Banach-Steinhaus Theorem, that $t \mapsto \norm{X_t}$ is bounded in a
neighbourhood of $0$. The existence of $M$ and $a$ then follows by a
standard argument (see, for example, Proposition~1.18 of~\cite{ebDc0})
since $\norm{X_{r+t}} \leqslant \norm{X_r} \norm{X_t}$, because each
$\sigma_t$ is contractive.
Weak continuity at $0$, and the local uniform bound for $X$
from~\eqref{exp growth} imply that $X_t \otimes I_- \rightarrow I_{\mathfrak{h} \otimes
\mathcal{F}_\mathbb{R}}$ weakly, where $I_-$ is the identity on $\mathcal{F}_- :=
\mathcal{F}_{]{-\infty},0[}$, and we use $\mathcal{F}_\mathbb{R} \cong \mathcal{F}_+ \otimes
\mathcal{F}_-$. If $(\overline{S}_t)_{t \geqslant 0}$ denotes the strongly continuous
family of \textit{unitary} right shifts on $\mathfrak{h} \otimes \mathcal{F}_\mathbb{R}$ defined
analogously to the isometries $S_t$, then $Y_t := (X_t \otimes I_-)
\overline{S}_t$ is weakly convergent to $I_{\mathfrak{h} \otimes \mathcal{F}_\mathbb{R}}$.
Moreover, for any $Z \in B(\mathfrak{h} \otimes \mathcal{F}_+)$
\[
\sigma_t (Z) \otimes I_- = \overline{S}_t (Z \otimes I_-) \overline{S}^*_t,
\]
and it readily follows that $(Y_t)_{t \geqslant 0}$ is a semigroup on
$\mathfrak{h} \otimes \mathcal{F}_\mathbb{R}$. Hence it is strongly continuous, by
Proposition~1.23 of~\cite{ebDc0}, thus so is $t \mapsto X_t \otimes I_- =
Y_t \overline{S}^*_t$, and the result follows.
\end{proof}
\begin{rems}
(i) A result in the same spirit is Proposition~2.5
of~\cite{wActsFockI} (reappearing as Proposition~2.3.1
of~\cite{wAEbook}). It is more general on the one hand since it only
assumes measurability of the cocycle, which is defined with respect to
a \textit{general} $E_0$-semigroup on a von Neumann algebra. However
there are separability assumptions, and essential use is made of the
more restrictive hypothesis that the cocycle be isometric. Similarly,
assumed contractivity of the cocycle is a necessary ingredient of the
alternative proof of the above result for Fock-adapted cocycles given
in Lemma~1.2 of~\cite{FFviaAK}.
(ii) The result extends immediately to right cocycles by use of the
time-reversal operators $\rho_t$ --- see Lemma~\ref{left to right}
below.
\end{rems}
Let $\mathbb{S} = \Lin \{ d \mathbf{1}_{[0,t[}: d \in \mathsf{k}, t \geqslant 0\}$, the
subspace of $L^2(\Real_+; \mathsf{k})$ consisting of right continuous,
piecewise constant functions. It is a dense subspace, so $\mathcal{E} :=
\Lin \{\e{f}: f \in \mathbb{S}\}$ is dense in $\mathcal{F}_+$. Consequently
bounded operators on $\mathfrak{h} \otimes \mathcal{F}_+$ are determined by their
inner products against vectors of the form $u\e{f}$ for $u \in
\mathfrak{h}$, $f \in \mathbb{S}$.
The next result (essentially Proposition~6.2 of~\cite{father}) follows
immediately from adaptedness and~\eqref{sig defn}.
\begin{thm} \label{coc char}
Let $X$ be a bounded adapted process. The following are
equivalent\textup{:}
\begin{rlist}
\item
$X$ is a left cocycle.
\item
For each pair $c,d \in \mathsf{k}$, $(Q^{c,d}_t := E^{\e{c\mathbf{1}_{[0,t[}}}
X_t E_{\e{d\mathbf{1}_{[0,t[}}})_{t \geqslant 0}$ is a semigroup on $\mathfrak{h}$, and
for all $f,g \in \mathbb{S}$
\begin{equation} \label{semi decomp}
E^{\e{f\mathbf{1}_{[0,t[}}} X_t E_{\e{g\mathbf{1}_{[0,t[}}} = Q^{f(t_0),
g(t_0)}_{t_1 -t_0} \cdots Q^{f(t_n), g(t_n)}_{t-t_n}
\end{equation}
where $\{0 = t_0 \leqslant t_1 \leqslant \cdots \leqslant t_n \leqslant t \}$ contains the
discontinuities of $f\mathbf{1}_{[0,t[}$ and $g\mathbf{1}_{[0,t[}$.
\end{rlist}
If in~\textup{(i)} we replace left by right then~\eqref{semi decomp}
in~\textup{(ii)} must be replaced by
\begin{equation*} \label{semi decomp right}
E^{\e{f\mathbf{1}_{[0,t[}}} X_t E_{\e{g\mathbf{1}_{[0,t[}}} = Q^{f(t_n),
g(t_n)}_{t-t_n} \cdots Q^{f(t_0), g(t_0)}_{t_1 -t_0}. \tag*{(\ref{semi
decomp})$'$}
\end{equation*}
\end{thm}
The collection of semigroups $\{Q^{c,d}: c,d \in \mathsf{k}\}$ is the family
of \ti{associated semigroups} of the cocycle $X$. Since the map
$(c,d) \mapsto Q^{c,d}_t$ is jointly continuous, a cocycle $X$ is
determined by the operators $Q^{c,d}_t$ for $c$ and $d$ taken from a
dense subset of $\mathsf{k}$. In fact this observation can be further
refined by using totality results such as those contained
in~\cites{PStotal,mStotal,jmLGreif} to show that it is sufficient to
take $c$ and $d$ from a \textit{total} subset of $\mathsf{k}$ that contains
$0$. If $X$ is a $C_0$-cocycle then it is clear that all of the
associated semigroups are strongly continuous, since the map $x
\mapsto E_x$ is isometric. Conversely if all (or, rather, sufficiently
many) of the associated semigroups are strongly continuous and if $X$
is locally uniformly bounded, then from~\eqref{semi decomp} it follows
that $t \mapsto X_t$ is weakly continuous at $0$, and hence $X$ is a
$C_0$-cocycle. The a priori assumption of local boundedness is
needed here for this (perhaps naive) method of proof to yield weak
continuity on all of the complete space $\mathfrak{h} \otimes \mathcal{F}_+$, rather
than just on $\mathfrak{h} \underline{\otimes} \mathcal{E}$; Proposition~\ref{coc cts} can then be
invoked to obtain the improved bound~\eqref{exp growth}.
A stronger hypothesis on the map $t \mapsto X_t$ is
\ti{Markov-regularity}, as considered in~\cite{father}, which insists
on norm continuity of the \ti{Markov semigroup} $Q^{0,0}$. For a
$C_0$-cocycle (or, indeed, any locally uniformly bounded cocycle) this
is equivalent to assuming that \textit{all} of the associated semigroups
are norm continuous, which follows easily from the estimate
$\norm{\e{a\mathbf{1}_{[0,t[}} - \e{c\mathbf{1}_{[0,t[}}} = O(t^{1/2})$.
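A sketch of this estimate: since $\ip{\e{f}}{\e{g}} = \exp \ip{f}{g}$
for exponential vectors,
\[
\norm{\e{a\mathbf{1}_{[0,t[}} - \e{c\mathbf{1}_{[0,t[}}}^2
= e^{t \norm{a}^2} - 2 \operatorname{Re}\, e^{t \ip{a}{c}} + e^{t \norm{c}^2}
= t \norm{a - c}^2 + O(t^2) \quad \text{as } t \rightarrow 0.
\]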
In many cases, for instance when proving Theorem~\ref{conisocoi}, it
can be useful to pass from left to right cocycles or vice versa. Two
methods for doing this are taking adjoints and time-reversal, where
for any process $X$ we define $\widetilde{X}$ by
\[
\widetilde{X}_t := \rho_t (X_t)
\]
with $\rho_t$ is defined in~\eqref{rho defn}.
\begin{lemma} \label{left to right}
Let $X$ be a bounded adapted process. The following are
equivalent\textup{:}
\begin{rlist}
\item
$X$ is a left cocycle.
\item
$X^* := (X^*_t)_{t \geqslant 0}$ is a right cocycle.
\item
$\widetilde{X}$ is a right cocycle.
\end{rlist}
\end{lemma}
\begin{proof}
Equivalence of~(i) and~(ii) is immediate since $\sigma_t$ is
${}^*$-homomorphic. Equivalence of~(i) and~(iii) follows from
Theorem~\ref{coc char} and the fact that $R_t \e{c \mathbf{1}_{[0,t[}} =
\e{c \mathbf{1}_{[0,t[}}$, so that $X$ and $\widetilde{X}$ share the same family
of associated semigroups.
\end{proof}
Given a cocycle $X$ we define two unital subalgebras of $B(\mathfrak{h})$:
\begin{align}
\mathcal{A}_X &= \text{the norm-closed algebra generated by } \{Q^{c,d}_t: c,d
\in \mathsf{k}, t \geqslant 0\}, \label{cpt alg} \\
\intertext{and}
\mathcal{M}_X &= \text{the von Neumann algebra generated by } \mathcal{A}_X.
\label{cpt vNA}
\end{align}
These algebras enter into characterisations of various properties of $X$.
In the language of~\cite{CBonOS} it follows from~\eqref{semi decomp}
(or~\ref{semi decomp right}) that $X_t \in \mathrm{M} (\mathcal{F}_+; \mathcal{A}_X)_{\text{\textup{b}}}
\subset \mathrm{M} (\mathcal{F}_+; \mathcal{M}_X)_{\text{\textup{b}}} = \mathcal{M}_X \otimes B(\mathcal{F}_+)$, where
$\mathrm{M} (\mathcal{F}_+; \mathsf{V})_{\text{\textup{b}}}$ denotes the $\mathcal{F}_+$-matrix space over an
operator space $\mathsf{V}$.
\begin{propn} \label{coc transfs}
Let $X$ be a left cocycle. We have the following sets of
equivalences\textup{:}
\begin{alist}
\item
\begin{rlist}
\item
$X$ is also a right cocycle.
\item
$X = \widetilde{X}$.
\item
$\mathcal{A}_X$ is commutative.
\end{rlist}
\item
\begin{rlist}
\item
$X^* = \widetilde{X}$.
\item
$(Q^{c,d}_t)^* = Q^{d,c}_t$ for all $c,d \in \mathsf{k}$ and $t \geqslant 0$.
\end{rlist}
\noindent
In this case the algebra $\mathcal{A}_X$ is closed under taking adjoints.
\item
\begin{rlist}
\item
$X$ is a self-adjoint cocycle.
\item
$(Q^{c,d}_t)^* = Q^{d,c}_t$ for all $c,d \in \mathsf{k}$ and $t \geqslant 0$, and
$\mathcal{M}_X$ is commutative.
\end{rlist}
\noindent
In this case $X = \widetilde{X}$ as well.
\end{alist}
\end{propn}
\begin{proof}
(a) This is immediate from Theorem~\ref{coc char} and Lemma~\ref{left
to right} since $X$ is also a right cocycle if and only if not
only~\eqref{semi decomp} but also~\ref{semi decomp right} holds.
\medskip\noindent
(b) $X$ is adapted and $\{\e{f}: f \in \mathbb{S}\}$ is total in $\mathcal{F}_+$,
so $X^* = \widetilde{X}$ if and only if
\[
\bigl( E^{\e{g\mathbf{1}_{[0,t[}}} X_t E_{\e{f\mathbf{1}_{[0,t[}}} \bigr)^* =
E^{\e{f\mathbf{1}_{[0,t[}}} X_t^* E_{\e{g\mathbf{1}_{[0,t[}}} =
E^{\e{f\mathbf{1}_{[0,t[}}} \widetilde{X}_t E_{\e{g\mathbf{1}_{[0,t[}}}
\]
for all $f,g \in \mathbb{S}$ and $t \geqslant 0$. The result thus follows from
Theorem~\ref{coc char} and Lemma~\ref{left to right}, since $R_t \e{c
\mathbf{1}_{[0,t[}} = \e{c \mathbf{1}_{[0,t[}}$.
\medskip\noindent
(c\,i \ensuremath{\Rightarrow} c\,ii) If $X$ is self-adjoint then it is also a right
cocycle by Lemma~\ref{left to right}, thus $\mathcal{A}_X$ is commutative
by~(a\,iii), and so $X = X^* = \widetilde{X}$ by~(a\,ii), so part~(b)
applies, which in particular shows that $\mathcal{M}_X$ is commutative.
\medskip\noindent
(c\,ii \ensuremath{\Rightarrow} c\,i) Commutativity of $\mathcal{M}_X$ implies commutativity
of $\mathcal{A}_X$, hence from~(a) we have $X = \widetilde{X}$, and by~(b) we have
$X^* = \widetilde{X}$.
\end{proof}
\begin{rems}
Commutativity of $\mathcal{A}_X$ does not imply commutativity of $\mathcal{M}_X$. To
see this take any $A \in B(\mathfrak{h})$ then $(X_t = e^{tA} \otimes
I_{\mathcal{F}_+})_{t \geqslant 0}$ is both a left and right cocycle, since
$\sigma_r (X_t) = X_t$ for all $r, t \geqslant 0$. Moreover $Q^{c,d}_t = e^{t
(A+\ip{c}{d})}$, so that $\mathcal{A}_X$ is the unital algebra generated by
$A$, which is certainly commutative, whereas $\mathcal{M}_X$ is the von
Neumann algebra generated by $A$, so commutative if and only if $A$ is
normal.
This result also illustrates some relations that exist between the
algebras defined through~\eqref{cpt alg} and~\eqref{cpt vNA}: for any
cocycle $X$ we have
\[
\mathcal{A}_X = \mathcal{A}_{\smash{\widetilde{X}}}, \ \text{ and } \ \mathcal{M}_X =
\mathcal{M}_{\smash{\widetilde{X}}} = \mathcal{M}_{X^*},
\]
but $\mathcal{A}_X$ need not equal $\mathcal{A}_{X^*}$. Other remarks on the
differences between parts~(a),~(b) and~(c) of the proposition are best
made with reference to the stochastic generator of the cocycle, the
subject of the next section, and so are postponed until then.
\end{rems}
\section{Generated cocycles}
A major source of operator cocycles on Fock space comes from solutions
of the left and right Hudson-Parthasarathy QSDEs:
\begin{align*}
dX_t &= X_t F \, d\Lambda_t, \qquad X_0 = I, \tag{L} \\
dX_t &= F X_t \, d\Lambda_t, \qquad X_0 = I. \tag{R}
\end{align*}
Here the coefficient $F$ is a bounded operator on $\mathfrak{h} \otimes {\widehat{\noise}}$,
where the use of hats is defined by
\begin{equation}
\label{hats}
{\widehat{\noise}} := \mathbb{C} \oplus \mathsf{k}, \qquad \widehat{d} = \begin{pmatrix} 1 \\ d
\end{pmatrix} \text{ for } d \in \mathsf{k}.
\end{equation}
Moreover, let $P_\mathsf{k} \in B({\widehat{\noise}})$ denote the projection ${\widehat{\noise}}
\rightarrow \mathsf{k}$, and $\Delta := I_\mathfrak{h} \otimes P_\mathsf{k}$. Since $\mathfrak{h} \otimes
{\widehat{\noise}} \cong \mathfrak{h} \oplus (\mathfrak{h} \otimes \mathsf{k})$, any $F \in B(\mathfrak{h} \otimes
{\widehat{\noise}})$ can and will be written as
\[
F = \begin{bmatrix} A & B \\ C & D -I_{\mathfrak{h} \otimes \mathsf{k}} \end{bmatrix}
\]
for $A \in B(\mathfrak{h})$, $B, C^* \in B(\mathfrak{h} \otimes \mathsf{k}; \mathfrak{h})$ and $D
\in B(\mathfrak{h} \otimes \mathsf{k})$.
Straightforward Picard iteration arguments (\cite{jmLGreif}) produce
solutions $X^F = (X^F_t)_{t \geqslant 0}$ of the left equation~(L) and ${}^F \! X
= ({}^F \! X_t)_{t \geqslant 0}$ of the right equation~(R), although neither need
be composed of bounded operators. However the solutions have domain
$\mathfrak{h} \underline{\otimes} \mathcal{E}$ and satisfy a property called \ti{weak regularity},
a property shared by any locally bounded process. Moreover $X^F$ and
${}^F \! X$ are the unique weakly regular (weak) solutions to~(L) and~(R)
for the given $F$. On the other hand, any weakly regular process $X$
satisfies~(L) (or~(R)) (weakly) for at most one $F$. See Theorems~3.1
and~7.13 of~\cite{mother}. It follows that $(X^F)^*$ is the unique
weakly regular solution of~(R) for $F^*$, i.e.\ $(X^F)^*|_{\mathfrak{h} \underline{\otimes}
\mathcal{E}} = {}^{F^*} \! X$.
The solution $X^F$ enjoys a semigroup decomposition of the
form~\eqref{semi decomp}, where now
\begin{equation} \label{semi gen}
Q^{c,d}_t = e^{tZ^c_d} \ \text{ for } \ Z^c_d = E^{\widehat{c}} (F +\Delta)
E_{\widehat{d}} = E^{\widehat{c}} F E_{\widehat{d}} +\ip{c}{d}.
\end{equation}
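Unpacking the block decomposition of $F$ and the maps $E^{\widehat{c}}$,
$E_{\widehat{d}}$ gives the explicit form
\[
Z^c_d = A + B E_d + E^c C + E^c D E_d \qquad (c, d \in \mathsf{k});
\]
in particular the Markov semigroup $Q^{0,0}$ has generator $Z^0_0 = A$.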
The solution ${}^F \! X$ of~(R) has a similar description involving the same
semigroups, but with the product as in~\ref{semi decomp right}. One
consequence is that a weakly regular process $X$ solves~(L) if and
only if the time-reversed process~$\widetilde{X}$ satisfies~(R). More
importantly, \textit{if} the solution to~(L) (respectively to~(R)) is a
bounded process then it is a Markov-regular left (resp.\ right)
cocycle. However it is still an open problem to determine all the
operators $F$ that yield bounded solutions. For contractive,
isometric, coisometric, and hence unitary solutions the situation is
understood much better, with the answer being given in terms of the
map $\chi$ on $B(\mathfrak{h} \otimes {\widehat{\noise}})$ where
\begin{equation} \label{cont con}
\chi(F) := F +F^* +F^* \Delta F.
\end{equation}
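In terms of the block decomposition of $F$, a direct computation gives
\[
\chi(F) =
\begin{bmatrix} A + A^* + C^*C & B + C^*D \\ B^* + D^*C & D^*D - I_{\mathfrak{h} \otimes \mathsf{k}} \end{bmatrix};
\]
in particular $\chi(F) \leqslant 0$ forces $A + A^* + C^*C \leqslant 0$ and
$\norm{D} \leqslant 1$.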
\begin{thm}[\cites{fFqp8,mother}] \label{conisocoi}
Let $F \in B(\mathfrak{h} \otimes {\widehat{\noise}})$. We have the following sets of
equivalences\textup{:}
\begin{align*}
&\ttu{(a)}
&&\ttu{\hspace*{-3mm}(i) } {}^F \! X \text{ is contractive}
&&\ttu{\hspace*{-2mm}(ii) } X^F \text{ is contractive}
&&\ttu{\hspace*{-2mm}(iii) } \chi(F) \leqslant 0
&&\ttu{\hspace*{-2mm}(iv) } \chi(F^*) \leqslant 0 && \\
&\ttu{(b)}
&&\ttu{\hspace*{-3mm}(i) } {}^F \! X \text{ is isometric}
&&\ttu{\hspace*{-2mm}(ii) } X^F \text{ is isometric}
&&\ttu{\hspace*{-2mm}(iii) } \chi(F) = 0 \\
&\ttu{(c)}
&&\ttu{\hspace*{-3mm}(i) } {}^F \! X \text{ is coisometric}
&&\ttu{\hspace*{-2mm}(ii) } X^F \text{ is coisometric}
&&\ttu{\hspace*{-2mm}(iii) } \chi(F^*) = 0
\end{align*}
\end{thm}
If, instead, we start with a Markov-regular $C_0$-cocycle $X$ then all
of its associated semigroups $Q^{c,d}$ are norm continuous and so have
bounded generators $Z^c_d$. Let $I$ be a set not containing $0$, set
$\widehat{I} = I \cup \{0\}$, and let $\{e_\alpha\}_{\alpha \in \widehat{I}}$ be an
orthonormal basis of ${\widehat{\noise}}$ with $e_0 = \bigl( \begin{smallmatrix} 1
\\ 0 \end{smallmatrix} \bigr)$, so that $\{e_i\}_{i \in I}$ is an
orthonormal basis of $0 \oplus \mathsf{k} \cong \mathsf{k}$. This basis induces
the second of the following isomorphisms:
\begin{equation} \label{direct sum}
\mathfrak{h} \otimes {\widehat{\noise}} \cong \mathfrak{h} \oplus (\mathfrak{h} \otimes \mathsf{k}) \cong \mathfrak{h} \oplus
\bigoplus\displaystyle{^{(\dim \mathsf{k})}} \mathfrak{h}.
\end{equation}
Now define operators $\{F^\alpha_\beta: \alpha, \beta \in \widehat{I}\} \subset
B(\mathfrak{h})$ through
\begin{equation} \label{F cpts}
\begin{aligned}
F^0_0 &= Z^{e_0}_{e_0}, \quad F^i_0 = Z^{e_i}_{e_0} - Z^{e_0}_{e_0},
\quad F^0_j = Z^{e_0}_{e_j} - Z^{e_0}_{e_0} \quad \text{ and} \\
F^i_j &= Z^{e_i}_{e_j} - Z^{e_i}_{e_0} - Z^{e_0}_{e_j} + Z^{e_0}_{e_0}
-\delta^i_j I_\mathfrak{h},
\end{aligned}
\end{equation}
for $i,j \in I$, and where $\delta^i_j$ is the Kronecker delta. If
$\mathsf{k}$ is finite-dimensional, these $F^\alpha_\beta$ can be regarded as
the components of the matrix associated to a bounded operator $F \in
B(\mathfrak{h} \otimes {\widehat{\noise}})$ through~\eqref{direct sum}, and it follows that $X
= X^F$ or ${}^F \! X$ as appropriate. That is, the cocycle is the solution
of the relevant QSDE. For infinite-dimensional $\mathsf{k}$, a priori the
matrix $[F^\alpha_\beta]$ only gives us a form on $\mathfrak{h} \otimes {\widehat{\noise}}$, with
respect to which $X$ satisfies a weak form of~(L) or~(R) --- this is
Theorem~6.6 of~\cite{father}. However, if the cocycle is in addition
\textit{contractive} then the form is bounded, and so the $F^\alpha_\beta$ are
the components of some $F \in B(\mathfrak{h} \otimes {\widehat{\noise}})$ as before.
Recall the subalgebras $\mathcal{A}_X$ and $\mathcal{M}_X$ of $B(\mathfrak{h})$ associated
to a cocycle $X$ by~\eqref{cpt alg} and~\eqref{cpt vNA}. For a
generated cocycle $X$, i.e.\ one satisfying~(L) or~(R) for some $F \in
B(\mathfrak{h} \otimes {\widehat{\noise}})$, it follows from~\eqref{semi gen} and~\eqref{F
cpts} that $\mathcal{A}_X$ is the unital algebra generated by the components
$F^\alpha_\beta$ of $F$. That is, $F \in \mathrm{M} ({\widehat{\noise}}; \mathcal{A}_X)_{\text{\textup{b}}}$, the
${\widehat{\noise}}$-matrix space over $\mathcal{A}_X$. Moreover, from~\eqref{semi gen}
and~\eqref{F cpts} we have
\[
(Q^{c,d}_t)^* = Q^{d,c}_t \ \text{ for all } c,d \in \mathsf{k}, t \geqslant 0
\ \ensuremath{\Leftrightarrow} \ F = F^*.
\]
The following is thus the infinitesimal version of
Proposition~\ref{coc transfs}:
\begin{propn} \label{tisa gen}
Let $F \in B(\mathfrak{h} \otimes {\widehat{\noise}})$ and suppose that $X^F$ is bounded with
locally uniform bounds, hence a Markov-regular left $C_0$-cocycle. We
have the following sets of equivalences\textup{:}
\begin{alist}
\item
\begin{rlist}
\item
$X^F = \widetilde{X^F} = {}^F \! X$.
\item
$F \in \mathrm{M}({\widehat{\noise}}; \mathcal{C})_{\text{\textup{b}}}$ for some commutative subalgebra $\mathcal{C}
\subset B(\mathfrak{h})$.
\end{rlist}
\item
\begin{rlist}
\item
$\widetilde{X^F} = (X^F)^*$.
\item
$F = F^*$.
\end{rlist}
\noindent
In this case $\mathcal{A}_X$ is closed under taking adjoints.
\item
\begin{rlist}
\item
$X^F$ is self-adjoint.
\item
$F = F^*$ and $F \in \mathcal{N} \otimes B({\widehat{\noise}})$ for some commutative von
Neumann algebra $\mathcal{N}$.
\end{rlist}
\end{alist}
\end{propn}
\begin{egs}
(i) If $\mathfrak{h} = \mathsf{k} = \mathbb{C}$ and $F = \begin{sbmatrix} -1/2 & -1
\\ 1 & 0 \end{sbmatrix}$ then $\mathcal{A}_X = \mathcal{M}_X = \mathbb{C}$, so $X^F =
\widetilde{X^F} = {}^F \! X$, i.e.\ $X^F$ is both a left and right cocycle.
However $F \neq F^*$, and so~(a) neither implies~(b) nor~(c).
Furthermore, $\mathcal{A}_X$ being closed under taking adjoints does not
imply $F = F^*$. In this example $X^F_t = W(\mathbf{1}_{[0,t[})$, the Weyl
operator associated to $\mathbf{1}_{[0,t[} \in L^2(\Real_+)$.
(ii) As noted after Proposition~\ref{coc transfs}, commutativity of
$\mathcal{A}_X$ does not imply commutativity of $\mathcal{M}_X$. This also shows
that~(a) does not imply~(c).
(iii) Let $F = \begin{sbmatrix} A & B \\ C & D-I \end{sbmatrix}$
with $A = A^* = -\frac{1}{2} C^*C$, $B = C^*$, $D = D^*$, $D^2 = I$
and $(I+D)C = 0$. Then $F = F^*$ and $\chi(F) = 0$, so from
Theorem~\ref{conisocoi} we have that $X^F$ is unitary, and from~(b)
of the above proposition that $(X^F)^* = \widetilde{X^F}$. However if we
ensure that $\mathcal{A}_X$ is not commutative then $(X^F)^* \neq X^F$;
this can be achieved by taking $\mathfrak{h} = \mathbb{C}^2$, $\mathsf{k} = \mathbb{C}$
and $C = \begin{sbmatrix} 1 & 0 \\ -1 & 0 \end{sbmatrix}$, $D =
\begin{sbmatrix} 0 & 1 \\ 1 & 0 \end{sbmatrix}$. This shows that~(b)
implies neither~(a) nor~(c).
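A quick check that $\chi(F) = 0$ for such $F$: in block form the corners
of $\chi(F)$ are $A + A^* + C^*C$, $B + C^*D$ (with its adjoint) and
$D^*D - I$, and here
\[
A + A^* + C^*C = -C^*C + C^*C = 0, \quad
B + C^*D = C^*(I + D) = \bigl( (I + D)C \bigr)^* = 0, \quad
D^*D - I = D^2 - I = 0.
\]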
\end{egs}
In the examples above it is the algebra generated by the
components of $F$ rather than the von Neumann algebra $\mathcal{N}_F$
generated by $F$ itself that is of interest. For example in~(i)
$\mathcal{M}_X$ is commutative, whereas $\mathcal{N}_F$ is not, and the opposite holds
true in the example in~(iii).
\begin{thm} \label{pos gen}
Let $F \in B(\mathfrak{h} \otimes {\widehat{\noise}})$ and suppose that $X^F$ is bounded with
locally uniform bounds, hence a Markov-regular left $C_0$-cocycle. The
following are equivalent\textup{:}
\begin{rlist}
\item
$X^F_t \geqslant 0$ for all $t \geqslant 0$.
\item
$F = F^* \in \mathcal{N} \otimes B({\widehat{\noise}})$ for some commutative von Neumann
algebra $\mathcal{N}$, and $\Delta F \Delta +\Delta \geqslant 0$.
\end{rlist}
\end{thm}
\begin{proof}
Given any von Neumann algebra $\mathcal{N}$ and $F \in \mathcal{N} \otimes B({\widehat{\noise}})$, if
we define $\theta: \mathcal{N} \rightarrow \mathcal{N} \otimes B({\widehat{\noise}})$ by $\theta (a) = F(a \otimes
I_{\widehat{\noise}})$ and assume that $X^F$ is a bounded solution of~(L) then the
mapping process $k_t (a) := X^F_t (a \otimes I_{\mathcal{F}_+})$ is a solution
of the Evans-Hudson QSDE $dk_t = k_t \circ \theta \, d\Lambda_t$.
Moreover, by Theorem~4.1 of~\cite{mother}, $k$ is completely positive
if and only if
\begin{equation} \label{CP gen}
\theta (a) = \psi (a) +E_{\widehat{0}}\, a K + K^* a E^{\widehat{0}} - a \otimes
P_\mathsf{k}
\end{equation}
for some completely positive map $\psi: \mathcal{N} \rightarrow \mathcal{N} \otimes B({\widehat{\noise}})$ and
$K \in B(\mathfrak{h} \otimes {\widehat{\noise}}; \mathfrak{h})$.
\medskip
\noindent
(i \ensuremath{\Rightarrow}\ ii) Suppose that each $X^F_t$ is positive, then $F \in
\mathcal{N} \otimes B({\widehat{\noise}})$ for some commutative von Neumann algebra $\mathcal{N}$ by
Proposition~\ref{tisa gen}, hence $(X^F_t)^{1/2} \in \mathcal{N} \otimes
B(\mathcal{F}_+)$, and so commutes with $a \otimes I_{\mathcal{F}_+}$, showing that
the flow $k$ is completely positive. In particular, since $\Delta
E_{\widehat{0}} = 0$,
\[
\Delta F \Delta +\Delta = \Delta \theta (1) \Delta +\Delta = \Delta
\psi (1) \Delta \geqslant 0.
\]
\medskip
\noindent
(ii \ensuremath{\Rightarrow}\ i) Write $F = \begin{sbmatrix} A & B \\ B^* & D-I
\end{sbmatrix}$ so that $D \geqslant 0$. Since $\mathcal{N}$ is commutative,
\[
\psi (a) := \begin{bmatrix} 0 & 0 \\ 0 & D (a \otimes I_\mathsf{k})
\end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & D^{1/2} (a \otimes I_\mathsf{k})
D^{1/2} \end{bmatrix}
\]
is completely positive. Setting $K = \begin{bmatrix} \frac{1}{2} A & B
\end{bmatrix}$ we get
\[
\psi (a) +E_{\widehat{0}}\, a K +K^* a E^{\widehat{0}} -a \otimes P_\mathsf{k} = F (a
\otimes I_{\widehat{\noise}}),
\]
so $\theta$ has the form~\eqref{CP gen}, and thus generates a
completely positive flow. In particular $X^F_t = k_t (1)$ must be
positive.
\end{proof}
\begin{cor} \label{cpos gen}
Let $F \in B(\mathfrak{h} \otimes {\widehat{\noise}})$. The following are equivalent\textup{:}
\begin{rlist}
\item
$X^F$ is a positive contraction cocycle.
\item
$F \in \mathcal{N} \otimes B({\widehat{\noise}})$ for some commutative von Neumann algebra
$\mathcal{N}$, $F \leqslant 0$ and $\Delta F \Delta +\Delta \geqslant 0$.
\item
$F = \begin{sbmatrix} A & B \\ B^* & D-I \end{sbmatrix} \in \mathcal{N} \otimes
B({\widehat{\noise}})$ for some commutative von Neumann algebra $\mathcal{N}$, with $A \leqslant
0$, $0 \leqslant D \leqslant I$ and $B = (-A)^{1/2} V (I-D)^{1/2}$ for some
contraction $V \in B(\mathfrak{h} \otimes \mathsf{k}; \mathfrak{h})$.
\end{rlist}
\end{cor}
\begin{proof}
If the flow $k$ generated by $\theta(a) = F(a \otimes I_{\widehat{\noise}})$ is
completely positive then it is contractive if and only if $\theta (1)
= F \leqslant 0$ (\cite{LPgran}*{Theorem~5.1} or
\cite{mother}*{Proposition~5.1}). Moreover if $k$ is positive then
$\norm{k_t} = \norm{k_t (1)} = \norm{X_t}$. This gives the equivalence
of~(i) and~(ii). Part~(iii) follows from a standard characterisation
of positive $2 \times 2$ operator matrices (e.g.\
~\cite{dilation}*{Lemma~2.1}).
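Explicitly, the standard characterisation of positive $2 \times 2$
operator matrices being used here reads
\[
-F = \begin{bmatrix} -A & -B \\ -B^* & I - D \end{bmatrix} \geqslant 0
\quad \ensuremath{\Leftrightarrow} \quad
-A \geqslant 0, \ \ I - D \geqslant 0 \ \text{ and } \
B = (-A)^{1/2} V (I - D)^{1/2}
\]
for some contraction $V \in B(\mathfrak{h} \otimes \mathsf{k}; \mathfrak{h})$, with the
condition $D \geqslant 0$ in~(iii) supplied by $\Delta F \Delta +\Delta \geqslant 0$.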
\end{proof}
\begin{rem}
In terms of the operators in~(iii) one may recognise Bhat's
characterisation of positive contraction cocycles in the special case
when $\mathfrak{h} = \mathbb{C}$ (\cite{bvrBCCR}*{Theorem~7.5}). His focus there
was on \ti{local cocycles}, that is cocycles which satisfy $X_t \in
\sigma_t \bigl( B(\mathfrak{h} \otimes \mathcal{F}_+) \bigr)'$ for all $t \geqslant 0$. For the
CCR flow
\[
\sigma_t \bigl( B(\mathfrak{h} \otimes \mathcal{F}_+) \bigr) = B(\mathfrak{h}) \otimes
I_{\mathcal{F}_{[0,t[}} \otimes B(\mathcal{F}_{[t,\infty[}),
\]
so this assumption is stronger than mere adaptedness, and forces $X_t$
to act trivially on $\mathfrak{h}$, equivalently we must have $\mathcal{A}_X =
\mathbb{C}$, or $F \in I_\mathfrak{h} \otimes B({\widehat{\noise}})$. Hence one may restrict to the
case $\mathfrak{h} = \mathbb{C}$ without loss of generality.
\end{rem}
The final characterisations rely on being able to multiply cocycles
together to produce new cocycles.
\begin{lemma} \label{products}
Let $F,G \in B(\mathfrak{h} \otimes {\widehat{\noise}})$ and suppose that the solutions $X^F$
and $X^G$ to~\textup{(L)} for these coefficients are both bounded with
locally uniform bounds. Assume also that
\begin{equation} \label{commute}
(F \otimes I_{\mathcal{F}_+}) \widehat{X^G_t} = \widehat{X^G_t} (F \otimes I_{\mathcal{F}_+}) \quad
\text{ for all } t \geqslant 0
\end{equation}
where $\widehat{X^G_t} \in B(\mathfrak{h} \otimes {\widehat{\noise}} \otimes \mathcal{F}_+)$ denotes the
result of ampliating $X^G_t$ to $\mathfrak{h} \otimes {\widehat{\noise}} \otimes \mathcal{F}_+$. In this
case the product $X^F X^G$ is a bounded left $C_0$-cocycle with
stochastic generator $F +G +F \Delta G$.
\end{lemma}
\begin{proof}
The adjoint process $\bigl( (X^F_t)^* \bigr)_{t \geqslant 0}$ is a right
cocycle with stochastic generator $F^*$, and so the quantum It\^{o}
formula gives
\begin{multline*}
\ip{u\e{f}}{(X^F_t X^G_t -I) v\e{g}} = \\
\begin{aligned}
\int^t_0 \Bigl\{ & \ip{\widehat{X^F_s}^* u\widehat{f}(s)
\e{f}}{\widehat{X^G_s} (G \otimes I_{\mathcal{F}_+}) v \widehat{g}(s) \e{g}} \\
& + \ip{(F^* \otimes I_{\mathcal{F}_+}) \widehat{X^F_s}^* u\widehat{f}(s)
\e{f}}{\widehat{X^G_s} v \widehat{g}(s) \e{g}} \\
& + \ip{(F^* \otimes I_{\mathcal{F}_+}) \widehat{X^F_s}^* u\widehat{f}(s) \e{f}}{(\Delta \otimes
I_{\mathcal{F}_+}) \widehat{X^G_s} (G \otimes I_{\mathcal{F}_+}) v \widehat{g}(s) \e{g}} \Bigr\}
\, ds.
\end{aligned}
\end{multline*}
The commutativity assumed in~\eqref{commute} shows that the weakly
regular process $X^F X^G$ satisfies~(L) for $F +G +F \Delta G$, and so
is a cocycle with this generator.
\end{proof}
\begin{rem}
If $F \in \mathcal{M} \otimes B({\widehat{\noise}})$ and $G \in \mathcal{N} \otimes B({\widehat{\noise}})$ for von
Neumann algebras $\mathcal{M}$ and $\mathcal{N}$ then a sufficient condition
for~\eqref{commute} is $\mathcal{N} \subset \mathcal{M}'$, since $\widehat{X^G_t} \in \mathcal{N}
\otimes I_{\widehat{\noise}} \otimes B(\mathcal{F}_+)$. In particular this is true if $\mathcal{M}$ is
commutative and $\mathcal{N} = \mathcal{M}$.
\end{rem}
\begin{propn} \label{proj gen}
Let $F \in B(\mathfrak{h} \otimes {\widehat{\noise}})$. The following are equivalent\textup{:}
\begin{rlist}
\item
$X^F$ is an orthogonal projection-valued cocycle.
\item
$F \in \mathcal{N} \otimes B({\widehat{\noise}})$ for some commutative von Neumann algebra
$\mathcal{N}$, and $F +F^* \Delta F = 0$.
\item
$F = \begin{sbmatrix} -BB^* & B \\ B^* & P-I \end{sbmatrix} \in \mathcal{N}
\otimes B({\widehat{\noise}})$ for some commutative von Neumann algebra $\mathcal{N}$ where $P
\in \mathcal{N} \otimes B(\mathsf{k})$ is an orthogonal projection, and $BP=0$.
\end{rlist}
\end{propn}
\begin{proof}
(i \ensuremath{\Rightarrow}\ ii) Since $X^F$ is self-adjoint, $F \in \mathcal{N} \otimes
B({\widehat{\noise}})$ for a commutative von Neumann algebra $\mathcal{N}$, and $F = F^*$.
It follows that $(F \otimes I_{\mathcal{F}_+}) \widehat{X^F_t} = \widehat{X^F_t} (F \otimes
I_{\mathcal{F}_+})$, hence $(X^F)^2$ is a cocycle with stochastic generator
$2F +F \Delta F$ by Lemma~\ref{products}. But we assumed that $X^F_t
= (X^F_t)^2$, and since generators are unique we get
\[
F = 2F +F \Delta F = 2F +F^* \Delta F
\]
as required.
\medskip
\noindent
(ii \ensuremath{\Rightarrow}\ i) From $F +F^* \Delta F = 0$ it follows that $F$ is
self-adjoint, and that $F \leqslant 0$. Thus part~(a) of
Theorem~\ref{conisocoi} and part~(c) of Proposition~\ref{tisa gen}
apply to show that $X^F$ is a self-adjoint contraction $C_0$-cocycle.
But now Lemma~\ref{products} applies to show that $(X^F)^2$ is also a
cocycle, with generator $2F +F \Delta F = F$, and so by uniqueness of
solutions to~(L) we have that each $X^F_t$ is an orthogonal
projection.
\medskip
\noindent
(ii \ensuremath{\Leftrightarrow}\ iii) Simple algebra.
\end{proof}
The final characterisation rests on equivalences between operator
(in-)equalities involving $\chi(F)$ defined in~\eqref{cont con} and
the following additional functions of $F$:
\begin{align}
\pi(F) &:= F +F^* +F^* \Delta F +F \Delta F +F \Delta F^* +F \Delta
F^* \Delta F \label{pi con} \\
\varphi(F) &:= \chi(F) +\chi(F) \Delta \chi(F). \notag
\end{align}
\begin{lemma} \label{ineqs}
For any $F \in B(\mathfrak{h} \otimes {\widehat{\noise}})$ we have the following sets of
equivalences\textup{:}
\begin{align*}
&\ttu{(a)}
&&\ttu{\hspace*{-4mm}(i) } \chi(F) \leqslant 0
&&\ttu{(ii) } \chi(F^*) \leqslant 0
&&\ttu{(iii) } \varphi(F) \leqslant 0
&&\ttu{(iv) } \varphi(F^*) \leqslant 0 &&\\
&\ttu{(b)}
&&\ttu{\hspace*{-4mm}(i) } \pi(F) = 0
&&\ttu{(ii) } \pi(F^*) = 0
&&\ttu{(iii) } \varphi(F) = 0
&&\ttu{(iv) } \varphi(F^*) = 0 &&
\end{align*}
\end{lemma}
\begin{proof}
(a) Since $\chi(F) = \chi(F)^*$, if $\varphi(F) \leqslant 0$ then $\chi(F)
\leqslant -\chi(F) \Delta \chi(F) \leqslant 0$. Thus (iii) \ensuremath{\Rightarrow}\ (i) and
(iv) \ensuremath{\Rightarrow}\ (ii). However note that
\begin{equation} \label{phi id}
\varphi(F) = (I +F^* \Delta) \chi(F^*) (I +\Delta F),
\end{equation}
from which it follows that (ii) \ensuremath{\Rightarrow}\ (iii) and (i) \ensuremath{\Rightarrow}\
(iv).
\medskip
\noindent
(b) Now $\pi(F^*) = \pi(F)^*$ so (i) \ensuremath{\Leftrightarrow}\ (ii). Also $\pi(F) =
\chi(F^*) (I +\Delta F)$, hence (i) \ensuremath{\Rightarrow}\ (iii) by~\eqref{phi id}.
Finally, if $\varphi(F) =0$ then $\chi(F^*) \leqslant 0$ by part~(a), and
so
\[
0 = -\varphi(F) = \bigl[ \bigl( -\chi(F^*) \bigr)^{1/2} (I +\Delta F)
\bigr]^* \bigl[ \bigl( -\chi(F^*) \bigr)^{1/2} (I +\Delta F) \bigr]
\]
giving (iii) \ensuremath{\Rightarrow}\ (i).
\end{proof}
\begin{propn} \label{pi gen}
Let $F \in \mathcal{N} \otimes B({\widehat{\noise}})$ for a commutative von Neumann algebra
$\mathcal{N}$. The following are equivalent\textup{:}
\begin{rlist}
\item
$X^F$ is a partial isometry-valued cocycle.
\item
$\pi(F) = 0$, where $\pi(F)$ is defined in~\eqref{pi con} above.
\end{rlist}
\end{propn}
\begin{proof}
If $X^F$ is partial isometry-valued then since $\mathcal{N}$ is commutative,
the cocycle $(X^F)^* X^F$ is projection-valued with generator $\chi(F)
= F +F^* +F^* \Delta F$ by Lemma~\ref{products}. Hence $\varphi(F) =
\chi(F) +\chi(F) \Delta \chi(F) = 0$ by Proposition~\ref{proj gen} and
so $\pi(F) = 0$ by the lemma above.
Conversely, if $\pi(F) = 0$ then $\varphi(F) = 0$ by the lemma, hence
$\chi(F) \leqslant 0$ and so $X^F$ is a contraction cocycle by part~(a) of
Theorem~\ref{conisocoi}. Again Lemma~\ref{products} can be invoked to
show that $(X^F)^* X^F$ is a (bounded) cocycle with generator
$\chi(F)$ which satisfies the conditions of Proposition~\ref{proj gen}
and hence is projection-valued, so that $X^F$ is itself a partial
isometry-valued cocycle.
\end{proof}
The condition $\pi(F)=0$ is necessarily satisfied by the generator of
\textit{any} Markov-regular partial isometry-valued cocycle, as can be
shown by standard arguments (independence of the quantum stochastic
integrators, or differentiation at zero). In particular if $F =
\begin{sbmatrix} 0 & 0 \\ 0 & D-I \end{sbmatrix}$ then $\pi(F) =0$ if
and only if $D$ is a partial isometry, but for such pure-gauge
cocycles this condition is in general \textit{not} sufficient to imply
that $X^F$ is partial isometry-valued, as can be seen by using the
explicit solution of~(L) given in~\cite{jmLGreif}*{Example~5.3}. For
each $n \in \mathbb{N}$ and $1 \leqslant j \leqslant n$ define $D^{(n)}_j \in B(\mathfrak{h}
\otimes \mathsf{k}^{\otimes n})$ by having $D$ act on $\mathfrak{h}$ and the $j$th copy
of $\mathsf{k}$, and ampliating to the other copies of $\mathsf{k}$. Set
\[
D^{(n)} := D^{(n)}_1 \cdots D^{(n)}_n \in B(\mathfrak{h} \otimes \mathsf{k}^{\otimes n}).
\]
\begin{propn} \label{not suff}
Let $D \in B(\mathfrak{h} \otimes \mathsf{k})$ be a contraction and set $F =
\begin{sbmatrix} 0 & 0 \\ 0 & D-I \end{sbmatrix}$. The following are
equivalent\textup{:}
\begin{rlist}
\item
$X^F$ is a partial isometry-valued cocycle.
\item
$D^{(n)}$ is a partial isometry for each $n \in \mathbb{N}$.
\end{rlist}
\end{propn}
\begin{proof}
The symmetric tensor product of $n$ copies of $L^2 ([0,t[; \mathsf{k})$
can be naturally identified with $L^2 (\Delta^n_t; \mathsf{k}^{\otimes n})$,
where $\Delta^n_t = \{0 < t_1 < \cdots < t_n < t\} \subset (\Real_+)^n$
(see~\cite{mSwhitebialg} or~\cite{jmLGreif} for details). It follows
that
\[
\mathfrak{h} \otimes \mathcal{F}_+ \cong \biggl( \bigoplus_{n=0}^\infty L^2 (\Delta^n_t;
\mathfrak{h} \otimes \mathsf{k}^{\otimes n}) \biggr) \otimes \mathcal{F}_{[t,\infty[}
\]
and that under this identification the solution $X^F$ of~(L) has the
explicit form
\[
X^F_t = \biggl( \bigoplus_{n=0}^\infty I_{L^2 (\Delta^n_t)} \otimes
D^{(n)} \biggr) \otimes I_{\mathcal{F}_{[t,\infty[}},
\]
(see~\cite{jmLGreif}). The result follows.
\end{proof}
\begin{eg}
As a special case, if $\mathsf{k} = \mathbb{C}$ then $\mathfrak{h} \otimes \mathsf{k}^{\otimes n}
\cong \mathfrak{h}$ and $D^{(n)} = D^n$, the usual $n$th power of $D$. Thus
in this setting $X^F$ is partial isometry-valued if and only if $D^n$
is a partial isometry for each $n$, whereas $\pi (F) =0$ merely if $D$
alone is a partial isometry. If we take $\mathfrak{h} = \mathbb{C}^2$ and the
partial isometry
\[
D =
\begin{bmatrix}
\cos \theta & 0 \\ \sin \theta & 0
\end{bmatrix}, \qquad \theta \notin \pi \mathbb{Z}/2,
\]
then $D^2 (D^2)^* D^2 \neq D^2$, so that $X^F$ is \textit{not} partial
isometry-valued.
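To see this explicitly: $D^*D = \begin{sbmatrix} 1 & 0 \\ 0 & 0
\end{sbmatrix}$ is a projection, so $D$ itself is a partial isometry and
$\pi(F) = 0$, whereas
\[
D^2 = \cos \theta \, D, \qquad
D^2 (D^2)^* D^2 = \cos^3 \theta \, DD^*D = \cos^3 \theta \, D
\neq \cos \theta \, D = D^2,
\]
since $\cos^3 \theta \neq \cos \theta$ precisely when $\theta \notin \pi \mathbb{Z}/2$.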
An operator $D$ such that $D^n$ is a partial isometry for each $n \in
\mathbb{N}$ is called a \ti{power partial isometry}; these have been
characterised by Halmos and Wallen (\cite{HWpowerPIs}).
\end{eg}
\section{A transformation of positive cocycles}
\begin{propn} \label{powers}
Let $X = (X_t)_{t \geqslant 0}$ be a left cocycle with $X_t \geqslant 0$ for
each $t \geqslant 0$. Then for each real number $\alpha > 0$ the family $X^\alpha
= (X^\alpha_t)_{t \geqslant 0}$ is a left cocycle.
\end{propn}
\begin{proof}
We are dealing with a self-adjoint cocycle, so it is both a left and a
right cocycle, hence
\begin{equation} \label{op comm}
X_{r+t} = X_r \sigma_r (X_t) = \sigma_r (X_t) X_r \quad \text{for all } r,t
\geqslant 0.
\end{equation}
From this it is clear that $X^n$ is a cocycle for any integer $n \geqslant
1$, and one consisting of positive operators.
Since $\sigma_r$ is a ${}^*$-homomorphism it follows that $\sigma_r (X_t)
\geqslant 0$ and that $\sigma_r (X_t^{1/2}) = \sigma_r (X_t)^{1/2}$. Moreover
from~\eqref{op comm} and the continuous functional calculus we obtain
\[
[X_r^{1/2}, \sigma_r (X_t)^{1/2}] = [X^{1/2}_r, \sigma_r (X^{1/2}_t)] = 0,
\]
and thus
\[
X^{1/2}_r \sigma_r (X^{1/2}_t) \geqslant 0, \qquad \bigl( X^{1/2}_r \sigma_r
(X^{1/2}_t) \bigr)^2 = X_{r+t}.
\]
Hence, by uniqueness of positive square roots, $X^{1/2}$ is a left
and right cocycle of positive operators.
These two observations show that $X^\alpha$ is a cocycle for any dyadic
rational $\alpha$. To get the desired result for any $\alpha > 0$ let
$(\alpha_n)_{n \geqslant 1}$ be a decreasing sequence of dyadic rationals with
$\alpha_n \rightarrow \alpha$. Now if $h_\beta (t) := t^\beta$ for $\beta > 0$ then
$h_{\alpha_n} \rightarrow h_\alpha$ locally uniformly --- the function sequence
is pointwise increasing on $[0,1]$ and pointwise decreasing on $[1,T]$
for any $T > 1$, and so Dini's Theorem may be applied. Thus appealing
to the continuous functional calculus once more and continuity of
$\sigma_r$ is enough to show that $X^\alpha_{r+t} = X^\alpha_r \sigma_r (X^\alpha_t)$
as required.
\end{proof}
It should be noted that the proof of the above result does not rely on
particular properties of the CCR flow $\sigma$ on Fock space. Indeed, the
result is valid for any $E$-semigroup since even preservation of the
identity by $\sigma$ is not used.
However, if the cocycle $X$ is a Markov-regular positive contraction
cocycle then it has a stochastic generator $F$. The next results
discuss how $F$ is transformed by taking powers of $X$, and this is
mediated through the following functions from the algebra $C[0,1]$,
defined for each $\alpha > 0$.
\[
f_\alpha(t) = \begin{cases} \dfrac{\alpha-1 -\alpha t +t^\alpha}{(1-t)^2} &
\text{if } t < 1, \\ \frac{1}{2} \alpha(\alpha -1) & \text{if } t=1,
\end{cases}
\quad
g_\alpha(t) = \begin{cases} \dfrac{1 -t^\alpha}{1-t} & \text{if } t < 1, \\
\alpha & \text{if } t=1, \end{cases}
\quad
h_\alpha(t) = t^\alpha.
\]
Note that $h_\alpha$ is a homeomorphism $[0,1] \rightarrow [0,1]$, so induces an
automorphism of $C[0,1]$ by composition. Also we have the following
identities, valid for all $\alpha, \beta > 0$:
\begin{subequations}
\begin{gather}
g_\alpha = \alpha -(1- h_1) f_\alpha; \label{for lemma} \\
f_\alpha +f_\beta +g_\alpha g_\beta = f_{\alpha+\beta}, \quad g_\beta +g_\alpha h_\beta =
g_{\alpha+\beta}, \quad h_\alpha h_\beta = h_{\alpha+\beta}; \label{add} \\
\beta f_\alpha +g_\alpha^2 (f_\beta \circ h_\alpha) = f_{\alpha\beta}, \quad g_\alpha
(g_\beta \circ h_\alpha) = g_{\alpha\beta}, \quad h_\beta \circ h_\alpha =
h_{\alpha\beta};
\label{comp}
\end{gather}
and the inequalities
\begin{equation}
f_\alpha(t) \leqslant f_\beta(t), \quad g_\alpha(t) \leqslant g_\beta(t), \quad h_\alpha(t)
\geqslant h_\beta(t), \label{monotone}
\end{equation}
\end{subequations}
valid for all $t \in [0,1]$ and $1 \leqslant \alpha < \beta$.
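For instance, the middle identities in~\eqref{add} and~\eqref{comp}
follow at once from the definitions: for $t < 1$,
\[
g_\beta(t) + g_\alpha(t) h_\beta(t)
= \frac{1 - t^\beta + (1 - t^\alpha) t^\beta}{1 - t}
= g_{\alpha+\beta}(t),
\qquad
g_\alpha(t) \, g_\beta(t^\alpha)
= \frac{1 - t^\alpha}{1 - t} \cdot \frac{1 - t^{\alpha\beta}}{1 - t^\alpha}
= g_{\alpha\beta}(t),
\]
with the values at $t = 1$ recovered by continuity; the identities
involving $f_\alpha$ are verified similarly.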
\begin{lemma} \label{Fal gens}
Suppose $F = \begin{sbmatrix} A & B \\ B^* & D-I \end{sbmatrix} \in
B(\mathfrak{h} \otimes {\widehat{\noise}})$ is the generator of a positive contraction
cocycle. Then so is $F_\alpha \in B(\mathfrak{h} \otimes {\widehat{\noise}})$ where
\begin{equation} \label{Fal}
F_\alpha := \begin{bmatrix} \alpha A +B f_\alpha(D) B^* & B g_\alpha(D) \\ g_\alpha(D)
B^* & h_\alpha(D) -I \end{bmatrix}.
\end{equation}
\end{lemma}
\begin{rem}
The proof uses the following elementary fact: if $\mathsf{K}_1$, $\mathsf{K}_2$
and $\mathsf{K}_3$ are Hilbert spaces, and $S \in B(\mathsf{K}_1; \mathsf{K}_2)$, $T \in
B(\mathsf{K}_1; \mathsf{K}_3)$ such that $S^*S \leqslant T^*T$, then there is a
contraction $W \in B(\mathsf{K}_2; \mathsf{K}_3)$ such that $S = WT$. This follows
since the inequality allows us to define $W$ by setting $W (T\xi) =
S\xi$ on $\Ran T$ and $W|_{(\Ran T)^\perp} = 0$.
\end{rem}
\begin{proof}
By condition~(iii) of Corollary~\ref{cpos gen}, $F \in \mathcal{N} \otimes
B({\widehat{\noise}})$ for some commutative von Neumann algebra $\mathcal{N}$, $A \leqslant 0$,
$0 \leqslant D \leqslant I$ and there is a contraction $V$ such that $B =
(-A)^{1/2} V (I-D)^{1/2}$.
Now $F_\alpha \in \mathcal{N} \otimes B({\widehat{\noise}})$, and $0 \leqslant h_\alpha(D) \leqslant I$ since
$h_\alpha([0,1]) = [0,1]$. Also $g_\alpha \geqslant 0$, so
\begin{align*}
0 &\leqslant (-A)^{1/2} V g_\alpha(D) V^* (-A)^{1/2} \\
&= (-A)^{1/2} V (\alpha -(I-D)^{1/2} f_\alpha(D) (I-D)^{1/2}) V^* (-A)^{1/2}
\\
&\leqslant -\alpha A -B f_\alpha(D) B^*,
\end{align*}
using~\eqref{for lemma}. These inequalities prove the existence of a
contraction $W \in B(\mathfrak{h}; \mathfrak{h} \otimes \mathsf{k})$ that satisfies
\[
W (-\alpha A -B f_\alpha(D) B^*)^{1/2} = g_\alpha (D)^{1/2} V^* (-A)^{1/2},
\]
and so
\begin{align*}
B g_\alpha(D) &= (-A)^{1/2} V g_\alpha(D)^{1/2} (I-D)^{1/2} g_\alpha(D)^{1/2}
\\
&= (-\alpha A -B f_\alpha(D) B^*)^{1/2} W^* (I -h_\alpha(D))^{1/2},
\end{align*}
since $(I-D) g_\alpha(D) = I -h_\alpha(D)$. Thus $F_\alpha$ satisfies
condition~(iii) of Corollary~\ref{cpos gen}, showing that it is the
generator of a positive contraction cocycle.
\end{proof}
\begin{thm} \label{powergen}
Let $X$ be a Markov-regular positive contraction cocycle with
stochastic generator $F$. Then for each real $\alpha > 0$ the cocycle
$X^\alpha$ is Markov-regular with generator $F_\alpha$ given by~\eqref{Fal}.
\end{thm}
\begin{proof}
The identities~\eqref{add} lead immediately to
\begin{equation} \label{sum}
F_\alpha +F_\beta +F_\alpha \Delta F_\beta = F_{\alpha+\beta} \quad \text{for all }
\alpha, \beta > 0.
\end{equation}
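For instance, comparing top right entries: that entry of
$F_\alpha \Delta F_\beta$ is $B g_\alpha(D) \bigl( h_\beta(D) - I \bigr)$, and so
\[
B g_\alpha(D) + B g_\beta(D) + B g_\alpha(D) \bigl( h_\beta(D) - I \bigr)
= B \bigl( g_\beta(D) + g_\alpha(D) h_\beta(D) \bigr)
= B g_{\alpha+\beta}(D)
\]
by~\eqref{add}; the remaining entries are checked in the same way.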
In particular, noting that $F_1 = F$, it follows from
Lemma~\ref{products} and an induction argument that $X^n$ is
Markov-regular and has generator $F_n$ for each $n \in \mathbb{N}$.
Next, Lemma~\ref{Fal gens} shows that $F_\frac{1}{2}$ is the generator of some
positive contraction cocycle $Y$. By~\eqref{sum} and
Lemma~\ref{products}, $Y^2$ is also a cocycle with generator $2 F_\frac{1}{2}
+F_\frac{1}{2} \Delta F_\frac{1}{2} = F_1 = F$, and so $Y^2 = X$ by uniqueness of
generators. Thus the cocycle $X^{1/2}$ has generator $F_\frac{1}{2}$.
Now the identities~\eqref{comp} give $(F_\alpha)_\beta = F_{\alpha\beta}$
for all $\alpha, \beta > 0$, so square roots may be taken repeatedly,
followed by taking arbitrarily large integer powers to show that
$X^\alpha$ is Markov-regular with generator $F_\alpha$ for each dyadic
rational $\alpha > 0$.
If we choose any real number $\alpha > 1$ and let $(\alpha_n)_{n \geqslant 1}$ be
a sequence of such rationals with $\alpha_n \downarrow \alpha$, then the
function sequences $(f_{\alpha_n})$, $(g_{\alpha_n})$ and $(h_{\alpha_n})$
converge pointwise to $f_\alpha$, $g_\alpha$ and $h_\alpha$. The
inequalities~\eqref{monotone} show that the convergence is also
monotonic, and hence uniform by Dini's Theorem, so that $F_{\alpha_n}
\rightarrow F_\alpha$ in norm. It follows from~\eqref{semi decomp}
and~\eqref{semi gen} that the associated semigroups of the cocycle
$X^\alpha$ are the norm limits of the semigroups associated to
$X^{\alpha_n}$, and thus $X^\alpha$ has stochastic generator $F_\alpha$.
Finally, for any remaining $0 < \alpha < 1$ pick $n \in \mathbb{N}$ so that
$\beta := 2^n \alpha > 1$, then $X^\beta$ has generator $F_\beta$, and $X^\alpha =
(X^\beta)^{2^{-n}}$ has generator $(F_\beta)_{2^{-n}} = F_\alpha$.
\end{proof}
\section{Polar decomposition}
One obvious question to ask given the results above is the following:
if $X$ is a contraction cocycle such that $\mathcal{M}_X$ is commutative then
we can form the positive part process $(|X_t|)_{t \geqslant 0} = \bigl(
(X^*_t X_t)^{1/2} \bigr)_{t \geqslant 0}$ which is again a cocycle, so can
we choose partial isometries $U_t$ so that $U_t |X_t| = X_t$ for each
$t$ \textit{and} so that $(U_t)_{t \geqslant 0}$ is a cocycle?
What follows answers this question when $X$ is Markov-regular with
generator $F = \begin{sbmatrix} A & B \\ C & D-I \end{sbmatrix}$. The
necessary and sufficient condition on $F$ for contractivity of $X$
is $\chi(F) \leqslant 0$ (Theorem~\ref{conisocoi}), which translates as:
\begin{equation} \label{con conds}
\begin{gathered}
\norm{D} \leqslant 1, \quad A+A^*+C^*C \leqslant 0, \quad \text{ and} \\
B+C^*D = (-A-A^*-C^*C)^{1/2} V (I-D^*D)^{1/2}
\end{gathered}
\end{equation}
for some contraction $V \in B(\mathfrak{h} \otimes \mathsf{k}; \mathfrak{h})$.
Lemma~\ref{products}, Proposition~\ref{powers} and
Theorem~\ref{powergen} combine to show that $(|X_t|)_{t \geqslant 0}$ is a
Markov-regular cocycle with generator $G = \chi(F)_\frac{1}{2}$, which
equals
\[
\begin{bmatrix}
\frac{1}{2} (A+A^*+C^*C) +(B+C^*D) f_\frac{1}{2}(|D|^2) (B^*+D^*C) &
(B+C^*D) g_\frac{1}{2}(|D|^2) \\
g_\frac{1}{2}(|D|^2) (B^*+D^*C) & |D|-I
\end{bmatrix}
\]
Now suppose that $U$ is a partial isometry-valued cocycle with
generator $E = \begin{sbmatrix} K & L \\ M & N-I \end{sbmatrix} \in
\mathcal{M}_X \otimes B({\widehat{\noise}})$. Then the product process $(U_t |X_t|)_{t \geqslant 0}$
is a cocycle with generator $E+G+E \Delta G$ (Lemma~\ref{products}).
This must equal $F$ to give $U|X| = X$, and thus $K$, $L$, $M$ and $N$
must be chosen to satisfy
\begin{subequations} \label{gen eqns}
\begin{align}
N|D| &= D, \label{N eqn} \\
M &= C -N g_\frac{1}{2}(|D|^2) (B^*+D^*C), \label{M eqn} \\
L|D| &= B -(B+C^*D) g_\frac{1}{2}(|D|^2), \label{L eqn} \\
K &= \tfrac{1}{2} (A-A^*-C^*C) -(B+C^*D) f_\frac{1}{2}(|D|^2) (B^*+D^*C) \label{K
eqn} \\
& \quad -Lg_\frac{1}{2}(|D|^2) (B^*+D^*C). \notag
\end{align}
\end{subequations}
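For instance, \eqref{N eqn} arises from comparing bottom right corners:
that entry of $E + G + E \Delta G$ is
\[
(N - I) + \bigl( |D| - I \bigr) + (N - I) \bigl( |D| - I \bigr) = N |D| - I,
\]
which must coincide with the corresponding entry $D - I$ of $F$; the
other three equations come from the remaining corners in the same way.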
Note that $N$ and $L$ are fixed on $\Ran |D|$ by these equations, and
once these are chosen the operators $K$ and $M$ are defined
by~\eqref{K eqn} and~\eqref{M eqn} respectively. Also, we want $U$ to
be partial isometry-valued, so need to satisfy $\pi(E) = 0$ by
Proposition~\ref{pi gen}. This is equivalent to requiring
\begin{equation} \label{pi conds}
N = NN^*N, \quad M^*N +LN^*N = 0 \ \text{ and } \ K+K^*+M^*M
+L(I-N^*N)L^* = 0
\end{equation}
So now if we choose any partial isometry $N$ that satisfies~\eqref{N
eqn} then one solution to this problem is obtained by setting
\[
L = -C^*N +(B+C^*D) g_\frac{1}{2}(|D|^2).
\]
Since $\sqrt{t}\, g_\frac{1}{2}(t) = 1- g_\frac{1}{2}(t)$ it is easy to check~\eqref{L
eqn}; moreover the second equation in~\eqref{pi conds} follows.
Checking that the third equation holds is much more tedious, but is
greatly assisted by noting that $2f_\frac{1}{2} +(g_\frac{1}{2})^2 = 0$. Summarising the
above we have:
\begin{thm} \label{polar decomp}
Every Markov-regular contraction cocycle $X$ for which $\mathcal{M}_X$ is
commutative can be written as the product of a partial isometry-valued
cocycle and a positive contraction cocycle.
\end{thm}
\begin{egs}
(i) Take $\mathfrak{h} = \mathbb{C}$, then $\mathcal{A}_X = \mathcal{M}_X = \mathbb{C}$, thus
Theorem~\ref{polar decomp} is applicable to \textit{any} contraction
cocycle on $\mathcal{F}_+$, since also every cocycle is trivially
Markov-regular in this context. Now the generator of such a cocycle is
$F \in B(\mathbb{C} \oplus \mathsf{k})$ which can be written as
\[
F = \begin{bmatrix} i \mu -\frac{1}{2}(\nu^2 +\norm{v}^2) & \langle
\nu (I-D^*D)^{1/2} w -D^* v| \\ |v\rangle & D-I \end{bmatrix}
\]
for some choice of $v \in \mathsf{k}$, $\nu \in [0,\infty[$, $\mu \in
\mathbb{R}$, contraction $D \in B(\mathsf{k})$ and $w \in \mathsf{k}$ with
$\norm{w} \leqslant 1$. This follows from~\eqref{con conds} (see
also~\cite{jmLGreif}*{Theorem~5.12} or~\cite{dilation}*{Theorem~6.2}).
It is then possible to write down the generators $G$ and $E$ of the
positive part and partial isometry cocycles for any choice of partial
isometry $N$ such that $N|D| = D$.
As a particular case suppose that $D$ is already a partial isometry,
let $P = |D|$, the projection onto the initial space of $D$, and take
$N = D$. Then
\begin{align*}
G &= \begin{bmatrix} -\frac{\nu^2}{2} \bigl(1 +\norm{P^\perp w}^2
\bigr) & \langle \nu P^\perp w| \\ |\nu P^\perp w\rangle & P-I
\end{bmatrix} \quad \text{ and } \\
E &= \begin{bmatrix} i\mu -\frac{1}{2}\norm{v}^2 -\frac{1}{2}\nu^2
\norm{P^\perp w}^2 & \langle \nu P^\perp w -D^*v| \\ |v\rangle & D-I
\end{bmatrix}.
\end{align*}
This applies for example if we take $\mathsf{k} = l^2(\mathbb{N})$ with usual
orthonormal basis and take $D$ to be the coisometric left shift, so
that $P$ is the projection onto $\{e_1\}^\perp$. Note that if we
choose $\mu = \nu = 0$, and $v = w = 0$ then
\[
F = E = \begin{bmatrix} 0 & 0 \\ 0 & D-I \end{bmatrix}, \quad
G = \begin{bmatrix} 0 & 0 \\ 0 & P-I \end{bmatrix}.
\]
If $\Gamma (Z)$ denotes the second quantisation of $Z \in B(L^2 (\Real_+;
\mathsf{k}))$ then using the isomorphism $L^2 (\Real_+; \mathsf{k}) \cong
L^2 (\Real_+) \otimes \mathsf{k}$ it follows that
\[
X_t = U_t = \Gamma (M_{\mathbf{1}_{[0,t[}} \otimes D + M_{\mathbf{1}_{[t,\infty[}} \otimes
I_\mathsf{k}), \quad |X_t| = \Gamma (M_{\mathbf{1}_{[0,t[}} \otimes P
+M_{\mathbf{1}_{[t,\infty[}} \otimes I_\mathsf{k}),
\]
where $M_f$ denotes multiplication by $f \in L^\infty (\Real_+)$. In
particular since $D$ is not normal, the algebra generated by the
process $X$ is not commutative.
\medskip
On a different tack, note that equation~\eqref{L eqn} only specifies
$L$ on $\Ran |D|$, so one might be tempted to set it equal to $0$ on
the orthogonal complement, which would certainly be the case if we
replace $L$ by $L' := LN^*N$. This has the effect of apparently making
it easier to check the third of the identities in~\eqref{pi conds}
(noting that the first and second remain valid), since $L'(I-N^*N)L'^*
= 0$. But for our example above with $D$ as the left shift one finds
that $K$ now becomes
\[
K = i\mu -\tfrac{1}{2} \norm{v}^2 +\tfrac{1}{2} \nu^2 \norm{P^\perp
w}^2
\quad \ensuremath{\Rightarrow} \quad
K +K^* +M^*M = \nu^2 \norm{P^\perp w}^2.
\]
Thus the third equation in~\eqref{pi conds} will fail for an
appropriate choice of $\nu$ and $w$, and hence $U$ will not be partial
isometry-valued.
\medskip
(ii) As a special case of the more general situation, suppose that $F
= \begin{sbmatrix} A & B \\ C & D-I \end{sbmatrix} \in \mathcal{N} \otimes
B({\widehat{\noise}})$ for a commutative von Neumann algebra $\mathcal{N}$, and with $D$
isometric. Then $|D| = I$, $N = D$ and hence
\[
G =
\begin{bmatrix}
\frac{1}{2} (A+A^*+C^*C) & 0 \\ 0 & 0
\end{bmatrix}
\ \text{ and } \
E =
\begin{bmatrix}
\frac{1}{2} (A-A^*-C^*C) & -C^*D \\ C & D-I
\end{bmatrix}.
\]
In particular the positive part is $|X_t| = P_t \otimes I_{\mathcal{F}_+}$ where
$P_t$ is the positive semigroup on $\mathfrak{h}$ with generator $\frac{1}{2}
(A+A^*+C^*C)$, and all of the stochastic terms occur only in the
process $U$. Moreover in this case $\chi(E)=0$, so $U$ is an isometric
cocycle.
\end{egs}
\bigskip
\noindent
\emph{ACKNOWLEDGEMENTS}. I am indebted to Martin Lindsay for providing
Proposition~\ref{not suff} and the example in the subsequent
remark. Many thanks to Luigi Accardi whose questions after my
presentation of this material helped me spot an error in a previous
version, and to Ken Duffy for facilitating the corrections.
\end{document}
\section{Introduction}
Strongly correlated $f$-electron systems exhibit a wide range of ordering phenomena, including various
magnetic orderings as well as superconductivity.\cite{mydosh,thalmeir1} However, they are notorious for
possessing complex ordered phases, or so-called `hidden order', which are sometimes
not easily accessible experimentally because the ordered multipoles, such as electric quadrupole or
magnetic octupole, are of higher rank than the rank-one magnetic
dipole.\cite{kuramoto,santini}
This marked difference from correlated $d$-electron systems is a
consequence of the strong spin-orbit coupling present in these materials. The recent prediction that
samarium hexaboride (SmB$_6$) is a topological Kondo insulator has led to
intense interest and activity in these materials.\cite{takimoto2}
CeB$_6$, with its simple cubic crystal structure, is one of the most extensively studied $f$-electron
systems, both theoretically and experimentally. Apart from pronounced Kondo-lattice
properties, it undergoes two different ordering transitions as a function of temperature despite
its simple crystal structure.\cite{kasuya1} First, there is a transition to the antiferroquadrupolar (AFQ) phase with ordering wavevector
${\bf Q}_1 = (\pi, \pi, \pi)$ at $T_{{\bf Q}} \approx 3.2$~K, which has long remained
hidden to standard experimental probes such as neutron diffraction.\cite{effantin,goodrich,matsumura,shiina,thalmeier}
Then, another transition to the antiferromagnetic (AFM) phase with a double-${\bf Q}_2$ commensurate structure,
${\bf Q}_2 = (\pi/2, \pi/2, 0)$, takes place at $T_{N} \approx 2.3$~K.\cite{zaharko}
Significant progress in understanding the nature of the above-mentioned phases of CeB$_6$ has been made
recently through experiments. Magnetic spin resonance, for instance, has been observed in
the AFQ phase,\cite{demishev1,demishev2} with its
origin attributed to ferromagnetic correlations\cite{krellner,schlottmann} as
in the Yb compounds, e.g., YbRh,\cite{krellner} YbIr$_2$Si$_2$,\cite{sichelschmidt}
and the Ce compound CeRuPO.\cite{bruning} On the other hand, according to a recent
inelastic neutron-scattering (INS) experiment, the AFM phase is rather a
coexistence phase that also hosts AFQ ordering.\cite{friemel} In other INS measurements,
low-energy ferromagnetic fluctuations have been reported to be more
intense than the mode corresponding to the magnetic ordering wavevector ${\bf Q}_2$ in the AFM phase,
and to persist, though with reduced intensity, even in the pure AFQ phase.\cite{jang} The overall picture emerging
from these experiments, together with the hot spot observed near $\Gamma$ by angle-resolved photoemission
spectroscopy (ARPES), implies the existence of strong ferromagnetic fluctuations in various phases of CeB$_6$.
So far, most theoretical studies of the multipole orderings have focused on the localized aspects of the 4$f$ electrons while
neglecting their itinerant character.\cite{shiina,thalmeier,okhawa} This may, however, appear surprising, because
estimates of the density of
states (DOS) of CeB$_6$ at the Fermi level from low-temperature specific-heat measurements, as well as from
effective-mass measurements by the de Haas-van Alphen (dHvA) effect, give a significantly larger value than for a
paramagnetic metal such as LaB$_6$, provided that the Fermi surfaces (FSs) are taken to be the same in both compounds.\cite{harrison} In the
temperature regime $T > T_{Q}$, CeB$_6$ exhibits typical dense-Kondo behavior dominated by a Fermi liquid
with a Kondo temperature of
the order of $T_N$ and $T_{Q}$.\cite{nakamura} Moreover, a low-energy dispersionless collective mode
at ${\bf Q}_1$ has been observed in the INS experiments, lying well within the single-particle charge
gap present in the coexistence phase.\cite{friemel} The existence of such spin excitons has been reported
previously in several superconductors\cite{eremin} as well as in heavy-fermion compounds,\cite{akbari} and
the origin of such modes has been explained in terms of
correlated particle-hole excitations, a characteristic of itinerant systems.
Recent advances based on a full three-dimensional tomographic sampling of
the electronic structure by ARPES have unraveled
the FSs in the high-symmetry planes of cubic CeB$_6$.\cite{koitzsch,neupane} The FSs are
found to be cross sections of
ellipsoids that exclude the $\Gamma$ point and are bisected by the (100) plane at $k_z = \pi$. The largest
semi-principal axes of the ellipsoids coincide with $\Gamma$-X. Based on these FS characteristics, it
has been suggested that multipole order may arise from nesting, since shifting one ellipsoid by
the nesting vector ($\pi, \pi, \pi$) into the void formed between the other three
results in a significant overlap. Interestingly, the features of the FSs bear
several similarities to those of LaB$_6$, as suggested earlier by
estimates based mainly on dHvA experiments\cite{onuki,harrison} as well as by several band-structure
calculations.\cite{kasuya,suvasini,auluck}
Despite the various experimental works on the FSs of CeB$_6$, no theoretical study of the
ordering phenomena has been carried out within a model based on the
realistic electronic structure, and the nature of the instability or
fluctuations that arise in that case is therefore of strong current
interest. To address this important issue, we propose
a two-orbital tight-binding model with energy levels belonging to the
$\Gamma_8$ quartet. The model reproduces well the experimentally measured FSs along the high-symmetry planes, namely
(100), (110), etc., which are sections of the ellipsoid-like three-dimensional FSs with
squarish cross sections. With this realistic electronic structure, we examine the nature of the instability or
fluctuations in a Hubbard-like model with the standard onsite Coulomb interaction terms usually considered for multiorbital systems such as the
iron-based superconductors. This is accomplished by studying the behavior of the susceptibilities corresponding to the various
multipolar moments.
\section{Model Hamiltonian}
The single-particle states in the presence of strong spin-orbit coupling
are defined using the total angular momentum ${{\bf j}} = \textit{{\bf l}} + \textit{{\bf s}}$, which yields
a low-lying sextet and a high-lying octet
for $j = 5/2$ and $7/2$, respectively, in the case of an $f$ electron with ${l} = 3$. Therefore, with the
number of electrons $n$ being 1, it is the low-lying
sextet that is relevant for the Ce$^{3+}$ ions. These ions sit in an octahedral environment whose
corners are occupied by the six B ions. The
sextet is therefore further split into the $\Gamma_8$ quartet, which forms the ground state of CeB$_6$, and a high-lying
$\Gamma_7$ doublet separated by $\sim 500$~K. The $\Gamma_8$ quartet
involves two Kramers doublets, and each doublet can be treated
as a spin-$\frac{1}{2}$ system.\cite{hotta}
Using the $\Gamma_8$ quartet, the kinetic part of our starting Hamiltonian is
\begin{equation}
\mathcal{H}_0 = \sum_{{\bf i},{\bf j}}\sum_{\mu,\nu}\sum_{\sigma, {\sigma}^{\prime}} t_{{\bf i};{\bf j}}^{\mu
\sigma;\nu {\sigma}^{\prime}}
(f_{{\bf i}\mu\sigma}^\dagger f_{{\bf j}\nu{\sigma}^{\prime}} + \text{H.c.}),
\end{equation}
where $t_{{\bf i};{\bf j}}^{\mu
\sigma;\nu {\sigma}^{\prime}}$ are the hopping elements from orbital $\mu$ with pseudospin $\sigma$ at site
${\bf i}$ to orbital $\nu$ with pseudospin $\sigma^{\prime}$ at site ${\bf j}$. The operator
$f_{{\bf i} \mu \sigma}^\dagger$ ($f_{{\bf i} \mu \sigma}$) creates (destroys)
an $f$ electron in the $\mu$ orbital of site ${\bf i}$ with pseudospin $\sigma$. These operators are given
explicitly in terms of the $z$-components of the total
angular momentum $j = 5/2$ as follows,
\begin{eqnarray}
f_{i1 \uparrow} &=& \sqrt{\frac{5}{6}}c_{i-\frac{5}{2}}+\sqrt{\frac{1}{6}}c_{i\frac{3}{2}} \nonumber\\
f_{i1 \downarrow} &=& \sqrt{\frac{5}{6}}c_{i\frac{5}{2}}+\sqrt{\frac{1}{6}}c_{i-\frac{3}{2}} \nonumber\\
f_{i2 \uparrow} &=& c_{i-\frac{1}{2}},\,\,\,f_{i2 \downarrow} = c_{i\frac{1}{2}}.
\end{eqnarray}
As can be seen in Fig.~\ref{orb}, the $\Gamma_{8}$ orbitals are similar in structure to the $d$ orbitals $d_{x^2-y^2}$
and $d_{3z^2-r^2}$.
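As a consistency check, the coefficients above define an orthonormal quartet inside the $j=5/2$ sextet; a minimal numerical verification (the ordering of the $j_z$ basis below is merely a bookkeeping choice) is
\begin{verbatim}
import numpy as np

# j = 5/2 sextet, ordered as jz = (-5/2, -3/2, -1/2, +1/2, +3/2, +5/2)
jz = [-2.5, -1.5, -0.5, 0.5, 1.5, 2.5]
e = {m: np.eye(6)[i] for i, m in enumerate(jz)}

a, b = np.sqrt(5 / 6), np.sqrt(1 / 6)
quartet = np.array([
    a * e[-2.5] + b * e[1.5],    # |Gamma_8(1), up>
    a * e[2.5]  + b * e[-1.5],   # |Gamma_8(1), down>
    e[-0.5],                     # |Gamma_8(2), up>
    e[0.5],                      # |Gamma_8(2), down>
])

# Gram matrix is the 4x4 identity, i.e. the quartet is orthonormal
print(np.round(quartet @ quartet.T, 12))
\end{verbatim}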
\begin{figure}
\begin{center}
\vspace*{-4mm}
\hspace*{-2mm}
\psfig{figure=fig1.eps,width=85mm,angle=0}
\vspace*{-2mm}
\end{center}
\caption{$\Gamma_{8(1)}$ and $\Gamma_{8(2)}$ orbitals with a similar structure as $d$-orbitals $d_{x^2-y^2}$
and $d_{3z^2-r^2}$, respectively.}
\label{orb}
\end{figure}
\begin{figure}
\begin{center}
\vspace*{-8mm}
\hspace*{-8mm}
\psfig{figure=fig2a.eps,width=32.5mm,angle=-90}
\hspace*{-9mm}
\psfig{figure=fig2b.eps,width=33.5mm,angle=-90}
\vspace*{-6mm}
\end{center}
\caption{(a) Electron dispersions along high-symmetry direction $\Gamma$-X-M-$\Gamma$-R and (b) DOS, which is peaked near
the Fermi level.}
\label{qpi1}
\end{figure}
\begin{figure}
\begin{center}
\vspace*{-10mm}
\hspace*{-2mm}
\psfig{figure=fig3.eps,width=85mm,angle=0}
\vspace*{-4mm}
\end{center}
\caption{Fermi-surface cuts in the high-symmetry planes: (a) (100), (b) the parallel plane at $k_z=\pi$,
(c) (110) and (d) (111), obtained for the chemical potential $\mu$ = 16.4.}
\label{disp}
\end{figure}
The kinetic energy after the Fourier transform can be expressed in terms of the $\Gamma$ matrices defined as
$\hat{\Gamma}^{0,1,2,3,4,5} = (\hat{\tau}_0 \hat{\sigma}_0,\hat{\tau}_z \hat{\sigma}_0,\hat{\tau}_x
\hat{\sigma}_0,\hat{\tau}_y \hat{\sigma}_x,\hat{\tau}_y \hat{\sigma}_y,\hat{\tau}_y \hat{\sigma}_z)$,
where the ${\sigma}_i$ and ${\tau}_i$ are Pauli matrices acting on the spin and orbital degrees of freedom, respectively, so that
\begin{equation}
\sum_{\k} \Psi^{\dagger}_{\k} H_k \Psi_{\k} = \sum_{\k} \sum_{i = 0,1,..,5} \Psi^{\dagger}_{\k} d^i (\k) \Gamma^{i}\Psi_{\k}.
\end{equation}
Here, $\Psi^{\dagger}_{\k} = (f^{\dagger}_{\k 1\uparrow},f^{\dagger}_
{\k 2\uparrow},f^{\dagger}_{\k 1\downarrow},f^{\dagger}_{\k 2\downarrow})$ is the electron field operator, and the coefficients $d^i (\k)$ are given by
\begin{eqnarray}
d^0(\k) &=& -\mu+8t\phi_{0}(\k)+\frac{28}{3}t^{\prime}\phi^{\prime}_{0}(\k)+\frac{128}{9}t^{\prime \prime}
\phi^{\prime \prime}_0(\k) \nonumber\\
d^1(\k) &=& 4t\phi_1(\k)-\frac{2}{3}t^{\prime}\phi^{\prime}_1(\k) \nonumber\\
d^2(\k) &=& -4\sqrt{3}t\phi_2(\k)+\frac{2}{\sqrt{3}}t^{\prime}\phi^{\prime}_2(\k) \nonumber\\
d^3(\k) &=& \frac{16}{\sqrt{3}}t^{\prime}\phi^{\prime}_3(\k)+\frac{128}{9\sqrt{3}}t^{\prime \prime}
\phi^{\prime \prime}_3(\k) \nonumber\\
d^4(\k) &=& \frac{16}{\sqrt{3}}t^{\prime}\phi^{\prime}_4(\k)+\frac{128}{9\sqrt{3}}
t^{\prime \prime}\phi^{\prime \prime}_4(\k) \nonumber\\
d^5(\k) &=& \frac{16}{\sqrt{3}}t^{\prime}\phi^{\prime}_5(\k)+\frac{128}{9\sqrt{3}}
t^{\prime \prime}\phi^{\prime \prime}_5(\k).
\end{eqnarray}
Here $t^{\prime}$ and $t^{\prime \prime}$ are the second- and third-nearest-neighbor
hopping parameters. The various $\phi(\k)$ are expressed in terms of cosines and sines of the
components of the momentum in the Brillouin zone as
\begin{eqnarray}
\phi_0&=&\cos k_x+\cos k_y+\cos k_z \nonumber\\
\phi^{\prime}_0&=&\cos k_y \cos k_z+\cos k_z \cos k_x+\cos k_x \cos k_y \nonumber\\
\phi_1&=&\cos k_x+\cos k_y-2 \cos k_z \nonumber\\
\phi^{\prime}_1&=&\cos k_y \cos k_z+\cos k_z \cos k_x-2 \cos k_x \cos k_y \nonumber\\
\phi_2&=&\cos k_x-\cos k_y \nonumber\\
\phi^{\prime}_2&=&\cos k_y \cos k_z-\cos k_z \cos k_x \nonumber\\
\phi^{\prime}_3&=&\sin k_y \sin k_z \nonumber\\
\phi^{\prime}_4&=&\sin k_z \sin k_x \nonumber\\
\phi^{\prime}_5&=&\sin k_x \sin k_y \nonumber\\
\phi^{\prime \prime}_0 &=& \cos k_x \cos k_y \cos k_z \nonumber\\
\phi^{\prime \prime}_3 &=& \cos k_x \sin k_y \sin k_z \nonumber\\
\phi^{\prime \prime}_4 &=& \sin k_x \cos k_y \sin k_z \nonumber\\
\phi^{\prime \prime}_5 &=& \sin k_x \sin k_y \cos k_z.
\end{eqnarray}
In the absence of the second- and third-nearest-neighbor hopping, the kinetic part of the
Hamiltonian reduces to that of the manganites,\cite{dagotto} apart from a constant multiplicative factor.
In the following, the unit of energy is set to $t$. The calculated electron
dispersions for $t^{\prime} =-0.38$ and $t^{{\prime}{\prime}} = 0.18$, which consist
of doubly degenerate eigenvalues, are shown in Fig.~\ref{qpi1}(a) along the high-symmetry directions.
A large hole pocket near X and the extrema exhibited
by two bands near $\Gamma$ just below the Fermi level are broadly in agreement with the 4$f$-dominated
part of the band-structure calculations. The density of states (DOS) shows
two peaks, with the larger one in the vicinity of the Fermi level (Fig.~\ref{qpi1}(b)). This is not unexpected, particularly because
the flat portions of the
two bands near $\Gamma$ contribute most of the DOS at the Fermi level. Interestingly, a
hot spot near $\Gamma$ has also been observed in the ARPES measurements, which points towards the possibility of
strong ferromagnetic fluctuations.\cite{neupane} Here, the chemical potential is chosen to be 16.4 to
obtain better agreement with the ARPES FSs.
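For reference, a minimal numerical sketch that assembles $H_{\bf k}=\sum_i d^i({\bf k})\Gamma^i$ from the expressions above (with $t=1$, $t'=-0.38$, $t''=0.18$ and $\mu=16.4$) and diagonalizes it is given below; the identification of the Kronecker-product ordering with the basis $(f_{{\bf k}1\uparrow},f_{{\bf k}2\uparrow},f_{{\bf k}1\downarrow},f_{{\bf k}2\downarrow})$ is a bookkeeping assumption, and the Fermi-surface cuts of Fig.~\ref{disp} follow by contouring the resulting bands at zero energy.
\begin{verbatim}
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Gamma^{0..5} = (tau0 s0, tauz s0, taux s0, tauy sx, tauy sy, tauy sz);
# first Kronecker factor = spin (sigma), second = orbital (tau) -- assumed ordering.
G = [np.kron(s0, s0), np.kron(s0, sz), np.kron(s0, sx),
     np.kron(sx, sy), np.kron(sy, sy), np.kron(sz, sy)]

t, tp, tpp, mu = 1.0, -0.38, 0.18, 16.4
r3 = np.sqrt(3.0)

def H(kx, ky, kz):
    cx, cy, cz = np.cos([kx, ky, kz])
    sxk, syk, szk = np.sin([kx, ky, kz])
    d = [-mu + 8*t*(cx+cy+cz) + (28/3)*tp*(cy*cz+cz*cx+cx*cy) + (128/9)*tpp*cx*cy*cz,
         4*t*(cx+cy-2*cz) - (2/3)*tp*(cy*cz+cz*cx-2*cx*cy),
         -4*r3*t*(cx-cy) + (2/r3)*tp*(cy*cz-cz*cx),
         (16/r3)*tp*syk*szk + (128/(9*r3))*tpp*cx*syk*szk,
         (16/r3)*tp*szk*sxk + (128/(9*r3))*tpp*sxk*cy*szk,
         (16/r3)*tp*sxk*syk + (128/(9*r3))*tpp*sxk*syk*cz]
    return sum(di * Gi for di, Gi in zip(d, G))

for name, k in [("Gamma", (0, 0, 0)), ("X", (np.pi, 0, 0)),
                ("M", (np.pi, np.pi, 0)), ("R", (np.pi, np.pi, np.pi))]:
    print(name, np.round(np.linalg.eigvalsh(H(*k)), 3))  # doubly degenerate bands
\end{verbatim}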
\begin{figure}
\begin{center}
\vspace*{-4mm}
\hspace*{-2mm}
\psfig{figure=fig4.eps,width=60mm,angle=0}
\vspace*{-4mm}
\end{center}
\caption{Fermi surfaces in the Brillouin zone for the chemical potential $\mu$ = 16.4.}
\label{fsbz}
\end{figure}
Fig.~\ref{disp} shows cuts of the FSs along different high-symmetry planes. In the (100) plane the cuts have an ellipse-like structure with
the major axis aligned along $\Gamma$-X, touching each other along the $\Gamma$-M direction.
The parallel plane at (0, 0, $\pi$), on the other hand, contains a single squarish pocket around that point. In the absence of four-fold rotation symmetry in the (110) plane, two large ellipse-like FSs
are present with their major axes along the $\Gamma$-Y direction, while small pockets exist along the $\Gamma$-X direction. The six-fold rotation symmetry is reflected in the six pockets
in the (111) plane. All of these cuts are obtained from the FSs shown in the whole Brillouin zone in Fig.~\ref{fsbz},
which consist of ellipsoid-like sheets whose largest semi-principal axes coincide with $\Gamma$-X,
albeit with squarish cross sections. Overall, good
agreement is found with several recent ARPES
measurements.\cite{koitzsch,neupane} The ARPES estimates are believed to be more reliable than the earlier
estimates from dHvA experiments carried out in the presence of a magnetic field, since
the latter has the potential to affect the hot spots.
\section{Multipolar susceptibilities}
\begin{figure*}
\begin{center}
\vspace*{-4mm}
\hspace*{-8mm}
\psfig{figure=fig5a.eps,width=48mm,angle=-90}
\hspace*{0mm}
\psfig{figure=fig5b.eps,width=48mm,angle=-90}
\vspace*{-4mm}
\end{center}
\caption{(a) Magnetic and quadrupolar static susceptibilities
along the high-symmetry directions $\Gamma$-X-M-$\Gamma$-R. (b) Octupolar static susceptibilities.}
\label{mpi1}
\end{figure*}
\begin{figure*}
\begin{center}
\vspace*{-4mm}
\hspace*{-8mm}
\psfig{figure=fig6a.eps,width=48mm,angle=-90}
\hspace*{0mm}
\psfig{figure=fig6b.eps,width=48mm,angle=-90}
\vspace*{-4mm}
\end{center}
\caption{Multipolar static susceptibilities calculated at the RPA level for $U = 15$ and $J = 0.16U$. (a) The spin susceptibility
diverges near (0, 0, 0). (b) For the same set of interaction parameters, the quadrupolar and octupolar susceptibilities show no divergence.}
\label{mpi2}
\end{figure*}
Sixteen multipolar moments can be defined in
the $\Gamma_8$ subspace: one charge, three dipole,
five quadrupole and seven octupole moments, which are rank-0,
rank-1, rank-2 and rank-3 tensors, respectively. The dipole belongs to the $\Gamma^{-}_4$ irreducible representation, where
the $-$ sign denotes the breaking of time-reversal symmetry. Its components are given by the tensor products of Pauli
matrices $\hat{\tau}_0\hat{\sigma}_i$. The
quadrupole moments belonging to $\Gamma^{+}_3$ are $\hat{\tau}_x\hat{\sigma}_0$ and $\hat{\tau}_z\hat{\sigma}_0$, while
those belonging to the $\Gamma^{+}_5$ irreducible representation are expressed as $\hat{\tau}_i\hat{\sigma}_y$. The
octupolar moment with $\Gamma^{-}_2$ representation is $\hat{\tau}_y\hat{\sigma}_0$, whereas the $z$-components of
those belonging to $\Gamma^-_4$ and $\Gamma^-_5$ are $2\hat{\tau}_z\hat{\sigma}_z$ and $2\hat{\tau}_x\hat{\sigma}_z$,
respectively.
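These sixteen matrices $\hat{\tau}_p\hat{\sigma}_q$ form a trace-orthogonal basis of the operators acting on the $\Gamma_8$ quartet, which is what allows the susceptibilities below to be organised into a $16\times16$ matrix; a short numerical check (with the same assumed spin-outer Kronecker ordering as above) is
\begin{verbatim}
import numpy as np

p0 = np.eye(2, dtype=complex)
px = np.array([[0, 1], [1, 0]], dtype=complex)
py = np.array([[0, -1j], [1j, 0]], dtype=complex)
pz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [p0, px, py, pz]

# the sixteen multipole matrices tau_p sigma_q on the Gamma_8 quartet
M = [np.kron(sig, tau) for tau in paulis for sig in paulis]

gram = np.array([[np.trace(A.conj().T @ B) for B in M] for A in M]).real
print(np.allclose(gram, 4 * np.eye(16)))   # True: trace-orthogonal basis
\end{verbatim}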
In order to examine the multipolar ordering instabilities,
we calculate the susceptibilities, considering only the $z$-component whenever components along the
three coordinate axes are present; this is sufficient because of the cubic symmetry. The multipolar
susceptibilities are defined as\cite{takimoto1}
\begin{equation}
{\chi}^{pq,rs}({\bf q},i\omega_n)= \frac{1}{\beta} \int^{\beta}_0{d\tau e^{i \omega_{n}\tau}\langle T_\tau
[{\cal O}^{pq}_{{\bf q}}(\tau) {\cal O}^{rs}_{ -{\bf q}}(0)]\rangle},
\ee
where
\begin{equation}
{\cal O}^{pq}_{{\bf q}}= \sum_{\bf k} \sum_{\sigma \sigma^{\prime}} \sum_{\mu \mu^{\prime}} f^{\dagger}_{\mu \sigma}(\k+{\bf q})
\tau^{p}_{\mu \mu^{\prime}} \sigma^{q}_{\sigma \sigma^{\prime}} f_{\mu^{\prime} \sigma^{\prime}}(\k).
\ee
They can be expressed in terms of
\begin{eqnarray}
\chi^{\sigma_1 \sigma_2;\sigma_4 \sigma_3}_{\mu_1 \mu_2;\mu_4 \mu_3}({\bf q},i\omega_n)
&=& \frac{1}{\beta} \int^{\beta}_0 d\tau \sum_{\k,\k^{\prime}}\langle T_{\tau} f^{\dagger}_{\k+{\bf q} \mu_1 \sigma_1}(\tau)
f_{\k \mu_2 \sigma_2}(\tau)\nonumber\\
&\times&f^{\dagger}_{\k^{\prime}-{\bf q}^{\prime} \mu_3 \sigma_3}(0)f_{\k^{\prime} \mu_4 \sigma_4}(0)\rangle
\end{eqnarray}
which form a 16$\times$16 matrix. Thus, the dipole or spin susceptibility is given by
\begin{equation}
{\chi}^{0z,0z}({\bf q},i\omega_n)= \sum_{\sigma \sigma^{\prime}} \sum_{\mu \mu^{\prime}} \sigma \sigma^{\prime}\chi^{\sigma \sigma;\sigma^{\prime}\sigma^{\prime}}
_{\mu \mu;\mu^{\prime} \mu^{\prime}}({\bf q},i\omega_n),
\ee
where the $\sigma$ and $\mu$ in front of $\chi$ take the values +1 or -1, corresponding to the two spin or orbital degrees of freedom.
The various quadrupolar and octupolar susceptibilities are given by
\begin{eqnarray}
{\chi}^{x0,x0}({\bf q},i\omega_n) &=& \sum_{\sigma } \sum_{\mu \mu^{\prime}}
\chi^{\sigma \sigma;\sigma \sigma}_{\mu \bar{\mu};\mu^{\prime} \bar{\mu}^{\prime}}({\bf q},i\omega_n) \nonumber\\
{\chi}^{z0,z0}({\bf q},i\omega_n) &=& \sum_{\sigma }
\sum_{\mu \mu^{\prime}} \mu \mu^{\prime}\chi^{\sigma \sigma;\sigma\sigma}
_{\mu \mu;\mu^{\prime} \mu^{\prime}}({\bf q},i\omega_n) \nonumber\\
{\chi}^{yz,yz}({\bf q},i\omega_n) &=& -i^2\sum_{\sigma \sigma^{\prime}} \sum_{\mu \mu^{\prime}}
\sigma \sigma^{\prime}\mu \mu^{\prime}\chi^{\sigma \sigma;\sigma^{\prime}\sigma^{\prime}}
_{\mu \bar{\mu};\mu^{\prime} \bar{\mu}^{\prime}}({\bf q},i\omega_n) \nonumber\\
\end{eqnarray}
and
\begin{eqnarray}
{\chi}^{y0,y0}({\bf q},i\omega_n) &=& -i^2\sum_{\sigma } \sum_{\mu \mu^{\prime}} \mu \mu^{\prime}
\chi^{\sigma \sigma;\sigma \sigma}_{\mu \bar{\mu};\mu^{\prime} \bar{\mu}^{\prime}}({\bf q},i\omega_n) \nonumber\\
{\chi}^{xz,xz}({\bf q},i\omega_n) &=& \sum_{\sigma \sigma^{\prime}}
\sum_{\mu \mu^{\prime}} \sigma \sigma^{\prime} \chi^{\sigma \sigma;\sigma^{\prime}\sigma^{\prime}}
_{\mu \bar{\mu};\mu^{\prime} \bar{\mu}^{\prime}}({\bf q},i\omega_n) \nonumber\\
{\chi}^{zz,zz}({\bf q},i\omega_n) &=& \sum_{\sigma \sigma^{\prime}} \sum_{\mu \mu^{\prime}}
\sigma \sigma^{\prime}\mu \mu^{\prime}\chi^{\sigma \sigma;\sigma^{\prime}\sigma^{\prime}}
_{\mu \mu;\mu^{\prime} {\mu}^{\prime}}({\bf q},i\omega_n),
\end{eqnarray}
respectively.
Fig.~\ref{mpi1} shows the different static multipolar susceptibilities, with well-defined peaks for some and broad hump-like structures for
others. In particular, the spin susceptibility $\bar{\chi}^{0z,0z}$ is the most sharply peaked among all of them, however
at $\approx$ ${\bf Q}_3$ = (0, 0, 0). The quadrupolar susceptibility $\bar{\chi}^{yz,yz}$ corresponding to the AFQ order observed
in experiments, on the other hand, does show a peak near ${\bf Q}_1$. The other quadrupolar susceptibility $\bar{\chi}^{x0,x0}$
is peaked near ${\bf Q}_3$, while $\bar{\chi}^{z0,z0}$ has
a broad hump-like structure near ($\pi, 0, 0$) and a peak slightly away from ($\pi/2, \pi/2, \pi/2$).
We further note that $\bar{\chi}^{x0,x0} = \bar{\chi}^{xz,xz}$,
$\bar{\chi}^{y0,y0} = \bar{\chi}^{yz,yz}$ and $\bar{\chi}^{z0,z0} = \bar{\chi}^{zz,zz}$, as shown in Fig.~\ref{mpi1}(b).
\section{Multipolar susceptibilities in the presence of interaction}
In order to investigate the role of electron-electron correlations, we consider the standard onsite
Coulomb interaction terms given by
\begin{eqnarray}
\mathcal{H}_{int} &=& U \sum_{{\bf i},\mu} n_{{\bf i}\mu \uparrow} n_{{\bf i}\mu \downarrow} + (U' -
\frac{J}{2}) \sum_{{\bf i}, \mu<\nu} n_{{\bf i} \mu} n_{{\bf i} \nu} \nonumber \\
&-& 2 J \sum_{{\bf i}, \mu<\nu} {\bf{S_{{\bf i} \mu}}} \cdot {\bf{S_{{\bf i} \nu}}} + J \sum_{{\bf i}, \mu<\nu, \sigma}
f_{{\bf i} \mu \sigma}^{\dagger}f_{{\bf i} \mu \bar{\sigma}}^{\dagger}f_{{\bf i} \nu \bar{\sigma}}
f_{{\bf i} \nu \sigma}, \nonumber\\
\label{int}
\end{eqnarray}
in a manner similar to other correlated multiorbital systems. The first term represents the
intraorbital Coulomb interaction for each orbital. The second and third terms
represent the interorbital density-density interaction and the Hund's coupling between the two orbitals, respectively. The fourth term represents the
pair hopping, and the
condition $U^{\prime}$ = $U$ - $2J$ is imposed to ensure rotational invariance.
The multipolar susceptibilities in the presence of interactions can be obtained from the Dyson equation, yielding
\begin{equation}
\hat{\chi}_R({\bf q}, i\omega) = (\hat{{\bf 1}}-\hat{U}\hat{{\chi}}({\bf q}, i\omega))^{-1}\hat{{\chi}}({\bf q}, i\omega).
\ee
Here, $\hat{{\bf 1}}$ is a $16\times16$ identity matrix, whereas the interaction matrix is given by\cite{scherer}
\begin{eqnarray}
&&{U}^{\sigma_1 \sigma_2;\sigma_3 \sigma_4}_{\mu_1 \mu_2;\mu_3 \mu_4} \nonumber\\
&&= \left\{
\begin{array}{@{\,} l @{\,} c}
-U & (\mu_1=\mu_2=\mu_3=\mu_4, \sigma_1=\sigma_2\ne\sigma_3=\sigma_4)\\
-U^{\prime} & (\mu_1=\mu_2\ne\mu_3=\mu_4, \sigma_1=\sigma_2\ne\sigma_3=\sigma_4)\\
-J & (\mu_1=\mu_4\ne \mu_2=\mu_3,\sigma_1=\sigma_2\ne\sigma_3=\sigma_4)\\
-J^{\prime} & (\mu_1=\mu_3\ne \mu_2=\mu_4,\sigma_1=\sigma_2\ne\sigma_3=\sigma_4)\\
-(U-J^{\prime})& (\mu_1=\mu_2\ne\mu_3=\mu_4, \sigma_1=\sigma_2=\sigma_3=\sigma_4)\\
(U-J^{\prime}) & (\mu_1=\mu_4\ne\mu_2=\mu_3, \sigma_1=\sigma_2=\sigma_3=\sigma_4)\\
U & (\mu_1=\mu_2 = \mu_3=\mu_4,\sigma_1=\sigma_4\ne\sigma_2=\sigma_3)\\
U^{\prime} & (\mu_1=\mu_4 \ne \mu_2=\mu_3,\sigma_1=\sigma_4\ne\sigma_2=\sigma_3)\\
J & (\mu_1=\mu_2\ne \mu_3=\mu_4,\sigma_1=\sigma_4\ne\sigma_2=\sigma_3)\\
J^{\prime} & (\mu_1=\mu_3\ne \mu_2=\mu_4,\sigma_1=\sigma_4\ne\sigma_2=\sigma_3)\\
0 & (\mathrm{otherwise})
\end{array}\right. . \nonumber\\
\end{eqnarray}
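In practice the RPA step is a single matrix inversion once the bare susceptibility matrix is known. The sketch below transcribes the case table into an interaction matrix (taking $J'=J$) and applies the Dyson formula; the packing of the composite indices and the diagonal placeholder used for the bare $\hat{\chi}$ are illustrative conventions only, the instability being signalled when the largest eigenvalue of $\hat{U}\hat{\chi}$ reaches unity.
\begin{verbatim}
import numpy as np
from itertools import product

U, J = 15.0, 0.16 * 15.0
Up, Jp = U - 2 * J, J                    # U' = U - 2J; J' = J assumed

def U_elem(m1, m2, m3, m4, s1, s2, s3, s4):
    # direct transcription of the case table above (chained comparisons
    # encode conditions such as mu1 = mu2 != mu3 = mu4)
    if s1 == s2 != s3 == s4:
        if m1 == m2 == m3 == m4: return -U
        if m1 == m2 != m3 == m4: return -Up
        if m1 == m4 != m2 == m3: return -J
        if m1 == m3 != m2 == m4: return -Jp
    if s1 == s2 == s3 == s4:
        if m1 == m2 != m3 == m4: return -(U - Jp)
        if m1 == m4 != m2 == m3: return U - Jp
    if s1 == s4 != s2 == s3:
        if m1 == m2 == m3 == m4: return U
        if m1 == m4 != m2 == m3: return Up
        if m1 == m2 != m3 == m4: return J
        if m1 == m3 != m2 == m4: return Jp
    return 0.0

# pack (mu, sigma; mu', sigma') into a composite index 0..15 (illustrative convention)
idx = lambda m, s, mp, sp: ((m * 2 + s) * 2 + mp) * 2 + sp
Umat = np.zeros((16, 16))
for m1, s1, m2, s2, m3, s3, m4, s4 in product(range(2), repeat=8):
    Umat[idx(m1, s1, m2, s2), idx(m4, s4, m3, s3)] = U_elem(m1, m2, m3, m4, s1, s2, s3, s4)

chi0 = 0.01 * np.eye(16)                 # placeholder for the bare susceptibility matrix
lam = np.linalg.eigvals(Umat @ chi0)
print("largest real eigenvalue of U.chi0:", lam.real.max())   # divergence when it hits 1
chi_rpa = np.linalg.solve(np.eye(16) - Umat @ chi0, chi0)     # (1 - U chi0)^{-1} chi0
\end{verbatim}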
Fig.~\ref{mpi2} shows the multipolar static susceptibilities at the RPA level. As expected, the RPA
spin susceptibility requires the smallest critical interaction strength, $U = 15$ ($U$/$W$ $< 1/3$) with $J = 0.16U$,
to diverge. Interestingly, it diverges near ${\bf Q}_3$ instead of at the
AFM ordering wavevector ${\bf Q}_2$, which is not surprising
because the large DOS near $\Gamma$ also produces the peak near ${\bf Q}_3$ in the bare spin susceptibility.
Thus, the AFQ instability corresponding to the $\Gamma^+_5$ representation is absent in the model, despite the bare quadrupole susceptibility being peaked near
${\bf Q}_1$. However, we believe that the strong low-energy ferromagnetic fluctuations in the paramagnetic phase may have
important implications for the persistent ferromagnetic correlations observed in the various ordered phases by several experiments.\cite{demishev2,jang}
\section{Conclusions and discussions}
In conclusion, we have described a tight-binding model in the $\Gamma_8$ basis
that captures the salient features of
the Fermi surfaces along the high-symmetry planes as observed in the ARPES measurements. A large density of states is
obtained near the Fermi level, owing to the flatness of the bands close to $\Gamma$, which bears a remarkable similarity to the
hot spot observed in another ARPES experiment. The multipolar susceptibilities calculated with the
standard onsite Coulomb interactions, as in other multiorbital systems, show that it is the spin
susceptibility that exhibits the strongest diverging behavior. Moreover, it does so in the low-momentum region, implying
an underlying ferromagnetic instability.
It is clear that the nature of the instability obtained with the
realistic electronic structure is different from the actual order in CeB$_6$. However, it is important to note that some recent experiments have provided
evidence of strong ferromagnetic correlations in the ordered phases. For instance, there exists a magnetic
spin resonance in the AFQ phase, which has been attributed to FM correlations. Further, the most intense
spin-wave excitation modes have been observed at zero momentum instead of at the AFM ordering wavevector in the
INS measurements in the coexistence phase, and they continue to be present even in the
AFQ phase. A similar INS measurement in the paramagnetic phase is highly desirable to probe the existence of
ferromagnetic correlations there. So far, only an indirect indication in the form of the
hot spot observed by ARPES near $\Gamma$ is available. We believe that the
strong low-energy ferromagnetic fluctuations obtained within the two-orbital model with the
realistic electronic structure constitute an important step towards understanding the above-mentioned features. To explain the AFQ and other multipole orders,
it will perhaps be necessary to include local exchange terms involving the AFQ and other multipolar moments. Such an extension should be the subject of
future investigations aiming to describe the various complex ordering phenomena, as well as the associated unusual features, within
a single model.
We acknowledge the use of HPC clusters at HRI.
\section{Introduction and statement of the result}
Given a real number $y>1$, an integer $n$ is said to be $y$-friable if its greatest prime factor, denoted by $P^+(n)$, satisfies $P^+(n)\leq y$ with the
conventions $P^+(\pm1)=1$ and $P^+(0)=0$. Conversely, an integer $n$ is called $y$-sifted if its smallest prime factor, denoted by $P^-(n)$, satisfies $P^-(n)>y$
with the conventions $P^-(\pm1)=+\infty$ and $P^-(0)=0$. Due to the duality between sifted integers and friable integers, such integers occur in several places in number
theory and their distribution has been intensively studied (see \cite{HT93} and \cite{Gr08} for survey articles related to integers without large prime factors).
A theorem of Hildebrand~\cite{Hi86}, related to the number $\Psi(N,y)$ of $y$-friable integers smaller than $N$, asserts that, for any $\varepsilon>0$ and uniformly in
the domain
\begin{equation}
N\geq 3\quad\text{and}\quad 1\leq u \leq \frac{\log N}{(\log\log N)^{5/3+\varepsilon}},
\end{equation}
we have the asymptotic formula
\begin{equation}\label{Hildebrand formula}
\Psi\left(N,N^{1/u}\right)=N\rho(u)\left(1+O\left(\frac{u\log(u+1)}{\log N}\right)\right)
\end{equation}
where $\rho$ is the Dickman function, namely the unique solution to the delay differential equation
\begin{displaymath}
\left\{\begin{array}{ll}\rho(u)=1&\text{if }0\leq u\leq 1,\\
u\rho'(u)+\rho(u-1)=0&\text{if }u>1.\end{array}
\right.
\end{displaymath}
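For later numerical comparisons it is convenient to note that $\rho$ can be tabulated directly from this equation, for instance by integrating $\rho'(u)=-\rho(u-1)/u$ with the trapezoidal rule; the step size below is an arbitrary illustrative choice, and the exact value $\rho(2)=1-\log 2$ serves as a check.
\begin{verbatim}
import numpy as np

def dickman_table(umax, h=1e-3):
    # rho(u) = rho(1) - int_1^u rho(t-1)/t dt for u > 1, discretised on a uniform grid
    n = int(round(umax / h))
    grid = np.arange(n + 1) * h
    rho = np.ones(n + 1)                  # rho = 1 on [0, 1]
    lag = int(round(1.0 / h))             # index shift corresponding to u -> u - 1
    for i in range(lag, n):
        f_lo = rho[i - lag] / grid[i]
        f_hi = rho[i + 1 - lag] / grid[i + 1]
        rho[i + 1] = rho[i] - 0.5 * h * (f_lo + f_hi)
    return grid, rho

grid, rho = dickman_table(4.0)
step = grid[1] - grid[0]
print("rho(2) ~", rho[int(round(2 / step))], "; exact:", 1 - np.log(2))
print("rho(3) ~", rho[int(round(3 / step))])        # approximately 0.0486
\end{verbatim}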
Given $F\in\mathbf{Z}[X_1,\ldots,X_d]$ and $\mathcal{K}\subset\mathbf{R}^d$, the study of the cardinality
\begin{displaymath}
\Psi_F
\left(\mathcal{K}
,y\right)
:=\#\left\{\boldsymbol{n}\in \mathcal{K}\cap\mathbf{Z}^d
: P^+(F(\boldsymbol{n}))\leq y\right\}
\end{displaymath}
is an interesting question. In particular, the factorization algorithm Number Field Sieve (NFS)\footnote{The interested reader may find a description of this algorithm
in [\cite{CP05}, Chapter~6].}
rests on the assumption that the cardinality
$
\Psi_F
\left(\mathcal{K},y\right)
$
is sufficiently large for some small $y$, for $F\in\mathbf{Z}[X_1,X_2]$ and $\mathcal{K}\subset\mathbf{R}^2$ a sufficiently regular compact set.
Let
$F=F_1^{k_1}\cdots F_t^{k_t}$ be the decomposition of $F$ with $F_1,\ldots,F_t$ the distinct irreducible factors of $F$ and
$d_1,\ldots,d_t$ their respective degrees with $d_1\geq \ldots\geq d_t\geq1$. If we assume the events ``$F_i(\boldsymbol{n})$ is $y$-friable'' to be independent,
then (\ref{Hildebrand formula}) leads to the following conjecture:
\begin{equation}\label{conjecture friable binary form}
\Psi_F\left([0,N]^d,N^{1/u}\right)
\underset{N\rightarrow+\infty}{\sim} N^d\rho(d_1u)\cdots\rho(d_tu)
\end{equation}
for any fixed $u>0$.
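This heuristic is already visible numerically for the three pairwise affinely independent linear forms $n_1$, $n_2$ and $n_1+n_2$ on the triangle $n_1+n_2\leq N$: counting the triples whose three values are all $N^{1/2}$-friable and comparing with $\tfrac12 N^2\rho(2)^3$ gives the right order of magnitude, although at the small illustrative range used below the count still exceeds the main term noticeably, reflecting the lower-order terms in (\ref{Hildebrand formula}).
\begin{verbatim}
import numpy as np

def gpf_sieve(n):
    # gpf[m] = largest prime factor of m (gpf[1] = 1)
    gpf = np.arange(n + 1)
    for p in range(2, n + 1):
        if gpf[p] == p:          # p untouched so far, hence prime
            gpf[p::p] = p        # larger primes overwrite later, leaving P^+(m)
    return gpf

N, u = 3000, 2.0
y = N ** (1.0 / u)
rho_u = 1 - np.log(u)            # rho(u) = 1 - log u for 1 <= u <= 2
friable = gpf_sieve(N) <= y

count = 0
for n1 in range(1, N):
    if friable[n1]:
        # n2 runs over 1 <= n2 <= N - n1, so that n1, n2, n1 + n2 all lie in [0, N]
        count += np.count_nonzero(friable[1:N - n1 + 1] & friable[n1 + 1:N + 1])

print("friable triples:", count, " main term:", round(0.5 * N**2 * rho_u**3))
\end{verbatim}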
When $d=2$, the author proved the validity of (\ref{conjecture friable binary form}) for an irreducible cubic form $F$ or for $F=F_1F_2$ where $F_1$ is a linear form
and $F_2$ is an irreducible quadratic form~\cite{Lacub, La15}. For general binary forms $F$, such a formula seems beyond reach but there exist some partial results
for estimating $\Psi_F\left([0,N]^2,N^{1/u}\right)$ when $u$ is sufficiently small.
In \cite{BBDT12}, Balog, Blomer, Dartyge and Tenenbaum proved the existence of a constant $\alpha_F> 1/d_1$ such that,
for any $\varepsilon>0$ and uniformly for $N\geq2$, we have
\begin{equation}\label{lower bound BBDT}
\Psi_F\left([0,N]^2,N^{1/\alpha_F+\varepsilon}\right)\gg_{\varepsilon} N^2.
\end{equation}
Let $d\geq2$ and $t\geq1$ be integers. In this paper, we focus on forms $F=F_1\cdots F_t$ where $F_1,\ldots ,F_t$ are
affine-linear forms in $\mathbf{Z}[X_1,\ldots, X_d]$. The cases $d=2$ and $t\in\{1,2\}$ can be deduced from the results of \cite{FT91} on
the distribution of friable integers in arithmetic progressions. The case $d=2$ and $t=3$ was essentially considered in a succession of articles by
various authors~(\cite{Br99,LS12,BG14,Dr13,Dr15,Haun}).
In [\cite{Haun}, Corollary~1], Harper used the Hardy-Littlewood circle method to show the existence of $c>0$ such that, uniformly for $
N\geq2$ and $y\geq(\log N)^c$, we have
\begin{align*}
\Psi_{X_1X_2(X_1+X_2)}\left(\mathcal{K}(N),y\right)
\underset{N\rightarrow+\infty}{\sim}\mathfrak{S}_0(\alpha,y)\mathfrak{S}_1(\alpha)\frac{\Psi\left(N,y\right)^3}{N}
\end{align*}
where $\mathcal{K}(N)=\left\{1\leq n_1,n_2\leq N:n_1+n_2\leq N\right\}$, $\alpha:=\alpha(N,y)$ denotes the unique real solution of the equation
\begin{displaymath}
\sum_{p\leq y}\frac{\log p}{p^{\alpha}-1}=\log N,
\end{displaymath}
\begin{displaymath}
\mathfrak{S}_0(\alpha,y):=\prod_{p\leq y}
\left(1+
\frac{(p-p^{\alpha})^3}{p(p-1)^2(p^{3\alpha-1}-1)}\right)
\prod_{p>y}\left(1-\frac{1}{(p-1)^2}\right)
\end{displaymath}
and\begin{displaymath}
\mathfrak{S}_1(\alpha):=\int_0^{1}\int_0^{1-t_1}\alpha^3
(t_1t_2(t_1+t_2))^{\alpha-1}\mathop{}\mathopen{}\mathrm{d} t_2\mathop{}\mathopen{}\mathrm{d} t_1.
\end{displaymath}
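The saddle point $\alpha(N,y)$ appearing here is easily computed in practice, since the left-hand side of its defining equation is decreasing in $\alpha$; a short bisection sketch (with illustrative numerical ranges) is
\begin{verbatim}
import numpy as np

def primes_up_to(y):
    sieve = np.ones(int(y) + 1, dtype=bool)
    sieve[:2] = False
    for p in range(2, int(np.sqrt(y)) + 1):
        if sieve[p]:
            sieve[p * p::p] = False
    return np.nonzero(sieve)[0].astype(float)

def alpha(N, y, iters=80):
    # solve sum_{p <= y} log p / (p^a - 1) = log N, the sum being decreasing in a
    p = primes_up_to(y)
    lhs = lambda a: np.sum(np.log(p) / (p ** a - 1.0))
    lo, hi = 1e-8, 10.0
    while lhs(hi) > np.log(N):      # make sure the root is bracketed
        hi *= 2
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if lhs(mid) > np.log(N) else (lo, mid)
    return 0.5 * (lo + hi)

print("alpha(10^6, 10^3) =", round(alpha(10**6, 10**3), 4))
\end{verbatim}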
The celebrated work of Green, Tao and Ziegler~\cite{GT10,GT12a, GT12b, GTZ12} provides a scheme - the so-called nilpotent Hardy-Littlewood method - to
get asymptotic estimations of the average value
\begin{equation}\label{mean value h}
M_{F_1\cdots F_t}(\mathcal{K};h):=\sum_{\boldsymbol{n}\in\mathcal{K}\cap\mathbf{Z}^d}h(F_1(\boldsymbol{n}))\cdots h(F_t(\boldsymbol{n}))
\end{equation}
for any system of affine-linear forms $F_1,\ldots, F_t\in\mathbf{Z}[X_1,\ldots, X_d]$ such that no two forms are affinely related and for any arithmetic function $h$
with a quasi-random behaviour.
In recent years, this approach has been applied successfully to several functions, including the von Mangoldt function $\Lambda$ (giving a partial resolution
of the generalized
Hardy-Littlewood conjecture \cite{GT10}), the Liouville function $\lambda$ and the M\"obius function $\mu$~\cite{GT10}, the divisor function $\tau$~\cite{Ma12a},
the function $r_G$ which counts the representations by a binary quadratic form $G$ \cite{Ma12b,Ma13} and, very recently, any multiplicative function that
takes values in the unit disk~\cite{FHun}.
In this work, we study how the nilpotent Hardy-Littlewood method may be applied to obtain an asymptotic formula for (\ref{mean value h}) when $h=1_{S\left(N^{1/u}\right)}$
is the indicator function of the $N^{1/u}$-friable integers and $u\geq1$ is bounded. Such a question is not covered
by the work of Frantzikinakis and Host~\cite{FHun}, since $h$ depends on $N$ in the present case.
The main result is the following theorem.
\begin{thm}\label{main theorem n*1} Let $N,L,d,t$ and $u_0$ be some positive integers. Suppose that $F=(F_1,\ldots,F_t):\mathbf{Z}^d\rightarrow\mathbf{Z}^t$
is a system of affine-linear forms such that any two forms $F_i$ and $F_j$ are affinely independent over $\mathbf{Q}$ and the non-constant coefficients of the $F_i$ are bounded by $L$.
Then, for any convex body $\mathcal{K}\subset[-N,N]^d$
such that
$F(\mathcal{K})\subset [0,N]^t$ and for any $ u_1,\ldots, u_t\in[0, u_0]$, we have
\begin{align*}
\sum_{\boldsymbol{n}\in\mathcal{K}\cap\mathbf{Z}^d}1_{S\left(N^{1/u_1}\right)}(F_1(\boldsymbol{n}))\dots 1_{S\left(N^{1/u_t}\right)}(F_t(\boldsymbol{n}))=&
\mathrm{Vol}(\mathcal{K})\prod_{i=1}^t\rho(u_i)
+o(N^d)
\end{align*}
where the implicit constant depends only on $t,d,L$ and $u_0$ and $S(y)$ denotes the set of $y$-friable integers.
\end{thm}
Compared with the result of Balog \textit{et al.}~\cite{BBDT12}, we obtain essentially two major improvements in the case of linear forms:
\begin{itemize}
\item Theorem \ref{main theorem n*1} gives an asymptotic equivalent which is consistent with the conjectural formula (\ref{conjecture friable binary form}),
whereas (\ref{lower bound BBDT}) only gives a lower bound;
\item when $t\geq 4$, formula (\ref{lower bound BBDT}) is valid with $\alpha_F=1+\frac{2}{t-2}$, while Theorem~\ref{main theorem n*1} shows that we
can choose any positive real number for $\alpha_F$.
\end{itemize}
\textbf{Outline and perspectives.} In its primitive form, the nilpotent Hardy-Littlewood method is concerned with arithmetic functions $h$ which
are equidistributed in residue classes of small moduli and supported on a set of integers with positive asymptotic density.
For such functions, the problem is reduced to showing that $h$ is suitably Gowers-uniform in order to deduce asymptotics for $M_{F_1\cdots F_t}(\mathcal{K};h)$
(see the description of the method in Section~\ref{section descriptif}).
In many applications, the function $h$ may not satisfy the two previous conditions.
The method developed in \cite{GT10, Ma12a, Ma12b} to overcome these difficulties consists of two steps:
\begin{itemize}
\item the decomposition of $h$ into a sum of functions which are equidistributed in residue classes of small moduli ($W$-trick, see [\cite{GT10}, Section~5]),
\item the construction of a pseudorandom measure $\nu$ dominating $h$, with a view to applying a transference principle (see [\cite{GT10}, Section~10]).
\end{itemize}
For bounded $u\geq1$, the set of $N^{1/u}$-friable integers
has positive density $\rho(u)$ and is well distributed in arithmetic progressions
of small common difference (see the work of Fouvry and Tenenbaum~\cite{FT91}).
In particular, the problem may be handled directly by using the nilpotent Hardy-Littlewood method and showing
that $h$ has small Gowers-uniformity norms.
This may be viewed as an application of the impressive results of Matthiesen~\cite{Mattun}
related to the orthogonality between multiplicative functions and nilsequences.
In Section~\ref{section preuve} of the present paper, we develop a more direct and simple approach to study
the linear correlations of the friable integers.
It would be interesting to prove formula~(\ref{conjecture friable binary form}) for unbounded parameters $u$.
In this case, the sequence of friable integers is too sparse to apply the Green-Tao-Ziegler machinery directly.
A major step towards this generalization would be to construct
a pseudorandom majorant for $1_{S\left(N^{1/u}\right)}$.
\section{A brief description of the nilpotent Hardy-Littlewood method}\label{section descriptif}
In this section, we recall two important ingredients of the nilpotent Hardy-Littlewood method.
The generalized von Neumann theorem, due to Gowers~\cite{Go01} and Green-Tao~\cite{GT10}, reduces the estimation of
$M_{F_1\ldots F_t}(\mathcal{K};h)$ defined in (\ref{mean value h}) to the study of the Gowers
uniformity norm $\|h\|_{U^{t-1}[N]}$ (see [\cite{GT10}, Appendix~B]
for a definition of the Gowers norms).
\begin{thmext}[\cite{GT10}, Proposition~7.1] \label{neumannn}
Let $t,d,L\geq1$ be some integers. Suppose that $h_1,\ldots, h_t : [0,N]\to\mathbf{R}$ are functions bounded by $1$ and that
$ F=(F_1,\ldots,F_t):\mathbf{Z}^d\rightarrow\mathbf{Z}^t$ is a system of affine-linear forms whose non-constant coefficients are bounded by $L$
and such that any two forms $F_i$ and $F_j$ are affinely independent over $\mathbf{Q}$. Let $\mathcal{K} \subset [-N,N]^d$ be a convex body such that
$F(\mathcal{K}) \subset [0,N]^t$. Suppose also that
\begin{equation*} \min_{1 \leq i \leq t} \left\| h_i\right\|_{U^{t-1}[N]} \leq \delta
\end{equation*}
for some $\delta > 0$. Then we have
\begin{equation*}
\sum_{\boldsymbol{n} \in \mathcal{K}} \prod_{i=1}^t h_i(F_i(\boldsymbol{n})) =
o_{\delta}(N^d) + \kappa(\delta)N^{d}
\end{equation*}
where $\kappa(\delta)\to0$ as $\delta\to 0$.
\end{thmext}
\begin{proof}
Let $(\boldsymbol{e}_1,\ldots,\boldsymbol{e}_d)$ be
the canonical basis of $\mathbf{R}^d$ and fix $\boldsymbol{n}=n_1\boldsymbol{e}_1+ \cdots + n_d\boldsymbol{e}_d \in \mathcal{K}$. Then we have
$$\left|F_i(\mathbf{0})\right|\leq \sum_{j=1}^d \left|n_j\right|L+\left|F_i(\boldsymbol{n})\right|
\leq (dL+1)N$$
because $F(\boldsymbol{n})\in[0,N]^t$ and $\mathcal{K}\subset[-N,N]^d$.
With the definition (1.1) of the norm
$\|\cdot\|_N$ of \cite{GT10}, we therefore have $\|F\|_N\ll_{d,t}L$, and
Proposition~7.1 of \cite{GT10} can be used to get the result.
\end{proof}
The inverse theorem for the Gowers norms, proved by Green, Tao and Ziegler~\cite{GTZ12}, exhibits the link between linear correlations and polynomial nilsequences.
The reader may refer to \cite{GT12b} for definitions and properties of filtered nilmanifolds and polynomial nilsequences.
\begin{thmext}[\cite{GTZ12}, Theorem~1.3]\label{Gowers inverse}
Let $s\geq0$ be an integer and let $\delta\in]0,1]$. Then there exists a finite collection $\mathcal{M}_{s,\delta}$ of $s$-step nilmanifolds
$G/\Gamma
$, each equipped with some smooth Riemannian metric $d_{G/\Gamma}$, as well as positive constants $C(s,\delta)$ and $c(s,\delta)$ with the following property.
Whenever $N\geq1$ and $h : [0,N]\cap\mathbf{Z} \rightarrow [-1,1]$ is a function such that
\begin{displaymath} \left\|h \right\|_{U^{s+1}[N]}
\geq \delta,
\end{displaymath}
there exists a filtered nilmanifold $G/\Gamma \in \mathcal{M}_{s,\delta}$, a
function $F:G/\Gamma\rightarrow\mathbf{C}$ bounded in magnitude by $1$ and with Lipschitz constant at most $C(s,\delta)$
with respect to the metric $d_{G/\Gamma}$ and a polynomial nilsequence $g:\mathbf{Z}\rightarrow G$ such that
\begin{displaymath} \left|
\sum_{0\leq n \leq N} h(n) F\left(g(n) \Gamma\right)\right|
\geq c(s,\delta) N.
\end{displaymath}
\end{thmext}
We now describe the application of the Green-Tao method to the functions $1_{S\left(N^{1/u_i}\right)}$. For each friability parameter $N^{1/u_i}$,
we consider the balanced function
\begin{displaymath}
\begin{array}{cccc}h_i:&\mathbf{N}&\rightarrow&[-1,1]\\
&n&\mapsto&1_{S\left(N^{1/u_i}\right)}(n)-\rho(u_i).
\end{array}
\end{displaymath}
By writing $1_{S\left(N^{1/u_i}\right)}(n)=h_i(n)+\rho(u_i)$ and using the bound $\rho(u_i)\leq1$, it follows that
\begin{multline*}
\left|\Psi_{F_1\cdots F_t}\left(\mathcal{K}
,N^{1/u}\right)
-\mathrm{Vol}(\mathcal{K})\prod_{i=1}^t\rho(u_i)\right|
\leq\sum_{\substack{I\subset\{1,\ldots, t\}\\I\neq\emptyset}} \left|\sum_{\boldsymbol{n} \in\mathcal{K}\cap\mathbf{Z}^d
}\prod_{i\in I}h_i(F_i(\boldsymbol{n}))\right|\\+O_d\left(N^{d-1}\right).
\end{multline*}
In view of the inverse theorem, the problem is reduced to proving that, for any $i\in\{1,\ldots, t\}$, the function $h_i$ does not correlate with nilsequences,
namely that the upper bound
\begin{displaymath}
\sum_{n\leq N}h_i(n)F(g(n)\Gamma) =o\left(N\right)
\end{displaymath}
holds for any $(t-2)$-step nilsequence $F(g(n)\Gamma)$.
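In the simplest case $t=3$ the relevant norm is $\|h_i\|_{U^{2}[N]}$, which is governed by the Fourier coefficients of $h_i$: over the cyclic group $\mathbf{Z}_M$ one has $\|f\|_{U^2(\mathbf{Z}_M)}^4=\sum_{r}|\hat f(r)|^4$ with $\hat f(r)=M^{-1}\sum_x f(x)e(-rx/M)$. The following sketch evaluates this cyclic proxy (rather than the interval norm $U^2[N]$, and for illustrative values of $N$ and $u$) for the balanced function of the friable indicator, and compares it with the same quantity for the raw indicator, which is not small.
\begin{verbatim}
import numpy as np

N, u = 200000, 2.0
y = N ** (1.0 / u)

gpf = np.arange(N)                      # gpf[m] -> largest prime factor of m
for p in range(2, N):
    if gpf[p] == p:
        gpf[p::p] = p
friable = (gpf <= y)
friable[0] = False                      # ignore n = 0

h = friable.astype(float) - (1 - np.log(u))   # balanced function 1_S - rho(2)
h[0] = 0.0

def u2_proxy(f):
    F = np.fft.fft(f)
    return (np.sum(np.abs(F) ** 4)) ** 0.25 / len(f)

print("U^2 proxy, balanced function:", u2_proxy(h))
print("U^2 proxy, raw indicator:    ", u2_proxy(friable.astype(float)))
\end{verbatim}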
\section{Non-correlation with nilsequences}\label{section preuve}
Let $s\geq0, u_0\geq1$ be some integers and
let $(G/\Gamma,G_{\mathbf{N}})$ be a filtered nilmanifold of degree $s$. In this section, we show that for any $1$-bounded
Lipschitz function $F:G/\Gamma\rightarrow\mathbf{C}$, any polynomial nilsequence $g:\mathbf{Z}\rightarrow G$ adapted
to $G_{\mathbf{N}}$, $1\leq u\leq u_0$ and $N\geq1$, we have
\begin{equation}\label{formula to show n*1}
\sum_{n\leq N}h(n)F(g(n)\Gamma) =o\left(N\left(1+\|F\|_{\textrm{Lip},d_{G/\Gamma}
}\right)\right)
\end{equation}
where $h(n):=1_{S\left(N^{1/u}\right)}(n)-\rho(u)$
and the implicit term $o(\cdot)$ only depends on $G/\Gamma$ and $u_0$. In view of Theorems~\ref{neumannn}
and \ref{Gowers inverse}, this will imply Theorem~\ref{main theorem n*1}.
In \cite{Mattun}, Matthiesen develops a method to bound the correlations of a multiplicative function
with polynomial nilsequences, under some density and growth conditions and a hypothesis of control of the second moment.
The approach mixes the Montgomery-Vaughan method~\cite{MV77}, the factorisation theorem for polynomial sequences of Green-Tao~\cite{GT12b}
and the fact that the $W$-tricked von Mangoldt function is orthogonal to nilsequences~\cite{GT10}.
The main result~[\cite{Mattun}, Theorem 5.1] may be applied directly to
the multiplicative function $1_{S\left(N^{1/u}\right)}(n)$ to get (\ref{formula to show n*1}), once we have checked that it satisfies the required assumptions.
In the case of the indicator of the friable integers and for any $E\geq1$,
the various hypotheses which define the set $\mathcal{F}_1(E)$ of \cite{Mattun}
can essentially be deduced from the estimate
\begin{displaymath}\sum_{\substack{n\leq N\\ n\equiv a\pmod{q}}}1_{S\left(N^{1/u}\right)}(n)\underset{N\to\infty}{\sim} \frac{N}{q}\rho(u)
\end{displaymath}
which holds uniformly for $1\leq a,q\leq (\log N)^{E}$ (see \cite{FT91}).
In the rest of this paper, we give a direct and simple method to establish (\ref{formula to show n*1}),
with a different focus from
\cite{Mattun}.
The starting point
is the M\"obius inversion formula in the following form
\begin{displaymath}1_{S\left(N^{1/u}\right)}(n)=\sum_{\substack{P^-(k)>N^{1/u}}}\mu(k)1_{k|n}.
\end{displaymath}
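This is a Legendre-type inversion: for fixed $n$, the divisors $k$ of $n$ that are free of prime factors $\leq N^{1/u}$ are exactly the divisors of the $N^{1/u}$-sifted part of $n$, so the sum vanishes unless that part is trivial, i.e. unless $n$ is $N^{1/u}$-friable. A brute-force check over a small illustrative range:
\begin{verbatim}
def prime_factors(n):
    fs, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            fs.add(d)
            n //= d
        d += 1
    if n > 1:
        fs.add(n)
    return fs

def mu(n):
    count, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0            # square factor
            count += 1
        d += 1
    if n > 1:
        count += 1
    return -1 if count % 2 else 1

y = 7
for n in range(1, 200):
    rhs = sum(mu(k) for k in range(1, n + 1)
              if n % k == 0 and all(p > y for p in prime_factors(k)))
    lhs = all(p <= y for p in prime_factors(n))   # n is y-friable
    assert int(lhs) == rhs
print("identity checked for 1 <= n < 200 with y = 7")
\end{verbatim}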
We approximate the indicator $1_{k|n}$ by its mean value $\frac{1}{k}$ for $k\leq N^{1-\tau}$ where the parameter $\tau=o(1)\in]1/\log N,1[$
will be chosen later. One can write\begin{align*}
\sum_{1\leq n\leq N}
\left(1_{S\left(N^{1/u}\right)}(n)-\rho(u)\right)F(g(n)\Gamma)
=\Sigma_1(F,g)+\Sigma_2(F,g)
\end{align*}
where
\begin{align*}
\Sigma_1(F,g):=
\sum_{1\leq n\leq N}h_{\tau}(n)
F(g(n)\Gamma)\text{ with }h_{\tau}(n)=\sum_{\substack{k\leq N^{1-\tau}\\P^-(k)>N^{1/u}}}\mu(k)\left(1_{k|n}-\frac{1}{k}\right)
\end{align*}
and
\begin{align*}
\Sigma_2(F,g)
:=\sum_{1\leq n\leq N}\left(\sum_{\substack{k> N^{1-\tau}\\P^-(k)>N^{1/u}}}\mu(k)1_{k|n}+
\sum_{\substack{k\leq N^{1-\tau}\\P^-(k)>N^{1/u}}}\frac{\mu(k)}{k}-\rho(u)\right)F(g(n)\Gamma).
\end{align*}
In the definition of the function $h_{\tau}$, the summation is restricted over the divisors $k\leq N^{1-\tau}$
since the contribution from the interval
$N^{1-\tau}<k\leq N$ is negligible (see (\ref{borne sup sigma2}) below).
First, we focus on $\Sigma_{2}(F,g)$.
In view of the following series of estimations, valid whenever $\tau u<1$,
\begin{align}
\nonumber\sum_{\substack{N^{1-\tau}<k\leq N\\P^-(k)>N^{1/u}}}\frac{\mu^2(k)}{k}&\ll
\sum_{j\geq1}\sum_{N^{1/u}<p_2<\cdots<p_j\leq N}\frac{1}{p_2\cdots p_j}
\sum_{\max\left(N^{1/u},\frac{N^{1-\tau}}{p_2\cdots p_j}\right)\leq p_1\leq \frac{N}{p_2\cdots p_j}}\frac{1}{p_1}\\
\nonumber&\ll\tau u \sum_{j\geq1}\frac{1}{(j-1)!}\left(\sum_{N^{1/u}<p\leq N}\frac{1}{p}\right)^{j-1}\\
\label{mu2}&\ll \tau u\sum_{j\geq1}\frac{1}{(j-1)!}\left(\log(u)+O\left(1\right)\right)^{j-1}\ll\tau u^2,
\end{align}
we have the upper bound
\begin{align}\label{Sigma23}
\sum_{1\leq n\leq N}\sum_{\substack{k> N^{1-\tau}\\P^-(k)>N^{1/u}}}\mu^2(k)1_{k|n}\ll \tau u^2N.
\end{align}
On the other hand, one can handle the sum over $k\leq N^{1-\tau}$ in $\Sigma_2(F,g)$ by using [\cite{LT15}, Formula~(1.5)] which states that the formula
\begin{equation}\sum_{\substack{k\leq N\\P^-(k)>N^{1/u}}}\frac{\mu(k)}{k}=\rho(u)\left(1+O\left(\frac{u\log(u+1)}{\log N}\right)\right)\label{Sigma21} \end{equation}
holds for any $\varepsilon>0$ and uniformly for $N\geq2$ and $1\leq u\leq (\log N)^{3/8-\varepsilon}$.
Finally,
(\ref{Sigma23}) and (\ref{Sigma21}) yield
that
\begin{align}
\Sigma_2(F,g)
\ll&uN\left(\tau u +\frac{\rho(u)\log(u+1)
}{\log N}\right)\label{borne sup sigma2}.\end{align}
In view of the foregoing, it remains to obtain an upper bound for $\Sigma_1(F,g)$. This is the subject of the following proposition.
\begin{prop}\label{estimation Sigma1}Let $m,s\geq1$ be some integers and let $A>0$ be a real number. There exists a constant $c(m,s,A)>0$
with the following property. Whenever $Q,N\geq2$ are integers, $\tau\in]0,1/2[$ and $u\geq1$ are such that
$\min(N^{\tau},N^{1/u})\geq (\log N)^{c(m,s,A)}$, $(G/\Gamma,G_{\mathbf{N}})$ is a filtered nilmanifold of degree $s$ and dimension $m$, $\mathcal{X}$ is a
$Q$-rational Mal'cev
basis\footnote{The notion of $Q$-rational Mal'cev basis is introduced in [\cite{GT12b}, Definitions~2.1 and 2.4]
as a specific basis of the Lie algebra $\mathfrak{g}$ of $G$.} of $(G/\Gamma,G_{\mathbf{N}})$, $g:\mathbf{Z}\rightarrow G/\Gamma$
is a polynomial nilsequence adapted to $G_{\mathbf{N}}$ and $F:G/\Gamma\rightarrow[-1,1]$ is a Lipschitz function, then we have
\begin{equation}\label{formula Sigma1}
\sum_{1\leq n\leq N}h_{\tau}(n)F(g(n)\Gamma)\leq N Q^{c(m,s,A)}\left(1+\|F\|_{\textrm{Lip},\mathcal{X}}\right)2^u(\log N)^{-A}.
\end{equation}
\end{prop}
Recall that the smooth Riemannian metric $d_{G/\Gamma}$ of Theorem~\ref{Gowers inverse} is equivalent to the metric $d_{\mathcal{X}}$ (see the fourth
footnote and Definition~2.2 of \cite{GT12b}). With the choice $\tau=\frac{(\log\log N)^{1+\varepsilon}}{\log N}$, it follows from the estimates
(\ref{borne sup sigma2}) and (\ref{formula Sigma1}) that
the upper bound
\begin{displaymath} \sum_{n\leq N}h(n)F(g(n)\Gamma) =o\left(N\rho(u)\left(1+\|F\|_{\textrm{Lip},d_{G/\Gamma}
}\right)\right)\end{displaymath}
holds for any $\varepsilon>0$ and uniformly for $1\leq u\leq (\log\log N)^{1-\varepsilon}$.
This implies (\ref{formula to show n*1}) since $1\leq u\leq u_0$ is contained in this region for any $u_0$ which does not depend on $N$.
The rest of the article is devoted to the proof of Proposition~\ref{estimation Sigma1}.
The argument essentially follows the proofs of [\cite{GT12a}, Theorem~1.1] and [\cite{Ma12a}, Theorem~9.1], and we only outline the major differences.
A key point in the proof consists in reducing the problem to establishing formula (\ref{formula Sigma1})
in the case of a totally equidistributed polynomial nilsequence $g$, i.e. one such that $|P|^{-1}\sum_{n\in P}F(g(n)\Gamma)$ tends to $\int_{G/\Gamma}F$ for
subprogressions $P$ with $|P|\rightarrow +\infty$.
After this reduction, it will be possible to use the following analogue of [\cite{GT12a}, Proposition~2.1] and
[\cite{Ma12a}, Proposition~9.2].
\begin{prop}\label{non correlation equidistributed}
Let $m,s$ be some positive integers. There exist some constants $c_0(m,s),c_1(m,s)>0$ with the following property.
Whenever $Q\geq2$, $N\geq2$ and $\delta\in]0,1/2[$ such that $\delta^{-c_0(m,s)}\leq N^{\tau}$, $P\subset\{1,\ldots,N\}$ is an arithmetic progression
of size at least $N/Q$, $(G/\Gamma,G_{\mathbf{N}})$ is a filtered nilmanifold of degree $s$ and dimension $m$, $\mathcal{X}$ is a $Q$-rational Mal'cev basis
of $(G/\Gamma,G_{\mathbf{N}})$, $g:\mathbf{Z}\rightarrow G/\Gamma$ is a polynomial
and $\delta$-totally equidistributed nilsequence\footnote{A sequence $\left(g(n)\Gamma\right)_{n\in\{1,\ldots,N\}}$ is $\delta$-totally equidistributed if we have
$$\left|\frac{1}{|P|}\sum_{n\in P}F(g(n)\Gamma)\right|\leq \delta\|F\|$$ for all Lipschitz function $F:G/\Gamma\rightarrow\mathbf{C}$ with
$\int_{G/\Gamma}F=0$ and all arithmetic progressions $P\subset\{1,\ldots,N\}$ of size at least $\delta N$.} adapted to $G_{\mathbf{N}}$ and
$F:G/\Gamma\rightarrow[-1,1]$ is a Lipschitz function such that $\int_{G/\Gamma}F=0$, we have
\begin{equation*}
\left|\sum_{n\leq N}h_{\tau}(n)1_P(n)F(g(n)\Gamma)\right|\ll \delta^{c_1(m,s)}\|F\|_{\textrm{Lip},\mathcal{X}}QN \left(2^u+\log N\right).
\end{equation*}
\end{prop}
\begin{proof}[Proof that Proposition~\ref{non correlation equidistributed} implies Proposition~\ref{estimation Sigma1}]
Following some ideas of \cite{GT12a}, we can assume, without loss of generality, that
$\|F\|_{\textrm{Lip},\mathcal{X}}=1$ and $Q\leq\log N$. Let $B>0$ be a parameter to be specified at the end of the proof. Applying Theorem~1.19 of \cite{GT12b},
there exists an integer $M$ satisfying $\log N\leq M\leq (\log N)^{c(m,s,B)}$
such that we can write the decomposition $g=\varepsilon g'\gamma$ where \begin{enumerate}
\item $\varepsilon\in\textrm{poly}(\mathbf{Z},G_{\mathbf{N}})$ is $(M,N)$-smooth (see [\cite{GT12b}, Definition~1.18]),
\item $g'\in\textrm{poly}(\mathbf{Z},G_{\mathbf{N}})$ takes values in a rational subgroup $G'\subseteq G$ with Mal'cev basis $\mathcal{X}'$ and
$(g'(n))_{n\leq N}$ is $M^{-B}$-totally
equidistributed in $G'/(G'\cap\Gamma)$ for the metric $d_{\mathcal{X}}$ (see [\cite{GT12b}, Definition~1.10]),
\item $\gamma\in\textrm{poly}(\mathbf{Z},G_{\mathbf{N}})$ is periodic of period $q\leq M$ and $\gamma(n)$ is $M$-rational for any $n\in\mathbf{Z}$
(see [\cite{GT12b}, Definition~1.17]).
\end{enumerate}
Next, we reproduce the arguments of Green and Tao based on partitioning and pigeonholing, and we use the periodicity and smoothness properties
of $\gamma$ and $\varepsilon$.
In this way, the problem is reduced to showing that
\begin{equation}\label{intermediary estimat Sigma1}
\left|\sum_{1\leq n\leq N}h_{\tau}(n)1_{P}(n)F'(g'(n)\Gamma')\right|\ll 2^{u}N/(M^2(\log N)^{2A})
\end{equation}
where $P$ is a subprogression such that
$|P|\geq \frac{N}{2M^2(\log N)^A}$, $(G'/\Gamma',G'_{\mathbf{N}})$ is a
$m$-dimensional nilmanifold of degree $s$ with $M^{C_1(m,s)}$-rational Mal'cev basis
$\mathcal{X}'$, $F':G'/\Gamma'\rightarrow[-1,1]$ is a Lipschitz function such that $\|F'\|_{\textrm{Lip},\mathcal{X}'}\leq M^{C_1(m,s)}$
and $g'\in\textrm{poly}(\mathbf{Z},G'_{\mathbf{N}})$ is $M^{-C_2(m,s)B+C_1(m,s)}$-totally equidistributed,
for some constants $C_1(m,s),C_2(m,s)>0$.
If we suppose that $\int_{G'/\Gamma'}F'=0$, then we can apply Proposition~\ref{non correlation equidistributed}
to the sequence $g'$,
with $M^{C_1(m,s)}$ (resp. $M^{-C_2(m,s)B+C_1(m,s)}$) as parameter of rationality (resp. totally equidistribution).
Taking $B$,
$C_1(m,s)$ and $c(m,s,A)$ sufficiently large, the hypothesis on the size of $P$ and $\delta $ are satisfied and we get
(\ref{intermediary estimat Sigma1}).
We can reduce to this last case by writing $F'=(F'-\int_{G'/\Gamma'}F')+\int_{G'/\Gamma'}F'$.
Indeed, $\int_{G'/\Gamma'}F'$ is bounded by $1$, and since the common difference $q$ of $P$
satisfies $q< N^{1/u}$, it is coprime to every $k$ with $P^-(k)>N^{1/u}$, so that
$$\left|\sum_{1\leq n\leq N}\left(1_{k|n}-\frac{1}{k}\right)1_{P}(n)\right|\leq1. $$ We deduce the major-arc estimate
\begin{align*}
\left|\sum_{1\leq n\leq N}h_{\tau}(n)1_{P}(n)\int_{G'/\Gamma'}F'\right|
&\leq\sum_{\substack{k\leq N^{1-\tau}\\P^-(k)>N^{1/u}}}\left|
\sum_{1\leq n\leq N}\left(1_{k|n}-\frac{1}{k}\right)1_{P}(n)\right|\\
&\leq \left|\left\{k\leq N^{1-\tau}:P^-(k)>N^{1/u}\right\}\right|\\
&\ll u\frac{N^{1-\tau}}{\log N}
\end{align*}
which implies (\ref{intermediary estimat Sigma1}) under the condition $N^{\tau}\geq (\log N)^{c(m,s,A)}$.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{non correlation equidistributed}] We essentially follow the proof of Proposition~9.2 of \cite{Ma12a}
and we suppose that $\|F\|_{\textrm{Lip},\mathcal{X}}=1$ and $Q\leq \delta^{-c_1(m,s)}$.
For $\mathcal{T}\in]0,1/2[$ and $j\geq1$, we define $S_j(\mathcal{T})$ as the set of the integers $k$ satisfying
\begin{displaymath}
\left|\sum_{2^j/k<n\leq 2^{j+1}/k}1_P(kn)F(g(kn)\Gamma)\right|> \mathcal{T}\frac{2^j}{k}.
\end{displaymath}
From the estimation
$$\sum_{\substack{k\leq N\\ P^-(k)>N^{1/u}}}\frac{\mu^2(k)}{k}
\ll \sum_{j\geq1}\frac{1}{j!}\left(\sum_{N^{1/u}<p\leq N}\frac{1}{p}\right)^{j}
\ll u$$
and the trivial bound $\left|\left\{k|n: P^-(k)>N^{1/u}\right\}\right|\leq 2^u$ valid whenever $n\leq N$,
we can see that $h_{\tau}(n)\ll 2^u$.
It follows that
\begin{align*}
\left|\sum_{n\leq N^{1-\tau/2}}h_{\tau}(n)1_P(n)F(g(n)\Gamma)\right|\ll N^{1-\tau/2}2^u
\end{align*}
and therefore we concentrate on the integers $n> N^{1-\tau/2}$.
Since the nilsequence $(g(n)\Gamma)_{n\in\{1,\ldots,N\}}$ is $\delta$-totally equidistributed,
the contribution from the part $\sum_k\frac{\mu(k)}{k}$ of $h_{\tau}$ can be handled
by observing that we have
\[
\sum_{\substack{k\leq N^{1-\tau}\\ P^-(k)>N^{1/u}}}\frac{\mu^2(k)}{k}
\left|\sum_{N^{1-\tau/2}\leq n \leq N}1_{P}(n)F(g(n)\Gamma)\right|\ll u\delta N.
\]
For the remaining terms $\sum_k\mu(k)1_{k|n}$ of $h_{\tau}$, we follow
the proof of Proposition~9.2 of \cite{Ma12a}.
We make a dyadic splitting over the variables $k$ and $n$ and we drop off the condition $P^-(k)>N^{1/u}$ :
\begin{align*}
&\sum_{\substack{k\leq N^{1-\tau}
}}\left|\sum_{N^{1-\tau/2}/k<n\leq N/k}1_P(kn)F(g(kn)\Gamma)\right|\\
\ll& \sum_{2^i\leq N^{1-\tau}}\sum_{\frac{N^{1-\tau/2}}{2}\leq 2^j\leq N}2^j\left(\sum_{\substack{2^{i}\leq k< 2^{i+1}
}}\frac{\mathcal{T}}{k}
+\sum_{\substack{2^{i}\leq k< 2^{i+1}
\\k\in S_j(\mathcal{T})}}\frac{1}{k}\right)\\
\ll& \sum_{\frac{N^{1-\tau/2}}{2}\leq 2^j\leq N}2^j\left(
\mathcal{T}\log N+\sum_{2^i\leq N^{1-\tau}}\frac{1}{2^i}\#\left( S_j(\mathcal{T})\cap\left[2^{i},2^{i+1}\right]\right)\right).
\end{align*}
Put $\mathcal{T}:=\delta^{c_1(m,s)}\leq Q^{-1}$ for a constant $c_1(m,s)>0$ sufficiently small. In the previous sum,
the contribution of the
range $\frac{N^{1-\tau/2}}{2}\leq 2^j\leq \mathcal{T} N$ is negligible and may be bounded by the trivial inequality.
The rest of the proof consists in showing that, if $K\leq N^{1-\tau}$, then we have
\begin{equation}\label{estimation s delta c}
\# \left(S_j(\mathcal{T})\cap[K,2K]\right)\leq \mathcal{T}K
\end{equation}
whenever $\mathcal{T}N\leq 2^j\leq N$.
The estimate (\ref{estimation s delta c}) is the analogue of [\cite{Ma12a}, Lemma~9.3] under the constraint $K\leq N^{1-\tau}$ rather than $K\leq N^{1/2}$
and in the special case $\overline{W}=1$ and $b=0$.
To achieve this, we follow the discussion of
Type I case of [\cite{GT12a}, Part~3] and we suppose for contradiction that (\ref{estimation s delta c}) does not hold for some $K\leq N^{1-\tau}$ and
$\mathcal{T}N
\leq 2^j\leq N$. By reproducing their arguments, we observe the existence
of a non-trivial horizontal character $\psi$ with magnitude $0<|\psi|\leq \mathcal{T}^{-c_2(m,s)}$ such that, for any $r\geq1$ and for at least
$\mathcal{T}^{c_2(m,s)}K$ values of $k$, we have
\begin{equation*}
\|\partial^r(\psi\circ g_k)(0)\|_{\mathbf{R}/\mathbf{Z}}\leq\mathcal{T}^{-c_2(m,s)}\left(K/2^j\right)^{r}
\end{equation*}
where $g_k(n)=g(kn)$,
which is the analogue of the formula~(3.7) of \cite{GT12a}.
By Lemmas~3.2 and 3.3 of \cite{GT12a} -- consequences of Waring's theorem -- it follows that
there exists an integer $q\ll_s1$ and at least $\mathcal{T}^{c_3(m,s)}K^r$ integers $l\leq 10^sK^r$ such that
\begin{equation*}
\|ql\beta_r\|_{\mathbf{R}/\mathbf{Z}}\leq \mathcal{T}^{-c_3(m,s)}(K/2^j)^r
\end{equation*}
where the $\beta_r$'s are defined by \begin{equation}\label{definition beta}
\psi\circ g(n)=\beta_s n^s+\cdots+\beta_0.\end{equation}
To deduce some diophantine information about the $\beta_r$'s, we invoke Lemma~3.2 of \cite{GT12b}
in a way analogous to \cite{GT12a}, after checking that the hypotheses are satisfied. It suffices to see that
$r\geq1$ and $\frac{\mathcal{T}^{2c_3(m,s)}}{10^s}\gg N^{-\tau}\geq\left(\frac{K}{2^j}\right)^r$ if the constant $c_1(m,s)$ is chosen sufficiently small.
It results that there exists $q'\leq \mathcal{T}^{-c_4(m,s)}$ such that
\begin{displaymath}
\|q'\beta_r\|_{\mathbf{R}/\mathbf{Z}}\leq\mathcal{T}^{-c_4(m,s)}2^{-rj}
\end{displaymath}
for any integer $r\geq1$.
By the definition~(\ref{definition beta}), we get the existence of $c_5(m,s)>0$ sufficiently
large such that $q'\leq \mathcal{T}^{-c_5(m,s)}$ and
\begin{equation}\label{derniere estimation diophante}
\|q'(\psi\circ g)(n)\|_{\mathbf{R}/\mathbf{Z}}\leq 1/10
\end{equation}for any $n\leq\mathcal{T}^{ c_5(m,s)}2^j$.
Let $\eta:\mathbf{R}/\mathbf{Z}\longrightarrow[-1,1]$ be a Lipschitz function of norm $O(1)$, mean value zero, and equal to $1$
on $[-1/10,1/10]$ so that
\begin{displaymath}
\int_{G/\Gamma}\eta\circ (q'\psi)=0\qquad\text{and}\qquad \|\eta\circ( q'\psi)\|_{\textrm{Lip},\mathcal{X}}\leq \mathcal{T}^{-c_5(m,s)}.
\end{displaymath}
It follows from (\ref{derniere estimation diophante}) that we have
\begin{displaymath}
\left|\sum_{n\leq\mathcal{T}^{ c_5(m,s)}2^j}\eta (q'\psi(g(n)\Gamma))\right|\geq \mathcal{T}^{ c_5(m,s)}2^j
>\delta
\|\eta\circ( q'\psi)\|_{\textrm{Lip},\mathcal{X}}\mathcal{T}^{c_5(m,s)}2^j
\end{displaymath}
whenever $c_1(m,s)$ is sufficiently small. This contradicts the hypothesis that
$(g(n))_{n\leq N}$ is $\delta$-totally equidistributed, the set of integers less than $\mathcal{T}^{ c_5(m,s)}2^j$
being an arithmetic progression of size at least $\delta N$ whenever $c_1(m,s)$ is sufficiently small since $2^j\geq\mathcal{T}
N$.
\end{proof}
\subsection*{Acknowledgements} The author would like to thank Trevor Wooley for his suggestion to study this method,
R\'egis de la Bret\`eche, Fran\c cois Hennecart and Anne de Roton
for their interest in this work, and his Ph.D. advisor C\'ecile Dartyge for her continuous support.
The major part of this work was completed while the author was a Ph.D. student at Universit\'e de Lorraine. He put
the finishing touches to it while he was a postdoctoral fellow at Aix-Marseille Universit\'e.
\section{Introduction}
\label{sec:intro}
Open clusters (OCs) have been used to trace the spiral arm structure and the evolution of the Galactic disk (Trumpler 1930,
Janes \& Adler 1982, Carraro et al. 1998, Chen et al. 2003, Piskunov et al. 2006, Moraux 2016).
Due to their location in the disc, open clusters are highly contaminated by non-member
stars. Data from the Gaia mission are very helpful in this respect. In continuation
of the previous two data releases, the Early Third Data Release (hereafter
EDR3; Gaia Collaboration et al. 2020) was made public on 3 December 2020.
This catalog provides positions, proper motions in right ascension and declination, and parallaxes $(\alpha, \delta,
\mu_{\alpha}\cos\delta, \mu_{\delta}, \pi)$ for more than 1.46 billion sources. Gaia~EDR3 has enabled a breakthrough in OC studies
because it provides accurate proper motions and parallaxes for a large number of stars. Cantat-Gaudin et al. (2018) reported
membership probabilities for 1229 OCs, including 60 previously unknown clusters, based on the Gaia data. One of the most important outcomes of the Gaia
data is that many new OCs can be detected. Sim et al. (2019), Liu \& Pang (2019) and Castro-Ginard et al. (2020) identified 207, 76 and 582
new OCs in the Galactic disk. In this paper, our main goal is to perform a detailed analysis of NGC 1348 using Gaia data. The open
cluster NGC 1348 ($\alpha_{2000} = 03^{h}34^{m}06^{s}$, $\delta_{2000}=51^{\circ} 24^{\prime} 30^{\prime\prime}$;
$l$=146$^\circ$.969, $b$=-3$^\circ$.709) is located in the second Galactic quadrant. Carraro (2002) analyzed this object
using CCD UBVI data. He found that NGC 1348 is a significantly reddened cluster $(E(B-V)=0.85)$, lies at a distance $1.9\pm0.5$ kpc and
has an age greater than 50 Myr.
Open clusters contain a spectrum of stellar masses (from very low to high mass stars)
formed from the same molecular cloud. This makes them the ideal objects to study the
initial mass function (IMF).
Many authors have studied IMF in open
clusters (Durgapal \& Pandey 2001, Phelps \& Janes 1993, Piatti et al. 2002, Piskunov et al. 2004, Scalo et al. 1998, Sung and Bessell 2004,
Yadav \& Sagar 2002, 2004, and Bisht et al. 2017 \& 2019). The universality of IMF is still a matter of intense debate
(Elmegreen 2000; Larson 1999; Marks et al. 2012; Dib 2014; Dib, Schmeja \& Hony 2017).
One of the motives of the present analysis is to gather information of the IMF to
understand the star formation history in NGC 1348. The mass segregation studies
in the OCs provide information about the distribution of stars according to their masses
within the cluster region. The information contained in both
the mass distribution of stars and their spatial distribution can help us understand the process of star formation. We also investigate the
orbits of stars in NGC 1348 as these are very useful to constrain the role of external tidal forces and help us better understand the
dynamical evolution of the cluster.
Essentially, members of a star cluster appear as a coherent, mutually associated moving group of stars sharing similar properties like
distance, kinematics, chemical composition, and age as well as the line of sight velocity (radial velocity).
The determination of the convergent point coordinates $(A_{\circ} , D_{\circ})$ towards which the stars of the cluster seem to converge
(i.e. the apex) is an important step in the kinematical and physical examination (Wayman 1965, Hanson 1975, Eggen 1984,
Gunn et al. 1988). To determine the apex, numerous techniques are available in the literature, like i) classical convergent
point method, ii) the AD-chart method, and iii) convergent point search method (CPSM; Galli et al. 2012).
The convergent point method is a classical method which still attracts the
interest of many working groups.
This method allows selecting stars based on the parallelism of the proper motion components.
It was further developed and discussed by Smart (1938), Brown (1950), \& Jones (1971).
Based on the works by Jones (1971) and de Bruijne (1999), Galli et al. (2012)
presented the CPSM which also uses the proper motion data.
The CPSM represents the stellar proper motions by great circles over the celestial sphere
and visualizes their intersections as the convergent point of the moving group.
However, for a complete picture of a star's space motion,
both proper motions and radial velocities are required.
Thus, one can identify stellar groupings with a common movement in space
via the AD-chart method.
This method, which takes into account the
individual stellar apexes, is discussed by Chupina et al. (2001, 2006).
In this work, we adopted the AD-chart method (stellar apex method) for NGC 1348.
We used the distribution of individual apexes of cluster members in the equatorial coordinate system.
Also, some kinematical parameters and velocity ellipsoid parameters (VEPs)
are presented here with the computational algorithm presented in our previous papers
(Elsanhoury et al. 2015, 2018, Postnikova et al. 2020, Bisht et al. 2020).
The structure of the article is as follows.
A brief description of the different data sets used here is given in Section 2.
In Section 3, we performed the
study of proper motion and selected the cluster member stars.
The structural properties of the cluster and the derivation of its fundamental parameters are explained
in Section 4.
Section 5 presents the Galactic orbit of the cluster.
Section 6 deals with the luminosity and mass functions and the mass segregation,
while the kinematical analysis of the cluster is described in Section 7.
We conclude the present work in Section 8.
\begin{figure}
\begin{center}
\includegraphics[width=12.5cm, height=12.5cm]{Fig1.ps}
\caption{The identification map of NGC 1348 taken from the DSS.}
\label{id}
\end{center}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=10.5cm,height=12.5cm]{Fig2.ps}
\caption{Photometric errors in the $J$, $H$ and $K$ magnitudes against $J$ magnitude (upper panels).
Photometric errors in the Gaia pass bands $G$, $G_{BP}$ and $G_{RP}$ against $G$ magnitude (lower panels).}
\label{error}
\end{figure}
\section{Data}
We extracted photometric data of the cluster within a 10 arcmin radius from the
APASS, Pan-STARRS1, UKIDSS and WISE along with astrometric data from GAIA EDR3.
The main purpose is to take different photometric surveys' data
to check the extinction law towards the open cluster NGC 1348 from the optical to
the mid-infrared.
After cross-matching all these catalogs, the fundamental parameters, mass function,
Galactic orbits and kinematics have been studied in the current paper.
The identification map shown in the Fig.~\ref{id} is taken from the Digitized Sky Survey (DSS).
The descriptions of the above-mentioned data sets are as follows:
\subsection{\bf GAIA EDR3}
We have used GAIA~EDR3 (Gaia Collaboration et al. 2020) data for the astrometric investigation of NGC 1348. These data
consist of five astrometric quantities, namely the two position coordinates,
the parallax and the proper motions in the two directions, down to a limiting magnitude of $G=21$ mag. We have plotted the errors in the three photometric bands
($G$, $G_{BP}$ and $G_{RP}$) along with their $G$ magnitudes as shown in the three bottom panels of Fig \ref{error}.
For the sources having G$\le$ 15 mag, the uncertainties in parallax are $\sim$
0.02-0.03 mas, while for the sources with G$\le$ 17 mag they are $\sim$ 0.07 mas.
In Fig. \ref{error_proper}, we plotted the proper motion and their corresponding
errors as a function of $G$ magnitude. This figure shows that the maximum
error in proper motion components is $\sim 0.4$ mas/yr upto $G\sim20$ mag.
\subsection{\bf UKIDSS}
The UKIRT Infrared Deep Sky Survey (UKIDSS; Lawrence et al. 2007) is a deep large scale infrared survey organized with
the Wide Field Camera (WFCAM; Casali et al. 2007) on UKIRT. The UKIDSS GCS DR9 covers $\sim$ 36 square degrees observed
in five passbands ($Z$, $Y$, $J$, $H$, $K$; Hewett et al. 2006).
\subsection{\bf WISE}
The WISE database contains photometric magnitudes of stars in the mid-IR bands. The effective wavelength of these bands are $3.35 \mu m (W1)$,
$4.60 \mu m (W2)$, $11.56 \mu m (W3)$ and $22.09 \mu m (W4)$ (Wright et al. 2010). We have extracted data from the ALLWISE source
catalog for NGC 1348.
\begin{figure}
\centering
\includegraphics[width=10.5cm,height=12.5cm]{Fig3.ps}
\caption{Plot of Proper motions and their errors versus $G$ magnitude. The unit of proper motions and their errors is mas/yr.}
\label{error_proper}
\end{figure}
\subsection{\bf APASS}
The American Association of Variable Star Observers (AAVSO) Photometric All-Sky Survey (APASS) is cataloged
in five filters: B, V (Landolt) and $g^{\prime}$, $r^{\prime}$, $i^{\prime}$, with a $V$-band magnitude range
from 7 to 17 mag (Henden \& Munari 2014). The DR9 catalog covers about $99\%$ of the sky (Henden et al. 2016).
From here, we have used data in $B$ and $V$ bands for NGC 1348.
\subsection{\bf Pan-STARRS1}
The Pan-STARRS1 survey (Hodapp et al. 2004) provides data in five broad-band filters, $g$, $r$, $i$, $z$, $y$, spanning
from 400 nm to 1 $\mu$m (Stubbs et al. 2010).
These data have mean 5-$\sigma$ point-source
limiting sensitivities of 23.3, 23.2, 23.1, 22.3, and 21.4 mag in the $g$,
$r$, $i$, $z$, and $y$ bands respectively (Chambers et al. 2016). The filters
have effective wavelengths of 481, 617, 752, 866, and 962 nm, respectively (Schlafly et al. 2012; Tonry et al. 2012).
\section{Mean Proper motion and Membership probability of stars}
We plotted the proper motions (PMs)
($\mu_{\alpha}\cos{\delta}$, $\mu_{\delta}$) against each other
in the vector point diagrams (VPDs) shown
in the bottom panels of Fig. \ref{pm_dist}.
The top and middle panels show the corresponding $J$ versus ($J-H$)
and $G$ versus $(G_{BP}-G_{RP})$ color-magnitude diagrams (CMDs).
The left panel shows all
stars within a radius of 10 arcmin around the cluster center, while the
middle and right panels show the probable cluster members having similar motion
in the sky and the non-member stars, respectively. The selection
of the circle's radius as 0.6 mas/yr in the VPD is a compromise between losing stars with poor PMs and contamination by
field stars. The CMDs of the selected probable cluster members are shown in the central panels of Fig. \ref{pm_dist}. The main
sequence of the cluster is clearly separated from the non-members.
\begin{figure*}
\centering
\includegraphics[width=12.5cm, height=12.5cm]{Fig4.ps}
\caption{(Bottom panels) Proper-motion vector point diagrams (VPDs) for NGC 1348. (Top panels) $J$ versus $(J-H)$ color
magnitude diagrams. (Middle panels) $G$ versus $(G_{BP}-G_{RP})$ color magnitude diagrams. (Left panel) The entire
sample. (Center) Stars within the circle of 0.6~ mas~ yr$^{-1}$ radius centered around the mean proper motion.
(Right) Probable background/foreground field stars in the direction of the cluster.
All these plots show only the stars
with PM error smaller than 0.5~mas~ yr$^{-1}$ in each coordinate.}
\label{pm_dist}
\end{figure*}
For the mean proper motion estimation, we consider only the probable cluster
members selected on the basis of the cluster's VPD and CMD as shown
in Fig. \ref{pm_dist}. Using the weighted mean method, we found the mean proper motion of NGC 1348 to be $1.27\pm0.001$ and
$-0.73\pm0.002$ mas yr$^{-1}$ in the RA and DEC directions, respectively.
In this paper, we used the method described by Balaguer-N\'{u}\~{n}ez et al. (1998)
by using Gaia EDR3 catalog data for NGC 1348
to estimate the membership probability of stars. This method has been used for several clusters by various authors
(Yadav et al. 2013; Sariya et al. 2021a, 2021b; Bisht et al. 2020). Recently we have adopted the above membership
probability method for a few OCs using Gaia EDR3 data (Bisht et al. 2021a, Bisht et al. 2021b). We used stars with PM
errors $\le$ 0.5 mas/yr to model the cluster and field star distributions. A group of stars is found at
$\mu_{xc}$=1.27 mas~yr$^{-1}$, $\mu_{yc}$=$-$0.73 mas/yr. Considering a distance of 2.6 kpc and radial velocity
dispersion of 1 km $s^{-1}$ for open star clusters (Girard et al. 1989), the expected dispersion ($\sigma_c$) in
PMs would be 0.08 mas/yr. For the non-members, we obtained ($\mu_{xf}$, $\mu_{yf}$) = ($-$1.0, $-$1.7) mas/yr and
($\sigma_{xf}$, $\sigma_{yf}$) = (3.9, 2.6) mas/yr.
Based on the above method, 438 stars are selected as member stars with membership probability higher than $50\%$
and $G\le20$ mag. In the left panel of Fig.~\ref{membership}, we plotted membership probability versus $G$ magnitude.
In this figure, we can see a clear separation of the cluster and the field stars. In the right panel of this figure, we plotted $G$
magnitude versus parallax of stars. The most probable cluster members with high membership probability $(\ge 50\%)$ are
shown by red dots in Fig.~\ref{membership}. We have plotted the $G$ versus ($G_{BP}-G_{RP}$) CMD, the identification
chart and the proper motion distribution using stars with membership probability higher than $50\%$ in Fig. \ref{new_membership}.
The Cantat-Gaudin et al. (2018) catalog reports membership probabilities
for the stars of this cluster but only up to 18 mag in the $G$ band.
Here, we provide the most probable cluster members down to 20 mag
in the $G$ band, which is fainter than in Cantat-Gaudin et al. (2018).
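For reproducibility, a minimal sketch of this membership computation is given below. It is an illustration rather than the exact pipeline used here: the relative normalisations $n_c$, $n_f$ of the two frequency functions are omitted, and the numerical values are the ones quoted above.
\begin{verbatim}
# Minimal sketch (not the exact pipeline used here) of the frequency-function
# membership method of Balaguer-Nunez et al. (1998): the proper-motion plane
# is modelled as the sum of a cluster Gaussian and a field Gaussian, and the
# membership probability of a star is phi_c / (phi_c + phi_f).
# The numerical values are the ones quoted in the text (mas/yr).
import numpy as np

def gauss2d(x, y, mux, muy, sx, sy):
    """Bivariate (uncorrelated) Gaussian frequency function."""
    z = ((x - mux) / sx) ** 2 + ((y - muy) / sy) ** 2
    return np.exp(-0.5 * z) / (2.0 * np.pi * sx * sy)

def membership_probability(pmra, pmdec):
    phi_c = gauss2d(pmra, pmdec, 1.27, -0.73, 0.08, 0.08)   # cluster component
    phi_f = gauss2d(pmra, pmdec, -1.0, -1.7, 3.9, 2.6)      # field component
    # the relative normalisations n_c, n_f of the two components are omitted
    return phi_c / (phi_c + phi_f)
\end{verbatim}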
\begin{figure}
\begin{center}
\centering
\hbox{
\includegraphics[width=7.5cm, height=7.5cm]{Fig5a.ps}
\includegraphics[width=7.5cm, height=7.5cm]{Fig5b.ps}
}
\caption{(Left panel) The cluster membership probabilities plotted with G magnitude. (Right panel) Cluster
parallax with G magnitude. Solid black dots are probable cluster members with membership probability higher than $50\%$.
}
\label{membership}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\centering
\includegraphics[width=12.5cm, height=12.5cm]{Fig6.ps}
\caption{($G, G_{BP}-G_{RP}$) CMD, identification chart and proper motion distribution of member stars with membership
probability higher than $50\%$. The plus sign indicates the cluster center.
}
\label{new_membership}
\end{center}
\end{figure}
\section{Cluster structure, extinction law and fundamental parameters}
\subsection{Cluster radius and radial stellar surface density}
To estimate the cluster center, the weighted mean of the positions of all stars was considered by von Hoerner (1960, 1963).
Alternatively, the center can be estimated by fitting a Gaussian function to the stellar distribution and taking the center to be the point
of maximum number density. We adopted the latter method to find the central coordinates of NGC 1348. This method has been
described by Bisht et al. (2020). The central coordinates are found as $\alpha = 53.51\pm0.03$ deg ($3^{h} 34^{m} 2.3^{s}$)
and $\delta = 51.41\pm0.02$ deg ($51^{\circ} 24^{\prime} 36^{\prime\prime}$) which are in good agreement with the values given
in Dias et al. (2002).
After center estimation, the next step is to construct a radial density profile
(RDP), for which, we have drawn many concentric rings
around the cluster center using the above estimated values
of center coordinates. We determined the stellar number density, $\rho_{i}$, in
the $i^{th}$ zone of the cluster by using the relation: $\rho_{i}$ = $\frac{N_{i}}{A_{i}}$,
where $N_{i}$ is the number of cluster members in the area $A_{i}$ of the $i^{th}$
zone.
By fitting the King (1962) profile in this distribution as shown by a smooth continuous line in Fig. \ref{dens}, we determined
the structural properties of the cluster. The King (1962) profile is given as:\\
~~~~~~~~~~~~~~~~~~~~~~~${\bf f(r) = f_{bg}+\frac{f_{0}}{1+(r/r_{c})^2}}$\\
where $r_{c}$, $f_{0}$, and $f_{bg}$ are the core radius, central density, and the background density level, respectively.
We have shown background density level with errors using dotted lines in
Fig. \ref{dens}. At $r\sim$ 7.5$^{\prime}$ the cluster stars merge into
the field star background, as clearly shown in Fig. \ref{dens}.
Hence, we considered 7.5$^{\prime}$ as the cluster radius.
The error bars are calculated using the Poisson statistics error in each shell as $P_{err}=\frac{1}{\sqrt{N}}$. By fitting the
King model to the cluster density profile, the structural parameters are found to be: $f_{bg}$=2.54 stars/arcmin$^{2}$,
$f_{0}$=10.15 stars/arcmin$^{2}$ and $r_{c}$=3.2 arcmin. We obtained the density contrast parameter ($\delta_{c}$) using the
formula described by Bisht et al. (2020), which indicates that NGC 1348 is a sparse cluster. The tidal radius of clusters
is normally influenced by the effects of Galactic tidal fields and later by internal relaxation dynamical evolution of clusters
(Allen \& Martos 1988).
To calculate the tidal radius of NGC 1348, we used the formula derived by
Bertin \& Varri (2008) as:
~~~~~~~~~~~~~~~~~~~$r_{t}=(\frac{GM_{cl}}{\omega^{2} \nu})^{1/3}$ \\
where $\omega$ and $\nu$ are \\
$\omega= (d\Phi_{G}(R)/dR)_{R_{gc}}/R_{gc})^{1/2}$ \\
$\nu=4-\kappa^{2}/\omega^{2}$ \\
where $\kappa$ is \\
$\kappa=(3\omega^{2}+(d^{2}\Phi_{G}(R)/dR^{2})_{R_{gc}})^{1/2}$ \\
where $\Phi_{G}$ is the Galactic potential, $M_{cl}$ is the
mass of the cluster, $R_{gc}$ is the Galactocentric
distance of the cluster, $\omega$ is the orbital frequency, $\kappa$ is the
epicyclic frequency and $\nu$ is a positive constant. We used the Galactic
potentials discussed in section 5. The value of the Galactocentric distance is
taken from Table~1 and mass of the cluster is taken from section 6.
In this manner, the tidal radius of the cluster is calculated as 9.2 pc.
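As an illustration of the fitting procedure described above, a minimal version using a standard non-linear least-squares routine could look as follows; the binning and variable names are our assumptions, and only the King profile and the Poissonian weights follow the text.
\begin{verbatim}
# Minimal sketch of the King (1962) profile fit,
# f(r) = f_bg + f_0 / (1 + (r/r_c)^2), weighted by the Poissonian errors.
import numpy as np
from scipy.optimize import curve_fit

def king_profile(r, f_bg, f_0, r_c):
    return f_bg + f_0 / (1.0 + (r / r_c) ** 2)

def fit_density_profile(r_i, rho_i, rho_err):
    # r_i  : mean radius of each concentric ring (arcmin)
    # rho_i: surface density N_i / A_i (stars / arcmin^2), rho_err its error
    p0 = [1.0, 10.0, 3.0]                       # rough initial guess
    popt, pcov = curve_fit(king_profile, r_i, rho_i, p0=p0, sigma=rho_err)
    return popt, np.sqrt(np.diag(pcov))
\end{verbatim}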
\begin{figure}
\hspace{3cm}\includegraphics[width=10.5cm, height=10.5cm]{Fig7.ps}
\caption{Surface density distribution of the cluster NGC 1348 using GAIA~EDR3 $G$ band data. Errors are determined from
sampling statistics (=$\frac{1}{\sqrt{N}}$ where $N$ is the number of cluster members used in the density estimation at
that point). The smooth line represents the fitted profile of King (1962) whereas the dotted line shows the background density
level. Long and short dash lines represent the errors in background density.}
\label{dens}
\end{figure}
\subsection{Optical to mid-infrared extinction law}
We have matched the multi-wavelength photometric data with Gaia astrometry to study the extinction law in various wavebands
for NGC 1348. We plotted various $(\lambda-G_{RP})/(G_{BP}-G_{RP})$ two-color diagrams (TCDs) as shown in Fig. \ref{cc_gaia}. Here,
$\lambda$ represents the filters other than $G_{RP}$. A linear fit was performed in each TCD to find the slope; the slopes are listed in
Table \ref{gaia_slope}. These values of the slopes are in fair agreement with the values given by Wang \& Chen (2019).
The values of the total-to-selective absorption ratio $R_{cluster}$,
in the range of $\sim$ 2.9-3.3 for the different passbands, demonstrate that the
reddening law is normal towards the region of NGC 1348.
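The slopes listed in Table~\ref{gaia_slope} are simple least-squares slopes of the corresponding TCDs; schematically (the array names below are ours):
\begin{verbatim}
# Colour-excess ratio (lambda - G_RP)/(G_BP - G_RP) from a least-squares
# straight-line fit to the two-colour diagram of the cluster members.
import numpy as np

def tcd_slope(mag_lambda, g_bp, g_rp):
    x = g_bp - g_rp
    y = mag_lambda - g_rp
    slope, intercept = np.polyfit(x, y, 1)
    return slope
\end{verbatim}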
\begin{table*}
\caption{Multi-band color excess ratios in the direction of NGC 1348.
}
\vspace{0.5cm}
\centering
\begin{center}
\small
\begin{tabular}{ccc}
\hline\hline
Band $(\lambda)$ & Effective wavelength & $\frac{\lambda-G_{RP}}{G_{BP}-G_{RP}}$ \\
\\
\hline\hline
Johnson~ B &445 &$1.66\pm0.01$\\
Johnson~ V &551 &$1.05\pm0.02$\\
Pan-STARRS~ g &481 &$1.42\pm0.02$\\
Pan-STARRS~ r &617 &$0.73\pm0.03$\\
Pan-STARRS~ i &752 &$0.15\pm0.03$\\
Pan-STARRS~ z &866 &$-0.16\pm0.05$\\
Pan-STARRS~ y &962 &$-0.35\pm0.04$\\
UKIDSS~ J &1234.5 &$-0.79\pm0.03$\\
UKIDSS~ H &1639.3 &$-1.21\pm0.03$\\
UKIDSS~ K &2175.7 &$-1.35\pm0.06$\\
WISE ~W1 &3317.2 &$-1.39\pm0.06$\\
WISE~ W2 &4550.1 &$-1.40\pm0.07$\\
\hline
\end{tabular}
\label{gaia_slope}
\end{center}
\end{table*}
\begin{figure*}
\begin{center}
\centering
\includegraphics[width=10.5cm, height=10.5cm]{Fig8.ps}
\caption{The $(\lambda-G_{RP})/(G_{BP}-G_{RP})$ TCDs for the stars selected from VPD of NGC 1348.
The continuous blue lines represent the slope determined through the least-squares linear fit.
}
\label{cc_gaia}
\end{center}
\end{figure*}
\subsection{Reddening from UKIDSS colors}
The $(J-H)$ versus $(J-K)$ color-color diagram has been used to
obtain the value of the interstellar reddening, as shown in Fig. \ref{cc}.
The solid line represents the zero-age main sequence (ZAMS) taken from Caldwell et al. (1993).
The same ZAMS, shown by the dotted line, is displaced by $E(J-H) = 0.27\pm0.03$ mag and $E(J-K) = 0.47\pm0.05$ mag.
The color excess ratio ($\frac{E(J-H)}{E(J-K)}$=0.57)
is in agreement with the normal value of 0.55 given by Cardelli et al. (1989).
We obtained the value of the interstellar reddening $E(B-V)$ as 0.88 mag. Our estimated value
is in good agreement with Carraro (2002). Our approach to the reddening estimation is applicable even in
exceptionally extended regions.
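For completeness, the conversion from the near-IR colour excesses to $E(B-V)$ can be sketched as follows; the ratios $E(J-H)\simeq0.309\,E(B-V)$ and $E(J-K)\simeq0.48\,E(B-V)$ used below are common literature values and are our assumption here, not necessarily the exact coefficients adopted above.
\begin{verbatim}
# Rough conversion of the measured near-IR excesses to E(B-V); the ratios
# below are assumed standard values (see the accompanying text).
def ebv_from_nir(e_jh, e_jk):
    return 0.5 * (e_jh / 0.309 + e_jk / 0.48)

# ebv_from_nir(0.27, 0.47) gives roughly 0.93 mag, of the same order as the
# 0.88 mag adopted in the text.
\end{verbatim}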
\begin{figure}
\begin{center}
\centering
\includegraphics[width=8.5cm, height=8.5cm]{Fig9.ps}
\caption{The color-color diagram (CCD) for NGC 1348 using the probable cluster members. In this figure, the red
solid line is the ZAMS taken from Caldwell et al. (1993) while the red dotted line is the same ZAMS shifted by the
values given in the text. Solid black dots are the stars matched with Cantat-Gaudin et al. (2018).}
\label{cc}
\end{center}
\end{figure}
\subsection{Age, distance and Galactocentric coordinates}
The main fundamental parameters (age, distance, and reddening) have been obtained by fitting the theoretical isochrones
of Marigo et al. (2017) to all the CMDs as shown in Fig. \ref{cmd}. The observed data have been corrected for reddening using
the coefficient ratios $\frac{A_{J}}{A_{V}}$=0.276 and $\frac{A_{H}}{A_{V}}$=0.176, which are taken from Schlegel et al. (1998),
while the ratio $\frac{A_{K_{s}}}{A_{V}}$=0.118 was derived from Dutra et al. (2002). For the Gaia photometry, we have estimated the mean values
of $A_{G}$ and $E(G_{BP}-G_{RP})$ as 1.92 and 0.96 using stars with membership probability higher than $50\%$. The Cantat-Gaudin et al. (2018)
catalog contains the membership probabilities of many OCs. In this paper, we have matched our likely members with their catalog and selected
common stars having probability higher than $50\%$. These matched stars have been plotted in the CMDs as shown in Fig. \ref{cmd}.
The isochrones of different ages (log(age)=8.10, 8.20 and 8.30) with $Z=0.008$ have been over plotted on all the CMDs for the cluster
NGC 1348 as shown in Fig \ref{cmd}. The overall fit is satisfactory for log(age)=8.20 (middle isochrone) to the brighter stars,
corresponding to $160\pm40$ Myr. The estimated distance modulus ($(m-M)$=13.80 mag) provides a distance from the Sun that is
$2.4\pm0.10$ kpc.
\subsubsection{Distance of the cluster using parallax angle}
The distance can be estimated using the mean parallax of probable member stars (Luri et al. 2018). By using weighted mean method,
the mean parallax for the cluster is found to be $0.39\pm0.005$ mas.
Bailer-Jones (2015) has shown that the distance estimation
obtained by simply inverting the
parallax is not reliable when there is an associated error.
He described that a correct approach is to obtain the distance values from the parallaxes of stars
through a probabilistic analysis which combines a likelihood (the measurements)
and a prior (the assumptions). Bailer-Jones (2015) investigated different types of priors and Bailer-Jones
(2018) suggested an exponentially decreasing space density prior in distance $r$.
The prior depends upon a length scale parameter which can be obtained by fitting
a three dimensional model of the Galaxy observed by Gaia and varies smoothly as
a function of Galactic longitude and latitude. With
the help of this prior, the distance of the object can be calculated using a posterior
which is similar to the likelihood (a Gaussian distribution function in parallax)
but is a function of distance. This
method gives a purely geometric distance, independent of the
interstellar extinction towards an individual star.
Before calculating the distance we corrected the parallax for the offset ($-0.017$ mas)
suggested by Lindegren et al. (2020) for the Gaia EDR3 data set.
Then, by adopting the above-mentioned method, the distance is estimated as $2.6\pm0.05$ kpc.
This value of the cluster's distance is in good agreement with our result obtained from the isochrone fitting method.
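A minimal numerical version of this estimate is sketched below; the length scale of the prior is an assumed representative value, and the input parallax is assumed to be already corrected for the zero-point offset.
\begin{verbatim}
# Mode of the distance posterior for the mean cluster parallax, using the
# exponentially decreasing space density prior P(r) ~ r^2 exp(-r/L) of
# Bailer-Jones (2015, 2018) and a Gaussian parallax likelihood.
import numpy as np

def distance_posterior_mode(parallax_mas, sigma_mas, length_scale_kpc=1.35):
    # parallax in mas <-> distance in kpc, so 1/r is directly in mas
    r = np.linspace(0.05, 20.0, 20000)          # trial distances (kpc)
    log_post = (2.0 * np.log(r) - r / length_scale_kpc
                - 0.5 * ((parallax_mas - 1.0 / r) / sigma_mas) ** 2)
    return r[np.argmax(log_post)]
\end{verbatim}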
\begin{figure*}
\begin{center}
\centering
\hbox{
\includegraphics[width=8.5cm, height=8.5cm]{Fig10a.ps}
\includegraphics[width=8.5cm, height=8.5cm]{Fig10b.ps}
}
\caption{The $G, (G_{BP}-G_{RP})$, $G, (G_{BP}-G)$, $G, (G-G_{RP})$, $Z, (Z-Y)$, $J, (J-H)$ and $K, (J-K)$ color-magnitude
diagrams of open star cluster NGC 1348. These stars are probable cluster members
The curves are the
isochrones of (log(age)=8.10, 8.20 and 8.30). These isochrones are taken from Marigo et al. (2017). Solid black dots are matched stars
with Cantat-Gaudin et al. (2018).}
\label{cmd}
\end{center}
\end{figure*}
\section{Orbits of NGC 1348}
Galactic orbits are very useful to explain the dynamical characteristics of clusters. We derive orbits
and orbital parameters of NGC 1348 using Galactic potential models discussed by Allen \& Santillan (1991). Bajkova \& Bobylev (2016)
and Bobylev et al. (2017) refined the Galactic potential model parameters with the using new observational data for a
distance R$\sim$ 0-200 kpc. The description of these Galactic potential
models is given by Rangwal et al. (2019).
The input parameters required to calculate orbits of the cluster, such as central coordinates ($\alpha$ and $\delta$),
mean proper motions ($\mu_{\alpha}cos\delta$, $\mu_{\delta}$), parallax angles, age and heliocentric distance ($d_{\odot}$)
have been taken from our investigation in this paper. The radial velocity of this object is not available in the literature.
The average radial velocity of NGC 1348 was calculated by taking the mean of the radial velocities of 14 probable cluster members selected from the
Gaia DR2 catalog. After five iterations, the average radial velocity is found to be $-18.71\pm1.60$ km/sec.
The right-handed coordinate system is used to convert equatorial velocity components into Galactic-space velocity components
($U,V,W$), where $U$, $V$ and $W$ are radial, tangential and vertical velocities respectively. Here, the x-axis is taken positive
towards the Galactic-center, the y-axis is along the direction of Galactic rotation and the z-axis is towards the Galactic north pole. Galactic
center is taken at ($17^{h}45^{m}32^{s}.224, -28^{\circ}56^{\prime}10^{\prime\prime}$)
and the North-Galactic pole is taken to be located at
($12^{h}51^{m}26^{s}.282, 27^{\circ}7^{\prime}42^{\prime\prime}.01$) (Reid \& Brunthaler, 2004). To apply a correction for
Standard Solar Motion and Motion of the Local Standard of Rest (LSR), we used position coordinates of the Sun as ($8.3,0,0.02$)
kpc and its space-velocity components as ($11.1, 12.24, 7.25$) km/s (Schonrich et al. 2010). The transformed parameters in the Galactocentric
coordinate system are listed in Table \ref{inp}.
Fig.~\ref{orbit} shows orbits of the cluster NGC 1348. In the top left panel, the motion of the cluster is described in terms of distance from
Galactic center and Galactic plane, which indicates the 2D side view of the orbit. In the top right panel, the cluster motion
projected into the plane of the Galaxy is described, which shows the top view of orbit. The bottom panel of this figure indicates
the distance of NGC 1348 from the Galactic plane as a function of time.
The nearly circular orbit of NGC 1348 follows a boxy pattern. However the
cluster shows a small drift of $\sim$ 83 pc from the circular motion.
The birth and the present day position of NGC 1348 in the Galaxy
are represented by filled triangle and circle in Fig. \ref{orbit}. We also calculated the orbital parameters for the clusters which are
listed in Table \ref{orpara}. Here $e$ is eccentricity, $R_{a}$ is apogalactic distance, $R_{p}$ is perigalactic distance, $Z_{max}$
is the maximum distance traveled by cluster from Galactic disc, $E$ is the average energy of orbits, $J_{z}$ is $z$ component of angular
momentum and $T$ is time period of the revolution around the Galactic center.
The orbital parameters determined in the present analysis are similar to the
parameters determined by Wu et al. (2009).
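For illustration only, this kind of orbit integration can be reproduced with a public galactic-dynamics code; the sketch below uses galpy with its standard MWPotential2014 rather than the Allen \& Santillan (1991) potential adopted in the text, so the resulting parameters may differ somewhat from those of Table~\ref{orpara}.
\begin{verbatim}
# Illustrative orbit integration with galpy (not the potential actually used
# in this work) showing how e, R_a, R_p and Z_max can be obtained.
import numpy as np
from astropy import units as u
from galpy.orbit import Orbit
from galpy.potential import MWPotential2014

# [RA (deg), Dec (deg), distance (kpc), pmRA* (mas/yr), pmDec (mas/yr), RV (km/s)]
o = Orbit([53.51, 51.41, 2.6, 1.27, -0.73, -18.71],
          radec=True, ro=8.3 * u.kpc, vo=220.0 * u.km / u.s)
ts = np.linspace(0.0, 284.0, 2000) * u.Myr      # roughly one revolution period
o.integrate(ts, MWPotential2014)
print(o.e(), o.rap(), o.rperi(), o.zmax())      # eccentricity, R_a, R_p, Z_max
\end{verbatim}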
\begin{table*}
\caption{Position and velocity components in the Galactocentric coordinate system. Here $R$ is the Galactocentric distance,
$Z$ is the vertical distance from the Galactic disc, $U$, $V$, $W$ are the radial, tangential and vertical components of the velocity,
respectively, and $\phi$ is the position angle relative to the Sun's direction.
}
\vspace{1cm}
\centering
\begin{tabular}{ccccccccc}
\hline\hline
Cluster & $R$ & $Z$ & $U$ & $V$ & $W$ & $\phi$ \\
& (kpc) & (kpc) & (km/sec) & (km/sec) & (km/sec) & (radians) \\
\hline
NGC 1348 & 10.39 & -0.14 & $-11.71 \pm 1.68 $ & $-232.55 \pm 1.51 $ & $-10.14 \pm 1.65$ & 0.13 \\
\hline
\end{tabular}
\label{inp}
\end{table*}
\begin{table*}
\caption{The obtained orbital parameters using Galactic potential model.
}
\vspace{1cm}
\centering
\begin{tabular}{ccccccccc}
\hline\hline
Cluster & $e$ & $R_{a}$ & $R_{p}$ & $Z_{max}$ & Birth position & $E$ & $J_{z}$ & $T$ \\
& & (kpc) & (kpc) & (kpc) & (R,Z) & $(100 km/sec)^{2}$ & (100 kpc km/s) & (Myr) \\
\hline\hline
NGC 1348 & 0.004 & 10.47 & 10.38 & 0.25 & (11.47,0.30) & -9.88 & -24.38 & 284 \\
\hline
\end{tabular}
\label{orpara}
\end{table*}
\begin{figure*}
\begin{center}
\hbox{
\includegraphics[width=6.2cm, height=6.2cm]{Fig11a.ps}
\includegraphics[width=8.2cm, height=6.2cm]{Fig11b.ps}
}
\hspace{-4cm}\includegraphics[width=6.2cm, height=6.2cm]{Fig11c.ps}
\caption{Galactic orbits of the cluster NGC 1348 estimated with the Galactic potential model described in the text in the
time interval equal to the age of the cluster. The top left panel shows the side view and the top right panel shows the top
view of the orbit. Bottom panel shows the distance of cluster from the Galactic plane as a function of time. Dotted
line represents the cluster's orbits for a time interval of 284 Myr. The filled circle and the triangle sign denote the
birth and the present day position of cluster in the Galaxy.}
\label{orbit}
\end{center}
\end{figure*}
\section{Dynamics of the cluster}
\subsection{Luminosity and mass function}
The distribution of the cluster members per unit magnitude range is called the
luminosity function (LF). To derive the LF, we used only the probable members
of NGC 1348. To construct the LF, we converted the apparent $G$ magnitudes
into the absolute ones using the distance modulus.
The resulting histogram is shown in the left panel of Fig. \ref{lf_mass}.
This figure shows that the LF continues rising up to $M_{G}\sim$ 2.9 mag.
The LF and mass function (MF) are associated with each other according to the mass-luminosity relation (MLR).
We have used the theoretical tables of evolutionary tracks of Marigo et al. (2017) to convert luminosities into masses.
Fig. \ref{lf_mass} displays the luminosity function of member stars of the cluster (left panel) and the derived
present day mass function (PDMF; right panel). The PDMF, under specific conditions, is an approximate representation of the IMF.
The shape of the present day mass function of stars in NGC 1348 for masses $\ge$ 1 $M_{sol}$ can be approximated by a
power law of the form\\
\begin{equation}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\log\frac{dN}{dM}=-(1+x)\log(M)+constant\\
\end{equation}
where $dN$ is the number of stars in the mass interval $M$ to $M+dM$. We derive a value of $x=1.30\pm0.18$, which is close to the value of 1.35
derived by Salpeter (1955) for the nearby Galactic field. It is worth mentioning that the slope of the mass function of the Galactic
field is being constantly updated using more modern data and sophisticated inference techniques. A recent work by Mor et al. (2019)
inferred a shallower than Salpeter slope for the Galactic IMF (close to -1) which is in agreement with the theoretical prediction
of Dib \& Basu (2018). Dib et al. (2017) inferred the distribution function of the slope of the IMF for a large population of Galactic
clusters and found that it is well represented by a Gaussian distribution centered around the Salpeter value but with a standard deviation
of ~0.6. For NGC 1348, our derived value falls well within this range, when considering the uncertainty we have measured
for the slope. The total mass was obtained as $\sim$215 $M_{sol}$.
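Schematically, the slope is obtained from a weighted linear fit of $\log(dN/dM)$ against $\log M$; the binning below is our choice for illustration and not necessarily the one adopted here.
\begin{verbatim}
# Mass-function slope x from log(dN/dM) = -(1+x) log(M) + const,
# fitted to the binned masses of the most probable members (M >= 1 Msun).
import numpy as np

def mass_function_slope(masses, bins):
    counts, edges = np.histogram(masses, bins=bins)
    dm = np.diff(edges)
    logm = np.log10(0.5 * (edges[1:] + edges[:-1]))
    keep = counts > 0
    y = np.log10(counts[keep] / dm[keep])
    yerr = 1.0 / (np.sqrt(counts[keep]) * np.log(10.0))  # Poisson error on log10 N
    coef, cov = np.polyfit(logm[keep], y, 1, w=1.0 / yerr, cov=True)
    x = -(1.0 + coef[0])
    return x, np.sqrt(cov[0, 0])
\end{verbatim}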
\subsection{Mass-segregation study}
The mass segregation effect in the clusters has been described by many authors
(e.g. Sagar et al. 1988; Hillenbrand \& Hartmann 1998; Fisher et al. 1998;
Meylan 2000; Baumgardt \& Makino 2003; Dib, Schmeja \& Parker 2018;
Dib \& Henning 2019; Alcock \& Parker 2019).
To understand this effect in NGC 1348, we divided
the mass range in two subranges as 1.5$\le\frac{M}{M_{\odot}}\le$~4.1 and
1$\le\frac{M}{M_{\odot}}\le$~1.5. The cumulative radial distribution of stars for the two different mass ranges is
shown in Fig. \ref{mass_seg}. This figure demonstrates the mass-segregation effect, as bright stars appear to be more centrally
concentrated than the low mass members. This has been checked through the Kolmogorov-Smirnov (K-S) test. In this way, we found
that the confidence level of the mass-segregation effect is 91 $\%$.
The possible reason of the mass-segregation effect generally differs from one cluster to another. This may be because of dynamical
evolution or could be an imprint of star formation or both
(Dib, Kim \& Shadmehri 2007; Allison et al. 2009; Pavlik 2020).
The most important result of this process is that the most massive stars sink gradually towards the cluster center and
transfer their kinetic energy to the more numerous lower-mass stars, thus leading to mass segregation. The relaxation time $T_{ES}$
is defined as the time in which the stellar velocity distribution becomes Maxwellian and is expressed by the following formula:\\
\begin{equation}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~T_{ES}=\frac{8.9\times10^5\sqrt{N}\times{R_{h}}^{3/2}}{\sqrt{\bar{m}}\times log(0.4N)}\\
\end{equation}
\begin{figure}
\begin{center}
\hbox{
\includegraphics[width=8.2cm, height=8.2cm]{Fig12a.ps}
\includegraphics[width=8.2cm, height=8.2cm]{Fig12b.ps}
}
\caption{(Left panel) Luminosity function of stars in the region of the cluster NGC 1348. (Right panel) Mass function
derived using the most probable members, where solid line indicates the power law given by Salpeter (1955).
The error bars represent $\frac{1}{\sqrt{N}}$.}
\label{lf_mass}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=8.2cm, height=8.2cm]{Fig13.ps}
\caption{The cumulative radial distribution for NGC 1348 using the most probable members in several mass ranges.}
\label{mass_seg}
\end{center}
\end{figure}
where $N$ represents the number of stars in the clusters (in our case the ones with membership probability higher than 50$\%$),
$R_{h}$ is the cluster half mass radius expressed in parsec and $\bar{m}$ is the average mass of the cluster members
(Spitzer \& Hart 1971) in the solar unit. The value of $\bar{m}$ is found as 2.07 $M_{\odot}$. The value of $R_{h}$ is assumed to
be equal to half of the cluster's extent. Using the above formula, the value of dynamical relaxation time $T_{ES}$ is determined
as 18 Myr\footnote{This value is obtained using stars with mass $\ge$ 1 $M_{\odot}$. If we include the low
mass stars ($\ge$ 0.1 $M_{\odot}$), the value of the relaxation time becomes $\sim$ 52 Myr. In either case, the cluster is dynamically relaxed according
to this study.}. Hence, we conclude that NGC 1348 is a dynamically relaxed cluster.
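The numerical evaluation of the relaxation-time formula above is straightforward; in the sketch below we assume the decimal logarithm in the denominator, as is commonly adopted for this formula.
\begin{verbatim}
# Spitzer & Hart (1971) relaxation time,
# T_E = 8.9e5 * sqrt(N) * R_h^1.5 / (sqrt(m_mean) * log10(0.4 N))  [yr],
# with R_h in pc and m_mean in solar masses (log10 assumed here).
import numpy as np

def relaxation_time_myr(n_stars, r_half_pc, mean_mass_msun):
    t_yr = (8.9e5 * np.sqrt(n_stars) * r_half_pc ** 1.5
            / (np.sqrt(mean_mass_msun) * np.log10(0.4 * n_stars)))
    return t_yr / 1.0e6
\end{verbatim}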
\section{Kinematical structure of NGC 1348}
$\bullet$ {\textit{\textbf{Vertex (apex position) of the cluster}}}
The apex coordinates are obtained by solving the geometric problem of finding the intersection, on the celestial sphere, of the spatial velocity vectors
(i.e. $V_{x}$, $V_{y}$, $V_{z}$) of the stars, once the origins of these vectors are moved to the point
of observation. A formal description of the method, the diagramming technique, and the formulas to determine the error ellipses can be found
in Chupina et al. (2001, 2006). This method has been used previously by some of us (Vereshchagin et al. 2014, Elsanhoury et al. 2018,
Elsanhoury 2020a, 2020b, Postnikova et al. 2020).
The equatorial coordinates of the convergent point have the following form:
\begin{equation}
~~~~~~~~~~~~~~~~~~~A_{\circ}=\tan^{-1}\Big[\frac{\overline{V_y}}{\overline{V_x}}\Big].
\end{equation}
\begin{equation}
~~~~~~~~~~~~~~~~~~~~D_{\circ}=\tan^{-1}\Big[\frac{\overline{V_z}}{\sqrt{\overline{V_x}^2+\overline{V_y}^2}}\Big].
\end{equation}
The apex equatorial coordinates for NGC 1348 are presented in Fig. \ref{cp1}.
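The evaluation of the two expressions above from the mean space-velocity components listed in Table~\ref{all} is immediate; we use the two-argument arctangent to keep the correct quadrant.
\begin{verbatim}
# Apex coordinates from the mean equatorial space-velocity components (km/s).
import numpy as np

def apex_coordinates(vx, vy, vz):
    a0 = np.degrees(np.arctan2(vy, vx))
    d0 = np.degrees(np.arctan2(vz, np.hypot(vx, vy)))
    return a0, d0

# apex_coordinates(94.40, -41.63, -42.13) gives about (-23.8, -22.2) deg,
# consistent with the (A_o, D_o) values quoted in the text.
\end{verbatim}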
\begin{figure}
\centering
\includegraphics[width=10.5cm,height=8.5cm]{Fig14.ps}
\caption{The convergent point of NGC 1348 open cluster with AD-chart method, showing the apex coordinates $(A_{o},
D_{o})= (13.865\pm0.238, -48.348\pm0.144)$.}
\label{cp1}
\end{figure}
We have also derived several kinematical parameters,
for example, the matrix elements $(\mu_{ij})$,
direction cosines $(l_{j}, m_{j}, n_{j})$ etc.
using techniques described by Bisht et al. (2020).
All these parameters are listed in Table~\ref{all}.
\begin{table*}
\small
\caption{Dynamical and kinematical parameters of NGC 1348.}
\begin{tabular}{lll}
\hline
Parameters & Numerical values & Reference \\ \hline
No. of members (N) & 438 &Present study \\
Age (log) &8.20 &Present study \\
$(A_{o}, D_{o})$& $-23.815\pm0.135$, $-22.228\pm0.105$
&Present study \\
Cluster radius (arcmin) & 7.5 & Present study \\
Cluster radius (pc) & 5.67 & Present study \\
$T_{ES}$(Myr) &18.00 & Present study \\
$\tau$ & 8.886 $\pm$ 0.413 & Present study\\
$(\overline{V_{x}}, \overline{V_{y}}, \overline{V_{z}})$, (km s\textsuperscript{-1}) & $94.40\pm0.10$, $-41.63\pm0.15$, $-42.13\pm0.15$ & Present study \\
$(\overline{V_{\alpha}}, \overline{V_{\delta}}, \overline{V_{t}})$, (km s\textsuperscript{-1}) & $-100.57\pm0.10$, $-43.95\pm0.15$, $153.78\pm0.09$ & Present study \\
($\lambda_{1}$, $\lambda_{2}$, $\lambda_{3}$) (km s\textsuperscript{-1}) & 5765040, 19837.3, 348.674 & Present study \\
($\sigma_{1}$, $\sigma_{2}$, $\sigma_{3}$) (km s\textsuperscript{-1}) & 2401.05, 140.845, 18.673 & Present study \\
$(l_{1}, m_{1}, n_{1})$\textsuperscript{o} & 0.339, 0.404, $-0.850$ & Present study \\
$(l_{2}, m_{2}, n_{2})$\textsuperscript{o} & $-0.431$, $-0.737$, $-0.522$ & Present study \\
$(l_{3}, m_{3}, n_{3})$\textsuperscript{o} & 0.837, $-0.543$, 0.076 & Present study \\
$(x_{c}, y_{c}, z_{c})$ (kpc) & $-8.811$, $-11.911$, $-18.486$ & Present study \\
$B_{j}$, j=1, 2, 3 & $-58\textsuperscript{o}.197$, $-31\textsuperscript{o}.436$, $4\textsuperscript{o}.340$ & Present study \\
$L_{j}$, j=1, 2, 3 & $-50\textsuperscript{o}.030$, $120\textsuperscript{o}.283$, $-147\textsuperscript{o}.059$ & Present study \\
$X_{\odot}$ (kpc) & $-2.175\pm0.047$& Present study \\
& -1.933 & Cantat-Gaudin \textit{et al.} (2020) \\
$Y_{\odot}$ (kpc) & $1.414\pm0.038$& Present study \\
& 1.2568 & Cantat-Gaudin \textit{et al.} (2020) \\
$Z_{\odot}$ (kpc) & $-0.168\pm0.013$& Present study \\
& -0.1495 & Cantat-Gaudin \textit{et al.} (2020) \\
$R_{gc}$ (kpc) & $10.476\pm0.102$& Present study \\
& 10.349 & Cantat-Gaudin \textit{et al.} (2020) \\
$S_{\odot}$ (km/s) & 111.36 & Present study \\
($l_{A}, \alpha_{A})_{w.s.v.c.}$ & $-32.21$, $56.58$ & Present study \\
($b_{A}, \delta_{A})_{w.s.v.c.}$ & $-23.82$, $22.23$ & Present study \\
\hline
\end{tabular}
\label{all}
\end{table*}
\section{Conclusions}
\label{con}
We conducted an exhaustive photometric and kinematical study of the poorly studied northern open cluster NGC 1348
using UKIDSS, WISE, APASS, Pan-STARRS1 and Gaia~EDR3 data sets. We calculated the membership probabilities of the stars in
NGC 1348 and hence found 438 member stars with membership probabilities higher than $50\%$ and G$\le$20 mag.
To derive the fundamental parameters of the cluster,
we used only these selected member stars. We also shed some light on
the dynamical and kinematical properties of the cluster. Our main findings are summarized as follows:
\begin{itemize}
\item The cluster's center is obtained as: $\alpha = 53.51\pm0.03$ deg ($3^{h} 34^{m} 2.3^{s}$)
and $\delta = 51.41\pm0.02$ deg ($51^{\circ} 24^{\prime} 36^{\prime\prime}$) with the help of the most
probable cluster members. The radius of the cluster is determined as 7.5 arcmin using a
radial density profile.\
\item Based on the vector point diagram and membership probability estimation of stars, we identified 438 most
probable cluster members for this object. The mean PM of the cluster is estimated as $1.27\pm0.001$ and
$-0.73\pm0.002$ mas yr$^{-1}$ in the RA and DEC directions, respectively.\
\item The distance is determined as $2.6\pm0.05$ kpc from the mean parallax of the cluster members. This value is in fair agreement with the distance estimated
from the isochrone fitting.
The age is determined as $160\pm40$ Myr by comparing the cluster's CMDs with
the theoretical isochrones given by Marigo et al. (2017).\
\item The mass function slope is estimated as $1.30\pm0.18$, which is in good agreement with the value (1.35) given
by Salpeter (1955) for field stars in the Solar neighborhood.\
\item Mass segregation is also observed for NGC 1348. The K-S test indicates $91\%$ confidence level of
the mass-segregation effect. Our study indicates that NGC 1348 is a dynamically relaxed open cluster.\
\item The Galactic orbits and orbital parameters were estimated using Galactic potential models. We found that
NGC 1348 is orbiting in a boxy pattern.
\item The apex position $(A, D)$ is computed with the AD-chart method as
$(A_\circ, D_\circ)$ = ($-$23$^{\textrm{o}}$.815 $\pm$ 0$^{\textrm{o}}$.135, $-$22$^{\textrm{o}}$.228 $\pm$ 0$^{\textrm{o}}$.105).\
\item We computed the direction cosines ($l_{j}, m_{j}, n_{j}$) in three axes.\
\item The projected distances $(X_{\odot}, Y_{\odot}, Z_{\odot})$ are computed as ($-$2.175 $\pm$ 0.047, 1.414 $\pm$ 0.038,
$-$0.168 $\pm$ 0.013) kpc and the Solar elements $(S_\odot, l_A, b_A)$ are derived as $(111.36, -32^{\textrm{o}}.21, 56^{\textrm{o}}.58)$.
\end{itemize}
{\bf ACKNOWLEDGMENTS}\\
The authors thank the anonymous referee for the useful comments that improved the scientific content of the article
significantly. This work has been financially supported by the Natural Science Foundation of China (NSFC-11590782, NSFC-11421303).
Devesh P. Sariya and Ing-Guey Jiang are supported by the grant from the Ministry of Science and Technology (MOST),
Taiwan. The grant numbers are MOST 105-2119-M-007 -029 -MY3 and MOST 106-2112-M-007 -006 -MY3. This work has made use of
data from the European Space Agency (ESA) mission GAIA processed by Gaia Data processing and Analysis Consortium (DPAC),
(https://www.cosmos.esa.int/web/gaia/dpac/consortium).
\section{Introduction}
Lie groups provide a way to express the concept of a continuous family of symmetries for geometric objects. There exists a correspondence between Lie groups as geometric objects and Lie algebras as linear objects. By differentiating a Lie group action, one obtains a Lie algebra action, which is a linearization of the group action. As a linear object, a Lie algebra is often a lot easier to work with than the corresponding Lie group. Whenever one studies different kinds of differential geometry (Riemannian, Kahler, symplectic, etc.), there is always a Lie group and a Lie algebra lurking around, either explicitly or implicitly.
It is possible to learn each particular specific geometry and work with the specific Lie group and Lie algebra without learning anything about the general theory. However, it can be extremely useful to know the general theory and find common techniques that apply to different types of geometric structures.
Moreover, the general theory of Lie groups and algebras leads to a rich assortment of important explicit examples of geometric objects.
So the importance of Lie groups leads to the importance of their generalizations. In this paper we deal with three different kinds of generalizations of Lie groups, namely Lie groupoids, double Lie groupoids and generalized Lie groups or top spaces.
The notion of a groupoid was introduced by H. Brandt in 1926. C. Ehresmann used the concept of Lie groupoid as an essential tool in topology and differential geometry around 1950.
The notion of a double Lie groupoid was introduced by K. Mackenzie \cite{mac}. A double Lie groupoid is essentially a groupoid object in the category of Lie groupoids. We can present a double Lie groupoid as a square
$$\xymatrix{
V \ar @{>} @<2pt>[d] \ar @{>} @<-2pt>[d] \ar @{<-} @<2pt>[r] \ar @{<-} @<-2pt> [r] &D \ar @{>} @<2pt>[d] \ar @{>} @<-2pt>[d]\\
M \ar @{<-} @<2pt>[r] \ar @{<-} @<-2pt> [r] &H}$$
where each edge is a groupoid and the various groupoid structures satisfy certain compatibility conditions.
Top spaces, as a generalization of Lie groups, were introduced by M. R. Molaei in 1998. In this generalized setting, several authors (Araujo, Molaei, Mehrabi, Oloomi, Tahmoresi, Ebrahimi, etc.) have studied different aspects of generalized groups and top spaces \cite{mo,mb}.
In section 2, after introducing some basic definitions, we prove that every subgroupoid of a Lie groupoid with the same dimension is a Lie subgroupoid, and we show the analogous result for double Lie groupoids.
In section 3 we provide some useful tools to prove a theorem similar to Cartan's theorem in the Lie group case, i.e. we show that each closed generalized subgroup of a top space is a top subspace.
\section{Generalized subgroups as Lie generalized subgroups}
E. Cartan showed that every closed subgroup of a Lie group is a Lie subgroup. In this section we give some conditions under which every subgroupoid (double subgroupoid) of a Lie groupoid (double Lie groupoid) is a Lie subgroupoid (double Lie subgroupoid).
A groupoid is a category in which every arrow is invertible. More precisely, a groupoid consists of two sets $G$ and $G_{0}$, called the set of morphisms (or arrows) and the set of objects of the groupoid respectively, together with two maps $\alpha , \beta:G\longrightarrow G_{0}$ called the source and target maps respectively, a map $1_{0}:G_{0}\longrightarrow G, \ x\longmapsto 1_{x}$ called the object (unit) map, an inverse map $i:G\longrightarrow G, \ a\longmapsto a^{-1}$ and a composition $ G_{2}=G_{\ \alpha}\times_{\beta}G\longrightarrow G, \ (b,a)\longmapsto boa$ defined on the pullback set
$$G_{\ \alpha}\times_{\beta}G=\lbrace(b,a)\in G\times G\vert \alpha(b)=\beta(a)\rbrace.$$
These maps should satisfy the following conditions:
\begin{itemize}
\item[i.]$\alpha(boa)=\alpha(a)$ and $\beta(boa)=\beta(b)$ for all $(boa)\in G_{2}$;
\item[ii.] $co(boa)=(cob)oa$ such that $\alpha(b)=\beta(a)$ and $\alpha(c)=\beta(b)$, for all
$a,b,c\in G$;
\item[iii.]$\alpha(1_{x})=\beta(1_{x})=x$, for all $x\in G_{0}$;
\item[iv.]$ao1_{\alpha(a)}=a$ and $1_{\beta(a)}oa=a$, for all $a\in G$;
\item[v.]$\alpha(a^{-1})=\beta(a)$ and $\beta(a^{-1})=\alpha(a)$, for all $a\in G$.\cite{mac}
\end{itemize}
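We record the following standard example for concreteness; it will also be implicit in the morphism $\chi$ and in the counterexample appearing later in this section.
\begin{exm}
Any set $X$ gives rise to the pair groupoid $G=X\times X$ over $G_{0}=X$, with $\alpha(x,y)=y$, $\beta(x,y)=x$, $1_{z}=(z,z)$, $(x,y)^{-1}=(y,x)$ and composition $(x,y)o(y,z)=(x,z)$. The conditions (i)--(v) above are verified directly; for instance $\alpha((x,y)o(y,z))=z=\alpha((y,z))$ and $\beta((x,y)o(y,z))=x=\beta((x,y))$. When $X$ is a manifold, this is a Lie groupoid.
\end{exm}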
Let $(G,G_{0})$ be a groupoid, $M$ be a manifold and $w:M\longrightarrow G_{0}$ be a submersion. An action of $G$ on $M$ via $w$ is a smooth map $\varphi:G_{\ \alpha}\times_{w}M\longrightarrow M, \ (a,x)\longmapsto a.x$, satisfying the conditions:
\begin{itemize}
\item[i.] $w(a.x)=\beta(a)$;
\item[ii.] $b.(a.x)=(boa).x$;
\item[iii.] $1_{w(x)}.x=x$.\cite{fa}
\end{itemize}
\begin{defn}\label{GSub}
A groupoid $(H,H_{0})$ together with a pair $(i,i_{0})$ is called a subgroupoid of $(G,G_{0})$ if $$i:H\longrightarrow G,~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~i_{0}:H_{0}\longrightarrow G_{0},$$ are injective and $(i,i_{0})$ is a morphism of groupoids.
\end{defn}
Here is an example of definition \ref{GSub}.
\begin{exm}
Let $G$ be a groupoid acting on $M$. For each $m\in M$, the set
$$H(m)=\lbrace a\in G\vert a.m=m\rbrace,$$
is a subgroupoid of $G$, which is called the stabilizer of $m$ in $G$. Let $G^{'}=H(m)$ and $$G_{0}^{'}=\alpha(G^{'})\bigcap\beta (G^{'});$$ we show that $(G^{\prime}, G_{0}^{\prime})$ is a subgroupoid of $(G,G_{0})$. Note that $\alpha (G^{'})\subset G_{0}^{'}$, $\beta (G^{'})\subset G_{0}^{'}$ and $aob\in G^{'}$, for all $a,b\in G^{'}$. Let $g\in 1_{G_{0}^{'}}$; then $g=1_{x}$, for some $x\in G_{0}^{'}$. Therefore,
$x\in \beta(G^{'})$. In other words there exists $ g^{'}\in G^{'}$ such that $x=\beta(g^{'})$.
Therefore $g^{'}.m=m$ and we have
$$g.m=1_{x}.m=1_{\beta(g^{'})}.m=1_{w(g^{'}.m)}.m =1_{w(m)}.m=m.$$
Let $a\in G^{'},$ then
$$a^{-1}.m=a^{-1}.(a.m)=1_{\alpha(a)}.m=1_{w(m)}.m=m.$$
Hence $(G^{\prime},G_{0}^{\prime})$ with the restriction of source and target maps of $(G,G_{0})$ is a groupoid, $i$ and $i_{0}$ are injective and $(i,i_{0})$ is a morphism of groupoids.
\end{exm}
Now we recall Lie groupoids and morphisms of groupoids. A groupoid $(G,G_{0})$ is called a Lie groupoid if $G$ and $G_{0}$ are manifolds, $\alpha$ and $\beta$ are surjective submersions and the composition is a smooth map. For example, any manifold $M$ may be regarded as a Lie groupoid over itself with $$\alpha=\beta=id_{M},$$ and every element a unit.\cite{mac}
Let $(G,G_{0})$ and $(G^{'},G_{0}^{'})$ be groupoids. A morphism $G\rightarrow G^{\prime}$ is a pair of maps $F:G\rightarrow G^{\prime}$, $f:G_{0}\rightarrow G_{0}^{\prime}$ such that $\alpha^{\prime} oF=fo\alpha$, $\beta^{\prime} oF=fo\beta$ and $F(hg)=F(h)F(g)$. Let $(G,G_{0})$ and $(G^{'},G_{0}^{'})$ be Lie groupoids, then $(F,f)$ is a morphism of Lie groupoids if $F$ and $f$ are smooth.\cite{mz}
\begin{exm}
For any groupoid $(G,G_{0})$, the map
$$\chi=(\beta,\alpha):G\rightarrow G_{0}\times G_{0}, g\mapsto(\beta(g),\alpha(g)),$$
is a morphism from $G$ to the pair groupoid $G_{0}\times G_{0}$.
\end{exm}
\begin{defn}
\cite{mac}
Let $(G,G_{0})$ be a Lie groupoid. A Lie subgroupoid of $(G,G_{0})$ is a Lie groupoid $(H,H_{0})$ together with injective immersions $i:H\rightarrow G$ and $i_{0}:H_{0}\rightarrow G_{0}$ such that $(i,i_{0})$ is a morphism of Lie groupoids.
\end{defn}
If $(i,i_{0})$ is a morphism of Lie groupoids and $i$ is an injective immersion, then $i_{0}$ is an injective immersion too.
\begin{defn}
\cite{man} Let $G$ be a manifold. A Lie algebroid on $G$ is a vector bundle
$( E, p,G)$ together with a vector bundle map $\sigma:E\longrightarrow TG$ called the anchor of $E$, and a bracket $[.,.]$ on sections of $E$, i.e. $\Gamma(E)$, which is R-bilinear and alternating, satisfying the Jacobi identity, and is such that
$$[X,fY]=(\sigma(X).f)\,Y+f[X,Y],$$
$$\sigma([X,Y])=[\sigma(X),\sigma(Y)],$$
where $X,Y\in \Gamma(E)$ and $f\in C^{\infty}(G)$.
\end{defn}
Let $(G, G_{0})$ be a Lie groupoid with the object map $1_{0}:G_{0}\rightarrow G$ and let
$$T^{\alpha}G:=ker(T\alpha),$$
where $T\alpha$ be the bundle map introduced by $\alpha: G\rightarrow G_{0}$, i.e.
$$T\alpha:TG\longrightarrow TG_{0}.$$
The pullback bundle $AG:=1_{0}^{\ast}(T^{\alpha}G)$ is the Lie algebroid of the Lie groupoid $(G,G_{0})$. We have $AG=\cup_{x\in G_{0}}T_{1_{x}}(G_{x})$, where $G_{x}=\alpha^{-1}(x)$.
Now we recall the topological dimension.
A collection $\mathcal{A}$ of subsets of the space $X$ is said to have order $m+1$ if some point of $X$ lies in $m+1$ elements of $\mathcal{A}$ and no point of $X$ lies in more than $m+1$ elements of $\mathcal{A}$. We recall that given a collection $\mathcal{A}$ of subsets of $X$, a collection $\mathcal{B}$ is said to refine $\mathcal{A}$, or to be a refinement of $\mathcal{A}$, if each element $\beta$ of $\mathcal{B}$ is contained in at least one element of $\mathcal{A}$. A space $X$ is said to be finite dimensional if there is some integer $m$ such that for every open covering $\mathcal{A}$ of $X$, there is an open covering $\mathcal{B}$ of $X$ that refines $\mathcal{A}$ and has order at most $m+1$. The topological dimension of $X$ is defined to be the smallest value of $m$ for which this statement holds. The topological dimension of any m-manifold is at most $m$.\cite{man}
In the following theorem we present an important fact about Lie subgroupoids.
\begin{thm}\label{asli}
Let $(H,H_{0})$ with $(i,i_{0})$ be a subgroupoid of a Lie groupoid $(G,G_{0})$ which satisfies the following conditions: $H_{0}$ is a submanifold of $G_{0}$, $i_{0}$ is a submersion, $i$ is injective, and the topological dimension of $i(H)$ equals the dimension of $G$. Then $(H,H_{0})$ is a Lie subgroupoid.
\end{thm}
\begin{proof}
Step 1.
We show that $i(H)$ is a submanifold of $G$. Let $E$ be the Lie algebroid of $(G,G_{0})$; since $E$ is a vector bundle over $G$, there is a smooth, surjective and locally trivial map $P:E\longrightarrow G$, i.e. for every $x\in i(H)$ there exists a neighborhood $U\subset G$ of $x$ and a fiber-preserving diffeomorphism
$$\varphi: P^{-1}(U)\longrightarrow U\times R^{k}.$$
Therefore $\varphi$ maps $P^{-1}(i(H)\cap U)$ to $(i(H)\cap U)\times R^{k}$.
The restriction of $\varphi^{-1}$ to $U$ is a diffeomorphism onto $\varphi^{-1}\vert_{U}(U)$.
Suppose $\psi:W\rightarrow R^{n}$ is a chart of $E$ such that $P^{-1}(x)\subset W$. Let $$V=\varphi(W\cap\varphi^{-1}\vert_{U}(U))\cap i(H),$$
then $\psi \circ\varphi^{-1}\vert_{V}$ is a chart for $i(H)$.
Since the topological dimension of $i(H)$ and the dimension of $G$ are equal and the topological dimension is a topological invariant \cite{pe}, these charts are $C^{\infty}$-related.
Step 2.
One can see that $H$ is a manifold. We define $U\subset H$ to be open if $i(U)$ is open in $i(H)$.
For every $h\in H$ there exists $x\in i(H)$ such that $h=i^{-1}(x)$. If $\eta$ is a chart at $x$, then $\eta \circ i$ is a chart at $h$.
Step 3.
The groupoid $(H,H_{0})$ with $(i, i_{0})$ is a Lie subgroupoid of $(G,G_{0})$: the map $i:H\rightarrow i(H)$ is a diffeomorphism and hence an immersion, so $(i,i_{0})$ is a morphism of Lie groupoids. We have $\alpha_{G}\circ i=i_{0}\circ\alpha_{H}$ and $\beta_{G}\circ i=i_{0}\circ\beta_{H}$, so $\alpha_{H}$ and $\beta_{H}$ are submersions.
Therefore $(H,H_{0})$ is a Lie subgroupoid of $(G,G_{0})$.
\end{proof}
Note that if the topological dimension of $i(H)$ and the dimension of $G$ are not equal, the conclusion of the previous theorem need not hold.
\begin{exm}
Consider the groupoids $(R^{2},R)$ and $(H,R)$, where $$H=\{(x,y)\in R^{2}\,\vert\, x^{2}=y^{2}\}.$$
$H$ is a subgroupoid of $(R^{2},R)$, but $H$ does not admit the structure of a manifold.
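Indeed, with the pair groupoid structure on $R^{2}$ over $R$ (presumably the structure intended here), where $(x,y)(y,z)=(x,z)$, $1_{x}=(x,x)$ and $(x,y)^{-1}=(y,x)$, the set $H$ is the union of the two lines $y=\pm x$; it contains all units and is closed under composition and inversion, since $x^{2}=y^{2}$ and $y^{2}=z^{2}$ imply $x^{2}=z^{2}$. However, removing the point $(0,0)$ from a small neighbourhood of it in $H$ leaves four connected components, which is impossible in a space locally homeomorphic to an open subset of some $R^{n}$; note also that the topological dimension of $H$ is $1<2=\dim R^{2}$, so the dimension hypothesis of theorem \ref{asli} fails.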
\end{exm}
A double groupoid $(D ,V ,H ,M )$ is a higher-dimensional groupoid carrying compatible horizontal and vertical groupoid structures. In the following we define double Lie groupoids in detail.
\begin{defn}\cite{meh}
Let $ \alpha_{H},\beta_{H}:D\rightarrow H $ and $ \alpha_{V},\beta_{V}:D\rightarrow V $ denote the horizontal and vertical source and target maps, respectively, and let $ \alpha $, $ \beta $ denote the source and target maps from $ H $ or $ V $ to $ M $. Then
the square
$$\xymatrix{
V \ar @{>} @<2pt>[d] \ar @{>} @<-2pt>[d] \ar @{<-} @<2pt>[r] \ar @{<-} @<-2pt> [r] &D \ar @{>} @<2pt>[d] \ar @{>} @<-2pt>[d]\\
M \ar @{<-} @<2pt>[r] \ar @{<-} @<-2pt> [r] &H}$$
is a double Lie groupoid if the following conditions hold:
\begin{itemize}
\item[i.] The horizontal and vertical source and target maps commute;
\begin{center}
$ \alpha \circ\alpha_{H}=\alpha \circ\alpha_{V}\quad \quad \beta \circ\alpha_{H}=\alpha \circ\beta_{V} $\\
$ \beta \circ\beta_{H}=\beta \circ\beta_{V}\quad \quad \alpha \circ\beta_{H}=\beta \circ\alpha_{V} $
\end{center}
\item[ii.] $ \alpha_{V}(s_{1} ._{H} s_{2})=(\alpha_{V}s_{1}).(\alpha_{V}s_{2})
, \alpha_{H}(s_{1} ._{V} s_{3})=(\alpha_{H}s_{1}).(\alpha_{H}s_{3}) $ and similar equations hold for $ \beta_{H},\beta_{V}; $
\item[iii.] $ \alpha_{V}(s_{i1})=\beta_{V}(s_{i2}) $ and $ \alpha_{H}(s_{1i})=\beta_{H}(s_{2i}) $, for $ i=1,2,... ;$
\item[iv.] The double source map $ (\alpha_{V},\alpha_{H}):D\rightarrow V_{s\times s}H $ is a submersion.
\end{itemize}
\end{defn}
\begin{defn}\label{LiDoSub}
Let $(D,V,H,M)$ be a double groupoid. A double subgroupoid is a double groupoid $ (D_{1},V_{1} ,H_{1},M_{1} ) $ such that groupoids $ (D_{1} , H_{1} )$, $ (D_{1} , V_{1} ) $, $( V_{1} , M_{1} ) $ and $( H_{1} , M_{1} ) $ are subgroupoids of $ (D , H )$, $ (D , V ) $, $ (V , M ) $ and $ (H , M ) $, respectively.
\\
Let $(D,V,H,M)$ be a double Lie groupoid. A double Lie subgroupoid is a double Lie groupoid $ (D_{1},V_{1},H_{1},M_{1})$ together with maps $(j,j_{0},i,i_{0})$ such that Lie groupoids $(D_{1},H_{1}) $, $(D_{1},V_{1})$, $(V_{1},M_{1})$ and $(H_{1},M_{1})$ are Lie subgroupoids of $(D,H)$, $(D,V)$, $(V,M)$ and $(H,M)$, with maps $(j,i)$, $(j,j_{0})$, $(j_{0},i_{0})$ and $(i,i_{0})$, respectively.
\end{defn}
The next theorem is a generalization of theorem \ref{asli} for double Lie groupoids.
\begin{thm}
Let $ (D_{1},V_{1},H_{1},M_{1}) $ with $(j, j_{0}, i, i_{0}) $ be a double subgroupoid of $(D,V,H,M)$. If $\dim H_{1} =\dim H$, $\dim D_{1} =\dim D $ and $\dim V_{1}=\dim V $, and
$ j_{0} $ and $ i_{0} $ are injective immersions, then $ (D_{1},V_{1},H_{1},M_{1}) $ is a double Lie subgroupoid.
\end{thm}
\begin{displaymath}
\xymatrix@!0{
& D_{1} \ar@<-0.5ex>[rrr]\ar@<0.5ex>[rrr]\ar@<-0.5ex>' [dd] [ddd]\ar@<0.5ex>' [dd] [ddd]
& & & V_{1} \ar@<-0.5ex>[ddd]\ar@<0.5ex>[ddd]
\\
& & & &
\\
D \ar@{<-}[uur]^{j}\ar@<-0.5ex>[rrr]\ar@<0.5ex>[rrr]\ar@<-0.5ex>[ddd]\ar@<0.5ex>[ddd]
& & & V \ar@{<-}[uur]^{j_{0}}\ar@<-0.5ex>[ddd]\ar@<0.5ex>[ddd]
\\
& H_{1}\ar@<-0.5ex>' [rr] [rrr]\ar@<0.5ex>' [rr] [rrr]
& & & M_{1}
\\
& & & &
\\
H \ar@{<-}[uur]_{i} \ar@<-0.5ex>[rrr]\ar@<0.5ex>[rrr]
& & & M \ar@{<-}[uur]_{i_{0}}
}
\end{displaymath}
\begin{proof}
We suppose that $ (D,V,H,M)$ is a double Lie groupoid and $ (D_{1},V_{1},H_{1},M_{1}) $ is a double subgroupoid. We show that $ (D_{1},V_{1},H_{1},M_{1}) $ has the structure of a double Lie groupoid.
Applying the proof of theorem \ref{asli} to both vertical sides and both horizontal sides of the double Lie groupoid $ (D,V,H,M)$, we get that $ (D_{1},H_{1})$, $(D_{1},V_{1})$, $(V_{1},M_{1})$ and $(H_{1},M_{1})$ are Lie subgroupoids of $ (D,H)$, $(D,V)$, $(V,M)$ and $(H,M)$, respectively. Then by definition \ref{LiDoSub}, $ (D_{1},V_{1},H_{1},M_{1}) $ is a double Lie subgroupoid.
\end{proof}
\section{Cartan's theorem for generalized Lie groups}
Another generalization of Lie groups is given by generalized Lie groups, or top spaces, which arise from the definition of a generalized group. In this section we prove a theorem analogous to Cartan's theorem for Lie groups.
\begin{defn}
\cite{mo}
A generalized group is a non-empty set $\mathcal{G}$ admitting an operation called multiplication which satisfies the following conditions:
\begin{itemize}
\item[i.] $(g_1 . g_2) . g_3 = g_1 . (g_2 . g_3)$, for all $g_1, g_2, g_3 \in\mathcal{G}$.
\item[ii.] For each $g \in \mathcal{G}$ there exists a unique $e(g)$ in $\mathcal{G}$ such that $$g . e(g) = e(g) . g = g.$$
\item[iii.] For each $g \in \mathcal{G}$ there exists $h \in \mathcal{G}$ such that $g . h = h . g = e(g)$.
\end{itemize}
\end{defn}
In this paper by $e(\mathcal{G})$ we mean $$\{e(g):g\in \mathcal{G}\}.$$ For any generalized group $\mathcal{G}$, and any $g\in \mathcal{G}$,
$$e^{-1}(e(g))=\{h\in \mathcal{G}|e(h)=e(g)\},$$ has a canonical group structure.
If $e(g)e(h)=e(gh)$ for all $g,h\in \mathcal{G}$ then $e(\mathcal{G})$ is an idempotent semigroup with this product.
A top space is a smooth manifold whose points can be multiplied smoothly by a generalized group operation and whose identity map is a semigroup morphism, i.e.
\begin{defn}\cite{bull}
A top space $T$ is a Hausdorff $d$-dimensional differentiable manifold which is
endowed with a generalized group structure such that the generalized group operations:
\begin{itemize}
\item[i.] $. : T \times T\rightarrow T$ by $(t_1, t_2) \mapsto t_1 . t_2$ which is called the multiplication map;
\item[ii.] $^{-1} : T \rightarrow T$ by $t \mapsto t^{-1}$ which is called the inverse map;\\
are differentiable and it holds
\item[iii.] $e(t_1 . t_2) = e(t_1) . e(t_2)$, for all $t_1
, t_2 \in T$ .
\end{itemize}
\end{defn}
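A concrete example, written out only for illustration: let $T=\mathbb{R}\times\mathbb{R}^{*}$ with the smooth multiplication
$$(x,a).(y,b)=(x,ab).$$
Associativity is immediate, $e(x,a)=(x,1)$ is the unique two-sided identity of $(x,a)$, and $(x,a)^{-1}=(x,a^{-1})$. Moreover $e\big((x,a).(y,b)\big)=(x,1)=e(x,a).e(y,b)$, so $T$ is a top space, with $e(T)=\mathbb{R}\times\{1\}$ and $e^{-1}(e(x,a))=\{x\}\times\mathbb{R}^{*}$ a Lie group isomorphic to $\mathbb{R}^{*}$.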
Throughout this paper by $T_{a}$ we mean $T\cap e^{-1}(e(a))$.
\begin{defn}\cite{bull}
If $T$ and $S$ are two top spaces, then a homomorphism $f : T \rightarrow S$
is called a morphism if it is also a $C^{\infty}$ map.
If $f$ is a morphism, by $f_{a}$ we mean $f|_{e^{-1}(e(a))}$.
\end{defn}
For a smooth manifold $M$, the set of all smooth functions from $M$ to $M$ whose restriction to some submanifold of $M$ is a diffeomorphism, i.e. the set of partial diffeomorphisms of $M$, is denoted by $D_P(M)$.
If $T$ is a top space, we call an immersed submanifold $S$ of $T$ a top subspace, if it is a top space.
Before we continue, we have to recall some tools. The following definitions, examples and theorems up to theorem \ref{form} are from \cite{ma}.
\begin{defn}\label{action}
An action of a top space $T$ on a smooth manifold M is a
map $$\phi:T\rightarrow D_P(M),$$ which satisfies the following conditions:
\begin{itemize}
\item[i.] For every $i\in e(T)$ the map $e^{-1}(i)\times M\rightarrow M$ which maps $(t,m)$ to $\phi_t(m)$ is a smooth function;
\item[ii.] $\phi_{ts}=\phi_t\circ\phi_s$, for all $t,s\in T$.
\end{itemize}
\end{defn}
Now by using the following theorem, we give an example of the definition above.
\begin{thm}
Let $T$ be a generalized group such that $e(t)e(s)=e(ts)$, for any $t,s\in T$. Then there is a generalized group isomorphism between $T$ and $$e(T)\ltimes\{G_i\}_{i\in e(T)},$$where $G_i=e^{-1}(i)$, for all $i\in e(T)$ and $$e(T)\ltimes\{G_i\}_{i\in e(T)}=\{(i,g)|g\in G_i\},$$ with the product
$$(i_{1},g_{1})\ltimes(i_{2},g_{2})=(i_{1}i_{2},g_{1}g_{2}),\quad i_{1},i_{2}\in e(T)
, g_{1}\in G_{i_{1}}, g_{2}\in G_{i_{2}}.$$
\end{thm}
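For the illustrative top space $T=\mathbb{R}\times\mathbb{R}^{*}$ with $(x,a)(y,b)=(x,ab)$ introduced above, $e(T)=\mathbb{R}\times\{1\}$ is an idempotent semigroup with $(x,1)(y,1)=(x,1)$, the groups are $G_{(x,1)}=\{x\}\times\mathbb{R}^{*}$, and the isomorphism of the theorem reads $(x,a)\mapsto\big((x,1),(x,a)\big)$; the product rule then reproduces $(x,a)(y,b)=(x,ab)$.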
\begin{exm}
For a top space $T$, the map $Ad:T\rightarrow D_P(T)$, where $Ad_t(s)=tst^{-1}$, is an action of the top space $T$ on the manifold $T$, called the adjoint action. Observe that $Ad_{t}:e^{-1}(e(t))\rightarrow e^{-1}(e(t))$ is a diffeomorphism.
\end{exm}
\begin{defn}
If $G$ is a Lie group and $M$ is a smooth manifold, a partial action of $G$ on $M$ is a map
$$\varphi:G\rightarrow D_P(M),$$ such that the map $G\times M\rightarrow M$, $(g,m)\mapsto\varphi^{g}(m)$, is smooth and $\varphi^{gh}=\varphi^{g}\circ\varphi^{h}$, for all $g,h\in G$.
\end{defn}
\begin{thm}\label{th;act1}
Let $T$ be a top space. Then $\phi$ is an action of $T$ on $M$ if and only if there exists a family of partial actions $\{\varphi_i\}_{i\in e(T)}$ of $e^{-1}(i)$ on $M$, for any $i\in e(T)$, such that $\varphi_{e(ts)}^{ts}=\varphi_{e(t)}^{t}\circ\varphi_{e(s)}^{s}$ and $\phi_t=\varphi_{e(t)}^{t}$, for all $t,s\in T$.
\end{thm}
\begin{thm}\label{form}
Let $T$ be a top space and $T_ie^{-1}(i)$ the tangent space of $e^{-1}(i)$ at $i\in e(T)$. Then the vector space of all partial left invariant vector fields on $T$ is isomorphic to $$\bigoplus_{i\in e(T)}T_ie^{-1}(i).$$
\end{thm}
In order to reach our aim, we must show that there exists a map from the Lie algebra of a top space to the top space itself, which acts like the exponential map in the Lie group case.
Consider the exponential map for Lie groups, denoted by $\exp$.
\begin{lem}
Let $T$ be a top space. Then $exp_{a}$ is a local diffeomorphism for every $a\in e(T)$.
\end{lem}
\begin{proof}
The action $\alpha$ of a top space on a manifold is a partial diffeomorphism for each $t\in T$. Hence $\alpha_{a}$ is a diffeomorphism for every $a\in e(T)$, and so $\exp_{a}=\gamma_{\alpha_{a}}(1)$ is a local diffeomorphism.
\end{proof}
Now we are ready to prove the following important theorem.
\begin{thm}
Each closed generalized subgroup of a top space is a top subspace.
\end{thm}
\begin{proof}
Let $T$ be a top space and $S$ a closed generalized subgroup of it. We know that $T_{a}$ is a Lie group for every $a \in e(T)$. Hence $S_{a}$ is a closed subgroup, and therefore a Lie subgroup, of $T_{a}$ for every $a \in e(S)$. Denote by $\mathfrak{t}$ the Lie algebra of $T$. Note that by theorem \ref{form} the vector space of all partial left invariant vector fields on $T$ is isomorphic to $\bigoplus_{a\in e(T)}\mathfrak{t_{a}}$, where $\mathfrak{t_{a}}$ is the Lie algebra of $T_{a}$. Now consider the set
$$\mathfrak{s}=\{ X\in \mathfrak{t}| \exp(tX)\in S, \forall t\in \mathds{R} \},$$
where the exponential map $\exp=\bigoplus_{a \in e(T)}\exp_{a}$ is a local diffeomorphism. Obviously $\mathfrak{s}$ is a linear subspace of $\mathfrak{t}$ since every $\mathfrak{s}_{a}$ is a linear subspace of $\mathfrak{t}_{a}$, for all $a$ in $e(S)$. Open sets of $\mathfrak{s}$ are unions of open sets of the $\mathfrak{s_{a}}$. We have found a local diffeomorphism between $\mathfrak{s}$ and $S$; therefore, by use of the left translations we can find an open neighbourhood in $\mathfrak{s}$ for every open set of $S$. Thus we can find a smooth chart for each open set of $S$, since $\mathfrak{s}$ is a linear space. Hence $S$ is a smooth embedded submanifold which inherits the subspace topology.
\end{proof}
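As a simple check of the theorem on the illustrative top space used above, let $T=\mathbb{R}\times\mathbb{R}^{*}$ with $(x,a)(y,b)=(x,ab)$ and $S=\mathbb{R}\times\{-1,1\}$. Then $S$ is closed in $T$ and is a generalized subgroup, since $(x,\epsilon)(y,\delta)=(x,\epsilon\delta)\in S$, $e(x,\epsilon)=(x,1)\in S$ and $(x,\epsilon)^{-1}=(x,\epsilon)\in S$; and indeed $S$ is an embedded one-dimensional submanifold of $T$ and a top subspace, with $S_{(x,1)}=\{x\}\times\{-1,1\}$ the closed subgroup $\{\pm 1\}$ of $T_{(x,1)}\cong\mathbb{R}^{*}$.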
It remains to ask under which conditions, different from the ones given in this paper, the same results can be deduced.
\section{Acknowledgments}
This paper is supported by grant no. 92grd1m82582 of Shiraz university, Shiraz, Iran.
\bibliographystyle{<bibstyle>}
\section{Introduction}
\label{Introduction}
The precise meaning of the solution of a system of differential equations
can be cast in several ways \cite{anleach}. We say that we have
determined a closed-form solution for a dynamical system when we have
determined a set of explicit functions describing the variation of the
dependent variables in terms of the independent variable(s). On the other hand,
when we have proved the existence of a sufficient number of independent
explicit first integrals and invariants for the dynamical system, we
say that we have found an analytic solution of the dynamical equations.
In addition, an algebraic solution is found when one has proved the
existence of a sufficient number of explicit transformations which permit
the reduction of the system of differential equations to a system of
algebraic equations. A feature, central to each of these three equivalent
prescriptions of integrability, is the existence of explicit functions which are
first integrals/invariants or the coefficient functions of
the aforementioned transformations.
A first integral (FI) of a dynamical system is a scalar $I$ defined on the
phase space of the system such that $dI/dt=0$. The FIs are classified
according to the power of the momenta. The linear FIs (LFIs) are linear in
the momenta, the quadratic FIs (QFIs) contain products of two momenta and so
on. A dynamical system of $n$ degrees of freedom is called integrable if it
admits $n$ (functionally) independent FIs which are in involution \cite{Arnold 1989}, that is, their Poisson brackets are zero, i.e.
$\{I_{i},I_{j}\}=0$. The maximum number of independent FIs that a dynamical
system of $n$ degrees of freedom can have is $2n-1$ and when this is the
case an integrable system is called superintegrable. The above apply to all
dynamical systems which are described by dynamical equations independently
if they are Lagrangian, or Hamiltonian. If the dynamical system is
Hamiltonian, then the FIs are defined equivalently by the requirement $\{H,I\}=0$ where $H$ is the Hamiltonian function of the system.
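As an elementary illustration of these notions (a standard example, with the frequency $\omega>0$ introduced only for this paragraph), consider the 2d harmonic oscillator $H=\frac{1}{2}(\dot{x}^{2}+\dot{y}^{2})+\frac{1}{2}\omega^{2}(x^{2}+y^{2})$. The quantities
$$I_{1}=\frac{1}{2}\dot{x}^{2}+\frac{1}{2}\omega^{2}x^{2},\quad I_{2}=\frac{1}{2}\dot{y}^{2}+\frac{1}{2}\omega^{2}y^{2},\quad I_{3}=x\dot{y}-y\dot{x}$$
satisfy $dI_{i}/dt=0$ along the solutions of $\ddot{x}=-\omega^{2}x$, $\ddot{y}=-\omega^{2}y$, with $H=I_{1}+I_{2}$ and $\{I_{1},I_{2}\}=0$. Since the three FIs are functionally independent, the system admits $2n-1=3$ independent FIs and is superintegrable.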
FIs are important for the determination of the solution and the study of dynamical systems.
In particular, when a dynamical system is integrable, then (in principle) the
solution of the dynamical equations can be found by means of quadratures.
Such dynamical systems are characterized as Liouville integrable. For this
reason the systematic computation of FIs has been a topic of active interest for a
long time, going back perhaps to the early days of Mechanics. Originally the FIs
were studied in the field of the geometry of surfaces (see for example
\cite{Darboux 1901}), where an attempt was made to compute all integrable and
superintegrable 2d surfaces. The introduction of the theorem of Noether in 1918
\cite{Noether 1918} gave a new dimension to the topic and has prevailed in it
since then. More recently a more systematic method, but
less general than the Noether one, was presented in which one assumes a
general form of the QFI and then uses the condition $dI/dt=0$, or the condition
$\{H,I\}=0$, to find a system of simultaneous equations involving the
coefficients defining $I$ (see for instance \cite{Katzin 1973}, \cite{Katzin 1974}, \cite{Kalotas}, \cite{Fris}). The solution of that system of conditions provides us with all the QFIs admitted by a given dynamical system.
The determination of integrable and superintegrable systems is a topic which
is in continuous investigation. Obviously a universal method which computes
the FIs for all types of dynamical equations independently of their
complexity and degrees of freedom is not available. For this reason the
existing studies restrict their considerations to flat spaces or spaces of
constant curvature of low dimension (e.g. \cite{Darboux 1901}, \cite{Whittaker 1959}, \cite{Dorizzi Grammatikos Ramani 1983}, \cite{Thompson
1984a}, \cite{Sen}, \cite{TsaPal1}, \cite{Ranada 1997}, \cite{Kalnins 2001}
and references therein). The prevailing cases involve the autonomous
conservative dynamical systems with two degrees of freedom and the
classification of the potential functions into integrable and superintegrable ones.
A comprehensive review of the known integrable and superintegrable 2d
autonomous potentials is given in \cite{Hietarinta 1987}.
Besides the two methods mentioned above, other approaches have appeared. For
example Koenigs \cite{Koenigs} used coordinate transformations in order to
solve the system of equations resulting from the condition $\{H,I\}=0$. The
solution of that system of equations gives the general functional form of
the QFIs and the superintegrable free Hamiltonians, that is, the ones which
possess two more QFIs (in addition to the Hamiltonian) which are
functionally independent. Koenigs' method has been generalized in several
works (see \cite{Daskalogiannis 2006} and references cited therein) for two
dimensional autonomous conservative systems.
In the present work we follow the method which uses the solution of the
simultaneous system of equations resulting from the condition $dI/dt=0.$
This approach has been used extensively, e.g. \cite{Katzin 1973}, \cite{Katzin 1974}, \cite{Kalotas}, however always for special cases only. In this
work we use Theorem \ref{The first integrals of an autonomous holonomic dynamical
system} proved in \cite{Tsamp2020} which gives the general solution of this system in terms of the
collineations and the Killing tensors (KTs) of the kinetic metric in the configuration
space. This solution is systematic and covariant therefore can be used in
higher dimensions and for curved configuration spaces. Furthermore it is shown that
it is directly related to the Noether approach.
Theorem \ref{The first integrals of an autonomous holonomic dynamical
system} is applied to the case of 2d autonomous conservative dynamical
systems in order to determine the integrable and the superintegrable
potentials. It is found that the integrable potentials are classified in
\textbf{Class I} and \textbf{Class II} and that superintegrable potentials
exist in both classes. All potentials together with their QFIs are listed in
tables for easy reference. All the results listed in the review paper of
\cite{Hietarinta 1987} as well as in more recent works (e.g. \cite{Ranada
1997}, \cite{Kalnins 2001}) are recovered while some
new ones are found which admit time-dependent LFIs and QFIs.
\section{Gauged Noether symmetries and QFIs}
\label{sec.genconsider}
We consider an autonomous conservative dynamical system of $n$ degrees of
freedom $q^{a}$ with kinetic energy $T=\frac{1}{2}\gamma _{ab}\dot{q}^{a}\dot{q}^{b}$ where $\dot{q}^{a}=\frac{dq^{a}}{dt}$. We define in the
configuration space of the system the kinetic metric $\gamma _{ab}$ by the
requirement $\gamma _{ab}=\frac{\partial ^{2}T}{\partial \dot{q}^{a}\dot{q}^{b}}.$ When the dynamical system is regular, that is, $\det \left( \frac{\partial ^{2}T}{\partial \dot{q}^{a}\dot{q}^{b}}\right) \neq 0,$ it can be
shown that the dynamical equations can be written in the form
\begin{equation}
\ddot{q}^{a}=-\Gamma _{bc}^{a}\dot{q}^{b}\dot{q}^{c}-V(q)^{,a}
\label{eq.Noe0}
\end{equation}
where $\Gamma_{bc}^{a}$ are the Riemann connection coefficients defined by the
kinetic metric $\gamma_{ab}$, $V(q)$ stands for the conservative forces, a comma indicates the partial derivative and the Einstein summation convention is used. Finally the metric $\gamma_{ab}$ is used for lowering and raising the indices.
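For orientation, in the special case used later where the kinetic metric is the Euclidean metric in Cartesian coordinates, $\gamma_{ab}=\delta_{ab}$, we have $\Gamma^{a}_{bc}=0$ and for two degrees of freedom $q^{a}=(x,y)$ equation (\ref{eq.Noe0}) reduces to
$$\ddot{x}=-V_{,x},\qquad \ddot{y}=-V_{,y}.$$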
The main methods for the determination of the FIs of (\ref{eq.Noe0}) are: \newline
a. The theorem of Noether which is the standard one and requires a
Lagrangian. \newline
b. The direct method (see e.g. \cite{Katzin 1973}, \cite{Katzin 1974}, \cite{Kalotas}, \cite{Daskalogiannis 2006}, \cite{Karpathopoulos 2018}) which is
not applied widely, uses only the dynamical equations and involves the
solution of the system of equations resulting from the condition $dI/dt=0$.
The two methods are related as follows.
In the Noether approach the Noether symmetries are generated by vector fields\footnote{
We restrict our considerations to vector fields in the jet space $J^{1}(t,q,
\dot{q}).$}
\begin{equation*}
\mathbf{X}=\xi (t,q,\dot{q})\frac{\partial }{\partial t}+\eta ^{a}(t,q,\dot{q})\frac{\partial }{\partial q^{a}}
\end{equation*}
whose first prolongation $\mathbf{X}^{[1]}$ in the jet space $J^{1}(t,q,\dot{q})$ is given by
\begin{equation}
\mathbf{X}^{[1]}=\xi \partial _{t}+\eta ^{a}\partial _{q^{a}}+\left( \dot{\eta}^{a}-\dot{q}^{a}\dot{\xi}\right) \partial _{\dot{q}^{a}}.
\label{FL.10.3}
\end{equation}
Let $L=T-V$ be the Lagrangian of the dynamical system. The Noether
symmetries of (\ref{eq.Noe0}) are its Lie symmetries which satisfy in
addition the Noether condition
\begin{equation}
\mathbf{X}^{\left[ 1\right] }L+L\dot{\xi}=\dot{f}
\label{GenHolonNoether'sFinCond}
\end{equation}
where $f(t,q,\dot{q})$ is the Noether or the gauge function. According to
the theorem of Noether a Noether symmetry produces the FI
\begin{equation}
I=f-L\xi -\frac{\partial L}{\partial \dot{q}^{a}}\left( \eta ^{a}-\xi \dot{q}^{a}\right) . \label{GenHolonNoether's1Integr}
\end{equation}
The Noether symmetries are classified in a formal way in two classes\footnote{
The original paper of Noether does not distinguish these classes. For a
recent enlightening discussion of Noether theorem see \cite{Hadler Paliathanasis Leach 2018} and references therein.}:\newline
a) The point Noether symmetries whose generators are vector fields on the
augmented configuration space $\{t,q^{a}\}$ and usually lead to LFIs.
\newline
b) The generalized Noether symmetries whose generators are vector fields in
the jet space $J^{1}(t,q,\dot{q})$ which produce FIs of higher degree.
\newline
In the present work we restrict our considerations to generalized Noether
symmetries in the first jet space $J^{1}(t,q,\dot{q})$ which produce QFIs.
The 2d autonomous potentials which admit point Noether symmetries have
already been classified in \cite{Sen} and more recently recovered and
extended in \cite{TsaPal1}. Furthermore in \cite{TsaPal1} it has been shown
that the generators of the point Noether symmetries are the elements of the
homothetic algebra of the kinetic metric. Obviously that firm result is not
expected to apply in the case of generalized Noether symmetries which form
an infinite dimensional Lie group.
It is well-known \cite{Stephani book ODES} that the generalized Noether
symmetries have one extra degree of freedom (being special generalized Lie symmetries)
which is removed if we consider the gauge condition $\xi =0$, which we
assume to be the case. Therefore, the (gauged) Noether symmetries we
consider are generated by vector fields of the form $X=\eta ^{a}(q,\dot{q},\ddot{q},..)\frac{\partial }{\partial q^{a}}$ and accordingly the Noether condition and
the corresponding FI are simplified as follows
\begin{equation}
X^{[1]}L=\dot{f},\quad I=f-\frac{\partial L}{\partial \dot{q}^{a}}\eta ^{a}.
\label{GenHolonNoether's2Cond2}
\end{equation}
In the direct method one assumes for the QFI the generic expression
\begin{equation}
I=K_{ab}(t,q)\dot{q}^{a}\dot{q}^{b}+K_{a}(t,q)\dot{q}^{a}+K(t,q)
\label{eq.Noe1}
\end{equation}
where $K_{ab}(t,q)$, $K_{a}(t,q)$, $K(t,q)$ are unknown tensor quantities
and demands the condition\footnote{{Equivalently, if the system is
Hamiltonian, one requires $\{H,I\}-\frac{\partial I}{\partial t}=0$ where $\{.,.\}$ is the Poisson bracket.}} $\frac{dI}{dt}=0.$ This condition leads to a
system of simultaneous equations among the coefficients $K_{ab}(t,q)$,
$K_{a}(t,q)$, $K(t,q)$ whose solution provides all the LFIs and the QFIs of
the system of this functional form. The involvement of the specific dynamical system is in the
replacement of the term $\ddot{q}^{a}$, whenever it appears, using the dynamical equations (\ref{eq.Noe0}).
The direct approach is related to the Noether symmetries because once one
has determined the QFI the generator of the corresponding gauged Noether
symmetry and the Noether function follow immediately. Indeed for a gauged
Noether symmetry (in the gauge $\xi =0$) relation (\ref{GenHolonNoether's1Integr}) becomes
\begin{equation}
I=f-\frac{\partial L}{\partial \dot{q}^{a}}\eta ^{a}. \label{eq.Noe2}
\end{equation}
Replacing $L=T-V(q)$ we find
\begin{equation}
I=f-\eta ^{a}\gamma _{ab}\dot{q}^{b}=f-\eta _{a}\dot{q}^{a} \label{eq.Noe3}
\end{equation}
and using (\ref{eq.Noe1}) it follows
\begin{equation}
\eta _{a}=-K_{ab}\dot{q}^{b}-K_{a},\enskip f=K \label{eq.Noe4}
\end{equation}
that is we obtain directly the Noether generator and the Noether function
from the QFI $I$ by reading the coefficients $K_{ab}(t,q)$, $K_{a}(t,q)$ and
$K(t,q)$ respectively. It can be proved that: (a) the set $\{-K_{ab}\dot{q}^{b}-K_{a};K\}$ does satisfy the gauged Noether condition $\mathbf{X}^{[1]}L=\dot{f}$ and (b) the QFI $I$ defined in (\ref{eq.Noe1}) is not in
general Noether invariant (as it is the case for the point Noether symmetries - see proposition 2.2 in \cite{Sarlet 1981}). Finally, the (gauged) point Noether symmetries which are defined
by the vector $K_{a}$ ($K_{ab}=0)$ give the LFIs whereas the (gauged) generalized Noether symmetries with $K_{ab}\neq 0$ give the QFIs.
\section{The QFIs of an autonomous conservative dynamical system}
\label{Theorem}
It is known (see e.g. \cite{Katzin 1973}, \cite{Kalotas}) that condition $dI/dt=0$ leads to the following system of equations\footnote{
Using the dynamical equations (\ref{eq.Noe0}) to replace $\ddot{q}^{a}$ whenever it appears the condition $dI/dt=0$ is written
\[
K_{(ab;c)}\dot{q}^{a}\dot{q}^{b}\dot{q}^{c}+\left(
K_{ab,t}+K_{(a;b)}\right) \dot{q}^{a}\dot{q}^{b}+\left(
K_{a,t}+K_{,a}-2K_{ab}V^{,b} \right) \dot{q}^{a} +K_{,t}-K_{a}V^{,a}=0.
\]
}
\begin{eqnarray}
K_{(ab;c)} &=&0 \label{eq.veldep4.1} \\
K_{ab,t}+K_{(a;b)} &=&0 \label{eq.veldep4.2} \\
-2K_{ab}V^{,b}+K_{a,t}+K_{,a} &=&0 \label{eq.veldep4.3} \\
K_{,t}-K_{a}V^{,a} &=&0. \label{eq.veldep4.4}
\end{eqnarray}
Here round/square brackets indicate symmetrization/antisymmetrization of the enclosed indices and a semi-colon denotes the Riemannian covariant derivative. In the special case of a scalar function, for example the potential $V$, it holds that $V^{,a}=V^{;a}$.
Condition $K_{(ab;c)}=0$ implies that $K_{ab}$ is a Killing tensor (KT) of
order 2 (possibly zero) of the kinetic metric $\gamma _{ab}$. Because
$\gamma _{ab}$ is autonomous we assume
\begin{equation*}
K_{ab}(t,q)=g(t)C_{ab}(q)
\end{equation*}
where $g(t)$ is an arbitrary analytic function and $C_{ab}(q)$ ($C_{ab}=C_{ba}$) is a KT of order 2 of the metric $\gamma _{ab}.$ This choice
of $K_{ab}$ and equation (\ref{eq.veldep4.2}) indicate that we set
\begin{equation*}
K_{a}(t,q)=f(t)L_{a}(q)+B_{a}(q)
\end{equation*}
where $f(t)$ is an arbitrary analytic function and $L_{a}(q)$, $B_{a}(q)$
are arbitrary vectors. With these choices the system of equations (\ref{eq.veldep4.1}) - (\ref{eq.veldep4.4}) becomes
\begin{eqnarray}
g(t)C_{(ab;c)} &=&0 \label{eq.veldep5} \\
g_{,t}C_{ab}+f(t)L_{(a;b)}+B_{(a;b)} &=&0 \label{eq.veldep6} \\
-2g(t)C_{ab}V^{,b}+f_{,t}L_{a}+K_{,a} &=&0 \label{eq.veldep7} \\
K_{,t}-(fL_{a}+B_{a})V^{,a} &=&0. \label{eq.veldep8}
\end{eqnarray}
Conditions (\ref{eq.veldep5}) - (\ref{eq.veldep8}) must be supplemented with
the integrability conditions $K_{,at}=K_{,ta}$ and $K_{,[ab]}=0$ for the scalar $K.$ The integrability condition $K_{,at}=K_{,ta}$ gives -
if we make use of (\ref{eq.veldep7}) and (\ref{eq.veldep8}) - the PDE
\begin{equation}
f_{,tt}L_{a}+f_{,t}L_{b}A_{a}^{b}+f\left( L_{b}V^{,b}\right) _{;a}+\left(
B_{b}V^{,b}\right) _{;a}-2g_{,t}C_{ab}V^{,b}=0. \label{eq.veldep9}
\end{equation}
Condition $K_{,[ab]}=0$ gives the equation known as the second order
Bertrand-Darboux PDE
\begin{equation}
2g\left( C_{[a\left\vert c\right\vert }V^{,c}\right) _{;b]}-f_{,t}L_{\left[
a;b\right] }=0 \label{eq.veldep10}
\end{equation}
where indices enclosed between vertical lines are not affected by the symmetrization or antisymmetrization symbols.
Finally, the system of equations which we have to solve consists of
equations (\ref{eq.veldep5}) - (\ref{eq.veldep10}). The general solution of
that system in terms of the collineations of the kinetic metric is given in
the following Theorem (see \cite{Tsamp2020}).
\begin{theorem}
\label{The first integrals of an autonomous holonomic dynamical system}
The functions $g(t),f(t)$ are assumed to be analytic so that they may be
represented by polynomial expansion as follows
\begin{equation} \label{eq.thm1}
g(t) = \sum^n_{k=0} c_k t^k = c_0 + c_1 t + ... + c_n t^n
\end{equation}
\begin{equation} \label{eq.thm2}
f(t) = \sum^m_{k=0} d_k t^k = d_0 + d_1 t + ... + d_m t^m
\end{equation}
where $n, m \in \mathbb{N}$, or may be infinite, and $c_k, d_k \in \mathbb{R}
$. Then the independent QFIs of an autonomous conservative dynamical system
are the following: \bigskip
\textbf{Integral 1.}
\begin{equation*}
I_{1} = -\frac{t^{2}}{2} L_{(a;b)}\dot{q}^{a}\dot{q}^{b} + C_{ab}\dot{q}^{a} \dot{q}^{b} + t L_{a} \dot{q}^{a} + \frac{t^{2}}{2} L_{a}V^{,a} + G(q)
\end{equation*}
where $C_{ab}$, $L_{(a;b)}$ are KTs, $\left(L_{b}V^{,b}\right)_{,a} =
-2L_{(a;b)} V^{,b}$ and $G_{,a}= 2C_{ab}V^{,b} - L_{a}$.
\textbf{Integral 2.}
\begin{equation*}
I_{2} = -\frac{t^{3}}{3} L_{(a;b)}\dot{q}^{a}\dot{q}^{b} + t^{2} L_{a} \dot{q}^{a} + \frac{t^{3}}{3} L_{a}V^{,a} - t B_{(a;b)} \dot{q}^{a}\dot{q}^{b} +
B_{a}\dot{q}^{a} + tB_{a}V^{,a}
\end{equation*}
where $L_{a}$, $B_{a}$ are such that $L_{(a;b)}$, $B_{(a;b)}$ are KTs,
$\left(L_{b}V^{,b}\right)_{,a} = -2L_{(a;b)} V^{,b}$ and $\left(B_{b}V^{,b}\right)_{,a} = -2B_{(a;b)} V^{,b} - 2L_{a}$.
\textbf{Integral 3.}
\begin{equation*}
I_{3} = -e^{\lambda t} L_{(a;b)}\dot{q}^{a}\dot{q}^{b} + \lambda e^{\lambda t} L_{a} \dot{q}^{a} + e^{\lambda t} L_{a} V^{,a}
\end{equation*}
where $\lambda \neq 0$, $L_{a}$ is such that $L_{(a;b)}$ is a KT and $\left(L_{b}V^{,b}
\right)_{,a} = -2L_{(a;b)} V^{,b} - \lambda^{2} L_{a}$.
\end{theorem}
It can be checked that the FIs listed above produce all the potentials which
admit a LFI or a QFI given in \cite{TsaPal1} and are due to point Noether symmetries.
Since, as shown above, these FIs also follow from a gauged velocity-dependent Noether symmetry, we conclude that \emph{there does not exist a one-to-one correspondence between Noether FIs and the type of Noether symmetry.} For example the FI of the total energy (Hamiltonian) $E= \frac{1}{2} \gamma_{ab}\dot{q}^{a}\dot{q}^{b} +V(q)$ (case \textbf{Integral 1} for $L_{a}=0$ and $C_{ab}=\frac{\gamma_{ab}}{2}$) is generated by the point Noether symmetry $\Big(\xi=1, \eta_{a}=0; f=0\Big)$ and also by the gauged generalized Noether symmetry $\Big(\xi=0, \eta_{a}= -\frac{1}{2}\gamma_{ab}\dot{q}^{b}; f=V(q) \Big)$.
The FI $-I_{2}(L_{a}=0)$, where $B_{a}$ is a homothetic vector (HV) with conformal factor $\psi=const$, is generated by the point Noether symmetry $\Big( \xi=2\psi t, \eta_{a}=B_{a}; f=ct \Big)$ such that $B_{a}V^{,a} +2\psi V +c =0$; and also by the gauged generalized Noether symmetry $\Big( \xi=0, \eta_{a}= -t\psi\gamma_{ab}\dot{q}^{b} +B_{a}; f=-tB_{a}V^{,a} \Big)$.
As a final example we consider the FI $-\frac{I_{3}}{\lambda}$ for the gradient HV $L_{a}=\Phi(q)_{,a}$ where $\Phi_{;ab}= \psi\gamma_{ab}$ with $\psi=const$. This FI is generated by the point Noether symmetry
\[
\Big( \xi= \frac{2\psi}{\lambda}e^{\lambda t}, \eta_{a}= e^{\lambda t} \Phi(q)_{,a}; f= \lambda e^{\lambda t} \Phi(q) -\frac{c}{\lambda} e^{\lambda t} \Big)
\]
where $\lambda, c$ are non-zero constants and $\Phi_{,a}V^{,a}= -2\psi V -\lambda^{2}\Phi +c$; and also by the gauged generalized Noether symmetry
\[
\Big( \xi=0, \eta_{a} = - \frac{e^{\lambda t} }{\lambda} \psi\gamma_{ab}\dot{q}^{b} + e^{\lambda t} \Phi_{,a}; f= - \frac{e^{\lambda t}}{\lambda} \Phi_{,a} V^{,a} \Big).
\]
\section{The determination of the QFIs}
\label{The determination of the QFIs}
From Theorem \ref{The first integrals of an autonomous holonomic dynamical
system} it follows that for the determination of the QFIs the following
problems have to be solved:
a. Determine the KTs of order 2 of the kinetic metric $\gamma_{ab}$.
b. Determine the special subspace of KTs of order 2 of the form
$C_{ab}=L_{(a;b)}$ where $L^{a}$ is a vector.
c. Determine the KTs satisfying the constraint $G_{,a}= 2C_{ab} V^{,b}$.
d. Find all KVs $L_{a}$ of the kinetic metric which satisfy the constraint
$L_{a}V^{,a}=s$ where $s$ is a constant, possibly zero.
We note that constraints a. and b. depend only on the kinetic metric.
Because the kinetic energy is a positive definite non-singular quadratic
2-form we can always choose coordinates in which this form reduces either to
$\delta _{ab}$ or to $A(q)\delta _{ab}.$ Since we know the KTs and all the
collineations of a conformally flat metric (of Euclidean or Lorentzian
character) \cite{Rani 2003} we already have the results for all
autonomous (Newtonian or special relativistic) conservative dynamical systems.
The involvement of the potential function is only in the constraints c.
and d. which also depend on the geometric characteristics of the kinetic
metric. There are two different ways to proceed.
\subsection{The potential $V\left(q\right)$ is known}
\label{subsec.pot.given}
In this case the following procedure is used.\newline
a) Substitute $V$ in the constraints $L_{a}V^{,a}=s$ and
$G_{,a}=2C_{ab}V^{,b}$ and find conditions for the defining parameters of $L_{a}$ and $C_{ab}$. \newline
b) From these conditions determine $L_{a}$, $C_{ab}$. \newline
c) Substitute $C_{ab}$ in the constraint $G_{,a}=2C_{ab}V^{,b}$ and
find the function $G(q)$. \newline
d) Using the above results write the FI $I$ in each case and determine
directly the gauged Noether generator and the Noether function. \newline
e) Examine if $I$ can be reduced to simpler independent FIs or if it is
new. \newline
\subsection{The potential $V\left(q\right)$ is unknown}
\label{subsec.pot.not.given}
In this case the following algorithm is used.\newline
a) Compute the KTs and the KVs of the kinetic metric. \newline
b) Solve the PDE $L_{a}V^{,a}=s$ or the\footnote{
The integrability conditions for the scalar $G$ are very general PDEs
from which one can find only special solutions by making additional
simplifying assumptions (e.g. symmetries) involving $L_{a}$, $C_{ab}$ and $V(q)$ itself. Therefore one does not find the most general solution. For example in \cite{Markakis 2014} it is required that the QFI\ $I\ $is
axisymmetric, that is $\phi ^{\lbrack 1]}I=0$ where $\phi
^{i[1]}=-y\partial_{x}+x\partial_{y} -\dot{y}\partial_{\dot{x}} +\dot{x}\partial_{\dot{y}}$ is the first prolongation of the rotation $\phi^{i}=-y\partial_{x}+x\partial_{y}$. It is proved easily that in this case we
have also the constraints $L_{\phi}K_{a}=0$ and $L_{\phi}K_{ab}=0$.}
$G_{,[ab]}=0$ and find the
possible potentials $V(q)$. \newline
c) Substitute the potentials and the KTs found in the constraint $G_{,a}=2C_{ab}V^{,b}$ and compute the function $G(q)$. \newline
d) Write the FI $I$ for each potential and determine the gauged Noether
generator and the Noether function. \newline
e) Examine if $I$ can be reduced further to simpler independent FIs or if it is a new FI. \newline
In the following sections we assume the potential is not given and apply the
second procedure. For that we need first the geometric quantities of the 2d Euclidean plane $E^2$.
\section{The geometric quantities of $E^{2}$}
\label{sec.E2.geometry}
Using well-known results (see also \cite{Thompson 1984a}, \cite{Karpathopoulos 2018}) we state the following:
- $E^{2}$ admits two gradient Killing vectors (KVs) $\partial _{x},\partial _{y}$ whose
generating functions are $x,y$ respectively and one non-gradient KV (the
rotation) $y\partial _{x}-x\partial _{y}$. These vectors can be written
collectively
\begin{equation}
L_{a}=\left(
\begin{array}{c}
b_{1}+b_{3}y \\
b_{2}-b_{3}x
\end{array}
\right) \label{FL.15}
\end{equation}
where $b_{1},b_{2},b_{3}$ are arbitrary constants, possibly zero.
- The general KT of order 2 in $E^{2}$ is
\begin{equation}
C_{ab}=\left(
\begin{array}{cc}
\gamma y^{2}+2\alpha y+A & -\gamma xy-\alpha x-\beta y+C \\
-\gamma xy-\alpha x-\beta y+C & \gamma x^{2}+2\beta x+B
\end{array}
\right) \label{FL.14b}
\end{equation}
from which follows
\begin{equation}
C_{ab}(q)\dot{q}^{a}\dot{q}^{b}=\left( \gamma y^{2}+2\alpha y+A\right) \dot{x}
^{2}+2\left( -\gamma xy-\alpha x-\beta y+C\right) \dot{x}\dot{y}+\left( \gamma
x^{2}+2\beta x+B\right) \dot{y}^{2} \label{FL.10.2}
\end{equation}
where $\alpha, \beta, \gamma, A, B, C$ are arbitrary constants.
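One can verify directly that (\ref{FL.14b}) is a KT: in Cartesian coordinates the covariant derivatives reduce to partial derivatives and, up to the symmetrization factor,
$$C_{(xx;y)}\propto C_{xx,y}+2C_{xy,x}=(2\gamma y+2\alpha)+2(-\gamma y-\alpha)=0,\quad C_{(yy;x)}\propto C_{yy,x}+2C_{xy,y}=(2\gamma x+2\beta)+2(-\gamma x-\beta)=0,$$
while $C_{xx,x}=C_{yy,y}=0$; hence $C_{(ab;c)}=0$.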
- The vectors $L^{a}$ generating KTs of $E^{2}$ of the form
$C_{ab}=L_{(a;b)}$ are
\begin{equation}
L^{a}=\left(
\begin{array}{c}
-2\beta y^{2}+2\alpha xy+Ax+a_{1}y+a_{4} \\
-2\alpha x^{2}+2\beta xy+a_{3}x+By+a_{2}
\end{array}
\right) \label{FL.14}
\end{equation}
where $a_{1}, a_{2}, a_{3}, a_{4}$ are arbitrary constants.
- The KTs $C_{ab}=L_{(a;b)}$ in $E^{2}$ generated from the vector (\ref{FL.14}) are
\begin{equation}
C_{ab}=L_{(a;b)}=\left(
\begin{array}{cc}
L_{x,x} & \frac{1}{2}(L_{x,y}+L_{y,x}) \\
\frac{1}{2}(L_{x,y}+L_{y,x}) & L_{y,y}
\end{array}
\right) =\left(
\begin{array}{cc}
2\alpha y+A & -\alpha x-\beta y+C \\
-\alpha x-\beta y+C & 2\beta x+B
\end{array}
\right) \label{FL.14.1}
\end{equation}
where\footnote{
Note that $L^{a}$ in (\ref{FL.14}) is the sum of the non-proper ACs of
$E^{2}$ and not of its KVs which give $C_{ab}=0$.} $2C=a_{1}+a_{3}$.
Observe that these KTs are special cases of the general KTs (\ref{FL.14b}) for $\gamma =0$.
According to Theorem \ref{The first integrals of an autonomous holonomic
dynamical system} the above are common to all 2d Newtonian systems and what
changes in each particular case are the constraints $G_{,a}=2C_{ab}V^{,b}$
and $L_{a}V^{,a}=s$ which determine the potential $V(q)$.
\section{Computing the potentials and the FIs}
\label{sec.find.Pots}
The application of Theorem \ref{The first integrals of an autonomous
holonomic dynamical system} in the case of $E^{2}$ indicates that there are
three different ways to find potentials that admit QFIs (other than the
Hamiltonian): \bigskip
1) The constraint $L_{a}V^{,a}=s$ which leads to the PDE
\begin{equation}
(b_{1}+b_{3}y)V_{,x}+(b_{2}-b_{3}x)V_{,y}-s=0. \label{eq.PDE1}
\end{equation}
2) The constraint $G_{,a}=2C_{ab}V^{,b}$ which leads to the second order
Bertrand-Darboux PDE ($G_{,xy}=G_{,yx}$)
\begin{eqnarray}
0 &=&(\gamma xy+ \alpha x+\beta y-C)(V_{,xx}-V_{,yy})+\left[ \gamma
(y^{2}-x^{2})-2\beta x+2\alpha y+A-B\right] V_{,xy}- \notag \\
&&-3(\gamma x+\beta )V_{,y}+3(\gamma y+\alpha)V_{,x}. \label{eq.PDE2}
\end{eqnarray}
3) The constraint $\left( L_{b}V^{,b}\right) _{,a}=-2L_{(a;b)}V^{,b}-\lambda
^{2}L_{a}$ with\footnote{
For $\lambda=0$ this constraint is a subcase of $G_{,a}=2C_{ab}V^{,b}$ hence
only the case $\lambda \neq 0$ must be considered.} $\lambda \neq 0$ and the integrability
condition $\left( L_{b}V^{,b}\right) _{,xy}=\left( L_{b}V^{,b}\right) _{,yx}$
which lead to the PDE
\begin{eqnarray}
0 &=&(-2\beta y^{2}+2\alpha xy+ Ax+ a_{1}y+a_{4})V_{,xx}+(-2\alpha x^{2}+2\beta
xy+ a_{3}x+By+a_{2})V_{,xy}+ \notag \\
&&+(-6\alpha x+2a_{3}+a_{1})V_{,y}+3(2\alpha y+A)V_{,x}+\lambda ^{2}(-2\beta y^{2}+2\alpha xy+Ax +a_{1}y+a_{4}) \label{eq.PDE3.1}
\end{eqnarray}
\begin{eqnarray}
0 &=&(-2\alpha x^{2}+2\beta xy +a_{3}x +By +a_{2})V_{,yy}+(-2\beta y^{2}+2\alpha xy+Ax +a_{1}y +a_{4})V_{,xy}+ \notag \\
&&+3(2\beta x+B)V_{,y}+(-6\beta y+2a_{1}+a_{3})V_{,x}+\lambda
^{2}(-2\alpha x^{2} +2\beta xy +a_{3}x+By+a_{2}) \label{eq.PDE3.2}
\end{eqnarray}
\begin{eqnarray}
0 &=&(\alpha x+\beta y-C)(V_{,xx}-V_{,yy})+\left( -2\beta x +2\alpha y+A-B\right)V_{,xy} -3\beta V_{,y} +3\alpha V_{,x}+ \notag \\
&&+\frac{\lambda ^{2}}{2}(6\alpha x-6\beta y+a_{1}-a_{3}), \quad 2C=a_{1}+a_{3}.
\label{eq.PDE3.3}
\end{eqnarray}
For $\alpha=\beta =0$ and $a_{1}=a_{3}$ equation (\ref{eq.PDE3.3}) reduces to
(\ref{eq.PDE2}). Therefore in order to find new potentials one of these
conditions must be relaxed. This case of finding potentials is the most
difficult because the problem is over-determined, i.e. we have a system of
three PDEs (\ref{eq.PDE3.1})-(\ref{eq.PDE3.3}) and only one unknown function, the
$V(x,y)$.
In the following sections we solve these constraints and find the admitted
potentials which, as a rule, are integrable. Subsequently we apply Theorem
\ref{The first integrals of an autonomous holonomic dynamical system} to
each of these potentials in order to compute the admitted FIs and determine
which of those are integrable and in particular superintegrable.
\section{The constraint $L_{a}V^{,a} = s$}
\label{sec.const1}
The constraint $L_{a}V^{,a}=s$ gives (\ref{eq.PDE1}) which can be solved
using the method of the characteristic equation.
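For instance, for case a below ($b_{3}=0$, $b_{1}\neq 0$) the characteristic system of (\ref{eq.PDE1}) is
$$\frac{dx}{b_{1}}=\frac{dy}{b_{2}}=\frac{dV}{s}$$
with independent characteristics $b_{1}y-b_{2}x=const$ and $V-\frac{s}{b_{1}}x=const$, so that $V=\frac{s}{b_{1}}x+F(b_{1}y-b_{2}x)$ for an arbitrary smooth function $F$; the remaining cases of the table below are treated in the same way.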
To cover all possible occurrences we have to consider the following cases:
a) $b_{3}=0$ and $b_{1}\neq 0$ (KVs $\partial _{x}$ and $\partial
_{x},\partial _{y})$; b) $b_{3}=b_{1}=0$ and $b_{2}\neq 0$ (KV $\partial
_{y})$; and c) $b_{3}\neq 0$ ( KVs $y\partial _{x}-x\partial _{y}$;
$\partial _{x},y\partial _{x}-x\partial _{y}$; and $\partial _{y},y\partial
_{x}-x\partial _{y})$. For each case the solution is shown in the following
table: \bigskip
\begin{tabular}{|c|c|c|}
\hline
Case & KV & $V(x,y)$ \\ \hline
a & $b_{3}=0,b_{1}\neq 0$ & $\frac{s}{b_{1}}x+F(b_{1}y-b_{2}x)$ \\
b & $b_{3}=b_{1}=0$, $b_{2}\neq 0$ & $\frac{s}{b_{2}}y+F(x)$ \\
c & $b_{3}\neq 0$ & $\frac{s}{b_{3}}\tan ^{-1}\left( \frac{y+\frac{b_{1}}{b_{3}}}{-x+\frac{b_{2}}{b_{3}}}\right) +F(b_{1}y+\frac{b_{3}}{2}y^{2}-b_{2}x+\frac{b_{3}}{2}x^{2})$ \\ \hline
\end{tabular}
\bigskip
We shall refer to the above solutions as \textbf{Class I} potentials. In order
to determine if these potentials admit QFIs we apply Theorem \ref{The first
integrals of an autonomous holonomic dynamical system} to the following
potentials resulting from the table above:
\begin{eqnarray*}
V_{1} &=&cx+F(y-bx) \\
V_{2} &=&cy+F(x)
\end{eqnarray*}
\begin{equation*}
V_{3}=c\tan ^{-1}\left( \frac{y+b_{1}}{-x+b_{2}}\right) +F\left( \frac{x^{2}+y^{2}}{2}+b_{1}y-b_{2}x\right) .
\end{equation*}
Before we continue we recall that \emph{if $I_{1},I_{2},...,I_{k}$ are FIs of a
given dynamical system then any function $f(I_{1},...,I_{k})$ is also a FI of the dynamical system.}
\subsection{The potential $V_{1}=cx+F(y-bx)$}
\label{subsec.V1}
\textbf{Case a.} $b=0$ and $F=\lambda y$.
The potential reduces to $V_{1a}=cx + \lambda y$.
The irreducible FIs are
\begin{equation*}
L_{11}=\dot{x}+ct,\enskip L_{12}=\dot{y}+\lambda t,\enskip Q_{11}=\frac{1}{2}\dot{x}^{2}+cx,\enskip Q_{12}=\frac{1}{2}\dot{y}^{2}+\lambda y.
\end{equation*}
We note that $Q_{11}+Q_{12}=\frac{1}{2}(\dot{x}^{2}+\dot{y}^{2})+V=H$ the
Hamiltonian. We compute $\{Q_{11},Q_{12}\}=0$, $\{L_{11},Q_{11}\}=-c$.
The FIs $I_{1}=Q_{11}+Q_{12}$, $I_{2}=\lambda L_{11}-cL_{12}=\lambda \dot{x}-c\dot{y}$ and $I_{3}=Q_{11}$ are functionally independent and satisfy the
relations
\begin{equation*}
\{I_{1},I_{2}\}=\{I_{1},I_{3}\}=0,\enskip\{I_{2},I_{3}\}= -c\lambda.
\end{equation*}
Therefore the potential $V_{1a}$ is superintegrable.
We note that the FIs $I_{2}$, $Q_{11}$ are respectively the FIs (3.1.4) and
(3.2.20) of \cite{Hietarinta 1987}. \bigskip
\textbf{Case b.} $\frac{d^{2}F}{dw^{2}}\neq 0$ and $w\equiv y-bx$.
The irreducible FIs are
\begin{equation*}
L_{21}=\dot{x}+b\dot{y}+ct,\enskip L_{22}=t(\dot{x}+b\dot{y})-(x+by)+\frac{c}{2}t^{2},\enskip Q_{21}=(\dot{x}+b\dot{y})^{2}+2c(x+by).
\end{equation*}
For $F(y-bx)=-\frac{1}{2}\lambda ^{2}y^{2}$ and $b=0$ we have the potential
$V_{1b}=cx-\frac{1}{2}\lambda ^{2}y^{2}$, $\lambda \neq 0$, which admits the
additional time-dependent FI $L_{23}=e^{\lambda t}(\dot{y}-\lambda y)$.
Observe also that in this case $Q_{21}$ reduces to $Q_{e1}=\frac{1}{2}\dot{x}^{2}+cx$ which using the Hamiltonian generates the QFI
\begin{equation*}
Q_{e2}\equiv H-Q_{e1}=\frac{1}{2}\dot{y}^{2}-\frac{1}{2}\lambda ^{2}y^{2}.
\end{equation*}
The LFI $L_{21}(c=0)$ is the (3.1.4) of \cite{Hietarinta 1987}.
We compute $\{H,L_{21}\}=\frac{\partial L_{21}}{\partial t}=c$ because
$L_{21}$ is a time-dependent FI.
The potential of the case b is integrable because $\{H,Q_{21}\}=0$.
Moreover
\begin{equation*}
\{H,L_{22}\} = L_{21} = \frac{\partial L_{22}}{\partial t}, \quad
\{L_{21},L_{22}\}= 1 + b^{2},
\end{equation*}
\begin{equation*}
\{Q_{21},L_{21}\} = 2c(1+b^{2}) = 2c\{L_{21},L_{22}\}, \quad
\{Q_{21},L_{22}\}= 2(1+b^{2})L_{21}=2\{L_{21},L_{22}\}L_{21}.
\end{equation*}
For the special case $V=cx-\frac{1}{2}\lambda ^{2}y^{2}$ we have
\begin{equation*}
\{H,L_{23}\}=\{Q_{e2},L_{23}\}=\lambda L_{23}=\frac{\partial L_{23}}{\partial t},\enskip \{Q_{e1},L_{23}\}=0.
\end{equation*}
The triplet $Q_{e1},Q_{e2},L_{23}$ proves that this potential is
superintegrable.\newline
We note that in \cite{Hietarinta 1987} only the \textbf{Class II} potentials
(to be considered in the next section) are examined for superintegrability
(see \cite{Hietarinta 1987} p.108 (3.2.34)-(3.2.36) ).
\bigskip
For $c\neq 0$ the potential $V_{1}=cx+F(y-bx)$ is not included in \cite{Hietarinta 1987} because the author seeks for autonomous LFIs of the
form (3.1.1) and in that case $s=0$.
\subsection{The potential $V_{2}=cy+F(x)$}
\label{subsec.V2}
We consider the case $F^{\prime \prime }= \frac{d^{2}F}{dx^{2}}\neq 0$ because otherwise we
retrieve the potential $V_{1a}$ discussed above.
The irreducible FIs are
\begin{equation*}
L_{31}=\dot{y}+ct,\enskip Q_{31}=\frac{1}{2}\dot{x}^{2}+F(x),\enskip Q_{32}=\frac{1}{2}\dot{y}^{2}+cy.
\end{equation*}
Therefore the potential $V_{2}$ is integrable. This potential is also of the
form $V=F_{1}(x)+F_{2}(y)$, which is the (3.2.20) of \cite{Hietarinta 1987}.
For $F(x)=- \frac{1}{2} \lambda^{2}x^{2}$ we obtain the potential $V_{2a}=cy-\frac{1}{2}\lambda^{2}x^{2}$, $\lambda\neq0$, which admits the additional FI
$L_{32}= e^{\lambda t}(\dot{x}-\lambda x)$. This potential is
superintegrable because of the functionally independent triplet $Q_{31}$,
$Q_{32}$ and $L_{32}$.
\subsection{The potential $V_{3} = c \tan^{-1}\left( \frac{y+b_{1}}{-x +b_{2}} \right) + F\left( \frac{x^{2}+ y^{2}}{2} + b_{1}y - b_{2}x \right)$}
\label{subsec.V3}
We find the time-dependent LFI
\begin{equation*}
L_{51}=y\dot{x}-x\dot{y}+b_{1}\dot{x}+b_{2}\dot{y}+ct.
\end{equation*}
For $c=0$ this potential is integrable. For $c\neq 0$ we do not know.
- For $c=0$ and $F=\lambda \left(\frac{x^{2}+y^{2}}{2}+ b_{1}y-b_{2}x\right)$, $\lambda \neq 0$, the independent FIs are
\begin{equation*}
L_{41}=y\dot{x}-x\dot{y}+b_{1}\dot{x}+b_{2}\dot{y},\enskip Q_{41}=\frac{1}{2}\dot{x}^{2}+\frac{1}{2}\lambda x^{2}-\lambda b_{2}x,\enskip Q_{42}=\frac{1}{2}\dot{y}^{2}+\frac{1}{2}\lambda y^{2}+\lambda b_{1}y,
\end{equation*}
\begin{equation*}
Q_{43}=\dot{x}\dot{y}+\lambda (xy+b_{1}x-b_{2}y).
\end{equation*}
Observe that $Q_{41}+Q_{42}=H$ is the energy of the system. The LFI $L_{41}$
is the (3.1.6) of \cite{Hietarinta 1987}. The functionally independent
triplet $H$, $L_{41}$, $Q_{41}$ proves that this potential is
superintegrable. We have
\begin{equation*}
\{H,L_{41}\}=\{H,Q_{41}\}=0,\enskip\{L_{41},Q_{41}\}=-Q_{43}+\lambda
b_{1}b_{2}.
\end{equation*}
If $b_{1}=b_{2}=0$ and $\lambda = -k^{2}\neq0$ we obtain the superintegrable\footnote{
A subcase of the above superintegrable potential is the potential $V_{3a}=\lambda \left(
\frac{x^{2}+y^{2}}{2}+b_{1}y-b_{2}x\right)$.} potential $V_{3b} =-\frac{1}{2}k^{2} (x^{2}+ y^{2})$ which admits the additional time-dependent LFIs
\begin{equation*}
L_{42\pm}=e^{\pm k t} (\dot{x} \mp k x), \enskip L_{43\pm}= e^{\pm k t}(\dot{y} \mp k y).
\end{equation*}
We also compute
\begin{equation*}
\{L_{41},Q_{42}\}=Q_{43}-\lambda b_{1}b_{2},\enskip\{L_{41},Q_{43}\}=2Q_{41}-2Q_{42}+\lambda (b_{2}^{2}-b_{1}^{2})
\end{equation*}
\begin{equation*}
\{Q_{41},Q_{42}\}=0,\enskip\{Q_{41},Q_{43}\}=\{Q_{43},Q_{42}\}= -\lambda
L_{41}.
\end{equation*}
\bigskip
In section 4 of \cite{Adlam 2007} the author has found the superintegrable
\textbf{Class I} potentials $V_{1a}$ and $V_{3a}$. \bigskip
We note that in the review \cite{Hietarinta 1987} the time-dependent LFIs of the
potentials $V_{1a}$, $V_{2}$ are not discussed. In general in
\cite{Hietarinta 1987} all the time-dependent FIs are ignored, although they can
be used to decide the superintegrability of the system.
\subsection{Summary}
\label{sec.class1}
We collect the results for the \textbf{Class I} potentials in the following
tables. \bigskip
\begin{tabular}{|l|l|l|}
\hline
{\large Potential} & {\large Ref \cite{Hietarinta 1987} } & {\large LFIs and
QFIs} \\ \hline
$V_{3}(c\neq 0)=c\tan ^{-1}\left( \frac{y+b_{1}}{-x+b_{2}}\right) +F\left(
\frac{x^{2}+y^{2}}{2}+b_{1}y-b_{2}x\right) $ & - & $L_{51}=y\dot{x}-x\dot{y}+b_{1}\dot{x}+b_{2}\dot{y}+ct$ \\ \hline
\multicolumn{3}{|c|}{\large Integrable potentials} \\ \hline
$V_{1}=cx+F(y-bx)$, $\frac{d^{2}F}{dw^{2}}\neq 0$, $w\equiv y-bx$ & - &
\makecell[l]{$L_{21}=\dot{x}+b\dot{y}+ct$, \\
$L_{22}=t(\dot{x}+b\dot{y})-(x+by)+\frac{c}{2}t^{2}$, \\
$Q_{21}=(\dot{x}+b\dot{y})^{2}+2c(x+by)$} \\ \hline
$V_{2}=cy+F(x)$, $F^{\prime \prime }\neq 0$ & (3.2.20) &
\makecell[l]{$L_{31}=\dot{y}+ct$, $Q_{31}=\frac{1}{2}\dot{x}^{2}+F(x)$, \\
$Q_{32}=\frac{1}{2}\dot{y}^{2}+cy$} \\ \hline
$V_{3}(c=0)$ & (3.1.6) & $L_{51}(c=0)$ \\ \hline
\end{tabular}
\bigskip
\begin{tabular}{|l|l|l|}
\hline
\multicolumn{3}{|c|}{\large Superintegrable potentials} \\ \hline
{\large Potential} & {\large Ref \cite{Hietarinta 1987} } & {\large LFIs and
QFIs} \\ \hline
$V_{1a}=cx+\lambda y$ & \makecell[l]{(3.1.4), \\ (3.2.20)} &
\makecell[l]{$L_{11}=\dot{x}+ct$, $L_{12}= \dot{y} + \lambda t$, \\ $Q_{11}=
\frac{1}{2}\dot{x}^{2} + cx$, $Q_{12}= \frac{1}{2}\dot{y}^{2} + \lambda y$}
\\ \hline
$V_{1b}=cx-\frac{1}{2}\lambda ^{2}y^{2}$, $\lambda \neq 0$ & (3.2.20) &
\makecell[l]{$L_{11}$, $L_{22}(b=0)=t\dot{x}-x + \frac{c}{2}t^{2}$,
$L_{23}=e^{\lambda t}(\dot{y}-\lambda y)$, \\
$Q_{2e1}=\frac{1}{2}\dot{x}^{2}+cx$,
$Q_{2e2}=\frac{1}{2}\dot{y}^{2}-\frac{1}{2}\lambda ^{2}y^{2}$} \\ \hline
$V_{2a}=cy -\frac{1}{2}\lambda ^{2}x^{2}$, $\lambda \neq 0$ & (3.2.20) &
$L_{31}$, $Q_{31a}=\frac{1}{2}\dot{x}^{2} -\frac{1}{2}\lambda ^{2}x^{2}$,
$Q_{32}$, $L_{32}=e^{\lambda t}(\dot{x}-\lambda x)$ \\ \hline
$V_{3a}=\lambda \left( \frac{x^{2}+y^{2}}{2}+b_{1}y-b_{2}x\right) $,
$\lambda \neq 0$ & (3.1.6) & \makecell[l]{$L_{41}=y\dot{x}-x\dot{y}+b_{1}
\dot{x}+b_{2}\dot{y}$, $Q_{41}= \frac{1}{2}\dot{x}^{2}+ \frac{1}{2}\lambda
x^{2}-\lambda b_{2}x$, \\ $Q_{42}=\frac{1}{2}\dot{y}^{2}+\frac{1}{2}\lambda
y^{2}+\lambda b_{1}y$, $Q_{43}=\dot{x}\dot{y}+\lambda (xy+b_{1}x-b_{2}y)$}
\\ \hline
$V_{3b}=-\frac{1}{2}k^{2}(x^{2}+y^{2})$, $k\neq 0$ & (3.1.5) &
\makecell[l]{$L_{41b}=y\dot{x}-x\dot{y}$, $Q_{41b}= \frac{1}{2}\dot{x}^{2}-
\frac{1}{2}k^{2} x^{2}$, \\ $Q_{42b}= \frac{1}{2}\dot{y}^{2} - \frac{1}{2}
k^{2} y^{2}$, $Q_{43b}=\dot{x}\dot{y} - k^{2}xy$, \\
$L_{42\pm}=e^{\pm kt}(\dot{x}\mp kx)$, $L_{43\pm}=e^{\pm kt}(\dot{y} \mp ky)$} \\ \hline
\end{tabular}
\section{The constraint $G_{,a}=2C_{ab}V^{,b}$}
\label{sec.const2}
In this case we have the PDE (\ref{eq.PDE2})
\begin{eqnarray}
0 &=&(\gamma xy+\alpha x+\beta y-C)(V_{,xx}-V_{,yy})+\left[ \gamma(y^{2}-x^{2})-2\beta x+2\alpha y+A-B\right] V_{,xy}- \notag \\
&&-3(\gamma x+\beta)V_{,y}+3(\gamma y+\alpha)V_{,x}. \label{eq.Hie2}
\end{eqnarray}
The potentials which follow from this equation we call \textbf{Class II}
potentials. This equation cannot be solved in full generality (see also \cite{Hietarinta 1987}), therefore we consider various cases which produce the
known FIs. We emphasize that the potentials we find in this section
are only a subset of the possible potentials which will follow from the
general solution of (\ref{eq.Hie2}). However the important point here is
that we recover the known results with a direct and unified approach which
can be used in the future by other authors to discover new integrable and
superintegrable potentials in $E^{2}$ and in other spaces.
\bigskip
1) $\gamma \neq 0$, $A=B$ and $\alpha=\beta =C=0$. Then $C_{ab}=\left(
\begin{array}{cc}
\gamma y^{2}+A & -\gamma xy \\
-\gamma xy & \gamma x^{2}+A
\end{array}
\right) $ and equation (\ref{eq.Hie2}) becomes
\begin{equation}
xy(V_{,xx}-V_{,yy})+(y^{2}-x^{2})V_{,xy}-3xV_{,y}+3yV_{,x}=0
\label{eq.Hie3a}
\end{equation}
whose solution gives
\begin{equation}
V_{21}= \frac{F_{1}\left( \frac{y}{x}\right) }{d_{1}x^{2}+d_{2}y^{2}}
+F_{2}(x^{2}+y^{2}) \label{eq.Hie3b}
\end{equation}
where $d_{1}, d_{2}$ are arbitrary constants.
- For the subcase $d_{1}=d_{2}=1$ with $A=0$ we find the QFI
\begin{equation}
I_{11}= (y\dot{x}-x\dot{y})^{2}+ 2F_{1}\left( \frac{y}{x}\right) =(r^{2}\dot{\theta})^{2}-\Phi (\theta ) \label{eq.Hie3bb}
\end{equation}
where $r^{2}=x^{2}+y^{2}$ and $\theta =\tan ^{-1}\left( \frac{y}{x}\right)$.
This is the well-known \textbf{Ermakov - Lewis invariant}; see also (3.2.11)
of \cite{Hietarinta 1987}.
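The invariance of (\ref{eq.Hie3bb}) can also be checked directly. In polar coordinates the Lagrangian is $L=\frac{1}{2}(\dot{r}^{2}+r^{2}\dot{\theta}^{2})-V$ with $V=\frac{F_{1}}{r^{2}}+F_{2}(r^{2})$, where $F_{1}$ is regarded as a function of $\theta$, so the angular equation of motion is $\frac{d}{dt}(r^{2}\dot{\theta})=-\frac{\partial V}{\partial\theta}=-\frac{1}{r^{2}}\frac{dF_{1}}{d\theta}$ and therefore
$$\frac{dI_{11}}{dt}=2(r^{2}\dot{\theta})\frac{d}{dt}(r^{2}\dot{\theta})+2\frac{dF_{1}}{d\theta}\dot{\theta}=-2\dot{\theta}\frac{dF_{1}}{d\theta}+2\dot{\theta}\frac{dF_{1}}{d\theta}=0 .$$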
- For $d_{1}\neq0$ the potential (\ref{eq.Hie3b}) is written equivalently
\begin{equation*}
V_{21} = \frac{F_{1}\left(\frac{y}{x}\right)}{x^{2}+cy^{2}} +
F_{2}(x^{2}+y^{2})
\end{equation*}
where $c$ is an arbitrary constant.
This potential admits QFIs for $F_{1}= \frac{cky^{2}+ k x^{2}}{x^{2}+(2-c)y^{2}}$. Therefore
\begin{equation*}
V_{21a}= \frac{k}{x^{2}+(2-c)y^{2}} + F_{2}(x^{2}+y^{2})= \frac{k}{x^{2}+\ell y^{2}} + F_{2}(x^{2}+y^{2})
\end{equation*}
with the QFI
\begin{equation}
I_{11a} = (y\dot{x}-x\dot{y})^{2} + \frac{2k(c-1) y^{2}}{x^{2}+(2-c)y^{2}}=
(y\dot{x}-x\dot{y})^{2} + \frac{2k(1-\ell) y^{2}}{x^{2}+\ell y^{2}}
\label{eq.Hie1a}
\end{equation}
where $\ell \equiv 2-c$.
- For $d_{1}=0$, $d_{2}\neq0$ the potential $V_{21}$ becomes
\begin{equation*}
V_{21} = \frac{F_{1}\left(\frac{y}{x}\right)}{y^{2}} + F_{2}(x^{2}+y^{2}).
\end{equation*}
This potential admits QFIs for $F_{1}= \frac{ky^{2}}{2x^{2}+y^{2}}$. Then
\begin{equation*}
V_{21b} = \frac{k}{2x^{2}+y^{2}} + F(x^{2}+y^{2})
\end{equation*}
with the QFI
\begin{equation}
I_{11b} = (y\dot{x}-x\dot{y})^{2} + \frac{ky^{2}}{2x^{2}+ y^{2}}.
\label{eq.Hie1b}
\end{equation}
Observe that $V_{21b}$ is of the form $V_{21a}(c=3/2)$ or $V_{21a}(\ell=1/2)$
with $\bar{k}\equiv 2k$. Therefore $V_{21b}$ is included in case $V_{21a}$.
\bigskip
2) $\gamma =1$ and $\alpha= \beta =B=C=0$. Then $C_{ab}=\left(
\begin{array}{cc}
y^{2}+A & -xy \\
-xy & x^{2}
\end{array}
\right) $ and equation (\ref{eq.Hie2}) becomes
\begin{equation}
xy(V_{,xx}-V_{,yy})+(y^{2}-x^{2}+A)V_{,xy}-3xV_{,y}+3yV_{,x}=0.
\label{eq.Hie3c}
\end{equation}
- For $A=0$ equation (\ref{eq.Hie3c}) reduces to (\ref{eq.Hie3a}). \bigskip
- For $A\neq 0$ the PDE (\ref{eq.Hie3c}) gives the \textbf{Darboux solution}
\begin{equation}
V_{22}= \frac{F_{1}(u)-F_{2}(v)}{u^{2}-v^{2}} \label{eq.Hie3d}
\end{equation}
where $r^{2}=x^{2}+y^{2}$, $u^{2}=r^{2}+A+\left[ (r^{2}+A)^{2}-4Ax^{2}\right]
^{1/2}$ and $v^{2}=r^{2}+A-\left[ (r^{2}+A)^{2}-4Ax^{2}\right] ^{1/2}$.
We find the QFI (see (3.2.9) of \cite{Hietarinta 1987}).
\begin{equation}
I_{21}=(y\dot{x}-x\dot{y})^{2}+A\dot{x}^{2}+\frac{v^{2}F_{1}(u)-u^{2}F_{2}(v)}{u^{2}-v^{2}}. \label{eq.Hie3dd}
\end{equation}
\bigskip
3) $\gamma =1$, $B=-A$, $C=\pm iA\neq 0$ and $\alpha=\beta =0$. Then
\begin{equation*}
C_{ab}=\left(
\begin{array}{cc}
y^{2}+A & -xy\pm iA \\
-xy\pm iA & x^{2}-A
\end{array}
\right)
\end{equation*}
and equation (\ref{eq.Hie2}) gives again a potential of the form (\ref{eq.Hie3d}), but with $u^{2}=r^{2}+\left[ r^{4}-4A(x\pm iy)^{2}\right]
^{1/2} $ and $v^{2}=r^{2}-\left[ r^{4}-4A(x\pm iy)^{2}\right] ^{1/2}$.
We find the QFI (see (3.2.13) of \cite{Hietarinta 1987})
\begin{equation}
I_{31}=(y\dot{x}-x\dot{y})^{2}+A(\dot{x}\pm i\dot{y})^{2}+\frac{v^{2}F_{1}(u)-u^{2}F_{2}(v)}{u^{2}-v^{2}}. \label{eq.Hie3db}
\end{equation}
\bigskip
4a) $\alpha=1$ and $\beta =\gamma =A=B=C=0$. Then
\begin{equation*}
C_{ab}=\left(
\begin{array}{cc}
2y & -x \\
-x & 0
\end{array}
\right)
\end{equation*}
and equation (\ref{eq.Hie2}) becomes
\begin{equation}
x(V_{,xx}-V_{,yy})+2yV_{,xy}+3V_{,x}=0 \label{eq.Hie4a}
\end{equation}
which gives the potential
\begin{equation}
V_{24}=\frac{F_{1}(r+y)+F_{2}(r-y)}{r} \label{eq.Hie4b}
\end{equation}
where $r^{2}=x^{2}+y^{2}$.
We find the QFI (see (3.2.15) of \cite{Hietarinta 1987})
\begin{equation}
I_{41}=\dot{x}(y\dot{x}-x\dot{y})+\frac{(r+y)F_{2}(r-y)- (r-y)F_{1}(r+y)}{r}.
\label{eq.Hie4c}
\end{equation}
\bigskip
4b) $\beta=1$ and $\alpha =\gamma =A=B=C=0$. Then
\begin{equation*}
C_{ab}=\left(
\begin{array}{cc}
0 & -y \\
-y & 2x
\end{array}
\right)
\end{equation*}
and equation (\ref{eq.Hie2}) becomes
\begin{equation}
y(V_{,xx}-V_{,yy})-2xV_{,xy}-3V_{,y}=0 \label{eq.Hie4a.1}
\end{equation}
which gives the potential
\begin{equation}
V_{24b}=\frac{F_{1}(r+x)+F_{2}(r-x)}{r} \label{eq.Hie4b.2}
\end{equation}
where $r^{2}=x^{2}+y^{2}$.
We find the QFI
\begin{equation}
I_{41b}=\dot{y}(x\dot{y}-y\dot{x})+\frac{(r+x)F_{2}(r-x)-(r-x) F_{1}(r+x)}{r}
\label{eq.Hie4c.3}
\end{equation}
\emph{Observe that the potential (\ref{eq.Hie4b.2}) is just (\ref{eq.Hie4b}) after the transformation $x\leftrightarrow y$. All the results of
case 4b can be derived from case 4a if we apply the transformation
$x\leftrightarrow y$. For this reason case 4b is ignored when we search
for integrable systems; however, in the search for superintegrable potentials the PDE (\ref{eq.Hie4a.1})
will prove useful (see the superintegrable potential (\ref{eq.Hie12d}) in
subsection \ref{sec.super}).}
\bigskip
5) $\alpha=1$, $\beta =-i$, $A=-B=\frac{i}{4}$, $C=\frac{1}{4}$ and $\gamma =0$.
Then
\begin{equation*}
C_{ab}=\left(
\begin{array}{cc}
2y+\frac{i}{4} & -x+iy+\frac{1}{4} \\
-x+iy+\frac{1}{4} & -2ix-\frac{i}{4}
\end{array}
\right)
\end{equation*}
and equation (\ref{eq.Hie2}) becomes
\begin{equation}
(x-iy-\frac{1}{4})\left( V_{,xx}-V_{,yy}\right) +2\left( y+ix+\frac{i}{4}\right) V_{,xy}+3iV_{,y}+3V_{,x}=0. \label{eq.Hie5a}
\end{equation}
This is written equivalently
\begin{equation}
(x-iy)\left( \partial _{x}+i\partial _{y}\right) ^{2}V-\frac{1}{4}(\partial
_{x}-i\partial _{y})^{2}V+3(\partial _{x}+i\partial _{y})V=0
\label{eq.Hie5aa}
\end{equation}
and gives the potential
\begin{equation}
V_{25} =w^{-1/2}\left[ F_{1}(z+\sqrt{w})+F_{2}(z-\sqrt{w})\right]
\label{eq.Hie5b}
\end{equation}
where $z=x+iy$ and $w=x-iy$.
We find the QFI (see (3.2.17) of \cite{Hietarinta 1987})
\begin{eqnarray*}
I_{51} &=&(y\dot{x}-x\dot{y})(\dot{x}+i\dot{y})+\frac{i}{8}(\dot{x}-i\dot{y})^{2}+i\left( 1-\frac{z}{\sqrt{w}}\right) F_{1}(z+\sqrt{w})+ \\
&&+i\left( -1-\frac{z}{\sqrt{w}}\right) F_{2}(z-\sqrt{w}).
\end{eqnarray*}
\bigskip
6) $\alpha=1$, $\beta =\mp i$ and $\gamma =A=B=C=0$. Then
\begin{equation*}
C_{ab}=\left(
\begin{array}{cc}
2y & -x\pm iy \\
-x\pm iy & \mp 2ix
\end{array}
\right)
\end{equation*}
and equation (\ref{eq.Hie2}) becomes
\begin{equation}
(x\mp iy)\left( V_{,xx}-V_{,yy}\right) +2\left( y\pm ix\right) V_{,xy}\pm
3iV_{,y}+3V_{,x}=0 \label{eq.Hie6a}
\end{equation}
from which follows the potential
\begin{equation}
V_{26}=\frac{F_{1}(z)}{r}+F_{2}^{\prime }(z) \label{eq.Hie6b}
\end{equation}
where $F_{2}^{\prime }=\frac{dF_{2}}{dz}$ and $z=x\pm iy$.
We find the QFI (see (3.2.18) of \cite{Hietarinta 1987})
\begin{equation}
I_{61}=(y\dot{x}-x\dot{y})(\dot{x}\pm i\dot{y})-izV+iF_{2}(z).
\label{eq.Hie6c}
\end{equation}
\bigskip
7) $AB\neq 0$, $A\neq B$ and $\alpha=\beta =\gamma =C=0$. Then $C_{ab}=\left(
\begin{array}{cc}
A & 0 \\
0 & B
\end{array}
\right) $.
Equation (\ref{eq.Hie2}) becomes
\begin{equation} \label{eq.Hie7a}
(A-B)V_{,xy}=0 \implies V_{,xy}=0
\end{equation}
which gives the separable potential
\begin{equation} \label{eq.Hie7b}
V_{27}= F_{1}(x) + F_{2}(y).
\end{equation}
We find the irreducible QFIs (see (3.2.20) of \cite{Hietarinta 1987})
\begin{equation*}
I_{71a}=\dot{x}^{2}+2F_{1}(x), \enskip I_{71b}=\dot{y}^{2}+2F_{2}(y).
\end{equation*}
It can be shown that there are four special cases of the potential (\ref{eq.Hie7b}) which admit additional time-dependent QFIs and are
superintegrable. These are:
7a. The potential
\begin{equation*}
V_{271} =\frac{k_{1}}{\left( x+c_{1}\right) ^{2}}+\frac{k_{2}}{\left(
y+c_{2}\right) ^{2}}
\end{equation*}
admits the independent FIs
\begin{eqnarray*}
I_{72a} &=& -\frac{t^{2}}{2} \dot{y}^{2} + t (y+c_{2})\dot{y} - t^{2}\frac{k_{2}}{(y+ c_{2})^{2}} - \frac{1}{2}y^{2} - c_{2}y \\
I_{72b} &=& -\frac{t^{2}}{2} \dot{x}^{2} + t (x+c_{1})\dot{x} - t^{2}\frac{k_{1}}{(x+ c_{1})^{2}} - \frac{1}{2}x^{2} - c_{1}x.
\end{eqnarray*}
7b. The potential
\begin{equation*}
V_{272}=F_{1}(x)+\frac{k_{2}}{\left( y+c_{2}\right) ^{2}}
\end{equation*}
admits the FI $I_{72a}$.
7c. The potential
\begin{equation*}
V_{273}=F_{2}(y)+ \frac{k_{1}}{\left( x+c_{1}\right) ^{2}}
\end{equation*}
admits the FI $I_{72b}$.
7d. The potential (see \cite{Fris})
\begin{equation*}
V_{274}=-\frac{\lambda ^{2}}{8}(x^{2}+y^{2})-\frac{\lambda ^{2}}{4}\left(
c_{1}x+ c_{2}y\right) - \frac{k_{1}}{(x+c_{1})^{2}}-\frac{k_{2}}{(y+c_{2})^{2}}
\end{equation*}
admits the independent FIs
\begin{eqnarray*}
I_{73a} &=& e^{\lambda t}\left[ -\dot{x}^{2}+\lambda (x+c_{1})\dot{x}-\frac{\lambda^{2}}{4}(x+c_{1})^{2}+ \frac{2k_{1}}{(x+c_{1})^{2}}\right] \\
I_{73b} &=& e^{\lambda t}\left[ -\dot{y}^{2}+\lambda (y+c_{2})\dot{y}-\frac{\lambda^{2}}{4}(y+c_{2})^{2}+\frac{2k_{2}}{(y+c_{2})^{2}}\right].
\end{eqnarray*}
In all the above relations $\lambda, c_{1}, c_{2}, k_{1}, k_{2}$ are arbitrary constants.
\bigskip
8) $C\neq0$ and $\alpha=\beta=\gamma=0$.
Then $C_{ab}=\left(
\begin{array}{cc}
A & C \\
C & B
\end{array}
\right) $ and equation (\ref{eq.Hie2}) becomes
\begin{equation}
C(V_{,yy}-V_{,xx})+(A-B)V_{,xy}=0. \label{eq.Hie8a}
\end{equation}
Solving (\ref{eq.Hie8a}) we find the potential
\begin{equation} \label{eq.Hie8b}
V_{28} = F_{1}\left(y + b_{0}x + \sqrt{b_{0}^{2}+1}x\right) + F_{2}\left(y +
b_{0}x - \sqrt{b_{0}^{2}+1}x\right)
\end{equation}
where $b_{0} \equiv \frac{A-B}{2C}$.
This potential admits the QFI
\begin{equation} \label{eq.Hie8bb}
I_{81}= A\dot{x}^{2} + B\dot{y}^{2} + 2C\dot{x}\dot{y} + (A+B)V + 2C \sqrt{b_{0}^{2}+1} (F_{1}- F_{2}).
\end{equation}
We note that $b_{0}=b_{0}(A,B,C)$; here $A,B,C$ are parameters of the potential
and therefore cannot be taken as independent parameters of the QFI.
For $b_{0}=0$ we have $A=B$, $V_{,yy} - V_{,xx}=0$ and the potential reduces
to
\begin{equation} \label{eq.Hie8c}
V_{28}(b_{0}=0)= F_{1}(y+x) + F_{2}(y-x)
\end{equation}
which is the solution of the 1d-wave equation.
For the potential (\ref{eq.Hie8c}) we find the QFI
\begin{equation*}
I_{82}= \dot{x}\dot{y} + F_{1}(y+x) - F_{2}(y-x).
\end{equation*}
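Since $V_{28}$ and the QFI (\ref{eq.Hie8bb}) are new, an independent symbolic check is useful. The following sketch (our own SymPy illustration, not part of the derivation) eliminates $A$ through $A=B+2Cb_{0}$ and verifies that $dI_{81}/dt$ vanishes along $\ddot{x}^{a}=-V^{,a}$ for arbitrary $F_{1},F_{2}$:
\begin{verbatim}
# SymPy check that I_81 is conserved for V_28 (arbitrary F1, F2).
import sympy as sp

t = sp.symbols('t')
B, C, b0 = sp.symbols('B C b0', real=True)
A = B + 2*C*b0                         # from b0 = (A-B)/(2C)
x, y = sp.Function('x')(t), sp.Function('y')(t)
F1, F2 = sp.Function('F1'), sp.Function('F2')

q = sp.sqrt(b0**2 + 1)
s_plus, s_minus = y + b0*x + q*x, y + b0*x - q*x
V = F1(s_plus) + F2(s_minus)
xd, yd = sp.diff(x, t), sp.diff(y, t)

I81 = (A*xd**2 + B*yd**2 + 2*C*xd*yd + (A + B)*V
       + 2*C*q*(F1(s_plus) - F2(s_minus)))

eom = {sp.diff(x, t, 2): -sp.diff(V, x), sp.diff(y, t, 2): -sp.diff(V, y)}
print(sp.simplify(sp.diff(I81, t).subs(eom)))   # should print 0
\end{verbatim}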
\bigskip
9) $A=2$, $C=\pm i$ and $\alpha=\beta =\gamma =B=0$. Then
\begin{equation*}
C_{ab}=\left(
\begin{array}{cc}
2 & \pm i \\
\pm i & 0
\end{array}
\right)
\end{equation*}
and equation (\ref{eq.Hie2}) becomes
\begin{equation}
\mp i(V_{,xx}-V_{,yy})+2V_{,xy}=0. \label{eq.Hie9a}
\end{equation}
Solving (\ref{eq.Hie9a}) we find the potential
\begin{equation} \label{eq.Hie9b}
V_{29} = r^{2} F_{1}^{\prime \prime }(z) + F_{2}(z)
\end{equation}
where $F_{1}^{\prime \prime }= \frac{d^{2}F_{1}}{dz^{2}}$ and $z=x\pm iy$.
This potential admits the QFI (see (3.2.21) of \cite{Hietarinta 1987})
\begin{equation}
I_{91}=\dot{x}(\dot{x}\pm i\dot{y})+V_{29}+ 2zF_{1}^{\prime}(z) -2F_{1}(z).
\label{eq.Hie9c}
\end{equation}
Observe that for the trivial KT $C_{ab}=A\delta _{ab}$ the condition
$G_{,a}=2C_{ab}V^{,b}$ gives
\begin{equation*}
G_{,a}=2AV_{,a}\implies G=2AV
\end{equation*}
for all potentials $V(x,y)$. Therefore we recover the trivial result that
all 2d-potentials $V(x,y)$ admit the QFI
\begin{equation*}
I=A(\dot{x}^{2}+\dot{y}^{2}+2V)=2AH.
\end{equation*}
\bigskip
Comparing with previous works we see that the potentials $V_{21a}$ and
$V_{28}$ are new. The potential $V_{274}(c_{1}=c_{2}=0)$ is mentioned in \cite{Fris}.
\subsection{The superintegrable potentials}
\label{sec.super}
When a potential belongs simultaneously to two of the above nine \textbf{Class II} cases,
it is superintegrable (e.g. potentials (3.2.34)--(3.2.36) of
\cite{Hietarinta 1987}), because in that case the potential admits two more autonomous FIs in addition to the Hamiltonian. From the above results we
find the following \textbf{Class II} superintegrable potentials
(see also \cite{Ranada 1997}, \cite{Kalnins 2001}). \bigskip
S1) The potential (see (3.2.34) in \cite{Hietarinta 1987}, case (b) in \cite{Ranada 1997} and \cite{Fris})
\begin{equation}
V_{s1}=\frac{k}{2}(x^{2}+y^{2})+\frac{b}{x^{2}}+\frac{c}{y^{2}}
\label{eq.Hie10a}
\end{equation}
where $k,b,c$ are arbitrary constants.
This is of the form (\ref{eq.Hie3b}) for $d_{1}=d_{2}=1$,
\begin{equation*}
F_{1}\left( \frac{y}{x}\right) =b\left( \frac{y}{x}\right) ^{2}+c\left(
\frac{x}{y}\right) ^{2},\enskip F_{2}(x^{2}+y^{2})=\frac{k}{2}(x^{2}+y^{2})+
\frac{b+c}{x^{2}+y^{2}}
\end{equation*}
and also of the separable form (\ref{eq.Hie7b}). Therefore $V_{s1}$ admits
the additional QFIs
\begin{eqnarray}
I_{s1a} &=&(y\dot{x}-x\dot{y})^{2}+2b\frac{y^{2}}{x^{2}}+2c\frac{x^{2}}{y^{2}} \label{eq.Hie10b} \\
I_{s1b} &=&\frac{1}{2}\dot{x}^{2}+\frac{k}{2}x^{2}+\frac{b}{x^{2}}
\label{eq.Hie10c} \\
I_{s1c} &=&\frac{1}{2}\dot{y}^{2}+\frac{k}{2}y^{2}+\frac{c}{y^{2}}.
\label{eq.Hie10d}
\end{eqnarray}
We note that $V_{s1}\left( k=-\frac{\lambda^{2}}{4}, b=-k_{1}, c=-k_{2}
\right)$, $\lambda \neq 0$, is the $V_{274}$ for $c_{1}=c_{2}=0$ and
therefore admits also the time-dependent FIs $I_{73a}$, $I_{73b}$. \bigskip
S2) Potentials of the form (\ref{eq.Hie4b}) and (\ref{eq.Hie7b}).
Then we have to solve the systems of PDEs (\ref{eq.Hie4a})
and $V_{,xy}=0$. We find
\begin{equation}
V_{s2}=\frac{k_{1}}{2}(x^{2}+4y^{2})+\frac{k_{2}}{x^{2}}+k_{3}y
\label{eq.Hie11a}
\end{equation}
and the QFIs
\begin{eqnarray}
I_{s2a} &=&\dot{x}(y\dot{x}-x\dot{y})-k_{1}yx^{2}+\frac{2k_{2}y}{x^{2}}-
\frac{k_{3}}{2}x^{2} \label{eq.Hie11b} \\
I_{s2b} &=&\frac{1}{2}\dot{x}^{2}+\frac{k_{1}}{2}x^{2}+\frac{k_{2}}{x^{2}}
\label{eq.Hie11c} \\
I_{s2c} &=&\frac{1}{2}\dot{y}^{2}+2k_{1}y^{2}+k_{3}y. \label{eq.Hie11d}
\end{eqnarray}
where $k_{1}, k_{2}, k_{3}$ are arbitrary constants.
This is the superintegrable potential of case (a) of \cite{Ranada 1997}.
Note that the QFI $I_{3}^{a}$ given in \cite{Ranada 1997} is not correct.
The correct one is $I_{s2a}$ of (\ref{eq.Hie11b}) above.
\bigskip We remark that the potential (3.2.35) in \cite{Hietarinta 1987} is
superintegrable only for $b=4a$ in which case the potential becomes $V_{s2}$
for $k_{1}=2a$, $k_{2}=c$ and $k_{3}=0$. \bigskip
S3) Potentials of the form (\ref{eq.Hie3b}) and (\ref{eq.Hie4b}). We solve
the system of PDEs (\ref{eq.Hie3a}) and (\ref{eq.Hie4a}). We
find
\begin{equation}
V_{s3} = \frac{k_{1}}{x^{2}} + \frac{k_{2}}{r} + \frac{k_{3}y}{rx^{2}}
\label{eq.Hie12a}
\end{equation}
and the QFIs
\begin{eqnarray}
I_{s3a} &=& (y\dot{x} - x\dot{y})^{2} + 2k_{1}\frac{y^{2}}{x^{2}} + 2k_{3}\frac{ry}{x^{2}} \label{eq.Hie12b} \\
I_{s3b} &=& \dot{x}(y\dot{x} - x\dot{y}) + 2k_{1} \frac{y}{x^{2}} + k_{2}\frac{y}{r} + k_{3}\frac{x^{2}+2y^{2}}{rx^{2}} \label{eq.Hie12c}
\end{eqnarray}
where $r^{2}=x^{2}+y^{2}$.
The superintegrable potential (\ref{eq.Hie12a}) is symmetric ($x
\leftrightarrow y$) to the superintegrable potential of case (c) of \cite{Ranada 1997}. Indeed, in order to find the superintegrable potential of \cite{Ranada 1997} we simply consider the case leading to the potential of the
form $V_{24}$ of (\ref{eq.Hie4b}) for $\beta=1$ instead of $\alpha=1$.
We note that if we rename the constants in (\ref{eq.Hie12a}) as $k_{1}=b+c$,
$k_{2}=a$, $k_{3}=c-b$ we recover the superintegrable potential (3.2.36) of
\cite{Hietarinta 1987}. Indeed we have
\begin{equation*}
V_{s3}=\frac{a}{r}+\frac{\frac{b}{r+y}+\frac{c}{r-y}}{r}.
\end{equation*}
S4) If we substitute the solution (\ref{eq.Hie4b}) of the PDE (\ref{eq.Hie4a}) in the PDE (\ref{eq.Hie4a.1}), we find that for
\begin{equation*}
F_{1}(r+y)=k_{1}+k_{2}\sqrt{r+y}, \enskip F_{2}(r-y)=k_{3}\sqrt{r-y}
\end{equation*}
both PDEs (\ref{eq.Hie4a}) and (\ref{eq.Hie4a.1}) are satisfied
simultaneously. Therefore the potential (see case (d) in \cite{Ranada 1997})
\begin{equation}
V_{s4} = \frac{k_{1}}{r} + k_{2} \frac{\sqrt{r+y}}{r} + k_{3}\frac{\sqrt{r-y}}{r} \label{eq.Hie12d}
\end{equation}
is superintegrable with additional QFIs
\begin{eqnarray*}
I_{s4a} &=& \dot{x}(y\dot{x}-x\dot{y}) + \frac{k_{1}y}{r} + \frac{k_{3}(r+y)\sqrt{r-y}-k_{2}(r-y)\sqrt{r+y}}{r} \label{eq.Hie12e} \\
I_{s4b} &=& \dot{y}(x\dot{y}-y\dot{x}) + G(x,y) \label{eq.Hie12f}
\end{eqnarray*}
where $G_{,x} + yV_{s4,y}=0$ and $G_{,y} + yV_{s4,x} -2xV_{s4,y}=0$.
We note that in the case (d) in \cite{Ranada 1997} the corresponding QFIs
$I_{2}^{d}$ and $I_{3}^{d}$ are not correct, because $\{H,I_{2}^{d}\}\neq0$
and $\{H,I_{3}^{d}\}\neq0$. Moreover, this superintegrable potential is the
case (E20) in \cite{Kalnins 2001}, and it is not mentioned in the review
\cite{Hietarinta 1987}.
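As a consistency check, one may verify directly that (\ref{eq.Hie12d}) satisfies both PDEs (\ref{eq.Hie4a}) and (\ref{eq.Hie4a.1}), the latter being precisely the integrability condition guaranteeing that $G(x,y)$ exists. A brief sketch (SymPy; the test point and the constants are arbitrary):
\begin{verbatim}
import sympy as sp

x, y, k1, k2, k3 = sp.symbols('x y k1 k2 k3', positive=True)
r = sp.sqrt(x**2 + y**2)
V = k1/r + k2*sp.sqrt(r + y)/r + k3*sp.sqrt(r - y)/r

pde_4a  = x*(V.diff(x, 2) - V.diff(y, 2)) + 2*y*V.diff(x, y) + 3*V.diff(x)
pde_4a1 = y*(V.diff(x, 2) - V.diff(y, 2)) - 2*x*V.diff(x, y) - 3*V.diff(y)

# numerical spot check at an arbitrary point (full symbolic simplification
# of the nested square roots is possible but slower)
pt = {x: sp.Rational(7, 10), y: sp.Rational(13, 10), k1: 1, k2: 2, k3: 3}
print(pde_4a.subs(pt).evalf(), pde_4a1.subs(pt).evalf())   # both ~ 0
\end{verbatim}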
In the following tables we collect the results on \textbf{Class II}
potentials with the corresponding reference to the review paper
\cite{Hietarinta 1987}. \bigskip
\begin{tabular}{|l|l|l|}
\hline
\multicolumn{3}{|c|}{Integrable potentials} \\ \hline
{\large Potential} & {\large Ref \cite{Hietarinta 1987}} & {\large LFIs and
QFIs} \\ \hline
$V_{21}=\frac{F_{1}\left( \frac{y}{x}\right) }{x^{2}+y^{2}}
+F_{2}(x^{2}+y^{2})$ & (3.2.10) & $I_{11}=(y\dot{x}-x\dot{y})^{2}+2F_{1}\left( \frac{y}{x}\right) $ \\ \hline
$V_{21a}= \frac{k}{x^{2}+\ell y^{2}} + F_{2}(x^{2}+y^{2})$ & - & $I_{11a} =
(y\dot{x}-x\dot{y})^{2} + \frac{2k(1-\ell) y^{2}}{x^{2}+\ell y^{2}}$ \\
\hline
\makecell[l]{$V_{22}=\frac{F_{1}(u)-F_{2}(v)}{u^{2}-v^{2}}$, \\
$u^{2}=r^{2}+A+\left[ (r^{2}+A)^{2}-4Ax^{2}\right] ^{1/2}$, \\
$v^{2}=r^{2}+A-\left[ (r^{2}+A)^{2}-4Ax^{2}\right] ^{1/2}$} & (3.2.7,8) &
$I_{21}=(y\dot{x}-x\dot{y})^{2}+A\dot{x}^{2}+\frac{v^{2}F_{1}(u)-u^{2}F_{2}(v)}{u^{2}-v^{2}}$ \\ \hline
\makecell[l]{$V_{23}=\frac{F_{1}(u)-F_{2}(v)}{u^{2}-v^{2}}$, \\
$u^{2}=r^{2}+\left[ r^{4}-4A(x\pm iy)^{2}\right] ^{1/2}$, \\
$v^{2}=r^{2}-\left[ r^{4}-4A(x\pm iy)^{2}\right] ^{1/2}$} & (3.2.7,12) &
$I_{31}=(y\dot{x}-x\dot{y})^{2}+A(\dot{x}\pm i\dot{y})^{2}+\frac{v^{2}F_{1}(u)-u^{2}F_{2}(v)}{u^{2}-v^{2}}$ \\ \hline
$V_{24}=\frac{F_{1}(r+y)+F_{2}(r-y)}{r}$ & (3.2.15) & $I_{41}=\dot{x}(y\dot{x}-x\dot{y})+\frac{(r+y)F_{2}(r-y)-(r-y)F_{1}(r+y)}{r}$ \\ \hline
\makecell[l]{$V_{25}=w^{-1/2}\left[
F_{1}(z+\sqrt{w})+F_{2}(z-\sqrt{w})\right] $, \\ $z=x+iy$, $w=x-iy$} &
(3.2.17) & \makecell[l]{$I_{51}=(y\dot{x}-x\dot{y})(\dot{x}+i\dot{y})+
\frac{i}{8}(\dot{x}-i\dot{y})^{2}+$ \\ \qquad \enskip $+i\left(
1-\frac{z}{\sqrt{w}}\right) F_{1}(z+\sqrt{w})+$ \\ \qquad \enskip $+i\left(
-1-\frac{z}{\sqrt{w}}\right) F_{2}(z-\sqrt{w})$} \\ \hline
\makecell[l]{$V_{26}=\frac{F_{1}(z)}{r}+F_{2}^{\prime }(z)$, \\
$F_{2}^{\prime }=\frac{dF_{2}}{dz}$, $z=x\pm iy$} & (3.2.18) & $I_{61}=(y\dot{x}-x\dot{y})(\dot{x}\pm i\dot{y})-izV+iF_{2}(z)$ \\ \hline
$V_{27}=F_{1}(x)+F_{2}(y)$ & (3.2.20) & $I_{71a}=\frac{1}{2}\dot{x}^{2}+F_{1}(x)$, $I_{71b}=\frac{1}{2}\dot{y}^{2}+F_{2}(y)$ \\ \hline
\makecell[l]{$V_{28}=F_{1}\left( y+b_{0}x+\sqrt{b_{0}^{2}+1}x\right) +$ \\
\qquad \enskip $+ F_{2}\left(y+b_{0}x-\sqrt{b_{0}^{2}+1}x\right) $,
$b_{0}\equiv \frac{A-B}{2C}$} & - & \makecell[l]{$I_{81}=A\dot{x}^{2}+B\dot{y}^{2}+$ \\ \qquad \enskip $+ 2C\dot{x}\dot{y}+(A+B)V+
2C\sqrt{b_{0}^{2}+1}(F_{1}-F_{2})$} \\ \hline
$V_{28}(b_{0}=0)=F_{1}(y+x)+F_{2}(y-x)$ & - & $I_{82}=\dot{x}\dot{y}+F_{1}(y+x)-F_{2}(y-x)$ \\ \hline
\makecell[l]{$V_{29}=r^{2}F_{1}^{\prime \prime }(z)+F_{2}(z)$,
$F_{1}^{\prime \prime }=\frac{d^{2}F_{1}}{dz^{2}}$, \\ $z=x\pm iy$} &
(3.2.21) & $I_{91}=\dot{x}(\dot{x}\pm i\dot{y})+V_{29} +2zF_{1}^{\prime
}(z)- 2F_{1}(z)$ \\ \hline
\end{tabular}
\bigskip
\begin{tabular}{|l|l|l|}
\hline
\multicolumn{3}{|c|}{Superintegrable potentials} \\ \hline
{\large Potential} & {\large Ref \cite{Hietarinta 1987}} & {\large LFIs and
QFIs} \\ \hline
$V_{s1}= \frac{k}{2}(x^{2}+y^{2}) + \frac{b}{x^{2}} + \frac{c}{y^{2}}$ &
(3.2.34) & \makecell[l]{$I_{s1a}= (y\dot{x}-x\dot{y})^{2} + 2b
\frac{y^{2}}{x^{2}} + 2c \frac{x^{2}}{y^{2}}$, \\ $I_{s1b} =
\frac{1}{2}\dot{x}^{2} + \frac{k}{2}x^{2} + \frac{b}{x^{2}}$, $I_{s1c} =
\frac{1}{2}\dot{y}^{2} + \frac{k}{2}y^{2} + \frac{c}{y^{2}}$ \\ - For $k=0$:
$I_{72a}$, $I_{72b}$ \\ where $c_{1}=c_{2}=0, k_{1}=b, k_{2}=c$ \\ - For
$k=-\frac{\lambda^{2}}{4}\neq0$: $I_{73a}$, $I_{73b}$ \\ where
$c_{1}=c_{2}=0, k_{1}=-b, k_{2}=-c$ } \\ \hline
$V_{s2}= \frac{k_{1}}{2}(x^{2}+4y^{2}) + \frac{k_{2}}{x^{2}} + k_{3}y$ &
\makecell[l]{(3.2.35) \\ for $k_{3}=0$} & \makecell[l]{$I_{s2a} =
\dot{x}(y\dot{x}-x\dot{y}) - k_{1}yx^{2} + \frac{2k_{2}y}{x^{2}} -
\frac{k_{3}}{2} x^{2}$, \\ $I_{s2b}= \frac{1}{2}\dot{x}^{2} +
\frac{k_{1}}{2}x^{2} + \frac{k_{2}}{x^{2}}$, $I_{s2c}=
\frac{1}{2}\dot{y}^{2} + 2k_{1}y^{2}+ k_{3}y$} \\ \hline
$V_{s3} = \frac{k_{1}}{x^{2}} + \frac{k_{2}}{r} + \frac{k_{3}y}{rx^{2}}$ &
(3.2.36) & \makecell[l]{$I_{s3a}= (y\dot{x} - x\dot{y})^{2} +
2k_{1}\frac{y^{2}}{x^{2}} + 2k_{3}\frac{ry}{x^{2}}$, \\ $I_{s3b}=
\dot{x}(y\dot{x} - x\dot{y}) + 2k_{1} \frac{y}{x^{2}} + k_{2}\frac{y}{r} +
k_{3}\frac{x^{2}+2y^{2}}{rx^{2}}$} \\ \hline
$V_{s4} = \frac{k_{1}}{r} + k_{2} \frac{\sqrt{r+y}}{r} + k_{3}\frac{\sqrt{r-y}}{r}$ & - & \makecell[l]{$I_{s4a} = \dot{x}(y\dot{x}-x\dot{y}) +
\frac{k_{1}y}{r} + \frac{k_{3}(r+y)\sqrt{r-y}-k_{2}(r-y)\sqrt{r+y}}{r}$, \\
$I_{s4b}= \dot{y}(x\dot{y}-y\dot{x}) + G(x,y)$} \\ \hline
$V_{271}=\frac{k_{1}}{\left( x+c_{1} \right)^{2}}+ \frac{k_{2}}{\left(
y+c_{2} \right)^{2}}$ & (3.2.20) & \makecell[l]{$I_{71a}$, $I_{71b}$, \\
$I_{72a}=-\frac{t^{2}}{2}\dot{y}^{2}+t(y+c_{2})\dot{y}-
t^{2}\frac{k_{2}}{(y+c_{2})^{2}}-\frac{1}{2}y^{2}-c_{2}y$, \\
$I_{72b}=-\frac{t^{2}}{2}\dot{x}^{2}+t(x+c_{1})\dot{x}-
t^{2}\frac{k_{1}}{(x+c_{1})^{2}}-\frac{1}{2}x^{2}-c_{1}x$} \\ \hline
$V_{272}=F_{1}(x)+\frac{k_{2}}{\left( y+c_{2}\right) ^{2}}$ & (3.2.20) &
$I_{71a}$, $I_{71b}$, $I_{72a}$ \\ \hline
$V_{273}=F_{2}(y)+\frac{k_{1}}{\left( x+c_{1}\right) ^{2}}$ & (3.2.20) &
$I_{71a}$, $I_{71b}$, $I_{72b}$ \\ \hline
\makecell[l]{$V_{274}=-\frac{\lambda ^{2}}{8}(x^{2}+y^{2})-\frac{\lambda
^{2}}{4}\left( c_{1}x+ c_{2}y\right)-$ \\ \qquad \quad
$-\frac{k_{1}}{(x+c_{1})^{2}}-\frac{k_{2}}{(y+c_{2})^{2}}$, $\lambda \neq0$}
& (3.2.20) & \makecell[l]{$I_{71a}$, $I_{71b}$, \\ $I_{73a}=e^{\lambda
t}\left[ -\dot{x}^{2}+\lambda (x+c_{1})\dot{x}-\frac{\lambda
^{2}}{4}(x+c_{1})^{2}+\frac{2k_{1}}{(x+c_{1})^{2}}\right] $, \\
$I_{73b}=e^{\lambda t}\left[ -\dot{y}^{2}+\lambda
(y+c_{2})\dot{y}-\frac{\lambda
^{2}}{4}(y+c_{2})^{2}+\frac{2k_{2}}{(y+c_{2})^{2}}\right]$} \\ \hline
\end{tabular}
\section{The constraint $\left(L_{b}V^{,b}\right)_{,a} = -2 L_{(a;b)} V^{,b}
- \protect\lambda^{2}L_{a}$}
\label{sec.const3}
The integrability condition of the constraint $\left(
L_{b}V^{,b}\right)_{,a}=-2L_{(a;b)}V^{,b}-\lambda ^{2}L_{a}$ gives the PDE
(\ref{eq.PDE3.3}).
As mentioned above in section \ref{sec.find.Pots}, in order to find new
potentials from the PDE (\ref{eq.PDE3.3}) one (or both) of the conditions $\alpha= \beta =0$ and $a_{1}=a_{3}$ must be relaxed. However, if we do find a new
potential, this solution should also satisfy the remaining PDEs (\ref{eq.PDE3.1}) and (\ref{eq.PDE3.2}) in order to admit the time-dependent QFI
$I_{3}$ given in case \textbf{Integral 3} of theorem \ref{The first integrals
of an autonomous holonomic dynamical system}. New potentials which admit the
QFI $I_{3}$ shall be referred to as \textbf{Class III} potentials.
We note that the PB $\{H,I_{3}\}= \frac{\partial I_{3}}{\partial t} \neq 0$.
Therefore to find a new integrable potential we should find a \textbf{Class
III} potential admitting two independent FIs of the form $I_{3}$, say
$I_{3a}$ and $I_{3b}$, such that $\{I_{3a}, I_{3b}\} =0$.
After relaxing one, or both, of the conditions $\alpha=\beta =0$ and $a_{1}=a_{3}$ we found that the only non-trivial \textbf{Class III}
potential is the superintegrable potential $V_{3b}=-\frac{\lambda ^{2}}{2}(x^{2}+y^{2})$ (see subsection \ref{subsec.V3}) found for $\alpha\neq 0$ or $\beta\neq 0$ above. Therefore there are no new \textbf{Class III} potentials.
\section{Using FIs to find the solution of 2d integrable dynamical systems}
In this section we consider examples which show how one uses the 2d (super-)integrable potentials to find the solution of the dynamical equations.
\bigskip
1) The superintegrable potential $V_{3b}=-\frac{1}{2}k^{2}(x^{2}+y^{2})$ where $k\neq 0$.\\
We find the solution by using the time-dependent LFIs $L_{42\pm}= e^{\pm kt}(\dot{x}\mp kx)$ and $L_{43\pm}=e^{\pm kt}(\dot{y} \mp ky)$. Specifically we have
\[
\begin{cases}
e^{kt}(\dot{x} -kx) = c_{1+} \\
e^{-kt}(\dot{x} +kx) = c_{1-}
\end{cases}
\implies
\begin{cases}
\dot{x} -kx = c_{1+}e^{-kt} \\
\dot{x} +kx = c_{1-}e^{kt}
\end{cases}
\implies
x(t) = \frac{c_{1-}}{2k}e^{kt} - \frac{c_{1+}}{2k}e^{-kt}.
\]
Similarly for the LFIs $L_{43\pm}$ we find
\[
y(t) = \frac{c_{2-}}{2k}e^{kt} - \frac{c_{2+}}{2k}e^{-kt}.
\]
Here $c_{1\pm}, c_{2\pm}$ are arbitrary constants.
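The steps above can also be verified symbolically. The sketch below (SymPy, written by us for illustration) checks that the $L_{42\pm}$ are conserved along $\ddot{x}=k^{2}x$ and that the quoted $x(t)$ satisfies the equation of motion:
\begin{verbatim}
import sympy as sp

t, k, c1p, c1m = sp.symbols('t k c1p c1m', real=True)
x = sp.Function('x')(t)

eom = {sp.diff(x, t, 2): k**2*x}          # ddot(x) = -V_{3b,x} = k^2 x
for s in (+1, -1):
    L = sp.exp(s*k*t)*(sp.diff(x, t) - s*k*x)     # L_{42+} and L_{42-}
    print(sp.simplify(sp.diff(L, t).subs(eom)))   # prints 0 twice

xsol = c1m/(2*k)*sp.exp(k*t) - c1p/(2*k)*sp.exp(-k*t)
print(sp.simplify(xsol.diff(t, 2) - k**2*xsol))   # prints 0
\end{verbatim}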
\bigskip
2) The integrable potential $V_{2}=cy+F(x)$ where $F''\neq0$.\\
Using the LFI $L_{31}=\dot{y}+ct=c_{1}$ we find directly $y(t)= -\frac{c}{2}t^{2} +c_{1}t + c_{2}$ where $c,c_{1},c_{2}$ are arbitrary constants.
Using the QFI $2Q_{31} =\dot{x}^{2}+2F(x)=const=c_{3}$ we have
\[
\frac{dx}{dt} = \pm \left[-2F(x)+c_{3}\right]^{1/2} \implies dt= \pm \left[-2F(x)+c_{3}\right]^{-1/2}dx \implies t= \pm \int\left[-2F(x)+c_{3}\right]^{-1/2}dx + c_{0}
\]
where $c_{0}$ is an arbitrary constant. The inverse function of $t=t(x)$ is the solution of the system. If the function $F(x)$ is given, the solution can be explicitly determined.
\bigskip
3) For the integrable potential $V_{27}= F_{1}(x) + F_{2}(y)$ by using the QFIs
\[
I_{71a}= \frac{1}{2}\dot{x}^{2} + F_{1}(x) \enskip \text{and} \enskip I_{71b}=\frac{1}{2}\dot{y}^{2} + F_{2}(y)
\]
we find
\[
t= \int\left[c_{1}-2F_{1}(x)\right]^{-1/2}dx + c_{0}, \enskip t= \int\left[c_{2}-2F_{2}(y)\right]^{-1/2}dy + c_{3}
\]
where $c_{0}, c_{1}= 2I_{71a}, c_{2}=2I_{71b}, c_{3}$ are constants.
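For a concrete illustration of the quadrature, assume (purely hypothetically) $F_{1}(x)=x^{2}/2$; the integral can then be evaluated numerically and compared with the known harmonic--oscillator result $t-c_{0}=\arcsin (x/\sqrt{c_{1}})$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

c1 = 2.0                                 # c1 = 2 I_71a = xdot^2 + 2 F1(x)
F1 = lambda x: 0.5*x**2                  # hypothetical choice of F1

def t_of_x(x, c0=0.0):
    # t = c0 + int_0^x [c1 - 2 F1(s)]^(-1/2) ds   (branch with xdot > 0)
    val, _ = quad(lambda s: 1.0/np.sqrt(c1 - 2.0*F1(s)), 0.0, x)
    return c0 + val

x = 0.9
print(t_of_x(x), np.arcsin(x/np.sqrt(c1)))   # the two values agree
\end{verbatim}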
\section{Conclusions}
\label{sec.conclusions}
Using Theorem \ref{The first integrals of an autonomous holonomic dynamical
system} we have reproduced in a systematic way most known integrable
and superintegrable 2d potentials of autonomous conservative dynamical
systems. The method used, being covariant, is directly applicable to spaces
of higher dimensions and to metrics with any signature and curvature.
We have found two classes of potentials and in each class we have determined
the integrable and the superintegrable potentials together with their QFIs. Since
the general solution of the PDE (\ref{eq.PDE2}) is not available, we have
determined only the potentials corresponding to certain particular solutions. New solutions of this
equation will lead to new integrable and possibly superintegrable 2d
potentials.
It appears that the most difficult part in the application of Theorem \ref{The first integrals of an autonomous holonomic dynamical system} to higher
dimensions and curved configuration spaces is the determination of the KTs. The use
of algebraic computing is limited once one considers higher dimensions since
then the number of the components of the KT increases dramatically.
Fortunately, today new techniques in Differential Geometry have been
developed (e.g. \cite{Rani 2003}, \cite{Kalnins 1980}, \cite{KTs and CKVs 2006}, \cite{Crampin 2008}, \cite{Glass 2010}),
especially in
the case of spaces of constant curvature and decomposable spaces, which can
help to deal with this problem.
\bigskip
{ {\textbf{Acknowledgements:}}}
A.P. acknowledges financial support of Agencia Nacional de Investigaci\'{o}n
y Desarrollo - ANID through the program FONDECYT Iniciaci\'{o}n grant no.
11180126. Additionally, by Vicerrector\'{\i}a de Investigaci\'{o}n y
Desarrollo Tecnol\'{o}gico at Universidad Catolica del Norte.
\section{Introduction}
The detection of high-energy neutrinos by the IceCube observatory at
the South Pole (Aartsen et al. 2013a, 2014) opened a new window for the study
of the energetic astrophysical phenomena.
The discovery has triggered a wealth of studies devoted to the identification of the
possible sources (e.g., Anchordoqui et al. 2014 and Murase 2014 for recent reviews).
The data are consistent with a flavor ratio $\nu_{\rm e}:\nu_{\mu}:\nu_{\tau}=1:1:1$
and the flux level is close to the so--called Waxman--Bahcall limit (Waxman \& Bahcall 1999),
valid if neutrinos are produced by ultra--high energy cosmic rays (UHECR; $E>10^{19}$ eV) through
pion--producing hadronic interaction before leaving their -- optically thin -- sources.
However, the energies of the neutrinos ($E_{\nu}<$ few PeV) indicate that they are associated
to cosmic rays with energies much below the UHECR regime, $E\lesssim10^{17}$ eV.
The substantial isotropy of the flux (with only a small, non--significant excess in the
direction of the galactic center) is consistent with an extragalactic origin, although
a sizable contribution from galactic sources cannot be ruled out (e.g. Ahlers \& Murase 2014).
Possible extragalactic astrophysical sources include propagating cosmic rays, star-forming and starburst galaxies, galaxy clusters, $\gamma$--ray bursts and active galactic nuclei (AGN).
Among AGN, blazars, characterized by the presence of a relativistic jet of plasma moving
toward the observer (e.g., Urry \& Padovani 1995), have been widely discussed in the past
as candidates cosmic ray (CR) accelerators (e.g., Biermann \& Strittmatter 1987; see
Kotera \& Olinto 2011 for a review) and thus potential neutrino emitters (e.g., Atoyan \& Dermer 2003, Becker 2008). Murase et al. (2014) and Dermer et al. (2014) revisited the possibility -- already discussed in the past, e.g.,
Atoyan and Dermer (2003) -- that the observed neutrinos are produced in the jet of blazars
through photo--pion reactions involving high energy CR and soft photons ($p+\gamma \to X +\pi$),
followed by the prompt charged pion decay ($\pi ^{\pm}\to \mu^{\pm}+\nu_{\mu}\to e^{\pm} + 2\nu_{\mu} +\nu_{\rm e}$; hereafter we do not distinguish among $\nu$ and $\bar{\nu}$).
Their analysis -- based on the simplest, one--zone, framework -- led to the conclusion
that both the flux level and the spectral shape inferred by the IceCube data are difficult
to reproduce by this scenario.
In particular they predicted a rapid decline of the emission below 1 PeV.
In their framework -- in which the CR luminosity is assumed to be proportional to the
electromagnetic output -- it is naturally expected that the neutrino cumulative flux is
dominated by the most luminous and powerful blazars, i.e. the flat spectrum radio quasars (FSRQ),
which are also the sources characterized by the richest radiative environment (required to have efficient photo-meson reactions).
BL Lac objects, the low--power blazars defined as those displaying faint or even absent
optical broad emission lines, would provide only a minor contribution.
As noted, the Murase et al. (2014) analysis relies on the simplest scenario for blazars,
assuming in particular that their jets are characterized by a well localized emission region
(hence the definition of one--zone models) with a well defined speed.
In a previous paper (Tavecchio, Ghisellini \& Guetta 2014, hereafter Paper I), we reconsidered
this issue showing that, under the assumption that the jet presents a velocity structure, i.e.
the flow is composed by a fast spine surrounded by a slower sheath (or layer), the neutrino output
from the weak BL Lac objects (the so--called Highly peaked BL Lacs, HBL) is boosted and could match the observations.
The proposal for the existence of a velocity structure of the jet has been advanced as a possible
solution for the so--called ``Doppler crisis" for TeV BL Lacs (e.g. Georganopoulos \& Kazanas 2003,
Ghisellini, Tavecchio, \& Chiaberge 2005) and to unify the BL Lacs and radiogalaxy populations
(e.g. Chiaberge et al. 2000, Meyer et al. 2011, Sbarrato et al. 2014).
Direct radio VLBI imaging of both radiogalaxies (e.g. Nagai et al. 2014, M{\"u}ller et al. 2014) and BL Lac
(e.g., Giroletti et al. 2004, Piner \& Edwards 2014) jets, often showing a ``limb brightening"
transverse structure, provides a convincing observational support to this idea, also corroborated by numerical simulations (e.g. McKinney 2006, Rossi et al. 2008).
The reason behind the possibility to increase the neutrino (and inverse Compton $\gamma$-ray) production
efficiency in such a spine--layer structure stems from the fact that for particles flowing in
the spine the radiation field produced in the layer appears to be amplified because of the
relative motion between the two structures (e.g. Tavecchio \& Ghisellini 2008).
In these conditions, the density of the soft photons in the spine rest frame -- determining the
proton cooling rate and hence the neutrino luminosity -- can easily exceed that of the locally
produced synchrotron ones, the only component taken in consideration in the one--zone modeling of
Murase et al. (2014) in the case of BL Lacs (for FSRQ, instead, the photon field is thought to be
dominated by the radiation coming from the external environment).
In Paper I we considered only the weakest BL Lac sources -- similar to the prototypical TeV
blazar Mkn 421 -- for which the arguments supporting the existence of the jet structure are the most compelling.
Interestingly, a hint of an actual association between the IceCube events and some
low--power TeV emitting BL Lac (among which the aforementioned Mkn 421) has been found
by Padovani \& Resconi (2014), although their result is not confirmed by a
sophisticated analysis of the IceCube collaboration (Aartsen et al. 2013b, 2014).
There are hints suggesting that a velocity structure could be a universal characteristic of all BL Lac jets.
This idea is supported by the modeling of the radio-galaxy emission through the spine-layer model
(Tavecchio \& Ghisellini 2008, 2014), which suggests that these jets are typically more
powerful than those associated with the weakest BL Lacs (rather, they resemble the BL Lacs of the intermediate,
IBL, or low--synchrotron peak, LBL, category).
These arguments motivated us to extend our previous work presented in Paper I, considering the possibility
that the entire BL Lac population is a source of high--energy neutrinos.
To this aim we have to refine the simple description of the cosmic evolution we adopted in Paper I
with a more complex, luminosity--dependent, evolution of the BL Lac luminosity function.
We describe our neutrino emission model and the assumed cosmic evolution of BL Lacs in \S 2.
We report the results in \S 3, in which we also present a list of the most probable candidates
expected to be associated with the IceCube events.
In \S 4 we conclude with a discussion.
Throughout the paper, the following cosmological parameters are assumed:
$H_0=70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\rm M}=0.3$, $\Omega_{\Lambda}=0.7$.
We use the notation $Q=Q_X \, 10^X $ in cgs units.
\section{Setting the stage}
\subsection{Neutrino emission}
We calculate the neutrino emission from a single BL Lac following the scheme
already adopted and described in Paper I.
Here we just recall its basic features.
We assume a two--flow jet structure, with a fast spine (with bulk Lorentz factor
$\Gamma_{\rm s}$) with cross sectional radius $R$ surrounded by the slower and
thin layer (with $\Gamma_{\rm l}<\Gamma_{\rm s}$). The corresponding Doppler factors,
denoting with $\theta_{\rm v}$ the observing angle, are
$\delta_{\rm l,s}=[\Gamma_{\rm l,s}(1-\beta_{\rm l,s}\cos \theta_{\rm v})]^{-1}$.
We further assume that the spine carries a population of high-energy CR (protons, for simplicity),
whose luminosity in the spine frame (for which we use primed symbols) is parametrized by a
cut--offed power law distribution in energy:
\begin{equation}
L^ {\prime}_{\rm p}(E_{\rm p}^ {\prime})=k_{\rm p} E_{\rm p}^{\prime \, -n}
\exp\left( -\frac{E_{\rm p}^ {\prime}}{E_{\rm cut}^ {\prime}}\right) \;\;\;
E_{\rm p}^ {\prime}>E_{\rm min}^ {\prime}
\end{equation}
with total (spine frame) luminosity
$L_{\rm p}^ {\prime}=\int L_{\rm p}^ {\prime}(E_{\rm p}^ {\prime}) dE_{\rm p}^ {\prime}$.
The cooling rate $t^{\prime \, -1}_{p\gamma}(E_{\rm p}^{\prime})$ of protons with energy
$E_{\rm p}^{\prime}$ through the photo--meson reaction with a target radiation field with
numerical density $n_{\rm t}^{\prime}(\epsilon)$ is given by (Atoyan \& Dermer 2003,
see also Dermer \& Menon 2009):
\begin{equation}
t^{\prime \, -1}_{p\gamma}(E_{\rm p}^{\prime})=c \int_{\epsilon_{\rm th}}^{\infty} d\epsilon
\frac{n_{\rm t}^{\prime}(\epsilon)}{2\gamma_{\rm p}^{\prime \, 2}\epsilon^2}
\int_{\epsilon_{\rm th}}^{2\epsilon\gamma_{\rm p}^{\prime}} d{\bar\epsilon}\,
\sigma_{p\gamma}({\bar\epsilon})\, K_{p\gamma}({\bar\epsilon}) \, {\bar\epsilon},
\label{tpg}
\end{equation}
where $\gamma _{\rm p}^{\prime}=E_{\rm p}^{\prime}/m_{\rm p}c^2$, $\sigma_{p\gamma}(\epsilon)$
is the photo--pion cross section, $K_{p\gamma}(\epsilon)$ the inelasticity and $\epsilon_{\rm th}$
is the threshold energy of the process.
The photo--meson production efficiency is measured by the factor $f_{p\gamma}$, defined as
the ratio between the timescales of the competing adiabatic and photo-meson losses:
\begin{equation}
f_{p\gamma}(E_{\rm p}^ {\prime})=\frac{t^ {\prime}_{\rm ad}}{t^{\prime}_{p\gamma}(E_{\rm p}^{\prime})}.
\end{equation}
where $t^ {\prime}_{\rm ad}\approx R/c$.
The neutrino luminosity in the spine frame can thus be calculated as:
\begin{equation}
E_{\nu}^{\prime} L^{\prime}_{\nu}({E_{\nu}^{\prime}}) \simeq \frac{3}{8}
\min [1,f_{p\gamma}(E_{\rm p}^ {\prime})] \, E_{\rm p}^{\prime} L_{\rm p}^{\prime}({E_{\rm p}^{\prime}}),
\end{equation}
where $E_{\nu}^{\prime}=0.05\, E_{\rm p}^{\prime}$.
The factor $3/8$ takes into account the fraction of the energy going into $\nu$ and $\bar{\nu}$ (of all flavors).
The {\it observed} luminosity is derived taking into account the relativistic boosting,
parametrized by the relativistic Doppler factor
$\delta_{\rm s}$: $ E_{\nu} L_{\nu}({E_{\nu}}) = E_{\nu}^{\prime} L^{\prime}_{\nu}({E_{\nu}^{\prime}})\,
\delta_{\rm s}^4$ and $E_{\nu}=\delta_{\rm s}E_{\nu}^{\prime}$.
We assume that the dominant population of soft photons --
specifying $n^{\prime}_{\rm t}(\epsilon)$ in Eq. (\ref{tpg}) -- is provided by the boosted layer radiation
(we show in Paper I that the internally produced synchrotron photons provide a negligible contribution).
The spectrum of this component is modeled as a broken power law $L(\epsilon_{\rm l})$
with indices $\alpha_{1,2}$ and (observer frame, unprimed symbols) SED peak energy $\epsilon_{\rm o}$.
The layer luminosity is parametrized by the total (integrated)
luminosity -- in the observer frame -- $L_{\rm l}$.
As in Paper I we neglect the anisotropy of the layer radiation field in the spine frame (Dermer 1995).
We also neglect the high-energy photons produced in the neutral pions decay $\pi^0\to \gamma \gamma$.
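A rough numerical sketch of the cooling rate (\ref{tpg}) and of $f_{p\gamma}$ is given below. It replaces $\sigma_{p\gamma}K_{p\gamma}$ by a step function around the $\Delta$ resonance and adopts an arbitrary normalization and break energy for the spine--frame target field; all numbers are purely illustrative and are not the values used in our calculations:
\begin{verbatim}
import numpy as np
from scipy.integrate import trapezoid

mp_c2 = 0.938e9                 # proton rest energy [eV]
sigK  = 7e-29                   # sigma_pg*K_pg near the Delta resonance [cm^2]
eb_lo, eb_hi = 2e8, 5e8         # resonance band in the proton rest frame [eV]

def n_target(eps, n0=1e4, eps_o=6e3, a1=0.5, a2=1.5):
    # broken power-law photon density [cm^-3 eV^-1]; n0, eps_o are placeholders
    return np.where(eps < eps_o,
                    n0*(eps/eps_o)**(-a1 - 1), n0*(eps/eps_o)**(-a2 - 1))

def inv_t_pg(Ep, c=3e10):
    gp  = Ep/mp_c2
    eps = np.logspace(np.log10(eb_lo/(2*gp)), 9, 400)   # target photons [eV]
    hi  = np.minimum(2.0*eps*gp, eb_hi)
    inner = 0.5*sigK*np.clip(hi**2 - eb_lo**2, 0.0, None)  # int sigK*ebar debar
    integrand = n_target(eps)/(2.0*gp**2*eps**2)*inner
    return c*trapezoid(integrand, eps)

R, Ep = 1e15, 3e15              # jet radius [cm], proton energy [eV]
print((R/3e10)*inv_t_pg(Ep))    # f_pg = t'_ad * (t'_pg)^-1, with t'_ad ~ R/c
\end{verbatim}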
\subsection{Model parameters and scaling laws}
Summarizing, our model is specified by the following parameters: the jet radius $R$, the spine and
layer Lorentz factors $\Gamma_{\rm s}$ and $\Gamma_{\rm l}$, the observed layer radiative luminosity $L_{\rm l}$, the peak $\epsilon_{\rm o}$ of its energy distribution [in $\epsilon L(\epsilon)$],
the spectral slopes $\alpha_1$ and $\alpha_2$ of $L_{\rm l}(\epsilon)$, the spine comoving
CR luminosity $L^{\prime}_p$, the CR power law index $n$, and the minimum and the cut--off energies
$E^{\prime}_{\rm min}$, $E^{\prime}_{\rm cut}$.
For definiteness we fix the structural jet parameters of the entire BL Lac
population to the values adopted in Paper I.
Specifically we assume $\Gamma_{\rm s}=15$, $\Gamma_{\rm l}=2$, a jet radius $R=10^{15}$ cm and
a layer spectrum with slopes $\alpha_{1}=0.5$ and $\alpha_{2}=1.5$.
\begin{figure}
\vspace*{-1.5 truecm}
\hspace*{-0.8 truecm}
\psfig{file=fpg.ps,height=10.5cm,width=10.5cm}
\vspace{-1.1 cm}
\caption{Photopion production efficiency, $f_{p\gamma}$, as a function of the energy (in the jet spine frame) for
protons carried by the spine scattering off the layer radiation field, for a BL Lac
$\gamma$--ray luminosity $L_{\gamma}=10^{44}$ erg s$^{-1}$ (solid blue line) and
$L_{\gamma}=10^{46}$ erg s$^{-1}$ (dashed red line).
}
\label{diffuse}
\end{figure}
In Paper I we considered two possible realizations of the model, characterized by two different values
of the layer peak energy, $\epsilon_{\rm o}$.
Through the threshold condition, $E^{\prime}_{\rm p}\epsilon^{\prime}_{\rm t}> m_{\pi}m_{\rm p}c^4$,
$\epsilon_{\rm o}$ affects the possible values of the minimum CR energy.
In fact, increasingly larger values of $\epsilon_{\rm o}$ allow for lower values of $E^{\prime}_{\rm p,min}$
and thus lower energies of the produced neutrinos.
On the other hand, due to the steep CR distribution, decreasing $E^{\prime}_{\rm p,min}$ increases the CR power required to produce a given neutrino output. In Paper I we show that to satisfactorily reproduce the low energy data points around 100 TeV we have
to assume a layer emission peaking in the UV band, $\epsilon_{\rm o}\approx 400$ eV.
In the following we adopt this value for our reference model.
For both the CR luminosity $L^{\prime}_{\rm p}$ and the layer radiative
luminosity $L_{\rm l}$ we assume,
as a physically--motivated working hypothesis, a linear dependence on the jet power,
$P_{\rm jet}$ --- i.e. constant efficiencies.
In turn, we assume that $P_{\rm jet}$ is well traced by the observed $\gamma$--ray $0.1-100$ GeV
luminosity, $L_{\gamma}$, as supported by the modeling of blazar SED (e.g. Ghisellini et al. 2014).
We normalize the values of $L^{\prime}_{\rm p}$ and $L_{\rm l}$ to the values corresponding to
the weakest sources, corresponding to $L_{\gamma}=10^{44}$ erg s$^{-1}$.
Therefore, we assume:
\begin{equation}
L^{\prime}_{\rm p}=L^{\prime}_{\rm p,o} \, \frac{L_{\gamma}}{10^{44} {\rm erg}\,\, {\rm s}^{-1}}; \; \;\;
L_{\rm l}=L_{\rm l,o} \, \frac{L_{\gamma}}{10^{44} {\rm erg}\,\, {\rm s}^{-1}}.
\end{equation}
For the layer we adopt the value used in Paper I, $L_{\rm l,o}=2\times 10^{44}$ erg s$^{-1}$,
while $L^{\prime}_{\rm p,o}$ is left as a free parameter, together with the other parameters
specifying the CR distribution.
The value of $f_{p\gamma}(E^{\prime}_{\rm p})$ for the assumed set of parameters is shown in Fig. 1. Note the large efficiency ($f_{\rm p\gamma}>0.1$) characterizing the most powerful sources for proton energies corresponding to the neutrinos detected by IceCube.
Given the assumed linear scaling of CR and layer luminosities with the jet power,
the neutrino luminosity --- proportional to their product --- will scale as
$L_{\nu}\propto L^{\prime}_{\rm p} L_{\rm l}\propto L^2_{\gamma}$.
Alternatively, this can be expressed by the fact that the efficiency of the neutrino production,
$\eta _{\nu}\equiv L_{\nu}/P_{\rm jet}$, increases with the jet power (and thus with
the $\gamma$--ray luminosity), $\eta _{\nu} \propto L_{\gamma}$.
We will see that this fact implies that, despite the cosmic density of sources decreases with
their $\gamma$--ray luminosity (i.e. a decreasing luminosity function), the cumulative
cosmic neutrino output is dominated by the most powerful --- but rare --- sources.
\subsection{Diffuse intensity}
The cumulative diffuse neutrino intensity deriving from the entire population of BL Lacs is evaluated as:
\begin{equation}
E_{\nu}I(E_{\nu})= \frac{c E_{\nu}^2 }{4\pi H_o} \int_{0}^{z_{\rm max}} \int_{L_{\gamma,1}}^{L_{\gamma,2}} \frac{j[L_{\gamma},E_{\nu}(1+z),z]}{\sqrt{\Omega_{\rm M}(1+z)^3+\Omega_{\Lambda}}} \, dL_{\gamma} dz,
\label{cumul}
\end{equation}
in which the luminosity--dependent comoving volume neutrino emissivity $j$ is expressed by the product
of the comoving density of sources with a given $\gamma$--ray luminosity, provided by the luminosity
function $\Sigma(L_{\gamma}, z)$, and the corresponding source neutrino luminosity:
\begin{equation}
j(L_{\gamma}, E_{\nu},z)=\Sigma(L_{\gamma}, z) \, \frac{L_{\nu}({E_{\nu}})}{E_{\nu}}.
\label{emissivity}
\end{equation}
Eq. (\ref{cumul}) is a generalization of the relation used in Paper I, which was suitable
for a population of sources with a unique luminosity.
We derive $\Sigma(L_{\gamma}, z)$ using the luminosity function and the parameters for its
luminosity--dependent evolution for BL Lacs derived by Ajello et al. (2014) using {\it Fermi}/LAT data.
The local (i.e. $z=0$) luminosity function is described by:
\begin{equation}
\Sigma(L_{\gamma}, z=0)= \frac{A}{\ln(10)L_{\gamma}}\left[ \left( \frac{L_{\gamma}}{L_{\rm *}}\right)^{\gamma_1}
+ \left( \frac{L_{\gamma}}{L_{\rm *}}\right)^{\gamma_2} \right]^{-1}
\label{lumfun}
\end{equation}
with $A=3.4\times 10^{-9}$ Mpc$^{-3}$; $\gamma_1=0.27$; $\gamma_2=1.86$ and $L_{\rm *}=2.8\times 10^{47}$ erg s$^{-1}$.
This luminosity function evolves with $z$ as:
\begin{equation}
\Sigma(L_{\gamma}, z)= \Sigma(L_{\gamma}, z=0) \times e(L_{\gamma}, z),
\end{equation}
where\footnote{The sign of the exponents $p1$ and $p2$ in Ajello et al. (2014)
is incorrect (M. Ajello, priv. comm.).}:
\begin{equation}
e(L_{\gamma}, z)= \left[ \left( \frac{1+z}{1+z_c(L_{\gamma})}\right)^{-p1(L_{\gamma})} +
\left( \frac{1+z}{1+z_c(L_{\gamma})}\right)^{-p2}\right]^{-1},
\label{evol}
\end{equation}
and the functions $z_c(L_{\gamma})$ and $p1(L_{\gamma})$ are specified by:
\begin{equation}
z_c(L_{\gamma})=z_c^* \cdot (L_{\gamma}/10^{48} {\rm erg} \, {\rm s}^{-1})^{\alpha},
\end{equation}
\begin{equation}
p1(L_{\gamma})=p1^* +\tau \cdot \log (L_{\gamma}/10^{46} {\rm erg} \, {\rm s}^{-1}).
\end{equation}
The best fit parameters derived by Ajello et al. (2014) are: $p2=-7.4$, $z_c^*=1.34$,
$\alpha=4.53\times 10^{-2}$, $p1^*=2.24$, $\tau=4.92$.
This parametrization captures the basic features of the $\gamma$--ray emitting BL Lac evolution.
In particular, low luminosity sources ($L_{\gamma}<10^{45}$ erg s$^{-1}$) are characterized by
a negative evolution (i.e. a density decreasing with $z$), while sources of higher luminosity display
a null or positive evolution.
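For reference, the parametrization of Eqs. (\ref{lumfun})--(\ref{evol}), together with the expressions for $z_c(L_{\gamma})$ and $p1(L_{\gamma})$ and the quoted best--fit values, can be coded directly. The Python sketch below (ours, purely for illustration) reproduces the qualitative behaviour just described:
\begin{verbatim}
import numpy as np

A, g1, g2, Lstar = 3.4e-9, 0.27, 1.86, 2.8e47      # Mpc^-3, slopes, erg/s
p2, zc_star, alpha, p1_star, tau = -7.4, 1.34, 4.53e-2, 2.24, 4.92

def Sigma(Lg, z):
    local = A/(np.log(10)*Lg)/((Lg/Lstar)**g1 + (Lg/Lstar)**g2)
    zc = zc_star*(Lg/1e48)**alpha
    p1 = p1_star + tau*np.log10(Lg/1e46)
    ev = 1.0/(((1 + z)/(1 + zc))**(-p1) + ((1 + z)/(1 + zc))**(-p2))
    return local*ev

print(Sigma(1e44, 0.0), Sigma(1e44, 1.0))  # density decreasing with z
print(Sigma(1e46, 0.0), Sigma(1e46, 1.0))  # density increasing out to z ~ z_c
\end{verbatim}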
We consider that the BL Lac $\gamma$--ray luminosity is in the range
$10^{44}$ erg s$^{-1} < L_{\gamma} < 10^{46}$ erg s$^{-1}$.
Note that in the {\it Fermi} second AGN catalogue (2LAC, Ackermann et al. 2011) there are sources
classified as BL Lac objects with even larger $L_{\gamma}$, as also assumed in the population
study of Ajello et al. (2014).
However, as discussed in Ghisellini et al. (2011) (see also Giommi et al. 2013 and Ruan et al. 2014),
these are instead intermediate objects between FSRQ and BL Lacs or even misclassified FSRQ
whose beamed non--thermal continuum is so luminous as to swamp the broad emission lines.
We suppose that the jets of these sources do not develop an important layer and therefore we
do not consider them as neutrino emitters.
The assumed maximum luminosity is much below the break luminosity $L_*$.
The local luminosity function,
Eq. (\ref{lumfun}), can thus be well approximated by a single power law,
$\Sigma (L_{\gamma},z=0)\propto L_{\gamma}^{-(\gamma_1+1)}$.
Recalling the relation between the neutrino and the $\gamma$--ray luminosity ($L_{\nu}\propto L_{\gamma}^2$),
the neutrino luminosity density (Eq. \ref{emissivity}) can also be expressed as a function of the
sole $\gamma$--ray luminosity.
Therefore we can express the contribution of the sources with a given $\gamma$--ray luminosity to
the total neutrino background as:
\begin{equation}
I(L_{\gamma})\propto L_{\gamma}j(L_{\gamma}) \propto L_{\gamma}\, \Sigma(L_{\gamma}, z) \,
L_{\nu} \propto L_{\gamma}^{-\gamma_1+2}
\label{intensity}
\end{equation}
from which $I(L_{\gamma})\propto L_{\gamma}^{1.73}$, i.e. the resulting integral neutrino flux
is dominated by the most powerful sources.
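The dominance of the high--luminosity sources can be checked numerically by evaluating the double integral of Eq. (\ref{cumul}) up to an overall normalization, modelling the per--source output simply as $L_{\nu}\propto L_{\gamma}^{2}$ and neglecting the spectral redshift factor (a crude sketch, not the full spine--layer calculation used for Fig. 2; it assumes the function {\tt Sigma} of the previous snippet is in scope):
\begin{verbatim}
import numpy as np
from scipy.integrate import trapezoid

def relative_intensity(L_lo, L_hi, zmax=6.0, n=200):
    # relative contribution of sources with L_lo < L_gamma < L_hi to Eq. (6)
    logL = np.linspace(np.log10(L_lo), np.log10(L_hi), n)
    z = np.linspace(0.0, zmax, n)
    LL, ZZ = np.meshgrid(10**logL, z)
    w = Sigma(LL, ZZ)*(LL/1e44)**2*LL*np.log(10)   # L_nu ~ L_gamma^2, dL->dlogL
    w /= np.sqrt(0.3*(1 + ZZ)**3 + 0.7)
    return trapezoid(trapezoid(w, z, axis=0), logL)

low, high = relative_intensity(1e44, 1e45), relative_intensity(1e45, 1e46)
print(high/low)   # the higher-luminosity decade dominates, cf. Fig. 2
\end{verbatim}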
\section{Results}
\begin{figure}
\vskip -0.5 cm
\hspace{-1.5 truecm}
\psfig{file=combo.ps,height=12.5cm,width=12.cm}
\vspace{-0.3 cm}
\caption{
{\it Upper panel}: measured diffuse intensities of high--energy neutrinos
(red symbols, from Aartsen et al. 2014).
Red triangles indicate upper limits.
Gray data points show the fluxes for an increase of the prompt atmospheric background to the level of 90\% CL limit.
The black solid line reports the diffuse neutrino intensity calculated assuming that all the
BL Lac jets have a spine--layer structure.
Gray lines report the contributions from sources with
$10^{44}$ erg s$^{-1} < L_{\gamma} < 10^{45}$ erg s$^{-1}$ (dashed) and
$10^{45}$ erg s$^{-1} < L_{\gamma} < 10^{46}$ erg s$^{-1}$ (long dashed).
The blue lines report the corresponding CR intensities, assuming efficient escape from the jet.
Orange (Apel et al. 2012) and cyan (Chen 2008) data points show the observed high--energy CR spectrum.
{\it Lower panel:} as the upper panel but considering only the sources with
$L_{\gamma} < 10^{45}$ erg s$^{-1}$.
Gray lines show the contribution of sources with
$10^{44}$ erg s$^{-1} < L_{\gamma} < 3\times10^{44}$ erg s$^{-1}$ (dashed) and
$3\times 10^{44}$ erg s$^{-1} < L_{\gamma} < 10^{45}$ erg s$^{-1}$ (long dashed).
}
\label{diffuse}
\end{figure}
\subsection{Cumulative intensity}
We apply the model described above to reproduce the observed neutrino intensity.
As noted above, the only free parameters of the model are those specifying the proton
energy distribution and the total CR luminosity normalization:
$n$, $E^{\prime}_{\rm min}$, $E^{\prime}_{\rm cut}$ and $L^{\prime}_{\rm p,o}$.
First we consider the case in which the entire BL Lac population is characterized by
the presence of a structured jet, fixing $L_{\gamma,2}=10^{46}$ erg s$^{-1}$ in Eq. (\ref{cumul}).
For the parameters reported in the first row of Tab. 1 we obtain the diffuse spectrum shown by the solid black line in Fig. 2 (upper panel), to be compared with the reported IceCube data points from Aartsen et al. (2014).
The abrupt cut-off at low energy is an artifact due to the assumed abrupt truncation of the CR energy
distribution at low energy.
The gray lines show the contributions from BL Lacs in two different ranges of
luminosity, namely $10^{44}-10^{45}$ erg s$^{-1}$ (dashed) and $10^{45}-10^{46}$ erg s$^{-1}$ (long dashed).
As expected from the considerations above (\S 2.3), the total emission is dominated
by the most luminous sources, although their density is much smaller than that of the low--luminosity ones.
The high--energy cut--off of the CR distribution is robustly fixed to $E^{\prime}_{\rm cut}=3$ PeV
by the IceCube upper limits at high energy.
The value of the minimum CR energy $E^{\prime}_{\rm min}$ is instead less constrained.
The lowest energy IceCube data point at $\approx 100$ TeV allows us to limit $E^{\prime}_{\rm min}$ from above,
$E^{\prime}_{\rm min}\lesssim 20\times E_{\nu}/\delta_{\rm s}\approx 10^{14}$ eV
(we ignore the cosmological redshift).
The curves in Fig. 2 have been derived by assuming $E^{\prime}_{\rm min}=2\times 10^{13}$ eV,
although lower values are allowed.
The flat spectrum points to a relatively soft CR spectrum, $n=2.8$.
In the lower panel of Fig. 2 we report the case, similar to that discussed in Paper I,
in which only the jets of the weak BL Lacs --- operatively defined as the sources with
$L_{\gamma}<10^{45}$ erg s$^{-1}$ --- develop a layer.
The cosmic ray power for a given $\gamma$-ray luminosity, $L_{\rm p,o}^{\prime}$, increases by a factor 10 with respect to the previous case.
In Fig. 2 we also show the cumulative CR flux from BL Lacs (blue lines) assuming efficient
escape from the jet and efficient penetration within the Milky Way.
For the case of all BL Lacs the flux is well below the measured level.
For the case of HBL alone the flux is close to the limit fixed by the level recorded at the Earth.
This is because, if only low power BL Lacs have to reproduce the neutrino flux,
they must contain a larger number of energetic cosmic rays than if BL Lacs of all powers contribute. As noted before, the photopion production efficiency $f_{p\gamma}$ -- and hence the neutrino emission efficiency -- increases with the jet power (or, equivalently, with the $\gamma$-ray luminosity), i.e. low power jets are less efficient than high power jets in producing neutrinos.
For our two models the contribution of the BL Lacs to the CR in the $10^{15}-3\times 10^{16}$ eV energy range is of the order of $\sim$5\% and $\sim$50\% for the ``All" and the ``Low power" case, respectively.
\begin{table}
\begin{center}
\begin{tabular}{lccc}
\hline
\hline
Model & $L^{\prime}_{\rm p,o}$ & $E^{\prime}_{\rm min}$ & $E^{\prime}_{\rm cut}$\\
&[erg s$^{-1}$] & [eV] & [eV]\\
\hline
All& $3\cdot10^{40}$& $2\cdot10^{13}$& $3\cdot10^{15}$ \\
Low power& $3\cdot10^{41}$ & $2\cdot10^{13}$& $2.3\cdot10^{15}$ \\
\hline
\hline
\end{tabular}
\vspace{0.3 cm}
\caption{
Parameters for the two realizations of the model shown in Fig. 2.
The three columns report the normalization of the CR luminosity, the minimum and the cut--off
energy of the CR energy distribution.
}
\end{center}
\label{parametri}
\end{table}
\subsection{Jet CR power}
The CR luminosity required to match the observed flux is relatively limited.
The $\gamma$--ray luminosity dependent beaming--corrected power in CR (similar to that valid for photons, e.g. Celotti \& Ghisellini 2008):
\begin{equation}
P_{\rm CR}\, =\, \frac{L^{\prime}_{\rm p}\, \delta_{\rm s}^4}{\Gamma_{\rm s}^2} \, =\,
L^{\prime}_{\rm p,o} L_{\gamma,44} \, \frac{\delta_{\rm s}^4}{\Gamma_{\rm s}^2}
\end{equation}
is $P_{\rm CR}\simeq 2\times 10^{43} L_{\gamma,44}$ erg s$^{-1}$ (``All" case)
and $P_{\rm CR}\simeq 2\times 10^{44} L_{\gamma,44}$ erg s$^{-1}$ (``Low power" case).
This value can be compared to the beaming corrected radiative luminosity, which for blazars
can be directly related to the observed $\gamma$--ray luminosity (Sbarrato et al. 2012),
$P_{\rm rad}\approx 3\times 10^{42} L_{\gamma,44}^{0.78}$ erg s$^{-1}$.
The ($\gamma$--ray luminosity dependent) ratio between the two quantities is
thus $\xi=P_{\rm CR}/P_{\rm rad}\approx 5 \, L_{\gamma,44}^{0.22}$ and
$\approx 50 \, L_{\gamma,44}^{0.22}$ for the two cases.
These values should be compared to
$\xi\approx 100$ assumed by Murase et al. (2014) -- although the possible existence of a curved CR distribution such as that discussed by Dermer et al. (2014) could allow one to reduce such a large value.
Since for blazar jets the ratio between the radiative and the kinetic power
(calculated assuming a composition of one {\it cold} proton per emitting electron) is
$P_{\rm rad}/P_{\rm jet}\approx 0.1$ (e.g. Nemmen et al. 2012, Ghisellini et al. 2014),
we can also assess the ratio between the jet power
(calculated neglecting the contribution of CR) to the CR power,
$P_{\rm CR}/P_{\rm jet}\approx 0.5 \, L_{\gamma,44}^{0.22}$ for the ``All" case and
ten times larger for the ``Low power" case. Therefore, even in the most conservative case,
the jet should be able to channel a sizable part of its kinetic power into CR acceleration.
As a consequence, the total jet power should increase by a corresponding amount
with respect to the current estimates.
\subsection{Neutrino point sources}
Having calculated the expected cumulative neutrino flux we can also derive the expected number of
events detectable by IceCube from a given BL Lac object.
This is particularly valuable in view of the identification of the possible astrophysical counterparts
of the detected neutrinos and to test our model.
To this aim, first of all we calculate the theoretical differential neutrino number flux at the Earth
from a generic source of neutrino luminosity $L_{\nu}$ at redshift $z$ as:
\begin{equation}
\phi(E_{\nu})\equiv\frac{dN}{dt\, dE_{\nu}\, dA}=\frac{L_{\nu}[E_{\nu}(1+z)]}{4\pi d_L^2 \, E_{\nu}},
\end{equation}
where $d_{\rm L}$ is the luminosity distance.
We derive the expected IceCube rate convolving the flux with the energy--dependent IceCube
effective area $A_{\rm eff}$ (taken from Aartsen et al. 2013a).
Finally we derive the number of events expected with an exposure of 3 years (corresponding
to an effective exposure of $T_{\rm exp}=998$ days):
\begin{equation}
N_{\nu}=T_{\rm exp} \int A_{\rm eff}(E_{\nu}) \phi(E_{\nu}) \,dE_{\nu}.
\end{equation}
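Schematically, this convolution of the flux $\phi(E_{\nu})$ with the effective area can be coded as below; note that both the $E^{-2}$ spectral shape and the effective--area values are placeholders chosen for illustration, not the published IceCube response:
\begin{verbatim}
import numpy as np
from scipy.integrate import trapezoid

E = np.logspace(4, 7, 60)          # neutrino energies, 10 TeV - 10 PeV [GeV]
A_eff = 1e4*(E/1e5)**0.5           # hypothetical effective area [cm^2]

def expected_events(phi0, T_days=998.0):
    # phi(E) = phi0 (E/100 TeV)^-2  [GeV^-1 cm^-2 s^-1]
    phi = phi0*(E/1e5)**(-2.0)
    rate = trapezoid(A_eff*phi, E)  # events s^-1
    return rate*T_days*86400.0

print(expected_events(phi0=1e-18))
\end{verbatim}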
For consistency, we first checked that our two models represented in Fig. 2 provide about
30 events detected in 3 years (Aartsen et al. 2014).
We then applied the procedure using the BL Lacs belonging to the 2LAC
catalogue\footnote{{\tt http://www.asdc.asi.it/fermi2lac/}} (Ackermann et al. 2011) with measured redshift.
For each source the $\gamma$--ray luminosity is derived converting the flux provided by the
2LAC catalogue using the procedure described in Ghisellini et al. (2009).
The $\gamma$--ray luminosity is then converted into the luminosity in neutrinos
$L_{\nu}(E_{\nu})$ according to our model.
The resulting event rates ($N_{\nu}/T_{\rm exp}$ in units of yr$^{-1}$) for the sources,
as a function of $L_{\gamma}$, are reported in Fig. 3 for the two scenarios.
The number of events for the exposure of 998 days $N_{\nu}$ for the five brightest sources
and for the two possible scenarios explored above are reported in Table 2.
\begin{figure}
\hspace{-1.5 truecm}
\psfig{file=comborate.ps,height=12.5cm,width=12.cm}
\vspace{-0.1 cm}
\caption{
Expected IceCube count rate (events/year) for the 2LAC BL Lac as a function of the $\gamma$--ray
luminosity for the ``All" (upper panel) and the ``low power" (lower panel) scenario, respectively.
}
\label{diffuse}
\end{figure}
For the ``All" scenario the brightest 3--4 sources are characterized by a rate sufficient to allow
the detection of several events with a relatively prolonged exposure.
The most probable candidate is PKS 2155--304 (see below).
On the other hand, if only low power BL Lacs are considered, the situation is quite different, with
only one source --- Mkn 421 --- expected to be detectable and with all the other sources with a
rather smaller flux, providing $N_{\nu}<0.1$.
Note also that both sources, with an expected number of events in 3 years $N_{\nu}\approx1$
(over a total of about 30 detected neutrinos), should be characterized by a neutrino flux of
the order of $E_{\nu}^2\phi(E_{\nu})\approx 4\pi E_{\nu} I(E_{\nu}) /30\approx 10^{-11}$ erg cm$^{-2}$ s$^{-1}$.
An obvious {\it caveat} is in order when considering this result.
This calculation, although applied to single sources, is built on our results based on the
{\it averaged} characteristics of the BL Lac population (besides our model assumptions).
Furthermore, the $\gamma$--ray luminosity of the 2LAC is an average over 2 years of observations.
Given these limits, our procedure cannot consider source peculiarities or mid--term variability,
particularly relevant for the high--energy emission of luminous BL Lac objects (e.g., Abdo et al. 2010).
\begin{table}
\begin{center}
\begin{tabular}{lcc}
\hline
\hline
Source & $z$ & $N_{\nu}$ (998 days)\\
\hline
\multicolumn{3}{c}{All}\\
\hline
PKS 2155-304& 0.116& 0.6\\
PKS 0447-439& 0.205& 0.5\\
PKS 0301-243& 0.26 & 0.4\\
1H 1013+498 & 0.212& 0.2\\
S4 0954+65 &0.367 & 0.15\\
\hline
\multicolumn{3}{c}{Low power}\\
\hline
Mkn 421& 0.031 & 0.5\\
1ES 0806-05& 0.137& 0.1\\
RX J 0159.5+1047& 0.195& 0.1\\
1ES 1959+650& 0.047& 0.1\\
1ES 2322-409& 0.062& 0.05\\
\hline
\hline
\end{tabular}
\vspace{0.3 cm}
\caption{
List of the BL Lacs and the expected neutrino counts for an exposure of 3 years and the
two scenarios described in the text.
}
\end{center}
\label{events1}
\end{table}
Given these limitations, it is however possible to note a clear difference between the two cases:
in the ``All" case there are a few relatively bright neutrino BL Lacs -- those with the highest power.
For the ``Low power" case, instead, Mkn 421 largely dominates over the other sources.
A remark concerns the case of PKS 2155--304, the first entry in Table 2 for the ``All" case,
which is the only highly peaked BL Lac (HBL) of this list. Indeed, as we noted above,
the neutrinos reaching the Earth are preferentially produced by the most powerful sources
which preferentially are of the intermediate (IBL) or low peaked (LBL) type (e.g.,
Ackermann et al. 2011, Giommi et al. 2012).
PKS 2155--304 is clearly an outlier of this general trend, displaying a SED typical of HBL
(i.e. the synchrotron peak in the soft X-ray band) but with a luminosity much larger than that
of the averaged HBL population.
Given the link that we assumed between the electromagnetic and neutrino output, this peculiarity
shows up also in the neutrino window.
In Fig. \ref{sed}
we report in detail the SED and the expected neutrino output for this source.
In the SED we also report the IceCube sensitivity curve for 3 years, scaling that provided
in Tchernin et al. (2013) for one year and about half of the detector (IC-40 configuration).
Clearly, the flux limit is very close to that theoretically expected.
Recently, Ahlers \& Halzen (2014) performed a calculation aimed at assessing the possibility
of singling out neutrino sources with IceCube, considering different possible source scenarios
and taking into account the details of the IceCube instrument (e.g. background, statistics).
They also consider blazars as possible sources, adopting the local density and the cosmological
evolution valid for the most powerful blazars, i.e. FSRQs. Comparing the neutrino flux they
derived for single sources with the upper limits on the flux of the weak BL Lacs Mkn 421 and Mkn 501,
they conclude that a blazar origin is disfavored.
However, as said, their calculations are clearly tuned for FSRQs, not BL Lacs, which display
a quite different cosmological density and evolution. Indeed, a calculation based on the
BL Lac demography provides results compatible with those presented here (M. Ahlers, priv. comm.).
\section{Discussion}
We have presented an extension of the scenario envisaging the production of high--energy neutrinos
in the structured jets of BL Lac objects sketched in Tavecchio et al. (2014).
The key ingredient is the relativistic boosting of the radiation produced in the layer in the
spine frame (Ghisellini et al. 2005, Tavecchio \& Ghisellini 2008), which entails the increased
efficiency of the photo--pion reactions and the following neutrino emission.
\begin{figure}
\vspace*{-1.2 truecm}
\hspace*{-0.3 truecm}
\psfig{file=2155_neutrini.ps,height=10.cm,width=10.cm}
\vspace{-1.5 cm}
\caption{
Spectral energy distribution of PKS 2155--304 (green points --- taken from {\tt http://www.asdc.asi.it} ---
and solid gray line).
The red solid line shows the neutrino emission expected in our model for the ``All BL Lac'' case.
The blue line tracks the corresponding layer emission.
The black dashed line marked ``IceCube, 3 yr'' displays the estimated flux limit for IceCube, obtained by scaling the
sensitivity curve provided in Tchernin et al. (2013).
}
\label{sed}
\end{figure}
The observational evidence supporting the idea that BL Lac (and radiogalaxy) jets are structured outflows
--- i.e. with a faster spine surrounded by a slower layer --- is steadily accumulating.
The deceleration of the flow after the blazar region expected for this configuration offers the simplest
explanation of the anomalously low apparent speeds inferred for the jets of TeV-emitting BL Lacs through
VLBI observations (Piner \& Edwards 2004, Piner et al. 2008).
Likely, a spine--layer system is the most natural description of the limb brightening displayed
by several BL Lac (Giroletti et al. 2004, 2008, Piner \& Edwards 2014, Piner et al. 2010) and
radiogalaxy (e.g., Nagai et al. 2014, M{\"u}ller et al. 2014) jets at VLBI resolutions.
Strong independent --- although indirect --- support is provided by arguments related to the
unification scheme of BL Lacs and Fanaroff-Riley I (FRI) radiogalaxies (e.g. Chiaberge et al. 2000, Meyer et al.
2011, Sbarrato et al. 2014). Indeed, while large Lorentz factors ($\Gamma\approx 10-20$) are required
to model the BL Lac emission (e.g. Tavecchio et al. 2010), the emission properties and the number
density of radiogalaxies instead favor low ($\Gamma\approx 3-5$) bulk Lorentz factors.
The structured jet scenario easily solves this problem: depending on the jet viewing angle, large
or low Lorentz factors are inferred for BL Lacs (dominated by the spine) or radiogalaxies (for
which the layer contributes most to the emission), respectively.
There is a relatively small number of cases for which some constraints on the structural parameters of
the layer can be derived (e.g., Tavecchio \& Ghisellini 2008, 2014).
In our work we adopted a phenomenological view, tuning the layer properties (bulk Lorentz factor,
emitted spectrum) so as to best reproduce the observed neutrino flux.
An improvement of the present knowledge would help to better test our proposal.
One is naturally led to wonder whether also misaligned BL Lacs -- i.e. FRI radiogalaxies, according to the classical unification scheme for radio-loud AGN (e.g. Urry \& Padovani 1995) -- could contribute to the observed neutrino background (see also Becker Tjus et al. 2014). In this case, one expects that the close-by radiogalaxies Cen A, M87 and NGC 1275 -- also observed to emit TeV photons (Aharonian et al. 2009, Aharonian et al. 2006, Aleksic et al. 2012) -- should be optimal candidates for a direct association with IceCube events (recall also that Cen A is possibly associated with a handful of UHECR detected by AUGER, Abraham et al. 2007). For all three sources, however, quite stringent upper limits are derived (Aartsen et al. 2013b). In the structured jet scenario adopted here, the electromagnetic emission from the inner jet of radiogalaxies (at least at high energies, see below) is likely to be dominated by the layer, since, due to the large viewing angle, the more beamed spine radiation is de-boosted as observed from the Earth. Analogously, we expect that possible neutrino emission from the layer would dominate over that of the spine in the case of misaligned jets. As for the inverse Compton emission (e.g. Tavecchio \& Ghisellini 2008), the dominant radiation field for the photo-pion reaction is expected to be that of the spine, boosted in the layer frame by the relative motion. For M87 and NGC 1275, the application of the spine-layer scenario (Tavecchio \& Ghisellini 2008, 2014) suggests that the high-energy $\gamma$-ray component is produced in the layer, while the low-energy non-thermal emission
is rather due to the (de-boosted) spine emission. In this case we therefore have a direct handle on the spine radiation field. For all the aforementioned radiogalaxies this low-energy component peaks in the IR band, around $\epsilon_{\rm o}\approx 0.1$ eV. Therefore, in order to allow the photo-meson reaction, proton energies should exceed (we neglect the small Doppler shift) $E_{\rm p}\gtrsim m_{\pi}m_{\rm p}c^4/\epsilon_{\rm o}\approx 10^{18} (\epsilon _{\rm o}/{\rm 0.1 \,\, eV})^{-1}$ eV, implying that the resulting neutrinos have energies exceeding $E_{\nu}\gtrsim 50$ PeV, well above the energies of neutrinos considered here. If such high proton energies are attainable, upcoming new detectors extending beyond the IceCube band (ARA, Allison et al. 2012; ARIANNA, Barwick 2007; ANITA, Gorham et al. 2009; EVA, Gorham et al. 2011) could thus be able to detect neutrinos from the layer of nearby radiogalaxies.
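A minimal numerical sketch of this threshold estimate is the following; the $\sim5\%$ proton-to-neutrino energy fraction assumed below is a standard value for photo-pion production and is an assumption of this sketch, not a number taken from the discussion above.
\begin{verbatim}
# Photo-meson threshold on the proton energy for a target photon of energy
# eps_o, E_p > m_pi * m_p * c^4 / eps_o (Doppler shifts neglected, as above),
# and the resulting neutrino energy assuming E_nu ~ 0.05 E_p (assumed ~5%
# energy fraction per neutrino in p-gamma interactions).
m_pi = 0.1396e9     # eV, charged pion rest energy
m_p  = 0.9383e9     # eV, proton rest energy
eps_o = 0.1         # eV, target photon energy (IR peak of the spine emission)

E_p_min  = m_pi * m_p / eps_o        # eV
E_nu_min = 0.05 * E_p_min            # eV
print(f"E_p  > {E_p_min:.1e} eV")          # ~1.3e18 eV, i.e. ~1e18 eV
print(f"E_nu > {E_nu_min/1e15:.0f} PeV")   # ~65 PeV, above the ~50 PeV quoted
\end{verbatim}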
An attractive feature of our scenario, especially in the case for which the entire BL Lac population
contributes to the observed flux, is the moderate power required in CRs. Indeed, while it is
typically assumed that the CR luminosity greatly exceeds that in radiation (e.g. Murase et al. 2006, 2014),
we found that a ratio $P_{\rm CR}/P_{\rm rad}\approx 5$ can match the observed flux.
In turn, the ratio between the CR power and the kinetic power --- as derived through the modeling of
the observed emission with standard leptonic models, e.g. Ghisellini et al. (2014) --- is
$P_{\rm CR}/P_{\rm jet}\lesssim 1$.
In case of fast cooling of the accelerated electrons, $P_{\rm CR}/P_{\rm rad}$ directly provides a
measure of the electron--to--proton luminosity ratio, $f_{\rm e}\approx P_{\rm rad}/P_{\rm CR}$.
Theoretical expectations indicate values $f_{\rm e}\ll 1$, (e.g., Becker Tjus et al. 2014
and references therein), consistent with our findings.
The fact that the CR power can be a sizable fraction of the jet power could perhaps be linked
to the deceleration of the jet from sub--pc to pc scale as inferred from VLBI observations
(e.g., Piner \& Edwards 2014). We also recall that propagating CR beams produced by BL Lac jets have been invoked to explain several peculiarities of low-power BL Lac jets (the so-called extreme HBLs), in particular their hard and slowly variable TeV emission (e.g. Essey et al. 2010, Murase et al. 2012, Aharonian et al. 2013, Tavecchio 2014).
It should be remarked that the CR power sensitively depends on the minimum energy.
In our modeling we assumed $E^{\prime}_{\rm min}\sim 10^{13}$ eV, as limited by the observed
low energy data points.
Lower values are not excluded, of course, possibly increasing the energy budget.
A related point concerns the required maximum CR energy.
The IceCube upper limits robustly fix the (spine rest frame) maximum energy to a few PeV.
General considerations (e.g. Tavecchio 2014) allow us to estimate the maximum energy
of the accelerated protons to be $E_{\rm max}\simeq 3\times 10^{17} R_{15} B/\epsilon$ eV,
where $\epsilon>1$ is a parameter, incorporating the details of the acceleration mechanism,
that determines the acceleration efficiency, and $B$ is the magnetic field in Gauss.
For BL Lac jets typical values are $B=0.1-1$ G (e.g., Tavecchio et al. 2010).
Energies of the order of a few PeV could thus be reproduced for $\epsilon \approx 10-100$.
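Taking the expression above at face value, the value of $\epsilon$ implied by a maximum energy of a few PeV can be checked with a short sketch; $R_{15}=1$ is assumed here for the emitting region.
\begin{verbatim}
# Acceleration parameter implied by E_max ~ 3e17 * R_15 * B / eps eV
# (formula quoted above), if E_max is limited to a few PeV.
E_max_target = 3e15      # eV, "a few PeV" (assumed representative value)
R_15 = 1.0               # emitting-region size in units of 1e15 cm (assumption)

for B in (0.1, 1.0):     # G, the typical range quoted for BL Lac jets
    eps = 3e17 * R_15 * B / E_max_target
    print(f"B = {B:.1f} G -> eps ~ {eps:.0f}")
# -> eps ~ 10 for B = 0.1 G and eps ~ 100 for B = 1 G
\end{verbatim}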
We provided a list of the sources with the largest expected neutrino flux, which are the best candidates to be
detected as point-sources by IceCube.
The kind and the characteristics of the sources are quite different in the two scenarios.
In the case in which neutrino emission occurs in the entire BL Lac population, the brightest sources are those
with powerful jets (of IBL and LBL type) located at relatively large redshift ($z\sim 0.2$).
The most probable source is PKS 2155--304, an HBL with an atypically large luminosity.
In the case in which, instead, only low power jets have an efficient layer, the most probable sources
associated with neutrino events are HBLs at low redshift.
In both cases we expect that the brightest sources could have several associated neutrinos in the next few years. If BL Lac objects are the sources dominating the extragalactic neutrino sky, our model can thus be
effectively tested by a more extended IceCube exposure, and the two options that we
presented could be distinguished.
\section*{Acknowledgments}
FT acknowledges contribution from a grant PRIN--INAF--2011.
Part of this work is based on archival data and on--line services
provided by the ASI Science Data Center.
\section{Introduction}
Dark matter (DM) has played an important role in the evolution of the universe. The freeze-out mechanism~\cite{Bernstein:1985th, Srednicki:1988ce} considers dark matter candidates as thermal relics from the local thermodynamic equilibrium of the early universe~\cite{Izaguirre:2015yja}. Their annihilation cross sections are bounded by the observed dark matter relic abundance $\Omega_c h^2 = 0.1131\pm 0.0034$~\cite{Bertone:2004pz, Komatsu:2008hk}. Interestingly, this limit on the interaction strength happens to be of the same order of magnitude as that of the weak interaction, which makes the weakly interacting massive particle (WIMP) one of the most promising dark matter candidates. Currently, direct and indirect DM detection experiments~\cite{Akerib:2016vxi, Cui:2017nnn, Aprile:2018dbl} have obtained null results and set much stricter constraints on the parameter space of WIMPs with masses larger than several GeV. This provides a motivation for the study of light dark matter candidates at high-energy colliders; for example, CODEX-b at the LHCb experiment aims to probe GeV-scale long-lived particles~\cite{Gligorov:2017nwh}. The Lee-Weinberg limit~\cite{PhysRevLett.39.165}, which sets the lower bound of the WIMP mass to a few GeV, is a model-dependent result. This constraint can be relaxed with different models or a proper parameter selection, which makes lower-mass WIMPs possible; for example, MeV-scale light dark matter (LDM) has been proposed~\cite{Pospelov:2007mp,Hooper:2008im} to explain the unexpected emission of 511 keV photons from the galactic center. The feebly interacting massive particle (FIMP) is another DM candidate, which comes from an alternative scenario, the freeze-in mechanism~\cite{McDonald:2001vt, Hall:2009bx,Bernal:2017kxu}. Within the freeze-in scenario, the DM is never in thermal equilibrium with the Standard Model (SM) sector and is gradually produced from the scattering or decay of SM particles. This allows a much weaker interaction between the SM particles and DM.
High-energy collider searches might be able to detect dark matter particles produced in collisions through their invisible (``missing'')
energy and momentum, which do not match the SM neutrino predictions. This motivates us to study whether DM interactions could help to explain the anomalies. So far, these experiments mostly provide just upper limits on the interaction strength between DM and the SM. The BaBar and Belle experiments~\cite{delAmoSanchez:2010bk, Aubert:2008am, Chen:2007zk,Grygier:2017tzo,Lai:2016uvj}, known as $B$-factories, produce large numbers of $B$ mesons, allowing their various decay channels to be studied precisely, which has revealed tentative anomalies with respect to SM predictions. New models involving invisible particles have been extensively studied in flavor-changing neutral current (FCNC) processes~\cite{Bird:2004ts,Bird:2006jd,Badin:2010uh,Gninenko:2015mea,Barducci:2018rlx,Kamenik:2011vy,Bertuzzo:2017lwt}. However, previous studies mostly focus on the $B$ meson instead of the $B_c$ meson. The $B_c$ meson has been massively produced and measured by the CDF~\cite{Aaltonen:2016dra}, ATLAS~\cite{Burdin:2016rzf}, CMS~\cite{Berezhnoy:2019yei}, and LHCb~\cite{Aaij:2019ths} experiments. The production rate of the $B_c$ meson at the LHCb experiment is close to 3.7 per mille of that of the $B$ mesons~\cite{Aaij:2019ths}. The $B_c$ events are of the order of $10^{10}$ per year. As the luminosity of the LHC increases significantly, many more $B_c$ events will be generated in the near future, which provides a new possibility to discover dark matter candidates.
Except for the photon, the SM bosons cannot exist stably for a long time. In models, the invisible boson can either be a stable relic from the early Universe or a mediator between the SM and the dark sector. Vector dark matter (VDM)~\cite{Pospelov:2008jk, Redondo:2008ec, Bjorken:2009mm} candidates are usually introduced through an Abelian or non-Abelian extended gauge group. In order to make the VDM itself a candidate for dark matter, additional symmetries are often required to maintain its stability~\cite{DiazCruz:2010dc, Baek:2012se}. A well-known invisible vector model is the dark photon~\cite{Fabbrichesi:2020wbt}. A very light massive dark photon could be a dark matter candidate, while in other cases the dark photon appears as a mediator. One of the spin-0 hidden boson candidates is the axion-like pseudoscalar particle. The axion was introduced in order to explain the strong CP problem~\cite{Peccei:1977hh, PhysRevLett.40.223, Wilczek:1977pj}. Axion-like dark matter (ALDM) models~\cite{Batell:2009jf, Aditya:2012ay, Izaguirre:2016dfi} usually introduce a general dimension-five Lagrangian which consists of scalar and vector currents to describe the coupling between SM fermions and ALDMs. Scalar dark matter candidates can be achieved in minimal extensions of the SM~\cite{OConnell:2006rsp, Patt:2006fw}, in which the hidden scalar can mix with the Higgs boson~\cite{Krnjaic:2015mbs, Winkler:2018qyg, Filimonova:2019tuy, Kachanovich:2020yhi}. If the scalar further decays into a lepton pair $l\bar l$, it is possible to observe this signal in the experiments. If it decays into two invisible fermions $\bar \chi \chi$, the scalar is a mediator between the SM and the dark sector.
In this paper, we focus on light invisible bosonic particles (both scalar and vector) which are emitted in the FCNC decays of the $B$ and $B_c$ mesons. We introduce a general dimension-5 effective Lagrangian which describes the coupling between quarks and an invisible boson. The Wilson coefficients are extracted from the experimental results of the $B$ meson decays with missing energy, and are then used to predict the upper limits of
the branching fractions of the similar decay processes of the $B_c$ meson.
The paper is organized as follows: In Sec.~II, we study the decay processes of $B$ and $B_c$ mesons with single invisible scalar ($\chi=S$) production. In Sec.~III, we study the single invisible vector ($\chi=V$) generated case. Finally, we draw the conclusion in Sec.~IV.
\section{Light invisible scalar}
The experimental upper limits of the $B$ meson FCNC decays with missing energy from the Belle Collaboration and the SM predictions are listed in Table~\ref{tab1}.
\begin{table}[h]
\setlength{\tabcolsep}{0.5cm}
\caption{The branching ratios (in units of $10^{-6}$) of $B$ meson decays involving missing energy.}
\centering
\begin{tabular*}{\textwidth}{@{}@{\extracolsep{\fill}}ccc}
\hline\hline
Experimental bound~\cite{Chen:2007zk,Grygier:2017tzo,Lai:2016uvj}&SM prediction~\cite{Kamenik:2009kc,Jeon:2006nq,Altmannshofer:2009ma,Bartsch:2009qp}&Invisible particle bound\\
\hline
${\rm BR}(B^\pm\to K^\pm\slashed E)<14$& ${\rm BR}(B^\pm\to K^\pm\nu \bar{\nu})=5.1 \pm 0.8$ & ${\rm BR}(B^\pm\to K^\pm\chi)<9.7$ \\
${\rm BR}(B^\pm\to \pi^\pm\slashed E)<14$& ${\rm BR}(B^\pm\to \pi^\pm\nu \bar{\nu})=9.7 \pm 2.1$ & ${\rm BR}(B^\pm\to \pi^\pm\chi)<6.4$ \\
${\rm BR}(B^\pm\to K^{*\pm}\slashed E)<61$& ${\rm BR}(B^\pm\to K^{*\pm}\nu \bar{\nu})=8.4 \pm 1.4$ & ${\rm BR}(B^\pm\to K^{*\pm}\chi)<54$ \\
${\rm BR}(B^\pm\to \rho^\pm\slashed E)<30$&${\rm BR}(B^\pm\to \rho^\pm \nu \bar{\nu})=0.49^{+0.61}_{-0.38}$ & ${\rm BR}(B^\pm\to \rho^\pm \chi)<30$ \\
\hline\hline
\label{tab1}
\end{tabular*}
\end{table}
It can be seen that the theoretical predictions are smaller than the experimental upper limits, which leaves room for contributions from new physics~\cite{Grygier:2017tzo}. We assume that a hidden boson $\chi$ produced in these processes carries away part of the energy. The Feynman diagram is presented in Fig.~\ref{Feyn01},
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{Feyn-0.pdf}
\caption{Feynman diagram of the decay channels involving invisible particles.}
\label{Feyn01}
\end{figure}
where $q$ and $q_{_f}$ are the initial and final quarks, $\bar q^\prime$ is the spectator antiquark, and $M$ and $M_f$ are the masses of the initial and final mesons, respectively. When $\chi=S$, we introduce a dimension-5 model-independent effective Lagrangian to describe the vertex which represents the coupling between SM fermions and the hidden scalar,
\begin{equation}
\begin{aligned}
\mathcal L_{scalar}=m_{_S} g_{_{S1}} (\bar q_{_f} q) S +m_{_S} g_{_{S2}} (\bar q_{_f} \gamma^{5} q) S +g_{_{S3}} (\bar q_{_f}\gamma_{\mu}q)(i\partial^\mu S) + g_{_{S4}} (\bar q_{_f} \gamma_{\mu}\gamma^{5}q)(i\partial^\mu S),
\label{eq1}
\end{aligned}
\end{equation}
where the $g_{_{Si}}$s are phenomenological coupling constants. The operators $(\bar q_{_f} q) S$ and $(\bar q_{_f} \gamma^{5} q) S$ break the ${\rm SU(2)}_L$ symmetry, as (pseudo)scalar currents necessarily involve quarks of opposite chirality. If one starts from an effective Lagrangian which respects the SM gauge symmetry, these operators could be severely suppressed. For example, Ref.~\cite{Kamenik:2011vy} included operators like $(H^\dagger\bar q_{_f} q) S$ and $(H^\dagger\bar q_{_f} \gamma^5 q) S$ by considering the electroweak symmetry breaking. These coefficients are suppressed by an additional factor $v/\Lambda$, with $v$ being the vacuum expectation value of the Higgs field and $\Lambda$ being the new physics scale (usually considered to be at the TeV scale). In this research, we are interested in to what extent the experimental data constrain these coefficients. The operators are introduced purely phenomenologically instead of starting from gauge symmetry.
Similar processes were discussed in some previous papers; for example, Refs.~\cite{Krnjaic:2015mbs, Winkler:2018qyg, Filimonova:2019tuy, Kachanovich:2020yhi} considered that the hidden scalar can mix with the Higgs boson and introduced a coupling Lagrangian with a mixing angle $\theta$. The experimental limits of $B$ and $K$ meson decays were used to set bounds on $\theta$. Ref.~\cite{Pospelov:2008jk} discussed constraints on keV-scale bosonic DM candidates. In this work, we study bosonic DM candidates with masses of several GeV, and set upper limits on the branching ratios of $B_c$ meson decays with the emission of the hidden boson.
\subsection{$0^-\to 0^-$ meson decay processes}
According to the Feynman diagram and the effective Lagrangian, the amplitude of $0^-\to 0^-$ meson decay can be written as
\begin{equation}
\begin{aligned}
\langle M_fS|\mathcal L_{scalar}|M \rangle & =m_{_S}g_{_{S1}}\langle M_f^-|(\bar q_{_f} q)|M^-\rangle+g_{_{S3}} \langle M_f^-|(\bar q_{_f}\gamma_\mu q) |M^-\rangle P_{_S}^\mu\\
&=g_{_{S1}} \mathcal {T}_1+g_{_{S3}} \mathcal {T}_3,
\label{eq1.2}
\end{aligned}
\end{equation}
where the $\mathcal T_i$s are the amplitudes with the effective coupling coefficients stripped off, and $P_{_S}$ is the momentum of the invisible scalar. As the Lagrangian is a sum of several operators, the partial width can be written as
\begin{equation}
\begin{aligned}
\Gamma =\int dPS_2\, \big(\sum_{j} g_{_{Sj}}\mathcal{T}_j\big)^{\dagger} \big(\sum_{i} g_{_{Si}} \mathcal{T}_i \big) =\sum_{ij}g_{_{Sj}}g_{_{Si}}\widetilde\Gamma_{ij}.
\label{eq2}
\end{aligned}
\end{equation}
Here we have defined $\widetilde\Gamma_{ij}=\int dPS_2\, \mathcal{T}_j^{\dagger}\mathcal{T}_i$, in particular $\widetilde\Gamma_{11(33)}=\int dPS_2\, |\mathcal{T}_{1(3)}|^2$, which are independent of the coefficients. When the final meson is a pseudoscalar, by finishing the two-body phase-space integral we get the decay width
\begin{equation}
\begin{aligned}
\Gamma(M\to M_fS)=&\frac{1}{16\pi M^3 } \lambda^{1/2}(M^2,M_f^2,m_{_S}^2) \bigg\{ m_{_S}^2 g_{_{S1}}^2 \langle M_f^-|(\bar q_{_f} q)|M^-\rangle^*\langle M_f^-|(\bar q_{_f} q)|M^-\rangle \\
&+ g_{_{S3}}^2 \langle M_f^-|(\bar q_{_f} \gamma_\nu q)|M^-\rangle^*\langle M_f^-|(\bar q_{_f} \gamma_\mu q)|M^-\rangle P_{_S}^\nu P_{_S}^\mu \\
&+m_{_S} g_{_{S1}} g_{_{S3}}^* \langle M_f^-|(\bar q_{_f} \gamma_\nu q)|M^-\rangle^*\langle M_f^-|(\bar q_{_f} q)|M^-\rangle P_{_S}^\nu +h.c. \bigg\},
\label{eq3}
\end{aligned}
\end{equation}
where the K{\"a}ll{\'e}n function $\lambda(x, y, z)= x^2 + y^2 +z^2 -2xy-2xz -2yz$ is used. The hadronic transition matrix elements can be expressed as
\begin{equation}
\begin{aligned}
\langle M_f^-|(\bar q_{_f} q)|M^-\rangle&\simeq \frac{(P-P_f)^\mu}{m_q-m_{q_{_f}}}\langle M_f^-|(\bar q_{_f}\gamma_\mu q) |M^-\rangle= \frac{M^2-M_{f}^2}{m_q-m_{q_{_f}}}f_0 (s), \\
\langle M_f^-|(\bar q_{_f}\gamma_\mu q) |M^-\rangle
&= (P+P_f)_{\mu}f_+ (s)+(P-P_f)_{\mu}\frac{M^2-M_{f}^2}{s} \big[f_0 (s)-f_+ (s)\big],
\label{eq4}
\end{aligned}
\end{equation}
where $s=(P-P_f)^2=m_{_S}^2$; $f_0$ and $f_+$ are the form factors; $m_q$ and $m_{q_f}$ are the masses of the initial and final quarks, respectively. It is worth mentioning that one of the terms in Eq.~(\ref{eq4}) appears divergent when $m_{_S}=0$; however, the final results are smooth and convergent when $m_{_S}\to 0$. The hadronic matrix elements with the pseudoscalar current $\langle M_f^-|(\bar q_{_f} \gamma^5 q)|M^-\rangle$ and the axial vector current $\langle M_f^-|(\bar q_{_f} \gamma_\mu\gamma^5 q)|M^-\rangle$ are zero for the $0^-\to0^-$ processes. When we calculate the hadronic matrix elements of $B$ meson decays, the LCSR method is adopted to write the form factors~\cite{Ball:2004ye}. One can find more details on the selection of parameters in our previous work~\cite{Li:2018hgu, Li:2020dpc}. The instantaneous Bethe-Salpeter (BS) method~\cite{Kim:2003ny, Wang:2005qx}, which is more suitable for heavy-to-heavy meson decays, is used for the $B_c$ meson decay processes. In the Mandelstam formalism, the hadronic transition matrix element is written as
\begin{equation}
\begin{aligned}
\langle h^-|\bar q_1\Gamma^\xi b|B_c^-\rangle
&= \int\frac{d^3 q}{(2\pi)^3} {\rm Tr}\left[\frac{\slashed P}{M}\overline\varphi_{P_f}^{++}(\vec q_{f})\Gamma^\xi\varphi_P^{++}(\vec q)\right],
\label{eq5}
\end{aligned}
\end{equation}
where $\Gamma^\xi=1,\gamma^5,\gamma^\mu,\gamma^\mu\gamma^5$ or $\sigma^{\mu\nu}$; $\varphi^{++}_P$ and $\varphi^{++}_{P_f}$ are the wave functions of the initial and final mesons, respectively; $P$ and ${P_f}$ are the momenta of the initial and final mesons, respectively; $\vec q$ and $\vec q_{_f}$ are the relative momenta of the quark and antiquark in the initial and final meson, respectively.
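Before turning to the numerical results, the structure of Eq.~(\ref{eq3}) can be illustrated with a short numerical sketch. Contracting the vector-current matrix element in Eq.~(\ref{eq4}) with $P_{_S}$ gives $(M^2-M_f^2)f_0(m_{_S}^2)$, so the $g_{_{S3}}^2$ piece of the width involves only $f_0$. The Python snippet below evaluates this piece for $B^-\to K^- S$ and the corresponding single-coupling bound $|g_{_{S3}}|^2<{\rm BR}_{\rm exp}\,\Gamma_B/\widetilde\Gamma_{33}$; the monopole form factor used here, with $f_0(0)=0.33$ and a pole mass of $5.6$~GeV, is only an illustrative placeholder and not the LCSR input adopted in the actual calculation.
\begin{verbatim}
import math

# g_S3^2 piece of Eq. (3): contracting the vector current of Eq. (4) with P_S
# gives (M^2 - Mf^2) f_0(s), so
#   Gamma_tilde_33 = lambda^(1/2)(M^2, Mf^2, mS^2) (M^2 - Mf^2)^2 f_0(mS^2)^2
#                    / (16 pi M^3)                 [in GeV^3]
def kallen(x, y, z):
    return x**2 + y**2 + z**2 - 2*x*y - 2*x*z - 2*y*z

M, Mf = 5.279, 0.494            # GeV, B+ and K+ masses
tau_B = 1.638e-12               # s, B+ lifetime
hbar  = 6.582e-25               # GeV s
Gamma_B = hbar / tau_B          # GeV, total width of the B+

def f0(s, f0_0=0.33, m_pole=5.6):     # placeholder monopole form factor
    return f0_0 / (1.0 - s / m_pole**2)

def gamma_tilde_33(mS):               # GeV^3, couplings stripped off
    s = mS**2
    lam = kallen(M**2, Mf**2, s)
    if lam <= 0.0:
        return 0.0
    return math.sqrt(lam) * (M**2 - Mf**2)**2 * f0(s)**2 / (16*math.pi*M**3)

BR_limit = 9.7e-6                     # Table 1 bound on BR(B -> K chi)
mS = 2.0                              # GeV, example dark-scalar mass
g33_sq_max = BR_limit * Gamma_B / gamma_tilde_33(mS)
print(f"|g_S3|^2 < {g33_sq_max:.1e} GeV^-2 at m_S = {mS} GeV")
# -> ~1e-17 GeV^-2, of the same order as the bounds discussed below
\end{verbatim}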
The results of $\widetilde\Gamma_{ij}$s are shown in Fig.~\ref{width-S14}. The solid and dashed lines represent noninterference and interference terms, respectively.
\begin{figure}[h]
\centering
\subfigure[$B^-\rightarrow K^-S$]{ \label{width-S14a}
\includegraphics[width=0.45\textwidth]{width-S1.pdf}}
\hspace{2em}
\subfigure[$B^-\rightarrow \pi^-S$]{ \label{width-S14b}
\includegraphics[width=0.45\textwidth]{width-S2.pdf}} \\
\subfigure[$B_c^-\rightarrow D_s^-S$]{ \label{width-S14c}
\includegraphics[width=0.45\textwidth]{width-S3.pdf}}
\hspace{2em}
\subfigure[$B_c^-\rightarrow D^-S$]{ \label{width-S14d}
\includegraphics[width=0.45\textwidth]{width-S4.pdf}}
\caption{$\widetilde\Gamma_{ij}$s in $B$ and $B_c$ meson $0^-\to 0^-$ decays with invisible scalar.}
\label{width-S14}
\end{figure}
One can see that although we use different parametrization methods in the $B$ and $B_c$ meson decays, the trends of the $\widetilde\Gamma_{ij}$s are similar. This is because the mesons have the same quantum numbers. $\widetilde\Gamma_{ij}$ increases when $m_{_S}$ increases from zero. It grows faster in the $B\to M_f$ modes than in the $B_c \to M_f$ modes when $m_{_S}$ is smaller than about $4$~GeV. This is due to the difference in the form factors caused by the masses of the final-state mesons, since the $K$ and $\pi$ mesons are light while the $D_{(s)}$ mesons are heavy. $\widetilde\Gamma_{11}$ and $\widetilde\Gamma_{13}$ are zero when $m_{_S}\to0$, because they are proportional to $m_{_S}^2$ and $m_{_S}$, respectively. When $m_{_S}$ is larger than about $4$~GeV, $\widetilde\Gamma_{ij}$ decreases. When $m_{_S}=M-M_f$, the $\widetilde\Gamma_{ij}$s are zero since there is no phase space.
The upper limits in Table~\ref{tab1} give the allowed parameter space for the effective coupling constants $g_{_{Si}}$. Here we use two different ways to perform the calculation. First, we assume that only one of the $g_{_{Si}}$s is nonzero and set the others to zero. In this case, the upper limits of the $g_{_{Si}}$s as functions of $m_{_S}$ are shown in Fig.~\ref{gs13}, where the point $m_{_S}=0$ is excluded.
\begin{figure}[h]
\centering
\subfigure[$B^-\rightarrow K^-S$]{
\includegraphics[width=0.45\textwidth]{gs13k.pdf}}
\hspace{2em}
\subfigure[$B^-\rightarrow \pi^-S$]{
\includegraphics[width=0.45\textwidth]{gs13pi.pdf}}
\caption{Upper limits of $g_{_{Si}}$s from $B$ meson $0^-\to 0^-$ decays with invisible scalar.}
\label{gs13}
\end{figure}
One can see that the upper limit of $|g_{_{Si}}|^2$ is infinite when $m_{_S}=M-M_f$. This is because $\widetilde\Gamma_{ij}=0$ at this point. The smallest value of the upper limit of $|g_{_{Si}}|^2$ is of the order of $10^{-17}~{\rm GeV}^{-2}$. The solid blue line, which represents $|g_{_{S1}}|^2$, is infinite when $m_{_S}\to 0$, since the blue solid line in Fig.~\ref{width-S14a} and Fig.~\ref{width-S14b}, which represents $\widetilde\Gamma_{11}$, is zero at this point. The red dashed line, which represents $|g_{_{S3}}|^2$, changes slowly when $m_{_S} < M-M_f$ because $\widetilde\Gamma_{33}$ changes slowly in Fig.~\ref{width-S14a} and Fig.~\ref{width-S14b}.
Second, we assume that all operators make contributions and run a program to select the maximum value of the branching ratio of the $B_c$ meson. The results are plotted as dashed ($ij=11,\,33$) and solid (Total) lines in Fig.~\ref{br-S12}, respectively.
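The maximization in this second method is a constrained quadratic problem: at fixed $m_{_S}$ each width is a quadratic form $\sum_{ij}g_{_{Si}}g_{_{Sj}}\widetilde\Gamma_{ij}$ in the couplings, so one maximizes the $B_c$ form subject to the $B$ form saturating the experimental bound. A minimal sketch of this step, using only the $B^-\to K^-\slashed E$ bound and arbitrary placeholder matrices in place of the actual $\widetilde\Gamma_{ij}$ values, is the following; the total widths correspond to the $B^+$ and $B_c^+$ lifetimes.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

# Maximize BR_Bc = g^T A g / Gamma_Bc subject to BR_B = g^T C g / Gamma_B <= BR_lim.
# The maximum is BR_lim * (Gamma_B / Gamma_Bc) * lam_max, with lam_max the largest
# generalized eigenvalue of A v = lam C v. A and C are placeholder 2x2 matrices
# standing in for the Gamma_tilde_ij of Bc -> Ds S and B -> K S (in GeV^3).
A = np.array([[0.30, 0.05],
              [0.05, 0.10]])
C = np.array([[0.35, 0.06],
              [0.06, 0.12]])
Gamma_B, Gamma_Bc = 4.0e-13, 1.3e-12   # GeV, hbar/lifetime of B+ and Bc+
BR_lim = 9.7e-6                        # bound on BR(B -> K chi)

lam_max = eigh(A, C, eigvals_only=True)[-1]   # eigenvalues in ascending order
BR_Bc_max = BR_lim * (Gamma_B / Gamma_Bc) * lam_max
print(f"max BR(Bc -> Ds S) ~ {BR_Bc_max:.1e}")
# -> of the order of 1e-6, the scale of the solid curves in Fig. br-S12
\end{verbatim}
Scanning over $m_{_S}$ in this way gives curves analogous to the solid (Total) lines in Fig.~\ref{br-S12}.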
\begin{figure}[h]
\centering
\subfigure[$B_c^-\rightarrow D_s^{-}S$]{
\includegraphics[width=0.45\textwidth]{br-S1.pdf}}
\hspace{2em}
\subfigure[$B_c^-\rightarrow D^{-}S$]{
\includegraphics[width=0.45\textwidth]{br-S2.pdf}}
\caption{Branching ratios of $B_c$ meson $0^-\to0^-$ decays with invisible scalar.}
\label{br-S12}
\end{figure}
One can see that the upper limits of the $\mathcal{BR}$ are of the order of $10^{-6}$. The results of the two methods show only subtle differences.
It should be noticed that these results are the upper limits of the branching ratios. The area under the curves in Fig.~\ref{br-S12} represents the possible values of the branching ratios. The peak is located near $m_{_S} \approx 4$~GeV, which may imply the greatest probability of detecting the invisible particles in this region. Taking the LHC as an example, although the number of $B_c$ events produced can reach the order of $10^{10}$ per year, the number of effectively detected events is still several orders of magnitude smaller. If more events can be detected and the distribution spectrum of the missing energy can be obtained, it is possible to observe the signal experimentally.
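As a rough illustration of this point, the expected yield is just the product of the production rate, the branching ratio, and an overall efficiency; the efficiency used below is a hypothetical placeholder, meant only to show the scale of the suppression.
\begin{verbatim}
# Expected reconstructed Bc -> D(s) + invisible decays per year,
# N = N_Bc * BR * efficiency. The efficiency is a hypothetical placeholder.
N_Bc_per_year = 1e10   # produced Bc mesons per year (order quoted above)
BR = 1e-6              # branching-ratio upper limit of the order found above
efficiency = 1e-3      # assumed overall trigger/reconstruction efficiency

N_signal = N_Bc_per_year * BR * efficiency
print(f"N_signal ~ {N_signal:.0f} event(s) per year")   # ~10 events per year
\end{verbatim}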
\subsection{$0^-\to 1^-$ meson decay processes}
In $0^-\to1^-$ meson decays, the decay width has the form
\begin{equation}
\begin{aligned}
\Gamma(M\to M^{*}_f S)=& \frac{1}{16\pi M^3 } \lambda^{1/2}(M^2,M_f^{*2},m_{_S}^2) \bigg\{ m_{_S}^2 g_{_{S2}}^2\langle M_f^{*-}|(\bar q_{_f} \gamma^5 q)|M^-\rangle^*\langle M_f^{*-}|(\bar q_{_f} \gamma^5 q)|M^-\rangle \\
&+g_{_{S4}}^2 \langle M_f^{*-}|(\bar q_{_f} \gamma_{\nu} \gamma^5 q)|M^-\rangle^*\langle M_f^{*-}|(\bar q_{_f}\gamma_{\mu} \gamma^5 q)|M^-\rangle P_{_S}^\nu P_{_S}^\mu \\
&+m_{_S} g_{_{S2}}g_{_{S4}}^*\langle M_f^{*-}|(\bar q_{_f} \gamma_{\nu} \gamma^5 q)|M^-\rangle^* \langle M_f^{*-}|(\bar q_{_f} \gamma^5 q)|M^-\rangle P_{_S}^\nu \bigg\},
\label{eq6}
\end{aligned}
\end{equation}
where $M_f^*$ represents the mass of the $1^-$ final meson. The hadronic transition matrix elements can be expressed as functions of the form factors
\begin{equation}
\begin{aligned}
\langle M_f^{*-}|(\bar q_f \gamma^5 q )|M^-\rangle
&\simeq -\frac{(P-P_f)^\mu}{m_q+m_{q_{_f}}}\langle M_f^{*-}|(\bar q_{_f}\gamma_\mu\gamma^5 q) |M^-\rangle =-i\frac{2M_f^*}{m_q+m_{q_{_f}}}\epsilon \cdot (P-P_f)A_0(s)\\
\langle M_f^{*-}|(\bar q_f \gamma_\mu q )|M^-\rangle
&=\varepsilon _{\mu \nu \rho \sigma} \epsilon ^\nu P^\rho (P-P_f)^\sigma\frac{2 }{M+M_f^*}V(s), \\
\langle M_f^{*-}|(\bar q_{_f}\gamma_\mu\gamma^5 q) |M^-\rangle
&= i\bigg\{\epsilon_{\mu} (M+M_f^*)A_1(s)-(P+P_f)_{\mu}\frac{\epsilon \cdot (P-P_f)}{M+M_f^*}A_2(s)\\
&~~~-(P-P_f)_{\mu} \big[\epsilon \cdot (P-P_f)\big] \frac{2M_f^*}{s} \big[A_3(s)-A_0(s)\big]\bigg\}.
\label{eq7}
\end{aligned}
\end{equation}
where the parameters are taken from the LCSR method~\cite{Straub:2015ica} for the $B$ meson decays. The BS method~\cite{Kim:2003ny, Wang:2005qx} is applied to calculate the hadronic transition matrix elements of the $B_c$ meson decays. The results for the $\widetilde\Gamma_{ij}$s are shown in Fig.~\ref{width-S58}.
\begin{figure}[h]
\centering
\subfigure[$B^-\rightarrow K^{*-}S$]{
\includegraphics[width=0.45\textwidth]{width-S5.pdf}}
\hspace{2em}
\subfigure[$B^-\rightarrow \rho^-S$]{
\includegraphics[width=0.45\textwidth]{width-S6.pdf}} \\
\subfigure[$B_c^-\rightarrow D_s^{*-}S$]{
\includegraphics[width=0.45\textwidth]{width-S7.pdf}}
\hspace{2em}
\subfigure[$B_c^-\rightarrow D^{*-}S$]{
\includegraphics[width=0.45\textwidth]{width-S8.pdf}}
\caption{$\widetilde\Gamma_{ij}$s in $B$ and $B_c$ meson $0^-\to 1^-$ decays with invisible scalar.}
\label{width-S58}
\end{figure}
It can be seen that there is an obvious difference between the $B$ and $B_c$ mesons. As $m_{_S}$ increases, $\widetilde\Gamma_{44}$ in the $B$ meson decay first increases until $m_{_S}\approx 3.5$~GeV and then decreases to zero, while $\widetilde\Gamma_{44}$ in the $B_c$ meson decay keeps decreasing until there is no phase space. This is the result of the competition between the form factors and the phase space. As $m_{_S}$ increases, the form factors increase while the phase space decreases. The form factors of the $B_c$ meson decays grow much faster than those of the $B$ meson decays.
We also use two ways to set the upper limits for the branching ratios of $B_c^-\to M_f^{*-}S$ processes. The upper limits of $g_{_{Si}}$s obtained by the first method are shown in Fig.~\ref{gs24}.
\begin{figure}[h]
\centering
\subfigure[$B^-\rightarrow K^{*-}S$]{
\includegraphics[width=0.45\textwidth]{gs24ks.pdf}}
\hspace{2em}
\subfigure[$B^-\rightarrow \rho^-S$]{
\includegraphics[width=0.45\textwidth]{gs24rho.pdf}}
\caption{Upper limits of $g_{_{Si}}$s from $B$ meson $0^-\to 1^-$ decays with invisible scalar.}
\label{gs24}
\end{figure}
One can see that they have very similar trends to those in Fig.~\ref{gs13}, but are about one order of magnitude larger. This is caused by the different experimental upper limits in Table~\ref{tab1}.
The upper limits of branching ratios of $B_c$ meson from two methods are shown in Fig.~\ref{br-S34}.
\begin{figure}[h]
\centering
\subfigure[$B_c^-\rightarrow D_s^{*-}S$]{
\includegraphics[width=0.45\textwidth]{br-S3.pdf}}
\hspace{2em}
\subfigure[$B_c^-\rightarrow D^{*-}S$]{
\includegraphics[width=0.45\textwidth]{br-S4.pdf}}
\caption{Branching ratios of $B_c$ meson $0^-\to1^-$ decays with invisible scalar.}
\label{br-S34}
\end{figure}
One can see that the difference between the dashed and solid lines is obvious. It is due to the contribution of the interference term $\widetilde\Gamma_{24}$. The most likely region for finding the dark scalar is near $m_{_S}\approx 3.5$~GeV. The $\mathcal {BR}$ is of the order of $10^{-5}$, which is about an order of magnitude larger than that in the $0^-\to0^-$ modes. This depends on the experimental upper limits in Table~\ref{tab1}.
\section{Light invisible vector}
When $\chi=V$, we assume that a hidden vector is produced in the FCNC processes. The effective Lagrangian, which represents the coupling between SM fermions and the hidden vector, has the form
\begin{equation}
\begin{aligned}
\mathcal L_{vector}=m_{_V} g_{_{V1}}(\bar q_{_f}\gamma_{\mu}q)V^\mu + m_{_V} g_{_{V2}}(\bar q_{_f} \gamma_{\mu}\gamma^{5}q)V^\mu.
\label{eq8}
\end{aligned}
\end{equation}
This dimension-5 effective Lagrangian naturally respects the SM gauge symmetry, since the chiralities of the two quarks are the same.
\subsection{$0^-\to 0^-$ meson decay processes}
By finishing the two-body phase space integral, the decay width of $M^- \to M_f^- V$ processes can be written as
\begin{equation}
\begin{aligned}
\Gamma(M\to M_f V )=\frac{m_{_V}^2 g_{_{V1}}^2}{16\pi M^3 } \lambda^{1/2}(M^2,M_f^2,m_{_V}^2) \langle M_f^-|(\bar q_{_f} \gamma_{\nu}q)|M^-\rangle^*\langle M_f^-|(\bar q_{_f} \gamma_{\mu}q)|M^-\rangle \mathcal P_{_V}^{\mu\nu},
\label{eq9}
\end{aligned}
\end{equation}
where the sum of polarization vector is
\begin{equation}
\begin{aligned}
\mathcal P_{_V}^{\mu\nu}=\sum\epsilon_{_V}^{*\mu}\epsilon_{_V}^{\nu}=-g^{\mu\nu}+\frac{P_{_V}^\mu P_{_V}^\nu}{m_{_V}^2}.
\label{eq10}
\end{aligned}
\end{equation}
The hadronic transition matrix element of the axial-vector current is zero when the final meson is a pseudoscalar. The only nonzero term, $\widetilde\Gamma_{11}$, is shown in Fig.~\ref{width-V12}. One can see that the results are smooth and convergent when $m_{_V}\to 0$.
\begin{figure}[h]
\centering
\subfigure[$B^-\rightarrow K^-(\pi^-)V$]{
\includegraphics[width=0.45\textwidth]{width-V1.pdf}}
\hspace{2em}
\subfigure[$B_c^-\rightarrow D_{(s)}^-V$]{
\includegraphics[width=0.45\textwidth]{width-V2.pdf}}
\caption{$\widetilde\Gamma_{11}$ in $0^-\to 0^-$ meson decays with invisible vector.}
\label{width-V12}
\end{figure}
Since only one operator contributes, the upper limits of the coupling constants and branching ratios of $B_c$ meson can be easily obtained, which are shown in Fig.~\ref{upper}.
One can see that the upper limits of $|g_{_{V1}}|^2$ are infinite when $m_{_V}=M-M_f$, since $\widetilde\Gamma_{11}=0$ at this point. They change slowly when $m_{_V} < M-M_f$ because $\widetilde\Gamma_{11}$ changes slowly in Fig.~\ref{width-V12}. The upper limits of the branching ratios are of the order of $10^{-6}$. As the mass of the invisible particle increases, the upper limits of the $\mathcal {BR}$ first increase and then decrease to zero. The peak is located near $m_{_V}\approx 4$~GeV. It may be the region where the invisible particle is most likely to be detected experimentally.
\begin{figure}[h]
\centering
\subfigure[Upper limits of $g_{_{V1}}$ from $B$ meson $0^-\to 0^-$ decays with invisible vector.]{
\label{gv1}
\includegraphics[width=0.46\textwidth]{gv1.pdf}}
\hspace{2em}
\subfigure[Branching ratios of $B_c$ meson $0^-\to0^-$ decays with invisible vector.]{
\label{br-V1}
\includegraphics[width=0.43\textwidth]{br-V1.pdf}}
\caption{Upper limits of coupling constants and $\mathcal {BR}$ of $B_c$ meson with invisible vector.}
\label{upper}
\end{figure}
\subsection{$0^-\to 1^-$ meson decay processes}
In $M^- \to M_f^{*-} V$ processes, the decay width has the form
\begin{equation}
\begin{aligned}
\Gamma(M\to M^*_fV)=& \frac{g_{_{V1}}^2}{16\pi M^3 } \lambda^{1/2}(M^2,M_f^{*2},m_{_V}^2) \bigg\{ \langle M_f^{*-}|(\bar q_{_f} \gamma_{\nu}q)|M^-\rangle^*\langle M_f^{*-}|(\bar q_{_f} \gamma_{\mu}q)|M^-\rangle P_{_V}^\mu P_{_V}^\nu \\
&-m_{_V}^2 \langle M_f^{*-}|(\bar q_{_f} \gamma_{\mu} q)|M^-\rangle^*\langle M_f^{*-}|(\bar q_{_f} \gamma^{\mu} q)|M^-\rangle \bigg\} \\
+&\frac{g_{_{V2}}^2}{16\pi M^3 } \lambda^{1/2}(M^2,M_f^{*2},m_{_V}^2) \bigg\{ \langle M_f^{*-}|(\bar q_{_f} \gamma_{\nu}\gamma^5 q)|M^-\rangle^*\langle M_f^{*-}|(\bar q_{_f} \gamma_{\mu}\gamma^5 q)|M^-\rangle P_{_V}^\mu P_{_V}^\nu\\
&-m_{_V}^2 \langle M_f^{*-}|(\bar q_{_f} \gamma_{\mu}\gamma^5 q)|M^-\rangle^*\langle M_f^{*-}|(\bar q_{_f} \gamma^{\mu}\gamma^5 q)|M^-\rangle \bigg\}.
\label{eq11}
\end{aligned}
\end{equation}
The hadronic transition matrix elements can be expressed as functions of the form factors in Eq.~(\ref{eq7}). In Fig.~\ref{width-V34}, the results for the $\widetilde\Gamma_{ij}$s as functions of $m_{_V}$ are shown.
\begin{figure}[h]
\centering
\subfigure[$B^-\rightarrow K^{*-}(\rho^-)V$]{ \label{width-V34a}
\includegraphics[width=0.45\textwidth]{width-V3.pdf}}
\hspace{2em}
\subfigure[$B_c^-\rightarrow D_{(s)}^{*-}V$]{ \label{width-V34b}
\includegraphics[width=0.45\textwidth]{width-V4.pdf}}
\caption{$\widetilde\Gamma_{ij}$ in $0^-\to 1^-$ meson decays with invisible vector.}
\label{width-V34}
\end{figure}
$\widetilde\Gamma_{22}$ has the same shape as $\widetilde\Gamma_{11}$ in the $0^-\to0^-$ modes shown in Fig.~\ref{width-V12}. $\widetilde\Gamma_{11}$ starts from zero because it is proportional to $m_{_V}^2$. There is no term like $\widetilde\Gamma_{12}$, since the interference term can be proved to be zero.
The upper limits of $|g_{_{Vi}}|^2$ are shown in Fig.~\ref{gv12}. The upper limit of $|g_{_{V2}}|^2$, which is of the order of $10^{-17}~{\rm GeV}^{-2}$, changes slowly when $m_{_V} < M-M_f^*$. When $m_{_V}\to 0$, the upper limit of $|g_{_{V1}}|^2$ goes to infinity. These results depend on the $\widetilde\Gamma_{ij}$s in Fig.~\ref{width-V34a}.
\begin{figure}[h]
\centering
\subfigure[$B^-\rightarrow K^{*-}V$]{
\includegraphics[width=0.46\textwidth]{gv12ks.pdf}}
\hspace{2em}
\subfigure[$B^-\rightarrow \rho^-V$]{
\includegraphics[width=0.46\textwidth]{gv12rho.pdf}}
\caption{Upper limits of $g_{_{Vi}}$ from $B$ meson $0^-\to 1^-$ decays with invisible vector.}
\label{gv12}
\end{figure}
The upper limits of the branching ratios are shown as Fig.~\ref{br-1}.
\begin{figure}[h]
\centering
\subfigure[$B_c^-\rightarrow D_s^{*-}V$]{
\includegraphics[width=0.44\textwidth]{br-V2.pdf}}
\hspace{2em}
\subfigure[$B_c^-\rightarrow D^{*-}V$]{
\includegraphics[width=0.45\textwidth]{br-V3.pdf}}
\caption{Branching ratios of $B_c$ meson $0^-\to1^-$ decays with invisible vector.}
\label{br-1}
\end{figure}
Each of the two operators is switched on in turn, while the other is assumed to be zero. The blue solid line and the red dashed line represent the contributions from $\widetilde\Gamma_{11}$ and $\widetilde\Gamma_{22}$, respectively. As there is no interference term, the upper limit of the branching ratio is given by the larger of the two lines, namely, the red dashed line.
\section{Conclusion}
We have studied light invisible bosonic particles via the FCNC processes of the $B$ and $B_c$ mesons. The mass is considered to be less than a few GeV, and both the scalar and vector cases are studied. An effective Lagrangian is introduced to describe the coupling between quarks and the dark boson. The effective coupling constants are constrained by the experimental results for the $B$ decays with missing energy. Then the upper limits of the branching fractions of the $B_c\to M_f^{(*)}\chi$ channels are calculated. When the final meson is the pseudoscalar $D_{(s)}$, the largest value of the upper limits is of the order of $10^{-6}$. For the final vector meson $D_{(s)}^*$, the $\mathcal {BR}$ is of the order of $10^{-5}$. The most likely region for finding the dark boson is near $m_\chi\approx 3.5-4$~GeV. As many more $B_c$ events will be generated in the near future, we hope future experiments can make new discoveries through such processes or set more stringent constraints on them.
\section{Acknowledgments}
This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grant No.~12075073. We also thank the HEPC Studio at Physics School of Harbin Institute of Technology for access to high performance computing resources through [email protected]
\section{Introduction}
Dark matter (DM) played an important role in the evolution of the universe. The freeze-out mechanism~\cite{Bernstein:1985th, Srednicki:1988ce} considered dark matter candidates as thermal relic from the local thermodynamic equilibrium of early universe~\cite{Izaguirre:2015yja}. Their annihilation cross sections are bounded by the observed dark matter relic abundance $\Omega_c h^2 = 0.1131\pm 0.0034$~\cite{Bertone:2004pz, Komatsu:2008hk}. Interestingly, this limitation of interaction intensity happens to be on the same order of magnitude as that of weak interaction, which makes the weakly-interacting massive particle (WIMP) to be one of the most promising dark matter candidates. Currently, the direct and indirect DM detections~\cite{Akerib:2016vxi, Cui:2017nnn, Aprile:2018dbl} get null results and set much stricter constraints on the parameter space for the WIMP with mass larger than several GeV. It provides a motivation for the study of light dark matter candidates through high-energy colliders, for example, CODEX-b at the LHCb experiment aimed to probe for GeV-scale long-lived particles~\cite{Gligorov:2017nwh}. The Lee-Winberg~\cite{PhysRevLett.39.165} limit which sets the lower bound of the WIMP mass to a few GeV is a model-dependent result. This constraint can be relaxed with different models or proper parameters selection. It makes lower mass WIMP be possible, for example, the MeV-scale light dark matter (LDM) is proposed~\cite{Pospelov:2007mp,Hooper:2008im} to explain the unexpected emission of 511 keV photons from the galaxy center. The feebly interacting massive particle (FIMP) is another DM candidate which comes from an alternative scenario of the freeze-in mechanism~\cite{McDonald:2001vt, Hall:2009bx,Bernal:2017kxu}. Within the freeze-in scenario, the DM is never in thermal equilibrium with the SM and is gradually produced from scattering or decay of the Standard Model (SM) particles. It allows much weaker interaction between the SM particles and DM.
High-energy collider searches might be able to detect dark matter particles produced in collisions through their invisible ("missing")
energy and momentum, which do not match SM neutrino prediction. This motivates us to study whether DM interactions could help to explain the anomalies. So far, these experiments provide mostly just upper limits on the interaction strength between DM and the SM. The BaBar and Belle~\cite{delAmoSanchez:2010bk, Aubert:2008am, Chen:2007zk,Grygier:2017tzo,Lai:2016uvj} known as B-factories produce large numbers of $B$ mesons, allowing to study their various decay channels precisely, which has revealed tentative anomalies with respect to SM predictions. New models involved invisible particles have been extensively studied in the flavor-changing neutral current (FCNC) processes~\cite{Bird:2004ts,Bird:2006jd,Badin:2010uh,Gninenko:2015mea,Barducci:2018rlx,Kamenik:2011vy,Bertuzzo:2017lwt}. While previous studies most focus on $B$ meson instead of $B_c$ meson. The $B_c$ meson has been massively produced and measured by the CDF~\cite{Aaltonen:2016dra}, ATLAS~\cite{Burdin:2016rzf}, CMS~\cite{Berezhnoy:2019yei}, and LHCb~\cite{Aaij:2019ths} experiments. The production rate of $B_c$ meson on the LHCb collaboration is close to 3.7 per mille of that of the $B$ mesons~\cite{Aaij:2019ths}. The $B_c$ events are of the order of $10^{10}$ per year. As the luminosity of the LHC increases significantly, much more $B_c$ events will be generated in the near future, which provides a new possibility to discover dark matter candidates.
Except for photons, the SM bosons cannot exist stably for a long time. In models, the invisible boson can either be the stable relics in previous Universe or a mediator between the SM and dark sector. Vector dark matter (VDM)~\cite{Pospelov:2008jk, Redondo:2008ec, Bjorken:2009mm} candidates are usually introduced through Abelian or non-Abelian extended gauge group. In order to make VDM itself a candidate for dark matter, additional symmetries are often requested to maintain its stability~\cite{DiazCruz:2010dc, Baek:2012se}. A well-know invisible vector model is the dark photon~\cite{Fabbrichesi:2020wbt}. A very light massive dark photon could be a dark matter candidate, while in other cases, dark photon appears as a mediator. One of spin-0 hidden boson candidates is the axion-like pseudoscalar particle. Axion was introduced in order to explain the strong-CP problem~\cite{Peccei:1977hh, PhysRevLett.40.223, Wilczek:1977pj}. Axion-like dark matter (ALDM) models~\cite{Batell:2009jf, Aditya:2012ay, Izaguirre:2016dfi} usually introduce a general dimension-five Lagrangian which consists of scalar and vector current to describe the coupling between SM fermions and ALDMs. Scalar dark matter candidates can be achieved in minimal extensions of the SM~\cite{OConnell:2006rsp, Patt:2006fw}, in which the hidden scalar can mix with the Higgs boson~\cite{Krnjaic:2015mbs, Winkler:2018qyg, Filimonova:2019tuy, Kachanovich:2020yhi}. If the scalar further decays into double leptons $l\bar l$, it is possible to observe this signal in the experiments. If it decays into two invisible fermions $\bar \chi \chi$, the scalar is a mediator between the SM and the dark sector.
In this paper, we focus on the light invisible bosonic particle (both scalar and vector) which is emitted in FCNC decays of $B$ and $B_c$ meson. We introduce a general dimension-5 effective Lagrangian which includes coupling strength of quarks and an invisible boson. The Wilson coefficients are extracted from the experimental results of the $B$ meson decays with missing energy, which are used to predict the upper limits of
the branching fractions of the similar decay processes of $B_c$ meson.
The paper is organized as follows: In Sec.~II, we study the decay processes of $B$ and $B_c$ mesons with single invisible scalar ($\chi=S$) production. In Sec.~III, we study the single invisible vector ($\chi=V$) generated case. Finally, we draw the conclusion in Sec.~IV.
\section{Light invisible scalar}
The experimental upper limits of $B$ meson FCNC decays with missing energy from Belle Collaboration and SM predictions are listed in Table.~\ref{tab1}.
\begin{table}[h]
\setlength{\tabcolsep}{0.5cm}
\caption{The branching ratios (in units of $10^{-6}$) of $B$ meson decays involving missing energy.}
\centering
\begin{tabular*}{\textwidth}{@{}@{\extracolsep{\fill}}ccc}
\hline\hline
Experimental bound~\cite{Chen:2007zk,Grygier:2017tzo,Lai:2016uvj}&SM prediction~\cite{Kamenik:2009kc,Jeon:2006nq,Altmannshofer:2009ma,Bartsch:2009qp}&Invisible particles bound\\
\hline
${\rm BR}(B^\pm\to K^\pm\slashed E)<14$& ${\rm BR}(B^\pm\to K^\pm\nu \bar{\nu})=5.1 \pm 0.8$ & ${\rm BR}(B^\pm\to K^\pm\chi)<9.7$ \\
${\rm BR}(B^\pm\to \pi^\pm\slashed E)<14$& ${\rm BR}(B^\pm\to \pi^\pm\nu \bar{\nu})=9.7 \pm 2.1$ & ${\rm BR}(B^\pm\to \pi^\pm\chi)<6.4$ \\
${\rm BR}(B^\pm\to K^{*\pm}\slashed E)<61$& ${\rm BR}(B^\pm\to K^{*\pm}\nu \bar{\nu})=8.4 \pm 1.4$ & ${\rm BR}(B^\pm\to K^{*\pm}\chi)<54$ \\
${\rm BR}(B^\pm\to \rho^\pm\slashed E)<30$&${\rm BR}(B^\pm\to \rho^\pm \nu \bar{\nu})=0.49^{+0.61}_{-0.38}$ & ${\rm BR}(B^\pm\to \rho^\pm \chi)<30$ \\
\hline\hline
\label{tab1}
\end{tabular*}
\end{table}
It can be seen that the theoretical prediction is smaller than the experimental value, which leaves room for contributions from new physics~\cite{Grygier:2017tzo}. We assume that a hidden boson $\chi$ produced in these processes carries away part of energy. The Feynman diagram is presented as in Fig.~\ref{Feyn01},
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{Feyn-0.pdf}
\caption{Feynman diagram of decay channels involving invisible particles,}
\label{Feyn01}
\end{figure}
where $q$, $q_{_f}$, and $\bar q^\prime$ represent the quark and antiquark, $M$ and $M_f$ are the masses of the initial and final mesons, respectively. When $\chi=S$, we introduce a dimension-5 model-independent effective Lagrangian to describe the vertex which represents the coupling between SM fermions and the hidden scalar,
\begin{equation}
\begin{aligned}
\mathcal L_{scalar}=m_{_S} g_{_{S1}} (\bar q_{_f} q) S +m_{_S} g_{_{S2}} (\bar q_{_f} \gamma^{5} q) S +g_{_{S3}} (\bar q_{_f}\gamma_{\mu}q)(i\partial^\mu S) + g_{_{S4}} (\bar q_{_f} \gamma_{\mu}\gamma^{5}q)(i\partial^\mu S),
\label{eq1}
\end{aligned}
\end{equation}
where $g_{_{Si}}$s are phenomenological coupling constants. The operators $(\bar q_{_f} q) S$ and $(\bar q_{_f} \gamma^{5} q) S$ break ${\rm SU(2)}_L$ symmetry, as (pseudo)scalar currents are necessarily involving quarks with opposite chirality. If one starts from an effective Lagrangian which respects the SM gauge symmetry, these operators could be suppressed severely. For example, Ref.~\cite{Kamenik:2011vy} included operators like $(H^\dagger\bar q_{_f} q) S$ and $(H^\dagger\bar q_{_f} \gamma^5 q) S$ by considering the electroweak symmetry breaking. These coefficients are suppressed by an additional factor $v/\Lambda$ with $v$ being the vacuum expectation value of Higgs field and $\Lambda$ being the new physics scale (usually considered to be in TeV). In this research we are interested that to what extent the experimental data will constrain these coefficients. The operators are just introduced phenomenologically instead of starting from gauge symmetry.
Similar processes were discussed in some previous papers, for example, Ref.~\cite{Krnjaic:2015mbs, Winkler:2018qyg, Filimonova:2019tuy, Kachanovich:2020yhi} considered the hidden scalar can mix with Higgs boson and introduced a coupling Lagrangian with mixing angle $\theta$. The experimental limits of $B$ and $K$ meson decays are used to set bounds for $\theta$. Ref.~\cite{Pospelov:2008jk} discussed about constraints of keV-scale bosonic DM candidates. In this work, we study the bosonic DM candidate with mass of several GeV, and set upper limits for the branching ratios of $B_c$ meson decays with the emission of the hidden boson.
\subsection{$0^-\to 0^-$ meson decay processes}
According to the Feynman diagram and the effective Lagrangian, the amplitude of $0^-\to 0^-$ meson decay can be written as
\begin{equation}
\begin{aligned}
\langle M_fS|\mathcal L_{scalar}|M \rangle & =m_{_S}g_{_{S1}}\langle M_f^-|(\bar q_{_f} q)|M^-\rangle+g_{_{S3}} \langle M_f^-|(\bar q_{_f}\gamma_\mu q) |M^-\rangle P_{_S}^\mu\\
&=g_{_{S1}} \mathcal {T}_1+g_{_{S3}} \mathcal {T}_3,
\label{eq1.2}
\end{aligned}
\end{equation}
where $\mathcal T_i$s are amplitudes other than the effective coupling coefficients. $P_{_S}$ is the momenta of the invisible scalar. As the Lagrangian is sum of several operators, the partial width can be written as
\begin{equation}
\begin{aligned}
\Gamma =\int {dPS_2 \big(\sum_{j} g_{_{Sj}}\mathcal{T}_j\big)^{\dagger} \big(\sum_{i} g^{ }_{_{Si}} \mathcal{T}_i \big) }=\sum_{ij}g_{_{Sj}}g_{_{Si}}\widetilde\Gamma_{ij},
\label{eq2}
\end{aligned}
\end{equation}
Here we have defined $\widetilde\Gamma_{1(3)}=\int dPS_3 |\mathcal{T}_{1(3)}|^2$, which are independent of the coefficients. When the final meson is a pseudoscalar, by finishing the two-body phase space integral, we get the decay width.
\begin{equation}
\begin{aligned}
\Gamma(M\to M_fS)=&\frac{1}{16\pi M^3 } \lambda^{1/2}(M^2,M_f^2,m_{_S}^2) \bigg\{ m_{_S}^2 g_{_{S1}}^2 \langle M_f^-|(\bar q_{_f} q)|M^-\rangle^*\langle M_f^-|(\bar q_{_f} q)|M^-\rangle \\
&+ g_{_{S3}}^2 \langle M_f^-|(\bar q_{_f} \gamma_\nu q)|M^-\rangle^*\langle M_f^-|(\bar q_{_f} \gamma_\mu q)|M^-\rangle P_{_S}^\nu P_{_S}^\mu \\
&+m_{_S} g_{_{S1}} g_{_{S3}}^* \langle M_f^-|(\bar q_{_f} \gamma_\nu q)|M^-\rangle^*\langle M_f^-|(\bar q_{_f} q)|M^-\rangle P_{_S}^\nu +h.c. \bigg\},
\label{eq3}
\end{aligned}
\end{equation}
where the K${\rm \ddot a}$llen function $\lambda(x, y, z)= x^2 + y^2 +z^2 -2xy-2xz -2yz$ is used. The hadronic transition matrix elements can be expressed as
\begin{equation}
\begin{aligned}
\langle M_f^-|(\bar q_{_f} q)|M^-\rangle&\simeq \frac{(P-P_f)^\mu}{m_q-m_{q_{_f}}}\langle M_f^-|(\bar q_{_f}\gamma_\mu q) |M^-\rangle= \frac{M^2-M_{f}^2}{m_q-m_{q_{_f}}}f_0 (s), \\
\langle M_f^-|(\bar q_{_f}\gamma_\mu q) |M^-\rangle
&= (P+P_f)_{\mu}f_+ (s)+(P-P_f)_{\mu}\frac{M^2-M_{f}^2}{s} \big[f_0 (s)-f_+ (s)\big],
\label{eq4}
\end{aligned}
\end{equation}
where $s=(P-P_f)^2=m_{_S}^2$; $f_0$ and $f_+$ are form factors; $m_q$ and $m_{q_f}$ are the masses of initial and final quarks, respectively. It is worth to mention that one of the form factors in Eq.~(\ref{eq4}) is divergent when $m_{_S}=0$, however, the final results are smooth and convergent when $m_{_S}\to 0$. The hadronic matrix element with pseudoscalar current $\langle M_f^-|(\bar q_{_f} \gamma^5 q)|M^-\rangle$ and axial vector current $\langle M_f^-|(\bar q_{_f} \gamma_\mu\gamma^5 q)|M^-\rangle$ are zero for the $0^-\to0^-$ processes. When we calculate the hadronic matrix elements of $B$ meson decays, the LCSR method are adopted to write the form factors~\cite{Ball:2004ye}. One can see more details of the selection of parameters in our previous work~\cite{Li:2018hgu, Li:2020dpc}. The instantaneous Bethe-Salpeter (BS) method ~\cite{Kim:2003ny, Wang:2005qx} which is more suitable for heavy to heavy meson decays is used in $B_c$ meson decay processes. In Mandelstam formalism, the hadronic transition matrix element is written as
\begin{equation}
\begin{aligned}
\langle h^-|\bar q_1\Gamma^\xi b|B_c^-\rangle
&= \int\frac{d^3 q}{(2\pi)^3} {\rm Tr}\left[\frac{\slashed P}{M}\overline\varphi_{P_f}^{++}(\vec q_{f})\Gamma^\xi\varphi_P^{++}(\vec q)\right],
\label{eq5}
\end{aligned}
\end{equation}
where $\Gamma^\xi=1,\gamma^5,\gamma^\mu,\gamma^\mu\gamma^5$ or $\sigma^{\mu\nu}$; $\varphi^{++}_P$ and $\varphi^{++}_{P_f}$ are the wave functions of the initial and final mesons, respectively; $P$ and ${P_f}$ are the momenta of the initial and final mesons, respectively; $\vec q$ and $\vec q_{_f}$ are the relative momenta of the quark and antiquark in the initial and final meson, respectively.
The results of $\widetilde\Gamma_{ij}$s are shown in Fig.~\ref{width-S14}. The solid and dashed lines represent noninterference and interference terms, respectively.
\begin{figure}[h]
\centering
\subfigure[$B^-\rightarrow K^-S$]{ \label{width-S14a}
\includegraphics[width=0.45\textwidth]{width-S1.pdf}}
\hspace{2em}
\subfigure[$B^-\rightarrow \pi^-S$]{ \label{width-S14b}
\includegraphics[width=0.45\textwidth]{width-S2.pdf}} \\
\subfigure[$B_c^-\rightarrow D_s^-S$]{ \label{width-S14c}
\includegraphics[width=0.45\textwidth]{width-S3.pdf}}
\hspace{2em}
\subfigure[$B_c^-\rightarrow D^-S$]{ \label{width-S14d}
\includegraphics[width=0.45\textwidth]{width-S4.pdf}}
\caption{$\widetilde\Gamma_{ij}$s in $B$ and $B_c$ meson $0^-\to 0^-$ decays with invisible scalar.}
\label{width-S14}
\end{figure}
One can see that although we use different parametric methods in $B$ and $B_c$ meson decays, the trends of $\widetilde\Gamma_{ij}$s are similar. This is because the mesons have same quantum numbers. $\widetilde\Gamma_{ij}$ increases when $m_{_S}$ increases from zero. It grows faster in $B\to M_f$ modes than that in $B_c \to M_f$ modes when $m_{_S}$ is about smaller than $4$~GeV. This is due to the difference in the form factor caused by the masses of final state mesons, since $K$ and $\pi$ mesons are light while $D_{(s)}^*$ are heavy. $\widetilde\Gamma_{33}$ and $\widetilde\Gamma_{13}$ are zero when $m_{_S}\to0$, because they are proportional to $m_{_S}^2$ and $m_{_S}$, respectively. When $m_{_S}$ is about larger than $4$~GeV, $\widetilde\Gamma_{ij}$ decreases. When $m_{_S}=(M-M_f)$, $\widetilde\Gamma_{ij}$s are zero for there is no phase space.
The upper limits in Table~\ref{tab1} give the allowed parameter space for the effective coupling constants $g_{_{Si}}$s. Here we use two different ways to make the calculation. First, we assume that only one of the $g_{_Si}$ is not zero and make others zero. In this case, the upper limits of $g_{_{Si}}$s as functions of $m_{_S}$ are shown in Fig.~\ref{gs13}, where the point of $m_{_S}=0$ is excluded.
\begin{figure}[h]
\centering
\subfigure[$B^-\rightarrow K^-S$]{
\includegraphics[width=0.45\textwidth]{gs13k.pdf}}
\hspace{2em}
\subfigure[$B^-\rightarrow \pi^-S$]{
\includegraphics[width=0.45\textwidth]{gs13pi.pdf}}
\caption{Upper limits of $g_{_{Si}}$s from $B$ meson $0^-\to 0^-$ decays with invisible scalar.}
\label{gs13}
\end{figure}
One can see that the upper limit of $|g_{_{Si}}|^2$ is infinite when $m_{_S}=M-M_f$. This is because $\widetilde\Gamma_{ij}=0$ at this point. The smallest value of $|g_{_{Si}}|^2$ is of the order of $10^{-17}~{\rm GeV}^{-2}$. The solid blue line, which represents $|g_{_{S1}}|^2$, is infinite when $m_{_S}\to 0$, since the blue solid line in Fig.~\ref{width-S14a} and Fig.~\ref{width-S14b}, which represents $\widetilde\Gamma_{11}$, is zero at this point. The red dashed line, which represents $|g_{_{S3}}|^2$, changes slowly when $m_{_S} < M-M_f$ because $\widetilde\Gamma_{33}$ changes slowly in Fig.~\ref{width-S14a} and Fig.~\ref{width-S14b}.
Second, we assume that all operators contribute and run a numerical scan to select the maximum value of the branching ratio of the $B_c$ meson. The results are plotted as dashed ($ij=11, 33$) and solid (Total) lines in Fig.~\ref{br-S12}, respectively.
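A minimal sketch of how such a scan can be organized is given below; it again assumes the quadratic form $\Gamma=\sum_{ij}g_{_{Si}}g_{_{Sj}}\widetilde\Gamma_{ij}$ with real couplings, saturates the $B$-meson limit in every coupling direction, and uses placeholder numbers for the $\widetilde\Gamma_{ij}$ matrices and for the experimental limit rather than our actual inputs.
\begin{verbatim}
import math
import numpy as np

# Toy Gamma_tilde matrices (GeV^3) for B -> K S and Bc -> Ds S at one value
# of m_S; indices correspond to the two scalar operators (i = 1, 3) that
# contribute to the 0^- -> 0^- mode.  The numbers are placeholders.
GT_B  = np.array([[0.40, 0.05], [0.05, 0.10]])
GT_BC = np.array([[0.30, 0.04], [0.04, 0.08]])

GAMMA_B_TOT, GAMMA_BC_TOT = 4.02e-13, 1.29e-12   # GeV, from the measured lifetimes
BR_UL_EXP = 1.0e-5                               # placeholder B -> K + invisible limit

def max_br_bc(gt_b, gt_bc, n_dir=2000):
    """Scan coupling directions; the B-meson constraint is saturated in each."""
    best = 0.0
    for phi in np.linspace(0.0, math.pi, n_dir):
        g_dir = np.array([math.cos(phi), math.sin(phi)])   # direction in (g1, g3)
        quad_b = g_dir @ gt_b @ g_dir                      # Gamma(B) per unit |g|^2
        if quad_b <= 0.0:
            continue
        g2 = BR_UL_EXP * GAMMA_B_TOT / quad_b              # |g|^2 saturating the limit
        best = max(best, g2 * (g_dir @ gt_bc @ g_dir) / GAMMA_BC_TOT)
    return best

print("max BR(Bc -> Ds S) at this m_S:", max_br_bc(GT_B, GT_BC))
\end{verbatim}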
\begin{figure}[h]
\centering
\subfigure[$B_c^-\rightarrow D_s^{-}S$]{
\includegraphics[width=0.45\textwidth]{br-S1.pdf}}
\hspace{2em}
\subfigure[$B_c^-\rightarrow D^{-}S$]{
\includegraphics[width=0.45\textwidth]{br-S2.pdf}}
\caption{Branching ratios of $B_c$ meson $0^-\to0^-$ decays with invisible scalar.}
\label{br-S12}
\end{figure}
One can see that the upper limits of $\mathcal {BR}$ are of the order of $10^{-6}$. The results of the two methods show subtle differences.
It should be noted that these results are the upper limits of the branching ratios. The area under the curves in Fig.~\ref{br-S12} represents the possible values of the branching ratios. The peak is located near $m_{_S} \approx 4$~GeV, which may imply the greatest probability of detecting the invisible particle in this region. Taking the LHC as an example, although the number of $B_c$ mesons produced can reach the order of $10^{10}$ per year, the number of effectively detected events is still several orders of magnitude lower. If more events can be detected and the distribution of the missing energy can be obtained, then it may be possible to observe the signal experimentally.
\subsection{$0^-\to 1^-$ meson decay processes}
In $0^-\to1^-$ meson decays, the decay width has the form
\begin{equation}
\begin{aligned}
\Gamma(M\to M^{*}_f S)=& \frac{1}{16\pi M^3 } \lambda^{1/2}(M^2,M_f^{*2},m_{_S}^2) \bigg\{ m_{_S}^2 g_{_{S2}}^2\langle M_f^{*-}|(\bar q_{_f} \gamma^5 q)|M^-\rangle^*\langle M_f^{*-}|(\bar q_{_f} \gamma^5 q)|M^-\rangle \\
&+g_{_{S4}}^2 \langle M_f^{*-}|(\bar q_{_f} \gamma_{\nu} \gamma^5 q)|M^-\rangle^*\langle M_f^{*-}|(\bar q_{_f}\gamma_{\mu} \gamma^5 q)|M^-\rangle P_{_S}^\nu P_{_S}^\mu \\
&+m_{_S} g_{_{S2}}g_{_{S4}}^*\langle M_f^{*-}|(\bar q_{_f} \gamma_{\nu} \gamma^5 q)|M^-\rangle^* \langle M_f^{*-}|(\bar q_{_f} \gamma^5 q)|M^-\rangle P_{_S}^\nu \bigg\},
\label{eq6}
\end{aligned}
\end{equation}
where $M_f^*$ represents the mass of the $1^-$ final meson. The hadronic transition matrix elements can be expressed in terms of form factors
\begin{equation}
\begin{aligned}
\langle M_f^{*-}|(\bar q_f \gamma^5 q )|M^-\rangle
&\simeq -\frac{(P-P_f)^\mu}{m_q+m_{q_{_f}}}\langle M_f^{*-}|(\bar q_{_f}\gamma_\mu\gamma^5 q) |M^-\rangle =-i\frac{2M_f^*}{m_q+m_{q_{_f}}}\epsilon \cdot (P-P_f)A_0(s)\\
\langle M_f^{*-}|(\bar q_f \gamma_\mu q )|M^-\rangle
&=\varepsilon _{\mu \nu \rho \sigma} \epsilon ^\nu P^\rho (P-P_f)^\sigma\frac{2 }{M+M_f^*}V(s), \\
\langle M_f^{*-}|(\bar q_{_f}\gamma_\mu\gamma^5 q) |M^-\rangle
&= i\bigg\{\epsilon_{\mu} (M+M_f^*)A_1(s)-(P+P_f)_{\mu}\frac{\epsilon \cdot (P-P_f)}{M+M_f^*}A_2(s)\\
&~~~-(P-P_f)_{\mu} \big[\epsilon \cdot (P-P_f)\big] \frac{2M_f^*}{s} \big[A_3(s)-A_0(s)\big]\bigg\}.
\label{eq7}
\end{aligned}
\end{equation}
The form-factor parameters are taken from the LCSR method~\cite{Straub:2015ica} for $B$ meson decays. The BS method~\cite{Kim:2003ny, Wang:2005qx} is applied to calculate the hadronic transition matrix elements of $B_c$ meson decays. The results for the $\widetilde\Gamma_{ij}$s are shown in Fig.~\ref{width-S58}.
\begin{figure}[h]
\centering
\subfigure[$B^-\rightarrow K^{*-}S$]{
\includegraphics[width=0.45\textwidth]{width-S5.pdf}}
\hspace{2em}
\subfigure[$B^-\rightarrow \rho^-S$]{
\includegraphics[width=0.45\textwidth]{width-S6.pdf}} \\
\subfigure[$B_c^-\rightarrow D_s^{*-}S$]{
\includegraphics[width=0.45\textwidth]{width-S7.pdf}}
\hspace{2em}
\subfigure[$B_c^-\rightarrow D^{*-}S$]{
\includegraphics[width=0.45\textwidth]{width-S8.pdf}}
\caption{$\widetilde\Gamma_{ij}$s in $B$ and $B_c$ meson $0^-\to 1^-$ decays with invisible scalar.}
\label{width-S58}
\end{figure}
It can be seen that there is an obvious difference between the $B$ and $B_c$ mesons. As $m_{_S}$ increases, $\widetilde\Gamma_{44}$ in $B$ meson decays first increases until $m_{_S}\approx 3.5$~GeV and then decreases to zero, while $\widetilde\Gamma_{44}$ in $B_c$ meson decays keeps decreasing until the phase space closes. This is the result of a competition between the form factors and the phase space. As $m_{_S}$ increases, the form factors increase while the phase space decreases. The form factors of $B_c$ meson decays grow much faster than those of the $B$ meson decays.
We also use two ways to set the upper limits for the branching ratios of $B_c^-\to M_f^{*-}S$ processes. The upper limits of $g_{_{Si}}$s obtained by the first method are shown in Fig.~\ref{gs24}.
\begin{figure}[h]
\centering
\subfigure[$B^-\rightarrow K^{*-}S$]{
\includegraphics[width=0.45\textwidth]{gs24ks.pdf}}
\hspace{2em}
\subfigure[$B^-\rightarrow \rho^-S$]{
\includegraphics[width=0.45\textwidth]{gs24rho.pdf}}
\caption{Upper limits of $g_{_{Si}}$s from $B$ meson $0^-\to 1^-$ decays with invisible scalar.}
\label{gs24}
\end{figure}
One can see that they have very similar trends to those in Fig.~\ref{gs13}, but are about one order of magnitude larger. This is caused by the different experimental upper limits in Table~\ref{tab1}.
The upper limits of branching ratios of $B_c$ meson from two methods are shown in Fig.~\ref{br-S34}.
\begin{figure}[h]
\centering
\subfigure[$B_c^-\rightarrow D_s^{*-}S$]{
\includegraphics[width=0.45\textwidth]{br-S3.pdf}}
\hspace{2em}
\subfigure[$B_c^-\rightarrow D^{*-}S$]{
\includegraphics[width=0.45\textwidth]{br-S4.pdf}}
\caption{Branching ratios of $B_c$ meson $0^-\to1^-$ decays with invisible scalar.}
\label{br-S34}
\end{figure}
One can see that the difference between the dashed and solid lines is obvious. It is due to the contribution of the interference term $\widetilde\Gamma_{24}$. The most likely region for finding the dark scalar is near $m_{_S}\approx 3.5$~GeV. The $\mathcal {BR}$ is of the order of $10^{-5}$, which is about an order of magnitude larger than that in the $0^-\to0^-$ modes. This follows from the experimental upper limits in Table~\ref{tab1}.
\section{Light invisible vector}
When $\chi=V$, we assume a hidden vector produced in the FCNC processes. The effective Lagrangian, which represents the coupling between SM fermions and the hidden vector, has the form
\begin{equation}
\begin{aligned}
\mathcal L_{vector}=m_{_V} g_{_{V1}}(\bar q_{_f}\gamma_{\mu}q)V^\mu + m_{_V} g_{_{V2}}(\bar q_{_f} \gamma_{\mu}\gamma^{5}q)V^\mu.
\label{eq8}
\end{aligned}
\end{equation}
This dimension-5 effective Lagrangian naturally respects gauge symmetry, since the chiralities of the two quarks are the same.
\subsection{$0^-\to 0^-$ meson decay processes}
By finishing the two-body phase space integral, the decay width of $M^- \to M_f^- V$ processes can be written as
\begin{equation}
\begin{aligned}
\Gamma(M\to M_f V )=\frac{m_{_V}^2 g_{_{V1}}^2}{16\pi M^3 } \lambda^{1/2}(M^2,M_f^2,m_{_V}^2) \langle M_f^-|(\bar q_{_f} \gamma_{\nu}q)|M^-\rangle^*\langle M_f^-|(\bar q_{_f} \gamma_{\mu}q)|M^-\rangle \mathcal P_{_V}^{\mu\nu},
\label{eq9}
\end{aligned}
\end{equation}
where the sum of polarization vector is
\begin{equation}
\begin{aligned}
\mathcal P_{_V}^{\mu\nu}=\sum\epsilon_{_V}^{*\mu}\epsilon_{_V}^{\nu}=-g^{\mu\nu}+\frac{P_{_V}^\mu P_{_V}^\nu}{m_{_V}^2}.
\label{eq10}
\end{aligned}
\end{equation}
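As a quick numerical illustration of Eq.~(\ref{eq10}), the sketch below checks that the polarization sum is transverse, $P_{_{V\mu}}\mathcal P_{_V}^{\mu\nu}=0$ for an on-shell momentum, and that $-g_{\mu\nu}\mathcal P_{_V}^{\mu\nu}=3$ counts the three physical polarizations; the momentum and mass used are placeholders.
\begin{verbatim}
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])   # metric tensor, signature (+,-,-,-)

def pol_sum(P, m):
    """Polarization sum P_V^{mu nu} = -g^{mu nu} + P^mu P^nu / m^2
    (both indices contravariant; numerically g^{mu nu} = g_{mu nu} here)."""
    return -g + np.outer(P, P) / (m * m)

m_V = 1.0                                                # placeholder mass in GeV
P = np.array([np.sqrt(m_V**2 + 4.0), 0.0, 0.0, 2.0])     # on-shell, |p| = 2 GeV along z

PS = pol_sum(P, m_V)
P_low = g @ P                                            # P_mu = g_{mu nu} P^nu

print("transversality P_mu P_V^{mu nu}:", P_low @ PS)             # ~ (0, 0, 0, 0)
print("polarization count -g_{mu nu} P_V^{mu nu}:", -np.trace(g @ PS))  # = 3
\end{verbatim}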
The hadronic transition matrix element with the axial-vector current vanishes when the final meson is pseudoscalar. The only nonzero term, $\widetilde\Gamma_{11}$, is shown in Fig.~\ref{width-V12}. One can see that the results are smooth and convergent when $m_{_V}\to 0$.
\begin{figure}[h]
\centering
\subfigure[$B^-\rightarrow K^-(\pi^-)V$]{
\includegraphics[width=0.45\textwidth]{width-V1.pdf}}
\hspace{2em}
\subfigure[$B_c^-\rightarrow D_{(s)}^-V$]{
\includegraphics[width=0.45\textwidth]{width-V2.pdf}}
\caption{$\widetilde\Gamma_{11}$ in $0^-\to 0^-$ meson decays with invisible vector.}
\label{width-V12}
\end{figure}
Since only one operator contributes, the upper limits of the coupling constants and branching ratios of $B_c$ meson can be easily obtained, which are shown in Fig.~\ref{upper}.
One can see that the upper limit of $|g_{_{V1}}|^2$ is infinite when $m_{_V}=M-M_f$, since $\widetilde\Gamma_{11}=0$ at this point. It changes slowly when $m_{_V} < M-M_f$ because $\widetilde\Gamma_{11}$ changes slowly in Fig.~\ref{width-V12}. The upper limits of the branching ratios are of the order of $10^{-6}$. As the mass of the invisible particle increases, the upper limits of $\mathcal {BR}$ first increase and then decrease to zero. The peak is located near $m_{_V}\approx 4$~GeV. This may be the region where the invisible particle is most likely to be detected experimentally.
\begin{figure}[h]
\centering
\subfigure[Upper limits of $g_{_{V1}}$ from $B$ meson $0^-\to 0^-$ decays with invisible vector.]{
\label{gv1}
\includegraphics[width=0.46\textwidth]{gv1.pdf}}
\hspace{2em}
\subfigure[Branching ratios of $B_c$ meson $0^-\to0^-$ decays with invisible vector.]{
\label{br-V1}
\includegraphics[width=0.43\textwidth]{br-V1.pdf}}
\caption{Upper limits of coupling constants and $\mathcal {BR}$ of $B_c$ meson with invisible vector.}
\label{upper}
\end{figure}
\subsection{$0^-\to 1^-$ meson decay processes}
In $M^- \to M_f^{*-} V$ processes, the decay width has the form
\begin{equation}
\begin{aligned}
\Gamma(M\to M^*_fV)=& \frac{g_{_{V1}}^2}{16\pi M^3 } \lambda^{1/2}(M^2,M_f^{*2},m_{_V}^2) \bigg\{ \langle M_f^{*-}|(\bar q_{_f} \gamma_{\nu}q)|M^-\rangle^*\langle M_f^{*-}|(\bar q_{_f} \gamma_{\mu}q)|M^-\rangle P_{_V}^\mu P_{_V}^\nu \\
&-m_{_V}^2 \langle M_f^{*-}|(\bar q_{_f} \gamma_{\mu} q)|M^-\rangle^*\langle M_f^{*-}|(\bar q_{_f} \gamma^{\mu} q)|M^-\rangle \bigg\} \\
+&\frac{g_{_{V2}}^2}{16\pi M^3 } \lambda^{1/2}(M^2,M_f^{*2},m_{_V}^2) \bigg\{ \langle M_f^{*-}|(\bar q_{_f} \gamma_{\nu}\gamma^5 q)|M^-\rangle^*\langle M_f^{*-}|(\bar q_{_f} \gamma_{\mu}\gamma^5 q)|M^-\rangle P_{_V}^\mu P_{_V}^\nu\\
&-m_{_V}^2 \langle M_f^{*-}|(\bar q_{_f} \gamma_{\mu}\gamma^5 q)|M^-\rangle^*\langle M_f^{*-}|(\bar q_{_f} \gamma^{\mu}\gamma^5 q)|M^-\rangle \bigg\}.
\label{eq11}
\end{aligned}
\end{equation}
The hadronic transition matrix elements can be expressed as the functions of form factors in Eq.~(\ref{eq7}). In Fig.~\ref{width-V34}, the results of $\widetilde\Gamma_{ij}$s as a function of $m_{_V}$ are shown.
\begin{figure}[h]
\centering
\subfigure[$B^-\rightarrow K^{*-}(\rho^-)V$]{ \label{width-V34a}
\includegraphics[width=0.45\textwidth]{width-V3.pdf}}
\hspace{2em}
\subfigure[$B_c^-\rightarrow D_{(s)}^{*-}V$]{ \label{width-V34b}
\includegraphics[width=0.45\textwidth]{width-V4.pdf}}
\caption{$\widetilde\Gamma_{ij}$ in $0^-\to 1^-$ meson decays with invisible vector.}
\label{width-V34}
\end{figure}
$\widetilde\Gamma_{22}$ has the same shape as $\widetilde\Gamma_{11}$ in the $0^-\to0^-$ modes shown in Fig.~\ref{width-V12}. $\widetilde\Gamma_{11}$ starts from zero because it is proportional to $m_{_V}^2$. There is no term like $\widetilde\Gamma_{12}$, since the interference term can be shown to vanish.
The upper limits of $|g_{_{Vi}}|^2$ are shown in Fig.~\ref{gv12}. The upper limit of $|g_{_{V2}}|^2$, which is of the order of $10^{-17}~{\rm GeV}^{-2}$, changes slowly when $m_{_V} < M-M_f$. When $m_{_V}\to 0$, the upper limit of $|g_{_{V1}}|^2$ goes to infinity. These results follow from the behavior of the $\widetilde\Gamma_{ij}$ in Fig.~\ref{width-V34a}.
\begin{figure}[h]
\centering
\subfigure[$B^-\rightarrow K^{*-}V$]{
\includegraphics[width=0.46\textwidth]{gv12ks.pdf}}
\hspace{2em}
\subfigure[$B^-\rightarrow \rho^-V$]{
\includegraphics[width=0.46\textwidth]{gv12rho.pdf}}
\caption{Upper limits of $g_{_{Vi}}$ from $B$ meson $0^-\to 1^-$ decays with invisible vector.}
\label{gv12}
\end{figure}
The upper limits of the branching ratios are shown in Fig.~\ref{br-1}.
\begin{figure}[h]
\centering
\subfigure[$B_c^-\rightarrow D_s^{*-}V$]{
\includegraphics[width=0.44\textwidth]{br-V2.pdf}}
\hspace{2em}
\subfigure[$B_c^-\rightarrow D^{*-}V$]{
\includegraphics[width=0.45\textwidth]{br-V3.pdf}}
\caption{Branching ratios of $B_c$ meson $0^-\to1^-$ decays with invisible vector.}
\label{br-1}
\end{figure}
Each of the two operators is switched on in turn, while the other is set to zero. The blue solid line and the red dashed line represent the contributions from $\widetilde\Gamma_{11}$ and $\widetilde\Gamma_{22}$, respectively. As there is no interference term, the upper limit of the branching ratio is the larger of the two lines, namely, the red dashed line.
\section{Conclusion}
We have studied light invisible bosonic particles via FCNC processes of the $B$ and $B_c$ mesons. The boson mass is considered to be less than a few GeV. Both the scalar and vector cases are considered. The effective Lagrangian is introduced to describe the coupling between quarks and the dark boson. The effective coupling constants are constrained by the experimental results for $B$ decays with missing energy. Then the upper limits of the branching fractions of the $B_c\to M_f^{(*)}\chi$ channels are calculated. When the final meson is the pseudoscalar $D_{(s)}$, the largest value of the upper limits is of the order of $10^{-6}$. For the final vector meson $D_{(s)}^*$, the $\mathcal {BR}$ is of the order of $10^{-5}$. The most likely region for finding the dark boson is near $m_\chi\approx 3.5$--$4$~GeV. As many more $B_c$ events will be generated in the near future, we hope future experiments can make new discoveries through such processes or set more stringent constraints on them.
\section{Acknowledgments}
This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grant No.~12075073. We also thank the HEPC Studio at Physics School of Harbin Institute of Technology for access to high performance computing resources through [email protected]
\section{Introduction}
\label{sec:intr}
Perturbative QCD (pQCD) cannot be applied to the studies of phenomenology in the intermediate ($|Q^2| \sim 1 \ {\rm GeV}^2$) and infrared ($|Q^2| < 1 \ {\rm GeV}^2$) regimes of the $Q^2$-complex plane,\footnote{We use the notation $Q^2 \equiv -q^2 = -(q^0)^2 + {\vec q}^2$, with $q$ being the 4-momentum in a considered physical process.} because the pQCD running coupling $a(Q^2) \equiv \alpha_s(Q^2)/\pi$ has singularities within or close to such regimes. The problematic singularities appear in the spacelike IR-regime, $0 < Q^2 < \Lambda^2_{\rm Lan.}$ where $\Lambda^2_{\rm Lan.} \sim 0.1$-$1 \ {\rm GeV}^2$ is the branching scale of the Landau cut. These singularities lead, above all, to practical difficulties of evaluation of $a(Q^2)$ and of QCD processes at such $Q^2$.
A way out of this problem consists in regaining a correct analytic behaviour by ``analytizing'' the running coupling, i.e., by replacing the pQCD coupling $a(Q^2)$ by another coupling \textcolor{black}{${\mathcal{A}}(Q^2)$ which} has the aspired analyticity properties and could, at least in principle, be used for (quasi)perturbative evaluation of low-energy observables.
In Ref.~\cite{3dAQCD} we have constructed such an improved coupling and demonstrated its compatibility with intermediate-energy observables. Here we somewhat refine this construction and apply it in addition to quantities determined by even lower energies.
\section{Construction of ${\mathcal{A}}(Q^2)$}
\label{sec:consA}
Our coupling ${\mathcal{A}}(Q^2)$ is based on dispersive methods and determined mainly by two demands: I.) it should approach the pQCD coupling $a(Q^2)$ for $Q^2 \to \infty$; and II.) it should be compatible with lattice results at very low $Q^2$. Let us go into more details now.
For the behaviour of ${\mathcal{A}}(Q^2)$ in the IR regime we proceed as follows. We start from the general defining relation for the running pQCD coupling $a (Q^2) \equiv \alpha_s(Q^2)/\pi$:
\begin{equation}
a (Q^2) = a (\Lambda^2) \frac{Z_{\rm gl}^{(\Lambda)}(Q^2) Z_{\rm gh}^{(\Lambda)}(Q^2)^2}{Z_1 ^{(\Lambda)}(Q^2)^2},
\label{alatt}
\end{equation}
where $Z_{\rm gl}$, $Z_{\rm gh}$ are the dressing functions of the gluon and ghost propagator, respectively, and $Z_1$ is the gluon-ghost-ghost vertex renormalization constant.
In the Landau gauge, in which large-volume lattice calculations are performed, $Z_1^{(\Lambda)}(Q^2)=1$ to all orders \cite{Taylor}. The resulting formula for the running coupling is particularly convenient for lattice calculations since single particle correlation functions (full propagators) in the Landau gauge are most easily accessible with that technique. In this way, a lattice coupling ${\mathcal{A}}_{\rm latt}(Q^2)$ can be defined
\begin{equation}
{\mathcal{A}}_{\rm latt}(Q^2) \equiv {\mathcal{A}}_{\rm latt}(\Lambda^2) Z_{\rm gl}^{(\Lambda)}(Q^2) Z_{\rm gh}^{(\Lambda)}(Q^2)^2 \ ,
\label{Alatt}
\end{equation}
where the $Z$'s result from large-volume lattice simulations (with lattice spacing $1/\Lambda$). ${\mathcal{A}}_{\rm latt}(Q^2)$ and our coupling ${\mathcal{A}}(Q^2)$ include both perturbative and nonperturbative contributions. We will require our coupling ${\mathcal{A}}(Q^2)$ [a low-energy extension of the pQCD coupling $a(Q^2)$] to agree qualitatively with ${\mathcal{A}}_{\rm latt}(Q^2)$ in the IR-regime
\begin{equation}
{\mathcal{A}}_{\rm latt}(Q^2) = {\mathcal{A}}(Q^2) + \Delta {\mathcal{A}}_{\rm NP}(Q^2) \ ,
\label{AlattA}
\end{equation}
\textcolor{black}{where $\Delta {\mathcal{A}}_{\rm NP}$ is regarded as a restricted (see below) nonperturbative difference between ${\mathcal{A}}_{\rm latt}$ and our ${\mathcal{A}}$.}
Recent large-volume lattice results \cite{LattcoupNf02,Lattcoupb,Lattcoupc} indicate that ${\mathcal{A}}_{\rm latt}(Q^2) \sim Q^2$ at $Q^2 \to 0$. We will assume that there is no fine-tuning at $Q^2 \to 0$; this leads to
\begin{equation}
{\mathcal{A}}(Q^2) \sim Q^2 \quad {\rm and} \quad \Delta {\mathcal{A}}_{\rm NP}(Q^2) \sim Q^2
\quad ({\rm at} \; Q^2 \to 0).
\label{noft}
\end{equation}
A further result of lattice calculations, which we will use in the following, is that ${\mathcal{A}}_{\rm latt}(Q^2)$ (at real positive $Q^2$) shows a local maximum at $Q^2 \sim 0.1 \ {\rm GeV}^2$.
A note on renormalization schemes is in order here. The mentioned lattice calculations have been performed within the (lattice) MiniMOM (MM) scheme. Consequently, we also work within that scheme, but with the squared momenta rescaled to the usual $\MSbar$-like scaling: $Q^2=Q^2_{\rm latt} (\Lambda_{\MSbar}/\Lambda_{\rm MM})^2 \approx Q^2_{\rm latt}/1.9^2$. We call this rescaled scheme the ``Lambert MM'' scheme (LMM). The name is motivated by the fact that, for the underlying perturbative coupling $a(Q^2)$, as well as for its spectral function $\rho_a(\sigma) \equiv {\rm Im} \ a(-\sigma - i \epsilon)$, we use for calculational efficiency an explicit expression \cite{GCIK2011,3dAQCD} in terms of the Lambert function $W_{\pm 1}(z(Q^2))$. Here, $z(Q^2)=-(\Lambda_{L}/Q^2)^{\beta_0/c_1}/(c_1 e)$, and the coupling $a(Q^2)$ is in the LMM scheme which has the first two $\beta$ scheme coefficients $c_j=\beta_j/\beta_0$ equal to the known MM coefficients \cite{MiniMOM} (with $N_f=3$)
\begin{equation}
c_2=9.2970 (4.4711), \; c_3=71.4538 (20.9902)\ ,
\label{c2c3}
\end{equation}
where in parentheses the values in the $\MSbar$ scheme are given.
The Lambert scale $\Lambda_{L}$ can be determined numerically from the value of $\alpha_s(M_Z^2;\MSbar)$. For example, when using the recent world average \textcolor{black}{values $\alpha_s(M_Z^2;N_f=5;\MSbar)=0.1179 \pm 0.0010$ \cite{PDG2019}, we get for the $N_f=3$ regime: $\Lambda_{L}=0.1120^{+0.0051}_{-0.0049}$ GeV} (we use the five-loop $\MSbar$ $\beta$-function \cite{5lMSbarbeta} and the corresponding four-loop quark threshold matching \cite{4lquarkthresh}). \textcolor{black}{In our specific example, we will use $\alpha_s(M_Z^2;N_f=5;\MSbar)=0.1177$ which gives $\Lambda_{L}=0.1110$ GeV.}
Having clarified the renormalization scheme, the coupling ${\mathcal{A}}(Q^2)$ will be constructed by the dispersion relation
\begin{equation}
{\mathcal{A}}(Q^2) = \frac{1}{\pi} \int_{\sigma=M^2_{\rm thr}}^{\infty} \frac{d \sigma \rho_{{\mathcal{A}}}(\sigma)}{(\sigma + Q^2)} ,
\label{Adisp}
\end{equation}
where $\rho_{{\mathcal{A}}}(\sigma) \equiv {\rm Im} \; {\mathcal{A}}(Q^2=-\sigma - i \varepsilon)$, and $M^2_{\rm thr}$ is a threshold scale expected to be $\sim 0.1 \ {\rm GeV}^2$ [$\sim (2 m_{\pi})^2$].
In Eq.~(\ref{Adisp}) we have to specify the corresponding discontinuity function $\rho_{{\mathcal{A}}}(\sigma)$ for the whole energy range $\sigma \in [M^2_{\rm thr},\infty)$. We do this in two steps:
In the UV-regime (large positive $\sigma=-Q^2$), we demand that $\rho_{{\mathcal{A}}}(\sigma)$ tend to the underlying pQCD spectral function $\rho_a(\sigma)$ as dictated by the asymptotic freedom
\begin{equation}
\rho_{{\mathcal{A}}}(\sigma) = \rho_a(\sigma) \qquad {\rm for} \; \sigma > M_0^2,
\label{rhoAa} \end{equation}
where $M_0^2$ ($ \sim 1$-$10 \ {\rm GeV}^2$) denotes the onset of the perturbative regime.
In the remaining (IR) region ($M^2_{\rm thr} < \sigma < M_0^2$) the spectral function $\rho_{{\mathcal{A}}}(\sigma)$ is a priori unknown, and we have to make a physically motivated ansatz. This interval contributes in the dispersion integral to the part which we call $\Delta {\mathcal{A}}_{\rm IR}(Q^2)$, and we decide to parametrize the latter quantity by means of a quasidiagonal Pad\'e $[M-1/M](Q^2)$. This specific choice is motivated by the highly efficient convergence properties of these approximants when $M$ increases \cite{Peris}. On the other hand, we need to keep the number of free adjustable parameters limited in order to avoid numerical instabilities during the adjustments. We take $M=3$
\begin{equation}
\Delta {\mathcal{A}}_{\rm IR}(Q^2) = \frac{\sum_{n=1}^{2} A_n Q^{2n}}{\sum_{n=1}^3 B_n Q^{2n}}
= \sum_{j=1}^{3} \frac{{\cal F}_j}{Q^2 + M_j^2}.
\label{PFM1M}
\end{equation}
The second expression on the right-hand side is obtained by the partial-fraction decomposition of the Pad\'e, with free adjustable parameters ${\cal F}_j$ and $M_j$ ($j=1,2,3$). Together with Eq.~(\ref{rhoAa}) this implies
\begin{equation}
{\mathcal{A}}(Q^2)= \sum_{j=1}^3 \frac{{\cal F}_j}{(Q^2 + M_j^2)} + \frac{1}{\pi} \int_{M_0^2}^{\infty} d \sigma \frac{ \rho_a (\sigma) }{(Q^2 + \sigma)},
\label{AQ2} \end{equation}
and the corresponding spectral function is
\begin{equation}
\rho_{{\mathcal{A}}}(\sigma) = \pi \sum_{j=1}^{3} {\cal F}_j \; \delta(\sigma - M_j^2) + \Theta(\sigma - M_0^2) \rho_a (\sigma).
\label{rhoA}
\end{equation}
Finally, we have to fix the seven as yet unspecified parameters [$M_j,{\cal F}_j$ ($j=1,2,3$) and $M_0^2$], and therefore we need seven appropriate conditions:
I.) Four conditions stem from the requirement that the coupling ${\mathcal{A}}$ at high momenta ($|Q^2|>\Lambda_L^2$) practically (i.e., up to high power corrections) coincides with the (underlying) pQCD coupling $a$: ${\mathcal{A}}(Q^2) - a(Q^2) \sim (\Lambda_{L}^2/Q^2)^N$, where $N$ is sufficiently high. We take $N=5$ which gives four conditions (cf.~Ref.~\cite{3dAQCD} for more details).
II.) The fifth condition is implied by the limiting behaviour ${\mathcal{A}}(Q^2) \sim Q^2$ for $Q^2 \to 0$, cf.~Eq.~(\ref{noft}).
III.) The sixth condition comes from the fact that for positive $Q^2$ the lattice coupling ${\mathcal{A}}_{\rm latt}(Q^2)$ has a maximum at $Q^2_{\rm max} \approx 0.135 \ {\rm GeV}^2$ [in the mentioned rescaled ``Lambert'' MM (LMM) scheme]; we require that our ${\mathcal{A}}(Q^2)$ achieves maximum at the same $Q^2_{\rm max} \approx 0.135 \ {\rm GeV}^2$.\footnote{We note that the last two conditions (II. and III.) are the only information that we take from lattice calculations.}
IV.) The final, seventh, condition is connected with the requirement that the use of the coupling ${\mathcal{A}}(Q^2)$ in QCD (we call this the ${\mathcal{A}}$QCD framework) should work well in the intermediate energy regime ($|Q^2| \sim 1 \ {\rm GeV}^2$). Specifically, it should reproduce the correct value of the canonical hadronic $\tau$-decay branching ratio $r^{(D=0)}_{\tau} \approx 0.20$ \cite{ALEPH2}. This is the QCD-part of the hadronic $\tau$-decay ratio into nonstrange hadrons, with all higher-twist ($D \not=0$) and nonzero quark mass contributions subtracted. In Appendix, a summarized analysis and evaluation of this quantity within ${\mathcal{A}}$QCD is given, where it is evaluated with the renormalon-motivated model of Ref.~\cite{renmod}. The equality of the theoretical value of $r^{(D=0)}_{\tau}$ with the experimentally preferred value $r^{(D=0)}_{\tau} \approx 0.20$ leads to the seventh condition.
The seven conditions taken together lead us to obtain numerical values of the seven parameters of the coupling ${\mathcal{A}}(Q^2)$ Eq.~(\ref{AQ2}). \textcolor{black}{When we choose $\alpha_s(M_Z^2;\MSbar)=0.1177$ and $r^{(D=0)}_{\tau, {\rm th}}=0.200$, we obtain \cite{GCRKext}:}
\begingroup \color{black}
\numparts
\begin{eqnarray}
M_0^2 &= & 10.033 \ {\rm GeV}^2 \; (M_0 \approx 3.167 \ {\rm GeV});
\label{M0}
\\
M_1^2 &=& 0.0240 \ {\rm GeV}^2 \; (M_1 \approx 0.155 \ {\rm GeV}), \quad {\cal F}_1 = -0.00813 \ {\rm GeV}^2,
\label{M1}
\\
M_2^2&=&0.506 \ {\rm GeV}^2 \; (M_2 \approx 0.712 \ {\rm GeV}), \quad {\cal F}_2 = 0.1313 \ {\rm GeV}^2,
\label{M2}
\\
M_3^2 &=& 7.358 \ {\rm GeV}^2 \; (M_3 \approx 2.713 \ {\rm GeV}), \quad {\cal F}_3 = 0.0740 \ {\rm GeV}^2.
\label{M3}
\end{eqnarray}
\endnumparts \endgroup
We see that all ${M}^2_j > 0$; therefore, the resulting coupling ${\mathcal{A}}(Q^2)$ is holomorphic (i.e., without the Landau singularities) not by imposition, but as a result of the seven mentioned (physically-motivated) conditions. \textcolor{black}{In Fig.~\ref{Figrho} we present the underlying pQCD spectral function $\rho_a$ and the resulting spectral function $\rho_{{\mathcal{A}}}$. In Fig.~\ref{FigAa} the resulting coupling $\pi {\mathcal{A}}(Q^2)$ at low positive $Q^2$ is given. At $Q^2 \to 0$ the coupling behaves as ${\mathcal{A}}(Q^2) \approx k\, Q^2$ with $k \approx 13.6 \ {\rm GeV}^{-2}$. The coupling agrees qualitatively with the lattice results, while the height of the peak depends significantly on the chosen reference value $\alpha_s(M_Z^2; \MSbar)$, both in our approach and in the lattice calculation.}
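As a quick numerical cross-check of the quoted slope, the sketch below evaluates only the three-pole part of Eq.~(\ref{AQ2}) with the central parameter values of Eqs.~(\ref{M1})-(\ref{M3}); the perturbative tail starting at $M_0^2$ is neglected here, which is a good approximation for the derivative at $Q^2=0$ (that contribution is suppressed by $1/\sigma^2$) but not for the value ${\mathcal{A}}(0)$ itself.
\begin{verbatim}
# Three-pole part of the coupling with the central values of Eqs. (M1)-(M3).
F  = [-0.00813, 0.1313, 0.0740]   # residues F_j in GeV^2
M2 = [0.0240, 0.506, 7.358]       # pole positions M_j^2 in GeV^2

def A_poles(Q2):
    """Pole part of A(Q^2); the pQCD tail integral above M_0^2 is omitted."""
    return sum(f / (Q2 + m2) for f, m2 in zip(F, M2))

# derivative at Q^2 = 0:  d/dQ^2 sum_j F_j/(Q^2+M_j^2) = -sum_j F_j/M_j^4
slope = -sum(f / m2**2 for f, m2 in zip(F, M2))
print("pole-part slope at Q^2 = 0:", slope, "GeV^-2   (text quotes k ~ 13.6)")

# finite-difference cross-check of the same slope
eps = 1e-6
print("finite difference:", (A_poles(eps) - A_poles(0.0)) / eps)
\end{verbatim}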
\begin{figure}[htb]
\begin{minipage}[b]{.49\linewidth}
\centering\includegraphics[width=80mm]{rho1MMFigart0200al01177NLL.pdf}
\end{minipage}
\begin{minipage}[b]{.49\linewidth}
\centering\includegraphics[width=80mm]{rho1MMFigbrt0200al01177NLL.pdf}
\end{minipage}
\vspace{-0.4cm}
\caption{(a) The spectral function $\rho_a (\sigma) = {\rm Im} \; a (Q^2=-\sigma - i \epsilon)$ in the 4-loop LMM scheme, $\sigma$ is on linear scale; (b) $\rho_{{\mathcal{A}}}(\sigma) = {\rm Im} \; {\mathcal{A}}(Q^2=-\sigma - i \epsilon)$, where $\sigma > 0$ is on logarithmic scale. The delta function at $M_1^2$ is negative (shown as positive for convenience).}
\label{Figrho}
\end{figure}
\begin{figure}[htb]
\centering\includegraphics[width=95mm]{AlattAaa4l3drt0200al01177N.pdf}
\vspace{-0.4cm}
\caption{The considered $N_f=3$ holomorphic coupling $\pi {\mathcal{A}}$ (solid curve), the underlying LMM pQCD coupling $\pi a$ (dot-dashed curve), $\MSbar$ pQCD coupling ${\overline a}$ (dotted curve), for $Q^2>0$. Included are the large-volume lattice results $\pi {\mathcal{A}}_{\rm latt}$ \cite{LattcoupNf02} (points with bars), for which the momenta $Q^2$ were rescaled from the lattice MM to the LMM scheme: $Q^2=Q^2_{\rm latt} (\Lambda_{\MSbar}/\Lambda_{\rm MM})^2 \approx Q^2_{\rm latt}/1.9^2$. At large $Q^2 > 1 \ {\rm GeV}^2$, the (large-volume) lattice results are unreliable.}
\label{FigAa}
\end{figure}
Somewhat different (but similar) results were obtained by us earlier \cite{3dAQCD}, where for the Adler function (see Sec.~\ref{sec:BSR} and Appendix) we took the nonresummed form: truncated series in ${\mathcal{A}}$QCD based on the first four terms of the pQCD expansion (\ref{dpt}).\footnote{\textcolor{black}{The Dirac delta function at $\sigma=M_3^2$ has an effect of simulating (parametrizing) a nonabrupt fall of $\rho_{{\mathcal{A}}}(\sigma)$ when $\sigma$ decreases below $M_0^2$.}}
As a consequence, the evaluation of integrals in Eq.~(\ref{DAres}) [in contrast to $d(Q^2)_{D=0; res} $ of Eq.~(\ref{Dares})] is unambiguous for all spacelike $Q^2 \in \mathbb{C} \backslash (-\infty, -M_1^2]$, because no Landau singularities are encountered along the integration lines, \textcolor{black}{and the resulting Adler function $d(Q^2)_{D=0; {\mathcal{A}} res}$ is a holomorphic function in the entire complex $Q^2$-plane with the exception of the negative semiaxis.}
\textcolor{black}{The values of parameters in Eqs.~(\ref{M0})-(\ref{M3}) change appreciably when the values of the input parameters $\alpha_s(M_Z^2; \MSbar)$ and $r_{\tau, {\rm th}}^{(D=0)}$ change. For example, when $\alpha_s(M_Z^2; \MSbar)$ is increased to $0.1181$, the extracted parameters are: $M_0 \approx 2.864$ GeV, $M_1=0.252$ GeV, $M_2=0.454$ GeV, $M_3= 2.442$ GeV; ${\cal F}_1=-0.0582 \ {\rm GeV}^2$; ${\cal F}_2=0.1716 \ {\rm GeV}^2$; ${\cal F}_3=0.0665 \ {\rm GeV}^2$.}
\section{Applications: I.~Borel sum rules for semihadronic $\tau$ decay}
\label{sec:BSR}
An important physical quantity, essential for the analysis of several QCD processes (e.g., hadronic $\tau$-decays, cf.~Appendix) is the Adler function ${\cal D}(Q^2)$ defined by
\begin{equation} {\cal D}(Q^2) \equiv - 2 \pi^2 \frac{d \Pi(Q^2)}{d \ln Q^2},
\label{Ddef} \end{equation}
where $\Pi(Q^2)$ is the general vacuum polarization function, i.e., current correlation function. The OPE expansion of the (full V+A channel) Adler function is
\begin{equation}
{\cal D}_{\rm V+A}(Q^2)
= 1 + d(Q^2)_{D=0} + 2 \pi^2 \sum_{n \geq 2}
\frac{ n \langle O_{2n} \rangle_{\rm V+A}}{(Q^2)^n}.
\label{DVA}
\end{equation}
The leading-twist ($D=0$) contribution $d(Q^2)_{D=0}$, sometimes also called Adler function, and its evaluation in ${\mathcal{A}}$QCD with the renormalon-motivated model of Ref.~\cite{renmod}, are explained in Appendix. The nonperturbative higher-twist terms ($D \geq 4$) include the corresponding vacuum condensates.
The Adler function can be used in general sum rules. Namely, by choosing any holomorphic function $g(Q^2)$ one can derive from it the sum rule
\begin{equation}
\int_0^{\sigma_{\rm max}} d \sigma g(-\sigma) \omega_{\rm exp}(\sigma) =
-i \pi \oint d Q^2 g(Q^2) \Pi_{\rm th}(Q^2) \ ,
\label{sr1}
\end{equation}
where the integration on the right-hand side is performed along the circle $|Q^2|=\sigma_{\rm max}$ (with $\sigma_{\rm max} \leq m^2_{\tau}$); $\omega(\sigma)$ is the spectral function of $\Pi(Q^2)$ along the cut,
$\omega(\sigma) \equiv 2 \pi \; {\rm Im} \ \Pi(Q^2=-\sigma - i \epsilon)$, which is measured. Integration by parts on the right-hand side of Eq.~(\ref{sr1}) leads to a form which involves the Adler function ${\cal D}(Q^2)$.
The specific case of Borel (or: Laplace) sum rules is obtained if one chooses $g(Q^2) = \exp(Q^2/M^2)/M^2$, where $M^2$ denotes a complex parameter (Borel scale), and in Eq.~(\ref{sr1}) only the real parts are considered \cite{Ioffe}. The corresponding integrals on the right-hand side are usually denoted as $B_{\rm (th)}(M^2)$. If in the sum in Eq.~(\ref{DVA}) we take only two terms ($n=2,3$), then for $M^2=|M^2| \exp(i \pi/6)$ and $M^2=|M^2| \exp(i \pi/4)$ the Borel sum rules allow us to extract the condensate values $\langle O_4 \rangle_{\rm V+A}$ and $\langle O_6 \rangle_{\rm V+A}$, respectively, from the measured $\tau$-decay spectral function $\omega_{\rm exp}(\sigma) = 2 \pi {\rm Im} \Pi_{{\rm V+A}}(-\sigma - i \epsilon)$ as obtained from OPAL \cite{OPAL} and ALEPH \cite{ALEPH2} experiments, cf.~Fig.~\ref{FigOmega}.
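For orientation, a minimal sketch of how the experimental (left-hand) side of Eq.~(\ref{sr1}) with the Borel weight can be evaluated from binned spectral data is given below; the binning and the constant input $\omega=1$ are placeholders (for $\omega \to 1$ and real $M^2$ the transform reduces to $1-e^{-\sigma_{\rm max}/M^2}$, which provides a simple sanity check).
\begin{verbatim}
import numpy as np

def borel_exp(sigma, omega, dsigma, absM2, psi):
    """Re B_exp(M^2) = Re[(1/M^2) int_0^{sigma_max} dsigma e^{-sigma/M^2} omega(sigma)]
    for M^2 = |M^2| exp(i psi), approximated by a Riemann sum over bins."""
    M2 = absM2 * np.exp(1j * psi)
    integrand = np.exp(-sigma / M2) * omega / M2
    return float(np.real(np.sum(integrand * dsigma)))

# toy binned spectral function (placeholders for the OPAL/ALEPH data)
sigma = np.linspace(0.05, 2.80, 56)          # bin centres in GeV^2
dsig  = np.full_like(sigma, sigma[1] - sigma[0])
omega = np.ones_like(sigma)                  # omega -> 1 is the parton-level value

for psi in (0.0, np.pi / 6, np.pi / 4):
    print(psi, borel_exp(sigma, omega, dsig, 1.0, psi))
\end{verbatim}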
\begin{figure}[htb]
\begin{minipage}[b]{.49\linewidth}
\centering\includegraphics[width=80mm]{OmegaVANoPi.pdf}
\end{minipage}
\begin{minipage}[b]{.49\linewidth}
\centering\includegraphics[width=80mm]{OmegaVANoPiALEPH.pdf}
\end{minipage}
\vspace{-0.4cm}
\caption{The spectral function $\omega_{{\rm V+A}}(\sigma)$, measured by the OPAL (left-hand) and ALEPH (right-hand) Collaborations, without the pion peak contribution. We take $\sigma_{\rm max}=3.136$ and $2.80 \ {\rm GeV}^2$ for OPAL and ALEPH, respectively.}
\label{FigOmega}
\end{figure}
In Fig.~\ref{FigPi64} we present the results of the fit to the ALEPH data, for Borel transforms with $M^2 = |M^2| \exp(i \Psi)$ with $\Psi=\pi/6$ and $\pi/4$. We can see that the adjustment of the values of $\langle O_4 \rangle$ and $\langle O_6 \rangle$, respectively, gives a good fit to the central experimental curve when the described ${\mathcal{A}}$QCD evaluation is used in the $D=0$ part of the Adler function (\ref{DVA}) in the LMM scheme [see Eq.~(\ref{DAres})]. When applying the $\MSbar$ pQCD approach, the $D=0$ part of the Adler function is calculated in the $\MSbar$ scheme (at complex $Q^2$) according to Eq.~(\ref{Dares}) with the characteristic functions for $\MSbar$ from Ref.~\cite{renmod}. We can see in Fig.~\ref{FigPi64} that the $\MSbar$ pQCD approach gives a worse fit. Finally, in Fig.~\ref{FigPsi0} the curves for the theoretical Borel transforms for real positive $M^2$ (i.e., $\Psi=0$) are given, with the condensate values of $\langle O_4 \rangle$ and $\langle O_6 \rangle$ obtained from the aforementioned fits, and compared with the experimental ALEPH values. The ${\mathcal{A}}$QCD prediction is significantly better than the $\MSbar$ pQCD one.
The gluon condensate is directly related to $\langle O_4 \rangle_{{\rm V+A}}$: $\langle a GG \rangle = 6 \langle O_4 \rangle_{{\rm V+A}} + 0.00199 \ {\rm GeV}^4$.
\begin{figure}[htb]
\begin{minipage}[b]{.49\linewidth}
\centering\includegraphics[width=78mm]{BexpBinALPsiPi6o77rt0200full4lal01177NReno2LL.pdf}
\end{minipage}
\begin{minipage}[b]{.49\linewidth}
\centering\includegraphics[width=78mm]{BexpBinALPsiPi4o77rt0200full4lal01177NReno2LL.pdf}
\end{minipage}
\vspace{-0.4cm}
\caption{Borel transforms ${\rm Re} B(M^2)$ along the rays $M^2 = |M^2| \exp(i \Psi)$ with $\Psi=\pi/6$ (left-hand) and $\Psi = \pi/4$ (right-hand), as a function of $|M^2|$, fitted to the ALEPH data. \textcolor{black}{The grey band represents the experimental results [left-hand side of Eq.~(\ref{sr1})]; the solid line is the middle of this band.}}
\label{FigPi64}
\end{figure}
Combining the fits to the OPAL and ALEPH data yields, using the described ${\mathcal{A}}$ for fixed \textcolor{black}{$\alpha_s(M_Z^2;\MSbar)=0.1177$ and $r_{\tau}^{(D=0)}=0.200$} (for comparison, we include the results with the $\MSbar$ pQCD coupling):
\begingroup \color{black}
\numparts
\begin{eqnarray}
\langle O_4 \rangle_{{\rm V+A}} & = & (+0.00028 \pm 0.00016) \ {\rm GeV}^4
\label{O4}
\\
\Rightarrow \; \langle a G G \rangle &=& (+0.00364 \pm 0.00097) \ {\rm GeV}^4 ,
\label{aGG}
\\
\langle O_6 \rangle_{{\rm V+A}} & = & (+0.00074 \pm 0.00021) \ {\rm GeV}^6.
\label{O6}
\end{eqnarray}
\begin{eqnarray}
\langle O_4 \rangle_{{\rm V+A},\MSbar} & = & (+0.00173 \pm 0.00024) \ {\rm GeV}^4,
\label{O4MS}
\\
\langle O_6 \rangle_{{\rm V+A},\MSbar} & = & (-0.00451 \pm 0.00040) \ {\rm GeV}^6.
\label{O6MS}
\end{eqnarray}
\endnumparts \endgroup
\begin{figure}[htb]
\centering\includegraphics[width=95mm]{BexpBinALPsi0o77rt0200full4lal01177NReno2LL.pdf}
\vspace{-0.4cm}
\caption{Analogous to the previous Figures, but now the Borel transforms $B(M^2)$ are for real $M^2 > 0$.}
\label{FigPsi0}
\end{figure}
Cross-check of consistency can be performed, comparing the theoretical (predetermined by the coupling ${\mathcal{A}}$, cf.~Appendix) and the experimental values of $r^{(D=0, \sigma_{\rm max})}_{\tau}$:
\numparts
\begin{eqnarray}
\lefteqn{r^{(D=0, \sigma_{\rm max})}_{\tau,{\rm exp}}=}
\nonumber\\
&=& 2 \int_{0}^{\sigma_{\rm max}} \frac{d \sigma}{\sigma_{\rm max}} \left( 1 - \frac{\sigma}{\sigma_{\rm max}} \right)^2 \left( 1 + 2 \frac{\sigma}{\sigma_{\rm max}} \right) \omega_{\rm exp}(\sigma)-1 + 12 \pi^2 \frac{\langle O_6 \rangle_{{\rm V+A}}}{\sigma^3_{\rm max}}
\nonumber\\
& =& \begingroup \color{black} 0.201 \pm 0.006 \ \quad \rm (OPAL) \; vs \; 0.201 \; \rm (th.) \endgroup
\label{rtauexpa}
\\
& =& \begingroup \color{black} 0.211 \pm 0.003 \ \quad \rm (ALEPH) \; vs \; 0.213 \; \rm (th.) \endgroup
\label{rtauexpb}
\end{eqnarray}
\endnumparts
We see that the ${\mathcal{A}}$QCD results are consistent. Here $\sigma_{\rm max}=3.136 \ {\rm GeV}^2$ and $2.80 \ {\rm GeV}^2$, and \textcolor{black}{$\langle O_6 \rangle_{\rm V+A}=+0.00085$ and $+0.00063 \ {\rm GeV}^6$, for OPAL and ALEPH, respectively. We recall that $r^{(D=0, m_{\tau}^2)}_{\tau,{\rm th}} =0.200$.}
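The experimental moment of Eq.~(\ref{rtauexpa}) can be evaluated from binned data in the same spirit as the Borel transform above; the sketch below uses a placeholder binning, and the constant input $\omega=1$ (the parton-level value) gives a moment close to zero, showing that this combination indeed isolates the QCD corrections.
\begin{verbatim}
import numpy as np

def r_tau_D0(sigma, omega, dsigma, sigma_max, O6_VpA):
    """Experimental D=0 moment of Eq. (rtauexpa):
    2 int (ds/s_max)(1-s/s_max)^2(1+2s/s_max) omega(s) - 1 + 12 pi^2 <O6>/s_max^3."""
    x = sigma / sigma_max
    weight = 2.0 * (1.0 - x) ** 2 * (1.0 + 2.0 * x) / sigma_max
    moment = np.sum(weight * omega * dsigma)
    return moment - 1.0 + 12.0 * np.pi ** 2 * O6_VpA / sigma_max ** 3

# toy input: omega = 1 and <O6> = 0 give a moment ~ 0
sigma = np.linspace(0.025, 2.775, 56)        # bin centres in GeV^2
dsig = np.full_like(sigma, 0.05)
print(r_tau_D0(sigma, np.ones_like(sigma), dsig, 2.80, 0.0))
\end{verbatim}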
\section{Applications: II.~V-channel Adler function and muon $g-2$}
\label{sec:DV}
We can now perform a further consistency check of our ${\mathcal{A}}$QCD [called $3\delta$ ${\mathcal{A}}$QCD to address the specific ansatz Eq.~(\ref{rhoA})], by applying it to a quantity which is determined by QCD at even lower energies, namely the anomalous magnetic moment of muon, or more specifically, to the leading order hadronic vacuum polarization (had(1)) contribution to this moment, $(g_{\mu}/2-1)^{\rm had(1)} \equiv a_{\mu}^{\rm had(1)}$. This quantity is experimentally deducible to a high accuracy from the precise measurements of the cross section $e^+ e^- \to \gamma^{\ast} \to$ hadrons, the recent values are \textcolor{black}{\cite{Davier:2019can}}\footnote{\textcolor{black}{Recently, the BMW-Collaboration \cite{Borsanyi:2020mff} obtained from lattice calculation significantly higher values $10^{10} \times a_{\mu; {\rm exp}}^{\rm had(1)} = 712.4 \pm 4.5$, indicating that no new physics beyond SM (beyond QCD) is required to explain the directly measured value of full $a_{\mu}$ \cite{PDG2019}. On the other hand, another lattice calculation \cite{Lehner:2020crt} indicates that the lattice results have higher statistical uncertainties than assumed in \cite{Borsanyi:2020mff}, which would avoid tension with the result Eq.~(\ref{amuhad1}) based on the measurements of $e^+ e^- \to \gamma^{\ast} \to$ hadrons.}}
\begin{equation}
10^{10} \times a_{\mu; {\rm exp}}^{\rm had(1)} \approx \color{black} 694 \pm 4 \ .
\label{amuhad1}
\end{equation}
For its theoretical evaluation, one needs the correlation function of the V-channel currents. Therefore, first the evaluation of the (full) V-channel Adler function
\begin{eqnarray}
{\cal D}_{\rm V}(Q^2) &\equiv& - 4 \pi^2 \frac{d \Pi_{\rm V}(Q^2)}{d \ln Q^2}
= d(Q^2)_{D=0} + {\cal D}(Q^2)^{\rm (NP)}
\nonumber\\
&=& d(Q^2)_{D=0} + 1 + 2 \pi^2 \sum_{n \geq 2}
\frac{ n 2 \langle O_{2n} \rangle_{\rm V}}{(Q^2)^n}
\label{DV}
\end{eqnarray}
is needed. The leading-twist contribution $d(Q^2)_{D=0}$ is the same as in the previously considered (V+A)-channel case (cf.~ Appendix). The $D=4$ condensate values here are known from the (V+A)-channel because
$\langle O_4 \rangle_{\rm V} = \langle O_4 \rangle_A = (1/2) \langle O_4 \rangle_{{\rm V+A}}$.
On the other hand, for $D=6$, a sum rule analysis of the (V-A)-channel \cite{GPRS} gives (the average from OPAL and ALEPH data)
\begin{equation}
\langle O_6 \rangle_{{\rm V-A}} = (-0.00465 \pm 0.00126) \ {\rm GeV}^6,
\label{O6VmA}
\end{equation}
Therefore, taking into account that $\langle O_6 \rangle_{\rm V} = (1/2) ( \langle O_6 \rangle_{{\rm V+A}} + \langle O_6 \rangle_{{\rm V-A}})$, we obtain in our case of $3 \delta$ ${\mathcal{A}}$QCD [with \textcolor{black}{$\alpha_s(M_Z^2;\MSbar)=0.1177$ and $r_{\tau}^{(D=0)}=0.200$}]
\numparts
\begin{eqnarray}
\langle O_4 \rangle_{\rm V} &=& \textcolor{black}{ (+0.00014 \pm 0.00008)} \ {\rm GeV}^4,
\label{O4Vres}
\\
\langle O_6 \rangle_{\rm V} &=& \textcolor{black}{ (-0.00196 \pm 0.00064)} \ {\rm GeV}^6,
\label{O6Vres}
\end{eqnarray} \endnumparts
The nonperturbative (NP) part of the V-channel Adler function (\ref{DV}), since it is applied here to very low $Q^2$ values, clearly has to be ``IR-regularized.''
We do this by two regularization masses ${\cal M}_2$ and ${\cal M}_3$ in the following way:
\numparts
\begin{eqnarray}
\lefteqn{
{\cal D}_{\rm V}(Q^2)^{\rm (NP)} = 1 + 2 \pi^2 \sum_{n \geq 2}
\frac{ n 2 \langle O_{2n} \rangle_{\rm V}}{(Q^2)^n}
\label{NPOPE}
}
\\
& = &
1 + 4 \pi^2
\left[ \frac{2 \langle O_4 \rangle_{\rm V}}{ (Q^2+ {\cal M}_2^2)^2 }
+ \frac{\left( 3 \langle O_6 \rangle_{\rm V} + 4 {\cal M}_2^2 \langle O_4 \rangle_{\rm V} \right)}{\textcolor{black}{(Q^2 + {\cal M}_3^2)^3}} \right]
\label{NPOPEreg}
\end{eqnarray} \endnumparts
We note that similarly regularized higher-twist expressions were used in the analyses of Bjorken Sum Rule in \cite{KTG}.
The IR-regularization masses ${\cal M}_2$ and ${\cal M}_3$ are expected to be real positive and $\lesssim 1$ GeV, reflecting the scales of the NP regime in QCD. The expression (\ref{NPOPEreg}) is written in such a way that in the limit of large $|Q^2|$ it gives the correct first two terms of the sum in Eq.~(\ref{NPOPE}). At $Q^2 \to 0$ we have, on the other hand
\begin{equation}
{\cal D}_{\rm V}(0) = 0 \; \Rightarrow {\cal D}_{\rm V}(0)^{\rm (NP)} =0.
\label{deepIR1}
\end{equation}
The above implication is valid because the (resummed) $D=0$ part $d(Q^2)_{D=0}$ in $3 \delta$ ${\mathcal{A}}$QCD, Eq.~(\ref{DAres}) goes to zero when $Q^2 \to 0$ because ${\mathcal{A}}(Q^2) \sim Q^2 \to 0$. The condition ${\cal D}_{\rm V}(0)^{\rm (NP)} =0$ implies
\begin{equation}
{\cal M}_3^2 = \left[ \frac{(-3) \langle O_6 \rangle_{\rm V} - 4 {\cal M}_2^2 \langle O_4 \rangle_{\rm V}}{\frac{1}{4 \pi^2} + 2 \langle O_4 \rangle_{\rm V}/{\cal M}_2^4} \right]^{1/3}.
\label{M32}
\end{equation}
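A minimal numerical cross-check of this closure relation is given below; it uses the central condensate values of Eqs.~(\ref{O4Vres})-(\ref{O6Vres}) and, as input, anticipates the central value ${\cal M}_2\approx 0.384$~GeV extracted below in Eq.~(\ref{cM2}). By construction, the regularized expression (\ref{NPOPEreg}) then vanishes at $Q^2=0$, and the resulting ${\cal M}_3$ is close to the value quoted in Eq.~(\ref{cM3}).
\begin{verbatim}
import math

O4V, O6V = 0.00014, -0.00196   # GeV^4, GeV^6: central values of Eqs. (O4Vres), (O6Vres)
M2sq = 0.384 ** 2              # GeV^2: central value of Eq. (cM2), used as input here

def M3sq(M2sq, O4, O6):
    """Eq. (M32): fixes M3^2 so that D_V^{(NP)}(0) = 0."""
    num = -3.0 * O6 - 4.0 * M2sq * O4
    den = 1.0 / (4.0 * math.pi ** 2) + 2.0 * O4 / M2sq ** 2
    return (num / den) ** (1.0 / 3.0)

def D_V_NP(Q2, M2sq, M3sq, O4, O6):
    """Regularized NP part of the V-channel Adler function, Eq. (NPOPEreg)."""
    return 1.0 + 4.0 * math.pi ** 2 * (2.0 * O4 / (Q2 + M2sq) ** 2
                                       + (3.0 * O6 + 4.0 * M2sq * O4) / (Q2 + M3sq) ** 3)

m3sq = M3sq(M2sq, O4V, O6V)
print("M3 =", math.sqrt(m3sq), "GeV")                        # close to ~0.73 GeV
print("D_V^(NP)(0) =", D_V_NP(0.0, M2sq, m3sq, O4V, O6V))    # ~ 0 by construction
\end{verbatim}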
Now we turn to the theoretical evaluation of $a_{\mu}^{\rm had(1)}$. It is given by
\begin{eqnarray}
a_{\mu}^{\rm had(1)} &=& \frac{\alpha_{em}^2}{3 \pi^2} \int_0^{\infty} \frac{ds}{s} K(s) R_{\gamma,{\rm data}}(s),
\label{ahada}
\\
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
{\rm with:} \qquad \;\;
K(s) &=& \int_0^1 dx \frac{x^2 (1-x)}{x^2 + \frac{s}{m_{\mu}^2} (1-x)},
\label{Ks}
\end{eqnarray}
and $R_{\gamma,{\rm data}}(s) = 4 \pi k_f \; {\rm Im} \Pi_{\rm V}(-s-i \epsilon)$. Further, $k_f = 3 \sum_{f} Q_f^2$ ($k_f=2$ for $N_f=3$).
Using Cauchy theorem, $a_{\mu}^{\rm had(1)}$ Eq.~(\ref{ahada}) can be expressed in terms of the V-channel Adler function (\ref{DV})
\begin{equation}
a_{\mu}^{\rm had(1)} = \frac{\alpha_{em}^2}{3 \pi^2} \! \int_0^{1} \! \frac{dx}{x} (1-x) (2-x) {\cal D}_{\rm V} \left( Q^2\!=\!m^2_{\mu} \frac{x^2}{(1-x)} \right).
\label{ahadb}
\end{equation}
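A minimal sketch of the numerical evaluation of Eq.~(\ref{ahadb}) is given below; the Adler function passed in is a toy placeholder with the correct ${\cal D}_{\rm V}(0)=0$ behaviour, whereas in the actual analysis it is the sum of the $D=0$ part and the regularized OPE expression (\ref{NPOPEreg}), and the output is to be compared with Eq.~(\ref{amuhad1}).
\begin{verbatim}
import math
from scipy.integrate import quad

ALPHA_EM = 1.0 / 137.036
M_MU = 0.105658                      # muon mass in GeV

def a_mu_had1(D_V):
    """Leading-order hadronic vacuum polarization contribution, Eq. (ahadb)."""
    def integrand(x):
        Q2 = M_MU ** 2 * x ** 2 / (1.0 - x)
        return (1.0 - x) * (2.0 - x) / x * D_V(Q2)
    val, _ = quad(integrand, 1e-9, 1.0 - 1e-9, limit=200)
    return ALPHA_EM ** 2 / (3.0 * math.pi ** 2) * val

# toy Adler function with D_V(0) = 0 (placeholder; not the analysis input)
toy_D_V = lambda Q2: Q2 / (Q2 + 0.5)
print(a_mu_had1(toy_D_V))
\end{verbatim}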
We use now: (I) our $3 \delta$ ${\mathcal{A}}$QCD for the (renormalon-motivated) evaluation of the $D=0$ contribution to ${\cal D}_{\rm V}( Q^2)$ as given by Eq.~(\ref{DAres}); (II) the resummed OPE expression (\ref{NPOPEreg}) for the $D \geq 4$ (NP) contribution to ${\cal D}_{\rm V}( Q^2)$; (III) and the obtained values of $\langle O_4 \rangle_{\rm V}$ and $\langle O_6 \rangle_{\rm V}$ Eqs.~(\ref{O4Vres})-(\ref{O6Vres}). When evaluating the expression (\ref{ahadb}) in this way and requiring that the result reproduces the experimental value (\ref{amuhad1}), we can numerically extract the allowed values of the regularization masses ${\cal M}_2$ and ${\cal M}_3$. We obtain (the uncertainties from various sources are separated)
\begingroup \color{black}
\numparts
\begin{eqnarray}
{\cal M}_2 &= & \left[ 0.384^{+0.019}_{-0.040} (\delta \langle O_4 \rangle_{\rm V})^{-0.019}_{+0.014} (\delta \langle O_6 \rangle_{\rm V}) \right] \ {\rm GeV},
\label{cM2}
\\
{\cal M}_3 & = & \left[ 0.730^{-0.012}_{+0.016} (\delta \langle O_4 \rangle_{\rm V})^{-0.055}_{+0.042} (\delta \langle O_6 \rangle_{\rm V}) \right] \ {\rm GeV}.
\label{cM3}
\end{eqnarray} \endnumparts \endgroup
This means that both IR-regularization masses are in the physically expected range of values, which we consider as a further consistency check of our approach. \textcolor{black}{If only the $D=0$ part $d(Q^2)_{D=0}$ were used for ${\cal D}_{\rm V}(Q^2)$, the integration Eq.~(\ref{ahadb}) would give $10^{10} \times a_{\mu, D=0}^{\rm had(1)} \approx 423$, i.e., about $61 \%$ of the required value Eq.~(\ref{amuhad1}). The extracted central values of Eqs.~(\ref{cM2})-(\ref{cM3}) change only very little when $10^{10} \times \delta a_{\mu; {\rm exp}}^{\rm had(1)}$ varies by $\pm 4$ according to Eq.~(\ref{amuhad1}): $\delta {\cal M}_2 = \pm 0.7$ MeV and $\delta {\cal M}_3= \pm 0.3$ MeV. When the higher value $10^{10} \times a_{\mu; {\rm exp}}^{\rm had(1)} = 712.4$ is used (the central value of the prediction of Ref.~\cite{Borsanyi:2020mff}), the extracted values increase only slightly: $\delta {\cal M}_2 = 3.1$ MeV and $\delta {\cal M}_3=1.3$ MeV.}
\textcolor{black}{We have also performed our analysis for various other input values of $\alpha_s(M_Z^2;\MSbar)$ and $r_{{\tau}, {\rm th}}^{(D=0)}$ ($\equiv r_{{\tau}, {\rm th}}^{(D=0, m_{\tau}^2)}$), in order to investigate the stability of the results. Details of the results will be presented in the extended version of this work, Ref.~\cite{GCRKext}.}
\section{Conclusions}
\label{sec:concl}
A QCD coupling ${\mathcal{A}}(Q^2)$ was constructed by dispersive methods, in the lattice MiniMOM scheme (rescaled to the usual $\Lambda_{\MSbar}$ scale convention). Mathematica programs for the evaluation of the coupling are available online \cite{MathPrg}. This coupling defines a version of (${\mathcal{A}}$)QCD which has several attractive features:
\noindent
(a) At high momenta $|Q^2| > 1 \ {\rm GeV}^2$, the coupling ${\mathcal{A}}(Q^2)$ reproduces the pQCD results, because there it practically coincides with the underlying pQCD coupling $a(Q^2) \equiv \alpha_s(Q^2)/\pi$.
\noindent
(b) At very low momenta $|Q^2| \lesssim 0.1 \ {\rm GeV}^2$, ${\mathcal{A}}(Q^2)$ goes to zero as $\sim Q^2$, as suggested by high-volume lattice results.
\noindent
(c) At intermediate momenta $|Q^2| \sim 1 \ {\rm GeV}^2$, ${\mathcal{A}}(Q^2)$ reproduces the well measured physics of semihadronic $\tau$-lepton decay.
\noindent
(d) As a byproduct of the construction, ${\mathcal{A}}(Q^2)$ possesses an attractive holomorphic behaviour in the complex $Q^2$-plane, the behaviour qualitatively shared by QCD spacelike physical quantities.
\noindent
(e) Several successful applications of ${\mathcal{A}}(Q^2)$ were made in low-$|Q^2|$ phenomenology, including the correct reproduction of the value of muon $(g-2)_{\rm exp}^{\rm had(1)}$.
Other holomorphic couplings have been introduced and applied in QCD phenomenology by various authors, among them \cite{ShS,MS,Sh1Sh2,BMS,APTappl1,APTappl2,Nest2,2dAQCD,PTBMF,Boucaud,Luna,Pelaez,Siringo,Mirjalili}. Further, spacelike QCD observables can be evaluated also by applying dispersive methods directly to them \cite{MSS1,MSS2,DeRafael,MagrGl,MagrTau,mes2,Nest3a,Nest3b}.
Further details will be presented in the extended version of this work, Ref.~\cite{GCRKext}.
\vspace{0.4cm}
\noindent
{\it Acknowledgments:\/} The work of G.C. was supported in part by the Fondecyt (Chile) Grant No.~1180344.
\section{Introduction}
\label{intro}
Arising from the arachnoid cap cells on the outer surface of the meninges, meningiomas are the second most common primary brain tumor after gliomas, and account for approximately one-third of all central nervous system tumors~\cite{ostrom2019cbtrus}. With the increasing use of neuroimaging for checkups and precautionary diagnostics, incidental meningiomas are found more often~\cite{spasic2016incidental}.
Magnetic resonance imaging (MRI), adopted as the first routine examination, represents the gold standard for diagnosis and planning of the optimal treatment strategy (i.e., surgery or conservative management)~\cite{goldbrunner2016eano,kunimatsu2016variants}. While several different MR sequences may be used for meningioma imaging, measurements of tumor diameters and volumes are done using the contrast-enhanced T1-weighted sequence.
Systematic and consistent segmentation of brain tumors is of utmost importance for accurate monitoring of growth and for guiding treatment decisions. With meningiomas being typically slow-growing tumors, detecting them at an early stage and systematically monitoring growth over time could improve clinical decision making and the patient's outcome~\cite{fountain2017volumetric}. Manual segmentation by radiologists, often in a slice-by-slice fashion, is too time-consuming to be part of the daily clinical routine. Tumor volume, and thus growth, is therefore usually assessed based on manual measurements of tumor diameters, resulting in considerable inter- and intra-rater variability~\cite{binaghi2016collection} and rough measures for growth evaluation~\cite{berntsen2020volumetric}. Automatic segmentation of pathology from MR images has been an active area of research for several decades but has made considerable progress with the recent advances in deep learning based methods~\cite{bauer2013survey,ueda2019technical}.
Nevertheless, the task of brain tumor segmentation remains challenging due to the large variability in appearance, shape, structure, and location~\cite{watts2014magnetic}. Similarly, problems might arise from the MRI volumes themselves whereby variability in resolution, intensity inhomogeneity~\cite{nyul2000new,tustison2010n4itk}, or varying intensity ranges among the same sequences and scanners can be noticed.
Gliomas, especially of low grade, are considered the most difficult brain tumors to segment in MRI since they often are diffuse, poorly contrasted, and with a tentacle-like structure. Conversely, typical meningiomas are sharply circumscribed with a strong contrast enhancement. However, smaller meningiomas may resemble other contrast enhancing structures, for example blood vessels (intensity, shape and size) particularly at the base of the brain, making them challenging to detect automatically.
In this study, we focus on the task of automatic meningioma segmentation using solely T1-weighted MRI volumes from both surgically treated patients and untreated patients followed at the outpatient clinic in order to create a method that is able to segment all tumor types and sizes.
\textbf{State-of-the-art:} As described in a recent review study~\cite{icsin2016review}, brain tumor segmentation methods can be classified into three categories based on the level of user interaction: manual, semi-automatic, and fully-automatic. For this study, we narrow the work to only fully-automatic methods specifically focused on deep learning methods. In the past, a large majority of studies in brain tumor segmentation have been carried out using the Multimodal Brain Tumor Image Segmentation (BRATS) challenge dataset, which only contains glioma images~\cite{menze2014multimodal}. The task of brain tumor segmentation can be approached in 2D where each axial image (slice) from the original 3D MRI volume is processed sequentially. Havaei et al. proposed a two-pathway convolutional neural network (CNN) architecture to combine local and global information, arguing that the prediction for a given pixel should be influenced by both the immediate local neighborhood and a larger context such as the overall position in the brain~\cite{havaei2017brain}. Using the BRATS dataset for their experiments, they also proposed using a combination of all available MRI modalities as input for their method. In Zhao et al.~\cite{zhao2018deep}, the authors proposed to train CNNs and recurrent neural networks using image patches and slices along the three different acquisition planes (i.e. axial, coronal, sagittal) and fuse the predictions using a voting-based strategy. Both methods have been benchmarked on the BRATS dataset and were able to reach up to 80-85\% in terms of Dice coefficient and sensitivity/specificity. A large number of other studies have been carried out using image or image patch-based techniques as an attempt to deal with large MRI volumes in an efficient way~\cite{pereira2016brain,dvorak2015structured,zikic2014segmentation}. However, methods based on features obtained from image patches or across planes generally achieve lower performance than methods using features extracted from the entire 3D volume directly or through a slabbing process (i.e., using a set of slices). Simple 3D CNN architectures~\cite{myronenko20183d,isensee2018no}, multi-scale approaches~\cite{kamnitsas2017efficient,xu2018multi}, and ensembling of multiple CNNs~\cite{feng2020brain} have been explored. While they achieve better segmentation performances, are more robust to hyper-parameters and generalize better, the 3D nature of MRI volumes still poses challenges with respect to memory and computation limitations even on high-end GPUs.\\
While the availability of the BRATS dataset has triggered a large amount of work on glioma segmentation, meningioma segmentation has been less studied resulting in a scarce body of work. More traditional machine learning methods (e.g., SVM and graph cut) have been used for multi-modal (T1, T2) and multi-class (core tumor and edema) segmentation~\cite{binaghi2016collection}. While the reported performances are quite promising, the validation studies have been carried out on a dataset of only 15 patients. More recently, Laukamp et al. used different 3D deep CNN architectures (e.g., DeepMedic, BioMedIA) on their own multi-modal dataset~\cite{laukamp2019fully,laukamp2020automated}. While reported results reached above 90\% Dice score, the validation group consisted of only 56 patients. In addition, they investigated the use of heavy preprocessing techniques such as atlas registration and skull-stripping in combination with resampling and normalization. In their study, Pereira et al. also mentioned the effectiveness of normalization and data augmentation for brain tumor segmentation~\cite{pereira2016brain}.
A common limitation in the meningioma segmentation studies is the relatively limited number of patients included, and the choice of a fixed test set instead of a more thorough cross-validation approach. In general, the global trend in CNN architectures leads to ever larger and deeper 3D networks, even more so when considering ensembling strategies. As a consequence, the models' training and inference is becoming extremely computationally intensive, prohibiting their use in clinical settings with limited time and access only to regular computers.\\
In this paper, our contributions are: (i) the study of a lightweight 3D architecture that is less computationally intensive to use, (ii) a set of validation studies based on the largest meningioma dataset to date (698 patients), and (iii) an investigation into the trade-offs between segmentation performances and training/inference speed to enable clinical use.
\section{Data}
\label{sec:dataset}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.85]{images/meningioma-illustrations.png}
\caption{Illustrations of the manually annotated meningiomas over the dataset (in red). Each row represents a different patient, and each column represents respectively the axial, coronal, and sagittal view.}
\label{fig:dataset-illu}
\end{figure}
For this study, we have used a dataset of 698 Gd-enhanced T1-weighted MRI volumes acquired on 1.5 or 3 Tesla scanners at one of the seven hospitals in the geographical catchment region of the Department of Neurosurgery at St. Olavs University hospital, Trondheim, Norway, between 2006 and 2015. All patients were 18 years or older with radiologically or histopathologically confirmed meningioma. Of those, 324 patients underwent surgery to remove the meningioma while the remaining 374 patients were followed at the outpatient clinic. Overall, MRI volume dimensions covered $[192; 512]\times[224; 512]\times[11; 290]$ voxels and the voxel sizes ranged between $[0.41; 1.05]\times[0.41; 1.05]\times[0.60; 7.00]\,\text{mm}^3$. All the meningiomas were manually delineated by an expert using 3D Slicer~\cite{pieper20043d}, and two examples are provided in Fig.~\ref{fig:dataset-illu}. Given the wide range in voxel sizes, especially in the z-dimension (slice thickness), we decided to further split our dataset in two. The first subset (DS1) consisted of the 600 high-quality MRIs with a slice thickness of at most 2.0\,mm, while the second subset (DS2) consisted of all 698 MRIs including the 98 images with a considerably higher slice thickness.
Overall, the meningiomas had volumes in the range $[0.07, 167.99]\,\text{ml}$.
We analyzed the differences between the groups of meningiomas. The volume of the surgically resected meningiomas was on average larger ($29.80\pm32.60\,\text{ml}$) compared to the untreated meningiomas followed at the outpatient clinic ($8.47\pm14.91\,\text{ml}$). A t-test showed a statistically significant association ($p<0.005$) between treatment strategy and tumor volume. Meningiomas of patients followed at the outpatient clinic are significantly smaller, making them more difficult to identify. Conversely, no statistically significant association ($p=0.55$) was found between treatment strategy and poor image resolution. There were 50 MRIs with poor resolution for patients followed at the outpatient clinic and 48 MRIs for patients who underwent surgery.
\section{Methods}
\label{sec:methods}
First, we explain in Section~\ref{subsec:archs} our rationale for selecting the architectures and deep learning frameworks. Then we introduce in Section~\ref{subsec:preproc} the different preprocessing steps that can be applied. Finally, we present the selected training strategies for the two architectures in Section~\ref{subsec:train-strats}.
\subsection{Architectures and frameworks}
\label{subsec:archs}
\begin{figure}[h]
\centering
\includegraphics[scale=0.60]{images/UNet-arch.png}
\caption{3D U-Net architecture used in this study. The number of layers (l) and number of filters for each layer can vary based on input sample resolution.}
\label{fig:unet-arch}
\end{figure}
In early studies using fully convolutional neural network architectures, the original 3D MRI volumes were required to be split into 2D patches or slices before being processed independently and sequentially due to insufficient GPU memory. While it presented an advantage with respect to memory use, the lack of sufficient global information about the 3D relationships between voxels was detrimental for the overall performance. The advances in GPU design and increased memory capacity enabled the research on 3D neural network architectures to become mainstream.
For the task of semantic segmentation, encoder-decoder architectures have been favored, especially since the emergence of the U-Net~\cite{ronneberger2015u}, followed by the 3D U-Net~\cite{cciccek20163d}. Many U-Net variants have been studied in 2D and 3D for medical image segmentation over the past years and this architecture can be considered a strong baseline~\cite{zhou2018unet++,alom2018recurrent,isensee2018nnu}. In this study, we have implemented an architecture close to the initial 3D U-Net, illustrated in Fig.~\ref{fig:unet-arch}. When working with 3D images, preprocessing is needed so that the model and its activations fit in GPU memory. Typical solutions are to either downsample the input volume, perform a sub-division into slabs that are sequentially processed, or reduce the batch size to 1, which can result in poor convergence. Training on mini-batches of size 2 to 32 has been shown to improve generalization performance~\cite{masters2018revisiting}.
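To make this concrete, the listing below sketches a minimal 3D U-Net builder in \texttt{tf.keras}. It is an illustrative sketch rather than our exact implementation: only four levels are shown (the model used in this study has 7 levels with up to 256 filters, see Section~\ref{subsec:train-strats}), and the input slab shape, dropout value, and final activation are placeholder assumptions.
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters, dropout=0.1):
    x = layers.Conv3D(filters, 3, padding="same", activation="relu")(x)
    x = layers.SpatialDropout3D(dropout)(x)
    return layers.Conv3D(filters, 3, padding="same", activation="relu")(x)

def unet3d(input_shape=(256, 192, 32, 1), filters=(8, 16, 32, 64)):
    inputs = layers.Input(shape=input_shape)
    x, skips = inputs, []
    for f in filters[:-1]:                    # encoder: conv blocks + pooling
        x = conv_block(x, f)
        skips.append(x)
        x = layers.MaxPooling3D(2)(x)
    x = conv_block(x, filters[-1])            # bottleneck
    for f, skip in zip(reversed(filters[:-1]), reversed(skips)):
        x = layers.Conv3DTranspose(f, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skip])   # skip connection
        x = conv_block(x, f)
    outputs = layers.Conv3D(1, 1, activation="sigmoid")(x)  # tumor probability
    return tf.keras.Model(inputs, outputs)
\end{verbatim}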
\begin{figure}[ht]
\centering
\includegraphics[scale=0.70]{images/PLS-arch2.png}
\caption{PLS-Net architecture used in this study, kept identical to the original paper~\cite{lee2019efficient}.}
\label{fig:pls-arch}
\end{figure}
To take full advantage of the high-resolution MRI volumes as input, multi-scale encoder-decoder architectures have been proposed. Initially designed for the segmentation of lung lobes in CT volumes, the PLS-Net architecture is based on three insights: efficiency, multi-scale feature representation, and high-resolution 3D input/output~\cite{lee2019efficient}. The core components are (i) depthwise separable convolutions making the model lightweight and computationally efficient, (ii) dilated residual dense blocks to capture wide-range and multi-scale context features, and (iii) an input reinforcement scheme to maintain spatial information after downsampling layers. We have implemented the architecture as described in the original paper, and an illustration is provided in Fig.~\ref{fig:pls-arch}.
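As an illustration of the first component, a depthwise separable 3D convolution can be sketched in \texttt{PyTorch} as follows. This is only a minimal sketch of the basic building block, with an assumed normalization and activation placement; it does not reproduce the full dilated residual dense blocks of PLS-Net.
\begin{verbatim}
import torch.nn as nn

class DepthwiseSeparableConv3d(nn.Module):
    # Depthwise 3D convolution (one filter per input channel) followed by
    # a 1x1x1 pointwise convolution mixing the channels.
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1):
        super().__init__()
        pad = dilation * (kernel_size // 2)
        self.depthwise = nn.Conv3d(in_ch, in_ch, kernel_size, padding=pad,
                                   dilation=dilation, groups=in_ch, bias=False)
        self.pointwise = nn.Conv3d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm3d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))
\end{verbatim}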
To address the issue of limited GPU memory, recent advances have been made to enable the use of mixed precision computation rather than full precision for training neural networks. Mixed precision consists of using full precision (i.e., float32) for some key layers (e.g., the loss layer) while reducing most of the other layers to half precision (i.e., float16). The training process therefore requires less memory, benefits from faster data transfer operations, and math-intensive and memory-limited operations are sped up. These benefits come at no accuracy expense compared to full precision training. Since not all combinations of deep learning frameworks and GPU architectures are fully compatible with mixed precision training, we chose to use \texttt{TensorFlow}~\cite{abadi2016tensorflow} for full precision training, and \texttt{PyTorch}~\cite{paszke2019pytorch} for mixed precision training.
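One possible way of setting up such a mixed precision training loop is sketched below with the NVIDIA Apex \texttt{amp} API; the model, loss function, and data loader names are placeholders rather than our actual implementation.
\begin{verbatim}
import torch
from apex import amp  # NVIDIA Apex

model = PLSNet().cuda()                    # placeholder model definition
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Opt level O2: FP16 training with FP32 batch norm and FP32 master weights.
model, optimizer = amp.initialize(model, optimizer, opt_level="O2")

for volumes, labels in train_loader:       # placeholder data loader
    optimizer.zero_grad()
    loss = dice_loss(model(volumes.cuda()), labels.cuda())
    with amp.scale_loss(loss, optimizer) as scaled_loss:
        scaled_loss.backward()             # loss scaling avoids FP16 underflow
    optimizer.step()
\end{verbatim}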
\subsection{Preprocessing}
\label{subsec:preproc}
In order to maximize and standardize the information given as input to the neural network, we propose a series of independent preprocessing steps for generating the training samples (a minimal code sketch of these steps is provided after the list):
\begin{itemize}
\item N4 bias correction using the \texttt{ANTs} implementation~\cite{avants2009advanced}.
\item Resampling to a uniform and isotropic spacing of 1\,mm using \texttt{NiBabel}, with a spline interpolation of order 1.
\item Cropping the volumes as tightly as possible around the patient's head by discarding the 20\% lowest intensity values (background noise) and identifying the largest remaining region. This is less restrictive and faster to perform than skull stripping.
\item Volume resizing to a specific shape dependent on the study/architecture using spline interpolation of order 1. When resizing based on an axial slice resolution, the new depth value is automatically inferred.
\item Finally, either normalization of the intensity to the range $[0, 1]$ (S) or zero-mean standardization (ZM).
\end{itemize}
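The listing below sketches how these steps could be chained with \texttt{NiBabel} and \texttt{SciPy}, omitting the N4 bias correction and assuming the fixed target shape used for PLS-Net; thresholds and interpolation orders follow the description above, while function and variable names are illustrative.
\begin{verbatim}
import numpy as np
import nibabel as nib
from scipy import ndimage

def preprocess(path, target_shape=(256, 320, 224), zero_mean=False):
    img = nib.load(path)
    data = img.get_fdata().astype(np.float32)

    # Resample to 1 mm isotropic spacing (order-1 spline interpolation).
    data = ndimage.zoom(data, zoom=img.header.get_zooms()[:3], order=1)

    # Crop around the head: discard the 20% lowest intensities and keep
    # the bounding box of the largest remaining connected component.
    labels, _ = ndimage.label(data > np.quantile(data, 0.20))
    largest = np.argmax(np.bincount(labels.ravel())[1:]) + 1
    coords = np.argwhere(labels == largest)
    lo, hi = coords.min(axis=0), coords.max(axis=0) + 1
    data = data[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]

    # Resize to the network input shape (order-1 spline again).
    data = ndimage.zoom(data, np.array(target_shape) / data.shape, order=1)

    # Intensity scaling to [0, 1] (S) or zero-mean standardization (ZM).
    if zero_mean:
        return (data - data.mean()) / (data.std() + 1e-8)
    return (data - data.min()) / (data.max() - data.min() + 1e-8)
\end{verbatim}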
\subsection{Training strategies}
\label{subsec:train-strats}
With the large range and inhomogeneous distribution of meningioma volumes, the baseline sampling strategy was to populate each fold with a similar volume distribution. We therefore split the meningiomas into three equally-populated bins and randomly sampled from these bins to generate the cross-validation folds.
All models were trained from scratch using the Adam optimizer with an initial learning rate of $10^{-3}$, the class-average Dice as loss function, and training was stopped after 30 consecutive epochs without validation loss improvement. All U-Net models were trained with a batch size of 8 using full precision in \texttt{TensorFlow} while all PLS-Net models were trained with batch size 4 using mixed precision with \texttt{PyTorch}.
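For illustration, the class-average Dice loss and the early stopping criterion can be expressed with \texttt{tf.keras} as in the sketch below, assuming one-hot encoded targets and placeholder \texttt{model} and dataset objects; the \texttt{PyTorch} models rely on an equivalent formulation.
\begin{verbatim}
import tensorflow as tf

def dice_loss(y_true, y_pred, eps=1e-5):
    # Soft Dice averaged over classes and batch; inputs of shape (B, X, Y, Z, C).
    axes = (1, 2, 3)
    inter = tf.reduce_sum(y_true * y_pred, axis=axes)
    union = tf.reduce_sum(y_true, axis=axes) + tf.reduce_sum(y_pred, axis=axes)
    return 1.0 - tf.reduce_mean((2.0 * inter + eps) / (union + eps))

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss=dice_loss)
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=30)
model.fit(train_data, validation_data=val_data, callbacks=[early_stop])
\end{verbatim}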
We used a classical data augmentation approach where the following transforms were applied to each input sample with a probability of 50\%: horizontal and vertical flipping, random rotation in the range $[-20^{\circ}, 20^{\circ}]$, translation up to 10\% of the axis dimension, zoom between $[80, 120]\%$, and perspective transform with a scale within $[0.0, 0.1]$. We selected two sets of augmentation methods: a minimalist approach with only flipping, rotation, and translation (Augm1); and an extended approach with all the above-mentioned transformations (Augm2).
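The two augmentation sets could be expressed with the \texttt{Imgaug} library roughly as follows; the exact operator choices and the slice-wise application to image/label pairs are assumptions of this sketch.
\begin{verbatim}
import imgaug.augmenters as iaa

# Augm1: flipping, rotation, and translation only.
augm1 = iaa.Sequential([
    iaa.Fliplr(0.5),                                  # horizontal flip, p=0.5
    iaa.Flipud(0.5),                                  # vertical flip, p=0.5
    iaa.Sometimes(0.5, iaa.Affine(rotate=(-20, 20))),
    iaa.Sometimes(0.5, iaa.Affine(translate_percent=(-0.1, 0.1))),
])

# Augm2: extended set with zoom and perspective transform.
augm2 = iaa.Sequential([
    augm1,
    iaa.Sometimes(0.5, iaa.Affine(scale=(0.8, 1.2))),
    iaa.Sometimes(0.5, iaa.PerspectiveTransform(scale=(0.0, 0.1))),
])
\end{verbatim}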
\subsubsection{U-Net}
\begin{table}[h]
\centering
\caption{Overview of the different training strategies for the U-Net architecture.}
\begin{tabular}{r|r|r|r|r|r}
Configuration & Stride & Neg/Pos ratio & Norm. & Augm. & Resolution\tabularnewline
\hline
Cfg1 & 8 & None & S & Augm1 & $256\times192\times[167,420]$\tabularnewline
Cfg2 & 8 & 2.0 & S & Augm1 & $256\times192\times[167,420]$\tabularnewline
Cfg3 & 16 & 2.0 & S & Augm1 & $256\times192\times[167,420]$\tabularnewline
Cfg4 & 8 & 1.0 & S & Augm1 & $256\times192\times[167,420]$\tabularnewline
\end{tabular}
\label{tab:training-strats-unet}
\end{table}
As training strategy, we specifically investigated the impact of different sampling patterns in addition to the augmentation approach described above. Each patient's MRI volume was split into a collection of training samples (slabs) made of 32 slices along the z-axis. The stride parameter determined the step between two consecutive slabs (i.e., an overlap of 24 slices for a stride of 8). Since some meningiomas are tiny, we also investigated balancing the ratio of negative to positive samples for each MRI volume. Random negative slabs were removed when the ratio was exceeded, but no positive slab was ever excluded, so that the mapping from a volume to its training slabs is deliberately not one-to-one. All MRI volumes were resized to an axial resolution of $256\times192$ pixels, with the third dimension adjusted dynamically following Eq.~\ref{eq:automatic-z}. For the architecture design, we used 7 layers with $[8, 16, 32, 64, 128, 256, 256]$ as number of filters and all spatial dropouts were set to a value of 0.1. In Table~\ref{tab:training-strats-unet}, we summarize the different configurations.
\begin{equation}
new\_dim_{z} = dim_{z} \times \frac{new\_dim_{y}}{dim_{y}}
\label{eq:automatic-z}
\end{equation}
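The slab extraction and the balancing of negative against positive samples can be summarized by the sketch below; names are illustrative, and the dynamic depth from Eq.~\ref{eq:automatic-z} is assumed to have been applied beforehand.
\begin{verbatim}
import numpy as np

def extract_slabs(volume, labels, slab_size=32, stride=8, neg_pos_ratio=2.0):
    # Split a preprocessed volume into overlapping slabs along the z-axis,
    # then drop random negative (tumor-free) slabs above the target ratio.
    positives, negatives = [], []
    for start in range(0, volume.shape[2] - slab_size + 1, stride):
        slab = volume[:, :, start:start + slab_size]
        mask = labels[:, :, start:start + slab_size]
        (positives if mask.any() else negatives).append((slab, mask))
    if neg_pos_ratio is not None:
        np.random.shuffle(negatives)
        negatives = negatives[:int(neg_pos_ratio * max(len(positives), 1))]
    return positives + negatives
\end{verbatim}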
\subsubsection{PLS-Net}
We decided to use the exact same architecture, number of layers, and kernel sizes as presented in the original paper. The single design choice was to keep a fixed input size of $256\times320\times224\,\text{voxels}$ while we focused on different preprocessing and data augmentation aspects, summarized in Table~\ref{tab:training-strats-pls}.
\begin{table}[h]
\centering
\caption{Overview of the different training strategies for the PLS-Net architecture.}
\begin{tabular}{r|r|r|r|r|r|r}
Configuration & Data & Bias Cor. & Norm. & Augm1 & Augm2 & Resolution\tabularnewline
\hline
Cfg1 & DS1 & False & S & True & False & $256\times320\times224$\tabularnewline
Cfg2 & DS1 & True & S & True & False & $256\times320\times224$\tabularnewline
Cfg3 & DS1 & False & ZM & True & False & $256\times320\times224$\tabularnewline
Cfg4 & DS1 & False & S & True & True & $256\times320\times224$\tabularnewline
Cfg5 & DS2 & False & S & True & True & $256\times320\times224$\tabularnewline
\end{tabular}
\label{tab:training-strats-pls}
\end{table}
\section{Validation studies}
\label{sec:validation}
In this work, we aim to optimize segmentation performances while finding the best trade-off with respect to processing speed. Unless specified otherwise, we followed a 5-fold cross-validation approach whereby at every iteration three folds were used for training, one for validation, and one for testing.
\textit{Measurements:}
For quantifying the performances, we used: (i) the Dice score, (ii) the F1-score, and (iii) the training/inference speed.
The Dice score, reported in \%, is used to assess the quality of the voxel-wise segmentation by computing how well a detection overlaps with the corresponding manual ground truth.
The F1-score, reported in \%, assesses the combination of recall and precision performances.
Finally, we report the training speed (in $s.epoch^{-1}$), the inference speed IS (in ms), and the test speed TS (in $s.patient^{-1}$), i.e., the total time to process one MRI.
\textit{Metrics:}
For the segmentation task, the Dice score is computed between the ground truth and a binary representation of the probability map generated by a trained model. The binary representation is computed for ten different equally-spaced probability thresholds (PT) in the range $[0, 1]$. For the detection task, a similar range of probability thresholds is used to generate the binary results. A second threshold value (DT), taken in the list $[0, 0.25, 0.50, 0.75]$, is used to decide at the patient level whether the meningioma has been sufficiently segmented to be considered a true positive, and is discarded otherwise (the Dice averaged over true positives only is reported as Dice-TP). In case of multifocal meningiomas, a connected components approach coupled with a pairing strategy was employed to compute the recall and precision values.
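For reference, the per-patient evaluation described above can be sketched as follows in the single-tumor case; the threshold grid and the detection criterion (a minimum Dice) are assumptions of this sketch, and the connected-component pairing used for multifocal meningiomas is omitted.
\begin{verbatim}
import numpy as np

def dice(pred, truth, eps=1e-8):
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

def evaluate_patient(prob_map, truth, detection_threshold=0.25):
    # Binarize the probability map at ten equally-spaced thresholds (PT)
    # and flag the patient as a true positive when the Dice exceeds DT.
    results = []
    for pt in np.linspace(0.1, 1.0, 10):
        d = dice(prob_map >= pt, truth)
        results.append({"PT": pt, "Dice": d, "TP": d > detection_threshold})
    return results
\end{verbatim}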
Pooled estimates, computed from each fold's results, are reported for each measurement~\cite{killeen2005alternative}. Measurements are either reported with mean, mean and standard deviation, or mean and respective percentile confidence interval. If not stated otherwise, a significance level of 5 \% was used when calculating confidence intervals.
\textit{(i) Optimization study:} Performances using the different training configurations reported in Table~\ref{tab:training-strats-unet} and Table~\ref{tab:training-strats-pls} are studied. For U-Net, results are reported after training on the first fold only, given the time required to train one model.
\textit{(ii) Speed versus segmentation accuracy:}
This study aims at assessing which of the two architectures achieves the best overall performances considering all measurements, using the best configurations identified in the previous study.
\textit{(iii) Impact of dataset quality and variability:} Models trained with the best PLS-Net configuration were used for inference on the 98 low-resolution MRI volumes and the results were averaged. A direct comparison is done over the high-resolution and low-resolution images with models trained including the whole dataset (PLS-Cfg5).
\textit{(iv) Ground-truth quality:} In order to assess the quality of the manual annotations, all performed by a single expert, we performed an inter-annotator variability study. A random subset of 30 MRI volumes, 20 high-resolution and 10 low-resolution, was given for annotation to a second expert and differences were computed using the Dice score.
\section{Results}
\label{sec:results}
\subsection{Implementation details}
Results were obtained using an HP desktop: Intel Xeon @3.70 GHz, 62.5 GiB of RAM, NVIDIA Quadro P5000 (16GB), and a regular hard-drive. Implementation was done in Python using \texttt{TensorFlow} v1.13.1, and \texttt{PyTorch lightning} v0.7.3 with \texttt{PyTorch} back-end v1.3. For further training speed-up, all PLS-Net models were trained using the benchmark flag and Amp optimization level 2 (FP16 training with FP32 batch normalization and FP32 master weights). For data augmentation, all the methods used came from the Imgaug Python library~\cite{imgaug}.
\subsection{Optimization study}
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.8]{images/SegPaper-OverallResults.png}
\caption{Illustrations of segmentation results using the PLS-cfg4 model, each row representing a different patient. The ground truth for the meningioma is shown in blue whereas the automatic segmentation is shown in red.}
\label{fig:results-visual_examples}
\end{figure}
Results obtained for the U-Net configurations are reported in Table~\ref{tab:results-unet-opti}, while the ones for the PLS-Net architecture are reported in Table~\ref{tab:results-pls-opti}. With an optimized distribution of positive and negative training samples, U-Net performs similarly to PLS-Net regarding Dice performances. The highest precision is achieved with the first U-Net configuration, which was to be expected since all negative samples are kept. However, the best F1-score obtained with the U-Net architecture is far worse than with the PLS-Net architecture. The slabbing strategy generates more false positives since, with only local image features available, the network struggles to differentiate a small meningioma from other anatomical structures such as blood vessels. The different training configurations of PLS-Net provide comparable results across the board. The average Dice-TP reaches up to 87\%, indicating a good segmentation quality when a meningioma is detected. Considering the F1-score as the most important measurement for a relevant diagnostic use in clinical practice, UNet-cfg2 and PLS-cfg4 are the two best configurations.
\begin{table}[h]
\caption{Segmentation performances obtained with the different U-Net architecture configurations, over the first fold only.}
\centering
\adjustbox{max width=\textwidth}{
\begin{tabular}{c|c|c|c|c|c|c|c}
Cfg & PT & DT & Dice & Dice-TP & F1 & Recall & Precision \tabularnewline
\hline
Cfg1 & 0.6 & 0.5 & $63.49\pm36.65$ & $84.40\pm11.27$ & $77.13$ & $73.55$ & $81.06$\tabularnewline
Cfg2 & 0.6 & 0.25 & $67.75\pm33.99$ & $82.56\pm14.13$ & \boldmath{$77.78$} & $81.82$ & $77.76$\tabularnewline
Cfg3 & 0.4 & 0.25 & $69.27\pm32.76$ & $82.17\pm14.63$ & $76.27$ & $84.29$ & $69.63$ \tabularnewline
Cfg4 & 0.6 & 0.5 & \boldmath{$71.37\pm29.78$} & $84.19\pm10.13$ & $76.34$ & $81.82$ & $71.55$ \tabularnewline
\end{tabular}
}
\label{tab:results-unet-opti}
\end{table}
\begin{table}[h]
\caption{Segmentation performances obtained with the different PLS-Net architecture configurations, averaged across all folds.}
\adjustbox{max width=\textwidth}{
\begin{tabular}{c|c|c|c|c|c|c|c}
Cfg & PT & DT & Dice & Dice-TP & F1 & Recall & Precision \tabularnewline
\hline
Cfg1 & 0.5 & 0.5 & $73.40\pm31.34$ & $86.62\pm10.54$ & $88.01\pm1.39$ & $83.05\pm1.68$ & $93.68\pm2.47$ \tabularnewline
Cfg2 & 0.6 & 0.5 & $72.16\pm32.55$ & \boldmath{$87.19\pm9.40$} & $86.23\pm2.98$ & $80.74\pm2.43$ & $92.54\pm3.89$\tabularnewline
Cfg3 & 0.6 & 0.5 & \boldmath{$73.23\pm30.38$} & $86.01\pm10.47$ & $86.82\pm0.6$ & $82.90\pm2.1$ & $91.31\pm3.26$ \tabularnewline
Cfg4 & 0.5 & 0.25 & $71.69\pm33.41$ & $85.79\pm12.51$ & \boldmath{$88.34\pm1.86$} & \boldmath{$83.22\pm2.96$} & \boldmath{$94.19\pm1.06$} \tabularnewline
\end{tabular}
}
\label{tab:results-pls-opti}
\end{table}
Some examples, obtained with the PLS-cfg4 model, are displayed in Fig.~\ref{fig:results-visual_examples} where the ground truth is indicated in blue and the obtained segmentation is indicated in red. For the patients featured in the first two rows, the segmentation is almost perfect, while for the third patient the whole extent of the meningioma is not fully segmented. In the last case, the meningioma is both relatively small and located right behind the eye socket, and as such has not been detected at all.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.50]{images/boxplot_dice_over_tumor_volume_equal_bins_overall.png}~
\includegraphics[scale=0.47]{images/dice_scatter_over_tumor_volume_equal_bins_overall.png}\\\includegraphics[scale=0.50]{images/boxplot_dice_over_tumor_volume_equal_bins_hospital.png}~
\includegraphics[scale=0.47]{images/dice_scatter_over_tumor_volume_equal_bins_hospital.png}\\
\includegraphics[scale=0.50]{images/boxplot_dice_over_tumor_volume_equal_bins_polyclinic.png}~\includegraphics[scale=0.47]{images/dice_scatter_over_tumor_volume_equal_bins_polyclinic.png}
\caption{Overall (top), hospital (middle), and outpatient clinic (bottom) results for PLS-cfg4. The first column shows Dice performances over tumor volumes, the second column shows Dice, Dice-TP and recall performances over tumor volumes. Ten equally populated bins, based on tumor volumes, have been used to group the meningioma performances.}
\label{fig:results-pls-volumeprov}
\end{figure}
When considering the origin of the data (i.e., hospital or outpatient clinic) reported in Table~\ref{tab:results-pls-origin}, performance appears to be better for surgically treated tumors, reaching an F1-score higher than 90\% with the PLS-Net architecture. Conversely, more meningiomas from the outpatient clinic are left undetected and thus unsegmented, explaining the lower recall and average Dice score. Since meningiomas from outpatient clinic patients are statistically smaller than surgically treated meningiomas, we further analyzed the relationship between the treatment strategy and tumor volume, as shown in Fig.~\ref{fig:results-pls-volumeprov}. Small meningiomas ($<2\,\text{ml}$) are challenging to segment and are either missed or poorly segmented, with at best a 50\% Dice score. For larger meningiomas ($>3\,\text{ml}$) the Dice score reaches 90\% whether surgically resected or followed at the outpatient clinic. Segmentation performance is heavily impacted by meningioma volume, and probably also by the larger variability in tumor location within the brain for those smaller meningiomas. This is a clear indication that designing better sampling strategies for training is of utmost importance to obtain a robust and generic model.
\begin{table}[ht]
\caption{Best segmentation performances based on the treatment strategy: surgery or follow-up at the outpatient clinic.}
\adjustbox{max width=\textwidth}{
\begin{tabular}{c|c|c|c|c|c|c|}
Cfg & Origin & Dice & Dice-TP & Recall & Precision & F1 \tabularnewline
\hline
\multirow{2}{*}{UNet-cfg2} & Hospital & $82.47\pm22.57$ & $88.66\pm8.11$ & $91.42\pm3.91$ & $78.44\pm3.86$ & $84.36\pm2.95$ \tabularnewline
& Clinic & $64.85\pm32.16$ & $81.79\pm10.49$ & $75.06\pm9.45$ & $76.79\pm5.36$ & $75.79\pm7.17$ \tabularnewline
\hline
\multirow{2}{*}{PLS-cfg4} & Hospital & $81.29\pm26.08$ & $88.81\pm9.82$ & $91.41\pm1.22$ & $93.61\pm2.22$ & $92.49\pm1.58$\tabularnewline
& Clinic & $63.44\pm36.50$ & $82.53\pm14.16$ & $75.82\pm6.51$ & $95.05\pm1.12$ & $84.17\pm3.73$ \tabularnewline
\end{tabular}
}
\label{tab:results-pls-origin}
\end{table}
\subsection{Speed versus segmentation accuracy}
On average, convergence is achieved much faster with the PLS-Net architecture ($<50\,hours$) than with U-Net ($130\,hours$) when letting the models train until full convergence. Competitive models with a validation loss below 0.2 can even be generated in a shorter time using the PLS-Net architecture ($<20\,hours$). A summary of the training time and convergence speed is presented in Table~\ref{tab:results-training-speed}.
The inference speed is fast with both architectures, making them both usable in practice. On average with the U-Net architecture, the inference speed is $3.58\pm0.22\texttt{s}$ for a total processing time of $21.48\pm7.89\texttt{s}$. With this architecture, the MRI volume is split into non-overlapping slabs that are processed sequentially. For the PLS-Net architecture, the inference speed is lowered to $950\pm14\texttt{ms}$ for a total processing time of $14.15\pm4.5\texttt{s}$. The small number of trainable parameters in PLS-Net (0.251\,M) also makes it usable on low-end computers simply equipped with a CPU. In comparison, our U-Net architecture is made of 14.75\,M trainable parameters, which is considerably higher. When running purely on CPU, the total processing time of a new MRI with the PLS-Net architecture increases to $135\pm10\texttt{s}$.
\begin{table}[h]
\caption{Training time results for the different U-Net and PLS-Net configurations. Results are averaged across the five folds when possible.}
\centering
\adjustbox{max width=\textwidth}{
\begin{tabular}{r|r|r|r|r|}
Cfg & \# samples & s.$epoch^{-1}$ & Best epoch & Train time (hours)\tabularnewline
\hline
UNet-Cfg1 & $19\,684$ & $5\,800$ & $79$ & $127.28$\tabularnewline
UNet-Cfg2 & $14\,617$ & $3\,990$ & $120\pm40$ & $132.78\pm44.18$ \tabularnewline
UNet-Cfg3 & $7\,359$ & $2\,120$ & $153$ & $90.1$ \tabularnewline
UNet-Cfg4 & $10\,321$ & $2\,860$ & $105$ & $83.42$\tabularnewline
PLS-Cfg1 & $600$ & $1\,920$ & $113\pm18$ & $60.27\pm9.56$ \tabularnewline
PLS-Cfg2 & $600$ & $1\,920$ & $106\pm27$ & $56.74\pm14.23$ \tabularnewline
PLS-Cfg3 & $600$ & $1\,920$ & $86\pm29$ & $45.97\pm15.47$ \tabularnewline
PLS-Cfg4 & $600$ & $1\,920$ & $91\pm23$ & $48.75\pm12.48$ \tabularnewline
PLS-Cfg5 & $698$ & $2\,220$ & $91\pm31$ & $56.00\pm19.04$
\end{tabular}
}
\label{tab:results-training-speed}
\end{table}
\subsection{Impact of input resolution}
Segmentation performances for the high- and low-resolution images are summarized in Table~\ref{tab:results-pls-slice-thickness}. Only minor differences across all performance metrics can be seen whether the low-resolution images are used during the training process or left aside. Selecting only the high-resolution images for training, coupled with advanced data augmentation methods, allows the trained models to be robust to extreme image stretching when resizing an MRI with, for example, an original slice thickness of 5\,mm. Figure~\ref{fig:results_poorres-visual_examples} provides segmentation results on low-resolution images.
\begin{table}[ht]
\caption{Performance analysis when including the images with a slice thickness larger than 2\,mm.}
\adjustbox{max width=\textwidth}{
\begin{tabular}{c|c|c|c|c|c|c|}
Resolution & Cfg & Dice & Dice-TP & Recall & Precision & F1 \tabularnewline
\hline
\multirow{2}{*}{High} & PLS-cfg4 & $71.69\pm33.41$ & $85.79\pm12.51$ & $83.22\pm2.96$ & $94.19\pm1.06$ & $88.34\pm1.86$\tabularnewline
& PLS-cfg5 & $73.19\pm32.31$ & $87.28\pm9.44$ & $82.65\pm4.25$ & $95.25\pm1.63$ & $88.43\pm2.36$ \tabularnewline
\hline
\multirow{2}{*}{Low} & PLS-cfg4 & $61.09\pm33.15$ & $78.88\pm12.71$ & $74.69\pm5.59$ & $85.39\pm3.04$ & $79.61\pm3.72$ \tabularnewline
& PLS-cfg5 & $62.70\pm33.34$ & $81.34\pm12.55$ & $73.70\pm8.65$ & $86.57\pm6.89$ & $79.39\pm6.81$ \tabularnewline
\end{tabular}
}
\label{tab:results-pls-slice-thickness}
\end{table}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.8]{images/Illustration-PoorRes.png}
\caption{Illustrations of segmentation results on images with a poor resolution using the PLS-cfg4 model, each row representing a different patient. The ground truth for the meningioma is shown in blue whereas the automatic segmentation is shown in red.}
\label{fig:results_poorres-visual_examples}
\end{figure}
\subsection{Ground truth quality}
Between the two experts, the segmentations match with an average Dice score of 89.1 [86.3, 92.0], indicating a strong similarity. The agreement was higher for the high-resolution scans, with a Dice of 92.0 [89.8, 94.2], compared to 83.4 [75.8, 91.0] for the low-resolution ones. However, as the confidence intervals overlap, there is no significant difference between the annotators with respect to image resolution. No difference was found in the models' performance on the two sets of annotations either. This indicates that the initial ground truth is sufficient for training good segmentation models.
The ground truths were originally not created in a purely manual fashion but rather with the assistance of semi-automatic methods from 3D Slicer for time-efficiency purposes. As a result, the presence of some noise in the ground truth has been identified, as illustrated in Fig.~\ref{fig:study_GT_quality}. While such noise is not detrimental, since our models appear to be robust and do not generate small artifacts, cleaning the ground truth should lead to a slightly better model and increase the obtained Dice by a few percent.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.6]{images/SegPaper-GroundTruthQuality-ZoomedIn.png}
\caption{Illustration of noise in the ground truth from the use of 3D Slicer, indicated by a red arrow. Original image (top left), prediction with PLS-Cfg4 (top right), ground truth used for training (bottom left), fully manual ground truth from a second expert (bottom right).}
\label{fig:study_GT_quality}
\end{figure}
\section{Discussion}
\label{sec:discussion}
The dataset used in this study is larger than any previously described dataset in a meningioma segmentation paper. MRI investigations have been performed using multiple scanners in seven different hospitals, reducing potential biases and preventing overfitting issues. In addition, the smaller meningiomas from the outpatient clinic exhibit a wider range of sizes and locations in the brain, which enables our models to be more robust. The identification of slight noise in the ground truth, due to the use of external software to speed up annotation, is an inconvenience that should be addressed. Noise aside, the manual annotations from both experts matched almost perfectly, confirming the overall quality of our dataset.\\
Overall, the PLS-Net architecture provides the best performances with no additional efforts or adjustments. Smarter training schemes had to be implemented for the U-Net architecture; they provide a clear speed-up with no impact on the segmentation performances. Nevertheless, due to the slabbing strategy and the lack of global information, reaching the same performances as with PLS-Net seems unachievable. In a local slab, a part of a meningioma might appear similar to other hyperintense structures, making the network struggle. In the future, and with the increasing access to medical data, such training schemes would need to become even more complex with no clear benefit, until GPUs are large enough to fit high-resolution MRI volumes in combination with deep architectures. While using a batch size of 4 is not detrimental for reaching an optimum when training the PLS-Net architecture, batch normalization layers are not optimally put to use, which can be one explanation for the difference between the validation loss and the actual results. This discrepancy can also originate from not computing the Dice score on exactly the same images. The difference in resolution, spacing, and extent between a preprocessed MRI volume and its original version can lead to Dice score variations even when computed using the exact same ground truth and detection.
Trained models are also robust to the ground-truth noise since predictions do not exhibit the same patterns of small fragmentation. Nevertheless, cleaning the ground-truth is imperative to generate better models since the loss function is based on the Dice score computation.\\
Considering the trade-off between model complexity, memory consumption, and training/inference speed, the PLS-Net architecture is clearly superior for the task of single-class segmentation. While meningiomas come in a large variety of shapes and sizes, their location in the brain is an important cue. Such information can only be captured by processing the entire MRI volume at once and is partly lost when using a slabbing scheme. In addition, and given that only one class is to be segmented, the huge number of trainable parameters in U-Net is superfluous. The limitations of the PLS-Net would become apparent if multiple classes were to be segmented.
Compared with the U-Net architecture, the use of the lightweight PLS-Net architecture proves to be better both in terms of segmentation performances and in terms of training and inference speed. Dividing the training time tenfold is especially relevant with the increase in data collection and the need to re-train models on a regular basis. Different training schemes and data augmentation techniques can also be investigated in a relatively short amount of time.\\
On top of the choice of neural network architecture, using mixed precision during training played an essential role in drastically reducing training time. Given the reduced memory footprint, higher-resolution input samples or larger batch sizes can be investigated.
Having identified that small meningiomas are often missed, increasing the input resolution should help the network find smaller objects. The downside would be a longer training time, and potential convergence difficulties if the batch size has to be lowered all the way to 1. Increasing the variability or ratio of small meningiomas in the training set might also steer the network in the correct direction. Lastly, hard-mining could be a potential alternative after careful analysis of the training samples. In any case, using mixed precision by default seems to be a promising strategy in many applications.\\
Compared to previous studies, similar results are obtained using only one MR sequence and without heavy preprocessing (i.e., bias correction, registration to MNI space, and skull stripping). With a lightweight framework and a shallow multi-scale model, a new patient's MRI can be processed in at most 2 minutes on CPU, making it attractive for routine clinical use.\\
In this study, directly benchmarking our models' performances against state-of-the-art results was not possible due to the lack of a publicly available meningioma dataset. Most previous brain tumor segmentation studies have used the BRATS challenge dataset, which contains only glioma patients. The few studies focusing on meningioma segmentation used considerably smaller datasets, with at most 56 patients in the test set, and these datasets are not openly accessible either. Nevertheless, the size of our dataset is on par with the BRATS challenge dataset, which contains 542 patients overall and a fixed test set of 191 patients as of 2018. In addition, we report our results after performing 5-fold cross-validation, which provides better insight into the model's reproducibility, robustness, and capacity to generalize, compared to a single dataset split into training, testing, and validation sets.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we investigated the task of meningioma segmentation in T1-weighted MRI volumes. We considered two different fully convolutional neural network architectures: U-Net and PLS-Net. The lightweight PLS-Net architecture enables high segmentation performances while keeping training and processing speeds very competitive. Using multi-scale architectures and leveraging the whole MRI volume at once mostly impacts the F1-score, which is beneficial for automatic diagnosis purposes. Smarter data balancing and training schemes have also been shown to be necessary in order to improve performances. In future work, improved multi-scale architectures specifically tailored for such tasks should be explored, but improvements could also come from better data analysis and clustering.
\subsection*{Disclosures}
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.\\
Informed consent was obtained from all individual participants included in the study.
\subsection* {Acknowledgments}
This work was funded by the Norwegian National Advisory Unit for Ultrasound and Image-Guided Therapy (usigt.org).
\bibliographystyle{unsrt}
\section{Introduction}
\section{Beyond Individual Values}
\input{content/table}
Viewers are most efficient at computing the ratio between two visualized values when those values are encoded by their position on a common axis, as in a dot plot. The dominance of position encodings for this task is followed by a ranked list of other encodings including 2D area and orientation, with intensity typically listed as the least precise encoding~\cite{cleveland1984graphical}. There is an implicit assumption that the ranking derived from this two-value ratio judgment represents an atomic unit for visualization, so that the additional precision conveyed by position should transfer to better perceptual performance in more complex tasks.
We challenge this assumption. While the precision of ratio judgments is one operationalization of perceptual performance, there are many others. For example, one fundamental analytics task is seeing the 'big picture' in a dataset. Figure~\ref{fig:examplesFig} provides an example, showing a $6\times12$ grid of data values plotted as a dot plot (position), bar graph (position + length), bubble chart (area), line graphs (juxtaposed and superposed), and heatmap. If precision of value extraction is all that matters, then the dot plot should be the preferred design. In contrast, it is clear to our eyes that the position-encoded dot plot is the \textit{least} effective visualization for seeing many potential 'big pictures' of the data. The bar graph to its right is far more useful for this task, likely because it adds a redundant encoding of length (or more likely, area \cite{yuan2019perceptual}). The line graph below is useful because it adds an emergent encoding of the local deltas between points via the orientation of the lines. Our favorite 'big picture view' is actually the heatmap in the corner, despite its status as the bottom of the barrel for precision of extracting ratios between individual values.
Table \ref{tab:examples} depicts a list of other likely perceptual tasks, inspired by work on low-level task taxonomies~\cite{amar2004best} and recent papers that examine visualization through the lens of perceptual psychology~\cite{albers2014task,correll2012comparing,jardine2019perceptual,yang2018correlation}. We do not claim that this list is exhaustive, representative, or even correct. It is instead intended to show that ratio judgment tasks are only a small subset of likely perceptual tasks. The second column of Table \ref{tab:examples} grounds each abstract perceptual task in a concrete example.
In the first two rows of the table, we list two perceptual operations that can be computed over 2 points: 'metric' relations between metric numbers (a ratio) and 'ordinal' relations between pairs of metric values (is value A higher than B?), say the first two points of the first row of Figure~\ref{fig:examplesFig}. While data visualization research has focused almost exclusively on the former, the latter is arguably as important for real-world tasks. While we occasionally note that today is five degrees hotter than yesterday, we more typically note that it is hotter than yesterday. COVID-19 infection rates have increased. Profits are lower, and we are over budget.
The next set of rows lists tasks that might unfold when a viewer is presented with many 2-point pairs of values, such as the first two rows of one of the visualizations in Figure~\ref{fig:examplesFig}. These tasks include metric comparisons, such as finding which pair has the largest ratio, estimating an average ratio, or clustering the sizes of ratios. They also include ordinal comparisons of metric value pairs, such as finding a unique relation ($A\leq B$ among $A\geq B$ pairs), or estimating which relation type is more frequent. We know of only two studies that have examined these important perceptual tasks~\cite{nothelfer2019measures,srinivasan2018s}, but because both rely primarily on position encodings, they cannot confirm whether the Cleveland \& McGill position ranking holds for these alternative tasks.
The next several rows show perceptual tasks that are not constrained over pairs of points, and instead could be computed over an entire set or subset of N values. These include identifying a single value with a given property (e.g., min, max, outlier), or summarizing a set of values by a single number (e.g., mean, variance, clusters). Recent work on the perception of these ``aggregate'' or ``ensemble'' tasks~\cite{albers2014task,szafir2016four} provides evidence that many encodings that are imprecise for individual values (such as color) have performance benefits for these types of tasks over positional encodings like line charts.
The row labeled 'Shape, trend' refers to the need to holistically judge a single series, or to compare two series, in an open-ended manner. We suspect that this task, like stepping back to see the 'big picture' in the data, will not always be best supported by position encodings. The viewer might search for anything from basic patterns (rising, flat), to idiosyncratic motifs and shapes in the data~\cite{gogolou2018comparing}. Visual interfaces for times series search~\cite{lee2019you} have had to find ways for users to express shapes (and the properties of those shapes that they find important~\cite{correll2016semantics}) in fluid and dynamic ways, as the rigid definition of specific individual values may not capture the visual features of interest to the user.
The row labeled 'filter' refers to a visual subset operation based on the data values, e.g., 'pick out all of the high values'. We do not have a full understanding of the filtering operation in visualization contexts, although existing work evaluates the detection of individual ``oddball'' outliers~\cite{haroz2012capacity} or filtering across nominal categories (for instance, picking a particular class of points out of a scatterplot~\cite{gleicher2013perception}). However, perceptually motivated designs for time series data have often used color as a form of perceptual ``boosting''~\cite{oelke2011visual} to highlight anomalous items~\cite{albers2011sequence,correll2015layercake}.
Finally, recent work has begun to uncover the perceptual tasks that underlie more complex comparisons, such as judging the correlation between two sets of paired values~\cite{rensink2010perception,harrison2014ranking,yang2018correlation}. This work suggests that, instead of judging high-level properties like correlation \textit{per se}, a viewer relies on a more concrete proxy, such as the aspect ratio of the bounding box surrounding the points~\cite{yang2018correlation}. These hypothesized proxies may prove to be complex and may take many years to unpack -- but when they are better understood, we cannot predict whether they would be best supported by position encodings.
\section{Conclusion}
Much of the empirical and theoretical basis for visualization work comes from studies examining the efficiency of visual channels in extracting information, and using these results to generate a ranking of these channels~\cite{cleveland1984graphical}. These rankings power many of our design guidelines and constraints~\cite{moritz2018formalizing}, are ubiquitous in our textbooks~\cite{munzner2014visualization, ware2019information}, and are instantiated in the logic of many of our automated or semi-automated visualization design tools~\cite{mackinlay1986automating,mackinlay2007show}. And yet, these rankings do not seem to capture important components of how people use, interpret, and learn from visualizations. We should be expansive in how we analyze, conceptualize, and teach visualization. Otherwise, we risk a situation where academia focuses on the narrow, scatterplot-like section of the vast, more interesting world of visualization as a whole.
Of course we are not proposing to throw the baby out with the bathwater. The ranking of visual variables has had enormous impact on visualization research and practice, informing design decisions for tool development and providing pedagogical value in numerous guidelines, textbooks, and courses. Our intent is to raise awareness about the ways an excessive and narrow focus on channel ranking may be acting as a detrimental limitation to our field in terms of: (1) understanding actual data visualization practice; (2) development of data visualization tools and techniques; (3) methodologies for data visualization design and evaluation; and (4) pedagogy of data visualization. The question is: how can we rectify and expand the theory behind the ranking of visual variables? When does it work? When does it not work? And, maybe even more importantly, what else do we need in its place or in addition to it? From this initial analysis of the various insufficiencies we have identified, it seems clear there is much to do in this area. It is our hope that this work sparks interesting conversations and potentially leads other practitioners and designers to develop alternative (or more refined) practices, conceptualizations, and epistemologies~\cite{meyer2019criteria} for visualization.
\acknowledgments{
This work was supported by NSF awards IIS-1901485 and IIS-1900941.
}
\section{Metaphors and Congruence}
The choice of encoding channels is constrained by more than perceptual precision, with one major constraint being the congruency of its metaphors~\cite{mackinlay1986automating}. A channel must be consistent with (and convey) the concepts that it encodes -- for example, conveying quantity with area -- serving as a type of ``affordance'' for the data to show how it can be used (e.g., a push plate vs. a pull-bar for a door)~\cite{norman2014things}.
One example of this conceptual congruence is a study that asked participants to describe simple bar and line graphs of the same two data points~\cite{zacks1999bars}. Bar charts lead to descriptions in terms of discrete comparisons whereas lines lead to descriptions of trends, indicating that different graphical solutions can suggest different types of interpretations. Interestingly, in these examples the channels used to convey quantitative information were identical (i.e., vertical position) and the only element that changes between the graphs is the affordance of the connected line implying continuous data, and the visual connection between points in the line graph.
Another example comes from one author's experience with an exercise assigned in his information visualization class. The assignment asked the students to compare a set of countries in terms of amount of money donated or received, as recorded in the Aid Data data set\footnote{https://www.aiddata.org/} (which records international aid disbursements globally). Students produced two main types of solutions: (1) a scatter plot with dots representing the countries and axes representing incoming and outgoing amounts; (2) a pair of aligned bar charts showing for each country the amount donated and the amount received. While both solutions employ position as the main channel to encode the donated/received amounts, the graphs invite the reader to make completely different sets of judgments. More precisely, while the scatter plot affords detection of correlations and groupings, the pair of aligned bar charts invites the reader to compare individual countries across two metrics (see Fig.~\ref{fig:examplesFig} for a similar design comparison). These examples show that the ranking of visual channels (based on precision) is not \textit{sufficient} in the visualization design space. In other words, knowing that a channel affords more precision does not provide sufficient guidance for visualization design.
The role of metaphors and expressiveness can be observed at multiple levels of granularity. At the level of individual channels there are several examples of how channels may express or fail to express certain types of information. For example, color hue can't express ordinal or quantitative information because the human eye does not assign an order to colors that vary exclusively in hue. Similarly, colors have strong semantic associations, therefore appropriate associations between concepts and colors may improve readability and comprehension~\cite{lin2013selecting}. Area size is a moderately precise channel when conveying quantity, but it cannot easily show negative values because larger sizes are firmly associated with larger (positive) quantities.
Different semantic associations can also be created by using different symbols or graphical marks. A classic example is the representation of part-to-whole relationships and the question of whether pie charts should be considered effective solutions for the representation of such data~\cite{skau2016arcs}. When the designer wants to explicitly convey that a given value is part of a whole, specific metaphors work better than others. For example, when comparing a pie chart, a stacked bar, and a group of bars, it is evident that only the pie chart and the stacked bar explicitly convey the part-to-whole metaphor. Following the reasoning behind the ranking of visual variables, the solution with separate bars (position encoding) should be preferred over the stacked bar (length encoding) or pie chart (angle and area encoding) because it provides a more precise representation. Other similar examples exist. For instance, bars on maps are rarely used, whereas circles are often preferred in their place. Line charts are preferred over bars when the goal is to convey a temporal trend. Icon arrays are preferred over aggregate values in risk estimation. All of these examples demonstrate that there is something more than the ranking of visual channels and that reasoning about visualization at the level of individual channels can be limited and potentially misleading.
Even a combination of accuracy and efficiency cannot fully characterize the effectiveness of data visualizations. One must also measure how easy it is to extract information out of them.
Two concepts developed in the literature on cognitive science seem to be of pertinence here. The first one is the ``congruence principle'' suggested by Tversky et al.~\cite{tversky2002animation}. The principle states that ``\textit{the content and format of the graphic should correspond to the content and format of the concepts to be conveyed}'' and it seems to apply perfectly to the type of concerns we discussed above. The second concept is ``cognitive fit,'' developed by Vessey. In the words of Vessey~\cite{vessey1991cognitive}: ``\textit{... performance on a task will be enhanced when there is a cognitive fit (match) between the information emphasized in the representation type and that required by the task type.}'' While the theory of cognitive fit has been developed originally to explain the difference between symbolic and graphical representations (i.e., tables vs graphs), there is no reason to believe the same logic can't be used to describe differences between alternate graphical representations. A good matching between the ``information emphasized in the representation type" and the information a reader is expected to extract seems to be a good guiding principle for data visualization.
\section{Rhetoric, Persuasion, and Memory}
A last category of objections to a world of only scatterplots is that many visualizations are unconcerned with accurate extraction of individual (or even aggregate) values. Charts are often designed to persuade, educate, and motivate. Designing for serendipitous discovery, educational impact, hedonic response, or changes in behavior is in some cases only tangentially connected with the precision of a particular visualization. Wang et al.~\cite{wang2019emotional} call for us to ``revis[e] the way we value visualizations'' on this basis, and Correll \& Gleicher~\cite{correll2014bad} point to a whole class of designs and design guidelines that seem counter-productive in terms of precision but that nonetheless result in benefits in terms of higher-level cognitive goals. In this section we briefly discuss some of these mismatches.
Hullman et al.~\cite{hullman2011benefitting} point to possible benefits for ``visual difficulties'' in charts: that is, by making the viewer do more work to decode the values, there is potentially an impact on the retention of those values. Rather than designing charts to be as precise as possible, for longer-term or higher-level tasks we may wish to slow the viewer down. Lupi's~\cite{lupi2017data} Data Humanism manifesto calls for visualizations that encourage viewers to ``spend time'' with the data, with examples of dense, multidimensional glyph-based visualizations that do not afford quick and precise extraction of values. Similarly, Bradley et al.~\cite{bradley2019approaching} call for a ``slow analytics'' movement that encourages ownership and retention of analytical tasks rather than precision.
Part of the pedagogical utility of charts is not merely conveying the information, but ensuring that the information is retained. One immediate downside to a world of only scatter and dot plots is that our charts would all look similar, and so unlikely to be differentiated much in memory. Borkin et al.~\cite{borkin2013makes,borkin2015beyond} find that charts with pictorial elements and other visual features of interest are more memorable than plain and otherwise unadorned charts. Kostelnick~\cite{kostelnick2008visual} recommends occasional deviations from minimalist design in the service of ``clarity'' which can include such factors as engaging the reader's attention. Many of the most impactful charts in visualization have had non-standard or otherwise less than precise forms (e.g., Fig~\ref{fig:minard}).
There may be benefits for imprecise visualizations for analysts as well, not just for passive viewers or learners. Often when designing a system we may have no idea of the form or category of insights present in our data. The serendipitous discovery of important features of the data-set may not be well-covered by existing design principles that are designed for the precision at intended or standard analytical tasks: lucky, chance, and stochastic exploration may be more important than reliably picking out values. Thudt et al.~\cite{thudt2012bohemian} discuss the challenges of designing for serendipitous information discovery, and suggest that standard designs may be ill-suited to the unconstrained and stochastic sort of exploration that can be necessary for making discoveries, whereas D{\"o}rk et al.~\cite{dork2011information} point to the challenges of designing for the wandering ``information flâneur.''
There are also potentially \textit{costs} to overly-precise visualizations. Kennedy et al.~\cite{kennedy2016work} claim that the ``clean layouts'' of minimalist visualizations can grant an imprimatur of authority and objectivity to data that may not match that standard. Likewise, Drucker~\cite{drucker2012humanistic} points to the ``seductive rhetorical force'' of visualizations to convince viewers that the data they contain is not merely a potentially flawed, biased, and uncertain view, but an objective truth about the world. This unwillingness to question charts due to a perception of their objectivity can override even strong political conventions or skepticism~\cite{peck2019data}. ``Messier'' designs (such as sketchy~\cite{wood2012sketchy} or uncertainty-conveying~\cite{song2018s} renderings) can introduce a willingness to critique or a greater appreciation for uncertainty not present in more precision-driven visualizations.
\section{Introduction}\label{sec:intro}
\subsection{Overview}\label{sec:over}
There are essentially two approaches to Quantum Field Theory (QFT) in the physics literature. In the first approach the quantum fields are (generalized) functions $\hat{\mathcal O}(t,x)$ on the space-time $\mathbb{R}^{d+1}$ ($d=3$ for the Standard Model of physics) taking values in operators acting in a Hilbert space ${\mathcal H}$ of physical states. Matrix elements of products of quantum fields
at different points $\langle \psi|\prod_{k=1}^N \hat{\mathcal O}_k(t_k,x_k)|\psi\rangle$ where $\psi\in {\mathcal H}$ (the ``vacuum'' state) are (generalized) functions on $\mathbb{R}^{N(d+1)}$. Physical principles (positivity of energy) imply that these matrix elements should have an analytic continuation to the {\it Euclidean} domain where $t_k\in i\mathbb{R}$ and be given there as correlation functions of {\it random fields} ${\mathcal O}(y)$ defined on $y\in \mathbb{R}^{n}$ where $n=d+1$.
In the second approach, based on the so-called path integral approach due to Feynman, these correlation functions are formally given as integrals over a space of generalized functions on $\mathbb{R}^{n}$, called fields, with the formal integration measure given explicitly as a Gibbs measure with potential a functional of the fields. The Euclidean formulation serves also as a setup for the theory of second order phase transitions in statistical mechanics systems where now $n\leq 3$. In this case one expects the correlation functions to possess an additional symmetry under the conformal transformations of $\mathbb{R}^{n}$ and the QFT is now a Conformal Field Theory (CFT).
In practice most of the information on QFT obtained by physicists has been perturbative, namely given in terms of a formal power series expansion in parameters perturbing a Gaussian measure (and pictorially described by Feynman diagrams). In CFT however there is another, nonperturbative, approach going under the name of {\it Conformal bootstrap}. In this approach one postulates a set of special {\it primary} fields ${\mathcal O}_\alpha(y)$ (or operators ${\mathcal O}_\alpha(t,x)$ in the Hilbert space formulation) whose correlation functions transform as conformal tensors under the action of the conformal group. Furthermore one postulates a rule called the {\it operator product expansion} which allows one to expand the product of two primary fields inside a correlation function (or a product of two quantum fields in the Hilbert space formulation) as a sum running over a subset of primary fields called the {\it spectrum} with explicit coefficients depending on the three point correlation functions, the so called {\it structure constants} of the CFT (see Section \ref{dozzboot}). Hence the correlation functions of a CFT are determined in the bootstrap approach if one knows its spectrum and the structure constants.
In the case of two dimensional conformal field theories ($d=1$ or $n=2$ above) the conformal symmetry constrains the possible CFTs particularly strongly and Belavin, Polyakov and Zamolodchikov \cite{BPZ} (BPZ from now on) showed the power of the bootstrap hypothesis by producing explicit expressions for the correlation functions of a large family of CFTs of interest to statistical physics among which the CFT that is believed to coincide with the scaling limit of critical Ising model. In a nutshell, {BPZ} argued that
one could parametrize CFTs by a unique parameter $c$ called the central charge and they found the correlation functions for certain rational values of $c$ where the number of primary fields is finite (the minimal models).
During the last decade the bootstrap approach has also led to spectacular predictions of critical exponents in the three dimensional case \cite{PoRyVi}. The reader may consult \cite{Gaw} for some background on 2d CFTs.
Giving a rigorous mathematical meaning to these two approaches and relating them has been a huge challenge for mathematicians. On the axiomatic level the transition from the operator theory on Hilbert space to the Euclidean probabilistic theory was understood early on and for the converse the crucial concept of {\it reflection positivity} was isolated \cite{OS1,OS2}. Reflection positivity is a property of the probability law underlying the random fields that allows for a construction of a canonical Hilbert space where operators representing the symmetries of the theory act. Reflection positivity is one of the crucial inputs in the present paper.
However, on the more concrete level of explicit examples of QFTs, mathematical progress has been slower.
The (Euclidean) path integral approach was addressed by constructive field theory in dimensions $d+1\leq 4$ using probabilistic methods but detailed information has been restricted to the cases that are small perturbations of a Gaussian measure. In particular the 2d CFTs have been beyond this approach so far. A different probabilistic approach to conformal invariance has been developed during the past twenty years following the introduction by Schramm \cite{Sch} of random curves called Schramm-Loewner evolution (SLE). This approach, centred around the geometric description of critical models of statistical physics, has led to exact statements on the interfaces of percolation or the critical Ising model; following the introduction of SLE and the work of Smirnov, probabilists also managed to justify and construct the CFT correlation functions of the scaling limit of the 2d Ising model
\cite{CS,CHI15} (see also the review \cite{Pel} for the construction of CFT correlations via SLE observables).
Making a mathematical theory of the BPZ approach triggered in the 80's and 90's intense research in the field of
Vertex Operator Algebras (VOA for short) introduced by Borcherds \cite{borcherds} and Frenkel-Lepowsky-Meurman \cite{FLM89} (see also the book \cite{Hu} and the article \cite{HuKo} for more recent developments on this formalism). Even if the theory of VOA has been quite successful in rigorously formalizing numerous CFTs, the approach suffers from certain limitations at the moment. First, correlations are defined as formal power series (convergence issues are not tackled in the first place and are often difficult); second, many fundamental CFTs have still not been formalized within this approach, among which the CFTs with uncountable collections of primary fields and in particular Liouville conformal field theory (LCFT in short) studied in this paper. Moreover, the theory of VOA, which is based on axiomatically implementing the operator product expansion point of view of physics, does not elucidate the link to the path integral approach or to the models of statistical physics at critical temperature (if any).
In their seminal work, BPZ were in fact motivated by the quest to compute the correlation functions in LCFT, which had been introduced a few years before by Polyakov under the form of a path integral in his approach to bosonic string theory \cite{Pol}. Although BPZ failed to carry out the bootstrap program for LCFT\footnote{See Polyakov's citation \cite{Pol1} at the beginning of this paper.}, this was successfully implemented later in the physics literature by Dorn, Otto, Zamolodchikov and Zamolodchikov \cite{DoOt,ZaZa}. Since then, LCFT has appeared in the physics literature in a wide range of fields including random planar maps (discrete quantum gravity, see the review \cite{kos}) and the supersymmetric Yang-Mills theory (via the AGT correspondence \cite{AGT}). Recently, there has been a large effort in probability theory to make sense of Polyakov's path integral formulation of LCFT within the framework of random conformal geometry and the scaling limit of random planar maps: see \cite{LeGall,Mier,MS1,MS2,MS3,DDDF,DFGPS,MG} for the construction of a random metric space describing (at least at the conjectural level) the limit of random planar maps and \cite{DMS,NS} for exact results on their link with LCFT\footnote{It is beyond the scope of this introduction to state and comment on all the exciting results that have been obtained recently in this flourishing field of probability theory.}. In particular, the last three authors of the present paper in collaboration with F. David \cite{DKRV} have constructed
the path integral formulation of LCFT on the Riemann sphere using probability theory. This was extended to higher genus surfaces in \cite{DRV,GRV}. In this paper, we will be concerned with LCFT on the Riemann sphere.
LCFT depends on two parameters $\gamma \in (0,2)$ and $\mu>0$\footnote{The case $\mu=0$ is different and corresponds to Gaussian Free Field theory; for the study of a related model, see Kang-Makarov \cite{KM}.}.
In this paper we prove that for all $\gamma \in (0,2)$ and $\mu>0$ the probabilistic construction of LCFT satisfies the hypothesis of the bootstrap approach envisioned in \cite{BPZ}. In particular we determine the spectrum of LCFT and prove the bootstrap formula for the 4-point function in terms of the structure constants that were identified by the last three authors in \cite{dozz}.
LCFT for $\gamma\in (0,2)$ is a highly nontrivial CFT with an uncountable family of primary fields and a nontrivial OPE
and we believe the proof of conformal bootstrap in this setting provides the first nontrivial case where a mathematical justification for this beautiful idea from physics has been achieved. Let us emphasize that determining the spectrum and the structure constants of LCFT is the cornerstone of the deep link between LCFT and representation theory via exact formulae for correlation functions. Indeed they lead to exact formulae for $n$-point correlation functions on the sphere and pave the way towards understanding bootstrap formulae for higher genus surfaces \cite{GKRV}.
In a nutshell, our proof uses reflection positivity to construct a representation of the semigroup of
dilations in $\mathbb{C}$
on an explicit Hilbert space associated to LCFT. Its generator,
called the \emph{Hamiltonian} of LCFT, is a self-adjoint unbounded operator. It has the form of a Schr\"odinger operator acting in the $L^2$-space of an infinite dimensional Gaussian measure, with a nontrivial potential which is a positive function for $\gamma \in (0,\sqrt{2})$ and a measure for $\gamma \in [\sqrt{2},2)$.
We perform the spectral analysis of this operator to determine the spectrum of LCFT\footnote{The importance of understanding the spectral analysis of the Hamiltonian of LCFT was stressed by Teschner in \cite{Tesc1}, which was an inspiration for us.} and show that the bootstrap formula for the 4-point correlation functions can be seen as a Plancherel formula with respect to its spectral decomposition. We obtain exact expressions for the generalized eigenfunctions of the Hamiltonian by deriving conformal Ward identities for LCFT correlation functions which reflect the underlying symmetry algebra of LCFT (i.e. the Virasoro algebra).
\vskip 3mm
\subsection{Probabilistic approach to Liouville CFT} Let us start with the physicists' formulation of LCFT. It is a theory of a random (generalized) function $\phi:\mathbb{C}\to \mathbb{R}$ called the Liouville field. One is interested in averages of functionals $F$ of $\phi$ formally given by the path integral
\begin{equation}\label{phydef}
\langle F \rangle_{\gamma,\mu} := \int F(\phi)e^{- S_L(\phi) } \text{\rm D} \phi,
\end{equation}
where $S_L$ is the Liouville action functional
$$S_L(\phi):= \frac{1}{\pi}\int_{\mathbb{C}}\big( |\partial_z \phi (z) |^2 + \pi \mu e^{\gamma \phi(z) } \big)\, \dd z$$
and where $ \dd z$ denotes the Lebesgue measure on $\mathbb{C}$. It
depends on two parameters $\gamma \in (0,2)$ and $\mu>0$ (called cosmological constant). The notation $ \text{\rm D} \phi$ refers to a formal ``Lebesgue measure" on the space of functions $\phi: \mathbb{C} \to \mathbb{R} $ obeying the asymptotic $$\phi(z) \underset{|z| \to \infty}{\sim} -2 Q \ln |z|$$ with $Q=\frac{2}{\gamma}+\frac{\gamma}{2}$. This asymptotic is formally required by conformal invariance whereby the field $\phi$ should be thought to be defined on the Riemann sphere $\hat \mathbb{C}= \mathbb{C} \cup \lbrace \infty \rbrace$.
The physically interesting expectations in LCFT are the n-point correlation functions
\begin{equation}\label{phyvertex}
\langle \prod_{i=1}^n V_{\alpha_i}(z_i)\rangle_{\gamma, \mu}
\end{equation}
of the exponentials of the Liouville field $V_\alpha(z)=e^{\alpha\phi(z)}$ called ``vertex operators" in physics.
In \eqref{phyvertex} the points $z_i \in \mathbb{C}$ are distinct and $\alpha_i \in \mathbb{C}$.
The recent work \cite{DKRV} gave a rigorous mathematical meaning to the correlation functions \eqref{phyvertex} via probability theory as we now describe. As usual the quadratic part of the action functional can be interpreted in terms of a Gaussian Free Field (GFF).
We consider the GFF $X$ on $ \mathbb{C}$ with the following covariance kernel
\begin{equation}\label{chatcov}
\mathds{E} [ X(z)X(z')] =\ln\frac{1}{|z-z'|}+\ln|z|_++\ln|z'|_+:=G(z,z')
\end{equation}
with $|x|_+=\max(|x|,1)$.
This GFF can be defined as a random generalized function on a suitable probability space $( \Omega, \Sigma, \P)$ (with expectation $\mathds{E}[.]$): see Section \ref{sub:gff}. As is readily checked from the covariance \eqref{chatcov}, $ X(z)\stackrel{law}= X(1/z)$, so that the GFF is naturally defined on the Riemann sphere $\hat\mathbb{C}=\mathbb{C}\cup\{\infty\}$. Furthermore, it defines $\P$ almost surely an element of the space of distributions ${\mathcal D}'(\hat\mathbb{C})$.
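For the reader's convenience, here is the short verification of the invariance $X(z)\stackrel{law}= X(1/z)$ at the level of the covariance: for distinct $z,z'\in\mathbb{C}\setminus\{0\}$, using $|1/z-1/z'|=\frac{|z-z'|}{|z|\,|z'|}$ and the elementary identity $\ln|1/z|_+=\ln|z|_+-\ln|z|$, one gets
\begin{align*}
G(1/z,1/z')&=\ln\frac{|z|\,|z'|}{|z-z'|}+\ln|1/z|_++\ln|1/z'|_+\\
&=\ln\frac{1}{|z-z'|}+\ln|z|_++\ln|z'|_+=G(z,z').
\end{align*}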
The second ingredient needed for making sense of the path integral is the following Gaussian multiplicative chaos measure (GMC, originally introduced by Kahane \cite{cf:Kah})
\begin{equation}\label{GMCintro}
M_\gamma (\dd z):= \underset{\epsilon \to 0} {\lim} \; \; e^{\gamma X_\epsilon(z)-\frac{\gamma^2}{2} \mathds{E}[X_\epsilon(z)^2]} \frac{\dd z}{|z|_+^4}
\end{equation}
where $X_\epsilon= X \ast \theta_\epsilon$ is the mollification of $X$ with an approximation $(\theta_\epsilon)_{\epsilon>0}$ of the Dirac mass $\delta_0$; indeed, one can show that the limit \eqref{GMCintro} exists in probability in the space of Radon measures on $\hat\mathbb{C}$ and that the limit does not depend on the mollifier $\theta_\varepsilon$: see \cite{RoV, review, Ber} for example. The condition $\gamma \in (0,2)$ stems from the fact that the random measure $M_\gamma$ is different from zero if and only if $\gamma \in (0,2)$.
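To give an idea of the normalization in \eqref{GMCintro}, note that for a fixed smooth mollifier the variance of the regularized field blows up logarithmically, uniformly on compact sets of $\mathbb{C}$,
$$\mathds{E}[X_\epsilon(z)^2]=\ln\frac{1}{\epsilon}+O(1),$$
so that, up to bounded factors, the Wick normalization in \eqref{GMCintro} amounts to multiplying $e^{\gamma X_\epsilon(z)}$ by $\epsilon^{\gamma^2/2}$; the condition $\gamma\in(0,2)$ is precisely the subcritical phase of GMC, for which this renormalization produces a nontrivial limit.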
With these definitions the rigorous definition of the Liouville field $\phi$ is
\begin{align}\label{liouvillefield}
\phi(z)= c+X(z)-2Q \ln |z|_+
\end{align}
and the expectation \eqref{phydef} for $F$ continuous and non negative on ${\mathcal D}'(\hat\mathbb{C})$ is defined as
\footnote{In the recent paper \cite{dozz}, the authors used a convention where the RHS is multiplied by $2$.}
\begin{equation}\label{FL1}
\langle F(\phi) \rangle_{\gamma,\mu}
= \: \int_\mathbb{R} e^{ -2Qc}\mathds{E}[ F(c+X-2Q \ln |.|_+)
e^{ -\mu e^{\gamma c}M_\gamma(\mathbb{C})}]\,\dd c.
\end{equation}
The variable $c\in\mathbb{R}$
stems from the fact that in the path integral \eqref{phydef} one wants to include also the constant functions on $\mathbb{C}$ which are not captured by the GFF. Indeed, it is needed to
ensure conformal invariance of LCFT \cite{DKRV}. We remark also that the expectation $\langle\cdot\rangle_{\gamma,\mu}$ is {\it not} a probability measure as $\langle 1\rangle_{\gamma,\mu}=\infty$ \cite{DKRV}.
Following \cite{DKRV}, the $n$-point correlations \eqref{phyvertex} can be defined for \emph{real} valued $\alpha_i$ via the following limit
\begin{equation}\label{deflimintro}
\langle \prod_{i=1}^n V_{\alpha_i}(z_i)\rangle_{\gamma, \mu}:= \underset{\epsilon \to 0} {\lim} \: \langle \prod_{i=1}^n V_{\alpha_i,\epsilon}(z_i)\rangle_{\gamma, \mu}
\end{equation}
where $z_1,\dots,z_n\in \mathbb{C}$ are distinct,
\begin{equation}\label{Vregul}
V_{\alpha, \epsilon}(z)= |z|_+^{-4\Delta_\alpha}e^{\alpha c}e^{\alpha X_\epsilon (z)-\frac{\alpha^2}{2}\mathds{E} [X_\epsilon (z)^2]}
\end{equation}
and $\Delta_\alpha$ is called the \emph{conformal weight} of $V_\alpha$
\begin{align}\label{deltaalphadef}
\Delta_\alpha=\frac{\alpha}{2}(Q-\frac{\alpha}{2}), \ \ \ \alpha\in\mathbb{C}.
\end{align}
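A small computation that will be used later on: on the line $\alpha=Q+iP$ with $P\in\mathbb{R}$, the conformal weight is real, namely
$$\Delta_{Q+iP}=\frac{Q+iP}{2}\Big(Q-\frac{Q+iP}{2}\Big)=\frac{Q+iP}{2}\cdot\frac{Q-iP}{2}=\frac{Q^2+P^2}{4},$$
an identity behind the relation $\frac{1}{2}(Q^2+P^2)=2\Delta_{Q+iP}$ appearing in the spectral analysis of the Liouville Hamiltonian below.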
The limit \eqref{deflimintro} exists and is nontrivial if and only if the following bounds hold
\begin{equation}\label{Seibergintro}
\sum_{i=1}^n \alpha_i >2Q, \quad \quad \quad \alpha_i <Q, \; \; \forall i=1,\dots,n \quad \quad \quad \quad ({\bf Seiberg \: bounds}).
\end{equation}
One of the main results of \cite{DKRV} is that the limit \eqref{deflimintro} admits the following representation in terms of the moments of GMC
\begin{equation}\label{probarepresentation}
\langle \prod_{i=1}^n V_{\alpha_i}(z_i)\rangle_{\gamma, \mu}= \gamma^{-1} \left ( \prod_{1 \leq j<j' \leq n} \frac{1}{|z_j-z_{j'}|^{\alpha_j \alpha_{j'}}} \right ) \mu^{-s} \Gamma(s) \mathds{E}[ Z^{-s} ]
\end{equation}
where $s=\frac{\sum_{i=1}^n \alpha_i-2Q}{\gamma}$, $\Gamma$ is the standard Gamma function and (recall that $|x|_+=\max(|x|,1)$)
\begin{equation*}
Z= \int_{\mathbb{C}} \left ( \prod_{i=1}^n \frac{|x|_+^{\gamma \alpha_i}}{|x- z_i|^{\gamma \alpha_i}} \right ) M_\gamma (\dd x).
\end{equation*}
We stress that the formula \eqref{probarepresentation} is valid for correlations with $n \geq 3$, which can be seen at the level of the Seiberg bounds \eqref{Seibergintro}\footnote{In fact, one can extend the probabilistic construction \eqref{probarepresentation} a bit beyond the Seiberg bounds but the extended bounds also imply $n \geq 3$. We will not discuss these extended bounds in this paper.}.
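Indeed, combining the two Seiberg bounds gives
$$2Q<\sum_{i=1}^n \alpha_i<nQ,$$
which forces $n\geq 3$ since $Q>0$.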
Also it was proved in \cite[Th 3.5]{DKRV} that these correlation functions
are {\it conformally covariant}. More precisely, if $z_1, \cdots, z_n$ are $n$ distinct points in $\mathbb{C}$ then for a M\"obius map $\psi(z)= \frac{az+b}{cz+d}$ (with $a,b,c,d \in \mathbb{C}$ and $ad-bc=1$)
\begin{equation}\label{KPZformula}
\langle \prod_{i=1}^n V_{\alpha_i}(\psi(z_i)) \rangle_{\gamma, \mu}= \prod_{i=1}^n |\psi'(z_i)|^{-2 \Delta_{\alpha_i}} \langle \prod_{i=1}^n V_{\alpha_i}(z_i) \rangle_{\gamma, \mu}.
\end{equation}
Because of relation \eqref{KPZformula}, the vertex operators are \emph{primary fields} in the language of CFT. The M\"obius covariance implies in particular that the three point functions are determined up to a constant, called the {\it structure constant}, which we write as
\begin{align}\label{struconst}
\langle V_{\alpha_1}(0) V_{\alpha_2}(1)V_{\alpha_3}(\infty) \rangle_{\gamma, \mu}:=\lim_{|u|\to \infty} |u|^{4 \Delta_{\alpha_3}} \langle V_{\alpha_1}(0) V_{\alpha_2}(1)V_{\alpha_3}(u) \rangle_{\gamma, \mu}.
\end{align}
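For the reader's convenience, let us spell out the standard way in which M\"obius covariance constrains the three point function: applying \eqref{KPZformula}, one finds for distinct $z_1,z_2,z_3\in\mathbb{C}$
$$\langle V_{\alpha_1}(z_1) V_{\alpha_2}(z_2)V_{\alpha_3}(z_3) \rangle_{\gamma, \mu}= C\,|z_1-z_2|^{2(\Delta_{\alpha_3}-\Delta_{\alpha_1}-\Delta_{\alpha_2})}|z_2-z_3|^{2(\Delta_{\alpha_1}-\Delta_{\alpha_2}-\Delta_{\alpha_3})}|z_1-z_3|^{2(\Delta_{\alpha_2}-\Delta_{\alpha_1}-\Delta_{\alpha_3})}$$
for some constant $C=C(\alpha_1,\alpha_2,\alpha_3)$ independent of the points; specializing $z_1=0$, $z_2=1$, $z_3=u\to\infty$ identifies $C$ with the structure constant \eqref{struconst}.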
Similarly the four point function can be reduced to a function of one complex variable defined by
\begin{align}\label{4pointinfty}
\langle V_{\alpha_1}(0) V_{\alpha_2}(z) V_{\alpha_3}(1) V_{\alpha_4}(\infty) \rangle_{\gamma,\mu} :=& \lim_{|u|\to \infty} |u|^{4 \Delta_{\alpha_4}}\langle V_{\alpha_1}(0) V_{\alpha_2}(z) V_{\alpha_3}(1) V_{\alpha_4}(u) \rangle_{\gamma,\mu}.
\end{align}
Let us now turn to the conformal bootstrap approach to LCFT.
\subsection{DOZZ formula and Conformal Bootstrap.} \label{dozzboot}
In theoretical physics conformal field theory is a quantum field theory with conformal group symmetry. In particular one then postulates the existence of primary fields whose correlation functions transform covariantly under conformal maps (M\"obius transformations in the two dimensional case). The
basic physical axiom of conformal field theory, apart from this covariance, is the {\it operator product expansion} (OPE for short). Denoting the primary fields again by $V_\alpha(x)$ for $x\in\mathbb{R}^n$,
in very loose terms and cutting many corners, the OPE is the identity
\begin{align}\label{OPE}
V_\alpha(x)V_{\alpha'}(x')=\sum_{\beta\in{\mathcal S}}C^{\beta}_{\alpha\alpha'}(x,x',\nabla_x)V_\beta(x)
\end{align}
where ${\mathcal S}$ labels a special set of primary fields called the {\it spectrum} of the CFT and the $C^{\beta}_{\alpha\alpha'}$ are differential operators {\it completely determined } by the conformal weights of the fields $V_\alpha, V_{\alpha'},V_\beta$ and linear in the structure constants $C_{\alpha\alpha'\beta}$ (defined in general analogously to \eqref{struconst}). The identity \eqref{OPE} is assumed to hold once inserted in the correlation functions. Obviously a repeated application of the OPE allows one to express an $n$-point function of the CFT as a sum of products of the structure constants. Hence to ``solve a CFT" one needs to find its structure constants and spectrum. To find these, the following approach where the 4-point function\footnote{In $d>2$ the four point function is a function of two complex variables.} \eqref{4pointinfty}
plays a fundamental role has been extremely fruitful. Applying the OPE \eqref{OPE} in \eqref{4pointinfty} to $\alpha_1$ and $\alpha_2$ leads to a quadratic expression in the structure constants. Applying the OPE instead to $\alpha_2$ and $\alpha_3$ yields another quadratic expression and equating the two produces a quadratic equation for the structure constants. Varying $\alpha_1,\dots,\alpha_4$ results in a set of quadratic equations labeled by 4-tuples of $\alpha_i\in{\mathcal S}$ (for LCFT see \eqref{crossingsymmetry1}).
These 4-point bootstrap equations pose strong constraints on the spectrum and the structure constants and have been used to great effect e.g. in the case of the 3-dimensional Ising model \cite{Ry1,Ry2}. In two dimensions their study in the case when one of the fields $V_{\alpha_i}$ is a so-called degenerate field led in \cite{BPZ} to the discovery of the {\it minimal models} (2d Ising model among them) and their spectra and structure constants.
The motivation for the BPZ paper \cite{BPZ} was to solve the bootstrap equations for LCFT but in this they were unsuccessful \cite{Pol1}. The spectrum of LCFT was conjectured in \cite{ct, bct, gn} to be {\it continuous} ${\mathcal S}=Q+i\mathbb{R}_+$ and an explicit formula (see Appendix \ref{dozz}) for the structure constants, the DOZZ formula, was postulated by Dorn, Otto, Zamolodchikov and Zamolodchikov \cite{DoOt,ZaZa}. A derivation based on the degenerate 4-point bootstrap equations was subsequently given by Teschner \cite{Tesc} and further evidence for the formula was given in \cite{Tesc1}, \cite{Tesc2}. The DOZZ expression which we denote by $C_{\gamma,\mu}^{{\rm DOZZ}} (\alpha_1,\alpha_2,\alpha_3 )$ is analytic in the variables $\alpha_1,\alpha_2,\alpha_3 \in \mathbb{C}$ (with a countable number of poles) and one is led to expect that it coincides with the probabilistic expression \eqref{struconst} on the domain of validity of the probabilistic construction, i.e. for real $\alpha_1,\alpha_2,\alpha_3$ satisfying the Seiberg bounds \eqref{Seibergintro}. This is indeed the case: in a recent work \cite{dozz}, the last three authors proved that the probabilistically constructed $3$-point correlation functions satisfy the DOZZ formula:
\begin{equation*}
\langle V_{\alpha_1}(0) V_{\alpha_2}(1) V_{\alpha_3}(\infty) \rangle_{\gamma,\mu} =\frac{1}{2}C_{\gamma,\mu}^{{\rm DOZZ}} (\alpha_1,\alpha_2,\alpha_3 )\footnote{The $\frac{1}{2}$ factor here is a general global constant which can be absorbed in the definition of the probabilistic construction: see footnote associated to \eqref{FL1}.}
\end{equation*}
Given that the spectrum is ${\mathcal S}=Q+i\mathbb{R}_+$, the formula \eqref{OPE} (where the sum becomes an integral) leads formally to the following bootstrap conjecture for the 4 point correlation functions
\begin{align}
& \langle V_{\alpha_1}(0) V_{\alpha_2}(z) V_{\alpha_3}(1) V_{\alpha_4}(\infty)\rangle^{{\rm Boot}}_{\gamma,\mu} \nonumber \\
& = \frac{1}{8 \pi}\int_{0}^\infty C_{\gamma,\mu}^{{\rm DOZZ}}(\alpha_1, \alpha_2, Q- iP ) C_{\gamma,\mu}^{{\rm DOZZ}}(Q+ iP, \alpha_3, \alpha_4 ) |z|^{2(\Delta_{Q+iP}-\Delta_{\alpha_1}-\Delta_{\alpha_2})} | \mathcal{F}_P (z) |^2 {\rm d}P \label{4pointidentity}
\end{align}
where $\mathcal{F}_P$ are holomorphic functions in $z$ called (spherical) \emph{conformal blocks}. The conformal blocks are universal in the sense that they only depend on the conformal weights $\Delta_{\alpha_i}$ and $\Delta_{Q+iP}$
and the central charge $c_L=1+6Q^2$ of LCFT, i.e.
$ \mathcal{F}_P (z)= \mathcal{F}(c_L,\Delta_{\alpha_1},\Delta_{\alpha_2},\Delta_{\alpha_3},\Delta_{\alpha_4},\Delta_{Q+iP},z)$; see
Subsection \ref{sub:defblock} for the exact definition.
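Note in passing that, with $Q=\frac{2}{\gamma}+\frac{\gamma}{2}$, the central charge satisfies
$$c_L=1+6Q^2=1+6\Big(\frac{2}{\gamma}+\frac{\gamma}{2}\Big)^2>25 \qquad \text{for all } \gamma\in(0,2),$$
so the probabilistic range $\gamma\in(0,2)$ corresponds to central charges $c_L\in(25,\infty)$.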
The second hypothesis of the bootstrap approach, i.e. the fact that the OPE may be applied in the two ways explained above, goes under the name of {\it crossing symmetry}. More specifically, it is the conjecture that the following identity holds for real $z \in (0,1)$
\begin{align}
& \int_{\mathbb{R}^+} C_{\gamma,\mu}^{ \mathrm{DOZZ}}( \alpha_1,\alpha_2, Q-iP ) C_{\gamma,\mu}^{ \mathrm{DOZZ}}( \alpha_3,\alpha_4, Q+iP ) |z|^{2(\Delta_{Q+iP}-\Delta_{\alpha_1}-\Delta_{\alpha_2})} |\mathcal{F}_P (z)|^2 dP \nonumber \\
& = \int_{\mathbb{R}^+} C_{\gamma,\mu}^{ \mathrm{DOZZ}}( \alpha_3,\alpha_2, Q-iP ) C_{\gamma,\mu}^{ \mathrm{DOZZ}}( \alpha_1,\alpha_4, Q+iP ) |1-z|^{2(\Delta_{Q+iP}-\Delta_{\alpha_3}-\Delta_{\alpha_2})} |\tilde{\mathcal{F}}_P (1-z)|^2 dP \label{crossingsymmetry1}
\end{align}
where $\tilde{\mathcal{F}}_P $ is obtained from $\mathcal{F}_P$ by flipping the parameter $\alpha_1$ with $\alpha_3$. As explained above this identity is a very strong constraint in LCFT.
\subsection{Main result on conformal bootstrap}
In this paper, we justify rigorously the bootstrap approach to LCFT by constructing the spectral representation of LCFT; as an output, we prove the bootstrap and crossing formulas described in the previous subsection.
To understand our approach to the spectrum of LCFT it is useful to draw the analogy with the harmonic analysis on a Lie group $G$ and in particular the Plancherel identity on $L^2(G)$. For a compact $G$, $L^2(G)$ is decomposed into a direct sum of irreducible (highest weight) representations of $G$ whereas in the non-compact case also a continuous family of representations (a direct integral) appears. This decomposition is related to the spectral decomposition of a self-adjoint operator (the Laplacian) acting on the Hilbert space $L^2(G)$.
In 2d CFT, Osterwalder-Schrader's method of reflection positivity provides a canonical Hilbert space where the symmetry
algebra of 2d CFT, the \emph{Virasoro algebra}, acts. The role of the Laplacian is played by a self adjoint operator, a special element in the Virasoro algebra called the Hamiltonian of the CFT. In the case of ``compact CFTs" (examples being the minimal models of BPZ) the spectrum of this operator is discrete whereas in case of ``non-compact CFTs" the spectrum is continuous. Our proof is based on finding the spectral resolution of this operator using scattering theory and representation theory of the Virasoro algebra, leading to a Plancherel type identity as a rigorous version of the OPE.
The main result of this paper is the following theorem proving that the conformal bootstrap formula \eqref{4pointidentity} holds
for the probabilistic construction of the $4$-point function:
\begin{theorem}\label{bootstraptheoremintro}
Let $\gamma \in (0,2)$ and $\alpha_i<Q$ for all $i \in \llbracket 1,4\rrbracket$. Then the following identity holds for $\alpha_1+\alpha_2 >Q$ and $\alpha_3+\alpha_4>Q$
\begin{equation}
\label{bootstrapidentityintro}
\langle V_{\alpha_1}(0) V_{\alpha_2}(z) V_{\alpha_3}(1) V_{\alpha_4}(\infty) \rangle_{\gamma,\mu} =\langle V_{\alpha_1}(0) V_{\alpha_2}(z) V_{\alpha_3}(1) V_{\alpha_4}(\infty) \rangle^{{\rm Boot}}_{\gamma,\mu}.
\end{equation}
\end{theorem}
The conditions $\alpha_1+\alpha_2>Q$ and $\alpha_3+\alpha_4>Q$ are essential. Indeed, if $\alpha_1+\alpha_2<Q$ the analytic continuation of \eqref{4pointidentity} from $\alpha_i \in \mathcal{S}$ requires adding an extra term (cf. the discussion on the so-called discrete terms in \cite{ZaZa}).
The second main input to the bootstrap hypothesis, namely the crossing symmetry conjecture \eqref{crossingsymmetry1},
follows directly from our work since one has by conformal covariance \eqref{KPZformula} of the probabilistic construction of the correlations
\[
\langle V_{\alpha_1}(0) V_{\alpha_2}(z) V_{\alpha_3}(1) V_{\alpha_4}(\infty) \rangle_{\gamma,\mu}= \langle V_{\alpha_3}(0) V_{\alpha_2}(1-z) V_{\alpha_1}(1) V_{\alpha_4}(\infty) \rangle_{\gamma,\mu}
\]
whereby we get the following corollary:
\begin{corollary}
The bootstrap construction of LCFT satisfies crossing symmetry for $\gamma \in (0,2)$.
\end{corollary}
This result seems to be very hard to prove directly; let us mention, however, that Teschner has given strong arguments in \cite{Tesc1} in that direction.
\begin{remark}
We have stated the bootstrap conjecture as the statements \eqref{4pointidentity} and \eqref{crossingsymmetry1} since these are the crucial relations following from the OPE axiom \eqref{OPE} used by physicists to study CFTs. However, as explained above the OPE axiom \eqref{OPE} leads also to a recursive computation of the $n$-point correlation functions for all $n$.
Likewise, the approach in this paper can be extended to $n>4$ by an $(n-3)$-fold application of the Plancherel identity. Hence the spectral analysis of the LCFT Hamiltonian is the crucial result of this paper. In order to keep the length of this paper reasonable, we will discuss these generalisations elsewhere, in particular for the case of LCFT on the complex torus where the $n$-point formulae are mathematically quite appealing \cite{GKRV}.
\end{remark}
\subsection{Conformal blocks and relations with the AGT conjecture.}
Let us mention that it is not at all obvious that the bootstrap definition of the four point correlation function, i.e. the right hand side of \eqref{bootstrapidentityintro}, exists for real $\alpha_i$ satisfying the condition $ \alpha_i<Q$ for all $i \in \llbracket 1,4\rrbracket$ along with $\alpha_1+\alpha_2 >Q$ and $\alpha_3+\alpha_4>Q$. Indeed, first the conformal blocks are defined via a series expansion
\begin{equation}\label{blocksintro}
\mathcal{F}_P(z)= \sum_{n=0}^\infty \beta_n z^n
\end{equation}
where the coefficients $\beta_n$, which have a strong representation theoretic content, are not explicit: see \eqref{expressionbeta} for the exact definition of $\beta_n$. Hence, it is not obvious that the series \eqref{blocksintro} converges for $|z|<1$. Second, it is not clear that the integral in $P\in \mathbb{R}^+$ of expression \eqref{4pointidentity} is convergent. As a matter of fact, in the course of the proof of Theorem \ref{bootstraptheoremintro}, we establish both that the radius of convergence\footnote{We acknowledge here an argument that was given to us by Slava Rychkov in private communication.} of \eqref{blocksintro} is $1$ for almost all $P$ and that the integral \eqref{4pointidentity} makes sense.
To the best of our knowledge, the proof of the convergence of the conformal blocks is new and we expect that the result holds for all $P$, although we do not need such a strong statement for our purpose. Let us mention here the recent work
\cite{GRSS} which establishes a probabilistic formula involving moments of a GMC type variable for the conformal blocks of LCFT on the complex torus
thereby proving the existence of the torus blocks for all values of the relevant parameters.
Convergence of the series defining conformal blocks is also topical in physics, see \cite{rych1,rych2,rych3}.
The AGT correspondence \cite{AGT} between $4d$ supersymmetric Yang-Mills theory and the bootstrap construction of LCFT
conjectures that $\mathcal{F}_P(z)$ coincides with special cases of Nekrasov's partition function \cite{Ne04}.
In particular this leads to an explicit formula for $\beta_n$ in \eqref{blocksintro}. However, even admitting this conjecture, it remains difficult to show that the radius of convergence in \eqref{blocksintro} is $1$:
see for instance
\cite{FLM18}.
The AGT conjecture has been proved as an identity between formal power series in the case of the torus in
\cite{Ne} following the works
\cite{MO, SV}
but this does not address the issue of convergence.
See also \cite{FL, AFLT}
for arguments in the physics literature which support the AGT conjecture on the torus or the Riemann sphere. \\
\textbf{Acknowledgements.} C. Guillarmou acknowledges that this project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement No 725967. A. Kupiainen is supported by the Academy of Finland and ERC Advanced Grant 741487. R. Rhodes is partially supported by the Institut Universitaire de France (IUF).
The authors wish to thank Zhen-Qing Chen and Naotaka Kajino for discussions on Dirichlet forms, Ctirad Klimcik and Yi-Zhi Huang for explaining the links with Vertex Operator Algebras, Slava Rychkov for fruitful discussions on the convergence of conformal blocks, Alex Strohmaier, Tanya Christiansen and Jan Derezinski for discussions on the scattering part, and Baptiste Cercl\'e for comments on earlier versions of this manuscript.
\section{Outline of the proof}\label{sec:outline}
In this section we give an informal summary of the proof of the bootstrap formula with pointers to precise definitions and statements.
\subsection{Reflection positivity}\label{outline:OS}
The LCFT expectation \eqref{FL1} is an expectation with respect to a positive measure, but it has another positivity property, called reflection positivity (or Osterwalder-Schrader positivity), that allows us to express the correlation functions of LCFT in terms of a scalar product in the {\it physical Hilbert space} of LCFT. This task is carried out in Section \ref{sec:ospos} and outlined now. The construction of the Hilbert space is based on an involution acting on observables $F$. For this, we consider the reflection at the unit circle $ \theta: \hat\mathbb{C}\to\hat\mathbb{C}$
\begin{align}
\theta(z)=1/\bar{z}
\label{thetadef}
\end{align}
which maps the unit disk $\mathbb{D}$ to its complement $\mathbb{D}^c$. We promote it to an operator $\Theta$ acting on observables $F \mapsto \Theta F$ by
\begin{align}\label{Thetadef}
(\Theta F) (\phi):=F(\phi\circ \theta-2Q\ln|\cdot|).
\end{align}
This allows us to define a sesquilinear form acting on a set $\mathcal{F}_\mathbb{D}$ of observables $F$ that depend only on the restriction $\phi|_\mathbb{D}$ of $\phi$ to the unit disk (i.e. $F(\phi)=F(\phi|_\mathbb{D})$) by
\begin{equation}\label{osform:intro}
(F,G)_\mathbb{D}:=\langle \Theta F(\phi) \overline{G(\phi)}\rangle_{\gamma,\mu},\quad F,G\in \mathcal{F}_\mathbb{D}.
\end{equation}
Note that $ \Theta F$ is an observable depending on the restriction $\phi|_{\mathbb{D}^c}$ of $\phi$ to the complementary disc $\mathbb{D}^c$ so that the scalar product of two observables on the disk $\mathbb{D}$ is given by the LCFT expectation of their product when one of them is reflected to $\mathbb{D}^c$. Reflection positivity is the statement that the sesquilinear form \eqref{osform:intro} is nonnegative:
\begin{equation*}
(F,F)_\mathbb{D}\geq 0,
\end{equation*}
see Proposition \ref{OS1}.
The canonical Hilbert space ${\mathcal H}_\mathbb{D}$ of LCFT is defined as the completion of $ \mathcal{F}_\mathbb{D}$, quotiented out by the null set ${\mathcal N}_0=\{F\in {\mathcal F}_\mathbb{D} \,|\, (F,F)_{\mathbb{D}}=0\}$, with respect to the sesquilinear form \eqref{osform:intro}. This space can be realized in more concrete terms as follows.
\subsection{Hilbert space of LCFT}\label{outline:hilb}
The space ${\mathcal H}_\mathbb{D}$ can be realized as an $L^2$ space on a set of fields on the equatorial circle $\mathbb{T}=\partial\mathbb{D}$ using the domain Markov property of the GFF. To do this let $\varphi=X|_\mathbb{T}$ be the restriction of the GFF to the unit circle. $\varphi$ can be realized (see Section \ref{sub:gff}) as a (real valued) random Fourier series
\begin{equation}\label{GFFcircle0}
\varphi(\theta)=\sum_{n\not=0}\varphi_ne^{in\theta}
\end{equation}
with $\varphi_{n}=\frac{1}{2\sqrt{n}}(x_n+iy_n)$ for $n>0$ where $x_n,y_n$ are i.i.d. unit Gaussians.
$\varphi$ can be understood as a random element in a Sobolev space $W^{s}(\mathbb{T})$ with $s<0$ (see \eqref{outline:ws}) and
as the coordinate function in the probability space $\Omega_\mathbb{T}=(\mathbb{R}^2)^{\mathbb{N}^\ast}$ equipped with a cylinder set sigma algebra and a Gaussian probability measure $\P$ (see \eqref{Pdefin}). The GFF $X$ \eqref{chatcov} can now be decomposed (Section \ref{sub:gff}) as an independent sum
\begin{equation}\label{decomposeGFF}
X\stackrel{{\rm law}}= P\varphi+ X_\mathbb{D}+X_{\mathbb{D}^c}
\end{equation}
where $P\varphi$ is the harmonic extension of $\varphi$ to $\hat\mathbb{C}$ and $X_\mathbb{D},X_{\mathbb{D}^c}$ are two independent GFFs on $\mathbb{D}$ and $\mathbb{D}^c$ with Dirichlet boundary conditions. In Proposition \ref{OS1} we show that there is a unitary map $U:{\mathcal H}_\mathbb{D}\to L^2(\mathbb{R}\times \Omega_\mathbb{T})$ given by
\begin{align}\label{udeff0}
(UF)(c,\varphi)= e^{-Qc}\mathds{E}_\varphi[ F(c+X)e^{-\mu e^{\gamma c}M_\gamma(\mathbb{D})}],
\end{align}
where $\mathds{E}_\varphi$ is expectation over $X_\mathbb{D}$ in the decomposition $X|_\mathbb{D}=X_\mathbb{D}+P\varphi$. Hence ${\mathcal H}_\mathbb{D}$ can be identified with $L^2(\mathbb{R}\times \Omega_\mathbb{T})$. The operator $U$ then allows us to write the 4-point function in terms of the scalar product $\langle\cdot|\cdot\rangle_2$ in $L^2(\mathbb{R}\times \Omega_\mathbb{T})$; namely, we have for $|z|<1$
\begin{equation}\label{4point_via_U}
\langle V_{\alpha_1}(0) V_{\alpha_2}(z) V_{\alpha_3}(1) V_{\alpha_4}(\infty) \rangle_{\gamma,\mu}=\big\langle U \big(V_{\alpha_1}(0)V_{\alpha_2}(z)\big) \,\big|\, U \big(V_{\alpha_4}(0)V_{\alpha_3}(1)\big) \big\rangle_{2}.
\end{equation}
The bootstrap formula is then obtained by
expanding this scalar product along the eigenstates of a self adjoint operator $\mathbf{H}$ on $L^2(\mathbb{R}\times \Omega_\mathbb{T})$, the LCFT Hamiltonian to which we now turn.
\subsection{Hamiltonian of Liouville theory}
For $q\in \mathbb{C}$ with $|q|\leq 1$, the dilation map $s_q(z)=qz$ maps the unit disc into itself and it gives rise to a map $S_q:\mathcal{F}_\mathbb{D}\to \mathcal{F}_\mathbb{D}$ by
\begin{align}\label{dilationS}
S_q F(\phi):=F(\phi\circ s_q+Q\ln |q|)
\end{align}
for $F\in \mathcal{F}_\mathbb{D}$. In Proposition \ref{dilationsemi} we show that these operators satisfy $S_qS_{q'}=S_{qq'}$ and $q\mapsto S_q$ descends to a strongly continuous semigroup on ${\mathcal H}_\mathbb{D}$. Taking $q=e^{-t}$ for $t\geq 0$, we get a contraction semigroup on $L^2(\mathbb{R}\times \Omega_\mathbb{T})$ via the unitary map $U$ in \eqref{udeff0}, given by the relation
\begin{align}\label{lsemig}
e^{-t{\bf H}}=US_{e^{-t}}U^{-1}.
\end{align}
The generator of the semigroup is a positive self-adjoint operator $\bf H$ with domain $\mathcal{D}(\mathbf{H})\subset L^2(\mathbb{R}\times \Omega_\mathbb{T})$, called the {\it Hamiltonian of Liouville theory}.
To get a more concrete representation for $\bf H$ we use a probabilistic Feynman-Kac representation for $e^{-t{\bf H}}$. It is based on the well known fact that the GFF $X(e^{-t+i\theta})$ for $t\geq 0$ can be realized as a continuous Markov process $$t\mapsto X(e^{-t+i\cdot})=(B_t,\varphi_t(\cdot))\in W^{s}(\mathbb{T})
$$
with $s<0$. Here we decomposed $W^{s}(\mathbb{T})=\mathbb{R}\oplus W^{s}_0(\mathbb{T})$ where $ W^{s}_0(\mathbb{T})$ has the $n\neq 0$ Fourier components. In this decomposition $B_t$ is a standard Brownian motion and the process $\varphi_t$ gives rise to an Ornstein-Uhlenbeck semigroup on $W^{s}_0(\mathbb{T})$ (i.e. the harmonic components evolve as independent OU processes), whose generator is a positive self-adjoint operator $ \mathbf{P}$. The operator $ \mathbf{P}$ is essentially an infinite sum of harmonic oscillators (see \eqref{hdefi} for the exact definition). This leads to the expression for the Hamiltonian when $\mu=0$
\begin{equation}\label{H0def}
\mathbf{H}^0:=\mathbf{H}|_{\mu=0}= -\frac{1}{2}\partial_c^2 + \frac{1}{2} Q^2+ \mathbf{P}
\end{equation}
defined on an appropriate domain where the first two terms come from the Brownian motion, see Section \ref{sec:GFFCFT} for further details.
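To fix ideas, in the coordinates $(x_n,y_n)_{n\geq 1}$ of \eqref{GFFcircle0}, and up to the precise conventions fixed in \eqref{hdefi} (which we do not reproduce here), $\mathbf{P}$ acts as a weighted sum of Ornstein-Uhlenbeck number operators,
$$\mathbf{P}=\sum_{n\geq 1} n\,\big(\mathbf{N}_{x_n}+\mathbf{N}_{y_n}\big),\qquad \mathbf{N}_{x}:=-\partial_{x}^2+x\,\partial_{x},$$
each $\mathbf{N}_x$ having the Hermite polynomials as eigenfunctions with eigenvalues $0,1,2,\dots$; this is the sense in which $\mathbf{P}$ is an infinite sum of harmonic oscillators.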
Using the formula \eqref{udeff0} we then deduce
a Feynman-Kac formula for the LCFT semigroup (Proposition \ref{prop:FK}), which then formally gives $\bf H$ as a ``Schr\"odinger operator"
\begin{equation}\label{Hdef}
\mathbf{H}=
\mathbf{H}^0+\mu e^{\gamma c}V(\varphi)
\end{equation}
where the potential is formally given by
\begin{equation}\label{VVV}
V(\varphi)=\int_0^{2\pi} e^{ \gamma \varphi(\theta)- \frac{\gamma^2}{2} \mathds{E}[ \varphi(\theta)^2 ]
} \dd\theta .
\end{equation}
In Section \ref{sub:bilinear}, we give the rigorous definition of \eqref{Hdef}, which is actually quite subtle and nonstandard. The operator \eqref{Hdef} is defined as the Friedrichs extension of an associated quadratic form. For $\gamma\in (0,\sqrt{2})$, $V$ is a well-defined random variable, namely the total mass of a GMC measure on $\mathbb{T}$ associated to $\varphi$.
For $\gamma \in [\sqrt{2},2)$ however, this GMC measure vanishes identically and $V$ can no longer be considered as a multiplication operator: it has to be understood as a measure on some subspace of $L^2(\Omega_\mathbb{T})$ but it still gives rise to a positive operator.
We mention here that the operator \eqref{Hdef} in the absence of the $c$-variable, i.e. the operator $ \mathbf{P} +\mu V$, was first studied in \cite{hoegh} and was shown to be essentially self-adjoint on an appropriate domain provided $\gamma \in (0,1)$, a condition under which $V$ is a random variable in $L^2(\Omega_\mathbb{T})$.
\subsection{Spectral resolution of the Hamiltonian}\label{outline:subspectral}
One of the main mathematical inputs in our proof comes from stationary {\it scattering theory}, see Section \ref{sec:scattering}. It is based on the observation that the potential $e^{\gamma c}V$ vanishes as $c\to -\infty$ so that $\mathbf{H}$-eigenstates should be reconstructed from their asymptotics at $c\to-\infty$, region over which they should behave like $\mathbf{H}^0$-eigenstates.
The spectral analysis of $\mathbf{H}^0$ is simple. The spectrum of the operator $ \mathbf{P}$ is given by the natural numbers $\lambda_k=k$, $k\in\mathbb{N}$, and each eigenvalue has finite multiplicity, see Section \ref{sec:fock}. The corresponding eigenspace $\ker (\mathbf{P}-\lambda_k)$ is spanned by a family $(h_{jk})_{j=0,\dots,J(k)}$ of finite products of Hermite polynomials in the Fourier components $(\varphi_n)_{n\in\mathbb{Z}^\ast}$ in \eqref{GFFcircle0}, providing
an orthonormal basis $\{h_{jk}\}_{k\in\mathbb{N},j\leq J(k)}$ of $L^2(\Omega_\mathbb{T})$. The spectrum of $\mathbf{H}^0$ is then readily seen to be absolutely continuous and given by the half-line $[\tfrac{Q^2}{2},+\infty)$
with a complete set of generalized eigenstates
\begin{align}\label{h0ev}
\Psi^0_{Q+iP,jk}(c,\varphi)=e^{iPc}h_{jk}(\varphi),
\end{align}
with $P\in\mathbb{R}$, $k\in\mathbb{N}$, $j\leq J(k)$ and eigenvalue $\frac{1}{2}(Q^2+P^2)+\lambda_k=2\Delta_{Q+iP}+\lambda_k$ where $\Delta_{Q+iP}$ is the conformal weight \eqref{deltaalphadef}.
In physics, the spectrum of $\mathbf{H}$ was first elaborated by Curtright and Thorn \cite{ct} (see also Teschner \cite{Tesc1} for a nice discussion of the scattering picture).
To explain the intuition, let us consider a toy model where only the $c$-variable enters, namely the operator $\frac{1}{2}(-\partial_c^2+Q^2) +\mu e^{\gamma c}$ on $L^2(\mathbb{R})$.
It is a Schr\"odinger operator with potential tending to $0$ as $c\to -\infty$ and to $+\infty$ as $c\to +\infty$. Thus for $c\to -\infty$ the eigenfunctions should tend to eigenfunctions of the $\mu=0$ problem i.e. to a linear combination of $e^{\pm iPc}$. Indeed, the operator
has a complete set of generalized eigenfunctions $f_P$, $P\in\mathbb{R}^+$, with asymptotics
\begin{equation*}
f_P(c) \sim
e^{ iPc}+R(P)e^{ -iPc} \ \ \textrm{ as } c\to -\infty
\end{equation*}
and $f_P(c)\to 0$ as $c\to +\infty$. These eigenfunctions describe scattering of waves from a wall (the exponential potential $e^{\gamma c}$ acts as a wall when $c\to+\infty$) and $R(P)$ is an explicit coefficient called the reflection coefficient\footnote{This coefficient is a simplified version of the (quantum) reflection coefficient which appears in the proof of the DOZZ formula \cite{KRV,dozz}.}.
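For this toy operator everything is explicit. Writing the eigenvalue as $\frac{1}{2}(Q^2+P^2)$, the eigenvalue equation reads $-\frac{1}{2}f''+\mu e^{\gamma c}f=\frac{P^2}{2}f$ and the change of variable $u=\frac{2\sqrt{2\mu}}{\gamma}e^{\gamma c/2}$ turns it into the modified Bessel equation of order $\frac{2iP}{\gamma}$. With our normalization of the reflection coefficient (which plays no role in the sequel) this gives
$$f_P(c)\,\propto\, K_{\frac{2iP}{\gamma}}\Big(\tfrac{2\sqrt{2\mu}}{\gamma}\,e^{\gamma c/2}\Big),\qquad R(P)=-\Big(\frac{2\mu}{\gamma^2}\Big)^{-\frac{2iP}{\gamma}}\frac{\Gamma\big(1+\frac{2iP}{\gamma}\big)}{\Gamma\big(1-\frac{2iP}{\gamma}\big)},$$
the asymptotics $e^{iPc}+R(P)e^{-iPc}$ as $c\to-\infty$ being read off from the small argument behaviour of $K_\nu$. In particular $|R(P)|=1$, as expected for scattering off a wall.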
For the operator $\bf H$ we expect the same intuition to hold: an eigenfunction $\Psi(c,\varphi)$ of $\bf H$ should, as $c\to -\infty$, tend to an eigenfunction of ${\bf H}^0$ of the same eigenvalue.
Indeed we prove (see Theorem \ref{spectralmeasure}, which uses a different labelling) that to each ${\bf H}^0$ eigenvector $\Psi^0_{Q+iP,jk}$ in \eqref{h0ev} there is a corresponding ${\bf H}$ eigenvector $\Psi_{Q+iP,jk}$ with the same eigenvalue
\begin{align}\label{heigenv}
{\bf H} \Psi_{Q+iP,jk}=(2\Delta_{Q+iP}+\lambda_k)\Psi_{Q+iP,jk},
\end{align}
and these form a complete set for $P\in\mathbb{R}_+$, $k\in\mathbb{N}$, $j\leq J(k)$. Applying the spectral decomposition of ${\bf H}$ to \eqref{4point_via_U} using this basis leads to
\begin{align}\label{4point_via_U1}
& \langle V_{\alpha_1}(0) V_{\alpha_2}(z) V_{\alpha_3}(1) V_{\alpha_4}(\infty) \rangle_{\gamma,\mu}=\\
&\frac{1}{2\pi}
\sum_{k\in \mathbb{N}}\sum_{j=0}^{J(k)}\int_0^\infty \langle U \big(V_{\alpha_1}(0)V_{\alpha_2}(z)\big)\, |\, \Psi_{Q+iP,jk}\rangle_2 \langle \Psi_{Q+iP,jk}\, |\, U \big(V_{\alpha_4}(0)V_{\alpha_3}(1)\big)\rangle_2 \dd P.\nonumber
\end{align}
Proving this spectral resolution for $\bf H$ is the technical core of the paper and involves a considerable amount of work (the whole Section \ref{sec:scattering}).
The main difficulty comes from the fact that the potential $V$ appearing in ${\bf H}$ acts on the $L^2$-space of an infinite dimensional space $\mathbb{R}\times\Omega_\mathbb{T}$ and moreover $V$ is not even a function for $\gamma\in[\sqrt{2},2)$ as discussed in the previous subsection. This weak regularity and unboundedness of the potential make the problem quite non-standard.
\subsection{Analytic continuation}
The spectral resolution \eqref{4point_via_U1} is not yet the bootstrap formula. Indeed, it holds under quite general assumptions on $V$.
To get the bootstrap formula from \eqref{4point_via_U1} we need to connect the scalar products in \eqref{4point_via_U1} to the DOZZ 3-point functions $C_{\gamma,\mu}^{ \mathrm{DOZZ}}( \alpha_1,\alpha_2, Q-iP )$ and $C_{\gamma,\mu}^{ \mathrm{DOZZ}}( \alpha_3,\alpha_4, Q+iP )$ respectively. The probabilistic 3-point function $C_{\gamma,\mu}( \alpha_1,\alpha_2, \alpha )$ is defined only for $\alpha$ real and satisfying the Seiberg bounds $\alpha_1<Q$, $\alpha_2<Q$, $\alpha<Q$, $\alpha_1+\alpha_2+\alpha>2Q$ and $C_{\gamma,\mu}^{ \mathrm{DOZZ}}( \alpha_1,\alpha_2, Q-iP )$ is an analytic continuation of this probabilistic expression. Our strategy then is to analytically continue the eigenfunction $\Psi_{Q+iP,jk}$ to real values of $Q+iP$ so that we can use the probabilistic construction of the LCFT and in particular its conformal invariance to derive identities that allow to determine the above scalar products.
In Section \ref{sec:scattering}, we show that the eigenfunctions can be analytically continued to $ \Psi_{\alpha,jk}$, which are analytic in $\alpha$ in a connected region which contains both the spectrum line $\alpha\in Q+i\mathbb{R}_+$ and a half-line $]-\infty,A_k]\subset\mathbb{R}$ for some $A_k<Q$. This analytic continuation still satisfies the eigenvalue problem \eqref{heigenv} (with $Q+iP$ replaced by $\alpha$)
but it has exponential growth in the $c$ variable as $c\to -\infty$ and it does not contribute to the spectral resolution (compare with eigenfunctions $e^{ac}$ of the operator $-\partial_c^2$ which are associated to the $L^2$ spectrum only for $a\in i\mathbb{R}$).
\begin{figure}[h]
\begin{tikzpicture}[xscale=1,yscale=1]
\newcommand{\A}{(0,-0.1) rectangle (4.4,3.3)};
\newcommand{\Ci}{(4,-0.2) arc (30:150:1.1) -- (3,-0.2)};
\fill[pattern=north east lines, pattern color=green] \A;
\fill[white] \Ci;
\node[below] at (4,0) {$Q$};
\draw[line width=1pt] (2.1,-0.1) -- (2.1,0.1) ;
\draw [line width=2pt,color=blue](4.2,-0.1) -- (4.2,3);
\draw[line width=1pt,<-] (4.2,2) -- (4.6,2)node[right]{Spectrum line } ;
\node[below] at (5.5,1.8) {$Q+i\mathbb{R}$};
\shade[left color=yellow!10,right color=yellow,opacity=0.7] (2,0) arc (0:25:2) -- (0,3) -- (0,0) -- (2,0);
\shade[left color=yellow!10,right color=yellow] (2,0) arc (0:-5:2) -- (0,-0.2) -- (0,0) -- (2,0);
\draw[style=dashed,line width=1pt,->] (3,-0.1) node[below]{$0$} -- (3,3) node[left]{{\rm Im} $\alpha$};
\draw[style=dashed,line width=1pt,->] (0,0) -- (6,0)node[below]{{\rm Re} $\alpha$};
\node[below] at (0,1) {{\small Probabilistic region}};
\node[below] at (2.4,2.5) {{\small Analyticity region}};
\node[below] at (2.1,0) {$Q-\lambda_{k}$};
\end{tikzpicture}
\caption{Analytic continuation of eigenstates and probabilistic region.}
\label{art:intro}
\end{figure}
The crucial point we show is that the analytically continued $\mathbf{H}$-eigenfunctions $ \Psi_{\alpha,jk}$ can be obtained by {\it intertwining} the $\mathbf{H}^0$-eigenfunctions for $\alpha$ in some complex neighborhood of the half-line $]-\infty,A_k]$ that we call {\it the probabilistic region} (see Figure \ref{art:intro}):
\begin{equation}\label{intertwining:intro}
\Psi_{\alpha,jk}=\lim_{t\to\infty} e^{(2\Delta_\alpha+\lambda_{k})t}e^{-t\mathbf{H}}\Psi^{0}_{\alpha,jk} \quad \quad\quad \text{\textbf{(Intertwining)}}
\end{equation}
where $\Psi^{0}_{\alpha,jk}=e^{(\alpha-Q)c}h_{jk}$ are the analytically continued eigenfunctions of $\mathbf{H}^0$ (see Proposition \ref{descconvergence}, which has a different labelling). Furthermore, combining \eqref{intertwining:intro} with the
Feynman-Kac formula one finds (Section \ref{sub:HW}) for $\lambda_k=0$ (then $J(0)=0$, $h_{00}(\varphi)=1$ and we denote $\Psi_{\alpha,0}$ for $\Psi_{\alpha,00}$) that
\begin{align}
\Psi_{\alpha,0}=U(V_\alpha(0))\label{urelation}
\end{align}
where $U$ is the unitary map \eqref{udeff0}. Hence the ``non-spectral'' eigenfunctions $ \Psi_{\alpha,0}$ for $\alpha\in\mathbb{R}$ have a probabilistic interpretation in LCFT whereas the ``spectral'' eigenfunctions $\Psi_{Q+iP,0}$ do not have one. In the physics literature the correspondence between local fields and states in the Hilbert space is called the state-operator correspondence and in LCFT this correspondence is broken. The spectral states $\Psi_{Q+iP,0}$ do not correspond to local fields and they are called {\it macroscopic states} whereas the non-normalizable states $\Psi_{\alpha,0}$ correspond to the local field $V_\alpha$ and are called {\it microscopic states}, see \cite{seiberg} for a lucid discussion.
The relation \eqref{urelation} leads to
\begin{equation}\label{4point_via_U2}
\langle U(\prod_{i=1}^nV_{\alpha_i}(z_i)) \, |\, \Psi_{\alpha,0}\rangle_2
=
\langle \big ( \prod_{i=1}^nV_{\alpha_i}(z_i) \big ) V_\alpha(\infty) \rangle_{\gamma,\mu}
\end{equation}
for $\{\alpha_i\},\alpha$ satisfying the Seiberg bounds (the scalar product in \eqref{4point_via_U2} still makes sense since the vector on the left has sufficient decay in the $c$-variable to counter the exponential increase of $\Psi_{\alpha,0}$). Note in particular that \eqref{4point_via_U2} is defined only if $n$ is large enough since the Seiberg bound requires $\sum_i\alpha_i>2Q-\alpha>2Q-A_0$. To understand the case $k>0$ we need to discuss the conformal symmetry of LCFT.
\subsection{Conformal Ward Identities}
The symmetries of conformal field theory are encoded in an infinite dimensional Lie algebra, the Virasoro algebra. In the case of GFF this means that
the Hilbert space $L^2(\mathbb{R}\times\Omega_\mathbb{T})$ carries a representation of two commuting Virasoro algebras with generators $\{{\bf L}^0_n\}_{n\in\mathbb{Z}}$ and $\{\tilde {\bf L}^0_n\}_{n\in\mathbb{Z}}$ (see Section \ref{repth}). In particular the
GFF Hamiltonian is given by ${\bf H}^0={\bf L}^0_0+\tilde{\bf L}^0_0$ and the ${\bf H}^0$ eigenstates $\Psi_{Q+iP,jk}^0$ for fixed $P$ can be organized into a highest weight representation of these algebras. In concrete terms the highest weight state $\Psi_{Q+iP,0}^0$ satisfies
\begin{equation}
\mathbf{L}_0^0 \Psi^{0}_{Q+iP,0} =\widetilde{\mathbf{L}}_0^0\Psi^{0}_{Q+iP,0}=\Delta_{Q+iP}\Psi^{0}_{Q+iP,0} ,\quad \quad \mathbf{L}_n^0\Psi^{0}_{Q+iP,0}=\widetilde{\mathbf{L}}_n^0\Psi^{0}_{Q+iP,0}=0,\ \ \ n>0,
\end{equation}
and given two non-increasing sequences of positive integers $\nu= (\nu_1,\dots,\nu_k) $ and $\tilde{\nu}=(\tilde\nu_1,\dots,\tilde\nu_l) $, $k,l\in \mathbb{N}$ and setting
$\mathbf{L}_{-\nu}^0=\mathbf{L}_{-\nu_k}^0 \cdots \, \mathbf{L}_{-\nu_1}^0 $ and $\tilde{\mathbf{L}}_{-\tilde \nu}^0=\tilde{\mathbf{L}}_{-\tilde\nu_l}^0 \cdots\, \tilde{\mathbf{L}}_{-\tilde\nu_1}^0$
the states
\begin{align}\label{psibasis:intro}
\Psi^0_{Q+iP,\nu, \tilde\nu}:=\mathbf{L}_{-\nu}^0\tilde{\mathbf{L}}_{-\tilde\nu}^0 \: \Psi^0_{Q+iP,0},
\end{align}
are eigenstates of ${\bf H}^0$ of eigenvalue $E=2\Delta_{Q+iP}+\sum_i\nu_i+\sum_j\tilde\nu_j$ and, for fixed $E$, span that eigenspace. Thus at each eigenspace there is a nonsingular matrix $M(Q+iP)$ relating the vectors $\Psi^0_{Q+iP,\nu, \tilde\nu}$ and $\Psi^0_{Q+iP,jk}$ and furthermore we show (Proposition \ref{prop:mainvir0}) that this matrix is analytic in the variable $\alpha=Q+iP$. Setting
\begin{align*}
\Psi_{\alpha, \nu,\tilde\nu}:=\sum_{k,j} M(\alpha)_{\nu,\tilde\nu;jk} \Psi_{\alpha,jk}
\end{align*}
the vectors $ \Psi_{Q+iP, \nu,\tilde\nu}$ are a complete set of generalized eigenvectors of $\bf H$ and they can be used in the identity \eqref{4point_via_U1} (with appropriate Gram matrices), see Section \ref{sectionproofbootstrap}. Furthermore
$ \Psi_{\alpha, \nu,\tilde\nu}$ provides an analytic continuation of $ \Psi_{Q+iP, \nu,\tilde\nu}$
to the half line $\alpha<Q-A$ for some $A>0$ and it is intertwined there with the corresponding vector $ \Psi^0_{\alpha, \nu,\tilde\nu}$ by the relation \eqref{intertwining:intro}.
The bootstrap formula \eqref{4pointidentity} is a consequence of the following fundamental identity
(see Proposition \ref{Ward})
\begin{align}\label{Ward3:intro}
\langle \Psi_{Q+iP, \nu,\tilde\nu}\, |\, U(V_{\alpha_1}(0)V_{\alpha_2}(z)) \rangle_2 &=d(\alpha_1,\alpha_2,\nu,\tilde\nu)C^{ \mathrm{DOZZ}}_{\gamma,\mu}( \alpha_1,\alpha_2, Q+iP ) \bar{z}^{|\nu|} z^{|\tilde{\nu}|} |z|^{2 (\Delta_{Q+iP}-\Delta_{\alpha_1}-\Delta_{\alpha_2})}
\end{align}
where the function $d$ is an explicit function of the parameters that will contribute
to the conformal blocks. To prove \eqref{Ward3:intro} we consider the scalar product $\langle \Psi_{Q+iP, \nu,\tilde\nu}\, |\, U(\prod_{i=1}^nV_{\alpha_i}(z_i))\rangle_2 $ where $z_i\in\mathbb{D}$ and $\sum_i\alpha_i>Q+A$ and analytically continue it from $\alpha=Q+iP$ ($P \in \mathbb{R}_+$) to $\alpha\in (2Q-\sum_i\alpha_i,Q-A)$. For such $\alpha$ we prove the Conformal Ward Identity (see Proposition \ref{proofward})
\begin{align}\label{wardintroduction}
\langle U(\prod_{i=1}^nV_{\alpha_i}(z_i)) \, |\, \Psi_{\alpha,\nu,\tilde\nu}\rangle_2
&={\mathcal D}(\boldsymbol{\alpha},\alpha,\nu,\tilde\nu) \langle \prod_{i=1}^n V_{\alpha_i} (z_i) V_\alpha(\infty)\rangle_{\gamma,\mu}
\end{align}
where $\boldsymbol{\alpha}=(\alpha_1, \dots, \alpha_n)$ and ${\mathcal D}(\boldsymbol{\alpha},\alpha,\nu,\tilde\nu)$ is an explicit partial differential operator in the variables $z_i$. Using \eqref{4point_via_U2}, continuing back to $\alpha=Q+iP$ and taking $\alpha_i\to 0$ for $i>2$ we then deduce (the complex conjugate of) \eqref{Ward3:intro}.
The proof of \eqref{wardintroduction} occupies the whole Section \ref{sec:proba}. It is based on a representation of the states $\Psi^0_{\alpha,\nu,\tilde\nu}$ in terms of $V_\alpha(0)$ and the Stress-Energy-Tensor (SET) in Proposition \ref{def_contour}. Let us briefly explain this for $\nu=n, \tilde\nu=\emptyset$, i.e. for the state ${\bf L}^0_{-n}\Psi^0_{\alpha,0}$. We have $\Psi^0_{\alpha,0}=U_0V_\alpha(0)$ where $U_0$ is the map \eqref{udeff0} for $\mu=0$. The SET is
given in the GFF theory by the field
\begin{equation}\label{SET:intro}
T(z):=Q\partial_{z}^2X(z)-(\partial_zX(z))^2+\mathds{E}[(\partial_zX(z))^2],
\end{equation}
defined through regularization and limit. Then we prove
\begin{align*}
{\bf L}^0_{-n}\Psi^0_{\alpha,0}=\frac{1}{2\pi i}\oint z^{1-n}U_0(T(z)V_\alpha(0))dz
\end{align*}
where the integration contour circles the origin in $\mathbb{D}$. Plugging this identity into
the intertwining relation \eqref{intertwining:intro} and using the Feynman-Kac formula one ends up with a contour integral of the SET insertion in LCFT correlation function. This is analyzed by Gaussian integration by parts and results in the Ward identity.
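This contour representation is consistent with the familiar operator product expansion of the SET with a primary field: with the conventions \eqref{SET:intro} and \eqref{deltaalphadef}, and understood within (regularized) GFF correlation functions, the standard free field computation gives
$$T(z)V_\alpha(0)\;\sim\;\frac{\Delta_\alpha}{z^{2}}\,V_\alpha(0)+\frac{1}{z}\,\partial_z V_\alpha(0)+\text{regular terms as } z\to 0,$$
so that the contour integral above extracts, for $n=0$ and $n=1$, respectively $\Delta_\alpha V_\alpha(0)$ and $\partial_z V_\alpha(0)$, in agreement with the highest weight relations displayed above and the usual interpretation of $\mathbf{L}^0_{-1}$ as the generator of translations.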
\subsection{Organization of the paper}
The paper is organized as follows. In Section \ref{sec:ospos}, we will introduce the relevant material on the Gaussian Free Field and explain the construction of the Hilbert space as well as the quantization of dilations; the Liouville Hamiltonian is then defined as the generator of dilations. This section uses the concept of reflection positivity. In Section \ref{sec:GFFCFT}, we study the dynamics induced by dilations in the GFF theory (i.e. $\mu=0$) and recall the basics of representation theory related to the GFF. In Section \ref{sec:LCFTFQ}, we study in more detail the Liouville Hamiltonian: we establish the Feynman-Kac formula for the associated semigroup and identify its quadratic form, which allows us to use scattering theory to diagonalize the Liouville Hamiltonian in Section \ref{sec:scattering}. In Sections \ref{sec:proba} and \ref{subproofward}, we will prove the Conformal Ward identities for the correlations of LCFT: in a way, this can be seen as an identification of the eigenstates of the Liouville Hamiltonian. In Section \ref{sectionproofbootstrap}, we will prove the main result of the paper, Theorem \ref{bootstraptheoremintro}, using the material proved in the other sections. Finally, in the appendix, we will recall the DOZZ formula and gather auxiliary results (analyticity of vertex operators).
\subsection{Notations and conventions}\label{notation}
We gather here the frequently used notations:
\vskip 2mm
\noindent $\langle . \rangle_{\gamma,\mu}$ denotes the LCFT expectation \eqref{FL1}.
\vskip 1mm
\noindent $X,\phi,\varphi$ denote respectively the GFF \eqref{chatcov}, the Liouville field \eqref{liouvillefield} and the GFF on $\mathbb{T}$ \eqref{GFFcircle0}.
\vskip 1mm
\noindent $(\Omega_\mathbb{T},\P_\mathbb{T})$ denotes the probability space \eqref{omegat} with measure \eqref{Pdefin}.
\vskip 1mm
\noindent $L^p(\Omega_\mathbb{T})$ for $p\geq 1$: complex valued functions $\psi(\varphi)$ with norm $\|\cdot\|_{L^p(\Omega_\mathbb{T})}$.
\vskip 1mm \noindent $L^p(\mathbb{R} \times \Omega_\mathbb{T})$ for $p\geq 1$: complex valued functions $\psi(c,\varphi)$ with norm $\|\cdot\|_{p}$.
\vskip 1mm \noindent $\langle \cdot | \cdot \rangle_{L^2(\Omega_\mathbb{T})}$: scalar product in $L^2(\Omega_\mathbb{T})$.
\vskip 1mm \noindent $\langle \cdot | \cdot\rangle_2$:
scalar product in $L^2(\mathbb{R} \times \Omega_\mathbb{T})$.
\vskip 1mm \noindent $e^{\alpha \rho(c)}L^p(\mathbb{R} \times \Omega_\mathbb{T})$: weighted $L^p$-space equipped with the norm $ \|f\|_{e^{\alpha \rho(c)}L^p}:= \|e^{-\alpha \rho(c)} f\|_p$.
\vskip 1mm \noindent $C^\infty(D)$ denotes the set of smooth functions on the domain $D$.
\vskip 1mm \noindent $C^\infty_c(D)$ denotes the set of smooth functions with compact support in $D$.
\vskip 1mm \noindent
$(.,.)_\mathbb{D}$ denotes the scalar product \eqref{osform} associated to reflection positivity.
\vskip 1mm \noindent ${\mathcal H}_\mathbb{D}$: Hilbert space associated to $(.,.)_\mathbb{D}$.
\vskip 1mm \noindent
$\langle f,g \rangle _\mathbb{T}:=\int_0^{2 \pi }f(\theta)g(\theta) d\theta $ denotes the scalar product on $L^2(\mathbb{T})$.\vskip 1mm \noindent $\langle f,g \rangle _\mathbb{D}:=\int_\mathbb{D} f(x)g(x) dx $ denotes the scalar product in $L^2(\mathbb{D})$.
\vskip 1mm \noindent All sesquilinear forms are linear in their first argument, antilinear in their second one.
\vskip 1mm \noindent For integral operators on some measure space $(M,\mu)$ we use the notation $(Gf)(x)=\int G(x,y)f(y)\mu(dy)$.
\vskip 1mm \noindent GFF quantities as opposed to corresponding LCFT ones will carry a subscript or superscript $0$: for example the Hamiltonians are ${\bf H}^0$ (GFF) and ${\bf H}$ (LCFT).
\section{Reflection positivity} \label{sec:ospos}
In this section we prove the reflection positivity of the LCFT and explain the isometry $U$ \eqref{udeff0} mapping the LCFT observables to states in the Hilbert space $L^2(\mathbb{R} \times \Omega_\mathbb{T})$ as well as the semigroup in \eqref{lsemig}. We start by a discussion of the various GFF's.
\subsection{Gaussian Free Fields}\label{sub:gff}
We will now define the fields entering the decomposition \eqref{decomposeGFF}.
\subsubsection{GFF on the unit circle $\mathbb{T}$}
Given two independent sequences of i.i.d. standard Gaussians $(x_n)_{n\geq 1}$ and $(y_n)_{n\geq 1}$, the GFF on the unit circle is the random Fourier series
\begin{equation}\label{GFFcircle}
\varphi(\theta)=\sum_{n\not=0}\varphi_ne^{in\theta}
\end{equation}
where for $n>0$
\begin{align}\label{varphin}
\varphi_n:=\frac{1}{2\sqrt{n}}(x_n+iy_n) ,\ \ \ \varphi_{-n}:=\overline\varphi_{n}.
\end{align}
Let $W^s(\mathbb{T})\subset \mathbb{C}^\mathbb{Z}$ be the set of sequences s.t.
\begin{equation}\label{outline:ws}
\|\varphi\|_{W^s(\mathbb{T})}^2:=\sum_{n\in\mathbb{Z}}|\varphi_n|^2(|n|+1)^{2s} <\infty.
\end{equation}
One can easily check that $\mathds{E}[\|\varphi\|_{W^s(\mathbb{T})}^2]<\infty$ for any $s<0$ so that the series \eqref{GFFcircle} defines a random element in $W^s(\mathbb{T})$. Moreover, by a standard computation, one can check that it is a centered Gaussian field with covariance kernel given by
\begin{equation}
\mathds{E}[\varphi({\theta})\varphi({\theta'})]=\ln\frac{1}{|e^{i\theta}-e^{i\theta'}|}.
\end{equation}
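For the reader's convenience, let us sketch this computation: from \eqref{varphin} one has $\mathds{E}[\varphi_n\varphi_m]=\frac{1}{2|n|}\delta_{n,-m}$ for $n,m\neq 0$, so that
\begin{align*}
\mathds{E}[\varphi({\theta})\varphi({\theta'})]=\sum_{n\not=0}\frac{e^{in(\theta-\theta')}}{2|n|}=\sum_{n\geq 1}\frac{\cos(n(\theta-\theta'))}{n}=-\ln\Big|2\sin\tfrac{\theta-\theta'}{2}\Big|=\ln\frac{1}{|e^{i\theta}-e^{i\theta'}|},
\end{align*}
where we used the Fourier expansion of $\theta\mapsto\ln|2\sin\tfrac{\theta}{2}|$ and the identity $|e^{i\theta}-e^{i\theta'}|=2|\sin\tfrac{\theta-\theta'}{2}|$.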
We will view $\varphi$ as the coordinate function of the probability space
\begin{align}\label{omegat}
\Omega_\mathbb{T}=(\mathbb{R}^{2})^{\mathbb{N}^*}
\end{align}
which is equipped with the cylinder sigma-algebra
$\Sigma_\mathbb{T}=\mathcal{B}^{\otimes \mathbb{N}^*}$, where $\mathcal{B}$ stands for the Borel sigma-algebra on $\mathbb{R}^2$ and the product measure
\begin{align}\label{Pdefin}
\P_\mathbb{T}:=\bigotimes_{n\geq 1}\frac{1}{2\pi}e^{-\frac{1}{2}(x_n^2+y_n^2)}\dd x_n\dd y_n.
\end{align}
Here $ \P_\mathbb{T}$ is supported on $W^s(\mathbb{T})$ for any $s<0$ in the sense that $ \P_\mathbb{T}(\varphi\in W^s(\mathbb{T}))=1$.
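The following minimal numerical sketch (assuming the Python library NumPy; it is not used anywhere in this paper, and the truncation level and sample size are arbitrary illustration choices) samples the truncated series \eqref{GFFcircle} and compares the empirical covariance at two fixed angles with the kernel above.
\begin{verbatim}
# Minimal illustration (not used in the paper): sample the truncated
# circle GFF and compare its empirical covariance with the exact kernel.
import numpy as np

rng = np.random.default_rng(0)
N, M = 200, 20000                   # number of Fourier modes, number of samples
theta, theta_p = 0.3, 1.7           # two test angles

n = np.arange(1, N + 1)
x = rng.standard_normal((M, N))     # x_n
y = rng.standard_normal((M, N))     # y_n
phi_n = (x + 1j * y) / (2 * np.sqrt(n))      # varphi_n for n >= 1

def field(t):
    # varphi(t) = sum_{n != 0} varphi_n e^{i n t} = 2 Re sum_{n >= 1} varphi_n e^{i n t}
    return 2 * np.real(phi_n @ np.exp(1j * n * t))

cov_emp = np.mean(field(theta) * field(theta_p))
cov_exact = -np.log(np.abs(np.exp(1j * theta) - np.exp(1j * theta_p)))
print(cov_emp, cov_exact)           # the two values should be close
\end{verbatim}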
\subsubsection{Harmonic extension of the GFF on $\mathbb{T}$}
The next ingredient we need for the decomposition of the GFF \eqref{decomposeGFF} is the harmonic extension $P\varphi$ of the circle GFF defined on $z\in\mathbb{D}$ by
\begin{equation}\label{harmonic}
(P\varphi)(z) =\sum_{n\geq 1}(\varphi_n z^n+\bar \varphi_n \bar z^n)
\end{equation}
and on $z\in\mathbb{D}^c$ by $(P\varphi)(1/\bar z)$ so that we have
\begin{align*}
P\varphi= (P\varphi)\circ\theta
\end{align*}
where $\theta$ is the reflection in the unit circle \eqref{thetadef}.
$P\varphi$ is a.s. a smooth field in the complement of the unit circle with covariance kernel given for $z,u\in\mathbb{D}$
\begin{align*}
\mathds{E}[(P\varphi)(z)(P\varphi)(u)]=\frac{_1}{^2}\sum_{n>0}\frac{1}{n}((z\bar u)^n+(\bar z u)^n)=-\ln|1-z\bar u|
\end{align*}
and for $z\in\mathbb{D}$, $u\in\mathbb{D}^c$
\begin{align*}
\mathds{E}[(P\varphi)(z)(P\varphi)(u)]=-\ln|1-z/ u|
\end{align*}
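Both formulas follow from the expansion $-\ln(1-w)=\sum_{n>0}\frac{w^n}{n}$, $|w|<1$: for $z,u\in\mathbb{D}$,
\begin{align*}
\frac{_1}{^2}\sum_{n>0}\frac{1}{n}((z\bar u)^n+(\bar z u)^n)=-\frac{_1}{^2}\big(\ln(1-z\bar u)+\ln(1-\bar z u)\big)=-\ln|1-z\bar u|,
\end{align*}
and the case $u\in\mathbb{D}^c$ is obtained by replacing $u$ with $1/\bar u$.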
\subsubsection{Dirichlet GFF on the unit disk}
The Dirichlet GFF $X_\mathbb{D}$ on the unit disk $\mathbb{D}$ is the centered Gaussian distribution (in the sense of Schwartz) with covariance kernel $G_\mathbb{D}$ given by
\begin{equation}\label{dirgreen}
G_\mathbb{D}(x,x'):=\mathds{E}[X_\mathbb{D}(x)X_\mathbb{D}(x')]=\ln\frac{|1-x\bar x'|}{|x-x'|}.
\end{equation}
Here, $G_\mathbb{D}$ is the Green function of the negative of the Laplacian $\Delta_\mathbb{D}$ with Dirichlet boundary condition on $\mathbb{T}=\partial\mathbb{D}$ and
$X_\mathbb{D}$ can be realized
as an expansion in eigenfunctions of $\Delta_\mathbb{D}$ with Gaussian coefficients. However, it will be convenient for us to use another realization based on the following observation. Let, for $n\in\mathbb{Z}$
\begin{align*}
X_n(t)=\int_0^{2 \pi} e^{-in\theta}X_\mathbb{D}(e^{-t+i\theta})\tfrac{d\theta}{2\pi}.
\end{align*}
Then we deduce from \eqref{dirgreen}
\begin{align}\label{ipp}
\mathds{E}[X_n(t)X_m(t')]=\left\{
\begin{array}{ll} \tfrac{1}{2|n|}\delta_{n,-m}(e^{-|t-t'||n|}-e^{-(t+t')|n|})&\, n\neq 0\\
t\wedge t'&\, n=m=0
\end{array} \right..
\end{align}
Thus $\{X_n\}_{n\geq 0}$ are independent Gaussian processes with $X_{-n}=\bar X_n$ and $X_0$ is Brownian motion. We can and will realize them in a probability space $(\Omega_\mathbb{D},\Sigma_\mathbb{D},\P_\mathbb{D})$ s.t. $X_n(t)$ have
continuous sample paths. Then for fixed $t$
\begin{align}\label{XDdef}
X_\mathbb{D}(e^{-t+i\theta})=\sum_{n\in\mathbb{Z}} X_n(t)e^{in\theta}
\end{align}
takes values in $W^s(\mathbb{T})$ for $s<0$ a.s. (defined in \eqref{outline:ws}) and we can take the map $t\in\mathbb{R}^+\mapsto X_\mathbb{D}(e^{-t+i\cdot})\in W^s(\mathbb{T})$ to be continuous a.s. in $\Omega_\mathbb{D}$. Decompose $W^{s}(\mathbb{T})=\mathbb{R}\oplus W_0^{s}(\mathbb{T}) $ where $f\in W_0^{s}(\mathbb{T})$ has zero average: $\int f(\theta)d\theta=0$. Then
\begin{align*}
X_\mathbb{D}(e^{-t+i\cdot})=(B_t, Y_t)
\end{align*}
where $B_t$ is Brownian motion and $Y_t(\theta)=\sum_{n\neq 0} Y_n(t)e^{in\theta}$ is a continuous process in $W^{s}_0(\mathbb{T})$, independent of $B_t$.
The Dirichlet GFF $X_{\mathbb{D}^c}$ on the complement $\mathbb{D}^c$ of $\mathbb{D}$ can be constructed in the same way (in a probability space $(\Omega_{\mathbb{D}^c},\Sigma_{\mathbb{D}^c},\P_{\mathbb{D}^c})$) and we have the relation in law
\begin{align}\label{xdirref}
X_{\mathbb{D}^c}\stackrel{{\rm law}}=X_{\mathbb{D}}\circ\theta
\end{align}
or in other words $X_{\mathbb{D}^c}(e^{t+i\cdot})\stackrel{{\rm law}}=X_{\mathbb{D}}(e^{-t+i\cdot})$, $t\geq 0$.
\subsubsection{GFF on the Riemann sphere}
One can check that adding the covariances in the previous subsections we get that the field $X$ defined by \eqref{decomposeGFF} has the covariance
\begin{equation}\label{hatGformula}
\mathds{E} [ X(x)X(y)]
=\ln\frac{1}{|x-y|}+\ln|x|_++\ln|y|_+
\end{equation}
which coincides with \eqref{chatcov}. In the sequel, we suppose that the GFF on the Riemann sphere $X$ is defined on a probability space $(\Omega, \Sigma, \P)$ (with expectation $\mathds{E}[.]$) where $\Omega= \Omega_\mathbb{T} \times \Omega_\mathbb{D} \times \Omega_{\mathbb{D}^c} $, $\Sigma= \Sigma_\mathbb{T} \otimes \Sigma_\mathbb{D} \otimes \Sigma_{\mathbb{D}^c}$ and $\P$ is a product measure $\P=\P_\mathbb{T}\otimes \P_{\mathbb{D}}\otimes \P_{\mathbb{D}^c}$. At the level of random variables, the GFF decomposes as the sum of three independent variables
\begin{equation}\label{decomposGFF}
X= P\varphi+ X_\mathbb{D}+X_{\mathbb{D}^c}
\end{equation}
where $P\varphi$ is the harmonic extension of the GFF restricted to the circle $\varphi=X|_\mathbb{T}$ defined on $ (\Omega_\mathbb{T}, \Sigma_\mathbb{T},\P_\mathbb{T})$ and $X_\mathbb{D},X_{\mathbb{D}^c}$ are two independent GFFs on $\mathbb{D}$ and $\mathbb{D}^c$ with Dirichlet boundary conditions defined respectively on the probability spaces $ (\Omega_\mathbb{D}, \Sigma_\mathbb{D},\P_\mathbb{D})$ and $ (\Omega_{\mathbb{D}^c}, \Sigma_{\mathbb{D}^c}, \P_{\mathbb{D}^c})$\footnote{With a slight abuse of notations, we will assume that these spaces are canonically embedded in the product space $(\Omega,\Sigma)$ and we will identify them with the respective images of the respective embeddings.}. We will write $\mathds{E}_\varphi[\cdot ]$ for conditional expectation with respect to the GFF on the circle $\varphi$ (instead of $:=\mathds{E}[\cdot | \Sigma_\mathbb{T}]$). We will view $X$ in two ways in what follows: as a process $X_t\in W^s(\mathbb{T})$ ($s<0$)
\begin{align}\label{def:xt}
X_t(\theta)=X_\mathbb{D}(e^{-t+i\theta})1_{t>0}+X_{\mathbb{D}^c}(e^{-t+i\theta})1_{t<0}+(P\varphi)(e^{-|t|+i\theta}).
\end{align}
and as a random element in ${\mathcal D}'(\hat\mathbb{C})$.
In the sequel, we will denote for $t\geq 0$:
\begin{equation}\label{def:phit}
\varphi_t(\theta):=P\varphi(e^{-t+i\theta}) +Y_t(\theta).
\end{equation}
\subsection{Reflection positivity}
Let $\mathbb{D}=\{|z|<1\}$ be the unit disk. Recall the definition of the Liouville field \eqref{liouvillefield} which is given on $\mathbb{D}$ by $\phi(z)=c+X(z)=c+X_\mathbb{D}(z)+P\varphi(z)$. Let ${\mathcal A}_\mathbb{D}$ be the sigma-algebra on $\mathbb{R}\times\Omega$ generated by the maps $\phi\mapsto \langle \phi,g\rangle _\mathbb{D}$ for $g\in C_0^{\infty}(\mathbb{D})$ and we recall the notation $\langle \phi,g\rangle _\mathbb{D}=\int_\mathbb{D} g(z)\phi(z)dz$. Let ${\mathcal F}_\mathbb{D}$ be the set of
${\mathcal A}_\mathbb{D}$-measurable functions with values in $\mathbb{R}$.
For $F,G\in {\mathcal F}_\mathbb{D}$ such that the following quantities make sense (see below), we define (recall \eqref{Thetadef})
\begin{equation}\label{osform}
(F,G)_\mathbb{D}:=\langle \Theta F(\phi) \overline{G(\phi)}\rangle_{\gamma,\mu}.
\end{equation}
Reflection positivity is the statement that this sesquilinear form is non-negative, namely $(F,F)_\mathbb{D}\geq 0$. In what follows, we will study this statement separately for the GFF theory ($\mu=0$) and for LCFT.
\subsubsection{Reflection positivity of the GFF}
Here we assume $\mu=0$. Let $F,G\in {\mathcal F}_\mathbb{D}$ be nonnegative. The sesquilinear form \eqref{osform} becomes at $\mu=0$
\begin{align}\label{oszero}
(F,G)_{\mathbb{D},0}=\int_\mathbb{R} e^{-2Qc}\mathds{E} [ (\Theta F)(\phi)\overline{G(\phi)}] dc=\int_\mathbb{R} e^{-2Qc}\mathds{E} [ F(c+X^{(2)})\overline{G(c+X^{(1)})}] \dd c
\end{align}
where we denoted $X^{(i)}=X^{(i)}_\mathbb{D}+P\varphi$ with $X^{(1)}_\mathbb{D}= X_\mathbb{D}$ and $X^{(2)}_\mathbb{D}= X_{\mathbb{D}^c} \circ \theta$ which are two independent GFFs in the unit disk.
Hence by independence of $X_\mathbb{D}^{(i)}$
\begin{align}\nonumber
(F,G)_{\mathbb{D},0}&=\int_\mathbb{R} e^{-2Qc}\mathds{E} [ \mathds{E}_\varphi [ F(c+X^{(2)}_\mathbb{D}+P\varphi) ] \overline{ \mathds{E}_\varphi [ G(c+X^{(1)}_\mathbb{D}+P\varphi) ] } ]\dd c\nonumber
\\
&=\langle U_0F | U_0G\rangle_2 \label{oszero1}
\end{align}
where the map $U_0$ is defined by
\begin{align}\label{uxerodef}
(U_0F)(c,\varphi)=e^{-Qc}\mathds{E}_\varphi [ F(c+X_\mathbb{D}+P\varphi)]
\end{align}
and we recall $\mathds{E}_\varphi$ denotes expectation over $X_\mathbb{D}$.
Such a map is well defined on nonnegative $F\in {\mathcal F}_\mathbb{D}$ and extended to ${\mathcal F}_\mathbb{D}^{0,\infty}$, which is defined as the space of $F\in {\mathcal F}_\mathbb{D}$ such that $U_0|F|<\infty$ $dc\otimes\P_\mathbb{T}$-almost everywhere. Let ${\mathcal F}_\mathbb{D}^{0,2}=\{F\in {\mathcal F}_\mathbb{D}^{0,\infty}\,|\, \|U_0F\|_2<\infty\}$.
\begin{proposition}\label{OS}
The sesquilinear form \eqref{oszero} extends to ${\mathcal F}_\mathbb{D}^{0,2}$. This extension is nonnegative
\begin{align}\label{scalar10}
\forall F\in{\mathcal F}_\mathbb{D}^{0,2},\quad\quad (F,F)_{\mathbb{D},0}\geq 0.
\end{align}
Let $\overline{{\mathcal F}_\mathbb{D}^{0,2}/{\mathcal N}_0^0}$ be the Hilbert space completion of the pre-Hilbert space ${\mathcal F}_\mathbb{D}^{0,2}/{\mathcal N}_0^0$ with ${\mathcal N}_0^0:=\{F\in {\mathcal F}_\mathbb{D}^{0,2}\,|\, (F,F)_{\mathbb{D},0}=0\}$. The map $U_0$
in \eqref{uxerodef} descends to a unitary map $U_0 :\overline{{\mathcal F}_\mathbb{D}^{0,2}/{\mathcal N}_0^0}\to L^2(\mathbb{R} \times\Omega_\mathbb{T})$.
\end{proposition}
\proof
By \eqref{oszero1} $U_0$ descends to an isometry on ${\mathcal F}_\mathbb{D}^{0,2}/{\mathcal N}_0^0$ so we need to show it is onto. We take $F$ of the form
\begin{align}\label{denseset}
F(c+X)=\rho(\langle c+X,g\rangle _\mathbb{D})e^{\langle c+X,f\rangle _\mathbb{D}-\frac{_1}{^2} \langle f,G_\mathbb{D} f\rangle_\mathbb{D}}.
\end{align}
with $\rho\in C_0^\infty(\mathbb{R})$ and $g,f\in C_0^{\infty}(\mathbb{D})$ with the further conditions that
$g$ is rotation invariant i.e. $g(re^{i\theta})=g(r)$, and that $\int_0^{2\pi} f(re^{i\theta})d\theta=0$ for all $r\in [0,1]$.
Then
$ \langle c,f \rangle _\mathbb{D}=0$ and $\langle P\varphi,g \rangle_\mathbb{D}=0$ and we get
\begin{align}\nonumber
(U_0 F)(c,\varphi)&=e^{-Qc}e^{ \langle P\varphi,f \rangle_\mathbb{D}}\mathds{E} [ \rho(c+\langle X_\mathbb{D},g\rangle_\mathbb{D})e^{\langle X_\mathbb{D},f\rangle_\mathbb{D}-\frac{_1}{^2} \langle f,G_\mathbb{D} f\rangle_\mathbb{D}} ]\\&=e^{-Qc}e^{\langle P\varphi,f\rangle_\mathbb{D}}\mathds{E} [ \rho(c+\langle X_\mathbb{D},g\rangle_\mathbb{D})]\label{denseset1}
\end{align}
where we observed that $\langle X_\mathbb{D},g\rangle_\mathbb{D}$ and $\langle X_\mathbb{D},f\rangle_\mathbb{D}$ are independent as their covariance vanishes. Indeed, by rotation invariance of $g$, the function $\mathcal{O}(r,\theta):= \int_\mathbb{D} g(x) G_\mathbb{D}(x,r e^{i\theta})\dd x$ does not depend on $\theta$ hence
\begin{align*}
\mathds{E}[ \langle X_\mathbb{D},g\rangle_\mathbb{D} \langle X_\mathbb{D},f\rangle_\mathbb{D} ] & = \int_{\mathbb{D}} \int_{\mathbb{D}} G_\mathbb{D}(x,y) g(x) f(y) \dd y \dd x \\
& = \int_{0}^1 r \int_0^{2 \pi} f(re^{i \theta} ) \mathcal{O}(r,\theta) \dd \theta \dd r \\
& = \int_{0}^1 r \mathcal{O}(r,0) \int_0^{2 \pi} f(re^{i \theta} ) \dd \theta \dd r=0.
\end{align*}
Let $h\in C^\infty(\mathbb{T})$, $f_\epsilon\in C_0^{\infty}(\mathbb{D})$ and $g_\epsilon$ be given by $g_\epsilon(re^{i\theta})=\epsilon^{-1}\eta(\frac{1-r}{\epsilon})$,
$f_\epsilon= hg_\epsilon$
where $\eta$ is a smooth bump with support on $[1,2]$ and total mass one. Then
$\lim_{\epsilon\to 0} \langle P\varphi,f_\epsilon \rangle_\mathbb{D}= \langle \varphi,h \rangle_\mathbb{T}$ and $\lim_{\epsilon\to 0}\mathds{E} (\langle X_\mathbb{D},g_\epsilon\rangle_\mathbb{D}^2)=0$
so that
\begin{align*}
\lim_{\epsilon\to 0}(U_0 F_\epsilon)(c,\varphi)=e^{-Qc}\rho(c)e^{ \langle \varphi,h \rangle_\mathbb{T}}
\end{align*}
where the convergence is in $L^2(\mathbb{R} \times \Omega_\mathbb{T})$.
Thus the functions $e^{-Qc}\rho(c)e^{\langle \varphi,h \rangle _\mathbb{T}}
$ are in the image of $U_0$ for all $\rho\in C_0^\infty(\mathbb{R})$ and $h\in C^\infty(\mathbb{T})$. Since the linear span of these is dense in $L^2(\mathbb{R} \times \Omega_\mathbb{T})$ the claim follows. \qed
\begin{remark}\label{uoext}
Note that this argument shows that $U_0$ extends from ${\mathcal F}_\mathbb{D}^{0,2}$ to functionals of form
$F(c+X_{|\mathbb{T}})$ and then $$(U_0F)(c,\varphi)=e^{-Qc}F(c+\varphi).$$
\end{remark}
\subsubsection{Reflection positivity of LCFT}
Next we want to show reflection positivity for the LCFT expectation \eqref{FL1} with $\mu>0$. The GMC measure $ M_\gamma$ defined in \eqref{GMCintro} can also be constructed as the martingale limit
\begin{equation}
M_\gamma(\dd x)=\lim_{N\to\infty}e^{ \gamma X_N(x)-\tfrac{\gamma^2}{2}\mathds{E}[X_N(x)^2] }|x|_+^{-4}\, \dd x.
\end{equation}
where in $X_N$ we cut off the series \eqref{XDdef} and \eqref{harmonic} defining $X_\mathbb{D}^{(i)}$ and $P\varphi$ respectively to a finite number of terms $|n|\leq N$. We claim that
\begin{align*}
M_\gamma(\hat\mathbb{C})=M^{(1)}_\gamma(\mathbb{D})+ M^{(2)}_\gamma(\mathbb{D})
\end{align*}
where $M^{(i)}_\gamma$ are the GMC measures of the fields $X^{(i)}=X^{(i)}_\mathbb{D}+P\varphi$, $i=1,2$. Indeed, we take the limit $N\to\infty$ in
\begin{align*}
\int_{\mathbb{D}^c}e^{ \gamma X_N(x)-\tfrac{\gamma^2}{2}\mathds{E}[X_N(x)^2] }|x|^{-4}\,\dd x =\int_{\mathbb{D}^c}e^{ \gamma X^{(2)}_N(\frac{1}{\bar x})-\tfrac{\gamma^2}{2}\mathds{E}[X^{(2)}_N(\frac{1}{\bar x})^2] }|x|^{-4}\,\dd x=\int_{\mathbb{D}}e^{ \gamma X^{(2)}_N(x)-\tfrac{\gamma^2}{2}\mathds{E}[X^{(2)}_N(x)^2] }\,\dd x.
\end{align*}
Thus, for nonnegative $F,G\in {\mathcal F}_\mathbb{D}$
\begin{align*}
(F,G)_\mathbb{D}=\langle \Theta F\overline{ G}\rangle_{\gamma, \mu}=\langle U_0(IF)\, | \, U_0(IG)\rangle_2
\end{align*}
where
\begin{align*}
I=e^{-\mu e^{\gamma c}M_\gamma(\mathbb{D})}.
\end{align*}
Let $ {\mathcal F}_\mathbb{D}^\infty$ be the space of $F\in {\mathcal F}_\mathbb{D}$ such that $U_0(|F|I)<\infty$ $dc\otimes\P_\mathbb{T}$-almost everywhere. Let ${\mathcal F}_\mathbb{D}^{2}=\{F\in {\mathcal F}_\mathbb{D}^{\infty}\,|\, \|U_0(FI) \|_2<\infty\}$. From the above considerations, we arrive at:
\begin{proposition}\label{OS1}
The sesquilinear form \eqref{osform} extends to ${\mathcal F}_\mathbb{D}^2$, is nonnegative and given by
\begin{align}\label{scalar1}
(F,G)_\mathbb{D}=\langle UF \,|\, UG \rangle _{2}
\end{align}
for all $F,G\in{\mathcal F}_\mathbb{D}^2$ where
\begin{align}\label{udeff}
(UF)(c,\varphi)=(U_0(FI))(c,\varphi)=e^{-Qc}\mathds{E}_\varphi [ F(c+X)e^{-\mu e^{\gamma c}M_\gamma(\mathbb{D})}],
\end{align}
$X=X_\mathbb{D}+P\varphi$ and $M_\gamma$ is its GMC measure. Define ${\mathcal N}_0:=\{F\in {\mathcal F}_\mathbb{D}^2\,|\, (F,F)_\mathbb{D}=0\}$. Then $U$ descends to a unitary map
$$U :{\mathcal H}_\mathbb{D}\to L^2(\mathbb{R} \times\Omega_\mathbb{T})$$
with ${\mathcal H}_\mathbb{D}:=\overline{{\mathcal F}_\mathbb{D}^2/{\mathcal N}_0}$ (the completion with respect to $(.,.)_\mathbb{D}$).
\end{proposition}
\proof
We need to show $U$ is onto. This follows from $U(I^{-1}F)=U_0F$ and the fact that $U_0$ is onto.\qed
\begin{remark}
From Remark \ref{uoext} we conclude that $U$ extends from ${\mathcal F}_\mathbb{D}$ to functionals $F(c+X_{|\mathbb{T}})$ for which
$$(UF)(c,\varphi)=F(c+\varphi)\times (U1)(c,\varphi)
$$
or, in other words, for $f\in L^2(\mathbb{R} \times \Omega_\mathbb{T})$
\begin{align}\label{Uinverse}
U^{-1}f=(U1)^{-1}f
\end{align}
\end{remark}
\subsection{Dilation Semigroup}
Recall the action of the dilation map \eqref{dilationS} on ${\mathcal F}_\mathbb{D}$.
The reason for the $Q \ln |q|$-factor is the M\"obius invariance
property of LCFT \cite{DKRV}
\begin{proposition}\label{int:mobius} Let $\psi:\hat\mathbb{C}\to\hat\mathbb{C}$ be a M\"obius map and let $F$ be a functional on ${\mathcal D}'(\hat{\mathbb{C}})$ so that $\langle|F(\phi)|\rangle_{\gamma,\mu}<\infty$. Then
\begin{align}
\langle F(\phi \circ \psi +Q \ln |\psi'|)\rangle_{\gamma,\mu}=\langle F(\phi)\rangle_{\gamma,\mu}.
\label{moobi}
\end{align}
\end{proposition}
We have then:
\begin{proposition} \label{dilationsemi} The map $S_{q}$ descends to a contraction $S_{q}:{\mathcal H}_\mathbb{D}\to {\mathcal H}_\mathbb{D}$:
\begin{align}\label{conttra}
\forall F\in {\mathcal H}_\mathbb{D},\quad \quad (S_{q}F,S_{q}F)_\mathbb{D}\leq (F,F)_\mathbb{D}.
\end{align}
The adjoint of $S_q$ is $S_q^\ast=S_{\bar q}$ i.e. for all $F,G\in {\mathcal H}_\mathbb{D}$
\begin{align}
(S_q F, G)_\mathbb{D}=(F,S_{\bar q}G)_\mathbb{D}.
\label{adjointsq}
\end{align}
Finally the map $q\in\mathbb{D}\mapsto S_{q}$ is strongly continuous and satisfies the group property
\begin{align}
S_{q}S_{q'}=S_{qq'}
\label{gropupprpo}
\end{align}
so that $q\in\mathbb{D}\mapsto S_{q}$ is a strongly continuous contraction semigroup.
\end{proposition}
\begin{proof}
Let us start with \eqref{adjointsq}.
It suffices to consider $F,G\in {\mathcal F}^2_\mathbb{D}$ real.
By definition
\begin{align*}
(S_q F, G)_\mathbb{D} =\langle F( \phi\circ\theta\circ s_q+ Q \ln |q| -2Q\ell\circ s_q ) G( \phi) \rangle_{\gamma,\mu}:=\langle \tilde F( \phi ) G( \phi) \rangle_{\gamma,\mu}
\end{align*}
where $\ell(z):=\ln|z|$. Applying Proposition \ref{int:mobius} with $\psi=s_{\bar q}$ we get
\begin{align*}
\langle \tilde F( \phi ) G( \phi) \rangle_{\gamma,\mu}= \langle \tilde F( \phi \circ s_{\bar q}+Q\ln |q|) G( \phi \circ s_{\bar q}+Q\ln |q|) \rangle_{\gamma,\mu}
\end{align*}
But
\begin{align*}
\tilde F( \phi \circ s_{\bar q}+Q\ln |q|) =F( \phi\circ s_{\bar q}\circ\theta\circ s_q+2Q\ln|q|-2Q\ell\circ s_q)=F(\phi\circ\theta-2Q\ell)
\end{align*}
and therefore $ \langle \tilde F( \phi ) G( \phi) \rangle_{\gamma,\mu}= ( F, S_{\bar q}G)_\mathbb{D}$ as claimed.
The group property \eqref{gropupprpo} is obvious.
To prove the contraction, denote for $F\in {\mathcal F}_\mathbb{D}$, the seminorm $\|F\|_\mathbb{D}:=(F,F)_\mathbb{D}^\frac{_1}{^2}$. Then we have
\begin{align*}
\|S_{q}F\|_\mathbb{D}&=
(S_{q}F,S_{q}F)_\mathbb{D}^\frac{_1}{^2}=(F,S_{|q|^{2}}F)_\mathbb{D}^\frac{_1}{^2}\leq \|F\|^\frac{_1}{^2}_\mathbb{D}\|S_{{|q|^{2}}}F\|^\frac{_1}{^2}_\mathbb{D}
\end{align*}
Iterating this inequality we obtain
\begin{align*}
\|S_{q}F\|_\mathbb{D}\leq \|F\|_\mathbb{D}^{1-2^{-k}}\|S_{|q|^{2^k}}F\|_\mathbb{D}^{{2^{-k}}}.
\end{align*}
Recall that
\begin{align*}
(G,G)_\mathbb{D}=
\langle U_0(IG) | U_0(IG) \rangle _{2}=
\int_\mathbb{R} e^{-2Qc}\mathds{E}[ \mathds{E}_\varphi[ IG] ^2]dc
\end{align*}
and then by Cauchy-Schwarz applied to $\mathds{E}_\varphi[ . ]$
\begin{align*}
\mathds{E} [\mathds{E}_\varphi [ IG ]^2 ]= \mathds{E}[\mathds{E}_\varphi[I^{\frac{_1}{^2}}G I^{\frac{_1}{^2}}]^{2}]\leq \mathds{E} [ \mathds{E}_\varphi [ IG^{2} ]\mathds{E}_\varphi[ I ] ]
\end{align*}
so that
\begin{align*}
(G,G)_\mathbb{D}\leq \langle U_{0}(IG^{2}) | U_{0}I\rangle_2=\langle U G^{2}| U 1\rangle_2 =\langle G^{2}\rangle_{\gamma,\mu}.
\end{align*}
Hence
\begin{align*}
\|S_{q}F\|_\mathbb{D}\leq \|F\|_\mathbb{D}^{1-2^{-k}}\langle (S_{|q|^{2^k}}F)^{2}\rangle_{\gamma,\mu}^{{2^{-k-1}}}= \|F\|_\mathbb{D}^{1-2^{-k}}\langle F^{2}\rangle_{\gamma,\mu}^{{2^{-k-1}}}
\end{align*}
where we used again the M\"obius invariance of $\langle \cdot\rangle_{\gamma,\mu}$. Taking $k\to\infty$ we conclude $ \|S_{q}F\|_\mathbb{D}
\leq \|F\|_\mathbb{D}$ for
$F\in {\mathcal F}_\mathbb{D}$ which satisfy $ \langle F^2\rangle_{\gamma,\mu}<\infty$. Such $F$ form a dense set in ${\mathcal F}_\mathbb{D}$. Indeed, let $F\in {\mathcal F}_\mathbb{D}$ with $\|F\|_\mathbb{D}<\infty$ and let $F_R=F1_{|F|<R}$. Then $ \langle F_R^2\rangle_{\gamma,\mu}<\infty$ and
\begin{align*}
\|F-F_R\|_\mathbb{D}^2=\|F1_{|F|\geq R}\|_\mathbb{D}^2\leq \|F\|_\mathbb{D}^2\|1_{|F|\geq R}\|_\mathbb{D}^2
\end{align*}
and $\|1_{|F|\geq R}\|_\mathbb{D}^2=\langle 1_{|F|\geq R}\Theta 1_{|F|\geq R}\rangle_{\gamma,\mu}\leq \frac{1}{R^2}\langle |F|\Theta| F|\rangle_{\gamma,\mu}\to 0$ as $R\to\infty$.
Hence \eqref{conttra} holds for all $F\in{\mathcal F}_\mathbb{D}$ with $\|F\|_\mathbb{D}<\infty$. This implies $S_q$ maps the null space ${\mathcal N}_0$ to ${\mathcal N}_0$ and thus $S_q$ extends to ${\mathcal H}_\mathbb{D}$ so that \eqref{conttra} holds.
Finally to prove strong continuity, by the semigroup property it suffices to prove it at $q=1$ and by the contractive property we need to prove it only on a dense set. Since
\begin{align*}
\|S_{q}F-F\|_\mathbb{D}^{2}= \|S_{q}F\|_\mathbb{D}^{2}+ \|F\|_\mathbb{D}^{2}- (S_{q}F,F)_\mathbb{D}-(F,S_{q}F)_\mathbb{D}\leq 2\|F\|_\mathbb{D}^{2}- (S_{q}F,F)_\mathbb{D}-(F,S_{q}F)_\mathbb{D}
\end{align*}
it suffices to prove $(S_{q}F,F)_\mathbb{D}\to (F,F)_\mathbb{D}$ as $q\to 1$ on a dense set of $F$.
Take $F=GI^{-1}$ so that $UF=U_0G$. Then
\begin{align*}
(F,S_{q}F)_\mathbb{D}
=\int_\mathbb{R} e^{-2Qc}\mathds{E}( \Theta GG
e^{-\mu e^{\gamma c}M_{\gamma}(\mathbb{D}\setminus|q|\mathbb{D})})dc
\end{align*}
which converges as $q\to 1$ to $(F,F)_\mathbb{D}$ (use $\P (M_{\gamma}(\mathbb{D}\setminus|q|\mathbb{D})>\epsilon)\to 0$ as $q\to 1$).
\end{proof}
In particular we can form two one-parameter (semi) groups from $S_q$. Taking $q=e^{-t}$ we define $T_{t}=S_{e^{-t}}$. Then $T_{t+s}=T_{t}T_{s}$ so $T_{t}$ is a strongly continuous contraction semigroup on the Hilbert space ${\mathcal H}_\mathbb{D}$. Hence by the Hille-Yosida theorem
\begin{align}\label{hstar}
US_{e^{-t}}U^{-1}=e^{-t{\bf H_\ast}}
\end{align}
where the
generator ${\bf H_\ast}$ (in the case $\mu=0$, we will write ${\bf H}_\ast^0$) is a positive self-adjoint operator with domain ${\mathcal D}({\bf H}_\ast)$ consisting of $\psi\in L^2(\mathbb{R} \times\Omega_\mathbb{T})$ such that $\lim_{t\to 0}\frac{1}{t}(e^{-t{\bf H_\ast}}-1)\psi$ exists in $L^2(\mathbb{R} \times\Omega_\mathbb{T})$. The operator $\bf{H_\ast}$ is the {\it Hamiltonian} of LCFT. Taking $q=e^{i\alpha}$ we get that $\alpha\mapsto S_{e^{i\alpha}}$ is a strongly continuous unitary group so that by Stone's theorem
$$
US_{e^{i\alpha}}U^{-1}=e^{i\alpha \Pi_\ast}
$$
where $\Pi_\ast$ is the self adjoint {\it momentum} operator of LCFT. As we will have no use for $ \Pi_\ast$ in this paper we will concentrate on ${\bf H_\ast}$ from now on. Let us emphasize here that it is defined in the full range $\gamma \in (0,2)$. One of our next tasks will be to show that for $\gamma \in (0,2)$: ${\bf H_\ast}={\bf H}$, where the Hamiltonian ${\bf H}$ will be defined as the Friedrichs extension of \eqref{Hdef}.
\section{Gaussian Free Field: dynamics and CFT aspects} \label{sec:GFFCFT}
\subsection{Fock space and harmonic oscillators}\label{sec:fock}
The Hilbert space $L^2(\Omega_\mathbb{T},\P_\mathbb{T})$ (denoted from now on by $L^2(\Omega_\mathbb{T})$) has the structure of a Fock space. Let ${\mathcal P}\subset L^2(\Omega_\mathbb{T})$ (resp. $ \mathcal{S}\subset L^2(\Omega_\mathbb{T})$) be the linear span of the functions of the form $F(x_1,y_1, \cdots, x_N,y_N)$ for some $N \geq 1$ where $F$ is a polynomial on $\mathbb{R}^{2N}$ (resp. $F\in C^\infty((\mathbb{R}^2)^N)$ with at most polynomial growth at infinity for $F$ and its derivatives). Obviously $\mathcal{P}\subset\mathcal{S}$ and they are both dense in $L^2(\Omega_\mathbb{T})$.
On $\mathcal{S}$ we define the annihilation and creation operators
\begin{align}\label{crea}
\mathbf{X}_n&=\partial_{x_n},\ \ \ \mathbf{X}_n^\ast=-\partial_{x_n}+x_n,\\
\mathbf{Y}_n&=\partial_{y_n},\ \ \ \mathbf{Y}_n^\ast=-\partial_{y_n}+y_n.
\end{align}
They are formal adjoints of each other (see e.g. \cite[VIII.11]{rs1} for more about the closure of these operators, which we will not need here) and form a representation of the algebra of canonical commutation relations on $\mathcal{S}$:
\begin{align}\label{ccr}
[\mathbf{X}_n,\mathbf{X}_m^\ast]=\delta_{nm}=[\mathbf{Y}_n,\mathbf{Y}_m^\ast]
\end{align}
with other commutators vanishing. The operator $ \mathbf{P}$ is then given on $\mathcal{S}$ as
\begin{align}\label{hdefi}
\mathbf{P}=\sum_{n=1}^\infty n(\mathbf{X}_n^\ast \mathbf{X}_n+\mathbf{Y}_n^\ast \mathbf{Y}_n)
\end{align}
(only a finite number of terms in the sum contribute when acting on $\mathcal{S}$) and extends uniquely to an unbounded
self-adjoint positive operator on $L^2(\Omega_\mathbb{T})$: this follows from the fact that we can find a complete system of eigenfunctions in ${\mathcal P}$, as described now. Let $\mathcal{N}$ be the set of non-negative integer valued sequences with only a finite number of non null integers, namely ${\bf k}=(k_1,k_2,\dots)\in \mathcal{N}$ iff ${\bf k}\in \mathbb{N}^{\mathbb{N}_+}$ and $k_n=0$ for all $n$ large enough. For $\bf k,\bf l \in\mathcal{N}$ define the polynomials (here $1\in L^2(\Omega_\mathbb{T})$ is the constant function)
\begin{align}\label{fbasishermite}
\hat{\psi}_{{\bf k}{\bf l}}=\prod_n ( \mathbf{X}_n^\ast)^{k_n}( \mathbf{Y}_n^\ast)^{l_n}1 \in {\mathcal P}.
\end{align}
Equivalently, $\hat{\psi}_{{\bf k}{\bf l}}= \prod_n {\rm He}_{k_n}(x_n) {\rm He}_{l_n}(y_n)$ where $({\rm He}_k)_{k \geq 0}$ are the standard Hermite polynomials. Then, using \eqref{ccr}, one checks that these are eigenstates of $\bf P$:
\begin{align}\label{fbasis2}
\mathbf{P}\hat{\psi}_{{\bf k}{\bf l}}=(|{\bf k}|+|{\bf l}|)
\hat{\psi}_{{\bf k}{\bf l}}=\lambda_{{\bf k}{\bf l}}\hat{\psi}_{{\bf k}{\bf l}}.
\end{align}
where we use the notations
\begin{equation}\label{firstlength}
|{\bf k}|:=\sum_{n=1}^\infty nk_n,\quad \lambda_{{\bf k}{\bf l}}:=|{\bf k}|+|{\bf l}|
\end{equation}
for ${\bf k},{\bf l}\in{\mathcal N}$.
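For instance, taking ${\bf k}=(1,0,0,\dots)$ and ${\bf l}={\bf 0}$ gives $\hat{\psi}_{{\bf k}{\bf l}}=\mathbf{X}_1^\ast 1=x_1$ and indeed $\mathbf{P}x_1=\mathbf{X}_1^\ast\mathbf{X}_1x_1=x_1$, in agreement with $\lambda_{{\bf k}{\bf l}}=1$; similarly ${\bf k}=(2,0,0,\dots)$ gives $\hat{\psi}_{{\bf k}{\bf l}}=(\mathbf{X}_1^\ast)^2 1=x_1^2-1={\rm He}_2(x_1)$ with eigenvalue $\lambda_{{\bf k}{\bf l}}=2$.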
It is also well known that the family $\{\psi_{{\bf k}{\bf l}}=\hat{\psi}_{{\bf k}{\bf l}}/\|\hat{\psi}_{{\bf k}{\bf l}}\|_{L^2(\Omega_\mathbb{T})}\}$ (where $\|\cdot \|_{L^2(\Omega_\mathbb{T})}$ is the standard norm in $L^2(\Omega_\mathbb{T})$) forms an orthonormal basis of $L^2(\Omega_\mathbb{T})$. Finally we claim
\begin{proposition}\label{semigrouppt}
The operator $\mathbf{P}$ generates a strongly continuous semigroup of self-adjoint contractions $(e^{-t\mathbf{P}})_{t\geq 0}$ on $L^2(\Omega_\mathbb{T}) $ with probabilistic representation, for $t\geq 0$,
$$\forall f\in L^2(\Omega_\mathbb{T}),\quad e^{-t\mathbf{P}}f=\mathds{E}_\varphi[f(\varphi_t)] $$
with $(\varphi_t)_{t\geq 0}$ the process defined by \eqref{def:phit}.
\end{proposition}
\begin{proof}
The fact that $\mathbf{P}$ generates a strongly continuous semigroup of self-adjoint contractions results from the fact that $\mathbf{P}$ is self-adjoint and nonnegative. Since $(\psi_{{\bf k}{\bf l}})_{{\bf k},{\bf l}}$ form an orthonormal basis of $L^2(\Omega_\mathbb{T})$, it suffices to study the semigroup on this basis. Obviously $ e^{-t\mathbf{P}}\psi_{{\bf k}{\bf l}}=e^{-\lambda_{{\bf k}{\bf l}}t}\psi_{{\bf k}{\bf l}}$. Furthermore the decomposition \eqref{def:xt} together with the covariance structure \eqref{ipp} (and recalling the decomposition \eqref{GFFcircle}+\eqref{varphin} of the field $\varphi$) entails that the law of $\varphi_t$ (see \eqref{def:phit}) conditionally on $\varphi$ is given by
$$\varphi_t(\theta)= \sum_{n>0}\frac{x_n(t)+iy_n(t)}{2\sqrt{n}}e^{in\theta}+\sum_{n<0}\frac{x_{-n}(t)-iy_{-n}(t)}{2\sqrt{-n}}e^{in\theta}$$
where $x_n(t),y_n(t)$ are independent Ornstein-Uhlenbeck processes. In particular for each fixed $t$, there are two independent sequences of independent standard Gaussians $(\bar{x}_n)_n$ and $(\bar{y}_n)_n$ such that, for $n\geq 1$
$$x_n(t)\stackrel{\text{law cond. on }\varphi}{=}e^{-nt}x_n+\sqrt{1-e^{-2tn} }\bar{x}_n, \quad\quad y_n(t)\stackrel{\text{law cond. on }\varphi}{=}e^{-nt}y_n+\sqrt{1-e^{-2tn} }\bar{y}_n.$$
Finally, we recall the following elementary result: given $Y$ a standard Gaussian random variable, the standard Hermite polynomials $({\rm He}_k)_{k \geq 0}$ on $\mathbb{R}$, $x\in\mathbb{R}$ and $u,v\geq 0$ such that $u^2+v^2=1$ then
\begin{equation}\label{hermiteOU}
\mathds{E}[{\rm He}_k(ux+vY)]=u^k{\rm He}_k(x).
\end{equation}
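For $k=1,2$ this identity simply reads $\mathds{E}[ux+vY]=ux$ and $\mathds{E}[(ux+vY)^2-1]=u^2x^2+v^2-1=u^2(x^2-1)$ since $u^2+v^2=1$; the general case follows for instance from the generating function $\sum_{k\geq 0}\frac{t^k}{k!}{\rm He}_k(x)=e^{tx-\frac{t^2}{2}}$.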
Using this lemma and our description of the law of $X_t$, it is then plain to deduce that
$$e^{-t\mathbf{P}}\psi_{{\bf k}{\bf l}}=\mathds{E}_\varphi[\psi_{{\bf k}{\bf l}}(\varphi_t)] =e^{-\lambda_{{\bf k}{\bf l}}t}\psi_{{\bf k}{\bf l}}.$$
Hence our claim.
\end{proof}
\begin{remark}\label{eigP}
List the eigenvalues $\lambda= |{\bf k}|+|{\bf l}|$ of $\bf P$ in increasing order $\lambda_1<\lambda_2<\dots$ and let $P_i$ be the corresponding spectral projectors. Since each $\lambda_i$ is of finite multiplicity and $\lambda_i\to\infty$ as $i\to\infty$, the semigroup $e^{-t\bf P}=\sum_ie^{-t \lambda_i}P_i$ and the resolvent $(z-{\bf P})^{-1}=\sum_i(z-\lambda_i)^{-1}P_i$ are compact if $t>0$ and $\Im z\neq 0$ since they are norm convergent limits of finite rank operators.
\end{remark}
\subsection{Quadratic form}
Introduce the bilinear form (with associated quadratic form still denoted by $\mathcal{Q}_0$)
\begin{equation}\label{defQ0}
\forall u,v\in \mathcal{C},\quad \mathcal{Q}_0(u,v):=\tfrac{1}{2}\mathds{E}\int_{\mathbb{R}} \Big( \partial_c u \partial_c \bar{v}+Q^2u\bar{v}+ 2( \mathbf{P} u)\bar{v} \Big)\dd c
\end{equation}
with
\begin{equation}\label{core}
\mathcal{C}=\mathrm{Span}\{ \psi(c)F\,|\,\psi\in C_c^\infty(\mathbb{R})\text{ and }F\in\mathcal{S} \}.
\end{equation}
We claim
\begin{proposition}\label{FQ0:GFF}
The quadratic form \eqref{defQ0} is closable (and we still denote its closure by $\mathcal{Q}_0$ with domain $\mathcal{D}(\mathcal{Q}_0)$) and lower semibounded: $\mathcal{Q}_0(u)\geq Q^2\|u\|_2^2/2$.
It determines uniquely a self-adjoint operator $\mathbf{H}^0 $, called the \emph{Friedrichs extension}, with domain denoted by $\mathcal{D}(\mathbf{H}^0)$ such that:
$$\mathcal{D}(\mathbf{H}^0 )=\{u\in \mathcal{D}(\mathcal{Q}_0)\, |\, \exists C>0,\forall v\in \mathcal{D}(\mathcal{Q}_0),\,\,\, |\mathcal{Q}_0(u,v)|\leq C\|v\|_2\}$$
and for $u\in \mathcal{D}(\mathbf{H}^0)$, $\mathbf{H}^0 u$ is the unique element in $L^2(\mathbb{R}\times \Omega_\mathbb{T})$ satisfying
$$\mathcal{Q}_0(u,v)=\langle \mathbf{H}^0 u|v\rangle_2 .$$
\end{proposition}
\begin{proof} Recall that the closability of the quadratic form means that its completion with respect to the $ \mathcal{Q}_0$-norm embeds continuously and injectively in $L^2(\mathbb{R}\times \Omega_\mathbb{T})$.
Its completion is the vector space consisting of equivalence classes of Cauchy sequences of
$\mathcal{C}$ for the $\mathcal{Q}_0$-norm under the equivalence relation
$u\sim v$ iff $\mathcal{Q}_0(u_n-v_n)\to 0$ as $n\to \infty$. This space is a Hilbert space. Let us show that it embeds injectively and continuously in $L^2(\mathbb{R}\times \Omega_\mathbb{T})$ by the map $j: [u]\mapsto \lim_{n\to \infty}u_n$. Indeed, $u_n$ is Cauchy for $L^2(\mathbb{R}\times \Omega_\mathbb{T})$ since $\|u_n-u_m\|^2_{2}\leq 2Q^{-2}\mathcal{Q}_0(u_n-u_m)$, it thus converges in $L^2(\mathbb{R}\times \Omega_\mathbb{T})$. Moreover $\|\lim_{n}u_n\|_{2}^2\leq 2Q^{-2}\lim_{n}\mathcal{Q}_0(u_n)$
thus $j$ is bounded. Finally if $j([u])=0$, then for $(u_n)_n$ a representative Cauchy sequence of $[u]$, we have $u_n\to 0$ in $L^2(\mathbb{R}\times \Omega_\mathbb{T})$ and using
\[ \frac{1}{2}\|\partial_c (u_n-u_m)\|_{2}^2+
\|{\bf P}^{1/2}(u_n-u_m)\|_{2}^2 \leq \mathcal{Q}_0(u_n-u_m,u_n-u_m),\]
one has the convergence in $L^2(\mathbb{R}\times \Omega_\mathbb{T})$ of $\partial_cu_n\to v$ and ${\bf P}^{1/2}u_n\to w$ for some $v,w \in L^2(\mathbb{R}\times \Omega_\mathbb{T})$. For each $\varphi \in \mathcal{C}$, we have as $n\to \infty$
\[ \langle \partial_cu_n,\varphi\rangle_{2}=\langle u_n,-\partial_c\varphi\rangle\to 0, \quad
\langle {\bf P}^{1/2}u_n, \varphi\rangle_2=\langle u_n,{\bf P}^{1/2}\varphi\rangle_2\to 0,
\]
thus $v=w=0$ by density of $\mathcal{C}$ in $L^2(\mathbb{R}\times \Omega_\mathbb{T})$. This implies that $\mathcal{Q}_0(u_n)\to 0$ and thus $j$ is injective.
Let us now consider the closure $\mathcal{Q}_0$ with domain $\mathcal{D}(\mathcal{Q}_0)$. Obviously it is closed and lower semi-bounded $\mathcal{Q}_0(u)\geq Q^2\|u\|_2^2/2$ so that the construction of the Friedrichs extension then follows from \cite[Theorem 8.15]{rs1}.
\end{proof}
If we let $\mathcal{D}(\mathcal{Q}_0)'$ be the dual to $\mathcal{D}(\mathcal{Q}_0)$ (i.e. the space of bounded conjugate linear functionals on $\mathcal{D}(\mathcal{Q}_0)$), the injection $L^2(\mathbb{R}\times \Omega_\mathbb{T})\subset \mathcal{D}(\mathcal{Q}_0)'$ is continuous and the operator ${\bf H}^0$ can be extended as a bounded isomorphism
\[{\bf H}^0:\mathcal{D}(\mathcal{Q}_0)\to \mathcal{D}(\mathcal{Q}_0)'.\]
We also have $\mathcal{D}({\bf H}^0)=\{ u\in\mathcal{D}(\mathcal{Q}_0)\,|\, {\bf H}^0u\in L^2(\mathbb{R}\times \Omega_\mathbb{T})\}$ and $({\bf H}^0)^{-1}:L^2(\mathbb{R}\times \Omega_\mathbb{T})\to \mathcal{D}({\bf H}^0)$ is bounded. Furthermore, by the spectral theorem, it generates a strongly continuous contraction semigroup of self-adjoint operators $(e^{-t \mathbf{H}^0 } )_{t\geq 0}$ on $L^2(\mathbb{R}\times\Omega_\mathbb{T})$.
\subsection{Dynamics of the GFF}
The goal of this subsection is to prove the relation ${\bf H}_\ast^0={\bf H}^0$, i.e. we want to show
\begin{proposition}
For all $f\in L^2(\mathbb{R} \times\Omega_\mathbb{T})$ and all $t\geq 0$
\begin{align}\label{u00identity}
U_0S_{e^{-t}}U_0^{-1}f=e^{-t{\bf H}^0}f=e^{-\frac{Q^2t}{2}}\mathds{E}_\varphi[ f (c+B_t, \varphi_t)]
\end{align}
\end{proposition}
\begin{proof}
Recalling \eqref{XDdef}, we have the independent sum
\begin{align*}
X_\mathbb{D}(e^{-t+i\theta})=B_{t}+Y_t(\theta)
\end{align*}
where $B_t$ is a Brownian motion and $Y_t$ has zero average on the circle. We then have
\begin{align*}
(U_0S_{e^{-t}}U_0^{-1}f)(c,\varphi)=&e^{-Qc}\mathds{E}_\varphi [e^{Q(c+B_t-Qt)}f(c+B_t-Qt,P\varphi(e^{-t+i\cdot})+Y_t(\cdot))]\\
=&e^{-\frac{Q^2}{2}t}\mathds{E}_\varphi [ f(c+B_t,P\varphi(e^{-t+i\cdot})+Y_t(\cdot))]
\end{align*}
where we have used the Girsanov transform to obtain the last equality.
Since $B_t$ and $Y_t$ are independent conditionally on $\varphi$, this last quantity is also equal to $e^{-t(\frac{Q^2}{2}-\frac{\partial^2_c}{2})}e^{-t\mathbf{P}}f$ by using Proposition \ref{semigrouppt}.
Furthermore, for $f\in \mathcal{C}$, it is plain to see that the mapping $t\mapsto e^{-t(\frac{Q^2}{2}-\frac{\partial^2_c}{2})}e^{-t\mathbf{P}}f$ solves the Cauchy problem $\partial_tu=-\mathbf{H}^0u$ with $u(0)=f$. Hence $e^{-t(\frac{Q^2}{2}-\frac{\partial^2_c}{2})}e^{-t\mathbf{P}}f=e^{-t\mathbf{H}^0}f$.
\end{proof}
Finally, we have the simple:
\begin{proposition}\label{l2alpha} The following properties hold:
\begin{enumerate}
\item The measure $\dd c\times\P_\mathbb{T}$ is invariant for $e^{\frac{Q^2t}{2}}e^{-t{\bf H}^{0}}$.
\item $e^{-t{\bf H}^{0}}$ extends to a continuous semigroup on $ L^p(\mathbb{R} \times \Omega_\mathbb{T})$ for all $p\in [1,+\infty]$ with norm $e^{ -\frac{Q^2}{2}t}$ and it is strongly continuous for $p\in [1,+\infty)$.
\item
$e^{-t{\bf H}^{0}}$ extends to a strongly continuous semigroup on $e^{-\alpha c} L^2(\mathbb{R} \times \Omega_\mathbb{T})$ for all $\alpha\in\mathbb{R}$ with norm $e^{(\frac{\alpha^2}{2}-\frac{Q^2}{2})t}$.
\end{enumerate}
\end{proposition}
\begin{proof} 1) This is a consequence of \eqref{u00identity}: indeed the processes $B$ and $Y$ are independent and describe two dynamics for which the measures $\dd c$ and $\P$ are respectively invariant.
2) follows from \eqref{u00identity}, Jensen's inequality and the fact that $\dd c\otimes \P_\mathbb{T}$ is invariant for ${\bf H}^{0}$. 3)
The map $K:f\mapsto e^{ -\alpha c}f$ is unitary from $L^2(\mathbb{R} \times \Omega_\mathbb{T} )\to e^{- \alpha c} L^2(\mathbb{R} \times \Omega_\mathbb{T})$. We have $Ke^{-t{\bf H}^{0}}K^{-1}=e^{t(\frac{\alpha^2}{2}- \alpha \partial_c)}e^{-t{\bf H}^{0}}$ which implies the claim.
\end{proof}
\begin{remark}\label{discreteH0}
Using the decomposition $L^2( \Omega_\mathbb{T})=\bigoplus_{{\bf k,l}}\ker
({\bf P}-\lambda_{{\bf k,l}})$, the operator ${\bf H}^0$ is unitarily equivalent to the
direct sum $\bigoplus_{{\bf k,l}} (-\frac{1}{2}\partial_c^2+\frac{Q^2}{2} +\lambda_{{\bf k,l}})$,
each of these operators being a shifted Laplacian on the real line $\mathbb{R}$. Consequently (using Fourier transform in $c$),
${\bf H}^0$ has no $L^2$-eigenvalue, its spectrum is absolutely continuous and the family $(e^{iPc}\psi_{\bf k,l})_{P,{\bf k,l}}$ forms a complete family of generalized eigenstates diagonalizing ${\bf H}^0$.
\end{remark}
\subsection{Diagonalization of the free Hamiltonian using the Virasoro algebra}\label{repth}
We start by explaining the diagonalization of the free (i.e. non interacting) Hamiltonian $\mathbf{H}^0$ which corresponds to the case $\mu=0$ in \eqref{Hdef}. As explained in Remark \ref{discreteH0}, that can be done directly by using the orthonormal basis of Hermite polynomials $\psi_{{\bf kl}}$ of $L^2(\Omega_\mathbb{T})$ combined with the Plancherel formula for the Fourier transform on the real line: for each $u_1,u_2\in L^2(\mathbb{R}\times \Omega_\mathbb{T})$, one has
\begin{equation}\label{<F,G>usingH0}
\langle u_1\,|\, u_2\rangle_2 =\frac{1}{2\pi} \sum_{{\bf k},{\bf l}\in \mathcal{N}}\int_\mathbb{R} \langle u_1\,|\, e^{iPc}\psi_{{\bf kl}}\rangle_2 \langle e^{iPc}\psi_{{\bf kl}}\,| \,u_2\rangle_2 \,\dd P.
\end{equation}
It will be useful however to use another basis for $L^2(\Omega_\mathbb{T})$ which respects its underlying complex analytic structure; this new basis, made up of $\mathbf{H}^0$-eigenstates, will be generated by the action on $L^2(\mathbb{R}\times \Omega_\mathbb{T})$ of two commuting unitary representations of the Virasoro algebra (as motivated in the end of Subsection \ref{outline:subspectral}). We follow below the Segal-Sugawara construction for the Fock representation of the Heisenberg algebra. Let us emphasize that the material we introduce here is standard; to keep the paper self-contained, we recall the main properties of the construction and just give sketches of the proofs (see for instance \cite{gordon,kac} for more details).
\subsubsection{Fock representation of the Heisenberg algebra}
We will work on the vector space
\begin{equation}\label{smoothexpgrowth}
\mathcal{C}_\infty:=\mathrm{Span}\{ \psi(c)F\,|\,\psi\in C^\infty(\mathbb{R})\text{ and }F\in\mathcal{S} \}.
\end{equation}
(not to be confused with $\mathcal{C}$ which is a subset of $\mathcal{C}_\infty$) and use the complex coordinates \eqref{varphin}, i.e. we denote for $n>0$
\begin{align*}
\partial_n:=\frac{\partial}{\partial\varphi_{n}}= \sqrt{n} (\partial_{x_n}-i \partial_{y_n}) \quad \text{ and }\quad \partial_{-n}:=\frac{\partial}{\partial\varphi_{-n}}= \sqrt{n} (\partial_{x_n}+i \partial_{y_n}).
\end{align*}
We define
on $ \mathcal{C}_\infty$ the following operators for $n>0$:
\begin{align*}
\mathbf{A}_n&= \tfrac{i}{2}\partial_{n},\ \ \ \mathbf{A}_{-n}=\tfrac{i}{2}(\partial_{-n}-2n\varphi_{n})\\
\widetilde{\mathbf{A}}_n&= \tfrac{i}{2}\partial_{-n},\ \ \ \widetilde{\mathbf{A}}_{-n}=\tfrac{i}{2}(\partial_{n}-2n\varphi_{-n})\\
\mathbf{A}_0&=\widetilde{\mathbf{A}}_0=\tfrac{i}{2}(\partial_c+Q)
\end{align*}
Their restrictions to $\mathcal{C}$ are closable operators in $L^2(\mathbb{R}\times \Omega_\mathbb{T})$ satisfying (on their closed extension)
\begin{align}
\mathbf{A}_n^\ast= \mathbf{A}_{-n},\ \ \ \widetilde{\mathbf{A}}_n^\ast=\widetilde{\mathbf{A}}_{-n}.
\label{adjoan}
\end{align}
Furthermore
$ \mathbf{A}_n1=0$ and $\widetilde{ \mathbf{A}}_n1=0$ for $n>0$. It is easy to see that the space $ \mathcal{C}_\infty$ is stable by the operators $\mathbf{A}_n,\widetilde{\mathbf{A}}_n$ and we have the commutation relations on $ \mathcal{C}_\infty$
\begin{align}
[\mathbf{A}_n,
\mathbf{A}_{m}]=\frac{_n}{^2}\delta_{n,-m}=[\widetilde{\mathbf{A}}_n,
\widetilde{\mathbf{A}}_{m}],\ \
[ \mathbf{A}_n,\widetilde{ \mathbf{A}}_m]=0.\label{com mu}
\end{align}
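For instance, for $n>0$, $[\mathbf{A}_n,\mathbf{A}_{-n}]=\big[\tfrac{i}{2}\partial_{n},\tfrac{i}{2}(\partial_{-n}-2n\varphi_{n})\big]=-\tfrac{1}{4}(-2n)[\partial_{n},\varphi_{n}]=\tfrac{n}{2}$, the other relations being checked in the same way.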
Thus $ \mathbf{A}_n$ and $\widetilde{ \mathbf{A}}_n$ ($n>0$) are annihilation operators and $ \mathbf{A}_{-n},\widetilde{ \mathbf{A}}_{-n}$ creation operators. By identifying canonically $\mathcal{S}$ (defined in the beginning of subsection \ref{sec:fock}) as a subspace of $ \mathcal{C}_\infty$, it is plain to check that $\mathcal{S}$ is stable under the action of these operators. As before, let ${\bf k}, {\bf l} \in \mathcal{N}$ and define
the polynomials
\begin{align}\label{fbasis}
\hat \pi_{{\bf k}{\bf l}}=
\prod_{n>0} \mathbf{A}_{-n}^{k_n} \tilde{\mathbf{A}}_{-n}^{l_n}1.
\end{align}
Then $\hat \pi_{{\bf k}{\bf l}}$ and $\hat \pi_{{\bf k'}{\bf l'}}$
are orthogonal if ${\bf k}\neq {\bf k'}$ or ${\bf l}\neq {\bf l'}$ and the $\hat \pi_{{\bf k}{\bf l}}$'s with $|{\bf k}|+|{\bf l}|=N$
span the eigenspace of ${\bf P}$ in $L^2(\Omega_\mathbb{T})$ with eigenvalue $N$\footnote{Explicitly: $\hat \pi_{{\bf k}{\bf l}}=\prod_{n>0} (-in)^{k_n+l_n} \varphi_{n}^{k_n} \varphi_{-n}^{l_n} +P(\varphi_{n},\varphi_{-n})$ where $P$ is a polynomial in $\varphi_{n},\varphi_{-n}$ spanned by monomials of the form $\prod_{n>0} \varphi_{n}^{k'_n} \varphi_{-n}^{l'_n} $ with $k'_n\leq k_n$, $l'_n,\leq l_n$ and $\sum_{n>0} k'_n+l'_n<\sum_{n>0} k_n+l_n$. }. We denote $ \pi_{{\bf k}{\bf l}}:= \hat \pi_{{\bf k}{\bf l}}/ \|\hat \pi_{{\bf k}{\bf l}}\|_{L^2(\Omega_\mathbb{T})} $ the normalized eigenvectors
\subsubsection{Segal-Sugawara construction}
Now we use the Fock representation of the Heisenberg algebra to construct the Virasoro representation.
We define the {\it normal ordered product} on $ \mathcal{C}_\infty$ by
$:\!\mathbf{A}_n\mathbf{A}_m\!\!:\,=\mathbf{A}_n\mathbf{A}_m$ if $m>0$ and $\mathbf{A}_m\mathbf{A}_n$ if $n>0$ (i.e. annihilation operators are on the right) and then for all $n \in \mathbb{Z}$
\begin{align}\label{virassoro}
\mathbf{L}_n^0:=-i(n+1)Q\mathbf{A}_n+\sum_{m\in\mathbb{Z}}:\mathbf{A}_{n-m}\mathbf{A}_m:
\end{align}
\begin{align}\label{virassorotilde}
\widetilde{\mathbf{L}}_n^0:=-i(n+1)Q\widetilde{\mathbf{A}}_n+\sum_{m\in\mathbb{Z}}:\widetilde{\mathbf{A}}_{n-m}\widetilde{\mathbf{A}}_m:\,\,.
\end{align}
These operators are well defined on $ \mathcal{C}_\infty$ (since only a finite number of terms contribute) and their restrictions to $\mathcal{C}$ are closable operators satisfying (on their closed extensions)
\begin{align}
(\mathbf{L}_n^0)^\ast=\mathbf{L}^0_{-n},\ \ \ (\widetilde{\mathbf{L}}_n^0)^\ast=\widetilde{\mathbf{L}}^0_{-n}.
\label{adjo}
\end{align}
Furthermore the vector space $ \mathcal{C}_\infty$ is stable under $\mathbf{L}_n^0$ and $\widetilde{\mathbf{L}}_n^0$ for all $n \in \mathbb{Z}$; on $ \mathcal{C}_\infty$ the $\mathbf{L}_n^0$ satisfy the commutation relations of the {\it Virasoro Algebra} (see \cite[Prop 2.3]{kac}):
\begin{align}\label{virasoro}
[\mathbf{L}_n^0,\mathbf{L}_m^0]=(n-m)\mathbf{L}_{n+m}^0+\frac{c_L}{12}(n^3-n)\delta_{n,-m}
\end{align}
where the central charge is
\begin{align*}
c_L=1+6Q^2.
\end{align*}
These commutation relations can be checked by using the fact that, on $ \mathcal{C}_\infty$, only finitely many terms contribute in \eqref{virasoro} and using the commutation relation \eqref{com mu}.
$\widetilde{\mathbf{L}}_n^0$ satisfy the same commutation relations \eqref{virasoro} and commute with the $ \mathbf{L}_n^0$'s. Note also that
\begin{align}\label{L0def}
\mathbf{L}_{0}^0&=\tfrac{1}{4}(-\partial_c^2+Q^2)+2\sum_{n>0}\mathbf{A}_{-n}\mathbf{A}_n\\
\widetilde{\mathbf{L}}_{0}^0&=\tfrac{1}{4}(-\partial_c^2+Q^2)+2\sum_{n>0}\widetilde{\mathbf{A}}_{-n}\widetilde{\mathbf{A}}_n,
\label{L0def1}
\end{align}
so that one can easily check that the $\mu=0$ Hamiltonian $\mathbf{H}^0:= -\frac{1}{2}\partial_c^2 + \frac{1}{2} Q^2+ \mathbf{P}$ has the following decomposition when restricted on $\mathcal{C}_\infty$
$$\mathbf{H}^0=\mathbf{L}_{0}^0+
\widetilde{\mathbf{L}}_{0}^0.
$$
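This identity can be checked mode by mode: for $n>0$ a direct computation using \eqref{varphin} gives
\begin{align*}
2\big(\mathbf{A}_{-n}\mathbf{A}_n+\widetilde{\mathbf{A}}_{-n}\widetilde{\mathbf{A}}_n\big)=-\partial_{n}\partial_{-n}+n\big(\varphi_n\partial_n+\varphi_{-n}\partial_{-n}\big)=n\big(\mathbf{X}_n^\ast \mathbf{X}_n+\mathbf{Y}_n^\ast \mathbf{Y}_n\big),
\end{align*}
so that $2\sum_{n>0}(\mathbf{A}_{-n}\mathbf{A}_n+\widetilde{\mathbf{A}}_{-n}\widetilde{\mathbf{A}}_n)=\mathbf{P}$ on $\mathcal{C}_\infty$, while the $c$-dependent parts of \eqref{L0def} and \eqref{L0def1} add up to $\tfrac{1}{2}(-\partial_c^2+Q^2)$.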
\begin{remark} In the terminology of representation theory, we have a {\it unitary representation} of two commuting Virasoro Algebras on $L^2(\mathbb{R} \times \Omega_\mathbb{T})$ (unitary in the sense that \eqref{adjo} holds) and this representation is reducible as we will see below by constructing stable sub-representations.
\end{remark}
\subsubsection{Diagonalizing $\mathbf{H}^0$ using the Virasoro representation }\label{sectionDiagvirasoro}
Now we explain how to construct the generalized eigenstates of the free Hamiltonian $\mathbf{H}^0$ using the families of operators $(\mathbf{L}_n^0)_n$ and $(\widetilde{\mathbf{L}}_n^0)_n$.
Recall that, for $\alpha\in \mathbb{C}$, we have defined the function
\begin{align}\label{psialphadef}
\Psi^0_\alpha(c,\varphi):=e^{(\alpha-Q)c}\in \mathcal{C}_\infty.
\end{align}
For $\alpha\in \mathbb{C}$, these are generalized eigenstates of ${\bf H}^0$: they never belong to $L^2(\mathbb{R} \times \Omega_\mathbb{T})$ but rather to some weighted spaces $e^{\beta |c|}L^2(\mathbb{R}\times \Omega_\mathbb{T})$ for $\beta>|{\rm Re}(\alpha)-Q|$, hence their name ``generalized eigenstates". We have
\begin{equation}\label{L0psialpha}
\begin{split}
\mathbf{L}_0^0\Psi^0_\alpha&=\widetilde{\mathbf{L}}_0^0\Psi^0_\alpha=\Delta_{\alpha}
\Psi^0_\alpha\\
\mathbf{L}_n^0\Psi^0_\alpha&=\widetilde{\mathbf{L}}_n^0\Psi^0_\alpha=0,\ \ \ n>0,
\end{split}
\end{equation}
where $\Delta_\alpha$ is the conformal weight \eqref{deltaalphadef}. In the language of representation theory (or in the CFT terminology), $\Psi^0_\alpha$ is called {\it highest weight state} with highest weight $\Delta_{\alpha}$ for both algebras. Before defining the so-called descendants of $\Psi^0_\alpha$, we introduce the following definition:
\begin{definition}\label{young}
A sequence of integers $\nu= (\nu_i)_{i \geq 0}$ is called a Young diagram if the mapping $i\mapsto\nu_i $ is non-increasing and if $\nu_i=0$ for $i$ sufficiently large.
We denote by $\mathcal{T}$ the set of all Young diagrams. We will sometimes write $\nu=(\nu_i)_{i \in\llbracket 1,k\rrbracket}$ where $k$ is the last integer i such that $\nu_i>0$ and denote by $|\nu|:= \sum_{i \geq 1} \nu_i$ the length of the Young diagram\footnote{This length should not be confused with the length \eqref{firstlength} of a sequence of integers.}. We set $\mathcal{T}_j:=\{\nu\in \mathcal{T}\, |\, |\nu|=j\}$ the set of Young diagrams of length $j$.
\end{definition}
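For instance, the Young diagrams of length $3$ are $(3)$, $(2,1)$ and $(1,1,1)$, so that $\mathcal{T}_3$ has exactly three elements.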
Given two Young diagrams $\nu= (\nu_i)_{i \in [1,k]}$ and $\tilde{\nu}= (\tilde{\nu}_i)_{i \in [1,j]}$ we denote
\begin{equation*}
\mathbf{L}_{-\nu}^0=\mathbf{L}_{-\nu_k}^0 \cdots \, \mathbf{L}_{-\nu_1}^0, \quad \quad \quad \tilde{\mathbf{L}}_{-\tilde \nu}^0=\tilde{\mathbf{L}}_{-\tilde\nu_j}^0 \cdots\, \tilde{\mathbf{L}}_{-\tilde\nu_1}^0
\end{equation*}
and define
\begin{align}\label{psibasis}
\Psi^0_{\alpha,\nu, \tilde\nu}=\mathbf{L}_{-\nu}^0\tilde{\mathbf{L}}_{-\tilde\nu}^0 \: \Psi^0_\alpha,
\end{align}
with the convention that $\Psi^0_{\alpha,\emptyset, \emptyset}=\Psi^0_{\alpha}$.
The vectors $\Psi^0_{\alpha,\nu, \tilde\nu}$ are called the \emph{descendants} states of $\Psi^0_\alpha$. We gather in the following proposition their main properties
\begin{proposition}\label{prop:mainvir0} The following holds:\\
1) For each pair of Young diagrams $\nu,\tilde{\nu}\in \mathcal{T}$, the descendant state $\Psi^0_{\alpha,\nu, \tilde\nu}$ can be written as
\begin{align}\label{psibasis1}
\Psi^0_{\alpha,\nu, \tilde\nu}=\mathcal{Q}_{\alpha,\nu,\tilde \nu}\Psi^0_\alpha
\end{align}
where $\mathcal{Q}_{\alpha,\nu,\tilde\nu}\in \mathcal{P} $ is a polynomial.\\
2) for all $\alpha \in \mathbb{C}$
\begin{equation*}
\mathbf{L}_0^0\Psi^0_{\alpha,\nu, \tilde\nu} = (\Delta_{\alpha}
+|\nu|)\Psi^0_{\alpha,\nu ,\tilde\nu},\ \ \ \tilde{\mathbf{L}}_0^0\Psi^0_{\alpha,\nu, \tilde\nu} = (\Delta_{\alpha}+|\tilde\nu|)\Psi^0_{\alpha,\nu ,\tilde\nu}
\end{equation*}
and thus since $\mathbf{H}^0=\mathbf{L}_0^0+\tilde{\mathbf{L}}_0^0$
\begin{equation*}
\mathbf{H}^0\Psi^0_{\alpha,\nu,\tilde{\nu}} = (2 \Delta_\alpha+|\nu|+|\tilde{\nu}| )\Psi^0_{\alpha,\nu,\tilde{\nu}} .
\end{equation*}
3) {\bf Completeness}: the inner products of the descendant states obey
\begin{equation}\label{scapo}
\langle \mathcal{Q}_{2Q-\bar\alpha,\nu,\tilde\nu} | \mathcal{Q}_{\alpha,\nu',\tilde\nu'} \rangle_{L^2(\Omega_\mathbb{T})}=\delta_{|\nu| ,|\nu'|}\delta_{|\tilde\nu| ,|\tilde\nu'|}F_{\alpha}(\nu,\nu')F_{\alpha}(\tilde\nu,\tilde\nu')
\end{equation}
where each coefficient $F_{\alpha}(\nu,\nu')$ is a polynomial in ${\alpha}$, called the {\it Schapovalov form}. The functions $(\mathcal{Q}_{\alpha,\nu,\tilde \nu})_{\nu,\tilde\nu\in\mathcal{T}}$ are linearly independent for $$\alpha\notin \{{\alpha_{r,s}} \mid \,\,r,s\in \mathbb{N}^\ast ,rs \leq \max(|\nu|, |\tilde \nu |)\}\quad \quad\text{with }\quad {\alpha_{r,s}}=Q-r\frac{\gamma}{2}-s\frac{2}{\gamma}.$$
4) the following holds
\begin{equation}\label{defqalpha}
\begin{gathered}
\mathcal{Q}_{\alpha,\nu,\tilde{\nu}}: = \sum_{\mathbf{k},\mathbf{l}, |{\bf k}|+|{\bf l}|=N}M^{N}_{\alpha,\mathbf{k}\mathbf{l},\nu\tilde\nu} \psi_{\mathbf{k}\mathbf{l}},
\end{gathered}
\end{equation}
for some coefficients $M^{N}_{\alpha,\mathbf{k}\mathbf{l},\nu\tilde\nu}$ polynomial in $\alpha\in \mathbb{C}$.\\
5) {\bf Spectral decomposition}: if $u_1,u_2\in L^2(\mathbb{R}\times\Omega_\mathbb{T})$ then
\begin{align}\label{fcomplete}
\langle u_1\, |\, u_2\rangle_{2}=\frac{1}{2 \pi}\sum_{\nu,\tilde\nu,\nu',\tilde\nu'\in \mathcal{T}}\int_\mathbb{R} \langle u_1\,|\,
\Psi^0_{Q+iP,\nu',\tilde{\nu}'} \rangle_{2} \langle \Psi^0_{Q+iP,\nu,\tilde{\nu}}\, |\, u_2\rangle _{2}F^{-1}_{Q+iP}(\nu,\nu')F^{-1}_{Q+iP}(\tilde\nu,\tilde\nu')\, \dd P.
\end{align}
\end{proposition}
\begin{proof}
The decomposition \eqref{psibasis1} can be obtained from the definition \eqref{psibasis} by using that $[{\bf A}_n,e^{(\alpha-Q)c}]=[\tilde{{\bf A}}_n,e^{(\alpha-Q)c}]=0$ for all $n\not=0$, $[\partial_c,e^{(\alpha-Q)c}]=e^{(\alpha-Q)c}(\alpha-Q){\rm Id}$ and that finitely many applications of ${\bf A}_{n}$ and $\tilde{\bf A}_n$ to ${\bf 1}$ is a polynomial in $\mathcal{P}$.
Actually, one can even show that
for $\nu$ a Young diagram
\begin{align*}
\mathbf{L}_{-\nu}^0 \Psi^0_\alpha=(\mathbf{L}^{0,\alpha}_{-\nu} 1)\Psi^0_\alpha
\end{align*}
where, for a Young diagram $\nu=( \nu_1, \cdots, \nu_k )$, we define $\mathbf{L}^{0,\alpha}_{\nu}:=\mathbf{L}^{0,\alpha}_{\nu_1}\dots \mathbf{L}^{0,\alpha}_{\nu_k}$ and the operators $\mathbf{L}^{0,\alpha}_n$, $n\in\mathbb{Z}$ act in $L^2(\Omega_\mathbb{T})$ and are given by the expression \eqref{virassoro} where we replace ${\bf A}_0$ by
$\frac{i}{2}(\alpha+Q)$ i.e.
\begin{equation*}
\mathbf{L}^{0,\alpha}_n:= \left\{
\begin{array}{ll} i(\alpha-Q-nQ)\mathbf{A}_n+\sum_{m\neq n,0} \mathbf{A}_{n-m}\mathbf{A}_m
& \, n\neq 0\\
\frac{\alpha}{2}(Q-\frac{\alpha}{2})+2\sum_{m>0} \mathbf{A}_{-m}\mathbf{A}_m
&\, n=0
\end{array} \right..
\end{equation*}
Hence on their closed extensions we have
\begin{equation}\label{adjoint}
(\mathbf{L}^{0,\alpha}_n)^\ast=\mathbf{L}^{0,2Q-\bar\alpha}_{-n}
\end{equation}
and $\mathbf{L}^{0,\alpha}_n$ satisfies \eqref{virasoro}. The reader should notice here that the order in which the $\nu_j$ appear in previous definition ensures that $(\mathbf{L}^{0,2Q-\bar\alpha}_{\nu})^\ast=\mathbf{L}^{0,\alpha}_{-\nu}$. The operators $(\tilde{\mathbf{L}}^{0,\alpha}_{n})_n$ are defined in a similar fashion and commute with $(\mathbf{L}^{0,\alpha}_{n})_n$. Therefore $ \mathcal{Q}_{\alpha,\nu,\tilde\nu} =\mathbf{L}^{0,\alpha}_{-\nu}\tilde{\mathbf{L}}^{0,\alpha}_{-\tilde\nu}1$. Then 2) results from \eqref{L0psialpha} and the commutation relations \eqref{virasoro} with $n=0$.
Since $\mathbf{L}^{0,\alpha}_\nu$ and $\tilde {\mathbf{L}}^{0,\alpha}_{\tilde{\nu}}$ commute we have
\begin{align}\label{QQ}
\langle \mathcal{Q}_{2Q-\bar\alpha,\nu,\tilde\nu}\, |\, \mathcal{Q}_{\alpha,\nu',\tilde\nu'} \rangle_{L^2(\Omega_\mathbb{T})}=\langle 1\,|\, \tilde{\mathbf{L}}^{0,\alpha}_{\tilde\nu}\tilde{\mathbf{L}}^{0,\alpha}_{-\tilde\nu'}\mathbf{L}^{0,\alpha}_{\nu}\mathbf{L}^{0,\alpha}_{-\nu'}1\rangle_{L^2(\Omega_\mathbb{T})}.
\end{align}
Let us now compute the right-hand side. By Lemma \ref{LemAppendixVir}, one has for arbitrary $t_1, \dots, t_k \in \mathbb{Z}$ such that $t_1+ \cdots +t_k>0$
\begin{equation}\label{sumpositive}
\mathbf{L}^{0,\alpha}_{t_1} \cdots \mathbf{L}_{t_k}^{0,\alpha}1=0, \quad \tilde{\mathbf{L}}^{0,\alpha}_{t_1} \cdots \tilde{\mathbf{L}}_{t_k}^{0,\alpha}1=0.
\end{equation}
Therefore, if $|\nu| > |\nu'|$ or $|\tilde{\nu}| > |\tilde{\nu}'|$ then we get that \eqref{QQ} is equal to $0$ by using \eqref{sumpositive}. The case $|\nu| < |\nu'|$ or $|\tilde{\nu}| < |\tilde{\nu}'|$ can be dealt with similarly and also yields $0$. Hence in what follows we suppose that $|\nu| = |\nu'|$ and $|\tilde \nu| = |\tilde{\nu}'|$.
Lemma \ref{LemAppendixVir1} establishes that for $|\nu|=|\nu'|$
\begin{equation}\label{decompL0}
\mathbf{L}^{0,\alpha}_{\nu}{\bf L}^{0,\alpha}_{-\nu'}1=\sum_{k\geq 0}a_k(\mathbf{L}^{0,\alpha}_0)^k
\end{equation}
where the coefficients $a_k$ are determined by the algebra \eqref{virasoro} and are independent of $\alpha$. Since $(\mathbf{L}^{0,\alpha}_0)^k1=\Delta_{\alpha}^k$ and $\Delta_\alpha=\frac{\alpha}{2}(Q-\frac{\alpha}{2})$ we conclude
$$
\mathbf{L}^{0,\alpha}_{\nu}{\bf L}^{0,\alpha}_{-\nu'}1=F_{\alpha}(\nu,\nu')
$$
where $F_{\alpha}(\nu,\nu')$ is a polynomial in $\Delta_\alpha$ and thus in $\alpha$.
Repeating the argument for $\tilde{\bf L}^{0,\alpha}_{\tilde\nu}\tilde{\bf L}^{0,\alpha}_{-\tilde\nu'} 1$ yields \eqref{scapo}.
The determinant of the matrix $(F_{\alpha}(\nu,\nu'))_{|\nu|=|\nu'|=N}$ is given by the Kac determinant formula (see Feigin-Fuchs \cite{FF})
\begin{align}\label{}
\det (F_{\alpha}(\nu,\nu'))_{|\nu|=|\nu'|=N}=\kappa_N\prod_{r,s=1; \: rs \leq N}^N(\Delta_\alpha-\Delta_{\alpha_{r,s}})^{p(N-rs)}
\end{align}
where $\kappa_N$ does not depend on $\alpha$ or $c_L$, $p(M)$ is the number of Young Diagrams of length $M$ and
\begin{align*}
{\alpha_{r,s}}=Q-r\frac{\gamma}{2}-s\frac{2}{\gamma}.
\end{align*}
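For instance, at level $N=1$ the matrix reduces to the single entry $F_\alpha((1),(1))=\mathbf{L}^{0,\alpha}_{1}\mathbf{L}^{0,\alpha}_{-1}1=[\mathbf{L}^{0,\alpha}_{1},\mathbf{L}^{0,\alpha}_{-1}]1=2\mathbf{L}^{0,\alpha}_{0}1=2\Delta_\alpha$, in agreement with the Kac formula since $p(0)=1$ and $\Delta_{\alpha_{1,1}}=0$ (recall $\alpha_{1,1}=Q-\frac{\gamma}{2}-\frac{2}{\gamma}=0$), so that $\kappa_1=2$.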
Now we turn to 4). The polynomials $\mathcal{Q}_{\alpha,\nu,\tilde\nu}$ with
$|\nu|+|\tilde\nu|=N$ belong to the space spanned by the basis $\psi_{\bf kl}$ \eqref{fbasis2} (or equivalently the basis $\pi_{\bf kl}$ \eqref{fbasis}) above with $
|{\bf k}|+|{\bf l}|=N$. Let us also stress that item 3) entails that the space spanned by $\mathcal{Q}_{\alpha,\nu,\tilde\nu}$ with
$|\nu|+|\tilde\nu|=N$ is exactly the same as $\psi_{\bf kl}$ with $
|{\bf k}|+|{\bf l}|=N$ as soon as $\alpha \not \in Q-\frac{\gamma}{2}\mathbb{N}^*-\frac{2}{\gamma}\mathbb{N}^*$. Hence the existence of a decomposition of the type \eqref{defqalpha}. The fact that the coefficients of this decomposition are polynomials in $\alpha$ can be more easily seen in the basis $\pi_{\bf kl}$: in that case, the coefficients are given by the scalar products $ \langle \mathcal{Q}_{\alpha,\nu,\tilde\nu} \,|\, \pi_{\mathbf{k}\mathbf{l}} \rangle_{L^2(\Omega_\mathbb{T})}$ and one can use the expression \eqref{fbasis} together with the commutation relations
$$[{\bf A}_{n},{\bf L}^{0,\alpha}_m]=n{\bf A}^\alpha_{n+m}-\frac{i}{2}n(n+1)Q\delta_{n,-m}$$
where ${\bf A}^\alpha_0=\frac{i }{2}(\alpha+Q)$ and ${\bf A}^\alpha_n={\bf A}_n$ for $n\neq 0$ and ${\bf L}^{0,\alpha}_01=\Delta_{\alpha} $. 4) follows.
Now, we specialise the above considerations to the case $\alpha=Q+iP$ with $P \in \mathbb{R}$. In this case, one has
\begin{equation}\label{scapointro}
\langle \mathcal{Q}_{Q+iP,\nu,\tilde\nu} \,|\, \mathcal{Q}_{Q+iP,\nu',\tilde\nu'} \rangle _{L^2(\Omega_\mathbb{T})}=\delta_{|\nu| ,|\nu'|}\delta_{|\tilde\nu| ,|\tilde\nu'|}F_{Q+iP}(\nu,\nu')F_{Q+iP}(\tilde\nu,\tilde\nu')
\end{equation}
where the matrices $(F_{Q+iP}(\nu,\nu'))_{\nu,\nu'\in\mathcal{T}_j}$ are positive definite (hence invertible) for each $j\in\mathbb{N}$ and they depend on $P$: more precisely they are polynomials in the weight $\Delta_{Q+iP}=\tfrac{1}{4}(P^2+Q^2)$ and in the central charge $c_L$.
Item 5) then follows from the representation \eqref{<F,G>usingH0} and decomposing the basis $\psi_{{\bf kl}}$ in terms of the new basis $\mathcal{Q}_{Q+iP,\nu,\nu'}$. \end{proof}
\section{Liouville CFT: dynamics and quadratic form} \label{sec:LCFTFQ}
The goal of this section is to construct explicitly the quadratic form associated to ${\bf H}_\ast$ and to obtain a probabilistic representation of the semigroup. This probabilistic representation, i.e. a Feynman-Kac type formula, easily follows from the definition of ${\bf H}_\ast$; this is done in Subsection \ref{Dynamics of LCFT} just below. The construction of the quadratic form is more subtle, especially in the case $\gamma\in (\sqrt{2},2)$. Indeed we will construct the Friedrichs extension of the operator \eqref{Hdef} (restricted to an appropriate domain) and we call it $\mathbf{H}$. Then we will show that ${\bf H}_\ast={\bf H}$. The main reason the construction of the quadratic form becomes more difficult as $\gamma$ increases is the interpretation of the term $V$ in \eqref{Hdef}. For $\gamma\in (0,\sqrt{2})$ only, it makes sense as a GMC random variable. Indeed, define the regularized field for $k\geq 0$
\begin{equation}\label{GFFcirclecutoff}
\varphi^{(k)}(\theta)=\sum_{|n|\leq k}\varphi_ne^{in\theta}
\end{equation}
which is a.s. a smooth function.
Then the GMC random variable $V$ can be defined as the following limit
\begin{equation}\label{Vdefi}
V:=\lim_{k\to\infty}V^{(k )},\quad \quad V^{(k)}:=\int_0^{2\pi}e^{ \gamma \varphi^{(k)}(\theta)- \frac{\gamma^2}{2} \mathds{E}[ \varphi^{(k)}(\theta)^2 ]} \dd\theta
\end{equation}
where the above limit exists $\P_\mathbb{T}$-almost surely and is non trivial for $\gamma\in (0,\sqrt{2})$, see \cite{cf:Kah,review,Ber} for instance on the topic, in which case $V\in L^{p}(\Omega_\mathbb{T})$ for all $p<\frac{2}{\gamma^2}$. For $\gamma\in [\sqrt{2},2)$, the limit $V$ vanishes, in which case we will rather make sense of the multiplication operator $V$ as a measure (singular with respect to $\P_\mathbb{T}$): this case is more problematic, see Subsection \ref {sub:bilinear}.
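As a simple check of the normalization (recorded here for convenience), since $\varphi^{(k)}(\theta)$ is a centered Gaussian variable for each fixed $\theta$, the integrand in \eqref{Vdefi} has unit expectation, so that
\[
\mathds{E}[V^{(k)}]=\int_0^{2\pi}\mathds{E}\big[e^{ \gamma \varphi^{(k)}(\theta)- \frac{\gamma^2}{2} \mathds{E}[ \varphi^{(k)}(\theta)^2 ]}\big] \dd\theta=2\pi \quad\text{for every }k\geq 0;
\]
it is only the almost sure limit of $V^{(k)}$ that degenerates when $\gamma\geq \sqrt{2}$.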
\subsection{Feynman-Kac formula}\label{Dynamics of LCFT}
We consider the semigroup $e^{-t{\bf H}_\ast}$ and start by writing some form of Feynman-Kac formula for this semigroup.
\begin{proposition}{\bf (Feynman-Kac formula)}\label{prop:FK}
For $f\in L^2(\mathbb{R}\times\Omega_\mathbb{T})$ we have
\begin{equation}\label{FKgeneral}
e^{-t\mathbf{H}_\star}f=e^{-\frac{Q^2t}{2}}\mathds{E}_{\varphi}\big[ f(c+B_t,\varphi_t) e^{-\mu e^{\gamma c}\int_{\mathbb{D}_t^c} |z|^{-\gamma Q}M_\gamma (\dd z)}\big]
\end{equation}
where $(c+B_t,\varphi_t)$ is the process on $W^{s}(\mathbb{T})$ defined in subsection \ref{sub:gff} and $\mathbb{D}_t^c:=\{z\in\mathbb{D}\, |\, |z|>e^{-t}\}$.\\
In particular, for $\gamma\in (0,\sqrt{2})$
\begin{equation}\label{fkformula}
e^{-t\mathbf{H}_\star}f=e^{-\frac{Q^2t}{2}}\mathds{E}_{\varphi}\big[ f(c+B_t,\varphi_t)e^{-\mu\int_0^t e^{\gamma (c +B_s) }V(\varphi_s)\dd s}\big].
\end{equation}
where
$$V(\varphi_s):=\int_0^{2\pi}e^{ \gamma \varphi_s(\theta)- \frac{\gamma^2}{2} \mathds{E}[ \varphi_s(\theta)^2 ]} \dd\theta.$$
\end{proposition}
\begin{proof}
Let $f\in L^2(\mathbb{R}\times\Omega_\mathbb{T})$. By definition we have $e^{-t\mathbf{H}_\star}f=US_{e^{-t}}U^{-1}f$, where, from \eqref{Uinverse}, we have $U^{-1}f=(U1)^{-1} f$ with
\begin{align*}
(U1)(c,\varphi)=(U_0 e^{-\mu e^{\gamma c}M_\gamma(\mathbb{D})})(c,\varphi).
\end{align*}
Now we claim that we have for $t>0$
\begin{equation}\label{Vmarkov}
\mathds{E}[e^{-\mu e^{\gamma c} M_\gamma ( \mathbb{D}_t)}|\mathcal{F}_{\mathbb{D}^c_t} ]= S_{e^{-t}}(e^{Qc}U1)
\end{equation}
where $\mathbb{D}_t=\{z\in\mathbb{D};|z|<e^{-t}\}$. Indeed, by the domain Markov property of the GFF and conditionally on $\mathcal{F}_{\mathbb{D}^c_t}$, the law of $X$ inside $\mathbb{D}_t$ is given by the independent sum
$$X\stackrel{law}{=}P(s_{e^{-t}}X)(e^t\cdot )+X_\mathbb{D}(e^t\cdot).$$
As a consequence and using a change of variables by dilation, we get the relation conditionally on $\mathcal{F}_{\mathbb{D}^c_t} $
$$ M_\gamma(\mathbb{D}_t)\stackrel{law}{=}S_{e^{-t}}M_\gamma(\mathbb{D}),$$
which gives \eqref{Vmarkov}. Next, using \eqref{Vmarkov}, we get
\begin{align*}
e^{-t\mathbf{H}_\star}f=&US_{e^{-t}}U^{-1}f\\
=&e^{-Qc}\mathds{E}_\varphi\big[S_{e^{-t}}(U^{-1}f)e^{-\mu e^{\gamma c}M_\gamma ( \mathbb{D})}\big]\\
=&e^{-Qc}\mathds{E}_\varphi\big[\mathds{E}\big[S_{e^{-t}}(U^{-1}f)e^{-\mu e^{\gamma c} M_\gamma ( \mathbb{D})}|\mathcal{F}_{\mathbb{D}^c_{t}}\big]\big]\\
=&e^{-Qc}\mathds{E}_\varphi\big[ S_{e^{-t}}(U^{-1}f)e^{-\mu e^{\gamma c} M_\gamma ( \mathbb{D}_t^c)}\mathds{E}\big[e^{-\mu e^{\gamma c}M_\gamma ( \mathbb{D}_t)}|\mathcal{F}_{\mathbb{D}^c_{t}}\big]\big]\\
=&e^{-Qc}\mathds{E}_\varphi\big[ S_{e^{-t}}(e^{Qc}f ) e^{-\mu e^{\gamma c}M_\gamma ( \mathbb{D}^c_t)} \big]
\end{align*}
We complete the argument by applying the Cameron-Martin formula to $S_{e^{-t}}(e^{Qc})$ to get \eqref{FKgeneral}.
In the case when $\gamma\in (0,\sqrt{2})$, we write the chaos measure in terms of the process $Z_t=(c+B_t,\varphi_t)$. The function $V:W^{s}(\mathbb{T})\to \mathbb{R}^+$ given by
$$V(\varphi)=\lim_{k\to\infty}\int_\mathbb{T} e^{\gamma \varphi^{(k)}(\theta)-\frac{\gamma^2}{2}\mathds{E}[ \varphi^{(k)}(\theta)^2]}\dd\theta
$$
is measurable and, conditionally on $\varphi=\varphi_0$, the change of variables $dx=r dr d\theta=e^{-2s}dsd\theta$ with $r=|x|=e^{-s}$, together with $-\frac{\gamma^2}{2} \mathbb{E}[B_s^2]-2s=-\gamma Qs$, gives
\[
e^{\gamma c}M_\gamma(\mathbb{D}_t^c)\stackrel{law}=\int_0^t e^{\gamma (c+B_s-Qs)}V(\varphi_s)\dd s.
\]
Since $|z|^{-\gamma Q}=e^{\gamma Q s}$ for $|z|=e^{-s}$, the same change of variables applied to the weighted measure appearing in \eqref{FKgeneral} gives
\[
e^{\gamma c}\int_{\mathbb{D}_t^c} |z|^{-\gamma Q}M_\gamma (\dd z)\stackrel{law}=\int_0^t e^{\gamma (c+B_s)}V(\varphi_s)\dd s,
\]
which, plugged into \eqref{FKgeneral}, yields \eqref{fkformula}.
\end{proof}
As a consequence of this formula and similarly to $\bf H^0$ we have
\begin{proposition}\label{l2alphamu} The following properties hold:
\begin{enumerate}
\item $e^{-t{\bf H}_\ast}$ extends to a continuous semigroup on $ L^p(\mathbb{R} \times \Omega_\mathbb{T})$ for all $p\in [1,+\infty]$ with norm $e^{ -\frac{Q^2}{2}t}$ and it is strongly continuous for $p\in [1,+\infty)$.
\item $e^{-t{\bf H}_\ast}$ extends to a strongly continuous semigroup on $e^{-\alpha c} L^2(\mathbb{R} \times \Omega_\mathbb{T})$ for all $\alpha\in\mathbb{R}$ with norm $e^{(\frac{\alpha^2}{2}-\frac{Q^2}{2})t}$.
\end{enumerate}
\end{proposition}
\begin{proof} Using in turn \eqref{fkformula} and $V\geq 0$, we see that $\|e^{-t\mathbf{H}_\star}f\|_p\leq \|e^{-t\mathbf{H}^0}|f|\|_p$ so that the claim 1) follows from Proposition \ref{l2alpha}. The same argument works for 2).
\end{proof}
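For the reader's convenience, we sketch the elementary Gaussian computation behind the weight $e^{(\frac{\alpha^2}{2}-\frac{Q^2}{2})t}$ in item 2). Writing $f=e^{-\alpha c}g$ with $g\in L^2(\mathbb{R}\times\Omega_\mathbb{T})$ and using $V\geq 0$ as above, the claim reduces to the corresponding bound for $\mathbf{H}^0$ (Proposition \ref{l2alpha}), which rests on the one-dimensional identity, valid for (say) bounded measurable $h$,
\[
\mathds{E}\big[e^{-\alpha B_t}h(c+B_t)\big]=\int_{\mathbb{R}} h(c+x)\,e^{-\alpha x}\frac{e^{-\frac{x^2}{2t}}}{\sqrt{2\pi t}}\,\dd x=e^{\frac{\alpha^2 t}{2}}\,\mathds{E}\big[h(c+B_t-\alpha t)\big],
\]
obtained by completing the square; the shift by $-\alpha t$ is harmless thanks to the translation invariance of the Lebesgue measure $\dd c$, so that conjugating the semigroup by $e^{\alpha c}$ only produces the extra factor $e^{\frac{\alpha^2 t}{2}}$.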
\subsection{Quadratic forms and the Friedrichs extension of ${\bf H}$}\label{sub:bilinear}
Here we construct the quadratic forms associated to \eqref{Hdef} and the Friedrichs extension of ${\bf H}$ in the case $\gamma\in (0,2)$.
Recall that the underlying measure on the space $L^2(\mathbb{R}\times \Omega_\mathbb{T})$ is $\dd c\times \P_\mathbb{T}$, with $\Omega_\mathbb{T}=(\mathbb{R}^{2})^{\mathbb{N}^*}$, $ \Sigma_\mathbb{T}=\mathcal{B}^{\otimes \mathbb{N}^*}$ (where $\mathcal{B}$ stands for the Borel sigma-algebra on $\mathbb{R}^2$) and the probability measure $ \P_\mathbb{T}$ defined by \eqref{Pdefin}. Also, recall the GFF on the unit circle $\varphi:\Omega \to W^{s}(\mathbb{T})$ (with $s<0$) defined by \eqref{GFFcircle}. Finally recall that $\mathcal{S}$ is the set of smooth functions depending on finitely many coordinates, i.e. of the form
$F(x_1,y_1, \dots,x_n,y_n)$ with $n\geq 1$ and $F\in C^\infty((\mathbb{R}^2)^n)$, with at most polynomial growth at infinity for $F$ and its derivatives. $\mathcal{S}$ is dense in $L^2(\Omega_\mathbb{T})$.
We will construct the quadratic form associated to $\mathbf{H}$ as a limit (in a suitable sense) of regularized quadratic forms associated with the regularized potential $V^{(k)}$.
For $k\geq 1$ and $\mu> 0$, we introduce the bilinear form (with associated quadratic form still denoted by $\mathcal{Q}^{(k)}$)
\begin{equation}\label{defQn}
\mathcal{Q}^{(k)}(u,v):=\tfrac{1}{2}\mathds{E}\int_{\mathbb{R}} \Big( \partial_c u \partial_c \bar{v}+Q^2u\bar{v}+ 2( \mathbf{P} u)\bar{v}+2\mu e^{\gamma c}V^{(k)}u\bar{v}\Big)\dd c.
\end{equation}
Here $u,v$ belong to the domain $\mathcal{D}(\mathcal{Q}^{(k)})$ of the quadratic form, namely the completion of the space $\mathcal{C}$ defined by \eqref{core} for the $\mathcal{Q}^{(k)}$-norm in $L^2(\mathbb{R}\times \Omega_\mathbb{T})$. It is clear that $ \mathcal{D}(\mathcal{Q}^{(k)})$ embeds continuously and injectively in $L^2(\mathbb{R}\times \Omega_\mathbb{T})$ (same argument as in the proof of Prop \ref{FQ0:GFF}). Moreover, since $V^{(k)}\geq 0$, the form is bounded from below, namely $\mathcal{Q}^{(k)}(u)\geq Q^2\|u\|_2^2/2$, so that the construction of the Friedrichs extension follows from \cite[Theorem 8.15]{rs1}. It determines uniquely a self-adjoint operator $\mathbf{H}^{(k)}$, called the \emph{Friedrichs extension}, with domain denoted by $\mathcal{D}(\mathbf{H}^{(k)})$ such that:
$$\mathcal{D}(\mathbf{H}^{(k)} )=\{u\in \mathcal{D}(\mathcal{Q}^{(k)}); \exists C>0,\forall v\in \mathcal{D}(\mathcal{Q}^{(k)}),\,\,\, \mathcal{Q}^{(k)}(u,v)\leq C\|v\|_2\}$$
and for $u\in \mathcal{D}(\mathbf{H}^{(k)})$, $\mathbf{H}^{(k)} u$ is the unique element in $L^2(\mathbb{R}\times \Omega_\mathbb{T})$ satisfying
$$\mathcal{Q}^{(k)}(u,v)=\langle \mathbf{H}^{(k)} u|v\rangle_2 .$$
We also denote by $\mathbf{R}_{\lambda}^{(k)}$ the associated resolvent family.
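On sufficiently regular functions, $\mathbf{H}^{(k)}$ acts as expected: for $u\in\mathcal{C}$, integrating by parts in the $c$ variable in \eqref{defQn} gives, at least formally,
\[
\mathbf{H}^{(k)}u=-\tfrac{1}{2}\partial_c^2u+\tfrac{Q^2}{2}u+\mathbf{P}u+\mu e^{\gamma c}V^{(k)}u,
\]
which is \eqref{Hdef} with the potential term regularized, i.e. with $V$ replaced by $V^{(k)}$.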
The following result is rather standard but we found no reference corresponding exactly to our context so that we give a short proof.
\begin{proposition}\label{prop:fkhk}
The strongly continuous contraction semigroup $(e^{-t\mathbf{H}^{(k)}})_{t\geq 0}$ of self-adjoint operators on $L^2(\mathbb{R}\times\Omega_\mathbb{T})$ obeys the Feynman-Kac formula
\begin{equation}\label{fkformulaforhk}
e^{-t\mathbf{H}^{(k)}}f=e^{-\frac{Q^2t}{2}}\mathds{E}_{\varphi}\big[ f(c+B_t,\varphi_t)e^{-\mu\int_0^t e^{\gamma (c +B_s) }V^{(k)}(\varphi_s)\dd s}\big],
\end{equation}
where we have, after decomposing the field $\varphi_s $ along its harmonics i.e. $\varphi_s(\theta):=\sum_{n\not=0}\varphi_{s,n}e^{in\theta}$,
$$V^{(k)}(\varphi_s)=\int_0^{2\pi}e^{ \gamma \varphi^{(k)}_s(\theta)- \frac{\gamma^2}{2} \mathds{E}[ \varphi^{(k)}_s(\theta)^2 ]} \dd\theta\quad\text{ with }\varphi^{(k)}_s(\theta):=\sum_{|n|\leq k,n\not=0}\varphi_{s,n}e^{in\theta}.$$
\end{proposition}
\begin{proof}
We use Kato's strong Trotter product formula (see \cite[Theorem S.21 page 379]{rs1}) applied to the self-adjoint operators ${\bf H}_0$ and $e^{\gamma c}V^{(k)}$: since the domain $\mathcal{D}(\mathcal{Q}^{(k)})$ of the quadratic form $\mathcal{Q}^{(k)}$ is dense in $L^2(\mathbb{R}\times\Omega_\mathbb{T})$ and satisfies $\mathcal{D}(\mathcal{Q}^{(k)})=\mathcal{D}(\mathcal{Q}_0)\cap \mathcal{D}(\mathcal{Q}_{V^{(k)}})$, where $\mathcal{D}(\mathcal{Q}_{V^{(k)}})=\{f\mid \|e^{\gamma c/2}(V^{(k)})^{1/2}f\|_2<+\infty\}$ is the domain of the quadratic form associated to the operator of multiplication by $e^{\gamma c}V^{(k)}$, we have the identity
$$\lim_{n\to\infty} (e^{-\frac{t}{n}{\bf H}_0}e^{-\frac{t}{n}\mu e^{\gamma c}V^{(k)}})^n =e^{-t{\bf H}^{(k)}}$$
where the limit is understood in the strong sense (i.e. convergence in $L^2(\mathbb{R}\times\Omega_\mathbb{T})$ when this relation is applied to $f\in L^2(\mathbb{R}\times\Omega_\mathbb{T})$). Now we compute the limit in the left-hand side. For $f\in L^2(\mathbb{R}\times\Omega_\mathbb{T})$ we have
\begin{equation}\label{trotter}
(e^{-\frac{t}{n}{\bf H}_0}e^{-\frac{t}{n}\mu e^{\gamma c}V^{(k)}})^n f=e^{-Q^2t/2}\mathds{E}_{\varphi}\big[ f(c+B_t,\varphi_t)e^{-\mu R_t^{n,(k)}}\big]
\end{equation}
with $R_t^{n,(k)}$ the Riemann sum
$$R_t^{n,(k)}:=\frac{t}{n}\sum_{j=1}^n e^{\gamma (c +B_{jt/n}) }V^{(k)}(\varphi_{jt/n}) .$$
The right-hand side of \eqref{trotter} converges in $L^2(\mathbb{R}\times\Omega_\mathbb{T})$ towards the same expression with $R_t^{n,(k)}$ replaced by $\int_0^t e^{\gamma (c +B_s) }V^{(k)}(\varphi_{s})\dd s$: indeed this can be established by using Jensen's inequality and the fact that, for all fixed $c$, the Riemann sum $R_t^{n,(k)}$ converges almost surely towards the integral $\int_0^t e^{\gamma (c +B_s) }V^{(k)}(\varphi_{s})\dd s$, since the process $s\mapsto e^{\gamma (c +B_{s}) }V^{(k)}(\varphi_{s}) $ is continuous. This provides the Feynman-Kac representation as claimed.
\end{proof}
Now our main goal is to construct a quadratic form corresponding to the limit $k\to\infty$ of the quadratic forms $(\mathcal{Q}^{(k)},\mathcal{D}(\mathcal{Q}^{(k)}))$. For $u,v\in \mathcal{C}$, we define
\begin{equation}\label{defQ2}
\mathcal{Q}(u,v):=\lim_{k\to\infty}\mathcal{Q}^{(k)}(u,v).
\end{equation}
Of course, when $\gamma\in (0,\sqrt{2})$, existence of the limit is trivial as $(V^{(k)})$ converges towards $V$ in $L^p(\Omega_\mathbb{T})$ for $p<2/\gamma^2$ and $u\bar{v}\in C^\infty_c(\mathbb{R};L^q(\Omega_\mathbb{T}))$ for any $q>1$. But treating the case $\gamma\in (\sqrt{2},2)$ as well requires another argument: the existence of the limit is guaranteed by the Girsanov transform, namely that for $u=u(x_1,y_1\dots,x_n,y_n)$, $v=v(x_1,y_1\dots,x_n,y_n)$ and $k\geq n$ the term involving $V^{(k)}$ in $\mathcal{Q}^{(k)}$ can be rewritten as
\begin{equation}\label{girsuk}
\mathds{E}\int_{\mathbb{R}} e^{\gamma c}V^{(k)}u\bar{v} \dd c=\mathds{E}\int_{\mathbb{R}} \int_0^{2\pi}e^{\gamma c} u_{\rm shift}\bar{v}_{\rm shift} \dd c\dd \theta
\end{equation}
where the function $u_{\rm shift}$ (and similarly for $v$) is defined by
$$u_{\rm shift}(\theta,c, x_1,y_1\dots,x_n,y_n):=u \Big(c,x_1+\gamma \cos(\theta),y_1-\gamma \sin(\theta)\dots,x_n+\frac{\gamma}{\sqrt{n}}\cos(n\theta),y_n-\frac{\gamma}{\sqrt{n}}\sin(n\theta)\Big).$$
Hence the term $\mathds{E}\int_{\mathbb{R}} e^{\gamma c}V^{(k)}u\bar{v} \dd c$ does not depend on $k\geq n$.
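Let us also record the mechanism behind \eqref{girsuk}. Tilting the Gaussian measure by the density $e^{\gamma \varphi^{(k)}(\theta)-\frac{\gamma^2}{2}\mathds{E}[\varphi^{(k)}(\theta)^2]}$ shifts every jointly Gaussian coordinate by $\gamma$ times its covariance with $\varphi^{(k)}(\theta)$: for $G$ measurable with polynomial growth, depending on $(x_1,y_1,\dots,x_n,y_n)$, and $k\geq n$,
\[
\mathds{E}\big[e^{\gamma \varphi^{(k)}(\theta)-\frac{\gamma^2}{2}\mathds{E}[\varphi^{(k)}(\theta)^2]}\,G\big]
=\mathds{E}\big[G\big(x_1+\gamma\,\mathds{E}[x_1\varphi^{(k)}(\theta)],\,y_1+\gamma\,\mathds{E}[y_1\varphi^{(k)}(\theta)],\dots\big)\big].
\]
Since the harmonics with index larger than $n$ are independent of $(x_1,y_1,\dots,x_n,y_n)$ under $\P_\mathbb{T}$, these covariances do not depend on $k\geq n$ and, as can be read off from the definition of $u_{\rm shift}$ above, they equal $\frac{1}{\sqrt{m}}\cos(m\theta)$ and $-\frac{1}{\sqrt{m}}\sin(m\theta)$ for the $m$-th pair of coordinates.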
We also denote by $u_{\rm shift(N)}$ the same as above except that the first $N$ variables $(x_1,y_1,\dots,x_N,y_N)$ are shifted and other variables remain unchanged.
Also, we denote by $\mathbf{R}_{*,\lambda}$ the resolvent family associated with the Feynman-Kac semigroup $e^{-t\mathbf{H}_*}$. Let $\mathcal{Q}_*$ denote the quadratic form associated to $\mathbf{H}_*$ with domain $\{u\in L^2; \lim_{t\to0}\langle u, \frac{u-e^{-t\mathbf{H}_*}u}{t}\rangle_2<\infty\}$. Using \cite[Section 1.4]{sznitman}, $\mathcal{Q}_*$ is closed. The following lemma is fundamental: though the limit \eqref{defQ2} makes sense for any value of $\gamma\in \mathbb{R}$, the fact that it can be related to the quadratic form $ \mathcal{Q}_*$ deeply relies on GMC theory, hence on the fact that $\gamma\in (0,2)$.
\begin{lemma}
For $u,v\in\mathcal{C}$, we have $ \mathcal{Q}_*(u,v)= \mathcal{Q}(u,v)$. In particular, $ \mathcal{Q}$ is closable.
\end{lemma}
\begin{proof}
Let $u,v\in\mathcal{C}$, say depending on the first $n$-th harmonics. From \eqref{FKgeneral}, we have to show that, as $t\to 0$,
$$D_t:=e^{-\frac{Q^2t}{2}}\int\mathds{E}\Big[ u(c+B_t,\varphi_t) v(c,\varphi)e^{-\mu e^{\gamma c}\int_{\mathbb{D}_t^c} |z|^{-\gamma Q}M_\gamma (\dd z)}\Big] \,\dd c=\langle u,v\rangle_2-t \mathcal{Q}(u,v)+o(t).$$
For notational simplicity and only in this proof, let us denote $V_t:=\int_{\mathbb{D}_t^c} |z|^{-\gamma Q}M_\gamma (\dd z)$. Then
\begin{align*}
D_t-\langle u,v\rangle_2=& \langle e^{-t\mathbf{H}_0}u-u,v\rangle_2+(1+o(1))\int\mathds{E}\Big[ u(c+B_t,\varphi_t) v(c,\varphi)(e^{-\mu e^{\gamma c}V_t}-1)\Big] \,\dd c\\
=&-t\mathcal{Q}_0(u,v)+o(t)+(1+o(1))\int\mathds{E}\Big[ (u(c+B_t,\varphi_t) -u(c,\varphi))v(c,\varphi)(e^{-\mu e^{\gamma c}V_t}-1)\Big] \,\dd c\\
&-(1+o(1))\int\mathds{E}\Big[ u(c,\varphi)v(c,\varphi)(1-e^{-\mu e^{\gamma c}V_t})\Big] \,dc \\
=&:-t\mathcal{Q}_0(u,v)+o(t)+(1+o(1))D^1_t-(1+o(1))D^2_t.
\end{align*}
Let us show that
\begin{equation}\label{D2t}
D_t^2=t(1+o(1))\mu\int e^{\gamma c}\int_0^{2\pi} \mathds{E}\Big[ u_{\rm shift}(c,\varphi)v_{\rm shift}(c,\varphi)\Big] \,\dd c\dd\theta .
\end{equation} For this, up to decomposing $u,v$ into their positive and negative parts, we may assume that $u,v$ are non-negative. First, we use the inequality $1-e^{-x}\leq x$ for $x\geq 0$ to get
\begin{align*}
D_t^2\leq & \int \mathds{E}\Big[ u(c,\varphi)v(c,\varphi)\mu e^{\gamma c}V_t\Big] \,\dd c\\
=& \mu\int e^{\gamma c}\int_0^{2\pi}\int_{e^{-t}}^1\mathds{E}\Big[ e^{\gamma P\varphi(re^{i\theta})-\frac{\gamma^2}{2}\mathds{E}[P\varphi(re^{i\theta})^2]} u(c,\varphi)v(c,\varphi)\Big] \,\dd c r^{1-\gamma Q}\,\dd r\dd\theta \\
=& \mu\int e^{\gamma c}\int_0^{2\pi}\int_{e^{-t}}^1\mathds{E}\Big[ u^r_{\rm shift}(c,\varphi)v^r_{\rm shift}(c,\varphi)\Big] \,\dd c r^{1-\gamma Q}\,\dd r\dd\theta
\end{align*}
where we have used the Girsanov transform in the last line and the function $ u^r_{\rm shift}$ is defined by
$$u^r_{\rm shift}(\theta,c, x_1,y_1,\dots,x_n,y_n):=u \big(c,x_1+a_1,y_1+b_1,\dots,x_n+a_n,y_n+b_n\big),$$
with $a_k:= \frac{\gamma}{\sqrt{k}}r^k\cos(k\theta)$ and $b_k:= -\frac{\gamma}{\sqrt{k}}r^k\sin(k\theta)$.
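The closed form of these shifts can be traced back to the elementary expansion, valid for $0\leq r<1$,
\[
\ln\frac{1}{|1-re^{i\phi}|}={\rm Re}\sum_{k\geq 1}\frac{(re^{i\phi})^k}{k}=\sum_{k\geq 1}\frac{r^k}{k}\cos(k\phi),
\]
applied, harmonic by harmonic, to the covariance of $P\varphi(re^{i\theta})$ with the boundary field: projecting on the $k$-th harmonic produces the $r^k\cos(k\theta)$ and $r^k\sin(k\theta)$ dependence of $a_k$ and $b_k$ above.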
Because $u,v\in \mathcal{C}$, it is then plain to deduce that $D^2_t\leq t(1+o(1))\mu\int e^{\gamma c}\int_0^{2\pi} \mathds{E}\Big[ u_{\rm shift}(c,\varphi)v_{\rm shift}(c,\varphi)\Big] \,\dd c\dd\theta $.
Second, we use the inequality $1-e^{-x}\geq x e^{-x}$ for $x\geq 0$ to get, using again the Girsanov transform,
\begin{align*}
D_t^2\geq & \int \mathds{E}\Big[ u(c,\varphi)v(c,\varphi)\mu e^{\gamma c}V_te^{-\mu e^{\gamma c}V_t}\Big] \,\dd c\\
=& \mu\int e^{\gamma c}\int_0^{2\pi}\int_{e^{-t}}^1\mathds{E}\Big[ u^r_{\rm shift}(c,\varphi)v^r_{\rm shift}(c,\varphi)e^{-\mu e^{\gamma c}V^{\rm shift}_t(re^{i\theta})}\Big] \,\dd c\, r^{1-\gamma Q}\,\dd r\dd\theta
\end{align*}
with $V^{\rm shift}_t(re^{i\theta}):=\int_{\mathbb{D}_t^c} |z|^{-\gamma Q}|z-re^{i\theta}|^{-\gamma^2}M_\gamma (\dd z)$. It remains to get rid of the exponential term in the last expectation. So we split the above expectation into two parts by writing $e^{-\mu e^{\gamma c} V^{\rm shift}_t} =1+(e^{-\mu e^{\gamma c} V^{\rm shift}_t} -1)$. The first part produces $ t(1+o(1))\mu\int e^{\gamma c}\int_0^{2\pi} \mathds{E}\Big[ u_{\rm shift}(c,\varphi)v_{\rm shift}(c,\varphi)\Big] \,\dd c\dd\theta $ exactly as above. For the second part, corresponding to $(e^{-\mu e^{\gamma c} V^{\rm shift}_t} -1)$, we want to show that it is negligible. For this, we write
\begin{align}
\mu\int e^{\gamma c}&\int_0^{2\pi} \int_{e^{-t}}^1\mathds{E}\Big[ u^r_{\rm shift}(c,\varphi)v^r_{\rm shift}(c,\varphi)(1-e^{-\mu e^{\gamma c}V^{\rm shift}_t})\Big] \,\dd c r^{1-\gamma Q}\,\dd r\dd\theta\label{fuck} \\
\leq &\mu\int e^{\gamma c}\int_0^{2\pi}\int_{e^{-t}}^1\mathds{E}\Big[ (u^r_{\rm shift}(c,\varphi)v^r_{\rm shift}(c,\varphi))^p\Big]^{1/p}\mathds{E}\Big[ |1-e^{-\mu e^{\gamma c}V^{\rm shift}_t(re^{i\theta})} |^q \Big]^{1/q} \,\dd c\, r^{1-\gamma Q}\,\dd r\dd\theta\nonumber
\end{align}
where we have used H\"{o}lder's inequality in the last line for any fixed conjugate exponents $p,q$. The first expectation $c\mapsto \mathds{E}\Big[ u^r_{\rm shift}(c,\varphi)v^r_{\rm shift}(c,\varphi))^p\Big]^{1/p}$ is bounded uniformly with respect to $r,c$ and has fixed compact support in $c$ (because $v$ has so), say $[-A,A]$ for some $A>0$. Now we claim that the family of functions $(c,r)\mapsto f_t(c,r):=\mathds{E}\Big[ |1-e^{-\mu e^{\gamma c}V^{\rm shift}_t(re^{i\theta})} |^q \Big]^{1/q}$ (this quantity does not depend on $\theta$ by invariance in law under rotation) converges uniformly towards $0$ as $t\to 0$, from which one can easily deduce that the quantity \eqref{fuck} is $o(t)$. Since $f_t$ is increasing in $c$ hence monotonic for $c$ in $[-A,A]$) it is enough to prove this claim for a fixed $c$. Next, recall \cite[Lemma 3.10]{DKRV} the condition of finiteness for integrals of GMC with singularities: for any $\alpha\in (0,Q)$, $p>0$ and $z_0\in\mathbb{D}$
\begin{equation}\label{conising}
\mathds{E}\Big[\Big(\int_\mathbb{D} \frac{1}{|z-z_0|^{\gamma\alpha}}M_\gamma(\dd z)\Big)^p\Big]<+\infty \Leftrightarrow p<\tfrac{2}{\gamma}(Q-\alpha).
\end{equation}
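In the case used just below, $\alpha=\gamma$, the threshold reads
\[
\tfrac{2}{\gamma}(Q-\gamma)=\tfrac{2Q}{\gamma}-2=\tfrac{4}{\gamma^2}-1>0\quad\text{for all }\gamma\in(0,2),
\]
so that some positive moment of the singular GMC integral is always available.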
Taking $\alpha=\gamma$ and $z_0=re^{i\theta}$, the finiteness of the expectation above implies that for fixed $r\in [\tfrac{1}{2},1]$ (and fixed $c$) $\lim_{t\to 0}f_t(c,r)=0$ by using dominated convergence. Now we claim that for fixed $t$ and $c$, the mapping $r\mapsto f_t(c,r)$ is continuous. To see this, for each $\delta>0$ let us introduce the mapping $$r\in [\tfrac{1}{2},1] \mapsto f_{t,\delta}(c,r):=\mathds{E}\Big[ |1-e^{-\mu e^{\gamma c}V^{\rm shift}_{t,\delta}(re^{i\theta})} |^q \Big]^{1/q}$$ where $V^{\rm shift}_{t,\delta}$ is the potential with regularized singularity at scale $\delta$
$$V^{\rm shift}_{t,\delta}(re^{i\theta}):=\int_{\mathbb{D}_t^c} |z|^{-\gamma Q}(|z-re^{i\theta}|\vee \delta)^{-\gamma^2}M_\gamma (\dd z).$$
Obviously, $f_{t,\delta}(c,r)$ is a continuous function of the variable $r$. We want to show that the family $(f_{t,\delta})_\delta$ converges uniformly towards $f_{t}$ as $\delta\to 0$. By using the triangular inequality
\begin{align*}
|f_{t,\delta}(c,r)-f_{t}(c,r)|\leq &\mathds{E}\Big[|e^{-\mu e^{\gamma c}V^{\rm shift}_{t,\delta}(re^{i\theta})} -e^{-\mu e^{\gamma c}V^{\rm shift}_{t}(re^{i\theta})} |^q\Big]^{1/q}\\
\leq &\mu^\alpha e^{\alpha\gamma c}\mathds{E}\Big[| V^{\rm shift}_{t}(re^{i\theta}) -V^{\rm shift}_{t,\delta}(re^{i\theta}) |^{\alpha q}\Big]^{1/q}
\end{align*}
where we have used the inequality $|e^{-x}-e^{-y}|\leq |x-y|^{\alpha}$, valid for arbitrary $\alpha\in (0,1]$ and $x,y\geq 0$. We fix $\alpha$ such that $\alpha q<\tfrac{2}{\gamma}(Q-\gamma)$ to make sure that the expectation is finite, using the criterion \eqref{conising}. By invariance in law of $M_\gamma$ under translation, this quantity does not depend on $r$ and is bounded by the following expression (this is exact when the singularity lies at positive distance from the boundary; when the singularity approaches the boundary, the quantity is strictly smaller than the bound below)
\begin{equation}\label{singsmallball}
C\mathds{E}\Big[\Big(\int_{|z|\leq \delta } |z|^{-\gamma^2}M_\gamma (\dd z)\Big)^{\alpha q}\Big]^{1/q}.
\end{equation}
By multifractal scaling (see the proof of \cite[Lemma 3.10]{DKRV}), \eqref{singsmallball} is equal to $C\delta ^{\gamma (Q-\gamma)\alpha-\frac{\gamma^2}{2}\alpha^2q}$, and the exponent is positive under the condition $\alpha q<\tfrac{2}{\gamma}(Q-\gamma)$: indeed $\gamma (Q-\gamma)\alpha-\frac{\gamma^2}{2}\alpha^2q=\gamma\alpha\big(Q-\gamma-\tfrac{\gamma}{2}\alpha q\big)>0$ precisely when $\alpha q<\tfrac{2}{\gamma}(Q-\gamma)$. This establishes the uniform convergence of $(f_{t,\delta})_\delta$ towards $f_{t}$ as $\delta\to 0$ over $r\in [\tfrac{1}{2},1]$. In conclusion, for fixed $c$, the family $(f_t(c,\cdot))_{t>0}$ is a family of continuous functions that decreases pointwise towards $0$ as $t\to 0$. Hence the convergence is uniform by Dini's theorem. So we have proved \eqref{D2t}.
Similar arguments can be used to show that $D^1_t=o(t)$. Indeed, we use again the inequality $1-e^{-x}\leq x$ to get the bound
$$|D^1_t|\leq \mu\int e^{\gamma c}\mathds{E}\Big[ |u(c+B_t,\varphi_t)-u(c,\varphi)|| v(c,\varphi)| V_t \Big] \,\dd c$$
and then the Girsanov transform and H\"{o}lder to obtain
\begin{align*}
|D^1_t|\leq &\mu\int_0^{2\pi}\int_{e^{-t}}^1\int e^{\gamma c}\mathds{E}\Big[ |u^{\rm shift}(t,c+B_t,\theta,\varphi_t)-u^{\rm shift}(0,c,\theta,\varphi)|| v^{\rm shift}(0,c,\theta,\varphi)| \Big] \,\dd \theta\dd r\dd c\\
\leq &\mu\int_0^{2\pi}\int_{e^{-t}}^1\int e^{\gamma c}\mathds{E}\Big[ |u^{\rm shift}(t,c+B_t,\theta,\varphi_t)-u^{\rm shift}(0,c,\theta,\varphi)|^p\Big]^{1/p}\mathds{E}[| v^{\rm shift}(0,c,\theta,\varphi)|^q]^{1/q} \,\dd \theta\dd r\dd c
\end{align*}
with
$$u^{\rm shift}(t,c,\theta,x_1,y_1\dots,x_n,y_n):=u \Big(c,x_1+a_1,y_1+b_1,\dots,x_n+a_n,y_n+b_n\Big)$$
and
$$a_k:=\frac{\gamma k^{1/2}}{\pi}\int_0^{2\pi}\ln\frac{1}{|e^{-t+i\theta'}-re^{i\theta}|}\cos(k\theta')\,\dd \theta',\quad b_k:=-\frac{\gamma k^{1/2}}{\pi}\int_0^{2\pi}\ln\frac{1}{|e^{-t+i\theta'}-re^{i\theta}|}\sin(k\theta')\,\dd \theta'. $$
Again, the second expectation has compact support in $c$ whereas the first satisfies $\sup_{r\in[e^{-t},1]} \mathds{E}\Big[ |u^{\rm shift}(t,c+B_t,\theta,\varphi_t)-u^{\rm shift}(0,c,\theta,\varphi)|^p\Big]^{1/p}\to 0$ as $t\to 0$, since $u\in \mathcal{C}$ and $B_t$ and the first $n$ harmonics of the field $(\varphi_t)_t$ are continuous. One can then easily conclude.
\end{proof}
From the above lemma, the quadratic form $(\mathcal{C},\mathcal{Q} )$ is closable. Let $ \mathcal{D}(\mathcal{Q})$ be the completion of $ \mathcal{C}$ for the $\mathcal{Q}$-norm. The completion is the vector space consisting of equivalence classes of Cauchy sequences of
$\mathcal{C}$ for the norm $\|u\|_{\mathcal{Q}}:=\sqrt{\mathcal{Q}(u,u)}$ under the equivalence relation
$u\sim v$ iff $\|u_n-v_n\|_{\mathcal{Q}}\to 0$ as $n\to \infty$. This space is a Hilbert space. It can be identified with the closure of the space of quadruples $\{(u,\partial_cu,\mathbf{P}^{1/2}u,u_{\rm shift});u\in \mathcal{C}\}$ in $(L^2)^3\times L^2(\mathbb{R}\times\Omega_\mathbb{T}\times [0,2\pi],e^{\gamma c}\dd c\otimes \P_\mathbb{T}\otimes \dd \theta)$. $ \mathcal{D}(\mathcal{Q})$ embeds injectively in $L^2$ as it is closed. Now we want to show that
\begin{proposition}
For $\gamma\in (0,2)$, the quadratic form $( \mathcal{D}(\mathcal{Q}),\mathcal{Q})$ defines an operator $\mathbf{H}$ (the Friedrichs extension), which satisfies $\mathbf{H}_*=\mathbf{H}$.
\end{proposition}
\begin{proof}
Take $F\in \mathcal{C}$ and consider $u:=\mathbf{R}_{*,\lambda}F$ (resp. $u_k:=\mathbf{R}_{\lambda}^{(k)}F$) for $\lambda>0$. We will repeatedly use below the fact that for $k$ large enough $u_{k,{\rm shift}}=u_{k,{\rm shift}(k)}$ since $F\in \mathcal{C}$. Indeed, this follows from the Feynman-Kac formula (by Prop. \ref{prop:fkhk})
\begin{equation}\label{fkres}
u_k=\int_0^{\infty}e^{-\lambda t}e^{-t\mathbf{H}^{(k)}}F\,\dd t=\int_0^{\infty }e^{-(\lambda+\frac{Q^2}{2})t}\mathds{E}_{\varphi}\big[ F(c+B_t,\varphi_t)e^{-\mu\int_0^t e^{\gamma (c +B_s) }V^{(k)}(\varphi_s)\dd s}\big]\,\dd t.
\end{equation}
With this expression, one can see that if $F$ depends on the first $N$ harmonics of the field $\varphi$ then for $k\geq N$ we have $u_{k,{\rm shift}}=u_{k,{\rm shift}(k)}$.
The first step of the proof is to observe that
$$u_k\to u \quad \text{as }k\to\infty \quad\text{in }L^2(\mathbb{R}\times\Omega_\mathbb{T}).$$ This follows from the Feynman-Kac representation \eqref{fkres}+\eqref{FKgeneral} and standard GMC theory that ensures that $\int_0^t e^{\gamma (c +B_s) }V^{(k)}(\varphi_s)\dd s$ converges almost surely towards $e^{\gamma c}\int_{\mathbb{D}_t^c} |z|^{-\gamma Q}M_\gamma (\dd z)$ for $\gamma\in (0,2)$.
Furthermore, since
$$\lambda\|u_k\|_2^2+\mathcal{Q}^{(k)}(u_k)=\langle F,u_k\rangle_2$$
we deduce using Cauchy-Schwarz in the r.h.s. that
\begin{equation}\label{tight}
\sup_k\mathcal{Q}^{(k)}(u_k)<+\infty.
\end{equation}
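Explicitly, since $ab\leq \lambda a^2+\frac{b^2}{4\lambda}$ for $a,b\geq 0$, the identity above yields the quantitative version of \eqref{tight}
\[
\mathcal{Q}^{(k)}(u_k)=\langle F,u_k\rangle_2-\lambda\|u_k\|_2^2\leq \|F\|_2\|u_k\|_2-\lambda\|u_k\|_2^2\leq \frac{\|F\|_2^2}{4\lambda},
\]
uniformly in $k$.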
This entails that the sequences $(\partial_cu_k)_k$ and $(\mathbf{P}^{1/2}u_k)_k$ weakly converge up to subsequences in $L^2(\mathbb{R}\times\Omega_\mathbb{T})$ and, by \eqref{girsuk}, that $(u_{k,{\rm shift}})_k$ weakly converges in $e^{-\gamma c/2}L^2([0,2\pi]\times\mathbb{R}\times\Omega_\mathbb{T})$. Strong convergence of $(u_k)_k$ towards $u$ and weak convergence of $(\partial_cu_k)_k$ and $(\mathbf{P}^{1/2}u_k)_k$ imply that their respective weak limits must be $\partial_cu$ and $\mathbf{P}^{1/2}u$. The resolvent equation associated to $u_k$ reads for $v\in\mathcal{C}$
\begin{equation}
\lambda\langle u_k,v\rangle_2+\mathcal{Q}^{(k)}(u_k,v)=\langle F,v\rangle_2.
\end{equation}
Denote by $z$ a possible weak limit of $(u_{k,{\rm shift}})_k$ in $e^{-\gamma c/2}L^2([0,2\pi]\times\mathbb{R}\times\Omega_\mathbb{T})$.
Passing to the limit in $k\to\infty$ (up to appropriate subsequence) produces
\begin{equation}\label{eqlimitQ}
\tfrac{1}{2}\mathds{E}\int_{\mathbb{R}} \Big( \partial_c u \partial_c \bar{v}+(Q^2+2\lambda)u\bar{v}+ 2( \mathbf{P}^{1/2} u)\overline{\mathbf{P}^{1/2}v}+2\mu \int_0^{2\pi}e^{\gamma c}z \bar{v}_{\rm shift}\dd\theta\Big)\dd c=\langle F,v\rangle_2.
\end{equation}
Taking $v=u_k$ and passing to the limit as $k\to\infty$, we get
\begin{equation}
\tfrac{1}{2}\mathds{E}\int_{\mathbb{R}} \Big( |\partial_c u|^2+(Q^2+2\lambda)|u|^2+ 2| \mathbf{P}^{1/2} u|^2+2\mu \int_0^{2\pi}e^{\gamma c}|z|^2\dd\theta\Big)\dd c=\langle F,u\rangle_2.
\end{equation}
By weak limit we have
\begin{align*}
\tfrac{1}{2}\mathds{E}\int_{\mathbb{R}} &\Big( |\partial_c u|^2+(Q^2+2\lambda)|u|^2+ 2| \mathbf{P}^{1/2} u|^2+2\mu \int_0^{2\pi}e^{\gamma c}|z|^2\dd\theta\Big)\dd c\\
\leq &
\liminf_k\tfrac{1}{2}\mathds{E}\int_{\mathbb{R}} \Big( |\partial_c u_k|^2+(Q^2+2\lambda)|u_k|^2+ 2| \mathbf{P}^{1/2} u_k|^2+2\mu \int_0^{2\pi}e^{\gamma c}| u_{k,{\rm shift}}|^2\dd\theta\Big)\dd c .
\end{align*}
Also
\begin{align*}
\tfrac{1}{2}\mathds{E}\int_{\mathbb{R}} \Big( |\partial_c u_k|^2+(Q^2+2\lambda)|u_k|^2+ 2| \mathbf{P}^{1/2} u_k|^2+2\mu \int_0^{2\pi}e^{\gamma c}| u_{k,{\rm shift}}|^2\dd\theta\Big)\dd c =\langle F,u_k\rangle_2\to \langle F,u\rangle_2
\end{align*}
as $k\to\infty$. This shows that the weak convergence of the sequence $(u_k,\partial_cu_k,\mathbf{P}^{1/2} u_k,u_{k,{\rm shift}})_k $ actually holds in the strong sense. Also, it is easy to check from \eqref{eqlimitQ} that the limit is unique. This implies the convergence of $(u_k)_k$ in $\mathcal{Q}$-norm toward $u$, which thus belongs to $ \mathcal{D}(\mathcal{Q})$. Hence $ \mathbf{R}_{*,\lambda}$ maps $\mathcal{C}$ into $\mathcal{D}(\mathcal{Q})\subset L^2(\mathbb{R}\times\Omega_\mathbb{T})$ and it coincides on $\mathcal{C}$ with the resolvent associated to $\mathcal{Q}$. Since $\mathcal{C}$ is dense in $L^2(\mathbb{R}\times\Omega_\mathbb{T})$, this shows that both resolvent families coincide, hence their semigroups, quadratic forms and generators too.
\end{proof}
Again, we stress that $\mathcal{D}({\bf H})=\{ u\in\mathcal{D}(\mathcal{Q})\,|\, {\bf H}u\in L^2(\mathbb{R}\times \Omega_\mathbb{T})\}$ and ${\bf H}^{-1}:L^2(\mathbb{R}\times \Omega_\mathbb{T})\to \mathcal{D}({\bf H})$ is bounded. Furthermore, by the spectral theorem, $\mathbf{H}$ generates a strongly continuous contraction semigroup of self-adjoint operators $(e^{-t \mathbf{H} } )_{t\geq 0}$ on $L^2(\mathbb{R}\times\Omega_\mathbb{T})$.
If we let $\mathcal{D}(\mathcal{Q})'$ be the dual to $\mathcal{D}(\mathcal{Q})$ (i.e. the space of bounded conjugate linear functionals on $\mathcal{D}(\mathcal{Q})$), the injection $L^2(\mathbb{R}\times \Omega_\mathbb{T})\subset \mathcal{D}(\mathcal{Q})'$ is continuous and the operator ${\bf H}$ can be extended as a bounded isomorphism
\[{\bf H}:\mathcal{D}(\mathcal{Q})\to \mathcal{D}(\mathcal{Q})'.\]
\begin{remark}
By adapting some argument in \cite{rs1,rs2} one can prove that ${\bf H}$ is essentially self-adjoint on $\mathcal{D}({\bf H}^0)\cap \mathcal{D}(e^{\gamma c}V)$ for $\gamma\in (0,1)$, condition that ensures that the potential $V$ is in $L^2(\Omega_\mathbb{T})$.
\end{remark}
\section{Scattering of the Liouville Hamiltonian}\label{sec:scattering}
In this section, we develop the scattering theory for the operator $\mathbf{H}$ on $L^2(\mathbb{R}\times \Omega_\mathbb{T})$ with underlying measure $dc\otimes \P_\mathbb{T}$ (where $\mathbf{H}$ is the generator of the dilation semigroup studied above). This operator has continuous spectrum and cannot be diagonalized with a complete set of $L^2(\mathbb{R}\times \Omega_\mathbb{T})$-eigenfunctions.
We will rather use a stationary approach for this operator, in a way similar to what has been done in geometric scattering theory for manifolds with cylindrical ends in \cite{Gui, Mel}. The goal is to obtain a spectral resolution for $\mathbf{H}$ in terms of generalized eigenfunctions, which will be shown to be analytic in the spectral parameter. In other words,
we seek to write the spectral measure of $\mathbf{H}$ using these generalized eigenfunctions, which are similar to the plane waves $(e^{i\lambda \omega\cdot x})_{\lambda\in \mathbb{R},\omega\in S^{n-1}}$ in Euclidean scattering for the Laplacian $\Delta_x$ on $\mathbb{R}^n$. In our case, the generalized eigenfunctions will be functions in weighted spaces of the form $e^{-\beta c_-}L^2(\mathbb{R}\times \Omega_\mathbb{T})$ for $\beta>0$ with particular asymptotic expansions at $c=-\infty$. Let us briefly explain the simplest one, corresponding to the functions $\Psi_{\alpha}:=\Psi_{\alpha,{\bf 0},{\bf 0}}=U(V_\alpha(0))$ defined in the Introduction and represented probabilistically by \eqref{defvalpha} using the unitary map \eqref{udeff} when $\alpha<Q$ is real.
For $\alpha \in (Q-\gamma/2,Q)$, they will be the only eigenfunctions of ${\bf H}$ in $e^{-\beta c_-}L^2(\mathbb{R}\times \Omega_\mathbb{T})$ for $\beta>Q-\alpha$ satisfying
\[ ({\bf H}-2\Delta_\alpha)\Psi_\alpha=0, \quad \Psi_{\alpha}=e^{(\alpha-Q)c}+ u, \quad \textrm{ with } u\in L^2(\mathbb{R}\times \Omega_\mathbb{T}).\]
Similarly, we will construct (in Section \ref{sub:holomorphic}) some eigenfunctions $\Psi_{\alpha,{\bf k},{\bf l}}$ of ${\bf H}$ with eigenvalue $2\Delta_\alpha+\lambda_{{\bf k}{\bf l}}$, with $\Psi_{\alpha,{\bf k},{\bf l}}|_{\mathbb{R}^+}\in L^2$ and asymptotic (for all $\beta\in(0,\gamma/2)$)
\[\Psi_{\alpha,{\bf k},{\bf l}}|_{\mathbb{R}^-}=e^{(\alpha-Q)c}\psi_{{\bf k}{\bf l}}+ u, \quad \textrm{ with }u\in e^{(\alpha-Q+\beta)c}L^2(\mathbb{R}^-\times \Omega_\mathbb{T}),\]
where we recall that $\lambda_{{\bf kl}}\in \mathbb{N}$ are the eigenvalues of $P$ defined in \eqref{firstlength} and the corresponding eigenfunctions $\psi_{{\bf kl}}$ of $P$ appear in \eqref{fbasishermite}.
One way to construct the $\Psi_{\alpha}$, and similarly for $\Psi_{\alpha,{\bf k},{\bf l}}$, is to take the limit (see Proposition \ref{Pellproba})
\[ \Psi_\alpha= \lim_{t\to \infty}e^{-t{\bf H}}(e^{2t\Delta_\alpha}e^{(\alpha-Q)c}),\]
where we observe that $e^{-t{\bf H}^0}e^{(\alpha-Q)c}=e^{-2t\Delta_\alpha}e^{(\alpha-Q)c}$ so that, formally speaking, $\Psi_\alpha$ is the limit of the intertwining $e^{-t{\bf H}}e^{t{\bf H}^0}(e^{(\alpha-Q)c})$ as $t\to +\infty$.
An alternative expression is to write them as
\begin{equation}\label{Psialphawithresolvent}
\Psi_\alpha= e^{(\alpha-Q)c}\chi(c)- ({\bf H}-2\Delta_\alpha)^{-1}({\bf H}-2\Delta_\alpha)(e^{(\alpha-Q)c}\chi(c))
\end{equation}
where $\chi\in C^\infty(\mathbb{R})$ is equal to $1$ near $-\infty$ and $0$ near $+\infty$ (see \eqref{definPellproba} and Lemma \ref{firstPoisson}); here we notice that
$e^{(\alpha-Q)c}\chi(c)$ does not belong to $L^2(\mathbb{R}\times \Omega_\mathbb{T})$, but one can check that $({\bf H}-2\Delta_\alpha)(e^{(\alpha-Q)c}\chi(c))\in L^2$ if $\gamma > Q-\alpha$ (and $\gamma<1$), so that we can apply the resolvent ${\bf R}(\alpha):=({\bf H}-2\Delta_\alpha)^{-1}$ to it; note that $({\bf H}-2\Delta_\alpha)^{-1}({\bf H}-2\Delta_\alpha)(e^{(\alpha-Q)c}\chi(c))\in L^2$ is not equal to $e^{(\alpha-Q)c}\chi(c)$.
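For the reader's convenience, here is a sketch of that check, in the range $\gamma<1$ where $V$ is an honest potential in $L^2(\Omega_\mathbb{T})$. Since $e^{(\alpha-Q)c}$ does not depend on $\varphi$, we have ${\bf P}e^{(\alpha-Q)c}=0$ and $-\tfrac{1}{2}(\alpha-Q)^2+\tfrac{Q^2}{2}=2\Delta_\alpha$, so that
\[
({\bf H}-2\Delta_\alpha)\big(e^{(\alpha-Q)c}\chi(c)\big)=e^{(\alpha-Q)c}\Big(-\tfrac{1}{2}\chi''(c)-(\alpha-Q)\chi'(c)\Big)+e^{(\gamma+\alpha-Q)c}\,V\,\chi(c).
\]
The first term is compactly supported in $c$ (as $\chi'$ and $\chi''$ are), hence belongs to $L^2$, while the second one vanishes near $c=+\infty$ (since $\chi$ does) and is square integrable near $c=-\infty$ exactly when $\gamma+\alpha-Q>0$, i.e. $\gamma>Q-\alpha$.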
Our goal will be to extend analytically these $\Psi_\alpha$ (and actually the whole family of generalized eigenstates denoted by $\Psi_{\alpha,\mathbf{k},\mathbf{l}}$) to ${\rm Re}(\alpha)\leq Q$ and in particular to the line $\alpha\in Q+i\mathbb{R}$ corresponding to the spectrum of ${\bf H}$. To perform this, we need to extend analytically the resolvent operator ${\bf R}(\alpha)$ to ${\rm Re}(\alpha)\leq Q$, which will be the main part of this section. In fact, we shall show that ${\bf R}(\alpha)$ extends analytically on an open set of a Riemann surface covering the complex plane, containing the real half-plane ${\rm Re}(\alpha)\leq Q$. We note that the functions $\Psi_{\alpha,\mathbf{k},\mathbf{l}}$ will be expressed as the elements in the range of some Poisson operator denoted $\mathcal{P}(\alpha)$, mapping (some subspaces of) $L^2(\Omega_\mathbb{T})$ to weighted spaces $e^{-\beta c_-}L^2(\mathbb{R}\times \Omega_\mathbb{T})$ for some $\beta>0$ depending on $\alpha$. Analogous results have been proved in some cases in finite-dimensional geometric scattering \cite{Gui, Mel}, but we are not aware of results of this type in quantum field theory, where the base space is infinite dimensional.
The main difficulty will be to deal with the fact that the perturbation $V$ is quite singular (even for $\gamma<\sqrt{2}$ where $V$ is a potential, it is not bounded) and the fact that the eigenfunctions of ${\bf P}$ (Hermite polynomials) have $L^p( \Omega_\mathbb{T})$ norms which blow up
very fast in terms of their eigenvalues.
In this section, we shall start by describing the resolvent of ${\bf H}$ in the \emph{probabilistic region} $\{{\rm Re}((\alpha-Q)^2)>\beta^2\}$ acting on weighted spaces $e^{\beta c_-}L^2$ for
$\beta\in\mathbb{R}$ and deduce the construction of the $\Psi_{\alpha,\mathbf{k},\mathbf{l}}$ in this region. Next,
we will show that the resolvent ${\bf R}(\alpha)$ admits an analytic extension in a neighborhood of $\{{\rm Re}(\alpha)\leq Q\}$ (for $\alpha$ on some Riemann surface $\Sigma$). We shall use these results to prove the analytic continuation of the $\Psi_{\alpha,\mathbf{k},\mathbf{l}}$ to ${\rm Re}(\alpha)\leq Q$ and we shall finally construct the scattering operator ${\bf S}(\alpha)$ in Section \ref{section_scattering} and write the spectral decomposition of ${\bf H}$ in terms of the $\Psi_{\alpha,\mathbf{k},\mathbf{l}}$ in Theorem \ref{spectralmeasure} (written in terms of the Poisson operator in this section).\\
In what follows, we will mostly consider the $L^2$ (or $L^p$) spaces on $\Omega_\mathbb{T}$ or on
$\mathbb{R}\times \Omega_\mathbb{T}$ respectively equipped with the measure $\P_\mathbb{T}$ or $dc\otimes \P_\mathbb{T}$, which we will denote by $L^2(\Omega_\mathbb{T})$ or $L^2(\mathbb{R}\times \Omega_\mathbb{T})$ for short. When the space is omitted, i.e. we simply write $L^2$, this means that we consider $L^2(\mathbb{R}\times \Omega_\mathbb{T})$: this will relieve notations in some latter part of the paper.
Recall that we denote by $\langle \cdot\,|\,\cdot\rangle_{2}$ the standard scalar product associated to $L^2(\mathbb{R}\times \Omega_\mathbb{T}, dc \otimes \P_\mathbb{T})$ and $\|\cdot\|_2$ the associated norm; in general our scalar products will always be complex linear in the left component and anti-linear in the right component.
Given two normed vector space $E$ and $F$, the space of continuous linear mappings from $E$ into $F$ will be denoted by $\mathcal{L}(E,F)$ and when $E=F$ we will simply write $\mathcal{L}(E)$. The corresponding operator norms will be denoted by $\|\cdot\|_{\mathcal{L}(E,F)}$ or $\|\cdot\|_{\mathcal{L}(E)}$.
\subsection{The operators ${\bf H},{\bf H}^0, e^{\gamma c}V$}
The operator ${\bf H}$ is made up of several pieces. The first piece is the operator $\mathbf{P}$ defined in \eqref{hdefi}, which is a self-adjoint non-negative unbounded operator on $L^2(\Omega_\mathbb{T})$. It has discrete spectrum $(\lambda_{\mathbf{k}\mathbf{l}})_{\mathbf{k},\mathbf{l}\in \mathcal{N}}$, but to simplify the indexing we shall order them in increasing order (without counting multiplicity) and denote them by
\[ \sigma(\mathbf{P})=\{ \lambda_j \,|\, j\in\mathbb{N}, \lambda_{j}<\lambda_{j+1}\}.\]
We denote by
\begin{equation}
E_k:=\{F\in L^2(\Omega_\mathbb{T})\,|\, 1_{[0,\lambda_k]}(\mathbf{P})F=F\}=\bigoplus_{j\leq k}\ker(\mathbf{P}-\lambda_j)
\end{equation}
the sum of eigenspaces with eigenvalues less or equal to $\lambda_k$ and $\Pi_{k}: L^2(\Omega_\mathbb{T})\to E_k$ the orthogonal projection. We also will use the important fact
\begin{equation}\label{LpEk}
E_k \subset L^p(\Omega_\mathbb{T}) , \quad \forall p<\infty.
\end{equation}
The quadratic form associated to ${\bf H}$ is the form $\mathcal{Q}$ defined in Subsection \ref{sub:bilinear}, with domain $\mathcal{D}(\mathcal{Q})$.
We will consider the self-adjoint extension of ${\bf H}$ associated with the quadratic form $\mathcal{Q}$. The operator ${\bf H}^0:\mathcal{D}({\bf H}^0)\to L^2$ corresponds to the case $\mu=0$,
its quadratic form has domain $\mathcal{D}(\mathcal{Q}_0)$ containing $\mathcal{D}(\mathcal{Q})$.
We recall that
\[ {\bf H}: \mathcal{D}(\mathcal{Q})\to \mathcal{D}'(\mathcal{Q}), \quad {\bf H}^0: \mathcal{D}(\mathcal{Q})\to \mathcal{D}'(\mathcal{Q})\]
if $\mathcal{D}'(\mathcal{Q})$ is the dual
of $\mathcal{D}(\mathcal{Q})$.
We then define $e^{\gamma c}V:\mathcal{D}(\mathcal{Q})\to \mathcal{D}'(\mathcal{Q})$ by the equation
\begin{equation}\label{definitionofH}
{\bf H}={\bf H}^0+e^{\gamma c}V= -\frac{1}{2} \partial_c^2+\frac{1}{2} Q^2+{\bf P}+e^{\gamma c}V.
\end{equation}
We define the associated quadratic form
\[\mathcal{Q}_{e^{\gamma c}V} :=\mathcal{Q}-\mathcal{Q}_0\]
on $\mathcal{D}(\mathcal{Q})$ and remark that
\[ \mathcal{Q}_{e^{\gamma c}V}(u,u)=\int_{\mathbb{R}}e^{\gamma c}\mathcal{Q}_V(u,u)dc \]
for some well-defined quadratic form $\mathcal{Q}_V$, defined on a domain $\mathcal{D}(\mathcal{Q}_V)\subset L^2(\Omega_\mathbb{T})$ containing $E_k$ for each $k\geq 0$.
For our study of the resolvent of ${\bf H}$, we need to introduce the orthogonal projection
\[\Pi_k:L^2(\Omega_\mathbb{T})\to E_k,\]
extended trivially in the $c$ variable as $\Pi_k:L^2(\mathbb{R}\times \Omega_\mathbb{T})\to L^2(\mathbb{R};E_k)$.
\begin{lemma}\label{chiPikV}
Let $\chi\in L^\infty(\mathbb{R})\cap C^\infty(\mathbb{R})$ with support in $(-\infty,A)$ for some $A\in\mathbb{R}$ and $\chi'\in L^\infty(\mathbb{R})$ . Then, for all $\beta\geq -\gamma/2$ and $\beta'\in\mathbb{R}$, the following operators are bounded
\[ \chi \Pi_k: \mathcal{D}(\mathcal{Q})\to \mathcal{D}(\mathcal{Q}) , \quad \chi \Pi_k: \mathcal{D}'(\mathcal{Q})\to \mathcal{D}'(\mathcal{Q})\]
\[
\chi e^{\gamma c}V\Pi_k : e^{\beta'c}L^2(\mathbb{R}\times \Omega_\mathbb{T})\to e^{(\beta'-\beta) c}\mathcal{D}'(\mathcal{Q}),
\]
\[\chi e^{\gamma c}\Pi_k V: e^{(\beta-\beta') c}\mathcal{D}(\mathcal{Q})\to e^{-\beta' c}L^2(\mathbb{R};E_k),\]
\[ \chi \Pi_k e^{\gamma c}V\Pi_k: e^{\beta'c}L^2(\mathbb{R}\times \Omega_\mathbb{T})\to e^{(\beta'-\beta) c}L^2(\mathbb{R};E_k).\]
If $\beta> -\gamma/2$ then one also has
\[\chi e^{\gamma c}V\Pi_k : e^{\beta'c}L^\infty(\mathbb{R}\times \Omega_\mathbb{T})\to e^{(\beta'-\beta) c}\mathcal{D}'(\mathcal{Q}).\]
\end{lemma}
\begin{proof}
For the first operator $\chi \Pi_k$, it suffices to check that for $u\in L^2((-\infty,A);E_k)$, $\mathcal{Q}_{e^{\gamma c}V}(u)\leq C_k\|u\|^2_{L^2}$. Since there is $C_k>0$ depending on $k$ such that for all $F\in E_k$,
$\mathcal{Q}_{V}(F)\leq C_k\|F\|_{L^2(\Omega_\mathbb{T})}^2$, we have
\[ \mathcal{Q}_{e^{\gamma c}V}(u)\leq C_k \int_{-\infty}^A e^{\gamma c}\|u\|^2_{L^2(\Omega_\mathbb{T})}dc\leq C_{k,A}\|u\|_{L^2(\mathbb{R}\times \Omega_\mathbb{T})}^2.\]
The extension of $\chi \Pi_k$ to $\mathcal{D}'(\mathcal{Q})$ follows by duality, using that $\Pi_k^*=\Pi_k$ on $L^2$.
Next, we analyze $\chi e^{\gamma c}V\Pi_k$. It suffices to deal with the case $\beta'=0$ since $e^{\beta'c}$ commutes with $e^{\gamma c}V$.
Using that for $F\in L^2(\mathbb{R};E_k)$ and $\chi\in L^\infty(\mathbb{R})$ with support in $\mathbb{R}^-$, we have by Cauchy-Schwarz that for all $u\in \mathcal{D}(\mathcal{Q})$
\[ \begin{split}
|\langle \chi(c) e^{(\gamma+\beta) c}V F,u\rangle|\leq
\int_{-\infty}^A \chi(c)e^{(\gamma +\beta) c} |\mathcal{Q}_V(F,u)| dc & \leq \|\chi\|_{L^\infty}
\Big(\int_{-\infty}^A e^{\gamma c}\mathcal{Q}_V(u)dc\Big)^{\frac{1}{2}}
\Big( \int_{-\infty}^Ae^{(\gamma+2\beta) c}\mathcal{Q}_V(F)dc\Big)^\frac{1}{2}\\
\leq C_{k,A}\|\chi\|_{L^\infty}\|F\|_{L^2}\mathcal{Q}(u)^{1/2}
\end{split}\]
for some constant $C_{k,A}>0$, provided $\gamma+2\beta\geq 0$. The same argument also holds for $F\in L^\infty(\mathbb{R};E_k)$.
Finally, $\chi \Pi_k e^{\gamma c}V=\chi e^{\gamma c} \Pi_k V$ makes sense as a map $e^{\beta c}\mathcal{D}(\mathcal{Q})\to e^{\beta c}\mathcal{D}'(\mathcal{Q})$ using the boundedness
$\Pi_k: \mathcal{D}'(\mathcal{Q})\to \mathcal{D}'(\mathcal{Q})$. To prove that it actually maps to $L^2$ if $\gamma+2\beta>0$, we note that for all $u'\in \mathcal{C}$
\[ |\langle \chi \Pi_k e^{\gamma c}Vu,u'\rangle|=|\langle \chi e^{\gamma c}Vu,\Pi_k u'\rangle|=\Big| \int_{-\infty}^A
\chi(c)e^{\gamma c}\mathcal{Q}_{V}(u,\Pi_k(u'))dc\Big|\leq C_{k,A}\|\chi\|_{L^\infty}\mathcal{Q}(e^{-\beta c}u)^{1/2}\|u'\|_{L^2}\]
where the last bound is obtained as above by Cauchy-Schwarz and the bounds on $\mathcal{Q}_V$ on $E_k$.
For the last term $\chi \Pi_k e^{\gamma c}V\Pi_k$, we take $u\in L^2(\mathbb{R}\times \Omega_\mathbb{T})$ and note that for $u'\in \mathcal{D}(\mathcal{Q})$
\[\begin{split}
|\langle \chi e^{\beta c}\Pi_k e^{\gamma c}V\Pi_k u,u'\rangle|=& |\langle \chi e^{\beta c} e^{\gamma c}V\Pi_k u, \Pi_k u'\rangle|
\leq \Big| \int_{-\infty}^A
\chi(c)e^{(\gamma+\beta) c}\mathcal{Q}_{V}(\Pi_ku,\Pi_ku')dc\Big|\\
\leq & \Big| \int_{-\infty}^A
\chi(c)e^{(\gamma+\beta) c}\mathcal{Q}_V(\Pi_k u)^{\frac{1}{2}}\mathcal{Q}_V(\Pi_k u')^{\frac{1}{2}}dc\Big|\leq C_{k,A}\|u\|_{L^2}\|u'\|_{L^2}
\end{split}\]
showing that $\|\chi e^{\beta c}\Pi_k e^{\gamma c}V\Pi_k u\|_2\leq C_{k,A}\|u\|_2$.
\end{proof}
First, we show a useful result for the spectral decomposition.
\begin{lemma}\label{embedded}
The operator ${\bf H}$ does not have non-zero eigenvectors $u\in \mathcal{D}({\bf H})$. If $\gamma\in (0,1)$, the spectrum of $\mathbf{H}$ is given by $\sigma(\mathbf{H})=[\tfrac{Q^2}{2},\infty)$ and consists of essential spectrum.
\end{lemma}
\begin{proof} In the case $\gamma\in (0,1)$, the space $\mathcal{C}$ is included in $\mathcal{D}({\bf H})$.
It is then easy to check that $\sigma(\mathbf{H})=[\tfrac{Q^2}{2},\infty)$ consists only of essential spectrum by using the Weyl sequences $(e^{ipc}\chi(2^{-n}c)/\omega_n)_{n\in\mathbb{N}}$, where $\chi \in C_c^\infty(\mathbb{R})$ has support in $[-\frac{3}{2},-1]$ and is equal to $1$ on some interval, and $\omega_n=\|\chi( 2^{-n}\cdot)\|_{L^2(\mathbb{R})}$.
Let $u\in \mathcal{D}(\mathbf{H})$ such that $\mathbf{H}u=\lambda u$ with $\lambda \in [\tfrac{Q^2}{2},\infty)$. Then $ u\in \mathcal{D}(\mathcal{Q})$ (hence $\partial_cu\in L^2$), and it satisfies $\mathcal{Q}(u,v)=\langle \lambda u\, |\,v\rangle_2$ for all $v\in \mathcal{D}(\mathcal{Q})$.
Now we claim
\begin{lemma}\label{cderiv}
Assume we are given $f\in L^2$ such that $\partial_cf\in L^2$. Consider $u\in \mathcal{D}(\mathcal{Q})$ such that
\begin{equation}\label{dirF}
\mathcal{Q}(u,v)=\langle f\, |\, v\rangle_2,\quad \forall v\in \mathcal{D}(\mathcal{Q}).
\end{equation}
Then $\partial_cu\in \mathcal{D}(\mathcal{Q})$ and
\begin{equation}\label{dirFbis}
\mathcal{Q}(\partial_cu,v)=\langle \partial_c f\, |\, v\rangle_2- \gamma \int e^{\gamma c}\int_0^{2\pi}\mathds{E}[ u_{\rm shift}v_{\rm shift}]\dd \theta\dd c,\quad \forall v\in \mathcal{D}(\mathcal{Q}).
\end{equation}
\end{lemma}
We postpone the proof of this lemma and conclude first. Consider next \eqref{dirF} with $f=\lambda u$ and choose $v=\partial_c u\in \mathcal{D}(\mathcal{Q})$ to obtain $\mathcal{Q}(u,\partial_cu)=\langle \lambda u\,|\, \partial_cu\rangle_2=0$. Also, choosing $v=u$ in \eqref{dirFbis} we obtain $\mathcal{Q}(\partial_cu,u)=\langle \lambda \partial_c u\,|\, u\rangle_2-\gamma\int e^{\gamma c} \int_0^{2\pi}\mathds{E}|u_{\rm shift}|^2\dd\theta \dd c$. These relations imply $\gamma\int e^{\gamma c} \int_0^{2\pi}\mathds{E}|u_{\rm shift}|^2\dd\theta \dd c=0$. In the case when $\gamma\in (0,\sqrt{2})$, $V$ exists as a well-defined function and this relation translates into $\|e^{\gamma c/2}V^{1/2}u\|_2=0$. Hence $u=0$ since $V>0$ almost surely. In the general situation $\gamma\in (0,2)$, the argument is as follows: the relation $\gamma\int e^{\gamma c} \int_0^{2\pi}\mathds{E}|u_{\rm shift}|^2\dd\theta \dd c=0$ implies that $\mathcal{Q}(u,v)=\mathcal{Q}_0(u,v)$ for all $v\in \mathcal{C}$. Therefore, $u\in \mathcal{D}(\mathbf{H}_0)$ and $u$ is an eigenfunction of $\mathbf{H}_0$, hence $u=0$ (cf. Remark \ref{discreteH0}).
\end{proof}
{\it Proof of Lemma \ref{cderiv}.} For $h>0$, introduce the translation operator $T_h:L^2\to L^2$ by $T_hv:=v(c+h,\cdot)$ and the discrete derivative operator $D_h: L^2\to L^2$ by $D_hv:=(T_hv-v)/h$. Note that $T_h$ maps $ \mathcal{D}(\mathcal{Q})$ into itself, that $\|D_hv\|_2\leq \|\partial_cv\|_2 $, $D_hu_{\rm shift}=(D_hu)_{\rm shift}$ and we have the discrete integration by parts formula $\langle D_hu \,|\,v\rangle_2=-\langle u \,|\,D_{-h}v\rangle_2$ for all $u,v\in L^2$. Now we can replace $v$ by
$D_{-h}v$ in \eqref{dirF} and use this discrete integration by parts to obtain
\begin{align}\label{derivFD}
\mathcal{Q}(D_hu,v)=&\langle D_h f,v\rangle_2 -\int\int_0^{2\pi}\mathds{E}[(T_he^{\gamma c}-e^{\gamma c})D_hu_{\rm shift}v_{\rm shift}]\dd \theta\dd c \\
& -\int\int_0^{2\pi}\mathds{E}[D_h(e^{\gamma c} )u_{\rm shift}v_{\rm shift}]\dd \theta\dd c,\quad \forall v\in \mathcal{D}(\mathcal{Q}).
\end{align}
Next we choose $v=D_hu$ and, using the inequality $ab \leq \tfrac{\epsilon}{2}a^2+ \tfrac{1}{2\epsilon}b^2 $ for arbitrary $\epsilon>0$, we obtain the a priori estimate (for some constant $C>0$ depending only on $\gamma$)
\begin{equation}\label{aprioriestQDh}
\begin{split}
\mathcal{Q}(D_hu,D_hu) \leq & \frac{1}{2}(\|D_hf\|_2^2+\|D_hu\|_2^2)-(e^{\gamma h}-1)\int\int_0^{2\pi}\mathds{E}( e^{\gamma c}|D_hu_{\rm shift}|^2)\dd \theta\dd c
\\
&+\frac{1}{2}\Big|\frac{e^{\gamma h}-1}{h}\Big|\Big(\epsilon^{-1}\int\int_0^{2\pi}\mathds{E}( e^{\gamma c}|u_{\rm shift}|^2)\dd \theta\dd c +\epsilon\int\int_0^{2\pi}\mathds{E}( e^{\gamma c}|D_hu_{\rm shift}|^2)\dd \theta\dd c \Big)\\
\leq &( C+\epsilon^{-1}) \big(\|\partial_c f\|_2^2+ \mathcal{Q}(u,u)\big)+C(\epsilon+h)\mathcal{Q}(D_hu,D_hu)
\end{split}\end{equation}
for all $h>0$ small, and therefore the last term can be absorbed in the left-hand side if $\epsilon,h>0$ are small enough.
Then, writing \eqref{derivFD} for $h$ and $h'$, subtracting and then choosing $v=D_hu-D_{h'}u$, we find
\[ \mathcal{Q}(D_hu-D_{h'}u,D_hu-D_{h'}u) \leq C \big(\|D_hf-D_{-h'} f\|_2^2+ |h-h'|\mathcal{Q}(u,u)+ |h-h'|\mathcal{Q}(D_{(h-h')}u,D_{(h-h')}u)\big).\]
Using \eqref{aprioriestQDh} with $h$ replaced by $h-h'$ to bound the last term, we obtain that the sequence $(D_hu)_h$ is Cauchy for the $ \mathcal{Q}$-norm. Hence the limit $\partial_cu$ belongs to $\mathcal{D}( \mathcal{Q})$.
\qed
\begin{remark}When $\gamma\in [1,2)$ the spectrum is also $[Q^2/2,\infty)$ and is made of essential spectrum, and this will follow from our analysis of the resolvent: in fact we will show for $\gamma \in (0,2)$ that the spectrum of ${\bf H}$ is absolutely continuous.
\end{remark}
\subsection{Resolvent of $\mathbf{H}$}
To describe the spectral measure of ${\bf H}$ and construct its generalized eigenfunctions,
the main step is to understand the resolvent of $\mathbf{H}$ as a function of the spectral parameter, in particular when the spectral parameter approaches the spectrum.
Due to the fact that the spectrum of ${\bf H}$ starts at $Q^2/2$, it is convenient to use the spectral parameter $2\Delta_\alpha$ where $\Delta_{\alpha}= \frac{\alpha}{2}(Q-\frac{\alpha}{2})$ and $\alpha \in \mathbb{C}$. That way we have with $\alpha=Q+ip$
\[ {\bf H}-2\Delta_\alpha= -\tfrac{1}{2}\partial_c^2 + {\bf P}+e^{\gamma c}V-\tfrac{1}{2}p^2\]
where $p\in\mathbb{R}$ plays the role of a frequency: in particular $2\Delta_\alpha\in [Q^2/2,\infty)$ if and only if $p\in \mathbb{R}$. The half-plane $\{\alpha\in\mathbb{C} \,|\,{\rm Re}(\alpha)<Q\}$ is mapped by $\Delta_\alpha$ to the resolvent set $\mathbb{C}\setminus [Q^2/2,\infty)$ of ${\bf H}$ and will be called the \emph{physical sheet}. By the spectral theorem
\[ \mathbf{R}(\alpha)=(\mathbf{H}-2\Delta_{\alpha})^{-1}: L^2(\mathbb{R}\times \Omega_\mathbb{T})\to \mathcal{D}(\mathbf{H})\]
is bounded if ${\rm Re}(\alpha)<Q$. Our goal is to extend this resolvent up to the line ${\rm Re}(\alpha)=Q$ analytically, and we will actually do it in an even larger region. The price to pay
is that $\mathbf{R}(\alpha)$ will not be bounded on $L^2$ but rather on certain weighted $L^2$ spaces, where the weights are $e^{\beta c}$ in the region $c\leq -1$, with $\beta\in \mathbb{R}$ tuned with respect to $\alpha$.
\subsubsection{Resolvent and propagator on weighted spaces in the probabilistic region.}
Our first task is to understand the resolvent on weighted spaces in a subregion of ${\rm Re}(\alpha)<Q$, that we call the \emph{probabilistic region} due to the fact that the resolvent can be written in terms of the semigroup $e^{-t{\bf H}}$.
Let $\rho:\mathbb{R}\to \mathbb{R}$ be a smooth non-decreasing function satisfying
\[\rho(c)=c+a \textrm{ for }c\leq -1,\quad \rho(c)=0 \textrm{ for }c\geq 0, \quad 0\leq \rho'\leq 1\]
for some $a\in \mathbb{R}$. We have for $\beta\geq 0$ the inclusion of weighted spaces
\[e^{\beta \rho(c)}L^2(\mathbb{R}\times \Omega_\mathbb{T})\subset L^2(\mathbb{R}\times \Omega_\mathbb{T})\subset e^{-\beta \rho(c)}L^2(\mathbb{R}\times \Omega_\mathbb{T}).\]
The weighted spaces $e^{\beta \rho(c)}L^2(\mathbb{R}\times \Omega_\mathbb{T})$ are obviously Hilbert spaces with product $\langle u,v\rangle_{e^{\beta \rho}L^2}:=\langle e^{-\beta\rho}u,e^{-\beta \rho}v\rangle_2$.
\begin{lemma}\label{resolventweighted}
Let $\beta\in\mathbb{R}$. If ${\rm Re}((\alpha-Q)^2)>\beta^2$ and ${\rm Re}(\alpha)<Q$, the resolvent $\mathbf{R}(\alpha)=(\mathbf{H}-2\Delta_{\alpha})^{-1}$ extends to a bounded operator
\begin{align*}
&\mathbf{R}(\alpha): e^{-\beta \rho}L^2(\mathbb{R}\times \Omega_\mathbb{T})\to e^{-\beta \rho}\mathcal{D}(\mathbf{H}),\\
&\mathbf{R}(\alpha): e^{-\beta \rho}\mathcal{D}'(\mathcal{Q})\to e^{-\beta \rho}\mathcal{D}(\mathcal{Q})
\end{align*}
which is analytic in $\alpha$ in this region. The operator $\mathbf{H}: e^{-\beta \rho}L^2\to e^{-\beta \rho}L^2$ is closed with domain $e^{-\beta \rho}\mathcal{D}(\mathbf{H})$, it is a bijective mapping
$e^{-\beta \rho}\mathcal{D}(\mathbf{H})\to e^{-\beta \rho}L^2$ with inverse $\mathbf{R}(\alpha)$.
Moreover, for $\alpha\in (-\infty,Q)$ and $0\leq \beta<Q-\alpha$, the resolvent is bounded with norm $\|\mathbf{R}(\alpha)\|_{\mathcal{L}(e^{-\beta\rho}L^2)}\leq 2((\alpha-Q)^2-\beta^2)^{-1}$ and is equal to the integral
\begin{equation}\label{resolvvspropagator}
\mathbf{R}(\alpha)=\int_0^{\infty} e^{-t\mathbf{H}+t2 \Delta_{\alpha}}dt
\end{equation}
where $e^{-t\mathbf{H}}$ is the semigroup on $e^{-\beta\rho}L^2$ obtained by Hille-Yosida theorem with norm \begin{equation}\label{normetHweight}
\forall t\geq 0, \quad \|e^{-t\mathbf{H}}\|_{\mathcal{L}(e^{-\beta\rho}L^2)}\leq e^{-t\frac{Q^2-\beta^2}{2}}.
\end{equation}
The integral \eqref{resolvvspropagator} converges in $\mathcal{L}(e^{-\beta\rho}L^2)$ operator norm and $e^{-t\mathbf{H}}:e^{-\beta \rho}L^2\to e^{-\beta \rho}L^2$ extends the semigroup defined in \eqref{hstar}. Finally, $e^{-t{\bf H}}:e^{\beta \rho}\mathcal{D}(\mathcal{Q})\to e^{\beta \rho}\mathcal{D}(\mathcal{Q})$ and $e^{-t{\bf H}}:e^{\beta \rho}\mathcal{D}'(\mathcal{Q})\to e^{\beta \rho}\mathcal{D}'(\mathcal{Q})$ are bounded and for each $\varepsilon>0$ there is some constant $C_\varepsilon>0$ such that for all $t>0$
\begin{equation}\label{normetHweight2}
\|e^{-t{\bf H}}\|_{\mathcal{L}(e^{\beta \rho}\mathcal{D}(\mathcal{Q}))}+\|e^{-t{\bf H}}\|_{\mathcal{L}(e^{\beta \rho}\mathcal{D}'(\mathcal{Q}))}\leq C_\epsilon e^{-t(\frac{Q^2-\beta^2}{2}-\varepsilon)}.
\end{equation}
\end{lemma}
\begin{proof}
Consider the operator for $\beta\in \mathbb{R}$, acting on the space $\mathcal{C}$ (defined in \eqref{core}) and with value in $e^{\beta \rho}\mathcal{D}'(\mathcal{Q})$,
\[ \mathbf{H}_{\beta}:= e^{\beta \rho(c)}\mathbf{H}e^{-\beta \rho(c)}=\mathbf{H} -\tfrac{\beta^2}{2}(\rho'(c))^2+\tfrac{\beta}{2} \rho''(c)+\beta \rho'(c)\partial_{c}.\]
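For the reader's convenience, here is the computation behind this conjugation identity; only the $c$-Laplacian produces extra terms, since multiplication by $e^{-\beta\rho(c)}$ commutes with $\tfrac{Q^2}{2}$, ${\bf P}$ and $e^{\gamma c}V$:
\[
e^{\beta \rho}\Big(-\tfrac{1}{2}\partial_c^2\Big)\big(e^{-\beta \rho}u\big)=-\tfrac{1}{2}\partial_c^2u+\beta \rho'\partial_c u+\tfrac{\beta}{2}\rho'' u-\tfrac{\beta^2}{2}(\rho')^2u .
\]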
Let $u\in \mathcal{C}$, then we have (using integration by parts)
\begin{equation}\label{coercive_estimate}
\begin{split}
{\rm Re}\langle \mathbf{H}_\beta u\, |\, u\rangle_2 & =\mathcal{Q}(u)-\tfrac{\beta^2}{2}\|\rho'u\|_2^2+\tfrac{\beta}{2} \langle \rho''u\,|\, u\rangle_2
+\beta{\rm Re}(\langle \rho'\partial_cu\,|\, u\rangle_2)\\
&= \mathcal{Q}(u)-\tfrac{\beta^2}{2}\|\rho'u\|_2^2\\
&\geq \mathcal{Q}(u)-\tfrac{\beta^2}{2}\|u\|^2_2 \geq\tfrac{Q^2-\beta^2}{2}\|u\|^2_2+\tfrac{1}{2}\|\partial_cu\|^2_2+\|\mathbf{P}^{1/2}u\|^2_2+\mathcal{Q}_{e^{\gamma c}V}(u,u),
\end{split}
\end{equation}
where $\mathcal{Q}$ was defined in Section \ref{sub:bilinear}.
Consider the sesquilinear form $\mathcal{Q}_{\alpha,\beta}(u,v):=\langle (\mathbf{H}_{\beta}-2\Delta_\alpha)u\,|\,v\rangle_2$ defined on
$\mathcal{C}$. We easily see that if $-{\rm Re}(2\Delta_{\alpha})>\tfrac{\beta^2-Q^2}{2}$, then
\[\{u \in L^2(\mathbb{R}\times\Omega_\mathbb{T})\,|\, \mathcal{Q}_{\alpha,\beta}(u,u)<\infty\}=\mathcal{D}(\mathcal{Q}).\]
Let $\mathcal{D}'(\mathcal{Q})$ be the dual of $\mathcal{D}(\mathcal{Q})$ (note that $L^2\subset \mathcal{D}'(\mathcal{Q})$).
By Lax-Milgram, if $-{\rm Re}(2\Delta_{\alpha})>\tfrac{\beta^2-Q^2}{2}$, then for each $f'\in \mathcal{D}'(\mathcal{Q})$, there is a unique $u\in \mathcal{D}(\mathcal{Q})$ such that
\begin{equation}\label{LaxMilram}
\forall v\in \mathcal{D}(\mathcal{Q}), \quad \, \mathcal{Q}_{\alpha,\beta}(u,v) =f'(v), \quad \mathcal{Q}(u)^{1/2}\leq C'\|f'\|_{\mathcal{D}'(\mathcal{Q})}
\end{equation}
where $C'>0$ depends only on ${\rm Re}(2\Delta_{\alpha})$ and $\beta^2$. This holds in particular for the linear form $f':v\mapsto \langle f\,|\, v\rangle_{2}$
with norm $\|f'\|_{\mathcal{D}'(\mathcal{Q})}\leq C\|f\|_2$ for some $C>0$ depending on ${\rm Re}(2\Delta_{\alpha})$ and $\beta^2$.
We define $\widetilde{\mathbf{R}}(\alpha)(e^{-\beta\rho}f):=e^{-\beta \rho}u$,
this gives a bounded linear operator
\begin{equation}\label{boundontildeR}
\widetilde{\mathbf{R}}(\alpha): e^{-\beta \rho}\mathcal{D}'(\mathcal{Q})\to e^{-\beta \rho}\mathcal{D}(\mathcal{Q})\subset e^{-\beta \rho}L^2
\end{equation}
inverting the bounded operator $e^{\beta\rho}(\mathbf{H}-2\Delta_{\alpha})e^{-\beta\rho}:\mathcal{D}(\mathcal{Q})\to \mathcal{D}'(\mathcal{Q})$. Moreover, by \eqref{coercive_estimate}, its weighted $L^2$-norm is bounded by
\begin{equation}\label{boundtildeRalpha}
\|\widetilde{\mathbf{R}}(\alpha)\|_{\mathcal{L}(e^{-\beta\rho}L^2)}\leq 2({\rm Re}((\alpha-Q)^2-\beta^2))^{-1}=(\tfrac{Q^2-\beta^2}{2}-2{\rm Re}(\Delta_\alpha))^{-1}
\end{equation}
Using $\mathcal{D}(\mathcal{Q})\subset e^{-\beta \rho}\mathcal{D}(\mathcal{Q})$ and the uniqueness property above,
this means that for $f\in L^2$, we have $\mathbf{R}(\alpha)f=\widetilde{\mathbf{R}}(\alpha)f$ and thus $\widetilde{\mathbf{R}}(\alpha)$ is a continuous extension of $\mathbf{R}(\alpha)$ to the Hilbert space $e^{-\beta \rho}L^2$. The analyticity in $\alpha$ comes from Lax-Milgram, but can also alternatively be obtained
by Cauchy formula (for $\varepsilon>0$ small)
\[ \widetilde{\mathbf{R}}(\alpha)f= \frac{1}{2\pi i}\int_{|z-\alpha|=\varepsilon} \frac{\widetilde{\mathbf{R}}(z)f}{z-\alpha}dz\]
which holds for all $f\in \mathcal{C}$ (since $\widetilde{\mathbf{R}}(\alpha)f=\mathbf{R}(\alpha)f$ for such $f$), and can then be extended to $e^{-\beta \rho}L^2$ by density of $\mathcal{C}$ in $e^{-\beta \rho}L^2$. The domain
$\mathcal{D}(e^{\beta \rho}{\bf H}e^{-\beta \rho})=\{u\in \mathcal{D}(\mathcal{Q})\,|\,e^{\beta \rho}{\bf H}e^{-\beta \rho}u\in L^2\}$ of the operator $e^{\beta \rho}{\bf H}e^{-\beta \rho}$ is actually equal to
$\mathcal{D}({\bf H})=\{u\in \mathcal{D}(\mathcal{Q})\,|\, {\bf H}u\in L^2\}$ since
\begin{equation}\label{formuleHcommute}
e^{-\beta \rho}\mathbf{H}(e^{\beta \rho}u)=\mathbf{H}u -\tfrac{\beta^2}{2}(\rho'(c))^2u-\tfrac{\beta}{2} \rho''(c)u-\beta \rho'(c)\partial_{c}u
\end{equation}
with $\rho'\in C_c^\infty(\mathbb{R})$ (thus $\rho'(c)\partial_cu\in L^2$ for $u\in \mathcal{D}(\mathcal{Q})$). The operator ${\bf H}:e^{-\beta\rho}\mathcal{D}({\bf H})\to e^{-\beta\rho}L^2$ is thus closed.
By the Hille-Yosida theorem, there is an associated bounded semigroup $e^{-t\mathbf{H}}$ on $e^{-\beta \rho}L^2$, and by density of $L^2\subset e^{-\beta \rho}L^2$ when $\beta\geq 0$, it is an extension of the $e^{-t\mathbf{H}}$ semigroup on $L^2$.
Let us check that the resolvent can be written as an integral of the propagator.
For $f\in \mathcal{C}\subset L^2$, we have
\[
\widetilde{\mathbf{R}}(\alpha)f=\mathbf{R}(\alpha)f=\int_0^{\infty} e^{-t\mathbf{H}+t2\Delta_{\alpha}}f\, dt.
\]
By the Hille-Yosida theorem and the resolvent bound \eqref{boundtildeRalpha}, we have $\|e^{-t\mathbf{H}}\|_{\mathcal{L}(e^{-\beta \rho}L^2)}\leq e^{-t\tfrac{Q^2-\beta^2}{2}}$, so that the integral above converges if $Q-\alpha>\beta$ (for $\alpha\in(-\infty,Q)$) as a bounded operator on $e^{-\beta\rho}L^2$, showing the desired claim by density of $\mathcal{C}$ in $e^{-\beta \rho}L^2$.
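To make the convergence condition explicit, note (a one-line check, using the identity $\tfrac{Q^2}{2}-2\Delta_\alpha=\tfrac{(\alpha-Q)^2}{2}$ which is implicit in \eqref{boundtildeRalpha}) that for real $\alpha<Q$ and $\beta\geq 0$
\[\int_0^{\infty}\|e^{-t\mathbf{H}+t2\Delta_{\alpha}}\|_{\mathcal{L}(e^{-\beta\rho}L^2)}\, dt\leq \int_0^{\infty}e^{-t(\frac{Q^2-\beta^2}{2}-2\Delta_\alpha)}\, dt=\int_0^{\infty}e^{-\frac{t}{2}((Q-\alpha)^2-\beta^2)}\, dt<\infty\]
precisely when $(Q-\alpha)^2>\beta^2$, i.e. $Q-\alpha>\beta$.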
We conclude with a $e^{\beta \rho}\mathcal{D}'(\mathcal{Q})$ bound for $\tilde{{\bf R}}(\alpha)$ and
$e^{-t{\bf H}}$. First, we note using \eqref{formuleHcommute} that for $u\in \mathcal{D}(\mathcal{Q})$
\[|{\rm Im}(\mathcal{Q}_{\alpha,\beta}(u,u))|\geq (|{\rm Im}(2\Delta_\alpha)|-\beta^2)\,\|u\|^2_2
-\tfrac{1}{4}\|\partial_cu\|^2_2,\]
which implies
\[|\mathcal{Q}_{\alpha,\beta}(u,u)|\geq \frac{1}{\sqrt{2}}\Big(c_{\Delta_\alpha}\|u\|^2_2+\tfrac{1}{4}\|\partial_cu\|^2_2+\|\mathbf{P}^{1/2}u\|^2_2+\mathcal{Q}_{e^{\gamma c}V}(u,u)\Big),
\]
where we have set
\[ c_{z}:= \min\Big(-2{\rm Re}(z)+2|{\rm Im}(z)|+\tfrac{Q^2-3\beta^2}{2}, -2{\rm Re}(z)+\tfrac{Q^2-\beta^2}{2}\Big).\]
This in turn gives that, provided ${\rm Re}(-2\Delta_\alpha)+|{\rm Im}(2\Delta_\alpha)|+\tfrac{Q^2-3\beta^2}{2}>0$, $\tilde{{\bf R}}(\alpha):e^{-\beta \rho}L^2\to e^{-\beta \rho}L^2$ is also well-defined, analytic in $\alpha$ and bounded, and moreover satisfies for $|\Delta_\alpha|\gg 1$ and ${\rm Re}(\Delta_\alpha)\leq \frac{1}{2}|{\rm Im}(\Delta_\alpha)|$
\[ \|\tilde{{\bf R}}(\alpha)\|_{\mathcal{L}(e^{-\beta \rho}L^2)}\leq C|\Delta_\alpha|^{-1} \]
where $C>0$ is a uniform constant.
Next, for each $\varepsilon>0$, there is $C_\varepsilon>0$ such that for all $f\in e^{-\beta \rho}\mathcal{D}(\mathcal{Q})$ and $u:={\bf R}(\alpha)f$, we have
\[\mathcal{Q}(u,u)\leq C_{\varepsilon}{\rm Re}(\mathcal{Q}_{\alpha,\beta}(u,u))\leq C_{\varepsilon}|\langle u\,|\,f\rangle_2|\leq \frac{C_\varepsilon}{c_{\Delta_\alpha}} \|f\|_2^2 \]
provided $c_{\Delta_\alpha}>0$. Thus, for $c_z>0$,
\begin{equation}\label{boundH-zinverse}
\begin{split}
\|({\bf H}-z)^{-1}\|_{\mathcal{L}(e^{-\beta\rho}\mathcal{D}(\mathcal{Q}))}\leq & C_\varepsilon^{1/2} c_z^{-1/2}\\
\|({\bf H}-z)^{-1}\|_{\mathcal{L}(e^{-\beta\rho}L^2)}\leq & C_\varepsilon c_z^{-1}.
\end{split}
\end{equation}
Let us consider a contour $\Gamma=\Gamma_0\cup \Gamma_+\cup \Gamma_-\subset \mathbb{C}$ with $a:=\tfrac{Q^2-\beta^2}{2}-\varepsilon$, $\Gamma_{\pm}:=
a\pm iN+e^{\pm 3i\pi/8}\mathbb{R}^+\subset \mathbb{C}$ and $\Gamma_0=a+i[-N,N]$ for some $N>0$ large enough so that $c_z>0$ on $\Gamma$, and where $\Gamma$ is oriented clockwise around $[a,+\infty)$. Using the holomorphic functional calculus, we have
\[ e^{-t{\bf H}}=\frac{1}{2\pi i}\int_{\Gamma_+\cup \Gamma_-}e^{-tz}({\bf H}-z)^{-1}dz\]
and the integral converges both in $\mathcal{L}(e^{-\beta\rho}L^2)$ and $\mathcal{L}(e^{-\beta\rho}\mathcal{D}(\mathcal{Q}))$ using \eqref{boundH-zinverse}, with bound
\[ \|e^{-t{\bf H}}\|_{\mathcal{L}(e^{-\beta \rho}\mathcal{D}(\mathcal{Q}))}\leq Ce^{-ta}\]
for some $C$ depending only on $\varepsilon>0$. Using duality, this gives \eqref{normetHweight2}.
\end{proof}
In what follows, we will always write $\mathbf{R}(\alpha)$ for the resolvent, both for the operator acting on $L^2$ and for the one acting on $e^{-\beta \rho}L^2$.
\begin{figure}
\centering
\begin{tikzpicture}
\node[inner sep=10pt] (F1) at (0,0)
{\includegraphics[width=.5\textwidth]{figure1.png}};
\node (F) at (1.7,2.3){ ${\rm Re}(\alpha)=Q$};
\node (F) at (-2.5,0.8){ ${\rm Re}((\alpha-Q)^2)>\beta^2$};
\end{tikzpicture}
\caption{The blue region corresponds to the set of parameters $\alpha\in\mathbb{C}$ such that ${\rm Re}((\alpha-Q)^2)>\beta^2$, i.e. region of validity of Lemma \ref{resolventweighted} (here $\beta=1$ on the plot).}
\end{figure}
\subsubsection{Poisson operator in the probabilistic region.}
For $\ell\in\mathbb{N}$, we shall define the Poisson operator $\mathcal{P}_\ell(\alpha)$ in the resolvent set.
This operator is a way to construct the generalized eigenfunctions of ${\bf H}$: it takes an element $F\in E_\ell\subset L^2(\Omega_\mathbb{T})$ and produces a function $u=\mathcal{P}_\ell(\alpha)F$ solving
$({\bf H}-2\Delta_\alpha)u=0$ with a prescribed leading asymptotic in terms of $F$ as $c\to -\infty$.
First, we explain in detail our convention on square roots in $\mathbb{C}$, since it will be important in the proofs and we wish to avoid any confusion for the reader.
We will denote by $\sqrt{\cdot}$ the square root defined so that ${\rm Im}(\sqrt{z})>0$ if $z\in \mathbb{C}\setminus \mathbb{R}^+$, i.e. $\sqrt{re^{i\theta}}=\sqrt{r}e^{i\theta/2}$ for $\theta\in (0,2\pi)$ and $r>0$. In particular, one has $\sqrt{z^2}=z$ for ${\rm Im}(z)>0$; in other words, the map $z\mapsto \sqrt{z^2}$, which is holomorphic on the upper half-plane, extends holomorphically to $z\in \mathbb{C}$ as the identity map. For $\lambda\in \mathbb{R}^+$,
the function $\sqrt{z^2-\lambda}$ will be of special interest to us: it is well-defined and analytic
in $\{{\rm Im}(z)>0\}\cup (-\sqrt{\lambda},\sqrt{\lambda})$
(since $z^2\notin [\lambda,\infty)$ in this region) and it extends analytically in a
neighborhood of the half lines $(-\infty,-\sqrt{\lambda})$ and $(\sqrt{\lambda},+\infty)$, for example in $z\in \sqrt{\lambda}+\mathbb{R}^+ e^{-i\theta}$ and $z\in -\sqrt{\lambda}+\mathbb{R}^+ e^{i(\pi+\theta)}$ for $\theta\in[0,\varepsilon)$. With that convention, which will be used all along this Section on scattering for ${\bf H}$, we have $\sqrt{z^2-\lambda}>0$ if $z \in(\sqrt{\lambda},\infty)$ while $\sqrt{z^2-\lambda}<0$ if $z \in(-\infty,-\sqrt{\lambda})$. Later we will view these functions as holomorphic functions on a Riemann surface.
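As a simple illustration of this convention, which may help the reader fix signs, take $\alpha\in (-\infty,Q)$ real, so that $p=i(Q-\alpha)$ has ${\rm Im}(p)>0$: then $p^2=-(Q-\alpha)^2$ lies on the negative real axis and
\[ \sqrt{p^2-2\lambda_j}=i\sqrt{(Q-\alpha)^2+2\lambda_j},\qquad e^{ic\sqrt{p^2-2\lambda_j}}=e^{-c\sqrt{(Q-\alpha)^2+2\lambda_j}},\]
which for $j=0$ (recall $\lambda_0=0$, so that $\sqrt{p^2-2\lambda_0}=p$) reduces to the plane wave $e^{(\alpha-Q)c}$ appearing in Proposition \ref{Pellproba} below.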
We note the following elementary property, which will be useful later on.
\begin{lemma}\label{racine}
For $z\in \mathbb{C}\setminus \mathbb{R}^+$, the following map is non-decreasing
\[ x \in\mathbb{R}^+ \mapsto {\rm Im}\sqrt{z-x}.\]
\end{lemma}
\begin{proof}
It suffices to differentiate in $x$: for $x\in \mathbb{R}^+$ one has $z-x\in \mathbb{C}\setminus \mathbb{R}^+$, so that
\[ \partial_x\,{\rm Im}\sqrt{z-x}={\rm Im}\Big(-\frac{1}{2\sqrt{z-x}}\Big)=\frac{{\rm Im}\sqrt{z-x}}{2|z-x|}\geq 0,\]
since ${\rm Im}\sqrt{\cdot}\geq 0$ with our convention.
\end{proof}
Let $\chi\in C^\infty(\mathbb{R})$ be equal to $1$ in $(-\infty,a-1)$ and equal to $0$ in $(a,+\infty)$ for some $a\in (0,1/2)$; then, for $\alpha=Q+ip$ with ${\rm Im}(p)>0$, we choose
\begin{equation}\label{betaell}
\beta_\ell>\max_{j=0,\dots,\ell}{\rm Im}\sqrt{p^2-2\lambda_j}-\gamma/2={\rm Im}\sqrt{p^2-2\lambda_\ell}-\gamma/2, \,\quad \textrm{ and }\beta_\ell\geq 0.
\end{equation}
Then for ${\rm Re}((\alpha-Q)^2)>\beta_\ell^2$ we define
\begin{equation}\label{definPellproba}\begin{gathered}
\mathcal{P}_\ell(\alpha):\left\{ \begin{array}{lll}
E_\ell=\oplus_{j=0}^\ell \ker (\mathbf{P}-\lambda_j)& \to & e^{-(\beta_\ell+\gamma/2) \rho}\mathcal{D}(\mathcal{Q})\\
F= \sum_{0\leq j\leq \ell} F_j &\mapsto & \chi F_-(\alpha) -\mathbf{R}(\alpha)(\mathbf{H}-2\Delta_{\alpha})(\chi F_-(\alpha)),
\end{array}\right.\\
\textrm{ with }F_-(\alpha):=\sum_{j=0}^{\ell}F_je^{ic\sqrt{p^2-2\lambda_j}}.
\end{gathered}\end{equation}
We will show in the following Lemma that this definition makes sense by using Lemma \ref{resolventweighted}. Before going to the proof of it, recall that ${\rm Im}\sqrt{p^2-2\lambda_j}>0$ for ${\rm Re}(\alpha)<Q$ by Lemma \ref{racine}, and note that the condition ${\rm Re}((\alpha-Q)^2)>\beta_\ell^2$ implies that
${\rm Im} \sqrt{p^2-2\lambda_j}\geq {\rm Im}(p)> \beta_\ell$ for all $j=0,\dots,\ell$. We then emphasize that the main
reason for $\mathcal{P}_\ell(\alpha)F$ to be defined and non-trivial is that $\chi F_-(\alpha)\in
e^{-(\beta_\ell+\gamma/2) \rho}\mathcal{D}(\mathcal{Q})\setminus e^{-\beta_\ell \rho}\mathcal{D}(\mathcal{Q})$ and $(\mathbf{H}-2\Delta_{\alpha})(\chi F_-(\alpha))\in e^{-\beta_\ell \rho}\mathcal{D}'(\mathcal{Q})$ so that $\mathbf{R}(\alpha)(\mathbf{H}-2\Delta_{\alpha})(\chi F_-(\alpha))$ is well-defined but not equal to $\chi F_-(\alpha)$.
\begin{lemma}\label{firstPoisson}
For each $\ell\in\mathbb{N}$ and each $\beta_\ell\geq 0$, the operator $\mathcal{P}_\ell(\alpha)$ is well-defined, bounded and holomorphic in the region
\begin{equation}\label{regionLemma4.4}
\Big\{\alpha=Q+ip\in\mathbb{C} \, |\, {\rm Re}(\alpha-Q)<0,\,{\rm Re}((\alpha-Q)^2)>\beta_\ell^2\, ,\,\, \beta_\ell>{\rm Im}\sqrt{p^2-2\lambda_\ell}-\gamma/2\Big\}
\end{equation}
and it satisfies in $e^{-(\beta_\ell+\gamma/2) \rho}\mathcal{D}'(\mathcal{Q})$
\begin{equation}\label{solutionHu=0}
(\mathbf{H}-2\Delta_{\alpha})\mathcal{P}_\ell(\alpha)=0,
\end{equation}
and in the region $c\leq -1$, one has the asymptotic behaviour, with $F_j:=(\Pi_j-\Pi_{j-1})F$,
\begin{equation}\label{expansionofPell}
\mathcal{P}_\ell(\alpha)F=\sum_{j=0}^{\ell}F_je^{ic\sqrt{p^2-2\lambda_j}}+
F_+(\alpha), \quad F_+(\alpha)\in e^{-\beta_\ell \rho}\mathcal{D}(\mathcal{Q}).
\end{equation}
\end{lemma}
\begin{proof}
First we observe that $(\mathbf{H}_0-2\Delta_{\alpha})F_-(\alpha)=0$ thus
\[(\mathbf{H}-2\Delta_{\alpha})\chi F_-(\alpha)=
-\tfrac{1}{2}\chi''(c) F_-(\alpha)+e^{\gamma c}V\chi F_-(\alpha)-\chi'(c)\partial_c F_-(\alpha).\]
We note that $\chi',\chi''$ have compact support in $\mathbb{R}$, and also for each $u\in \mathcal{D}(\mathcal{Q})$
\[ \begin{split}
|\langle \chi e^{\gamma c+\beta_\ell \rho}V F_-(\alpha),u\rangle|\leq
\int_{\mathbb{R}^-} e^{\gamma c+\beta_\ell \rho} |\mathcal{Q}_V(F_-(\alpha),u)| dc & \leq
\Big(\int_{\mathbb{R}^-} e^{\gamma c}\mathcal{Q}_V(u)dc\Big)^{\frac{1}{2}}\Big( \int_{\mathbb{R}^-}e^{\gamma c}\mathcal{Q}_V(e^{\beta _\ell \rho}F_-)dc\Big)^\frac{1}{2}\\
\leq \mathcal{Q}(u)^{1/2}\mathcal{Q}_{e^{\gamma c}V}(e^{\beta _\ell \rho}F_-)^{\frac{1}{2}}
\end{split}\]
and $\mathcal{Q}_{e^{\gamma c}V}(e^{\beta _\ell \rho}F_-)<\infty$ by using
\eqref{regionLemma4.4} and the fact that $\mathcal{Q}_V(\psi)<\infty$ for all $\psi\in E_\ell$.
We obtain that
\[\chi F_-(\alpha)\in e^{-(\beta_\ell+\gamma/2)\rho}\mathcal{D}(\mathcal{Q}),\quad (\mathbf{H}-2\Delta_{\alpha})\chi F_-(\alpha)\in e^{-\beta_\ell \rho}\mathcal{D}'(\mathcal{Q}).\]
This shows, using Lemma \ref{resolventweighted}, that $\mathbf{R}(\alpha)(\mathbf{H}-2\Delta_{\alpha})(\chi F_-(\alpha))$ is well-defined as an element of $e^{-\beta_\ell \rho}\mathcal{D}(\mathcal{Q})$, with holomorphic dependence on $\alpha$, provided ${\rm Re}((\alpha-Q)^2)>\beta_\ell^2$. By construction, $\mathcal{P}_\ell(\alpha)F$ also solves \eqref{solutionHu=0} in $e^{-(\beta_\ell+\gamma/2) \rho}\mathcal{D}'(\mathcal{Q})$, and the expansion \eqref{expansionofPell} holds in $c\leq -1$ (where $\chi=1$) with $F_+(\alpha):=-\mathbf{R}(\alpha)(\mathbf{H}-2\Delta_{\alpha})(\chi F_-(\alpha))\in e^{-\beta_\ell \rho}\mathcal{D}(\mathcal{Q})$.
\end{proof}
Note that the error term $F_+(\alpha)$ in \eqref{expansionofPell} is smaller than the largest term in $F_-(\alpha)$ but is not necessarily negligible with respect to all the terms of $F_-(\alpha)$.
\begin{figure}
\centering
\begin{tikzpicture}
\tikzstyle{PR}=[minimum width=2cm,text width=3cm,minimum height=0.8cm,rectangle,rounded corners=5pt,draw,fill=red!30,text=black,font=\bfseries,text centered,text badly centered]
\tikzstyle{NS}=[minimum width=2cm,text width=3cm,minimum height=0.8cm,rectangle,rounded corners=5pt,draw,fill=blue!20,text=black,font=\bfseries,text centered,text badly centered]
\tikzstyle{PRfleche}=[->,>= stealth,thick,red!60]
\tikzstyle{NSfleche}=[->,>= stealth,thick,blue!60]
\node[inner sep=0pt] (F1) at (0,0)
{\includegraphics[width=.4\textwidth]{figure2.png}};
\node[inner sep=0pt] (F2) at (8,0)
{\includegraphics[width=.4\textwidth]{figure2b.png}};
\node (F) at (1.7,2.3){ ${\rm Re}(\alpha)=Q$};
\node (G) at (9.7,2.3){ ${\rm Re}(\alpha)=Q$};
\node[PR] (P) at (3,-2) {Probabilistic regions \eqref{regionLemma4.4}};
\node[NS] (NSR) at (5,2) {Near spectrum regions \eqref{regionvalide}};
\node (P1) at (-2,-0.6) {};
\node (P2) at (6,-0.5) {};
\node (N1) at (0.7,3) {};
\node (N2) at (8.6,2) {};
\draw[PRfleche] (P) to [bend left=25] (P1);
\draw[PRfleche] (P) to [bend right=25] (P2);
\draw[NSfleche] (NSR) to [bend right=25] (N1);
\draw[NSfleche] (NSR) to [bend right=25] (N2);
\end{tikzpicture}
\caption{Left picture: regions of definition of the Poisson operator $\mathcal{P}_0(\alpha)$
(for the plot, we take $\gamma=1/2$ and $\beta_0$ is optimized to obtain the largest possible region). Right picture: regions of definition of the Poisson operator $\mathcal{P}_{\ell}(\alpha)$ with $\ell>0$ (for the plot we take $\lambda_\ell=4$,$\gamma=1/2$ and $\beta_\ell$ optimized).
}
\label{figure2}
\end{figure}
We also notice that where they are defined, we have for $j\geq 0, \ell\geq 0$
\begin{equation}\label{egalitePell}
\mathcal{P}_{\ell+j}(\alpha)|_{E_\ell}=\mathcal{P}_{\ell}(\alpha)|_{E_\ell}.
\end{equation}
In \eqref{definPellproba}, the definition of the operator $\mathcal{P}_\ell(\alpha)$ seemingly depends on the cutoff function $\chi$. In fact, we can show that this is not the case; this is the content of the following lemma.
\begin{lemma}\label{uniqPell}
For $\ell\in\mathbb{N}$, $\beta_\ell$ satisfying \eqref{betaell} and for ${\rm Re}((\alpha-Q)^2)>\beta_\ell^2$ the definition of the operator $\mathcal{P}_\ell(\alpha)_{|E_\ell} $ does not depend on $\chi$.
\end{lemma}
\begin{proof} Pick two functions $\chi,\hat{\chi}$ satisfying our assumptions and denote by $\mathcal{P}^\chi_\ell(\alpha),\mathcal{P}^{\hat{\chi}}_\ell(\alpha) $ the corresponding Poisson operators. Set $d(\chi):=\chi-\hat{\chi}$. Then observe that for $F\in E_\ell$
$$\mathcal{P}^\chi_\ell(\alpha)F-\mathcal{P}^{\hat{\chi}}_\ell(\alpha) F=d(\chi)F_-(\alpha)-\mathbf{R}(\alpha)(\mathbf{H}-2\Delta_{\alpha})( d(\chi)F_-(\alpha)).$$
Now we note that $d(\chi)F_-(\alpha)\in \mathcal{D}(\mathcal{Q})$ since $d(\chi)(c)=0$ for $c\notin (-1,a)$ for some $a>0$ and $\mathcal{Q}_{V}$ is bounded on $E_\ell$.
Then $\mathbf{R}(\alpha)(\mathbf{H}-2\Delta_{\alpha})( d(\chi)F_-(\alpha))=d(\chi)F_-(\alpha)$ since $(\mathbf{H}-2\Delta_{\alpha}):\mathcal{D}(\mathcal{Q})\to \mathcal{D}'(\mathcal{Q})$ is an isomorphism, hence $\mathcal{P}^\chi_\ell(\alpha)F-\mathcal{P}^{\hat{\chi}}_\ell(\alpha) F=0$.
\end{proof}
We can also rewrite the construction of the Poisson operator using the propagator.
\begin{proposition}\label{Pellproba}
The following properties hold true:\\
1) Let $\ell\in\mathbb{N}$ and let $F\in L^2(\Omega_\mathbb{T})\cap \ker(\mathbf{P}-\lambda_j)$ for $j\leq \ell$.
Then we have the identity
\[\mathcal{P}_\ell(\alpha)F= \lim_{t\to +\infty}e^{t 2\Delta_\alpha}e^{-t\mathbf{H}}\Big(e^{ic\sqrt{p^2-2\lambda_j}}\chi(c)F\Big)\]
in $e^{-(\beta+\gamma/2) \rho}\mathcal{D}'(\mathcal{Q})$ provided
${\rm Re}((\alpha-Q)^2)>\beta^2$ with $\beta>{\rm Im}(\sqrt{p^2-2\lambda_j})-\gamma/2$ and $\alpha=Q+ip$. Furthermore if ${\rm Im}(\sqrt{p^2-2\lambda_j})>\gamma$ then we can take $\chi=1$ in the above statement.\\
2) Let $F\in L^2(\Omega_\mathbb{T})\cap \ker(\mathbf{P})$.
If $\alpha\in\mathbb{R}$ with $\alpha <Q$, then $dc\otimes \P$ almost everywhere
\[\mathcal{P}_0(\alpha)F= \lim_{t\to +\infty}e^{t 2\Delta_\alpha}e^{-t\mathbf{H}}\Big(e^{(\alpha-Q)c} F\Big).\]
\end{proposition}
\begin{proof} We first prove 1). Recall that $\mathbf{H}=\mathbf{H}_0+e^{\gamma c}V$.
We have
\[ (\mathbf{H}-2\Delta_\alpha)(\chi(c)e^{i\nu_j c}F)= \chi(c)e^{(i\nu_j +\gamma)c}V F
-\tilde{\chi}(c)Fe^{i\nu_jc}\]
where $\nu_j:=\sqrt{p^2-2\lambda_j}$ and $\tilde{\chi}(c):=\tfrac{1}{2}\chi''(c)+i\nu_j\chi'(c)\in C_c^\infty(\mathbb{R})$, and this belongs to $e^{-\beta \rho}\mathcal{D}'(\mathcal{Q})$.
Using Lemma \ref{resolventweighted},
\[ \mathcal{P}_\ell(\alpha)F=\chi(c)e^{i\nu_j c}F-\mathbf{R}(\alpha)(e^{(i\nu_j+\gamma) c}V\chi F -\tilde{\chi} e^{i\nu_jc}F)
\]
provided ${\rm Re}((\alpha-Q)^2)>\beta^2$ for any $\beta>{\rm Im}(\nu_j)-\gamma/2$. Noticing that the bound \eqref{normetHweight} remains valid with $V=0$, we can make sense of
$u(t):=e^{-t\mathbf{H}_0}(\chi(c)e^{i\nu_j c}F)$ as an element in $e^{-\beta_0 \rho}L^2(\mathbb{R}; E_j)$ for any $\beta_0>{\rm Im}(\nu_j)$. Then
\[ (\partial_t+2\Delta_\alpha)u(t)=e^{-t\mathbf{H}_0}(-\mathbf{H}_0+2\Delta_\alpha)(\chi(c)e^{i\nu_j c}F)=e^{-t\mathbf{H}_0}(e^{i\nu_j c}F \tilde{\chi}(c))
\]
thus we get by integrating in $t$
\begin{equation}\label{actionexp-tH_0}
\begin{split}
e^{-t(\mathbf{H}_0-2\Delta_\alpha)}(\chi(c)e^{i\nu_j c}F)= &
\chi(c)e^{i\nu_j c}F+\int_0^t e^{-s(\mathbf{H}_0-2\Delta_\alpha)}(e^{i\nu_jc}F\tilde{\chi}(c))ds\\
= & \chi(c)e^{i\nu_j c}F+(1- e^{-t(\mathbf{H}_0-2\Delta_\alpha)})\mathbf{R}_0(\alpha)(e^{i\nu_jc}F\tilde{\chi}(c))
\end{split}
\end{equation}
where $\mathbf{R}_0(\alpha):=(\mathbf{H}_0-2\Delta_\alpha)^{-1}$ is also defined on $e^{-\beta \rho}L^2$ by repeating the proof of Lemma \ref{resolventweighted} with the trivial potential $V=0$.
We also note that $e^{-t\mathbf{H}_0}(e^{i\nu_jc}F\tilde{\chi}(c))$ and $e^{-t\mathbf{H}_0}\mathbf{R}_0(\alpha)(e^{i\nu_jc}F\tilde{\chi}(c))$ are in $L^2(\mathbb{R}; E_j)$ since $\mathbf{H}_0F=(Q^2/2+\lambda_j)F$ (and $\tilde{\chi}\in C_c^\infty(\mathbb{R})$), i.e. all terms above are functions of $c$ with values in $E_j$.
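For completeness, the second equality in \eqref{actionexp-tH_0} is the elementary identity
\[ \int_0^t e^{-s(\mathbf{H}_0-2\Delta_\alpha)}g\, ds=(1- e^{-t(\mathbf{H}_0-2\Delta_\alpha)})\mathbf{R}_0(\alpha)g\]
applied to $g=e^{i\nu_jc}F\tilde{\chi}(c)$: both sides vanish at $t=0$ and have the same derivative $e^{-t(\mathbf{H}_0-2\Delta_\alpha)}g$ in $t$. This is only a sketch, but the computation is licit here since $g\in L^2(\mathbb{R};E_j)$ and $\mathbf{R}_0(\alpha)$ is bounded there for the range of $\alpha$ under consideration.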
We next claim that
\[ e^{-t\mathbf{H}}(\chi(c)e^{i\nu_j c}F)=e^{-t\mathbf{H}_0}(\chi(c)e^{i\nu_j c}F)-\int_0^t
e^{-(t-s)\mathbf{H}}e^{\gamma c}Ve^{-s\mathbf{H}_0}(\chi(c)e^{i\nu_j c}F)ds.\]
Indeed, first, all terms are well-defined: since $e^{-s\mathbf{H}_0}(\chi(c)e^{i\nu_j c}F)\in e^{-\beta_0 \rho} L^2(\mathbb{R}; E_j)\cap e^{-\beta_0 \rho} \mathcal{D}(\mathcal{Q})$ one has $e^{\gamma c}Ve^{-s\mathbf{H}_0}(\chi(c)e^{i\nu_j c}F)\in e^{-\beta_0 \rho}\mathcal{D}'(\mathcal{Q})$, and we can then use \eqref{normetHweight2}.
Then the identity above is obtained since both sides solve
$(\partial_t+\mathbf{H})u(t)=0$ in $e^{-\beta_0 \rho}\mathcal{D}'(\mathcal{Q})$ with $u(0)=\chi(c)e^{i\nu_j c}F$ in $e^{-\beta_0 \rho}\mathcal{D}(\mathcal{Q})$.
By applying \eqref{actionexp-tH_0} twice, we thus obtain
\[\begin{split}
e^{-t(\mathbf{H}-2\Delta_\alpha)}(\chi(c)e^{i\nu_j c}F)= & \chi(c)e^{i\nu_j c}F-e^{t2\Delta_\alpha}\int_0^t
e^{-(t-s)\mathbf{H}}e^{\gamma c}Ve^{-s\mathbf{H}_0}(\chi(c)e^{i\nu_j c}F)ds\\
& + (1- e^{-t(\mathbf{H}_0-2\Delta_\alpha)})\mathbf{R}_0(\alpha)(e^{i\nu_jc}F\tilde{\chi}(c))\\
= & \chi(c)e^{i\nu_j c}F-\int_0^t
e^{-(t-s)(\mathbf{H}-2\Delta_\alpha)}(e^{\gamma c}V\chi(c)e^{i\nu_j c}F)ds\\
&+ (1- e^{-t(\mathbf{H}_0-2\Delta_\alpha)})\mathbf{R}_0(\alpha)(e^{i\nu_jc}F\tilde{\chi}(c)) \\
& - \int_0^t
e^{-(t-s)(\mathbf{H}-2\Delta_\alpha)}e^{\gamma c}V(1-e^{-s(\mathbf{H}_0-2\Delta_\alpha)})\mathbf{R}_0(\alpha)(e^{i\nu_jc}F\tilde{\chi}(c))ds.
\end{split}\]
Using \eqref{normetHweight} and \eqref{normetHweight2} (applied with both $V\not=0$ and $V=0$) and ${\rm Re}((\alpha-Q)^2)>\beta^2$, we have as bounded operators on respectively $e^{-\beta \rho}\mathcal{D}'(\mathcal{Q})$ and $e^{-\beta \rho}L^2$
\[ \begin{gathered}
\lim_{t\to +\infty}\int_0^t
e^{-(t-s)(\mathbf{H}-2\Delta_\alpha)}ds=\lim_{t\to +\infty}(1-e^{-t(\mathbf{H}-2\Delta_\alpha)})\mathbf{R}(\alpha)=\mathbf{R}(\alpha)\\
\lim_{t\to +\infty} (1- e^{-t(\mathbf{H}_0-2\Delta_\alpha)})\mathbf{R}_0(\alpha)= \mathbf{R}_0(\alpha).
\end{gathered}\]
This gives in particular in $e^{-\beta \rho}\mathcal{D}'(\mathcal{Q})$
\[ \lim_{t\to +\infty}\int_0^t
e^{-(t-s)(\mathbf{H}-2\Delta_\alpha)}(e^{\gamma c}V\chi(c)e^{i\nu_j c}F)ds=\mathbf{R}(\alpha)(e^{\gamma c}V\chi(c)e^{i\nu_j c}F).\]
Similarly one has in $e^{-\beta \rho}\mathcal{D}'(\mathcal{Q})$
\[ \lim_{t\to +\infty}\int_0^t
e^{-(t-s)(\mathbf{H}-2\Delta_\alpha)}e^{\gamma c}V\mathbf{R}_0(\alpha)(e^{i\nu_jc}F\tilde{\chi}(c))ds=
\mathbf{R}(\alpha)e^{\gamma c}V\mathbf{R}_0(\alpha)(e^{i\nu_jc}F\tilde{\chi}(c)).\]
Finally we claim that
\[\lim_{t\to +\infty}\int_0^t
e^{-(t-s)(\mathbf{H}-2\Delta_\alpha)}e^{\gamma c}Ve^{-s(\mathbf{H}_0-2\Delta_\alpha)}\mathbf{R}_0(\alpha)(e^{i\nu_jc}F\tilde{\chi}(c))ds=0.\]
Indeed, we can apply the dominated convergence theorem if one can show
\[ \|e^{\gamma c}Ve^{-s(\mathbf{H}_0-2\Delta_\alpha)}\mathbf{R}_0(\alpha)(e^{i\nu_jc}F\tilde{\chi}(c))\|_{e^{\beta \rho}\mathcal{D}'(\mathcal{Q})}\leq e^{-\delta s}\]
for some $\delta>0$, since $\|e^{-(t-s)(\mathbf{H}-2\Delta_\alpha)}\|_{\mathcal{L}(e^{-\beta\rho}\mathcal{D}'(\mathcal{Q}))}\to 0$ by \eqref{normetHweight2}. But this estimate follows again from \eqref{normetHweight} with $V=0$ and the fact that $\mathbf{R}_0(\alpha)(e^{i\nu_jc}F\tilde{\chi}(c))\in e^{-\beta \rho}L^2(\mathbb{R};E_j)$, which in turn implies that
\[e^{\gamma c}Ve^{-s(\mathbf{H}_0-2\Delta_\alpha)}\mathbf{R}_0(\alpha)(e^{i\nu_jc}F\tilde{\chi}(c))\in e^{\beta\rho}\mathcal{D}'(\mathcal{Q}).\]
We have thus proved that
\[\begin{split}
\lim_{t\to +\infty}e^{-t(\mathbf{H}-2\Delta_\alpha)}(\chi(c)e^{i\nu_j c}F)= &\chi(c)e^{i\nu_j c}F-\mathbf{R}(\alpha)e^{\gamma c}V\big(e^{i\nu_jc}F\chi(c)+\mathbf{R}_0(\alpha)(e^{i\nu_jc}F\tilde{\chi}(c))\big)\\
& +\mathbf{R}_0(\alpha)(e^{i\nu_jc}F\tilde{\chi}(c)) .
\end{split}\]
We conclude by observing that
\[\mathbf{R}(\alpha)(e^{\gamma c}V\chi e^{i\nu_jc}F -\tilde{\chi} e^{i\nu_jc}F)=
-\mathbf{R}_0(\alpha)(e^{i\nu_jc}F\tilde{\chi})+\mathbf{R}(\alpha)e^{\gamma c}V\big(\chi e^{i\nu_jc}F+\mathbf{R}_0(\alpha)(e^{i\nu_jc}F\tilde{\chi}(c))\big),
\]
which can be established by applying $(\mathbf{H}-2\Delta_\alpha)$ to this equation and using the injectivity of
$(\mathbf{H}-2\Delta_\alpha)$ on $e^{-\beta \rho} \mathcal{D}(\mathcal{Q})$ under our condition on $\alpha,\beta$.
Notice that for ${\rm Im}(\sqrt{p^2-2\lambda_j})>\gamma$, we have $e^{(i\nu_j+\gamma)c}\mathbf{1}_{(0,+\infty)}(c) \in L^2(\mathbb{R})$ so that the above argument applies with $\chi=1$.
Now we prove 2). As $F\in L^2(\Omega_\mathbb{T})\cap \ker(\mathbf{P}-\lambda_0)$, we may assume $F=1$ without loss of generality. With our assumptions, we can write $\alpha=Q+ip$ with $p=i(Q-\alpha)$ and choose $(Q-\alpha)>\beta>(Q-\alpha)-\gamma$. By applying 1), we get that
\[\mathcal{P}_\ell(\alpha)1= \lim_{t\to +\infty}e^{t 2\Delta_\alpha}e^{-t\mathbf{H}}\Big(e^{(\alpha-Q)c}\chi(c) \Big)\]
in $e^{-\beta \rho}L^2$, hence $dc\otimes \P$ almost everywhere (up to extracting subsequence). We have to show that we can replace $\chi$ by $1$. For this we will use the probabilistic representation \eqref{fkformula}: we have
\begin{align*}
e^{t 2\Delta_\alpha}e^{-t\mathbf{H}}\Big(e^{(\alpha-Q)c}(1-\chi(c)) \Big)=&e^{t 2\Delta_\alpha}e^{-Qc} \mathds{E}_\varphi\big[S_{e^{-t}} (e^{\alpha c}(1-\chi(c)))e^{-\mu M_\gamma ( \mathbb{D}_t^c)}\big]\\
=&e^{t 2\Delta_\alpha}e^{(\alpha-Q)c}\mathds{E}_\varphi \big[e^{\alpha (X_t(0)-Qt)}(1-\chi(c+X_t(0)-Qt))e^{-\mu M_\gamma ( \mathbb{D}_t^c)}\big].
\end{align*}
By Girsanov's transform this expression can be rewritten as
\begin{align*}
e^{t 2\Delta_\alpha}e^{-t\mathbf{H}}\Big(e^{(\alpha-Q)c}(1-\chi(c)) \Big)=&e^{(\alpha-Q)c}\mathds{E}_\varphi \big[ \big(1-\chi(c+X_t(0)-(Q-\alpha)t)\big)\exp\big(-\mu \int_{ \mathbb{D}_t^c}|z|^{-\gamma\alpha} M_\gamma (dz)\big)\big].
\end{align*}
Recalling that $\chi=1$ on $(-\infty,a-1)$, that $t\mapsto X_t(0)$ evolves as a Brownian motion independent of $\varphi$, and estimating the exponential term by $1$, we obtain
\begin{align*}
\big|e^{t 2\Delta_\alpha}e^{-t\mathbf{H}}\Big(e^{(\alpha-Q)c}(1-\chi(c)) \Big)\big|\leq &e^{(\alpha-Q)c}\P\big( c+X_t(0)-(Q-\alpha)t\geq a-1\big).\end{align*}
The result easily follows from this estimate.
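To make the last step explicit (this is a sketch, under the normalization, used in the Feynman-Kac representation above, that $X_t(0)$ is a standard Brownian motion with ${\rm Var}(X_t(0))=t$): for fixed $c$ and $t$ large enough so that $(Q-\alpha)t+a-1-c\geq 0$, the Gaussian tail bound gives
\[ \P\big( c+X_t(0)-(Q-\alpha)t\geq a-1\big)=\P\big(X_t(0)\geq (Q-\alpha)t+a-1-c\big)\leq e^{-\frac{((Q-\alpha)t+a-1-c)^2}{2t}}\underset{t\to+\infty}{\longrightarrow}0,\]
since $Q-\alpha>0$; hence the contribution of $e^{(\alpha-Q)c}(1-\chi(c))$ vanishes in the limit, $dc\otimes \P$ almost everywhere.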
\end{proof}
\subsubsection{Meromorphic extension of the resolvent near the $L^2$ spectrum}
We denote by $Z$ the Riemann surface on which the functions $p\mapsto \sqrt{p^2-2\lambda_j}$ are single valued for all $j$. This is a ramified covering of $\mathbb{C}$ with ramification points $\{\pm\sqrt{2\lambda_j}\, |\, j\in\mathbb{N}\}$, and
in which we embed the region ${\rm Im}(p)>0$ that we call the \emph{physical sheet}. We will call $\pi: Z\to \mathbb{C}$ the projection of the covering. The construction of $Z$ can be done iteratively on $j$, as explained in Chapter 6.7 of Melrose's book \cite{Mel}.
The map $p\mapsto \alpha:=Q+ip$ from ${\rm Im}(p)>0$ to ${\rm Re}(\alpha)<Q$ (now called the physical sheet for the variable $\alpha$) extends analytically as a map $Z\to \Sigma$, where $\Sigma$ is a Riemann surface isomorphic to $Z$ (this just amounts to an affine change of complex coordinates from $p$ to $\alpha$). We shall also denote by $\pi : \Sigma\to \mathbb{C}$ the projection.
Finally we choose a specific function $\chi$ of the form indicated previously but we further impose that $\chi\in C^\infty(\mathbb{R})$ is equal to $1$ in $(-\infty,-1+\delta)$ and equal to $0$ in $(0,+\infty)$ (for some small $\delta>0$).
First, we recall the notation for the orthogonal projectors
\[ \Pi_k= 1_{[0,\lambda_k]}(\mathbf{P}) : L^2(\Omega_\mathbb{T},\P)\to L^2(\Omega_\mathbb{T},\P)\]
and we denote by $E_k$ their ranges (which are Hilbert spaces) and by $E_k^\perp$ the range of $1-\Pi_k$.
The goal of this section is to show the following:
\begin{proposition}\label{extensionresolvent}
Assume that $\gamma\in (0,\sqrt{2})$ and let $0\leq \beta<\gamma/2$. Then the following holds true:\\
1) The resolvent $\mathbf{R}(\alpha):=(\mathbf{H}-2\Delta_\alpha )^{-1}$ is bounded as a map
$L^2(\mathbb{R}\times \Omega_\mathbb{T})\to \mathcal{D}(\mathbf{H})$ for ${\rm Re}(\alpha)<Q$, and for $k\geq 0$ large enough the operator
$(\mathbf{H}-2\Delta_{\pi(\alpha)})^{-1}$ admits a meromorphic continuation to the region
\[\{\alpha=Q+ip \in \Sigma\,|\, |\pi(p)|^2\leq \lambda_k^{\frac{1}{4}}, \forall j\leq k, {\rm Im}\sqrt{p^2-2\lambda_j}>-\min(\beta,\gamma/2-\beta)
\}\]
as a map
\begin{align*}
\mathbf{R}(\alpha): e^{\beta \rho}L^2(\mathbb{R}\times \Omega_\mathbb{T})\to e^{-\beta \rho}\mathcal{D}({\bf H}),
\end{align*}
and the residue at each pole is a finite rank operator. Moreover, for each $\psi\in C^\infty(\mathbb{R})\cap L^\infty(\mathbb{R})$ satisfying $\psi'\in L^\infty$ and ${\rm supp}(\psi)\subset (-\infty,A)$ for some $A\in \mathbb{R}$,
\[\mathbf{R}(\alpha)(1-\Pi_k)\psi: \mathcal{D}'(\mathcal{Q})\to e^{-\beta \rho}\mathcal{D}(\mathcal{Q}).\]
2) If $f\in e^{\beta \rho}\mathcal{D}'(\mathcal{Q})$ is supported in $c\in (-\infty,A)$ for some $A>0$ and is such that $\Pi_kf\in e^{\beta \rho}L^2(\mathbb{R};E_k)$
and $(1-\Pi_k)f\in \mathcal{D}'(\mathcal{Q})$, then for $\alpha$ as above and not a pole, one has in $c\leq 0$
\begin{equation}\label{asymptoticRf}
\mathbf{R}(\alpha)f=\sum_{j=0}^k a_j(\alpha,f)e^{-ic\sqrt{p^2-2\lambda_j}}+G(\alpha,f),
\end{equation}
with $a_j(\alpha,f)\in \ker (\mathbf{P}-\lambda_j)$, and $G(\alpha,f), \partial_cG(\alpha,f)\in e^{\beta\rho}L^2(\mathbb{R}^-; E_k)+L^2(\mathbb{R}^-; E_k^\perp)$, all depending meromorphically on $\alpha$ in the region where they are defined.\\
3) There is no pole for $\mathbf{R}(\alpha)$ in $\{\alpha \in \Sigma\,|\, {\rm Re}(\alpha)\leq Q\}\setminus \cup_{j=0}^\infty \{Q\pm i\sqrt{2\lambda_j}\}$ and $Q\pm i\sqrt{2\lambda_j}$ can be at most a pole of order $1$.
\end{proposition}
To prove this Proposition, we will construct parametrices for the operator
$\mathbf{H}-2\Delta_\alpha =\mathbf{H}-\tfrac{Q^2+p^2}{2}$ in several steps and will split the argument. More concretely, we will search for some bounded model operator $\widetilde{\bf R}(\alpha):e^{\beta \rho}L^2(\mathbb{R}\times \Omega_\mathbb{T})\to e^{-\beta \rho}\mathcal{D}(\mathcal{Q})$, holomorphic in $\alpha$ in the desired region of $\Sigma$, such that
\[ (\mathbf{H}-2\Delta_\alpha)\widetilde{\bf R}(\alpha)=1-{\bf K}(\alpha)\]
where ${\bf K}(\alpha) \in \mathcal{L}(e^{\beta \rho}L^2(\mathbb{R}\times \Omega_\mathbb{T}))$ is an analytic family of compact operators with $1-{\bf K}(\alpha_0)$ invertible at some $\alpha_0$ belonging to the physical sheet. The analytic Fredholm theorem will then imply that $(1-{\bf K}(\alpha))^{-1}$ exists as a meromorphic family and that
${\bf R}(\alpha):=\widetilde{\bf R}(\alpha)(1-{\bf K}(\alpha))^{-1}$ provides the desired meromorphic extension of the resolvent. Our strategy will be based on that method with slight modifications. The continuous spectrum of ${\bf H}$ near frequency $(Q^2+p^2)/2\in \mathbb{R}^+$ will come only from finitely many eigenmodes of $\mathbf{P}$, namely those $\lambda_j$ for which $2\lambda_j \leq p^2$. This suggests, in order to construct the approximation $\widetilde{{\bf R}}(\alpha)$, to split
the modes of ${\bf P}$ according to the value of ${\rm Im}(\alpha-Q)$. The parametrix will be constructed in three steps as follows:
\begin{itemize}
\item First, we deal with the large eigenmodes for the operator
$\mathbf{P}$ in the region $c\in (-\infty,0]$ of $L^2(\mathbb{R} \times \Omega_\mathbb{T})$. We will show that this part does not contribute to the continuous spectrum at frequency $(Q^2+p^2)/2$, and we shall obtain a parametrix for that part by energy estimates.
\item Then we consider the region $c\geq -1$ where we shall show that the model
operator in that region (essentially ${\bf H}$ on $L^2([-1,\infty)\times\Omega_\mathbb{T})$ with Dirichlet condition at $c=-1$) has compact resolvent, providing a compact operator for the parametrix of that region.
\item Finally, we will deal with the $c\leq 0$ region corresponding to eigenmodes of $\mathbf{P}$ of order $\mathcal{O}(|p|^2)$, where there is \emph{scattering} at $c=-\infty$ for frequency $(Q^2+p^2)/2$, producing continuous spectrum. The parametrix for this part is basically the exact inverse of ${\bf H_0}-2\Delta_\alpha$ restricted to finitely many modes of ${\bf P}$.
\end{itemize}
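Schematically, the approximate inverse will be assembled as a sum of the form
\[ \widetilde{\bf R}(\alpha)\;\simeq\;\underbrace{\tilde{\chi}\,\mathbf{R}_k^\perp(\alpha)(1-\Pi_k)\chi}_{\text{high modes of }\mathbf{P},\ c\leq 0}\;+\;\underbrace{(1-\hat{\chi})\,\mathbf{R}_+(1-\chi)}_{\text{region }c\geq -1}\;+\;\underbrace{\big(\text{low-mode model built from }(\mathbf{H}_0-2\Delta_\alpha)^{-1}\Pi_k\big)}_{\text{scattering region, }c\leq 0}.\]
This display is only an informal sketch: the operators $\mathbf{R}_k^\perp(\alpha)$ and $\mathbf{R}_+$, the cutoffs $\chi,\tilde{\chi},\hat{\chi}$ and the value of $k$ are introduced in Lemmas \ref{LemmaRk} and \ref{regimec>-1} below, the precise low-frequency model is constructed in the third step, and the resulting error terms are shown to be either small in norm or compact.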
For $s\geq 0$ and $I\subset \mathbb{R}$ an open interval, we will denote by $H^s(I; L^2(\Omega_\mathbb{T}))$ the Sobolev space of order $s$ in the $c$-variable
\[H^s(I; L^2(\Omega_\mathbb{T})):=\{ u\in L^2(I\times \Omega_\mathbb{T})\, |\, \xi\mapsto \langle \xi\rangle^{s}\mathcal{F}(u)(\xi)\in L^2(\mathbb{R}\times \Omega_\mathbb{T})\}\]
where $\mathcal{F}$ denotes Fourier transform in $c$, and similarly $H_0^s(I;L^2(\Omega_\mathbb{T}))$ will be the completion of $C_c^\infty(I;L^2(\Omega_\mathbb{T}))$
with respect to the norm $\|\langle \xi\rangle^s \mathcal{F}(u)\|_{L^2(\mathbb{R};L^2(\Omega_\mathbb{T}))}$.\\
\noindent\textbf{1) Large $\mathbf{P}$ eigenmodes in the region $c\leq 0$.}
Let $\chi\in C^\infty(\mathbb{R},[0,1])$ satisfy $\chi(c)=1$ for $c\leq -1+\delta$ for some $\delta\in (0,\tfrac{1}{2})$ and $\chi(c)=0$ in $[-1/2,\infty)$, and let $\tilde{\chi}\in C^\infty(\mathbb{R},[0,1])$ be such that $\tilde{\chi}=1$ on the support of
$\chi$ and ${\rm supp}(\tilde{\chi})\subset \mathbb{R}^-$; we now view these functions as multiplication operators by $\chi(c)$, resp. $\tilde{\chi}(c)$, on the spaces $e^{\beta \rho}L^2(\mathbb{R}^-\times\Omega_\mathbb{T})$.
We will first show the following
\begin{lemma}\label{LemmaRk}
1) There is a constant $C>0$ depending only on $|\tilde{\chi}'|_{\infty}, |\tilde{\chi}''|_{\infty}$ such that for each $k\in\mathbb{N}$, there is
a bounded operator
\[\mathbf{R}_k^\perp(\alpha): L^2(\mathbb{R};E_k^\perp)\to L^2(\mathbb{R}^-;E_k^\perp)\]
holomorphic in $\alpha=Q+ip\in\mathbb{C}$ in the region
$\{{\rm Re}(\alpha)<Q\}\cup \{|\alpha-Q|^2\leq \lambda_k\}$,
with
\begin{align*}
& \tilde{\chi}\mathbf{R}_k^\perp(\alpha) \chi: L^2(\mathbb{R}\times\Omega_\mathbb{T})
\to L^2(\mathbb{R};E_k^\perp)\cap\mathcal{D}({\bf H}),
& \tilde{\chi}\mathbf{R}_k^\perp(\alpha) \chi: \mathcal{D}'(\mathcal{Q})\to \mathcal{D}(\mathcal{Q})\cap L^2(\mathbb{R};E_k^\perp)
\end{align*}
bounded, so that
\begin{equation}\label{firstparamlem}
(\mathbf{H}-\tfrac{Q^2+p^2}{2})\tilde{\chi}\mathbf{R}_k^\perp(\alpha)(1-\Pi_k)\chi=(1-\Pi_k)\chi+\mathbf{L}_{k}^\perp(\alpha)+\mathbf{K}_{k}^\perp(\alpha)\quad \text{ and }\quad \tilde{\chi}\mathbf{R}_k^\perp(\alpha)\Pi_k\chi=0
\end{equation}
with
$\mathbf{L}_{k}^\perp(\alpha): \mathcal{D}'(\mathcal{Q})\to L^2(\mathbb{R}^-;E_k^\perp)$
and
$\mathbf{K}_{k}^\perp(\alpha):\mathcal{D}'(\mathcal{Q})\to e^{\beta \rho}L^2(\mathbb{R};E_k)$ bounded and holomorphic in
$\alpha$ as above for each $0<\beta<\gamma/2$.
In the region where
$|p^2|\leq \lambda_k$,
one has the bound
\begin{equation}\label{boundonK1}
\|\mathbf{L}_{k}^\perp(\alpha)\|_{\mathcal{L}(L^2)}\leq C\lambda_k^{-1/2}
\end{equation}
and $\mathbf{K}_{k}^\perp(\alpha)$ is compact as a map $L^2(\mathbb{R}\times\Omega_\mathbb{T})\to
e^{\beta \rho}L^2(\mathbb{R};E_k)$.\\
2) Let $\beta\in\mathbb{R}$, then in the region ${\rm Re}((\alpha-Q)^2)>\beta^2-2\lambda_k+1$, the operator
$\mathbf{R}_k^\perp(\alpha):e^{-\beta \rho} \mathcal{D}'(\mathcal{Q})\to e^{-\beta \rho}(L^2(\mathbb{R}^-;E_k^\perp)\cap \mathcal{D}(\mathcal{Q}))$ is a bounded holomorphic family,
$\mathbf{K}_k^\perp(\alpha): e^{-\beta \rho}L^2(\mathbb{R}\times \Omega_\mathbb{T})\to e^{-\beta \rho}L^2(\mathbb{R}^-;E_k)$
is a compact holomorphic family, $\mathbf{
L}_k^\perp(\alpha):e^{-\beta \rho}L^2(\mathbb{R}\times \Omega_\mathbb{T})\to e^{-\beta \rho}L^2(\mathbb{R}^-;E_k^\perp)$ is bounded analytic with norm
\[\| \mathbf{L}_k^\perp(\alpha)\|_{\mathcal{L}(e^{-\beta \rho}L^2)}\leq \frac{C(1+|\beta|)}{\sqrt{{\rm Re}((\alpha-Q)^2)+2\lambda_k-\beta^2}}\]
for some $C>0$ depending only on $|\tilde{\chi}'|_\infty$ and $|\tilde{\chi}''|_\infty$.
\end{lemma}
\begin{proof}
Let us define $\mathcal{Q}_k^\perp$ as the restriction of $\mathcal{Q}$ to the closed subspace
\[ \mathcal{D}(\mathcal{Q}_k^\perp):=\mathcal{D}(\mathcal{Q})\cap L^2(\mathbb{R}^-;E_k^\perp)=\mathcal{D}(\mathcal{Q})\cap \ker \Pi_k\cap \ker r_{\mathbb{R}^+}\]
where $r_{\mathbb{R}^\pm}:L^2(\mathbb{R}\times \Omega_\mathbb{T})\to L^2(\mathbb{R}^\pm\times \Omega_\mathbb{T})$ are the restriction maps to $c\in \mathbb{R}^\pm$.
Note that this is a closed form and is thus the quadratic form of a unique self-adjoint operator
${\bf H}_k^\perp$, which maps $\mathcal{D}(\mathcal{Q}_k^\perp)\to \mathcal{D}'(\mathcal{Q}_k^\perp)$ and has a domain
$\mathcal{D}(\mathbf{H}_k^\perp)\subset \mathcal{D}(\mathcal{Q}_k^\perp)$.
For $u\in \mathcal{D}(\mathcal{Q}_k^\perp)$ we have $\|{\bf P}^{1/2}u\|_{2}^2\geq \lambda_k\|u\|^2_2$, thus for each $\varepsilon\geq 0$
\begin{equation}\label{boundsob}
\mathcal{Q}(u)\geq \tfrac{1}{2}\|\partial_cu\|_{L^2}^2+( \tfrac{Q^2}{2}+(1-\varepsilon)\lambda_k)\|u\|_{L^2}^2+\varepsilon\|\mathbf{P}^{1/2}u\|^2_{L^2}+\mathcal{Q}_{e^{\gamma c}V}(u,u)
\end{equation}
and therefore the quadratic form $\mathcal{Q}_k^\perp(u)$ is bounded below
by $\frac{\lambda_k}{2}\|u\|^2_{L^2}$ on $\mathcal{D}(\mathcal{Q}_k^\perp)$ (if $\varepsilon$ is chosen small enough).
There is a natural injection $\iota: \mathcal{D}(\mathcal{Q}_k^\perp)\to \mathcal{D}(\mathcal{Q})$ which we view as an inclusion.
Let $\chi\in C^\infty(\mathbb{R})$ be equal to $1$ in $c<-1/2$ and supported in $(-\infty,0)$, and let $\widetilde{\chi}$ be defined similarly but in addition equal to $1$ on the support of $\chi$. We view $\chi$ as the operator of multiplication by $\chi(c)$ on $L^2(\mathbb{R}\times \Omega_\mathbb{T})$, and similarly for $\widetilde{\chi}$.
Let us check that $\iota\chi=\chi\iota: \mathcal{D}({\bf H}_k^\perp)\to \mathcal{D}({\bf H})$ and for $u\in \mathcal{D}({\bf H}_k^\perp)$
\begin{equation}\label{identite_entre_H}
{\bf H}\iota(\chi u) = \iota ({\bf H}_k^\perp \chi u)+\chi e^{\gamma c}\Pi_k(V\iota(u))
\end{equation}
(note that $\chi u\in \mathcal{D}({\bf H}_{k}^\perp)$ if $u\in \mathcal{D}({\bf H}_{k}^\perp)$ since ${\bf H}_{k}^\perp \chi u=\chi {\bf H}_{k}^\perp u-\frac{1}{2}\chi''u-\chi'\partial_cu$).
Here we use Lemma \ref{chiPikV}, which ensures that $\chi e^{\gamma c}\Pi_k(V\iota(u))$ makes sense.
First, we have for $u\in \mathcal{D}(\mathcal{Q}_k^\perp)$ that $\iota(u)=(1-\Pi_k)\iota(u)$.
For such $u$ and for $v\in \mathcal{D}(\mathcal{Q})$, one gets
\[ \begin{split}
\mathcal{Q}(\iota (\chi u),v)=& \tfrac{1}{2}\langle \partial_c(1-\Pi_k)\chi u,\partial_c(1-\Pi_k)\widetilde{\chi} v\rangle_{L^2(\mathbb{R}^-\times \Omega_\mathbb{T})}+
\tfrac{Q^2}{2}\langle(1-\Pi_k)\chi u,(1-\Pi_k)\widetilde{\chi}v\rangle_{L^2(\mathbb{R}^-\times \Omega_\mathbb{T})}\\
& +\langle {\bf P}^{1/2}(1-\Pi_k)\chi u,{\bf P}^{1/2}(1-\Pi_k)\widetilde{\chi} v\rangle_{L^2(\mathbb{R}^-\times \Omega_\mathbb{T})}+\int_{\mathbb{R}}\chi(c) e^{\gamma c}(\mathcal{Q}_V(u,(1-\Pi_k)v)+\mathcal{Q}_V(u,\Pi_kv))dc\\
=& \mathcal{Q}_{k}^\perp(\chi u,(1-\Pi_k)\widetilde{\chi}v)+\int_{\mathbb{R}}e^{\gamma c}\mathcal{Q}_V(\chi u,\Pi_kv)\, dc\\
=& \langle {\bf H}_k^\perp \chi u,(1-\Pi_k)\widetilde{\chi}v\rangle_{2}+\int_{\mathbb{R}}e^{\gamma c}\mathcal{Q}_V(\chi u,\Pi_kv)\, dc=\langle {\bf H}_k^\perp \chi u,v\rangle_{L^2(\mathbb{R}^-\times \Omega_\mathbb{T})}+\int_{\mathbb{R}}e^{\gamma c}\mathcal{Q}_V(\chi u,\Pi_kv)\, dc
\end{split}\]
where in the last line, $\widetilde{v}:=(1-\Pi_k)\widetilde{\chi}v$ is viewed as an element in $\mathcal{D}(\mathcal{Q}_k^\perp)$. As in the proof of Lemma \ref{chiPikV}, there is a uniform constant $C_k$ depending only on $k,\gamma$ such that
\[ \Big|\int_{\mathbb{R}^-}e^{\gamma c}\mathcal{Q}_V(u,\Pi_kv)\, dc\Big|\leq C_k|\mathcal{Q}_{e^{\gamma c}V}(\iota(u))|^{1/2} \|v\|_{2} \]
which implies that $\iota(\chi u)\in \mathcal{D}({\bf H})$ since $|\langle {\bf H}_k^\perp \chi u,v\rangle_{L^2(\mathbb{R}^-\times \Omega_\mathbb{T})}|\leq \|{\bf H}_k^\perp \chi u\|_2\|v\|_2$. Now we also notice that
\[\int_{\mathbb{R}}e^{\gamma c}\mathcal{Q}_V(\chi u,\Pi_kv)\, dc= \langle e^{\gamma c}V\chi \iota(u),\Pi_k \widetilde{\chi}v\rangle_2\]
where the pairing makes sense since $e^{\gamma c}V\chi \iota(u)\in \mathcal{D}'(\mathcal{Q})$ and $\Pi_k\widetilde{\chi}v\in \mathcal{D}(\mathcal{Q})$ by Lemma \ref{chiPikV}. Moreover, using $\widetilde{\chi}\chi=\chi$ and $\Pi_k^*=\Pi_k$ this term is also equal to
\[\langle e^{\gamma c}V\chi \iota(u),\Pi_k \widetilde{\chi}v\rangle_2=\langle \chi e^{\gamma c}\Pi_k(V\iota(u)),v\rangle_2.\]
This shows the formula \eqref{identite_entre_H}.
The spectrum of ${\bf H}_k^\perp$ is contained in $[ \tfrac{Q^2}{2}+\lambda_k,\infty)$ due to \eqref{boundsob}.
It will be said to have Dirichlet condition at $c=0$,
by analogy with the Laplacian on finite dimensional manifolds.
The resolvent $\mathbf{R}_k^\perp(\alpha):=(\mathbf{H}_k^\perp-\tfrac{Q^2+p^2}{2})^{-1}$ (with $\alpha=Q+ip$) is well defined as a bounded map
\begin{align*}
&\mathbf{R}_k^\perp(\alpha): L^2(\mathbb{R}^-;E_k^\perp)\to \mathcal{D}(\mathbf{H}_k^\perp)
&{\bf R}_k^\perp(\alpha): \mathcal{D}'(\mathcal{Q}_k^\perp)\to \mathcal{D}(\mathcal{Q}_k^\perp)
\end{align*}
if $p\in\mathbb{C}$ is such that $p^2\notin [2\lambda_k,\infty)$, with $L^2$
norm
\begin{equation}\label{boundRkalpha}
\|{\bf R}_k^\perp(\alpha)\|_{\mathcal{L}(L^2)}\leq 2/\lambda_k \textrm{ for }|p|^2\leq \lambda_k.
\end{equation}
It is also holomorphic in $\alpha$ in $\{{\rm Re}(\alpha)<Q\}\cup \{|\alpha-Q|^2<\lambda_k\}$.
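For the reader's convenience, let us record the elementary argument behind \eqref{boundRkalpha}: since $\mathbf{H}_k^\perp$ is self-adjoint with spectrum contained in $[\tfrac{Q^2}{2}+\lambda_k,\infty)$, the spectral theorem gives, for $|p|^2\leq \lambda_k$,
\[ \|{\bf R}_k^\perp(\alpha)\|_{\mathcal{L}(L^2)}\leq \Big(\inf_{\mu\geq \frac{Q^2}{2}+\lambda_k}\big|\mu-\tfrac{Q^2+p^2}{2}\big|\Big)^{-1}\leq \big(\lambda_k-\tfrac{{\rm Re}(p^2)}{2}\big)^{-1}\leq \frac{2}{\lambda_k},\]
using ${\rm Re}(p^2)\leq |p|^2\leq \lambda_k$ in the last step.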
We also define the dual map $\iota^T:\mathcal{D}'(\mathcal{Q})\to\mathcal{D}'(\mathcal{Q}_k^\perp)$ (which also maps $L^2(\mathbb{R}\times \Omega_\mathbb{T})\to L^2(\mathbb{R}^-;E_k^\perp)$). We get
\begin{align*}
& \iota \chi \mathbf{R}_k^\perp(\alpha)\iota^T(1-\Pi_k): L^2(\mathbb{R}^-\times\Omega_\mathbb{T})\to \mathcal{D}(\mathbf{H})\cap L^2(\mathbb{R};E_k^\perp)
& \iota\mathbf{R}_k^\perp(\alpha)\iota^T:\mathcal{D}'(\mathcal{Q})\to \mathcal{D}(\mathcal{Q})
\end{align*}
with the same properties. To avoid heavy notation, we shall from now on drop the maps $\iota,\iota^T$ from the notation, so that we view $\chi \mathbf{R}_k^\perp(\alpha)(1-\Pi_k)$ as a map $L^2(\mathbb{R}^-\times\Omega_\mathbb{T})\to \mathcal{D}(\mathbf{H})\cap L^2(\mathbb{R};E_k^\perp)$.
Using \eqref{boundsob} with $u={\bf R}_k^\perp(\alpha)f$ we also obtain that
\begin{equation}
\label{boundplcRk}
\|\partial_c {\bf R}_k^\perp(\alpha)\|_{\mathcal{L}(L^2)}\leq \sqrt{\frac{4}{\lambda_k}}, \textrm{ for }|p|^2\leq \lambda_k/2.
\end{equation}
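For the reader's convenience, here is a sketch of the derivation of \eqref{boundplcRk}: writing $u={\bf R}_k^\perp(\alpha)f$ for $f\in L^2(\mathbb{R}^-;E_k^\perp)$ and taking real parts in $\langle (\mathbf{H}_k^\perp-\tfrac{Q^2+p^2}{2})u,u\rangle_2=\langle f,u\rangle_2$, the bound \eqref{boundsob} with $\varepsilon=0$ gives
\[ \tfrac{1}{2}\|\partial_cu\|_2^2\leq \mathcal{Q}_k^\perp(u)-(\tfrac{Q^2}{2}+\lambda_k)\|u\|_2^2= {\rm Re}\langle f,u\rangle_2+(\tfrac{{\rm Re}(p^2)}{2}-\lambda_k)\|u\|_2^2\leq \|f\|_2\|u\|_2\leq \tfrac{2}{\lambda_k}\|f\|_2^2,\]
where we used ${\rm Re}(p^2)\leq |p|^2\leq \lambda_k/2$ (so that the middle term is nonpositive) and \eqref{boundRkalpha} in the last step; this is exactly $\|\partial_c{\bf R}_k^\perp(\alpha)f\|_2\leq \sqrt{4/\lambda_k}\,\|f\|_2$.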
Now we fix $\chi,\widetilde{\chi}$ as defined before the Lemma.
Thus, using $\Pi_k{\bf H}_0={\bf H}_0\Pi_k$ and \eqref{identite_entre_H}, we get for $\alpha=Q+ip$ \[\begin{split}(\mathbf{H}-\tfrac{Q^2+p^2}{2})\tilde{\chi}\mathbf{R}_k^\perp(\alpha)(1-\Pi_k)\chi=& (1-\Pi_k)\chi-\tfrac{1}{2}[\partial_c^2,\tilde{\chi}]\mathbf{R}_k^\perp(\alpha)(1-\Pi_k)\chi+
e^{\gamma c}\tilde{\chi}\Pi_k V\mathbf{R}_k^\perp(\alpha)\chi (1-\Pi_k)\\
=:& (1-\Pi_k)\chi+\mathbf{L}_k^\perp(\alpha)+\mathbf{K}_k^\perp(\alpha).
\end{split}\]
Since $[\partial_c^2,\tilde{\chi}]$ is a first order operator with compact support in $c$ commuting with $\Pi_k$, we notice that $\mathbf{L}_k^\perp(\alpha):\mathcal{D}'(\mathcal{Q})\to L^2(\mathbb{R}^-;E_k^\perp)$ and
we can use \eqref{boundplcRk}, \eqref{boundRkalpha} to deduce that there is $C>0$ depending only on $|\tilde{\chi}'|_{\infty}, |\tilde{\chi}''|_{\infty}$ such that
\[\|\mathbf{L}_k^\perp(\alpha)\|_{\mathcal{L}(L^2)}\leq C\lambda_k^{-1/2} \]
as long as $|p^2|\leq \lambda_k$. Let us now deal with ${\bf K}_k^\perp(\alpha)$.
First, notice that $\mathbf{K}_k^\perp(\alpha)$ maps $\mathcal{D}'(\mathcal{Q})$ to $e^{\gamma \rho/2}L^2(\mathbb{R}^-;E_k)$: indeed,
\[\mathbf{R}_k^\perp(\alpha)(1-\Pi_k)\chi: \mathcal{D}'(\mathcal{Q})\to \mathcal{D}(\mathcal{Q}), \quad e^{\gamma c}\tilde{\chi}\Pi_k V: \mathcal{D}(\mathcal{Q})\to e^{\gamma \rho/2}L^2(\mathbb{R}^-;E_k)\]
using Lemma \ref{chiPikV} with $\beta=\beta'=-\gamma/2$.
We would like to prove some regularization property in $c$ to deduce that $\mathbf{K}_k^\perp(\alpha)$ is compact on $L^2$ (or some weighted $L^2$ space). First, we have
\[ \partial_c {\bf H}_k^\perp = {\bf H}_k^\perp\partial_c +\gamma e^{\gamma c}(1-\Pi_k)V(1-\Pi_k)\]
thus applying ${\bf R}_k^\perp(\alpha)$ on the left and right:
\[ {\bf R}_k^\perp(\alpha)\partial_c=\partial_c{\bf R}_k^\perp(\alpha)+\gamma {\bf R}_k^\perp(\alpha)e^{\gamma c}V{\bf R}_k^\perp(\alpha).\]
Here the last term really is ${\bf R}_k^\perp(\alpha)\iota^Te^{\gamma c}V\iota {\bf R}_k^\perp(\alpha)$, viewed as a bounded map $L^2(\mathbb{R}^-;E_k^\perp)\to \mathcal{D}(\mathcal{Q}_k^\perp)$ using $e^{\gamma c}V:\mathcal{D}(\mathcal{Q})\to \mathcal{D}'(\mathcal{Q})$.
Therefore (using $[\partial_c,V]=0$)
\[ \begin{split}
\partial_c{\bf K}_k^\perp(\alpha)=& \gamma {\bf K}_k^\perp(\alpha)+e^{\gamma c}\widetilde{\chi}'\Pi_k V{\bf R}_k^\perp(\alpha)(1-\Pi_k)\chi\\
& +\widetilde{\chi}e^{\gamma c}\Pi_k V{\bf R}_k^\perp(\alpha) \partial_c\chi-\gamma e^{\gamma c}\widetilde{\chi}\Pi_k V{\bf R}_k^\perp(\alpha)e^{\gamma c}V{\bf R}_k^\perp(\alpha)\chi.
\end{split}\]
Just as above, the first two terms map $\mathcal{D}'(\mathcal{Q})$ to $e^{\gamma \rho/2}L^2(\mathbb{R};E_k)$.
Next, $\partial_c \chi: L^2(\mathbb{R}\times \Omega_\mathbb{T})\to \mathcal{D}'(\mathcal{Q})$, ${\bf R}_k^\perp(\alpha): \mathcal{D}'(\mathcal{Q})\to \mathcal{D}(\mathcal{Q})$ and
$\widetilde{\chi}e^{\gamma c}\Pi_k V: \mathcal{D}(\mathcal{Q}) \to L^2(\mathbb{R}\times \Omega_\mathbb{T})$ are bounded (using again Lemma \ref{chiPikV}), so
\[\tilde{\chi} e^{\gamma c}\Pi_k V{\bf R}_k^\perp(\alpha) \partial_c\chi: L^2(\mathbb{R}\times \Omega_\mathbb{T})\to
e^{\gamma \rho/2}L^2(\mathbb{R}; E_k)\]
is bounded. Finally, by Lemma \ref{chiPikV}
\[ \widetilde{\chi}e^{\gamma c}\Pi_k V{\bf R}_k^\perp(\alpha): \mathcal{D}'(\mathcal{Q}_k^\perp)\to e^{\gamma \rho/2}L^2(\mathbb{R};E_k), \quad
e^{\gamma c}V{\bf R}_k^\perp(\alpha)\chi : L^2\to \mathcal{D}'(\mathcal{Q}_k^\perp)\]
is bounded so
\[e^{\gamma c}\widetilde{\chi}\Pi_k V{\bf R}_k^\perp(\alpha)e^{\gamma c}V{\bf R}_k^\perp(\alpha)\chi: L^2(\mathbb{R}\times \Omega_\mathbb{T})\to e^{\gamma \rho/2}L^2(\mathbb{R}; E_k)\]
is also bounded and therefore $\partial_c{\bf K}_k^\perp(\alpha): L^2(\mathbb{R}\times \Omega_\mathbb{T})\to e^{\gamma \rho/2}L^2(\mathbb{R}; E_k)$. This shows that
\[{\bf K}_k^\perp(\alpha): L^2(\mathbb{R}\times \Omega_\mathbb{T})\to e^{\gamma\rho/2}H_0^1(\mathbb{R};E_k).\]
It is then easy to see this is a compact operator on $L^2$ as announced since $e^{\gamma\rho/2}H_0^1(\mathbb{R};E_k)$ injects compactly into $e^{\beta \rho}L^2(\mathbb{R}\times \Omega_\mathbb{T})$ for $\beta<\gamma/2$ by using that $E_k$ has finite dimension. We conclude that
${\bf K}_k^\perp(\alpha)$ is compact as a map $L^2(\mathbb{R}\times \Omega_\mathbb{T})\to e^{\beta \rho}L^2(\mathbb{R}\times \Omega_\mathbb{T})$ if $\beta<\gamma/2$.
Moreover $\mathbf{K}_k^\perp(\alpha), \mathbf{L}_k^\perp(\alpha)$ are holomorphic
in $\alpha\in \mathbb{C}$ in the region $\{{\rm Re}(\alpha)<Q\}\cup \{|\alpha-Q|^2<\lambda_k\}$ since ${\bf R}_k^\perp(\alpha)$ is.
This concludes the proof of 1).\\
Let us next consider the region $\{{\rm Re}(\alpha)\leq Q\}$, and we proceed as in Lemma \ref{resolventweighted}. Let $\mathbf{H}_{k,\beta}^\perp:=e^{\beta \rho}\mathbf{H}_k^\perp e^{-\beta \rho}$
for $\beta\in \mathbb{R}$ which is also given by
\[ \mathbf{H}_{k,\beta}^\perp= \mathbf{H}_k^\perp+(1-\Pi_k)\big(-\tfrac{\beta^2}{2}(\rho'(c))^2+\tfrac{\beta}{2} \rho''(c)+\beta \rho'(c)\partial_{c}\big)\]
and the associated sesquilinear form on $\mathcal{D}(\mathcal{Q}_k^\perp)$
\[ \mathcal{Q}_{k,\beta}^\perp(u):=\mathcal{Q}_k^\perp(u)-\tfrac{\beta^2}{2}\|\rho'u\|^2_2+\tfrac{\beta}{2} \langle \rho''u,u\rangle_2+\beta\langle \partial_cu,\rho'u\rangle_2.\]
Note that on $\mathcal{D}(\mathcal{Q}_k^\perp)$, we have
\[ {\rm Re}(\mathcal{Q}_{k,\beta}^\perp(u))\geq \mathcal{Q}_k^\perp(u)-\tfrac{\beta^2}{2}\|u\|_2^2\geq
\tfrac{1}{2}\| \partial_c u\|^2_2+(\tfrac{Q^2-\beta^2}{2}+\lambda_k)\|u\|_2^2+\|e^{\frac{\gamma c}{2}}V^{\frac{1}{2}}u\|^2_2.\]
This implies by Lax-Milgram, just as in the proof of Lemma \ref{resolventweighted}, that if
${\rm Re}((\alpha-Q)^2)>\beta^2-2\lambda_k$, then
for each $f\in \mathcal{D}'(\mathcal{Q}^\perp_{k})$, there is a unique
$u\in \mathcal{D}(\mathcal{Q}_k^\perp)$ such that \begin{align}
&e^{\beta \rho}(\mathbf{H}_k^\perp-2\Delta_{\alpha}) e^{-\beta \rho}u=f, \textrm{ and if }f\in L^2\\
&\|u\|_{2}\leq \frac{2\|f\|_{2}}{{\rm Re}((\alpha-Q)^2)+2\lambda_k-\beta^2},\quad& \|\partial_cu\|_{2}\leq \frac{2\|f\|_{2}}{\sqrt{{\rm Re}((\alpha-Q)^2)+2\lambda_k-\beta^2}}.\label{estdc}
\end{align}
In particular, this shows that, for ${\rm Re}((\alpha-Q)^2)>\beta^2-2\lambda_k$,
$\mathbf{R}_k^\perp(\alpha)$ extends as a map
\[ \mathbf{R}_k^\perp(\alpha): e^{-\beta \rho}\mathcal{D}'(\mathcal{Q}_{k}^\perp)\to e^{-\beta \rho}\mathcal{D}(\mathcal{Q}_k^\perp)
\]
with $\|\mathbf{R}_k^\perp(\alpha)\|_{\mathcal{L}(e^{-\beta \rho}L^2)}\leq 2({\rm Re}((\alpha-Q)^2)+2\lambda_k-\beta^2)^{-1}$. If we further impose that
${\rm Re}((\alpha-Q)^2)>\beta^2-2\lambda_k+1$ then, since $e^{\beta \rho}[\partial_c,e^{-\beta \rho}]=-\beta \rho'$ and using \eqref{estdc},
\[ \|\mathbf{L}_k^\perp(\alpha)\|_{\mathcal{L}(e^{-\beta \rho}L^2)}\leq \frac{2|\tilde{\chi}'|_\infty(1+|\beta|) +|\tilde{\chi}''|_\infty}{\sqrt{{\rm Re}((\alpha-Q)^2)+2\lambda_k-\beta^2}}.\]
Finally, the same argument as above for $\mathbf{K}_k^\perp(\alpha)$ shows that for ${\rm Re}((\alpha-Q)^2)+2\lambda_k-\beta^2>1$, the operator $\mathbf{K}_k^\perp(\alpha)$ is compact from $e^{-\beta \rho}L^2(\mathbb{R}\times\Omega_\mathbb{T})$ to $e^{-\beta\rho}L^2(\mathbb{R}^-;E_k)$.
\end{proof}
\begin{remark} We notice that the operators ${\bf R}_k^\perp(\alpha)$, $\mathbf{K}_k^\perp(\alpha)$, ${\bf L}_k^\perp(\alpha)$ lift as holomorphic family of operators to the region
$\{\alpha\in \Sigma\, |\, {\rm Re}(\pi(\alpha))<Q, |\pi(\alpha)-Q|^2<\lambda_k\}$ by simply composing with the projection $\pi: \Sigma\to \mathbb{C}$.\end{remark}
\noindent \textbf{2) The region $c\geq -1$.}
Next, consider the operator $\mathbf{H}-\tfrac{Q^2+p^2}{2}$ on $L^2([-1,\infty);L^2(\Omega_\mathbb{T}))$ with Dirichlet condition at $c=-1$ (i.e. the extension associated to the quadratic form on functions supported in $c\geq -1$), and let $\hat{\chi}\in C^\infty(\mathbb{R}; [0,1])$ be such that $(1-\hat{\chi})=1$ on ${\rm supp}(1-\chi)$ and
$1-\hat{\chi}$ is supported in $(-1,\infty)$ (in other words, $\hat{\chi}=0$ on $(-1+\delta,+\infty)$ and $\hat{\chi}=1$ on $(-\infty,-1)$).
We will construct a quasi-compact approximate inverse to ${\bf H}$ in $[-1,\infty)$ by using energy estimates and the properties of $V$, in particular the fact that the regions where $V$ is small are themselves, in a suitable sense, small. We show the following:
\begin{lemma}\label{regimec>-1}
There is a uniform constant $C>0$ and a bounded operator, independent of $\alpha$,
\[
\mathbf{R}_+: L^2(\mathbb{R}\times \Omega_\mathbb{T})\to \mathcal{D}({\bf H}),
\]
satisfying
\begin{align*}
& (1-\hat{\chi}) \mathbf{R}_+(1-\chi): \mathcal{D}'(\mathcal{Q})\to \mathcal{D}(\mathcal{Q})
\end{align*}
and for $\alpha=Q+ip\in\mathbb{C}$ and $k\geq 1$
\[(\mathbf{H}-\tfrac{Q^2+p^2}{2})(1-\hat{\chi})\mathbf{R}_+ (1-\chi)= (1-\chi)+\mathbf{K}_{+,k}(\alpha)+\mathbf{L}_{+,k}(\alpha)\]
where $\mathbf{K}_{+,k}(\alpha):L^2(\mathbb{R}\times \Omega_\mathbb{T})\to L^2([-1,\infty)\times \Omega_\mathbb{T})$ is compact and holomorphic in $\alpha\in \mathbb{C}$, and the operator $\mathbf{L}_{+,k}(\alpha):L^2(\mathbb{R}\times \Omega_\mathbb{T})\to L^2([-1,\infty)\times \Omega_\mathbb{T})$ is bounded and holomorphic in $\alpha\in \mathbb{C}$, such that
\begin{equation}\label{boundbis}
\|\mathbf{L}_{+,k}(\alpha)\|_{\mathcal{L}(L^2)}\leq C (1+|p|^2\lambda_k^{-1/2}),\quad \|\mathbf{L}_{+,k}(\alpha)^2\|_{\mathcal{L}(L^2)}\leq C (1+|p|^2)\lambda_k^{-1/2}+C(1+|p|^4)\lambda_k^{-1}
\end{equation}
for some uniform constant $C$ depending only on $\hat{\chi}$. Moreover
$\mathbf{L}_{+,k}(\alpha)$ and $\mathbf{K}_{+,k}(\alpha)$ are bounded as maps $\mathcal{D}'(\mathcal{Q})\to L^2([-1,\infty)\times \Omega_\mathbb{T})$.
\end{lemma}
\begin{proof}
We consider ${\bf R}_+:=({\bf H}-\tfrac{Q^2}{2}+1)^{-1}: L^2\to \mathcal{D}({\bf H})$, which we also view as
a map $\mathcal{D}'(\mathcal{Q})\to \mathcal{D}(\mathcal{Q})$. We have
\begin{equation}\label{boundanddefR+}
\tfrac{1}{2}\|\partial_c\mathbf{R}_+f\|^2_{2}+\|\mathbf{R}_+f\|^2_{2}+\|\mathbf{P}^{1/2}\mathbf{R}_+f\|^2_{2}+\mathcal{Q}_{e^{\gamma c}V}(\mathbf{R}_+f)\leq \|f\|^2_{2}.
\end{equation}
We write
\begin{equation}\label{errorK+}
\begin{split}
(\mathbf{H}-\tfrac{Q^2+p^2}{2})(1-\hat{\chi})\mathbf{R}_+ (1-\chi)=& (1-\chi)+\tfrac{1}{2}[\partial_c^2,\hat{\chi}]\mathbf{R}_+(1-\chi)-(\tfrac{p^2}{2}+1)(1-\hat{\chi})\mathbf{R}_+ (1-\chi)\\
=:& (1-\chi)+\mathbf{K}_+^{1}+\mathbf{K}_+^{2}(\alpha).
\end{split}
\end{equation}
Notice that $\mathbf{K}_+^{1},\mathbf{K}_+^2(\alpha)$ are bounded as maps $\mathcal{D}'(\mathcal{Q})\to L^2(\mathbb{R}\times \Omega_\mathbb{T})$ by using that $[\partial_c^2,\hat{\chi}]:\mathcal{D}(\mathcal{Q})\to L^2(\mathbb{R}\times \Omega_\mathbb{T})$ is bounded.
By Lemma \ref{decroissance_c_grand}, there is $\beta>0$ such that (with $c_+:=c\mathbf{1}_{c>0}$)
\begin{equation}\label{weightedPik}
\Pi_k\mathbf{R}_+ : L^2(\mathbb{R}\times \Omega_\mathbb{T})\to e^{-\beta c_+}L^2(\mathbb{R}; E_k)
\end{equation}
is bounded. Let $\psi\in C_c^\infty(\mathbb{R})$ be non-negative, equal to $1$ in $[-A,A]$ and to $0$ for $|c|>2A$, with $\|\psi\|_\infty\leq 1$ and $\|\psi'\|_\infty+\|\psi''\|_\infty\leq 1$ (we shall take $A\to \infty$ later). By using Lemma \ref{chiPikV}, the map
\[ \Pi_k \psi :\mathcal{D}(\mathcal{Q})\to \mathcal{D}(\mathcal{Q})\]
is bounded: indeed, $\partial_c(\psi u)=\psi' u+\psi\partial_cu\in L^2$ if $u\in \mathcal{D}(\mathcal{Q})$ and
\[ \int_{\mathbb{R}} \psi^2(c)e^{\gamma c}\mathcal{Q}_V(u,u)dc \leq C_{\gamma,\psi}\mathcal{Q}(u,u)\]
for some $C_{\gamma,\psi}$ depending on $\gamma$ and $\psi$.
We are going to show that for all such $\psi$,
\begin{equation}\label{rayas2005}
\|(1-\Pi_k)\psi {\bf R}_+\|_{L^2\to L^2}\leq 3/\sqrt{\lambda_k},
\end{equation}
which by taking $A\to \infty$ shows that $(1-\Pi_k){\bf R}_+$ has $\mathcal{L}(L^2)$ norm
bounded by $3/\sqrt{\lambda_k}$.
Let $u=\mathbf{R}_+f$ with $f\in L^2$, then we claim that
\begin{equation}\label{bound-Pik}
\begin{gathered}
\lambda_k\|(1-\Pi_k)\psi u\|_{2}^2 +\tfrac{1}{2}\|\partial_c(1-\Pi_k)\psi u\|_{2}^2 + \tfrac{1}{2}\mathcal{Q}_{e^{\gamma c}V}((1-\Pi_k)\psi u) \\
\leq \|(1-\Pi_k)\psi f\|_{2}\|(1-\Pi_k)\psi u\|_{2}+\tfrac{1}{2}\mathcal{Q}_{e^{\gamma c}V}(\Pi_k\psi u)+2\|f\|^2_2
\end{gathered}
\end{equation}
To prove this, we note that, in $\mathcal{D}'(\mathcal{Q})$
\[ (\mathbf{H}-\tfrac{Q^2}{2}+1)(1-\Pi_k)\psi u=
\psi(1-\Pi_k)f+\psi(\Pi_ke^{\gamma c}V-e^{\gamma c}V\Pi_k)u-\tfrac{1}{2}\psi''(1-\Pi_k)u-\psi'(1-\Pi_k)\partial_cu \]
thus pairing with $(1-\Pi_k)\psi u\in \mathcal{D}(\mathcal{Q})$, this gives
\[\begin{split}
\mathcal{Q}((1-\Pi_k)\psi u, (1-\Pi_k)\psi u)=& \langle (1-\Pi_k)\psi f,(1-\Pi_k)\psi u\rangle_2-
\mathcal{Q}_{e^{\gamma c}V}\Big(\psi \Pi_k u,\psi(1-\Pi_k)u\Big)\\
& -\tfrac{1}{2}\langle \psi''(1-\Pi_k)u,(1-\Pi_k)\psi u\rangle_2-\langle \psi'(1-\Pi_k)\partial_c u,(1-\Pi_k)\psi u\rangle_2
\end{split}\]
This gives the bound \eqref{bound-Pik}, using Cauchy-Schwarz, the estimate $\|{\bf P}^{1/2}(1-\Pi_k)u\|^2_{L^2(\Omega_\mathbb{T})}\geq \lambda_k\|(1-\Pi_k)u\|^2_{L^2(\Omega_\mathbb{T})}$ and the fact that $\|u\|_{\mathcal{D}(\mathcal{Q})}\leq \|f\|_2$ (by \eqref{boundanddefR+}).
Now we do the same computation with $(1-\Pi_k)$ replaced by $\Pi_k$:
\begin{equation}\label{bound1-Pik}
\begin{gathered}
\|\Pi_k \psi u\|_{2}^2 +\tfrac{1}{2}\|\partial_c\Pi_k\psi u\|_{2}^2 + \tfrac{1}{2}\mathcal{Q}_{e^{\gamma c}V}(\Pi_k\psi u) \\
\leq \|\Pi_k\psi f\|_{2}\|\Pi_k\psi u\|_{2}+\tfrac{1}{2}\mathcal{Q}_{e^{\gamma c}V}((1-\Pi_k)\psi u)+2\|f\|^2_2.
\end{gathered}
\end{equation}
Combining \eqref{bound1-Pik} and \eqref{bound-Pik} and using $\|u\|_{2}\leq \|f\|_{2}$, we obtain
\begin{equation}\label{final1-Pik}
\|(1-\Pi_k)\psi\mathbf{R}_{+} f\|_{2}=\|(1-\Pi_k)\psi u\|_{2}\leq \frac{3}{\sqrt{\lambda_k}}\|f\|_2
\end{equation}
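For the reader's convenience, let us sketch this combination: summing \eqref{bound-Pik} and \eqref{bound1-Pik}, the two terms $\tfrac{1}{2}\mathcal{Q}_{e^{\gamma c}V}(\Pi_k\psi u)$ and $\tfrac{1}{2}\mathcal{Q}_{e^{\gamma c}V}((1-\Pi_k)\psi u)$ cancel out, and dropping the remaining nonnegative terms on the left-hand side we get, using $\|\psi\|_\infty\leq 1$ and $\|u\|_2\leq \|f\|_2$,
\[ \lambda_k\|(1-\Pi_k)\psi u\|_2^2\leq \|(1-\Pi_k)\psi f\|_2\|(1-\Pi_k)\psi u\|_2+\|\Pi_k\psi f\|_2\|\Pi_k\psi u\|_2+4\|f\|_2^2\leq 6\|f\|_2^2,\]
which gives \eqref{final1-Pik} since $\sqrt{6}\leq 3$.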
Since $[\partial_c^2,\hat{\chi}]=\hat{\chi}''+2\hat{\chi}'\partial_c$ and $\hat{\chi}'=0$ on
${\rm supp}(1-\chi)$, we have
$(\mathbf{K}_+^1)^2=0$ and $\|\mathbf{K}_+^1\|_{\mathcal{L}(L^2)}\leq C$ (using \eqref{boundanddefR+}) for some uniform $C$ depending only on $\hat{\chi}$.
By combining with \eqref{final1-Pik}, we deduce that
\[ \begin{gathered}
\|(\mathbf{K}_+^1+ (1-\Pi_k)\mathbf{K}_+^2(\alpha))\|_{\mathcal{L}(L^2)}\leq C(1+|p|^2\lambda_k^{-1/2}), \\
\|(\mathbf{K}_+^1+ (1-\Pi_k)\mathbf{K}_+^2(\alpha))^2\|_{\mathcal{L}(L^2)}\leq C ((1+|p|^2)\lambda_k^{-1/2}+(1+|p|^2)^2\lambda_k^{-1})
\end{gathered}\]
for some uniform $C$ depending only on $\hat{\chi}$.
Next we consider the operator $\Pi_k\mathbf{K}_+^2(\alpha)$.
Recall that, by \eqref{boundanddefR+},
\begin{equation}\label{weightedPik+der}
\partial_c\Pi_k\mathbf{R}_+=\Pi_k \partial_c{\bf R}_+: L^2(\mathbb{R}\times \Omega_\mathbb{T})\to L^2(\mathbb{R}\times \Omega_\mathbb{T})
\end{equation}
is bounded. Now we claim that the injection
\begin{equation}\label{injectioncpt}
F_k:=\{u\in e^{-\beta c}L^2([-1,\infty); E_k)\,|\,
\partial_cu\in L^2([-1,\infty); E_k)\}\to e^{-\frac{\beta c}{2}}L^2([-1,\infty)\times \Omega_\mathbb{T})
\end{equation}
is compact if we put the norm $\|u\|_{F_k}:=\|e^{\beta c}u\|_{2}+\|\partial_c u\|_{2}$
on $F_k$. Indeed, consider the operator $\eta_T{\rm Id}:F_k\to e^{-\frac{\beta c}{2}}L^2([-1,\infty); E_k)$ where $\eta_T(c)=\eta(c/T)$ if $\eta\in C_0^\infty((-2,2))$ is equal to $1$ on $(-1,1)$ and $0\leq\eta\leq 1$. Since $E_k$ has finite dimension, this is a compact operator by the compact embedding $H^1([-1,T);E_k)\to e^{-\frac{\beta c}{2}}L^2([-1,\infty);E_k)$, and as $T\to \infty$ we have for $u\in F_k$
\[ \| e^{\frac{\beta}{2}c}(\eta_Tu-u)\|^2_{2}\leq e^{-\beta T}\int_{-1}^\infty (1-\eta_T)^2e^{2\beta c}\|u\|_{L^2(\Omega_\mathbb{T})}^2 dc\leq e^{-\beta T}\|u\|^2_{F_k}
\]
thus the injection \eqref{injectioncpt} is a limit of compact operators for the operator norm topology, therefore is compact.
By \eqref{weightedPik} and \eqref{weightedPik+der}, the operator $\Pi_k\mathbf{R}_+:
L^2(\mathbb{R}\times \Omega_\mathbb{T})\to e^{-\frac{\beta}{2}c_+}L^2([-1,\infty)\times \Omega_\mathbb{T})$ is compact.
We get that
\[
\Pi_k \mathbf{K}_+^2(\alpha): L^2(\mathbb{R} \times \Omega_\mathbb{T})\to L^2(\mathbb{R}; E_k)
\]
is compact. This completes the proof, upon setting $\mathbf{K}_{+,k}(\alpha):=\Pi_k\mathbf{K}_+^2(\alpha)$ and
$\mathbf{L}_{+,k}(\alpha):=\mathbf{K}_+^1+ (1-\Pi_k)\mathbf{K}_+^2(\alpha)$.
The holomorphy in $\alpha\in \mathbb{C}$ is clear since $\mathbf{K}_+^2(\alpha)$ is polynomial in $\alpha$.
\end{proof}
\begin{lemma}\label{decroissance_c_grand}
For $\alpha=Q+ip$ with $p\in i\mathbb{R}^*$, there is $\beta>0$ such that the resolvent $\Pi_k\mathbf{R}(\alpha): L^2(\mathbb{R} \times \Omega_\mathbb{T})\to e^{-\beta c_+}L^2(\mathbb{R} \times \Omega_\mathbb{T})$ is bounded.
\end{lemma}
\begin{proof} We estimate the norm in $ e^{-\beta c_+}L^2(\mathbb{R}_+ \times \Omega_\mathbb{T})$, since the corresponding estimate on $ e^{-\beta c_+}L^2(\mathbb{R}^- \times \Omega_\mathbb{T})$ follows from Proposition \ref{l2alphamu}.
Recall \eqref{resolvvspropagator} with the representation for the resolvent $
\mathbf{R}(\alpha)=\int_0^{\infty} e^{-t\mathbf{H}+t2 \Delta_{\alpha}}\dd t
$, valid for the range of $\alpha$ we consider. A key observation to establish our lemma is the following estimate on $ e^{-t\mathbf{H}}$, based on the Feynman-Kac formula \eqref{FKgeneral}. Here we write for simplicity $V_t:=\int_{\mathbb{D}_t^c} |z|^{-\gamma Q}M_\gamma (\dd z)$. By using the Markov property, we get that for $f\in L^2(\mathbb{R} \times \Omega_\mathbb{T})$ and $a\in (0,1)$,
\begin{align}
|e^{-t\mathbf{H}}f|=&| e^{-\frac{Q^2}{2}t}\mathds{E}_{\varphi}\big[ f(c+B_t,\varphi_t)e^{-\mu e^{\gamma c}V_t}\big]| \nonumber\\
\leq & e^{-\frac{Q^2}{2}t} \mathds{E}_{\varphi}\big[ |f(c+B_t,\varphi_t)|e^{-\mu e^{\gamma c}V_{(1-a)t}}\big]\nonumber \\
=& e^{-\frac{Q^2}{2}(1-a)t} \mathds{E}_{\varphi}\big[ e^{-at \mathbf{H}_0}|f|(c+B_{(1-a)t},\varphi_{(1-a)t}) e^{-\mu e^{\gamma c}V_{(1-a)t}}\big]. \label{markineg}
\end{align}
Now we want to exploit this relation in the integral representation of the resolvent.
Take $q\in[1,2)$. By using in turn the fact that $\Pi_k$ has finite dimensional range (in particular all norms are equivalent on its range) and the continuity of the map $\Pi_k: L^q(\Omega_\mathbb{T})\to L^q(\Omega_\mathbb{T})$ (see \cite[Th 5.14]{SJ}), we obtain, for some constant $C=C(t_0,\alpha,k,q,\mu,\gamma)$ which may change from line to line below,
\begin{align*}
\int_0^\infty &e^{2\beta c}\mathds{E}\Big[\Big(\Pi_k\int_0^{\infty} e^{-t\mathbf{H}+t2 \Delta_{\alpha}}f\dd t\Big)^2\Big]\,\dd c\\
\leq & C \int_0^\infty e^{2\beta c}\mathds{E}\Big[\big(\Pi_k\int_0^{\infty} e^{-t\mathbf{H}+t2 \Delta_{\alpha}}f\dd t\big)^q\Big] ^{2/q}\,\dd c\\
\leq & C \int_0^\infty e^{2\beta c}\mathds{E}\Big[\big| \int_0^{\infty} e^{-t\mathbf{H}+t2 \Delta_{\alpha}}f\dd t\big|^q\Big] ^{2/q}\,\dd c.
\end{align*}
We are going to estimate this quantity by splitting the $\dd t$-integral above in two parts, $\int_0^{t_0}\dots$ and $\int_{t_0}^{\infty}\dots$ for some $t_0>0$, which we call respectively $A_1$ and $A_2$.
Let us first focus on the first part. Using the relation \eqref{markineg} with $a=1/2$ produces the bound
\begin{align*}
A_1\leq C \int_0^\infty e^{2 \beta c}\mathds{E}\Big[ \Big|\int_0^{t_0} \mathds{E}_{\varphi}\big[ e^{-\frac{t}{2}\mathbf{H}_0}|f|(c+B_{t/2},\varphi_{t/2}) e^{- \mu e^{\gamma c}V_{t/2}} \big] \,\dd t\Big|^q\Big]^{2/q} \dd c.
\end{align*}
Then Jensen gives the estimate
\begin{align*}
A_1\leq C \int_0^\infty e^{2 \beta c} \int_0^{t_0} \mathds{E}\Big[ \big( e^{-\frac{t}{2}\mathbf{H}_0}|f|(c+B_{t/2},\varphi_{t/2}) \big)^q e^{- q\mu e^{\gamma c}V_{t/2}} \Big]^{2/q} \,\dd t\dd c.
\end{align*}
Now we use H\"older's inequality with conjugate exponents $2/q, 2/ (2-q)$ to estimate the above quantity by
$$
A_1\leq C \int_0^\infty\int_0^{t_0} e^{2\beta c} \mathds{E}\Big[ \big(e^{-\frac{t}{2}\mathbf{H}_0}|f|(c+B_{t/2},\varphi_{t/2})\big)^2\Big] \Big(\mathds{E}\big[e^{- \frac{2q }{2-q}\mu e^{\gamma c}V_{t/2}} \big]\Big)^{\frac{2-q}{q}}\,\dd t\dd c.
$$
Using the elementary inequality $x^\theta e^{-x}\leq C$ for $\theta>0$, we deduce
$$
A_1\leq C \int_0^\infty\int_0^{t_0} e^{(2\beta-\theta \frac{2-q}{q}\gamma) c} \mathds{E}\Big[ \big(e^{-\frac{t}{2}\mathbf{H}_0}|f|(c+B_{t/2},\varphi_{t/2})\big)^2\Big] \Big(\mathds{E}\big[ V_{t/2}^{-\theta} \big]\Big)^{\frac{2-q}{q}}\,\dd t\dd c.
$$
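For completeness, the elementary inequality invoked above can be quantified: for each $\theta>0$ one has $\sup_{x>0}x^\theta e^{-x}=(\theta/e)^\theta$, and applying it with $x=\frac{2q}{2-q}\mu e^{\gamma c}V_{t/2}$ gives
\[ \Big(\mathds{E}\big[e^{- \frac{2q }{2-q}\mu e^{\gamma c}V_{t/2}} \big]\Big)^{\frac{2-q}{q}}\leq C\, e^{-\theta \frac{2-q}{q}\gamma c}\,\Big(\mathds{E}\big[ V_{t/2}^{-\theta} \big]\Big)^{\frac{2-q}{q}},\]
which is exactly how the weight $e^{(2\beta-\theta \frac{2-q}{q}\gamma) c}$ above arises.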
It remains to estimate the quantity $\mathds{E}\big[ V_{t/2}^{-\theta} \big]$. Notice that $V_{t/2}$ is a GMC over an annulus of radii $1$ and $e^{-t/2}$. Hence the quantity to be investigated is less than the same expression with $V_{t/2}$ replaced by a GMC over a ball of diameter $t/2$ contained in this annulus. Standard computations of GMC multifractal spectrum \cite[Th 2.14]{review} (also valid for negative moments) then yields that
\begin{equation}
\mathds{E}\Big( V_{t/2}^{-\theta} \Big)\leq C t^{\psi(-\theta)}
\end{equation}
where $\psi(u):=(2+\tfrac{\gamma^2}{2})u-\tfrac{\gamma^2}{2}u^2 $ is the multifractal spectrum. Since $\psi$ is continuous and $\psi(0)=0$, we can find $\theta>0$ such that $\psi(-\theta)>-1$. Then, for $2\beta=\frac{2-q}{q}\theta\gamma$, we deduce that
\begin{align*}
A_1
\leq & C \int_0^\infty\int_0^{t_0} \mathds{E}\Big[ \big( e^{-\frac{t}{2}\mathbf{H}_0}|f|(c+B_{t/2},\varphi_{t/2})\big)^2\Big] t^{\psi(-\theta)}\,\dd t\dd c\\
\leq & C \int_0^{t_0} \|e^{-\frac{t}{2}\mathbf{H}_0}|f|\|_2^2 t^{\psi(-\theta)}\,\dd t\\
\leq & C\|f\|_2^2 \int_0^{t_0} t^{\psi(-\theta)}\,\dd t,
\end{align*}
which proves our claim for $A_1$, namely the part corresponding to $\int_0^{t_0}$.
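For concreteness, here is one admissible choice of $\theta$ in the argument above, using only the stated formula for $\psi$: since $\psi(-\theta)=-(2+\tfrac{\gamma^2}{2})\theta-\tfrac{\gamma^2}{2}\theta^2$, for $0<\theta\leq 1$ we have
\[ -\psi(-\theta)=(2+\tfrac{\gamma^2}{2})\theta+\tfrac{\gamma^2}{2}\theta^2\leq (2+\gamma^2)\theta,\]
so that any $\theta\in(0,\tfrac{1}{2+\gamma^2})$ satisfies $\psi(-\theta)>-1$, making $\int_0^{t_0}t^{\psi(-\theta)}\,\dd t$ finite.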
Concerning $A_2$, we start as above
\begin{align*}
A_2:=&\int_0^\infty e^{2\beta c}\mathds{E}\Big[\Big(\Pi_k\int_{t_0}^{\infty} e^{-t\mathbf{H}+t2 \Delta_{\alpha}}f\dd t\Big)^2\Big]\,\dd c\\
\leq & C \int_0^\infty e^{2\beta c}\Big(\mathds{E}\Big[\big( \int_{t_0}^{\infty} e^{-t\mathbf{H}+t2 \Delta_{\alpha}}f\dd t\big)^q\Big]\Big)^{2/q}\,\dd c
\end{align*}
for $q\in (1,2)$. For the values of $\alpha$ we consider, we have $2\Delta_\alpha\leq \frac{Q^2}{2}-\frac{|p|^2}{2}$ so that, using Jensen,
\begin{align*}
A_2\leq C \int_0^\infty e^{2\beta c}\Big(\mathds{E}\big[ \int_{t_0}^{\infty}e^{-\frac{|p|^2}{2}t} \big(\mathds{E}_{\varphi}\big[ |f(c+B_t,\varphi_t)|e^{-\mu e^{\gamma c}V_{t}}\big] \big)^q \dd t \big]\Big)^{2/q}\,\dd c
\end{align*}
Let us now fix $a\in (0,1)$ small enough such that $aQ^2<|p|^2/2$. Then, using \eqref{markineg}, Jensen and the inequality $x^\theta e^{-x}\leq C$ for $\theta>0$, we deduce
\begin{align*}
A_2\leq & C \int_0^\infty e^{2(\beta-\gamma\theta/q) c}\Big(\mathds{E}\Big[ \int_{t_0}^{\infty}e^{-\frac{|p|^2}{4}t} \mathds{E}_{\varphi}\big[ \big(e^{-at\mathbf{H}_0}|f|(c+B_{(1-a)t},\varphi_{(1-a)t} )\big)^q V_{(1-a)t}^{- \theta}\big] \dd t\Big] \Big)^{2/q}\,\dd c.
\end{align*}
Now we can proceed similarly as above by using the fact that, for $a<1$ and all $\theta>0$, $\sup_{t>t_0}\mathds{E}[V_{(1-a)t}^{-\theta }]<\infty$ (this quantity is decreasing in $t$ and GMC has negative moments of all orders) to get
\begin{align*}
A_2\leq & C \int_0^\infty e^{2(\beta-\gamma\theta/q) c}\Big(\int_{t_0}^{\infty}e^{-\frac{|p|^2}{4}t} \mathds{E}\big[ \big( e^{-at\mathbf{H}_0}|f|(c+B_{(1-a)t},\varphi_{(1-a)t} )\big)^2\big]^{q/2} \mathds{E}\big[ V_{(1-a)t}^{-\frac{2\theta}{2-q}}\big] ^{\frac{2-q}{2}} \dd t \Big)^{2/q}\,\dd c\\
\leq & C \int_0^\infty e^{2(\beta-\gamma\theta/q) c}\Big(\int_{t_0}^{\infty}e^{-\frac{|p|^2}{4}t} \mathds{E}\Big[ \big( e^{-at\mathbf{H}_0}|f|(c+B_{(1-a)t},\varphi_{(1-a)t} )\big)^2\Big]^{q/2} \dd t \Big)^{2/q}\,\dd c.
\end{align*}
For $\beta=\theta\gamma/q$ and using Jensen, the above quantity is less than
$$ C \int_{t_0}^{\infty}e^{-\frac{|p|^2}{4}t} \|e^{-at\mathbf{H}_0}|f|\|_2^2 \dd t \leq C\|f\|_2^2\int_{t_0}^{\infty}e^{-\frac{|p|^2}{4}t}\,\dd t.$$
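Let us make this last use of Jensen explicit (a sketch, with the shorthand $g_c(t):=\mathds{E}\big[ \big( e^{-at\mathbf{H}_0}|f|(c+B_{(1-a)t},\varphi_{(1-a)t} )\big)^2\big]$ introduced only for this computation): applying Jensen's inequality for the convex function $x\mapsto x^{2/q}$ with respect to the probability measure $Z^{-1}e^{-\frac{|p|^2}{4}t}\operatorname{1\hskip-2.75pt\relax l}_{t>t_0}\dd t$, where $Z=\int_{t_0}^\infty e^{-\frac{|p|^2}{4}t}\dd t$, yields for each fixed $c$
\[ \Big(\int_{t_0}^{\infty}e^{-\frac{|p|^2}{4}t}\, g_c(t)^{q/2}\,\dd t\Big)^{2/q}\leq Z^{\frac{2}{q}-1}\int_{t_0}^{\infty}e^{-\frac{|p|^2}{4}t}\, g_c(t)\,\dd t,\]
and integrating in $c$ (the choice $\beta=\theta\gamma/q$ cancels the exponential weight) gives the displayed bound, exactly as in the estimate of $A_1$, since $\int_0^{\infty} g_c(t)\,\dd c\leq \|e^{-at\mathbf{H}_0}|f|\,\|_2^2\leq \|f\|_2^2$.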
Hence the result.
\end{proof}
\begin{remark} As above, the operators ${\bf R}_+$, $\mathbf{K}_{+,k}(\alpha)$ and $\mathbf{L}_{+,k}(\alpha)$ lift as holomorphic families of operators to $\Sigma$.\end{remark}
\noindent \textbf{3) Small $\mathbf{P}$ eigenmodes in the region $c\leq 0$, where there is scattering.}
We will view the operator ${\bf H}$ as a perturbation of the free Hamiltonian
$\mathbf{H}_{0}:= -\tfrac{1}{2}\partial_c^2 +\tfrac{Q^2}{2}+\mathbf{P}$ on $L^2(\mathbb{R}^-\times \Omega_\mathbb{T})$ with Dirichlet condition at $c=0$. We show (recall that $\pi:\Sigma\to \mathbb{C}$ is the covering map)
\begin{lemma}\label{modelres}
1) Fix $k$ and $0<\beta<\gamma/2$. The operators
\begin{equation}
\mathbf{R}_k(\alpha):= (\mathbf{H}_{0}-\tfrac{Q^2+p^2}{2})^{-1}\Pi_k: e^{\beta\rho}L^2(\mathbb{R}^{-}\times \Omega_\mathbb{T})\to e^{-\beta \rho}L^2(\mathbb{R}^-;E_k)
\end{equation}
defined for ${\rm Im}(p)>0$ can be holomorphically continued to the region
\begin{equation}\label{region_holo_R_k}\{\alpha=Q+ip \in \Sigma\,|\, \forall j\leq k, {\rm Im}\sqrt{p^2-2\lambda_j}>-\beta\}.
\end{equation}
This continuation, still denoted $\mathbf{R}_k(\alpha):e^{\beta \rho}L^2(\mathbb{R}\times\Omega_\mathbb{T})\to e^{-\beta \rho}L^2(\mathbb{R}^-;E_k)$, satisfies
\[\tilde{\chi}\mathbf{R}_k(\alpha)\chi:e^{\beta \rho}L^2(\mathbb{R}\times\Omega_\mathbb{T})\to e^{-\beta \rho}(L^2(\mathbb{R}^-;E_k)\cap \mathcal{D}(\mathcal{Q})),\]
\[ (\mathbf{H}-\tfrac{Q^2+p^2}{2})\tilde{\chi}\mathbf{R}_k(\alpha)\chi =\Pi_k \chi+\mathbf{K}_{k,1}(\alpha)+{\bf K}_{k,2}(\alpha)\]
where $\mathbf{K}_{k,1}(\alpha)$, $\mathbf{K}_{k,2}(\alpha)$ are such that for ${\rm Im}\sqrt{p^2-2\lambda_j}>-\min(\beta,\gamma/2-\beta)$
\begin{align*}
& \mathbf{K}_{k,1}(\alpha):e^{\beta \rho}L^2(\mathbb{R}\times\Omega_\mathbb{T})\to e^{\beta \rho}L^2(\mathbb{R}\times \Omega_\mathbb{T})
\\
& {\bf K}_{k,2}(\alpha):e^{\beta \rho}L^2(\mathbb{R}\times \Omega_\mathbb{T})\to e^{\beta \rho}\mathcal{D}'(\mathcal{Q})
\end{align*}
are holomorphic families of compact operators in \eqref{region_holo_R_k}, and we have $\mathbf{K}_{k,i}(\alpha)(1-\Pi_k)=0$ for $i=1,2$, $(1-\Pi_k)\mathbf{K}_{k,1}(\alpha)=0$ and $\Pi_k\mathbf{K}_{k,2}(\alpha)=0$.\\
2) If $f\in e^{\beta \rho}L^2$, then there is $C_k>0$ depending on $k$, some $a_j(\alpha,f)$ and
$G(\alpha,f)\in H^1(\mathbb{R};E_k)$ depending linearly on $f$ and holomorphic in $\{\alpha=Q+ip \in \Sigma\,|\, \forall j\leq k, {\rm Im}\sqrt{p^2-2\lambda_j}>-\min(\beta,\gamma/2-\beta)\}$ such that
in the region $c\leq 0$
\begin{equation}\label{asympR_-}
(\mathbf{R}_k(\alpha)f)=\sum_{\lambda_j\leq \lambda_k}a_j(\alpha,f)
e^{-ic\sqrt{p^2-2\lambda_j}}+G(\alpha,f),
\end{equation}
\[ \|G(\alpha,f)(c)\|_{L^2(\Omega_\mathbb{T})}+\|\partial_cG(\alpha,f)(c)\|_{L^2(\Omega_\mathbb{T})}\leq C_k e^{\beta \rho(c)}
\|e^{-\beta \rho}\Pi_kf\|_{2}.\]
3) For each $\beta\in \mathbb{R}$, the operator $\mathbf{R}_k(\alpha)$ extends as a bounded analytic family
\[ \mathbf{R}_k(\alpha): e^{-\beta \rho}L^2(\mathbb{R}^-\times \Omega_\mathbb{T})\to e^{-\beta \rho}(L^2(\mathbb{R}^-;E_k)\cap \mathcal{D}(\mathcal{Q}))\]
in the region ${\rm Im}(p)>|\beta|$ and $\mathbf{K}_{k,1}(\alpha): e^{-\beta \rho}L^2(\mathbb{R}\times \Omega_\mathbb{T})\to e^{-\beta \rho}L^2(\mathbb{R}\times\Omega_\mathbb{T})$, $\mathbf{K}_{k,2}(\alpha): e^{-\beta \rho}L^2(\mathbb{R}\times \Omega_\mathbb{T})\to e^{-\beta \rho}\mathcal{D}'(\mathcal{Q})$ are compact analytic families in that same region.
\end{lemma}
\begin{proof}
We first consider $\mathbf{H}_{0}$ on $(-\infty,0]$ with Dirichlet condition at $c=0$. Using the diagonalisation of $\mathbf{P}$ on $E_k$, we can compute the resolvent
$(\mathbf{H}_{0}-\tfrac{Q^2+p^2}{2})^{-1}$ on $E_k$ by standard ODE methods (Sturm-Liouville theory): for ${\rm Im}(p)>0$, this is the diagonal operator given for $j\leq k$ and $f\in L^2(\mathbb{R}^-)$ and $\phi_j\in \ker (\mathbf{P}-\lambda_j)$
\[\begin{split}
(\mathbf{H}_{0}-\tfrac{Q^2+p^2}{2})^{-1}f(c)\phi_j =\frac{2}{\sqrt{p^2-2\lambda_j}}& \phi_j\Big(\int_{-\infty}^c \sin(c\sqrt{p^2-2\lambda_j})e^{-ic'\sqrt{p^2-2\lambda_j}}f(c')dc'\\
& +\int_{c}^0 e^{-ic\sqrt{p^2-2\lambda_j}}\sin(c'\sqrt{p^2-2\lambda_j})f(c')dc'\Big)
\end{split}\]
where our convention is that $\sqrt{z}$ is defined with the cut on $\mathbb{R}^+$, so that $\sqrt{p^2}=p$
if ${\rm Im}(p)>0$. For $j=0$, that is $\phi_0=1$, for each $\beta>0$ the resolvent restricted to $E_0$ admits an analytic continuation from ${\rm Im}(p)>0$ to ${\rm Im}(p)>-\beta$, as a map
\[ (\mathbf{H}_{0}-\tfrac{Q^2+p^2}{2})^{-1}\Pi_0: e^{-\beta |c|}L^2(\mathbb{R}^-\times \Omega_\mathbb{T})\to e^{\beta |c|}L^2(\mathbb{R}^-;E_0).\]
This is easy to see by using Schur's lemma and the analyticity in $p$ for the Schwartz kernel
\[ \kappa_0(c,c'):= \operatorname{1\hskip-2.75pt\relax l}_{\{c\geq c'\}}e^{-\beta (|c|+|c'|)} \sin(cp)e^{-ic'p} +\operatorname{1\hskip-2.75pt\relax l}_{\{c'\geq c\}}e^{-\beta (|c'|+|c|)} \sin(c'p)e^{-icp}\]
of the operator $\Pi_0 e^{-\beta |c|}(\mathbf{H}_{0}-\tfrac{Q^2+p^2}{2})^{-1}e^{-\beta |c|}\Pi_0$ that we view as an operator on $L^2(\mathbb{R}^-)$. Moreover, one also directly
obtains that it maps $e^{-\beta |c|}L^2(\mathbb{R}^-\times \Omega_\mathbb{T})\to e^{-\beta |c|}H^2(\mathbb{R}^-;E_0)\cap H_0^1(\mathbb{R}^-;E_0)$.
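For the reader's convenience, here is a sketch of the Schur bound behind this claim. For $c'\leq c\leq 0$ one has $|e^{-ic'p}|=e^{-|c'|{\rm Im}(p)}$ and $|\sin(cp)|\leq \tfrac{1}{2}(e^{|c|{\rm Im}(p)}+e^{-|c|{\rm Im}(p)})$, so that, using $|c|\leq |c'|$ on this region,
\[ |\kappa_0(c,c')|\leq e^{-(\beta-\max(0,-{\rm Im}(p)))(|c|+|c'|)},\]
and the same bound holds on $\{|c|\geq |c'|\}$ by exchanging the roles of $c$ and $c'$. Hence $\sup_{c\leq 0}\int_{-\infty}^0|\kappa_0(c,c')|\,dc'$ (and the symmetric quantity) is finite as soon as ${\rm Im}(p)>-\beta$, locally uniformly in $p$, which gives the claimed boundedness.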
Similarly, the operators
\[ (\mathbf{H}_{0}-\tfrac{Q^2+p^2}{2})^{-1} \phi_j\langle \phi_j,\cdot\rangle:e^{-\beta |c|}L^2(\mathbb{R}^-\times \Omega_\mathbb{T})\to e^{\beta |c|}L^2(\mathbb{R}^-;\mathbb{C} \phi_j)\]
are analytic in $p$, which implies that
\[\mathbf{R}_k(\alpha):= (\mathbf{H}_{0}-\tfrac{Q^2+p^2}{2})^{-1}\Pi_k: e^{-\beta |c|}L^2(\mathbb{R}^-\times \Omega_\mathbb{T})\to e^{\beta |c|}H^2(\mathbb{R}^-;E_k)\cap H_0^1(\mathbb{R}^-;E_k)\]
admits an analytic extension in $p$ to the region $\{p\in Z\, |\, \forall j\leq k, {\rm Im}(\sqrt{p^2-2\lambda_j})>-\beta\}$ of the ramified Riemann surface $Z$. The proof of Lemma \ref{chiPikV} also shows that $\mathcal{Q}_{e^{\gamma c}V}(\widetilde{\chi}\Pi_ku)<\infty$ if $u\in L^2(\mathbb{R}\times \Omega_\mathbb{T})$ and clearly also ${\bf P}^{\frac{1}{2}}\Pi_k\in \mathcal{L}(L^2(\Omega_\mathbb{T}))$, thus we deduce that
\[\tilde{\chi} \mathbf{R}_k(\alpha)\chi:e^{-\beta |c|}L^2(\mathbb{R}^-\times \Omega_\mathbb{T})\to e^{\beta |c|} (L^2(\mathbb{R}^-;E_k)\cap \mathcal{D}(\mathcal{Q}))\]
is bounded.
We have (using $\Pi_k\mathbf{R}_k(\alpha)=\mathbf{R}_k(\alpha)$ and Lemma \ref{chiPikV})
\[\begin{split}
(\mathbf{H}-\tfrac{Q^2+p^2}{2})\tilde{\chi} \mathbf{R}_k(\alpha)\chi= &\, \Pi_k\chi-\tfrac{1}{2} [\partial_c^2,\tilde{\chi}]\Pi_k \mathbf{R}_k(\alpha)\chi+
e^{\gamma c}V\Pi_k\tilde{\chi}\mathbf{R}_k(\alpha)\chi\\
=& \, \Pi_k\chi- \tfrac{1}{2} [\partial_c^2,\tilde{\chi}]\Pi_k \mathbf{R}_k(\alpha)\chi+
\Pi_ke^{\gamma c} V\Pi_k\tilde{\chi}\mathbf{R}_k(\alpha)\chi+\tilde{\chi}(1-\Pi_k)e^{\gamma c}V\Pi_k\tilde{\chi}\mathbf{R}_k(\alpha)\chi\\
= & \, \Pi_k \chi+\mathbf{K}_{k,1}(\alpha)+\mathbf{K}_{k,2}(\alpha)
\end{split}\]
where $\mathbf{K}_{k,2}(\alpha):=\tilde{\chi}(1-\Pi_k)e^{\gamma c}V\Pi_k\tilde{\chi}\mathbf{R}_k(\alpha)\chi$
satisfies $\Pi_k {\bf K}_{k,2}(\alpha)=0$.
The operator $[\partial_c^2,\tilde{\chi}]\Pi_k \mathbf{R}_k(\alpha)\chi$ is compact on $e^{-\beta|c|}L^2(\mathbb{R};L^2(\Omega_\mathbb{T}))$ since $e^{\beta |c|}[\partial_c^2,\tilde{\chi}]$ is a compactly supported first order operator in $c$, $E_k={\rm Im}(\Pi_k)$ is finite dimensional in $L^2(\Omega_\mathbb{T})$
and $\mathbf{R}_k(\alpha): e^{-\beta|c|}L^2(\mathbb{R}^-\times \Omega_\mathbb{T})\to e^{\beta |c|}H^2(\mathbb{R}^-;E_k)$ (this amounts to the compact injection $H^2([-1,0];E_k)\to H^1(\mathbb{R}^-;E_k)$).
The operator $e^{(\beta-\gamma/2)|c|}\Pi_k\tilde{\chi}\mathbf{R}_k(\alpha)\chi e^{-\beta|c|}$ is also compact as maps
$L^2(\mathbb{R} \times \Omega_\mathbb{T})\to L^2(\mathbb{R}; E_k)$ and $L^2\to \mathcal{D}(\mathcal{Q})$ by using the same type of argument as for proving
the compact injection \eqref{injectioncpt}: indeed, one has the pointwise bound on its Schwartz kernel restricted to $\ker (\mathbf{P}-\lambda_j)$
\[ \begin{split}
|\kappa_j(c,c')|\leq & Ce^{(\beta-\gamma/2)|c|-\beta|c'|}(e^{{\rm Im}(\sqrt{p^2-2\lambda_j})(|c|-|c'|)}+e^{-{\rm Im}(\sqrt{p^2-2\lambda_j})(|c|+|c'|)})\operatorname{1\hskip-2.75pt\relax l}_{|c'|\geq |c|}\\
& + Ce^{(\beta-\gamma/2)|c'|-\beta|c|}(e^{{\rm Im}(\sqrt{p^2-2\lambda_j})(|c'|-|c|)}+e^{-{\rm Im}(\sqrt{p^2-2\lambda_j})(|c|+|c'|)})\operatorname{1\hskip-2.75pt\relax l}_{|c|\geq |c'|}.
\end{split}\]
We see that for ${\rm Im}(\sqrt{p^2-2\lambda_j})\geq 0$, if $0<\beta<\gamma/2$, this is bounded by
$C\max(e^{(\beta-\gamma/2)|c|-\beta|c'|},e^{(\beta-\gamma/2)|c'|-\beta|c|})$, and is thus the integral kernel of a compact operator on $L^2(\mathbb{R}^-)$ since it is Hilbert-Schmidt (the kernel being in $L^2(\mathbb{R}\times\mathbb{R})$). If now ${\rm Im}(\sqrt{p^2-2\lambda_j})<0$, the same argument shows that a sufficient condition to be compact is that
\[ {\rm Im}\sqrt{p^2-2\lambda_j}>-\beta \textrm{ and } {\rm Im}\sqrt{p^2-2\lambda_j}>\beta-\gamma/2.\]
Combining with Lemma \ref{chiPikV}, we deduce that if $0<\beta<\gamma/2$, then
\[e^{\gamma c}V\Pi_k \tilde{\chi}{\bf R}_k(\alpha)\chi : e^{\beta \rho}L^2(\mathbb{R}\times\Omega_\mathbb{T})\to e^{\beta \rho}\mathcal{D}'(\mathcal{Q}) \]
\[\textrm{ and } \Pi_k e^{\gamma c}V\Pi_k \tilde{\chi}{\bf R}_k(\alpha)\chi : e^{\beta\rho }L^2(\mathbb{R}\times\Omega_\mathbb{T})\to e^{\beta\rho}L^2(\mathbb{R}^-; E_k)\]
are compact operators. Thus ${\bf K}_{k,1}(\alpha):e^{\beta\rho }L^2(\mathbb{R}\times\Omega_\mathbb{T})\to e^{\beta\rho}L^2(\mathbb{R}^-; E_k)$ is also compact. This proves 1).\\
Now, if $f\in e^{\beta \rho}L^2(\mathbb{R};E_k)$ we have for $c\leq 0$ and writing $f=\sum_{j\leq k}f_j$
with $f_j\in \ker (\mathbf{P}-\lambda_j)$
\begin{equation}\label{asymptRk}
\begin{split}
(\mathbf{R}_k(\alpha)\chi f)(c)=& 2\sum_{j\leq k}
\frac{e^{-ic\sqrt{p^2-2\lambda_j}}}{\sqrt{p^2-2\lambda_j}}\int_{-\infty}^0 \sin{(c'\sqrt{p^2-2\lambda_j})}\chi(c')f_j(c')dc'\\
& + 2\sum_{j\leq k}
\frac{1}{\sqrt{p^2-2\lambda_j}}\int_{-\infty}^c\sin\Big((c-c')\sqrt{p^2-2\lambda_j}\Big)f_j(c')dc'
\end{split}\end{equation}
and the term in the second line, denoted $G(c)$, satisfies for $c<-1$
\[\begin{split}
\|G(c)\|_{L^2(\Omega_\mathbb{T})}\leq & 2\sum_{j\leq k} \int_{-\infty}^c
e^{|{\rm Im}\sqrt{p^2-2\lambda_j}|(c-c')+\beta c'} \|e^{-\beta \rho(c')}f_j(c')\|_{L^2(\Omega_\mathbb{T})}dc'\\
\leq & C_k\|e^{-\beta \rho}f\|_{2} e^{-\beta|c|}.
\end{split}
\]
and the same bounds hold for $\|\partial_cG(c)\|_{L^2(\Omega_\mathbb{T})}$; this completes the proof of 2).
We remark that if ${\rm Im}(p)>0$, then the operator
\[ (\mathbf{H}_{0}-\tfrac{Q^2+p^2}{2})^{-1}\Pi_k: e^{\beta |c|}L^2(\mathbb{R}^-\times \Omega_\mathbb{T})\to e^{\beta |c|}L^2(\mathbb{R}^-;E_k)\]
is bounded and analytic in $p$, provided $0<\beta<\min_{j\leq k}{\rm Im}(\sqrt{p^2-2\lambda_j})={\rm Im}(p)$. Indeed,
this follows again from Schur's lemma applied to the Schwartz kernel
\[ \operatorname{1\hskip-2.75pt\relax l}_{\{c\geq c'\}}e^{-\beta (|c|-|c'|)} \sin(c\sqrt{p^2-2\lambda_j})e^{-ic'\sqrt{p^2-2\lambda_j}} +\operatorname{1\hskip-2.75pt\relax l}_{\{c'\geq c\}}e^{-\beta (|c'|-|c|)} \sin(c'\sqrt{p^2-2\lambda_j})e^{-ic\sqrt{p^2-2\lambda_j}}.\]
Using again that $\Pi_k: E_k\to L^2(\Omega_\mathbb{T})$ is bounded, this implies that
\[\mathbf{R}_k(\alpha): e^{\beta |c|}L^2(\mathbb{R}^-\times \Omega_\mathbb{T})\to e^{\beta |c|}
(L^2(\mathbb{R}^-;E_k)\cap \mathcal{D}(\mathcal{Q}))\]
is an analytic bounded family in $p$ in the region $0<\beta<{\rm Im}(p)$. The same argument works with $0<-\beta<{\rm Im}(p)$ in case $\beta<0$.
The operator $\mathbf{K}_{k,1}(\alpha):e^{-\beta \rho}L^2(\mathbb{R}\times \Omega_\mathbb{T})\to e^{-\beta \rho}L^2(\mathbb{R} \times \Omega_\mathbb{T})$ is compact by the same argument as above since it is Hilbert-Schmidt and
$\mathbf{K}_{k,2}(\alpha):e^{-\beta \rho}L^2(\mathbb{R}\times \Omega_\mathbb{T})\to e^{-\beta \rho}\mathcal{D}'(\mathcal{Q})$ is compact.
\end{proof}
\noindent \textbf{4) Proof of Proposition \ref{extensionresolvent}}.
For $\beta\in \mathbb{R}$, define the Hilbert space for fixed $k$
\[\mathcal{H}_{k,\beta}:=e^{\beta \rho}L^2(\mathbb{R};E_k)\oplus L^2(\mathbb{R};E_k^\perp)\]
with scalar product
\[ \langle f,f'\rangle_{\mathcal{H}_{k,\beta}}:= \int_{\mathbb{R}}e^{-2\beta \rho(c)}\langle \Pi_k f,\Pi_kf'\rangle_{L^2(\Omega_\mathbb{T})}dc +\langle(1-\Pi_k)f,(1-\Pi_k)f' \rangle_{L^2(\mathbb{R}\times\Omega_\mathbb{T})}.\]
We now fix $\beta$ and $\beta'$ as in the statement of Proposition \ref{extensionresolvent}.
We now use the operators of Lemma \ref{LemmaRk}, Lemma \ref{regimec>-1} and Lemma \ref{modelres}: let $\chi,\tilde{\chi},\hat{\chi}$ be the cutoff functions of these Lemmas and let $\check{\chi}\in C^\infty(\mathbb{R})$ be equal to $1$ on ${\rm supp}(\tilde{\chi})$ and supported in $\mathbb{R}^-$.
We define
\[ \widetilde{\mathbf{R}}(\alpha):=\tilde{\chi}\mathbf{R}_k^\perp(\pi(\alpha))\chi+(1-\hat{\chi})\mathbf{R}_+(1-\chi)+\tilde{\chi}\mathbf{R}_k(\alpha)\chi-\check{\chi}{\bf R}_k^\perp(\alpha){\bf K}_{k,2}(\alpha)\]
which in $\{\alpha=Q+ip \in \Sigma \,|\, \forall j\leq k, {\rm Im}\sqrt{p^2-2\lambda_j}>-\beta\}$
is bounded and holomorphic (in $\alpha$) as a map $\widetilde{{\bf R}}(\alpha): \mathcal{H}_{k,\beta}\to \mathcal{H}_{k,-\beta}\cap e^{-\beta \rho}\mathcal{D}(\mathcal{Q})$. It moreover satisfies the identity
\[ \begin{split}
(\mathbf{H}-2\Delta_{\pi(\alpha)})\widetilde{\mathbf{R}}(\alpha)=& 1+\mathbf{\mathbf{L}}_k^\perp(\pi(\alpha))+\mathbf{K}_k^\perp(\pi(\alpha))+\mathbf{K}_{+,k}(\pi(\alpha))+\mathbf{L}_{+,k}(\pi(\alpha))+\mathbf{K}_{k,1}(\alpha)\\
& -(\check{\mathbf{L}}_k^\perp(\pi(\alpha))+\check{\mathbf{K}}_k^\perp(\pi(\alpha))){\bf K}_{k,2}(\alpha)
\end{split}\]
where $\check{\mathbf{L}}_k^\perp(\pi(\alpha))$ and $\check{\mathbf{K}}_k^\perp(\pi(\alpha))$ are the operators of Lemma \ref{LemmaRk} with $\tilde{\chi}$ (resp. $\chi$) replaced by $\check{\chi}$ (resp. $\tilde{\chi}$).
Let us define
\begin{equation}\label{def:tildeKk}
\tilde{\mathbf{K}}_k(\alpha):=\mathbf{K}_{k,1}(\alpha)+\mathbf{K}_{+,k}(\pi(\alpha))+\mathbf{K}_k^\perp(\pi(\alpha))-(\check{\mathbf{L}}_k^\perp(\pi(\alpha))+\check{\mathbf{K}}_k^\perp(\pi(\alpha))){\bf K}_{k,2}(\alpha).
\end{equation}
By Lemma \ref{LemmaRk}, Lemma \ref{regimec>-1} and Lemma \ref{modelres}, $\tilde{\mathbf{K}}_k(\alpha):\mathcal{H}_{k,\beta}\to \mathcal{H}_{k,\beta}$ is compact and holomorphic in $\alpha$ (recall that $\mathbf{K}_{k,j}(\alpha)(1-\Pi_k)=0$). We claim that for each $\psi\in C^\infty(\mathbb{R})\cap L^\infty(\mathbb{R})$ satisfying $\psi'\in L^\infty$ and ${\rm supp}(\psi)\subset (-\infty,A)$ for some $A\in \mathbb{R}$,
\begin{equation}\label{boundonD'(Q)prop3.10}
\tilde{\mathbf{K}}_k(\alpha)(1-\Pi_k):\mathcal{D}'(\mathcal{Q})\to \mathcal{H}_{k,\beta} \quad \textrm{ and }\widetilde{{\bf R}}(\alpha)(1-\Pi_k)\psi:\mathcal{D}'(\mathcal{Q})\to e^{-\beta \rho}\mathcal{D}(\mathcal{Q})
\end{equation}
are bounded.
Indeed, Lemma \ref{LemmaRk} and Lemma \ref{regimec>-1} show that $\mathbf{K}_{+,k}(\pi(\alpha))+\mathbf{K}_k^\perp(\pi(\alpha))$
is bounded as a map $\mathcal{D}'(\mathcal{Q})\to \mathcal{H}_{k,\beta}$ and that $\tilde{\chi}\mathbf{R}_k^\perp(\pi(\alpha))\chi+(1-\hat{\chi})\mathbf{R}_+(1-\chi)$ is bounded as a map $\mathcal{D}'(\mathcal{Q})\to \mathcal{D}(\mathcal{Q})$, while Lemma \ref{chiPikV} shows that $(1-\Pi_k)\psi:\mathcal{D}'(\mathcal{Q})\to \mathcal{D}'(\mathcal{Q})$ is bounded.
Now if $|\pi(\alpha)-Q|^2\leq \lambda_k^{1/4}$ and if $k$ is large enough, the operator $\widetilde{\mathbf{L}}_k(\alpha):= \mathbf{L}_k^\perp(\pi(\alpha))+\mathbf{L}_{+,k}(\alpha)$ is bounded as map
\begin{align}\label{def:tildeLk}
&\widetilde{\mathbf{L}}_k(\alpha):\mathcal{H}_{k,\beta}\to L^2(\mathbb{R};E_k^\perp)
& \widetilde{\mathbf{L}}_k(\alpha): \mathcal{D}'(\mathcal{Q})\to \mathcal{H}_{k,\beta}
\end{align}
with holomorphic dependence on $\alpha$, and with bound (recall \eqref{boundonK1} and \eqref{boundbis})
\[ \|\tilde{\mathbf{L}}_k(\alpha)^2\|_{\mathcal{H}_{k,\beta}\to L^2(\mathbb{R};E_k^\perp)}<1/2.\]
In particular, $(1+\tilde{\mathbf{L}}_k(\alpha))(1-\tilde{\mathbf{L}}_k(\alpha))=1-\tilde{\mathbf{L}}_k(\alpha)^2$ is invertible on $\mathcal{H}_{k,\beta}$ with holomorphic inverse given by the Neumann series $\sum_{j=0}^\infty \tilde{\mathbf{L}}_k(\alpha)^{2j}$;
we write $(1+\mathbf{T}_k(\alpha)):=(1-\tilde{\mathbf{L}}_k(\alpha))(1-\tilde{\mathbf{L}}_k(\alpha)^2)^{-1}$, with $\mathbf{T}_k(\alpha)$ mapping boundedly $\mathcal{H}_{k,\beta}\to \mathcal{H}_{k,\beta}$. Moreover we have
\[ (\mathbf{H}-2\Delta_{\pi(\alpha)})\widetilde{\mathbf{R}}(\alpha)(1+\mathbf{T}_k(\alpha))=1+\tilde{\mathbf{K}}_k(\alpha)(1+\mathbf{T}_k(\alpha))\]
and the remainder $\hat{\mathbf{K}}(\alpha):=\tilde{\mathbf{K}}_k(\alpha)(1+\mathbf{T}_k(\alpha))$ is now compact on $\mathcal{H}_{k,\beta}$,
and $1+\hat{\mathbf{K}}(\alpha)$ is thus Fredholm of index $0$.
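For the reader's convenience, here is the one-line algebra behind the identity displayed above; it only uses the definition of $\mathbf{T}_k(\alpha)$ and the relation $(\mathbf{H}-2\Delta_{\pi(\alpha)})\widetilde{\mathbf{R}}(\alpha)=1+\tilde{\mathbf{K}}_k(\alpha)+\tilde{\mathbf{L}}_k(\alpha)$ obtained above:
\[ (1+\tilde{\mathbf{L}}_k(\alpha))(1+\mathbf{T}_k(\alpha))=(1+\tilde{\mathbf{L}}_k(\alpha))(1-\tilde{\mathbf{L}}_k(\alpha))(1-\tilde{\mathbf{L}}_k(\alpha)^2)^{-1}=1,\]
hence
\[ (\mathbf{H}-2\Delta_{\pi(\alpha)})\widetilde{\mathbf{R}}(\alpha)(1+\mathbf{T}_k(\alpha))=(1+\tilde{\mathbf{K}}_k(\alpha)+\tilde{\mathbf{L}}_k(\alpha))(1+\mathbf{T}_k(\alpha))=1+\tilde{\mathbf{K}}_k(\alpha)(1+\mathbf{T}_k(\alpha)).\]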
Let $p_0=iq$ for some $q\gg \beta$; the operator $\mathbf{H}-\tfrac{Q^2}{2}$ being
self-adjoint on its domain $\mathcal{D}(\mathbf{H})$ and non-negative, $\mathbf{H}-\tfrac{Q^2+p_0^2}{2}$ is invertible with inverse denoted $\mathbf{R}(\alpha_0)$ if $\alpha_0=Q+ip_0=Q-q$.
Now, let $(\psi_j)_{j\leq J}\subset \mathcal{H}_{k,\beta}$ be an orthonormal basis of
$\ker (1+\hat{\mathbf{K}}(\alpha_0)^*)$, and
$(\varphi_j)_{j\leq J}\subset \mathcal{H}_{k,\beta}$ an orthonormal basis of $\ker (1+\hat{\mathbf{K}}(\alpha_0))$.
For each $j$, there is $w_j\in \mathcal{D}(\mathbf{H})$ such that $(\mathbf{H}-\tfrac{Q^2+p_0^2}{2})w_j=\psi_j$.
If $\theta\in C^\infty(\mathbb{R})$ equal $1$ in $c\in (-\infty,-1)$ and is supported in $c\in \mathbb{R}^-$, we have in $\mathcal{D}'(\mathcal{Q})$
\begin{equation}\label{paris}
(\mathbf{H}_0-\tfrac{Q^2+p_0^2}{2})\theta w_j=\theta \psi_j-\theta e^{\gamma c}Vw_j-\tfrac{1}{2}[\partial_c^2,\theta]w_j
\end{equation}
and this implies by projecting this relation on $E_k$ with $\Pi_k$ that, setting $\psi_{j,k}=\Pi_k\psi_j$ and $\psi_{j,k}^\perp=(1-\Pi_k)\psi_j$, similarly $w_{j,k}=\Pi_kw_j$ and $w_{j,k}^\perp=(1-\Pi_k)w_j$
\begin{equation}\label{projPik}
(\mathbf{H}_0-\tfrac{Q^2+p_0^2}{2})\theta w_{j,k}=\theta \psi_{j,k}-\theta \Pi_k(e^{\gamma c}Vw_j)-\tfrac{1}{2}[\partial_c^2,\theta]w_{j,k}.
\end{equation}
By Lemma \ref{chiPikV} with $\beta'=\beta=\gamma/2$, we see that $\Pi_k(e^{\gamma c} Vw_j)\in e^{\gamma \rho/2}L^2$. Since $[\partial_c^2,\theta]$ is a first order differential operator with compact support, we get $[\partial_c^2,\theta]w_{j,k}\in L^2$ as $\widetilde{\theta} w_{j,k}\in \mathcal{D}(\mathcal{Q})$ if $\widetilde{\theta}\in C_c^\infty(\mathbb{R})$ by Lemma \ref{chiPikV}. The right hand side of \eqref{projPik} is then in $e^{\beta\rho}L^2$.
This shows in particular that
$\theta w_{j,k}\in H^2(\mathbb{R}^-;E_k)\cap H_0^1(\mathbb{R}^-;E_k)$ and since $E_k$ is finite dimensional, it is
direct to check that
\[\mathbf{R}_{k}(Q+ip_0)(\mathbf{H}_0-\tfrac{Q^2+p_0^2}{2})\theta w_{j,k}=\theta w_{j,k}\]
with $\mathbf{R}_{k}(Q+ip_0)$ the operator of Lemma \ref{modelres}.
We obtain in the region $c\in \mathbb{R}^-$
\[\theta w_{j,k}=\mathbf{R}_k(Q+ip_0)\big(\theta \psi_{j,k}-\theta e^{\gamma c}\Pi_k(Vw_j)-\tfrac{1}{2}[\partial_c^2,\theta]w_{j,k}\big).\]
Using the properties of $\mathbf{R}_k(\alpha)$ in Part 3) of Lemma \ref{modelres} (applied with $-\beta$ instead of $\beta$),
we see that for each $0<\beta<\gamma/2$
\[ \theta w_{j,k}\in e^{\beta \rho}L^2(\mathbb{R}^-;E_k)\]
and thus we deduce that
\[ w_j\in e^{\beta \rho}L^2(\mathbb{R};E_k)\oplus L^2(\mathbb{R};E_k^\perp)=\mathcal{H}_{k,\beta}.\]
If we consider the finite rank operator $\mathbf{W}$ defined by $\mathbf{W}f:=\sum_{j=1}^Jw_j \langle f,\varphi_j\rangle_{ \mathcal{H}_{k,\beta}}$
for $f\in \mathcal{H}_{k,\beta}$,
we have
\[ (\mathbf{H}-\tfrac{Q^2+p_0^2}{2})\mathbf{W}=\mathbf{Y} \quad \textrm{ with } \mathbf{Y}f:=\sum_{j=1}^J\psi_j\langle f,\varphi_j\rangle_{ \mathcal{H}_{k,\beta}}.\]
But now it is direct to check that $1+\hat{\mathbf{K}}(\alpha_0)+\mathbf{Y}$ is invertible on
$\mathcal{H}_{k,\beta}$ and we obtain that
\[(\mathbf{H}-2\Delta_{\pi(\alpha)})\big(\widetilde{\mathbf{R}}(\alpha)(1+\mathbf{T}_k(\alpha))+\mathbf{W}\big)=1+\hat{\mathbf{K}}(\alpha)+\mathbf{Y}-2(\Delta_{\pi(\alpha)}-\Delta_{\alpha_0}){\bf W}.\]
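Let us sketch the standard Fredholm argument showing that $1+\hat{\mathbf{K}}(\alpha_0)+\mathbf{Y}$ is indeed invertible. If $(1+\hat{\mathbf{K}}(\alpha_0)+\mathbf{Y})u=0$, then
\[ (1+\hat{\mathbf{K}}(\alpha_0))u=-\sum_{j=1}^J\psi_j\langle u,\varphi_j\rangle_{\mathcal{H}_{k,\beta}}.\]
The left-hand side lies in ${\rm Ran}(1+\hat{\mathbf{K}}(\alpha_0))=\ker(1+\hat{\mathbf{K}}(\alpha_0)^*)^\perp$ while the right-hand side lies in ${\rm span}(\psi_j)_{j\leq J}=\ker(1+\hat{\mathbf{K}}(\alpha_0)^*)$, so both sides vanish; thus $\langle u,\varphi_j\rangle_{\mathcal{H}_{k,\beta}}=0$ for all $j$ and $u\in \ker(1+\hat{\mathbf{K}}(\alpha_0))={\rm span}(\varphi_j)_{j\leq J}$, which forces $u=0$. Since $\mathbf{Y}$ has finite rank, $1+\hat{\mathbf{K}}(\alpha_0)+\mathbf{Y}$ is still Fredholm of index $0$, and injectivity then implies invertibility.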
The remainder ${\bf K}(\alpha):=\hat{\mathbf{K}}(\alpha)+\mathbf{Y}-2(\Delta_{\pi(\alpha)}-\Delta_{\alpha_0}){\bf W}$ is compact on $\mathcal{H}_{k,\beta}$, analytic in $\alpha$ in the desired region and
$1+\mathbf{K}(\alpha)$ is invertible for
$\alpha=\alpha_0$, therefore we can apply the Fredholm analytic theorem to conclude that
the family of operator $(1+\mathbf{K}(\alpha))^{-1}$ exists as a meromorphic family of bounded operators on $\mathcal{H}_{k,\beta}$ for $\alpha$ in
$\{\alpha=Q+ip \in \Sigma\,|\, |\pi(\alpha)-Q|^2\leq \lambda_k^{1/4}, \forall j\leq k, {\rm Im}\sqrt{p^2-2\lambda_j}>-\beta\}$ except on a discrete set of poles with finite rank polar part. We can thus set
\begin{equation}\label{formulefinaleRalpha}
\mathbf{R}(\alpha):=\big(\widetilde{\mathbf{R}}(\alpha)(1+\mathbf{T}_k(\alpha))+ \mathbf{W}\big)(1+\mathbf{K}(\alpha))^{-1}
\end{equation}
which satisfies the desired properties. To prove the boundedness of ${\bf R}(\alpha)(1-\Pi_k)\psi:\mathcal{D}'(\mathcal{Q}) \to e^{-\beta\rho}\mathcal{D}(\mathcal{Q})$ for $\psi\in L^\infty(\mathbb{R})\cap C^\infty(\mathbb{R})$ with $\psi'\in L^\infty$ and ${\rm supp}(\psi)\subset (-\infty,A)$ for $A\in \mathbb{R}$, we write
\[ {\bf R}(\alpha)=\widetilde{{\bf R}}(\alpha)-{\bf R}(\alpha)(\tilde{{\bf K}}_k(\alpha)+\tilde{{\bf L}}_k(\alpha))\]
and we have seen in \eqref{boundonD'(Q)prop3.10} and \eqref{def:tildeLk} that $(\tilde{{\bf K}}_k(\alpha)+\tilde{{\bf L}}_k(\alpha)): \mathcal{D}'(\mathcal{Q}) \to \mathcal{H}_{k,\beta}$ is bounded and $\widetilde{{\bf R}}(\alpha)(1-\Pi_k)\psi:\mathcal{D}'(\mathcal{Q})\to e^{-\beta\rho}\mathcal{D}(\mathcal{Q})$ is bounded, while $(1-\Pi_k)\psi:\mathcal{D}'(\mathcal{Q})\to \mathcal{D}'(\mathcal{Q})$ by Lemma \ref{chiPikV}. Finally ${\bf R}(\alpha):\mathcal{H}_{k,\beta}\to e^{\beta \rho}\mathcal{D}(\mathcal{Q})$ and the same holds for ${\bf W}$. This shows the announced property of ${\bf R}(\alpha)(1-\Pi_k)\psi$.
We can use the mapping properties of $\widetilde{\mathbf{R}}(\alpha)$ and $\mathbf{W}$, together with \eqref{asympR_-} and \eqref{formulefinaleRalpha} to deduce \eqref{asymptoticRf}.
We finally need to prove that there is no pole in the half plane ${\rm Re}(\alpha)\leq Q$ except possibly at the points $\alpha=Q\pm i\sqrt{2\lambda_j}$.
First, by the spectral theorem, one has for each $f\in e^{\beta \rho}L^2\subset L^2$ with
$\beta>0$ and each $\alpha$ satisfying ${\rm Re}(\alpha)<Q$
\[ \|{\bf R}(\alpha)f\|_{e^{-\beta \rho}L^2}\leq C\|{\bf R}(\alpha)f\|_{2}\leq \frac{C\|f\|_{2}}{|{\rm Im}(\alpha)|\cdot|{\rm Re}(\alpha)-Q|}\]
which implies that a pole $\alpha_0=Q+ip_0$ with $p_0\notin \{\pm\sqrt{2\lambda_j}\,| \,j\geq 0\}$ on ${\rm Re}(\alpha)=Q$ must be at most of order $1$, while at $p_0=\pm \sqrt{2\lambda_j}$ it can be at most of order $2$ on $\Sigma$. Since $\widetilde{\mathbf{R}}(\alpha)(1+\mathbf{T}_k(\alpha))+ \mathbf{W}$ is analytic, a pole of $\widetilde{\mathbf{R}}(\alpha)$ can only come from a pole of $(1+\mathbf{K}(\alpha))^{-1}$, with polar part being a finite rank operator. We now assume that $p_0\notin \{\pm\sqrt{2\lambda_j}\,| \,j\geq 0\}$.
Let us denote by ${\bf Z}$ the finite rank residue ${\bf Z}={\rm Res}_{\alpha_0}{\bf R}(\alpha)$. Then $(\mathbf{H}-\tfrac{Q^2+p_0^2}{2}){\bf Z}=0$, which means that each element in ${\rm Ran}(\bf Z)$ is a $w\in e^{-\beta\rho}\mathcal{D}({\bf H})$ such that $(\mathbf{H}-\tfrac{Q^2+p_0^2}{2})w=0$. There are finite rank operators ${\bf Z}_0,\dots,
{\bf Z}_{N}$ on $\mathcal{H}_{k,\beta}$ so that for $\psi\in C^\infty((-\infty,-2)_c;[0,1])$
\[\psi{\bf Z}=\sum_{n=0}^N \psi\partial_{\alpha}^{n}\tilde{\mathbf{R}}(\alpha_0){\bf Z}_n=\sum_{n=1}^N \psi \partial_{\alpha}^{n}\mathbf{R}_k(\alpha_0)\chi{\bf Z}_n+\psi\tilde{\mathbf{R}}(\alpha_0){\bf Z}_0+\psi{\bf Z}_{L^2}\]
where ${\bf Z}_{L^2}$ is a finite rank operator mapping to $\mathcal{H}_{k,\beta}\subset L^2$.
For $f\in \mathcal{H}_{k,\beta}$, the expression of $\partial_{\alpha}^{n}\mathbf{R}_k(\alpha_0)f$ is explicit
from \eqref{asymptRk}, and one directly checks by differentiating \eqref{asymptRk} in $\alpha$ that it is of the form (for $c<-2$)
\[ (\partial_{\alpha}^{n}\tilde{\mathbf{R}}(\alpha_0)f)(c)=\sum_{\lambda_j\leq \lambda_k}\sum_{m\leq n}\widetilde{a}_{j,m}(\alpha_0,f)c^{m}
e^{-ic\sqrt{p_0^2-2\lambda_j}}+\widetilde{G}(\alpha_0,f)\]
for some $\widetilde{a}_{j,m}(\alpha_0,f)\in \ker ({\bf P}-\lambda_j)$ and $\tilde{\chi}\widetilde{G}(\alpha_0,f)\in \mathcal{H}_{k,\beta}$ satisfying ${\bf H}\tilde{\chi}\widetilde{G}(\alpha_0,f)\in \mathcal{H}_{k,\beta}$. This implies that, in $c<-2$, $w\in {\rm Ran}({\bf Z})$ is necessarily of the form
\[w=\sum_{j\leq k}\sum_{m\leq N}b_{j,m}c^{m}
e^{-ic\sqrt{p_0^2-2\lambda_j}}+\hat{G}\]
for some $b_{j,m}\in\ker ({\bf P}-\lambda_j)$ and $\hat{G}\in \mathcal{H}_{k,\beta}$ with ${\bf H}\hat{G}\in \mathcal{H}_{k,\beta}$. Using that $\tilde{\chi}(c)e^{\gamma c}\Pi_k(Vw)\in e^{\beta c}L^2$, we see from the equation $(\mathbf{H}_0-\tfrac{Q^2+p_0^2}{2})\Pi_k(w)=-e^{\gamma c}\Pi_k(Vw)$ that
\[ (\mathbf{H}_0-\tfrac{Q^2+p_0^2}{2})\Big(\sum_{\lambda_j\leq \lambda_k}\sum_{m\leq N}b_{j,m}c^{m}
e^{-ic\sqrt{p_0^2-2\lambda_j}}\Big)\Big|_{c\leq 0}\in e^{\beta c}L^2\]
and by using the explicit expression of ${\bf H}_0$, it is clear that necessarily
$b_{j,m}=0$ for all $m\not=0$. Then we may apply Lemma \ref{boundarypairing} with $u_1=u_2=w$ to deduce that $b_{j,0}=0$, and therefore $w\in \mathcal{D}(\bf{H})$, which implies $w=0$ by Lemma \ref{embedded}.
It remains to show that $\alpha=Q\pm i\sqrt{2\lambda_j}$ is a pole of order at most $1$. To simplify, we write the argument for $\alpha=Q$; the proof is the same for all $j$. The method is basically the same as in the proof of \cite[Proposition 6.28]{Mel}: the resolvent has Laurent expansion ${\bf R}(\alpha)=(\alpha-Q)^{-2}{\bf Q}+(\alpha-Q)^{-1}{\bf R}'(\alpha)$ for some operator ${\bf R}'(\alpha)$ holomorphic near $\alpha=Q$, where ${\bf Q}$ has finite rank. Then we also have $\|{\bf R}(\alpha)\phi\|_{L^2}\leq C|Q-\alpha|^{-2}\|\phi\|_{L^2}$ for real $\alpha<Q$ and all $\phi\in \mathcal{C}$, thus we can deduce that
\[ {\bf Q}\phi=\lim_{\alpha\to Q^-}(\alpha-Q)^2{\bf R}(\alpha)\phi.\]
The limit holds in $e^{-\delta\rho}L^2$ for all $\delta>0$ small, but the right hand side has actually a bounded $L^2$-norm, so ${\bf Q}\phi\in L^2$ and thus ${\rm Ran}({\bf Q})\subset L^2$. Since we also have ${\bf H}{\bf Q}=0$ from Laurent expanding $({\bf H}-2\Delta_\alpha){\bf R}(\alpha)={\rm Id}$ at $\alpha=Q$, we conclude that ${\bf Q}=0$ by using Lemma \ref{embedded}.
\qed
\subsubsection{The resolvent in the physical sheet on weighted spaces}
We shall conclude this section on the resolvent of ${\bf H}$ by analyzing its boundedness on weighted spaces $e^{-\beta \rho}L^2$ in the half-plane $\{{\rm Re}(\alpha)<Q\}$. We recall that
Lemma \ref{resolventweighted} precisely provided such boundedness, but the region of validity in $\alpha$ of this Lemma did not cover the whole physical sheet, and in particular not the region close to the line ${\rm Re}(\alpha)=Q$. Just as in Lemma \ref{firstPoisson},
the main application of such boundedness on weighted spaces is to define the Poisson operator
$\mathcal{P}_\ell(\alpha)$, and we aim to define it in a large connected region of
$\{{\rm Re}(\alpha)\leq Q\}$ relating the probabilistic region and the line $\alpha \in Q+i\mathbb{R}$ corresponding to the $L^2$-spectrum of ${\bf H}$.
\begin{proposition}\label{physicalsheet}
Let $\beta\in \mathbb{R}$ and ${\rm Re}(\alpha)<Q$, then the resolvent $\mathbf{R}(\alpha)$ of $\mathbf{H}$ extends as an analytic family of bounded operators
\begin{align*}
\mathbf{R}(\alpha): e^{-\beta \rho}L^2(\mathbb{R}\times \Omega_\mathbb{T})\to e^{-\beta \rho}\mathcal{D}(\mathcal{Q})
\end{align*}
in the region ${\rm Re}(\alpha)<Q-|\beta|$, and it satisfies, for each $\psi\in C^\infty(\mathbb{R})\cap L^\infty(\mathbb{R})$ such that $\psi'\in L^\infty$ and ${\rm supp}(\psi)\subset (-\infty,A)$ for some $A\in \mathbb{R}$,
\[\mathbf{R}(\alpha)(1-\Pi_k)\psi: e^{-\beta \rho}\mathcal{D}'(\mathcal{Q})\to e^{-\beta \rho}\mathcal{D}(\mathcal{Q}).\]
\end{proposition}
\begin{proof} We proceed as in the proof of Proposition \ref{extensionresolvent}:
we let for ${\rm Re}(\alpha)<Q$
\[ \tilde{{\bf R}}(\alpha):=\tilde{\chi}\mathbf{R}_k^\perp(\alpha)\chi+(1-\hat{\chi})\mathbf{R}_+(1-\chi)+\tilde{\chi}\mathbf{R}_k(\alpha)\chi-\check{\chi}{\bf R}_k^\perp(\alpha){\bf K}_{k,2}(\alpha)\]
and we get
\[ (\mathbf{H}-\tfrac{Q^2+p^2}{2})\tilde{{\bf R}}(\alpha)={\rm Id}+\tilde{{\bf K}}_k(\lambda)+\tilde{{\bf L}}_k(\alpha)\]
where we used the operators of the proof of Proposition \ref{extensionresolvent} (see \eqref{def:tildeKk} and \eqref{def:tildeLk}). Now we take ${\rm Re}(\alpha)<Q$ and $|Q-\alpha|<A_0$ for some fixed constant $A_0>0$ that can be chosen arbitrarily large, and we let $k>0$ large enough so that
\begin{equation}\label{dingue}
A_0^2 +1<\min (\frac{\lambda_k^{1/2}}{16(1+C_2)},\lambda_k^{1/4}), \quad \lambda_k> (16^2C_{1}^2(1+C_{2}^2)+1)(|\beta|+1)^2,
\end{equation}
where the constant $C_{1}$, $C_{2}$ above are the constants respectively given in Lemma \ref{LemmaRk} and Lemma \ref{regimec>-1}.
The conditions in \eqref{dingue} ensure both that the condition ${\rm Re}((\alpha-Q)^2)>\beta^2-2\lambda_k+1$ of Lemma \ref{LemmaRk} is satisfied and that the operator
$\tilde{\chi}\mathbf{R}_k^\perp(\alpha)\chi:e^{-\beta \rho} \mathcal{D}'(\mathcal{Q})\to e^{-\beta \rho}(L^2(\mathbb{R}^-;E_k^\perp)\cap \mathcal{D}(\mathcal{Q}))$ is a bounded holomorphic family; the norm estimate appearing in 2) of Lemma \ref{LemmaRk} then gives
\begin{equation} \label{borne1}
\| \mathbf{L}_k^\perp(\alpha)\|_{\mathcal{L}(e^{-\beta \rho}L^2)}\leq \frac{C_{1}(1+|\beta|)}{\sqrt{{\rm Re}((\alpha-Q)^2)+2\lambda_k-\beta^2}}\leq \frac{1}{16(1+C_{2})}.
\end{equation}
The condition $|\beta|<Q-{\rm Re}(\alpha)$ (equivalent to ${\rm Im}(p)>|\beta|$) makes sure that we can apply 3) of Lemma \ref{modelres}: in particular the operator $\mathbf{R}_k(\alpha): e^{-\beta \rho}L^2(\mathbb{R}^-\times \Omega_\mathbb{T})\to e^{-\beta \rho}(L^2(\mathbb{R}^-;E_k)\cap \mathcal{D}(\mathcal{Q}))$ is a bounded holomorphic family.
Also, Lemma \ref{regimec>-1} ensures that $(1-\hat{\chi})\mathbf{R}_+(1-\chi): e^{-\beta \rho}\mathcal{D}'(\mathcal{Q})\to e^{-\beta \rho}\mathcal{D}(\mathcal{Q})$ is a bounded holomorphic family (note that both cutoff functions $(1-\hat{\chi})$ and $(1-\chi)$ kill the $c\to -\infty$ behaviour and this is why Lemma \ref{regimec>-1} extends to weighted spaces $e^{-\beta \rho}L^2(\mathbb{R}\times \Omega_\mathbb{T})$). Also the first condition in \eqref{dingue} ensures the norm estimate (as given by \eqref{boundbis})
\begin{equation} \label{borne2}
\|\mathbf{L}_{+,k}(\alpha)\|_{\mathcal{L}(L^2)}\leq 2 C_2,\quad \|\mathbf{L}_{+,k}(\alpha)^2\|_{\mathcal{L}(L^2)}\leq \frac{1}{8}.
\end{equation}
As a consequence
\[ \tilde{\mathbf{R}}(\alpha): e^{-\beta \rho}L^2\to e^{-\beta \rho}\mathcal{D}(\mathcal{Q})\]
is bounded and holomorphic in $U:=\{\alpha\in \mathbb{C}\,|\, |Q-\alpha|<A_0, {\rm Re}(\alpha)<Q-|\beta|\}$.
Furthermore \eqref{borne1} and \eqref{borne2} provide the estimate
\[ \|(\mathbf{L}_{+,k}(\alpha)+\mathbf{L}_k^\perp(\alpha))^2\|_{\mathcal{L}(e^{-\beta \rho}L^2)}<1/2.\]
Moreover, 2) of Lemma \ref{LemmaRk}, Lemma \ref{regimec>-1} and 3) of Lemma \ref{modelres} also give that
$\tilde{\mathbf{K}}_k(\alpha)$ is compact on the Hilbert space
$e^{-\beta \rho}L^2(\mathbb{R}\times \Omega_\mathbb{T})$. Exactly the same argument as in the proof of Proposition \ref{extensionresolvent} gives that
\[ (\mathbf{H}-\frac{Q^2+p^2}{2})\tilde{{\bf R}}(\alpha)(1+\mathbf{T}_k(\alpha))=1+ \tilde{{\bf K}}_k(\alpha)(1+\mathbf{T}_k(\alpha))\]
for some $\mathbf{T}_k$ bounded holomorphic on $e^{-\beta \rho}L^2$ in $U$. Since by Lemma \ref{resolventweighted} we know that $(\mathbf{H}-2\Delta_\alpha)$ is invertible on $e^{-\beta \rho}L^2$
for some $\alpha_0\in U$, one can always add a finite rank operator $\mathbf{W}: e^{-\beta \rho}L^2\to e^{-\beta \rho}\mathcal{D}(\mathbf{H})$, so that
\[(\mathbf{H}-2\Delta_\alpha)(\mathbf{\widetilde{R}}(\alpha)(1+\mathbf{T}_k(\alpha))+\mathbf{W})=1+\mathbf{K}(\alpha)\]
for some compact remainder $\mathbf{K}(\alpha)$ on $e^{-\beta \rho}L^2$, analytic in $\alpha\in U$ in the desired region and $1+\mathbf{K}(\alpha)$ being invertible for $\alpha=\alpha_0\in U$. This implies by analytic Fredholm theorem that
\[ \mathbf{R}(\alpha)=(\tilde{\mathbf{R}}(\alpha)(1+\mathbf{T}_k(\alpha))+\mathbf{W})(1+\mathbf{K}(\alpha))^{-1}: e^{-\beta \rho}L^2(\mathbb{R}\times \Omega_\mathbb{T})\to e^{-\beta \rho}\mathcal{D}(\mathcal{Q})\]
is meromorphic for $\alpha\in U$. Now, using the density of the embeddings
$e^{|\beta| \rho}L^2\subset L^2\subset e^{-|\beta| \rho}L^2$ and using that $\mathbf{R}(\alpha)$ is holomorphic in $U$ as a bounded operator on $L^2$, it is direct to check that
$\mathbf{R}(\alpha):e^{-\beta \rho}L^2\to e^{-\beta \rho}\mathcal{D}(\mathcal{Q})$ is analytic in $U$. Since $A_0$ (and thus $U$) can be chosen arbitrarily large as long as the constraint ${\rm Re}(\alpha)<Q-|\beta|$ is satisfied, we obtain our desired result. To prove that ${\bf R}(\alpha)(1-\Pi_k)\psi$ maps
$e^{-\beta \rho}\mathcal{D}'(\mathcal{Q})$ to $e^{-\beta \rho}\mathcal{D}(\mathcal{Q})$, we proceed as in the proof of Proposition \ref{extensionresolvent} and write
\[ {\bf R}(\alpha)=\widetilde{{\bf R}}(\alpha)-{\bf R}(\alpha)(\tilde{{\bf K}}_k(\alpha)+\tilde{{\bf L}}_k(\alpha)).\]
We have seen that $\widetilde{{\bf R}}(\alpha)(1-\Pi_k)\psi:e^{-\beta \rho}\mathcal{D}'(\mathcal{Q})\to e^{-\beta \rho}\mathcal{D}(\mathcal{Q})$ is bounded. The same arguments (just as in the proof of Proposition \ref{extensionresolvent}) also prove that the operators
$\tilde{{\bf K}}_k(\alpha),\tilde{{\bf L}}_k(\alpha)$ are bounded as operators
$e^{-\beta \rho}\mathcal{D}'(\mathcal{Q})\to e^{-\beta \rho}L^2$, thus we obtain that ${\bf R}(\alpha)(1-\Pi_k)\psi:e^{-\beta \rho}\mathcal{D}'(\mathcal{Q})\to e^{-\beta \rho}\mathcal{D}(\mathcal{Q})$ is bounded.
\end{proof}
\subsection{The Poisson operator}
We have seen in Lemma \ref{firstPoisson} that it is possible to construct a family of Poisson operators $\mathcal{P}_\ell(\alpha)$ in what we called the \emph{probabilistic region}, which contains a half line $(-\infty,Q-c_\ell)$ for some $c_\ell\geq 0$ depending on $\ell$. The construction was using the resolvent acting on weighted $L^2$-spaces. In this section, we will use Proposition \ref{extensionresolvent} and Proposition \ref{physicalsheet} to prove that the Poisson operators
extend holomorphically in $\alpha$ in a connected region of ${\rm Re}(\alpha)\leq Q$ containing the probabilistic region and the line $Q+i\mathbb{R}$.
\subsubsection{Regime close to the continuous spectrum of $H$}
We first start with a technical lemma that allows to define the Poisson operator on the continuous spectrum $Q+i\mathbb{R}$:
\begin{lemma}\label{boundarypairing}
Let $p\in \mathbb{R}$ and for $m=1,2$, let $u_m\in e^{-\delta \rho}L^2(\mathbb{R}\times\Omega_\mathbb{T})$ with $\delta>0$ such that:\\
1) for each $\theta\in C^\infty(\mathbb{R};[0,1])$ supported in $(a,+\infty)$ for some $a\in\mathbb{R}$, one has $\theta u_m\in \mathcal{D}(\mathcal{Q})$;\\
2) $u_m$ satisfies
\[ (\mathbf{H}-\tfrac{Q^2+p^2}{2})u_m=r_m \in e^{\delta \rho}L^2(\mathbb{R}\times \Omega_\mathbb{T}).\]
Set $k=\max\{j\geq0\, |\, 2\lambda_j\leq p^2\}$. Then $u_m$ has asymptotic behaviour
\begin{equation}\label{decompui}
u_m=\sum_{j,2 \lambda_j\leq p^2}\Big(a_m^j
e^{-ic\sqrt{p^2-2\lambda_j}}+b_m^j
e^{ic\sqrt{p^2-2\lambda_j}}\Big)+G_m
\end{equation}
with $a_m^j,b_m^j\in \ker (\mathbf{P}-\lambda_j)$, and both $G_m,\partial_cG_m\in e^{\delta \rho}L^2(\mathbb{R}\times \Omega_\mathbb{T})+L^2(\mathbb{R};E_k^\perp)$.
Then we have
\[\langle u_1\,|\,r_2\rangle-\langle r_1\,|\,u_2\rangle=i \sum_{j, 2\lambda_j\leq p^2}\sqrt{p^2-2\lambda_j}\Big(\langle a_1^j\,|\,a_2^j\rangle_{L^2(\Omega_\mathbb{T})}-
\langle b_1^j\,|\,b_2^j\rangle_{L^2(\Omega_\mathbb{T})}\Big).\]
\end{lemma}
\begin{proof} Let $\theta_{T}\in C^\infty(\mathbb{R})$ be non-negative,
equal to $1$ on $[-T,\infty)$ and with ${\rm supp}(\theta_{T})\subset [-T-\varepsilon,\infty)$,
where $T>0$ is a large parameter and $\varepsilon>0$ is small, and let $\widetilde{\theta}_{T}(\cdot)=\theta_T(\cdot\,+1)$. In particular we have $\widetilde{\theta}_{T}\theta_{T}=\theta_{T}$. First, $\widetilde{\theta}_{T} u_m\in H^1(\mathbb{R}; L^2(\Omega_\mathbb{T}))$ satisfies
\[(\mathbf{H}- \tfrac{Q^2+p^2}{2})(\widetilde{\theta}_{T} u_m)=\widetilde{\theta}_{T}r_m-\tfrac{1}{2}[\partial_c^2,\widetilde{\theta}_{T}]u_m \in L^2(\mathbb{R}\times \Omega_\mathbb{T})\]
thus $\widetilde{\theta}_Tu_m\in \mathcal{D}(\mathbf{H})$ (we used that $[\partial_c^2,\widetilde{\theta}_{T}]$ is a first order differential operator with compactly supported coefficients). This implies, using $[\mathbf{H},\widetilde{\theta}_T]\theta_T=0=\theta_T[\mathbf{H},\widetilde{\theta}_T]$, that
\begin{equation}\label{Greensformula}
\begin{split}
\langle u_1,r_2\rangle-\langle r_1,u_2\rangle = & \lim_{T\to \infty}
\langle \theta_{T} u_1, \mathbf{H}(\widetilde{\theta}_{T}u_2)\rangle-\langle \mathbf{H}(\widetilde{\theta}_{T}u_1),\theta_{T} u_2\rangle \\ =
& -\lim_{T\to \infty}\tfrac{1}{2}
\langle [\partial_c^2,\theta_{T}] u_1, u_2\rangle.
\end{split}\end{equation}
We write $u_m=u_m^0+G_m$ by using \eqref{decompui}. Then we claim that, as $T\to \infty$,
\[ |\langle [\partial_c^2,\theta_{T}] u_1^0, G_2\rangle|+|\langle [\partial_c^2,\theta_{T}] G_1, u_2\rangle|\to 0.\]
Indeed, we have $(|u_1^0|+|\partial_cu_1^0|)G_2\in L^1(\mathbb{R}\times \Omega_\mathbb{T})$ and $(|G_1|+|\partial_cG_1|)u_2\in L^1(\mathbb{R}\times \Omega_\mathbb{T})$, and the support of $[\partial_c^2,\theta_{T}]$ is contained in $[-T-1,-T]$. We are left in \eqref{Greensformula} to study the limit of $\langle [\partial_c^2,\theta_{T}] u_1^0, u_2^0\rangle$.
But now we have $[\partial_c^2,\theta_{T}] u^0_1=\theta''_{T}u^0_1+2\theta'_{T}\partial_cu^0_1$ and for fixed $T>0$
it is direct to check, using integration by parts and the fact that $(\mathbf{H}_0-\tfrac{Q^2+p^2}{2})u_m^0=0$ that
\[\begin{split}
\langle [\partial_c^2,\theta_{T}] u_1^0, u_2^0\rangle=& \int_{-T-1}^{-T}\partial_c\big(\theta_T\langle \partial_cu^0_1,u^0_2\rangle_{L^2(\Omega_\mathbb{T})}-\theta_T\langle u^0_1,\partial_cu^0_2\rangle_{L^2(\Omega_\mathbb{T})}\big)dc\\
=& \langle \partial_cu_1^0(-T),u_2^0(-T)\rangle_{L^2(\Omega_\mathbb{T})}-\langle u_1^0(-T),\partial_cu_2^0(-T)\rangle_{L^2(\Omega_\mathbb{T})}.
\end{split}\]
A direct computation gives that this is equal to
\[ 2i \sum_{j, 2\lambda_j\leq p^2}\sqrt{p^2-2\lambda_j}\Big(
\langle b_1^j,b_2^j\rangle_{L^2(\Omega_\mathbb{T})}-\langle a_1^j,a_2^j\rangle_{L^2(\Omega_\mathbb{T})}\Big).\]
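For the reader's convenience, here is the mode-by-mode computation behind this last identity. Since eigenspaces of $\mathbf{P}$ associated with distinct eigenvalues are orthogonal in $L^2(\Omega_\mathbb{T})$, only the diagonal terms in $j$ contribute; writing for this computation $\nu_j:=\sqrt{p^2-2\lambda_j}\geq 0$ and $u_m^{0,j}:=a_m^je^{-ic\nu_j}+b_m^je^{ic\nu_j}$, and using the convention that the pairing is linear in its first argument, a direct expansion shows that
\[ \langle \partial_cu_1^{0,j},u_2^{0,j}\rangle_{L^2(\Omega_\mathbb{T})}-\langle u_1^{0,j},\partial_cu_2^{0,j}\rangle_{L^2(\Omega_\mathbb{T})}
=2i\nu_j\big(\langle b_1^j,b_2^j\rangle_{L^2(\Omega_\mathbb{T})}-\langle a_1^j,a_2^j\rangle_{L^2(\Omega_\mathbb{T})}\big),\]
the oscillatory cross terms carrying phases $e^{\pm 2ic\nu_j}$ cancelling exactly between the two pairings; summing over the $j$'s with $2\lambda_j\leq p^2$ gives the expression above.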
This completes the proof.
\end{proof}
Now we extend the construction of the Poisson operator \eqref{definPellproba} to a neighborhood of the line $\alpha\in Q+i\mathbb{R}$ corresponding to the continuous spectrum.
\begin{proposition}\label{poissonprop}
Let $0<\beta<\gamma/2$ and $\ell\in\mathbb{N}$.
Then there is an analytic family of operators $\mathcal{P}_\ell(\alpha)$
\[\mathcal{P}_\ell(\alpha): E_\ell\to e^{-\beta\rho}\mathcal{D}(\mathcal{Q})\]
in the region
\begin{equation}\label{regionvalide}
\Big\{\alpha\in \mathbb{C}\,\Big|\, {\rm Re}(\alpha)< Q, {\rm Im}\sqrt{p^2-2\lambda_\ell}<\beta\Big\}\cup \Big\{Q+ip\in Q+i\mathbb{R}\, \Big| \, |p|\in \bigcup_{j\geq \ell} (\sqrt{2\lambda_j},\sqrt{2\lambda_{j+1}})\Big\},
\end{equation}
continuous at each $Q\pm i\sqrt{2\lambda_j}$ for $j\geq \ell$, satisfying $(\mathbf{H}-\tfrac{Q^2+p^2}{2})\mathcal{P}_\ell(\alpha)F=0$ and
\begin{equation}\label{expansionP}
\mathcal{P}_\ell(\alpha)F=\sum_{j\leq \ell}\Big(F^-_je^{ic\sqrt{p^2-2\lambda_j}}+F_j^+(\alpha)
e^{-ic\sqrt{p^2-2\lambda_j}}\Big)+G_\ell(\alpha,F)
\end{equation}
with $F^-_j=\Pi_{\ker(\mathbf{P}-\lambda_j)}F$, $F_j^+(\alpha)\in \ker (\mathbf{P}-\lambda_j)$, and
$G_\ell(\alpha,F),\partial_c G_\ell(\alpha,F)\in e^{\frac{\beta}{2}\rho(c)}L^2(\mathbb{R}\times \Omega_\mathbb{T})+L^2(\mathbb{R}; E_\ell^{\perp})$. In particular, $\mathcal{P}_\ell(\alpha)F\in e^{-({\rm Im}(\sqrt{p^2-2\lambda_\ell})+\varepsilon)\rho}\mathcal{D}(\mathcal{Q})$ for all $\varepsilon>0$.
Moreover,
for each $\theta\in C_c^\infty(\mathbb{R})$, one has $\theta \mathcal{P}_\ell(\alpha)F\in \mathcal{D}(\mathbf{H})$. Such a solution $u\in e^{-\beta\rho}L^2(\mathbb{R}\times\Omega_\mathbb{T})$ to the equation $(\mathbf{H}-\tfrac{Q^2+p^2}{2})u=0$ with the asymptotic expansion \eqref{expansionP} is unique.
The operator $\mathcal{P}_\ell(\alpha)$ admits a meromorphic extension to the region
\begin{equation}\label{regiondextension}
\Big\{\alpha=Q+ip \in \Sigma\, |\, \forall j \in\mathbb{N} \cup\{0\}, {\rm Im}\sqrt{p^2-2\lambda_j}\in (\beta/2-\gamma,\beta/2)\Big\}\end{equation}
and $\mathcal{P}_\ell(\alpha)F$ satisfies \eqref{expansionP} in that region. Finally, $F_j^+(\alpha)$ depends meromorphically on $\alpha$ in the region above.
\end{proposition}
\begin{proof}
We start by setting $u_-(\alpha):=\sum_{j=0}^\ell F^-_je^{ic\sqrt{p^2-2\lambda_j}}$, and let $\chi\in C^\infty(\mathbb{R})$ equal to $1$ in $(-\infty,-1)$ and with ${\rm supp}(\chi)\subset \mathbb{R}^-$.
We get
\[(\mathbf{H}-\tfrac{Q^2+p^2}{2})(\chi u_-(\alpha))= -\frac{1}{2}\chi''(c)u_-(\alpha)- \chi'(c)\partial_cu_-(\alpha)+
e^{\gamma c}V\chi u_-(\alpha).\]
The first two terms are in $e^{N\rho}L^2(\mathbb{R}\times \Omega_\mathbb{T})$ for all $N$, the term $e^{\gamma c}V\chi u_-(\alpha)$ can be decomposed (for each $k\geq \ell$) as
\[ \Pi_k(e^{\gamma c}V\chi u_-(\alpha))+(1-\Pi_k)(e^{\gamma c}V\chi u_-(\alpha)).\]
Using that $u_-(\alpha)=\Pi_k u_-(\alpha)$ if $k\geq \ell$ together with Lemma \ref{chiPikV}, we see that the first term satisfies $\Pi_k(e^{\gamma c}V\chi u_-(\alpha))\in e^{\beta \rho}L^2(\mathbb{R}^-;E_k)$
and the second term $(1-\Pi_k)(e^{\gamma c}V\chi u_-(\alpha))\in e^{\beta \rho}\mathcal{D}'(\mathcal{Q})$, provided that $\beta<\gamma/2$ and that ${\rm Im}(\sqrt{p^2-2\lambda_j})\leq \gamma/2$ for $j\leq \ell$.
We can thus define, using Proposition \ref{extensionresolvent} (with $k\gg \ell$ large enough),
if ${\rm Im}(\sqrt{p^2-2\lambda_j})\in (-\min(\beta,\frac{\gamma}{2}-\beta),\gamma/2)$ for all $j\leq k$ and $|\pi(\alpha)-Q|^2\leq \lambda_k^{1/4}$
\[u_+(\alpha):=\mathbf{R}(\alpha)(\mathbf{H}-\tfrac{Q^2+p^2}{2})(\chi u_-(\alpha))\in e^{-\beta\rho}\mathcal{D}(\mathcal{Q})\]
so that $u(\alpha):=\chi u_-(\alpha)-u_+(\alpha)$ solves $(\mathbf{H}-\tfrac{Q^2+p^2}{2})u(\alpha)=0$ in $e^{-\frac{\gamma}{2}\rho}\mathcal{D}'(\mathcal{Q})$.
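In other words, writing $E:=\tfrac{Q^2+p^2}{2}=2\Delta_{\pi(\alpha)}$, the equation for $u(\alpha)$ is just the characteristic property of the (meromorphically continued) resolvent, a sketch:
\[ (\mathbf{H}-E)u(\alpha)=(\mathbf{H}-E)(\chi u_-(\alpha))-(\mathbf{H}-E)\mathbf{R}(\alpha)\big[(\mathbf{H}-E)(\chi u_-(\alpha))\big]=0,\]
where we used that $(\mathbf{H}-2\Delta_{\pi(\alpha)})\mathbf{R}(\alpha)={\rm Id}$ away from the poles of $\mathbf{R}(\alpha)$, as follows from the construction of Proposition \ref{extensionresolvent}.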
We use Proposition \ref{extensionresolvent} with $k\gg \ell$ large enough, and we see
that $u_+(\alpha)$ is of the form, in $c\leq 0$,
\[
u_+(\alpha)= \sum_{j\leq k}\widetilde{a}_j(\alpha,F)e^{-ic\sqrt{p^2-2\lambda_j}}+G(\alpha,F)= \sum_{j\leq \ell}\widetilde{a}_j(\alpha,F)e^{-ic\sqrt{p^2-2\lambda_j}}+G_\ell(\alpha,F)
\]
with $\widetilde{a}_j(\alpha,F)\in \ker (\mathbf{P}-\lambda_j)$ and $G(\alpha,F), \partial_cG(\alpha,F)\in e^{\beta\rho}L^2(\mathbb{R}^-\times \Omega_\mathbb{T})+ L^2(\mathbb{R}^-;E_k^\perp)$ and
$G_\ell (\alpha,F)\in e^{\beta\rho}L^2(\mathbb{R}^-\times \Omega_\mathbb{T})+L^2(\mathbb{R}^-;E_\ell^\perp)$ if ${\rm Re}(\alpha)<Q$. Here we have used the fact that ${\rm Im}\sqrt{p^2-2\lambda_j}>0$
(since either ${\rm Re}(\alpha)<Q$ or $p^2<2\lambda_j$ for all $j>\ell$ if $\alpha=Q+ip$ with $p\in\mathbb{R}\setminus [-\sqrt{2\lambda_\ell},\sqrt{2\lambda_\ell}]$) to place all terms corresponding to all $\ell<j\leq k$, which belong to $L^2(\mathbb{R};E_\ell^\perp)$, in the remainder term $G_\ell(\alpha,F)$.
This shows that
\begin{equation}\label{defPalpha}
\mathcal{P}_\ell(\alpha)F:=u(\alpha)=\chi(c)\sum_{j\leq \ell}F^-_je^{ic\sqrt{p^2-2\lambda_j}}-
\mathbf{R}(\alpha)(\mathbf{H}-2\Delta_\alpha )\Big(\chi(c)\sum_{j\leq \ell}F^-_je^{ic\sqrt{p^2-2\lambda_j}}\Big)
\end{equation}
satisfies all the required properties. The analyticity in $\alpha$ except possibly at the points $Q\pm i\sqrt{2\lambda_j}$ for $j\in\mathbb{N}_0$ follows from Proposition \ref{extensionresolvent}, in particular 3) of that Proposition. At the points $Q\pm i\sqrt{2\lambda_j}$, the analyticity on the surface $\Sigma$ is a consequence of Lemma \ref{Poissonaj}, in particular \eqref{expression P vs resolvante} and the fact that $Q\pm i\sqrt{2\lambda_j}$ is at most a pole of order $1$ of ${\bf R}(\alpha)$ and $a_j(\alpha,\varphi)$. We notice that the expression of $\mathcal{P}_\ell(\alpha)$ is the same as in \eqref{definPellproba}, thus when the regions of $\alpha$ considered in Lemma
\ref{firstPoisson} and here have an intersection, then this corresponds to the same operator, by analytic continuation.
The uniqueness of the solution with such an asymptotic is direct if ${\rm Re}(\alpha)<Q$: the difference of two such solutions would be in $\mathcal{D}(\mathcal{Q})$ and the operator $\mathbf{H}$ has no
$L^2$ eigenvalues (Lemma \ref{embedded}), hence the difference is identically $0$. For the case $\alpha=Q+ip$ with $p\in\mathbb{R}$, denote by $\hat u(\alpha)$ the difference of two such solutions. Then $\hat u(\alpha) $ can be written under the form
\[\hat u(\alpha)=\sum_{j\leq \ell} \hat F_j^+(\alpha)
e^{-ic\sqrt{p^2-2\lambda_j}} +\hat G_\ell(\alpha,F)\]
where $\hat F_j^+(\alpha)\in \ker (\mathbf{P}-\lambda_j)$ and
$\hat G_\ell(\alpha,F)\in e^{\beta\rho(c)}L^2(\mathbb{R}\times \Omega_\mathbb{T})+L^2(\mathbb{R}; E_\ell^{\perp})$. We can split the sum above as $\sum_{j,\, 2\lambda_j\leq p^2}\cdots+\sum_{j,\, p^2< 2\lambda_j \leq 2\lambda_\ell}\cdots $. The sum $\sum_{j,\, p^2< 2\lambda_j \leq 2\lambda_\ell}\cdots $ belongs to some $e^{\delta \rho}L^2$ as well as its $\partial_c$ derivative. We can use
Lemma \ref{boundarypairing} to see that $\sum_{j, 2\lambda_j\leq p^2}\|\hat F_j^+(\alpha)\|_{L^2(\Omega_\mathbb{T})}^2=0$, hence again $\hat u(\alpha)\in L^2$ and we can conclude as previously.
The meromorphic extension of $\mathcal{P}_\ell(\alpha)$ is a direct consequence of the meromorphic extension of ${\bf R}(\alpha)$ in Proposition \ref{extensionresolvent}.
\end{proof}
We notice that for $\alpha=Q+ip$ with $p\in\mathbb{R}$, the function $\overline{\mathcal{P}_\ell(\overline{\alpha})F}$ is another solution of $(\mathbf{H}-\tfrac{Q^2+p^2}{2})u=0$ satisfying
\[\overline{\mathcal{P}_\ell(\overline{\alpha})F}=\sum_{j\leq \ell}\Big(\overline{F^-_j}e^{-ic\sqrt{p^2-2\lambda_j}}+\overline{F_j^+(\overline{\alpha})}
e^{ic\sqrt{p^2-2\lambda_j}}\Big)+\overline{G_\ell(\overline{\alpha},F)}.\]
This implies that for each $F=\sum_{j\leq \ell}F_j^-\in E_\ell$, there is a unique solution $u=\widehat{\mathcal{P}}_\ell(\alpha)F$ to $(\mathbf{H}-\tfrac{Q^2+p^2}{2})u=0$ of the form
\begin{equation}\label{regimebar}
\widehat{\mathcal{P}}_\ell(\alpha)F=\sum_{j\leq \ell}\Big(F^-_je^{-ic\sqrt{p^2-2\lambda_j}}+\widehat{F}_j^+(\alpha)
e^{ic\sqrt{p^2-2\lambda_j}}\Big)+\widehat{G}_\ell(\alpha,F)
\end{equation}
with $\widehat{G}_\ell(\alpha,F)\in e^{\frac{\beta}{2}\rho(c)}L^2(\mathbb{R}\times \Omega_\mathbb{T})+L^2(\mathbb{R}; E_\ell^{\perp})$ and $\widehat{F}_j^+\in \mathbb{C}\rho_j$, and $\hat{\mathcal{P}}_\ell(\alpha)$ extends meromorphically on an open set of $\Sigma$ just like $\mathcal{P}_\ell(\alpha)$.
\begin{lemma}\label{Poissonaj}
Let $\ell\in \mathbb{N}$, $0<\beta<\gamma$ and let $\alpha$ be in \eqref{regionvalide}. Then
the Poisson operator $\mathcal{P}_{\ell}(\alpha)$ can be obtained from the resolvent as follows:
for $F=\sum_{j\leq \ell}F_j^- \in E_\ell$ and $\varphi\in e^{\beta\rho}L^2$
\begin{equation}\label{expression P vs resolvante}
\langle \mathcal{P}_\ell(\alpha)F,\varphi\rangle_2= i\sum_{j\leq \ell} \sqrt{p^2-2\lambda_j}\Big\langle F_j^-,a_j(\overline{\alpha},\varphi)\Big\rangle_{L^2(\Omega_\mathbb{T})}
\end{equation}
where $a_j(\alpha,\varphi)$ are the functionals obtained from \eqref{asymptoticRf}, holomorphic in $\alpha$ and linear in $\varphi$.
\end{lemma}
\begin{proof}
Let $\alpha=Q+ip$ with $p\in \mathbb{R}\setminus \big([-\sqrt{2\lambda_\ell},\sqrt{2\lambda_\ell}]\cup\bigcup_{j\geq \ell}\{\pm\sqrt{2\lambda_j}\}\big)$ and
let us take $F=\sum_{j\leq \ell} F_j^-$
with $ F_j^-\in {\rm Ker}(\mathbf{P}-\lambda_j)$ for $j\leq \ell$. Then from the construction of
$\mathcal{P}_\ell(\alpha) F$ (with a function $\chi=\chi(c)$) in the proof of Proposition \ref{poissonprop}
\[ \begin{split}
\langle \mathcal{P}_\ell(\alpha)F,\varphi\rangle & = \Big\langle \sum_{j\leq \ell}F^-_je^{ic\sqrt{p^2-2\lambda_j}}\chi, \varphi\Big\rangle- \Big\langle \mathbf{R}(\alpha)(\mathbf{H}-2\Delta_\alpha)\sum_{j\leq \ell}F^-_je^{ic\sqrt{p^2-2\lambda_j}}\chi,\varphi\Big\rangle\\
& =\Big\langle \sum_{j\leq \ell}F^-_je^{ic\sqrt{p^2-2\lambda_j}}\chi, \varphi\Big\rangle- \Big\langle ( \mathbf{H}-2\Delta_\alpha)\sum_{j\leq \ell}F^-_je^{ic\sqrt{p^2-2\lambda_j}}\chi, \mathbf{R}(\overline{\alpha})\varphi\Big\rangle.
\end{split}
\]
Here we used $ \mathbf{R}(\alpha)^*= \mathbf{R}(\overline{\alpha})={\bf R}(2Q-\alpha)$. Let $\theta_T$ be as in the proof of Lemma \ref{boundarypairing}.
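For the reader's convenience, let us record the algebra behind the identity $\mathbf{R}(\alpha)^*= \mathbf{R}(\overline{\alpha})={\bf R}(2Q-\alpha)$ used above; this is a sketch based on the convention $2\Delta_{Q+ip}=\tfrac{Q^2+p^2}{2}$, i.e. $2\Delta_\alpha=\tfrac{\alpha(2Q-\alpha)}{2}$, used throughout this section. Since $Q$ is real,
\[ \Delta_{2Q-\alpha}=\Delta_{\alpha},\qquad \Delta_{\overline{\alpha}}=\overline{\Delta_{\alpha}},\qquad \overline{\alpha}=2Q-\alpha \ \textrm{ for }\alpha\in Q+i\mathbb{R},\]
and $\mathbf{H}$ is self-adjoint, so that $\big((\mathbf{H}-2\Delta_{\alpha})^{-1}\big)^*=(\mathbf{H}-2\Delta_{\overline{\alpha}})^{-1}$ away from the spectrum; this is the identity used above, here understood for the continued resolvent on the critical line.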
We have
\begin{align*}
\lim_{T\to \infty}&\Big\langle \theta_T(c)(\mathbf{H}-2\Delta_\alpha)\sum_{j\leq \ell}
F^-_je^{ic\sqrt{p^2-2\lambda_j}}\chi, \mathbf{R}(\overline{\alpha})\varphi\Big\rangle\\
=& \Big\langle \sum_{j\leq \ell}F^-_je^{ic\sqrt{p^2-2\lambda_j}}\chi,\varphi\Big\rangle - \frac{1}{2}\lim_{T\to \infty}\Big\langle \sum_{j\leq \ell}F^-_je^{ic\sqrt{p^2-2\lambda_j}}\chi,
\theta''_T \mathbf{R}(\overline{\alpha})\varphi\Big\rangle\\
&- \lim_{T\to \infty}\Big\langle \sum_{j\leq \ell}F^-_je^{ic\sqrt{p^2-2\lambda_j}}\chi,
\theta'_T\partial_c \mathbf{R}(\overline{\alpha})\varphi\Big\rangle.
\end{align*}
Using now the asymptotic form \eqref{asymptoticRf}, the last two limits above
can be rewritten as
\[
\lim_{T\to \infty}\Big\langle \sum_{j\leq \ell}F^-_je^{ic\sqrt{p^2-2\lambda_j}}\chi,
\theta''_T\mathbf{R}(\overline{\alpha})\varphi\Big\rangle=\lim_{T\to \infty} \sum_{j\leq \ell}\Big\langle
F^-_je^{ic\sqrt{p^2-2\lambda_j}},
\theta''_Ta_j(\overline{\alpha},\varphi)e^{ic\sqrt{p^2-2\lambda_j}}\Big\rangle\]
\[
\lim_{T\to \infty}\Big\langle \sum_{j\leq \ell}F^-_je^{ic\sqrt{p^2-2\lambda_j}}\chi,
\theta'_T\partial_c\mathbf{R}(\overline{\alpha})\varphi\Big\rangle=\lim_{T\to \infty}\sum_{j\leq \ell}\Big\langle F^-_je^{ic\sqrt{p^2-2\lambda_j}},
\theta'_Ta_j(\overline{\alpha},\varphi)\partial_c(e^{ic\sqrt{p^2-2\lambda_j}})\Big\rangle
\]
and this easily yields
\[ \langle \mathcal{P}_\ell(\alpha)F,\varphi\rangle =i\sum_{j\leq \ell} \sqrt{p^2-2\lambda_j}\Big\langle F_j^-,a_j(\overline{\alpha},\varphi)\Big\rangle_{L^2(\Omega_\mathbb{T})}
\]
where $a_j$ are the functionals obtained from \eqref{asymptoticRf}. The result then extends holomorphically to the region \eqref{regionvalide} and meromorphically to \eqref{regiondextension}.
\end{proof}
\subsubsection{The Poisson operator far from ${\rm Re}(\alpha)=Q$.}
We have seen in Lemma \ref{firstPoisson} that the Poisson operator can be defined far from the spectrum. The problem is that the region of analyticity of $\mathcal{P}_\ell(\alpha)$ in Lemma \ref{firstPoisson} does not intersect (for $\ell$ large at least) the region of analyticity
of $\mathcal{P}_\ell(\alpha)$ from Proposition \ref{poissonprop}. The proposition below extends the construction of the Poisson operator to a region overlapping both regions in Lemma \ref{firstPoisson} and Proposition \ref{poissonprop} (see figure \ref{figure3}).
\begin{proposition}\label{Poissonintermediaire}
For $\ell$ fixed, the Poisson operator $\mathcal{P}_\ell(\alpha)$ of Lemma \ref{firstPoisson} extends analytically to the region
\begin{equation}\label{regionetendue}
\Big\{\alpha=Q+ip\in \mathbb{C}\,\, |\, \,{\rm Re}(\alpha)<Q\, ,\, {\rm Im}(p)>{\rm Im}(\sqrt{p^2-2\lambda_\ell})-\gamma/2\Big\}
\end{equation}
as a function in $e^{-({\rm Im}(\sqrt{p^2-2\lambda_\ell})+\varepsilon)\rho}\mathcal{D}(\mathcal{Q})$ for all $\varepsilon>0$.
\end{proposition}
\begin{proof}
As before, for $F=\sum_{j=0}^\ell F_j^-\in E_\ell$ with $F_j^-\in {\rm Ker}(\mathbf{P}-\lambda_j)$, we set $u_-(\alpha):=\sum_{j\leq \ell}F^-_je^{ic\sqrt{p^2-2\lambda_j}}$, and let $\chi\in C^\infty(\mathbb{R})$ be equal to $1$ on $(-\infty,-1)$ and with ${\rm supp}(\chi)\subset \mathbb{R}^-$.
We get
\[(\mathbf{H}-\tfrac{Q^2+p^2}{2})(\chi u_-(\alpha))= -\tfrac{1}{2}\chi''(c)u_-(\alpha)- \chi'(c)\partial_cu_-(\alpha)+e^{\gamma c}V\chi u_-(\alpha).\]
The first two terms on the right-hand side are in $e^{N\rho}L^2(\mathbb{R}\times \Omega_\mathbb{T})$ for all $N>0$ (indeed, they are compactly
supported in $c$), while the last term can be decomposed as
\[ \Pi_ke^{\gamma c}V\chi u_-(\alpha)+(1-\Pi_k)\chi e^{\gamma c}V u_-(\alpha)\in e^{(-{\rm Im}\sqrt{p^2-2\lambda_\ell}+\gamma/2)\rho}L^2(\mathbb{R};E_k)+e^{(-{\rm Im}\sqrt{p^2-2\lambda_\ell}+\gamma/2)\rho}\mathcal{D}'(\mathcal{Q})
\]
by using Lemma \ref{chiPikV}.
Using Proposition \ref{physicalsheet}, we can thus define, with the same formula as in Lemma \ref{definPellproba} and Proposition \ref{poissonprop}, the Poisson operator
\[u_+(\alpha):=\mathbf{R}(\alpha)(\mathbf{H}-\tfrac{Q^2+p^2}{2})(\chi u_-(\alpha))\in e^{(-{\rm Im}\sqrt{p^2-2\lambda_\ell}+\gamma/2)\rho}\mathcal{D}(\mathcal{Q}),\]
\[ \mathcal{P}_\ell(\alpha)F:=u_-(\alpha)\chi - u_+(\alpha)\]
in the region
\[ {\rm Im}(p)={\rm Re}(Q-\alpha)> \max_{j\leq \ell}{\rm Im}\sqrt{p^2-2\lambda_j}-\gamma/2={\rm Im}\sqrt{p^2-2\lambda_\ell}-\gamma/2.\]
\end{proof}
\begin{remark}\label{nonempty}
Notice that this region of holomorphy is non-empty and connected, since for $\ell$ fixed and $|p|=R\gg \lambda_\ell$
\[ {\rm Im}(p)-{\rm Im}\sqrt{p^2-2\lambda_\ell}+\gamma/2=\gamma/2+\mathcal{O}(\frac{\lambda_\ell}{R^2})>0.\]\end{remark}
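For the reader's convenience, here is a short justification of this expansion, under the assumption (sufficient for the remark) that ${\rm Im}(p)$ stays bounded as $|p|=R\to\infty$, and on the branch where $\sqrt{p^2-2\lambda_\ell}\sim p$:
\[ \sqrt{p^2-2\lambda_\ell}=p\Big(1-\frac{\lambda_\ell}{p^2}+\mathcal{O}\big(\tfrac{\lambda_\ell^2}{R^4}\big)\Big)=p-\frac{\lambda_\ell}{p}+\mathcal{O}\big(\tfrac{\lambda_\ell^2}{R^3}\big),\qquad
{\rm Im}\Big(\frac{\lambda_\ell}{p}\Big)=-\frac{\lambda_\ell\,{\rm Im}(p)}{R^2}=\mathcal{O}\big(\tfrac{\lambda_\ell}{R^2}\big),\]
so that ${\rm Im}(p)-{\rm Im}\sqrt{p^2-2\lambda_\ell}=\mathcal{O}(\lambda_\ell/R^2)$ (without the boundedness assumption one still gets $\mathcal{O}(\lambda_\ell/R)$).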
\begin{figure}
\centering
\begin{tikzpicture}
\tikzstyle{PR}=[minimum width=2cm,text width=3cm,minimum height=0.8cm,rectangle,rounded corners=5pt,draw,fill=red!30,text=black,font=\bfseries,text centered,text badly centered]
\tikzstyle{NS}=[minimum width=2cm,text width=3cm,minimum height=0.8cm,rectangle,rounded corners=5pt,draw,fill=blue!20,text=black,font=\bfseries,text centered,text badly centered]
\tikzstyle{AR}=[minimum width=2cm,text width=3cm,minimum height=0.8cm,rectangle,rounded corners=5pt,draw,fill=SeaGreen!50,text=black,font=\bfseries,text centered,text badly centered]
\tikzstyle{PRfleche}=[->,>= stealth,thick,red!60]
\tikzstyle{NSfleche}=[->,>= stealth,thick,blue!60]
\tikzstyle{Afleche}=[->,>= stealth,thick,SeaGreen]
\node[inner sep=0pt] (F1) at (0,0)
{\includegraphics[width=.6\textwidth]{figure3.png}};
\node (F) at (2,2.3){ ${\rm Re}(\alpha)=Q$};
\node[PR] (P) at (3,-2) {Probabilistic region \eqref{regionLemma4.4}};
\node[NS] (NSR) at (5,3) {Near spectrum region \eqref{regionvalide}};
\node[AR] (AR) at (4,1) {Analyticity region \eqref{regionetendue} for $\mathcal{P}_\ell(\alpha)$};
\node (P1) at (-3,-0.5) {};
\node (N1) at (1,3) {};
\node (A1) at (-2,2) {};
\draw[PRfleche] (P) to [bend left=25] (P1);
\draw[NSfleche] (NSR) to [bend right=25] (N1);
\draw[Afleche] (AR) to [bend left=25] (A1);
\end{tikzpicture}
\caption{The green colored region corresponds to the region \eqref{regionetendue} of validity of Proposition \ref{Poissonintermediaire} for the analyticity of the Poisson operator $\mathcal{P}_\ell(\alpha)$ with $\ell>0$ (for the plot, we take $\lambda_\ell=4$, $\gamma=1/2$). It overlaps the probabilistic region \eqref{regionLemma4.4} and the near spectrum region \eqref{regionvalide}.}
\label{figure3}
\end{figure}
\subsection{The Scattering operator}\label{section_scattering}
\begin{definition}
Let $\ell\in \mathbb{N}$ and $\alpha=Q+ip$ with $p\in\mathbb{R}\setminus (-\sqrt{2\lambda_\ell},\sqrt{2\lambda_\ell})$.
The scattering operator $\mathbf{S}_\ell(\alpha): E_\ell \to E_\ell$ for the $\ell$-th layer (also called $\ell$-scattering operator) is the operator defined as follows:
let $F=\sum_{j\leq \ell}F_j\in E_\ell$ (with $F_j\in {\rm Ker}(\mathbf{P}-\lambda_j)$) and let $F_j^-:=(p^2-2\lambda_j)^{-1/4}F_j$, then we set
\[ \mathbf{S}_\ell(\alpha)F:=\left\{ \begin{array}{ll}
\sum_{j \leq \ell}F_j^+(\alpha)(p^2-2\lambda_j)^{1/4}, & \textrm{ if }p>\sqrt{2\lambda_\ell}, \\
\sum_{j\leq \ell}\widehat{F}_j^+(\alpha)(p^2-2\lambda_j)^{1/4}, & \textrm{ if }p<-\sqrt{2\lambda_\ell}.
\end{array}\right.\]
where $F_j^+(\alpha),\hat{F}_j^+(\alpha)$ are the functions in \eqref{expansionP} and \eqref{regimebar}. We will call more generally
$$\mathbf{S}(\alpha):=\left\{\begin{array}{lll}
\bigcup_{\ell, 2\lambda_\ell<p^2}\ker ({\bf P}-\lambda_\ell) &\to & \bigcup_{\ell, 2\lambda_\ell<p^2}
\ker({\bf P}-\lambda_\ell) \\
F\in E_\ell & \mapsto & \mathbf{S}_\ell(\alpha)F\end{array}\right.
$$
the scattering operator, where we use $\mathbf{S}_\ell(\alpha)_{|E_{\ell'}}= \mathbf{S}_{\ell'}(\alpha) $ for $\ell'<\ell$.
\end{definition}
Let us define the map $\omega_\ell: \Sigma\to \Sigma$ by the following property: if
$r_j(\alpha)=\sqrt{p^2-2\lambda_j}$ are the analytic functions on $\Sigma$ used to define
this ramified covering (with $\alpha=Q+ip$ in the physical sheet),
then $\omega_\ell(\alpha)$ is the point in $\Sigma$ such that
\[ r_j(\omega_\ell(\alpha))=\left\{ \begin{array}{ll}
-r_j(\alpha) & \textrm{ if }j \leq \ell, \\
r_j(\alpha) & \textrm{ if }j>\ell
\end{array}\right..\]
As a consequence of Proposition \ref{poissonprop} and Lemma \ref{boundarypairing}, we obtain the following.
\begin{corollary}\label{mainthscat}
For each $\ell\in\mathbb{N}$, the $\ell$-th scattering operator $\mathbf{S}_\ell(\alpha)$ is unitary on $E_\ell$ if $\alpha=Q+ip$ ($p\in\mathbb{R}$) is such that $2\lambda_\ell<p^2<\min\{2\lambda_j\, |\, \lambda_j>\lambda_\ell\}$.
It also satisfies the functional equation
\begin{equation}\label{fcteqS}
\mathbf{S}_\ell(\alpha)\mathbf{S}_\ell(\omega_\ell(\alpha))={\rm Id}.
\end{equation}
Moreover it extends meromorphically in \eqref{regiondextension} if $\beta<\gamma$. It also satisfies the following functional equation for each $F=\sum_{j=0}^\ell F_j\in E_\ell$
\begin{equation}\label{fcteqP}
\mathcal{P}_\ell(\alpha)\sum_{j=0}^\ell (p^2-2\lambda_j)^{-1/4}F_j=\mathcal{P}_\ell(\omega_\ell(\alpha))\mathbf{S}_\ell(\alpha)\sum_{j=0}^\ell F_j.
\end{equation}
\end{corollary}
\begin{proof}
The unitarity of $\mathbf{S}_\ell(\alpha)$ on the line ${\rm Re}(\alpha)=Q$ follows directly from Lemma \ref{boundarypairing} applied with $u_\ell=\mathcal{P}_\ell(\alpha)F$. The functional equation \eqref{fcteqS} reads $\mathbf{S}_\ell(\alpha)\mathbf{S}_\ell(2Q-\alpha)={\rm Id}$ on the line ${\rm Re}(\alpha)=Q$
and that comes directly from the uniqueness statement in Proposition \ref{poissonprop} on the line. The extension of ${\bf S}(\alpha)$ with respect to $\alpha$ comes directly from the meromorphy
of the $a_j(\alpha,F)$ in Proposition \ref{extensionresolvent}, and the functional identity \eqref{fcteqS} then extends meromorphically.
The functional equation \eqref{fcteqP} also comes from the uniqueness of the Poisson operator.
\end{proof}
\begin{theorem}\label{spectralmeasure}
For each $j\in \mathbb{N}$, let $(h_{jk})_{k=1,\dots,k(j)}$ be an orthonormal basis of
$\ker_{L^2(\Omega_\mathbb{T})}(\mathbf{P}-\lambda_j)$. The following spectral resolution holds for all $ \varphi,\varphi'\in e^{\beta\rho}L^2(\mathbb{R}\times\Omega_\mathbb{T})$ with $\beta>0$:
\begin{equation}\label{spectraldecomposH}
\langle \varphi\,|\,\varphi'\rangle_2 = \frac{1}{2\pi}\sum_{j=0}^{\infty} \sum_{k=1}^{k(j)}\int_{0}^\infty
\Big\langle \varphi \,|\,\mathcal{P}_j\Big(Q+i\sqrt{p^2+2\lambda_j}\Big)h_{jk}\Big\rangle
\Big\langle \mathcal{P}_j\Big(Q+i\sqrt{p^2+2\lambda_j}\Big)h_{jk}\,|\,\varphi'\Big\rangle {\rm d}p.
\end{equation}
As a consequence the spectrum of ${\bf H}$ is absolutely continuous.
\end{theorem}
\begin{proof}
We recall the Stone formula: for $\varphi,\varphi' \in e^{\beta \rho}L^2$ with $\beta>0$,
\[ \begin{split}
\langle \varphi\,|\,\varphi'\rangle_2=& \frac{1}{2\pi i}\lim_{\varepsilon\to 0^+}\int_{0}^{\infty}\langle [(\mathbf{H}-\tfrac{Q^2}{2}-t-i\varepsilon)^{-1}-(\mathbf{H}-\tfrac{Q^2}{2}-t+i\varepsilon)^{-1}]\varphi\,|\,\varphi'\rangle {\rm d}t\\
=& \frac{1}{2\pi i}\lim_{\varepsilon\to 0^+}\int_{0}^{\infty}\langle [(\mathbf{H}-\tfrac{Q^2+p^2}{2}-i\varepsilon)^{-1}-(\mathbf{H}-\tfrac{Q^2+p^2}{2}+i\varepsilon)^{-1}]\varphi\,|\,\varphi'\rangle p{\rm d}p\\
&= \frac{1}{2\pi i}\int_0^\infty \langle [(\mathbf{R}(Q+ip)-\mathbf{R}(Q-ip))]\varphi\,|\,\varphi'\rangle p{\rm d}p.
\end{split}\]
Here $\alpha=Q+ip$ (with $p>0$) has to be viewed as an element in $\Sigma$ obtained by limit $Q+ip-\varepsilon$ as $\varepsilon\to 0^+$ and, if $\ell$ is the largest integer such that $2\lambda_\ell\leq p^2$, we write $\overline{\alpha}$ for the point $\omega_\ell(\alpha)$ on $\Sigma$.
For $\alpha=Q+ip$ with $p\in \mathbb{R}^+$, we have for $\varphi \in e^{\beta \rho}L^2$ with $\beta>0$
\[ \begin{split}
\frac{1}{2i}\langle (\mathbf{R}(\alpha)-\mathbf{R}(\overline{\alpha}))\varphi\,|\,\varphi\rangle&= {\rm Im}\langle \mathbf{R}(\alpha)\varphi\,|\,\varphi\rangle\\
&= {\rm Im}\langle \mathbf{R}(\alpha)(\mathbf{H}-2\Delta_\alpha)\mathbf{R}(\overline{\alpha})\varphi\,|\,\varphi\rangle \\
&= {\rm Im}\langle (\mathbf{H}-2\Delta_\alpha)\mathbf{R}(\overline{\alpha})\varphi\,|\,\mathbf{R}(\overline{\alpha})\varphi\rangle. \end{split}\]
Here we have used that $(\mathbf{H}-2\Delta_\alpha)\mathbf{R}(\overline{\alpha})={\rm Id}$ on $e^{\beta\rho} L^2$ provided $p\in \mathbb{R}$ and that
$\langle \mathbf{R}(\overline{\alpha})\varphi\,|\,\varphi'\rangle=\langle \varphi,{\bf R}(\alpha)\varphi'\rangle$
for $\varphi,\varphi'\in e^{\beta \rho}L^2$, this last fact coming from the identity
$\mathbf{R}(\overline{\alpha})=\mathbf{R}(\alpha)^*$ for ${\rm Re}(\alpha)<Q$ and passing to the limit ${\rm Re}(\alpha)\to Q$.
Let $\theta_T(c)$ be as in the proof of Lemma \ref{boundarypairing}. We have
\[\begin{split}
\langle (\mathbf{H}-2\Delta_\alpha)\mathbf{R}(\overline{\alpha})\varphi,\mathbf{R}(\overline{\alpha})\varphi\rangle& =\lim_{T\to +\infty}
\langle \theta_T(\mathbf{H}-2\Delta_\alpha)\mathbf{R}(\overline{\alpha})\varphi,\mathbf{R}(\overline{\alpha})\varphi\rangle\\
&= \lim_{T\to +\infty}
\langle \theta_T\mathbf{R}(\overline{\alpha})\varphi\,|\,\varphi\rangle+\lim_{T\to \infty}\tfrac{1}{2}\langle [\partial^2_c,\theta_T]\mathbf{R}(\overline{\alpha})\varphi\,|\,\mathbf{R}(\overline{\alpha})\varphi\rangle .
\end{split}\]
Using \eqref{asymptoticRf} and arguing as in the proof of Lemma \ref{boundarypairing}, as $T\to \infty$ we get
\[\begin{split}
\langle (\mathbf{H}-2\Delta_\alpha)\mathbf{R}(\overline{\alpha})\varphi,\mathbf{R}(\overline{\alpha})\varphi\rangle
=& \langle \mathbf{R}(\overline{\alpha})\varphi,\varphi\rangle
+\tfrac{1}{2}\lim_{T\to \infty}\sum_{j\leq \ell}\|a_j(\overline{\alpha},\varphi)\|^2_{L^2(\Omega_\mathbb{T})}
\partial_c(e^{ic\sqrt{p^2-2\lambda_j}})_{|c=-T}e^{iT\sqrt{p^2-2\lambda_j}}\\
& -\tfrac{1}{2}\lim_{T\to \infty}\sum_{j\leq \ell}\|a_j(\overline{\alpha},\varphi)\|_{L^2(\Omega_\mathbb{T})}^2 e^{-iT\sqrt{p^2-2\lambda_j}}\partial_c(e^{-ic\sqrt{p^2-2\lambda_j}})_{|c=-T}\\
=& \langle \mathbf{R}(\overline{\alpha})\varphi,\varphi\rangle-i\sum_{j\leq \ell}{\sqrt{p^2-2\lambda_j}}\|a_j(\overline{\alpha},\varphi)\|_{L^2(\Omega_\mathbb{T})}^2.
\end{split}\]
We conclude that
\begin{equation}\label{ImRalpha}
-{\rm Im}\langle \mathbf{R}(\overline{\alpha})\varphi,\varphi\rangle= \tfrac{1}{2}\sum_{j\leq \ell}{\sqrt{p^2-2\lambda_j}}\|a_j(\overline{\alpha},\varphi)\|^2_{L^2(\Omega_\mathbb{T})}.
\end{equation}
By Lemma \ref{Poissonaj}, we have
\begin{equation}\label{Pell*}
\mathcal{P}_\ell(\alpha)^*\varphi=-i\sum_{j\leq \ell}\, a_j(\overline{\alpha},\varphi)\sqrt{p^2-2\lambda_j}.
\end{equation}
By polarisation and by denoting $\Pi_j$ the orthogonal projectors on ${\rm Ker}(\mathbf{P}-\lambda_j)$, we deduce from \eqref{ImRalpha} and \eqref{Pell*} that
$$-{\rm Im}\langle \mathbf{R}(\overline{\alpha})\varphi,\varphi'\rangle= \tfrac{1}{2}\sum_{j\leq \ell}\frac{1}{\sqrt{p^2-2\lambda_j}}\langle \Pi_j \mathcal{P}_\ell(\alpha)^*\varphi, \Pi_j \mathcal{P}_\ell(\alpha)^*\varphi' \rangle_{L^2(\Omega_\mathbb{T})}.$$
Rewriting $\mathcal{P}_\ell(\alpha)^*\varphi=\sum_{j=0}^\ell\sum_{k=1}^{k(j)}\langle\varphi,\mathcal{P}_\ell(\alpha)h_{jk}\rangle_2 h_{jk}$, we obtain
\[ \langle \varphi,\varphi'\rangle_2 = \frac{1}{2\pi}\int_{0}^\infty \sum_{\ell=0}^{\infty}\operatorname{1\hskip-2.75pt\relax l}_{[\sqrt{2\lambda_{\ell}},\sqrt{2\lambda_{\ell+1}})}(p)\sum_{j=0}^\ell\sum_{k=1}^{k(j)}
\langle \varphi,\mathcal{P}_\ell(Q+ip)h_{jk}\rangle_2 \overline{\langle \varphi',\mathcal{P}_\ell(Q+ip)h_{jk}\rangle_2} \frac{p}{\sqrt{p^2-2\lambda_j}} dp.\]
This finally can be rewritten, using \eqref{egalitePell}, as
\[\begin{split}
\langle \varphi,\varphi'\rangle_2 = &\frac{1}{2\pi}\sum_{j=0}^{\infty}\sum_{\ell\geq j}\int_{0}^\infty \operatorname{1\hskip-2.75pt\relax l}_{[\sqrt{2\lambda_{\ell}},\sqrt{2\lambda_{\ell+1}})}(p)\sum_{k=1}^{k(j)}
\langle \varphi,\mathcal{P}_\ell(Q+ip)h_{jk}\rangle \overline{\langle \varphi',\mathcal{P}_\ell(Q+ip)h_{jk}\rangle} \frac{p}{\sqrt{p^2-2\lambda_j}} dp\\
= &\frac{1}{2\pi}\sum_{j=0}^{\infty}\int_{\sqrt{2\lambda_j}}^\infty \sum_{k=1}^{k(j)}
\langle \varphi,\mathcal{P}_j(Q+ip)h_{jk}\rangle \overline{\langle \varphi',\mathcal{P}_j(Q+ip)h_{jk}\rangle} \frac{p}{\sqrt{p^2-2\lambda_j}}dp\\
=& \frac{1}{2\pi}\sum_{j=0}^{\infty} \sum_{k=1}^{k(j)}\int_{0}^\infty
\Big\langle \varphi ,\mathcal{P}_j\Big(Q+i\sqrt{r^2+2\lambda_j}\Big)h_{jk}\Big\rangle
\overline{\Big\langle \varphi',\mathcal{P}_j\Big(Q+i\sqrt{r^2+2\lambda_j}\Big)h_{jk}\Big\rangle} dr
\end{split}\]
where in the last line we performed the change of variables $r=\sqrt{p^2-2\lambda_j}$.
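Explicitly, $r^2=p^2-2\lambda_j$ gives $r\,{\rm d}r=p\,{\rm d}p$, hence $\frac{p}{\sqrt{p^2-2\lambda_j}}{\rm d}p={\rm d}r$, the lower limit $p=\sqrt{2\lambda_j}$ becomes $r=0$, and $Q+ip=Q+i\sqrt{r^2+2\lambda_j}$.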
\end{proof}
\subsection{Holomorphic parametrization of the eigenstates}\label{sub:holomorphic}
We conclude this section by defining the generalized eigenfunctions $\Psi_{\alpha,{\bf k},{\bf l}}$ by using the standard orthonormal basis $(\psi_{{\bf kl}})_{{\bf k,l}\in \mathcal{N}}$ of $L^2(\Omega_\mathbb{T}) $ made of Hermite polynomials and introduced below \eqref{firstlength}. We set
\[ \Psi_{\alpha,{\bf k},{\bf l}}:=\mathcal{P}_\ell(Q-\sqrt{(Q-\alpha)^2-2\lambda_{{\bf kl}}})\psi_{{\bf kl}}\]
where $\ell$ is defined by $\lambda_\ell=\lambda_{{\bf kl}}$ and $\sqrt{z}$ is used here with the convention that the cut is on $\mathbb{R}^-$ (i.e. $\sqrt{Re^{i\theta}}=\sqrt{R}e^{i\theta/2}$ for $\theta\in (-\pi,\pi)$). These are eigenfunctions of ${\bf H}$ with eigenvalues $2\Delta_\alpha+\lambda_{{\bf kl}}$. Using \eqref{regionvalide} and \eqref{regionetendue} for the holomorphy of $\mathcal{P}_\ell(\cdot)$, we obtain the following.
\begin{proposition}\label{holomorphiceig}
Let $\ell\geq 0$ be such that $\lambda_{{\bf kl}}=|{\bf k}|+|{\bf l}|=\lambda_\ell$. The function $\Psi_{\alpha,{\bf k},{\bf l}}\in e^{- \beta \rho}\mathcal{D}(\mathcal{Q})$ is an eigenfunction of ${\bf H}$ with eigenvalue $2\Delta_\alpha+\lambda_\ell$ for all $\beta>Q-{\rm Re}(\alpha)$, and it is holomorphic on the set
\begin{equation}\label{defWell}
W_\ell:= \Big\{ \alpha \in \mathbb{C} \setminus \mathcal{D}_\ell\, |\,
{\rm Re}(\alpha)\leq Q, {\rm Re}\sqrt{(Q-\alpha)^2-2\lambda_\ell}>{\rm Re}(Q-\alpha)-\gamma/2\Big\}
\end{equation}
where $\mathcal{D}_\ell:=\bigcup_{j\geq \ell}\{Q\pm i\sqrt{2(\lambda_j-\lambda_\ell)}\}$ is a discrete set, at the points of which
$\Psi_{\alpha,{\bf k},{\bf l}}$ remains continuous in $\alpha$ but has square root singularities.
The set $W_\ell$ is a connected subset of the half-plane ${\rm Re}(\alpha)\leq Q$, containing $(Q+i\mathbb{R})\setminus \mathcal{D}_\ell$ and the real half-line $(-\infty,Q-\frac{2\lambda_\ell}{\gamma}-\frac{\gamma}{4})$.
\end{proposition}
Finally, one has for $P\in \mathbb{R}$, $\Psi_{Q+iP,{\bf k},{\bf l}}=\mathcal{P}_\ell(Q+i\sqrt{P^2+2\lambda_{{\bf kl}}})\psi_{{\bf kl}}$ and one can rewrite \eqref{spectraldecomposH} as
\begin{equation}\label{spectraldecomposH2}
\langle \varphi\,|\,\varphi'\rangle_2 = \frac{1}{2\pi}\sum_{{\bf k,l}\in \mathcal{N}} \int_{0}^\infty
\langle \varphi \,|\,\Psi_{Q+iP,{\bf k},{\bf l}}\rangle
\langle \Psi_{Q+iP,{\bf k},{\bf l}}\,|\,\varphi'\rangle {\rm d}P.
\end{equation}
\section{Probabilistic representation of the Poisson operator}\label{sec:proba}
In Section \ref{sec:scattering}, we constructed the generalized eigenstates $\Psi_{\alpha,{\bf k},{\bf l}}$ by means of the Poisson operator (see Prop \ref{holomorphiceig}) of the Liouville Hamiltonian $\mathbf{H}$ on the spectrum line $\alpha\in Q+i\mathbb{R}$ and showed that these generalized eigenstates can be analytically continued in the parameter $\alpha $ over the region $W_\ell$ defined by \eqref{defWell} for $|{\bf k}|+|{\bf l}|=\lambda_\ell$. As in the case of the $\mathbf{H}^0$-eigenstates, we will need to perform a change of basis. So, similarly to Prop \ref{prop:mainvir0} item 4, we set for $\nu,\tilde{\nu}\in\mathcal{T}$ Young diagrams, with $N:=|\nu|+|\tilde\nu|$,
\begin{equation}\label{defdescendents}
\begin{gathered}
\Psi_{\alpha,\nu,\tilde{\nu}}: = \sum_{\mathbf{k},\mathbf{l}, |{\bf k}|+|{\bf l}|=N}M^{N}_{\alpha,\mathbf{k}\mathbf{l},\nu\tilde\nu}\Psi_{\alpha,\mathbf{k},\mathbf{l}},
\end{gathered}
\end{equation}
with the convention that $\Psi_{\alpha,\emptyset, \emptyset}=\Psi_{\alpha,0,0}$, which will be denoted by $\Psi_{\alpha}$. The $(\Psi_{\alpha,\nu,\tilde{\nu}})_{\nu,\tilde\nu}$ (with $\nu\not=\emptyset$ or $\tilde\nu\not=\emptyset$) will be called descendant states. Since the coefficients $M^{N}_{\alpha,\mathbf{k}\mathbf{l},\nu\tilde\nu}$ are analytic in $\alpha\in\mathbb{C}$ (see Prop \ref{prop:mainvir0}), $\Psi_{\alpha,\nu,\tilde{\nu}}$ satisfy the same holomorphy properties (namely Prop \ref{holomorphiceig}) as $\Psi_{\alpha,{\bf k},{\bf l}}$ provided that $|{\bf k}|+|{\bf l}|=|\nu|+|\tilde\nu|$. The main goal of this section is to compute the correlation functions of these descendant states and our strategy can be summarized as follows.
It turns out that for $\alpha$ real in the physical region, which we will call the probabilistic region, we will be able to give a probabilistic representation for the descendant states thanks to the {\it intertwining} construction based on the descendant states $\Psi^0_{\alpha,\nu,{\tilde{\nu}}}$ of the GFF theory $\mu=0$:
\begin{equation} \label{secward:intert}
\lim_{t\to +\infty}e^{t (2\Delta_\alpha+|\nu|+|{\tilde{\nu}}|)}e^{-t\mathbf{H}}\Psi^0_{\alpha,\nu,{\tilde{\nu}}}=\Psi_{\alpha,\nu,{\tilde{\nu}}}
\end{equation}
for $\alpha$ real and negative enough (see Proposition \ref{descconvergence}). Indeed, in Subsection \ref{SET}, we express the free descendant states in terms of contour integrals of the Stress Energy Tensor (SET) of the GFF theory, which we then plug into \eqref{secward:intert}. Applying the Feynman-Kac formula for the propagator $e^{-t\mathbf{H}}$ in the expression \eqref{secward:intert}, we obtain a probabilistic expression for the descendant states $\Psi_{\alpha,\nu,\tilde{\nu}}$ in terms of contour integrals of the SET in LCFT\footnote{Recast in the language of CFTs, this is somewhat equivalent to stating that descendant states can be obtained via the action of Virasoro generators on primary states.} (see Corollary \ref{propcontour}). The Ward identities (Proposition \ref{proofward}) then allow us to translate these contour integrals
into differential operators: more precisely, correlation functions of descendant states $\Psi_{\alpha,\nu,\tilde{\nu}}$ can be obtained in terms of differential operators acting on the correlation functions of primary states $\Psi_{\alpha}$. Finally we will analytically continue these relations from the region $\{\alpha\in\mathbb{R};\alpha<Q\}$ back to the spectrum line and derive, in Subsection \ref{sub:desc3point}, the consequences for the structure of 3-point correlation functions involving descendant states.
\subsection{Highest weight states}\label{sub:HW}
Recall the definition \eqref{psialphadef} of the highest weight state $\Psi^0_\alpha(c,\varphi)=e^{(\alpha-Q)c}$ for $\alpha\in\mathbb{C}$ of the $\mu=0$ theory. From Proposition \ref{Pellproba} item 2 (applied with $F=1$), we know that for $\alpha\in\mathbb{R}$ with $\alpha<Q$, the state
\begin{align}\label{Psialpha}
\Psi_\alpha=\mathcal{P}_0(\alpha)1
\end{align}
is given by the large time limit
\begin{equation}\label{intertprimary}
\Psi_\alpha=\lim_{t\to\infty}e^{2t\Delta_\alpha}e^{-t\mathbf{H}}\Psi^0_\alpha,\quad dc\otimes \P_\mathbb{T} \text{ a.e.}
\end{equation}
where $\Delta_\alpha$ denotes the conformal weight \eqref{deltaalphadef}. In physics (or representation theory) terminology the state
$\Psi_\alpha$ is the highest weight state corresponding to the {\it primary field} $V_\alpha$. Combining \eqref{intertprimary} with the Feynman-Kac formula \eqref{fkformula} leads to the probabilistic representation for $\alpha<Q$
\begin{equation}\label{defvalpha}
\Psi_\alpha(c,\varphi) :=e^{(\alpha-Q)c} \mathds{E}_\varphi\Big[ \exp\Big(-\mu e^{\gamma c}\int_{\mathbb{D}} |x|^{-\gamma\alpha }M_\gamma(\dd x)\Big)\Big].
\end{equation}
We recall here that the integrability of $ |x|^{-\gamma\alpha }$ with respect to $M_\gamma(\dd x)$ is detailed in \cite{DKRV}.
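As a simple consistency check of \eqref{defvalpha}, setting $\mu=0$ makes the expectation equal to $1$ and the right-hand side reduces to the free highest weight state $\Psi^0_\alpha(c,\varphi)=e^{(\alpha-Q)c}$ of the $\mu=0$ theory, as expected.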
\begin{remark}
In forthcoming work, we will show that, for $\alpha\in (\tfrac{2}{\gamma},Q)$, we have as $c\to-\infty$
$$ \Psi_\alpha (c,\varphi) = e^{(\alpha-Q)c}+e^{(Q-\alpha)c} {R}(\alpha) +e^{(Q-\alpha)c}o(1)$$
with $o(1)\to 0$ in $L^2(\Omega_\mathbb{T})$ as $c\to-\infty$, where $R$ is the reflection coefficient defined in \cite{dozz}; it thus appears as the scattering coefficient of constant functions: for $\ell=0$ we have $\mathbf{S}_0(\alpha)=R(\alpha)\, {\rm Id}$. More generally, we will show that the scattering matrix is diagonal.
\end{remark}
\subsection{Descendant states}\label{sub:desc}
Recall (Subsection \ref{repth}) that for $\mu=0$ we have descendant states given by
\begin{align*}
\Psi^0_{\alpha,\nu,\tilde\nu}(c,\varphi)=\mathcal{Q}_{\alpha,\nu,\tilde{\nu}}(\varphi)e^{(\alpha-Q)c}
\end{align*}
where $\mathcal{Q}_{\alpha,\nu,\tilde{\nu}}$ are eigenstates of the operator $\bf P$:
\begin{align*}
{\bf P}\mathcal{Q}_{\alpha,\nu,\tilde{\nu}}=(|\nu|+|\tilde{\nu}|)\mathcal{Q}_{\alpha,\nu,\tilde{\nu}}
\end{align*}
so that
\begin{align*}
{\bf H}^0\Psi^0_{\alpha,\nu,\tilde{\nu}}=(2\Delta_\alpha+|\nu|+|\tilde{\nu}|)\Psi^0_{\alpha,\nu,\tilde{\nu}}.
\end{align*}
From Proposition \ref{Pellproba} we infer the following
\begin{proposition}\label{descconvergence}
Let $\alpha<\big(Q-\frac{2(|\nu|+|\tilde{\nu}|)}{\gamma}-\frac{\gamma}{4}\big)\wedge (Q-\gamma)$. Then the limit
\begin{equation}\label{descremi}
\lim_{t\to +\infty}e^{t (2\Delta_\alpha+|\nu|+|{\tilde{\nu}}|)}e^{-t\mathbf{H}}\Psi^0_{\alpha,\nu,{\tilde{\nu}}}=\Psi_{\alpha,\nu,{\tilde{\nu}}}
\end{equation}
holds in $e^{-(\beta+\gamma/2)\rho}L^2(\mathbb{R}\times\Omega_\mathbb{T})$ for $\beta>Q-\alpha-\gamma/2$.
\end{proposition}
\begin{proof}
Write $|\nu|+|{\tilde{\nu}}|=\lambda_j$ for some $j$. We apply Proposition \ref{Pellproba} item 1 but we have to make a small notational warning: indeed recall that in Section \ref{sec:scattering}, eigenvalues are parametrized by $2\Delta_\alpha$ whereas here eigenvalues correspond to $ 2\Delta_\alpha+|\nu|+|{\tilde{\nu}}| = 2\Delta_\alpha+\lambda_j $. So let us call $\alpha'$ the $\alpha$ in the statement of Proposition \ref{Pellproba}, and write it as $\alpha'=Q+ip$ with $p^2=2\lambda_j-(Q-\alpha)^2$ ($p\in i\mathbb{R}$) in such a way that
$$2\Delta_{\alpha'}=2\Delta_{\alpha}+\lambda_j,\quad\text{otherwise stated }\sqrt{p^2-2\lambda_j}=i(Q-\alpha).$$
In particular ${\rm Im}\sqrt{p^2-2\lambda_j}=Q-\alpha>\gamma$ so that we can choose $\chi=1$ in Proposition \ref{Pellproba} item 1.
Let $\ell\geq 1$, $j\leq \ell$ and $F=\mathcal{Q}_{\alpha,\nu,{\tilde{\nu}}}$. Then the limit \eqref{descremi} exists in $e^{-(\beta+\gamma/2) \rho}L^2$ if
$\beta>{\rm Im}\sqrt{p^2-2\lambda_j}-\gamma/2=Q-\alpha-\gamma/2>0$ and $-p^2>\beta^2$. In conclusion we get $\alpha<Q-\gamma/2=2/\gamma$ and $-p^2>(Q-\alpha-\gamma/2)^2$. Substituting $ p^2=2\lambda_j-(Q-\alpha)^2$ in the latter, we arrive at the relation $(Q-\alpha)^2-2\lambda_j>(Q-\alpha-\gamma/2)^2$, which can be solved to find our condition.
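For completeness, the elementary computation behind this last step is
\[ (Q-\alpha)^2-2\lambda_j>(Q-\alpha-\tfrac{\gamma}{2})^2 \iff \gamma(Q-\alpha)-\tfrac{\gamma^2}{4}>2\lambda_j \iff \alpha< Q-\frac{2\lambda_j}{\gamma}-\frac{\gamma}{4},\]
which, since $\lambda_j=|\nu|+|\tilde\nu|$, is precisely the first bound in the statement.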
\end{proof}
\subsection{Stress Energy Field}\label{SET}
In this section we construct a probabilistic representation for the Virasoro descendants \eqref{psibasis}. This can be done in terms of a local field, the {\it stress-energy tensor}, formally given for $z\in\mathbb{D}$ by
\begin{equation}\label{stress}
T(z):=Q\partial^2_{z}X(z)-(\partial_z X(z))^2+\mathds{E}[(\partial_zX(z))^2].
\end{equation}
The stress tensor does not make sense as a random field but can be given a meaning at the level of correlation functions as the limit of $T_\epsilon(z)$, defined by \eqref{stress} with $X$ replaced by a regularized field $X_\epsilon$ which we take in the form
\begin{align}\label{XepsDdef}
X_{\epsilon}(z):= \langle X, f_{\epsilon,z}\rangle_{\mathbb{D}}
\end{align}
for a suitable test function $f_{\epsilon,z}$.
We will use two regularizations in what follows. For the first one we take $f_{\epsilon,z}(u)=\frac{1}{\epsilon^2} \varrho(\frac{z-u}{\epsilon} )$ with non-negative $\varrho\in C_c^\infty(\mathbb{C})$ with $\varrho(0)=1$ and $\int_{\mathbb{C}} \varrho(x) \dd x=1$. For this regularization we have for $t\geq 0$ the following scaling relation
\begin{align}\label{scalingrelation}
S_{-t}T_\epsilon(z)=e^{-2t}T_{e^{-t}\epsilon}(e^{-t}z).
\end{align}
For the second regularization we write $z=e^{-t+i\theta}$ and take $f_{\epsilon,z}(e^{-s+i\theta'})=\frac{1}{2\pi\epsilon}\rho(\frac{t-s}{\epsilon})\sum_{|n|<\epsilon^{-1}}e^{in(\theta-\theta')}$ with $\rho\in C_c^\infty(\mathbb{R})$, $\rho(0)=1$ and $\int_{\mathbb{R}} \rho(x) \dd x=1$. This regularization has the property that $T_\epsilon(z)$ depends only on the Fourier components $\varphi_n$ with $|n|<1/\epsilon$ when we decompose $X=P\varphi+X_\mathbb{D}$. We denote also by $\bar T(z)$ the complex conjugate of $T(z)$.
Our goal is to express the action of the Virasoro generators \eqref{virassoro} and \eqref{virassorotilde} on the state $U_0F$ in terms of the states
\begin{align}\label{TbarT}
U_0\big(\prod_{i=1}^kT(u_i)\prod_{j=1}^l \bar T(v_j) F\big)
\end{align}
which will be defined as limits of regularized expressions.
We start by specifying a suitable class of $F$ for which \eqref{TbarT} makes sense.
Let $\delta<1$. We introduce the set
${\mathcal E}_\delta$ defined by
\begin{equation}\label{defEcall}
{\mathcal E}_\delta:=\big\{f\in C_0^\infty(\delta \mathbb{D}) \mid f(e^{-t+i\theta})=\sum_{n\in\mathbb{Z}}f_n(t)e^{in\theta}, \text{ with }f_n\in C_0^\infty((-\ln\delta,\infty)) \text{ and }f_n=0\text{ for } |n| \text{ large enough}\big\}.
\end{equation}
Define ${\mathcal F}_\delta\subset {\mathcal F}_{ \mathbb{D}}$ by
\begin{equation}\label{Fform}
{\mathcal F}_\delta= {\rm span}\ \big\{ \prod_{i=1}^l \langle g_i, c+X\rangle_\mathbb{D} e^{\langle f,c+X \rangle _\mathbb{D}}; \; l \geq 0, \: f,g_i \in {\mathcal E}_\delta \big \} .
\end{equation}
We note that $U_0F\in e^{\beta|c|}L^2(\mathbb{R}\times\Omega_\mathbb{T})$ for $\beta>|f_0-Q|$, where $f_0:=\langle f,1 \rangle_\mathbb{D}$, and that $U_0F$ is in the domain of the Virasoro operators $\mathbf{L}_{-\nu}^0\tilde{\mathbf{L}}_{-\tilde\nu}^0$ defined in Subsection \ref{repth} since it depends on a finite number of $\varphi_n$. Let
\begin{align}\label{caOt}
{\mathcal O}_\delta=\{({\bf u},{\bf v})\in\mathbb{C}^{m+n}\,|\, \delta<|u_j|,|v_j|<1,\; \forall j\neq j', \: \ |u_j|\neq |u_{j'}|, |v_j|\neq |v_{j'}|, |u_j|\neq |v_{j'}| \}.
\end{align}
We have the following simple result:
\begin{proposition}\label{basicstresss}
Let $F\in{\mathcal F}_{\delta}$.
Then the functions
\begin{align*}
({\bf u},{\bf v})\to
U_0\big(\prod_{i=1}^kT_{\epsilon_i}(u_i)\prod_{j=1}^l \bar T_{\epsilon'_j}(v_j) F\big)
\end{align*}
are continuous on ${\mathcal O}_\delta$ with values in $e^{\beta|c|}L^2(\mathbb{R}\times\Omega_\mathbb{T})$ for $\beta>| f_0-Q|$ and converge uniformly on compact subsets of ${\mathcal O}_\delta$ as $(\epsilon_i)_i$ and $(\epsilon'_j)_j$ tend successively to $0$, in either order. The limit is independent of the regularization and of the order in which the limits are taken; it is denoted by \eqref{TbarT} and defines a function holomorphic in $\bf u$ and anti-holomorphic in $\bf v$ on the region ${\mathcal O}_\delta$, taking values in $e^{\beta|c|}L^2(\mathbb{R}\times\Omega_\mathbb{T})$.
\end{proposition}
\begin{proof}
To keep the notation simple we consider only the case of \eqref{TbarT} with no $\bar T$ insertions and the case where $F\in {\mathcal F}_{\delta }$ is of the form $F=e^{\langle f,c+X \rangle _\mathbb{D}}$ with $f\in {\mathcal E}_\delta$. By replacing $f$ by $f+\sum_i\lambda_i g_i$ with $g_i \in {\mathcal E}_\delta$ and differentiating at $\lambda_i=0$ we can deduce the result for general $F$. For such $F$ we have
\begin{align}\label{ozerof}
(U_0F)(c,\varphi)=e^{(f_0-Q)c}e^{\langle P\varphi,f \rangle_\mathbb{D}}e^{\frac{_1}{^2} \langle f, G_\mathbb{D} f \rangle_\mathbb{D}}.
\end{align}
By Gaussian integration by parts (see \eqref{basicipp}) the right-hand side of \eqref{TbarT} is a sum of terms
\begin{align}\label{messy}
{\rm const}\times\prod_i(\partial^{a_i}P\varphi_{\epsilon_i}(u_i))^{b_i}\prod_{j<k}(\partial^{b_{jk}}_{u_j}\partial^{c_{jk}}_{u_k}(f_{\epsilon_j,u_j},G_\mathbb{D} f_{\epsilon_k,u_k})_\mathbb{D}^{d_{jk}})\prod_l\partial^{d_l}_{u_l}(f_{\epsilon_l,u_l},G_\mathbb{D} f)_\mathbb{D}^{e_l}\,U_0F
\end{align}
where $a_i,b_{jk}, c_{jk}\in \{1,2\}$, $b_i,d_{jk},e_l\in \{0,1,2\}$, $d_l\in \{1,2\}$ and $G_\mathbb{D}$ is the Dirichlet Green function \eqref{dirgreen}.
The functions $(f_{\epsilon_j,u_j},G_\mathbb{D} f_{\epsilon_k,u_k})_\mathbb{D}$ and $(f_{\epsilon_l,u_l},G_\mathbb{D} f)_\mathbb{D}$ converge uniformly on compacts of ${\mathcal O}_\delta$ to smooth functions. From \eqref{dirgreen}
we get
\begin{align}\label{delG}
\partial_zG_\mathbb{D} (z,u)=-\frac{_1}{^2}(\frac{1}{z-u}-\frac{1}{z-\frac{1}{\bar u}})
\end{align}
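Let us record that \eqref{delG} is consistent with the normalisation $G_\mathbb{D}(z,u)=\ln\frac{|1-z\bar u|}{|z-u|}$ of the Dirichlet Green function (which we assume is the one fixed in \eqref{dirgreen}): indeed
\[ \partial_z\ln\frac{1}{|z-u|}=-\frac{1}{2(z-u)},\qquad \partial_z \ln|1-z\bar u|=\frac{-\bar u}{2(1-z\bar u)}=\frac{1}{2(z-\frac{1}{\bar u})}.\]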
Hence the limit of the second product in \eqref{messy} is holomorphic since $b_{jk}, c_{jk}>0$. For the third product, we get convergence to terms of the form
$$\int_\mathbb{D} \partial_u G_\mathbb{D}(u,z)f(z) \dd z=-\tfrac{1}{2}\sum_{n=0}^\infty (f_n u^{-n-1}+f_{-n-1}u^n)
$$
or its $\partial_u$ derivative where $f_n=\langle f,u^n \rangle_\mathbb{D}$ and $f_{-n}=\langle f,\bar u^n \rangle_\mathbb{D}$ and the sum is finite and holomorphic.
Finally, recalling \eqref{harmonic} it is easy to check that the first product converges in $L^2(\Omega_\mathbb{T})$ uniformly on compacts of ${\mathcal O}_\delta$ to a holomorphic function $g({\bf u})$ and $g({\bf u})U_0F\in e^{\beta|c|}L^2(\mathbb{R}\times\Omega_\mathbb{T})$.
\end{proof}
Now we will consider contour integrals of observables of the type \eqref{TbarT}, for which we use the following notation: for $f:\mathbb{D}\to\mathbb{C}$ and $\delta>0$
\begin{align*}
\oint_{|z|=\delta }f(z) \dd z: =i \delta \int_0^{2\pi}f( \delta e^{i\theta}) e^{i \theta} \dd \theta,\ \ \oint_{|z|={\delta}}f(z)\dd \bar z:=i {\delta} \int_0^{2\pi}f({\delta} e^{i\theta})e^{-i \theta} \dd \theta.
\end{align*}
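As a quick consistency check of this normalisation (a direct computation from the definition above), for $m\in\mathbb{Z}$
\begin{align*}
\tfrac{1}{2\pi i}\oint_{|z|=\delta}z^m \dd z=\tfrac{\delta^{m+1}}{2\pi}\int_0^{2\pi}e^{i(m+1)\theta}\dd \theta=1_{m=-1},
\end{align*}
so that these contour integrals extract Laurent coefficients in the usual way.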
Then we have
\begin{lemma}\label{keepcooldontStress}
Let $F\in{\mathcal F}_{\delta}$ and $\delta'>\delta$.
Then for all $n>0$
\begin{align}\label{basicT}
\tfrac{1}{2\pi i}\oint_{|z|=\delta'} z^{1-n}U_0(T(z)F) \dd z=\mathbf{L}_{-n}^0U_0F,\ \ \tfrac{1}{2\pi i}\oint_{|z|=\delta'} \bar z^{1-n}U_0(\bar{T}(z)F)d\bar z=\tilde{\mathbf{L}}_{-n}^0U_0F.
\end{align}
\end{lemma}
\begin{proof}
Here again (for simplicity) we consider the case where $F\in {\mathcal F}_{\delta}$ is of the form $F=e^{\langle f,c+X \rangle _\mathbb{D}}$ with $f\in {\mathcal E}_\delta$.
Let us write the integration by parts terms explicitly. First
\begin{align*}
U_0(\partial_zX(z)F)
=&\partial_zP\varphi(z)U_0 F+\int_\mathbb{D}
\partial_zG_\mathbb{D} (z,u)
f(u) \dd u\,U_0F.
\end{align*}
From \eqref{delG}
we get
\begin{align}\label{delG2}
\partial_zG_\mathbb{D} (z,u)
=-\frac{_1}{^2}\sum_{n=0}^\infty(u^nz^{-n-1}+\bar u^{n+1}z^n)
\end{align}
which converges since $|u|<\delta<\delta'=|z|<1$. Therefore
$$\int_\mathbb{D} \partial_z G_\mathbb{D}(z,u)f(u) \dd u=-\tfrac{1}{2}\sum_{n=0}^\infty (f_n z^{-n-1}+f_{-n-1}z^n)$$
where $f_n=\langle f,u^n \rangle_\mathbb{D}$ and $f_{-n}=\langle f,\bar u^n \rangle_\mathbb{D}$ for $n\geq 0$.
Recalling \eqref{harmonic}, we have obtained
\begin{align}\label{lll}
U_0(\partial_zX(z)F)&=\sum_{n\geq 0}(n\varphi_nz^{n-1}-\frac{_1}{^2} (f_n z^{-n-1}+f_{-n-1}z^n))U_0F\\&=\sum_{n\in\mathbb{Z}}z^{n-1}(n\varphi_n1_{n>0}-\frac{_1}{^2} f_{-n} )U_0F.\nonumber
\end{align}
By \eqref{ozerof}
\begin{align}\label{ozerof1}
(U_0F)(c,\varphi)=e^{(f_0-Q)c}e^{\sum_{n>0}(\varphi_nf_{n}+\varphi_{-n}f_{-n})}e^{\frac{_1}{^2} \langle f, G_\mathbb{D} f \rangle_\mathbb{D}}
\end{align}
so that
\begin{align}\nonumber
U_0(\partial_zX(z)F)&=(-\frac{_1}{^2} f_0z^{-1}+\sum_{n\neq 0}z^{n-1}(n\varphi_n1_{n>0}-\frac{_1}{^2} \partial_{-n}))U_0F\\&=i\sum_{n\in\mathbb{Z}}z^{n-1}{\bf A}_{-n}U_0F \label{llll}
\end{align}
where we recall that ${\bf A}_0=\frac{i}{2}(\partial_c+Q)$. Hence
\begin{align}\label{QtermT}
U_0(Q\partial^2_zX(z)F)=&-iQ \sum_{n\in \mathbb{Z}}(n+1)z^{-n-2} \mathbf{A}_{n}U_0 F.
\end{align}
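In more detail, \eqref{QtermT} follows from \eqref{llll} by differentiating in $z$ (the operators $\mathbf{A}_{-n}$ do not depend on $z$) and relabelling $n\to -n$:
\begin{align*}
U_0(Q\partial^2_zX(z)F)=Q\,\partial_z\Big(i\sum_{n\in\mathbb{Z}}z^{n-1}\mathbf{A}_{-n}U_0F\Big)=iQ\sum_{n\in\mathbb{Z}}(n-1)z^{n-2}\mathbf{A}_{-n}U_0F=-iQ \sum_{n\in \mathbb{Z}}(n+1)z^{-n-2} \mathbf{A}_{n}U_0 F.
\end{align*}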
Next consider the quadratic terms in $T$ and use Gaussian integration by parts twice to get
\begin{align*}
U_0(((\partial_z X(z))^2-\mathds{E}[(\partial_zX(z))^2])F)
&= \Big(\partial_z P\varphi(z)+\int_\mathbb{D} \partial_z G_\mathbb{D}(z,u)f(u) \dd u
\Big)^2U_0F\\&=(\sum_{n\in\mathbb{Z}}z^{n-1}(n\varphi_n1_{n>0}-\frac{_1}{^2} f_{-n} ))^2U_0F\\&=\sum_{n,m}z^{n+m-2} (-\mathbf{A}_{-n} \mathbf{A}_{-m}+\tfrac{m}{2}\delta_{m,-n}1_{m>0})U_0F\\&=-\sum_{n,m}z^{n+m-2} :\mathbf{A}_{-n} \mathbf{A}_{-m}:U_0F
\end{align*}
where we used \eqref{lll} in the second step, \eqref{llll} in the third step and $\mathbf{A}_{m} \mathbf{A}_{-m}=\mathbf{A}_{-m} \mathbf{A}_{m}+\frac{m}{2}$ for $m>0$ in the last step. The sum converges in $e^{\beta|c|}L^2(\mathbb{R}\times\Omega_\mathbb{T})$. Combining this with \eqref{QtermT} the claim \eqref{basicT} follows upon doing the contour integral. The claim for $\bar T$ is proved in the same way.
\end{proof}
Let us now introduce some notation for the general correlations \eqref{TbarT}. Denote ${\bf u}=(u_1,\dots u_k)\in\mathbb{D}^k$.
We define nested contour integrals for $f:\mathbb{D}^k\times\mathbb{D}^j\to\mathbb{C}$ by
\begin{align}\label{def_contour}
\oint_{|\mathbf{u}|=\boldsymbol{\delta} }\oint_{|\mathbf{v}|=\tilde{\boldsymbol{\delta}}}f ({\bf u},{\bf v})\dd {\bf \bar v}\dd {\bf u}:=\oint_{|u_k|=\delta_k}\dots \oint_{|u_1|=\delta_1} \oint_{|v_j|=\tilde{\delta}_{j}}\dots \oint_{|v_1|=\tilde{\delta}_1}f({\bf u},{\bf v}) \dd \bar v_1\dots \dd \bar v_{j} \dd u_1\dots \dd u_k.
\end{align}
where $\boldsymbol{\delta} :=(\delta_1,\dots, \delta_k)$ with $0<\delta_1<\dots<\delta_k<1$ and similarly for $\tilde{\boldsymbol{\delta}}$. Furthermore we always suppose $\delta_i\neq \tilde\delta_j$ for all $i,j$.
Next, for $\boldsymbol{\epsilon}\in (\mathbb{R}^+)^k$ and $\boldsymbol{\epsilon}'\in (\mathbb{R}^+)^{j}$ we set
$T_{\boldsymbol{\epsilon}}( \mathbf{u}):=\prod_{i=1}^k T_{\epsilon_i}(u_i)$ (and similarly for the anti-holomorphic part)
and given two Young diagrams $\nu=(\nu_i)_{1\leq i\leq k},\tilde\nu=(\tilde \nu_i)_{1\leq i\leq j},$ we denote $\mathbf{u}^{1-\nu}=\prod u_i^{1-\nu_i}$ and $\bar{\mathbf{v}}^{1-\tilde{\nu}}=\prod \bar {v}_i^{1-\tilde{\nu}_i}$.
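To illustrate this notation: for the Young diagram $\nu=(2,1)$ (so $k=2$) one has $\mathbf{u}^{1-\nu}=u_1^{1-2}u_2^{1-1}=u_1^{-1}$, and the corresponding nested contours are taken with radii $0<\delta_1<\delta_2<1$.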
With these notations we have:
\begin{proposition}\label{FinalTbarT}
Let $F\in{\mathcal F}_{\delta}$ and
$\delta <\delta_1 \wedge \tilde{\delta}_1$. Then
\begin{align}\label{tbart11111}
(2\pi i)^{-k-j}\oint_{|\mathbf{u}|=\boldsymbol{\delta}}\oint_{|\mathbf{v}|=\tilde{\boldsymbol{\delta}}} \mathbf{u}^{1-\nu}\bar{\mathbf{v}}^{1-\tilde{\nu}} U_0\big(T({\bf u})\bar T({\bf v}) F\big)\dd {\bf \bar v}\dd {\bf u}={\bf L}_{-\nu }^0 \tilde{{\bf L}}_{-\tilde\nu}^0U_0 F.
\end{align}
\end{proposition}
\begin{proof}
For simplicity consider again the case with only $T$ insertions. We proceed by induction in $k$. By Lemma \ref{keepcooldontStress} the claim holds for $k=1$. Suppose it holds for $k-1$. We use the second regularization introduced above. This entails that $T_\epsilon(u_l)\in {\mathcal F}_{\delta_{l+1}} $ for $\epsilon$ small enough. By Proposition \ref{basicstresss} we have
\begin{align*}
\oint_{|\mathbf{u}|=\boldsymbol{\delta}} \mathbf{u}^{1-\nu} U_0\big(T({\bf u}) F\big)\dd {\bf u}&=\lim_{\boldsymbol{\epsilon}^{(k)}\to 0} \lim_{\epsilon_k\to 0}
\oint_{|\mathbf{u}|=\boldsymbol{\delta}} \mathbf{u}^{1-\nu} U_0\big(T_{\epsilon_k}(u_k)T_{\boldsymbol{\epsilon}^{(k)}}({\bf u}^{(k)}) F\big)\dd {\bf u}\\
&=\lim_{\boldsymbol{\epsilon}^{(k)}\to 0}
\oint_{|\mathbf{u}|=\boldsymbol{\delta}} \mathbf{u}^{1-\nu} U_0\big(T(u_k)T_{\boldsymbol{\epsilon}^{(k)}}({\bf u}^{(k)}) F\big)\dd {\bf u}
\end{align*}
where we introduced the notation
$\boldsymbol{\epsilon}^{(k)}=(\epsilon_1,\dots, \epsilon_{k-1})$, ${\bf u}^{(k)}=(u_1,\dots, u_{k-1})$. We have, for $\boldsymbol{\epsilon}^{(k)}$ small enough that $T_{\boldsymbol{\epsilon}^{(k)}}({\bf u}^{(k)})F\in {\mathcal F}_{\delta_k}$.
Hence by
Lemma \ref{keepcooldontStress}
\begin{align*}
\frac{1}{2\pi i}\oint_{|u_k|=\delta_k}u_k^{1-\nu_k}U_0\big(T(u_k)T_{\boldsymbol{\epsilon}^{(k)}}({\bf u}^{(k)}) F\big)du_k={\bf L}_{-\nu_k}U_0\big(T_{\boldsymbol{\epsilon}^{(k)}}({\bf u}^{(k)}) F\big) .
\end{align*}
From \eqref{messy} we infer that $U_0\big(T_{\boldsymbol{\epsilon}^{(k)}}({\bf u}^{(k)}) F)=P_k(\boldsymbol{\epsilon}^{(k)},\varphi)U_0F$ where $P_k(\boldsymbol{\epsilon}^{(k)},\varphi)$ is a polynomial in finitely many variables $\varphi_n$ (depending on $k$) with coefficients continuous in ${\bf u}^{(k)}$ (and depending on $\boldsymbol{\epsilon}^{(k)}$).
Therefore we may commute ${\bf L}_{-\nu_k} $ and the integration to get
\begin{equation*}
(2\pi i)^{-k} \oint_{|\mathbf{u}|=\boldsymbol{\delta}} \mathbf{u}^{1-\nu} U_0\big(T(u_k)T_{\boldsymbol{\epsilon}^{(k)}}({\bf u}^{(k)}) F\big)\dd {\bf u}= {\bf L}_{-\nu_k} \left ( (2\pi i)^{-k+1} \oint_{|\mathbf{u}^{(k)}|=\boldsymbol{\delta}^{(k)}} (\mathbf{u}^{(k)})^{1-\nu^{(k)}} U_0\big(T_{\boldsymbol{\epsilon}^{(k)}}({\bf u}^{(k)}) F\big)\dd {\bf u}^{(k)} \right ).
\end{equation*}
By the induction hypothesis the term
\begin{equation}\label{lala}
I_{\boldsymbol{\epsilon}^{(k)}}:=(2\pi i)^{-k+1} \oint_{|\mathbf{u}^{(k)}|=\boldsymbol{\delta}^{(k)}} (\mathbf{u}^{(k)})^{1-\boldsymbol{\nu}^{(k)}} U_0\big(T_{\boldsymbol{\epsilon}^{(k)}}({\bf u}^{(k)} )F\big)\dd {\bf u}^{(k)}
\end{equation}
converges to $\mathbf{L}_{-\boldsymbol{\nu}^{(k)}} U_0F$ as $\boldsymbol{\epsilon}^{(k)}$ goes to $0$.
Moreover we claim that, for all fixed $k$, there exist $M,N<\infty$ such that $I_{\boldsymbol{\epsilon}^{(k)}}=Q_k(\boldsymbol{\epsilon}^{(k)},\varphi)U_0F$ where $Q_k(\boldsymbol{\epsilon}^{(k)},\varphi)$ is a polynomial in the $\varphi_n$ with $|n|<N$ and of degree less than $M$. Furthermore the coefficients of $Q_k(\boldsymbol{\epsilon}^{(k)},\varphi)$ converge as $\boldsymbol{\epsilon}^{(k)}\to 0$. Therefore we can commute ${\bf L}_{-\nu_k}$ and the limit to get
\begin{align*}
\lim_{\boldsymbol{\epsilon}^{(k)}\to 0} {\bf L}_{-\nu_k}I_{\epsilon^{(k)}}={\bf L}_{-\nu_k}\lim_{\boldsymbol{\epsilon}^{(k)}\to 0}I_{\epsilon^{(k)}}={\bf L}_{-\nu_k}\mathbf{L}_{-\nu^{(k)}} U_0(F)=\mathbf{L}_{-\boldsymbol{\nu}} U_0(F)
\end{align*}
which completes the induction step.
To prove the above claim we use \eqref{messy}. Let ${\mathcal G}$ be the graph with vertex set $\{1,\dots,k\}$ and edges $\{i,j\}$ with $d_{ij}>0$. Let $\Gamma$ be a connected component of ${\mathcal G}$. Then the number of vertices $i$ in $\Gamma$ s.t. $b_i>0$ is no more than two. Consider a connected component for which the number is two. This corresponds to a subproduct in \eqref{messy} of the form
\begin{align}\label{moremess}
f_{\boldsymbol{\epsilon}}(u_{i_1},\dots,u_{i_l})=\partial P\varphi_{\epsilon_{i_1}}(u_{i_1})\partial P\varphi_{\epsilon_{i_l}}(u_{i_l})\prod_{a=1}^{l-1}\partial_{u_{i_a}}\partial_{u_{i_{a+1}}}(f_{\epsilon_{i_a},u_{i_a}},G_\mathbb{D} f_{\epsilon_{i_{a+1}},u_{i_{a+1}}})_\mathbb{D}.
\end{align}
We have
\begin{align*}
\partial P\varphi_{\epsilon}(u)=\sum_{n=1}^{M_\epsilon}a_n(\epsilon)\varphi_nu^{n-1}
\end{align*}
where $M_\epsilon<\infty$ if $\epsilon>0$ and $a_n(\epsilon)$ converges as $\epsilon\to 0$.
Similarly, for $|u|<|u'|$,
\begin{align*}
\partial_{u}\partial_{u'}(f_{\epsilon,u},G_\mathbb{D} f_{\epsilon',u'})_\mathbb{D}=\sum_{m=0}^{N_{\epsilon,\epsilon'}}b_m(\epsilon,\epsilon')u^m{u'}^{-m-2}
\end{align*}
where $N_{\epsilon,\epsilon'}<\infty$ if $\epsilon,\epsilon'>0$ and $b_n(\epsilon,\epsilon')$ converge as $\epsilon,\epsilon'\to 0$. We insert these into \eqref{moremess}.
To simplify notation, denote $u_{i_a}=v_a$, $\epsilon_{i_a}=\varepsilon_a$, $\delta_{i_a}=d_a $ and $\nu_{i_a}=\eta_a$.
The contour integral of $f_{\boldsymbol{\epsilon}}$ becomes
\begin{align*}
\oint_{|\mathbf{v}|=\bf d} \mathbf{v}^{1-\boldsymbol{\eta}} f_{\boldsymbol{\varepsilon}}({\bf v})d{\bf v}=\sum_{n_1,n_l} \varphi_{n_1}\varphi_{n_l}a_{n_1,n_l}(\boldsymbol{\varepsilon})
\end{align*}
where
\begin{align*}
a_{n_1,n_l}(\boldsymbol{\varepsilon})=\sum_{m_1,\dots, m_{l-1}>0} a_{n_1,n_l}({\bf m},\boldsymbol{\varepsilon})
\int_{[0,2\pi]^l} e^{i(n_1\theta_1+n_l\theta_l+\sum_{i=1}^{l-1}\alpha_i(m_i\theta_i-(m_{i}+2)\theta_{i+1})+\sum_{i=1}^l(1-\eta_i)\theta_i)}\dd\boldsymbol{\theta}
\end{align*}
and $\alpha_i=\pm 1$ depending on whether $|v_i|<|v_{i+1}|$ or the opposite. The $\boldsymbol{\theta}$ integral gives the constraints $n_1+\alpha_1m_1+1-\eta_1=0$, $n_l-\alpha_{l-1}(m_{l-1}+2)+1-\eta_l=0$ and
$\alpha_im_i-\alpha_{i-1}(m_{i-1}+2)+(1-\eta_i)=0$ for $i=2,\dots,l-1$. The last two imply
$n_l-\alpha_1m_1+C(\boldsymbol{\alpha},\boldsymbol{\eta})=0$
and combining this with the first constraint we get $a_{n_1,n_l}=0$ if $|n_1|, |n_l|>C(\boldsymbol{\alpha},\boldsymbol{\eta})$, where $C(\boldsymbol{\alpha},\boldsymbol{\eta})$ denotes a generic constant depending only on $\boldsymbol{\alpha},\boldsymbol{\eta}$. Convergence of the $a_{n_1,n_l}(\boldsymbol{\varepsilon})$ as $\boldsymbol{\varepsilon}\to 0$ then follows from the convergence of the $(a_n)_n$'s and $(b_m)$'s and the fact that $a_{n_1,n_l}(\boldsymbol{\varepsilon})$ is a polynomial in these coefficients. This finishes the proof in the case where $\Gamma$ has two vertices with $b_i>0$. The two other cases are similar.
\end{proof}
In the sequel we will apply Proposition \ref{FinalTbarT} to the function
$$F=S_{e^{-t}}U_0^{-1}\Psi^0_\alpha=e^{\alpha \int_0^{2\pi} (c+X(e^{-t+i\theta}))\frac{d\theta}{2\pi}-\alpha Q t}
$$ where $\Psi^0_\alpha(c,\varphi)=e^{(\alpha-Q)c}$ (in this case integration against a function $f \in {\mathcal E}_\delta$ is replaced by an average on a circle, but the previous considerations still apply). Then $U_0F=e^{-t{\bf H}^0}\Psi^0_\alpha=e^{-2t\Delta_\alpha}\Psi^0_\alpha$. Thus we arrive at the following representation for the Virasoro descendants
\begin{align}\label{finalcontour}
\Psi^0_{\alpha,\nu,\tilde\nu}=e^{2t\Delta_\alpha}
(2\pi i)^{-k-j}\oint_{|\boldsymbol{u}|=\boldsymbol{\delta}}\oint_{|\boldsymbol{v}|=\tilde{\boldsymbol{\delta}}} \mathbf{u}^{1-\nu}\bar{\mathbf{v}}^{1-\tilde\nu} U_0\big(T({\bf u})\bar T({\bf v}) S_{e^{-t}}U_0^{-1} \Psi^0_\alpha\big)\dd{\bf \bar v} \dd{\bf u}
\end{align}
where now $e^{-t}<\delta_1\wedge \tilde{\delta}_1$.
\subsection{Conformal Ward Identities}\label{sub:ward}
In this section we state the main identity relating a LCFT correlation function with a $V_\alpha$ insertion to a scalar product with the descendant states $\Psi_{\alpha,\nu,\tilde\nu}$ given by \eqref{defdescendents}.
We start with the following lemma.
\begin{lemma} \label{T-lemma} We have
\begin{align*}
e^{-t{\bf H}}
U_0\big(T({\bf u})\bar T({\bf v}) S_{e^{-s}}U_0^{-1}\Psi^0_\alpha\big)=
\lim_{{\boldsymbol{\epsilon}\to 0}}\lim_{{\boldsymbol{\epsilon}'\to 0}}e^{-t{\bf H}}
U_0\big(T_{\boldsymbol{\epsilon}}({\bf u})\bar T_{\boldsymbol{\epsilon'}}({\bf v}) S_{e^{-s}}U_0^{-1}\Psi^0_\alpha\big)
\end{align*}
where the limit is in $e^{-\beta \rho }L^2(\mathbb{R}\times \Omega_\mathbb{T})$ for all $ \beta >Q-\alpha$, uniformly in $({\bf u},{\bf v})\in{\mathcal O}_{e^{-s}}$ (recall \eqref{caOt}) and the LHS is analytic in ${\bf u}$ and anti-analytic in ${\bf v}$ on ${\mathcal O}_{e^{-s}}$.
\end{lemma}
\begin{proof}
This follows from Proposition \ref{basicstresss} and \eqref{normetHweight} applied with $ \beta >Q-\alpha$.
\end{proof}
The following lemma gives a probabilistic expression for $e^{-t{\bf H}}\Psi^0_{\alpha,\nu,\tilde\nu} $ (recall our convention for contour integrals in \eqref{def_contour}):
\begin{lemma}\label{TTLemma} Let $\delta_k\wedge \tilde\delta_{\tilde k}<e^{-t}$. Then
\begin{align*}
e^{-t\mathbf{H}}\Psi^0_{\alpha,\nu,\tilde\nu}
=& \frac{e^{-(2\Delta_{\alpha}+|\nu|+|\tilde\nu|)t}}{(2\pi i)^{k+j}}
\oint_{|\mathbf{u}|=\boldsymbol{\delta}} \oint_{|\mathbf{v}|=\boldsymbol{\tilde\delta}}
\mathbf{u}^{1-\nu}\bar{\mathbf{v}}^{1-\tilde\nu} e^{-Q c} \mathds{E}_\varphi\Big( T(\mathbf{u})\bar T(\mathbf{v}) V_\alpha(0) e^{-\mu e^{\gamma c}M_\gamma(\mathbb{D}\setminus\mathbb{D}_t)}\Big) \dd \bar{\mathbf{v}}\dd \mathbf{u}
\end{align*}
where
\begin{align*}
\mathds{E}_\varphi\Big( T(\mathbf{u})\bar T(\mathbf{v}) V_\alpha(0) e^{-\mu e^{\gamma c}M_\gamma(\mathbb{D}\setminus\mathbb{D}_t)}\Big) :=
\lim_{{\boldsymbol{\epsilon}\to 0}}\lim_{{\boldsymbol{\epsilon}'\to 0}} \mathds{E}_\varphi\Big( T_{\boldsymbol{\epsilon}}(\mathbf{u})\bar T_{\boldsymbol{\epsilon'}}(\mathbf{v}) V_\alpha(0) e^{-\mu e^{\gamma c}M_\gamma(\mathbb{D}\setminus\mathbb{D}_t)}\Big)
\end{align*}
and the limit exists in $e^{-\beta \rho }L^2(\mathbb{R}\times \Omega_\mathbb{T})$ for all $\beta >Q-\alpha$ and is analytic in $\bf u$ and anti-analytic in $\bf v$ in the region ${\mathcal O}_{e^{-t}}$.
\end{lemma}
\begin{proof} For the sake of readability we write the proof in the case when $\tilde\nu=0$.
Thus, consider
\begin{equation}
\Psi^0_{\alpha,\nu,0}=\frac{e^{2\Delta_{\alpha} s}}{(2\pi i)^{k}}
\oint_{|\mathbf{u}|=\boldsymbol{\delta}}
\mathbf{u}^{1-\nu}
U_0\Big( T(\mathbf{u}) S_{e^{-s}}U_0^{-1}\Psi^0_\alpha\Big) \dd \mathbf{u} .
\end{equation}
We use the regularisation $T_\epsilon$ where \eqref{scalingrelation} holds. By Lemma \ref{T-lemma}
\begin{align*}
e^{-t\mathbf{H}}U_0\Big( T(\mathbf{u}) S_{e^{-s}}U_0^{-1}\Psi^0_\alpha\Big)=&\lim_{{\boldsymbol{\epsilon}\to 0}} e^{-t\mathbf{H}}U_0\Big( T_{\boldsymbol{\epsilon}}(\mathbf{u}) S_{e^{-s}}U_0^{-1}\Psi^0_\alpha\Big)
\\
= &\lim_{{\boldsymbol{\epsilon}\to 0}}e^{-t\mathbf{H}}U\Big(T_{\boldsymbol{\epsilon}}(\mathbf{u})(S_{e^{-s}}U_0^{-1}\Psi^0_\alpha)e^{\mu e^{\gamma c}M_\gamma(\mathbb{D})}\Big)
\\
=& \lim_{{\boldsymbol{\epsilon}\to 0}} U\Big( S_{e^{-t}}(T_{\boldsymbol{\epsilon}}(\mathbf{u})(S_{e^{-s}}U_0^{-1}\Psi^0_\alpha)e^{\mu e^{\gamma c}M_\gamma(\mathbb{D})})\Big)
\\
=&\lim_{{\boldsymbol{\epsilon}\to 0}} e^{-2t} U\Big( T_{e^{-t}\boldsymbol{\epsilon}}(e^{-t}\mathbf{u}) (S_{e^{-s-t}}U_0^{-1}\Psi^0_\alpha)S_{e^{-t}}(e^{\mu e^{\gamma c} M_\gamma(\mathbb{D})})\Big) \\
= &e^{-2t} e^{-Q c}\lim_{{\boldsymbol{\epsilon}\to 0}} \mathds{E}_\varphi\Big( T_{e^{-t}\boldsymbol{\epsilon}}(e^{-t}\mathbf{u}) (S_{e^{-s-t}}U_0^{-1}\Psi^0_\alpha)e^{-\mu e^{\gamma c} M_\gamma(\mathbb{D}\setminus \mathbb{D}_t)}\Big)
\\
:= &e^{-2t} e^{-Q c} \mathds{E}_\varphi\Big( T(e^{-t}\mathbf{u}) (S_{e^{-s-t}}U_0^{-1}\Psi^0_\alpha)e^{-\mu e^{\gamma c} M_\gamma(\mathbb{D}\setminus \mathbb{D}_t)}\Big)
\end{align*}
where we used \eqref{scalingrelation} in the fourth identity.
By Lemma \ref{T-lemma} the last expression is analytic in $ \mathbf{u}$ and since $e^{2\Delta_{\alpha} s} e^{-t\mathbf{H}}U_0\Big( T(\mathbf{u}) S_{e^{-s}}U_0^{-1}e_\alpha\Big)$ is independent of $s$
we can take the limit $s\to\infty$. For this we note that
$$
S_{e^{-s-t}}U_0^{-1}\Psi^0_\alpha=e^{\alpha (c-(t+s)Q)}e^{\alpha (1,X( e^{-t-s}\,\cdot))_\mathbb{T}}=e^{-2\Delta_{\alpha}(s+t)}e^{\alpha c}e^{\alpha (1,X( e^{-t-s}\,\cdot))_\mathbb{T}-\frac{_1}{^2}\alpha^2\mathds{E} (1,X( e^{-t-s}\,\cdot))_\mathbb{T}^2}
$$
so that
\begin{align*}
e^{2\Delta_{\alpha} s} \mathds{E}_\varphi\Big( T(e^{-t}\mathbf{u}) (S_{e^{-s-t}}U_0^{-1}\Psi^0_\alpha)e^{-\mu e^{\gamma c} M_\gamma(\mathbb{D}\setminus \mathbb{D}_t)}\Big)=e^{-2\Delta_{\alpha} t}\mathds{E}_\varphi\Big( T(e^{-t}\mathbf{u}) V_\alpha(0)e^{-\mu e^{\gamma c} M_\gamma(\mathbb{D}\setminus \mathbb{D}_t)}\Big)
\end{align*}
and the last expression is analytic in $ \mathbf{u}$.
Hence by a change of variables in the $\bf u$-integral
\begin{align*}
e^{-t\mathbf{H}}\Psi^0_{\alpha,\nu,0}
=&
\frac{e^{2\Delta_{\alpha} s}}{(2\pi i)^{k}}
\oint_{|\mathbf{u}|=\boldsymbol{\delta}}
\mathbf{u}^{1-\nu} e^{-t\mathbf{H}}
U_0\Big( T(\mathbf{u}) S_{e^{-s}}U_0^{-1}\Psi^0_\alpha\Big) \dd \mathbf{u} \\= &
\frac{e^{-(2\Delta_{\alpha}+|\nu|)t}}{(2\pi i)^{k}}
\oint_{|\mathbf{u}|=\boldsymbol{e^{-t}\delta}}
\mathbf{u}^{1-\nu} e^{-Q c} \mathds{E}_\varphi\Big( T(\mathbf{u}) V_\alpha(0) e^{-\mu e^{\gamma c}M_\gamma(\mathbb{D}\setminus\mathbb{D}_t)}\Big) \dd \mathbf{u}\\
=&
\frac{e^{-(2\Delta_{\alpha}+|\nu|)t}}{(2\pi i)^{k}}
\oint_{|\mathbf{u}|=\boldsymbol{\delta}}
\mathbf{u}^{1-\nu} e^{-Q c} \mathds{E}_\varphi\Big( T(\mathbf{u}) V_\alpha(0) e^{-\mu e^{\gamma c}M_\gamma(\mathbb{D}\setminus\mathbb{D}_t)}\Big) \dd \mathbf{u}
\end{align*}
where in the last step we used analyticity to move the contours to ${|\mathbf{u}|=\boldsymbol{\delta}}$.
\end{proof}
In what follows, for fixed $n\geq 1$, we will denote
\begin{equation}\label{defZ}
\mathcal{Z} :=\{\mathbf{z}=(z_1,\dots,z_n)\,|\,\forall i\not = j,\,\, z_i\not =z_j\text{ and }\forall i,\,\, |z_i|<1\} .
\end{equation}
Denoting $\theta (\mathbf{z})=(\theta(z_1),\dots,\theta(z_n))\in \mathbb{C}^n$ we have $\theta\mathcal{Z}=\{(z_1,\dots,z_n)\,|\,\forall i\not = j,\,\, z_i\not =z_j\text{ and }\forall i,\,\, |z_i|>1
\} . $
For $\boldsymbol{\alpha}=(\alpha_1,\dots,\alpha_n)\in\mathbb{R}^n$ such that $\alpha_i<Q$ for all $i$ we define the function $U_{\boldsymbol{\alpha}}(\mathbf{z}): \mathbb{R}\times \Omega_\mathbb{T}\to \mathbb{R}$ by
\begin{align}\label{defUward}
U_{\boldsymbol{\alpha}}(\mathbf{z},c,\varphi):=&\lim_{\epsilon\to 0}e^{-Qc}\mathds{E}_\varphi \Big[\Big(\prod_{i=1}^nV_{\alpha_i,\epsilon}(z_i)\Big)e^{-\mu e^{\gamma c}M_\gamma(\mathbb{D} )} \Big], \quad \text{ for } \mathbf{z} \in \mathcal{Z}
\end{align}
where $V_{\alpha_i,\epsilon}$ stands for the regularized vertex operator \eqref{Vregul}.
Let us set
\begin{equation}\label{defs}
s:=\sum_{i=1}^n\alpha_i.
\end{equation}
\begin{remark}\label{correlalpha}
It follows directly from the construction of correlation functions that for $\mathbf{z} \in \theta\mathcal{Z}$
$$\langle V_{\alpha}(0)\prod_{i=1}^nV_{\alpha_i }(z_i)\rangle_{\gamma,\mu}=\Big(\prod_{i=1}^n|z_i|^{-4\Delta_{\alpha_i}}\Big)\langle\Psi_{\alpha} | U_{\boldsymbol{\alpha}}(\theta(\mathbf{z}))\rangle_2
$$
and these expressions are finite if $\alpha+s>2Q$ and $\alpha, \alpha_i<Q$.
\end{remark}
\begin{lemma}\label{integrcf}
Let $ \mathbf{z} \in \theta\mathcal{Z}$. Then almost everywhere in $(c,\varphi)$ and for all $R>0$,
$$U_{\boldsymbol{\alpha}}(\theta(\mathbf{z}))(c,\varphi)\leq e^{(s-Q)(c\wedge 0)-R(c\vee 0) } A(\varphi)
$$ where $A\in L^2 (\Omega_\mathbb{T})$ may depend on $R$.
\end{lemma}
\begin{proof}
Let $r=\max_i|\theta(z_i)|$ and $\iota(\varphi)=\inf_{x\in\mathbb{D}_r}P\varphi(x)$ and $\sigma(\varphi)=\sup_{x\in\mathbb{D}_r}P\varphi(x)$ with $\mathbb{D}_r$ the disk centered at $0$ with radius $r$.
Then
\begin{align*}
U_{\boldsymbol{\alpha}}(\theta(\mathbf{z}))(c,\varphi)
\leq Ce^{-Qc}e^{(c+\sigma(\varphi))s}\mathds{E} e^{-\mu e^{\gamma (c+\iota(\varphi))}Z}
\end{align*}
where the expectation is over the Dirichlet GFF $X_\mathbb{D}$ and
\begin{align*}
Z=\int_{\mathbb{D}_r} (1-|z|^2)^{\frac{\gamma^2}{2}}e^{\sum_i\gamma\alpha_i G_\mathbb{D}(z,\theta(z_i))}M_{\gamma,\mathbb{D}}(\dd z )
\end{align*}
where $M_{\gamma,\mathbb{D}}$ is the GMC of $X_\mathbb{D}$. For $c<0$ we use the trivial bound $$U_{\boldsymbol{\alpha}}(\theta(\mathbf{z}))(c,\varphi)\leq Ce^{(s-Q)c}e^{\sigma(\varphi)s}$$ and for $c>0$ we note that
$Z$ has all negative moments so that for $a>0$
\begin{align*}
\mathds{E} e^{-aZ}=\mathds{E} (aZ)^{-n}(aZ)^{n}e^{-aZ}\leq n!\mathds{E} (aZ)^{-n}\leq C_na^{-n}
\end{align*}
implying
\begin{align*}
U_{\boldsymbol{\alpha}}(\theta(\mathbf{z}))(c,\varphi)\leq C_ne^{c(s-Q-\gamma n)}e^{s\sigma(\varphi)-n\gamma \iota(\varphi)}
\end{align*}
Since $e^{s \sigma(\varphi)-n\gamma \iota(\varphi)}$ is in $L^2(\P)$ for all $s,n$, the claim follows by choosing $n$ such that $\gamma n\geq R+s-Q$.
\end{proof}
Define now the modified Liouville expectation (with now $\mathbb{D}_t$ the disk centered at $0$ with radius $e^{-t}$)
\begin{align}\label{modifiedlcft}
\langle F\rangle_t=\int_\mathbb{R} e^{-2Qc}\mathds{E}\Big[ F(c,X)e^{-\mu e^{\gamma c}M_{\gamma}(\mathbb{C}\setminus\mathbb{D}_t)}\Big]\dd c.
\end{align}
Also, in the contour integrals below, for vectors $\boldsymbol{\delta}$, $\boldsymbol{\widetilde{ \delta}}$ defining the radii of the respective contours, we will put a subscript $t$ when these variables are multiplied by $e^{-t}$, namely $\boldsymbol{\delta}_t:=e^{-t}\boldsymbol{\delta}$ and similarly for $\boldsymbol{\widetilde{ \delta}}_t$.
Then we get
\begin{corollary}\label{propcontour}
Let $ \mathbf{z} \in \theta\mathcal{Z}$. For $\alpha\in\mathbb{R}$ such that $\alpha<\big(Q-\frac{2(|\nu|+|\tilde{\nu}|)}{\gamma}-\frac{\gamma}{4}\big)\wedge (Q-\gamma)$ and $\alpha+\sum_i\alpha_i>2Q$, we have
\begin{align*}
\langle&\Psi_{\alpha,\nu,{\tilde\nu}} | U_{\boldsymbol{\alpha}}(\theta(\mathbf{z}))\rangle_2\\
=&
\Big(\prod_{i=1}^n|z_i|^{4\Delta_{\alpha_i}}\Big)\times\frac{1}{(2\pi i)^{k+j}}\lim_{t\to \infty} \oint_{|\mathbf{u}|=\boldsymbol{\delta}_t}
\oint_{| { \mathbf{v}}|=\boldsymbol{\widetilde{\delta}}_t} \mathbf{u}^{1-\nu}\bar{\mathbf{v}}^{1-{\tilde\nu}} \langle T(\mathbf{u})\bar T({ \mathbf{v}}) V_{\alpha}(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t \, \dd \bar{ \mathbf{v}}\dd \mathbf{u}.
\end{align*}
\end{corollary}
\begin{proof}
Combining Proposition \ref{descconvergence} and Lemma \ref{integrcf}, the existence of the limit
\begin{align*}
\langle\Psi_{\alpha,\nu,{\tilde\nu}} | U_{\boldsymbol{\alpha}}(\theta(\mathbf{z}))\rangle_2 =&\lim_{t\to\infty}e^{(2\Delta_{\alpha}+|\nu|+|\tilde\nu|)t}\langle e^{-t\mathbf{H}}\Psi^0_{\alpha,\nu,\tilde\nu} | U_{\boldsymbol{\alpha}}(\theta(\mathbf{z}))\rangle_2
\end{align*}
follows. By Lemma \ref{TTLemma}, this limit is given by the right-hand side of the identity in the statement.
\end{proof}
Here is the main result of this section:
\begin{proposition}\label{proofward}
Let $ \mathbf{z} \in \theta\mathcal{Z}$. For $\alpha\in\mathbb{R}$ such that $\alpha<\big(Q-\frac{2(|\nu|+|\tilde{\nu}|)}{\gamma}-\frac{\gamma}{4}\big)\wedge (Q-\gamma)$ and $\alpha+\sum_i\alpha_i>2Q$, and $\alpha_i<Q$ for all $i$, we have in the distributional sense
$$\langle\Psi_{\alpha,\nu,\tilde{\nu}} | U_{\boldsymbol{\alpha}}(\theta(\mathbf{z}))\rangle_2=\Big(\prod_{i=1}^n|z_i|^{4\Delta_{\alpha_i}}\Big)\times
\mathbf{D}_{\nu}\tilde{\mathbf{D}}_{{\tilde{\nu}}}\langle V_{\alpha}(0)\prod_{i=1}^nV_{\alpha_i }(z_i)\rangle_{\gamma,\mu} $$
where the differential operators $\mathbf{D}_{\nu}$, $\tilde{\mathbf{D}}_{\tilde{\nu}}$ are defined by
\begin{equation}
\mathbf{D}_{\nu}= \mathbf{D}_{\nu_k}\dots \mathbf{D}_{\nu_1}\quad \text{ and }
\quad \tilde{\mathbf{D}}_{\tilde{\nu}}= \tilde{\mathbf{D}}_{\tilde{\nu}_j}\dots \tilde{\mathbf{D}}_{\tilde{\nu}_1}
\end{equation}
where for $n\in\mathbb{N}$
\begin{align}
\mathbf{D}_{n}=&\sum_{i=1}^n\Big(-\frac{1}{z_i^{n-1}}\partial_{z_i}+\frac{(n-1) }{z_i^n}\Delta_{\alpha_i}\Big)\\
\tilde{\mathbf{D}}_{n}=&\sum_{i=1}^n\Big(-\frac{1}{\bar z_i^{n-1}}\partial_{\bar z_i}+\frac{(n-1) }{\bar z_i^n}\Delta_{\alpha_i}\Big)
\end{align}
\end{proposition}
\begin{proof} Section \ref{subproofward} will be devoted to the proof of this proposition.
\end{proof}
\subsection{Computing the 3-point correlation functions of descendant states}\label{sub:desc3point}
Now we exploit Proposition \ref{proofward} to give \emph{exact} analytic expressions for the 3-point correlation functions of descendant states. For this, we first need to introduce some notation: for $\nu= (\nu_i)_{i \in \llbracket 1,k\rrbracket}$ a Young diagram and some real $\Delta,\Delta', \Delta''$ we set
\begin{equation}\label{poidsv}
v(\Delta,\Delta', \Delta'', \nu):= \prod_{j=1}^k (\nu_j \Delta'-\Delta+\Delta''+ \sum_{u<j} \nu_u ) .
\end{equation}
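As a quick illustration of this definition (a direct specialization, added here for the reader's convenience), for a one-row diagram $\nu=(n)$, i.e. $k=1$, the product reduces to a single factor:
\begin{equation*}
v(\Delta,\Delta',\Delta'',(n))= n\Delta'-\Delta+\Delta''.
\end{equation*}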
With this notation, we can state the following key result:
\begin{proposition}[Conformal Ward identities]\label{Ward}
Assume $\alpha_1,\alpha_2<Q$ with $\alpha_1+\alpha_2>Q$ and $|z|<1$. For all $P>0$,
the scalar product $\langle \Psi_{Q+iP, \nu,\tilde\nu}\, |\, U_{\alpha_1,\alpha_2}(0,z) \rangle_2$ is explicitly given by the following expression
\begin{align}
\langle \Psi_{Q+iP, \nu,\tilde\nu}\, |\, U_{\alpha_1,\alpha_2}(0,z) \rangle_2 &=v(\Delta_{\alpha_1}, \Delta_{\alpha_2}, \Delta_{Q+iP},\tilde{\nu}) v(\Delta_{\alpha_1}, \Delta_{\alpha_2}, \Delta_{Q+iP},\nu)\\
& \times \frac{1}{2} C^{ \mathrm{DOZZ}}_{\gamma,\mu}( \alpha_1,\alpha_2, Q+iP ) \bar{z}^{|\nu|} z^{|\tilde{\nu}|} |z|^{2 (\Delta_{Q+iP}-\Delta_{\alpha_1}-\Delta_{\alpha_2})} \nonumber
\end{align}
where $\Delta_{\alpha}$ are conformal weights \eqref{deltaalphadef} and $ C^{ \mathrm{DOZZ}}_{\gamma,\mu}( \alpha_1,\alpha_2, Q+iP )$ are the DOZZ structure constants (see appendix \ref{dozz} for the definition).
\end{proposition}
The remaining part of this subsection is devoted to the proof of this proposition.
We first need the following lemma concerning analyticity of $U_{\boldsymbol{\alpha}}(\mathbf{z})$ in the parameter $\boldsymbol{\alpha}\in\mathbb{C}^n$, proved in Appendix \ref{app:analytic}.
\begin{lemma}\label{lem:analytic}
For fixed $\mathbf{z}\in \mathcal{Z}$ the mapping $\boldsymbol{\alpha}\mapsto U_{\boldsymbol{\alpha}}(\mathbf{z})\in e^{ \beta'\rho}L^2(\mathbb{R}\times\Omega_\mathbb{T})$ extends analytically to a complex neighborhood $\mathcal{A}^n_U$ in $\mathbb{C}^n$ of the set $\{\boldsymbol{\alpha}\in\mathbb{R}^n\mid \forall i, \alpha_i<Q\}$, for arbitrary $\beta'<{\rm Re}(s)-Q$ (recall \eqref{defs}). This analytic extension is continuous in $(\boldsymbol{\alpha},\boldsymbol{z})\in \mathcal{A}^n_U\times\mathcal{Z}$.
\end{lemma}
The first conclusion we want to draw is the fact that the pairing of $\Psi_\alpha$ with $U_{\boldsymbol{\alpha}}(\theta(\mathbf{z})) $ (in the case $n=2$) is related to the DOZZ formula when $\alpha$ is on the spectrum line $Q+i\mathbb{R}$.
\begin{lemma}
Here we fix $n=2$ and we consider $\mathbf{z}\in \theta\mathcal{Z}$. The mapping $(\alpha,\boldsymbol{\alpha})\mapsto \langle \Psi_\alpha | U_{\boldsymbol{\alpha}}(\theta(\mathbf{z}))\rangle_2$ is
continuous in the set
$$\Xi:= \{(\alpha,\boldsymbol{\alpha})\in \mathbb{C} \times \mathcal{A}^2_U\mid {\rm Re}(\alpha)\leq Q,{\rm Re}(\alpha+\alpha_1+\alpha_2)>2Q\}$$
and analytic in the set $\Xi\cap \{\alpha\in \mathbb{C}\setminus\mathcal{D}_0\}$. Moreover, in the set $\Xi$, we have the relation
$$\langle \Psi_\alpha | U_{\boldsymbol{\alpha}}(\theta(\mathbf{z}))\rangle_2= |z_1|^{2\Delta_{\alpha_2}-2\Delta_\alpha+2\Delta_{\alpha_1}}|z_1-z_2|^{2\Delta_{\alpha}-2\Delta_{\alpha_1}-2\Delta_{\alpha_2}}|z_2|^{2\Delta_{\alpha_1}-2\Delta_\alpha+2\Delta_{\alpha_2}}\tfrac{1}{2}C_{\gamma,\mu}^{{\rm DOZZ}} (\alpha,\alpha_1,\alpha_2 ) .$$
In particular this relation holds for $\alpha=Q+iP$ with $P\in (0,+\infty)$ and $\alpha_1,\alpha_2\in (-\infty,Q)$ with $\alpha_1+\alpha_2>Q$.
\end{lemma}
\begin{proof}
By Proposition \ref{holomorphiceig}
the mapping $\alpha\mapsto \Psi_\alpha\in e^{-\frac{\beta}{2}\rho}L^2(\mathbb{R}\times\Omega_\mathbb{T})$ is analytic on $W_0$, i.e. in the region $\{\alpha\in\mathbb{C}\setminus \mathcal{D}_0 \mid Q-{\rm Re}(\alpha)<\beta \}$ and continuous on $\mathcal{D}_0$. Combining with Lemma \ref{lem:analytic} (with $n=2$) produces directly the region of analyticity/continuity we claim.
Furthermore when all the parameters $\alpha,\alpha_1,\alpha_2$ are real, by Remark \ref{correlalpha} we have
$$\langle V_{\alpha}(0) V_{\alpha_1 }(z_1)V_{\alpha_2 }(z_2)\rangle_{\gamma,\mu}=\Big(\prod_{i=1}^2|z_i|^{-4\Delta_{\alpha_i}}\Big)\langle\Psi_{\alpha} | U_{\boldsymbol{\alpha}}(\theta(\mathbf{z}))\rangle_2.
$$
Also, for real parameters, the LHS coincides with the DOZZ formula \cite{dozz}, namely
$$\langle V_{\alpha}(0) V_{\alpha_1 }(z_1)V_{\alpha_2 }(z_2)\rangle_{\gamma,\mu}=|z_1|^{2\Delta_{\alpha_2}-2\Delta_\alpha-2\Delta_{\alpha_1}}|z_1-z_2|^{2\Delta_{\alpha}-2\Delta_{\alpha_1}-2\Delta_{\alpha_2}}|z_2|^{2\Delta_{\alpha_1}-2\Delta_\alpha-2\Delta_{\alpha_2}}\tfrac{1}{2}C_{\gamma,\mu}^{{\rm DOZZ}} (\alpha,\alpha_1,\alpha_2 ) .$$
This proves the claim.
\end{proof}
Now we would like to use the Ward identities, i.e. Proposition \ref{proofward}, to express the correlations of descendant fields with two insertions $\langle \Psi_{\alpha,\nu,\tilde{\nu}} | U_{\boldsymbol{\alpha}}(\theta(\mathbf{z}))\rangle_2$ (here with $n=2$) in terms of differential operators applied to the correlation of primaries $\langle \Psi_\alpha | U_{\boldsymbol{\alpha}}(\theta(\mathbf{z}))\rangle_2$ when the parameter $\alpha$ is close to the spectrum line $\alpha\in Q+i\mathbb{R}$. This is not straightforward: Proposition \ref{proofward} is restricted to real values of the parameter, and moreover the constraint on $\alpha$, which forces it to be very negative, requires $n$ to be large in order for the global Seiberg bound $\alpha+\sum_i\alpha_i>2Q$ to be satisfied. Transferring Ward's relations close to the spectrum line is thus our next task.
For this, fix a pair of Young diagrams $\nu,\tilde\nu$ and $\ell $ such that $\lambda_\ell=|\nu|+|\tilde\nu|$. By Proposition \ref{holomorphiceig} there exists a connected open set $\mathcal{A}_{\nu,\tilde{\nu}}:=W_0\cap W_\ell\subset\mathbb{C}$ such that the mappings $\alpha\mapsto \Psi_{\alpha,\nu,\tilde{\nu}}\in e^{-\beta\rho}L^2(\mathbb{R}\times\Omega_\mathbb{T})$ and $\alpha\mapsto \Psi_{\alpha}\in e^{-\beta\rho}L^2(\mathbb{R}\times\Omega_\mathbb{T})$ for $\beta>Q- \mathrm{Re}(\alpha)$ are analytic on $\mathcal{A}_{\nu,\tilde{\nu}}$ and furthermore
\begin{itemize}
\item $\mathcal{A}_{\nu,\tilde{\nu}}$ contains a complex neighborhood of the spectrum line $ \{\alpha=Q+iP\mid P\in(0,+\infty)\}$, with the discrete set $\mathcal{D}_0\cup\mathcal{D}_\ell$ removed and the mappings extend continuously to
$\mathcal{D}_0\cup\mathcal{D}_\ell$.
\item $\mathcal{A}_{\nu,\tilde{\nu}}$ contains a complex neighborhood of the real half-line $(-\infty,Q-\frac{2\lambda_\ell}{\gamma}-\frac{\gamma}{4})$
\end{itemize}
Therefore, for arbitrary fixed $n$ and $\mathbf{z}\in\theta\mathcal{Z}$, the pairings $(\alpha,\boldsymbol{\alpha})\mapsto \langle\Psi_{\alpha,\nu,\tilde{\nu}} |U_{\boldsymbol{\alpha}}(\theta(\mathbf{z}))\rangle_2$ and $(\alpha,\boldsymbol{\alpha})\mapsto \langle\Psi_{\alpha} |U_{\boldsymbol{\alpha}}(\theta(\mathbf{z}))\rangle_2$ are holomorphic in the region $\mathcal{A}_{\nu,\tilde{\nu}}\odot\mathcal{A}^n_U:=\{(\alpha,\boldsymbol{\alpha})\in \mathcal{A}_{\nu,\tilde{\nu}}\times\mathcal{A}^n_U\mid 2Q<{\rm Re}(s+\alpha)\}$.
Let us consider the subsets
$$\mathcal{S}:=\{(\alpha,\boldsymbol{\alpha}) \mid \forall i \,\,\alpha_i\in\mathbb{R}\text{ and } \alpha_i<Q,s-Q>0,\alpha\in Q+i (0,\infty) \},\quad \mathcal{S}_\star=\mathcal{S}\cap\{(\alpha,\boldsymbol{\alpha}) \mid\alpha\not \in \mathcal{D}_0\cup\mathcal{D}_\ell\}$$
and
$$\mathcal{R}_{\nu,\tilde{\nu}}:=\{(\alpha,\boldsymbol{\alpha}) \mid \forall i \,\,\alpha_i\in\mathbb{R}\text{ and } \alpha_i<Q,\alpha+s-2Q>0,\alpha\in \mathbb{R},\alpha< (Q-\frac{2\lambda_\ell}{\gamma}-\frac{\gamma}{4})\wedge (Q-\gamma) \}.$$
They are both in the same connected component of $\mathcal{A}_{\nu,\tilde{\nu}}\odot\mathcal{A}^n_U$. Furthermore, $ \mathcal{S}_\star$ is obviously non-empty whereas the condition $(n-2)Q>-(Q-\frac{2\lambda_\ell}{\gamma}-\frac{\gamma}{4})\wedge (Q-\gamma)$ ensures that $\mathcal{R}_{\nu,\tilde{\nu}}$ is non-empty, which we assume from now on.
Now we exploit the Ward identities, valid on $\mathcal{R}_{\nu,\tilde{\nu}}$. Let us consider a smooth compactly supported function $\varphi$ on $\theta\mathcal{Z}$. The mapping
$$(\alpha,\boldsymbol{\alpha})\in\mathcal{A}_{\nu,\tilde{\nu}}\odot\mathcal{A}_U^n\mapsto \int \langle\Psi_{\alpha,\nu,\tilde{\nu}} |U_{\boldsymbol{\alpha}}(\theta(\mathbf{z})) \rangle_2\bar{\varphi}(\mathbf{z})\,\dd \mathbf{z}$$
is thus analytic. Furthermore on $\mathcal{R}_{\nu,\tilde{\nu}}$ and by Proposition \ref{proofward}, it coincides with the mapping
\begin{align*}
(\alpha,\boldsymbol{\alpha})\in\mathcal{A}_{\nu,\tilde{\nu}}\odot\mathcal{A}_U^n
\mapsto &
\int \langle V_{\alpha}(0)\prod_{i=1}^nV_{\alpha_i }(z_i)\rangle_{\gamma,\mu} \overline{\tilde{\mathbf{D}}_{{\tilde{\nu}}}^\ast \mathbf{D}_{\nu}^\ast \Big( \varphi(\mathbf{z}) \prod_{i=1}^n|z_i|^{4\Delta_{\alpha_i}}\Big)}\,\dd \mathbf{z}\\
=& \int \Big(\prod_{i=1}^n|z_i|^{-4\Delta_{\alpha_i}}\Big)\langle\Psi_{\alpha} | U_{\boldsymbol{\alpha}}(\theta(\mathbf{z}))\rangle_2 \overline{\tilde{\mathbf{D}}_{{\tilde{\nu}}}^\ast \mathbf{D}_{\nu}^\ast \Big( \varphi(\mathbf{z}) \prod_{i=1}^n|z_i|^{4\Delta_{\alpha_i}}\Big)}\,\dd \mathbf{z}
\end{align*}
where we have introduced the (adjoint) operator $\mathbf{D}_\nu^\ast$ by
$$\int_{\mathbb{C}^n} \mathbf{D}_\nu f(\mathbf{z})\bar{\varphi}(z)\,\dd \mathbf{z}=\int_{\mathbb{C}^n} f(\mathbf{z})\overline{\mathbf{D}_\nu^\ast \varphi}(z)\,\dd \mathbf{z}$$
for all functions $f$ in the domain of $\mathbf{D}_\nu$ and all smooth compactly supported functions $ \varphi$ in $\mathbb{C}^n$ (and similarly for $\tilde{\mathbf{D}}_{\tilde{\nu}}$). Therefore both mappings are analytic and coincide on $\mathcal{R}_{\nu,\tilde{\nu}}$, thus on the connected component of $\mathcal{A}_{\nu,\tilde{\nu}}\odot\mathcal{A}^n_U$ containing $\mathcal{R}_{\nu,\tilde{\nu}}$, therefore on $\mathcal{S}_\star$ and finally on $\mathcal{S}$ by continuity. Notice that, on $\mathcal{S}$, we can take all the $\alpha_i$'s equal to $0$ but the first two of them provided they satisfy $\alpha_1+\alpha_2>Q$. This fact being valid for all test functions $\varphi$, we deduce that the relation
\begin{align*}
\langle \Psi_{Q+iP,\nu,\tilde{\nu}} |U(V_{\alpha_1}&(\theta(z_1))V_{\alpha_2}(\theta(z_2)))\rangle_2\\
=&\tfrac{1}{2}C_{\gamma,\mu}^{{\rm DOZZ}} (Q+iP,\alpha_1,\alpha_2 )|z_1|^{4\Delta_{\alpha_1}}|z_2|^{4\Delta_{\alpha_2}}\\
&\times \mathbf{D}_{\nu} \tilde{\mathbf{D}}_{{\tilde{\nu}}} \Big( |z_1|^{2\Delta_{\alpha_2}-2\Delta_{Q+iP}-2\Delta_{\alpha_1}}|z_1-z_2|^{2\Delta_{Q+iP}-2\Delta_{\alpha_1}-2\Delta_{\alpha_2}}|z_2|^{2\Delta_{\alpha_1}-2\Delta_{Q+iP}-2\Delta_{\alpha_2}}\Big)
\end{align*}
holds for almost every $z_1,z_2$ and all $\alpha_1,\alpha_2<Q$ such that $\alpha_1+\alpha_2>Q$, and thus for every $z_1,z_2\in\theta\mathcal{Z}$ as both sides are continuous in these variables. From this relation, after some elementary algebra to compute the last term and sending $z_2\to\infty$, we end up with the claimed relation.\qed
\section{Proof of Theorem \ref{bootstraptheoremintro}}\label{sectionproofbootstrap}
\subsection{Definition of Conformal Blocks}\label{sub:defblock}
Before proving Theorem \ref{bootstraptheoremintro}, we give the definition of the conformal blocks \eqref{blocksintro} which is based on material introduced in subsections \ref{sectionDiagvirasoro} and
\ref{sub:desc3point}.
The conformal blocks are defined as the formal power series
\begin{equation}\label{defblocks}
\mathcal{F}_{Q+iP}(\Delta_{\alpha_1}, \Delta_{\alpha_2},\Delta_{\alpha_3},\Delta_{\alpha_4},z)= \sum_{n=0}^\infty \beta_n(\Delta_{Q+iP},\Delta_{\alpha_1}, \Delta_{\alpha_2},\Delta_{\alpha_3},\Delta_{\alpha_4})z^n
\end{equation}
with
\begin{equation}\label{expressionbeta}
\beta_n(\Delta_{Q+iP},\Delta_{\alpha_1}, \Delta_{\alpha_2},\Delta_{\alpha_3},\Delta_{\alpha_4}):= \sum_{|\nu|, |\nu'|=n} v(\Delta_{\alpha_1}, \Delta_{\alpha_2}, \Delta_{Q+iP},\nu) F_{Q+iP}^{-1} (\nu,\nu') v(\Delta_{\alpha_4},\Delta_{\alpha_3}, \Delta_{Q+iP}, \nu')
\end{equation}
where the matrix $(F_{Q+iP}^{-1} (\nu,\nu'))_{|\nu|,|\nu'|=n}$ is the inverse of the scalar product matrix \eqref{scapo} and
the function $v$ is explicitly defined by expression \eqref{poidsv}.
The definition of $\beta_n$ is not fully explicit because there is no known closed formula for the inverse matrix $F_{Q+iP}^{-1}$, and the convergence of \eqref{defblocks} is an open problem. Below we will prove that the series converges in the unit disc for almost every $P$.
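As an illustration (not used in the sequel), consider the level-one coefficient. At $n=1$ there is a single Young diagram $\nu=(1)$ and \eqref{poidsv} gives $v(\Delta_{\alpha_1}, \Delta_{\alpha_2}, \Delta_{Q+iP},(1))=\Delta_{Q+iP}+\Delta_{\alpha_2}-\Delta_{\alpha_1}$. If, as in the standard Shapovalov normalization (an assumption made only for this illustration, to be checked against \eqref{scapo}), the level-one matrix is the scalar $F_{Q+iP}((1),(1))=2\Delta_{Q+iP}$, then
\begin{equation*}
\beta_1(\Delta_{Q+iP},\Delta_{\alpha_1}, \Delta_{\alpha_2},\Delta_{\alpha_3},\Delta_{\alpha_4})=\frac{(\Delta_{Q+iP}+\Delta_{\alpha_2}-\Delta_{\alpha_1})(\Delta_{Q+iP}+\Delta_{\alpha_3}-\Delta_{\alpha_4})}{2\Delta_{Q+iP}},
\end{equation*}
which is the familiar level-one coefficient of the conformal block expansion.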
\subsection{Proof of Theorem \ref{bootstraptheoremintro}}
We start with a lemma on the spectral decomposition:
\begin{lemma}\label{spclemma}
Let $u_1,u_2\in e^{\delta c_-}L^2(\mathbb{R}\times \Omega_\mathbb{T})$ with $\delta>0$.
Then
\begin{align*}
\langle u_1 \,|\, u_2\rangle_2=\frac{1}{2\pi}
\lim_{N,L\to \infty}
\sum_{|\nu'|=|\nu|\leq N} \sum_{|\tilde\nu'|=|\tilde\nu|\leq N}
\int_0^L \langle u_1\,|\, \Psi_{Q+iP,\nu ,\tilde{\nu}}\rangle _{2}\langle \Psi_{Q+iP,\nu' , \tilde{\nu}'}\,|\, u_2\rangle_{2}F^{-1}_{Q+iP}(\nu,\nu')F^{-1}_{Q+iP}(\tilde\nu,\tilde\nu')\, \dd P.
\end{align*}
\end{lemma}
\proof
We write the spectral representation \eqref{spectraldecomposH2} as
\begin{equation}\label{keyidentity}
\begin{split}
\langle u_1\,|\,u_2\rangle_2
=&\lim_{N,L\to\infty} \frac{1}{2\pi}\sum_{ |\mathbf{k}|+|\mathbf{l}|\leq N} \int_{0}^L
\langle u_1\,|\,\Psi_{Q+iP,\mathbf{k},\mathbf{l}}\rangle_2 \langle \Psi_{Q+iP,\mathbf{k},\mathbf{l}}\,|\,u_2\rangle_{2} {\rm d}P.
\end{split}
\end{equation}
Let
$F_{Q+iP}^{-1/2}(\nu,\nu')$
be the square root of the positive definite matrix
$(F_{Q+iP}^{-1}(\nu,\nu'))_{|\nu|=|\nu'|=N}$ and set
for $|\nu|=N$, $|\tilde\nu|=N'$
\begin{equation*}
H_{Q+iP,\nu,\tilde{\nu}}:= \sum_{|\nu_1|=N, |\nu_2|= N'} F_{Q+iP}^{-1/2}(\nu, \nu_1)F_{Q+iP}^{-1/2}(\tilde{\nu}, \nu_2) \Psi_{Q+iP,\nu_1,\nu_2}
\end{equation*}
where $ \Psi_{Q+iP,\nu_1,\nu_2} $ are defined in
\eqref{defdescendents}. We get
the identity
\begin{align*}
\sum_{ |\mathbf{k}|+|\mathbf{l} |=N }
\langle u_1\,|\,\Psi_{Q+iP,\mathbf{k},\mathbf{l}}\rangle_2 \langle \Psi_{Q+iP,\mathbf{k},\mathbf{l}}\,|\,u_2\rangle_{2}= \sum_{|\nu| + |\tilde \nu| = N} \langle u_1 \,| \, H_{Q+iP,\nu,\tilde{\nu}} \rangle_2 \langle \, H_{Q+iP,\nu,\tilde{\nu}} \,| u_2\rangle_2 .
\end{align*}
Hence we have
\begin{align}\label{equalityyy} \sum_{|\nu| + |\tilde \nu| \leq N} \int_0^L \langle u_1 \,| \, H_{Q+iP,\nu,\tilde{\nu}} \rangle_2 \langle \, H_{Q+iP,\nu,\tilde{\nu}} \,| u_2\rangle_2 {\rm d}P= \sum_{|\mathbf{k}|+ |\mathbf{l}| \leq N} \int_0^L \langle u_1 \,| \, \Psi_{Q+iP,\mathbf{k},\mathbf{l}} \rangle_2 \langle \Psi_{Q+iP,\mathbf{k},\mathbf{l}} \,|\, u_2\rangle_2 {\rm d}P .
\end{align}
On the other hand
\begin{align}
& \sum_{|\nu|,|\tilde \nu| \leq N}\int_0^L \langle u_1 \,| \, H_{Q+iP,\nu,\tilde{\nu}} \rangle_2 \langle \, H_{Q+iP,\nu,\tilde{\nu}} \,|\, u_2\rangle_2 {\rm d}P \label{casej,j'leqN} \\
& = \sum_{|\nu_1|=|\nu_3|\leq N}\sum_{ |\nu_2|=|\nu_4|\leq N} \int_0^L \langle u_1 \,| \, \Psi_{Q+iP,\nu_1,\nu_2} \rangle_2 \langle \, \Psi_{Q+iP,\nu_3, \nu_4} \,|\, u_2\rangle_2 F_{Q+iP}^{-1}(\nu_1, \nu_3) F_{Q+iP}^{-1}(\nu_2, \nu_4) {\rm d}P.\nonumber
\end{align}
Let $u_1=u_2=u$. Then
\begin{equation}\label{threeterms}
\int_0^L \sum_{|\nu|+ |\tilde \nu| \leq N} | \langle u \,| \, H_{Q+iP,\nu,\tilde{\nu}} \rangle_2 |^2 {\rm d}P \leq \int_0^L \sum_{\substack{|\nu|\leq N, \\ |\tilde \nu| \leq N}} | \langle u \,| \, H_{Q+iP,\nu,\tilde{\nu}} \rangle_2 |^2 {\rm d}P \leq \int_0^L \sum_{|\nu|+ |\tilde \nu| \leq 2N} | \langle u \,| \, H_{Q+iP,\nu,\tilde{\nu}} \rangle_2 |^2 {\rm d}P.
\end{equation}
By \eqref{equalityyy} and \eqref{keyidentity} the limits as $L,N\to \infty$ of all the three sums in \eqref{threeterms} are $2\pi \langle u\,|\,u\rangle_2$. Hence by polarisation the limits as $L,N\to \infty$ of the left hand sides of \eqref{casej,j'leqN} and \eqref{equalityyy} exist and equal $2\pi \langle u_1\,|\,u_2\rangle_2$. This proves the claim. \qed
Let $|z|<1$, $t\in (0,1)$ and $ |u|>1$ with $u\neq t^{-1}$. The $4$-point correlation function is given as a scalar product
\begin{equation}\label{scalarproduct}
\langle V_{\alpha_1}(0) V_{\alpha_2}(z) V_{\alpha_3}(t^{-1}) V_{\alpha_4}(u) \rangle_{\gamma,\mu} = t^{4 \Delta_{\alpha_3}} |u|^{-4 \Delta_{\alpha_4}} \Big\langle U_{\alpha_1, \alpha_2}(0,z) \,|\, U_{\alpha_4, \alpha_3}\big({\bar{u}}^{-1},t\big) \Big\rangle_2
\end{equation}
where $U_{\alpha_1, \alpha_2}$ is defined in \eqref{defUward}.
As in \eqref{4pointinfty} we have
\begin{align*}
\langle V_{\alpha_1}(0) V_{\alpha_2}(z) V_{\alpha_3}(t^{-1}) V_{\alpha_4}(\infty) \rangle_{\gamma,\mu}
= t^{4 \Delta_{\alpha_3}} \Big\langle U_{\alpha_1, \alpha_2}(0,z) \,|\, U_{\alpha_4, \alpha_3}\big(0,t\big) \Big\rangle_2.
\end{align*}
Recall from Lemma \ref{integrcf} that for $|z|<1$ the vector $U_{\alpha_1, \alpha_2}(0,z) \in e^{\delta c_-}L^2(\mathbb{R}\times \Omega_\mathbb{T})$ for some $\delta>0$ if $\alpha_1+\alpha_2>Q$, and the same holds for $U_{\alpha_4, \alpha_3}(0,t)$ if $\alpha_3+\alpha_4>Q$.
Using Lemma \ref{spclemma} we get
\begin{align}\label{doublelimite}
\Big\langle U_{\alpha_1, \alpha_2}(0,z) \,|\, U_{\alpha_4, \alpha_3}\big(0,t\big) \Big\rangle_2=\frac{1}{2\pi} \lim_{N,L\to \infty} I_{N,L}(\boldsymbol{\alpha},z,t)
\end{align}
where
\begin{align}\label{doublelimite1}
I_{N,L}(\boldsymbol{\alpha},z,t)
= \sum_{\substack{
|\nu'|=|\nu|\leq N,\\
|\tilde{\nu}'|=|\tilde{\nu}|\leq N}}\int_0^L \langle U_{\alpha_1,\alpha_2}(0,z)\,|\, \Psi_{Q+iP,\nu ,\tilde{\nu}}\rangle _{2}\langle \Psi_{Q+iP,\nu' , \tilde{\nu}'}\,|\, U_{\alpha_4,\alpha_3}(0,t)\rangle_{2}F^{-1}_{Q+iP}(\nu,\nu')F^{-1}_{Q+iP}(\tilde\nu,\tilde\nu')\, \dd P.
\end{align}
Using Proposition \ref{Ward} for the scalar products we can write this as
\begin{align*}
I_{N,L}&(\boldsymbol{\alpha},z,t)
= \tfrac{1}{4} t^{4 \Delta_{\alpha_3}} t^{-2 \Delta_{\alpha_3}-2 \Delta_{\alpha_4} }|z|^{-2\Delta_{\alpha_1}-2\Delta_{\alpha_2}}
\int_{0}^L \overline{ C_{\gamma,\mu}^{ \mathrm{DOZZ}}( \alpha_1,\alpha_2, Q+iP)}C_{\gamma,\mu}^{ \mathrm{DOZZ}}( \alpha_4,\alpha_3, Q+iP)
\\&
|tz|^{2\Delta_{Q+iP} }\sum_{ |\nu'|=|\nu|\leq N}
z^{|\nu|}
t^{ |\nu'|}v(\Delta_{\alpha_1}, \Delta_{\alpha_2}, \Delta_{Q+iP},\nu) F_{Q+iP}^{-1}(\nu,\nu')v(\Delta_{\alpha_4}, \Delta_{\alpha_3}, \Delta_{Q+iP},\nu')\\&
\sum_{
|\tilde{\nu}'|=|\tilde{\nu}|\leq N}
\bar{z}^{|\tilde \nu|}t^{|\tilde{\nu'}|}v(\Delta_{\alpha_1}, \Delta_{\alpha_2}, \Delta_{Q+iP},\tilde{\nu})
F_{Q+iP}^{-1}(\tilde{\nu}, \tilde{\nu}') v(\Delta_{\alpha_4}, \Delta_{\alpha_3}, \Delta_{Q+iP},\tilde{\nu}')
dP\\&=\tfrac{1}{4} t^{2 \Delta_{\alpha_3}-2 \Delta_{\alpha_4} }|z|^{-2\Delta_{\alpha_1}-2\Delta_{\alpha_2}}
\int_{0}^L \overline{ C_{\gamma,\mu}^{ \mathrm{DOZZ}}( \alpha_1,\alpha_2, Q+iP)}C_{\gamma,\mu}^{ \mathrm{DOZZ}}( \alpha_4,\alpha_3, Q+iP) |tz|^{2\Delta_{Q+iP} }\\& \left | \sum_{n=0}^N \beta_n(\Delta_{Q+iP},\Delta_{\alpha_1}, \Delta_{\alpha_2},\Delta_{\alpha_3},\Delta_{\alpha_4}) (zt)^{n} \right |^2 dP
\end{align*}
where we noted that the $v$ factors are real. The coefficients $\beta_n$ are given by \eqref{expressionbeta}, so
formally the bootstrap formula follows by taking $N,L\to\infty$. Yet, rigorously, there is still a gap to bridge since we do not know the convergence of the series \eqref{defblocks}. We adapt here a Cauchy-Schwarz type argument \footnote{Communicated to us by S. Rychkov.} to establish the convergence a.e. in $P$.
For this, we take first $\alpha_3=\alpha_2$ and $\alpha_1=\alpha_4$, with $\alpha_1+\alpha_2>Q$.
Then
\begin{align*}
I_{N,L}&(\alpha_1,\alpha_2,\alpha_2,\alpha_1,z,t)
= \frac{1}{4} t^{4 \Delta_{\alpha_2}}|tz|^{-2\Delta_{\alpha_1}-2\Delta_{\alpha_2}}\\&
\int_{0}^L |C_{\gamma,\mu}^{ \mathrm{DOZZ}}( \alpha_1,\alpha_2, Q-iP )|^2 |tz|^{2 \Delta_{Q+iP}} \left | \sum_{n=0}^N \beta_n(\Delta_{Q+iP},\Delta_{\alpha_1}, \Delta_{\alpha_2},\Delta_{\alpha_2},\Delta_{\alpha_1} ) (zt)^{n} \right |^2 dP
\end{align*}
If $z\in(0,1)$ this expression is increasing in the variables $N,L$
and $zt$, since $\beta_n(\Delta_{Q+iP},\Delta_{\alpha_1}, \Delta_{\alpha_2},\Delta_{\alpha_2},\Delta_{\alpha_1})\geq 0$ (it is the quadratic form of the positive definite matrix $F^{-1}_{Q+iP}$ evaluated on the vector $(v(\Delta_{\alpha_1}, \Delta_{\alpha_2}, \Delta_{Q+iP},\nu))_{|\nu|=n}$); moreover, it converges as $N,L$ go to infinity. This implies that the series
\[ \sum_{n=0}^\infty z^n \beta_n(\Delta_{Q+iP},\Delta_{\alpha_1}, \Delta_{\alpha_2},\Delta_{\alpha_2},\Delta_{\alpha_1})\]
is absolutely convergent for $|z|<1$ for almost all $P\geq 0$ and
\begin{align*}
& \langle V_{\alpha_1}(0) V_{\alpha_2}(z) V_{\alpha_2}(t^{-1}) V_{\alpha_1}(\infty) \rangle_{\gamma,\mu}= \\
&\frac{1}{8\pi} t^{4 \Delta_{\alpha_2}}|tz|^{-2\Delta_{\alpha_1}-2\Delta_{\alpha_2}} \int_{0}^\infty |C_{\gamma,\mu}^{ \mathrm{DOZZ}}( \alpha_1,\alpha_2, Q-iP )|^2 |tz|^{2 \Delta_{Q+iP}} \left | \sum_{n=0}^\infty \beta_n(\Delta_{Q+iP},\Delta_{\alpha_1}, \Delta_{\alpha_2},\Delta_{\alpha_2},\Delta_{\alpha_1} ) (zt)^{n} \right |^2 dP.
\end{align*}
This leads to the formula \eqref{bootstrapidentityintro} for the case $\alpha_3=\alpha_2$ and $\alpha_1=\alpha_4$ by using the identity
\begin{equation*}
\langle V_{\alpha_1}(0) V_{\alpha_2}(tz) V_{\alpha_2}(1) V_{\alpha_1}(\infty) \rangle_{\gamma,\mu}= t^{-4 \Delta_{\alpha_2}} \langle V_{\alpha_1}(0) V_{\alpha_2}(z) V_{\alpha_2}(t^{-1}) V_{\alpha_1}(\infty) \rangle_{\gamma,\mu}.
\end{equation*}
For the general case we use the Cauchy-Schwarz inequality for the inner product defined by the positive definite matrix $F^{-1}_{Q+iP}$, followed by the elementary bound $xy\leq \frac{1}{2}(x^2+y^2)$:
\[
\begin{split}
|\beta_n(\Delta_{Q+iP},\Delta_{\alpha_1}, \Delta_{\alpha_2},\Delta_{\alpha_3},\Delta_{\alpha_4} ) |
& \leq {\beta_n(\Delta_{Q+iP},\Delta_{\alpha_1}, \Delta_{\alpha_2},\Delta_{\alpha_2},\Delta_{\alpha_1} )}^\frac{_1}{^2} {\beta_n(\Delta_{Q+iP},\Delta_{\alpha_4}, \Delta_{\alpha_3},\Delta_{\alpha_3},\Delta_{\alpha_4} )}^\frac{_1}{^2} \\
& \leq \frac{1}{2} (\beta_n(\Delta_{Q+iP},\Delta_{\alpha_1}, \Delta_{\alpha_2},\Delta_{\alpha_2},\Delta_{\alpha_1} )+ \beta_n(\Delta_{Q+iP},\Delta_{\alpha_4}, \Delta_{\alpha_3},\Delta_{\alpha_3},\Delta_{\alpha_4} )).
\end{split}\]
so that the general case follows from the case $\alpha_1=\alpha_4$ and $\alpha_2=\alpha_3$. \qed
\section{Proof of Proposition \ref{proofward}}\label{subproofward}
\subsection{ Preliminary remarks}
Before proceeding to computations, we stress that the reader should keep in mind that the SET field $T(u)$ is not a proper random field. In particular the expectation in \eqref{propcontour} is a notation for the object constructed in the limit as $\boldsymbol{\epsilon}\to0$ and $t\to\infty$. In LCFT the construction of the correlation functions of the SET is subtle. This was done in \cite{KRV} only for one or two SET insertions. However the situation is here much simpler and we will not have to rely on \cite{KRV}. The reason is that we need to deal with correlation functions with a regularized LCFT expectation $\langle \cdot\rangle_t$ where we have replaced $M_\gamma(\mathbb{C})$ by $M_\gamma(\mathbb{C}\setminus \mathbb{D}_t)$ in \eqref{modifiedlcft}, and all the SET insertions that we will consider are located in $\mathbb{D}_t$ which as we will see makes them much more regular than in the full LCFT.
The regularized SET field $T_\epsilon(u)$ is a proper random field and its correlation functions in the presence of the vertex operators are defined as limits of the corresponding ones with regularized vertex operators
\begin{align}\label{regular}
\langle T_{\boldsymbol{\epsilon}}(\mathbf{u})\bar T_{\boldsymbol{\epsilon'}}({ \mathbf{v}}) V_{\alpha}(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t =\lim_{\epsilon''\to 0}\langle T_{\boldsymbol{\epsilon}}(\mathbf{u})\bar T_{\boldsymbol{\epsilon'}}({ \mathbf{v}}) V_{\alpha,\epsilon''}(0)\prod_{i=1}^nV_{\alpha_i ,\epsilon''}(z_i) \rangle_t .
\end{align}
The existence of this limit follows from the representation of the expectation on the RHS as a GFF expectation of an explicit function of a GMC integral \cite{DKRV}. In particular the limit is independent of the regularization procedure used for the vertex operators. For simplicity we will use in this section the following:
\begin{align}\label{Vregulward}
V_{\alpha,\epsilon}(x)=\epsilon^{\frac{\alpha^2}{2}}e^{\alpha \phi_\epsilon(x)}=|x|_+^{-4\Delta_\alpha}e^{\alpha c}e^{\alpha X_\epsilon(x)-\frac{\alpha^2}{2}\mathds{E} X_\epsilon(x)^2}(1+{\mathcal O}(\epsilon))
\end{align}
where $X_\epsilon$ is the same regularization as in $T_\epsilon$. The ${\mathcal O}(\epsilon)$ will drop out from all terms in the $\epsilon\to 0$ limit and will not be displayed below.
The proof of Proposition \ref{proofward} consists of applying Gaussian integration by parts to the $ T_{\boldsymbol{\epsilon}}(\mathbf{u})$ and $\bar T_{\boldsymbol{\epsilon'}}({ \mathbf{v}}) $ factors in \eqref{regular}, to which we now turn.
\subsection{ Gaussian integration by parts}
\eqref{regular} is analysed using Gaussian integration by parts. For a centered Gaussian vector $(X,Y_1,\dots, Y_N)$ and a smooth function $f$ on $\mathbb{R}^N$, the Gaussian integration by parts formula is $$\mathds{E}[X \,f(Y_1,\dots,Y_N)]=\sum_{k=1}^N\mathds{E}[XY_k]\mathds{E}[\partial_{k}f(Y_1,\dots,Y_N)]. $$
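As a sanity check of this formula in the simplest case $N=1$, take $f(y)=e^{\lambda y}$ with $\lambda\in\mathbb{R}$ (a toy example, not needed later): both sides can be evaluated explicitly and
\begin{equation*}
\mathds{E}\big[X\, e^{\lambda Y}\big]=\lambda\,\mathds{E}[XY]\,e^{\frac{\lambda^2}{2}\mathds{E}[Y^2]}=\lambda\,\mathds{E}[XY]\,\mathds{E}\big[e^{\lambda Y}\big],
\end{equation*}
which is exactly the mechanism producing the vertex operator terms below.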
Applied to the LCFT this leads to the following formula. Let $\phi=c+X-2Q\log|z|_+$ be the Liouville field and $F$ a smooth function on $\mathbb{R}^N$. Define for $u,v\in\mathbb{C}$
\begin{align}\label{greenderi}
C(u,v)=-\frac{1}{2}\frac{1}{u-v},\ \ C_{\epsilon,\epsilon'}(u,v)=\int \rho_{\epsilon}(u-u') \rho_{\epsilon'}(v-v')C(u',v')\dd u'\dd v'
\end{align}
with $( \rho_{\epsilon})_\epsilon$ a mollifying family of the type $\rho_\epsilon(\cdot)=\epsilon^{-2}\rho(|\cdot|/\epsilon)$.
Then for $z,x_1,\dots,x_N\in\mathbb{C}$
\begin{align} \label{basicipp}
\langle \partial_z\phi_\epsilon(z) F(\phi_{\epsilon'}(x_1),\dots,\phi_{\epsilon'}(x_N))\rangle_t
&=
\sum_{k=1}^NC_{\epsilon,\epsilon'}(z,x_k)\langle \partial_{k}F(\phi_{\epsilon'}(x_1),\dots,\phi_{\epsilon'}(x_N))\rangle_t
\\
&-\mu\gamma\int_{\mathbb{C}_t} C_{\epsilon,0}(z,x)\langle V_\gamma(x) F(\phi_{\epsilon'}(x_1),\dots,\phi_{\epsilon'}(x_N))\rangle_t \dd x \nonumber
\end{align}
where $F$ in the applications below is such that all the terms here are well defined. Note that $ \partial_uG(u,v)=C(u,v)+\frac{_1}{^2} u^{-1}1_{|u|>1}$. The virtue of the Liouville field is that the annoying metric dependent terms $u^{-1}1_{|u|>1}$ drop out from the formulae. This fact is nontrivial and it was proven in \cite[Subsection 3.2]{KRV}, for the case $t=\infty$ with $F$ corresponding to product of vertex operators. The proof goes the same way to produce \eqref{basicipp} with a finite $t$.
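For orientation, let us sketch heuristically where the two lines of \eqref{basicipp} come from (this is only a sketch; the rigorous statement is the one quoted from \cite{KRV}). In the regularized Liouville expectation, the Gaussian field enters both through the arguments $\phi_{\epsilon'}(x_k)$ of $F$ and through the interaction weight $e^{-\mu e^{\gamma c}M_\gamma(\mathbb{C}_t)}$. Contracting $\partial_z\phi_\epsilon(z)$ with the former produces the first line, while contracting it with the latter formally brings down
\begin{equation*}
-\mu\gamma\int_{\mathbb{C}_t} C_{\epsilon,0}(z,x)\,V_\gamma(x)\,\dd x,
\end{equation*}
which accounts for the second line; the metric dependent part of the covariance drops out as recalled above.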
The first application of this formula is a direct proof of the existence of the $\boldsymbol{\epsilon},\boldsymbol{\epsilon}'\to 0$ limit of \eqref{regular} which will be useful also later in the proof. We have
\begin{proposition}
The functions \eqref{regular} converge uniformly on compact subsets of $({\bf u},{\bf v},{\bf z})\in \mathbb{D}_t\times \mathbb{D}_t \times \theta{\mathcal Z}$
\begin{align}\label{regularlimit}
\lim_{\boldsymbol{\epsilon},\boldsymbol{\epsilon}'\to 0}\langle T_{\boldsymbol{\epsilon}}(\mathbf{u})\bar T_{\boldsymbol{\epsilon'}}({ \mathbf{v}}) V_{\alpha}(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t :=\langle T(\mathbf{u})\bar T({ \mathbf{v}}) V_{\alpha}(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t
\end{align}
where the limit is analytic in ${\bf u}\in {\mathcal O}_{e^{-t}}$ and anti-analytic in ${\bf v}\in {\mathcal O}_{e^{-t}}$.
\end{proposition}
\begin{proof}
We consider for simplicity only the case $\tilde{\nu}=0$. The LHS is defined as the limit $\epsilon''\to 0$ in \eqref{regular} but we will for clarity work directly with $\epsilon''=0$ as it will be clear below that this limit trivially exists. Indeed, the functions $C_{\epsilon,\epsilon'}(u,v)$
are smooth for $\epsilon, \epsilon'>0$ and they converge together with their derivatives uniformly on $|u|<e^{-s}$, $|v|\geq e^{-t}$ for all $s>t$ to the derivatives of $C(u,v)$.
We will now apply \eqref{basicipp} to all the $\phi_\epsilon$ factors in the SET tensors in \eqref{regularlimit} one after the other. To make this systematic let us introduce the notation
$$({\mathcal O}_1,{\mathcal O}_2,{\mathcal O}_3) :=( \partial_z\phi_\epsilon, \partial_z^2\phi_\epsilon, (\partial_z\phi_\epsilon)^2-\mathds{E}(\partial_z\phi_\epsilon)^2).$$
Applying the integration by parts formula to $\partial_z\phi_\epsilon(u_k)$ (or to $\partial_z^2\phi_\epsilon(u_k)$ if $i_k=2$ below) we obtain
\begin{align*}
\langle \prod_{j=1}^k {\mathcal O}_{i_j}(u_j)V_\alpha(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t=&\sum_{l=1}^{k-1} \langle \{{\mathcal O}_{i_k}(u_k){\mathcal O}_{i_l}(u_l)\} \prod_{j\neq k,l} {\mathcal O}_{i_j}(u_j)V_\alpha(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t \\+&\sum_{l=0}^n \alpha_l \langle \{{\mathcal O}_{i_k}(u_k)\phi(z_l)\}\prod_{j\neq k} {\mathcal O}_{i_j}(u_j)V_\alpha(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t\\-&\mu\gamma \int_{\mathbb{C}_t} \langle \{{\mathcal O}_{i_k}(u_k)\phi(x)\} \prod_{j\neq k} {\mathcal O}_{i_j}(u_j)V_\alpha(0)V_\gamma(x)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t\dd x
\end{align*}
where $\alpha_0=\alpha$ and $z_0=0$ and we introduced the following notation
\begin{align*}
\{{\mathcal O}_1(u){\mathcal O}_1(v)\} &:=\partial_vC_{\epsilon,\epsilon}(u,v) & & \{{\mathcal O}_2(u){\mathcal O}_2(v)\} :=\partial_u\partial^2_{v}C_{\epsilon,\epsilon}(u,v)\\
\{{\mathcal O}_1(u){\mathcal O}_3(v)\} &:=2{\mathcal O}_1(v)\partial_vC_{\epsilon,\epsilon}(u,v) & & \{{\mathcal O}_1(u){\mathcal O}_2(v)\} :=\partial^2_{v}C_{\epsilon,\epsilon}(u,v)\\
\{{\mathcal O}_3(u){\mathcal O}_1(v)\} &:={\mathcal O}_1(u)\partial_vC_{\epsilon,\epsilon}(u,v) & & \{{\mathcal O}_2(u){\mathcal O}_1(v)\} :=\partial_{u}\partial_{v}C_{\epsilon,\epsilon}(u,v) \\
\{{\mathcal O}_3(u){\mathcal O}_3(v)\} &:=2{\mathcal O}_1(u){\mathcal O}_1(v)\partial_vC_{\epsilon,\epsilon}(u,v)& & \{{\mathcal O}_2(u){\mathcal O}_3(v)\} :=2{\mathcal O}_1(v)\partial_{u}\partial_{v}C_{\epsilon,\epsilon}(u,v)\\
\{{\mathcal O}_3(u){\mathcal O}_2(v)\} &:={\mathcal O}_1(u)\partial^2_{v}C_{\epsilon,\epsilon}(u,v) & &
\end{align*}
etc. Similarly
$$\{{\mathcal O}_1(u)\phi(x)\} := C_{\epsilon,0}(u,x), \quad\{{\mathcal O}_2(u)\phi(x)\} := \partial_u C_{\epsilon,0}(u,x), \quad\{{\mathcal O}_3(u)\phi(x)\} :={\mathcal O}_1(u)C_{\epsilon,0}(u,x). $$
Since $T_\epsilon=Q{\mathcal O}_2-{\mathcal O}_3$ we can iterate the integration by parts formula to obtain an expression for $\langle T_{\boldsymbol{\epsilon}}(\mathbf{u})
V_{\alpha}(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t $ as a sum of terms
of the form
\begin{align}\label{basicterm}
C\prod_{\alpha, \beta} \partial^{i_{\alpha\beta}}C_{\epsilon,\epsilon}(u_\alpha,u_\beta)\prod_{\alpha,l} \partial^{j_{\alpha,l }}C_{\epsilon,0}(u_\alpha,z_l)\int_{\mathbb{C}_t^m}\prod_{\alpha ,k} \partial^{l_{\alpha,k }}C_{\epsilon,0}(u_{\alpha},x_k) \langle V_\alpha(0)\prod_{j=1}^mV_{\gamma }(x_j)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t\dd{\bf x}
\end{align}
where the products run through some subsets of the index values. We have for all compact $K\subset \mathbb{D}_t$
$$ \sup_{u\in K, x\in\mathbb{C}_t}\sup_\epsilon|\partial_u^{l}\partial_{\bar u}^{l'}C_{\epsilon,0}(u,x)| <\infty.$$
Hence the expression \eqref{basicterm} is bounded together with all its derivatives in $\bf u$ uniformly in $\epsilon$ by
\begin{align}\label{aprior}
C\int_{\mathbb{C}_t^m}\langle V_\alpha(0)\prod_{j=1}^mV_{\gamma }(x_j)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t \dd{\bf x}
\end{align}
which by the lemma below is finite. The limit of \eqref{basicterm}, and of all its derivatives, exists by dominated convergence. Clearly the $\partial_{\bar u}$ derivatives vanish so the limit is analytic in $\bf u$.
\end{proof}
We need the following KPZ identity for the a priori bound \eqref{aprior} (recall \eqref{defs}).
\begin{lemma}\label{kpzlemma}
The functions ${\bf x}\in \mathbb{C}_t^m\to \langle V_\alpha(0)\prod_{j=1}^mV_{\gamma }(x_j)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t$ are integrable and
\begin{align}\label{KPZ}
\int_{\mathbb{C}_t^m} \langle V_\alpha(0)\prod_{j=1}^mV_{\gamma }(x_j)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t\dd{\bf x}=C \langle V_\alpha(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t
\end{align}
where $C=(\mu\gamma)^{-m}\prod_{l=0}^{m-1}(\alpha+s+ \gamma l-2Q)$.
\end{lemma}
\begin{proof}
See \cite[Lemma 3.3]{KRV}. Briefly,
$$\langle V_\alpha(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t=\mu^{\frac{2Q-s-\alpha}{\gamma}}\langle V_\alpha(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t\big|_{\mu=1}$$
and the LHS of \eqref{KPZ} equals $(-\partial_\mu)^m\langle V_\alpha(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t$.
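In more detail (a routine computation spelled out for convenience), applying $(-\partial_\mu)^m$ to the scaling relation above gives
\begin{equation*}
(-\partial_\mu)^m\,\mu^{\frac{2Q-s-\alpha}{\gamma}}=\gamma^{-m}\prod_{l=0}^{m-1}\big(\alpha+s+\gamma l-2Q\big)\,\mu^{\frac{2Q-s-\alpha}{\gamma}-m},
\end{equation*}
so that $(-\partial_\mu)^m\langle V_\alpha(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t=(\mu\gamma)^{-m}\prod_{l=0}^{m-1}(\alpha+s+\gamma l-2Q)\,\langle V_\alpha(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t$, which is the constant $C$ in the statement.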
\end{proof}
The representation \eqref{basicterm} will not be useful for a direct proof of the Ward identity due to the $V_\gamma$ insertions. We will rather use the integration by parts inductively, applying it first to $T_\epsilon(u_k)$, which corresponds to the largest contour
in the contour integrals in expression \eqref{propcontour}, and then showing that at each step the $V_\gamma$ insertions give rise to the derivatives in Proposition \ref{proofward}. We give now the inductive step to prove this claim, stated for simplicity for the case $\tilde{\nu}=0$. For this, we introduce the (adjoint) operator $\mathbf{D}_n^\ast$
defined (by duality) by
$$\int_{\mathbb{C}^n} \mathbf{D}_n f(\mathbf{z})\bar{\varphi}(z)\,\dd \mathbf{z}=\int_{\mathbb{C}^n} f(\mathbf{z})\overline{\mathbf{D}_n^\ast \varphi}(z)\,\dd \mathbf{z}$$
for all functions $f$ in the domain of $\mathbf{D}_n$ and all smooth compactly supported functions $ \varphi$ in $\mathbb{C}^n$.
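For concreteness, a short integration by parts (a sketch; since $\varphi$ is compactly supported in $\theta\mathcal{Z}$, away from $0$ and from coincident points, no boundary terms arise) yields the explicit expression
\begin{equation*}
\mathbf{D}_{n}^\ast\varphi=\sum_{i=1}^n\Big(\partial_{\bar z_i}\big(\bar z_i^{-(n-1)}\varphi\big)+\frac{(n-1)}{\bar z_i^{n}}\Delta_{\alpha_i}\,\varphi\Big).
\end{equation*}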
Then
\begin{proposition}\label{wardinduction}
Let $\varphi\in C_0^\infty(\theta{\mathcal Z})$ be a smooth compactly supported function in $\theta{\mathcal Z}\subset\mathbb{C}^n$ and define
\begin{align*}
{\mathcal T}_t(\nu,\varphi):=\int \Big(\frac{1}{(2\pi i)^{k}}\oint_{|\mathbf{u}|=\boldsymbol{\delta}_t} \mathbf{u}^{1-\nu} \langle T(\mathbf{u}) V_\alpha(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t\dd{\bf u}\Big)\,\bar{\varphi}({\bf z})\dd{\bf z}
\end{align*}
Then for $\nu^{(k)}=(\nu_1,\dots,\nu_{k-1})$
\begin{align*}
{\mathcal T}_t(\nu,\varphi)={\mathcal T}_t(\nu^{(k)},{\bf D}^\ast_{\nu_k}\varphi)+{\mathcal B}_t(\nu,\varphi)
\end{align*}
where
\begin{align}\label{btbound}
|{\mathcal B}_t(\nu,\varphi)|\leq Ce^{(\alpha+|\nu|-2)t}\int \langle V_\alpha(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t \,|\varphi({\bf z})|\dd{\bf z}.
\end{align}
\end{proposition}
\vskip 2mm
\noindent{\it Proof of Proposition \ref{proofward}}. Iterating Proposition \ref{wardinduction} we get
\begin{align*}
{\mathcal T}_t(\nu,\varphi)=\int \langle V_\alpha(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t \,\overline{{\bf D}^\ast_\nu\varphi}({\bf z})\dd{\bf z}+{\mathcal B}_t(\varphi)
\end{align*}
where ${\mathcal B}_t(\varphi)$ satisfies
\begin{equation*}
|{\mathcal B}_t(\varphi)| \leq Ce^{(\alpha+|\nu|-2)t} \sum_{\ell=1}^k\int \langle V_\alpha(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t \, | {\bf D}^\ast_{\nu_{\ell+1}} \cdots {\bf D}^\ast_{\nu_k} \varphi({\bf z})|\dd{\bf z}
\end{equation*}
where by convention $ {\bf D}^\ast_{\nu_{\ell+1}} \cdots {\bf D}^\ast_{\nu_k} \varphi=\varphi$ if $\ell=k$. The functions ${\bf z}\to \langle V_\alpha(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t$ are continuous on $\theta{\mathcal Z}$ and converge uniformly as $t\to\infty$ on compact subsets of $\theta{\mathcal Z}$ to the function $ \langle V_\alpha(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle$. Hence $ {\mathcal T}_t(\nu,\cdot)$ converges in the Frechet topology of ${\mathcal D}'(\theta{\mathcal Z})$ to the required limit since $ {\mathcal B}_t(\varphi)$ goes to $0$ as $t$ goes to infinity (recall that $\alpha+|\nu|-2<0$). \qed
\subsection{ Proof of Proposition \ref{wardinduction}}
We start the proof of Proposition \ref{wardinduction} by
applying the Gaussian integration by parts formula twice to $T(u_k)$.
This produces plenty of terms which we group in four contributions:
\begin{align}\label{IPPstress}
\langle T(\mathbf{u})& V_\alpha(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t =R(\mathbf{u},\mathbf{z}) + M(\mathbf{u},\mathbf{z}) + N(\mathbf{u},\mathbf{z})+D(\mathbf{u},\mathbf{z}) .
\end{align}
In $R(\mathbf{u},\mathbf{z})$ we group all the terms in which $u_k$ is contracted with $u_l$, $l<k$, or with $0$. These terms do not contribute to the contour integral in the $u_k$ variable since they give rise to integrals of the form
\begin{align*}
\oint_{|u_k|=e^{-t}\delta_k}u_k^{1-\nu_k}(u_k-v)^{-a}(u_k-w)^{-b}\dd u_k
\end{align*}
where $v,w\in\{0,u_1,\dots,u_{k-1}\}$ and $a+b\geq 2$. Since $|v|,|w|<e^{-t}\delta_k$ and $\nu_k>0$, all poles of the integrand lie inside the contour and the integrand decays at least like $|u_k|^{-2}$ at infinity, so the integral vanishes. We conclude
\begin{align}\label{Rcontour}
\oint_{|u_k|=e^{-t}\delta_k}u_k^{1-\nu_k}R({\bf u},\mathbf{z})\dd u_k=0.
\end{align}
For the benefit of the reader we display all the terms in $R({\bf u},\mathbf{z})$ in the Appendix \ref{ippjunk}.
Let us now introduce the notations ${\bf u}^{(k)}:=(u_1,\dots, u_{k-1})$ and ${\bf u}^{(k,\ell)}:=(u_1,\dots,u_{\ell-1},u_{\ell+1},\dots, u_{k-1})$. The second contribution in \eqref{IPPstress} collects the contractions hitting only one $V_{\alpha_p}$:
\begin{align}\label{IPPstressM}
M(\mathbf{u},\mathbf{z})= \sum_{p=1}^{n} (\frac{Q\alpha_p}{2}- \frac{\alpha_p^2}{4}) \frac{1}{(u_k-z_p)^2} \langle T(\mathbf{u}^{(k)}) V_\alpha(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t .
\end{align}
We can then do the $u_k$ integral explicitly to obtain
\begin{align}\label{confweightward}
& \int\Big(\frac{1}{(2\pi i)^{k}}\oint_{|\mathbf{u}|=\boldsymbol{\delta}_t}
\mathbf{u}^{1-\nu}M(\mathbf{u},\mathbf{z}) \, \dd \mathbf{u}\Big)\bar{\varphi}({\bf z})\dd{\bf z} \\
& = \int \Big(\frac{1}{(2\pi i)^{k-1}}\oint_{|\mathbf{u}^{(k)}|=\boldsymbol{\delta}^{(k)}_t}(\mathbf{u}^{(k)})^{1-\nu^{(k)}}\langle T(\mathbf{u}^{(k)}) V_\alpha(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t \, \dd \mathbf{u}^{(k)}\Big)\left ( \sum_{p=1}^n\frac{\nu_k-1}{z_p^{\nu_k}}\Delta_{\alpha_p} \right )
\bar{\varphi}({\bf z})\dd{\bf z}
\end{align}
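The $u_k$-integral performed here is a one-line residue computation: since the insertion points $z_p$ lie outside the contour $|u_k|=e^{-t}\delta_k$ (for $t$ large and $\varphi$ compactly supported), the only pole of the integrand inside it sits at $u_k=0$ and
\begin{equation*}
\frac{1}{2\pi i}\oint_{|u_k|=e^{-t}\delta_k}\frac{u_k^{1-\nu_k}}{(u_k-z_p)^2}\,\dd u_k=\frac{\nu_k-1}{z_p^{\nu_k}},
\end{equation*}
which also shows that this contribution vanishes when $\nu_k=1$.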
Note that this term contains the constant part of the differential operator $\mathbf{D}_{\nu_k}$.
The third contribution is given by terms where all contractions hit $V_\gamma$:
\begin{align}\nonumber
N(\mathbf{u},\mathbf{z})=&(\frac{\mu\gamma^2}{4}-\frac{\mu\gamma Q}{2}) \int_{\mathbb{C}_t} \frac{1}{(u_k-x)^2}\langle T(\mathbf{u}^{(k)}) V_\alpha(0)V_\gamma(x)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t \,\dd x
\\=
& - \mu \int_{\mathbb{C}_t} \frac{1}{(u_k-x)^2}\langle T(\mathbf{u}^{(k)}) V_\alpha(0)V_\gamma(x)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t \,\dd x. \label{Nterm}
\end{align}
Finally $D$ gathers all the other terms
\begin{align}\label{IPPstressrest}
D(\mathbf{u},\mathbf{z}) = \sum_{i=1}^9T_i(\mathbf{u},\mathbf{z})
\end{align}
with
\begin{align}
T_1(\mathbf{u},\mathbf{z})&=
-\sum_{\ell=1}^{k-1}\sum_{p=1}^n \frac{Q\alpha_p}{(u_k-u_\ell)^3(u_k-z_p)} \langle T(\mathbf{u}^{(\ell,k)}) V_\alpha(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t
\nonumber \\
T_2(\mathbf{u},\mathbf{z}) &= \sum_{\ell=1}^{k-1} \sum_{p=1}^{n} \frac{\alpha_p}{(u_k-u_\ell)^2(u_k-z_p)} \langle \partial_{z}X(u_{\ell}) T(\mathbf{u}^{(\ell,k)}) V_\alpha(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t
\nonumber\\
T_3(\mathbf{u},\mathbf{z}) &=- \sum_{p=1}^{n} \frac{\alpha\alpha_p}{2} \frac{1}{(u_k-z_p)u_k} \langle T(\mathbf{u}^{(k)}) V_\alpha(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t
\nonumber\\
T_4(\mathbf{u},\mathbf{z})& =\frac{\mu\gamma}{2} \sum_{p=1}^{n} \alpha_p\int_{\mathbb{C}_t} \frac{1}{(u_k-z_p)(u_k-x)} \langle T(\mathbf{u}^{(k)}) V_\alpha(0)V_\gamma(x)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t \,\dd x
\nonumber\\
T_5(\mathbf{u},\mathbf{z}) &=- \sum_{p\not=p'=1}^{n} \frac{\alpha_p\alpha_{p'}}{4} \frac{1}{(u_k-z_p)(u_k-z_{p'})} \langle T(\mathbf{u}^{(k)}) V_\alpha(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t
\nonumber\\
T_6(\mathbf{u},\mathbf{z}) & =\mu Q\gamma \sum_{\ell=1}^{k-1}\int_{\mathbb{C}_t} \frac{1}{(u_k-u_\ell)^3(u_k-x)} \langle T(\mathbf{u}^{(\ell,k)}) V_\alpha(0)V_\gamma(x)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t \,\dd x
\nonumber\\
T_7(\mathbf{u},\mathbf{z}) &=-\mu\gamma \sum_{\ell=1}^{k-1} \int_{\mathbb{C}_t} \frac{1}{(u_k-u_\ell)^2(u_k-x)} \langle \partial_{z}X(u_{\ell}) T(\mathbf{u}^{(\ell,k)}) V_\alpha(0)V_\gamma(x)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t \,\dd x
\nonumber\\
T_8(\mathbf{u},\mathbf{z}) &=\frac{\mu\gamma \alpha}{2} \int_{\mathbb{C}_t} \frac{1}{u_k(u_k-x)} \langle T(\mathbf{u}^{(k)}) V_\alpha(0)V_\gamma(x)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t \,\dd x
\nonumber\\
T_9(\mathbf{u},\mathbf{z}) &= -\frac{\mu^2\gamma^2}{4} \int_{\mathbb{C}_t}\int_{\mathbb{C}_t} \frac{1}{(u_k-x)(u_k-x')} \langle T(\mathbf{u}^{(k)}) V_\alpha(0)V_\gamma(x)V_\gamma(x')\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t \,\dd x\,\dd x' .
\label{Dterms}
\end{align}
We need to show that $N$ and $D$ will give rise (after contour integration) to the $\partial_{z_i}$-derivatives in the expression $\mathbf{D}_{\nu_k}\langle T(\mathbf{u}^k) V_\alpha(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t $.
To show this we need to analyse $N$ further.
Regularizing the vertex insertions (besides the $V_\gamma$ insertion, we also regularize the $V_{\alpha_i}$'s for later need)
in $N(\mathbf{u},\mathbf{z})$ given by \eqref{Nterm}, and performing an integration by parts (Green formula) in the $x$ integral we get
\begin{align*}
N(\mathbf{u},\mathbf{z}) =& - \mu \lim_{\epsilon\to 0} \int_{\mathbb{C}_t} \partial_x\frac{1}{u_k-x }\langle T(\mathbf{u}^{(k)}) V_\alpha(0)V_{\gamma,\epsilon}(x)\prod_{i=1}^nV_{\alpha_i,\epsilon }(z_i) \rangle_t \,\dd x\\
=& B_t(\mathbf{u},\mathbf{z}) + \mu \gamma \lim_{\epsilon\to 0} \int_{\mathbb{C}_t} \frac{1}{u_k-x } \langle T(\mathbf{u}^{(k)}) V_\alpha(0) \partial_x\phi_\epsilon(x)V_{\gamma,\epsilon}(x)\prod_{i=1}^nV_{\alpha_i ,\epsilon}(z_i) \rangle_t \,\dd x\\=: &B_t(\mathbf{u},\mathbf{z}) + \tilde N(\mathbf{u},\mathbf{z})
\end{align*}
where the boundary term appearing in the Green formula has the $\epsilon\to 0$ limit given by
\begin{align}\label{bryterm}
B_t(\mathbf{u},\mathbf{z}) :=
i\mu\oint_{|x|=e^{-t}}
\frac{1}{u_k-x} \langle T(\mathbf{u}^{(k)}) V_\alpha(0)V_\gamma(x)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t \,\dd\bar x
\end{align}
and we used
\eqref{Vregul} to write
\begin{align*}
\partial_xV_{\gamma,\epsilon}(x)=\gamma\partial_x\phi_\epsilon(x)V_{\gamma,\epsilon}(x).
\end{align*}
In $\tilde N(\mathbf{u},\mathbf{z})$ we integrate by parts the $ \partial_x\phi_\epsilon(x)$ and end up with
\begin{align}\nonumber
\tilde N(\mathbf{u},\mathbf{z})
=& - \mu Q \gamma\sum_{\ell=1}^{k-1} \int_{\mathbb{C}_t} \frac{1}{(u_k-x)(x-u_\ell)^3 } \langle T(\mathbf{u}^{(\ell,k)}) V_\alpha(0)V_\gamma(x)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t \,\dd x
\\
& +\mu \gamma\sum_{\ell=1}^{k-1} \int_{\mathbb{C}_t} \frac{1}{(u_k-x)(x-u_\ell)^2 } \langle \partial_zX(u_\ell) T(\mathbf{u}^{(\ell,k)}) V_\alpha(0)V_\gamma(x)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t \,\dd x \nonumber
\\
& -\frac{\mu \gamma \alpha}{2} \int_{\mathbb{C}_t} \frac{1}{(u_k-x) x } \langle T(\mathbf{u}^{(k)}) V_\alpha(0)V_\gamma(x)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t \,\dd x \nonumber
\\
& +\frac{\mu^2 \gamma^2 }{2} \int_{\mathbb{C}_t} \int_{\mathbb{C}_t} \frac{1}{(u_k-x) (x-x') } \langle T(\mathbf{u}^{(k)}) V_\alpha(0)V_\gamma(x)V_\gamma(x')\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t \,\dd x \nonumber
\\
&+ {\mu \gamma }\lim_{\epsilon\to 0} \sum_{p=1}^n \int_{\mathbb{C}_t} \frac{\alpha_p}{u_k-x }C_{\epsilon,0}(x,z_p) \langle T(\mathbf{u}^{(k)}) V_\alpha(0)V_\gamma(x)\prod_{i=1}^nV_{\alpha_i,\epsilon }(z_i) \rangle_t \,\dd x\nonumber
\\
&=:
T'_6(\mathbf{u},\mathbf{z})+T'_7(\mathbf{u},\mathbf{z})+T'_{8}(\mathbf{u},\mathbf{z})+T'_{9}(\mathbf{u},\mathbf{z})+T'_{4}(\mathbf{u},\mathbf{z})\label{T4leq}
\end{align}
where again we took the $\epsilon\to 0$ limit in the terms where it was obvious. In particular this identity proves that the limit on the RHS, denoted by $T'_{4}(\mathbf{u},\mathbf{z})$, exists. The numbering of these terms and the ones below will be used when comparing with \eqref{Dterms}.
\subsubsection{Derivatives of correlation functions}
We want to compare the expression \eqref{IPPstress} to derivatives of the function $\langle T(\mathbf{u}^{(k)}) V_\alpha(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t $.
We need to treat separately the cases $\nu_k\geq 2$ and $\nu_k=1$.
\subsubsection{Case $\nu_k\geq 2$ }
We have
\begin{lemma}\label{deriv2}
Let
\begin{align*}
I_\epsilon(\mathbf{u},{\bf z}):=
\sum_{p=1}^n\frac{1}{u_k-z_p}& \partial_{z_p}\langle T(\mathbf{u}^k) V_\alpha(0)\prod_{i=1}^nV_{\alpha_i,\epsilon }(z_i) \rangle_t
\end{align*}
Then $\lim_{\epsilon\to 0}I_\epsilon(\mathbf{u},{\bf z}):= I(\mathbf{u},{\bf z})$
exists and defines a continuous function in ${\bf z}\in \theta{\mathcal Z}$ satisfying
\begin{align}\label{contourI}
\int\Big(\frac{1}{(2\pi i)^{k}}\oint_{|\mathbf{u}|=\boldsymbol{\delta}_t}{\bf u}^{1-\nu} I(\mathbf{u},{\bf z})d{\bf u} \Big)\bar{\varphi}({\bf z})\dd {\bf z}={\mathcal T}_t(\nu^{(k)},\hat{\bf D}_{\nu_k}\varphi)
\end{align}
for all $\varphi\in C_0^\infty(\theta{\mathcal Z})$ with $\hat{\bf D}_{n}={\bf D}^\ast_{n}-(n-1)\sum_i\Delta_{\alpha_i}z_i^{-n}$.
\end{lemma}
\begin{proof}
We have
\begin{align*}
I_\epsilon(\mathbf{u},{\bf z})=\sum_{p=1}^n\frac{\alpha_p}{u_k-z_p}& \langle T(\mathbf{u}^{(k)}) V_\alpha(0)\partial_{z_p}\phi_\epsilon(z_p)\prod_{i=1}^nV_{\alpha_i,\epsilon }(z_i) \rangle_t = K_\epsilon(\mathbf{u},{\bf z})+L_\epsilon(\mathbf{u},{\bf z})
\end{align*}
where we integrate by parts the $\partial_{z_p}\phi_\epsilon(z_p)$ and $K_\epsilon({\bf u},{\bf z})$ collects the terms with an obvious $\epsilon\to 0$ limit $K({\bf u},{\bf z})$:
\begin{align*}
K(\mathbf{u},{\bf z}) =&- \sum_{p=1}^n\sum_{\ell=1}^{k-1}\frac{Q\alpha_p}{(u_k-z_p)(z_p-u_\ell)^3} \langle T(\mathbf{u}^{(\ell,k)}) V_\alpha(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t
\\
&+ \sum_{p=1}^n\sum_{\ell=1}^{k-1}\frac{\alpha_p}{(u_k-z_p)(z_p-u_\ell)^2} \langle \partial_{z}X(u_\ell) T(\mathbf{u}^{(\ell,k)}) V_\alpha(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t
\\
&- \sum_{p=1}^n \frac{\alpha_p\alpha}{2}\frac{1}{(u_k-z_p)z_p} \langle T(\mathbf{u}^{(k)}) V_\alpha(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t
\\
&- \sum_{p\not= p'=1}^n \frac{\alpha_p\alpha_{p'}}{2}\frac{1}{(u_k-z_p)(z_p-z_{p'})} \langle T(\mathbf{u}^{(k)}) V_\alpha(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t
\\
=:&D_1(\mathbf{u},{\bf z})+D_2(\mathbf{u},{\bf z})+D_3(\mathbf{u},{\bf z})
+D_5(\mathbf{u},{\bf z}) ,
\end{align*}
whereas
\begin{align*}
L_\epsilon(\mathbf{u},{\bf z})
=-\mu\gamma\sum_{p=1}^n
\alpha_p\int_{\mathbb{C}_t}
\frac{1}{u_k-z_p}C_{\epsilon,0}(z_p,x) \langle T(\mathbf{u}^{(k)}) V_\alpha(0)V_\gamma(x)\prod_{i=1}^nV_{\alpha_i ,\epsilon}(z_i) \rangle_t \,\dd x .
\end{align*}
Since $C_{0,0}(z_p,x) =-\frac{1}{2}\frac{1}{z_p-x}$ and since it is not clear that $\frac{1}{z_p-x} \langle T(\mathbf{u}^{(k)}) V_\alpha(0)V_\gamma(x)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t $ is integrable, the $\epsilon\to 0$ limit of $L_\epsilon$ is problematic\footnote{Actually, this fact was shown in \cite{dozz} without the SET insertions and could be proven here as well but we will follow another route because the recursion to prove this extension is painful.}. However, we can compare it with the term ${T_4'}$ in \eqref{T4leq}. Writing
$$\frac{1}{u_k-z_p}=\frac{1}{u_k-x}+\frac{z_p-x}{(u_k-z_p)(u_k-x)},
$$
we conclude that $ L_\epsilon$ converges:
\begin{align*}
\lim_{\epsilon\to 0} L_\epsilon(\mathbf{u},{\bf z})
=&-\mu \gamma\lim_{\epsilon\to 0}\sum_{p=1}^n
\alpha_p \Big(\int_{\mathbb{C}_t}
\frac{1}{u_k-x}C_{\epsilon,0}(z_p,x) \langle T(\mathbf{u}^{(k)}) V_\alpha(0)V_\gamma(x)\prod_{i=1}^nV_{\alpha_i,\epsilon }(z_i) \rangle_t \,\dd x
\\
& + \int_{\mathbb{C}_t}
\frac{z_p-x}{(u_k-z_p)(u_k-x)}C_{\epsilon,0}(z_p,x) \langle T(\mathbf{u}^{(k)}) V_\alpha(0)V_\gamma(x)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t \,\dd x\Big)
\\
=&T_4'(\mathbf{u},{\bf z})+T_4(\mathbf{u},{\bf z}).
\end{align*}
Indeed, setting $z=z_p-x$ the function
\begin{align*}
(z_p-x)C_{\epsilon,0}(z_p,x)= -\frac{_1}{^2}\int \rho_\epsilon(y)\frac{z}{z+y}dy=\frac{i}{2\epsilon^2}\int_{\mathbb{R}^+}\rho(\tfrac{r}{\epsilon})\oint_{|u|=1} \frac{z}{z+ru}\frac{du}{u}rdr=-\pi \int_{\mathbb{R}^+}\rho(r)1_{r<|z|/\epsilon}rdr
\end{align*}
is uniformly bounded and converges almost everywhere to $-\frac{_1}{^2}$.
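In the last step we assumed, as usual, that the mollifier has total mass one, so that $2\pi\int_{\mathbb{R}^+}\rho(r)r\,\dd r=\int_{\mathbb{C}}\rho_\epsilon(y)\,\dd y=1$; the displayed quantity then indeed converges, for every fixed $z\neq 0$, to $-\pi\cdot\frac{1}{2\pi}=-\frac{_1}{^2}$.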
The same argument can be repeated to the smeared functions to show that (because convergence is uniform over compact subsets of $\theta\mathcal{Z}$)
\begin{align*}
\lim_{\epsilon\to 0}
\sum_{p=1}^n\int\Big(\frac{\alpha_p}{u_k-z_p} \partial_{z_p}\langle T(\mathbf{u}^{(k)}) V_\alpha(0)\prod_{i=1}^nV_{\alpha_i,\epsilon }(z_i) \rangle_t \Big)\bar{\varphi}({\bf z})d{\bf z}:=\ell(\nu,\varphi)
\end{align*}
exists. Then integrating $\partial_{z_p}$ by parts and using that $\lim_{\epsilon\to 0}
\langle T(\mathbf{u}^{(k)}) V_\alpha(0)\prod_{i=1}^nV_{\alpha_i,\epsilon }(z_i) \rangle_t $ exists we conclude
\begin{align*}
\ell(\nu,\varphi)=
-\int\langle T(\mathbf{u}^{(k)}) V_\alpha(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t \sum_{p=1}^n \partial_{z_p}(\frac{\alpha_p}{u_k-z_p}
\varphi({\bf z}))\dd {\bf z}
\end{align*}
which proves \eqref{contourI}.
\end{proof}
We have obtained the relation
\begin{align}
N(\mathbf{u},\mathbf{z})-I(\mathbf{u},{\bf z})&=B_t(\mathbf{u},\mathbf{z})+\sum_{i=6}^9T'_i(\mathbf{u},\mathbf{z})-\sum_{i=1}^3D_i(\mathbf{u},\mathbf{z})-D_5(\mathbf{u},\mathbf{z})
-T_4(\mathbf{u},{\bf z}).\label{N-I}
\end{align}
Let us consider the expression
\begin{align*}
K(\mathbf{u},{\bf z}):=&\langle T(\mathbf{u})V_\alpha(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t -I(\mathbf{u},{\bf z})- M(\mathbf{u},\mathbf{z})
\\=&R(\mathbf{u},\mathbf{z}) + N(\mathbf{u},\mathbf{z})+D(\mathbf{u},\mathbf{z}) -I(\mathbf{u},{\bf z})
\end{align*}
By \eqref{confweightward} and \eqref{contourI} we have
\begin{align}\label{contourI1}
\int\Big(\frac{1}{(2\pi i)^{k}}\oint_{|\mathbf{u}|=\boldsymbol{\delta}_t}{\bf u}^{1-\nu} K(\mathbf{u},{\bf z})\dd{\bf u} \Big)\bar{\varphi}({\bf z})\dd {\bf z}&={\mathcal T}_t(\nu,\varphi)-{\mathcal T}_t(\nu^{(k)},\mathbf{D}^\ast_{\nu_k}\varphi).
\end{align}
On the other hand
combining \eqref{IPPstress}, \eqref{IPPstressrest} and \eqref{N-I}
we obtain
\begin{align}
K&=R+\sum_{i=1}^3(T_i-D_i)+ T_5-D_5+\sum_{i=6}^9(T_i+T'_i)
+B_t\label{finalsum}.
\end{align}
Now some simple algebra (see Appendix \ref{ippjunk}) gives:
\begin{align}\label{cancel1}T'_{9}=-T_{9}, \ \
T_5&=D_5\\ \label{cancel3}
\oint_{|u_k|=e^{-t}\delta_k}u_k^{1-\nu_k}({T}_i(\mathbf{u},\mathbf{z})+{T'}_i(\mathbf{u},\mathbf{z}))\,\dd u_k&=0\ \ i=6,7,8
\\ \label{cancel4}
\oint_{|u_k|=e^{-t}\delta_k}u_k^{1-\nu_k}\big(T_i(\mathbf{u},\mathbf{z}) -D_i(\mathbf{u},\mathbf{z})\big) \,\dd u_k&=0\ \ i=1,2,3
\end{align}
Hence using these relations and \eqref{Rcontour} we conclude
\begin{align*}
\int\Big(\oint_{|\mathbf{u}|=\boldsymbol{\delta}_t}{\bf u}^{1-\nu} K(\mathbf{u},{\bf z})\dd{\bf u} \Big)\bar{\varphi}({\bf z})\dd {\bf z}&= \int\Big(\oint_{|\mathbf{u}|=\boldsymbol{\delta}_t}{\bf u}^{1-\nu} B_t(\mathbf{u},{\bf z})\dd{\bf u} \Big)\bar{\varphi}({\bf z})\dd{\bf z} =:B_t(\nu,\varphi).
\end{align*}
Thus to prove Proposition \ref{wardinduction} for $\nu_k\geq 2$ we need to prove the bound \eqref{btbound} for $B_t(\nu,\varphi)$.
Recalling \eqref{bryterm} we get by the residue theorem
\begin{align*}
\oint_{|\mathbf{u}|=\boldsymbol{\delta}_t}
\mathbf{u}^{1-\nu} B_t(\mathbf{u},{\bf z}) \, \dd \mathbf{u}
=-2\pi\oint_{|\mathbf{u}^{(k)}|=\boldsymbol{\delta}^{(k)}_t}
(\mathbf{u}^{(k)})^{1-\nu^{(k)}} \oint_{|x|=e^{-t}} x^{1-\nu_k}\langle T(\mathbf{u}^{(k)}) V_\alpha(0)V_\gamma(x)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t \,\dd\bar x \, \dd \mathbf{u}^{(k)}.
\end{align*}
By \eqref{basicterm} (at $\epsilon=0$ and with an extra $V_{\alpha_{n+1}}(z_{n+1})=V_\gamma(x)$) the expectation on the RHS is a sum of terms of the form
\begin{align*}
\int_{\mathbb{C}_t^m}I(\mathbf{u}^{(k)},{\bf x},{\bf z},x)\langle V_\alpha(0)V_{\gamma }(x)\prod_{\ell=1}^mV_{\gamma }(x_\ell)\prod_{i=1}^{n}V_{\alpha_i }(z_i) \rangle_t\dd{\bf x}
\end{align*}
where
\begin{align*}
I(\mathbf{u}^{(k)},{\bf x},{\bf z},x)=
C\prod_{\alpha, \beta} \frac{1}{(u_\alpha-u_\beta)^{k_{\alpha\beta}}}\prod_{\alpha,i}\frac{1}{(u_\alpha-z_i)^{l_{\alpha i}}}\prod_{\alpha,\ell}\frac{1}{(u_\alpha-x_\ell)^{m_{\alpha \ell}} }\prod_{\alpha}\frac{1}{(u_\alpha-x)^{n_{\alpha }} }
\end{align*}
where $\sum k_{\alpha\beta}+\sum l_{\alpha k}+\sum m_{\alpha l}+\sum n_{\alpha }=2(k-1)$. Performing the $u$-integrals in the order $u_{k-1},u_{k-2},\dots$ by the residue theorem we get
\begin{align*}
\oint_{|\mathbf{u}^{(k)}|=\boldsymbol{\delta}^{(k)}_t}
(\mathbf{u}^{(k)})^{1-\nu^{(k)}} I(\mathbf{u}^{(k)},{\bf x},{\bf z},x) \dd \mathbf{u}=\sum C(a,{\bf b},{\bf c})x^{-a}\prod_\ell x_\ell^{-b_\ell}\prod_iz_i^{-c_i}
\end{align*}
with $a+\sum b_\ell+\sum c_i=|\nu^{(k)}|$. Since $|x_\ell|\geq e^{-t}$ and $|x|= e^{-t}$ we conclude
\begin{align*}
|\oint_{|\mathbf{u}|=\boldsymbol{\delta}_t}
\mathbf{u}^{1-\nu} B_t(\mathbf{u},\mathbf{z}) \, \dd \mathbf{u}|\leq Ce^{t(|\nu|-2)}
\max_{m\leq 2(k-1)}\sup_{|x|=e^{-t}}\int_{\mathbb{C}_t^m} \langle V_\alpha(0)V_\gamma(x)\prod_{\ell=1}^mV_{\gamma }(x_\ell)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t \dd{\bf x}.
\end{align*}
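For the reader's convenience, here is the bookkeeping behind the exponent (under the assumption, apparent from the residue computation, that the exponents $a,b_\ell,c_i$ are nonnegative): on the integration region $|x|=e^{-t}$, $|x_\ell|\geq e^{-t}$ and $\mathbf{z}$ in a fixed compact subset of $\theta\mathcal{Z}$,
\begin{equation*}
|x|^{-a}\prod_\ell |x_\ell|^{-b_\ell}\prod_i|z_i|^{-c_i}\leq C\, e^{t(a+\sum_\ell b_\ell)}\leq C\,e^{t|\nu^{(k)}|},
\end{equation*}
while the factor $x^{1-\nu_k}$ contributes $e^{t(\nu_k-1)}$ and the contour $|x|=e^{-t}$ has length $2\pi e^{-t}$, giving altogether $e^{t(|\nu^{(k)}|+\nu_k-2)}=e^{t(|\nu|-2)}$.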
By Lemma \ref{kpzlemma}
\begin{align*}
\int_{\mathbb{C}_t^m} \langle V_\alpha(0)V_\gamma(x)\prod_{\ell=1}^mV_{\gamma }(x_\ell)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t d{\bf x}&=C \langle V_\alpha(0)V_\gamma(x)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t\leq C |x|^{-\gamma\alpha}=Ce^{t\gamma\alpha}.
\end{align*}
where we used the formula \eqref{probarepresentation} and this estimate is uniform over the compact subsets of $\mathbf{z}\in\theta\mathcal{Z}$. Hence
\begin{align*}
|\oint_{|\mathbf{u}|=\boldsymbol{\delta}_t}
\mathbf{u}^{1-\nu} B_t(\mathbf{u},\mathbf{z}) \, d \mathbf{u}|\leq Ce^{t(|\nu|+\alpha-2)}
\end{align*}
as claimed.
\subsubsection{Case $\nu_k=1$}
Here we need to regularize also the Liouville expectation: let $\langle -\rangle_{t,\epsilon}$ be as in \eqref{modifiedlcft} except that we replace $e^{\gamma c}M_\gamma(\mathbb{C}_t)$ by the regularized version $\int_{\mathbb{C}_t}V_{\gamma,\epsilon}(x)\dd x$. We use the following variant of Lemma \ref{deriv2}.
\begin{lemma}\label{deriv2'}
Let $
I'_\epsilon(\mathbf{u},{\bf z}):=\frac{1}{u_k}
\sum_{p=1}^n \partial_{z_p}\langle T(\mathbf{u}^k) V_\alpha(0)\prod_{i=1}^nV_{\alpha_i,\epsilon }(z_i) \rangle_{t,\epsilon}
$.
Then $\lim_{\epsilon\to 0}I'_\epsilon(\mathbf{u},{\bf z}):= I'(\mathbf{u},{\bf z})$
exists and defines a continuous function in ${\bf z}\in \theta{\mathcal Z}$ satisfying
\begin{align}\label{contourI'}
\int\Big(\frac{1}{(2\pi i)^{k}}\oint_{|\mathbf{u}|=\boldsymbol{\delta}_t}{\bf u}^{1-\nu} I'(\mathbf{u},{\bf z})\dd {\bf u} \Big)\bar{\varphi}({\bf z})\dd {\bf z}={\mathcal T}_t(\nu^{(k)},{\bf D}_{\nu_k}^\ast\varphi)
\end{align}
for all $\varphi\in C_0^\infty(\theta{\mathcal Z})$.
\end{lemma}
\begin{proof} The proof is similar to Lemma \ref{deriv2} but cancellations occur for other reasons and we explain how. First, the integration by parts gives
\begin{align*}
I'_\epsilon(\mathbf{u},{\bf z})=K'_\epsilon(\mathbf{u},{\bf z})+L'_\epsilon(\mathbf{u},{\bf z})
\end{align*}
where $\lim_{\epsilon\to 0}K'_\epsilon=K'$ exists and is given by
\begin{align*}
K'(\mathbf{u},{\bf z}) =&- \sum_{p=1}^n\sum_{\ell=1}^{k-1}\frac{Q\alpha_p}{u_k(z_p-u_\ell)^3} \langle T(\mathbf{u}^{(\ell,k)}) V_\alpha(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t
\\
&+ \sum_{p=1}^n\sum_{\ell=1}^{k-1}\frac{\alpha_p}{u_k(z_p-u_\ell)^2} \langle \partial_{z}X(u_\ell) T(\mathbf{u}^{(\ell,k)}) V_\alpha(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t
\\
&- \sum_{p=1}^n \frac{\alpha_p\alpha}{2}\frac{1}{u_kz_p} \langle T(\mathbf{u}^{(k)}) V_\alpha(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t
\\
&- \sum_{p\not= p'=1}^n \frac{\alpha_p\alpha_{p'}}{2}\frac{1}{u_k(z_p-z_{p'})} \langle T(\mathbf{u}^{(k)}) V_\alpha(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t
\\
& =:C_1(\mathbf{u},{\bf z})+C_2(\mathbf{u},{\bf z})+C_3(\mathbf{u},{\bf z})
+C_5(\mathbf{u},{\bf z})
\end{align*}
whereas
\begin{align*}
L'_\epsilon(\mathbf{u},{\bf z})= -
\frac{\mu\gamma }{u_k} \sum_{p=1}^n {\alpha_p }\int_{\mathbb{C}_t}C_{\epsilon,\epsilon}( z_p,x)\langle T(\mathbf{u}^{(k)}) V_\alpha(0)V_{\gamma,\epsilon}(x)\prod_{i=1}^nV_{\alpha_i,\epsilon }(z_i) \rangle_{t,\epsilon} \,\dd x
\end{align*}
is the term that needs analysis. Let us define
\begin{align*}
B'_t(\mathbf{u},{\bf z}):=-\frac{i\mu }{u_k}\oint_{|x|=e^{-t}} \langle T(\mathbf{u}^{(k)}) V_\alpha(0)V_\gamma(x)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t \,\dd\bar x.
\end{align*}
Then
\begin{align*}
B'_t(\mathbf{u},{\bf z})=&-\frac{i\mu }{u_k}\lim_{\epsilon\to 0} \oint_{|x|=e^{-t}} \langle T(\mathbf{u}^{(k)}) V_{\alpha}(0)V_{\gamma,\epsilon}(x)\prod_{i=1}^nV_{\alpha_i,\epsilon }(z_i) \rangle_{t,\epsilon} \,\dd\bar x\\=&\frac{\mu}{u_k}\lim_{\epsilon\to 0} \int_{\mathbb{C}_t} \partial_x\langle T(\mathbf{u}^{(k)}) V_{\alpha}(0)V_{\gamma,\epsilon}(x)\prod_{i=1}^nV_{\alpha_i,\epsilon }(z_i) \rangle_{t,\epsilon} \,\dd x\\
=& -\frac{\mu Q\gamma }{u_k}\sum_{\ell=1}^{k-1} \int_{\mathbb{C}_t} \frac{1}{(x-u_\ell)^3} \langle T(\mathbf{u}^{(\ell,k)}) V_\alpha(0)V_\gamma(x)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t \,\dd x
\\
&+\frac{\mu\gamma}{u_k} \sum_{\ell=1}^{k-1} \int_{\mathbb{C}_t} \frac{1}{(x-u_\ell)^2} \langle \partial_zX(u_\ell) T(\mathbf{u}^{(\ell,k)}) V_\alpha(0)V_\gamma(x)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t \,\dd x
\\
&-\mu\frac{ \gamma \alpha}{2u_k} \int_{\mathbb{C}_t} \frac{1}{x} \langle T(\mathbf{u}^{(k)}) V_\alpha(0)V_\gamma(x)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t \,\dd x
\\
&-\frac{\mu\gamma}{u_k} \lim_{\epsilon\to 0}\sum_{p=1}^n {\alpha_p }\int_{\mathbb{C}_t}C_{\epsilon,\epsilon}( z_p,x) \langle T(\mathbf{u}^{(k)}) V_\alpha(0)V_{\gamma,\epsilon}(x)\prod_{i=1}^nV_{\alpha_i ,\epsilon}(z_i) \rangle_{t,\epsilon} \,\dd x \\
&=:P_6(\mathbf{u},{\bf z})+P_7(\mathbf{u},{\bf z})+P_{8}(\mathbf{u},{\bf z})+P_4(\mathbf{u},{\bf z})
\end{align*}
This proves the existence of $ \lim_{\epsilon\to 0}L'_\epsilon=L'=P_4$ and furthermore
\begin{align}\label{l4p4}
I'=\lim_{\epsilon\to 0}I'_\epsilon=C_1+C_2+C_3+C_5+B'_t-P_6-P_7-P_{8}
\end{align}
The claim \eqref{contourI'} follows as in Lemma \ref{deriv2}.
\end{proof}
Let us consider the expression
\begin{align*}
K(\mathbf{u},{\bf z}):=&\langle T(\mathbf{u})V_\alpha(0)\prod_{i=1}^nV_{\alpha_i }(z_i) \rangle_t +I'(\mathbf{u},{\bf z})- M(\mathbf{u},\mathbf{z})
\\=&R(\mathbf{u},\mathbf{z}) + N(\mathbf{u},\mathbf{z})+D(\mathbf{u},\mathbf{z}) +I'(\mathbf{u},{\bf z})
\end{align*}
By \eqref{confweightward} and \eqref{contourI'} we have
\begin{align}\label{contourI12}
\int\Big(\oint_{|\mathbf{u}|=\boldsymbol{\delta}_t}{\bf u}^{1-\nu} K(\mathbf{u},{\bf z})\dd{\bf u} \Big)\bar{\varphi}({\bf z})\dd{\bf z}&={\mathcal T}(\nu,\varphi)-{\mathcal T}(\nu^{(k)},\mathbf{D}^\ast_{\nu_k}\varphi).
\end{align}
On the other hand,
combining \eqref{IPPstress}, \eqref{IPPstressrest} and \eqref{N-I}
we obtain
\begin{align}
K&=R+N +\sum_{i=1}^9 T_i +\sum_{i=1}^3 C_i+C_5 -\sum_{i=6}^8 P_i
+B'_t\label{finalsum2}
\end{align}
As before it is easy to check that the following relations hold (see Appendix \ref{ippjunk})
\begin{align}\label{tinuk1}
\rcircleleftint_{|u_k|=e^{-t}\epsilon_k} T_i(\mathbf{u},{\bf z})\,\dd u_k=&0\quad \text{ for }i=4,5,9
\\
\rcircleleftint_{|u_k|=e^{-t}\epsilon_k}\big(T_i(\mathbf{u},{\bf z})+C_i(\mathbf{u},{\bf z})\big)\,\dd u_k=&0\quad \text{ for }i=1,2,3\label{tinuk2}
\\
\rcircleleftint_{|u_k|=e^{-t}\epsilon_k}\big(T_i(\mathbf{u},{\bf z})-P_i(\mathbf{u},{\bf z})\big)\,\dd u_k=&0\quad \text{ for }i=6,7,8\label{tinuk2'}
\\
\rcircleleftint_{|u_k|=e^{-t}\epsilon_k} C_5(\mathbf{u},{\bf z}) \,\dd u_k=&0 \label{tinuk3}\\
\rcircleleftint_{|u_k|=e^{-t}\epsilon_k} N(\mathbf{u},{\bf z}) \,\dd u_k=&0 . \label{tinuk1'}
\end{align}
We can now conclude as in the case $\nu_k>1$. \qed
\section{Introduction}
Maldacena \cite{Maldacenaeternalbh} formulated a version of the black hole information paradox using simple correlation functions in AdS black holes. He observed that the bulk theory's prediction of exponential decay \cite{Horowitz:1999jd, Goheer:2002vf, Dyson:2002pf, Barbon:2003aq} of a two-point correlation function cannot be consistent with unitarity on the boundary CFT. He proposed that this paradox can be resolved if one sums over all geometries with prescribed boundary conditions on the bulk side. Saad \cite{Saadsingleauthor} \footnote{continuing the idea of \cite{Blommaert:2019hjr}} carried out this proposal by considering geometries in the context of Jackiw-Teitelboim (JT) gravity \cite{Teitelboim, Jackiw, AlmheiriPolchinski} coupled with matter. More specifically, he computed bosonic two-point correlation functions using the techniques developed by Yang \cite{Yangsingleauthor} \footnote{for other approaches see \cite{Lam:2018pvp, Mertens:2017mtv, Blommaert:2018oro, Blommaert:2019hjr, Iliesiu:2019xuh}} on the bulk side and compared with Random Matrix Theory (RMT) predictions for operators satisfying Eigenstate Thermalization Hypothesis (ETH) \cite{ethDeutsch, ethSrednicki} on the boundary side.
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{figures/introcopy.png}
\caption{Lowest order contributions to two-point correlation functions. (a) The contribution of the disk topology decays at late time \cite{AltlandBagrets, MertensVerlinde, KitaevSuh, Yangsingleauthor}. (b) The crosscap contribution gives a leading non-decaying contribution proportional to $e^{-S_0}$. (c) The contribution of a genus one surface with one boundary, i.e. a handle-disk, is non-decaying and proportional to $te^{-2S_0}$ \cite{Saadsingleauthor}.}
\label{intro}
\end{figure}
Saad \cite{Saadsingleauthor} showed on the boundary side, using RMT, that as a function of time $t$ the bosonic two-point correlation function should first decay exponentially (the slope), then grow linearly (the ramp), and finally stay constant (the plateau). To explain the late-time non-decaying feature on the bulk side, Saad studied a genus one surface with one boundary (a handle-disk), which gives a contribution to the two-point correlation function proportional to $te^{-2S_0}$, and so the handle-disk is responsible for the ramp in the plot of the two-point correlator versus time.
In this paper, we extend Saad's result to non-orientable geometries and fermionic two-point correlation functions. A key geometry we study is a disk with a crosscap, which is topologically equivalent to a M\"{o}bius band. We find that the crosscap gives a non-decaying contribution to the bosonic two-point correlation function proportional to $e^{-S_0}$, which is similar to the original plateau, and can either enhance it or cancel it depending on the weighting factors we attach in front of this contribution. From there we crosscheck with predictions from RMT with appropriate symmetry classes on the boundary side. We then turn to fermionic two-point correlation functions. There we consider a disk with one crosscap and a disk with two crosscaps. On each particular geometry, we sum over all Spin/Pin$^-$ structures with appropriate weightings related to their corresponding topological invariants. This leads to an $N\bmod 8$ periodicity. Our result confirms known numerical results \cite{9authors} for two-point correlation functions in the Sachdev-Ye-Kitaev (SYK) model \cite{SachdevYe, KitaevTalks, KitaevSuh} \footnote{for more works on SYK model see \cite{Polchinski:2016xgd, Maldacena:2016hyu, Jensen:2016pah, 9authors, SaadShenkerStanford18}}.
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{figures/ccintrocopy.png}
\caption{A disk with a crosscap is topologically equivalent to a M\"{o}bius band.}
\label{ccintro}
\end{figure}
In section 2, we review some important tools involved in calculating two-point correlation functions in JT, calculate crosscap contributions to bosonic two-point functions, and compare the results with RMT computations. In section 3, we review Spin structure on orientable geometries, introduce Pin$^-$ structure on non-orientable geometries and from there examine the contributions of a disk with one crosscap or two crosscaps to the fermionic two-point functions, and compare with RMT computations as well as SYK numerics.
\section{Bosonic Two-Point Correlation Functions}
\subsection{Review}
In this paper we are working in a simple model of holography, where the bulk theory is JT gravity with probe matter fields, and the boundary theory is formally an ordinary quantum system, or an ensemble of such systems. From the boundary perspective, we are interested in computing the thermal two-point function at late times $t$:
\begin{equation}
\braket{V(t)V(0)}=\mathrm{Tr}[e^{-\beta H}V(t)V(0)]=\sum_{n,m}e^{-\beta E_m}e^{-it(E_n-E_m)}|\braket{E_n|V|E_m}|^2
\end{equation}
We are hoping to match expected features of this function to bulk computations of correlation functions in JT gravity. To compute the bulk correlation function, we are supposed to hold fixed the boundary conditions (including the operators $V(0)$ and $V(t)$ that we are inserting) and sum over two-dimensional bulk topologies. We are particularly interested in the contribution of the topology of a disk with a crosscap inserted, but we first review the computations for the disk topology and for the disk with a handle inserted.
The 2d gravity theory we will study consists of the Einstein-Hilbert action, the JT gravity action, and a matter action. JT gravity on a 2d manifold $M$ has Euclidean action
\begin{equation}
I_{JT}=-\frac{1}{2}\left(\int_M\phi(R+2)+2\int_{\partial M}\phi_b (K-1)\right)
\end{equation}
Classically, the equation of motion fixes the bulk geometry to be AdS$_2$ with $R=-2$ and the action reduces to a Schwarzian action on the boundary \cite{MaldacenaStanfordYang}. In 2d, the Einstein-Hilbert action is purely topological and can be written as
\begin{equation}
I_{EH}=-\chi S_0
\end{equation}
where $\chi=2-2g-n$ is the Euler character for manifold $M$ with $g$ the genus and $n$ the number of boundaries, and $S_0$ is the zero-temperature bulk entropy which is a constant. The Einstein-Hilbert action then contributes an overall factor $e^{\chi S_0}$ to the partition function. In all of our figures the orange disks represent infinite hyperbolic space (or its quotient) and yellow geometries inside represent the physical Euclidean spacetimes, with wiggly regularized boundaries described by the Schwarzian theory \cite{MaldacenaStanfordYang}.
The two main shapes of Euclidean AdS we consider in this review are a hyperbolic disk, which has one asymptotic boundary with renormalized length $\beta$, and a hyperbolic trumpet, which has one asymptotic boundary with renormalized length $\beta$ and one geodesic boundary with length $b$ (see figure~\ref{partitionfunction}). That is because a disk is the simplest hyperbolic geometry with one asymptotic boundary, and a trumpet can be thought of as a building block of more complicated geometries, obtained by attaching a Riemann surface with one geodesic boundary to the geodesic boundary of the trumpet.
JT path integrals without operator insertions can be computed directly by doing the path integral over the wiggly boundary of the disk and the trumpet explicitly. Disk \cite{BagretsAltlandKamenev16, 9authors, BagretsAltlandKamenev17, StanfordWitten17, SchwarzianBelokurovShavgulidze, SchwarzianMertensTuriaciVerlinde, KitaevSuhBH, Yangsingleauthor, IliesiuPufuVerlindeWang} and trumpet partition functions \cite{StanfordWitten17, Saadsingleauthor} are given respectively by
\begin{equation}
Z_{\text{Disk}}(\beta)=e^{S_0}\frac{e^{\frac{2\pi^2}{\beta}}}{\sqrt{2\pi}\beta^{3/2}}=e^{S_0}\int_0^\infty dE\,\underbrace{\frac{\sinh(2\pi\sqrt{2E})}{2\pi^2}}_{\rho_0(E)}e^{-\beta E}\label{zdisk}
\end{equation}
and
\begin{equation}
Z_{\text{Trumpet}}(\beta,b)=\frac{e^{-\frac{b^2}{2\beta}}}{\sqrt{2\pi\beta}}=\int_0^\infty dE\,\frac{\cos(b\sqrt{2E})}{\pi\sqrt{2E}}e^{-\beta E}\label{ztrumpet}
\end{equation}
where $\rho_0(E)$ denotes the density of states.
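As a quick numerical sanity check (not needed for anything that follows), the energy-integral representations in \eqref{zdisk} and \eqref{ztrumpet} can be compared with the closed forms by substituting $E=k^2/2$; the short Python sketch below does this for a few values of $\beta$ and $b$. The $e^{S_0}$ factor is stripped, and the finite upper cutoff in $k$ is harmless because of the $e^{-\beta k^2/2}$ damping.
\begin{verbatim}
# Spot-check of the E-integral forms of Z_Disk and Z_Trumpet (E = k^2/2).
import numpy as np
from scipy.integrate import quad

def Z_disk_closed(beta):
    return np.exp(2*np.pi**2/beta) / (np.sqrt(2*np.pi) * beta**1.5)

def Z_disk_integral(beta, kmax=50.0):
    f = lambda k: k * np.sinh(2*np.pi*k)/(2*np.pi**2) * np.exp(-beta*k**2/2)
    return quad(f, 0.0, kmax, limit=200)[0]

def Z_trumpet_closed(beta, b):
    return np.exp(-b**2/(2*beta)) / np.sqrt(2*np.pi*beta)

def Z_trumpet_integral(beta, b, kmax=50.0):
    f = lambda k: np.cos(b*k)/np.pi * np.exp(-beta*k**2/2)
    return quad(f, 0.0, kmax, limit=200)[0]

for beta in (1.0, 2.0, 5.0):
    print(beta, Z_disk_closed(beta), Z_disk_integral(beta))
for beta, b in ((1.0, 0.5), (2.0, 3.0)):
    print(beta, b, Z_trumpet_closed(beta, b), Z_trumpet_integral(beta, b))
\end{verbatim}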
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{figures/partitionfunctioncopy.png}
\caption{(a) disk partition function (b) trumpet partition function}
\label{partitionfunction}
\end{figure}
To compute path integrals with operator insertions we need more tools. Before we do that, we should note that there is another way of computing the disk partition function. A disk can be decomposed into two Hartle-Hawking wavefunctions by the following procedure
\begin{align}
Z_{\text{Disk}}(\beta)&=\includegraphics[valign=c,width=0.2\textwidth]{figures/diskcopy.png}\\
&=\int\,e^\ell d\ell\includegraphics[valign=c,width=0.2\textwidth]{figures/splitcopy.png}\\
&=e^{S_0}\int\,e^\ell d\ell\,\varphi_{\text{Disk},\tau}(\ell)\varphi_{\text{Disk},\beta-\tau}(\ell)
\end{align}
This decomposition may seem redundant since we already know how to calculate $Z_{\mathrm{disk}}$, but this procedure teaches us how to calculate two-point correlation functions. To do that, we just need another factor of $e^{-\Delta\ell}$ in the integral, which is the QFT two-point correlation function of two boundary operators $V$ of conformal weight $\Delta$ with renormalized geodesic distance $\ell$ apart. The disk contribution to the two-point correlation function at time $t=-i\tau$ is then given by
\begin{align}
\braket{V(t=-i\tau)V(0)}_{\chi=1}&=\includegraphics[valign=c,width=0.2\textwidth]{figures/diskcorrelatorcopy.png}\\
&=\int\,e^\ell d\ell\includegraphics[valign=c,width=0.2\textwidth]{figures/splitcopy.png}e^{-\Delta\ell}\\
&=e^{S_0}\int\,e^\ell d\ell\,\varphi_{\text{Disk},\tau}(\ell)\varphi_{\text{Disk},\beta-\tau}(\ell)e^{-\Delta\ell}
\end{align}
Note that this two-point correlator is not normalized by dividing out the disk partition function, which is of order $e^{S_0}$; in this paper the notation $\braket{VV}$ in JT is not normalized. Hartle-Hawking wavefunctions can be written in a simple closed form by first writing the wavefunctions with fixed-energy boundary conditions, given by
\begin{equation}
\varphi_E(\ell)=\braket{\ell|E}=4e^{-\ell/2}K_{i\sqrt{8E}}(4e^{-\ell/2})
\end{equation}
where $K$ is a Bessel-K function. Hartle-Hawking wavefunctions i.e. wavefunctions with fixed length boundary condition are given by \cite{Yangsingleauthor,Saadsingleauthor}
\begin{align}
\varphi_{\text{Disk},\tau}(\ell)&=\int_0^\infty dE\,\rho_0(E)e^{-\tau E}\varphi_E(\ell)\label{hhdisk}\\
\varphi_{\text{Trumpet},\tau}(\ell,b)&=\int_0^\infty dE\,\frac{\cos(b\sqrt{2E})}{\pi\sqrt{2E}}e^{-\tau E}\varphi_E(\ell)\label{hhtrumpet}
\end{align}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{figures/hhwavefunctioncopy.png}
\caption{(a) disk Hartle-Hawking state $\varphi_{\text{Disk},\tau}(\ell)$ (b) trumpet Hartle-Hawking state $\varphi_{\text{Trumpet},\tau}(\ell,b)$}
\end{figure}
Now we review two important relations that the Hartle-Hawking wavefunctions satisfy:
\begin{align}
\int_{-\infty}^\infty e^\ell d\ell\,\varphi_E(\ell)\varphi_{E'}(\ell)&=\frac{\delta(E-E')}{\rho_0(E)}\label{hhrelation1}\\
\int_{-\infty}^\infty e^\ell d\ell\,\varphi_E(\ell)\varphi_{E'}(\ell)e^{-\Delta\ell}&=|V_{E,E'}|^2=\frac{\left|\Gamma\left(\Delta+i(\sqrt{2E}+\sqrt{2E'})\right)\Gamma\left(\Delta+i(\sqrt{2E}-\sqrt{2E'})\right)\right|^2}{2^{2\Delta+1}\Gamma(2\Delta)}
\end{align}
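For the reader who wants to see where the Gamma functions come from, the substitution $x\propto e^{-\ell/2}$ turns the second relation into a standard Bessel moment with purely imaginary orders $i\sqrt{8E}=2i\sqrt{2E}$ and $2i\sqrt{2E'}$. The mpmath sketch below checks that moment identity numerically; the overall prefactor depends on the normalization convention for $\varphi_E$ and is not tracked here.
\begin{verbatim}
# Check of the Bessel-moment identity underlying the Gamma-function structure
# of |V_{E,E'}|^2 (prefactor conventions not tracked).
from mpmath import mp, besselk, gamma, quad, mpf

mp.dps = 20
Delta, k, kp = mpf(1), mpf('0.7'), mpf('0.3')    # k = sqrt(2E), kp = sqrt(2E')

lhs = quad(lambda x: x**(2*Delta - 1) * besselk(2j*k, x) * besselk(2j*kp, x),
           [0, 1, 10, mp.inf])
rhs = 2**(2*Delta - 3) / gamma(2*Delta) \
      * abs(gamma(Delta + 1j*(k + kp)))**2 * abs(gamma(Delta + 1j*(k - kp)))**2
print(lhs, rhs)    # the two numbers should agree
\end{verbatim}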
In particular using (\ref{hhrelation1}) we can verify
\begin{align}
Z_{\text{Disk}}(\beta)&=e^{S_0}\int e^\ell d\ell\,\varphi_{\text{Disk},\tau}(\ell)\varphi_{\text{Disk},\beta-\tau}(\ell)\\
Z_{\text{Trumpet}}(\beta,b)&=\int e^\ell d\ell\,\varphi_{\text{Disk},\tau}(\ell)\varphi_{\text{Trumpet},\beta-\tau}(\ell,b)
\end{align}
by plugging in (\ref{zdisk}, \ref{ztrumpet}, \ref{hhdisk}, \ref{hhtrumpet}). In addition to partition functions and Hartle-Hawking states, we review a final and important tool we use: propagators, i.e.\ time-evolution operators of Hartle-Hawking wavefunctions, such that
\begin{align}
\varphi_{\text{Disk},\beta+\beta_1+\beta_2}(\ell)&=\int e^{\ell'}d\ell'\,P_{\text{Disk}}(\beta_1,\beta_2,\ell,\ell')\varphi_{\text{Disk},\beta}(\ell')\\
\varphi_{\text{Trumpet},\beta+\beta_1+\beta_2}(\ell)&=\int e^{\ell'}d\ell'\,P_{\text{Trumpet}}(\beta_1,\beta_2,b,\ell,\ell')\varphi_{\text{Disk},\beta}(\ell')
\end{align}
We can check \cite{Saadsingleauthor} that the above relations are solved by
\begin{align}
P_{\text{Disk}}(\beta_1,\beta_2,\ell,\ell')&=\int dE\,\rho_0(E)e^{-(\beta_1+\beta_2)E}\varphi_E(\ell)\varphi_E(\ell')\label{diskpropagator}\\
P_{\text{Trumpet}}(\beta_1,\beta_2,b,\ell,\ell')&=\int_0^\infty dE\frac{\cos(b\sqrt{2E})}{\pi\sqrt{2E}}e^{-(\beta_1+\beta_2)E}\varphi_E(\ell)\varphi_E(\ell')
\end{align}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{figures/propagatorcopy.png}
\caption{(a) disk propagator (b) trumpet propagator}
\end{figure}
Recall that the disk two-point correlator is given by
\begin{align}
\braket{V(t=-i\tau)V(0)}_{\chi=1}&=e^{S_0}\int e^\ell d\ell\,\varphi_{\text{Disk},\tau}(\ell)\varphi_{\text{Disk},\beta-\tau}(\ell)e^{-\Delta\ell}\\
&=e^{S_0}\int dE_1dE_2\,\rho_0(E_1)\rho_0(E_2)e^{-\tau E_1}e^{-(\beta-\tau)E_2}|V_{E_1,E_2}|^2\\
&\sim \frac{e^{S_0}}{t^3}|V_{0,0}|^2 \quad\quad t\rightarrow\infty
\end{align}
This is dominated by $E_1,E_2$ close to zero as time goes to infinity, so we get a decay proportional to $t^{-3}$, showing that the disk does not contribute to the late-time behavior of two-point correlation functions.
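For instance, keeping only the small-energy behavior $\rho_0(E)\approx\sqrt{2E}/\pi$, which dominates at late times, the two energy integrals factorize and give
\begin{align*}
\braket{V(t=-i\tau)V(0)}_{\chi=1}\approx e^{S_0}|V_{0,0}|^2\int_0^\infty dE_1\,\frac{\sqrt{2E_1}}{\pi}e^{-\tau E_1}\int_0^\infty dE_2\,\frac{\sqrt{2E_2}}{\pi}e^{-(\beta-\tau)E_2}=\frac{e^{S_0}|V_{0,0}|^2}{2\pi\left[\tau(\beta-\tau)\right]^{3/2}},
\end{align*}
and with $\tau=\beta/2+it$ we have $\tau(\beta-\tau)=\beta^2/4+t^2$, which makes the $t^{-3}$ falloff explicit.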
Saad \cite{Saadsingleauthor} showed that for a handle-disk geometry a procedure similar to the disk can be done by separating the geometry into two trumpet Hartle-Hawking wavefunctions. Then the contribution of a single example of handle-disk to two-point correlator is given by
\begin{equation}
\braket{V(t=-i\tau)V(0)}_{\chi=-1}\supset e^{-S_0}\varphi_{\text{Trumpet},\tau}(\ell,b)\varphi_{\text{Trumpet},\beta-\tau}(\ell,b)e^{-\Delta\ell}\label{hdgeodesic}
\end{equation}
We need to integrate over all geodesics and also integrate over $b$ according to the Mapping Class Group. We will explain how to do that in Section 3.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{figures/correlatorscopy.png}
\caption{(a) one example of genus-0 contribution to 2pt correlator (b) one example of genus-1 contribution to 2pt correlator }
\end{figure}
\subsection{Crosscap}
If we allow non-orientable geometries, there is another contribution to the two-point correlation function given by a disk with a crosscap. A crosscap is a hole ($S^1$) with antipodal points identified. A useful fact is that each crosscap adds genus $1/2$ to the topology, so a disk with a crosscap has Euler characteristic $\chi=0$. A disk with a crosscap is topologically equivalent to a M\"{o}bius band. To see this, we look at a more familiar topology which is $\mathbb{RP}^2$, i.e. a sphere with a crosscap. This is topologically equivalent to a sphere with all pairs of antipodal points identified. A disk with a crosscap is similarly topologically equivalent to a double-trumpet with all pairs of antipodal points identified, as shown in figure~\ref{ccdrawing}. A double-trumpet then contains two copies of disk+crosscap. If we take one of those copies by cutting the cylinder twice longitudinally and identifying antipodal points on the two cuts, we get a M\"{o}bius band. Geometrically this M\"{o}bius band can be embedded on a hyperbolic disk, so its shape reminds us of the disk propagator we defined in equation (\ref{diskpropagator}).
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{figures/ccdrawing.png}
\caption{(a) $\mathbb{RP}^2$ i.e. a sphere with a crosscap is a sphere with all pairs of antipodal points identified (b) a disk with a crosscap is a double-trumpet with all pairs of antipodal points identified which is topologically equivalent to a M\"{o}bius band}
\label{ccdrawing}
\end{figure}
There can be two types of geodesics on a disk with a crosscap: a geodesic going through the crosscap, and a geodesic not going through the crosscap as shown in figure~\ref{diskcc}.
\begin{figure}[h]
\centering
\includegraphics[width=0.65\textwidth]{figures/ccgeodesicscopy.png}
\caption{(a) a hyperbolic disk with a crosscap (b) a geodesic going through the crosscap (c) a geodesic not going through the crosscap}
\label{diskcc}
\end{figure}
In the following two subsections we consider the two-point function contributions arising from these two types of geodesics separately.
\begin{figure}[h]
\centering
\includegraphics[width=0.25\textwidth]{figures/crosscap2copy.png}
\caption{On the M\"{o}bius band representation of disk+crosscap, if the two operators are located at the blue and the green dots respectively, a geodesic going through the crosscap is given by $\ell$ and two geodesics not going through the crosscap are given by $\ell_1$ and $\ell_2$.}
\label{ccgeodesics}
\end{figure}
\subsubsection{Non-decaying part}
In this subsection, we consider geodesics going through the crosscap. A disk+crosscap again can be represented as a M\"{o}bius strip. If we cut along the geodesic $\ell$ which goes through the crosscap, we get a disk propagator.
\begin{figure}[h]
\centering
\includegraphics[width=0.54\textwidth]{figures/crosscap1copy.png}
\caption{A geodesic going through the crosscap on disk+crosscap representation and M\"{o}bius band representation respectively with the geodesic given by $\ell$.}
\label{ccgeodesics1}
\end{figure}
Integrating over the length of the geodesic, weighted by the free propagator $e^{-\Delta\ell}$, we get a contribution to the two-point correlator.
\begin{align}
\braket{V(t=-i\tau)V(0)}_{\text{cc},0}&= \int e^\ell d\ell\,P_{\text{Disk}}(\tau,\beta-\tau,\ell,\ell)e^{-\Delta\ell}\\
&=\int e^\ell d\ell\,\int dE\,\rho_0(E)e^{-\beta E}\varphi_E(\ell)^2e^{-\Delta\ell}\\
&=\int dE\,\rho_0(E)e^{-\beta E}|V_{E,E}|^2
\end{align}
This is one of the main results of this paper. Note that the result is manifestly independent of time $t$. Thus the disk+crosscap gives a non-decaying contribution to the two-point correlation function, of order $e^{-S_0}$ after normalization (dividing by the disk partition function).
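Note also that the energy integral converges: $\rho_0(E)$ grows like $e^{2\pi\sqrt{2E}}$, but $|V_{E,E}|^2\propto|\Gamma(\Delta+2i\sqrt{2E})|^2$ decays like $e^{-2\pi\sqrt{2E}}$, so the Boltzmann factor $e^{-\beta E}$ renders the integral finite. A minimal numerical sketch, which simply evaluates the expression above (normalization conventions are not tracked), is:
\begin{verbatim}
# Evaluation of the crosscap term  int dE rho_0(E) e^{-beta E} |V_{E,E}|^2
# in the variable k = sqrt(2E); the finite cutoff in k is harmless.
import numpy as np
from scipy.integrate import quad
from scipy.special import loggamma, gamma as Gamma

def VEE2(k, Delta):
    # |V_{E,E}|^2 = |Gamma(Delta+2ik)|^2 Gamma(Delta)^2 / (2^{2Delta+1} Gamma(2Delta))
    return np.exp(2*np.real(loggamma(Delta + 2j*k))) * Gamma(Delta)**2 \
           / (2**(2*Delta + 1) * Gamma(2*Delta))

def crosscap_term(beta, Delta, kmax=40.0):
    f = lambda k: k * np.sinh(2*np.pi*k)/(2*np.pi**2) \
                  * np.exp(-beta*k**2/2) * VEE2(k, Delta)
    return quad(f, 0.0, kmax, limit=200)[0]

for beta, Delta in [(2.0, 0.25), (2.0, 1.0), (5.0, 1.0)]:
    print(beta, Delta, crosscap_term(beta, Delta))
\end{verbatim}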
\subsubsection{Decaying part}
In this subsection, we consider geodesics not going through the crosscap. Before calculating their contributions to two-point correlation functions, we introduce two quantities. The first is $I(\ell_1,\ell_2,\ell_3)$: the path integral of the hyperbolic triangle as shown in figure~\ref{I3I4I5}(a) (see \cite{Yangsingleauthor} for more information)
\begin{align}
I(\ell_1,\ell_2,\ell_3)&=e^{\ell_1/2}e^{\ell_2/2}e^{\ell_3/2}\int_0^\infty ds\,s\frac{\sinh(2\pi s)}{2\pi^2}\varphi_{s^2/2}(\ell_1)\varphi_{s^2/2}(\ell_2)\varphi_{s^2/2}(\ell_3)\\
&=e^{\ell_1/2}e^{\ell_2/2}e^{\ell_3/2}\int_0^\infty dE\rho_0(E)\varphi_E(\ell_1)\varphi_E(\ell_2)\varphi_E(\ell_3)
\end{align}
We can check that integrating over the product of a hyperbolic triangle and three Hartle-Hawking states we get back the disk partition function
\begin{multline}
e^{S_0}\int e^{\ell_1/2+\ell_2/2+\ell_3/2}d\ell_1d\ell_2d\ell_3\,\varphi_{\text{Disk},\beta_1}(\ell_1)\varphi_{\text{Disk},\beta_2}(\ell_2)\varphi_{\text{Disk},\beta_3}(\ell_3)I(\ell_1,\ell_2,\ell_3)\\
=e^{S_0}\int e^{\ell_1}d\ell_1\,\varphi_{\text{Disk},\beta_1}(\ell_1)\varphi_{\text{Disk},\beta_2+\beta_3}(\ell_1)=Z_{\text{Disk}}
\end{multline}
We can generalize the hyperbolic triangle to any hyperbolic polygon. To give the simplest example, gluing two hyperbolic triangles together gives a hyperbolic quadrilateral
\begin{align}
I(\ell_1,\ell_2,\ell_3,\ell_4)&=\int d\ell\,I(\ell_1,\ell_2,\ell)I(\ell,\ell_3,\ell_4)\\
&=\int d\ell dEdE'\,e^{\ell_1/2+\ell_2/2+\ell_3/2+\ell_4/2}e^{\ell}\rho_0(E)\rho_0(E')\varphi_E(\ell_1)\varphi_E(\ell_2)\varphi_E(\ell)\varphi_{E'}(\ell)\varphi_{E'}(\ell_3)\varphi_{E'}(\ell_4)\\
&=\int dE\,e^{\ell_1/2+\ell_2/2+\ell_3/2+\ell_4/2}\rho_0(E)\varphi_E(\ell_1)\varphi_E(\ell_2)\varphi_E(\ell_3)\varphi_E(\ell_4)
\end{align}
In general
\begin{equation}
I(\ell_1,\ldots,\ell_n)=\int dE\,\rho_0(E)\prod e^{\ell_i/2}\varphi_E(\ell_i)
\end{equation}
\begin{figure}[H]
\centering
\includegraphics[width=0.65\textwidth]{figures/I3I4I5copy.png}
\caption{(a) $\ell_1$, $\ell_2$, $\ell_3$ enclose a hyperbolic triangle $I(\ell_1,\ell_2,\ell_3)$ (b) $\ell_1$, $\ell_2$, $\ell_3$, $\ell_4$ enclose a hyperbolic quadrilateral $I(\ell_1,\ell_2,\ell_3,\ell_4)$ (c) $\ell_1$, $\ell_2$, $\ell_3$, $\ell_4$, $\ell_5$ enclose a hyperbolic pentagon $I(\ell_1,\ell_2,\ell_3,\ell_4,\ell_5)$}
\label{I3I4I5}
\end{figure}
A second important quantity we introduce is the crosscap correction $Z_{\mathrm{cc}}$ to the disk partition function, and we define the corresponding correction $\rho_{1/2}(E)$ to the density of states accordingly:
\begin{equation}
Z_{\mathrm{disk}+\mathrm{cc}}(\beta)=Z_{\mathrm{disk}}(\beta)+Z_{\mathrm{cc}}(\beta)=\int dE\, \left(e^{S_0}\rho_0(E)+\rho_{1/2}(E)\right)e^{-\beta E}\label{rho1}
\end{equation}
Now let us calculate the crosscap partition function using the $I$ function we just defined. Analyzing figure~\ref{ccgeodesics} we get
\begin{align}
Z_{\mathrm{cc}}&=\int d\ell d\ell_1d\ell_2\,e^{\ell_1/2+\ell_2/2}I(\ell,\ell_1,\ell,\ell_2)\varphi_{\text{Disk},\tau}(\ell_1)\varphi_{\text{Disk},\beta-\tau}(\ell_2)\\
&=\int dE\,\rho_0(E)\int e^\ell d\ell\,\varphi_E(\ell)\varphi_E(\ell)e^{-\beta E}
\end{align}
Together with (\ref{rho1}), this gives a relation analogous to (\ref{hhrelation1}) but with $E=E'$
\begin{equation}
\int_{-\infty}^\infty e^\ell d\ell\,\varphi_E(\ell)\varphi_E(\ell)=\frac{\rho_{1/2}(E)}{\rho_0(E)}
\end{equation}
This is a divergent integral, so (\ref{hhrelation1}) only makes sense when $E\neq E'$. To deal with the divergence of the crosscap partition function, we compute it in another way, by gluing a crosscap to the geodesic boundary of a trumpet and integrating over the perimeter length of that boundary.
\begin{figure}[h]
\centering
\includegraphics[width=0.2\textwidth]{figures/ccZcopy.png}
\caption{a trumpet with its geodesic boundary replaced by a crosscap}
\end{figure}
The geodesic boundary length is $b$. In Appendix~\ref{appendixmeasure}, we show that the measure for integrating over $b$ for crosscaps is $\frac{db}{2\tanh\frac{b}{4}}$. Using this we get
\begin{equation}
Z_{\text{cc}}=\int \frac{db}{2\tanh\frac{b}{4}}Z_{\mathrm{Trumpet}}(\beta,b)=\int_0^\infty dE\,\int_0^\infty \frac{db}{2\tanh\frac{b}{4}}\frac{\cos(b\sqrt{2E})}{\pi\sqrt{2E}}e^{-\beta E}
\end{equation}
We then get an explicit expression of $\rho_{1/2}(E)$ i.e.
\begin{equation}
\rho_{1/2}(E)=\int_0^\infty \frac{db}{2\tanh\frac{b}{4}}\frac{\cos(b\sqrt{2E})}{\pi\sqrt{2E}}
\end{equation}
Note that the lower limit of this integral, $b\rightarrow0$, gives a divergence. That is when the crosscap end of the trumpet becomes very long and thin. The divergence could be regulated by studying a different theory of gravity, for example $(2,p)$ Liouville gravity \cite{Seiberg:2004at} (see Appendix F of \cite{StanfordWitten19}). But the precise regularization will not be important, because after regularization the result is still a small correction to the disk partition function, being suppressed by a factor of $e^{-S_0}$.
We now show that, after regularization, the contribution of geodesics not going through the crosscap to the two-point correlation function decays with time. The contribution of geodesic $\ell_1$ is
\begin{align}
\braket{V(t=-i\tau)V(0)}_{\text{cc},1}&=\int dL d\ell_1d\ell_2\,e^{\ell_1/2+\ell_2/2}e^{-\Delta\ell_1}I(L,\ell_1,L,\ell_2)\varphi_{\text{Disk},\tau}(\ell_1)\varphi_{\text{Disk},\beta-\tau}(\ell_2)\\
&=\int dEdE_1\rho_0(E_1)e^{-\tau E_1}e^{-(\beta-\tau)E}\rho_{1/2}(E)|V_{E,E_1}|^2
\end{align}
and similarly the contribution of geodesic $\ell_2$ is
\begin{align}
\braket{V(t=-i\tau)V(0)}_{\text{cc},2}&=\int dL d\ell_1d\ell_2\,e^{\ell_1/2+\ell_2/2}e^{-\Delta\ell_2}I(L,\ell_1,L,\ell_2)\varphi_{\text{Disk},\tau}(\ell_1)\varphi_{\text{Disk},\beta-\tau}(\ell_2)\\
&=\int dEdE_2\rho_0(E_2)e^{-\tau E}e^{-(\beta-\tau)E_2}\rho_{1/2}(E)|V_{E,E_2}|^2
\end{align}
When $t$ is large, $\braket{V(t=-i\tau)V(0)}_{\text{cc},1}$ is dominated by $E,E_1$ close to zero. In that case $|V_{E,E_1}|^2$ approaches a constant and $\rho_0(E_1)\approx\sqrt{E_1}$ (up to an overall constant), so we can do the integral approximately
\begin{equation}
\braket{V(t=-i\tau)V(0)}_{\text{cc},1}\approx\int dEdE_1\sqrt{E_1}e^{-\tau E_1}e^{-(\beta-\tau)E}\rho_{1/2}(E)|V_{0,0}|^2\sim\frac{1}{t^{5/2}}\rho_{1/2}(0)|V_{0,0}|^2
\end{equation}
and similarly for $\braket{V(t=-i\tau)V(0)}_{\text{cc},2}$. They decay with time as $t^{-5/2}$, so the geodesics not going through the crosscap do not contribute to the late-time two-point correlation function.
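In more detail, the power $5/2$ arises because $\int_0^\infty dE_1\sqrt{E_1}\,e^{-\tau E_1}=\frac{\sqrt{\pi}}{2}\tau^{-3/2}$, while the remaining energy integral contributes, heuristically,
\begin{align*}
\int_0^\infty dE\,\rho_{1/2}(E)\,e^{-(\beta-\tau)E}\approx \frac{\rho_{1/2}(0)}{\beta-\tau},
\end{align*}
since the oscillating factor effectively cuts that integral off at $E\sim 1/t$; with $\tau=\beta/2+it$ the product then scales as $t^{-3/2}\cdot t^{-1}=t^{-5/2}$.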
Therefore, the geodesic going through the crosscap gives a non-decaying contribution to the two-point correlation function, $\braket{V(t=-i\tau)V(0)}_{\text{cc},0}$. On the disk+crosscap, there are also other geodesics that wind around the crosscap multiple times, so that they self-intersect. At large time, they do not contribute. The classical solution for the disk+crosscap configuration with two operator insertions is given by the quotient of the configuration calculated in Appendix B of \cite{Stanfordwormhole} by identifying antipodal points. At late time the classical solution for $b$ scales with time $t$, so the main contribution to the quantum wavefunction calculation comes from large $b$; that is, the geodesic tends to be long and its contribution decays over time.
But before we conclude, we should note that there are two versions of the sum over geometries, with or without a weighting factor $(-1)^{n_c}$, where $n_c$ is the number of crosscaps \cite{StanfordWitten19}. In the case of a disk plus one crosscap this factor is simply $-1$, so the two versions give the two-point correlator $\pm\braket{V(t=-i\tau)V(0)}_{\text{cc},0}$. We will see that these correspond to GOE- and GSE-like matrix integrals on the boundary.
\subsection{RMT}
\label{bosonicRMT}
\cite{SaadShenkerStanford19} showed that there is a correspondence between JT gravity path integrals and Hermitian matrix integrals
\begin{equation}
Z_{\mathrm{JT}}(\beta_1,\ldots,\beta_n)\leftrightarrow \frac{1}{\mathcal{Z}}\int dH\,e^{-L\mathrm{Tr} V(H)}\mathrm{Tr}e^{-\beta_1 H}\cdots\mathrm{Tr}e^{-\beta_n H}
\end{equation}
where $H$ is a Hermitian $L\times L$ matrix \footnote{More precisely, these matrices should be double-scaled.}, $V(H)$ is a potential function and $\mathcal{Z}=\int dH\,e^{-L\mathrm{Tr} V(H)}$ is the matrix integral partition function. The left-hand side is the JT gravity partition function with $n$ asymptotic boundaries of regularized lengths $\beta_1,\ldots,\beta_n$, and the right-hand side is an average of products of thermal partition functions over an appropriate random matrix ensemble.
There are three Dyson $\beta$-ensembles of random matrices: GUE, GOE, GSE \cite{Dyson}. GUE is an ensemble of random Hermitian matrices and the ensemble is invariant under unitary transformations, i.e. $U(L)$ satisfying $U^\dagger U=1$. GOE and GSE can be derived by adding time-reversal symmetry (i.e. the time-reversal operator $T$ commuting with $U$) to GUE with different anomaly conditions $T^2=\pm1$. Recall that Lorentzian time-reversal $T$ is antilinear and antiunitary. Since we know that complex conjugate $K$ is antilinear and antiunitary \footnote{Complex conjugate operator is an antilinear and antiunitary operator $K:\mathcal{H}\rightarrow\mathcal{H}$ because $K\sum\alpha_i\ket{i}=\sum\alpha_i^*\ket{i}$ and if we write $\ket{\psi}=\sum\alpha_i\ket{i}, \quad \ket{\chi}=\sum\beta_i\ket{i}$ then we have the inner product $\braket{K\chi| K\psi}=\sum_{i,j}(\bra{j}\beta_j)(\alpha^*_i\ket{i})=\sum_i\alpha_i^*\beta_i\braket{i|i}=\sum_{i,j}(\bra{j}\alpha^*_j)(\beta_i\ket{i})=\braket{\psi|\chi}$.}, it is natural to model $T$ with a factor of $K$ in it. For $T^2=1$, we can just take $T=K$ and the condition $TUT^{-1}=U$ reduces $U^\dagger U=1$ to $U^TU=1$, which is equivalent to saying $U\in O(L)$, and the ensemble becomes GOE. For $T^2=-1$, we instead take $T=K\omega$ where $\omega=\begin{pmatrix}0&1\\-1&0\end{pmatrix}$. The condition $TUT^{-1}=U$ reduces $U^\dagger U=1$ to $U^T\omega U=\omega$, which is equivalent to saying $U\in Sp(L)$, and the ensemble becomes GSE. Now we examine the three ensembles one by one.
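The algebra in the preceding paragraph is easy to check numerically. The following numpy sketch (illustrative only) verifies, for the $T=K\omega$ case, that $T^2=-1$ and that a unitary $U$ satisfying $U^T\omega U=\omega$ indeed obeys $TUT^{-1}=U$, which as a linear map reads $-\omega U^*\omega=U$; the matrix $U$ is built by exponentiating a random element of the corresponding Lie algebra.
\begin{verbatim}
# Numerical illustration of T = K omega:  T^2 = -1, and unitary + symplectic
# (U^T omega U = omega) is equivalent to T U T^{-1} = U, i.e. -omega U* omega = U.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 4                                                  # L = 2n
omega = np.kron(np.array([[0., 1.], [-1., 0.]]), np.eye(n))

psi = rng.normal(size=2*n) + 1j*rng.normal(size=2*n)
# T psi = omega psi*, so T^2 psi = omega (omega psi*)* = -psi
print(np.allclose(omega @ (omega @ psi.conj()).conj(), -psi))

# An element of the compact symplectic group: exponentiate X anti-hermitian
# with X^T omega + omega X = 0.
Y = rng.normal(size=(2*n, 2*n)) + 1j*rng.normal(size=(2*n, 2*n))
Y = Y - Y.conj().T
X = 0.5*(Y - omega @ Y.T @ np.linalg.inv(omega))
U = expm(X)

print(np.allclose(U.conj().T @ U, np.eye(2*n)))        # unitary
print(np.allclose(U.T @ omega @ U, omega))             # symplectic
print(np.allclose(-omega @ U.conj() @ omega, U))       # T U T^{-1} = U
\end{verbatim}
The $T=K$ case is analogous: there $T^2=+1$, and the invariance condition forces $U$ to be real, i.e.\ orthogonal.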
\subsubsection*{GUE}
Now that we have considered two-point correlation functions in JT, we want to calculate $\braket{V(t)V(0)}=\braket{U(t)^\dagger V(0) U(t)V(0)}=\frac{1}{L}\mathrm{tr}[U(t)^\dagger V(0) U(t)V(0)]$ in RMT. A simple model \cite{Blommaert, Weingartens} of the time evolution operator $U(t)$ is to approximate it by
\begin{equation}
U(t)= e^{-i(u^\dagger hu)t}=u^\dagger e^{-iht} u
\end{equation}
where $h$ is diagonal and $u$ is Haar random. Note that this model only works at late times when the time evolution is random enough. Time evolution of an operator $W$ is then given by
\begin{equation}
W(t)=U^\dagger W(0) U=u^\dagger e^{iht} u W(0) u^\dagger e^{-iht} u
\end{equation}
On the RMT side, the two-point correlation function can be calculated via an integral over $u$
\begin{equation}
\int du\,\braket{V(t)V(0)}=\int du\,\braket{e^{iHt}V(0)e^{-iHt}V(0)}=\int du\,\braket{u^\dagger e^{iht}uV(0)u^\dagger e^{-iht}uV(0)}
\end{equation}
This can be calculated using the Weingarten formula for unitary matrices $u$
\begin{multline}
\int du\,u_{i_1}^{\phantom{i_1}j_1}u_{i_2}^{\phantom{i_2}j_2}(u^\dagger)_{k_1}^{\phantom{k_1}l_1}(u^\dagger)_{k_2}^{\phantom{k_2}l_2}=\frac{1}{L^2-1}\Big(\delta_{i_1,l_1}\delta_{i_2,l_2}\delta_{k_1,j_1}\delta_{k_2,j_2}+\delta_{i_1,l_2}\delta_{i_2,l_1}\delta_{k_1,j_2}\delta_{k_2,j_1}\\
-\frac{1}{L}\delta_{i_1,l_1}\delta_{i_2,l_2}\delta_{k_1,j_2}\delta_{k_2,j_1}-\frac{1}{L}\delta_{i_1,l_2}\delta_{i_2,l_1}\delta_{k_1,j_1}\delta_{k_2,j_2}\Big)\label{weingartenu}
\end{multline}
To leading order
\begin{equation}
\int du\,u_{i_1}^{\phantom{i_1}j_1}u_{i_2}^{\phantom{i_2}j_2}(u^\dagger)_{k_1}^{\phantom{k_1}l_1}(u^\dagger)_{k_2}^{\phantom{k_2}l_2}\approx\frac{1}{L^2}\big(\delta_{i_1,l_1}\delta_{i_2,l_2}\delta_{k_1,j_1}\delta_{k_2,j_2}+\delta_{i_1,l_2}\delta_{i_2,l_1}\delta_{k_1,j_2}\delta_{k_2,j_1}\big)
\end{equation}
then the two-point correlator is given by
\begin{multline}
\int du\,\braket{V(t)V(0)}=\frac{1}{L^2-1}\Big(L^2\braket{e^{iht}}\braket{e^{-iht}}\braket{VV}+L^2\braket{V}\braket{V}\Big)\\
-\frac{1}{L(L^2-1)}\Big(L\braket{VV}+L^3\braket{e^{iht}}\braket{e^{-iht}}\braket{V}\braket{V}\Big)
\end{multline}
Then in the limit $L\rightarrow\infty$ and assuming $\braket{V}=0$
\begin{equation}
\int du\,\braket{V(t)V(0)}\approx \braket{e^{iht}}\braket{e^{-iht}}\braket{VV}\label{VVu}
\end{equation}
We know from RMT computation \cite{9authors} that
\begin{equation}
\braket{e^{iht}}\braket{e^{-iht}}\sim\min\{\frac{t}{2\pi L^2},\frac{1}{\pi L}\}\sim\begin{cases}
t/(2\pi L^2)& t<2L\\1/(\pi L)& t\geq 2L\end{cases}\label{ramprelation}
\end{equation}
Thus (\ref{VVu}) becomes approximately
\begin{equation}
\min\{\frac{t}{L^2},\frac{1}{L}\}\braket{VV}
\end{equation}
It exhibits a ramp connected to a plateau in time, confirming the analysis in \cite{Saadsingleauthor}. Also note that \cite{Saadsingleauthor} showed that the ramp originates from the handle-disk contribution in JT gravity, with two-point correlation function given by
\begin{equation}
\braket{VV}_{\mathrm{handle-disk}}=e^{-S_0}\int dEdE'\,\rho_2(E,E')e^{-\tau E}e^{-(\beta-\tau)E'}|V_{E,E'}|^2
\end{equation}
where
\begin{equation}
\rho_2(E,E')=\int_0^\infty bdb\,\frac{\cos(b\sqrt{2E})\cos(b\sqrt{2E'})}{\pi^2\sqrt{2E}\sqrt{2E'}}
\end{equation}
In particular, if we identify $\rho_0(E)|V_{E,E}|^2=\braket{E|VV|E}$, we can compute the normalized contribution to the two-point correlation function from handle-disk
\begin{equation}
\frac{\braket{VV}_{\mathrm{handle-disk}}}{\braket{1}_{\mathrm{disk}}}=\frac{\braket{VV}_{\mathrm{handle-disk}}}{\int dE\,e^{S_0}\rho_0(E)e^{-\beta E}}\sim \frac{t}{(\rho_0(E)e^{S_0})^2}\braket{E|VV|E}\sim \frac{t}{L^2}\braket{E|VV|E}
\end{equation}
where we have identified $L$ with $\rho_0(E)e^{S_0}$. This confirms the claim that the handle-disk gives the ramp \cite{Saadsingleauthor}.
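As a numerical illustration of \eqref{ramprelation} (the $O(1)$ coefficients are convention dependent and we do not try to reproduce them), one can average $|\mathrm{tr}\,e^{-iht}|^2/L^2$ over GUE draws of $h$; the sketch below exhibits the early-time decay, the linear ramp, and the plateau of height $\sim 1/L$ that sets in around the Heisenberg time $\sim 2\pi\rho(0)$.
\begin{verbatim}
# GUE spectral form factor <e^{iht}><e^{-iht}> = |tr e^{-iht}|^2 / L^2,
# averaged over the ensemble: slope, then ramp, then plateau ~ 1/L.
import numpy as np

rng = np.random.default_rng(1)
L, samples = 64, 400
ts = np.linspace(0.1, 400.0, 200)
sff = np.zeros_like(ts)

for _ in range(samples):
    A = rng.normal(size=(L, L)) + 1j*rng.normal(size=(L, L))
    H = (A + A.conj().T) / (2*np.sqrt(2*L))      # GUE with eigenvalues of order one
    E = np.linalg.eigvalsh(H)
    phases = np.exp(-1j*np.outer(ts, E))         # shape (len(ts), L)
    sff += np.abs(phases.sum(axis=1))**2 / L**2
sff /= samples

for t, y in zip(ts[::40], sff[::40]):
    print(t, y)                                  # decay, then growth, then ~ 1/L
\end{verbatim}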
\subsubsection*{GOE}
In our case, the geometries of JT are non-orientable. For the case where we do not include the weighting factor $(-1)^{n_c}$, on the RMT side we replace the complex unitary matrix $u$ by a real orthogonal matrix $o$ so that there is time-reversal symmetry. The two-point correlation function is then given by
\begin{equation}
\int do\,\braket{V(t)V(0)}=\int do\,\braket{e^{iHt}V(0)e^{-iHt}V(0)}=\int do\,\braket{o^\dagger e^{iht}oV(0)o^\dagger e^{-iht}oV(0)}
\end{equation}
We can derive a formula analogous to the Weingarten formula for orthogonal matrices
\begin{multline}
\int do\,o_{i_1}^{\phantom{i_1}j_1}o_{i_2}^{\phantom{i_2}j_2}(o^\dagger)_{k_1}^{\phantom{k_1}l_1}(o^\dagger)_{k_2}^{\phantom{k_2}l_2}\\
=\frac{(L+1)}{L(L-1)(L+2)}\Big(\delta_{i_1,l_1}\delta_{i_2,l_2}\delta_{k_1,j_1}\delta_{k_2,j_2}+\delta_{i_1,l_2}\delta_{i_2,l_1}\delta_{k_1,j_2}\delta_{k_2,j_1}+\delta_{i_1,i_2}\delta_{l_1,l_2}\delta_{k_1,k_2}\delta_{j_1,j_2}\\
-\frac{1}{L+1}\delta_{i_1,l_1}\delta_{i_2,l_2}\delta_{k_1,j_2}\delta_{k_2,j_1}-\frac{1}{L+1}\delta_{i_1,l_1}\delta_{i_2,l_2}\delta_{k_1,k_2}\delta_{j_1,j_2}-\frac{1}{L+1}\delta_{i_1,l_2}\delta_{i_2,l_1}\delta_{k_1,j_1}\delta_{k_2,j_2}\\
-\frac{1}{L+1}\delta_{i_1,l_2}\delta_{i_2,l_1}\delta_{k_1,k_2}\delta_{j_1,j_2}-\frac{1}{L+1}\delta_{i_1,i_2}\delta_{l_1,l_2}\delta_{k_1,j_1}\delta_{k_2,j_2}-\frac{1}{L+1}\delta_{i_1,i_2}\delta_{l_1,l_2}\delta_{k_1,j_2}\delta_{k_2,j_1}\Big)\label{weingarteno}
\end{multline}
To leading order
\begin{equation}
\int do\,o_{i_1}^{\phantom{i_1}j_1}o_{i_2}^{\phantom{i_2}j_2}(o^\dagger)_{k_1}^{\phantom{k_1}l_1}(o^\dagger)_{k_2}^{\phantom{k_2}l_2}\approx\frac{1}{L^2}\big(\delta_{i_1,l_1}\delta_{i_2,l_2}\delta_{k_1,j_1}\delta_{k_2,j_2}+\delta_{i_1,l_2}\delta_{i_2,l_1}\delta_{k_1,j_2}\delta_{k_2,j_1}+\delta_{i_1,i_2}\delta_{l_1,l_2}\delta_{k_1,k_2}\delta_{j_1,j_2}\big)
\end{equation}
then using the above formula the two-point correlator is given by
\begin{multline}
\int do\,\braket{V(t)V(0)}=\frac{(L+1)}{L(L-1)(L+2)}\Big(L^2\braket{e^{iht}}\braket{e^{-iht}}\braket{VV}+L^2\braket{V}\braket{V}+L\braket{VV^T}\Big)\\
-\frac{1}{L(L-1)(L+2)}\Big(L\braket{VV}+L\braket{VV}+L\braket{VV^T}\\
+L^2\braket{e^{iht}}\braket{e^{-iht}}\braket{VV^T}+L^2\braket{V}\braket{V}+L^3\braket{e^{iht}}\braket{e^{-iht}}\braket{V}\braket{V}\Big)
\end{multline}
Then in the limit $L\rightarrow\infty$ and assuming $\braket{V}=0$
\begin{equation}
\int do\,\braket{V(t)V(0)}\approx\braket{e^{iht}}\braket{e^{-iht}}\braket{VV}+\frac{1}{L}\braket{VV^T}\label{VVo}
\end{equation}
The first term is the same as (\ref{VVu}): it gives a ramp at early times and a plateau at late times, and at early times it corresponds to a handle-disk in JT. Since we are dealing with non-orientable geometries, we write $\rho_0(E)|V_{E,E}|^2=\braket{E|VV|E}$ if we identify the two sides of the geodesic without a twist, and write $\rho_0(E)|V_{E,E}|^2=\braket{E|VV^T|E}$ if we identify the two sides of the geodesic with a one-half twist, which translates to a reflection across the horizontal axis, or Euclidean time-reversal. Thus the normalized contribution to the two-point correlation function from the crosscap is given by
\begin{equation}
\frac{\braket{VV}_{\mathrm{cc}}}{\braket{1}_{\mathrm{disk}}}=\frac{\int dE\,e^{-\beta E}\braket{E|VV^T|E}}{\int dE\,e^{S_0}\rho_0(E)e^{-\beta E}}\sim \frac{1}{\rho_0(E)e^{S_0}}\braket{E|VV^T|E}\sim \frac{1}{L}\braket{E|VV^T|E}
\end{equation}
which identifies with the second term of equation~(\refeq{VVo}). This doubles the original plateau.
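Equation \eqref{VVo} can also be tested directly by Monte Carlo: fix a diagonal $h$ and a traceless real $V$ (any fixed matrix works for this identity), average over Haar-random orthogonal matrices, and compare with the right-hand side. A minimal sketch, with agreement expected only up to $1/L$ corrections and sampling error, is:
\begin{verbatim}
# Monte-Carlo check of  int do <V(t)V(0)>  ~  <e^{iht}><e^{-iht}><VV> + <VV^T>/L
import numpy as np
from scipy.stats import ortho_group

rng = np.random.default_rng(2)
L, samples, t = 40, 2000, 37.0

h = rng.normal(size=L)                                   # fixed spectrum
V = rng.normal(size=(L, L))
V -= np.trace(V)/L * np.eye(L)                           # traceless, so <V> = 0

eiht, emiht = np.diag(np.exp(1j*h*t)), np.diag(np.exp(-1j*h*t))
avg = 0.0
for o in ortho_group.rvs(L, size=samples, random_state=3):
    avg += np.trace(o.T @ eiht @ o @ V @ o.T @ emiht @ o @ V).real / L
avg /= samples

form = abs(np.exp(1j*h*t).sum()/L)**2                    # <e^{iht}><e^{-iht}>
pred = form*np.trace(V @ V)/L + np.trace(V @ V.T)/L**2
print(avg, pred)                                         # close up to 1/L and MC error
\end{verbatim}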
\subsubsection*{GSE}
For the case where we do include the weighting factor $(-1)^{n_c}$, on the RMT side we replace the orthogonal matrix $o$ by a symplectic matrix $s$. The two-point correlation function is then given by
\begin{equation}
\int ds\,\braket{V(t)V(0)}=\int ds\,\braket{e^{iHt}V(0)e^{-iHt}V(0)}=\int ds\,\braket{s^\dagger e^{iht}sV(0)s^\dagger e^{-iht}sV(0)}
\end{equation}
We can derive a formula for symplectic matrices $s$ analogous to (\ref{weingarteno})
\begin{multline}
\int ds\,s_{i_1}^{\phantom{i_1}j_1}s_{i_2}^{\phantom{i_2}j_2}(s^\dagger)_{k_1}^{\phantom{k_1}l_1}(s^\dagger)_{k_2}^{\phantom{k_2}l_2}\\
=\frac{(L+1)}{L(L-1)(L+2)}\Big(\delta_{i_1,l_1}\delta_{i_2,l_2}\delta_{k_1,j_1}\delta_{k_2,j_2}+\delta_{i_1,l_2}\delta_{i_2,l_1}\delta_{k_1,j_2}\delta_{k_2,j_1}+\omega_{i_1,i_2}\omega_{l_1,l_2}\omega_{k_1,k_2}\omega_{j_1,j_2}\\
-\frac{1}{L+1}\delta_{i_1,l_1}\delta_{i_2,l_2}\delta_{k_1,j_2}\delta_{k_2,j_1}+\frac{1}{L+1}\delta_{i_1,l_1}\delta_{i_2,l_2}\omega_{k_1,k_2}\omega_{j_1,j_2}-\frac{1}{L+1}\delta_{i_1,l_2}\delta_{i_2,l_1}\delta_{k_1,j_1}\delta_{k_2,j_2}\\
+\frac{1}{L+1}\delta_{i_1,l_2}\delta_{i_2,l_1}\omega_{k_1,k_2}\omega_{j_1,j_2}+\frac{1}{L+1}\omega_{i_1,i_2}\omega_{l_1,l_2}\delta_{k_1,j_1}\delta_{k_2,j_2}+\frac{1}{L+1}\omega_{i_1,i_2}\omega_{l_1,l_2}\delta_{k_1,j_2}\delta_{k_2,j_1}\Big)\label{weingartens}
\end{multline}
To leading order
\begin{equation}
\int ds\,s_{i_1}^{\phantom{i_1}j_1}s_{i_2}^{\phantom{i_2}j_2}(s^\dagger)_{k_1}^{\phantom{k_1}l_1}(s^\dagger)_{k_2}^{\phantom{k_2}l_2}
\approx\frac{1}{L^2}\big(\delta_{i_1,l_1}\delta_{i_2,l_2}\delta_{k_1,j_1}\delta_{k_2,j_2}+\delta_{i_1,l_2}\delta_{i_2,l_1}\delta_{k_1,j_2}\delta_{k_2,j_1}+\omega_{i_1,i_2}\omega_{l_1,l_2}\omega_{k_1,k_2}\omega_{j_1,j_2}\big)
\end{equation}
where $\omega=\begin{pmatrix}0&1\\-1&0\end{pmatrix}$ and a symplectic matrix $s$ satisfies the relations $s\omega s^T=\omega$ and $ss^\dagger=1$. The two-point correlation function is then given by
\begin{multline}
\int ds\,\braket{V(t)V(0)}=\frac{(L+1)}{L(L-1)(L+2)}\Big(L^2\braket{e^{iht}}\braket{e^{-iht}}\braket{VV}+L^2\braket{V}\braket{V}+L\braket{\omega ^Te^{iht}\omega e^{-iht}}\braket{V \omega V^T \omega}\Big)\\
-\frac{1}{L(L-1)(L+2)}\Big(L\braket{VV}-L\braket{\omega ^Te^{iht}\omega e^{-iht}}\braket{VV}-L\braket{V \omega V^T \omega}\\
-L^2\braket{e^{iht}}\braket{e^{-iht}}\braket{V \omega V^T \omega}-L^2\braket{\omega ^Te^{iht}\omega e^{-iht}}\braket{V}\braket{V}+L^3\braket{e^{iht}}\braket{e^{-iht}}\braket{V}\braket{V}\Big)
\end{multline}
Then in the limit $L\rightarrow\infty$ and assuming $\braket{V}=0$
\begin{equation}
\int ds\,\braket{V(t)V(0)}\approx\braket{e^{iht}}\braket{e^{-iht}}\braket{VV}-\frac{1}{L}\braket{V \omega V^T \omega^{-1}}\label{VVs}
\end{equation}
where we have used the relation $\omega^T e^{iht}\omega=e^{iht}$ since matrix $h$ has no symplectic structure.
Again, the first term is the same as (\ref{VVu}) and at early times corresponds to a handle-disk in JT. In a topological field theory with weighting factor $(-1)^{n_c}$, if we identify the two sides of the geodesic with a one-half twist we should write $\rho_0(E)|V_{E,E}|^2=\braket{E|V \omega V^T \omega^{-1}|E}$. Thus the normalized contribution to the two-point correlation function from the crosscap is given by
\begin{equation}
\frac{\braket{VV}_{\mathrm{cc}}}{\braket{1}_{\mathrm{disk}}}=\frac{-\int dE\,e^{-\beta E}\braket{E|V\omega V^T\omega^{-1}|E}}{\int dE\,e^{S_0}\rho_0(E)e^{-\beta E}}\sim -\frac{1}{\rho_0(E)e^{S_0}}\braket{E|V\omega V^T\omega^{-1}|E}\sim -\frac{1}{L}\braket{E|V\omega V^T\omega^{-1}|E}
\end{equation}
which identifies with the second term of equation~(\refeq{VVs}). This cancels the original plateau.
\section{Fermionic Two-Point Correlation Functions}
We now shift gears from bosonic two-point functions to fermionic two-point functions. On the bulk side, fermions introduce Spin structures for orientable manifolds and Pin structures for non-orientable manifolds. On the boundary side, anomalies of two discrete symmetries of the boundary theory classify it into different RMT classes. In this section we show that the JT gravity and RMT calculations of fermionic two-point correlation functions match, and together they confirm numerical computations in SYK \cite{9authors}. There, the SYK Hamiltonian for $N$ Majorana fermions $\psi_a$ is given by
\begin{equation}
H=\frac{1}{4!}\sum_{a,b,c,d}J_{abcd}\psi_a\psi_b\psi_c\psi_d=\sum_{a<b<c<d}J_{abcd}\psi_a\psi_b\psi_c\psi_d\label{syk}
\end{equation}
where $J_{abcd}$ are totally antisymmetric independent parameters drawn from a Gaussian distribution. In SYK, the two-point correlation function averaged over the couplings $J$ is given by
\begin{equation}
G(t)=\frac{1}{N}\sum_{i=1}^N\frac{\braket{\mathrm{Tr}[e^{-\beta H}\psi_i(t)\psi_i(0)]}_J}{\braket{Z(\beta)}_J}
\end{equation}
The plot of $G(t)$ versus time $t$ shows combinations of ramp and plateau features depending on $N\bmod 8$, as summarized in table~\ref{SYKresult}.
\begin{table}[h]
\centering
\begin{tabular}{ c c c }
\\
N & ramp & plateau\\
\hline\\
$0\mod8$& $\times$ & $\times$ \\
\\
$2\mod8$& $\checkmark$ & $\checkmark$\\
\\
$4\mod8$& $\times$ & $\times$ \\
\\
$6\mod8$& $\checkmark$ & $\times$\\
\\
\end{tabular}
\caption{SYK numerics result \cite{9authors}}
\label{SYKresult}
\end{table}
Note that this table only contains even $N$. We will match JT and RMT calculations for both even and odd $N$, and for even $N$ we are able to confirm the results in the table.
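For concreteness, here is a minimal exact-diagonalization sketch of $G(t)$ for a single disorder realization at small $N$; it is not meant to reproduce table~\ref{SYKresult} (that requires larger $N$ and an average over many realizations), only to make the object $G(t)$ concrete. We use the common SYK conventions $\{\psi_a,\psi_b\}=\delta_{ab}$ and $\langle J_{abcd}^2\rangle=3!J^2/N^3$, neither of which is fixed by the discussion here, and we keep the real part of the trace.
\begin{verbatim}
# Exact-diagonalization sketch of G(t) for a single SYK realization, N = 8.
import numpy as np
from itertools import combinations

N = 8                                   # Majorana fermions on N/2 qubits
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_list(ops):
    out = np.array([[1.0]], dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

# Jordan-Wigner Majoranas, normalized so that {psi_a, psi_b} = delta_{ab}
psis = []
for q in range(N // 2):
    left, right = [Z]*q, [I2]*(N//2 - q - 1)
    psis.append(kron_list(left + [X] + right) / np.sqrt(2))
    psis.append(kron_list(left + [Y] + right) / np.sqrt(2))

rng = np.random.default_rng(0)
J = 1.0
H = np.zeros((2**(N//2), 2**(N//2)), dtype=complex)
for a, b, c, d in combinations(range(N), 4):
    Jabcd = rng.normal(scale=np.sqrt(6*J**2/N**3))
    H += Jabcd * psis[a] @ psis[b] @ psis[c] @ psis[d]

E, U = np.linalg.eigh(H)
beta = 5.0
rho = U @ np.diag(np.exp(-beta*E)) @ U.conj().T          # e^{-beta H}
Zbeta = np.exp(-beta*E).sum()

def G(t):
    evol = U @ np.diag(np.exp(1j*E*t)) @ U.conj().T      # e^{iHt}
    tot = 0.0
    for psi in psis:
        psi_t = evol @ psi @ evol.conj().T
        tot += np.trace(rho @ psi_t @ psi).real
    return tot / (N * Zbeta)

for t in (0.0, 1.0, 5.0, 20.0, 100.0):
    print(t, G(t))                                       # G(0) = 1/2 in this convention
\end{verbatim}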
\subsection{Review}
Before we delve into Pin structures on non-orientable geometries and compute fermionic 2-point correlation functions, let us first review Spin structures on oriented geometries by computing a simpler example: the product of two fermionic one-point functions. This subsection is based on \cite{StanfordWitten19} and \cite{Witten16}.
To compute a particular quantity in JT involving fermions, we need to sum over all Spin structures on a particular geometry with an appropriate weighting factor characterized by a topological invariant $\zeta$. More specifically, if we consider a manifold $Y$ with boundary $X$, we do that sum by fixing a Spin structure on $X$ and summing over compatible Spin structures on $Y$. For the SYK model with $N$ Majorana fermions, the weighting factor is given by $(-1)^{N\zeta}$. The topological invariant $\zeta$ is defined as follows: if $Y$ has no boundary, for a bulk fermion field $\Psi$ we can consider its Dirac equation on $Y$, $\slashed{D}\Psi=0$. The number of zero modes of this equation mod 2 is $\zeta$.
Now we consider an example illustrating the Spin structure: the product of two fermionic one-point functions. Before looking at fermions, we first recall that Saad \cite{Saadsingleauthor} showed that for bosons the product of two one-point functions has a non-decaying contribution from a connected geometry called the double-trumpet (see figure~\ref{doubletrumpet}a), and the result is given by
\begin{align}
\braket{V}_\beta\braket{V}_{\beta'}&= \int e^\ell d\ell\,P_{\text{Disk}}(\beta,\beta',\ell,\ell)e^{-\Delta\ell}\\
&=\int e^\ell d\ell\,\int dE\,\rho_0(E)e^{-(\beta+\beta')E}\varphi_E(\ell)^2e^{-\Delta\ell}\\
&=\int dE\,\rho_0(E)e^{-(\beta+\beta')E}|V_{E,E}|^2\label{1pointprod}
\end{align}
In this case, the manifold $Y$ is a double trumpet and $X$ is the union of two asymptotic circles. On each circle, there are two Spin structures: antiperiodic and periodic fermions on a circle are called Neveu-Schwarz (NS) and Ramond (R), respectively. Note that the Spin structures on the two boundaries of $Y$ should be the same since they can both be slid to the center of the double trumpet. Thus we can bring the two boundary circles together and identify them, so that the Spin structures on a double trumpet are the same as those on a torus.
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{figures/2trumpet.png}
\caption{(a) double trumpet (b) torus}
\label{doubletrumpet}
\end{figure}
Topologically we can draw a torus as a square with $x\simeq x+1$ and $y\simeq y+1$. Then a spinor field $\Psi$ on $Y$ satisfies
\begin{equation}
\Psi(x+1,y)=\color{green}\pm\color{black}\Psi(x,y)\quad\quad\Psi(x,y+1)=\color{red}\pm\color{black} \Psi(x,y)
\end{equation}
with $+$ for R and $-$ for NS.
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{figures/torussquare.png}
\caption{torus spin structure}
\end{figure}
Let $z=x+iy$; then the Dirac equation of $\Psi$ can be written as $\partial\Psi/\partial\overline{z}=0$, and the topological invariant $\zeta$, i.e. the number of solutions to the Dirac equation mod 2, is given in table~\ref{table:torusspin}. We can see that a torus has 4 Spin structures.
\begin{table}[h]
\centering
\begin{tabular}{ c | c c }
\\
$\color{blue}\zeta$ & \color{red}R & \color{red}NS\\
\hline\\
\color{green}R & \color{blue}1 & \color{blue}0 \\
\\
\color{green}NS & \color{blue}0 & \color{blue}0 \\
\\
\end{tabular}
\caption{$\zeta$ of a torus}
\label{table:torusspin}
\end{table}
Let $F$ denote the sum over Spin structures on $Y$ keeping the Spin structure on $X$ fixed. We can then just sum $(-1)^{N\zeta}$ over rows for each column of the above table and get
\begin{equation}
F_{\mathrm{Torus}}^{\mathrm{\color{red}R}}=(-1)^N+1\quad\quad F_{\mathrm{Torus}}^{\mathrm{\color{red}NS}}=2
\end{equation}
Now we go back to the double-trumpet contribution to the product of two one-point functions.
\begin{figure}[H]
\centering
\includegraphics[width=0.2\textwidth]{figures/doubletrumpet.png}
\caption{double-trumpet with Spin structure}
\label{dtspin}
\end{figure}
We showed at the beginning of this subsection that without Spin structures the product of two bosonic one-point functions is given by
\begin{equation}
\braket{V}\braket{V}=\braket{VV}_{\mathrm{double-trumpet}}=\int dE\,\rho_0(E)e^{-(\beta+\beta')E}|V_{E,E}|^2
\end{equation}
With Spin structures, we just multiply the above expression by the appropriate weighting factor $(-1)^{N\zeta}$ and get the following table.
\begin{table}[H]
\centering
\begin{tabular}{ c | c c }
\\
$\braket{V}\braket{V}$ & $\braket{V}_\mathrm{\color{red}R}\braket{V}_\mathrm{\color{red}R}$ & $\braket{V}_\mathrm{\color{red}NS}\braket{V}_\mathrm{\color{red}NS}$\\
\hline\\
\color{green}R & $(-1)^{N\color{blue}1}$$\braket{VV}_{\mathrm{double-trumpet}}$ & $(-1)^{N\color{blue}0}$$\braket{VV}_{\mathrm{double-trumpet}}$ \\
\\
\color{green}NS & $(-1)^{N\color{blue}0}$$\braket{VV}_{\mathrm{double-trumpet}}$ & $(-1)^{N\color{blue}0}$$\braket{VV}_{\mathrm{double-trumpet}}$ \\
\\
\end{tabular}
\caption{product of two bosonic one-point functions}
\end{table}
Summing over Spin structures with the boundary Spin structure fixed, i.e. summing over the \color{green}green \color{black}Spin structures, we get
\begin{equation}
\braket{V}_{\color{red}\mathrm{R}}\braket{V}_{\color{red}\mathrm{R}}=((-1)^N+1)\braket{VV}_{\mathrm{double-trumpet}}\quad\quad\braket{V}_{\color{red}\mathrm{NS}}\braket{V}_{\color{red}\mathrm{NS}}=2\braket{VV}_{\mathrm{double-trumpet}}
\end{equation}
We now look at the product of two one-point functions of boundary fermions $\psi$, again if we ignore Spin structures, the product of two fermionic one-point functions is given by
\begin{equation}
\braket{\psi\psi}_{\mathrm{double-trumpet}}=\int dE\,\rho_0(E)e^{-(\beta+\beta')E}|\psi_{E,E}|^2\label{1pointprodf}
\end{equation}
With Spin structures, in addition to multiplying the above expression by the appropriate weighting factor $(-1)^{N\zeta}$, there is another factor we need to take into account for fermions. According to the \color{green}green \color{black}Spin structure in figure~\ref{dtspin} the two fermions on left/right boundaries are the same if the Spin structure is \color{green}R \color{black} and would differ by a minus sign if the Spin structure is \color{green}NS\color{black}. We summarize the result in the following table.
\begin{table}[h]
\centering
\begin{tabular}{ c | c c }
\\
$\braket{\psi}\braket{\psi}$ & $\braket{\psi}_\mathrm{\color{red}R}\braket{\psi}_\mathrm{\color{red}R}$ & $\braket{\psi}_\mathrm{\color{red}NS}\braket{\psi}_\mathrm{\color{red}NS}$\\
\hline\\
\color{green}R & $(-1)^{N\color{blue}1}$$\braket{\psi\psi}_{\mathrm{double-trumpet}}$ & $(-1)^{N\color{blue}0}$$\braket{\psi\psi}_{\mathrm{double-trumpet}}$ \\
\\
\color{green}NS & $(-1)^{N\color{blue}0}$$\color{green}(-1)$$\braket{\psi\psi}_{\mathrm{double-trumpet}}$ & $(-1)^{N\color{blue}0}\color{green}(-1)$$\braket{\psi\psi}_{\mathrm{double-trumpet}}$ \\
\\
\end{tabular}\caption{product of two fermionic one-point functions}
\end{table}
Summing over Spin structures with the boundary Spin structure fixed, i.e. summing over the \color{green}green \color{black}Spin structures, we get
\begin{equation}
\braket{\psi}_{\color{red}\mathrm{R}}\braket{\psi}_{\color{red}\mathrm{R}}=((-1)^N-1)\braket{\psi\psi}_{\mathrm{double-trumpet}}\quad\quad\braket{\psi}_{\color{red}\mathrm{NS}}\braket{\psi}_{\color{red}\mathrm{NS}}=0
\end{equation}
\subsection{Pin$^-$ Structure and Crosscaps}
We now consider non-orientable geometries, where there are structures called Pin structures, analogous to the Spin structures of orientable geometries. There are two ways of defining these Pin structures, called Pin$^+$ and Pin$^-$, respectively. We know that bulk fermions transform under Lorentzian time-reversal and spatial reflection respectively as (see \cite{Witten16} section 5 and Appendix A)
\begin{equation}
T:\Psi(t,x)\mapsto \gamma^0\Psi(-t,x)\quad\quad R:\Psi(t,x)\mapsto \gamma^1\Psi(t,-x)
\end{equation}
with one of $T$ and $R$ squaring to $1$ and the other squaring to $(-1)^F$. Whether we get Pin$^-$ or Pin$^+$ depends on which one squares to $1$, as summarized in table~\ref{pin+-}.
\begin{table}[H]
\centering
\begin{tabular}{ c c c c c c }
\\
Pin structures & $T^2$ & $R^2$\\
\hline\\
Pin$^-$ & 1 & $(-1)^F$\\
\\
Pin$^+$ & $(-1)^F$ & 1\\
\\
\end{tabular}
\caption{Pin$^-$ vs Pin$^+$}
\label{pin+-}
\end{table}
\noindent To reproduce the behavior of the standard SYK model, we are interested in the Pin$^-$ case, as explained in \cite{StanfordWitten19}. In this case, based on table~\ref{pin+-} we would like $(\gamma^0)^2=1$ and $(\gamma^1)^2=-1$, which can be satisfied with the following choice of real matrices \footnote{Note that our $\gamma$ matrices satisfy $\{\gamma^\mu,\gamma^\nu\}=-2\eta^{\mu\nu}$, which leads to a minus sign in the spin matrix as explained in Appendix~\ref{appendixfermion}.}.
\begin{equation}
\gamma^0=\begin{pmatrix}0&1\\1&0\end{pmatrix}\quad\quad\gamma^1=\begin{pmatrix}0&-1\\1&0\end{pmatrix}
\end{equation}
and we define the quantity
\begin{equation}
\overline{\gamma}=\gamma^0\gamma^1=\begin{pmatrix}1&0\\0&-1\end{pmatrix}\label{gammabar}
\end{equation}
to be used later. In Euclidean signature, the corresponding gamma matrices are given by
\begin{equation}
\gamma^1=\begin{pmatrix}0&-1\\1&0\end{pmatrix}\quad\quad\gamma^2=\begin{pmatrix}0&i\\i&0\end{pmatrix}
\end{equation}
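These conventions can be verified directly; the short check below confirms $(\gamma^0)^2=1$, $(\gamma^1)^2=-1$, $\overline{\gamma}=\mathrm{diag}(1,-1)$ and $\{\gamma^0,\gamma^1\}=0$ in Lorentzian signature (consistent with $\{\gamma^\mu,\gamma^\nu\}=-2\eta^{\mu\nu}$ for $\eta=\mathrm{diag}(-1,+1)$, as in the footnote), as well as $(\gamma^1)^2=(\gamma^2)^2=-1$ and $\{\gamma^1,\gamma^2\}=0$ in Euclidean signature.
\begin{verbatim}
# Check of the gamma-matrix conventions quoted above.
import numpy as np

g0 = np.array([[0, 1], [1, 0]], dtype=complex)
g1 = np.array([[0, -1], [1, 0]], dtype=complex)
gbar = g0 @ g1
print(np.allclose(g0 @ g0, np.eye(2)), np.allclose(g1 @ g1, -np.eye(2)))
print(np.allclose(gbar, np.diag([1.0, -1.0])))
print(np.allclose(g0 @ g1 + g1 @ g0, np.zeros((2, 2))))

g1E = np.array([[0, -1], [1, 0]], dtype=complex)
g2E = np.array([[0, 1j], [1j, 0]])
print(np.allclose(g1E @ g1E, -np.eye(2)), np.allclose(g2E @ g2E, -np.eye(2)),
      np.allclose(g1E @ g2E + g2E @ g1E, np.zeros((2, 2))))
\end{verbatim}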
In Euclidean signature, we can obtain our non-orientable geometry from a quotient of the hyperbolic disk, so we start by looking at fermions on a hyperbolic disk. There are two sets of commonly used coordinates for the hyperbolic disk, as shown in figure~\ref{coordinate} (see Appendix~\ref{appendixmeasure} for more details): $xy$-coordinates given by
\begin{equation}
ds^2=dx^2+\cosh^2x\,dy^2\label{diskmetricxy}
\end{equation}
and $\rho\theta$-coordinates given by
\begin{equation}
ds^2=d\rho^2+\sinh^2\rho\,d\theta^2\label{diskmetricrho}
\end{equation}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{figures/coordinatecopy.png}
\caption{(a) $xy$-coordinates (b) $\rho\theta$-coordinates}
\label{coordinate}
\end{figure}
For bulk fermions $\Psi$, we work with the $xy$-frame. Euclidean time-reversal in $y$ and reflection in $x$ are given by
\begin{equation}
T':\Psi(y,x)\mapsto \gamma^2\Psi(-y,x)\quad\quad R:\Psi(y,x)\mapsto \gamma^1\Psi(y,-x)
\end{equation}
Near $y=0$, boundary fermions $\psi$ on the $xy$-frame and $\rho\theta$-frame are the same on one side of $x\rightarrow\pm\infty$ and related by $\mathrm{Rot}(\pi)$, i.e. a rotation by $180^\circ$, on the other side. The rotation can be written as a product of two reflections $\mathrm{Rot}(\pi)=T'R$.
In the AdS/CFT correspondence, the behavior of the bulk fermion field near the boundary is important. Near the ``right'' boundary $x\rightarrow\infty$, the behavior is:
\begin{equation}
\Psi(y,x\rightarrow\infty)=e^{(-\frac{1}{2}+M)x}\psi_+(y)\eta_++e^{(-\frac{1}{2}-M)x}\psi_-(y)\eta_-\label{psiexpansion1}
\end{equation}
where $\eta_+=\begin{pmatrix}1\\1\end{pmatrix}$ and $\eta_-=\begin{pmatrix}1\\-1\end{pmatrix}$. We will take $M\geq 0$. One can impose boundary conditions that set to zero either $\psi_+$ (Dirichlet-like) or $\psi_-$ (Neumann-like). In each case, the operator that is not set to zero becomes the boundary fermion operator. The conformal dimension of the boundary operator is $\frac{1}{2}+M$ or $\frac{1}{2}-M$ in the two cases. Although it doesn't actually affect what follows, for concreteness we will use the Neumann-like boundary conditions that are necessary to get a boundary fermion with dimension $\frac{1}{4}$ as in the SYK model.
Similarly, near the ``left'' boundary $x\rightarrow-\infty$, the asymptotic behavior is
\begin{equation}
\Psi(y,x\rightarrow-\infty)=e^{(\frac{1}{2}+M)x}\psi_+(y)\eta_++e^{(\frac{1}{2}-M)x}\psi_-(y)\eta_-\label{psiexpansion2}
\end{equation}
(see Appendix~\ref{appendixfermion} for details on how we get these asymptotic behaviors).
The actions of the discrete symmetry operators $T'$ and $R$ on the bulk fermion induce corresponding actions on the boundary operators, which we can work out as follows. $T'$ and $R$ act on bulk fermions by conjugation as
\begin{equation}
T'\Psi(y,x)T'^{-1}=\gamma^2\Psi(-y,x)\quad\quad R\Psi(y,x)R^{-1}=\gamma^1\Psi(y,-x)
\end{equation}
Substituting (\ref{psiexpansion1}) and (\ref{psiexpansion2}) into the above equations,
\begin{align}
R\Psi(y,x\rightarrow\pm\infty)R^{-1}&=e^{(\mp\frac{1}{2}+M)x}R\psi_+(y)R^{-1}\eta_++e^{(\mp\frac{1}{2}-M)x}R\psi_-(y)R^{-1}\eta_-\\
&=e^{(\pm\frac{1}{2}-M)(-x)}R\psi_+(y)R^{-1}\gamma^1\eta_-+e^{(\pm\frac{1}{2}+M)(-x)}R\psi_-(y)R^{-1}(-\gamma^1\eta_+)\\
&=\gamma^1\Psi(y,-x\rightarrow\mp\infty)
\end{align}
gives
\begin{equation}
R\psi_+(y) R^{-1}=\psi_-(y)\quad\quad R\psi_-(y) R^{-1}=-\psi_+(y)
\end{equation}
And similarly,
\begin{align}
T'\Psi(y,x\rightarrow\pm\infty)T'^{-1}&=e^{(\mp\frac{1}{2}+M)x}T'\psi_+(y)T'^{-1}\eta_++e^{(\mp\frac{1}{2}-M)x}T'\psi_-(y)T'^{-1}\eta_-\\
&=e^{(\mp\frac{1}{2}+M)x}T'\psi_+(y)T'^{-1}(-i\gamma^2\eta_+)+e^{(\mp\frac{1}{2}-M)x}T'\psi_-(y)T'^{-1}(i\gamma^2\eta_-)\\
&=\gamma^2\Psi(-y,x\rightarrow\pm\infty)
\end{align}
gives
\begin{equation}
T'\psi_+(y)T'^{-1}=i\psi_+(-y)\quad\quad T'\psi_-(y) T'^{-1}=-i\psi_-(-y)
\end{equation}
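The spinor identities used in the manipulations above are $\gamma^1\eta_+=-\eta_-$, $\gamma^1\eta_-=\eta_+$ and $\gamma^2\eta_\pm=\pm i\eta_\pm$; a short numerical check (ours, purely for bookkeeping) is
\begin{verbatim}
import numpy as np

g1 = np.array([[0, -1], [1, 0]], dtype=complex)
g2 = np.array([[0, 1j], [1j, 0]], dtype=complex)
ep = np.array([1, 1], dtype=complex)     # eta_+
em = np.array([1, -1], dtype=complex)    # eta_-

assert np.allclose(g1 @ ep, -em) and np.allclose(g1 @ em, ep)
assert np.allclose(g2 @ ep, 1j * ep) and np.allclose(g2 @ em, -1j * em)
print("spinor identities verified")
\end{verbatim}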
One other thing we should keep in mind is that, according to AdS/CFT, the bulk field should decay as it approaches the boundary, so we choose the mode $\psi_+$ as $x\rightarrow\infty$ and the mode $\psi_-$ as $x\rightarrow-\infty$ on the boundary of the hyperbolic disk (see Appendix~\ref{appendixfermion} for more details).
There is a topological invariant $\eta$ for non-orientable manifolds analogous to $\zeta$ for orientable manifolds. For Pin$^-$, the Dirac equation is given by
\begin{equation}
(i\overline{\gamma}\slashed{D}+iM)\Psi=0
\end{equation}
Let $\{\lambda_k\}$ be eigenvalues of $i\overline{\gamma}\slashed{D}$. When $M>0$ is large, the regularized partition function is given by
\begin{equation}
Z_{\mathrm{reg}}=\det{i\overline{\gamma}\slashed{D}}=\prod_k\frac{\lambda_k}{\lambda_k+iM}=|Z|\exp(-\frac{i\pi}{2}\sum_k\mathrm{sign}(\lambda_k))=|Z|\exp(-\frac{i\pi}{2}\eta)
\end{equation}
(for more information see \cite{Witten16}). We can then define the APS invariant $\eta$ as
\begin{equation}
\eta=\lim_{s\rightarrow0}\sum_k\mathrm{sign}(\lambda_k)|\lambda_k|^{-s}=\lim_{\epsilon\rightarrow0}\sum_k\mathrm{sign}(\lambda_k)e^{-\epsilon|\lambda_k|}
\end{equation}
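To get some intuition for this regularized sum, consider the toy spectrum $\lambda_k=k+a$ with $k\in\mathbb{Z}$ and $0<a<1$ (a made-up example, not the crosscap spectrum computed in Appendix~\ref{appendixeta}); a short calculation gives $\eta=1-2a$, which the heat-kernel regularization reproduces numerically:
\begin{verbatim}
import numpy as np

# eta for the toy spectrum lambda_k = k + a, k in Z (analytic answer: 1 - 2a)
def eta_reg(a, eps=1e-3, K=200000):
    lam = np.arange(-K, K + 1) + a
    return np.sum(np.sign(lam) * np.exp(-eps * np.abs(lam)))

for a in (0.25, 0.5, 0.75):
    print(f"a = {a}:  eta_reg = {eta_reg(a):+.4f},  1 - 2a = {1 - 2*a:+.4f}")
\end{verbatim}
For $a=\tfrac14$ this toy value is $\tfrac12$, the same magnitude as the crosscap result quoted below.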
More specifically, to match the anomaly class of the SYK model with $N$ Majorana fermions on the boundary, we can generalize the weighting factor from $(-1)^{N\zeta}=e^{-i\pi N\zeta}$ for orientable geometries to $e^{-i\pi N\eta/2}$ for non-orientable geometries in the bulk (the two agree in the orientable case). One heuristic motivation for this form of the weighting factor is that $N$ bulk Majorana fermions, each carrying the same weighting factor, would together give that factor to the $N$th power. But this is not to be taken literally: SYK is not dual to JT with $N$ bulk fermions; the two theories merely have the same anomalies.
\subsubsection{Genus one-half: constant shift}
It can be shown that a crosscap has $\eta=\pm\frac{1}{2}$ (see Appendix~\ref{appendixeta} for more information). In particular, going around the boundary $S^1$ of a crosscap looks like reflection squared
\begin{equation}
R^2: \Psi\mapsto(\pm\gamma^1)^2\Psi
\end{equation}
where the two choices $\pm\gamma^1$ correspond to the two possible values of $\eta$. We can then define the sum of Pin$^-$ weighting factors over $\eta=\pm1/2$ for a crosscap
\begin{equation}
F_{cc}(N)=\sum_{\text{pin$^-$ structures}}e^{-\frac{i\pi}{2}N\eta}=\sum_{\eta=\pm\frac{1}{2}}e^{-\frac{i\pi}{2}N\eta}=2\cos(2\pi N/8)
\end{equation}
We can draw a crosscap topologically as a square with two opposite edges identified with a reflection as in figure~\ref{ccsquare}(b).
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{figures/ccsquare.png}
\caption{crosscap as a square diagram}
\label{ccsquare}
\end{figure}
A bulk fermion field $\Psi$ on flat space with the same kind of square diagram would satisfy
\begin{equation}
\Psi(-x,y+1)=(-1)^{\color{green}\alpha}\gamma^1\Psi(x,y)\label{ccid}
\end{equation}
with $\alpha=0,1$ specifying the two Pin$^-$ structures $\eta=\pm1/2$. If we embed the crosscap in the hyperbolic disk with $xy$-coordinates, then instead of identifying $y=\pm1/2$ we are identifying $y=\pm b/4$ as in figure~\ref{ccsquare}(a), so equation (\ref{ccid}) should instead look like
\begin{equation}
\Psi(-x,y+\frac{b}{2})=(-1)^{\color{green}\alpha}\gamma^1\Psi(x,y)
\end{equation}
Before calculating the disk+crosscap contribution to fermionic two-point correlation functions, we first need to describe what geodesics connecting two boundary fermions look like on a disk+crosscap. Here we only focus on the geodesic that gives a non-decaying contribution to the two-point correlator.
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{figures/reflectioncopy.png}
\caption{Geodesic (light blue curve) on a crosscap vs.\ on a disk}
\label{reflection}
\end{figure}
Recall that the disk+crosscap is topologically a M\"{o}bius band and can be drawn as a quotient of the hyperbolic disk. A geodesic going through the crosscap consists of two pieces connected together across the crosscap (see figure~\ref{reflection}a). The two pieces of the geodesic can be put together into a horizontal geodesic (see figure~\ref{reflection}b), but with a reflection on the fermion. Thus a two-point correlation function on the crosscap can be related to that on the disk via
\begin{align}
\braket{\psi_{1-}\psi_{2-}}_{cc}&=\braket{\psi_{1-}R\psi_{2'+}R^{-1}}_{\text{Disk},xy}\\
&=\braket{\psi_{1-}\mathrm{Rot}(\pi)R\psi_{2'+}R^{-1}\mathrm{Rot}(\pi)^{-1}}_{\text{Disk},\rho\theta}\\
&=-\braket{\psi_{1-}T'\psi_{2'+}T'^{-1}}_{\text{Disk},\rho\theta}\\
&=-i\braket{\psi_{1-}\psi_{2'+}}_{\text{Disk},\rho\theta}
\end{align}
where $1$, $2$, and $2'$ are shown as in figure~\ref{reflection}.
Here we ignore the decaying contributions to the two-point correlation function. Let $\braket{\psi\psi}_{\chi=0,0}$ be the 2-point function without considering the Spin/Pin$^-$ structure
\begin{equation}
\braket{\psi\psi}_{\chi=0,0}=\int dE\,\rho_0e^{-\beta E}|\psi_{E,E}|^2
\end{equation}
then with Pin$^-$ structures, the 2-point function is given by
\begin{equation}
\braket{\psi\psi}_{cc}=\sum_{\eta=\pm\frac{1}{2}}e^{-\frac{i\pi}{2}N\eta}(\pm1)(-i)\braket{\psi \psi}_{\chi=0,0}=2\sin\left(\frac{2\pi N}{8}\right)\braket{\psi\psi}_{\chi=0,0}
\end{equation}
\subsubsection{Genus one: ramp}
Any non-orientable two-manifold is an orientable manifold with one or two crosscaps glued in, so after considering disk+one crosscap, it is natural to consider disk+two crosscaps. But before doing that, let us review the Pin$^-$ structures of a reflected-double-trumpet.
We now consider a double trumpet glued from two trumpets, but with a reflection applied on one of the interfaces being glued (see figure~\ref{doubletrumpet}). This has the same Pin$^-$ structures as a Klein bottle.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{figures/kbsquare.png}
\caption{(a) Klein bottle (here R means reflection) (b) square diagram of a Klein bottle; note that here the vertical axis is the $x$-axis while the horizontal axis is the $y$-axis}
\label{doubletrumpet}
\end{figure}
Topologically we can draw a Klein bottle as a square with the sides identified according to
\begin{equation}
(x,y)\simeq (x+1,y)\quad\quad (x,y)\simeq(-x,y+1)
\end{equation}
Fermion field $\Psi$ on Klein bottle satisfies
\begin{align}
\Psi(x+1,y)&=(-1)^{\color{red}\alpha}\Psi(x,y)\label{kbf1}\\
\Psi(-x,y+1)&=(-1)^{\color{green}\beta}\gamma^1\Psi(x,y)\label{kbf2}
\end{align}
As shown in figure~\ref{kb2cc}, a Klein bottle is topologically equivalent to two crosscaps (one at $x=0$ and the other at $x=1/2$) on a cylinder.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{figures/kb2cccopy.png}
\caption{Klein bottle is topologically equivalent to a cylinder with two crosscaps}
\label{kb2cc}
\end{figure}
Therefore $F_{\mathrm{KB}}(N)=F_{\mathbb{RP}^2}(N)^2=F_{cc}(N)^2$ when summed over all Pin$^-$ structures. Equations (\ref{kbf1}) and (\ref{kbf2}) tell us that
\begin{align}
\Psi(0,y+1)&=(-1)^{\color{green}\beta}\gamma^1\Psi(0,y)\\
\Psi(\frac{1}{2},y+1)&=(-1)^{\color{red}\alpha+\color{green}\beta}\gamma^1\Psi(\frac{1}{2},y)
\end{align}
This tells us that if $\alpha=0$, i.e. if the boundaries are of R type, the two crosscaps have the same Pin$^-$ structure, while if $\alpha=1$, i.e. if the boundaries are of NS type, the two crosscaps have opposite Pin$^-$ structures. Then
\begin{align}
F_{\mathrm{KB}}^{\mathrm{R}}&=\sum_{\eta=\pm\frac{1}{2}}\left(e^{-\frac{i\pi}{2}N\eta}\right)^2=2\cos(2\pi N/4)\\
F_{\mathrm{KB}}^{\mathrm{NS}}&=\sum_{\eta=\pm\frac{1}{2}}e^{-\frac{i\pi}{2}N\eta}e^{-\frac{i\pi}{2}N(-\eta)}=2
\end{align}
The Pin$^-$ structures of torus and Klein bottle are summarized in table~\ref{tkbtab}.
\begin{table}[h]
\centering
\begin{tabular}{ c | c c }
\\
\color{blue}pin$^-$ structure & \color{red}R & \color{red}NS\\
\hline\\
$F_{\mathrm{KB}}$ & $2\cos(2\pi N/4)$ & $2$ \\
\\
$F_{\mathrm{T}}$ & $1+(-1)^N$ & $2$ \\
\\
$F_{\mathrm{KB}}+F_{\mathrm{T}}$ & $4\delta_{N\mod4,0}$ & $4$ \\
\\
\end{tabular}
\caption{Spin/Pin$^-$ structure of torus and Klein bottle}
\label{tkbtab}
\end{table}
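Since all of these factors depend only on $N\mod 8$, the entries of table~\ref{tkbtab} are easy to cross-check numerically. The following short script (ours, purely a consistency check of the closed-form expressions above) also verifies that summing $F_{\mathrm{KB}}$ over both boundary types reproduces $F_{cc}(N)^2$:
\begin{verbatim}
import numpy as np

for N in range(8):
    f_cc    = 2 * np.cos(2 * np.pi * N / 8)
    f_kb_R  = 2 * np.cos(2 * np.pi * N / 4)
    f_kb_NS = 2.0
    f_t_R   = 1 + (-1) ** N
    f_t_NS  = 2.0
    # last row of the table: 4 if N = 0 mod 4, else 0 (R); always 4 (NS)
    assert np.isclose(f_kb_R + f_t_R, 4.0 if N % 4 == 0 else 0.0)
    assert np.isclose(f_kb_NS + f_t_NS, 4.0)
    # consistency with the crosscap factor: sum over both boundary types = F_cc^2
    assert np.isclose(f_kb_R + f_kb_NS, f_cc ** 2)
    print(N, round(f_cc, 3), round(f_kb_R, 3), f_t_R)
\end{verbatim}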
For the order $\mathcal{O}(e^{-2S_0})$ contribution to the 2-point function, let us first consider the handle-disk given in \cite{Saadsingleauthor}. There are two types of geodesics: ones that go through the handle and ones that do not. Saad showed that the former give a non-decaying contribution to the two-point correlation functions. Without loss of generality, we can represent the handle-disk as part of the hyperbolic disk as shown in figure~\ref{torusMCG}, with the pink line being our geodesic. We already know that each single geodesic contributes (\ref{hdgeodesic}) to the bosonic 2-pt correlator. To get the full contribution we need to integrate over all possible geodesics on all possible configurations that look like a handle-disk.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{figures/torusMCGcopy.png}
\caption{(a) a handle-disk represented on a hyperbolic disk (b) square diagram of a handle-disk with $a_1$ and $a_2$ generators of $\pi_1$.}
\label{torusMCG}
\end{figure}
The mapping class group (MCG) is the group of transformations that preserve a given geometry, modulo those continuously connected to the identity. The mapping class group of a handle-disk is generated by two Dehn twists $t_1:\{a_1\mapsto a_1,a_2\mapsto a_1a_2\}$ and $t_2:\{a_1\mapsto a_1a_2,a_2\mapsto a_2\}$. Fixing the geodesic (thus also fixing the length of $b$), the only elements of the MCG that preserve the shape of the handle-disk as well as the geodesic are generated by the Dehn twist $t_2$. Thus we need to integrate over the twist, but only from $0$ to $b$. To integrate over all geodesics we also need to integrate the length of the geodesic from $0$ to infinity and $b$ from $0$ to infinity. Then the total non-decaying contribution from the handle-disk to the bosonic 2-point correlation function is
\begin{align}
\braket{V(-i\tau)V(0)}_{\chi=-1}&=e^{-S_0}\int_0^\infty bdb\int_{-\infty}^\infty e^\ell d\ell\,\varphi_{\text{Trumpet},\tau}(\ell,b)\varphi_{\text{Trumpet},\beta-\tau}(\ell,b)e^{-\Delta\ell}\\
&=e^{-S_0}\int_0^\infty bdb\int dE\,\int dE'\frac{\cos(b\sqrt{2E})\cos(b\sqrt{2E'})}{\pi^2\sqrt{2E}\sqrt{2E'}}\,e^{-\tau E}e^{-(\beta-\tau)E'}|V_{E,E'}|^2
\end{align}
But we should note that the fermion 2-pt correlator needs to be multiplied by the spin-structure factor $F_T$. Define $\braket{\psi\psi}_{\chi=-1,0}$ to be the handle-disk contribution to the 2-pt fermion correlator ignoring the Spin/Pin$^-$ structure, i.e.
\begin{equation}
\braket{\psi\psi}_{\chi=-1,0}=e^{-S_0}\int_0^\infty bdb\int dE\,\int dE'\frac{\cos(b\sqrt{2E})\cos(b\sqrt{2E'})}{\pi^2\sqrt{2E}\sqrt{2E'}}\,e^{-\tau E}e^{-(\beta-\tau)E'}|\psi_{E,E'}|^2
\end{equation}
Again, in addition to multiplying $\braket{\psi\psi}_{\chi=-1,0}$ by the spin-structure factor $F_T$, fermions connected along an NS cycle differ by a sign. We summarize the result in table~\ref{tabletoruspsipsi}.
\begin{table}[H]
\centering
\begin{tabular}{ c | c c }
\\
& $\braket{\psi\psi}_\mathrm{\color{red}R}$ & $\braket{\psi\psi}_\mathrm{\color{red}NS}$\\
\hline\\
\color{green}R & $(-1)^{N\color{blue}1}$$\braket{\psi\psi}_{\chi=-1,0}$ & $(-1)^{N\color{blue}0}$$\braket{\psi\psi}_{\chi=-1,0}$ \\
\\
\color{green}NS & $(-1)^{N\color{blue}0}$$\color{green}(-1)$$\braket{\psi\psi}_{\chi=-1,0}$ & $(-1)^{N\color{blue}0}\color{green}(-1)$$\braket{\psi\psi}_{\chi=-1,0}$ \\
\\
\end{tabular}\\
\caption{handle-disk two-point function}
\label{tabletoruspsipsi}
\end{table}
\noindent Summing over spin structures, we get
\begin{equation}
\braket{\psi\psi}_{\mathrm{\color{red}R},\text{handle-disk}}=((-1)^N-1)\braket{\psi\psi}_{\chi=-1,0}\quad\quad \braket{\psi\psi}_{\mathrm{\color{red}NS},\text{handle-disk}}=0
\end{equation}
so there is no contribution from the handle-disk for even $N$.
In the non-orientable case, there is not only a handle-disk but also a reflected-handle-disk (rhd) as drawn in figure~\ref{kbMCG}. The mapping class group of such a geometry is generated by a Dehn twist $t_2:\{a_1\mapsto a_1a_2,a_2\mapsto a_2\}$, and two reflections $y:\{a_1\mapsto a_1^{-1},a_2\mapsto a_2\}$ and $\omega_1:\{a_1\mapsto a_1,a_2\mapsto a_2^{-1}\}$. (For more information on MCG see \cite{gomez2017classification}.)
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{figures/kbMCGcopy.png}
\caption{(a) a reflected-handle-disk (Klein-bottle with a hole) represented on a hyperbolic disk (b) square diagram of a reflected-handle-disk with $a_1$ and $a_2$ generators of $\pi_1$.}
\label{kbMCG}
\end{figure}
\noindent There are four possible types of geodesics on a reflected-handle-disk; we analyze them one by one. We will show that only the first type contributes to the ramp.
\textbf{(1)} A geodesic that goes through two crosscaps
\begin{figure}[H]
\centering
\includegraphics[width=0.6\textwidth]{figures/kb1copy.png}
\caption{(a) On a reflected-handle-disk, the type (1) geodesic goes through two crosscaps (b) the reflected-handle-disk drawn as a quotient of a hyperbolic disk}
\end{figure}
Elements of the MCG that preserve this configuration are $t_2$ and $y$. Since $y$ generates a finite group, we only need to deal with $t_2$. The resulting integral is the same as in the handle-disk case, so we get the same two-point function as for the handle-disk, $\braket{\psi\psi}_{\chi=-1,0}$, if we do not consider the Pin$^-$ structure. With the Pin$^-$ structure, from section 3.2.1, we know that a geodesic going through one crosscap changes the two-point correlation function from $\braket{\psi\psi}$ to $2\sin\left(\frac{2\pi N}{8}\right)\braket{\psi \psi}$. Thus a geodesic going through two crosscaps repeats this procedure twice on the fermionic two-point function, i.e.
\begin{equation}
\braket{\psi\psi}_{\text{rhd}(1)}=4\sin^2\left(\frac{2\pi N}{8}\right)\braket{\psi\psi}_{\chi=-1,0}
\end{equation}
\pagebreak
\textbf{(2)} A geodesic that goes through one crosscap
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{figures/kb2copy.png}
\caption{(a) On a reflected-handle-disk, the type (2) geodesic goes through one crosscap (b) the reflected-handle-disk drawn as a quotient of a hyperbolic disk (c) dividing the geometry into pieces whose functional forms we know}
\label{kb2calc}
\end{figure}
This geometry can be thought of as a crosscap correction to the disk+one crosscap contribution to two-point correlators. We can divide this geometry into five pieces as in figure~\ref{kb2calc}(c). Then its contribution to the two-point correlation function without considering the Pin$^-$ structure is
\begin{align}
\braket{\psi\psi}_{\text{rhd}(2),0}&=e^{-S_0}\int d\ell d\ell'e^{-\Delta \ell}\int d\ell_1d\ell_2d\ell_3d\ell_4\,e^{\ell_1/2+\ell_2/2+\ell_3/2+\ell_4/2}I(\ell_1,\ell,\ell_2,\ell',\ell_3,\ell,\ell_4,\ell')\nonumber\\
&\quad\times \varphi_{\text{Disk},\beta_1}(\ell_1)\varphi_{\text{Disk},\beta_2}(\ell_2)\varphi_{\text{Disk},\beta_3}(\ell_3)\varphi_{\text{Disk},\beta_4}(\ell_4)\\
&=e^{-S_0}\int d\ell d\ell'e^{-\Delta \ell}dE\,e^{\ell}e^{\ell'}\rho_0(E)\varphi_E(\ell)\varphi_E(\ell)\varphi_E(\ell')\varphi_E(\ell')e^{-(\beta_1+\beta_2+\beta_3+\beta_4)E}\\
&=e^{-S_0}\int dE\,e^{-(\beta_1+\beta_2+\beta_3+\beta_4)E}|\psi_{E,E}|^2\rho_{1/2}(E)
\end{align}
The only MCG element that preserves this configuration is $\omega_1$, but it only generates a finite group, so no further integration is needed.
Since the geodesic goes through one crosscap, this again has the effect of changing the two-point correlation function from $\braket{\psi\psi}$ to $2\sin\left(\frac{2\pi N}{8}\right)\braket{\psi \psi }$, but at the same time we need to sum over the Pin$^-$ structure of the other crosscap, which gives a multiplicative factor $F_{\text{cc}}(N)=2\cos\left(\frac{2\pi N}{8}\right)$. Thus the type (2) geodesic contributes to the fermionic two-point correlation function as
\begin{equation}
\braket{\psi\psi}_{\text{rhd}(2)}=4\sin\left(\frac{2\pi N}{8}\right)\cos\left(\frac{2\pi N}{8}\right)\braket{\psi \psi }_{\text{rhd}(2),0}
\end{equation}
In particular, this equals zero for even $N$. For odd $N$, we should note that this contribution is of order $e^{-2S_0}$ after normalization and is independent of time, so it is still subleading.
\pagebreak
\textbf{(3)} A geodesic that goes through no crosscap and separates the two crosscaps into two disconnected pieces
\begin{figure}[H]
\includegraphics[width=\textwidth]{figures/kb3copy.png}
\caption{(a) On a reflected-handle-disk, the type (3) geodesic goes through no crosscap and separates the two crosscaps into two disconnected pieces (b) the reflected-handle-disk drawn as a quotient of a hyperbolic disk (c) dividing the geometry into pieces whose functional forms we know}
\label{kb3calc}
\end{figure}
To compute the 2-point function we can divide the configuration as in figure~\ref{kb3calc}(c), i.e. we divide it into a left part and a right part. We first write out the left part as
\begin{align}
\mathrm{LeftPart}&=\int d\ell'd\ell_1\,e^{\ell_1/2}I(\ell,\ell',\ell_1,\ell')\psi^{HH}_\tau(\ell_1)\\
&=\int d\ell'd\ell_1\,e^{\ell_1+\ell'+\ell/2}dEdE'\,\rho_0(E)\rho_0(E')\varphi_E(\ell)\varphi_E(\ell')\varphi_E(\ell_1)\varphi_E(\ell')\varphi_{E'}(\ell_1)e^{-\tau E'}\\
&=e^{\ell/2}\int dE\,e^{-\tau E}\varphi_E(\ell)\rho_{1/2}(E)
\end{align}
The right part is similar, with $\tau$ replaced by $\beta-\tau$. Then, ignoring the Pin$^-$ structure, we get
\begin{align}
\braket{\psi\psi}_{\text{rhd}(3),0}&=e^{-S_0}\int d\ell e^{-\Delta\ell}\mathrm{LeftPart}\cdot\mathrm{RightPart}\\
&=e^{-S_0}\int dE_1dE_2\int d\ell\,e^{\ell}e^{-\Delta\ell}e^{-\tau E_1}\psi_{E_1}(\ell_1)\delta(0)e^{-(\beta-\tau) E_2}\psi_{E_2}(\ell)\delta(0)\\
&=e^{-S_0}\int dE_1dE_2\,|\psi_{E_1,E_2}|^2e^{-\tau E_1}e^{-(\beta-\tau) E_2}\rho_{1/2}(E_1)\rho_{1/2}(E_2)
\end{align}
If $\tau$ is large and imaginary, this is dominated by $E_1,E_2$ close to zero. In that case $|\psi_{E_1,E_2}|^2$ approaches a constant and so we can do the integral approximately
\begin{equation}
\braket{\psi\psi}_{\text{rhd}(3),0}\sim e^{-S_0}\int dE_1dE_2e^{-\tau E_1}e^{-(\beta-\tau)E_2}\rho_{1/2}(0)^2|\psi_{0,0}|^2\sim\frac{e^{-S_0}}{t^2}\rho_{1/2}(0)^2|\psi_{0,0}|^2
\end{equation}
Thus this answer decays with time. The Pin$^-$ structures just give a multiplicative factor of $F_{cc}(N)^2$, so the whole contribution still decays with time.
\pagebreak
\textbf{(4)} A geodesic that goes through no crosscap and divides the geometry into two disconnected pieces with the two crosscaps both in one piece
\begin{figure}[H]
\centering
\includegraphics[width=0.6\textwidth]{figures/kb4copy.png}
\caption{(a) On a reflected-handle-disk, the type (4) geodesic goes through no crosscap and divides the geometry into two disconnected pieces with the two crosscaps both in one piece (b) the reflected-handle-disk drawn as a quotient of a hyperbolic disk (c) dividing the geometry into pieces whose functional forms we know}
\end{figure}
For this configuration, the geodesic doesn't explore the non-trivial geometry, so this contribution to the two-point correlation function should also decay with time.
\noindent Summarizing these four cases, we know that for even $N$ the ramp contribution to the fermionic two-point correlation function is given by the type (1) geodesics
\begin{equation}
\braket{\psi\psi}_{\text{rhd}(1)}=4\sin^2\left(\frac{2\pi N}{8}\right)\braket{\psi\psi}_{\chi=-1,0}\label{kbpin}
\end{equation}
and the plateau contribution is given by the single crosscap
\begin{equation}
\braket{\psi\psi}_{cc}=2\sin\left(\frac{2\pi N}{8}\right)\braket{\psi \psi }_{\chi=0,0}\label{ccpin}
\end{equation}
combined with the already existing plateau in the orientable case.
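Both prefactors depend only on $N\mod 8$, and it is useful to list them explicitly, since this is the pattern that will be matched against random matrix theory in the next subsection. A short script (ours) evaluating $4\sin^2\left(\frac{2\pi N}{8}\right)$ from (\ref{kbpin}) and $2\sin\left(\frac{2\pi N}{8}\right)$ from (\ref{ccpin}):
\begin{verbatim}
import numpy as np

print(" N mod 8 | rhd prefactor 4 sin^2(pi N/4) | crosscap prefactor 2 sin(pi N/4)")
for N in range(8):
    rhd = 4 * np.sin(np.pi * N / 4) ** 2
    cc  = 2 * np.sin(np.pi * N / 4)
    print(f"   {N}     |          {rhd:5.2f}              |          {cc:+5.2f}")
\end{verbatim}
In particular, both prefactors vanish for $N=0,4\mod 8$, and the crosscap prefactor changes sign between $N=1,2,3$ and $N=5,6,7$.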
\subsection{RMT}
In the previous section, we computed the contribution to the two-point function from genus-$1/2$ and genus-$1$ surfaces, with the sum over Pin$^-$ structures weighted by a topological field theory. We would like to compare this to the predictions of a corresponding random matrix ensemble.
The SYK model with $N$ Majorana fermions is a convenient stepping stone that allows us to relate a particular topological field theory to a particular random matrix symmetry class. This is because, on one hand, the boundary SYK model has the same anomalies as the bulk weighting factor $e^{-i N\pi\eta/2}$, and on the other hand, the algebra of the time-reversal and $(-1)^F$ operators in SYK determines an RMT symmetry class \cite{StanfordWitten19}.
The action of the SYK model is given by
\begin{equation}
I=\int dt\,\left(\frac{i}{2}\sum_k\psi_k\frac{d\psi_k}{dt}-H\right)
\end{equation}
where $H$ is the SYK Hamiltonian (\ref{syk}). Classically, the fermions $\psi_k$ are treated as Grassmann numbers and the action naturally has two symmetries: the symmetry $(-1)^F$ acting by $\psi_k\mapsto-\psi_k$, and the Lorentzian time-reversal symmetry $T$ acting by $\psi_k\mapsto\psi_k$. Classically, these two symmetries satisfy the relations
\begin{equation}
T^2=1\quad\quad T(-1)^F=(-1)^FT
\end{equation}
However, these relations are not always satisfied quantum mechanically if we quantize the Hamiltonian and treat the fermions as forming a Clifford algebra. Furthermore, the symmetry $(-1)^F$ cannot be defined for odd $N$. It turns out the anomalies of $T$ and $(-1)^F$ depend on $N\mod8$ and can be summarized in table~\ref{Nmod8}. The last column shows the corresponding random matrix classes as a function of $N\mod8$.
\begin{table}[h]
\centering
\begin{tabular}{ c c c c c c c }
\\
N & $(-1)^F$ & $T^2$ & $T(-1)^F$ & $T\psi_k T^{-1}$ & RMT\\
\hline\\
$0\mod8$& $\checkmark$ & 1 & $(-1)^FT$ & $\psi_k$ & $\begin{pmatrix}\mathrm{GOE}_1&\\&\mathrm{GOE}_2\end{pmatrix}$ \\
\\
$1\mod8$& $\times$ & 1 & N/A & $\psi_k$ & GOE \\
\\
$2\mod8$& $\checkmark$ & 1 & $-(-1)^FT$ & $\psi_k$ & $\begin{pmatrix}\mathrm{GUE}&\\&\mathrm{GUE}\end{pmatrix}$ \\
\\
$3\mod8$& $\times$ & -1 & N/A & $-\psi_k$ & GSE \\
\\
$4\mod8$& $\checkmark$ & -1 & $(-1)^FT$ & $\psi_k$ & $\begin{pmatrix}\mathrm{GSE}_1&\\&\mathrm{GSE}_2\end{pmatrix}$ \\
\\
$5\mod8$& $\times$ & -1 & N/A & $\psi_k$ & GSE \\
\\
$6\mod8$& $\checkmark$ & -1 & $-(-1)^FT$ & $\psi_k$ & $\begin{pmatrix}\mathrm{GUE}&\\&\mathrm{GUE}\end{pmatrix}$ \\
\\
$7\mod8$& $\times$ & 1 & N/A & $-\psi_k$ & GOE \\
\\
\end{tabular}
\caption{Classification of $T$ and $(-1)^F$ anomalies and the corresponding random matrix ensemble (see \cite{StanfordWitten19} for more information).}
\label{Nmod8}
\end{table}
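The statement that the quantized $\psi_k$ form a Clifford algebra and that $(-1)^F$ anticommutes with each of them is easy to make explicit in a small representation. The sketch below uses our own Jordan-Wigner-type construction with the normalization $\{\psi_a,\psi_b\}=\delta_{ab}$ (not necessarily the normalization used elsewhere in this paper) for even $N$; the operator \texttt{parity} is proportional to $(-1)^F$:
\begin{verbatim}
import numpy as np
from functools import reduce

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
kron = lambda ops: reduce(np.kron, ops)

def majoranas(N):              # N even, n = N/2 qubits, {psi_a, psi_b} = delta_ab
    n = N // 2
    out = []
    for j in range(n):
        left, right = [Z] * j, [I2] * (n - j - 1)
        out.append(kron(left + [X] + right) / np.sqrt(2))
        out.append(kron(left + [Y] + right) / np.sqrt(2))
    return out

N = 6
psi = majoranas(N)
dim = 2 ** (N // 2)
for a in range(N):
    for b in range(N):
        anti = psi[a] @ psi[b] + psi[b] @ psi[a]
        assert np.allclose(anti, (a == b) * np.eye(dim))
parity = kron([Z] * (N // 2))  # proportional to (-1)^F
assert all(np.allclose(parity @ p, -p @ parity) for p in psi)
print("Clifford algebra and (-1)^F anticommutation verified for N =", N)
\end{verbatim}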
We are able to match all $8$ cases between the JT results and the RMT calculations; we now show the detailed calculations one by one.
\subsubsection*{N even}
For even $N$, we can choose a basis of the $2L$-dimensional Hilbert space (so that operators are $2L\times 2L$ matrices) such that
\begin{equation}
(-1)^F=\begin{pmatrix}1&0\\0&-1\end{pmatrix}
\end{equation}
and then the constraint that $H$ should commute with $(-1)^F$ tells us that
\begin{equation}
H=\begin{pmatrix}H_1&0\\0&H_2\end{pmatrix}\in\begin{pmatrix}\mathrm{GUE}_1&\\&\mathrm{GUE}_2\end{pmatrix}
\end{equation}
where the two subscripts denote independent random matrix ensembles. Further, a Hermitian fermion operator which anticommutes with $(-1)^F$ should look like
\begin{equation}
\psi=\begin{pmatrix}0&\lambda\\\lambda^\dagger&0\end{pmatrix}
\end{equation}
Then, depending on $T^2$ and on the commutation relation between $T$ and $(-1)^F$, we can determine the form of $T$ for each case of $N\mod 8$. With this $T$, the condition that $H$ commutes with $T$ further constrains the form of the Hamiltonian.
If $\mathbf{N=0\,mod\,8}$, $T^2=1$ and $T(-1)^F=(-1)^FT$, so we can just take
\begin{equation}
T=K\begin{pmatrix}1&0\\0&1\end{pmatrix}
\end{equation}
Then $THT^{-1}=H$ gives
\begin{equation}
H=\begin{pmatrix}H_1&0\\0&H_2\end{pmatrix}\in\begin{pmatrix}\mathrm{GOE}_1&\\&\mathrm{GOE}_2\end{pmatrix}
\end{equation}
The time evolution operator is then given by
\begin{equation}
U(t)=e^{-iHt}=\begin{pmatrix}e^{-iH_1t}&0\\0&e^{-iH_2t}\end{pmatrix}
\end{equation}
and the two-point function is given by
\begin{equation}
\braket{\psi(t)\psi(0)}=\braket{U(t)^\dagger\psi(0)U(t)\psi(0)}=\braket{\begin{pmatrix}\lambda e^{iH_2t}\lambda^\dagger e^{-iH_1t}&0\\0&\lambda^\dagger e^{iH_1t}\lambda e^{-iH_2t}\end{pmatrix}}
\end{equation}
$e^{\pm iH_1t}$ and $e^{\pm iH_2t}$ are independent, so the ensemble average of $\braket{\psi(t)\psi(0)}$ is zero. This is consistent with the fact that on the JT side for $N=0\mod8$ both (\ref{kbpin}) and (\ref{ccpin}) give zero. All together this confirms the SYK numerical result that there is no ramp and no plateau (see table~\ref{SYKresult}).
If $\mathbf{N=2\,mod\,8}$, $T^2=1$ and $T(-1)^F=-(-1)^FT$ so we take
\begin{equation}
T=K\begin{pmatrix}0&1\\1&0\end{pmatrix}
\end{equation}
Then $THT^{-1}=H$ gives $H_1=H_2^*$, so
\begin{equation}
H=\begin{pmatrix}H_1&0\\0&H_1^*\end{pmatrix}\in\begin{pmatrix}\mathrm{GUE}&\\&\mathrm{GUE}\end{pmatrix}
\end{equation}
where we have removed the subscripts to indicate that the two blocks are built from the same GUE matrix. Let $h$ be real and diagonal; then we can model the time evolution using Haar-random unitary matrices $u$ as
\begin{equation}
U(t)=e^{-iHt}=\begin{pmatrix}u^\dagger&0\\0&u^T\end{pmatrix}\begin{pmatrix}e^{-iht}&0\\0&e^{-iht}\end{pmatrix}\begin{pmatrix}u&0\\0&u^*\end{pmatrix}
\end{equation}
Having the form of $T$, we can also act with it on $\psi$:
\begin{equation}
T\psi T^{-1}=\begin{pmatrix}0&\lambda^T\\\lambda^*&0\end{pmatrix}
\end{equation}
Therefore, we can write a two-point function of fermions as
\begin{align}
&\int du\,\braket{\psi(t)\psi(0)}\nonumber\\
=&\int du\,\braket{U(t)^\dagger\psi(0)U(t)\psi(0)}\\
=&\int du\,\braket{\begin{pmatrix}u^\dagger &0\\0&u^T\end{pmatrix}\begin{pmatrix}e^{iht}&0\\0&e^{iht}\end{pmatrix}\begin{pmatrix}u&0\\0&u^*\end{pmatrix}\begin{pmatrix}0&\lambda\\\lambda^\dagger&0\end{pmatrix}\begin{pmatrix}u^\dagger&0\\0&u^T\end{pmatrix}\begin{pmatrix}e^{-iht}&0\\0&e^{-iht}\end{pmatrix}\begin{pmatrix}u&0\\0&u^*\end{pmatrix}\begin{pmatrix}0&\lambda\\\lambda^\dagger&0\end{pmatrix}}\\
=&\int du\,\braket{u^\dagger e^{iht}u\lambda u^Te^{-iht}u^*\lambda^\dagger}+h.c.
\end{align}
Using formula (\ref{weingartenu}) we get
\begin{align}
\int du\,\braket{\psi(t)\psi(0)}&=\frac{2L^2}{L^2-1}\braket{\lambda\lambda^\dagger}\braket{e^{iht}}\braket{e^{-iht}}+\frac{2L}{L^2-1}\braket{\lambda\lambda^*}\nonumber\\
&\quad-\frac{1}{L^2-1}\braket{\lambda\lambda^\dagger}(\braket{e^{2iht}}+\braket{e^{-2iht}})-\frac{2L}{L^2-1}\braket{\lambda\lambda^*}\braket{e^{iht}}\braket{e^{-iht}}\\
&=\frac{L^2}{L^2-1}\braket{\psi\psi}\braket{e^{iht}}\braket{e^{-iht}}+\frac{L}{L^2-1}\braket{\psi T\psi T^{-1}}\nonumber\\
&\quad-\frac{1}{2(L^2-1)}\braket{\psi\psi}(\braket{e^{2iht}}+\braket{e^{-2iht}})-\frac{L}{L^2-1}\braket{\psi T\psi T^{-1}}\braket{e^{iht}}\braket{e^{-iht}}
\end{align}
\begin{comment}
If $\mathbf{N=2\,mod\,8}$, $T^2=1$ and $T(-1)^F=-(-1)^FT$ so the Euclidean time-reversal operator is given by
\begin{equation}
T=\text{transpose}\circ\begin{pmatrix}0&1\\1&0\end{pmatrix}
\end{equation}
If we start with a general Hamiltonian $H=\begin{pmatrix}h_1&0\\0&h_2\end{pmatrix}$ the condition that $T$ commutes with $H$ gives
\begin{equation}
THT^{-1}=\begin{pmatrix}0&1\\1&0\end{pmatrix}\begin{pmatrix}h_1^T&0\\0&h_2^T\end{pmatrix}\begin{pmatrix}0&1\\1&0\end{pmatrix}=\begin{pmatrix}h_2^T&0\\0&h_1^T\end{pmatrix}=\begin{pmatrix}h_1&0\\0&h_2\end{pmatrix}=H
\end{equation}
i.e. $h_1=h_2^T$. Let $h$ be diagonal then we can write the Hamiltonian as
\begin{equation}
H=\begin{pmatrix}u^\dagger h u&0\\0& u^Thu^*\end{pmatrix}=\begin{pmatrix}u^\dagger&0\\0&u^T\end{pmatrix}\begin{pmatrix}h&0\\0&h\end{pmatrix}\begin{pmatrix}u&0\\0&u^*\end{pmatrix}
\end{equation}
the time evolution operator is given by
\begin{equation}
U(t)=e^{-iHt}=\begin{pmatrix}u^\dagger&0\\0&u^T\end{pmatrix}\begin{pmatrix}e^{-iht}&0\\0&e^{-iht}\end{pmatrix}\begin{pmatrix}u&0\\0&u^*\end{pmatrix}
\end{equation}
Since we need to have fermions anticommute with $(-1)^F$, we cannot have diagonal elements for the fermionic field. In general, we can consider a fermionic field given by
\begin{equation}
\psi=\begin{pmatrix}0&\lambda_1\\\lambda_2&0\end{pmatrix}\quad\text{with}\quad T\psi T^{-1}=\begin{pmatrix}0&\lambda_1^T\\\lambda_2^T&0\end{pmatrix}
\end{equation}
Therefore, we can write a two-point function of fermions as
\begin{align}
&\braket{\psi(t)\psi(0)}\nonumber\\
=&\int du\,\braket{U(t)^\dagger\lambda(0)U(t)\lambda(0)}\\
=&\int du\,\braket{\begin{pmatrix}u^\dagger &0\\0&u^T\end{pmatrix}\begin{pmatrix}e^{iht}&0\\0&e^{iht}\end{pmatrix}\begin{pmatrix}u&0\\0&u^*\end{pmatrix}\begin{pmatrix}0&\lambda_1\\\lambda_2&0\end{pmatrix}\begin{pmatrix}u^\dagger&0\\0&u^T\end{pmatrix}\begin{pmatrix}e^{-iht}&0\\0&e^{-iht}\end{pmatrix}\begin{pmatrix}u&0\\0&u^*\end{pmatrix}\begin{pmatrix}0&\lambda_1\\\lambda_2&0\end{pmatrix}}\\
=&\int du\,\braket{u^\dagger e^{iht}u\lambda_1 u^Te^{-iht}u^*\lambda_2}+\int du\,\braket{u^\dagger e^{-iht}u\lambda_1 u^Te^{iht}u^*\lambda_2}
\end{align}
Using formula (\ref{weingartenu}) we get
\begin{align}
\braket{\psi(t)\psi(0)}&=\frac{2L^2}{L^2-1}\braket{\lambda_1\lambda_2}\braket{e^{iht}}\braket{e^{-iht}}+\frac{2L}{L^2-1}\braket{\lambda_1^T\lambda_2}-\frac{2}{L^2-1}\braket{\lambda_1\lambda_2}-\frac{2L}{L^2-1}\braket{\lambda_1^T\lambda_2}\braket{e^{iht}}\braket{e^{-iht}}\\
&=\frac{L^2}{L^2-1}\braket{\psi\psi}\braket{e^{iht}}\braket{e^{-iht}}+\frac{L}{L^2-1}\braket{\psi T\psi T^{-1}}-\frac{1}{L^2-1}\braket{\psi\psi}-\frac{L}{L^2-1}\braket{\psi T\psi T^{-1}}\braket{e^{iht}}\braket{e^{-iht}}
\end{align}
\end{comment}
Using the fact that $T\psi T^{-1}=\psi$ and taking the limit $L\rightarrow\infty$, we get
\begin{equation}
\int du\,\braket{\psi(t)\psi(0)}\approx \underbrace{\braket{\psi\psi}\braket{e^{iht}}\braket{e^{-iht}}}_{\text{ramp+plateau}}+\underbrace{\frac{1}{L}\braket{\psi \psi }}_{\text{crosscap offset}}\label{2mod8}
\end{equation}
Note that here $H$ is of dimension $2L\times 2L$, so we can identify $2L$ with $\rho_0(E)e^{S_0}$. Also we can identify $\rho_0(E)|\psi_{E,E}|^2=\braket{E|\psi\psi|E}$. Thus, using (\ref{ramprelation}), the first term of (\ref{2mod8}) reduces to $\min\{t/L^2,1/L\}\braket{\psi\psi}$. The early-time (ramp) part identifies with the reflected-handle-disk contribution; note that when $N=2\mod8$ the Pin$^-$ structure in (\ref{kbpin}) gives a prefactor of $4$:
\begin{equation}
\frac{\braket{\psi\psi}_{\text{rhd}(1)}}{\braket{1}_{\mathrm{disk}}}=\frac{4\braket{\psi\psi}_{\chi=-1,0}}{\int dE\,e^{S_0}\rho_0(E)e^{-\beta E}}\sim \frac{4t}{(\rho_0(E)e^{S_0})^2}\braket{E|\psi\psi|E}\sim \frac{t}{L^2}\braket{E|\psi\psi|E}
\end{equation}
where we have used the results for the handle-disk \cite{Saadsingleauthor}. The second term identifies with the contribution from the crosscap; note that when $N=2\mod8$ the Pin$^-$ structure in (\ref{ccpin}) gives a prefactor of $2$:
\begin{equation}
\frac{\braket{\psi\psi}_{cc}}{\braket{1}_{\mathrm{disk}}}=2\frac{\int dE\,e^{-\beta E}\braket{E|\psi \psi |E}}{\int dE\,e^{S_0}\rho_0(E)e^{-\beta E}}\sim \frac{2}{\rho_0(E)e^{S_0}}\braket{E|\psi \psi |E}\sim \frac{1}{L}\braket{E|\psi \psi |E}
\end{equation}
The crosscap contribution doubles the plateau. Thus our computation confirms the SYK numerical result that there is a ramp and a plateau (see table~\ref{SYKresult}).
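The $N=2\mod 8$ block model is simple enough to test directly. The following Monte Carlo sketch (our own stand-in spectrum and normalizations, with $\braket{\cdot}$ taken as a normalized block trace; it is meant to illustrate the structure of (\ref{2mod8}), not its precise coefficients) averages the upper block of $\psi(t)\psi(0)$ over Haar-random $u$, using a complex symmetric $\lambda$ so that $T\psi T^{-1}=\psi$, and compares with $|\braket{e^{iht}}|^2\braket{\psi\psi}+\braket{\psi\psi}/L$. At late times both settle near $2\braket{\psi\psi}/L$, i.e.\ the crosscap offset doubles the plateau:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
L, n_samples = 48, 500
a = rng.normal(size=(L, L)) + 1j * rng.normal(size=(L, L))
lam = (a + a.T) / np.sqrt(8 * L)          # complex symmetric block: T psi T^{-1} = psi
h = rng.normal(size=L)                    # stand-in for the real diagonal spectrum

def haar(n):                              # Haar-random unitary (QR with phase fix)
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

psi_sq = np.trace(lam @ lam.conj().T).real / L     # <psi psi> in our normalization
for t in (0.0, 1.0, 5.0, 50.0, 500.0):
    d = np.diag(np.exp(1j * h * t))
    mc = 0.0
    for _ in range(n_samples):
        u = haar(L)
        e1 = u.conj().T @ d @ u                    # e^{i H_1 t}
        mc += np.trace(e1 @ lam @ e1.conj() @ lam.conj().T).real / L
    mc /= n_samples
    f = np.mean(np.exp(1j * h * t))                # <e^{iht}>
    pred = abs(f) ** 2 * psi_sq + psi_sq / L
    print(f"t={t:6.1f}  MC={mc:.4f}  pred={pred:.4f}  2<psi psi>/L={2*psi_sq/L:.4f}")
\end{verbatim}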
If $\mathbf{N=4\,mod\,8}$, $T^2=-1$ and $T(-1)^F=(-1)^FT$, so we can just take
\begin{equation}
T=K\omega\begin{pmatrix}1&0\\0&1\end{pmatrix}
\end{equation}
Then $THT^{-1}=H$ gives
\begin{equation}
H=\begin{pmatrix}H_1&0\\0&H_2\end{pmatrix}\in\begin{pmatrix}\mathrm{GSE}_1&\\&\mathrm{GSE}_2\end{pmatrix}
\end{equation}
As for $N=0\mod 8$, $T$ does not mix the two blocks of $H$, so the ensemble average of $\braket{\psi(t)\psi(0)}$ is again zero. This is consistent with the fact that on the JT side for $N=4\mod8$ both (\ref{kbpin}) and (\ref{ccpin}) give zero. All together this confirms the SYK numerical result that there is no ramp and no plateau (see table~\ref{SYKresult}).
If $\mathbf{N=6\,mod\,8}$, $T^2=-1$ and $T(-1)^F=-(-1)^FT$ so we take
\begin{equation}
T=K\begin{pmatrix}0&1\\-1&0\end{pmatrix}
\end{equation}
Then $THT^{-1}=H$ gives the same Hamiltonian and time evolution as $N=2\mod 8$. But $T$ acts on $\psi$ differently
\begin{equation}
T\psi T^{-1}=-\begin{pmatrix}0&\lambda^T\\\lambda^*&0\end{pmatrix}
\end{equation}
Thus the two-point function is given by
\begin{align}
\int du\,\braket{\psi(t)\psi(0)}&=\frac{2L^2}{L^2-1}\braket{\lambda\lambda^\dagger}\braket{e^{iht}}\braket{e^{-iht}}+\frac{2L}{L^2-1}\braket{\lambda\lambda^*}\nonumber\\
&\quad-\frac{1}{L^2-1}\braket{\lambda\lambda^\dagger}(\braket{e^{2iht}}+\braket{e^{-2iht}})-\frac{2L}{L^2-1}\braket{\lambda\lambda^*}\braket{e^{iht}}\braket{e^{-iht}}\\
&=\frac{L^2}{L^2-1}\braket{\psi\psi}\braket{e^{iht}}\braket{e^{-iht}}-\frac{L}{L^2-1}\braket{\psi T\psi T^{-1}}\nonumber\\
&\quad-\frac{1}{2(L^2-1)}\braket{\psi\psi}(\braket{e^{2iht}}+\braket{e^{-2iht}})+\frac{L}{L^2-1}\braket{\psi T\psi T^{-1}}\braket{e^{iht}}\braket{e^{-iht}}
\end{align}
\begin{comment}
If $\mathbf{N=6\,mod\,8}$, $T^2=-1$ and $T(-1)^F=-(-1)^FT$ so the Euclidean time-reversal operator is
\begin{equation}
T=\text{transpose}\circ\begin{pmatrix}0&1\\-1&0\end{pmatrix}
\end{equation}
If we start with a general Hamiltonian $H=\begin{pmatrix}h_1&0\\0&h_2\end{pmatrix}$ the condition that $T$ commutes with $H$ again gives $h_1=h_2^T$, so we get the same Hamiltonian as the $N=2\mod8$ case. But now a fermionic field satisfies a different condition
\begin{equation}
\psi=\begin{pmatrix}0&\lambda_1\\\lambda_2&0\end{pmatrix}\quad\text{with}\quad T\psi T^{-1}=-\begin{pmatrix}0&\lambda_1^T\\\lambda_2^T&0\end{pmatrix}
\end{equation}
Thus the two-point function is given by
\begin{align}
\braket{\psi(t)\psi(0)}&=\frac{2L^2}{L^2-1}\braket{\lambda_1\lambda_2}\braket{e^{iht}}\braket{e^{-iht}}+\frac{2L}{L^2-1}\braket{\lambda_1^T\lambda_2}-\frac{2}{L^2-1}\braket{\lambda_1\lambda_2}-\frac{2L}{L^2-1}\braket{\lambda_1^T\lambda_2}\braket{e^{iht}}\braket{e^{-iht}}\\
&=\frac{L^2}{L^2-1}\braket{\psi\psi}\braket{e^{iht}}\braket{e^{-iht}}-\frac{L}{L^2-1}\braket{\psi T\psi T^{-1}}-\frac{1}{L^2-1}\braket{\psi\psi}+\frac{L}{L^2-1}\braket{\psi T\psi T^{-1}}\braket{e^{iht}}\braket{e^{-iht}}
\label{6mod8}
\end{align}
\end{comment}
Using the fact that $T\psi T^{-1}=\psi$ and taking the limit $L\rightarrow\infty$, we get
\begin{equation}
\int du\,\braket{\psi(t)\psi(0)}\approx \underbrace{\braket{\psi\psi}\braket{e^{iht}}\braket{e^{-iht}}}_{\text{ramp+plateau}}\underbrace{-\frac{1}{L}\braket{\psi \psi }}_{\text{crosscap offset}}\label{6mod8}
\end{equation}
The first term is the same as in (\ref{2mod8}); it again identifies with the reflected-handle-disk at early times and gives a ramp. But here the second term is negative. It identifies with the contribution from the crosscap because the Pin$^-$ structure now gives a prefactor of $-2$, as can be seen from (\ref{ccpin}):
\begin{equation}
\frac{\braket{\psi\psi}_{cc}}{\braket{1}_{\mathrm{disk}}}=-2\frac{\int dE\,e^{-\beta E}\braket{E|\psi \psi |E}}{\int dE\,e^{S_0}\rho_0(E)e^{-\beta E}}\sim -\frac{2}{\rho_0(E)e^{S_0}}\braket{E|\psi \psi |E}\sim -\frac{1}{L}\braket{E|\psi \psi |E}
\end{equation}
The crosscap contribution cancels the plateau. This confirms the SYK numerical result that there is a ramp but no plateau (see table~\ref{SYKresult}).
\subsubsection*{N odd}
For odd $N$, $(-1)^F$ cannot be defined, so naively when we characterize the anomalies we only need to consider $T^2$. However, when $(-1)^F$ is defined we can always choose between $T$ and $T(-1)^F$ to ensure that $T\psi_k T^{-1}=\psi_k$ in SYK; this is no longer possible when $(-1)^F$ cannot be defined, so there are also anomalies in the commutation relation of $\psi_k$ and $T$. Since we do not constrain our Hamiltonian using $(-1)^F$, $H$ has only one block. Before constraining with $THT^{-1}=H$, our Hamiltonian looks like $H\in\mathrm{GUE}$ of dimension $L\times L$. The calculations become similar to the bosonic cases in section \ref{bosonicRMT}.
If $\mathbf{N=1\,mod\,8}$, $T^2=1$ so we take $T=K$. $THT^{-1}=H$ gives $H\in\mathrm{GOE}$, so we get the same result as (\ref{VVo}) with $\braket{VV}\mapsto\braket{\psi\psi}$ and $\braket{VV^T}\mapsto\braket{\psi \psi^T}$. Assuming $\psi$ is Hermitian, $T\psi T^{-1}=\psi$ gives $\psi=\psi^T$, so the two-point correlator is
\begin{equation}
\int do\,\braket{\psi(t)\psi(0)}\approx\underbrace{\braket{e^{iht}}\braket{e^{-iht}}\braket{\psi\psi}}_{\text{ramp+plateau}}+\underbrace{\frac{1}{L}\braket{\psi\psi}}_{\text{crosscap offset}}\label{1mod8}
\end{equation}
Recalling (\ref{ramprelation}), the first term gives approximately $\min\{t/L^2,1/L\}\braket{\psi\psi}$. For fermions, \cite{StanfordWitten19} showed that the partition function scales like $\sqrt{2}$ times the dimension of the Hilbert space $L$, so we should identify $\sqrt{2}L$ with $\rho_0(E)e^{S_0}$. The early-time (ramp) part identifies with the reflected-handle-disk contribution; note that when $N=1\mod8$ the Pin$^-$ structure in (\ref{kbpin}) gives a prefactor of $2$, so
\begin{equation}
\frac{\braket{\psi\psi}_{\text{rhd}(1)}}{\braket{1}_{\mathrm{disk}}}=\frac{2\braket{\psi\psi}_{\chi=-1,0}}{\int dE\,e^{S_0}\rho_0(E)e^{-\beta E}}\sim \frac{2t}{(\rho_0(E)e^{S_0})^2}\braket{E|\psi\psi|E}\sim \frac{t}{L^2}\braket{E|\psi\psi|E}
\end{equation}
matching RMT. For the disk+crosscap contribution in JT, when $N=1\mod8$ the Pin$^-$ structure in (\ref{ccpin}) gives a prefactor of $\sqrt{2}$:
\begin{equation}
\frac{\braket{\psi\psi}_{cc}}{\braket{1}_{\mathrm{disk}}}=\sqrt{2}\frac{\int dE\,e^{-\beta E}\braket{E|\psi \psi |E}}{\int dE\,e^{S_0}\rho_0(E)e^{-\beta E}}\sim \frac{\sqrt{2}}{\rho_0(E)e^{S_0}}\braket{E|\psi \psi |E}\sim \frac{1}{L}\braket{E|\psi \psi |E}
\end{equation}
matching RMT result.
If $\mathbf{N=3\,mod\,8}$, $T^2=-1$ so we take $T=K\omega$. $THT^{-1}=H$ gives $H\in\mathrm{GSE}$, so we get the same result as (\ref{VVs}) with $\braket{VV}\mapsto\braket{\psi\psi}$ and $\braket{V \omega V^T \omega^{-1}}\mapsto\braket{\psi \omega \psi^T \omega^{-1}}$. Assuming $\psi$ is Hermitian, $T\psi T^{-1}=-\psi$ gives $\omega \psi^T\omega^{-1}=-\psi$, so the two-point correlator is
\begin{equation}
\int ds\,\braket{\psi(t)\psi(0)}\approx\underbrace{\braket{e^{iht}}\braket{e^{-iht}}\braket{\psi\psi}}_{\text{ramp+plateau}}+\underbrace{\frac{1}{L}\braket{\psi\psi}}_{\text{crosscap offset}}\label{3mod8}
\end{equation}
RMT results and JT results are both the same as $N=1\mod8$, so they match.
If $\mathbf{N=5\,mod\,8}$, $T^2=-1$ so we take $T=K\omega$. $THT^{-1}=H$ gives $H\in\mathrm{GSE}$, so we get the same result as (\ref{VVs}) with $\braket{VV}\mapsto\braket{\psi\psi}$ and $\braket{V \omega V^T \omega^{-1}}\mapsto\braket{\psi \omega \psi^T \omega^{-1}}$. Assuming $\psi$ is Hermitian, $T\psi T^{-1}=\psi$ gives $\omega \psi^T\omega^{-1}=\psi$, so the two-point correlator is
\begin{equation}
\int ds\,\braket{\psi(t)\psi(0)}\approx\underbrace{\braket{e^{iht}}\braket{e^{-iht}}\braket{\psi\psi}}_{\text{ramp+plateau}}\underbrace{-\frac{1}{L}\braket{\psi\psi}}_{\text{crosscap offset}}\label{5mod8}
\end{equation}
The first term is the same as for $N=1\mod8$ and identifies with the reflected-handle-disk at early times. For the disk+crosscap contribution in JT, when $N=5\mod8$ the Pin$^-$ structure in (\ref{ccpin}) gives a prefactor of $-\sqrt{2}$:
\begin{equation}
\frac{\braket{\psi\psi}_{cc}}{\braket{1}_{\mathrm{disk}}}=-\sqrt{2}\frac{\int dE\,e^{-\beta E}\braket{E|\psi \psi |E}}{\int dE\,e^{S_0}\rho_0(E)e^{-\beta E}}\sim -\frac{\sqrt{2}}{\rho_0(E)e^{S_0}}\braket{E|\psi \psi |E}\sim -\frac{1}{L}\braket{E|\psi \psi |E}
\end{equation}
matching RMT result.
If $\mathbf{N=7\,mod\,8}$, $T^2=1$ so we take $T=K$. $THT^{-1}=H$ gives $H\in\mathrm{GOE}$, so we get the same result as (\ref{VVo}) with $\braket{VV}\mapsto\braket{\psi\psi}$ and $\braket{VV^T}\mapsto\braket{\psi \psi^T}$. Assuming $\psi$ is Hermitian, $T\psi T^{-1}=-\psi$ gives $\psi=-\psi^T$, so the two-point correlator is
\begin{equation}
\int do\,\braket{\psi(t)\psi(0)}\approx\underbrace{\braket{e^{iht}}\braket{e^{-iht}}\braket{\psi\psi}}_{\text{ramp+plateau}}\underbrace{-\frac{1}{L}\braket{\psi\psi}}_{\text{crosscap offset}}\label{7mod8}
\end{equation}
RMT results and JT results are both the same as $N=5\mod8$, so they match.
\section{Discussion}
We want to give one possible intuitive way of understanding why the disk+crosscap contributions to two-point correlation functions do not decay over time. Before doing that, we review an intuitive understanding of the handle-disk given by Saad \cite{Saadsingleauthor}. A handle-disk can be viewed as a baby universe being emitted by one Hartle-Hawking state and then being reabsorbed by another Hartle-Hawking state. At late time $t$, the Einstein-Rosen bridge (ERB) of a Hartle-Hawking state becomes very long (proportional to $t$), so its overlap with a second Hartle-Hawking state is small, causing a decay with time. If a baby universe is emitted, it takes away most of the ERB length and leaves behind a short ERB, so the handle-disk gives a non-decaying contribution to two-point correlation functions. The number of ways to match the two baby universes is proportional to $t$.
A disk with a crosscap is topologically equivalent to a M\"{o}bius band. No time evolution in the traditional sense can happen in this case: a M\"{o}bius band has only one boundary, so time cannot consistently flow in one direction along it. Thus the ERB does not grow with time and the crosscap gives a non-decaying contribution to the two-point correlation function.
On the other hand, the crosscap contribution to OTOC decays with time. If we take $\beta_1,\beta_3=\frac{\beta}{4}+it$ and $\beta_2,\beta_4=\frac{\beta}{4}-it$ we get
\begin{align}
\braket{V(0)W(t)V(0)W(t)}_{\chi=-1,0}&=\int e^{\ell}d\ell e^{\ell'}d\ell'\,P_{\text{Disk}}(\beta_1,\beta_3,\ell,\ell')P_{\text{Disk}}(\beta_2,\beta_4,\ell,\ell')e^{-\Delta \ell}e^{-\Delta \ell'}\\
&=\int dE dE'\,\rho_0(E)\rho_0(E')e^{-(\beta_1+\beta_3)E}e^{-(\beta_2+\beta_4)E'}|V_{E,E'}|^2|W_{E,E'}|^2\\
&\sim \frac{1}{t^3}|V_{0,0}|^2|W_{0,0}|^2\quad t\rightarrow\infty
\end{align}
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{figures/otoc2copy.png}
\caption{(a) disk+crosscap contribution to OTOC (b) the same geometry drawn as a quotient of a hyperbolic disk}
\label{otoc}
\end{figure}
This can be understood intuitively as well. In this case the time evolution has been divided into two parts, so the M\"{o}bius band structure no longer hinders the definition of time. Thus the growth of the ERB is also restored.
\begin{comment}
For a non-orientable geometry, a reflected-handle-disk also contributes to out-of-time-correlators (OTOCs). Recall that a reflected-handle-disk is topologically a disk with two crosscaps. Analogous to the two-handle-disk contribution to OTOCs shown in \cite{Saadsingleauthor}, a non-decaying contribution to bosonic four-point correlation function on a disk with two crosscaps is given by two geodesics each going through one crosscap (see figure~\ref{otoc}).
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{figures/otoccopy.png}
\caption{one example of reflected-handle-disk contribution to OTOC (a) a pair of geodesics that we cut along (b) embed the geometry on a hyperbolic disk}
\label{otoc}
\end{figure}
As in figure~\ref{kb2calc}(c), we can divide the geometry into pieces we can calculate and arrive at
\begin{align}
\braket{VWVW}_{\chi=-1,0}&=2e^{-S_0}\int d\ell d\ell'e^{-\Delta \ell}e^{-\Delta \ell'}\int d\ell_1d\ell_2d\ell_3d\ell_4\,e^{\ell_1/2+\ell_2/2+\ell_3/2+\ell_4/2}I(\ell_1,\ell,\ell_2,\ell,\ell_3,\ell',\ell_4,\ell')\nonumber\\
&\quad\times \varphi_{\text{Disk},\beta_1}(\ell_1)\varphi_{\text{Disk},\beta_2}(\ell_2)\varphi_{\text{Disk},\beta_3}(\ell_3)\varphi_{\text{Disk},\beta_4}(\ell_4)\\
&=2e^{-S_0}\int d\ell d\ell'e^{-\Delta \ell}e^{-\Delta \ell'}dE\,e^{\ell}e^{\ell'}\rho_0(E)\varphi_E(\ell)\varphi_E(\ell)\varphi_E(\ell')\varphi_E(\ell')e^{-(\beta_1+\beta_2+\beta_3+\beta_4)E}\\
&=2e^{-S_0}\int dE\,\rho_0(E)e^{-(\beta_1+\beta_2+\beta_3+\beta_4)E}|V_{E,E}|^2|W_{E,E}|^2
\end{align}
Here we integrate over all possible geodesic-lengths and the factor of $2$ in front comes from the fact that we can either cut along operators $\beta_2,\beta_4$ apart or cur along operators $\beta_1,\beta_3$ apart.
For fermionic OTOC, we need to account for the Pin$^-$ structure. For each geodesic going through a crosscap the Pin$^-$ structure changes $\braket{\psi\psi}$ to $2\sin\left(\frac{2\pi N}{8}\right)\braket{\psi\psi}$ so in this case since each of our two geodesics goes through one crosscap we get
\begin{equation}
\braket{\psi\psi\psi\psi}=4\sin^2\left(\frac{2\pi N}{8}\right)\braket{\psi T\psi T^{-1}\psi T\psi T^{-1}}_{\chi=-1,0}
\end{equation}
\end{comment}
\section*{Acknowledgements}
I want to give special thanks to Douglas Stanford for patient guidance, extensive discussions, and inspiring comments throughout this project. I am also grateful to Zhenbin Yang and Shunyu Yao for discussions.
\section{Introduction}
Consider the following {\em mathematical program with vanishing constraints} (MPVC)
\begin{equation} \label{eq : genproblem}
\begin{array}{rll}
\min\limits_{x \in \mathbb{R}^n} & f(x) & \\
\textrm{subject to } & h_i(x) = 0 & i \in E,\\
& g_i(x) \leq 0 & i \in I,\\
& H_i(x) \geq 0, \, G_i(x) H_i(x) \leq 0 & i \in V,
\end{array}
\end{equation}
with continuously differentiable functions $f$, $h_i, i \in E$, $g_i, i \in I$, $G_i, H_i, i \in V$ and finite index sets $E,I$ and $V$.
Theoretically, MPVCs can be viewed as standard nonlinear optimization problems, but due to the vanishing
constraints, many of the standard constraint qualifications of nonlinear programming are violated at any feasible point $\bar x$
with $H_i(\bar x) = G_i(\bar x) = 0$ for some $i \in V$.
On the other hand, by introducing slack variables, MPVCs may be reformulated as so-called
mathematical programs with complementarity constraints (MPCCs), see \cite{Ho09}.
However, this approach is also not satisfactory as it has turned out that MPCCs are in fact even more difficult to handle than MPVCs.
This makes it necessary, both from a theoretical and a numerical point of view, to consider specially tailored algorithms
for solving MPVCs. Recent numerical methods follow different directions.
A smoothing-continuation method and a regularization approach for MPCCs are considered in \cite{FuPa99,Sch01}
and a combination of these techniques, a smoothing-regularization approach for MPVCs, is investigated in \cite{AchHoKa13}.
In \cite{IzSo09,AchKaHo12} the relaxation method has been suggested in order to deal with the inherent difficulties of MPVCs.
In this paper, we carry over a well known SQP method from nonlinear programming to MPVCs.
We proceed in a similar manner as in \cite{BeGfr16a}, where an SQP method for MPCCs was introduced by Benko and Gfrerer.
The main task of our method is to solve in each iteration step a quadratic program with linear vanishing constraints,
a so-called auxiliary problem.
Then we compute the next iterate by reducing a certain merit function along some polygonal line which is given by the solution
procedure for the auxiliary problem. To solve the auxiliary problem we exploit the new concept of {\em $\mathcal Q_M$-stationarity}
introduced in the recent paper by Benko and Gfrerer \cite{BeGfr16b}.
$\mathcal Q_M$-stationarity is in general stronger than M-stationarity
and it turns out to be very suitable for a numerical approach as it allows to handle
the program with vanishing constraints without relying on enumeration techniques.
Surprisingly, we compute at least a $\mathcal Q_M$-stationary solution of the auxiliary problem
just by means of quadratic programming by solving appropriate convex subproblems.
Next we study the convergence of the SQP method. We show that every limit point of the generated sequence is at least M-stationary.
Moreover, we consider the extended version of our SQP method, where at each iterate a correction of the iterate is made
to prevent the method from converging to undesired points.
Consequently we show that under some additional assumptions all limit points are at least $\mathcal Q_M$-stationary.
Numerical tests indicate that our method behaves very reliably.
A short outline of this paper is as follows.
In section 2 we recall the basic stationarity concepts for MPVCs as well as
the recently developed concepts of $\mathcal Q$- and $\mathcal Q_M$-stationarity.
In section 3 we describe an algorithm based on quadratic programming
for solving the auxiliary problem occurring in every iteration of our SQP method.
We prove the finiteness and summarize some other properties of this algorithm.
In section 4 we propose the basic SQP method. We describe how the next iterate is computed by means of the solution of the auxiliary problem
and we consider the convergence of the overall algorithm.
In section 5 we consider the extended version of the overall algorithm and we discuss its convergence.
Section 6 is a summary of numerical results we obtained by implementing our basic algorithm in MATLAB and by testing it on
a subset of test problems considered in the thesis of Hoheisel \cite{Ho09}.
In what follows we use the following notation. Given a set $M$ we denote by
$\mathcal{P}(M):=\{ (M_1,M_2) \,\vert\, M_1 \cup M_2 = M, \, M_1 \cap M_2 = \emptyset \}$
the collection of all partitions of $M$. Further, for a real number $a$ we use the notation $(a)^+:=\max(0,a)$, $(a)^-:=\min(0,a)$.
For a vector $u= (u_1, u_2, \ldots, u_m)^T \in \mathbb{R}^m$ we define $\vert u \vert$, $(u)^+$, $(u)^-$ componentwise, i.e.
$\vert u \vert := (\vert u_1 \vert, \vert u_2 \vert, \ldots, \vert u_m \vert)^T$, etc.
Moreover, for $u \in \mathbb{R}^m$ and $1 \leq p \leq \infty$ we denote the $\ell_p$ norm of $u$ by $\norm{u}_p$
and we use the notation $\norm{u} := \norm{u}_2$ for the standard $\ell_2$ norm.
Finally, given a sequence $y_k \in \mathbb{R}^m$, a point $y \in \mathbb{R}^m$ and an infinite set $K \subset \mathbb{N}$ we write $y_k \setto{K} y$
instead of $\lim_{k \to \infty, k \in K} y_k = y$.
\section{Stationary points for MPVCs}
Given a point $\bar x$ feasible for \eqref{eq : genproblem} we define the following index sets
\begin{eqnarray} \nonumber
I^g(\bar x) & := & \{ i \in I \,\vert\, g_i(\bar x) = 0 \}, \\ \nonumber
I^{0+}(\bar x) & := & \{ i \in V \,\vert\, H_i(\bar x) = 0 < G_i(\bar x) \}, \\ \label{eqn : IndexStes}
I^{0-}(\bar x) & := & \{ i \in V \,\vert\, H_i(\bar x) = 0 > G_i(\bar x) \}, \\ \nonumber
I^{+0}(\bar x) & := & \{ i \in V \,\vert\, H_i(\bar x) > 0 = G_i(\bar x) \}, \\ \nonumber
I^{00}(\bar x) & := & \{ i \in V \,\vert\, H_i(\bar x) = 0 = G_i(\bar x) \}, \\ \nonumber
I^{+-}(\bar x) & := & \{ i \in V \,\vert\, H_i(\bar x) > 0 > G_i(\bar x) \}.
\end{eqnarray}
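To illustrate these index sets (the instance below is our own toy example and does not appear in the cited literature), take $n=2$, $E=I=\emptyset$, $V=\{1\}$ with $H_1(x)=x_1$ and $G_1(x)=x_2$; the following sketch classifies the single vanishing constraint at a few feasible points according to \eqref{eqn : IndexStes}.
\begin{verbatim}
import numpy as np

# toy MPVC data: V = {1}, H_1(x) = x_1, G_1(x) = x_2 (illustrative example)
def H(x): return np.array([x[0]])
def G(x): return np.array([x[1]])

def classify(x, tol=1e-10):
    sets = {name: [] for name in ("I0+", "I0-", "I+0", "I00", "I+-")}
    for i, (Hi, Gi) in enumerate(zip(H(x), G(x)), start=1):
        if   abs(Hi) <= tol and Gi >  tol:      sets["I0+"].append(i)
        elif abs(Hi) <= tol and Gi < -tol:      sets["I0-"].append(i)
        elif Hi >  tol and abs(Gi) <= tol:      sets["I+0"].append(i)
        elif abs(Hi) <= tol and abs(Gi) <= tol: sets["I00"].append(i)
        elif Hi >  tol and Gi < -tol:           sets["I+-"].append(i)
    return sets

for x in ([0.0, 0.0], [0.0, 1.0], [1.0, -1.0], [2.0, 0.0]):
    print(x, classify(np.array(x)))
\end{verbatim}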
In contrast to nonlinear programming, there exist many stationarity concepts for MPVCs.
\begin{definition}
Let $\bar x$ be feasible for \eqref{eq : genproblem}. Then $\bar x$ is called
\begin{enumerate}
\item {\em weakly stationary}, if there are multipliers $\lambda_i^g, i \in I$, $\lambda_i^h, i \in E$, $\lambda_i^G, \lambda_i^H, i \in V$
such that
\begin{equation} \label{eq : StatEq}
\nabla f(\bar x)^T + \sum_{i \in E} \lambda_i^h \nabla h_i(\bar x)^T + \sum_{i \in I} \lambda_i^g \nabla g_i(\bar x)^T
+ \sum_{i \in V} \left( - \lambda_i^H \nabla H_i(\bar x)^T + \lambda_i^G \nabla G_i(\bar x)^T \right) = 0
\end{equation}
and
\begin{equation} \label{eq : WeakStat}
\begin{array}{rcl}
\lambda_i^g g_i(\bar x) = 0, i \in I, & \lambda_i^H H_i(\bar x) = 0, i \in V, & \lambda_i^G G_i(\bar x) = 0, i \in V, \\
\lambda_i^g \geq 0, i \in I, & \lambda_i^H \geq 0, i \in I^{0-}(\bar x), & \lambda_i^G \geq 0, i \in I^{00}(\bar x) \cup I^{+0}(\bar x).
\end{array}
\end{equation}
\item {\em M-stationary}, if it is weakly stationary and
\begin{equation} \label{eq : MStatCond}
\lambda_i^H \lambda_i^G = 0, i \in I^{00}(\bar x).
\end{equation}
\item {\em $\mathcal{Q}$-stationary with respect to $(\beta^1,\beta^2)$}, where $(\beta^1,\beta^2)$ is a given partition of $I^{00}(\bar x)$,
if there exist two multipliers $\overline\lambda=(\overline\lambda^h,\overline\lambda^g,\overline\lambda^H,\overline\lambda^G)$
and $\underline\lambda=(\underline\lambda^h,\underline\lambda^g,\underline\lambda^H,\underline\lambda^G)$,
both fulfilling \eqref{eq : StatEq} and \eqref{eq : WeakStat}, such that
\begin{equation} \label{eq : QStatCond}
\overline\lambda_i^G = 0, \ \underline\lambda_i^H, \underline\lambda_i^G \geq 0, \ i \in \beta^1; \quad
\overline\lambda_i^H, \overline\lambda_i^G \geq 0, \ \underline\lambda_i^G = 0, \ i \in \beta^2.
\end{equation}
\item {\em $\mathcal{Q}$-stationary}, if there is some partition $(\beta^1,\beta^2) \in \mathcal P(I^{00}(\bar x))$ such that $\bar x$ is
$\mathcal{Q}$-stationary with respect to $(\beta^1,\beta^2)$.
\item {\em $\mathcal{Q}_M$-stationary}, if it is $\mathcal{Q}$-stationary and at least one of the multipliers $\overline\lambda$ and
$\underline\lambda$ fulfills M-stationarity condition \eqref{eq : MStatCond}.
\item {\em S-stationary}, if it is weakly stationary and
\[ \lambda_i^H \geq 0, \lambda_i^G = 0, i \in I^{00}(\bar x). \]
\end{enumerate}
\end{definition}
The concepts of $\mathcal{Q}$-stationarity and $\mathcal{Q}_M$-stationarity
were introduced in the recent paper by Benko and Gfrerer \cite{BeGfr16b}, whereas the other stationarity concepts
are very common in the literature, see e.g. \cite{AchKa08,Ho09,IzSo09}.
The following implications hold:
\begin{eqnarray*}
& \textrm{S-stationarity} \Rightarrow \mathcal{Q}\textrm{-stationarity with respect to every }
(\beta^1,\beta^2) \in \mathcal{P}(I^{00}(\bar x)) \Rightarrow & \\
& \mathcal{Q}\textrm{-stationarity w.r.t. } (\emptyset,I^{00}(\bar x)) \Rightarrow
\mathcal{Q}_M\textrm{-stationarity} \Rightarrow \textrm{M-stationarity} \Rightarrow \textrm{weak stationarity}.&
\end{eqnarray*}
The first implication follows from the fact that the multiplier corresponding to S-stationarity fulfills
the requirements for both $\overline\lambda$ and $\underline\lambda$. The third implication holds because
for $(\beta^1,\beta^2) = (\emptyset,I^{00}(\bar x))$ the multiplier $\underline\lambda$ fulfills
\eqref{eq : MStatCond} since $\underline\lambda_i^G = 0$ for $i \in I^{00}(\bar x)$.
Note that the S-stationarity conditions are nothing else than the Karush-Kuhn-Tucker conditions for the problem \eqref{eq : genproblem}.
As we will demonstrate in the next theorems, a local minimizer is S-stationary only under some comparatively stronger constraint qualification,
while it is $\mathcal{Q}_M$-stationary under very weak constraint qualifications. Before stating the theorems we recall some
common definitions.
Denoting
\begin{eqnarray} \label{eqn : FPdef}
F_i(x):=(-H_i(x),G_i(x))^T, i \in V, && P := \{(a,b) \in \mathbb{R}_- \times \mathbb{R} \,\vert\, ab \geq 0 \}, \\ \label{eqn : mathcFdef}
\mathcal{F}(x) := (h(x)^T,g(x)^T,F(x)^T)^T, && D:= \{0\}^{\vert E \vert} \times \mathbb{R}_-^{\vert I \vert} \times P^{\vert V \vert},
\end{eqnarray}
we see that problem \eqref{eq : genproblem} can be rewritten as
\[\min f(x) \quad \textrm{subject to} \quad x \in \Omega_V := \{x \in \mathbb{R}^n \,\vert\, \mathcal{F}(x) \in D \}.\]
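For the same toy instance as above ($V=\{1\}$, $H_1(x)=x_1$, $G_1(x)=x_2$, $E=I=\emptyset$), membership in $\Omega_V$ reduces to the test $F_1(x)\in P$ with $F_1$ and $P$ from \eqref{eqn : FPdef}; a minimal illustrative check is
\begin{verbatim}
import numpy as np

def H(x): return np.array([x[0]])     # toy data as before
def G(x): return np.array([x[1]])

def in_P(a, b, tol=1e-10):            # P = {(a, b) : a <= 0, a*b >= 0}
    return a <= tol and a * b >= -tol

def feasible(x):                      # x in Omega_V iff F_i(x) in P for all i in V
    return all(in_P(-Hi, Gi) for Hi, Gi in zip(H(x), G(x)))

print(feasible(np.array([1.0, -1.0])), feasible(np.array([1.0, 1.0])))   # True False
\end{verbatim}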
Recall that the {\em contingent} (also {\em tangent}) {\em cone} to a closed
set $\Omega \subset \mathbb{R}^m$ at $u \in \Omega$ is defined by
\[T_{\Omega}(u) := \{ d \in \mathbb{R}^m \,\vert\, \exists (d_k) \to d, \exists (\tau_k) \downarrow 0 : u + \tau_k d_k \in \Omega \, \forall k \}. \]
The {\em linearized cone} to $\Omega_V$ at $\bar x \in \Omega_V$ is then defined as
$T_{\Omega_V}^{\mathrm{lin}}(\bar x) := \{d \in \mathbb{R}^n \,\vert\, \nabla \mathcal{F}(\bar x) d \in T_{D}(\mathcal{F}(\bar x))\}$.
Further recall that $\bar x \in \Omega_V$ is called {\em B-stationary} if
\[\nabla f(\bar x) d \geq 0 \, \forall d \in T_{\Omega_V}(\bar x).\]
Every local minimizer is known to be B-stationary.
\begin{definition}
Let $\bar x$ be feasible for \eqref{eq : genproblem}, i.e $\bar x \in \Omega_V$. We say that the {\em generalized Guignard constraint qualification}
(GGCQ) holds at $\bar x$, if the polar cone of $T_{\Omega_V}(\bar x)$ equals the polar cone of $T_{\Omega_V}^{\mathrm{lin}}(\bar x)$.
\end{definition}
\begin{theorem}[{cf.\ \cite[Theorem 8]{BeGfr16b}}]
Assume that GGCQ is fulfilled at the point $\bar x \in \Omega_V$. If $\bar x$ is B-stationary,
then $\bar x$ is $\mathcal{Q}$-stationary for \eqref{eq : genproblem} with respect to every partition
$(\beta^1,\beta^2) \in \mathcal{P}(I^{00}(\bar x))$ and it is also $\mathcal{Q}_M$-stationary.
\end{theorem}
\begin{theorem}[{cf.\ \cite[Theorem 8]{BeGfr16b}}]
If $\bar x$ is $\mathcal{Q}$-stationary with respect to a partition $(\beta^1,\beta^2) \in \mathcal{P}(I^{00}(\bar x))$
such that for every $j \in \beta^1$ there exists some $z^j$ fulfilling
\begin{equation} \label{eq : MPVCMFCQ1}
\begin{array}{l}
\nabla h(\bar x) z^j = 0, \\
\nabla g_i(\bar x) z^j = 0, i \in I^g(\bar x), \\
\nabla G_i(\bar x) z^j = 0, i \in I^{+0}(\bar x), \\
\nabla G_i(\bar x) z^j \left\{
\begin{array}{lr}
\geq 0, & i \in \beta^1,\\
\leq 0, & i \in \beta^2,
\end{array}
\right. \\
\nabla H_i(\bar x) z^j = 0, i \in I^{0-}(\bar x) \cup I^{00}(\bar x) \cup I^{0+}(\bar x) \setminus \{j\}, \\
\nabla H_j(\bar x) z^j = -1
\end{array}
\end{equation}
and there is some $\bar z$ such that
\begin{equation} \label{eq : MPVCMFCQ2}
\begin{array}{l}
\nabla h(\bar x) \bar z = 0, \\
\nabla g_i(\bar x) \bar z = 0, i \in I^g(\bar x), \\
\nabla G_i(\bar x) \bar z = 0, i \in I^{+0}(\bar x), \\
\nabla G_i(\bar x) \bar z \left\{
\begin{array}{lr}
\geq 0, & i \in \beta^1,\\
\leq -1, & i \in \beta^2,
\end{array}
\right. \\
\nabla H_i(\bar x) \bar z = 0, i \in I^{0-}(\bar x) \cup I^{00}(\bar x) \cup I^{0+}(\bar x),
\end{array}
\end{equation}
then $\bar x$ is S-stationary and consequently also B-stationary.
\end{theorem}
Note that these two theorems together also imply that a local minimizer $\bar x \in \Omega_V$ is S-stationary provided GGCQ is fulfilled at $\bar x$
and there exists a partition $(\beta^1,\beta^2) \in \mathcal{P}(I^{00}(\bar x))$,
such that for every $j \in \beta^1$ there exists $z^j$ fulfilling \eqref{eq : MPVCMFCQ1} and $\bar z$ fulfilling \eqref{eq : MPVCMFCQ2}.
Moreover, note that \eqref{eq : MPVCMFCQ1} and \eqref{eq : MPVCMFCQ2} are fulfilled for every partition $(\beta^1,\beta^2) \in \mathcal{P}(I^{00}(\bar x))$,
e.g., if the gradients of the active constraints are linearly independent. On the other hand, in the special case of the partition
$(\emptyset,I^{00}(\bar x)) \in \mathcal{P}(I^{00}(\bar x))$, these conditions reduce to the requirement that the system
\begin{equation*} \label{eq : MPVCMFCQ2special}
\begin{array}{l}
\nabla h(\bar x) \bar z = 0, \\
\nabla g_i(\bar x) \bar z = 0, i \in I^g(\bar x), \\
\nabla G_i(\bar x) \bar z = 0, i \in I^{+0}(\bar x), \\
\nabla G_i(\bar x) \bar z \leq -1, i \in I^{00}(\bar x), \\
\nabla H_i(\bar x) \bar z = 0, i \in I^{0-}(\bar x) \cup I^{00}(\bar x) \cup I^{0+}(\bar x)
\end{array}
\end{equation*}
has a solution. This requirement resembles the well-known Mangasarian-Fromovitz constraint qualification (MFCQ) of nonlinear programming
and appears to be a rather weak assumption that is fulfilled in many situations.
Finally, we recall the definitions of normal cones.
The {\em regular normal cone} to a closed set $\Omega \subset \mathbb{R}^m$ at $u \in \Omega$ can be defined as the polar
cone to the tangent cone by
\[
\widehat N_{\Omega}(u) := (T_{\Omega}(u))^{\circ} = \{ z \in \mathbb{R}^m \,\vert\, (z,d) \leq 0 \, \forall d \in T_{\Omega}(u)\}.
\]
The {\em limiting normal cone} to a closed set $\Omega \subset \mathbb{R}^m$ at $u \in \Omega$ is given by
\begin{equation} \label{eq : LimitNCdef}
N_{\Omega}(u) := \{ z \in \mathbb{R}^m \,\vert\, \exists u_k \to u, z_k \to z \textrm{ with } u_k \in \Omega, z_k \in \widehat N_{\Omega}(u_k) \, \forall k \}.
\end{equation}
In the case when $\Omega$ is a convex set, the regular and the limiting normal cone coincide with the classical normal cone of convex analysis, i.e.
\begin{equation} \label{eq : convexNC}
\widehat N_{\Omega}(u) = N_{\Omega}(u) = \{z \in \mathbb{R}^m \,\vert\, (z,u - v) \leq 0 \, \forall v \in \Omega\}.
\end{equation}
The following description of the limiting normal cone is also well known:
\begin{equation} \label{eq : propLimitNC}
N_{\Omega}(u) = \{ z \in \mathbb{R}^m \,\vert\, \exists u_k \to u, z_k \to z \textrm{ with } u_k \in \Omega, z_k \in N_{\Omega}(u_k) \, \forall k \}.
\end{equation}
We conclude this section with the following characterization of M- and $\mathcal Q$-stationarity via limiting normal cones.
Straightforward calculations yield that
\begin{eqnarray*}
& N_{P}(F_i(\bar x)) = \left\{
\begin{array}{ll}
\mathbb{R}_+ \times \{0\} & \textrm{if } i \in I^{0-}(\bar x), \\
(\mathbb{R} \times \{0\}) \cup (\{0\} \times \mathbb{R}_+) & \textrm{if } i \in I^{00}(\bar x), \\
\mathbb{R} \times \{0\} & \textrm{if } i \in I^{0+}(\bar x), \\
\{0\} \times \mathbb{R}_+ & \textrm{if } i \in I^{+0}(\bar x), \\
\{0\} \times \{0\} & \textrm{if } i \in I^{+-}(\bar x),
\end{array} \right.& \\
& N_{P^1}(F_i(\bar x)) = \mathbb{R} \times \{0\} \quad \textrm{ if } i \in I^{0+}(\bar x) \cup I^{00}(\bar x) \cup I^{0-}(\bar x),& \\
&N_{P^2}(F_i(\bar x)) = \left\{
\begin{array}{ll}
\mathbb{R}_+ \times \mathbb{R}_+ & \textrm{if } i \in I^{00}(\bar x), \\
N_{P}(F_i(\bar x)) & \textrm{if } i \in I^{0-}(\bar x) \cup I^{+0}(\bar x) \cup I^{+-}(\bar x)
\end{array} \right.&
\end{eqnarray*}
and hence the M-stationarity conditions \eqref{eq : WeakStat} and \eqref{eq : MStatCond} can be replaced by
\begin{equation} \label{eq : MstatLNC}
(\lambda^h,\lambda^g,\lambda^H,\lambda^G) \in N_{D}(\mathcal{F}(\bar x))
= \mathbb{R}^{\vert E \vert} \times \{u \in \mathbb{R}_+^{\vert I \vert} \,\vert\, (u,g(\bar x)) = 0\} \times N_{P^{\vert V \vert}}(F(\bar x))
\end{equation}
and the $\mathcal Q$-stationarity conditions \eqref{eq : WeakStat} and \eqref{eq : QStatCond} can be replaced by
\begin{eqnarray} \label{eqn : QstatLNC1}
(\overline\lambda^h,\overline\lambda^g,\overline\lambda^H,\overline\lambda^G) & \in &
\mathbb{R}^{\vert E \vert} \times \{u \in \mathbb{R}_+^{\vert I \vert} \,\vert\, (u,g(\bar x)) = 0\} \times \prod_{i \in V} \nu_i^{\beta^1,\beta^2}(\bar x), \\ \label{eqn : QstatLNC2}
(\underline\lambda^h,\underline\lambda^g,\underline\lambda^H,\underline\lambda^G) & \in &
\mathbb{R}^{\vert E \vert} \times \{u \in \mathbb{R}_+^{\vert I \vert} \,\vert\, (u,g(\bar x)) = 0\} \times \prod_{i \in V} \nu_i^{\beta^2,\beta^1}(\bar x),
\end{eqnarray}
where for $(\beta^1,\beta^2) \in \mathcal{P}(I^{00}(\bar x))$ we define
\[\nu_i^{\beta^1,\beta^2}(\bar x) := \left\{
\begin{array}{ll}
N_{P^1}(F_i(\bar x)) & \textrm{if } i \in I^{0+}(\bar x) \cup \beta^1, \\
N_{P^2}(F_i(\bar x)) & \textrm{if } i \in I^{0-}(\bar x) \cup I^{+0}(\bar x) \cup I^{+-}(\bar x) \cup \beta^2.
\end{array} \right.\]
Note also that for every $i \in V$ we have
\begin{equation} \label{eq : MstatConvexPiece}
\nu_i^{I^{00}(\bar x),\emptyset}(\bar x) \subset N_{P}(F_i(\bar x)).
\end{equation}
\section{Solving the auxiliary problem}
In this section, we describe an algorithm for solving quadratic problems with vanishing constraints of the type
\begin{equation} \label{eq : deltaOldprob}
\begin{array}{lrll}
QPVC(\rho) & \min\limits_{(s,\delta) \in \mathbb{R}^{n+1}} & \frac{1}{2} s^T B s + \nabla f s + \rho ( \frac{1}{2} \delta^2 + \delta) & \\
& \textrm{subject to } & (1 - \delta) h_i + \nabla h_i s = 0 & i \in E, \\
&& (1 - \theta_i^g \delta) g_i + \nabla g_i s \leq 0 & i \in I, \\
&& (1 - \theta_i^H \delta) H_i + \nabla H_i s \geq 0, & i \in V, \\
&& \left( (1 - \theta_i^G \delta) G_i + \nabla G_i s \right) \ \left( (1 - \theta_i^H \delta) H_i + \nabla H_i s \right) \leq 0 & i \in V, \\
&& - \delta \leq 0. &
\end{array}
\end{equation}
Here the vector $\theta = (\theta^g,\theta^G,\theta^H) \in \{ 0, 1 \}^{\vert I \vert + 2 \vert V \vert} =: \mathcal{B}$
is chosen at the beginning of the algorithm such that some feasible point is known in advance, e.g. $(s,\delta)=(0,1)$.
The parameter $\rho$ has to be chosen sufficiently large and acts like a penalty parameter forcing
$\delta$ to be near zero at the solution. $B$ is a symmetric positive definite $n \times n$ matrix, $\nabla f$, $\nabla h_i$, $\nabla g_i$,
$\nabla G_i$, $\nabla H_i$ denote row vectors in $\mathbb{R}^n$ and $h_i,g_i,G_i,H_i$ are real numbers.
Note that this problem is a special case of problem \eqref{eq : genproblem} and consequently the definition
of $\mathcal{Q}$- and $\mathcal{Q}_M$-stationarity as well as the definition of the index sets \eqref{eqn : IndexStes}
remain valid.
It turns out to be much more convenient to operate with a more general notation. Let us denote by $F_i:=(-H_i,G_i)^T$ a vector in $\mathbb{R}^2$,
by $\nabla F_i := (-\nabla H_i^T,\nabla G_i^T)^T$ a $2 \times n$ matrix and by $P^1 := \{0\} \times \mathbb{R}$ and $P^2:= \mathbb{R}^2_-$ two subsets of $\mathbb{R}^2$.
Note that for $P$ given by \eqref{eqn : FPdef} it holds that $P = P^1 \cup P^2$.
Problem \eqref{eq : deltaOldprob} can now be equivalently rewritten in the form
\begin{equation} \label{eq : deltaprob}
\begin{array}{lrll}
QPVC(\rho) & \min\limits_{(s,\delta) \in \mathbb{R}^{n+1}} & \frac{1}{2} s^T B s + \nabla f s + \rho ( \frac{1}{2} \delta^2 + \delta) & \\
& \textrm{subject to } & (1 - \delta) h_i + \nabla h_i s = 0 & i \in E, \\
&& (1 - \theta_i^g \delta) g_i + \nabla g_i s \leq 0 & i \in I, \\
&& \delta (\theta_i^H H_i, - \theta_i^G G_i)^T + F_i + \nabla F_i s \in P & i \in V, \\
&& - \delta \leq 0. &
\end{array}
\end{equation}
For a given feasible point $(s,\delta)$ for the problem $QPVC(\rho)$ we define the following index sets
\begin{eqnarray*}
I^{1}(s,\delta) & := & \{ i \in V \,\vert\, \delta (\theta_i^H H_i, - \theta_i^G G_i)^T + F_i + \nabla F_i s \in P^1 \setminus P^2 \} =
I^{0+}(s,\delta), \\
I^{2}(s,\delta) & := & \{ i \in V \,\vert\, \delta (\theta_i^H H_i, - \theta_i^G G_i)^T + F_i + \nabla F_i s \in P^2 \setminus P^1 \} =
I^{+0}(s,\delta) \cup I^{+-}(s,\delta), \\
I^{0}(s,\delta) & := & \{ i \in V \,\vert\, \delta (\theta_i^H H_i, - \theta_i^G G_i)^T + F_i + \nabla F_i s \in P^1 \cap P^2 \} =
I^{0-}(s,\delta) \cup I^{00}(s,\delta),
\end{eqnarray*}
where the index sets $I^{0+}(s,\delta)$, $I^{+0}(s,\delta)$, $I^{+-}(s,\delta)$, $I^{0-}(s,\delta)$, $I^{00}(s,\delta)$
are given by \eqref{eqn : IndexStes}.
Further, consider the distance function $d$ defined by
\[d(x,A) := \inf_{y \in A} \norm{x-y}_1,\]
for $x \in \mathbb{R}^2$ and $A \subset \mathbb{R}^2$.
The following proposition summarizes some well-known properties of $d$.
\begin{proposition}
Let $x \in \mathbb{R}^2$ and $A \subset \mathbb{R}^2$.
\begin{enumerate}
\item Let $B \subset \mathbb{R}^2$, then
\begin{equation} \label{eq : DistOfCup}
d(x, A \cup B) = \min\{ d(x, A), d(x, B) \}.
\end{equation}
In particular,
\begin{equation} \label{eq : DistToP}
d(x,P^1) = (x_1)^+ + (-x_1)^+, \,\, d(x,P^2) = (x_1)^+ + (x_2)^+, \,\,
d(x,P) = (x_1)^+ + (\min\{-x_1,x_2\})^+.
\end{equation}
\item $d(\cdot,A) : \mathbb{R}^2 \rightarrow \mathbb{R}^+$ is Lipschitz continuous with Lipschitz modulus $L = 1$ and consequently
\begin{equation} \label{eq : DistIfIn}
d(x,A) \leq d(x+y,A) + \norm{y}_1.
\end{equation}
\item $d(\cdot,A) : \mathbb{R}^2 \rightarrow \mathbb{R}^+$ is convex, provided $A$ is convex.
\end{enumerate}
\end{proposition}
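For later reference we note that the formulas \eqref{eq : DistToP} and the classification of the indices into $I^{1}$, $I^{2}$ and $I^{0}$ translate directly into code; the following Python sketch is meant purely as an illustration (the tolerance {\tt tol} is our own addition).
\begin{verbatim}
def dist_P1(x):            # d(x, P^1) = |x_1|,             P^1 = {0} x R
    return abs(x[0])

def dist_P2(x):            # d(x, P^2) = (x_1)^+ + (x_2)^+, P^2 = R_- x R_-
    return max(x[0], 0.0) + max(x[1], 0.0)

def dist_P(x):             # d(x, P) = min{d(x, P^1), d(x, P^2)}, cf. (DistOfCup)
    return min(dist_P1(x), dist_P2(x))

def classify(v, tol=1e-10):
    # v is the vector delta*(theta_H*H_i, -theta_G*G_i) + F_i + grad(F_i)*s for one i
    in_P1 = dist_P1(v) <= tol
    in_P2 = dist_P2(v) <= tol
    if in_P1 and in_P2:
        return "I0"        # i in I^0(s, delta)
    if in_P1:
        return "I1"        # i in I^1(s, delta)
    if in_P2:
        return "I2"        # i in I^2(s, delta)
    return "infeasible"
\end{verbatim}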
Due to the disjunctive structure of the auxiliary problem we can subdivide it into several QP-pieces.
For every partition $(V_1,V_2) \in \mathcal{P}(V)$ we define the convex quadratic problem
\begin{equation} \label{eq : deltaMprob}
\begin{array}{lrll}
QP(\rho, V_1) & \min\limits_{(s,\delta) \in \mathbb{R}^{n+1}} & \frac{1}{2} s^T B s + \nabla f s + \rho (\frac{1}{2} \delta^2 + \delta) & \\
& \textrm{subject to } & (1 - \delta) h_i + \nabla h_i s = 0 & i \in E, \\
&& (1 - \theta_i^g \delta) g_i + \nabla g_i s \leq 0 & i \in I, \\
&& \delta (\theta_i^H H_i, - \theta_i^G G_i)^T + F_i + \nabla F_i s \in P^1 & i \in V_1, \\
&& \delta (\theta_i^H H_i, - \theta_i^G G_i)^T + F_i + \nabla F_i s \in P^2 & i \in V_2, \\
&& - \delta \leq 0. &
\end{array}
\end{equation}
Since $(V_1,V_2)$ forms a partition of $V$, it suffices to specify $V_1$, because $V_2$ is then given by $V_2 = V \setminus V_1$.
At the solution $(s,\delta)$ of $QP(\rho, V_1)$ there is a corresponding multiplier $\lambda(\rho, V_1) = (\lambda^h,\lambda^g,\lambda^H,\lambda^G)$
and a number $\lambda^{\delta} \geq 0$ with $\lambda^{\delta} \delta = 0$ fulfilling the KKT conditions:
\begin{eqnarray} \label{eqn : FirstOrder1}
B s + \nabla f^T + \sum_{i \in E} \lambda_i^h \nabla h_i^T + \sum_{i \in I} \lambda_i^g \nabla g_i^T
+ \sum_{i \in V} \nabla F_i^T \lambda_i^F & = & 0, \\ \label{eqn : FirstOrder2}
\rho (\delta + 1) - \lambda^{\delta} - \sum_{i \in E} \lambda_i^h h_i - \sum_{i \in I} \lambda_i^g \theta_i^g g_i
+ \sum_{i \in V} (\theta_i^H H_i, - \theta_i^G G_i) \lambda_i^F & = & 0, \\
\label{eqn : ComplemCon1}
\lambda_i^g ((1 - \theta_i^g \delta) g_i + \nabla g_i s) = 0, \,\, \lambda_i^g \geq 0, && i \in I, \\ \label{eqn : ComplemCon2}
\lambda_i^F \in N_{P^1}(\delta (\theta_i^H H_i, - \theta_i^G G_i)^T + F_i + \nabla F_i s), && i \in V_1, \\ \label{eqn : ComplemCon3}
\lambda_i^F \in N_{P^2}(\delta (\theta_i^H H_i, - \theta_i^G G_i)^T + F_i + \nabla F_i s), && i \in V_2,
\end{eqnarray}
where $\lambda_i^F := (\lambda_i^H,\lambda_i^G)^T$ for $i \in V$. Since $P^1$ and $P^2$ are convex sets,
the above normal cones are given by \eqref{eq : convexNC}.
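Note that membership in $P^1$ amounts to a linear equality in the first component, while membership in $P^2$ amounts to two linear inequalities, so that each piece $QP(\rho,V_1)$ is an ordinary convex quadratic program. The following Python sketch, given only as an illustration, assembles its data in the variable $z = (s,\delta) \in \mathbb{R}^{n+1}$ for a generic QP solver; the names and the data layout are our own assumptions.
\begin{verbatim}
import numpy as np

def assemble_qp_piece(B, grad_f, rho, h, Dh, g, Dg, G, DG, H, DH, theta, V1):
    # Data of QP(rho, V1) in z = (s, delta):
    #   min 1/2 z^T Q z + c^T z   s.t.   A_eq z = b_eq,  A_in z <= b_in
    # theta = (theta_g, theta_G, theta_H); V1 is the set of indices placed in P^1.
    n = B.shape[0]
    theta_g, theta_G, theta_H = theta
    Q = np.block([[B, np.zeros((n, 1))], [np.zeros((1, n)), rho * np.ones((1, 1))]])
    c = np.concatenate([grad_f, [rho]])
    A_eq, b_eq, A_in, b_in = [], [], [], []
    for i in range(len(h)):                    # (1 - delta) h_i + Dh_i s = 0
        A_eq.append(np.concatenate([Dh[i], [-h[i]]])); b_eq.append(-h[i])
    for i in range(len(g)):                    # (1 - theta_g_i delta) g_i + Dg_i s <= 0
        A_in.append(np.concatenate([Dg[i], [-theta_g[i] * g[i]]])); b_in.append(-g[i])
    for i in range(len(H)):
        rowH = np.concatenate([DH[i], [-theta_H[i] * H[i]]])
        rowG = np.concatenate([DG[i], [-theta_G[i] * G[i]]])
        if i in V1:                            # P^1: (1 - theta_H_i delta) H_i + DH_i s = 0
            A_eq.append(rowH); b_eq.append(-H[i])
        else:                                  # P^2: H-part >= 0 and G-part <= 0
            A_in.append(-rowH); b_in.append(H[i])
            A_in.append(rowG);  b_in.append(-G[i])
    A_in.append(np.concatenate([np.zeros(n), [-1.0]])); b_in.append(0.0)   # -delta <= 0
    return Q, c, A_eq, b_eq, A_in, b_in
\end{verbatim}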
The definition of the problem $QP(\rho, V_1)$ allows the following interpretation of $\mathcal{Q}$-stationarity,
which is a direct consequence of \eqref{eqn : QstatLNC1} and \eqref{eqn : QstatLNC2}.
\begin{lemma} \label{Lem : QstatQPpiece}
A point $(s,\delta)$ is $\mathcal{Q}$-stationary with respect to $(\beta^1,\beta^2) \in \mathcal{P}(I^{00}(s,\delta))$ for \eqref{eq : deltaprob}
if and only if it is the solution of the convex
problems $QP(\rho, I^{1}(s,\delta) \cup \beta^1)$ and $QP(\rho, I^{1}(s,\delta) \cup \beta^2)$.
\end{lemma}
Moreover, since for $V_1 = I^{1}(s,\delta) \cup I^{00}(s,\delta)$ the conditions \eqref{eqn : ComplemCon2},\eqref{eqn : ComplemCon3}
read as $\lambda_i^F \in \nu_i^{I^{00}(s,\delta),\emptyset}(s,\delta)$, it follows from \eqref{eq : MstatConvexPiece}
that if a point $(s,\delta)$ is the solution of $QP(\rho, I^{1}(s,\delta) \cup I^{00}(s,\delta))$ then it is M-stationary for \eqref{eq : deltaprob}.
Finally, let us denote by $\bar\delta(V_1)$ the objective value at a solution of the problem
\begin{equation} \label{eq : MinDelProb}
\min_{(s,\delta) \in \mathbb{R}^{n+1}} \,\, \delta \qquad \textrm{ subject to the constraints of \eqref{eq : deltaMprob}.}
\end{equation}
An outline of the algorithm for solving $QPVC(\rho)$ is as follows.
\begin{algorithm}[Solving the QPVC] \label{AlgSol} \rm \mbox{}
Let $\zeta \in (0,1)$, $\bar{\rho} > 1$ and $\rho > 0$ be given.
\Itl1{1:} Initialize:
\Itl2{} Set the starting point $(s^0, \delta^0) := (0,1)$, define the vector $\theta$ by
\Itl3{} \begin{equation} \label{eq : DefBet}
\theta_i^{g} := \left\{
\begin{array}{ll}
1 & \quad \textrm{if } g_i > 0, \\
0 & \quad \textrm{if } g_i \leq 0,
\end{array} \right. \qquad
(\theta_i^{H}, \theta_i^{G}) := \left\{
\begin{array}{ll}
(0,0) & \quad \textrm{if } d(F_i,P) = 0, \\
(1,0) & \quad \textrm{if } 0 < d(F_i,P^1) \leq d(F_i,P^2), \\
(0,1) & \quad \textrm{if } 0 < d(F_i,P^2) < d(F_i,P^1)
\end{array} \right.
\end{equation}
\Itl3{} and set the partition $V_1^1 := I^{1}(s^0,\delta^0)$ and the counter of pieces $t:=0$.
\Itl2{} Compute $(s^{1}, \delta^{1})$ as the solution and $\lambda^{1}$ as the corresponding multiplier
\Itl3{} of the convex problem $QP(\rho, V_1^{1})$ and set $t:=1$.
\Itl2{} If $\delta^1 > \delta^0$, perform a restart: set $\rho := \rho \bar \rho$ and go to step 1.
\Itl1{2:} Improvement step:
\Itl2{} {\tt while} $(s^t,\delta^t)$ is not a solution of the following four convex problems:
\begin{eqnarray} \label{eqn : b1b2Stat}
QP(\rho, I^{1}(s^t,\delta^t) \cup (I^{00}(s^t,\delta^t) \cap V_1^t)), &&
QP(\rho, I^{1}(s^t,\delta^t) \cup (I^{00}(s^t,\delta^t) \setminus V_1^t)), \\
\label{eqn : I00EmptyStat}
QP(\rho, I^{1}(s^t,\delta^t)), &&
QP(\rho, I^{1}(s^t,\delta^t) \cup I^{00}(s^t,\delta^t)).
\end{eqnarray}
\Itl3{} Compute $(s^{t+1}, \delta^{t+1})$ as the solution and $\lambda^{t+1}$ as the corresponding multiplier
\Itl4{} of the first problem with $(s^{t+1}, \delta^{t+1}) \neq (s^{t}, \delta^{t})$, set $V_1^{t+1}$ to the
\Itl4{} corresponding index set and increase the counter $t$ of pieces by $1$.
\Itl3{} If $\delta^t > \delta^{t-1}$, perform a restart: set $\rho := \rho \bar \rho$ and go to step 1.
\Itl1{3:} Check for successful termination:
\Itl2{} If $\delta^t < \zeta$ set $N:=t$, stop the algorithm and return.
\Itl1{4:} Check the degeneracy:
\Itl2{} If the non-degeneracy condition
\begin{equation} \label{eq : NonDegen}
\min \{ \bar\delta(I^{1}(s^t,\delta^t)), \bar\delta(I^{1}(s^t,\delta^t) \cup I^{00}(s^t,\delta^t)) \} < \zeta
\end{equation}
\Itl3{} is fulfilled, perform a restart: set $\rho := \rho \bar \rho$ and go to step 1.
\Itl2{} Else stop the algorithm because of degeneracy.
\end{algorithm}
The selection of the index sets in step 2 is motivated by Lemma \ref{Lem : QstatQPpiece}:
if $(s,\delta)$ is the solution of the convex problems \eqref{eqn : b1b2Stat}, then it is
$\mathcal{Q}$-stationary, and if $(s,\delta)$ is also the
solution of the convex problems \eqref{eqn : I00EmptyStat}, then it is even $\mathcal{Q}_M$-stationary
for problem \eqref{eq : deltaprob}.
We first summarize some consequences of the Initialization step.
\begin{proposition} \label{Prop : InitStep}
\begin{enumerate}
\item The vector $\theta$ is chosen in such a way that for all $i \in V$ it holds that
\begin{equation} \label{eq : betaeff}
\norm{(\theta_i^H H_i, - \theta_i^G G_i)^T}_1 = d(F_i, P).
\end{equation}
\item The partition $(V_1^1,V_2^1)$ is chosen in such a way that for $j=1,2$ it holds that
\begin{equation} \label{eq : InitPart}
i \in V_j^1 \, \textrm{ implies } \, d(F_i, P) = d(F_i, P^j).
\end{equation}
\end{enumerate}
\end{proposition}
\begin{proof}
1. If $d(F_i, P) = 0$ we have $(\theta_i^H, \theta_i^G) = (0,0)$
and \eqref{eq : betaeff} obviously holds. If $0 < d(F_i,P^1) \leq d(F_i,P^2)$ we have $(\theta_i^H, \theta_i^G) = (1,0)$ and we obtain
\[\norm{(\theta_i^H H_i, - \theta_i^G G_i)^T}_1 = \vert H_i \vert = d(F_i,P^1) = d(F_i,P) \]
by \eqref{eq : DistToP} and \eqref{eq : DistOfCup}. Finally, if $0 < d(F_i,P^2) < d(F_i,P^1)$, then
$H_i > 0$ and $G_i > 0$ (indeed, $d(F_i,P^2) = (-H_i)^+ + (G_i)^+ < \vert H_i \vert = d(F_i,P^1)$ forces $H_i > 0$,
and $d(F_i,P^2) > 0$ then forces $G_i > 0$), hence $(\theta_i^H, \theta_i^G) = (0,1)$ and thus
\[\norm{(\theta_i^H H_i, - \theta_i^G G_i)^T}_1 = \vert G_i \vert = (-H_i)^+ + (G_i)^+ = d(F_i,P^2) = d(F_i,P) \]
follows again by \eqref{eq : DistToP} and \eqref{eq : DistOfCup}.
2. If $(\theta_i^H H_i, - \theta_i^G G_i)^T + F_i \in P^j$ for some $i \in V$ and $j = 1,2$,
by \eqref{eq : DistIfIn} and \eqref{eq : betaeff} we obtain
\[d(F_i,P^j) \leq \norm{(\theta_i^H H_i, - \theta_i^G G_i)^T}_1 = d(F_i, P) \]
and consequently $d(F_i,P^j) = d(F_i,P)$, because of \eqref{eq : DistOfCup}.
Hence we conclude that $i \in (I^{j}(s^0,\delta^0) \cup I^{0}(s^0,\delta^0))$ implies $d(F_i,P^j) = d(F_i,P)$ for $j = 1,2$
and the statement now follows from the fact that $V_1^1 = I^{1}(s^0,\delta^0)$ and $V_2^1 = I^{2}(s^0,\delta^0) \cup I^{0}(s^0,\delta^0)$.
\end{proof}
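In code, the initialization \eqref{eq : DefBet} of the vector $\theta$ can be transcribed as follows (an illustrative sketch only, reusing the distance helpers {\tt dist\_P1} and {\tt dist\_P2} from the sketch above; the tolerance is our own addition).
\begin{verbatim}
def choose_theta(g_vals, G_vals, H_vals, tol=1e-12):
    # theta_g from the sign of g_i; (theta_H, theta_G) by comparing
    # d(F_i, P^1) and d(F_i, P^2) with F_i = (-H_i, G_i), cf. (DefBet)
    theta_g = [1 if gi > tol else 0 for gi in g_vals]
    theta_H, theta_G = [], []
    for Gi, Hi in zip(G_vals, H_vals):
        F = (-Hi, Gi)
        d1, d2 = dist_P1(F), dist_P2(F)
        if min(d1, d2) <= tol:          # d(F_i, P) = 0
            tH, tG = 0, 0
        elif d1 <= d2:                  # 0 < d(F_i, P^1) <= d(F_i, P^2)
            tH, tG = 1, 0
        else:                           # 0 < d(F_i, P^2) < d(F_i, P^1)
            tH, tG = 0, 1
        theta_H.append(tH); theta_G.append(tG)
    return theta_g, theta_G, theta_H
\end{verbatim}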
The following lemma plays a crucial part in proving the finiteness of the Algorithm \ref{AlgSol}.
\begin{lemma} \label{Lem : RhoMinDelta}
For each partition $(V_1,V_2) \in \mathcal{P}(V)$ there exists a positive constant $C_{\rho}(V_1)$ such that for every
$\rho \geq C_{\rho}(V_1)$ the solution $(s,\delta)$ of $QP(\rho,V_1)$ fulfills $\delta = \bar\delta(V_1)$.
\end{lemma}
\begin{proof}
Let $(s(V_1),\delta(V_1))$ denote a solution of \eqref{eq : MinDelProb}. Since $\delta(V_1)=\bar\delta(V_1)$, it follows that the problem
\begin{equation} \label{eq : deltaZeroMprob}
\begin{array}{rll}
\min\limits_{s \in \mathbb{R}^{n}} & \frac{1}{2} s^T B s + \nabla f s & \\
\textrm{subject to } & (1 - \bar\delta(V_1)) h_i + \nabla h_i s = 0 & i \in E, \\
& (1 - \theta_i^g \bar\delta(V_1)) g_i + \nabla g_i s \leq 0 & i \in I, \\
& \bar\delta(V_1) (\theta_i^H H_i, - \theta_i^G G_i)^T + F_i + \nabla F_i s \in P^1 & i \in V_1, \\
& \bar\delta(V_1) (\theta_i^H H_i, - \theta_i^G G_i)^T + F_i + \nabla F_i s \in P^2 & i \in V_2
\end{array}
\end{equation}
is feasible and by $\bar s(V_1)$ we denote the solution of this problem and by $\bar \lambda(V_1)$ the corresponding multiplier.
Further, $(\bar s(V_1),\bar\delta(V_1))$ is a solution of \eqref{eq : MinDelProb} and by $\lambda(V_1)$
we denote the corresponding multiplier.
Then the point $(\bar s(V_1),\bar\delta(V_1))$ together with the multiplier $\bar \lambda(V_1)$ fulfills \eqref{eqn : FirstOrder1}
and \eqref{eqn : ComplemCon1}-\eqref{eqn : ComplemCon3}.
Moreover, the point $(\bar s(V_1),\bar\delta(V_1))$ together with the multiplier $\lambda(V_1)$ fulfills \eqref{eqn : ComplemCon1}-\eqref{eqn : ComplemCon3} and
\begin{eqnarray} \label{eqn : FirstOrder1Small}
\sum_{i \in E} \lambda(V_1)_i^h \nabla h_i^T + \sum_{i \in I} \lambda(V_1)_i^g \nabla g_i^T
+ \sum_{i \in V} \nabla F_i^T \lambda(V_1)_i^F & = & 0, \\ \label{eqn : FirstOrder2Small}
1 - \lambda^{\delta} - \sum_{i \in E} \lambda(V_1)_i^h h_i - \sum_{i \in I} \lambda(V_1)_i^g \theta_i^g g_i
+ \sum_{i \in V} (\theta_i^H H_i, - \theta_i^G G_i) \lambda(V_1)_i^F & = & 0
\end{eqnarray}
for some $\lambda^{\delta} \geq 0$ with $\lambda^{\delta} \bar\delta(V_1) = 0$.
Let $C_{\rho}(V_1)$ be a positive constant such that for all $\rho \geq C_{\rho}(V_1)$ we have
\[\alpha := \rho(\bar\delta(V_1) + 1) - \sum_{i \in E} \bar \lambda(V_1)_i^h h_i - \sum_{i \in I} \bar \lambda(V_1)_i^g \theta_i^g g_i
+ \sum_{i \in V} (\theta_i^H H_i, - \theta_i^G G_i) \bar \lambda(V_1)_i^F \geq 0\]
and set $\tilde \lambda^{\delta} := \alpha \lambda^{\delta} \geq 0$ and $\tilde \lambda := \bar \lambda(V_1) + \alpha \lambda(V_1)$.
We will now show that for such $\rho$ it holds that $(\bar s(V_1),\bar\delta(V_1))$ is the solution of $QP(\rho,V_1)$.
Clearly, $\tilde \lambda^{\delta} \bar\delta(V_1) = \alpha \lambda^{\delta} \bar\delta(V_1) = 0$ and the triple
$(\bar s(V_1),\bar\delta(V_1))$ and $\tilde \lambda$ also fulfills \eqref{eqn : FirstOrder1} due to \eqref{eqn : FirstOrder1Small}
and it fulfills \eqref{eqn : ComplemCon1}-\eqref{eqn : ComplemCon3} due to the convexity of the normal cones. Moreover, taking into account
the definitions of $\alpha$, $\tilde \lambda^{\delta}$ and $\tilde \lambda$ together with \eqref{eqn : FirstOrder2Small}, we obtain
\[\rho (\bar\delta(V_1) + 1) - \tilde \lambda^{\delta} - \sum_{i \in E} \tilde \lambda_i^h h_i - \sum_{i \in I} \tilde \lambda_i^g \theta_i^g g_i
+ \sum_{i \in V} (\theta_i^H H_i, - \theta_i^G G_i) \tilde \lambda_i^F =
\alpha - \alpha \lambda^{\delta} - \alpha(1 - \lambda^{\delta}) = 0,\]
showing also \eqref{eqn : FirstOrder2}. Hence $(\bar s(V_1),\bar\delta(V_1))$ is the solution of $QP(\rho,V_1)$ and the proof is complete.
\end{proof}
We now formulate the main theorem of this section.
\begin{theorem} \label{The : FinitAndFinal}
\begin{enumerate}
\item Algorithm \ref{AlgSol} is finite.
\item If the Algorithm \ref{AlgSol} is not terminated because of degeneracy, then $(s^N,\delta^N)$ is $\mathcal{Q}_M$-stationary
for the problem \eqref{eq : deltaprob} and $\delta^N < \zeta$.
\end{enumerate}
\end{theorem}
\begin{proof}
1. The algorithm is obviously finite unless we perform a restart and hence increase $\rho$. Thus we can assume that $\rho$ is sufficiently
large, say \[\rho \geq C_{\rho} := \max_{(V_1,V_2) \in \mathcal P(V)} C_{\rho}(V_1),\]
with $C_{\rho}(V_1)$ given by the previous lemma. However this means, taking into account also Proposition \ref{Prop : Props} (1.),
that $(s^{t-1},\delta^{t-1})$ is feasible for the problem $QP(\rho,V_1^t)$ for all $t$, hence $\delta^{t-1} \geq \bar\delta(V_1^t)$
and $(s^t,\delta^t)$ is the solution of $QP(\rho,V_1^t)$, implying $\delta^{t} = \bar\delta(V_1^t)$
and consequently $\delta^{t} \leq \delta^{t-1}$. Therefore we do not perform a restart in step 1 or step 2.
On the other hand, we enter steps 3 and 4 only after $(s^t,\delta^t)$ solves both problems \eqref{eqn : I00EmptyStat}, so that
$\delta^t = \bar\delta(I^{1}(s^t,\delta^t)) = \bar\delta(I^{1}(s^t,\delta^t) \cup I^{00}(s^t,\delta^t))$ by Lemma \ref{Lem : RhoMinDelta}.
If the non-degeneracy condition \eqref{eq : NonDegen} is fulfilled, this yields $\delta^t < \zeta$ and we terminate the algorithm in step 3;
otherwise we terminate the algorithm because of degeneracy in step 4. In both cases no further restart occurs, which finishes the proof of the first assertion.
2. The statement regarding stationarity follows from the fact that we enter step 3 of the algorithm only when $(s^N,\delta^N)$ is a solution of
both problems \eqref{eqn : I00EmptyStat}, which by Lemma \ref{Lem : QstatQPpiece} means that it is $\mathcal{Q}$-stationary with respect to
$(\emptyset,I^{00}(s^N,\delta^N))$. Thus, $(s^N,\delta^N)$ is also
$\mathcal{Q}_M$-stationary for problem \eqref{eq : deltaprob}. The claim about $\delta^N$ follows
from the assumption that the Algorithm \ref{AlgSol} is not terminated because of degeneracy.
\end{proof}
We conclude this section with the following proposition that brings together the basic properties of the Algorithm \ref{AlgSol}.
\begin{proposition} \label{Prop : Props}
If the Algorithm \ref{AlgSol} is not terminated because of degeneracy, then the following properties hold:
\begin{enumerate}
\item For all $t = 1, \ldots, N$ the points $(s^{t-1},\delta^{t-1})$ and $(s^{t},\delta^{t})$ are feasible
for the problem $QP(\rho,V_1^t)$ and the point $(s^t,\delta^t)$ is also the solution of the convex problem $QP(\rho,V_1^t)$.
\item For all $t = 1, \ldots, N$ it holds that
\begin{equation}
0 \leq \delta^{t} \leq \delta^{t-1} \leq 1.
\end{equation}
\item There exists a constant $C_t$, dependent only on the number of constraints, such that
\begin{equation} \label{eq : StepsBound}
N \leq C_t.
\end{equation}
\end{enumerate}
\end{proposition}
\begin{proof}
1. By definitions of the problems $QPVC(\rho)$ and $QP(\rho,V_1)$ it follows that a point $(s,\delta)$, feasible
for $QPVC(\rho)$, is feasible for $QP(\rho,V_1)$ if and only if
\begin{equation} \label{eq : Feasibility}
I^1(s,\delta) \subset V_1 \subset I^1(s,\delta) \cup I^{0}(s,\delta).
\end{equation}
The point $(s^0,\delta^0)$ is clearly feasible for $QP(\rho,V_1^1)$
and similarly the point $(s^{t},\delta^{t})$ is feasible for $QP(\rho,V_1^{t+1})$ for all $t = 1, \ldots, N-1$,
since the partition $V_1^{t+1}$ is defined by one of the index sets of \eqref{eqn : b1b2Stat}-\eqref{eqn : I00EmptyStat}
and thus fulfills \eqref{eq : Feasibility}. However, feasibility of $(s^{t+1},\delta^{t+1})$ for $QP(\rho,V_1^{t+1})$,
together with $(s^{t+1},\delta^{t+1})$ being the solution of $QP(\rho,V_1^{t+1})$, then follows from its definition.
2. The statement follows from $\delta^0 = 1$, from the fact that we perform a restart whenever $\delta^t > \delta^{t-1}$ occurs
and from the constraint $-\delta \leq 0$.
3. Whenever the parameter $\rho$ is increased, the algorithm returns to step 1 and the counter $t$ of pieces is reset to $0$;
hence it suffices to count the pieces visited after the last time the algorithm enters step 1, during which $\rho$ remains constant.
All the index sets $V_1^t$ visited in this phase are pairwise different, implying that the number of switches to a new piece
is at most $2^{\vert V \vert}$; hence \eqref{eq : StepsBound} holds with $C_t := 2^{\vert V \vert}$.
\end{proof}
\section{The basic SQP algorithm for MPVC}
An outline of the basic algorithm is as follows.
\begin{algorithm}[Solving the MPVC] \label{AlgMPCC} \rm \mbox{}
\Itl1{1:} Initialization:
\Itl2{} Select a starting point $x_0 \in \mathbb{R}^n$ together with a positive definite $n \times n$ matrix $B_0$,
\Itl3{} a parameter $\rho_0 > 0$ and constants $\zeta \in (0,1)$ and $\bar\rho > 1$.
\Itl2{} Select positive penalty parameters $\sigma_{-1} = (\sigma^h_{-1}, \sigma^g_{-1}, \sigma^F_{-1})$.
\Itl2{} Set the iteration counter $k := 0$.
\Itl1{2:} Solve the Auxiliary problem:
\Itl2{} Run Algorithm \ref{AlgSol} with data $\zeta, \bar\rho, \rho:= \rho_k, B:=B_k, \nabla f := \nabla f (x_k),$
\Itl3{} $h_i := h_i(x_k), \nabla h_i := \nabla h_i (x_k), i \in E,$ etc.
\Itl2{} If the Algorithm \ref{AlgSol} stops because of degeneracy,
\Itl3{} stop the Algorithm \ref{AlgMPCC} with an error message.
\Itl2{} If the final iterate $s^N$ is zero, stop the Algorithm \ref{AlgMPCC} and return $x_k$ as a solution.
\Itl1{3:} Next iterate:
\Itl2{} Compute new penalty parameters $\sigma_{k}$.
\Itl2{} Set $x_{k+1} := x_k + s_k$ where $s_k$ is a point on the polygonal line connecting the points
\Itl3{} $s^0,s^1, \ldots, s^N$ such that an appropriate merit function depending on $\sigma_{k}$ is decreased.
\Itl2{} Set $\rho_{k+1} := \rho$, the final value of $\rho$ in Algorithm \ref{AlgSol}.
\Itl2{} Update $B_{k}$ to get positive definite matrix $B_{k+1}$.
\Itl2{} Set $k := k+1$ and go to step 2.
\end{algorithm}
\begin{remark} \label{rem : StoppingCriteria}
We terminate Algorithm \ref{AlgMPCC} only in the following two cases. In the first case
no sufficient reduction of the constraint violation can be achieved.
The second case occurs only by chance, namely when the current iterate happens to be a $\mathcal{Q}_M$-stationary solution.
Normally, this algorithm produces an infinite sequence of iterates and we must include a stopping criterion
for convergence. Such a criterion could be that the violation of the constraints at some iterate is sufficiently small,
\[\max \{ \max_{i \in E} \vert h_i(x_k) \vert, \max_{i \in I} (g_i(x_k))^+, \max_{i \in V} d(F_i(x_k),P) \} \leq \epsilon_C,\]
where $F_i$ is given by \eqref{eqn : FPdef} and the expected decrease in our merit function is sufficiently small,
\[ (s_k^{N_k})^T B_k s_k^{N_k} \leq \epsilon_1, \]
see Proposition \ref{pro : MainMerit} below.
\end{remark}
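A possible realization of the constraint-violation part of such a stopping test is sketched below; this is only an illustration, reusing the helper {\tt dist\_P} from the distance-function sketch above.
\begin{verbatim}
def constraint_violation(x, h, g, G, H):
    # max over |h_i(x)|, (g_i(x))^+ and d(F_i(x), P), cf. the remark above
    viol_h = max((abs(v) for v in h(x)), default=0.0)
    viol_g = max((max(v, 0.0) for v in g(x)), default=0.0)
    viol_F = max((dist_P((-Hi, Gi)) for Gi, Hi in zip(G(x), H(x))), default=0.0)
    return max(viol_h, viol_g, viol_F)

# terminate if constraint_violation(x_k, h, g, G, H) <= eps_C
# and s_N @ (B_k @ s_N) <= eps_1
\end{verbatim}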
\subsection{The next iterate}
Denote the outcome of Algorithm \ref{AlgSol} at the $k$-th iterate by
$$(s_k^{t}, \delta_k^{t}), \lambda_k^{t}, (V_{1,k}^{t}, V_{2,k}^{t}) \textrm{ for } t = 0, \ldots, N_k \textrm{ and }
\theta_k, \underline\lambda_k^{N_k}, \overline\lambda_k^{N_k}.$$
The new penalty parameters are computed by
\begin{eqnarray} \label{eqn : PenalParam}
\sigma_{i,k}^h = \begin{cases}
\xi_2 \tilde\lambda_{i,k}^{h} & \textrm{ if } \sigma_{i,k-1}^h < \xi_1 \tilde\lambda_{i,k}^{h}, \\
\sigma_{i,k-1}^h & \textrm{ else},
\end{cases} \qquad
\sigma_{i,k}^g = \begin{cases}
\xi_2 \tilde\lambda_{i,k}^{g} & \textrm{ if } \sigma_{i,k-1}^g < \xi_1 \tilde\lambda_{i,k}^{g}, \\
\sigma_{i,k-1}^g & \textrm{ else},
\end{cases}
\\ \nonumber
\sigma_{i,k}^F = \begin{cases}
\xi_2 \tilde{\lambda}_{i,k}^F & \textrm{ if }
\sigma_{i,k-1}^F < \xi_1 \tilde{\lambda}_{i,k}^F, \\
\sigma_{i,k-1}^F & \textrm{ else},
\end{cases}
\end{eqnarray}
where
\begin{equation} \label{eq : DefTilLamb}
\tilde{\lambda}_{i,k}^h = \max \vert \lambda_{i,k}^{h,t} \vert, \quad
\tilde{\lambda}_{i,k}^g = \max \vert \lambda_{i,k}^{g,t} \vert, \quad
\tilde{\lambda}_{i,k}^F = \max \norm{\lambda_{i,k}^{F,t}}_{\infty},
\end{equation}
with the maximum being taken over $t \in \{ 1, \ldots, N_k \}$
and $1 < \xi_1 < \xi_2$. Note that this choice of $\sigma_{k}$ ensures
\begin{equation}
\label{EqPropSigma}
\sigma_{k}^h \geq \tilde\lambda_k^h,\ \sigma_{k}^g \geq \tilde\lambda_k^g,\ \sigma_{k}^F \geq \tilde{\lambda}_{k}^F.
\end{equation}
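Componentwise, the update \eqref{eqn : PenalParam} can be realized as in the following sketch (our illustration; the default values of $\xi_1$ and $\xi_2$ are arbitrary choices satisfying $1 < \xi_1 < \xi_2$, and {\tt lam\_hist} collects the multiplier magnitudes of \eqref{eq : DefTilLamb} for $t = 1,\dots,N_k$).
\begin{verbatim}
import numpy as np

def update_sigma(sigma_prev, lam_hist, xi1=2.0, xi2=4.0):
    # sigma_prev: previous penalty parameters (one entry per constraint)
    # lam_hist:   list over t = 1,...,N_k of arrays of multiplier magnitudes,
    #             i.e. |lambda^{h,t}|, |lambda^{g,t}| or ||lambda^{F,t}||_inf
    lam_tilde = np.max(np.abs(np.asarray(lam_hist)), axis=0)
    sigma_new = np.where(sigma_prev < xi1 * lam_tilde, xi2 * lam_tilde, sigma_prev)
    return sigma_new       # ensures sigma_new >= lam_tilde, cf. (EqPropSigma)
\end{verbatim}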
\subsubsection{The merit function}
We are looking for the next iterate on the polygonal line connecting the points $s_k^0,s_k^1, \ldots, s_k^{N_k}$.
For each line segment $[s_k^{t-1},s_k^t] := \{(1-\alpha) s_k^{t-1} + \alpha s_k^t \,\vert\, \alpha \in [0,1] \}, t = 1, \ldots, N_k$
we consider the functions
\begin{eqnarray*}
\phi_k^t(\alpha) & := & f(x_k + s) + \sum \limits_{i \in E} \sigma_{i,k}^h \vert h_i(x_k + s) \vert +
\sum \limits_{i \in I} \sigma_{i,k}^g ( g_i(x_k + s) )^+ \\
&& + \sum \limits_{i \in V_{1,k}^t} \sigma_{i,k}^F d(F_i(x_k + s),P^1)
+ \sum \limits_{i \in V_{2,k}^t} \sigma_{i,k}^F d(F_i(x_k + s),P^2), \\
\hat{\phi}_k^t(\alpha) & := & f + \nabla f s + \frac{1}{2} s^TB_ks +
\sum \limits_{i \in E} \sigma_{i,k}^h \vert h_i + \nabla h_i s \vert +
\sum \limits_{i \in I} \sigma_{i,k}^g ( g_i + \nabla g_i s )^+ \\
&& + \sum \limits_{i \in V_{1,k}^t} \sigma_{i,k}^F d(F_i + \nabla F_i s,P^1)
+ \sum \limits_{i \in V_{2,k}^t} \sigma_{i,k}^F d(F_i + \nabla F_i s,P^2),
\end{eqnarray*}
where $s = (1 - \alpha) s_k^{t-1} + \alpha s_k^t$ and $f = f(x_k)$, $\nabla f = \nabla f (x_k)$,
$h_i = h_i(x_k), \nabla h_i = \nabla h_i (x_k), i \in E,$ etc.
and we further denote
\begin{equation} \label{eq : DefOfR}
r^{t}_{k,0}:= \hat\phi_k^{t}(0) - \hat\phi_k^{1}(0), \quad r^t_{k,1} := \hat\phi_k^{t}(1) - \hat\phi_k^{1}(0).
\end{equation}
\begin{lemma} \label{lem : convex}
\begin{enumerate}
\item For every $t \in \{ 1, \ldots, N_k \}$ the function $\hat{\phi}_k^t$ is convex.
\item For every $t \in \{ 1, \ldots, N_k \}$ the function $\hat{\phi}_k^t$ is a first order approximation of $\phi_k^t$, that is
\[\vert \phi_k^t(\alpha) - \hat{\phi}_k^t(\alpha) \vert = o(\norm{s}),\]
where $s = (1 - \alpha) s_k^{t-1} + \alpha s_k^t$.
\end{enumerate}
\end{lemma}
\begin{proof}
1. By the convexity of $P^1$ and $P^2$, the function $\hat{\phi}_k^t$ is convex, since it is a sum of convex functions of $\alpha$
(note that $s$ depends affinely on $\alpha$).
2. By the Lipschitz continuity of the distance function with Lipschitz modulus $L = 1$ we conclude
\begin{eqnarray*}
\vert \phi_k^t(\alpha) - \hat{\phi}_k^t(\alpha) \vert & \leq &
\vert f(x_k + s) - f - \nabla f s - \frac{1}{2} s^TB_ks \vert +
\sum \limits_{i \in E} \sigma_{i,k}^h \vert h_i(x_k + s) - h_i - \nabla h_i s \vert \\
&& + \sum \limits_{i \in I} \sigma_{i,k}^g \vert g_i(x_k + s) - g_i - \nabla g_i s \vert
+ \sum \limits_{i \in V} \sigma_{i,k}^F \norm{ F_i(x_k + s) - F_i - \nabla F_i s}_1
\end{eqnarray*}
and hence the assertion follows.
\end{proof}
We now state the main result of this subsection. For the sake of simplicity we omit the iteration index $k$ in this part.
\begin{proposition} \label{pro : MainMerit}
For every $t \in \{ 1, \ldots, N_k \}$
\begin{eqnarray} \label{eqn : BoundMain}
\hat\phi^{t}(0) - \hat\phi^{1}(0) & \leq &
- \sum \limits_{\tau = 1}^{t-1} \frac{1}{2} (s^{\tau} - s^{\tau-1})^T B (s^{\tau} - s^{\tau-1}) \, \, \leq \, \, 0,
\\ \label{eqn : BoundMain2}
\hat\phi^{t}(1) - \hat\phi^{1}(0) & \leq &
- \sum \limits_{\tau = 1}^{t} \frac{1}{2} (s^{\tau} - s^{\tau-1})^T B (s^{\tau} - s^{\tau-1}) \, \, \leq \, \, 0.
\end{eqnarray}
\end{proposition}
\begin{proof}
Fix $t \in \{ 1, \ldots, N_k \}$ and note that
\begin{eqnarray*}
1/2 (s^{t})^T B s^{t} + \nabla f s^{t} & = & 1/2 (s^{t})^T B s^{t} + \nabla f s^{t} - 1/2 (s^{0})^T B s^{0} - \nabla f s^{0} \\
& = & \sum_{\tau = 1}^t 1/2 (s^{\tau})^T B s^{\tau} - 1/2(s^{\tau-1})^T B s^{\tau-1} + \nabla f (s^{\tau} - s^{\tau-1}),
\end{eqnarray*}
because of $s^0 = 0$. For $j=0,1$ consider $r^{t+j}_{1-j}$ defined by \eqref{eq : DefOfR}. We obtain
\begin{eqnarray} \label{eqn : Basic}
r^{t+j}_{1-j} & = & \sum_{\tau = 1}^t \left( \frac{1}{2} (s^{\tau})^T B s^{\tau} - \frac{1}{2}(s^{\tau-1})^T B s^{\tau-1}
+ \nabla f (s^{\tau} - s^{\tau-1}) \right) \\ \nonumber
&& + \sum \limits_{i \in E} \sigma_i^h \left( \vert h_i + \nabla h_i s^{t} \vert - \vert h_i \vert \right)
+ \sum \limits_{i \in I} \sigma_i^g \left( (g_i + \nabla g_i s^{t})^+ - (g_i)^+ \right) \\ \nonumber
&& + \sum \limits_{i \in V_1^{t+j}} \sigma_i^F d(F_i + \nabla F_i s^t,P^1)
+ \sum \limits_{i \in V_2^{t+j}} \sigma_i^F d(F_i + \nabla F_i s^t,P^2) \\ \nonumber
&& - \sum \limits_{i \in V_1^{1}} \sigma_i^F d(F_i,P^1)
- \sum \limits_{i \in V_2^{1}} \sigma_i^F d(F_i,P^2).
\end{eqnarray}
Using that $(s^{\tau},\delta^{\tau})$ is the solution of $QP(\rho,V_1^{\tau})$ and multiplying the first order optimality
condition \eqref{eqn : FirstOrder1} by $(s^{\tau} - s^{\tau-1})^T$ yields
\begin{equation} \label{eq:mult}
(s^{\tau} - s^{\tau-1})^T \left( B s^{\tau} + \nabla f^T + \sum \limits_{i \in E} \lambda_i^{h,\tau} \nabla h_i^T +
\sum \limits_{i \in I} \lambda_i^{g,\tau} \nabla g_i^T + \sum \limits_{i \in V} \nabla F_i^T \lambda_i^{F,\tau} \right) = 0.
\end{equation}
Summing up the expression on the left hand side from $\tau =1$ to $t$, subtracting it from the right hand side of \eqref{eqn : Basic}
and taking into account the identity
$$1/2 (s^{\tau})^T B s^{\tau} - 1/2(s^{\tau-1})^T B s^{\tau-1} - (s^{\tau} - s^{\tau-1})^T B s^{\tau} =
- 1/2 (s^{\tau} - s^{\tau-1})^T B (s^{\tau} - s^{\tau-1})$$
we obtain for $j=0,1$
\begin{eqnarray} \label{eqn : TheLong}
r^{t+j}_{1-j} & = & - \sum \limits_{\tau = 1}^{t} \frac{1}{2} (s^{\tau} - s^{\tau-1})^T B (s^{\tau} - s^{\tau-1}) \\ \nonumber
& & + \sum \limits_{i \in E} \left( \sigma_i^h (\vert h_i + \nabla h_i s^{t} \vert - \vert h_i \vert)
- \sum \limits_{\tau = 1}^{t} \lambda_i^{h,\tau} \nabla h_i (s^{\tau} - s^{\tau-1}) \right) \\ \nonumber
& & + \sum \limits_{i \in I} \left( \sigma_i^g ( (g_i + \nabla g_i s^{t})^+ - (g_i)^+ )
- \sum \limits_{\tau = 1}^{t} \lambda_i^{g,\tau} \nabla g_i (s^{\tau} - s^{\tau-1}) \right) \\ \nonumber
&& + \sum \limits_{i \in V_1^{t+j}} \sigma_i^F d(F_i + \nabla F_i s^t,P^1)
+ \sum \limits_{i \in V_2^{t+j}} \sigma_i^F d(F_i + \nabla F_i s^t,P^2) \\ \nonumber
&& - \sum \limits_{i \in V_1^{1}} \sigma_i^F d(F_i,P^1)
- \sum \limits_{i \in V_2^{1}} \sigma_i^F d(F_i,P^2)
- \sum \limits_{i \in V} \sum \limits_{\tau = 1}^{t} (\lambda_i^{F,\tau})^T \nabla F_i (s^{\tau} - s^{\tau-1}).
\end{eqnarray}
First, we claim that
\begin{equation} \label{eq : MultiplTerm}
- \sum \limits_{i \in V} \sum \limits_{\tau = 1}^{t} (\lambda_i^{F,\tau})^T \nabla F_i (s^{\tau} - s^{\tau-1})
\leq \sum \limits_{i \in V} \tilde{\lambda}_{i}^F (1 - \delta^t) d(F_i,P).
\end{equation}
Consider $i \in V$ and $\tau \in \{1, \ldots, t\}$ with $i \in V_1^{\tau}$. By the feasibility of $(s^{\tau},\delta^{\tau})$
and $(s^{\tau-1},\delta^{\tau-1})$ for $QP(\rho, V_1^{\tau})$ it follows that
\[\delta^{\tau} (\theta_i^H H_i, - \theta_i^G G_i)^T + F_i + \nabla F_i s^{\tau} \in P^1, \quad
\delta^{\tau-1} (\theta_i^H H_i, - \theta_i^G G_i)^T + F_i + \nabla F_i s^{\tau-1} \in P^1\]
and hence from \eqref{eqn : ComplemCon2} and \eqref{eq : convexNC} we conclude
\[- (\lambda_i^{F,\tau})^T \left( \nabla F_i (s^{\tau} - s^{\tau-1}) + (\delta^{\tau} - \delta^{\tau-1})
(\theta_i^H H_i, - \theta_i^G G_i)^T \right) \leq 0\]
and consequently
\begin{equation} \label{eq : MultiplBound}
- (\lambda_i^{F,\tau})^T \nabla F_i (s^{\tau} - s^{\tau-1}) \leq (\lambda_i^{F,\tau})^T
(\delta^{\tau} - \delta^{\tau-1}) (\theta_i^H H_i, - \theta_i^G G_i)^T \leq
\tilde{\lambda}_{i}^F (\delta^{\tau-1} - \delta^{\tau}) d(F_i,P)
\end{equation}
follows by the H\"older inequality and \eqref{eq : betaeff}.
Analogous argumentation yields \eqref{eq : MultiplBound} also for $i, \tau$ with $i \in V_2^{\tau}$
and since $V_1^{\tau},V_2^{\tau}$ form a partition of $V$, the claimed inequality \eqref{eq : MultiplTerm} follows.
Further, we claim that for $j=0,1$ it holds that
\begin{equation} \label{eq : FinalTerm}
\sum \limits_{i \in V_1^{t+j}} \sigma_i^F d(F_i + \nabla F_i s^t,P^1)
+ \sum \limits_{i \in V_2^{t+j}} \sigma_i^F d(F_i + \nabla F_i s^t,P^2) \leq
\sum \limits_{i \in V} \sigma_{i}^F \delta^t d(F_i,P).
\end{equation}
For $i \in V_1^t \cup V_1^{t+1}$, the feasibility of $(s^t,\delta^t)$ for $QP(\rho,V_1^{t})$ or $QP(\rho,V_1^{t+1})$, respectively,
yields
\[\delta^{t} (\theta_i^H H_i, - \theta_i^G G_i)^T + F_i + \nabla F_i s^{t} \in P^1\]
and hence, using \eqref{eq : betaeff} and \eqref{eq : DistIfIn},
\begin{equation} \label{eq : FinalBound}
\sigma_i^F d(F_i + \nabla F_i s^{t}, P^1) \leq \sigma_i^F \norm{\delta^{t} (\theta_i^H H_i, - \theta_i^G G_i)^T}_1 =
\sigma_i^F \delta^{t} d(F_i,P).
\end{equation}
Again, for $i \in V_2^{t}$ or $i \in V_2^{t+1}$ it holds that
$\sigma_i^F d(F_i + \nabla F_i s^{t}, P^2) \leq \sigma_i^F \delta^{t} d(F_i,P)$ by analogous argumentation
and since $V_1^{t},V_2^{t}$ and $V_1^{t+1},V_2^{t+1}$ form a partition of $V$,
the claimed inequality \eqref{eq : FinalTerm} follows.
Finally, we have
\begin{equation} \label{eq : InitialTerm}
- \sum \limits_{i \in V_1^{1}} \sigma_i^F d(F_i,P^1) - \sum \limits_{i \in V_2^{1}} \sigma_i^F d(F_i,P^2) =
- \sum \limits_{i \in V} \sigma_{i}^F d(F_i,P),
\end{equation}
due to the fact that $V_1^{1},V_2^{1}$ form a partition of $V$ and \eqref{eq : InitPart}.
Similar arguments as above show
\begin{eqnarray*}
\sigma_i^h (\vert h_i + \nabla h_i s^{t} \vert - \vert h_i\vert) -
\sum \limits_{\tau = 1}^{t} \lambda_i^{h,\tau} \nabla h_i (s^{\tau} - s^{\tau-1}) & \leq &
(\sigma_i^h - \tilde{\lambda}_i^{h}) (\delta^{t} - 1) \vert h_i \vert, i \in E, \\
\sigma_i^g ( (g_i + \nabla g_i s^{t})^+ - (g_i)^+ ) -
\sum \limits_{\tau = 1}^{t} \lambda_i^{g,\tau} \nabla g_i (s^{\tau} - s^{\tau-1}) & \leq &
(\sigma_i^g - \tilde{\lambda}_i^{g}) (\delta^{t} - 1) ( g_i )^+, i \in I.
\end{eqnarray*}
Taking this into account and putting together \eqref{eqn : TheLong}, \eqref{eq : MultiplTerm}, \eqref{eq : FinalTerm} and \eqref{eq : InitialTerm}
we obtain for $j=0,1$
\begin{eqnarray*}
r^{t+j}_{1-j} & \leq & - \sum \limits_{\tau = 1}^{t} \frac{1}{2} (s^{\tau} - s^{\tau-1})^T B (s^{\tau} - s^{\tau-1}) \\
&& - \sum \limits_{i \in V} (\sigma_i^F - \tilde{\lambda}_{i}^F) (1 - \delta^t) d(F_i,P)
- \sum \limits_{i \in E} (\sigma_i^h - \tilde{\lambda}_i^{h}) (1 - \delta^{t}) \vert h_i \vert
- \sum \limits_{i \in I} (\sigma_i^g - \tilde{\lambda}_i^{g}) (1 - \delta^{t}) ( g_i )^+
\end{eqnarray*}
and hence \eqref{eqn : BoundMain} and \eqref{eqn : BoundMain2} follow by monotonicity
of $\delta$ and \eqref{EqPropSigma}. This completes the proof.
\end{proof}
\subsubsection{Searching for the next iterate}
We choose the next iterate as a point on the polygonal line connecting the points $s_k^0, \ldots, s_k^{N_k}$.
Each line segment $[s_k^{t-1},s_k^t]$ corresponds to one convex subproblem solved by Algorithm \ref{AlgSol},
and hence each line search function $\hat\phi_k^t$ corresponds to the usual $\ell_1$ merit function from
nonlinear programming. This makes it technically more involved to establish
the convergence behavior stated in Proposition \ref{pro : ToZeroAtN}, which motivates the following
procedure.
First we parametrize the polygonal line connecting the points $s_k^0, \ldots, s_k^{N_k}$
by its length as a curve $\hat s_k: [0,1] \to \mathbb{R}^n$ in the following way.
We define $t_k(1) := N_k$, for every $\gamma \in [0,1)$
we denote by $t_k(\gamma)$ the smallest number $t$ such that $S_k^t > \gamma S_k^{N_k}$ and we set $\alpha_k(1) := 1$,
\[\alpha_k(\gamma) := \frac{\gamma S_k^{N_k} - S_k^{t_k(\gamma)-1}}{S_k^{t_k(\gamma)} - S_k^{t_k(\gamma)-1}}, \gamma \in [0,1), \]
where $S_k^0 := 0, S_k^t := \sum_{\tau=1}^{t} \Vert s_k^{\tau} - s_k^{\tau-1} \Vert$ for $t=1, \ldots, N_k$.
Then we define
\[ \hat s_k (\gamma) = s_k^{t_k(\gamma)-1} + \alpha_k(\gamma)(s_k^{t_k(\gamma)} - s_k^{t_k(\gamma)-1}).\]
Note that $\norm{\hat s_k (\gamma)} \leq \gamma S_k^{N_k}$.
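The following Python sketch (an illustration only) evaluates this parametrization; {\tt s\_list} is assumed to contain the points $s_k^0,\dots,s_k^{N_k}$ as arrays.
\begin{verbatim}
import numpy as np

def polygonal_point(s_list, gamma):
    # hat s_k(gamma): point on the polygonal line through s^0, ..., s^N,
    # parametrized by normalized length; assumes the total length is positive
    S = np.cumsum([0.0] + [np.linalg.norm(s_list[t] - s_list[t - 1])
                           for t in range(1, len(s_list))])
    if gamma >= 1.0:
        return s_list[-1]
    target = gamma * S[-1]
    t = next(tt for tt in range(1, len(s_list)) if S[tt] > target)   # t_k(gamma)
    alpha = (target - S[t - 1]) / (S[t] - S[t - 1])                  # alpha_k(gamma)
    return s_list[t - 1] + alpha * (s_list[t] - s_list[t - 1])
\end{verbatim}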
In order to simplify the proof of Proposition \ref{pro : ToZeroAtN},
for $\gamma \in [0,1]$ we further consider the following line search functions
\begin{equation}
\begin{array}{c}
Y_k(\gamma) := \phi_k^{t_k(\gamma)}(\alpha_k(\gamma)), \quad \hat Y_k(\gamma) := \hat\phi_k^{t_k(\gamma)}(\alpha_k(\gamma)), \\
Z_k(\gamma) := (1 - \alpha_k(\gamma)) \hat\phi_k^{t_k(\gamma)}(0) + \alpha_k(\gamma) \hat\phi_k^{t_k(\gamma)}(1).
\end{array}
\end{equation}
Now consider some sequence of positive numbers $\gamma^k_1 = 1, \gamma^k_2, \gamma^k_3, \ldots$ with
$1 > \bar \gamma \geq \gamma^k_{j+1} / \gamma^k_j \geq \underline \gamma > 0$ for all $j \in \mathbb{N}$.
Let $j(k)$ denote the smallest $j$ such that, for some given constant $\xi \in (0,1)$, one has
\begin{equation} \label{eq : NextIterCond}
Y_k(\gamma^k_j) - Y_k(0) \leq \xi \left( Z_k(\gamma_{j}^{k}) - Z_k(0) \right).
\end{equation}
Then the new iterate is given by
\[ x_{k+1} := x_k + \hat s_k(\gamma^k_{j(k)}).\]
As can be seen from the proof of Lemma \ref{lem : NextIterCons}, this choice ensures a decrease in merit function
$\Phi$ defined in the next subsection.
The following relations are direct consequences of the properties of $\phi_k^t$ and $\hat\phi_k^t$
\begin{equation} \label{eq : PropOfNewLSF}
\vert Y_k(\gamma) - \hat Y_k(\gamma) \vert = o(\gamma S_k^{N_k}), \quad
\hat Y_k(\gamma) \leq Z_k(\gamma), \quad
Z_k(\gamma) - Z_k(0) \leq 0.
\end{equation}
The last property holds due to Proposition \ref{pro : MainMerit} and
\begin{equation} \label{eq : Z}
Z_k(\gamma) - Z_k(0) = (1 - \alpha_k(\gamma)) r_{k,0}^{t_k(\gamma)} + \alpha_k(\gamma) r_{k,1}^{t_k(\gamma)},
\end{equation}
which follows from $\alpha_k(0) = 0$, $S_k^{t_k(0)-1} = 0$ and hence $\hat\phi_k^{t_k(0)}(0) = \hat\phi_k^{1}(0)$.
We recall that $r^t_{k,0}$ and $r^{t}_{k,1}$ are defined by \eqref{eq : DefOfR}.
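The acceptance test \eqref{eq : NextIterCond} then amounts to a simple backtracking loop over the trial values $\gamma^k_j$, as in the following sketch (illustration only; {\tt Y} and {\tt Z} are callables evaluating $Y_k$ and $Z_k$, and the constant geometric factor {\tt shrink} is one particular choice of the ratios $\gamma^k_{j+1}/\gamma^k_j$).
\begin{verbatim}
def find_gamma(Y, Z, xi=0.5, shrink=0.5, max_iter=60):
    # returns the first gamma_j = shrink**(j-1) satisfying (NextIterCond)
    gamma = 1.0
    Y0, Z0 = Y(0.0), Z(0.0)
    for _ in range(max_iter):
        if Y(gamma) - Y0 <= xi * (Z(gamma) - Z0):
            return gamma
        gamma *= shrink
    return gamma   # safeguard; the well-definedness lemma guarantees termination
\end{verbatim}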
\begin{lemma} \label{lem : WellDef}
The new iterate $x_{k+1}$ is well defined.
\end{lemma}
\begin{proof}
In order to show that the new iterate is well defined, we have to prove the existence of some $j$ such that \eqref{eq : NextIterCond}
is fulfilled. Note that $S_k^{t_k(0) - 1} = 0$ and $S_k^{t_k(0)} > 0$. There is some $\delta_k > 0$ such that
$\vert Y_k(\gamma) - \hat{Y}_k(\gamma) \vert \leq \frac{-(1 - \xi)r^{t_k(0)}_{k,1} \gamma S_k^{N_k}}{S_k^{t_k(0)}}$,
whenever $\gamma S_k^{N_k} \leq \delta_k$. Since $\lim_{j \to \infty} \gamma_j^k = 0$, we can choose $j$ sufficiently large
to fulfill $\gamma_j^k S_k^{N_k} < \min \{ \delta_k, S_k^{t_k(0)} \}$ and then $t_k(\gamma_j^k) = t_k(0)$ and
$\alpha_k(\gamma_j^k) = \gamma_j^k S_k^{N_k} / S_k^{t_k(0)}$, since $S_k^{t_k(0) - 1} = 0$.
This yields
\begin{equation} \label{eq : MainTrick}
Y_k(\gamma_j^k) - \hat{Y}_k(\gamma_j^k) \leq
- (1 - \xi) \alpha_k(\gamma_j^k) r^{t_k(\gamma_j^k)}_{k,1}.
\end{equation}
Then by second property of \eqref{eq : PropOfNewLSF}, \eqref{eq : Z}, taking into account
$r^{t_k(\gamma_j^k)}_{k,0} \leq 0$ by Proposition \ref{pro : MainMerit}
and $Y_k(0) = Z_k(0)$ we obtain
\begin{eqnarray*}
Y_k(\gamma_j^k) - Y_k(0) & \leq &
\hat Y_k(\gamma_j^k) - Y_k(0) - (1 - \xi) \alpha_k(\gamma_j^k) r^{t_k(\gamma_j^k)}_{k,1} \\
& \leq & \xi (Z_k(\gamma_j^k) - Z_k(0)) + (1 - \xi) \left( Z_k(\gamma_j^k) - Z_k(0) - \alpha_k(\gamma_j^k) r^{t_k(\gamma_j^k)}_{k,1} \right) \\
& \leq & \xi (Z_k(\gamma_j^k) - Z_k(0)) + (1 - \xi) (1 - \alpha_k(\gamma_j^k)) r^{t_k(\gamma_j^k)}_{k,0} \leq \xi (Z_k(\gamma_j^k) - Z_k(0)).
\end{eqnarray*}
Thus \eqref{eq : NextIterCond} is fulfilled for this $j$ and the lemma is proved.
\end{proof}
\subsection{Convergence of the basic algorithm}
We consider the behavior of the Algorithm \ref{AlgMPCC} when it does not prematurely stop and it generates an infinite
sequence of iterates
\[x_k, B_k, (s_k^{t}, \delta_k^{t}), \lambda_k^{t}, (V_{1,k}^{t}, V_{2,k}^{t}), t = 0, \ldots, N_k \textrm{ and }
\theta_k, \underline\lambda_k^{N_k}, \overline\lambda_k^{N_k}.\]
Note that $\delta_k^{N_k} < \zeta$. We discuss the convergence behavior under the following assumption.
\begin{assumption} \label{ass : AlgMPCC}
\begin{enumerate}
\item There exist constants $C_x, C_s, C_{\lambda}$ such that
$$\norm{x_k} \leq C_x, \quad S_k^{N_k} \leq C_s, \quad
\hat \lambda_k^h, \hat \lambda_k^g, \hat \lambda_k^F \leq C_{\lambda}$$
for all $k$, where $\hat \lambda_k^h := \max_{i \in E} \{ \tilde \lambda_{i,k}^h \}$,
$\hat \lambda_k^g := \max_{i \in I} \{ \tilde \lambda_{i,k}^g \}$,
$\hat \lambda_k^F := \max_{i \in V} \{ \tilde \lambda_{i,k}^F \}$.
\item There exist constants $\bar C_{B}, \underbar C_{B}$ such that $\underbar{C}_B \leq \lambda(B_k), \norm{B_k} \leq \bar C_B$
for all $k$, where $\lambda(B_k)$ denotes the smallest eigenvalue of $B_k$.
\end{enumerate}
\end{assumption}
For our convergence analysis we need one more merit function
\[
\Phi_k(x) := f(x) + \sum \limits_{i \in E} \sigma_{i,k}^h \vert h_i(x) \vert + \sum \limits_{i \in I} \sigma_{i,k}^g ( g_i(x) )^+
+ \sum \limits_{i \in V} \sigma_{i,k}^F d(F_i(x),P).
\]
\begin{lemma} \label{lem : GeneralMerit}
For each $k$ and for any $\gamma \in [0,1]$ it holds that
\begin{equation} \label{eq: generals}
\Phi_k(x_k + \hat s_k(\gamma)) \leq Y_k(\gamma) \quad \textrm{ and } \quad \Phi_k(x_k) = Y_k(0).
\end{equation}
\end{lemma}
\begin{proof}
The first claim follows from the definitions of $\Phi_k$ and $Y_k$ and the estimate
\[d(F_i(x_k + s),P^1), d(F_i(x_k + s),P^2) \geq \min\{ d(F_i(x_k + s),P^1), d(F_i(x_k + s),P^2) \} =
d(F_i(x_k + s),P),\]
which holds by \eqref{eq : DistOfCup}. The second claim follows from \eqref{eq : InitPart}.
\end{proof}
A simple consequence of the way that we define the penalty parameters in \eqref{eqn : PenalParam} is the following lemma.
\begin{lemma} \label{lem : Sigmas}
Under Assumption \ref{ass : AlgMPCC} there exists some $\bar k$ such that for all $k \geq \bar k$ the penalty parameters remain constant,
i.e. $\sigma_{k} = \bar \sigma := \sigma_{\bar k}$, and consequently $\Phi_k(x) = \Phi_{\bar k}(x)$.
\end{lemma}
\begin{remark}
Note that we do not use $\Phi_k$ for calculating the new iterate because its first order approximation is in general not convex
on the line segments connecting $s_k^{t-1}$ and $s_k^{t}$ due to the involved min operation.
\end{remark}
\begin{lemma} \label{lem : NextIterCons}
Assume that Assumption \ref{ass : AlgMPCC} is fulfilled. Then
\begin{equation} \label{eq : FToZero}
\lim_{k \to \infty} Y_k(\gamma_{j(k)}^{k}) - Y_k(0) = 0.
\end{equation}
\end{lemma}
\begin{proof}
Take $\bar k$ as given by Lemma \ref{lem : Sigmas}. Then we have for $k \geq \bar k$
\[ \Phi_{k+1}(x_{k+1}) = \Phi_{\bar k}(x_{k+1}) = \Phi_{\bar k}(x_{k} + \hat s_k(\gamma^k_{j(k)})) = \Phi_{k}(x_{k} + \hat s_k(\gamma^k_{j(k)}))
\leq Y_k(\gamma_{j(k)}^{k}) < Y_k(0) = \Phi_k(x_k) \]
and therefore $\Phi_{k+1}(x_{k+1}) - \Phi_k(x_k) \leq Y_k(\gamma_{j(k)}^{k}) - Y_k(0) < 0$.
Hence the sequence $\Phi_k(x_k)$ is monotonically decreasing and therefore convergent, because it is bounded below by Assumption \ref{ass : AlgMPCC}.
Hence
\[ - \infty < \lim_{k \to \infty} \Phi_k(x_k) - \Phi_{\bar k}(x_{\bar k}) = \sum_{k = \bar k}^{\infty} (\Phi_{k+1}(x_{k+1}) - \Phi_k(x_k))
\leq \sum_{k = \bar k}^{\infty} (Y_k(\gamma_{j(k)}^{k}) - Y_k(0)) \]
and the assertion follows.
\end{proof}
\begin{proposition} \label{pro : ToZeroAtN}
Assume that Assumption \ref{ass : AlgMPCC} is fulfilled. Then
\begin{equation} \label{eq : RTtoZero}
\lim_{k \to \infty} \hat Y_k(1) - \hat Y_k(0) = 0
\end{equation}
and consequently
\begin{equation} \label{eq : STtoZero}
\lim_{k \to \infty} \norm{s_k^{N_k}} = 0.
\end{equation}
\end{proposition}
\begin{proof}
We prove \eqref{eq : RTtoZero} by contradiction. Assume that \eqref{eq : RTtoZero} does not hold.
Since $\hat Y_k(1) - \hat Y_k(0) \leq 0$ by Proposition \ref{pro : MainMerit}, there
exist some $\bar r < 0$ and a subsequence $K = \{ k_1, k_2, \ldots \}$ such that $\hat Y_k(1) - \hat Y_k(0) \leq \bar r < 0$ for all $k \in K$.
By passing to a subsequence we can assume that for all $k \in K$ we have $k \geq \bar k $ with $\bar k$ given by Lemma \ref{lem : Sigmas}
and $N_k = \bar N$, where we have taken into account \eqref{eq : StepsBound}. By passing to a subsequence once more
we can also assume that
\[ \lim_{k \setto K \infty} S_k^t = \bar S^t, \lim_{k \setto K \infty} r_{k,1}^t = \bar r_1^t,
\lim_{k \setto K \infty} r_{k,0}^t = \bar r_0^t, \,\, \forall t \in \{1, \ldots, \bar N\}, \]
where $r_{k,1}^t$ and $r_{k,0}^t$ are defined by \eqref{eq : DefOfR}.
Note that $\bar r_1^{\bar N} \leq \bar r < 0$.
Let us first consider the case $\bar{S}^{\bar N} = 0$.
There exists $\delta > 0$ such that $\vert Y_k(\gamma) - \hat{Y}_k(\gamma) \vert
\leq (\xi - 1) \bar{r}_1^{\bar N} \gamma S_k^{\bar N} \, \forall k \in K,$
whenever $\gamma S_k^{\bar N} \leq \delta$.
Since $\bar{S}^{\bar N} = 0$ we can assume that
$S_k^{\bar N} \leq \min \{ \delta, 1/2 \} \, \forall k \in K$. Then
\[
Y_k(1) - Y_k(0) \leq r_{k,1}^{\bar N} + (\xi - 1) \bar{r}_1^{\bar N} S_k^{\bar N}
\leq r_{k,1}^{\bar N} + (\xi - 1) r_{k,1}^{\bar N} = \xi r_{k,1}^{\bar N} = \xi (Z_k(1) - Z_k(0)) \leq \frac{\xi \bar{r}_1^{\bar N}}{2} < 0
\]
and this implies that for the next iterate we have $j(k) = 1$ and hence $\gamma_{j(k)}^k = 1$, contradicting \eqref{eq : FToZero}.
Now consider the case $\bar{S}^{\bar N} \neq 0$ and let us define the number $\bar \tau := \max \{ t \,\vert\, \bar{S}^{t} = 0 \} + 1$.
Note that Proposition \ref{pro : MainMerit} yields
\begin{equation} \label{eq : RvsSrelat}
r_{k,1}^t, r_{k,0}^{t+1} \leq - \frac{\lambda(B_k)}{2} \sum_{\tau = 1}^{t} \norm{s_k^{\tau} - s_k^{\tau -1}}^2
\leq - \frac{\underbar C_{B}}{2} \frac{1}{t} \left( \sum_{\tau = 1}^{t} \norm{s_k^{\tau} - s_k^{\tau -1}} \right)^2
= - \frac{\underbar C_{B}}{2} \frac{1}{t} (S_k^t)^2
\end{equation}
and therefore $\tilde r := \max_{t > \bar \tau} \bar r^t < 0$, where $\bar r^t := \max\{ \bar r_0^t, \bar r_1^t \}$.
By passing to a subsequence we can assume
that for every $t > \bar \tau$ and every $k \in K$ we have $r_{k,0}^t,r_{k,1}^{t} \leq \frac{\bar r^t}{2}$.
Now assume that for infinitely many $k \in K$ we have $\gamma_{j(k)}^k S_k^{\bar N} \geq S_k^{\bar \tau}$, i.e.
$t_k(\gamma_{j(k)}^{k}) > \bar \tau$.
Then we conclude
\[
Y_k(\gamma_{j(k)}^{k}) - Y_k(0) \leq \xi (Z_k(\gamma_{j(k)}^{k}) - Z_k(0)) =
\xi \left( (1 - \alpha_k(\gamma_{j(k)}^{k})) r_{k,0}^{t_k(\gamma_{j(k)}^{k})}
+ \alpha_k(\gamma_{j(k)}^{k}) r_{k,1}^{t_k(\gamma_{j(k)}^{k})} \right) \leq \frac{\xi \tilde r}{2} < 0
\]
contradicting \eqref{eq : FToZero}. Hence for all but finitely many $k \in K$, without
loss of generality for all $k \in K$, we have $\gamma_{j(k)}^k S_k^{\bar N} < S_k^{\bar \tau}$.
There exists $\delta > 0$ such that
\begin{equation} \label{eq : MainEstim}
\vert Y_k(\gamma) - \hat{Y}_k(\gamma) \vert \leq \frac{\vert \bar{r}^{\bar{\tau}} \vert (1 - \xi) \underline \gamma
\gamma S_k^{\bar N}}{8 \bar S^{\bar \tau}} \, \forall k \in K,
\end{equation}
whenever $\gamma S_k^{\bar N} \leq \delta$.
By eventually choosing $\delta$ smaller we can assume $\delta \leq \bar S^{\bar \tau} / 2$ and by passing to a subsequence
if necessary we can also assume that for all $k \in K$ we have
\begin{equation} \label{eq : FirstAs}
2 S_k^{\bar \tau-1} / \underline \gamma \leq \delta < S_k^{\bar \tau} \leq 2 \bar S^{\bar \tau}.
\end{equation}
Now let for each $k$ the index $\tilde j(k)$ denote the smallest $j$ with $\gamma_j S_k^{\bar N} \leq \delta$.
It obviously holds that $\gamma_{\tilde j(k)-1}^{k} S_k^{\bar N} > \delta$ and by \eqref{eq : FirstAs} we obtain
\[S_k^{\bar \tau-1} \leq \underline \gamma \delta \leq \underline \gamma \gamma_{\tilde j(k)-1}^{k} S_k^{\bar N}
\leq \gamma_{\tilde j(k)}^{k} S_k^{\bar N} \leq \delta < S_k^{\bar \tau}\]
implying $t_k(\gamma_{\tilde j(k)}^{k}) = \bar \tau$ and
\[\alpha_k(\gamma_{\tilde j(k)}^{k}) \geq \frac{\underline{\gamma} \delta - S_k^{\bar \tau -1}}
{S_k^{\bar \tau} - S_k^{\bar \tau -1}} \geq \frac{\underline{\gamma} \delta}{4 \bar S^{\bar \tau}}\]
by \eqref{eq : FirstAs}.
Taking this into account together with \eqref{eq : MainEstim} and $\gamma_{\tilde j(k)}^{k} S_k^{\bar N} \leq \delta$ we conclude
\[
Y_k(\gamma_{\tilde j(k)}^{k}) - \hat{Y}_k(\gamma_{\tilde j(k)}^{k}) \leq
\frac{\vert \bar{r}^{\bar{\tau}} \vert (1 - \xi) \underline{\gamma} \gamma_{\tilde j(k)}^{k} S_k^{\bar N}}{8 \bar S^{\bar \tau}}
\leq - (1 - \xi) \frac{\underline{\gamma} \delta}{4 \bar S^{\bar \tau}}r_{k,1}^{\bar \tau}
\leq - (1 - \xi) \alpha_k(\gamma_{\tilde j(k)}^{k}) r_{k,1}^{t_k(\gamma_{\tilde j(k)}^{k})}.
\]
Now we can proceed as in the proof of Lemma \ref{lem : WellDef} to show that $\tilde j(k)$ fulfills \eqref{eq : NextIterCond}.
However, this yields $\tilde j(k) \geq j(k)$ by definition of $j(k)$ and hence
$\gamma_{j(k)}^{k} S_k^{\bar N} \geq \gamma_{\tilde j(k)}^{k} S_k^{\bar N} \geq S_k^{\bar \tau-1}$
showing $t_k(\gamma_{j(k)}^{k}) = t_k(\gamma_{\tilde j(k)}^{k}) =\bar \tau$. But then we also have
$\alpha_k(\gamma_{j(k)}^{k}) \geq \alpha_k(\gamma_{\tilde j(k)}^{k}) \geq \frac{\underline{\gamma} \delta}{4 \bar S^{\bar \tau}}$ and from
\eqref{eq : NextIterCond} we obtain
\[
Y_k(\gamma_{j(k)}^{k}) - Y_k(0) \leq \xi (Z_k(\gamma_{j(k)}^{k}) - Z_k(0)) \leq \xi \alpha_k(\gamma_{j(k)}^{k}) r_{k,1}^{t_k(\gamma_{j(k)}^{k})}
\leq \frac{\xi \underline{\gamma} \delta \tilde r}{8 \bar S^{\bar \tau}} < 0
\]
contradicting \eqref{eq : FToZero} and so \eqref{eq : RTtoZero} is proved.
Condition \eqref{eq : STtoZero} now follows from \eqref{eq : RTtoZero} because we conclude from \eqref{eq : RvsSrelat} that
$\hat Y_k(1) - \hat Y_k(0) \leq - \frac{\underbar C_{B}}{2} \frac{1}{N_k} (S_k^{N_k})^2
\leq - \frac{\underbar C_{B}}{2} \frac{1}{N_k} \norm{s_k^{N_k}}^2$.
\end{proof}
Now we are ready to state the main result of this section.
\begin{theorem} \label{The : Mstat}
Let Assumption \ref{ass : AlgMPCC} be fulfilled. Then every limit point of the sequence of iterates $x_k$
is at least M-stationary for problem \eqref{eq : genproblem}.
\end{theorem}
\begin{proof}
Let $\bar{x}$ denote a limit point of the sequence $x_k$ and let $K$ denote a subsequence such that
$\lim_{k \setto K \infty} x_k = \bar x$. Further let $\underline \lambda$ be a limit point of the bounded sequence
$\underline \lambda_k^{N_k}$ and assume without loss of generality that
$\lim_{k \setto K \infty} \underline \lambda_k^{N_k} = \underline \lambda$.
First we show feasibility of $\bar{x}$ for the problem \eqref{eq : genproblem} together with
\begin{equation} \label{eq : FinalMultInNC}
\underline \lambda_i^g \geq 0 = \underline \lambda_i^g g_i(\bar x), i \in I \quad \textrm{ and } \quad
(\underline \lambda^{H}, \underline \lambda^{G}) \in N_{P^{\vert V \vert}}(F(\bar x)).
\end{equation}
Consider $i \in I$. For all $k$ it holds that
\[0 \geq \left( (1 - \theta_{i,k}^{g} \delta_k^{N_k}) g_i(x_k) + \nabla g_i(x_k) s_k^{N_k} \right) \perp
\underline \lambda_{i,k}^{g,N_k} \geq 0.\]
Since $0 \leq \delta_k^{N_k} \leq \zeta$ and $\theta_{i,k}^{g} \in \{0,1\}$, we have
$1 \geq (1 - \theta_{i,k}^{g} \delta_k^{N_k}) \geq 1 - \zeta$, and together with $s_k^{N_k} \to 0$ by Proposition \ref{pro : ToZeroAtN}
we conclude
\[ 0 \geq \limsup_{k \setto K \infty} \left( g_i(x_k) + \frac{\nabla g_i(x_k) s_k^{N_k}}{(1 - \theta_{i,k}^{g} \delta_k^{N_k})} \right)
= g_i(\bar x),\]
$\underline \lambda_i^g \geq 0$ and
\[0 = \lim_{k \setto K \infty} \underline \lambda_{i,k}^{g,N_k} \left( g_i(x_k) + \frac{\nabla g_i(x_k) s_k^{N_k}}
{(1 - \theta_{i,k}^{g} \delta_k^{N_k})} \right)
= \underline \lambda_i^g g_i(\bar x).\]
Hence $\underline \lambda_i^g \geq 0 = \underline \lambda_i^g g_i(\bar x)$.
Similar arguments show that for every $i \in E$ we have
\[ 0 = \lim_{k \setto K \infty} \left( h_i(x_k) + \frac{\nabla h_i(x_k) s_k^{N_k}}{(1 - \delta_k^{N_k})} \right)
= h_i(\bar x).\]
Finally consider $i \in V$. Taking into account \eqref{eq : DistIfIn}, \eqref{eq : betaeff} and $\delta_k^{N_k} \leq \zeta$ we obtain
\begin{eqnarray*}
d(F_i(x_k),P) & \leq & \norm{ \delta_k^{N_k} (\theta_{i,k}^H H_i(x_k), - \theta_{i,k}^G G_i(x_k))^T + \nabla F_i(x_k) s_k^{N_k}}_1 \\
& \leq & \zeta d(F_i(x_k),P) + \norm{\nabla F_i(x_k) s_k^{N_k}}_1.
\end{eqnarray*}
Hence, $\nabla F_i(x_k) s_k^{N_k} \to 0$ by Proposition \ref{pro : ToZeroAtN} implies
\[(1-\zeta)d(F_i(\bar x),P) = \lim_{k \setto K \infty} (1-\zeta) d(F_i(x_k),P) \leq
\lim_{k \setto K \infty} \norm{\nabla F_i(x_k) s_k^{N_k}}_1 = 0,\]
showing the feasibility of $\bar x$. Moreover, the previous arguments also imply
\begin{equation} \label{eq : AuxFtoF}
\tilde F_i(x_k,s_k^{N_k},\delta_k^{N_k}) := \delta_k^{N_k} (\theta_{i,k}^H H_i(x_k), - \theta_{i,k}^G G_i(x_k))^T
+ F_i(x_k) + \nabla F_i(x_k) s_k^{N_k} \setto K F_i(\bar x).
\end{equation}
Taking into account \eqref{eq : MstatLNC}, the fact that $\underline \lambda_k^{N_k}$ fulfills M-stationarity conditions
at $(s_k^{N_k},\delta_k^{N_k})$ for \eqref{eq : deltaprob} yields
\[(\underline \lambda_{k}^{H,N_k}, \underline \lambda_{k}^{G,N_k}) \in N_{P^{\vert V \vert}}(\tilde F(x_k,s_k^{N_k},\delta_k^{N_k})).\]
However, this together with $(\underline \lambda_{k}^{H,N_k}, \underline \lambda_{k}^{G,N_k}) \setto K
(\underline \lambda^{H}, \underline \lambda^{G})$, \eqref{eq : AuxFtoF}, and \eqref{eq : propLimitNC} yield
$(\underline \lambda^{H}, \underline \lambda^{G}) \in N_{P^{\vert V \vert}}(F(\bar x))$
and consequently \eqref{eq : FinalMultInNC} follows.
Moreover, by the first-order optimality conditions we have
\[
B_k s_k^{N_k} + \nabla f(x_k)^T + \sum \limits_{i \in E} \underline \lambda_{i,k}^{h,N_k} \nabla h_i(x_k)^T
+ \sum \limits_{i \in I} \underline \lambda_{i,k}^{g,N_k} \nabla g_i(x_k)^T
+ \sum \limits_{i \in V} \nabla F_i(x_k)^T \underline \lambda_{i,k}^{F,N_k} = 0
\]
for each $k$ and by passing to a limit and by taking into account that $B_ks_k^{N_k} \to 0$
by Proposition \ref{pro : ToZeroAtN} we obtain
\[
\nabla f(\bar{x})^T + \sum \limits_{i \in E} \underline\lambda_{i}^{h} \nabla h_i(\bar{x})^T
+ \sum \limits_{i \in I} \underline\lambda_{i}^{g} \nabla g_i(\bar{x})^T
+ \sum \limits_{i \in V} \nabla F_i(\bar{x})^T \underline \lambda_{i}^{F} = 0.\]
Hence, invoking \eqref{eq : MstatLNC} again, this together with the feasibility of $\bar x$ and \eqref{eq : FinalMultInNC}
implies M-stationarity of $\bar x$ and the proof is complete.
\end{proof}
\section{The extended SQP algorithm for MPVC}
In this section we investigate what can be done in order to secure $\mathcal{Q}_M$-stationarity of the limit points.
First, note that to prove M-stationarity of the limit points in Theorem \ref{The : Mstat} we only used that
$(\underline \lambda_{k}^{H,N_k}, \underline \lambda_{k}^{G,N_k}) \in N_{P^{\vert V \vert}}(\tilde F(x_k,s_k^{N_k},\delta_k^{N_k}))$,
i.e. it is sufficient to exploit only the M-stationarity of the solutions of auxiliary problems.
Further, recalling the comments after Lemma \ref{Lem : QstatQPpiece}, the solution $(s,\delta)$ of $QP(\rho, I^{1}(s,\delta) \cup I^{00}(s,\delta))$
is M-stationary for the auxiliary problem. Thus, in Algorithm \ref{AlgSol} for solving the auxiliary problem,
it is sufficient to consider only the last problem of the four problems \eqref{eqn : b1b2Stat},\eqref{eqn : I00EmptyStat}.
Moreover, the definition of the limiting normal cone \eqref{eq : LimitNCdef} reveals that, in general, the limiting process destroys any
stationarity stronger than M-stationarity, even S-stationarity.
Nevertheless, in practical situations it is likely that some assumption, securing that a stronger stationarity
will be preserved in the limiting process, may be fulfilled. E.g., let $\bar x$ be a limit point of $x_k$.
If we assume that for all $k$ sufficiently large it holds that
$I^{00}(\bar x) = I^{00}(s_k^{N_k},\delta_k^{N_k})$, then $\bar x$ is at least $\mathcal{Q}_M$-stationary for \eqref{eq : genproblem}.
This follows easily, since now for all $i \in I^{00}(\bar x)$ it holds that $\underline \lambda_{i,k}^{G,N_k} = 0$,
$\overline \lambda_{i,k}^{H,N_k}, \overline \lambda_{i,k}^{G,N_k} \geq 0$ and consequently
\[\underline \lambda_i^{G} = \lim_{k \to \infty} \underline \lambda_{i,k}^{G,N_k} = 0, \quad
\overline \lambda_i^{H} = \lim_{k \to \infty} \overline \lambda_{i,k}^{H,N_k} \geq 0, \quad
\overline \lambda_i^{G} = \lim_{k \to \infty} \overline \lambda_{i,k}^{G,N_k} \geq 0.\]
This observation suggests that to obtain a stronger stationarity of a limit point,
the key is to correctly identify the bi-active index set at the limit point
and it serves as a motivation for the extended version of our SQP method.
Before we can discuss the extended version, we summarize some preliminary results.
\subsection{Preliminary results}
Let $a: \mathbb{R}^n \rightarrow \mathbb{R}^p$ and $b: \mathbb{R}^n \rightarrow \mathbb{R}^q$ be continuously differentiable.
Given a vector $x \in \mathbb{R}^n$ we define the linear problem
\begin{equation} \label{eq : AuxdProb}
\begin{array}{lrl}
LP(x) & \min\limits_{d \in \mathbb{R}^{n}} & \nabla f(x) d \\
& \textrm{subject to } & \phantom{(b(x))^- +} \nabla a(x) d = 0, \\
&& (b(x))^- + \nabla b(x) d \leq 0, \\
&& -1 \leq d \leq 1.
\end{array}
\end{equation}
Note that $d=0$ is always feasible for this problem. Next we define a set $A$ by
\begin{equation} \label{eq : FeasSetDef}
A := \{x \in \mathbb{R}^n \,\vert\, a(x) = 0, b(x) \leq 0\}.
\end{equation}
Let $\bar x \in A$ and recall that the Mangasarian-Fromovitz constraint qualification (MFCQ) holds at $\bar x$
if the matrix $\nabla a(\bar x)$ has full row rank and there exists a vector $d \in \mathbb{R}^n$ such that
\[\nabla a(\bar x) d = 0, \quad \nabla b_i(\bar x) d < 0, \, i \in \mathcal I(\bar x) := \{i \in \{1, \ldots, q\} \,\vert\, b_i(\bar x) = 0\}.\]
Moreover, for a matrix $M$ we denote by $\norm{M}_p$ the norm given by
\begin{equation} \label{eq : MatNormDef}
\norm{M}_p := \sup \{ \norm{M u}_p \,\vert\, \norm{u}_{\infty} \leq 1\}
\end{equation}
and we also omit the index $p$ in case $p = 2$.
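Before turning to the auxiliary lemmas, we remark that $LP(x)$ is an ordinary linear program and can be handed to any standard LP solver. The following minimal sketch in Python (purely illustrative and not a description of the implementation discussed later; the callables \texttt{grad\_f}, \texttt{grad\_a}, \texttt{b\_fun}, \texttt{grad\_b} are hypothetical user-supplied routines, and $(b(x))^-$ is assumed to denote the componentwise negative part $\min(b(x),0)$) shows the structure of \eqref{eq : AuxdProb}:
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def solve_LP(x, grad_f, grad_a, b_fun, grad_b):
    # LP(x):  min  grad_f(x) d
    #         s.t. grad_a(x) d = 0,
    #              (b(x))^- + grad_b(x) d <= 0,  -1 <= d <= 1,
    # where (b)^- is taken componentwise as min(b, 0) (assumed notation).
    c = np.asarray(grad_f(x), dtype=float)
    Aeq = np.atleast_2d(np.asarray(grad_a(x), dtype=float))
    Aub = np.atleast_2d(np.asarray(grad_b(x), dtype=float))
    bub = -np.minimum(np.asarray(b_fun(x), dtype=float), 0.0)
    res = linprog(c,
                  A_ub=Aub if Aub.size else None,
                  b_ub=bub if Aub.size else None,
                  A_eq=Aeq if Aeq.size else None,
                  b_eq=np.zeros(Aeq.shape[0]) if Aeq.size else None,
                  bounds=[(-1.0, 1.0)] * c.size,
                  method="highs")
    return res.x, res.fun   # minimizer d and optimal value grad_f(x) d
\end{verbatim}
Since $d=0$ is feasible, the returned optimal value is always nonpositive.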
\begin{lemma} \label{lem : LP1}
Let $\bar x \in A$, assume that MFCQ holds at $\bar x$, and let $\bar d$ denote the solution of $LP(\bar x)$.
Then for every $\epsilon > 0$ there exists $\delta > 0$ such that if $\norm{x - \bar x} \leq \delta$ then
\begin{equation}
\nabla f(x) d \leq \nabla f(\bar x) \bar d + \epsilon,
\end{equation}
where $d$ denotes the solution of $LP(x)$.
\end{lemma}
\begin{proof}
Robinson's classical result (cf. \cite[Corollary 1, Theorem 3]{Ro76}), together with MFCQ at $\bar x$,
yields the existence of $\kappa > 0$ and $\tilde \delta > 0$ such that for every $x$ with $\norm{x - \bar x} \leq \tilde \delta$
there exists $\hat d$ with $\nabla a(x) \hat d = 0$, $(b(x))^- + \nabla b(x) \hat d \leq 0$ and
\[
\norm{\bar d - \hat d} \leq \kappa \max \{ \norm{ \nabla a(x) \bar d }, \norm{ ( (b(x))^- + \nabla b(x) \bar d )^+ } \}
=: \nu.
\]
Since $\norm{\hat d}_\infty \leq \norm{\hat d - \bar d + \bar d}_\infty \leq 1 + \nu$,
by setting $\tilde d := \hat d /(1 + \nu)$ we obtain that $\tilde d$ is feasible for $LP(x)$ and
\[\norm{\bar d - \tilde d} \leq \frac{1}{1 + \nu} \norm{\bar d - \hat d + \nu \bar d} \leq \frac{ (1 + \sqrt n) \nu}{1 + \nu}
\leq (1 + \sqrt n) \nu.\]
Thus, taking into account $\nabla a(\bar x) \bar d = 0$, $(b(\bar x))^- + \nabla b(\bar x) \bar d \leq 0$
and $\norm{\bar d}_{\infty} \leq 1$, we obtain
\[
\norm{\bar d - \tilde d} \leq (1 + \sqrt n) \kappa \max \{ \norm{ \nabla a(x) - \nabla a(\bar x) },
\norm{b(x) - b(\bar x)} + \norm{\nabla b(x) - \nabla b(\bar x)} \}.
\]
Hence, given $\epsilon > 0$, by continuity of objective and constraint functions as well as their derivatives at $\bar x$
we can define $\delta \leq \tilde \delta$ such that for all $x$ with $\norm{x - \bar x} \leq \delta$ it holds that
\[\norm{\nabla f(x) - \nabla f(\bar x)}_{1}, \ \norm{\nabla f(x)} \norm{\bar d - \tilde d} \ \leq \ \epsilon / 2.\]
Consequently, we obtain
\[
\nabla f (x) \tilde d \leq \norm{\nabla f(x)} \norm{\tilde d - \bar d} +
\norm{\nabla f(x) - \nabla f(\bar x)}_{1} \norm{\bar d}_{\infty} + \nabla f(\bar x) \bar d \leq \nabla f(\bar x) \bar d + \epsilon
\]
and since $\nabla f (x) d \leq \nabla f (x) \tilde d$ by feasibility of $\tilde d$ for $LP(x)$, the claim is proved.
\end{proof}
\begin{lemma} \label{lem : LP2}
Let $\nu \in (0,1)$ be a given constant and
for a vector of positive parameters $\omega = (\omega^{\mathcal E},\omega^{\mathcal I})$ let us define the following function
\begin{equation} \label{eq : varphiDef}
\varphi(x) := f(x) + \sum_{i \in \{1,\ldots,p\}} \omega_i^{\mathcal E} \vert a_i(x) \vert
+ \sum_{i \in \{1,\ldots,q\}} \omega_i^{\mathcal I} (b_i(x))^+.
\end{equation}
Further assume that there exist $\epsilon > 0$ and a compact set $C$ such that for all $x \in C$
it holds that $\nabla f(x) d \leq - \epsilon$, where $d$ denotes the solution of $LP(x)$.
Then there exists $\tilde \alpha > 0$ such that
\begin{equation} \label{eq : VarphiAlpha}
\varphi(x + \alpha d) - \varphi(x) \leq \nu \alpha \nabla f(x) d
\end{equation}
holds for all $x \in C$ and every $\alpha \in [0,\tilde \alpha]$.
\end{lemma}
\begin{proof}
The definition of $\varphi$, together with $u^+-v^+ \leq (u - v^+)^+$ for $u,v \in \mathbb{R}$, yields
\begin{equation} \label{eq : VarPhiBound}
\varphi(x + \alpha d) - \varphi(x) \leq f(x + \alpha d) - f(x) +
\norm{\omega}_{\infty} ( \norm{a(x + \alpha d) - a(x)}_1 + \norm{(b(x + \alpha d) - (b(x))^+)^+}_1).
\end{equation}
By uniform continuity of the derivatives of constraint functions and objective function on compact sets,
it follows that there exists $\tilde \alpha > 0$ such that for all $x \in C$ and every $h$ with $\norm{h}_{\infty} \leq \tilde \alpha$ we have
\begin{equation} \label{eq : ContDeriv}
\norm{\nabla f(x+h) - \nabla f(x)}_{1}, \, \, \norm{\omega}_{\infty} ( \norm{\nabla a(x + h) - \nabla a(x)}_1 +
\norm{\nabla b(x + h) - \nabla b(x)}_1 ) \leq \frac{1-\nu}{2} \epsilon.
\end{equation}
Hence, for all $x \in C$ and every $\alpha \in [0,\tilde \alpha]$ we obtain
\begin{eqnarray*}
f(x + \alpha d) - f(x) & = & \nu \alpha \nabla f(x) d + (1 - \nu) \alpha \nabla f(x) d +
\int_{0}^{1} (\nabla f(x + t \alpha d) - \nabla f(x)) \alpha d \mathrm{d} t \\
& \leq & \nu \alpha \nabla f(x) d - (1 - \nu) \alpha \epsilon + \frac{1-\nu}{2} \alpha \epsilon =
\nu \alpha \nabla f(x) d - \frac{1-\nu}{2} \alpha \epsilon.
\end{eqnarray*}
On the other hand, taking into account $\nabla a(x) d = 0$, $\norm{d}_{\infty} \leq 1$, \eqref{eq : ContDeriv} and
\[(b(x))^- + \alpha \nabla b(x) d = (1- \alpha)(b(x))^- + \alpha ((b(x))^- + \nabla b(x) d) \leq 0\]
we similarly obtain for all $x \in C$ and every $\alpha \in [0,\tilde \alpha]$
{\setlength\arraycolsep{2pt}
\begin{eqnarray*}
\lefteqn{\norm{\omega}_{\infty} ( \norm{a(x + \alpha d) - a(x)}_1 + \norm{(b(x + \alpha d) - (b(x))^+)^+}_1)} \\
& \leq & \norm{\omega}_{\infty} \Big( \norm{\smallint_{0}^{1} (\nabla a(x + t \alpha d) - \nabla a(x)) \alpha d \mathrm{d} t}_1
+ \norm{\smallint_{0}^{1} (\nabla b(x + t \alpha d) - \nabla b(x)) \alpha d \mathrm{d} t}_1 \Big)
\leq \frac{1-\nu}{2} \alpha \epsilon.
\end{eqnarray*}}
Consequently, \eqref{eq : VarphiAlpha} follows from \eqref{eq : VarPhiBound} and the proof is complete.
\end{proof}
\subsection{The extended version of Algorithm \ref{AlgMPCC}}
For every vector $x \in \mathbb{R}^n$ and every partition $(W_1 , W_2) \in \mathcal{P}(V)$
we define the linear problem
\begin{equation} \label{eq : dProb}
\begin{array}{lrll}
LP(x,W_1) & \min\limits_{d \in \mathbb{R}^{n}} & \nabla f(x) d & \\
& \textrm{subject to } & \phantom{(g_i(x))^- + } \nabla h_i(x) d = 0 & i \in E, \\
&& (g_i(x))^- + \nabla g_i(x) d \leq 0 & i \in I, \\
&& \phantom{(F_i(x))^- + } \nabla F_i(x) d \in P^1 & i \in W_1, \\
&& (F_i(x))^- + \nabla F_i(x) d \in P^2 & i \in W_2, \\
&& -1 \leq d \leq 1. &
\end{array}
\end{equation}
Note that $d=0$ is always feasible for this problem and that the problem $LP(x,W_1)$ coincides with the problem $LP(x)$ with $a,b$ given by
\begin{equation} \label{eq : abDef}
a := (h_i(x), i \in E, -H_i(x), i \in W_1)^T, \, b := (g_i(x), i \in I, -H_i(x), i \in W_2, G_i(x), i \in W_2)^T.
\end{equation}
The following proposition provides the motivation for introducing the problem $LP(x,W_1)$.
\begin{proposition} \label{Pro : SolLP}
Let $\bar x$ be feasible for \eqref{eq : genproblem}. Then $\bar x$ is $\mathcal{Q}$-stationary with respect to
$(\beta^1,\beta^2) \in \mathcal{P}(I^{00}(\bar x))$ if and only if the solutions $\bar d^1$ and $\bar d^2$ of the problems
$LP(\bar x,I^{0+}(\bar x) \cup \beta^1)$ and $LP(\bar x, I^{0+}(\bar x) \cup \beta^2)$ fulfill
\begin{equation} \label{eq : LinSolZero}
\min \{ \nabla f (\bar x) \bar d^1 , \nabla f (\bar x) \bar d^2 \} = 0.
\end{equation}
\end{proposition}
\begin{proof}
Feasibility of $d=0$ for $LP(\bar x,I^{0+}(\bar x) \cup \beta^1)$ and $LP(\bar x,I^{0+}(\bar x) \cup \beta^2)$
implies
\[\min \{ \nabla f (\bar x) \bar d^1 , \nabla f (\bar x) \bar d^2 \} \leq 0.\]
Denote by $\tilde d^1$ and $\tilde d^2$ the solutions
of $LP(\bar x,I^{0+}(\bar x) \cup \beta^1)$ and $LP(\bar x,I^{0+}(\bar x) \cup \beta^2)$
without the constraint $-1 \leq d \leq 1$, and denote these problems by
$\tilde{LP}^1$ and $\tilde{LP}^2$. Clearly, we have
\[\min \{ \nabla f (\bar x) \tilde d^1 , \nabla f (\bar x) \tilde d^2 \} \leq
\min \{ \nabla f (\bar x) \bar d^1 , \nabla f (\bar x) \bar d^2 \}.\]
The dual problem of $\tilde{LP}^j$ for $j=1,2$ is given by
\begin{equation} \label{eq : DualdProb}
\begin{array}{rl}
\max\limits_{\lambda \in \mathbb{R}^m}
& - \sum_{i \in I} \lambda_i^g (g_i(\bar x))^- - \sum_{i \in W_2^j} \left( \lambda_i^H (-H_i(\bar x))^- + \lambda_i^G (G_i(\bar x))^- \right) \\
\textrm{subject to } & \eqref{eq : StatEq} \,\, \textrm{ and } \,\,
\lambda_i^g \geq 0, i \in I, \lambda_i^H, \lambda_i^G \geq 0, i \in W_2^j, \lambda_i^G = 0, i \in W_1^j,
\end{array}
\end{equation}
where $\lambda = (\lambda^h,\lambda^g,\lambda^H,\lambda^G)$, $m = \vert E \vert + \vert I \vert + 2 \vert V \vert$,
$W_1^j := I^{0+}(\bar x) \cup \beta^j$, $W_2^j := V \setminus W_1^j$.
Assume first that $\bar x$ is $\mathcal{Q}$-stationary with respect to $(\beta^1,\beta^2) \in \mathcal{P}(I^{00}(\bar x))$.
Then the multipliers $\overline\lambda$, $\underline\lambda$ from definition of $\mathcal{Q}$-stationarity are feasible
for dual problems of $\tilde{LP}^1$ and $\tilde{LP}^2$, respectively, both with the objective value equal to zero.
Hence, duality theory of linear programming yields that $\min \{ \nabla f (\bar x) \tilde d^1 , \nabla f (\bar x) \tilde d^2 \} \geq 0$
and consequently \eqref{eq : LinSolZero} follows.
On the other hand, if \eqref{eq : LinSolZero} is fulfilled, it follows that
$\min \{ \nabla f (\bar x) \tilde d^1 , \nabla f (\bar x) \tilde d^2 \} = 0$
as well. Thus, $d=0$ is an optimal solution for $\tilde{LP}^1$ and $\tilde{LP}^2$
and duality theory of linear programming yields that the solutions $\lambda^1$ and $\lambda^2$
of the dual problems exist and their objective values are both zero.
However, this implies that for $j=1,2$ we have
\[\lambda_i^{g,j} g_i(\bar x) = 0, i \in I, \lambda_i^{H,j} H_i(\bar x) = 0 , \lambda_i^{G,j} G_i(\bar x) = 0, i \in V\]
and consequently $\lambda^1$ fulfills the conditions of $\overline\lambda$ and $\lambda^2$
fulfills the conditions of $\underline\lambda$,
showing that $\bar x$ is indeed $\mathcal{Q}$-stationary with respect to $(\beta^1,\beta^2)$.
\end{proof}
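As an illustration of Proposition \ref{Pro : SolLP}, the $\mathcal{Q}$-stationarity of a feasible point with respect to a partition $(\beta^1,\beta^2)$ can be tested numerically by solving the two linear programs and checking whether the smaller of the two optimal values vanishes. A minimal sketch (assuming a hypothetical routine \texttt{solve\_LP\_W(x, W1)} that returns an optimal solution of $LP(x,W_1)$, e.g.\ built on the LP sketch given earlier, and a tolerance replacing the exact equality in \eqref{eq : LinSolZero}) reads:
\begin{verbatim}
import numpy as np

def is_Q_stationary(xbar, grad_f, I0p, beta1, beta2, solve_LP_W, tol=1e-8):
    # Proposition SolLP: xbar is Q-stationary w.r.t. (beta1, beta2) iff
    # min{ grad_f(xbar) d1, grad_f(xbar) d2 } = 0, where d1 and d2 solve
    # LP(xbar, I0p U beta1) and LP(xbar, I0p U beta2), respectively.
    vals = []
    for beta in (beta1, beta2):
        d = solve_LP_W(xbar, sorted(set(I0p) | set(beta)))
        vals.append(float(np.dot(np.asarray(grad_f(xbar)), np.asarray(d))))
    # d = 0 is always feasible, hence both values are <= 0;
    # Q-stationarity means that neither of them is (significantly) negative.
    return min(vals) >= -tol
\end{verbatim}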
Now for each $k$ consider two partitions $(W_{1,k}^1,W_{2,k}^1), (W_{1,k}^2,W_{2,k}^2) \in \mathcal P(V)$
and let $d_k^1$ and $d_k^2$ denote the solutions of $LP(x_k, W_{1,k}^1)$ and $LP(x_k, W_{1,k}^2)$.
Choose $d_k \in \{d_k^1, d_k^2\}$ such that
\begin{equation} \label{eq : d_kDef}
\nabla f (x_k) d_k = \min_{d \in \{d_k^1, d_k^2\}} \nabla f (x_k) d
\end{equation}
and let $(W_{1,k},W_{2,k}) \in \{(W_{1,k}^1,W_{2,k}^1), (W_{1,k}^2,W_{2,k}^2)\}$ denote the corresponding partition.
Next, we define the function $\varphi_k$ in the following way
\begin{equation} \label{eq : varphikDef}
\varphi_k(x) := f(x) + \sum \limits_{i \in E} \sigma_{i,k}^h \vert h_i(x) \vert +
\sum \limits_{i \in I} \sigma_{i,k}^g ( g_i(x) )^+
+ \sum \limits_{i \in W_{1,k}} \sigma_{i,k}^F d(F_i(x),P^1)
+ \sum \limits_{i \in W_{2,k}} \sigma_{i,k}^F d(F_i(x),P^2).
\end{equation}
Note that the function $\varphi_k$ coincides with $\varphi$ for $a,b$ given by \eqref{eq : abDef} with $(W_1,W_2) := (W_{1,k},W_{2,k})$
and $\omega = (\omega^{\mathcal E}, \omega^{\mathcal I})$ given by
\[
\omega^{\mathcal E} := (\sigma^h_{i,k}, i \in E, \sigma^F_{i,k}, i \in W_{1,k}), \qquad
\omega^{\mathcal I} := (\sigma^g_{i,k}, i \in I, \sigma^F_{i,k}, i \in W_{2,k}, \sigma^F_{i,k}, i \in W_{2,k}).
\]
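For illustration, and assuming (as suggested by \eqref{eq : abDef}) that $P^1=\{0\}\times\mathbb{R}$, $P^2=\mathbb{R}_+\times\mathbb{R}_-$ and that the distances are measured in the $\ell_1$-norm, the merit function $\varphi_k$ of \eqref{eq : varphikDef} could be evaluated by the following sketch, where all callables are hypothetical user-supplied routines and the penalty parameters are passed as dictionaries indexed by $i$:
\begin{verbatim}
def dist_P1(F):        # l1-distance of F = (H, G) to P^1 = {0} x R (assumed)
    H, G = F
    return abs(H)

def dist_P2(F):        # l1-distance of F = (H, G) to P^2 = R+ x R- (assumed)
    H, G = F
    return max(-H, 0.0) + max(G, 0.0)

def varphi_k(x, f, h, g, F, E, I, W1, W2, sig_h, sig_g, sig_F):
    # evaluates varphi_k(x) = f(x) + penalty terms, cf. the definition above
    val = f(x)
    val += sum(sig_h[i] * abs(h(i, x)) for i in E)
    val += sum(sig_g[i] * max(g(i, x), 0.0) for i in I)
    val += sum(sig_F[i] * dist_P1(F(i, x)) for i in W1)
    val += sum(sig_F[i] * dist_P2(F(i, x)) for i in W2)
    return val
\end{verbatim}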
\begin{proposition} \label{Pro : MeritLP}
For all $x \in \mathbb{R}^n$ it holds that
\begin{equation} \label{eq : PhiVarphi}
0 \leq \varphi_k(x) - \Phi_k(x) \leq \norm{\sigma_k^F}_{\infty} \vert V \vert
\max \{ \max_{i \in W_{1,k}} d(F_i(x),P^1), \max_{i \in W_{2,k}} d(F_i(x),P^2) \}.
\end{equation}
\end{proposition}
\begin{proof}
Non-negativity of the distance function, together with \eqref{eq : DistOfCup}, yields for every $i \in V, j = 1,2$
\[0 \leq d(F_i(x),P^j) - d(F_i(x),P) \leq d(F_i(x),P^j). \]
Hence \eqref{eq : PhiVarphi} now follows from
\[
\sum_{j = 1,2} \, \, \sum_{i \in W_{j,k}} \sigma_{i,k}^F d(F_i(x),P^j)
\leq \norm{\sigma_k^F}_{\infty} \vert V \vert \max_{j=1,2} \,\, \max_{i \in W_{j,k}} d(F_i(x),P^j).
\]
\end{proof}
An outline of the extended algorithm is as follows.
\begin{algorithm}[Solving the MPVC*] \label{AlgMPCC*} \rm \mbox{}
\Itl1{1:} Initialization:
\Itl2{} Select a starting point $x_0 \in \mathbb{R}^n$ together with a positive definite $n \times n$ matrix $B_0$,
\Itl3{} a parameter $\rho_0 > 0$ and constants $\zeta \in (0,1)$, $\bar\rho > 1$ and $\mu \in (0,1)$.
\Itl2{} Select positive penalty parameters $\sigma_{-1} = (\sigma^h_{-1}, \sigma^g_{-1}, \sigma^F_{-1})$.
\Itl2{} Set the iteration counter $k := 0$.
\Itl1{2:} Correction of the iterate:
\Itl2{} Set the corrected iterate by $\tilde x_{k} := x_k$.
\Itl2{} Take some $(W_{1,k}^1,W_{2,k}^1), (W_{1,k}^2,W_{2,k}^2) \in \mathcal P(V)$, compute $d_k^1$ and $d_k^2$
\Itl3{} as solutions of $LP(x_k, W_{1,k}^1)$ and $LP(x_k, W_{1,k}^2)$ and let $d_k$ be given by \eqref{eq : d_kDef}.
\Itl2{} Consider a sequence of numbers $\alpha_k^{(1)} = 1, \alpha_k^{(2)}, \alpha_k^{(3)}, \ldots$ with
$1 > \bar \alpha \geq \alpha_k^{(j+1)} / \alpha_k^{(j)} \geq \underline \alpha > 0$.
\Itl2{} If $\nabla f (x_k) d_k < 0$, denote by $j(k)$ the smallest $j$ fulfilling either
\begin{eqnarray} \label{eqn : NextIterCond}
\Phi_k(x_k + \alpha_k^{(j)} d_k) - \Phi_k(x_k) & \leq & \mu \alpha_k^{(j)} \nabla f (x_k) d_k, \\ \label{eqn : NextIterCond2}
\textrm{or } \qquad \alpha_k^{(j)} & \leq & \frac{\Phi_k(x_k) - \varphi_k(x_k)}{\mu \nabla f (x_k) d_k}.
\end{eqnarray}
\Itl3{} If $j(k)$ fulfills \eqref{eqn : NextIterCond}, set $\tilde x_{k} := x_k + \alpha_k^{(j(k))} d_k$.
\Itl1{3:} Solve the Auxiliary problem:
\Itl2{} Run Algorithm \ref{AlgSol} with data $\zeta, \bar\rho, \rho:= \rho_k, B:=B_k, \nabla f := \nabla f (\tilde x_k),$
\Itl3{} $h_i := h_i(\tilde x_k), \nabla h_i := \nabla h_i (\tilde x_k), i \in E,$ etc.
\Itl2{} If the Algorithm \ref{AlgSol} stops because of degeneracy,
\Itl3{} stop the Algorithm \ref{AlgMPCC*} with an error message.
\Itl2{} If the final iterate $s^N$ is zero, stop the Algorithm \ref{AlgMPCC*} and return $\tilde x_k$ as a solution.
\Itl1{4:} Next iterate:
\Itl2{} Compute new penalty parameters $\sigma_{k}$.
\Itl2{} Set $x_{k+1} := \tilde x_k + s_k$ where $s_k$ is a point on the polygonal line connecting the points
\Itl3{} $s^0,s^1, \ldots, s^N$ such that an appropriate merit function depending on $\sigma_{k}$ is decreased.
\Itl2{} Set $\rho_{k+1} := \rho$, the final value of $\rho$ in Algorithm \ref{AlgSol}.
\Itl2{} Update $B_{k}$ to get positive definite matrix $B_{k+1}$.
\Itl2{} Set $k := k+1$ and go to step 2.
\end{algorithm}
Naturally, Remark \ref{rem : StoppingCriteria} regarding the stopping criteria for Algorithm \ref{AlgMPCC} applies to this algorithm as well.
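For concreteness, the line-search part of step 2 can be sketched as follows. This is only an illustration: \texttt{Phi\_k} and \texttt{varphi\_k} are callables evaluating the merit functions $\Phi_k$ and $\varphi_k$, the geometric step-size reduction with the hypothetical factor \texttt{alpha\_factor} is one admissible choice of the sequence $\alpha_k^{(j)}$, and the iteration cap \texttt{j\_max} is only a safeguard (Lemma \ref{lem : WellDefNI} below shows that a suitable $j(k)$ always exists).
\begin{verbatim}
import numpy as np

def correct_iterate(x_k, d_k, grad_f_x, Phi_k, varphi_k,
                    mu=0.1, alpha_factor=0.5, j_max=60):
    # Correction of the iterate (step 2 of Algorithm AlgMPCC*), sketched:
    # backtrack over alpha_k^(j) until (NextIterCond) or (NextIterCond2) holds.
    x_k = np.asarray(x_k, dtype=float)
    d_k = np.asarray(d_k, dtype=float)
    slope = float(np.dot(grad_f_x, d_k))        # nabla f(x_k) d_k
    if slope >= 0.0:
        return x_k                              # no descent direction: keep x_k
    Phi0, varphi0 = Phi_k(x_k), varphi_k(x_k)
    alpha = 1.0                                 # alpha_k^(1) = 1
    for _ in range(j_max):
        trial = x_k + alpha * d_k
        if Phi_k(trial) - Phi0 <= mu * alpha * slope:      # (NextIterCond)
            return trial                        # corrected iterate tilde x_k
        if alpha <= (Phi0 - varphi0) / (mu * slope):       # (NextIterCond2)
            return x_k                          # keep tilde x_k := x_k
        alpha *= alpha_factor
    return x_k
\end{verbatim}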
\begin{lemma} \label{lem : WellDefNI}
Index $j(k)$ is well defined.
\end{lemma}
\begin{proof}
In order to show that $j(k)$ is well defined, we have to prove the existence of some $j$ such that either \eqref{eqn : NextIterCond}
or \eqref{eqn : NextIterCond2} is fulfilled. By \eqref{eq : PhiVarphi} we know that $\Phi_k(x_k) - \varphi_k(x_k) \leq 0$.
In case $\Phi_k(x_k) - \varphi_k(x_k) < 0$ every $j$ sufficiently large clearly fulfills \eqref{eqn : NextIterCond2}. On the
other hand, if $\Phi_k(x_k) - \varphi_k(x_k) = 0$, taking into account \eqref{eq : PhiVarphi} we obtain
\[\Phi_k(x_k + \alpha d_k) - \Phi_k(x_k) \leq \varphi_k(x_k + \alpha d_k) - \varphi_k(x_k).\]
However, Lemma \ref{lem : LP2} for $\nu := \mu$ and $C:= \{x_k\}$ yields that if $\nabla f (x_k) d_k < 0$
then there exists some $\tilde \alpha$ such that
\[\varphi_k(x_k + \alpha d_k) - \varphi_k(x_k) \leq \mu \alpha \nabla f (x_k) d_k\]
holds for all $\alpha \in [0,\tilde \alpha]$ and thus \eqref{eqn : NextIterCond} is fulfilled for every $j$ sufficiently large.
This finishes the proof.
\end{proof}
\subsection{Convergence of the extended algorithm}
We consider the behavior of Algorithm \ref{AlgMPCC*} when it does not stop prematurely and generates an infinite
sequence of iterates
\[x_k, B_k, \theta_k, \underline\lambda_k^{N_k}, \overline\lambda_k^{N_k},
(s_k^{t}, \delta_k^{t}), \lambda_k^{t}, (V_{1,k}^{t}, V_{2,k}^{t}), \,\,
\textrm{ and } \,\, \tilde x_k, d_k^1, d_k^2, (W_{1,k}^1,W_{2,k}^1), (W_{1,k}^2,W_{2,k}^2).\]
We discuss the convergence behavior under the following additional assumption.
\begin{assumption} \label{ass : AlgMPVC*}
Let $\bar x$ be a limit point of the sequence of iterates $x_k$.
\begin{enumerate}
\item Mangasarian-Fromovitz constraint qualification (MFCQ) holds at $\bar x$ for
constraints $x \in A$, where $A$ is given by \eqref{eq : FeasSetDef} and $a,b$ are given by \eqref{eq : abDef}
with $(W_1,W_2) := (I^{0+}(\bar x),V \setminus I^{0+}(\bar x))$ or
$(W_1,W_2) := (I^{0+}(\bar x) \cup I^{00}(\bar x),V \setminus (I^{0+}(\bar x) \cup I^{00}(\bar x)))$.
\item There exists a subsequence $K(\bar x)$ such that
$\lim_{k \setto{K(\bar x)} \infty} x_k = \bar x$ and
\[W_{1,k}^1 = I^{0+}(\bar x), \,\, W_{1,k}^2 = I^{0+}(\bar x) \cup I^{00}(\bar x) \textrm{ for all } k \in K(\bar x).\]
\end{enumerate}
\end{assumption}
Note that the Next iterate step of Algorithm \ref{AlgMPCC*} remains almost unchanged compared with the Next iterate step of Algorithm \ref{AlgMPCC};
we just consider the point $\tilde x_k$ instead of $x_k$. Consequently, most of the results from subsections 4.1 and 4.2 remain valid,
possibly after replacing $x_k$ by $\tilde x_k$ where needed, e.g. in Lemma \ref{lem : GeneralMerit}. The only exception is
the proof of Lemma \ref{lem : NextIterCons}, where we have to show that the sequence $\Phi_k(x_k)$ is monotonically decreasing.
This follows now from \eqref{eqn : NextIterCond} and hence Lemma \ref{lem : NextIterCons} remains valid as well.
We state now the main result of this section.
\begin{theorem} \label{The : Betastat}
Let Assumption \ref{ass : AlgMPCC} and Assumption \ref{ass : AlgMPVC*} be fulfilled.
Then every limit point of the sequence of iterates $x_k$
is at least $\mathcal Q_M$-stationary for problem \eqref{eq : genproblem}.
\end{theorem}
\begin{proof}
Let $\bar{x}$ denote a limit point of the sequence $x_k$ and let $K(\bar x)$ denote a subsequence from Assumption \ref{ass : AlgMPVC*} (2.).
Since
\[\norm{x_{k}-\tilde x_{k-1}} \leq S_{k-1}^{N_{k-1}} \to 0 \]
we conclude that $\lim_{k \setto{K(\bar x)} \infty} \tilde x_{k-1} = \bar x$ and by applying Theorem \ref{The : Mstat} to sequence $\tilde x_{k-1}$
we obtain the feasibility of $\bar x$ for problem \eqref{eq : genproblem}.
Next we consider $\bar d^1,\bar d^2$ as in Proposition \ref{Pro : SolLP} with $\beta^1 := \emptyset$
and without loss of generality we only consider $k \in K(\bar x), k \geq \bar k$, where $\bar k$ is given by Lemma \ref{lem : Sigmas}.
We show by contradiction that the case $\min \{ \nabla f (\bar x) \bar d^1 , \nabla f (\bar x) \bar d^2 \} < 0$ cannot occur.
Let us assume on the contrary that, say, $\nabla f (\bar x) \bar d^1 < 0$.
Assumption \ref{ass : AlgMPVC*} (2.) yields that $W_{1,k}^1 = I^{0+}(\bar x)$ and
feasibility of $\bar x$ for \eqref{eq : genproblem} together with $I^{0+}(\bar x) \subset W_{1,k}^1 \subset I^{0}(\bar x)$ imply
$\bar x \in A$ for $A$ given by \eqref{eq : FeasSetDef} and $a,b$ given by \eqref{eq : abDef} with $(W_1,W_2) := (W_{1,k}^1,W_{2,k}^1)$.
Taking into account Assumption \ref{ass : AlgMPVC*} (1.), Lemma \ref{lem : LP1} then yields that for
$\epsilon := - \nabla f(\bar x) \bar d^1 /2 > 0$ there exists $\delta$ such that for all $\norm{x_k - \bar x} \leq \delta$
we have $\nabla f(x_k) d_k \leq \nabla f(x_k) d_k^1 \leq \nabla f(\bar x) \bar d^1 /2 = - \epsilon$, with
$d_k$ given by \eqref{eq : d_kDef}.
Next, we choose $\hat k$ to be such that for $k \geq \hat k$ it holds that $\norm{x_k - \bar x} \leq \delta$
and we set $\nu := (1 + \mu)/2$, $C:= \{x \,\vert\, \norm{x - \bar x} \leq \delta\}$. From Lemma \ref{lem : LP2} we obtain that
\begin{equation} \label{eq : varphiInter}
\varphi_k(x_k + \alpha d_k) - \varphi_k(x_k) \leq \frac{1+ \mu}{2} \alpha \nabla f(x_k) d_k
\end{equation}
holds for all $\alpha \in [0,\tilde \alpha]$.
Moreover, by choosing $\hat k$ larger if necessary we can assume that for all $i \in V$ we have
\begin{equation} \label{eq : FCloseToBarx}
\norm{F_i(x_k) - F_i(\bar x)}_1 \leq - \min \left\{\frac{1 - \mu}{2} , \mu \right\}
\frac{ \underline \alpha \tilde \alpha \nabla f(x_k) d_k}{\norm{\sigma_k^F}_{\infty} \vert V \vert}.
\end{equation}
For the partition $(W_{1,k},W_{2,k}) \in \{(W_{1,k}^1,W_{2,k}^1), (W_{1,k}^2,W_{2,k}^2)\}$ corresponding to $d_k$ it holds that
$I^{0+}(\bar x) \subset W_{1,k} \subset I^{0}(\bar x)$ and this, together with the feasibility of $\bar x$ for \eqref{eq : genproblem},
imply $F_i(\bar x) \in P^j, i \in W_{j,k}$ for $j=1,2$. Therefore, taking into account \eqref{eq : DistIfIn}, we obtain
\[\max \{ \max_{i \in W_{1,k}} d(F_i(x_k),P^1), \max_{i \in W_{2,k}} d(F_i(x_k),P^2) \} \leq \max_{i \in V} \norm{F_i(x_k) - F_i(\bar x)}_1.\]
Consequently, \eqref{eq : PhiVarphi} and \eqref{eq : FCloseToBarx} yield for all $\alpha > \underline \alpha \tilde \alpha$
\[\varphi_k(x_k) - \Phi_k(x_k) < - \min \left\{\frac{1 - \mu}{2} , \mu \right\} \alpha \nabla f (x_k) d_k.\]
Thus, from \eqref{eq : varphiInter} and \eqref{eq : PhiVarphi} we obtain for all $\alpha \in (\underline \alpha \tilde \alpha,\tilde \alpha]$
\begin{eqnarray*}
\Phi_k(x_k + \alpha d_k) - \Phi_k(x_k) & \leq & \varphi_k(x_k + \alpha d_k) - \varphi_k(x_k) + \varphi_k(x_k) - \Phi_k(x_k)
\leq \mu \alpha \nabla f (x_k) d_k \\
\textrm{and } \qquad \Phi_k(x_k) - \varphi_k(x_k) & > & \mu \alpha \nabla f (x_k) d_k.
\end{eqnarray*}
Now consider $j$ with $\alpha_k^{(j-1)} > \tilde \alpha \geq \alpha_k^{(j)}$.
We see that $\alpha_k^{(j)} \in (\underline \alpha \tilde \alpha,\tilde \alpha]$, since
$\alpha_k^{(j)} \geq \underline \alpha \alpha_k^{(j-1)} > \underline \alpha \tilde \alpha$
and consequently $j$ fulfills \eqref{eqn : NextIterCond} and violates \eqref{eqn : NextIterCond2}.
However, then we obtain for all $k \geq \hat k$
\[ \Phi_k(x_{k+1}) - \Phi_k(x_{k}) \leq \mu \alpha_k^{(j(k))} \nabla f (x_k) d_k \leq
\mu \underline \alpha \tilde \alpha \nabla f(\bar x) \bar d^1 /2 < 0,\]
a contradiction.
Hence it follows that the solutions $\bar d^1,\bar d^2$ fulfill
$\min \{ \nabla f (\bar x) \bar d^1 , \nabla f (\bar x) \bar d^2 \} = 0$ and by
Proposition \ref{Pro : SolLP} we conclude that $\bar x$
is $\mathcal Q$-stationary with respect to $(\emptyset,I^{00}(\bar x))$ and
consequently also $\mathcal Q_M$-stationary for problem \eqref{eq : genproblem}.
\end{proof}
Finally, we discuss how to choose the partitions $(W_{1,k}^1,W_{2,k}^1)$ and $(W_{1,k}^2,W_{2,k}^2)$ such that Assumption \ref{ass : AlgMPVC*} (2.)
will be fulfilled. Let us consider a sequence of nonnegative numbers $\epsilon_k$ such that for every limit point $\bar x$
with $\lim_{k \setto K \infty} x_k = \bar x$ it holds that
\begin{equation} \label{eq : EpsK}
\lim_{k \setto K \infty} \frac{\epsilon_k}{\norm{x_{k}- \bar x}_{\infty}} = \infty
\end{equation}
and let us define
\begin{eqnarray*}
\tilde I^{0+}_k & := & \{ i \in V \,\vert\, \vert H_i(x_k) \vert \leq \epsilon_k < G_i(x_k) \}, \\
\tilde I^{00}_k & := & \{ i \in V \,\vert\, \vert H_i(x_k) \vert \leq \epsilon_k \geq \vert G_i(x_k) \vert \}, \\
\tilde I^{0-}_k & := & \{ i \in V \,\vert\, \vert H_i(x_k) \vert \leq \epsilon_k < -G_i(x_k) \}, \\
\tilde I^{+0}_k & := & \{ i \in V \,\vert\, H_i(x_k) > \epsilon_k \geq \vert G_i(x_k) \vert \}, \\
\tilde I^{+-}_k & := & \{ i \in V \,\vert\, H_i(x_k) > \epsilon_k < -G_i(x_k) \}.
\end{eqnarray*}
\begin{proposition}
For $W_{1,k}^1$ and $W_{1,k}^2$ defined by $W_{1,k}^1 := \tilde I^{0+}_k$ and $W_{1,k}^2 := \tilde I^{0+}_k \cup \tilde I^{00}_k$
the Assumption \ref{ass : AlgMPVC*} (2.) is fulfilled.
\end{proposition}
\begin{proof}
Let $\bar x$ be a limit point of the sequence $x_k$ such that $\lim_{k \setto{K} \infty} x_k = \bar x$.
Recall that $\mathcal{F}$ is given by \eqref{eqn : mathcFdef} and let us set
$L := \max_{\norm{x - \bar x}_{\infty} \leq 1} \norm{\nabla \mathcal{F}(x)}_{\infty}$,
where $\norm{\nabla \mathcal{F}(x)}_{\infty}$ is given by \eqref{eq : MatNormDef}.
Further, taking into account \eqref{eq : EpsK}, consider $\hat k$ such that for all $k \geq \hat k$ it holds that
$\norm{x_k - \bar x}_{\infty} \leq \min \left\{ \epsilon_k / L,1 \right\}$.
Hence, for all $k \in K$ with $k \geq \hat k$ we conclude
\begin{equation} \label{eq : FxktoBarx}
\norm{ \mathcal{F}(x_k) - \mathcal{F}(\bar x)}_{\infty} \leq
\int_0^1 \norm{\nabla \mathcal{F}(\bar x + t(x_k - \bar x))}_{\infty}\norm{x_k - \bar x}_{\infty} dt \leq \epsilon_k.
\end{equation}
Now consider $i \in I^{0+}(\bar x)$, i.e. $H_i(\bar x) = 0 < G_i(\bar x)$. By choosing $\hat k$ larger if necessary we can assume that for all
$k \geq \hat k$ it holds that $\epsilon_k < G_i(\bar x)/2$ and consequently, taking into account \eqref{eq : FxktoBarx},
for all $k \in \{k \in K \,\vert\, k \geq \hat k\}$ we have
\[
\vert H_i(x_k) \vert = \vert H_i(x_k) - H_i(\bar x) \vert \leq \epsilon_k < G_i(\bar x) - \epsilon_k \leq G_i(x_k),
\]
showing $i \in \tilde I^{0+}_k$. By similar argumentation and by increasing $\hat k$ if necessary we obtain that
for all $k \in \{k \in K \,\vert\, k \geq \hat k\} =: K(\bar x)$ it holds that
\begin{equation} \label{eq : IndexSetSub}
I^{0+}(\bar x) \subset \tilde I^{0+}_k, \,\, I^{00}(\bar x) \subset \tilde I^{00}_k, \,\, I^{0-}(\bar x) \subset \tilde I^{0-}_k, \,\,
I^{+0}(\bar x) \subset \tilde I^{+0}_k, \,\, I^{+-}(\bar x) \subset \tilde I^{+-}_k.
\end{equation}
However, feasibility of $\bar x$ for \eqref{eq : genproblem} yields
\[V = I^{0+}(\bar x) \cup I^{00}(\bar x) \cup I^{0-}(\bar x) \cup I^{+0}(\bar x) \cup I^{+-}(\bar x)\]
and the index sets $\tilde I^{0+}_k, \tilde I^{00}_k, \tilde I^{0-}_k, \tilde I^{+0}_k, \tilde I^{+-}_k$ are
pairwise disjoint subsets of $V$ by definition. Hence we claim that \eqref{eq : IndexSetSub} must in fact hold with equalities.
Indeed, e.g.
\[\tilde I^{0+}_k \subset V \setminus (\tilde I^{00}_k \cup \tilde I^{0-}_k \cup \tilde I^{+0}_k \cup \tilde I^{+-}_k)
\subset V \setminus (I^{00}(\bar x) \cup I^{0-}(\bar x) \cup I^{+0}(\bar x) \cup I^{+-}(\bar x)) = I^{0+}(\bar x).\]
This finishes the proof.
\end{proof}
Note that if we assume that there exist a constant $L > 0$, a number $N \in \mathbb{N}$ and a limit point $\bar x$ such that for all $k \geq N$ it holds that
\[\norm{x_{k+1} - \bar x}_{\infty} \leq L \norm{x_{k+1} - x_k}_{\infty},\]
by setting $\epsilon_k := \sqrt{\norm{x_{k} - x_{k-1}}_{\infty}}$ we obtain \eqref{eq : EpsK}, since
\[\frac{\sqrt{\norm{x_{k} - x_{k-1}}_{\infty}}}{\norm{x_{k} - \bar x}_{\infty}} \geq
\frac{\sqrt{\norm{x_{k}- \bar x}_{\infty}}}{\sqrt{L} \norm{x_{k}- \bar x}_{\infty}} =
\frac{1}{\sqrt{L \norm{x_{k} - \bar x}_{\infty}}} \to \infty.\]
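A minimal sketch of how the partitions required in Assumption \ref{ass : AlgMPVC*} (2.) could be formed in practice from the above index-set estimates (with the arrays \texttt{H} and \texttt{G} containing the values $H_i(x_k)$ and $G_i(x_k)$, $i \in V$, and \texttt{eps} $=\epsilon_k$) is the following; it is meant purely as an illustration of the construction.
\begin{verbatim}
def estimate_W1_sets(H, G, eps):
    # W_{1,k}^1 := tilde I^{0+}_k,
    # W_{1,k}^2 := tilde I^{0+}_k  union  tilde I^{00}_k,
    # with eps = epsilon_k, e.g. eps = sqrt(||x_k - x_{k-1}||_inf) as noted above.
    V = range(len(H))
    I0p = {i for i in V if abs(H[i]) <= eps and G[i] > eps}
    I00 = {i for i in V if abs(H[i]) <= eps and abs(G[i]) <= eps}
    return sorted(I0p), sorted(I0p | I00)
\end{verbatim}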
\section{Numerical results}
Algorithm \ref{AlgMPCC} was implemented in MATLAB.
To perform numerical tests we used a subset of test problems considered in the thesis of Hoheisel \cite{Ho09}.
First we considered the so-called academic example
\begin{equation} \label{eq : academic}
\begin{array}{rl}
\min\limits_{x \in \mathbb{R}^{2}} & 4x_1 + 2x_2 \\
\textrm{subject to } & x_1 \geq 0, \\
& x_2 \geq 0, \\
& (5 \sqrt{2} - x_1 - x_2)x_1 \leq 0, \\
& (5 - x_1 - x_2)x_2 \leq 0. \\
\end{array}
\end{equation}
As in \cite{Ho09}, we tested 289 different starting points $x^0$ with $x^0_1,x^0_2 \in \{-5,-4,\ldots,10,20 \}$.
For 84 starting points our algorithm found a global minimizer $(0,0)$ with objective value 0, while for the
remaining 205 starting points a local minimizer $(0,5)$ with objective value 10 was found. Hence, convergence to
the perfidious candidate $(0,5 \sqrt{2})$, which is not a local minimizer, did not occur (see \cite{Ho09}).
As expected, after adding the constraint $3 - x_1 - x_2 \leq 0$ to the model \eqref{eq : academic} in order to artificially exclude the point $(0,0)$,
which is unsuitable for the practical application, we reached the point $(0,5)$, which is now a global minimizer.
For more detailed information about the problem we refer the reader to \cite{Ho09} and \cite{AchHoKa13}.
Next we solved two examples in truss topology optimization, the so-called Ten-bar Truss and the Cantilever Arm.
The underlying model for both of them is as follows:
\begin{equation} \label{eq : trusstopology}
\begin{array}{rll}
\min\limits_{(a,u) \in \mathbb{R}^{N} \times \mathbb{R}^{d}} & V := \sum_{i=1}^N \ell_i a_i \\
\textrm{subject to } & K(a)u = f,& \\
& f^T u \leq c, & \\
& a_i \leq \bar a_i & i \in \{1,2,\ldots,N\}, \\
& a_i \geq 0 & i \in \{1,2,\ldots,N\}, \\
& (\sigma_i(a,u)^2 - \bar\sigma^2)a_i \leq 0 & i \in \{1,2,\ldots,N\}. \\
\end{array}
\end{equation}
Here the matrix $K(a)$ denotes the global stiffness matrix of the structure $a$ and the vector $f \in \mathbb{R}^d$ contains the
external forces applied at the nodal points. Further, for each $i$ the function $\sigma_i(a,u)$ denotes the
stress of the $i$-th potential bar and $c, \bar a_i, \bar\sigma$ are positive constants. Again, for more background
of the model and the following truss topology optimization problems we refer to \cite{Ho09}.
\begin{figure}
\includegraphics[width=\textwidth]{TenBar.jpg}
\caption{Ten-bar Truss example} \label{fig : TenBar}
\end{figure}
In the Ten-bar Truss example we consider the ground structure depicted in Figure \ref{fig : TenBar}(a) consisting of $N = 10$ potential bars and
6 nodal points. We consider a load applied at the bottom right-hand node, pulling vertically toward the ground with force
$\norm{f} = 1$. The two left-hand nodes are fixed, and hence the structure has $d = 8$ degrees of freedom
for displacements.
We set $c := 10, \bar a := 100$ and $\bar\sigma := 1$ as in \cite{Ho09} and the resulting structure consisting of 5 bars
is shown in Figure \ref{fig : TenBar}(b) and is the same as the one in \cite{Ho09}.
For comparison, in the following table we show the full data containing also the stress values.
\begin{center}
\begin{tabular}{|ccc|c|}
\hline
$i$ & $a_i^*$ & $\sigma_i(a^*,u^*)$ & $u_i^*$ \\
\hline
1 & 0 & 1.029700000000000 & -1.000000000000000 \\
2 & 1.000000000000000 & 1.000000000000000 & 1.000000000000000 \\
3 & 0 & 1.119550000000000 & -2.000000000000000 \\
4 & 1.000000000000000 & 1.000000000000000 & 1.302400000000000 \\
5 & 0 & 0.485150000000000 & -1.970300000000000 \\
6 & 1.414213562373095 & 1.000000000000000 & -3.000000000000000 \\
7 & 0 & 0.302400000000000 & -8.000000000000000 \\
8 & 1.414213562373095 & 1.000000000000000 & -6.511800000000000 \\ \cline{4-4}
9 & 2.000000000000000 & 1.000000000000000 & $f^T u^* = 8$ \\
10 & 0 & 1.488200000000000 & $V^*= 8.000000000000002$ \\
\hline
\end{tabular}
\end{center}
We can see that although our final structure and optimal volume are the same as the final structure and the optimal volume in \cite{Ho09},
the solution $(a^*,u^*)$ is different. For instance, since $f^T u^* = 8 < 10 = c$, our solution does not reach the maximal compliance.
Similarly as in \cite{Ho09}, we observe the effect of vanishing constraints since the stress values from the table show
that
\[\sigma_{max}^* := \max_{1 \leq i \leq N} \vert \sigma_{i}(a^*,u^*) \vert = 1.4882 >
\hat\sigma^* := \max_{1 \leq i \leq N : a_i^* > 0} \vert \sigma_{i}(a^*,u^*) \vert = 1 = \bar\sigma.\]
\begin{figure}
\includegraphics[width=\textwidth]{CantilArm.jpg}
\caption{Cantilever Arm example} \label{fig : CantilArm}
\end{figure}
In the Cantilever Arm example we consider the ground structure depicted in Figure \ref{fig : CantilArm}(a) consisting of $N = 224$ potential bars and
27 nodal points. Again, we consider a load applied at the bottom right-hand node, pulling vertically toward the ground with force
$\norm{f} = 1$. Now the three left-hand nodes are fixed, and hence $d = 48$.
We proceed as in \cite{Ho09} and we first set $c := 100, \bar a := 1$ and $\bar\sigma := 100$. The resulting structure consisting of only 24 bars
(compared to 38 bars in \cite{Ho09}) is shown in Figure \ref{fig : CantilArm}(b).
Similarly as in \cite{Ho09}, we have $\max_{1 \leq i \leq N} a_i^{*1} = \bar a$ and $f^T u^{*1} = c$.
On the other hand, our optimal volume $V^{*1} = 23.4407$ is a bit larger than the optimal volume 23.1399 in \cite{Ho09}.
Also, analysis of our stress values shows that
\[\sigma_{max}^{*1} := \max_{1 \leq i \leq N} \vert \sigma_{i}(a^{*1},u^{*1}) \vert = 60.4294 \gg
\hat\sigma^{*1} := \max_{1 \leq i \leq N : a_i^{*1} > 0} \vert \sigma_{i}(a^{*1},u^{*1}) \vert = 2.6000\]
and hence, although both the absolute stresses and the absolute ``fictitious stresses'' (i.e., those of the zero bars)
are small compared to $\bar\sigma$, as in \cite{Ho09}, the difference is that in our case they are not the same.
The situation becomes more interesting when we change the stress bound to $\bar\sigma= 2.2$.
The obtained structure consisting again of only 25 bars (compared to 37 or 31 bars in \cite{Ho09}) is shown in Figure \ref{fig : CantilArm}(c).
As before we have $\max_{1 \leq i \leq N} a_i^{*2} = \bar a$ and $f^T u^{*2} = c$.
Our optimal volume $V^{*2} = 23.6982$ is now much closer to the optimal volumes 23.6608 and 23.6633 in \cite{Ho09}.
Similarly as in \cite{Ho09}, we clearly observe the effect of vanishing constraints since our stress values show
\[\sigma_{max}^{*2} := \max_{1 \leq i \leq N} \vert \sigma_{i}(a^{*2},u^{*2}) \vert = 24.1669 \gg
\hat\sigma^{*2} := \max_{1 \leq i \leq N : a_i^{*2} > 0} \vert \sigma_{i}(a^{*2},u^{*2}) \vert = 2.2 = \bar\sigma. \]
Finally, we obtained 32 bars (in contrast to 24 bars in \cite{Ho09}) satisfying both
\[a_i^{*2} < 0.005 = 0.005 \bar a \, \, \textrm{ and } \, \, \vert \sigma_{i}(a^{*2},u^{*2}) \vert > 2.2 = \bar\sigma. \]
To better demonstrate the performance of our algorithm we conclude this section with a table containing more detailed information
about solving Ten-bar Truss problem and 2 Cantilever Arm problems (CA1 with $\bar\sigma := 100$ and CA2 with $\bar\sigma := 2.2$).
We use the following notation.
\begin{center}
\begin{tabular}{|l|l|}
\hline
Problem&name of the test problem \\
$(n,q)$&number of variables, number of all constraints \\
$k^*$&total number of outer iterations of the SQP method \\
$(N_0, \ldots, N_{k^*-1})$&total numbers of inner iterations corresponding to each outer iteration \\
$\sum_{k=0}^{k^*-1}j(k)$&overall sum of steps made during line search\\
$\sharp f_{eval}$&total number of function evaluations, $\sharp f_{eval} = k^* + \sum_{k=0}^{k^*-1}j(k)$\\
$\sharp \nabla f_{eval}$&total number of gradient evaluations, $\sharp \nabla f_{eval} = k^*+1$\\
\hline
\end{tabular}
\end{center}
\begin{center}\small
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Problem & $(n,q)$ & $k^*$ & $(N_0, \ldots, N_{k^*-1})$ & $\sum_{k=0}^{k^*-1}j(k)$ & $\sharp f_{eval}$ & $\sharp \nabla f_{eval}$\\
\hline
Ten-bar Truss & $(18,39)$ & $14$ & $(1, \ldots, 1, 2, 2, 2, 2, 1, 1)$ & $67$ & $81$ & $15$ \\
CA1 & $(272,721)$ & $401$ & $(1, \ldots, 1)$ & $401$ & $802$ & $402$ \\
CA2 & $(272,721)$ & $1850$ & $(1, \ldots, 1)$ & $1850$ & $3700$ & $1851$ \\
\hline
\end{tabular}
\end{center}
\section*{Acknowledgments} This work was supported by the Austrian Science Fund (FWF) under grant P 26132-N25.
\section{INTRODUCTION}
The statistical
theory of out-of-equilibrium systems is one of the
most challenging, rapidly evolving, and interdisciplinary
domains of modern research. Its fundamental importance is rooted in the
fact that the majority of the situations encountered in
nature (in its broadest sense, including physical, chemical,
and biological systems) are nonequilibrium ones. Such
systems exhibit very complex and often counter-intuitive
behaviors, resulting from a
generic interplay between a large number of degrees of
freedom, nonlinearities, ``noise" or perturbations of various origins,
driving forces and dissipation (corresponding to various levels of
coarse-grained description).
Usually, noise and stochastic equations appear in the modeling
when one concentrates the study
on a few relevant variables
(hereafter called ``system under study") and approximates
the effects of the eliminated degrees of freedom
through a ``random force" or ``noise" with prescribed statistical properties.
In particular, these eliminated degrees of freedom have a characteristic time scale,
which is translated into the specific correlation time of the noise that
is intended to mimic them.
In many situations of interest,
the characteristic response time of the system under study
is much larger than this specific time scale of the
eliminated variables.
Following the seminal works of Einstein, Langevin
and Smoluchowsky, the noise is then safely modeled
as a Gaussian white noise (GWN) that is $\delta$-correlated in time.
In this case the
model system is generically referred to as
a ``Brownian particle". The Brownian motion under the action of the
GWN is a Wiener process, and
a detailed mathematical analysis can be carried out on the basis of
Langevin or Fokker-Planck equations.
The Wiener process and the GWN are, of course, stochastic processes
of fundamental importance.
However, they do not exhaust all the situations one may be
called to model. Indeed, there are cases when the eliminated degrees of freedom are {\em slow} on the time scale of the studied system. Then one
has to mimic the result of the coarse-graining over such a set of slow variables
through a {\em colored noise}, i.e., a noise that has
a non-negligible correlation time (and thus a non-flat power spectrum).
Such noises have been studied in great detail in zero-dimensional
systems, and their specific properties are known to have a profound
influence on the behavior of these systems.
Generically, they lead the system out of equilibrium
(they break the detailed balance in the configuration space of the
system).
The effect of the color of the noise on noise-induced transitions
and phase transitions
continues to be documented, and has been found to be quite important,
e.g., it can alter the type of transition and lead to re-entrance
phenomena. Other noise-induced effects have also been found to be
sensitive to the correlation time of the noise, like the stochastic resonance,
the synchronization of several noisy dynamical units, and the
directed transport in ratchets.
The two most commonly discussed examples of colored noise are the
Ornstein-Uhlenbeck process and the dichotomous Markov noise (DMN).
Although the Ornstein-Uhlenbeck process is easily invoked
in view of Doob's theorem~\footnote{Roughly speaking, Doob's theorem states that the Ornstein-Uhlenbeck process is the only stationary Gaussian diffusion process. See, e.g.,
Ref. ~\cite{vankampen92} for more details.},
the purpose of this review is to show that {\em DMN has
particular virtues and interest.}
But before proceeding with this argumentation,
let us define the DMN and describe its
main stochastic properties.
\subsection{Definition of DMN}
The DMN $\xi(t)$ is a very simple {\em two-valued stochastic process, taking the values $+A_{+}$ and $-A_{-}$, with constant transition rates $k_{\pm}$ between the two states}~\cite{vankampen92,horsthemke84}. Figure~\ref{figure1}
illustrates a realization of DMN.
\begin{figure}[h!]
\quad \vspace{1.5cm}\\
\centerline{\psfig{file=figure1.eps,width=11cm}} \vspace*{8pt}
\caption{A realization of an asymmetric DMN $\xi(t)$ that jumps between two values $\pm A_{\pm}$ with constant transition rates $k_{\pm}$. The waiting times $\tau_{\pm}$
in the two states are exponentially-distributed stochastic variables,
thus ensuring the Markovian character of the DMN, see the main text. }
\label{figure1}
\end{figure}
The constancy of the transition
rates corresponds to exponentially distributed waiting times $\tau_{\pm}$ in the two states,
\begin{equation}
\mbox{Probability density}(\tau_{\pm})=k_{\pm}\exp(-k_{\pm}\tau_{\pm})
\end{equation}
(i.e., the transitions are driven by Poisson renewal processes) and DMN is {\em Markovian}. It is therefore completely characterized by the initial state and the
matrix of the transition probabilities,
\begin{eqnarray}
\left[P_{ij}(t)\right]_{i,j=\pm}&=&\left[\mbox{Probability}(\xi(t)=iA_i|\xi(0)=jA_j)\right]_{i,j=\pm}
\nonumber\\
&&\nonumber\\
&=&\left[
\begin{array}{cc}
P_{--}(t) &\;\;\; P_{-+}(t)\\
P_{+-}(t) &\;\;\; P_{++}(t)
\end{array}
\right]=
\tau_c
\left[
\begin{array}{cc}
k_++k_-e^{-t/\tau_c} &\;\;\; k_+(1-e^{-t/\tau_c})\\
k_-(1-e^{-t/\tau_c}) &\;\;\; k_-+k_+e^{-t/\tau_c}
\end{array}
\right]\,,
\end{eqnarray}
where
\begin{equation}
\tau_c=\frac{1}{k_++k_-}
\end{equation}
is the characteristic relaxation time to the stationary state of the DMN.
In what follows we shall be exclusively concerned with {\em stationary DMN}, for which the stationary probabilities of the two states are
\begin{equation}
\mbox{Probability}(\xi=A_+)={k_-}\,\tau_c\;,\;\;
\mbox{Probability}(\xi=-A_-)=k_+\,\tau_c\,,
\end{equation}
with the corresponding mean value
\begin{equation}
\langle\xi(t)\rangle =(k_-A_+-k_+A_-)\,\tau_c\,.
\end{equation}
Moreover, we shall generally consider zero-mean DMN, $\langle\xi(t)\rangle=0$,
in order to avoid any systematic bias introduced by the noise in the dynamics of the driven system.
The stationary temporal autocorrelation function of DMN
is exponentially-decaying
\begin{equation}
\langle \xi(t) \xi(t')\rangle =
\displaystyle\frac{D}{\tau_c}\exp\left(-\frac{|t-t'|}{\tau_c}\right)\,,
\end{equation}
corresponding to a {\em finite correlation time} $\tau_c$ and an ``amplitude"
\begin{equation}
D=k_+k_-\tau_c^3(A_++A_-)^2\,.
\end{equation}
The power spectrum is thus a Lorentzian related to the characteristic time scale $\tau_c$,
\begin{equation}
S(\omega)=\frac{D}{\pi(1+\omega^2\,\tau_c^2)}\,,
\end{equation}
so the DMN is indeed {\em colored}.
A particular case of DMN, which will be widely used in what follows, is that of the
{\em symmetric} DMN, for which $A_+=A_-\equiv A$ and $k_+=k_-\equiv k$,
so that the correlation time is $\tau_c=(2k)^{-1}$, and the ``amplitude"
$D=A^2/(2k)$, with, moreover, $\langle\xi(t)\rangle=0$.
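To make the above definitions concrete, a realization of the (asymmetric) DMN can be generated numerically by drawing exponential waiting times. The following minimal sketch in Python (purely illustrative, with parameter names chosen to match the notation of the text) starts from the stationary distribution and returns the jump times together with the value taken by $\xi(t)$ on each inter-jump interval.
\begin{verbatim}
import numpy as np

def dmn_trajectory(A_plus, A_minus, k_plus, k_minus, t_max, rng=None):
    # One realization of the DMN on [0, t_max]: the value +A_plus is left
    # with rate k_plus and the value -A_minus with rate k_minus, i.e. the
    # waiting times in the two states are exponentially distributed.
    rng = np.random.default_rng() if rng is None else rng
    tau_c = 1.0 / (k_plus + k_minus)
    # start from the stationary distribution: Prob(+A_plus) = k_minus * tau_c
    state = +1 if rng.random() < k_minus * tau_c else -1
    times, values, t = [0.0], [], 0.0
    while t < t_max:
        values.append(A_plus if state > 0 else -A_minus)
        rate = k_plus if state > 0 else k_minus   # escape rate of this state
        t += rng.exponential(1.0 / rate)
        times.append(min(t, t_max))
        state = -state
    return np.array(times), np.array(values)
\end{verbatim}
Averaging $\xi(t)\xi(t')$ over many such realizations reproduces the exponential autocorrelation function quoted above.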
Let us underline yet another important property of the stationary,
zero-mean DMN, namely that in appropriate limits it reduces either
to a {\em white shot noise} (WSN), or to a {\em Gaussian white
noise} (GWN), see Ref.~\cite{vandenbroeck83} for a detailed
discussion. Figure~\ref{figure2} represents schematically these
relationships.
\begin{figure}[h!]
\quad \vspace{1.5cm}\\
\centerline{\psfig{file=figure2.eps,width=12cm}} \vspace*{8pt}
\caption{Schematic representation of the relationships between DMN and other important stationary, zero-mean stochastic processes. DMN reduces to white shot noise or to Gaussian white noise in the appropriate limits (see the main text); however, DMN and the Ornstein-Uhlenbeck process cannot be mapped one onto the other. }
\label{figure2}
\end{figure}
Consider the stationary {\em asymmetric} DMN of zero mean,
characterized by {\em three independent parameters} (e.g., $k_{\pm}$ and $A_+$),
with $A_+/k_+=A_-/k_-\equiv \lambda$. Then, by taking the limit
\begin{equation}
A_-,\,k_-\rightarrow \infty\,,\;\mbox{while keeping} \;\;A_-/k_-=\lambda=\mbox{finite}\,,
\end{equation}
one recovers the WSN driven by a Poisson process (also called Campbell's process);
the latter is characterized by {\em two parameters} ($\lambda$ and, e.g., $k_+$). Considering further the limit $A_{+}, \,k_{+}\rightarrow \infty$ with $\lambda \rightarrow 0$ but $D=\lambda^2{k_+}=A_+^2/k_+=\mbox{finite}$,
WSN reduces to a GWN; the latter is characterized by a {\em single parameter}, the noise ``amplitude'' $D$.
Note also that the {\em symmetric} DMN can reduce directly (i.e., without passing through the stage of a WSN) to the GWN by taking simultaneously the limits $A,\,k\rightarrow \infty$, such that $D=A^2 \,\tau_c=A^2/(2k)=\mbox{finite}$ represents the amplitude of the GWN.
There is no mapping possible between DMN and the Ornstein-Uhlenbeck process:
indeed, one cannot eliminate the non-Gaussian character of the DMN while keeping a finite correlation time. The two noises have distinct statistical properties, a point that is
also generally reflected in the response properties of the dynamical systems
driven by these noises. However, as indicated by the central limit theorem, the superposition of $M$ suitably-scaled, independent DMNs $\xi_i(t)\,,i=1,...,M$\,,
converges in the limit $M\rightarrow \infty$ to an Ornstein-Uhlenbeck process.
This property may be used to tackle systems driven by an Ornstein-Uhlenbeck noise
(that are notoriously difficult to study)
by constructing approximate solutions corresponding to the superposition of
several DMNs.
\subsection{Motivation, relevance, and the importance of using DMN}
This review is intended to present DMN as a {\em tool for modeling stochastic processes}.
We shall thus try to clarify its applicability, flexibility, and limits, as well as to describe a few prototypical applications. Several essential points,
briefly summarized below,
will be resulting from this review.
\begin{itemize}
\item As will become clear, systems driven by DMN are {\em encountered in a wide variety of physical and mathematical models}. Besides the ``fashion effect'' (which peaked around the 1970s), there are deeper reasons for this, and we shall address them.\\
\item A basic point is that {\em DMN mimics the effects of
finite correlation time (color) of the noise in a very simple way}. It thus constitutes a good alternative to the widely used Ornstein-Uhlenbeck process, which is often quite intractable analytically. In this way, the DMN reflects the effect of the eliminated {\em slow} degrees
of freedom on the dynamics of the relevant variables under study. Refer for example to Refs.~\cite{hanggi95,vankampen76,vankampen85,zwanzig01}
for detailed discussions on this essential point of the modeling of stochastic systems.
Moreover, the interplay between the {\em intrinsic} time scale of the DMN and other time scales present in the problem (e.g., characteristic relaxation time of the system under study, external periodic perturbations, etc.) may lead to nontrivial effects (e.g., multistability and hysteretic behavior, stochastic resonance, synchronization effects, etc.). These effects are
absent when a white noise is acting (even multiplicatively !) on the system.\\
\item One has to realize that {\em DMN may be directly a very good representation
of a simple and frequently-encountered physical situation},
namely that of the {\em thermally activated transitions between two configurations/states} of a system
(as long as the ``intra-configuration'' motion is unimportant, filtered out, or coarse-grained).
A very clear illustration is offered by the small electronic devices (e.g., MOSFET, MIM, JFET, point-contact resistances, etc.), as described in the review article Ref.~\cite{kirton89}. The active volume of such devices is so small that they contain only a reduced number of charge carriers; then the alternate capture and emission of carriers at individual defect sites generates {\em measurable} discrete jumps in the device resistance. These jumps
are naturally modeled as a DMN process. Their study provides a powerful means of investigating the nature of defects at the interfaces, the kinetics of carriers' capture and emission (see, e.g., Ref.~\cite{xiao04} for a recent experimental example),
and has demonstrated the defect origin of low-frequency $1/f$ noise in these devices, etc.\\
\item Generically, DMN {\em drives the system out-of-equilibrium} (i.e., it breaks the detailed balance in the configuration space of the system). Therefore it may lead to novel behaviors that are not accessible in equilibrium systems (like, for example, directed transport out of microscopic fluctuations). In particular, memory effects may become important -- the driven system is typically non-Markovian.\\
\item On the one hand, DMN is ``sufficiently simple'' so that it {\em often allows for a full
analytical description}, and it therefore represents a good research as well as didactical tool. The emphasis of our presentation will thus be, to the largest possible extent, on {\em analytically-solvable} models and on {\em exact results},
as stated in the title of this review~\footnote{One should keep in mind the difficulties generically encountered in the analysis of out-of-equilibrium systems (when very
often one has to appeal to various approximations),
and therefore the scarcity of exact results for such systems.}.
Moreover, the ``simplicity'' of DMN allows one to dissect the essential mechanisms
at work behind various nonequilibrium,
often seemingly complex phenomena. One can
thus reveal the ``minimal scenario" and ingredients
for the appearance of these processes.\\
\item On the other hand, DMN is ``sufficiently rich" so that it leads to {\em highly nontrivial and varied nonequilibrium behaviors in nonlinear systems}. This illustrates once more the fundamental fact that stochasticity may affect the dynamics more strongly than just a simple perturbation around the deterministic evolution: it may induce {\em qualitatively} different behaviors.\\
\item As mentioned above, DMN {\em reduces, in appropriate limits, to WSN and to GWN}, and thus offers an alternative way of solving problems connected with these noises (in particular, the master equation for DMN is often simpler to solve than the Fokker-Planck equation for the GWN).
Let us also note that the relationship of DMN with the GWN
played a role in the famous (and unfounded~!)
{\em It\^o versus Stratonovich dilemma}~\footnote{Recall that a stochastic differential equation with {\em multiplicative} GWN is not defined unambiguously: one has to supply it with an integration rule. Two such rules were essentially proposed in the existing literature, namely the {\em It\^o interpretation} (and the use of martingale formalism), and the {\em Stratonovich interpretation}. The controversy arose as to which of these two conventions was ``the correct one". The fact that the master equation for
a system driven by DMN (for which there is no ambiguity in interpretation) reduces,
in the appropriate limit, to the Stratonovich form of the Fokker-Planck equation for the GWN was used as an argument in favor of the Stratonovich interpretation. However, as explained in detail in Ref.~\cite{vankampen81}, such a controversy is actually completely meaningless.}, see
Refs.~\cite{vankampen81} and \cite{vandenbroeck83} for pertinent comments on this point. \\
\item From the point of view of numerical simulations, DMN has the advantage of
{\em being easy to implement as an external noise with finite support} (a minimal generator is sketched right after this list).\\
\item Last, but surely not least,
systems driven by DMN point to {\em interesting practical applications}.
\end{itemize}
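As a brief illustration of the numerical point above, the following minimal Python sketch (ours, not taken from the cited literature) generates one realization of a symmetric DMN, using the fact that the waiting times in each of the two states are exponentially distributed with mean $1/k$:
\begin{verbatim}
import numpy as np

def dmn_trajectory(A=1.0, k=1.0, t_max=10.0, rng=None):
    """One realization of a symmetric DMN xi(t) = +/- A with transition
    rate k: the waiting times between switches are exponentially
    distributed with mean 1/k.  values[i] is the value of xi on the
    interval [times[i], times[i+1])."""
    rng = np.random.default_rng() if rng is None else rng
    times, values = [0.0], [A if rng.random() < 0.5 else -A]
    while times[-1] < t_max:
        times.append(times[-1] + rng.exponential(1.0 / k))
        values.append(-values[-1])      # the noise flips at each event
    return np.array(times), np.array(values)

def xi_at(t, times, values):
    """Value of the DMN at time t (piecewise-constant interpolation)."""
    return values[np.searchsorted(times, t, side='right') - 1]
\end{verbatim}
On such realizations one can check, for instance, that the noise has zero mean and that its autocorrelation function decays exponentially on the time scale $\tau_c=1/(2k)$.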
\subsection{The structure of the review}
Despite an impressive literature on the systems driven by DMN,
the main lines and subjects
of interest are easy to infer. The type of systems that are generically studied are {\em zero-dimensional dichotomous flows}, corresponding to a single stochastic variable driven by a DMN that may be additive or multiplicative, as described in Sec.~II.
We shall be first addressing in Sec.~III
the problem of the transient (time-dependent) characteristics of such flows, that might be important for some practical applications and/or finite observation times.
We shall classify the ``solvable" cases and discuss in detail the seminal example of the dichotomous diffusion on a line. We shall turn afterwards to the ``more productive" study of the
asymptotic, long-time behavior of the dichotomous flows. As will become clear, both the physics and the mathematical approach are very different depending on whether or not the asymptotic dynamics exhibits {\em unstable critical points} -- this represents a main point of this review. The ``standard old theories" were limited to the absence of unstable critical points, as described in Sec.~IV. Most of the results referred to the celebrated noise-induced transitions and phase transitions that we shall briefly present. In Sec.~V we turn to the situations when the asymptotic dynamics presents unstable critical points. We show how calculations have to be modified in order to deal with such situations
and illustrate the method on three prototypical examples, namely the hypersensitive response, the rocking ratchet, and the stochastic Stokes' drift. Section~VI will be devoted to a brief presentation of escape statistics (mean first-passage time problems, resonant activation over a fluctuating barrier, etc.). In Sec.~VII we shall discuss the stochastic resonance phenomenon
and its DMN-induced enhancement.
Section~VIII is devoted to some comments on spatial patterns induced by DMN forcing, while Sec.~IX describes briefly random maps with DMN. A few conclusions and perspectives are relegated to Sec.~X.
A last warning refers to the fact that this review is non-exhaustive
(and we do apologize for the omissions), pedagogical to a large extent, and rather non-technical. We emphasize subjects that count amongst the most
recent in this domain and in which we were directly involved.
But, of course, the most important earlier results are also
described, for the sake of a correct perspective
on the field.
\section{SYSTEMS DRIVEN BY DMN: DEFINITION OF DICHOTOMOUS FLOWS}
A zero-dimensional dichotomous flow corresponds to the temporal evolution of some characteristic scalar variable $x=x(t)$ of the system under study, whose velocity switches at random between two dynamics: the ``+" dynamics, $\dot{x}(t)=f_+(x)$, and the ``--" dynamics, $\dot{x}(t)=f_-(x)$ (the dot designates the time-derivative). This process can be described by the following stochastic differential equation:
\begin{equation}
\dot{x}(t)=f(x)\,+\,g(x)\,\xi(t)\,,
\end{equation}
where $\xi(t)$ is a realization of the DMN taking the two values $+A_{+}$ and $-A_{-}$,
with transition rates $k_{\pm}$ between these values, and $f_{\pm}(x)=f(x)\pm g(x)A_{\pm}$.
If $g(x)$ is a constant, the DMN acts {\em additively}; otherwise, for $g(x)\neq$ constant,
the noise is {\em multiplicative}.
We shall consider throughout the paper only {\em constant} transition rates
$k_{\pm}$, although some recent stochastic nonequilibrium models of protein motors
use $x$-dependent transition rates $k_{\pm}(x)$, see Ref.~\cite{julicher99}.
Most of our results can be
easily generalized to cover these situations too. Moreover, if not stated explicitly otherwise, we shall be working with a {\em symmetric DMN}, $A_{\pm}\equiv A$ and $k_{\pm}\equiv k$.
For the simplicity of the presentation, we shall often refer to $x(t)$ as the
``position of an overdamped
particle"; but one should keep in mind that its actual nature depends on the system under study (i.e., that $x(t)$ can be a spatial coordinate, a current, a concentration, a reaction coordinate, etc.).
It is a stochastic process, and for most of the practical purposes its properties can be essentially described
through the properties and evolution equation of the
probability distribution function $P(x,t)$~\footnote{See, e.g., Ref.~\cite{kubo}
for the discussion of the complete characterization of a stochastic process in terms of various associated probability distribution functions.}.
Indeed, one is in general interested in
the {\em mean over the realizations of the DMN} of an arbitrary function ${\cal F}(x)$ of the stochastic
variable $x(t)$, a type of quantity that we shall denote throughout the text by $\langle ... \rangle$: \begin{equation}
\langle {\cal F} \rangle =\int dx P(x,t) {\cal F}(x) \,.
\end{equation}
In the nonstationary regime of the dichotomous flow this is an explicit function of time, as discussed below.
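From a practical point of view, the dichotomous flow defined above is also straightforward to simulate. The following minimal Python sketch (ours, for illustration only) uses an Euler scheme in which the noise flips with probability $k\,dt$ at each time step (a valid approximation for $k\,dt\ll 1$), and estimates averages of the type $\langle {\cal F}\rangle$ over independent realizations; the linear flow used at the end is merely an illustrative choice:
\begin{verbatim}
import numpy as np

def simulate_flow(f, g, x0, A=1.0, k=1.0, t_max=20.0, dt=1e-3,
                  n_real=2000, seed=0):
    """Euler scheme for dx/dt = f(x) + g(x) xi(t), with xi a symmetric
    DMN of amplitude A and transition rate k; the noise flips with
    probability k*dt at each step.  Returns the positions at t_max of
    n_real independent realizations."""
    rng = np.random.default_rng(seed)
    x = np.full(n_real, float(x0))
    xi = A * rng.choice([-1.0, 1.0], size=n_real)
    for _ in range(int(t_max / dt)):
        x += (f(x) + g(x) * xi) * dt
        flip = rng.random(n_real) < k * dt
        xi[flip] = -xi[flip]
    return x

# Illustrative example: f(x) = -x, g(x) = 1 (additive DMN).
x_fin = simulate_flow(lambda x: -x, lambda x: np.ones_like(x), x0=0.0)
print("<x> ~", x_fin.mean(), "  <x^2> ~", (x_fin**2).mean())
\end{verbatim}
For $t_{\max}$ much larger than both $1/(2k)$ and the relaxation time of the flow, such a sample approximates the stationary averages discussed in Secs.~IV and V below.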
\section{THE TIME-DEPENDENT PROBLEM}
One is, of course, tempted to address first the question of the non-stationary, transient stochastic behavior of the
variable $x(t)$, i.e., that of the temporal evolution of its probability density from a given initial state
(a given initial probability distribution function) to the
(presumably existent) asymptotic stationary state with the corresponding stationary
probability distribution function.
\subsection{The master equation and the time-dependent ``solvable" cases}
The master equation for the probability density of the compound (or vectorial) stochastic process $[x(t),\,\xi(t)]$, namely $P_{\pm}(x,t)=P(x,t;\xi=\pm 1)$, can be easily written down. It corresponds to
a Liouville flow in the phase space (describing the deterministic evolution between two jumps of the DMN) plus a gain-and-loss term (reflecting the switch of the DMN between its two values):
\begin{eqnarray}
\partial_tP_+(x,t)&=&-\partial_x[f_+(x)\;P_+]-k(P_+-P_-)\,, \\
\partial_tP_-(x,t)&=&-\partial_x[f_-(x)\;P_-]+k(P_+-P_-)
\end{eqnarray}
(see, e.g., Ref.~\cite{balakrishnan93} for a detailed derivation).
One is interested, however, in the
evolution equation for the marginal probability
density of the stochastic variable $x(t)$, ${P(x,\,t)=P_+(x,t)+P_-(x,t)}$, for which one
obtains (using the above equations):
\begin{eqnarray}
\partial_t {P(x,t)}&&=-\partial_x[f(x){P(x,t)}]\nonumber\\
&&+ A^2\,
\partial_x \,g(x)\int_0^t dt_1\exp[(-\partial_x
f(x)-2k)(t-t_1)]\,
\partial_x[g(x){P(x,t_1)}]\,.
\label{master}
\end{eqnarray}
This is an intricate {\em integro-differential} equation
in time, with a kernel that involves the exponential of the differential operator $\partial_x$.
Thus, although the DMN is such a simple stochastic process,
the statistics of the driven process $x(t)$ may be remarkably complicated
and, in general, $x(t)$ {\em is not a Markovian process}.
While the stationary solution
$P_{st}(x)$ to which the probability density $P(x,t)$ tends in the asymptotic limit $t\rightarrow \infty$ can be computed under rather general conditions (see Secs.~IV and V below), one can also address the legitimate question of the solvability of this
time-dependent master equation (\ref{master}).
A particularly relevant problem
is whether one could construct/recover some Markovian process out of $x(t)$. As shown
recently in Refs.~\cite{balakrishnan01I} and \cite{balakrishnan03}, this is achieved in the so-called {\em solvable cases}, when $P(x,t)$ obeys a closed partial differential equation
of finite-order in time.
If the order of this equation is $n \geqslant 1$, then one can construct a finite-dimensional vectorial Markovian process out of $x(t)$ and its temporal derivatives; more precisely,
the vectorial process $\left[x(t), \, \dot{x}(t),\, ... , x^{(n-1)}(t)\right]$ is Markovian.
It is in this very point that lies the importance of knowing whether
a dichotomous flow is solvable:
if one can reconstruct a Markovian property at some level
of differentiation of $x(t)$, then the knowledge of a finite number
of initial conditions for the probability density and its time-derivatives
is enough to determine entirely the subsequent evolution of $P(x,t)$.
As discussed in detail in Refs.~\cite{balakrishnan01I} and
\cite{balakrishnan03}, the
{\em condition of solvability} is related to the behavior of the differential operators
\begin{equation}
{{\cal A}}=-\partial_x[f(x)\,...]\;,\quad { {\cal B}}=-\partial_x[g(x)\,...]
\end{equation}
and the hierarchy of their commutators,
\begin{equation}
{{\cal C}_n}=\left[{ {\cal A}},\,{{\cal C}_{n-1}}\right]\quad (n\geqslant 1)\;,\quad \mbox{with}\; \quad{ {\cal C}_0}={{\cal B}}\,.
\end{equation}
If this hierarchy closes, then $P(x,t)$ satisfies a finite-order differential equation in time~\footnote{The justification of these conditions is rather long and delicate, and will not be given here
(it involves the use of the stochastic Liouville equation for the phase-space density, Van Kampen's lemma, the Shapiro-Longinov~\cite{shapiro} formula of differentiation for a functional of the noise $\xi(t)$, etc.). See Ref.~\cite{balakrishnan01I} for the details of their derivation. }.
In more detail, if
\begin{equation}
{{\cal C}_{{n}}}=\displaystyle\sum_{k=0}^
{{{ n-1}}}\beta_k\,{ {\cal C}_k}\,,
\end{equation}
with $\beta_k$ some constants, then $P(x,t)$ satisfies a partial differential equation of order
$(n+1)$ in time. If the linear combination involves ${\cal A}$ as well, i.e., if
\begin{equation}
{ {\cal C}_{{n}}}={\alpha
{ {\cal A}}}+\displaystyle\sum_{k=0}^{{{ n-1}}}\beta_k\,{ {\cal C}_k}\,,
\end{equation}
then the order of the equation is $(n+2)$.
Despite the seeming simplicity of these criteria, one should realize that
they are quite restrictive, and thus the classes of solvable cases are rather limited.
In particular, the following cases are {\em not solvable}:
(a) The cases with an additive DMN ($g(x)=$ constant) and a nonlinear drift
term ($f(x)=$ nonlinear in $x$).
(b) When either $f(x)$ or $g(x)$ is a polynomial of order $\geqslant 2$, $P(x,t)$ does not satisfy {\em in general} a finite-order equation (although it may do so in special cases of specific relationships between the coefficients of the polynomials).
A few examples of {\em solvable cases}:
(a) When $f(x)=\mbox{constant}\cdot g(x)$, i.e., ${\cal C}_1=0$, the process $x(t)$ can be mapped, through a nonlinear transformation, onto a pure {\em dichotomous diffusion}
for which the time-dependent solution is well-known (see the subsection below).
(a1) A subcase of interest is that of $f(x)=Ag(x)$, so that $f_-(x)=0$. This corresponds to the so-called delayed evolution, in which the deterministic dynamics
governed by the flow $f_+(x)=2f(x)$ is interrupted at random instants and $x$ remains frozen at its current value, till the noise switches back and the
``$+$" dynamics is continued.
A closely-related type of flow, called interrupted evolution, is characterized by the fact that in the quiescent ``--" state ($f_-=0$) the stochastic variable is reset
to a random value drawn from a fixed distribution. See Ref.~\cite{balakrishnan01II} for more details and applications of these two types of flows.
(a2) When $f(x)=0$, i.e., $f_+(x)=-f_-(x)$, the problem reduces again to the dichotomous diffusion. This case is of interest, e.g., in problems involving the exchange of stability between two critical points of the alternate dynamics, see Ref.~\cite{balakrishnan01II}.
(b) Another case presented in the literature, see Ref.~\cite{sancho84}, is that of ${\cal C}_1=-\beta_0{\cal B}$,
i.e., $f'(x)g(x)-g'(x)f(x)=\beta_0g(x)$, when $P(x,t)$ obeys a second-order differential equation. The early-day Hongler's model, see Ref.~\cite{hongler79},
with $f(x)=-\mbox{tanh}(x)$ and $g(x)=\mbox{sech}(x)$, falls into this class.
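The last criterion is easy to check symbolically. The following small sympy sketch (ours, for illustration) verifies that, for Hongler's choice of $f$ and $g$, the ratio $[f'(x)g(x)-g'(x)f(x)]/g(x)$ is indeed a constant, as required by the condition ${\cal C}_1=-\beta_0{\cal B}$ quoted above:
\begin{verbatim}
import sympy as sp

x = sp.symbols('x')
f = -sp.tanh(x)
g = sp.sech(x)

# Solvability condition of case (b): f'(x) g(x) - g'(x) f(x) = beta_0 g(x),
# i.e. the ratio below must be independent of x.
ratio = sp.simplify((sp.diff(f, x) * g - sp.diff(g, x) * f) / g)
print(ratio)    # a pure number => Hongler's model belongs to this class
\end{verbatim}
The printed result is the constant $-1$, i.e., the value of $\beta_0$ in the notation above.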
An important warning, however: one should be aware of the fact that ``solvable" in the sense
indicated above does not imply, in general, that one can express $P(x,t)$ in a simple, closed algebraic form in terms of some ``standard" functions. Indeed, as explained in Refs.~\cite{balakrishnan01I} and \cite{balakrishnan03}, this is rather an exceptional situation, and probably the only case that was completely explored till now is that of the dichotomous diffusion
(and processes that reduce to it through some change of variables).
\subsection{Dichotomous diffusion on a line}
It describes the stochastic position $x(t)$ of a particle whose velocity
is a DMN,
i.e., it is represented by the following stochastic differential equation:
\begin{equation}
\dot{x}(t)=\xi(t)\,.
\end{equation}
The corresponding master equation for the probability densities $P_{\pm}(x,t)$ is
\begin{equation}
\partial_t P_{+}=-A\partial_xP_++k(P_--P_+)\,,\quad
\partial_t P_{-}=A\partial_xP_-+k(P_+-P_-)
\label{ppm}
\end{equation}
(for the case of a symmetric DMN $\pm A$ with transition rate $k$).
This simple process $x(t)$ is an example of so-called {\em persistent diffusion
on a line} as detailed in Refs.~\cite{furth20,taylor21,goldstein51,balakrishnan88I}. Indeed, it can be
obtained~\footnote{In the same way as the normal diffusion is
obtained from the usual, simple discrete random walk on a lattice.} as the continuum limit of a ``persistent" random walk on a
$1D$ lattice, i.e., a random walk for which the transition probabilities left or right
at a given stage depend on the {\em direction} of the preceding jump. This means that the jump probabilities have a ``memory" of the previous state of the system, and therefore $x(t)$ is no longer Markovian. However, according to the general discussion in Sec.~III.A, the vectorial process $[x(t), \,\dot{x}(t)]$ is Markovian. The probability distribution $P(x,t)$ obeys a second-order (in time) hyperbolic partial differential equation known as the {\em telegrapher's equation}~\footnote{This equation is well-known in the theory of electromagnetic signal propagation in transmission lines, and describes both the ``ballistic" transmission and the damping (diffusion) of the signal, see the main text. A model with $x$-dependent transition rates $k=k(x)$ corresponds to inhomogeneities in the transmission cables, see Ref.~\cite{hongler86}. }:
\begin{equation}
(\partial_{tt}+2k\partial_t-A^2\partial_{xx})P(x,t)=0\,.
\label{telegrapher}
\end{equation}
Note that each of $P_{\pm}(x,t)$ also obeys this equation.
The telegrapher's equation can be solved for various initial conditions of $P(x,t)$ and its time-derivative $\dot{P}(x,t)$~\cite{balakrishnan88I}.
The short-time behavior (for $t \ll \tau_c=1/(2k)$) is governed by
the wave-like part of the equation, i.e., the mean square
displacement behaves ballistically: $\langle x^2(t)\rangle \approx A^2t^2$;
while the long-time behavior (for $t \gg \tau_c=1/(2k)$) is of course diffusive,
$\langle x^2(t)\rangle \approx (A^2/k)t=2Dt$, in
agreement with the central-limit theorem.
One can easily illustrate this behavior on the example of the ``symmetric" initial conditions
$P(x,t=0)=\delta(x)$ and $\dot{P}(x,t=0)=0$, when the solution
of the telegrapher's equation can be expressed in terms of modified Bessel functions of zeroth and first order as:
\begin{eqnarray}
P(x,t)&=&\frac{e^{-kt}}{2}[\delta(x-At)+\delta(x+At)]+
\frac{ke^{-kt}}{2A}\left[\frac{}{}I_0\left(k\sqrt{t^2-x^2/A^2}\right)\right.\nonumber\\
&&\left.+\frac{t}{\sqrt{t^2-x^2/A^2}}I_1\left(k\sqrt{t^2-x^2/A^2}\right)\right]\;\left[\theta(x+At)-\theta(x-At)\right]\,.
\end{eqnarray}
The terms in $\delta$ (of amplitude that is decreasing exponentially in time)
describe the ballistic motion corresponding to the persistence of the DMN in the
``+" or ``--" state during time $t$. The remaining terms represent the sum of contributions resulting from trajectories that involve one, two, etc., transitions between the ``+" and ``--" states of the DMN during time $t$, and tend to a Gaussian
(reflecting a usual Brownian motion) in the asymptotic limit. Consequently,
\begin{equation}
\langle x^2(t) \rangle =
\frac{A^2}{2k^2}\,\left[2kt-1+\exp(-2kt)\right]\,,
\end{equation}
with the above-mentioned ballistic and diffusive behaviors at short and long times, respectively.
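This expression for $\langle x^2(t)\rangle$ is easy to check numerically; the following illustrative Python sketch (ours) simulates $\dot{x}=\xi(t)$ with an Euler scheme (the noise flipping with probability $k\,dt$ per step) and compares the sampled mean square displacement with the formula above:
\begin{verbatim}
import numpy as np

A, k, dt, t_max, n_real = 1.0, 1.0, 1e-3, 10.0, 5000
rng = np.random.default_rng(1)
x = np.zeros(n_real)
xi = A * rng.choice([-1.0, 1.0], size=n_real)
msd_sim, msd_th = [], []
t_grid = dt * np.arange(1, int(t_max / dt) + 1)
for t in t_grid:
    x += xi * dt                              # dx/dt = xi(t)
    flip = rng.random(n_real) < k * dt
    xi[flip] = -xi[flip]
    msd_sim.append((x**2).mean())
    msd_th.append(A**2 / (2 * k**2) * (2 * k * t - 1 + np.exp(-2 * k * t)))
# ballistic (~ A^2 t^2) at short times, diffusive (~ A^2 t / k) at long times
print(msd_sim[-1], msd_th[-1])
\end{verbatim}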
\subsubsection{Applications}
As reviewed in Ref.~\cite{vandenbroeck90},
several physical situations can be modeled through a dichotomous diffusion
and the telegrapher's equation, and we present them briefly below.
For most of them the transient regime and its stochastic properties are important, in view of the
finite observation time or the limited dimensions of the experimental device;
hence the relevance of knowing the time-dependent statistics, and not only the asymptotic regime.
{\em (a) Broadening of peaks in a chromatographic column}. \\
A chromatographic column is a very simple device for separating particles of different
characteristics; it consists of a cylindrical tube, where particles are carried by a fluid, with mean velocities depending on their mobilities. So, at the end of the tube, one expects to receive neat ``delta-peaks" of identical particles. It was noted experimentally, however, that there is a broadening of these peaks, i.e., the identical particles do not arrive exactly at the same moment at the end of the column (despite all the precautions related to ensuring identical initial conditions at the beginning of the tube, etc.).
In the model proposed in
Refs.~\cite{giddings55} and \cite{giddings57} for this broadening phenomenon, for each species of
particles present in this sorting-device there are random
switches, with a specific transition rate $k_+$, from a mobile state (in which particles are dragged with a specific velocity $v$ along the chromatographic column)
to an immobile, adsorbed state; they are afterwards randomly desorbed,
with a characteristic desorbing rate $k_-$, see Fig.~\ref{figure3}
for a schematic representation.
\begin{figure}[thb]
\quad \vspace{1.5cm}\\
\centerline{\psfig{file=figure3.eps,width=12cm}}
\caption{Schematic representation of a chromatographic column of length $L_c$.
Particles switch at random, with transition rate $k_+$, from the mobile state of velocity $v_+=v$
to an adsorbed, immobile state ($v_-=0$), and can return to the ``free" state
with a transition rate $k_-$. }
\label{figure3}
\end{figure}
This stochastic process can be described as
\begin{equation}
\dot{x}(t)=v/2+\xi(t)\,,
\end{equation}
where $x(t)$ designates the position of the particles along the column, and $\xi(t)$ is a DMN that takes the values $\pm v/2$ with transition rates $k_{\pm}$. This results in an asymptotic mean velocity of the particles
\begin{equation}
\langle \dot{x}\rangle =\frac{k_-}{k_++k_-}\;v\,
\end{equation}
and a dispersion of the particles around this drift profile (a ``broadening" of the profile) given by an effective diffusion coefficient
\begin{equation}
D_{\mbox{eff}}=\frac{k_+k_-}{(k_++k_-)^3}\; v^2\,.
\end{equation}
The separating efficiency of the chromatographic column is thus determined by its length $L_c$, through the condition that for each type of particles ``convection wins over dispersion",
\begin{equation}
L_c>\frac{2D_{\mbox{eff}}}{\langle \dot{x}\rangle}=\frac{2k_+v}{(k_++k_-)^2}\,,
\end{equation}
and also by requiring ``sufficiently different" asymptotic mean velocities of the
particles of different species. Note also that reaching the asymptotic regime
requires times $t \gg \tau_c=1/(k_++k_-)$, and therefore $L_c$ should
be actually much larger than the right-hand side of the above equation.\\
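For convenience, the three formulas above can be collected into a small helper function; the following Python sketch (ours; the numerical values are purely illustrative) returns the asymptotic drift, the effective diffusion coefficient, and the corresponding minimal column length:
\begin{verbatim}
def chromatography(v, k_plus, k_minus):
    """Asymptotic drift velocity, effective diffusion coefficient and
    minimal column length for the mobile/adsorbed two-state model;
    k_plus (k_minus) is the adsorption (desorption) rate and v the
    velocity in the mobile state."""
    s = k_plus + k_minus
    drift = k_minus / s * v
    d_eff = k_plus * k_minus / s**3 * v**2
    l_min = 2.0 * d_eff / drift          # = 2 k_plus v / s**2
    return drift, d_eff, l_min

print(chromatography(v=1.0, k_plus=0.5, k_minus=2.0))
\end{verbatim}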
{\em (b) Measuring reaction rates through electrophoresis}. \\
In this system, as modeled in Ref.~\cite{mysels56}, the jumps (with transition rates $k_{\pm}$)
between the two states of a DMN correspond
to a particle undergoing a chemical transformation between two configurations,
$A^{(1)}$ and $A^{(2)}$, with different
electrophoretic mobilities $\mu_{\pm}$. See Fig.~\ref{figure4} for illustration.
\begin{figure}[h!]
\quad \vspace{1.5cm} \\
\centerline{\psfig{file=figure4.eps,width=12cm}}
\caption{Schematic representation of an electrophoresis experiment. An ensemble of particles undergo random transitions between two states, $A^{(1)}$ and $A^{(2)}$,
that have different electrophoretic mobilities $\mu_{+}$,
respectively $\mu_-$. By applying an external electric field
$E$ and by measuring the mean drift velocity of the particles, as well as the
diffusion around the mean drift position, one can obtain the values of the transition rates $k_{\pm}$
between the states $A^{(1,2)}$.}
\label{figure4}
\end{figure}
Of course, by measuring ``statically" the
concentrations $[A^{(1,2)}]$ of the two configurations, one can obtain the ratio of the transition rates $k_{\pm}$, since at equilibrium $k_+ [A^{(1)}] = k_- [A^{(2)}]$.
When applying an external field,
one can measure ``dynamically" the average velocity of the electrophoretic peak,
\begin{equation}
\langle\dot{x}\rangle =
\displaystyle\frac{\mu_+k_-+\mu_-k_+}{k_++k_-}\;E
\end{equation}
(which contains the same information as the equilibrium measurements), but also
the dispersion of the particles around this peak, i.e., their effective diffusion coefficient
\begin{equation}
D_{\mbox{eff}}=\displaystyle\frac{k_+k_-(\mu_+-\mu_-)^2}{(k_++k_-)^3}\;E^2\,,
\end{equation}
and thus one can determine $k_+$ and $k_-$ separately.\\
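This inversion can be made explicit. Denoting by $p=k_-/(k_++k_-)$ the fraction of time spent in the state of mobility $\mu_+$, the drift formula determines $p$, and the dispersion formula then determines $k_++k_-$. A minimal Python sketch (ours; the synthetic numbers serve only as a consistency check):
\begin{verbatim}
def rates_from_electrophoresis(v_drift, d_eff, mu_plus, mu_minus, E):
    """Recover (k_plus, k_minus) from the measured drift velocity and
    effective diffusion coefficient, by inverting the two formulas of
    the text."""
    p = (v_drift / E - mu_minus) / (mu_plus - mu_minus)         # = k_-/(k_+ + k_-)
    s = p * (1.0 - p) * (mu_plus - mu_minus)**2 * E**2 / d_eff  # = k_+ + k_-
    return (1.0 - p) * s, p * s

# consistency check on synthetic "measurements"
kp, km, mup, mum, E = 0.8, 1.7, 2.0, 0.5, 1.0
s = kp + km
v = (mup * km + mum * kp) / s * E
D = kp * km * (mup - mum)**2 / s**3 * E**2
print(rates_from_electrophoresis(v, D, mup, mum, E))   # ~ (0.8, 1.7)
\end{verbatim}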
{\em (c) Taylor dispersion}. \\
In the fifties, Taylor investigated, both theoretically and experimentally, the motion of tracers in a Poiseuille flow in cylindrical configuration,
see Refs.~\cite{taylor1,taylor2,taylor3}. Applications refer, e.g., to the motion of pollutants in rivers, etc.
In the long-time regime, he found that
the tracers are being dragged downstream along the $x$-axis of the cylinder, with the mean velocity $u$ of the flow. But besides that, the tracers are also dispersed around this drift peak, with an effective diffusion coefficient
$D_{\mbox{eff}}$ that is inversely proportional to their molecular diffusion
coefficient $D$. See the left-hand side of Fig.~\ref{figure5} for an illustration.
\begin{figure}[h!]
\quad \vspace{1.5cm}\\
\centerline{\psfig{file=figure5.eps,width=12cm}}
\caption{Schematic representation of a Poiseuille flow in a cylinder (left-hand side of the figure),
with the expression of the effective diffusion coefficient along the direction of the flow
as found theoretically by Taylor. The right-hand side of the figure represents
a caricature of the Taylor dispersion seen as a dichotomous diffusion, see the main text. Indicated are the
model stochastic process, and the resulting effective diffusion coefficient along the axis of the cylinder.}
\label{figure5}
\end{figure}
A very simple model of dichotomous diffusion introduced later, see Ref.~\cite{thacker75}, allows one to capture the essential features of the Taylor dispersion. It consists of tracers that jump at random,
with transition rate $k$, between two layers of fluid with velocities $u+V$
and $u-V$, respectively. This results in an effective diffusion coefficient
along the $x$-axis, $D_{\mbox{eff}}$, that is inversely proportional to the transition rate $k$.
One realizes that, as in the case of the Taylor dispersion, $D_{\mbox{eff}}$ is proportional to the typical time that is needed by a particle to sample all the available velocities
($a^2/D$ for the Taylor flow, and $k^{-1}$ for the dichotomous diffusion). Note also
the divergence of $D_{\mbox{eff}}$ when there is no transition between the layers of the fluid (i.e., when $D\rightarrow 0$ for Taylor dispersion, respectively $k\rightarrow 0$
for dichotomous diffusion); indeed, in this case particles with different velocities move apart from each other with a ballistic motion.\\
{\em (d) Jepsen-McKean gas}. \\
This is a very simple (although theoretically very productive)
model of a classical gas. It consists of elastic, hard-point particles that are initially randomly distributed on a line
(with an average distance $\lambda$ between them),
and with randomly distributed initial velocities taking
two possible values, $+ v$ and $-v$,
see Refs.~\cite{jepsen65,mckean67,protopopescu85}.
At collisions particles simply exchange trajectories, see Fig.~\ref{figure6}
for a schematic representation.
\begin{figure}[h!]
\quad \vspace{1.5cm} \\
\centerline{\psfig{file=figure6.eps,width=10cm}}
\caption{Schematic representation of one realization of the trajectories of the hard-point particles
of a Jepsen-McKean gas (see main text). The thick line represents the trajectory of a test-particle that undergoes a dichotomous diffusion due to the collisions with the other particles.}
\label{figure6}
\end{figure}
The velocity of a particle is thus just $\dot{x}(t)=\xi(t)$, with the DMN
$\xi(t)$ taking the values $\pm v$ with transition rate $k=v/\lambda$.
For this system, the master equation for the probability densities $P_{\pm}(x,t)$
is nothing else but Boltzmann's equation (which is {\em exact} for this model!), and the probability distribution function
$P(x,t)$ (that obeys telegrapher's equation) is proportional to
the spatial density of particles.\\
{\em (e) Kubo-Anderson oscillator.}\\
Finally, an example that has numerous
applications in spectroscopy (see, e.g., Ref.~\cite{mukamel} for a review),
is the so-called ``random-frequency oscillator" introduced in Refs.~\cite{anderson53,kubo54I,kubo54II,anderson54,kubo69}. It is described by the stochastic evolution of a phase-like variable,
\begin{equation}
\dot{u}(t)=-i\,\left[ \omega_0 +\xi(t)\right]\,u\,,
\end{equation}
where, in one of the simplest variants, $\xi(t)$ is a DMN $\pm A$ with transition rate $k$.
The mean value of the phase variable
\begin{eqnarray}
\langle u(t)\rangle &=& \langle u(0) \rangle \langle \exp\left[-i \omega_0 t - i
\int_0 ^t \xi(t') dt' \right]\rangle \nonumber\\
&=&\langle u(0) \rangle e^{-i\omega_0 t-kt}\,
\left\{\cos(t\sqrt{A^2-k^2})+
\frac{k}{\sqrt{A^2-k^2}}\sin(t\sqrt{A^2-k^2})\right\},
\end{eqnarray}
leads to a power spectrum
\begin{equation}
S(\Delta \omega)=\frac{2{k}}{\pi}\;\frac{A^2}{[(\Delta
\omega)^2-A^2]^2+4{k^2} (\Delta \omega)^2}\,,
\end{equation}
with $\Delta \omega = \omega - \omega_0$. As illustrated in Fig.~\ref{figure7}, there
is a change in the shape of the spectrum with increasing transition rate $k$,
the so-called {\em motional narrowing}.
\begin{figure}[h!]
\quad \vspace{1.5cm} \\
\centerline{\psfig{file=figure7.eps,width=13cm}}
\caption{Schematic representation of the motional narrowing of the power spectrum of the random-frequency oscillator driven by a DMN $\pm A$ (see the main text).
The spectrum, that is two-peaked for low transition rates $k$ of the DMN, becomes progressively single-peaked and narrower with increasing $k$. }
\label{figure7}
\end{figure}
Such a random-frequency oscillator can describe, for example, the Larmor precession of a spin in a random magnetic field;
or the thermal transition of a molecule between configurations with different
absorption frequencies, etc.
By measuring the motional narrowing in the power spectrum, one can obtain valuable information on the structure of the system under study. For example, from measurements of the NMR spectrum at different temperatures
(which means for different Arrhenius-like transition rates between different structural configurations), one can determine the energy barriers
between the various configurations of a molecule -- see, e.g., two early-day papers
of J. A. Pople, Refs.~\cite{pople1} and \cite{pople2}.
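The motional narrowing is immediately visible from the expression of $S(\Delta\omega)$ itself. The short Python sketch below (ours; the parameter values are arbitrary) locates the maximum of the spectrum for several transition rates:
\begin{verbatim}
import numpy as np

def S(dw, A, k):
    """Power spectrum of the DMN-driven random-frequency oscillator,
    as given in the text (dw = omega - omega_0)."""
    return 2 * k / np.pi * A**2 / ((dw**2 - A**2)**2 + 4 * k**2 * dw**2)

A = 1.0
dw = np.linspace(-3, 3, 4001)
for k in (0.1, 0.5, 2.0, 10.0):
    peak = abs(dw[np.argmax(S(dw, A, k))])
    print("k =", k, "  spectrum peaks at |dw| ~", round(peak, 2))
# two peaks near |dw| = A at low k; a single, progressively narrower
# peak at dw = 0 at large k (motional narrowing)
\end{verbatim}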
\subsubsection{Dichotomous diffusion and quantum mechanics}
Let us now mention briefly two types of connections with quantum mechanics problems.
{\em (a) Relativistic extension of the analogy between quantum mechanics and \\
Brownian motion.}
As first remarked in Ref.~\cite{gaveau84}, the telegrapher's equation~(\ref{telegrapher}) can be converted into the Klein-Gordon equation
through a simple transformation, which suggests a formal connection with relativistic wave equations. More precisely, setting:
\begin{equation}
P_{\pm}(x,t)=e^{-kt} \,\psi_{1,2}(x,t)
\end{equation}
in Eq.~(\ref{telegrapher}), one finds that each of the ``components" $\psi_{1,2}$ obeys
the Klein-Gordon equation:
\begin{equation}
(\partial_t^2-A^2\partial_x^2-k^2)\psi_{1,2}=0\,.
\label{coupled1}
\end{equation}
Or, by substituting in the master Eq.~(\ref{ppm}), one finds the coupled equations for the components $\psi_{1,2}$:
\begin{equation}
(\partial_t\pm A\partial_x)\psi_{1,2}=k\psi_{2,1}\,.
\end{equation}
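This reduction can be verified symbolically; the following sympy sketch (ours, for illustration) substitutes $P_{\pm}=e^{-kt}\psi_{1,2}$ into the master equations~(\ref{ppm}) and shows that they indeed become the coupled first-order equations just written:
\begin{verbatim}
import sympy as sp

x, t, A, k = sp.symbols('x t A k', positive=True)
psi1 = sp.Function('psi1')(x, t)
psi2 = sp.Function('psi2')(x, t)
P_plus = sp.exp(-k * t) * psi1
P_minus = sp.exp(-k * t) * psi2

# left-hand sides of the master equations (ppm), moved to one side:
eq_plus = sp.diff(P_plus, t) + A * sp.diff(P_plus, x) - k * (P_minus - P_plus)
eq_minus = sp.diff(P_minus, t) - A * sp.diff(P_minus, x) - k * (P_plus - P_minus)

print(sp.simplify(eq_plus * sp.exp(k * t)))   # (d_t + A d_x) psi1 - k psi2
print(sp.simplify(eq_minus * sp.exp(k * t)))  # (d_t - A d_x) psi2 - k psi1
\end{verbatim}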
On the other hand, the Dirac equation for a free relativistic particle of rest mass $m_0$
reads
\begin{equation}
(i\gamma^{\mu}\partial_{\mu}-m_0c/\hbar)\psi=0
\end{equation}
($c$ is the speed of light, and $\hbar$ the reduced Planck's constant).
In $(1+1)$ dimensions $\partial_{\mu}=(c^{-1}\partial_t,\partial_x)$ and
using Weyl's representation
\begin{equation}
\gamma^0=\left(\begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right)\,,\quad
\gamma^1=\left(\begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array} \right)\,,
\end{equation}
the Dirac equation for $\psi=(\psi_1,\,\psi_2)$ reduces to
\begin{equation}
(\partial_t\pm c\partial_x)\psi_{1,2}=(m_0c^2/i\hbar)\psi_{2,1}\,.
\label{coupled2}
\end{equation}
Of course, these are exactly the coupled equations written above for $\psi_{1,2}$, provided that we make the following identifications:
\begin{equation}
A \leftrightarrow c\,,\quad k \leftrightarrow m_0c^2/i\hbar\,.
\end{equation}
The factor $i$ that appears above reflects the familiar analytical continuation to ``imaginary time" that appears in the usual passage from the diffusion equation to the Schr\"odinger equation (or from the diffusion kernel to the quantum mechanics propagator). So, in this picture, a Dirac particle in one dimension, with helicity amplitudes $\psi_{1,2}$ moves back and forth on a line, at the speed of light (Zitterbewegung!),
and at random times it flips both the direction of propagation and the chirality.
This analogy was generalized later in Ref.~\cite{balakrishnan05} to the case of
a velocity-biased dichotomous diffusion. The transition rates
between the $+A$ state and the $-A$ state are unequal, $k_+\neq k_-$, thus resulting in a net drift velocity, see Ref.~\cite{filliger04} for a detailed discussion of this process.
The modifications in the master equation and its solutions can be interpreted in terms of a Lorentz transformation to a frame moving precisely with this drift velocity.
As a consequence, in the connection to the Dirac equation this results in a modification of the correspondence between the rest mass of the Dirac particle and the frequency
of direction reversal in the dichotomous diffusion, the latter being corrected
by the time-dilatation factor corresponding to the drift velocity.
The analogy between stochastic differential equations and quantum mechanical waves has been developed further in the context of stochastic quantum mechanics (see Ref.~\cite{balakrishnan05} for some pertinent references). However, many nontrivial open questions still remain (e.g., the extension of the analogy between diffusive processes and relativistic wave equations to more than one spatial dimension, to cite only an example). \\
{\em (b) Quantum random walks and dichotomous diffusion.}
In Ref.~\cite{blanchard04} one considers a one-dimensional,
space- and time-discrete quantum random walker
driven by a Hadamard ``coin-tossing" process.
Namely, at each time step the two-component wave function $\psi(n,t)=(\psi_1(n,t),\,\psi_2(n,t))$
of the particle on a discrete lattice
at position $n$ suffers a modification of the chirality
and a jump to the neighbouring sites,
according to the rules:
\begin{eqnarray}
&&\psi_1(n,t+1)=-\frac{1}{\sqrt{2}}\psi_1(n+1,t)+\frac{1}{\sqrt{2}}
\psi_2(n-1,t)\,,\nonumber\\
&&\psi_2(n,t+1)=\frac{1}{\sqrt{2}}\psi_1(n+1,t)+\frac{1}{\sqrt{2}}
\psi_2(n-1,t)\,.
\end{eqnarray}
In the continuum space and time limit, the probability distribution for the quantum particle (i.e., the square modulus of the wave function) is shown to obey a hyperbolic equation that is similar to the telegrapher's equation obtained for a dichotomous diffusion with space-dependent transition rates between the two states of the
driving DMN. This leads to an asymptotic ballistic behavior of the mean square ``displacement" of the quantum particle,
contrary to its classical counterpart
(an effect of quantum interferences). The generalization to a larger class of unitary transformations of the chirality than the Hadamard coin-tossing leads, in the continuum limit, to a larger class of hyperbolic equations describing piecewise deterministic motions.
Solving these equations is, however, a very difficult task~\cite{blanchard04}.\\
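The discrete update rules quoted above are nevertheless trivial to iterate numerically, which already makes the contrast with the classical persistent walk apparent. The following Python sketch (ours; the initial chirality is an arbitrary illustrative choice) propagates the two-component wave function and exhibits the linear, ballistic growth of $\sqrt{\langle n^2\rangle}$ with the number of steps:
\begin{verbatim}
import numpy as np

T = 200                       # number of time steps
N = 2 * T + 1                 # lattice sites n = -T, ..., T
psi1 = np.zeros(N, complex)
psi2 = np.zeros(N, complex)
psi1[T] = 1.0 / np.sqrt(2)    # particle initially at n = 0
psi2[T] = 1j / np.sqrt(2)     # arbitrary choice of initial chirality

for _ in range(T):
    new1 = np.zeros_like(psi1)
    new2 = np.zeros_like(psi2)
    # psi1(n,t+1) = -psi1(n+1,t)/sqrt(2) + psi2(n-1,t)/sqrt(2)
    # psi2(n,t+1) =  psi1(n+1,t)/sqrt(2) + psi2(n-1,t)/sqrt(2)
    new1[:-1] = -psi1[1:] / np.sqrt(2)
    new1[1:] += psi2[:-1] / np.sqrt(2)
    new2[:-1] = psi1[1:] / np.sqrt(2)
    new2[1:] += psi2[:-1] / np.sqrt(2)
    psi1, psi2 = new1, new2

n = np.arange(N) - T
P = np.abs(psi1)**2 + np.abs(psi2)**2
print("norm =", P.sum(), "  sqrt(<n^2>)/T =", np.sqrt((n**2 * P).sum()) / T)
# sqrt(<n^2>) grows linearly with the number of steps (ballistic
# spreading), instead of the sqrt(T) growth of the classical walk.
\end{verbatim}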
In view of the already-discussed difficulty and thus scarcity of exact results for time-dependent problems, but mainly in view of the relevance for practical purposes,
we shall turn below to the asymptotic, stationary regime of the dichotomous flows.
\section{DICHOTOMOUS FLOWS: STATIONARY SOLUTIONS WITHOUT UNSTABLE CRITICAL POINTS}
The first subsection will be devoted to the definition and general properties of simple
stationary dichotomous flows on the real axis, in the absence of unstable critical points.
We discuss afterwards
the appearance of noise-induced transitions on the particular example of the genetic model.
A comparison is made between the effects of the DMN and those of a deterministic periodic
dichotomous perturbation.
In the third subsection we consider a simple ensemble of coupled
dichotomous flows, and analyse the mechanism behind the onset of a global
instability in this model of a spatially-extended system. The DMN and the
deterministic periodic forcing are contrasted again in this context.
Finally, in the fourth
subsection we are ready to address the problem of the nonequilibrium
phase transitions induced
by DMN in a mean-field type of model. Comparison with the phase transitions
induced by GWN allows us to emphasize specific features
(multistability, hysteretic behavior, etc.) related to the colour
of the DMN.
\subsection{Generalities on stationary dichotomous flows}
The vast majority of the studies on dichotomous flows refer to the {\em stationary},
long-time behavior. We shall briefly present here the most important,
generic results for flows $f_{\pm}(x)$ that are defined and ``sufficiently smooth"
(at least continuous) on the whole real axis,
without any specific periodicity properties, and {\em without unstable critical
points}. This last condition, that avoids important mathematical difficulties
(that will be revisited in Sec.~V) was the only one considered in the ``classical"
literature on the subject. We will assume it throughout this Section.
Note, however, that each of $f_{\pm}(x)$ may possibly have one
{\em stable} critical point (of course, in view of the supposed continuity and absence of unstable critical points of $f_{\pm}(x)$, no more than one fixed point is allowed for each $f_{\pm}(x)$).
The only non-trivial situation that can therefore be considered in this context appears when each of the alternate ``$+$" and ``--" flows does have a {\em stable}
critical point, let us call them $x_+$, respectively $x_-$~\footnote{There are two other
possible situations, but they are both trivial:
(i) When one single flow has a stable critical point, the asymptotic distribution reduces to a $\delta$-peak at this point. (ii) When none of the two flows has fixed points, the initial distribution reduces progressively to zero throughout the real axis -- the particles escape to infinity. }:
\begin{equation}
f_{\pm}(x_{\pm})=0\,, \quad \mbox{with} \quad f'_{\pm}(x_{\pm})<0
\end{equation}
(the prime denotes the derivative with respect to $x$).
Due to the competition between these attractors, the asymptotic motion of the particles settles down in an alternate flow between the two attractors, a process called
{\em dynamic stability}, which leads to a nontrivial stationary probability distribution $P_{st}(x)$. Thus the attractors define the compact support of $P_{st}(x)$, see Fig.~\ref{figure8}.
The support depends on the amplitude of the DMN; however, it is clear that it does not depend on the transition rates. Indeed, due to the exponential distribution of the
waiting times in each state of the DMN, the particle can persist in
one of the flows until it reaches the corresponding attractor, whatever
its initial position with respect to this attractor.
\begin{figure}[h!]
\quad \vspace{1.5cm} \\
\centerline{\psfig{file=figure8.eps,width=10cm}}
\caption{The attractors $x_-$ and $x_+$ of the alternate flows $f_-(x)$, respectively $f_+(x)$ define the compact support of the stationary probability distribution $P_{st}(x)$ of the dichotomous flow. See the main text.}
\label{figure8}
\end{figure}
The stationary master equation for $P_{st}(x)$ can be easily solved, see e.g.
Ref.~\cite{horsthemke84}, and leads to the following expression
(for $x_- \leqslant x\leqslant x_+$):
\begin{eqnarray}
P_{st}(x)d x&=&{\cal N} \left(\frac{d x}{|f_+(x)|}+\frac{d x}{|f_-(x)|}\right)\times\nonumber\\
&& \times \left\{k\exp[-k\,T_+(x_-,x)]\right\}\;
\left\{k\exp[-k\,T_-(x_+,x)]\right\}\,,
\end{eqnarray}
where
\begin{equation}
T_{\pm}(u,x)=\int_u^x\frac{dz}{f_{\pm}(z)}\,,
\end{equation}
and ${\cal N}$ is a normalization factor.
The structure of this expression has a simple intuitive interpretation: \\
(a) Particles that are found in the interval $dx$ around $x$ can be either in the ``+" or in the ``--" flow, spending in $dx$ a time that is inversely proportional
to their velocity; hence the term $\left({d x}/{|f_+(x)|}+{d x}/{|f_-(x)|}\right)$.\\
(b) In order to reach the point $x$, particles: (i) either come from $x_-$ and are driven by the $f_+(x)$ flow (towards its attractor); this takes a time $T_+(x_-,x)$, and the probability for the DMN to persist in the $+$ state during this time is
$\left\{k\exp[-k\,T_+(x_-,x)]\right\}$; (ii) or come from $x_+$ and
are driven by the $f_-(x)$ flow; this takes a time $T_-(x_+,x)$
and happens with probability $\left\{k\exp[-k\,T_-(x_+,x)]\right\}$.
Near the attractors, $P_{st}(x)$ has a typical behavior
\begin{equation}
P_{st}(x) \sim \left|x-x_{\pm}\right|^{-1-k/f'_{\pm}(x_{\pm})}\,,
\end{equation}
which reflects the competition between two time scales, namely that of the switching
between the two flows and that of the dynamics in the vicinity of the attractors. More precisely: (a) If the transition rate $k$ is low (${k}/{|f'_{\pm}(x_{\pm})|}<1$), there is
an accumulation of particles at the stable fixed point (and this results in an integrable divergence of the probability distribution function); (b) If the transition rate $k$ is large,
the alternation of dynamics is sufficiently efficient in sweeping out the effect of the attractor: $P_{st}$ becomes zero at the borders, either with a divergent slope
(for $1<{k}/{|f'_{\pm}(x_{\pm})|}<2$) or with a zero slope for
${k}/{|f'_{\pm}(x_{\pm})|}>2$.\\
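For a concrete flow, the stationary density above is easily evaluated by numerical quadrature. The Python sketch below (ours) implements the formula literally; the linear flow $f(x)=-x$, $g(x)=1$, $A=1$, for which $f_{\pm}(x)=-x\pm 1$ and $x_{\pm}=\pm 1$, is only an illustrative choice:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def p_st(f_plus, f_minus, x_minus, x_plus, k, x_grid):
    """Stationary density of a dichotomous flow on (x_minus, x_plus),
    evaluated from the quadrature formula of the text and normalized
    numerically on the grid."""
    def T(f, u, x):
        return quad(lambda z: 1.0 / f(z), u, x)[0]
    vals = []
    for x in x_grid:
        w = 1.0 / abs(f_plus(x)) + 1.0 / abs(f_minus(x))
        vals.append(w * k * np.exp(-k * T(f_plus, x_minus, x))
                      * k * np.exp(-k * T(f_minus, x_plus, x)))
    vals = np.array(vals)
    return vals / (vals.sum() * (x_grid[1] - x_grid[0]))

xg = np.linspace(-0.99, 0.99, 399)
p = p_st(lambda x: 1.0 - x, lambda x: -1.0 - x, -1.0, 1.0, 2.0, xg)
print(p[len(xg) // 2], p[0])      # density at the center and near x_-
\end{verbatim}
Varying $k$ in this sketch reproduces the behaviors near the borders discussed above (integrable divergence for $k/|f'_{\pm}(x_{\pm})|<1$, vanishing density otherwise).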
Another point of interest is the location of the {\em maxima} $x_m$ of $P_{st}(x)$, and their dependence on the parameters of the noise. Why are these maxima important? Because (in view of the ergodicity of the system) they offer indications on the values of the stochastic variable that are most likely encountered in a finite-time
measurement on the system. The equation giving the extrema
\begin{equation}
f(x_m) - Dg(x_m) g'(x_m) + \tau_c \; \left[ 2 \, f(x_m) \, f'(x_m) -
\, f^2(x_m) \, \frac{g'(x_m)}{g(x_m)} \right]=0
\end{equation}
has several contributions: the first term in the left-hand side corresponds to the deterministic
(noiseless) steady-state. The second term is related
to the multiplicative nature of the DMN ($g'(x)\neq 0$), and persists in the
limit of a Gaussian white noise (where it is known as the ``Stratonovich spurious drift",
see, e.g, Ref.~\cite{vandenbroeck97}). Finally, the last term on the left-hand side
is due to the finite correlation time of the DMN~\footnote{ In relation with the already-mentioned differences in their statistical properties, note that this term is different from the
term corresponding to an Ornstein-Uhlenbeck process.}, and, of course, is absent
in the GWN-limit.\\
The nonlinearity of $f_{\pm}(x)$ leads to the possibility of multiple maxima of $P_{st}(x)$. Moreover, by modifying the parameters of the noise, one can vary the
position of these maxima, or can even create new maxima
(that are absent in the absence of the noise).
The macroscopic state of the system can thus undergo a
{\em qualitative} change under the influence of the noise.
Coloured-noise processes can give rise to states that disappear in
the limit of a white noise. These very simple
remarks are at the basis of the celebrated {\em noise-induced transitions},
see Ref.~\cite{horsthemke84} for a review. Note, however, that this change in the shape of $P_{st}(x)$ {\em does not lead to a breaking of ergodicity}
(roughly, the initial state of the stochastic system does not influence its asymptotic behavior). Also, sometimes, these noise-induced transitions reduce mathematically to a nonlinear change of the stochastic variable.
\subsection{The genetic model: a case-study of transition induced by DMN}
One of the best candidates to illustrate the concept of noise-induced transition is, without doubt, represented by the so-called {\em genetic model}, see Refs.~\cite{arnold78} and \cite{horsthemke84}.
Indeed, in the absence of the noise this model exhibits a unique
steady state that is stable: the system is not capable of any transitions under deterministic conditions.
We shall show that, depending on the parameters of the noise, new macroscopically observable states (in the sense described in the paragraph above)
can be generated -- i.e., {\em purely} noise-induced transitions
can take place in the system.
The equation of the model is
\begin{equation}
\dot{x}=(1/2 - x) +\lambda x(1-x)\,,
\label{gen}
\end{equation}
where $x(t)$ is the state variable (that takes values between $0$ and $1$),
and $\lambda$ is the control parameter which models the coupling of the
system to its environment.
This very simple model was initially
introduced on a purely theoretical basis, see Ref.~\cite{arnold78}.
A realization of it is given by the mean-field description of
the coupled auto-catalytic reactions:
\begin{equation}
A+X+Y
\left.
\begin{array}{c}
k_1\\
\rightleftarrows \\
k_2
\end{array}
\right.
2Y+C
\end{equation}
\begin{equation}
B+X+Y
\left.
\begin{array}{c}
k_3 \\
\rightleftarrows \\
k_4
\end{array}
\right.
2X+D
\end{equation}
The reactions conserve the total number of $X$ and $Y$ particles:
\begin{equation}
X(t)+Y(t)=N=\mbox{constant}
\end{equation}
Considering
\begin{equation}
\alpha=\frac{k_2C}{k_2C+k_4D}=1/2
\end{equation}
and
\begin{equation}
\lambda=\frac{k_3B+k_4D-k_1A-k_2C}{k_2C+k_4D}\,,
\end{equation}
one obtains the genetic equation~(\ref{gen}) for the variable $x=X/N$. The
concentrations $A$, $B$, $C$, and $D$ are externally-controlled and determine the value
of $\lambda$.
Later on it was shown
that this model can be used to describe quite realistically a mechanism of
genetic evolution in a two-genotype population -- and this is where the name of the model came from (see Ref.~\cite{horsthemke84} for detailed comments).
In this context, $x$ represents the frequency of one of the two genotypes.
The term $(1/2-x)$ corresponds to a mutation between genotypes,
that tends to equalize the frequency of their expression.
Finally, the term $\lambda x(1-x)$
corresponds to a natural selection mechanism that tends
to favor the genotype that is ``best adapted" to the environment;
$\lambda$ represents the ``selection rate", that is environment-dependent and
that can be either positive or negative.
The deterministic ($\lambda$ = constant) steady-state is just
\begin{equation}
x_{\mbox{det}}=[\lambda-1+(\lambda^2+1)^{1/2}]/(2\lambda)
\end{equation}
and, as already mentioned, one can easily verify that it is asymptotically and globally stable.
Let us now try to model the effects of a fluctuating environment
on the genotype dynamics.
This can be done by assigning a fluctuating
``selection rate" $\lambda$; for example $\lambda=\xi(t)$ can be a symmetric DMN
$\{\pm A,\, k\}$ (such that on the average no genotype is favored by the natural selection),
see Ref.~\cite{horsthemke84}.
The corresponding stochastic differential equation
\begin{equation}
\dot{x}=(1/2-x) +\xi(t) \,x(1-x)
\label{genstoc}
\end{equation}
leads to a stationary probability distribution function $P_{st}(x)$ that can be studied
in full analytical detail. Putting together the information on the support of $P_{st}(x)$ (the region between the attractors of the alternate ``+" and ``--" dynamics);
on its maxima; on its behavior (and that of its derivative, see above) near the borders, one can obtain the ``phase diagram" of the system in the plane of the parameters $k$ and $A$ of the DMN, see Fig.~\ref{figure9}.
\begin{figure}[h!]
\quad \vspace{1.5cm} \\
\centerline{\psfig{file=figure9.eps,width=12cm}} \vspace*{8pt}
\caption{The phase diagram of the genetic model driven by a symmetric DMN in the plane of the parameters $(k, \,A)$ of the DMN. The borderlines correspond, respectively, to:
(i) $A=2\sqrt{k-1}$ for $k\geqslant 1$ ; (ii) $A=\sqrt{k^2-1}$ for $k\geqslant 1$; (iii) $A=k$ for $k\geqslant 2$; and
(iv) $A=\sqrt{k^2/4-1}$ for $k\geqslant 2$. The shape of $P_{st}(x)$ for the
different regions is represented qualitatively, illustrating the peak-splitting and the peak-damping effects, see the main text.}
\label{figure9}
\end{figure}
The simplicity of the stochastic structure of the DMN (that was invoked in the Introduction)
allows us to decipher the two mechanisms at work in fashioning the shape of the stationary probability distribution $P_{st}(x)$. The first one is a {\em peak-damping mechanism}, which is a generic disordering effect of the noisy environment. The second
mechanism is a {\em peak-splitting mechanism}, signature of the two-valued DMN,
and source of the noise-induced qualitative changes in the shape of $P_{st}(x)$ (i.e., of the noise-induced transitions). Recall, once more, that there is no ergodicity breaking in the
system. Note, moreover, {\em the disappearance of the signatures of a
stochastic behavior for too large or too low transition rates of the noise,
which is a generic effect
for all dichotomous flows}. This is very simple to explain intuitively:\\
(a) In the first case (``large" $k$-s) the system ``does not have time"
to adapt to the rapidly-changing instantaneous value of the noise,
and the stochastic effects are simply smeared out.
(Hence the single deterministic peak of $P_{st}(x)$
in Fig.~\ref{figure9});\\
(b) In the latter case (``small" $k$-s) half of the systems of the statistical
ensemble are evolving along $f_{+}(x)$, and half along $f_{-}(x)$,
practically without interchange between these subensembles.
(Hence the two deterministic peaks of
$P_{st}(x)$ in Fig.~\ref{figure9}.)\\
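These two limits, as well as the intermediate regions of the phase diagram of Fig.~\ref{figure9}, can be explored by a crude Monte Carlo sampling of the stationary density. The Python sketch below (ours; the parameter values are purely illustrative) integrates Eq.~(\ref{genstoc}) with an Euler scheme and histograms the long-time positions:
\begin{verbatim}
import numpy as np

def genetic_hist(A, k, dt=1e-3, t_max=50.0, n_real=1000, bins=50, seed=2):
    """Crude sampling of the stationary density of the genetic model
    driven by a symmetric DMN (Euler scheme; the noise flips with
    probability k*dt per step, so dt must satisfy k*dt << 1)."""
    rng = np.random.default_rng(seed)
    x = np.full(n_real, 0.5)
    xi = A * rng.choice([-1.0, 1.0], size=n_real)
    for _ in range(int(t_max / dt)):
        x += ((0.5 - x) + xi * x * (1.0 - x)) * dt
        flip = rng.random(n_real) < k * dt
        xi[flip] = -xi[flip]
    hist, edges = np.histogram(x, bins=bins, range=(0.0, 1.0), density=True)
    return hist, edges

h_slow, _ = genetic_hist(A=2.0, k=0.2)    # slow switching: two peaks
h_fast, _ = genetic_hist(A=2.0, k=20.0)   # fast switching: single peak
\end{verbatim}
For slow switching the histogram should display the two peaks near the attractors discussed in point (b) above, while for fast switching it should collapse onto a single peak around the deterministic steady state.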
\subsubsection{What is the difference with a dichotomous periodic forcing?}
In this context, people also addressed the following question: what is the
difference (if any) in the transitions induced by random (DMN) and periodic dichotomous
perturbations? Let us consider this problem in the context of the genetic model,
following Ref.~\cite{doering85}, and take in Eq.~(\ref{genstoc})
$\xi(t)$ to be a dichotomous periodic forcing as shown in Fig.~\ref{figure10}.
\begin{figure}[h!]
\quad \vspace{1.5cm} \\
\centerline{\psfig{file=figure10.eps,width=10cm}} \vspace*{8pt}
\caption{A deterministic periodic dichotomous forcing $\xi(t)$ of amplitude $A$ and period $T$.}
\label{figure10}
\end{figure}
The stationary probability distribution $P_{st}(x)$
(corresponding to random initial conditions in the variable
of the system and in the phase of the perturbation)
is given (with obvious notations) by:
\begin{equation}
P_{st}(x)dx=\frac{1}{T}(dt_++dt_-)=
\frac{1}{T}\left(\frac{dx}{|f_+(x)|}+\frac{dx}{|f_-(x)|}\right)\,.
\end{equation}
One notices that its support depends on both the amplitude $A$ and, contrary to the case of the DMN, also on the period $T$ of the perturbation (which limits the time the system spends in the ``+", respectively ``--", states). But, more importantly, $P_{st}(x)$ is
{\em always bimodal and qualitatively independent of the parameters
of the perturbation}, see Fig.~\ref{figure11}. This means that {\em no transitions are induced by the periodic perturbation}.
\begin{figure}[h!]
\quad \vspace{1.5cm} \\
\centerline{\psfig{file=figure11.eps,width=10cm}} \vspace*{8pt}
\caption{Qualitative profile of the stationary probability distribution $P_{st}(x)$
for the case of a deterministic periodic dichotomous forcing $\xi(t)$ in the
genetic model. We considered two perturbations of the same amplitude $A$
and different periods $T_1$ and $T_2$. $P_{st}(x)$ is always bimodal and its support depends on both the amplitude and the period of the perturbation.}
\label{figure11}
\end{figure}
We can infer from this, as done in detail in Ref.~\cite{doering85},
the importance of the {\em exponentially rare,
``large excursions" of the DMN} (i.e., long intervals of time when it persists
in keeping the same value); these are reflected in the low-frequency part of the
power spectrum of the DMN perturbation. This part is missing in the spectral
distribution of the deterministic periodic forcing (which is, actually, a succession of $\delta$-peaks), and in some circumstances (like in the genetic model) it
can be important
in determining the overall behavior.
One is thus tempted to extrapolate and state that the behavior induced by the DMN is ``richer" than that induced by a periodic perturbation. We shall question this point further with the next case-study, by introducing an example of {\em coupled dichotomous flows}. This will also allow us to make a first step
towards the concept of
{\em noise-induced nonequilibrium phase transition} in spatially-extended systems.
\subsection{Overdamped parametric oscillators: a simple case-study for perturbation-induced instabilities}
Over the past two to three decades there has been an increasing interest in the nonequilibrium behavior and the role of noise in {\em spatially extended systems} modeled as {\em ensembles of simple dynamical units coupled to each other}. Noise was long thought to be only a source of disorder -- as suggested by many day-to-day situations, as well as by equilibrium statistical mechanics. However, it is by now acquired wisdom that in out-of-equilibrium, nonlinear systems noise can actually lead to the {\em creation of new, ordered states}, i.e., to {\em noise-induced phase transitions}.
These represent a result of the {\em collective, noisy, and generically nonlinear evolution} of the component units, and are absent in the absence of the noise.
Moreover, the collective behavior is usually {\em qualitatively} different from that of the single dynamical units.
The first step in the onset of a phase transition is the
instability of the reference state, and the collective effects are very important at this
stage, either by promoting the instability, or by preventing/delaying it.
We shall illustrate the onset of the instability on the example of
a system consisting of coupled {\em parametric oscillators}, which are widespread simple dynamical units~\footnote{Indeed, as discussed in
detail in Refs.~\cite{bena99,vandenbroeck00,bena02}, many linear and nonlinear, deterministic and stochastic systems that exhibit {\em energetic instabilities} can be modeled as parametric oscillators which undergo the parametric resonance phenomenon.}. For simplicity, we shall
consider only the inertialess case (and we refer the reader to Refs.~\cite{bena99,vandenbroeck00,bena02} for the case with inertia, and for further details and discussions).
Once this step is completed, we can proceed in Sec.~IV.D with a more ``standard" discussion of a {\em phase transition induced by} DMN in a system of coupled dynamical flows. We shall moreover address specific effects related to the colour of the noise
as compared to the GWN case.
\subsubsection{A single oscillator}
Let us start by considering first the dynamics of a single unit,
namely a single scalar variable decaying
at a rate that is parametrically modulated, see Ref.~\cite{vandenbroeck98}:
\begin{equation}
\dot{x}(t)=\left[-1+\xi(t)\right]x(t)\,.
\label{malt}
\end{equation}
In the spirit of the comparison presented
in Sec.~IV.B.1, we shall consider $\xi(t)$ to be either
a symmetric DMN $\{\pm A,\,k\}$, or a deterministic periodic perturbation (of given amplitude and period) with a uniformly distributed random initial phase.
One can obtain analytically the temporal behavior of the first moment $\langle x(t)\rangle$
(where $\langle ...\rangle$ designates the mean over the realizations of the DMN in the first case, and over the initial phase in the case of the deterministic perturbation), with the typical profiles illustrated qualitatively in Fig.~\ref{figure12}.
\begin{figure}[h!]
\quad \vspace{1.5cm} \\
\centerline{\psfig{file=figure12.eps,width=10cm}} \vspace*{8pt}
\caption{Qualitative features of the temporal
evolution of the first moment $\langle x(t)\rangle$ of the model
considered in Eq.~(\ref{malt}), for both a DMN and a deterministic
periodic perturbation. Note in the case of the periodic perturbation
the transient increase of $\langle x(t)\rangle$ above its initial
value.} \label{figure12}
\end{figure}
Whatever the parameters of the deterministic periodic perturbation~\footnote{Except for the case of the quenched disorder $T\rightarrow \infty$ with $A>1$.}, {\em after a transient increase above the initial value}, the mean tends asymptotically to zero: the absorbing state $x=0$ is a global attractor.
For the case of the DMN, if the amplitude $A >\sqrt{2k+1}$ then the mean diverges in time~\footnote{Of course, the mean goes asymptotically to zero for $A <\sqrt{2k+1}$.}, although the absorbing state is still a global attractor, i.e.,
the probability distribution $P(x,t)$ tends asymptotically to $P_{st}(x)=\delta(x)$. The reason for this
counter-intuitive behavior has to be looked for in the existence of the extremely rare realizations of the DMN that lead to extremely large excursions
of $x$ away from zero. Even if such realizations are exponentially few, their weight
in the calculation of the mean is exponentially large, and can thus finally lead to the reported divergence of the mean. Although it cannot be argued that this represents
a noise-induced transition in the sense discussed in Secs.~IV.A and IV.B, it is
nevertheless an essential qualitative change in the transient stochastic behavior of the system.
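A crude Monte Carlo experiment makes this point tangible. In the Python sketch below (ours; the parameter values are purely illustrative, with $A>\sqrt{2k+1}$) the typical trajectory decays while the ensemble mean grows:
\begin{verbatim}
import numpy as np

A, k, dt, t_max, n_real = 3.0, 1.0, 1e-3, 5.0, 20000   # A > sqrt(2k+1)
rng = np.random.default_rng(3)
x = np.ones(n_real)
xi = A * rng.choice([-1.0, 1.0], size=n_real)
for _ in range(int(t_max / dt)):
    x *= 1.0 + (-1.0 + xi) * dt        # Euler step for dx/dt = (-1 + xi) x
    flip = rng.random(n_real) < k * dt
    xi[flip] = -xi[flip]
print("median |x| =", np.median(np.abs(x)))   # typical trajectory: decays
print("mean    x  =", x.mean())               # ensemble mean: grows
\end{verbatim}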
\subsubsection{Coupled parametric oscillators}
Consider now $N$ such identical overdamped oscillators that are coupled in the simplest
possible way, namely an {\em all-to-all harmonic coupling}:
\begin{equation}
\dot{x}_i=[-1+{\xi_i(t)}]x_i-\sum_j K_{ij}(x_i-x_j)\,,
\end{equation}
with $i=1,...,N$ and $K_{ij}=K/N$. Let us take the thermodynamic limit $N\rightarrow \infty$ and write down the evolution equation for
a generic oscillator (dropping the subscript $i$ for simplicity of notation)~\footnote{We assumed
here, invoking the law of large numbers, that
$\langle x(t)\rangle = \displaystyle\sum_i x_i(t)/N$ is a self-averaging macroscopic intensive variable, and that its value coincides with the ensemble average over the realizations of $\xi(t)$.}:
\begin{equation}
\dot{x}(t)=[-1+{\xi(t)}]x-K[x-\langle x(t)\rangle]\,.
\end{equation}
One can then easily deduce the {\em instability border} that separates the asymptotic absorption regime $\langle x(t)\rangle \rightarrow 0$ from the
asymptotic explosive regime $|\langle x(t)\rangle |\rightarrow \infty$. This is illustrated on Fig.~\ref{figure13}
in the plane of the parameters $A$ (amplitude of the perturbation)
and $K$ (coupling constant), for a fixed value of the transition rate $k$ for the DMN, respectively of the
period $T=1/(2k)$ for the deterministic forcing.
\begin{figure}[h!]
\quad \vspace{1.5cm} \\
\centerline{\psfig{file=figure13.eps,width=10cm}} \vspace*{8pt}
\caption{The instability borders in the $(K,\,A)$ plane for an ensemble of globally, harmonically coupled overdamped parametric oscillators, with a DMN, respectively a deterministic periodic parametric perturbation (qualitative representation;
see the main text). Note that the instability is re-entrant with respect to the coupling strength $K$.}
\label{figure13}
\end{figure}
For the DMN, this border is simply given by:
\begin{equation}
A=\sqrt{1+2k +K}\;;
\end{equation}
therefore, the origin of the global instability of $\langle x(t)\rangle$
is very clear in this case: if each oscillator is unstable, the mean will also be unstable. However, note that {\em coupling has a stabilizing effect in this context}, which means that one needs a larger
amplitude $A$ of the noise in order to destabilize the center of mass of the oscillators.
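This border can be probed directly by simulating a finite ensemble of coupled oscillators and monitoring the empirical mean. The following minimal Python sketch (ours, not part of the cited treatment; the Euler scheme and all parameter values are purely illustrative) should display a decay of $|\langle x(t)\rangle|$ for $A<\sqrt{1+2k+K}$ and a growth above this border:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def mean_trajectory(A, K, k, N=5000, dt=1e-3, T=40.0, x0=1.0):
    """Euler integration of dx_i = [(-1 + xi_i) x_i - K (x_i - <x>)] dt,
    with independent DMN xi_i = +/-A switching at rate k."""
    x = np.full(N, x0)
    xi = A * rng.choice([-1.0, 1.0], size=N)
    means = np.empty(int(T / dt))
    for step in range(means.size):
        m = x.mean()
        means[step] = m
        flips = rng.random(N) < k * dt      # switching probability per time step
        xi[flips] *= -1.0
        x += dt * ((-1.0 + xi) * x - K * (x - m))
    return means

k, K = 1.0, 2.0                             # border at A_c = sqrt(1 + 2k + K) ~ 2.24
for A in (1.5, 2.5):
    m = mean_trajectory(A, K, k)
    print(f"A = {A}: |<x>| goes from {abs(m[0]):.2f} to {abs(m[-1]):.2e}")
\end{verbatim}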
For the periodic modulation, the instability border is given by
the following implicit equation:
\begin{eqnarray}
\frac{8A^2k K}{[(1+K)^2-A^2]^2}&&\left[\cosh\left(\frac{A}{4k}\right)-
\cosh\left(\frac{1+K}{4k}\right)\right]
\nonumber\\
&&=
\left[1-\frac{K(1+K)}{(1+K)^2-A^2}\right]\sinh\left(\frac{1+K}{4k}\right)\,.
\end{eqnarray}
So, in this example, the behavior is more spectacular in the case of the
deterministic periodic forcing (as compared to the DMN case).
Indeed, although {\em almost} every individual oscillator is asymptotically stable (i.e., $x(t) \rightarrow 0$ with probability almost 1), the mean nevertheless diverges. The reason is the initial transient increase of the mean (see the previous paragraph), which is due to the presence of a permanently changing pool of ``exceptional" individuals that make very large excursions away from zero. If the coupling is neither too weak nor too strong, it promotes this initial transient increase, and these ``exceptional" individuals then lead the mean. So, in this situation, coupling may have a destabilizing effect! As expected, however,
the transition is {\em re-entrant}~\footnote{Re-entrance (i.e., the fact that the transition appears only for a certain range of parameters of the noise or of the deterministic part of the system's dynamics) is generic in out-of-equilibrium phase transitions. However, it is rather rare in equilibrium situations (examples include some $XY$ helimagnets and insulator-metal transitions).} with respect to the coupling strength $K$
(too large a coupling does not allow for large excursions, while too weak a coupling does not allow the exceptional individuals to pass their role on to the others, i.e., does not allow the permanent refreshing of the leading pool).
This simple example allowed us to make an important step in understanding the origin
of the onset of a global instability that may lead to a phase transition in an extended system: it is an initial, transient, local instability in the system that is promoted and is ``not let to die out" by the coupling between the dynamical units of the system.
As discussed in a very pedagogical way in Ref.~\cite{vandenbroeck97}, this is
a generic mechanism behind the onset of nonequilibrium phase transitions~\footnote{An exception is discussed in Ref.~\cite{carillo03}.}.
This mechanism (although we shall not enter into further details) is also at work in the example we shall present in the next paragraph.
\subsection{Phase transitions induced by noise: a model and a comparison between DMN and GWN}
This model corresponds to a mean-field coupling between dichotomous flows driven multiplicatively by the DMN $\xi(t)$. The corresponding equation
for the self-averaging stochastic variable $x(t)$ is written:
\begin{equation}
\dot{x}(t)=f(x)+g(x)\xi(t)-K(x-\langle x\rangle)\,.
\label{PT}
\end{equation}
The stationary probability density depends in an intricate way on the mean value $\langle x\rangle$, namely
\begin{eqnarray}
P_{st}(x; {\langle x \rangle})&\sim&\left(\displaystyle\frac{1}{|f_+(x;{\langle x \rangle})|}+\displaystyle\frac{1}{|f_-(x;{\langle x \rangle})|}\right)\nonumber\\
&\times&\exp\left[-k\int^x\left(\frac{dz}{f_{+}(z;{\langle x \rangle})}+
\frac{dz}{f_{-}(z;{\langle x \rangle})}\right)\right]\,,
\end{eqnarray}
where
\begin{equation}
f_{\pm}(x;{\langle x \rangle})=f(x)\pm A g(x)- K(x-{\langle x \rangle})\,.
\end{equation}
The support of $P_{st}(x;{\langle x \rangle})$
is determined by $x_{\pm}$ (with, for example, $x_- \leqslant x \leqslant x_+$),
which are also functions of $\langle x \rangle$ determined by:
\begin{equation}
f_{\pm}(x_{\pm};{\langle x \rangle})=0\,.
\end{equation}
It follows therefore that the self-consistency condition for the mean:
\begin{equation}
{\langle x \rangle}=\displaystyle \int_{x_-}^{x_+}\,dx\; x\; P_{st}(x;\langle x \rangle)
\end{equation}
is highly nonlinear in $\langle x \rangle$ and can thus, in principle,
admit multiple solutions~\footnote{Just like in a mean-field like description of equilibrium situations.}.
This results in the possibility of true
{\em ergodicity- and symmetry-breaking phase transitions}, with the
asymptotic shape of the probability distribution depending on the initial conditions.
In this model, the phase transition takes place between a
disordered phase with $\langle x\rangle = 0$, and an ordered phase with
$\langle x\rangle \neq 0$. These nonequilibrium phase transitions are shown to exhibit
characteristics that are similar
to second-order equilibrium phase transitions~\cite{kim98,vandenbroeck94I,vandenbroeck97I},
with the mean $\langle x\rangle$ playing the role of an order parameter.
For concreteness, we
shall consider the following expressions for $f(x)$ and $g(x)$:
\begin{equation}
f(x)=-x(1+x^2)^2 \quad\mbox{and} \quad g(x)=1+x^2\,.
\label{PT1}
\end{equation}
This will allow us to compare this model, see Ref.~\cite{kim98}, with the situation when $\xi(t)$ is a GWN, which represents one of the first
models~\footnote{Although completely academic, this model was first introduced because of its simplicity: no simpler forms for $f(x)$ and $g(x)$ were found to
lead to phase transitions.} introduced in the literature to describe a nonequilibrium noise-induced phase transition, see Refs.~\cite{vandenbroeck94I,vandenbroeck94II,vandenbroeck97I}.
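A pedestrian way to explore this phase diagram numerically is to simulate the coupled ensemble itself, rather than solving the self-consistency condition for $\langle x\rangle$. The Python sketch below is only an illustration (function names and parameter values are ours, not taken from the cited works): it integrates $N$ globally coupled units with the above $f(x)$ and $g(x)$, each driven by an independent DMN, and uses the long-time ensemble mean as the order parameter.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def order_parameter(A, k, K, N=2000, dt=1e-3, T=100.0):
    """Mean-field model dx_i = f(x_i) + g(x_i) xi_i(t) - K (x_i - <x>),
    with f(x) = -x (1 + x^2)^2, g(x) = 1 + x^2 and independent DMN
    xi_i = +/-A of switching rate k.  Returns the ensemble mean averaged
    over the second half of the run."""
    x = rng.normal(0.5, 0.1, size=N)        # weak initial symmetry breaking
    xi = A * rng.choice([-1.0, 1.0], size=N)
    n_steps = int(T / dt)
    acc, n_acc = 0.0, 0
    for step in range(n_steps):
        m = x.mean()
        flips = rng.random(N) < k * dt
        xi[flips] *= -1.0
        x += dt * (-x * (1 + x**2) ** 2 + (1 + x**2) * xi - K * (x - m))
        if step > n_steps // 2:
            acc += x.mean()
            n_acc += 1
    return acc / n_acc

# scan the noise intensity D = A^2/(2k) at fixed k and K (illustrative values);
# a nonzero plateau of <x> signals the ordered phase, <x> ~ 0 the disordered one
k, K = 1.0, 10.0
for D in (0.5, 2.0, 10.0):
    A = np.sqrt(2.0 * k * D)
    print(f"D = {D:5.1f}  ->  <x> ~ {order_parameter(A, k, K):+.3f}")
\end{verbatim}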
Following the above-cited references~\footnote{Where the phase diagram was obtained analytically up to a point,
and afterwards by symbolic calculations.},
in Fig.~\ref{figure14} we represent {\em qualitatively}
the phase diagram of the model in the
plane of the parameters $K$ (the coupling constant) and $D$ (the intensity of the noise),
both for the DMN ($D=A^2/2k$) and a GWN. In both cases one obtains ordered and disordered
phases and transition lines between them, and also the phenomenon of {\em re-entrance with
respect to the intensity $D$ of the
noise.}
\begin{figure}[h!]
\quad \vspace{1.5cm} \\
\centerline{\epsfxsize=10.1cm\epsfbox{figure14a.eps}}
\hspace{-0.4cm}\centerline{\epsfxsize=10.1cm\epsfbox{figure14b.eps}}
\caption{Qualitative phase diagram in the plane $D$ (intensity of the noise) and $K$
(coupling constant) for the mean-field models described in the main text,
equations~(\ref{PT}), (\ref{PT1}).
Upper panel: $\xi(t)$
is a DMN with a fixed value of the transition rate $k$. Lower panel: $\xi(t)$ is a GWN.
In both cases the transition order-disorder is re-entrant with respect to the noise intensity $D$.
Note in the case of the DMN the multistability regime, as well as the re-entrance of the
phase transition with respect to the coupling strength $K$; both effects are absent in the case of a GWN.}
\label{figure14}
\end{figure}
But, besides these, DMN also induces a {\em multistability regime} of coexistence of two stable phases,
one ordered and one disordered. This phenomenon is absent for the GWN,
i.e., it is entirely induced by the {\em finite correlation time} (colour)
of the noise. Of course, connected to this multistability, one encounters a {\em hysteretic behavior of the
order parameter $\langle x\rangle$}. Moreover, in the case of the DMN one also obtains a
{\em re-entrance with respect to the coupling strength $K$}, which does not appear
for the GWN.
So the colour of the DMN can alter significantly the characteristics
of the phase transition as compared to the GWN case.
\section{DICHOTOMOUS FLOWS: STATIONARY SOLUTIONS WITH UNSTABLE CRITICAL POINTS}
\subsection{Generalities}
The problem of the {\em unstable} critical points of the asymptotic dynamics was
essentially raised in the context of the existence of a {\em directed stationary flow}
(e.g., ratchet-like) in the system, $\langle \dot{x} \rangle \neq 0$.
This is a physical situation that gave rise to a lot of interest
and research, since it represents one of the paradigms of out-of-equilibrium systems:
obtaining directed motion (i.e., useful work) ``out of fluctuations",
a situation that has to be contrasted with the equilibrium one, as encoded in
the second principle of thermodynamics. The asymptotic value
$\langle \dot{x} \rangle$ is thus the quantity of main interest in this
respect.
In order to understand the peculiarities related to the existence of unstable
critical points,
let us go back to a general dichotomous flow,
\begin{equation}
\dot{x}(t)=f(x)\,+\,g(x)\,\xi(t)\,.
\end{equation}
It is clear that any systematic asymptotic drift is impossible
as long as we consider (as we did till now)
open boundary conditions and/or only stable critical points
of the alternate $f_{\pm}(x)$ dynamics.
One thus needs a {\em periodic system}:
\begin{equation}
f_{\pm}(x+L)=f_{\pm}(x)\,,
\end{equation}
with $L$ the spatial period. But this is not enough yet
in order to have a directed current. One also needs the
{\em breaking of the detailed balance}
(i.e., the fact that the system is driven
out-of-equilibrium, and thus directed current
``out-of-noise" is not ruled out by
the second principle of thermodynamics); and also
a {\em breaking of the spatial inversion symmetry}
(so that current is not ruled-out by Curie's general
symmetry principle). These conditions are discussed in detail
in the review article Ref.~\cite{reimann02},
see also Sec.~V.C below.
But when $f_{\pm}(x)$ are periodic (and at least continuous over one
spatial period), then critical points (if any) {\em appear in pairs};
therefore, the presence of stable critical points automatically
implies the presence of an equal number of {\em unstable} critical points.
The physics and the mathematics of the problem are completely different when
the asymptotic dynamics allows the ``crossing" of the unstable fixed points
(i.e., the existence of a directed current in the system) as compared to the
case when no such points are present or ``crossed".
A very simple illustration of the changes that may appear in the
physics of the problem when unstable fixed points are ``crossed" is
given by the following example, see Fig.~\ref{figure15}. Consider an
overdamped particle in a symmetric periodic potential
$V(x)=V_0\cos(2\pi x/L+\varphi)$, that switches dichotomically
between $\pm V(x)$. In the absence of an external force, the
dynamics is restricted to the interval between two fixed points,
$[L(k\pi-\varphi)/2\pi,\,L((k+1)\pi-\varphi)/2\pi]$ ($k$ integer).
Thus, as far as the right-left symmetry is not broken, there is no
current flowing through the system, see the upper panel of the
figure. Suppose now that we apply a small external force $|F|<2\pi
V_0/L$ that breaks right-left symmetry but cannot induce ``escape"
from a finite interval in either of the separate $f_{\pm}=\mp 2\pi
V_0/L \sin(2\pi x/L+\varphi)+F$ dynamics (corresponding to the
effective potentials $\pm V(x)-Fx$). However, switching between
these dynamics allows ``crossing" of the unstable fixed points, and
thus the appearance of running solutions with a finite average
velocity. See the lower panel of Fig.~\ref{figure15}.
\begin{figure}[h!]
\quad \vspace{1.5cm} \\
\centerline{\epsfxsize=10cm\epsfbox{figure15a.eps}}
\quad \vspace{1.5cm} \\
\centerline{\epsfxsize=10cm\epsfbox{figure15b.eps}}
\caption{``Crossing" unstable critical points in the asymptotic dynamics
of a dichotomous flow
allows for a directed current in the system when a small symmetry-breaking force $F$
is applied. See the main text for further details.}
\label{figure15}
\end{figure}
From the point of view of the mathematics, the presence
of unstable fixed points induces some serious difficulties in the
calculation of the asymptotic probability distribution function
$P_{st}(x)$.
This is the main reason why,
with a few exceptions, see Refs.~\cite{zapata98,balakrishnan01II,czernik00}
(and also Refs.~\cite{behn89,behn93} for the related problem of the mean-first passage time), the problem of dichotomous flows with unstable critical points was generally not approached. We have recently made
progress towards identifying the source of spurious divergences that arise in the
usual analytical treatment of the problem, see Refs.~\cite{bena02I,bena03,bena05},
and we are now in a position to consider this situation as well.
\subsection{Solution to the mathematical difficulties}
Consider thus a periodic dichotomous flow $f_{\pm}(x)$,
with an asymptotic state characterized by a nonzero stationary
flow $\langle \dot{x}\rangle \neq 0$ through the system.
One notices (see Ref.~\cite{bena03} for further details) that the
master equation for the stationary probability distribution function
$P_{st}(x)$:
\begin{eqnarray}
&&\left[f_+(x)f_-(x)\right]P_{st}'(x)
+\left[{(f_+f_-)'}-{(f_+f_-)}
(\mbox{ln}|f_+-f_-|)'+k(f_++f_-)\right]{P_{st}(x)}\nonumber\\
&&\nonumber\\
&&\hspace{3cm}=\frac{\langle \dot{x} \rangle}{L}
\left[2k+\frac{f_+-f_-}{2}\;\left(\frac{f_++f_-}{f_+-f_-}\right)'\;\right]
\label{master1}
\end{eqnarray}
becomes {\em singular at the critical points of $f_{\pm}(x)$}.
The crucial point of the problem resides in finding the {\em correct}
solution to this equation, i.e., the one that has {\em acceptable
mathematical and physical properties} (see below, and also Ref.~\cite{bronshtein97}).
A blind application of the
method of variation of parameters to this differential equation
leads to a solution of the form:
\begin{equation}
P_{st}(x)=\displaystyle\frac{\langle\dot{x}\rangle}{L}\;\;
\displaystyle\left|\frac{f_{+}(x)-f_{-}(x)}{f_{+}(x)f_{-}(x)}
\right|
\left[CG(x,x_0)+K(x,x_0;x)\right],
\label{case13}
\end{equation}
where $C$ is a constant of integration that arises from the general solution to
the homogeneous part of Eq.~(\ref{master1}), the second contribution is the
particular solution of the full inhomogeneous equation, $x_0$ is an arbitrary
point in $[0,L)$, and we have defined the functions:
\begin{eqnarray}
G(u,v)&=&\exp\left\{-k\int_v^udz\left[\displaystyle
\frac{1}{f_{+}(z)}+\frac{1}{f_{-}(z)}\right]\right\},
\nonumber\\
K(u,v;w)&=&\int_v^udz\;
\left[\displaystyle
\frac{2k}{f_{+}(z)-f_{-}(z)}+
\left(\frac{f_{+}(z)+f_{-}(z)}{2\left[f_{+}(z)-f_{-}(z)\right]}\right)
'\right]
\nonumber\\
&&\times \mbox{sgn}
\left[\frac{f_{+}(z)f_{-}(z)}{f_{+}(z)-f_{-}(z)}\right] \;G(w,z)
\label{case14}
\end{eqnarray}
(sgn is the signum function).
The point is that using a {\em single} integration constant $C$ over the whole
period $[0,\,L]$ is not the right thing to do, since it leads to unphysical
divergences in the
expression of $P_{st}(x)$. More precisely, one has:\\
a). Integrable singularities at the
stable fixed points $x_s$ of $f_+$ or $f_-$:
\begin{equation}
P_{st}(x)\sim |x-x_s|^{-1+k/|f'_{(\pm)}(x_s)|} \, ,
\end{equation}
provided that the transition rate $k<|f'_{(\pm)}(x_s)|$.
Of course, these cause no problems, and are in agreement with
the intuitive picture
that there may be an accumulation of particles at the stable
critical point of one
of the two dynamics, provided that the switching to the
alternate dynamics is not sufficiently rapid to ``sweep" particles away from
this attractor.\\
b). {\em Non-integrable singularities at the unstable fixed points}
$x_u$ of $f_+$ or $f_-$:
\begin{equation}
P_{st}\sim |x-x_u|^{-1-k/|f'_{(\pm)}(x_u)|}\,.
\end{equation}
These are, of course,
{\em meaningless and one has to find a method to avoid their appearance}.
The fundamental idea, as explained in detail in Ref.~\cite{bena03},
is then to use {\em different integration constants} in the different intervals
between fixed points, and then to {\em adjust these constants} such that
$P_{st}(x)$ acquires the right behavior, namely:
\begin{itemize}
\item $P_{st}(x)$ is continuous or has at most integrable
singularities in $[0,\,L]$
\item The probability density is periodic:
\begin{equation}
P_{st}(x+L)=P_{st}(x)\,,
\end{equation}
with the possible exception of some singularity points (see the simple example
below).
This condition follows, of course, from the assumed
periodicity of the dichotomous flow.
\item The usual normalization condition one imposes
on a probability density (in this case, restricted
to a spatial period):
\begin{equation}
\displaystyle\int_0^L P_{st}(x)dx=1
\end{equation}
\end{itemize}
It can be shown that, under rather general conditions~\footnote{Actually, the
only ``stringent" condition is that $f_{\pm}(x)$ are continuous over the
spatial period. Under special circumstances, even this condition can be
relaxed, as it will become clear from the examples we shall be considering in
the next subsections.} these requirements ensure the
{\em existence and uniqueness}
of a well-defined stationary periodic probability density.
In order to understand the mathematical mechanism involved in
the elimination of the unphysical divergences appearing at the
unstable critical
points of the dynamics, let us consider the simplest possible situation, namely
when both $f_{\pm}(x)$ are continuous over the spatial period,
and only one of the two alternate flows has two critical points. For example,
see Fig.~\ref{figure16}, $f_{-}(x)$ has no fixed points, while $f_{+}(x)$ has a
stable $x_s$ and an unstable $x_u$ critical points (to fix ideas,
suppose $x_s<x_u$).
\begin{figure}[h!]
\quad \vspace{1.5cm} \\
\centerline{\epsfxsize=11cm\epsfbox{figure16.eps}}
\caption{Qualitative representation of the ``+" velocity profile $f_+(x)=f_+(x+L)$
of a dichotomous flow, exhibiting two critical
points: $x_s$ that is stable, and $x_u$ that is unstable. See the main text for further details.}
\label{figure16}
\end{figure}
Consider the expression of $P_{st}(x)$ in Eq.~(\ref{case13}),
for which we shall choose {\em different integration constants}
in each of the separate
intervals $[0,x_s)$, $(x_s,x_u)$, and $(x_u,L]$ between the fixed
points.
There is {\em exactly one} choice of this constant valid for both
$(x_s,x_u)$ and
$(x_u,L)$ such that the divergence at $x_u$ is removed, namely
$C=-K(x_u,x_0; x_0)$;
and another choice valid in the
interval $[0,x_s)$ that ensures the required continuity
and periodicity of $P_{st}(x)$.
The acceptable expression for the probability density is
therefore found to be:
\begin{equation}
P_{st}(x)=
\left\{
\begin{array}{ll}
\displaystyle\frac{\langle\dot{x}\rangle}{L}\;\;
\displaystyle\left|\frac{f_{+}(x)-f_{-}(x)}{f_{+}(x)f_{-}(x)}\right|
\left[K(L,x_u;L)G(x,0)+K(x,0;x_0)\right]
&\mbox{for}\,\,\,\,x \in [0,x_s)\\
\\
\displaystyle\frac{\langle\dot{x}\rangle}{L}\;\;
\displaystyle\left|\frac{f_{+}(x)-f_{-}(x)}{f_{+}(x)f_{-}(x)}\right|
K(x,x_u;x),&\mbox{for}\,\,\,\,x \in (x_s,L).
\end{array}
\right.
\label{case110}
\end{equation}
These expressions can be further simplified if one takes as the basic
period not
$[0,L]$, but $[x_s,x_s+L]$. Then the simple-looking, ``compact"
expression
\begin{equation}
P_{st}(x)=\displaystyle\frac{\langle\dot{x}\rangle}{L}\;\;\displaystyle
\left|\frac{f_{+}(x)-f_{-}(x)}{f_{+}(x)f_{-}(x)}\right|
K(x,x_u;x)
\label{compact}
\end{equation}
holds throughout
this new basic period. Moreover, $P_{st}(x)$ meets all the requirements
enumerated above. In particular, let us check its behavior at the fixed points
$x_s$ and $x_u$. For $x=x_u$ (the unstable fixed point that caused problems before!),
the above expression of $P_{st}(x)$ presents an indeterminacy of the type
``0/0": by applying {\em l'H\^opital's rule} one simply finds that $P_{st}(x)$
is {\em continuous at the unstable fixed point}:
\begin{equation}
\lim_{x\rightarrow x_u}P_{st}(x)=
\displaystyle\frac{\langle\dot{x}\rangle}{L}\;
\displaystyle\frac{
2k/f'_{+}(x_u)+1}
{f_{-}(x_u)\left[k/f'_{+}(x_u)+1\right]}\,.
\end{equation}
The behavior at the {\em stable} fixed point $x_s$ is continuous
\begin{equation}
\lim_{x\rightarrow x_s} P_{st}(x)=
\displaystyle\frac{\langle\dot{x}\rangle}{L}\;
\displaystyle
\frac{2k/|f'_{+}(x_s)|-1}
{f_{-}(x_s)\left[k/|f'_{+}(x_s)|-1\right]}
\end{equation}
for $k/|f'_{+}(x_s)|>1$, while it has a power-law integrable divergence for
$k/|f'_{+}(x_s)|<1$, and a marginal logarithmic divergence for
$k/|f'_{+}(x_s)|=1$. One also notices the periodicity of $P_{st}(x)$
(except at the eventual singularity point $x_s$).
Imposing the normalization of $P_{st}(x)$ over one spatial period, one obtains
finally the expression of the {\em asymptotic mean velocity} through the
system, which is the most important quantity for most practical purposes:
\begin{eqnarray}
\langle\dot{x}\rangle&=&L\left\{
\int_{x_s}^{x_s+L}dx\displaystyle
\left|\displaystyle\frac{f_{+}(x)-f_{-}(x)}{f_{+}(x)f_{-}(x)}\right|
\;\;\displaystyle\int_{x_u}^{x}dz\;\mbox{sgn}\left[
\displaystyle\frac{f_{+}(z)f_{-}(z)}{f_{+}(z)-f_{-}(z)}\right]
\right.\nonumber\\
&&\hspace{-1cm}\times \left.
\left[\displaystyle\frac{2k}{f_{+}(z)-f_{-}(z)}+
\displaystyle\left(\frac{f_{+}(z)+f_{-}(z)}{2\left[f_{+}(z)-f_{-}(z)
\right]}\right)^{'}\right]\exp\left[-k\int_z^x dw\;
\left(\frac{1}{f_{+}(w)}+\frac{1}{f_{-}(w)}\right)\right]
\right\}^{-1}.
\nonumber\\
&&
\label{asymptotic}
\end{eqnarray}
This mathematical reasoning can be generalized to any type of combination of
critical points of the alternate dynamics of the dichotomous flow, as shown in
detail in the above-mentioned references, and we shall illustrate it on three
examples below.
The first one refers to an interesting physical situation,
namely that of {\em hypersensitive response}, which is actually closely
connected with the system already described qualitatively
in Fig.~\ref{figure15}. The second one refers to a {\em rocking ratchet},
and the third one is an important particular realization of such a
ratchet, namely a {\em stochastic Stokes' drift} effect.
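Before turning to these examples, let us note that all the exact expressions of this section can be cross-checked by a brute-force integration of the corresponding dichotomous flow. A generic, minimal Python sketch (ours, not taken from the cited references) that estimates both the mean velocity and the histogram of $x$ modulo $L$ -- to be compared with Eqs.~(\ref{compact}) and (\ref{asymptotic}) for any concrete choice of $f_{\pm}(x)$ -- reads:
\begin{verbatim}
import numpy as np

def simulate_dichotomous_flow(f, g, A, k, L, x0=0.0, dt=1e-3, T=2000.0, seed=0):
    """Euler integration of xdot = f(x) + g(x) * xi(t), where xi(t) is a
    symmetric DMN of amplitude A and transition rate k.  Returns an estimate
    of the mean velocity <xdot> = (x(T) - x(0)) / T and a normalized
    histogram of x modulo L."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    x, s = x0, rng.choice([-1.0, 1.0])
    folded = np.empty(n_steps)
    for i in range(n_steps):
        if rng.random() < k * dt:          # DMN switching event
            s = -s
        x += dt * (f(x) + A * s * g(x))
        folded[i] = x % L
    drift = (x - x0) / T
    hist, edges = np.histogram(folded, bins=200, range=(0.0, L), density=True)
    return drift, hist, edges
\end{verbatim}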
\subsection{Nonlinear and hypersensitive response with DMN}
Generally and somewhat loosely speaking,
the term {\em hypersensitive response} refers to a
large and highly nonlinear response of a noisy,
out-of-equilibrium system to a
small, directed external forcing. This phenomenon
was discovered rather recently, and a lot of
attention was given to several of its variants, both
theoretically, see
Refs.~\cite{tarlie98,berdi98,ginzburg98,ginzburg99I,ginzburg99II,ginzburg01,geraschenko01,mankin03}, and experimentally,
see Refs.~\cite{geraschenko98,geraschenko00,ginzburg02,fateev02,ginzburg03}.
We have recently introduced a simple model of dichotomous flow that exhibits
such a hypersensitive behavior, see Refs.~\cite{bena02I} and \cite{bena05}.
It describes an overdamped particle that switches dichotomically between a
symmetric potential $V(x)$ and its negative, as represented schematically in
Fig.~\ref{figure15}. When the symmetry of the system is slightly broken by a
small directed external force $F$, the system responds with a nonlinear
drift~\footnote{Although very simple and seemingly robust, this particular
system has not been
realized experimentally yet. We are currently investigating the possibility of
its realization using a SQUID.}.
The dynamics can be described by the following stochastic equation (with the
DMN acting multiplicatively):
\begin{equation}
\dot{x}=F+\xi(t)v(x)\,,
\label{hyper1}
\end{equation}
where $v(x)=-V'(x)$, with $v(x)=v(x+L)$,
and (through a rescaling of $v(x)$) the DMN $\xi(t)=\pm 1$.\\
(a) When the external force is sufficiently large, {\em there is no fixed point
in either $f_{\pm}(x)$ dynamics} [$F^2-v^2(x)\neq 0$ for any $x\in [0,L)$]. By
integrating the stationary master equation~(\ref{master1})
corresponding to Eq.~(\ref{hyper1}),
one obtains the following expression for the stationary probability density,
\begin{equation}
P_{st}(x)=
\frac{\langle\dot{x}\rangle}{LF}
\left\{1 + \frac{v(x)\displaystyle\int_x^{x+L}dz\;
v'(z)\exp\left[-\int_z^x dw \frac{2kF}
{F^2-v^2(w)}\right]}{[F^2-v^2(x)]
\left\{\exp\left[\displaystyle\int_0^L
dz \frac{2kF}{F^2-v^2(z)}\right]-1
\right\}}
\right\}\,.
\label{hyper3}
\end{equation}
By imposing the normalization condition of $P_{st}(x)$ over $[0,\,L)$,
one obtains finally the asymptotic mean velocity:
\begin{equation}
\frac{\langle\dot{x}\rangle}{F}=
\left\{1+\frac{\displaystyle\int_0^Ldx\,\frac{v(x)}{F^2-v^2(x)}
\int_x^{x+L}dz\;v'(z)\exp\left[-\int_z^x
dw \frac{2kF}{F^2-v^2(w)}\right]}
{L\left\{\exp\left[\displaystyle\int_0^L dz \frac{2kF}{F^2-v^2(z)}
\right]-1\right\}}\right\}^{-1}\,.
\label{hyper4}
\end{equation}\quad \\
(b) Let us consider the more interesting case of {\em small forcing},
when the {\em asymptotic dynamics has critical points}. In order to fix ideas
(and without losing any relevant element), we take $v(x)$ continuously
decreasing in $[0,\,L/2]$ and symmetric about $L/2$, $v(x+L/2)=-v(x)$.
Then $P_{st}(x+L/2)=P_{st}(x)$, which allows us to concentrate on a
half-period. In this case, the ``--" dynamics has an unstable fixed point
$x_u$, and the ``+" dynamics has a stable critical point $x_s$ (with $x_s >x_u$)
in $[0,\,L/2]$. According to the general discussion in the preceding section,
one obtains the following physically acceptable solution in
the interval $[x_s-L/2, \,x_s]$ (that can be afterwards extended by periodicity
to the whole real axis):
\begin{eqnarray}
P_{st}(x) &=&
\displaystyle\frac{\langle\dot{x}\rangle}{LF}
\left\{1+\displaystyle\frac{v(x)}{|F^2-v^2(x)|}
\displaystyle\int_{x_u}^xdz\;
\mbox{sgn}\left[F^2-v^2(z)\right]v'(z)
\right. \nonumber\\
&&\nonumber\\
&&\hspace{3cm}\times\,
\left.\exp\left[-\displaystyle\int_z^x
dw\frac{2kF}{F^2-v^2(w)}\right]\right\}\,.\label{hyper6}
\end{eqnarray}
At the unstable fixed point $x_u$, the probability
density is continuous,
\begin{equation}
\lim_{x\searrow x_u}P_{st}(x)
=\lim_{x\nearrow x_u}P_{st}(x)
=\frac{\langle\dot{x}\rangle}
{LF\left\{1-\displaystyle\frac{1}{2(k/|v'(x_u)|+1)}\right\}}\,.
\end{equation}
At the stable fixed point $x_s$, depending on the transition rate $k$, the
probability density is either continuous,
\begin{equation}
\lim_{x\nearrow x_s}P_{st}(x)
=\lim_{x \searrow (x_s-L/2)}P_{st}(x)
=\frac{\langle\dot{x}\rangle}{LF\left\{1+\displaystyle\frac{1}{2(k/|v'(x_s)|-1)}\right\}}
\end{equation}
(for $k/|v'(x_s)|>1$), or divergent but integrable
(for $k/|v'(x_s)| \leqslant 1$).
It is the presence of these fixed points, and in particular the divergences of
$P_{st}(x)$, that causes the highly nonlinear conductivity of the system, as discussed below.
Finally, from the normalization of $P_{st}(x)$, the average
asymptotic velocity is obtained as
\begin{eqnarray}
\frac{\langle\dot{x}\rangle}{F}&=&
\left\{1+\frac{2}{L}\displaystyle\int_{x_s-L/2}^{x_s}dx\;
\frac{v(x)}{|F^2-v^2(x)|}\right.
\displaystyle\int_{x_u}^xdz\;
\mbox{sgn}\left[F^2-v^2(z)\right]v'(z)\nonumber\\
&&\nonumber\\
&&\hspace{5cm}\times\,\left.
\exp\left[-\displaystyle\int_z^xdw\frac{2kF}{F^2-v^2(w)}
\right]\right\}^{-1}\,.
\label{hyper11}
\end{eqnarray}
These results for $P_{st}$ and the mean current are general and exact.
However, they still involve triple integrals, and are too complicated to offer a
picture of what is going on in the system. We shall thus consider further the
particular case of a {\em piecewise profile of $v(x)$}, as represented in
Fig.~\ref{figure17}.
\begin{figure}[h!]
\quad \vspace{1.5cm} \\
\centerline{\epsfxsize=12.cm\epsfbox{figure17.eps}}
\caption{Piecewise linear profile $v(x)$ for the hypersensitive response model,
see Eq.~(\ref{hyper1}) in the
main text. A small applied force $F$ induces critical points in both ``+" and ``--" dynamics
($x_u$ is an unstable fixed point of the ``--" dynamics, while $x_s$ is a stable fixed point of the ``+" dynamics). Note the symmetry of the system with respect to $x=L/2$.}
\label{figure17}
\end{figure}
In this case, the integrals can be evaluated explicitly, and it is found that
the behavior of the system is extremely rich.
The response $\langle \dot{x}\rangle$ to the external perturbation
$F$ is highly-dependent, in a non-monotonic way,
on the transition rate $k$ of the DMN,
more precisely on the control parameter $\alpha=lk/v_0$
(for the significance of $l$ and $v_0$ refer to Fig.~\ref{figure17}).
One notices the existence of four different regimes for the response,
as illustrated in Fig.~\ref{figure18}. \\
\begin{figure}[h!]
\quad \vspace{1.5cm} \\
\centerline{\epsfxsize=12.cm\epsfbox{figure18.eps}}
\caption{The mean asymptotic velocity $\langle \dot{x}\rangle/v_0$
as a function of the external driving $F/v_0$ for different values of the
control parameter $\alpha=lk/v_0$ (for the meaning of $l$ and $v_0$, see Fig.~\ref{figure17}). The solid lines are the corresponding
result of the theory, and the
symbols represent the result of averaging over
numerical simulations of the dynamics of an ensemble of 20000 particles.
The dashed line corresponds to the linear response $\langle \dot{x}\rangle=F$.
There are four different regimes
for the response of the system to the external forcing $F/v_0$, see the main text.
The inset is a detail of the region around the origin (corresponding to the hyper-nonlinear regime).
We are grateful to Prof. R. Kawai for providing us the data for this plot.
See also Fig.~5 in Ref.~\cite{bena05}.}
\label{figure18}
\end{figure}
(a) A first trivial {\em linear response} regime corresponds
to very high applied forces
$F/v_0 \gg 1$, when the details of the substrate potential $\pm V(x)$
are ``forgotten", and $\langle \dot{x}\rangle\approx F$. All curves in
Fig.~\ref{figure18} approach this limit with increasing $F$.\\
(b) A second {\em linear response} regime $\langle \dot{x}\rangle\approx F$
appears when $\alpha \gg 1$, i.e.,
for a very high transition rate of the DMN, such that the effects of the fluctuating
forces are smeared out. This regime is visible in Fig.~\ref{figure18} for $\alpha=1$.\\
In neither of these regimes does the response depend on the characteristics of the DMN.
The following two regimes are, on the contrary, far from trivial and appear only
in the presence of critical points of the alternate $f_{\pm}(x)$ dynamics.\\
(c) The {\em adiabatic regime} of constant response
appears for very slow switching rates $k$, more precisely when in-between two flips of
the potential (refer to Fig.~\ref{figure15}) the particles have enough time to move
between two successive extrema of the potential, and possibly to wait
in a minimum of the potential till the next flip, which will put them in motion again.
Therefore, the condition for this regime
is that the average time between switches $k^{-1}$ is much longer than the typical escape
time from a region close to a maximum of the potential, $\tau\sim -(l/v_0) \mbox{ln}f$ (where $f=F/v_0$ is the reduced force);
therefore: $1>f \gg \exp(-1/\alpha)$.
Then the mean velocity is simply half of the spatial period of the substrate potential
divided by the mean switching time,
(i.e., it is independent of the
applied force $F$ and is directly proportional to $k$):
\begin{equation}
\langle \dot{x}\rangle\approx lk/v_0\,.
\end{equation}
This regime is clearly visible in Fig.~\ref{figure18} for
$\alpha=0.01$.\\
(d) Finally, the {\em hyper-nonlinear} regime which is realized for small forcing,
$f <\exp(-1/\alpha)\ll 1$. In this case, the particles manage to advance to the next
minimum of the potential only in the exponentially rare cases when the DMN persists
in the same state for sufficiently long time, much longer than both
$k^{-1}$ and $\tau$ (see again Fig.~\ref{figure15} and see above for the meaning of $\tau$).
The mean velocity then falls
rapidly to zero with decreasing $F$:
\begin{equation}
\langle \dot{x}\rangle \approx \frac{v_0^2 L}{8 l^2 k\;\mbox{ln}^2(F/v_0)}\,,
\end{equation}
i.e., it is inversely proportional to $k$, and the corresponding
diverging susceptibility
\begin{equation}
\chi=\displaystyle \left.\frac{\partial \langle\dot{x}\rangle}{\partial F}\right|_{F=0} \rightarrow \infty
\end{equation}
indicates the highly-nonlinear and sensitive character of the response in this region.
This regime appears for all the values of $\alpha$ and sufficiently small $F$,
as seen in the inset of Fig.~\ref{figure18}.
We are therefore again in a situation that is contrary to the intuition and
to what is usually encountered in
equilibrium systems, namely {\em a strongly nonlinear
response for small forcing,
and a linear response for large forcing}.
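These regimes can also be made tangible by a direct simulation of Eq.~(\ref{hyper1}). The Python sketch below is purely illustrative: it assumes a triangular profile ($v(x)$ piecewise linear everywhere, with slope magnitude $v_0/l$ and $l=L/4$) as a stand-in for the profile of Fig.~\ref{figure17}, and prints the estimated response $\langle\dot{x}\rangle/F$ for a few values of the driving.
\begin{verbatim}
import random

def triangular_v(x, v0=1.0, L=1.0):
    # piecewise-linear stand-in for Fig. 17: v decreases from +v0 to -v0
    # on [0, L/2] and obeys v(x + L/2) = -v(x), so that l = L/4
    y = x % L
    if y < L / 2:
        return v0 * (1.0 - 4.0 * y / L)
    return -v0 * (1.0 - 4.0 * (y - L / 2) / L)

def mean_velocity(F, k=0.5, v0=1.0, L=1.0, dt=1e-3, T=2000.0, seed=0):
    # Euler integration of xdot = F + xi(t) v(x), with symmetric DMN xi = +/-1
    rng = random.Random(seed)
    x, s = 0.0, 1.0
    for _ in range(int(T / dt)):
        if rng.random() < k * dt:
            s = -s
        x += dt * (F + s * triangular_v(x, v0, L))
    return x / T

for F in (0.5, 0.2, 0.05):                  # from moderate to weak forcing
    print(f"F = {F:4.2f}   <xdot>/F = {mean_velocity(F) / F:6.3f}")
\end{verbatim}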
\subsection{Ratchets with DMN}
One of the paradigms of out-of-equilibrium systems
(with an overwhelming literature in recent years) is the {\em ratchet effect}.
As already briefly mentioned above, a {\em ratchet} is,
roughly speaking, a device
that allows one to extract work (i.e., directed transport) out of fluctuations.
Although one can think of macroscopic ratchets
(e.g., self-winding wrist-watches,
wind-mills), more interesting from the conceptual point of view are
the {\em microscopic rectifiers}, for which microscopic thermal
fluctuations are relevant. Indeed, while
the second law of
thermodynamics rules out directed transport (apart from transients) in a
spatially-periodic system in contact with a single heat bath~\footnote{A
classical illustration is given by the Smoluchowski-Feynman ratchet-and-pawl
device in contact with a single thermostat, see Ref.~\cite{feynman}.},
there is no such fundamental law that prohibits stationary
directed transport in a system driven out-of-equilibrium by a
deterministic or stochastic forcing.
Such a driving forcing can be provided, for example,
by a DMN that can act either multiplicatively, or additively.
Besides the {\em breaking of detailed
balance}, a further indispensable requirement for directed transport
is {\em breaking of spatial symmetry}~\footnote{Such that directed current is
not ruled out by Curie's symmetry principle, ``If a certain phenomenon is not
ruled-out by symmetries, then it may/will appear".},
see Ref.~\cite{reimann02}.
There are three main possible mechanisms of symmetry-breaking, namely
(i) a built-in asymmetry of the system (in the absence of the driving
perturbation); (ii) an asymmetry induced by the perturbation; (iii) a
dynamically-induced asymmetry, as a collective effect,
through an out-of-equilibrium symmetry-breaking
phase transition.
The case of an out-of-equilibrium driving by a {\em multiplicative} DMN
corresponds to the so-called {\em flashing ratchet}:
generically, an overdamped Brownian particle jumps dichotomously,
at random, between two asymmetric periodic potentials $V_{\pm}(x)$:
\begin{equation}
\dot{x}(t)=-V'(x)[1+\xi(t)]+\xi_{GWN}(t)\,,
\end{equation}
where $\xi(t)$ is the DMN $\pm A$ with transition rate $k$, $\xi_{GWN}$ is a Gaussian white noise, and, of course, $V_{\pm}(x)=V(x)(1\pm A)$.
The particular case of $V_{-}(x)=0$, i.e., a flat potential corresponding
to a free diffusion in the ``--" dynamics, is called {\em on-off ratchet}.
This results in a net flow of the particles,
in a direction that is determined by the asymmetry of $V(x)$.
Various models, aspects, and experimental realizations of flashing or on-off ratchets,
including sometimes the effect of inertia,
were discussed in Refs.~\cite{bug87,ajdari92,astumian94,prost94,rousselet94,chauwin95,faucheux95,mielke95,gore97,kula98I,chen99}, see also Ref.~\cite{reimann02}.
Systems driven out of equilibrium by an {\em additive} DMN belong to the
class of the so-called {\em rocking ratchets}:
an asymmetric basic potential $V(x)$ is rocked by a zero-mean additive force
(a DMN $\xi(t)$ in the cases of interest to us):
\begin{equation}
\dot{x}(t)=-V'(x)+\xi(t) +\mbox{possibly}\; \xi_{GWN}(t)\,.
\end{equation}
This leads generically to an
asymmetry in the nonlinear response of the system, and thus to a
systematic (directed) motion, be it in the presence or in the absence of a thermal noise.
Inertia of the particles was also found to have an important effect on the direction of the drift.
The literature on various model-realizations and applications is huge, and we cite here
only a few references,~\cite{magnasco93,svoboda93,doering94,astumian94,chialvo95,mielke95,millonas96,kula96,zapata96,berdichevsky97,park97,zapata98,kula98,kula98I,nikitin98,arizmendi98,li98},
and also Ref.~\cite{reimann02} for further examples.
As an illustration, we propose here a very simple analytically solvable model
of a {\em rocking ratchet}, see Ref.~\cite{bena03} for a detailed discussion.
It is described by the following stochastic differential equation:
\begin{equation}
\dot{x}(t)=f(x)+\xi(t)\,,
\label{rock1}
\end{equation}
with an {\em additive and asymmetric} DMN $\xi(t)$,
that takes the values $\pm A_{\pm}$ with transition rate $k$ between these states.
In order to fix ideas, we shall suppose $A_- > A_+ >0$.
Here
\begin{equation}
f(x)=-V'(x)= \left\{
\begin{array}{ll}
+f_1\,,&x\in[0,\,L_1)\\
-f_2\,, &x\in[L_1,L_1+L_2)
\end{array}
\right.
\end{equation}
with $f(x)=f(x+L)\,,\quad L=L_1+L_2$, and $0<f_2<f_1$.
This corresponds to an overdamped particle gliding in a rocked sawtooth potential,
as depicted qualitatively in Fig.~\ref{figure19}.
\begin{figure}[h!]
\quad \vspace{1.5cm} \\
\centerline{\epsfxsize=11cm\epsfbox{figure19.eps}}
\caption{Effective potentials for the rocking-ratchet model in Eq.~(\ref{rock1}). Depending on the values $\pm A_{\pm}$ of the DMN forcing, one may have running solutions in both $f_{\pm}$ dynamics (the solid lines $V(x)\mp A_{\pm}x$), or running solutions in only one of the alternate dynamics (dashed lines), or no running solutions at all. We also represented
the unperturbed basic potential $V(x)$.}
\label{figure19}
\end{figure}
It is obvious that three distinct situations are possible, depending on the values
$\pm A_{\pm}$ of the DMN. In all these cases one can obtain closed analytical expressions for the stationary probability density $P_{st}(x)$ and the mean asymptotic velocity $\langle \dot{x}\rangle$. We refer the reader to Ref.~\cite{bena03} for the exact results, while here we shall comment on a few features. One can have: \\
(a) A regime of {\em strong forcing}, when there is no critical point in any of the
alternate $f_{\pm}(x)$ dynamics, and running solutions appear for both tilts.
With our conventions, this corresponds to $A_->f_1$ and $A_+>f_2$.
This results in a non-zero current through the system, determined by the interplay between the characteristics of the noise and those of the basic potential $V(x)$ (in particular, its bias, if any).
One can also consider the GWN limit, for which already known results~\cite{risken84} are recovered
in a very simple way. In particular, the current through the system is strictly determined by the bias of $V(x)$, such that, when the deterministic potential is unbiased, one recovers
the equilibrium state with a Boltzmann distribution and no current.
\\
(b) The regime of {\em intermediate forcing}, when there are critical points in only one
of the alternate dynamics. In our case, this happens when $A_->f_1$ and $A_+<f_2$,
and only the ``+" dynamics has critical points.
It is obvious that the sign of the flow is determined by that of the dynamics without fixed points (``--" in our case); sometimes the current may thus be opposite to
the bias of the basic potential $V(x)$.
One notices the possibility of
current reversal when varying the amplitude of the noise
(at fixed $k$), i.e., when passing from regime (a) to regime (b).
As a limiting case of both (a) and (b) regimes, one can consider the white
shot-noise limit (see Sec.~I.A), and recovers easily the results of Ref.~\cite{czernik00}.\\
(c) Finally, the regime of {\em weak forcing}, when both alternate dynamics have critical points. There is not too much interest in it, since there is no flow through the system.
Additional thermal noise is needed to generate rectified motion, and this problem has been addressed (mainly in the case of adiabatically slow forcing) in Refs.~\cite{magnasco93,astumian94,doering94,kula96,kula98,luczka97}.
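As a complement to the exact results of Ref.~\cite{bena03}, regimes (a) and (b) above can be illustrated by a direct integration of Eq.~(\ref{rock1}); in the following minimal Python sketch the values of $f_1$, $f_2$, $L_1$, $L_2$, $A_{\pm}$ and $k$ are purely illustrative:
\begin{verbatim}
import random

def sawtooth_force(x, f1=2.0, f2=1.0, L1=1.0, L2=2.0):
    # f(x) = -V'(x): +f1 on [0, L1), -f2 on [L1, L1+L2);
    # f1*L1 = f2*L2 makes the unperturbed sawtooth potential unbiased
    y = x % (L1 + L2)
    return f1 if y < L1 else -f2

def ratchet_velocity(A_plus, A_minus, k=1.0, dt=1e-3, T=2000.0, seed=0):
    # xdot = f(x) + xi(t), with the asymmetric DMN xi taking the values
    # +A_plus and -A_minus and switching between them with rate k
    rng = random.Random(seed)
    x, state = 0.0, +1
    for _ in range(int(T / dt)):
        if rng.random() < k * dt:
            state = -state
        xi = A_plus if state > 0 else -A_minus
        x += dt * (sawtooth_force(x) + xi)
    return x / T

# regime (a): A_- > f1 and A_+ > f2 (running solutions in both dynamics);
# regime (b): A_- > f1 and A_+ < f2 (critical points in the "+" dynamics only)
for A_plus in (1.5, 0.5):
    print(f"A_+ = {A_plus}, A_- = 3.0  ->  <xdot> ~ {ratchet_velocity(A_plus, 3.0):+.3f}")
\end{verbatim}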
\subsection{Stochastic Stokes' drift}
{\em Stokes' drift} refers to the systematic motion that a tracer acquires in a viscous fluid under the action of a longitudinal wave traveling through this fluid,
see the original reference~\cite{stokes47}. The {\em deterministic effect} (that does not account for the fluctuations or perturbations in the system) has a simple intuitive explanation, as illustrated through the example of Fig.~\ref{figure20}~\cite{vandenbroeck99}.
\begin{figure}[h!]
\quad \vspace{1.5cm} \\
\centerline{\epsfxsize=10.cm\epsfbox{figure20.eps}}
\caption{Schematic representation of the deterministic Stokes' drift for an overdamped particle driven by a longitudinal square-like wave, see the main text.}
\label{figure20}
\end{figure}
Consider a longitudinal square-like wave of wavelength $L$,
propagating with velocity $v$, and an overdamped
tracer particle, that is entrained with a force $f=\dot{x}=bv$ while in the
crest part of the wave, and $f=\dot{x}=-bv$ while in the trough part (with $0<b<1$ the situation of physical relevance). The suspended particle spends a longer time in the regions of the wave train where it is driven in the direction of propagation of the wave, namely
$t=L/[2(1-b)v]$, than the time it spends in those regions where the drag force acts in the opposite direction, namely $t=L/[2(1+b)v]$. Therefore, the particle is driven on the average in the direction of the wave propagation, with $v_0=b^2v$, the deterministic value of the Stokes' drift.
This effect has been studied in various practical contexts, ranging from the motion of tracers in meteorology and oceanography, to the doping impurities in crystal growth, see
the citations in Refs.~\cite{bena00,bena05}.
Recent studies, see Refs.~\cite{janson98,vandenbroeck99,bena00,bena05}
and references therein, show the importance the {\em stochastic effects}
may have on Stokes' drift. The thermal diffusion of the dragged particles,
as well as the application of a coloured external perturbation modify markedly both the {\em direction} and the {\em magnitude} of the drift velocity.
We introduced~\cite{bena00,bena05} a very simple,
analytically tractable model for a stochastic Stokes' drift,
described by a dichotomous flow as:
\begin{equation}
\dot{x}(t)=f(x-vt) +\xi(t)\,.
\end{equation}
Here $f(x-vt)$ corresponds to the block-wave represented in Fig.~\ref{figure20},
and $\xi(t)$ is a symmetric DMN of values $\pm A$ and transition rate $k$.
One can perform a simple transformation of variables
(which corresponds to going to the wave co-moving frame),
$y=x-vt$, through which the model can be mapped onto a {\em rocking ratchet} problem (as described in the previous section, with an asymmetric basic sawtooth potential and an additive DMN):
\begin{equation}
\dot{y}(t)=F(y)+\xi(t)\,,
\end{equation}
with $F(y)=-(1-b)v$ for $y \in[0,\,L/2)$ and $F(y)=-(1+b)v$ for $y \in [L/2,\,L)$.
As discussed previously, the behavior
of the system, in particular the solution of the associated master equation for the stationary probability density $P_{st}(x)$ and the asymptotic drift velocity $\langle \dot{y} \rangle =
\langle \dot{x}\rangle -v$, depends on whether or not there are critical
points in the dichotomous dynamics.
There are two important effects that appear due to the stochastic DMN forcing:
(i) {\em the enhancement of the drift as compared to its deterministic value}; (ii)
the possibility of {\em drift reversal} when modifying the amplitude of the noise.
Refer to~\cite{bena00,bena05} for the detailed calculations.
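As a minimal numerical illustration (with purely illustrative parameter values, not reproducing the calculations of the cited references), one can integrate the co-moving-frame equation above and compare the estimated drift with its deterministic value $b^2v$:
\begin{verbatim}
import random

def stokes_drift(A, k, b=0.5, v=1.0, L=1.0, dt=1e-3, T=2000.0, seed=0):
    # xdot = f(x - v t) + xi(t), with f = +b*v on the crest half of the wave
    # and -b*v on the trough half, and a symmetric DMN xi = +/-A of rate k;
    # the integration is done in the co-moving frame y = x - v t
    rng = random.Random(seed)
    y, s = 0.0, 1.0
    for _ in range(int(T / dt)):
        if rng.random() < k * dt:
            s = -s
        f = b * v if (y % L) < L / 2 else -b * v
        y += dt * (f - v + A * s)           # ydot = f(y) - v + xi(t)
    return y / T + v                        # <xdot> = <ydot> + v

print("deterministic Stokes drift b^2 v =", 0.5**2 * 1.0)
for k in (0.2, 2.0, 20.0):
    print(f"k = {k:5.1f}   <xdot> ~ {stokes_drift(A=0.8, k=k):+.3f}")
\end{verbatim}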
Also, noise induces a nonlinear dependence of the drift
on the amplitude of the wave. Therefore, if several waves
are present, their contributions {\em are not additive},
which is a generic feature of stochastic Stokes' drift,
contrary to its deterministic counterpart.
In particular, as illustrated qualitatively in Fig.~\ref{figure21},
if two orthogonally-propagating but otherwise identical waves are present,
one can induce a significant change in the direction and magnitude
of the resulting drift simply by changing the transition rate
$k$ of the DMN. This is an effect that may have
important practical applications, e.g., in directing
doping impurities in crystal growth.
\begin{figure}[h!]
\quad \vspace{1.5cm} \\
\centerline{\epsfxsize=11.cm\epsfbox{figure21.eps}}
\caption{Schematic representation of the nonadditive character of the stochastic Stokes' drift.
Consider an overdamped tracer under the action of two identical square waves $f_1$ and $f_2$ (thick lines), propagating in orthogonal directions, as well as a DMN
(thick dashed line). DMN is taking the values $\pm A$ and is acting along a direction that
does not coincide with the bisecting line of the two waves $f_1$ and $f_2$.
The direction and the magnitude of the resulting drift velocity
(thin lines) change
significantly with the transition rate $k$ of the DMN. The limit $k\rightarrow\infty$ corresponds to the deterministic, noiseless limit of linear superposition of Stokes' drift.}
\label{figure21}
\end{figure}
\section{ESCAPE PROBLEMS FOR SYSTEMS DRIVEN BY DMN}
\subsection{Mean first-passage times for dichotomous flows}
The mean first-passage time (MFPT) represents the mean value of the
time at which a stochastic variable, starting at a given initial value, reaches
a pre-assigned threshold value for the first time. It is a concept with many
applications in physics, chemistry, engineering -- ranging from the decay
of metastable and unstable states, nucleation processes,
chemical reaction rates,
neuron dynamics, self-organized criticality, dynamics of spin systems,
diffusion-limited aggregation, general stochastic systems with absorbing states,
etc., as discussed
in Ref.~\cite{redner01}.
MFPT expressions were obtained in closed analytical form for some
particular Markovian and non-Markovian
stochastic processes, including one-dimensional Fokker-Planck
processes, continuous-time random walks or persistent random walks
with nearest and next-to-nearest neighbour jumps, and birth-and-death processes, see Refs.~\cite{stratonovich63,hanggi81,weiss81,bala83,chris86,hanggi90}
for some examples.
For general non-Markovian processes the problem of the MFPT is delicate and intricate, as spelled out first in Ref.~\cite{hanggi83}. However,
for general dichotomous flows one can obtain
exact results, using various approaches and techniques
(backward equations, stochastic trajectory counting and analysis), see
Refs.~\cite{hanggi85,sancho85,rodriguez86,masoliver86I,masoliver86II,masoliver86III,doering87,weiss87,tsironis88,balakrishnan88II,behn89,kus91,behn93,olarrea95I,olarrea95II}.
Consider an overdamped particle that starts at $t=0$ in $x_0$
inside some interval $[x_A, \,x_B]$ of the real axis,
and being driven by the dichotomous flow
$\dot{x}(t)=f(x)+g(x)\xi(t)$.
Denote by $T_{\pm}(x_0)$ the MFPT corresponding to the particle leaving the
interval $[x_A, \,x_B]$, provided that the initial value of the DMN $\xi(t=0)$
was $\pm A$, respectively.
The coupled equations for $T_{\pm}(x_0)$ are readily found to be:
\begin{eqnarray}
&&\left[f(x_0)+Ag(x_0)\right]\,\frac{dT_+(x_0)}{dx_0}-k(T_+-T_-)=-1\,,\\
&&\left[f(x_0)-Ag(x_0)\right]\,\frac{dT_-(x_0)}{dx_0}-k(T_--T_+)=-1\,.
\end{eqnarray}
The true difficulty here consists in assigning the correct
{\em boundary conditions},
corresponding either to absorbing or to (instantaneously) reflecting boundaries,
and also in treating the critical points of the alternate ``+" and ``--" dynamics. For further details see the above-cited references.
The particular case of the so-called {\em bistability driven by} DMN,
when $f(x)$ derives from a bistable potential $V(x)$, $f(x)=-V'(x)$,
received a lot of attention, see Refs.~\cite{hanggi83I,chris84,balakrishnan88II,heureux88,heureux89,porra91,porra92,heureux95}. In these systems the escape over the potential barrier out of the
attraction domain of one of the two minima of the bistable potential
is driven by an external additive or multiplicative DMN.
Consider the interval $[x_A, x_B]$ around a stable state $x_s$ of $V(x)$
(i.e., $f(x_s)=0,\, f'(x_s)<0$), containing a potential barrier (an unstable state) of $V(x)$ at $x_u$ (i.e., $f(x_u)=0,\, f'(x_u)>0$).
Let us compute
the MFPTs $T_{\pm}(x_0)$ out of this interval, as functions of the initial
position $x_0$. Let us admit that $x_A$ is a perfectly reflecting boundary
\begin{equation}
T_{-}(x_A)=T_+(x_A)
\end{equation}
and $x_B$ is an absorbing boundary
\begin{equation}
T_+(x_B)=0\,,
\end{equation}
and, moreover, that the two alternate flows $f_{+}(x)=f(x)+Ag(x)$ and $f_{-}(x)=f(x)-Ag(x)$
have opposite signs throughout $[x_A, x_B]$ (for example, $f_+(x)>0$ and $f_-(x)<0$,
$\forall x\in [x_A, x_B]$).
Then one finds (see Ref.~\cite{balakrishnan88II}):
\begin{eqnarray}
&&T_+(x_0)=\nonumber\\
&&\int_{x_0}^{x_B} dz\;\frac{g(z)}{D[g(z)+f(z)/A]^2[g(z)-f(z)/A]P_{st}(z)}
\int_{x_A}^z du \;\frac{[g(u)+f(u)/A]P_{st}(u)}{g(u)}\nonumber\\
&&\nonumber\\
&&+\frac{DP_{st}(x_A)[g^2(x_A)-f^2(x_A)/A^2]}{Ag(x_A)}\int_{x_0}^{x_B}du\;
\frac{g(u)}{D[g(u)+f(u)/A]^2[g(u)-f(u)/A]P_{st}(u)}\,,\nonumber\\
\label{tp}
\end{eqnarray}
and
\begin{eqnarray}
&&T_-(x_0)=T_-(x_B)\nonumber\\
&&+\int_{x_0}^{x_B} dz\;\frac{g(z)}{D[g(z)-f(z)/A]^2[g(z)+f(z)/A]P_{st}(z)}
\int_{x_A}^z du \;\frac{[g(u)-f(u)/A]P_{st}(u)}{g(u)}\nonumber\\
&&\nonumber\\
&&-\frac{DP_{st}(x_A)[g^2(x_A)-f^2(x_A)/A^2]}{Ag(x_A)}\int_{x_0}^{x_B}du\;
\frac{g(u)}{D[g(u)+f(u)/A]^2[g(u)-f(u)/A]P_{st}(u)}\,.\nonumber\\
\label{tm}
\end{eqnarray}
The value of $T_-(x_B)$ can be obtained by taking $x_0=x_A$ in equation~(\ref{tm})
and using the perfectly reflecting boundary condition $T_-(x_A)=T_{+}(x_A)$,
together with equation~(\ref{tp}) for $x_0= x_A$.
Here $P_{st}(x)$ is the stationary probability density of the dichotomous flow,
\begin{equation}
P_{st}(x)=\frac{g(x)}{[g(x)+f(x)/A][g(x)-f(x)/A]}\,\int_0^x dz\;
\frac{f(z)}{D[g(z)-f(z)/A][g(z)+f(z)/A]}\,,
\end{equation}
and $D=A^2/2k$.
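These closed-form results can be cross-checked by a brute-force estimate of the MFPT. The Python sketch below is only an illustration: it uses the bistable flow $f(x)=x-x^3$ with additive DMN [$g(x)=1$], an amplitude for which $f_+>0$ and $f_-<0$ on the chosen interval, and a simple mirror reflection as a numerical stand-in for the perfectly reflecting boundary at $x_A$.
\begin{verbatim}
import random

def mfpt(x0, sign0, A=1.2, k=0.5, x_A=-1.3, x_B=1.0,
         dt=2e-3, n_traj=500, seed=0):
    # Monte Carlo estimate of T_+(x0) (sign0 = +1) or T_-(x0) (sign0 = -1)
    # for xdot = x - x**3 + xi(t), DMN xi = +/-A with rate k;
    # x_A is treated as a reflecting boundary, x_B as an absorbing one
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_traj):
        x, s, t = x0, sign0, 0.0
        while x < x_B:
            if rng.random() < k * dt:
                s = -s
            x += dt * (x - x**3 + A * s)
            if x < x_A:                     # mirror reflection at x_A
                x = 2.0 * x_A - x
            t += dt
        total += t
    return total / n_traj

# escape from the left well (x_s = -1) over the barrier at x_u = 0
print("T_+(-1) ~", mfpt(-1.0, +1), "   T_-(-1) ~", mfpt(-1.0, -1))
\end{verbatim}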
In the weak-noise limit $D \ll 1$ (for $x_A<x_s<x_u$ and
$x_B$ being the other stable point of the potential $V(x)$),
using the steepest descent approximation, one obtains
for the activation rate over the barrier at $x_u$:
\begin{equation}
{\cal {R}}=\frac{1}{T_+}=\frac{\sqrt{|f'(x_s)|\,f'(x_u)}}{2\pi}\;\exp(-\Delta \Phi/D)\,,
\end{equation}
independent of $x_0$ (at order ${\cal O}(D^0)$),
and with an ``Arrhenius barrier"
\begin{equation}
\Delta \Phi=-\int_{x_s}^{x_u} du\;\frac{f(u)}{[g(u)-f(u)/A][g(u)+f(u)/A]}\,.
\end{equation}
One can also show that, up to order ${\cal O}(D^0)$,
the activation rate ${\cal {R}}$ is equal
to Kramers' escape rate out of this basin of attraction.
The latter one can be evaluated as the constant net flux of probability (or the flow of particles in an ensemble representation) through the
borders of the domain over the integrated probability (the population)
inside the domain (the so-called ``flux over population" method),
see also Refs.~\cite{hanggi90,reimann96}.
Thus Kramers' rate can be obtained simply from the stationary master equation for
the probability density of the process.
It has been recently
shown in Ref.~\cite{reimann99}, that the result
written symbolically as~\footnote{See Ref.~\cite{reimann99} for the right interpretation of Eq.~(\ref{symbol}).}:
\begin{equation}
\mbox{MFPT}=\frac{1}{\mbox{Kramer's escape rate}}
\label{symbol}
\end{equation}
holds actually for {\em arbitrary DMN strength}, and not only in the weak-noise limit.
And even more: it is valid for an arbitrary time-homogeneous (stationary) stochastic process, and not only for the DMN.
On the basis of this result, one can therefore express
the MFPT in terms of the stationary probability density
and avoid the complications
related to the use of the backward equation
with the cumbersome boundary conditions.
\subsection{Resonant activation over a fluctuating barrier}
A related problem, that is ubiquitous in natural sciences,
is that of the thermally-activated escape over a
potential barrier. In many situations of interest the noise is additive
GWN and weak, and the escape time is governed by a simple Arrhenius factor
determined by the height of the barrier.
However, far from equilibrium the interplay between the noise
and the global properties of the potential may be much more intricate.
In particular, the potential that is experienced by the Brownian particle~\footnote{Recall that the term ``Brownian particle" may refer to a true particle, but it may also represent some other state variable or collective coordinate of the system under study.} cannot always be appropriately modeled as a static one, but may vary randomly
on a time scale that is comparable to the escape time itself;
this may have nontrivial resonant-like effects on the behavior of the particle.
A few relevant experimental realizations in physical, chemical, and biological systems are presented in Ref.~\cite{reimann97} and cited references therein; they include glasses, dye lasers, biomolecular ionic channel kinetics, protein folding, etc.
As an example, consider an overdamped particle
which moves in a fluctuating potential $V(x,t)$
with a barrier,
under the influence of a heat bath at temperature $T$, as described by the
Langevin equation
\begin{equation}
\dot{x}(t)=-V'(x,t)+\xi_{\mbox{GWN}}(t)\,,
\end{equation}
with $\langle \xi_{GWN}(t_1)\xi_{GWN}(t_2)\rangle =2T\delta(t_1-t_2)$
(provided that the particle's mass, the friction coefficient, and Boltzmann's constant $k_B$
are set equal to 1, and time is measured in units of the friction coefficient).
Consider that $V(x,t)$ switches at random, {\em dichotomously}, with transition rate $k$, between two profiles, $V_{\pm}(x)$, with two values of the barrier height, $V_{-}^{\mbox{barrier}}<V_{+}^{\mbox{barrier}}$.
For very slow barrier fluctuation rates (significantly slower than the time required
to cross the highest barrier), the MFPT, as expected, approaches the average
of the MFPTs for the alternate barrier configurations. Also, for barrier fluctuations that are fast compared to the typical crossing time, the MFPT approaches the value required to cross the average barrier $V_{\mbox{mean}}^{\mbox{barrier}}=(V_{-}^{\mbox{barrier}}+V_{+}^{\mbox{barrier}})/2$.
However, for {\em intermediate fluctuation rates}, a resonance-like phenomenon was shown to occur,
namely, the MFPT over the barrier
goes through a minimum when the correlation time
(the inverse of the switching rate)
of the fluctuating potential is of the order
of the escape time over the lowest barrier $V_{-}^{\mbox{barrier}}$.
More precisely, this minimum of the MFPT (as a function of $k$)
corresponds to a maximum in the probability that the barrier is in the ``down" configuration
at the instant of crossing, as a function of the switching rate $k$.
This phenomenon was called {\em resonant activation}. Its various aspects and
different realizations were approached theoretically in Refs.~\cite{doering92,vandenbroeck93,bier93,zurcher93,pechukas94,marchesoni95,reimann96,marchi96,gaveau96,schmid99,li99,lehmann00}, including comparisons with a deterministic,
periodic oscillation of the barrier in Refs.~\cite{jung93,klik01}. A simple experimental realization in an electronic circuit was studied in Ref.~\cite{mantegna00}.
Comprehensive discussions of the ``generic character" of the resonant activation
(i.e., its occurrence for general forms
of the potential barriers, for various intensities of the GWN, as well as for other types of
correlated fluctuations of the potential barrier -- as, for example, fluctuations that are driven by an Ornstein-Uhlenbeck process) were given in Refs.~\cite{reimann95,reimann97,reimann97I}.
Probably the simplest example, that admits a complete analytical treatment, can be found
in Ref.~\cite{doering92}, and corresponds to a symmetric piecewise potential that fluctuates, with a transition rate $k$, between $V(x)$ and $-V(x)$, with $V_{+}^{\mbox{barrier}}=-V_{-}^{\mbox{barrier}}=V_0>0$, see Fig.~\ref{figure22}.
\begin{figure}[h!]
\quad \vspace{1.5cm} \\
\centerline{\epsfxsize=11.cm\epsfbox{figure22.eps}}
\caption{Piecewise linear symmetric potential that fluctuates dichotomously between
the configurations $V(x)$ and $-V(x)$, with the barriers $V_{+}^{\mbox{barrier}}=-V_{-}^{\mbox{barrier}}=V_0>0$.}
\label{figure22}
\end{figure}
The MFPT from $0$ to $L$
(provided that the particle starts in $x=0$, with the barrier in the $\pm V(x)$ configuration
with equal probability $1/2$, and it is absorbed at $x=L$)
is found to be:
\begin{eqnarray}
\frac{\langle t(0\rightarrow L)\rangle\; T}{L^2}=&&C_+\left[\frac{V_0}{\alpha T}(1-e^{-\alpha})-
\frac{kL^2\alpha}{V_0}-\frac{V_0}{T}\right]\nonumber\\
&&+C_-\left[-\frac{V_0}{\alpha T}(1-e^{\alpha})+
\frac{kL^2\alpha}{V_0}-\frac{V_0}{T}\right]\,,
\end{eqnarray}
where
\begin{equation}
\alpha=\left(\frac{V_0^2}{T^2}+\frac{2kL^2}{T}\right)^{1/2}
\end{equation}
and
\begin{equation}
C_{\pm}=\frac{(2kL^2/T)(\alpha\mp e^{\pm \alpha})\mp V_0^2/T^2}
{2\alpha V_0/T(1+2kL^2T/V_0^2)(2kL^2/T\,\cosh\alpha +V_0^2/T^2)}\,.
\end{equation}
A qualitative representation of the MFPT as a function of the switching rate $k$
is given in Fig.~\ref{figure23}
for a fixed value of $V_0/T$. One notices the presence of a {\em minimum}
of the MFPT corresponding to a
certain value of the barrier fluctuation rate, that is found to be of the order of the
inverse of the time required to ``cross" the down potential configuration.
\begin{figure}[h!]
\quad \vspace{1.5cm} \\
\centerline{\epsfxsize=11.cm\epsfbox{figure23.eps}}
\caption{Schematic representation (continuous thick line) of the log of the dimensionless MFPT $\langle t(0\rightarrow L)\rangle$ as a function of the log of the dimensionless transition rate $k$ for the piecewise linear symmetric potential discussed in the main text, see Fig.~\ref{figure22}. The dashed lines represent the two deterministic limits corresponding, respectively, to very slow and very high transition rates $k$.}
\label{figure23}
\end{figure}
For the limit of slow transition rates,
one finds
\begin{equation}
\frac{\langle t(0\rightarrow L)\rangle_{\mbox{slow}}\;T}{L^2}=\frac{T^2}{2V_0^2}
\left(e^{V_0/T}-1-V_0/T\right)
+\frac{T^2}{2V_0^2}
\left(e^{-V_0/T}-1+V_0/T\right)
\end{equation}
(corresponding to the mean of MFPTs over the independent deterministic ``up" and ``down" potential configurations),
while in the limit of fast transition rates one finds the MFPT corresponding to the
transition over the mean barrier $\langle V_0\rangle=[V_0+(-V_0)]/2=0$ (i.e., the free diffusion on a line),
\begin{equation}
\frac{\langle t(0\rightarrow L)\rangle_{\mbox{fast}}\;T}{L^2}=
\lim_{\langle V_0\rangle \rightarrow 0} \frac{T^2}{\langle V_0\rangle ^2}
\left(e^{\langle V_0\rangle /T}-1-\frac{\langle V_0\rangle}{T}\right)=\frac{1}{2}\,.
\end{equation}
We also mention the recent results of Refs.~\cite{spa1,spa4,spa5} in which the
problem of escape time of a particle from a metastable state with a
fluctuating barrier,
in the presence of both thermal noise and dichotomous noise, was solved in full
analytical detail. The related problem of steady-state distributions and the
exact results for the diffusion spectra were addressed in Refs.~\cite{spa2,spa3}.
Finally,
general equations for computing the effective diffusion coefficient in randomly
switching potentials were derived, for arbitrary mean rate of the potential
switching and arbitrary intensity of the Gaussian white noise, in
Refs.~\cite{spa5,spa6,spa7}.
\section{STOCHASTIC RESONANCE WITH DMN}
Generally speaking, the {\em stochastic resonance} (SR) phenomenon
refers to the ``enhanced sensitivity" of a
(nonlinear) system to a small deterministic periodic forcing
in the presence of an ``optimal amount" of noise -- see Ref.~\cite{gammaitoni98}
for a (by now incomplete) review of various model-systems with SR and their possible practical applications.
Such a definition is very broad, and till now there is no
agreement about the precise signature of the SR, the necessary conditions of its occurrence,
as well as the ``right" quantifiers, see Refs.~\cite{berdi98,evsti05}
for further comments on this point. There is therefore a huge and varied literature on SR and its numerous realizations. In particular, for systems with DMN-driving see Refs.~\cite{berdichevsky96,barzykin98,berdichevsky99,neiman99,rozenfeld00,freund00,gitterman03} for a few examples.
A canonical model for SR is an overdamped particle in a symmetric double-well potential
$V(x)=-ax^2+bx^4$ (with $a,\, b>0$), driven simultaneously by an additive GWN and an additive, weak periodic signal $s(t)$, see Fig.~\ref{figure24} for an illustration. A ``hand-waving" argument for the appearance of SR would be the following. On one side,
for too low GWN intensities, the thermally-activated jumps between the two wells are too rare and the particles do not benefit from the alternate decrease of the potential barrier (on one side or the other) due to the external signal; on the other side, for too large GWN intensities, the jumps between the wells are very frequent (a large number take place during one period of the external signal) and thus,
again, the response of the system is not synchronized with, and does not benefit from, the external signal.
However, for intermediate noise intensities, the thermally-activated transition rates are comparable with the rocking rate
of the potential, and the particles take advantage of the alternate decrease of the potential barrier,
resulting in an enhanced response of the system to the applied external perturbation.
\begin{figure}[h!]
\quad \vspace{1.5cm} \\
\centerline{\epsfxsize=9.cm\epsfbox{figure24.eps}}
\caption{Schematic representation of a double-well symmetric potential
that is periodically rocked by a weak external signal $s(t)$.
Considered are four stages of one rocking period, corresponding
to successive maximum and minimum heights of the potential barrier.
For an intermediate, optimum thermal noise intensity,
there is a synchronization between the
rocking period and the thermally-activated hopping
between the two wells.}
\label{figure24}
\end{figure}
As shown, for example, in Ref.~\cite{gammaitoni98},
both the spectral power amplification (SPA) (which represents the weight of the signal part in the output power spectrum) and the signal-to-noise ratio (SNR) (the SPA rescaled by the input power) represent good measures of the SR. Indeed, both of them show a {\em nonmonotonic behavior
with a maximum as a function of the GWN intensity}.
As it was recently shown, see Refs.~\cite{neiman99,rozenfeld00,freund00}, the addition of a DMN
has very important effects on the behavior of the system: (i) DMN can synchronize the switching time between the two stable states of the double-well (i.e., for a certain interval of the GWN intensity, the mean switching rate of the system
is locked to the switching rate of the DMN), a phenomenon corresponding,
in the limit of a weak external perturbation $s(t)$,
to the {\em resonant activation} described in Sec.~VI.B; (ii) moreover, the {\em SR is greatly enhanced by the DMN} (i.e., the SPA and/or the SNR can reach larger maximal values as compared to the case when no DMN is present)~\footnote{Note that the DMN is {\em weak} and cannot, by itself, induce transitions between the stable states of the double-well potential: the jumps are still induced by the GWN.}.
Following Refs.~\cite{neiman99,rozenfeld00,freund00}, let us illustrate these results on
a simplified model. We shall neglect the intrawell motion, and
describe only which of the two wells of $V(x)$ the particle belongs to
through the two-valued stochastic variable $\sigma(t)=\pm 1$.
The thermally-induced transition rate between the two wells is given by
$a=\exp(-\Delta V/D)$, where $\Delta V$ is the height of the potential barrier,
and $D$ is the properly-scaled GWN intensity.
An additive DMN $\xi(t)={\pm A}$ of low switching rate $k\ll 1$ modifies these transition rates,
\begin{equation}
W(\sigma, \xi)=\exp\left(-\frac{\Delta V+\sigma \xi}{D}\right)\,,\quad\sigma=\pm 1\,,\xi=\pm A\,.
\label{trate}
\end{equation}
Considering the four-state stochastic process $(\sigma,\,\xi)$
one can write down the following master equation:
\begin{equation}
\frac{d}{dt}P(t; \sigma,\xi)=-W(\sigma,\xi)P(t;\sigma,\xi)+W(-\sigma,\xi)P(t;-\sigma,\xi)+k[P(t;\sigma,-\xi)-P(t;\sigma,\,\xi)]\,.
\label{MSRE}
\end{equation}
Therefore, the mean switching rate (MSR) of the output $\sigma(t)$ is found to be:
\begin{equation}
{\cal {R}} =\frac{\pi}{2}\left(a_1+a_2-\frac{(a_2-a_1)^2}{a_1+a_2+2k}\right)\,,
\end{equation}
with $a_{1,2}=\exp[-(\Delta V\pm A)/D]$. For small values of the GWN intensity
$D\ll1$, one recovers the limit of fast switching rates of the DMN, $k\gg 1$,
that corresponds to an {\em effective potential with a lowered barrier $\Delta V-A$}:
\begin{equation}
{\cal {R}} \approx \pi/2 \exp[-(\Delta V-A)/D] \quad (D\ll 1)\,.
\end{equation}
For large noise intensities, the MSR approaches the other deterministic limit of the DMN, namely $k\rightarrow 0$, which corresponds to {\em a static double-well potential with an asymmetry},
\begin{equation}
{\cal {R}} \approx \pi \exp(-\Delta V/D)/\mbox{cosh}(A/D) \quad (D>1)\,.
\end{equation}
For intermediate values of $D$ (and finite values of $k$), the MSR is
locked to $\approx k$, and changes very slowly with $D$, a regime that corresponds to the
{\em resonant activation} described in Sec.~VI.B above.
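
The crossover between these regimes can be checked directly by evaluating the expression for ${\cal R}$; the sketch below (with the illustrative values $\Delta V=1$ and $A=0.5$, which are our own assumptions) prints the exact MSR for a few $(k,D)$ pairs together with the two limiting formulas quoted above.
\begin{verbatim}
# Mean switching rate R(D, k) and its two limiting forms;
# Delta V = 1 and A = 0.5 are illustrative assumptions.
import numpy as np

dV, A = 1.0, 0.5

def msr(D, k):
    a1 = np.exp(-(dV + A) / D)
    a2 = np.exp(-(dV - A) / D)
    return 0.5 * np.pi * (a1 + a2 - (a2 - a1) ** 2 / (a1 + a2 + 2.0 * k))

for k in (1e-3, 1e-1, 1e1):
    for D in (0.1, 0.5, 5.0):
        small_D = 0.5 * np.pi * np.exp(-(dV - A) / D)        # lowered barrier
        large_D = np.pi * np.exp(-dV / D) / np.cosh(A / D)   # k -> 0 limit
        print(f"k={k:6g} D={D:3g}  R={msr(D, k):.3e}  "
              f"[limits: {small_D:.3e}, {large_D:.3e}]")
\end{verbatim}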
Let us now add a small periodic perturbation to the system, $s(t)=S\cos(\Omega t+\varphi)$,
with $\varphi$ randomly uniformly distributed in $[0,2\pi)$. At slow
($\Omega <k\ll 1$) and weak ($S\ll A< \Delta V $) forcing, the transition rates in equation~(\ref{trate}) become
\begin{equation}
W(\sigma,\xi) \rightarrow W(\sigma,\xi)\exp[-S\sigma \cos(\Omega t+\varphi)/D]\approx
W(\sigma,\xi)[1-S\sigma \cos(\Omega t+\varphi)/D]\,,
\label{modtrate}
\end{equation}
and the evolution of the system is described by the master equation~(\ref{MSRE})
with the time-periodic transition rates~(\ref{modtrate}). The autocorrelation function
$\langle \sigma(t_1)\sigma(t_2)\rangle$ can be computed in closed analytical form
in the limit of the {\em linear-response theory} with respect to the signal $s(t)$. This leads finally to the following output power spectrum (averaged over the phase $\varphi$ of the external
periodic signal $s(t)$):
\begin{equation}
{\cal{P}}(\omega)={\cal{N}}(\omega)+S^2\pi\eta\delta(\omega-\Omega)\,,
\end{equation}
where
\begin{equation}
{\cal{N}}(\omega) =\frac{4(a_1+a_2)}{(a_1+a_2)^2+\omega^2}\left[1+\frac{(a_2-a_1)^2}{4k^2+\omega^2}\right]-\frac{4(a_2-a_1)^2}{(a_1+a_2+2k)(4k^2+\omega^2)}
\end{equation}
is the power spectrum of the background (independent of the applied periodic
perturbation $s(t)$). The SPA is found to be:
\begin{equation}
\eta=\frac{4}{\pi^2D^2}\;\frac{{\cal R}}{(a_1+a_2)^2+\Omega^2}\,,
\end{equation}
and the signal-to-noise ratio is:
\begin{equation}
SNR=\pi \frac{\eta}{{\cal{N}}(\Omega)}\,.
\end{equation}
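
As an illustration of the double-peak structure discussed below, the SPA and the SNR can be scanned as functions of the noise intensity $D$; the parameter values in the following sketch are, again, our own illustrative assumptions.
\begin{verbatim}
# Spectral power amplification and signal-to-noise ratio versus the GWN
# intensity D; all parameter values below are illustrative assumptions.
import numpy as np

dV, A, k, Omega = 1.0, 0.5, 1e-2, 1e-3

def spa_snr(D):
    a1 = np.exp(-(dV + A) / D)
    a2 = np.exp(-(dV - A) / D)
    R = 0.5 * np.pi * (a1 + a2 - (a2 - a1) ** 2 / (a1 + a2 + 2 * k))
    eta = 4.0 / (np.pi**2 * D**2) * R / ((a1 + a2) ** 2 + Omega**2)
    N = (4 * (a1 + a2) / ((a1 + a2) ** 2 + Omega**2)
         * (1 + (a2 - a1) ** 2 / (4 * k**2 + Omega**2))
         - 4 * (a2 - a1) ** 2 / ((a1 + a2 + 2 * k) * (4 * k**2 + Omega**2)))
    return eta, np.pi * eta / N

D = np.logspace(-1.2, 0.8, 400)
eta, snr = np.vectorize(spa_snr)(D)
print(f"SPA peak at D = {D[eta.argmax()]:.3f}, "
      f"SNR peak at D = {D[snr.argmax()]:.3f}")
\end{verbatim}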
Both the SPA $\eta$ and the SNR exhibit two maxima as a function of the GWN intensity,
one for low noise intensities, and the other one, weaker, for large noise intensities.
As seen above, the small-noise intensity region $D\ll 1$ corresponds to a potential with a lowered barrier, so that SR may occur for a smaller noise intensity as compared to the case
when no DMN is present: that is why SR is greatly enhanced. In the intermediate range of $D$,
the MSR is locked to the switching rate of the DMN, and thus the system is not sensitive to the periodic perturbation: SR does not appear. Finally, for large values of the noise intensity, the system behaves as an asymmetric one: SR still appears (the second peak), but during one round-trip switching of the DMN the system performs many transitions between the potential wells; thus, with further increase of the asymmetry (of the noise intensity $D$)
the SR is gradually suppressed.
This discussion highlights the {\em relationship between the resonant activation over a fluctuating barrier} (see Sec.~VI.B) and the {\em (enhanced) stochastic resonance}: although both phenomena may appear in the same system, they typically appear in quite different regimes of the parameters of the system. This conclusion is corroborated, in a slightly different context, by the results of Ref.~\cite{schmitt06}.
\section{SPATIAL PATTERNS INDUCED BY DMN}
Till now, with the exception of noise-induced
phase-transition situations,
we considered only zero-dimensional systems, i.e., systems
with no spatial dimensions.
We will now briefly turn to the spatially-extended systems,
for which it was shown that noise,
and in particular DMN, can create {\em spatial patterns},
see Refs.~\cite{parrondo96,ojalvo99,buceta02I,buceta02II,buceta02III,buceta02IV,buceta03I,buceta03II,buceta03III,parrondo03,wio03}.
Pattern formation in a non-fluctuating system is generically related to
the onset of a symmetry-breaking global instability of
the homogeneous state of the system. It appears
when the control parameter
(that is related to the ratio between the driving forces
-- that tend to destabilize the system -- and
the dissipative forces in the system --
that have a stabilizing effect) exceeds a
threshold value, see e.g. Ref.~\cite{cross77}. In the vicinity of the
instability threshold, the dynamics of the system is essentially described by
{\em nonlinear amplitude equations}, that correspond
to the evolution of the relevant slow modes on
the central manifold of the critical point
of the dynamics. These equations are generic, in the sense that
their form does not depend on the details of the underlying dynamics, but
only on the type of bifurcation that appears in the system,
the symmetries of the system, the dimensionality, and the possible
conservation laws.
A well-known example of amplitude equation is the
Swift-Hohenberg equation, which was successfully used to describe
the onset of the Rayleigh-B\'enard convective rolls. It corresponds to a system
undergoing a soft pitchfork bifurcation,
described by a single scalar field $\psi(\vec{r},\,t)$, provided that
the system is invariant under spatial inversion $\vec{r} \rightarrow -\vec{r}$
and under field-inversion $\psi \rightarrow -\psi$. Its form is:
\begin{equation}
\partial_t
\psi(\vec{r},t)=-V'(\psi(\vec{r},t))-(K_c^2+\Delta)^2\psi(\vec{r},t)\,,
\end{equation}
where $\Delta$ is the Laplace operator, and $V(\psi)$ is an even function of $\psi$
(in the simplest case, $V(\psi)$ is just a quartic, symmetric potential),
with $V'(\psi)=dV/d\psi$.
As long as $V(\psi)$ is monostable, no spatial structures appear in the system --
the stable steady-state is spatially homogeneous. However, if under the variation of the control parameter the local
potential $V(\psi)$ acquires two stable equilibrium points (separated by a barrier),
the Swift-Hohenberg model leads to pattern formation with a critical wavevector $K_c$. More precisely, one has the following conditions for the marginal stability:
\begin{equation}
V'(\psi)+K_c^2\psi=0\,,\quad V''(\psi)<0\,.
\label{marstab}
\end{equation}
In the case of {\em noise-induced spatial patterns}, the appearance of the pattern is
strictly conditioned by the presence of the noise, i.e., no pattern appears in the
noiseless, deterministic system. Such an example is offered by the
Swift-Hohenberg equation with a time-dependent potential $V(\psi,t)$
that switches at random,
{\em dichotomously}, between two profiles, $V_1(\psi)$ and $V_2(\psi)$
(a so-called dichotomous global alternation of dynamics), see
Refs.~\cite{buceta02II,buceta02III,buceta02IV}. We consider that both
potentials
$V_{1,2}(\psi)$ are monostable, i.e., neither of the two dynamics alone will
lead to patterns (there are no patterns in the absence of the DMN).
However, it
was shown that the {\em random alternation} of the two dynamics leads to
the appearance of {\em stationary spatial patterns}, depending on the characteristics of
the potentials
$V_{1,2}$ and on the switching rate $k$ of the DMN. This result can be put in
parallel with the well-known Parrondo paradox (or the flashing ratchet mechanism),
in which the (random) alternation between two fair games (or two unbiased
diffusions) is no longer fair (is no longer unbiased);
this means that this alternation leads to a
directed flow in the system -- i.e., a net gain (a directed current). See
Refs.~\cite{parrondo03,wio03} for a pedagogical discussion of these points.
A simple qualitative argument can clarify the underlying mechanism. Let $\tau=1/k$
be the average time the system spends in each of the alternate dynamics.
Of course, if the switching rate is very low, $k\rightarrow 0$, in-between the switches
the field will reach, respectively, the stationary homogeneous states $\psi_{1,2}$
corresponding to each of the monostable potentials $V_{1,2}(\psi)$:
\begin{equation}
V'_{1,2}(\psi_{1,2})+K_c^2\psi_{1,2}=0\,,\quad V''_{1,2}(\psi_{1,2})>0\,.
\end{equation}
Suppose now that the switching rate is very high (the sojourn time $\tau$ is much smaller than the characteristic relaxation time in each of the alternate $V_{1,2}$ potentials). Then the evolution of the field will be driven by the deterministic {\em effective potential} $V_{\mbox{eff}}(\psi)=[V_1(\psi)+V_2(\psi)]/2$.
If
\begin{equation}
V'_{\mbox{eff}}({\psi})+K_c^2\psi=0 \quad \mbox{and}\quad V''_{\mbox{eff}}({\psi})<0\,,
\label{effective}
\end{equation}
then, according to the general considerations above, the system becomes marginally unstable with respect to the onset of patterns of wavevector $K_c$. Note that $V_{\mbox{eff}}$ may become bistable
only for {\em nonlinear} systems (i.e., for potentials $V_{1,2}$ that contain terms beyond the quadratic ones).
Of course, by continuity arguments one may expect
the onset of the spatial patterns for finite switching rates $k$, too,
but the marginal stability condition Eq.~(\ref{effective}) will become dependent on $k$.
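
A minimal numerical experiment along these lines is sketched below: a one-dimensional Swift--Hohenberg equation, integrated pseudo-spectrally with a semi-implicit Euler step, whose local potential alternates dichotomously between two monostable quartic profiles. The potentials, the switching rate, and the discretization are our own illustrative choices and are not taken from Refs.~\cite{buceta02II,buceta02III,buceta02IV}.
\begin{verbatim}
# 1D Swift-Hohenberg dynamics whose local potential alternates dichotomously
# between two monostable quartic profiles; all values below are assumptions.
import numpy as np

L_box, N, Kc, dt, k_switch = 64.0, 256, 1.0, 0.05, 5.0
q = 2.0 * np.pi * np.fft.fftfreq(N, d=L_box / N)
Lq = -(Kc**2 - q**2) ** 2            # Fourier symbol of -(Kc^2 + d_xx)^2

# V_{1,2}(psi) = psi^4/4 - 0.25 psi^2 +/- 0.5 psi are each monostable, while
# their average V_eff(psi) = psi^4/4 - 0.25 psi^2 is bistable.
def force(psi, state):               # returns -V'_{1,2}(psi)
    tilt = 0.5 if state == 0 else -0.5
    return -(psi**3 - 0.5 * psi + tilt)

rng = np.random.default_rng(0)
psi, state = 0.01 * rng.standard_normal(N), 0
for _ in range(40000):               # total time t = 2000
    if rng.random() < k_switch * dt: # dichotomous alternation of the dynamics
        state = 1 - state
    rhs = np.fft.fft(psi + dt * force(psi, state))
    psi = np.real(np.fft.ifft(rhs / (1.0 - dt * Lq)))   # semi-implicit step

spec = np.abs(np.fft.fft(psi - psi.mean()))
print("dominant wavenumber:", abs(q[1 + np.argmax(spec[1:N // 2])]))
\end{verbatim}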
When the global
switching is not random, but periodic in time, see
Refs.~\cite{buceta02I,buceta02IV}, besides the stationary spatial patterns one will
also obtain periodic spatio-temporal patterns.
The case of a potential $V(\psi, \vec{r})$ that has
a spatial, quenched dichotomous disorder was also shown to lead to spatial
patterns, see Ref.~\cite{buceta03I}.
Moreover, as pointed out in
Refs.~\cite{buceta02III,buceta03III}, these studies are relevant
for other situations that lead to pattern formation, such as Turing
instabilities in reaction-diffusion systems.
\section{DISCRETE-TIME DICHOTOMOUS FLOWS}
The influence of noise on {\em discrete-time} dynamical systems
(maps) is much less documented than for the continuous-time ones.
Aspects like the shift, broadening, or even suppression of bifurcations,
the behavior of the invariant densities and of the
Lyapunov exponents near the onset of chaotic behavior,
and the destabilization of locally stable states by noise
have been documented, in general, for weak GWN, see Refs.~\cite{cru81,shraiman81,hirsch82,arecchi84,linz86,beale89,reimann91,graham91,reimann94} and references therein for examples.
With few exceptions, see Refs.~\cite{irwin90,fraser92,gutierrez93,kosinka03},
the effects of a finite correlation time of the noise,
such as a DMN, have not been addressed in general.
For example, Ref.~\cite{gutierrez93} considers
the logistic map with a {\em dichotomously fluctuating parameter}:
\begin{equation}
x_{n+1}=\mu(1+\xi_n)x_n(1-x_n)\,.
\end{equation}
$\xi_n$ is a dichotomous noise $\pm A$ with a probability $\alpha$
of repeating the same value in the next iteration
($\alpha=1/2$ corresponds to the white noise limit,
while $\alpha=0$ and $\alpha=1$ correspond to two deterministic limits),
and $\mu(1+A)<4$.
Such a system may derive from a continuous-time one which is
driven by a random sequence of pulses of constant duration
(this duration thus constitutes the time step of the map).
The influence of the
``correlation time" $\alpha$
of the noise on the dynamics is found to be quite dramatic:
by simply varying it one can obtain all transitions,
from chaos to regular motion.
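
A short simulation makes the role of the persistence parameter $\alpha$ concrete; the sketch below estimates the Lyapunov exponent of the noisy map for several values of $\alpha$, with the illustrative choice $\mu=3.7$ and $A=0.05$ (our own assumption, not a value taken from Ref.~\cite{gutierrez93}).
\begin{verbatim}
# Logistic map with a dichotomously fluctuating parameter: Lyapunov exponent
# versus the persistence probability alpha (mu and A are assumed values).
import numpy as np

mu, A = 3.7, 0.05                     # satisfies mu * (1 + A) < 4

def lyapunov(alpha, n_iter=100000, seed=1):
    rng = np.random.default_rng(seed)
    x, xi, acc = 0.3, A, 0.0
    for _ in range(n_iter):
        if rng.random() > alpha:      # repeat xi with probability alpha
            xi = -xi
        m = mu * (1.0 + xi)
        acc += np.log(abs(m * (1.0 - 2.0 * x)))   # |d x_{n+1} / d x_n|
        x = m * x * (1.0 - x)
    return acc / n_iter

for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"alpha = {alpha:4.2f} -> Lyapunov exponent ~ {lyapunov(alpha):+.3f}")
\end{verbatim}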
However, this field definitely calls for further investigations.
\section{CONCLUSIONS AND PERSPECTIVES}
The main goal of this review was to present the DMN as a flexible and very efficient tool in modeling out-of-equilibrium situations, allowing us to emphasize the richness and variety of the encountered phenomena. We hope that the reader will find in it a useful ingredient in his/her
modeling of various experimental situations, out of which only a small part was reported in this paper.
In this respect, we are thinking of the cutting-edge techniques that allow
the direct study and visualization of
microscopic and nanoscopic systems (including biological,
living systems): we expect coloured noise (and, in particular, DMN)
to play an important role in modeling the interaction of such small systems with their surroundings.
We also emphasize a few other open conceptual problems. A first one would be the effect
of inertia on dichotomous flows and, more generally, the statistical properties of
several coupled dichotomous flows. The problem of fluctuation-like theorems for DMN-driven systems is to be addressed in further detail.
Also, as was mentioned above, the study of maps driven by DMN is still an open field.
\section*{Acknowledgements}
We are grateful to Profs. Christian Van den Broeck, Michel Droz,
Katja Lindenberg, Ryoichi Kawai,
Venkataraman Balakrishnan, Avadh Saxena, Mamad Malek Mansour, Florence Baras,
and Max-Olivier Hongler
for over-the-years discussions on DMN-related subjects.
We also thank Fran\c{c}ois Coppex for precious help in
preparing the manuscript and acknowledge partial support from the Swiss
National Science Foundation.
\section{Introduction}
In recent years, deep neural network based semantic segmentation models have achieved considerable success. This success relies heavily on the large pixel-level annotated datasets over which these models are trained.
However, like many other deep neural network based models, semantic segmentation models suffer from considerable performance degradation when tested on images from a domain different than the one used in training.
This problem, attributed to the domain shift, is exacerbated in semantic segmentation algorithms since many of them are trained on synthetic datasets, due to the lack of large real-world annotated datasets, and are tested on real-world images.
Retraining or fine-tuning for new domains is expensive, time consuming, and in many cases not possible due to the large number of ever-changing domains, especially in the case of autonomous vehicles, and the unavailability of annotated data.
To overcome domain shift, unsupervised domain adaptation (UDA) has been employed with reasonable success \cite{zou2018unsupervised,zou2019crst, mlsl2020}, but the state-of-the-art still lacks the desired accuracy.
Many unsupervised domain adaptation algorithms for semantic segmentation \cite{hoffman2017cycada, chen2017no,clan_2019_CVPR, Yunsh2019bidirect, structure_2019_CVPR, dada_2019_ICCV, iqbal2022leveraging} perform global marginal distribution alignment through adversarial learning to translate the input image, feature volume, or output probability tensor from one domain to the other.
The adversarial loss looks at the whole tensor (image/feature or output probability) even when the objective is to improve the pixel-level label assignments \cite{clan_2019_CVPR}; moreover, aligning marginal distributions does not guarantee preserving the discriminative information across the domains \cite{zhang2019category}.
Self-supervised learning methods \cite{mlsl2020, pan2020unsupervised, Lian_2019_pycda, LSE_2020_Naseer, zou2019crst, Yunsh2019bidirect,munir2021ssal} (either independently or along with adversarial learning) try to overcome this challenge by back-propagating the cross-entropy loss computed over pixel-level pseudo-labels generated by the source model.
The quality of these pseudo-labels depends upon the generalization capacity of the classifier and affects the overall adaptation process.
A deep neural network based semantic segmentation model, when trained by minimizing the cross-entropy loss, greedily learns representations that capture inter-class variations.
When optimally trained, these inter-class variations should help map an accurate decision boundary, projecting pixels from different classes to different sides of it.
However, due to the domain shift, the decision boundary is not aligned in the target domain, resulting in noisy pseudo-labels and, in turn, poor self-supervised domain adaptation.
Previous works \cite{chen2020homm, kumagai2019unsupervised} have shown that discriminative clustering on target data and moment matching across domains help in adaptation.
CAG-UDA \cite{zhang2019category} $\&$ \cite{Deng_2019_ICCV} tried to align the class-aware cluster centers across domains for better adaptation.
However, visual semantic classes exhibit a large set of variations, due to differences in texture, style, color, pose, illumination, etc. These variations are generally assumed to be across instances, e.g., two different types of cars, but they also manifest frequently within the same instance, e.g., pixels belonging to different road locations or to different parts of a car.
Class-aware single-cluster based alignment might align the centers of the source and target domains without aligning the overall distributions, leaving classes with large variations vulnerable to misclassification in the target domain.
Learning to capture intra-class variations by representing each class with multiple modes and aligning the modes across domains might overcome these challenges.
Therefore, we propose a novel class-aware multi-modal distribution alignment method for unsupervised domain adaptation of semantic segmentation models.
We combine the ideas of distribution alignment and pseudo-label based adaptation; however, instead of just using discriminatively learned features during the adaptation, we explicitly learn representations separately.
In addition to learning the inter-class variations through minimizing the cross-entropy loss, the pixel-level intra-class feature variations are captured by learning a multi-modal distribution for each class (Fig. \ref{img:teaser}), resulting in a much more generalized representation.
Both of these tasks have competing requirements: minimizing the cross-entropy loss results in learning inter-class discriminative representations along with intra-class consistency, whereas multi-modal distribution learning intends to preserve information that can model intra-class variations.
We disentangle these two information requirements by developing a class-aware multi-modal distribution learning (MMDL) module, parallel to the standard segmentation head.
MMDL extracts the spatially low-resolution feature volume from the encoding block and maps it to a spatially high-resolution embedding.
Class-aware multi-modal modeling is performed over these embeddings using distance metric learning \cite{repmet2019}.
Since both of these heads share the backbone, simultaneously decreasing the loss on both acts as a regularizer over the learned features, resulting in less noisy pseudo-labels.
During domain adaptation, the high-quality pseudo-labels allow us to learn domain-invariant class-discriminative feature representations in the discriminative space.
At the same time, stochastic mode alignment is performed across domains by minimizing the distance between representations of source pixels and target pixels mapping to the same mode, thus preserving intra-class variations.
The modes themselves are updated by increasing the posterior probability of a target pixel belonging to the mode identified as closest to it.
During adaptation too, these losses, computed in parallel, act as regularizers over each other, hence dampening each other's noise.
Our contributions are summarized as follows.
First, we propose a multi-modal distribution alignment strategy for self-supervised domain adaptation.
By designing a multi-modal distribution learning (MMDL) module parallel to the standard segmentation head, with a shared backbone, we disentangle inter-class discriminative and intra-class variation information, allowing them to be used separately during adaptation.
We show that due to the regularization of MMDL, the pseudo-labels generated on the target domain are more accurate. Lastly, to perform stochastic mode alignment, we introduce the \textit{cross domain consistency loss}.
We present state-of-the-art performance for benchmark synthetic-to-real adaptation, e.g., GTA-V/SYNTHIA to Cityscapes.
\begin{figure*}[t]
\centering
\includegraphics[width=1 \textwidth]{images/model_plus_ma.pdf}
\scriptsize
\caption{The proposed DRSL approach (a) Base features extracted from the base network are used for two separate tasks. The MMDL-FR module captures intra-class variations through multi-modal distribution learning. Semantic Segmentation head estimates the discriminative class boundaries necessary for the primary segmentation task. This disentanglement allows us simultaneous alignment in discriminative and multi-modal space, allowing MMDL-FR module to act as a regularizer over the Segmentation Head. (b) The proposed \textit{Stochastic mode alignment}: Minimizing $\mathcal{L}_{mcl}$ brings the source and target embeddings of the same mode of the same class closer than any source pixel's embedding belonging to different class. $\mathcal{L}_{ma}$ decreases the in-mode variance for the target samples by forcing them to come closer to the assigned mode and move away from other class's modes.
}
\label{img:model}
\end{figure*}
\section{Related Work}
The domain shift between testing and training data deteriorates the model performance in most of the computer vision tasks like classification \cite{tzeng2017adversarial, pinheiro2018unsupervised, xu2019larger, deng2019cluster, belal2021knowledge, schrom2021improved}, object detection \cite{chen2018domain, khodabandeh2019robust,hsu2020progressive} and semantic segmentation \cite{chen2017no, chen2017road, clan_2019_CVPR, curr2017_ICCV, dai2019curriculum, hoffman2017cycada, iqbal2020weakly, LSE_2020_Naseer, yang2021exploring}. In this work, we focus on the domain shift problem for semantic segmentation with self-supervised learning. Our work is related to semantic segmentation, domain adaptation, and self-supervised learning.
\noindent \textbf{Domain Adaptation for Semantic segmentation:}
Recent works \cite{vu2019advent, mlsl2020, LSE_2020_Naseer, chen2017road, chen2017no, tsai2018learning, Lian_2019_pycda, iqbal2020weakly, zou2018unsupervised, Cordts2016Cityscapes, dada_2019_ICCV, guo2021metacorrection} aiming to minimize the distribution gap between source and target domains are focused in two main directions. 1) adversarial learning and, 2) self-supervised learning for unsupervised domain adaptation (UDA) of semantic segmentation.
\noindent \textbf{Adversarial Domain Adaptation:} Adversarial learning is the most explored area for output space \cite{tsai2018learning, wang2020differential, vu2019advent, pan2020unsupervised,dada_2019_ICCV}, latent/feature space \cite{chen2017no, mancini2018boosting} and input space adaptation \cite{hoffman2017cycada, clan_2019_CVPR, zhang2018fully, Yunsh2019bidirect}. We briefly describe the feature space/feature alignment, as our work is related to it.
The authors in \cite{kim2020learning, hoffman2017cycada, zhang2020towards} used adversarial loss to minimize the distribution gap between the high level features representations of the source and target domain images.
However, these methods do not align class-wise distribution shifts but instead match the global marginal distributions. To overcome this, \cite{chen2017no, clan_2019_CVPR} combined category level adversarial loss (by defining class discriminators) with domain discriminator at feature space. \cite{iqbal2020weakly} tried to regularize the segmentation network using weak labels along with latent space marginal distribution alignment for domain adaptation of semantic segmentation.
\textcolor{black}{Similarly, the authors in \cite{yang2021exploring} investigated the robustness of UDA for semantic segmentation and proposed self-training augmented adversarial learning to improve the robustness to adversarial examples. Their approach resulted in better performance in the presence of adversarial examples, however, at the cost of reduced performance on normal input images.
}
\noindent \textbf{Self-supervised learning:}
Self-supervised learning for UDA is recently studied for major computer vision tasks like semantic segmentation and object detection \cite{tri2018fully, mlsl2020, khodabandeh2019robust, Lian_2019_pycda}.
The authors in \cite{zou2018unsupervised} proposed a self-paced self-training approach by generating class balanced pseudo-labels and class spatial priors extracted from the source dataset used to condition the pseudo-label generation. Zou et al. \cite{zou2019crst} extended the \cite{zou2018unsupervised} with confidence regularization strategies and soft pseudo-labels for self-training based UDA for semantic segmentation. LSE \cite{LSE_2020_Naseer} further worked with self-generated scale-invariant examples and entropy based dynamic selection for self-supervised learning.
\textcolor{black}{The authors in \cite{guo2021metacorrection} proposed a domain-aware meta-learning approach (MetaCorrection) to correct the segmentation loss and condition the pseudo-labels based on a noise transition matrix. They report considerable mIoU gains, especially when applied to a pre-adapted model.
}
In this work, we exploit a strategy similar to \cite{zou2018unsupervised} to generate pseudo-labels for target domain images during adaptation.
\noindent \textbf{Clustering Based Features Regularization:}
Some previous works also explored the effect of discriminative clustering on target data and moment matching across domains for target data adaptation \cite{chen2020homm, kumagai2019unsupervised}.
Recently, \cite{zhang2019category, Deng_2019_ICCV} tried to define category anchors on the last feature volume of the segmentation model to align class aware centers across the source and target domains. Tsai et al. \cite{tsai2019domain} tried to match the clustering distribution of discriminative patches from source and target domain images. Similarly, \cite{mlsl2020} and \cite{Lian_2019_pycda} exploited latent space and output space respectively by defining category based classification modules, forcing towards class-aware adaptation.
However, these methods do not explore the intra-class variations present in the source or target data but instead
leverage the discriminative property to align the inter-class clusters. We specifically focus on capturing the intra-class variations present in the source and target data by learning class-aware mixture models to help the adaptation.
\section{Distribution Regularised Self-supervised Learning}
\label{sec:method}
In this section, we provide details of our distribution regularized self-supervised learning (DRSL) architecture. It employs DeepLab-v2 \cite{chen2018deeplab} as a baseline and embeds new components that enable the semantic segmentation model to be robust to domain shift.
\subsection{Preliminaries}
For supervised semantic segmentation, we have access to source domain images $\{\mathrm{x_s, y_s}\}$ from $X_s \in \mathbb{R} ^{H\times W\times 3}$ with corresponding ground truth labels $Y_s \in \mathbb{R} ^{H\times W\times K}$. The $\{H, W\}$ shows the width and height of source domain images and $K$ shows the number of classes. Let $\mathcal{G}$ be a segmentation model with weights $\mathrm{w_g}$ that predicts the $K$ channel softmax probability outputs. For a given source image $\mathrm{x_s}$, the segmentation probability vector of class $c$ at any pixel location (${i,j}$) is obtained as $p(c|\mathrm{x_s, w_g})_{i,j} = \mathcal{G}(\mathrm{x_s})_{i,j}$
For fully labeled source data, the network parameters $\mathrm{w_g}$ are learned by minimizing the cross entropy loss (Eq. \ref{eqn:2}),
\begin{equation}
\small
\mathcal{L}^s_{seg} (\mathrm{x_s, y_s}) = -\sum_{i=1}^H \sum_{j=1}^W \sum_{c=1}^K \mathrm{y_s^{(c,i,j)}} ~\log(p(c|\mathrm{x_s, w_g})_{c,i,j})
\label{eqn:2}
\end{equation}
where $\mathcal{L}^s_{seg}$ is the source domain segmentation loss.
For unsupervised domain adaptation of the target domain, we have access to the target domain images $\{\mathrm{x_t, -}\}$ from $X_t \in \mathbb{R} ^{H_t\times W_t\times 3}$ with no ground truths available.
Thus, we adapt the iterative process used by \cite{mlsl2020, zou2018unsupervised} to first generate pseudo-labels $\mathrm{\hat{y}_t}$ using the source trained model and then fine-tune the source trained model on target data using Eq. \ref{eqn:3}.
\begin{equation}
\small
\mathcal{L}^t_{seg} (\mathrm{x_t, \hat{y}_t}) = -\sum_{i=1}^{H_t} \sum_{j=1}^{W_t} \mathrm{b^{(i,j)}_t} \sum_{c=1}^K \mathrm{\hat{y}_t^{(c,i,j)}} \log(p(c|\mathrm{x_t, w_g})_{c,i,j})
\label{eqn:3}
\end{equation}
where $\mathcal{L}^t_{seg}$ is the segmentation loss for target domain images
with respect to generated pseudo-labels $\mathrm{\hat{y}_t}$. $\mathrm{b_t}$ represents a binary mask with same resolution as $\mathrm{\hat{y}_t}$ to back-propagate loss for pixels which are assigned pseudo-labels.
The total loss for the segmentation model is the combination of true labels based source domain loss and pseudo-labels based target domain loss and is given by Eq. \ref{eqn:loss_seg_total},
\begin{equation}
\small
\mathcal{L}_{\mathcal{G}}(\mathrm{x_s, y_s,x_t, \hat{y}_t}) = \mathcal{L}^s_{seg} (\mathrm{x_s, y_s}) + \mathcal{L}^t_{seg} (\mathrm{x_t, \hat{y}_t})
\label{eqn:loss_seg_total}
\end{equation}
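
In practice, the target-domain term of Eq.~(\ref{eqn:3}) amounts to a masked cross-entropy in which pixels without a pseudo-label are excluded via an ignore index; the PyTorch sketch below is our own minimal illustration of this step (tensor names, shapes, and the ignore value are assumptions, not code from the implementation).
\begin{verbatim}
# Masked pseudo-label cross-entropy for the target domain; illustrative only.
import torch
import torch.nn.functional as F

def target_seg_loss(logits, pseudo_labels, ignore_index=255):
    """logits: (B, K, H, W); pseudo_labels: (B, H, W) with `ignore_index`
    marking pixels without a pseudo-label (the role of the mask b_t)."""
    return F.cross_entropy(logits, pseudo_labels, ignore_index=ignore_index)

# toy usage with random tensors
B, K, H, W = 2, 19, 64, 64
logits = torch.randn(B, K, H, W, requires_grad=True)
pseudo = torch.randint(0, K, (B, H, W))
pseudo[torch.rand(B, H, W) > 0.3] = 255     # keep ~30% of the pixels only
loss = target_seg_loss(logits, pseudo)
loss.backward()
print(float(loss))
\end{verbatim}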
\subsection{Multi-Modal Distribution Learning}
\label{sec:drsl}
We propose to learn the complex intra-class variations through a multi-modal distribution learning (MMDL) framework where, instead of a single cluster/anchor, each class is represented by multiple modes. This diverse representation of each class is used in the adaptation process to align the domains at a fine-grained level. Furthermore, we disentangle the task of learning these intra-class variations (MMDL) from the main segmentation task by designing a separate module for it, called multi-modal distribution learning based feature regularization (MMDL-FR). The proposed MMDL-FR module is model agnostic and can be appended to the encoder of any segmentation network.
The MMDL-FR module consists of mixture models based per-pixel classification augmented with distance metric learning (DML) based per-pixel embedding block.
The input of the MMDL-FR module is the feature volume $F \in \mathbb{R} ^{h\times w\times d}$, where $h$, $w$, and $d$ denote the spatial height, width, and depth of the encoder output (base features), as shown in Fig. \ref{img:model}(a).
The embedding block is comprised of 4 fully convolutional layers with different dilation rates (similar to ones used in the last layer of the segmentation network) followed by an upsampling layer.
The output of the embedding block $\mathcal{E}$ is a feature volume $E = \mathcal{E}(F) \in \mathbb{R} ^{h_o\times w_o\times \hat{d}}$, where $(h_o, w_o)=(H/2, W/2)$ (Sec.~\ref{sec:ablation}) and $d \gg \hat{d}$ for any randomly selected source image.
To train the MMDL-FR module, we adapt a formulation similar to \cite{repmet2019}.
For each class $c$, a multi-modal distribution with $M$ number of modes is learned.
Let $e=E(i,j)$ be embedding for location $(i, j)$, a vector $V^{c}_m$ represent the center of the mode $m, (m=1,..,M)$ of the class $c, (c=1,...,K)$ of the mixture models. In this work, these mode centers are formulated as the weights of a fully connected layer with size $K \cdot ~M \cdot ~\hat{d}$, and are reshaped into $(K \times M) \times \hat{d}$ producing $K \times M$ matrix for each input embedding vector $e$. This simple method makes it easy to flow back gradients to the fully connected layer and learn the segmentation backbone during training.
To compute the classification probability for each embedding vector $e$, we compute the Euclidean distance $D^{c}_m(e) = ||e-V^{c}_m||_2$ between $e$ and the representative $V^{c}_m$ and compute the posterior probabilities $q^c_m(e) \propto \exp(- (D^{c}_m(e))^2 / 2\sigma^2)$,
\textcolor{black}{where $\sigma^2$ is the variance of each mode and is set to 0.5.}
For the class $c$ posterior probability, we take the maximum over the $M$ modes of class $c$, $Q(C=c|e)=\max_{m=1,...,M} q^c_m(e)$,
where $C=c$ denotes class $c$.
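
A possible PyTorch realization of the mode representatives, the distances $D^c_m(e)$, and the (unnormalized) posteriors $q^c_m(e)$ and $Q(C=c|e)$ is sketched below; the dimensions and variable names are our own illustrative assumptions, with $\sigma^2=0.5$ as stated above.
\begin{verbatim}
# Mode centers as the weights of a fully connected layer, plus distances and
# posteriors; a minimal illustrative sketch of the MMDL classification step.
import torch
import torch.nn as nn

K, M, d_hat, sigma2 = 19, 3, 64, 0.5

class ModeCenters(nn.Module):
    def __init__(self):
        super().__init__()
        # weights of a fully connected layer, interpreted as (K*M) x d_hat
        self.fc = nn.Linear(d_hat, K * M, bias=False)

    def forward(self, e):                      # e: (N, d_hat) embeddings
        V = self.fc.weight.view(K * M, d_hat)  # mode representatives V^c_m
        dist = torch.cdist(e, V)               # Euclidean distances, (N, K*M)
        q = torch.exp(-dist.pow(2) / (2 * sigma2)).view(-1, K, M)
        Q = q.max(dim=2).values                # class posterior (unnormalized)
        return dist.view(-1, K, M), Q

centers = ModeCenters()
e = torch.randn(300, d_hat)                    # T_e sampled embeddings
dist, Q = centers(e)
print(dist.shape, Q.shape)                     # (300, K, M), (300, K)
\end{verbatim}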
\noindent \textbf{Loss Functions: }
To train the MMDL-FR module, two losses are used, i.e., triplet loss and the cross entropy loss. The triplet loss for \textit{embedding block} is defined by Eq. \ref{eqn:6},
\begin{equation}
\small
\mathcal{L}_{emb}(E) =\sum_{e \in E} |\min_m D^{c^*}_m(e) - \min_{m, c^* \neq c}D^{c}_m(e) + \alpha|_+
\label{eqn:6}
\end{equation}
where $|.|_+$ is the ReLU function and $\alpha$ is the minimum margin between the distance of an embedding $e$ to the closest mode representative $V^{c^*}_m$ of the true class $c^*$, and the distance of $e$ to the closest mode representative $V^{c}_m$ of an incorrect class.
Similarly, the cross entropy loss for mixture models based classification is given by Eq. \ref{eqn:7},
\begin{equation}
\small
\mathcal{L}_{cls} (\mathrm{E, u_f}) = - \sum_{e \in E} \sum_{c=1}^K \mathrm{u_f^{(c)}} \log(Q(C=c|e))
\label{eqn:7}
\end{equation}
where $\mathrm{u_f^{(c)}}$ is the embedding classification label obtained from $\mathrm{y_s^{c}}$ or $\mathrm{\hat{y}_t^{c}}$ for class $c$.
The triplet loss enforces the embedding block to learn representations that capture intra-class variation information, while the cross-entropy loss pushes them not to lose necessary class-specific information.
Due to these two losses, the MMDL-FR module acts as a regularizer in the latent space over the shared backbone, so that the shared features are much more informative than if only the segmentation head were used.
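
Given the distance tensor of the previous sketch, the two MMDL-FR losses of Eqs.~(\ref{eqn:6}) and (\ref{eqn:7}) can be sketched as follows; this is again our own illustration (the triplet term is averaged here, and a small constant is added inside the logarithm for numerical stability).
\begin{verbatim}
# Triplet embedding loss and mixture-model cross-entropy; illustrative sketch
# that reuses an (N, K, M) distance tensor like the one computed above.
import torch
import torch.nn.functional as F

def mmdl_losses(dist, labels, alpha=1.0, sigma2=0.5, eps=1e-8):
    """dist: (N, K, M) distances to the mode centers; labels: (N,) class ids."""
    d_min = dist.min(dim=2).values                   # closest mode per class
    d_true = d_min.gather(1, labels[:, None]).squeeze(1)
    d_wrong = d_min.scatter(1, labels[:, None], float("inf")).min(dim=1).values
    l_emb = F.relu(d_true - d_wrong + alpha).mean()  # triplet embedding loss

    q = torch.exp(-dist.pow(2) / (2 * sigma2))       # per-mode posteriors
    Q = q.max(dim=2).values                          # class posterior, (N, K)
    l_cls = -torch.log(Q.gather(1, labels[:, None]).squeeze(1) + eps).mean()
    return l_emb, l_cls

dist = torch.rand(300, 19, 3, requires_grad=True)    # toy T_e x K x M distances
labels = torch.randint(0, 19, (300,))
l_emb, l_cls = mmdl_losses(dist, labels)
(l_emb + 0.1 * l_cls).backward()
print(float(l_emb), float(l_cls))
\end{verbatim}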
\subsection{Stochastic Mode Alignment}
One characteristic of domain generalization would be that multi-modal distribution learning over one domain should result in modes which are very close to the modes learned in the other domain.
However, due to the domain shift, this is not generally true.
That is, in the target domain, the features of pixels assigned pseudo-label $c$ might not be close to any of the modes belonging to class $c$.
In addition, features in the target domain mapping to the same mode might not be close to each other, resulting in low posterior probabilities.
We minimize two loss functions to perform \textit{stochastic mode alignment}.
For the first, we apply a \textit{domain invariant consistency loss}, ensuring that features of pixels mapped to the same mode of the same class are near each other regardless of the domain they are sampled from.
Assume a batch consisting of an arbitrary number of source and target images, $\{(x_s^i, y_s^i)|i=0,1\dots,N_s,(x_t^i,\hat{y}_t^i)|i=0,1\dots,N_t \}$, where $\hat{y}_t^i$ are the pseudo-labels assigned to $x_t^i$.
Embeddings $E_t^i=\mathcal{E}(x_t^i)$ and $E_s^i=\mathcal{E}(x_s^i)$ are computed for all the target and source images in the batch.
We randomly sample $N_e$ embeddings, $\{e_t^i|i=0,1\dots,N_e\}$, from $\{E_t^i|i=0,1\dots,N_t \}$, choosing only from the ones having a valid pseudo-label.
For \textit{domain invariant consistency}, we create a triplet $(e_t^i, e_s^i, \hat{e}_{s}^i)$ such that the pseudo-label of $e_t^i$ and the ground-truth label of $e_s^i$ are the same class $c$, and both map to the same mode $m$ of class $c$. $\hat{e}_{s}^i$, on the other hand, is a source pixel's embedding of any class $c^{+} \ne c$.
This loss, when minimized, brings $e_t^i$ closer to $e_s^i$ than to any source pixel's embedding belonging to a different class.
\begin{equation}
\small
\mathcal{L}_{mcl} = \sum_{i=1}^{N_e} \big| \, || e_t^i - e_s^i ||_2^2 - || e_t^i - \hat{e}_{s}^i ||_2^2 + \alpha_1 \big|_+
\label{eqn:ada}
\end{equation}
Note: we could have chosen the closest source sample as the negative; however, this would have been computationally prohibitive. The margin $\alpha_1$ is set to 1 for all experiments.
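
A sketch of this cross-domain consistency term is given below; it is our own illustration and assumes that the triplets (same class and same mode for the positive pair, a different class for the negative) have already been constructed.
\begin{verbatim}
# Cross-domain mode consistency loss: for each sampled target embedding, an
# aligned source embedding (same class, same mode) is the positive and a
# source embedding of a different class is the negative. Illustrative sketch.
import torch
import torch.nn.functional as F

def mode_consistency_loss(e_t, e_s_pos, e_s_neg, alpha1=1.0):
    """e_t, e_s_pos, e_s_neg: (N_e, d_hat) triplets built beforehand."""
    d_pos = (e_t - e_s_pos).pow(2).sum(dim=1)
    d_neg = (e_t - e_s_neg).pow(2).sum(dim=1)
    return F.relu(d_pos - d_neg + alpha1).sum()

N_e, d_hat = 128, 64
e_t     = torch.randn(N_e, d_hat, requires_grad=True)
e_s_pos = torch.randn(N_e, d_hat)
e_s_neg = torch.randn(N_e, d_hat)
loss = mode_consistency_loss(e_t, e_s_pos, e_s_neg)
loss.backward()
print(float(loss))
\end{verbatim}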
The in-mode variance for the target samples is decreased by forcing them to come closer to the assigned mode and move away from the modes of the other classes. We sample $T_e$ embeddings per image per class from both source and target images and create the sets $E_s$ and $E_t$, respectively. Eq. \ref{eqn:6-sma} minimizes the triplet loss for both the source and target embeddings simultaneously.
\begin{equation}
\small
\mathcal{L}_{ma}(E_s, E_t) = \frac{1}{T^s_e} \mathcal{L}_{emb}(E_s) + \frac{1}{T^t_e} \mathcal{L}_{emb}(E_t)
\label{eqn:6-sma}
\end{equation}
where $T^s_e$ and $T^t_e$ represent the cardinalities of $E_s$ and $E_t$, which might be different since samples from all classes might not be available.
\subsection{Total Loss for Training and Adaptation}
The DRSL model is trained using the combination of the segmentation losses, the mode consistency loss, and the MMDL-FR module losses.
Let $\mathcal{L}_{cls}^s$ and $\mathcal{L}_{cls}^t$ represent calls to Eq. \ref{eqn:7} using source and target embeddings, respectively.
The source model with the MMDL module is trained using Eq.~\ref{eqn:loss-src}.
\begin{equation}
\begin{split}
\small
\mathcal{L}_{src} = \mathcal{L}^s_{seg} + \beta ~\mathcal{L}_{emb} +\eta \mathcal{L}_{cls}^s
\label{eqn:loss-src}
\end{split}
\end{equation}
During adaptation to target domain the loss functions in Eq.\ref{eqn:drsl} and Eq.\ref{eqn:drsl+} are used.
\begin{equation}
\begin{split}
\small
\mathcal{L}_{DRSL} = \mathcal{L}_{\mathcal{G}} + \beta ~\mathcal{L}_{ma} +\eta (\mathcal{L}_{cls}^s+\mathcal{L}_{cls}^t)
\label{eqn:drsl}
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\small
\mathcal{L}_{DRSL+} = \mathcal{L}_{\mathcal{G}} + \beta ~\mathcal{L}_{ma} +\eta (\mathcal{L}_{cls}^s+\mathcal{L}_{cls}^t) + \gamma ~\mathcal{L}_{mcl}
\label{eqn:drsl+}
\end{split}
\end{equation}
where $\beta$, $\eta$, and $\gamma$ are hyper-parameters to limit the effect of the MMDL-FR module loss values.
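
Schematically, the adaptation objective of Eq.~(\ref{eqn:drsl+}) is a weighted sum of the individual terms followed by a single backward pass; in the sketch below the loss terms are placeholders, and the value of $\gamma$ is our own assumption (only $\beta$ and $\eta$ are specified in Sec.~\ref{sec:implement-details}).
\begin{verbatim}
# Schematic combination of the adaptation losses into L_DRSL+ before a single
# backward pass; the individual terms are placeholder scalars here.
import torch

beta, eta, gamma = 0.25, 0.1, 0.1   # gamma is an assumed value

# placeholders standing in for the already-computed loss terms
l_seg   = torch.rand(1, requires_grad=True)  # L_G (source + target segmentation)
l_ma    = torch.rand(1, requires_grad=True)  # stochastic mode alignment
l_cls_s = torch.rand(1, requires_grad=True)  # MMDL classification, source
l_cls_t = torch.rand(1, requires_grad=True)  # MMDL classification, target
l_mcl   = torch.rand(1, requires_grad=True)  # cross-domain mode consistency

loss = l_seg + beta * l_ma + eta * (l_cls_s + l_cls_t) + gamma * l_mcl
loss.backward()
print(float(loss))
\end{verbatim}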
\begin{table*}[h]
\centering
\caption{Semantic segmentation performance for GTA-V to Cityscapes adaptation. The abbreviations "$A_I$", "$A_F$" and "$A_O$" stand for adversarial training at input space, latent space, and output space. Similarly, "$S_T$" represents self-supervised learning.}
\resizebox{\textwidth}{!}{
\begin{tabular}{l|c|c|ccccccccccccccccccc|c|c}
\hline
\multicolumn{23}{c}{GTA-V $\rightarrow$ Cityscapes}\\
\hline
Methods & \rot{Baseline} & \rot{Appr.} & \rot{Road} & \rot{Sidewalk} & \rot{Building} & \rot{Wall} & \rot{Fence} & \rot{Pole} & \rot{T. Light} & \rot{T. Sign} & \rot{Veg.} & \rot{Terrain} & \rot{Sky} & \rot{Person} & \rot{Rider} & \rot{Car} & \rot{Truck} & \rot{Bus} & \rot{Train} & \rot{M.cycle} & \rot{Bicycle} & \rot{mIoU}& \rot{mIoU Gain} \\ \hline \hline
Source \cite{chen2018deeplab} & \multirow{8}{*}{\rotHalf{DeepLab-v2}} & -& 75.8 & 16.8 & 77.2 & 12.5 & 21.0 & 25.5 & 30.1 & 20.1 & 81.3 & 24.6 & 70.3 & 53.8 & 26.4 & 49.9 & 17.2 & 25.9 & 6.5 & 25.3 & 36.0 & 36.6 & - \\
MinEnt \cite{vu2019advent} & & $A_O + S_T$ & 86.6 & 25.6 & 80.8 & 28.9 & 25.3 & 26.5 & 33.7 & 25.5 & 83.3 & 30.9 & 76.8 & 56.8 & 27.9 & 84.3 & \textbf{33.6} & 41.1 & 1.2 & 23.9 & 36.4 & 43.6 &7.0\\
FCAN \cite{zhang2018fcan} & & $A_I + A_O$ & - & - & - & - & - & - & - & - & - & - & - & - & - & - & - & - & - & - & - & 46.6 &10.0\\
IntraDA \cite{pan2020unsupervised} & & $A_O + S_T$ & 90.6 & 37.1 & 82.6 & 30.1 & 19.1 & 29.5 & 32.4 & 20.6 & {\ul 85.7} & {\ul 40.5} & 79.7 & 58.7 & {\ul 31.1} & \textbf{86.3} & 31.5 & {\ul 48.3} & 0.0 & 30.2 & 35.8 & 46.3 & 9.7 \\
PyCDA \cite{Lian_2019_pycda} & & $S_T$ & 90.5 & 36.3 & \textbf{84.4} & {\ul 32.4} & \textbf{28.7} & 34.6 & 36.4 & 31.5 & \textbf{86.8} & 37.9 & 78.5 & 62.3 & 21.5 & {\ul 85.6} & 27.9 & 34.8 & {\ul 18.0} & 22.9 & \textbf{49.3} & 47.4 & 10.8 \\
LSE \cite{LSE_2020_Naseer} & & $S_T$ & 90.2 & 40.0 & {\ul 83.5} & 31.9 & {\ul 26.4} & 32.6 & 38.7 & 37.5 & 81.0 & 34.2 & \textbf{84.6} & 61.6 & \textbf{33.4} & 82.5 & {\ul32.8} & 45.9 & 6.7 & 29.1 & 30.6 & 47.5 & 10.9 \\ \hline
Source \cite{wu2019Resnet38} & \multirow{3}{*}{\rotHalf{ResNet-38}} & - & 70.0 & 23.7 & 67.8 & 15.4 & 18.1 & 40.2 & 41.9 & 25.3 & 78.8 & 11.7 & 31.4 & {\ul 62.9} & 29.8 & 60.1 & 21.5 & 26.8 & 7.7
& 28.1 & 12.0 & 35.4 & - \\
CBST \cite{zou2018unsupervised} & & $S_T$ & 86.8 & 46.7 & 76.9 & 26.3 & 24.8 & 42.0 & { \ul 46.0} & \textbf{38.6} & 80.7 & 15.7 & 48.0 & 57.3 & 27.9 & 78.2 & 24.5 & \textbf{49.6} & 17.7 & 25.5 & 45.1 & 45.2 & 9.8 \\
CRST \cite{zou2019crst} & & $S_T$ & 84.5 & 47.7 & 74.1 & 27.9 & 22.1 & \textbf{43.8} & \textbf{46.5} & {\ul 37.8} & 83.7 & 22.7 & 56.1 & 56.8 & 26.8 & 81.7 & 22.5 & 46.2 & \textbf{27.5} & \textbf{32.3} & {\ul 47.9} & 46.8 & 11.4 \\ \hline
Source \cite{chen2018deeplab} & \multirow{5}{*}{\rotHalf{DeepLab-v2}} & -& 71.7 & 18.5 & 67.9 & 17.4 &10.2 &36.5 &27.6 &6.3 & 78.4 &21.8 &67.6 &58.3 &20.7 &59.2 & 16.4 & 12.5 & 7.9 & 21.2 & 13.0 & 33.8 & -\\
MRENT \cite{zou2019crst} & & $S_T$ & 91.8 & 53.4 & 80.6 & \textbf{32.6} & 20.8 & 34.3 & 29.7 & 21.0 & 84.0 & 34.1 & {\ul 80.6} & 53.9 & 24.6 & 82.8 & 30.8 & 34.9 & 16.6 & 26.4 & 42.6 & 46.1 & 12.3\\
Ours (DRSL) & & $A_I + S_T$ & \textbf{92.8} & \textbf{57.5} & 82.8 & 28.7 & 17.7 & 40.6 & 34.3 & 27.0 & 85.5 & \textbf{42.7} & 77.8 & 62.3 & 30.8 & 82.2 & 24.3 & 38.5 & 8.4 & {\ul 31.1} & 39.6 & {\ul 47.6} & {\ul 13.8}\\
Ours (DRSL+) & & $A_I + S_T$ & {\ul 92.6} & {\ul 55.9} & 82.4 & 29.0 & 24.6 & {\ul 42.7} & 38.3 & 35.7 & 85.5 & 39.5 & 77.0 & \textbf{64.2} & 26.2 & 83.9 & 19.5 & 31.6 & 9.3 & 27.1 & 42.5 & \textbf{47.8} & \textbf{14.0}
\\
\hline
\end{tabular}
}
\label{table:gta2city}
\end{table*}
\begin{figure*}[t]
\centering
\includegraphics[width= \textwidth]{images/gta2city-ISA-01.pdf}\\
\footnotesize
\begin{tabular}{P{2cm}P{2cm}P{2cm}P{2cm}P{2cm}}
Target Image & Ground Truth & DeepLab-v2 \cite{chen2018deeplab} & DRSL (Ours) & DRSL+ (Ours)
\end{tabular}
\caption{Semantic segmentation qualitative results for Cityscapes validation set when adapted from GTA-V dataset.}
\label{img:gta2city}
\end{figure*}
\section{Experiments and Results}
We performed multiple experiments for domain adaptation of semantic segmentation and compare the obtained results with state-of-the-art methods.
\subsection{Experimental Setup}
\textbf{Datasets: }
Following \cite{Lian_2019_pycda, zou2018unsupervised, mlsl2020}, we use the standard \textit{synthetic-to-real} benchmark setup for our experiments. Specifically, we set up \textit{GTA-V to Cityscapes} and \textit{SYNTHIA to Cityscapes} adaptation, where the former is the source domain dataset and the latter is the target domain dataset.
\noindent \textbf{Cityscapes} \cite{Cordts2016Cityscapes} dataset is a known benchmark for the tasks of semantic segmentation and domain adaptation.
The dataset has 5000 high-resolution labeled images partitioned into training (2975), validation (500), and testing (1525) sets. However, the annotations are only available for the training and validation sets.
\noindent \textbf{GTA-V} dataset \cite{Richter_2016_ECCV} is obtained from the video game and the images are densely labeled with classes similar to Cityscapes. There are 24966 images with spatial resolution $1052 \times 1914$. The GTA-V dataset also covers road scene imagery.
\noindent \textbf{SYNTHIA} \cite{Ros_2016_CVPR} is another synthetic labeled image collection having 16 classes similar to Cityscapes. The dataset has 9400 images, each with a spatial size of $760 \times 1280$. Contrary to GTA-V and Cityscapes, the SYNTHIA dataset has more viewpoint variations; the camera is not always placed on top of a vehicle.
\textbf{Network Architecture: }
\label{sec:network}
Following \cite{vu2019advent, tsai2018learning}, we use ResNet-101 \cite{he2016deep} backbone based DeepLab-v2 \cite{chen2018deeplab} as our baseline segmentation model.
Parallel to the segmentation head is the multi-modal distribution learning based feature regularization (MMDL-FR) module consisting of a combination of DML based Embedding Block (EB) and multi-modal distribution learning.
We call the DeepLab-v2 last block as the encoder (base network) and the output feature-map as base features.
For segmentation, these features are passed to the segmentation layer, while for MMDL-FR, they are passed to the embedding block (Fig. \ref{img:model}).
The embedding block consists of 4 fully convolutional layers with different dilation rates (similar to ones used in the segmentation layer of the segmentation network), producing an aggregated output.
Unlike \cite{repmet2019}'s fully-connected layers based DML for embedding generation, our strategy preserves the spatial structure necessary for segmentation and requires much less memory.
The modes of the multi-modal distributions are modeled with a fully connected layer, as described in Sec. 3.2 and shown in Fig. \ref{img:model}.
For each input, the embedding block of the MMDL-FR module outputs an embedding volume $E$ of size $(h_o\times w_o\times \hat{d})$.
For an input image, we select a maximum of $T_e$ embedding vectors per-class at random for further processing.
\textbf{Implementation Details: }
\label{sec:implement-details}
To implement the proposed approach and conduct the experiments, we use the PyTorch deep learning framework and a single GTX 1080ti GPU on a single Core-i5 machine with 32GB RAM. The ImageNet \cite{russakovsky2015imagenet} trained weights for ResNet-101 \cite{he2016deep} are used to train the DeepLab-v2 on the source dataset. The SGD optimizer with a weight decay of $5\times 10^{-4}$, a momentum of 0.9, and an initial learning rate of $2.5 \times 10^{-4}$ for source domain training and $5 \times 10^{-5}$ during adaptation is used. In both source training and adaptation, we randomly apply scaling (0.5-1.5) and horizontal flipping. For DML and mixture models based classification, the loss weights are set to $\beta = 0.25$ and $\eta = 0.1$ to limit the excessive gradient flow to the segmentation model. Similarly, for the mixture models, the number of modes $M$ is set to 3, and the number of embeddings $T_e$ per class per image is set to 300. For both source and target domain images, due to GPU memory limitations, small patches of size $512 \times 512$, cropped at random from the original high-resolution images, are processed.
The baseline segmentation model and the MMDL-FR module are initially trained with original source domain images, \textcolor{black}{generally referred to as the source-only model.}
For self-supervised domain adaptation, the selection of pixels as pseudo-labels is an important step, as the adaptation process depends on the quality of the pseudo-labels. We adapt an approach similar to \cite{zou2018unsupervised} to generate pseudo-labels using the original source data trained model. For a given class $c$, we select the most confident $\delta$ portion of pixels as pseudo-labels in the first round ($\delta=20\%$) and increase this ratio by 5\% in each additional round.
To further help the adaptation, we have obtained translated versions of the source domain datasets using CycleGAN~\cite{hoffman2017cycada} and use these alongside the original source images during adaptation.
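
The class-wise selection can be sketched as follows: for every class, only the most confident $\delta$ fraction of pixels receives a pseudo-label, while the rest are marked with an ignore index. This is our own minimal illustration of the strategy; tensor names and the exact thresholding are assumptions.
\begin{verbatim}
# Class-wise pseudo-label selection: keep only the delta-fraction most
# confident pixels of every class; the rest get ignore_index. Illustrative.
import torch

def generate_pseudo_labels(probs, delta=0.20, ignore_index=255):
    """probs: (B, K, H, W) softmax output of the source-trained model."""
    conf, labels = probs.max(dim=1)          # (B, H, W) confidence and label
    pseudo = torch.full_like(labels, ignore_index)
    for c in range(probs.shape[1]):
        mask = labels == c
        if mask.any():
            # class-wise threshold keeping the top delta fraction
            thr = torch.quantile(conf[mask], 1.0 - delta)
            pseudo[mask & (conf >= thr)] = c
    return pseudo

probs = torch.softmax(torch.randn(1, 19, 64, 128), dim=1)
pseudo = generate_pseudo_labels(probs, delta=0.20)
print((pseudo != 255).float().mean())        # roughly delta of pixels kept
\end{verbatim}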
\begin{comment}
\begin{table}[H]
\footnotesize
\caption{Performance (mIoU) gain comparison between the GTA-V trained source models and the respective GTA-V to Cityscapes adapted models.}
\centering
\begin{tabular}{c|ccc}
\hline
Dataset & \multicolumn{3}{c}{GTA-V $\rightarrow$ Cityscapes} \\
\hline
Methods & Source only & UDA Algo. & mIoU gain \\ \hline
\hline
FCN in the wild \cite{hoffman2016fcns}& 21.2 & 27.1 & 5.9\\
Curriculam DA \cite{curr2017_ICCV} & 22.3 & 28.9 & 6.6\\
AdaptSetNet \cite{tsai2018learning} & 36.6 & 42.4 & 5.8 \\
MinEnt \cite{vu2019advent} & 36.6 & 42.3 & 5.7 \\
CLAN \cite{clan_2019_CVPR} & 36.6 & 43.2 & 6.6 \\
All Structure \cite{structure_2019_CVPR} & 36.6 & 45.4 & 8.8 \\
CBST \cite{zou2018unsupervised} & 35.4 & 46.2 & 10.8 \\
PyCDA \cite{Lian_2019_pycda} & 36.6 & 47.4 & 10.8 \\
LSE \cite{LSE_2020_Naseer} & 36.6 & 47.5 & 10.9 \\
MRENT \cite{zou2019crst} & 33.6 & 46.1 & 12.5 \\
\hline
Ours (DRSL) & 33.6 & {\ul 47.6} & {\ul 14.0} \\
Ours (DRSL+) & 33.6 & \textbf{47.8} & \textbf{14.2} \\ \hline
\end{tabular}%
\label{table:gain}
\end{table}
\end{comment}
\begin{table*}[h]
\centering
\caption{Semantic segmentation performance of DRSL for SYNTHIA to Cityscapes adaptation.
We present the mIoU (16-classes) and mIoU* (13-classes) comparison with existing state-of-the-art domain adaptation methods for the Cityscapes validation set.
}
\resizebox{\textwidth}{!}{
\begin{tabular}{l|c|c|cccccccccccccccc|c|c}
\hline
\multicolumn{19}{c}{SYNTHIA $\rightarrow$ Cityscapes}\\
\hline
Methods & \rot{Baseline} & \rot{Appr.} & \rot{Road} & \rot{Sidewalk} & \rot{Building} & \rot{Wall} & \rot{Fence} & \rot{Pole} & \rot{T. Light} & \rot{T. Sign} & \rot{Veg.} & \rot{Sky} & \rot{Person} & \rot{Rider} & \rot{Car} & \rot{Bus} & \rot{M.cycle} & \rot{Bicycle} & \rot{mIoU} & \rot{mIoU*} \\ \hline \hline
Source \cite{chen2018deeplab} & \multirow{8}{*}{\rotHalf{DeepLab-v2}} & - & 64.3 & 21.3 & 73.1 & 2.4 & 1.1 & 31.4 & 7.0 & 27.7 & 63.1 & 67.6 & 42.2 & 19.9 & 73.1 & 15.3 & 10.5 & 38.9 & 34.9 & 40.3 \\
CLAN \cite{clan_2019_CVPR} & & $A_O$ & 81.3 & 37.0 & 80.1 & - & - & - & 16.1 & 13.7 & 78.2 & 81.5 & 53.4 & 21.2 & 73.0 & 32.9 & {\ul 22.6} & 30.7 & - & 47.8 \\
Structure \cite{structure_2019_CVPR} & & $A_F + A_O$ & \textbf{91.7} & \textbf{53.5} & 77.1 & 2.5 & 0.2 & 27.1 & 6.2 & 7.6 & 78.4 & 81.2 & 55.8 & 19.2 & 82.3 & 30.3 & 17.1 & 34.3 & 41.5 & 48.7 \\
LSE \cite{LSE_2020_Naseer} & & $S_T$ & {\ul 82.9} & {\ul 43.1} & 78.1 & 9.3 & 0.6 & 28.2 & 9.1 & 14.4 & 77.0 & 83.5 & 58.1 & 25.9 & 71.9 & \textbf{38.0} & \textbf{29.4} & 31.2 & 42.6 & 49.4\\
CRST \cite{zou2019crst} & & $S_T$ & 67.7 & 32.2 & 73.9 & 10.7 & {\ul 1.6} & 37.4 & 22.2 & 31.2 & 80.8 & 80.5 & 60.8 & {\ul 29.1} & {\ul 82.8} & 25.0 & 19.4 & 45.3 & 43.8 & 50.1 \\ \hline
Source \cite{wu2019Resnet38} & \multirow{3}{*}{\rotHalf{ResNet-38}} & - & 32.6 & 21.5 & 46.5 & 4.81 & 0.03 & 26.5 & 14.8 & 13.1 & 70.8 & 60.3 & 56.6 & 3.5 & 74.1 & 20.4 & 8.9 & 13.1 & 29.2 & 33.6 \\
CBST \cite{zou2018unsupervised} & & $S_T$ & 53.6 & 23.7 & 75.0 & 12.5 & 0.3 & 36.4 & {\ul 23.5} & 26.3 & 84.8 & 74.7 & \textbf{67.2} & 17.5 & \textbf{84.5} & 28.4 & 15.2 & \textbf{55.8} & 42.5 & 48.4 \\
MLSL \cite{mlsl2020} & & $S_T$ &73.7 &34.4 &78.7 &{\ul 13.7} &\textbf{2.9} &36.6 &\textbf{28.2} &22.3 &\textbf{86.1} &76.8 &{\ul 65.3} &20.5 &81.7 &31.4 &13.9 &47.3 &44.4 &50.8 \\
\hline
Source \cite{chen2018deeplab} & \multirow{3}{*}{\rotHalf{DeepLab-v2}} & - & 69.2 & 26.6 & 66.5 & 6.5 & 0.1 & 33.2 & 4.1 & 18.0 & 80.5 & 80.0 & 55.3 & 15.1 & 67.5 & 20.1 & 6.8 & 14.0 & 35.2 & 40.3\\
DRSL & & $A_I + S_T$ & 70.1 & 30.1 & \textbf{81.6} & \textbf{15.6} & 1.0 & {\ul 40.9} & 20.9 & \textbf{36.4} & {\ul 85.4} & {\ul 84.0} & 59.4 & 26.9 & 81.8 & {\ul 35.9} & 16.7 & 48.1 &{\ul 45.9} & {\ul 52.0}\\
DRSL+ & & $A_I + S_T$ & 82.8 & 40.1 & {\ul 81.3} & 13.0 & 1.6 & \textbf{41.6} & 19.8 & {\ul 33.1} & 85.3 & \textbf{84.3} & 59.5 & \textbf{30.1} &
78.6 & 25.3 & 19.8 & {\ul 51.7} & \textbf{46.7} &\textbf{53.2} \\ \hline
\end{tabular}
}
\label{table:syn2city}
\end{table*}
\begin{figure*}[t]
\centering
\includegraphics[width= \textwidth]{images/syn2city-ISA-01.pdf}\\
\footnotesize
\begin{tabular}{P{2cm}P{2cm}P{2cm}P{2cm}P{2cm}}
Target Image & Ground Truth & DeepLab-v2 \cite{chen2018deeplab} & DRSL (Ours) & DRSL+ (Ours)
\end{tabular}
\caption{Semantic segmentation qualitative results for SYNTHIA to Cityscapes adaptation.}
\label{img:syn2city}
\end{figure*}
\subsection{Experimental Results}
In this section, we present experimental results of the proposed approach for semantic segmentation. We follow the standard synthetic to real adaptation setup.
\subsubsection{Results on GTA-V to Cityscapes Adaptation}
Table \ref{table:gta2city} presents the domain adaptation performance of the proposed DRSL approach for the task of semantic segmentation, compared to existing adversarial learning and self-supervised learning architectures. For a fair comparison, the methods are divided into three groups, where each compared model is listed with its respective source model and backbone network.
Fig. \ref{img:gta2city} shows example images that highlight the performance of the proposed DRSL qualitatively. DRSL improves the performance for both object and stuff classes, as shown in Fig. \ref{img:gta2city} (column 4). Small and far-away objects like person, traffic light, and signboard are adapted better, alongside objects close to the camera and large-area stuff classes like road, bus, and sidewalk.
The cross domain mode alignment loss adds a further penalty that helps the adaptation of small objects, improving the performance for classes like bicycle, traffic sign, traffic light, pole, fence and person, as shown in Table \ref{table:gta2city} (DRSL+).
Overall, the proposed DRSL+ outperforms the latest self-supervised learning frameworks with clear margins, surpassing the model trained only on the source dataset by 14.0\% in mIoU (last column of Table \ref{table:gta2city}).
DRSL+ performs well on both object and stuff classes, whereas previous methods may perform better on some classes but fail on others.
Compared to CRST and MRENT \cite{zou2019crst}, which regularize the labels and the model against overconfident predictions, the proposed approach achieves mIoU gains of 1.0\% and 1.7\%, respectively. Similarly, DRSL outperforms PyCDA \cite{Lian_2019_pycda}, which works on pyramid-level labeling, and LSE \cite{LSE_2020_Naseer}, which incorporates scale invariance with class-balancing strategies on top of a higher-mIoU baseline model. Compared to composite adversarial learning-based methods like FCAN \cite{zhang2018fcan} and IntraDA \cite{pan2020unsupervised}, DRSL improves by at least 1\% in mIoU, with especially high margins on small objects.
Similarly, compared to CAG-UDA \cite{zhang2019category} (mIoU of 43.9\% without warm-up training), DRSL+ gains 3.9\% in mIoU.
\subsubsection{Results on SYNTHIA to Cityscapes Adaptation}
Table \ref{table:syn2city} presents the segmentation performance of the proposed DRSL approach for SYNTHIA to Cityscapes adaptation. For a fair comparison with existing methods, the compared methods are divided into three groups and the respective source-model results for the different setups are shown. Moreover, for SYNTHIA to Cityscapes, we report both mIoU (16 classes) and mIoU* (13 classes), following \cite{mlsl2020, zou2018unsupervised}.
Fig. \ref{img:syn2city} shows qualitative results for DRSL and DRSL+ compared to the baseline. Rows 1 and 2 of Fig. \ref{img:syn2city} focus on objects like rider, bicycle and person as well as the stuff classes, while row 3 highlights far-away objects and the segmentation of road-scene imagery.
The DRSL approach adapts both stuff and object classes well, with an improvement of 11.7\% in mIoU and 12.9\% in mIoU* over the baseline (source) model. Compared to the strong self-supervised learning approaches CBST \cite{zou2018unsupervised} and MLSL \cite{mlsl2020}, DRSL shows a minimum improvement of 2.3\% in mIoU and 2.4\% in mIoU*, respectively. Similarly, DRSL shows significant improvement over existing regularization-based models like CRST \cite{zou2019crst} and entropy-based methods, e.g., LSE \cite{LSE_2020_Naseer} and MinEnt \cite{vu2019advent}. Compared to CAG-UDA \cite{zhang2019category} (44.5\% mIoU and 51.4\% mIoU*), DRSL+ gains 2.2\% in mIoU and 1.9\% in mIoU*, respectively; the gap is even larger when comparing with CAG-UDA trained without warm-up.
\subsubsection{Ablation Experiments}
Ablation experiments are performed for GTA-V to Cityscapes.
\label{sec:ablation}
\noindent \textbf{Multi-Modal Distribution Learning based Regularization Module (MMDL-FR): }
During training and adaptation it is essential to balance the segmentation loss against the different elements of MMDL-FR.
We search over a range of values to identify empirically optimal values for the loss scaling factors $\beta$ and $\eta$ (Table \ref{table:cfr-values}).
Based on the experiments, $\beta$ and $\eta$ are set to 0.25 and 0.1 respectively, for all the experiments including SYNTHIA to Cityscapes.
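For concreteness, a sketch of how these weights can enter the overall training objective is given below; the pairing of $\beta$ and $\eta$ with the MMDL classification and DML embedding terms, as well as all names, is illustrative only and not a literal transcription of our implementation (see Sec. \ref{sec:drsl} for the actual loss definitions).
\begin{verbatim}
# Illustrative weighting of the loss terms (all names are placeholders):
#   loss_seg  - segmentation loss (source labels / target pseudo-labels)
#   loss_mmdl - MMDL-based classification loss of the MMDL-FR module
#   loss_dml  - DML embedding loss of the MMDL-FR module
#   loss_cons - cross-domain mode consistency loss (DRSL+ only)
def total_loss(loss_seg, loss_mmdl, loss_dml, loss_cons=0.0,
               beta=0.25, eta=0.1, gamma=0.1, drsl_plus=False):
    loss = loss_seg + beta * loss_mmdl + eta * loss_dml
    if drsl_plus:
        loss = loss + gamma * loss_cons
    return loss
\end{verbatim}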
\begin{table}[!htb]
\small
\centering
\caption{Effect of $(\beta, \eta)$ values of the MMDL-FR module.}
\resizebox{3.5in}{!}{
\begin{tabular}{c|ccccc}
\hline
$\beta, \eta$& (0.0, 0.0) & (0.1, 0.1) &(0.25, 0.1) & (0.5, 0.5) & (1.0, 1.0)\\ \hline
DRSL (mIoU) & 44.9 & 46.1 &\textbf{47.6} &45.9 & 46.0\\ \hline
\end{tabular}
}
\label{table:cfr-values}
\end{table}
\noindent \textbf{Effect of MMDL-FR Module on Adaptation Process: }
As described in Sec. \ref{sec:drsl} and Fig. \ref{img:model}, the MMDL-FR module regularizes the encoder (base-network) of the segmentation model with a DML-based embedding block and MMDL-based classification. Overall, MMDL-FR enhances the adaptation performance compared to the non-regularized version of the proposed method, as shown in Table \ref{table:drsl-effect}.
\begin{table}[!htb]
\small
\centering
\caption{Effect of MMDL-FR module on adaptation.}
\resizebox{3.5in}{!}{
\begin{tabular}{c|ccc}
\hline
Methods & Source~\cite{chen2018deeplab} &Without MMDL-FR & With MMDL-FR \\ \hline
mIoU & 33.6 &44.9 & \textbf{47.6} \\ \hline
\end{tabular}
}
\label{table:drsl-effect}
\end{table}
\noindent \textbf{Effect of Modes: }
As described in Sec. \ref{sec:drsl}, it is critical to select the correct number of modes for the multi-modal distribution in MMDL. We experimented with different numbers of modes (Table \ref{table:drsl-modes}) and selected M=3 for all the experiments.
\begin{table}[!htb]
\small
\centering
\caption{Effect of the number of modes (M) in MMDL.}
\begin{tabular}{c|ccc}
\hline
Number of Modes (M) & M=1 &M=3 & M=5 \\ \hline
mIoU & 44.7 &\textbf{47.6} & 46.2 \\ \hline
\end{tabular}
\label{table:drsl-modes}
\end{table}
\noindent \textbf{Effect of Labels Reduction for MMDL-FR Module: }
The output of the embedding block in the MMDL-FR module is 8 times smaller than the input image.
If the labels are not reduced, the embeddings have to be upsampled 8 times, which requires a lot of memory; conversely, reducing the labels 8 times introduces a boxing effect. Based on these observations we reduce the labels by a factor of 2 (and upsample the embeddings by a factor of 4). A performance comparison of the label reduction ratios is shown in Table \ref{table:embedding}.
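As an illustration of this bookkeeping (assuming simple nearest-neighbour handling; the names below are placeholders), with a label reduction ratio of 2 the embeddings, which are at $1/8$ of the input resolution, only need a $4\times$ upsampling:
\begin{verbatim}
import numpy as np

def reduce_labels(labels, ratio=2):
    # nearest-neighbour downsampling of an (H, W) integer label map
    return labels[::ratio, ::ratio]

def upsample_embeddings(emb, factor=4):
    # nearest-neighbour upsampling of a (D, h, w) embedding tensor
    return emb.repeat(factor, axis=1).repeat(factor, axis=2)

# reduce_labels(labels, 2) and upsample_embeddings(emb, 4) then live
# on the same (H/2, W/2) grid.
\end{verbatim}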
\begin{table}[H]
\footnotesize
\centering
\caption{Effect of label reduction ratio on mIoU.}
\begin{tabular}{ccccc}
\hline
\multicolumn{5}{c}{GTA-V $\rightarrow$ Cityscapes} \\
\hline
Label Reduction Ratio & 1 & 2 & 4 & 8 \\
Embeddings Upsampling Ratio & 8 & 4 & 2 & 1 \\
Adaptation Performance (mIoU) & 47.1 & \textbf{47.6} & 46.8 & 46.4 \\
\hline
\end{tabular}
\label{table:embedding}
\end{table}
\noindent \textbf{Pseudo-label Accuracy: }
To understand how MMDL-FR results in more accurate pseudo-labels during the adaptation process, we compute the mIoU of the pseudo-labels when MMDL-FR is not used (A) and when MMDL-FR is used (B).
At the start of adaptation (round-0) the mIoU is the same for A and B (Table~\ref{table:pl-ious}), since MMDL-FR only starts to contribute once adaptation begins, i.e., \textit{during} round-0.
Due to MMDL-FR, the predictions of model B after round-0 have much lower self-entropy, and its pseudo-labels have higher mIoU than the ones generated by model A, thus improving self-supervised domain adaptation.
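Self-entropy here refers to the average entropy of the per-pixel predictive distributions. A sketch of how such a number can be computed from softmax outputs is shown below (illustrative only; the exact normalization behind the values in Table \ref{table:pl-ious} may differ).
\begin{verbatim}
import numpy as np

def mean_self_entropy(probs, eps=1e-12):
    # probs: (N, C, H, W) softmax outputs over the evaluation images
    ent = -(probs * np.log(probs + eps)).sum(axis=1)   # (N, H, W)
    ent = ent / np.log(probs.shape[1])                 # normalize to [0, 1]
    return ent.mean()
\end{verbatim}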
\begin{table}[!htb]
\small
\centering
\caption{Pseudo-labels with $\&$ without MMDL-FR module}
\resizebox{3.5in}{!}{
\begin{tabular}{c|c|c|c|c}
\hline
\multirow{2}{*}{Method}& \multicolumn{2}{c}{Start of Round-0} & \multicolumn{2}{|c}{Start of Round-1}\\ \cline{2-5}
& mIoU & Self-Entropy & mIoU & Self-Entropy \\ \hline
A: Without MMDL-FR \{ST, ISA\} & 73.9 &6.56 $\times 10^{-2}$ & 76.4 &1.57$\times 10^{-2}$\\
B: With MMDL-FR \{ST, ISA, MMDL-FR\} & 73.9& 6.56$\times 10^{-2}$& \textbf{78.7}& \textbf{1.14}$\boldsymbol{\times 10^{-2}}$\\ \hline
\end{tabular}
}
\label{table:pl-ious}
\end{table}
\noindent \textbf{Effect of Consistency Loss Weight: }
The cross domain mode consistency loss brings the embeddings of source and target images belonging to the same mode of the same class closer together, helping to better adapt the small object classes. However, its contribution to the overall loss needs to be limited to keep the optimization stable. Our experiments suggest that $\gamma=0.1$ suits DRSL+ best, as shown in Table \ref{table:drsl-cons-loss}.
\begin{table}[!htb]
\small
\centering
\caption{Effect of cross domain mode consistency loss.}
\begin{tabular}{c|ccc}
\hline
Loss weight $\gamma$ & 0.01 &0.1 & 0.25 \\ \hline
mIoU & 46.0 &\textbf{47.8} & 45.3 \\ \hline
\end{tabular}
\label{table:drsl-cons-loss}
\end{table}
\noindent \textbf{Effect of Input Space Adaptation (ISA): }
Removing the ISA module decreases the mIoU by 1.6 points, from 47.6 (DRSL) to 46.0 (DRSL w/o ISA), indicating that ISA is helpful but not vital for the effectiveness of the proposed model.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we propose a distribution regularized self-supervised learning approach for domain adaptation of semantic segmentation.
Parallel to the semantic segmentation decoding head, we employ a clustering-based feature regularization (MMDL-FR) module.
While the segmentation head identifies what differentiates one class from another, MMDL-FR explicitly models intra-class pixel-level feature variations, allowing the model to capture a much richer pixel-level representation of each class and thus improving the model's generalization.
Moreover, this disentanglement of information w.r.t. the tasks improves task-dependent representation learning and allows separate domain alignments to be performed.
The shared base-network enables MMDL-FR to act as a regularizer over the segmentation head, thus reducing noisy pseudo-labels. Extensive experiments on the standard synthetic-to-real adaptation setups show that the proposed DRSL outperforms state-of-the-art approaches.
\section{Introduction}
The main goal of this paper is to enumerate planar graphs subject to a condition on the minimum degree $\delta$, and to analyze the corresponding random planar graphs. Asking for $\delta\ge1$ is not very interesting, since a random planar graph contains in expectation a constant number of isolated vertices. The condition $\delta\ge2$ is directly related to the concept of the core of a graph. Given a connected graph $\mathcal{G}$, its \textit{core} (also called 2-core in the literature) is the maximum subgraph $\mathcal{C}$ with minimum degree at least two. The core $\mathcal{C}$ is obtained from $\mathcal{G}$ by repeatedly removing vertices of degree one. Conversely, $\mathcal{G}$ is obtained by attaching rooted trees at the vertices of $\mathcal{C}$. The \emph{kernel} $\mathcal{K}$ of $\mathcal{G}$ is obtained by replacing each maximal path of vertices of degree two in the core $\mathcal{C}$ with a single edge. The kernel has minimum degree at least three, and $\mathcal{C}$ can be recovered from $\mathcal{K}$ by replacing edges with paths. Notice that $\mathcal{G}$ is planar if and only if $\mathcal{C}$ is planar, if and only if $\mathcal{K}$ is planar.
As shown in Figure \ref{Fig:graph}, the kernel may have loops and multiple edges, which must be taken into account since our goal is to analyze simple graphs. Another issue is that when replacing loops and multiple edges with paths
the same graph can be produced several times. To this end we weight multigraphs appropriately according to the number of loops and edges of each multiplicity. We remark that the concepts of core and kernel of a graph are instrumental in the theory of random graphs \cite{giant,probplanar}.
\begin{figure}[htp]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
&&\\
$\mathcal{G}$ & $\mathcal{C}$ & $\mathcal{K}$ \\
\begin{tikzpicture}[scale=0.7, auto,swap]
\foreach \pos/\name in {{(0,-2)/1}, {(0,2)/2}, {(-1,0)/4},
{(-2,0)/6}, {(-3,0)/3}, {(-1.5,1)/5}, {(-2,-.66)/7},
{(-1,-1.33)/8}, {(-2,0)/6}, {(1,-1.5)/10}, {(1,-.5)/9},
{(1,1.5)/13}, {(-3,1)/12}, {(-3,-1)/11}, {(-1.5,2)/14},
{(1,2.66)/15}, {(-2,-2)/16}}
\node[selected vertex] (\name) at \pos {$\name$};
\foreach\source/\dest in {1/2,1/4,2/4,1/8,1/10,2/5,2/13,3/5,3/6,
3/7,3/11,3/12,4/6,7/8,9/10,5/14,2/15,13/15,1/16,7/16}
\path[selected edge](\source) -- (\dest);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=0.7, auto,swap]
\foreach \pos/\name in {{(0,-2)/1}, {(0,2)/2}, {(-1,0)/4},
{(-2,0)/6}, {(-3,0)/3}, {(-1.5,1)/5}, {(-2,-.66)/7},
{(-1,-1.33)/8}, {(-2,0)/6},
{(1,1.5)/13}, {(1,2.66)/15}, {(-2,-2)/16}}
\node[selected vertex] (\name) at \pos {$\name$};
\foreach \pos/\name in {{(1,-1.5)/10}, {(1,-.5)/9},{(-3,1)/12}, {(-3,-1)/11},{(-1.5,2)/14}}
\node[vertex] (\name) at \pos {$\name$};
\foreach\source/\dest in
{1/2,1/4,2/4,1/8,2/5,3/5,3/6,3/7,4/6,7/8,1/16,7/16,2/13,2/15,13/15}
\path[selected edge](\source) -- (\dest);
\foreach\source/\dest in {1/10,3/11,3/12,9/10,5/14}
\path[edge](\source) -- (\dest);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=0.7, auto,swap]
\foreach \pos/\name in {{(0,-2)/1}, {(0,2)/2}, {(-1,0)/4},
{(-2,0)/6},{(-2,-.66)/7}, {(-3,0)/3}}
\node[selected vertex] (\name) at \pos {$\name$};
\foreach \pos/\name in {{(1,-1.5)/10}, {(1,-.5)/9},
{(1,1.5)/13}, {(-3,1)/12}, {(-3,-1)/11}, {(-1.5,1)/5},
{(-1,-1.33)/8}, {(-2,0)/6},{(-1.5,2)/14}, {(1,2.66)/15}, {(-2,-2)/16}}
\node[vertex] (\name) at \pos {$\name$};
\foreach\source/\dest in {1/2,1/4,2/4,2/3,3/4,3/7,1/7}
\path[strong edge](\source) -- (\dest);
\foreach\source/\dest in
{1/10,2/13,3/11,3/12,9/10,5/14,1/16,7/16,2/15,13/15}
\path[edge](\source) -- (\dest);
\path[strong edge] (1) edge [bend left] (7);
\path[strong edge, every loop/.style={looseness=15}] (2)
edge [in=-45,out=45,loop] (2);
\end{tikzpicture}
\\ \hline
\end{tabular}
\end{center}
\caption{Core and kernel of a graph.}\label{Fig:graph}
\end{figure}
It is convenient to introduce the following definitions: a \emph{2-graph} is a connected graph with minimum degree at least two, and a \emph{3-graph} is a connected graph with minimum degree at least three.
In order to enumerate planar 2- and 3-graphs, we use generating functions.
From now on all graphs are labelled and generating functions are of the exponential type. Let $c_n, h_n$ and $k_n$ be, respectively, the number of planar connected graphs, 2-graphs and 3-graphs with $n$ vertices, and let
$$
C(x) = \sum c_n {x^n \over n!}, \qquad
H(x) = \sum h_n {x^n \over n!}, \qquad
K(x) = \sum k_n {x^n \over n!}
$$
be the associated generating functions. Also, let $t_n=n^{n-1}$ be the number of (labelled) rooted trees with $n$ vertices and let $T(x)=\sum t_n x^n/n!$. The decomposition of a connected graph into its core and the attached trees implies the following equation
\begin{equation}\label{cores}
C(x) = H(T(x)) + U(x),
\end{equation}
where $U(x)= T(x)- T(x)^2/2$ is the generating function of unrooted trees.
Since $T(x)=x e^{T(x)}$, we can invert the above relation and obtain
$$
H(x) = C(xe^{-x}) -x + {x^2\over 2}.
$$
The equation defining $K(x)$ is more involved and requires the bivariate generating function
$$
C(x,y) = \sum c_{n,k}\,y^k {x^n \over n!},
$$
where $c_{n,k}$ is the number of connected planar graphs with $n$ vertices and $k$ edges. We can express $K(x)$ in terms of $C(x,y)$ as
\begin{equation}\label{kernels}
K(x) = C(A(x),B(x)) + E(x),
\end{equation}
where $A(x),B(x),E(x)$ are explicit elementary functions (see Section \ref{sec:equations}).
From the expression of $C(x)$ as the solution of a system of functional-differential equations \cite{gn}, it was shown that
$$
c_n \sim \kappa n^{-7/2} \gamma^n n!,
$$
where $\kappa\approx 0.4104\cdot 10^{-5}$ and $\gamma \approx 27.2269$ are computable constants. In addition, analyzing the bivariate generating function $C(x,y)$ it is possible to obtain results on the number of edges and other basic parameters in random planar graphs. Our main goal is to extend these results to planar 2-graphs and 3-graphs.
Using Equations (\ref{cores}) and (\ref{kernels}) we obtain precise asymptotic estimates for the number of planar 2- and 3-graphs:
$$
\renewcommand{\arraystretch}{1.5}
\begin{array}{llll}
h_n \sim & \kappa_2 n^{-7/2} \gamma_2^n n!,
\qquad &\gamma_2 \approx 26.2076, & \kappa_2 \approx 0.3724\cdot 10^{-5}, \\
k_n \sim & \kappa_3 n^{-7/2} \gamma_3^n n!,
\qquad &\gamma_3 \approx 21.3102, & \kappa_3 \approx 0.3107\cdot 10^{-5}.
\end{array}
$$
As is natural to expect, $h_n$ and $k_n$ are exponentially smaller than $c_n$.
Also, the number of 2-connected planar graphs is known to be asymptotically
$\kappa_{c} n^{-7/2} 26.1841^n n!$ (see \cite{bgw}), smaller than the number of 2-graphs.
This is consistent, since a 2-connected graph has minimum degree at least two.
By enriching Equations (\ref{cores}) and (\ref{kernels}) taking into account the number of edges, we prove that the number of edges is asymptotically normal, with linear expectation and variance, both in random planar 2-graphs and in random planar 3-graphs.
The expected number of edges in connected planar graphs was shown to be \cite{gn} asymptotically $\mu n$, where $\mu \approx 2.2133$. We show that the corresponding constants for planar 2-graphs and 3-graphs are
$$
\mu_2\approx 2.2614, \qquad \mu_3\approx 2.4065.
$$
This conforms to our intuition that increasing the minimum degree also increases the expected number of edges.
We also analyze the size $X_n$ of the core in a random connected planar graph, and the size $Y_n$ of the kernel in a random planar 2-graph. We show that both variables are asymptotically normal with linear expectation and variance and that
$$
\renewcommand{\arraystretch}{1.3}
\begin{array}{lll}
\mathbf{E}\, X_n \sim &\lambda_2 n, \qquad &\lambda_2 \approx 0.9618, \\
\mathbf{E}\, Y_n \sim &\lambda_3 n, \qquad &\lambda_3 \approx 0.8259.
\end{array}
$$
We remark that the value of $\lambda_2$ has been recently found by McDiarmid \cite{colin} using alternative methods.
Also, we remark that the expected size of the largest block (2-connected component) in random connected planar graphs is asymptotically $0.9598n$ \cite{3-conn}. Again this is consistent since the largest block is contained in the core.
The picture is completed by analyzing the size of the trees attached to the core. We show that the number of trees with $k$ vertices attached to the core is asymptotically normal with linear expectation and variance. The expected value is asymptotically
$$
C {k^{k-1} \over k! } \rho^k n,
$$
where $C>0$ is a constant and $\rho \approx 0.03673$ is the radius of convergence of $C(x)$.
For $k$ large, the previous quantity grows like
$$
{C \over \sqrt{2\pi}}\cdot k^{-3/2} (\rho e)^k n.
$$
This quantity is negligible when $k \gg \log(n)/(\log(1/\rho e))$. Using the method of moments, we show that the size $L_n$ of the largest tree attached to the core is in fact asymptotically
$$
{\log(n) \over \log(1/\rho e)}.
$$
Moreover, we show that $L_n/\log n$ converges in law to a Gumbel distribution.
This result provides new structural information on the structure of random planar graphs.
Our last result concerns the distribution of the vertex degrees in random planar 2-graphs and 3-graphs. We show that for each fixed $k\ge2$ the probability that a random vertex has degree $k$ in a random planar 2-graph tends to a positive constant $p_H(k)$, and for each fixed $k\ge3$ the probability that a random vertex has degree $k$ in a random planar 3-graph tends to a positive constant $p_K(k)$. Moreover
$ \sum _{k\ge2} p_H(k) = \sum _{k\ge3} p_K(k) = 1$,
and the probability generating functions
$$
p_H(w) =\sum _{k\ge2} p_H(k)w^k, \qquad p_K(w)= \sum _{k\ge3} p_K(k)w^k
$$
are computable in terms of the probability generating function $p_C(w)$ of connected planar graphs, which was fully determined in \cite{degree}.
The previous results show that almost all planar 2-graphs have a vertex of degree two, and almost all planar 3-graphs have a vertex of degree three. Hence asymptotically all our results hold also for planar graphs with minimum degree exactly two and three, respectively. In addition, all the results for connected planar graphs extend easily to arbitrary planar graphs. This is because the expected size of the largest component in a random planar graph is $n-O(1)$ (see \cite{3-conn}). We will not repeat for each of our results the corresponding statement for graphs of minimum degree exactly two or three.
It is natural to ask why we stop at minimum degree three. The reason is that
there seems to be no combinatorial decomposition allowing to deal with planar graphs of minimum degree four or five (a planar graph has always a vertex of degree at most five). It is already an open problem to enumerate 4-regular planar graphs. In contrast, the enumeration of cubic planar graphs was completely solved in \cite{cubic}.
The contents of the paper are as follows.
In Section \ref{sec:maps} we find analogous results for planar maps, that is, connected planar graphs with a fixed embedding. They are simpler to derive and serve as a preparation for the results on planar graphs, while at the same time they are new and interesting by themselves.
In Section \ref{sec:equations} we find equations linking the generating functions of connected graphs, 2-graphs and 3-graphs; to this end we must consider multigraphs as well as simple graphs. In Section~\ref{sec:graphs} we use singularity analysis in order to prove our main results on asymptotic enumeration and properties of random planar 2-graphs and 3-graphs.
The analysis of the distribution of the degree of the root, which is technically more involved, is deferred to Section~\ref{sec:degree}. We conclude with some remarks and open problems.
We assume familiarity with the basic results of analytic combinatorics as described in \cite{flajolet}. In particular, we need the following.
\begin{quote}
\emph{Transfer Theorem.}
If $f(z)$ is analytic in a $\Delta$-domain and satisfies, locally around its dominant singularity~$\rho$, the estimate
$$
f(z) \sim (1-z/\rho)^{-\alpha}, \qquad z \to \rho,
$$
with $\alpha \not\in \{0,-1,-2,\dots \}$, then the coefficients of $f(z)$ satisfy
$$
[z^n]f(z) \sim {n^{\alpha-1} \over \Gamma(\alpha)} \,\rho^{-n}.
$$
\end{quote}
\begin{quote}
\emph{Quasi-powers Theorem}.
Let $X_n$ be a discrete parameter with associated bivariate generating function $F(z,u)$, where $u$ marks the parameter. Suppose that there is a representation $$
F(z,u) = A(z,u) + B(z,u) C(z,u)^{-\alpha}, \qquad \alpha \not\in \{0,-1,-2,\dots \}
$$
in a bivariate $\Delta$-domain. Let $\rho(u)$ be the unique singularity of $z \mapsto F(z,u)$, given by $C(\rho(u),u)=0$. Then $X_n$ is asymptotically Gaussian with linear expectation and variance, and
$$
\mathbf{E}\, X_n \sim \left({-\rho'(1) \over \rho(1)} \right) n,
\qquad \mathbf{Var}\, X_n \sim \left(-{\rho''(1) \over \rho(1)} - {\rho'(1) \over \rho(1)} + \left({\rho'(1) \over \rho(1)}\right)^2 \right) n.
$$
\end{quote}
\section{Planar maps}\label{sec:maps}
We recall that a planar map is a connected planar multigraph embedded in the plane up to homeomorphism. A map is rooted if one of the edges is distinguished and given a direction. In this way a rooted map has a root edge and a root vertex (the tail of the root edge). We define the root face as the face to the right of the directed root edge. A rooted map has no automorphisms, in the sense that every vertex, edge and face is distinguishable. From now on all maps are planar and rooted. We stress the fact that maps may have loops and multiple edges.
The enumeration of rooted planar maps was started by Tutte in his seminal paper
\cite{tutte}. Let $m_n$ be the number of rooted maps with $n$ edges, with the
convention that $m_0=0$.
Then
$$
m_n = {2 \cdot 3^n \over (n+2)(n+1)} { 2n \choose n}, \quad n\geq 1
$$
The generating function $M(z) = \sum_{n\ge0} m_n z^n$ is equal to
\begin{equation}\label{th:Mn}
M(z) = {18z-1 + (1-12z)^{3/2} \over 54z^2}-1.
\end{equation}
Either from the explicit formula or from the expression for $M(z)$ and the transfer theorem, it follows that
\begin{equation}\label{eq:estimatesmaps}
m_n \sim {2 \over \sqrt\pi} \, n^{-5/2} 12^n.
\end{equation}
If $m_{n,k}$ is the number of maps with $n$ edges and degree of the root face equal to $k$, then $M(z,u) = \sum m_{n,k} u^k z^n$ satisfies the equation
\begin{equation}\label{eq:maps}
M(z,u) = zu^2(M(z,u)+1)^2 + uz \left({uM(z,u)-M(z,1)\over u-1}+1\right).
\end{equation}
By duality, $M(z,u)$ is also the generating function of maps
in which $u$ marks the degree of the root vertex. This is a convenient modification of the usual equations for maps,
where the empty map is also counted.
The core $\mathcal{C}$ of a map $\mathcal{M}$ is obtained, as for graphs,
by removing repeatedly vertices of degree one,
so that $\mathcal{C}$ has minimum degree at least two.
Then $\mathcal{M}$ is obtained from $\mathcal{C}$ by placing a planar tree
at each corner (pair of consecutive half-edges) of $\mathcal{C}$. This is equivalent
to replacing each edge with a non-empty planar tree rooted at an edge.
The number $t_n$ of planar trees with $n\geq 1$ edges
is equal to the $n$-th Catalan number
and the generating function $T(z) = \sum t_n z^n$ satisfies
$$
T(z) = {1 \over 1-z(1+T(z))}-1.
$$
We define a 2-map as a map with minimum degree at least two,
and a 3-map as a map with minimum degree at least three.
Let $h_n$ and $k_n$ be, respectively,
the number of 2-maps and 3-maps with $n$ edges.
\begin{theorem}\label{th:maps}
The generating functions $H(z)$ and $K(z)$ of 2-maps and 3-maps, respectively, are given by
$$
H(x) = \displaystyle {1-x \over 1+x} \left(M\left({x \over (1+x)^2} \right) -x\right), \qquad K(x) = \displaystyle{H\displaystyle\left({x\over 1+x}\right) -x \over 1+x}.
$$
The following estimates hold:
\begin{equation}\label{eq:estimates23maps}
h_n \sim \kappa_2 n^{-5/2} (5+2\sqrt6)^n, \qquad
k_n \sim \kappa_3 n^{-5/2} (4+2\sqrt6)^n,
\end{equation}
where
$$\kappa_2 = \frac{2}{\sqrt{\pi}} \left(\frac{2}{3}\right)^{5/4}
\approx 0.6797,\quad
\kappa_3 = \frac{2}{\sqrt{\pi}}\left(4-4\sqrt{\frac{2}{3}}\right)^{5/2}
\approx 0.5209.
$$
\end{theorem}
\begin{proof}
The decomposition of a map into its core and the collection of trees attached to the corners implies the following equation:
\begin{equation}\label{eq:MH}
M(z) = T(z)+ H\left(T(z)\right){1+T(z)\over 1-T(z)} .
\end{equation}
The first summand corresponds to the case where the map is a tree, and the second one where the core is not empty:
each edge is replaced with a non-empty tree whose root corresponds
to the original edge. The factor
$$
{1+T(z) \over 1-T(z)} = 1+{2\,T(z) \over 1-T(z)}
$$
is interpreted as follows.
The first summand corresponds to the case where
the root of the map is in the core, and the second one to the
case where it is in a pendant rooted tree $\tau$, which we place at the
left-back corner of the root edge of the core. In this case there
is a non-empty sequence of non-empty trees from the root
edge $e$ of $\tau$ to the root edge of the core, and the factor $2$
distinguishes the two possible directions of $e$.
In order to invert the former relation let $x=T(z)$, so that
$$
z={x\over(1+x)^2}.
$$
We obtain
\begin{equation}\label{eq:HM}
H(x) = {1-x \over 1+x} \left(M\left({x \over (1+x)^2} \right) -x\right)
= x+3x^2+16x^3+96x^4+624x^5 + \cdots
\end{equation}
\begin{figure}[htp]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
&&\\
$\mathcal{M}$ & $\mathcal{C}$ & $\mathcal{K}$ \\
\begin{tikzpicture}[scale=1.1, auto,swap]
\foreach \pos/\name in {{(0,0)/1}, {(2,0)/2}, {(2,3)/3},
{(1,3)/4}, {(1,3.5)/5}, {(0,3)/6}, {(0,3.5)/7},
{(-0.5,3)/8}, {(-0.5,2)/9}, {(0,2)/10}, {(-0.5,1)/11},
{(0,1)/12}, {(-0.5,0.5)/13}, {(0.5,2.25)/14}, {(1,2.5)/15},
{(1,1)/16}, {(1,1.5)/17}, {(1.5,1.5)/18}, {(1.33,1)/19},
{(1.66,1)/20}}
\node[selected point] (\name) at \pos{};
\foreach\source/\dest in {1/2,1/12,1/17,2/3,3/4,3/17,4/6,6/7,6/8,7/8,
6/14,6/10,9/10,10/12,11/12,12/13,14/15,14/17,16/17,17/18,18/19,18/20}
\path[selected edge](\source) -- (\dest);
\begin{scope}[very thick, ->]
\draw (0.99,0)--(1,0);
\end{scope}
\path[selected edge](2) edge [bend right] (3);
\path[selected edge](4) edge [bend right] (5);
\path[selected edge](4) edge [bend left] (5);
\path[selected edge, every loop/.style={looseness=30}] (3)
edge [in=0,out=90,loop] (3);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=1.1, auto,swap]
\foreach \pos/\name in {{(0,0)/1}, {(2,0)/2}, {(2,3)/3},
{(1,3)/4}, {(1,3.5)/5}, {(0,3)/6}, {(0,3.5)/7},
{(-0.5,3)/8}, {(0,2)/10},
{(0,1)/12}, {(0.5,2.25)/14},
{(1,1.5)/17}}
\node[selected point] (\name) at \pos{};
\foreach \pos/\name in {{(-0.5,2)/9}, {(-0.5,1)/11},
{(-0.5,0.5)/13}, {(1,2.5)/15},
{(1,1)/16}, {(1.5,1.5)/18}, {(1.33,1)/19},
{(1.66,1)/20}}
\node[point] (\name) at \pos{};
\foreach\source/\dest in {1/2,1/12,1/17,2/3,3/4,3/17,4/6,6/7,6/8,7/8,
6/14,6/10,10/12,14/17}
\path[selected edge](\source) -- (\dest);
\foreach\source/\dest in {9/10,11/12,12/13,14/15,16/17,17/18,18/19,18/20}
\path[edge](\source) -- (\dest);
\begin{scope}[very thick, ->]
\draw (0.99,0)--(1,0);
\end{scope}
\path[selected edge](2) edge [bend right] (3);
\path[selected edge](4) edge [bend right] (5);
\path[selected edge](4) edge [bend left] (5);
\path[selected edge, every loop/.style={looseness=30}] (3)
edge [in=0,out=90,loop] (3);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=1.1, auto,swap]
\foreach \pos/\name in {{(0,0)/1}, {(2,0)/2}, {(2,3)/3},
{(1,3)/4}, {(0,3)/6},
{(1,1.5)/17}}
\node[selected point] (\name) at \pos{};
\foreach \pos/\name in {{(1,3.5)/5}, {(0,3.5)/7},
{(-0.5,3)/8}, {(0,2)/10},
{(0,1)/12}, {(0.5,2.25)/14}, {(-0.5,2)/9}, {(-0.5,1)/11},
{(-0.5,0.5)/13}, {(1,2.5)/15},
{(1,1)/16}, {(1.5,1.5)/18}, {(1.33,1)/19},
{(1.66,1)/20}}
\node[point] (\name) at \pos{};
\foreach\source/\dest in {1/2,1/6,1/17,2/3,3/4,3/17,4/6,
6/17}
\path[strong edge](\source) -- (\dest);
\foreach\source/\dest in {9/10,11/12,12/13,14/15,16/17,17/18,18/19,18/20,
6/7,7/8,6/8}
\path[edge](\source) -- (\dest);
\begin{scope}[very thick, ->]
\draw (0.99,0)--(1,0);
\end{scope}
\path[strong edge](2) edge [bend right] (3);
\path[edge](4) edge [bend right] (5);
\path[edge](4) edge [bend left] (5);
\path[strong edge, every loop/.style={looseness=30}]
(3)edge [in=0,out=90,loop] (3);
\path[strong edge, every loop/.style={looseness=30}] (6)
edge [in=180,out=90,loop] (6);
\path[strong edge, every loop/.style={looseness=30}] (4)
edge [in=45,out=135,loop] (4);
\end{tikzpicture}
\\ \hline
\end{tabular}
\end{center}
\caption{Core and kernel of a map.}
\label{fig:map}
\end{figure}
Let now $\mathcal{C}$ be a 2-map. The kernel $\mathcal{K}$ of $\mathcal{C}$ is defined as follows: replace every maximal path of vertices of degree
two in $\mathcal{C}$ with a single edge (see Figure \ref{fig:map}). Clearly $\mathcal{K}$ is a 3-map and $\mathcal{C}$ can be obtained by replacing edges in $\mathcal{K}$ with paths.
It follows that
\begin{equation}\label{eq:HK}
H(z) = K\left({z \over 1-z} \right){1 \over 1-z}+ {z \over 1-z}.
\end{equation}
The first term corresponds to the substitution of paths for edges, and the extra factor $1/(1-z)$ indicates where to locate the new root edge
in the path replacing the original root edge. The last term corresponds to cycles, whose kernel is empty. Inverting the relation $x=z/(1-z)$ we obtain
\begin{equation}\label{eq:KH}
K(x) = \displaystyle{H\displaystyle\left({x\over 1+x}\right) -x \over 1+x} =
2 z^2+9 z^3+47 z^4+278 z^5+ \cdots
\end{equation}
In order to obtain asymptotic estimates for $h_n$ and $k_n$ we need
to locate the dominant singularities of $H(z)$ and $K(z)$.
The singularity of $M(z)$ is at $\rho = 1/12$, and that of $T(z)$ is at $1/4$. Hence the singularity of $H(z)$ is at $\sigma = T(\rho) = 5 - 2 \sqrt6$, the solution in $(0,1)$ of $\sigma/(1+\sigma)^2=\rho$. It follows that the singularity of $K(z)$ is at $\tau = \sigma/(1-\sigma) = (\sqrt6-2)/4.$ For future reference we display these basic constants, that is, the dominant singularities for 2- and 3-maps:
$$
\sigma = 5-2\sqrt6, \qquad\tau = {\sqrt6-2 \over 4}.
$$
The singular expansion of $M(z)$ at the singularity $z=1/12$ can be obtained directly from the explicit formula (\ref{th:Mn}), and is equal to
$$
M(z) = {1 \over 3} -{4\over 3}Z^2 + {8\over 3}Z^3 + O(Z^4),
$$
where $Z =\sqrt{1-12z}$.
Plugging this expression into (\ref{eq:HM}) and expanding gives
$$
H(x) = H_0 +H_2 X^2 + \frac{8}{3}\left(\frac{2}{3}\right)^{5/4}
X^3 + O(X^4),
$$
where now $X=\sqrt{1-x/\sigma}$.
A similar computation using (\ref{eq:KH}) gives
$$
K(x) = K_0 +K_2 X^2 +
\frac{8}{3}\left(4-4\sqrt{\frac{2}{3}}\right)^{5/2} X^3 + O(X^4),
$$
where $X=\sqrt{1-x/\tau}$.
The estimates for $h_n$ and $k_n$ follow directly by the transfer theorem and using the equality $\Gamma(-3/2)=4\sqrt\pi/3$.
\end{proof}
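Relations (\ref{eq:HM}) and (\ref{eq:KH}) are easy to check on their initial coefficients. For instance, the following short SymPy script (a sanity check only, not part of the proof) recomputes both series from Tutte's formula for $m_n$.
\begin{verbatim}
import sympy as sp

x, z = sp.symbols('x z')
N = 6  # truncation order

# Tutte's formula: m_n = 2*3^n/((n+2)(n+1)) * binomial(2n, n),  n >= 1
M = sum(sp.Integer(2)*3**n*sp.binomial(2*n, n)/((n + 2)*(n + 1))*z**n
        for n in range(1, N))

# H(x) = (1-x)/(1+x) * (M(x/(1+x)^2) - x)
H = (1 - x)/(1 + x)*(M.subs(z, x/(1 + x)**2) - x)
H = sp.expand(sp.series(H, x, 0, N).removeO())
print(H)   # coefficients 1, 3, 16, 96, 624

# K(x) = (H(x/(1+x)) - x)/(1+x)
K = (H.subs(x, x/(1 + x)) - x)/(1 + x)
print(sp.expand(sp.series(K, x, 0, N).removeO()))   # coefficients 2, 9, 47, 278
\end{verbatim}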
Our next result is a limit law for the size of the core and the kernel in random maps.
\begin{theorem}
The size $X_n$ of the core of a random map with $n$ edges, and the size $Y_n$ of the kernel of a random 2-map with $n$ edges
are asymptotically Gaussian with
$$
\renewcommand{\arraystretch}{2}
\begin{array}{ll}
\mathbf{E}\, X_n \sim \displaystyle{\sqrt 6 \over 3}n \approx 0.8165n, & \quad \mathbf{Var}\, X_n \sim \displaystyle\frac{n}{6} \approx 0.1667n,\\
\mathbf{E}\, Y_n \sim (2\sqrt6 -4)n \approx 0.8990n, & \quad \mathbf{Var}\, Y_n
\sim (18\sqrt{6}-44)n\approx 0.0908n.\\
\end{array}
$$
The size $Z_n$ of the kernel of a random map with $n$ edges is also
asymptotically Gaussian with
$$
\mathbf{E}\, Z_n \sim
\left(4-{4\sqrt6\over3}\right)n \approx 0.7340n,\qquad
\mathbf{Var}\, Z_n \sim \left({128\over 3} - {52\over 3}\sqrt{6}\right)n
\approx 0.2088n.
$$
\end{theorem}
\begin{proof}
If $u$ marks the size of the core in maps, then an immediate extension of (\ref{eq:MH}) yields
\begin{equation}\label{eq:MHu}
M(z,u) = H\left(uT(z)\right){1+T(z)\over 1-T(z)} +T(z).
\end{equation}
It follows that the singularity $\xi(u)$ of the univariate function $z \mapsto M(z,u)$ is given by
$$
\xi(u) = {\sigma u\over (\sigma+u)^2}.
$$
We are in a situation where the quasi-powers theorem applies, so that the distribution is asymptotically Gaussian with linear expectation and variance.
An easy calculation gives
$$
- {\xi'(1) \over \xi(1)} = {\sqrt6 \over 3}, \qquad
-{\xi''(1) \over \xi(1)} - {\xi'(1) \over \xi(1)}
+ \left({\xi'(1) \over \xi(1)}\right)^2 = {1\over 6}.
$$
If now $u$ marks the size of the kernel in 2-maps then an extension of (\ref{eq:HK}) gives
\begin{equation}\label{eq:HKu}
H(z,u) = K\left({uz \over 1-z} \right){1 \over 1-z}+ {z \over 1-z}.
\end{equation}
The singularity $\chi(u)$ of $z \mapsto H(z,u)$ is now given by
$$
\chi(u) = {\tau \over \tau+u}.
$$
Again the quasi-powers theorem applies and we have
$$
- {\chi'(1) \over \chi(1)} = {2\sqrt6 -4}, \qquad
-{\chi''(1) \over \chi(1)} - {\chi'(1) \over \chi(1)}
+ \left({\chi'(1) \over \chi(1)}\right)^2 = 18\sqrt{6}-44.
$$
The last statement concerning $Z_n$ follows by combining equations (\ref{eq:MHu}) and (\ref{eq:HKu}), obtaining an expression of $M(z,u)$ in terms
of $K(z)$, and repeating the same computations as before for the
corresponding singularity function.
\end{proof}
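As a quick symbolic sanity check of these constants (an illustration only), one can differentiate the singularity functions $\xi$ and $\chi$ directly, for instance with SymPy.
\begin{verbatim}
import sympy as sp

u = sp.symbols('u')
sigma = 5 - 2*sp.sqrt(6)
tau = (sp.sqrt(6) - 2)/4

def mean_var(rho):
    r0, r1, r2 = [sp.simplify(sp.diff(rho, u, k).subs(u, 1)) for k in range(3)]
    mean = -r1/r0
    var = -r2/r0 - r1/r0 + (r1/r0)**2
    return sp.simplify(mean), sp.simplify(var)

xi = sigma*u/(sigma + u)**2    # singularity function for the size of the core
chi = tau/(tau + u)            # singularity function for the size of the kernel
print(mean_var(xi))            # expect sqrt(6)/3 and 1/6
print(mean_var(chi))           # expect 2*sqrt(6)-4 and 18*sqrt(6)-44
\end{verbatim}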
It is interesting to compare the previous result with the known results on the
largest block (2-connected components) in random maps \cite{airy}.
The expected size of the largest block in random maps is asymptotically $n/3$,
rather smaller than the size of the core.
In other words, the core $\mathcal{C}$ consists of the largest block $\mathcal{B}$
together with smaller blocks attached to $\mathcal{B}$
comprising in total ${\sqrt6-1 \over 3}n \approx 0.4832n$ edges.
An explanation for this is the presence of a linear number of loops,
which belong to the core, but do not belong to the largest block.
\medskip
Our next goal is to analyze the size of the trees attached to the core of a random map.
\begin{theorem}
Let $X_{n,k}$ count trees with $k$ edges attached to the core of a random map with $n$ edges. Then $X_{n,k}$ is asymptotically normal and
$$
\mathbf{E}\, X_{n,k} \sim \alpha_k n
$$
where
$$
\alpha_k = ({4+{5\over3}\sqrt6}){1\over k+1}{2k \choose k}\left({1\over12}\right)^k,
\quad k \geq 1.
$$
Moreover, $\sum_{k\ge1} \alpha_k=\sqrt6/3$.
\end{theorem}
\begin{proof}
The generating function for trees, where variable $w_k$ marks trees with $k$ edges, is equal to
$$
T(z,w_k) = T(z) + (w_k-1) t_k z^k,
$$
where $t_k = C_k = {1\over k+1}\binom{2k}{k}$ is the $k$-th coefficient
of $T(x)$.
The scheme for the core decomposition is then
$$
M(z,w_k) = H(T(z,w_k)){1+T(z)\over 1-T(z)} +T(z).
$$
It follows that the singularity $\rho_k(w_k)$ of the univariate
function $z\mapsto M(z,w_k)$ is given by the equation
$$
T(\rho_k(w_k)) + (w_k-1) t_k \rho_k(w_k)^k = \sigma.
$$
An easy calculation gives
$$
-{\rho_k'(1)\over \rho_k(1)} =
{1-\sigma\over\sigma(1+\sigma)}C_k \left(1\over12\right)^{k}.
$$
The first part of the proof is concluded by noticing that
$(1-\sigma)/(\sigma(1+\sigma))=4+{5\over3}\sqrt6$.
Finally, $\sum_{k\ge1}\alpha_k= \sqrt6/3$ follows from the closed form of the generating function for the Catalan numbers.
\end{proof}
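For instance, the identity $\sum_{k\ge1}\alpha_k=\sqrt6/3$ can be checked in a few lines (an illustration only), using $\sum_{k\ge1}C_kz^k=(1-2z-\sqrt{1-4z})/(2z)$ evaluated at $z=1/12$.
\begin{verbatim}
import sympy as sp

z = sp.Rational(1, 12)
cat_sum = (1 - 2*z - sp.sqrt(1 - 4*z))/(2*z)      # sum_{k>=1} C_k z^k at z = 1/12
alpha_sum = (4 + sp.Rational(5, 3)*sp.sqrt(6))*cat_sum
print(sp.simplify(cat_sum - (5 - 2*sp.sqrt(6))))  # 0: the sum equals sigma
print(sp.simplify(alpha_sum - sp.sqrt(6)/3))      # 0
\end{verbatim}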
Recall that the size of the core is asymptotically ${1-\sigma\over1+\sigma}n={\sqrt6\over3}n$. Hence
the asymptotic probability that a random tree attached to the core has size $k$ is
\begin{equation}\label{beta-k}
\beta_k={\alpha_k\over\sqrt6/3}
\sim {1\over \sigma\sqrt\pi} k^{-3/2} 3^{-k},\qquad k \to \infty.
\end{equation}
It follows that if $k \gg \log (n)/\log (3)$, the expected number $\alpha_kn$ of trees of size $k$ tends to zero. This indicates that the size $L_n$ of the largest tree attached to the core is at most $\log (n)/\log (3)$ with high probability. We are going to show that in fact
$L_n/\log n$ tends to $1/\log 3$. Before that we need some preliminaries.
In order to analyze the parameter $L_n$, we use the theory of
Boltzmann samplers as developed in \cite{boltzmann}.
We can model a random map (different from a tree) as follows:
take a random 2-map and replace every edge
with a rooted tree, in a way that the probability that the size of a tree
is $k$ equals $\beta_k$, independently for each tree.
For the sake of simplicity we only consider the trees placed in edges different from the root. In other words if $m+1$ is the size of the core of a random map, then the sizes of the trees attached to the core are modeled as
a sequence $Y_1,\ldots,Y_m$ of i.i.d. random variables, where $P(Y_i=k) = \beta_k$.
Then the size of the largest tree attached to the core of a random map is equal to $\max\{Y_1,\ldots,Y_m\}$.
In order to analyze this extremal parameter we follow the approach of Gao and Wormald for the analysis of the maximum vertex degree in random maps~\cite{gw}.
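The following small simulation (an illustration only; the truncation level and sample size are arbitrary choices) shows the logarithmic growth of $\max\{Y_1,\ldots,Y_m\}$ predicted by this model.
\begin{verbatim}
import numpy as np
from math import comb, sqrt, log

# beta_k = alpha_k/(sqrt(6)/3) = (5+2*sqrt(6)) * C_k * 12**(-k)
K = 200                                    # truncate the (geometrically small) tail
beta = np.array([(5 + 2*sqrt(6))*comb(2*k, k)/(k + 1)/12.0**k
                 for k in range(1, K + 1)])
beta /= beta.sum()                         # renormalize the truncated distribution

rng = np.random.default_rng(0)
m = 10**6
samples = rng.choice(np.arange(1, K + 1), size=m, p=beta)
print(samples.max(), log(m)/log(3))        # both of order log_3 m (about 12.6 here)
\end{verbatim}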
\begin{theorem}\label{th:maxtrees}
Let $Y_1,\ldots,Y_m$ be i.i.d. integer random variables such that
$P(Y_i=k)=\beta_k$. Let $\gamma=\gamma(m)$ be
such that $m\gamma^{-3/2}3^{-\gamma}/(\sigma\sqrt{\pi})=1$.
Then
$$
P(\max\{Y_1,\ldots Y_m\}<x+\gamma(m))\sim
\exp\left(-3^{1-x}/2\right)
$$
uniformly for $|x|$ bounded with $x+\gamma(m)$ an integer.
\end{theorem}
We need the following technical lemma from~\cite{gw}, based on the method of moments.
\begin{lemma}\label{lm:gao}
Suppose that $X_1,\ldots,X_m$, where $X_i=X_i(m)$, are non-negative
integer variables and there is a sequence of positive reals $\gamma(m)$, and constants $0<\alpha$, $c<1$ such that
\begin{enumerate}[(i)]
\item $\gamma(m)\rightarrow\infty$ and $m-\gamma(m)\rightarrow \infty$;
\item for any fixed $r$ and sequences $k_i(m)$ with
$|k_i(m)-\gamma(m)|=O(1)$, for $1\leq i \leq r$ we have
$$
\mathbf{E}\,([X_{k_1(m)}]_{l_1}[X_{k_2(m)}]_{l_2}\ldots[X_{k_r(m)}]_{l_r})
\sim \prod_{j=1}^{r}\alpha^{(k_j(m)-\gamma(m))l_j},
$$
where $[X]_l=X(X-1)\ldots(X-l+1)$ denotes the falling factorial, so that $\mathbf{E}\,[X]_l$ is the $l$-th factorial moment.
\item $P(X_{k(m)}>0)=O(c^{k(m)-\gamma(m)})$ uniformly for all
$k(m)>\gamma(m)$.
\end{enumerate}
Then there exists a function $\omega=\omega(m)\rightarrow\infty$
(sufficiently slowly as $m\rightarrow\infty$) so that the following
holds. For $k=\lfloor\gamma-\omega\rfloor$, the total variation distance
between the distribution of $(X_k,X_{k+1},\ldots)$, and that of
$(Z_k,Z_{k+1},\ldots)$ tends to 0, where the $Z_j=Z_j(m)$ are
independent Poisson random variables with $\mathbf{E}\, Z_j = \alpha^{j-\gamma(m)}$.
\end{lemma}
We need to check that the random variables
$X_j(m)=|\{i|1\leq i \leq m, Y_i=j\}|$, where $Y_1,\ldots,Y_m$ are
as in Theorem~\ref{th:maxtrees}, satisfy the conditions of the lemma.
\begin{lemma}\label{lm:trees}
Let $Y_1,\ldots,Y_m$ be i.i.d. discrete random variables such that
$P(Y_i=k) = \beta_k$, defined in (\ref{beta-k}),
and let $X_1,\ldots,X_m$
be random variables such that $X_i$ counts occurrences of $i$
in $Y_1\ldots Y_m$.
Then for any fixed $r$ and sequences $k_i(m)$ such that
$|k_i(m)-\gamma(m)|=O(1)$ the following
equation holds
$$
\mathbf{E}\,([X_{k_1(m)}]_{l_1}[X_{k_2(m)}]_{l_2}\ldots[X_{k_r(m)}]_{l_r})
\sim \prod_{j=1}^{r}3^{-(k_j(m)-\gamma(m))l_j},
$$
where $\gamma=\gamma(m)$ is such that
$m\gamma^{-3/2}3^{-\gamma}/(\sqrt{\pi}\sigma)=1$.
\end{lemma}
\begin{proof}
The random variable $[X_i]_l$ counts $l$-tuples of different
elements of $Y_1,\ldots,Y_m$ taking value~$i$. Similarly, we can express $[X_{k_1}]_{l_1}[X_{k_2}]_{l_2}\ldots[X_{k_r}]_{l_r}$
as the sum of indicator functions $\mathbf{1}_q$, where $q=\langle t_1,\ldots,t_r \rangle$, $t_i=\langle s_{i1},\ldots,s_{il_i}\rangle$,
$1\le i \le r$, $1\leq s_{ij} \leq m$
and $s_{ij_1} \neq s_{ij_2}$ for $j_1\neq j_2$, and
$\mathbf{1}_q=1$ if $Y_{s_{ij}} = k_i(m)$ for all $i,j$, and 0 otherwise.
We have
$$
P(\mathbf{1}_q=1)=v(q)\prod_{i=1}^r (p_{k_i(m)})^{l_i}
$$
where we write $p_k=\beta_k$ for short, and $v(q)$ equals 0 if there exist $i_1$, $i_2$, $j_1$ and $j_2$
such that $s_{i_1j_1}=s_{i_2j_2}$ and $k_{i_1}(m) \neq k_{i_2}(m)$,
and~1 otherwise.
Hence
$$
\mathbf{E}\,([X_{k_1(m)}]_{l_1}[X_{k_2(m)}]_{l_2}\ldots[X_{k_r(m)}]_{l_r})
=\sum_q\mathbf{E}\,(\mathbf{1}_q)=\prod_{i=1}^r (p_{k_i(m)})^{l_i}\sum_qv(q).
$$
The sum $\sum_qv(q)$ is bounded from below by $[m]_{l_1+ \cdots +l_r}$,
which are the tuples in which all the $s$ are different. It is also
bounded from above by $[m]_{l_1}\cdots[m]_{l_r}$, which is when all the $k_i(m)$ are equal. Both expressions are asymptotically
equivalent, so that $\sum_qv(q)\sim m^{l_1+\cdots+l_r}$, and
$$
\mathbf{E}\,([X_{k_1(m)}]_{l_1}[X_{k_2(m)}]_{l_2}\ldots[X_{k_r(m)}]_{l_r})
\sim\prod_{i=1}^r (m\cdot p_{k_i(m)})^{l_i}.
$$
In order to obtain the claimed result, first note that
$p_k\sim k^{-3/2}3^{-k}/(\sqrt{\pi}\sigma)$. This can be easily
checked using Stirling's approximation. Since
$m\gamma^{-3/2}3^{-\gamma}/(\sqrt{\pi}\sigma)=1$ by assumption,
we have $m\cdot p_{k_i(m)}\sim (k_i(m)/\gamma(m))^{-3/2}3^{-(k_i(m)-\gamma(m))}$.
Moreover, we assume that $|k_i(m)-\gamma(m)|=O(1)$, so
$k_i(m)/\gamma(m)\sim 1$. This finally implies
$$
\mathbf{E}\,([X_{k_1(m)}]_{l_1}[X_{k_2(m)}]_{l_2}\ldots[X_{k_r(m)}]_{l_r})
\sim \prod_{j=1}^{r}3^{-(k_j(m)-\gamma(m))l_j}.
$$
\end{proof}
\begin{proof}[Proof of Theorem~\ref{th:maxtrees}.]
The sequence
$X_1,\ldots,X_m$, where $X_i$ counts occurrences
of $i$ in $Y_1,\ldots,Y_m$, satisfies the conditions of
Lemma~\ref{lm:gao}: condition $(i)$ is easy to check, and
condition $(ii)$ follows from Lemma~\ref{lm:trees}.
Condition $(iii)$ is proven as follows:
$$
P(X_{k(m)}>0) = P\left(\bigvee_{i=1}^{m}\{Y_i=k(m)\}\right)\leq
\sum_{i=1}^mP(Y_i=k(m)) =
$$
$$
=m p_{k(m)}\sim
\left({k(m)\over \gamma(m)}\right)^{-3/2}3^{-(k(m)-\gamma(m))}=
O\left(3^{-(k(m)-\gamma(m))}\right).
$$
Therefore, by Lemma~\ref{lm:gao} there exists a function
$\omega=\omega(m)\rightarrow\infty$ so that for
$k=\lfloor\gamma-\omega\rfloor$, the total variation distance
between the distribution of $(X_k,X_{k+1},\ldots)$, and that of
$(Z_k,Z_{k+1},\ldots)$ tends to 0, where the
$Z_j=Z_j(m)$ are independent Poisson random variables with
$\mathbf{E}\, Z_j=3^{-(j-\gamma(m))}$. Now, assume that
$k=x+\gamma(m)$ is an integer. Then:
$$
P(\max\{Y_1,\ldots,Y_m\}<k) =
P\left(\bigwedge_{j\geq k}\{X_j(m)=0\}\right)\sim
P\left(\bigwedge_{j\geq k}\{Z_j(m)=0\}\right)=
$$
$$
=\prod_{j\geq k}\exp\left(-3^{-(j-\gamma(m))}\right)
=\exp\left(\sum_{j\geq k}-3^{-(j-\gamma(m))}\right)
=\exp\left(-3^{1-(k-\gamma(m))}/2\right),
$$
and since $k-\gamma(m)=x$ this concludes the proof.
\end{proof}
Note that if we modify the constant $C$ the result is the same,
since the solution of the equation $Cm\gamma^{-3/2}3^{-\gamma}=1$
is $\gamma = \log_3 m - \frac{3}{2}\log_3\log m +O(1)$, regardless of
the value of $C$. Hence if now $m = cn$ we get the same expression.
The size of the core of a random map is between
$((1-\sigma)/(1+\sigma)-\epsilon)n$ and $n$ for every $\epsilon>0$ with high probability. We can finally state our main result on this parameter.
\begin{theorem}\label{th:finalmaxtrees}
Let $L_n$ be the size of the largest tree attached to the core of a random map with $n$ edges.
Let $\gamma(n)$ be such that
$n\gamma^{-3/2}3^{-\gamma}(6+2\sqrt{6})/\sqrt{\pi}=1$.
Then
$$
P(L_n<x+\gamma(n))\sim
\exp\left(-3^{1-x}/2\right)
$$
uniformly for $|x|$ bounded with $x+\gamma(n)$ an integer.
\qed
\end{theorem}
\medskip
Our last result in this section deals with the distribution of the degree of the root vertex in 2-maps and 3-maps.
We let $M(z,u)$ be the GF of maps, where $z$ marks edges and $u$ marks the degree of the root vertex. Similarly, $H(z,u)$ is the GF for 2-maps, and $T(z,u) = 1/(1-uz(T(z)+1))-1$ for trees, where again $u$ marks the degree of the root. Then we have
$$
M(z,u) = H\left(T(z), {u (T(z,u)+1) \over T(z)+1}\right) (T(z,u)+1)
+H(T(z)){T(z,u)\over 1-T(z)}+ T(z,u).
$$
The first term corresponds to the case where the root belongs to the core:
we replace each edge with a tree, and each edge incident
to the root vertex is replaced with a possibly empty tree, where
$u$ marks the degree of the root. The term $T(z)+1$ in the denominator
ensures that an edge is not replaced twice with a tree. The factor
$T(z,u)+1$ allows to place a possibly empty tree in the root corner.
The second term corresponds to the case where the root belongs to
a tree attached to the core: the denominator $1-T(z)$
encodes a sequence of trees going from the core to the root edge.
The last term corresponds to the case where the core is empty, and therefore
the map is a tree.
If we change variables $x=T(z)$ and $w = u(T(z,u)+1)/(T(z)+1)$, the inverse is
$$
z ={x \over (1+x)^2}, \qquad u={w(1+x) \over 1+wx}.
$$
The former equation becomes
\begin{equation}\label{eq:2mapsdeg}
H(x,w) = \displaystyle{M\left(\displaystyle{x \over (1+x)^2}, \displaystyle{w(1+x) \over 1+wx}\right)
\over 1+wx} -
{wx \over 1+x} M\left({x\over (1+x)^2}\right) +{1\over 1+wx}+{wx^2\over 1-x} - 1.
\end{equation}
The first terms are
$$
H(x,w)=
{ w}^{2}x+ \left( {w}^{2}+2{w}^{4} \right) {x}^{2}+ \left(
3{w}^{2}+4{w}^{3}+ 4{w}^{4}+5{w}^{6} \right) {x}^{3}+
\cdots $$
The relationship between $H(z,u)$ and $K(z,u)$ is simpler:
$$
H(z,u) = K\left({z \over 1-z},u\right) +
K\left({z \over 1-z}\right){zu^2 \over 1-z} + {zu^2 \over 1-z}.
$$
Inverting gives
\begin{equation}\label{eq:3mapsdeg}
K(x,u) = H\left({x \over 1+x},u\right)
-{xu^2\over 1+x}H\left({x\over 1+x}\right) - {xu^2\over 1+x},
\end{equation}
and the first terms are
$$
K(z,u)= 2u^4z^2+(4u^3+5u^6)z^3+(9u^3+9u^4+15u^5+14u^8)z^4 + \cdots
$$
In order to analyze $H(z,u)$ and $K(z,u)$ we need the expansion of $M(z,u)$ near the singularity $\rho=1/12.$
As we have seen, the expansion of $M(z)$ near $z=1/12$ is
$$
M(z)= {1 \over 3} -{4 \over 3}Z^2 + {8 \over 3}Z^3 + O(Z^4),
$$where $Z=\sqrt{1-12z}$. Since $M(z,u)$ satisfies (\ref{eq:maps}) we obtain
\begin{equation}\label{eq:singMapsdeg}
M(z,u) = M_0(u) + M_2(u)Z^2 + M_3(u)Z^3 + O(Z^4).
\end{equation}
A simple computation by indeterminate coefficients gives
$$
M_3(u) = {8u \over \sqrt{3(2+u)(6-5u)^3}}.
$$
The limiting probability that a random map has a root vertex (or face) of
degree $k$ is equal to
$$
p_M(k)= {[u^k][z^n] M(z,u) \over [z^n]M(z)}.
$$
Both coefficients can be estimated using transfer theorems and we get that the probability
generating function of the distribution is given by
\begin{equation}\label{eq:distMaps}
p_M(u) = \sum p_M(k) u^k = {M_3(u) \over M_3(1)} = {\frac {u\sqrt
{3}}{\sqrt { \left( 2+u \right) \left( 6-5\,u \right) ^{3}}}}
\end{equation}
Our goal is to obtain analogous results for 2-maps and 3-maps.
\begin{theorem}\label{degree-maps}
Let $p_M(u)$ be as before, and let $p_H(u)$ and $p_K(u)$ be the probability generating functions for the distribution of the root degree in 2-maps and 3-maps, respectively.
Then we have
$$
p_H(u) = {p_M\left( \displaystyle{u(1+\sigma)\over 1+u\sigma}\right)
\displaystyle{1+\sigma \over 1+u\sigma} -u\sigma \over 1- \sigma},
$$
$$
p_K(u) = {p_H(u)-u^2\sigma\over 1-\sigma},
$$
where $\sigma = 5-2\sqrt{6}$, as in Theorem~\ref{th:maps}.
Furthermore, the limiting probabilities that the degree of the root vertex is equal to $k$ exist, both for 2-maps and 3-maps, and are asymptotically
$$
\renewcommand*{\arraystretch}{1.5}
\begin{array}{l}
p_H(k) \sim \nu_2 k^{1/2} w_H^k, \qquad \\
p_K(k) \sim \nu_3 k^{1/2} w_K^k,
\end{array}
$$
where $w_H = w_K = \sqrt{2/3} \approx 0.8165$,
$\nu_2 = \sqrt{3(1-\sigma)/ (64\pi)}
\approx 0.1158$,
$\nu_3 = \sqrt{3/ (64\pi(1-\sigma))} \approx 0.1288$.
\end{theorem}
The correction terms $u\sigma$ in $p_H(u)$ and $u^2\sigma$ in $p_K(u)$
are due to the fact, respectively, that 2-maps have no vertices of degree one and 3-maps no vertices of degree two.
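Before giving the proof, we note that the stated formulas can be sanity-checked symbolically: $p_H(1)=p_K(1)=1$, the coefficient of $u$ in $p_H(u)$ vanishes, and the coefficient of $u^2$ in $p_K(u)$ vanishes, in accordance with the remark above. For instance, with SymPy (an illustration only):
\begin{verbatim}
import sympy as sp

u = sp.symbols('u')
sigma = 5 - 2*sp.sqrt(6)
pM = u*sp.sqrt(3)/sp.sqrt((2 + u)*(6 - 5*u)**3)
pH = (pM.subs(u, u*(1 + sigma)/(1 + u*sigma))*(1 + sigma)/(1 + u*sigma)
      - u*sigma)/(1 - sigma)
pK = (pH - u**2*sigma)/(1 - sigma)

print(sp.simplify(pH.subs(u, 1)), sp.simplify(pK.subs(u, 1)))  # 1 1
serH = sp.series(pH, u, 0, 3).removeO()
serK = sp.series(pK, u, 0, 3).removeO()
print(sp.simplify(serH.coeff(u, 1)))   # 0: no vertices of degree one in 2-maps
print(sp.simplify(serK.coeff(u, 2)))   # 0: no vertices of degree two in 3-maps
\end{verbatim}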
\begin{proof}
Since $M(z,u)$ satisfies (\ref{eq:singMapsdeg}) and $H(x,w)$ satisfies
(\ref{eq:2mapsdeg}), we obtain
$$
H(z,u) = H_0(u)+H_2(u)Z^2+H_3(u)Z^3+O(Z^4),
$$
where $Z = \sqrt{1-z/\sigma}$, and $H_3(u)$ can be computed as
$$
H_3(u) = \left({1-\sigma\over 1+\sigma}\right)^{3/2}
\left({M_3\left(u(1+\sigma)/(1+u\sigma)\right) \over 1+u\sigma}
-{M_3(1)u\sigma\over 1+\sigma}\right).
$$
The probability generating function of the distribution is given by
\begin{equation}\label{eq:prob2Maps}
p_H(u) = {H_3(u)\over H_3(1)} =
{p_M\left( \displaystyle{u(1+\sigma)\over 1+u\sigma}\right)
\displaystyle{1+\sigma \over 1+u\sigma} -u\sigma \over 1- \sigma},
\end{equation}
as claimed in the statement.
Now by (\ref{eq:3mapsdeg}), $K(z,u)$ satisfies
$$
K(z,u) = K_0(u) + K_2(u) Z^2 + K_3(u) Z^3 + O(Z^4),
$$
where now $Z = \sqrt{1-z/\tau}$ and $K_3(u)$ is
$$
K_3(u) = \left({1\over 1+\tau}\right)^{3/2}
\left(H_3(u)-H_3(1){\sigma u^2}\right).
$$
The probability generating function of the distribution is given by
\begin{equation}\label{eq:prob3Maps}
p_K(u) = {K_3(u)\over K_3(1)} = {p_H(u)-u^2\sigma\over 1-\sigma}.
\end{equation}
The asymptotics of the distributions can be obtained from that of $p_M(u)$.
The singularity of $p_M(u)$ is at $u_M=6/5$, and its
expansion is computed from the explicit formula
in (\ref{eq:distMaps}) as
\begin{equation}\label{eq:mapsexpansion}
p_M(u) = P_{-3}U^{-3} + O(U^{-2}),
\end{equation}
where $U = \sqrt{1-5u/6}$ and $P_{-3} = 1/(4\sqrt{10})$.
The singularity of $p_H$ and $p_K$ is obtained by solving
the equation
$$
\displaystyle{u(1+\sigma)\over 1+u\sigma} = u_M = {6\over 5},
$$
giving $u_H = u_K = \sqrt{3/2}$. Hence, the
exponential growth constants are $w_H = w_K = \sqrt{2/3}$. The singular
expansion of $p_H(u)$ is obtained by composing
(\ref{eq:prob2Maps}) and (\ref{eq:mapsexpansion}), giving as a result
\begin{equation}\label{eq:2mapsexpansion}
p_H(u) = Q_{-3}U^{-3} + O(U^{-2}),
\end{equation}
where now $U = \sqrt{1-u\sqrt{2/3}}$, and
$Q_{-3} = P_{-3}\sqrt{15(1-\sigma)/8} = \sqrt{3(1-\sigma)}/16$.
The singular expansion of $p_K(u)$ is obtained by composing
(\ref{eq:prob3Maps}) and (\ref{eq:2mapsexpansion})
giving as a result
\begin{equation}\label{eq:3mapsexpansion}
p_K(u) = R_{-3}U^{-3} + O(U^{-2}),
\end{equation}
where $U$ is as before and
$R_{-3} = Q_{-3}/(1-\sigma) = \sqrt{3/(1-\sigma)}/16$.
The estimates for $p_H(k)$ and $p_K(k)$ follow directly
by the transfer theorem.
\end{proof}
\section{Equations for 2-graphs and 3-graphs}\label{sec:equations}
In this section we find expressions for the generating functions of 2- and 3-graphs in terms of the generating function of connected graphs. The results are completely general and specialize to the generating functions of planar graphs, since a graph is planar if and only if its core is planar, and in turn the core is planar if and only if its kernel is planar.
Let $C(x,y)$ be the generating function of connected graphs, where $x$ marks vertices and $y$ marks edges. Denote by $H(x,y)$ and $K(x,y)$ the generating functions, respectively, of 2-graphs and 3-graphs.
We will find equations of the form
$$
\begin{array}{ll}
H(x,y) &= C(A_1(x,y),B_1(x,y))+E_1(x,y) \\
K(x,y) &= C(A_2(x,y),B_2(x,y))+E_2(x,y),
\end{array}
$$ where
$A_i$, $B_i$ and $E_i$ are explicit functions.
From now on all graphs are labelled,
and all generating functions are of the exponential type.
\paragraph{2-graphs.}\label{subsec:cores}
Let $\mathcal{G}$ be a connected graph. The core
$\mathcal{C}$ of $\mathcal{G}$
is obtained by removing repeatedly vertices of degree one, so that
$\mathcal{G}$ is obtained from $\mathcal{C}$
by replacing each vertex of $\mathcal{C}$ with a rooted
tree. The number $T_n$ of rooted trees
with $n$ vertices is known to be $n^{n-1}$, and the
generating function $T(x) = \sum T_nx^n/n!$ satisfies
$$
T(x) = xe^{T(x)}.
$$
The core of $\mathcal{G}$ can be empty, in which case
$\mathcal{G}$ must be an (unrooted) tree. The number $U_n$ of
unrooted trees with $n$ vertices is known to be $n^{n-2}$, and
the generating function $U(x) = \sum U_n x^n/n!$
is equal to
$$U(x) = T(x) - \frac{T(x)^2}{2}.
$$
\begin{theorem}\label{th:cores}
Let $h_n$ be the number of 2-graphs with $n$ vertices.
Then $H(x) = \sum h_n x^n/n!$ is given by
\begin{equation}\label{eq:cores}
H(x) = C(xe^{-x})-x+\frac{x^2}{2}.
\end{equation}
\end{theorem}
\begin{proof}
The decomposition of a graph into its core and the attached rooted trees implies the following equation:
\begin{equation}\label{eq:corestoconnected}
C(z) = H(T(z)) + U(z).
\end{equation}
The first summand corresponds to the case where the core is non-empty, and
the second summand corresponds to the case where the graph is a tree.
In order to invert the former relation let $x=T(z)$, so that
$$
z=xe^{-x}, \qquad U(z) = x-\frac{x^2}{2}.
$$
We obtain
$$
H(x) = C(xe^{-x})-x+\frac{x^2}{2} =
\frac{x^3}{3!}+10\frac{x^4}{4!}+252\frac{x^5}{5!}+\ldots
$$
\end{proof}
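As a quick numerical illustration of (\ref{eq:cores}) in the planar case (not part of the proof), one can plug in the first counts of labelled connected planar graphs, $1,1,4,38,727$ for $n=1,\dots,5$ (every graph on at most four vertices is planar, and on five vertices only $K_5$ is not), and recover the coefficients $1$, $10$ and $252$ displayed above, for instance with SymPy.
\begin{verbatim}
import sympy as sp

x = sp.symbols('x')
c = {1: 1, 2: 1, 3: 4, 4: 38, 5: 727}   # labelled connected planar graphs
C = sum(sp.Integer(cn)/sp.factorial(n)*x**n for n, cn in c.items())

H = C.subs(x, x*sp.exp(-x)) - x + x**2/2           # equation (eq:cores)
H = sp.expand(sp.series(H, x, 0, 6).removeO())
print([sp.factorial(n)*H.coeff(x, n) for n in range(3, 6)])   # [1, 10, 252]
\end{verbatim}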
Equation~(\ref{eq:cores}) can be extended by taking edges into account.
The generating functions $T(x,y)$ and $U(x,y)$ are easily
obtained as $T(x,y) = T(xy)/y$
and $U(x,y) = U(xy)/y$, and a quick computation gives
\begin{equation}\label{eq:coresedges}
H(x,y) = C(xe^{-xy},y)-x+\frac{x^2y}{2} =
y^3\frac{x^3}{3!}
+(3y^4+6y^5+y^6)\frac{x^4}{4!}+\ldots
\end{equation}
\paragraph{3-graphs.}\label{subsec:3-graphs}
A multigraph is a graph where loops and
multiple edges are allowed. As in the case of simple
graphs, we define a $k$-multigraph as a connected multigraph in which the degree of each vertex is at least $k$. Let $\mathcal{\widetilde C}$ be a 2-multigraph.
The kernel $\mathcal{\widetilde K}$ of $\mathcal{\widetilde C}$
is defined as follows: replace every maximal path of vertices
of degree two in $\mathcal{\widetilde C}$ with a single
edge. Clearly $\mathcal{\widetilde K}$ is a 3-multigraph,
and $\mathcal{\widetilde C}$ can be obtained by replacing
edges in $\mathcal{\widetilde K}$ with paths.
Let $\mathcal{\widetilde G}$ be a multigraph.
For each $i\geq 1$, let $\alpha_i$ be the number of vertices in
$\mathcal{\widetilde G}$ which are incident to exactly $i$ loops,
and let $\beta_i$ be the number of $i$-edges, that is,
edges of multiplicity $i$.
The weight of $\mathcal{\widetilde G}$ is defined as
$$w(\mathcal{\widetilde G}) = \prod_{i\geq 1} \left(\frac{1}{2^{i}i!}\right)^{\alpha_i}\cdot
\prod_{i\geq 1} \left(\frac{1}{i!}\right)^{\beta_i}.
$$
This definition is justified by the fact that when replacing
an $i$-edge with $i$ different paths, the order of the paths is irrelevant.
Similarly, when replacing a loop with a path, the orientation is irrelevant.
Note that the weight
satisfies $0<w(\mathcal{\widetilde G})\leq 1$, and
moreover $w(\mathcal{\widetilde G}) = 1 $ if and only if
$\mathcal{\widetilde G}$ is simple.
With this definition, the sum $\widetilde K_n$ of the weights of all 3-multigraphs with $n$ vertices is finite.
As a preliminary step to computing the generating function of
3-graphs, we establish a relation between
3-multigraphs and connected multigraphs.
In order to distinguish between edges of different multiplicity, we introduce infinitely many variables as follows.
Let $\widetilde C_{n,m,l_1,l_2,\ldots}$ be the sum of the
weights of connected multigraphs with $n$ vertices, $m$ loops
and $l_i$ $i$-edges for each $i\ge1$. Define similarly $\widetilde K_{n,m,l_1,l_2,\ldots}$
for 3-multigraphs, and let
$$\widetilde C(x,z,y_1,y_2,\ldots) = \sum
\widetilde C_{n,m,l_1,l_2,\ldots} x^nz^my_1^{l_1}y_2^{l_2}\ldots/n!$$
and
$$\widetilde K(x,z,y_1,y_2,\ldots) = \sum
\widetilde K_{n,m,l_1,l_2,\ldots} x^nz^my_1^{l_1}y_2^{l_2}\ldots/n!.
$$
\begin{theorem}\label{th:kernel}
Let $\widetilde C(x,z,y_1,y_2,\ldots)$ and
$\widetilde K(x,z,y_1,y_2,\ldots)$ be as before. Then
\begin{equation}\label{eq:multikernels}
\begin{array}{ll}
\widetilde K (x,z,y_1,y_2,\ldots) = \\
\widetilde C\left(xe^{-x(y_1+s)},-sxy_1-xy_2+z,
y_1+s,\ y_2+2y_1s+s^2,\ldots,
\sum_{j=0}^k \binom{k}{j}y_js^{k-j},\ldots\right)
+E(x,y_1),
\end{array}
\end{equation}
where
$$y_0 = 1, \quad
s = -\frac{xy_1^2}{1+xy_1},\quad
E(x,y) = -x+\frac{x^2y}{2+2xy}-\ln \sqrt{1+xy}+\frac{xy}{2}-\frac{(xy)^2}{4}.
$$
\end{theorem}
The proof of Theorem~\ref{th:kernel} is quite technical and is given below.
As a corollary we obtain the generating function of 3-graphs.
Recall that $C(x,y)$ is the generating function of connected graphs.
\begin{corollary}\label{cor:kernel}
Let $K_{n,m}$ be the number of 3-graphs with
$n$ vertices and $m$ edges. The
generating function $K(x,y) = \sum K_{n,m}x^ny^m/n!$
is given by
\begin{equation}\label{eq:simplekernel}
K(x,y) = C\left(A(x,y),B(x,y) \right)+E(x,y),
\end{equation}
where
$$A(x,y) = xe^{(x^2y^3-2xy)/(2+2xy)}, \qquad
B(x,y) = (y+1)e^{-xy^2/(1+xy)}-1, $$
and $E(x,y)$ is as in Theorem~\ref{th:kernel}.
\end{corollary}
\begin{proof}
Since the weight of a simple graph is one,
the number of simple 3-graphs equals the total weight
of the 3-multigraphs without loops or multiple edges. This
observation leads to
\begin{equation}\label{eq:kernels}
K(x,y) = \widetilde K (x,0,y,0,\ldots,0,\ldots).
\end{equation}
Moreover, for each connected multigraph $\mathcal{\widetilde G}$,
a connected simple graph $\mathcal{G}$ can be obtained
by removing loops and replacing each multiple edge with a single edge.
Then $\mathcal{\widetilde G}$ is obtained from $\mathcal{G}$
by replacing each edge with a multiple edge, and attaching zero or
more loops at each vertex. This can be encoded as
\begin{equation}\label{eq:multicon}
\widetilde{C}(x,z,y_1,y_2,\ldots,y_k,\ldots) =
C\left( xe^{z/2},\ \sum_{i\geq 1}\frac{y_{i}}{i!} \right),
\end{equation}
where the exponential and the $1/i!$ terms take care
of the weights.
Finally, Equation~(\ref{eq:simplekernel}) can be obtained
by combining (\ref{eq:kernels}),~(\ref{eq:multikernels})
and~(\ref{eq:multicon}).
\end{proof}
We remark that a formula equivalent to (\ref{eq:simplekernel})
was obtained by Jackson and Reilly \cite{jackson}, using the principle of inclusion and exclusion. Our approach emphasizes the assignment of weights to multigraphs, which are needed in the various combinatorial decompositions.
Note that taking $y=1$ in Equation~(\ref{eq:simplekernel})
we obtain the univariate generating function $K(x)$ of
3-graphs as
\begin{equation}\label{eq:simpleunikernel}
K(x) = K(x,1) = C(A(x,1),B(x,1))+E(x,1)
\end{equation}
The proof of Theorem~\ref{th:kernel} requires the generating
function of 2-multigraphs.
Let $\widetilde H_{n,m,l_1,l_2,\ldots}$ be the sum of the
weights of 2-multigraphs with $n$ vertices, $m$ loops
and $l_i$ $i$-edges ($i\ge1$), and let
$$\widetilde H(x,z,y_1,y_2,\ldots) = \sum
\widetilde H_{n,m,l_1,l_2,\ldots} x^nz^my_1^{l_1}y_2^{l_2}\ldots/n!.
$$
\begin{lemma}
Let $\widetilde H(x,z,y_1,y_2,\ldots)$ and
$\widetilde K(x,z,y_1,y_2,\ldots)$
be as before. The following equation holds:
\begin{equation}\label{eq:multikernelcores}
\begin{split}
\widetilde{K}(x,z,y_1,y_2,\ldots,y_k,\ldots) =&
\widetilde{H}\left(x,-sxy_1-xy_2+z,y_1+s,y_2+2y_1s+s^2,\ldots,
\sum_{j=0}^{k}\binom{k}{j}y_js^{k-j},\ldots\right)
\\&
-\ln \sqrt{1+xy_1}-\frac{xz}{2}+\frac{x^2y_2}{4}+
\frac{xy_1}{2}-\frac{(xy_1)^2}{4},
\end{split}
\end{equation}
where
$$s = -\frac{xy_1^2}{1+xy_1}.$$
\end{lemma}
\begin{proof}
A 2-multigraph is obtained from its kernel by replacing
each edge with a path. This implies the following equation:
\begin{equation}\label{eq:multicoreskernel}
\begin{split}
\widetilde{H}(x,z,y_1,y_2,\ldots,y_k,\ldots) =&
\widetilde{K}\left(x,sxy_1+xy_2+z,y_1+s,y_2+2y_1s+s^2,\ldots,
\sum_{j=0}^{k}\binom{k}{j}y_js^{k-j},\ldots\right)
\\&
-\ln \sqrt{1-xy_1}+\frac{xz}{2}+\frac{x^2y_2}{4}-
\frac{xy_1}{2}-\frac{(xy_1)^2}{4},
\end{split}
\end{equation}
where
$$s = \frac{xy_1^2}{1-xy_1}.$$
The first summand corresponds to the case where there is
at least one vertex of degree $\ge3$, and thus the kernel
is not empty. The other summands correspond to the case where every vertex has degree exactly two, that is, the 2-multigraph is a cycle: the logarithm encodes cycles, and the remaining terms
correct it for cycles of length one and two.
If the kernel is not empty, we replace every edge and every loop with
a path. The expression $s$ encodes a nontrivial
path, consisting of at least one vertex.
Each loop can be replaced with either another loop, or a vertex and a double edge, or a path consisting of at least two vertices; these operations are
encoded, respectively, by $z$, $xy_2$ and $s$. Note that if a vertex of the kernel is incident to $i$ loops, then we can replace any of these loops with a path, traversed in either
orientation. Therefore there are $2i$ ways to obtain the
same graph, which compensates for the fact that the weight
of the new graph is $2i$ times the weight of the old graph.
%
Each $k$-edge can be replaced with a $j$-edge and $k-j$ nontrivial paths, where $0\leq j \leq k$. There are $(k-j)!$ ways to obtain the same graph, and the weight becomes ${k!}/{j!}$ times the previous weight. Therefore
$y_k$ is replaced with $\binom{k}{j}y_js^{k-j}$, for $j=0,\dots,k$.
A simple computation shows that inverting (\ref{eq:multicoreskernel})
gives (\ref{eq:multikernelcores}), as claimed.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{th:kernel}]
Given a multigraph it is clear
that every vertex incident to a loop or to a multiple edge
belongs to the core. Therefore,
Equation~(\ref{eq:coresedges}) can be easily extended
to multigraphs, giving the equation
\begin{equation}\label{eq:multicores}
\widetilde{H}(x,z,y_1,y_2,\ldots,y_k,\ldots) =
\widetilde C \left( x{e^{-xy_1}},z,y_1,y_2,\ldots,y_k,\ldots \right)-x
+\frac{{x}^{2}y_1}{2}.
\end{equation}
Finally, Equation~(\ref{eq:multikernels}) is obtained by
composing (\ref{eq:multikernelcores}) and
(\ref{eq:multicores}).
\end{proof}
As mentioned before, Theorem \ref{th:cores} and Corollary \ref{cor:kernel}
hold for planar graphs as well. In the next section we use them to enumerate and analyze planar 2- and 3-graphs.
\section{Planar graphs}\label{sec:graphs}
In this section we follow the ideas of Section~\ref{sec:maps} on planar maps in order to obtain related results for planar 2-graphs and 3-graphs.
The asymptotic enumeration of planar graphs was solved in~\cite{gn}.
From now on we assume that we know the generating function
$C(x,y)$ of connected planar graphs, where
$x$ marks vertices and $y$ marks edges, as well as
its main properties, such as the dominant singularities and the singular expansions around them.
In this section we use the equations obtained in
Section~\ref{sec:equations} to compute some parameters
in planar graphs. Most of the computations will be analogous
to the ones of maps, but technically more involved.
In order to compare the following results, we recall \cite{gn}
that the number of connected planar graphs is
$c_n\sim \kappa n^{-7/2}\gamma^n n!$, where $\kappa\approx 0.4104\cdot 10^{-5}$
and $\gamma \approx 27.2269$. As expected, there are slightly
fewer 2-graphs and 3-graphs than connected planar graphs.
Moreover, the expected degree of 2-graphs and 3-graphs is slightly
higher.
\subsection{Planar 2-graphs}
We start our analysis with planar 2-graphs. The analysis for 3-graphs in the next subsection is a bit more involved.
\begin{theorem}\label{th:2graphs}
Let $h_n$ be the number of planar 2-graphs with $n$ vertices.
The following estimate holds:
$$
h_n \sim \kappa_2 n^{-7/2} \gamma_2^nn!,
$$
\noindent where $\gamma_2\approx 26.2076$ and $\kappa_2 \approx 0.3724\cdot 10^{-5}.$
\end{theorem}
\begin{proof}
Recall Equation~(\ref{eq:cores}) from Section~\ref{sec:equations}:
$$H(x) = C(xe^{-x})-x+\frac{x^2}{2}.$$
In order to obtain an asymptotic estimate for $h_n$ we need
to locate the dominant singularity of $H(x)$.
The singularity of $C(x)$ is $\rho = \gamma^{-1} \approx 0.0367$~\cite{gn}.
Hence the singularity of $H(x)$ is at
$\sigma = T(\rho) \approx 0.0382$. Therefore, the exponential growth constant
of $h_n$ is $\gamma_2 = \sigma^{-1} \approx 26.2076$.
Note that we use the same symbol $\sigma$ as in Section~\ref{sec:maps} for maps, but they correspond to different constants. No confusion should arise and it helps emphasizing the parallelism between planar maps and graphs.
The singular expansion of $C(x)$ at the singularity $x=\rho$ is
$$
C(x) = C_0+C_2X^2+C_4X^4+C_5X^5+O(X^6),
$$
where $X =\sqrt{1-x/\rho}$, and $C_5 \approx -0.3880\cdot 10^{-5}$
is computed in~\cite{gn}.
Plugging this expression into (\ref{eq:cores}) and expanding gives
$$
H(x) = H_0+H_2X^2+H_4X^4+H_5X^5+O(X^6),
$$
where now $X=\sqrt{1-x/\sigma}$ and
$H_5 = C_5 (1-\sigma)^{5/2}\approx -0.3520\cdot 10^{-5}$.
The estimate for $h_n$ follows directly by the transfer theorem, with $\kappa_2 = H_5/\Gamma(-5/2)$.
\end{proof}
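For concreteness, the constants of Theorem~\ref{th:2graphs} can be reproduced numerically from the values $\rho\approx 1/27.2269$ and $C_5\approx -0.3880\cdot 10^{-5}$ quoted above; the following Python sketch (illustrative only) solves $\sigma e^{-\sigma}=\rho$ by bisection and uses $\Gamma(-5/2)=-8\sqrt{\pi}/15$.
\begin{verbatim}
from math import exp, sqrt, pi

rho = 1 / 27.2269                 # singularity of C(x), from [gn]
C5  = -0.3880e-5                  # singular coefficient of C(x), from [gn]

lo, hi = 0.0, 0.5                 # solve sigma*exp(-sigma) = rho by bisection
for _ in range(200):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if mid * exp(-mid) < rho else (lo, mid)
sigma = (lo + hi) / 2

gamma2 = 1 / sigma                        # ~26.2076
H5 = C5 * (1 - sigma) ** 2.5              # ~-0.3520e-5
kappa2 = H5 / (-8 * sqrt(pi) / 15)        # H5 / Gamma(-5/2) ~ 0.3724e-5
print(sigma, gamma2, kappa2)
\end{verbatim}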
Our next result is a limit law for the number of edges in a random
planar 2-graph. We recall \cite{gn} that the expected number of edges in random connected planar graphs is asymptotically $\mu n$, where $\mu \approx 2.2133$,
and the variance is $\lambda n$ with $\lambda\approx 0.4303.$
\begin{theorem}
The number $X_n$ of edges in a random planar 2-graph with $n$ vertices
is asymptotically Gaussian with
$$
\mathbf{E}\, X_n \sim \mu_2n
\approx 2.2614n,
$$
$$
\mathbf{Var}\, X_n \sim \lambda_2n
\approx 0.3843n.
$$
\end{theorem}
\begin{proof}
Equation~(\ref{eq:coresedges}) from Section~\ref{sec:equations}
$$H(x,y) = C(xe^{-xy},y)-x+\frac{x^2y}{2}$$
implies that the singularity $\sigma(y)$ of the univariate
function $x\mapsto H(x,y)$ is given by
$$\sigma(y)e^{-\sigma(y)y} = \rho(y),$$
\noindent where $\rho(y)$ is the singularity of the univariate
function $x\mapsto C(x,y)$.
An easy calculation gives
$$\mu_2=-\displaystyle\frac{\sigma'(1)}{\sigma(1)} =
{-\rho'(1)/\rho - \sigma \over 1-\sigma} = {\mu - \sigma \over 1-\sigma} \approx 2.2614,
$$
which provides the constant for the expectation.
Similarly
$$
\renewcommand{\arraystretch}{3}
\begin{array}{ll}
& \lambda_2=-\displaystyle{\sigma''(1) \over \sigma(1)} -{\sigma'(1)\over\sigma(1)} +
\left({\sigma'(1)\over\sigma(1)}\right)^2 = \\
& \displaystyle{\displaystyle{-\rho''(1)\over \rho(1)}-3\sigma'(1)
-{3\sigma'(1)^2\over \sigma}+\sigma'(1)^2+2\sigma'(1)\sigma
+\sigma^2-{\sigma'(1)\over\sigma}+\left({\sigma'(1)\over \sigma}\right)^2 \over 1-\sigma}.
\end{array}
$$
This value can be computed from the known values of $\mu,\lambda$ and $\sigma$.
\end{proof}
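The expectation constant can be checked directly from the quoted values of $\mu$ and $\sigma$ (a quick illustrative computation, not part of the proof):
\begin{verbatim}
mu, sigma = 2.2133, 0.0382
print((mu - sigma) / (1 - sigma))   # ~2.2614 = mu_2
\end{verbatim}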
Next we determine a limit law for the size of the core and
the kernel in random connected planar graphs.
\begin{theorem}
The size $X_n$ of the core of a random connected planar graph
with $n$ vertices is asymptotically Gaussian with
$$
\mathbf{E}\, X_n \sim (1-\sigma)n \approx 0.9618n,
\qquad \mathbf{Var}\, X_n \sim \sigma n \approx 0.0382n.
$$
\end{theorem}
\begin{proof}
The generating function $\widehat C(x,u)$ of connected planar graphs, where $u$ marks the size of the core, is given by
\begin{equation}
\widehat C(x,u) = H(uT(x)) + U(x).
\end{equation}
It follows that the singularity $\xi(u)$ of the univariate
function $x\mapsto \widehat C(x,u)$ is given by the equation
$$uT(\xi(u)) = \sigma.$$
We can isolate $\xi(u)$ obtaining the explicit formula
$$\xi(u) = {\sigma e^{-\sigma / u} \over u}.$$
An easy calculation gives
$$-\frac{\xi'(1)}{\xi(1)} = 1-\sigma,\qquad
-{\xi''(1)\over \xi(1)} -{\xi'(1)\over \xi(1)}
+\left({\xi'(1)\over \xi(1)}\right)^2= \sigma.$$
\end{proof}
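The two derivative computations in the proof can also be verified symbolically (an illustrative sketch, not part of the proof):
\begin{verbatim}
import sympy as sp

u, s = sp.symbols('u sigma', positive=True)
xi = s * sp.exp(-s / u) / u                 # xi(u) = sigma*exp(-sigma/u)/u

r = sp.diff(xi, u).subs(u, 1) / xi.subs(u, 1)
mean = sp.simplify(-r)                      # 1 - sigma
var = sp.simplify(-sp.diff(xi, u, 2).subs(u, 1) / xi.subs(u, 1) - r + r**2)
print(mean, var)                            # 1 - sigma, sigma
\end{verbatim}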
Our next goal is to analyze the size of the trees attached
to the core of a random connected planar graph.
\begin{theorem}
Let $X_{n,k}$ be the number of trees with $k$ vertices attached to the core
of a random connected planar graph with $n$ vertices. Then
$X_{n,k}$ is asymptotically normal and
$$
\mathbf{E}\, X_{n,k} \sim \alpha_k n, \qquad
\mathbf{Var}\, X_{n,k} \sim \beta_k n,
$$
where
$$
\alpha_k = \frac{1-\sigma}{\sigma}\frac{k^{k-1}}{k!}\rho^k,
$$
and $\beta_k$ is described in the proof.
\end{theorem}
\begin{proof}
The generating function of trees where variable $w_k$ marks
trees with $k$ vertices is equal to
$$
T(x,w_k) = T(x) + (w_k-1)T_kx^k,
$$
where $T_k = k^{k-1}/k!$ is the $k$-th coefficient of $T(x)$.
The composition scheme for the core decomposition is then
$$
C(x,w_k) = H(T(x,w_k)) +U(x).
$$
It follows that the singularity $\rho_k(w_k)$ of the
univariate function $x\mapsto C(x,w_k)$ is given
by the equation
$$
T(\rho_k(w_k)) + (w_k-1)T_k(\rho_k(w_k))^k = \sigma.
$$
An easy calculation gives
$$
\alpha_k = -\frac{\rho'_k(1)}{\rho_k(1)} =
\frac{1-\sigma}{\sigma}\frac{k^{k-1}}{k!}\rho^k
$$
$$
\beta_k = -{\rho_k''(1)\over \rho_k(1)}-{\rho_k'(1)\over \rho_k(1)}
+\left({\rho_k'(1)\over\rho_k(1)}\right)^2=
{1\over \sigma^2}\,T_k\rho^k
\left(T_k\rho^k(1-2k+4k\sigma-2k\sigma^2)+\sigma-\sigma^2\right).
$$
\end{proof}
As expected, $\sum_{k\geq 0}\alpha_k = 1-\sigma$,
since the core has $(1-\sigma)n$ vertices and each of them is the root of
exactly one attached tree (possibly reduced to the root itself).
Moreover, $\sum_{k\geq 0}k\alpha_k = 1$, since a connected graph
is the union of the trees attached to its core.
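Both identities can be checked numerically (illustrative sketch only), using $\rho=1/\gamma\approx 0.036729$ and the corresponding $\sigma=T(\rho)\approx 0.038157$; the series converges quickly since $e\rho\approx 0.1$.
\begin{verbatim}
from math import factorial

rho, sigma = 0.036729, 0.038157
alpha = [(1 - sigma) / sigma * k**(k - 1) / factorial(k) * rho**k
         for k in range(1, 60)]
print(sum(alpha), 1 - sigma)                        # both ~0.9618
print(sum(k * a for k, a in enumerate(alpha, 1)))   # ~1.0
\end{verbatim}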
To conclude this section, we consider, as for maps, the parameter $L_n$ equal to the size of the largest tree attached to the core of a random connected planar graph. The asymptotic estimate for $\alpha_k$ is
$$
\alpha_k \sim {1 -\sigma \over \sigma \sqrt{2\pi}} k^{-3/2} (e\rho)^k,
\qquad k\to \infty.
$$
We can perform exactly the same analysis as in Section~\ref{sec:maps} in order to obtain a law for $L_n$.
\begin{theorem}\label{th:finalmaxtreesgraphs}
Let $L_n$ be the size of the largest tree attached to the core of a random connected planar graph. Let $\gamma(n)$ be such that
$n\gamma^{-3/2}(e\rho)^{\gamma}(1-\sigma)/(\sqrt{2\pi}\sigma)=1$.
Then
$$
P(L_n<x+\gamma(n))\sim
\exp\left(-(e\rho)^{-x}/(1-e\rho)\right)
$$
uniformly for $|x|$ bounded with $x+\gamma(n)$ an integer.
\qed
\end{theorem}
\subsection{Planar 3-graphs}
We recall again that the generating function of connected planar graphs $C(x,y)$, where $x$ marks vertices and $y$ marks edges, was computed in~\cite{gn}.
\begin{theorem}\label{th:3graphs}
Let $k_n$ be the number of planar 3-graphs with $n$ vertices.
The following estimate holds:
$$
k_n \sim \kappa_3 n^{-7/2} \gamma_3^{n}n!,
$$
where
$$
\gamma_3 \approx 21.3102, \qquad \kappa_3 \approx 0.3107\cdot 10^{-5}.
$$
\end{theorem}
\begin{proof}
Recall Equation~(\ref{eq:simpleunikernel}) from Section~\ref{sec:equations}:
\begin{equation}\label{eq:kernelrep}
K(x) = C\left(A(x),B(x) \right)+E(x),
\end{equation}
where
$$A(x) = xe^{(x^2-2x)/(2+2x)}, \qquad
B(x) = 2e^{-x/(1+x)}-1, \qquad E(x) = \frac{x^2}{2+2x}-\ln \sqrt{1+x}-\frac{x}{2}-\frac{x^2}{4}.
$$
In order to obtain an estimate for $k_n$ we need
to locate the dominant singularity of $K(x)$.
The singularity curve of $C(x,y)$ is given by
$(X(t),Y(t))$, where $t\in(0,1)$ and $X$, $Y$ are
explicit functions defined in~\cite{gn}.
Hence the singularity $\tau$ of $K(x)$ is obtained by solving the equations
$$
\left\lbrace \begin{array}{l}
X(t) = A(\tau) \\ Y(t) = B(\tau)
\end{array}\right.
$$
The solution $\tau$ of the equation can be computed numerically
and is $\tau \approx 0.0469$. The exponential growth constant
is then $\gamma_3 = \tau^{-1} \approx 21.3102$.
The singular expansion of $C(x,y)$
at the singularity $x=\rho(y)$ is of the form
$$C(x,y) = C_0(y)+C_2(y)X^2+C_4(y)X^4+C_5(y)X^5+O(X^6),$$
where $X = \sqrt{1-x/\rho(y)}$, and $C_5(y)$ is
an expression computed in \cite{gn}. Plugging this expression
into (\ref{eq:kernelrep}) and expanding gives
$$
K(z) = K_0 + K_2Z^2 + K_4Z^4+K_5Z^5+O(Z^6),
$$
where now $Z = \sqrt{1-z/\tau}$.
In order to compute the dominant coefficient $K_5$, we need to expand $C_5(B(z))\left(1-D(z)\right)^{5/2}$, where
$D(z)= A(z)/\rho(B(z))$, near the singularity $z = \tau$. The Taylor expansion of $D(z)$ is
$$
D(z) = D(\tau) + D'(\tau)(z-\tau) + O((z-\tau)^2).
$$
Since $(A(\tau),B(\tau))$ is a singular point, we have
$A(\tau) = \rho(B(\tau))$, and
$D(\tau) = A(\tau)/\rho(B(\tau)) = 1$. Therefore,
$\sqrt{1-D(z)}$ is computed as
$$\sqrt{\tau D'(\tau)(1-z/\tau) + O((z-\tau)^2) } =
\sqrt{\tau D'(\tau)}Z + O(Z^2),
$$
and $(1-D(z))^{5/2} = (\tau D'(\tau))^{5/2}Z^5+O(Z^6)$.
Since $C_5(y)$ is analytic at $y=B(\tau)$, we conclude that
$K_5 = C_5(B(\tau))(\tau D'(\tau))^{5/2}
\approx-0.2937\cdot 10^{-5}$.
The estimate for $k_n$ follows directly by the transfer theorem, with $\kappa_3 = K_5/\Gamma(-5/2) \approx 0.3107\cdot 10^{-5}$.
\end{proof}
Our next result is a limit law for the number of edges
in a random planar 3-graph.
\begin{theorem}
The number $X_n$ of edges in a random planar 3-graph with $n$ vertices
is asymptotically Gaussian with
$$
\mathbf{E}\, X_n \sim \mu_3 n \approx 2.4065n, \qquad
\mathbf{Var}\, X_n \sim \lambda_3 n \approx 0.3126n
$$
\end{theorem}
\begin{proof}
Recall Equation~(\ref{eq:simplekernel}) from Section~\ref{sec:equations}:
\begin{equation}
K(x,y) = C\left(A(x,y),B(x,y) \right)+E(x,y),
\end{equation}
where
$$A(x,y) = xe^{(x^2y^3-2xy)/(2+2xy)}, \qquad
B(x,y) = (y+1)e^{-xy^2/(1+xy)}-1, $$
$$
E(x,y) = -x+\frac{x^2y}{2+2xy}-\ln \sqrt{1+xy}+\frac{xy}{2}-\frac{(xy)^2}{4}.
$$
It follows that the singularity $\tau(y)$ of the univariate
function $x\mapsto K(x,y)$ is given by the equation
$$
A(\tau(y),y) = \rho(B(\tau(y),y)),
$$
where $\rho(y)$ is as before the singularity of $x\mapsto C(x,y)$. The value of $\tau(1)=\tau$ is already known. In order to compute $\tau'(1)$ we
differentiate and obtain
$$
A_x(\tau,1)\tau'(1) +A_y(\tau,1) =
\rho'(B(\tau,1))\left[B_x(\tau,1)\tau'(1)+B_y(\tau,1)\right].
$$
Solving for $\tau'(1)$ we obtain
$$
\tau'(1) = -{\rho'(B(\tau,1))B_y(\tau,1)-A_y(\tau,1)\over
\rho'(B(\tau,1))B_x(\tau,1)-A_x(\tau,1)}.
$$
Since $\rho = X\circ Y^{-1}$, where $X$ and $Y$ are explicit
functions defined in \cite{gn}, $\rho'(y)$ can be computed
as $X'(Y^{-1}(y))/Y'(Y^{-1}(y))$.
After some calculations we finally get a value of
$\tau'(1)\approx-0.1129$ and
$$
\mu_3 =-\frac{\tau'(1)}{\tau(1)} \approx 2.4065.
$$
Using the same procedure we can isolate $\tau''(1)\approx0.3700$ and obtain
the variance as
$$
\lambda_3 = -\frac{\tau''(1)}{\tau(1)}
-\frac{\tau'(1)}{\tau(1)}+
\left(\frac{\tau'(1)}{\tau(1)}\right)^2\approx 0.3126.
$$
\end{proof}
Next we determine the limit law for the size of
the kernel in random planar 2-graphs.
\begin{theorem}
The size $Y_n$ of the kernel of a random planar 2-graph
with $n$ vertices is asymptotically Gaussian with
\begin{equation}
\mathbf{E}\, Y_n \sim \mu_K n
\approx 0.8259n,\qquad
\mathbf{Var}\, Y_n \sim \lambda_K n \approx 0.1205n
\end{equation}
\end{theorem}
\begin{proof}
Recall that the decomposition of a simple 2-graph into its kernel gives
$$H(x) = \widetilde H(x,0,1,0,\ldots)
= \widetilde K\left(x,\frac{x^2}{1-x},\frac{1}{1-x},
\ldots,k\left(\frac{x}{1-x}\right)^{k-1}+
\left(\frac{x}{1-x}\right)^{k},\ldots\right)+E(x,1).$$
If $u$ marks the size of the kernel then
$$H(x,u) = \widetilde K\left(ux,\frac{x^2}{1-x},\frac{1}{1-x},
\ldots,k\left(\frac{x}{1-x}\right)^{k-1}+
\left(\frac{x}{1-x}\right)^{k},\ldots\right)+E(x,1).$$
Composing with Equations~(\ref{eq:multikernels}) and (\ref{eq:multicon})
we get
$$H(x,u) = C\left(A(x,u),
B(x,u)
\right)+F(x,u)
$$
where
$$
A(x,u)=ux\exp\left({-x
\left(2u+x+{u}^{2}x-2\,ux \right)
\over 2(1-x+ux)}\right),
$$
$$
B(x,u)=-1+2\exp\left({x
\left( 1-u \right)\over1-x+ux}\right),
$$
and $F(x,u)$ is a correction term which does not affect the singular analysis. It follows that the singularity $\chi(u)$ of the univariate function $x\mapsto H(x,u)$ is given by the equation
$$
A(\chi(u),u)=\rho(B(\chi(u),u)),
$$
If we differentiate this expression and set $u=1$, we get
$$
A_x(\sigma,1)\chi'(1)+A_y(\sigma,1)=
\rho'(1)(B_x(\sigma,1)\chi'(1)+B_y(\sigma,1)).
$$
Note that $\chi(1)=\sigma$, where $\sigma$ is, as before,
the singularity of the generating function $H(x)$ of planar
2-graphs. Moreover, $B(x,1)=1$.
After some calculations we finally get $\chi'(1)\approx-0.03135$ and
$$\mu_K = -\frac{\chi'(1)}{\chi(1)} =
\frac{2\rho'(1)e^{\sigma}+\sigma^2-\sigma+1}{1-\sigma}.$$
This is computed using the known values of $\sigma$ and $\rho'(1)=-\rho\mu$.
Using the same procedure we can isolate $\chi''(1)\approx0.05295$ and compute
$\lambda_K$ as
$$
\lambda_K = -{\chi''(1)\over \chi(1)}
-{\chi'(1)\over \chi(1)} + \left({\chi'(1)\over \chi(1)}\right)^2
\approx 0.1205.
$$
\end{proof}
Note that, since the expected size of the core of a random connected
planar graph is $1-\sigma$, the expected size of the kernel of a random
connected planar graph with $n$ vertices is
asymptotically
$(1-\sigma)\mu_K n = (2\rho'(1)e^{\sigma}+\sigma^2-\sigma+1)n \approx 0.7944n$.
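This constant can be checked from the values quoted above (illustrative sketch; recall $\rho'(1)=-\rho\mu$):
\begin{verbatim}
from math import exp

sigma, rho, mu = 0.0382, 0.036729, 2.2133
val = 2 * (-rho * mu) * exp(sigma) + sigma**2 - sigma + 1
print(val, val / (1 - sigma))   # ~0.794 and ~0.826 (= mu_K)
\end{verbatim}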
\section{Degree distribution}\label{sec:degree}
In this section we compute the limit probability that a vertex
of a planar 2-graph or 3-graph has a given degree.
In order to do that, we compute
the degree distribution of the root of a rooted planar 2-graph
or 3-graph. Since every vertex is equally likely to be the root,
the root degree distribution coincides with that of a random vertex. Note that
this is not true for maps, so in this section we only compute
the distribution for graphs.
This section is rather technical, especially the part on 3-graphs,
which is why we separate its content from that of Section~\ref{sec:graphs}.
Let $c^{\bullet}_n$ be the number of rooted connected planar graphs
with $n$ vertices, i.e., $c^{\bullet}_n = n\cdot c_n$.
Let $C^{\bullet}(x) = \sum c^{\bullet}_n x^n/n! = xC'(x)$ be its associated
generating function.
Let $c^{\bullet}_{n,k}$ be the number of rooted connected planar graphs
with $n$ vertices and such that the root degree is exactly $k$.
Let $C^{\bullet}(x,w) = \sum c^{\bullet}_{n,k} x^n w^k/n!$ be its associated
generating function. The limit probability $d_k$ that the root vertex has
degree $k$ can be obtained as
$$
d_k = \lim_{n\rightarrow \infty}{c^{\bullet}_{n,k}\over c^{\bullet}_n}=
\lim_{n\rightarrow \infty}{[x^n][w^k]C^{\bullet}(x,w)\over [x^n]C^{\bullet}(x)}.
$$
Therefore, the probability distribution $p(w) =\sum d_k w^k$ can be
obtained from the knowledge of $C^{\bullet}(x,w)$. In \cite{degree} this
function is computed, and $d_k$ is proven to be asymptotically
$$
d_k \sim c\cdot k^{-1/2}q^k,
$$
where $c\approx 3.0175$ and $q\approx 0.6735$ are computable constants.
Our goal is to obtain similar results for 2-graphs and 3-graphs, by
computing the generating functions $H^{\bullet}(x,w)$
and $K^{\bullet}(x,w)$, respectively, in terms of $C^{\bullet}(x,w)$.
\subsection{2-graphs}
\begin{theorem}\label{th:coresdeg}
Let $h^{\bullet}_{n,k}$ be the number of rooted 2-graphs with
$n$ vertices and with root degree $k$. Let
$H^{\bullet}(x,w) = \sum h^{\bullet}_{n,k} x^n w^k/n!$ be its associated
generating function. The following equation holds
\begin{equation}\label{eq:coresdeg}
H^{\bullet}(x,w) = e^{x(1-w)}C^{\bullet}(xe^{-x},w)
-xwC^{\bullet}(xe^{-x})-x+x^2w
\end{equation}
\end{theorem}
\begin{proof}
The decomposition of a graph into its core and the attached rooted trees
implies the following equation:
$$
C^{\bullet}(z,w) = H^{\bullet}(T(z),w){T(z,w)\over T(z)}+
H^\bullet(T(z)){wT(z,w)\over 1-T(z)}+T(z,w),
$$
where $T(z,w) = z\cdot e^{wT(z)}$ is the generating function of rooted trees where
$w$ marks the degree of the root.
The first addend corresponds to the case where the root is in the core.
In this case, the degree of the graph root is the degree of the core root
plus the degree of the root of its appended tree. The second addend
corresponds to the case where the root is in an attached tree.
In this case there is a sequence of trees between the core and the
root, and finally a rooted tree. The degree of the graph root
is the degree of the root of the rooted tree plus one. The last addend
corresponds to the case where the graph is a tree, and therefore
its core is empty.
In order to invert the former relation let $x = T(z)$ so that
$$
z = xe^{-x},\quad T(z,w) = x e^{-x(1-w)},\quad
H^{\bullet}(T(z)) = (1-x)C^{\bullet}(xe^{-x})+x^2-x.
$$
After some calculations we obtain
$$
H^{\bullet}(x,w) = e^{x(1-w)}C^{\bullet}(xe^{-x},w)
-xwC^{\bullet}(xe^{-x})-x+x^2w =
$$
$$
={1\over 2}w^2x^3+\left(w^2+{2\over 3}w^3\right)x^4+
\left({9\over 2}w^2+{13\over 3}w^3+{41\over 24}w^4 \right)x^5+\ldots
$$
\end{proof}
The probability distribution $p(w)$ can be computed using transfer theorems.
The expansion of $C^\bullet (x,w)$ near the singularity $x=\rho$ gives
the following equation
\begin{equation}\label{eq:condeg}
C^\bullet (x,w) = C_0(w) + C_2(w)X^2 + C_3(w)X^3 + O(X^4),
\end{equation}
where $X = \sqrt{1-x/\rho}$. The probability
distribution can be computed as
$$
p(w) = {C_3(w)\over C_3(1)}.
$$
Our goal is to obtain the same result by applying the relation
obtained in (\ref{eq:coresdeg}).
\begin{theorem}
Let $e_k$ be the limit probability that a random vertex has degree $k$
in a planar 2-graph. Let $p_H(w) = \sum e_k w^k$ be the corresponding probability distribution.
Let $p(w)$ be as before. The following equation holds:
\begin{equation}\label{eq:distcores}
p_H(w) = {e^{\sigma(1-w)}p(w)-\sigma w \over 1-\sigma},
\end{equation}
where $\sigma = T(\rho)$, as in Theorem~\ref{th:2graphs}.
Furthermore, the limiting probability that the degree of
a random vertex is equal to $k$ exists, and is asymptotically
$$
p_H(k)\sim \nu_2k^{-1/2}q^k,
$$
where $q\approx 0.6735$ and $\nu_2\approx3.0797$.
\end{theorem}
\begin{proof}
Since $C^\bullet(x,w)$ satisfies (\ref{eq:condeg}), and
$H^\bullet(x,w)$ satisfies (\ref{eq:coresdeg}), we obtain
$$
H^\bullet(x,w) = H_0(w) + H_2(w)X^2 + H_3(w)X^3+O(X^4),
$$
where $X = \sqrt{1-x/\sigma}$, and $H_3(w)$ is computed as
$$
H_3(w) = e^{\sigma(1-w)}C_3(w)(1-\sigma)^{3/2}-
w\sigma C_3(1) (1-\sigma)^{3/2}
$$
The probability generating function of the distribution
is given by
$$
p_H(w) = {H_3(w)\over H_3(1)} =
{(1-\sigma)^{3/2}\left(e^{\sigma(1-w)}C_3(w) - w\sigma C_3(1)\right)
\over (1-\sigma)^{3/2}C_3(1)(1-\sigma)}
= {e^{\sigma(1-w)}p(w)-\sigma w \over 1-\sigma}.
$$
The asymptotics of the distribution can be obtained from $p(w)$.
The singularity of $p(w)$ is obtained in \cite{degree} as
$r \approx 1.4849$. The expansion of $p(w)$ near the singularity
is computed as
$$
p(w) = P_{-1}W^{-1} + O(1),
$$
where $P_{-1}\approx 5.3484$ is a computable constant,
and $W=\sqrt{1-w/r}$. Plugging this expression into (\ref{eq:distcores})
we get
$$
p_H(w) = Q_{-1}W^{-1} + O(1),
$$
where $Q_{-1} = P_{-1}e^{\sigma(1-r)}/(1-\sigma) \approx 5.4586$.
The estimate for $p_H(k)$ follows directly by singularity analysis.
\end{proof}
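The last step of the proof uses the standard transfer $[w^k](1-w/r)^{-1/2}\sim k^{-1/2}r^{-k}/\Gamma(1/2)$, so that $\nu_2=Q_{-1}/\sqrt{\pi}$ and $q=1/r$. A short numerical check (illustrative only, using the constants quoted above) recovers the stated values:
\begin{verbatim}
from math import exp, sqrt, pi

sigma, r, P1 = 0.0382, 1.4849, 5.3484
Q1 = P1 * exp(sigma * (1 - r)) / (1 - sigma)
print(Q1, Q1 / sqrt(pi), 1 / r)   # ~5.46, ~3.08 (= nu_2), ~0.6735 (= q)
\end{verbatim}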
\subsection{3-graphs}
In order to prove a similar result for 3-graphs, we need to extend
the generating function $C^{\bullet}(x,w)$ so that it takes edges
into account. This function $C^{\bullet}(x,y,w)$
was computed in \cite{degree}, and our goal is to obtain the
analogous generating function for 3-graphs, $K^{\bullet}(x,w)$,
in terms of $C^{\bullet}(x,y,w)$.
We remark that the expression given in \cite{degree} for
$C^{\bullet}(x,y,w)$ is extremely involved and needs several pages to write down.
\begin{theorem}\label{th:kerneldeg}
Let $k^{\bullet}_{n,k}$ be the number of rooted 3-graphs with
$n$ vertices and with root degree $k$. Let
$K^{\bullet}(x,w) = \sum k^{\bullet}_{n,k} x^n w^k/n!$ be its associated
generating function. The following equation holds
\begin{equation}\label{eq:kerneldeg}
\begin{split}
K^\bullet (x,w) = B_0(x,w)\cdot
C^\bullet\left(B_1(x),B_2(x),B_3(x,w)
\right)+A(x,w)
\end{split}
\end{equation}
where
$$
B_0(x,w) = e ^{(w^2-1)x^2/(2+2x) + x(1-w)/(1+x)}, \qquad
B_1(x) = x e^{(x^2-2x)/(2+2x)},
$$
$$
B_2(x) = 2e^{-x/(1+x)}-1,\qquad
B_3(x,w) = \frac{(1+w)e^{-wx/(1+x)}-1}{2e^{-x/(1+x)}-1},
$$
$$
A(x,w) = A_0(x)+A_1(x)w+A_2(x)w^2,
$$
and $A_0(x)$, $A_1(x)$, $A_2(x)$ are analytic functions.
\end{theorem}
In order to prove this theorem we need some technical lemmas
that relate different classes of graphs.
\begin{lemma}
Let $\widetilde{C}^{\bullet}(x,w,z,y_1,\ldots,y_k,\ldots)$ be the generating
function of rooted connected planar weighted multigraphs where $x$ marks
vertices, $w$ marks the root degree, $z$ marks loops, and $y_k$ marks
$k$-edges. The following equation holds
\begin{equation}\label{eq:multigraphsdeg}
\widetilde{C}^\bullet (x,w,z,y_1,\ldots,y_k\ldots) =
e ^{z\cdot(w^2-1)/2} C^\bullet\left(x e^{z/2},\sum_{i\geq 1}\frac{y_i}{i!},
\frac{\sum_{i\geq 1}w^i\cdot y_i /i!}{\sum_{i\geq 1} y_i /i!}\right).
\end{equation}
\end{lemma}
\begin{proof}
Given a simple connected planar graph $\mathcal{G}$,
a connected planar multigraph
can be obtained from $\mathcal{G}$ by replacing each edge with a multiple edge,
and placing 0 or more loops in each vertex (see proof of
Corollary~\ref{cor:kernel} for details). In the case of rooted graphs,
if we replace an edge incident to the root with an $i$-edge, its root degree
is increased by $i-1$. Therefore, instead of replacing such an edge
with a multiple edge with generating function $y_i/i!$, we replace it with
a multiple edge with generating function $w^iy_i/i!$.
Similarly, when we add a loop incident to the root vertex, the root degree
is increased by 2. Therefore, its associated generating function is not
$z$, but $zw^2$.
\end{proof}
\begin{lemma}
Let $\widetilde{H}^{\bullet}(x,w,z,y_1,\ldots,y_k,\ldots)$ be the generating
function of rooted planar weighted 2-multigraphs where $x$ marks
vertices, $w$ marks the root degree, $z$ marks loops, and $y_k$ marks
$k$-edges. The following equation holds
\begin{equation}\label{eq:multicoresdeg}
\widetilde{H}^\bullet (x,w,z,y_1,\ldots,y_k\ldots) =
e^{y_1x(1-w)}\widetilde{C}^\bullet (xe^{-y_1 x},w,z,y_1,\ldots,y_k\ldots)
-w\cdot A(x,z,y_1,\ldots y_k,\ldots) - x,
\end{equation}
for a given function $A(x,z,y_1,\ldots y_k,\ldots)$ that does not
depend on $w$.
\end{lemma}
\begin{proof}
The decomposition of a planar connected weighted multigraph into
its core and the attached rooted trees implies the following equation:
$$
\widetilde{C}^{\bullet}(x,w,z,y_1,\ldots,y_k,\ldots) =
\widetilde{H}^{\bullet}(T(x,y_1),w,z,y_1,\ldots,y_k,\ldots)
{T(x,y_1,w)\over T(x,y_1)}+
$$
$$
+\widetilde{H}^\bullet(T(x,y_1),z,y_1,\ldots,y_k,\ldots)
{wT(x,y_1,w)\over 1-T(x,y_1)}+T(x,y_1,w),
$$
where $T(x,y) = T(xy)/y$ is the generating function of
rooted trees where $x$ marks vertices and $y$ marks edges,
and $T(x,y,w) = T(xy,w)/y$ is the generating function of rooted
trees where $x$ marks vertices, $y$ marks edges, and $w$ marks the
root degree. The justification of this relation is analogous to
the proof of Theorem~\ref{th:coresdeg}, and so is
its inversion.
\end{proof}
\begin{lemma}
Let $K^{\bullet}(x,w)$ be the generating function of rooted simple planar
3-graphs where $x$ marks vertices and $w$ marks the root degree.
The following equation holds
\begin{equation}\label{eq:kernelcoresdegree}
K^{\bullet}(x,w) =
\widetilde H^{\bullet}(x,w,-sx,1+s,2s+s^2,\ldots ,ks^{k-1}+s^k,\ldots)
+w^2A(x),
\end{equation}
for a given function $A(x)$, and where $s=-x/(1+x)$.
\end{lemma}
\begin{proof}
Recall from Section~\ref{sec:equations}
the decomposition (\ref{eq:multicoreskernel})
of a planar 2-multigraph into its kernel and paths of vertices
$$
\widetilde{H}(x,z,y_1,y_2,\ldots,y_k,\ldots) =
\widetilde{K}\left(x,sxy_1+xy_2+z,y_1+s,y_2+2y_1s+s^2,\ldots,
\sum_{j=0}^{k}\binom{k}{j}y_js^{k-j},\ldots\right)
$$
$$
-\ln \sqrt{1-xy_1}+\frac{xz}{2}+\frac{x^2y_2}{4}-
\frac{xy_1}{2}-\frac{(xy_1)^2}{4},
$$
where $s = xy_1^2/(1-xy_1)$ is a nonempty path of edges and vertices.
If we root a vertex of a planar 2-multigraph there are two options:
either it belongs to the kernel or it belongs to an edge of the kernel.
In the former case, its degree corresponds to the degree of the corresponding
vertex in the kernel. In the latter case its degree must be 2. With this
observation we can extend this equation so that it considers rooted graphs
and it takes the root degree into account, as
$$
\begin{array}{ll}
\widetilde{H}^\bullet(x,w,z,y_1,y_2,\ldots,y_k,\ldots)=&\\
\widetilde{K}^\bullet\left(x,w,sxy_1+xy_2+z,y_1+s,y_2+2y_1s+s^2,\ldots,
\sum_{j=0}^{k}\binom{k}{j}y_js^{k-j},\ldots\right)
+w^2A(x,z,y_1,\ldots,y_k,\ldots),
\end{array}
$$
where $A(x,z,y_1,\ldots,y_k,\ldots)$ is a function that does not depend
on $w$. This relation can be inverted as in Section~\ref{sec:equations},
and finally we can conclude (\ref{eq:kernelcoresdegree}) from
the following equation
$$
K^\bullet(x,w) = \widetilde K^\bullet(x,w,0,1,0,\ldots,0,\ldots).
$$
\end{proof}
Using these lemmas we finally prove Theorem~\ref{th:kerneldeg}.
\begin{proof}
The equation (\ref{eq:kerneldeg}) is obtained by combining equations
(\ref{eq:kernelcoresdegree}), (\ref{eq:multicoresdeg}) and
(\ref{eq:multigraphsdeg}).
\end{proof}
The expression obtained in Theorem~\ref{th:kerneldeg} allows us to prove
the following result.
\begin{theorem}
Let $f_k$ be the limit probability that a random vertex has degree
$k$ in a planar 3-graph. The limit probability distribution
$p_K(w) = \sum f_k w^k$ exists and is computable.
\end{theorem}
\begin{proof}
The generating function $C^\bullet(x,y,w)$ is expressed
in~\cite{degree} as
$$
C^\bullet(x,y,w) = C_0(y,w)+C_2(y,w)X^2+C_3(y,w)X^3+O(X^4),
$$
where $X=\sqrt{1-x/\rho(y)}$. If we compose this expression with
(\ref{eq:kerneldeg}) we obtain
\begin{equation}\label{eq:kernelexp}
\begin{array}{ll}
K^\bullet(x,w) = B_0(x,w)\times\\
\left[C_0(B_2(x),B_3(x,w))+
C_2(B_2(x),B_3(x,w))X^2+
C_3(B_2(x),B_3(x,w))X^3+O(X^4)\right]+A(x,w),
\end{array}
\end{equation}
where $X=\sqrt{1-B_1(x)/\rho(B_2(x))}$. If we define
$D(x) = B_1(x)/\rho(B_2(x))$ then we can proceed as in the proof
of Theorem~\ref{th:3graphs}, obtaining that
$X=\sqrt{\tau D'(\tau)}Z+O(Z^2)$, where $Z = \sqrt{1-x/\tau}$.
Plugging this expression into~(\ref{eq:kernelexp}) we obtain
$$
K^\bullet(z,w) = K_0(w)+K_2(w)Z^2+K_3(w)Z^3+O(Z^4),
$$
where $Z = \sqrt{1-z/\tau}$ and
$$
K_3(w) = B_0(\tau,w)C_3(B_2(\tau),B_3(\tau,w))
(\tau D'(\tau))^{3/2}+a_0+a_1w+a_2w^2,
$$
for some constants $a_0$, $a_1$ and $a_2$. The limit probability distribution
of the root vertex being of degree $k$ is computed as
$$
p_K(w) = \frac{K_3(w)}{K_3(1)} =
\frac{B_0(\tau,w)C_3(B_2(\tau),B_3(\tau,w))(\tau D'(\tau))^{3/2}
+a_0+a_1w+a_2w^2}
{B_0(\tau,1)C_3(B_2(\tau),1)(\tau D'(\tau))^{3/2}+a_0+a_1+a_2}.
$$
Since we know that a 3-graph has no vertices of degree 0, 1 or 2,
we can choose suitable values of $a_0$, $a_1$ and $a_2$ such that
the probability distribution $p_K(w)=\sum f_kw^k$ satisfies
$f_0=f_1=f_2=0$. The function $C_3(y,w)$ is described in~\cite{degree},
and every other function that appears in the previous expression
is explicit. Therefore,
$p_K$ is computable, as we wanted to prove.
\end{proof}
We remark that $p_K(w)$ is expressed in terms of $C_3(y,w)$, which
is a very involved (although elementary) function, given in the appendix in \cite{degree}.
\section{Concluding remarks}\label{sec:conclude}
Most of the results we have obtained can be extended to other classes of graphs. Let $\mathcal{G}$ be a class of graphs closed under taking minors such that the excluded minors of $\mathcal{G}$ are 2-connected. Interesting examples are the classes of series-parallel and outerplanar graphs.
Given such a class $\mathcal{G}$, a connected graph is in $\mathcal{G}$ if and only if its core is in~$\mathcal{G}$. Hence Equation (\ref{eq:cores}) also
holds for graphs in $\mathcal{G}$. Using the results from \cite{SP}, we have performed the corresponding computations for the classes of series-parallel and outerplanar graphs (there are no results for kernels since outerplanar and series-parallel graphs always have minimum degree at most two). The results are displayed in the next table, together with the data for planar graphs.
The expected number of edges is $\mu n$, and the expected size of the core is $\kappa n$. It is worth remarking that the size of the core is always linear, whereas the size of the largest block in series-parallel and outerplanar graphs is only $O(\log n)$ \cite{3-conn,KS}.
$$
\renewcommand{\arraystretch}{1.5}
\begin{array}{|l|c|c|c|}
\hline
\hbox{Graphs} & \hbox{Growth constant} & \mu \hbox{ (edges) } & \kappa \hbox{ (core)} \\
\hline
\hbox{Outerplanar} & 7.32 & 1.56 & 0.84\\
\hline
\hbox{Outerplanar 2-graphs} & 6.24 & 1.67 & \\
\hline
\hbox{Series-parallel} & 9.07 & 1.62 & 0.875 \\
\hline
\hbox{Series-parallel 2-graphs} & 8.01 & 1.70 & \\
\hline
\hbox{Planar} & 27.23 & 2.21 & 0.962 \\
\hline
\hbox{Planar 2-graphs} & 26.21 & 2.26 & \\
\hline
\end{array}
$$
\bigskip
The $k$-core of a graph $G$ is the maximum subgraph
of $G$ in which all vertices have degree at least $k$. Equivalently, it is the subgraph of $G$ formed by repeatedly deleting (in any order) all vertices of degree less than $k$. In this terminology, what we have called the core of a graph is the 2-core. Using the results from \cite{gn} it is not difficult to show that the 3-core, 4-core and 5-core of a random planar graph all have linear size with high probability (there is no 6-core since a planar graph always has a vertex of degree at most five). The interesting question, however, is whether the $k$-core has a connected component of linear size (as is the case for $k=2$).
We have performed computational experiments on random planar graphs, using the algorithm described in \cite{fusy}, and based on the results we formulate the following conjecture.
\paragraph{Conjecture.} With high probability the 3-core of a random planar graph has one component of linear size.
With high probability the components of the 4-core of a random planar graph are all sublinear.
\bigskip
We have not been able to prove either of these conjectures. As opposed to the kernel, the 3-core is obtained by repeatedly \emph{removing} vertices of degree two. These deletions may have long-range effects that appear difficult to analyze. The analysis of the 4-core appears even more challenging.
\section{Introduction}
\label{1-Introduction}
In recent years, Graph Convolutional Networks (GCNs) have achieved remarkable success in a variety of graph-based tasks, ranging from traffic prediction \cite{article32,article33,article34} and recommender systems \cite{article28,article35,article36} to biochemistry \cite{article37,article38}. Despite their effectiveness and popularity, the training of GCNs usually requires a significant amount of labeled data to achieve satisfactory performance. However, obtaining these labels may be time-consuming, laborious and expensive. A great deal of research has therefore turned to semi-supervised learning (SSL) methods to address the problem of few labeled samples. SSL enables models to learn jointly from labeled and unlabeled data, resulting in more precise decision boundaries and more robust models.
\par In GCNs, as illustrated in Figure \ref{fig:1a}, some pseudo-labeling methods \cite{article5,article17} have been proposed to obtain labels of unlabeled samples. In addition to pseudo-label learning methods, as illustrated in Figure \ref{fig:1b}, some recent work employs the output of the model to instruct the message propagation. Several studies \cite{article4,article7,article27} use the given labels and the estimated labels to optimize the topology graph, instructing the message propagation of the GCN model. ConfGCN \cite{article3} uses the estimated confidence to determine the effect of one node on another during neighborhood aggregation, changing the topological graph implicitly.
\begin{figure}[h]
\centering
\subfloat[]{
\begin{minipage}[h]{0.85\linewidth}
\label{fig:1a}
\centering
\includegraphics[width=0.85\linewidth]{1a.png}
\end{minipage}%
}%
\subfloat[]{
\begin{minipage}[h]{0.85\linewidth}
\label{fig:1b}
\centering
\includegraphics[width=0.85\linewidth]{1b.png}
\end{minipage}
}%
\subfloat[]{
\begin{minipage}[h]{0.85\linewidth}
\label{fig:1c}
\centering
\includegraphics[width=0.85\linewidth]{1c.png}
\end{minipage}
}%
\caption{The comparison between previous Graph-based SSL methods and our proposed DCC-GCN.}
\label{fig:1}
\centering
\end{figure}
\par Although graph-based SSL methods have achieved noticeable progress in recent years, extensive research \cite{article1,article18,article30} has revealed that neural network models are over-confident in their predictions. It has been shown \cite{article1,article2,article21} that, regardless of the accuracy of the outputs generated by the models, the confidence of the outputs gradually increases with training, making it difficult to select truly high-confidence samples as pseudo labels or to correctly instruct the message propagation of the GCN model. This is a common problem in selecting truly high-confidence samples, which gives rise to a fundamental question: \itshape How to select high-confidence and low-confidence samples accurately? \upshape Existing methods tend to utilize high-confidence samples, yet seldom fully explore the use of low-confidence samples, which raises another critical question: \itshape How to calibrate low-confidence samples accurately? \upshape
\par In this paper, we introduce a framework called Dual-Channel Consistency based Graph Convolutional Networks (DCC-GCN), which selects and calibrates low-confidence samples based on dual-channel consistency for GCN. As the first contribution of this study, we prove that classification errors tend to occur where the predictions of different classifiers are inconsistent. Based on this finding, as illustrated in Figure \ref{fig:1c}, we present a novel method to select low-confidence and high-confidence samples accurately based on the inconsistent results of two different models. Previous methods only use a single topology graph for node aggregation, which fails to fully utilize the richness of the features. Therefore, we train two different classifiers, using the topology graph and the feature graph, to select low-confidence samples. As the second contribution of this study, we confirm that the low-confidence samples obtained from dual-channel consistency severely constrain the model's performance. In contrast to existing methods that focus on utilizing high-confidence samples, we improve the model's performance by calibrating the low-confidence samples using the high-confidence samples in their neighborhood. In summary, the contributions of this paper are as follows:
\par 1. We propose an efficient graph-based SSL method called DCC-GCN, which could identify low-confidence and high-confidence samples more reliably based on the consistency of the dual-channel models. Our introduced dual-channel structure in SSL uses topology and feature graphs simultaneously, extracting more relevant information from node features and topology.
\par 2. We confirm that the low-confidence samples limit the model's performance, and we improve the model by calibrating the low-confidence samples using the high-confidence samples in their neighborhood.
\par 3. Our extensive experiments on a range of benchmark data sets clearly show that DCC-GCN can significantly improve the accuracy of low-confidence samples. Under different label rate settings, DCC-GCN consistently outperforms the most advanced graph-based SSL algorithm.
\par The rest of the paper is organized as follows. We review related work in Section \ref{2-Related works}, and develop DCC-GCN in Section \ref{3-Methods}. Then, we report experimental results on eight widely used benchmark datasets in Section \ref{4-Experiment}. In Section \ref{5-Discussion} we experimentally investigate the capability of DCC-GCN in calibrating low-confidence samples and aggregating features and topology. Finally, we conclude the paper in Section \ref{6-Conclusion}.
\section{Related works}
\label{2-Related works}
\subsection{Graph Convolutional Network}
\label{sec:2-1}
GCN \cite{article8} defines the convolution as a simple linear function of the graph Laplacian for semi-supervised classification on graphs, but this limits its capability of aggregating information from nodes with similar features. To address this shortcoming of GCN, GAT \cite{article11} introduces an attention mechanism in the message propagation. More recently, task-specific GCNs have been proposed in different fields, such as \cite{article28,article32,article33,article34,article35,article36}.
\subsection{Graph-based Semi-supervised Learning}
\label{sec:2-2}
SSL on graphs aims at classifying the nodes of a graph when labels are available only for a small subset of nodes. Conventionally, pseudo-label learning approaches continuously adopt model predictions to expand the labeled training set \cite{article5,article17,article29}. Recently, more attention has been devoted to instructing model training based on the output results \cite{article3,article4,article7,article16,article27}. Typically, AGNN \cite{article27} builds a new aggregation matrix based on the learned label distribution, which updates the topology using the output. ConfGCN \cite{article3} uses the confidence of the estimated labels to determine the influence of one node on another during neighborhood aggregation, changing the weights of the edges in the topology. Furthermore, E-GCN \cite{article4} utilizes both given and estimated labels to optimize the topology.
\par Despite the noticeable achievements of these graph-based semi-supervised methods in recent years, the central problem in SSL of selecting reliable high-confidence samples has not been well addressed. Selecting high-confidence samples based merely on the softmax probability of the model output can easily drive the optimization in the wrong direction, resulting in a degradation of model performance.
\subsection{Confidence Calibration}
\label{sec:2-3}
Confidence calibration aims to calibrate the outputs (also known as logits) of trained models. Although confidence calibration has been extensively studied in CV and NLP \cite{article1,article6,article19,article20,article30}, it has rarely been studied in deep graph learning. CaGCN \cite{article40} was the first to study the problem of confidence calibration in GCNs. In CaGCN, the confidence is first corrected under the assumption that the confidence of neighboring nodes tends to be similar, and pseudo labels are then generated. Unlike CaGCN, our proposed DCC-GCN model focuses on achieving a more reliable selection of high-confidence and low-confidence samples through dual-channel consistency. DCC-GCN then further improves the model's performance by calibrating the low-confidence samples, which are vulnerable to misclassification.
\section{Methods}
\label{3-Methods}
The goal of our method, DCC-GCN, is to select reliable low-confidence samples and calibrate the feature embeddings of the low-confidence samples accurately. Before introducing our method, we briefly summarize the basic concepts of graph convolution in GCNs.
\subsection{Preliminaries}
\label{sec:3-0}
The GCN \cite{article8} is primarily used to process non-Euclidean graph data, generating node-level representations via message propagation. For a graph $\mathcal{G}(\mathcal{V}, \mathcal{E})$, where $\mathcal{V}$ denotes the set of all nodes and $\mathcal{E}$ denotes the set of all edges, $X \in \mathbb{R}^{|\mathcal{V}| \times d}$ represents the matrix of $d$-dimensional input features. The feature representation of node $v \in \mathcal{V}$ at layer $l$ is obtained by aggregating the representations of its 1-hop neighbors at layer $l-1$. We use $\mathcal{N}^{1}(v)=\{u \mid(u, v) \in \mathcal{E}\}$ to denote the 1-hop neighborhood of node $v$. The graph convolution operation is defined as:
\begin{equation}
\label{equ:1}
\mathbf{h}_{v}^{l}=\sigma\left(W^{l} \cdot \text{aggregate}\left(\left\{\mathbf{h}_{u}^{l-1} \mid u \in \mathcal{N}^{1}(v)\right\}\right)\right).
\end{equation}
$\sigma(\cdot)$ represents the activation function, and the aggregation function usually needs to be differentiable and permutation invariant. An $l$-layer GCN produces the prediction by repeating the message propagation $l$ times and then passing the final output through an MLP.
\textbf{Assumption 1}. \itshape (Symmetric Error) The GCN model $g(\cdot)$ has a classification accuracy of $p$ and makes symmetric errors for each category. We have $P[g(v)=\text{label}(v)]=p$ and $P[g(v)=k]=\frac{1-p}{c-1}$ for $\text{label}(v) \neq k \in[c]$, where $v$ is a node, $\text{label}(v)$ is the ground-truth label of node $v$, and $c$ is the number of classes. The symmetric error is a common assumption that has been used in the literature \cite{article12,article13}. \upshape
\textbf{Theorem 1}. Let the average classification accuracies of the two GCN models be $p_{1}$ and $p_{2}$, respectively, and let $N$ and $N_{r}$ denote the number of nodes in the graph and the number of samples correctly classified by both GCN models, respectively. Then the mean classification accuracy of the selected low-confidence samples, $p_{low-conf}$, is lower than the mean accuracy of the model.
\begin{equation}
\label{equ:2}
p_{low-conf}=\frac{p_{1} N-N_{r}}{\left(1-p_{1} p_{2}\right) N}<p_{1}\left(\frac{1-p_{2}}{1-p_{1} p_{2}}\right).
\end{equation}
All proofs can be found in Section A of the supplementary material.
\textbf{Analysis 1}. Calibration of the feature embedding of the low-confidence samples is the key to improving the model's performance. As can be seen from \textbf{Theorem 1}, the classification accuracy of low-confidence samples is quite low. For example, the theoretical upper bound for $p_{low-conf}$ is 0.364 when $p_{1}=0.8$ and $p_{2}=0.7$, which is significantly lower than the average classification accuracy, limiting the overall performance of the model severely.
\textbf{Assumption 2}. \itshape The calibrated classification accuracy of the low-confidence samples is lower than the accuracy of the model, that is, $p_{low-conf}^{\prime}<p_{1}$. \upshape
\textbf{Theorem 2}. The improvement in accuracy obtained by calibrating the low-confidence samples, compared with the original GCN, satisfies the following upper bound, where $\gamma$ depends on the correlation between the two models.
\begin{equation}
\label{equ:3}
p_{GAIN}<\left(1-p_{1}\right)\left[p_{1} p_{2}+\frac{\left(1-p_{1}\right)\left(1-p_{2}\right)}{c-1}+\gamma\right].
\end{equation}
All proofs can be found in Section B of the supplementary material.
\textbf{Analysis 2}. According to the \textbf{Theorem 2}, the upper bound of model improvement accuracy $p_{GAIN}$ is determined by $p_{1}$ and $p_{2}$. Furthermore, when $p_{1}$ is fixed, the $p_{GAIN}$ is greatest when $p_{2}$ is the same as $p_{1}$. See details in Section C of the supplementary material.
\subsection{Creating Diversity in GCN Models}
\label{sec:3-1}
The dual channel uses two GCN models, each checking the other, to filter out low-confidence samples that are difficult to classify. Diversity between the two channels of DCC-GCN plays an important role in the training process, as it ensures that the two models do not converge to the same parameters.
\par According to \textbf{Analysis 2}, the maximum bound on the performance improvement is obtained when the accuracy of the second-channel model is the same as that of the first. Following \cite{article9}, we construct a feature graph based on the feature matrix $X \in \mathbb{R}^{N \times M}$, which, used as input, can achieve results comparable to the original GCN. We first cluster the node features using \itshape KNN\upshape. The more similar two representations are, the higher their cosine similarity; we therefore use the inverse of the cosine similarity to measure the distance between different feature representations.
\begin{equation}
\label{equ:4}
d\left(\boldsymbol{x}_{i}, \boldsymbol{x}_{j}\right)=\frac{1}{s\left(\boldsymbol{x}_{i}, \boldsymbol{x}_{j}\right)}=\frac{\left|\boldsymbol{x}_{i}\right|\left|\boldsymbol{x}_{j}\right|}{\boldsymbol{x}_{i} \cdot \boldsymbol{x}_{j}}.
\end{equation}
Then, for each node we select the top $k$ most similar nodes and create the corresponding edges, finally obtaining the adjacency matrix $A^{\prime}$. The feature graph obtained by \itshape KNN \upshape clustering is therefore $\mathcal{G}^{\prime}=\left(A^{\prime}, X\right)$.
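As a concrete illustration (not part of the model description above; function and variable names are illustrative, cf.\ Equation~(\ref{equ:4})), the construction of the feature graph can be sketched as follows:
\begin{verbatim}
import numpy as np

def knn_feature_graph(X, k):
    """Symmetric 0/1 adjacency matrix A' connecting every node
    to its k most similar nodes under cosine similarity."""
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    S = Xn @ Xn.T                          # cosine similarity matrix
    np.fill_diagonal(S, -np.inf)           # exclude self-loops
    A = np.zeros_like(S)
    top = np.argsort(-S, axis=1)[:, :k]    # k most similar nodes per row
    rows = np.repeat(np.arange(X.shape[0]), k)
    A[rows, top.ravel()] = 1.0
    return np.maximum(A, A.T)              # symmetrize
\end{verbatim}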
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{2.png}
\caption{Schematic overview of DCC-GCN. Solid lines represent the process of semi-supervised training and selection of low-confidence samples, and dashed lines represent the calibration of low-confidence samples. The inputs are the graphs $\mathcal{G}$ and $\mathcal{G}^{\prime}$, and the feature matrix $X$.}
\label{fig:2}
\end{figure}
\subsection{Dual-Channel Consistency based GCN}
\label{sec:3-2}
Following ConfGCN \cite{article3}, DCC-GCN uses co-variance matrix based symmetric Mahalanobis distance to define the distance between two nodes. Formally, for nodes $u$ and $v$, with label distributions $\boldsymbol{\mu}_{u}$ and $\boldsymbol{\mu}_{v}$ and co-variance matrices $\boldsymbol{\Sigma}_{u}$ and $\boldsymbol{\Sigma}_{v}$, distance between them is defined as follows:
\begin{equation}
\label{equ:5}
d_{M}(u,v)=\left(\boldsymbol{\mu}_{u}-\boldsymbol{\mu}_{v}\right)^{T}\left(\boldsymbol{\Sigma}_{u}^{-1}+\boldsymbol{\Sigma}_{v}^{-1}\right)\left(\boldsymbol{\mu}_{u}-\boldsymbol{\mu}_{v}\right),
\end{equation}
where $\boldsymbol{\mu}_{v} \in \mathbb{R}^{c}$ and $\boldsymbol{\Sigma}_{v} \in \mathbb{R}^{c \times c}$. $\boldsymbol{\mu}_{v, k}$ represents the score that node $v$ belongs to label $k$, and $\left(\boldsymbol{\Sigma}_{v}\right)_{k k}$ denotes the variance of $\boldsymbol{\mu}_{v, k}$. An important property of the distance $d_{M}(u,v)$ is that if either co-variance matrix has a large eigenvalue, then the constraint requiring $\boldsymbol{\mu}_{u}$ and $\boldsymbol{\mu}_{v}$ to be close is relaxed.
Then, we define $r_{u v}=\frac{1}{d_{M}(u, v)}$, the influence of node $u$ on node $v$ during message propagation.
\par $\hat{A}^{\prime}$ and $\hat{A}$ represent the normalized adjacency matrices of the feature graph and the topology graph, respectively. In the first channel, $X=[\boldsymbol{h}_{v}^{0}]$ and $\hat{A}=[\alpha_{u v}]$ are used as inputs to the GCN model, and the output of the $l$-th layer for node $v$ can be expressed as:
\begin{equation}
\label{equ:6}
\boldsymbol{h}_{v}^{l}=\sigma\left(\sum_{u \in \mathcal{N}_{v}} r_{u v} \alpha_{u v} \boldsymbol{h}_{u}^{l-1} \Theta^{l}\right).
\end{equation}
\subsection{Select the Low-confidence Samples}
\label{sec:3-3}
Due to overconfidence, high-confidence and low-confidence samples cannot be accurately selected using only a single model. Therefore, we determine high-confidence and low-confidence samples based on dual-channel consistency. Specifically, for each of the two channels, we expand its output feature matrix $\left[\boldsymbol{h}_{v}^{l}\right]$ to twice its dimensionality and feed the expanded output into an MLP to obtain the classification result of that channel. The samples for which the two GCN models of the dual channel give the same classification result are the high-confidence samples, which form the set $\mathcal{V}_{high-conf}$. The samples with different classification results are the low-confidence samples, which form the set $\mathcal{V}_{low-conf}$.
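The selection step amounts to a simple comparison of the two channels' predictions; a minimal sketch (variable names are illustrative) is given below.
\begin{verbatim}
import numpy as np

def split_by_consistency(logits_topo, logits_feat):
    """logits_*: (N, c) classification outputs of the two channels."""
    agree = logits_topo.argmax(axis=1) == logits_feat.argmax(axis=1)
    high_conf = np.where(agree)[0]      # channels agree  -> V_high-conf
    low_conf  = np.where(~agree)[0]     # channels differ -> V_low-conf
    return high_conf, low_conf
\end{verbatim}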
\subsection{Calibration of Low Confidence Samples}
\label{sec:3-4}
According to \textbf{Analysis 1}, the accuracy of the low-confidence samples is quite low, which restricts the performance of the model. Therefore, calibrating the low-confidence samples can improve the model's performance. For a high-confidence node $v \in \mathcal{V}_{high-conf}$, the output-layer representation is $\boldsymbol{z}_{v}= \boldsymbol{h}_{v}^{l}$. For a low-confidence sample $u \in \mathcal{V}_{low-conf}$, it is difficult to obtain the correct label directly from its output embedding, due to the low classification accuracy of the low-confidence samples. As shown in Figure \ref{fig:3}, the embedding of node $u$ is therefore obtained by aggregating the embeddings of the neighbouring high-confidence nodes.
\begin{equation}
\label{equ:7}
\boldsymbol{z}_{u}=\sum_{v \in \mathcal{N}_{high-c o n f}^{m}(u)} r_{v u} \boldsymbol{h}_{v}^{l},
\end{equation}
\noindent where $\mathcal{N}_{high-conf}^{m}(u)=\mathcal{N}^{m}(u) \cap \mathcal{V}_{high-conf}$ denotes the set of high-confidence nodes among the $m$-hop neighbors of $u$, and $m$ is the neighbourhood hop used for the calibration; in the experiments $m$ is set to 2.
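A possible implementation of this calibration step (illustrative only; dense matrices and a naive $m$-hop reachability computation are used for simplicity):
\begin{verbatim}
import numpy as np

def calibrate_low_confidence(Z, r, adj, high_conf, m=2):
    """Replace the embedding of every low-confidence node by the r_{vu}-weighted
    sum of the high-confidence nodes within its m-hop neighbourhood."""
    A = (adj > 0).astype(int)
    reach = np.zeros_like(A, dtype=bool)
    power = np.eye(A.shape[0], dtype=int)
    for _ in range(m):                      # nodes reachable in 1..m hops
        power = power @ A
        reach |= power > 0
    Z_cal = Z.copy()
    for u in np.where(~high_conf)[0]:
        nbrs = np.where(reach[u] & high_conf)[0]
        if nbrs.size:
            Z_cal[u] = (r[nbrs, u, None] * Z[nbrs]).sum(axis=0)
    return Z_cal
\end{verbatim}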
\begin{figure}
\centering
\includegraphics[width=0.85\textwidth]{3.png}
\caption{Dark shaded nodes indicate high-confidence nodes and light grey nodes indicate low-confidence nodes. As shown in the figure on the right, the embedding of the low-confidence node $a$ is constructed by summing the embeddings of the high-confidence samples among its 1-hop neighbors (for simplicity, $m$ is set to 1 in the figure).}
\label{fig:3}
\end{figure}
The final node representations are created by concatenating the two channel representations, $\hat{\boldsymbol{z}}_{v}=\operatorname{concat}\left(\boldsymbol{z}_{v}, \boldsymbol{z}_{v}^{\prime}\right)$, where $\boldsymbol{z}_{v}$ and $\boldsymbol{z}_{v}^{\prime}$ denote the node representations of the first and second channel, respectively. The final representation matrix is then $Z=\left[\hat{\boldsymbol{z}}_{v}\right]$. Let $V_{L}$ be the set of training nodes and $Y$ be the one-hot label matrix; the model output embedding $Z$ is converted to $\hat{Y}$ by a linear layer:
\begin{equation}
\label{equ:8}
\hat{Y}=\text{softmax}(W \cdot Z+\boldsymbol{b}),
\end{equation}
\noindent where $\hat{Y}$ is the predicted label matrix.
\subsection{Optimization Objective}
\label{sec:3-5}
Following \cite{article39}, the objective function includes the following two regularization terms in addition to the label cross-entropy loss:
\begin{equation}
\label{equ:9}
L_{smooth}=\sum_{(u, v) \in \mathcal{E}}\left(\boldsymbol{\mu}_{u}-\boldsymbol{\mu}_{v}\right)^{T}\left(\boldsymbol{\Sigma}_{u}^{-1}
+\boldsymbol{\Sigma}_{v}^{-1}\right)\left(\boldsymbol{\mu}_{u}-\boldsymbol{\mu}_{v}\right),
\end{equation}
\begin{equation}
\label{equ:10}
L_{label}=\sum_{v \in V_{L}}\left(\boldsymbol{\mu}_{v}-\boldsymbol{Y}_{v}\right)^{T}\left(\boldsymbol{\Sigma}_{v}^{-1}+\frac{1}{\varphi} \boldsymbol{I}\right)\left(\boldsymbol{\mu}_{v}-\boldsymbol{Y}_{v}\right),
\end{equation}
where $\frac{1}{\varphi} \boldsymbol{I} \in \mathbb{R}^{c \times c}$ and ${\varphi}>0$. $L_{smooth}$ enforces the label distributions of neighboring nodes to be close to each other, and $L_{label}$ enforces the label distributions of nodes in $V_{L}$ to be close to their input label distributions. Finally, we include the standard cross-entropy loss for semi-supervised multi-class classification over all the labeled nodes:
\begin{equation}
\label{equ:11}
L_{cross}=-\sum_{v \in V_{L}} \boldsymbol{Y}_{v} \ln \hat{\boldsymbol{Y}}_{v}.
\end{equation}
The label distributions $\boldsymbol{\mu}_{v}$ and covariance matrices $\boldsymbol{\Sigma}_{v}$ are learned jointly with the other parameters $\Theta^{l}$ by minimizing the objective function:
\begin{equation}
\label{equ:12}
L=L_{cross}+\lambda_{1} L_{smooth}+\lambda_{2} L_{label}.
\end{equation}
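For illustration, a direct, unoptimized transcription of the objective in Eq.~(\ref{equ:12}) (the inputs are assumed to be NumPy arrays, and the sums are kept as explicit loops for readability):
\begin{verbatim}
import numpy as np

def dcc_gcn_objective(Y_hat, Y, mu, Sigma_inv, edges, train_idx,
                      lam1=1.0, lam2=1.0, phi=1.0):
    """L = L_cross + lam1 * L_smooth + lam2 * L_label."""
    c = Y.shape[1]
    L_cross = -np.sum(Y[train_idx] * np.log(Y_hat[train_idx] + 1e-12))
    L_smooth = sum((mu[u] - mu[v]) @ (Sigma_inv[u] + Sigma_inv[v]) @ (mu[u] - mu[v])
                   for u, v in edges)
    L_label = sum((mu[v] - Y[v]) @ (Sigma_inv[v] + np.eye(c) / phi) @ (mu[v] - Y[v])
                  for v in train_idx)
    return L_cross + lam1 * L_smooth + lam2 * L_label
\end{verbatim}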
\section{Experiment}
\label{4-Experiment}
In this section, we conduct experiments on several datasets to demonstrate the effectiveness of our proposed DCC-GCN model.
\subsection{Datasets}
\label{sec:4-1}
We choose four benchmark citation datasets: Cora, Citeseer, Pubmed \cite{article8} and CoraFull \cite{article10}, where nodes denote documents, undirected edges denote citations between documents, and nodes are labelled according to the topic of the paper. We also select three other publicly available datasets. Nodes in the ACM dataset \cite{article15} denote papers, and edges indicate that two papers share a common author; the citation datasets and ACM all use keyword representations of the articles as features. Flickr \cite{article18} is an image-sharing website that allows users to share photos and videos; it is a social network in which users are represented by nodes, user relations by edges, and nodes are organized into nine classes according to users' interests. UAI2010 is a dataset that has been tested for community detection with GCN in \cite{article26}. Table \ref{tab:1} gives an overview of the seven datasets.
\begin{table}[h]\small
\centering
\caption{Details of the datasets used in the paper.}
\label{tab:1}
\begin{tabular}{ccccc}
\hline\noalign{\smallskip}
Dataset &Classes &Features &Nodes &Edges \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Cora &7 &1,433 &2,708 &5,429 \\
Citeseer &6 &3,703 &3,327 &4,732 \\
Pubmed &3 &500 &19,717 &44,338 \\
CoraFull &70 &8,710 &19,793 &65,331 \\
ACM &3 &1,870 &3,025 &13,128\\
Flickr &9 &12,047 &7,575 &239,738\\
UAI2010 &19 &4,973 &3,067 &28,311\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\subsection{Baseline}
\label{sec:4-2}
To evaluate our proposed DCC-GCN, we compare it to the following baselines:
\begin{itemize}
\item
\textbf{LP} \cite{article14} propagates a node's label to neighboring nodes according to the distance between the node and its neighbors.
\item
\textbf{GCN} \cite{article8} uses a localized first-order approximation of spectral graph convolution for semi-supervised learning on graph-structured data.
\item
\textbf{GAT} \cite{article11} uses an attention mechanism to determine the weights of a node's neighbors. By adaptively assigning weights to different neighbors, the performance of the graph neural network is improved.
\item
\textbf{DGCN} \cite{article22} uses two parallel graph convolution modules with shared parameters that differ only in the graph structure information given as input.
\item
\textbf{AGNN} \cite{article23} introduces an attention mechanism in the propagation layer, so that the central node attends differently to each neighbor during feature aggregation.
\item
\textbf{ConfGCN} \cite{article3} jointly estimates label scores and their confidence values in a GCN, using the estimated confidence values to determine the influence of one node on another.
\item
\textbf{E-GCN} \cite{article4} uses both the given labels and the estimated labels for the topology optimization of the GCN model.
\end{itemize}
\subsection{Node Classification Results}
\label{sec:4-3}
In order to make a fair comparison, we do not use a validation set and take all samples except the training set as the test set. We use accuracy (ACC) and macro F1-score (F1) to evaluate the performance of the models. Twenty samples from each category are selected as the training set, and Table \ref{tab:2} reports the average of all results over 10 runs.
\begin{table}[h]
\centering
\caption{Classification accuracies of compared methods on the CoraFull, ACM, Flickr, and UAI2010 datasets. We ignore 3 classes with fewer than 50 nodes in the CoraFull dataset (since the test set has too few nodes in these classes).}
\label{tab:2}
\resizebox{\textwidth}{!}{
\begin{tabular}{cccccccccc}
\hline\noalign{\smallskip}
Dataset &Metrics &LP &GCN &GAT &AGNN &DGCN &ConfGCN &E-GCN &DCC-GCN\\
\noalign{\smallskip}\hline\noalign{\smallskip}
\multirow{2}*{CoraFull}&ACC&30.2&54.3&55.4&55.6&54.9&\underline{55.9}&55.7&\textbf{56.8}\\
&F1 &27.4&49.7&51.6&52.0&50.7&\underline{52.3}&52.1&\textbf{52.3}\\
\noalign{\smallskip}\hline\noalign{\smallskip}
\multirow{2}*{ACM} &ACC&61.5&86.5&86.2&85.5&85.9&87.9&\underline{88.1}&\textbf{88.9}\\
&F1 &61.1&86.4&86.0&83.2&84.3&85.3&\underline{85.6}&\textbf{87.8}\\
\noalign{\smallskip}\hline\noalign{\smallskip}
\multirow{2}*{Flickr} &ACC&24.3&41.5&38.5&38.8&42.1&\underline{44.1}&43.8&\textbf{72.0}\\
&F1 &20.8&40.2&36.8&37.1&40.1&\underline{43.3}&42.7&\textbf{72.8}\\
\noalign{\smallskip}\hline\noalign{\smallskip}
\multirow{2}*{UAI2010} &ACC&41.7&47.6&49.3&50.2&50.4&\underline{56.2}&55.8&\textbf{69.7}\\
&F1 &32.6&35.9&37.2&39.0&41.2&\underline{43.2}&42.7&\textbf{48.2}\\
\noalign{\smallskip}\hline
\end{tabular}}
\end{table}
We observe a significant improvement in the effectiveness of DCC-GCN compared to GCN on the Flickr and UAI2010 datasets. This is because our method extracts the feature information of the nodes more effectively. See Section \ref{sec:5-4} for a detailed analysis of this phenomenon.
\subsection{Results under Scarce Labeled Training Data}
\label{sec:4-4}
In order to evaluate our model more comprehensively, we set up training sets with different label rates on the Cora, Citeseer and Pubmed datasets: \{0.5\%, 1\%, 1.5\%, 2\%\} for Cora and Citeseer, and \{0.03\%, 0.05\%, 0.07\%, 0.1\%\} for Pubmed. The results of the semi-supervised node classification for the different models are summarised in Tables \ref{tab:3} to \ref{tab:5}.
\begin{table}[ht]\small
\centering
\caption{Classification Accuracy on Cora.}
\label{tab:3}
\begin{tabular}{ccccc}
\multicolumn{5}{c}{\textbf{Cora Dataset}}\\
\hline\noalign{\smallskip}
\textbf{Label Rate} & 0.5\% & 1\% & 1.5\% & 2\% \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\textbf{LP} &57.6 &61.2 &62.4 &63.4\\
\textbf{GCN} &54.2 &61.0 &66.2 &72.8\\
\textbf{GAT} &54.3 &60.3 &66.5 &72.5\\
\textbf{AGNN} &54.8 &60.9 &66.7 &72.8\\
\textbf{DGCN} &56.3 &62.6 &67.3 &73.1\\
\textbf{ConfGCN} &60.1 &63.6 &69.5 &73.5\\
\textbf{E-GCN} &60.8 &65.2 &70.4 &73.4\\
\noalign{\smallskip}\hline\noalign{\smallskip}
\textbf{DCC-GCN} &\textbf{63.9} &\textbf{67.2} &\textbf{71.8} &\textbf{74.6}\\
\textbf{GAIN} &\textbf{3.1} &\textbf{2.0} &\textbf{1.4} &\textbf{1.1}\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\begin{table}[ht]\small
\centering
\caption{Classification Accuracy on Citeseer.}
\label{tab:4}
\begin{tabular}{ccccc}
\multicolumn{5}{c}{\textbf{Citeseer Dataset}}\\
\hline\noalign{\smallskip}
\textbf{Label Rate} & 0.5\% & 1\% & 1.5\% & 2\% \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\textbf{LP} &39.6 &43.2 &45.5 &48.2\\
\textbf{GCN} &46.6 &56.3 &59.8 &64.8\\
\textbf{GAT} &46.7 &56.6 &60.3 &65.2\\
\textbf{AGNN} &47.8 &57.1 &60.5 &65.3\\
\textbf{DGCN} &48.2 &58.3 &61.8 &65.8\\
\textbf{ConfGCN} &49.2 &61.3 &63.2 &66.7\\
\textbf{E-GCN} &51.2 &60.7 &64.0 &66.4\\
\noalign{\smallskip}\hline\noalign{\smallskip}
\textbf{DCC-GCN} &\textbf{53.3} &\textbf{62.8} &\textbf{65.3} &\textbf{67.9}\\
\textbf{GAIN} &\textbf{2.1} &\textbf{1.5} &\textbf{1.3} &\textbf{1.2}\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\begin{table}[ht]\small
\centering
\caption{Classification Accuracy on Pubmed.}
\label{tab:5}
\begin{tabular}{ccccc}
\multicolumn{5}{c}{\textbf{Pubmed Dataset}}\\
\hline\noalign{\smallskip}
\textbf{Label Rate} &0.03\% &0.05\% &0.07\% &0.1\% \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\textbf{LP} &59.4 &61.8 &62.3 &63.6\\
\textbf{GCN} &57.2 &59.8 &63.4 &67.6\\
\textbf{GAT} &57.8 &60.4 &64.5 &67.8\\
\textbf{AGNN} &57.9 &60.3 &64.3 &68.0\\
\textbf{DGCN} &58.3 &61.0 &64.6 &68.2\\
\textbf{ConfGCN} &60.8 &62.3 &66.1 &69.2\\
\textbf{E-GCN} &61.4 &62.8 &66.9 &68.9\\
\noalign{\smallskip}\hline\noalign{\smallskip}
\textbf{DCC-GCN} &\textbf{63.3} &\textbf{63.9} &\textbf{67.8} &\textbf{70.0}\\
\textbf{GAIN} &\textbf{1.9} &\textbf{1.1} &\textbf{0.9} &\textbf{0.8}\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
We have the following observations:
Our proposed DCC-GCN consistently outperforms GCN, GAT, AGNN, DGCN, ConfGCN and E-GCN on all the datasets. It is evident that when labeled data is insufficient, GCN's performance suffers significantly due to inefficient label information propagation; for example, on the Cora and Pubmed datasets, GCN performs even worse than LP \cite{article14} when the training size is limited. Moreover, the gain of DCC-GCN over the strongest baseline is largest at the lowest label rates, demonstrating the effectiveness of this framework on graphs with few labeled nodes.
\section{Discussion}
\label{5-Discussion}
\subsection{Effect of Low-confidence Samples Calibration}
\label{sec:5-1}
According to Analysis 1 in Section \ref{3-Methods}, the critical factor in increasing the model's performance is the calibration of the low-confidence samples. In this section, we test the accuracy of low-confidence samples before and after the calibration through the graph structure.
Before calibrating the low-confidence samples, we first employ GCN as the base model to learn a label distribution for each node. The number of training epochs is set to 100, 100 and 200 for Cora, Citeseer and Pubmed, respectively, under the different label rates. In Figure \ref{fig:4}, the original and calibrated accuracies of the low-confidence samples are plotted separately for Cora, Citeseer, and Pubmed under different label rates. It is evident that the accuracy of the low-confidence samples improves significantly after the calibration.
\begin{figure}[h]
\centering
\subfloat[Cora]{
\begin{minipage}[h]{0.32\linewidth}
\centering
\includegraphics[width=2in]{4a.png}
\end{minipage}%
}%
\subfloat[Citeseer]{
\begin{minipage}[h]{0.32\linewidth}
\centering
\includegraphics[width=2in]{4b.png}
\end{minipage}%
}%
\subfloat [Pubmed]{
\begin{minipage}[h]{0.32\linewidth}
\centering
\includegraphics[width=2in]{4c.png}
\end{minipage}%
}%
\caption{The original and calibrated accuracy of the low-confidence samples.}
\label{fig:4}
\centering
\end{figure}
\subsection{Reliability of High-confidence Samples}
\label{sec:5-2}
In SSL, training bias \cite{article31} is a common problem. When the model generates false predictions with high confidence, these false predictions further reinforce the bias of the model and lead to a deterioration of its performance. Selecting more reliable high-confidence samples is therefore the key to improving model performance.
\par In this section, we conduct a simple experiment to verify the reliability of the high-confidence samples selected based on dual-channel consistency. First, we obtain high-confidence samples based on DCC-GCN, and then randomly select 100 of them to join the training of the original GCN model as pseudo-labeled samples. We also use the traditional method of obtaining pseudo-labels \cite{article42} as a comparison. The loss function of the model is:
\begin{equation}
\label{equ:13}
Loss=L_{s}+\alpha \cdot L_{p},
\end{equation}
where $\alpha=0.3$ after the first 100 training epochs and $\alpha=0$ otherwise. $L_{s}$ and $L_{p}$ denote the cross-entropy losses for the labelled and pseudo-labelled samples, respectively. For all datasets we use the same hyper-parameters, training set and test set as in \cite{article8}. The experimental results are shown in Table \ref{tab:6}.
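A sketch of this training loss (illustrative only; the one-hot pseudo-labels of the selected high-confidence samples are an assumed input):
\begin{verbatim}
import numpy as np

def pseudo_label_loss(Y_hat, Y, labelled_idx, pseudo_idx, pseudo_Y, epoch):
    """Loss = L_s + alpha * L_p, with alpha switched on after 100 epochs."""
    alpha = 0.3 if epoch > 100 else 0.0
    L_s = -np.sum(Y[labelled_idx] * np.log(Y_hat[labelled_idx] + 1e-12))
    L_p = -np.sum(pseudo_Y * np.log(Y_hat[pseudo_idx] + 1e-12))
    return L_s + alpha * L_p
\end{verbatim}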
\begin{table}[htb]
\begin{center}
\caption{Classification Accuracy on Cora, Citeseer and Pubmed.}
\label{tab:6}
\begin{tabular}{cccc}
\hline Method &Cora &Citeseer &Pubmed\\
\hline GCN &81.5 &70.3 &79.0 \\
GCN + pseudo-labels (\cite{article42}) &81.5 &70.3 &79.1 \\
GCN + pseudo-labels (DCC-GCN) &\textbf{83.1} &\textbf{72.1} &\textbf{79.1} \\
\hline
\end{tabular}
\end{center}
\end{table}
Experimental results show that selecting pseudo-labels from the output of a single GCN is ineffective in improving the model's performance, since such samples are most likely simple samples that do not help much in training. Adding high-confidence samples selected by DCC-GCN directly to the training set as pseudo-labeled samples improves the model's performance, which shows that high-confidence samples can be reliably selected based on DCC-GCN.
\subsection{Analysis of Neighborhood Hop for Calibration}
\label{sec:5-3}
In this section, we explore the effect of the neighbourhood hop used for the calibration. On the Cora, Citeseer, and Pubmed datasets, we vary $m$ from 1 to 4 under different label rates, and the experimental results are shown in Figure \ref{fig:5}.
\begin{figure}[h]
\centering
\subfloat[Cora]{
\begin{minipage}[h]{0.32\linewidth}
\centering
\includegraphics[width=2in]{5a.png}
\end{minipage}%
}%
\subfloat[Citeseer]{
\begin{minipage}[h]{0.32\linewidth}
\centering
\includegraphics[width=2in]{5b.png}
\end{minipage}%
}%
\subfloat [Pubmed]{
\begin{minipage}[h]{0.32\linewidth}
\centering
\includegraphics[width=2in]{5c.png}
\end{minipage}%
}%
\caption{Analysis of neighbourhood hop used during the calibration process.}
\label{fig:5}
\centering
\end{figure}
With only the 1-hop neighbors used during the calibration, DCC-GCN does not improve the accuracy significantly, and in some cases the classification accuracy is even lower than that of GCN on the Cora and Pubmed datasets. However, DCC-GCN with a neighbourhood hop larger than 1 for the low-confidence sample calibration outperforms the original GCN by a large margin, especially when the graph has a low label rate.
\subsection{Ablation Study}
\label{sec:5-4}
In this section, we remove various parts of our model to study the impact of each component. We first demonstrate that the calibration of low-confidence samples based on dual-channel consistency has a large effect on the results compared with the model without calibration, and then show that the dual-channel aggregation is more beneficial than using one channel only.
For simplicity, we use DCC-GCN (w/o Calibration) and DCC-GCN (w/o Aggregation) to denote the reduced models obtained by removing the calibration of low-confidence samples and the dual-channel aggregation, respectively. The comparison is shown in Table \ref{tab:7}.
\begin{table}[h]\small
\centering
\caption{Ablation study of the calibration of low-confidence samples and the dual-channel aggregation on the CoraFull, ACM, Flickr, and UAI2010 datasets.}
\label{tab:7}
\begin{tabular}{cccccc}
\hline\noalign{\smallskip}
Method &Metrics &CoraFull &ACM &Flickr &UAI2010\\
\noalign{\smallskip}\hline\noalign{\smallskip}
\multirow{2}*{DCC-GCN} &ACC&\textbf{57.1}&\textbf{88.9}&\textbf{72.0}&\textbf{69.7}\\
&F1 &\textbf{52.4}&\textbf{87.8}&\textbf{72.8}&\textbf{48.2}\\
\noalign{\smallskip}\hline\noalign{\smallskip}
\multirow{2}*{DCC-GCN(w/o Calibration)} &ACC&55.4&86.7&70.1&65.8\\
&F1 &50.6&86.4&70.6&43.9\\
\noalign{\smallskip}\hline\noalign{\smallskip}
\multirow{2}*{DCC-GCN(w/o Aggregation)} &ACC&56.8&88.1&45.1&48.5\\
&F1&51.1 &87.2&43.2&36.3\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
The results of the original model are consistently better than those of the two variants, indicating the effectiveness of using the two components together. ACC and F1 decrease when either component is dropped from the framework, which reveals that the proposed calibration of low-confidence samples and the dual-channel aggregation module both substantially improve performance on the semi-supervised learning task.
In addition, the improvement of DCC-GCN over DCC-GCN (w/o Aggregation) is more substantial on the UAI2010 and Flickr datasets. To explain this, we visualize the embeddings of the Flickr dataset generated by GCN, MLP and DCC-GCN using t-SNE \cite{article41} in Figure \ref{fig:6}.
\begin{figure}[h]
\centering
\subfloat[GCN]{
\begin{minipage}[h]{0.32\linewidth}
\centering
\includegraphics[width=2in]{6a.png}
\end{minipage}%
}%
\subfloat[MLP]{
\begin{minipage}[h]{0.32\linewidth}
\centering
\includegraphics[width=2in]{6b.png}
\end{minipage}%
}%
\subfloat [DCC-GCN]{
\begin{minipage}[h]{0.32\linewidth}
\centering
\includegraphics[width=2in]{6c.png}
\end{minipage}%
}%
\caption{T-SNE embeddings of nodes in Flickr dataset. (a), (b) and (c) denote representations learned by GCN, MLP, and DCC-GCN respectively.}
\label{fig:6}
\centering
\end{figure}
The classification accuracies of GCN and MLP are 41.5\% and 57.3\%, respectively, which indicates that on this dataset the node features contain more information than the graph topology. As can be observed, the embeddings generated by DCC-GCN exhibit more coherent clusters than those of the other two methods. This is because DCC-GCN can adaptively combine information from the topology and the node features by training two classifiers based on the graphs $\mathcal{G}$ and $\mathcal{G}^{\prime}$ in the dual channels.
\section{Conclusion}
\label{6-Conclusion}
In this paper, we propose a semi-supervised learning framework for GCNs named DCC-GCN, based on dual-channel consistency. DCC-GCN selects low-confidence samples through dual-channel consistency and calibrates them using the high-confidence samples in their neighbourhood. Moreover, DCC-GCN can learn the graph structure that is most suitable for graph-based semi-supervised node classification. Experiments on several benchmarks demonstrate that our method is superior to state-of-the-art graph-based semi-supervised learning methods.
\section{Acknowledgements}
\label{7-Acknowledgements}
This work was supported by the National Key R\&D Program of China under Grant No. 2017YFB1002502.
\bibliographystyle{elsarticle-harv}
\section{Introduction}
The problem of compressive sensing originated in the context of Fourier series \cite{MR2230846}. The aim is to reconstruct a linear combination of a small number of complex exponentials from as few samples as possible, when only the number of the exponentials entering the linear combination is known. The additional challenge was to come up with practical and efficient methods for the reconstruction (which by its combinatorial nature is NP-hard, unless extra information is available).
Later on, the compressed sensing problem evolved to include a more general setup. The overall problems and main challenges, however, remained the same, and they concerned mostly the construction of sampling schemes that would allow (and guarantee) efficient reconstruction from as few measurements as possible, and the design of efficient reconstruction algorithms. For the latter, $\ell^1$-minimization turned out to be a popular choice, and the chief technical condition to guarantee success for the reconstruction method was the {\em restricted isometry property} (RIP) (see \cite{MR2243152} for the first introduction of these ideas, and \cite{MR3100033} for an in-depth study). However, there still remained the problem of constructing measurement matrices (or, in the Fourier case, sampling sets) for which the RIP was actually provably fulfilled. An important (somewhat partial) answer to this problem was provided by random methods; e.g., in the case of random sampling of Fourier matrices the RIP assumption turns out
to be true under rather weak assumptions on the number of samples \cite{MR2417886}, at least with high probability.
However, the case of {\em deterministic sampling sets} with {\em provably} guaranteed reconstruction poses altogether different challenges. Firstly, the verification of properties like RIP is a very complex problem by itself \cite{MR3164973}, hence special care must be taken to allow such estimates. A first successful example for a deterministic construction with the RIP property was presented by DeVore \cite{MR2371999}. For the Fourier setting, in \cite{b1,MR2628828} sampling sets and inversion algorithms were constructed for cyclic groups; an alternative construction of deterministic sampling sets guaranteeing RIP for the cyclic case was developed in \cite{5464880}. All these constructions have a common restriction, commonly known as the {\em quadratic bottleneck}: In order to reconstruct linear combinations of $t$ basis vectors, they need $O(t^2)$ samples. A more recent paper by Bourgain and collaborators \cite{MR2817651} managed to improve this to $O(t^{2-\epsilon})$, for a very small $\epsilon>0$, using
rather
involved arguments from additive combinatorics.
This paper considers efficiently sampling Fourier-sparse vectors on finite abelian groups. It can be seen as complementary to \cite{b1,5464880,MR2628828}, with the main difference being that this paper focuses on {\em finite vector spaces} rather than cyclic groups. We develop a general, simple scheme for the design of sampling sets, together with algorithms that allow reconstruction.
Hence the new methods provide an alternative means of explicitly designing {\em universal sampling sets} in a specific family of finite abelian groups $G$, i.e., sampling sets $\Omega \subset G$ that allow the reconstruction of any given linear combination of $t$ characters of $G$ from its restriction to $\Omega$, together with {\em explicit} inversion algorithms, both for the noisy and the noise-free cases. The groups $G$ we consider are finite vector spaces, and the sampling sets will be written as unions of suitable affine subspaces. The sampling sets actually fulfill the RIP property, which allows one to use standard methods such as $\ell^1$-minimization. However, the special structure of the sampling set makes the inversion algorithm particularly amenable to the use of a more structured (and potentially faster) reconstruction algorithm, using FFT methods. It should be stressed, though, that our construction is {\em not} able to beat the quadratic bottleneck.
\section{Notation}\label{s1}
Let $p$ be a prime and $r$ a positive integer. Considering their additive group structures, we have that $(\mathbb{Z}/p\mathbb{Z})^r\cong\mathbb{F}_p^r$. The vector space structure of $\mathbb{F}_p^r$ will enable us to construct the sampling sets needed in the algorithms described in this paper. We will write $\mathbb{F}_p^r$ also when considering only its additive group structure. Also, in order to avoid complicated notation, we will identify, where needed, elements of $\mathbb{Z}$ with their images in $\mathbb{F}_p$, so that for example 1 can be viewed either as an element of $\mathbb{Z}$ or of $\mathbb{F}_p$.
We will write $H\leq G$ for a subgroup $H$ of $G$. Subsets of $\mathbb{F}_p^r$ are subgroups if and only if they are vector spaces (since $\mathbb{F}_p$ is a field of prime order). So when looking at $\mathbb{F}_p^r$ as a vector space we will write $H\leq \mathbb{F}_p^r$ for a subspace $H$. For any subgroup $H\leq G$ we will also write $\mathrm{Rep}(G/H)$ for a set of representatives of cosets of $H$ in $G$.
In Section \ref{s2} we will also be working with both $\mathbb{F}_p$ and $\mathbb{F}_q$, where $q$ is a power of $p$. Since vector spaces over $\mathbb{F}_q$ can be viewed also as vector spaces over $\mathbb{F}_p$, we will write $\dim_{\mathbb{F}_p}(V)$ and $\dim_{\mathbb{F}_q}(V)$ for the dimension of $V$ as a vector space over $\mathbb{F}_p$ or $\mathbb{F}_q$ respectively. Similarly we will write $\mathrm{span}_{\mathbb{F}_p}(A)$ and $\mathrm{span}_{\mathbb{F}_q}(A)$ for the span of $A$ as vector space over $\mathbb{F}_p$ or $\mathbb{F}_q$ respectively.
Let $\widehat{\mathbb{F}_p^r}$ consist of all group homomorphisms $\mathbb{F}_p^r\rightarrow\mathbb{C}^{\times}$, where $\mathbb{C}^{\times}$ denotes the multiplicative group of nonzero complex numbers. We have that
\[\widehat{\mathbb{F}_p^r}=\{\chi_{(y_1,\ldots,y_r)}:y_i\in\mathbb{F}_p\}=\{\chi_y:y\in\mathbb{F}_p^r\},\]
where, if $\omega_p=e^{2\pi i/p}$, we define $\chi_{(y_1,\ldots,y_r)}(x_1,\ldots,x_r):=\omega_p^{x_1y_1+\ldots+x_ry_r}$ for $(x_1,\ldots,x_r)\in\mathbb{F}_p^r$. This is well defined since $\omega_p^p=1$. Also it is easy to check that $\chi_y\chi_z=\chi_{y+z}$ for $y,z\in\mathbb{F}_p^r$.
For any function $f:\mathbb{F}_p^r\rightarrow\mathbb{C}$ let $\widehat{f}:\widehat{\mathbb{F}_p^r}\rightarrow\mathbb{C}$ be its Fourier transform, so that
\[f=\sum_{\chi_y\in\widehat{\mathbb{F}_p^r}}\widehat{f}(\chi_y)\chi_y.\]
Formulas for the Fourier transform for $\mathbb{F}_p^r$ are given by
\[\widehat{f}(\chi_y)=\frac{1}{p^r}\sum_{x\in\mathbb{F}_p^r}f(x)\chi_y(x)^{-1}.\]
Under the bijection $\chi_y\leftrightarrow y$ this corresponds (up to a scalar multiple) to the usual discrete Fourier transform on the grid $\{0,1,\ldots,p-1\}^r$ (under the identification of this grid with $\mathbb{F}_p^r$).
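As a small numerical sanity check of this identification, the following Python sketch (an illustration only) computes $\widehat{f}$ both by the character-sum formula above and via NumPy's multidimensional FFT on the grid $\{0,1,\ldots,p-1\}^r$, including the normalization $1/p^r$:
\begin{verbatim}
import numpy as np
from itertools import product

p, r = 5, 2
omega = np.exp(2j * np.pi / p)

def fhat_direct(f):
    """fhat(chi_y) = p^{-r} sum_x f(x) omega^{-<x,y>}."""
    return {y: sum(f[x] * omega ** (-sum(xi * yi for xi, yi in zip(x, y)))
                   for x in product(range(p), repeat=r)) / p ** r
            for y in product(range(p), repeat=r)}

rng = np.random.default_rng(0)
grid = rng.standard_normal((p,) * r)                  # a test function on F_p^r
f = {x: grid[x] for x in product(range(p), repeat=r)}

F_direct = fhat_direct(f)
F_fft = np.fft.fftn(grid) / p ** r                    # same coefficients on the grid
assert all(np.isclose(F_direct[y], F_fft[y]) for y in F_direct)
\end{verbatim}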
For any function $h:X\rightarrow Y$ and any subset $Z\subseteq X$ let $h|_Z:Z\rightarrow Y$ be the restriction of $h$ to $Z$.
Let now $t$ be a positive integer and assume that $\widehat{f}=\widehat{g}+\epsilon$ with $\widehat{g}$ having support consisting of at most $t$ elements and with $\|\epsilon\|_1$ ``small''. We will present in this paper two algorithms which approximate $\widehat{f}$ by a function $\widehat{f'}$ which, like $\widehat{g}$, also has support consisting of at most $t$ elements. Further $\widehat{f'}$ satisfies
\[\|\widehat{f}-\widehat{f}'\|_1\leq (1+3\sqrt{2})\|\epsilon\|_1,\]
that is $\widehat{f'}$ is close to the best possible such approximation of $\widehat{f}$. In particular if $\|\epsilon\|_1=0$ then $\widehat{f'}=\widehat{f}$, so that the algorithms reconstruct $\widehat{f}$ (and then also $f$) in this case.
The algorithms presented here are based on similar algorithms presented in \cite{b1,MR2628828} for the group of unit length elements of $\mathbb{C}$.
As in \cite{b1,MR2628828} sampling sets will be obtained by taking unions of cosets of subgroups. In \cite{b1,MR2628828} subgroups of distinct prime orders are considered. This cannot be applied here since $\mathbb{F}_p^r$ is a $p$-group. However, using the vector space structure of $\mathbb{F}_p^r$, we can construct many small subgroups with certain ``orthogonality'' properties (as will be seen in the next section), which will be used to construct the sampling sets presented here.
\section{Complexities of sets and algorithms}
Before describing the sampling sets and the corresponding algorithms, we present their size/running time complexities and compare them to those presented in other papers.
\vspace{6pt}
\noindent
\begin{tabular}{|l|l|l|l|}
\hline
&(Size of) group&Sampling set&Time complexity\\
\hline
Theorem \ref{t2}&$\mathbb{F}_p^r$&$O(pt^2r^2)$&$O(p^rt^2r^2)$\\
\hline
Theorem \ref{t9'}&$\mathbb{F}_p^r$&$O(pt^2r^3\log(p))$&$O(p^2t^2r^3\log(p))$\\
\hline
Algorithm 1 of \cite{b1}&$\mathbb{Z}/n\mathbb{Z}$&$O(\frac{t^2\log(n)^2}{\log(t\log(n))})$&$O(\frac{t^2\log(n)^3}{\log(t\log(n))})$\\
\hline
Section 2 of \cite{MR2817651}&$\mathbb{Z}/p\mathbb{Z}$&$O(t^{2-\epsilon})$&$O(pt^{2-\epsilon})$\\%RIP
\hline
Section 3 of \cite{MR2371999}&n&$O(\frac{t^2\log(n/t)^2}{\log(t)^2})$&$O(n\frac{t^2\log(n/t)^2}{\log(t)^2})$\\%RIP
\hline
Section 2 of \cite{5464880}&$\mathbb{Z}/p\mathbb{Z}$&$\Omega(t^2)$&$\Omega(pt^2)$\\%RIP
\hline
Algorithm 1 of \cite{MR2628828}&$\mathbb{Z}/n\mathbb{Z}$&$O(t^2\log(n)^2)$&$O(nt\log(n)^2)$\\
\hline
Algorithm 2 of \cite{MR2628828}&$\mathbb{Z}/n\mathbb{Z}$&$O(t^2\log(n)^4)$&$O(t^2\log(n)^4)$\\
\hline
\end{tabular}
\vspace{6pt}
As can be seen from the table, the second algorithm presented here needs more sampling points than the first one (it needs about $r\log(p)=\log(|\mathbb{F}_p^r|)$ times as many sampling points). However the first algorithm has a factor $p^r=|\mathbb{F}_p^r|$ in its running time, while the second algorithm is much faster.
The complexities for the construction of the sampling sets used in Theorem \ref{t2} and \ref{t9'} are given as follows.
\vspace{6pt}
\noindent
\begin{tabular}{|l|l|}
\hline
&Time complexity for the construction of the sampling set\\
\hline
Theorem \ref{t2}&$O(pt^2r^5\log(p)^2)$\\
\hline
Theorem \ref{t9'}&$O(pt^2r^6\log(p)^3)$\\
\hline
\end{tabular}
\section{Construction of the sampling sets}\label{s2}
The sets constructed here are unions of (shifted) subspaces of $\mathbb{F}_p^r$. We begin with a lemma that will allow us to find certain families of subspaces which will be of basic importance for the construction of our sampling sets.
\begin{lemma}\label{l1}
Let $p$ be a prime. Also let $1\leq h\leq r$ be an integer and define $q:=p^h$ and $s:=\lceil r/h\rceil$. Let $\pi:\mathbb{F}_q^s\rightarrow\mathbb{F}_p^r$ be a surjective homomorphism and fix $a_1,\ldots,a_n\in\mathbb{F}_q^s$. For each integer $1\leq i\leq n$ choose $H_i\leq\mathbb{F}_p^r$ of dimension $h$ containing $\pi(\mathrm{span}_{\mathbb{F}_q}\{ a_i\})$. Fix a positive integer $m\leq n$. If
\[\mathrm{span}_{\mathbb{F}_q}\{ a_{i_1},\ldots,a_{i_m}\}=\mathbb{F}_q^s\]
for every $1\leq i_1<\ldots<i_m\leq n$ then
\[\mathrm{span}_{\mathbb{F}_p}\{ H_{i_1},\ldots,H_{i_m}\}=\mathbb{F}_p^r\]
for every $1\leq i_1<\ldots<i_m\leq n$.
\end{lemma}
In the lemma $\{a_{i_1},\ldots,a_{i_m}\}$ is a subset of $\{a_1,\ldots,a_n\}$ with $m$ elements. Also, if $z$ is a primitive element of $\mathbb{F}_q$, so that elements of $\mathbb{F}_q$ can be written as $b_0+b_1z+\ldots+b_{h-1}z^{h-1}$ with $b_i\in\mathbb{F}_p$, we can take $\pi:\mathbb{F}_q^s\rightarrow\mathbb{F}_p^r$ to be the composition of $\pi_1:\mathbb{F}_q^s\rightarrow\mathbb{F}_p^{hs}$ and $\pi_2:\mathbb{F}_p^{hs}\rightarrow\mathbb{F}_p^r$ given as follows
\begin{align*}
\pi_1(\sum b_i^{(1)}z^i,\ldots,\sum b_i^{(s)}z^i)&:=(b_0^{(1)},\ldots,b_{h-1}^{(1)},\ldots,b_0^{(s)},\ldots,b_{h-1}^{(s)}),\\
\pi_2(c_1,\ldots,c_{hs})&:=(c_1,\ldots,c_r).
\end{align*}
If $\pi(\mathrm{span}_{\mathbb{F}_q}\{ a_i\})$ has dimension $h$ as vector space over $\mathbb{F}_p$, then we have to take $H_i=\pi(\mathrm{span}_{\mathbb{F}_q}\{ a_i\})$. If $\pi(\mathrm{span}_{\mathbb{F}_q}\{ a_i\})$ has dimension less than $h$ as vector space over $\mathbb{F}_p$, then we can choose $H_i=\mathrm{span}_{\mathbb{F}_p}\{\pi(\mathrm{span}_{\mathbb{F}_q}\{ a_i\}),e_1,\ldots,e_j\}$ for a certain $1\leq j\leq r$ (here $e_k$ is the $k$-th standard basis element of $\mathbb{F}_p^r$).
\begin{proof}
Notice that a surjective homomorphism $\pi:\mathbb{F}_q^s\rightarrow\mathbb{F}_p^r$ always exists since, as $\mathbb{F}_p$-vector spaces,
\[\mathbb{F}_p^r\leq\mathbb{F}_p^{h\times\lceil r/h\rceil}\cong\mathbb{F}_q^s.\]
In particular $\dim_{\mathbb{F}_p}(\mathbb{F}_p^r)\leq\dim_{\mathbb{F}_p}(\mathbb{F}_q^s)$. Also we can construct $H_i$, as
\[\dim_{\mathbb{F}_p}(\pi(\mathrm{span}_{\mathbb{F}_q}\{ a_i\}))\leq\dim_{\mathbb{F}_p}(\mathrm{span}_{\mathbb{F}_q}\{ a_i\})=h\dim_{\mathbb{F}_q}(\mathrm{span}_{\mathbb{F}_q}\{ a_i\})\leq h.\]
If $\mathrm{span}_{\mathbb{F}_q}\{ a_{i_1},\ldots,a_{i_m}\}=\mathbb{F}_q^s$ then
\[\mathbb{F}_p^r\geq\mathrm{span}_{\mathbb{F}_p}\{ H_{i_1},\ldots,H_{i_m}\}\geq\pi(\mathrm{span}_{\mathbb{F}_q} \{a_{i_1},\ldots,a_{i_m}\})=\pi(\mathbb{F}_q^s)=\mathbb{F}_p^r\]
and so the lemma follows.
\end{proof}
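The explicit maps $\pi_1$ and $\pi_2$ described before the proof are straightforward to implement; a minimal sketch (elements of $\mathbb{F}_q^s$ are represented as $s$-tuples of length-$h$ coefficient vectors over $\mathbb{F}_p$ with respect to the basis $1,z,\ldots,z^{h-1}$):
\begin{verbatim}
def pi(a, p, h, r):
    """pi = pi_2 o pi_1 : F_q^s -> F_p^r with q = p^h and s = len(a).

    a is a tuple of s coefficient tuples (b_0, ..., b_{h-1}) over F_p."""
    flat = [b % p for coords in a for b in coords]   # pi_1: expand F_q-coordinates
    return tuple(flat[:r])                           # pi_2: keep the first r entries

# example with p = 3, h = 2 (so q = 9), s = 2 and r = 3
print(pi(((1, 2), (0, 1)), p=3, h=2, r=3))           # -> (1, 2, 0)
\end{verbatim}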
If some subspaces $H_1,\ldots,H_n$ of $\mathbb{F}_p^r$ satisfy $\mathrm{span}_{\mathbb{F}_p}\{ H_{i_1},\ldots,H_{i_m}\}=\mathbb{F}_p^r$ for every $1\leq i_1<\ldots<i_m\leq n$, we say that $H_1,\ldots,H_n$ are \emph{$m$-generating}. Similarly, if $a_1,\ldots,a_n\in\mathbb{F}_q^s$ satisfy $\mathrm{span}_{\mathbb{F}_q}\{ a_{i_1},\ldots,a_{i_m}\}=\mathbb{F}_q^s$ for every $1\leq i_1<\ldots<i_m\leq n$, we say that $\{a_1,\ldots,a_n\}$ is \emph{$m$-generating}. For $m=s$ we have that $\{a_1,\ldots,a_n\}\subseteq \mathbb{F}_q^s$ is $s$-generating if and only if any $s$ of its elements are linearly independent. So a subset of $\mathbb{F}_q^s$ is $s$-generating if and only if it has full spark.
It is not always possible to construct an $m$-generating subset of $\mathbb{F}_q^s$ of size $n$ for arbitrary $n$. In the next lemma we show, though, that $\mathbb{F}_q^s$ has an $s$-generating multiset of size $q+1$ (we consider multisets rather than sets in order to also cover the case $s=1$) and construct one such multiset explicitly.
\begin{lemma}\label{t51}
Let
\[A:=\{(1,x,\ldots,x^{s-1}):x\in\mathbb{F}_q\}\cup\{(0,\ldots,0,1)\in\mathbb{F}_q^s\}.\]
Then $A\subseteq\mathbb{F}_q^s$ is $s$-generating and $|A|=q+1$.
\end{lemma}
\begin{proof}
It is clear that $|A|=q+1$. For the proof of $A$ being $s$-generating see the beginning of Chapter 11 \S 5 of \cite{ms}. This can also be seen by noticing that the matrices obtained from any $s$ distinct elements of $A$ all have one of the following forms (up to exchanging rows):
\[\left(\begin{array}{cccc}
1&x_1&\cdots&x_1^{s-1}\\
\vdots&\vdots&&\vdots\\
1&x_s&\cdots&x_s^{s-1}
\end{array}\right)\hspace{24pt}\mbox{or}\hspace{24pt}\left(\begin{array}{ccccc}
1&x_1&\cdots&x_1^{s-2}&x_1^{s-1}\\
\vdots&\vdots&&\vdots&\vdots\\
1&x_{s-1}&\cdots&x_{s-1}^{s-2}&x_{s-1}^{s-1}\\
0&0&\cdots&0&1
\end{array}\right)\]
with distinct $x_j$ and so such matrices are invertible.
\end{proof}
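For prime $q=p$ the statement can also be checked by brute force; the following sketch builds $A$ and verifies that every $s\times s$ minor is invertible modulo $p$, i.e.\ that any $s$ elements of $A$ span $\mathbb{F}_p^s$:
\begin{verbatim}
from itertools import combinations
import numpy as np

p, s = 7, 3
A = [tuple(pow(x, i, p) for i in range(s)) for x in range(p)]
A.append(tuple([0] * (s - 1) + [1]))           # the extra vector (0,...,0,1)
assert len(A) == p + 1

for subset in combinations(A, s):
    det = round(np.linalg.det(np.array(subset, dtype=float)))
    assert det % p != 0                        # any s elements are independent
print("A is s-generating, |A| =", len(A))
\end{verbatim}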
In order to construct one of the two sampling sets needed in the algorithms, we still need a stable sampling set for 1-sparse Fourier signals on $\mathbb{F}_p$.
We will show in the next lemma how such a set can be found. In the same lemma we will also give an algorithm which, applied to the function $f(x)=\sum_{y=0}^{p-1} a_y\omega_p^{yx}$, under certain assumptions on $f$, returns $y'$ with $|a_{y'}|$ maximal. In order to state the theorem we need the following definition.
\begin{defi}
For $d>0$ and $x\in\mathbb{R}$ let $|x|_d:=\mathrm{dist}(x,d\mathbb{Z})$ be the minimum distance of $x$ from an integer multiple of $d$.
\end{defi}
\begin{lemma}\label{t11}
Let $K:=\{0\}\cup\{2^i:0\leq i\leq k\mbox{ and }i\in\mathbb{Z}\}\subseteq\mathbb{F}_p$, where $k\in\mathbb{Z}$ is minimal such that $2^k\geq p/3$. Let $f(x)=\sum_{y=0}^{p-1} a_y\chi_y(x)$. If there exists $y'$ such that $|a_{y'}|>2\sum_{y\not=y'}|a_y|$, then the following algorithm returns $y'$.
\begin{algorithmic}
\State Set $b(2^l):=\arg(f(2^l)/f(0))$, for $0\leq l\leq k$ (if $f(2^l)$ or $f(0)$ are zero set $b(2^l):=0$).
\State Set $e:=(0,\ldots,0)$ ($Length(e)=p$).
\For {$l$ from $0$ to $k$}
\For {$j$ from $0$ to $p-1$}
\If {$e(j+1)=0$ and $|p\,b(2^l)/(2\pi)-2^lj|_p\geq p/6$}
\State $e(j+1):=1$
\EndIf
\EndFor
\EndFor
\noindent\Return the index $j$ such that $e(j+1)=0$. If such a $j$ does not exist, return 0.
\end{algorithmic}
\end{lemma}
\begin{proof}
If no $y'$ exists then there is nothing to prove, so we will now assume that there exists $y'$ with $|a_{y'}|>2\sum_{y\not=y'}|a_y|$. Then, for $x\in\mathbb{F}_p$, we have that
\[\left|\sum_{y\not=y'} a_y\chi_y(x)\right|=\left|\sum_{y\not=y'} a_y\omega_p^{xy}\right|\leq \sum_{y\not=y'}\left| a_y\omega_p^{xy}\right|=\sum_{y\not=y'}\left| a_y\right|<|a_{y'}|/2\]
and so
\[|f(x)|=\left|\sum_{y=0}^{p-1} a_y\chi_y(x)\right|=\left|\sum_{y=0}^{p-1} a_y\omega_p^{xy}\right|\geq \left|a_{y'}\omega_p^{xy'}\right|-\left|\sum_{y\not=y'} a_y\omega_p^{xy}\right|>|a_{y'}|/2\geq 0.\]
So $f(x)\not= 0$ for every $x\in\mathbb{F}_p$. In particular $b(2^l)=\arg(f(2^l)/f(0))$ for every $0\leq l\leq k$, that is, the case $b(2^l):=0$ in the algorithm never occurs.
For $z\in\mathbb{C}$ with $|z|<1/2$ we have that $|\arg(1+z)|_{2\pi}<\pi/6$. It then follows that
\[|\arg(f(x))-\arg(a_{y'}\omega_p^{xy'})|_{2\pi}=|\arg(f(x)/(a_{y'}\omega_p^{xy'}))|_{2\pi}< \pi/6\]
and so
\begin{align*}
|\arg(f(x)/f(0))-\arg(\omega_p^{xy'})|_{2\pi}&=|\arg(f(x))-\arg(f(0))\\
&\hspace{24pt}-\arg(a_{y'}\omega_p^{xy'})+\arg(a_{y'})|_{2\pi}\\
&\leq|\arg(f(x))-\arg(a_{y'}\omega_p^{xy'})|_{2\pi}\\
&\hspace{24pt}+|\arg(f(0))-\arg(a_{y'})|_{2\pi}\\
&< \pi/3
\end{align*}
for every $x\in\mathbb{F}_p$. Then
\begin{align*}
|p\,b(2^l)/(2\pi)-2^ly'|_p&=|p/(2\pi)(\arg(f(2^l)/f(0))-\arg(\omega_p^{2^ly'}))|_p\\
&=p/(2\pi) |\arg(f(2^l)/f(0))-\arg(\omega_p^{2^ly'})|_{2\pi}\\
&< p/6
\end{align*}
for $0\leq l\leq k$, in particular $e(y'+1)=0$.
We will now show that if $y\not= y'$ then $e(y+1)=1$, which will then prove the lemma. In order to do this we will show that if $j\not=0$ then there exists $l$ such that $0\leq l\leq k$ and $|2^lj|_p\geq p/3$, which also proves that if $y\not= y'$ then there exists $l$, $0\leq l\leq k$, with $|2^l(y'-y)|_p\geq p/3$. Hence, for the same $l$ we have that
\[|p\,b(2^l)/(2\pi)-2^ly|_p\geq|2^l(y'-y)|_p-|p\,b(2^l)/(2\pi)-2^ly'|_p\geq p/3-p/6=p/6.\]
Assume that $j\not\equiv 0 \mod p$ and that $|2^lj|_p<p/3$ for all $0\leq l\leq k$. Then, up to a multiple of $p$, we also have that $j\in\pm\{1,\ldots,\lceil{p/3}\rceil-1\}$ (by considering the case $l=0$). As $|2^lj|_p=|2^l(-j)|_p$ we can assume that $j\in\{1,\ldots,\lceil{p/3}\rceil-1\}$. Let $l$ be minimal such that $2^lj\geq p/3$. As $1\leq j<p/3$ we have that $1\leq l\leq k$ by definition of $k$. As $1\leq 2^{(l-1)}j<p/3$ it follows that $p/3\leq 2^lj<2p/3$ and so $|2^lj|_p\geq p/3$. Since $1\leq l\leq k$ this gives a contradiction and so the lemma is proved.
\end{proof}
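A direct transcription of the algorithm of Lemma \ref{t11} into Python reads as follows (a sketch for illustration; $f$ is assumed to be available as a callable on $\mathbb{F}_p$, and the vector $e$ is indexed from $0$ instead of $1$):
\begin{verbatim}
import cmath
import math

def dist_mod(x, d):
    """|x|_d: distance from x to the nearest integer multiple of d."""
    return min(x % d, d - (x % d))

def recover_dominant_frequency(f, p):
    """Return y' with |a_{y'}| > 2 sum_{y != y'} |a_y|, if such y' exists,
    using only the samples f(x) for x in K = {0} u {2^l : 0 <= l <= k}."""
    k = max(0, math.ceil(math.log2(p / 3)))            # minimal k with 2^k >= p/3
    b = {}
    for l in range(k + 1):
        x = pow(2, l, p)
        b[l] = cmath.phase(f(x) / f(0)) if f(0) != 0 and f(x) != 0 else 0.0
    e = [0] * p
    for l in range(k + 1):
        for j in range(p):
            if e[j] == 0 and dist_mod(p * b[l] / (2 * math.pi) - 2 ** l * j, p) >= p / 6:
                e[j] = 1
    for j in range(p):
        if e[j] == 0:
            return j
    return 0

# quick check: dominant frequency 4 with a small extra term
p = 31
omega = cmath.exp(2j * cmath.pi / p)
f = lambda x: 3 * omega ** (4 * x) + 0.1 * omega ** (11 * x)
print(recover_dominant_frequency(f, p))                # expected output: 4
\end{verbatim}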
We will now construct the sampling sets which will be used in the algorithms presented in the next section.
\begin{defi}\label{d1}
Let $m\geq 1$ and $n:=4t(m-1)+1$ and let $H_1,\ldots,H_n\leq \mathbb{F}_p^r$ be $m$-generating and all of dimension $h$ with $1\leq h\leq r$. Also let $K\subseteq\mathbb{F}_p$ be as in Lemma \ref{t11}. For $1\leq i\leq n$ choose $x_{1,i},\ldots,x_{r-h,i}\in\mathbb{F}_p^r$ such that $\mathrm{span}_{\mathbb{F}_p}\{ H_i,x_{1,i},\ldots,x_{r-h,i}\}=\mathbb{F}_p^r$. The wanted sets are
\begin{align*}
\Gamma_1:=&\cup_i H_i,\\
\Gamma_2:=&\cup_i (H_i+\{kx_{j,i}:k\in K,\,\,1\leq j\leq r-h\}),
\end{align*}
where for sets $A$ and $B$ we define $A+B:=\{a+b:a\in A,\, b\in B\}$.
\end{defi}
\begin{rem}\label{r1}
From Lemmas \ref{l1} and \ref{t51}, $m$-generating $H_1,\ldots,H_n$ exist if $m=\lceil r/h\rceil$ and $4t(\lceil r/h\rceil-1)\leq p^h$. Also $x_{1,i},\ldots,x_{r-h,i}$ exist for $1\leq i\leq n$ as $H_i$ is of dimension $h$.
It can be easily checked from the definitions that
\begin{align*}
|\Gamma_1|&\leq np^h,\\
|\Gamma_2|&\leq np^h|K|(r-h).
\end{align*}
\end{rem}
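For illustration, in the simplest case $h=1$ (possible whenever $4t(r-1)\leq p$) the subgroups $H_i$ can be taken to be the lines spanned by the first $n$ elements of the set $A$ of Lemma \ref{t51} with $q=p$ and $s=r$; the following sketch generates the corresponding $\Gamma_1$:
\begin{verbatim}
def gamma_1(p, r, t):
    """Gamma_1 for h = 1: a union of n = 4t(r-1)+1 lines through the origin,
    each spanned by one element of the r-generating set A (needs n <= p + 1)."""
    n = 4 * t * (r - 1) + 1
    assert n <= p + 1, "the h = 1 construction needs 4t(r-1) <= p"
    A = [tuple(pow(x, i, p) for i in range(r)) for x in range(p)]
    A.append(tuple([0] * (r - 1) + [1]))
    points = set()
    for a in A[:n]:
        for k in range(p):
            points.add(tuple(k * ai % p for ai in a))
    return points

print(len(gamma_1(p=13, r=2, t=3)))   # at most n*p = 169; here 157 (lines meet at 0)
\end{verbatim}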
We will now bound $np^h$ and $|K|$.
\begin{theor}\label{c20}
Let $h\in\mathbb{Z}$ be minimal such that $h\geq 1$ and $4t(\lceil r/h\rceil-1)\leq p^h$ and let $K$ be as in Lemma \ref{t11}. Also let $n:=4t(\lceil r/h\rceil-1)+1$. Then $np^h<16pt^2r^2$ and $|K|\leq 2+\log_2(p)$.
\end{theor}
\begin{proof}
Notice that $h\leq r$, since $4t(\lceil r/r\rceil-1)=0\leq p^r$.
If $h=1$ we have that $np^h\leq (4t(r-1)+1)p\leq 4ptr<16pt^2r^2$.
If $h>1$ then $p^{h-1}< 4t(\lceil r/(h-1)\rceil -1)<4tr$ and so
\[np^h\leq pp^{h-1}\left(4t\left(\left\lceil\frac{r}{h}\right\rceil-1\right)+1\right)<p\cdot4tr\cdot\left(\frac{4tr}{h}+1\right)\leq16pt^2r^2.\]
When looking at $|K|$, we have that $|K|=2+k$, where $k\in\mathbb{Z}$ is minimal such that $2^k\geq p/3$. So $2^k<2p/3$ and then $k<\log_2(2p/3)<\log_2(p)$.
\end{proof}
From Remark \ref{r1} and Theorem \ref{c20} we obtain bounds on $|\Gamma_1|$ and $|\Gamma_2|$.
\begin{rem}\label{r2}
From Remark \ref{r1} and Theorem \ref{c20} we obtain that $\Gamma_1$ and $\Gamma_2$ can be chosen with
\begin{align*}
|\Gamma_1|&\leq 16pt^2r^2,\\
|\Gamma_2|&\leq 16pt^2r^3(2+\log_2(p)).
\end{align*}
\end{rem}
We still need to prove that the sampling sets constructed in Definition \ref{d1} are actually sampling sets. This will be done in the next section by proving that the reconstruction algorithms which will be presented work. Since the reconstruction algorithms need values $f(x)$ with $x\in\Gamma_1$ or $x\in\Gamma_2$ respectively, this will also prove that the given sets are sampling sets. An alternative way would be to prove that $\Gamma_1$ and $\Gamma_2$ satisfy the RIP property (see Section \ref{s3} for remarks about it).
\section{Preliminaries to the reconstruction algorithms}\label{s5}
We will now prove a lemma which will play a crucial role in the two reconstruction algorithms we will present in Section \ref{s4}. For any $H\leq \mathbb{F}_p^r$ and $L\leq \widehat{\mathbb{F}_p^r}$ let $H^\perp:=\{\chi_y\in\widehat{\mathbb{F}_p^r}:\chi_y(h)=1\,\,\,\,\forall h\in H\}$ and $L^\perp:=\{x\in\mathbb{F}_p^r:\chi_y(x)=1\,\,\,\,\forall \chi_y\in L\}$ be the annihilators of $H$ and $L$ respectively.
\begin{lemma}\label{l8}
Let $H_1,\ldots,H_n$ be $m$-generating with $n=4t(m-1)+1$ and let $f=\sum_{\chi_y\in\widehat{\mathbb{F}_p^r}}a_{\chi_y}\chi_y$ be a function on $\mathbb{F}_p^r$. Write $\widehat{f}=\widehat{g}+\epsilon$ with the support of $\widehat{g}$ having at most $t$ elements. Then for every $\chi_z\in\widehat{\mathbb{F}_p^r}$ we have that
\[\left|\left\{1\leq i\leq n:\sum_{\chi_y\in\chi_z H_i^\perp\setminus\{\chi_z\}}|a_{\chi_y}|\leq\frac{\|\epsilon\|_1}{t}\right\}\right|\geq 2t(m -1)+1.\]
\end{lemma}
\begin{proof}
First we will show that a certain matrix $M$, defined below, is $(n,m-1)$-coherent, and then we will apply (a variant of) Lemma 2 of \cite{b1} to conclude the proof of the lemma.
Let $M$ be the matrix with rows labeled by
\[A:=\{\chi_y H^\perp_i:1\leq i\leq n,\,\,\chi_y\in\mathrm{Rep}(\widehat{\mathbb{F}_p^r}/H^\perp_i)\},\]
columns labeled by elements of $\widehat{\mathbb{F}_p^r}$ and $M_{\chi_y H^\perp_i,\chi_z}=1$ if $\chi_z\in\chi_y H^\perp_i$ or $M_{\chi_y H^\perp_i,\chi_z}=0$ otherwise. Then each column of $M$ contains exactly $n$ entries equal to 1 and
\[\sum_{a\in A}M_{a,\chi_z}M_{a,\chi_w}=\sum_{i=1}^nM_{\chi_z H^\perp_i,\chi_w}=|\{1\leq i\leq n:(\chi_z)^{-1}\chi_w\in H^\perp_i\}|\]
for $\chi_z,\chi_w\in\widehat{\mathbb{F}_p^r}$. Let $\langle\chi_x\rangle\subseteq\widehat{\mathbb{F}_p^r}$ be the subgroup generated by $\chi_x$. Since the $H_i$ are $m$-generating, so that no more than $m-1$ of them can be contained in a fixed proper subspace of $\mathbb{F}_p^r$, and since $\langle\chi_x\rangle^\perp\lneq \mathbb{F}_p^r$ for $1\not=\chi_x\in \widehat{\mathbb{F}_p^r}$ (that is $x\not=0\in\mathbb{F}_p^r$), we have that
\begin{align*}
\sum_{a\in A}M_{a,\chi_z}M_{a,\chi_w}&=|\{1\leq i\leq n:(\chi_z)^{-1}\chi_w\in H^\perp_i\}|\\
&=|\{1\leq i\leq n:\chi_{w-z}\in H^\perp_i\}|\\
&=|\{1\leq i\leq n:\langle\chi_{w-z}\rangle\leq H^\perp_i\}|\\
&=|\{1\leq i\leq n:H_i\leq\langle\chi_{w-z}\rangle^\perp\}|\\
&\leq m-1
\end{align*}
for $\chi_z\not=\chi_w$ (that is $z\not=w$ as elements of $\mathbb{F}_p^r$).
Assume first that $\epsilon\not=0$. If $h<r$ the lemma then follows from Lemma 2 of \cite{b1} with (using the notation of Lemma 2 of \cite{b1}) $k=t$ (we have that $t<n$ by assumption), $\epsilon'=1$ and $c=4$. If $h=r$ then $H_i^\perp=\{1\}$ for every $1\leq i\leq n$ and so
\[\sum_{\chi_y\in\chi_z H_i^\perp\setminus\{\chi_z\}}|a_{\chi_y}|=0\leq\frac{\|\epsilon\|_1}{t} \]
for every $\chi_z\in\widehat{\mathbb{F}_p^r}$ and $1\leq i\leq n$. In particular the lemma holds also in this case.
Assume now that $\epsilon=0$. Then
\[f=\sum_{j=1}^ta_{\chi_{y_j}}\chi_{y_j}\]
for some $\chi_{y_j}\in\widehat{\mathbb{F}_p^r}$. In particular
\[\sum_{\chi_y\in\chi_z H_i^\perp\setminus\{\chi_z\}}|a_{\chi_y}|=\sum_{{1\leq j\leq t:\chi_{y_j}\not=\chi_z,}\atop{\chi_{y_j}\in\chi_z H_i^\perp}}|a_{\chi_{y_j}}|.\]
Since $\chi_{y_j}\in\chi_z H_i^\perp$ if and only if $(\chi_z)^{-1}\chi_{y_j}\in H_i^\perp$ and, for each $\chi_{y_j}\not=\chi_z$, there exist at most $m-1$ such $i$, we have that
\[\sum_{\chi_y\in\chi_z H_i^\perp\setminus\{\chi_z\}}|a_{\chi_y}|=0\]
for at least $n-t(m-1)\geq 2t(m-1)+1$ distinct values of $i$. In particular the lemma holds also in this case.
\end{proof}
Let $H$ be any subgroup of $\mathbb{F}_p^r$. Then $\widehat{\mathbb{F}_p^r}/H^\perp\cong\widehat{H}$ through $\chi_y H^\perp\mapsto(h\mapsto\chi_y(h))$. For any function $f=\sum_{\chi_y\in\widehat{\mathbb{F}_p^r}}a_{\chi_y} \chi_y$ on $\mathbb{F}_p^r$ we have that
\begin{equation}\label{eq1}
\widehat{f|_H}(\chi_y H^\perp)=\sum_{\chi_z\in\chi_y H^\perp}a_{\chi_z}=\sum_{\chi_z\in\chi_y H^\perp}\widehat{f}(\chi_z)
\end{equation}
since
\[f|_H=\sum_{\chi_y\in\widehat{\mathbb{F}_p^r}}a_{\chi_y} \chi_y|_H=\sum_{\chi_y H^\perp\in \widehat{\mathbb{F}_p^r}/H^\perp}\chi_y|_H\sum_{\chi_z\in\chi_y H^\perp}a_{\chi_z}.\]
This identification will be used in the proof of the algorithms which we will present in the next section.
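Equation \eqref{eq1} is the aliasing identity at the heart of both reconstruction algorithms; the following small numerical sketch illustrates it for a coordinate subgroup $H$ (this particular $H$ is chosen only to keep the check short):
\begin{verbatim}
import numpy as np

p, r = 5, 2
omega = np.exp(2j * np.pi / p)
rng = np.random.default_rng(1)
a = rng.standard_normal((p, p))        # coefficients a_{chi_y}, y = (y1, y2)

def f(x1, x2):
    return sum(a[y1, y2] * omega ** (x1 * y1 + x2 * y2)
               for y1 in range(p) for y2 in range(p))

# H = {(x1, 0)}: H^perp = {chi_{(0, y2)}}, so the cosets of H^perp are indexed by y1
f_H = np.array([f(x1, 0) for x1 in range(p)])
fhat_H = np.fft.fft(f_H) / p           # Fourier transform of f|_H on H = Z/pZ
aliased = a.sum(axis=1)                # sum of a_{chi_z} over each coset chi_y H^perp
assert np.allclose(fhat_H, aliased)
\end{verbatim}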
\section{Reconstruction algorithms}\label{s4}
We are now ready to present the reconstruction algorithms. Throughout this section let $f$, $g$ and $\epsilon$ be as in Section \ref{s1} and $\Gamma_j$, $H_1,\dots,H_n$, $K$, $m$, $n$, $h$ and $x_{l,i}$ as in Definition \ref{d1}. The first algorithm we present reconstructs or approximates $\widehat{f}$ from $f|_{\Gamma_1}$.
\begin{theor}\label{t2}
For $1\leq j\leq n$ let $F_j$ be the Fourier transform for $H_j$ and define $c_j:=F_j(f|_{H_j})$. Then the following algorithm returns $\widehat{f}'$ with $|\mathrm{supp}(\widehat{f}')|\leq t$ and $\|\widehat{f}-\widehat{f}'\|_1\leq (1+3\sqrt{2})\|\epsilon\|_1$.
\vspace{12pt}
\begin{algorithmic}
\State Set $\widehat{f}':=(0,\ldots,0)$, $\widehat{f}'':=(0,\ldots,0)$ and $Y:=(0,\ldots,0)$
\For {$j$ from $1$ to $n$}
\For {$2t-1$ values of $\chi_y H_j^\perp$ such that $|c_j(\chi_y H_j^\perp)|$ is largest}
\For {$\chi_z\in\chi_y H_j^\perp$}
\State $Y(\chi_z):=Y(\chi_z)+1$
\EndFor
\EndFor
\EndFor
\For {$\chi_y\in\widehat{\mathbb{F}_p^r}$}
\If {$Y(\chi_y)>2t(m -1)$}
\State $X:=()$
\For {$j$ from $1$ to $n$}
\State Append $c_j(\chi_y H_j^\perp)$ to $X$
\EndFor
\State $\widehat{f}''(\chi_y):=\mathrm{Median}(\{\Re(x):x\in X\})+i\mathrm{Median}(\{\Im(x):x\in X\})$
\EndIf
\EndFor
\For {$t$ values of $\chi_y$ for which $|\widehat{f}''(\chi_y)|$ is largest}
\State $\widehat{f}'(\chi_y):=\widehat{f}''(\chi_y)$
\EndFor
\noindent \Return $\widehat{f}'$.
\end{algorithmic}
\end{theor}
In the algorithm $\widehat{f}'$, $\widehat{f}''$ and $Y$ are labeled by elements of $\widehat{\mathbb{F}_p^r}$. The Fourier transform for $H_j$ can be defined similarly to that of $\mathbb{F}_p^r$ (notice that $H_j\cong\mathbb{F}_p^h$ by definition). Under the identification at the end of Section \ref{s5} giving $\widehat{H_j}\cong\widehat{\mathbb{F}_p^r}/H_j^\perp$ we can also view $F_j$ as a function $F_j:\widehat{\mathbb{F}_p^r}/H_j^\perp\rightarrow \mathbb{C}$.
\begin{proof}
We will first show that if $|a_{\chi_z}|> 2\|\epsilon\|_1/t$ then $Y(\chi_z)> 2t(m-1)$ and next prove that if $Y(\chi_w)> 2t(m-1)$ then $|\widehat{f}''(\chi_w)-a_{\chi_w}|\leq\sqrt{2}\|\epsilon\|_1/t$. Using these results we will then prove the theorem.
By definition of $\epsilon$, for $1\leq j\leq n$ we have that
\begin{equation}\label{eq2}
|\{\chi_y H_j^\perp\in\widehat{\mathbb{F}_p^r}/H_j^\perp:|c_j(\chi_y H_j^\perp)|>\|\epsilon\|_1/t\}|\leq 2t-1,
\end{equation}
since, if $g=\sum_{l=1}^t a_{\chi_{y_l}}\chi_{y_l}$ (where some of the coefficients might be 0) and $\{\chi_{y_l} H_j^\perp\}:=\{\chi_{y_1} H_j^\perp,\ldots,\chi_{y_t} H_j^\perp\}$, then, from Equation \eqref{eq1},
\begin{align*}
\sum_{\chi_y H_j^\perp\in(\widehat{\mathbb{F}_p^r}/H_j^\perp)\setminus\{\chi_{y_l} H_j^\perp\}}|c_j(\chi_y H_j^\perp)|&=\sum_{\chi_y H_j^\perp\in(\widehat{\mathbb{F}_p^r}/H_j^\perp)\setminus\{\chi_{y_l} H_j^\perp\}}\left|\sum_{\chi_b\in\chi_y H_j^\perp} a_{\chi_b}\right|\\
&\leq\sum_{\chi_y H_j^\perp\in(\widehat{\mathbb{F}_p^r}/H_j^\perp)\setminus\{\chi_{y_l} H_j^\perp\}}\sum_{\chi_b\in\chi_y H_j^\perp} |a_{\chi_b}|\\
&\leq\sum_{\chi_y\in\widehat{\mathbb{F}_p^r}\setminus\{\chi_{y_1},\ldots,\chi_{y_t}\}}|a_{\chi_y}|\\
&\leq \|\epsilon\|_1
\end{align*}
and then in particular there are less than $t$ elements $\chi_y H_j^\perp\in(\widehat{\mathbb{F}_p^r}/H_j^\perp)\setminus\{\chi_{y_l} H_j^\perp\}$ such that $|c_j(\chi_y H_j^\perp)|>\|\epsilon\|_1/t$. In particular there are at most $2t-1$ elements $\chi_y H_j^\perp\in(\widehat{\mathbb{F}_p^r}/H_j^\perp)$ with $|c_j(\chi_y H_j^\perp)|>\|\epsilon\|_1/t$.
If $|a_{\chi_z}|> 2\|\epsilon\|_1/t$ then
\[|\{j:|c_j(\chi_z H_j^\perp)|>\|\epsilon\|_1/t\}|>2t(m-1)\]
by Lemma \ref{l8} and Equation \eqref{eq1}, as then for at least $2t(m-1)+1$ values of $j$ we have
\[|c_j(\chi_z H_j^\perp)|=\left|\sum_{\chi_y\in\chi_z H_j^\perp}a_{\chi_y}\right|\geq|a_{\chi_z}|-\left|\sum_{\chi_y\in\chi_z H_j^\perp\setminus\{\chi_z\}}a_{\chi_y}\right|>\|\epsilon\|_1/t.\]
So $Y(\chi_z)> 2t(m-1)$ in this case.
Again by Lemma \ref{l8} and Equation \eqref{eq1}, we have that whenever $Y(\chi_w)> 2t(m-1)$ then $|\widehat{f}''(\chi_w)-a_{\chi_w}|\leq\sqrt{2}\|\epsilon\|_1/t$, since in this case $\widehat{f}''(\chi_w)=\mathrm{Median}(\{\Re(x):x\in X\})+i\mathrm{Median}(\{\Im(x):x\in X\})$ and
\begin{align*}
|\mathrm{Median}(\{\Re(x):x\in X\})-\Re(a_{\chi_w})|&\leq\|\epsilon\|_1/t,\\
|\mathrm{Median}(\{\Im(x):x\in X\})-\Im(a_{\chi_w})|&\leq\|\epsilon\|_1/t.
\end{align*}
We will now prove that $\|\widehat{f}-\widehat{f}'\|_1\leq (1+3\sqrt{2})\|\epsilon\|_1$, which will prove the theorem, since by definition $|\mathrm{supp}(\widehat{f}')|\leq t$. To do this let $T:=\{\chi_{y_1},\ldots,\chi_{y_t}\}$ and $T':=\mathrm{supp}(\widehat{f}')$. We can write $T\setminus T'=T_1\cup T_2$, where
\begin{align*}
T_1&=\{\chi_y\in T\setminus T':|a_{\chi_y}|\leq 2\|\epsilon\|_1/t\},\\
T_2&=\{\chi_y\in T\setminus T':|a_{\chi_y}|> 2\|\epsilon\|_1/t\}.
\end{align*}
Notice that $T',T_2\subseteq \mathrm{supp}(\widehat{f}'')$. For $\chi_b\in T_2$ and $\chi_c\in T'\setminus T$ we have
\[|a_{\chi_c}|+\sqrt{2}\|\epsilon\|_1/t\geq |\widehat{f}''(\chi_c)|\geq|\widehat{f}''(\chi_b)|\geq|a_{\chi_b}|-\sqrt{2}\|\epsilon\|_1/t\]
and so $|a_{\chi_b}|\leq |a_{\chi_c}|+2\sqrt{2}\|\epsilon\|_1/t$.
Also for $\chi_b\in T_2$ we have by the previous part that $\widehat{f}''(\chi_b)\not=0$. From the definition of $\widehat{f}'$ if $|T_2|>0$ then $|T'|=t$. In this last case, from $T_2\subseteq T\setminus T'$ and $|T|=|T'|$, it follows that $|T'\setminus T|\geq |T_2|$ (this last inequality holds also if $|T_2|=0$).
In particular
\begin{align*}
\sum_{\chi_b\in T\setminus T'}|a_{\chi_b}|&=\sum_{\chi_b\in T_1}|a_{\chi_b}|+\sum_{\chi_b\in T_2}|a_{\chi_b}|\\
&\leq |T_1|2\frac{\|\epsilon\|_1}{t}+|T_2|2\sqrt{2}\frac{\|\epsilon\|_1}{t}+\sum_{\chi_c\in T'\setminus T}|a_{\chi_c}|\\
&\leq 2\sqrt{2}\|\epsilon\|_1+\sum_{\chi_c\in T'\setminus T}|a_{\chi_c}|
\end{align*}
and then
\begin{align*}
\left\|\widehat{f}-\widehat{f}'\right\|_1\!\!=&\left\|\widehat{f}|_{T'}-\widehat{f}'|_{T'}\right\|_1\!\!+\!\left\|\widehat{f}|_{T\setminus T'}-\widehat{f}'|_{T\setminus T'}\right\|_1\!\!+\!\left\|\widehat{f}|_{\widehat{\mathbb{F}_p^r}\setminus (T\cup T')}-\widehat{f}'|_{\widehat{\mathbb{F}_p^r}\setminus (T\cup T')}\right\|_1\\
=&\left\|\widehat{f}|_{T'}-\widehat{f}''|_{T'}\right\|_1\!\!+\!\sum_{\chi_b\in T\setminus T'}|a_{\chi_b}|\!+\!\sum_{\chi_y\in \widehat{\mathbb{F}_p^r}\setminus(T\cup T')}|a_{\chi_y}|\\
\leq &|T'|\sqrt{2}\frac{\|\epsilon\|_1}{t}+2\sqrt{2}\|\epsilon\|_1+\sum_{\chi_c\in T'\setminus T}|a_{\chi_c}|+\!\sum_{\chi_y\in \widehat{\mathbb{F}_p^r}\setminus(T\cup T')}|a_{\chi_y}|\\
\leq &3\sqrt{2}\|\epsilon\|_1+\!\sum_{\chi_y\not\in T}|a_{\chi_y}|\\
\leq &(1+3\sqrt{2})\|\epsilon\|_1
\end{align*}
and so the theorem is proved.
\end{proof}
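To make the individual steps concrete, here is a Python sketch of this reconstruction procedure for the special case $h=1$ of Definition \ref{d1}, where each $H_j$ is the line spanned by a vector $a_j$, the cosets of $H_j^\perp$ are indexed by the inner product with $a_j$, $F_j$ is an ordinary length-$p$ FFT, and $m=r$ (an unoptimized illustration of the $h=1$ case, not the general algorithm):
\begin{verbatim}
import numpy as np
from itertools import product

def reconstruct(f_values, lines, p, r, t, m):
    """Sketch of the reconstruction algorithm above for h = 1.

    f_values maps points x of F_p^r (tuples) to f(x); only the points of
    Gamma_1 (the union of the given lines) are read.  Returns a dictionary
    {y : estimated Fourier coefficient} with at most t entries."""
    grid = np.array(list(product(range(p), repeat=r)))      # all y in F_p^r
    n = len(lines)
    Y = np.zeros(p ** r, dtype=int)
    cosets = []
    for a in lines:
        samples = np.array([f_values[tuple((k * np.array(a)) % p)] for k in range(p)])
        c_j = np.fft.fft(samples) / p          # c_j(u) with u = <y, a> mod p
        cosets.append(c_j)
        dots = (grid @ np.array(a)) % p
        for u in np.argsort(np.abs(c_j))[-(2 * t - 1):]:    # 2t-1 largest cosets
            Y[dots == u] += 1
    f2 = np.zeros(p ** r, dtype=complex)
    for idx in np.where(Y > 2 * t * (m - 1))[0]:
        vals = [cosets[j][int((grid[idx] @ np.array(lines[j])) % p)] for j in range(n)]
        f2[idx] = np.median(np.real(vals)) + 1j * np.median(np.imag(vals))
    keep = np.argsort(np.abs(f2))[-t:]
    return {tuple(int(v) for v in grid[i]): f2[i] for i in keep if f2[i] != 0}

# tiny end-to-end check with exact data (epsilon = 0), p = 13, r = 2, t = 2, m = r
p, r, t, m = 13, 2, 2, 2
n = 4 * t * (m - 1) + 1
lines = [(1, x) for x in range(n)]             # any two of these lines span F_p^2
omega = np.exp(2j * np.pi / p)
true = {(3, 7): 2.0, (11, 5): -1.0}
f_values = {x: sum(c * omega ** ((np.array(x) @ np.array(y)) % p)
                   for y, c in true.items())
            for x in product(range(p), repeat=r)}
print(reconstruct(f_values, lines, p, r, t, m))
\end{verbatim}
In the noise-free test above ($\epsilon=0$) the two coefficients are recovered exactly, in accordance with the theorem.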
Before presenting the second algorithm we need the following lemma.
\begin{lemma}\label{l7}
Let $H$ be any subgroup of $\mathbb{F}_p^r$, $x\in \mathbb{F}_p^r$ and $f=\sum_{\chi_w\in\widehat{\mathbb{F}_p^r}} a_{\chi_w} \chi_w$. If $\overline{f}$ is the function on $H$ defined by $\overline{f}(y):=f(x+y)$, $y\in H$, then
\[\widehat{\overline{f}}(\chi_w H^\perp)=\sum_{\chi_z\in\chi_w H^\perp}a_{\chi_z}\chi_z(x).\]
\end{lemma}
\begin{proof}
We have that
\begin{align*}
\overline{f}(y)&=\sum_{\chi_w\in\widehat{\mathbb{F}_p^r}} a_{\chi_w}\chi_w(x+y)\\
&=\sum_{\chi_w\in\widehat{\mathbb{F}_p^r}} a_{\chi_w}\chi_w(x)\chi_w(y)\\
&=\sum_{\chi_w H^\perp\in\widehat{\mathbb{F}_p^r}/H^\perp}\sum_{\chi_z\in\chi_w H^\perp}a_{\chi_z}\chi_z(x)\chi_z(y)\\
&=\sum_{\chi_w H^\perp\in\widehat{\mathbb{F}_p^r}/H^\perp}\chi_w(y)\sum_{\chi_z\in\chi_w H^\perp}a_{\chi_z}\chi_z(x)
\end{align*}
from which the lemma follows.
\end{proof}
We will now present the second reconstruction theorem. Here we use $f|_{\Gamma_2}$ in order to construct $\widehat{f}'$.
\begin{theor}\label{t9'}
For $1\leq j\leq n$ and $x\in \mathbb{F}_p^r$ let $g_{j,x}(y)=f(x+y)$, $y\in H_j$, and $c_{j,x}=F_j(g_{j,x})$, where $F_j$ is the Fourier transform for $H_j$. The following algorithm returns $\widehat{f}'$ such that $|\mathrm{supp}(\widehat{f}')|\leq t$ and $\|\widehat{f}-\widehat{f}'\|_1\leq (1+3\sqrt{2})\|\epsilon\|_1$.
\begin{algorithmic}
\State Set $\widehat{f}':=(0,\ldots,0)$, $\widehat{f}'':=(0,\ldots,0)$, $Y:=(0,\ldots,0)$ and $Z:=()$
\For {$j$ from $1$ to $n$}
\For {$2t-1$ values of $\chi_w H_j^\perp$ for which $|c_{j,0}(\chi_w H_j^\perp)|$ is largest}
\State $\overline{\chi_w}:=()$
\For {$1\leq l\leq r-h$}
\State Append $\chi_{w_l}(\mathrm{span}_{\mathbb{F}_p} \{x_{l,j}\})^\perp$ obtained from the algorithm in \linebreak Lemma \ref{t11} for $\overline{f}(a)=c_{j,ax_{l,j}}(\chi_w H_j^\perp)$, $a\in \mathbb{F}_p$, to $\overline{\chi_w}$
\EndFor
\State Reconstruct $\chi_w$ from $\chi_w H_j^\perp$ and $\overline{\chi_w}$
\State $Y(\chi_w):=Y(\chi_w)+1$
\If {$Y(\chi_w)=2t(\lceil r/h\rceil-1)+1$}
\State Append $\chi_w$ to $Z$
\EndIf
\EndFor
\EndFor
\For {$\chi_w\in Z$}
\State $X:=()$
\For {$j$ from $1$ to $n$}
\State Append $c_{j,0}(\chi_w H_j^\perp)$ to $X$
\EndFor
\State $\widehat{f}''(\chi_w):=\mathrm{Median}(\{\Re(x):x\in X\})+i\mathrm{Median}(\{\Im(x):x\in X\})$
\EndFor
\For {$t$ values of $\chi_w$ for which $|\widehat{f}''(\chi_w)|$ is largest}
\State $\widehat{f}'(\chi_w):=\widehat{f}''(\chi_w)$
\EndFor
\noindent \Return $\widehat{f}'$.
\end{algorithmic}
\end{theor}
In the algorithm $\widehat{f}'$, $\widehat{f}''$ and $Y$ are vectors indexed by elements of $\widehat{\mathbb{F}_p^r}$. To see how $\chi_w$ can be reconstructed from $\chi_w H_j^\perp$ and $\overline{\chi_w}$ see the proof of the theorem.
\begin{proof}
Since $c_{j,0}=F_j(f|_{H_j})$ it is enough, from the proof of Theorem \ref{t2}, to prove that if $|a_{\chi_w}|>2\|\epsilon\|_1/t$ then $\chi_w\in Z$, that is, that in this case $Y(\chi_w)>2t(m-1)$.
We will first show how $\chi_w$ can be reconstructed from $\chi_w H_j^\perp$ and $\overline{\chi_w}$. We will assume that $H_j=\{(0,\ldots,0,a_{r-h+1},\ldots,a_r)\in \mathbb{F}_p^r\}$ and that $x_{l,j}=(0,\ldots,0,1,0,\ldots,0)$ with $l$-th coefficient 1 and all other coefficients 0 for $1\leq l\leq r-h$ (this can always be assumed, up to changing the basis of $\mathbb{F}_p^r$).
We easily have that
\begin{align*}
H_j^\perp&=\{\chi_{(z_1,\ldots,z_{r-h},0,\ldots,0)}:z_i\in\mathbb{F}_p\},\\
(\mathrm{span}_{\mathbb{F}_p}\{ x_{l,j}\})^\perp&=\{\chi_{(z_1,\ldots,z_{l-1},0,z_{l+1},\ldots,z_r)}:z_i\in\mathbb{F}_p\}.
\end{align*}
So, using that $\widehat{\mathbb{F}_p^r}/H^\perp\cong \widehat{H}$ for any subgroup $H\leq \mathbb{F}_p^r$,
\begin{align*}
\widehat{H_j}&\cong \widehat{\mathbb{F}_p^r}/H_j^\perp=\{\chi_{(0,\ldots,0,y_{r-h+1},\ldots,y_r)}H_j^\perp:y_i\in\mathbb{F}_p\},\\
\widehat{\mathrm{span}_{\mathbb{F}_p}\{ x_{l,j}\}}\!&\cong \widehat{\mathbb{F}_p^r}/(\mathrm{span}_{\mathbb{F}_p}\{ x_{l,j}\})^\perp\!=\!\{\chi_{(0,\ldots,0,y_l,0,\ldots,0)}(\mathrm{span}_{\mathbb{F}_p}\{ x_{l,j}\})^\perp:y_l\in\mathbb{F}_p\}.
\end{align*}
If $\chi_w H_j^\perp=\chi_{(0,\ldots,0,y_{r-h+1},\ldots,y_r)}H_j^\perp$ and $(\overline{\chi_w})_l=\chi_{(0,\ldots,0,y_l,0,\ldots,0)}(\mathrm{span}_{\mathbb{F}_p}\{ x_{l,j}\})^\perp$ for $1\leq l\leq r-h$ then $\chi_w=\chi_{(y_1,\ldots,y_r)}$. Notice that in this case
\begin{align*}
(\overline{\chi_w})_1&=\{\chi_z:z_1=y_1&&&&&&\},\\
\vdots&&&\ddots\\
(\overline{\chi_w})_{r-h}&=\{\chi_z:&&&z_{r-h}=y_{r-h}&&&\},\\
\chi_w H_j^\perp&=\{\chi_z:&&&&&z_{r-h+1}=y_{r-h+1},\ldots,z_r=y_r&\}.
\end{align*}
In particular $\chi_w$ is the only element contained in $\chi_w H_j^\perp$ and in all of the $(\overline{\chi_w})_l$.
For $\chi_w\in\widehat{\mathbb{F}_p^r}$ we have by Lemma \ref{l8} that if
\[J=\{1\leq j\leq n:\sum_{\chi_z\in\chi_w H_j^\perp\setminus\{ \chi_w\}}|a_{\chi_z}|\leq\|\epsilon\|_1/t\}\]
then $|J|>2t(m-1)$. Assume now that $|a_{\chi_w}|>2\|\epsilon\|_1/t$ and $j\in J$. Then $|c_{j,0}(\chi_w H_j^\perp)|>\|\epsilon\|_1/t$ and so it is among the $2t-1$ largest values of $|c_{j,0}(\chi_b H_j^\perp)|$, by Equation \eqref{eq2} from the proof of Theorem \ref{t2}. We will now show that in this case $\chi_w$ is correctly reconstructed from $\chi_w H_j^\perp$ and $\overline{\chi_w}$. This will prove the theorem, since then $Y(\chi_w)>2t(m-1)$.
Clearly $\chi_w\in\chi_w H_j^\perp$. So it is enough to prove for $1\leq l\leq r-h$ that $\chi_w\in(\overline{\chi_w})_l(\mathrm{span}_{\mathbb{F}_p}\{ x_{l,j}\})^\perp$. Using Lemma \ref{l7} we have that
\[\overline{f}(a)=c_{j,ax_{l,j}}(\chi_w H_j^\perp)=\sum_{\chi_z\in\chi_w H_j^\perp}\chi_z(ax_{l,j})a_{\chi_z}.\]
As $\mathrm{span}_{\mathbb{F}_p}\{ x_{l,j}\}\cong \mathbb{F}_p$ we can define $\phi_b(a):=\chi_b(ax_{l,j})$ for $a\in\mathbb{F}_p$ and $\chi_b\in\widehat{\mathbb{F}_p^r}$. Notice that $\chi_b(\mathrm{span}_{\mathbb{F}_p}\{ x_{l,j}\})^\perp=\chi_k(\mathrm{span}_{\mathbb{F}_p}\{ x_{l,j}\})^\perp$ if and only if $\phi_b=\phi_k$. So
\[\overline{f}=\sum_{\chi_b(\mathrm{span}_{\mathbb{F}_p}\{ x_{l,j}\})^\perp\in \widehat{\mathbb{F}_p^r}/(\mathrm{span}_{\mathbb{F}_p}\{ x_{l,j}\})^\perp}d_{\phi_b} \phi_b\]
where the coefficients $d_{\phi_b}$ are given by
\[d_{\phi_b}=\sum_{\chi_k\in(\chi_w H_j^\perp)\cap (\chi_b(\mathrm{span}_{\mathbb{F}_p}\{ x_{l,j}\})^\perp)}a_{\chi_k}.\]
Let
\[c:=\sum_{\chi_w\not=\chi_k\in(\chi_w H_j^\perp)\cap (\chi_w(\mathrm{span}_{\mathbb{F}_p}\{ x_{l,j}\})^\perp)}|a_{\chi_k}|.\]
Since $j\in J$ we have that $\sum_{\chi_w\not=\chi_b\in\chi_w H_j^\perp}|a_{\chi_b}|\leq \|\epsilon\|_1/t$ (in particular $c=v\|\epsilon\|_1/t$ for some $0\leq v\leq 1$) and $|a_{\chi_w}|>2\|\epsilon\|_1/t$. So
\[|d_{\phi_w}|\geq |a_{\chi_w}|-c>(2-v)\|\epsilon\|_1/t\]
and
\[\sum_{\phi_b\not=\phi_w}|d_{\phi_b}|\leq
\sum_{\chi_b\in \chi_w H_j^\perp\setminus\{\chi_w\}}|a_{\chi_b}|-c\leq(1-v)\|\epsilon\|_1/t.\]
Since $(2-v)\geq 2(1-v)$ for $0\leq v\leq 1$, in this case the algorithm in Lemma \ref{t11} returns $\phi_w$, which corresponds to $\chi_w(\mathrm{span}_{\mathbb{F}_p}\{ x_{l,j}\})^\perp$ under the isomorphism $\widehat{\mathbb{F}_p^r}/(\mathrm{span}_{\mathbb{F}_p}\{ x_{l,j}\})^\perp\rightarrow\widehat{\mathbb{F}_p}$ sending $\chi_b(\mathrm{span}_{\mathbb{F}_p}\{ x_{l,j}\})^\perp\mapsto \phi_b$. So $\chi_w\in(\overline{\chi_w})_l(\mathrm{span}_{\mathbb{F}_p}\{ x_{l,j}\})^\perp$, which concludes the proof of the theorem.
\end{proof}
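The coordinate-wise reconstruction of $\chi_w$ used in the proof amounts to intersecting the constraint coming from the coset $\chi_w H_j^\perp$, which fixes the last $h$ coordinates of $w$, with the constraints coming from the $(\overline{\chi_w})_l$, each of which fixes one of the first $r-h$ coordinates. The following small sketch (our illustration, with arbitrarily chosen $p$, $r$, $h$ and $w$, and with the standard basis assumed in the proof) confirms by brute force that these constraints single out $w$.
\begin{verbatim}
import itertools

p, r, h = 5, 4, 2
group = list(itertools.product(range(p), repeat=r))

w = (2, 4, 1, 3)                          # the frequency to be recovered

# data available to the algorithm:
tail  = w[r - h:]                         # read off from the coset chi_w H_j^perp
heads = [w[l] for l in range(r - h)]      # read off from (overline{chi_w})_l, l = 1, ..., r-h

candidates = [z for z in group
              if z[r - h:] == tail and all(z[l] == heads[l] for l in range(r - h))]

assert candidates == [w]                  # the constraints single out w
print("recovered", candidates[0])
\end{verbatim}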
\section{Remarks to the extension to groups of the form $(\mathbb{Z}/p^a\mathbb{Z})^r$}
One can extend the algorithms to work also on groups of the form $(\mathbb{Z}/p^a\mathbb{Z})^r$ with $a\geq 1$. This can be done as follows:
\begin{enumerate}
\item
Construct $m$-generating subspaces $H_1,\ldots,H_n\subseteq\mathbb{F}_p^r$ of dimension $h$ and find a basis $\{z_{1,i},\ldots,z_{h,i}\}$ for each subspace $H_i$.
Assuming coefficients of the vectors $z_{j,i}$ are integers, define $\overline{H_i}:=\langle z_{1,i},\ldots,z_{h,i}\rangle\subseteq (\mathbb{Z}/p^a\mathbb{Z})^r$ (the subgroup generated by $z_{1,i},\ldots,z_{h,i}$) for $1\leq i\leq n$.
\item
Extend Lemma \ref{t11} to work for $\mathbb{Z}/p^a\mathbb{Z}$ instead of only for $\mathbb{F}_p$ by taking $k$ maximal with $2^k\geq p^a/3$ and changing $p$ to $p^a$.
\item
Take $\overline{x_{j,i}}\in\mathbb{Z}/p^a\mathbb{Z}$ with $\langle H_i,\overline{x_{1,i}},\ldots,\overline{x_{r-h,i}}\rangle=(\mathbb{Z}/p^a\mathbb{Z})^r$.
\item
Define $\overline{\Gamma_1}$ and $\overline{\Gamma_2}$ similarly to $\Gamma_1$ and $\Gamma_2$.
\item
In the algorithms substitute $\mathbb{F}_p^r$ with $(\mathbb{Z}/p^a\mathbb{Z})^r$ and $\widehat{\mathbb{F}_p^r}$ with $\widehat{(\mathbb{Z}/p^a\mathbb{Z})^r}$.
\end{enumerate}
It can be proved that the sets $\overline{H_i}$ are $m$-generating for $(\mathbb{Z}/p^a\mathbb{Z})^r$. This depends on the fact that a square matrix with integer coefficients becomes singular when reduced to $\mathbb{Z}/p^a\mathbb{Z}$ exactly when its determinant is divisible by $p$, independently of the value of $a$. Also one can prove that
\[(\mathbb{Z}/p^a\mathbb{Z})^r\cong \overline{H_i}\times\langle \overline{x_{1,i}}\rangle\times\cdots\times\langle\overline{x_{r-h,i}}\rangle,\]
which is needed in order to adapt the proof of the theorems. However, from $|\overline{H_i}|=p^{ah}$ and $p^h\geq 4t$ (if $h<r$) we have
\[|\overline{\Gamma_1}|\geq|\overline{H_1}|= p^{ah}\geq 4^at^a.\]
So $|\overline{\Gamma_1}|$ cannot be quadratic in $t$ (unless possibly for $a\leq 2$). Looking at the upper bound on $|\overline{\Gamma_1}|$ we obtain
\[|\overline{\Gamma_1}|\leq np^{ah}<16pt^2r^2p^{(a-1)h}.\]
Again as $p^h\geq 4t$ for $h<r$,
\[16pt^2r^2p^{(a-1)h}\geq 16pt^2r^2(4t)^{a-1}=4^{a+1}pt^{a+1}r^2.\]
So, also for $a=2$, the given upper bound is not quadratic in $t$. For this reason we did not extend the paper to groups of the form $(\mathbb{Z}/p^a\mathbb{Z})^r$.
\section{Further remarks}\label{s3}
From
\[\left|\widehat{\mathbbm{1}_H}(\chi_y)\right|=\left|p^{-r}\sum_{x\in H}\chi_y(x)\right|=p^{-r}|H|\mathbbm{1}_{H^\perp}(\chi_y)=p^{h-r}\mathbbm{1}_{H^\perp}(\chi_y)\]
for any subspace $H$ of $\mathbb{F}_p^r$ of dimension $h$, it can be checked that, if $H_1,\ldots,H_n$ are $m$-generating, then $\sum_i\left|\widehat{\mathbbm{1}_{H_i}}(\chi_y)\right|\leq (m-1)p^{h-r}$ for every $1\not=\chi_y\in\widehat{\mathbb{F}_p^r}$ and $\sum_i\left|\widehat{\mathbbm{1}_{H_i}}(1)\right|=np^{h-r}:=C$. In particular it can be proved that the matrix with columns labeled by the elements of $\cup_{j\in I}H_j$ (taking the union as a multiset, so that some columns may be repeated), rows labeled by the elements of $\widehat{\mathbb{F}_p^r}$ and coefficients $\chi_y(g)/\sqrt{C}$ satisfies the RIP property of rank $t$ and constant $(t-1)(m-1)/n$. It follows that if $n>2(t-1)(m-1)$ and the subspaces $H_i$ are $m$-generating, then any $t$-sparse Fourier signal could also be approximated using reconstruction algorithms based on the RIP property.
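The displayed identity for $\widehat{\mathbbm{1}_H}$ is also easy to confirm numerically; the sketch below (ours, with $p=3$, $r=3$, $h=2$ and the $p^{-r}$ normalisation used above) checks that the transform equals $p^{h-r}$ on $H^\perp$ and vanishes elsewhere.
\begin{verbatim}
import itertools, cmath

p, r, h = 3, 3, 2
omega = cmath.exp(2j * cmath.pi / p)
group = list(itertools.product(range(p), repeat=r))

def chi(w, x):
    return omega ** (sum(wi * xi for wi, xi in zip(w, x)) % p)

H = [x for x in group if x[0] == 0]       # a subspace of dimension h = 2
H_perp = [w for w in group if all(abs(chi(w, x) - 1) < 1e-9 for x in H)]

for w in group:
    val = sum(chi(w, x) for x in H) / p**r          # the transform of 1_H at chi_w
    expected = p**(h - r) if w in H_perp else 0.0
    assert abs(val - expected) < 1e-9
print("check passed:", len(H_perp), "characters in H_perp")
\end{verbatim}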
Slight variations of the algorithms presented in Theorems \ref{t2} and \ref{t9'} (based on similar algorithms from \cite{MR2628828}) can also be constructed by taking $n$ to be $2t(m-1)$; these, however, return functions with
\[\|(\widehat{f}+\epsilon)-\widehat{f}'\|_1\leq (1+2t)\|\epsilon\|_1.\]
If $\|\epsilon\|_1>0$ these algorithms give worse approximations than the ones described here, while if $\|\epsilon\|_1=0$ they too reconstruct $f$ and they work on smaller sampling sets (about half the size). Since the sizes of the sampling sets and the complexities of the algorithms are about the same (they differ only by a small multiplicative constant), these variants have not been presented here.
\section{Acknowledgments}
The author thanks Hartmut F\"uhr for help in reviewing the paper.
The results presented here are part of the author's Ph.D. thesis \cite{lm}. They have also been presented in the form of an extended abstract in \cite{lm2}.
The author was supported during her Ph.D. by the DFG grant GRK 1632 for the Graduiertenkolleg Experimentelle und konstruktive Algebra at RWTH Aachen University.
\section{Introduction}
\label{sec:intro}
Algebraic models have been developed to describe systems in several fields of
physics. Examples in nuclear
physics include the Interacting Boson Model (IBM) \cite{ibm}
and its many extensions, the
vibron model \cite{vibron}, and the $SO(8)$ \cite{so8} and $Sp(6)$
\cite{so8,sp6} fermion dynamical symmetry models. There exist similar
models in chemical physics \cite{chem1,chem2}
and hadronic physics \cite{part}. Such models are typically
constructed by restricting the dynamics to a few important degrees of
freedom. There is often an invariant group $G$ such that
any Hamiltonian we construct commutes with the group Casimir operators.
For the IBM this group is $U(6)$. One also often imposes the condition that
any Hamiltonian be invariant with respect to a
lower dimensional group $G'$ which is a subgroup of $G$. Often $G'$ is
$SO(3)$, implying spherical symmetry. For a given representation of $G$, the
Hamiltonian is block diagonal, with each block corresponding to a
different representation of $G'$. In this paper, we show that it is possible
for one block to have a particularly simple structure.
We limit the discussion to the IBM.
We first find the classical limit of the
original quantum problem. This is a dynamical system with six degrees
of freedom. Imposing zero angular momentum suppresses three of these degrees
of freedom \cite{paar,thesis},
leaving three. The group structure of this reduced classical system is then
studied, leading to the identification of
what we call an effective group structure. In particular, there
is an effective group $G_{\rm eff} = U(3)$ which is analogous to $G=U(6)$ of
the original system. We then quantise this reduced classical model to obtain
a new quantum model. That this new model is different from the original is
not contradictory since more than one quantum problem can share the same
classical limit. Nevertheless, the solution of this new model
is similar to that of the original in the $J=0$ representation.
The idea that $J=0$ states correspond to a smaller dimension in
the classical mechanics was noticed in a very different context in
reference \cite{scc}.
This result is of some interest in the study of chaos in collective
nuclei \cite{paar,thesis,ibmchaos,appr}. The three dimensional system is
further reduced by boson number conservation so that there are only
two independent degrees of freedom. Two dimensional systems are
particularly amenable to the usual analyses of chaos since the
classical motion can be depicted on Poincar\'{e} sections and
periodic orbits are easier to find. We can also study the quantum
mechanical wavefunctions and Husimi \cite{husi} distributions in
detail. It is then possible to
study wavefunctions and look for localisation on unstable periodic
orbits \cite{heller} among other effects.
In section 2 we briefly describe the IBM including its classical limit. The
group chains of the effective $U(3)$ model are discussed in section 3 in the
context of the classical dynamics. In section 4 we discuss the allowed
representations of the effective group chains. We impose a constraint that
our model only admit states which belong to the symmetric representation of
the permutation group $S_3$. It is then shown that the representations have
the same structure as in the original $U(6)$ model. In section 5 we discuss a
specific classical Hamiltonian and show how it can be integrated at the three
dynamical symmetries of the effective model. In section 6 we describe a
quantum Hamiltonian with the required $U(3)$ symmetry and which corresponds
to the original $U(6)$ Hamiltonian. We find its eigenenergies at the
dynamical symmetry limits and show that they are equal to
the energies of the original model to leading order in the particle number.
Therefore this can be thought of as a semiclassical approximation.
The philosophy of this approach is similar to that of reference \cite{hl}.
\section{The Interacting Boson Model}
\label{sec:ibm}
In this section we review those features of the IBM which are of special
interest in this paper. See reference \cite{ibm} for a complete review.
The IBM is a model for the structure of even-even
collective nuclei which assumes that the monopole and quadrupole degrees of
freedom are the most important. It also assumes that all excitations are
bosonic because of the existence of pairing interactions which are dominant at
low energies. Therefore, we introduce one monopole boson operator
$s^{\dagger}$ and five quadrupole boson operators $d^{\dagger}_{\mu}$
where $\mu=-2,\ldots,2$.
The 36 bilinear operators
$\{s^{\dagger}s,s^{\dagger}d_{\mu},d^{\dagger}_{\mu}s,d^{\dagger}_{\mu}d_{\nu}
\}$ are the generators of a $U(6)$ algebra. This means that any Hamiltonian
constructed from the generators will commute with the $U(6)$ Casimir operators.
Since the system is bosonic, we only consider symmetric representations so
there is only one independent Casimir operator,
\begin{equation} \label{eq:qnum}
\hat{N}=s^{\dagger}s + \sum_{\mu}d^{\dagger}_{\mu}d_{\mu}
\end{equation}
The eigenvalue of this operator, $N$, is
the number of bosons (valence-nucleon pairs), i.e. half the number of valence nucleons.
Any eigenstate of such a Hamiltonian belongs to a specific $U(6)$
representation which is labelled by $N$.
In addition to the $U(6)$ symmetry, we demand spherical symmetry by requiring
that there be $SO(3)$ invariance.
There are three group chains which connect these two groups
\begin{equation} \label{eq:chain}
U(6) \supset \left\{
\begin{array}{c}
U(5) \supset O(5)\\
SU(3) \\
O(6) \supset O(5)
\end{array}
\right\} \supset SO(3) \supset SO(2).
\end{equation}
If we construct a Hamiltonian solely out of the Casimir invariants of one
chain, then it is solvable and we say that it has a dynamical symmetry.
The classical limit of the quantum model is obtained
\cite{hl,cllim2,cllim1,cllim3}
through the use of coherent states \cite{klaud}. These are defined as
\begin{equation} \label{eq:cohstate}
|\mbox{\boldmath $\alpha$}\rangle=
\exp(-|\mbox{\boldmath $\alpha$}|^{2}/2)\exp\Bigl(\alpha _{s}s^{\dagger}+
\sum_{\mu}\alpha_{\mu}d_{\mu}^{\dagger}\Bigr)|0\rangle
\end{equation}
where $|0\rangle$ is the vacuum state which contains no bosons.
Each coherent state is parametrised by six continuous complex variables
$\alpha_{j}$, where $j \in (s,-2,\ldots,2)$. Under time evolution a coherent
state $|\mbox{\boldmath $\alpha$}\rangle$ will evolve to a new state which is
approximately another coherent state $|\mbox{\boldmath $\alpha'$}\rangle$.
Therefore, it is sufficient to study the time dependence of the variables
$\alpha_{j}$. A time-dependent variational
approximation valid for large boson number \cite{klaud,bo}
shows that these variables evolve according to Hamilton's equations
\begin{equation}
\frac{d\alpha_j}{dt}=
\frac{\partial {\cal H} (\mbox{\boldmath $\alpha$})}
{\partial (i \alpha_j^*)} \;\;\;\;\;\;\;\;
\frac{d(i\alpha_j^*)}{dt}=-\frac{\partial {\cal H}
(\mbox{\boldmath $\alpha$})} {\partial \alpha_j}
\end{equation}
where
${\cal H} (\mbox{\boldmath $\alpha$}) =
\langle\mbox{\boldmath $\alpha$}|
\hat{H}|\mbox{\boldmath $\alpha$}\rangle$ is the classical
Hamiltonian and $\alpha_j$ and $i\alpha^*_j$ are canonical position and
momentum coordinates. In what follows we will denote quantum operators with
carats and their classical counterparts with script font.
The classical limit of any quantum operator, including the Hamiltonian, is
obtained by making the substitutions \cite{klaud,bo}
\begin{equation} \label{eq:dtoa}
d^{\dagger}_{\mu} \rightarrow \alpha^{*}_{\mu} \;\;\;\;\;\;
d_{\mu} \rightarrow \alpha_{\mu} \;\;\;\;\;\;
s^{\dagger} \rightarrow \alpha^{*}_{s} \;\;\;\;\;\;
s \rightarrow \alpha_{s}.
\end{equation}
For example, the classical limit of the number operator (\ref{eq:qnum}) is
${\cal N} = \alpha_s^* \alpha_s + \sum_{\mu}\alpha_\mu^*\alpha_\mu$. The fact
that it is conserved means that the motion is confined to a compact region of phase space.
This prescription of finding the classical limit is not unique. For example,
the coherent states defined in equation (\ref{eq:cohstate}) do not belong to
a specific $U(6)$
or $SO(3)$ representation. We might like them to have this property so that
in the classical dynamics we are studying a reduced dimensional phase space
on which the classical boson number and angular momentum are fixed. This has
been done for the $U(6)$ algebra \cite{cllim1} in which use of projected
coherent states
means that there is one fewer degree of freedom in the classical dynamics.
It has not been done for the $SO(3)$ algebra; presumably the resulting
phase space would be very topologically complicated. In this work we will
show that we can define a simple phase space for the special case of
zero angular momentum.
Another method of finding the classical limit is to identify the quantum
commutators with the classical Poisson brackets \cite{moyal}.
One then identifies each of
the generators as a phase space coordinate and uses the Casimir invariants to
eliminate variables; this has been done explicitly for the $SU(2)$ and $SU(3)$
algebras \cite{dim}. This is an elegant approach but is difficult in this
application because the bosonic nature of the IBM only allows
symmetric representations of $U(6)$ and it is not clear how to invoke
this constraint using this method.
In reference \cite{hl} the authors discuss how the variables
$\alpha_j$ can be related to the intrinsic variables of the
Bohr-Mottelson model \cite{bohrmot}.
These are the deformation parameters $\beta$ and
$\gamma$, the Euler angles $\Omega$ and the corresponding momenta $p_{\beta}$,
$p_{\gamma}$ and ${\cal L}_{1,2,3}$. The result is
\begin{mathletters} \label{eq:alpha}
\begin{eqnarray}
\alpha_s & = & \exp (-i\Theta)\sqrt{{\cal N}-\frac{1}{2}
\sum_{\mu}(p^*_{\mu}p_{\mu} + q^*_{\mu}q_{\mu})} \label{equationa}\\
\alpha_{\mu} & = & \exp (-i\Theta)(q^*_{\mu}+ip_{\mu}) \nonumber
\end{eqnarray}
where
\begin{eqnarray}
q_{\mu} & = & \sum_{\nu=-2}^2{\cal D}_{\mu\nu}^{(2)}(\Omega)a_{\nu}
\label{equationb}\\
p_{\mu} & = & \sum_{\nu=-2}^2{\cal D}_{\mu\nu}^{*(2)}(\Omega)b_{\nu}
\nonumber
\end{eqnarray}
and
\begin{equation} \label{equationc}
a_{\pm 2} = \frac{1}{\sqrt{2}}\beta\sin\gamma
\;\;\;\;\;\;\;\;\;\;
a_{\pm 1} = 0
\;\;\;\;\;\;\;\;\;\;
a_{0} = \beta\cos\gamma
\end{equation}
\begin{eqnarray}
b_{\pm 2} & = & \frac{1}{\sqrt{2}}\Bigl(\frac{p_{\gamma}}{\beta}\cos\gamma +
p_{\beta}\sin\gamma \pm \frac{i{\cal L}_3}{2\beta\sin\gamma}\Bigr) \nonumber\\
b_{\pm 1} & = & -\frac{1}{2\sqrt{2}\beta}\Bigl(\frac{i{\cal L}_1}
{\sin (\gamma-2\pi/3)} \pm \frac{{\cal L}_2}{\sin(\gamma-4\pi/3)}\Bigr)
\label{equationd}\\
b_0 & = & -\frac{p_{\gamma}}{\beta}\sin\gamma + p_{\beta}\cos\gamma.\nonumber
\end{eqnarray}
\end{mathletters}
Here ${\cal D}_{\mu\nu}^{(2)}(\Omega)$ are the Wigner matrices which are a
function of the three Euler angles $\Omega$ and $\Theta$ is a global phase
of no importance. This choice of variables explicitly conserves ${\cal N}$.
There is a simplification when the magnitude of the angular momentum
is zero \cite{paar,thesis}.
In that case we have that the ${\cal L}_i$ are zero and we can choose a frame
in which ${\cal D}_{\mu\nu}^{(2)}(\Omega)= \delta_{\mu\nu}$. Equation (\ref{eq:alpha})
then implies
{\samepage
\begin{eqnarray}
\alpha_s & = & \exp (-i\Theta)\sqrt{{\cal N}-\frac{1}{2}(\beta^2 + p_{\beta}^2
+ \frac{p_{\gamma}^2}{\beta^2})} \nonumber \\
\alpha_{\pm 2} & = & \frac{1}{2}\Bigl(\beta\sin\gamma +
i(\frac{p_{\gamma}}{\beta}\cos\gamma + p_{\beta}\sin\gamma)\Bigr)
\label{eq:alphas} \\
\alpha_{\pm 1} & = & 0 \nonumber \\
\alpha_0 & = & \frac{1}{\sqrt{2}}\Bigl(\beta\cos\gamma +
i(-\frac{p_{\gamma}}{\beta}\sin\gamma + p_{\beta}\cos\gamma)\Bigr).\nonumber
\end{eqnarray}
}
\noindent Then the motion is described by only two degrees of freedom, $\beta$
and $\gamma$.
For example, $\hat{H} = \hat{n}_d$ is a quantum Hamiltonian belonging to
the $U(5)$ dynamical symmetry. ($\hat{n}_d$ is the $U(5)$ Casimir operator and
equals $\sum d^{\dagger}_{\mu}d_{\mu}$.) Its classical limit for zero angular
momentum is ${\cal H} = (\beta^2+p_{\beta}^2+p_{\gamma}^2/\beta^2)/2$ which is
a harmonic oscillator in two dimensions. We have ignored terms
which arise from normal ordering since they vanish in the classical limit.
Therefore, the original $U(5)$ symmetry is manifest as a $U(2)$ symmetry in
this situation. We show in the next section that all of
the groups in the original model map to lower dimensional groups when we study
the classical problem with zero angular momentum.
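As a consistency check, the zero-angular-momentum expressions (\ref{eq:alphas}) can be fed into a computer-algebra system. The sketch below (our own check, written in Python with sympy and with the irrelevant phase $\Theta$ set to zero) verifies that $\sum_{\mu}\alpha^*_{\mu}\alpha_{\mu}$ indeed reduces to $(\beta^2+p_{\beta}^2+p_{\gamma}^2/\beta^2)/2$, the two-dimensional oscillator quoted above.
\begin{verbatim}
import sympy as sp

beta = sp.symbols('beta', positive=True)
gamma, pb, pg = sp.symbols('gamma p_beta p_gamma', real=True)

# zero-angular-momentum parametrisation (eq:alphas) with Theta = 0;
# alpha_{+-1} = 0 and alpha_2 = alpha_{-2}
a2 = (beta*sp.sin(gamma) + sp.I*(pg/beta*sp.cos(gamma) + pb*sp.sin(gamma))) / 2
a0 = (beta*sp.cos(gamma) + sp.I*(pb*sp.cos(gamma) - pg/beta*sp.sin(gamma))) / sp.sqrt(2)

n_d = 2*a2*sp.conjugate(a2) + a0*sp.conjugate(a0)     # sum over mu of |alpha_mu|^2
target = (beta**2 + pb**2 + pg**2/beta**2) / 2
print(sp.simplify(sp.expand(n_d - target)))           # -> 0
\end{verbatim}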
This dimensional reduction can also be understood on the quantum level by
counting the number of distinct eigenvalues which is necessary to specify a
quantum state. For a fixed value of $N$ we need five quantum numbers to do
this.
We always have angular momentum $J$ and one of its components $M$ as good
eigenvalues. Therefore, in general we need three additional quantum numbers so
that the system is three dimensional \cite{cllim3}. However, in the special
case
that $J=0$, it follows that $K$, the projection of $J$ onto the symmetry axis,
is also zero. ($K$ is also a
``missing quantum number'' in the decomposition from $SU(3)$ to $SO(3)$.)
Therefore, once we specify $J=0$, we
only need to specify two additional quantum numbers to identify a state,
which means that the system is essentially two dimensional.
\section{Effective Group Chains}
\label{sec:efg}
In this section, we discuss the various group chains
which arise in classifying
the classical behaviour for zero angular momentum. First, it is convenient to
define new variables
\begin{equation} \label{eq:alphapm}
\alpha_{p,m} = \frac{1}{\sqrt{2}}(\alpha_2\pm\alpha_{-2}).
\end{equation}
Equation (\ref{eq:alphas}) then guarantees that $\alpha_m=\alpha_{\pm 1}=0$.
After making this canonical transformation we are free to use the fact that
$\alpha_m=0$ so that $\alpha_{\pm 2}=\alpha_p/\sqrt{2}$.
For now, we will explicitly
keep $\alpha_s$ and not use ${\cal N}$ conservation to eliminate it.
We then have a problem in three degrees of freedom given by $\alpha_s$,
$\alpha_0$ and $\alpha_p$ and can define nine bilinear objects
\begin{equation}
g_{ij} = \alpha^*_i\alpha_j \;\;\;\;\;\; i,j\in (s,0,p).
\end{equation}
Their Poisson brackets are
$\{g_{ij},g_{kl}\}= \frac{1}{i}(g_{il}\delta_{jk} - g_{kj}\delta_{il})$, which
is the classical version \cite{moyal} of the quantum commutator relation
$[\hat{g}_{ij},\hat{g}_{kl}]= (\hat{g}_{il}\delta_{jk} -
\hat{g}_{kj}\delta_{il}).$ These are the
commutator relations for the generators of the $U(3)$ algebra and
we conclude that classically we have a $U(3)$ algebra. The Casimir invariant
$\sum_i\alpha_i^*\alpha_i$, has zero Poisson bracket with any Hamiltonian we
construct from the generators and is therefore always a
constant of motion. This is
just the particle number and is numerically equal to ${\cal N}$.
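These bracket relations are a purely algebraic consequence of the canonical brackets between $\alpha_j$ and $i\alpha^*_j$ and can be verified symbolically. The following minimal sketch (ours) treats $\alpha_j$ and $\alpha^*_j$ as independent symbols, implements the Poisson bracket associated with the canonical pairs $(\alpha_j,i\alpha^*_j)$, and checks $\{g_{ij},g_{kl}\}=\frac{1}{i}(g_{il}\delta_{jk}-g_{kj}\delta_{il})$ for all index combinations.
\begin{verbatim}
import sympy as sp

labels = ['s', '0', 'p']
al  = {j: sp.Symbol('alpha_'  + j) for j in labels}   # alpha_j
alc = {j: sp.Symbol('alphac_' + j) for j in labels}   # alpha_j^*, treated as independent

def g(i, j):                 # generators g_{ij} = alpha_i^* alpha_j
    return alc[i] * al[j]

def pbracket(F, G):          # Poisson bracket for the canonical pairs (alpha_j, i alpha_j^*)
    return sum(sp.diff(F, al[j]) * sp.diff(G, alc[j])
               - sp.diff(F, alc[j]) * sp.diff(G, al[j]) for j in labels) / sp.I

delta = lambda x, y: 1 if x == y else 0

for i in labels:
    for j in labels:
        for k in labels:
            for l in labels:
                lhs = pbracket(g(i, j), g(k, l))
                rhs = (g(i, l) * delta(j, k) - g(k, j) * delta(i, l)) / sp.I
                assert sp.simplify(lhs - rhs) == 0
print("U(3) Poisson bracket relations verified")
\end{verbatim}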
We wish to consider the nature of the possible Hamiltonians which can be
constructed from
these nine generators. These Hamiltonians are integrable if they have three
independent constants in involution \cite{licht}.
This is guaranteed if there is a dynamical symmetry \cite{cllim3,feng},
so we want to know the possible dynamical symmetries. To find these,
it is helpful to refer to the original $U(6)$ model. Its first group chain is
obtained by considering a $U(5)$ subalgebra which consists of 25 generators.
Of these generators, all but four are zero for zero angular momentum.
The remaining ones are
\begin{equation}
\begin{array}{ccc}
\alpha^*_p\alpha_p & \;\;\;\;\;\;\;\;\;\; & \alpha^*_0\alpha_p\\
\alpha^*_p\alpha_0 & \;\;\;\;\;\;\;\;\;\; & \alpha^*_0\alpha_0
\end{array}
\end{equation}
which have a $U(2)$ algebra. We can consider an $SO(2)$ subalgebra whose
generator is $g_3=i(\alpha^*_p\alpha_0 - \alpha^*_0\alpha_p)$. The reason for
this choice will be discussed below. Therefore, we can identify one
group chain as
\begin{equation} \label{eq:u2}
U(3) \supset U(2) \supset SO(2)
\end{equation}
If we write down a Hamiltonian in terms of the Casimir invariants of these
groups, then these invariants are constants of motion in involution and the
motion is integrable.
The second group chain of the $U(6)$ model is $SU(3)$ which has eight
generators \cite{ibm}.
The first three are the three components of the angular momentum
and the other five are components of the rank two quadrupole tensor,
\begin{equation} \label{eq:qclass}
{\cal Q}_{\mu} = \alpha_s^*\tilde{\alpha}_{\mu} +
\alpha_{\mu}^*\alpha_s
- \frac{\sqrt{7}}{2}(\alpha^*\tilde{\alpha})^{(2)}_{\mu}
\end{equation}
where the last term means that we couple the objects in the brackets to
angular momentum two and $\tilde{\alpha}_{\mu}=(-1)^{\mu}\alpha_{-\mu}$.
In the present case all of the angular momenta components are zero. In
addition, ${\cal Q}_{\pm 1} = 0$ and ${\cal Q}_2={\cal Q}_{-2}$ so we are
left with two independent quantities
\begin{eqnarray}
q_0 & = & \alpha^*_s\alpha_0 + \alpha^*_0\alpha_s -
\frac{1}{\sqrt{2}}(\alpha^*_p\alpha_p - \alpha^*_0\alpha_0)\\
q_2 & = & \alpha^*_s\alpha_p + \alpha^*_p\alpha_s -
\frac{1}{\sqrt{2}}(\alpha^*_p\alpha_0 + \alpha^*_0\alpha_p),\nonumber
\end{eqnarray}
where $q_0={\cal Q}_0$ and $q_2=\sqrt{2}{\cal Q}_{\pm 2}$.
It is straightforward to show that $q_0$ and $q_2 $ have zero Poisson bracket
so they are the generators of a $U(1) \times U(1)$ algebra. We can therefore
identify the group chain as
\begin{equation} \label{eq:u1u1}
U(3) \supset U(1) \times
U(1).
\end{equation}
The third group chain is $O(6)$ in the original model. It has 15
generators. There are three components of the angular momentum, which are
zero. There are five components of a quadrupole tensor which is the same as
in equation (\ref{eq:qclass}) but without the last term. As in the
$SU(3)$ case ${\cal Q}_{\pm 1}=0$ and ${\cal Q}_2={\cal Q}_{-2}$, so only two
components are independent. Finally, there are seven components of a rank
three octupole
tensor ${\cal O}$. These are all zero except the $\mu=\pm 2$ components which
are equal. Therefore, we have three independent quantities
\begin{eqnarray} \label{eq:so3gen}
g_1 & = & \alpha^*_0\alpha_s + \alpha^*_s\alpha_0 \nonumber\\
g_2 & = & \alpha^*_p\alpha_s + \alpha^*_s\alpha_p \\
g_3 & = & i(\alpha^*_p\alpha_0 - \alpha^*_0\alpha_p)\nonumber
\end{eqnarray}
with $g_1={\cal Q}_0$, $g_2=\sqrt{2}{\cal Q}_{\pm 2}$ and $
g_3=2i{\cal O}_{\pm 2}$. These have an $SO(3)$ structure.
We can consider $g_3$ to be the generator of an $SO(2)$ algebra, as in the
$U(2)$ chain. We then have the group chain
\begin{equation} \label{eq:o3}
U(3) \supset SO(3) \supset SO(2).
\end{equation}
In summary, the original $U(6)$ group chain structure as shown in equation
(\ref{eq:chain}) has the following simpler structure when we consider the
classical model for zero angular momentum:
\begin{equation} \label{eq:chaineff}
U(3) \supset \left\{
\begin{array}{c}
U(2) \supset SO(2)\\
U(1) \times U(1)\\
SO(3) \supset SO(2).
\end{array}
\right.
\end{equation}
A few comments are in order. In the original $U(6)$ model, there is a
constraint that all group chains must contain the $SO(3)$ subalgebra for
which
the $\alpha_{\mu}^*$ are rank two spherical tensors. For example, this
constrains the $U(5)$ subalgebra to be that one which contains all the
$\alpha_{\mu}^{*}\alpha_{\nu}$ generators and none of the generators which
contain $\alpha_s$ or $\alpha_s^*$. An analogous constraint on the allowed
representations in this picture is that all states must be in a symmetric
representation of the $S_3$ permutation group, as will be discussed.
Another important point is that for the classical mechanics, these group
chains are exact. The Poisson brackets of the relevant degrees of freedom
have precisely the structure appropriate for the three chains derived above.
There are slight problems upon requantisation, as will be discussed below.
It is also useful to motivate the choice of $SO(2)$ generator for the $U(2)$
and $SO(3)$ algebra chains. Consider the original $O(5)$ Casimir invariant
\cite{ibm}
\begin{equation}
{\cal C}_2(O(5)) = 4\Bigl(
(\alpha^*\tilde{\alpha})^{(3)}\cdot(\alpha^*\tilde{\alpha})^{(3)}
+ (\alpha^*\tilde{\alpha})^{(1)}\cdot(\alpha^*\tilde{\alpha})^{(1)}
\Bigr).
\end{equation}
The second term is zero for zero angular momentum. Also
$(\alpha^*\tilde{\alpha})^{(3)}_{\pm 1,\pm 3} = 0$ since no combination of
$\alpha_0$ or $\alpha_{\pm 2}$ can combine to give an odd index. In addition
$(\alpha^*\tilde{\alpha})^{(3)}_0 = 0$ due to the equality of $\alpha_2$ and
$\alpha_{-2}$. This leaves
\begin{equation}
(\alpha^*\tilde{\alpha})^{(3)}_{\pm 2} = \pm\frac{1}{2}
(\alpha^*_p\alpha_0 - \alpha^*_0\alpha_p).
\end{equation}
Therefore,
\begin{eqnarray}
{\cal C}_2(O(5)) & = & -2(\alpha^*_p\alpha_0 - \alpha^*_0\alpha_p)^2 \\
& = & 2g_3^2, \nonumber
\end{eqnarray}
where $g_3$ is precisely the $SO(2)$ generator in the first and third group
chains. In fact, equations (\ref{eq:alphas}) and (\ref{eq:so3gen}) imply
$g_3=p_{\gamma}$ so that
$g_3$ generates $\gamma$ rotations. Thus, $SO(2)$ symmetry is the same as
$\gamma$ independence and implies $p_{\gamma}$ conservation. In the
zero angular momentum limit,
this result agrees with the well known connection between the $O(5)$
Casimir invariant and $p_{\gamma}$ \cite{hl}.
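The identification $g_3=p_{\gamma}$ can be confirmed in the same spirit, directly from (\ref{eq:alphas}), (\ref{eq:alphapm}) and the definition (\ref{eq:so3gen}); the short sketch below (ours, again with $\Theta=0$) returns zero for $g_3-p_{\gamma}$.
\begin{verbatim}
import sympy as sp

beta = sp.symbols('beta', positive=True)
gamma, pb, pg = sp.symbols('gamma p_beta p_gamma', real=True)

a2 = (beta*sp.sin(gamma) + sp.I*(pg/beta*sp.cos(gamma) + pb*sp.sin(gamma))) / 2
a0 = (beta*sp.cos(gamma) + sp.I*(pb*sp.cos(gamma) - pg/beta*sp.sin(gamma))) / sp.sqrt(2)
ap = sp.sqrt(2) * a2    # alpha_p = (alpha_2 + alpha_{-2})/sqrt(2), with alpha_2 = alpha_{-2}

g3 = sp.I * (sp.conjugate(ap)*a0 - sp.conjugate(a0)*ap)
print(sp.simplify(sp.expand(g3 - pg)))                # -> 0
\end{verbatim}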
It should be emphasised that the $SO(3)$ and $SO(2)$ groups discussed here
are not groups of spatial rotations but are groups describing abstract
transformations among the bosons.
\section{Representations of the Effective Groups}
In this section, we discuss the allowed representations of the three
group chains. We start with the first group chain which is given by equation
(\ref{eq:u2}). $U(3)$ is labelled by the quantum number $N$; this is the same
as the $U(6)$ eigenvalue since in each case $N$ refers to the number of bosons.
The $U(2)$ representation label is denoted
by $n$ and can take all integer values from
0 to $N$. For a given representation of $U(2)$, the allowed representations
of $SO(2)$ are $\mu = -n, -n+2,\ldots,n.$
This selection of every other representation is a general property of the group
chain $U(d)\supset SO(d)$. For $d=2$, it is derived in reference
\cite{messiah} in a discussion of two dimensional harmonic oscillators.
However, not all of these $SO(2)$ representations are physically realised.
There is an additional constraint that all wavefunctions are in a symmetric
representation of $C_{3v}\approx S_3$, the three point permutation group
\cite{bohrmot}. This is most easily seen in the configuration
space shown in Fig.~1 for which $\beta$ and
$\gamma$ are polar coordinates. Since $U(3)$ is
a compact group there is a constraint that $\beta\leq 2$. The heavy lines
at $\gamma=0$, $2\pi/3$ and $4\pi/3$ denote prolate symmetry and the dashed
lines at $\gamma=\pi/3$, $\pi$ and $5\pi/3$ denote oblate symmetry. There are
six physically indistinguishable regions connected by the six elements of
the $C_{3v}$ group. This means that all states must belong to the symmetric
representation of this group. We can think of the configuration space as
being given by just one of the six domains. This is not the case for general
angular momentum, since the angular momentum axis picks out a direction which
differentiates among the domains.
More fundamentally, the fact that the system has the symmetry of the three
point permutation group arises from the fact that for zero angular
momentum no axis is special so any relabelling of the $x$, $y$ and $z$
axes leaves the system invariant. Such relabellings correspond to spatial
rotations and reflections. The fact that the zero angular momentum
states belong to the symmetric representation of the $S_3$ group arises because
they must be invariant with respect to all spatial rotations and reflections
and therefore with respect to all relabellings. Only the symmetric
representation has this property so it follows that the zero angular
momentum states belong to the symmetric representation of $S_3$. Therefore,
the condition of having zero angular momentum specifies both the $S_3$
symmetry and the relevant representation.
It is a simple result of group theory \cite{gt} that the functions
$\cos 3\mu\gamma$, with $\mu$ a non-negative integer, belong to the symmetric
$A$ representation of $C_{3v}$. (The functions $\sin 3\mu\gamma$ belong to the
$B$ representation and all other possibilities belong to the two dimensional
$E$ representation.) Thus, we can label the allowed representations of $SO(2)$
by $\mu$ non-negative and divisible by three. This prescription yields
precisely the same number of allowed states as the original $U(6)$ problem.
This is simple enough to show in general but is most easily demonstrated
with an example.
Consider the $U(3)$ representation $N=6$. The possible representations of
the first group chain are shown in Table~I. For comparison, we also show
the representation labels for the first group chain
of the original $U(6)$ model \cite{ibm} in the $J=0$ representation as shown
in equation (\ref{eq:chain}). The $U(5)$ algebra is labelled by the eigenvalue
$n_d$, the $O(5)$ algebra is labelled by $v$ and the missing quantum number
in going from $O(5)$ to $O(3)$ is identically zero and is not considered.
In each case, the number of allowed states is the same. What is more, the
representation labels are identical. This is a general result; the
representation labels of the effective groups can be identified with
the labels of the original model.
Since the third group chain is similar to the first, we discuss it next. Recall
that it is given by equation (\ref{eq:o3}). Following the earlier discussion,
the allowed $SO(3)$ representations include every other
integer counting down from $N$; we label them by the quantum number $I$.
The decomposition from $SO(3)$ to $SO(2)$ follows the normal rule so we have,
\begin{eqnarray}
I & = & N,N-2,\ldots,\mbox{1 or 0} \\
\mu & = & -I,-I+1,\ldots,I\nonumber
\end{eqnarray}
However, as above, we only consider $\mu$ non-negative and divisible
by three. The results for $N=6$ are shown in Table~II. We have seven states
as in the previous group chain. The multiplicities of
the $SO(2)$ representations are the same for both group chains; in
each case we have $\mu=0^4,3^2,6$. The left half of the table shows the result
for the $J=0$ representation of the third group chain shown in equation
(\ref{eq:chain}). $O(6)$ and $O(5)$ representations are labelled by
$\sigma$ and $v$ respectively. We again find that the group labels are the
same.
We conclude by considering the second group chain. We are
constrained to select only those representations which are symmetric in $S_3$.
To do this it is convenient to think of this group chain as
\begin{equation}
\begin{array}{ccccc}
U(3) & \hspace{.3cm} & \supset & \hspace{.3cm} & U(1)\times U(1)\times
U(1).\\
N & & & &
n_s \hspace{1cm} n_+ \hspace{1cm} n_-
\end{array}
\end{equation}
where each number labels the representation of the group above it.
Then our states are $|n_s\rangle |n_+\rangle |n_-\rangle$ with $N=n_s+n_++n_-$.
In this case, the
$S_3$ group acts to interchange the labels so that the representation which is
completely symmetric under this group is
\begin{eqnarray}
|\phi(n_s,n_-,n_+)\rangle & = &\frac{1}{\sqrt{6}}\Bigl(
|n_s\rangle |n_+\rangle |n_-\rangle +
|n_-\rangle |n_s\rangle |n_+\rangle +
|n_+\rangle |n_-\rangle |n_s\rangle \\
& & + |n_s\rangle |n_-\rangle |n_+\rangle
+ |n_+\rangle |n_s\rangle |n_-\rangle
+ |n_-\rangle |n_+\rangle |n_s\rangle\Bigr).\nonumber
\end{eqnarray}
We can label the states with three integers
\begin{equation} \label{eq:nval}
k \geq l \geq m \;\;\;\;\;\;\;\;\; k+l+m=N
\end{equation}
such that the states are the symmetric combination of distributing the integers
$(k,l,m)$ among the eigenvalues $(n_s,n_-,n_+)$. Our representations are then
\begin{eqnarray}
m & = & 0,1,\ldots,\left[\frac{N}{3}\right] \nonumber\\
l & = & m,m+1,\ldots,\left[\frac{N-m}{2}\right] \label{eq:ineq}\\
k & = & N - l - m\nonumber
\end{eqnarray}
where $[x]$ is the largest integer less than or equal to $x$.
This guarantees that the
conditions (\ref{eq:nval}) are satisfied. It is convenient to define two
other labels
\begin{eqnarray}
a & = & 2(k-l) \label{eq:pq}\\
b & = & 2(l-m). \nonumber
\end{eqnarray}
By equation (\ref{eq:ineq}) these are both non-negative.
We can now determine the allowed representations for this group chain with
$N=6$. The seven pairs of values of $a$ and $b$ are shown in Table~III. Also
shown are the representations of the second
group chain of equation (\ref{eq:chain}) which are
labelled by two eigenvalues $(\lambda,\mu)$. As before, the representation
labels are identical. In terms of Young tableaux, the procedure
for selecting the allowed representations of $U(1)\times U(1)$ is
identical to that of finding the allowed representations of $SU(3)$
\cite{gt}.
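The counting underlying Tables~I--III can be reproduced with a few lines of code. The sketch below (our own bookkeeping) enumerates, for $N=6$, the labels allowed by each of the three effective chains, using the rules stated above for the first and third chains and equations (\ref{eq:ineq}) and (\ref{eq:pq}) for the second; each chain yields seven states, and the $SO(2)$ multiplicities of the first and third chains come out as $\mu=0^4,3^2,6$, as quoted above.
\begin{verbatim}
N = 6

# first chain  U(3) > U(2) > SO(2):  n = 0..N, mu = -n,-n+2,...,n, keep mu >= 0 divisible by 3
chain1 = [(n, mu) for n in range(N + 1)
          for mu in range(-n, n + 1, 2) if mu >= 0 and mu % 3 == 0]

# third chain  U(3) > SO(3) > SO(2):  I = N, N-2, ..., mu = -I..I, keep mu >= 0 divisible by 3
chain3 = [(I, mu) for I in range(N, -1, -2)
          for mu in range(-I, I + 1) if mu >= 0 and mu % 3 == 0]

# second chain U(3) > U(1) x U(1):  k >= l >= m >= 0, k + l + m = N, (a,b) = (2(k-l), 2(l-m))
chain2 = []
for m in range(N // 3 + 1):
    for l in range(m, (N - m) // 2 + 1):
        k = N - l - m
        chain2.append((2 * (k - l), 2 * (l - m)))

print(len(chain1), len(chain3), len(chain2))  # -> 7 7 7
print(sorted(chain1))  # [(0, 0), (2, 0), (3, 3), (4, 0), (5, 3), (6, 0), (6, 6)]
print(sorted(chain2))  # [(0, 0), (0, 6), (2, 2), (4, 4), (6, 0), (8, 2), (12, 0)]
\end{verbatim}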
In conclusion, we see that within the $U(3)$ picture there is a very
clear way to find the allowed representations of each group chain. These
labels are identical to those of the original $U(6)$ model. At this level,
the correspondence is exact since the effective group
chains give the same numbers of states with the same multiplicities
and the same group labels.
However, once
we try to calculate quantum eigenvalues, we will see that there are
differences and this $U(3)$ picture will be shown to be a leading order
semiclassical approximation.
\section{The Classical Hamiltonian}
It is helpful at this stage to describe a classical Hamiltonian which can
interpolate among the three dynamical symmetries. One choice is the extended
consistent-Q Hamiltonian \cite{cw88} whose classical limit for arbitrary
angular momentum is \cite{thesis,appr}
\begin{equation}
{\cal H} = \eta{\cal NU} - (1-\eta){\cal Q}^{\chi}\cdot{\cal Q}^{\chi}.
\end{equation}
This depends on two parameters, $\eta$ and $\chi$. The quantity ${\cal U}$ is
the classical analogue of $\hat{n}_d$. For zero angular momentum,
we have ${\cal Q}_{\pm 1}=0$ and ${\cal Q}_2={\cal Q}_{-2}$ so the Hamiltonian
reduces to
\begin{equation} \label{eq:cham}
{\cal H} = \eta{\cal NU} -
(1-\eta)\Bigl({\cal Q}_0^2(\bar{\chi}) + 2{\cal Q}_2^2(\bar{\chi})\Bigr)
\end{equation}
with
\begin{eqnarray}
{\cal U} & = & \alpha_0^*\alpha_0 + \alpha_p^*\alpha_p \nonumber\\
{\cal Q}_0(\bar{\chi}) & = & \alpha_0^*\alpha_s + \alpha_s^*\alpha_0 -
\frac{\bar{\chi}}{\sqrt{2}}(\alpha_p^*\alpha_p - \alpha_0^*\alpha_0)
\label{eq:cterms}\\
{\cal Q}_2(\bar{\chi}) & = & \Bigl(\alpha_p^*\alpha_s + \alpha_s^*\alpha_p -
\frac{\bar{\chi}}{\sqrt{2}}(\alpha_p^*\alpha_0 + \alpha_0^*\alpha_p)\Bigr)
/\sqrt{2}. \nonumber
\end{eqnarray}
where we have made use of the substitution (\ref{eq:alphapm}).
The first term in equation (\ref{eq:cterms}) describes vibrations and the
second describes
quadrupole interactions. The parameter $\bar{\chi}$ is related to the more
commonly used $\chi$ by the relation $\bar{\chi} = \chi/(-\sqrt{7}/2)$.
$n_d$ is a 1-body term which scales as $\cal N$ while the quadrupole term is
2-body and scales as ${\cal N}^2$. Therefore we have multiplied the first term
by $\cal N$ \cite{thesis,appr} so that both terms scale the same.
This Hamiltonian can also be expressed in terms of the intrinsic coordinates
defined in equation (\ref{eq:alphas}) for which we find \cite{paar,thesis,hl}
${\cal U}=(p_{\beta}^2
+\beta^2+p_{\gamma}^2/\beta^2)/2$ and
{\samepage
\begin{eqnarray}
{\cal H} & = & \eta{\cal N}{\cal U} -(1-\eta)\Bigl[2\beta^2({\cal
N}-{\cal U}) \nonumber\\
& & -\bar{\chi}\sqrt{{\cal N}-{\cal U}}
\Bigl((p_{\gamma}^2/\beta-\beta p_{\beta}^2-\beta^3)\cos
3\gamma + 2p_{\beta}p_{\gamma}\sin 3\gamma\Bigr) \\
& & +\frac{\bar{\chi}^2}{2}({\cal U}^2 - p_{\gamma}^2)\Bigr].
\nonumber
\end{eqnarray}}
The Hamiltonian for $\bar{\chi}=0$ has the reasonably simple form
\begin{equation}
{\cal H} = \eta{\cal N}{\cal U} -2(1-\eta)\beta^2({\cal N}-{\cal U}).
\end{equation}
This is clearly $\gamma$-independent so it has an $SO(2)$ symmetry for all
values
of $\eta$. This is enough to ensure integrability because ${\cal N}$,
${\cal H}$ and $p_{\gamma}^2$ are three constants of motion in involution.
The Hamiltonian has additional $U(2)$ and $SO(3)$ symmetries at
the values $\eta=1$
and $\eta=0$ respectively. These limits are called over-integrable.
Two familiar examples of over-integrable systems in three dimensions are
the harmonic oscillator and the Coulomb potential. These have $U(3)$
and $SO(4)$ symmetries respectively \cite{golds} even though the $SO(3)$
symmetry of spherical rotations is enough to ensure integrability.
Another case of interest is $\bar{\chi}=1$ and $\eta=0$, for which
the Hamiltonian has a $U(1)\times U(1)$ symmetry. This is not manifest if we
consider the Hamiltonian in the original coordinates $\alpha_j$, but a change
of variables makes it clear \cite{thesis}. Define new coordinates $\zeta_j$
by
\begin{eqnarray}
\alpha_s & = & (\zeta_s-\zeta_+ -\zeta_-)/\sqrt{3} \nonumber\\
\alpha_0 & = & (2\zeta_s + \zeta_+ +\zeta_-)/\sqrt{6} \\
\alpha_p & = & (\zeta_+ - \zeta_-)/\sqrt{2}.\nonumber
\end{eqnarray}
This transformation is both unitary and canonical. $\zeta_s=
(\alpha_s+\sqrt{2}\alpha_0)/\sqrt{3}$ is the classical analogue of the $SU(3)$
boson condensate \cite{ami}.
Straightforward algebra leads to
\begin{equation} \label{eq:hu1u1}
{\cal H} = -2({\cal J}_s^2+{\cal J}_+^2+{\cal J}_-^2-{\cal J}_+{\cal J}_-
-{\cal J}_+{\cal J}_s-{\cal J}_-{\cal J}_s),
\end{equation}
where ${\cal J}_i=\zeta_i^*\zeta_i$.
It is clear that $\{{\cal J}_i,{\cal H}\}=0$, so
the three ${\cal J}_i$ constitute a set of independent constants of motion.
Their
existence implies integrability; in fact they are precisely the action
coordinates of the problem \cite{thesis}.
We can use the fact that ${\cal N} = {\cal J}_s+{\cal J}_++{\cal J}_-$
to obtain
\begin{equation}
{\cal H} =
-\Bigl[2{\cal N}^2 -6{\cal N}({\cal J}_++{\cal J}_-)
+ 6({\cal J}_+^2+{\cal J}_-^2+{\cal J}_+{\cal J}_-)\Bigr].
\end{equation}
It is now clear that the meaning of the $U(1)\times U(1)$ symmetry is that
there are two independent oscillations
\begin{equation}
\zeta_{\pm}(t) = \zeta_{\pm}(0)\exp(i\Omega_{\pm}t)
\end{equation}
with frequencies
\begin{equation}
\Omega_{\pm} = 6{\cal N} - 12{\cal J}_{\pm} - 6{\cal J}_{\mp}.
\end{equation}
The frequencies depend explicitly on the amplitudes of the motion. For small
oscillations they are approximately degenerate with values
close to $6{\cal N}$. This corresponds to vibrations near the potential
minimum at $\gamma=0$ and $\beta=\sqrt{4{\cal N}/3}$
which are approximately degenerate \cite{hl}.
\section{The Quantum Hamiltonian}
We next consider the effect of quantising the classical system discussed
above. As a first step we will define boson creation and annihilation
operators by reversing equation (\ref{eq:dtoa})
\begin{equation} \label{eq:atod}
\alpha_p^* \rightarrow d^{\dagger}_p \;\;\;\;\;
\alpha_p \rightarrow d_p \;\;\;\;\;
\alpha_0^* \rightarrow d^{\dagger}_0 \;\;\;\;\;
\alpha_0 \rightarrow d_0 \;\;\;\;\;
\alpha^*_s \rightarrow s^{\dagger} \;\;\;\;\;
\alpha_s \rightarrow s.
\end{equation}
In general, there are ordering ambiguities in going from a classical to a
quantum Hamiltonian. Here a natural ordering suggests itself. We started with
a $U(6)$ quantum Hamiltonian, found its classical limit and then recognised
that
there was an effective $U(3)$ chain which describes zero angular momentum.
At no point was it
necessary to change the ordering of terms in the Hamiltonian. Therefore, we
will take the $U(3)$ quantum Hamiltonian with the same ordering as the
original quantum Hamiltonian.
The result of quantising equations (\ref{eq:cham}) and (\ref{eq:cterms}) is
\begin{mathletters} \label{eq:qham}
\begin{equation} \label{eqa}
\hat{H} = N\eta\hat{n}_d - (1-\eta)\Bigl(\hat{Q}^2_0(\bar{\chi})
+2\hat{Q}^2_2(\bar{\chi})\Bigr)
\end{equation}
where
\begin{eqnarray}
\hat{n}_d & = & d^{\dagger}_0 d_0 + d^{\dagger}_p d_p\nonumber\\
\hat{Q}_0(\bar{\chi}) & = & d^{\dagger}_0s + s^{\dagger}d_0 -
\frac{\bar{\chi}}{\sqrt{2}}(d^{\dagger}_pd_p - d^{\dagger}_0d_0)
\label{eqb}\\
\hat{Q}_2(\bar{\chi}) & = & \Bigl(d^{\dagger}_ps + s^{\dagger}d_p -
\frac{\bar{\chi}}{\sqrt{2}}(d^{\dagger}_pd_0 + d^{\dagger}_0d_p)\Bigr)
/\sqrt{2} \nonumber
\end{eqnarray}
\end{mathletters}
It is worth stressing that this quantum Hamiltonian was obtained in a three
step
process. We first found the classical limit of the $U(6)$ model. We then
used a dimensional reduction appropriate for zero angular momentum. Finally,
we requantised using a natural ordering. The effect of these three steps is
formally the same as substituting
\begin{equation}
\begin{array}{ccc}
d_{\pm 1} = 0 & \;\;\;\;\;\;\;\;\;\;\;\;\;\;\; & d^{\dagger}_{\pm 1} = 0 \\
d_2 = d_{-2} & \;\;\;\;\;\;\;\;\;\;\;\;\;\;\; &
d^{\dagger}_2 = d^{\dagger}_{-2}
\end{array}
\end{equation}
in the original Hamiltonian.
We know that these substitutions are inconsistent
because Heisenberg's uncertainty principle implies that two noncommuting
variables can not be simultaneously equal to zero. The error is in the
requantisation since we should include the effects of the zero point motion
of the three neglected degrees of freedom. These contribute in the next to
leading order term in $\hbar$ (since $\hbar\sim 1/N$ \cite{bo}).
The conclusion is that
we have defined a new quantum problem of lower dimension whose solution is
a semi-classical approximation to the original $U(6)$ problem. In reference
\cite{hl} the authors found a similar ${\cal O}(1/N)$ discrepancy after a
similar analysis.
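For completeness we note that the Hamiltonian (\ref{eq:qham}) is small enough to be diagonalised numerically in the three-mode Fock basis at fixed $N$. The sketch below (our own illustration; it works in the full symmetric $U(3)$ space and does not impose the $S_3$ projection of section 4) builds the matrix of $\hat{H}$ for given $N$, $\eta$ and $\bar{\chi}$; for $\eta=1$ it reproduces the spectrum $E=Nn_d$ discussed in the next paragraph.
\begin{verbatim}
import itertools
import numpy as np

def u3_hamiltonian(N, eta, chibar):
    # symmetric Fock states |n_s, n_0, n_p> with n_s + n_0 + n_p = N
    basis = [b for b in itertools.product(range(N + 1), repeat=3) if sum(b) == N]
    index = {b: i for i, b in enumerate(basis)}
    dim = len(basis)

    def bilinear(c, d):
        # matrix of  b_c^dagger b_d  (modes 0, 1, 2 correspond to s, d_0, d_p)
        M = np.zeros((dim, dim))
        for b in basis:
            if b[d] == 0:
                continue
            nb = list(b); nb[d] -= 1; nb[c] += 1
            amp = np.sqrt(b[d] * (b[c] + (0 if c == d else 1)))
            M[index[tuple(nb)], index[b]] += amp
        return M

    S, D0, DP = 0, 1, 2
    nd = bilinear(D0, D0) + bilinear(DP, DP)
    Q0 = (bilinear(D0, S) + bilinear(S, D0)
          - chibar / np.sqrt(2) * (bilinear(DP, DP) - bilinear(D0, D0)))
    Q2 = (bilinear(DP, S) + bilinear(S, DP)
          - chibar / np.sqrt(2) * (bilinear(DP, D0) + bilinear(D0, DP))) / np.sqrt(2)
    return N * eta * nd - (1 - eta) * (Q0 @ Q0 + 2 * Q2 @ Q2)

N = 6
E = np.linalg.eigvalsh(u3_hamiltonian(N, eta=1.0, chibar=0.0))
print(sorted(set(np.round(E, 6))))        # -> multiples of N: 0, 6, 12, ..., 36
\end{verbatim}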
We next discuss the eigenenergies of the $U(3)$ Hamiltonian above.
For $\eta=1$, we have $\hat{H} = N\hat{n}_d$ which is proportional to the
linear $U(2)$ Casimir operator. The energies are then
\begin{equation}
E_i = Nn_{di}
\end{equation}
where $i$ indicates the $i$-th state and $n_{di}$ labels its $U(2)$
representation. This result agrees exactly with the result for the original
$U(6)$ vibrational limit. This is not a general result but arises because
there are no ordering ambiguities in our choice of Hamiltonian. Consider the
more common choice for a vibrational
classical Hamiltonian in $d$ dimensions: $H=\sum_i(p_i^2+x_i^2)/2$.
Its quantum energies are $E=\sum_i n_i+d/2$. In this case, neglecting
degrees of freedom would result in a discrepancy in the second term. We will
see that such higher order discrepancies are the rule, not the exception.
As mentioned, the semiclassical limit is obtained for large
$N$. All group labels are of order $N$, so the eigenvalues of the quantum
Hamiltonian (\ref{eq:qham})
have leading order terms which are quadratic in the
group labels and the next order terms are linear.
We next consider the third group chain which corresponds to
$\eta=\bar{\chi}=0$. In that case,
\begin{eqnarray}
\hat{H} & = & -\Bigl(\hat{Q}_0^2(0)+2\hat{Q}_2^2(0)\Bigr)\\
& = & -(\hat{g}_1^2 + \hat{g}_2^2) \nonumber
\end{eqnarray}
where the quantum generators $\hat{g}_1$, $\hat{g}_2$ and $\hat{g}_3$ are
obtained by applying the substitution (\ref{eq:atod}) to the classical
generators given in equation (\ref{eq:so3gen}). We then have
\begin{equation}
\hat{H} = -\hat{C}\bigl(SO(3)\bigr) + \hat{C}\bigl(SO(2)\bigr) \nonumber
\end{equation}
so that
\begin{equation} \label{eq:o3en}
E_{I\mu} = -I(I+1) + \mu^2.
\end{equation}
The result for the original model is
\begin{equation} \label{eq:o6en}
E_{\sigma v} = -\sigma(\sigma+4) + v(v+3),
\end{equation}
where $\sigma$ and $v$ label the $O(6)$ and $O(5)$ representations,
respectively.
Recalling the identification of $I$ and $\mu$ with $\sigma$ and $v$, we
see that this is a leading order approximation to the energies.
It is possible to make an argument about the next order term for the
energies of this dynamical symmetry.
Consideration of the radial Schr\"{o}dinger equation in $d$ dimensions gives
the semi-classical approximation for the eigenvalues of the $SO(d)$ Casimir
operator
\cite{jim}
\begin{equation} \label{eq:sod}
C\Bigl(SO(d)\Bigr) = \left(I+\frac{d-2}{2}\right)^2
\end{equation}
where $I$ is the integer which labels the $SO(d)$ representation. The special
case of three dimensions gives the well known Langer modification \cite{lang}
\begin{eqnarray}
C\Bigl(SO(3)\Bigr) & \approx & \left(J+\frac{1}{2}\right)^2 \\
& = & J(J+1) + \frac{1}{4}.\nonumber
\end{eqnarray}
This differs from the exact result by $1/4$, which is a third
order term. The $(d-2)/2$ factor in equation (\ref{eq:sod}) can be thought of
as arising from turning points and their phase space generalisations
\cite{gutz}. We can then interpret equation (\ref{eq:o3en}) as
\begin{equation} \label{eq:o3appr}
E_{I\mu} \approx -\Bigl(I+\frac{d-2}{2}\Bigr)^2 +
\Bigl(\mu+\frac{(d-1)-2}{2}\Bigr)^2
\end{equation}
with $d=3$. This agrees with equation (\ref{eq:o3en}) in the first two terms
and the dimension $d$ only enters into the second leading term. To account
for the suppressed degrees of freedom, it is reasonable to add 3 to the
dimension. Substituting $d=6$ into formula (\ref{eq:o3appr}) gives
\begin{eqnarray}
E_{I\mu} & = & -(I+2)^2 + (\mu+\frac{3}{2})^2 \\
& = & -I(I+4) + \mu(\mu+3) - \frac{7}{4}. \nonumber
\end{eqnarray}
As promised, this agrees with the original result (\ref{eq:o6en})
for the leading two terms.
In practice, this is sufficient since constants are unimportant when
calculating the differences between energies. However, it should be stressed
that this is only a plausibility argument and is not rigorous.
We next discuss the energy eigenvalues of the second dynamical symmetry.
Its classical Hamiltonian, in the coordinates $\zeta$, is given by equation
(\ref{eq:hu1u1}). We quantise by replacing these variables by creation and
annihilation operators. The Hamiltonian is then a function of the group
Casimir operators. These are $U(1)$ operators whose eigenvalues are
integers. We then obtain the energies
\begin{equation}
E_{n_s n_+ n_-} = -2(n_s^2+n_+^2+n_-^2 - n_s n_+ - n_s n_- - n_+ n_-).
\end{equation}
We can also obtain this by a direct semiclassical approximation of equation
(\ref{eq:hu1u1}) by substituting ${\cal J}_i=n_i$ which is appropriate for
complex phase space \cite{voros}. It is interesting to note that in this
situation the semiclassical approximation is exact. However there are
still semiclassical errors arising from the dimensional reduction as
shown in the next paragraph.
Expressing this in terms of $a$ and $b$ defined in equation (\ref{eq:pq})
leads to
\begin{equation}
E_{ab} = -\frac{1}{2}(a^2 + b^2 + ab).
\end{equation}
The exact result for the $SU(3)$ limit of the original model is
\begin{equation}
E_{\lambda\mu} = -\frac{1}{2}(\lambda^2 + \mu^2 +\lambda\mu +3\lambda+3\mu).
\end{equation}
where $\lambda$ and $\mu$ label the $SU(3)$ representation. Earlier it was
argued that $a$ and $b$ can be identified with $\lambda$ and $\mu$ so we see
that the previous two formulae agree to leading order.
An important point is that the approximation for all the group chains is
valid if $N$ is large but with no added constraint. Therefore, this
procedure reproduces, approximately, the entire spectrum for a given value
of $N$. This is in contrast to the results of reference \cite{hl}
where the approximate energies are valid for $N$ large and for the other
quantum numbers much smaller than $N$. This reproduces the energies of
the low-lying states but not of the entire spectrum.
In principle, it should be possible to derive the next order terms to the
energies by more sophisticated semiclassical arguments. However, for our
purposes it is not important since the leading order terms are
sufficient to identify the quantum states with the effective group
representations. It is this correspondence which is the central result of
this paper.
\section{Conclusion}
The point which is established in this paper is that
if a Hamiltonian has a group structure and is block diagonal, then
at least one of the blocks may have a simpler effective group structure.
This was motivated by studying the classical mechanics for which the
effective group chains are unambiguous. The quantum
mechanics is a little more troublesome since it is inconsistent to
completely ignore degrees of
freedom; their zero point motions can be significant in understanding the
quantum structure. Nevertheless, we have shown that it is possible to
requantise within the picture of
the effective group structure. Arguments about the required point
group symmetry of the states limit the allowed representations so that
the number of states is the same as in the original model. Furthermore,
we can identify the representation labels of the effective groups with the
labels of the original groups. Therefore, at this level even the quantum
mechanics of the effective groups is unambiguous; it is completely consistent
to identify quantum states
with representations of the lower dimensional effective
groups. Only at the point of calculating quantum energies is there a problem.
Because we have not consistently accounted for the missing degrees of freedom,
there are higher order corrections to the energies derived here. The
interesting problem of how to get the higher order terms remains.
This is a useful result because it means that we have a simple understanding
of the $J=0$ quantum states. They can be defined in a two
dimensional space and their dynamical symmetries have a clear, intuitive
interpretation. The case of zero angular momentum has also been of interest
in studying chaos \cite{paar,thesis,appr,ibmchaos} since the two dimensional
nature of the classical motion makes the analysis of the classical
dynamics simpler. It is therefore useful to understand
the nature of the possible symmetries in the reduced problem.
It is also possible to calculate quantum energies away from the dynamical
symmetry limits within this effective group picture. For example, it
is quite simple to diagonalise a general Hamiltonian in a basis of
$U(2)$ eigenstates. This has been done and the results will be discussed in a
subsequent publication.
There are two interesting features of the IBM model which will probably be
easier to understand for zero angular momentum. The first is that there is a
suppression of chaos for a family of Hamiltonians between the $SU(3)$ and
$U(5)$ limits \cite{thesis,appr}. We can now try to explain it as a
property of the
effective model between the $U(1)\times U(1)$ and $U(2)$ dynamical symmetries.
The other feature is that the IBM has a partial
dynamical symmetry \cite{prtl}. This term describes a situation in which the
Hamiltonian does not have any symmetry and yet a subset of its quantum
eigenstates do. This affects the classical dynamics by reducing the extent of
chaos \cite{alw}. A semiclassical understanding
of this effect might be possible for zero angular momentum for which the
partial $SU(3)$ symmetry becomes a simpler partial $U(1)\times U(1)$ symmetry.
More generally, this may serve as a useful starting point to study
the effects of classical chaos on collective nuclear structure.
An interesting point is that the $U(3)$ model discussed here is
equivalent to the three level Lipkin model \cite{lipkin} in the case where the
number of fermions equals the number of single particle states in each level.
It is amusing to
note that this model, which provides a phenomenological tool for the
study of shell effects, might actually have a physical realisation in
collective nuclei. However, unlike the usual studies involving the
Lipkin model, we have the additional constraint of being in a symmetric
$S_3$ representation. The Lipkin model has been studied in the context of
chaos \cite{meredith} but without explicit reference to its
dynamical symmetries.
This idea might also apply in other systems. For example, there exist
models of triatomic molecules \cite{chem2} based on assuming a $U(4)
\times U(4)$ algebra and insisting on an $SO(3)$ subalgebra. There are various
dynamical symmetries, some of which involve an $SO(4)$ algebra. Insisting
on the $J=0$ representation of $SO(3)$ limits the allowed $SO(4)$
representations. Then the quantum states can be specified in terms of one
fewer quantum number so there is a dimensional reduction of one. It might be
that this situation is also described by an effective group chain, but this
must
be worked out in detail. It would also be interesting to see if this idea can
be applied to fermionic systems for which it is difficult to refer to
the classical limit.
\acknowledgments
I would like to thank Yoram Alhassid and Ami Leviatan for useful discussions
and Jim Morehead for allowing me to use his unpublished calculations.
I also thank Peter Dimon for critical comments on the manuscript.
This work was supported under the EU Human Capital and Mobility Programme.
\section*{Introduction}
We proceed here with the study of generalized symmetries of
systems of PDEs, started in \cite{as}. Let us recall that the basic
idea of the approach proposed consists in considering
finite-dimensional subspaces of the space of all the generalized
symmetries and the action of infinitesimal shifts with respect to
some (in)dependent variables\footnote{we shall refer to these
variables as "selected" ones in what follows.}
on these subspaces. This enabled us to obtain explicit formulas
describing the dependence of the generalized symmetries from these
subspaces on the "selected" variables. We refer the reader to
\cite{o, fn, i, s, m} for the general motivations of the study of
generalized symmetries of PDEs.
In this paper we apply the results of \cite{as} to establish a simple
criterion for the existence of generalized symmetries from the
above-mentioned finite-dimensional subspaces which {\it really\/}
depend on the "selected" variables.
As in \cite{as}, let us consider the system of PDEs of the form:
\begin{equation} \label{1}
F _{\nu}(x,u,\dots, u^{(d)}) = 0, \quad \nu =1, \dots ,f,
\end{equation}
where $u=u(x)=(u_1, \dots, u _n)^{T}$
is an unknown vector function of $m$
independent variables $x=(x_1, \dots, x_m)$;
$u^{(s)}$
denotes the set of derivatives of $u$ with respect to $x$ of
order $s$; $^{T}$ denotes matrix transposition.
\begin{definition}{\rm \cite{o}}. The differential operator ${\rm Q}$
of the form
\begin{equation} \label{2}
{\rm Q} = \sum _{i=1} ^{m} \xi _i (x,u,\dots, u^{(q)}) \partial
/\partial x_i +
\sum _{\alpha =1} ^{n} \eta _{\alpha} (x,u,\dots, u^{(q)}) \partial /
\partial u_{\alpha}
\end{equation}
is called the generalized symmetry
of order $q$ of the system of PDEs
(\ref{1}) if
its prolongation ${\rm {\bf pr}}{\rm Q}$ annulates (\ref{1}) on the
set $M$ of (sufficiently smooth) solutions of (\ref{1}):
\begin{equation} \label{3}
{\rm {\bf pr} Q} [F_ {\nu}] \mid _M = 0, \quad \nu =1, \dots ,f.
\end{equation}
\end{definition}
Let (as in \cite{as}) $Sym$ denote the Lie algebra over the field $\bf C$ of
complex numbers\footnote{In fact, everywhere in our considerations
(except the example) $\bf C$ may be replaced by an arbitrary
algebraically closed field $\bf K$, since Theorem 1 below holds true for
this case too \cite{as}.}
(with respect to the so-called Lie bracket $[,]$ \cite{o})
of all the generalized symmetries of (\ref{1}) of non-negative orders,
$Sym ^{(q)}$ be the linear space of the generalized symmetries of (\ref{1})
of order not higher than $q$, $Sym _{q} \equiv Sym^{(q)}/Sym^{(q-1)}$
$(q \neq 0)$, $Sym _0 \equiv Sym^{(0)}$.
Let us denote the coordinates on the manifold of 0-jets $M^{(0)}$
\cite{o} as $z_A$: $z_{i} = x_{i}, i=1, \dots, m, z_{m+\alpha } = u _{\alpha},
\alpha =1, \dots, n$ (from now on the indices $A, B, C, D, \dots$
will run from 1 to $m+n$ and we shall denote $\partial _A \equiv
\partial /\partial z_A$).
In \cite{as} we have proved the following statement:
\begin{Theorem} \label{t4}
Let $W$ be a linear subspace of the linear space of all the
differential operators of the form (\ref{2}) of arbitrary finite
orders, $A_1, \dots , A_g$ be fixed integers from the range $1,
\dots, m+n$, $g \leq m+n$,
$W_{A_{1}, \dots, A_{g}}$ be the linear space of the operators, obtained from
the operators from $W$ by setting $z_{A_{1}}=0, \dots, z_{A_g} = 0$ in their
coefficients, and $V= W \bigcap Sym$. Suppose that for some $q_1$ the dimension
$v^{(q_{\lower2pt\hbox{\tiny{1}}})}$ of the subspace
$V^{(q_{\lower2pt\hbox{\tiny{1}}})} \equiv W \bigcap Sym
^{(q_{\lower2pt\hbox{\tiny{1}}})}$ of $V$ is finite
and that for any generalized symmetry ${\rm Q} \in
V^{(q_{\lower2pt\hbox{\tiny{1}}})}$ of (\ref{1}) we have $\partial {\rm Q} / \partial z_{A_s}
\in V$, $s=1, \dots, g$.
Then in each $V^{(q)} \equiv W \bigcap Sym^{(q)}$ ($q=0,
\dots, q_1)$ there exists
a basis of linearly independent
generalized symmetries ${\rm Q}_l^{(q,\gamma)}$,
$l=1, \dots, r_{\gamma}^{(q)}$, $\gamma=1, \dots, \rho ^{(q)}$ ($\rho^{(q)}
\leq v^{(q)}$, $\sum_{\gamma =1}^{\rho^{(q)}} r_{\gamma}^{(q)}
= v^{(q)}$) of the form
\begin{equation} \label{8aaa}
\begin{array}{lll}
{\rm Q}_l^{(q,\gamma)} =
\exp(\sum\limits_{s=1}^{g}
\lambda _{\gamma} ^{(q,A_{s})} z_{A_{s}})
\times \\
\sum\limits_{j_{1}=0} ^{k_{\gamma}^{(q,A_{1})} - 1}
\dots \sum\limits _{j_{g}=0} ^{k_{\gamma}^{(q,A_{g})} - 1}
(z_{A_{1}}) ^{j_{1}} (z_{A_{2}}) ^{j_{2}} \dots (z_{A_{g}}) ^{j_{g}}
\: {\rm C}_{l,j_{1}, \dots, j_{g} }^{(q,\gamma)},
\end{array}
\end{equation}
where ${\rm C}_{l,j_{1}, \dots , j_{g} }^{(q,\gamma)}$
are some differential operators from $W_{A_{1},\dots, A_{g}}$
of order $q$ or lower;
$\lambda _{\gamma} ^{(q,A_{s})} \in \bf C$ are some constants and
$k_{\gamma}^{(q,A_{s})}, s=1, \dots, g$ are some fixed numbers from the
range $1,\dots, r_{\gamma}^{(q)}$.
\end{Theorem}
\section{The criterion of dependence on the "selected" variables}
First of all, let us notice that, acting on any ${\rm Q}_{l}^{(q,\gamma)}$
(\ref{8aaa})
by the operators $\partial /\partial z_{A_s} - \lambda_{\gamma}^{(q,A_s)}$,
$s=1,\dots,g$, an appropriate number of times, we can obtain a generalized
symmetry from $V^{(q)}$ of the~form
\begin{equation} \label{specsym}
\begin{array}{lll}
{\rm R}_{l}^{(q,\gamma)} = \exp(\sum\limits_{s=1}^{g}
\lambda_{\gamma}^{(q,A_{s})} z_{A_{s}})
\times \\
\sum\limits_{j_{1}=0}^{\varepsilon_{\gamma}^{(q,A_1)}}
\dots \sum\limits_{j_{g}=0}^{\varepsilon_{\gamma}^{(q,A_g)}}
(z_{A_{1}}) ^{j_{1}} (z_{A_{2}}) ^{j_{2}} \dots (z_{A_{g}}) ^{j_{g}}
{\rm K}_{l, j_{1}, \dots , j_{g} }^{(q,\gamma)},
\end{array}
\end{equation}
where
${\rm K}_{l, j_{1}, \dots ,
j_{g}}^{(q,\gamma)}$ are differential operators from $W_{A_{1},
\dots, A_{g}}$ of order $q$ or
lower, $\varepsilon_{\gamma}^{(q,A_s)} = 0$ if
$\lambda_{\gamma}^{(q,A_s)} \neq 0$ and
$\varepsilon_{\gamma}^{(q,A_s)} = 1$ if $\lambda_{\gamma}^{(q,A_s)} =
0$, $s=1, \dots,g$.
Moreover, if
${\rm Q}_{l}^{(q,\gamma)}$ (\ref{8aaa})
really depends on $z_{A_i}$, acting on it $k_{\gamma}^{(q,A_{i})} -
1-\varepsilon_{\gamma} ^{(q,A_i)}$ times
by the operator $\partial /\partial z_{A_i} - \lambda_{\gamma}^{(q,A_i)}$ and
an appropriate number of times by the operators $\partial /\partial
z_{A_s} - \lambda_{\gamma}^{(q,A_s)}$, $s \neq i$, yields a
generalized symmetry of the form (\ref{specsym}) in which at least one
operator ${\rm K}_{l, j_{1}, \dots , j_{g} }^{(q,\gamma)}$ with
$j_i=\varepsilon_{\gamma}^{(q,A_i)}$ is not equal to zero.
Let us mention that the above procedure of obtaining the symmetries
of the form (\ref{specsym}) from (\ref{8aaa}) does not give a unique
symmetry (\ref{specsym}) but a set of such symmetries. We will show
how to obtain one such symmetry, satisfying the conditions
imposed on ${\rm K}_{j_1,\dots, j_g}^{(q,\gamma)}$ above. Let (for
the sake of simplicity) $i=1$.
Among the terms in (\ref{8aaa}) with $j_1=k_{\gamma}^{(q,A_1)} - 1 \equiv
r_1$ we choose the term(s) with the maximal value of $j_2$, which we denote
by $r_2$; among these we choose the term(s) with the maximal value
of $j_3$, denoted by $r_3$, and so on. At the end of this procedure we
obtain the only (nonzero!) term in (\ref{8aaa}) in which
$j_s=r_s$, $s=1, \dots, g$. Then, if we act on ${\rm
Q}_l^{(q,\gamma)}$ (\ref{8aaa}) by the operator
$$
\prod\limits_{s=1}^{g} (\partial /\partial z_{A_s} -\lambda_{\gamma}
^{(q, A_s)})^{r_{s} - \varepsilon_{\gamma}^{(q,A_{s})}},
$$
the presence of this term will assure that in the resulting
expression (which obviously will be of the form (\ref{specsym})) at least one
${\rm K}_{l, j_{1}, \dots , j_{g} }^{(q,\gamma)}$ with
$j_1=\varepsilon_{\gamma}^{(q,A_1)}$ is not equal to zero.
Similar considerations may be undertaken for all other values of $i$.
The possible non-uniqueness of this construction follows from the fact that
(we again restrict ourselves to $i=1$) it is possible to act in a different
order: e.g., instead of first selecting the terms with maximal $j_2$ one may
select the terms with maximal $j_g$, among them those with
maximal $j_3$, and only at the end consider $j_2$.
Now let us formulate explicitly the statement to be proved:
\begin{Theorem} \label{t5}
Provided the conditions of Theorem \ref{t4} are fulfilled, the system
(\ref{1}) possesses generalized symmetries from $V^{(q)}$, $q=0,\dots, q_1$,
with $z_{A_i}$-dependent coefficients if and only if there exists a
generalized symmetry of (\ref{1}) from $V^{(q)}$ of the form
\begin{equation} \label{specsym2}
{\rm R} = \exp(\sum\limits_{s=1}^{g}
\lambda_s z_{A_{s}})
\sum\limits_{j_{1}=0}^{\varepsilon_1}
\dots \sum\limits_{j_{g}=0}^{\varepsilon_g}
(z_{A_{1}}) ^{j_{1}} (z_{A_{2}}) ^{j_{2}} \dots (z_{A_{g}}) ^{j_{g}}
{\rm K}_{j_{1}, \dots , j_{g} },
\end{equation}
where $\lambda_s \in \bf C$, $\varepsilon_s =
0$ if $\lambda_s \neq 0$ and $\varepsilon_s = 1$ if $\lambda_s = 0$,
$s=1, \dots,g$ and ${\rm K}_{j_{1}, \dots,j_{g}}$ are
differential operators from $W_{A_{1},\dots, A_{g}}$ of order $q$ or
lower, and for this symmetry either $\lambda _i \neq 0$ or $\lambda
_i=0$ but at least one operator ${\rm K}_{j_{1}, \dots , j_{g}}$ with
$j_i=1$ is not equal to zero.
\end{Theorem}
{\it Proof.}
Sufficiency is obvious. Necessity follows from the fact that the
basis of linearly independent generalized symmetries of (\ref{1}) from all
$V^{(q)}$, $q=0, \dots, q_1$, is given by (\ref{8aaa}) by virtue
of Theorem \ref{t4}, and from the above considerations. $\triangleright$
Thus, in order to check the existence of $z_{A_i}$-dependent
symmetries of (\ref{1}) from $V^{(q_{\lower2pt\hbox{\tiny{1}}})}$ it
suffices (provided the conditions of Theorem \ref{t4} are fulfilled)
to check the existence of the specific symmetries (\ref{specsym}), as
described above in Theorem \ref{t5}.
In particular, for $g=1$ the system (\ref{1}) admits $z_{A_1}$-de\-pendent
symmetries from $V^{(q)}$, $q=0, \dots, q_1$, if and only if there
exists a symmetry from $V^{(q)}$ of the form
\begin{equation} \label{cases}
{\rm Q}= \exp(\lambda z_{A_1}) {\rm K}_{0}, \lambda \in \bf C, \lambda \neq 0
\: \mbox{or} \:
{\rm Q}= {\rm K}_{0} + z_{A_1} {\rm K}_{1}, {\rm K}_1 \neq 0,
\end{equation}
where ${\rm K}_{0}$ and ${\rm K}_{1}$ are operators from $W$ of order
not higher than $q$ with $z_{A_1}$-independent coefficients.
Now let us illustrate all these ideas by the following
{\it \bfseries Example.} Let $m=2$, $n=1$, $u_1 \equiv u$, $x=(x_1 \equiv t,
x_2 \equiv y)$, $u _{(l)} = \partial ^{l} u / \partial y ^{l}$ and
(\ref{1}) be the evolution equation
\begin{equation} \label{eveq}
\partial u / \partial t =G(u, u_{(1)}, \dots, u _{(d)}), \quad d\geq 2,
\end{equation}
$W$ be the linear space of the differential operators of the form (\ref{2})
with $\xi _i \equiv 0$, whose coefficient
$\eta \equiv \eta _{1}$, which is called the characteristic of the symmetry
\cite{s},
depends only on $y, u, u_{(1)}, u_{(2)}, \dots $, $V = W \bigcap Sym$.
In \cite{s} it is proved that for such a $V$ we have $v^{(q)} \leq v^{(1)} + q - 1$
for $q=1,2, \dots$ and $v^{(1)} \leq d+3$.
In \cite{as} it is proved that the generalized
symmetries of order $q \geq 2$ from $W$ depend on $y$ as polynomials of
degree not higher than $v^{(q)} -1 \leq v^{(1)}+q-2$.
Therefore, according to Theorem \ref{t5} and (\ref{cases}),
the equation (\ref{eveq}) possesses $y$-dependent symmetries from $W$
if and only if there exists a symmetry from $W$ which depends on $y$
linearly.
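Such a check rests on the standard symmetry condition in characteristic form
(cf. \cite{o,s}): for the evolution equation (\ref{eveq}) a characteristic
$\eta(y,u,u_{(1)},\dots)$ defines a generalized symmetry if and only if
$\sum_{k}(\partial\eta/\partial u_{(k)})\,D_y^kG=
\sum_{k}(\partial G/\partial u_{(k)})\,D_y^k\eta$ on solutions, where $D_y$ is
the total derivative. The following SymPy sketch is not part of the original
text; the Burgers-type equation and the characteristics are chosen purely for
illustration. To apply the criterion (\ref{cases}) one substitutes the ansatz
$\eta={\rm K}_0+y\,{\rm K}_1$ with $y$-independent ${\rm K}_0$, ${\rm K}_1$
into the same condition.
\begin{verbatim}
import sympy as sp

K = 6                                # highest u-derivative kept among the jet variables
y = sp.Symbol('y')
u = sp.symbols('u0:%d' % (K + 1))    # u[k] stands for u_(k) = d^k u / dy^k

def Dy(expr):
    # total derivative with respect to y
    return sp.diff(expr, y) + sum(u[k + 1] * sp.diff(expr, u[k]) for k in range(K))

def Dy_powers(expr, order):
    out, cur = [], expr
    for _ in range(order + 1):
        out.append(cur)
        cur = Dy(cur)
    return out

def is_symmetry(eta, G, order=4):
    # symmetry condition D_t(eta) = G_*[eta] on solutions of u_t = G
    lhs = sum(sp.diff(eta, u[k]) * dyk for k, dyk in enumerate(Dy_powers(G, order)))
    rhs = sum(sp.diff(G, u[k]) * dyk for k, dyk in enumerate(Dy_powers(eta, order)))
    return sp.expand(lhs - rhs) == 0

G = u[2] + u[0] * u[1]               # illustrative equation u_t = u_yy + u u_y
print(is_symmetry(u[1], G))          # True:  eta = u_(1), translation in y
print(is_symmetry(y * u[1], G))      # False: eta = y u_(1) alone is not a symmetry
\end{verbatim}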
As a final remark, let us mention that all the
generalizations of Theorem \ref{t4}, presented in \cite{as}, imply
the corresponding (evident) generalizations of Theorem \ref{t5}.
\section{Introduction}
Discrete dynamical analogs of Mertens' theorem concern a
map~$T:X\to X$, and are motivated by work of
Sharp~\cite{MR1139566} on Axiom A flows. A set of the form
\[
\tau=\{x,T(x),\dots,T^k(x)=x\}
\]
with cardinality~$k$ is called a closed orbit of
length~$\vert\tau\vert=k$, and the results provide asymptotics
for a weighted sum over closed orbits. For the discrete case of
a hyperbolic diffeomorphism~$T$, we always have
\[
M_T(N):=\sum_{\vert\tau\vert\le
N}\frac{1}{{\rm e}^{h\vert\tau\vert}}\sim\log(N),
\]
where~$h$ is the topological entropy, with more explicit
additional terms in many cases. The main term~$\log(N)$ is not
really related to the dynamical system, but is a consequence of
the fact that the number of orbits of length~$n$
is~$\frac{1}{n}{\rm e}^{hn}+\bigo({\rm e}^{h'\!n})$ for some~$h'<h$
(see~\cite{apisit}). Without the assumption of hyperbolicity,
the asymptotics change significantly, and in particular depend
on the dynamical system. For quasihyperbolic (ergodic but not
hyperbolic) toral automorphisms, Noorani~\cite{MR1787646} finds
an analogue of Mertens' theorem in\mc{nooraniconstant} the form
\begin{equation}\label{nooranimain}
M_T(N)=m\log(N)+C_{\ref{nooraniconstant}}+\littleo(1)
\end{equation}
for some~$m\in\mathbb N$. The
constant~$C_{\ref{nooraniconstant}}$ is related to analytic
data coming from the dynamical zeta function. For more general
non-hyperbolic group automorphisms, the coefficient of the main
term may be non-integral (see~\cite{MR2339472} for example).
In this note Noorani's result~\eqref{nooranimain} with improved error
term~$\bigo(N^{-1})$ is recovered using elementary arguments,
and the coefficient~$m$ of the main term in~\eqref{nooranimain}
is expressed as an integral over a sub-torus. This reveals the
effect of resonances between the eigenvalues of unit modulus,
and examples show that the value of~$m$ may be very different to
the generic value given in~\cite{MR1787646}.
\section{Toral automorphisms}
Let~$T:\mathbb T^d\to\mathbb T^d$ be a toral automorphism
corresponding to a matrix~$A_T$ in~$\genlin_{d}(\mathbb Z)$
with eigenvalues~$\{\lambda_i\mid1\le i\le d\}$, arranged so
that
\[
\vert\lambda_1\vert\ge\cdots\ge\vert\lambda_s\vert>1=
\vert\lambda_{s+1}\vert=\cdots=\vert\lambda_{s+2t}\vert>
\vert\lambda_{s+2t+1}\vert\ge\cdots\ge\vert\lambda_d\vert.
\]
The map~$T$ is \emph{ergodic} with respect to Lebesgue measure
if no eigenvalue is a root of unity, is \emph{hyperbolic} if in
addition~$t=0$ (that is, there are no eigenvalues of unit
modulus), and is \emph{quasihyperbolic} if it is ergodic
and~$t>0$. The topological entropy of~$T$ is given
by~$h=h(T)=\sum_{j=1}^{s}\log\vert\lambda_j\vert$.
\begin{theorem}\label{theorem}
Let~$T$ be a quasihyperbolic
toral automorphism with topological
entropy~$h$. Then there are\mc{mainconstantquasicase}
constants~$C_{\ref{mainconstantquasicase}}$ and~$m\ge1$ with
\[
\sum_{\vert\tau\vert\le N}\frac{1}{{\rm e}^{h\vert\tau\vert}}=m\log
N+C_{\ref{mainconstantquasicase}}+\bigo\(N^{-1}\).
\]
The coefficient~$m$ in the main term is given by
\[
m=\int_X\prod_{i=1}^{t}
\(2-2\cos(2\pi x_i)\)\thinspace{\rm{d}} x_1\dots\thinspace{\rm{d}} x_{t},
\]
where~$X\subset\mathbb T^{t}$ is the closure
of~$\{(n\theta_1,\dots,n\theta_{t})\mid n\in\mathbb Z\}$,
and~${\rm e}^{\pm2\pi{{\rm i}}\theta_1},\dots,
{\rm e}^{\pm2\pi{{\rm i}}\theta_{t}}$ are the eigenvalues with unit
modulus of the matrix defining~$T$.
\end{theorem}
As we will see in Example~\ref{examples}, the quantity~$m$
appearing in Theorem~\ref{theorem} takes on a wide range of
values. In particular,~$m$ may be much larger, or much smaller,
than its generic value~$2^t$.
\begin{proof}
Since~$T$ is ergodic,
\begin{equation*}
F_T(n)=\vert\{x\in\mathbb T^d\mid T^n(x)=x\}\vert=\vert\mathbb
Z^d/(A_T^n-I)\mathbb Z^d\vert=\prod_{i=1}^{d}\vert\lambda_i^n-1\vert,
\end{equation*}
so, by M\"obius inversion, the number~$O_T(n)$ of closed orbits of
length~$n$ satisfies
\[
O_T(n)=\frac{1}{n}\sum_{m|
n}\mu(n/m)\prod_{i=1}^{d}\vert\lambda_i^m-1\vert.
\]
Write~$\Lambda=\prod_{i=1}^{s}\lambda_i$ (so the topological entropy of~$T$
is~$\log\vert\Lambda\vert$) and
\[
\kappa=\min\{\vert\lambda_s\vert,\vert\lambda_{s+2t+1}\vert^{-1}\}>1.
\]
The eigenvalues of unit modulus contribute nothing to the
topological entropy, but multiply the
approximation~$\vert\Lambda\vert^n$ to~$F_T(n)$ by an
almost-periodic factor bounded above by~$2^{2t}$ and bounded
below by~$A/n^B$ for some~$A,B>0$, by Baker's theorem
(see~\cite[Ch.~3]{MR1700272} for this argument).
\begin{lemma}\label{lemma:1}
$\left\vert
F_T(n)-\vert\Lambda\vert^n\displaystyle\prod_{i=s+1}^{s+2t}\vert\lambda_i^n-1\vert
\right\vert\cdot\vert\Lambda\vert^{-n}=\bigo(\kappa^{-n}).$
\end{lemma}
\begin{proof}
We have
\begin{equation}\label{equation:definesABC}
\prod_{i=1}^{d}(\lambda_i^n-1)=
\underbrace{\prod_{i=1}^{s}(\lambda_i^n-1)}_{U_n}
\underbrace{\prod_{i=s+1}^{s+2t}(\lambda_i^n-1)}_{V_n}
\underbrace{\prod_{i=2t+s+1}^{d}(\lambda_i^n-1)}_{W_n},
\end{equation}
where~$U_n$ is equal to the sum of~$\Lambda^n$ and~$(2^s-1)$ terms
comprising products of eigenvalues, each no larger
than~$\kappa^{-n}\vert\Lambda\vert^n$ in modulus,~$W_n$ is equal to
the sum of~$(-1)^{d-s}$ and~$2^{d-2t-s}-1$ terms bounded above in
absolute value by~$\kappa^{-n}$, and~$\vert V_n\vert\le2^{2t}$. It
follows that
\begin{eqnarray*}
\frac{\left\vert\prod_{i=1}^{d}(\lambda_i^n-1)-
(-1)^{d-s}\Lambda^n\prod_{i=s+1}^{s+2t}(\lambda_i^n-1)
\right\vert}{\vert\Lambda\vert^{n}}&=& \frac{\left\vert V_n\(U_nW_n-
(-1)^{d-s}\Lambda^n\)\right\vert}{\vert\Lambda\vert^{n}}\\
&=&\frac{\left\vert
V_n\(\Lambda^n+\bigo\(\Lambda^n/\kappa^n\)-\Lambda^n\)
\right\vert}{\vert\Lambda\vert^n}\\
&=&\bigo(\kappa^{-n}).
\end{eqnarray*}
The statement of the lemma follows by the reverse triangle
inequality.
\end{proof}
Now
\[
M_T(N)=\sum_{n=1}^{N}\frac{1}{n\vert\Lambda\vert^n}\(F_T(n)+\sum_{d|
n,d<n}\mu\(\textstyle\frac{n}{d}\)F_T(d)\)
\]
and
\[
\left\vert
\sum_{n=N}^{\infty}\frac{1}{n\vert\Lambda\vert^n}\sum_{d| n,
d<n}\mu\(\textstyle\frac{n}{d}\)F_T(d)
\right\vert\le
\sum_{n=N}^{\infty}\frac{1}{n}\cdot n\cdot\bigo(\vert\Lambda\vert^{-n/2})
=\bigo\(\vert\Lambda\vert^{-N/2}\),
\]
so there is a\mc{constantwithmoebiushalfn}
constant~$C_{\ref{constantwithmoebiushalfn}}$ for which
\begin{equation*}
\left\vert
\sum_{n=1}^{N}\frac{1}{n\vert\Lambda\vert^n}\sum_{d| n,
d<n}\mu\(\textstyle\frac{n}{d}\)F_T(d)-
C_{\ref{constantwithmoebiushalfn}}\right\vert=
\bigo\(\vert\Lambda\vert^{-N/2}\).
\end{equation*}
Therefore, by Lemma~\ref{lemma:1} and using the notation
from~\eqref{equation:definesABC},
\begin{equation}\label{equation:basicrelationship}
M_T(N)=\sum_{n=1}^{N}\frac{1}{n}\(V_n+\bigo\(\kappa^{-n}\)\)+C_{\ref{constantwithmoebiushalfn}}
+\bigo\(\vert\Lambda\vert^{-N/2}\).
\end{equation}
Clearly\mc{constantforsumofbigoepsilonminusn} there is a
constant~$C_{\ref{constantforsumofbigoepsilonminusn}}$ for which
\begin{equation}\label{equation:bigoepsilonbound}
\left\vert\sum_{n=1}^{N}\frac{1}{n}\bigo\(\kappa^{-n}\)-C_{\ref{constantforsumofbigoepsilonminusn}}\right\vert
=\bigo\(\kappa^{-N}\),
\end{equation}
so by~\eqref{equation:basicrelationship}
and~\eqref{equation:bigoepsilonbound},
\begin{equation}\label{equation:mainboundforMTN}
M_T(N)=\sum_{n=1}^{N}\frac{1}{n}V_n+C_{\ref{constantwithmoebiushalfn}}+
C_{\ref{constantforsumofbigoepsilonminusn}}+\bigo(R^{-N})
\end{equation}
where~$R=\min\{\kappa,\vert\Lambda\vert^{1/2}\}.$
Since the
complex eigenvalues appear in conjugate pairs we may arrange
that~$\lambda_{i+t}=\bar{\lambda_{i}}$ for~$s+1\le i\le s+t$, and
then
\[
\vert\lambda_i^n-1\vert\vert\lambda_{i+t}^n-1\vert=(\lambda_i^n-1)(\lambda_{i+t}^n-1).
\]
It follows that~$V_n=\prod_{i=s+1}^{s+2t}\vert\lambda_i^n-1\vert$ is real
and nonnegative.
Put
\[
\Omega=\left\{\prod_{i\in I}\lambda_i \mid
I\subseteq\{s+1,\ldots,s+2t\}\right\},
\]
write
\[
{\mathcal I}(\omega)=\{I\subset\{s+1,\dots,s+2t\}\mid\prod_{i\in I}\lambda_i=\omega\},
\]
\[
K(\omega)=\sum_{I\in\mathcal{I}(\omega)}(-1)^{\vert I\vert},
\]
and let~$m=K(1)$ (notice that~$\mathcal{I}(\omega)=\emptyset$
unless~$\omega\in\Omega$).
Then~$V_n=\sum_{\omega\in\Omega}K(\omega)\omega^n$ so,
by~\eqref{equation:mainboundforMTN},\mc{anotherconstant}\mc{thatconstantplusgamma}
\begin{eqnarray*}
M_T(N)&=&\sum_{n=1}^{N}\frac{1}{n}\sum_{\omega\in\Omega}K(\omega)
\omega^n+C_{\ref{anotherconstant}}+\bigo\(R^{-N}\)\\
&=&m\sum_{n=1}^{N}\frac{1}{n}+\sum_{\omega
\in\Omega\setminus\{1\}}K(\omega)\sum_{n=1}^{N}\frac{\omega^n}{n}
+C_{\ref{anotherconstant}}+\bigo\(R^{-N}\)\\
&=& m\log N-\sum_{\omega\in\Omega\setminus\{1\}}K(\omega)
\log(1-\omega)+C_{\ref{thatconstantplusgamma}}+\bigo(N^{-1}),
\end{eqnarray*}
since~$\sum_{n=1}^{N}\frac{1}{n}=\log N+\gamma+\bigo(N^{-1})$,
and
$\sum_{n=1}^{N}\frac{\omega^n}{n}=-\log(1-\omega)+\bigo(N^{-1})
$
for~$\omega\neq1$ by the Abel continuity theorem and partial
summation.
If the eigenvalues of modulus one
are~${\rm e}^{\pm2\pi{{\rm i}}\theta_1},\dots,
{\rm e}^{\pm2\pi{{\rm i}}\theta_{t}}$ then
\[
V_n=\prod_{i=1}^{t}(1-{\rm e}^{2\pi{{\rm i}}\theta_in})
(1-{\rm e}^{-2\pi{{\rm i}}\theta_in})
=\prod_{i=1}^{t}\(2-2\cos(2\pi\theta_i n)\).
\]
Let~$X\subset\mathbb T^{t}$ be
the closure
of~$\{(n\theta_1,\dots,n\theta_{t})\mid n\in\mathbb Z\}$,
so that by the Kronecker--Weyl lemma we have
\[
\frac{1}{N}\sum_{n=1}^{N}\prod_{i=1}^{t}\(2-2\cos(2\pi\theta_i n)\)
\longrightarrow \int_{X}\prod_{i=1}^{t}\(2-2\cos(2\pi x_i)\)\thinspace{\rm{d}}
x_1\dots\thinspace{\rm{d}} x_t
\]
as~$N\to\infty$. Then, by partial summation,
\begin{eqnarray*}
\sum_{n=1}^{N}\frac{1}{n}V_n&=&\sum_{n=1}^{N}\(\frac{1}{n}-
\frac{1}{n+1}\)\sum_{m=1}^{n}V_m+\frac{1}{N+1}\sum_{m=1}^{N}V_m\\
&\sim&
\(\int_{X}\prod_{i=1}^{t}\(2-2\cos(2\pi x_i)\)\thinspace{\rm{d}} x_1\dots\thinspace{\rm{d}} x_t\)\log N,
\end{eqnarray*}
so that~$m$ has the form stated.
\end{proof}
The exact value of~$m$ is determined by the structure of the
group~$X$, which in turn is governed by additive relations among the
arguments of the eigenvalues of unit modulus. Here are some
illustrative examples.
\begin{example}\label{examples}
\noindent(a) If all the arguments~$\theta_i$ are independent
over~$\mathbb Q$ (the generic case), then~$X=\mathbb T^t$, so
\begin{equation*}
m\negthinspace=\negthinspace\int_0^1
\negthinspace\cdots\negthinspace\int_0^1\prod_{i=1}^{t}
\(2\negthinspace-\negthinspace 2\cos(2\pi x_i)\)\thinspace{\rm{d}} x_1\dots
\thinspace{\rm{d}} x_{t}
\negthinspace=\negthinspace
\( \int_0^1(2\negthinspace-\negthinspace 2\cos(2\pi x_1))\thinspace{\rm{d}} x_1
\negthinspace\negthinspace\)^{\negthinspace t}\negthinspace\negthinspace=2^t.
\end{equation*}
\smallskip
\noindent(b) A simple example with~$m>2^t$ is the following.
Let~$T_2$ be the automorphism of~$\mathbb T^8$ defined by the
matrix~$A\oplus A$, where
\begin{equation}\label{definesA}
A=\begin{pmatrix}0&0&0&-1\\1&0&0&8\\0&1&0&-6\\0&0&1&8
\end{pmatrix}.
\end{equation}
Here~$X$ is a diagonally embedded circle, and
\begin{eqnarray*}
m&=&\iint_{\{x_1=x_2\}}\prod_{j=1}^{2}
\(2-2\cos(2\pi x_j)\)\thinspace{\rm{d}} x_1\thinspace{\rm{d}} x_2\\
&=&\int_0^1(2-2\cos(2\pi x))^2\thinspace{\rm{d}} x=6>2^2.
\end{eqnarray*}
Extending this example, let~$T_n$ be the automorphism
of~$\mathbb T^{4n}$ defined by the matrix~$A\oplus\cdots\oplus
A$ ($n$ terms). The matrix corresponding to~$T_n$ has~$2n$
eigenvalues with modulus one (comprising two conjugate
eigenvalues with multiplicity~$n$). Then~$X$ is again a
diagonally embedded circle, and
\begin{eqnarray*}
m&=&\int_0^1(2-2\cos(2\pi x))^t\thinspace{\rm{d}} x\ =\ \frac{(2t)!}{(t!)^2}\ \sim\
\frac{2^{2t}}{\sqrt{\pi t}}
\end{eqnarray*}
by Stirling's formula.
This is much larger than~$2^t$,
reflecting the density of the syndetic set on
which the almost-periodic factor is
close to~$2^{2t}$. Indeed,
this example shows that~$\frac{m}{2^t}$ may be arbitrarily large.
\smallskip
\noindent(c) A simple example with~$m< 2^t$ is the following.
Let~$S$ be the automorphism of~$\mathbb T^{12}$ defined by the
matrix~$A\oplus A^2\oplus A^3$, with~$A$ as
in~\eqref{definesA}. Again~$X$ is a diagonally embedded circle,
and
\begin{eqnarray*}
m&=&\iiint_{\{x_1=x_2=x_3\}}\prod_{j=1}^{3}
\(2-2\cos(2\pi jx_j)\)\thinspace{\rm{d}} x_1{\rm d}x_2\thinspace{\rm{d}} x_3\\
&=&\int_0^1(2-2\cos(2\pi x))(2-2\cos(4\pi x))(2-2\cos(6\pi
x))\thinspace{\rm{d}} x\ =\ 6\ <\ 2^3.
\end{eqnarray*}
Extending this example, the value of~$m$ for the automorphism
of~$\mathbb T^{4t}$ defined by the matrix~$A\oplus
A^2\oplus\cdots\oplus A^t$ as~$t$ varies gives the sequence
\[ 2,4,6,10,12,20,24,34,44,64,78,116,148,208,286,410,556,808,1120,
1620,\dots
\]
(we thank Paul Hammerton for computing these numbers). This
sequence, entry~A133871 in the Encyclopedia of Integer
Sequences~\cite{MR95b:05001}, does not seem to be readily
related to other combinatorial sequences.
\smallskip
\noindent(d) Generalizing the example in~(c), for any
sequence~$(a_n)$ of natural numbers, we could look at the
automorphisms~$S_n$ of~$\mathbb T^{4n}$ defined by the
matrices~$\bigoplus_{k=1}^n A^{a_k}$, with~$A$ as
in~\eqref{definesA}. In order to make~$m$ small, we need a ``sum-heavy'' sequence, that is, one with many
three-term linear relations of the form~$a_i+a_j=a_k$.
More precisely, one would like many
linear relations with an odd number of terms, and few with an
even number of terms. Constructing such sequences, and understanding
how dense they may be, seems to be difficult.
Taking~$(a_n)$ to be the sequence whose first eight terms
are~$1,2,3,5,7,8,11,13$ and whose subsequent terms are defined
by the recurrence~$a_{n+8}=100a_n$, we find that the
automorphism~$S_{8n}$ of~$\mathbb T^{32n}$
has~$m=2^{4n}=2^{t/2}$. Thus~$\frac m{2^t}$ may be arbitrarily
small.
\end{example}
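The values of~$m$ in Examples (b)--(d) can be reproduced directly from the
combinatorial description $m=K(1)$ used in the proof: when the unit-modulus
arguments are $a_1\theta,\dots,a_t\theta$ with~$\theta$ irrational, $K(1)$ is
the number of zero-sum subsets of~$\{\pm a_1,\dots,\pm a_t\}$ counted with the
sign~$(-1)^{\vert I\vert}$. The short Python sketch below is not part of the
original text and is purely illustrative.
\begin{verbatim}
from itertools import combinations

def mertens_m(a):
    # m = K(1): signed count of zero-sum subsets of {+a_1,-a_1,...,+a_t,-a_t}
    vals = [s * x for x in a for s in (1, -1)]
    m = 0
    for r in range(len(vals) + 1):
        for idx in combinations(range(len(vals)), r):
            if sum(vals[i] for i in idx) == 0:
                m += (-1) ** r
    return m

print(mertens_m([1, 1]))      # Example (b): 6
print(mertens_m([1, 2, 3]))   # Example (c): 6
print([mertens_m(range(1, t + 1)) for t in range(1, 8)])
# 2, 4, 6, 10, 12, 20, 24, ...  (the sequence of Example (c))
\end{verbatim}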
We close with some remarks.
\smallskip
\noindent(a) In the quasihyperbolic case the~$\bigo(1/N)$ term
is oscillatory, so no improvement of the asymptotic in terms of
a monotonic function is possible. The extent to which the
exponential dominance of the entropy term fails in this setting
is revealed by the following. Let~$F_T(n)$ denote the number of
points fixed by the automorphism~$T^n$. On the one hand,
Baker's theorem implies that~$F_T(n)^{1/n}\rightarrow e^h$
as~$n\to\infty$. On the other hand Dirichlet's theorem shows
that~$F_T(n+1)/F_T(n)$ does not converge
(see~\cite[Th.~6.3]{MR1461206}).
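Both statements are easy to observe numerically. The Python sketch below is
an illustration added here (it uses the matrix~$A$ of~(\ref{definesA}) only
as a convenient example): it computes~$F_T(n)=\vert\det(A_T^n-I)\vert$
exactly, recovers the orbit counts by M\"obius inversion, and prints the
remainders~$M_T(N)-m\log N$, which settle into the bounded oscillation
predicted by Theorem~\ref{theorem} (here~$t=1$, so~$m=2$).
\begin{verbatim}
import numpy as np
import sympy as sp

A = sp.Matrix([[0, 0, 0, -1], [1, 0, 0, 8], [0, 1, 0, -6], [0, 0, 1, 8]])

def F(n):                       # number of points fixed by T^n
    return abs((A**n - sp.eye(4)).det())

def mobius(n):                  # naive Moebius function, enough for small n
    mu, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0
            mu = -mu
        p += 1
    return -mu if m > 1 else mu

def orbits(n):                  # number of closed orbits of length n
    return sum(mobius(n // k) * F(k) for k in range(1, n + 1) if n % k == 0) // n

lam = np.linalg.eigvals(np.array(A.tolist(), dtype=float))
h = float(np.sum(np.log(np.abs(lam[np.abs(lam) > 1 + 1e-9]))))   # entropy

M = 0.0
for N in range(1, 121):
    M += int(orbits(N)) * np.exp(-h * N)
    if N % 30 == 0:
        print(N, round(M - 2 * np.log(N), 4))    # m = 2 for this matrix
\end{verbatim}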
\smallskip
\noindent(b) The formula for~$m$ in the statement
of~\cite[Th.~1]{MR1787646} is incorrect in a minor way; as stated
in~\cite[Rem.~2]{MR1787646} and as illustrated in the examples
above,~$m$ should be~$K(1)$, which is not necessarily the same
as~$2^{t}$.
\smallskip
\noindent(c) The proof of Theorem~\ref{theorem} also gives an
elementary proof of the asymptotics in the hyperbolic case: in
the notation of the proof, $V_n=1$ so $m=1$. Applying now the
Euler-MacLaurin summation formula (see Ram Murty~\cite[Th.
2.1.9]{MR1803093}) we get an asymptotic of the shape
\[
\sum_{\vert\tau\vert\le N}\frac{1}{{\rm e}^{h\vert\tau\vert}}=\log
N+C_{\ref{mainconstantquasicase}}+\sum_{r=0}^{k-1}\frac{B_{r+1}}{(r+1)N^{r+1}}+\bigo\(N^{-(k+1)}\),
\]
where $B_1=-\frac 12$, $B_2=\frac 16$,\ldots are the Bernoulli
numbers, for any~$k\ge1$.
\section{Introduction}\label{Intr}
The most important challenge for cosmologists
is to explain the accelerated expansion of our
universe that was directly measured for the first time from Type
Ia supernovae observations \cite{Riess98,Perl99}. These supernovae
were used as standard candles, because one can measure their
redshifts $z$ and luminosity distances $D_L$. The observed
dependence $D_L(z)$ based on further measurements
\cite{SNTable,WeinbergAcc12} argues for the accelerated growth of
the cosmological scale factor $a(t)$ at late stage of its
evolution.
This result was confirmed via observations of cosmic microwave
background anisotropy \cite{WMAP}, baryon acoustic oscillations
(BAO) or large-scale galaxy clustering
\cite{WeinbergAcc12,Eisen05,SDSS} and other observations
\cite{WeinbergAcc12,WMAP,Plank13}. In particular, our attention
should be paid to measurements of the Hubble parameter $H(z)$ for
different redshifts $z$
\cite{Simon05,Stern10,Moresco12,Blake12,Zhang12,Busca12,Chuang12,Gazta09,Anderson13,Oka13,Delubac14,Font-Ribera13}.
The results of these measurements and estimations are represented
below in Table~\ref{AT1} of the Appendix.
The values $H(z)$ were calculated with two methods: evaluation of
the age difference for galaxies with close redshifts in
Refs.~\cite{Simon05,Stern10,Moresco12,Blake12,Zhang12,Busca12,Chuang12}
and the method with BAO analysis
\cite{Gazta09,Anderson13,Oka13,Delubac14,Font-Ribera13}.
In the first method the equality
\begin{equation}
a(t)=a_0/(1+z)
\label{z} \end{equation}
and its consequence
$$
H(z)=\frac1{a(t)}\frac{da}{dt}=-\frac1{1+z}\frac{dz}{dt}
$$
are used. Here $a_0\equiv a(t_0)$ is the current value
of the scale factor $a$.
Baryon acoustic oscillations (BAO) are disturbances in the cosmic
microwave angular power spectrum and in the correlation function
of the galaxy distribution, connected with the propagation of acoustic
waves before the recombination epoch
\cite{WeinbergAcc12,Eisen05}. These waves involved baryons coupled
with photons up to the end of the drag era corresponding to
$z_d\simeq 1059.3$ \cite{Plank13}, when baryons became decoupled
and resulted in a peak in the galaxy-galaxy correlation function
at the comoving sound horizon scale $r_s(z_d)$
\cite{Eisen05,Plank13}.
In Table~\ref{AT2} of the Appendix we present estimates of two
observational manifestations of the BAO effect. These values are
taken from Refs.~\cite{WMAP,BlakeBAO11,Chuang13}; they confirm the
conclusion about the accelerated expansion of the universe. In
addition, these data, together with the observations of Type Ia supernovae
and of the Hubble parameter $H(z)$, put stringent restrictions on possible
cosmological theories and models.
To explain the accelerated expansion of the universe various
cosmological models have been suggested; they include different
forms of dark matter and dark energy in equations of state
and various modifications of Einstein gravity
\cite{Clifton,Bamba12,CopelandST06}. The most popular among
cosmological models is the $\Lambda$CDM model with a $\Lambda$
term (dark energy) and cold dark matter (see reviews
\cite{Clifton,CopelandST06}). This model with 5\% fraction of
visible baryonic matter nowadays ($\Omega_b=0.05$), 24\% fraction
of dark matter ($\Omega_c=0.24$) and 71\% fraction of dark energy
($\Omega_\Lambda=0.71$) \cite{WMAP} successfully describes
observational data for Type Ia supernovae, anisotropy of cosmic
microwave background, BAO effects and $H(z)$ estimates
\cite{WeinbergAcc12,WMAP,Plank13}.
However, there are some problems in the
$\Lambda$CDM model connected with the vague nature of dark matter and
dark energy, with the fine tuning of the observed value of $\Lambda$,
which is many orders of magnitude smaller than the expected vacuum
energy density, and with the surprising proximity of $\Omega_\Lambda$ and
$\Omega_m=\Omega_b+\Omega_c$ nowadays, though these parameters
depend on time in different ways (the coincidence problem)
\cite{Clifton,Bamba12,CopelandST06,Kunz12}.
Therefore a large number of alternative cosmological models have
been proposed. They include modified gravity with $f(R)$
Lagrangian \cite{SotiriouF,NojOdinFR}, theories with scalar fields
\cite{CaldwellDS98,KhouryA04}, models with nontrivial equations of
state
\cite{KamenMP01,Bento02,Makler03,LuGX10,LiangXZ11,CamposFP12,XuLu12,LuXuWL11,PaulTh13},
with extra dimensions
\cite{Mohammedi02,Darabi03,BringmannEG03,PanigrahiZhCh06,MiddleSt11,FarajollahiA10,PahwaChS,GrSh13}
and many others
\cite{Clifton,Bamba12,CopelandST06,Kunz12}.
Among these gravitational models we concentrate here on the model
with generalized Chaplygin gas (GCG)
\cite{KamenMP01,Bento02,Makler03,LuGX10,LiangXZ11,CamposFP12,XuLu12}.
The equation of state in this model
\begin{equation}
p=-B_0/\rho^\alpha
\label{pGCG} \end{equation}
generalizes the corresponding equation
$p=-B/\rho$ for the original Chaplygin gas model \cite{KamenMP01}.
Generalized Chaplygin gas with EoS (\ref{pGCG}) plays the roles of
both dark matter and dark energy, it is applied to describing
observations of type Ia supernovae, BAO effects, the Hubble
parameter $H(z)$ and other observational data in various
combinations \cite{Makler03,LuGX10,LiangXZ11,CamposFP12,XuLu12}.
An equation of state similar to Eq.~(\ref{pGCG}) is used in the
multidimensional gravitational model of I. Pahwa, D.~Choudhury and
T.R.~Seshadri \cite{PahwaChS} (the PCS model in references below).
In this model the $1+3+d$ dimensional spacetime
is symmetric and isotropic in two subspaces: in 3 usual spatial
dimensions and in $d$ additional dimensions. Matter has zero
(dust-like) pressure in usual dimensions and negative pressure
$p_e$ in the form (\ref{pGCG}) in extra dimensions:
\begin{equation}
T^{\mu}_{\nu} = \mbox{diag}\,(-\rho,0,0,0,p_e,\dots,p_e),
\qquad
p_e=-B_0{\rho}^{-\alpha}
\label{Tmn}
\end{equation}
(in Sects.~\ref{Intr}, \ref{Mod} we use units with $c=1$).
In Ref.~\cite{PahwaChS} the important case $d=1$ was omitted. This
case was considered in Ref.~\cite{GrSh13}, where we analyzed
singularities of cosmological solutions in the PCS model
\cite{PahwaChS} and suggested how to modify the equation of state
(\ref{Tmn}) for the sake of avoiding the finite-time future
singularity (``the end of the world'') which is inevitable in the
PCS model. Main advantages of the multidimensional models
\cite{PahwaChS} and \cite{GrSh13} are: naturally arising dynamical
compactification and successful description of the Type Ia
supernovae observations.
In this paper we compare the $\Lambda$CDM model, the model with
generalized Chaplygin gas (GCG) \cite{KamenMP01,Bento02}, and also
the models PCS \cite{PahwaChS} and \cite{GrSh13} with $d$ extra
dimensions from the point of view of their capacity to describe
recent observational data for type Ia supernovae, BAO and $H(z)$.
In the next section we briefly summarize the dynamics of the
mentioned models, in Sect.~\ref{Observ} we analyze parameters of
the mentioned models resulting in the best description of the
observational data from Ref.~\cite{SNTable} and Appendix.
\section{Models}\label{Mod}
For all cosmological models in this paper the Einstein equations
\begin{equation}
G^\mu_\nu=8\pi G T^\mu_\nu+\Lambda\delta^\mu_\nu,
\label{Eeq}\end{equation}
determine dynamics of the universe. Here
$T^\mu_\nu$ and $G^\mu_\nu=R^\mu_\nu-\frac12R\delta^\mu_\nu$ are
the energy momentum tensor and the Einstein tensor, $\Lambda$ is
nonzero only in the $\Lambda$CDM model. The energy momentum tensor
has the form (\ref{Tmn}) in the multidimensional models
\cite{PahwaChS,GrSh13} and the standard form
\begin{equation}
T^{\mu}_{\nu} = \mbox{diag}\,(-\rho,p,p,p)
\label{T4}
\end{equation}
in models with $3+1$ dimensions. In the $\Lambda$CDM model
baryonic and dark matter may be considered as one component of
dust-like matter with density $\rho=\rho_b+\rho_{dm}$, so we
suppose $p=0$ in Eq.~(\ref{T4}). The fraction of relativistic
matter (radiation and neutrinos) is close to zero for observable
values $z\le2.3$. In the GCG model
\cite{KamenMP01,Bento02,Makler03,LuGX10,LiangXZ11,CamposFP12,XuLu12}
pressure $p$ in the form (\ref{pGCG}) plays the role of dark
energy, corresponding to the $\Lambda$ term in the $\Lambda$CDM
model.
For the Robertson-Walker metric with the curvature sign $k$
\begin{equation}
ds^2 = -dt^2+a^2(t)\Big[(1-k r^2)^{-1}dr^2+r^2
d\Omega\Big]
\label{metrRW}
\end{equation}
the Einstein equations (\ref{Eeq}) are reduced to the system
\begin{eqnarray}
3\frac{\dot{a}^2+k}{a^2}=8\pi G\rho+\Lambda,\label{Esysa}\\
\dot{\rho}=-3\frac{\dot{a}}{a}(\rho+p).\label{Esys2}
\end{eqnarray}
Eq.~(\ref{Esys2}) results from the continuity
condition $T^{\mu}_{\nu; \mu}=0$; the dot denotes the time
derivative.
Using the present time values of the Hubble constant
and the critical density
\begin{equation}
H_0=\frac{\dot a} a\Big|_{t=t_0}=H\Big|_{z=0},\qquad
\rho_{cr}=\frac{3H_0^2}{8\pi G},
\label{rocr}\end{equation}
we introduce dimensionless time $\tau$, densities
$\bar{\rho}_i$, pressure $\bar{p}$ and logarithm of the scale
factor \cite{PahwaChS,GrSh13}:
\begin{equation}
\tau=H_0t,\qquad\bar{\rho}=\frac{\rho}{\rho_{cr}},\qquad
\bar{\rho}_b=\frac{\rho_b}{\rho_{cr}},\qquad
\bar{p}=\frac{p}{\rho_{cr}},\qquad {\cal A}=\log\frac a{a_0}.
\label{tau} \end{equation}
We denote derivatives with
respect to $\tau$ as primes and rewrite the system (\ref{Esysa}),
(\ref{Esys2})
\begin{eqnarray}
{\cal A}'(\tau)&=&\sqrt{\bar{\rho}+\Omega_\Lambda+\Omega_ke^{-2{\cal A}}}, \label{Asy} \\
\bar{\rho}'(\tau)&=&-3{\cal A}'(\bar{\rho}+\bar{p}). \label{rhsy}
\end{eqnarray}
Here
\begin{equation}
\Omega_m=\frac{\rho(t_0)}{\rho_{cr}},\qquad
\Omega_\Lambda=\frac{\Lambda}{3H_0^2},\qquad
\Omega_k=-\frac{k}{a_0^2H_0^2}
\label{Omega1} \end{equation}
are present time fractions of matter
($\Omega_m=\Omega_b+\Omega_c$), dark energy and curvature in the
equality
\begin{equation}
\Omega_m+\Omega_\Lambda+ \Omega_k=1,
\label{sumOm} \end{equation}
resulting from Eq.~(\ref{Esysa}) if we fix $t=t_0$.
If we know an equation of state $\bar{p}=\bar{p}(\bar{\rho})$ for
any model, we can solve the Cauchy problem for the system (\ref{Asy}),
(\ref{rhsy}) including initial conditions for variables
(\ref{tau}) at the present epoch $t=t_0$ (here and below $t=t_0$
corresponds to $\tau=1$)
\begin{equation}
{\cal A}\big|_{\tau=1}=0,\qquad
\bar{\rho}\big|_{\tau=1}=\Omega_m.
\label{init1} \end{equation}
In the $\Lambda$CDM model Eq.~(\ref{rhsy}) yields
$\bar{\rho}=\Omega_m e^{-3{\cal A}}=\Omega_m(1+z)^3$, so we solve
only equation (\ref{Asy})
\begin{equation}
{{\cal A}'}^2=\frac{H^2}{H_0^2}=\Omega_m e^{-3{\cal A}}+
\Omega_\Lambda+\Omega_ke^{-2{\cal A}}
\label{ALCDM} \end{equation}
with the first initial condition
(\ref{init1}).
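Since $e^{-{\cal A}}=1+z$, Eq.~(\ref{ALCDM}) gives $H(z)$ in closed form; a
minimal Python sketch, assuming NumPy and using as defaults the best-fit
values found below in Sect.~\ref{Observ} purely for illustration, reads:
\begin{verbatim}
import numpy as np

def hubble_lcdm(z, H0=70.26, Om=0.276, OL=0.769):
    # H(z) from Eq. (ALCDM) with exp(-A) = 1 + z and Omega_k = 1 - Om - OL
    z = np.asarray(z, dtype=float)
    Ok = 1.0 - Om - OL
    return H0 * np.sqrt(Om * (1 + z)**3 + OL + Ok * (1 + z)**2)

print(hubble_lcdm([0.0, 0.5, 1.0, 2.0]))   # in km/s/Mpc
\end{verbatim}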
Equation (\ref{rhsy}) may also be solved in the GCG model, but
in this case we have to decompose all matter into two components
\cite{LuGX10,LiangXZ11,CamposFP12,XuLu12,LuXuWL11}. One of these
components is usual dust-like matter including baryonic matter;
the other component is the generalized Chaplygin gas with density
$\rho_g\equiv\rho_{GCG}$ (and corresponding
$\bar{\rho}_g=\rho_g/\rho_{cr}$). If the first component is purely
baryonic and the latter describes both dark matter and dark
energy, the equations of state are:
\begin{equation}
\bar{\rho}=\bar{\rho}_b+\bar{\rho}_g,\qquad\bar{p}_b=0,\qquad\bar{p}=\bar{p}_g=-B\,(\bar{\rho_g})^{-\alpha}
\label{EoSC}
\end{equation}
If we use the integrals $\bar{\rho}_b=\Omega_b e^{-3{\cal A}}$ and
$\bar{\rho}_g=\big[B+Ce^{-3{\cal
A}(1+\alpha)}\big]^{1/(1+\alpha)}$ of Eq.~(\ref{rhsy}) for these
components, equation (\ref{Asy}) takes the form
\cite{Makler03,LuGX10,LiangXZ11,CamposFP12,XuLu12,LuXuWL11}
\begin{equation}
{{\cal A}'}^2=\frac{H^2}{H_0^2}=\Omega_b e^{-3{\cal A}}+
(1-\Omega_b-\Omega_k)\Big[B_s+(1-B_s)\,e^{-3{\cal
A}(1+\alpha)}\Big]^{1/(1+\alpha)}+\Omega_ke^{-2{\cal A}}.
\label{AGCG} \end{equation}
We solve this equation with the initial condition (\ref{init1}) $
{\cal A}\big|_{\tau=1}=0$. The dimensionless constant $B_s$
\cite{XuLu12,LuXuWL11} (it is denoted $A_s$ in
Refs.~\cite{LuGX10,LiangXZ11}) is expressed via $B$ or $B_0$:
\begin{equation}
B_s=B\cdot(1-\Omega_b-\Omega_k)^{-1-\alpha},\qquad
B=B_0\,\rho_{cr}^{-1-\alpha}.
\label{Bs} \end{equation}
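Equation (\ref{AGCG}) can be evaluated in the same way; in the sketch below
(added for illustration) the values of $B_s$ and $\alpha$ are hypothetical
placeholders rather than fit results.
\begin{verbatim}
import numpy as np

def hubble_gcg(z, H0=70.0, Ob=0.047, Ok=0.0, Bs=0.8, alpha=0.1):
    # H(z) from Eq. (AGCG) with exp(-A) = 1 + z
    z = np.asarray(z, dtype=float)
    a3 = (1.0 + z)**3
    gcg = (1.0 - Ob - Ok) * (Bs + (1.0 - Bs) * a3**(1.0 + alpha))**(1.0 / (1.0 + alpha))
    return H0 * np.sqrt(Ob * a3 + gcg + Ok * (1.0 + z)**2)
\end{verbatim}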
For the multidimensional model PCS \cite{PahwaChS} and the model
\cite{GrSh13} in spacetime with $1+3+d$ dimensions the following
metric is used \cite{PahwaChS}:
\begin{equation}
ds^2 = -dt^2+a^2(t)\left(\frac{dr^2}{1-k r^2} + r^2
d\Omega\right)+b^2(t)\left(\frac{d R^2}{1-k_2 R^2}+R^2 d\Omega_{d-1}
\right).
\label{metricInd}
\end{equation}
Here $b(t)$ and $k_2$ are the scale factor and
the curvature sign in the extra dimensions (along with $a$ and $k$ for
the usual dimensions). For the cosmological solutions in
Refs.~\cite{PahwaChS,GrSh13}
the scale factor $a(t)$ grows while $b(t)$ diminishes;
in other words, some form of dynamical compactification
\cite{Mohammedi02,Darabi03,BringmannEG03,PanigrahiZhCh06,MiddleSt11,FarajollahiA10,PahwaChS}
takes place, and the size of the compactified factor $b$ is small enough to play no
essential role at the TeV scale.
In Refs.~\cite{PahwaChS,GrSh13} the authors considered only one
component of their matter. Here we generalize these models and
introduce the ``usual'' component with density $\bar{\rho}_b$ and
the ``exotic'' component with $\bar{\rho}_e=\rho_e/\rho_{cr}$ and
pressure $\bar{p}_e=p_e/\rho_{cr}$ in extra dimensions similarly
to Eq.~(\ref{EoSC}):
\begin{equation}
\bar{\rho}=\bar{\rho}_b+\bar{\rho}_e,\qquad\bar{p}_e=-B\,(\bar{\rho_e})^{-\alpha}
\label{EoPCS} \end{equation}
Dynamical equations for the models \cite{PahwaChS,GrSh13} result
from the Einstein equations (\ref{T4}) with $\Lambda=0$ and the
energy momentum tensor (\ref{Tmn}), (\ref{EoPCS}). In our notation
(\ref{tau}) with ${\cal B}=\log\big(b/b_0\big)$ (where
$b_0=b(t_0)$) these equations for $k_2=0$ and $d>1$ are
\cite{PahwaChS,GrSh13}
\begin{eqnarray}
{\cal A}''&=&\frac1{d+2}\Big[ d(d-1)\,{\cal B}'\big(\frac12{\cal
B}'-{\cal A}'\big)-3(d+1)\,{{\cal A}'}^2 -3d\bar{p}_e
+(2d+1)\Omega_ke^{-2{\cal A}}\Big], \quad\label{Ad} \\
\bar{\rho}_b'&=&-\bar{\rho}_b(3{\cal A}'+d{\cal B}'),\qquad
\bar{\rho}_e'=-3\bar{\rho_e}{\cal
A}'-d(\bar{\rho_e}+\bar{p}_e)\,{\cal B}',
\label{rhod}\\
{\cal B}'&=&(d-1)^{-1}\Big[-3{\cal A}'+\sqrt{3\big[(d+2)\,{{\cal
A}'}^2+2(d-1)\, (\bar{\rho}+\Omega_ke^{-2{\cal A}})\big]/d}\Big].
\label{Bd}
\end{eqnarray}
If $d=1$ one should use
\cite{GrSh13}
\begin{equation}
{\cal B}'=(\bar{\rho}+\Omega_ke^{-2{\cal A}})/{\cal A}'-{\cal A}'
\label{B1} \end{equation}
instead of Eq.~(\ref{Bd}).
For the system (\ref{Ad})~--~(\ref{rhod}) the initial conditions
include Eqs.~(\ref{init1}) and the additional condition
\begin{equation}
{\cal A}'\big|_{\tau=1}=1
\label{init2} \end{equation}
resulting from definitions of ${\cal A}$ (\ref{tau}) and $H_0$
(\ref{rocr}):
$${\cal A}'(\tau)=\frac d{d\tau}\log\frac a{a_0}=\frac1{H_0}
\frac{\dot a}a.$$
For the model PCS \cite{PahwaChS,GrSh13} we have the analog of
Eq.~(\ref{sumOm})
\begin{equation}
\Omega_m+\Omega_B+ \Omega_k=1,
\label{sumOmI} \end{equation}
resulting from Eqs.~(\ref{Bd}) or (\ref{B1}) at $\tau=1$. Here
$\Omega_B=-d\big({\cal B}'+\frac{d-1}6{{\cal B}'}^2\big)\big|_{\tau=1}$ is the
contribution from $d$ extra dimensions.
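For numerical work the system (\ref{Ad})--(\ref{Bd}) is conveniently
integrated backwards in $\tau$ from the initial data (\ref{init1}),
(\ref{init2}); then $z(\tau)=e^{-{\cal A}}-1$ and $H/H_0={\cal A}'$. A SciPy
sketch, in which the values of $d$, $\alpha$, $B$ and of the density
fractions are hypothetical and only fix a concrete example:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

d, alpha, B = 6, 0.2, 0.3                  # hypothetical model parameters
Ob, Om, Ok = 0.047, 0.3, 0.0               # rho_b(1), rho(1) = Omega_m, curvature

def rhs(tau, s):
    A, Ad, rb, re = s                      # A = log(a/a0), Ad = A'
    rho = rb + re
    pe = -B * re**(-alpha)                 # EoS (EoPCS) in the extra dimensions
    Bd = (-3*Ad + np.sqrt(3*((d + 2)*Ad**2
          + 2*(d - 1)*(rho + Ok*np.exp(-2*A))) / d)) / (d - 1)      # Eq. (Bd)
    Add = (d*(d - 1)*Bd*(0.5*Bd - Ad) - 3*(d + 1)*Ad**2
           - 3*d*pe + (2*d + 1)*Ok*np.exp(-2*A)) / (d + 2)          # Eq. (Ad)
    return [Ad, Add, -rb*(3*Ad + d*Bd), -3*re*Ad - d*(re + pe)*Bd]  # Eq. (rhod)

sol = solve_ivp(rhs, (1.0, 0.3), [0.0, 1.0, Ob, Om - Ob],
                dense_output=True, rtol=1e-8)
tau = np.linspace(1.0, 0.3, 6)
A, Ad = sol.sol(tau)[:2]
print(np.exp(-A) - 1.0)                    # redshift z(tau)
print(Ad)                                  # H(z)/H0
\end{verbatim}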
The models $\Lambda$CDM, GCG, PCS with suitable values of model
parameters have cosmological solutions describing accelerated
expansion of the universe
\cite{WMAP,Plank13,Makler03,LuGX10,LiangXZ11,CamposFP12,XuLu12,PahwaChS,GrSh13}.
We consider restrictions on these parameters coming from recent
observational data for type Ia supernovae \cite{SNTable}, BAO
\cite{WMAP,BlakeBAO11,Chuang13} and from measuring the Hubble
parameter $H(z)$
\cite{Simon05,Stern10,Moresco12,Blake12,Zhang12,Busca12,Chuang12,Gazta09,Anderson13,Oka13,Delubac14,Font-Ribera13},
(Tables~\ref{AT2}, \ref{AT1}).
\section{Observational data and model parameters }\label{Observ}
Recent observational data on Type Ia supernovae in the Union2.1
compilation \cite{SNTable} include redshifts $z=z_i$ and distance
moduli $\mu_i$ with errors $\sigma_i$ for $N_S=580$ supernovae.
The distance modulus
$\mu_i=\mu(D_L)=5\log_{10}\big(D_L/10\,\mbox{pc}\big)$ is determined by
the luminosity distance \cite{Plank13,Clifton}:
\begin{equation}
D_L(z)=\frac{c\,(1+z)}{H_0\sqrt{|\Omega_k|}}\mbox{\,Sin}_k
\bigg(H_0\sqrt{|\Omega_k|}\int\limits_0^z\frac{d\tilde z}{H(\tilde
z)}\bigg),\quad\;
\mbox{Sin}_k(x)=\left\{\begin{array}{ll} \sinh x, &\Omega_k>0,\\
x, & \Omega_k=0,\\ \sin x, & \Omega_k<0. \end{array}\right.
\label{DL} \end{equation}
In particular, for the flat universe ($k=\Omega_k=0$) the expression (\ref{DL})
is
$$
D_L=c\,(1+z)\int\limits_0^z\frac{d\tilde z}{H(\tilde z)}
=\frac{ca_0^2}{H_0a(\tau)}\int\limits_\tau^1\frac{d\tilde\tau}{a(\tilde\tau)}.
$$
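Numerically, the flat case reduces to a single quadrature over $1/H(\tilde z)$;
the sketch below (added for illustration) returns $D_L$ in Mpc and the
distance modulus for any of the $H(z)$ functions sketched above.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458                        # speed of light, km/s

def lum_dist(z, hubble):
    # flat-universe case of Eq. (DL); hubble(z) must return km/s/Mpc
    integral, _ = quad(lambda x: 1.0 / float(hubble(x)), 0.0, z)
    return C_KM_S * (1.0 + z) * integral   # in Mpc

def dist_modulus(z, hubble):
    return 5.0 * np.log10(lum_dist(z, hubble) * 1.0e6 / 10.0)   # D_L[pc] / 10 pc

# example: dist_modulus(0.5, hubble_lcdm)
\end{verbatim}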
To describe the Type Ia supernovae data \cite{SNTable} we fix
the values of the model parameters $p_1,p_2,\dots$ for the chosen model
($\Lambda$CDM, GCG or PCS) and calculate the dependence of the scale
factor $a(\tau)$ on the dimensionless time $\tau$. Further, we
calculate numerically the integral expression (\ref{DL}) and the
distance modulus $\mu(\tau)$. For each value of the redshift $z_i$ in
the table \cite{SNTable} we find the corresponding $\tau=\tau_i$
using linear interpolation in Eq.~(\ref{z}) and obtain the
theoretical value $\mu_{th}=\mu(\tau_i,p_1,p_2,\dots)$ from the
dependence $\mu(\tau)$ (\ref{DL}).
We search for the best fit between the theoretical predictions $\mu_{th}$
and the observed data $\mu_i$ as the minimum of
\begin{equation}
\chi^2_S(p_1,p_2,\dots)=\sum_{i=1}^{N_S}
\frac{\big[\mu_i-\mu_{th}(z_i,p_1,p_2,\dots)\big]^2}{\sigma_i^2}
\label{chiS} \end{equation}
or the maximum of the corresponding likelihood function
${\cal L}_S(p_1,p_2,\dots)=\exp(-\chi^2_S/2)$
in the space of model parameters $p_1,p_2,\dots$.
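Once a model curve $\mu_{th}(z)$ is available, Eq.~(\ref{chiS}) is a one-line
sum; the sketch below (added for illustration) takes the model as a callable
so that the same code applies to all the models considered here.
\begin{verbatim}
import numpy as np

def chi2_sn(mu_model, z_obs, mu_obs, sigma):
    # chi^2_S of Eq. (chiS); mu_model(z) is the theoretical distance modulus
    mu_th = np.array([mu_model(z) for z in z_obs])
    return float(np.sum(((np.asarray(mu_obs) - mu_th) / np.asarray(sigma))**2))
\end{verbatim}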
The Type Ia supernovae data \cite{SNTable} and the best fits for
the mentioned models $\Lambda$CDM, GCG and PCS are shown in
Fig.~\ref{F1}b in $z,D_L$ plane. Details of the optimization
procedure are described below.
We compare the model predictions for the Hubble parameter $H(z)=\dot a/a=H_0{\cal
A}'(\tau)$ with the observational data
\cite{Simon05,Stern10,Moresco12,Blake12,Zhang12,Busca12,Chuang12,Gazta09,Anderson13,Oka13,Delubac14,Font-Ribera13}
from Table~\ref{AT1} (Fig.~\ref{F1}c) and use a $\chi^2$
function similar to (\ref{chiS}):
\begin{equation}
\chi^2_H(p_1,p_2,\dots)=\sum_{i=1}^{N_H}
\frac{\big[H_i-H_{th}(z_i,p_1,p_2,\dots)\big]^2}{\sigma_{H,i}^2}.
\label{chiH} \end{equation}
Here $N_H=34$, theoretical values $H_{th}(z_i,\dots)=H_0{\cal A}'\big(\tau(z_i)\big)$
are obtained from the calculated dependence ${\cal A}(\tau)$ and the equality
(\ref{z}) $z=e^{-{\cal A}}-1$.
The observational data for BAO \cite{WMAP,BlakeBAO11,Chuang13}
(Table~\ref{AT2}) includes two measured values \cite{Eisen05}
\begin{equation}
d_z(z)= \frac{r_s(z_d)}{D_V(z)}
\label{dz} \end{equation}
and
\begin{equation}
A(z) = \frac{H_0\sqrt{\Omega_m}}{cz}D_V(z).
\label{Az} \end{equation}
They are connected with the distance \cite{WMAP,Eisen05,Plank13}
\begin{equation}
D_V(z)=\bigg[\frac{cz D_L^2(z)}{(1+z)^2H(z)}\bigg]^{1/3},
\label{DV} \end{equation}
expressed here via the luminosity distance (\ref{DL}).
The BAO observations \cite{WMAP,BlakeBAO11,Chuang13} in
Table~\ref{AT2} are not independent. So the $\chi^2$ function for
the values (\ref{dz}) and (\ref{Az})
\begin{equation}
\chi^2_B(p_1,p_2,\dots)=(\Delta d)^TC_d^{-1}\Delta d+
(\Delta { A})^TC_A^{-1}\Delta { A}.
\label{chiB} \end{equation}
includes the columns
$\Delta d=[d_{z,th}(z_i,p_1,\dots)-d_z(z_i)]$, $\Delta
A=[A_{th}(z_i,p_1,p_2,\dots)-{ A}(z_i)]$, $i=1,\dots,N_B$ and the
inverse covariance matrices $C_d^{-1}$ and $C_A^{-1}$
\cite{WMAP,BlakeBAO11} described in the Appendix.
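Given the tabulated $d_z(z_i)$, $A(z_i)$ and the inverse covariance matrices
from the Appendix (not reproduced here), Eqs.~(\ref{dz})--(\ref{DV}) and
(\ref{chiB}) translate into a few lines of NumPy; the following sketch is an
illustration only.
\begin{verbatim}
import numpy as np

C_KM_S = 299792.458

def chi2_bao(z_pts, dz_obs, Cd_inv, A_obs, CA_inv, rs_drag, H0, Om, Hz, DL):
    # Hz(z), DL(z): model Hubble rate and luminosity distance; rs_drag in Mpc
    z = np.asarray(z_pts, dtype=float)
    dv = np.array([(C_KM_S * zi * DL(zi)**2 / ((1 + zi)**2 * Hz(zi)))**(1.0/3.0)
                   for zi in z])                       # Eq. (DV)
    d_th = rs_drag / dv                                # Eq. (dz)
    A_th = H0 * np.sqrt(Om) * dv / (C_KM_S * z)        # Eq. (Az)
    dd = d_th - np.asarray(dz_obs)
    dA = A_th - np.asarray(A_obs)
    return float(dd @ Cd_inv @ dd + dA @ CA_inv @ dA)  # Eq. (chiB)
\end{verbatim}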
\smallskip
The best fits to the observational data for Type Ia supernovae
\cite{SNTable}, $H(z)$ and BAO data from Tables~\ref{AT2},
\ref{AT1} are presented in Fig.~\ref{F1} for the models
$\Lambda$CDM, GCG and PCS (with $d=1$ and $d=6$). The values of
model parameters are tabulated below in Table~\ref{Optim}. They
are optimal from the standpoint of minimizing the sum of all
$\chi^2$ (\ref{chiS}), (\ref{chiH}) and (\ref{chiB}):
\begin{equation}
\chi^2_\Sigma=\chi^2_S+\chi^2_H+\chi^2_B.
\label{chisum} \end{equation}
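The best fits listed in Table~\ref{Optim} correspond to the minimum of
(\ref{chisum}) over the model parameters; any standard optimizer will do. In
the sketch below (added for illustration) the objective is a hypothetical
wrapper that returns $\chi^2_S+\chi^2_H+\chi^2_B$ for a given parameter
vector.
\begin{verbatim}
from scipy.optimize import minimize

def best_fit(chi2_total, p_start):
    # chi2_total(p) must return chi^2_Sigma for a parameter vector p,
    # e.g. p = (H0, Omega_m, Omega_Lambda) in the LambdaCDM case
    res = minimize(chi2_total, p_start, method="Nelder-Mead",
                   options={"xatol": 1e-4, "fatol": 1e-4})
    return res.x, res.fun

# best, chi2_min = best_fit(chi2_total, (70.0, 0.28, 0.72))
\end{verbatim}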
\begin{figure}[th]
\centerline{\includegraphics[scale=0.76,trim=3mm 0mm 2mm 6mm]{Cfig1}}
\caption{\small For the models $\Lambda$CDM, GCG, PCS ($d=1$
and $d=6$) with the optimal values of model parameters from
Table~\ref{Optim} we present (a) the scale factor $a(\tau)$; (b)
the luminosity distance $D_L(z)$ and the Type Ia supernovae data
\cite{SNTable}; (c) dependence $H(z)$ with the data points from
Table~\ref{AT1} and (d) the distance (\ref{DV}) $D_V(z)$ with the
data points from Table~\ref{AT2}.}
\label{F1}
\end{figure}
Predictions of different models in Fig.~\ref{F1} are rather close,
in particular, the curves for the models $\Lambda$CDM and GCG
practically coincide. The Hubble parameter $H(z)$ in
Fig.~\ref{F1}c is measured in km\,s${}^{-1}$Mpc${}^{-1}$, the
distances $D_L(z)$ and $D_V(z)$ in Fig.~\ref{F1}b,\,d are in Gpc.
The data points for $D_V(z)=r_s(z_d)/d_z(z)$ in Fig.~\ref{F1}d are
calculated from $d_z(z_i)$ in Table~\ref{AT2}. Here the error
boxes include the data spread between the recent estimations of
the comoving sound horizon size:
\begin{equation}
r_s(z_d)=147.49\pm 0.59\mbox{ Mpc \cite{Plank13}},\qquad
r_s(z_d)=153.3\pm 2.0\mbox{ Mpc \cite{Anderson13,BlakeBAO11}}.
\label{rs} \end{equation}
\subsection{$\Lambda$CDM model}
In the $\Lambda$CDM model we use three free parameters $H_0$,
$\Omega_m$ and $\Omega_\Lambda$ in Eq.~(\ref{ALCDM}) for
describing the considered observational data at $z\le2.3$. For the
Hubble constant $H_0$ different approaches result in different
estimations. In particular, observations of Cepheid variables in
the project Hubble Space Telescope (HST) give the recent estimate
$H_0=73.8\pm2.4$ km\,s${}^{-1}$Mpc${}^{-1}$ \cite{Riess11}. On the
other hand, the satellite projects Planck Collaboration (Planck)
\cite{Plank13} and Wilkinson Microwave Anisotropy Probe (WMAP)
\cite{WMAP} for observations of cosmic microwave background
anisotropy result in the following values (in
km\,s${}^{-1}$Mpc${}^{-1}$):
\begin{equation}\begin{array}{ll}
H_0=67.3 \pm 1.2 & \mbox{ (Planck \cite{Plank13})},\\
H_0=69.7\pm2.4 & \mbox{ (WMAP \cite{WMAP})},\\
H_0=73.8\pm2.4 & \mbox{ (HST \cite{Riess11})}.
\end{array}
\label{H0}\end{equation}
The nine-year results from WMAP \cite{WMAP} include also the
estimate $H_0=69.33\pm0.88$ km\,s${}^{-1}$Mpc${}^{-1}$ with added
recent BAO and $H_0$ observations.
For the $\Lambda$CDM model many authors
\cite{WMAP,Plank13,Tonry03,Knop03,Kowalski08,ShiHL12,FarooqMR13,FarooqR13,Farooqth}
calculated the best fits for parameters $H_0$, $\Omega_m$ and
$\Omega_\Lambda$ for describing the Type Ia supernovae, $H(z)$ and
BAO data in various combinations. In
Refs.~\cite{ShiHL12,FarooqMR13,FarooqR13,Farooqth} some other
cosmological models were compared with the $\Lambda$CDM model. In
particular, the authors \cite{ShiHL12} compared 8 models with two
information criteria including minimal $\chi^2$ and the number of
model parameters. Optimal values of these parameters were pointed
out in Ref.~\cite{ShiHL12} with the exception of $H_0$, though
$H_0$ is an important parameter for all 8 models.
In Refs.~\cite{FarooqMR13,FarooqR13,Farooqth} the $\Lambda$CDM,
XCDM and $\phi$CDM models were applied to describe the supernovae,
$H(z)$ and BAO data. For all mentioned models the authors
\cite{FarooqMR13,FarooqR13,Farooqth} fixed two values of the
Hubble constant, $H_0=68\pm2.8$ \cite{GottV01} and $H_0=73.8\pm2.4$
km\,s${}^{-1}$Mpc${}^{-1}$ \cite{Riess11}, and searched for optimal
values of other model parameters. But they did not estimate the
best choice of $H_0$ among these two values or in the interval
between them.
In this paper we pay special attention to the dependence of the
$\chi^2_\Sigma$ minima on $H_0$. This dependence is very
important when we compare different cosmological models.
The results of calculations
\cite{WMAP,Plank13,Kowalski08,ShiHL12,FarooqMR13,FarooqR13,Farooqth},
as usual, are presented as level lines for
the functions $\chi^2(p_1,p_2)$ or ${\cal
L}_S(p_1,p_2)=\exp(-\chi^2_S/2)$ of two parameters at $1\sigma$
(68.27\%), $2\sigma$ (95.45\%) and $3\sigma$ (99.73\%) confidence
levels. In particular, if a value $H_0$ is fixed, these two
parameters for the $\Lambda$CDM model may be $\Omega_m$ and
$\Omega_\Lambda$.
In Fig.~\ref{F2} we use this scheme for 3 fixed values $H_0$
(\ref{H0}) indicated on the panels (including the optimal value
$H_0=70.262$ km\,s${}^{-1}$Mpc${}^{-1}$) and draw level lines of
the functions (\ref{chiS}), (\ref{chiH}), (\ref{chiB}) and
(\ref{chisum}) $\chi^2(\Omega_m,\Omega_\Lambda)$ in the
$\Omega_m,\Omega_\Lambda$ plane and for
$\chi^2_\Sigma(\Omega_m,H_0)$ with fixed $\Omega_\Lambda=0.769$ in
the bottom-right panel. The points of minima are marked in
Fig.~\ref{F2} as hexagrams for $\chi^2_S$, pentagrams for
$\chi^2_H$, diamonds for $\chi^2_B$ and circles for
$\chi^2_\Sigma$. Minimal values of the functions $\chi^2$
(\ref{chiS}), (\ref{chiH}), (\ref{chiB}) and (\ref{chisum}) at
these points are tabulated in Table~\ref{TLCDM} so we can compare
efficiency of this description for different $H_0$. For the same
purpose we point out the corresponding values $\chi^2$ for some
level lines in Fig.~\ref{F2} and present the dependence of minima
$\min\chi^2_\Sigma$ on $H_0$ and on $\Omega_m$ in the left bottom
panels of Fig.~\ref{F2}. Here we denote
$\min\chi^2_\Sigma(H_0)=\min\limits_{\Omega_m,\Omega_\Lambda}\chi^2_\Sigma$,
$\min\chi^2_\Sigma(\Omega_m)=\min\limits_{H_0,\Omega_\Lambda}\chi^2_\Sigma$
and graphs of the fractions $\chi^2_S$, $\chi^2_H$, $\chi^2_B$ in
$\min\chi^2_\Sigma(H_0)$ are also shown.
In the bottom panels we present how parameters of a minimum point
of $\chi^2_\Sigma$ depend on $H_0$ and on $\Omega_m$. In
particular, for the dependence on $H_0$ the coordinates
$\Omega_m(H_0)$ and $\Omega_\Lambda(H_0)$ of this point are
calculated, and the value $\Omega_k$ is determined from
Eq.~(\ref{sumOm}). For the dependence on $\Omega_m$ we also
present the graph $h(\Omega_m)$, where $H_0=100\,h$ km\,s${}^{-1}$Mpc${}^{-1}$.
\begin{figure}[th]
\centerline{\includegraphics[scale=0.79,trim=5mm 0mm 5mm 6mm]{Cfig2}}
\caption{\small The $\Lambda$CDM model. For the values $H_0$ (\ref{H0})
and the optimal value $H_0=70.26$ km\,s${}^{-1}$Mpc${}^{-1}$ level
lines are drawn at $1\sigma$, $2\sigma$ and $3\sigma$ (thick
solid) for $\chi^2_S(\Omega_m,\Omega_\Lambda)$ (black), for
$\chi^2_H(\Omega_m,\Omega_\Lambda)$ (green) and
$\chi^2_B(\Omega_m,\Omega_\Lambda)$ (red in the top row), the sum
(\ref{chisum}) $\chi^2_\Sigma(\Omega_m,\Omega_\Lambda)$ (the
middle row), $\chi^2_\Sigma(\Omega_m,H_0)$ for
$\Omega_\Lambda=0.758$ (the bottom-right panel); dependence of
$\min\chi^2_\Sigma$, its fractions $\chi^2$ and parameters of a
minimum point on $H_0$ and on $\Omega_m$. }
\label{F2}
\end{figure}
We see in Fig.~\ref{F2} and in Table~\ref{TLCDM} that the
dependence of $\min\chi^2_\Sigma$ on $H_0$
is appreciable and significant. This function has a distinct
minimum and achieves its minimal value $585.35$ at
$H_0\simeq70.26$ km\,s${}^{-1}$Mpc${}^{-1}$. The optimal values of the $\Lambda$CDM model
parameters $\Omega_m\simeq0.276$, $\Omega_\Lambda\simeq0.769$
corresponding to this minimum are presented in Table~\ref{Optim};
these values are taken for the $\Lambda$CDM curves in
Fig.~\ref{F1}.
The mentioned sharp dependence of $\min\chi^2_\Sigma$ on $H_0$ is
connected with two factors: (1) the similar dependence of the main
contribution $\chi^2_S(H_0)$ shown in the same panel; (2) the
large shift of the minimum point for $\chi^2_S$ in the
$\Omega_m,\Omega_\Lambda$ plane corresponding to $H_0$ growth. For
$H_0=68$ and $73.8$ km\,s${}^{-1}$Mpc${}^{-1}$ this minimum point
is far from the similar points of $\chi^2_H$ and $\chi^2_B$. Only
for $H_0$ close to 70 km\,s${}^{-1}$Mpc${}^{-1}$ all these three
minimum points are near each other (the top-right panel in
Fig.~\ref{F2}).
Only the value $H_0= 69.7$ km\,s${}^{-1}$Mpc${}^{-1}$ in
Table~\ref{TLCDM} is close to the optimal value in
Table~\ref{Optim}. We may conclude that the values of the Hubble
constant $H_0=68$ and $73.8$ km\,s${}^{-1}$Mpc${}^{-1}$ taken in
Refs.~\cite{FarooqMR13,FarooqR13,Farooqth}, unfortunately, lie to
the left and to the right of the optimal value $H_0\simeq70$
km\,s${}^{-1}$Mpc${}^{-1}$. We see the significant difference
between the large values $\min\chi^2_\Sigma=673.64$ or $707.84$
for the too small and too large values of $H_0$ in
Table~\ref{TLCDM} and the optimal value $\min\chi^2_\Sigma=585.35$
for $H_0=70.262$ in Table~\ref{Optim}.
In the middle row panels of Fig.~\ref{F2} with $\chi^2_\Sigma$ the
flatness line $\Omega_m+\Omega_\Lambda=1$ (or $\Omega_k=0$) is
shown as the black dashed straight line. This line shows that only
for $H_0$ close to the optimal value from Table~\ref{Optim} the
following recent observational limitations on the $\Lambda$CDM
model parameters (\ref{Omega1}) from surveys \cite{WMAP,Plank13}
\begin{equation}
\begin{array}{llll}
& \Omega_m= 0.279\pm 0.025, & & \Omega_m= 0.314\pm 0.02\\
\mbox{WMAP \cite{WMAP}: \ }&\Omega_\Lambda= 0.721\pm 0.025,\quad &
\mbox{ \ \ Planck \cite{Plank13}: \ }&\Omega_\Lambda= 0.686\pm 0.025,\\
& \Omega_k=-0.0027^{+0.0039}_{-0.0038}; & &
\Omega_k=-0.0005^{+0.0065}_{-0.0066}\end{array}
\label{Omk} \end{equation}
are satisfied at the $1\sigma$ or $2\sigma$ level.
For $H_0=67.3$ and $73.8$ km\,s${}^{-1}$Mpc${}^{-1}$ the optimal
values of the parameters $\Omega_m$, $\Omega_\Lambda$, $\Omega_k$ in
Table~\ref{TLCDM} are far from the restrictions (\ref{Omk}) on
$\Omega_k$ even at the $3\sigma$ level.
\begin{table}[ht]
\caption{ The $\Lambda$CDM model. For given $H_0$ (\ref{H0}), in
km\,s${}^{-1}$Mpc${}^{-1}$, the calculated minima of $\chi^2_S$, $\chi^2_H$,
$\chi^2_B$ and $\chi^2_\Sigma$, together with the values $\Omega_m$,
$\Omega_\Lambda$, $\Omega_k$ corresponding to $\min\chi^2_\Sigma$.}
\begin{center}
\begin{tabular}{||c||c|c|c||c|c|c|c||} \hline
$H_0$ & $\min\chi^2_S$ & $\min\chi^2_H$ & $\min\chi^2_B$
& $\min\chi^2_\Sigma$ &$\Omega_m$ & $\Omega_\Lambda$&$\Omega_k$\\ \hline
67.3& 599.37 & 18.492& 5.548& 673.64 & 0.285& 0.568& 0.147\\ \hline
69.7& 562.73 & 17.993& 3.517& 588.53 & 0.278& 0.734&$-0.012$\\ \hline
73.8& 639.90 & 19.466& 5.322& 707.84 & 0.269& 0.961&$-0.230$\\ \hline
\end{tabular}
\end{center}
\label{TLCDM}\end{table}
Graphs of the optimal values of $\Omega_m$, $\Omega_\Lambda$ and
$\Omega_k$ as functions of $H_0$ are presented in the second bottom
panel. We see that $\Omega_m$ depends weakly on
$H_0$, whereas $\Omega_\Lambda$ and $\Omega_k$ satisfy the conditions
(\ref{Omk}) only for $H_0\simeq70$ km\,s${}^{-1}$Mpc${}^{-1}$.
The dependence of $\min\limits_{H_0,\Omega_\Lambda}\chi^2_\Sigma$ on
$\Omega_m$ is rather sharp because of the corresponding dependence
of its fraction $\chi^2_B$. This behavior of $\chi^2_B$ is connected
with the contribution of the $A(z)$ measurements (\ref{Az}):
since $A(z)$ is proportional to $\sqrt{\Omega_m}$, $\chi^2_B$
is very sensitive to the value of $\Omega_m$. Note that the fractions
$\chi^2_S$ and $\chi^2_H$ (in $\min\chi^2_\Sigma$) depend weakly on
$\Omega_m$.
The dependencies of $\min\chi^2_\Sigma$ on $H_0$, $\Omega_m$ and also on
$\Omega_\Lambda$, $\Omega_k$ allow us to calculate estimates of
acceptable values of these model parameters; they are presented
below in Table~\ref{Estim}.
The coordinates $h=H_0/100$ and $\Omega_\Lambda$ of the minimum point
of $\chi^2_\Sigma$ depend on $\Omega_m$ in such a manner that
only for $\Omega_m\simeq0.27$ do the values of $\Omega_\Lambda$ and
$\Omega_k$ satisfy the conditions (\ref{Omk}). Note that the optimal
value of $h$ is close to 0.7 for all $\Omega_m$ in the range
$0<\Omega_m<1$.
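The profiling procedure behind these one-parameter dependences can be sketched as follows; this is a minimal sketch assuming a callable \texttt{chi2\_sum} that implements the sum (\ref{chisum}), with illustrative function and parameter names and an optimizer chosen only for illustration:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def profile_chi2(chi2_sum, H0_grid, x0=(0.28, 0.72)):
    """For each fixed H0, minimize chi2_sum(H0, Omega_m, Omega_L)
    over the remaining parameters (Omega_m, Omega_L)."""
    profile = []
    for H0 in H0_grid:
        res = minimize(lambda x: chi2_sum(H0, x[0], x[1]), x0,
                       method="Nelder-Mead")
        profile.append((H0, res.fun, *res.x))
        x0 = res.x  # warm start for the next grid point
    return np.array(profile)

# e.g. profile = profile_chi2(chi2_sum, np.linspace(66.0, 75.0, 91))
\end{verbatim}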
\subsection{GCG model}
Let us apply the model with generalized Chaplygin gas (GCG)
\cite{KamenMP01,Bento02,Makler03,LuGX10,LiangXZ11,CamposFP12,XuLu12}
to describe the same observational data for Type Ia supernovae,
$H(z)$ and BAO. We use here Eq.~(\ref{AGCG}) with the initial
condition ${\cal A}\big|_{\tau=1}=0$, so this model has 5 independent
free parameters: $H_0$, $\Omega_b$, $\Omega_k$,
$\alpha$ and $B_s$. However, we effectively used only 4 free parameters,
because the fraction $\Omega_b$ may include not only baryonic matter but
also a part of cold dark matter. Our calculations show that the
minimum over the remaining 4 parameters $\min\limits_{H_0,\Omega_k,
\alpha,B_s}\chi^2_\Sigma$ practically does not depend on
$\Omega_b$ in the range $0\le\Omega_b\le0.25$ (see Fig.~\ref{F3}).
So in the analysis presented in Fig.~\ref{F3} (except for the 3
bottom-right panels) we fixed the value
$$ \Omega_b=0.047,
$$
which is the simple average of the WMAP estimate $\Omega_b=0.0464$ \cite{WMAP}
and the Planck estimate $\Omega_b=0.0485$ \cite{Plank13}.
In the GCG model $\Omega_\Lambda=0$ and $\Omega_m=1-\Omega_k$ in
accordance with Eq.~(\ref{sumOm}) and the formal definition
(\ref{Omega1}). However we should use the effective value
$\Omega_m^{eff}$ in this model, in particular, in expression
(\ref{Az}). In
Refs.~\cite{LuGX10,LiangXZ11,CamposFP12,XuLu12,LuXuWL11} the
following effective value is used
\begin{equation}
\Omega_m^{eff}=\Omega_b+(1-\Omega_b-\Omega_k)(1-B_s)^{1/(1+\alpha)}.
\label{Ommeff1} \end{equation}
This value results from the correspondence between the $\Lambda$CDM
model, Eq.~(\ref{ALCDM}), and the GCG model, Eq.~(\ref{AGCG}),
in the early universe at $z\gg1$.
But in our investigation the majority of the observational data is
connected with redshifts $0<z<1$, so in Eq.~(\ref{Az}) we should
consider the present-time limit of this quantity,
$\Omega_m^{eff}\equiv\Omega_{0m}^{eff}=\lim\limits_{z\to0}\Omega_m^{eff}$.
If we compare the limits of the right-hand sides of Eqs.~(\ref{ALCDM})
and (\ref{AGCG}) at $z\to0$ or ${\cal A}\to0$, we obtain another
effective value
\begin{equation}
\Omega_m^{eff}=\Omega_b+(1-\Omega_b-\Omega_k)(1-B_s).
\label{Ommeff2} \end{equation}
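For reference, the two effective densities can be compared directly; the following minimal sketch (with illustrative variable names) also makes explicit that Eqs.~(\ref{Ommeff1}) and (\ref{Ommeff2}) coincide when $\alpha=0$:
\begin{verbatim}
def omega_m_eff_early(Ob, Ok, Bs, alpha):
    """Eq. (Ommeff1): matching the LambdaCDM behavior at z >> 1."""
    return Ob + (1.0 - Ob - Ok) * (1.0 - Bs) ** (1.0 / (1.0 + alpha))

def omega_m_eff_late(Ob, Ok, Bs):
    """Eq. (Ommeff2): matching the LambdaCDM limit at z -> 0."""
    return Ob + (1.0 - Ob - Ok) * (1.0 - Bs)

# For alpha = 0 both expressions give the same value:
assert abs(omega_m_eff_early(0.047, 0.0, 0.75, 0.0)
           - omega_m_eff_late(0.047, 0.0, 0.75)) < 1e-12
\end{verbatim}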
The values of $\chi^2_B$ calculated with expressions (\ref{Ommeff1}) and
(\ref{Ommeff2}) differ if $\alpha\ne0$. This difference
appears rather small if we compare the minima of the sum
(\ref{chisum}), $\min\chi^2_\Sigma=\min\limits_{\Omega_k,
\alpha,B_s}\chi^2_\Sigma$, as functions of $H_0$. In Fig.~\ref{F3}
this dependence with Eq.~(\ref{Ommeff2}) for $\Omega_m^{eff}$ is
the blue solid line, and for the case with Eq.~(\ref{Ommeff1}) it
is the violet dash-and-dot line. We see that the lines converge
closely in the vicinity of the minimum point $H_0\simeq70$
km\,s${}^{-1}$Mpc${}^{-1}$. The dependence
$\min\chi^2_\Sigma(H_0)$ in both cases (\ref{Ommeff1}) and
(\ref{Ommeff2}) has a sharp minimum and resembles the case of
the $\Lambda$CDM model in Fig.~\ref{F2}. The value
$\min\chi^2_\Sigma\simeq584.54$ of this minimum, its parameters in
Table~\ref{Optim}, graph of the contribution $\chi^2_S$ and
dependence on $H_0$ for parameters $\alpha,\Omega_k,B_s$ of the
minimum point in the bottom-left panel in Fig.~\ref{F3} are
presented for the case with Eq.~(\ref{Ommeff2}).
One should note that all the mentioned dependencies differ for
the case (\ref{Ommeff1}); in particular, the absolute minimum of
$\chi^2_\Sigma$ is 584.31. This difference is illustrated in the
central panels in Fig.~\ref{F3} with level lines of
$\chi^2_\Sigma(\alpha,B_s)$ for $H_0=73.8$ and 70.093
km\,s${}^{-1}$Mpc${}^{-1}$ (with the specified values of $\Omega_k$,
optimal for these $H_0$). These level lines are blue for the
expression (\ref{Ommeff2}) and thin violet for
Eq.~(\ref{Ommeff1}). The positions of the optimal points are close
only if $H_0$ is close to its optimal value in Table~\ref{Optim}.
We suppose that the estimation of $\chi^2_B$ with the expression
(\ref{Ommeff2}) is better suited to the considered range of $z$. So
in Table~\ref{Optim} and in the other panels of Fig.~\ref{F3} we use
only Eq.~(\ref{Ommeff2}). Notations in Fig.~\ref{F3} correspond
to those in Fig.~\ref{F2}.
\begin{figure}[bh]
\centerline{\includegraphics[scale=0.8,trim=5mm 0mm 5mm -1mm]{Cfig3}}
\caption{\small The GCG model. For $H_0$ (\ref{H0}) and the
optimal value $H_0=70.093$ km\,s${}^{-1}$Mpc${}^{-1}$ level lines
of $\chi^2_\Sigma$ and other $\chi^2$ are presented in
$\alpha,B_s$; $\alpha,H_0$; $\Omega_k,H_0$ and $\Omega_b,H_0$
planes in notations of Fig.~\ref{F2}. In the bottom-left panels we
analyze dependence of
$\min\chi^2_\Sigma$
and parameters of a minimum
point on $H_0$, $\Omega_k$, $\alpha$ and $\Omega_b$.
}
\label{F3}
\end{figure}
The similarly sharp dependence of $\min\chi^2_\Sigma$ on $H_0$ for the
$\Lambda$CDM and GCG models results in an unsuccessful description of
the data for $H_0=67.3$ and 73.8 km\,s${}^{-1}$Mpc${}^{-1}$, with
the corresponding optimal values $\Omega_k=0.247$ and $-0.295$.
Fig.~\ref{F3} illustrates the large distances between the minimum points
of $\chi^2_S$, $\chi^2_H$ and $\chi^2_B$ in these cases. These
distances are small for the optimal values from
Table~\ref{Optim}, $H_0=70.093$ km\,s${}^{-1}$Mpc${}^{-1}$ and
$\Omega_k=-0.019$. For these optimal values we present level lines
of $\chi^2_\Sigma$ in the $\alpha,B_s$; $\alpha,H_0$; $\Omega_k,H_0$
and $\Omega_b,H_0$ planes. In these panels the other model parameters
are fixed and specified.
When we test the dependence of the minimum $\min\chi^2_\Sigma$ on
$H_0$, $\Omega_k$, $\alpha$ and $\Omega_b$ in Fig.~\ref{F3}, we
minimize this value over all other parameters (except for the
above-mentioned $\Omega_b$). In particular,
$\min\chi^2_\Sigma(\Omega_k)=\min\limits_{H_0,
\alpha,B_s}\chi^2_\Sigma$; this function has a distinct minimum
near $\Omega_k\simeq0$ and resembles the dependence
$\min\chi^2_\Sigma(H_0)$. The optimal value of $H_0$, or
$h=H_0/100$, is practically constant and close to $h\simeq0.7$ when
we vary $\Omega_k$, $\alpha$ or $\Omega_b$.
As mentioned above, the dependence of $\min\chi^2_\Sigma$ on
$\Omega_b$ is very weak, so we fixed $\Omega_b=0.047$ in the
preceding analysis.
For the graph
$\min\chi^2_\Sigma(\alpha)=\min\limits_{H_0,\Omega_k,B_s}\chi^2_\Sigma$
the corresponding minimum is achieved at a negative value
$\alpha=-0.066$ (see Table~\ref{Optim}). In the GCG model
this parameter is connected with the square of adiabatic sound
speed \cite{Makler03,CamposFP12,XuLu12}
\begin{equation}
c_s^2 = \frac{\delta p}{\delta\rho}=-\alpha \frac{p}\rho.
\label{cs2} \end{equation}
If we accept the restriction $\alpha\ge0$ (equivalent to
$c_s^2\ge0$) in our investigation with the mentioned observational
data, we obtain the optimal value $\alpha=0$ and the GCG model
will be reduced to the $\Lambda$CDM model with
$\Omega_\Lambda=B=B_s(1-\Omega_b-\Omega_k)$. The dependence of
$\min\chi^2_\Sigma$ and of the other parameters on $\alpha$ in
Fig.~\ref{F3} shows that for $\alpha=0$ we have
$\min\chi^2_\Sigma\simeq585.35$ and the optimal values of $H_0$,
$\Omega_k$, $\Omega_\Lambda=B$ corresponding to the $\Lambda$CDM
model in Table~\ref{Optim}.
\begin{table}[hb]
\caption{Optimal values of model parameters ($\Omega_b=0.047$, for
the GCG model $\Omega_m=\Omega_m^{eff}$ (\ref{Ommeff2})).}
\begin{center}
\begin{tabular}{||l||c||c|c|l||} \hline
Model &$\min\chi^2_\Sigma$&$H_0$ &$\Omega_m$& other parameters \\ \hline
$\Lambda$CDM& 585.35& 70.262& 0.276&$\Omega_\Lambda=0.769,\;\; \Omega_k=-0.045$ \\ \hline
GCG & 584.54& 70.093& 0.277&$\Omega_k=-0.019,\;\;\alpha=-0.066,\;\;B_s=0.759$ \\ \hline
PCS, $d=1$ & 588.41& 69.52 & 0.286 &$\Omega_k=-0.040,\;\;\alpha=-0.256,\;\;B=2.067$\\ \hline
PCS, $d=2$ & 591.10& 69.49 & 0.288 &$\Omega_k=-0.017,\;\;\alpha=-0.372,\;\;B=1.599$\\ \hline
PCS, $d=3$ & 592.18& 69.34 & 0.288 &$\Omega_k=-0.027,\;\;\alpha=-0.431,\;\;B=1.461$\\ \hline
PCS, $d=6$ & 592.56& 69.29 & 0.289 &$\Omega_k=-0.029,\;\;\alpha=-0.493,\;\;B=1.302$\\ \hline
\end{tabular}
\end{center}
\label{Optim}\end{table}
\subsection{PCS model}
The multidimensional gravitational model of I. Pahwa, D.~Choudhury
and T.R.~Seshadri \cite{PahwaChS} has the set of model parameters
$H_0$, $\Omega_b$, $\Omega_m$, $\Omega_k$, $\alpha$, $B$ similar
to those of the GCG model, but it also has an additional integer-valued
parameter $d$ (the number of extra dimensions). Our analysis
demonstrated that the value $d=1$ is the most preferable for
describing the observational data for supernovae, BAO and $H(z)$.
So it is the case $d=1$ that we present in almost all panels of
Fig.~\ref{F4} (except for 2 panels with dependencies of
$\min\chi^2_\Sigma$ on $H_0$ and $\Omega_k$). Using the
similarity of the model parameters of the GCG and PCS models, we draw in
Fig.~\ref{F4} the same graphs and level lines for the PCS model as
in Fig.~\ref{F3}, in the corresponding panels. Colors of the corresponding
lines also coincide. Naturally, in Fig.~\ref{F4} we use the value
$B$ instead of $B_s$.
The minimum $\min\chi^2_\Sigma$ (over
all other parameters) increases when the baryon fraction $\Omega_b$ grows.
This dependence is more distinct than in the GCG case
(Fig.~\ref{F3}), but it is also rather weak for small $\Omega_b$.
So for the multidimensional PCS model we also fix $\Omega_b=0.047$
and effectively use only the 5 remaining parameters $H_0$, $\Omega_m$,
$\Omega_k$, $\alpha$, $B$. The value $\Omega_b=0.047$ is fixed in
all panels of Fig.~\ref{F4}, as in Fig.~\ref{F3} (except for the 3
bottom-right panels).
The dependence of
$\min\chi^2_\Sigma=\min\limits_{\Omega_m,\Omega_k,
\alpha,B}\chi^2_\Sigma$ on $H_0$ has a distinct minimum at
$H_0\simeq69.52$ for $d=1$ (the solid blue line here and in the panels
of this row). Similar behavior takes place for $d=2$ (the
violet dashed line) and for $d=6$ (the purple dots). The minimal
value $\min\chi^2_\Sigma\simeq588.41$ for $d=1$ is larger than for
the $\Lambda$CDM and GCG models, and for $d\ge2$ the minima are
still worse (see Table~\ref{Optim}).
\begin{figure}[ht]
\centerline{\includegraphics[scale=0.8,trim=5mm 0mm 5mm -1mm]{Cfig4}}
\caption{\small The PCS model with $d=1$. Notations and panels
correspond to Fig.~\ref{F3}, in particular, in the bottom-left
panels we analyze dependence of
$\min\chi^2_\Sigma$
and parameters of a minimum
point on $H_0$, $\Omega_k$, $\alpha$ and $\Omega_b$.
}
\label{F4}
\end{figure}
These poor results for the PCS model are connected with the description
of the recent $H(z)$ data at high $z$ ($z>2$ in
Table~\ref{AT1}). When we excluded the 3 data points
\cite{Busca12,Delubac14,Font-Ribera13} for $H(z)$ with $z\ge2.3$,
we obtained substantially different results, presented below in
Table~\ref{Optim31}.
In Fig.~\ref{F4} all level lines and graphs correspond to the
whole $H(z)$ data set with $N_H=34$ points. The only exception is
the dependence of $\min\chi^2_\Sigma$ on $H_0$ for $d=1$ with
$N_H=31$; this graph is shown as the red dash-and-dot line.
The minimum value for this line, $\min\chi^2_\Sigma\simeq582.68$, is
given in Table~\ref{Optim31}.
Level lines of functions $\chi^2$ are shown in Fig.~\ref{F4} in
the same panels as for the GCG model in Fig.~\ref{F3}, in
particular, for the values (\ref{H0})
$H_0=67.3$, $73.8$ and the optimal
value $69.52$ km\,s${}^{-1}$Mpc${}^{-1}$. If $H_0$ is too large,
the domain with an acceptable level of $\chi^2_\Sigma$ becomes very
narrow. One should note that for all level lines we change only
two parameters, all remaining model parameters are fixed (they are
from Table~\ref{Optim} or optimal for a given $H_0$).
In the 6 top-left panels with the $\alpha,B$ plane we draw thin
purple lines bounding the domain of regular solutions (below these
lines). The upper domain (larger $B$) consists of singular
solutions; they have singularities in the past with an infinite value
of the density $\rho$ at a nonzero value of the scale
factor $a$ \cite{GrSh13}. These solutions are nonphysical and
should be excluded. It is interesting that the optimal solutions
in Fig.~\ref{F4} and in Tables~\ref{Optim} and \ref{Optim31} are
near this border, but they are regular and describe the standard
Big Bang $\rho\to\infty\;\Leftrightarrow\;a\to0$ with dynamical
compactification of extra dimensions.
\section{Conclusion}
We considered how the $\Lambda$CDM, GCG and PCS models describe
the observational data for Type Ia supernovae, BAO and $H(z)$
\cite{SNTable}, Tables~\ref{AT2}, \ref{AT1}. These observations
distinctly restrict the acceptable values of the Hubble constant
$H_0$ and of the other parameters of the mentioned models. We used our
calculations of the dependence $\min\chi^2_\Sigma(p)$, where the
absolute minimum (over the other parameters) of the value
(\ref{chisum}) $\chi^2_\Sigma$ depends on a fixed parameter $p$. On
the basis of these calculations (presented partially in
Figs.~\ref{F2}, \ref{F3}, \ref{F4}) we obtained the following
$1\sigma$ estimates for the parameters of the $\Lambda$CDM, GCG and
PCS ($d=1$) models:
\begin{table}[h]
\caption{$1\sigma$ estimates of model parameters ($\Omega_b=0.047$
in the GCG and PCS models).}
\begin{tabular}{||l||c||c|c|l||} \hline
Model &$\min\chi^2_\Sigma$&$H_0$&$\Omega_k$&other parameters \\ \hline
$\Lambda$CDM$\!$& 585.35& $70.262\pm0.319$&$-0.04\pm0.032$& $\Omega_m$=$\,0.276_{-0.008}^{+0.009},
\rule[-0.5em]{0mm}{1.6em}\; \;\Omega_\Lambda$=$\,0.769\pm0.029$ \\
\hline
GCG & 584.54& $70.093\pm0.369$&$-0.019\pm0.045$ &$\alpha\!=
\!-0.066_{-0.074}^{+0.072},\;\; B_s$=\,$0.759_{-0.016}^{+0.015}$
\rule[-0.5em]{0mm}{1.6em} \\ \hline
PCS, $d=1$\rule{0mm}{1.2em}& 588.41& $69.523_{-0.350}^{+0.366}$ &
$-0.04\pm0.045$&$\Omega_m$=$\,0.286\pm0.010,\,\;\alpha=-0.256_{-0.03}^{+0.032}$\\
\hline
\end{tabular}
\label{Estim}\end{table}
Our estimates for the $\Lambda$CDM model are in agreement with the
WMAP observational restrictions (\ref{Omk}) on $\Omega_m$,
$\Omega_\Lambda$, $\Omega_k$ \cite{WMAP}, but they are in tension
with the Planck data \cite{Plank13}. This fact is connected with
the rather low value $H_0=67.3$
km\,s${}^{-1}$Mpc${}^{-1}$ (\ref{H0}) of the Planck survey
\cite{Plank13}.
For the GCG model $\min\chi^2_\Sigma$ is slightly better and our
limitations on $H_0$ and $\Omega_k$ in Table~\ref{Estim} are
rather close to the $\Lambda$CDM case. However, if we require
$\alpha\ge0$ in accordance with Eq.~(\ref{cs2}) and
Refs.~\cite{CamposFP12,XuLu12}, the GCG model with the optimal
value $\alpha=0$ will be reduced to the $\Lambda$CDM model with
its optimal parameters in Tables~\ref{Optim}, \ref{Estim} and the
same $\min\chi^2_\Sigma$.
Values $\chi^2_B$ and $\chi^2_\Sigma$ for the GCG model
essentially depend on the expression for $\Omega_m^{eff}$
(\ref{Ommeff1}) or (\ref{Ommeff2}). But the optimal parameters in
Table~\ref{Optim} for these expressions are rather close.
We mentioned above that the multidimensional PCS model is less
effective in describing the considered observational data, and
that the main problem of this model is connected with the recent $H(z)$
data at high $z$ ($z>2$). We excluded the 3 $H(z)$ data
points \cite{Busca12,Delubac14,Font-Ribera13} with $z=2.3$, 2.34,
2.36, and for the remaining $N_H=31$ points of $H(z)$ together with the same
SN and BAO data from \cite{SNTable} and Table~\ref{AT2} we
calculated $\min\chi^2_\Sigma$ and the optimal values of the model
parameters, presented here in Table~\ref{Optim31}.
\begin{table}[hb]
\caption{Optimal values of model parameters for $\Omega_b=0.047$
and $N_H=31$ $H(z)$ data points with $z<2$.}
\begin{center}
\begin{tabular}{||l||c||c|c|l||} \hline
Model &$\min\chi^2_\Sigma$&$H_0$ &$\Omega_m$& other parameters \\ \hline
$\Lambda$CDM& 583.71& 70.12 & 0.281&$\Omega_\Lambda=0.751,\;\; \Omega_k=-0.032$ \\ \hline
GCG & 583.70& 70.11 & 0.291&$\Omega_k=-0.046,\;\;\alpha=-0.028,\;\;B_s=0.756$ \\ \hline
PCS, $d=1$ & 582.68& 69.89 & 0.281 &$\Omega_k=-0.114,\;\;\alpha=-0.174,\;\;B=2.078$\\ \hline
PCS, $d=2$ & 582.93& 69.82 & 0.282 &$\Omega_k=-0.118,\;\;\alpha=-0.290,\;\;B=1.616$\\ \hline
PCS, $d=6$ & 583.23& 69.78 & 0.282 &$\Omega_k=-0.126,\;\;\alpha=-0.398,\;\;B=1.291$\\ \hline
\end{tabular}
\end{center}
\label{Optim31}\end{table}
We see that the PCS model \cite{PahwaChS} describes the reduced
set of data with $z<2$ better than the other models. The best fit is
achieved for $d=1$, with the optimal value of $H_0$ close to $70$
km\,s${}^{-1}$Mpc${}^{-1}$.
This example demonstrates that the predictions of any cosmological
model essentially depend on data selection. Moreover, there is the
important problem of model dependence (in addition to mutual
dependence) of the observational data, in particular, the data in
Tables~\ref{AT2}, \ref{AT1}.
Leaving the latter problem beyond the scope of this paper, we can conclude that
the considered observations of Type Ia supernovae \cite{SNTable},
BAO (Table~\ref{AT2}) and the Hubble parameter $H(z)$
(Table~\ref{AT1}) confirm the effectiveness of the $\Lambda$CDM model,
but they do not rule out other models. An important argument in favor
of the $\Lambda$CDM model is its small number $N_p$ of model
parameters (degrees of freedom). This number enters the
information criteria of model selection statistics; in particular,
the Akaike information criterion is \cite{ShiHL12} $AIC =
\min\chi^2_\Sigma + 2N_p$. This criterion supports the leading
position of the $\Lambda$CDM model.
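As a rough illustration (with our own parameter counting, which is not fixed uniquely above): taking $N_p=3$ for the $\Lambda$CDM model ($H_0$, $\Omega_m$, $\Omega_\Lambda$), $N_p=4$ for the GCG model ($H_0$, $\Omega_k$, $\alpha$, $B_s$ with $\Omega_b$ fixed) and $N_p=5$ for the PCS model with $d=1$, the values of $\min\chi^2_\Sigma$ from Table~\ref{Optim} would give approximately
\begin{equation*}
AIC_{\Lambda\rm CDM}\simeq585.35+6=591.35,\qquad
AIC_{\rm GCG}\simeq584.54+8=592.54,\qquad
AIC_{\rm PCS}\simeq588.41+10=598.41,
\end{equation*}
so the $\Lambda$CDM model has the lowest $AIC$ even though its $\min\chi^2_\Sigma$ is not the lowest.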
\section{Introduction}
The rapidly evolving e-commerce touches almost every aspect of our lives.
We now turn to Amazon for a daily necessity, LinkedIn for a job, and Uber for a ride.
Personalized recommender systems (RS) form the core of such applications in e-commerce.
Recently, several works have highlighted that RS may be subject to algorithmic bias along different dimensions, leading to negative impact on the underrepresented or disadvantaged groups \cite{Geyik2019,Zhu2018,fu2020fairness}.
For example, the ``Matthew Effect'' becomes increasingly evident in RS, where the head contents get more and more popular, while it is difficult for long-tail contents to achieve relatively fair exposure \cite{10.1145/3292500.3330707}.
Existing research on improving fairness in ranking or RS has focused on the static setting, which only assesses the immediate impact of the learning approach instead of the long-term consequences \cite{doi:10.1177/1064804619884160, ijcai2019-862}.
For example, suppose there are 4 items in the system, A, B, C and D, with A, B belonging to the popular group $G_0$ and C, D belonging to the long-tail group $G_1$.
When using demographic parity as the fairness constraint and recommending two items each time, without considering position bias, we will have the pairs AC, BC, AD and BD to recommend to consumers.
Suppose D is an item with great potential: as long as a consumer sees D, he/she is willing to click on it.
Then, as time goes by, D will get a much higher utility score than C while still belonging to $G_1$; therefore, the algorithm will be more likely to recommend D in order to maximize total utility while satisfying group fairness, which brings a new ``Matthew Effect'' within $G_1$.
The above example shows that imposing seemingly fair decisions through a static criterion can lead to unexpected results in the long run.
Moreover, fairness cannot be defined in a static or one-shot setting without considering the long-run impact, and long-term fairness cannot be achieved without understanding the underlying dynamics.
We define \textit{static fairness} as fairness that does not consider changes of an item's utility, attributes or group label in the system caused by user feedback/interactions throughout the recommendation process; \textit{dynamic fairness}, on the other hand, treats the utility and fairness constraints as dynamic factors and learns a strategy that accommodates such dynamics.
Furthermore, \textit{long-term fairness} views recommendation as a long-term process instead of a one-shot objective, and aims to maintain fairness in the long run by achieving dynamic fairness over time.
As a novel contribution of this work, we study the long-term fairness of item exposure in recommender systems, where items are separated into groups based on item popularity.
The challenge is that during the recommendation process, items will receive different extents of exposure depending on the recommendation strategy and user feedback, causing the underlying group labels of items to change.
To achieve the aforementioned long-term fairness in recommendation, we seek to answer the following three key questions:
\begin{itemize}
\item How to model long-term fairness of item exposure with changing group labels in recommendation scenarios?
\item How to update recommendation strategy according to real-time item exposure records and user interactions?
\item How to optimize it effectively over large-scale datasets?
\end{itemize}
In this work, we aim to address the above challenges simultaneously.
Specifically, we propose to model the sequential interactions between consumers and recommender systems as a Markov Decision Process (MDP), and then turn the MDP into a Constrained MDP (CMDP) by constraining the fairness of item exposure at each iteration dynamically.
We leverage the constrained policy optimization (CPO) with adapted neural network architecture to automatically learn the optimal policy under different fairness constraints.
To the best of our knowledge, this is the first attempt to model the dynamic nature of fairness with respect to changing group labels and demonstrate its effects in the long run.
We illustrate the long-term impact of fairness in recommendation by providing empirical results on several real-world datasets, which verify the superiority of our framework in terms of the trade-off between recommendation performance and fairness performance, not only from a short-term perspective but also from a long-term perspective.
\section{Related Work}
\subsection{Fairness in Ranking and Recommendation}
There have been growing concerns about fairness recently, especially in the context of intelligent decision-making systems such as recommender systems. Various types of bias have been found to exist in recommendations, such as gender and race~\cite{chen2018investigating,abdollahpouri2019unfairness}, item popularity~\cite{Zhu2018}, user feedback~\cite{fu2020fairness} and opinion polarity~\cite{Yao2017}. Different notions of fairness and corresponding algorithms have since been proposed to counteract such issues. There are mainly two types of fairness definitions in recommendation: \textit{individual} fairness and \textit{group} fairness. The former requires treating individuals similarly regardless of their protected attributes such as demographic information, while the latter requires treating different groups similarly. Our work focuses on popularity-based group fairness, yet it also addresses individual fairness through accommodation of dynamic group labels.
The relevant methods related to fairness in ranking and recommendation can be roughly divided into three subcategories: to optimize utility (often represented by relevance) subject to a bounded fairness constraint \cite{
singh2018fairness,Geyik2019,zehlike2017fa,celis2018ranking},
to optimize fairness subject to a lower bound on utility \cite{Zhu2018}, and to jointly optimize utility and fairness \cite{celis2019controlling}.
Based on the characteristics of the recommender system itself, there have also been a few works related to multi-sided fairness in multi-stakeholder systems~\cite{burke18a,mehrotra2018,Gao2019how}. These works have proposed effective algorithms for fairness-aware ranking and recommendation, yet they fall into the category of \textit{static fairness}, where the protected attribute or group labels are fixed throughout the entire ranking or recommendation process. Therefore, it is not obvious how such algorithms can be adapted to dynamic group labels that change the fairness constraints over time. The closest literature to our work on \textit{dynamic fairness} includes \citeauthor{Saito20}~\cite{Saito20} and \citeauthor{Morik2020}~\cite{Morik2020}, which incorporate user feedback in the learning process and can dynamically adjust to the changing utility under fairness constraints. However, these two works focused on the changing utility of items and did not
consider the scenario where group labels could be dynamic due to the nature of recommendation being an interactive process. To the best of our knowledge, we make the first attempt at dynamic group fairness with a focus on the changing group labels of items.
\subsection{RL for Recommendation} In order to capture the interactive nature of recommendation scenarios, reinforcement learning (RL) based solutions have become a hot topic recently.
A group of works \cite{li2010contextual,bouneffouf2012contextual,zeng2016online} models the problem as contextual multi-armed bandits, which can easily incorporate collaborative filtering methods \cite{cesa2013gang,zhao2013interactive}.
In the meantime, some literature \cite{shani2005mdp,mahmood2007learning,mahmood2009improving,zheng2018drn} found that it is natural to model the recommendation process as a Markov Decision Process.
In general, this direction can be further categorized as either \textit{policy-based} \cite{Dulac-ArnoldESC15, zhao2018deep, chen2019large, chen2019top} or \textit{value-based} \cite{zhao2018recommendations, zheng2018drn} methods.
While policy-based methods aim to learn a policy that generates an action (e.g., recommended items) based on a state, value-based approaches find the action with the best estimated quality.
Early attempts \cite{Dulac-ArnoldESC15} of policy-based approaches apply deterministic policies \cite{Silver2014DeterministicPG, Lillicrap2016DDPG} that directly construct actions and propose to use continuous item embeddings to model the large action space.
And \citeauthor{zhao2018deep} \cite{zhao2018deep} employ a deep Deterministic Policy Gradient framework (DDPG) \cite{Lillicrap2016DDPG} for page-wise recommendation.
As an alternative track, recent studies \cite{chen2019large, chen2019top} explore stochastic policies that model a distribution over actions.
\citeauthor{chen2019large} \cite{chen2019large} utilize a balanced hierarchical clustering tree to model the item distribution.
\citeauthor{chen2019top} \cite{chen2019top} use an off-policy correction framework to address the data bias issue in top-$K$ recommender systems.
On the other hand, value-based methods \cite{zhao2018recommendations, zheng2018drn, liu2020end} aim to model the quality (i.e., Q-value) of actions so that the best action corresponds to the one with the maximum Q-value.
\citeauthor{zhao2018recommendations} \cite{zhao2018recommendations} propose to use a Deep Q-Network (DQN) to estimate the Q-value of a state-action pair in RL and incorporate both positive and negative feedback to learn optimal recommendation strategies.
\citeauthor{zheng2018drn} \cite{zheng2018drn} use a dueling Q-network to model the Q-value.
\citeauthor{liu2020end} \cite{liu2020end} later find that the embedding component of RL cannot always be trained well and thus propose an end-to-end framework with an additional supervised learning signal.
\section{Preliminary}
\subsubsection{\textbf{Markov Decision Processes.}} In this paper, we study reinforcement learning in Markov Decision Processes (MDPs).
An MDP is a tuple $M = (\mathcal{S}, \mathcal{A}, \mathcal{P}, \mathcal{R}, \mu, \gamma)$, where $S$ is a set of $n$ states, $\mathcal{A}$ is a set of $m$ actions,
$\mathcal{P}: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow [0,1]$ denotes the transition probability function, $\mathcal{R}: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow \mathbb{R}$ is the reward function, $\mu: \mathcal{S} \rightarrow [0,1]$ is the starting state distribution, and $\gamma \in [0,1)$ is the discount factor.
A stationary policy $\pi: \mathcal{S} \rightarrow P(\mathcal{A})$ is a map from states to probability distributions over actions, with $\pi(a | s)$ denoting the probability of selecting action $a$ in state $s$.
We denote the set of all stationary policies by $\Pi$.
In reinforcement learning, we aim to learn a policy $\pi$, which maximizes the infinite horizon discounted total return $J(\pi)$,
\begin{equation*}
\small
J(\pi) \doteq \underset{\tau \sim \pi}{\mathrm{E}}\left[\sum_{t=0}^{\infty} \gamma^{t} \mathcal{R}\left(s_{t}, a_{t}, s_{t+1}\right)\right],
\end{equation*}
where $\tau$ denotes a trajectory, i.e., $\tau=(s_{0}, a_{0}, s_{1}, a_{1}, \dots)$, and $\tau \sim \pi$ is shorthand which indicates that the distribution over trajectories depends on $\pi$ :
$s_{0} \sim \mu, a_{t} \sim \pi\left(\cdot | s_{t}\right), s_{t+1} \sim P\left(\cdot | s_{t}, a_{t}\right)$.
Letting $\mathcal{R}(\tau)$ denote the discounted return of a trajectory, we express the on-policy value function as $V^{\pi}(s) \doteq$ $\mathrm{E}_{\tau \sim \pi}\left[\mathcal{R}(\tau) | s_{0}=s\right]$, the on-policy action-value function as $Q^{\pi}(s, a) \doteq \mathrm{E}_{\tau \sim \pi}\left[\mathcal{R}(\tau) | s_{0}=s, a_{0}=a\right]$ and the advantage function as $A^{\pi}(s, a) \doteq Q^{\pi}(s, a)-V^{\pi}(s)$.
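As a concrete illustration of these quantities, the discounted return of one sampled trajectory and a simple one-step advantage estimate can be computed as in the following sketch (function names are illustrative; in practice the advantage is estimated with learned value functions, as described later):
\begin{verbatim}
def discounted_return(rewards, gamma):
    """R(tau) = sum_t gamma^t * r_t for one sampled trajectory."""
    g, total = 1.0, 0.0
    for r in rewards:
        total += g * r
        g *= gamma
    return total

def td_advantage(r_t, v_s, v_s_next, gamma):
    """One-step estimate of A(s, a) = Q(s, a) - V(s) given value estimates."""
    return r_t + gamma * v_s_next - v_s
\end{verbatim}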
\subsubsection{\textbf{Constrained Markov Decision Processes.}}
A Constrained Markov Decision Process (CMDP) is an MDP augmented with constraints that restrict the set of allowable policies for that MDP.
In particular, we augment the MDP with a set $\mathcal{C}$ of auxiliary cost functions, $C_{1}, \ldots, C_{m}$ (with each function $C_{i}: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow \mathbb{R}$ mapping transition tuples to costs, like the usual reward), and limits $\mathbf{d}_{1}, \ldots, \mathbf{d}_{m}$.
Let $J_{C_{i}}(\pi)$ denote the expected discounted return of policy $\pi$ with respect to cost function $C_{i}$:
\begin{equation*}
J_{C_{i}}(\pi)=\underset{\tau \sim \pi}{\mathrm{E}}\left[\sum_{t=0}^{\infty} \gamma^{t} C_{i}\left(s_{t}, a_{t}, s_{t+1}\right)\right] .
\end{equation*}
The set of feasible stationary policies for a CMDP is then
$
\Pi_{C} \doteq\left\{\pi \in \Pi: \forall i, J_{C_{i}}(\pi) \leq \mathbf{d}_{i}\right\},
$
and the reinforcement learning problem in a CMDP is
$
\pi^{*}=\arg \max _{\pi \in \Pi_{C}} J(\pi).
$
Finally, we define on-policy value functions, action-value functions, and advantage functions for the auxiliary costs in analogy to $V^{\pi}, Q^{\pi},$ and $A^{\pi},$ with $C_{i}$ replacing $R:$ respectively, we denote these by $V_{C_{i}}^{\pi}, Q_{C_{i}}^{\pi},$ and $A_{C_{i}}^{\pi}$.
\subsubsection{\textbf{Constrained Policy Optimization.}}\label{sec:CPO}
Inspired by trust region methods \cite{DBLP:journals/corr/SchulmanLMJA15}, \citeauthor{DBLP:journals/corr/AchiamHTA17} proposed Constrained Policy Optimization (CPO) \cite{DBLP:journals/corr/AchiamHTA17} in 2017, which uses a trust region instead of penalties on policy divergence to enable larger step sizes.
CPO has policy updates of the following form
\begin{equation} \label{eq:cpo}
\begin{aligned}
\pi_{k+1}&=\arg \max _{\pi \in \Pi_{\theta}} \underset{\underset{a \sim \pi} {s \sim d^{\pi_{k}} }}{\mathrm{E}}\left[A^{\pi_{k}}(s, a)\right], \\
\text { s.t. } & J_{C_{i}}\left(\pi_{k}\right)+\frac{1}{1-\gamma} \underset{\underset{a \sim \pi} {s \sim d^{\pi_{k}} }}{\mathrm{E}}\left[A_{C_{i}}^{\pi_{k}}(s, a)\right] \leq \mathbf{d}_{i}, \forall i \\
& \bar{D}_{K L}\left(\pi \| \pi_{k}\right) \leq \delta
\end{aligned}
\end{equation}
where $\Pi_{\theta} \subseteq \Pi$ is a set of parameterized policies with parameters $\theta$ (e.g., neural networks with fixed architecture), $d^{\pi_{k}}$ is the state distribution under policy $\pi_{k}$, $\bar{D}_{K L}$ denotes the average KL-divergence, and $\delta > 0$ is the step size.
The set $\left\{\pi_{\theta} \in \Pi_{\theta}: D_{K L}\left(\pi|| \pi_{k}\right) \leq \delta\right\}$ is called the trust region.
For problems with only one linear constraint, there is an analytical solution, which is given by \citeauthor{DBLP:journals/corr/AchiamHTA17} in \cite{DBLP:journals/corr/AchiamHTA17}'s supplementary material (Theorem 2).
Denoting the gradient of the objective in Eq. \eqref{eq:cpo} as $g$, the gradient of constraint as $b$, the Hessian of the KL-divergence as $H$, and defining $c = J_C(\pi_k) - d$, the approximation to Eq. \eqref{eq:cpo} is
\begin{equation} \label{eq:cpo_approx}
\begin{aligned}
\theta_{k+1} &= \arg \max _{\theta}\ g^{\top}\left(\theta-\theta_{k}\right) \\
\text { s.t. } & c+b^{\top}\left(\theta-\theta_{k}\right) \leq 0 \\
& \frac{1}{2}\left(\theta-\theta_{k}\right)^{\top} H\left(\theta-\theta_{k}\right) \leq \delta
\end{aligned}
\end{equation}
For a thorough review of CMDPs and CPO, we suggest readers to refer to \cite{altman1999constrained, DBLP:journals/corr/AchiamHTA17} respectively.
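At small scale, the local approximation in Eq. \eqref{eq:cpo_approx} is a convex program and can be solved directly, as in the sketch below. This is an illustration only: CPO itself relies on the analytical dual solution and conjugate-gradient products with $H$ rather than forming $H$ explicitly, and the use of \texttt{cvxpy} as well as all names here are our own choices.
\begin{verbatim}
import numpy as np
import cvxpy as cp

def cpo_step(g, b, c, H, delta):
    """Solve: max g^T x  s.t.  c + b^T x <= 0,  0.5 x^T H x <= delta.
    H is assumed symmetric positive semidefinite (a KL/Fisher matrix)."""
    x = cp.Variable(g.shape[0])
    problem = cp.Problem(cp.Maximize(g @ x),
                         [c + b @ x <= 0,
                          0.5 * cp.quad_form(x, H) <= delta])
    problem.solve()
    return x.value  # theta_{k+1} = theta_k + x

# toy example:
# step = cpo_step(np.array([1.0, 0.0]), np.array([0.0, 1.0]), -0.1,
#                 np.eye(2), 0.01)
\end{verbatim}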
\section{Problem Formulation}
In this section, we first describe a CMDP $M_{rec}$ that models the recommendation process with general constraints; then we describe several fairness constraints suitable for recommendation scenarios; finally, we combine the two parts and introduce the objective.
\subsection{\textbf{CMDP for Recommendation}}
At each timestamp ($t_1$, $t_2$, $t_3$, $t_4$, $t_5$, $\dots$), when a user sends a request to the recommendation system, the recommendation agent $G$ takes the feature representation of the current user and the item candidates $\mathcal{I}$ as input, and generates a list of items $L\in\mathcal{I}^K$ to recommend, where $K \geq 1$.
The user $u$ who receives the list of recommended items $L$ gives his/her feedback $B$ through his/her clicks on this set of items.
Thus, the state $s$ can be represented by user features (e.g., the user's recent click history), the action $a$ is represented by item features, the reward $r$ is the immediate reward for taking action $a$ in the current state (e.g., whether the user clicks on the recommended items), and the cost $c$ is the immediate cost (e.g., whether the recommended item(s) come from the sensitive group).
\begin{itemize}
\item \textbf{State $\mathcal{S}$:} A state $s_t$ is the representation of user's most recent positive interaction history $H_t$ with the recommender, as well as his/her demographic information (if exists).
\item \textbf{Action $\mathcal{A}$:} An action $a_t\ = \{ a_t^{1},\ \dots,\ a_t^{K}\} \in \mathcal{I}^K$ is a recommendation list with $K$ items to a user $u$ at time $t$ with current state $s_t$.
\item \textbf{Reward $\mathcal{R}$:} Given the recommendation based on the action $a_t$ and the user state $s_t$, the user will provide his/her feedback, i.e., click, skip, or purchase, etc. The recommender receives immediate reward $R(s_t, a_t)$ according to the user's feedback.
\item \textbf{Cost $\mathcal{C}$:} Given the recommendation based on the action $a_t$, the environment provides a cost value based on the problem-specific cost function, i.e., the number of recommended items that come from the sensitive group, and sends the immediate cost $C(s_t, a_t)$ to the recommender.
\item \textbf{Discount rates $\gamma_r$ and $\gamma_c$:} $\gamma_r \in [0,1]$ is a factor measuring the present value of long-term rewards, while $\gamma_c \in [0,1]$ is another factor measuring the present value of long-term costs.
\end{itemize}
\subsection{Fairness Constraints}
To be consistent with the CMDP formulation for recommendation defined above and to handle the dynamic change of the underlying group labels, we define analogs of several frequently proposed fairness constraints.
\subsubsection{Demographic Parity Constraints}
Following~\cite{singh2018fairness}, we can use exposure to define fairness between different groups of items.
Demographic parity requires that the average exposure of the items from each group is equal.
In our setting, we enforce this constraint at each iteration $t$.
Denoting the total exposure of group $G_j$ at iteration $t$ as
\begin{equation} \label{eq:expo}
\text {Exposure}_t \left(G_{j} \right) = \underset{a^l_t \in a_t}{\sum} \vmathbb{1} (a^l_t \in G_{j}),\ l=1,...,K\\
\end{equation}
\noindent
Then we can express demographic parity constraint as follows,
\begin{equation}
\frac{\text {Exposure}_t\left(G_{0}\right)}{|G_{0}|}=\frac{\text {Exposure}_t\left(G_{1} \right)}{|G_{1}|},
\end{equation}
where the groups $G_0$ and $G_1$ are designed to reflect the popularity difference in the recommendation scenario.
\subsubsection{Exact-$K$ Fairness Constraints}
Inspired by \cite{zehlike2017fa}, we define an Exact-$K$ fairness in ranking, which requires that the proportion of protected candidates in every recommendation list of length $K$ remain statistically below or indistinguishable from a given maximum $\alpha$; this kind of fairness constraint is more suitable and feasible in practice for recommender systems.
The concrete form of this constraint is shown below:
\begin{equation} \label{eq:alpha_expo}
\frac{\text {Exposure}_t\left(G_{0}\right)}{\text {Exposure}_t\left(G_{1} \right)} \leq \alpha
\end{equation}
\noindent
Note that when $\alpha=\frac{|G_0|}{|G_1|}$ and the equation holds strictly, the above expression of Exact-$K$ fairness would be exactly the same as demographic parity.
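To make these constraints concrete, the per-step exposure counts in Eq. \eqref{eq:expo} and the Exact-$K$ check in Eq. \eqref{eq:alpha_expo} can be computed as in the following sketch (group membership is represented by a set of item ids; the names are illustrative):
\begin{verbatim}
def group_exposure(rec_list, group_items):
    """Exposure_t(G_j): number of recommended items belonging to G_j."""
    return sum(1 for item in rec_list if item in group_items)

def satisfies_exact_k(rec_list, g0_items, g1_items, alpha):
    """Check Exposure_t(G_0) <= alpha * Exposure_t(G_1) for one list."""
    e0 = group_exposure(rec_list, g0_items)
    e1 = group_exposure(rec_list, g1_items)
    return e0 <= alpha * e1

# e.g. satisfies_exact_k([3, 17, 42], {3}, {17, 42}, alpha=1.0)  # True
\end{verbatim}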
\subsection{Fairness Constrained Policy Optimization for Recommendation (FCPO)}
Our goal is to learn the optimal policy for the platform, which maximizes the cumulative reward under a certain fairness constraint, as mentioned in the previous section.
Specifically, in this work, the reward function and the cost function are defined as
\begin{equation} \label{eq:reward}
\centering
\small
\begin{aligned}
R(s_{t},a_{t}, s_{t+1}) &= \sum_{l=1}^{K} \vmathbb{1} (a_{t}^l \text{ gets positive feedback})\\
\end{aligned}
\end{equation}
\begin{equation} \label{eq:cost}
\small
\centering
\begin{aligned}
C(s_{t},a_{t}, s_{t+1}) &= \sum_{l=1}^{K} \vmathbb{1}(a_{t}^l \text{ is in the sensitive group})
\end{aligned}
\end{equation}
where $a_{t} = \{ a_{t}^1,\ \dots,\ a_{t}^K \}$ represents a recommendation list including $K$ item ids which are selected by the current policy at time point $t$.
It is easy to see that the cost function has the same form as Eq. \eqref{eq:expo}: it counts the number of recommended items from a specific group exposed to a user at time $t$.
Considering the sensitive group to be $G_0$, and noting that $\text{Exposure}_t\left(G_{0}\right)+\text{Exposure}_t\left(G_{1}\right)=K$ because each of the $K$ recommended items belongs to exactly one group, we get
\begin{equation*}
\small
\begin{aligned}
\frac{\text {Exposure}_t\left(G_{0}\right)}{\text {Exposure}_t\left(G_{1} \right)} \leq &\alpha \\
\text {Exposure}_t\left(G_{0}\right) \leq &\alpha \text {Exposure}_t\left(G_{1} \right)\\
(1+\alpha)\text {Exposure}_t\left(G_{0}\right) \leq &\alpha \text{Exposure}_t\left(G_{0}\right) \\&+ \alpha \text {Exposure}_t\left(G_{1} \right)\\
(1+\alpha)\text {Exposure}_t\left(G_{0}\right) \leq &\alpha K\\
C(s_{t},a_{t}, s_{t+1}) \leq &\frac{\alpha}{1+\alpha} K = \alpha^\prime K
\end{aligned}
\end{equation*}
If $C \leq \alpha^{\prime} K$ is satisfied at each iteration, the cumulative discounted cost satisfies
\begin{equation} \label{eq:fc}
J_{C}(\pi)=\underset{\tau \sim \pi}{\mathrm{E}}\left[\sum_{t=0}^{T} \gamma^{t}_c\ C\left(s_{t}, a_{t}, s_{t+1}\right)\right] \leq \sum_{t=0}^T \gamma_{c}^t\ \alpha^{\prime} K
\end{equation}
where $T$ is the number of recommendations.
Eq. \eqref{eq:fc} is the group fairness constraint for our optimization problem, and we denote the limit of the unfairness $\sum_{t=0}^T \gamma_{c}^t\ \alpha^{\prime} K$ as $\mathbf{d}$.
Having defined a specific CMDP for recommendation, with the reward function in Eq. \eqref{eq:reward}, the cost function in Eq. \eqref{eq:cost} and the constraint limit $\mathbf{d}$, we can substitute them into Eq. \eqref{eq:cpo} to build our fairness-constrained policy optimization.
We introduce the framework for solving it in the next section.
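A minimal sketch of the reward in Eq. \eqref{eq:reward}, the cost in Eq. \eqref{eq:cost} and the resulting limit $\mathbf{d}$ is given below (the user feedback is assumed to be given as a set of clicked item ids; function names are illustrative):
\begin{verbatim}
def step_reward(rec_list, clicked_items):
    """R(s_t, a_t, s_{t+1}): number of recommended items with positive feedback."""
    return sum(1 for item in rec_list if item in clicked_items)

def step_cost(rec_list, sensitive_items):
    """C(s_t, a_t, s_{t+1}): number of recommended items from the sensitive group."""
    return sum(1 for item in rec_list if item in sensitive_items)

def cost_limit(alpha, K, gamma_c, T):
    """d = sum_{t=0}^{T} gamma_c^t * alpha' * K, with alpha' = alpha / (1 + alpha)."""
    alpha_prime = alpha / (1.0 + alpha)
    return sum(gamma_c ** t * alpha_prime * K for t in range(T + 1))
\end{verbatim}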
\section{Proposed Framework}
Our solution to the aforementioned problem follows an Actor-Critic learning scheme, but with an extra critic network designed for the fairness constraint.
In this section, we illustrate how to construct and learn each of these components.
\begin{figure}[h]
\centering
\mbox{
\hspace{-10pt}
\centering
\includegraphics[scale=0.4]{fig/model.pdf}
}
\caption{The illustration of the proposed method.}
\label{fig:model}
\vspace{-10pt}
\end{figure}
\subsection{The Actor}
The actor component $\pi_\theta$, parameterized by $\theta$, serves as a stochastic policy that samples an action $a_t\in\mathcal{I}^K$ given the current state $s_t\in\mathbb{R}^m$ of a user:
\begin{equation}
a_t \sim \pi_\theta(\cdot|s_t)
\end{equation}
As depicted in Fig. \ref{fig:state}, $s_t$ is acquired by extracting and concatenating the user embedding $\mathbf{v}_u\in\mathbb{R}^d$ and user's history embedding $\mathbf{h}_u$:
\begin{equation}\label{eq:state_rep}
s_t = [\mathbf{v}_u; \mathbf{h}_u],~\mathbf{h}_u=\mathrm{GRU}(H_t)
\end{equation}
where $H_t = \{H_t^1, H_t^2, \dots, H_t^N\}$ denotes the most recent $N$ items from $u$'s interaction history, and the history embedding $\mathbf{h}_u$ is acquired by encoding $N$ item embeddings via GRU \cite{Cho2014gru}.
Note that the user's recent history is organized as a queue, and it is updated only if the recommended item receives a positive feedback from the user,
\begin{equation}
\begin{aligned}
H_{t+1} &= \{H_t^2, \dots, H_t^N, a_t^l\}\\
s_{t+1} &= [\mathbf{v}_u;\mathrm{GRU}(H_{t+1})]
\end{aligned}
\end{equation}
This ensures that the state $s_t$ always represents the users' most recent interest.
We assume that the probability of actions conditioned on states follows a continuous high-dimensional Gaussian distribution with mean $\mu$ and diagonal covariance matrix $\Sigma$.
For better representation ability, we approximate the distribution via a neural network $\pi_\theta: \mathbb{R}^m \rightarrow \mathbb{R}^{K \times d}, \mathbb{R}^{K\times d \times d}$, which maps the encoded state $s_t$ to $\mu$ and $\Sigma$.
Specifically, we implement this mapping as a multi-layer perceptron $f_\theta$ with tanh as the non-linear activation function.
With the help of $f_\theta$, we can sample a vector from $\mathcal{N}(\mu,\Sigma)$ and then reshape it into a proposal matrix $W\in\mathbb{R}^{K\times d}$, whose $k$-th row, denoted by $W_k\in\mathbb{R}^d$, represents a proposed ``ideal'' item embedding.
Then the probability matrix $P\in\mathbb{R}^{K\times |\mathcal{I}|}$, whose $k$-th row contains the probabilities of selecting each candidate item at rank $k$, is given by:
\begin{equation}\label{eq:weights}
P_k = \mathrm{softmax}(W_k \mathcal{V}^\top),~ k=1,\ldots,K,
\end{equation}
where $\mathcal{V}\in\mathbb{R}^{|\mathcal{I}|\times d}$ is the embedding matrix of all candidate items.
This is equivalent to using dot product to determine similarity between $W_k$ and any item.
As the result of taking the action at step $t$, the Actor recommends the $k$-th item as follows:
\begin{equation} \label{eq:action}
a_t^k = \argmax_{i\in[|\mathcal{I}|]} P_{k,i},~ k=1,\ldots,K,
\end{equation}
where $P_{k,i}$ denotes the probability of taking the $i$-th item at rank $k$.
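For concreteness, a condensed PyTorch sketch of this Actor is given below. It is a simplification of the description above: it uses a single GRU layer and a state-independent learned log-standard-deviation instead of a state-dependent covariance, and all tensor shapes and names are illustrative.
\begin{verbatim}
import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, d=100, K=1, hidden=128):
        super().__init__()
        self.K, self.d = K, d
        self.gru = nn.GRU(input_size=d, hidden_size=d, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(2 * d, hidden), nn.Tanh(),
                                 nn.Linear(hidden, hidden), nn.Tanh())
        self.mean_head = nn.Linear(hidden, K * d)
        # simplification: diagonal covariance via a learned, state-independent log-std
        self.log_std = nn.Parameter(torch.zeros(K * d))

    def forward(self, user_emb, history_emb, item_matrix):
        # user_emb: (B, d); history_emb: (B, N, d); item_matrix: (|I|, d)
        _, h = self.gru(history_emb)                     # h: (1, B, d)
        state = torch.cat([user_emb, h.squeeze(0)], -1)  # s_t = [v_u; h_u]
        mu = self.mean_head(self.mlp(state))
        dist = torch.distributions.Normal(mu, self.log_std.exp())
        W = dist.rsample().view(-1, self.K, self.d)      # proposal matrix W
        P = torch.softmax(W @ item_matrix.t(), dim=-1)   # P_k = softmax(W_k V^T)
        return P.argmax(dim=-1), dist, P                 # recommended item ids
\end{verbatim}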
\begin{figure}[]
\centering
\mbox{
\hspace{-10pt}
\centering
\includegraphics[scale=0.34]{fig/actor.pdf}
}
\caption{The architecture of Actor.
$\theta$ consists of parameters of both the Actor network in $f_\theta$ and the state representation model in Eq. \eqref{eq:state_rep}.}
\label{fig:state}
\vspace{-10pt}
\end{figure}
\subsection{The Critics}
\subsubsection{Critic for Value Function}
Given the policy represented by the actor network discussed in the previous section, a Critic network $V_\omega(s_t)$ is constructed to approximate the true state value function $V^\pi(s_t)$ and is used to optimize the actor.
The Critic network is updated according to temporal-difference learning that minimizes the mean squared error:
\begin{equation} \label{eq:value_update}
\mathcal{L}(\omega) = \sum_t \left(y_t - V_\omega(s_t)\right)^2
\end{equation}
where $y_t = r_t + \gamma_r V_{\omega}(s_{t+1})$.
\subsubsection{Critic for Cost Function}
In addition to the value critic for recommendation accuracy, we introduce a separate Critic network $V_\phi(s)$ for the cost, used for constrained policy optimization as explained in Section \ref{sec:CPO}; it is updated analogously to Eq. \eqref{eq:value_update},
\begin{equation} \label{eq:cost_update}
\mathcal{L}(\phi) = \sum_t \left(y_t - V_\phi(s_t)\right)^2
\end{equation}
where $y_t = c_t + \gamma_c V_{\phi}(s_{t+1})$.
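Both critics can be trained with the same temporal-difference routine, sketched below (names are illustrative; the bootstrap target is detached, i.e., a semi-gradient update is used):
\begin{verbatim}
import torch

def td_loss(critic, states, signal, next_states, gamma):
    """Squared TD error for a critic V(s); `signal` holds r_t for the value
    critic or c_t for the cost critic, cf. the two update equations above."""
    with torch.no_grad():
        y = signal + gamma * critic(next_states).squeeze(-1)
    v = critic(states).squeeze(-1)
    return ((y - v) ** 2).sum()
\end{verbatim}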
\subsection{Training Procedure}
An illustration of the proposed FCPO is shown in Fig. \ref{fig:model}, containing one actor and two critics.
We also present the detailed training procedure of our model in Algorithm \ref{alg:FCPO}.
In each recommendation session there are two phases: the trajectory generation phase (lines 3-10) and the model updating phase (lines 11-19). Each trajectory contains several transitions between the consumer and the recommendation agent.
\subsection{Testing Procedure}
After the training procedure, FCPO has fine-tuned hyper-parameters and well-trained model parameters.
Then we conduct the evaluation of our model on several public real-world datasets.
Since our ultimate goal is to achieve long-term group fairness of item exposure with dynamically changing group labels, we propose both a short-term offline evaluation and a long-term online evaluation.
\subsubsection{Short-term evaluation} This follows Algorithm \ref{alg:FCPO}, while the difference from training is that it only contains the trajectory generation phase, without any updates to the model parameters.
Once we have obtained the recommendation results $a_t$ in all trajectories, we use the log data to calculate the recommendation performance and compute the fairness performance based on the exposure records with fixed group labels.
\subsubsection{Long-term evaluation}\label{sec:long_term_eval} This process also follows Algorithm \ref{alg:FCPO}, except that instead of random initialization we load the well-trained model parameters in advance.
The model parameters are updated throughout the testing process so as to model an online learning procedure in practice; meanwhile, the item group labels change dynamically based on the current impression results.
By controlling the length of the recommendation list and the number of recommendation sessions, we can study the long-term performance of the proposed model.
\begin{algorithm}[h]
\textbf{Input:} log data, cost limit value $\mathbf{d}$ and line search rate $\beta$ \\
\textbf{Output:} parameters $\theta$, $\omega$ and $\phi$ of actor network, value function, cost function \\
Randomly initialize $\theta$, $\omega$ and $\phi$. \\
Initialize replay buffer $D$;
\For{$session\ =\ 1\ ...\ M$}{
Initialize user state $s_0$ from log data\;
\For{$t\ =\ 1\ ...\ T$}{
Observe current state $s_t$ based on Eq. \eqref{eq:state_rep}\\
Select an action $a_t\ = \{ a_t^{1},\ \dots,\ a_t^{K}\} \in \mathcal{I}^K$ based on Eq. \eqref{eq:weights} and Eq. \eqref{eq:action}\\
Calculate reward $r_t$ and cost $c_t$ according to environment feedback based on Eqs. \eqref{eq:reward} and \eqref{eq:cost}\\
Update the state: if $r_t^{l}$ is positive ($l=1...K$), append $a_t^l$ to the history to obtain $s_{t+1}$; otherwise $s_{t+1} = s_t$.\\
Store transition $(s_t,a_t,r_t,c_t,s_{t+1})$ in $D$ in the corresponding trajectory distinguished by $s_0$.
}
Sample minibatch of $\mathcal{N}$ trajectories $\mathcal{T}$ from $D$;\\
Calculate advantage value $A$, advantage cost value $A_c$, discounted cumulative reward, and discounted cumulative cost for each trajectory;\\
Obtain gradient direction $d_\theta$ by solving Eq. \eqref{eq:cpo_approx};\\
\Repeat{$f_{\theta'}(s)$ in trust region \& advantage improves
\& non-positive cost
}{
$\theta' \leftarrow \theta + d_\theta$\\
$d_\theta \leftarrow \beta d_\theta$
}
(Policy update) $\theta = \theta'$\;
(Value update) Optimize $\bm{\omega}$
based on Eq.\eqref{eq:value_update}\;
(Cost update) Optimize $\bm{\phi}$
based on Eq.\eqref{eq:cost_update}\;
}
\caption{Parameters Training for FCPO}
\label{alg:FCPO}
\end{algorithm}
\section{Experiments}
\subsection{Dataset Description}
We use the user transaction data from \textit{MovieLens} \cite{Harper:2015:MDH:2866565.2827872} in our experiments to verify the recommendation performance of \textbf{FCPO}\footnote{Codes will be released once published.}.
We choose the MovieLens100K and MovieLens1M\footnote{https://grouplens.org/datasets/movielens/} datasets, which include one hundred thousand and one million user transactions, respectively (user id, item id, rating, timestamp, etc.).
For each dataset, we sort the transactions of each consumer according to the timestamp, then split the records into training and test sets chronologically by 4:1, and the last item of each user in the training set is used for validation.
Some basic statistics of the experimental datasets are shown in Table \ref{tab:dataset}.
We split items into two groups based on item popularity, specifically the number of exposures of each item.
Groups $G_0$ and $G_1$ reflect item popularity, with the top 20\% of items in terms of number of impressions belonging to the popular group $G_0$ and the remaining 80\% to the long-tail group $G_1$.
Moreover, for the RL-based recommenders, the initial state of each user during training is his/her first five clicked items in the training set, and the initial state during testing is his/her last five clicked items in the training set.
Each time the RL agent recommends one item to a user, though the length of the recommendation list is easy to adjust.
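The popularity split described above can be implemented as in the following sketch (a simple illustration; the exposure counts are assumed to be precomputed from the training data):
\begin{verbatim}
def split_by_popularity(exposure_counts, top_ratio=0.2):
    """Return (G0, G1): the top `top_ratio` of items by exposure and the rest."""
    items = sorted(exposure_counts, key=exposure_counts.get, reverse=True)
    n_top = max(1, int(round(top_ratio * len(items))))
    return set(items[:n_top]), set(items[n_top:])

# e.g. g0, g1 = split_by_popularity({"i1": 120, "i2": 45, "i3": 3, "i4": 1, "i5": 0})
\end{verbatim}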
\begin{table}
\caption{Basic statistics of the experimental datasets.}
\label{tab:dataset}
\centering
\setlength{\tabcolsep}{5pt}
\begin{adjustbox}{max width=\linewidth}
\begin{tabular}
{lcccccccc} \toprule
Dataset & \#users & \#items & \#act./user & \#act./item & \#act. & density \\\midrule
Movielens100K & 943 & 1682 & 106 & 59.45 & 100,000 & 6.305\%\\
Movielens1M & 6040 & 3706 & 166 & 270 &1,000,209 & 4.468\%\\\bottomrule
\end{tabular}
\end{adjustbox}
\end{table}
\subsection{Experimental Setup}\label{sec:experimental_setup}
We compare our model with the following baselines, including both traditional and RL based methods.
\begin{itemize}
\item {\bf MF}: Collaborative Filtering based on matrix factorization is a representative method for rating prediction.
Basically, the user and item rating vectors are considered as the representation vector for each user and item.
\item {\bf BPR-MF}: Bayesian Personalized Ranking \cite{bpr} is one of the most widely used ranking-based methods for the top-N recommendation.
It treats recommendation as a pairwise problem over two classes, bought and not bought: instead of scoring a single item in isolation, BPR optimizes the difference between the predictions for a pair of items.
\item {\bf NCF}: Neural Collaborative Filtering \cite{he2017neural} is one of the state-of-the-art recommender algorithms, which is based on deep neural networks.
In the evaluation part, we choose Neural Matrix Factorization to conduct the experiments, fusing both Generalized Matrix Factorization (GMF) and Multiple Layer Perceptron (MLP) under the NCF framework.
\item {\bf LIRD}: It is the shorthand for LIst-wise Recommendation based on Deep reinforcement learning \cite{DBLP:journals/corr/abs-1801-00209}. The original paper simply utilizes the concatenation of item embeddings to represent the user state; for a fair comparison, we replace the state representation with the same structure as in FCPO, as shown in Fig. \ref{fig:state}.
\end{itemize}
In this work, we also include a classical fairness baseline in our experiment to compare the fairness performance with our model.
\begin{itemize}
\item {\bf FOE}: Fairness of Exposure in Ranking \cite{singh2018fairness} is a computational framework allowing group fairness constraints on ranking in terms of exposure allocation.
The authors expressed the problem of finding the utility-maximizing probabilistic ranking under fairness constraint as a linear program, which can be solved with standard algorithms.
\end{itemize}
The original FOE targets fairness in ranking tasks, especially search; therefore, we made a few modifications to the original FOE so that it can serve a recommendation task.
Since FOE can be seen as a reranking framework, it needs a utility measure to obtain the expected utility of a document $d$ for a query $q$.
Similarly, we use recommendation models such as MF, BPR and NCF as base rankers to obtain the probability of user $i$ clicking item $j$.
Thus, in the experiment, we have \textbf{MF-FOE}, \textbf{BPR-FOE} and \textbf{NCF-FOE} as our fairness baselines.
The reason there is no \textbf{LIRD-FOE} is that \textbf{LIRD} is a sequential model:
we cannot simply rerank the recommendation lists, as the future states and actions are causally influenced by the current actions.
Meanwhile, FOE for personalized recommendation needs to solve a linear program of size $|\mathcal{I}| \times |\mathcal{I}|$ for each consumer, which brings a huge computational cost.
To make the problem feasible, we rerank the top-200 items from the base ranker (e.g., MF) through FOE and select the resulting top-$k$ ($k<200$) items as the final recommendation results.
We implement MF, BPR-MF, NCF, MF-FOE, BPR-FOE and NCF-FOE using \textit{Pytorch} \cite{paszke2019pytorch} with Adam optimizer.
For all the methods, we consider latent dimensions $d$ from \{16, 32, 64, 128, 256\}, learning rate $lr$ from \{1e-1, 5e-2, 1e-2, \dots, 5e-4, 1e-4\}, and the L2 penalty is chosen from \{0.01, 0.1, 1\}.
We tune the hyper-parameters using the validation set and terminate training when the performance on the validation set does not change too much within 20 epochs.
Based on the ranking results of all items, we adopt several common Top-K metrics --- $Recall$, $F1\ Score$, and $NDCG$ --- to evaluate each recommender model's performance.
We also consider two fairness measures --- Gini Index and Popularity Rate, with respect to item exposures.
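Our reading of these two fairness measures can be sketched as follows (the formulas are standard, but the function names are illustrative): the Gini Index is computed over the item exposure distribution, and the Popularity Rate is the fraction of recommended items that belong to the popular group $G_0$.
\begin{verbatim}
import numpy as np

def gini_index(exposures):
    """Gini index of the item exposure distribution (0 = perfectly even)."""
    x = np.sort(np.asarray(exposures, dtype=float))
    n = x.size
    if x.sum() == 0:
        return 0.0
    ranks = np.arange(1, n + 1)
    return 2.0 * np.sum(ranks * x) / (n * x.sum()) - (n + 1.0) / n

def popularity_rate(rec_items, popular_items):
    """Fraction of recommended items that belong to the popular group G_0."""
    rec_items = list(rec_items)
    return sum(1 for i in rec_items if i in popular_items) / max(len(rec_items), 1)
\end{verbatim}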
We implement \textbf{FCPO} with $Pytorch$ as well.
We use PMF \cite{mnih2008probabilistic} to pretrain 100-dimensional user and item embeddings and fix them throughout the whole experiment.
After that, as described in the previous section, we pass the user embedding and the user's latest five positively-interacted items $H_t$ through two GRU layers to get the state representation $s_t$, and then pass $s_t$ through two hidden layers with Tanh activation in the policy network.
The two Critics share the same architecture (two hidden layers with Tanh activation) and the same L-BFGS optimizer \cite{andrew2007scalable}, while maintaining separate parameters.
Finally, we fine-tune FCPO's hyperparameters on our validation set.
We also set different levels of the fairness constraint by adjusting the value of $\alpha^{\prime}$ in Eq.
\eqref{eq:fc}.
Thus, in our experiments $\mathbf{FCPO-1}$ denotes the variant free of the fairness constraint, $\mathbf{FCPO-2}$ the variant with a mild constraint, and $\mathbf{FCPO-3}$ the variant with a strict constraint.
\begin{table*}[]
\caption{Summary of the performance on two datasets.
We evaluate ranking performance ($Recall$, $F_1$ and $NDCG$, in percentage (\%) values) and fairness performance ($Gini$ $Index$ and $Popularity$ $Rate$), where $K$ is the length of the recommendation list.
When FCPO is the best, its improvements over the best baseline are significant at $p < 0.01$.}
\centering
\begin{adjustbox}{max width=\linewidth}
\setlength{\tabcolsep}{7pt}
\begin{tabular}
{m{1.33cm} ccc ccc ccc ccc ccc} \toprule
\multirow{2}{*}{Methods}
& \multicolumn{3}{c}{Recall (\%)}
& \multicolumn{3}{c}{F1 (\%)}
& \multicolumn{3}{c}{NDCG (\%)}
& \multicolumn{3}{c}{Gini Index(\%)}
& \multicolumn{3}{c}{Popularity Rate (\%)}\\\cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10} \cmidrule(lr){11-13} \cmidrule(lr){14-16}
& 5 & 10 & 20 & 5 & 10 & 20 & 5 & 10 & 20 & 5 & 10 & 20 & 5 & 10 & 20 \\\midrule
\multicolumn{16}{c}{MovieLens-100K} \\\midrule
MF & 1.847 & 3.785 & 7.443 & 2.457 & 3.780 & 5.074 & 3.591 & 4.240 & 5.684 & 98.99 & 98.37 & 97.03 & 99.98 & 99.96 & 99.92\\
BPR-MF & 1.304 & 3.539 & 8.093 & 1.824 & 3.592 & 5.409 & 3.025 & 3.946 & 5.787 & 98.74 & 98.17 & 97.01 & 99.87 & 99.87 & 99.78\\
NCF & \underline{1.995} & 3.831 & 6.983 & \underline{2.846} & \underline{4.267} & \underline{5.383} & \underline{5.319} & \underline{5.660} & \underline{6.510} & 99.70 & 99.39 & 98.80 & 100.0 & 100.0 & 100.0\\
LIRD & 1.769 & \underline{5.467} & \underline{8.999} & 2.199 & 4.259 & 4.934 & 3.025 & 3.946 & 5.787 & 99.70 & 99.41 & 98.81 & 100.0 & 100.0 & 100.0\\\midrule
MF-FOE & 1.164 & 2.247 & 4.179 & 1.739 & 2.730 & 3.794 & 3.520 & 3.796 & 4.367 & \underline{86.29} & \underline{84.05} & \underline{82.98} & 92.90 & 91.89 & 90.98 \\
BPR-FOE & 0.974 & 2.053 & 4.404 & 1.496 & 2.568 & 3.933 & 3.127 & 3.514 & 4.332 & 86.50 & 84.38 & 83.78 & \underline{92.17} & \underline{91.36} & \underline{90.70}\\
NCF-FOE & 1.193 & 1.987 & 4.251 & 1.759 & 2.398 & 3.698 & 4.033 & 3.897 & 4.633 & 96.92 & 94.53 & 90.44 & 100.0 & 100.0 & 100.0\\\midrule
FCPO-1 & \textbf{4.740} & \textbf{8.607} & \textbf{14.48} & \textbf{4.547} & \textbf{5.499} & \textbf{5.855} & \textbf{6.031} & \textbf{7.329} & \textbf{9.323} & 98.73 & 98.07 & 96.75 & 92.60 & 90.42 & 85.85\\
FCPO-2 & 3.085 & 5.811 & 10.41 & 3.270 & 4.164 & 4.953 & 4.296 & 5.203 & 7.104 & 97.95 & 96.88 & 94.78 & 70.07 & 68.28 & 65.55\\
FCPO-3 & 0.920 & 1.668 & 3.329 & 1.272 & 1.807 & 2.535 & 2.255 & 2.369 & 2.871 & \textbf{75.23} & \textbf{74.06} & \textbf{73.23} & \textbf{36.52} & \textbf{36.66} & \textbf{36.94}\\\midrule
\multicolumn{16}{c}{MovieLens-1M} \\\midrule
MF & 1.152 & 2.352 & 4.650 & 1.701 & 2.814 & 4.103 & 3.240 & 3.686 & 4.574 & 99.44 & 99.18 & 98.74 & 99.92 & 99.90 & 99.86\\
BPR-MF & 1.240 & 2.627 & 5.143 & 1.773 & 2.943 & 4.197 & 3.078 & 3.593 & 4.632 & 98.93 & 98.44 & 97.61 & 99.40 & 99.23 & 98.96\\
NCF & 1.178 & 2.313 & 4.589 & 1.832 & 2.976 & \underline{4.382} & \underline{4.114} & \underline{4.380} & \underline{5.080} & 99.85 & 99.71 & 99.42 & 100.0 & 100.0 & 100.0\\
LIRD & \underline{1.961} & \underline{3.656} & \underline{5.643} & \underline{2.673} & \underline{3.758} & 4.065 & 3.078 & 3.593 & 4.632 & 99.87 & 99.73 & 99.46 & 100.0 & 100.0 & 95.00\\\midrule
MF-FOE & 0.768 & 1.534 & 3.220 & 1.246 & 2.107 & 3.345 & 3.321 & 3.487 & 4.021 & 92.50 & 91.06 & 91.32 & 98.89 & 98.78 & 98.68 \\
BPR-FOE & 0.860 & 1.637 & 3.387 & 1.374 & 2.233 & 3.501 & 3.389 & 3.594 & 4.158 & \underline{90.48} & \underline{88.92} & \underline{89.01} & \underline{96.56} & \underline{96.12} & \underline{95.78}\\
NCF-FOE & 0.748 & 1.403 & 2.954 & 1.230 & 1.980 & 3.175 & 3.567 & 3.589 & 4.011 & 97.73 & 96.57 & 95.04 & 100.0 & 100.0 & 100.0\\\midrule
FCPO-1 & \textbf{2.033} & \textbf{4.498} & \textbf{8.027} & \textbf{2.668} & \textbf{4.261} & \textbf{5.201} & \textbf{4.398} & \textbf{5.274} & \textbf{6.432} & 99.81 & 99.67 & 99.34 & 99.28 & 96.93 & 91.70\\
FCPO-2 & 1.520 & 3.218 & 6.417 & 2.015 & 3.057 & 4.145 & 3.483 & 3.920 & 5.133 & 99.47 & 99.10 & 97.41 & 72.66 & 68.27 & 71.35\\
FCPO-3 & 0.998 & 1.925 & 3.716 & 1.449 & 2.185 & 2.948 & 2.795 & 2.987 & 3.515 & \textbf{88.97} & \textbf{88.34} & \textbf{87.70} & \textbf{63.43} & \textbf{62.73} & \textbf{61.45}\\\bottomrule
\end{tabular}\label{tab:result}
\end{adjustbox}
\end{table*}
\begin{figure*}[t]
\mbox{
\hspace{-25pt}
\centering
\subfigure[NDCG --- Negative Gini Index@20]{\label{fig:ml100k_ndcg_gini}
\includegraphics[width=0.29\textwidth]{fig/ml100k_ndcg-gini.png}}
\hspace{-20pt}
\subfigure[NDCG --- Long-tail Rate@20 ]{\label{fig:ml100k_ndcg_pop}
\includegraphics[width=0.29\textwidth]{fig/ml100k_ndcg-pr.png}}
\hspace{-20pt}
\subfigure[NDCG --- Negative Gini Index@20]{\label{fig:ml1m_ndcg_gini}
\includegraphics[width=0.29\textwidth]{fig/ml1m_ndcg-gini.png}}
\hspace{-20pt}
\subfigure[NDCG --- Long-tail Rate@20]{\label{fig:ml1m_ndcg_pop}
\includegraphics[width=0.29\textwidth]{fig/ml1m_ndcg-pr.png}}
}
\caption{NDCG@20 against Negative Gini Index@20 and NDCG@20 against Long-tail Rate@20 in two datasets. $x$-axis is the negative gini index in \ref{fig:ml100k_ndcg_gini} and \ref{fig:ml1m_ndcg_gini}, and is the long-tail rate in \ref{fig:ml100k_ndcg_pop} and \ref{fig:ml1m_ndcg_pop}; $y$-axis represents the value of NDCG.}
\label{fig:ndcg_fairness}
\end{figure*}
\subsection{Experimental Results}
The major experimental results are shown in Table \ref{tab:result}; in addition, we plot $NDCG$ against $Gini\ Index$ and against $Popularity\ Rate$ in Fig. \ref{fig:ndcg_fairness} for a recommendation list of length $K=20$. We analyze and discuss the results from the following perspectives.
\subsubsection*{\bf i) Short-term Recommendation Performance:}
For recommendation performance, we compare FCPO-1 with MF, BPR, NCF and LIRD based on $Recall@k$, $F1@k$ and $NDCG@k$.
The results of the recommendation performance are shown in Table \ref{tab:result}.
The largest value on each dataset and for each evaluation measure is significant at the 0.01 level.
Among all the baseline models, NCF is the strongest on Movielens100K: when averaging across recommendation lengths, NCF gets an 11.45\% improvement over MF, 18.01\% over BPR, and 6.17\% over LIRD; LIRD is the strongest on Movielens1M: when averaging across recommendation lengths, LIRD gets a 17.69\% improvement over MF, 14.50\% over BPR, and 9.68\% over NCF.
Our FCPO approach achieves the best top-K recommendation performance against all baselines on both datasets.
On the one hand, when averaging across recommendation lengths on Movielens100K, FCPO gets a 33.09\% improvement over NCF; on the other hand, when averaging across recommendation lengths on Movielens1M, FCPO gets an 18.65\% improvement over LIRD.
These observations imply that the proposed method does have the ability to capture dynamic user-item interactions, thereby modeling user preferences better and producing better recommendation results.
Another interesting observation is that FCPO outperforms LIRD even though they use the same state representation and a similar training procedure; this may be attributed to the trust-region-based optimization, which stabilizes the learning process.
\subsubsection*{\bf ii) Short-term Fairness Performance:}
For fairness performance, we compare FCPO-3 with MF-FOE, BPR-FOE and NCF-FOE based on $Gini\ Index@k$ and $Popularity\ Rate@k$, which are also shown in Table \ref{tab:result}.
We can easily see that there exists a trade-off between the recommendation performance and the fairness performance, which is understandable, as most of the long-tail items have relatively fewer user interactions.
In order to better illustrate the trade-off between FCPO and FOE, we fix the length of the recommendation list at 20 and plot NDCG against Negative Gini Index and Long-tail Rate in Fig. \ref{fig:ndcg_fairness} for both datasets, where the long-tail rate equals one minus the popularity rate.
The blue line represents FCPO under three different levels of fairness constraints.
We choose the negative Gini index and the long-tail rate instead of the original measures because larger values are better for both, which makes the comparison easier.
In most cases, for the same Gini Index, our method achieves much better NDCG; meanwhile, under the same NDCG scores, our method achieves better fairness.
In other words, our method FCPO can achieve much better trade-off than FOE in both individual fairness (measured by Gini Index) and group fairness (measured by Long-tail Rate).
We can see that even without a fairness constraint, FCPO-1 is much better than the traditional baselines, as well as the baselines with FOE, on group fairness.
\subsubsection*{\bf iii) Efficiency Performance:}
We compare FOE-based methods with FCPO in terms of the single-core CPU running time for generating a recommendation list of size $K=100$ for all users.
The running time of the base ranker in FOE-based methods is roughly the same as that of the actor in FCPO, but the additional reranking step of FOE may take a substantial amount of time.
On the ML100K dataset, the recommendation time is 90min, 6h30min, and 60h30min for reranking from the top $200$, $400$, and $800$ items respectively, while FCPO only takes around 3h, selecting items from the entire item set (1682 items).
On the ML1M dataset, FOE-based methods take 10h30min, 43h30min, and 400h for reranking from the top $200$, $400$, and $800$ items respectively, considerably longer than the corresponding times on ML100K, while FCPO takes around 11h.
\subsection{Long-term Fairness in Recommendation}
We compare FCPO with a static short-term fairness solution (i.e. MF-FOE) over 300 steps of recommendation.
For MF-FOE, we run $M=3$ sessions of $K=100$ recommendations so that it can capture the dynamics of the item group labels, while FCPO simply runs for 300 consecutive steps without changing the test procedure.
In other words, MF-FOE keeps the same item group labels within each session of $K$ recommendations and has to retrain its parameters after the labels are updated at the end of each session.
As mentioned in Section \ref{sec:experimental_setup}, FOE-based methods become significantly time-consuming when dealing with a large candidate item set.
Thus, instead of imposing fairness control over the whole item set, we first select the top $2K$ items as candidates, then apply FOE to rerank them and generate the final $K$ recommendations.
As shown in the last two rows of Figure \ref{fig:long_term}, at convergence, MF-FOE performs much worse than FCPO on both Gini Index and popularity rate.
For these two fairness metrics, the 0-th entry before the first recommended item corresponds to the statistics of the original dataset, so the item exposure of MF-FOE exhibits a higher popularity rate and only a minor improvement in Gini Index compared with the data.
Within each session of MF-FOE, the fairness metrics quickly converge and are further improved only when the item exposure information is updated.
On the contrary, since FCPO adjusts its policy according to the fairness feedback, it can successfully and continuously push the fairness metrics to much lower values within the $M$ tested sessions.
Though we remain skeptical about whether the fairness performance gap between MF-FOE and FCPO will eventually vanish, we do observe that MF tends to favor popular items over unpopular ones.
As a result, setting a very small $T$ (e.g. $T<20$) to speed up the recommendation could result in a candidate set filled with popular items, making FOE futile.
Nevertheless, FCPO consistently outperforms MF-FOE on the overall performance, especially the accuracy metric (Figures \ref{fig:ml100k_long_term_ndcg} and \ref{fig:ml1m_long_term_ndcg}), which indicates that MF-FOE sacrifices recommendation performance more than FCPO in order to control fairness.
\begin{figure}[t]
\mbox{
\hspace{-10pt}
\centering
\subfigure[]{\label{fig:ml100k_long_term_ndcg}
\includegraphics[width=0.25\textwidth]{fig/ml100k_long_term_ndcg.pdf}}
\hspace{-10pt}
\subfigure[]{\label{fig:ml1m_long_term_ndcg}
\includegraphics[width=0.25\textwidth]{fig/ml1m_long_term_ndcg.pdf}}
}
\mbox{
\hspace{-10pt}
\centering
\subfigure[]{
\label{fig:ml100k_long_term_gini}
\includegraphics[width=0.25\textwidth]{fig/ml100k_long_term_gini.pdf}}
\hspace{-10pt}
\subfigure[]{\label{fig:ml1m_long_term_gini}
\includegraphics[width=0.25\textwidth]{fig/ml1m_long_term_gini.pdf}}
}
\mbox{
\hspace{-10pt}
\centering
\subfigure[]{\label{fig:ml100k_long_term_pr}
\includegraphics[width=0.25\textwidth]{fig/ml100k_long_term_pr.pdf}}
\hspace{-10pt}
\subfigure[]{\label{fig:ml1m_long_term_pr}
\includegraphics[width=0.25\textwidth]{fig/ml1m_long_term_pr.pdf}}
}
\caption{Long-term performance on ML100K (first column) and ML1M (second column).
X-axis is recommendation step, Y-axis is the evaluated metric on accumulated item exposure from beginning till current step.}
\label{fig:long_term}
\end{figure}
\section{Conclusion and Future Work}
In this work, we propose the novel problem of modeling long-term fairness in recommendation with respect to group labels that change with item popularity, and we address it by enforcing dynamic fairness at each iteration.
Specifically, we formulate the recommendation process as a Constrained Markov Decision Process and apply constrained policy optimization to learn a fairness-aware recommendation policy.
Experiments verify that our framework can achieve a better trade-off between recommendation performance and fairness performance from both a short-term and a long-term perspective.
For future work, we will consider how to solve the long-term fairness problem with individual fairness constraints and generalize the framework to more types of fairness constraints.
\balance
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
Personalized recommender system (RS) is a core function of many online services such as e-commerce, advertising, and online job markets.
Recently, several works have highlighted that RS may be subject to algorithmic bias along different dimensions, leading to a negative impact on the underrepresented or disadvantaged groups \cite{Geyik2019,Zhu2018,fu2020fairness,singh2018fairness,ge2020understanding}.
For example, the ``Matthew Effect'' becomes increasingly evident in RS, where some items get more and more popular, while the long-tail items are difficult to achieve relatively fair exposure \cite{10.1145/3292500.3330707}.
Existing research on improving fairness in recommendation systems or ranking has mostly focused on static settings, which only assess the immediate impact of fairness learning instead of the long-term consequences \cite{doi:10.1177/1064804619884160, ijcai2019-862}.
For instance, suppose there are four items in the system, A, B, C, and D, with A, B belonging to the popular group $G_0$ and C, D belonging to the long-tail group $G_1$.
When using demographic parity as the fairness constraint and recommending two items each time, without considering position bias, we will recommend AC, BC, AD, or BD to consumers.
Suppose D has a higher chance of being clicked;
then after several rounds, D will obtain a higher utility score than the other items, but since D is still in $G_1$, the algorithm will tend to recommend D more in order to maximize the total utility while satisfying group fairness.
This will bring a new ``Matthew Effect'' on $G_1$ in the long term.
The above example shows that imposing seemingly fair decisions through static criteria can lead to unexpected unfairness in the long run.
In essence, fairness cannot be defined in static or one-shot setting without considering the long-term impact, and long-term fairness cannot be achieved without understanding the underlying dynamics.
We define \textit{static fairness} as the one that does not consider the changes in the recommendation environment, such as the changes in item utility, attributes, or group labels due to the user feedback/interactions throughout the recommendation process.
Usually, static fairness provides a one-time fairness solution based on fairness-constrained optimization.
\textit{Dynamic fairness}, on the other hand, considers the dynamic factors in the environment and learns a strategy that accommodates such dynamics.
Furthermore, \textit{long-term fairness} views the recommendation as a long term process instead of a one-shot objective and aims to maintain fairness in the long run by achieving dynamic fairness over time.
Technically, we study the long-term fairness of item exposure in recommender systems, while items are separated into groups based on item popularity.
The challenge is that during the recommendation process, items will receive different extents of exposure based on the recommendation strategy and user feedback, causing the underlying group labels to change over time.
To achieve the aforementioned long-term fairness in recommendation, we pursue to answer the following three key questions:
\begin{itemize}
\item How to model long-term fairness of item exposure with changing group labels in recommendation scenarios?
\item How to update the recommendation strategy according to real-time item exposure records and user interactions?
\item How to optimize the strategy effectively over large-scale datasets?
\end{itemize}
In this work, we aim to address the above challenges simultaneously.
Specifically, we propose to model the sequential interactions between consumers and the recommender system as a Markov Decision Process (MDP), and then turn it into a Constrained Markov Decision Process (CMDP) by dynamically constraining the fairness of item exposure at each iteration.
We leverage the Constrained Policy Optimization (CPO) with adapted neural network architecture to automatically learn the optimal policy under different fairness constraints.
We illustrate the long-term impact of fairness in recommendation systems by providing empirical results on several real-world datasets, which verify the superiority of our framework on recommendation performance, short-term fairness, and long-term fairness.
To the best of our knowledge, this is the first attempt to model the dynamic nature of fairness with respect to changing group labels, and to show its effectiveness in the long term.
\section{Related Work}
\subsection{Fairness in Ranking and Recommendation}
There have been growing concerns on fairness recently, especially in the context of intelligent decision-making systems, such as recommender systems. Various types of bias have been found to exist in recommendations such as gender and race~\cite{chen2018investigating,abdollahpouri2019unfairness}, item popularity~\cite{Zhu2018}, user feedback~\cite{fu2020fairness} and opinion polarity~\cite{Yao2017}. Different notions of fairness and algorithms have since been proposed to counteract such issues. There are mainly two types of fairness definitions in recommendations: \textit{individual} fairness and \textit{group} fairness. The former requires treating individuals similarly regardless of their protected attributes, such as demographic information, while the latter requires treating different groups similarly. Our work focuses on the popularity group fairness, yet also addresses individual fairness through accommodation to dynamic group labels.
The relevant methods related to fairness in ranking and recommendation can be roughly divided into three subcategories: optimizing utility (often represented by relevance) subject to a bounded fairness constraint \cite{
singh2018fairness,Geyik2019,zehlike2017fa,celis2018ranking},
optimizing fairness with a lower bound utility \cite{Zhu2018}, and jointly optimizing utility and fairness \cite{celis2019controlling}.
Based on the characteristics of the recommender system itself, there also have been a few works related to multi-sided fairness in multi-stakeholder systems ~\cite{burke18a,mehrotra2018,Gao2019how}. These works have proposed effective algorithms for fairness-aware ranking and recommendation, yet they fall in the category of \textit{static fairness} where the protected attribute or group labels were fixed throughout the entire ranking or recommendation process. Therefore, it is not obvious how such algorithms can be adapted to dynamic group labels that change the fairness constraints over time. The closest literature to our work on \textit{dynamic fairness} includes \citeauthor{Saito20} ~\cite{Saito20} and \citeauthor{Morik2020}~\cite{Morik2020}, which incorporated user feedback in the learning process, and could dynamically adjust to the changing utility with fairness constraints. However, they focused on the changing utility of items and did not consider the scenario where group labels could be dynamic due to the nature of recommendations being an interactive process. To the best of our knowledge, we make the first attempt on dynamic group fairness, focusing on the changing group labels of items.
\subsection{RL for Recommendation} In order to capture the interactive nature of recommendation scenarios, reinforcement learning (RL) based solutions have become an important topic recently.
A group of work \cite{li2010contextual,bouneffouf2012contextual,zeng2016online} model the problem as contextual multi-armed bandits, which can easily incorporate collaborative filtering methods \cite{cesa2013gang,zhao2013interactive}.
In the meantime, some literature \cite{shani2005mdp,mahmood2007learning,mahmood2009improving,zheng2018drn,xian2020cafe,xian2019reinforcement} found that it is natural to model the recommendation process as a Markov Decision Process (MDP).
In general, this direction can be further categorized as either \textit{policy-based} \cite{Dulac-ArnoldESC15, zhao2018deep, chen2019large, chen2019top} or \textit{value-based} \cite{zhao2018recommendations, zheng2018drn,pei2019value} methods.
Typically, policy-based methods aim to learn a policy that generates an action (e.g. recommended items) based on a state.
Such policy is optimized through policy gradient and can be either deterministic \cite{Dulac-ArnoldESC15, Silver2014DeterministicPG, Lillicrap2016DDPG, zhao2018deep} or stochastic \cite{chen2019large, chen2019top}.
On the other hand, value-based methods aim to model the quality (i.e. Q-value) of actions so that the best action corresponds to the one with the best value.
There also exist several works considering using RL to solve fairness problems in machine learning \cite{Wen2019FairnessWD, jabbari2017fairness}.
\citeauthor{jabbari2017fairness} \cite{jabbari2017fairness} considered optimizing the meritocratic fairness defined in \cite{joseph2016fairness} based on long-term rewards.
Their work is designed for a specific fairness constraint and is not suitable for our problem setting.
\citeauthor{Wen2019FairnessWD} \cite{Wen2019FairnessWD} studied a reinforcement learning problem under group fairness constraint, where the state consists of both the feature and the sensitive attributes.
They developed model-free and model-based methods to learn a decision rule to achieve both demographic parity and near-optimal fairness.
Different from our work that focuses on item-side fairness, they focused on the user-side fairness.
\section{Preliminary}
\subsubsection{\textbf{Markov Decision Processes.}} In this paper, we study reinforcement learning in Markov Decision Processes (MDPs).
An MDP is a tuple $M = (\mathcal{S}, \mathcal{A}, \mathcal{P}, \mathcal{R}, \mu, \gamma)$, where $\mathcal{S}$ is a set of $n$ states, $\mathcal{A}$ is a set of $m$ actions,
$\mathcal{P}: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow [0,1]$ denotes the transition probability function, $\mathcal{R}: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow \mathbb{R}$ is the reward function, $\mu: \mathcal{S} \rightarrow [0,1]$ is the starting state distribution, and $\gamma \in [0,1)$ is the discount factor.
A stationary policy $\pi: \mathcal{S} \rightarrow P(\mathcal{A})$ is a map from states to probability distributions over actions, with $\pi(a | s)$ denoting the probability of selecting action $a$ in state $s$.
We denote the set of all stationary policies by $\Pi$.
In reinforcement learning, we aim to learn a policy $\pi$, which maximizes the infinite horizon discounted total return $J(\pi)$,
\begin{equation}
\label{eq:discounted_return}
\small
J(\pi) \doteq \underset{\tau \sim \pi}{\mathrm{E}}\left[\sum_{t=0}^{\infty} \gamma^{t} R\left(s_{t}, a_{t}, s_{t+1}\right)\right],
\end{equation}
where $\tau$ denotes a trajectory, i.e., $\tau=(s_{0}, a_{0}, s_{1}, a_{1}, \dots)$, and $\tau \sim \pi$ is a shorthand indicating that the distribution over trajectories depends on $\pi$ :
$s_{0} \sim \mu, a_{t} \sim \pi\left(\cdot | s_{t}\right), s_{t+1} \sim P\left(\cdot | s_{t}, a_{t}\right)$.
Let $R(\tau)$ denote the discounted return of a trajectory, we express the on-policy value function as $V^{\pi}(s) \doteq$ $\mathrm{E}_{\tau \sim \pi}\left[R(\tau) | s_{0}=s\right]$, the on-policy action-value function as $Q^{\pi}(s, a) \doteq \mathrm{E}_{\tau \sim \pi}\left[R(\tau) | s_{0}=s, a_{0}=a\right]$, and the advantage function as $A^{\pi}(s, a) \doteq Q^{\pi}(s, a)-V^{\pi}(s)$.
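To make these quantities concrete, the short sketch below (our own illustration with hypothetical function names, not part of the FCPO implementation) estimates the discounted return of one sampled trajectory and the one-step temporal-difference estimate of the advantage $A^{\pi}(s_t, a_t)$ from a learned value function.
\begin{verbatim}
import numpy as np

def discounted_return(rewards, gamma):
    # Monte Carlo estimate of R(tau) = sum_t gamma^t * r_t for one trajectory.
    return float(np.sum(gamma ** np.arange(len(rewards)) * np.asarray(rewards)))

def td_advantages(rewards, values, gamma):
    # One-step TD estimate of A(s_t, a_t) = r_t + gamma * V(s_{t+1}) - V(s_t);
    # `values` holds V(s_0), ..., V(s_T) for a trajectory of length T.
    rewards, values = np.asarray(rewards, float), np.asarray(values, float)
    return rewards + gamma * values[1:] - values[:-1]
\end{verbatim}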
\subsubsection{\textbf{Constrained Markov Decision Processes.}}
A Constrained Markov Decision Process (CMDP) is an MDP augmented with constraints that restrict the set of allowable policies for that MDP.
In particular, the MDP can be constrained with a set of auxiliary cost functions $C_{1}, \ldots, C_{m}$ and the corresponding limits $\mathbf{d}_{1}, \ldots, \mathbf{d}_{m}$, which means that the discounted total cost over the cost function $C_{i}$ should be bounded by $\mathbf{d}_i$. Each function $C_{i}: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow \mathbb{R}$ maps transition tuples to costs, like the reward in traditional MDP.
Let $J_{C_{i}}(\pi)$ denote the discounted total cost of policy $\pi$ with respect to the cost function $C_{i}$:
\begin{equation}
\small
J_{C_{i}}(\pi)=\underset{\tau \sim \pi}{\mathrm{E}}\left[\sum_{t=0}^{\infty} \gamma^{t} C_{i}\left(s_{t}, a_{t}, s_{t+1}\right)\right] .
\end{equation}
The set of feasible stationary policies for a CMDP is then
$
\Pi_{C} \doteq\left\{\pi \in \Pi: \forall i, J_{C_{i}}(\pi) \leq \mathbf{d}_{i}\right\},
$
and the reinforcement learning problem in a CMDP is
$
\pi^{*}=\arg \max _{\pi \in \Pi_{C}} J(\pi)
$, where $J(\pi)$ is the discounted total reward defined in Eq. \eqref{eq:discounted_return}.
Finally, in analogy to $V^{\pi}, Q^{\pi},$ and $A^{\pi}$, we define $V_{C_{i}}^{\pi}, Q_{C_{i}}^{\pi}$, and $A_{C_{i}}^{\pi}$, respectively, by replacing the reward function $R$ with the cost function $C_{i}$.
\subsubsection{\textbf{Constrained Policy Optimization.}}\label{sec:CPO}
Inspired by trust region methods \cite{DBLP:journals/corr/SchulmanLMJA15}, \citeauthor{DBLP:journals/corr/AchiamHTA17} \cite{DBLP:journals/corr/AchiamHTA17} proposed Constrained Policy Optimization (CPO), which uses a trust region instead of penalties on policy divergence to enable larger step sizes.
CPO has policy updates of the following form:
\begin{equation} \label{eq:cpo}
\small
\begin{aligned}
\pi_{k+1}&=\arg \max _{\pi \in \Pi_{\theta}} \underset{\underset{a \sim \pi} {s \sim d^{\pi_{k}} }}{\mathrm{E}}\left[A^{\pi_{k}}(s, a)\right], \\
\text { s.t. } & J_{C_{i}}\left(\pi_{k}\right)+\frac{1}{1-\gamma} \underset{\underset{a \sim \pi} {s \sim d^{\pi_{k}} }}{\mathrm{E}}\left[A_{C_{i}}^{\pi_{k}}(s, a)\right] \leq \mathbf{d}_{i}, \forall i \\
& \bar{D}_{K L}\left(\pi \| \pi_{k}\right) \leq \delta
\end{aligned}
\end{equation}
where $\Pi_{\theta} \subseteq \Pi$ is a set of parameterized policies with parameters $\theta$ (e.g., neural networks with fixed architecture), $d^{\pi_{k}}$ is the state distribution under policy $\pi_{k}$, $\bar{D}_{K L}$ denotes the average KL-divergence, and $\delta > 0$ is the step size.
The set $\left\{\pi_{\theta} \in \Pi_{\theta}: D_{K L}\left(\pi|| \pi_{k}\right) \leq \delta\right\}$ is called the trust region.
Particularly, for problems with only one linear constraint, there is an analytical solution, which is also given by \citeauthor{DBLP:journals/corr/AchiamHTA17} \cite{DBLP:journals/corr/AchiamHTA17}.
Denoting the gradient of the objective in Eq. \eqref{eq:cpo} as $g$, the gradient of constraint as $b$, the Hessian of the KL-divergence as $H$, and defining $c = J_C(\pi_k) - d$, the approximation to Eq. \eqref{eq:cpo} is
\begin{equation} \label{eq:cpo_approx}
\small
\begin{aligned}
\theta_{k+1} &= \arg \max _{\theta}\ g^{\top}\left(\theta-\theta_{k}\right) \\
\text { s.t. } & c+b^{\top}\left(\theta-\theta_{k}\right) \leq 0 \\
& \frac{1}{2}\left(\theta-\theta_{k}\right)^{\top} H\left(\theta-\theta_{k}\right) \leq \delta
\end{aligned}
\end{equation}
A more comprehensive review of CMDPs and CPO can be seen in \cite{altman1999constrained} and \cite{ DBLP:journals/corr/AchiamHTA17} respectively.
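To make the update in Eq. \eqref{eq:cpo_approx} concrete, the sketch below solves the small surrogate problem numerically with an off-the-shelf solver; it is illustrative only and assumes feasibility, and the function name and the use of \textit{scipy} are our own choices. CPO itself relies on the analytical solution together with conjugate gradients, which avoids ever forming $H$ explicitly.
\begin{verbatim}
import numpy as np
from scipy.optimize import LinearConstraint, NonlinearConstraint, minimize

def approx_cpo_step(g, b, H, c, delta):
    # Solve: max_x g^T x  s.t.  c + b^T x <= 0  and  0.5 * x^T H x <= delta,
    # where x = theta - theta_k and H is assumed positive semi-definite.
    n = g.shape[0]
    objective = lambda x: -g @ x                      # negate to minimize
    linear = LinearConstraint(b.reshape(1, -1), -np.inf, -c)
    trust = NonlinearConstraint(lambda x: 0.5 * x @ H @ x, -np.inf, delta)
    result = minimize(objective, np.zeros(n), method="trust-constr",
                      constraints=[linear, trust])
    return result.x                                   # parameter increment
\end{verbatim}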
\section{Problem Formulation}
In this section, we first describe a CMDP that models the recommendation process with general constraints, and then, we describe several fairness constraints, which are suitable for recommendation scenarios. Finally, we combine these two parts together and introduce the fairness-constrained optimization problem.
\subsection{\textbf{CMDP for Recommendation}}
In each timestamp ($t_1$, $t_2$, $t_3$, $t_4$, $t_5$, $\dots$), when a user sends a request to the recommendation system, the recommendation agent $G$ will take the feature representation of the current user and item candidates $\mathcal{I}$ as input, and generate a list of items $L\in\mathcal{I}^K$ to recommend, where $K \geq 1$.
User $u$ who has received the list of recommended item/items $L$ will give his/her feedback $B$ by his/her clicks on this set of items.
Thus, the state $s$ can be represented by user features (e.g., user's recent click history), action $a$ is represented by items in $L$, reward $r$ is the immediate reward (e.g., whether user clicks on an item in $L$) by taking action $a$ in the current state, and cost $c$ is the immediate cost (e.g., whether the recommended item/items come from the sensitive group).
\begin{itemize}
\item \textbf{State $\mathcal{S}$:} A state $s_t$ is the representation of user's most recent positive interaction history $H_t$ with the recommender, as well as his/her demographic information (if exists).
\item \textbf{Action $\mathcal{A}$:} An action $a_t\ = \{ a_t^{1},\ \dots,\ a_t^{K}\}$ is a recommendation list with $K$ items to a user $u$ at time $t$ with current state $s_t$.
\item \textbf{Reward $\mathcal{R}$:} Given the recommendation based on the action $a_t$ and the user state $s_t$, the user will provide his/her feedback, i.e., click, skip, or purchase, etc. The recommender receives immediate reward $R(s_t, a_t)$ according to the user's feedback.
\item \textbf{Cost $\mathcal{C}$:} Given the recommendation based on the action $a_t$, the environment provides a cost value based on the problem-specific cost function, i.e., the number of items in the recommendation list that come from the sensitive group, and sends the immediate cost $C(s_t, a_t)$ to the recommender.
\item \textbf{Discount rates $\gamma_r$ and $\gamma_c$:} $\gamma_r \in [0,1]$ is a factor measuring the present value of long-term rewards, while $\gamma_c \in [0,1]$ is another factor measuring the present value of long-term costs.
\end{itemize}
\subsection{Fairness Constraints}
To be consistent with the previous definition in CMDP for recommendation and solve the dynamic change of underlying labels, we define analogs of several frequently proposed fairness constraints.
\subsubsection{\textbf{Demographic Parity Constraints}}
Following~\cite{singh2018fairness}, we can use exposure to define the fairness between different groups of items.
Demographic parity requires that the average exposure of the items from each group is equal.
In our setting, we enforce this constraint at each iteration $t$.
Denoting the number of exposures in a group at iteration $t$ as
\begin{equation} \label{eq:expo}
\small
\text {Exposure}_t \left(G_{j} \right) = \underset{a^l_t \in a_t}{\sum} \vmathbb{1} (a^l_t \in G_{j}),\ l=1,...,K\\
\end{equation}
Then we can express the demographic parity constraint as follows,
\begin{equation}
\small
\frac{\text {Exposure}_t\left(G_{0}\right)}{|G_{0}|}=\frac{\text {Exposure}_t\left(G_{1} \right)}{|G_{1}|},
\end{equation}
where groups $G_0$ and $G_1$ are divided based on the item popularity in the recommendation scenario.
\subsubsection{\textbf{Exact-$K$ Fairness Constraints}}
We define Exact-$K$ fairness in ranking, which requires that the proportion (chance) of protected candidates in every recommendation list of length $K$ remains statistically below, or indistinguishable from, a given maximum $\alpha$.
This kind of fairness constraint is more suitable and feasible in practice for recommender systems as the system can adjust the value of $\alpha$.
The concrete form of this fairness is shown as below,
\begin{equation} \label{eq:alpha_expo}
\small
\frac{\text {Exposure}_t\left(G_{0}\right)}{\text {Exposure}_t\left(G_{1} \right)} \leq \alpha
\end{equation}
Note that when $\alpha=\frac{|G_0|}{|G_1|}$ and the constraint holds with equality, the above expression is exactly the same as demographic parity.
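For clarity, the two constraints can be checked for a single recommendation list as in the sketch below; representing each group by a set of item ids and the helper names are our own assumptions, and we use the multiplied form $\text{Exposure}_t(G_0) \leq \alpha\,\text{Exposure}_t(G_1)$ to avoid division by zero.
\begin{verbatim}
def exposure(rec_list, group_items):
    # Exposure_t(G_j): number of recommended items that belong to group G_j,
    # where the group is given as a set of item ids.
    return sum(1 for item in rec_list if item in group_items)

def satisfies_demographic_parity(rec_list, g0_items, g1_items):
    # Average exposure per item is equal across the two groups.
    return (exposure(rec_list, g0_items) / len(g0_items)
            == exposure(rec_list, g1_items) / len(g1_items))

def satisfies_exact_k(rec_list, g0_items, g1_items, alpha):
    # Exact-K fairness: Exposure(G_0) <= alpha * Exposure(G_1).
    return exposure(rec_list, g0_items) <= alpha * exposure(rec_list, g1_items)
\end{verbatim}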
\subsection{\mbox{\!\!\!\!FCPO: Fairness Constrained Policy Optimization}}
An illustration of the proposed FCPO is shown in Fig. \ref{fig:model}, containing one actor and two critics. Our goal is to learn the optimal policy for the platform, which is able to maximize the cumulative reward under a certain fairness constraint, as mentioned in the previous section.
Specifically, in this work, the reward function and the cost function are defined as
\begin{equation} \label{eq:reward}
\centering
\small
\begin{aligned}
R(s_{t},a_{t}, s_{t+1}) &= \sum_{l=1}^{K} \vmathbb{1} (a_{t}^l \text{ gets positive feedback})\\
\end{aligned}
\end{equation}
\begin{equation} \label{eq:cost}
\small
\centering
\begin{aligned}
C(s_{t},a_{t}, s_{t+1}) &= \sum_{l=1}^{K} \vmathbb{1}(a_{t}^l \text{ is in the sensitive group})
\end{aligned}
\end{equation}
where $a_{t} = \{ a_{t}^1,\ \dots,\ a_{t}^K \}$ represents a recommendation list including $K$ item IDs, which are selected by the current policy at time point $t$.
We can see that the expression of cost function is the same as Eq. \eqref{eq:expo}, which represents the total number of items in a specific group exposed to users at time $t$.
Let us consider the sensitive group as group $G_0$, then we have
\begin{equation*}
\small
\begin{aligned}
\frac{\text {Exposure}_t\left(G_{0}\right)}{\text {Exposure}_t\left(G_{1} \right)} \leq &\alpha \\
\text {Exposure}_t\left(G_{0}\right) \leq &\alpha \text {Exposure}_t\left(G_{1} \right)\\
(1+\alpha)\text {Exposure}_t\left(G_{0}\right) \leq &\alpha \text{Exposure}_t\left(G_{0}\right) + \alpha \text {Exposure}_t\left(G_{1} \right)\\
(1+\alpha)\text {Exposure}_t\left(G_{0}\right) \leq &\alpha K\\
C(s_{t},a_{t}, s_{t+1}) \leq &\frac{\alpha}{1+\alpha} K = \alpha^\prime K
\end{aligned}
\end{equation*}
Requiring $C \leq \alpha^{\prime} K$ to be satisfied at each iteration, we can bound the discounted total cost,
\begin{equation} \label{eq:fc}
\small
J_{C}(\pi)=\underset{\tau \sim \pi}{\mathrm{E}}\left[\sum_{t=0}^{T} \gamma_{c}^{t}\ C\left(s_{t}, a_{t}, s_{t+1}\right)\right] \leq \sum_{t=0}^T \gamma_{c}^t\ \alpha^{\prime} K
\end{equation}
where $T$ is the length of a recommendation trajectory.
Eq. \eqref{eq:fc} is the group fairness constraint for our optimization problem and we can denote the limit of the unfairness $\mathbf{d}$ as
\begin{equation}\label{eq:fairness_limit}
\small
\mathbf{d} = \sum_{t=1}^T \gamma_{c}^t\ \alpha^{\prime} K.
\end{equation}
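As a small numerical companion to the derivation above (with hypothetical helper names, not part of the released code), the per-step factor $\alpha^{\prime}=\alpha/(1+\alpha)$ and the limit $\mathbf{d}$ in Eq. \eqref{eq:fairness_limit} can be computed as follows.
\begin{verbatim}
def alpha_prime(alpha):
    # Per-step bound factor from the derivation above: alpha' = alpha / (1 + alpha).
    return alpha / (1.0 + alpha)

def unfairness_limit(alpha_p, K, gamma_c, T):
    # d = sum_{t=1}^{T} gamma_c^t * alpha' * K, the bound on the discounted total cost.
    return sum(gamma_c ** t * alpha_p * K for t in range(1, T + 1))
\end{verbatim}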
Having defined the CMDP for recommendation, together with the reward function in Eq. \eqref{eq:reward}, the cost function in Eq. \eqref{eq:cost}, and the constraint limit $\mathbf{d}$, we can plug them into Eq. \eqref{eq:cpo} to build our fairness constrained policy optimization framework.
It is worth noting that our model contains only one linear fairness constraint; therefore, as mentioned in the Preliminary section, we can obtain an analytical solution by solving Eq. \eqref{eq:cpo_approx} if the problem is feasible.
We then introduce the framework in the following section.
\begin{figure}[t]
\centering
\mbox{
\centering
\includegraphics[scale=0.36]{fig/model.pdf}
}
\caption{Illustration of the proposed method.}
\label{fig:model}
\vspace{-15pt}
\end{figure}
\section{Proposed Framework}
Our solution to the aforementioned fairness constrained optimization problem follows an Actor-Critic learning scheme, but with an extra critic network designed for the fairness constraint.
In this section, we illustrate how to construct and learn each of these components.
\subsection{The Actor}
The actor component $\pi_\theta$, parameterized by $\theta$, acts as a stochastic policy that samples an action $a_t\in\mathcal{I}^K$ given the current state $s_t\in\mathbb{R}^m$ of a user.
As depicted in Fig. \ref{fig:state}, $s_t$ is first acquired by extracting and concatenating the user embedding $\mathbf{e}_u\in\mathbb{R}^d$ and user's history embedding $\mathbf{h}_u$:
\begin{equation}\label{eq:state_rep}
s_t = [\mathbf{e}_u; \mathbf{h}_u],~\mathbf{h}_u=\mathrm{GRU}(H_t)
\end{equation}
where $H_t = \{H_t^1, H_t^2, \dots, H_t^N\}$ denotes the most recent $N$ items from user $u$'s interaction history, and the history embedding $\mathbf{h}_u$ is acquired by encoding the $N$ item embeddings via Gated Recurrent Units (GRU) \cite{Cho2014gru}.
Note that the user's recent history is organized as a queue, and it is updated only if the recommended item $a_t^l \in a_t$ receives positive feedback,
\begin{equation}\label{eq:state_prime}
\small
H_{t+1}=\left\{
\begin{array}{cc}
\{H_t^2,\ \dots,\ H_t^N,\ a_t^l\} & r_t^l>0 \\
H_t & \text{Otherwise}
\end{array}
\right.
\end{equation}
This ensures that the state can always represent the user's most recent interests.
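A minimal PyTorch sketch of the state encoder in Eq. \eqref{eq:state_rep} and of the history update above is given below; the class and function names, layer sizes, and batching conventions are illustrative rather than the exact configuration used in our experiments.
\begin{verbatim}
import torch
import torch.nn as nn

class StateEncoder(nn.Module):
    # s_t = [e_u ; h_u], with h_u = GRU(H_t) over the N most recent positive items.
    def __init__(self, item_dim, hidden_dim):
        super().__init__()
        self.gru = nn.GRU(input_size=item_dim, hidden_size=hidden_dim,
                          batch_first=True)

    def forward(self, user_emb, history_embs):
        # user_emb: (B, d); history_embs: (B, N, d) item embeddings of H_t.
        _, h = self.gru(history_embs)               # h: (1, B, hidden_dim)
        return torch.cat([user_emb, h.squeeze(0)], dim=-1)

def update_history(history, item, positive_feedback):
    # Queue update: push the recommended item only on positive feedback.
    return history[1:] + [item] if positive_feedback else history
\end{verbatim}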
\begin{figure}[t]
\vspace{-10pt}
\centering
\mbox{
\hspace{-10pt}
\centering
\includegraphics[scale=0.34]{fig/actor.pdf}
}
\vspace{-10pt}
\caption{The architecture of the Actor.
$\theta$ consists of parameters of both the Actor network in $f_\theta$ and the state representation model in Eq. \eqref{eq:state_rep}.}
\label{fig:state}
\end{figure}
We assume that the probability of actions conditioned on states follows a continuous high-dimensional Gaussian distribution with mean $\mu\in\mathbb{R}^{Kd}$ and covariance matrix $\Sigma\in\mathbb{R}^{Kd\times Kd}$ (only the diagonal elements are non-zero, so there are actually $Kd$ free parameters).
For better representation ability, we approximate the distribution via a neural network
that maps the encoded state $s_t$ to $\mu$ and $\Sigma$.
Specifically, we adopt a Multi Layer Perceptron (MLP) with tanh($\cdot$) as the non-linear activation function, i.e. $(\mu,\Sigma)=\mathrm{MLP}(s_t)$.
Then, we can sample a vector from the Gaussian distribution $\mathcal{N}(\mu,\Sigma)$ and reshape it into a proposal matrix $W\in\mathbb{R}^{K\times d}$,
whose $k$-th row, denoted by $W_k\in\mathbb{R}^d$, represents a proposed ``ideal'' item embedding.
Then, the probability matrix $P\in\mathbb{R}^{K\times |\mathcal{I}|}$, whose $k$-th row gives the selection probabilities over candidate items for slot $k$, is given by:
\begin{equation}\label{eq:weights}
\small
P_k = \mathrm{softmax}(W_k \mathcal{V}^\top),~ k=1,\ldots,K,
\end{equation}
where $\mathcal{V}\in\mathbb{R}^{|\mathcal{I}|\times d}$ is the embedding matrix of all candidate items.
This is equivalent to using the dot product to measure the similarity between $W_k$ and each item embedding.
As a result of taking the action at step $t$, the Actor recommends the $k$-th item as follows:
\begin{equation} \label{eq:action}
\small
a_t^k = \argmax_{i\in \{1,\dots,|\mathcal{I}|\}} P_{k,i},~ \forall k=1,\ldots,K,
\end{equation}
where $P_{k,i}$ denotes the probability of taking the $i$-th item at rank $k$.
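The sampling and selection steps in Eq. \eqref{eq:weights} and Eq. \eqref{eq:action} can be sketched as follows for a single user; the assumption that the policy network outputs the mean and log standard deviation of the diagonal Gaussian as one vector, as well as the function name, are our own simplifications.
\begin{verbatim}
import torch
import torch.nn.functional as F

def select_action(state, policy_mlp, item_emb, K, d):
    # policy_mlp is assumed to map the state to [mu, log_sigma] of size 2*K*d;
    # item_emb is the (|I|, d) embedding matrix of all candidate items.
    mu, log_sigma = policy_mlp(state).chunk(2, dim=-1)
    eps = torch.randn_like(mu)
    W = (mu + eps * log_sigma.exp()).view(K, d)     # proposal matrix, one row per slot
    P = F.softmax(W @ item_emb.t(), dim=-1)         # (K, |I|) selection probabilities
    return P.argmax(dim=-1)                         # item index for each of the K slots
\end{verbatim}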
\subsection{The Critics}
\subsubsection{\textbf{Critic for Value Function}}
A Critic network $V_\omega(s_t)$ is constructed to approximate the true state value function $V^{\pi}(s_t)$ and is used to optimize the actor.
The Critic network is updated via temporal-difference learning, minimizing the mean squared error:
\begin{equation} \label{eq:value_update}
\small
\mathcal{L}(\omega) = \sum_t \Big(y_t - V_\omega(s_t)\Big)^2
\end{equation}
where $y_t = r_t + \gamma_r V_{\omega}(s_{t+1})$.
\subsubsection{\textbf{Critic for Cost Function}}
In addition to the critic for accuracy, we introduce a separate Critic network $V_\phi(s)$ for the purpose of constrained policy optimization as explained in Section \ref{sec:CPO}, which is updated similarly to Eq. \eqref{eq:value_update},
\begin{equation} \label{eq:cost_update}
\small
\mathcal{L}(\phi) = \sum_t \Big(y_t - V_\phi(s_t)\Big)^2
\end{equation}
where $y_t = c_t + \gamma_c V_{\phi}(s_{t+1})$.
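Both critics are trained by the same temporal-difference regression; the sketch below shows one update step with a generic first-order optimizer for brevity (in our experiments the critics are fitted with LBFGS, which additionally requires a closure), and the function name is hypothetical.
\begin{verbatim}
import torch

def critic_td_step(critic, optimizer, s, signal, s_next, gamma):
    # Regress V(s) toward the bootstrapped target y = signal + gamma * V(s'),
    # where `signal` is the reward r_t for V_omega or the cost c_t for V_phi.
    with torch.no_grad():
        target = signal + gamma * critic(s_next)
    loss = ((target - critic(s)) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}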
\subsection{Training Procedure}
We also present the detailed training procedure of our model in Algorithm \ref{alg:FCPO}.
In each round, there are two phases --- the trajectory generation phase (lines 4-13) and the model updating phase (lines 14-23), where each trajectory contains $T$ transitions between the consumer and the recommendation agent.
\subsection{Testing Procedure}
After finishing the training procedure, FCPO gets fine-tuned hyper-parameters and well-trained parameters.
Then we conduct the evaluation of our model on several public real-world datasets.
Since our ultimate goal is to achieve long-term group fairness of item exposure with dynamically changing group labels, we propose both short-term evaluation and long-term evaluation.
\subsubsection{\textbf{Short-term Evaluation}} This follows Algorithm \ref{alg:FCPO}, while the difference from training is that it only contains the trajectory generation phase without any updates to the model parameters.
Once we receive the recommendation results in all trajectories, namely $a_t$, we can use the log data to calculate the recommendation performance, and compute the fairness performance based on the exposure records with fixed group labels.
We will introduce how to obtain the initial group labels in the experiments section.
\subsubsection{\textbf{Long-term Evaluation}}\label{sec:long_term_eval} This process also follows Algorithm \ref{alg:FCPO}, except that instead of initializing the model parameters randomly, we load the well-trained parameters into the model in advance.
The model parameters are updated throughout the testing process so as to mimic an online learning procedure in practice; meanwhile, the item labels change dynamically based on the current impression results, which means that the fairness constraint changes over time.
To observe long-term performance, we repeatedly recommend $T$ times, so the total number of recommended items is $TK$.
\begin{algorithm}[t]
\small
\textbf{Input:} step size $\delta$, cost limit value $\mathbf{d}$, and line search ratio $\beta$ \\
\textbf{Output:} parameters $\theta$, $\omega$ and $\phi$ of actor network, value function, cost function \\
Randomly initialize $\theta$, $\omega$ and $\phi$. \\
Initialize replay buffer $D$;
\For{$Round\ =\ 1\ ...\ M$}{
Initialize user state $s_0$ from log data\;
\For{$t\ =\ 1\ ...\ T$}{
Observe current state $s_t$ based on Eq. \eqref{eq:state_rep};\\
Select an action $a_t\ = \{ a_t^{1},\ \dots,\ a_t^{K}\} \in \mathcal{I}^K$ based on Eq. \eqref{eq:weights} and Eq. \eqref{eq:action}\\
Calculate reward $r_t$ and cost $c_t$ according to environment feedback based on Eq. \eqref{eq:reward} and Eq. \eqref{eq:cost};\\
Update $s_{t+1}$ based on Eq. \eqref{eq:state_prime};\\
Store transition $(s_t,a_t,r_t,c_t,s_{t+1})$ in $D$ in its corresponding trajectory.
}
Sample minibatch of $\mathcal{N}$ trajectories $\mathcal{T}$ from $D$;\\
Calculate advantage value $A$, advantage cost value $A_c$;\\
Obtain gradient direction $d_\theta$ by solving Eq. \eqref{eq:cpo_approx} with $A$ and $A_c$;\\
\Repeat{$\pi_{\theta'}(s)$ in trust region \& loss improves
\& cost $\leq \mathbf{d}$
}{
$\theta' \leftarrow \theta + d_\theta$\\
$d_\theta \leftarrow \beta d_\theta$
}
(Policy update) $\theta \leftarrow \theta'$\;
(Value update) Optimize $\bm{\omega}$
based on Eq.\eqref{eq:value_update}\;
(Cost update) Optimize $\bm{\phi}$
based on Eq.\eqref{eq:cost_update}\;
}
\caption{Parameters Training for FCPO}
\label{alg:FCPO}
\end{algorithm}
\section{Experiments}
\subsection{Dataset Description}
We use the user transaction data from $Movielens$ \cite{Harper:2015:MDH:2866565.2827872} in our experiments to verify the recommendation performance of \textbf{FCPO}\footnote{https://github.com/TobyGE/FCPO}.
We choose $Movielens100K$ and $Movielens1M$ \footnote{\url{https://grouplens.org/datasets/Movielens/}} datasets, which include one hundred thousand and one million user transactions, respectively (user id, item id, rating, timestamp, etc.).
For each dataset, we sort the transactions of each user according to the timestamp, and then split the records into training and testing sets chronologically by 4:1, and the last item of each user in the training set is put into the validation set.
Some basic statistics of the experimental datasets are shown in Table \ref{tab:dataset}.
We split items into two groups $G_0$ and $G_1$ based on item popularity, i.e., the number of exposures for each item. Specifically, the top 20\% items in terms of number of impressions belong to the popular group $G_0$, and the remaining 80\% belong to the long-tail group $G_1$.
Moreover, for RL-based recommenders, the initial state for each user during training is the first five clicked items in the training set, and the initial state during testing is the last five clicked items in the training set.
For simplicity, each time the RL agent recommends one item to the user, while we can adjust the length of the recommendation list easily in practice.
\subsection{Experimental Setup}\label{sec:experimental_setup}
\textbf{Baselines:}
We compare our model with the following baselines, including both traditional and RL based methods.
\begin{itemize}
\item {\bf MF}: Collaborative Filtering based on matrix factorization \cite{koren2009matrix} is a representative method for rating prediction.
Basically, the user and item rating vectors are considered as the representation vector for each user and item.
\item {\bf BPR-MF}: Bayesian Personalized Ranking \cite{bpr} is one of the most widely used ranking methods for top-K recommendation, which models recommendation as a pair-wise ranking problem.
\item {\bf NCF}: Neural Collaborative Filtering \cite{he2017neural} is a simple neural network-based recommendation algorithm.
In particular, we choose Neural Matrix Factorization to conduct the experiments, fusing both Generalized Matrix Factorization (GMF) and Multiple Layer Perceptron (MLP) under the NCF framework.
\item {\bf LIRD}: It is short for List-wise Recommendation based on Deep reinforcement learning \cite{DBLP:journals/corr/abs-1801-00209}. The original paper simply utilizes the concatenation of item embeddings to represent the user state. For a fair comparison, we replace the state representation with the same structure as FCPO, as shown in Fig. \ref{fig:state}.
\end{itemize}
In this work, we also include a classical fairness baseline called Fairness Of Exposure in Ranking (FOE) \cite{singh2018fairness} in our experiment to compare the fairness performance with our model.
\textbf{FOE} can be seen as a reranking framework based on group fairness constraints,
and it is originally designed for search problems, so we made a few modifications to accommodate the recommendation task.
We use ranking prediction models such as MF, BPR, and NCF as base rankers, where the raw utility is given by the predicted probability of user $i$ clicking item $j$.
In our experiment, we have \textbf{MF-FOE}, \textbf{BPR-FOE} and \textbf{NCF-FOE} as our fairness baselines.
Since FOE assumes independence of items in the list, it cannot be applied to LIRD, which is a sequential model and the order in its recommendation makes a difference.
Meanwhile, FOE for personalized recommendation needs to solve a linear program with size $|\mathcal{I}| \times |\mathcal{I}|$ for each consumer, which brings huge computational costs.
In order to make the problem feasible, we let FOE rerank the top-200 items from the base ranker (e.g. MF), and select the new top-$K$ ($K<200$) as the final recommendation results.
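For reference, the sketch below shows one way to write the FOE-style linear program used by the *-FOE baselines; the use of \textit{cvxpy}, the demographic-parity form of the exposure constraint, and the function name are illustrative choices, and the step that turns the probabilistic ranking $P$ into a final list (e.g. a Birkhoff--von Neumann decomposition) is omitted.
\begin{verbatim}
import numpy as np
import cvxpy as cp

def foe_probabilistic_ranking(u, v, popular_mask):
    # u: utilities of the top candidates from the base ranker, shape (n,);
    # v: position bias per rank, shape (n,); popular_mask: True for items in G_0.
    n = len(u)
    P = cp.Variable((n, n), nonneg=True)        # P[i, j] = Prob(item i at rank j)
    utility = cp.sum(cp.multiply(np.outer(u, v), P))        # equals u^T P v
    item_exposure = P @ v                                   # expected exposure per item
    m0 = popular_mask.astype(float)
    m1 = 1.0 - m0
    constraints = [
        cp.sum(P, axis=0) == 1,                             # each rank filled once
        cp.sum(P, axis=1) == 1,                             # each item placed once
        cp.sum(cp.multiply(m0, item_exposure)) / m0.sum()   # equal average exposure
        == cp.sum(cp.multiply(m1, item_exposure)) / m1.sum(),
    ]
    cp.Problem(cp.Maximize(utility), constraints).solve()
    return P.value
\end{verbatim}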
\begin{table}[t]
\vspace{-10pt}
\caption{Basic statistics of the experimental datasets.}
\label{tab:dataset}
\centering
\setlength{\tabcolsep}{5pt}
\begin{adjustbox}{max width=\linewidth}
\begin{tabular}
{lcccccccc} \toprule
Dataset & \#users & \#items & \#act./user & \#act./item & \#act. & density \\\midrule
Movielens100K & 943 & 1682 & 106 & 59.45 & 100,000 & 6.305\%\\
Movielens1M & 6040 & 3706 & 166 & 270 &1,000,209 & 4.468\%\\\bottomrule
\end{tabular}
\end{adjustbox}
\vspace{-15pt}
\end{table}
We implement MF, BPR-MF, NCF, MF-FOE, BPR-FOE and NCF-FOE using \textit{Pytorch} with Adam optimizer.
For all the methods, we consider latent dimensions $d$ from \{16, 32, 64, 128, 256\}, learning rate $lr$ from \{1e-1, 5e-2, 1e-2, \dots, 5e-4, 1e-4\}, and the L2 penalty is chosen from \{0.01, 0.1, 1\}.
We tune the hyper-parameters using the validation set and terminate training when the performance on the validation set does not change within 5 epochs.
We implement \textbf{FCPO} with $Pytorch$ as well.
We perform PMF \cite{mnih2008probabilistic} to pretrain 100-dimensional user and item embeddings, and fix them through the whole experiment.
We set $|H_t|=5$ and use a two-layer GRU to obtain the state representation $s_t$.
For the policy network and each of the two critic networks, we use a two-hidden-layer MLP with tanh($\cdot$) as the activation function.
The critics are learned with the LBFGS optimizer \cite{andrew2007scalable}.
Finally, we fine-tune FCPO's hyperparameters on our validation set.
In order to examine the trade-off between performance and fairness, we set different levels of the fairness constraint, controlled by the value of $\alpha^{\prime}$ in Eq. \eqref{eq:fc}, and calculate the limit $\mathbf{d}$ using Eq. \eqref{eq:fairness_limit}.
We denote the resulting alternatives as \textbf{FCPO-1}, \textbf{FCPO-2}, and \textbf{FCPO-3}, whose fairness constraints are set with $\alpha^{\prime}=1$, $\alpha^{\prime}=0.8$, and $\alpha^{\prime}=0.4$, respectively, in our experiments.
\textbf{Evaluation Metrics:}
We adopt several common top-K ranking metrics including \text{Recall}, \text{F1 Score}, and \text{NDCG} to evaluate each model's recommendation performance.
In addition to these accuracy-based metrics, we also include two fairness measures -- \text{Gini Index} and \text{Popularity Rate}, with respect to item exposures for individual items and groups, respectively.
Gini Index measures the inequality among values of a frequency distribution (for example, numbers of impressions), which can be seen as an individual level measure.
Given the list of impressions of all items, $\mathcal{G}=[g_{1},g_{2},...,g_{|\mathcal{I}|}]$, the Gini Index can be calculated by Eq. \eqref{eq:gini},
\begin{equation}\label{eq:gini}
\small
Gini\ Index(\mathcal{G}) = \frac{1}{2|\mathcal{I}|^2\bar{g}}\sum_{i=1}^{|\mathcal{I}|}\sum_{j=1}^{|\mathcal{I}|} |g_{i} - g_{j}|,
\end{equation}
where $\bar{g}$ represents the mean of all item impressions.
Popularity Rate, on the other hand, simply refers to the proportion of popular items in the recommendation list against the total number of items in the list, which can be seen as a group-level measure of fairness.
For both fairness measures, smaller values indicate a fairer recommender system.
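Both measures can be computed directly from accumulated impression counts; the short sketch below is a direct implementation of the pairwise definition in Eq. \eqref{eq:gini} and of the popularity rate, with function names of our own choosing.
\begin{verbatim}
import numpy as np

def gini_index(impressions):
    # Pairwise definition: sum_{i,j} |g_i - g_j| / (2 * n^2 * mean(g)).
    g = np.asarray(impressions, dtype=float)
    n, g_bar = len(g), g.mean()
    return np.abs(g[:, None] - g[None, :]).sum() / (2 * n ** 2 * g_bar)

def popularity_rate(recommended_items, popular_items):
    # Fraction of recommended items that belong to the popular group G_0.
    return sum(1 for i in recommended_items if i in popular_items) / len(recommended_items)
\end{verbatim}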
\begin{table*}[]
\caption{Summary of the performance on two datasets.
We evaluate for ranking ($Recall$, $F_1$ and $NDCG$, in percentage (\%) values; the \% symbol is omitted in the table for clarity) and fairness ($Gini$ $Index$ and $Popularity$ $Rate$, also in \% values), while $K$ is the length of the recommendation list.
When FCPO is the best, its improvements against the best baseline are significant at p < 0.01.}
\centering
\begin{adjustbox}{max width=\linewidth}
\setlength{\tabcolsep}{7pt}
\begin{tabular}
{m{1.33cm} ccc ccc ccc ccc ccc} \toprule
\multirow{2}{*}{Methods}
& \multicolumn{3}{c}{Recall (\%) $\uparrow$}
& \multicolumn{3}{c}{F1 (\%) $\uparrow$}
& \multicolumn{3}{c}{NDCG (\%) $\uparrow$}
& \multicolumn{3}{c}{Gini Index (\%) $\downarrow$}
& \multicolumn{3}{c}{Popularity Rate (\%) $\downarrow$}\\\cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10} \cmidrule(lr){11-13} \cmidrule(lr){14-16}
& K=5 & K=10 & K=20 & K=5 & K=10 & K=20 & K=5 & K=10 & K=20 & K=5 & K=10 & K=20 & K=5 & K=10 & K=20 \\\midrule
\multicolumn{16}{c}{Movielens-100K} \\\midrule
MF & 1.847 & 3.785 & 7.443 & 2.457 & 3.780 & 5.074 & 3.591 & 4.240 & 5.684 & 98.99 & 98.37 & 97.03 & 99.98 & 99.96 & 99.92\\
BPR-MF & 1.304 & 3.539 & 8.093 & 1.824 & 3.592 & 5.409 & 3.025 & 3.946 & 5.787 & 98.74 & 98.17 & 97.01 & 99.87 & 99.87 & 99.78\\
NCF & \underline{1.995} & 3.831 & 6.983 & \underline{2.846} & \underline{4.267} & \underline{5.383} & \underline{5.319} & \underline{5.660} & \underline{6.510} & 99.70 & 99.39 & 98.80 & 100.0 & 100.0 & 100.0\\
LIRD & 1.769 & \underline{5.467} & \underline{8.999} & 2.199 & 4.259 & 4.934 & 3.025 & 3.946 & 5.787 & 99.70 & 99.41 & 98.81 & 100.0 & 100.0 & 100.0\\\midrule
MF-FOE & 1.164 & 2.247 & 4.179 & 1.739 & 2.730 & 3.794 & 3.520 & 3.796 & 4.367 & \underline{86.29} & \underline{84.05} & \underline{82.98} & 92.90 & 91.89 & 90.98 \\
BPR-FOE & 0.974 & 2.053 & 4.404 & 1.496 & 2.568 & 3.933 & 3.127 & 3.514 & 4.332 & 86.50 & 84.38 & 83.78 & \underline{92.17} & \underline{91.36} & \underline{90.70}\\
NCF-FOE & 1.193 & 1.987 & 4.251 & 1.759 & 2.398 & 3.698 & 4.033 & 3.897 & 4.633 & 96.92 & 94.53 & 90.44 & 100.0 & 100.0 & 100.0\\\midrule
FCPO-1 & \textbf{4.740} & \textbf{8.607} & \textbf{14.48} & \textbf{4.547} & \textbf{5.499} & \textbf{5.855} & \textbf{6.031} & \textbf{7.329} & \textbf{9.323} & 98.73 & 98.07 & 96.75 & 92.60 & 90.42 & 85.85\\
FCPO-2 & 3.085 & 5.811 & 10.41 & 3.270 & 4.164 & 4.953 & 4.296 & 5.203 & 7.104 & 97.95 & 96.88 & 94.78 & 70.07 & 68.28 & 65.55\\
FCPO-3 & 0.920 & 1.668 & 3.329 & 1.272 & 1.807 & 2.535 & 2.255 & 2.369 & 2.871 & \textbf{75.23} & \textbf{74.06} & \textbf{73.23} & \textbf{36.52} & \textbf{36.66} & \textbf{36.94}\\\midrule
\multicolumn{16}{c}{Movielens-1M} \\\midrule
MF & 1.152 & 2.352 & 4.650 & 1.701 & 2.814 & 4.103 & 3.240 & 3.686 & 4.574 & 99.44 & 99.18 & 98.74 & 99.92 & 99.90 & 99.86\\
BPR-MF & 1.240 & 2.627 & 5.143 & 1.773 & 2.943 & 4.197 & 3.078 & 3.593 & 4.632 & 98.93 & 98.44 & 97.61 & 99.40 & 99.23 & 98.96\\
NCF & 1.178 & 2.313 & 4.589 & 1.832 & 2.976 & \underline{4.382} & \underline{4.114} & \underline{4.380} & \underline{5.080} & 99.85 & 99.71 & 99.42 & 100.0 & 100.0 & 100.0\\
LIRD & \underline{1.961} & \underline{3.656} & \underline{5.643} & \underline{2.673} & \underline{3.758} & 4.065 & 3.078 & 3.593 & 4.632 & 99.87 & 99.73 & 99.46 & 100.0 & 100.0 & 95.00\\\midrule
MF-FOE & 0.768 & 1.534 & 3.220 & 1.246 & 2.107 & 3.345 & 3.321 & 3.487 & 4.021 & 92.50 & 91.06 & 91.32 & 98.89 & 98.78 & 98.68 \\
BPR-FOE & 0.860 & 1.637 & 3.387 & 1.374 & 2.233 & 3.501 & 3.389 & 3.594 & 4.158 & \underline{90.48} & \underline{88.92} & \underline{89.01} & \underline{96.56} & \underline{96.12} & \underline{95.78}\\
NCF-FOE & 0.748 & 1.403 & 2.954 & 1.230 & 1.980 & 3.175 & 3.567 & 3.589 & 4.011 & 97.73 & 96.57 & 95.04 & 100.0 & 100.0 & 100.0\\\midrule
FCPO-1 & \textbf{2.033} & \textbf{4.498} & \textbf{8.027} & \textbf{2.668} & \textbf{4.261} & \textbf{5.201} & \textbf{4.398} & \textbf{5.274} & \textbf{6.432} & 99.81 & 99.67 & 99.34 & 99.28 & 96.93 & 91.70\\
FCPO-2 & 1.520 & 3.218 & 6.417 & 2.015 & 3.057 & 4.145 & 3.483 & 3.920 & 5.133 & 99.47 & 99.10 & 97.41 & 72.66 & 68.27 & 71.35\\
FCPO-3 & 0.998 & 1.925 & 3.716 & 1.449 & 2.185 & 2.948 & 2.795 & 2.987 & 3.515 & \textbf{88.97} & \textbf{88.34} & \textbf{87.70} & \textbf{63.43} & \textbf{62.73} & \textbf{61.45}\\\bottomrule
\end{tabular}\label{tab:result}
\end{adjustbox}
\vspace{-10pt}
\end{table*}
\begin{figure*}[t]
\mbox{
\hspace{-15pt}
\centering
\subfigure[NDCG vs Negative Gini on ML100K]{\label{fig:ml100k_ndcg_gini}
\includegraphics[width=0.26\textwidth]{fig/ml100k_ndcg-gini.png}}
\hspace{-10pt}
\subfigure[NDCG vs Long-tail Rate on ML100K]{\label{fig:ml100k_ndcg_pop}
\includegraphics[width=0.26\textwidth]{fig/ml100k_ndcg-pr.png}}
\hspace{-10pt}
\subfigure[NDCG vs Negative Gini on ML1M]{\label{fig:ml1m_ndcg_gini}
\includegraphics[width=0.26\textwidth]{fig/ml1m_ndcg-gini.png}}
\hspace{-10pt}
\subfigure[NDCG vs Long-tail Rate on ML1M]{\label{fig:ml1m_ndcg_pop}
\includegraphics[width=0.26\textwidth]{fig/ml1m_ndcg-pr.png}}
}
\caption{NDCG@20 vs. Negative Gini Index@20 and NDCG@20 vs. Long-tail Rate@20 in two datasets. $x$-axis is the negative gini index in \ref{fig:ml100k_ndcg_gini} and \ref{fig:ml1m_ndcg_gini}, and is the long-tail rate in \ref{fig:ml100k_ndcg_pop} and \ref{fig:ml1m_ndcg_pop}; $y$-axis represents the value of NDCG.}
\label{fig:ndcg_fairness}
\vspace{-10pt}
\end{figure*}
\balance
\subsection{Experimental Results}
The major experimental results are shown in Table \ref{tab:result}; in addition, we plot \textit{NDCG vs. Negative Gini Index} and \textit{NDCG vs. Long-tail Rate} in Fig. \ref{fig:ndcg_fairness} for a recommendation list of length $K=20$. We analyze and discuss the results from the following perspectives.
\subsubsection*{\bf i) Recommendation Performance:}
For recommendation performance, we compare FCPO-1 with MF, BPR, NCF, and LIRD based on $Recall@k$, $F1@k$ and $NDCG@k$.
The results of the recommendation performance are shown in Table \ref{tab:result}.
For each dataset and evaluation measure, the largest value is statistically significant at the 0.01 level.
Among the baseline models, NCF is the strongest on Movielens100K: averaged across recommendation lengths, it improves over MF by 11.45\%, over BPR by 18.01\%, and over LIRD by 6.17\%; LIRD is the strongest on Movielens1M: averaged across recommendation lengths, it improves over MF by 17.69\%, over BPR by 14.50\%, and over NCF by 9.68\%.
Our FCPO approach achieves the best top-K recommendation performance against all baselines on both datasets.
When averaged across recommendation lengths, FCPO improves over NCF by 33.09\% on Movielens100K and over LIRD by 18.65\% on Movielens1M.
These observations imply that the proposed method is able to capture dynamic user--item interactions, and thus better user preferences, which leads to better recommendations.
Another interesting observation is that FCPO is better than LIRD even though they use the same state representation and similar training procedure. This may be attributed to the trust-region-based optimization method, which stabilizes the model learning process.
\subsubsection*{\bf ii) Short-term Fairness Performance:}
For fairness performance, we compare three FCPOs with MF-FOE, BPR-FOE, and NCF-FOE based on $Gini\ Index@k$ and $Popularity\ Rate@k$, which are also shown in Table \ref{tab:result}.
We can easily see that there exists a trade-off between recommendation performance and fairness performance in both FCPO and FOE, which is understandable, as most long-tail items have relatively few user interactions.
In order to better illustrate the trade-off between FCPO and FOE, we fix the length of the recommendation list at 20 and plot NDCG against Negative Gini Index and Long-tail Rate in Fig. \ref{fig:ndcg_fairness} for both datasets, where the long-tail rate is equal to one minus popularity rate.
The blue line represents FCPO under three different levels of fairness constraint.
We plot the Negative Gini Index and the Long-tail Rate instead of the original metrics because for both of them higher values mean better fairness, which makes the comparison easier to read.
In most cases, for the same Gini Index, our method achieves much better NDCG; meanwhile, under the same NDCG scores, our method achieves better fairness.
In other words, our method FCPO achieves a much better trade-off than FOE in both individual fairness (measured by Gini Index) and group fairness (measured by Long-tail Rate).
We can see that even with the lightest fairness constraint, FCPO-1 achieves better group fairness than the traditional baselines and the FOE-based methods.
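To make the two fairness measures concrete, the following is a minimal sketch of how Gini Index@$k$ and Popularity Rate@$k$ can be computed from the item exposure induced by the recommendation lists; the function names, the exposure-based Gini formula, and the externally supplied set of popular (head) items are illustrative assumptions rather than the exact implementation used in our experiments.
\begin{verbatim}
import numpy as np

def gini_index(exposure):
    # Gini index of an item-exposure vector (0 means perfectly equal exposure)
    x = np.sort(np.asarray(exposure, dtype=float))
    n = len(x)
    if n == 0 or x.sum() == 0.0:
        return 0.0
    ranks = np.arange(1, n + 1)
    return 2.0 * np.sum(ranks * x) / (n * x.sum()) - (n + 1.0) / n

def popularity_rate(rec_lists, popular_items):
    # fraction of recommended slots occupied by popular (head) items
    total = sum(len(r) for r in rec_lists)
    hits = sum(1 for r in rec_lists for i in r if i in popular_items)
    return hits / total

# usage sketch: rec_lists[u] is the top-k list of user u, and
# exposure[i] counts how often item i appears across all lists
\end{verbatim}
The Negative Gini Index and the Long-tail Rate used in Fig. \ref{fig:ndcg_fairness} are then simply the negated Gini index and one minus the popularity rate.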
\subsubsection*{\bf iii) Efficiency Performance:}
We compare FOE-based methods with FCPO in terms of the single-core CPU running time to generate a recommendation list of size $K=100$ for all users.
The running time of the base rankers of the FOE-based methods is roughly the same, but the additional reranking step of FOE can take substantial time.
On the Movielens100K dataset, the FOE recommendation time is 90min, 6h30min, and 60h30min when reranking from $200$, $400$, and $800$ candidate items, respectively, while FCPO only takes around 3h and selects items from the entire item set (1682 items).
On the Movielens1M dataset, the FOE-based methods take 10h30min, 43h30min, and 397h to rerank from $200$, $400$, and $800$ candidate items, respectively, while FCPO takes around 11h33min while selecting from the entire item set (3706 items).
As mentioned before, these experiments are run on a single CPU core for fair comparison; they can easily be sped up with parallel computing.
\begin{figure}[t]
\mbox{
\hspace{-20pt}
\centering
\subfigure[NDCG on Movielens100K]{\label{fig:ml100k_long_term_ndcg}
\includegraphics[width=0.26\textwidth]{fig/ml100k_long_term_ndcg.pdf}}
\hspace{-10pt}
\subfigure[NDCG on Movielens1M]{\label{fig:ml1m_long_term_ndcg}
\includegraphics[width=0.26\textwidth]{fig/ml1m_long_term_ndcg.pdf}}
}
\mbox{
\hspace{-20pt}
\centering
\subfigure[Gini on Movielens100K]{
\label{fig:ml100k_long_term_gini}
\includegraphics[width=0.26\textwidth]{fig/ml100k_long_term_gini.pdf}}
\hspace{-10pt}
\subfigure[Gini on Movielens1M]{\label{fig:ml1m_long_term_gini}
\includegraphics[width=0.26\textwidth]{fig/ml1m_long_term_gini.pdf}}
}
\mbox{
\hspace{-20pt}
\centering
\subfigure[Popularity Rate on Movielens100K]{\label{fig:ml100k_long_term_pr}
\includegraphics[width=0.26\textwidth]{fig/ml100k_long_term_pr.pdf}}
\hspace{-10pt}
\subfigure[Popularity Rate on Movielens1M]{\label{fig:ml1m_long_term_pr}
\includegraphics[width=0.26\textwidth]{fig/ml1m_long_term_pr.pdf}}
}
\caption{Long-term performance on Movielens100K (first column) and Movielens1M (second column).
The $x$-axis is the recommendation step; the $y$-axis is the evaluated metric (first row: \textbf{NDCG}, second row: \textbf{Gini}, third row: \textbf{Popularity Rate}), computed on the item exposure accumulated from the beginning to the current step.}
\label{fig:long_term}
\end{figure}
\subsection{Long-term Fairness in Recommendation}
We compared FCPO with a static short-term fairness solution (i.e., MF-FOE) for 400 steps of recommendation.
For MF-FOE, we run 4 rounds of $K=100$ recommendations to let it capture the dynamics of the item group labels, while FCPO only needs to continuously run for 400 steps.
In other words, MF-FOE keeps the same item group labels for $K$ item recommendations and has to retrain its parameters after the labels are updated at the end of each round.
As mentioned in section \ref{sec:experimental_setup}, FOE-based methods become significantly time-consuming when dealing with large candidate item sets.
Thus, instead of doing whole item set fairness control, we first select the top $2K$ items as candidates, and then apply FOE to rerank the items and generate the final $K$ recommendations.
As shown in Fig. \ref{fig:ml100k_long_term_gini}, \ref{fig:ml1m_long_term_gini}, \ref{fig:ml100k_long_term_pr}, and \ref{fig:ml1m_long_term_pr}, once the models converge, MF-FOE performs much worse than FCPO on both Gini Index and Popularity Rate on the two datasets.
Within each round of MF-FOE, the fairness metrics quickly converge and are further improved only when the item exposure information is updated.
On the contrary, since FCPO adjusts its policy according to the fairness feedback, it can successfully and continuously suppress the fairness metrics to much lower values during testing.
As shown in Fig. \ref{fig:ml100k_long_term_pr} and Fig. \ref{fig:ml1m_long_term_pr}, due to this dynamic change of recommendation policy, FCPO exhibits larger fluctuations and less stable behavior than MF-FOE.
Though we remain skeptical about whether the fairness gap between MF-FOE and FCPO will eventually vanish, we do observe in Table \ref{tab:result} that MF strongly favors popular items over unpopular ones.
As a result, setting a very small $K$ (e.g., $K<20$) to speed up the recommendation could result in a candidate set filled with popular items, so that applying FOE becomes futile.
Besides, MF-FOE is consistently outperformed by FCPO in overall performance, especially on accuracy metrics (corresponding to Fig. \ref{fig:ml100k_long_term_ndcg} and \ref{fig:ml1m_long_term_ndcg}), which indicates that MF-FOE sacrifices more recommendation performance than FCPO in order to control fairness.
\section{Conclusion and Future Work}
In this work, we propose to model the long-term fairness in recommendation with respect to dynamically changing group labels. We accomplish the task by addressing the dynamic fairness problem through a fairness-constrained reinforcement learning framework.
Experiments on standard benchmark datasets verify that our framework achieves better performance in terms of recommendation accuracy, short-term fairness, and long-term fairness.
In the future, we will generalize the framework to optimize individual fairness constraints and other recommendation scenarios such as e-commerce recommendation and point-of-interest recommendation.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
Let $k$ be an algebraically closed field.
A potential $W$ for a quiver $Q$ is, roughly speaking, a linear combination of
cyclic paths in the complete path algebra $k\langle\langle Q\rangle\rangle$.
The Jacobian algebra $\mathcal{P}(Q,W)$ associated to a quiver with a potential
$(Q,W)$ is the quotient of the complete path algebra $k\langle\langle Q\rangle\rangle$ modulo the Jacobian
ideal $J(W)$. Here, $J(W)$ is the closure of the ideal of $k\langle\langle Q\rangle\rangle$
which is generated by the cyclic derivatives of $W$ with respect to the arrows
of $Q$.
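For orientation, consider a standard toy example (not one of the quivers arising from the surfaces studied below): if $Q$ consists of a single oriented $3$-cycle with arrows $a,b,c$ and $W=abc$, then the cyclic derivatives are
$$\partial_a W= bc,\qquad \partial_b W= ca,\qquad \partial_c W= ab,$$
so $\mathcal{P}(Q,W)$ is the quotient of $k\langle\langle Q\rangle\rangle$ by the closure of the ideal generated by $bc$, $ca$ and $ab$.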
Quivers with potentials were introduced in \cite{DWZ08} in order to construct additive
categorifications of cluster algebras with skew-symmetric exchange
matrices. For the just mentioned categorification it is crucial that
the potential for $Q$ be non-degenerate, i.e. that it can be mutated along
with the quiver arbitrarily; see \cite{DWZ08} for more details on quivers with
potentials.
In \cite{FST08} the authors introduced, under some mild hypotheses, for each oriented
surface with marked points $(S,M)$ a mutation-finite cluster algebra with
skew-symmetric exchange matrices. More precisely, each triangulation $\mathbb{T}$ of
$(S,M)$ by tagged
arcs corresponds to a cluster and the corresponding exchange matrix is
conveniently coded into a quiver $Q(\mathbb{T})$.
Labardini-Fragoso in \cite{LF09} enhanced this construction by introducing potentials
$W(\mathbb{T})$ and showed that these potentials are compatible with mutations.
In particular, these potentials are non-degenerate. Ladkani showed that for
surfaces with empty boundary and a triangulation $\mathbb{T}$ which has no
self-folded triangles the Jacobian algebra $\mathcal{P}(Q(\mathbb{T}),W(\mathbb{T}))$ is symmetric
(and in particular finite-dimensional).
It follows that for any triangulation $\mathbb{T}'$ of a closed surface $(S,M)$ the
Jacobian algebra $\mathcal{P}(Q(\mathbb{T}'), W(\mathbb{T}'))$ is weakly symmetric by \cite{HI11}.
In \cite{GLFS13} it is shown by a degeneration argument that these algebras are tame.
Next, following Amiot \cite[Sec. 3.4]{Ami12} and
Labardini-Fragoso \cite[Theorem 4.2]{LF13} we have a triangulated
2-Calabi-Yau category $\mathcal{C}_{(S,M)}$ together with a family of cluster tilting
objects $(T_{\mathbb{T}})_{\mathbb{T} \text{ triangulation of } (S,M)}$, related by Iyama-Yoshino
mutations, such that $\operatorname{End}_{\mathcal{C}_{(S,M)}}(T_{\mathbb{T}}) \cong \mathcal{P}(Q(\mathbb{T}),W(\mathbb{T}))^{\text{op}}$ (see for example \cite{Ke08} for details on $n$-Calabi-Yau categories).
\begin{thm}
Let $S$ be a closed oriented surface with a non-empty finite collection $M$ of
punctures, excluding only the case of a sphere with $4$ (or less) punctures.
For an arbitrary tagged triangulation $\mathbb{T}$, the Jacobian algebra
$\mathcal{P}(Q(\mathbb{T}),W(\mathbb{T}))$ is symmetric, tame, its stable Auslander-Reiten
quiver consists only of stable tubes of rank $1$ or $2$ and it is an algebra of exponential growth.
\end{thm}
Recall that for symmetric algebras the $\tau$-periodicity coincides with the $\Omega$-periodicity, where $\tau$ is the Auslander-Reiten translation and $\Omega$ is the Heller translate (syzygy). The Theorem thus shows that the existing characterization of algebras which are symmetric, tame and whose non-projective indecomposable modules are $\Omega$-periodic was not complete: these Jacobian algebras form a new family with these properties.
As consequence of the Theorem we have the following result.
\begin{corollary}
Let $S$ be a closed oriented surface with a non-empty finite collection $M$ of
punctures, excluding only the case of a sphere with $4$ (or less) punctures. The Auslander-Reiten quiver of the generalized cluster category
$\mathcal{C}_{(S,M)}$ consists only of stable tubes of rank $1$ or $2$.
\end{corollary}
\begin{notation}
Let $(S,M)$ be a marked surface with empty boundary and let $\mathbb{T}$
be a tagged triangulation of $(S, M)$. We construct the
unreduced signed adjacency quiver $\widehat{Q}(\mathbb{T})$ of the
triangulation $\mathbb{T}$ and following \cite{LF09} we construct the
unreduced potential $\widehat{W}(\mathbb{T})$. The quiver with potential
$(Q(\mathbb{T}), W(\mathbb{T}))$ associated to the triangulation $\mathbb{T}$ of the
marked surface $(S, M)$ is the reduced part of
$(\widehat{Q}(\mathbb{T}), \widehat{W}(\mathbb{T}))$.
Let $\mathcal C$ be the generalized cluster category of $(\mathcal
S, M)$. We denote by $\Sigma$ the suspension functor of $\mathcal
C$ and by $\Sigma_n$ the suspension functor of an $n$-angulated
category $(\mathcal F, \Sigma_n,\pentagon)$ (cf. \cite{GKO13} for
definition of $n$-angulated categories).
\end{notation}
\section{Proof of the results}
We show first that the stable Auslander-Reiten
quiver of the Jacobian algebra $\mathcal{P}(Q(\mathbb{T}),W(\mathbb{T}))$ of an arbitrary tagged triangulation $\mathbb{T}$ of a closed oriented surface $S$ consists only of stable tubes of rank $1$ or $2$ (see \cite[Chapter VII]{ARS97} for details on Auslander-Reiten quivers). In order to prove this, we
establish two preliminary results for 2-Calabi-Yau-tilted symmetric algebras; recall that any finite-dimensional Jacobian algebra is a 2-Calabi-Yau-tilted algebra.
\begin{proposition}\label{lema}
Let $\mathcal C$ be a Hom-finite triangulated 2-Calabi-Yau category and let $T\in \mathcal{C}$ be a cluster-tilting object. If $\operatorname{End}_\mathcal{C}(T)$ is symmetric, then its stable Auslander-Reiten
quiver consists only of stable tubes of rank $1$ or $2$.
\end{proposition}
\begin{proof}
By \cite[Theorem I 2.4]{RVB02} the Serre functor $\mathbb S$, the Auslander-Reiten translation $\tau_\mathcal{C}$ and the suspension functor $\Sigma$ of $\mathcal{C}$ are related on objects by $\mathbb S=\Sigma\tau_\mathcal{C}$. Since $\mathcal{C}$ is 2-Calabi-Yau we have $\mathbb S=\Sigma^2$, hence $\Sigma\tau_\mathcal{C}=\Sigma^2$ and therefore
$\tau_\mathcal{C}=\Sigma$.
Denote by $\Lambda$ the algebra $\operatorname{End}_\mathcal{C}(T)$ and by $\tau_\Lambda$ the Auslander-Reiten translation of $\operatorname{mod} \Lambda$.
By hypothesis $\Lambda$ is a symmetric algebra, so by \cite[Lemma]{Ri08} the subcategory $\operatorname{add}(T)$ is closed under the Serre functor $\tau_\mathcal{C}^2=\mathbb S=\Sigma^2$.
Then by \cite[Remark 6.3]{GKO13} the stable category
\underline{mod}($\Lambda$) is a 3-Calabi-Yau category. Hence by
\cite[Theorem I 2.4]{RVB02} we have
$\Omega^{-1}\tau_{\Lambda}=\Omega^{-3}$, because $\Omega^{-1}$ is
the suspension functor in \underline{mod}$\Lambda$, and therefore
$\tau_{\Lambda}=\Omega^{-2}$.
On the other hand, $\Lambda$ is symmetric, so $\tau_{\Lambda}=\Omega^{2}$ and therefore
$\Omega^{4}=\tau_{\Lambda}^2=\mathds 1_{\textrm{\underline{mod}}\Lambda}$.
In particular, every indecomposable non-projective $\Lambda$-module is $\tau_\Lambda$-periodic with period dividing $2$, so the stable Auslander-Reiten quiver consists only of stable tubes of rank $1$ or $2$.
\end{proof}
\begin{rem} Note that in the proof of Proposition \ref{lema} we also show that symmetric 2-Calabi-Yau-tilted algebras are algebras whose
indecomposable non-projective modules are $\Omega$-periodic with period dividing $4$.
\end{rem}
\begin{proposition}\label{lema2}
Let $\mathcal C$ be a Hom-finite triangulated 2-Calabi-Yau category. If there exists a cluster-tilting object $T\in \mathcal{C}$ such that $\operatorname{End}_\mathcal{C}(T)$ is symmetric,
then the Auslander-Reiten quiver of the category
$\mathcal{C}$ consists only of stable tubes of rank $1$ or $2$.
\end{proposition}
\begin{proof}
Let $T$ be a cluster-tilting object in $\mathcal C$ such that the 2-Calabi-Yau-tilted algebra $\Lambda=\operatorname{End}_{\mathcal C}(T)$ is symmetric. By \cite[Proposition 6.4]{GKO13},
$\operatorname{proj}(\Lambda)=\operatorname{add}(T)$ is a $4$-angulated category with the Nakayama functor
$\nu$ as suspension. It is well known that the Nakayama functor of a
symmetric algebra is the identity.
By \cite[Remark 6.3]{GKO13}, the suspension of the
$4$-angulated category $\operatorname{add}(T)$ satisfies $\Sigma_4=\Sigma^2$; since it
also satisfies $\Sigma_4=\nu=\mathds 1_{\operatorname{add}(T)}$,
we get $\Sigma^2=\mathds 1_{\operatorname{add} (T)}$. Hence
$\tau^2=\Sigma^2=\mathds 1_{\textrm{add}(T)}$.
The result follows from Proposition \ref{lema} and the equivalence
$\mathcal C/(\textrm{add}(\Sigma
T))\stackrel{F}{\cong}\textrm{mod}\Lambda$ proved in
\cite[Proposition 2.1]{KR07}.
\end{proof}
As a consequence of Proposition \ref{lema2}, we obtain a result on the Auslander-Reiten quiver of the generalized cluster category of a closed surface with punctures.
\begin{corollary}\label{cor}
Let $S$ be a closed oriented surface with a non-empty finite collection $M$ of
punctures, excluding only the case of a sphere with $4$ (or less) punctures. The Auslander-Reiten quiver of the generalized cluster category
$\mathcal{C}_{(S,M)}$ consists only of stable tubes of rank $1$ or $2$.
\end{corollary}
\begin{proof}
It was proved in \cite[Proposition 4.7]{Lad12} that there is a particular
triangulation $\mathbb T$ of $(\mathcal S, M)$ such that the
Jacobian algebra $\Lambda=\mathcal P(Q(\mathbb{T}),W(\mathbb{T}))$ is symmetric.
By \cite[Proposition 4.10]{FST08}, \cite[Theorem 7.1]{LF12} and
\cite[Theorem 3.2]{KY11}, the generalized cluster category $\mathcal
C$ does not depend on the choice of the triangulation $\mathbb T$;
hence $\mathcal C$ is equivalent to
the generalized cluster category $\mathcal C_{(Q(\mathbb{T}),W(\mathbb{T}))}$. The
result follows from Proposition \ref{lema2}.
\end{proof}
\begin{rem}\label{simetrica}
A partial converse to Proposition \ref{lema2} is given in \cite[Lemma 2.2 (c)]{BIKR08}. Moreover, this result and Corollary \ref{cor} imply that any Jacobian algebra of a tagged triangulation of a closed surface is not only weakly symmetric, but symmetric.
\end{rem}
Now we prove the first part of the Theorem, that is, that for an arbitrary tagged triangulation $\mathbb T$ the stable Auslander-Reiten quiver of the Jacobian algebra $\mathcal{P}(Q(\mathbb{T}),W(\mathbb{T}))$ consists only of stable tubes of rank 1 or 2.
\begin{proof}[Proof of periodic module category]
Let $\mathbb{T}$ be a triangulation of the marked surface $(\mathcal S,
M)$ and let $\Lambda=\mathcal P(Q(\mathbb{T}), W(\mathbb{T}))$ be the Jacobian algebra
of the triangulation $\mathbb{T}$. It is already known that $\Lambda$ is a
tame algebra (see \cite{GLFS13}), and by Remark \ref{simetrica} $\Lambda$ is symmetric. Therefore it
only remains to prove that its stable Auslander-Reiten quiver
consists only of stable tubes of rank $1$ or $2$. Let $T$ be the
cluster-tilting object of $\mathcal C$ such that $\mathcal
C/(\textrm{add}(\Sigma T))\stackrel{F}{\cong}\textrm{mod}\Lambda$.
The functor $F$ also induces the equivalence
\underline{mod}$\Lambda\cong\mathcal C/(T, \Sigma T)$. Therefore the
statement follows from Corollary \ref{cor}.
\end{proof}
Finally, before we prove the second part of the main result, we recall the
definition of a tame algebra of exponential growth.
The algebra $A$ is \textit{tame} if for every dimension $d\in
\mathbb N$ there is a finite number of $A$-$k[X]$-bimodules $N_1,
\dots, N_{i(d)}$ such that each $N_i$ is finitely generated free
over the polynomial ring $k[X]$ and almost all $d$-dimensional
$A$-modules are isomorphic to $N_i\otimes_{k[X]}k[X]/ (X-\lambda)$
for some $i\in\{1,\dots, i(d)\}$ and some $\lambda\in k$. Let $\mu_A(d)$ be the smallest possible number of such bimodules $N_i$. We say that $A$ is tame of \textit{exponential growth} if there is a real number $r>1$ such that $\mu_A(d)> r^d$ for
infinitely many $d\in \mathbb N$.
Also, we recall the definition of strings and bands in string algebras. Given an arrow $\alpha:i \to j$ in a quiver $Q$, we denote by $\alpha^{-1}:j\to i$ a formal inverse of $\alpha$. Given such a formal inverse $l=\alpha^{-1}$, one writes $l^{-1}=\alpha$. Let $\bar{Q_1}$ be the set of all arrows and their formal inverses; the elements of $\bar{Q_1}$ are called letters. A \emph{string} $w$ of an algebra $kQ/I$ is a sequence $l_1l_2\cdots l_n$ of elements of $\bar{Q_1}$ such that
\begin{itemize}
\item[(W1)] $l_i^{-1}\neq l_{i+1}$ for all $1\leq i< n$;
\item[(W2)] no proper subsequence of $w$ or of its inverse belongs to $I$;
\item[(W3)] end($l_i$) = start($l_{i+1}$) for all $1\leq i< n$.
\end{itemize}
A string $w$ is said to be \emph{cyclic} if all powers $w^m$, $m\in\mathbb N$, are strings. Given a cyclic word $w$, the powers $w^m$ with $m\geq 2$ are said to be \emph{proper powers}. A cyclic word $w$ is said to be \emph{primitive} provided it is not a proper power of some other word. A \emph{band} is a primitive cyclic string.
\begin{rem}\label{growth}
From the proof of \cite[Theorem 3.6]{GLFS13} it follows that mutations of quivers with potentials also preserve exponential growth; therefore it is enough to exhibit, for each closed surface, one particular triangulation which induces a Jacobian algebra of exponential growth.
\end{rem}
\begin{lemma}\label{exponential}
Let $A$ be a finite dimensional algebra and $A'$ be a quotient of $A$. If $A'$ is of exponential growth, then so is $A$.
\end{lemma}
The proof follows from the fact that any $A'$-module is also an
$A$-module and that if $L\cong N$ as $A$-$k[X]$-bimodules, then $L\cong N$
as $A'$-$k[X]$-bimodules. Therefore $\mu_{A'}(d)\leq \mu_A(d)$.
Our goal is to find, for each closed surface with marked points $(S, M)$, a triangulation $\mathbb{T}$ such that some quotient of the Jacobian algebra $\mathcal{P}(Q(\mathbb{T}),W(\mathbb{T}))$ is an algebra of exponential growth. We distinguish
two cases: the sphere with 5 punctures and the other closed
surfaces.
\begin{lemma}
For an arbitrary triangulation of a sphere with 5 punctures, the
Jacobian algebra $\mathcal{P}(Q(\mathbb{T}),W(\mathbb{T}))$ is an algebra of exponential
growth.
\end{lemma}
\begin{proof}
Consider the skewed-gentle triangulation $\mathbb{T}$ of Figure
\ref{esfera} (see the definition of skewed-gentle triangulations in \cite[Section 6.7]{GLFS13}) and the Jacobian algebra $\Lambda=\mathcal{P}(Q(\mathbb{T}),
W(\mathbb{T}))$. This skewed-gentle triangulation $\mathbb{T}$ was already studied in \cite{GLFS13}.
\begin{figure}[ht!]
\centering
\subfloat{
\begin{tikzpicture}[scale=0.7]
\draw[gray] (3.5,3.5) circle (3.5cm);
\draw[gray](0,3.5) arc (180:360: 3.5cm and 0.5cm);
\draw[gray,thin,dashed] (7,3.5) arc (0:180: 3.5cm and 0.5cm);
\draw[thick, black] (3.5,7) .. controls +(180:3.6cm) and +(180:3.6cm) .. (3.5,0);
\draw[thick, black] (3.5,7) .. controls +(180:.65cm) and +(90:3cm) .. (1.7,3.06);
\draw[thick, black] (3.5,7) .. controls +(0:1cm) and +(90:3cm) .. (5.8,3);
\draw[thick, black] (3.5,3) .. controls +(-90:1cm) and +(90:2cm) .. (3.5,0);
\draw[thick, black] (3.5,7) .. controls +(-120:2cm) and +(130:2cm) .. (3.5,0);
\draw[thick, black] (3.5,7) .. controls +(-45:2cm) and +(15:1cm) .. (3.5,0);
\draw[very thick, black] (3.5,7) .. controls +(0:.3cm) and +(90:3.5cm).. (5.3,3.3);
\draw[very thick, black] (5.3,3.3) .. controls +(260:.8cm) and +(-90:.7cm).. (6.3,3.3);
\draw[very thick, black] (6.3,3.3) .. controls +(90:3cm) and +(-5:.9cm).. (3.5,7);
\draw[very thick, black] (3.5,7) .. controls +(180:1.2cm) and +(90:2cm).. (1.3,3.3);
\draw[very thick, black] (1.3,3.3) .. controls +(270:.7cm) and +(-95:.7cm).. (2.1,3.1);
\draw[very thick, black] (2.1,3.1) .. controls +(85:3cm) and +(-150:.5cm).. (3.5,7);
%
\draw[very thick, black] (3.5,0) .. controls +(-70:.4cm) and +(0:1cm).. (3.5,3.3);
\draw[very thick, black] (3.5,0) .. controls +(-100:.4cm) and +(170:1cm).. (3.5,3.3);
\filldraw [black] (3.5,7) circle (2pt)
(3.5,0) circle (2pt)
(1.7,3.06) circle(2pt)
(5.8,3.1) circle(2pt)
(3.5,2.98) circle(2pt);
\draw (3.5,7.5) node {\tiny$p_{4}$};
\draw (3.5,-.5) node {\tiny$p_{5}$};
\draw (1.7,2.3) node {\tiny$p_{1}$};
\draw (3.5,3.7) node {\tiny$p_{2}$};
\draw (5.8,2.5) node {\tiny$p_{3}$};
\end{tikzpicture}
}
\subfloat{
\begin{tikzpicture}
\matrix (m)[matrix of math nodes, row sep=3em,column sep=3em,ampersand replacement=\&]
{\& 2\& \& 5\& \& 8\&\&\\
1\&\ \& 4\&\ \&7\&\&1\\
\&3\&\ \&6\&\&9\&\\};
\path[-stealth]
(m-2-1) edge node[above]{$a_1$} (m-2-3)
(m-1-2) edge node[above]{$c_1$} (m-2-1)
(m-2-3) edge node[right]{$b_1$}(m-1-2) edge node[left]{$b_3$}(m-1-4) edge node[left]{$b_4$}(m-3-4) edge node[right]{$b_2$}(m-3-2)
(m-3-2) edge node[left]{$c_2$}(m-2-1)
(m-3-4) edge node[right]{$c_4$}(m-2-5)
(m-2-5) edge node[above]{$a_2$}(m-2-3)
(m-2-5)edge node[above]{$a_3$}(m-2-7)
(m-1-6) edge node[left]{$c_5$}(m-2-5)
(m-2-7) edge node[right]{$b_6$} (m-3-6)
(m-1-4) edge node[right]{$c_3$}(m-2-5)
(m-2-7) edge node[right]{$b_5$}(m-1-6)
(m-3-6) edge node[left]{$c_6$}(m-2-5);
\end{tikzpicture}
}
\caption{Triangulation $\mathbb{T}$ of the sphere with 5 punctures}
\label{esfera}
\end{figure}
Let $I$ be the ideal of $k\langle\langle Q(\mathbb{T})\rangle\rangle$ generated
by the set $\partial(W')$ of cyclic derivatives of $W'$, where
$W'=b_5c_5a_2b_1c_1+a_1b_4c_4a_3$. Then the quotient algebra
$\Lambda'=\Lambda/ I$ is isomorphic to $k\langle Q' \rangle/J$, where
$Q'$ is the quiver in Figure \ref{gentle} and $J$ is the ideal in
$k\langle Q' \rangle$ generated by $\epsilon_i^2-\epsilon_i$,
$a_ib_i$, $b_ic_i$ and $c_ia_i$ for $i=1,2,3$ and the set
$\{b_2\epsilon_2c_2a_3$, $\epsilon_2c_2a_3a_1$, $c_2a_3a_1b_2$, $a_1b_2\epsilon_2c_2$, $\epsilon_3c_3a_2b_1\epsilon_1c_1$, $c_3a_2b_1\epsilon_1c_1b_3$, $a_2b_1\epsilon_1c_1b_3\epsilon_3$, $b_1\epsilon_1c_1b_3\epsilon_3c_3$, $\epsilon_1c_1b_3\epsilon_3c_3a_2$, $c_1b_3\epsilon_3c_3a_2b_1$, $b_3\epsilon_3c_3a_2b_1\epsilon_1\}$.
Hence the quotient $\Lambda'=\Lambda/I$ is a skewed-gentle algebra (see the definition of skewed-gentle algebras in \cite{GP99}).
\begin{figure}[ht!]
\begin{tikzpicture}
\matrix (m)[matrix of math nodes, row sep=3em,column sep=3em,ampersand replacement=\&]
{\& 4\& \& 5\& \& 6\&\&\\
1\&\ \& 2\&\ \&3\&\&1\\};
\path[-stealth]
(m-2-1) edge node[above]{$a_1$} (m-2-3)
(m-1-2) edge node[above]{$c_1$} (m-2-1)
(m-2-3) edge node[right]{$b_1$}(m-1-2) edge node[left]{$b_2$}(m-1-4)
(m-2-5) edge node[above]{$a_2$}(m-2-3)edge node[above]{$a_3$}(m-2-7)
(m-1-6) edge node[left]{$c_3$}(m-2-5)
(m-1-4) edge node[right]{$c_2$}(m-2-5)
(m-2-7) edge node[right]{$b_3$}(m-1-6);
\path
(m-1-2) edge [loop above] node {$\epsilon_1$} (m-1-2)
(m-1-4) edge [loop above] node {$\epsilon_2$} (m-1-4)
(m-1-6) edge [loop above] node {$\epsilon_3$} (m-1-6);
\end{tikzpicture}
\caption{Skewed-gentle quiver $Q'$}
\label{gentle}
\end{figure}
The indecomposable representations of skewed-gentle (or, more generally, clannish) algebras are described by a combinatorial rule in terms of certain words, similar to the more widely known case of special biserial algebras; see \cite{CB89} for more details. Thus, in our situation it is enough to show that the corresponding clan $C=(k, Q',
S_p,(q_b), \leq)$ (see \cite[Definition 1.1]{CB89}) admits two
bands such that any arbitrary combination of them is again a band.
Let $C=(k, Q', S_p,(q_b\mid b\in S_p), \leq)$ be the clan of the
algebra $\Lambda'$, where the special loops $S_p$ are $\epsilon_1,
\epsilon_2$ and $\epsilon_3$ and with the following relations:
$a_i<b_i^{-1}$, $b_i<c_i^{-1}$ and $c_i<a_i^{-1}$ for $i=1,2,3$.
Recall that a word in a clan $C$ is a formal sequence $w_1w_2 \dots w_n$
of letters with $n>0$, $s(w_i)=t(w_{i+1})$ for each $i$, and such
that for each $i$ the letters $w_i^{-1}$ and $w_{i+1}$ are
incomparable.
Now, consider the following bands $\alpha=a_1a_2^{-1}a_3$ and
$\beta=a_1b_2\epsilon_2^*c_2c_3^{-1}\epsilon_3^*b_3^{-1}$. Observe
that $a_3^{-1}$ and $a_1$ are incomparable and also $b_3$ and $a_1$
are incomparable, so the products $\alpha\beta$ and $\beta\alpha$
are well defined and are also bands; therefore $\Lambda'$ is an
algebra of exponential growth and, by Lemma \ref{exponential}, so is
$\Lambda$. (One needs a number-theoretic argument to prove that string or skewed-gentle algebras having two independent bands are of exponential growth; see \cite[Lemma 1]{Sk87}.) Finally, as mentioned in Remark \ref{growth}, mutations of
quivers with potentials preserve the exponential growth property, and flips of tagged triangulations are compatible with mutations of quivers with potentials (cf. \cite[Theorem 30]{LF09}); hence any
Jacobian algebra associated to a triangulation of a sphere with 5
punctures is also of exponential growth.
\end{proof}
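For intuition only, here is a rough counting sketch (it is not a substitute for the argument cited above): if every word in the two letters $\alpha$ and $\beta$ is again a band, then the number of pairwise distinct primitive cyclic words of length $m$ in $\{\alpha,\beta\}$ grows roughly like $2^{m}/m$, since there are $2^m$ words of length $m$ and cyclic rotation identifies at most $m$ of them. As each such band contributes a one-parameter family of indecomposable modules whose dimension is bounded by a fixed multiple of the length of the band, $\mu_{\Lambda'}(d)$ grows at least exponentially in $d$.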
For an ideal triangulation $\mathbb{T}$, the \textit{valency}
val$_{\mathbb{T}}(p)$ of a puncture $p\in M$ is the number of arcs
in $\mathbb{T}$ incident to $p$, where each loop at $p$ is counted twice.
\begin{rem}\label{quivers}
Let $\mathbb{T}$ be a triangulation with no self-folded triangles of a
marked surface $(S, M)$ with empty boundary and let $(Q(\mathbb{T}), W(\mathbb{T}))$ be the quiver with potential associated to the triangulation $\mathbb{T}$.
\begin{enumerate}
\item The quiver $Q(\mathbb{T})$ is block-decomposable with all blocks of type II, where a block of type II is a 3-cycle; see \cite{FST08} for details.
\item If every puncture $p\in M$ has valency at least 3, then any arrow $\alpha$ of $Q(\mathbb{T})$
has exactly two arrows $\beta,\gamma$ starting at the
terminal vertex of $\alpha$ and exactly one arrow $\delta$ ending
at the terminal vertex of $\alpha$. Following Ladkani in
\cite{Lad12} there are two functions $f, g:Q_1(\mathbb{T})\to Q_1(\mathbb{T})$ such that
$\alpha f(\alpha)f^2(\alpha)$ is a 3-cycle arising from a triangle in
$\mathbb{T}$ and $(g^{n_\alpha-1}(\alpha))(g^{n_\alpha-2}(\alpha))\dots
(g(\alpha))(\alpha)$ is a cycle surrounding a puncture
$q$, where $n_\alpha=\min\{r>0 \mid g^r(\alpha)=\alpha\}$.
\item If every puncture has valency at least four, then there are no commutative relations involving only paths of length two.
\end{enumerate}
\end{rem}
\begin{proof}[Proof of exponential growth property]
It was proven in \cite[Proposition 5.1]{Lad12} that if the marked
surface $(S,M)$ is not a sphere with 4 or 5 punctures, then it has a
triangulation $\mathbb{T}$ with no self-folded triangles in which each
puncture $p\in M$ has valency at least four.
Let $A=\mathcal P(Q(\mathbb{T}),W(\mathbb{T}))$ and let $f, g: Q_1(\mathbb{T})\to Q_1(\mathbb{T})$ be functions as in Remark \ref{quivers} part
2). Let $I$ be the ideal in $A$ generated by the relations $\alpha
f(\alpha)$ for every arrow $\alpha$ in $Q_1(\mathbb{T})$ and consider the
quotient $A'=A/I$. It is clear that this quotient is a string
algebra.
To prove that $A'$ is an algebra of exponential growth it is enough
to prove that $A'$ admits two bands $\xi$ and $\eta$ such that any arbitrary combination of them is again a band.
Consider an arrow $\alpha:i\to j$ of the quiver $Q(\mathbb{T})$; we denote by
$\mathbb{T}_{\alpha}$ the following piece of the triangulation $\mathbb{T}$. The
vertices of $\alpha$ are two arcs of a triangle $\triangle_\alpha$
of $\mathbb{T}$, and if $q$ and $p$ are the endpoints of the remaining arc
of this triangle, we let $\mathbb{T}_{\alpha}$ be the set of arcs of $\mathbb{T}$
that have $q$ or $p$ as one of their endpoints. We observe that the
arc with endpoints $p$ and $q$ is a side of exactly two triangles,
one of them is the triangle $\triangle_\alpha$ and we denote by
$\triangle_\delta$ the other one. In Figure \ref{triangulation}, we
show the piece of triangulation $\mathbb{T}_\alpha$.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=2.6][
every node/.style={align=center}]
\draw (0,0) -- (1,1) plot coordinates {(0,0) (1,1) (2,0) (0,0) (1,-1) (2,0)};
\draw(0,0)--(-1,1) plot coordinates{(0,0) (-1,-1)};
\draw(2,0)--(3,1) plot coordinates{(2,0) (3,-1)};
\draw[->](1.02,0.04)--node[right]{\small$\beta$}(1.49,0.49);
\draw[->, thick](1.4,0.5)-- node[above] {\small$\alpha$} (0.57,0.5);
\draw[->](0.52,0.47)--node[left]{\small$\gamma$}(0.97,0.03);
\draw[->,thick](0.55,-0.5)--node[below]{\small$\delta$}(1.45,-0.5);
\draw[->](1.45,-0.45)--node[right]{\small$g^{n_\beta-1}\beta$}(1.01,-0.03);
\draw[->](0.98,-0.03)--node[left]{\small$g\gamma$}(0.5,-0.45);
\foreach \x in {-0.1,0,0.1}
\draw (-0.5,\x) circle (0.005);
\foreach \x in {-0.1,0,0.1}
\draw (2.5,\x) circle (0.005);
\draw[->, thick](1.55,0.5)--node[above]{\small$g\beta$}(2.45,0.5);
\draw[->, thick](2.45,-0.5)--node[below]{\small$g^{n_\beta-2}\beta$}(1.55,-0.5);
\draw[->, thick](0.45,-0.5)--node[below]{\small$g^{2}\gamma$}(-0.45,-0.5);
\draw[->, thick](-0.45,0.5)--node[above]{\small$g^{n_\gamma-1}\gamma$}(0.45,0.5);
\filldraw(2,0) circle(1pt) node[right]{\small p};
\filldraw(0,0) circle(1pt) node[left]{\small q};
\filldraw(1,1) circle(1pt);
\filldraw(1,-1) circle(1pt);
\end{tikzpicture}
\caption{The piece of triangulation $\mathbb{T}_{\alpha}$}
\label{triangulation}
\end{figure}
Denote by $\alpha\gamma\delta$ the 3-cycle arising from the triangle
$\triangle_\alpha$. Surrounding the puncture $p$, there is a cycle
$w(p)$ starting at $i$ which is denoted by
$$(\gamma)(g\gamma)(g^2\gamma)(g^3\gamma)\cdots(g^{n_\gamma-1}\gamma),$$
and surrounding $q$ there is a cycle $w(q)$ starting at $j$
which is denoted by
$$(g\beta)(g^2\beta)\cdots(g^{n_{\beta}-2}\beta)(g^{n_\beta-1}\beta)(\beta).$$
We denote by
$\rho_1(\alpha)$ the word
$$(g^2\gamma)(g^3\gamma)\cdots(g^{n_\gamma-1}\gamma),$$ by
$\rho_2(\alpha)$ the word
$$(g\beta)(g^2\beta)\cdots(g^{n_{\beta}-2}\beta)$$ and by
$\delta$ the arrow $fg\gamma$, which is part of the 3-cycle arising
from the triangle $\triangle_\delta$.
Then denote by $\xi(\alpha)$ the word
$$(\alpha)(\rho_1(\alpha))^{-1}(\delta)(\rho_2(\alpha))^{-1}.$$
Observe that $\rho_1(\alpha)$ and $\rho_2(\alpha)$ are defined for any arrow $\alpha\in Q_1$, therefore we can define $\xi(\alpha)$ for any arrow $\alpha$.
In a similar way, we construct a word $\eta$ for the
arrow $g\beta$; that is, there is a piece of triangulation $\mathbb{T}_{g\beta}$, paths $\rho_1(g\beta)$
and $\rho_2(g\beta)$, and an arrow $\delta'$ such that
$$\eta=(\rho_2(g\beta))(\delta')^{-1}(\rho_1(g\beta))(g\beta)^{-1}.$$
Using the notation of $\xi(g\beta)$, we have that $\eta=\xi(g\beta)^{-1}$.
Observe that $$\rho_2(g\beta)=(\alpha)(g\alpha)\dots(g^{n_\alpha-3}\alpha).$$
Both words are bands and they can be composed arbitrarily, so
$A'$ is an algebra of exponential growth and, by Lemma
\ref{exponential}, so is $A$.
The result follows from the fact that exponential growth is preserved by mutations (see Remark \ref{growth}).
\end{proof}
\begin{rem}
\begin{itemize}
\item [(1)] According to the Theorem of this work and \cite[Proposition 2.10]{FST08}, for each closed surface $S$ of genus $g$ with $p$ punctures, excluding only the case of a sphere with 4 punctures, and each tagged triangulation $\mathbb{T}$, the Jacobian algebra $\mathcal{P}(Q(\mathbb{T}), W(\mathbb{T}))$ is symmetric and tame, with a 2-periodic module category and with $6(g-1)+ 3p$ simple modules. However, $\mathcal{P}(Q(\mathbb{T}), W(\mathbb{T}))$ is none of the following algebras:
\begin{itemize}
\item[a)] an algebra of quaternion type: in the case of a torus with one puncture, Ladkani showed in \cite{Lad12} that the Cartan matrix of the associated Jacobian algebra is singular, so it is not of quaternion type by definition; in the other cases, the Jacobian algebra has at least 6 simple modules, in contrast to algebras of quaternion type, which have at most 3 simple modules (see \cite[Theorem]{Erd88}).
\item[b)] an algebra socle equivalent to an algebra of tubular type, because such algebras are of polynomial growth.
\item[c)] an algebra socle equivalent to an algebra of Dynkin type, since such algebras are of finite representation type.
\end{itemize}
Therefore, the existing characterization of algebras which are symmetric, tame and whose non-projective indecomposable modules are $\Omega$-periodic was not complete; these Jacobian algebras form a new family with these properties.
\item [(2)] A similar statement to the Theorem is known to hold for the sphere with $4$
punctures, except
that in this case the potentials depend also on the choice of a parameter
$\lambda\in k\setminus\{0,1\}$. In this case the Jacobian algebras
$\mathcal{P}(Q(\mathbb{T}),W(\mathbb{T},\lambda))$ are (weakly) symmetric of tubular type
$(2,2,2,2)$ and hence of polynomial (in fact linear) growth, and the category $\mathcal{C}_{(Q,M,\lambda)}$ is a tubular cluster category
of type $(2,2,2,2)$; see \cite{GeGo13} and \cite{BG09}.
\item [(3)] A result similar to the Theorem and the Corollary was
announced by Ladkani in the abstract of the Second ARTA conference; see \cite{Lad13}.
\item[(4)] The results of this work were presented at the Second and Third ARTA conferences.
\end{itemize}
\end{rem}
\begin{acknowledgements}
I want to thank Christof Geiss for suggesting the use of $n$-angulated
categories for solving this problem, for suggesting the inclusion of the
exponential growth property of this kind of algebras, and for valuable
comments. I also want to thank Sonia Trepode for her constant
support.
\end{acknowledgements}
\section{Introduction}
The study of spinodal decomposition and coarsening in quenched Ising
models has been vigorously pursued\cite{reviews}.
Binder and Stauffer\cite{Binder}
predicted that, following a quench at $t=0$,
the structure function of a coarsening system would grow
with a single length scale $L(t)$. Numerical studies have verified
that, when this length scale is removed from the results, the reduced
structure factor is very nearly constant in time\cite{Lebowitz}.
Lifshitz and Slyozov\cite{Lif-Slov} gave a further prediction:
domain size should asymptotically grow as
$L \sim t^{1/ 3}$
for a conserved order parameter (COP) model.
Monte Carlo simulations
\cite {Barkema} have checked this result.
All of this theory describes the
equilibrium case, where nothing acts on the coarsening process
besides a thermal bath. In real systems, however, phase segregation
can be affected by several influences, including gravity, elastic stress,
or electric fields. Such forces often push material around, instead
of preferring one phase over another. Given this wide area of potential
experimental application, it seems reasonable to ask: what happens when you
take a COP Ising model and apply a uniform force to push
particles (up spins) across the lattice?
This type of model was first introduced by Katz, Lebowitz,
and Spohn\cite{KLS}, who found that the external driving force raised $T_c$.
Subsequent research has carefully investigated the ordering phase transition
of this model.\cite{Schmittman} In addition to work which analyzed interface
roughness\cite{Leung etal} and domain shape\cite{Boal},
one study has checked to see whether the
scaling and growth law results of the equilibrium model can carry
over to the nonequilibrium one\cite{Yeung etal}.
Such studies of the driven diffusive lattice gas (DDLG)
have almost always employed the same Monte Carlo dynamics --- those of Kawasaki
\cite{Kawasaki}. For this specific class of model, there is no barrier
to hinder particle motion along the interface between two phases, and
domains tend to elongate along the direction of the force. When considering
a directionally averaged measure of domain size, however, Yeung {\it et al.}
\cite{Yeung etal}
found that the Kawasaki form of the DDLG showed familiar $L_{av} \sim t^{1/3}$
coarsening behavior.
The present study finds that this slow rate of
coarsening, as well as the orientation of domains, is model-dependent.
Noting that nonequilibrium problems have an inherent sensitivity to dynamics,
we have studied a DDLG with particle mobility that goes down when
the number of bonds to neighbors goes up. The resulting motion has free
diffusion of single particles across empty spaces.
(A similar bond--counting approach
was used to model electromigration of thin films on
semiconductors\cite{Ohta}, but that work did not study coarsening.)
In our model, the
external driving force bunches domains up along the field (so that they
lengthen in the transverse direction), and it can push entire domains of
vacancies across the lattice so that they sweep up other vacancies
and grow quickly. For moderate lattice fillings, the resulting domain radius
grows as $L_{av} \sim t^{\zeta}$, where $\zeta$ varies roughly from
0.4 to 0.7.
For high concentrations of particles,
the early stages of growth can be {\it exponential} in time.
When we approached the problem of coarsening in an electric field, we were
interested in fast motion of isolated particles through the middle of a
domain, rather than along an interface. Such bulk diffusion was relevant to
the electromigration studies of Moeckly, Lathrop,
and Buhrman\cite{Moeckly}. In their room temperature observations of
YB$_2$Cu$_3$O$_{7- \delta}$ thin film devices,
they found that a small electric bias ($\sim 10^3 V/$cm)
could produce macroscopic
motion of oxygen. The associated force was so tiny that it would only have
moved an oxygen atom a few lattice constants per second in a fully oxygenated
sample, where the activation energy for oxygen motion is about 1 eV
\cite{Cannelli} and
the diffusion constant is about $10^{-12} {\rm cm^2/s}$ near room temperature
\cite{Rothman}. In an oxygen depleted region, however, small forces may
have a large impact: internal friction measurements give an
activation energy of 0.1 eV for motion of a completely isolated oxygen
atom, and the chemical diffusion data of LaGraff {\it et al.}\cite {LaGraff}
suggest that the diffusion constant of YBCO can rise by more than an order
of magnitude as the oxygen in the chain plains is depleted.
To study the effects of such differences in mobility, we wanted the simplest
model that could describe a
density-dependent diffusion constant. We therefore chose a two
dimensional DDLG with
modified continuous Monte Carlo dynamics\cite{Barkema}.
Thus, we group atoms according
to their coordination $q$, increment time by an amount which increases
as the number of highly mobile atoms decreases,
and propose a move from list $q$ with probability
$$ P[q]= dt (4-q) N(q) e^{-4Jq},\eqno(1)$$
where $N(q)$ is the number of $q$
--coordinated atoms. This continuous Monte Carlo scheme
satisfies detailed balance, so the equilibrium state at $\Delta$ = 0
is that of the nearest--neighbor Ising model:
${\cal H} = -J \sum_{\langle ij\rangle} S_i S_j$. This dynamics
allows atoms with low coordination
to move quickly. It also produces a basic particle--hole
asymmetry, illustrated by the fact that isolated atoms can zip
across vacant spaces (rate 1), while isolated holes hardly move
(rate $e^{-12J}$).
We include the electric potential by accepting all proposed forward
moves, a fraction $e^{-\Delta}$ of the proposed sideways moves, and only
$e^{-2\Delta}$ of the proposed moves against the field, where $2\Delta kT$
is a local potential difference along the field\cite{timescale}.
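For concreteness, the following is a minimal sketch of one way to implement a single move of the dynamics just described (coordination lists, the rates of Eq.~(1), and the field-dependent acceptance); the data structures, the convention $dt=1/\sum_q {\rm rate}(q)$, and the brute-force rebuilding of the coordination lists at every step are our own illustrative choices rather than the code actually used for the simulations reported here.
\begin{verbatim}
import math, random

J, DELTA, L = 1.0, 0.5, 32     # coupling, half the per-site potential drop
                               # (in units of kT), and lattice size
occ = [[random.random() < 0.9 for _ in range(L)] for _ in range(L)]

def neighbors(x, y):
    return [((x + 1) % L, y), ((x - 1) % L, y),
            (x, (y + 1) % L), (x, (y - 1) % L)]

def coordination(x, y):
    return sum(occ[nx][ny] for nx, ny in neighbors(x, y))

def mc_step(t):
    # group mobile atoms by coordination q = 0..3 (a production code
    # would update these lists incrementally instead of rebuilding them)
    lists = {q: [] for q in range(4)}
    for x in range(L):
        for y in range(L):
            if occ[x][y] and coordination(x, y) < 4:
                lists[coordination(x, y)].append((x, y))
    # list weights following P[q] = dt (4-q) N(q) exp(-4Jq)
    rates = {q: (4 - q) * len(lists[q]) * math.exp(-4.0 * J * q)
             for q in range(4)}
    total = sum(rates.values())
    if total == 0.0:
        return t
    dt = 1.0 / total
    q = random.choices(list(rates.keys()), weights=list(rates.values()))[0]
    x, y = random.choice(lists[q])
    nx, ny = random.choice([(a, b) for a, b in neighbors(x, y)
                            if not occ[a][b]])
    # field bias: forward (+x) moves always accepted, sideways moves with
    # probability exp(-DELTA), backward moves with probability exp(-2 DELTA)
    dx = (nx - x) % L
    accept = 1.0 if dx == 1 else (math.exp(-DELTA) if dx == 0
                                  else math.exp(-2.0 * DELTA))
    if random.random() < accept:
        occ[x][y], occ[nx][ny] = False, True
    return t + dt
\end{verbatim}
With this convention the time increment $dt$ grows as the number of highly mobile atoms decreases, in the spirit of the prescription above.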
Motivated by the YBCO experiments, we have focused much of our
attention on the limits of high particle concentration, relatively low fields,
and strong coupling to nearest neighbors (i.e. a highly
concentration--dependent mobility).
Figure \ref{snapshots} shows two of the interesting behaviors we found. In
both pictures, the black regions are vacancy clusters which move collectively
downwards as an external force pushes (white) particles up. In the
symmetry-breaking field, the vacancy blocks become short and wide
\cite{widening}. The pictures on the left (Fig. 1a and 1b) have 90\% lattice
filling. Here, isolated runaway processes dominate:
large domains move much faster than small ones and sweep up many
vacancies, thus becoming even bigger and faster. We discuss such
processes in the following section on exponential growth. The pictures
on the right have 70\% lattice filling.
Here, blocks of all length scales are moving and combining, and a mean
domain radius grows as a power in time.
Note that the late snapshot at the
bottom (Fig. 1d) resembles a scaled--up version of the snapshot at the top
(Fig. 1c).
In section III, we evaluate scaling collapses
and we construct a simple picture for domain growth in this regime.
Finally, in section IV, we will look at small driving forces.
In this limit, we find that
the early stages of growth show the $L \sim t^{1/3}$ behavior expected for
the zero field case, and then a crossover to fast growth occurs.
We interpret this crossover as the time at which the area
swept out through linear motion along the field equals the area visited
by diffusive motion, and we derive the field dependence of the domain size
at crossover.
\section{Exponential Growth}
The runaway growth of the high filling regime
is fundamentally tied to a separation of time-scales
produced by faceting.
The pictures in Figure \ref{snapshots} were generated with strong coupling
between neighbors, so atoms with two neighbors moved much more quickly
than atoms with three. In this regime, the base of each vacancy domain
tends to be
flat, with all atoms having three neighbors. After a stagnant
period, one of these strongly pinned atoms pops out of the base and leaves
behind two doubly coordinated particles. The remaining atoms then have
a lower barrier to motion. One by one, the rest of the row soon dislodges
and moves rapidly across the empty space.
Under such conditions, one would
expect the velocity of a region to be proportional to its horizontal width
(i.e. the number of ways to produce the initial break).
Figure \ref{velvs_width} shows
this behavior at low temperatures for isolated vacancy domains.
Periodically, a domain will collide with a vacancy in its path and that
will provide the initial break to move the domain through an extra row
of atoms. Again, the rate of such motion increases linearly with the width of
the domain.
Domain size is therefore a crucial factor in determining growth.
Besides moving more quickly to sweep up new vacancies, wide blocks
clear larger regions as they move. In general, we expect:
$${dn \over dt} = w\cdot \Delta v \cdot c.\eqno(2)$$
Here, $n$ is the area of the block in question, $w$ is its width,
$c$ is the concentration of
vacancies in the region ahead of it, and $\Delta v$ is the relative
velocity of the block we are describing (in comparison to that of vacancies
which it overtakes). For low temperatures and low vacancy concentrations,
small vacancy blocks will move at negligible velocities and large blocks will
move with $v \sim w$. In this regime we expect:
$${dn \over dt} \propto w\cdot w \cdot c.\eqno(3)$$
If the width and height of a region scale similarly,
then the above result gives:
${dn / dt} \propto n$ or $n$ growing exponentially in time.
In practice, we find that width grows more quickly than height.
This tendency should only enhance the rate of growth.
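To spell out the intermediate step: under the idealized assumption that width and height scale together, $w^2 \propto n$, so that at fixed vacancy concentration $c$ equation (3) becomes
$${dn \over dt} \propto c\, n , \qquad {\rm i.e.} \qquad n(t) \propto n(0)\, e^{\kappa c t}$$
for some constant $\kappa$, which is the exponential growth referred to above.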
To check this prediction against our simulation, we calculated two--point
correlation functions in both horizontal and vertical directions.
For a rough measure of length-scales, we used the width at quarter max
of each of our correlation functions\cite {lengthscale}. Figure \ref{expgrowth}
shows vertical block size, horizontal block size, and
the product of the two (a typical domain area) as a function of time.
To run the simulation efficiently enough to observe a large range of size,
we used a fast model with nearly infinite coupling (where uncoordinated
atoms always moved first, and then all singly coordinated atoms moved).
As figure \ref{90percent_3temps}
demonstrates, we found the same behavior at low temperature for
standard finite--coupling
dynamics. Note that the vertical scale on these
plots is logarithmic, so the straight line observed does indicate
exponential growth.
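As a point of reference, the following sketch shows one way to extract such directional length scales from a single configuration; the FFT-based correlation estimator and the linear interpolation to the quarter-maximum point are our own illustrative choices rather than a description of the exact analysis code.
\begin{verbatim}
import numpy as np

def correlation_2d(occ):
    # periodic two-point correlation C(dx, dy) of the occupation field,
    # normalized so that C(0, 0) = 1
    phi = occ.astype(float) - occ.mean()
    fk = np.fft.fft2(phi)
    corr = np.fft.ifft2(fk * np.conj(fk)).real / occ.size
    return corr / corr[0, 0]

def width_at_quarter_max(c_line):
    # smallest displacement at which the correlation falls below 1/4,
    # linearly interpolated between lattice points
    for r in range(1, len(c_line) // 2):
        if c_line[r] < 0.25:
            c0, c1 = c_line[r - 1], c_line[r]
            return (r - 1) + (c0 - 0.25) / (c0 - c1)
    return len(c_line) // 2

# usage sketch (occ is a 2D 0/1 array; which index is "horizontal"
# is a convention):
# corr = correlation_2d(occ)
# width_h = width_at_quarter_max(corr[:, 0])
# width_v = width_at_quarter_max(corr[0, :])
# typical_area = width_h * width_v
\end{verbatim}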
Notice that the runaway growth does not continue indefinitely. For very large
domains, the time required to move a full row of atoms from bottom to
top is comparable to the time between initial ``three--moves''. If the
rate of $q=2$ moves is the limiting step, then motion from each kink
in the domain can proceed independently and large blocks will approach a
terminal velocity. Crossover to this behavior will occur when the
time required to move an entire row of atoms through sequential $q=2$ moves
is approximately equal to the expected waiting time before one atom
in that row moves from a triply coordinated site. The slowdown in growth
in figure \ref{expgrowth} occurs when these two time-scales are comparable
to each other. Figure \ref{velvs_width} shows the
velocity of a vacancy domain in an empty lattice as a function of domain
width at two different temperatures. Note that the low temperature plot
is fairly linear, while the high temperature results do indeed approach a
terminal velocity. Figure \ref{90percent_3temps}
shows simulation results with a
domain roughening crossover which varies with temperature.
There is another way to produce rough domain bases and eliminate exponential
growth. In the case where vacancy clusters are constantly running into
one another,
their bases will always contain doubly coordinated atoms, and $q=2$ moves
will again be the rate limiting step. Figure \ref{near82_growth}
shows the changeover from
exponential to power-law growth as the number of vacancies increases.
Looking at configurations with $82\%$ filling, we see isolated runaway
domains whose acceleration slows as their bases become rough.
This behavior makes intuitive sense because,
in runaway growth, the size of large domains increases more quickly than
the spacing between small ones, so large domains can grow to be larger than
a typical interdomain spacing. We will find later that horizontal correlation
functions in the power-law growth regime nearly scale,
so the relationship between
domain width and horizontal domain spacing remains nearly fixed.
This is consistent with our observation that growth which starts in
the crowded, power-law regime tends to stay power-law.
\section{Power-law growth at lower fillings}
We have found that the exponential growth regime occurs for low temperatures
and low vacancy concentrations, where only a few
domains become large enough to respond strongly to the external field.
At lower particle fillings, most vacancies will join clumps soon
after coarsening begins, since most of the vacancies are connected
through atoms with single or double coordination at quench. At such
fillings, vacancy domains no longer move through a nearly stationary
sprinkling of tiny vacancy clumps. Instead, the lattice contains a
distribution of block sizes, most of which are moving steadily
in the field. Frequent collisions between domains provide sources of
fast moving atoms, so that motion is not characterized by long
waiting times with flat domain bases. Thus, we no longer expect the
velocity of a domain to be proportional to its width.
Figures \ref{collapses} and
\ref{near70growth} show results from the simulation at lower fillings.
The first, a check for dynamical scaling, gives clear evidence that
domains of all length scales are growing at similar rates.
The horizontal correlation function shows strong hints of scaling,
but the vertical correlation function has an anticorrelation dip which
grows more pronounced with time. (That is, the regions between vacancy
domains are becoming more thoroughly swept out.)
Although growth in this regime is not completely self--similar,
a scaling picture may be a useful first step towards describing
coarsening at these fillings.
Figure \ref{near70growth} shows characteristic domain area as a function
of time for 60\%, 70\%, and 80\% concentrations.
This growth is significantly faster than the
$t^{1/3}$ behavior of a zero field model. If we fit growth at each
concentration to ${\rm domain~area} \sim t^{2\zeta}$, $\zeta$ varies from
about $.65$ at 60\% filling to approximately $.75$
at 70\% and 80\% filling.
Although the behavior of our model in this moderately full regime is
complex, we have tried to piece together a simple picture which
would mimic the observations described above. We start with the question:
in a scaling regime where growth is still dominated by catch--up events,
what kind of velocity distribution would produce linear domain growth?
An elementary argument proceeds as follows: we can describe each time
in a scaling regime with characteristic horizontal and vertical length-scales
$L_h$ and $L_v$. In a typical
collision, the area gained by a vacancy cluster will scale with the product
of these two lengths, i.e.
$$dn \sim L_h \cdot L_v.\eqno(4)$$
A typical time between collisions will scale as the vertical length-scale
divided by the velocity difference of the two colliding domains:
$$dt \sim L_v / \Delta v.\eqno(5)$$
Together, these two results indicate that the area of a typical domain
will increase linearly in time if $\Delta v \sim 1/L_h$.
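Explicitly, combining equations (4) and (5) under these scaling assumptions gives
$${dn \over dt} \sim {L_h L_v \over L_v / \Delta v} = L_h\, \Delta v ,$$
so the typical domain area grows linearly in time precisely when $L_h \Delta v$ remains of order one, i.e. when $\Delta v \sim 1/L_h$.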
Is this scaling picture useful for understanding simulation results?
One obvious objection is that vertical correlations in our model
do not settle into a final scaling shape until late in the simulation,
and so the typical vertical spacing between domains does not scale
perfectly with the vertical height of the domains.
Also, we have neglected any enhancement in coarsening due to velocity
fluctuations, and in fact our simulation results indicate that domain areas
in this regime have somewhat faster than linear growth in time. Realizing that
our scaling picture is an approximate description, at best,
we have investigated the size dependence of domain velocities.
Recall that a large domain with several kinks in its
base should approach a terminal velocity where motion proceeds from each
kink independently. Is the dominant correction to this terminal velocity
a term of the form $v_{cor}/L_h$? Note that this the form one might
expect if the dominant correction is due to behavior at
the sides of a domain base. Figure \ref{velvs_invwidth}
shows velocity plotted against
$1/w$ for domains of various size moving through empty space at high
temperature, so that the domain bases had several kinks. We tried plotting
velocity vs. $w^x$ and find the best asymptotic linear fit for $x$= .7 to 1.2.
Figure \ref{velvs_invw_oc7} shows velocity as a function of $1/w$ in an
actual simulation run at 70\% filling. Despite poor statistics in the latter
plot, figures \ref{velvs_invwidth} and \ref{velvs_invw_oc7} together seem to
confirm that vacancy clusters in our model approach
constant velocity with $1/L_h$ corrections. Thus, our scaling picture may
provide a first step towards explaining observed growth at these fillings.
For fillings below $50\%$, we must focus on domains of particles,
instead of vacancies. These clumps of particles actually move against
the field direction while a wind of particles sweeps into them on one
side and tears particles away on the other.
Figure \ref{powers} shows preliminary simulation results
at these fillings. Domain area grows roughly as $t^{.75}$ at
20\% filling ($\zeta \approx .4$), as $t^{.95}$ at 30\% filling
($\zeta \approx .5$), and as $t^{1.1}$ at 50\% filling ($\zeta \approx .55$).
Note that growth at low fillings is dominated by the shorter time-scales
associated with motion of atoms with few neighbors. Note also that the
growth exponent $\zeta$ increases with filling. We do not at present have
an explanation for the latter effect.
\section{Low field crossover}
For high fillings and strong interparticle couplings, we have seen that large
external fields can dramatically enhance coarsening. In most experimental
applications, however, the potential difference between neighboring sites
is much less than $kT$, so it is natural to ask how weak fields affect
domain growth. In the zero field
limit, our model corresponds to standard Lifshitz-Slyozov growth with
asymptotic $L \sim t^{1/3}$ behavior. Slightly away from this limit, we
find that low fields
produce such slow coarsening
for a while, and then generate a crossover to fast growth and noticeable
anisotropy. Figure \ref{crossover} shows this behavior. First, initial
transients die
away on a time scale given by the rate of motion for $q=3$ atoms (as in
\cite{Barkema}), and
$t^{1/3}$ growth sets in. When the characteristic domain size is still much
less than $kT / 2\Delta$, growth takes off and the presence of the field
also appears in a loss of square domain symmetry. Figure \ref{cross_vsfield}
shows rough visual estimates of crossover length as a function
of field. Although this plot may include systematic error from
pinpointing a crossover in increasingly rounded curves, it strongly suggests
that the crossover length has a weak dependence on field.
To gain a physical understanding of the crossover, first note that it
represents a transition between diffusion-dominated growth, and
driven collisions produced by the external field. In this low field
regime, where the potential drop across the domain is still less than
$kT$, we can describe the driven motion with linear response theory.
We will argue that
crossover occurs when a typical block absorbs more vacancies through
concerted motion along the field than through diffusive motion.
Driven collisions should begin to win
when the area of a circle swept out through diffusion, $\pi Dt$, equals
the area swept out by linear motion, i.e.
$$\pi Dt = v \cdot t \cdot w\eqno(6)$$
Note that we can replace the horizontal length scale $w$ with
a general length scale $L$, since this early growth regime is precisely
when length scales in all directions are the same.
To describe the crossover more completely, we need to know how $D$ and $v$
vary with the size of a domain in our model. For velocity, we refer back
to the section describing exponential growth, where we found $v \sim L$
whenever motion was limited by the slow rate of dislodging the first
atom from a row. To describe the variation of the diffusion constant, note
that $D \cong \omega (\Delta x)^2$, where $\omega$ is the frequency of a typical
move and $\Delta x$ is the center of mass displacement caused by such
a move. For a faceted domain, a typical move takes an atom from one rare
kink in the boundary to another. Such a move displaces the atom by a distance
of order $L$ and the center of mass by $ \Delta x \sim {1/ L}$.
Since the frequency of these moves will be proportional to $L$, we expect
that faceted domains will have $D \sim {1 / L}$.
Note that these results for $v$ and $D$ are consistent with the Einstein
relation
that should apply at such small fields: $D/(2kT) = v/F$, where $F$ is the driving force.
(The driving force will be proportional to the total charge of a block
and therefore to its area.)
Plugging these results into equation $(6)$,
we find that the prediction $v \cdot L_{cross} = \pi D$ becomes
$\Delta \cdot L_{cross}^2 \propto L_{cross}^{-1}$ or
$L_{cross} \propto \Delta^{-1 / 3}$.\cite{rough}
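For clarity, the chain of estimates can be collected in one line (a restatement of the
argument above, with the factor of $\pi$ absorbed into the proportionalities;
$v \propto \Delta L_{cross}$ follows from the linear-response relation with force
$\propto \Delta L_{cross}^2$ and $D \propto 1/L_{cross}$):
$$\pi D = v\,L_{cross}, \qquad v \propto \Delta L_{cross}, \qquad D \propto L_{cross}^{-1}
\;\;\Longrightarrow\;\; \Delta L_{cross}^{2} \propto L_{cross}^{-1}
\;\;\Longrightarrow\;\; L_{cross} \propto \Delta^{-1/3}.$$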
If this relationship correctly describes the crossover to fast
growth, then simulations with a well-developed $L \propto t^{1/3}$ growth
before crossover should follow the scaling collapse
$L(\Delta,t) = \Delta^{-1/3} {\cal L}(t\Delta)$, at least until fast growth has
taken over.
Figure \ref{crosscollapse} shows such a collapse. Considering the numerical
difficulty of achieving well-developed $t^{1/3}$ growth for a wide range
of fields, we believe a $\Delta^{-1/3}$ crossover is supported by the data; in
any case, our results strongly indicate
that the length scale at crossover varies only weakly with field.
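Purely as an illustration of how such a collapse can be checked numerically (the field
values and growth curves below are hypothetical placeholders, not our data), each curve
$L(\Delta,t)$ is rescaled by $\Delta^{1/3}$ and plotted against $t\Delta$:
\begin{verbatim}
# Illustrative sketch of the collapse L(Delta, t) = Delta^{-1/3} * F(t*Delta).
# `curves` maps a (hypothetical) field strength Delta to (times, lengths).
import numpy as np
import matplotlib.pyplot as plt

curves = {
    0.02: (np.array([1e3, 1e4, 1e5]), np.array([10., 22., 47.])),
    0.05: (np.array([1e3, 1e4, 1e5]), np.array([10., 23., 55.])),
}

for delta, (t, L) in curves.items():
    # If the collapse holds, all rescaled curves fall on one master curve.
    plt.plot(t * delta, L * delta ** (1.0 / 3.0), "o-",
             label="Delta = %g" % delta)

plt.xscale("log"); plt.yscale("log")
plt.xlabel("t * Delta"); plt.ylabel("L * Delta^{1/3}")
plt.legend(); plt.show()
\end{verbatim}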
\section{Conclusions}
In our model, then, a reduction in the external field produces only a
small increase in the minimum domain size required for takeoff.
Although the departure from $t^{1/3}$ growth appears to be small at low
fillings, the field--driven takeoff at higher fillings soon leads to large
empty regions which move steadily against the
field as particles sweep quickly through their centers. For moderately
high lattice fillings, fast growth involves most of the vacancy regions.
Here, domain growth is very roughly linear in time, and horizontal correlations
come close to scaling. At very high lattice fillings and low temperatures,
fewer domains undergo significant coarsening, but those that do
have runaway, exponential growth.
In our model, explosive growth ends when domains
are large enough to have rough bases, either through thermal effects or
constant collisions. Our study has also explored the strong
impact which faceting can have on size dependence in velocities of
vacancy clusters. This is a subject with potential applications in the study of
void electromigration in small aluminum interconnects, where faceting is well
documented and voids
have been observed to move through the middle of single crystal grains
\cite{aluminum}. Most importantly, our model demonstrates that external
driving forces can be surprisingly effective in producing domain clumping
and macroscopic particle fluxes.
How such enhancement plays out in particular
experimental systems is still an open question. In the instance of Moeckly's
YBCO electromigration experiments, particle motion takes place in
the anisotropic environment of the oxygen ``chain'' planes, and our simple
model does not incorporate such inherent anisotropies. Also,
associating our model with the YBCO experiment involves abstracting
our simulated phase separation of completely filled and empty
areas to an experimental phase separation which may be less extreme.
Neutron and TEM observations of YBCO suggest that domain segregation in
these planes produces regions of more closely spaced oxygen
rows and less closely spaced rows\cite{spacing}. Still, the more open
environment of widely spaced rows does allow increased mobility\cite{LaGraff2}.
Moeckly's observations of
large oxygen-depleted regions suggest that field-induced clumping
is a vital component of his experiments. Without describing the specific
characteristics of YBCO, our model provides a qualitative
check that small external driving forces can indeed facilitate the segregation
of domains with high particle mobility.
This study serves as a preliminary survey of a broad range of
interesting and potentially relevant model behaviors. An extension
to three dimensions would allow us to study the effect of a
finite-temperature roughening transition. Our study of
faceting effects should be expanded to cover other types of dynamics,
such as those which facilitate surface diffusion. Our simple picture
of coarsening in the presence of scaling clearly needs to
be modified to include departures from scaling and fluctuations
in domain velocity. And an entire regime
of low filling needs to be explored and understood.
Our work illustrates the breadth of issues involved in studying
how a separation of time-scales due to faceting
can affect response to an external driving force.
We have demonstrated that useful approaches to this problem may be found
outside the long wavelength, late time limit.
Further work should improve our understanding of
particle motion and domain coarsening in systems which,
instead of being conveniently isolated in a thermal bath, are
knocked out of equilibrium by an external force.
We would like to thank B. Moeckly, R. Buhrman, J.~Marko, and G.~Barkema
for helpful conversations. This work was
partly funded by the NPSC~(LKW), and the NSF under grant
DMR--91--18065~(LKW,~JPS). This research was conducted using the resources
of the Cornell Theory
Center, which receives major funding from the National Science Foundation
(NSF) and New York State. Additional funding comes from the Advanced
Research Projects Agency (ARPA), the National Institutes of Health (NIH),
IBM Corporation, and other members of the center's Corporate Research
Institute. (Roughly 1000 IBM SP1 processor hours were used.)