--- abstract: 'We study strong-coupling lattice QCD with staggered-Wilson fermions, with emphasis on discrete symmetries and the possibility of their spontaneous breaking. We perform hopping parameter expansion and effective potential analyses in the strong-coupling limit. From gap equations we find a nonzero pion condensate in some range of a mass parameter, which indicates the existence of the parity-broken phase in lattice QCD with staggered-Wilson fermions. We also find massless pions and PCAC relations around the second-order phase boundary. These results suggest that we can take a chiral limit by tuning a mass parameter in lattice QCD with staggered-Wilson fermions as with the Wilson fermion.' author: - Tatsuhiro Misumi - 'Takashi Z. Nakano' - Taro Kimura - Akira Ohnishi title: | Strong-coupling Analysis of Parity Phase Structure\ in Staggered-Wilson Fermions --- Introduction {#sec:Intro} ============ Since the dawn of lattice field theory [@Wil], the doubling problem of fermions has been a notorious obstacle to lattice simulations. Among several prescriptions for this problem, the Wilson fermion simply bypasses the no-go theorem [@NN] by introducing a species-splitting mass term into the naive lattice fermion. This Wilson term is regarded as one example of “flavored-mass terms" which split the original 16 fermion species into several branches [@CKM1; @CKM12]. It has
been recently shown that the flavored-mass terms can also be constructed for staggered fermions [@KS; @Suss; @Sha] in Refs. [@Adams1; @Adams2; @Hoel]. The original purpose of introducing these terms was the establishment of the index theorem with staggered fermions [@Adams1]. A bonus here is that staggered fermions with the flavored-mass terms can be applied to lattice QCD simulations as a Wilson-type fermion and as an overlap kernel. One possible advantage of these novel formulations, called staggered-Wilson and staggered-overlap, is the reduction of matrix sizes in the Dirac operators, which would lead to a reduction of numerical costs in lattice simulations. The numerical advantage of the staggered-overlap fermion has been shown in [@PdF]. Now further study is required toward lattice QCD with these lattice fermions. The purpose of this work is to reveal properties of staggered-Wilson fermions in terms of the parity-phase structure (Aoki phase) [@AokiP; @AokiU1; @CreutzW; @SS; @ACV; @Sharpe]. As is well-known, the existence of the Aoki phase and the second-order phase boundary in Wilson-type fermions enables us to perform lattice QCD simulations by taking a chiral limit since the critical behavior near the phase boundary reproduces massless pions and the PCAC relation. Besides, understanding the phase structure also gives practical information for the
application of the overlap [@GW; @Neu] and domain-wall [@Kap; @FuSh] versions, both built on the Wilson-type kernel. Thus, in order to judge the applicability of these new lattice fermions, it is essential to investigate the Aoki phase in the staggered-Wilson fermions. The phase structure for the staggered-Wilson fermion was first studied by using the Gross-Neveu model in Refs. [@CKM2; @MCKNO], and the present paper presents a further investigation of this topic. In this paper, we investigate strong-coupling lattice QCD [@KaS] with emphasis on the parity-phase structure for two types of staggered-Wilson fermions [@Adams2; @Hoel]. Firstly we discuss discrete symmetries of staggered-Wilson fermions, and show that physical parity and charge conjugation can be defined in both cases while hypercubic symmetry depends on the type of staggered-Wilson fermion. Secondly, we perform hopping-parameter expansion and effective potential analysis for meson fields in the strong-coupling limit. For this purpose, we develop a method to derive the effective potential for lattice fermions with multiple-hopping terms. The gap equations show that the pion condensate becomes non-zero in some range of a mass parameter, which indicates that a parity-broken phase appears in this range. We also study meson masses around the second-order phase boundary, and find that massless pions and PCAC relations are
reproduced. Lastly, we discuss parity-flavor symmetry breaking for 2-flavor cases. These results suggest that we can take a chiral limit by tuning a mass parameter in lattice QCD with staggered-Wilson fermions as with the Wilson fermion. This paper is organized as follows. In Sec. \[sec:SWF\], we review staggered flavored-mass terms and two types of staggered-Wilson fermions. In Sec. \[sec:Sym\], we study discrete symmetries of staggered-Wilson fermions. In Sec. \[sec:HPE\], we study hopping parameter expansion in lattice QCD with these fermions. In Sec. \[sec:EPA\], we investigate the Aoki phase structure by effective potential analysis. In Sec. \[sec:Tf\], we discuss parity-flavor symmetry breaking in two-flavor cases. Sec. \[sec:SD\] is devoted to a summary and discussion. Staggered-Wilson fermions {#sec:SWF} ========================= Before looking into staggered-Wilson fermions, we review the Wilson fermion and its relatives. The Wilson term splits the 16 species of the naive fermion into 5 branches with 1, 4, 6, 4 and 1 fermions. We call these species-splitting terms “flavored-mass terms". As shown in [@CKM1], there are 4 types of flavored-mass terms for the naive fermion which satisfy $\gamma_{5}$ hermiticity. ($\gamma_{5}$ in the naive fermion is flavored such as $\gamma_{5}\otimes(\tau_{3}\otimes\tau_{3}\otimes\tau_{3}\otimes\tau_{3})$ in the spin-flavor representation.) All these terms with proper mass shifts lead
to a second derivative term as $\sim a\int d^{4}x\, \bar{\psi}D^{2}_{\mu}\psi$ up to $\mathcal{O}(a^2)$ errors. Thus we can regard them as cousins of the Wilson fermion. There are also non-trivial flavored-mass terms for staggered fermions, which split the 4 tastes into branches and satisfy $\gamma_{5}$ hermiticity. Since $\gamma_{5}$ is expressed in the spin-taste representation as $\gamma_{5}\otimes\gamma_{5}$ in this case, we only have two flavored-mass terms satisfying $\gamma_{5}$ hermiticity: ${{\bf 1}}\otimes\gamma_{5}$ and ${{\bf 1}}\otimes \sigma_{\mu\nu}$. (For larger discrete symmetry one needs to take a proper sum over $\mu,\nu$ in the latter case.) These spin-flavor representations translate into four- and two-hopping terms in the one-component staggered action up to $\mathcal{O}(a)$ errors. The first type is given by $$M_A= \epsilon\sum_{sym} \eta_{1}\eta_{2}\eta_{3}\eta_{4} C_{1}C_{2}C_{3}C_{4} = ({{\bf 1}}\otimes \gamma_{5}) + \mathcal{O}(a) \ , \label{AdamsM}$$ with $$\begin{aligned} (\epsilon)_{xy}&=(-1)^{x_{1}+...+x_{4}}\delta_{x,y} \ , \\ (\eta_{\mu})_{xy}&=(-1)^{x_{1}+...+x_{\mu-1}}\delta_{x,y} \ , \\ C_{\mu}&=(V_{\mu}+V_{\mu}^{\dag})/2 \ , \\ (V_{\mu})_{xy}&=U_{\mu,x}\delta_{y,x+\mu} \ .\end{aligned}$$ The second type is given by $$\begin{aligned} M_H&={i\over{\sqrt{3}}}(\eta_{12}C_{12}+\eta_{34}C_{34}+\eta_{13}C_{13}+\eta_{42}C_{42} +\eta_{14}C_{14}+\eta_{23}C_{23}) \ , \nonumber\\ &= [{{\bf 1}}\otimes (\sigma_{12}+\sigma_{34}+\sigma_{13}+\sigma_{42}+\sigma_{14}+\sigma_{23}) ] + \mathcal{O}(a) \ , \label{HoelM}\end{aligned}$$ with $$\begin{aligned} (\eta_{\mu\nu})_{xy}&=\epsilon_{\mu\nu}\eta_{\mu}\eta_{\nu}\delta_{x,y} \ , \\ (\epsilon_{\mu\nu})_{xy}&=(-1)^{x_{\mu}+x_{\nu}}\delta_{x,y} \ , \\ C_{\mu\nu}&=(C_{\mu}C_{\nu}+C_{\nu}C_{\mu})/2 \ .\end{aligned}$$ We refer to $M_A$ and $M_H$ as the Adams- [@Adams1] and Hoelbling-type [@Hoel], respectively.
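To make the construction in Eq. (\[AdamsM\]) more concrete, the following small numerical sketch (ours, not part of the original papers) builds the free-field Adams flavored-mass term on a $4^4$ periodic lattice, under the assumptions that all gauge links are set to unity and that $\sum_{sym}$ is read as the symmetrized average over the $4!$ orderings of $C_{1}C_{2}C_{3}C_{4}$. It checks $\gamma_{5}$ hermiticity ($\epsilon M_A^{\dagger}\epsilon=M_A$), commutation with $\epsilon$ (the one-component form of $\gamma_{5}\otimes\gamma_{5}$), a spectrum contained in $[-1,1]$, and a spectrum symmetric about zero, consistent with the two mass branches described next.

```python
# Free-field sketch of the Adams flavored-mass term M_A (unit links, 4^4 periodic lattice).
# Assumption: the symmetrization in Eq. (AdamsM) is taken as the average over the 4! orderings.
import itertools
import numpy as np

L = 4
sites = list(itertools.product(range(L), repeat=4))
index = {x: i for i, x in enumerate(sites)}
V = len(sites)

def shift(mu, sign):
    """Translation by +/- one lattice unit in direction mu (V_mu with U = 1)."""
    S = np.zeros((V, V))
    for i, x in enumerate(sites):
        y = list(x)
        y[mu] = (y[mu] + sign) % L
        S[index[tuple(y)], i] = 1.0
    return S

C = [0.5 * (shift(mu, +1) + shift(mu, -1)) for mu in range(4)]   # C_mu = (V_mu + V_mu^dag)/2
eps = np.diag([(-1.0) ** sum(x) for x in sites])                 # epsilon = (-1)^{x_1+...+x_4}
eta5 = np.diag([(-1.0) ** (x[0] + x[2]) for x in sites])         # eta_1 eta_2 eta_3 eta_4

Csym = sum(np.linalg.multi_dot([C[p] for p in perm])
           for perm in itertools.permutations(range(4))) / 24.0  # symmetrized C_1 C_2 C_3 C_4
MA = eps @ eta5 @ Csym

ev = np.linalg.eigvalsh(MA)
print("gamma_5 hermiticity (eps M_A^T eps = M_A):", np.allclose(eps @ MA.T @ eps, MA))
print("[M_A, eps] = 0                           :", np.allclose(MA @ eps, eps @ MA))
print("spectrum inside [-1, 1]                  :", ev.min() >= -1 - 1e-9 and ev.max() <= 1 + 1e-9)
print("spectrum symmetric about zero            :", np.allclose(np.sort(ev), np.sort(-ev)))
```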
The former splits the 4 tastes into two branches with positive mass and the other two with negative mass. These two branches correspond to the $+1$ and $-1$ eigenvalues of $\gamma_{5}$ in the taste space. The latter splits them into one with positive mass, two with zero mass and the other one with negative mass. We note that $M_{A}$ and $M_{H}$ are also derived from the flavored mass terms for naive fermions through spin-diagonalization as shown in [@CKM1]. Now we introduce a Wilson parameter $r=r\delta_{x,y}$ and shift the mass as in Wilson fermions [@Hoel]. Then the Adams-type staggered-Wilson fermion action is given by $$\begin{aligned} S_{\rm A}\,&=\,\sum_{xy}\bar{\chi}_{x}[\eta_{\mu}D_{\mu} +r(1+M_A)+M]_{xy}\chi_{y} \ , \label{AdamsS} \\ D_{\mu}&={1\over{2}}(V_{\mu}-V^\dagger_{\mu}) \ .\end{aligned}$$ Here $M$ stands for the usual taste-singlet mass ($M=M\delta_{x,y}$). The Hoelbling-type fermion action is given by $$\begin{aligned} S_{\rm H}\,=\,\sum_{xy}\bar{\chi}_{x}[\eta_{\mu}D_{\mu} +r(2+M_H)+M]_{xy}\chi_{y} \ . \label{HoelS}\end{aligned}$$ It is obvious that these lattice fermions can serve as two- or one-flavor Wilson fermions. In lattice QCD simulations with these fermions, the mass parameter $M$ will be tuned to take a chiral limit, as with Wilson fermions. For negative values of the mass parameter, $-1<M<0$ for the Adams type and $-2<M<0$ for the Hoelbling type, we can in principle obtain two-flavor and one-flavor overlap fermions respectively by using the overlap formula. Discrete Symmetries {#sec:Sym} =================== In this section,
we discuss the discrete symmetries of staggered-Wilson fermions. A potential problem with staggered-Wilson fermions in lattice QCD is discrete symmetry breaking. As discussed in [@Adams2; @Hoel], the discrete symmetry possessed by the original staggered fermion is broken to a subgroup both in the Adams-type and Hoelbling-type actions. Below we list all the staggered discrete symmetries (shift, axis reversal, rotation and charge conjugation), and look into their status in staggered-Wilson fermions. The shift transformation is given by $$\begin{aligned} \mathcal{S}_{\mu}:\,\,\chi_{x} \to \zeta_{\mu}(x)\chi_{x+\hat{\mu}}, \,\,\,\, \bar{\chi}_{x} \to \zeta_{\mu}(x)\bar{\chi}_{x+\hat{\mu}},\,\,\,\, U_{\nu,x} \to U_{\nu, x+\hat{\mu}} \ , \label{shift1}\end{aligned}$$ with $\zeta_{1}(x)=(-1)^{x_{2}+x_{3}+x_{4}}$, $\zeta_{2}(x)=(-1)^{x_{3}+x_{4}}$, $\zeta_{3}(x)=(-1)^{x_{4}}$ and $\zeta_{4}(x)=1$. It is obvious that this transformation flips the sign of both flavored-mass terms. The Adams type is invariant under the two-shift subgroup as $x \to x+\hat{1}\pm\hat{\mu}$ while the Hoelbling type is invariant under the four-shift subgroup as $x\to x+\hat{1}\pm\hat{2}\pm\hat{3}\pm\hat{4}$. Note that these subgroups include the doubled shift $x\to x+2\hat{\mu}$ as their subgroup. The axis reversal transformation is given by $$\begin{aligned} \mathcal{I}_{\mu}:\,\,\chi_{x} \to (-1)^{x_{\mu}}\chi_{Ix}, \,\,\,\, \bar{\chi}_{x} \to (-1)^{x_{\mu}}\bar{\chi}_{Ix},\,\,\,\, U_{\nu,x} \to U_{\nu, Ix} \ , \label{axis1}\end{aligned}$$ where $I=I^{\mu}$ is the axis reversal $x_{\mu}\to -x_{\mu}$, $x_{\rho}\to x_{\rho}$, $\rho\not= \mu$. It again flips the sign of the flavored-mass terms. The staggered rotational transformation is given by $$\begin{aligned} \mathcal{R}_{\rho\sigma}:\,\,\chi_{x} \to S_{R}(R^{-1}x)\chi_{R^{-1}x},\,\,\,\,
\bar{\chi_{x}} \to S_{R}(R^{-1}x)\bar{\chi}_{R^{-1}x},\,\,\,\, U_{\nu, x} \to U_{\nu, Rx} \ , \label{rot1}\end{aligned}$$ where $R_{\rho\sigma}$ is the rotation $x_{\rho}\to x_{\sigma}$, $x_{\sigma}\to -x_{\rho}$, $x_{\tau}\to x_{\tau}$, $\tau \not= \rho, \sigma$ and $S_{R}(x)={1\over{2}}[1\pm\eta_{\rho}(x)\eta_{\sigma}(x)\mp\zeta_{\rho}(x)\zeta_{\sigma}(x) +\eta_{\rho}(x)\eta_{\sigma}(x)\zeta_{\rho}(x)\zeta_{\sigma}(x)]$ with $\rho\neq\sigma$. It is notable that the Adams-type fermion keeps this staggered rotation symmetry while the Hoelbling type loses it. The staggered charge conjugation transformation is given by $$\mathcal{C}:\,\,\chi_{x}\to\epsilon_{x}\bar{\chi}_{x}^{T},\,\,\,\, \bar{\chi}_{x}\to-\epsilon_{x}\chi_{x}^{T},\,\,\,\, U_{\nu,x} \to U_{\nu,x}^{*} \ . \label{C0}$$ The Adams-type fermion keeps this symmetry while the Hoelbling type loses it. We next elucidate the residual subgroups possessed by staggered-Wilson fermions, and discuss how to define physical discrete symmetries such as parity, charge conjugation and hypercubic symmetry. For this purpose we separate spin and flavor rotations in the above transformations. Here we utilize the momentum-space representation in [@GS; @DS]. In this representation we can define two sets of Clifford generators $\Gamma_{\mu}$ and $\Xi_{\mu}$, which operate on the spinor and flavor spaces of the momentum-space field $\phi(p)$, respectively. (Details are shown in Appendix \[SFS\].) Then the shift transformation translates into $$\mathcal{S}_{\mu}:\,\,\phi(p)\,\,\to\,\, \exp(ip_{\mu})\Xi_{\mu}\,\phi(p) \ . \label{shift2}$$ The axis reversal translates into $$\mathcal{I}_{\mu}:\phi(p)\,\,\to\,\,\Gamma_{\mu}\Gamma_{5}\Xi_{\mu}\Xi_{5}\,\phi(Ip) \ . \label{axis2}$$ The rotational transformation translates into $$\mathcal{R}_{\rho\sigma}:\,\,\phi(p)\,\,\to\,\,\exp({\pi\over{4}}\Gamma_{\rho}\Gamma_{\sigma}) \exp({\pi\over{4}}\Xi_{\rho}\Xi_{\sigma})\,\phi(R^{-1}p) \ . \label{rot2}$$ By using this representation we figure out the residual discrete symmetries of staggered-Wilson fermions as
follows. We first consider parity. Both staggered-Wilson fermions are invariant under $$\mathcal{I}_{s}\mathcal{S}_{4}\sim \exp(ip_{4})\Gamma_{1}\Gamma_{2}\Gamma_{3}\Gamma_{5}\,\phi(-{\bf p},p_{4})\sim \exp(ip_{4})\Gamma_{4}\,\phi(-{\bf p},p_{4}) \ , \label{parity}$$ with $\mathcal{I}_{s}\equiv \mathcal{I}_{1}\mathcal{I}_{2}\mathcal{I}_{3}$. This is essentially the continuum parity transformation [@GS]. In the continuum limit the phase factor disappears and it results in the continuum parity transformations $P : \psi(p)\to\gamma_{4}\psi(-{\bf p},p_{4})$ for the Dirac fermion. We thus conclude both staggered-Wilson fermions possess symmetry leading to physical parity symmetry $P$. We note the simple combination of $\mu$-shift and $\mu$-axis reversal (shifted-axis reversal:$\mathcal{S}_{\mu}\mathcal{I}_{\mu}$) is also a symmetry of both fermions. We next consider physical charge conjugation. In the case of Adams fermion the staggered charge conjugation symmetry $\mathcal{C}$ in Eq. (\[C0\]) remains intact. Thus, physical charge conjugation for the two-flavor branch can be formed in a similar way to usual staggered fermions as shown in [@GS] ($C\sim\mathcal{C}\mathcal{S}_{2}\mathcal{S}_{4} \mathcal{I}_{2}\mathcal{I}_{4}$). On the other hand, the Hoelbling type breaks $\mathcal{C}$. In this case, however, we can define another charge conjugation by combining $\mathcal{C}$ with rotation transformation as $$\mathcal{C}_{T} : \,\,\mathcal{R}_{21}\mathcal{R}_{13}^{2}\mathcal{C} \ . \label{RRRC}$$ The Hoelbling action is invariant under this transformation. By using this symmetry we can define physical charge conjugation $C$ for one-flavor branch as in the Adams type. We thus conclude that both staggered-Wilson
fermions have proper charge conjugation symmetry.

              $N_f$   $\mathcal{S}$$\&$$\mathcal{I}$-subgroup                                                                                $\mathcal{R}$-subgroup                           $P$          $C$          $SW_{4}$
  ----------- ------- -------------------------------------------------------------------------------------------------------------------- ------------------------------------------------ ------------ ------------ ------------
  Staggered   $4$     $\mathcal{S}_{\mu}$, $\mathcal{I}_{\mu}$                                                                               $\mathcal{R}_{\mu\nu}$                           $\bigcirc$   $\bigcirc$   $\bigcirc$
  Adams       $2$     $\mathcal{S}_{\mu}\mathcal{S}_{\nu}$, $\mathcal{S}_{\mu}\mathcal{I}_{\mu}$                                             $\mathcal{R}_{\mu\nu}$                           $\bigcirc$   $\bigcirc$   $\bigcirc$
  Hoelbling   $1$     $\mathcal{S}_{\mu}\mathcal{S}_{\nu}\mathcal{S}_{\rho}\mathcal{S}_{\sigma}$, $\mathcal{S}_{\mu}\mathcal{I}_{\mu}$       $\mathcal{R}_{\mu\nu}\mathcal{R}_{\rho\sigma}$   $\bigcirc$   $\bigcirc$   $\times$

  : Discrete symmetries of the staggered and staggered-Wilson fermions.[]{data-label="Table:sym"}

We lastly consider hypercubic symmetry. In staggered fermions, the rotation Eq. (\[rot1\]) and axis reversal Eq. (\[axis1\]) form hypercubic symmetry [@KilSha], which enhances to Euclidean Lorentz symmetry in the continuum limit. In the case of the Adams-type fermion, the action is invariant under the rotation Eq. (\[rot1\]) and the shifted-axis reversal $\mathcal{S}_{\mu}\mathcal{I}_{\mu}$. These two symmetries can form proper hypercubic symmetry $SW_{4}$ in this case. Thus we conclude that the Adams fermion recovers Lorentz symmetry in the continuum. In the Hoelbling-type formulation, the action breaks the rotation symmetry down to a subgroup called the doubled rotation [@Hoel], which is given by $$\mathcal{R}_{\rho\sigma}\mathcal{R}_{\mu\nu}\sim \exp[{\pi\over{4}}(\Gamma_{\rho}\Gamma_{\sigma}+\Gamma_{\mu}\Gamma_{\nu})] \exp[{\pi\over{4}}(\Xi_{\rho}\Xi_{\sigma}+\Xi_{\mu}\Xi_{\nu})] \phi(R_{\rho\sigma}^{-1}R_{\mu\nu}^{-1}p) \ , \label{DR}$$ where $(\mu,\nu,\sigma,\rho)$ is any permutation of ($1,2,3,4$). It is also invariant under the sequential transformations of ($\mu,\nu$ rotation), ($\nu,\mu$ rotation), ($\mu$ shift) and ($\nu$ shift) as $$\mathcal{S}_{\nu}\mathcal{S}_{\mu}\mathcal{R}_{\nu\mu}\mathcal{R}_{\mu\nu}\sim \exp(ip_{\mu}+ip_{\nu})\Gamma_{\mu}\Gamma_{\nu}\,\phi(\tilde{p}) \ , \label{SDR}$$ with $\tilde{p}_{\mu,\nu}=-p_{\mu,\nu}, \tilde{p}_{\tau}=p_{\tau}$, $\tau\not=\mu,\nu$. The loss of rotation symmetry indicates that we cannot define hypercubic symmetry in the Hoelbling fermion. It implies that
the Hoelbling fermion may not lead to a correct continuum theory, and we would need to tune parameters to restore Lorentz symmetry. Indeed the recent study on symmetries of staggered-Wilson fermions by Sharpe [@Steve] reports that recovery of Lorentz symmetry requires fine-tuning of coefficients in the gluonic sector in lattice QCD with the Hoelbling fermion. To summarize, the Adams fermion possesses physical parity, charge conjugation and hypercubic symmetries, while the Hoelbling fermion loses hypercubic symmetry, as shown in Table \[Table:sym\]. It seems that the Hoelbling fermion cannot be straightforwardly applied to lattice QCD while the Adams type can. We note that both staggered-Wilson fermions have proper parity symmetry, and we can discuss spontaneous parity symmetry breaking. Moreover, we may find some symptom of Lorentz symmetry breaking of the Hoelbling fermion in the parity-phase structure. This is another motivation to study the parity-phase structure in strong-coupling lattice QCD. At the end of this section, we comment on special symmetries of staggered-Wilson fermions without the mass shift. The Hoelbling fermion without the mass shift ($\eta_{\mu}D_{\mu}+M_{H}$) possesses a special charge conjugation symmetry ($\mathcal{C}_{T}': \chi\to\bar\chi,\, \bar{\chi}\to\chi$). This topic is beyond the scope of this work, but we note that it can be useful for the two flavors in the central branch. Use of the central branch is
intensively discussed in [@Rev; @PdF]. Hopping Parameter Expansion {#sec:HPE} =========================== In this section we investigate the parity-phase structure in lattice QCD with staggered-Wilson fermions in the framework of the hopping parameter expansion (HPE) in the strong-coupling regime [@AokiP]. In the hopping parameter expansion, we treat the mass term as the leading action while we treat the hopping terms perturbatively. We thus perform an expansion in a hopping parameter which essentially corresponds to the inverse of a mass parameter. By using self-consistent equations we derive one- and two-point functions, and calculate meson condensates and meson masses for the two types of staggered-Wilson fermions. For simplicity we drop the flavor indices until we discuss the two-flavor case in detail in Sec. \[sec:Tf\]. However it is easy to recover the flavor indices for the field $\chi_{f}$, the mass parameter $M_{f}$ and the condensate $\Sigma_{f}$ ($f=1,2,...$). Hoelbling type {#subsec:HPE-H} -------------- ![Feynman rules in hopping parameter expansion (HPE) with the Hoelbling-type staggered-Wilson fermion. $a$ and $b$ stand for the color indices. $\mathcal{W}_{\alpha\mu\beta\nu,x}^{(2)}$ is given in Table \[Table:U2\].[]{data-label="FR-H"}](HPE-Feynman-rule-Hoelbling.eps){height="6cm"}

  $\alpha$   $\beta$   $\mathcal{W}_{\alpha\mu\beta\nu,x}^{(2)}$
  ---------- --------- ---------------------------------------------------------------------
  $+$        $+$       $U_{\mu,x}U_{\nu,x+\hat{\mu}}$
  $-$        $-$       $U_{\mu,x-\hat{\mu}}^\dagger U_{\nu,x-\hat{\mu}-\hat{\nu}}^\dagger$
  $+$        $-$       $U_{\mu,x}U_{\nu,x+\hat{\mu}-\hat{\nu}}^\dagger$
  $-$        $+$       $U_{\mu,x-\hat{\mu}}^\dagger U_{\nu,x-\hat{\mu}}$

  : Link factors $\mathcal{W}_{\alpha\mu\beta\nu,x}^{(2)}$ of the two-hopping (flavored-mass) term.[]{data-label="Table:U2"}

![Feynman diagram for mesonic one-point functions in the $\mathcal{O}(K^{3})$ self-consistent equation of HPE with
the Hoelbling fermion. Black circles stand for the leading one-point function $\langle \chi_x \bar{\chi}_x \rangle_0$ while white circles stand for $\langle \chi_x \bar{\chi}_x \rangle$, which includes next-leading and higher hopping terms. By summing up higher contributions, we obtain the second equality.[]{data-label="One-H"}](HPEwHoelbling.eps){height="6cm"} We begin with the Hoelbling-type fermion, which contains two-hopping terms in the action. One reason why we start with the Hoelbling type, despite its lower rotation symmetry, is that the 2-hopping calculation is a good exercise for the 4-hopping case of the Adams type. To perform the HPE for the Hoelbling-type fermion, we rewrite the action Eq. (\[HoelS\]) by redefining $\chi \rightarrow \sqrt{2K} \chi$ with the hopping parameter $K=1/[2(M+2r)]$, $$S = \sum_{x} {{\bar{\chi}}}_x \chi_x + 2K \sum_{x, y} {{\bar{\chi}}}_{x}(\eta_{\mu} D_\mu )_{xy}\chi_{y}+ 2 K r \sum_{x,y} {{\bar{\chi}}}_x (M_{H})_{xy} \chi_y \ , \label{HPE-Hoel}$$ where $M_{H}$ is given by Eq. (\[HoelM\]). The plaquette action is the $1/g^2$ term, and we can omit it in the strong-coupling limit. In this section, we derive one- and two-point functions by using an $\mathcal{O}(K^{3})$ self-consistent equation: solving this equation corresponds to a ladder-type truncation of diagrams in which all diagrams up to $\mathcal{O}(K^3)$ are taken into account. More precisely, this approximation does not take account of all diagrams, but
it successfully includes certain kinds of diagrams to all orders of $K$ thanks to the self-consistent approach. We thus expect that it works to figure out the existence of the Aoki phase. We note that this approximation works especially well for a small hopping parameter $K\ll1$. In Fig. \[FR-H\], we depict the Feynman rules in the HPE for this fermion. The fundamental Feynman rules contain contributions from 0-hopping (mass term), 1-hopping (kinetic term) and 2-hopping (flavored-mass term) terms. First, by using these Feynman rules, we derive meson condensates from a one-point function of the meson operator $\mathcal{M}_{x}=\bar{\chi}_{x}\chi_{x}$ in the mean-field approximation. The one-point function is defined as $$\begin{aligned} \langle \chi_x^a \bar{\chi}_x^b \rangle \equiv& - \delta_{ab} \Sigma_x = \frac{\int \mathcal{D}[\chi,\bar{\chi},U] \chi_x^a \bar{\chi}_x^b\ e^{S} }{ \int \mathcal{D}[\chi,\bar{\chi},U] e^{S} }\ .\end{aligned}$$ Note that we use the partition function $Z=\int \mathcal{D}[\chi,\bar{\chi},U] e^{S}$, not $Z=\int \mathcal{D}[\chi,\bar{\chi},U] e^{-S}$, following the convention for the partition function in the strong-coupling analysis [@KaS]. The leading term in the hopping parameter expansion is given by $$\begin{aligned} \langle \chi_x^a \bar{\chi}_x^b \rangle_0 = \frac{\int \mathcal{D}[\chi,\bar{\chi},U]\, \chi_x^a \bar{\chi}_x^b\ e^{S_0}} {\int \mathcal{D}[\chi,\bar{\chi},U]\, e^{S_0}} = - \delta^{ab} \ ,\end{aligned}$$ where $S_0=\sum_x \bar{\chi}_x \chi_x$. By using the Feynman rules, we can evaluate the diagrams in Fig. \[One-H\]. $$\begin{aligned} \langle
\chi_x^a \bar{\chi}_x^b \rangle & \equiv - \delta^{ab} \Sigma_x {\nonumber}\\ &= \langle \chi_x^a \bar{\chi}_x^b \rangle_0 {\nonumber}\\ & + \sum_{\pm \mu} (-1) (K \eta_{\mu,x})^2 \langle (\chi^a \bar{\chi})_x \rangle_0 U_{\mu,x} \langle (\chi \bar{\chi})_{x+\hat{\mu}} \rangle_0 U_{\mu,x}^\dagger \langle (\chi \bar{\chi}^b)_x \rangle_0 {\nonumber}\\ & + \sum_{\substack{\pm \mu,\pm \nu \\ (\mu \neq \nu)}} (-1) \left( 2K r i \eta_{\mu \nu,x} \displaystyle \frac {1}{2^3 \sqrt{3}} \right)^2 \langle (\chi^a \bar{\chi})_x \rangle_0 \mathcal{W}_{\mu\nu,x}^{(2)} \langle (\chi \bar{\chi})_{x+\hat{\mu}+\hat{\nu}} \rangle_0 \mathcal{W}_{\mu\nu,x}^{(2)\dagger} \langle (\chi \bar{\chi}^b)_x \rangle_0 {\nonumber}\\ & + \sum_{\pm \mu,\pm \nu} (-1) (K \eta_{\mu,x})^2 (-1) (K \eta_{\nu,x})^2 \langle (\chi^a \bar{\chi})_x \rangle_0 U_{\mu,x} \langle (\chi \bar{\chi})_{x+\hat{\mu}} \rangle_0 U_{\mu,x}^\dagger \langle (\chi \bar{\chi})_x \rangle_0 U_{\nu,x} {\nonumber}\\ & \times \langle (\chi \bar{\chi})_{x+\hat{\nu}} \rangle_0 U_{\nu,x}^\dagger \langle (\chi \bar{\chi}^b)_x \rangle_0 {\nonumber}\\ & + \sum_{\substack{\pm \mu,\pm \nu ,\pm \rho, \pm \sigma \\ (\mu \neq \nu, \rho \neq \sigma)}} (-1) \left( 2K r i \eta_{\mu \nu,x} \displaystyle \frac {1}{2^3 \sqrt{3}} \right)^2 (-1) \left( 2K r i \eta_{\rho \sigma,x} \displaystyle \frac {1}{2^3 \sqrt{3}} \right)^2 {\nonumber}\\ & \times \langle (\chi^a \bar{\chi})_x \rangle_0 \mathcal{W}_{\mu\nu,x}^{(2)} \langle (\chi \bar{\chi})_{x+\hat{\mu}+\hat{\nu}} \rangle_0 \mathcal{W}_{\mu\nu,x}^{(2)\dagger} \langle (\chi \bar{\chi})_x \rangle_0 \mathcal{W}_{\rho\sigma,x}^{(2)} \langle (\chi \bar{\chi})_{x+\hat{\rho}+\hat{\sigma}} \rangle_0 \mathcal{W}_{\rho\sigma,x}^{(2)\dagger} \langle (\chi \bar{\chi}^b)_x \rangle_0 {\nonumber}\\ & + \sum_{\substack{\pm \mu,\pm \nu ,\pm \rho \\ (\mu \neq \nu)}} (-1) \left( 2K r i \eta_{\mu \nu,x} \displaystyle \frac
{1}{2^3 \sqrt{3}} \right)^2 (-1) \left( K \eta_{\rho,x} \right)^2 \langle (\chi^a \bar{\chi})_x \rangle_0 \mathcal{W}_{\mu\nu,x}^{(2)} {\nonumber}\\ & \times \langle (\chi \bar{\chi})_{x+\hat{\mu}+\hat{\nu}} \rangle_0 U_{\rho,x+\hat{\mu}+\hat{\nu}} \langle (\chi \bar{\chi})_{x+\hat{\mu}+\hat{\nu}+\hat{\rho}} \rangle_0 U_{\rho,x+\hat{\mu}+\hat{\nu}}^\dagger \langle (\chi \bar{\chi})_{x+\hat{\mu}+\hat{\nu}} \rangle_0 \mathcal{W}_{\mu\nu,x}^{(2)\dagger} \langle (\chi \bar{\chi}^b)_x \rangle_0 {\nonumber}\\ & + \mathcal{O}(K^5) \ , \label{Eq:HPE-Hoelbling1-detail}\end{aligned}$$ where $(\chi \bar{\chi})_x$ stands for $\chi_x \bar{\chi}_x$ and $\mathcal{W}_{\mu\nu,x}^{(2)}=\mathcal{W}_{+\mu+\nu,x}^{(2)}$ in Table. \[Table:U2\]. Note that we consider only connected diagrams in Fig. \[One-H\]. By summing higher hopping terms, the one-point function is obtained as shown in Fig. \[One-H\], which is given by $$- \Sigma_x \equiv - \langle \mathcal{M}_x \rangle = - \langle \mathcal{M}_x \rangle_0 + 2K^2 \sum_\mu \Sigma_x \Sigma_{x+\hat{\mu}} - 2 \cdot \displaystyle \frac {1}{24} (Kr)^2 \sum_{\mu \neq \nu} \Sigma_x \Sigma_{x+\hat{\mu}+\hat{\nu}} \ . \label{Eq:HPE-Hoelbling1}$$ The equation contains terms to $\mathcal{O}(K^{2})$, and $\mathcal{O}(K^{3})$ diagrams are found to vanish due to cancellation between the diagrams. Here we solve it in a self-consistent way for condensate $\Sigma$ within mean-field approximation. We here assume $\Sigma_x=\sigma_x +i \epsilon_x \pi_x$ as the condensate. $\sigma$ and $\pi$ correspond to chiral and pion condensates, respectively. We substitute this form of $\Sigma_{x}$ in Eq. (\[Eq:HPE-Hoelbling1\]) and obtain a self-consistent equation $$- \left( \sigma +i \epsilon_x \pi \right)=-1 + 2K^2 \cdot 4 \left( \sigma^2 + \pi^2 \right) -2 \cdot
\displaystyle \frac {1}{24} (Kr)^2 \cdot 4 \cdot 3 \left( \sigma +i \epsilon_x \pi \right)^2 \ , \label{HPE-HoelSelf1}$$ which yields $- \sigma = -1 + 16 K^2 \pi^2$ and $- i \pi = - 8K^2 \cdot 2 i \sigma \pi$. For simplicity, we have set $r=2\sqrt{2}$ to make Eq. (\[HPE-HoelSelf1\]) simpler. Of course other values of $r$ can be treated in the same way. Now we have two solutions depending on $\pi=0$ or $\pi\not=0$: For $\pi=0$, we have a trivial solution $\sigma=1$. For $\pi \neq 0$, we have the solution $$\sigma = \displaystyle \frac{1}{16K^2},\,\,\,\,\,\,\,\,\,\,\,\,\,\, \pi = \pm \sqrt{ \displaystyle \frac{1}{16K^2} \left( 1- \displaystyle \frac{1}{16K^2} \right) } \ . \label{cond}$$ A nonzero pion condensate implies spontaneous parity breaking for the range $|K| > 1/4$. The sign of the pion condensate in Eq. (\[cond\]) reflects the $Z_2$ parity symmetry of the theory. Thus the parity-broken phase, if it exists, appears in the parameter range $-4\sqrt{2}-2<M<-4\sqrt{2}+2$ in the strong-coupling limit. We note that the critical hopping parameter $|K_c|=1/4$ is small, and we speculate that the $\mathcal{O}(K^{3})$ self-consistent equation is valid around this value. Next, we discuss the meson mass from a two-point function of the meson operator $\mathcal{S}(0,x)\equiv\langle\mathcal{M}_{0}\mathcal{M}_{x}\rangle$. From Fig. \[Two-H\], we
derive the following $\mathcal{O}(K^{3})$ equation for a two-point function. ![Feynman diagram for mesonic two-point functions for $\mathcal{O}(K^{3})$ self-consistent equation with the Hoelbling fermion. []{data-label="Two-H"}](HPEwHoelbling-Mass.eps){height="4cm"} $$\begin{aligned} \mathcal{S}(0,x) = &\langle {{\bar{\chi}}}_0^a\chi_0^a {{\bar{\chi}}}_x^b \chi_x^b \rangle {\nonumber}\\ = &- \delta_{0x} N_c {\nonumber}\\ &-K^2 \langle {{\bar{\chi}}}_0^a \chi_0^a {{\bar{\chi}}}_0^c (\eta_{\mu,0})^2 \biggl[ U_{\mu,0}^{cd} \chi_{\hat{\mu}}^d {{\bar{\chi}}}_{\hat{\mu}}^e (U_{\mu,0}^{\dagger})^{ef} + (U_{\mu,-\hat{\mu}}^{\dagger})^{cd} \chi_{-\hat{\mu}}^d {{\bar{\chi}}}_{-\hat{\mu}}^e U_{\mu,-\hat{\mu}}^{ef} \biggr] \chi_0^f {{\bar{\chi}}}_x^b \chi_x^b \rangle {\nonumber}\\ & - \left( 2 K r i \displaystyle \frac{1}{2^3 \sqrt{3}} \right)^2 \langle {{\bar{\chi}}}_0^a \chi_0^a {{\bar{\chi}}}_0^c \sum_{\alpha,\beta=\pm} \sum_{\mu \neq \nu} (\eta_{\mu\nu,0})^2 \biggl[ (\mathcal{W}_{\alpha\mu\beta\nu,0}^{(2)})^{cd} \chi_{\alpha\hat{\mu}\beta\hat{\nu}}^d {{\bar{\chi}}}_{\alpha\hat{\mu}\beta\hat{\nu}}^e (\mathcal{W}_{\alpha\mu\beta\nu,0}^{(2) \dagger})^{ef} \biggr] \chi_0^f {{\bar{\chi}}}_x^b \chi_x^b \rangle \ , \label{HPE-HoelTwo} \end{aligned}$$ where $\mathcal{W}_{\alpha\mu\beta\nu,0}^{(2)}$ is defined in Table. \[Table:U2\]. Note that we consider only connected diagrams in Fig. \[Two-H\]. By integrating out the link variables in the strong-coupling limit, it is simplified as $$\begin{aligned} \mathcal{S}(0,x) \equiv \langle {{\bar{\chi}}}_0^a\chi_0^a {{\bar{\chi}}}_x^b \chi_x^b \rangle = - \delta_{0x} N_c &+ K^2 \sum_{\pm \mu} \langle \chi_{\hat{\mu}}^a {{\bar{\chi}}}_{\hat{\mu}}^a {{\bar{\chi}}}_x^b \chi_x^b \rangle {\nonumber}\\ & + \left( 2 K r i \displaystyle \frac{1}{2^3 \sqrt{3}} \right)^2 \sum_{\substack{\pm \mu,\pm \nu \\ (\mu \neq \nu)}} \langle \chi_{\hat{\mu}+\hat{\nu}}^a {{\bar{\chi}}}_{\hat{\mu}+\hat{\nu}}^a {{\bar{\chi}}}_x^b \chi_x^b \rangle \ . \label{Eq:HPE-Hoelbling2}\end{aligned}$$ Then the self-consistent equation for $\mathcal{S}$ is given in the momentum space as $$\begin{aligned} \mathcal{S}(p) &= - N_c + \biggl[- K^2 \sum_\mu \left( e^{-ip_\mu}
+ e^{ip_\mu} \right) {\nonumber}\\ &+ \left( 2 K r \displaystyle \frac{1}{2^3 \sqrt{3}} \right)^2 \sum_{\mu \neq \nu} \left( e^{-i(p_\mu+p_\nu)} + e^{i(p_\mu+p_\nu)} + e^{-i(p_\mu-p_\nu)} + e^{i(p_\mu-p_\nu)} \right)\biggr] \mathcal{S}(p) \ . \label{HPE-HoelSelf2}\end{aligned}$$ We finally obtain the meson propagator as $$\mathcal{S}(p) = N_c \biggl[ - 2 K^2 \sum_\mu \cos p_\mu + 4 \left( 2 K r \displaystyle \frac{1}{2^3 \sqrt{3}} \right)^2 \sum_{\mu \neq \nu} \cos p_\mu \cos p_\nu - 1 \biggr]^{-1} \ . \label{MP}$$ Here the poles of $\mathcal{S}(p)$ should give the meson masses. Since $\chi$ is a one-component fermion, it may seem to be difficult to find the pion excitation from Eq. (\[MP\]). However, as we discussed, $\gamma_{5}$ in the staggered fermion is given by $\epsilon_{x}=(-1)^{x_{1}+...+x_{4}}$ and the pion operator is given by $\pi_{x}=\bar{\chi}_{x}i\epsilon_{x}\chi_{x}$. We therefore identify the momentum of the pion by measuring it from the shifted origin $p=(\pi,\pi,\pi,\pi)$. Here we set $p=(i m_\pi a + \pi, \pi,\pi,\pi)$ for $1/\mathcal{S}(p)=0$ in Eq. (\[MP\]). Then we derive the pion mass $m_\pi$ as $$\begin{aligned} \cosh(m_\pi a) &= 1 + \displaystyle \frac{1-16K^2}{6K^2} \ , \label{HPE-Hoelpi}\end{aligned}$$ where we again set $r=2\sqrt{2}$ for simplicity. In this result, the pion mass becomes zero at $|K|=1/4$, and tachyonic in the range $|K| > 1/4$. It implies that there occurs a phase transition between
parity-symmetric and parity-broken phases at $|K|=1/4$, which is consistent with the result from the one-point function in Eq. (\[cond\]). We note that the massless pion at the phase boundary is consistent with the scenario of a second-order transition. We can also derive the sigma meson mass by substituting $p=(i m_\sigma a, 0, 0, 0)$ for $1/\mathcal{S}(p)=0$ in Eq. (\[MP\]) as $$\begin{aligned} \cosh(m_\sigma a) &= 1 + \displaystyle \frac{1}{2K^2} \ . \label{HPE-Hoelsigma}\end{aligned}$$ Adams type {#subsec:HPE-A} ---------- We investigate the parity-phase structure for the Adams-type staggered-Wilson fermion by using $\mathcal{O}(K^{3})$ self-consistent equations in the hopping parameter expansion. The approach is basically parallel to that of the Hoelbling type. We just need to consider the Feynman diagrams for this case. The action Eq. (\[AdamsS\]) is rewritten by redefining $\chi \rightarrow \sqrt{2K} \chi$ with $K=1/[2(M+r)]$ as, $$S = \sum_{x} {{\bar{\chi}}}_x \chi_x + 2K \sum_{x, y} {{\bar{\chi}}}_{x}(\eta_{\mu} D_\mu )_{xy}\chi_{y}+ 2 K r \sum_{x,y} {{\bar{\chi}}}_{x} (M_{A})_{xy} \chi_{y} \ , \label{HPE-Adams}$$ where $M_{A}$ is given in Eq. (\[AdamsM\]). In Fig. \[FR-A\], the Feynman rules in the HPE for this fermion are depicted. ![Feynman rules for the HPE with the Adams fermion. $a$ and $b$ stand for the color indices. We show the concrete forms of $\mathcal{W}_{\alpha\mu\beta\nu\gamma\rho\delta\sigma,x}^{(4)}$ in Table \[Table:U4\] of Appendix \[AdamsEff\].[]{data-label="FR-A"}](HPE-Feynman-rule-Adams.eps){height="6cm"}
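Before repeating the one-point-function analysis for the Adams fermion, it is convenient to note that, with the parameter choices $r=2\sqrt{2}$ (Hoelbling) and $r=16\sqrt{3}$ (Adams) used above and below, both $\mathcal{O}(K^{3})$ analyses reduce to the same pair of gap equations, $\sigma=1-16K^{2}\pi^{2}$ and $\pi(1-16K^{2}\sigma)=0$. The following minimal sketch (ours; it only assumes these $r$ values and the hopping-parameter definitions $K=1/[2(M+2r)]$ and $K=1/[2(M+r)]$) evaluates the parity-broken branch and translates $|K|>1/4$ into the mass-parameter windows quoted in the text.

```python
# Sketch of the strong-coupling gap equations sigma = 1 - 16 K^2 pi^2, pi (1 - 16 K^2 sigma) = 0,
# valid for both staggered-Wilson types at the r values chosen in the text.
import numpy as np

def broken_branch(K):
    """Parity-broken solution: sigma = 1/(16 K^2), pi^2 = sigma (1 - sigma); real only if |K| >= 1/4."""
    sigma = 1.0 / (16.0 * K**2)
    pi_sq = sigma * (1.0 - sigma)
    return sigma, (np.sqrt(pi_sq) if pi_sq >= 0 else None)

for K in (0.20, 0.25, 0.30, 0.40):
    sigma, pi = broken_branch(K)
    print(f"K = {K:4.2f}:  sigma = {sigma:6.4f},  pi = {pi}")
# pi is None below |K| = 1/4 (no parity-broken solution), vanishes at |K| = 1/4,
# and is nonzero above it, i.e. |K_c| = 1/4.

# |K| > 1/4 is equivalent to |M + shift| < 2, with shift = 2r (Hoelbling) or r (Adams):
for name, mass_shift in (("Hoelbling, r = 2*sqrt(2) ", 4 * np.sqrt(2)),
                         ("Adams,     r = 16*sqrt(3)", 16 * np.sqrt(3))):
    print(f"{name}: parity-broken window {-mass_shift - 2:.3f} < M < {-mass_shift + 2:.3f}")
```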
First, we derive meson condensates from the one-point function $\mathcal{M}_{x}=\bar{\chi}_{x}\chi_{x}$. The equation for the one-point function is obtained as shown in Fig. \[One-A\], $$\begin{aligned} - \Sigma_x & \equiv - \langle \mathcal{M}_x \rangle {\nonumber}\\ & = - \langle \mathcal{M}_x \rangle_0 + 2K^2 \sum_\mu \Sigma_x \Sigma_{x+\hat{\mu}} - 2 \cdot \displaystyle \frac {1}{(4!)^2 \cdot 2^3} (Kr)^2 \sum_{\mu \neq \nu \neq \rho \neq \sigma} \Sigma_x \Sigma_{x+\hat{\mu}+\hat{\nu}+\hat{\rho}+\hat{\sigma}} \ . \label{Eq:HPE-Adams1}\end{aligned}$$ ![Feynman diagram for mesonic one-point functions in the $\mathcal{O}(K^{3})$ self-consistent equations of HPE with the Adams fermion. There is a 4-hopping fundamental diagram, which is peculiar to this fermion.[]{data-label="One-A"}](HPEwAdams.eps){height="3cm"} We substitute $\Sigma_x=\sigma_x +i \epsilon_x \pi_x$ for $\Sigma_{x}$ in Eq. (\[Eq:HPE-Adams1\]) and obtain the self-consistent equation $$- \left( \sigma +i \epsilon_x \pi \right)= -1 + 2K^2 \cdot 4 \left( \sigma^2 + \pi^2 \right) - 2 \cdot \displaystyle \frac {1}{(4!)^2 \cdot 2^3} (Kr)^2 \cdot 4! \left( \sigma +i \epsilon_x \pi \right)^2 \ . \label{HPE-AdamsSelf1}$$ From this, we obtain $- \sigma = -1 + 16 K^2 \pi^2$ and $- i \pi = - 8K^2 \cdot 2 i \sigma \pi$. Here we have set $r=16\sqrt{3}$ to make the equation simple. We again have two solutions: For $\pi=0$, we have a trivial solution $\sigma=1$. For $\pi \neq 0$, we have
a non-trivial solution as $$\sigma = \displaystyle \frac{1}{16K^2},\,\,\,\,\,\,\,\,\,\,\,\,\,\, \pi = \pm \sqrt{ \displaystyle \frac{1}{16K^2} \left( 1- \displaystyle \frac{1}{16K^2} \right) } \ . \label{condA}$$ It indicates that parity-broken phase appears in the range of the hopping parameter as $\mid{K}\mid > 1/4$ or equivalently $-16\sqrt{3}-2<M<-16\sqrt{3}+2$. Next, we discuss the meson mass from a two-point function of the meson operator $\mathcal{S}(0,x)\equiv\langle\mathcal{M}_{0}\mathcal{M}_{x} \rangle$. From Fig. \[Two-A\] we derive the following equation for two-point functions, ![Feynman diagram for mesonic two-point functions for $ \mathcal{O}(K^{3})$ self-consistent equations of HPE with the Adams fermion.[]{data-label="Two-A"}](HPEwAdams-Mass.eps){height="4cm"} $$\begin{aligned} \mathcal{S}(0,x) = &\langle {{\bar{\chi}}}_0^a\chi_0^a {{\bar{\chi}}}_x^b \chi_x^b \rangle {\nonumber}\\ =&- \delta_{0x} N_c {\nonumber}\\ &-K^2 \langle {{\bar{\chi}}}_0^a \chi_0^a {{\bar{\chi}}}_0^c (\eta_{\mu,0})^2 \biggl[ U_{\mu,0}^{cd} \chi_{\hat{\mu}}^d {{\bar{\chi}}}_{\hat{\mu}}^e (U_{\mu,0}^{\dagger})^{ef}+ (U_{\mu,-\hat{\mu}}^{\dagger})^{cd} \chi_{-\hat{\mu}}^d {{\bar{\chi}}}_{-\hat{\mu}}^e U_{\mu,-\hat{\mu}}^{ef}\biggr] \chi_0^f {{\bar{\chi}}}_x^b \chi_x^b \rangle {\nonumber}\\ & + \left( 2 K r \epsilon \eta_5 \displaystyle \frac{1}{4! \cdot 2^4} \right)^2 \langle {{\bar{\chi}}}_0^a \chi_0^a {{\bar{\chi}}}_0^c \sum_{\alpha,\beta,\gamma,\delta=\pm} \sum_{\mu \neq \nu \neq \rho \neq \sigma} \biggl[ \left( \mathcal{W}_{\alpha\mu\beta\nu\gamma\rho\delta\sigma,0}^{(4)} \right)^{cd} \chi_{\alpha\hat{\mu}\beta\hat{\nu}\gamma\hat{\rho}\delta\hat{\sigma}}^d {{\bar{\chi}}}_{\alpha\hat{\mu}\beta\hat{\nu}\gamma\hat{\rho}\delta\hat{\sigma}}^e {\nonumber}\\ & \times \left( \mathcal{W}_{\alpha\mu\beta\nu\gamma\rho\delta\sigma,0}^{(4) \dagger} \right)^{ef} \biggr] \chi_0^f {{\bar{\chi}}}_x^b \chi_x^b \rangle \ , \label{HPE-AdamsTwo} \end{aligned}$$ where $\mathcal{W}_{\alpha\mu\beta\nu\gamma\rho\delta\sigma,x}^{(4)}$ is defined in Table. \[Table:U4\] of Appendix \[AdamsEff\]. By integrating out the link variables in the strong-coupling limit, it is simplified as $$\begin{aligned} \mathcal{S}(0,x) \equiv \langle {{\bar{\chi}}}_0^a\chi_0^a {{\bar{\chi}}}_x^b \chi_x^b \rangle =
&- \delta_{0x} N_c + K^2 \sum_{\pm \mu} \langle \chi_{\hat{\mu}}^a {{\bar{\chi}}}_{\hat{\mu}}^a {{\bar{\chi}}}_x^b \chi_x^b \rangle {\nonumber}\\ & - \left( 2 K r \displaystyle \frac{1}{4! \cdot 2^4} \right)^2 \sum_{\substack{\pm \mu, \pm \nu, \pm \rho, \pm \sigma \\ (\mu \neq \nu \neq \rho \neq \sigma)}} \langle \chi_{\hat{\mu}+\hat{\nu}+\hat{\rho} +\hat{\sigma}}^a {{\bar{\chi}}}_{\hat{\mu}+\hat{\nu}+\hat{\rho} +\hat{\sigma}}^a {{\bar{\chi}}}_x^b \chi_x^b \rangle \ . \label{Eq:HPE-Adams2}\end{aligned}$$ Then the self-consistent equation for $\mathcal{S}$ is given in momentum space as $$\begin{aligned} \mathcal{S}(p) &= - N_c + \biggl[- K^2 \sum_\mu \left( e^{-ip_\mu} + e^{ip_\mu} \right) {\nonumber}\\ &+ \left( 2 K r \displaystyle \frac{1}{4! \cdot 2^4} \right)^2 \sum_{\alpha,\beta,\gamma,\delta=\pm} \sum_{\mu \neq \nu \neq \rho \neq \sigma} e^{+i(\alpha p_\mu + \beta p_\nu + \gamma p_\rho + \delta p_\sigma)} \biggr] \mathcal{S}(p) \ . \label{HPE-AdamsSelf2}\end{aligned}$$ We finally obtain the meson propagator as $$\mathcal{S}(p) = N_c \biggl[ - 2 K^2 \sum_\mu \cos p_\mu + 16 \left( 2 K r \displaystyle \frac{1}{4! \cdot 2^4} \right)^2 \sum_{\mu \neq \nu \neq \rho \neq \sigma} \cos p_\mu \cos p_\nu \cos p_\rho \cos p_\sigma - 1 \biggr]^{-1} \ . \label{MP2}$$ Here we set $p=(i m_\pi a + \pi, \pi,\pi,\pi)$ for $1/\mathcal{S}(p)=0$ in Eq. (\[MP2\]), which gives the pion mass $m_\pi$ as $$\begin{aligned} \cosh(m_\pi a) &= 1 + \displaystyle \frac{1-16K^2}{10K^2} \ , \label{HPE-Adamspi}\end{aligned}$$ where we again set $r=16\sqrt{3}$ for simplicity.
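As a numerical cross-check (ours, not from the paper), one can insert the momenta used above into the inverse propagator of Eq. (\[MP2\]) with $r=16\sqrt{3}$ and verify that the value of $\cosh(m_{\pi}a)$ in Eq. (\[HPE-Adamspi\]), and the value of $\cosh(m_{\sigma}a)$ quoted just below, are indeed zeros of $1/\mathcal{S}(p)$. The sum over $\mu\neq\nu\neq\rho\neq\sigma$ is read here as running over all ordered quadruples of distinct directions, which is an assumption about the summation convention.

```python
# Check that p = (i m_pi a + pi, pi, pi, pi) and p = (i m_sigma a, 0, 0, 0) are poles of Eq. (MP2).
import itertools
import numpy as np

def inverse_propagator(p, K, r=16 * np.sqrt(3)):
    """The bracket of Eq. (MP2), i.e. N_c / S(p); a pole of S(p) corresponds to a zero here."""
    c = np.cos(np.asarray(p, dtype=complex))
    quartic = sum(c[m] * c[n] * c[q] * c[s]
                  for m, n, q, s in itertools.permutations(range(4), 4))
    return -2 * K**2 * c.sum() + 16 * (2 * K * r / (24 * 16))**2 * quartic - 1

for K in (0.15, 0.20, 0.25):
    cosh_pi = 1 + (1 - 16 * K**2) / (10 * K**2)   # Eq. (HPE-Adamspi)
    cosh_sig = 1 + 1 / (6 * K**2)                 # sigma-meson mass quoted below
    m_pi, m_sig = np.arccosh(cosh_pi), np.arccosh(cosh_sig)
    val_pi = inverse_propagator([1j * m_pi + np.pi, np.pi, np.pi, np.pi], K)
    val_sig = inverse_propagator([1j * m_sig, 0.0, 0.0, 0.0], K)
    print(f"K = {K:.2f}: m_pi a = {m_pi:.4f}, |1/S| at pion pole = {abs(val_pi):.1e}, "
          f"|1/S| at sigma pole = {abs(val_sig):.1e}")
# Both quantities vanish to machine precision, and m_pi goes to zero as K -> 1/4.
```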
Here the pion mass becomes zero at $|K| = 1/4$ and becomes tachyonic in the range $|K| > 1/4$. It suggests that there occurs a second-order phase transition between parity-symmetric and broken phases at $|K|=1/4$, which is consistent with Eq. (\[condA\]). We can also derive the sigma meson mass by substituting $p=(i m_\sigma a, 0, 0, 0)$ for $1/\mathcal{S}(p)=0$ in Eq. (\[MP2\]) as $$\begin{aligned} \cosh(m_\sigma a) &= 1 + \displaystyle \frac{1}{6K^2} \ . \label{HPE-Asigma}\end{aligned}$$ Effective Potential Analysis {#sec:EPA} ============================ In the previous section, we have investigated the parity-phase structure in the hopping parameter expansion. We found a strong indication of the parity-broken phase for $|K| >1/4$. In order to judge whether the parity-broken phase is realized as a vacuum, the analysis of the gap solution in the hopping parameter expansion is not enough, and we need to investigate the effective potential for meson fields. In this section, we consider the effective potential of meson fields for $SU(N)$ lattice gauge theory with staggered-Wilson fermions. In the strong-coupling limit and the large $N$ limit, the effective action can be derived exactly by integrating out the link variables [@KaS; @AokiP]. Then, by solving a saddle-point equation, we can investigate the vacuum and find meson condensates. In this
section we again begin with the Hoelbling case as an exercise, and then go on to the Adams fermion, which has better discrete symmetry. Hoelbling type {#hoelbling-type} -------------- In the strong-coupling limit we can drop the plaquette action. Then the partition function for meson fields $\mathcal{M}_x=({{\bar{\chi}}}_x \chi_x)/N$ with the source $J_{x}$ is given by $$\begin{aligned} Z(J) &= \displaystyle \int {{\cal D}}\left[ \chi, {{\bar{\chi}}}, U \right] \exp \left[ N \sum_x J_x \mathcal{M}_x + S_F \right] \ , \label{ZJ}\end{aligned}$$ where $S_{F}$ stands for the fermion action. Here we have defined $\mathcal{M}_x$ with a $1/N$ factor to ensure that it has a finite large-$N$ limit. In this case, $S_F$ is the Hoelbling-type staggered-Wilson action Eq. (\[HoelS\]). $N$ stands for the number of colors. In the large $N$ limit, we can perform the link integral. We here consider the effective action up to $\mathcal{O}(\mathcal{M}^{3})$ for the meson field $\mathcal{M}$. This order corresponds to the $\mathcal{O}(K^{3})$ self-consistent equation in the hopping parameter expansion. We develop a method to perform the link-variable integral with multi-hopping fermion action terms. In our method, we perform the link integral by introducing two kinds of link-variable measures. Now we formally rewrite the partition function as, $$\begin{aligned} Z(J) &= \displaystyle \int {{\cal D}}\left[ \chi, {{\bar{\chi}}}\right] \exp
--- abstract: | A non-uniform hypergraph $H=(V,E)$ consists of a vertex set $V$ and an edge set $E\subseteq 2^V$; the edges in $E$ are not required to all have the same cardinality. The set of all cardinalities of edges in $H$ is denoted by $R(H)$, the set of edge types. For a fixed hypergraph $H$, the Turán density $\pi(H)$ is defined to be $\lim_{n\to\infty}\max_{G_n}h_n(G_n)$, where the maximum is taken over all $H$-free hypergraphs $G_n$ on $n$ vertices satisfying $R(G_n)\subseteq R(H)$, and $h_n(G_n)$, the so-called Lubell function, is the expected number of edges in $G_n$ hit by a random full chain. This concept, which generalizes the Turán density of $k$-uniform hypergraphs, is motivated by recent work on extremal poset problems. The details connecting these two areas will be revealed at the end of this paper. Several properties of Turán density, such as supersaturation, blow-up, and suspension, are generalized from uniform hypergraphs to non-uniform hypergraphs. Other questions such as “Which hypergraphs are degenerate?" are more complicated and don’t appear to generalize well. In addition, we completely determine the Turán densities of $\{1,2\}$-hypergraphs. author: - 'Travis Johnston [^1]' - 'Linyuan Lu [^2]' title: 'Turán Problems on Non-uniform Hypergraphs' --- Introduction ============ A
hypergraph $H$ is a pair $(V,E)$; $V$ is the vertex set, and $E\subseteq 2^V$ is the edge set. If all edges have the same cardinality $k$, then $H$ is a $k$-uniform hypergraph. Turán problems on $k$-uniform hypergraphs have been actively studied for many decades. However, Turán problems on non-uniform hypergraphs are rarely considered (see [@MZ; @Lemons] for two different treatments). Very recently, several groups of people have started actively studying extremal families of sets avoiding given sub-posets. Several new problems have been established. One of them is the diamond problem: [**The diamond conjecture: [@GriLu]**]{} Any family ${\mathcal{F}}$ of subsets of $[n]:=\{1,2,\ldots,n\}$ with no four sets $A,B,C,D$ satisfying $A\subseteq B\cap C$, $B\cup C \subseteq D$ can have at most $(2+o(1)){\binom{n}{\lfloor \frac{n}{2}\rfloor}}$ subsets. This conjecture, along with many other problems, motivates us to study Turán-type problems on non-uniform hypergraphs. The details of this connection will be given in the last section. We briefly review the history of Turán Problems on uniform hypergraphs. Given a positive integer $n$ and a $k$-uniform hypergraph $H$ on $n$ vertices (or $k$-graph, for short), the Turán number ${{\rm ex}}(n,H)$ is the maximum number of edges in a $k$-graph on $n$ vertices that does not contain $H$ as
a subgraph; such a graph is called $H$-*free*. Katona et al. [@KNS] showed that $f(n,H)={{\rm ex}}(n,H)/{n\choose k}$ is a decreasing function of $n$. The limit $\displaystyle \pi(H)=\lim_{n\to \infty} f(n,H)$, which always exists, is called the [*Turán density*]{} of $H$. For $k=2$, the graph case, Erdős-Stone-Simonovits proved $\pi(G)=1-\frac{1}{\chi(G)-1}$ for any graph $G$ with chromatic number $\chi(G)\geq 3$. If $G$ is bipartite, then ${{\rm ex}}(n,G)=o(n^2)$. The magnitude of ${{\rm ex}}(n,G)$ is unknown for most bipartite graphs $G$. Let $K^r_k$ denote the complete $r$-graph on $k$ vertices. Turán determined the value of ${{\rm ex}}(n,K^2_{k})$, which implies that $\pi(K^2_k)=1-\frac{1}{k-1}$ for all $k\geq 3$. However, no Turán density $\pi(K^r_k)$ is known for any $k>r\geq 3$. The most extensively studied case is when $k=4$ and $r=3$. Turán conjectured [@Tu] that $\pi(K_4^3)= 5/9$. Erdős [@Er81] offered \$500 for determining any $\pi(K^r_k)$ with $k>r\geq 3$ and \$1000 for answering it for all $k$ and $r$. The upper bounds for $\pi(K_4^3)$ have been sequentially improved: $0.6213$ (de Caen [@dC94]), $0.5936$ (Chung-Lu [@ChLu]), $0.56167$ (Razborov [@Razborov], using the flag algebra method). There are a few uniform hypergraphs whose Turán density has been determined: the Fano plane [@FS05; @KS05a], expanded triangles [@KS05b], $3$-books, $4$-books [@FMP], $F_5$ [@FrFu83], extended complete graphs [@Pik], etc.
In particular, Baber [@baber] recently found the Turán density of many $3$-uniform hypergraphs using flag algebra methods. For a more complete survey of methods and results on uniform hypergraphs see Peter Keevash’s survey paper [@KeevashSurvey]. A non-uniform hypergraph $H=(V,E)$ consists of a vertex set $V$ and an edge set $E\subseteq 2^V$. Here the edges of $E$ could have different cardinalities. The set of all the cardinalities of edges in $H$ is denoted by $R(H)$, the set of edge types. For a fixed hypergraph $H$, the Turán density $\pi(H)$ is defined to be $\lim_{n\to\infty}\max_{G_n}h_n(G_n)$, where the maximum is taken over all $H$-free hypergraphs $G_n$ on $n$ vertices satisfying $R(G_n)\subseteq R(H)$. $h_n(G_n)$, the so called Lubell function, is the expected number of edges in $G_n$ hit by a random full chain. The Lubell function has been a very useful tool in extremal poset theory, in particular it has been used in the study of the diamond conjecture. In section 2, we show that our notion of $\pi(H)$ is well-defined and is consistent with the usual definition for uniform hypergraphs. We also give examples of Turán densities for several non-uniform hypergraphs. In section 3, we generalize the supersaturation Lemma to non-uniform hypergraphs. Then
we prove that blowing-up will not affect the Turán density. Using various techniques, we determine the Turán density of every $\{1,2\}$-hypergraph in section 4. Remarkably, the Turán densities of $\{1,2\}$-hypergraphs are in the set $$\bigg\{1,\frac{9}{8}, \frac{5}{4}, \frac{3}{2}, \frac{5}{3}, \ldots, 2-\frac{1}{k},\ldots \bigg \}.$$ Among $r$-uniform hypergraphs, $r$-partite hypergraphs have the smallest possible Turán density. Erdős proved that any $r$-uniform hypergraph forbidding the complete $r$-uniform $r$-partite hypergraphs can have at most $O(n^{r-1/\delta})$ edges. We generalize this theorem to non-uniform hypergraphs. A hypergraph is degenerate if it has the smallest possible Turán density. For $r$-uniform hypergraphs, a hypergraph $H$ is degenerate if and only if $H$ is the subgraph of a blow-up of a single edge. Unlike the degenerate $r$-uniform hypergraphs, the degenerate non-uniform hypergraphs are not classified. For non-uniform hypergraphs, chains–one natural extension of a single edge–are degenerate. Additionally, every subgraph of a blow-up of a chain is also degenerate. However, we give an example of a degenerate, non-uniform hypergraph not contained in any blow-up of a chain. This leaves open the question of which non-uniform hypergraphs are degenerate. In section 6, we consider the suspension of hypergraphs. The suspension of a hypergraph $H$ is a new hypergraph, denoted by $S(H)$, with
one additional vertex, $\ast$, added to every edge of $H$. In a hypergraph Turán problem workshop hosted by the AIM Research Conference Center in 2011, the following conjecture was posed: $\displaystyle \lim_{t\to\infty}\pi(S^t(K^{r}_n))=0$. We conjecture $\displaystyle\lim_{t\to\infty} \pi(S^t(H))=|R(H)|-1$ holds for any hypergraph $H$. Some partial results are proved. Finally in the last section, we will point out the relation between the Turán problems on hypergraphs and extremal poset problems. Non-uniform hypergraphs ======================= Notation -------- Recall that a hypergraph $H$ is a pair $(V,E)$ with the vertex set $V$ and edge set $E\subseteq 2^{V}$. Here we place no restriction on the cardinalities of edges. The set $R(H):=\{|F|\colon F\in E\}$ is called the set of its [*edge types*]{}. A hypergraph $H$ is $k$-uniform if $R(H)=\{k\}$. It is non-uniform if it has at least two edge types. For any $k\in R(H)$, the [*level hypergraph*]{} $H^k$ is the hypergraph consisting of all $k$-edges of $H$. A uniform hypergraph $H$ has only one (non-empty) level graph, i.e., $H$ itself. In general, a non-uniform hypergraph $H$ has $|R(H)|$ (non-empty) level hypergraphs. Throughout the paper, for any finite set $R$ of non-negative integers, we say, $G$ is an $R$-graph if $R(G)\subseteq R$. We write $G^R_n$ for a hypergraph on
$n$ vertices with $R(G)\subseteq R$. We simplify it to $G$ if $n$ and $R$ are clear from context. Let $R$ be a fixed set of edge types. Let $H$ be an $R$-graph. The number of vertices in $H$ is denoted by $v(H):=|V(H)|$. Our goal is to measure the edge density of $H$ and be able to compare it (in a meaningful way) to the edge density of other $R$-graphs. The standard way to measure this density would be: $$\mu(H) = \frac{|E(H)|}{\sum_{k\in R(H)}\binom{v(H)}{k}}.$$ This density ranges from 0 to 1 (as one would expect)–a complete $R$-graph having a density of 1. Unfortunately, this is no longer a useful measure of density since the number of edges with maximum cardinality will dwarf the number of edges of all other sizes. Specifically, one could take a $k$-uniform hypergraph (where $k=\max\{r:r\in R(H)\}$) on enough vertices and make its density as close to 1 as one likes. The problem is that this $k$-uniform hypergraph is quite different from the complete $R$-graph (when $|R|>1$) with the same number of vertices. Instead, we use the Lubell function to measure the edge density. This is adapted from the use of the Lubell function in the study of families of subsets. For a
non-uniform hypergraph $G$ on $n$ vertices, we define the Lubell function of $G$ as $$\label{eq:lubell} h_{n}(G):=\sum_{F\in E(G)}\frac{1}{\binom{n}{|F|}}=\sum_{k\in R(G)}\frac{|E(G^{k})|}{\binom{n}{k}}.$$ The Lubell function is the expected number of edges hit by a random full chain. Namely, pick a uniformly random permutation $\sigma$ on $n$ vertices; define a random full chain $C_\sigma$ by $$\{ \emptyset, \{\sigma(1)\}, \{\sigma(1), \sigma(2)\}, \cdots, [n]\}.$$ Let $X=|E(G)\cap C_{\sigma}|$, the number of edges hit by the random full chain. Then $$\label{eq:X} h_n(G)={{\rm E}}(X).$$
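As a quick illustration (a toy sketch of ours, not an example from the paper), the following code evaluates $h_n(G)$ of Eq. (\[eq:lubell\]) directly for a small $\{1,2\}$-graph and also estimates ${{\rm E}}(X)$ of Eq. (\[eq:X\]) by sampling random full chains; the Monte Carlo estimate converges to the exact value.

```python
# Lubell function of a toy {1,2}-graph on [6], computed exactly and via random full chains.
import random
from math import comb

n = 6
vertices = list(range(1, n + 1))
# toy graph: singleton edges inside {1,2,3} plus every pair not contained in {1,2,3}
edges = [frozenset({v}) for v in (1, 2, 3)] + \
        [frozenset({u, v}) for u in vertices for v in vertices
         if u < v and not {u, v} <= {1, 2, 3}]

h_exact = sum(1 / comb(n, len(F)) for F in edges)         # Eq. (eq:lubell)

def chain_hits():
    """Number of edges of the graph lying on a uniformly random full chain C_sigma."""
    sigma = random.sample(vertices, n)
    chain = {frozenset(sigma[:k]) for k in range(n + 1)}
    return sum(F in chain for F in edges)

trials = 200_000
h_mc = sum(chain_hits() for _ in range(trials)) / trials  # estimates E(X) of Eq. (eq:X)
print(f"exact h_n(G) = {h_exact:.4f},  Monte Carlo estimate of E(X) = {h_mc:.4f}")
```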
Given two hypergraphs $H_1$ and $H_2$, we say $H_1$ is a subgraph of $H_2$, denoted by $H_1\subseteq H_2$, if there exists a 1-1 map $f\colon V(H_1)\to V(H_2)$ so that $f(F)\in E(H_2)$ for any $F\in E(H_1)$. Whenever this occurs, we say the image $f(H_1)$ is an [*ordered copy*]{} of $H_2$, written as $H_1\stackrel{f}{\hookrightarrow} H_2$. A necessary condition for $H_1\subseteq H_2$ is $R(H_1)\subseteq R(H_2)$. Given a subset $K\subseteq V(H)$ and a subset $S\subseteq R(H)$, the [*induced subgraph*]{}, denoted by $H^{[S]}[K]$, is a hypergraph on $K$ with the edge set $\{F\in E(H)\colon F\subseteq K \mbox{ and } |F|\in S\}$. When $S=R(H)$, we simply write $H[K]$ for $H^{[S]}[K]$. Given a positive integer $n$ and a subset $R\subseteq [n]$, the complete hypergraph $K^R_{n}$ is a hypergraph on $n$ vertices with edge set $\bigcup_{i\in R} \binom{[n]}{i}$. For example, $K^{\{k\}}_n$ is the complete $k$-uniform hypergraph. $K^{[k]}_n$ is the non-uniform hypergraph with all possible edges of cardinality at most $k$.

[Figure: illustration of $K_{3}^{\{2,3\}}$.]

[Figure: a (tight) cycle $C_{6}^{\{2,3\}}$.]

Given a family of hypergraphs ${\mathcal{H}}$ with common set of edge-types $R$, we define $$\pi_n({\mathcal{H}}):= \max \left\{h_{n}(G)\colon v(G)=n, G\subseteq K^{R}_{n}, \text{ and } G \mbox{ contains no subgraph in }{\mathcal{H}}\right\}.$$ A hypergraph $G:=G_n^R$ is [*extremal*]{} with respect to the family ${\mathcal{H}}$ if

1. $G$ contains no subgraph in ${\mathcal{H}}$.

2. $h_n(G)=\pi_n({\mathcal{H}})$.

The Turán density of ${\mathcal{H}}$ is defined to be $$\begin{aligned} \pi({\mathcal{H}}):&= \lim_{n\to \infty} \pi_n({\mathcal{H}}) \\ &= \lim_{n\to \infty} \max \left\{\sum_{F\in E(G)} \frac{1}{\binom{n}{|F|}}\colon v(G)=n, G\subseteq K^{R}_{n}, \text{ and } G \mbox{ contains no subgraph in }{\mathcal{H}}\right\}\end{aligned}$$ when the limit exists; we will soon show that this limit
always exists. When ${\mathcal{H}}$ contains a single hypergraph $H$, we write $\pi(H)$ instead of $\pi(\{H\})$. Throughout, we will consider $n$ growing to infinity, and $R$ to be a fixed set (not growing with $n$). Note that $\pi(\mathcal{H})$ agrees with the *usual* definition of $$\displaystyle \pi(\mathcal{H})=\lim_{n\to\infty}\frac{\text{ex}(\mathcal{H},n)}{\binom{n}{k}}$$ when $\mathcal{H}$ is a set of $k$-uniform hypergraphs. The following result is a direct generalization of the Katona-Nemetz-Simonovits theorem [@KNS]. For any family ${\mathcal{H}}$ of hypergraphs with a common edge-type $R$, $\pi(\mathcal{H})$ is well-defined, i.e. the limit $\displaystyle \lim_{n\to\infty}\pi_{n}({\mathcal{H}})$ exists. It suffices to show that $\pi_n({\mathcal{H}})$, viewed as a sequence in $n$, is decreasing. Write $R=\{k_1,k_2,...,k_{r}\}$. Let $G_{n}\subseteq K^{R}_{n}$ be a hypergraph with $v(G_{n})=n$ not containing any member of ${\mathcal{H}}$ and with Lubell value $h_n(G_{n})=\pi_n({\mathcal{H}})$. For any $\ell< n$, consider a random subset $S$ of the vertices of $G$ with size $|S|=\ell$. Let $G_{n}[S]$ be the induced subgraph of $G_{n}$ (whose vertex set is restricted to $S$). Clearly $$\pi_\ell({\mathcal{H}}) \geq \mathbb{E}(h_{\ell}(G_{n}[S])).$$ Write $E(G_{n})=E_{k_1}\bigcup E_{k_2}\bigcup ...\bigcup E_{k_r}$ where $E_{k_i}$ contains all the edges of size $k_{i}$. Note that the expected number of edges of size $k_i$ in $G_{n}[S]$ is precisely $\frac{\binom{\ell}{k_i}}{\binom{n}{k_i}}|E_{k_i}|$. It follows that $$\begin{aligned} \pi_\ell({\mathcal{H}}) &\geq \mathbb{E}(h_{\ell}(G_{n}[S])) \\ &=\sum_{i=1}^{r} \frac{\mathbb{E}(|E_{k_i} \bigcap \binom{S}{k_i}|)}{\binom{\ell}{k_{i}}} \\ &=\sum_{i=1}^{r} \frac{ \frac{\binom{\ell}{k_{i}}}{\binom{n}{k_i}}|E_{k_i}|}{\binom{\ell}{k_{i}}} \\ &=\sum_{i=1}^{r} \frac{ |E_{k_i}|
}{\binom{n}{k_i}} \\ &= h_{n}(G_{n})\\ &=\pi_n({\mathcal{H}}).\end{aligned}$$ The sequence $\pi_n({\mathcal{H}})$ is non-negative and decreasing; therefore it converges. For a fixed set $R:=\{k_1,k_2,\ldots, k_r\}$ (with $k_1<k_2<\cdots < k_r$), an [*$R$-flag*]{} is an $R$-graph containing exactly one edge of each size. The chain $C^R$ is a special $R$-flag with the edge set $${{\rm E}}(C^R)=\{[k_1], [k_2],\ldots, [k_r]\}.$$ \[p1\] For any hypergraph $H$, the following statements hold.

1. $|R(H)|-1\leq \pi_n(H)\leq |R(H)|$.

2. For a subgraph $H'$ of $H$, we have $\pi(H')\leq \pi(H)- |R(H)|+|R(H')|$.

3. For any $R$-flag $L$ on $m$ vertices and any $n\geq m$, we have $\pi_n(L)=|R|-1$.

[**Proof:**]{} Pick any maximal proper subset $R'$ of $R(H)$. Consider the complete graph $K_n^{R'}$. Since $K_n^{R'}$ misses one type of edge in $R(H)\setminus R'$, it does not contain $H$ as a subgraph. Thus $$\pi_n(H)\geq h_n(K_n^{R'})=|R'|=|R(H)|-1.$$ The upper bound is due to the fact that $h_n(K_n^{R(H)})=|R(H)|$. The proof of item 2 is similar. Let $S=R(H')$ and $G^S_n$ be an extremal hypergraph for $\pi_n(H')$. Extend $G^S_n$ to $G^{R(H)}_n$ by adding all the edges with cardinalities in $R(H)\setminus S$. The resulting graph $G^{R(H)}_n$ is $H$-free. We have $$\pi_n(H)\geq h_n(G^{R(H)}_n)=h_{n}(G^S_n)+ |R(H)|-|S|= |R(H)|-|R(H')|+ \pi_n(H').$$ Taking the limit as $n$ goes to infinity, we have $$\pi(H)\geq |R(H)|-|R(H')|+ \pi(H').$$ Finally, for item 3, consider an $L$-free hypergraph $G_n^R$. Pick
a random $n$-permutation $\sigma$ uniformly. Let $X$ be the number of edges of $G^R_n$ hit by a random flag $\sigma(L)$. Note that each edge $F$ has probability $\frac{1}{{n\choose |F|}}$ of being hit by $\sigma(L)$. We have $$\label{eq:ef} {{\rm E}}(X)=\sum_{F\in E(G)}\frac{1}{{n\choose |F|}}=h_n(G).$$ Since $G^R_n$ is $L$-free, we have $X\leq r-1$. Taking the expectation, we have $$h_n(G^R_n)={{\rm E}}(X)\leq r-1.$$ Hence, $\pi_n(H)\leq r-1$. The result is followed after combining with item 1. $\square$ A hypergraph $H$ is called [*degenerate*]{} if $\pi(H)=|R(H)|-1$. By Proposition \[p1\], flags, and specifically chains, are degenerate hypergraphs. A necessary condition for $H$ to be degenerate is that every level hypergraph $H^{k_i}$ is $k_i$-partite. The following examples will show that the converse is not true. [**Example 1:**]{} The complete hypergraph $K^{\{1,2\}}_2$ has three edges $\{1\}$, $\{2\}$, and $\{1,2\}$. We claim $$\label{eq:k12} \pi(K^{\{1,2\}}_2)=\frac{5}{4}.$$ The lower bound is from the following construction. Partition $[n]$ into two parts $A$ and $B$ of nearly equal size. Consider the hypergraph $G$ with the edge set $$E(G)={A\choose 1} \cup \left({[n]\choose 2}\setminus {A\choose 2} \right).$$ It is easy to check $h_n(G)=\frac{5}{4}+O(\frac{1}{n})$ and that $G$ contains no copy of $K_{2}^{\{1,2\}}$. Now we prove the upper bound. Consider any $K^{\{1,2\}}_2$-free hypergraph $G$ of edge-type $\{1,2\}$ on $n$ vertices. Let
$A$ be the set of all singleton edges. For any $x,y\in A$, $xy$ is not a 2-edge of $G$. We have $$\begin{aligned} h_n(G)&\leq \frac{|A|}{n} + 1-\frac{{|A|\choose 2}}{{n\choose 2}} \\ &= 1+ \frac{|A|}{n} -\frac{|A|^2}{n^2} + O\left(\frac{1}{n}\right) \\ &\leq 1+ \frac{1}{4} + O\left(\frac{1}{n}\right).\end{aligned}$$ In the last step, we use the fact that $f(x)=1+x-x^2$ has the maximum value $\frac{5}{4}$. Combining the upper and lower bounds we have $\pi(K^{\{1,2\}}_2)=\frac{5}{4}$. The argument is easily generalized to the complete graph $K^{\{1,k\}}_k$ (for $k>1$). We have $$\label{eq:k1k} \pi(K^{1,k}_k)=1+ \frac{k-1}{k^{\frac{k}{k-1}}}.$$ Let $H$ be a hypergraph. The [*suspension*]{} of $H$, denoted by $S(H)$, is a new hypergraph with the vertex set $V(H)\cup \{\ast\}$ and the edge set $\{F\cup \{\ast\}\colon F\in E(H)\}$. Here $\ast$ is a new vertex not in $H$. Let $H$ be a hypergraph. The $k$-degree of a vertex $x$, denoted $d_{k}(x)$, is the number of edges of size $k$ containing $x$. [**Example 2:**]{} Consider $H:=S(K^{\{1,2\}}_2)$. The edges of $H$ are $\{1,2\}$, $\{2,3\}$, $\{1,2,3\}$. We claim $$\pi(S(K^{\{1,2\}}_2))= \frac{5}{4}.$$ The lower bound is from the following construction. Partition $[n]$ into two parts $A$ and $B$ of nearly equal size. Consider the hypergraph $G$ with the edge set $E=E_{2}\bigcup E_{3}$ where $E_{2}=\binom{A}{2}\bigcup \binom{B}{2}$ and $E_{3}=\binom{[n]}{3}\setminus \left(\binom{A}{3} \bigcup \binom{B}{3}\right)$. It is
easy to check $h_n(G)=\frac{5}{4}+O\left(\frac{1}{n}\right)$ and that $G$ is $H$-free. Now we prove the upper bound. Consider any $H$-free hypergraph $G$ of edge-type $\{2,3\}$ on $n$ vertices. Recall that $d_{2}(v)$ denotes the number of 2-edges that contain $v$. For each pair of 2-edges that intersect $v$ there is a unique 3-set containing those two pairs. This 3-set cannot appear in the edge set of $G$ since $G$ is $H$-free. We say that the edge is forbidden. Note that each 3-edge may be forbidden up to 3 times in this manner–depending on which of the three vertices we call $v$. Hence there are at least $\frac{1}{3} \sum_{v\in [n]} \binom{d_{2}(v)}{2}$ 3-edges which are not in $G$. Note that this is true for any $H$-free graph $G$ with number of vertices. Hence $$\begin{aligned} h_n(G) &\leq \frac{ \binom{n}{3}-\frac{1}{3}\sum_{v\in [n]} \binom{d_{2}(v)}{2} }{\binom{n}{3}} + \frac{ \frac{1}{2}\sum_{v\in [n]} d_{2}(v)}{\binom{n}{2}} \\ &= 1-\frac{\sum_{v\in [n]} d_2(v)^{2}}{6\binom{n}{3}} +\left(\frac{1}{6\binom{n}{3}}+\frac{1}{2\binom{n}{2}}\right)\sum_{v\in [n]} d_2(v) +1.\end{aligned}$$ Applying Cauchy-Schwarz Inequality and letting $m:=\sum_vd_2(v)$, we have $$\begin{aligned} h_n(G) &\leq \frac{-\left(\sum_{v\in [n]} d_2(v)\right)^2}{6n\binom{n}{3}} +\left(\frac{1}{6\binom{n}{3}}+\frac{1}{2\binom{n}{2}}\right)\sum_{v\in [n]} d_2(v) + 1 \\ &=-\frac{m^2}{n^4} + \frac{m}{n^2}+1 +O\left(\frac{1}{n}\right)\\ &\leq \frac{5}{4} +O\left(\frac{1}{n}\right).\end{aligned}$$ In the last step, we use the fact that $f(x)=1+x-x^2$ has the maximum value $\frac{5}{4}$. Taking the limit, we get $\pi(S(K_{2}^{\{1,2\}}))\leq \frac{5}{4}.$ We
can generalize this construction, giving the following lower bound for $S^k(K_2^{\{1,2\}})$ (the $k$-th suspension of $K_2^{\{1,2\}}$). The details of the computation are omitted. $$\label{eq:sk122} \pi(S^k(K_2^{\{1,2\}}))\geq 1+ \frac{1}{2^{k+1}}.$$ \[conj:1\] For any $k\geq 2$, $\pi(S^k(K_2^{\{1,2\}}))= 1+ \frac{1}{2^{k+1}}.$ Supersaturation and Blowing-up ============================== Supersaturation Lemma [@ErSi] is an important tool for uniform hypergraphs. There is a natural generalization of the supersaturation lemma and blowing-up in non-uniform hypergraphs. [**(Supersaturation)**]{} For any hypergraph $H$ and $a>0$ there are $b$, $n_0>0$ so that if $G$ is a hypergraph on $n>n_0$ vertices with $R(G)=R(H)$ and $h_n(G)>\pi(H)+a$ then $G$ contains at least $b{n\choose v(H)}$ copies of $H$. [**Proof:**]{} Let $R:=R(H)$ and $r:=|R|$. Since $\displaystyle \pi(H)=\lim_{n\to\infty}\pi_n(H)$, there is an $n_0$ so that for $m\geq n_0$, $\pi_m(H)\leq \pi(H)+\frac{a}{2}$. For any $n_0\leq m\leq n$, there must be at least $\frac{a}{2r}{n\choose m}$ $m$-sets $M\subset V(G)$ inducing a $R$-graph $G[M]$ with $h(G[M])>\pi(H)+\frac{a}{2}$. Otherwise, we have $$\sum_{M}h_m(G[M])\leq \left(\pi(H)+\frac{a}{2}\right){n\choose m} + \frac{a}{2r}{n\choose m}r=(\pi(H)+a){n\choose m}.$$ But we also have $$\begin{aligned} \sum_Mh_m(G[M]) &=\sum_M \sum_{\stackrel{F\in E(G)}{F\subseteq M}}\frac{1}{{m\choose |F|}} \\ &=\sum_{F\in E(G)}\sum_{M\supseteq F}\frac{1}{{m\choose |F|}} \\ &= \sum_{F\in E(G)} \frac{{n-|F| \choose m-|F|}}{{m\choose |F|}} \\ &=\sum_{F\in E(G)} \frac{{n\choose m}}{{n\choose |F|}}\\ &= {n\choose m}h_n(G).\end{aligned}$$ This is a contradiction to the assumption that $h_n(G)>\pi(H)+a$. By the choice of $m$, each of these $m$-sets contains
a copy of $H$, so the number of copies of $H$ in $G$ is at least $\frac{a}{2r}{n\choose m}/{{n-v(H)\choose m-v(H)}}=b {n\choose v(H)}$ where $b:=\frac{a}{2r}{m\choose v(H)}^{-1}$. $\square$ Supersaturation can be used to show that “blowing up” does not change the Turán density $\pi(H)$ just like in the uniform cases. For any hypergraph $H_n$ and positive integers $s_1,s_2,\ldots, s_n$, the [*blowup*]{} of $H$ is a new hypergraph $(V,E)$, denoted by $H_n(s_1,s_2,\ldots, s_n)$, satisfying 1. $V:=\sqcup_{i=1}^n V_i$, where $|V_i|=s_i$. 2. $E=\cup_{F\in {{\rm E}}(H)} \prod_{i\in F} V_i$. When $s_1=s_2=\cdots=s_n=s$, we simply write it as $H(s)$. Consider the following simple example. Take $H$ to be the hypergraph with vertex set $[3]$ and edge set $E=\{\{1,2\}, \{1,2,3\}\}$, a chain. Consider the blow-ups $H(2,1,1)$ and $H(1,1,2)$ illustrated below. (0,1)–(2,0)–(4,0)–cycle; at (0,1) \[vertex\] (v1) \[label=above:[1]{}\] ; at (2,0) \[vertex\] (v2) \[label=below:[2]{}\] ; at (4,0) \[vertex\] (v3) \[label=right:[3]{}\] ; (v1)–(v2); at (2,-2) (empty) [$H$]{}; (0,1)–(2,0)–(4,0)–cycle; (0,-1)–(2,0)–(4,0)–cycle; at (0,1) \[vertex\] (v11) \[label=above:[$v_{1,1}$]{}\] ; at (0,-1) \[vertex\] (v12) \[label=below:[$v_{1,2}$]{}\] ; at (2,0) \[vertex\] (v2) \[label=left:[$v_2$]{}\] ; at (4,0) \[vertex\] (v3) \[label=right:[$v_3$]{}\] ; (v11)–(v2); (v12)–(v2); at (2,-2) [$H(2,1,1)$]{}; (0,0)–(2,0)–(4,1)–cycle; (0,0)–(2,0)–(4,-1)–cycle; at (0,0) \[vertex\] (v1) \[label=above:[$v_{1}$]{}\] ; at (2,0) \[vertex\] (v2) \[label=right:[$v_{2}$]{}\] ; at (4,1) \[vertex\] (v31) \[label=above:[$v_{3,1}$]{}\] ; at (4,-1) \[vertex\] (v32) \[label=below:[$v_{3,2}$]{}\]
; (v1)–(v2); at (2,-2) [$H(1,1,2)$]{}; In the blow-up $H(2,1,1)$ vertex 1 splits into vertices $v_{1,1}$ and $v_{1,2}$; vertex 2 becomes $v_2$ and vertex 3 becomes $v_3$. In the blow-up $H(1,1,2)$ vertex 3 splits into vertices $v_{3,1}$ and $v_{3,2}$; vertex 1 becomes $v_1$ and vertex 2 becomes $v_2$. [**(Blowing up)**]{} \[blowup\] Let $H$ be any hypergraph and let $s\geq 2$. Then $\pi(H(s))=\pi(H)$. [**Proof:**]{} Let $R:=R(H)$. By the supersaturation lemma, for any $a>0$ there is a $b>0$ and an $n_0$ so that any $R$-graph $G$ on $n\geq n_0$ vertices with $h_n(G)>\pi(H)+a$ contains at least $b{n \choose v(H)}$ copies of $H$. Consider an auxiliary $v(H)$-uniform hypergraph $U$ on the same vertex set as $G$ where edges of $U$ correspond to copies of $H$ in $G$. For any $S>0$, there is a copy of $K=K^{v(H)}_{v(H)}(S)$ in $U$. This follows from the fact that $\pi(K^{v(H)}_{v(H)}(S))=0$ since it is $v(H)$-partite, and $h_{n}(U)=b>0$. Now color each edge of $K$ by one of $v(H)!$ colors, each color corresponding to one of $v(H)!$ possible orders the vertices of $H$ are mapped to the parts of $K$. The pigeon-hole principle gives us that one of the color classes contains at least $S^{v(H)}/v(H)!$ edges. For large enough $S$ there is a
monochromatic copy of $K^{v(H)}_{v(H)}(s)$, which gives a copy of $H(s)$ in $G$. $\square$ [**(Squeeze Theorem)**]{} Let $H$ be any hypergraph. If there exists a hypergraph $H^{\prime}$ and integer $s\geq 2$ such that $H^{\prime}\subseteq H\subseteq H^{\prime}(s)$ then $\pi(H)=\pi(H^{\prime})$. [**Proof:**]{} One needs only observe that for any hypergraphs $H_{1}\subseteq H_{2}\subseteq H_{3}$ that $\pi(H_{1})\leq \pi(H_{2})\leq \pi(H_{3})$. If $H_{3}=H_{1}(s)$ for some $s\geq 2$ then $\pi(H_{1})=\pi(H_{3})$ by the previous theorem. $\square$ Turán Densities of $\{1,2\}$-hypergraphs ======================================== In this section we will determine the Turán density for any hypergraph $H$ with $R(H)=\{1,2\}$. We begin with the following more general result. Let $H=H^{1}\cup H^{k}$ be a hypergraph with $R(H)=\{1,k\}$ and $E(H^{1})=V(H^{k})$. Then $$\pi(H) = \begin{cases} 1+\pi(H^{k}) & \text{if } \pi(H^{k})\geq 1-\frac{1}{k}; \\ 1+\left(\frac{1}{k(1-\pi(H^{k}))}\right)^{1/(k-1)}\left(1-\frac{1}{k}\right) & \text{otherwise.} \end{cases}$$ [**Proof:**]{} For each $n\in \mathbb{N}$, let $G_{n}$ be any $H$-free graph $n$ vertices with $h_{n}(G_{n})=\pi_{n}(H)$. Partition the vertices of $G_{n}$ into $X_{n}=\{v\in V(G_{n}):\{v\}\in E(G)\}$ and $\bar{X_{n}}$ containing everything else. Say that $|X_{n}|=x_{n}n$ and $|\bar{X_{n}}|=(1-x_{n})n$ for some $x_{n}\in [0,1]$. Since $(x_{n})$ is a sequence in $[0,1]$ it has a convergent subsequence. Consider $(x_{n})$ to be the convergent subsequence, and say that $x_{n}\to x\in [0,1]$. With the benefit of hindsight, we know that $x>0$, however, for the upper bound portion of this proof
we will not assume this knowledge. Since there is no copy of $H$ in $G_{n}$, it follows that $G_{n}[X_{n}]$ contains no copy of $H^{k}$. We have that $$\begin{aligned} \pi(H) &=\lim_{n\to\infty} h_{n}(G_{n}) \\ &=\lim_{n\to\infty} \sum_{F\in H^{1}}\frac{1}{\binom{n}{1}} + \sum_{F\in H^{k}} \frac{1}{\binom{n}{k}} \\ &\leq \lim_{n\to\infty} \frac{x_{n}n}{\binom{n}{1}} + \frac{\binom{n}{k}-(1-\pi_{x_{n}n}(H^{k}))\binom{x_{n}n}{k}}{\binom{n}{k}} \\ &=\lim_{n\to\infty} 1 + x_{n} - (1-\pi_{x_{n}n}(H^{k}))\frac{\binom{x_{n}n}{k}}{\binom{n}{k}} \\ &\leq \lim_{n\to\infty} \begin{cases} 1+\frac{1}{\sqrt{n}} & \text{if } x_{n}n \leq \sqrt{n}, \\ 1+ x_{n} - (1-\pi_{x_{n}n}(H^{k}))x_{n}^{k} & \text{if } x_{n}n>\sqrt{n}, \end{cases} \\ &\leq \max\{1, 1+x-(1-\pi(H^{k}))x^{k}\}.\end{aligned}$$ Let $f(x)=1+x-(1-\pi(H^{k}))x^{k}$ and then note that $$\pi(H)= \lim_{n\to\infty} h_{n}(G) \leq \max_{x\in [0,1]} f(x).$$ An easy calculus exercise shows that $f^{\prime\prime}(x)<0$ for all $x>0$, and $f^{\prime}(x)=0$ when $x=\left(\frac{1}{k(1-\pi(H^{k}))}\right)^{\frac{1}{k-1}}.$ If $\frac{1}{k(1-\pi(H^{k}))}\geq 1$ then $f^{\prime}(x)>0$ when $x\in [0,1)$ and hence $f(x)$ is maximized when $x=1$. Note that $f(1)=1+\pi(H^{k})$. If, on the other hand, $\frac{1}{k(1-\pi(H^{k}))}< 1$ it follows that $f(x)$ is maximized at $x=\left(\frac{1}{k(1-\pi(H^{k}))}\right)^{1/(k-1)}$. Together, this gives us $$\pi(H) \leq \begin{cases} 1+\pi(H^{k}) & \text{if } \pi(H^{k})\geq 1-\frac{1}{k}; \\ 1+\left(\frac{1}{k(1-\pi(H^{k}))}\right)^{1/(k-1)}\left(1-\frac{1}{k}\right) & \text{otherwise}. \end{cases}$$ To get equality, take $x$ that maximizes $f(x)$ as above. For any $n\in \mathbb{N}$ (thinking of $n\to \infty$) partition $[n]$ into two sets $X$ and $\bar{X}$ with $|X|=xn$ and $|\bar{X}|=(1-x)n$. Let $E(G^{1})=\{\{v\}:v\in X\}$ and let $g^{k}$ be a $k$-uniform graph on $xn$ vertices attaining $|E(g^{k})|=\text{ex}(xn,H^{k})$
and $g^{k}$ is $H^{k}$-free. Then $$E(G^{k})=\{F\in \binom{[n]}{k}:\text{either } F\in E(g^{k}) \text{ or } F\cap \bar{X}\neq \emptyset\}.$$ Then $G=G^{1}\cup G^{k}$ is $H$-free and (by choice of $x$) we have that $\displaystyle \lim_{n\to\infty}h_{n}(G)$ attains the upper bound of $\pi(H)$. $\square$ Let us now return to the task of determining $\pi(H)$ when $H=H^{1}\cup H^{2}$. Let $H=H^{1}\cup H^{2}$. If $H^{2}$ is not bipartite, then $$\pi(H)=1+\pi(H^{2})= 1+\left(1-\frac{1}{\chi(H^{2})-1}\right)=2-\frac{1}{\chi(H^{2})-1}.$$ [**Proof:**]{} First, $\pi(H)\geq 1+\pi(H^{2})$ since one can construct an $H$-free graph $G_{n}$ by letting $$E(G_{n})=\{\{v\}:v\in V(G_{n})\}\cup E(G^{\prime}_{n})$$ where $G^{\prime}_{n}$ attains $h_{n}(G^{\prime}_{n})=\pi_{n}(H^{2})$ and $G^{\prime}_{n}$ is $H^{2}$-free. Then $$\pi(H)\geq \lim_{n\to\infty} h_{n}(G_{n}) = \lim_{n\to\infty} 1+\pi_{n}(H^{2}) = 1+\pi(H^{2}).$$ To get the upper-bound, first add every missing $1$-edge into $H$, call the new graph $H^{\prime}$. Note that $\pi(H)\leq \pi(H^{\prime})$. Note that we didn’t change the edge set $H^{2}$. The Erdős-Stone-Simonovits theorem states that if $H^{2}$ is not bipartite, then $\pi(H^{2})=1-\frac{1}{\chi(H^{2})-1}$. Also, if $H^{2}$ is not bipartite, then $\chi(H^{2})\geq 3$. With the added vertices, taking $k=2$, we apply the previous theorem. Since $$\pi(H^{2})=1-\frac{1}{\chi(H^{2})-1}\geq 1-\frac{1}{2}$$ we may conclude that $\pi(H)\leq \pi(H^{\prime})=1+\pi(H^{2})$. $\square$ It remains to investigate the cases when $H^{2}$ is bipartite. Let $H=H^{1}\cup H^{2}$. If $H^{2}$ is bipartite and $K_{2}^{\{1,2\}}\subseteq H$ then $\pi(H)=\frac{5}{4}$. [**Proof:**]{} First, in example 1, we computed $\pi(K_{2}^{\{1,2\}})=\frac{5}{4}$. Second, $H$ must
be contained in some blow-up of $K_{2}^{\{1,2\}}$ since $H^{2}$ is bipartite, i.e. there exists some $s>2$ such that $H\subseteq K_{2}^{\{1,2\}}(s)$. So, by the squeeze theorem we have $$\frac{5}{4}=\pi(K_{2}^{\{1,2\}}) \leq \pi(H) \leq \pi(K_{2}^{\{1,2\}}(s))=\frac{5}{4}.$$ Hence $\pi(H)=\frac{5}{4}$ as claimed. $\square$ We will say that $H=H^{1}\cup H^{2}$ is a ***closed path*** (from $x_{1}$ to $x_{k}$) of length $k$ if $V(H)=\{x_{1}, x_{2},...,x_{k}\}$ and $E(H^{1})=\{\{x_{1}\}, \{x_{k}\}\}$ and $E(H^{2})=\{ \{x_{i}, x_{i+1}\}:1\leq i\leq k-1\}$. We will denote a closed path of length $k$, or a closed $k$-path, by $\bar{P}_{k}$. Pictorially, we view a closed path of length $k$ as follows:

[Figure: a closed path of length $k$, drawn as the path $x_{1}x_{2}\cdots x_{k}$ of $2$-edges, with the endpoints $x_{1}$ and $x_{k}$ marked as the vertices carrying $1$-edges.]

Let $H=H^{1}\cup H^{2}$. If $H^{2}$ is bipartite and $H$ does not contain a copy of $K_{2}^{\{1,2\}}$ and $H$ contains a closed path of length $2k$, then $\pi(H)=\frac{9}{8}$. [**Proof:**]{} First, we will give a construction giving us the lower bound. For any $n\in \mathbb{N}$ let $G_{n}$ have vertex set $[n]$. Partition the vertices of $G_{n}$ into two sets $X$ and $\bar{X}$
where $|X|=\frac{3n}{4}$ and $|\bar{X}|=\frac{n}{4}$. Let $$E(G)=\{\{x\}:x\in X\} \cup \{\{x,\bar{x}\}: x\in X \text{ and } \bar{x}\in \bar{X}\}.$$ It is clear that $G_{n}$ contains no closed paths of length $2k$ when $k\geq 1$. Also, $$\begin{aligned} \lim_{n\to\infty} h_{n}(G_{n}) &= \lim_{n\to\infty} \frac{|X|}{\binom{n}{1}}+\frac{|X|\cdot |\bar{X}|}{\binom{n}{2}} \\ &= \lim_{n\to\infty} \frac{3}{4}+\frac{\frac{3}{16} n^{2}}{\binom{n}{2}} \\ &= \frac{3}{4}+\frac{3}{8} =\frac{9}{8}.\end{aligned}$$ Thus $\pi(H)\geq \frac{9}{8}$ for any $H$ containing a closed path of length $2k$ for any $k\geq 1$. Since $H^{2}$ is bipartite, and $H^{2}$ does not contain a copy of $K_{2}^{\{1,2\}}$, then $H$ is contained in a blow-up of a closed $4$-path. To see this, note that there is a bipartition of the vertices of $H$, $V(H)=A\cup B$, (with respect to the $2$-edges in $H$). Furthermore, we can partition $A$ into $A_{1}\cup A_{2}$ where $v\in A_{1}$ if $\{v\}\in E(H)$ and $v\in A$, $v\in A_{2}$ if $v\in A\setminus A_{1}$. And similarly partition $B$ into $B_{1}\cup B_{2}$ with $v\in B_{1}$ if $\{v\}\in E(H)$ and $v\in B$. Then note that there are no edges from $A_{1}$ to $B_{1}$ since $H$ contains no copy of $K_{2}^{\{1,2\}}$. So $H\subset \bar{P}_{4}(\max\{|A_{1}|, |A_{2}|, |B_{1}|, |B_{2}|)$–a blow-up of $\bar{P}_{4}$. Below is a graphical representation of $H$, illustrating that $H$ is contained in a blow-up of $\bar{P}_{4}$. (0, 0) circle (1 cm);
(0,-3) circle (1 cm); (3, 0) circle (1 cm); (3,-3) circle (1 cm); at (-1.5, 0) [$A_{1}$]{}; at (-1.5,-3) [$A_{2}$]{}; at (4.5, 0) [$B_{2}$]{}; at (4.5, -3) [$B_{1}$]{}; at (0, .5) \[vertex\_closed\] (v1) ; at (-.5, -.5) \[vertex\_closed\] (v2) ; at (.35,-.35) \[vertex\_closed\] (v3) ; at (3, .5) \[vertex\_open\] (v4) ; at (2.65,-.35) \[vertex\_open\] (v5) ; at (3.5, -.5) \[vertex\_open\] (v6) ; at (0, -3.5) \[vertex\_open\] (v7) ; at (-.5, -2.65) \[vertex\_open\] (v8) ; at (3, -3.5) \[vertex\_closed\] (v9) ; at (3.5, -2.65) \[vertex\_closed\] (v10) ; (.6,-.2)–(2.75, 0); (.35, .2)–(2.5, .5); (3,-.4)–(0,-2.65); (3.15, -.5)–(.35, -3); (.5, -3.5)–(2.5, -3.25); (.65,-3.25)–(2.7, -2.8); Since $\pi(H)\leq \pi(\bar{P}_{4}(s))=\pi(\bar{P}_{4})$ we need only show that $\pi(\bar{P}_{4})\leq \frac{9}{8}$. Let $G_{n}$ be a family of $\bar{P}_{4}$-free graphs such that $h_{n}(G_{n})=\pi_{n}(\bar{P}_{4})$. Partition the vertices of $G_{n}$ as follows: $$\begin{aligned} X_{n} &= \{v:\{v\}\in E(G_{n})\}, \\ Y_{n} &= \{v: \{v\}\notin E(G_{n}) \text{ and } \exists x_{1}\neq x_{2}\in X_{n} \text{ with } \{x_{1}, v\}, \{x_{2}, v\} \in E(G_{n})\}, \\ Z_{n} &= V(G)\setminus (X_{n}\cup Y_{n}).\end{aligned}$$ Let us say that $|X_{n}|=xn$, $|Y_{n}|=yn$ and hence $|Z_{n}|=(1-x-y)n$. First, note that $E(G)\cap \binom{Y_{n}}{2}=\emptyset$. Otherwise, since each vertex in $Y_{n}$ has at least 2 neighbors in $X_{n}$, $G_{n}$ would contain a closed path of length $4$. Also, each vertex
in $Z_{n}$ has at most 1 neighbor in $X_{n}$. It follows that $$\begin{aligned} \pi(\bar{P}_{4}) &=\lim_{n\to\infty} \pi_{n}(\bar{P}_{4}) \\ &= \lim_{n\to \infty} h_{n}(G_{n}) \\ &\leq \lim_{n\to\infty} \frac{|X_{n}|}{\binom{n}{1}} + \frac{|X_{n}|\cdot |Y_{n}|}{\binom{n}{2}} + \frac{|Y_{n}|\cdot |Z_{n}|}{\binom{n}{2}} + \frac{\binom{|Z_{n}|}{2}}{\binom{n}{2}} + \frac{|Z_{n}|}{\binom{n}{2}} \\ &\leq \lim_{n\to\infty} \frac{xn}{\binom{n}{1}} + \frac{xyn^{2}}{\binom{n}{2}} + \frac{y(1-x-y)n^{2}}{\binom{n}{2}} + \frac{\frac{(1-x-y)^{2}n^{2}}{2}}{\binom{n}{2}} + \frac{(1-x-y)n}{\binom{n}{2}} \\ &\leq \max_{\substack {0\leq x\leq 1 \\ 0\leq y\leq 1-x}} x + 2xy + 2y(1-x-y)+ (1-x-y)^{2} \\ &=\frac{9}{8}.\end{aligned}$$ The last inequality is an easy multivariate calculus exercise. One can also verify it with software, such as *Mathematica*, the syntax being: Maximize[{x^2-x-y^2+2*x*y+1, 0<=x<=1, 0<=y<=1-x}, {x,y}]. It may be of interest to note that the maximum value of the function is obtained when $x=\frac{3}{4}$ and $y=\frac{1}{4}$. In this case $Z_{n}$ is empty. Since our upper bound matches the lower bound, we have the desired result. $\square$ Let $H=H^{1}\cup H^{2}$. If $H^{2}$ is bipartite and $H^{2}$ does not contain a closed $2k$-path for any $k\geq 1$, then $\pi(H)=1$. [**Proof:**]{} First, since $|R(H)|=2$ we have, trivially, that $\pi(H)\geq 1$. Since $H$ contains no path of length $2k$ for any $k\geq 1$ it must be the case that $H$ is contained in a blow-up of a chain $C^{\{1,2\}}=\{\{x\}, \{x,y\}\}$. This is most clearly seen by again, considering the
previous illustration. The difference is, in this case, $B_{1}$ (or $A_{1}$) is empty.

[Figure: the bipartition classes $A_{1}$, $A_{2}$, $B_{2}$ of $H$ (with $B_{1}$ empty), alongside the chain $K$.]

It is clear that $H$ is contained in a blow-up of $K$ where $$K=\{\{x\}, \{x,y\}, \{y,z\}\}\subseteq C^{\{1,2\}}(2,1)=\{\{x\}, \{z\}, \{x, y\}, \{z, y\}\}.$$ It follows that $\pi(H)\leq \pi(C^{\{1,2\}})=1$. $\square$ The combination of these propositions completely determines $\pi(H)$ when $R(H)=\{1,2\}$. The results are summarized by the following theorem. \[t12\] For any hypergraph $H$ with $R(H)=\{1,2\}$, we have $$\pi(H) = \begin{cases} 2-\frac{1}{\chi(H^{2})-1} & \text{if } H^{2} \text{ is not bipartite}; \\ \frac{5}{4} & \text{if } H^{2} \text{ is bipartite and } \min \{k:\bar{P}_{2k}\subseteq H\}=1; \\ \frac{9}{8} & \text{if } H^{2} \text{ is bipartite and } 2\leq \min \{k:\bar{P}_{2k}\subseteq H\}<\infty; \\ 1 & \text{otherwise.} \end{cases}$$
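To make the case analysis above concrete, the following is a small illustrative Python sketch (our own, not from the paper) that classifies a $\{1,2\}$-hypergraph, given as a list of edges, into the four cases of the theorem; the helper name `turan_density_12` and the edge-list format are illustrative choices, and every step is brute force, so it is only suitable for very small examples.

    from itertools import product

    def turan_density_12(edges):
        # Illustrative brute-force evaluation of the theorem above for a
        # {1,2}-hypergraph H given by its edge list, e.g. [{1}, {2}, {1, 2}].
        edges = [frozenset(e) for e in edges]
        singles = {v for e in edges if len(e) == 1 for v in e}
        pairs = [tuple(sorted(e)) for e in edges if len(e) == 2]
        verts = sorted({v for e in pairs for v in e})

        # Case 1: H^2 not bipartite -- brute-force chromatic number of H^2.
        def proper(col):
            return all(col[u] != col[v] for u, v in pairs)

        chi = 1
        while not any(proper(dict(zip(verts, c)))
                      for c in product(range(chi), repeat=len(verts))):
            chi += 1
        if chi >= 3:
            return 2 - 1 / (chi - 1)

        # Case 2: H contains K_2^{1,2}: a 2-edge both of whose endpoints carry 1-edges.
        if any(u in singles and v in singles for u, v in pairs):
            return 5 / 4

        # Case 3: H contains a closed path \bar{P}_{2k}.  With H^2 bipartite and no
        # copy of K_2^{1,2}, this holds exactly when some connected component of H^2
        # has 1-edge vertices on both sides of its bipartition (every path between
        # them then uses an odd number of 2-edges, i.e. an even number of vertices).
        adj = {v: set() for v in verts}
        for u, v in pairs:
            adj[u].add(v)
            adj[v].add(u)
        side = {}
        for root in verts:
            if root in side:
                continue
            side[root] = 0
            stack, comp = [root], [root]
            while stack:
                u = stack.pop()
                for w in adj[u]:
                    if w not in side:
                        side[w] = side[u] ^ 1
                        comp.append(w)
                        stack.append(w)
            if {side[v] for v in comp if v in singles} == {0, 1}:
                return 9 / 8

        # Case 4: H contains no closed path of even length at all.
        return 1.0

    # Sanity check against Example 1: the forbidden graph K_2^{1,2} itself.
    assert turan_density_12([{1}, {2}, {1, 2}]) == 5 / 4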
--- author: - 'Mark Bun [^1]' - 'Roi Livni [^2]' - 'Shay Moran [^3]' bibliography: - 'biblio.bib' title: | An Equivalence Between Private Classification\ and Online Prediction --- Introduction ============ This paper continues the study of the close relationship between differentially-private learning and online learning. #### Differentially-Private Learning. Statistical analyses and computer algorithms play significant roles in the decisions which shape modern society. The collection and analysis of individuals’ data drives computer programs which determine many critical outcomes, including the allocation of community resources, decisions to give loans, and school admissions. While data-driven and automated approaches have obvious benefits in terms of efficiency, they also raise the possibility of unintended negative impacts, especially against marginalized groups. This possibility highlights the need for [*responsible*]{} algorithms that obey relevant ethical requirements (see e.g. [@Oneil2016weapons]). [*Differential Privacy*]{} (DP) [@DworkMNS06] plays a key role in this context. Its initial (and primary) purpose was to provide a formal framework for ensuring individuals’ privacy in the statistical analysis of large datasets. But it has also found use in addressing other ethical issues such as [*algorithmic fairness*]{} (see, e.g. [@DworkHPRZ12; @cummings19fairness]). Many tasks which involve sensitive data arise in machine learning (e.g. in medical applications and in
social networks). Consequently, a large body of practical and theoretical work has been dedicated to understand which learning tasks can be performed by DP learning algorithms. The simplest and most extensively studied model of learning is the private PAC model [@Valiant84; @KasiviswanathanLNRS11], which captures binary classification tasks under differential privacy. A partial list of works on this topic includes [@KasiviswanathanLNRS11; @BeimelBKN14; @BunNSV15; @FeldmanX15; @BeimelNS16; @BunDRS18; @Beimel19Pure; @AlonLMM19; @kaplan2019privately]. In this manuscript we make progress towards characterizing what tasks are DP PAC-learnable by demonstrating a qualitative equivalence with online-learnable tasks. #### Online Learning. Online learning is a well-studied branch of machine learning which addresses algorithms making real-time predictions on sequentially arriving data. Such tasks arise in contexts including recommendation systems and advertisement placement. The literature on this subject is vast and includes several books, e.g. [@cesabianchi06prediction; @Shalev-Shwartz12book; @Hazan16oco]. [*Online Prediction*]{}, or [*Prediction with Expert Advice*]{} is a basic setting within online learning. Let $\mathcal{H} = \{h:X\to \{\pm1\} \}$ be a class of predictors (also called experts) over a domain $X$. Consider an algorithm which observes examples $(x_1,y_1)\ldots (x_T,y_T)\in X\times\{\pm 1\}$ in a sequential manner. More specifically, in each time step $t$, the algorithm first observes the instance $x_t$, then predicts a
label $\hat{y}_t\in\{\pm 1\}$, and finally learns whether its prediction was correct. The goal is to minimize the [*regret*]{}, namely the number of mistakes compared to the best expert in $\mathcal{H}$: $$\sum_{t=1}^T 1[y_t\neq \hat{y}_t] - \min_{h^*\in\mathcal{H}} \sum_{t=1}^T 1[y_t\neq h^*(x_t)].$$ In this context, a class $\mathcal H$ is said to be online learnable if for every $T$, there is an algorithm that achieves sublinear regret $o(T)$ against any sequence of $T$ examples. The [*Littlestone dimension*]{} is a combinatorial parameter associated to the class ${\mathcal{H}}$ which characterizes its online learnability [@Littlestone87online; @Bendavid09agnostic]: $\mathcal H$ is online learnable if and only if it has a finite Littlestone dimension $d<\infty$. Moreover, the best possible regret $R(T)$ for online learning of ${\mathcal{H}}$ satisfies $$\Omega (\sqrt{dT}) \leq R(T) \leq O(\sqrt{dT\log T}).$$ Furthermore, if it is known that if one of the experts never errs (i.e. in the realizable mistake-bound model), then the optimal regret is exactly $d$. #### Stability. While at a first glance it may seem that online learning and differentially-private learning have little to do with one another, a line of recent works has revealed a tight connection between the two [@Agarwal17dponline; @Abernathy17onlilnedp; @AlonLMM19; @bousquet2019passing; @NeelRW19; @Joseph2019TheRO; @Gonen19privateonline]. At a high-level, this connection appears to
boil down to the notion of stability, which plays a key role in both topics. On one hand, the definition of differential privacy is itself a form of stability; it requires robustness of the output distribution of an algorithm when its input undergoes small changes. On the other hand, stability also arises as a central motif in online learning paradigms such as [*Follow the Perturbed Leader*]{} [@Kalai02geometricalgorithms; @kalai05efficient] and [*Follow the Regularized Leader*]{} [@abernethy08competing; @Shalev07ftrl; @Hazan16oco]. In their monograph [@DworkR14], Dwork and Roth identified stability as a common factor of learning and differential privacy: [*“Differential privacy is enabled by stability and ensures stability… we observe a tantalizing moral equivalence between learnability, differential privacy, and stability.”*]{} This insight has found formal manifestations in several works. For example, Abernethy et al. used DP inspired stability methodology to derive a unified framework for proving state of the art bounds in online learning [@Abernathy17onlilnedp]. In the opposite direction, Agarwal and Singh showed that certain standard stabilization techniques in online learning imply differential privacy [@Agarwal17dponline]. Stability plays a key role in this work as well. Our main result, which shows that any class with a finite Littlestone dimension can be privately learned, hinges on the
following form of stability: for $\eta > 0$ and $n\in\mathbb{N}$, a learning algorithm ${\mathcal{A}}$ is [*$(n,\eta)$-globally stable*]{}[^4] with respect to a distribution ${\mathcal{D}}$ over examples if there exists an hypothesis $h$ whose frequency as an output is at least $\eta$. Namely, $$\Pr_{S\sim {\mathcal{D}}^n}[{\mathcal{A}}(S) = h] \geq \eta.$$ Our argument follows by showing that every ${\mathcal{H}}$ can be learned by a globally-stable algorithm with parameters $\eta = \exp(-d), n=\exp(d)$, where $d$ is the Littlestone dimension of ${\mathcal{H}}$. As a corollary, we get an equivalence between global stability and differential privacy (which can be viewed as a form of local stability). That is, the existence of a globally stable learner for ${\mathcal{H}}$ is equivalent to the existence of a differentially-private learner for it (and both are equivalent to having a finite Littlestone dimension). #### Littlestone Classes. It is natural to ask which classes have finite Littlestone dimension. First, note that every finite class ${\mathcal{H}}$ has Littlestone dimension $d \leq \log\lvert {\mathcal{H}}\rvert$. There are also many natural and interesting infinite classes with finite Littlestone dimension. For example, let $X=\mathbb{F}^n$ be an $n$-dimensional vector space over a field $\mathbb{F}$ and let ${\mathcal{H}}\subseteq\{\pm 1\}^X$ consist of all (indicators of) affine subspaces of dimension $\leq d$.
The Littlestone dimension of ${\mathcal{H}}$ is $d$. More generally, any class of hypotheses that can be described by polynomial *equalities* of constant degree has finite Littlestone dimension.[^5] This can be generalized even further to classes that are definable in [*stable theories*]{}. This (different, still) notion of stability is deep and well-explored in model theory. We refer the reader to [@Chase19modelmachine], Section 5.1 for more examples of stable theories and the Littlestone classes they correspond to. #### Organization. The rest of this manuscript is organized as follows. In Section \[sec:results\] we formally state our main results and discuss some implications. Section \[sec:overview\] overviews some of the main ideas in the proofs. Sections \[sec:preliminaries\] - \[sec:wrapping\] contain complete proofs, and the last section (Section \[sec:conc\]) concludes the paper with some suggestions for future work. Main Results {#sec:results} ------------ We next present our main results. We begin with the statements concerning the relationship between online learning and differentially-private learning. In Section \[sec:stability\] we present and discuss the notion of global stability, and finally in Section \[sec:boosting\] we discuss an implication for private boosting. Throughout this section some standard technical terms are used. For definitions of these terms we refer the reader to Section 
\[sec:preliminaries\]. \[thm:main\] Let ${\mathcal{H}}\subseteq\{\pm 1\}^X$ be a class with Littlestone dimension $d$, let ${\varepsilon},\delta \in (0, 1)$ be privacy parameters, and let $\alpha,\beta \in (0, 1/2)$ be accuracy parameters. For $$n = O\left(\frac{16^d \cdot d^2 \cdot (d + \log(1/\beta\delta))}{\alpha{\varepsilon}}\right) = O_d\left(\frac{\log(1/\beta\delta)}{\alpha{\varepsilon}}\right)$$ there exists an $({\varepsilon},\delta)$-DP learning algorithm such that for every realizable distribution ${\mathcal{D}}$, given an input sample $S\sim {\mathcal{D}}^n$, the output hypothesis $f={\mathcal{A}}(S)$ satisfies ${\operatorname{loss}}_{{\mathcal{D}}}(f)\leq \alpha$ with probability at least $1-\beta$, where the probability is taken over $S\sim {\mathcal{D}}^n$ as well as the internal randomness of ${\mathcal{A}}$. A similar result holds in the agnostic setting: \[thm:agnostic\] Let ${\mathcal{H}}\subseteq\{\pm 1\}^X$ be a class with Littlestone dimension $d$, let ${\varepsilon}$, and $\delta \in (0, 1)$ be privacy parameters, and let $\alpha,\beta \in (0, 1/2)$ be accuracy parameters. For $$n = O\left(\frac{16^d \cdot d^2 \cdot (d + \log(1/\beta\delta))}{\alpha\epsilon} +\frac{\textrm{VC}({\mathcal{H}})+\log 1/\beta}{\alpha^2\epsilon} \right)$$ there exists an $({\varepsilon},\delta)$-DP learning algorithm such that for every distribution ${\mathcal{D}}$, given an input sample $S\sim {\mathcal{D}}^n$, the output hypothesis $f={\mathcal{A}}(S)$ satisfies $${\operatorname{loss}}_{{\mathcal{D}}}(f)\leq \min_{h\in {\mathcal{H}}} {\operatorname{loss}}_{{\mathcal{D}}}(h)+ \alpha$$ with probability at least $1-\beta$, where the probability is taken over $S\sim {\mathcal{D}}^n$ as well as the internal randomness of ${\mathcal{A}}$. follows from by Theorem 2.3 in [@alon2020closure] which provides a general mechanism
to transform a learner in the realizable setting to a learner in the agnostic setting[^6]. We note that formally the transformation in [@alon2020closure] is stated for a constant ${\varepsilon}=O(1)$. Taking ${\varepsilon}=O(1)$ is without loss of generality as a standard “secrecy-of-the-sample” argument can be used to convert this learner into one which is $({\varepsilon}, \delta)$-differentially private by increasing the sample size by a factor of roughly $1/{\varepsilon}$ and running the algorithm on a random subsample. See [@KasiviswanathanLNRS11; @Vadhan17] for further details. \[thm:equivalence\] The following statements are equivalent for a class ${\mathcal{H}}\subseteq \{\pm 1\}^X$: 1. ${\mathcal{H}}$ is online learnable. 2. ${\mathcal{H}}$ is approximate differentially-privately PAC learnable. Theorem \[thm:equivalence\] is a corollary of Theorem \[thm:main\] (which gives $1\to 2$) and the result by Alon et al. [@AlonLMM19] (which gives $2\to 1$). We comment that a quantitative relation between the learning and regret rates is also implied: it is known that the optimal regret bound for ${\mathcal{H}}$ is $\tilde \Theta_d(\sqrt{T})$, where the $\tilde \Theta_d$ conceals a constant which depends on the Littlestone dimension of ${\mathcal{H}}$ [@Bendavid09agnostic]. Similarly, we get that the optimal sample complexity of agnostically privately learning ${\mathcal{H}}$ is $\Theta_d(\frac{\log({1}/(\beta\delta))}{\alpha^2{\varepsilon}})$. We remark however that the above equivalence is mostly interesting from a theoretical
perspective, and should not be regarded as an efficient transformation between online and private learning. Indeed, the Littlestone dimension dependencies concealed by the $\tilde \Theta_d(\cdot)$ in the above bounds on the regret and sample complexities may be very different from one another. For example, there are classes for which the $\Theta_d(\frac{\log({1}/(\beta\delta))}{\alpha{\varepsilon}})$ bound hides a $\mathrm{poly}(\log^*(d))$ dependence, and the $\tilde \Theta_d(\sqrt{T})$ bound hides a $\Theta(d)$ dependence. One example which attains both of these dependencies is the class of thresholds over a linearly ordered domain of size $2^d$ [@AlonLMM19; @kaplan2019privately]. ### Global Stability {#sec:stability} Our proof of Theorem \[thm:main\], which establishes that every Littlestone class can be learned privately, hinges on an intermediate property which we call [*global stability*]{}: Let $n\in\mathbb{N}$ be a sample size and $\eta > 0$ be a global stability parameter. An algorithm ${\mathcal{A}}$ is $(n,\eta)$-globally-stable with respect to a distribution ${\mathcal{D}}$ if there exists an hypothesis $h$ such that $$\Pr_{S\sim{\mathcal{D}}^n}[A(S) = h] \geq \eta.$$ While global stability is a rather strong property, it holds automatically for learning algorithms using a finite hypothesis class. By an averaging argument, every learner using $n$ samples which produces a hypothesis in a finite hypothesis class ${\mathcal{H}}$ is $(n, 1/|{\mathcal{H}}|)$-globally-stable. The following proposition
generalizes “Occam’s Razor" for finite hypothesis classes to show that global stability is enough to imply similar generalization bounds in the realizable setting. \[prop:gs\] Let ${\mathcal{H}}\subseteq\{\pm 1\}^X$ be a class, and assume that ${\mathcal{A}}$ is a learner for ${\mathcal{H}}$ (i.e. ${\operatorname{loss}}_S({\mathcal{A}}(S))=0$ for every realizable sample $S$). Let ${\mathcal{D}}$ be a realizable distribution such that ${\mathcal{A}}$ is $(n,\eta)$-globally-stable with respect to ${\mathcal{D}}$, and let $h$ be a hypothesis such that $\Pr_{S\sim{\mathcal{D}}^n}[A(S) = h] \geq \eta$, as guaranteed by the definition of global stability. Then, $${\operatorname{loss}}_{\mathcal{D}}(h) \leq \frac{\ln(1/\eta)}{n}.$$ Let $\alpha$ denote the loss of $h$, i.e. ${\operatorname{loss}}_{\mathcal{D}}(h) = \alpha$, and let $E_1$ denote the event that $h$ is consistent with the input sample $S$. Thus, $\Pr[E_1] = (1-\alpha)^n$. Let $E_2$ denote the event that ${\mathcal{A}}(S)=h$. By assumption, $\Pr[E_2]\geq \eta$. Now, since ${\mathcal{A}}$ is consistent we get that $E_2\subseteq E_1$, and hence that $\eta\leq(1-\alpha)^n$. This finishes the proof (using the fact that $1-\alpha \leq e^{-\alpha}$ and taking the logarithm of both sides). Another way to view global stability is in the context of *pseudo-deterministic algorithms* [@Gat11pseudo]. A pseudo-deterministic algorithm is a randomized algorithm which yields some fixed output with high probability. Thinking of a realizable distribution ${\mathcal{D}}$ as an instance on which PAC
learning algorithm has oracle access, a globally-stable learner is one which is “weakly” pseudo-deterministic in that it produces some fixed output with probability bounded away from zero. A different model of pseudo-deterministic learning, in the context of learning from membership queries, was defined and studied by Oliveira and Santhanam [@OliveiraS18]. We prove Theorem \[thm:main\] by constructing, for a given Littlestone class ${\mathcal{H}}$, an algorithm ${\mathcal{A}}$ which is globally stable with respect to realizable distribution. ### Boosting for Approximate Differential Privacy {#sec:boosting} Our characterization of private learnability in terms of the Littlestone dimension has new consequences for boosting the privacy and accuracy guarantees of differentially-private learners. Specifically, it shows that the existence of a learner with weak (but non-trivial) privacy and accuracy guarantees implies the existence of a learner with any desired privacy and accuracy parameters — in particular, one with $\delta(n) = \exp(-\Omega(n))$. \[thm:adp-boost\] There exists a constant $c > 0$ for which the following holds. Suppose that for some sample size $n_0$ there is an $({\varepsilon}_0, \delta_0)$-differentially private learner $\cal W$ for a class ${\mathcal{H}}$ satisfying the guarantee $$\Pr_{S\sim {\mathcal{D}}^{n_0}}[{\operatorname{loss}}_{{\mathcal{D}}}({\cal W}(S)) > \alpha_0 ] < \beta_0$$ for ${\varepsilon}_0 = 0.1, \alpha_0 = \beta_0 = 1/16$, and $\delta_0 \le c/n_0^2\log
n_0$. Then there exists a constant $C_{\mathcal{H}}$ such that for every $\alpha, \beta, {\varepsilon}, \delta \in (0, 1)$ there exists an $({\varepsilon}, \delta)$-differentially private learner for ${\mathcal{H}}$ with $$\Pr_{S\sim {\mathcal{D}}^{n}}[{\operatorname{loss}}_{{\mathcal{D}}}({{\mathcal{A}}}(S)) > \alpha] < \beta$$ whenever $n \ge C_{\mathcal{H}}\cdot \log(1/\beta\delta)/\alpha{\varepsilon}$. Given a weak learner $\cal W$ as in the statement of Theorem \[thm:adp-boost\], the results of [@AlonLMM19] imply that ${\operatorname{Ldim}}({\mathcal{H}})$ is finite. Hence Theorem \[thm:main\] allows us to construct a learner for ${\mathcal{H}}$ with arbitrarily small privacy and accuracy, yielding Theorem \[thm:adp-boost\]. The constant $C_{{\mathcal{H}}}$ in the last line of the theorem statement suppresses a factor depending on ${\operatorname{Ldim}}({\mathcal{H}})$. Prior to our work, it was open whether arbitrary learning algorithms satisfying approximate differential privacy could be boosted in this strong a manner. We remark, however, that in the case of *pure* differential privacy, such boosting can be done algorithmically and efficiently. Specifically, given an $({\varepsilon}_0, 0)$-differentially private weak learner as in the statement of Theorem \[thm:adp-boost\], one can first apply random sampling to improve the privacy guarantee to $(p{\varepsilon}_0, 0)$-differential privacy at the expense of increasing its sample complexity to roughly $n_0 /p$ for any $p \in (0, 1)$. The Boosting-for-People construction of Dwork, Rothblum, and Vadhan [@DworkRV10] (see also [@BunCS20])
then produces a strong learner by making roughly $T \approx \log(1/\beta)/\alpha^2$ calls to the weak learner. By composition of differential privacy, this gives an $({\varepsilon}, 0)$-differentially private strong learner with sample complexity roughly $n_0 \cdot \log(1/\beta)/\alpha^2{\varepsilon}$. What goes wrong if we try to apply this argument using an $({\varepsilon}_0, \delta_0)$-differentially private weak learner? Random sampling still gives a $(p{\varepsilon}_0, p\delta_0)$-differentially private weak learner with sample complexity $n_0 / p$. However, this is not sufficient to improve the $\delta$ parameter of the learner *as a function of the number of samples $n$*. Thus the strong learner one obtains using Boosting-for-People still at best guarantees $\delta(n) = \tilde{O}(1/n^2)$. Meanwhile, Theorem \[thm:adp-boost\] shows that the existence of a $(0.1, \tilde{O}(1/n^2))$-differentially private learner for a given class implies the existence of a $(0.1, \exp(-\Omega(n))$-differentially private learner for that class. We leave it as an interesting open question to determine whether this kind of boosting for approximate differential privacy can be done algorithmically. Proof Overview {#sec:overview} ============== We next give an overview of the main arguments used in the proof of Theorem \[thm:main\]. The proof consist of two parts: (i) we first show that every class with a finite Littlestone dimension can be learned by
a globally-stable algorithm, and (ii) we then show how to generically obtain a differentially-private learner from any globally-stable learner. Step 1: Finite Littlestone Dimension $\implies$ Globally-Stable Learning ------------------------------------------------------------------------ Let ${\mathcal{H}}$ be a concept class with Littlestone dimension $d$. Our goal is to design a globally-stable learning algorithm for ${\mathcal{H}}$ with stability parameter $\eta = \exp(-d)$ and sample complexity $n=\exp(d)$. We will sketch here a weaker variant of our construction which uses the same ideas but is simpler to describe. The property of ${\mathcal{H}}$ that we will use is that it can be online learned in the realizable setting with at most $d$ mistakes (see Section \[sec:online\] for a brief overview of this setting). Let ${\mathcal{D}}$ denote a realizable distribution with respect to which we wish to learn in a globally-stable manner. That is, ${\mathcal{D}}$ is a distribution over examples $(x,c(x))$ where $c\in{\mathcal{H}}$ is an unknown target concept. Let $\mathcal{A}$ be a learning algorithm that makes at most $d$ mistakes while learning an unknown concept from ${\mathcal{H}}$ in the online model. Consider applying $\mathcal{A}$ on a sequence $S=((x_1,c(x_1))\ldots (x_n,c(x_n)))\sim{\mathcal{D}}^n$, and denote by $M$ the random variable counting the number of mistakes $\mathcal{A}$ makes in this process. The mistake-bound guarantee on ${\mathcal{A}}$
guarantees that $M\leq d$ always. Consequently, there is $0\leq i \leq d$ such that $$\Pr[M=i] \geq \frac{1}{d+1}.$$ Note that we can identify, with high probability, an $i$ such that $\Pr[M=i] \geq 1/2d$ by running ${\mathcal{A}}$ on $O(d)$ samples from ${\mathcal{D}}^n$. We next describe how to handle each of the $d+1$ possibilities for $i$. Let us first assume that $i=d$, namely that $$\Pr[M=d] \geq \frac{1}{2d}.$$ We claim that in this case we are done: indeed, after making $d$ mistakes it must be the case that ${\mathcal{A}}$ has completely identified the target concept $c$ (or else ${\mathcal{A}}$ could be presented with another example which forces it to make $d+1$ mistakes). Thus, in this case it holds with probability at least $1/2d$ that ${\mathcal{A}}(S)=c$ and we are done. Let us next assume that $i=d-1$, namely that $$\Pr[M=d-1] \geq \frac{1}{2d}.$$ The issue with applying the previous argument here is that before making the $d$’th mistake, ${\mathcal{A}}$ can output many different hypotheses (depending on the input sample $S$). We use the following idea: draw two samples $S_1,S_2 \sim {\mathcal{D}}^n$ independently, and set $f_1 = {\mathcal{A}}(S_1)$ and $f_2={\mathcal{A}}(S_2)$. Condition on the event that the number of mistakes made by ${\mathcal{A}}$ on each of $S_1,S_2$ is exactly 
$d-1$ (by assumption, this event occurs with probability at least $(1/2d)^2$) and consider the following two possibilities: - $\Pr[f_1=f_2]\geq\frac{1}{4}$, - $\Pr[f_1=f_2] < \frac{1}{4}$. If (i) holds then using a simple calculation one can show that there is $h$ such that $\Pr[A(S) = h] \geq \frac{1}{(2d)^2}\cdot \frac{1}{4}$ and we are done. If (ii) holds then we apply the following [*“random contest”*]{} between $S_1,S_2$: 1. Pick $x$ such that $f_1(x)\neq f_2(x)$ and draw $y\sim\{\pm 1\}$ uniformly at random. 2. If $f_1(x)\neq y$ then the output is ${\mathcal{A}}(S_1 \circ (x,y))$, where $S_1\circ (x,y)$ denotes the sample obtained by appending $(x,y)$ to the end of $S$. In this case we say that $S_1$ “won the contest”. 3. Else, $f_2(x)\neq y$ then the output is ${\mathcal{A}}(S_2 \circ (x,y))$. In this case we that $S_2$ “won the contest”. Note that adding the auxiliary example $(x,y)$ forces ${\mathcal{A}}$ to make exactly $d$ mistakes on $S_i\circ (x,y)$. Now, if $y\sim\{\pm 1\}$ satisfies $y = c(x) $ then by the mistake-bound argument it holds that ${\mathcal{A}}(S_i\circ (x,y))=c$. Therefore, since $\Pr_{y\sim\{\pm 1\}}[c(x)=y] = 1/2$, it follows that $$\Pr_{S_1,S_2, y}[{\mathcal{A}}(S_i\circ (x,y))=c] \geq \frac{1}{(2d)^2}\cdot \frac{3}{4}\cdot\frac{1}{2} =\Omega(1/d^2),$$ and we are done. Similar reasoning can be used by induction to handle the remaining cases
(the next one would be that $\Pr[M=d-2] \geq \frac{1}{2d}$, and so on). The proof we present in Section \[sec:LSstable\] is based on a similar idea of performing “random contests,” although the construction becomes more complex to handle other issues, such as generalization, which were not addressed here. For more details we refer the reader to the complete argument in Section \[sec:LSstable\]. Step 2: Globally-Stable Learning $\implies$ Differentially-Private Learning --------------------------------------------------------------------------- Given a globally-stable learner ${\mathcal{A}}$ for a concept class ${\mathcal{H}}$, we can obtain a differentially-private learner using standard techniques in the literature on private learning and query release. If ${\mathcal{A}}$ is a $(\eta, m)$-globally stable learner with respect to a distribution ${\mathcal{D}}$, we obtain a differentially-private learner using roughly $m/\eta$ samples from that distribution as follows. We first run ${\mathcal{A}}$ on $k \approx 1/\eta$ independent samples, non-privately producing a list of $k$ hypotheses. We then apply a differentially-private “Stable Histograms” algorithm [@KorolovaKMN09; @BunNS16] to this list which allows us to privately publish a short list of hypotheses that appear with frequency $\Omega(\eta)$. Global stability of the learner ${\mathcal{A}}$ guarantees that with high probability, this list contains *some* hypothesis $h$ with small population loss. We can then apply a generic differentially-private learner
(based on the exponential mechanism) on a fresh set of examples to identify such an accurate hypothesis from the short list. Preliminaries {#sec:preliminaries} ============= PAC Learning ------------ We use standard notation from statistical learning; see, e.g., [@Shalev14book]. Let $X$ be any “domain” set and consider the “label” set $Y=\{\pm 1\}$. A [*hypothesis*]{} is a function $h : X\to Y$, which we alternatively write as an element of $Y^X$. An [*example*]{} is a pair $(x, y) \in X\times Y$. A [*sample*]{} $S$ is a finite sequence of examples. Let ${\mathcal{D}}$ be a distribution over $X \times \{\pm 1\}$. The population loss of a hypothesis $h : X \to \{\pm 1\}$ is defined by $${\operatorname{loss}}_{{\mathcal{D}}}(h) = \Pr_{(x, y) \sim {\mathcal{D}}}[h(x) \ne y].$$ Let $S=\bigl((x_i,y_i)\bigr)_{i=1}^n$ be a sample. The empirical loss of $h$ with respect to $S$ is defined by $${\operatorname{loss}}_{S}(h) = \frac{1}{n}\sum_{i=1}^n1[h(x_i)\neq y_i].$$ Let ${\mathcal{H}}\subseteq Y^X$ be a [*hypothesis class*]{}. A sample $S$ is said to be [*realizable by ${\mathcal{H}}$*]{} if there is $h\in H$ such that ${\operatorname{loss}}_S(h)=0$. A distribution ${\mathcal{D}}$ is said to be [*realizable by ${\mathcal{H}}$*]{} if there is $h\in H$ such that ${\operatorname{loss}}_{\mathcal{D}}(h)=0$. A [*learning algorithm*]{} $A$ is a (possibly randomized) mapping taking input samples to output hypotheses. We
also use the following notation: for samples $S,T$, let $S\circ T$ denote the combined sample obtained by appending $T$ to the end of $S$. Online Learning {#sec:online} --------------- #### Littlestone Dimension. The Littlestone dimension is a combinatorial parameter that captures mistake and regret bounds in online learning [@Littlestone87online; @Bendavid09agnostic].[^7] Its definition uses the notion of [*mistake trees*]{}. A mistake tree is a binary decision tree whose internal nodes are labeled by elements of $X$. Any root-to-leaf path in a mistake tree can be described as a sequence of examples $(x_1,y_1),...,(x_d,y_d)$, where $x_i$ is the label of the $i$’th internal node in the path, and $y_i=+1$ if the $(i+1)$’th node in the path is the right child of the $i$’th node and $y_i = -1$ otherwise. We say that a mistake tree $T$ is [*shattered* ]{}by ${\mathcal{H}}$ if for any root-to-leaf path $(x_1,y_1),...,(x_d,y_d)$ in $T$ there is an $h\in {\mathcal{H}}$ such that $h(x_i)=y_i$ for all $i\leq d$ (see Figure \[fig:shatteredtree\]). The Littlestone dimension of ${\mathcal{H}}$, denoted ${\operatorname{Ldim}}({\mathcal{H}})$, is the depth of largest complete tree that is shattered by ${\mathcal{H}}$. We say that ${\mathcal{H}}$ is a Littlestone class if it has finite Littlestone dimension. ![[]{data-label="fig:shatteredtree"}](shatteredtree.pdf) #### Mistake Bound and the Standard Optimal
Algorithm (${\mathsf{SOA}}$). The simplest setting in which learnability is captured by the Littlestone dimension is called the [*mistake-bound model*]{} [@Littlestone87online]. Let ${\mathcal{H}}\subseteq \{\pm 1\}^X$ be a fixed hypothesis class known to the learner. The learning process takes place in a sequence of trials, where the order of events in each trial $t$ is as follows: - the learner receives an instance $x_t\in X$, - the learner responses with a prediction $\hat y_t\in \{\pm 1\}$, and - the learner is told whether or not the response was correct. We assume that the examples given to the learner are realizable in the following sense: For the entire sequence of trials, there is a hypothesis $h\in {\mathcal{H}}$ such that $y_t = h(x_t)$ for every instance $x_t$ and correct response $y_t$. An algorithm in this model learns ${\mathcal{H}}$ with mistake bound $M$ if for every realizable sequence of examples presented to the learner, it makes a total of at most $M$ incorrect predictions. Littlestone showed that the minimum mistake bound achievable by any online learner is exactly ${\operatorname{Ldim}}({\mathcal{H}})$ [@Littlestone87online]. Furthermore, he described an explicit algorithm, called the [*Standard Optimal Algorithm*]{} (${\mathsf{SOA}}$), which achieves this optimal mistake bound. [**Standard Optimal Algorithm (${\mathsf{SOA}}$)**]{}\ 1. Initialize ${\mathcal{H}}_1
= {\mathcal{H}}$. 2. For trials $t = 1, 2, \dots$: - For each $b \in \{\pm 1\}$ and $x \in X$, let ${\mathcal{H}}_t^b(x) = \{h \in {\mathcal{H}}_t : h(x) = b\}$. Define $h : X \to \{\pm 1\}$ by $h_t(x) = {\operatorname{argmax}}_b {\operatorname{Ldim}}({\mathcal{H}}_t^{b}(x))$. - Receive instance $x_t$. - Predict $\hat{y}_t = h_t(x_t)$. - Receive correct response $y_t$. - Update ${\mathcal{H}}_{t+1} = {\mathcal{H}}_t^{y_t}(x_t)$. #### Extending the ${\mathsf{SOA}}$ to non-realizable sequences. Our globally-stable learner for Littlestone classes will make use of an optimal online learner in the mistake bound model. For concreteness, we pick the ${\mathsf{SOA}}$ (any other optimal algorithm will also work). It will be convenient to extend the ${\mathsf{SOA}}$ to sequences which are not necessarily realizable by a hypothesis in ${\mathcal{H}}$. We will use the following simple extension of the ${\mathsf{SOA}}$ to non-realizable samples: \[def:soaext\] Consider a run of the ${\mathsf{SOA}}$ on examples $(x_1,y_1),\ldots, (x_m,y_m)$, and let $h_t$ denote the predictor used by the ${\mathsf{SOA}}$ after seeing the first $t$ examples (i.e., $h_t$ is the rule used by the ${\mathsf{SOA}}$ to predict in the $(t+1)$’st trial). Then, after observing both $x_{t+1},y_{t+1}$ do the following: - If the sequence $(x_1,y_1),\ldots, (x_{t+1},y_{t+1})$ is realizable by some $h\in{\mathcal{H}}$ then apply the usual update
rule of the ${\mathsf{SOA}}$ to obtain $h_{t+1}$. - Else, set $h_{t+1}$ as follows: $h_{t+1}(x_{t+1}) = y_{t+1}$, and $h_{t+1}(x)=h_t(x)$ for every $x\neq x_{t+1}$. Thus, upon observing a non-realizable sequence, this update rule locally updates the maintained predictor $h_t$ to agree with the last example. Differential Privacy -------------------- We use standard definitions and notation from the differential privacy literature. For more background see, e.g., the surveys [@DworkR14; @Vadhan17]. For $a, b, {\varepsilon}, \delta \in [0, 1]$ let $a\approx_{{\varepsilon},\delta} b$ denote the statement $$a\leq e^{{\varepsilon}}b + \delta ~\text{ and }~ b\leq e^{\varepsilon}a + \delta.$$ We say that two probability distributions $p,q$ are [*$({\varepsilon},\delta)$-indistinguishable*]{} if $p(E) \approx_{{\varepsilon},\delta} q(E)$ for every event $E$. \[def:private\] A randomized algorithm $$A: (X\times \{\pm 1\})^m \to \{\pm 1\}^X$$ is $({\varepsilon},\delta)$-differentially-private if for every two samples $S,S'\in (X\times \{\pm 1\})^n$ that disagree on a single example, the output distributions $A(S)$ and $A(S')$ are $({\varepsilon},\delta)$-indistinguishable. We emphasize that $({\varepsilon}, \delta)$-indistinguishability must hold for every such pair of samples, even if they are not generated according to a (realizable) distribution. The parameters ${\varepsilon},\delta$ are usually treated as follows: ${\varepsilon}$ is a small constant (say $0.1$), and $\delta$ is negligible, $\delta = n^{-\omega(1)}$, where $n$ is the input sample size. The case of
$\delta=0$ is also referred to as [*pure differential privacy*]{}. Thus, a class ${\mathcal{H}}$ is privately learnable if it is PAC learnable by an algorithm $A$ that is $({\varepsilon}(n),\delta(n))$-differentially private with ${\varepsilon}(n) \leq 0.1$, and $\delta(n) \leq n^{-\omega(1)} $. Globally-Stable Learning of Littlestone Classes {#sec:LSstable} =============================================== Theorem Statement ----------------- The following statement any class ${\mathcal{H}}$ with a bounded Littlestone dimension can be learned by a globally-stable algorithm. \[thm:littlestone-frequent\] Let ${\mathcal{H}}$ be a hypothesis class with Littlestone dimension $d\geq 1$, let $\alpha>0$, and set $$m = (8^{d+1}+1)\cdot\Bigl\lceil\frac{d}{\alpha}\Bigr\rceil.$$ Then there exists a randomized algorithm $G : (X \times \{\pm 1\})^m \to \{\pm 1\}^X$ with the following properties. Let ${\mathcal{D}}$ be a realizable distribution and let $S\sim {\mathcal{D}}^m$ be an input sample. Then there exists a hypothesis $f$ such $$\Pr[G(S) = f] \geq \frac{1}{(d+1)2^{d+1}} \text{ and } {\operatorname{loss}}_{{\mathcal{D}}}(f) \leq \alpha.$$ The distributions ${\mathcal{D}}_k$ ----------------------------------- The Algorithm $G$ is obtained by running the ${\mathsf{SOA}}$ on a sample drawn from a carefully tailored distribution. This distribution belongs to a family of distributions which we define next. Each of these distributions can be sampled from using black-box access to i.i.d. samples from ${\mathcal{D}}$. Recall that for a pair of samples $S,T$, we denote by $S\circ T$ the
sample obtained by appending $T$ to the end of $S$. Define a sequence of distributions ${\mathcal{D}}_k$ for $k\geq 0$ as follows: [**Distributions ${\mathcal{D}}_k$**]{}\ Let $n$ denote an “auxiliary sample” size (to be fixed later) and let ${\mathcal{D}}$ denote the target realizable distribution over examples. The distributions ${\mathcal{D}}_k = {\mathcal{D}}_k({\mathcal{D}},n)$ are defined by induction on $k$ as follows: 1. ${\mathcal{D}}_0$: output the empty sample $\emptyset$ with probability 1. 2. Let $k\ge 1$. If there exists an $f$ such that $$\Pr_{S \sim {\mathcal{D}}_{k-1}, T\sim{\mathcal{D}}^n}[{\mathsf{SOA}}(S\circ T) = f] \geq 2^{-d},$$ or if ${\mathcal{D}}_{k-1}$ is undefined, then ${\mathcal{D}}_k$ is undefined. 3. Else, ${\mathcal{D}}_k$ is defined recursively by the following process: - Draw $S_0,S_1\sim {\mathcal{D}}_{k-1}$ and $T_0,T_1\sim{\mathcal{D}}^n$ independently. - Let $f_0={\mathsf{SOA}}(S_0\circ T_0)$, $f_1={\mathsf{SOA}}(S_1\circ T_1)$. - If $f_0=f_1$ then go back to step (i). - Else, pick $x\in \{x: f_0(x)\neq f_1(x)\}$ and sample $y\sim\{\pm 1\}$ uniformly. - If $f_0(x)\neq y$ then output $S_0 \circ T_0\circ \bigl((x,y)\bigr)$; else output $S_1\circ T_1\circ \bigl((x,y)\bigr)$. Please see Figure \[fig:Dk\] for an illustration of sampling $S\sim {\mathcal{D}}_k$ for $k=3$. We next observe some basic facts regarding these distributions. First, note that whenever ${\mathcal{D}}_k$ is well-defined, the process in Item 3 terminates with probability 1. Let $k$ be such
that ${\mathcal{D}}_k$ is well-defined and consider a sample $S$ drawn from ${\mathcal{D}}_k$. The size of $S$ is $\lvert S\rvert = k\cdot(n + 1)$. Among these $k\cdot(n+1)$ examples there are $k\cdot n$ examples drawn from ${\mathcal{D}}$ and $k$ examples which are generated in Item 3(iv). We will refer to these $k$ examples as [*tournament examples*]{}. Note that during the generation of $S\sim {\mathcal{D}}_k$ there are examples drawn from ${\mathcal{D}}$ which do not actually appear in $S$. In fact, the number of such examples may be unbounded, depending on how many times Items 3(i)-3(iii) were repeated. In Section \[sec:monte\] we will define a “Monte-Carlo” variant of ${\mathcal{D}}_k$ in which the number of examples drawn from ${\mathcal{D}}$ is always bounded. This Monte-Carlo variant is what we actually use to define our globally-stable learning algorithm, but we introduce the simpler distributions ${\mathcal{D}}_k$ to clarify our analysis. The $k$ tournament examples satisfy the following important properties. \[obs:mistakebound\] Let $k$ be such that ${\mathcal{D}}_k$ is well-defined and consider running the ${\mathsf{SOA}}$ on the concatenated sample $S\circ T$, where $S\sim {\mathcal{D}}_k$ and $T\sim {\mathcal{D}}^n$. Then 1. Each tournament example forces a mistake on the ${\mathsf{SOA}}$. Consequently, the number of mistakes made by the ${\mathsf{SOA}}$ when run on $S\circ
--- abstract: 'We present high performance implementations of the QR and the singular value decomposition of a batch of small matrices hosted on the GPU with applications in the compression of hierarchical matrices. The one-sided Jacobi algorithm is used for its simplicity and inherent parallelism as a building block for the SVD of low rank blocks using randomized methods. We implement multiple kernels based on the level of the GPU memory hierarchy in which the matrices can reside and show substantial speedups against streamed cuSOLVER SVDs. The resulting batched routine is a key component of hierarchical matrix compression, opening up opportunities to perform H-matrix arithmetic efficiently on GPUs.' address: - '$^1$Extreme Computing Research Center (ECRC), King Abdullah University of Science and Technology (KAUST), Thuwal 23955, Saudi Arabia.' - '$^2$Department of Computer Science, American University of Beirut (AUB), Beirut, Lebanon.' author: - Wajih Halim Boukaram$^1$ - George Turkiyyah$^2$ - Hatem Ltaief$^1$ - 'David E. Keyes$^1$' bibliography: - 'arxiv\_batch\_svd.bib' title: Batched QR and SVD Algorithms on GPUs with Applications in Hierarchical Matrix Compression --- Introduction {#sec:intro} ============ The singular value decomposition (SVD) is a factorization of a general $m \times n$ matrix $A$ of the form $$A = U \Sigma V^*.$$ $U$
is an $m \times m$ orthonormal matrix whose columns $U_i$ are called the left singular vectors. $\Sigma$ is an $m \times n$ diagonal matrix whose diagonal entries $\sigma_i$ are called the singular values and are sorted in decreasing order. $V$ is an $n \times n$ orthonormal matrix whose columns $V_i$ are called the right singular vectors. When $m > n$, we can compute a reduced form $A = \hat{U} \hat{\Sigma} V^*$ where $\hat{U}$ is an $m \times n$ matrix and $\hat{\Sigma}$ is an $n \times n$ diagonal matrix. One can easily obtain the full form from the reduced one by extending $\hat{U}$ with $(m - n)$ orthogonal vectors and $\hat{\Sigma}$ with an $(m - n)$ zero block row. Without any loss of generality, we will focus on the reduced SVD of real matrices in our discussions. The SVD of a matrix is a crucial component in many applications in signal processing and statistics as well as matrix compression, where truncating the $(n - k)$ singular values that are smaller than some threshold gives us a rank-$k$ approximation $\tilde{A}$ of the matrix $A$. This matrix minimizes the function $f_k(B) = || A - B ||_F$ over all matrices $B$ of rank at most $k$. In the
context of hierarchical matrix operations, effective compression relies on the ability to perform the computation of large batches of independent SVDs of small matrices of low numerical rank. Randomized methods [@halko2011finding] are well suited for computing a truncated SVD of these types of matrices and are built on three computational kernels: the QR factorization, matrix-matrix multiplications and SVDs of smaller $k \times k$ matrices. Motivated by this task, we discuss the implementation of high performance batched QR and SVD kernels on the GPU, focusing on the more challenging SVD tasks. The remainder of this paper is organized as follows. Section \[sec:background\] presents different algorithms used to compute the QR factorization and the SVD as well as some considerations when optimizing for GPUs. Section \[sec:batch\_qr\] discusses the batched QR factorization and compares its performance with existing libraries. Sections \[sec:registers\], \[sec:shared\] and \[sec:block\_global\] discuss the various implementations of the SVD based on the level of the memory hierarchy in which the matrices can reside. Specifically, Section \[sec:registers\] describes the implementation for very small matrix sizes that can fit in registers, Section \[sec:shared\] describes the implementation for matrices that can reside in shared memory, and Section \[sec:block\_global\] describes the block Jacobi implementation for
larger matrix sizes that must reside in global memory. Section \[sec:randomized\] details the implementation of the batched randomized SVD routine. We then discuss some details of the application to hierarchical matrix compression in Section \[sec:application\]. We conclude and discuss future work in Section \[sec:conclusion\]. Background {#sec:background} ========== In this section we give a review of the most common algorithms used to compute the QR factorization and the SVD of a matrix as well as discuss some considerations when optimizing on the GPU. QR Factorization ---------------- The QR factorization decomposes an $m \times n$ matrix $A$ into the product of an orthogonal $m \times m$ matrix $Q$ and an upper triangular $m \times n$ matrix $R$ [@golub2013matrix]. We can also compute a reduced form of the decomposition where Q is an $m \times n$ matrix and R is $n \times n$ upper triangular. The most common QR algorithm is based on transforming $A$ into an upper triangular matrix using a series of orthogonal transformations generated using Householder reflectors. Other algorithms such as the Gram-Schmidt or Modified Gram-Schmidt can produce the QR factorization by orthogonalizing a column with all previous columns; however, these methods are less stable than the Householder orthogonalization and
the orthogonality of the resulting $Q$ factor suffers with the condition number of the matrix. Another method is based on Givens rotations, where entries in the subdiagonal part of the matrix are zeroed out to form the triangular factor and the rotations are accumulated to form the orthogonal factor. This method is very stable and has more parallelism than the Householder method; however it is more expensive, doing about 50% more work, and it is more challenging to extract the parallelism efficiently on the GPU. For our implementation, we rely on the Householder method due to its numerical stability and simplicity. The method is described in pseudo-code in Algorithm \[alg:qr\].

$[Q, R] = [I, A]$\
for each column $i = 1, \dots, n$:\
  $v = \text{house}(R(i))$\
  $R = (I - 2vv^T) R$ \[alg:qr:trailing\_update\]\
  $Q = Q (I - 2vv^T)$

SVD Algorithms -------------- Most implementations of the SVD are based on the two-phase approach popularized by Trefethen et al. [@trefethen1997numerical], where the matrix $A$ first undergoes bidiagonalization of the form $A = Q_U B Q_V^T$ where $Q_U$ and $Q_V$ are orthonormal matrices and $B$ is a bidiagonal matrix. The matrix $B$ is then diagonalized using some variant of the QR algorithm, the divide and conquer method or a
combination of both to produce a decomposition $B = U_B \Sigma V_B^T$. The complete SVD is then determined as $A = (Q_U U_B) \Sigma (Q_V V_B)^T$ during the backward transformation. These methods require significant algorithmic and programming effort to become robust and efficient while still suffering from a loss of relative accuracy [@demmel1992jacobi]. An alternative is the one-sided Jacobi method where all $n(n-1)/2$ pairs of columns are repeatedly orthogonalized in sweeps using plane rotations until all columns are mutually orthogonal. When the process converges (i.e., all columns are mutually orthogonal up to machine precision), the left singular vectors are the normalized columns of the modified matrix with the singular values as the norms of those columns. The right singular vectors can be computed either by accumulating the rotations or by solving a system of equations. Our application does not need the right vectors, so we omit the details of computing them. Algorithm \[alg:jacobi\] describes the one-sided Jacobi method. Since each pair of columns can be orthogonalized independently, the method is also easily parallelized. The simplicity and inherent parallelism of the method make it an attractive first choice for an implementation on the GPU.

repeat (one sweep)\
  for each pair of columns $(i, j)$:\
    $G = A_{ij}^T A_{ij}$ \[alg:jacobi:gram\]\
    $R = rot(G)$\
    $A_{ij} = A_{ij} R$ \[alg:jacobi:rot\]\
until all columns are mutually orthogonal

GPU Optimization Considerations ------------------------------- GPU kernels are launched by specifying a grid configuration which lets us organize threads into blocks and blocks into a grid. Launching a GPU kernel causes a short stall (as much as 10 microseconds) as the kernel is prepared for execution. This kernel launch overhead prevents kernels that complete their work faster than the overhead from executing in parallel, essentially serializing them. To overcome this limitation when processing small workloads, the work is batched into a single kernel call when possible [@batchqr_haidar; @batch_haidar]. All operations can then be executed in parallel without incurring the kernel launch overhead, with the grid configuration used to determine thread work assignment. A warp is a group of threads (32 threads in current generation GPUs, such as the NVIDIA K40) within a block that executes a single instruction in lockstep, without requiring any explicit synchronization. The occupancy of a kernel tells us the ratio of active warps to the maximum number of warps that a multiprocessor can host. This metric is dependent on the amount of resources that a kernel uses, such as register and shared memory usage and kernel launch configuration, as well
as the compute capability of the card ([@wilt2013cuda] for more details). While not a requirement for good performance [@volkov2010better], it is generally a good idea to aim for high occupancy. Memory on the GPU is organized into a hierarchy of memory spaces as shown in Figure \[fig:memory\_hierarchy\]. At the bottom, we have global memory which is accessible by all threads and is the most plentiful but the slowest memory. The next space of interest is the shared memory which is accessible only by threads within the same block and is configurable with the L1 cache to be at most 48KB per thread block on current generation GPUs. Shared memory is very fast and acts as a programmer controllable cache. Finally, we have the registers which are local to the threads. Registers are the fastest of all memory, but the total number of registers usable by a thread without performance implications is limited. If a kernel needs more registers than the limit, then registers are spilled to “local" memory, which is in the slow but cached global memory. Making good use of the faster memories and avoiding excessive accesses to the slower ones is key to good performance on the GPU.
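To make the role of this memory hierarchy concrete, the following minimal CUDA sketch stages one small matrix per thread block from global memory into shared memory, processes it there, and writes it back. It is purely illustrative: the kernel, its batch layout, the fixed matrix size, and the simple column-normalization workload are assumptions made for this sketch and are not taken from the kernels developed in this paper.

```cuda
#include <cuda_runtime.h>

// Illustrative sketch only: each thread block stages one small column-major
// matrix of a batch in shared memory, scales its columns to unit norm there,
// and writes the result back to global memory.
template <int N>
__global__ void normalizeColumnsBatched(float* matrices, int batchSize)
{
    __shared__ float tile[N][N + 1];   // extra column of padding avoids bank conflicts

    const int op  = blockIdx.x;        // one operation (matrix) per thread block
    const int row = threadIdx.x;       // one thread per matrix row
    float* A = matrices + (size_t)op * N * N;

    // Stage the matrix from slow global memory into fast shared memory.
    for (int col = 0; col < N; ++col)
        tile[row][col] = A[col * N + row];
    __syncthreads();

    // Every thread redundantly computes all column norms from the staged tile;
    // wasteful, but it keeps the sketch free of extra reductions.
    float norm[N];
    for (int col = 0; col < N; ++col) {
        float s = 0.0f;
        for (int r = 0; r < N; ++r)
            s += tile[r][col] * tile[r][col];
        norm[col] = sqrtf(s);
    }
    __syncthreads();                   // all reads finish before any thread writes the tile

    // Process the tile in shared memory, then write it back to global memory.
    for (int col = 0; col < N; ++col) {
        if (norm[col] > 0.0f)
            tile[row][col] /= norm[col];
        A[col * N + row] = tile[row][col];
    }
}

int main()
{
    const int n = 16, batch = 1000;
    float* dA = nullptr;
    cudaMalloc(&dA, sizeof(float) * n * n * batch);
    cudaMemset(dA, 0, sizeof(float) * n * n * batch);

    // The whole batch is handled by a single kernel launch (one block per
    // matrix, one thread per row), so the launch overhead is paid only once.
    normalizeColumnsBatched<n><<<batch, n>>>(dA, batch);
    cudaDeviceSynchronize();
    cudaFree(dA);
    return 0;
}
```

The same pattern, staging a block of data in shared memory or registers and processing it there before touching global memory again, underlies the kernels described in the following sections.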
As such, it is common to use blocking techniques in many algorithms, where a block of data is brought in from global memory and processed in one of the faster memories. ![The memory hierarchy of a modern GPU.[]{data-label="fig:memory_hierarchy"}](memory.pdf){width="45.00000%"} Related Work ------------ Batched GPU routines for LU, Cholesky and QR factorizations have been developed in [@batchqr_haidar; @batch_haidar; @charara_batch_tdla] using a block recursive approach which increases data reuse and leads to very good performance for relatively large matrix sizes. GPU routines optimized for computing the QR decomposition of very tall and skinny matrices are presented in [@caqr_anderson] where they develop an efficient transpose matrix-vector computation that is employed with some minor changes in this work. GPU-CPU hybrid algorithms for batched SVD using Jacobi and bidiagonalization methods are introduced in [@kotas_svd] where pair generation for the Jacobi method and the solver phase of the bidiagonalization are handled on the CPU. The work in [@Kang2015] employs the power method to construct a rank-1 approximation for 2D filters in convolutional neural networks. Routines to handle the SVD of many matrices on GPUs are presented in [@badolato_2015] where each thread within a warp computes the SVD of a single matrix. Batched QR Decomposition {#sec:batch_qr} ========================
In this section, we discuss implementation details of our batched QR kernel and compare it with other implementations from the MAGMA 2.2 [@tnld10] and CUBLAS 8 [@nvidia-cublas] libraries. Implementation -------------- One benefit of the Householder algorithm is that the application of reflectors to the trailing matrix (line \[alg:qr:trailing\_update\] of the algorithm) can be blocked together and expressed as a matrix-matrix multiplication (Level 3 BLAS) instead of multiple matrix-vector multiplications (Level 2 BLAS). The increased arithmetic intensity typically allows performance to improve when the trailing matrix is large. However, for small matrix blocks, the overhead of generating the blocked reflectors from their vector form as well as the lower performance of the matrix-matrix multiplication for small matrices hinder performance. We can obtain better performance by applying multiple reflectors in their vector form and performing the transpose matrix-vector multiplication efficiently within a thread block [@caqr_anderson]. First, we perform the regular factorization on a column block $P$ (called a panel). The entire panel is stored in registers, with each thread storing one row of the panel, and the transpose matrix-vector product is computed using a series of reductions using shared memory and warp shuffles [@warp_shfl] which allow threads within a warp to read
each other’s registers. Figure \[fig:register\_storage\_reduction\] shows the data layout for a theoretical warp of size 8 with 4 columns in registers and a warp reduction using shuffles. Once we factor the panel, we can apply the reflectors to the trailing sub-matrix in a separate kernel that is optimized for performing the core matrix-vector product in the update. In this second kernel, we load both the factored panel $P$ and a panel $M_i$ of the trailing sub-matrix $M$ to registers and apply the reflectors one at a time, updating the trailing panel in registers. Let us take an example of a $32 \times 8$ trailing panel $M_i$. For each reflector, we compute the matrix-vector product $M_i^Tv$ by flattening the $32 \times 8$ product into a reduction of a 256 vector in shared memory that has been padded to avoid bank conflicts. The reduction can then be serialized until it reaches a size of 32, where a partial reduction to a vector of size 8 can take place in 2 steps. This final vector is the product $M_i^Tv$ which can then be quickly applied to the registers storing $M_i$. This process is repeated for each trailing panel within the same kernel to
maximize the use of the reflectors which have been stored in registers. Figure \[fig:qr\_fig\] shows one step of a panel factorization and the application of its reflectors to the trailing submatrix. Since threads are limited to 1024 per block on current architectures, we use the approach developed in [@journals/concurrency/KurzakLDB10] to factorize larger matrices. We first factorize panels up to the thread block limit in a single kernel call. The panels below the first are then factorized by first loading the triangular factor into shared memory and then proceeding with the panel factorization as before, taking the triangular portion into consideration when computing reflectors and updates. To keep occupancy up for the small matrices on devices where the resident block limit could be reached before the thread limit, we assign multiple operations to a single thread block. For a batch of $N$ matrices of dimensions $m \times n$, kernels can be launched using $N/b$ thread blocks of size $m \times b$, where each thread block handles $b$ operations. ![Left: matrix rows allocated to thread registers in a warp. Right: parallel warp reduction using shuffles within registers.[]{data-label="fig:register_storage_reduction"}](reg_svd.pdf){width="0.6\linewidth"} ![One step of the QR factorization where a panel P is factored to produce a
triangular factor R and reflectors V which are used to update the trailing sub-matrix M.[]{data-label="fig:qr_fig"}](qr_fig.pdf){width="65.00000%"} Performance ----------- Figures \[fig:batch\_qr\] and \[fig:batch\_qr\_rect\] show the performance of our batched QR for 1000 square and rectangular matrices with a panel width of $16$, tuned for the P100 GPU. We compare against the vendor implementation in CUBLAS as well as the high performance library MAGMA. We can see that our proposed version performs well for rectangular matrices with column size of 32 and starts losing ground against MAGMA for the larger square matrix sizes where the blocked algorithm starts to show its performance benefits. A nested implementation where our kernel can be used to factor relatively large panels in a blocked algorithm will likely show some additional performance improvements for the large square matrices, but we leave that as future work. [0.45]{} ![Comparing batched QR kernels for 1000 matrices of varying size on a P100 GPU in single and double precision.](qr_perf_1.pdf "fig:") [0.45]{} ![Comparing batched QR kernels for 1000 matrices of varying size on a P100 GPU in single and double precision.](qr_perf_2.pdf "fig:") Register Memory One-Sided Jacobi {#sec:registers} ================================ In this section we will discuss the first batched SVD kernel where the matrix data
is hosted in registers and analyze the performance of the resulting kernel. Implementation -------------- In this implementation, to avoid repeated global memory accesses, we attempt to fit the matrix in register memory using the same layout as the panel in the QR factorization, i.e. one row per thread; however, the number of registers that a thread uses has an impact on occupancy which can potentially lead to lower performance. In addition, once the register count exceeds the limit set by the GPU’s compute capability, the registers spill into “local" memory which resides in cached slow global memory. Since we store an entire matrix row in the registers of one thread, we use the serial one-sided Jacobi algorithm to compute the SVD where column pairs are processed by the threads one at a time. The bulk of the work lies in the computation of the Gram matrix $G = A_{ij}^T A_{ij}$ (line \[alg:jacobi:gram\] of Algorithm \[alg:jacobi\]) and in the update of the columns (line \[alg:jacobi:rot\]). Since the Gram matrix is symmetric, this boils down to three dot products which are executed as parallel reductions within the warp using warp shuffles. The computation of the $2 \times 2$ rotation matrix as well
as the convergence test is performed redundantly in each thread. Finally, the column update is done in parallel by each thread on its own register data. As with the QR kernel, we keep occupancy up for the smaller matrix sizes by assigning multiple SVD operations to a single block of threads with each operation assigned to a warp to avoid unnecessary synchronizations. Performance {#subsec:reg_perf} ----------- We generate batches of 1000 test matrices with varying condition numbers using the `latms` LAPACK routine and calculate performance based on the total number of rotations needed for convergence. Figures \[fig:reg\_svd\_perf\] and \[fig:reg\_svd\_occupancy\] show the performance on a P100 GPU of the register-based batched SVD kernel and the effect increased register usage has on occupancy. Profiling the kernel, we see that the Gram matrix computation takes about 500 cycles, column rotations take about 240 cycles, and the redundantly computed convergence test and rotation matrices dominate at 1900 cycles. The fact that the redundant portion of the computation dominates means that it is preferable to assign as few threads as possible when processing column pairs. Due to the low occupancy for the larger matrix sizes and the register spills to local memory for matrices larger than
30, it is obvious that the register approach will not suffice for larger matrix sizes. This leads us to our next implementation based on the slower but more parallel-friendly shared memory. [.45]{} ![Performance of the batched register memory SVD on a P100 GPU for 1000 matrices of varying size in single and double precision arithmetics.](reg_perf_1.pdf "fig:") [.45]{} ![Performance of the batched register memory SVD on a P100 GPU for 1000 matrices of varying size in single and double precision arithmetics.](reg_perf_2.pdf "fig:") Shared Memory One-Sided Jacobi {#sec:shared} ============================== While the register based SVD performs well for very small matrix sizes, we need a kernel that can handle larger sizes and maintain reasonably high occupancy. This leads us to building a kernel based on shared memory, the next level of the GPU memory hierarchy. This section discusses the implementation details of this kernel and analyzes its performance when compared with the register kernel. Implementation -------------- In this version, the matrix is stored entirely in shared memory, which is limited to at most 48 KB per thread block on current generation GPUs. Using the same thread assignment as the register based kernel would lead to very poor occupancy due to the high shared
memory consumption, where potentially only a few warps will be active in a multiprocessor. Instead, we exploit the inherent parallelism of the one-sided Jacobi to assign a warp to a pair of columns, i.e., there are $n/2$ warps processing an $m \times n$ matrix stored in shared memory. There are a total of $n(n-1)/2$ pairs of columns, so we must generate all pairings in $n-1$ steps, with each step processing $n/2$ pairs in parallel. There are many ways of generating these pairs, including round robin, odd-even, and ring ordering [@parosbj_zhou; @ZHOU19971]. We implement the round robin ordering using shared memory to keep track of the column indexes of the pairs with the first warp in the block responsible for updating the index list after each step. Figure \[fig:round\_robin\] shows this ordering for a matrix with 8 columns. When the number of matrix rows exceeds the size of the warp, the thread-per-row assignment no longer allows us to use fast warp reductions, which would force us to use even more resources, as the reductions would now have to be done in shared memory. Instead, we assign multiple rows to a thread, serializing a portion of the reduction over those rows until
warp reductions can be used. This follows our observation in Section \[subsec:reg\_perf\] to assign as few threads as possible to process column pairs, which frees up valuable resources and increases the overall performance of the reduction. Row padding is used to keep the rows at multiples of the warp size, and column padding is used to keep the number of columns even. Kernels can then be launched using $32\times n/2$ threads to process each matrix. Figures \[fig:shared\_alloc\] and \[fig:shared\_warp\_reduction\] show examples of the thread allocation and reductions for an $8 \times 8$ matrix using a theoretical warp size of 4. ![Distribution of column pairs to warps at each step of a sweep.[]{data-label="fig:round_robin"}](pair_generation.pdf){width="0.6\linewidth"} [0.3]{} ![Shared memory kernel implementation details.](shared_alloc.pdf "fig:"){width="\linewidth"} [0.4]{} ![Shared memory kernel implementation details.](shared_warp_reduction.pdf "fig:"){width="\linewidth"} Performance {#performance-1} ----------- Figures \[fig:shared\_svd\_perf\] and \[fig:shared\_svd\_occupancy\] show the performance of the parallel shared SVD kernel compared to the serial register SVD kernel on a P100 GPU. We can see the improved growth in performance in the shared memory kernel due to the greater occupancy as well as the absence of any local memory transactions. Looking at the double precision occupancy, we notice two dips in occupancy at matrix sizes 22 and 32 as the
number of resident blocks becomes limited by the registers/block limits of the device, dropping to 2 and then 1 resident block. Performance increases steadily from there as we increase the number of threads assigned to the operation until we reach a matrix size of $64 \times 64$, where we hit the block limit of 1024 threads. To handle larger sizes, we must use a blocked version of the algorithm or the randomized SVD as we see in Sections \[sec:block\_global\] and \[sec:randomized\], respectively. [0.45]{} ![Performance of the batched shared memory SVD on a P100 GPU for 1000 matrices of varying size in single and double precision arithmetics.](smem_perf_1.pdf "fig:") [0.45]{} ![Performance of the batched shared memory SVD on a P100 GPU for 1000 matrices of varying size in single and double precision arithmetics.](smem_perf_2.pdf "fig:") Global Memory One-Sided Block Jacobi {#sec:block_global} ==================================== When we can no longer store the entire matrix in shared memory, we have to operate on the matrix in the slower global memory. Instead of repeatedly reading and updating the columns one at a time, block algorithms that facilitate cache reuse have been developed [@bevcka1999block1; @bevcka1999block2; @bevcka2015new]. The main benefit of the block Jacobi algorithm is its high degree of
parallelism; however, since we implement a batched routine for independent operations, we will use the serial block Jacobi algorithm for individual matrices and rely on the parallelism of the batch processing. The parallel version, where multiple blocks are processed simultaneously, can still be used when the batch size is very small, but we will focus on the serial version. In this section we will discuss the implementation details for two global memory block Jacobi algorithms that differ only in the way block columns are orthogonalized and compare their performance with parallel streamed calls to the cuSOLVER 8 [@nvidia-cusolver] library routines. Gram Matrix Block Jacobi SVD ---------------------------- The block Jacobi algorithm is very similar to the vector Algorithm \[alg:jacobi\], orthogonalizing pairs of blocks columns instead of vectors. The first method of orthogonalizing pairs of block columns is based on the SVD of their Gram matrix. During the $p$-th sweep, each pair of $m \times k$ block columns $A^{(p)}_i$ and $A^{(p)}_j$ is orthogonalized by forming a $2k \times 2k$ Gram matrix $G^{(p)}_{ij} = {[A^{(p)}_i A^{(p)}_j]}^T [A^{(p)}_i A^{(p)}_j] = {A^{(p)}_{ij}}^T A^{(p)}_{ij}$ and generating a block rotation matrix $U^{(p)}_{ij}$, computed as the left singular vectors of $G^{(p)}_{ij}$ (or equivalently its eigenvectors, since it is
symmetric positive definite). Updating $A^{p+1}_{ij} = A^p_{ij} U^{(p)}_{ij}$ orthogonalizes the block columns, since we have $${A^{p+1}_{ij}}^T A^{p+1}_{ij} = {U^{(p)}_{ij}}^T {A^p_{ij}}^T A^p_{ij} U^{(p)}_{ij} = {U^{(p)}_{ij}}^T G^{(p)}_{ij} U^{(p)}_{ij} = \Lambda^{p}_{ij},$$ where $\Lambda^{p}_{ij}$ is a diagonal matrix of the singular values of $G^{(p)}_{ij}$. Orthogonalizing all pairs of block columns until the entire matrix is orthogonal will give us the left singular vectors as the normalized columns and the singular values as the corresponding column norms. If the right singular vectors are needed, we can accumulate the action of the block rotation matrices on the identity matrix. For our batched implementation, we use highly optimized batched `syrk` and `gemm` routines from MAGMA to compute $G$ and to apply the block rotations, while the SVD is computed by our shared memory batched kernel. Since different matrices will converge in different numbers of sweeps, we keep track of the convergence of each operation $l$ by computing the norm $e_l$ of the off-diagonal entries of $G$ scaled by its diagonal entries. While this term is an inexact approximation of the off-diagonal terms of the full matrix in each sweep, it is still a good indication of convergence and will cost us at most an extra cheap sweep,
since the final sweep will not actually perform any rotations within the SVD of $G$. The entire batched operation will then converge when $e = \max e_l < \epsilon$, where $\epsilon$ is our convergence tolerance. This gives us the Gram matrix path of the batched block Jacobi Algorithm \[alg:block\_jacobi\] to compute the SVD of a batch of matrices in global memory. It is worth noting that the computation of the Gram matrix can be optimized by taking advantage of the special structure of $G$, but since the bulk of the computation is in the SVD of G, it will not result in any significant performance gains. Direct Block Jacobi SVD ----------------------- The Gram matrix method is an indirect way of orthogonalizing block columns and may fail to converge if the matrix is very ill-conditioned. Ill-conditioned matrices can be handled by directly orthogonalizing the columns using their SVD. Since the block columns are rectangular, we first compute their QR decomposition followed by the SVD of the triangular factor $R$. Overwriting the block column $A^p_{ij}$ by the orthogonal factor $Q$ and multiplying it by the left singular vectors of $R$ scaled by the singular values will give us the new block column
$A^{p+1}_{ij}$: $$A^p_{ij} = Q^p_{ij} R^p_{ij} = \left( Q^p_{ij} U^p_{ij} \Sigma^p_{ij} \right) {V^p_{ij}}^T = A^{p+1}_{ij} {V^p_{ij}}^T.$$ If the right singular vectors are needed, we can accumulate the action of $V^p_{ij}$ on the identity matrix. For our batched implementation, we use the batch QR routine developed in Section \[sec:batch\_qr\] and `gemm` routines from MAGMA to multiply the orthogonal factor by the left singular vectors, while the SVD is computed by our shared memory batched kernel. The same convergence test used in the Gram matrix method can be used on the triangular factor, since the triangular factor should be close to a diagonal matrix if a pair of block columns are orthogonal. This gives us the direct path of the batched block Jacobi Algorithm \[alg:block\_jacobi\] to compute the SVD of a batch of matrices in global memory.

$e_l = 0$ (for every operation $l$)\
for each pair of block columns $(i, j)$:\
  $G = \text{batchSyrk}(A_{ij})$ (Gram matrix path)\
  $[A_{ij}, G] = \text{batchQR}(A_{ij})$ (direct path)\
  $e_l = \text{max}(e_l, \text{scaledOffdiag}(G))$\
  $U = \text{batchSvd}(G)$\
  $A_{ij} = \text{batchGemm}(A_{ij}, U)$\
$e = \text{max}(e_l)$ (repeat the sweeps until $e < \epsilon$)

Performance {#performance-2} ----------- Figure \[fig:block\_jacobi\_profile\] shows the profiling of the different computational kernels involved in the batched block algorithms with a block width of $32$, specifically percentages of total execution time for determining convergence and memory operations, matrix multiplications,
QR decompositions and the SVD of the Gram matrix. For the Gram matrix approach, the SVD is the most costly phase, even for the larger operations, while the QR and SVD decompositions take almost the same time for the larger matrices in the direct approach. Figure \[fig:block\_jacobi\_perf\] shows the performance of the batched block Jacobi SVD of 200 matrices using both methods and Figure \[fig:osbjvscustream\] compares the performance of our batched SVD routine with a batched routine that uses the cuSOLVER SVD routine using 20 concurrent streams on a P100 GPU. Increasing the number of streams for cuSOLVER showed little to no performance benefits, highlighting the performance limitations of routines that are bound by kernel launch overhead. The matrices are generated randomly using the `latms` LAPACK routine with a condition number of $10^7$. The Gram matrix approach fails to converge in single precision for these types of matrices, whereas the direct approach always converges; however, the Gram matrix approach performs better when it is applicable for the larger matrices due to the strong performance of the matrix-matrix multiplications. The performance of the block algorithm can be improved by preprocessing the matrix using QR and LQ decompositions to decrease the number
of sweeps required for convergence [@Oksa_2006] as well as by adaptively selecting pairs of block columns based on the computed off-diagonal norms of their Gram matrices. These changes are beyond the scope of this paper and will be the focus of future work. [0.45]{} ![Profile of the different phases of the block Jacobi SVD for 200 matrices of varying size on a P100 GPU in double precision. Single precision exhibits similar behavior.[]{data-label="fig:block_jacobi_profile"}](block_perf_1.pdf "fig:") [0.45]{} ![Profile of the different phases of the block Jacobi SVD for 200 matrices of varying size on a P100 GPU in double precision. Single precision exhibits similar behavior.[]{data-label="fig:block_jacobi_profile"}](block_perf_1b.pdf "fig:") [0.45]{} ![Batched block Jacobi performance for 200 matrices of varying size on a P100 GPU in single and double precision arithmetics.](block_perf_2.pdf "fig:") [0.45]{} ![Batched block Jacobi performance for 200 matrices of varying size on a P100 GPU in single and double precision arithmetics.](block_perf_3.pdf "fig:") Randomized SVD {#sec:randomized} ============== As mentioned in Section \[sec:intro\], we are often interested in a rank-$k$ approximation of a matrix $A \approx \tilde{U} \tilde{S} \tilde{V}^*$. We can compute this approximation by first determining the singular value decomposition of the full $m \times n$ matrix $A$ and then truncating the $n-k$ smallest singular values