---
author:
- 'Davide Gaiotto and Theo Johnson-Freyd'
title: Mock modularity and a secondary elliptic genus
---
Introduction
============
In [@WittenTMF] the following puzzle was posed; the goal of this paper is to propose a solution. Let us say that a (1+1)-dimensional quantum field theory with minimal, aka ${\mathcal N}=(0,1)$, supersymmetry is [*null*]{} if supersymmetry is spontaneously broken and [*nullhomotopic*]{} if it can be connected to a null theory by a sequence of deformations, including deformations that may zig-zag up and down along RG flow lines.
\[mainpuzzle\] Show that the supersymmetric quantum field theory of three free antichiral fermions $\bar\psi_1,\bar\psi_2,\bar\psi_3$ and supersymmetry generated by $G = \sqrt{-1}\, {:}\bar\psi_1\bar\psi_2\bar\psi_3{:}$ is not nullhomotopic.
For the remainder of this paper we will write simply “SQFT” for “(1+1)-dimensional quantum field theory with ${\mathcal N}=(0,1)$ supersymmetry”, and “SCFT” for an SQFT which is furthermore superconformal. The SQFT in Puzzle \[mainpuzzle\] is an SCFT, and is the (conjectured) limit under RG flow of the ${\mathcal N}=(0,1)$ sigma model with target the round $S^3$ and minimal nonzero B-field. The puzzle is difficult because, as is shown in [@WittenTMF], the direct sum of 24 copies of the SQFT in Puzzle \[mainpuzzle\] *is* nullhomotopic (as is the ${\mathcal N}=(0,1)$
sigma model with target $S^3$ and B-field of strength $24$). So the puzzle requires constructing a torsion-valued deformation-invariant of SQFTs that is more sensitive than the elliptic genus.
The motivation for the puzzle comes from the theory of Topological Modular Forms (TMF) described for example in [@MR1989190; @MR3223024]. Based on suggestions from [@MR992209], it is conjectured in [@MR2079378; @MR2742432] that every SQFT defines a class in TMF, invariant under deformations of the SQFT. Indeed, [@MR2079378; @MR2742432] conjecture that this TMF class exactly captures the deformation class of the corresponding SQFT. The TMF-valued invariant of an SQFT refines the usual modular-form valued elliptic index. It is known that the TMF-valued invariant of $S^3$ (with minimal nonzero B-field) has exact order $24$, hence the puzzle.
In this paper we will propose a solution to this puzzle. Let ${\mathcal B}$ be an SCFT which, like the one in Puzzle \[mainpuzzle\], has gravitational anomaly $c_R - c_L = 3/2$.[^1] If ${\mathcal B}$ is not initially conformal, flow it to the IR before proceeding. Build:
1. a “generalized mock modular form $f_1$” with source equal to the torus one-point function of the supersymmetry generator of ${\mathcal B}$.[^2] The $q$-expansion of $f_1$ is not determined by
${\mathcal B}$, but the class of $[f_1] \in {{\mathds}C}(\!(q)\!) / {\mathrm{MF}}_2$ is.
2. a nonnegative-integral $q$-series $f_2$ equal to half the graded dimension of the space of [bosonic]{} Ramond-sector ground states in ${\mathcal B}$.[^3] The class $[f_2] \in {{\mathds}C}(\!(q)\!) / 2{{\mathds}Z}(\!(q)\!)$ is a sort of “mod-2 index” of ${\mathcal B}$.
Neither class $[f_1] \in {{\mathds}C}(\!(q)\!) / {\mathrm{MF}}_2$ nor $[f_2] \in {{\mathds}C}(\!(q)\!) / 2{{\mathds}Z}(\!(q)\!)$ is a deformation invariant of ${\mathcal B}$. But we will argue that the class $[f_1] - [f_2] \in {{\mathds}C}(\!(q)\!) / [ {\mathrm{MF}}_2 + 2{{\mathds}Z}(\!(q)\!)]$ is invariant under SQFT deformations. Furthermore, we will compute that for the SQFT in Puzzle \[mainpuzzle\], this invariant is nonzero and in fact has exact order $24$ in ${{\mathds}C}(\!(q)\!) / [ {\mathrm{MF}}_2 + 2{{\mathds}Z}(\!(q)\!)]$. This is our solution to Puzzle \[mainpuzzle\].
Homotopy-theoretic considerations imply the existence (and nontriviality) of an invariant like ours [@DBE2015], which when restricted to sigma models is described both analytically and geometrically in [@MR3278648] (where it is also shown that the invariant of $S^3$ has exact order $24$). But topological arguments do not explain how to compute this invariant of SQFTs except when the SQFT can be deformed to a sigma model. Our description of the invariant connects it
explicitly to holomorphic anomalies of noncompact SQFTs, and thereby to mock modularity, which is of great current interest due to the “moonshine” of [@MR2802725; @MR3271175].
Whenever the CFT ${\mathcal B}$ is rational, the source of the holomorphic anomaly equation is a modular-invariant bilinear combination of vector-valued holomorphic and anti-holomorphic modular forms. The corresponding generalized mock modular form is then a “mixed mock modular form”: a bilinear combination of the same vector-valued holomorphic modular form and a true vector-valued mock modular form. In such a situation, our arguments can be seen as a justification for the very existence of mock modular forms with interesting integrality properties.
The paper is structured as follows.
Section \[sec.general\] presents the general story. We first make some brief remarks about gravitational anomalies, and an $\eta(\tau)^n$ normalization factor that we include in the elliptic genus[^4] and in one-point functions in order to correct the multipliers that would otherwise be present. We then discuss the properties of certain SQFTs which violate the compactness constraint in a controllable manner. It turns out to be still possible to define the elliptic genus of such SQFTs [@Eguchi:2006tu; @MR2821103; @Ashok:2013pya; @Gupta:2017bcp]. Such an elliptic genus satisfies a “holomorphic anomaly” equation with a source
which we characterize in a precise manner in terms of the torus one-point function of the supersymmetry generator in a compact “boundary SQFT”. We use this construction to argue that the torus one-point function of the supersymmetry generator in a nullhomotopic SQFT is the source of a holomorphic anomaly equation for a generalized mock modular form. We then argue that the coefficients of this generalized mock modular form are (even) integral, up to a correction arising as a type of “mod 2 index” of the boundary SQFT. It follows that, if solutions of the holomorphic anomaly equation fail to have integral (plus correction) mock modular parts, the SQFT cannot be nullhomotopic. This is the justification for our invariants. We then recast our invariant in homotopy-theoretic terms, where it becomes the “secondary invariant” of [@MR3278648].
To illustrate our proposed invariant, we focus on two families of examples. First, in Section \[sec.cigar\], we study the sigma models with target $S^1$ and the “cigar.” We start by reviewing the $S^1$ sigma models, with an emphasis on the role that the target-space spin structure plays in the behaviour of the model. This provides a chance to illustrate the mod-2 index, and allows us to
compute our invariant for the $T^3$ sigma model with its Lie group framing. We then analyze the “cigar,” which is a noncompact manifold with “boundary” $S^1$, and demonstrate explicitly that the corresponding sigma model enjoys our predicted holomorphic anomaly and integrality.
The second set of examples, which we study in Section \[sec.S3\], are the ones from [@WittenTMF]: the ${\mathcal N}=(0,1)$ sigma model with target the round $S^3$ and with B-field of strength $k$. We first warm up with the $k=1$ case of Puzzle \[mainpuzzle\], and then study the general case. In all cases, we find that our invariant is precisely $k \pmod {24}$, showing that the $S^3_k$ sigma model is nullhomotopic if and only if $k = 0 \pmod {24}$. We then mention a few related constructions and puzzles: we build an antiholomorphic SCFT of central charge $c_R = 27/2$ which we expect to represent the 3-torsion element in $\pi_{27}\mathrm{TMF}$; and we speculate that $S^3_k$ is “flavoured-nullhomotopic” for all $k$, with the nullhomotopy given by a certain “trumpet” geometry with ${\mathcal N}=(0,4)$ supersymmetry.
Let us end this introduction by emphasizing that we expect our invariant captures only some of the torsion in the space of ${\mathcal N}=(0,1)$ SQFTs.[^5] It is
known that the TMF classes represented by the group manifolds (with their Lie group framings) $$\mathrm{Sp}(2),\qquad G_2,\qquad G_2\times U(1)$$ are nonzero: their exact orders are, respectively, $$3, \qquad 2, \qquad 2.$$ The same logic as in [@WittenTMF] suggests that the ${\mathcal N}=(0,1)$ sigma models for $\mathrm{Sp}(2)$ and $G_2$ flow in the IR to SCFTs consisting purely of ($10$ and $14$, respectively) antichiral free fermions, with supersymmetries that encode the structure constants of the Lie algebras $\mathfrak{sp}(2)$ and $\mathfrak{g}_2$.[^6] The sigma model with target $G_2\times U(1)$ does not flow to a purely-antichiral theory, but rather to the product of the $\mathfrak{g}_2$-theory and the “standard” circle theory studied in §\[subsubsec.S1-nonbounding\]. The elliptic and mod-2 indexes of all three SQFTs vanish. Moreover, the invariant described in this paper vanishes for all three SQFTs — the first two for degree reasons, but the third nontrivially. Due to the expected relationship between TMF and SQFTs, we expect that these SQFTs are not nullhomotopic. We leave the reader with the following puzzles:
\[sp2puzzle\]
1. Show that the SQFT $\overline{\operatorname{Fer}}(\mathfrak{sp}(2))$ of $10$ antichiral free fermions and supersymmetry encoding the structure constants of $\mathfrak{sp}(2)$ is not nullhomotopic.
2. Show that the SQFT $\overline{\operatorname{Fer}}(\mathfrak{g}_2)$ of $14$ antichiral free fermions
and supersymmetry encoding the structure constants of $\mathfrak{g}_2$ is not nullhomotopic.
3. Show that the product of $\overline{\operatorname{Fer}}(\mathfrak{g}_2)$ with a (standard) $S^1$ sigma model is not nullhomotopic.
A torsion invariant of SQFTs {#sec.general}
============================
Gravitational anomalies and spectator fermions {#sec.anomaly}
----------------------------------------------
For a quantum field theory to be [*gravitationally nonanomalous*]{}, its partition function must be valued in numbers (as opposed to a section of some line bundle on the moduli space of spacetimes); it must have a well-defined Hilbert space (as opposed to a section of some gerbe); and so on for higher-codimensional data [@FreedTeleman2012]. In the $(1+1)$-dimensional SQFT case, the [*elliptic genus*]{} $Z_{RR}$ is by definition the partition function on flat tori with nonbounding, aka Ramond–Ramond aka periodic–periodic, spin structures. The moduli space of Ramond–Ramond flat tori is three-real-dimensional — in addition to the complex and anticomplex parameters $(\tau,\bar\tau)$, there is also a “size” parameter — but we will generally compute in the IR aka large-torus limit. In this limit, the partition function of a nonanomalous $(1+1)$-dimensional SQFT will definitely be modular for the full $\mathrm{SL}(2,{{\mathds}Z})$ with weight $(0,0)$ and no multiplier.[^7] Indeed, modularity is transparent from the path-integral description.[^8]
The SQFT in Puzzle \[mainpuzzle\], and more generally any
${\mathcal N}=(0,1)$ sigma model, suffers a gravitational anomaly due to the unpaired antichiral fermions. For an SCFT, the gravitational anomaly is the difference between the left- and right-moving central charges; for an SQFT, these separate central charges are not well-defined, but the total gravitational anomaly is, and is preserved under RG flow. We will normalize the gravitational anomaly so that for an SCFT with left and right moving central charges $c_L$ and $c_R$, the anomaly is $n := 2(c_R - c_L)$. The factor of $2$ is natural because then $n$ ranges over ${{\mathds}Z}$. The gravitational anomaly $n \in {{\mathds}Z}$ plays the role of homotopical degree in §\[sec.BN\], and so we will occasionally refer to it as the [*degree*]{} of the SQFT.
The gravitational anomaly manifests in various ways. First of all, it produces a nontrivial multiplier for the behaviour of the elliptic genus under $\tau \mapsto \tau+1$, namely $Z_{RR}(\tau+1,\bar\tau+1) = e^{-2\pi i n/24} Z_{RR}(\tau,\bar\tau)$; the path integral description still guarantees that $Z_{RR}$ transforms under $\tau \mapsto -1/\tau$ with weight $(0,0)$ and some multiplier. Second, the gravitational anomaly leads to an ambiguity in the parity of “the” Ramond sector of the theory. This leads to a sign ambiguity when trying
to define “the” elliptic index.
Our convention, standard in algebraic topology, will be to trade nontrivial multipliers for nontrivial weights of modular forms, by including a normalization factor of $\eta(\tau)^n$ whenever necessary, where $\eta(\tau) = q^{1/24} \prod_{j=1}^\infty (1-q^j)$ is Dedekind’s eta function. The combination $\eta(\tau)^n Z_{RR}(\tau,\bar\tau)$ is sometimes called the [*Witten genus*]{} of a gravitationally-anomalous SQFT, and we will use that term. It is automatically modular without multiplier, of weight $(\frac n 2, 0)$.
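As a concrete illustration (ours, not an example from the paper), the $q$-expansion of $\eta(\tau)^n$ with the overall $q^{n/24}$ prefactor stripped can be computed order by order by repeated polynomial multiplication; for $n=24$ the coefficients reproduce the Ramanujan tau function.

```python
# Illustrative sketch (ours): coefficients of prod_{j>=1} (1 - q^j)^n,
# i.e. eta(tau)^n with the q^{n/24} prefactor dropped, truncated at order N.
def eta_power(n, N):
    coeffs = [0] * (N + 1)
    coeffs[0] = 1
    for j in range(1, N + 1):
        for _ in range(n):
            # multiply in place by (1 - q^j); descending k keeps old values intact
            for k in range(N, j - 1, -1):
                coeffs[k] -= coeffs[k - j]
    return coeffs

# n = 1: Euler's pentagonal number theorem, 1 - q - q^2 + q^5 + q^7 - ...
print(eta_power(1, 7))   # -> [1, -1, -1, 0, 0, 1, 0, 1]
# n = 24: q * prod (1-q^j)^24 = sum_m tau(m) q^m, the Ramanujan tau function
print(eta_power(24, 5))  # -> [1, -24, 252, -1472, 4830, -6048]
```

Expansions of this kind are how integrality statements about $\eta^n Z_{RR}$ can be checked order by order in $q$.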
When $n>0$, the Witten genus can be interpreted as follows. Consider the nonanomalous SQFT $\operatorname{Fer}(n) \otimes {\mathcal F}$, where $\operatorname{Fer}(n) = \operatorname{Fer}(1)^{\otimes n}$ means the holomorphic CFT of $n$ chiral fermions $\psi_1,\dots,\psi_n$, acted upon trivially by the right-moving supersymmetry. Deformations of ${\mathcal F}$ correspond to deformations of $\operatorname{Fer}(n) \otimes {\mathcal F}$ which preserve the $\operatorname{Fer}(n)$-subsector. The $n$ free chiral fermions in $\operatorname{Fer}(n)$ are called [*spectators*]{}. Because of the zero modes of the chiral fermions, the plain elliptic genus of $\operatorname{Fer}(n) \otimes {\mathcal F}$ vanishes. But because we have a distinguished $\operatorname{Fer}(n) \subset \operatorname{Fer}(n) \otimes {\mathcal F}$, we find a distinguished observable, namely ${:}\psi_1\cdots\psi_n{:}$. The Witten genus of the gravitationally-anomalous SQFT ${\mathcal F}$ is precisely the one-point function of $(-1)^{\frac{n}{4}} {:}\psi_1\cdots\psi_n{:}$ in $\operatorname{Fer}(n) \otimes {\mathcal F}$.[^9]
By “one-point function,” we will always mean the torus one-point function on nonbounding, aka Ramond–Ramond, tori.
The spectator fermions furthermore allow the sign ambiguity to be handled by demanding that $\operatorname{Fer}(n) \otimes {\mathcal F}$ have *trivialized* anomaly, including a choice of parity for its Ramond-sector ground state.[^10] More precisely, the sign ambiguity can be swapped for an ambiguity in the choice of generators of $\operatorname{Fer}(n)$. We will largely ignore the sign ambiguity in this paper, since we will not try to add different SQFTs together (and so we will never risk thinking we have found a cancellation when there was not one).
Another phenomenon in gravitationally-anomalous SQFTs becomes transparent when working with spectator fermions. Consider the Ramond-sector Hilbert space ${\mathcal H}_R$ for the spectated theory $\operatorname{Fer}(n) \otimes {\mathcal F}$. The decoupled $\operatorname{Fer}(n)$ subalgebra of the full observable algebra provides operators on this Hilbert space: specifically, ${\mathcal H}_R$ is naturally a module for the fermion zero modes,[^11] which form a copy of the $n$th Clifford algebra $\operatorname{Cliff}(n)$. Moreover, the SQFT $\operatorname{Fer}(n) \otimes {\mathcal F}$ compactified on $S^1$ automatically possesses a time-reversal structure — showing this is an interesting exercise, solved in [@GPPV §3.2.2] — and hence ${\mathcal H}_R$ possesses a real
form, acted on by the real Clifford algebra $\operatorname{Cliff}(n,{{\mathds}R})$. Modules for $\operatorname{Cliff}(n,{{\mathds}R})$ represent degree-$n$ classes in oriented K-theory.
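For concreteness, the Clifford relations $\{\gamma_a,\gamma_b\} = 2\delta_{ab}$ can be realized explicitly by tensor products of Pauli matrices. The following is a minimal sketch (ours, using the complexification $\operatorname{Cliff}(n,{{\mathds}C})$ rather than the real form discussed above):

```python
import numpy as np

# Sketch (ours): explicit gamma matrices generating the complexified Clifford
# algebra Cliff(n), built Jordan-Wigner style from Pauli matrices.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_list(mats):
    out = np.eye(1, dtype=complex)
    for m in mats:
        out = np.kron(out, m)
    return out

def gammas(n):
    """n anticommuting generators acting on (C^2)^{tensor ceil(n/2)}."""
    m = (n + 1) // 2
    gs = []
    for i in range(n):
        k = i // 2
        pauli = X if i % 2 == 0 else Y
        gs.append(kron_list([Z] * k + [pauli] + [I2] * (m - k - 1)))
    return gs

# verify {gamma_a, gamma_b} = 2 delta_{ab}
gs = gammas(5)
dim = gs[0].shape[0]
for a in range(5):
    for b in range(5):
        anti = gs[a] @ gs[b] + gs[b] @ gs[a]
        assert np.allclose(anti, 2 * (a == b) * np.eye(dim))
```

The real form $\operatorname{Cliff}(n,{{\mathds}R})$ and its period-8 module theory are what matter for the K-theoretic statements in the text; the complex construction above only checks the defining relations.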
Non-compact SQFTs {#sec.noncompact}
-----------------
The discussion in §\[sec.anomaly\] implicitly assumed that the SQFT in question was “compact” — for instance, that its Hamiltonian had sufficiently discrete spectrum — in order for its elliptic genus and K-theory class to be well-defined. There are at least two distinct ways one can enlarge the space of SQFTs which admit some kind of elliptic genus.
### Flavoured-compact theories {#subsec.flavoured-compact}
The first way is to consider SQFTs equipped with a continuous global symmetry, say $U(1)$ for simplicity, and define a “flavoured” elliptic genus as a partition function on a torus equipped with a flat $U(1)$ connection. The flat connection can be parameterized by a point $\xi$ in the elliptic curve $E_\tau$ of parameter $\tau$. As long as the current sits in a standard $(0,1)$ multiplet, there is a superpartner of the anti-holomorphic part of the current which should guarantee the independence of the partition function on the anti-holomorphic part $\bar \xi$ of the connection, leaving a holomorphic dependence on $\xi$. As a consequence, the flavoured Witten genus is a Jacobi form, of weight $\frac{n}{2}$ and index $\ell$
determined by the ’t Hooft anomaly coefficient for the $U(1)$ global symmetry.
Such a flavoured SQFT can be considered “compact” as long as the Hamiltonian on the circle has sufficiently discrete spectrum for a generic choice of flat $U(1)$ connection on the circle. Then the Witten genus will be well-defined as a meromorphic Jacobi form. If the SQFT is compact even in the un-flavoured sense then the Witten genus will be a holomorphic Jacobi form and admit an expansion in terms of theta functions of index $\ell$, with coefficients which form a vector-valued modular form.
For a sigma model, the calculation of the flavoured Witten genus will involve the equivariant analogues of the characteristic classes involved in the calculation of the standard Witten genus.
The canonical example is a $(0,1)$ sigma model with target ${{\mathds}R}^2$ and a $U(1)$ isometry acting as rotations of ${{\mathds}R}^2$. The corresponding flavoured Witten genus is a meromorphic Jacobi form of weight $1$ and index $-1$:
$$\label{eqn.R2}
Z_{RR}({{\mathds}R}^2)[\xi;\tau] = \frac{\eta(\tau)^3}{\theta(\xi;\tau)} =\frac{1}{x^{\frac12}-x^{-\frac12}}\prod_{n=1}^\infty \frac{(1-q^n)^2}{(1-x q^n)(1-x^{-1} q^n)}$$
with $q = \exp 2 \pi i \tau$ and $x = \exp 2 \pi i \xi$. The analogue of TMF for the flavoured SQFTs does not appear to be well-studied [@GPPV]. It
would be very interesting to do so.
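The product side of equation (\[eqn.R2\]) can be expanded order by order. The following sketch (ours; the truncation order and data layout are illustrative choices) expands $\prod_{n\ge1}(1-q^n)^2/\bigl((1-xq^n)(1-x^{-1}q^n)\bigr)$ as a $q$-series whose coefficients are Laurent polynomials in $x$:

```python
# Sketch (ours): expand prod_{n>=1} (1-q^n)^2 / ((1-x q^n)(1-x^{-1} q^n)),
# the product side of the R^2 flavoured Witten genus, as a q-series whose
# coefficients are Laurent polynomials in x (dicts: x-power -> integer).
N = 6  # truncation order in q

def mul(a, b):
    """Multiply two q-series (lists of Laurent-coefficient dicts), truncated at q^N."""
    out = [dict() for _ in range(N + 1)]
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            if i + j > N:
                break
            for xa, va in ca.items():
                for xb, vb in cb.items():
                    out[i + j][xa + xb] = out[i + j].get(xa + xb, 0) + va * vb
    return [{k: v for k, v in c.items() if v} for c in out]

def geometric(xpow, n):
    """Expansion of 1/(1 - x^xpow q^n) up to q^N."""
    s = [dict() for _ in range(N + 1)]
    k = 0
    while k * n <= N:
        s[k * n][k * xpow] = 1
        k += 1
    return s

P = [{0: 1}] + [dict() for _ in range(N)]
for n in range(1, N + 1):
    one_minus = [dict() for _ in range(N + 1)]
    one_minus[0][0] = 1
    one_minus[n][0] = -1
    P = mul(P, mul(one_minus, one_minus))   # (1 - q^n)^2
    P = mul(P, geometric(+1, n))            # 1/(1 - x q^n)
    P = mul(P, geometric(-1, n))            # 1/(1 - x^{-1} q^n)

print(sorted(P[1].items()))  # -> [(-1, 1), (0, -2), (1, 1)], i.e. (x - 2 + 1/x) q
```

This is the expansion one obtains from equation (\[eqn.R2\]) after multiplying through by the prefactor $x^{1/2}-x^{-1/2}$.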
### Theories with cylindrical ends
The second way we can enlarge the space of SQFTs which admit some kind of elliptic genus is by considering the SQFT analogue of sigma models on manifolds with an asymptotic boundary region which approaches ${{\mathds}R}^+ \times B$ for some compact manifold $B$. We can formalize this notion by requiring the SQFT ${\mathcal F}$ to be equipped with a local operator $\Phi$ with the following property:
- Consider the direct product of ${\mathcal F}$ and a free Fermi multiplet, i.e. a free chiral fermion $\lambda$ annihilated by the supercharge.[^12]
- Deform the product theory by a “fermionic superpotential” $\lambda (\Phi-p)$, i.e. add the terms $\lambda (\bar G_0 \Phi) - (\Phi-p)^2$ to the Lagrangian, where $p \in {{\mathds}R}$ is a parameter.
- The result is a family of SQFTs ${\mathcal B}_p$ parameterized by a point $p$ in ${{\mathds}R}$. We require ${\mathcal B}_p$ to stabilize to some compact SQFT ${\mathcal B}$ for large positive $p$ and ${\mathcal B}_p$ to spontaneously break supersymmetry for large negative $p$.
Note that in particular the family ${\mathcal B}_p$ built from ${\mathcal F}$ is a nullhomotopy of the compact SQFT ${\mathcal B}$. Conversely, any nullhomotopy
of ${\mathcal B}$ can be converted into a noncompact SQFT ${\mathcal F}$ by reversing the steps above, i.e. promoting the parameter of the deformation family to a dynamical $(0,1)$ chiral multiplet.
What is the elliptic genus of ${\mathcal F}$? The question is subtle because, being noncompact, ${\mathcal F}$ has continuous spectrum in its Hamiltonian. Because of this, the usual supersymmetric cancelation argument verifying that the elliptic genus is holomorphic breaks down. Let us work in the IR limit. In this limit, the Witten genus $\eta(\tau)^n Z_{RR}({\mathcal F})(\tau,\bar\tau)$ will automatically transform as a weight $(\frac n2,0)$ modular form under the action of $\mathrm{SL}(2,{{\mathds}Z})$ acting simultaneously on both $(\tau,\bar\tau)$, since modularity is manifest from the path-integral description of the index.
Let us try to prove that $Z_{RR}({\mathcal F})$ is holomorphic. We will fail, and by failing we will instead compute the [*holomorphic anomaly*]{} of ${\mathcal F}$. We will try to apply the usual proof of holomorphicity of the index. For any QFT,[^13] $$\frac{\partial}{\partial\bar{\tau}}Z_{RR}({\mathcal F}) = - 2\pi i (\text{torus one-point function of } \bar{T} \mbox{ in ${\mathcal F}$}).$$ Use the supersymmetry: $\bar{T} = \frac12 [\bar{G}_0,\bar{G}]$, where the commutator $[,]$ means the supercommutator, $\bar{G}_0$ is the generator of ${\mathcal N}=(0,1)$ supersymmetry and $\bar{G}$
the anti-holomorphic component of the supercurrent. In a compact theory, the one-point function of any anti-commutator $[\bar{G}_0, O]$ would vanish. In a non-compact theory, with non-compact direction parameterized by the expectation value of the operator $\Phi$, we can imagine integrating by parts in the space of fields to obtain a term proportional to[^14] the torus one-point function of $O$ evaluated in the boundary theory ${\mathcal B}$, resulting in the following [*holomorphic anomaly equation*]{}. Including the spectator fermions to fix the normalizations, we propose: [^15] $$\begin{gathered}
\label{eqn.holomorphicanomaly}
\sqrt{-8\tau_2} \frac{\partial}{\partial\bar{\tau}} \bigl[\text{Witten genus of }{\mathcal F}\bigr] \\
=(\text{torus one-point function of } (-1)^{\frac{n}{4}}{:}\psi_1\cdots\psi_{n-1}\bar{G}{:} \mbox{ in $\operatorname{Fer}(n-1) \times {\mathcal B}$})\end{gathered}$$ Here $\tau_2 = \frac1{2i}(\tau - \bar\tau)$ is the imaginary part of $\tau$. The sign of the square root $\sqrt{-8}$ is essentially arbitrary, and can be absorbed in the ambiguity in the sign of $\bar{G}$ or in the sign of the Ramond sector of ${\mathcal B}$. As a reality check, note that, in the IR limit, both sides are real-analytic modular of weight $(\frac{n-1}2, \frac32)$ with trivial multipliers.
Being a bit loose with the phase of the torus one-point function, we can write equation (\[eqn.holomorphicanomaly\]) as $$\label{eqn.holomorphicanomaly2}
\sqrt{-8\tau_2} \, \eta(\tau) \frac{\partial}{\partial\bar{\tau}} Z_{RR}({\mathcal F}) = (\text{torus
one-point function of } \bar{G} \mbox{ in ${\mathcal B}$}).$$
A simple generalization of this construction is to require ${\mathcal B}_p$ to stabilize to some compact SQFT ${\mathcal B}_+$ for large positive $p$ and to another compact SQFT ${\mathcal B}_-$ for large negative $p$. Then we expect, up to a phase factor: $$\begin{gathered}
\label{eqn.bordismZRR}
\sqrt{-8\tau_2} \frac{\partial}{\partial\bar{\tau}}\bigl[\text{Witten genus of }{\mathcal F}\bigr] = \bigl[ (\text{torus one-point function of } {:}\psi_1\cdots\psi_{n-1}\bar{G}{:} \mbox{ in $\operatorname{Fer}(n-1) \times {\mathcal B}_+$}) \\ -(\text{torus one-point function of } {:}\psi_1\cdots\psi_{n-1}\bar{G}{:} \mbox{ in $\operatorname{Fer}(n-1) \times {\mathcal B}_-$}) \bigr]\end{gathered}$$ This means that, although the torus one-point function of $\bar{G}$ is not a deformation-invariant of an SQFT, it changes in a controlled fashion. This control is the basis of our torsion invariant.
Integrality of the $q$-expansion {#sec.int}
--------------------------------
In addition to holomorphicity, the other fundamental fact about the Witten genus $\eta^n Z_{RR}({\mathcal F})$ of a (compact) SQFT ${\mathcal F}$ is the integrality of its $q$-expansion. We briefly review the argument. Let ${\mathcal F}[S^1]$ denote the $S^1$-equivariant supersymmetric quantum mechanics model produced by compactifying ${\mathcal F}$ on a circle (with Ramond aka nonbounding spin structure). Then $\eta^n Z_{RR}({\mathcal F})$ can be interpreted as the supersymmetric index of ${\mathcal F}[S^1]$, with $q$ parameterizing the $S^1$-action,
and indices are well-known to be integral, since they merely count with signs the number of supersymmetric ground states. Note that the compactification breaks manifest modularity. Indeed, suppose we didn’t already know that $Z_{RR}({\mathcal F})$ was holomorphic (for ${\mathcal F}$ compact). Then this compactification implements the canonical way to extract a holomorphic function from a real-analytic modular form: analytically continue away from $\bar \tau = \tau^*$ and take the limit $\bar \tau \to -i \infty$.[^16]
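A standard example of this projection (our illustration, not an equation from the paper) is the modular completion of the weight-$2$ Eisenstein series: $$\widehat{E}_2(\tau,\bar\tau) = E_2(\tau) - \frac{3}{\pi \tau_2}, \qquad \tau_2 = \frac{\tau-\bar\tau}{2i},$$ which is real-analytic modular of weight $2$. Continuing $\tau$ and $\bar\tau$ as independent variables and taking $\bar\tau \to -i\infty$ at fixed $\tau$ sends $1/\tau_2 \to 0$ and returns the holomorphic $q$-series $E_2(\tau) = 1 - 24\sum_{n\ge1} \sigma_1(n) q^n$.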
Depending on the value of the gravitational anomaly $n$, one can make stronger statements than mere integrality. The $q$-dependence is immaterial — the statements hold in general for SQM models of degree $n$ equipped with a time-reversal symmetry.[^17] There are various ways to define the notion of “degree-$n$ SQM model”, just like the choices in §\[sec.anomaly\] for how to handle the gravitational anomaly. One general way is to work with SQM models that are relative, in the sense of [@FreedTeleman2012], to certain short-range-entangled $(1+1)$-dimensional phases. When $n\geq0$, another explicit method is to employ spectator fermions. Then a [*degree-$n$ SQM model*]{} is an SQM model (i.e. a super Hilbert space ${\mathcal H}$ with an odd operator $G$ generating the supersymmetry; it is “compact” when $G$ is
Fredholm) equipped with an action by the $n$th Clifford algebra $\operatorname{Cliff}(n)$ (which should (super)commute with $G$). The presence of a time-reversal symmetry equips ${\mathcal H}$ with a real structure, acted on by the real Clifford algebra $\operatorname{Cliff}(n,{{\mathds}R})$. The supersymmetric ground states are then a finite-dimensional $\operatorname{Cliff}(n,{{\mathds}R})$-module $V$.[^18]
The usual [supersymmetric index]{} of the SQM model ignores the time-reversal symmetry: it depends just on $V\otimes {{\mathds}C}$ as a $\operatorname{Cliff}(n,{{\mathds}C})$-module. When $n$ is even, $\operatorname{Cliff}(n,{{\mathds}C})$ has two irreducible modules, differing by parity. Choose one of them arbitrarily to be “the” irrep $I$; then $V \otimes {{\mathds}C}\cong I^{a|b} = I \otimes_{{\mathds}C}{{\mathds}C}^{a|b}$, where ${{\mathds}C}^{a|b}$ means the (complex) supervector space with graded dimension $(a,b)$. The ordinary [*index*]{} of $V$ is simply $a-b$.[^19] But in the presence of a time-reversal symmetry, we don’t just have the $\operatorname{Cliff}(n,{{\mathds}C})$-module $V \otimes {{\mathds}C}$ — we have the $\operatorname{Cliff}(n,{{\mathds}R})$-module $V$. It turns out that when $n = 2 \pmod 4$, there is only one irreducible $\operatorname{Cliff}(n,{{\mathds}R})$-module $J$, with complexification $J \otimes {{\mathds}C}\cong I^{1|1}$. Thus the index vanishes when $n = 2 \pmod 4$. And when $n = 4 \pmod 8$, there are two irreducible $\operatorname{Cliff}(n,{{\mathds}R})$-modules, $J^{1|0}$ and $J^{0|1}$, but $J^{1|0} \otimes {{\mathds}C}= I^{2|0}$ splits as two copies of the irreducible
$\operatorname{Cliff}(n,{{\mathds}C})$-module, and so the index is automatically even. In summary, the index of a degree-$n$ SQM model with time reversal symmetry lives in $m{{\mathds}Z}$ with: $$\label{eqn.mofn}
m = \begin{cases}
1, & n = 0 \pmod 8, \\
2, & n = 4 \pmod 8, \\
0, & \text{else}.
\end{cases}$$ In the SQFT case that we care about, the Witten genus $\eta^n Z_{RR}({\mathcal F})$ has $q$-expansion in $m{{\mathds}Z}(\!(q)\!)$.[^20]
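The case analysis above is simple enough to encode directly (a trivial sketch, ours):

```python
# Sketch (ours): the multiplier m of the displayed case formula, so that the
# index of a degree-n time-reversal SQM model lives in m*Z.
def index_multiplier(n):
    r = n % 8  # Clifford-module theory depends only on n mod 8 (Bott periodicity)
    if r == 0:
        return 1
    if r == 4:
        return 2
    return 0  # the index vanishes identically in the remaining degrees

print([index_multiplier(n) for n in range(8)])  # -> [1, 0, 0, 0, 2, 0, 0, 0]
```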
### The mod-2 index
That the indexes of time-reversal SQM models vanish in degrees $2$ and $6$ mod $8$ and are even in degree $4$ mod $8$ is compensated by a more refined “mod-2 index,” which is nontrivial in degrees $n=1$ and $2 \pmod 8$. We will review its construction because it already measures some torsion in the space of SQFTs and because a variation of the mod-2 index appears when trying to understand indexes of noncompact SQM models. We start with the case $n=1$. The even subalgebra of $\operatorname{Cliff}(1,{{\mathds}R})$ is isomorphic to ${{\mathds}R}$, and $\operatorname{Cliff}(1,{{\mathds}R})$ has a unique irreducible module, namely itself. We will call it ${{\mathds}R}^{1|1}$. The space $V$ of supersymmetric ground states of a degree-$1$ SQM model is then isomorphic to ${{\mathds}R}^{a|a}$ for some nonnegative integer $a$. By definition, the [*mod-2 index*]{}
of the SQM model is $a \pmod 2$.
Although the integer $a$ is not a deformation invariant of the degree-$1$ SQM model, the mod-2 index $a\pmod 2$ is. To explain why, we can repeat the logic from §\[sec.noncompact\] to reinterpret a deformation of a degree-$n$ SQM model as a mildly-noncompact degree-$(n+1)$ SQM model: promote the deformation parameter to a dynamical supersymmetric multiplet; the fermion in this multiplet contributes $+1$ to the degree of the model. The upshot is that any deformation of a degree-$n$ SQM model will add or subtract to the ground states $V$ some $\operatorname{Cliff}(n+1,{{\mathds}R})$-module (thought of as a $\operatorname{Cliff}(n,{{\mathds}R})$-module).
But the even subalgebra of $\operatorname{Cliff}(2,{{\mathds}R})$ is isomorphic to ${{\mathds}C}$ thought of as a real algebra.[^21] Because ${{\mathds}C}$ is a field, $\operatorname{Cliff}(2,{{\mathds}R})$ is irreducible as a module over itself. We will call this irreducible module ${{\mathds}C}^{1|1}$. It has even graded dimension when restricted to $\operatorname{Cliff}(1,{{\mathds}R})$, since $\dim_{{\mathds}R}{{\mathds}C}$ is even, and so adding it to or subtracting it from $V = {{\mathds}R}^{a|a}$ will not change the value of $a \pmod 2$. This is why the mod-2 index of a degree-$1$ SQM model is a deformation invariant.
The same logic also defines a deformation-invariant mod-2 index of a degree-$2$ SQM model. The
ground states $V$ are isomorphic, as a $\operatorname{Cliff}(2,{{\mathds}R})$-module, to ${{\mathds}C}^{a|a}$ for some integer $a$, and the [*mod-2 index*]{} is $a \pmod 2$. A deformation will involve adding or subtracting from $V$ the underlying $\operatorname{Cliff}(2,{{\mathds}R})$-module of some $\operatorname{Cliff}(3,{{\mathds}R})$-module. The even subalgebra of $\operatorname{Cliff}(3,{{\mathds}R})$ is isomorphic to the quaternion algebra ${{\mathds}H}$. Because ${{\mathds}H}$ is a skew field, $\operatorname{Cliff}(3,{{\mathds}R})$ is irreducible as a module over itself. Call this irreducible module ${{\mathds}H}^{1|1}$. Since $\dim_{{\mathds}C}{{\mathds}H}$ is even, if you add or subtract some multiple of ${{\mathds}H}^{1|1}$ to ${{\mathds}C}^{a|a}$, you do not change the value of $a \pmod 2$.
The story repeats when $n = 1$ or $2 \pmod 8$, because the category of $\operatorname{Cliff}(n,{{\mathds}R})$-supermodules depends only on the value of $n \pmod 8$. What about when $n = 3$? The ground states $V$ for an SQM model are then isomorphic to ${{\mathds}H}^{a|a}$ for some nonnegative integer $a$, and so we may still contemplate a [*mod-2 index*]{} defined to be the value of $a \pmod 2$. But now this mod-2 index is not a deformation invariant because $\operatorname{Cliff}(4,{{\mathds}R})$ is not irreducible over itself. In fact, both irreps of $\operatorname{Cliff}(4,{{\mathds}R})$ restrict over $\operatorname{Cliff}(3,{{\mathds}R})$ to copies of ${{\mathds}H}^{1|1}$, and so only the dataless “$a \pmod 1$” is a
deformation invariant. The same is true when $n = 5$, $6$, or $7 \pmod 8$.
### Noncompact SQM models
We turn now to the index of “mildly noncompact” SQM models. The definition of “mild noncompactness” mirrors §\[sec.noncompact\]: the SQM model ${\mathcal X}$ should come with a local operator $\Phi$ parameterizing the noncompact direction; writing ${\mathcal Y}_p$ for the theory produced from ${\mathcal X}$ by adding a fermion $\lambda$ and a fermionic superpotential $\lambda(\Phi-p)$, we demand that ${\mathcal Y}_p$ stabilizes in the limits $p \to \pm \infty$ to compact SQM models ${\mathcal Y}_\pm$. For definiteness, we will first describe the case when ${\mathcal X}$ has degree $n=4$, in which case each ${\mathcal Y}_p$ is an SQM model of degree $3$.
We lose no generality by assuming that as $p$ varies, the Hilbert space ${\mathcal H}$ of ${\mathcal Y}_p$ is independent of $p$, with the only variation being in the choice of supersymmetry.[^22] Since ${\mathcal Y}_p$ has degree $3$, ${\mathcal H}$ is naturally a module for $\operatorname{Cliff}(3,{{\mathds}R})$. Choose[^23] an isomorphism $\operatorname{Cliff}(3,{{\mathds}R}) \cong {{\mathds}H}\otimes \operatorname{Cliff}(-1,{{\mathds}R})$, and let $\gamma$ denote the generator of $\operatorname{Cliff}(-1,{{\mathds}R})$. The supersymmetry generator in ${\mathcal Y}_p$ is $G(p) = g(p) \gamma$, where $g(p)$ is a quaternionic matrix; the time-reversal structure | 0 | non_member_980 |
for SQM models of degree $3$ requires that $g(p)$ be “quaternionically self-adjoint,” meaning that its eigenvalues live in ${{\mathds}R}\subset {{\mathds}H}$. Thus, after a $p$-dependent change of basis, the only thing varying with $p$ is the spectrum of $g(p)$, which by compactness is a discrete subset (with finite multiplicities) of ${{\mathds}R}$.
The “index” of a noncompact SQM model ${\mathcal X}$ can be defined as the Ramond partition function $Z_R({\mathcal X}) = \mathrm{Tr}_{\mathcal H}(-1)^F \cdots$, but the noncompactness means that this partition function will depend nontrivially on the circumference of the worldline circle. The limit $\bar\tau \to -i\infty$ corresponds to the IR limit of $Z_R({\mathcal X})$, which merely counts supersymmetric ground states. We will use the term “index” to mean this IR limit.
If the limits $g(\pm\infty) = \lim_{p \to \pm \infty} g(p)$ have no kernel, then the index (in the IR sense) of ${\mathcal X}$ is relatively easy to compute: it counts with signs the number of eigenvalues of $g$ that cross $0$, times a factor of 2 coming from the quaternionic nature of degree-$3$ and degree-$4$ SQM models. Indeed, to say that the limits $g(\pm\infty)$ have no kernel is to say that supersymmetry is spontaneously broken in these limits, and | 0 | non_member_980 |
${\mathcal X}$ wasn’t really “noncompact” at all, because it flows to a compact theory, and the factor of $2$ is the one coming from (\[eqn.mofn\]) when $n=4$.
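The eigenvalue-crossing count can be sketched as a spectral-flow computation. The toy model below is our illustration (not the paper's construction): the spectrum of $g(p)$ is taken to be a finite list of curves, each crossing $0$ transversally, and the index is the signed crossing count, doubled for a degree-4 model.

```python
# Spectral-flow toy model: the index of the noncompact degree-4 model counts,
# with signs, the eigenvalues of g(p) that cross 0 as p runs from -inf to
# +inf.  Each crossing is recorded by the sign of d(lambda)/dp at the zero.

def spectral_flow(crossing_signs):
    """Signed count of eigenvalue crossings of 0."""
    return sum(crossing_signs)

def index_degree4(crossing_signs):
    # the factor of 2 is the quaternionic multiplicity of degree-3/4 models
    return 2 * spectral_flow(crossing_signs)
```

For instance, two upward crossings and one downward crossing give spectral flow $1$ and index $2$.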
If, on the other hand, supersymmetry is not spontaneously broken in the boundary theories ${\mathcal Y}_\pm$, then the index of ${\mathcal X}$ receives a fractional contribution from each eigenvalue that lands on $0$ in the limits $p \to \pm \infty$. After multiplying by the factor of $2$ coming from (\[eqn.mofn\]), we find that the parity of the index of ${\mathcal X}$ is determined by the number of supersymmetric ground states in the boundary theories ${\mathcal Y}_\pm$. And this number is exactly the non-deformation-invariant mod-2 index of ${\mathcal Y}_\pm$!
The same argument holds whenever ${\mathcal X}$ has degree $n = 4 \pmod 8$. When $n = 0 \pmod 8$, a similar argument holds without a factor of $2$. To give a unified formula, we complexify and strip off the spectator fermions. Then, after complexifying, the supersymmetric ground states in ${\mathcal Y}_\pm$ form a vector space isomorphic to ${{\mathds}C}^{a|a}$, with $a \in m{{\mathds}Z}$, and the mod-2 index is $\frac a m \pmod 2$. We will call this number $a$ the “bosonic index” of ${\mathcal Y}_\pm$. The end | 0 | non_member_980 |
---
abstract: |
We present adaptive optics assisted integral field spectroscopy of nine H$\alpha$-selected galaxies at $z$=0.84–2.23 drawn from the HiZELS narrow-band survey. Our observations map the kinematics of these star-forming galaxies on $\sim$kpc-scales. We demonstrate that within the ISM of these galaxies, the velocity dispersion of the star-forming gas ($\sigma$) follows a scaling relation $\sigma\propto\Sigma_{\rm SFR}^{1/n}$+$\,constant$ (where $\Sigma_{\rm SFR}$ is the star formation surface density and the constant includes the stellar surface density). Assuming the disks are marginally stable (Toomre $Q$=1), this follows from the Kennicutt-Schmidt relation ($\Sigma_{\rm SFR}$=$A\Sigma_{\rm
gas}^n$), and we derive best fit parameters of $n$=1.34$\pm$0.15 and $A$=3.4$_{-1.6}^{+2.5}\times$10$^{-4}$M$_{\odot}$yr$^{-1}$kpc$^{-2}$, consistent with the local relation, and implying cold molecular gas masses of M$_{\rm gas}$=10$^{9-10}$M$_{\odot}$ and molecular gas fractions M$_{\rm gas}$/(M$_{\rm
gas}$+M$_{\star}$)=0.3$\pm$0.1, with a range of 10–75%. We also identify eleven $\sim$kpc-scale star-forming regions (clumps) within our sample and show that their sizes are comparable to the wavelength of the fastest growing mode. The luminosities and velocity dispersions of these clumps follow the same scaling relations as local H[ii]{} regions, although their star formation densities are a factor $\sim$15$\pm$5$\times$ higher than typically found locally. We discuss how the clump properties are related to the disk, and show that their high | 0 | non_member_981 |
masses and luminosities are a consequence of the high disk surface density.
author:
- 'A.M. Swinbank, Ian Smail, D. Sobral, T. Theuns, P.N. Best, & J.E. Geach'
title: 'The Properties of the Star-Forming Interstellar Medium at $z$=0.8–2.2 from HiZELS: Star-Formation and Clump Scaling Laws in Gas Rich, Turbulent Disks'
---
Introduction
============
The majority of the stars in the most massive galaxies (M$_{\star}\gsim$10$^{11}$M$_{\odot}$) formed around 8–10billion years ago, an epoch when star formation was at its peak [@Hopkins06; @Sobral12b]. Galaxies at this epoch appear to be gas-rich ($f_{\rm gas}$=20–80%; @Tacconi10 [@Daddi10; @Geach11]) and turbulent [@Lehnert09], with high velocity dispersions given their rotational velocities ($\sigma$=30–100kms$^{-1}$, $v_{\rm max}$/$\sigma\sim
$0.2–1; e.g. @ForsterSchreiber09 [@Genzel08; @Wisnioski11; @Bothwell12]). Within the dense and highly pressurised inter-stellar medium (ISM) of these high-redshift galaxies, it has been suggested that star formation may be triggered by fragmentation of dynamically unstable gas (in contrast to star-formation occurring in giant molecular clouds in the Milky-Way which continually condense from a stable disk and then dissipate). This process may lead to the formation of massive ($\sim $10$^{8-9}$M$_{\odot}$) star-forming regions [e.g. @ElmegreenD07; @Bournaud09] and give rise to the clumpy morphologies that are often seen in high-redshift starbursts [@Elmegreen09].
In order to | 0 | non_member_981 |
explain the ubiquity of “clumpy” disks seen in images of high-redshift galaxies, numerical simulations have also suggested that most massive, star-forming galaxies at $z$=1–3 continually accrete gas from the inter-galactic medium along cold and clumpy streams from the cosmic web [@Keres05; @Dekel09; @Bournaud09; @VandeVoort11]. This mode of accretion is at its most efficient at $z\sim $1–2, and offers a natural route for maintaining the high gas surface densities, star formation rates and clumpy morphologies of galaxies at these epochs. In such models, the gas disks fragment into a few bound clumps which are a factor 10–100$\times$ more massive than star-forming complexes in local galaxies. The gravitational release of energy as the most massive clumps form, torques between in-spiraling clumps and energy injection from star formation are all likely to contribute to maintaining the high turbulence velocity dispersion of the inter-stellar medium (ISM) [e.g. @Bournaud09; @Lehnert09; @Genzel08; @Genzel11].
In order to refine or refute these models, the observational challenge is now to quantitatively measure the internal properties of high-redshift galaxies, such as their cold molecular gas mass and surface density, disk scaling relations, chemical make up, and distribution and intensity of star formation. Indeed, constraining the evolution of the star formation | 0 | non_member_981 |
and gas scaling relations with redshift, stellar mass and/or gas fraction are required in order to understand star formation throughout the Universe. In particular, such observations are vital to determine if the prescriptions for star formation which have been developed at $z$=0 can be applied to the rapidly evolving ISM of gas-rich, high-redshift galaxies [@Krumholz10; @Hopkins12b].
To gain a census of the dominant route by which galaxies assemble the bulk of their stellar mass within a well selected sample of high-redshift galaxies, we have conducted a wide field (several degree-scale) near-infrared narrow-band survey (the High-Z Emission Line Survey; HiZELS) which targets H$\alpha$ emitting galaxies in four precise ($\Delta z$=0.03) redshift slices: $z$=0.40, 0.84, 1.47 and 2.23 [@Geach08; @Sobral09; @Sobral10; @Sobral11; @Sobral12a; @Sobral12b]. This survey provides a large, star formation limited sample of identically selected H$\alpha$ emitters with properties “typical” of galaxies which will likely evolve into $\sim $L$_{\star}$ galaxies by $z$=0, but seen at a time when they are assembling the bulk of their stellar mass, and thus at a critical stage in their evolutionary history. Moreover, since HiZELS was carried out in the best-studied extra-galactic survey fields, there is a wealth of multi-wavelength data, including 16–36 medium and broad-band | 0 | non_member_981 |
photometry (from rest-frame UV–mid-infrared wavelengths allowing robust stellar masses to be derived), *Herschel* 250–500$\mu$m imaging (allowing bolometric luminosities and star formation rates to be derived) as well as high-resolution morphologies for a subset from the *Hubble Space Telescope* CANDELS and COSMOS ACS surveys.
In this paper, we present adaptive optics assisted integral field spectroscopy of nine star-forming galaxies selected from HiZELS. The galaxies studied here have H$\alpha$-derived star formation rates of 1–27M$_\odot$yr$^{-1}$ and will likely evolve into $\sim
$L$^{\star}$ galaxies by $z$=0. They are therefore representative of the high-redshift star-forming population. We use the data to explore the scaling relations between the star formation distribution intensity and gas dynamics within the ISM, as well as the properties of the largest star-forming regions. We adopt a cosmology with $\Omega_{\Lambda}$=0.73, $\Omega_{m}$=0.27, and H$_{0}$=72kms$^{-1}$Mpc$^{-1}$ in which 0.12$''$ corresponds to a physical scale of 0.8kpc at $z$=1.47, the median redshift of our survey. All quoted magnitudes are on the AB system. For all of the star formation rates and stellar mass estimates, we use a @Chabrier03 initial mass function (IMF).
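The quoted angular scale can be reproduced directly from the adopted cosmology. The sketch below is a minimal stdlib implementation (no cosmology library assumed), integrating the comoving distance by Simpson's rule for a flat $\Lambda$CDM model with the paper's parameters; it returns roughly 8.4 kpc per arcsecond at $z$=1.47.

```python
import math

# Flat LambdaCDM angular scale, with H0 = 72 km/s/Mpc, Om = 0.27, OL = 0.73.

C_KMS, H0, OM, OL = 299792.458, 72.0, 0.27, 0.73

def E(z):
    """Dimensionless Hubble parameter H(z)/H0 for a flat universe."""
    return math.sqrt(OM * (1 + z)**3 + OL)

def comoving_distance_mpc(z, steps=1000):
    """Comoving distance in Mpc via composite Simpson integration."""
    h = z / steps
    s = sum((4 if i % 2 else 2) / E(i * h) for i in range(1, steps))
    return (C_KMS / H0) * (h / 3) * (1 / E(0) + s + 1 / E(z))

def kpc_per_arcsec(z):
    """Physical scale (kpc per arcsec) at redshift z."""
    d_a = comoving_distance_mpc(z) / (1 + z)      # angular diameter distance
    return d_a * 1e3 * math.pi / (180 * 3600)     # Mpc -> kpc, arcsec -> rad

scale = kpc_per_arcsec(1.47)   # ~8.4 kpc per arcsec at the median redshift
```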
Observations
============
Details of the target selection, observations and data-reduction are given in @Swinbank12a. Briefly, we selected nine galaxies from HiZELS with H$\alpha$ fluxes | 0 | non_member_981 |
0.7–1.6$\times$10$^{-16}$ergs$^{-1}$cm$^{-2}$ (star formation rates of SFR$_{\rm H\alpha}$=1–27M$_{\odot}$yr$^{-1}$) which lie within 30$''$ of bright (R$<$15) stars. We performed natural guide star adaptive optics (AO) observations with the SINFONI IFU between 2009 September and 2011 April in $\sim $0.6$''$ seeing and photometric conditions with exposure times between 3.6 and 13.4ks. At the three redshift slices of our targets, $z$=0.84\[2\], $z$=1.47\[6\] and $z$=2.23\[1\], the H$\alpha$ emission line is redshifted to $\sim
$1.21, 1.61 and 2.12$\mu$m (i.e. into the $J$, $H$ and $K$-bands respectively). The median Strehl ratio achieved for our observations is 20% and the median encircled energy within 0.1$''$ (the approximate spatial resolution of our observations) is 25%.
The data were reduced using the SINFONI [esorex]{} data reduction pipeline which extracts, flat-fields, wavelength calibrates and forms the data-cube for each exposure. The final (stacked) data-cube for each galaxy was generated by aligning the individual data-cubes and then combining them using an average with a 3-$\sigma$ clip to reject cosmic rays. For flux calibration, standard stars were observed each night either immediately before or after the science exposures and were reduced in an identical manner to the science observations.
As Fig. \[fig:2dmaps\] shows, all nine galaxies in our SINFONI-HiZELS survey (SHiZELS) display strong H$\alpha$ emission, | 0 | non_member_981 |
with luminosities of L$_{\rm H\alpha}\sim $10$^{41.4-42.4}$ergs$^{-1}$. Fitting the H$\alpha$ and \[N[ii]{}\]$\lambda\lambda$6548,6583 emission lines pixel-by-pixel using a $\chi^{2}$ minimisation procedure we construct intensity, velocity and velocity dispersion maps of our sample and show these in Fig. \[fig:2dmaps\] (see also @Swinbank12a for details).
Analysis & Discussion
=====================
Galaxy Dynamics and Star Formation {#sec:dynSF}
----------------------------------
As @Swinbank12a demonstrate, the ratio of dynamical-to-dispersion support for this sample is $v$sin($i$)/$\sigma$=0.3–3, with a median of 1.1$\pm$0.3, which is consistent with similar measurements for both AO and non-AO studies of star-forming galaxies at this epoch [e.g. @ForsterSchreiber09]. The velocity fields and low kinemetry values of the SHiZELS galaxies (total velocity asymmetry, K$_{\rm tot}$=0.2–0.5) also suggest that at least six galaxies (SHiZELS 1, 7, 8, 9, 10, & 11) have dynamics consistent with large, rotating disks, although all display small-scale deviations from the best-fit dynamical model, with $<$data$-$model$>$=30$\pm$10kms$^{-1}$, with a range from $<$data$-$model$>$=15–70kms$^{-1}$ [@Swinbank12a].
We also use the multi-wavelength imaging to calculate the rest-frame SEDs of the galaxies in our sample and so derive the stellar mass, reddening and estimates of the star-formation history [@Sobral11]. From the broad-band SEDs (Fig. 1 of @Swinbank12a), the average E(B$-$V) for our sample is E(B$-$V)=0.28$\pm$0.10 which corresponds to A$_v$=1.11$\pm$0.27mag and indicates A$_{\rm H\alpha}$=0.91$\pm$0.21mag. The resulting dust-corrected H$\alpha$ star formation rate for the sample is SFR$_{\rm H\alpha}$=16$\pm$5M$_{\odot}$yr$^{-1}$, which is consistent with that inferred from the far-infrared SEDs using stacked [*Herschel*]{} SPIRE observations (SFR$_{\rm FIR}$=18$\pm$8M$_{\odot}$yr$^{-1}$; @Swinbank12a).
Next, to investigate the star formation occurring within the ISM of each galaxy, we measure the star formation surface density and velocity dispersion of each pixel in the maps. Since we do not have spatially resolved reddening maps, for each galaxy we simply correct the star formation rate in each pixel using the best-fit E(B$-$V) for that system. We also remove the rotational contribution to the line width at each pixel by calculating the local $\Delta$V/$\Delta$R across the point spread function (PSF) for each pixel [@Davies11]. In Fig. \[fig:SF\_galgal\] we plot the resulting line of sight velocity dispersion ($\sigma$) as a function of star formation surface density ($\Sigma_{\rm SFR}$) for each galaxy in our sample. We see that there appears to be a correlation between $\Sigma_{\rm SFR}$ and $\sigma$, and as @Krumholz10 show, this power-law correlation may be a natural consequence of the gas and star formation surface density scaling laws. For example, first consider the Toomre stability criterion, $Q$, [@Toomre64]. $$Q\,=\,\frac{\sigma\kappa}{\pi G\Sigma_{\rm disk}}
\label{eqn:toomre}$$ where $\sigma$ denotes the | 0 | non_member_981 |
line of sight velocity dispersion, $\Sigma_{\rm disk}$ is the average surface density of the disk, $\kappa$=$a$$v_{\rm max}$/$R$ where $v_{\rm max}$ is the rotational velocity of the disk, $R$ is the disk radius and $a$=$\sqrt2$ for a flat rotation curve. Galaxies whose disks have $Q< $1 are unstable to local gravitational collapse and will fragment into clumps, whereas those with $Q\gsim $1 have sufficient rotational support for the gas to withstand collapse. As @Hopkins12b [e.g. see also @Cacciato12] point out, gas-rich galaxies are usually driven to $Q\sim $1 since regions with $Q< $1 begin forming stars, leading to super-linear feedback which eventually arrests further collapse due to energy/momentum injection (recovering $Q\sim$ 1). For galaxies with $Q\gg $1, there is no collapse, no dense regions form and hence no star formation (and so such galaxies would not be selected as star-forming systems).
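Eq. \[eqn:toomre\] with $\kappa$=$\sqrt2\,v_{\rm max}$/$R$ is straightforward to evaluate numerically. The sketch below uses the unit conventions stated later in the text ($G$ in kpcM$_{\odot}^{-1}$(kms$^{-1}$)$^2$, $R$ in kpc, $\sigma$ and $v_{\rm max}$ in kms$^{-1}$, $\Sigma_{\rm disk}$ in M$_{\odot}$kpc$^{-2}$); the input values in the usage note are illustrative, not fitted quantities from the paper.

```python
import math

# Toomre Q for a thin disk with a flat rotation curve (kappa = sqrt(2) v/R).

G = 4.302e-6  # gravitational constant, kpc Msun^-1 (km/s)^2

def toomre_q(sigma, v_max, radius_kpc, sigma_disk):
    """Q = sigma * kappa / (pi * G * Sigma_disk); Q < 1 means unstable."""
    kappa = math.sqrt(2) * v_max / radius_kpc
    return sigma * kappa / (math.pi * G * sigma_disk)
```

For example, `toomre_q(75, 150, 2.4, 1e9)` (a hypothetical galaxy with $\sigma$=75kms$^{-1}$, $v_{\rm max}$=150kms$^{-1}$, $R$=2.4kpc, $\Sigma_{\rm disk}$=10$^9$M$_{\odot}$kpc$^{-2}$) gives $Q$ of order unity, as expected for marginally stable disks.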
Following @Rafikov01, and focusing on the largest unstable fluctuations, the appropriate combination of gas and stellar surface density ($\Sigma_{\rm gas}$ and $\Sigma_{\star}$ respectively) is $$\Sigma_{\rm disk}\,=\,\Sigma_{\rm gas}\,+\,\left(\frac{2}{1+f_{\sigma}^2}\right)\Sigma_{\star}
\label{eqn:KSlaw}$$ where $f_{\sigma}$=$\sigma_{\star}$/$\sigma_{g}$ is the ratio of the velocity dispersion of the stellar component to that of the gas (see also the discussion in @Romeo11).
Next, @KS98 show that the gas and | 0 | non_member_981 |
star formation surface densities follow a scaling relation $$\left(\frac{\Sigma_{\rm SFR}}{\rm M_{\odot}\,yr^{-1}\,kpc^{-2}}\right)\,=\,A\left(\frac{\Sigma_{\rm gas}}{\rm M_{\odot}\,pc^{-2}}\right)^{\it n}
\label{eqn:KS98}$$ For local, star-forming galaxies, the exponent, $n\sim1.5$ and the absolute star formation efficiency, $A$=1.5$\pm$0.4$\times$10$^{-4}$ [@Kennicutt98] implying an efficiency for star formation per unit mass of $\sim $0.04 which holds across at least four orders of magnitude in gas surface density.
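Eq. \[eqn:KS98\] can be inverted to estimate a gas surface density from an observed star-formation surface density. A minimal sketch, using the local coefficients quoted above ($n$=1.5, $A$=1.5$\times$10$^{-4}$) as defaults and the units of Eq. \[eqn:KS98\] ($\Sigma_{\rm SFR}$ in M$_{\odot}$yr$^{-1}$kpc$^{-2}$, $\Sigma_{\rm gas}$ in M$_{\odot}$pc$^{-2}$):

```python
# Invert the Kennicutt-Schmidt relation Sigma_SFR = A * Sigma_gas**n.

def sigma_gas(sigma_sfr, n=1.5, A=1.5e-4):
    """Gas surface density (Msun/pc^2) implied by Sigma_SFR (Msun/yr/kpc^2)."""
    return (sigma_sfr / A) ** (1.0 / n)
```

For instance, $\Sigma_{\rm SFR}$=1M$_{\odot}$yr$^{-1}$kpc$^{-2}$ implies $\Sigma_{\rm gas}\sim$350M$_{\odot}$pc$^{-2}$ with the local coefficients.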
Combining these relations, the velocity dispersion, $\sigma$, should therefore scale as $$\frac{\sigma}{\rm km\,s^{-1}}\,=\,\frac{\pi\,\times\,10^6\,G\,R}{\sqrt2\,v_{\rm max}}
\left(\left(\frac{\Sigma_{\rm SFR}}{A}\right)^{1/n}+\left(\frac{2}{1+f_{\sigma}^2}\right)\frac{\Sigma_{\star}}{10^6}\,\right)
\label{eqn:sS}$$ where $\Sigma_{\rm SFR}$ and $\Sigma_{\star}$ are measured in M$_{\odot}$yr$^{-1}$kpc$^{-2}$ and M$_{\odot}$kpc$^{-2}$ respectively, $R$ is in kpc, $v_{\rm max }$ in kms$^{-1}$, and $G$=4.302$\times$10$^{-6}$kpcM$_{\odot}^{-1}$(kms$^{-1}$)$^2$. With a power law index of $n$=1.4, and a marginally stable disk ($Q$=1), for each galaxy we therefore expect a power law relation $\sigma\propto\Sigma_{\rm SFR}^{0.7}$+$constant$ [@Krumholz12].
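Eq. \[eqn:sS\] can be sketched as a predictor for the line-of-sight dispersion. The implementation below follows the stated units; the default coefficients and the inputs in the test are illustrative stand-ins, not the best-fit values derived later.

```python
import math

# Predicted sigma from Q = 1 combined with the Kennicutt-Schmidt law
# (Eq. sS).  Units: R in kpc, v_max in km/s, Sigma_SFR in Msun/yr/kpc^2,
# Sigma_star in Msun/kpc^2; the result is in km/s.

G = 4.302e-6  # kpc Msun^-1 (km/s)^2

def sigma_predicted(sigma_sfr, sigma_star, R, v_max,
                    n=1.4, A=1.5e-4, f_sigma=1.5):
    prefac = math.pi * 1e6 * G * R / (math.sqrt(2) * v_max)
    gas_term = (sigma_sfr / A) ** (1.0 / n)          # Sigma_gas in Msun/pc^2
    star_term = (2.0 / (1.0 + f_sigma**2)) * sigma_star / 1e6
    return prefac * (gas_term + star_term)
```

The gas term grows as $\Sigma_{\rm SFR}^{1/n}$ while the stellar term supplies the additive constant, reproducing the expected $\sigma\propto\Sigma_{\rm SFR}^{0.7}$+$constant$ shape.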
---------------------- ---------- ------------- ------------------- ------------------------- --------------- ---------------------------- ------------------ ------------- --------------------------------------- -----------------------------------------
ID RA Dec $z_{\rm H\alpha}$ SFR$_{\rm H\alpha}^{a}$ $r_{1/2}^{b}$ $\sigma_{\rm H\alpha}^{c}$ $v_{\rm asym}^d$ E(B$-$V) $\log$($\frac{M_{\star}}{M_{\odot}}$) $\log$($\frac{M_{\rm gas}}{M_{\odot}}$)
(J2000) (J2000) (M$_{\odot}$/yr) (kpc) (kms$^{-1}$) (kms$^{-1}$)
SHiZELS-1 021826.3 $-$044701.6 0.8425 2 1.8$\pm$0.3 98$\pm$15 112$\pm$11 0.4$\pm$0.1 10.03$\pm$0.15 9.4$\pm$0.4
SHiZELS-4 100155.3 $+$021402.6 0.8317 1 1.4$\pm$0.5 77$\pm$20 ... 0.0$\pm$0.2 9.74$\pm$0.12 8.9$\pm$0.4
SHiZELS-7 021700.4 $-$050150.8 1.4550 8 3.7$\pm$0.2 75$\pm$11 145$\pm$10 0.2$\pm$0.2 9.81$\pm$0.28 9.8$\pm$0.4
SHiZELS-8 021821.0 $-$051907.8 1.4608 7 3.1$\pm$0.3 69$\pm$10 160$\pm$12 0.2$\pm$0.2 10.32$\pm$0.28 9.8$\pm$0.4
SHiZELS-9              021713.0   $-$045440.7 1.4625              6                         4.1$\pm$0.2    62$\pm$11                    190$\pm$20        0.2$\pm$0.2   10.08$\pm$0.28                          9.8$\pm$0.4
SHiZELS-10 021739.0 $-$044443.1 1.4471 10 2.3$\pm$0.2 64$\pm$8 30$\pm$12 0.3$\pm$0.2 9.42$\pm$0.33 9.9$\pm$0.4
SHiZELS-11 021821.2 $-$050248.9 1.4858 8 1.3$\pm$0.4 190$\pm$18 224$\pm$15 0.5$\pm$0.2 11.01$\pm$0.24 10.1$\pm$0.4
SHiZELS-12 021901.4 $-$045814.6 1.4676 5 0.9$\pm$0.5 115$\pm$10 ... 0.3$\pm$0.2 10.59$\pm$0.30 9.6$\pm$0.4
SHiZELS-14             100051.6   $+$023334.5 2.2418              27                        4.6$\pm$0.4    131$\pm$17                   ...               0.4$\pm$0.1   10.90$\pm$0.20                          10.1$\pm$0.4
Median ... ... 1.46 7$\pm$2 2.4$\pm$0.7 75$\pm$19 147$\pm$31 0.3$\pm$0.1 10.25$\pm$0.50 9.8$\pm$0.2
\[table:gal\_props\]
---------------------- ---------- ------------- ------------------- ------------------------- --------------- ---------------------------- ------------------ ------------- --------------------------------------- -----------------------------------------
Notes: $^{a}$H$\alpha$ star formation rate using the calibration from @Kennicutt98 with a Chabrier IMF; SFR$_{\rm
H\alpha}$=4.6$\times$10$^{-42}$L$_{\rm H\alpha}$. $^b$H$\alpha$ half light radius, deconvolved for the PSF. $^c$Average velocity dispersion for each galaxy, corrected for beam-smearing due to the PSF. $^d$$v_{\rm asym}$ denotes the best-fit asymptotic rotation speed of the galaxy, and is corrected for inclination (see @Swinbank12a for details on the kinematic modeling of these galaxies).
In order to test whether this model provides an adequate description of our data, we fit the $\Sigma_{\rm SFR}$–$\sigma$ distribution for each galaxy in our sample. To estimate the stellar surface density, $\Sigma_{\star}$, we follow @Sobral11 and perform a full SED $\chi^2$ fit of the rest-frame UV–mid-infrared photometry using the @Bruzual03 and Bruzual (2007) population synthesis models. We
use photometry from up to 36 (COSMOS) and 16 (UDS) wide, medium and narrow bands (spanning [*GALEX*]{} far-UV and near-UV bands to [ *Spitzer*]{}/IRAC) and calculate the rest-frame spectral energy distribution, reddening, star-formation history and stellar mass [@Sobral10]. The stellar masses of these galaxies range from 10$^{9.7-11.0}$M$_{\odot}$ (Table 1; see also @Swinbank12a).
Since the stellar masses are calculated from 2$''$ aperture photometry (and then corrected to total magnitudes using aperture corrections, @Sobral10), to estimate the stellar surface density in the same area as our IFU observations, we assume that stellar light follows an exponential profile with Sersic index, n$_{\rm ser}$=1–2 and calculate the fraction of the total stellar mass within the disk radius, $R$ (which we define as two times the H$\alpha$ half light radius, $r_{\rm h}$). Allowing a range of power-law index from $n$=1.0–1.8 and a ratio of stellar- to gas- velocity dispersion of $f_{\sigma}$=1–2 [@Korchagin03], we calculate the best-fit absolute star formation efficiency, $A$, and in Fig. \[fig:SF\_galgal\] we overlay the best-fit solutions. Over the range $n$=1.0–1.8, the best fit absolute star formation efficiency for the sample is $A$=(4.1$\pm$2.4)$\times$10$^{-4}$M$_{\odot}$yr$^{-1}$kpc$^{-2}$ (where the error-bar incorporates the galaxy-to-galaxy variation, a range of $f_{\sigma}$=1–2, and the errors on the stellar masses of
each galaxy). We note that at low star formation rates and stellar masses, there is a non-zero velocity dispersion due to the sound speed ($c_s$) of the gas ($c_s\lsim $10kms$^{-1}$ for the Milky Way at the solar circle) which we have neglected since this is below both the resolution limit of our observations and the minimum velocity dispersion caused by the stellar disks in these systems.
We can improve these constraints further by assuming that star formation in each galaxy behaves in a similar way. We reiterate that this model assumes the star formation is occurring in a marginally Toomre stable disk, where the star formation follows the Kennicutt-Schmidt Law. Over a range $A$=10$^{-5}$–10$^{-2}$(M$_{\odot}$yr$^{-1}$kpc$^{-2}$) and $n$=0.8–2.5 we construct a likelihood distribution for all nine galaxies and then combine these to provide a composite likelihood distribution, which we show in Fig. \[fig:KS\]. Although the values of $n$ and $A$ are clearly degenerate, the best-fit solutions have $n$=1.34$\pm$0.15 and $A$=3.4$_{-1.6}^{+2.5}$$\times$10$^{-4}$M$_{\odot}$yr$^{-1}$kpc$^{-2}$. Our derived values for the absolute star-formation efficiency, $A$, and power-law index, $n$, are within 1-$\sigma$ of the values derived for local galaxies [e.g. @KS98; @Leroy08].
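The composite-likelihood procedure can be sketched generically: each galaxy contributes a $\chi^2$ over its pixels against a model evaluated at a grid point $(n, A)$, and the per-galaxy log-likelihoods are summed on the grid. The model function and data used in the test are schematic stand-ins, not the paper's pixel data.

```python
import math

# Grid-based composite likelihood for two shared parameters (n, A).

def log_like(data, model, err=10.0):
    """Gaussian log-likelihood of (x, d) pairs against model(x)."""
    return -0.5 * sum((d - model(x)) ** 2 / err**2 for x, d in data)

def composite(grid, datasets, model_factory):
    """Maximize the summed log-likelihood over a grid of (n, A) pairs."""
    best, best_ll = None, -math.inf
    for n, A in grid:
        ll = sum(log_like(d, model_factory(n, A)) for d in datasets)
        if ll > best_ll:
            best, best_ll = (n, A), ll
    return best
```

Because the datasets share $(n, A)$, the composite peak is sharper than any single galaxy's likelihood, which is the point of combining them.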
Using the $^{12}$CO to trace the cold molecular gas, @Genzel10 showed that gas and star-formation surface densities | 0 | non_member_981 |
of high-redshift ($z\sim $1.5) star-forming galaxies and ULIRGs are also well described by the Kennicutt-Schmidt relation with coefficients $n$=1.17$\pm$0.10 and A=(3.3$\pm$1.5)$\times$10$^{-4}$M$_{\odot}$yr$^{-1}$kpc$^{-2}$, which is comparable to the coefficients we derive from our sample.
In Fig. \[fig:KS\] we plot the star formation and gas surface densities for both local and high-redshift star-forming galaxies and ULIRGs from @Genzel10 and overlay the range of acceptable solutions implied by our data. We reiterate that we have adopted $Q$=1 for this analysis and note that if we adopt $Q<$ 1 then the absolute star formation efficiency will be increased proportionally (as shown in Fig. \[fig:KS\]). Nevertheless, this shows that the values of $n$ and $A$ we derive are consistent with the local and high-redshift star-forming galaxies and ULIRGs, but free from uncertainties associated with converting $^{12}$CO luminosities to molecular gas mass, CO excitation or spatial extent of the gas reservoir.
Using the values of $n$ and $A$ we have derived, we infer cold molecular gas masses for the galaxies in our sample of M$_{\rm
gas}$=10$^{9-10}$M$_{\odot}$ with a median M$_{\rm
gas}$=7$\pm$2$\times$10$^{9}$M$_{\odot}$. This suggests a cold molecular gas fraction of M$_{\rm gas}$/(M$_{\rm
gas}$+M$_{\star}$)=0.3$\pm$0.1 but with a range of 10–75%, similar to those derived for other high-redshift starbursts in | 0 | non_member_981 |
other surveys [@Tacconi10; @Daddi10; @Swinbank11].
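The quoted gas fraction follows directly from the sample medians in Table 1 (M$_{\rm gas}\sim$7$\times$10$^9$M$_{\odot}$, $\log$M$_{\star}$=10.25):

```python
# Cold molecular gas fraction f_gas = M_gas / (M_gas + M_star).

def gas_fraction(m_gas, m_star):
    return m_gas / (m_gas + m_star)

# sample medians from Table 1 give f_gas ~ 0.3
f = gas_fraction(7e9, 10**10.25)
```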
Finally, with estimates of the disk surface density, we can use Eq. \[eqn:toomre\] to construct maps of the spatially resolved Toomre parameter, $Q(x,y)$. Since we set $Q$=1 to derive the coefficients $n$ and $A$, by construction the average $Q$ across the population is unity, but the relative range of $Q(x,y)$ within the ISM of each galaxy is unaffected by this assumption. In Fig. \[fig:2dmaps\] we show the maps of $Q(x,y)$ for each galaxy in our sample (with contours marking $Q(x,y)$=0.5, 1.0 and 2.0). This shows that there is a range of Toomre $Q$ across the ISM, and to highlight the variation with radius, in Fig. \[fig:Q\_r\] we show the Toomre parameter within each pixel of each galaxy as a function of radius (normalised to the half light radius, $r_{\rm h}$). This shows that in the central regions, on average the Toomre $Q$ increases by a factor $\sim $4$\times$ compared to $Q$ at the half light radius, whilst at radii greater than $r_{\rm h}$, $Q$ decreases by approximately the same factor.
### Identification of Star-Forming Regions {#sec:SFregions}
As Fig. \[fig:2dmaps\] shows, the galaxies in our sample exhibit a range of H$\alpha$ morphologies, from compact (e.g. SHiZELS11 | 0 | non_member_981 |
& 12) to very extended/clumpy (e.g. SHiZELS7, 8, 9 & 14). To identify star-forming regions on $\sim $kpc scales and measure their basic properties we isolate the star-forming clumps above the background ($\sigma_{\rm bg}$) by first converting the H$\alpha$ flux map into photon counts (accounting for telescope efficiency) and then search for 3$\sigma_{\rm bg}$ over-densities above the radially averaged background light distribution. In this calculation, we demand that any region is at least as large as the PSF. We identify eleven such regions and highlight these in Fig. \[fig:2dmaps\].
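The clump-selection step can be sketched on a plain 2D array. The toy below flags pixels exceeding the radially averaged background by $n\sigma$; the real analysis additionally converts flux to photon counts and imposes a minimum (PSF-sized) region area, both omitted here for brevity.

```python
import math
from collections import defaultdict

# Toy clump finder: subtract a radially averaged background, then flag
# pixels more than nsigma above it (sigma estimated from the residuals).

def find_clump_pixels(img, cx, cy, nsigma=3.0):
    """Return (x, y) pixels lying nsigma above the radial background."""
    rings = defaultdict(list)
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            rings[int(math.hypot(x - cx, y - cy))].append(v)
    bg = {r: sum(vals) / len(vals) for r, vals in rings.items()}
    resid = [v - bg[int(math.hypot(x - cx, y - cy))]
             for y, row in enumerate(img) for x, v in enumerate(row)]
    mu = sum(resid) / len(resid)
    sd = (sum((u - mu) ** 2 for u in resid) / len(resid)) ** 0.5
    return [(x, y) for y, row in enumerate(img) for x, v in enumerate(row)
            if v - bg[int(math.hypot(x - cx, y - cy))] > mu + nsigma * sd]
```

A single bright pixel dropped into a flat background is recovered, while the smooth background itself is not flagged.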
It is still possible that selecting star-forming regions in this way may give misleading results due to random associations and signal-to-noise effects. We therefore use the H$\alpha$ surface brightness distribution from the galaxies and randomly generate $10^5$ mock images to test how many times a “clump” is identified. We find that only 2$\pm$1 spurious clumps (in our sample of eleven clumps) could be random associations.
Next, we extract the velocity dispersion and luminosity of each clump using an isophote defining the star-forming region and report their values in Table 2 (the clump velocity dispersions have been corrected for the local velocity gradient from the galaxy dynamics and sizes are
deconvolved for the PSF). Using the velocity dispersion and star formation density of each clump, and fixing the power-law index in the Kennicutt-Schmidt relation to $n$=1.34, we compute their absolute star formation efficiencies, deriving a median $A_{\rm
clump}$=5.4$\pm$1.5$\times$10$^{-4}$ (Fig. \[fig:KS\]). This corresponds to an offset (at fixed $n$) from the galaxy-average of $A_{\rm clump}$/$A$=1.3$\pm$0.4. Equivalently, if we fix the absolute star formation efficiency to that of the galaxy-average, then the Toomre parameter in these regions is $Q$=0.8$\pm$0.4.
------------------ ------------------------ ------------------------ ----------------------- ---------------
Galaxy SFR $\sigma_{\rm H\alpha}$ \[N[ii]{}\]/H$\alpha$ $r_{\rm h}$
(M$_{\odot}$yr$^{-1}$) (kms$^{-1}$) (kpc)
SHiZELS-7 0.5$\pm$0.1 40$\pm$10 0.07$\pm$0.03 0.8$\pm$0.2
SHiZELS-7 1.3$\pm$0.1 61$\pm$12 0.34$\pm$0.03 1.0$\pm$0.2
SHiZELS-8 2.0$\pm$0.1 79$\pm$10 0.36$\pm$0.03 0.7$\pm$0.2
SHiZELS-8 1.6$\pm$0.2 95$\pm$14 0.26$\pm$0.04 0.8$\pm$0.2
SHiZELS-8 1.9$\pm$0.1 140$\pm$20 0.21$\pm$0.04 0.9$\pm$0.2
SHiZELS-9 2.1$\pm$0.2 97$\pm$15 0.31$\pm$0.04 0.7$\pm$0.2
SHiZELS-9 2.3$\pm$0.1 80$\pm$10 0.26$\pm$0.03 1.3$\pm$0.2
SHiZELS-9 0.9$\pm$0.1 86$\pm$14 0.40$\pm$0.03 $< $0.7
SHiZELS-14 0.5$\pm$0.1 56$\pm$12 0.12$\pm$0.04 0.9$\pm$0.2
SHiZELS-14 1.1$\pm$0.2 121$\pm$20 0.24$\pm$0.03 $< $0.7
SHiZELS-14 0.2$\pm$0.1 100$\pm$25 $-$0.03$\pm$0.05 0.9$\pm$0.3
Median 1.4$\pm$0.4 88$\pm$9 0.24$\pm$0.06 0.85$\pm$0.10
\[table:clumps\]
------------------ ------------------------ ------------------------ ----------------------- ---------------
\
Notes: Half light radius, r$_h$, is deconvolved for PSF and the velocity dispersion, $\sigma$, is corrected for local velocity gradient (see § \[sec:dynSF\]). The star formation rates (SFR) are calculated from the H$\alpha$ line luminosity using SFR$_{\rm
H\alpha}$=4.6$\times$10$^{-42}$L$_{\rm H\alpha}$.
| 0 | non_member_981 |
The Scaling Relations of Local and High-Redshift Star-Forming Regions {#sec:scaling}
---------------------------------------------------------------------
The internal kinematics and luminosities of H[ii]{} regions in local galaxies, derived from the line widths of their emission lines, have been the subject of various studies for some time [e.g. @Terlevich81; @Arsenault90; @Rozas98; @Rozas06; @Relano05]. In particular, if the large line widths of star-forming H[ii]{} regions reflect the virialization of the gas then they can be used to determine their masses. However, it is unlikely that this condition holds exactly at any time during the evolution of a H[ii]{} region due to the input of radiative and mechanical energy, principally from their ionizing stars [e.g. @Castor75]. Nonetheless, the least evolved H[ii]{} regions may well be within a factor of a few (2–3) of having their kinematics determined by their virial masses (at an early stage, the stellar ionizing luminosities are maximized whereas the mechanical energy input is minimized; @Leitherer99). In the case of H[ii]{} regions close to virial equilibrium, the use of the line-width to compute gaseous masses offers a relatively direct means to study the properties since it is independent of the small-scale structure (density, filling factor, etc.).
@Terlevich81 showed that the H$\beta$ luminosity of the most | 0 | non_member_981 |
luminous H[ii]{} regions varies as L(H$\beta$)$\propto\sigma^{4.0\pm0.8}$. This result suggests that the most luminous H[ii]{} regions are likely to be virialized, so that information about their masses, and the resultant mass-luminosity relation, could be obtained using the virial theorem (they also claimed a relation between a radius parameter and the square of the velocity dispersion $\sigma$ for H[ii]{} regions, as further evidence for virialization). However, more recent studies, in particular by @Rozas06 suggest that in super-giant H[ii]{} regions, L$\propto\sigma^{2.9\pm0.2}$ may be a more appropriate scaling (the lower exponent arises since H[ii]{} regions with the largest luminosities are generally density-bound, which means that a significant fraction of the ionizing radiation escapes and so does not contribute to the luminosity, making shallower slopes physically possible).
To investigate the scaling relations of star-forming regions, in Fig. \[fig:scaling\] we show the relations between luminosity, size and velocity dispersion of the clumps in our sample compared to Giant Molecular Clouds (GMCs) and H[ii]{} regions in the Milky Way and local galaxies [@Terlevich81; @Arsenault90; @Bordalo11; @Fuentes-Masip00; @Rozas06]. In this plot, we also include the measurements of giant star-forming regions from other high-redshift star-forming galaxies at $z\sim $1 from @Wisnioski11b, the $z\sim $1–2 galaxies from SINS [@Genzel11], and | 0 | non_member_981 |
the clumps identified in strongly lensed $z\sim
$1.5–3 galaxies from @Jones10 and @Stark08.
Despite the scatter, the radius–$\sigma$ and $\sigma$–luminosity relations of the high-redshift clumps approximately follow the same scaling relations as those locally, but extend up to $\sim $kpc scales. Indeed, including all of the data points in the fits, we derive the scaling between size ($r$), luminosity ($L$) and velocity dispersion ($\sigma$) of $$\log\left(\frac{r}{\rm kpc}\right)\,=\,(1.01\,\pm\,0.08)\,\log\left(\frac{\sigma}{\rm km\,s^{-1}}\right)\,+\,(0.8\,\pm\,0.1)
\label{eqn:rs}$$ and $$\log\left(\frac{L}{\rm erg\,s^{-1}}\right)\,=\,(3.81\,\pm\,0.29)\log\left(\frac{\sigma}{\rm km\,s^{-1}}\right)\,+\,(34.7\pm0.4)
\label{eqn:Ls}$$ Equation \[eqn:rs\] implies $\sigma\propto r$. If the clumps are self-gravitating with $\sigma\propto r$, then the virial density is constant. The relation L$\propto\sigma^{3.81\pm0.29}$ is in reasonable agreement with the early work of @Terlevich81, and steeper than that found for super-giant H[ii]{} regions in local galaxies [@Rozas06], although the large error bars (on both the local and high-redshift data) preclude any firm conclusions. Clearly a larger sample is required to confirm this result and/or to test whether the scatter in the data is intrinsic.
If the star-forming regions we have identified are short-lived, then these scaling relations effectively reflect the initial collapse conditions of the clumps as they formed, since a clump cannot evolve far from those initial conditions [e.g. @Ceverino10]. In this case, the relation between radius,
velocity dispersion and gas mass should follow $r$=$\sigma^2$/($\pi G \Sigma_{\rm disk}$) (see § \[sec:Sigma\_disk\_Sigma\_clump\]). In Fig. \[fig:scaling\] we therefore overlay contours of constant gas mass in the $r$–$\sigma$ plane, which suggest that the [*initial*]{} gas masses of the clumps are $M_{\rm
gas}^{initial}$=2$\pm$1$\times$10$^9$M$_{\odot}$, a factor $\sim $1000$\times$ more massive than the star-forming complexes in local galaxies (e.g. see also @Elmegreen09 [@Genzel11; @Wisnioski11b]). Assuming our gas mass estimates from § \[sec:dynSF\], these star-forming regions may contain as much as $\sim$10–20% of the cold molecular gas in the disk.
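The constant-gas-mass contours follow from eliminating the disk surface density: combining $r=\sigma^2/(\pi G \Sigma_{\rm disk})$ with $M_{\rm gas}=\pi\Sigma_{\rm disk}r^2$ gives $M_{\rm gas}=\sigma^2 r/G$. As a rough numerical sketch (ours, with illustrative values rather than measurements from our tables):

```python
# Sketch (ours): constant-gas-mass contours in the r-sigma plane, using
# M_gas = sigma^2 * r / G, which follows from r = sigma^2 / (pi G Sigma_disk)
# and M_gas = pi * Sigma_disk * r^2. Example values only.

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / M_sun

def clump_gas_mass(sigma_kms, r_kpc):
    """Initial gas mass (M_sun) implied by a clump's sigma and radius."""
    return sigma_kms**2 * r_kpc / G

# a kpc-scale clump with sigma ~ 90 km/s sits near the ~2e9 M_sun contour:
print(f"{clump_gas_mass(90.0, 1.0):.1e} M_sun")   # ~1.9e9 M_sun
```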
Turning to the relation between the size and luminosity of the star-forming regions, it is evident from Fig. \[fig:scaling\] that the star formation densities of the high-redshift clumps are higher than those locally. Indeed, local star-forming regions follow the scaling relation $$\log\left(\frac{L}{\rm erg\,s^{-1}}\right)\,=\,(2.91\,\pm\,0.15)\log\left(\frac{r}{\rm kpc}\right)\,+\,(32.1\,\pm\,0.3)
\label{eqn:Lr}$$ We do not have a sufficient number of objects or the dynamic range to measure both the slope and zero-point of the size–luminosity relation for the high-redshift clumps, and so instead we fix the slope to that of the local relation ($L\propto r^{2.91\pm0.15}$) and fit for the zero-point evolution, obtaining $$\log\left(\frac{L}{\rm erg\,s^{-1}}\right)\,=\,(2.91\,\pm\,0.15)\log\left(\frac{r}{\rm kpc}\right)\,+\,(33.2\,\pm\,0.4)
\label{eqn:Lr_hiz}$$ This suggests that high-redshift star-forming regions have luminosities at a fixed size that are on average
a factor 15$\pm$5$\times$ larger than those locally (see also @Swinbank09 [@Swinbank10Nature; @Jones10; @Wisnioski11b]). We note that high luminosities at fixed size have been found in local starbursts, such as in the Antennae [@Bastian06], whilst offsets of factors $\sim $50$\times$ have been inferred for star-forming regions in high-redshift galaxies [e.g. @Swinbank09; @Jones10; @Wisnioski11b].
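Since the two fits share a slope, the offset at fixed size can be cross-checked from the difference of the zero-points alone; this one-line check (ours) gives a central ratio consistent with the quoted factor within the zero-point uncertainties:

```python
# Cross-check (ours): the zero-point offset between the local and high-z
# size-luminosity fits implies a central luminosity ratio at fixed size of
ratio = 10 ** (33.2 - 32.1)
print(f"{ratio:.1f}")   # ~12.6, consistent with 15 +/- 5 given the errors
```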
The Relation between the Disk and Clump Properties {#sec:Sigma_disk_Sigma_clump}
--------------------------------------------------
It is possible to relate the properties of the clumps to the overall properties of the disk [e.g. @Hopkins12a]. For example, the velocity dispersion of the fastest-growing Jeans-unstable mode that cannot be stabilised by rotation in a gas disk is given by $$\sigma_t(R)^2\,=\,\pi\,G\,\Sigma_{\rm disk}\,R
\label{eqn:sigma_Sigma}$$ [e.g. @Escala08; @Elmegreen09b; @Dekel09; @Genzel11; @Livermore12a]. The critical density for collapse ($\rho_c$) on scale $R$ in a turbulent ISM is given by $$\rho_c\,=\,\frac{3}{4\,\pi\,R^{3}}\,M_{\rm J}\,\simeq\,\frac{9}{8\,\pi\,R^2\,G}\,\sigma_t(R)^2
\label{eqn:MJeans}$$ where $\sigma_t(R)$ is the line-of-sight turbulent velocity dispersion on scale $R$ and $M_{\rm J}$ is the Jeans mass. The critical density for collapse therefore scales as $$\rho_c(R)\,=\,\frac{9}{8R}\,\Sigma_{\rm disk}
\label{eqn:R_SigmaDisk}$$ Assuming that the cloud contracts by a factor $\simeq $2.5 as it collapses, the post-collapse surface density of the cloud is $$\Sigma_{\rm cloud}\simeq\,10\,\rho_c R\simeq \,10\,\Sigma_{\rm disk}
\label{eqn:Sigmacloud_Sigmadisk}$$ [see also @Livermore12a]. Thus, the surface density of the collapsed cloud is independent of radius and proportional to the surface density of the disk, with the normalisation set by the collapse factor and under the assumption $Q$=1. @Hopkins12b show that this model provides a reasonable fit to giant molecular clouds in the Milky Way and, further, suggest that the surface density (and hence surface brightness) of clouds should increase with the surface density of the disk.
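That the disk surface density drops out of this chain can be verified numerically; the following sketch (ours, with illustrative values) shows $\Sigma_{\rm cloud}/\Sigma_{\rm disk}=10\times 9/8=11.25\approx 10$ at every radius:

```python
# Numerical check (ours) of the chain sigma_t^2 = pi G Sigma_disk R,
# rho_c = 9 sigma_t^2 / (8 pi R^2 G), Sigma_cloud ~= 10 rho_c R.
# Illustrative values; the ratio is independent of both Sigma_disk and R.
import math

G = 4.301e-6           # kpc (km/s)^2 / M_sun
Sigma_disk = 1.1e9     # M_sun / kpc^2 (illustrative)

for R in (0.1, 1.0, 5.0):                     # kpc
    sigma_t2 = math.pi * G * Sigma_disk * R
    rho_c = 9.0 * sigma_t2 / (8.0 * math.pi * R**2 * G)
    Sigma_cloud = 10.0 * rho_c * R
    print(R, Sigma_cloud / Sigma_disk)        # -> 11.25 for every R
```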
Using our estimates of the stellar and gas masses and the spatial extent of the galaxies in our sample, we derive disk surface densities of $\Sigma_{\rm
disk}$=1.1$\pm$0.4$\times$10$^{9}$M$_{\odot}$kpc$^{-2}$, and hence expect the star-forming regions that form to have mass surface densities of $\Sigma_{\rm clump}\sim
$10$^{10}$M$_{\odot}$kpc$^{-2}$. It is instructive to compare this to the average mass surface density of the clumps. For example, assuming that their velocity dispersions are virial and adopting $M_{\rm clump}$=$C\sigma^2 r_h$/$G$, using the average velocity dispersion and size of the clumps (Table 2), we derive an average clump mass surface density of $\Sigma_{\rm
clump}$=8$\pm$2$\times$10$^9$M$_{\odot}$kpc$^{-2}$ with $C$=5 (appropriate for a uniform density sphere). Although this calculation should be considered crude as it is unclear whether the velocity dispersions we measure are virial, it is encouraging that the predicted surface | 0 | non_member_981 |
mass densities of the clumps are similar to those inferred from their velocity dispersions and sizes.
Finally, @Hopkins12b (see also @Escala08 and @Livermore12a) show that for a marginally stable disk of finite thickness, density structures on scales greater than $h$ will tend to be stabilised by rotation, which leads to an exponential cut-off of the clump mass function above $$M_0\,\simeq\,\frac{4\pi}{3}\,\rho_c(h)\,h^3\,=\,\frac{3\,\pi\,G^2}{2}\frac{\Sigma_{\rm disk}^3}{\kappa^4}
\label{eqn:M0_1}$$ or $$\frac{M_0}{M_{\odot}}\,=\,8.6\,\times\,10^{3}\left(\frac{\Sigma_{\rm disk}}{\rm 10\,M_{\odot}\,pc^{-2}}\right)^3\,\left(\frac{\kappa}{\rm 100\,km\,s^{-1}\,kpc^{-1}}\right)^{-4}
\label{eqn:M0_2}$$ This suggests that the most massive clump that can form in a disk (the “cut-off mass”) depends strongly on the disk surface density: increasing the disk surface density increases the mass of the clumps that are able to form [e.g. @Escala11]. However, there is also a competing (stabilising) factor from the epicyclic frequency, such that at a fixed radius, higher circular velocities reduce the mass of the largest clumps able to form.
Applying equation \[eqn:M0\_2\] to the Milky Way, with a cold molecular gas fraction of 10% and $f_{\sigma}$=2 [@Korchagin03], the average surface density is $\Sigma_{\rm
disk}$=35M$_{\odot}$pc$^{-2}$, and for $\kappa$=220kms$^{-1}$/8kpc [@Feast97] the cut-off mass should be $M_0\sim$10$^7$M$_{\odot}$, in good agreement with the characteristic mass of the largest Galactic GMCs [e.g. @Stark06].
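As a sanity check, equation \[eqn:M0\_2\] can be evaluated directly with the Milky Way numbers above; this sketch (ours) ignores any $f_{\sigma}$ correction and takes $\Sigma_{\rm disk}$ and $\kappa$ at face value:

```python
# Direct evaluation (ours) of the cut-off-mass scaling for the Milky Way:
# Sigma_disk = 35 M_sun/pc^2, kappa = 220 km/s / 8 kpc = 27.5 km/s/kpc.
# Any f_sigma correction is ignored in this sketch.

def cutoff_mass(Sigma_disk, kappa):
    """M_0 in M_sun; Sigma_disk in M_sun/pc^2, kappa in km/s/kpc."""
    return 8.6e3 * (Sigma_disk / 10.0) ** 3 * (kappa / 100.0) ** -4

M0 = cutoff_mass(35.0, 220.0 / 8.0)
print(f"M0 ~ {M0:.1e} M_sun")   # ~6e7, of order the 10^7 M_sun quoted
```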
How does the cut-off mass for our high-redshift sample compare to | 0 | non_member_981 |
local galaxies? For $f_{\sigma}$=2, and using the scaling relations derived in § \[sec:dynSF\] to estimate the gas mass (Table 1; $A$=3.4$\times$10$^{-4}$M$_{\odot}$yr$^{-1}$kpc$^{-2}$ and $n$=1.34), we derive a range of cut-off masses of M$_{0}$=0.3–30$\times$10$^{9}$M$_{\odot}$ (with a sample median and error of $M_{0}$=9$\pm$5$\times$10$^{9}$M$_{\odot}$). This is similar to the masses inferred for the brightest star-forming regions seen in high-resolution images of other high-redshift galaxies [@Elmegreen89; @ElmegreenD07; @Elmegreen09; @Bournaud09; @ForsterSchreiber11; @Genzel11; @Wisnioski11b], and a factor $\sim $1000$\times$ higher than the largest characteristic mass of star-forming regions in the Milky Way.
In Fig. \[fig:mcut\] we plot our estimates of the cut-off mass versus the clump star formation surface densities for the galaxies in our sample [see also @Livermore12a]. We use the H$\alpha$-derived star formation rate for each clump, corrected for galaxy reddening (note that we do not have reddening estimates for individual clumps and so we assume a factor 2$\times$ uncertainty in their star formation surface density). We also include estimates of the cut-off mass and star formation surface density from the SINS survey of $z\sim $2 galaxies from @Genzel11 (with dynamics measured from @ForsterSchreiber09 and @Cresci09), as well as measurements from the lensing samples of @Livermore12a ($z\sim
$1) and @Jones10 ($z\sim | 0 | non_member_981 |
---
abstract: 'Metapopulation theory has long assumed dispersal to be symmetric, i.e. patches are connected through migrants dispersing bi-directionally without a preferred direction. For natural populations, however, symmetry is often broken, e.g. for marine species dispersing through the transport of pelagic larvae with ocean currents. The few recent studies of asymmetric dispersal concluded that asymmetry has a distinct negative impact on the persistence of metapopulations. Detailed analysis, however, reveals that these previous studies might have been unable to properly disentangle the effect of symmetry from other, potentially confounding properties of dispersal patterns. We resolve this issue by systematically investigating the symmetry of dispersal patterns and its impact on metapopulation persistence. Our main analysis, based on a metapopulation model equivalent to previous studies but now applied to regular dispersal patterns, aims to isolate the effect of dispersal symmetry on metapopulation persistence. Our results suggest that asymmetry in itself does not imply negative effects on metapopulation persistence. For this reason we recommend investigating it in connection with other properties of dispersal rather than in isolation.'
address:
- 'University of Gothenburg, Department of Marine Ecology, Box 461, SE-405 30 Göteborg, Sweden'
- 'University of Gothenburg, Department | 0 | non_member_982 |
of Marine Ecology, Tjärnö Marine Biological Laboratory, SE-452 96 Strömstad, Sweden'
author:
- David Kleinhans
- 'Per R. Jonsson'
title: On the impact of dispersal asymmetry on metapopulation persistence
---
Connectivity matrix, dispersal network, symmetry, metapopulation viability.
Introduction
============
Many species are structured in space with dispersal and migration connecting local populations into metapopulations [@Levins69; @Hanski97]. The fundamental dynamics of metapopulations are determined by local extinction, dispersal from the local populations, and colonisation success leading to the establishment of new sub-populations [@Levins69]. Metapopulation dynamics may determine a range of ecological and evolutionary aspects including population size [@Gyllenberg92], persistence [@Roy05], spatial distribution [@Roy08], epidemic spread [@McCallum02; @Davis08], gene flow [@Sultan02], and local adaptation [e.g. @Hanski98; @Joshi01]. Much interest has focused on the effect of the spatial structure of metapopulations and how local populations are connected through dispersal. Connectivity among subpopulations is also increasingly emphasized in management and conservation, e.g. to prevent fragmentation of landscapes [@Crooks] and in the design of protected areas and nature reserves [@vanTeeffelen06].
Early models [e.g. @Levins69; @Hanski] assumed identical dispersal probability among habitat patches. The initial focus of spatially explicit metapopulation theory was to explore processes that generate spatial patterns in homogeneous landscapes [@Hanski02; @Malchow]. Later, spatially | 0 | non_member_982 |
explicit models were designed to let dispersal probability be a function of patch size or of the distance between local habitat patches [@Hanski94; @Hanski02]. One aspect of dispersal that has so far only been included implicitly in realistic models, but not studied in isolation, is asymmetry. Asymmetric dispersal is expected for many metapopulations, e.g. where dispersal is dominated by wind transport of pollen and seeds [@Nathan01], and for marine species with spores and larvae transported by ocean currents [@Wares01]. Consequently, it is important to understand how asymmetric dispersal may affect the dynamics and persistence of metapopulations, with potential implications for the design of nature reserves. Some studies have considered asymmetric dispersal [e.g. @Pulliam91; @Kawecki02; @Artzy-Randrup10] but have not analysed its effects on metapopulation viability.
In a recent contribution a conceptual model was developed to explore the effects of dispersal asymmetry on metapopulation persistence [@Vuilleumier06]. The viability of metapopulations was investigated for different dispersal patterns randomly connecting pairs of patches through either unidirectional or bidirectional dispersal routes. @Vuilleumier06 concluded that asymmetric dispersal leads to a distinct increase in the extinction risk of metapopulations. In a similar study @Bode08 investigated correlations between metapopulation viability and statistics of the dispersal network; they also found
that asymmetric dispersal links resulted in a higher extinction risk. A very recently published study investigates metapopulation viability for a selection of asymmetric dispersal patterns and supports the findings of the previous works [@Vuilleumier10].
The main objective of this study is to isolate the effect of dispersal asymmetry from other properties of the metapopulation network. Changing the degree of symmetry of a dispersal network generally also influences the number of isolated patches and other aspects of the network, such as the balance of dispersal at the individual patches [see e.g. Figure 4 in @Vuilleumier06]. Since metapopulations are known to be sensitive in particular to the density of the dispersal network [@Barabasi04Review], the existence of closed cycles of dispersal [@Armsworth02], and the hierarchy of dispersal in directed networks [@Bode08; @Artzy-Randrup10], these secondary implications could confound any effect of asymmetric dispersal. We resolve the problem by restricting our main analysis to *regular* networks.
In this paper we analyse the effect of asymmetric dispersal on metapopulation persistence in more detail, with an initial focus on regular dispersal networks. We employ models of synthetic dispersal patterns and demonstrate that asymmetric dispersal per se need not lead to an increase in metapopulation extinction
risk. The significance of our results, their consequences for general dispersal patterns, and their relation to previous works are addressed in detail in Section \[sec:discussion\].
Material and Methods
====================
For ease of discussion we focus on the metapopulation model used in previous approaches [@Vuilleumier06; @Bode08; @Vuilleumier10]. This stochastic patch occupancy model connects $N$ patches through a complex dispersal matrix; the model is detailed in Section \[sec:metapopulation-model-vuilleumier\]. Within the scope of this work the viability of metapopulations exposed to dispersal patterns with different degrees of symmetry is investigated. A consistent definition of the degree of symmetry and details on the dispersal patterns are provided in Sections \[sec:symmetry-def\] and \[sec:viab-metap-conn\].
\[sec:metapopulation-model-vuilleumier\]Metapopulation model
------------------------------------------------------------
We consider a metapopulation consisting of $N$ patches of equal quality, where, at a given time, each patch is either empty ($0$) or populated ($1$). Interactions of the patches are specified by means of the $N\times N$ connectivity matrix $D$, whose elements $d_{ij}\in\{0,1\}$ determine whether patch $j$ is connected to patch $i$ ($d_{ij}=1$) or not ($d_{ij}=0$). For ease of discussion we require $d_{ii}=0$ for all $i$, implying that patches are not connected to themselves.
Building on previous works we used a stochastic discrete time model | 0 | non_member_982 |
for a metapopulation of $N$ patches and tested metapopulation viability with respect to different connectivity matrices [@Vuilleumier06; @Bode08; @Vuilleumier10]. The model, which is attractive in its simplicity, implements dispersal through the connectivity matrix $D$. Initially all $N$ patches are populated. At each time step two events occur in succession: first, populated patches go extinct at per patch probability $e$. Subsequently, empty patches can be colonised with probability $c$ by each incoming dispersal connection from a populated patch. Newly populated patches cannot give rise to colonisation of other patches at the same time step they have been colonised.
In order to estimate the extinction risk of metapopulations the model is iterated $T$ times. If any populated patch remains after the $T^{\mbox{th}}$ iteration, the metapopulation is termed *viable*, and *extinct* otherwise. Following @Vuilleumier06 we choose the parameters $e=0.5$ and $T=1,000$, and discuss the probability of extinction of metapopulations consisting of $N=100$ patches as a function of the colonisation probability $c$.
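The update rule above can be sketched in a few lines; this is a minimal illustration of the model (our own implementation and naming, not the authors' code):

```python
# Minimal sketch (ours) of the stochastic patch occupancy model described
# above. D is the N x N connectivity matrix with D[i][j] = 1 if patch j is
# connected to patch i.
import random

def simulate(D, e=0.5, c=0.1, T=1000, rng=random):
    """Return True if the metapopulation is still viable after T steps."""
    N = len(D)
    occupied = [True] * N                  # initially all patches populated
    for _ in range(T):
        # 1) populated patches go extinct with per-patch probability e
        survived = [occ and rng.random() >= e for occ in occupied]
        # 2) empty patches are colonised with probability c per incoming
        #    link from a patch that was populated before this step, so
        #    newly colonised patches cannot colonise in the same step
        occupied = list(survived)
        for i in range(N):
            if not survived[i]:
                for j in range(N):
                    if D[i][j] and survived[j] and rng.random() < c:
                        occupied[i] = True
                        break
        if not any(occupied):
            return False                   # metapopulation extinct
    return True

# e.g. an isolated symmetric pair of patches (largest cycle of size 2)
# usually goes extinct quickly at e = 0.5:
# simulate([[0, 1], [1, 0]], e=0.5, c=0.9)
```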
\[sec:symmetry-def\]Symmetry of dispersal patterns
--------------------------------------------------
To characterise the symmetry properties of dispersal patterns, the connectivity matrix $D$ is divided into its symmetric and anti-symmetric contributions, $S$ and $A$, by defining the matrix elements
\[eq:def-a-s\] $$\begin{aligned}
s_{ij} & := & \min\left(d_{ij},d_{ji}\right)\\
| 0 | non_member_982 |
a_{ij} & := & d_{ij}-s_{ij}\quad.
\end{aligned}$$
Based on these matrices, the degree of symmetry $\gamma$ of a dispersal pattern is defined as the fraction of symmetric connections among all connections: $$\gamma:=\frac{\sum_{i,j}s_{ij}}{\sum_{i,j}a_{ij}+s_{ij}}\quad.\label{eq:def-deg-asymm}$$ Note that $1-\gamma$ is related to the asymmetry $Z$ discussed in [@Bode08].
By means of Equation \[eq:def-deg-asymm\], the symmetry properties of dispersal patterns are put on a firm footing: dispersal patterns are called *symmetric* if $\gamma=1$ and *anti-symmetric* if $\gamma=0$. Generally, connectivity matrices $D$ with intermediate $\gamma$ are neither symmetric nor anti-symmetric. We term them *asymmetric* if $\gamma<1$, corresponding to dispersal that is at least partly directed.
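The definitions of $s_{ij}$, $a_{ij}$ and $\gamma$ translate directly into code; a minimal sketch (our own naming):

```python
# Degree of symmetry gamma of a connectivity matrix D, following the
# decomposition s_ij = min(d_ij, d_ji), a_ij = d_ij - s_ij defined above.

def degree_of_symmetry(D):
    """gamma = (number of symmetric links) / (total number of links)."""
    N = range(len(D))
    s_total = sum(min(D[i][j], D[j][i]) for i in N for j in N)  # sum s_ij
    total = sum(D[i][j] for i in N for j in N)                  # sum a_ij + s_ij
    return s_total / total

# one bidirectional pair (two symmetric links) plus one one-way link:
D = [[0, 1, 0],
     [1, 0, 1],
     [0, 0, 0]]
print(degree_of_symmetry(D))   # -> 0.666..., i.e. 2 of 3 links symmetric
```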
\[sec:viab-metap-conn\]Viability of metapopulations connected through regular dispersal patterns
------------------------------------------------------------------------------------------------
![\[fig:Algorithm-examples\]Examples of connectivity matrices $D$ generated by the algorithm described in Section \[sec:viab-metap-conn\] and in \[sec:algor-gener-balanc\] for a reduced number of patches, $N=8$, and different combinations of $L/N$ and $\gamma$. Only non-zero entries are printed explicitly. For clarity, symmetric connections are denoted by ’S’ and asymmetric connections by ’A’. The colours indicate separate closed cycles of dispersal that can be identified in the matrices. While the connectivity matrices with $L/N=1$ (upper row) decompose into $2$ ($\gamma=0.0$), $3$ ($\gamma=0.5$), and $4$ ($\gamma=1.0$) clusters respectively, the clusters of all three matrices generated with $L/N=2$ (lower row) already extend to the entire metapopulation. Although the matrices displayed here are only *examples* of randomly generated matrices, this trend is representative: for instance, all simulations performed for $N=100$ and $L/N>2$ resulted in dispersal matrices with a single cluster only. Note that our results are based on much larger metapopulations consisting of $N=100$ patches.](balanced-example.eps){width="\textwidth"}
Previous works demonstrated that changes in the symmetry of dispersal patterns in particular affect the local symmetry of migrant flow, since asymmetry can result in *donor*- and *recipient*-dominated patches not present in symmetric networks [@Vuilleumier06]. In order to isolate the effect of the degree of symmetry from these secondary effects, we focus on a specific set of dispersal patterns: we restrict our main analysis to dispersal patterns with the number of dispersal connections, $L$, being an integer multiple of $N$, randomly distributed over the patches under the constraint that each patch obtains exactly $L/N$ in- and outgoing connections with a defined degree of symmetry. The random patterns considered are hence *regular*, with the connections evenly distributed over all available patches [@Artzy-Randrup10; @NetworkAnalysis]. An algorithm efficiently generating regular random dispersal patterns for small and intermediate $L/N$ and arbitrary degrees of symmetry ($\gamma$) is detailed in \[sec:algor-gener-balanc\]. Examples of random connectivity matrices generated for $N=8$ and different combinations of $L/N$ and $\gamma$ are shown in Fig.\[fig:Algorithm-examples\]. Note that the simulations use metapopulations consisting of $N=100$ patches, resulting in connectivity matrices of dimension $100\times100$.
The regular dispersal patterns we use here restrict our analysis to metapopulations in which all patches are connected at a fixed density, independent of the choice of $\gamma$. For $L/N>2$ the largest cluster extends to the entire metapopulation independent of the degree of dispersal symmetry, resulting in irreducible connectivity matrices [@Caswell01; @Bode06]. For a detailed discussion of the impact of regularity on our results we refer to Section \[sec:discussion:regul\].
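One simple way to build such a regular directed pattern, sketched below, is to superpose $L/N$ random derangements so that every patch receives exactly $L/N$ incoming and outgoing links. This is our own illustration of the regularity constraint, not the authors' algorithm (which additionally fixes the degree of symmetry $\gamma$ and is given in their appendix):

```python
# Illustrative generator (ours) for a *regular* directed dispersal pattern:
# superpose L/N random derangements. NOT the authors' algorithm, which also
# controls the degree of symmetry gamma.
import random

def regular_pattern(N, links_per_patch, rng=random, max_tries=10000):
    edges = set()
    for _ in range(links_per_patch):
        for _ in range(max_tries):
            perm = list(range(N))
            rng.shuffle(perm)
            # accept only derangements that do not repeat an existing edge
            if all(perm[j] != j and (j, perm[j]) not in edges for j in range(N)):
                edges.update((j, perm[j]) for j in range(N))
                break
        else:
            raise RuntimeError("could not place another permutation layer")
    D = [[0] * N for _ in range(N)]
    for j, i in edges:          # directed link from patch j to patch i
        D[i][j] = 1
    return D

D = regular_pattern(100, 3, rng=random.Random(1))
assert all(sum(row) == 3 for row in D)          # every in-degree  is L/N
assert all(sum(col) == 3 for col in zip(*D))    # every out-degree is L/N
```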
The viability of metapopulations exposed to these dispersal patterns was tested in the following manner: a sample of $100$ dispersal patterns connecting the $N=100$ patches was generated for each combination of $L/N=1,\ldots,10$ and $10$ different values of $\gamma$. For each of these patterns the viability of $10$ independent realisations of metapopulations was tested for different values of the colonisation probability $c$ according to the procedure outlined in Section \[sec:metapopulation-model-vuilleumier\], resulting in statistics for a total of $1,000$ simulations on $100$ randomly generated connectivity matrices for every choice
of $L/N$, $\gamma$, and $c$. For our main analysis we record the number of viable metapopulations out of the $1,000$ simulations and prepare the results for graphical analysis. The sensitivity of this test procedure and its interpretation with respect to the statistics of extinction times is discussed in Section \[sec:discussion:interpret\].
\[sec:results\]Results
======================
![\[fig:balanced\]Results on the viability of metapopulations exposed to regular dispersal patterns randomly generated by the algorithm described in Section \[sec:viab-metap-conn\] and \[sec:algor-gener-balanc\]. In the upper row the viability is plotted as a function of the effective number of connections per patch, $L/N$, and the colonisation probability $c$. At every combination of $L/N$ and $c$ the viability of $1,000$ different dispersal patterns has been investigated. Green and red squares indicate parameters where all $1,000$ patterns were viable or none were, respectively. The intermediate region, where some of the patterns were viable and others were not, is coloured yellow. The three panels present the results for different degrees of symmetry, increasing from $\gamma=0$ (anti-symmetric dispersal patterns) on the left to $\gamma=1$ (symmetric patterns) on the right-hand side. In the lower row the simulation results are presented accordingly as a function of the symmetry $\gamma$ and the colonisation probability $c$ for three different numbers of connections per patch, $L/N=2, 3$ and $5$. Only a vanishing impact of symmetry is observed for $L/N> 3$.](E50-balanced_fixed_gamma.eps "fig:"){width="\textwidth"}\
![Fig. \[fig:balanced\], lower row (caption as above).](E50-balanced_fixed_l.eps "fig:"){width="\textwidth"}
| 0 | non_member_982 |
For each scenario $(L/N,\gamma,c)$ a total of $1,000$ simulations was performed. For a straightforward statistical evaluation of the viability of metapopulations exposed to the respective conditions, the simulation results were divided into three groups, which are colour-coded in the graphical presentation of the results: if all $1,000$ simulated metapopulations went extinct or all were viable, the scenario is coloured red or green, respectively. Otherwise, i.e. if the number of extinct simulations out of $1,000$ is greater than $0$ but smaller than $1,000$, the scenario is coloured yellow.
The results are illustrated in Fig. \[fig:balanced\]. The three panels in the upper row show the viability of the metapopulation as a function of the number of dispersal connections per patch, $L/N$, and the colonisation probability $c$ for different values of $\gamma$: anti-symmetric dispersal ($\gamma = 0.0$), asymmetric dispersal with intermediate degree of symmetry ($\gamma = 0.5$), and symmetric dispersal ($\gamma = 1.0$). The lower panels of Fig.\[fig:balanced\] contain the same results, but now analysed with respect to the effect of the degree of symmetry, $\gamma$, for three different values of $L/N$. In fact, for $L/N > 3$ no statistically significant impact of symmetry is observed.
\[sec:discussion\]Discussion
============================
\[sec:discussion:interpret\]Interpretation and significance of results
----------------------------------------------------------------------
First of all, the results depicted in Fig. \[fig:balanced\] suggest that the impact of the degree of symmetry on metapopulation viability decreases with increasing $L/N$. Already for $L/N>3$ no statistically significant impact of the degree of symmetry, i.e. no systematic differences depending on the degree of symmetry, can be detected on the basis of the scenarios and the statistical evaluation chosen.
At a small number of dispersal connections per patch ($L/N=1,2$) metapopulation viability is significantly reduced for more symmetric dispersal (Fig. \[fig:balanced\], lower panels). The reason for this effect can be understood straightforwardly from the structure of the underlying dispersal patterns. Let us first focus on patterns with $L/N=1$. In this case a metapopulation with a symmetric dispersal pattern necessarily consists of patches connected only pairwise through dispersal (Figure \[fig:Algorithm-examples\]). The largest closed dispersal cycle (synonymous with the giant component of the dispersal network [@Berchenko09]) hence involves only two patches. For the particular metapopulation model applied, a lower bound for the extinction probability of a pair of patches per time step is $e^2$. By contrast, the mean size of the largest closed dispersal cycle estimated from the $100$ dispersal patterns generated for the same conditions but anti-symmetric dispersal ($\gamma=0$) was $62.7$. For $L/N=2$ the mean size of the largest cycles was $77.5$ for the symmetric dispersal matrices generated, whereas in the asymmetric case all dispersal matrices already extended to the entire metapopulation (i.e. their mean size was $100$). Hence we are faced with a percolation problem on random graphs [@Callaway00], where the percolation threshold depends on the symmetry properties of the dispersal pattern. Analysis of the eigenvalues of the associated state transition matrices reveals that the mean time to extinction of a set of patches participating in a closed cycle of dispersal increases with the size of the cycle. For this reason, differences in viability at small $L/N$ are attributed to hierarchical differences of the generated matrices at only a small number of connections, namely $L/N\le
3$. This density is much smaller than the relevant cases discussed e.g. in [@Vuilleumier06], as will be discussed in more detail in Section \[sec:discussion:prev\].
![\[fig:E50-itime\]Extinction statistics for metapopulations with different values of the colonisation probability $c$ connected through dispersal matrices with $\gamma=0.5$ and $L/N=4$. The individual lines indicate the number of non-extinct simulations (out of $1,000$) as a function of the simulation time. The dashed line corresponds to the upper bound for the expectation value of the number of non-extinct simulations for cases where all simulations went extinct, $1,000\exp(-6.9\times
10^{-3}\, t)$, as derived in the manuscript text. From the figure it becomes evident that the number of non-extinct simulations, after an initial relaxation phase, indeed decreases exponentially in time (i.e. linearly in this logarithmic plot). The upper bound approaches $1/M$ as $t\to T$, which is a general result for sufficiently large $M$ and $T$, as a Taylor expansion of expression \[eq:sig-inflection\] shows. For this reason the boundary line indeed marks the border between the cases marked red and yellow in Figure \[fig:balanced\].](E50-itime.eps){width=".6\textwidth"}
How meaningful is this statistical evaluation of the results with respect to the effect of the symmetry of dispersal patterns on the expected extinction times of metapopulations? To approach this question we derive lower and upper bounds for the extinction times in the red and green regions of the figures, which then help to evaluate the graphical presentation of the results in more detail. If we disregard the initial period of relaxation of the metapopulation to a quasi-stationary state, we can assume that the extinction times are exponentially distributed. This exponential distribution corresponds to a constant risk of metapopulation extinction per time step, which we call $r$. The probability that a metapopulation has not gone extinct after $T$ time steps is then $(1-r)^T$. For every combination of parameters we perform $M$ simulations, with $M=1,000$ in our case[^1]. It is then straightforward to calculate the probability $P(M|r)$ that all $M$ simulations are viable, $$\label{eq:pMr}
P(M|r)=(1-r)^{M T}\quad.$$ Accordingly the chance that a simulation goes extinct during the $T$ simulation steps is $1-(1-r)^T$, resulting in the probability $P(0|r)$ of observing $0$ viable simulations of $$\label{eq:p0r}
P(0|r)=\left(1-(1-r)^{T}\right)^M\quad.$$ More interesting, however, are the expressions $P(r|M)$ and $P(r|0)$, the probability distributions of the metapopulation extinction risk $r$ given that either all or none of the simulations are viable. These expressions can be calculated straightforwardly using Bayes’ theorem. Using uniform prior distributions we obtain $$\begin{aligned}
\label{eq:prM}
P(r|M)&=&\left(\int_0^1dr' (1-r')^{M T}\right)^{-1}(1-r)^{M T}\quad
\mbox{and}\\
\label{eq:pr0}
P(r|0)&=&\left[\int_0^1dr' \left(1-(1-r')^{T}\right)^M\right]^{-1}\left(1-(1-r)^{T}\right)^M\quad.\end{aligned}$$ Confidence intervals for $r$ can then be calculated with a maximum likelihood approach. Applying a confidence level of $95\%$, the upper bound for $r$ in cases where all simulations are viable is $5.1\times
10^{-8}$. As a lower bound for $r$ in cases where all simulations went extinct we obtain $0.057$. Since the latter
result strongly depends on the prior distribution, we instead use the inflection point of the sigmoid function at $$\label{eq:sig-inflection}
1-\left[(T-1)/(MT-1)\right]^{1/T}$$ as a more conservative estimate, which for our simulations is located at approximately $6.9\times 10^{-3}$. The inverse of $r$ corresponds to the mean time to extinction. We hence expect the mean time to extinction for the scenarios marked by red squares in Figure \[fig:balanced\] to be below $6.9^{-1}\times 10^3\approx 145$, and the respective value for the conditions marked green to be of the order of $2\times 10^7$ or larger. Intermediate values are expected for the conditions marked yellow in the individual plots. Figure \[fig:E50-itime\] demonstrates that the assumptions we needed to make indeed seem to hold and that the estimates reflect the underlying extinction statistics to a great extent.
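Both quoted numbers can be reproduced with a few lines of arithmetic. The text specifies $M=1{,}000$ simulations; the number of time steps $T$ is not restated in this excerpt, but $T=1{,}000$ reproduces both quoted values, so we assume it in this sketch:

```python
M = 1_000  # simulations per parameter combination (from the text)
T = 1_000  # time steps per simulation (assumed; not restated in this excerpt)

# All M simulations viable: largest r whose likelihood P(M|r) = (1-r)^(M*T)
# still reaches the 95% level -> upper bound on the extinction risk.
r_upper_all_viable = 1.0 - 0.95 ** (1.0 / (M * T))

# All simulations extinct: inflection point of the sigmoid
# P(0|r) = (1 - (1-r)^T)^M, used as a conservative lower-bound estimate.
r_inflection = 1.0 - ((T - 1) / (M * T - 1)) ** (1.0 / T)

# The inverse of r corresponds to the mean time to extinction.
mean_time_to_extinction = 1.0 / r_inflection
```

With these values, `r_upper_all_viable` evaluates to about $5.1\times 10^{-8}$ and `r_inflection` to about $6.9\times 10^{-3}$, matching the bounds in the text.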
The classification of the conditions into the three scenarios evidently reflects the extinction risks of the metapopulations in a meaningful way, so that Figure \[fig:balanced\] succeeds in highlighting the main results. From the bounds on the mean times to extinction derived above for the respective classes we can conclude that metapopulations in the red regions almost surely go extinct within a short time,
whereas metapopulations in the green regions are likely to be persistent. The yellow region decreases in range with increasing $L/N$. That is, the transition between threatened and persistent metapopulations sharpens with increasing $L/N$.
![\[fig:E50-replicate\]Extreme example of the variation in the number of viable replicates between the different samples (here: $\gamma=0.3$, $L/N=10$, $c=0.15$). In particular, sample $98$ deviates strongly from the general mean. Since we can assume that the main source of variation is the stochastic simulation procedure rather than qualitative differences between the random dispersal patterns, we do not investigate the within-sample variations further within the scope of this work.](E50-replicate.eps){width=".5\textwidth"}
The $10$ replicate simulations performed for each parameter set and each dispersal pattern additionally allow us to investigate and discuss the variability within the sample of $100$ dispersal patterns. In the regions marked red and green, by definition all samples show the same behaviour. Detailed analysis of the yellow regions shows only very few cases of large variability in the number of extinct replicates between the samples. One example of rather high variability is depicted in Figure \[fig:E50-replicate\]. Overall, the differences between the random dispersal patterns generated for each scenario do not seem to
be relevant for the present study, which is probably due to the decision to use regular dispersal patterns.
\[sec:discussion:regul\]Impact of regularity on the results
-----------------------------------------------------------
So far we have focused on regular dispersal patterns. This approach made it possible to investigate the impact of the degree of symmetry of connectivity matrices on metapopulation viability independently of other, possibly confounding effects, which is important in order to assess the role of dispersal symmetry for metapopulations. Our results on regular dispersal patterns show a remarkably small effect of symmetry ($\gamma$) on the viability of metapopulations at intermediate and high densities of dispersal paths, $L/N$. At low $L/N$, symmetric dispersal even has a slightly negative effect on viability. How do these results relate to the more general case where the dispersal network is not regular?
![\[fig:nonreg\]Results on the viability of metapopulations exposed to general dispersal patterns randomly generated by a modification of the algorithm described in \[sec:algor-gener-balanc\]. The analysis and graphical presentation of the simulation results follow the procedure described in the caption of Figure \[fig:balanced\]. Please note that $L/N$ now specifies the mean number of connections per patch, while the actual number of dispersal links can vary between patches.](NR-E50-nonreg_fixed_gamma.eps
"fig:"){width="\textwidth"}\
![\[fig:nonreg\]Results on the viability of metapopulations exposed to general dispersal patterns randomly generated by a modification of the algorithm described in \[sec:algor-gener-balanc\]. The analysis and graphical presentation of the simulation results follow the procedure described in the caption of Figure \[fig:balanced\]. Please note that $L/N$ now specifies the mean number of connections per patch, while the actual number of dispersal links can vary between patches.](NR-E50-nonreg_fixed_l.eps "fig:"){width="\textwidth"}
To follow up on this question we repeated the simulations accordingly, but without the constraint of regular dispersal networks. Technically this was implemented by skipping steps 4c and 4d of the pattern generation algorithm detailed in \[sec:algor-gener-balanc\], which then controls only for the desired degree of symmetry. The parameter $L/N$ should now be understood in a statistical sense: $L$ dispersal connections were randomly distributed among the $N$ patches, resulting in a mean density of $L/N$ connections per patch. The results are depicted in Figure \[fig:nonreg\]. Interestingly, the minor effect of symmetry at low densities of dispersal connections now shifts to a slight advantage for metapopulations with a symmetric dispersal pattern. For $L/N\ge 7$ no significant differences from the simulation results based on regular dispersal
patterns (Figure \[fig:balanced\]) are observed.
In non-regular dispersal patterns, the existence of isolated patches not participating in dispersal has an impact on the effective density of dispersal connections in the metapopulation [see also @Bode08]. Moreover, in the case of asymmetric dispersal there exist patches that either only receive or only provide migrants, i.e. *sinks* or *sources*, and that cannot actively take part in the metapopulation dynamics [@Artzy-Randrup10]. Since both of these effects are most pronounced at small densities of the random dispersal networks, we assume that they drive the minor differences at low $L/N$ between our results on regular dispersal and the general case of random dispersal. Arguments for *not* attributing these effects to asymmetry in dispersal but examining them separately are made in Section \[sec:conclusions\].
\[sec:discussion:prev\]Relation to previous works
-------------------------------------------------
In general our results suggest essentially no direct negative effect of asymmetric dispersal on metapopulation viability at intermediate and high densities of the dispersal network, at least as far as the stochastic patch occupancy model applied in this work is concerned. This is in contrast to the findings in [@Vuilleumier06], where it was concluded that extinction risk increased significantly when dispersal became asymmetric. The analysis in [@Vuilleumier06] is
not restricted to cases with regular dispersal only, although the relaxation of regular dispersal is not sufficient to explain the qualitative differences in the results, as shown in the previous section.
The description of the random patterns investigated in [@Vuilleumier06] does not provide all the information necessary for an in-depth comparison with our results. In [@Vuilleumier06] the number of dispersal connections was chosen randomly for each of the $2,000$ metapopulations. Additional information provided on two particular patterns suggests that the densities are comparable to or higher than those investigated in our study. From our results we therefore do not expect a significant impact of dispersal asymmetry at these densities of connections.
The analysis of the results in [@Vuilleumier06] is based on the number of connected patches, in contrast to our analysis using the global mean number of connections $L/N$. The statistics of the number of connected patches seem to differ significantly between the asymmetric and the symmetric connectivity matrices investigated, a phenomenon we were not able to reproduce. In particular, the example of a symmetric random pattern with more than $85$ connections per patch but only $96$ connected patches raises questions, since the largest cycle of closed dispersal in the
non-regular connectivity matrices we generated always extended to at least $99$ patches for densities above $7$ connections per patch, with a strong trend towards $100$ patches with increasing density. For this reason we assume that the effects described in [@Vuilleumier06] originate from differences in network topology between the investigated connectivity matrices rather than from differences in dispersal asymmetry.
@Bode08 investigated the same metapopulation model as the present work in a slightly different setup ($N=10$, $e=0.4$, and $L/N=2.6$). Instead of simulating individual realisations, the probability of metapopulations going extinct within $100$ time steps was calculated numerically for different dispersal patterns. This method restricts the analysis to rather small metapopulations of $10$ patches. Extinction probabilities were calculated for metapopulations connected through different dispersal patterns generated by the small-world algorithm [see e.g. @Watts98; @Kininmonth09], initiated with a particular symmetric dispersal pattern (Bode, pers. communication). @Bode08 concluded from a qualitative graphical analysis of their simulation results[^2] that asymmetry reduces persistence and poses a distinct threat to metapopulations.
The discussion of our results in Section \[sec:discussion:interpret\] relates our graphical analysis to the extinction probability within a certain number of time steps[^3], which allows for a comparison of the results. From additional simulation data we received
from Bode it seems that the negative effect in their approach is larger than what we would expect from our simulations for the general, non-regular case (Section \[sec:discussion:regul\]). Additional simulations performed for metapopulations likewise subjected to non-regular dispersal patterns, but reduced to a size of $10$ patches, indicated a general increase in the probability of extinction but no significant impact of metapopulation size on the effect of symmetry. We therefore assume that the differences related to symmetry observed by @Bode08 are partly owed to the fact that the patterns in their study were generated from a particular symmetric starting configuration of the small-world algorithm, and that the similarity of patterns to this starting configuration correlates with their symmetry properties.
Recently, another work was devoted to the effect of asymmetry on metapopulation viability [@Vuilleumier10]. This work aims to cover different aspects of asymmetry simultaneously, which makes it difficult to ascribe the variety of observed effects to particular properties of the dispersal matrices. One configuration, however, seems to be equivalent to the simulations we performed for general dispersal matrices in Section \[sec:discussion:regul\] for anti-symmetric and symmetric dispersal, respectively [@Vuilleumier10 p. 229, Fig. 2, right column]. The results the authors obtain on these
patterns agree with our observation that the degree of symmetry of dispersal matrices has no significant impact on metapopulation viability at intermediate densities of dispersal connections [cp. @Vuilleumier10 p. 213, Fig. 6, difference between the plots in the right column].
\[sec:conclusions\]Conclusions
==============================
We investigated the consequences of the symmetry of dispersal patterns on the viability of metapopulations. Our analyses are based on simulations of a stochastic patch occupancy model.
First we define the degree of dispersal symmetry, $\gamma$, based on the symmetry of the connectivity matrix (Equation \[eq:def-deg-asymm\]). In order to minimise possibly confounding effects we restrict our main analysis to regular dispersal patterns, for which asymmetry affects neither the homogeneity of dispersal nor the local balance of incoming and outgoing dispersal connections. For these patterns we do not see any negative effect of dispersal asymmetry. For the more general case of non-regular dispersal patterns, minor negative effects of asymmetric dispersal on metapopulation viability are confirmed, but only at rather low densities of dispersal (cp. Section \[sec:discussion:regul\]). At these densities, differences in dispersal symmetry are generally accompanied by other hierarchical differences of the dispersal network. This e.g. becomes evident from a neat example
---
abstract: 'Moment approximation methods are gaining increasing attention for their use in the approximation of the stochastic kinetics of chemical reaction systems. In this paper we derive a general moment expansion method for any type of propensities and which allows expansion up to any number of moments. For some chemical reaction systems, more than two moments are necessary to describe the dynamic properties of the system, which the linear noise approximation (LNA) is unable to provide. Moreover, also for systems for which the mean does not have a strong dependence on higher order moments, moment approximation methods give information about higher order moments of the underlying probability distribution. We demonstrate the method using a dimerisation reaction, Michaelis-Menten kinetics and a model of an oscillating p53 system. We show that for the dimerisation reaction and Michaelis-Menten enzyme kinetics system higher order moments have limited influence on the estimation of the mean, while for the p53 system, the solution for the mean can require several moments to converge to the average obtained from many stochastic simulations. We also find that agreement between lower order moments does not guarantee that higher moments will agree. Compared to stochastic simulations our approach is numerically | 0 | non_member_983 |
highly efficient at capturing the behaviour of stochastic systems in terms of the average and higher moments, and we provide expressions for the computational cost for different system sizes and orders of approximation. We show how the moment expansion method can be employed to efficiently quantify parameter sensitivity. Finally we investigate the effects of using too few moments on parameter estimation, and provide guidance on how to estimate whether the distribution can be accurately approximated using only a few moments.'
author:
- Angelique Ale
- Paul Kirk
- 'Michael P.H. Stumpf'
title: A general moment expansion method for stochastic kinetic models
---
[^1]
[^2]
Introduction
============
Cellular behaviour is shaped by molecular processes that can be described by systems of chemical reactions between different molecular species. At the macroscopic scale, the dynamics of these processes are frequently described in terms of mean concentrations of species using deterministic mass action kinetics (MAK). The deterministic solution, however, does not always capture the essential kinetics of the chemical system accurately, because it excludes stochastic effects.
The stochastic kinetics of chemical reaction systems are captured by the chemical master equation (CME), a probabilistic description of the system (also known as Kolmogorov’s forward equation) [@Gardiner2009; | 0 | non_member_983 |
@Kampen:1992aa]. The CME is a set of differential or difference equations that capture how the probability over the states (e.g. the abundances of the different molecular species) evolves over time. The CME offers an exact description of a system’s dynamics but can only be solved analytically for very simple systems. Exact single realisations of the CME, corresponding e.g. to observing processes inside a single cell, can be obtained numerically using for example Gillespie’s Stochastic Simulation Algorithm (SSA) [@Gillespie1976]. The SSA is a discrete simulation method that proceeds by randomly selecting a reaction that occurs at each subsequent time step, according to the probability of that reaction occurring next. This method is associated with considerable computational cost, that increases dramatically with the size of the model, which can make it infeasible to comprehensively characterize large systems through simulations.
A number of numerical methods have been developed to approximate the solution of the CME for more complex systems, including methods that approximate the CME by describing the probability distribution in terms of its moments [[@Gillespie2009; @Gomez2007; @Lee2009; @Singh2010; @Singh2006; @Ruess2011; @Goutsias2007; @Barzel2012]]{}. When only the mean (the first moment) is taken into account, the moment expansion reduces to the MAK description. In | 0 | non_member_983 |
the linear noise approximation (LNA), the CME is approximated by taking into account the mean (the first moment), and the variance and covariance (second central moments) of the distribution, whereby the second central moments are decoupled form the mean[@Komorowski2009; @Komorowski2010; @Pahle2012; @Wallace2012]. This is valid in the limit of large volumes and molecule numbers, or when the system consists of first-order reactions only.
For smaller systems and more complex reactions, a number of different approaches have been developed that aim to capture the temporal changes in coupled moments. These expansions can be performed in terms of the moments or the central moments about the mean; the conversion from molecule numbers can be kept implicit in the rate parameters or made more explicit from the outset. In these approaches the expansion is done in terms of central moments up to third order for first- and second-order rate equations, and up to second order for general rate equations. In a recent paper [@Grima2012] it was shown that expansion up to three moments tends to deliver more accurate results than expansion up to only second order; in particular the variances are improved by going to higher moments. While the MFK, 2MA and 3MA approaches include
expressions for the first, second and third central moments, no general formulation exists in these frameworks to generate higher order central moments in an automatic way.
In this paper we derive a general method for expanding the CME in terms of its central moments about the mean, which does not make extraneous assumptions about the form of the reactions, and which can be evaluated up to any number of moments; in our exposition we follow the notation used in @Gillespie2009. The new method described here non-trivially generalizes the work of Gomez et al. [@Gomez2007]: in particular we are able to generate higher central moments of arbitrary order automatically and in a computationally efficient manner; combining computer algebra systems with numerical simulation engines allows us to tackle problems that stymie e.g. the linear noise approximation or conventional (3rd order) MFK approaches. The interplay between noise and non-linear dynamics can give rise to very complicated behaviour, and only a handful of systems have been considered. The moments of the distribution described by the CME reflect this intricate relationship between stochasticity and non-linearity, and we spend some time discussing this for a typical non-linear biomolecular system. Thus, while going to arbitrary moments is straightforward
in our framework, we focus our discussion on the lower (up to sixth order) moments.
Our expansion is based on two successive Taylor expansions that allow us to express the time evolution of the moment generating function in a computationally convenient form, and we truncate the system by setting higher order terms in the Taylor expansions to zero. The outlined method could fit into a general framework for parameter inference based on maximum entropy distributions derived from the calculated moments. We furthermore demonstrate that parameter sensitivity analyses may be performed naturally and efficiently in this framework: we can consider the rate of change of the moments with respect to the parameters, which also allows us to study the factors underlying cell-to-cell variability. We illustrate the general method using three molecular reaction systems: a simple dimerisation reaction, which allows for a detailed investigation; Michaelis-Menten enzyme kinetics; and the p53 system, an oscillating tumour suppressor system [@Batchelor:2009hk].
Moment expansion method
=======================
We consider a system with $N$ different molecular species $(X_1,...X_N)$ that are involved in $L$ chemical reactions with reaction rates $k_l$, $$\begin{aligned}
\underbar{s}_1X_1+...+\underbar{s}_NX_N \xrightarrow{k_l} \bar{s}_1X_1+...+\bar{s}_NX_N,\end{aligned}$$ with $\underbar{s}_i$ and $\bar{s}_i$ the number of molecules of type $X_i$ before and after the reaction, respectively. The | 0 | non_member_983 |
time evolution of the system’s state is described by the chemical master equation (CME), $$\begin{aligned}
\frac{dP(\textbf{x})}{dt}=\sum_{l}\left[P(\textbf{x}-\textbf{s}_l)a_l(\textbf{x}-\textbf{s}_l)-P(\textbf{x})a_l(\textbf{x})\right],\end{aligned}$$ in which $a_l(\textbf{x})$ are the propensity functions, with $a_l(\textbf{x})dt$ defined as the probability of reaction $l$ occurring in an infinitesimal time interval $dt$ when the number of molecules in the system is $\textbf{x}$, $P(\textbf{x})$ the probability that the system contains $\textbf{x}$ molecules, and $\textbf{s}_{l}=\bar{\mathbf{s}}_{l}-\underbar{\textbf{s}}_{l}$.
We start the derivation of the moment expansion method by deriving a moment generating function from the CME. In general, the moment generating function of a random variable **x** is defined as [@Gardiner2009] $$\begin{aligned}
m(\theta,\mathbf{x})=\sum_{\textbf{x}}e^{\theta \textbf{x}}P(\textbf{x}).\end{aligned}$$ The moments, $\left<\textbf{x}^\textbf{n}\right>$, with $\textbf{x}^\textbf{n}=x_1^{n_1}...x_d^{n_d}$, of the probability distribution can be found by taking the $n$-th order derivatives of the moment generating function with respect to $\theta$. The first moment is, of course, equal to the mean, $\mu=\left<x\right>$. The variance, skewness (related to asymmetry of the distribution) and kurtosis (related to the chance of outliers) can be derived from the central moments about the mean, $\left<(\textbf{x}-\mu)^\textbf{n}\right>$.
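As a quick numerical sanity check (not code from the paper), the defining property of the moment generating function — that its $n$-th derivative at $\theta=0$ yields the raw moment $\left<x^n\right>$ — can be verified with finite differences for a toy univariate distribution; the probabilities below are hypothetical:

```python
import math

# Toy distribution on x in {0, 1, 2, 3}; the probabilities are hypothetical.
P = {0: 0.1, 1: 0.4, 2: 0.3, 3: 0.2}

def mgf(theta):
    """m(theta) = sum_x exp(theta * x) P(x)."""
    return sum(math.exp(theta * x) * p for x, p in P.items())

h = 1e-4  # step for central finite differences at theta = 0
first_moment = (mgf(h) - mgf(-h)) / (2 * h)                 # approximates <x>
second_moment = (mgf(h) - 2 * mgf(0.0) + mgf(-h)) / h ** 2  # approximates <x^2>
variance = second_moment - first_moment ** 2                # second central moment

mean_exact = sum(x * p for x, p in P.items())  # exact mean: 1.6
```

The finite-difference derivatives agree with the directly computed moments to within the discretisation error of order $h^2$.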
Using the CME we can write the time dependent moment generating function [@Azunre2007] $$\begin{aligned}
\frac{dm}{dt}=\sum_{l}\left[(e^{\theta \textbf{s}_l}-1)\sum_{\textbf{x}}e^{\theta \textbf{x}}P(\textbf{x})a_l(\textbf{x})\right]
\label{mgf}\end{aligned}$$ The time evolution of the mean concentration $\mu_i$ of species $X_i$ can be obtained by taking the first derivative | 0 | non_member_983 |
of Eq. \[mgf\] with respect to $\theta_i$ and subsequently setting $\theta$ to zero, $$\begin{aligned}
\frac{d\mu_{i}}{dt}=\left.\frac{d}{d\theta_i}\frac{dm}{dt}\right|_{\theta=0}= \sum_{l}s_{il}\left<a_l(\mathbf{x})\right>\end{aligned}$$ This expression can be evaluated by expanding $a_l(\textbf{x})$ in a Taylor expansion about the mean, $$\begin{aligned}
\frac{d\mu_i}{dt}=S\left[\sum_{l}\left.\sum_{n_1=0}^\infty...\sum_{n_d=0}^\infty\frac{1}{\mathbf{n!}}\frac{\mathbf{\partial^{n}}a_l(\textbf{x})}{\partial\mathbf{x^{n}}}\right|_{x=\mu}\mathbf{M_{x^{n}}}\right],
\label{eq6}\end{aligned}$$ where $S$ is the stoichiometry matrix, $$\begin{aligned}
\mathbf{\partial^{n}}=\partial^{n_1+..+n_d}\nonumber\\
\mathbf{n!}=n_1!...n_d!\nonumber\\
\mathbf{M_{x^{n}}}=M_{x_1^{n_1},...,x_d^{n_d}}\nonumber\\
\mathbf{\partial x^n}=\partial x_1^{n_1}...\partial x_d^{n_d},\nonumber
\label{mmgf}\end{aligned}$$ and we have substituted the central moments around the mean for the expected values of the expansion terms $$\begin{aligned}
M_{x_1^{n_1},...,x_d^{n_d}}=\left<(x_1-\mu_1)^{n_1}...(x_d-\mu_d)^{n_d}\right>.\nonumber
\label{mmgf}\end{aligned}$$ The first central moment $\left<\mathbf{x}-\mu\right>$ is zero. Higher order central moments can be derived from the moments, for example the covariance between $x_1$ and $x_2$ can be written as $$\begin{aligned}
\sigma_{x_1x_2}^2=\left<(x_1-\mu_{1})(x_2-\mu_{2})\right>=\nonumber\\
\left<x_1x_2\right>+\mu_1\mu_2-\left<x_1\right>\mu_2-\left<x_2\right>\mu_1.
\label{cmgf2}\end{aligned}$$ In general the relation between the central moments, $\mathbf{M_{x^{n}}}$, and the moments, $\mathbf{\mu^n}$, can be formulated as $$\begin{aligned}
\mathbf{M_{x^{n}}}=\sum_{k_1=0}^{n_1}...\sum_{k_d=0}^{n_d}\binom{\mathbf{n}}{\mathbf{k}}(-1)^{\mathbf{(n-k)}}\underbrace{\mathbf{\mu^{(n-k)}}}_{\alpha}\underbrace{\left<\mathbf{x^{k}}\right>}_{\beta},
\label{eq7}\end{aligned}$$ where $$\begin{aligned}
(-1)^{\mathbf{(n-k)}}=(-1)^{(n_1-k_1)}...(-1)^{(n_d-k_d)}\nonumber\\
\mathbf{\mu^{(n-k)}}=\mu_1^{(n_1-k_1)}...\mu_d^{(n_d-k_d)}\nonumber\\
\mathbf{x^k}=x_1^{k_1}...x_d^{k_d}\nonumber\\
\binom{\mathbf{n}}{\mathbf{k}}=\binom{n_1}{k_1}...\binom{n_d}{k_d}.\nonumber
\label{eq8}\end{aligned}$$ We obtain the time evolution equations of the central moments by taking the time derivative of Eq. \[eq7\], $$\begin{aligned}
\frac{d\mathbf{M_{x^{n}}}}{dt}=\sum_{k_1=0}^{n_1}...\sum_{k_d=0}^{n_d}\binom{\mathbf{n}}{\mathbf{k}}(-1)^\mathbf{{(n-k)}}\left[\alpha\frac{d\beta}{dt}+\beta\frac{d\alpha}{dt}\right].
\label{cmgf}\end{aligned}$$ The term $\alpha$ makes the time evolution of the central moments a function of Eq. \[eq6\], $$\begin{aligned}
\frac{d\alpha}{dt}=\sum_{i=1}^N(n_i-k_i)\mu_i^{-1}\alpha\frac{d\mu_i}{dt}\end{aligned}$$ The term $\beta$ gives rise to mixed moments, and the derivative of $\beta$ with respect to time yields the time evolution equations for the mixed moments. Therefore we also need to | 0 | non_member_983 |
include the time derivatives of the mixed moments in our system of equations. We obtain the time derivative of the term $\left<\mathbf{x^{k}}\right>$ by taking higher order derivatives of the moment generating function Eq. \[mgf\], resulting in $$\begin{aligned}
\frac{d\beta}{dt}=\sum_{e_1=0}^{k_1}...\sum_{e_d=0}^{k_d}\mathbf{s^{e}}\binom{\mathbf{k}}{\mathbf{e}}\underbrace{\left<\mathbf{x^{(k-e)}}a(x) \right> }_{\left<F\right>},\end{aligned}$$ where $$\begin{aligned}
\mathbf{s^e}=s_{1}^{e_1}...s_{d}^{e_d}\nonumber\\
\mathbf{x^{(k-e)}}=x_1^{k_1-e_1}...x_d^{k_d-e_d}.\nonumber
\label{mmgf}\end{aligned}$$ By expanding the individual terms of the resulting expressions in a second Taylor expansion, the time evolution of the mixed moments becomes a function of the central moments; the time evolution of the central moments remains a function of the central moments alone, $$\begin{aligned}
\left<F\right>=\left.\sum_{n_1=0}^\infty...\sum_{n_d=0}^\infty\frac{1}{\mathbf{n!}}\frac{\mathbf{\partial^{n}}F(x)}{\mathbf{\partial x^{n}}}\right|_{x=\mu}{\mathbf{M_{x^{n}}}}.\end{aligned}$$
When the model under investigation is non-linear, each central moment will depend on a higher order central moment, which may itself also depend on higher order moments; hence the number of equations we would need to include is in principle infinite. To overcome this, we can obtain a closed set of equations by evaluating the time evolution equations for $\nu$ moments and truncating the Taylor series after the $\nu$th order, thereby setting all higher order central moments equal to zero. By truncating the Taylor expansion (i.e. setting terms of the Taylor expansion corresponding to $\sum n_i>\nu$ to 0), the equations depend only on the central moments up to
the selected order $\nu$. Alternatively, the set of equations could be closed using moment closure techniques based on common expressions for well known probability distributions [@Milner2011]. In this paper we use truncation as well as closure based on a Gaussian distribution.
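The conversion between raw and central moments in Eq. \[eq7\] can be sketched in the univariate case; this is an illustrative check against directly computed sample central moments, not code from the paper:

```python
import math
import random

def central_from_raw(raw):
    """Univariate version of Eq. (7): M_n = sum_k C(n,k) (-1)^(n-k) mu^(n-k) <x^k>,
    where raw[k] = <x^k> is the k-th raw moment (raw[0] = 1)."""
    mu = raw[1]
    return [sum(math.comb(n, k) * (-1) ** (n - k) * mu ** (n - k) * raw[k]
                for k in range(n + 1))
            for n in range(len(raw))]

# Draw a Gaussian sample and compare the binomial transform of its raw
# moments with the directly computed central moments.
random.seed(1)
sample = [random.gauss(2.0, 0.5) for _ in range(10_000)]
N = len(sample)
raw = [sum(x ** k for x in sample) / N for k in range(5)]   # <x^0> .. <x^4>
central = central_from_raw(raw)

mu = raw[1]
direct = [sum((x - mu) ** n for x in sample) / N for n in range(5)]
```

The two routes agree to floating-point precision, and the first central moment comes out as zero, as stated in the text.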
Results
=======
We illustrate the approach in a range of applications that serve to highlight both the basic properties of the moment expansion method as well as the advantages this method offers in situations where other methods [@Ito2010; @Komorowski2012; @Wallace2012] tend to fail.
Dimerisation
------------
We first illustrate the moment approximation method using a simple dimerisation reaction [@Wilkinson2012]. The system describes the reversible formation and disintegration of a dimeric molecule, $$\begin{aligned}
X_1+X_1 \xrightleftharpoons[k_2]{k_1} X_2.\end{aligned}$$ The system can be written in terms of two propensities $$\begin{aligned}
\begin{array}{cc}
a_1: k_1x_1(x_1-1);&\hspace{7mm} a_2: k_2x_2,
\end{array}\end{aligned}$$ and the stoichiometry matrix $$\begin{aligned}
S=\left[\begin{array}{cc}
-2 & 2\\
1 & -1\end{array} \right],\end{aligned}$$ where the columns correspond to reactions and the rows to variables.
The exact kinetics of the system can be straightforwardly simulated using the Stochastic Simulation Algorithm (SSA) [@Gillespie1976]. One realisation of the SSA is equivalent to observing the kinetics of the system inside a single cell (Figure \[dimer1\]a), whereas the average over many realisations | 0 | non_member_983 |
mimics the observation of the average kinetics for a large number of cells (Figure \[dimer1\]b, n=100,000). The system reaches a stationary state after about $4$ seconds.
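A minimal SSA sketch for this dimerisation system, using the propensities and stoichiometry above and the initial condition $\mathbf{x}=[301,0]$, $\mathbf{k}=[1.66\cdot10^{-3},0.2]$ from Figure \[dimer1\]; the end time and number of runs are our own choices for this sketch:

```python
import random

# Rates from Figure [dimer1].
k1, k2 = 1.66e-3, 0.2

def ssa_final(x1, x2, t_end, rng):
    """One Gillespie realisation; returns the state (x1, x2) at time t_end."""
    t = 0.0
    while True:
        a1 = k1 * x1 * (x1 - 1)   # 2 X1 -> X2   (column 1 of S)
        a2 = k2 * x2              # X2 -> 2 X1   (column 2 of S)
        a0 = a1 + a2
        if a0 == 0.0:
            return x1, x2         # no reaction can fire any more
        t += rng.expovariate(a0)  # exponential waiting time to the next event
        if t > t_end:
            return x1, x2
        if rng.random() * a0 < a1:
            x1, x2 = x1 - 2, x2 + 1
        else:
            x1, x2 = x1 + 2, x2 - 1

rng = random.Random(0)
final_states = [ssa_final(301, 0, 10.0, rng) for _ in range(200)]
mean_x1 = sum(x1 for x1, _ in final_states) / len(final_states)
```

Averaging many such realisations (panel b of Figure \[dimer1\] uses $100{,}000$) approximates the mean trajectory; a few hundred runs already suffice to see the stationary level, and every realisation conserves $x_1 + 2x_2 = 301$.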
![\[\]Study of the dimerisation system, initial values $\mathbf{x}=[301,0]$, parameters $\mathbf{k}=[1.66\cdot10^{-3}, 0.2]$. a) Single SSA realisation. b) Average of 100,000 SSA simulations. c-d) Histogram of SSA runs (grey bars) and probability density of the normal distribution (blue line) calculated from the mean and variance of the SSA runs corresponding to points c and d in figure (b). e) Mean for both variables, calculated using SSA, the moment approximation using 1 moment (deterministic) and two central moments (2m). f) Variance of $x_1$ calculated using SSA, two central moments (2m), three central moments with Gaussian closure (3m) and four central moments (4m). g) Third central moment calculated using SSA and the moment approximation method. h) Fourth central moment calculated using SSA and the moment approximation method.[]{data-label="dimer1"}](Figure1){width="47.00000%"}
In Fig. \[dimer1\]c-d we plotted histograms of the SSA at two time points that correspond to different regimes in the trajectory: transient state with decreasing molecule number (Fig. \[dimer1\]c) and stationary state (Fig. \[dimer1\]d). We calculated the normal probability density function from the mean and variance of the SSA distribution at those two time points (blue line), | 0 | non_member_983 |
ignoring higher order moments. The normal probability density functions fit the histograms very well, which indicates that the distribution of molecule numbers is approximately normal over the time course, and higher order moments would have limited influence on describing the kinetics of the mean for this system.
Figure \[dimer1\]e shows the mean molecule numbers calculated with the moment approximation method compared to the SSA results. The results for the deterministic approximation (including only the mean) as well as the two moment approximation are approximately equal to the means calculated with the SSA. We compare the higher order central moments calculated with the general moment approximation method described above with the results from the SSA simulations in Figures \[dimer1\]f-h. In the 3–moment approximation we closed the equations using the Gaussian probability distribution (setting the fourth cumulant equal to zero [@Grima2012]), while in the other approximations we truncated higher order moments. The approximated moments are close to the exact moments calculated from the SSA, which is also clear from the errors displayed in Table \[table1\], calculated as $\epsilon=(1/N)\sum_{n} \sqrt{((M_{SSA}-M_{ma})/M_{SSA})^2}$ with $M_{SSA}$ the moment or central moment calculated based on the SSA, $M_{ma}$ the corresponding value calculated with the moment approximation, and $N$ the
number of time points taken into account. The larger error for the mean when using the deterministic approach is due to small differences in the first part of the trajectory, the decreasing part, where the contributions of the higher order central moments are largest. The larger errors calculated for the third central moment are due to the fluctuations that are still present in the third central moment trajectory calculated from 100,000 SSA runs; increasing the number of SSA runs would reduce this effect.
$\epsilon [\%]$ deterministic 2m 3m 4m 5m 6m
----------------- --------------- -------- -------- -------- -------- --------
$\mu_1$ 0.300 0.0545 0.0546 0.0546 0.0546 0.0546
$M_{x_1^2}$ 0.961 0.803 0.804 0.801 0.803
$M_{x_1^3}$ 18.3 18.1 18.2 18.2
$M_{x_1^4}$ 2.39 1.33 1.83

: \[table1\]Error between the mean and the second, third and fourth central moments calculated with the SSA and with the moment approximation methods for the dimerization system. Empty cells correspond to moments that the given approximation order does not provide.
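Since $\sqrt{z^2}=|z|$, the error metric $\epsilon$ used in Table \[table1\] is simply the mean absolute relative deviation over the time points. A minimal sketch (the function name is ours):

```python
import numpy as np

def relative_error(m_ssa, m_ma):
    """Mean absolute relative error between SSA-based moments and the
    corresponding moment-approximation values over all time points."""
    m_ssa = np.asarray(m_ssa, dtype=float)
    m_ma = np.asarray(m_ma, dtype=float)
    return np.mean(np.abs((m_ssa - m_ma) / m_ssa))
```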
Michaelis-Menten enzyme kinetics
--------------------------------
We next look at Michaelis-Menten enzyme kinetics, where an enzyme, $E$, and substrate, $S$, first bind to form a complex, $SE$; following this, the complex can dissociate into the original components $S$ and $E$, or $S$ can be converted into the product, $P$, $$\begin{aligned}
S+E \xrightarrow{k_1} SE\nonumber\\
SE \xrightarrow{k_2} S+E\\
SE \xrightarrow{k_3} P+E\nonumber\end{aligned}$$ The system | 0 | non_member_983 |
is often reduced to a system of two variables ($S$ and $P$) [@Wilkinson2012], with three reaction propensities $$\begin{aligned}
\begin{array}{c}
a_1: k_1S[E(0)-S(0)+S+P];\nonumber\\
a_2: k_2[S(0)-(S+P)];\nonumber\\
a_3: k_3[S(0)-(S+P)]
\end{array}\end{aligned}$$ and the stoichiometry matrix $$\begin{aligned}
S=\left[\begin{array}{ccc}
-1 & 1 & 0\\
0 & 0 & 1\end{array} \right].\end{aligned}$$
![\[\]Study of Michaelis-Menten kinetics, with parameters $\textbf{k}=[1.66\cdot10^{-3},1\cdot10^{-4},0.01]$, and initial conditions $S(0)=301$, $P(0)=0$, and $E(0)=120$. (a) Single SSA realisation. (b) Trajectories calculated using the moment approximation including only the mean (deterministic). (c-d) Variance of S and covariance between S and P calculated using SSA and approximation using 2 moments. (e) Skewness of S calculated using SSA and approximation with 3 moments. (f) Kurtosis calculated using SSA and approximations up to 4 and 6 moments. []{data-label="mmfigure1"}](Figure2){width="48.00000%"}
One trajectory calculated with Gillespie’s Stochastic Simulation Algorithm is shown in Figure \[mmfigure1\]a, and the mean calculated by solving the ODE system using the deterministic representation of the system is displayed in Figure \[mmfigure1\]b. For this system the deterministic representation is very close to the stochastic solution. The variance of the substrate and the covariance between the substrate and the product (Figure \[mmfigure1\]c-d) calculated based on 100,000 SSA runs can be closely approximated using the moment approximation method expanded up to two moments. Figure | 0 | non_member_983 |
\[mmfigure1\]e shows the skewness of the distribution over time, calculated as $$\begin{aligned}
\gamma=\frac{\left<(x_1-\mu_1)^3\right>}{\sigma_{11}^3}=\frac{\left<(x_1-\mu_1)^3\right>}{\left<(x_1-\mu_1)^2\right>^{3/2}}.\end{aligned}$$ For a normal distribution the skewness is zero. The skewness is approximated well using the moment approximation method up to three moments. The kurtosis, given by $$\begin{aligned}
\gamma_2=\frac{\left<(x_1-\mu_1)^4\right>}{\sigma_{11}^4}=\frac{\left<(x_1-\mu_1)^4\right>}{\left<(x_1-\mu_1)^2\right>^{4/2}},\end{aligned}$$ indicates the thickness of the tails of the probability distribution, relating to the probability of outliers. For a normal distribution, the kurtosis is equal to 3. When four moments are used to approximate the system, the kurtosis does not match the SSA estimate as closely as when higher moments, here six, are included in the calculation. This illustrates that agreement between lower-order moments does not guarantee that higher-order moments will also agree. This problem is likely exacerbated for more complex models, e.g. those exhibiting non-linear dynamics.
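The skewness and kurtosis defined above can be estimated directly from a set of SSA samples at a fixed time point. A minimal sketch (illustrative helper names):

```python
import numpy as np

def skewness(samples):
    """Sample skewness: third central moment over variance^(3/2)."""
    x = np.asarray(samples, dtype=float)
    d = x - x.mean()
    return np.mean(d ** 3) / np.mean(d ** 2) ** 1.5

def kurtosis(samples):
    """Sample kurtosis: fourth central moment over variance^2
    (equals 3 for a normal distribution)."""
    x = np.asarray(samples, dtype=float)
    d = x - x.mean()
    return np.mean(d ** 4) / np.mean(d ** 2) ** 2
```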
P53 system
----------
Finally, we investigate the oscillatory p53 system[@Geva2006], which consists of three proteins connected through a nonlinear feedback loop: p53, precursor of Mdm2 and Mdm2. The system can be written in terms of six propensities, $$\begin{aligned}
\begin{array}{cc}\vspace{2mm}
a_1: k_1;&\hspace{7mm} a_2: k_2x\\\vspace{2mm}
a_3: k_{3}\frac{xy}{x+k_{7}};&\hspace{7mm} a_4: k_4x\\
a_5: k_5y_0; &\hspace{7mm} a_6:k_6y,
\label{p53system}
\end{array}\end{aligned}$$ and the stoichiometry matrix $$\begin{aligned}
S=\left[ \begin{array}{cccccc}
1 & -1 & -1 & | 0 | non_member_983 |
0& 0& 0\\
0 & 0 & 0 & 1& -1 & 0\\
0 & 0 & 0 & 0 & 1 & -1\end{array} \right],\end{aligned}$$ where $x$ is the concentration of p53, $y_0$ the concentration of precursor of Mdm2, $y$ is the concentration of Mdm2, $k_1$ is the p53 production rate, $k_2$ is the Mdm2-independent p53 degradation rate, $k_{3}$ the saturating p53 degradation rate, $k_{7}$ is the p53 threshold for degradation by Mdm2, $k_{4}$ is the p53-dependent Mdm2 production rate, $k_5$ is the Mdm2 maturation rate and $k_6$ is the Mdm2 degradation rate.
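The deterministic (mean-only) representation of the p53 system follows from $S\,a(\mathbf{x})$ with the propensities in Eq. \[p53system\]. As an illustrative sketch (a fixed-step RK4 integrator written for self-containment, not the solver used in this work), with the parameter values and initial state quoted below:

```python
import numpy as np

# Parameters k1..k7 and initial state (x, y0, y), values from the text.
k = [90.0, 0.002, 1.7, 1.1, 0.93, 0.96, 0.01]

def rhs(state):
    x, y0, y = state
    dx = k[0] - k[1] * x - k[2] * x * y / (x + k[6])  # a1 - a2 - a3
    dy0 = k[3] * x - k[4] * y0                        # a4 - a5
    dy = k[4] * y0 - k[5] * y                         # a5 - a6
    return np.array([dx, dy0, dy])

def rk4(f, y, dt, n_steps):
    """Classical fixed-step Runge-Kutta integration."""
    traj = [np.array(y, dtype=float)]
    for _ in range(n_steps):
        y = traj[-1]
        s1 = f(y)
        s2 = f(y + 0.5 * dt * s1)
        s3 = f(y + 0.5 * dt * s2)
        s4 = f(y + dt * s3)
        traj.append(y + (dt / 6.0) * (s1 + 2 * s2 + 2 * s3 + s4))
    return np.array(traj)

traj = rk4(rhs, [70.0, 30.0, 60.0], dt=0.01, n_steps=1000)  # t = 0 .. 10 h
```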
Figure \[p53figure1\]a shows an SSA simulation of the model for parameters $\textbf{k}=$\[90,0.002,1.7,1.1,0.93,0.96,0.01\] and initial values $\textbf{x}(0)=[70, 30, 60]$, which exhibits oscillating behaviour. Because the oscillations for different realisations of the SSA (corresponding to different cells) are stochastically out of phase, the average over 100,000 stochastic simulations shows a damped oscillation, reflecting the presence of a negative feedback loop. Figures \[p53figure1\]c-h show the trajectories of the mean calculated with the moment approximation method. In the deterministic approximation, the oscillations are not damped but expanding, which would indicate a positive instead of a negative feedback loop. The LNA and the 2 moment approximation show the same effect. The mean calculated
with the LNA is always equal to the mean calculated with the deterministic approximation because the mean is not coupled to the variance. When 3 moments are included, the system shows damped oscillations, although not as damped as the SSA trajectories. By including more moments the trajectories converge to the SSA trajectories. When 6 moments are taken into account, the trajectories calculated with the moment approximation show behaviour similar to the trajectories calculated with the SSA. This is confirmed by the cumulative difference between the SSA trajectories and the trajectories calculated with the moment approximation, shown in Figures \[p53figure1\]i-j, which show a clear decrease in cumulative error for all variables when 6 rather than 2 central moments are included in the approximation. Including more moments would improve the estimation further.
We analyze the distribution of the p53 model over the time course by looking at the central moments (Figure \[p53figure2\]). The variance for the SSA is first increasing, then after about 20 hours it levels off. In Figure \[p53figure2\]b we compare the variance calculated based on the SSA with the LNA and moment approximation method. When up to five central moments are included, the variance keeps increasing | 0 | non_member_983 |
and does not level off. Only for the case of 6 moments does the variance reach a stable value, and even then the value is about three times higher than that predicted by the SSA simulations. Figure \[p53figure2\]c shows the comparison of the skewness of the SSA distribution to the skewness of a normal distribution ($\gamma=0$). For three time points where the skewness is relatively large (indicated by d, e, f) we display the histogram of the 100,000 SSA realisations together with the probability density of the normal distribution (cyan line) calculated based on the mean and variance of the SSA and the 6 moment approximation. Additionally, we plot the skew-normal distribution calculated using the mean, variance and third central moment; this is defined by the probability density function $$\begin{aligned}
f(x)=\frac{2}{\omega}\phi\left(\frac{x-\xi}{\omega}\right)\Phi\left(\alpha\left(\frac{x-\xi}{\omega}\right)\right), \end{aligned}$$ with $\phi$ the standard normal probability density, $\Phi$ its cumulative distribution function, $\xi$ a location parameter and $\omega$ a scale parameter. The shape parameter $\delta$ can be calculated from the estimated skewness using the relation $$\begin{aligned}
|\delta|=\sqrt{\frac{\pi}{2}\frac{|\hat{\gamma}|^{2/3}}{|\hat{\gamma}|^{2/3}+((4-\pi)/2)^{2/3}}}.\end{aligned}$$ The parameter $\alpha$ can then be obtained from $\delta$ via $\alpha=\delta/\sqrt{1-\delta^2}$, $\omega$ can be calculated from the variance using $\sigma^2=\omega^2(1-2\delta^2/\pi)$, and $\xi$ follows from the mean via $\mu=\xi+\omega\delta\sqrt{2/\pi}$. These plots confirm what we saw above, that using only the mean and variance does not capture the full distribution in this case,
and also including the skewness is not enough. When comparing the skewness of the p53 distribution with the skewness of the Michaelis-Menten enzyme kinetics system (Figure \[mmfigure1\]e), we find that the maximum value of the skewness for both systems is approximately equal, and in both systems the skewness does not have a large effect on the mean.
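The method-of-moments conversion from an estimated mean, variance and skewness to the skew-normal parameters $(\xi,\omega,\alpha)$ can be sketched as follows (an illustrative helper; the skew-normal family only supports $|\hat\gamma|\lesssim 0.995$):

```python
import math

def skew_normal_params(mean, var, skew):
    """Method-of-moments fit of the skew-normal parameters (xi, omega, alpha)
    from an estimated mean, variance and skewness."""
    s = abs(skew) ** (2.0 / 3.0)
    c = ((4.0 - math.pi) / 2.0) ** (2.0 / 3.0)
    delta = math.copysign(math.sqrt((math.pi / 2.0) * s / (s + c)), skew)
    alpha = delta / math.sqrt(1.0 - delta ** 2)           # shape
    omega = math.sqrt(var / (1.0 - 2.0 * delta ** 2 / math.pi))  # scale
    xi = mean - omega * delta * math.sqrt(2.0 / math.pi)  # location
    return xi, omega, alpha
```

With zero skewness the fit reduces to the normal distribution: $\alpha=0$, $\omega=\sigma$, $\xi=\mu$.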
![Analysis of distribution for p53 model. (a) Variance calculated based on SSA runs. (b) Variance calculated with SSA, LNA and moment approximation method. (c) Skewness calculated based on SSA runs (blue line) and skewness for normal distribution (cyan dashed line). (d-f) Histograms calculated based on SSA for points d, e and f in figure (c), and probability density function of normal distribution calculated using mean and variance based on SSA (cyan line).[]{data-label="p53figure2"}](Figure4){width="48.00000%"}
![\[\][]{data-label="contour1"}](Figure5){width="50.00000%"}
Parameter Sensitivity Estimation
--------------------------------
Assessing parameter sensitivity is a key concern when fitting any parametric model [@Saltelli2004; @Erguler2011]. Such analyses allow us to quantify how rapidly the outputs of our model change as we vary its parameters, which can provide insights into the robustness of the model and the relative influence that each parameter has upon the model’s behaviour. However, sensitivity analyses of stochastic models can be difficult and/or computationally | 0 | non_member_983 |
costly [@Gunawan2005; @Plyasunov2007], and often involve simulating many times in order to obtain Monte Carlo estimates of sensitivity coefficients. The development of efficient methods for stochastic sensitivity analyses has been the focus of much recent research [@Plyasunov2007; @Komorowski2012b; @Sheppard2012].
In the context of our proposed approach, a natural and straightforward way to assess parameter sensitivity is to consider the rate at which the moments vary with the parameters. This motivates the calculation of simple sensitivity coefficients [@Varma1999; @Saltelli2004] of the form $s_{ij}(t) = \frac{\partial m_i(t, \boldsymbol{\theta})}{\partial \theta_j}$, where $m_i$ is the (estimated) $i$-th moment and $\theta_j$ is the $j$-th parameter. Within our moment approximation framework, the $s_{ij}$’s may either be estimated by perturbing the model’s parameters and computing a finite difference approximation, or obtained automatically by employing the CVODES solver of @Serban2003 when solving the system of ODEs (Equations and ). In Figure \[sens-fig01\], we reconsider the dimerisation model of Section III A. We focus upon the sensitivity of the mean and 2nd and 3rd central moments of the two molecular species to the parameter $k_1$ (similar results are obtained for the sensitivity to $k_2$). Figures \[sens-fig01\]a, d and g show how the moments estimated from 100,000 SSA runs vary | 0 | non_member_983 |
when we increase the original value of $k_1$ by 10 percent. Figures \[sens-fig01\]b, e and h show the same for the moments estimated using our proposed approach with 6 central moments (6m). Figures \[sens-fig01\]c, f and i show sensitivity coefficients estimated from both the SSA and 6m outputs using a finite difference approximation (in the 6m case, the sensitivity coefficients may instead be obtained automatically using the CVODES solver, which yields identical results). There is generally good agreement between the coefficients estimated using the two different approaches. However, as we consider higher moments, our ability to assess sensitivity using the SSA output rapidly diminishes, since the variability caused by the change in the parameter value is overwhelmed by the variability in the estimator due to finite sample size. This may be rectified by increasing the number of SSA simulations, but at considerable computational cost. In contrast, the sensitivity coefficients associated with higher moments may still be straightforwardly calculated using the moment expansion approach (although, of course, care must be taken to ensure that appropriately many moments have been taken into account by the approximation — see Section III E).
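The finite-difference route to the sensitivity coefficients $s_{ij}$ can be sketched as follows (a central-difference variant with a relative step; the CVODES route is not shown, and the function names are ours):

```python
import numpy as np

def sensitivity(moment_fn, theta, j, rel_step=1e-4):
    """Central finite-difference estimate of s_ij = d m_i / d theta_j.

    moment_fn maps a parameter vector to an array of (estimated) moments;
    theta[j] is assumed nonzero so that a relative step can be used.
    """
    h = rel_step * abs(theta[j])
    tp = np.array(theta, dtype=float)
    tm = tp.copy()
    tp[j] += h
    tm[j] -= h
    return (moment_fn(tp) - moment_fn(tm)) / (2.0 * h)
```

In practice `moment_fn` would solve the moment ODE system for the perturbed parameters and return the moment trajectories at the time points of interest.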
Simple Heuristics for Moment Expansions
---------------------------------------
Our results for the p53 | 0 | non_member_983 |
system clearly demonstrate that failure to take a sufficient number of moments into account can lead to incorrect conclusions and biased parameter estimates. Ideally we would like to know from the outset whether a deterministic approach or a two moment approximation is sufficient to capture the general statistical behaviour of the stochastic system. Without recourse to a large number of SSA runs it is impossible to predict the statistical properties of the solutions of non-linear stochastic systems; yet performing large numbers of SSA simulations is often not feasible, so we need a different approach. We should revisit the assumption made at the beginning of our derivation, namely that the propensity functions can be approximated by a Taylor expansion. For a single variable, the Taylor expansion of a function $f(x)$ about a point $c$ has the general form $$\begin{aligned}
f(x)=f(c)+f'(c)\left(x-c\right)+\frac{f''(c)}{2!}\left(x-c\right)^2+\ldots\end{aligned}$$ By taking into account only the first term, we assume that the value of the function at $x$ equals its value at $c$; by also including the second term, we assume that $f(x)$ can be approximated by a straight line through $c$, and so on. Truncating
the expansion at a low order will therefore only give a good approximation when $x$ is close to the point $c$ about which we expanded. In our case $c$ is the mean, $\mu$, implying that an approximation using a few moments will be accurate only when all observations are close to the mean. When a single realisation of the SSA can be performed, we can assess this by comparing the mean calculated with the deterministic approach with the trajectory calculated with the SSA (Figure \[p53figure5\]).
Figure \[p53figure5\]a displays the deterministic mean of $x_1$ and one SSA simulation for the dimerisation system. In this case the trajectory is close to the mean over the complete time course. A single trajectory for p53, displayed in Figure \[p53figure5\]c, compared to the deterministic result shows that in this case the distance of the trajectory from the mean is much larger. When a single SSA realisation of the model is not possible, but experimental data are available, the distance from the mean can still be investigated in the same way. Figures \[p53figure5\]b,d show the mean calculated using the deterministic approach together with ‘measured’ values at three time points (obtained with the SSA), repeated three times, resulting in nine data points. Also when looking at the distance of these nine points from the deterministic mean, it is clear that for the dimerisation system $(x-\mu)$ is small, whereas for the p53 system it is relatively large, indicating that a larger number of moments is necessary to capture the full distribution.
Such simple heuristics have the advantage of being computationally affordable. While they cannot guarantee good performance of an expansion using any finite number of moments, they can be used to detect gross inadequacies of a given approximation relatively reliably. Such small-scale analyses should precede or accompany moment expansions. More generally, we can consider this problem from the point of view of statistical model checking; see e.g. [@Gelman:2003]. But the question as to how many stochastic simulations need to be averaged over to get a good idea of the mean (or any higher moment) is challenging to answer for all but the most trivial systems [@Toni:2008aa].
![\[\]Study of the deviation from the mean $(x-\mu)$ (a) Deterministic mean and single SSA trajectory for dimerisation system. (b) Deterministic mean and 9 points taken from different SSA trajectories for dimerisation system. | 0 | non_member_983 |
(c) Deterministic mean and single SSA trajectory for p53 system. (d) Deterministic mean and 9 points taken from different SSA trajectories for p53 system. []{data-label="p53figure5"}](Figure7){width="48.00000%"}
Computational Complexity
------------------------
The computational complexity of the moment approximation method depends on the number of variables in the system, and the number of terms that need to be evaluated for each central moment. Because of the symmetry of central moments, e.g. $\left<(x_1-\mu_1)(x_2-\mu_2)\right>=\left<(x_2-\mu_2)(x_1-\mu_1)\right>$, there are many terms we do not need to include in the ODE representation of the system. The total number of central moment terms that could be nonzero when approximating a system with $d$ variables and up to $N$ moments is given by $$\begin{aligned}
N_{cm}=\frac{(N+d)!}{N!\,d!}-d-1.\end{aligned}$$ We subtract $d$ terms because the first-order central moments are always zero, and one term corresponding to the zeroth-order central moment. For the deterministic case, the total number of ODEs needed to describe the system equals the number of variables, with each equation describing the mean of one variable. For the LNA, the total number of equations equals the number needed for the two moment approximation. We displayed the number of central moment terms to be evaluated
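The counting formula above can be evaluated with a one-liner (the helper name is ours):

```python
from math import comb

def n_central_moments(d, n_max):
    """Number of potentially nonzero central-moment equations for d species
    with moments up to order n_max: (n_max + d)! / (n_max! d!) - d - 1."""
    return comb(n_max + d, d) - d - 1
```

For example, a two-species system expanded to second order needs 3 central-moment equations (two variances and one covariance), on top of the $d=2$ mean equations.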